H: Accurate readings from Hall sensor flow meter on Raspberry Pi 4 vs Arduino
I have a working project on my Arduino which tells me how much water has flowed through the sensor (http://www.swissflow.com/sf800.html). It works pretty well (give or take a few ml). The Arduino is too limited for my needs, so I wanted to port it to the Raspberry Pi 4.
The readings I am getting from the Pi are not random but they are way too low and vary wildly from pin to pin. I'm very much a software dev so the Arduino was perfect, being plug and play. I assumed the same about the Pi but seem to be very wrong.
Is this just a limitation of the board or am I seriously getting something wrong? Has anyone actually managed to reliably read from a Hall sensor on a Pi with minimal variance (±1%)?
I used the code here as a test program and the results are just unusable.
#!/usr/bin/python
import RPi.GPIO as GPIO
import time, sys
#import paho.mqtt.publish as publish

FLOW_SENSOR_GPIO = 13
#MQTT_SERVER = "192.168.1.220"

GPIO.setmode(GPIO.BCM)
GPIO.setup(FLOW_SENSOR_GPIO, GPIO.IN, pull_up_down=GPIO.PUD_UP)

count = 0
start_counter = 0  # only count pulses while this flag is set

def countPulse(channel):
    global count
    if start_counter == 1:
        count = count + 1

GPIO.add_event_detect(FLOW_SENSOR_GPIO, GPIO.FALLING, callback=countPulse)

while True:
    try:
        start_counter = 1
        time.sleep(1)
        start_counter = 0
        flow = (count / 7.5)  # Pulse frequency (Hz) = 7.5Q, Q is flow rate in L/min.
        print("The flow is: %.3f Liter/min" % (flow))
        #publish.single("/Garden.Pi/WaterFlow", flow, hostname=MQTT_SERVER)
        count = 0
        time.sleep(5)
    except KeyboardInterrupt:
        print('\nkeyboard interrupt!')
        GPIO.cleanup()
        sys.exit()
Any guidance would be great. At the moment I'm thinking of just resorting to reading serial output from the Arduino on the Pi which isn't something I want to do but seems like the only option.
AI: The MCU is targeted at real-time process control, whereas SoCs and PCs are made for non-real-time processes.
You would need a special real-time (RT) kernel for your board, and even then it is very questionable.
|
H: How should I understand this 1.5 V to 90 V voltage step-up network?
I am learning about Jim Williams's avalanche pulse generator.
It is on page 93 of AN47 - High Speed Amplifier Techniques.
I don't understand the circuit's voltage step up network, which is 1.5 V to 90 V.
It looks like C2 and D2 form a Villard voltage doubler. C2, D2, D3 and C1 look like a Greinacher voltage doubler.
What is the purpose of D1 and C4?
How should I understand this 1.5 V to 90 V diode-capacitor voltage step-up network?
AI: D1 and C4 are a normal part of the LT1073 circuit:
The junction of D1 and C4 is a DC voltage that would normally be the output of the circuit.
C2, D2, D3, and C1 form a Greinacher voltage doubler that is fed by the pulses from the boost converter.
The way it is built, D1 and C4 generate a DC voltage, then C2, D2, D3, and C1 generate a higher DC voltage that is added to the voltage from D1 and C4.
You get about three times the unregulated voltage from this circuit compared to what you would get if you removed C2, D2, D3, and C1.
This is of course regulated to the required 90 V by the feedback through the 10M and 24k divider.
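As a rough sanity check (assuming the LT1073's nominal 212 mV feedback reference; do verify against the datasheet), the set point of that divider works out close to 90 V: \$V_{OUT} \approx V_{FB}\left(1 + \frac{10\,\text{M}\Omega}{24\,\text{k}\Omega}\right) = 0.212\,\text{V} \times 417.7 \approx 88.5\,\text{V}\$.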
|
H: PTC Resettable fuse
I need to limit the current of a motor to around 1A. I would like to use a PTC fuse so that it trips in the rare occasion the motor is forced by the user to draw too much current because the motor is having to work harder than it should.
I understand the PTC fuse has a hold current, e.g. 750 mA (the current at which it is guaranteed to hold), and a trip current, e.g. 1.5 A (the current at which it is guaranteed to trip). In between these currents the fuse may or may not trip, though I suppose it is more likely to trip if it is made to carry a current between its hold and trip currents for long periods of time.
In the datasheet for RKEF075, on page 2 it states that the max time to trip is 1.5 seconds, but that this time to trip is at 8A, not the trip current of 1.5A. Does this mean the fuse will take a long time to trip if 1.5A flows? Could it possibly allow up to 8A to flow (even for a short period of time)?
AI: The datasheet says it trips in less than 1.5 seconds at 8A.
This means that if 8 amps is flowing through it, that current could flow for up to 1.5 seconds before it trips.
This is standard for any type of fuse: it takes time for it to get hot enough to break. The lower the current, the less heat, the longer it will take to break.
There will be a specification somewhere for how long it may take to trip at 1.5 A; I have seen values of 60 s and 120 s depending on the fuse.
Another thing to note about PTCs is that, while they are resettable, they do have a memory effect: they often end up with a significantly higher resistance after the first trip, and they become less and less reliable the more times they are triggered. I try to avoid them personally.
For your application, I would always go for dual devices to make sure things are safe:
A standard fuse to break during major over current events or short circuit events
Current monitoring triggering a transistor to turn off the motor for the more common motor jam or over load events.
Yes this is a lot more complicated than just a PTC, but it is a lot safer and leads to a more reliable product.
If you are using the motor in a controlled environment in your desk/office/workshop then a PTC will do the job. But if you’re letting anyone else use it, always put in the extra safety, it’s good for your motor as well as for the user.
|
H: Why was this MOSFET destroyed?
I built a simple voltage regulator circuit like this:
I'm using an LM723 but any op-amp will do. When I shorted the output, the MOSFET was destroyed instantly but I don't know why.
I have tested many MOSFETs, from cheap Chinese ones to a real IRFZ48. They were all destroyed the same way. They were all power MOSFETs able to handle a continuous current of more than 5 A and very high peak currents, as I have tested. I don't understand how a short circuit can instantly destroy one, even when the 25 V supply instantly changes to 1 A constant current and drops to a very low voltage. When I un-short it, the 3 pins of the MOSFET are shorted together, so it's dead.
Some quirks though:
The MOSFETs are only destroyed when I short the output while it is at a low voltage like 2 V; they aren't destroyed when I adjust the output to 20 V and then short it. Just the slightest touch from 2 V to ground and they are dead, and I can't even see any sparks. After that, the output is stuck at 25 V.
I tried using an IGBT, and it didn't get damaged no matter how I short it. I just don't get why it didn't while the power MOSFETs did. I don't have any power BJTs so I couldn't test those.
I tried using a P-channel MOSFET with 25 V to the source, controlled through an NPN transistor; it was also instantly destroyed when shorted, no matter what voltage I shorted it at.
I don't believe the output capacitor from my SMPS can instantly put out more than 210 A when shorted so it shouldn't exceed any peak current. VGS has a Zener diode so it also shouldn't exceed the maximum rating. I can't seem to find anything that would exceed maximum ratings or break the MOSFETs, so where did it go wrong?
AI: Your MOSFET is a source follower and, when you short the output, there are 25 volts between drain and source and several amps flowing (maybe 5 amps). That's a power of roughly 125 watts. Here's the safe operating area for the IRFZ48N from its data sheet: -
On the above graph in red I've drawn a line at 25 volts (drain-source voltage) and it crosses the 10 ms duration curve at 3.1 amps. In other words, this device is going to have problems with even a couple of amps and will certainly fail if left shorted out for any reasonable length of time, even at very modest currents (less than 1 amp).
The IRFZ48N is intended for switching applications and not linear applications, hence why the safe operating area graph doesn't consider time durations greater than 10 ms.
The MOSFETs only gets killed when I short it while outputting a low
voltage like 2V, they don't get killed when I adjust the output to 20V
Yes, this is the more likely failure scenario. When outputting only 2 volts, the gate voltage might be at 4 or 5 volts in order to control the output level to 2 volts. So, as soon as you apply a short, the MOSFET is instantly operating in its linear region, and U1 then tries to raise the gate voltage higher but can't do it quickly enough to avoid the catastrophe of MOSFET failure.
|
H: Keypad interfacing on microcontroller
I am extremely new to dealing with microcontrollers and programming in general.
The picture below is a circuit board using a PIC18F4520 microcontroller.
As of now, I am focused on the 4x4 matrix keypad and how it works but I am completely lost.
Articles online say that you connect the keypad's 8 pins to microcontroller ports but the circuit below only has 7 pins.
AI: The keypad has eight pins:
There are only seven pins that connect the key pad module to the processor:
The 74C922 is a keypad encoder. It has a four bit output (ABCD) that tells you which of the sixteen keys of the keypad was pressed.
There are four pins for the four bits (ABCD), an output enable (/OE), and data available (DA, active when a valid keypress is detected). That's six pins. There's a seventh pin on the connector, but those six are responsible for the keypad.
The "articles online" that you have read assume that you are connecting the keypad directly to the microprocessor so that the microprocessor can decode the key presses. The circuit example you give does it differently, though. It has the 74C922 decoding the keypad rather than the microprocessor.
If you would like to see how to decode a 4x4 matrix keypad in software, take a look at the Arduino keypad code. It shows you how to read a 4x4 matrix keypad.
Download the keypad.zip file and unpack it. Look in "Keypad.cpp" to see how it is done.
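For reference, here is a minimal sketch in C of the row-scanning technique such libraries use. ROW_DRIVE() and COL_READ() are hypothetical helper functions you would map to your own GPIO registers; it assumes the columns have pull-ups and read low when a key connects them to the row that is driven low.

/* Minimal 4x4 matrix keypad scan - a sketch of the general technique. */
#include <stdint.h>

#define NO_KEY 0xFF

extern void ROW_DRIVE(uint8_t row_mask);   /* drive rows: 0 bits low, 1 bits high */
extern uint8_t COL_READ(void);             /* read the 4 column inputs */

uint8_t keypad_scan(void)
{
    for (uint8_t row = 0; row < 4; row++) {
        ROW_DRIVE((uint8_t)(~(1u << row)) & 0x0F);  /* pull only this row low */
        uint8_t cols = COL_READ() & 0x0F;
        for (uint8_t col = 0; col < 4; col++) {
            if (!(cols & (1u << col)))              /* low column = key pressed */
                return (uint8_t)(row * 4 + col);    /* key index 0..15 */
        }
    }
    return NO_KEY;  /* nothing pressed */
}

A real driver would also debounce the reading (e.g. require the same key on two consecutive scans); the 74C922 does that in hardware for you.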
|
H: 13-48 V SEPIC -> 19.5 V out; 0.5 A
I am trying to design a SEPIC converter with the parameters stated above and have already drawn a schematic using the LM5022 IC. I have built the system and noticed a few things:
The output voltage is a stable 19.66 V with an open-circuit load, but the duty cycle of the signal that drives the gate of the FET is really small (~10% or less), while the calculations predict a 40-60% duty cycle as the input varies. I assumed that this is because the schematic was calculated for a 0.5 A output current, so I tried to test the system with such a load.
Placing a 40 ohm load made the FET instantly burn (it is rated for 1.5 A; MOSFET datasheet) and made the coupled inductors get really hot (they are rated for 900 mA; CM choke datasheet). In practice, I am using a common-mode choke for the build (which I could not show on the schematic). I measure drain-source voltage peaks that could potentially exceed the FET rating (100 V). The inductor current at loads of 500-1000 ohms looks like short bursts instead of the continuous triangle waves that are expected.
I am not sure at this point what other information to include, so please ask me anything I have missed.
EDIT: I have done simulations of the practical condition where my FET burns. Here are the first 2 ms of the LTspice plots (I could not simulate the controller, so I am using a PWM source to control the converter):
One point that grabbed my attention: the simulations go haywire if the PWM amplitude is 5 volts; anything higher works just fine.
FURTHER EDIT!!!
I have just built the SEPIC from the simulations without a driver controller and used a fixed duty cycle PWM to control it. It works impeccably. I probed all of the nodes with an oscilloscope; everything is almost 100% like the simulations, which leads me to believe that the controller is not designed/working properly. The one thing that I do not understand about it is the compensation network, so I have not calculated the values for that and have used the ones from the datasheet. Could anyone suggest an idea about that? (Thank you all for the valuable support up until now ^^)
I apologize for the ugly circuit
AI: "the coupled inductors get really hot (they are rated for 900 mA.) "
Your inductors are saturating and the resulting high current spikes are killing the MOSFETs. A common-mode choke relies on the flux from its two windings cancelling, so its DC current rating assumes matched winding currents; when the currents in the windings are not matched (as in a SEPIC), it saturates at a current well below that rating.
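As a rough sanity check on the current levels (my numbers, assuming roughly 85 % efficiency): at \$V_{in} = 13\,\text{V}\$ the input-side winding has to carry on average \$I \approx \dfrac{V_{out} I_{out}}{\eta V_{in}} = \dfrac{19.5 \times 0.5}{0.85 \times 13} \approx 0.88\,\text{A}\$, and the second winding carries roughly the 0.5 A output current, plus ripple on top of both. So even before any saturation you are at or above the 900 mA rating of that choke, and once the core saturates the peaks climb far higher.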
|
H: Sniffing a 32.768 kHz low-power oscillator
I'm troubleshooting a real-time clock and suspect that under certain conditions the oscillator stops. How could I check if it is running, especially when the RTC chip is on battery power? It is Maxim Dallas DS1307 with external crystal.
The oscilloscope input capacitance would disturb the picture too much, I think. I thought about holding a coil (say, 100 turns diameter 1") close to the PCB traces and probing the coil with the oscilloscope.
AI: Pin X1 is a high impedance input. If you probe there, probe capacitance will influence the oscillator.
However, pin X2 is the low-impedance output of the internal inverter, so if you stick a 10X probe there, its capacitance will matter much less.
Then, there is a SQW/OUT pin that will output a square wave:
If you enable it in the I2C registers, then you can probe it and check its frequency without disturbing the oscillator at all.
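For example, a minimal Linux i2c-dev sketch along these lines (my assumption of the register values: the DS1307 control register is at address 0x07, and writing 0x13 sets SQWE = 1 with RS1:RS0 = 11 for a 32.768 kHz output; the bus path /dev/i2c-1 is also just an assumption, so check both against your setup and the datasheet):

/* Enable the DS1307 SQW/OUT pin at 32.768 kHz via Linux i2c-dev. */
#include <fcntl.h>
#include <unistd.h>
#include <sys/ioctl.h>
#include <linux/i2c-dev.h>

int main(void)
{
    int fd = open("/dev/i2c-1", O_RDWR);    /* I2C bus the RTC sits on */
    if (fd < 0)
        return 1;
    if (ioctl(fd, I2C_SLAVE, 0x68) < 0)     /* DS1307 7-bit address */
        return 1;
    unsigned char buf[2] = { 0x07, 0x13 };  /* control reg, SQWE=1, RS=11 */
    if (write(fd, buf, 2) != 2)             /* register pointer + value */
        return 1;
    close(fd);
    return 0;
}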
|
H: Connecting multiplexer/demultiplexer input to Vcc of the same chip. Will this cause any performance issues?
In one of my circuits, I am connecting the I/P pin of multiplexer/demultiplexer to Vcc (5V) as shown below. I have 2 questions.
When switching, will the current draw cause performance deviations in this connection?
Can I connect the EN signal (coming from an Arduino) to D also instead of connecting to supply?
AI: There should not be any significant current drawn from any of the switch pins; the datasheet specifies an absolute maximum of 30 milliamperes for any pin.
This goes for both questions 1 & 2.
If the device is used within its rated current limits, connecting the input D to VCC permanently will not affect the normal operation of the device. The added resistor will even make it tolerate the situation where one of the 'S' terminals is held low by an external circuit, which would pull the 'D' terminal low through the switch; the resistor limits the current sunk in that situation.
|
H: How is an 8p10c connector possible?
I thought I understood how modular connector naming worked: npmc, where n is the number of positions and m the number of loaded positions, always loading from the middle out. So for an 8p8c connector, there are eight positions and all of them are loaded with contacts, or for a 6p4c connector there are six positions and the middle four of them are loaded, with the outer two left empty.
But then I encountered this product category on digikey, of 8p10c connectors.
How can you have more positions loaded than there are positions? Where are the other two contacts?
AI: It appears this naming convention is applicable primarily to the jack, not the connector crimped onto the cable. For example, here is an 8P10C connector from CUI devices with the following pinout:
As you can see, the connector internally still has 8 pins that contact the modular connector of the cable, but the PCB footprint has 10 pins (not counting the LED pins) where pin numbers 5-6 are center taps for the magnetics.
I haven't found/seen an 8P10C connector which is the same width as a standard 8P but has 10 contacts.
There is a wider modular connector named the 10P10C and also referred to as RJ50, but technically RJxx is a "registered jack" and there appears to be a lot of historical misuse of 8P8C vs RJ45 (but it is rather pedantic).
Update:
The Molex 0432028101 (and presumably others) definitely shows 10 pins loaded in an 8P-wide jack:
I've not (yet) been able to find an 8P10C connector to compare it to an 8P8C, but I presume the outer connectors are simply added, making the outer housing thinner as needed to accommodate the extra wire pair. With the apparent differences (e.g. the CUI jack above) between these two jacks and existing 10P10C / RJ50 connectors, the potential for confusion and incompatibility seems high.
|
H: Passive filter design with input load
I took several DSP courses years ago, but never actually designed a filter in real life (just solved equations.) Now, in my human job, I have been tasked with designing a filter that has an input load (R4) in an already existing filter chain (see the picture of the SPICE model I have created.)
My filter needs to go inside Filter 1, and needs to filter out noise above 0.4 MHz. I tried adding a simple RC filter with the specified corner frequency and it did not produce the expected results. I then tried to calculate the effective cutoff frequency of the cable, Filter 2, and the load, but did not find anything that made sense.
How can I go about designing this filter in a "smart" way? My instructions from a coworker were to just plug in RLC filters and tweak them until they work, but that seems wrong. I have MATLAB and used it to do DSP before but I don't know how to translate a physical, passive circuit into MATLAB and generate a real circuit with actual capacitors, inductors, etc.
Clarification: R4 is the input resistance from the driving device. The Load section is the device, which is measured to have L2, C3, and R3 (it is an electrode on a device). Filter 2 is what is connected on board the device, and cannot be changed. Thus why only Filter 1 can be changed.
AI: Due to the interaction between the different components in an RLC filter, empirically determining the LC values of the added stage in LTSpice is probably the quickest way if Filter 1 can't be changed.
If you can change the Filter 1 values, you can make a textbook filter. In the SPICE simulation below, I show your original filter and two textbook filters (minus the loss resistors) in Butterworth and 2 dB Chebyshev configurations. With inductor and wiring loss resistances added, the slope of the filter won't be as good.
I'm using a bespoke filter program that is based on "Simplified Modern Filter Design" by Philip R. Geffe. This is my favorite book on RLC filters since it has an easy-to-understand discussion of how RLC filters work, something many filter books avoid. This book can be found in university technical libraries.
|
H: Is there a free circuit simulator program that is able to make a SPICE (.cir extension) file?
I don't have a lot of money and I need some program that can simulate circuits and output a .cir file.
AI: Micro-cap is free and can accept and output .cir files.
More here: http://www.spectrum-soft.com/index.shtm
|
H: Where did I go wrong with picking my baud rates?
So I have an XBee (API mode, series 2) communicating with either a bare 328P on a breadboard (8 MHz external oscillator) or an Adafruit Pro Trinket (328P based, 12 MHz external oscillator), though never both at the same time.
Note the XBee has a 16 MHz internal oscillator of some description, and doesn't really hit the "standard" baud rates exactly.
I picked a baud rate of 200,000 as a non-standard rate that's a perfect multiple of both clock speeds. Should be able to work with either flawlessly. And yet.... it doesn't.
Test 1: The 8 MHz breadboard works perfectly at 200K, with 8 bits and 1 stop bit. Fine as intended.
Test 2: The 12 MHz trinket... just doesn't at 200K, 8N1.
Test 3: 12 MHz at 200K, 8N1 - no joy.
Test 4: 12 MHz, 100K 8N2 works.
And now I'm just confused. I appear to have missed something basic somewhere. What speed should I have used? Why?
AI: Assuming the Adafruit Pro Trinket is just an ATmega328P running at 12 MHz, it cannot divide 12 MHz down to a UART bit rate of 200 kbps, while an ATmega328P running at 8 MHz can.
So some clock speeds are just not compatible with some baud rates. A classic example is that an 8 MHz AVR can't use standard baud rate of 115200, but if you change the 8 MHz crystal to somewhat slower and commonly available 7.3728 MHz crystal that is known to be UART baud rate friendly, you can achieve exactly 115200 bps with no baud rate error other than the max 0.01% of the crystal itself.
So first of all, the UART does not sample the data line at the baud rate; it uses oversampling. The ATmega328 uses an oversampling ratio of 16x, which is the default mode of operation for almost all UARTs, and by setting the U2X bit the oversampling ratio can be halved to 8x to achieve higher bit rates at the expense of being less tolerant of deviations in the bit rate.
Let's do the math.
8 MHz / 16 = 500 kHz. Does not divide evenly to 200 k (500/200 = 2.5). Can't work without U2X.
8 MHz / 8 = 1000 kHz. Divides evenly to 200 k (÷5). Will work with U2X.
12 MHz / 16 = 750 kHz. Does not divide evenly to 200 k (750/200 = 3.75). Can't work without U2X.
12 MHz / 8 = 1500 kHz. Does not divide evenly to 200 k (1500/200 = 7.5). Can't work even with U2X.
12 MHz / 16 = 750 kHz. Does not divide evenly to 100 k (750/100 = 7.5). Can't work without U2X.
12 MHz / 8 = 1500 kHz. Divides evenly to 100 k (÷15). Will work with U2X.
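In AVR terms (the standard formula from the ATmega328P datasheet, where the divisor register UBRR must hold an integer): with U2X set, \$UBRR = \dfrac{f_{OSC}}{8 \cdot \text{baud}} - 1\$. At 8 MHz and 200 kbps that gives \$\frac{8\,000\,000}{8 \times 200\,000} - 1 = 4\$ exactly; at 12 MHz and 200 kbps it gives 6.5, which is not an integer, so the nearest setting is several percent off; at 12 MHz and 100 kbps it gives 14 exactly, which is why that combination works.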
You could have checked in the datasheet how the UART works and which bit rates are possible; there is a long section about that with good-looking diagrams. As some MCUs can derive their clocks differently, this only applies to the ATmega328P, although many AVRs are quite similar.
|
H: DDS output power
I am designing a circuit that uses a DDS (AD5932) and am having trouble finding its output power. I need to know the power because the output will be fed into a mixer with characteristics that depend on input power.
I have found that it is most likely determined by the resistance on the DAC's output (source - scroll down to Output Power Considerations), but the datasheet does not mention any equations for determining a resistor value.
Can anyone provide resources for determining the resistor value for a desired output power or if there is another way to determine output power?
AI: From the datasheet:
So the output looks like a current source across 200 ohms to ground. I'd buffer it before feeding it to a mixer, but then I don't know what mixer you are contemplating.
It comes with a DC offset which may or may not be an issue.
|
H: What is gpio bank?
Embedded and BSP newbie here. Recently I came across the term "gpio bank", but I cannot find good resources on Google.
What is it?
What is the difference between a GPIO bank and a GPIO controller?
What is the difference between a GPIO bank and a pin?
Linux kernel code sample:
soc->bank_num = of_irq_count(child);
if (soc->bank_num == 0 || soc->bank_num > GPIO_MAX_BANK_NUM) {
    dev_err(&pdev->dev, "Invalid gpio bank(irq)\n");
    return -EINVAL;
}

for (i = 0; i < soc->bank_num; i++) {
    res = platform_get_resource(pdev, IORESOURCE_MEM, i);
    if (res == NULL) {
        dev_err(&pdev->dev, "no mem resource for gpio[%d]!\n", i);
        return -ENXIO;
    }
    soc->regbase[i] = devm_ioremap(&pdev->dev,
                                   res->start, resource_size(res));
    if (soc->regbase[i] == 0) {
        dev_err(&pdev->dev, "devm_ioremap() failed\n");
        return -ENOMEM;
    }
}
AI: GPIO = General Purpose Input Output, referring to pins that send and/or receive single bits of digital information (high/low voltage).
A GPIO 'bank' is a group of GPIO bits that can be accessed simultaneously by the CPU or DMA. The number of bits in a group is usually limited by the size of the internal data bus, so for example an 8 bit MCU with 24 I/O pins would need at least 3 GPIO 'banks'. Sometimes the bits are split into more banks because some work at different voltages or have alternate functions, or because the particular package doesn't have enough pins to bring out all the bits in a group.
A GPIO 'controller' is a circuit in the MCU that controls the operation of GPIO pins. The term is typically used in sophisticated I/O systems which may need to perform operations independently from the CPU. Simple systems often just have control registers which the CPU writes to for configuring pin direction etc.
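As an illustration only (the register layout and base addresses below are hypothetical, not from any particular chip; in Linux they would come from the device tree, as in the kernel snippet above), bank-style GPIO access typically looks like this:

/* Illustrative sketch of bank-style GPIO access with made-up addresses. */
#include <stdint.h>

#define GPIO_BANK_A_BASE  0x40020000u   /* hypothetical base of bank A */
#define GPIO_BANK_B_BASE  0x40020400u   /* hypothetical base of bank B */
#define GPIO_DIR_OFFSET   0x00u         /* 1 bit per pin: 1 = output   */
#define GPIO_OUT_OFFSET   0x04u         /* 1 bit per pin: output level */

static void gpio_set_output_high(uint32_t bank_base, unsigned pin)
{
    volatile uint32_t *dir = (volatile uint32_t *)(bank_base + GPIO_DIR_OFFSET);
    volatile uint32_t *out = (volatile uint32_t *)(bank_base + GPIO_OUT_OFFSET);

    *dir |= (1u << pin);   /* one read-modify-write touches the whole bank */
    *out |= (1u << pin);   /* up to 32 pins share this one register        */
}

int main(void)
{
    gpio_set_output_high(GPIO_BANK_A_BASE, 5);   /* pin 5 of bank A */
    gpio_set_output_high(GPIO_BANK_B_BASE, 0);   /* pin 0 of bank B */
    return 0;
}

The point is that a "bank" is the set of pins behind one such group of registers, so a single bus access can read or write all of them at once.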
|
H: Which voltage to use for my motor controller if I connect two motors to it?
I am embarking on a robotics project. As a part of that I will be connecting two motors to a motor controller designed for two (+1) motors. Each motor has voltage 6-12 V and the motor controller has 5-24 V.
My question is how does the motor controller distribute voltage from the power supply? If I connect 24 V to it, will that be too much for my motors? Or will it be evenly distributed between them?
Here are the motors I am using: Premium N20 Gear Motor (298:1 Ratio, 90 RPM)
And here is the motor controller: TReX Jr Dual Motor Controller
Sorry if the question is basic, I am just getting into this and trying to understand what means what.
Edit: Thank you for the answer!
AI: how does the motor controller distribute voltage from the power
supply?
It distributes the voltage to each motor in parallel, so each one is independently connected to the full supply voltage. If you apply 24 V then each motor will get 24 V, which is way too much for 6-12 V rated motors.
|
H: Using a USB adapter inside an enclosure?
I want to make a device that uses AC 220-250V power (to control relays to switch devices on/off) and prefer a single wire (+/-/gnd) to a wall socket box (thus not a separate 5V adapter).
Would it be strange to use a 5V USB adapter inside an enclosure like below?
I hid the top cover of the enclosure and put a yellow box around it for better viewability.
I'm intending to get the AC power inside through a hole or connector and directly 'solder' it to the two long socket pins of the USB adapter.
Note that the GX connector will not be used for 220V AC (I will use an IEC connector or cut a hole in the box as I connect/screw it to a power strip).
The reason is that I'm not that good at electronics, and creating my own AC -> 5V DC power supply including all the needed protection seems a bit hard.
AI: If you're not comfortable designing a mains -> \$5~\mathrm{V_{dc}}\$ convertor (understandable), then there are plenty of modules that you can get, rather than bodging in an adaptor with pins. You can get them in a wide range of capacities and sizes. They won't be as cheap as just picking up no-name 5 V adaptor, but will be a better fit for an application like yours. In particular, you will know the supply specifications and form factor.
A potted module:
An open frame module:
|
H: Do circuit reclosers of substations respond to any fault on the low voltage side of distribution transformer?
As I don't have any practical experience in power distribution networks, I'm pretty confused about a few things.
Typically a lot of transformers are connected to an 11KV distribution line ( 11KV feeder.)
For an 11KV/440V three-phase transformer, if any fault (L-L / L-G / L-L-L-G) occurs on the low voltage side, the fuses of the transformer will trip.
Is there any possibility that the ACR (or OCR for old substations) will trip and will shut down the whole 11KV line?
If a fault occurs on one transformer's low voltage side, will the other transformer on the same 11KV line be affected?
AI: If a fault occurs on one transformer's low voltage side, will the
other transformer on the same 11KV line be affected?
Short answer: it shouldn't be affected.
Longer answer: The 11 kV feed to a whole bunch of 11 kV / 440 volt transformers has to provide power simultaneously to all of them. In other words, the 11 kV feed is capable of \$N\cdot P\$ where P is the power per transformer and N is the number of transformers. And to use your scenario: -
an 11KV distribution line ( 11KV feeder.)
It will have massive power capacity compared to each individual transformer that connects to it. Hence, it's unlikely that any problematic event on the secondary side of one transformer is going to spoil the whole picture for other transformers.
In other words, the reclosers on the distribution side are rated much in excess of what can occur as a fault on the secondary side of one transformer. But, your question says this: -
Is there any possibility that the ACR (or OCR for old substations)
will trip
And so I cannot categorically state that this won't happen in some obscure circumstances.
|
H: Data frame ACK bit is always high from I2C target device read
I am using a Raspberry Pi Zero board as an I2C controller and a Raspberry Pi Pico as an I2C target and passing data between them using I2C.
I use something like this:
unsigned char buf[256];
write(fd, buf, 1)
to write 1 byte data from controller to target. The SDA and SCL pins show what I expect in the oscilloscope.
I use something like this for the controller read 1 byte data from the target:
unsigned char buf[256];
read(fd, buf, 1)
At this point I am getting a HIGH in the data frame ACK bit, when I thought it should be low, as per this tutorial I am following.
For an I2C target address of 0x22, this is the waveform I am getting:
The address frame seems correct (bits from LSB to MSB, RW bit set to 1, ACK set to 0).
The data bits in the data frame look correct (0xA8 from MSB to LSB), but the ACK bit is high.
The target Raspberry Pi Pico program is written in such a way that every read() call will return a byte from the next element of a uint8_t array in its memory. I am calling the read() function above inside a loop.
I see that the ACK bit is always HIGH every time this function is called. But it looks like the I2C read process itself is executing correctly, as the data read in the I2C controller's program matches the data in the I2C target's memory.
What is happening here? Am I getting the correct waveform on the SDA line?
https://www.ti.com/lit/an/slva704/slva704.pdf?ts=1636643901705&ref_url=https%253A%252F%252Fwww.google.com%252F
See page 5, Section 2.3, point 4:
A master-receiver is done reading data and indicates this to the slave through a NACK.
The ACK bit for the data frames is what I expect when doing read(fd, buf, 2): only the very last data frame has its ACK bit HIGH, as here.
AI: The waveform is correct.
The target device is not supposed to send an ACK at that time; on the contrary, it is the role of the host (the controller) to send that bit, and since this is the last byte transferred, it must be a logic-high NACK bit, from the host to the target.
|
H: In FPGAs, is it safe to execute non-blocking assignments like `b <= a; a <= 0;` in the same clock cycle?
I have a piece of code in Verilog which needs to assign the value of a shift register to an output register when the shifting has finished, and I want to reset the value of the shift register in the same clock cycle, like the following:
[shift register processing during several clock cycles...]
output_buffer <= shift_register;
shift_register <= 0;
I'm aware that non-blocking assignments are done in parallel, meaning that they are done 'at the same time'. With that in mind, the above piece of code seems like it should have undefined behaviour if both assignments happen 'in parallel', because in the end nothing can happen literally at the same time.
I have tried this on a simulator and it works as I expect, that is, output_buffer gets the shift reg. value and the latter is reset. However, in general, is it safe to execute this on a real FPGA?
AI: Yes, you should get the same behavior in an FPGA as you do in simulation, assuming the Verilog code looks something like this:
always @(posedge clk) begin
    if (load) begin
        output_buffer <= shift_register;
        shift_register <= 0;
    end else ...
    ...
end
This code does have defined behavior. At the rising edge of the clock, the Verilog simulator first reads the values on the RHS of the nonblocking assignments before it updates the LHS values. This means that output_buffer is guaranteed to get the previous value of shift_register, not 0.
Similarly, the FPGA synthesis tool should create a set of flip flops with the proper timing to guarantee that output_buffer gets the previous value of shift_register, assuming proper timing constraints are applied.
|
H: Sticky/Intermittent Relay Driven by MCU and NPN Transistor
I have an MCU (1.8 V logic levels) driving an NPN transistor to turn a relay on/off. The relay has a 12 V DC coil with a resistance of 1028 Ω per the datasheet. This requires a current of 12/1028 = 11.6 mA to drive.
The transistor has a min gain of 100 per datasheet. Hence a min base current of 11.6 mA/100 = 0.116 mA required.
Analysing the circuit (ignoring the 22k resistor for simplicity), the base current is set to (3.3 - 0.7) / 1k = 2.6 mA (>> the required 0.116 mA). This should be more than enough to saturate the transistor and ensure that the relay is on.
The problem is that the relay does not always turn on (seen as a production issue). The transistor circuit looks good to me. I'm wondering if the TXS0108E level translator is part of the problem. Or am I not driving the relay properly?
I will put the scope on the circuit shortly but any help would be appreciated.
AI: Yes, the bidirectional level translator is a part of the problem.
In fact, it most likely is the cause of the problem.
A bi-directional logic level shifter with automatic direction control is just unsuitable for the task of driving the transistor.
The level shifter is meant for logic level signals, which are within the specs of logic levels.
The transistor base input with the resistors most likely does not fit the specs for a valid logic input or output for the chip. It is possible that, due to component manufacturing tolerances, the level translator determines that the transistor base is a logic-low output, so it changes direction and starts to drive the MCU-side pin instead.
Even if some boards have passed production testing, it still means that there is a design flaw and the problem can appear during product use. All it takes is some component aging, or minor changes in temperature, capacitance or power supply rise rates, and the level translator will start driving the pin the wrong way.
|
H: How to calculate power requirement of Zener diode for specific frequency of operation?
I'm planning to control an injector from an MCU and this is the schematic I came up with.
The power supply voltage is 14 V, the inductance of L1 is 0.0014 H, the resistance of the injector is 16.6 Ω, and the switching frequency is 166 Hz and lower.
If I'm not mistaken, the peak power will be 12 W at the moment M1 stops conducting after the inductor is saturated. Does that really mean the diode should be rated the same? Since the surge power rating of diodes is larger than their continuous power rating, it makes me think that a diode rated under 12 W should be able to withstand the flyback energy for a split second.
What is the correct way of calculating power requirements of D1 given these conditions?
AI: Assume all the energy in the inductance is dumped into the zener. The energy stored in an inductor is
\$W = LI^2/2\$
Power is just energy per unit time, so multiply by the frequency.
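With the numbers from the question (a rough estimate, assuming the current has time to reach its resistive limit of 14 V / 16.6 Ω before turn-off): \$I_{pk} \approx 0.84\,\text{A}\$, so \$W = \tfrac{1}{2} \times 0.0014 \times 0.84^2 \approx 0.5\,\text{mJ}\$ per switching event, and at 166 Hz the average power is \$P \approx 0.0005 \times 166 \approx 0.08\,\text{W}\$. The average dissipation is therefore far below 12 W; what you need to check in the diode datasheet is the peak (surge) power it can absorb in each event.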
|
H: Open loop system instability
I'm studying system stability, and I know a closed loop system can be unstable due to the feedback. I also know that an open system can be BIBO unstable (e.g., a capacitor with a direct current input). What I'm not sure about is, can an open loop system be Lyapunov unstable?
Both Bode and Nyquist study instability via the open-loop transfer function of a closed-loop system, so they seem to require a closed loop, and in the books I have I didn't find anything about studying instability for open-loop systems. I spoke with a couple of engineers, one of whom said open-loop systems can't be unstable (considering an impulsive input), while the other said they can; though, he wasn't really able to give an example of an open-loop unstable system.
AI: Yes, open-loop systems can be unstable (examples below, there's innumerable ones).
"Open loop" can be a fuzzy term, and if there was a pitcher of beer on the line I could prove that any unstable system "has internal feedback, now gimme that beer".
The first real ground truth is that natural as-found systems can be stable or not.
The second real ground truth (and possibly what was misleading that first engineer) is that with open-loop control we cannot change the stability properties of a system -- you have what you have. In the case of some nonlinear systems you can avoid unstable operating points, but you can't make the system operate at such a point for any length of time, because it's unstable.
The third real ground truth is that with closed-loop control we can change the stability properties of a system: we can make a stable system unstable, or with active control we can make an unstable system stable (both of my examples below can be stabilized, BTW).
Examples
Find the nearest broom, place it vertically on the floor with the stick on the floor and the broomhead up. Let go.
Up until the moment that it smacks into the floor and stops moving, that's an unstable system, and it's operating in open loop. You can tell it's open-loop by writing the differential equations for its motion. If you ignore friction, air resistance (and the floor), and if you linearize the equations around the operating point, you'll see that it has two modes: \$e^{-at}\$ and \$e^{+at}\$ (i.e., it has poles at \$s = \pm a\$). That second mode is unstable.
Find the nearest NPN power transistor. Put it in a circuit with a healthy (for it) voltage on the collector, and bias it with a fixed voltage to flow a healthy (for it) current from collector to emitter.
In the absence of a truly massive heat sink, as it flows current, it'll heat up. As it heats up, it'll flow more current. That'll make it heat up more. While the precise dynamic equation is both complicated and nonlinear, if you simplify it and linearize it then the dominant behavior will once again be an unstable 1st-order response of \$e^{+at}\$.
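To make the broom example concrete, here is the standard small-angle linearization of an inverted pendulum (treating the broom as a point mass at height \$L\$): \$\ddot{\theta} \approx \dfrac{g}{L}\theta\$, whose solution is \$\theta(t) = c_1 e^{+at} + c_2 e^{-at}\$ with \$a = \sqrt{g/L}\$. The \$e^{+at}\$ term grows without bound for almost any initial condition, so the upright equilibrium is unstable in the Lyapunov sense, with no feedback loop anywhere in the model.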
|
H: STM32 ADC Input Voltage Range
I'm using the ADC on pin 14, PA2, on the LQFP64 STM32G474CB (datasheet). What is the ADC input range for this pin? Here is my assessment let me know if my understanding is incorrect:
Table 12. Pin Definitions shows this pin's I/O structure as FT_a as shown below:
The FT_a type is 5V tolerant I/O with an analog switch function supplied by Vdda according to Table 11. Legend/abbreviations used in the pinout table shown below:
The Absolute maximum rating for FT_xxx pin is min(Vdd, Vdda) + 4 according to Table 14 Voltage characteristics shown below:
Finally, the Vdda analog supply voltage for the ADC is listed as 3.6V max according to Table 17. General operating conditions shown below:
So with all that said, PA2 is tolerant to an absolute maximum voltage of Vdd + 4 (assuming Vdd is 3.3 V), so 7.3 V, but typically 5 V. However, when using this pin as an ADC input, the ADC reading would saturate if the input voltage exceeded 3.6 V, due to 3.6 V being Vdda's max. PA2 wouldn't be damaged unless it was exposed to Vdd + 4 V or higher. I reviewed this post and it seems to be in alignment, but I just want to make sure: STM32 ADC Input voltage
Is my assessment correct?
AI: However, when using this pin as an ADC it would saturate the ADC
reading if the input voltage exceeded 3.6V due to 3.6V being Vdda's
max.
The ADC would saturate at actual VREF+. The maximum rating of VDDA has nothing to do with saturation.
PA2 is tolerant to an absolute maximum voltage of Vdd + 4 (assuming
Vdd is 3.3V) so 7.3V but typically 5V.
Since the formula has min(VDD, VDDA), if your VDDA is less than VDD then VDDA will define the maximum, not VDD. Other than that, the assumption seems to be correct.
Having said that, running MCU at absolute maximum ratings is a recipe for premature death, as even small supply deviation can push it over the limit.
You also missed one more condition, the maximum allowed difference between VREF+ and VDDA, which is only 0.4V. This means that even if you push VDDA to its 3.6V maximum, you can use at most 4V as VREF+. And since that is where ADC will saturate, applying more than 4V to analog inputs does not make any sense, regardless of whether or not they can survive it.
|
H: Does DC become AC through alternating permanent magnets?
As the title suggests.
Does direct current running through a circuit become alternating current if it goes through a linear line of alternating magnetic fields of permanent magnets, such as that of a Halbach array?
AI: Faraday's laws of electro-magnetic induction came from his realisation that "spatially varying (and also possibly time-varying, depending on how a magnetic field varies in time) electric field always accompanies a time-varying magnetic field". In your proposal the magnetic field is static. It does not vary with time so there will be no effect on the current through the wire.
|
H: How to increase the fan speed connected to a usb port
I have 6 PC fans, 12V 0.12A, connected in parallel. They are connected with a USB cable to a mobile charger. When I was connecting them to the motherboard they were much faster than now, even though the mobile charger can go up to 9V. As a possible solution for increasing the fan speed, it came to my mind to use a 1-female-to-2-male USB splitter and connect the 2 males to two supplies. But this wouldn't increase the voltage, only maybe the amperage. I know V = R * I. Does this also apply in this case? I mean, can I make up for the voltage by increasing the amperage in order to increase the speed?
AI: I have 6 PC fans 12V 0.12A connected in parallel.
The rating of the fan tells you roughly everything you need to know: That at 12 volts, they will draw 120 mA. The rotation speed (RPM) of the fan will also be maximal at 12 volts*.
When I was connecting them to the motherboard they were much faster than now though the mobile charger could go up to 9V.
A fan header on a motherboard supplies 12 volts. A USB charger supplies (typically) 5 volts.
With six fans connected in parallel, they would draw 720 mA at 12V. At voltages < 12 V, they will rotate at a lower speed and draw less current. If you're not quite sure how current is determined by the load, review this question.
As a possible solution for increasing the fan speed came to my mind to use a 1 female 2 male USB splitter and connect 2 males to two supplies. But this wouldn't increase the voltage but maybe the amperage.
I'm not quite sure what you mean by this. If you want to connect two USB supplies in parallel, you'll in theory be able to deliver more current, but the voltage remains the same. If they were connected in series, in theory the voltage would double, but USB supplies are not meant to be connected in series like that.
I know V = R * I. Does this also apply in this case?
Fans are driven by an electric motor which includes a winding or coil of wire, which is an inductor. Although \$V = R\times I\$ is correct, the resistance component of the motor only applies if measured while not in motion. When the motor is operating, impedance is the property that determines current flow, and is based on the motor's load, construction, temperature, etc. For more information, @Transistor wrote a great answer about resistance in DC motors here.
I mean can I cover up the voltage by increasing the amperage in order of increasing the speed?
Do you mean "Can I make up for the lack of voltage by increasing the amperage?"
If so, no. Increasing the current that a power supply can provide does not automatically increase the current that a load will draw. Review the first question I linked for details.
Unless you are limited to using USB-based power supplies, you will only achieve a maximum speed related to the voltage applied. Be careful not to overload the supply. Ideally you will want to use a 12 volt supply capable of ≥ 720 mA. (You should always plan on ~20% extra headroom for inrush and startup currents. Given the ubiquity and low cost of 12V 1A supplies, that's what I would recommend.) Alternatively you could use a DC-DC converter to boost the 5V (or 9V) up to 12V. It will require more current from the low-voltage side, as well as incur efficiency losses. This question addresses the topic of converting voltages.
*Within manufacturer spec.
|
H: How does a unity gain buffer work?
Say we slow down time to a crawl, and turn on a power source connected to a unity gain buffer. Initially, the non-inverting input receives the signal and the inverting input receives nothing, so the output is indeed the signal. Right after, the signal is fed back to the inverting input. Would that not mean that the output should then be 0, since the differential amplifier would output the difference between the signal and itself?
AI: You've applied the non-idealities of the op amp inconsistently, and have reached a false contradiction. In particular, you've assumed that the op amp has infinite open-loop gain when you concluded that output = input, but then added the non-ideality of finite open-loop gain (meaning that diff. input = 0 implies output = 0) halfway through the problem.
If you accept that the open-loop gain is a finite value \$A_0\$, then it can be shown that your overall closed-loop gain of the unity-gain amplifier is \$\frac{A_0}{1+A_0}\$. This is of course consistent: if you apply 1 V to the non-inverting input, then your output is \$\frac{A_0}{1+A_0}\$, the difference in the inputs is \$\frac{1}{1+A_0}\$, and multiplying that by \$A_0\$ matches up.
If you instead accept that the open-loop gain is infinite, then you can only reach the conclusion that in feedback, the two inputs have equal voltage and hence the output must equal the input.
However, let's actually take a really simple op amp model, and slow down time as you asked. I'll demonstrate that this steady-state gain isn't all we see, and there are actually some pretty cool slew-rate behaviors. This should cover this clarification comment of yours:
If I get this right, the inverting input gets nothing, the difference between the two is then just the initial signal, which swings to say the positive supply rail if the initial signal was positive. But then, the signal minus the feedback would be negative and get pushed to the negative supply rail, right? Doesn't this just lead to infinite oscillations?
I'll take a little five-transistor1, single-stage op amp (using TSMC's 180nm mixed-signal process, and not optimized for slew rate). Every op amp will be different in what it does. Some may undershoot. Some may overshoot like mine. Some may swing around the output for a few oscillations if they're only barely stable at unity gain.
The principle of this circuit is as follows. It's not representative of every op amp, but knowing the theory behind the example is important toward understanding the slew rate remarks I'll make soon.
NMOS_IDC_IN and NMOS_IDC_OUT form a current mirror which delivers a constant current to the differential pair.
NMOS_IN_P and NMOS_IN_N form a differential pair. When VIN_P is above VIN_N, more current is sunk by NMOS_IN_P from the left branch and less is sunk by NMOS_IN_N from the right branch. When the two are equal, the currents are equal.
When less current is sunk from the right branch by NMOS_IN_N, the output directly rises.
When more current is sent via the left branch, the PMOS_LOAD_P and PMOS_LOAD_N pair copy the current back down the right branch.
The resulting small signal currents are sent into the load.
The gist here is that this amplifier operates in a differential-to-single-ended transconductance mode. We deal in differences of voltages for the input, and we send current in or out of the output pin.
Here's a simple testbench:
At the start, our op amp is already showing its non-ideal, finite gain. We put in 400 mV, we got out 407 mV.
Next, I'm going to abruptly increase the input voltage of the unity-gain buffer from 0.4 V to 1.4 V, while looking at three things:
The input voltage (red, dotted)
The output voltage (yellow)
The internal gate voltage that's responsible for delivering current into the output (green), which I'll call PGATE. The lower this voltage goes, the more current we can deliver into the output.
Over the 100 ps that the input is swinging, nothing happens. The amp is just too sluggish to respond much. We get 65 mV of output swing for 1000 mV of input swing. It's mostly the input spike directly getting conducted through parasitic capacitances into the output.
Now let's look at the whole output slope:
NMOS_IN_P turns on more strongly; It's sinking a fair amount of current but it can only sink as much current as the current mirror sinks (50 uA). As it turns on, we see PGATE drop and the upper current mirror activates to send more current into the load.
At the same time, NMOS_IN_N cuts off.
We're now slewing the output as fast as this amplifier possibly could - the left branch (NMOS_IN_P) is taking every ounce of bias current it can, and sending a copy of that bias current into the load since the upper mirror is copying it. At the same time, NMOS_IN_N is cut off, taking no current. No matter how hard we drive VIN_P, it can't go any faster (i.e. we are thinking about a constant slew rate, not a gain as a function of the input voltage). The left branch can't carry more than the total bias current, and the right branch can't carry less than no current.
As we reach the point where VIN = VOUT, things are still not quite in balance. NMOS_IN_N is completely shut off and will be sluggish to turn on. Likewise, the upper current mirror is driving a large amount of current, and turning it off will take a while, so the amplifier overshoots. As the current mirror dials its output back to the steady state bias current, and as NMOS_IN_N turns back on, the output settles to its final value, 1.3973 V (a tad shy of 1.4 V).
1 Six transistors, actually. Only five are part of the op amp core; the sixth establishes a bias voltage as the reference side of the tail current mirror.
|
H: Why won't "copy room format" in Altium Designer copy component placement?
I have a multi-channel design in Altium Designer which was done specifically to speed up component placement within each generated room on the PCB.
After adding some components to the schematic, updating the PCB, and arranging new components in one of the rooms, when I try to use the Copy Room Format function with the "Copy Component Placement" option, nothing happens and I get a dialog indicating that "0 components out of N rooms were updated."
I've restarted Altium, checked component channel offsets, ensured the room definitions (rules) are enabled, and tried selecting only one component at a time, etc. What else can I try to get these component placements copied across rooms?
I am using Altium Designer 21.8.1.
AI: After much trial and error, I inspected component classes (Design > Classes) and noticed that the new components were not members of the room classes.
I checked the project options (Project > Project Options) to ensure that class generation was enabled for the needed rooms and for component classes.
I decided on a whim to try importing the changes from the schematic (from the PCB editor), even though I'd already done so by choosing to update the PCB from the schematic editor. Surprisingly, the detected differences included components which were not members of the room classes. After accepting the ECO and applying it, the Copy Room Format function worked!
In sum:
Check if all components in the rooms are members of the appropriate room class.
Try importing changes from the PCB editor after updating the PCB from the SCH editor.
|
H: Why does 10BASE-T require hubs?
As I understand it, some of the physical layer options for Ethernet include:
10BASE5 (now considered obsolete): a single length of thick coaxial cable.
10BASE2: thin coaxial cable, of which multiple lengths can be connected by T connectors.
10BASE-T: twisted pair, of which multiple segments can be connected by hubs or switches.
Is there a reason why 'twisted pair instead of thin coaxial' has to go with 'connect segments with hubs instead of T connectors'? Something about the electrical properties of twisted pair that make it unsuitable for the simpler way to connect segments? Or is it just considered more convenient to use hubs or switches for other reasons?
AI: There is nothing specific about thin coax versus Cat 5 unshielded twisted pair that prevents the latter from working in a bus topology. The extra shielding in the coax helps, but the twisting in Cat 5 should do much the same job. Cat 5 can carry multiple bidirectional signals without issue. Ethernet hubs, especially cheaper older ones without ICs, are essentially a bus without the convenience of a single shared conductor. The benefits of a star Cat 5 topology come from switches: less sharing of a common bus leads to higher bandwidth availability, and there are savings in material cost and, in some ways, weight. Duplex communication over the added conductors also speeds things up. You also reduce the single points of failure.
The change from bus to star topologies is more about speed limitations than electrical properties.
In short, 10BASE-T doesn't require hubs or switches, but they sure do make things better.
|
H: MOS capacitor - why is the total capacitance just the oxide capacitance C_ox?
All of the textbooks that I have read mention that the total capacitance is just the oxide capacitance (Cox) for a MOS capacitor in inversion and accumulation modes. It makes sense that this is true for the accumulation case, since there is no depletion region formed in the semiconductor. It also makes sense that the total capacitance for the depletion mode is the oxide capacitance and the depletion capacitance in series. But why is it that for inversion mode the total capacitance is just Cox, i.e. why do we ignore the capacitance due to the depletion region formed in the semiconductor?
AI: It is important to remember that the capacitance of a MOS capacitor is the differential capacitance. In a "normal" capacitor you can calculate capacitance by dividing charge by voltage. But here we mean the derivative of charge with respect to voltage. The differential capacitance tells us how much the voltage will change if we add or subtract a small amount of charge.
Depletion mode is the odd one out. As you stated, you take the oxide and depletion capacitances in series. You do this because adding or removing charges happens at the edge of the depletion region. The thickness of the capacitor is the oxide plus the depletion region.
But in both accumulation and inversion modes the additional charges are added or removed right under the oxide, at the semiconductor surface, not at the edge of the depletion region. So the thickness of the depletion region has no effect.
In inversion you have a depletion region, but you get to ignore it when calculating the capacitance because all those charges in the depletion region remain unchanged when you vary the voltage within inversion mode. They don't affect the differential capacitance since they are constants.
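Summarizing the standard quasi-static result: in accumulation and (low-frequency) inversion, \$C \approx C_{ox}\$; in depletion, \$\dfrac{1}{C} = \dfrac{1}{C_{ox}} + \dfrac{1}{C_{dep}}\$ with \$C_{dep} = \dfrac{\varepsilon_s}{W_d}\$ per unit area, where \$W_d\$ is the depletion width, so the total capacitance falls as the depletion region widens.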
|
H: precise 50 Hz oscillator
I need an astable multi-vibrator at exactly 50 or 100 Hz with exactly 50% duty cycle.
Using circuits like the 555 can't give me that with the common, well-known values of R and C, let alone with the tolerance of the components.
So, any suggestions?
Thanks
AI: Exact is not possible. How precise does your oscillator need to be and how much jitter can you withstand? What is your budget? Since you're mentioning a 555 timer, I'm assuming that spending over €1000 is not desirable.
Some inexpensive options:
You can use a temperature compensated crystal oscillator (TCXO) and get 1 or 2 ppm accuracy for under €5 up to a very fancy meal at a 5 star eatery. TCXOs do drift, perhaps no more than 1ppm per year, so you may want an adjustable TCXO if accuracy is important. However, you'll need an accurate source to calibrate the oscillator.
If you need good precision at low cost, you can buy GPS based timing systems that have a disciplined oscillator, often a 10MHz output with 0.001 ppm accuracy. It usually takes a couple minutes for the oscillator to reach specified accuracy after satellite lock. If you do an Internet search, you can find kits for perhaps €100.
Divide the output frequency down to 100Hz, then use a flip-flop to give you a 50% duty cycle at 50Hz.
|
H: Level shifting I2C with VREF1 = 2.8V and VREF2 = 3.3V
I'm trying to connect a few devices on an I2C bus - primarily a GNSS module and a microcontroller. The microcontroller is the bus controller and must run on 3.3V, while the GNSS module internally uses 2.8V logic. How can I shift this 0.5V differential between 2.8 and 3.3V with an level shifting IC?
I figured this was exactly what I2C buffers are made for. There are circuits to level-shift I2C busses using discrete transistors, but I'd like a convenient VSSOP package, an enable pin, and a known-good configuration. The classic IC for this seems to be the PCA9306, available from NXP, Texas Instruments, OnSemi, and probably others.
Unfortunately, they don't seem to be designed for 2.8 to 3.3V shifting: both NXP and OnSemi warn (using the exact same language and similar diagrams; looks rather a lot like plagiarism, guys...) that:
In the Enabled mode, the applied enable voltage and the applied voltage at Vref(1) should be
such that Vbias(ref)(2) is at least 1 V higher than Vref(1) for best translator operation.
TI has a more reasonable value for the MOSFET thevenin voltage:
VREF2 Reference voltage [Minimum:] VREF1 + 0.6V
EN pin high logic must not exceed Vref2 + Vth (0.6V)
but they also state (page 12, section 8.1.5):
PCA9306 has the capability of being used with its VREF1 voltage equal to VREF2
How can I make this work? I'm happy to drive the EN pin with a 3.3V GPIO (there's already one that does this), but how does this work around the minimum voltage requirement?
AI: NXP's application note AN10441: Level shifting techniques in I²C-bus design suggests to use simple MOSFETs for level shifting:
Please note that the gate voltage must be the lower voltage.
The PCA9306 consists of nothing more than three MOSFETs. The third MOSFET allows to raise the gate voltage slightly above the lower supply in order to speed up switching (see TI's application note Voltage-Level Translation With the LSF Family), but that is not necessary for a slow protocol like I²C.
When you connect EN to the 2.8 V supply, you have the same circuit as in AN10441. (The VREF1/VREF2 connections do not matter in this case; you can leave them open.)
|
H: Aluminium foil as a ground plane for 900 MHz antenna
I have designed and manufactured an antenna that works around 900 MHz. The design includes a ground plane and metallic enclosure that are not strictly necessary, but do affect the performance of the antenna. At the moment I don’t have the ground plane ready, but I would like to test my antenna already. Would a typical, very thin, household aluminium foil be conductive enough to simulate a ground plane at around 900 MHz or should I wait for the actual PCB with copper?
AI: At 900 MHz aluminium foil is good; the skin effect (https://en.wikipedia.org/wiki/Skin_effect) prevents a thicker plate from giving any advantage. The skin depth is about 0.003 millimeters, so a 0.01 millimeter thick foil would in practice be as good as much thicker solid aluminium. Two 0.005 mm foils layered tightly together would perform as well as one 0.01 mm foil, assuming there's no lossy material between them.
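As a quick sanity check using the standard skin-depth formula, and assuming a resistivity of roughly \$2.8\times10^{-8}\ \Omega\cdot\text{m}\$ for aluminium with \$\mu_r \approx 1\$:
$$\delta = \sqrt{\frac{\rho}{\pi f \mu_0}} = \sqrt{\frac{2.8\times10^{-8}}{\pi \cdot 900\times10^{6} \cdot 4\pi\times10^{-7}}}\ \text{m} \approx 2.8\ \mu\text{m},$$
which is where the ~0.003 mm figure comes from.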
One millimeter of unevenness of the surface means nothing at 900 MHz. You can put the foil on something stiffer (wood, iron, plastic, cardboard) to prevent it losing its shape due to gravity, wind or the slightest thump. Making a good contact to the feed line needs some experimenting. There are plenty of aluminium soldering tutorials, but I have never tried one.
|
H: Unexpected behavior when using 5V from one power supply and Ground from another one
I am trying to use an old phone charger with 5V DC output. I measured the output to be 6.4V.
It works for powering an ESP8266 through VIN but does not work to power an IR receiver I want to use.
The receiver works when I power it through a different source (5V or 3V and ground from an Arduino.) It also does not work when I use one of the pins of the ESP8266 which gives me 3.3V when I measure it. It is also not working when I use 5V from the Arduino and ground from the phone charger (or vice versa.) This last bit confuses me most because I assumed which ground I use does not make any difference but perhaps I lack some basic knowledge here.
Finally, when I try to measure the current across the Arduino V5 out and the phone charger ground, it gives me 7V AC, and some really low DC voltage. I am rather convinced I am clearly lacking some understanding here. Can anyone explain this behavior to me?
Finally, if there is an easy fix to this to get the IR receiver working, I would be more than interested.
EDIT:
I finally got the IR receiver working. It actually works with both 6.4 V and 3.3 V, but I went for 3.3 V sourced directly from the LDO on the board. The problem I had before was that the GPIO I used to receive the signal is not working. I might have damaged it previously somehow. As far as I understand, the ESP8266 GPIOs can handle 5 V and perhaps also 6.4 V (?) input. The board I am using does not have a pin for 3.3 V out, so I soldered a wire directly to the LDO, which I am not sure is a good idea, but it is working (perhaps there should be a capacitor in between).
AI: Can anyone explain this behavior to me?
Ground on one charger does not necessarily connect to ground on another charger or "system". Ground is not "earth", so there is almost certainly substantial galvanic isolation between the two. Earth is different; earths are meant to be galvanically connected, but using earths as a common line for electronic circuits is by no means guaranteed to be successful either. They may allow power to be applied, but they will also transfer fault currents from other appliances and circuits (i.e. wholly not recommended).
The AC voltage you measure between two different grounds from two chargers is due to capacitive coupling across the internal transformers inside said devices.
Finally, if there is an easy fix to this to get the IR receiver
working, I would be more than interested.
Use the correct supply for the device noting that when you fed it with 6.4 volts, you may have damaged it.
|
H: Oscilloscope or bench multi-meter for ultra high sensor sample rate measurement
I need a device to measure the output of a load cell at very high sample rates. I am aiming for a sample rate of at least 100 kHz. For this goal, I wanted to ask if it would be better to use a bench multimeter measuring resistance, or an oscilloscope and a 5 V DC power supply. If I were to use an oscilloscope, how would I set the trigger to allow me to capture the very short change in voltage? Thank you for any help that you can provide.
AI: Sampling rate is not a problem for oscilloscopes, but most lower cost oscilloscopes (e.g. less than about $2k) are only 8-bit resolution. Very expensive ones (over $5k) can be 10-bit resolution.
Some well-known-brand bench multimeters can sample at 50 kHz and store 1M samples. These have plenty of resolution (16-24 bits), have a USB interface, and are easy to use to manually capture those samples; triggering may, however, be an issue.
If you can use 10-12 bits resolution, a small Arduino-based device (e.g. PJRC's Teensy 4.0) can easily sample reliably at 1 MHz. You'd have to write some software to customize it for your specific case, including triggering.
|
H: truth table from logic circuit
How should a truth table look for the logic circuit below if there is one output that is determined by just one of the inputs?
AI: Your truth table is correct.
Your update with labelling also makes everything unambiguous and clear which is a major help in validating and troubleshooting.
|
H: LTspice simulation: unexpected behaviour of circuit
I am trying to build a JFET driven current source in a more complex circuit topology than shown here.
Unfortunately, the simulation is not giving me the results I would expect.
I could trace the "misbehaviour" of my circuit down to this simplified version of it.
I would like to control VGS of the JFET and therefore the current of R2 with VBat1. As you can see in the image, I can't get the desired voltage to the node G. Without the RC-Filter the circuit works fine.
The graph of V(g) is as expected: a linear increase from 2V to 10V. With the filter I get this graph:
For the capacitors I selected Würth X5R. The curve doesn't change if I use different types of capacitors.
What am I doing wrong? What do I need to change to get the desired linear curve?
AI: You only have 2 V on the drain. As the gate gets above 2 V plus a pn-diode drop, the gate-drain (and gate-source) junction will become forward biased, and since you are driving via 10 MΩ, it will clamp the voltage.
With 10 V on the drain, you might see more expected results; the source will follow about 2 V below the gate voltage.
For systems like this, 10 M & 10 pF is not a robust set of parameters -- try 10 k and 10 nF.
|
H: Transformer Inductance Ratio (datasheet)
I am looking at a current sense transformer, PE-51687NL (Digikey). I must be missing something in understanding the datasheet.
Digikey says the inductance is 20mH. I assume that this is the "single-turn" primary inductance of the magnetic core.
The datasheet says the secondary has 100 turns and the minimum secondary inductance is 2mH.
I know that the turns ratio is related to the inductance ratio: \$L_1/L_2 = (n_1/n_2)^2\$.
No matter how I twist the numbers I can't get the datasheet to make sense! Would someone help me make sense of this datasheet?
AI: It's a 100:1, through-hole current transformer (CT).
There are errors in the Digikey description.
Here's the datasheet.
The 100-turn secondary, with an inductance of 2.0 mH, is to be terminated with a 100 Ω burden.
With 20A through the 1 turn primary, the secondary current would be (1/100) * 20 = 0.2 A and the secondary voltage 0.2 * 100 = 20 V.
In other words, the CT scale factor would be 1 secondary Volt per 1 primary Ampere.
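Using the turns/inductance relation from the question, the single-turn primary inductance implied by the datasheet's secondary figure is only
$$L_1 = \frac{L_2}{(n_2/n_1)^2} = \frac{2\,\text{mH}}{100^2} = 0.2\,\mu\text{H},$$
so the 20 mH figure in the Digikey listing cannot be the single-turn primary inductance, which is consistent with the note above that the Digikey description contains errors.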
|
H: High-power positive/negative to positive/negative boost converter design
I'm looking for an efficient and fairly cost-effective way to boost the voltage of a center-tapped battery pack (+/-180 V to +/-250 V) to +/-350 V. The output load is quite variable, anywhere from 0.5 A to 15 A. What's a good topology for this application?
Is a 3-phase converter like this a reasonable choice? Please also take a look at the orientation of the N-channel MOSFETs since I'm not 100% sure about that.
simulate this circuit – Schematic created using CircuitLab
AI: Those are probably not the best choice of MOSFETs. Here is an example (for the positive, upper half) of what you get with an interleaved boost converter using a variable duty cycle for control (not a finished design).
I have also found this application note, which may also help: AN-1126
I've added the behavior with a "load change" to show the open-loop "performance"...
|
H: Time relay and solenoid valve
For a project, I need to control a normally closed solenoid valve and be able to apply a current on the valve only for a set amount of time before closing it again, and only once (so no cycle).
I think I could use a time relay to do so, but it seems there are many different functions and types of relays, and I'm a bit lost between Off Delay, On delay, Single-shot, and whether it is even the right way to do what I want.
Thanks.
AI: You have referred to a multi-function timer in which one of ten functions may be selected.
The function that meets your requirement would be B Interval (Power on).
The supply voltage is to be applied to A1 and A2. The solenoid valve is to be switched by the 'NO' contact (terminal #18 & terminal #15).
|
H: Snapdragon SoC includes cellular modem RF. Does it still need a baseband CPU?
As per https://www.qualcomm.com/products/snapdragon-425-mobile-platform
, the Snapdragon 425 includes cellular modem RF, WiFi, Bluetooth and GPS.
Does that mean that it does not need a baseband CPU, or dedicated IC for WiFi/Bluetooth and GPS?
Can anybody who wants to design a PCB include it in their design? How do you obtain drivers/firmware for the components, like the SoC itself and other ICs, to include as Linux drivers and build Android?
AI: The Snapdragon 425 contains the RF part of those interfaces and the logic to retrieve/encode data.
A base band CPU is still needed to process and present data to the user.
Possibly, the base band CPU and the RF modems are packaged together on the same chip.
Device drivers of consumer electronics chips like that Snapdragon are not open source. They are developed and optimized by Google or by Qualcomm or by Linaro or by some specialized third party authorized software house.
If you want to build your own Android or Linux device join one of the following open source projects:
https://en.m.wikipedia.org/wiki/List_of_open-source_mobile_phones
Android is built on Linux but the user interface and many device drivers are written by Google mostly in C.
Google is developing Fuchsia, which should eventually replace the Linux part of Android, although nobody knows when.
Google's newest Nest Hub is the first product running Fuchsia OS.
Take a look here:
https://en.wikipedia.org/wiki/Fuchsia_(operating_system)
|
H: The circuit modeling a surge test
In the surge test, as far as I know, a capacitor is charged to a high voltage and is discharged via an RL load. The voltage across the capacitor is a decaying sinusoid. This is the general picture I have of this test. Does anyone know the equivalent circuit of the surge test? My concern is how the high DC voltage across the capacitor does not damage low-impedance motors during the surge test.
AI: Because the energy stored in the capacitor is very low, it does not damage the load. If that voltage were applied for a longer duration, it certainly could damage the load.
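As a rough illustration (the capacitor value and test voltage here are assumptions; real surge testers vary), a 0.1 µF surge capacitor charged to 2 kV stores only
$$E = \tfrac{1}{2}CV^2 = \tfrac{1}{2}\cdot 0.1\,\mu\text{F}\cdot(2\,\text{kV})^2 = 0.2\,\text{J},$$
which is far too little energy to heat or damage a motor winding, even though the peak voltage is high.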
|
H: FPGA over-voltage protection for input trigger signal
I don't have much experience with PCB design, but I am currently designing one to connect together several components for a work project. One of these components is connecting an external trigger signal to an input pin on a Neso Artix 7 FPGA board. The device that generates the trigger may vary (e.g. an analog waveform generator) and the voltage of the trigger signal can therefore vary as well (although it will probably be below a maximum of 15 V or so). The trigger signal will contain pulses or a square wave at a rate of several kHz.
The FPGA operates at a logic level of 3.3 V, so I think it would be a good idea to limit the voltage at the FPGA pin in case a higher-voltage trigger signal is connected. I've read about several approaches, such as an op-amp voltage clamp or a Zener diode shunt regulator. Because the trigger will likely have very high slew rates, I'm afraid that the op-amp clamp might not reach the clamping voltage quickly enough, thus possibly still damaging the FPGA.
For the Zener diode approach, I've come up with the schematic below. However, I have read that there are some drawbacks to using Zener diodes, mainly concerning power dissipation.
My question
In this scenario, what would be the best method for protecting the FPGA from voltages higher than 3.3 V? In the case that that would be the Zener diode shunt regulator, do I need to take extra measures with regard to the power dissipation, or would the resistor in the circuit below be enough?
For the output signal, minimal time delay and slew rate are essential, since the device triggers on an edge.
Many thanks in advance!
simulate this circuit – Schematic created using CircuitLab
AI: It depends on how fast you want it...
Zener diodes have pretty high capacitance, so you'll need a low value series resistor, which means it will draw a lot of current from the signal source. With your 100R value, a 15V source would have to provide (15-3.3)/100 = 117mA current, and the resistor would burn 1.3W. Both are inconvenient.
If the signal is slow you can use a higher resistor, for example 3k3, which will make dissipation negligible but combined with Zener diode capacitance it will lowpass your signal at a few hundred kHz and introduce some phase shift too.
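As a rough estimate of that corner frequency, assuming perhaps 150 pF of Zener capacitance (check the actual diode's datasheet):
$$f_c = \frac{1}{2\pi RC} = \frac{1}{2\pi \cdot 3.3\,\text{k}\Omega \cdot 150\,\text{pF}} \approx 320\,\text{kHz}.$$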
Instead, you can use a pair of low-capacitance diodes; there are many dual diodes in SOT-23 available. You can use Schottky diodes for a lower threshold voltage:
The first resistor limits current from the source, diodes limit voltage between -0.6V and 0.6V above VCC (or 0.3-0.4V if you use Schottky diodes). Since it will still go below GND and above VCC, the second resistor limits current into the FPGA protection diodes in case they conduct before the dual diode.
However it requires VCC to be able to sink some current, which won't happen if the loads are pretty low, for example a microcontroller in sleep mode. If the load on 3V3 is an FPGA, it'll draw enough power to sink the input current, so that's fine.
I've used this too:
The transistor and diodes make a shunt regulator at about 2.1V, the top diode and transistor add 0.6V twice, that will clip input voltage at 3V3. It follows the power supply, so it will behave correctly if the device is unpowered too.
You could also use a unidirectional TVS diode instead. It works like a Zener diode, with much lower capacitance at the cost of much lower accuracy. "Unidirectional" means it works like a normal diode in reverse, which is what you want since you're not interested in negative voltages.
So say you get a TVS diode specified for minimum 3V3: it will sink almost zero current at 3V3, but it will clamp the voltage to somewhere around 4.5-5V. So you still need a protection resistor to the FPGA pin, but it will conduct much less current than if the input were 15V and not limited to 5V.
|
H: Can I do math on LTspice parameters?
I would expect the voltage source to output 20V but the simulation throws an error instead.
AI: You need to write {2*V} instead of {V}*2; everything you want LTspice to evaluate must be inside the curly braces.
|
H: Understanding questions about MOSFET’s drain and gate series resistor
I build a temperature circuit in PSpice with a TLV3701, a MOSFET and 3 light bulbs (\$R_6\$, \$R_7\$, \$R_8\$).
The \$R_9\$ resistor is 108.18 Ω (it can be bought, I checked). As soon as the room temperature falls below 21 ℃, the comparator output switches on, and when the temperature rises, it switches off.
I now have two questions:
I first installed a series resistor \$R_1\$ because I read in the data sheet of the TLV3701 that it should not output more than 10 mA. I've found that this 10 mA doesn't flow at all even when \$R_1\$ = 1 Ω. May I remove this resistor?
As you can see from the red label, 18 mV are left at the drain input. How does that happen? I know it's a semiconductor, there are physical effects, but what exactly happens there, so that 18 mV are “left over”?
Edit:
Question № 1 is OK. It's called \$R_{DS}(On)\$.
In real life I thought of this MOSFET: https://asset.conrad.com/media10/add/160267/c1/-/en/000156110DS01/datenblatt-156110-stmicroelectronics-stp16nf06l-mosfet-1-n-kanal-45-w-to-220ab.pdf
But: PSpice is not yet set that way.
R9 is equal to 108.182 Ω and R3 is said to be similar to get 3.118 V.
Also, why did you decide to add a pull-up resistor? I found out that the TLV3701 needs one. It didn't work without one…
AI: I first installed a series resistor R1 because I read in the data sheet of the TLV3701 that it should not output more than 10mA. I've found that this 10mA doesn't flow at all even when R1 = 1Ω. May I remove this resistor?
Yes. In high-speed, high-power switching circuits such as switching power supplies and motor drive amplifiers, a resistor in series with the gate dampens unwanted oscillations caused by the gate capacitance and the wiring inductances. This does not apply to a simple power switch application such as yours.
As you can see from the red label, 18mV are left at the drain input. How does that happen? I know it's a semiconductor, there are physical effects, but what exactly happens there, so that 18mV are “left over”?
When a power MOSFET is fully enhanced (saturated), the channel has a minimum possible resistance. This is labeled Rds(on) on the datasheet: The Resistance from the Drain to the Source when the transistor is fully "on". With Ohm's Law, you can calculate Vds, the voltage from the drain to the source, at any load current.
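As a purely illustrative check (these numbers are assumptions, not read from your simulation): if \$R_{DS(on)}\$ were about 45 mΩ and the three bulbs together drew 0.4 A, then
$$V_{DS} = I_D \cdot R_{DS(on)} = 0.4\,\text{A} \times 45\,\text{m}\Omega = 18\,\text{mV},$$
which is exactly the kind of small residual drop you are seeing.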
|
H: Fully identifying the system behavior
"Any arbitrary nonzero input signal x(t) would be suitable for fully identifying the system behavior, by observing the corresponding output y(t)." Is this statement true? I could not prove it.
AI: "Any arbitrary nonzero input signal x(t) would be suitable for fully identifying the system behavior, by observing the corresponding output y(t)." Is this statement true?
Yes, assuming the system is linear and time-invariant (LTI).
Take the Laplace transform of a known input, and the Laplace transform of the corresponding output, divide the latter by the former, and you have the transfer function of the system.
To find the output corresponding to some arbitrary input, take the Laplace transform of this arbitrary input. Multiply by the transfer function. Take the inverse-Laplace transform of the result. That is the output corresponding to your arbitrary input.
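Here is a minimal numerical sketch of the same idea in discrete time (all values are illustrative assumptions, and it presumes a noise-free LTI system; it simply divides FFTs instead of Laplace transforms):
import numpy as np
fs = 1000.0                      # sample rate in Hz (illustrative value)
n_samples = 1000
# Arbitrary non-zero input: white noise excites all frequencies.
rng = np.random.default_rng(0)
x = rng.standard_normal(n_samples)
# "Unknown" example system: first-order IIR low-pass y[n] = 0.1*x[n] + 0.9*y[n-1]
y = np.zeros_like(x)
for n in range(1, n_samples):
    y[n] = 0.1 * x[n] + 0.9 * y[n - 1]
# Empirical frequency-response estimate H(f) = Y(f) / X(f)
X = np.fft.rfft(x)
Y = np.fft.rfft(y)
H_est = Y / X
freqs = np.fft.rfftfreq(n_samples, d=1.0 / fs)
print(freqs[:5])
print(np.abs(H_est[:5]))
With real, noisy measurements you would instead average cross- and auto-spectra (e.g. scipy.signal.csd and scipy.signal.welch) rather than take a single raw FFT ratio.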
|
H: Purpose of reverse biased diode in pulse generation circuit
(Caveat: "young player" at large here.)
The circuit below (not my creation) debounces a push button and generates a 1us negative pulse when it's pressed. I breadboarded it and it works as expected, but I am trying to understand the purpose of the reverse biased diode.
My understanding is that as the first inverter goes high, the 100pF capacitor charges through the 10k resistor, which normally pulls the node to logic level 0, giving a short pulse of logic level 1 to the last inverter, creating the desired negative pulse.
But what is the purpose of the diode shown in the schematic? I don't see any way it would ever be anything but reverse biased, nor that it could ever be subjected to its reverse breakdown voltage.
I tried to remove it (it's unspecified in the schematic, I used an 1N4148) and compare single shot references on the scope (measured at the 10k node), but could see absolutely no difference between the signals with or without the diode.
(Actually not really sure about the 1k resistor's role either as I'd expect the HC input to draw so little current that there'd be no voltage drop over it.)
Update: Annotated the schematic below based on the answers, showing how it makes sense that the diode is in fact forward biased in this state.
AI: When the first inverter output goes HIGH, the 100 pF capacitor will slowly charge up to 5 V through the 10K resistor. It will now have nearly 5 V across its plates.
When the inverter output goes LOW, the capacitor's (let's call it) top plate will be connected to near-0 V by the inverter output. This plate is charged to 5 V with respect to the bottom plate, so the bottom plate voltage will now be -5 V.
The diode is now forward biased as its anode is at 0 V and its cathode at -5 V. It conducts, discharging the capacitor quickly.
The effect of this RC and diode circuit is to charge the capacitor slowly and discharge it very quickly. So when the first inverter goes HIGH, there will be a pulse generated at the second inverter input. And when the first inverter goes LOW, there will be no pulse generated. So that RC and diode is an edge filter.
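As a sanity check on the pulse width, the RC time constant here is
$$\tau = RC = 10\,\text{k}\Omega \times 100\,\text{pF} = 1\,\mu\text{s},$$
and the second inverter's input decays back through its threshold (roughly VCC/2) after about 0.7·τ, which is consistent with the ~1 µs pulse you measured.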
If you work either side of the inverters, you can then see how the circuit produces the final pulse it does and for which switch action (press or release) it does it.
Note that the second inverter has an internal diode connected the same way as this diode, in parallel with it. Stressing the internal diodes is typically avoided to not reduce the reliability of the inverter. The external diode is a significantly higher current part.
|
H: How does a driven right leg work?
A driven right leg (DRL) circuit is often added to a biopotential amplifier to reduce the common mode voltage (i.e. to increase the common mode rejection ratio, CMRR).
A basic form of biopotential amplifier is just an instrumentation amplifier which has two stages: a buffer stage and a single output amplification stage.
The DRL is added after the first stage, as illustrated in the picture below.
The big picture regarding why the DRL improves the CMRR is that the human body acts as an antenna to pick up electromagnetic interference (EMI) from nearby power lines or other electrical devices and the driven right leg somehow siphons part of the common mode voltage away after the first stage but before the second one.
Unfortunately, I'm having trouble understanding how precisely...
Also, passive grounding is when the right foot is actually connected to ground, if I understand this correctly. Why is it still called grounding if the right foot (or any part of the body) is not actually connected to ground? Why is the right leg (and not another part of the body, maybe even the left leg) used to ground the circuit?
AI: The classic instrumentation amplifier circuit consisting of opamps 1 and 2 in your diagram has a common-mode rejection ratio of 0 dB, otherwise called "no rejection at all". In other words, the outputs are still differential, both containing common-mode noise, and still require a third differential stage to remove that noise. This can be seen here:
simulate this circuit – Schematic created using CircuitLab
V1 produces a 1V low frequency input offset common to both inputs. This represents common-mode input "noise", from a source with some impedance Rnoise. This noise is added to the signal from V2, a smaller and higher frequency 100mV signal, representing the ECG potentials to be measured. It can be clearly seen that the two differential inputs IN1 and IN2 contain this "common mode" noise offset.
Take a look at the outputs:
Here we see that OUT1 and OUT2 (blue and orange) are complements of each other, and both still contain the 1 Hz common-mode noise. OUT3 is the result of subtracting OUT2 from OUT1. Obviously, what you want is OUT3, a proper single-ended signal representing the potential difference between IN1 and IN2.
Usually instrumentation amplifier ICs include this final difference stage, to provide a single-ended output, with the common-mode element eliminated.
However, your diagram performs common mode rejection without that third stage, by offsetting the body's potential to half-way between the two outputs. This is bootstrapping the body to have the same potential as the output's mid-point, thereby shifting the potentials of all three participants (body, and opamps) into the same regime. Common mode noise (V1 in my example) is eliminated because the bi-potential amplifier itself is measuring potentials relative to its own, imposed, idea of what the body's potential should be.
Here is that scenario simulated:
simulate this circuit
Without some proper analysis, I can't tell you the significance of different amplitudes of the signals at OUT1 and OUT2, but it is clear that the common-mode noise is gone. And you still benefit from the balanced inputs afforded to you by the instrumentaion amplifier setup.
I don't know offhand what advantage this bootstrapping offers over a simple difference stage following the instrumentaion amplifier stage, but it does work. I imagine that by employing this bootstrapping technique and a difference amplifier for OUT1 and OUT2, you can achieve really good common-mode rejection.
Perhaps there are other benefits that your book can point out. I would be interested to know.
|
H: Phone's touchscreen becomes unresponsive when charging
I have a Samsung A12. When using a charger for the phone other than the manufacturer intended, it seems that the phone's screen just becomes unresponsive whenever it is charging. What could be the reason for this phenomenon?
AI: The other chargers emit too much electromagnetic interference for the device to work.
They are cheaper, as they may be missing components inside or may not otherwise be designed to bring common-mode interference down to a level at which the touch screen can work.
It is possible that they emit interference within legal limits, and that the phone can also withstand interference within legal limits, but a third-party charger may simply emit more interference than this particular phone can tolerate, while the manufacturer's charger suppresses interference further than legally required, allowing the phone to work.
|
H: In LTspice XVII, 74HC107 has an error, but I can't figure out what the problem is
I made a circuit like this picture above with LTspice XVII.
Q(0), Q(1), Q(2) are outputs, and CL is CLK (the clock pulse).
The JKFFs are negative-edge-triggered JK flip-flops (i.e. 74HC107).
And, the graph of this circuit should look like this picture below:
But, LTspice shows a graph like this:
(Sorry, I made every effort to make this clearer, but I couldn't.)
I can't understand why Q(0) turns out to be 5 V at 0 μs ~ 20 μs. I mean, a negative-edge-triggered JK flip-flop acts when CLK changes from 5 V to 0 V. So shouldn't Q(0) be 0 V at 0 μs ~ 20 μs? Why does Q(0) in LTspice come out as 5 V there?
Did I make a mistake when I set the circuit up?
More details
Component attribute of 74HC107:
Component attribute of 74HC08:
74HC08 = 2-input, 1-output And Gate
Information about CLK(V2):
Information about Edit Simulation Command
AI: LTspice is an analog simulator. Before running the .tran analysis of your circuit, the simulator calculates the initial state of the circuit, the voltages and currents at t=0, using the 74HC107 model which it takes from the 74hc.lib library. If you knew the simulator's algorithms and the library's internals well enough, you could predict the initial state the simulator arrives at after completing this initial stage of your .tran run.
But you know this result from your attempt, and it is not what you expect it to be: you'd like to start from Q[0] = 0, Q[1] = 0, Q[2] = 0; however, the simulator starts from Q[0] = 1, Q[1] = 0, Q[2] = 0. In principle, you could slightly shift the timing at the beginning and separate the power-on from the start of the CLK pulse train in order to arrive at the "correct" initial state, but you should not. Remember, the purpose of simulating the circuit is to examine the behavior of a real hardware design, and the practical implementation of a digital counter includes setting the counter to a required initial state. In your case, you want it to be Q[0] = 0, Q[1] = 0, Q[2] = 0. The 74HC107 has an input for resetting the device, called an asynchronous reset input, which is active LOW. The 74HC107 is a dual JK flip-flop, and it has two such pins, 13 and 10, one for each of its two flip-flops. In datasheets, these inputs are designated \$1 \overline R\$ and \$2 \overline R\$.
In your circuit these inputs are named CLR, and you connected them to a constant 5 V. You should add a reset signal connected to these CLR inputs and start the simulation run with them LOW (0 V). After a delay that guarantees the flip-flops are reset to Q[0] = 0, Q[1] = 0, Q[2] = 0, you set the CLR inputs HIGH (5 V). For the entire interval of this initial delay, the CLK signal connected to the CLK inputs should be disabled (gated with this common CLR signal). With this initial setting, you will have the expected timing diagram, starting from [0, 0, 0] and not from [5 V, 0, 0], as is the case with your circuit.
|
H: Two LEDs connected in series don't work
I'm very new to electrical engineering and I ran into a strange issue while trying to connect two LEDs in series today. Both my blue and white LEDs work separately, but when I try to connect them together, they do not work (see pictures).
Is there something I'm doing wrong?
(I did try changing the orientation of the LEDs to make sure my cathodes and anodes are connected to where they're supposed to be)
AI: The LEDs both need about 3V to work, so together in series they need about 6V to work, but the IO pin only gives out 3.3V. So there is no way the LEDs could work in series.
Also, do not connect LEDs to IO pins without series resistors to limit the current; the current might be too high and the MCU or LED could be damaged permanently.
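For reference, the usual single-LED sizing formula, assuming the ~3 V forward drop mentioned above and a modest 5 mA target current (your LED's datasheet values may differ):
$$R = \frac{V_{IO} - V_F}{I_{LED}} = \frac{3.3\,\text{V} - 3.0\,\text{V}}{5\,\text{mA}} = 60\,\Omega.$$
In practice a larger value such as 100 Ω is often used; the LED just runs a little dimmer.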
|
H: KiCad: ratsnest not connecting pads
In my schematic I have a relay connected to a pin header (the two are in different .sch files). The connection is called RELAY_COM, but this connection is not shown in the ratsnest in pcbnew, which prevents me from routing the two together.
I did the footprints and symbols myself for both of them, and although I rebuilt the netlist many times I cannot make the connection show up in pcbnew.
This is also true for other connections, like RELAY_NO.
What am I doing wrong?
Thanks
AI: You are using simple net labels. These only connect nets within one sheet.
If you are using multiple sheets, you need to use either:
global labels: these connect everywhere
hierarchical labels: these define connection points between sheets, essentially allowing you to treat each sheet as a component with its own pins.
See the eeschema documentation topics "Wires, Buses, Labels, Power ports" and "Connections - hierarchical labels" for more details.
|
H: What exactly is meant by a "ground sense operational amplifier"?
I have found this specification in many datasheets, but I have failed to find a good explanation about what exactly is the issue that a ground sense op amp is meant to deal with.
AI: It appears to have a similar meaning to "single-supply" op-amp (e.g. LM358) - the input range includes the negative rail, and the output can swing fairly close (within hundreds of mV) to the negative rail while sinking a bit of current.
ROHM uses the term:
Input and output are operable GND sense
It's a translation from Japanese so you have to expect a bit of idiosyncrasy. The important things are the numbers in the datasheet.
On a separate subject, it's a little odd that they would recommend a maximum load capacitance of 0.01 nF (10pF) but mention that it will not oscillate even with a load of several nF (several hundred times higher). Possibly an error.
|
H: FPGA pin numbers
I'm having trouble understanding the structure of FPGA datasheets.
From what I can tell, there is usually a table of "Signals Descriptions" with the pin names and their function, but I don't see any indication of the pin number.
See example below, from Lattice "iCE40" datasheet.
How are the actual pin numbers determined? Is this defined elsewhere? Is it decided only by the VHDL/Verilog code (and if so, can any number be any signal)?
AI: The iCE40 represents a FAMILY of FPGA devices and not a specific part. So to determine the pinout you need to choose a specific model from the list of available parts offered and then consult the pinout document for that variant.
The Lattice web site has all of these available. You might start at:
Lattice iCE40 pinouts
|
H: Is the BSS138 capable of level shifting this voltage difference?
The goal is to control the IRF4905 with the ESP32. From my understanding (which is limited) the IRF4905 will have the lowest RDS(on) with a -15 V VGS. To achieve this voltage difference I have a linear voltage regulator dropping my ~33 V (the power source is an 8s Li-ion, so the voltage range is 28.8 V - 33.6 V) to 15 V. Assuming the voltage regulator connects directly to the gate of the IRF4905, it would have a voltage difference between source and gate of 13.8 V - 18.6 V, effectively turning it on.
My question is, can I use the BSS138 as shown in the schematic below? From my understanding the BSS138 can have a drain-source voltage of 50 V and a gate source voltage of +/-20 V. I should be within spec on both of these but I don't have any experience with MOSFETs.
Note: I'm using the IRF4905 P-channel instead of an N-channel as I only want voltage at the motor connector when the MOSFET is on (to prevent a shock if someone touches the connector and ground in an N-channel configuration), so I believe that means I need to use a P-channel and switch before the load.
I believe there should be a 10k resistor pulling to ground from pin 1 of the BSS138 to prevent floating the gate.
AI: It won't work because the output swing of 0/3.3V of the ESP32 is inadequate.
If you want to make a high-side switch you can do something like this:
simulate this circuit – Schematic created using CircuitLab
However, the switching is relatively slow with this kind of simple gate driver and it would be unwise to feed the driver with a PWM. It's probably okay with on/off control, but more analysis would be required to be sure it won't kill M2.
For high frequency switching it would be better to use a proper gate driver and a low-side switch (use an N-channel MOSFET).
D1 is necessary, otherwise the large below-ground inductive spike when the motor turns off will avalanche and probably destroy M2.
Since I have a model of the IRF4905, I did a simulation of a 8.3A average 50% PWM control of a 2\$\Omega\$ 500uH motor. Yours will likely be different, of course. I used 5kHz for the PWM frequency.
Average motor current: 8.27A
RMS motor current: 8.33A (my fake motor has enough inductance to smooth the current fairly well)
Average dissipation of the IRF4905 1.91W, peak is a couple hundred W briefly while switching.
Increasing the PWM frequency to 25kHz increases typical dissipation in the transistor to 7.6W.
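For a rough cross-check of the conduction part of those dissipation figures, assuming an on-resistance of about 20 mΩ for the IRF4905 (check the datasheet for your gate drive and temperature):
$$P_{cond} \approx I_{RMS}^2 \cdot R_{DS(on)} = (8.33\,\text{A})^2 \times 20\,\text{m}\Omega \approx 1.4\,\text{W},$$
so most of the 1.91 W at 5 kHz is conduction loss, and essentially all of the extra dissipation at 25 kHz is switching loss.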
The problem will become more obvious if you go to less optimal duty cycles than 50%, I would not use this above a few kHz. If your motor is low inductance then the MOSFET may fail because the RMS current is too high, especially during startup.
Frankly, I would suggest getting a MOSFET driver chip unless you are prepared to run through a stack of IRF4905s.
|
H: Calculation for energy consumed over a period of 24 hours
I'm currently studying the textbook Fundamentals of Electric Circuits, 7th edition, by Charles Alexander and Matthew Sadiku. Chapter 1.5 Power and Energy gives the following practice problem:
Practice Problem 1.6
A home electric heater draws 12 A when connected to a 115 V outlet. How much energy is consumed by the heater over a period of 24 hours?
The answer is said to be 33.12 k watt-hours.
The chapter says that the energy absorbed or supplied by an element from time \$t_0\$ to time \$t\$ is
$$w = \int_{t_0}^t p \ dt = \int_{t_0}^t \nu i \ dt, \tag{1.9}$$
where \$ \nu \$ is the voltage, \$ i \$ is the current, and \$w\$ is energy in Joules. It then says that 1 Wh = 3,600 J.
So, by my calculation, we have
$$\int_0^{24} 12 \times 115 \ dt = \left[ 1380t \right]_0^{24} = 33120 \text{ J}.$$
But, converted to watt-hours, as in the textbook's answer, this would be \$ \dfrac{33120}{3600} = 9.2 \text{ Wh}\$. Am I misunderstanding something? Are my units incorrect? Or is this, perhaps, a textbook error?
AI: Let's do some back-of-the-envelope stuff before we start losing decimal places.
The heater uses 12 A at 115 V, that's 1380 W, or just over 1 kW. So in one hour, it uses just over 1 kWh. In 24 hours, it must use a bit more than 24 kWh. To do it exactly, in 24 hours, it uses 1.38 * 24 = 33.12 kWh, as in the book.
So where's your mistake? A watt is a joule per second, not per hour. Convert the time in equation (1.9) from hours to seconds (multiply by 3600), and you'll get the right answer.
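Worked through explicitly in SI units:
$$W = P \cdot t = 1380\,\text{W} \times 86\,400\,\text{s} = 1.192\times 10^{8}\,\text{J} = \frac{1.192\times 10^{8}\,\text{J}}{3.6\times 10^{6}\,\text{J/kWh}} \approx 33.12\,\text{kWh}.$$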
|
H: Resistor in my AC LED circuit gets fried
I have designed a dead-simple LED circuit. I calculated the required resistance to be (220-2)/0.02 = ~11k, so I used 10k+1k resistors in series, but for some reason the 10K resistor started smoking within about 10 seconds, although I estimated the power usage to be 2 V * 0.02 A = 0.04 W, which is way below the rated power of my resistor.
On the second circuit, I used four 47k resistors in parallel to get a resistance of around 12k. This time the resistors did not burn, but they still got very hot.
I can't understand what I'm missing or I don't know exactly.
AI: First: Your power calculation isn't correct. 2 V * 0.02 A = 0.04 W would be the dissipation if you had 2 volts across the resistor. You don't. You have approximately¹ 220 - 2 = 218 V. The power dissipation is hence roughly 4.4 W if the circuit conducts throughout the AC cycle.
Additionally: LEDs aren't intended to withstand a large reverse voltage. On the negative cycle of the waveform, the LED initially has no current through it, so there is no voltage drop across the resistors. This places up to 220 V * sqrt(2) = 311 V of reverse voltage across the LED, destroying it almost instantly (they are designed to handle around 5 V or so of reverse voltage).
You can add a shunt diode in anti-parallel with the LED, which will protect it. It can also be a second LED, in which case you get twice the light - one lights on one half-cycle, the other on the other half-cycle. Of course, twice the resistor current means twice the resistor power dissipation.
Additionally, be very careful with mains circuits in general as they can be extremely dangerous to you and your home (as they can electrocute, start fires, etc). Based on the mistake you're currently making here, I strongly suggest that you practice with battery powered circuits instead - while you may still destroy components, you do not risk deadly electric shocks or house fires. Thankfully, this mistake did not lead to a fire, although it had a risk of doing so (unless you use special flame-proof resistors which are guaranteed not to burn down your home if overloaded).
A much deeper and more accurate understanding is necessary to safely work with 220 VAC (or 110 VAC).
¹ Approximate, because in reality 220 V is the RMS value and we're subtracting a 2 V offset.
|
H: LTspice is rounding / misinterpreting PWL file values
When I'm loading PWL files into a voltage source or a current source, I'm not getting what is provided in PWL file content. I'm attaching three pictures to show file content, schematic and plot.
In the PWL file I've tried changing the format and the values of a few entries to rule out some possible issues, but to no avail.
I started from a file with much smaller values and a few more significant digits (2.0050101 to 2.0161231). I thought it might be a precision issue, so I increased the values and reduced the number of significant digits. Slightly different behavior, but still wrong results.
Why is LTspice not interpreting it as it should, and how can I fix it? I was googling for the specification of PWL, but nothing related came up :/
I'm also presenting special characters, so that formatting is clear.
On the plot in green is a measured value and in red is what I would expect to see.
Schematic
AI: The reason you see this is that you are using a very high dynamic range for the values: time points in the range of hundreds of thousands (1e5) coupled with values that vary in the range of hundreds of microvolts (1e-4)! Due to the waveform compression algorithm, the display of the waveform appears distorted. The solution is to add .opt plotwinsize=0. Be careful, as the .RAW file may grow very large now. The .save command will help if that's the case.
|
H: What kind of component is this, and is it damaged?
My central heating controller stopped working. It's essentially an AA battery powered device with a thermistor, some buttons, and an LCD screen, which lets me program desired temperatures for various day/time periods. When the controller decides heat is required, it enables its relay which powers on the boiler for heating.
When I went to turn up the temperature temporarily today, I saw that the screen no longer came on. Trying various newly charged batteries didn't help. The controller did also die a year ago while in the middle of using it, but it worked again after taking it apart, rebuilding, and replacing the batteries. I believe the unit has consistently complained for maybe three years about a low battery, even with new batteries.
The PCB surfaces look pretty much pristine (AFAICT), aside from this one component, whose purpose I'm unsure of, and whether it's intact. It looks a bit like a capacitor that has leaked, but it's labelled "L3", which apparently can denote an inductor or bead (I'm not familiar with them). The body has "CTC" written on it a few times, but I didn't notice any other markings. Other capacitor-like components on the board are labelled "Cx".
Photos I took a year ago of this component also look identical to now, so I don't know if this is the component at fault (though I suppose debugging the board further is a separate topic).
What is this component, and is this normal? Are there any meaningful measurements I can make with a multimeter to test its functionality?
AI: L3 is an inductor, and it is normal for it to have glue on it. It is most likely not damaged, and the problem is elsewhere. A multimeter in continuity mode should read approximately 0 ohms across it.
|
H: Nested Loops on verilog not behaving as expected
I have a problem with Verilog.
The structure of my code is: I have a top module,
and then another 2 sub-modules that I instantiate in the top module.
Here's some of my code:
genvar i;
genvar j;
wire[2:0] encoder[7:0]; // 3-bit wire array with 8 elements
wire[8:0] save_tmp[7:0];
wire [8:0] tmp; // 9-bit wire
generate
for(i=0;i<N;i=i+1) begin: gen_loop
module1 mod1(.prev(0), .cur(1), .next(1), .out(encoder[i]));
for(j=0;j<N;j=j+1) begin: gen_ppg
module2 mod2(.encoded(encoder[i]), .in1(1'b0), .in2(1'b0), .out(tmp[j]));
end
assign save_tmp[i] = tmp;
end
endgenerate
I can get the output from the first module (through encoder) and also no problem when I sent it into the second module (module2). Here's my second module:
module module2(encoded, in1, in2, out);
input [2:0] encoded;
input in1, in2;
output out;
wire nand1, nand2;
assign nand1 = ~(in1 & encoded[2]);
assign nand2 = ~(in2 & encoded[1]);
assign out = ~(nand1 & nand2) ^ encoded[0];
endmodule
I'm trying to make a testbench just for calling module2 and it's working.
But when I put it in the top module, I can get the values of nand1 and nand2 but not the output.
Does anyone know what the problem is?
AI: Each index of tmp has more than one driver. If any of the drivers' values conflict, the result is an X. Each index of tmp needs to have only one driver. There are two ways to do this.
One is to add more bits to tmp and assign explicitly:
genvar i;
genvar j;
wire[2:0] encoder[7:0]; // 3-bit wire array with 8 elements
wire[8:0] save_tmp[7:0];
wire [8:0] tmp [7:0]; // <-- Add range [7:0]
generate
for(i=0;i<N;i=i+1) begin: gen_loop
module1 mod1(.prev(0), .cur(1), .next(1), .out(encoder[i]));
for(j=0;j<N;j=j+1) begin: gen_ppg
module2 mod2(.encoded(encoder[i]), .in1(1'b0), .in2(1'b0), .out(tmp[i][j])); // <-- Add [i]
end
assign save_tmp[i] = tmp[i]; // <-- Add [i]
end
endgenerate
The other approach is to declare tmp inside the generate loop. Then each index of the loop will have its own scope-limited instance.
genvar i;
genvar j;
wire[2:0] encoder[7:0]; // 3-bit wire array with 8 elements
wire[8:0] save_tmp[7:0];
generate
for(i=0;i<N;i=i+1) begin: gen_loop
wire [8:0] tmp; // <-- tmp is now local to the gen_loop index
module1 mod1(.prev(0), .cur(1), .next(1), .out(encoder[i]));
for(j=0;j<N;j=j+1) begin: gen_ppg
module2 mod2(.encoded(encoder[i]), .in1(1'b0), .in2(1'b0), .out(tmp[j]));
end
assign save_tmp[i] = tmp;
end
endgenerate
|
H: Is there a way to convert a time delay into a sample delay?
I have been given a time delay of a signal and the sampling frequency, however, I am trying to calculate the delay in samples. My logic is: delay frequency divided by sampling frequency, however, this doesn't give integer answers, which is what I expected as this is the time at which each sample is taken. However, if I do sampling frequency divided by delay frequency it gives integers for the sampling delay, but this doesn't make sense to me.
Could anyone please explain the conversion between a delay in time and a delay in samples?
Thank you.
AI: You know the time delay and the sampling frequency. The reciprocal of the sampling frequency is the time between samples. Thus if you divide the time delay by the time between samples (using the same units such as seconds), you will get the delay in units of sample periods. This should result in an integer (within the tolerances of the time delay and sampling frequency).
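As a minimal sketch (the numbers are only illustrative):
# Convert a time delay to a delay in samples.
fs = 100e3            # sampling frequency in Hz (illustrative)
time_delay = 2.5e-4   # time delay in seconds (illustrative)
sample_period = 1.0 / fs                        # time between samples
delay_in_samples = time_delay / sample_period   # same as time_delay * fs
print(delay_in_samples)         # 25.0 samples for these example numbers
print(round(delay_in_samples))  # round to the nearest whole sample if needed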
|
H: How can this 74 series Inverter circuit be analyzed?
I'm trying to analyze the keymech scanning / sampling circuit in the WASP synthesizer
So far I understand that the keymech is sampled one key at a time by the 74 series logic driven by the dual inverter clock and I understand how the muxing works.
However, in the top left of this circuit there are 4 inverters (U35A/B/C/F) and 2 SR latches (U44A/B), and I'm not sure how I should analyze this section.
The input to this section (node 4 of U46C) is the logic value of the active key. However, this signal is connected to both the input and output of U35B via 100K, unless switch U46C is activated by U33A and U35F, which happens when Q3 and Q4 of U32 are both low or both high.
From a functional view I think that this circuit is meant to latch the current MUX inputs through U30 when a key press is detected, however I'm not sure how to follow this circuit through.
AI: U35 is operating in linear mode, like an inverting amp. I believe those (4046) are originally from the CD4000 series.
When the switch U46C is closed, the closed loop around U35A & U35B puts the two amplifiers into a stable state by creating a balancing point at C47.
In this operating mode, the "input" signal (analogous to a summing op-amp configuration) coming through U39 & U40 & U47 sees a very low impedance to the output of U35B, thus very low gain, and thus has little effect on the output of U35B.
When the switch U46C opens, the input signal is amplified, seeing gain through R136 and positive AC feedback through C46. In turn, U35C (biased to have its output low) sharply amplifies the signal and toggles the output.
Not only CD4000-series but also 74LS00-series inverters have a (quasi-)linear operating region, though a narrow one. In earlier days, I used a 74LS04 as a video amplifier.
Edited,
I see you are still waiting for more. I like puzzles, and wonder why not many people do. I am going to detail the answer. I know I will make some mistakes, since I cannot emulate or probe the actual circuitry. Meanwhile, I am not sure how much you understood of what I explained already.
However in the top left of this circuit there are 4 inverters
(U35A/B/C/F) and 2 SR latches (U44A/B) I'm not sure how I should
analyze this section.
U35F is an inverter that switches U46C, an analog switch (not a digital one), on and off. What the analog switch does is cancel (significantly reduce) the influence of the signal coming in on pin 4.
That happens because, when U46C switches on, it shorts the signal on pin 4 to the output of U35B, which is a considerably lower impedance than the source impedance of the signal on U46C pin 4.
That assumes the keys are resistive to touch/pressure/depth. I doubt the keys generate current in any more complex way; otherwise, this circuitry would not do anything. The sensitivity/threshold level is trimmed by P5, the 33K pot.
When U46C opens, U35B, an inverting amp, sees the input signal (a voltage through a resistance, thus a current); R136, the feedback resistor, is freed to act, and the signal is amplified.
U35 then just amplifies the change in U35B's output voltage sharply, quickly enough to trigger U44B, which latches the key status, pressed/depressed.
I have explained U35A & U35B, as "balancing/biasing/edge detecting" up there.
The input to this section (node 4 of U46C) is the logic value of the active key
I am 100% sure that the input signal to U46C pin 4 is a current, likely from a resistive, pressure-sensitive element driven by the reference voltage. So it can detect the "touch".
however this signal is connected to both the input and output of U35B by 100K unless switch U46C is activated by U33A and U35F which happens when Q3 and Q4 of U32 are both low or both high
Explained up there.
From a functional view I think that this circuit is meant to latch the current MUX inputs through U30 when a key press is detected, however I'm not sure how to follow this circuit through
Yes, when the key is pressed, U30 latches the key number (which key). Apart from the circuitry at the top left, it is all digital logic. U32 generates the timing.
|
H: How is clock gating physically achieved inside an FPGA or ASIC?
It is a bad idea to add logic gates in a clock signal path. How is clock gating achieved in FPGA and ASIC designs, and how does it prevent glitches in the output signal (i.e. the gated clock) as it is enabled or disabled?
AI: Is it a bad idea to gate clocks? It depends.
In the ASIC there’s well-understood timing for clock paths, so it’s reasonable to instance a standard cell on the clock tree to gate a sub-region’s clock. On ASIC then, not only is clock gating not ‘a bad idea’, it’s widely used as a means to save power.
Not so much with the FPGA. In fact, it’s never a good idea to create gated clocks directly out of FPGA fabric logic; the synthesis tools will warn you about it if not outright forbid it. Why? The resulting inserted skew becomes impossible to manage at higher frequencies, even if the gated clock doesn’t glitch (which it will without careful design.)
This brings up a common issue: modeling ASIC clock gating on FPGAs. It isn’t really feasible to just define the clock gate in HDL and hope for the best. It needs special handing.
You can model ASIC-like gated-clock behavior in your FPGA using clock-enable flops for your synchronous blocks. This can be dealt with as a synthesis option in your flow, which will identify the gated clock domain and convert its flops to FDCEs. Vivado example: https://support.xilinx.com/s/article/982650?language=en_US
Some FPGAs do support clock gating, using dedicated clock gate resources with predictable timing and glitch-free behavior. More here from Xilinx (look for BUFGCE): https://www.xilinx.com/support/documentation/user_guides/ug572-ultrascale-clocking.pdf ; other FPGAs will be similar.
Note that with BUFGCE, you’re still obliged to meet clock setup/hold from clock enable to clock rising edge. Still, that’s an easier constraint to meet than making an asynchronous clock gate out of fabric logic. If you’re modeling an ASIC you have to account for the difference between BUFGCE and whatever standard cell you’re using in the ASIC.
Finally, you asked how clock gating is actually done. TL;DR: the enable is latched to prevent disturbing the clock pulse with a glitch. More here: https://anysilicon.com/the-ultimate-guide-to-clock-gating/
|
H: What does charge on an electron mean?
As I have read , charge is electrons , protons or neutrons.
Now , according to definition of current , current is the rate of flow of charge I.e flow of electrons. Then , I=Q/t . Unit of Q is coulomb & Q is charge. Right , so Coulomb is actually like the No of electrons flowing through the conductor. Also , $$Q=n*e$$
But there is another point in my book that charge on an electron = $$-1.6*10^{-19}$$. Now, this is confusing. Charge is electron , what does charge on an electron mean ? If charge is a like a numerical value , change on an electron means like there is some kind of energy the electron has. Is it that charge has two definitions?
Also , if charge is a number. What does it mean by negative symbol for charge ?
AI: As I have read , charge is electrons , protons or neutrons.
Electrons and protons are both particles that have a charge, and neutrons are uncharged.
Now , according to definition of current , current is the rate of flow of charge
Yes
I.e flow of electrons.
A flow of electrons will give you a flow of charge, or a current. A flow or movement of other charged entities (protons, ions, charged objects like the sectors of a Wimshurst machine) will also give you a current
Then , I=Q/t .
When measured in consistent units, yes. The I is in amperes, Q in Coulombs, t in seconds.
Unit of Q is coulomb & Q is charge.
A Coulomb is one unit of charge. There are other units of charge, like the electron charge, which is the charge on one electron, or the Faraday, which is the charge on a mole (about 6x10^23) of electrons. One Coulomb is the charge on about 6*10^18 electrons.
Right , so Coulomb is actually like the No of electrons flowing through the conductor.
No, you're getting your units mixed up here. The number of electrons flowing through a conductor is a number, a pure number, a counting number, without units. A Coulomb has units of charge.
Also , \$Q=n∗e\$
True if all quantities are in consistent units. For instance, if Q is in units of electron charge, n is a number, and e is one electron charge. Or if both Q and the electronic charge are measured in Coulombs.
But there is another point in my book that charge on an electron = −1.6∗10−19
I hope it doesn't say just that. It should say
charge on an electron = −1.6∗10−19 Coulombs
Now, this is confusing. Charge is electron , what does charge on an electron mean ? If charge is a like a numerical value , change on an electron means like there is some kind of energy the electron has. Is it that charge has two definitions?
An electron has charge, just like an electron has mass. They are both properties of an electron. A large number of electrons will have this much mass, or that much charge.
Charge is not like a numerical value. But the amount of charge can be measured as a number times a unit of charge.
Energy is a whole different ball-game, best to stay away from it in this level of discussion. Both charge and rest mass are intrinsic properties of an electron, it has them whatever the circumstances. When an electron is described as having energy, that's in some context of external fields or inertial frame, and is not an intrinsic property of the electron.
Also , if charge is a number. What does it mean by negative symbol for charge ?
It's just indicating that it's the opposite polarity to 'conventional positive charge', ie opposite to the charge on a proton.
Way back at the time of the Greeks, it was known that there were two polarities of charge, from friction charging of glass and amber. When modern experiments started, with Benjamin Franklin amongst many others, a convention was decided upon, for which to call positive and negative. When much later experiments revealed the structure of the atom, it was found that the convention labelled the proton charge as positive, and the electron as negative.
Some students think, or even insist, that this convention is wrong, as the first time they meet the concept of an electron, it's drifting along metal wires as an electric current. This insistence is just 'metallic conductor Chauvinism'. If you're in a profession where you really need to think about charge carriers, like semiconductors, plasma physics or electrolysis, then you might be dealing with holes or ions as well, and you need to treat all the charge carriers with their correct signs, and there's no practical preference for one or the other.
A convention is just that, a convention. If you stick to 'conventional' current, conventional charge, then it all works out.
|
H: I'm looking for a component / circuit that distinguishes between eight states
I use PSpice for TI 2020, so I don't have a pre-assembled ring counter in the library. I'm looking for a way of setting a specific output to High from eight states that come from 3 flip-flops (up counter). I made a few pictures for illustration. I just don't know how to build a composition of logic gates. I don't expect you to completely solve this; I just need a thought.
Or spoken as a programmer (pseudocode C#):
private byte find_out()
{
if (a8Z[0] && !a8Z[1] && !a8Z[2])
{
return 1;
}
else if (!a8Z[0] && a8Z[1] && !a8Z[2])
{
return 2;
}
else if (a8Z[0] && a8Z[1] && !a8Z[2])
{
return 3;
}
else if (!a8Z[0] && !a8Z[1] && a8Z[2])
{
return 4;
}
else if (a8Z[0] && !a8Z[1] && a8Z[2])
{
return 5;
}
else if (!a8Z[0] && a8Z[1] && a8Z[2])
{
return 6;
}
else if (a8Z[0] && a8Z[1] && a8Z[2])
{
return 7;
}
else
{
return 255;
} // If I got all of the above lines of code correctly, this should never occur.
}
AI: What you are asking about is called a "decoder" and in this case a "3-to-8" decoder.
The decoder takes a 3-bit binary input and asserts exactly one of its eight output lines, the one corresponding to the binary value of the input.
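As a thought-starter rather than a complete solution (and assuming a8Z[0] is the least significant counter bit, as in your pseudocode): each output is just a 3-input AND of the flip-flop outputs, with the bits that must be 0 for that state inverted first. For example:
$$Y_1 = \text{a8Z}[0]\cdot\overline{\text{a8Z}[1]}\cdot\overline{\text{a8Z}[2]}$$
$$Y_2 = \overline{\text{a8Z}[0]}\cdot \text{a8Z}[1]\cdot\overline{\text{a8Z}[2]}$$
$$Y_7 = \text{a8Z}[0]\cdot \text{a8Z}[1]\cdot \text{a8Z}[2]$$
Eight such AND gates, sharing three inverters, give the full 3-to-8 decoder; the 74HC138 is a standard part that implements this function (with active-low outputs) if you want a reference for the gate arrangement.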
|
H: Using USB-C charger D+ and D- pins for comms with an STM32 chip
I'm looking at the MAX77751 single cell Li-Ion and LiPo charger chip that features both USB-C and USB 2.1 detection. The battery would power a design that uses an STM32L433CCUx chip which has another USB 2.1 connector for comms. Can I use the D+ and D- pins from the USB-C receptacle for communication with the STM32 chip as well? If yes, would I need to disconnect the D+ and D- pins from the MAX77751?
AI: Short answer, no.
This chip uses the D+/D- pins specifically for the USB-C power negotiation in order to get a higher voltage. This means the chip actively communicates over these lines, and therefore you can't connect an STM32 to them.
You would need a chip that acts like a hub or a bypass to be able to communicate upstream.
Alternatively, you could use a chip like the MAX77961 (or similar) and then do the power negotiation with the STM32 (if it supports it).
|
H: Will ferrite cores reduce the surge on the 5V line?
I think my ATMEGA AVR MCU is malfunctioning due to 5 VDC line surge.
The MCU is installed in a factory and the power line is very noisy.
I captured the 5VDC line. There were high frequency (10 MHz) 13 V peak ringing waves.
If I install ferrite core for each line and 2 wires together, will it be effective?
What type of capacitors are effective?
AI: You need to filter out that ringing noise by some form of decoupling.
Solution 1
I suggest you to get:
A set of ferrite cores
A set of high capacitance electrolytic capacitors
A set of ceramic capacitors
Design or copy a filter for the power supply line that shows the 13 V ringing.
Use ceramic capacitors to filter the VDD of your microcontroller.
Solution 2
Buy an isolated DC/DC converter and put it between the factory power supply and your board.
https://www.digikey.com/en/product-highlight/t/traco-power/ten-wirh-railway-industrial-dc-dc-converters
https://www.digikey.com/en/supplier-centers/traco-power
https://www.google.com/search?q=digikey+isolated+DC+dc+converter&client=firefox-b-d&source=lnms&tbm=isch&sa=X&ved=2ahUKEwiF0dGQ-Jz0AhWdhP0HHdaIAAIQ_AUoAnoECAEQBA&biw=1920&bih=899&dpr=1
|
H: How to choose a common mode choke for ethernet?
I'm currently working on a design which contains gigabit ethernet. Due to some weird size constraints, I can't use a magjack or one of those transformer ICs, so I'm using discrete magnetics. I've managed to pick a transformer (actually three... would be nice if they stayed in stock long enough to lay out the board), but am not exactly sure how to choose a common mode choke.
In this marketing copy from Bourns, the SRF2012A-801Y is mentioned as a CMC for gigabit ethernet. But I don't know why (or even if!) this is a good choice. Is it the impedance (seems kinda high to me)? DC resistance/max current? Will any old common mode choke work for gigabit ethernet, or is there something to it? What about wire wound vs multilayer chip ones like this?
AI: If you are plannig to go a test laboratory with your finished product, I strongly suggest you to buy an off-the-shelf common mode filter suited for Gigabit ethernet.
The common mode you mentioned, Panasonic EXC14CG/EXC14CE, might not work because the datasheet doesn't say anything about Gigabit ethernet. Yet, it's too little for filtering ethernet currents.
Stick with what datasheets state. If that particular feature is not expressively mentioned in the datasheet, than that product is not meant for what you got in mind.
|
H: Can I control a solenoid valve below its nominal voltage?
I need to control the following 2 ways direct solenoid valve ("Series 252 D01") :
https://www.emerson.com/documents/automation/catalog-series-252-dental-manifolds-asco-en-6779448.pdf
It is rated at 24 VDC. If I apply this voltage, the valves opens (it is normally closed) and it consumes approx. 170 mA (meaning its power is 4 W).
But the valve also opens up at approx. 15 V (it consumes 100 mA -> 1.5 W). Once it is opened, I can go down to 3 V (it consumes 0.18 A -> 0.5 W), and it closes at 2 V.
Because the solenoid heats up at 24 VDC (approx. 50 °C after 15 min), I was wondering if I could control it as follows : I open it at 24 VDC and maintain its opening by reducing the voltage to 3 V. I can achieve this behavior with a PWM and a transistor.
Do you think it is a good or a bad idea? What could go wrong? Is it something which is usually done or not at all?
If not, is it possible to reduce the heat while maintaining the proper functioning of the valve?
Thanks.
AI: You are seeing hysteresis on the operation of the valve and that is common with magnetic actuators.
Your approach is OK but be aware of a few points:
You might need to be confident that a random spare from the same supplier will work over the same operating range. You might find that a replacement was not as sensitive and either wouldn't pick up or wouldn't hold up.
Power dissipation will be \$ \frac {V^2} R \$ so reducing the voltage to 75% will result in a power reduction to 56%. There's no need to go as low as 3 V unless energy consumption is an issue.
I didn't check the datasheet but if it's a regular pilot-operated valve then the solenoid opens the pilot air which moves the spool under air-pressure rather than direct magnet operation. You might find that the valve behaves differently with variations in pressure if the solenoid is not able to open the pilot fully.
Related to the previous point, you might find that the valve switching response is different at lower voltage.
Be sure to put a snubber diode around the coil.
|
H: How to design an amplitude modulator with common-emitter amplifier circuit?
I am facing a problem on how to design an amplitude modulator using a common-emitter amplifier circuit.
I am asked to use a carrier frequency of 50 MHz and an input signal of 1 kHz to 14 kHz. In order to solve this problem, I designed a common-emitter circuit, as in my other question. Then I added a modulating signal at the biasing circuit, yet I can't obtain a good amplitude-modulated signal.
I have searched various sources but the information is not clear and does not provide any calculations. I need to know how the values of the components are calculated and work well in simulation. I have found a sample circuit without calculations for reference.
Edit: This is my current configuration right now.
AI: I need to know how the values of the components are calculated and worked well in simulation.
Let's work backwards. The voltage that appears on the collector of Q1 should look something like this. (ignore the time-scale, this is taken from another Q/A).
This signal consists of two components. An AM signal plus an amplified version of the modulating signal.
If your collector voltage looks like that, you can get the AM signal by filtering away, or attenuating, the component which is the amplified version of the modulating signal. This is (imperfectly) accomplished by the RC high-pass filter consisting of RL and the output capacitor, by selecting RL and Cout values such that the cutoff frequency of the RC high-pass filter is around the carrier frequency.
Use the formula:
$$f = \frac{1}{2\pi RC}$$
where f is the cutoff frequency.
If you pick the cutoff frequency to be equal to the carrier frequency, then you can calculate what Rload * Cout needs to be.
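As a rough illustration only (component values invented here, not taken from your schematic): with the 50 MHz carrier chosen as the cutoff,
$$R_{L}C_{out} = \frac{1}{2\pi f_c} = \frac{1}{2\pi\cdot 50\times 10^{6}\ \text{Hz}} \approx 3.2\ \text{ns}$$
so, for example, RL = 1 kΩ would pair with Cout ≈ 3.2 pF.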
To get the collector voltage to look like that in the diagram, one needs
The gain of the transistor to be set properly
The biasing of the transistor to be set properly
The emitter voltage to reflect the modulating signal, but much less so the carrier signal.
The last point is (imperfectly) accomplished by the low pass filter formed by the combination of Re and Ce.
Choose values for Re and Ce so that the cutoff frequency is the same as the modulating frequency.
Again, use the formula
$$f = \frac{1}{2\pi RC}$$
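Again purely as an illustration (assuming you place the corner at the top of your 1 kHz to 14 kHz modulating band, i.e. about 14 kHz):
$$R_{e}C_{e} = \frac{1}{2\pi\cdot 14\times 10^{3}\ \text{Hz}} \approx 11\ \mu\text{s}$$
so Re = 1 kΩ would pair with Ce ≈ 11 nF, for example.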
The emitter voltage should look something like this:
If too much carrier frequency signal appears on the emitter, the voltage on the collector may look something like this:
\$\uparrow\$ NOT WHAT YOU WANT AT COLLECTOR! \$\uparrow\$
Again, working backward, one needs to fix the gain so that the transistor neither goes into saturation nor cutoff. Either of these situations will greatly distort your AM signal. Run a simulation of the circuit with the collector being monitored. You need to adjust the gain and bias of the circuit so that the voltage swing is something like from 0.7V to Vcc-0.3V. If you go too close to ground the transistor is in saturation, and if you go too close to Vcc it is in cutoff. You can modify the gain in a few different ways. Increasing Re will decrease the gain. Increasing Rc will increase the gain.
As you adjust the gain, you may discover that bias also needs to be adjusted. That is, if the transistor goes near Vcc-0.3V, but doesn't go anywhere near 0.7V. Or alternatively, if the transistor goes near 0.7V, but doesn't go anywhere near Vcc-0.3V. You can adjust the bias by changing the values of R1 or R2.
I can’t understand why the Em is greater than Ec
Both Em (Vmodulation) and Ec (Vcarrier) get amplified. However, from the image above showing the collector voltages, it should be clear that the smallest amplitude of the AM envelope occurs when the top of the amplified carrier gets close to Vcc. The smaller the carrier frequency peak-to-peak voltage at this point, the closer one can get to 100% AM modulation without clipping. Hence the reason for keeping the carrier voltage small. A larger Vcarrier means that the % of AM modulation must be less.
|
H: Do you need a current limiting resistor on an enable pin?
I am using MCP16312-E/MS buck regulator and need the component to always be on. I am therefore connecting the enable pin to the Vin supply. Do you need a current limiting resistor? Whilst the resistor I have in series with the enable pin will limit the current, it will also 'take' all of the supply voltage so the voltage at the enable pin itself will be 0V? Is this correct? If so, what should be done instead?
AI: I absolutely agree with @DirkBruere - you should read the datasheet.
I read the datasheet and it says nothing
Let's see if that is true.
First, the very first picture in it has EN pin connected to Vin. This tells you that for "always ON" applications you don't need the resistor at all.
Second, in the Absolute Maximum Ratings you can see that maximum input voltage is 32V and EN pin can be 0.3V higher, i.e. 32.3V maximum. This confirms that you can connect EN pin directly to input voltage.
Third, in DC Characteristics you can see that maximum EN Input Leakage Current is 1 microampere at 5V. It won't be too much higher than that at higher input voltages.
1 uA current through 1 MOhm resistor allows you to calculate voltage drop on resistor. In this case 1uA * 1MOhm = 1V. So, if your input voltage is 5V then the voltage on EN pin would be 5V - 1V = 4V.
Finally, in the same table you look for the EN input voltage characteristics, and find that the EN Input Logic High minimum is 1.85V, so anything above it will switch the device ON. The 4V calculated above is certainly sufficient. However, depending on the input capacitance, the start-up time can be quite high. For this reason 1k-10k resistors are more typical for this pin, just as @DirkBruere suggested.
So, from the datasheet it is clear that 1) you don't need resistor at all, but 2) if you add it then 1 MOhm resistor will work, although smaller value is more appropriate.
|
H: Why will clock signals have different phases after a frequency divider?
I have a question about data transport between two chips with series format.
I have two chips. Chip 1 will send 8-bit data to chip 2. The picture shows a solution. I will provide a 100 MHz clock from a frequency divider for the digital component and an 800 MHz clock for the shift register.
One of the problems for this solution is that I will have 2 different phases.
Why will there be two different phases if my div1 and div2 are the same?
AI: Dividers are essentially counters, incrementing on each rising or falling edge (not both). Even a divide-by-two counts from 0 to 1 and rolls over. Unless you have a mechanism to ensure that each one starts counting on the exact same edge (or when others are rolling over to zero), they may have different counts. You can use a reset line, but you'd have to ensure that the reset propagates identically to each counter (including IC operational differences) and takes effect during the same cycle (don't want async reset to occur too near a counting edge, or sync reset to violate setup and hold times).
If you don't synchronize them reliably, they have a good chance of coming up with different counts when they start. Once started, transient noise on the clock could set them out of sync without possibility for correction unless enforced by a separate system.
|
H: Which terminals are collector, emitter, and gate on this IGBT?
I am new to IGBTs. I want to experiment with them. I got this older IGBT but unfortunately have no data on it (and all the data I find on the web looks like it was faked by resellers).
How do I know which is gate, collector, and emitter?
AI: Going by the schematic on the side, this is not an IGBT. It's two diodes. One source I can find corroborates this, calling it a "dual rectifier module", but it's pretty hard to find information on this part in general, for some reason.
So there is no collector, gate, or emitter. Terminal 1 is the common cathode of both diodes, and terminals 2 and 3 are their anodes.
|
H: Verilog negation operator on inout-type signals
inout b,c;
assign c = ~b;
In the code above, doing such an assign results in an XXX unknown situation, which I presume is due to conflicting drivers?
However, assign c = b; does not result in an XXX unknown situation. Why?
I tried to use not a1(c, b);, but I think the simple not primitive gate does not support the inout type.
Any suggestions or comments ?
vcs.log:44:Error-[IGOE] Illegal gate output expression
vcs.log-45-TB.v, 49
vcs.log-46- The following expression is illegally connected to gate.
vcs.log-47- Expression: c
vcs.log-48- Source info: not a1(c, b);
vcs.log-49- The gate connection must be a scalar net or bit-select of vector net.
AI: The generation of Xs has nothing to do with inout ports—it has to do with which expressions can propagate the Z state. Boolean and arithmetic expressions as well as gate-level primitives treat the Z state the same as X. So the boolean negation of Z is always X.
A direct assignment and the conditional operator can propagate Z states. MOS primitives can too.
So the following would work in Verilog:
assign c = (b===3'bz) ? 3'bz : ~b; // need to know the size of b
In SystemVerilog:
assign c = (b==='z) ? 'z : ~b;
|
H: What happens if you connect a galvanometer to an AC source?
What happens if you connect a galvanometer to an AC source?
I have seen many answers online along the lines of there not being any deflection, as the net average current / net average magnetic field is zero (it depends on the site, but I won't cite them as I don't wholeheartedly trust them).
I argue that since there is indeed an instantaneous current we must have an instantaneous deflection, right? I also want to argue two more things on what we may observe if we were to do a live demonstration.
Firstly, the galvanometer coil must get damaged after a while due to the eddy currents (I am currently considering a moving coil galvanometer; I also heard there's a metallic core in the galvanometer to make the pointer come to rest immediately). So I argue that the MCG must break after a while.
Secondly, assuming the coil doesn't break, we can easily imagine the maximum deflection being produced when the maximum current is flowing. That's what I am assuming in my mind. Since my AC supply is 220 V, 50 Hz, I assume the pointer must swing from one end to the other at least 50 times in a single second, which would be impossible for a human eye to see, thus rendering the whole act of finding the deflection pointless. Maybe the galvanometer won't even move due to the inertia. Would I need to have a spare moving coil galvanometer and an AC source to try that out?
AI: Maybe the galvanometer won't even move due to the inertia.
At 50 Hz, that will certainly be the case. If the frequency is low enough, the needle would follow the change in current. At some frequency, the needle will move, but it or some part of the mechanism will break due to the stress of repeatedly changing direction. A few days ago, I observed one cycle at a time a few times without apparent damage. I dropped a magnet through a coil attached to a galvanometer.
Here are a few frames of a movie of a BLDC motor, turned by hand, generating alternating current through a galvanometer.
|
H: How I can desolder this WSON8 chip?
I have the following chip that I want to desolder:
I tried with my 60W soldering iron and a pair of tweezers and desoldering braid, but it won't budge; also, my soldering iron won't heat to more than 400°C.
Do I need more heat or a different technique?
AI: You need a board heater of some sort. The only solder you're able to get to with an iron and braid is the inspection fillet on the side of the terminals. You're getting very little, if any, of the actual solder between the terminals and the pad and none of the exposed thermal pad in the center of the package. (See page 28 of this datasheet for a typical WSON-8 device.) You'll need to either bake the whole board, blast a hot-air iron on the opposite side of the chip, or put it on a hot plate to melt all the solder simultaneously. You probably destroyed the chip with the 400C iron though so it may be a moot point.
|
H: Damping Ratio Implications for an Increasing Resistance In an RLC Circuit
I have managed to derive the 2nd order ODE for a simple RLC circuit (this circuit is part of a booster dc-dc converter). I derived an expression for the damping ratio of the circuit and fact checked this with derivations in literature. This is shown below:
Now from the expression above the damping ratio is inversely proportional to the load resistance given L and C are kept constant (which they are). However, this seems counter-intuitive to me. I would have thought for certain that as resistance is increased a higher damping ratio is expected. Likewise, from Ohm's law if resistance increases (given the initial and final transient voltage is kept constant) you would have a lower current which should result in a larger damping ratio.
Please could someone explain why physically increasing the resistance decreases the damping ratio?
Thank you.
AI: There are at least two basic ways of connecting an RLC circuit:
Series
Parallel
There are some weird combinations of series-parallel and parallel-series, but let's stick to these two types.
Series
simulate this circuit – Schematic created using CircuitLab
This arrangement has a Quality factor of: \$ Q = \frac{1}{R}\cdot\sqrt{\frac{L}{C}} \$
and knowing that the damping factor is \$ \zeta = \frac{1}{2Q} \$ we therefore have a damping expression of:
\$\zeta = \frac{R}{2}\cdot \sqrt{\frac{C}{L}} \$
With increase resistance, the damping factor increases.
Parallel
simulate this circuit
This arrangement has a Quality factor of: \$ Q = R\cdot\sqrt{\frac{C}{L}} \$
and thus \$\zeta = \frac{1}{2R}\cdot \sqrt{\frac{L}{C}} \$
With decrease resistance, the damping factor increases.
If you think about why this is the case, a low damping (ie high Q) implies that there is a lot of energy flowing between the two energy storage devices. For the series case, zero resistance would imply infinite current could flow, likewise infinite resistance would imply zero current would flow THUS: the higher the resistance the higher the damping.
Now consider the parallel case. The damping resistance is across the network. If this resistance was zero it would be shorting out the energy storage devices and thus no current would flow between them to resonate. Likewise if this resistance was infinite this current would cycle between the two energy storage devices. THUS: the lower the resistance, the higher the damping
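To put some illustrative numbers on this (values picked arbitrarily for the example): with L = 1 mH, C = 1 µF and R = 10 Ω,
$$\zeta_{series} = \frac{R}{2}\sqrt{\frac{C}{L}} \approx 0.16 \qquad\qquad \zeta_{parallel} = \frac{1}{2R}\sqrt{\frac{L}{C}} \approx 1.6$$
Raising R to 100 Ω swaps the picture: the series circuit becomes heavily damped (\$\zeta\approx1.6\$) while the parallel circuit now rings (\$\zeta\approx0.16\$). Since the load resistance in your boost converter appears across the LC network (to a first approximation), it is the parallel behaviour that applies, which is why a larger load resistance gives a smaller damping ratio.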
|
H: How to calculate auxilary capacitor for flyback
Here is my circuit:
simulate this circuit – Schematic created using CircuitLab
The circuit tries to restart but with no result.
Vcc output in yellow.
I can't find on the internet how to calculate Caux and Raux for starting correctly, even when I draw more current from the output.
SMPS: https://www.farnell.com/datasheets/2200510.pdf
PWM: https://www.ti.com/general/docs/suppproductinfo.tsp?distId=26&gotoUrl=http%3A%2F%2Fwww.ti.com%2Flit%2Fgpn%2Fuc3845
AI: Your Caux might be too small, especially at light loads where Ton is short and the PWM frequency is low.
Try 4.7 to 10 uF (50 V rating).
Take a look here. NXP uses 4.7 uF for example:
https://www.nxp.com/docs/en/user-guide/UM10506.pdf
The 6.8 uH inductance helps filter the square wave signal popping out of the auxiliary winding. I don't use it in my designs though.
I've never in my life seen an Raux in the auxiliary winding of a flyback power supply.
Raux might be the source of the problem because it quickly discharges Caux.
|
H: Can a synchronous buck converter be operated in reverse?
Consider the following buck converter circuit:
When Q1 is on and Q2 is off, current flows through the inductor and into the capacitor and the load, and the energy stored in the inductor increases. When the switches change states, the inductor and the capacitor use their stored energy to supply the load; Q2 is on and completes the current loop.
Now, if we connect the supply to the "output" side and the load to the "input" and reverse the order of the two steps, we see that the inductor charges up when Q2 is on, then discharges into the load when Q1 is on. This is equivalent to a boost converter circuit.
My question is: If we connect two devices (two batteries, a battery and a motor/generator, etc.) to each side of the converter, are we able to change the direction of current flow?
AI: Yes, and boy, didn't it come as a surprise to me the first time I did it! I saw what was happening, said "T'oh!", and had to spin the board.
If you put a capacitor across \$V_{in}\$ and look at the power flow from right to left, you'll see that you have a boost converter.
To a first order (i.e., ignoring components going up in puffs of smoke, possibly with sound effects), the inductor ensures that the average voltage at \$v_{SW}\$ is equal to \$V_o\$. The FETs -- if they're switched in the "typical" way where the top FET is on whenever the bottom FET is off, and vice versa -- ensure that the average voltage at \$v_{SW}\$ is equal to the duty cycle times \$V_{in}\$.
This is a well-known effect when you're driving a motor -- the motor can back-drive the \$V_{in}\$ rail, and unless the supply behind it can absorb the power you need to have a mechanism for dumping the current into something safe, like a great big resistor (AKA "braking resistor", which, when you buy switching motor drives, you are often expected to supply yourself).
|
H: Electronics and electrical projects certification
What certification is required to sell my engineering projects online in India? if I build my project as product to sell online what certification is required for this as well?
AI: If you're selling a project you're fine. Unless people's lives are involved, there aren't regulations or laws that your project must fulfil.
If you're selling an electronic product, things are different. Your product has to comply with safety and EMC regulations.
|
H: relays 5V 10A and 12V 30V stop working after a few days of use
We built a system for heating 3 bioelectrochemical reactors, but after a certain time of operation the system starts to heat up. We identified that when this occurs, the relay appears to work (its light is on, indicating that it is activated) while the voltage on the heating strips (drawing 3.3 A in total) is 0 V.
The load of the system is AC. The relay module I was using was a 4-way 5 V 10 A type (10 A/250 VAC and 10 A/30 VDC).
When we change the relay, the problem is resolved for a while, until the relay heats up and the problem occurs again (the malfunction starts with only one of the relay channels, and after a few days the other channels stop working too).
Considering that the problem was the relay, we decided to replace it with a more robust one: a 12 V 30 A type. It worked for 20 days and then 1 of the 3 relays stopped working (but still with the light on) and the heating stopped. I have since exchanged that relay for one of the same type and it is working for the time being.
We use a dimmer to control the voltage and avoid an overheating problem if it occurs, but in such a way that the 3 reactors keep heating (do you think this could cause some problem for the relay)?
What do you think might be causing this relay problem? And how could we prevent the heating/operation from being interrupted?
thanks
JC
AI: Mechanical relays have a life in the 50,000 to 100,000 operations range at full current. You can look up the particular datasheets for those in your relay boards.
If the relays are carrying close to full current and are switching every few seconds 24/7 they will not last long. For example, 100,000 operations at 3 seconds per cycle will wear the relay out in about 3 days.
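Spelling out that arithmetic (using the 3-second cycle assumed above):
$$\frac{100{,}000\ \text{cycles}\times 3\ \text{s/cycle}}{86{,}400\ \text{s/day}} \approx 3.5\ \text{days}$$
so even the optimistic end of the rated contact life is used up within a few days of continuous fast cycling.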
You can use the relays to switch more robust contactors, slow down the switching rate (for example, a couple of times a minute rather than every few seconds), or use solid-state relays. Each has its pluses and minuses. Solid-state relays produce quite a bit of heat (roughly 1 W per ampere) and tend to fail 'on' in case of voltage or current surges. Mechanical contactors are robust but noisy (electrically and acoustically) and are inductive, requiring some care driving them (e.g. snubbers or small suitable solid-state relays). Slowing down the cycle time can cause excessive swings in temperature.
|
H: Current source not working in LTspice XVII
I am trying to implement a circuit (CMOS implementation of DVTC) with the help of LTspice (I'm very new to this software.)
The built in current source component is not working in LTspice. Following is the circuit diagram and the error that I get while running the application:
Circuit
Error
I was facing the same error with the NMOS and PMOS, but I resolved that using an external library and components.
I have updated my LTspice and also tried downloading it from scratch but still I'm getting the same error. Can anyone help me?
AI: Please rename all the components so that they don't have a single quotation mark in their name. The single quotation mark corresponds to a curly bracket and it confuses LTspice.
|
H: Voltage divider plus Zener protection on op-amp input
I am designing a circuit to scale an input voltage to the range of an analog to digital converter pin on a microcontroller, and also provide electrical protection for the pin.
I put together the following circuit which consists of a resistor voltage divider, a Zener diode, and a rail to rail op amp (single ended, powered by the microcontroller's digital supply voltage: 3.3V).
Will this circuit function as intended to both divide the voltage and provide over-voltage protection? I expect an input voltage range of 0V-6V but want to be prepared for voltages outside this range.
The idea was to divide the voltage in half before hitting the op-amp input, and to include a Zener diode for additional protection against both high voltage and negative voltage in a single component. I then realized that the voltage in this circuit might not be divided, but instead be shunted through the Zener immediately, leading to clipping of the input voltage rather than voltage division.
Which behavior will I get in this scenario, clipping or voltage division, and why? If I get clipping, what modifications would you recommend to get my intended behavior?
AI: You are right about the clipping behaviour of the zener diode, and about the dividing behaviour of the resistors, and your design will do what you require.
In fact, you can test this behaviour in the simulator, to be sure:
simulate this circuit – Schematic created using CircuitLab
As AnalogKid pointed out, the clipping won't be sharp, but that's of no concern as long as the output signal is linear within the region of interest to you, and doesn't ever reach a value that could damage the following stage (the opamp, in your case).
As you can see, the output potential is half of the input, is restricted to a maximum of about 3.6V, and is also "accidentally" clipped at -0.6V or so, when the zener diode is forward biased, thereby adding protection against negative input potentials.
If you really need to buffer this output with the opamp, then you also have the benefit of the opamp's own output being clipped to the supply rails, of +3.3V and 0V.
According to the MCP6002 datasheet, it will tolerate input voltages extending 1V beyond either supply rail, which means that with a 3.6V zener diode as shown, the opamp's inputs will never be exposed to damaging potentials outside that range of -1V to +4.3V.
From the datasheet you can find out the opamp's common-mode input voltage range, with supply potentials of 0V & 3.3V. It is surprisingly good, at -0.3V to +3.6V. Your clamped/divided signal could possibly deviate outside this zone of "guaranteed behaviour", if it descends below -0.3V. If that happens then you can't trust the opamp output to be what you expect, but it won't be damaged.
Your design is good to go.
There's an alternative though. Since you have a +3.3V reference, you can clamp your divided signal to it with regular or Schottky signal diodes:
simulate this circuit
The response is very similar to your original design, except it has no need for a zener diode, instead relying on regular diodes to limit excursions beyond the power supply rails. This will be sufficient to protect your opamp.
I remind you again that the opamp's output cannot possibly extend beyond its own supply potentials, and is guaranteed to stay between 0V and 3.3V, regardless of its inputs.
|
H: Supply voltage range for the OP07 op-amp
OP07 op-amp datasheet
When it says "Supply Voltage Range +-18V," can I supply it 0 to 36V if I want only single ended operation, or am I limited to 0 to 18V only?
Whatever your answer is, is it applicable to all op-amps?
AI: I have two circuits here:
simulate this circuit – Schematic created using CircuitLab
I guarantee you that the voltages at the inputs and output of both opamps are the same except for an 18 V DC offset. The left circuit will have no DC offset, the right circuit has a DC offset. So if the opamp in the left circuit outputs 0 V then the opamp in the right circuit outputs 18 V.
From the opamp's point of view, is there a difference?
Can the opamp "know" in which circuit it is used?
Yes or No?
The correct answer is of course No, the opamp cannot tell the difference. The voltages on its pins are the same so from the opamp's point of view, there can be no difference.
In the end, the difference between a +/- 18 V supply or a single 36 V supply is only the ground reference.
|
H: Is ping pong buffer the same thing as a double buffering?
In a ping-pong buffer we have two buffers: while one is being written by the upstream component, the other is being read by the downstream component. When writing into one buffer is completed, the downstream component switches to reading it while the other buffer is filled by the upstream component. This appears similar to how double buffering works.
Are ping-pong buffer and double buffer two terms for the same thing?
AI: No, they are not the same. Both are 'double buffering' in that they involve more than one buffer. However, ping-pong is a specific type, and the phrase 'double buffering' is usually (though not always) reserved for the not ping-pong type.
In ping-pong buffering, there are two buffers, either of which can be used for output. While one provides output, the other can be filled asynchronously. The buffers are then switched over when required. The essence of ping-pong buffering is that the output goes back and forth between the two buffers, just like a ping-pong ball goes back and forth, between the halves of the table.
Typically ping-pong buffering is used for video memory, especially when the memory is shared with the system. We have a large amount of information, and already have I/O addressing support for a simple and rapid switch of address spaces.
In double buffering, there is a first buffer that always receives the input, then a second buffer dedicated to driving the output, and a signal to transfer from one to the other.
Examples of double buffering are found in the HC595 shift register, and the MAX534 quad DAC - for the ability to receive and store the programmed word without changing the actual output until later. We have a small amount of information, and easy to connect memory, aka registers.
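To make the ping-pong pattern concrete, here is a minimal software sketch (plain Python with invented names, not tied to any particular device): the producer always fills one buffer while the consumer reads the other, and the two swap roles at each switch point.

BUF_SIZE = 4

class PingPongBuffer:
    def __init__(self):
        # two fixed buffers that alternate between "being filled" and "being read"
        self.buffers = [[0] * BUF_SIZE, [0] * BUF_SIZE]
        self.write_index = 0  # index of the buffer currently being filled

    def fill(self, samples):
        # the producer writes a complete block into the current write buffer
        self.buffers[self.write_index][:] = samples

    def swap(self):
        # switch roles and hand the freshly filled buffer to the consumer
        ready = self.buffers[self.write_index]
        self.write_index ^= 1  # ping <-> pong
        return ready

pp = PingPongBuffer()
pp.fill([1, 2, 3, 4])
print(pp.swap())  # prints [1, 2, 3, 4]; the other buffer is now the write target

A double-buffered part such as the HC595, by contrast, always shifts data into the same input register and copies it to the same output latch on a strobe; nothing alternates.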
|
H: Can a screw hole in between PCB antenna reduce efficiency significantly?
I am attaching a photo of what I have done.
This hole shouldn't be there, but how much is it going to affect the performance of the antenna?
AI: If the hole is empty and is not metalized, then it won't affect the performance of your antenna.
If the hole is meant for mechanical fixing, use plastic screws.
Metal screws might de-tune your antenna.
A small metal screw won't affect much the overall performance of your RF system.
What de-tunes antennas are walls or big objects.
If your product is wall mounted then the wall will certainly de-tune your antenna a certain amount.
|
H: Does having battery cells in parallel increase autonomy? (My load is a constant power load - DC/DC)
I have 4 AA alkaline cells and I'm trying to figure out which is the best cell configuration to achieve maximum autonomy with a constant power load.
I understand that if your load is a constant current device the best approach is the parallel configuration. If you can work in the low voltage region (1.5 V down to 0.8 V for alkaline) the capacity of all the cells adds up and you get more autonomy.
Screenshot (Energizer E91 AA alkaline datasheet - single cell):
But if my load is a DC/DC converter (a boost in this case), does having a series or parallel configuration increase my autonomy in some way? I'm thinking that as the cell voltage decreases the DC/DC demands more current, so the overall autonomy would be the same as in the series configuration.
Example:
I know that alkaline cell capacity also depends on how much current you demand, but my question is more related to the parallel vs. series configuration with any kind of cells (alkaline, NiMH, ...) using a DC/DC converter as the load.
Edit: typo
AI: Paralleling them will increase the capacity arithmetically, so if you put two 2,000 mAh batteries in parallel you get 4,000 mAh.
Putting cells in parallel increases the battery ampere-hour rating, putting them in series increases the available electromotive force.
You need to make sure that your battery will supply the voltage required for the time you want, beyond that you might as well just parallel them up if you can. Just try to use identical models from the same batch, because if one string is at a higher voltage than the other it will try to charge it up.
|
H: fitting using unstable poles
I am trying to take into account the frequency dependent behavior of R and L in an LTspice simulation. So I fit their frequency behavior using Vector fitting technique and deduce the equivalent electrical circuit that reproduces the same behavior. Then, I run my transient simulation. My question is, will the fact that some of the poles used for the fitting are unstable (have a positive real part) impact in any way my time domain simulation?
AI: My question is, will the fact that some of the poles used for the fitting are unstable (have a positive real part) impact in any way my time domain simulation?
No, it won't impact your time domain simulation.
As long as LTSpice converges to a solution, that solution is pretty numerically accurate even though the system you are modeling contains unstable poles.
|
H: IMD measurements and power levels of inputs
If I do a two tone test and increase the power of only one tone by 1db, will the 3rd order harmonic be up by 3db only for the one I increased or for both or something else?
AI: If your distortion is cubic (which is the usual but not always correct assumption made about systems on which we do two tone IMD tests), then the distortion output will increase by the order of that frequency in the IMD frequency expression.
If you have two tones f1 and f2, and increase f1 amplitude by 1dB, then
3f1 increases by 3dB
2f1 +/- f2 increases by 2 dB
f1 +/- 2f2 increases by 1 dB
3f2 remains unchanged
This leads to the rather intuitive behaviour that raising one input tone also raises the 3rd order IMD tone on that side.
|
H: Can I control IRFZ44N Mosfet directly from 5V microcontroller?
I know that the IRFZ44N is not a logic-level MOSFET, but I need it to control only 12 V at 1.2 A (14.4 W). Can I control it by applying only 5 V to the gate from an ATmega microcontroller?
I checked the datasheet and I also tested it on a breadboard by simply applying 5 V from USB, and it seems to work fine; it remains cold all the time.
simulate this circuit – Schematic created using CircuitLab
I am new to electronics. Sorry for the probably stupid question.
I tried to find the answer on the internet, but people consistently do not recommend controlling a non-logic-level MOSFET directly from a microcontroller. Maybe they need to control high-current circuits, but in my case it is just a short LED strip.
AI: It's a borderline scenario. See this extract from the Vishay data-sheet that you linked: -
I say "borderline" because the graph above is typical i.e. it's an average scenario and won't cover the extremities of how maybe a hundred devices (or more) might behave.
With 5 volts between gate and source, the IRFZ44N will typically drop about 0.1 volts when handling a drain current of 1 or 2 amps. It might rise to (say) 300 mV in some extremes, and it might be as low as 30 mV in others.
If the MOSFET drops 300 mV whilst conducting 1.2 amps, the power dissipation would be 0.36 watts and, due to self-heating, the MOSFET's conduction resistance may rise a bit, causing the worst-case volt drop (300 mV at ambient temperature) to become more like 0.5 volts. Now it's dissipating 600 mW. However, if your external load is defining the current flow, then you don't need to worry about thermal runaway.
Can your circuit handle that? Is it too much? Is there too much heat and loss?
Only you can say.
Check also the differences between other supplier offerings and the Vishay data-sheet that you linked. Make sure you use the latest data-sheet when doing this.
|
H: 6 pin B50K potentiometer: DIY alternative?
I have a 6-pin B50K potentiometer used for the Bass/Treble control of a desktop speakers set.
This pot is broken, and the part is costly to get (price, time - mainly from China).
I don't need to tweak the bass/treble at the hardware level (never had to), so is there a way to bypass this with a basic resistor? Which one, please?
thanks
AI: The 6-pin pot is dual channel. This means you have two pots in one enclosure. The top row of 3 pins is the first pot and the bottom row of 3 pins is the second pot. You can choose to replace them with 4 regular 25K resistors, replacing each 50K variable element with two fixed 25K halves (its center value). This gives you the 'center' setting of the original setup.
|
H: Streams supported or not?
I'm reading documentation about this component: https://www.microchip.com/en-us/product/LAN7800 .
It is an Ethernet to USB 3.0 bridge. At one point the documentation says:
The USB functionality consists of five major parts. The USB PHY, UDC
(USB Device Controller), URX (USB Bulk Out Receiver), UTX (USB Bulk In
Transmitter), and CTL (USB Control Block).
The UDC is configured to support one configuration, one interface, one alternate setting, and
four endpoints. Streams are not supported in this device. The URX and
UTX implement the Bulk-Out and Bulk-In endpoints respectively. The CTL
manages Control and Interrupt endpoints.
I'm reading USB Complete by Jan Axelson, and I have seen this table:
It shows each of the four USB transfer types; Bulk, Interrupt and Isochronous transfers appear as "streams", and only Control appears as "message". If streams are not supported by the component, can it only be used with control transfers, or am I misunderstanding the datasheet?
Thank you so much.
AI: Control transfers are not suitable for data transmissions. They are meant for setting up a logic channel.
URX (USB Bulk Out Receiver), UTX (USB Bulk In Transmitter)
That is the answer: Microchip's bridge supports bulk transfer, that is, high speed asynchronous (*) data transfer.
(*) In this context asynchronous means that the USB host or the USB device can transfer data at any time.
Isochronous, which is the opposite of asynchronous, is meant, for example, for USB audio speakers that need data (digital voice packets) at a regular time interval negotiated when setting up a logic channel.
This component DOES NOT support video transmissions.
Read page 19 of the datasheet:
https://ww1.microchip.com/downloads/en/DeviceDoc/LAN7800-Data-Sheet-DS00001992G.pdf
|
H: Instrumentation amplifiers in DIY EMG sensors for myography
I've read a couple of articles about EMG sensors. I have seen two DIY projects so far:
Super simple muscle (EMG) sensor
An IR muscle contraction sensor
Why are instrumentation amplifiers used in these EMG sensors?
They used an INA12x - quite a good one. I thought that the main advantage of such amps is their ability to reject noise - but this can happen only if the signals on both inputs are in opposite phase.
For EMG tasks they connect the ground wire to the "bony" part of the body, and the other two wires somewhere on top of the muscle. The difference signal is at the output of the amplifier.
I can only think that this is for a better balance of the output signal.
AI: That's an Instrumentation Amp, not an op amp.
They don't reject "noise" -- they reject "common mode signals".
The amplification is of the difference between the two inputs, and thus they attenuate (greatly) signals that are the same on both electrodes (in phase, not out of phase)
|
H: Relation between charger output voltage and battery output voltage
I'm looking into making a simple solar battery charger.
Right now I am kind of a newbie in this field, so I am wondering what voltage the charger needs to output in order to charge a 48 V battery or a 60 V one.
AI: A charger will always need to output a slightly higher voltage than the battery to make current flow into the battery and charge it.
The highest voltage that a charger should output depends on the voltage that the battery will reach when fully charged. This voltage depends on the battery chemistry.
Learn more about batteries and how to charge them at Battery University
|
H: RP2040 Custom PCB BOOTSEL not working
I designed a PCB that uses the RP2040 microcontroller. I just received it and wanted to power it up and put my code onto it, but it doesn't seem to work fine. I am struggling to find out what could be the reason, since there are so many possibilities. Here's the schematic and a screenshot from KiCad with the PCB:
(I used a power plane on the top layer to get the power everywhere, I removed it here so it is clear how the traces were made. This is my first design, so any advices regarding the layout would also be greatly appreciated.)
It seems to be powered just fine, I verified it with a multimeter and components are getting power, but when I try to go into BOOTSEL mode (by connecting FLASH_BOOT in J4 to ground and restarting), it doesn't show up on my PC as a USB mass storage device. I have a special connector with USB- and USB+ pins exposed (J4), which I connect to my PC through a breakout board with an USB-A plug (like this one: https://www.pololu.com/product/2585). I think maybe this is the part where I got something wrong - I think the microcontroller is working fine, but the USB connection is not working properly.
Any help would be greatly appreciated.
AI: Just a few comments on your design, which may or may not be the root cause of your problem.
I'll refer to the official guide.
you've exiled the decoupling capacitors (C1-C11) to a corner of the board, which sort of defeats the purpose: each decoupling capacitor is supposed to serve one specific pin of the RP2040, maybe two adjacent pins if space is tight. They're not meant to graze all together in the shadow of a tree at the other end of the field ;-) This means the VCC trace between each capacitor and its corresponding pin should be direct (not taking any detours through vias or around another part) and short (2-3 millimetres max, ideally). Refer to section 2.1.2 and 2.1.3, and note the comments re: pin 44 and 45 on figure 5.
The "debug reset" section doesn't look right. How can the RESTART signal affect the RUN signal when that one is directly tied to VCC? It looks like R8 and C15 were meant to be some sort of debouncing filter, but the schematic is probably incorrect and these two parts (R and C) are also located at different ends of the board when they should be next to each other.
There is no impedance control on the USB data lines (see section 2.4.1). This is potentially problematic although it's hard to say whether it's fatal on this particular design. Cabling between J2 and the pololu breakout board could also be an issue if D+ and D- are not twisted together.
Crystal oscillator: is the part ref (TXC 7M1200044) correct? According to the datasheet, the crystal is between pins 1 and 3, not 1 and 2. KiCad has several generic crystal symbols depending on the pin configuration, you may have picked the wrong one. That would definitely be fatal ;-)
|
H: What does "1/99 tap" mean in a block diagram?
I am still trying to understand the modulator in an optical free space transmitter.
I have found the following block diagram:
It is from "Direct-detection free-space laser transceiver test-bed". I am trying to understand each part of this diagram.
What does 1/99 tap (read) mean?
AI: From referenced article:
The CW master laser output is modulated using two LiNbO3 Mach-Zehnder modulators manufactured by EOSpace.
MZM are the same color and just before the 1/99 tap.
From: Segmented Mach–Zehnder Modulator With 32-nm CMOS Distributed Driver
Output light from the modulator is tapped 1% for power monitoring, while 99% passes through an optical amplifier (OA).
The 1/99 tap extracts 1% of the optical power for power monitoring allowing 99% to be output to the next stage.
|
H: NPN Transistor Switching - Negative Gain Calculated and Not Switching off Fully = Blown Transistor?
I have the attached NPN transistor being switched by an MCU. I've made some measurements and done my calculations as a result of observing that the NPN is not fully switching off (the relay stays on).
Calculations show a negative base current and gain. Does this imply that the transistor is blown?
I've also just noticed that when off, Vin < Vb which does not seem right.
AI: The 6.1V voltage across the device when off suggests that you have the transistor in backwards.
Most silicon epitaxial bipolar transistors have a reverse base to emitter breakdown voltage in the 5V-7V range and a very low reverse Hfe (less than 10).
If you have put the transistor in backwards, then when the transistor is supposed to be off, the base-emitter junction (which is now connected where the collector should be) will break down and limit the voltage to ~6 V-8 V (the collector-base junction is in series with the base-emitter junction).
When you try to make the transistor conduct it will have a very low gain. This doesn't account for the large voltage across the base to collector junction in that condition. I would expect it to be the usual 0.7V. It is possible that being used in reverse has caused damage to the transistor.
Schematic of relay driver with transistor reversed.
simulate this circuit – Schematic created using CircuitLab
Note: This is intentionally drawn with Q1's connections reversed!
|
H: Does a higher clock speed mean a higher risk that a circuit can have a race condition?
It is a very simple question but it made me think. I have been working on finite state machines. I came to the topic of finite state machines from combinational circuits. The book says that sequential circuits which use registers with the same clock input prevent race conditions since the design is synchronous.
My question: is there a possibility that the clock is so fast that delays in the circuit cause a race condition?
AI: A race condition is a situation where a signal is divided into two or more paths, and depending upon whether it traverses one path faster or slower than another path, the results could be different.
Given this definition, if a race condition exists, then it does not matter how fast or slowly signals change in the circuit, the race condition will still exist.
There are circuits that technically have race conditions in their logic that are, nevertheless, extremely reliable. This reliability is achieved by strict control over the timing delays in the circuit. If the manufacturing process is such that one path is guaranteed to "win", the race condition will not have adverse effects.
There are other circuits that technically have race conditions in their logic that are reliable if the signals supplied to these circuits obey given timing restraints, for example set-up and hold-times for data inputs for registers.
Excessive clock speeds will almost certainly make those circuits unreliable. However, technically, it is not that the excessive clock speeds "create a higher risk of a race condition", but that they create a higher risk that an existing race condition (i.e. a situation where different propagation speeds along different paths will result in different outcomes) will have unwanted outcomes.
|
H: Ethernet RJ45 M-F cable (socket <-> plug)
During renovation I've asked the electrician to use two long patch cables originally running from my router to create RJ45 sockets with the intent to later connect the router to the socket using a standard patch cable. So there is a patch cable with one plug cut of and replaced by a socket.
Now, the patch cable connection doesn't work: a device connected to the other side of the cable (either of the two) doesn't detect signal. I've checked one of the sockets which has replaced the original plug and it seems to be wired correctly (read: "as specified on the connector") as 568B.
Since I have problem wrapping my head around what pin is supposed to be connected where on the various cable connections (my confusion is such, that reading How to wire up two RJ45 sockets? didn't help), my questions are:
Does it matter whether the original patch cable was wired as 568B or 568A, straight or cross?
Should I be using a cross patch cable or even a less standard wiring to connect the router to the socket? Or is it likely that the connector is either wired incorrectly (missing contact somewhere) or that the cable has been damaged?
Can you please suggest how to proceed to locate the problem (part of the cable has plastered into the wall, so I would prefer not having to cut it out/replace it)?
Update: devices temporarily see a sign of a signal, yet don't pick it up for serious communication. Linux kernel log:
[x0.041818] e1000e eth0: NIC Link is Up 100 Mbps Full Duplex, Flow Control: Rx
[x0.041822] e1000e eth0: 10/100 speed: disabling TSO
[x0.884898] e1000e eth0: NIC Link is Down
Could this be a sign of bad contact in the new socket connector?
Result:
The problem was actually in the socket connector - it has a metal casing and the individual wires were sticking out slightly, touching the casing and thus short-circuiting some five of the lines together.
AI: It does not really matter whether your house is wired 568B or 568A, as long as the termination is the same on both ends. Electrically they are compatible.
Usually a router is connected to a device with a straight cable. So, connect it to the socket with a straight cable. Again, it does not matter which one. Same on the other side - connect the device to the socket using a straight cable.
If you have a multimeter, the easiest way to check electrical continuity of the wiring in the wall is to buy a 6" patch cable, cut it in half, strip the ends and insert the halves into both sockets. Then:
twist all wires on one side together and check continuity between one wire and all the others on the other side. They should all ring.
twist 4 pairs of wires (e.g. green to white-green) on one side and check continuity of each pair and between the pairs. Each pair should ring, between pairs there should be no connection.
You can also use a battery and a lamp instead of multimeter, if you don't have one.
If all wires in the wall check out properly then the problem must be in your patch cables. Luckily they are a dime a dozen nowadays.
The only other potential problem that I can think of is a bad wiring job, for example too-long untwisted ends or sockets not rated for the speed. This usually affects gigabit connections. 10 and 100 megabit should be fine.
A bad connection most certainly can be a reason for communication problems; I just don't know if what you have is caused by one. I do recall struggling a lot with an extension I made using two patch cables and a $1 F-F coupler from Monoprice. A $5 one did not fare any better. Only when I bought an (awfully overpriced, IMHO) $15 coupler did my troubles go away.
|
H: Can someone explain why this diode is reverse biased?
In this question, we are asked to draw a picture of this circuit with the switch closed and opened.
When the switch is closed, the diode is replaced by an open circuit. I understand that this would only happen if the output voltage is greater than the input voltage when the switch is closed, but how can we be sure that this is what occurs?
AI: If the transistor is on it shorts the inductor to ground, and so the diode is reverse biased because the capacitor has a positive voltage.
|
H: Wheatstone Bridge strain gauge thermal compensation design
I'm designing a full Wheatstone bridge for measuring the bending strain in a steel bar.
I have a circuit available to me that uses an ADS1232 to measure the output of the Wheatstone bridge.
When looking at load cells online I see that they have 6 cables (sense+ and sense-). From what I understand, these cables go to the excitation points of the Wheatstone bridge and allow the circuit to measure the voltages, which cannot be done with the excitation cables themselves as they would have a voltage drop due to the much larger current draw. The sense cables do not draw much current as they are going into an op-amp. Is this understanding correct?
I also don't understand why there are two resistors in series with the bridge. I've read online that they are for temperature compensation, but nothing goes into more detail than that.
If I were to attach strain gauges onto a steel beam, would I need those resistors? And where would they go, and with what value?
Would I need anything apart from the 4 strain gauges in a full bridge configuration with sense cables going to the REF_P and REF_N terminals of the ADS1232 IC?
AI: Strain gauge elements made of nickel have a positive temperature coefficient of about +0.6% per °C at room temperature. If you energize the bridge with a constant current, the "gain" of the bridge will have that coefficient as well since the percentage change in resistance is pretty much constant with temperature.
By energizing the bridge with a constant voltage and adding resistors that have a similar temperature coefficient, a reasonable bridge voltage (usually 2.5 to 10 V) can be used and the off-balance output will be more constant with temperature.
Because there are 4 wires, each with resistance, connecting the bridge to the measuring instrumentation, we would like that resistance to have minimal effect on the performance. By measuring the voltages (the difference is what matters) using the sense wires we can either compensate for the resistances or, alternatively, drive the "input" wires such that the voltages at the strain gauge are fixed values, say with a couple of op-amps.
Here is the typical application for the ADS1232 from the datasheet:
This circuit assumes the bridge can be powered directly from +5V and there is no compensation for the wire resistances (no sense and force wires). If you want to compensate, you would either have to add that circuitry or find some way to use the ADC to measure those voltages and compensate digitally (there are two unused inputs). You can compare the bridge resistance with the wiring resistance to see if that's necessary in your case.
The typical application shows few additional components, but if there is a cable attached you'd want to add protection and filtering, which would also help with the anti-aliasing filtering.
Commercial instrumentation strain gauge signal conditioners are often galvanically isolated, because of the low level of the signals and the desire for the installation to be trouble-free.
|