H: Will a regular neodymium magnet attract an electromagnet not currently being supplied with power?
I have a setup for one of my robotics projects where a neodymium magnet is placed across from an electromagnet. I power the magnet with DC through a L298N motor controller that is connected to an Arduino.
Will my neodymium magnet attract the electromagnet across from it when there is no power being supplied to the electromagnet? I know that the electromagnet has a somewhat magnetic core.
My goal in this setup is for the neodymium magnet to not be touching the electromagnet until power is supplied to the electromagnet; then they can attract. I do not want them to repel, which is why I am asking this question.
Basically I have my electromagnet attached with a 3D printed stick to a gear, and a magnet (at the moment, before I asked this question) positioned stationary across from that electromagnet (the angle is not straight, but close to straight). What I would like to do is, when the electromagnet is given power, have it attract towards the stationary magnet and turn the gear.
AI: A non-energised electromagnet is also known as a 'lump of soft iron'. A permanent magnet will attract a lump of soft iron.
If you energise the magnet slightly, it will increase or decrease the pull from the permanent magnet, depending on the direction of magnetisation. At some distances and energisation currents, you may get zero force.
If you want there to be zero force between permanent magnet and electromagnet, then it needs to be air-cored, not iron-cored.
|
H: Input impedance of emitter follower
I decided to calculate the input impedance of the following emitter follower circuit as a practice.
I used this well-known small signal model for the BJT:
For voltage gain, I obtained
$$A_\text{v}=\frac{v_\text{out}}{v_\text{in}}=\frac{\beta/r_\pi-1/R_\text{B}}{\beta/r_\pi+1/R_\text{L}}$$
which is somewhat smaller than unity, as expected. For input impedance, I found
$$r_\text{in}=\frac{v_\text{in}}{i_\text{in}}=\frac{v_\text{in}}{v_\text{in}/R_\text{B}+(v_\text{in}-v_\text{out})/r_\pi}=R_\text{B}\;||\; (R_\text{L}||R_\text{B})\left(\beta+\frac{r_\pi}{R_\text{L}}\right)$$
I'd like to know if I've calculated it correctly. The book The Art of Electronics obtains \$r_\text{in}=R_\text{B}\;||\;(\beta+1)R_\text{L}\$ by a more or less different method.
AI: Hmm, the calculations are quite simple.
simulate this circuit – Schematic created using CircuitLab
From the small-signal model we have:
I apply KVL around the loop:
$$v_{in} = i_b r_\pi + i_eR_L $$
Additionally, we know that \$I_E = I_B + I_C = I_B + \beta I_B = I_B(\beta + 1) \$
Therefore
$$v_{in} = i_b r_\pi + i_eR_L = i_b r_\pi + i_b(\beta +1)R_L = i_b(r_\pi +(\beta +1)R_L) $$
And the input current is \$i_{in} = i_{RB} + i_b = \frac{v_{in}}{R_B} + \frac{v_{in}}{r_\pi + (\beta +1)R_L} \$
So now we can find the input impedance:
$$r_{in} = \frac{v_{in}}{i_{in}} = R_B||(r_\pi +(\beta +1)R_L) $$
And if we treat the voltage across the RL resistor as output we will get:
$$v_o = i_e R_L = i_b(\beta +1)R_L$$
Therefore the voltage gain is
$$ \frac{v_o}{v_{in}} = \frac{i_b(\beta +1)R_L}{i_b(r_\pi +(\beta +1)R_L)} = \frac{(\beta +1)R_L}{r_\pi +(\beta +1)R_L}$$
Also, notice that if we substitute for \$r_\pi = (\beta+1)r_e\$ the gain expression becomes:
$$ \frac{v_o}{v_{in}} = \frac{(\beta +1)R_L}{(\beta+1)r_e +(\beta +1)R_L} = \frac{(\beta +1)R_L}{(\beta+1)(r_e+R_L)} = \frac{R_L}{r_e + R_L} $$
A voltage divider equation.
Where \$r_e = \frac{V_T}{I_E}\$
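As a quick numerical sanity check of these expressions, here is a small C++ sketch with assumed example values (β = 100, r_π = 2.5 kΩ, R_B = 100 kΩ, R_L = 1 kΩ; none of these come from the original circuit):
#include <cstdio>

int main() {
    // Assumed example values, not taken from the original schematic.
    double beta = 100.0, r_pi = 2500.0, RB = 100e3, RL = 1000.0;
    double r_base = r_pi + (beta + 1.0) * RL;          // resistance looking into the base
    double r_in = (RB * r_base) / (RB + r_base);       // RB || (r_pi + (beta + 1) RL)
    double Av = (beta + 1.0) * RL / r_base;            // voltage gain, slightly below 1
    printf("r_in = %.0f ohm, Av = %.3f\n", r_in, Av);  // roughly 51 kohm and 0.976
    return 0;
}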
|
H: How does I2C handle data loss?
Does I2C have any data loss prevention mechanism? If so, how does I2C recover?
AI: I2C has several mechanisms to prevent data loss:
multi-master collision detection and arbitration
ACK/NACK after each byte
clock stretching (slave slows down the master)
SMBus adds a couple of enhancements:
Packet Error Check (PEC), a CRC sent after each transfer
limits on clock stretch and maximum clock period
time-out to prevent bus lock-up
Taken together, these mechanisms can be built upon to make robust I2C communications in single-master or multi-master systems.
There is however no built-in mechanism in I2C or SMBus to guarantee delivery of data. That level of error recovery would be implemented at a higher level. It is up to the driver and application to monitor I2C transactions and determine what to do if an error occurs, be it I2C or SMBus.
Is it possible to implement a guaranteed-delivery protocol over I2C? Sure, with some work. I see references / requests for PPP over I2C for example, which would use TCP/IP layered on top of the link.
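To illustrate the kind of higher-level recovery described above, here is a hedged Arduino-style sketch using the standard Wire library (the device address 0x50, register number and retry count are made-up placeholders, and Wire.begin() is assumed to have been called in setup()). endTransmission() returns 0 on success and a non-zero code when a byte was NACKed or the bus errored, so the application can retry or report the failure:
#include <Wire.h>

const uint8_t DEV_ADDR = 0x50;   // hypothetical slave address

bool writeRegWithRetry(uint8_t reg, uint8_t value, int maxRetries = 3) {
    for (int attempt = 0; attempt < maxRetries; ++attempt) {
        Wire.beginTransmission(DEV_ADDR);
        Wire.write(reg);
        Wire.write(value);
        // 0 = success; non-zero means a NACK or another bus error.
        if (Wire.endTransmission() == 0) {
            return true;
        }
        delay(2);                // brief back-off before retrying
    }
    return false;                // give up and let the caller decide what to do
}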
|
H: Should I place ferrite barrel/ring on my Nokia 5110 module data lines?
I have Nokia 5110 LCD module from AliExpress. My motherboard is ruled by ATmega64A and control pins for LCD are grouped together as pin header (male, 8 pins in total). The module itself is placed on the external side of plastic case and will be connected with 8-wire ribbon cable (female-to-female).
My question is: should I place ferrite barrel (or make few turns of 8 wires on ferrite ring) to reduce EMI/RFI as it usually done on VGA, DVI and USB cables?
AI: No, if you have unwanted glitches, a ferrite would not solve it.
The ferrites are on cables to keep conducted and radiated EMI/RFI within limits for regulatory compliance reasons.
What matters more is what kind of cable is used, whether the cable is shielded, and whether the electrical signals are properly driven and terminated so that communication between the boards is reliable.
|
H: Fault Ride Through requirements for wind turbines
First a short introduction: (the question is below the first image).
The fault ride-through capability of wind turbines is the most important requirement related to national grid codes. Based on that, wind turbines should remain connected to the network during various grid fault scenarios in order to maintain system stability. [1]
The figure below is an example voltage profile for which a wind power plant must be able to keep up production if the voltage in the point of connection is above the red line.
Question:
A single line diagram for a single wind turbine connected to the grid is shown below. According to information received from the turbine manufacturer, it can supply the required fault-ride-through current if the short circuit capacity from the grid, at the LV side of the turbine generator is higher than 3 times the nominal current for the generator. The requirement is not related to relay tripping.
I can't see why this is a requirement, and how it can be relevant for the turbine's FRT capabilities.
Why does the turbine/converter require a short circuit capacity of 3xIn in order to supply the needed FRT current? Is it the frequency converter that requires this current?
AI: Give this NERC document, Integrating Inverter-Based Resources into Low Short Circuit Strength Systems, a read. It is not just a fault ride-through issue. Grid following inverters require some minimum grid strength to operate.
Annex C of the upcoming P2800 will address this topic. Here and here are some links on P2800.
|
H: DC-DC buck converter w/o PWM signal (kind of)
Hey, so I'm new to electronics and I've tried constructing a buck converter circuit (in simulation only) without the use of an external PWM source (microcontroller, 555 circuit), and I came up with this. It steps down the voltage from 12 to 4.75 V using a p-MOSFET whose gate voltage is altered by the change of the output voltage.
Can you guys tell me what can be improved in the circuit and what impracticalities might present themselves when constructing it IRL?
Note: It would also be appreciated if someone were to show me how to calculate the efficiency of this circuit.
simulate this circuit – Schematic created using CircuitLab
AI: What you've built (or tried to build, see below) is a linear regulator. Its efficiency will be at best the ratio of Vout to Vin, reduced further by whatever current the gate divider wastes. In this case, assuming your 4.75 V output target from the 12 V input, that upper bound is only about 40%.
MORE:
Unfortunately, your design doesn't even work as a linear regulator. The reason why is that it has no reference to compare against to track the output voltage. Nevertheless I simulated it with a 1.5V threshold FET and got a 3.6V output with that load; the output varies under load. Try it yourself, here.
Below is a functioning linear regulator using a p-FET pass element, using feedback from the output and a 1.2V reference and an op-amp to drive the gate. Simulate it here.
A DC-DC converter will have this same basic control loop: feedback, comparison to a reference, and some gain.
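For the efficiency part of the question, a linear pass element simply burns (Vin - Vout)*Iload, so the calculation is a one-liner. A tiny sketch (the 100 mA load current is an assumption, it is not given in the question):
#include <cstdio>

int main() {
    double vin = 12.0, vout = 4.75, iload = 0.100;        // iload is an assumed value
    double efficiency = (vout * iload) / (vin * iload);   // ignores the divider current
    double dissipation = (vin - vout) * iload;            // heat in the pass transistor
    printf("efficiency = %.0f %%, pass-element dissipation = %.2f W\n",
           100.0 * efficiency, dissipation);
    return 0;
}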
|
H: What is this schematic symbol? (Circled B with four diagonal hatches)
(It's not on the wiki page for electronic symbols, or this sparkfun page with lots of symbols.)
AI: The symbol is for a lamp (with rays of light emanating from it).
The 'B' is, most likely, for 'blue'. This would be typical for an industrial machine reset button.
The dotted line would usually be connected to a switch contact indicating that the lamp is part of an illuminated button.
I can't find a reference for the symbol but it's quite common on American machine ladder schematics that I have seen.
Figure 1. Indicator lamps on a ladder diagram. Image source: All About Circuits.
|
H: Powering an op amp from a Reference voltage?
Let's say I had a low power op amp (pre-amp stage) going to an ADC (LTC2412).
My question is: if I wanted a super precise, low noise, low TC drift positive supply line, could I just power the +A5V line from a voltage reference IC like the LTC6655LN-5? My circuit only needs about 3 mA (6 mA projected worst case?) to run.
I have added the schematic to show that the +2.5V line is produced from the +A5V line. So therefore changes in supply to the ADC will also affect the +2.5V common mode voltage ( going to the CH- channel on the ADC), allowing for more linearity. So I am looking for a very low noise +5V source, with preferably low TC drift. I am unsure of the tradeoffs of using a voltage reference as the supply or using an LDO or using a reference+LDO combo (like the LTC6655LN-5 and LT3045) Like this:
Thanks
AI: This Vref IC up to $21/pc is overkill for most Op Amp applications. It is limited to +/-5mA so a 6mA load exceeds spec.
Unless a specific need is defined, a part designed as a voltage reference rather than as a power source is a poor match for powering op amps.
The LT3045 is a suitable buffer. But choosing an ultra-stable supply alone does not make a design great; in fact, without supporting specs it casts doubt on the design.
|
H: Piezo connected to ESP32 - Why doesn't it kill the input?
I have a ESP32, and a piezo disk connected as a sensor:
Piezo - to GND
Piezo + to Analog In
50K Resistor in parallel to piezo
My understanding is that the piezo generates a voltage when it is mechanically deformed, either through vibrations (like sound) or through mechanical pressure on it. The resistor is there to get rid of the charge after the fact and avoid the analog input floating. The lower the resistor value, the less sensitive the system will be. In my case I chose a pretty low resistor value to catch only very loud noises (gun firing). It works and all is well, but I would like to learn and understand one thing:
If the piezo is hit VERY hard, couldn't it theoretically generate high high voltages that are way beyond the 3.3V that the ESP32 can handle? If so, how can this voltage be limited, and why doesn't anyone limit it in any of the schematics I found online?
AI: There are such schematics that try to protect the input.
Often there are resistors in series and clamp diodes to prevent voltages from damaging the MCU input.
|
H: Store byte instruction in RISC-V assembly
I have a short snippet of RISC-V assembly that I'm having trouble understanding. I'm not sure if I'm interpreting the instructions wrong, from my interpretation it seems as if the branch (BNE) will be taken but it is given that it should not be.
Given commented code:
000001b8 <test_4>:
lui sp,0xfffff # -> load [sp] = 0xfffff000
addi sp,sp,-96 # -> add -96 to [sp], resulting in [sp] = 0xffffefa0
sb sp,2(ra) # -> store 0xffffffff (sign extend [23:16] of [sp]) to memory
lh a4,2(ra) # -> load halfword from data memory to [a4] = 0xffffffff
lui t2,0xfffff
addi t2,t2,-96 # -> [t2] = 0xffffefa0
li gp,4
bne a4,t2,56c <fail> # -> guaranteed fail since 0xffffffff != 0xffffefa0?
Any help would be greatly appreciated!
AI: You seem to misunderstand SB:
sb sp,2(ra) # -> store 0xffffffff (sign extend [23:16] of [sp]) to memory
This comment is not correct. This stores 0xa0 (lower 8 bits of sp) to memory. The address is ra+2.
a4 will have the value of 0xSSSSXXa0 where XX is whatever value happens to be in the next byte and SSSS is the sign extension of it. If that byte was 0xef then a4 can indeed have the value 0xffffefa0.
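To see this concretely, here is a small C++ model of the sb/lh pair; the 0xef neighbouring byte is an assumption that mirrors the "if that byte was 0xef" case above:
#include <cstdint>
#include <cstdio>

int main() {
    // Tiny model of memory around ra+2; the neighbouring byte 0xef is assumed.
    uint8_t mem[4] = {0x00, 0x00, 0x00, 0xef};   // mem[2] is ra+2, mem[3] is ra+3

    uint32_t sp = 0xffffefa0;

    // sb sp, 2(ra): only the low 8 bits of sp are stored.
    mem[2] = static_cast<uint8_t>(sp & 0xff);    // 0xa0

    // lh a4, 2(ra): read 16 bits (little-endian) and sign-extend to 32 bits.
    int16_t half = static_cast<int16_t>(mem[2] | (mem[3] << 8));   // 0xefa0
    int32_t a4 = half;                           // sign-extended to 0xffffefa0

    printf("a4 = 0x%08x\n", static_cast<unsigned>(a4));
    return 0;
}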
|
H: ESP32 - Read multiple analog inputs
I need to read multiple analog inputs as fast as possible on a ESP32. With as fast as possible, i mean that ideally all inputs are read simultaneously.
Currently I simply read them one after the other with consecutive analogRead() calls in my code, but it happens that I read a value on an input where no signal is applied. I read somewhere some months ago that in these cases it is good to put a pause between read calls to give the ADC time to switch to the proper input.
Is this true? What is a reasonable amount of time to put in there, keeping in mind that I would need to read them all (5 inputs) as quickly as possible?
All inputs are on the ADC 1.
AI: The fastest way would be to use two different ADCs so you can simultaneously sample and convert two analog inputs.
The second fastest way (according to the reference manual) would be to give the ADC a list of channels to sample and tell it to work through that list: it samples each channel in turn, writes the results to memory using DMA, and reports with a scan-complete interrupt when it is done going through the list.
There is little benefit in delaying between ADC conversions. If a delay helps, it is a sign that the sampled signals have too high an impedance to give a stable result, which could also be a sign of the ADC being incorrectly configured for the expected signal impedances; a longer sampling period (or slower ADC clock) might fix it. Usually the switch between channels is done while the analog sampling circuit is not connected to the analog pin.
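As a baseline, a minimal Arduino-style loop that reads five ADC1 channels back to back without extra delays might look like the sketch below (the pin numbers are placeholders for ADC1-capable GPIOs; adjust them to your wiring). If readings on an undriven pin still look wrong, suspect source impedance rather than missing delays.
// Assumed ADC1-capable GPIOs on a typical ESP32 board; adjust to your wiring.
const int adcPins[5] = {32, 33, 34, 35, 36};
int samples[5];

void setup() {
    Serial.begin(115200);
}

void loop() {
    for (int i = 0; i < 5; ++i) {
        samples[i] = analogRead(adcPins[i]);   // consecutive reads, no delay inserted
    }
    for (int i = 0; i < 5; ++i) {
        Serial.printf("ch%d=%d ", i, samples[i]);
    }
    Serial.println();
    delay(10);                                 // just to keep the serial output readable
}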
|
H: Custom D Flip Flop in Logisim Simulation Error
I am building a custom D flip-flop in Logisim as part of a project for my computer organization course, and I am not allowed to use the built-in flip-flops. When I designed this flip-flop everything went well: every wire and connection was green and there was no error. However, when I reset the simulation or try to use the circuit as a single component in another circuit, the problem shows up and some of the internal wiring turns red, including the outputs.
Does anybody know a solution to overcome this problem?
thanks in advance. Sorry for the bad English.
AI: You need to connect the clear input to the gates from which you get the outputs Q and Q'. When clear is pressed, Q = 0 and Q' = 1 for proper operation of the flip-flop.
An easier option would be to have a single output Q, and generate Q' from Q by connecting Q to a NOT gate.
Without clearing the flip-flop, the previous state would be unknown. In some situations, this unknownness of the previous state propagates to the following states, and the output remains unknown. Hence use a proper clear input to clear the flip-flop before testing other inputs.
Your problem is the same one described here:
Question: How is the Q and Q' determined the first time in JK flip flop?
Answer from a simulation point of view: https://electronics.stackexchange.com/a/534934/238188
|
H: Emitter follower as single-ended power amplifier
I have seen that the following circuit is mentioned as a single-ended class A power amplifier.
Also, push-pull configuration is mentioned as a better power amplifier, which isn't single-ended, and has two transistors in emitter follower configuration.
Can't we use only a single transistor in emitter follower configuration to have a single-ended class A power amplifier, driven by previous small-signal amplifier stages? The input impedance of emitter follower is almost \$\beta \$ times the impedance of its load. So a transformer may not be needed for impedance matching as used in the circuit shown above.
AI: Can't we use only a single transistor in emitter follower configuration to have a single-ended class A power amplifier
Sure we can, I guess you mean something like this:
simulate this circuit – Schematic created using CircuitLab
Now imagine that I want a certain output power, that means a certain current needs to be delivered to the speaker. The speaker needs AC, there cannot be a DC current flowing through the speaker so that's why C1 blocks the DC.
Now imagine we need a signal of up to 100 mA to the speaker. That signal needs to be positive and negative. That's from -100 mA to + 100 mA so a total range of 200 mA.
The current through Q1 can only be positive, so the solution with the lowest current consumption is to make the current through Q1 vary between 0 and 200 mA. When there is no input signal, 100 mA flows through Q1 and R3. That is what we call quiescent current or biasing current. This current flows even when you turn the volume all the way down! Imagine what this would do in a battery-powered device. Sure, we can lower that current, but then the maximum volume would be limited as well.
This high quiescent current consumption is the result of choosing a class A stage. A class-A push pull stage would have the same quiescent power consumption.
The advantage of using a transformer like in the circuit that you show is that you can trade voltage and current. Suppose the speaker still needs 100 mA but I'm using a 10:1 (primary:secondary) transformer. Then at the primary side the current will be 10x lower! OK, the disadvantage is that I will need a voltage that is 10x higher. But with low ohmic speakers (8 Ohm etc) that is not an issue, at 100 mA that would result in 0.8 V. Multiply that by 10 and we get 8 V which is a reasonable output voltage for a supply of 12 V.
So in summary: yes, we can use an emitter follower and indeed it does have a low output impedance. However, what you didn't think about is that although the small-signal output impedance of an emitter follower is quite low (1/gm), that does not mean it can deliver enough current to drive a low impedance load like a speaker directly. That's why a transformer is a good idea: it transforms the low impedance of the speaker into a higher impedance that is easier to handle for a low power circuit.
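To put numbers on the quiescent-current and transformer arguments above, here is a quick sketch using the figures already mentioned (100 mA bias, an assumed 12 V supply for the follower too, an 8 Ω speaker and a 10:1 transformer):
#include <cstdio>

int main() {
    // Figures taken from the discussion above; the 12 V follower supply is assumed.
    double iq = 0.100, vcc = 12.0;
    printf("class-A quiescent drain with no signal: %.1f W\n", iq * vcc);   // 1.2 W

    double r_speaker = 8.0, n = 10.0;           // 10:1 turns ratio
    // A transformer reflects the load impedance by the turns ratio squared.
    printf("speaker impedance seen at the primary: %.0f ohm\n", r_speaker * n * n);
    return 0;
}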
|
H: What will be the transfer function of the block diagram?
I have tried it for hours but I am not getting the correct answer.
Here is the link to the question: https://ibb.co/gFhrwCW
Here is the link to my attempt at solving it: https://ibb.co/GcMZC13
AI: You should probably attack it this way (a helping start): -
One of the mistakes you make is in calculating what G2 and G3 in parallel are. They are simply added to each other after making the initial modification shown above.
I think you should be able to take it from here. But, if not and given that you have accepted the answer, here's how I would approach it as an exercise: -
It's looking fairly straightforward now so I'll finish.
|
H: How does an operating system or program detect the CPU model name?
What kind of binary compatibility is present for 2 processors sharing an Instruction Set?. I had asked a question on Computer Science Stack Exchange, to which I got an answer which said:
As a trivial problem, you can write a program that will try to print on which processor it is running, how many cores, how much memory etc. etc. Such a program should obviously produce different results on an x86 and an AMD CPU.
I could not figure out how to write a program in any programming language or assembly code that correctly prints the name of the processor. Such a program would print "Intel i3-3220" on an Intel i3-3220 machine and "AMD Ryzen 5000 ..." (I don't know the exact model name) on an AMD Ryzen 5000 machine. Since both processors are binary compatible, I should be able to write a program that runs on both of them without recompiling. If a program exists to print the name of the processor, it has to detect the processor first. Of course, it can ask the operating system for the processor name, but this isn't the same as writing a program to detect the CPU model. I don't think such a program exists. But the operating system correctly detects the CPU model.
Ubuntu uses the same operating system for AMD and Intel x86-64 processors. There is only one x86-64 bit version available called AMD64, that runs on both AMD and Intel x86-64 machines.
How does the operating system detect the CPU model with the same, binary compatible program?
AI: The architecture defines what results software can expect when running on an implementation conforming to the architecture. It is essentially a contract that ensures binary compatibility between different implementations and microarchitectures.
In general, the ISA only makes up a (small) portion of the architecture (x86, AMD64, ARM, RV etc). The architecture also defines registers, debug, exception handling, [..]. Any design implementing a given architecture must also include these various items to conform to that architecture.
In your particular question regarding CPU identification, the architecture will define methods of determining the implementation ID, optional architectural features, etc. Jonathan's answer covers the CPUID instruction in x86(_64) and shows the binary compatibility when running on different implementations of the architecture. Here's an answer on SO which describes using register accesses to find similar information in an ARM64 (AArch64) implementation. (Incidentally, the different ways one accesses this information in these two examples (x86, ARM) is a good example of the conceptual difference between a CISC and RISC architecture.)
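For the x86/x86-64 case specifically, a minimal sketch of CPUID-based identification (using GCC/Clang's <cpuid.h>; leaves 0x80000002 through 0x80000004 return the human-readable brand string) could look like this. The same binary prints an Intel brand string on one machine and an AMD string on another:
#include <cpuid.h>
#include <cstdio>
#include <cstring>

int main() {
    unsigned int regs[12] = {0};
    char brand[49];

    // Each of the leaves 0x80000002..0x80000004 returns 16 bytes of the brand
    // string in EAX, EBX, ECX, EDX.
    for (unsigned int i = 0; i < 3; ++i) {
        __get_cpuid(0x80000002 + i, &regs[4 * i], &regs[4 * i + 1],
                    &regs[4 * i + 2], &regs[4 * i + 3]);
    }
    memcpy(brand, regs, 48);
    brand[48] = '\0';

    printf("CPU brand string: %s\n", brand);   // differs between Intel and AMD machines
    return 0;
}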
|
H: when does induction machine supply capacitive power
Can an induction machine whether it being a motor or a generator supply capacitive reactive power?
AI: A motor or generator consists of coils, so the reactive power associated with it will mainly be inductive. An induction machine, whether running as a motor or as a generator, draws inductive (lagging) reactive power for its magnetization and cannot supply capacitive reactive power on its own.
|
H: Can you just rely on battery BMS's undervoltage protection for normal use
Let's say you buy a lithium battery with a BMS circuit included. The BMS protects against undervoltage (and other things) and only 2 wire leads come out of the battery (so no separate charging wire), e.g. Lithium battery.
Is it okay to use this in a self-designed electronic device with no further attention for the voltage of the battery and no means of disconnecting the battery from the device electronically on the device side?
On one side I would assume yes, and use the UVP threshold as the voltage at which the battery is dead. But I have heard others say that you had better not trip this protection, and should instead check the battery voltage on the device itself and let it shut down when the voltage is near the UVP voltage. This would be because when the UVP kicks in, the battery disconnects itself entirely and no recharging can be done, plus the battery lifespan diminishes.
AI: Yes, you can rely on it in this case, the battery has S8261-G2J battery protection chip which has undervoltage cut-off.
It will definitely cut off before the battery is so empty that it can get damaged.
However, there is no mention of which exact version of the S8261 it is. When the UVP has tripped, some versions allow recharging and some versions deny recharging.
So it is possible that the battery goes too empty and becomes unchargeable via the BMS. This is because some batteries can get damaged when too empty, and the manufacturer does not recommend charging them again. Some manufacturers allow it, and hopefully the BMS allows charging after the UVP has tripped.
It is best that the BMS is only used as the last line of protection, and your circuit should make sure it disconnects itself before BMS UVP happens, to be able to charge it.
|
H: What kind of terminal block is this?
I've perused a few resources (linked below) trying to learn more about this strip / block, but they mostly have side views / cross sections. I just want to know what the things in this picture are called so I can replace them -- starting with the parts of the terminal block / strip itself.
https://www.c3controls.com/white-paper/applications-of-electrical-terminal-blocks-in-industrial-automation/
https://realpars.com/terminal-blocks/#
EDIT: Here are my thoughts so far:
The white strips running horizontally along the middle are jumpers -- they connect wires of the same number label. The taller blocks on the middle-left are type 3C. The entire strip is mounted on a 35 mm DIN rail.
AI: These are rail-mounted feed-through push-in terminal blocks. You can get them for all kinds of rail systems, the ones I've linked to are for NS15 rails.
|
H: How do I choose series resistor for stacked Zener diodes?
I have 12-30V input, and I need ~10V and ~5V. So I feel like an easy way to do this would be to stack 5.1V Zener diodes front-to-back and have taps for both of the voltages I need. Something like the image below. But, I'm having trouble determining the appropriate series resistor. I only need 4mA from the 10.2V tap and 1mA from the 5.1V tap. Any guidance would be appreciated!
(Image source: Electronics Tutorials - Zener Diodes Connected in Series)
AI: Zeners with a resistor sadly aren't the way to go here. The 12-30V input voltage range makes the resistor impractical.
Let's assume worst-case conditions: 10V at 4mA out, 5V at 1mA, 12V in. This means your resistor has to pass at least 5mA with 2V across it. Easy enough, that's 400 Ohms.
But let's now crank the input voltage up to 30 Volts. That leaves 20 Volts across the resistor, which is still 400 Ohms, resulting in 50mA flowing through it. This, in turn, means that your circuit always consumes at least 1.5 Watts (50mA at 30V) whenever it's powered on. That's most likely unacceptable. In reality, your resistor will have to be smaller than 400 Ohms because the Zeners need a minimum current to operate, making the problem even worse.
There are two-terminal constant-current LED drivers that would work nicely in place of the resistor, like the AL5809-15. Alternatively, you could use a 10V and 5V LDO regulator.
Of course, your Zeners must be able to handle the full input current (15mA in case of the AL5809-15). With 5V Zeners, that'd be 75mW of dissipation per diode. Typical Zeners are rated for at least 300mW so this will not be a problem.
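The worst-case arithmetic above is easy to re-run as a script; this short sketch just repeats the answer's numbers (nothing new is assumed):
#include <cstdio>

int main() {
    double v_stack = 10.0, i_load = 0.005;   // ~10 V across the zener stack, 4 mA + 1 mA load
    double r = (12.0 - v_stack) / i_load;    // sized for the 12 V worst case -> 400 ohm
    double i_30 = (30.0 - v_stack) / r;      // current at the 30 V worst case -> 50 mA
    printf("R = %.0f ohm, I at 30 V in = %.0f mA, input power = %.1f W\n",
           r, 1e3 * i_30, 30.0 * i_30);
    return 0;
}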
|
H: 4-wire fan controlled by potentiometer and 555 IC not behaving properly
Here's the schematics:
Forgive me for the newbie schematics. This is the first time making one.
I used an LMC555CN IC and this Delta BFB1012VH-5D84 Blower Fan (Can't find a datasheet for this one).
I can control the fan with the potentiometer, but the problem is that its speed is maxed out when I crank the pot all the way to the left or right, while it slows down to its slowest speed at around 30° and speeds up accordingly from there.
Is there something wrong with what I did?
AI: A PWM fan will have some minimum ‘starting’ drive before it will begin rotating. This is usually about 25%. Once it starts, it will continue to run with smaller PWM values, as small as 5%.
With that out of the way, there’s a more important issue in your circuit: you shouldn’t drive the PWM pin with OUT. This pin is swinging 12V, the fan can’t accept more than 5V. This came up in another question recently, and I modified the circuit to use DISCH to drive the fan.
Here’s that diagram, with values for a 25kHz PWM:
Simulate it here
The fan isn't too fussy about the exact PWM frequency. 25kHz is the recommended spec from the Intel Form Factor Specification, chosen to be high enough to avoid the PWM chop making acoustic noise in the fan. It should not be higher than that; lower is acceptable.
More about that here: https://noctua.at/pub/media/wysiwyg/Noctua_PWM_specifications_white_paper.pdf
|
H: Induction motor with reversing stator magnetic field
The question:
The answer:
The strength of the torque depends on the difference in speed between the stator's rotating magnetic field and the rotor's rotation. The larger the difference in speed the greater the induced voltage, current and torque.
When the motor is starting up, let's assume the magnetic field is rotating at synchronous speed. The difference in speed between the rotor and the stator's rotating field is 1,800 RPM, which means there will be a large induced voltage, current and torque. The rotor will start to rotate to catch up with the stator's rotating magnetic field.
As the rotor increases its speed to 900 RPM, the difference in speed between the stator field and the rotor will only be 900 RPM, so the induced voltage, current and torque will be less.
Suddenly, the rotating magnetic field is flipped and rotates counterclockwise, so its speed will be -1,800 RPM. The difference in speed between the rotating magnetic field and the rotor will be very large (about 2,700 RPM). This will induce an even larger voltage and current in the rotor, which will also generate a larger torque than at start-up.
Immediately following the switch, the rotor will still turn in its same direction, but will be subject to an opposite torque, voltage and current. So (A) is incorrect.
The rotor will be subject to a torque opposite to its direction, so it will begin to slow down. However, the induced voltage will immediately increase. So (B) is incorrect.
When the rotor is slowing down, shouldn't the voltage reduce?
I don't understand why the voltage will INCREASE!
Since the induced voltage increases, the induced current will increase. So (C) is correct.
The stator field will reverse direction and the rotor will slow down. So (D) is incorrect.
The correct answer is most nearly, (C).
Source:
NCEES
PE Electrical and Computer: Power Practice Exam, 2020
ISBN 978-1-947801-16-5
AI: When the stator connection is reversed, the synchronous speed becomes negative 1800 RPM. Slip is the difference between synchronous speed and rotating speed. The slip changes from something like 50 RPM to something like 3550 RPM. The induced voltage in the rotor is proportional to slip. That would cause a huge increase in rotor current if it were not for the effect of the rotor reactance. It is still a pretty big increase, because the current increases from something near rated current to something above locked-rotor current.
Regarding Plugging Torque
Although I agree that (C) is the correct answer, I don't agree that the torque will be larger than at start-up. The image below from Fitzgerald, Kingsley, Umans, Electric Machinery supports my assertion. Considering the induced voltage is an acceptable line of reasoning, but it is better to consider the Steinmetz equivalent circuit. However, the solution of the equivalent circuit is probably not sufficient because it does not consider the effects of the size, shape and position of the rotor bars. In many motor designs, the torque during acceleration has a minimum somewhere between standstill and operating speed. That may also mean that the torque rises as the speed increases in the reverse direction. The torque in the plug-reverse region may be higher or lower than locked-rotor torque depending on rotor design.
|
H: Using an RGB TFT screen on the Raspberry Pi 4
I want to use the HT0700EI02A TFT screen on the Raspberry Pi 4 Compute Module.
Raspberry Datasheet: https://datasheets.raspberrypi.org/cm4/cm4-datasheet.pdf
Screen Datasheet: https://cdn.ozdisan.com/ETicaret_Dosya/623388_201461.pdf
Apparently, this screen uses a protocol called 24-bit RGB. But in addition to HDMI, the Raspberry Pi only has the DPI (Parallel RGB Display) interface.
The screen's datasheet says that 8 data inputs are used for each RGB color, totaling 24 pins. The Raspberry Pi datasheet states that the DPI interface uses 24 data pins. The LCD_VSYNC, LCD_HSYNC, DE and DCLK pins are shown in both datasheets. So far, it seems to be the same protocol.
However, some pins on the screen do not match with raspberry, such as L/R(39), U/D(40).
My question is: are they the same protocol? How do I turn on the display on my raspberry?
I didn't find much information about the protocol and its use on the Raspberry Pi. If anyone can give me a tip to start the work, I appreciate it.
Thank you so much
AI: That's the same protocol.
And, you turn on the display by providing all the necessary power supplies, such as for the panel and backlight. The TFT panel manual must have instructions how to power it and reset it properly to show a picture, or it may refer to the display driver IC data.
L/R and U/D are not part of the pixel transmission protocol; they control the horizontal and vertical flip of the display. There is no need for the RPi to drive them, since they are not pixel data; just tie them to fixed logic levels for the orientation you want.
Now, the next problem is, you need to figure out which RPi data pins are red, green and blue bits, so that you know in which order to connect them to the display.
|
H: Understanding battery capacity measurement circuit
I came across this battery capacity circuit on the internet and it seems interesting and useful. So I would like to build it myself. However, I don't fully understand the working of some of its parts so would like to learn how those work.
Forgive me if some of my questions seem too obvious to you but they aren't for me :)
How is the switch S2 controlling the relay? and is it directly connected to the relay with the dotted lines?
The relay shown is a dual pole one, so I assume that the switch S3 will be connected to one of the poles, while the switch S2 will be connected to the other pole. Is that correct?
What is the purpose of the 3.3V diode next to the 12V source? Is it there to regulate voltage in some way? Also the symbol itself is confusing, as on the internet I could find the same symbol for a Zener as well as for the Schottky diode. Which one is it in this case?
The reference point is shown twice in the circuit, so does it provide a kind of verification to check if the circuit is properly connected or is it actually the voltage point that feeds the main circuit?
I suppose the "COM" is common ground and all the other grounds are connected to this point. Right?
I have never seen the electromechanical clock thingy. Is it a thing of past or still used these days? Would be very helpful if you can give a real example as searching on google shows me pictures of huge clocks :)
lastly, how would the circuit stop discharging the battery upon reaching the cutoff voltage? I assume the 2n7000 will only disconnect the electromechanical clock. So I wonder how will the discharge circuit be stopped at the end of test?
AI: How is the switch S2 controlling the relay? and is it directly connected to the relay with the dotted lines?
S2 is the relay contact. When current flows through the coil, S2 closes. S1 initially connects the battery to activate the relay, then S2 holds it on.
The relay shown is a dual pole one, so I assume that the switch S3 will be connected to one of the poles, while the switch S2 will be connected to the other pole. Is that correct?
Correct.
What is the purpose of the 3.3V diode next to the 12V source? Is it there to regulate voltage in some way? Also the symbol itself is confusing, as on the internet I could find the same symbol for a Zener as well as for the Schottky diode. Which one is it in this case?
It's a Zener diode which will conduct if the reverse voltage is over 3.3V. In conjunction with the series resistor, that gives you a regulated 3.3V reference.
The reference point is shown twice in the circuit, so does it provide a kind of verification to check if the circuit is properly connected or is it actually the voltage point that feeds the main circuit?
Those symbols indicate that there is a connection between those points not indicated by a line. They're usually used for nodes with a lot of connections, or with connections on another sheet. They can also be used to force meaningful net labels instead of autogenerated ones.
I suppose the "COM" is common ground and all the other grounds are connected to this point. Right?
Right. It's drawn here to indicate a "common" or ground terminal (or off-sheet connection) for the 12V supply.
I have never seen the electromechanical clock thingy. Is it a thing of past or still used these days? Would be very helpful if you can give a real example as searching on google shows me pictures of huge clocks :)
It's intended to indicate the discharge time of the battery. You might be able to use a battery-operated analog watch initialized to 12:00. I think they specify an electromechanical clock so that it retains the last reading, but there's other timers that would work.
lastly, how would the circuit stop discharging the battery upon reaching the cutoff voltage? I assume the 2n7000 will only disconnect the electromechanical clock. So I wonder how will the discharge circuit be stopped at the end of test?
The battery level is measured at the + input of IC1B, and compared to the level set by P2 on the negative input. As long as the battery voltage is higher than the set point, the output of IC1B is high, and Q2 keeps the relay energized. When the battery voltage goes below that point, the output of IC1B goes low, Q2 turns off, and the relay turns off.
|
H: BJT As a Switch. Efficiency?
I needed a 2-digit display for a project and simply ordered a device 1, without proper thought. To be fair, I have learned a few things about 7-segment LED displays. Unfortunately I got one which is a common-anode type, rather than a common-cathode one, for which there seem to be plenty of drivers, like the MAX6955.
That's OK, I'm not short of GPIO pins on a uC, so I'll just control the 7-segment display from the uC. There will be a bit of timing involved in illuminating each segment one at a time, but that's no great issue. So I set about thinking of a control circuit and started with 9 NPN BJTs acting as switches. That seems to work in my head, but I'm wondering about efficiency. I'd better add a circuit:
Obviously I'm missing base resistors for all 9 BJTs which will be connected via a base resistor to a uC GPIO pin.
I'm happy enough with the 7 transistors switching the segments, as they just switch to ground, like a low side switch.
However, the two transistors which connect power to either of the two 7-segment digits aren't as good in my head. To switch a 7-segment digit off I have to turn on the BJT, effectively shorting the collector to the emitter. So with the 7-segment digit disabled, VCC is connected to ground through Rc1 or Rc2, wasting power.
I guess since I'm timing the 7-segment displays and only lighting one segment, A-G, at a time, I can get rid of the 7 resistors on the cathode side of the LED segments and only have Rc1 and Rc2. But the voltage drop across an LED is 2V, so with a segment connected there's VCC - 2V dropped across Rc1 or Rc2, depending on which is on. Turning off the digit by shorting collector to emitter means the whole VCC is dropped across Rc1 or Rc2, so there'd be more current drain when the digit is turned off than when it is powered.
I'm going to look for another solution, but I wondered whether my thinking is correct. Using the BJT as a high-side switch means there's more current drain when the load, in this case an LED, is off rather than switched on. If it wasn't for that I'd go with this solution, but it seems not right to me.
1 https://www.lumex.com/spec/LDD-E304NI.pdf
AI: You can interface the LEDs directly to GPIOs configured as open drain. Then use a PNP BJT to select the display.
simulate this circuit – Schematic created using CircuitLab
|
H: Purpose of Bus to Bus Entry (Kicad [Eeschema])
On the far right of Eeschema, there is a button below Wire to Bus Entry: Bus to Bus Entry.
What is the purpose of this, and how can it be used?
AI: It serves no purpose other than a 45° aesthetic. It has been removed in KiCad version 6 and replaced with a bus element of similar orientation/length.
|
H: Getting the energy consumption in joules over a period when having current and voltage
I'm trying to calculate the energy consumption over a period of 1.3 seconds in joules (J).
I have a sample every 100 ms (0.1 second) with the current and voltage (in volts) (in practice, the samples have a much higher granularity, but for the sake of simplicity ;)).
Time (s)   Current (mA)   Voltage (V)
0.0        30200          3.55
0.1        30000          3.6
0.2        30200          3.65
0.3        30100          3.55
0.4        30000          3.6
0.5        30100          3.55
0.6        30200          3.6
0.7        30100          3.65
0.8        30000          3.55
0.9        30000          3.6
1.0        30200          3.6
1.2        30000          3.55
1.3        30100          3.6
How can I calculate the consumed energy consumption in joules over this period of 1.3 seconds?
My idea was to:
Convert mA to ampere by dividing each mA value by 1000: e.g., 30.2, 30, 30.2, 30.1, 30, etc.
Taking the mean of the current and voltage gives 30.09 A and 3.59 V respectively.
Multiplying these to get the power: 30.09 A * 3.59 V = ~108 W
Since watt is also joules per second we multiply it by 1.3 to get the total consumed energy of this period in joules: 1.3 s * 108 W = 140.4 J.
Is this correct? Does it make sense? Are there better, more accurate ways to measure the energy consumption in joules over a period?
AI: Convert \$\text{m}A\$ to \$A\$.
Multiply \$A\$ and \$V\$ for each time period to get \$W\$.
Multiply \$ W\$ by the length of the time period (\$0.1\text{ s}\$)
to get \$J\$.
Add up all the \$J\$. Simple numerical integration.
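Following that recipe literally (and treating every sample as covering one 0.1 s interval, as your own estimate does), a short sketch:
#include <cstdio>

int main() {
    // Samples from the table above: current in mA, voltage in V.
    double current_mA[] = {30200, 30000, 30200, 30100, 30000, 30100, 30200,
                           30100, 30000, 30000, 30200, 30000, 30100};
    double voltage_V[]  = {3.55, 3.6, 3.65, 3.55, 3.6, 3.55, 3.6,
                           3.65, 3.55, 3.6, 3.6, 3.55, 3.6};
    const double dt = 0.1;  // seconds per sample

    double energy_J = 0.0;
    for (int i = 0; i < 13; ++i) {
        double power_W = (current_mA[i] / 1000.0) * voltage_V[i];  // P = V * I
        energy_J += power_W * dt;                                  // E = sum of P * dt
    }
    printf("Energy over %.1f s: %.1f J\n", 13 * dt, energy_J);     // roughly 140 J
    return 0;
}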
|
H: Drained 12 V car battery jumping another car
I was jumping another car with a 12 V car battery and it jumped the other car. However, it seemed to kill the 12 V car battery in my car and then I couldn't start mine. I used a multimeter and found out that the battery only had about 9 V left in it. Can someone explain what caused this to happen? Could I have caused a short or something?
AI: You normally leave your car running while jump starting another car.
"Jump starting" transfers energy from the battery in your car to the battery in the other car as well as providing current to the starter motor. Charging the battery in the other car lowers the charge of your battery. The starter motor draws a lot of current, further reducing the charge in your battery.
What you have there is not unexpected if you jump a car without the motor running.
You'll need to charge the battery of your car with a regular charger - or find someone to jump start your car. Just remember that the car providing the boost should keep its motor running.
When you leave the motor on your car running while jump starting another car, you get several advantages:
You can rev your motor and charge the other car from your alternator instead of from your battery - your battery doesn't discharge as much.
You can put more charge into the jumped car battery because you can let your motor run for a while to charge it. If you just use your battery, then the two batteries will equalize at some point lower than a full charge. Say the "dead" battery only has half a charge and yours is full. If you charge the dead one from your battery, then both batteries end up with something less than 75% charge.
Your battery doesn't have to crank up your motor when you are done. The motor is already running so once you disconnect the jumper cables it goes straight back to recharging your own battery.
|
H: Alternative clock generator for CDCE62005 reference clock?
I am working on a circuit which uses the CDCE62005 to generate clock signals for a DAC and an FPGA.
The evaluation board of the CDCE62005 uses a PE7745DU-30.72M for the primary clock reference - but I can not find that part anywhere.
Is there any similar part available which would work fine too?
AI: The part appears to be a 3.3V crystal oscillator, with an LVPECL differential output, which oscillates at 30.72MHz.
If you are making your own board based on the demo, you simply need to find an oscillator with those specs. In fact it doesn't even need to be differential or LVPECL, the CDCE62005 supports a wide variety of IO standards for its reference clock.
That particular frequency seems a bit niche, but similar parts do exist - e.g. 530AB30M7200DG (Digikey sell them with MOQ of 50). Alternatively you could try using a different frequency - the CDCE62005 is highly configurable clock generator, so you should be able to adjust the various parameters to get the same output frequencies from a different reference clock.
|
H: Trouble Understanding Dual Transistor Flashing Circuit
I am having trouble understanding how this circuit works. I've breadboarded it out and the LED flashes but I can't seem to wrap my head around how the NPN vs the PNP is creating the flash. Any help is greatly appreciated!
Schematic:
simulate this circuit – Schematic created using CircuitLab
AI: It's a little bit complicated, but let me try to explain. It does require understanding that the voltage source is current-limited in some way — I assume you're using a lithium coin cell or something similar to power the circuit, which I've modeled in your diagram as a voltage source in series with a significant resistance.
The fundamental concept is that the two transistors are wired in such a way that they mimic an SCR. Mutual feedback ensures that once one starts conducting, they both conduct until the current is interrupted by some means. The means in this case is that the feedback to Q1 is through C1, which blocks DC.
When power is first applied, R1 biases Q1 into conduction, which in turn provides base current to Q2, turning it on, too. This connects the battery to the LED and C1. Since C1 is initially discharged, it now starts to charge quickly, limited only by the source resistance of the battery. It is now the current through C1 that is keeping Q1 turned on, since the voltage drop across R2 has reduced the voltage at the upper end of R1.
Note that the left end of C1 is clamped to about 0.65 V by the B-E junction of Q1. When the voltage across C1 rises enough, the LED turns on, which slows the charging of C1. Eventually the current through C1 drops low enough that Q1 starts to cut off, which also cuts off Q2. All of this happens rather quickly, during the LED flash.
Once Q2 cuts off, C1 now has a significant charge on it, but the LED pulls its positive end toward 0V, which means that the negative end of C1 gets pulled to a negative voltage, ensuring that Q1 (and Q2) cuts off fully.
After that, the only current flowing is through R1, which is now trying to charge C1 in the opposite direction, which takes quite a while because of the high resistance. But once the left end of C1 reaches about +0.65 V, Q1 starts to turn on again, starting the cycle over from the beginning.
The circuit would probably work a bit better if there was an additional resistor connected between the collector of Q1 and the base of Q2 (about 1000 Ω), just to limit the amount of battery current that gets diverted through that path.
|
H: Photonicinduction - Less Turn Equals "Same Amps" but "Less Voltage"
I was recently watching a video by the youtuber ‘Photonicinduction’ and I came across this video.
When the video was at 1:50, I noticed that there were 4 Turns on the primary and secondary so its a 1:1 Transformer.
The cable he is using is 400mm power cable rated at 400A. He stated that if he got 250A on the primary, he could get 20kA on the secondary.
As he is based in the UK, the mains voltage is 240V. By knowing this, I ran the following calculations:
$$P=IV$$
$$P=(250)(240)=60\text{ kW}$$
$$V=P/I$$
$$V=(60\text{ kW})/(20\text{ kA})=3\text{ V}$$
This shows that the voltage across the secondary is only about 3 V, which explains why he does not get electrocuted. But at 4:33, he reduces the windings to 2 turns on both sides. He then claims, "Same amount of amps but, the voltage will be less."
I tried to search for an equation to see how this is even possible but, I am not sure what equation he used or even if he used an equation at all.
I was wondering if anyone can show me the mathematical relationship of the number of Turns of the transformer and the voltage.
AI: If the limiting factor is the wire resistance - which wouldn't be surprising in Photonicinduction's high-current experiments - then less turns means there is less wire, therefore less resistance, therefore less voltage for the same amount of current.
I was wondering if anyone can show me the mathematical relationship of the number of Turns of the transformer and the voltage.
The mathematical relationship between number of turns and voltage is quite well-known:
$$\frac{V_1}{N_1} = \frac{V_2}{N_2}$$
that is, the voltage per turn is the same on both sides of the transformer. And the current scales inversely to the voltage:
$${I_1}{N_1} = {I_2}{N_2}$$
When considering the wire resistance (not just an ideal transformer), both current and voltage will change when a load is added. When connecting a load \$R_L\$ and a series wire resistance of \$R_W\$ (\$R_{turn}\$ per turn),
$$I_2 = \frac{V_2}{R_L + R_W} = \frac{V_{turn}N_2}{R_L + N_2R_{turn}} = \frac{V_{turn}}{\frac{R_L}{N_2} + R_{turn}}$$
If I haven't made a mistake in my math, if \$R_L\$ is very low compared to \$R_{turn}\$, the equation is dominated by \$R_{turn}\$ which does not scale with the number of turns. Whether this is actually true in Photonicinduction's experiments, I'm not sure. Perhaps not, because the \$R_L\$ is the part that gets hot, but then again, the transformer winding has a lot more volume and surface area to dissipate heat compared to the load.
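To see the "same amps, less voltage" behaviour drop out of that formula, here is a tiny numerical sketch with made-up per-turn values (0.75 V and 0.1 mΩ per secondary turn are assumptions, not measurements from the video). With \$R_L \to 0\$ the \$N_2\$ cancels, so the short-circuit current is the same for 4 turns and 2 turns, while the open-circuit voltage halves:
#include <cstdio>

int main() {
    // Assumed per-turn values, purely illustrative.
    double v_turn = 0.75, r_turn = 0.1e-3;
    int turns[] = {4, 2};

    for (int n2 : turns) {
        double v_oc = v_turn * n2;        // open-circuit voltage scales with the turns
        double i_sc = v_turn / r_turn;    // with R_L -> 0 the N2 cancels in the formula above
        printf("N2 = %d turns: open-circuit V = %.2f V, short-circuit I = %.1f kA\n",
               n2, v_oc, i_sc / 1e3);
    }
    return 0;
}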
I am not sure what equation he used or even if he used an equation at all.
It is not clear whether he did. He's shown plenty of similar experiments. Of course both the current and voltage will drop when a load is added, but he may have seen that the current remains about the same in this kind of scenario and developed it as a rule of thumb.
Or perhaps he meant "same amount of short-circuit amps, but the open-circuit voltage will be less." It is clear he is able to calculate things when needed, but I wouldn't expect to see any of that on his YouTube channel.
|
H: Question regarding serial in serial out shift register
I started learning serial in serial out shift register today as below picture.
I have a question but can't find the reason behind it on the internet. As an example, when the input "110" is fed into a serial in, serial out shift register with 3 blocks as below, it needs 2 extra random bits "XX110" before the 110 appears at the output, right? If that is the case, then why not use only 1 block? Wouldn't it directly give the output "110" without the need for extra bits? I don't really understand the purpose of a shift register: what is the point of adding more blocks to it?
AI: For Serial in, Serial Out, yes, they are first in, first out devices. The oldest bits are clocked out.
There are a number of advantages. Usually all the bits are available on their own output lines, in parallel, even for serial in, serial out devices.
Also, when you stop clocking, the bits stay where they are, serving as rudimentary memory.
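A quick software model of the 3-block case may make the timing clearer (the initial register contents are assumed to be 0 here; in real hardware they are whatever was left over): each input bit only reaches the serial output three clocks later, which is exactly the "XX" delay noted in the question.
#include <cstdio>

int main() {
    int stage[3] = {0, 0, 0};              // assumed power-up contents
    int input[6] = {1, 1, 0, 0, 0, 0};     // feed the pattern 1, 1, 0, then idle

    for (int clk = 0; clk < 6; ++clk) {
        int out = stage[2];                // serial output = oldest bit
        stage[2] = stage[1];               // shift everything one stage along
        stage[1] = stage[0];
        stage[0] = input[clk];
        printf("clock %d: in=%d out=%d\n", clk + 1, input[clk], out);
    }
    return 0;                              // output stream: 0 0 0 1 1 0
}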
|
H: Motor power, torque and RPM importance (simulator steering wheel)
I would like to build a DIY racing simulator steering wheel very similar to this one.
My question is, how will the specifications of the motor used affect the experience at the wheel after the pulley system?
I understand torque is easy to manipulate with gears and pulleys, and that it will just change the turning force felt by the driver, but how will maximum power affect what the wheel can/cannot do, and is it related to the torque spec? Also, is RPM an important factor?
AI: Power = torque x rpm. In a Permanent Magnet DC motor rpm is proportional to voltage and torque is proportional to current. Current is set by the torque load, while voltage is set by the supply but reduced by voltage drop across the motor's winding resistance (according to Ohm's Law). The result is performance curves that look like this:-
Maximum power output occurs at ~50% of no-load rpm and 50% of stall torque, at which point the motor is 50% efficient so it wastes as much power as it delivers to the load. At higher loading it wastes even more power and puts out less. Most motors are not designed to work in this area except at very low voltage, because otherwise they get too hot and burn out. This is why the right hand side of the graph shows dotted lines.
Your steering wheel simulator is effectively a servo which attempts to move the steering wheel to a position, but is torque limited so the operator can override the force it produces. The motor controller should adjust voltage to limit current and torque when the wheel is being moved by the operator, permitting full torque output when the wheel is held or moved against the servo motion.
For a given maximum motor power, the wheel can either move quickly with low holding torque, or slowly with high torque, depending on the gear ratio. Most motors are designed to run at thousands of rpm, so a relatively high gear ratio will be required to reduce the wheel speed, which also increases torque by the same proportion.
So you must decide what combination of maximum torque and speed you want, then choose a motor that has the characteristics you want. If the gear ratio is fixed then you must use a motor which runs at the required speed (divided by your gear ratio) on the supply voltage you are using and has the stall torque (multiplied by the gear ratio) that you want. This will also determine the maximum output power, which may be higher than the motor's rated power output (which depends on its ability to dissipate heat in its working environment).
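As a worked example of that last paragraph, with assumed numbers (a 3000 rpm, 0.5 N·m-stall motor and a 10:1 reduction; these are not specs from the linked build):
#include <cstdio>

int main() {
    // Assumed motor figures and gear ratio; placeholders, not specs from the build.
    double no_load_rpm = 3000.0, stall_torque = 0.5;   // N*m
    double ratio = 10.0;

    double wheel_rpm    = no_load_rpm / ratio;         // speed is divided by the ratio
    double wheel_torque = stall_torque * ratio;        // torque is multiplied by it

    // Peak output power occurs near 50% speed and 50% torque (see the curves above).
    double omega_no_load = 2.0 * 3.14159265 * no_load_rpm / 60.0;   // rad/s
    double peak_power = (stall_torque / 2.0) * (omega_no_load / 2.0);

    printf("wheel: up to %.0f rpm, %.1f N*m stall, peak power ~%.0f W\n",
           wheel_rpm, wheel_torque, peak_power);
    return 0;
}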
|
H: How to get a single pulse for 10 milliseconds?
I am trying to get a single pulse of 10 milliseconds on the output of my 555 timer. The following is my schematic:
I am supplying 12 V for Vcc. The values of resistor R and capacitor C are 100 ohms and 100 microfarads. If my trigger switch is pressed and let go, I get a 10 millisecond output. The problem I'm getting is that if my trigger switch is pressed for a long time, the output stays at 12 V and doesn't go down after 10 ms, which isn't what I want, since I intend to replace my trigger switch with an optocoupler and really need my output to only stay high for 10 ms. Is there a way around this? How can I get a 10 ms pulse even when my trigger switch stays pressed for a long time?
AI: You need to protect the trigger from being asserted for longer than your 10ms output pulse.
To do this, you can make your switch trigger produce a narrow pulse using some R and C parts.
simulate this circuit – Schematic created using CircuitLab
Circuit Operation
When SW1 is open, then C1 is discharged via R2 and output is high.
When SW1 is closed, at \$t_0\$ C1 acts as a short and output immediately goes to zero. Then C1 charges through R1 and you get the RC rising edge. If SW1 is closed forever, then the output voltage will be the voltage divider of R1 and R2 since C1 will go to open circuit as it is fully charged. (See the glitch around 11ms on the graph.) Once SW1 is re-opened, then the voltage will go all the way high (12V in this example) and C1 will discharge through R2.
The green trace is what the signal looks like on the OutputPulse. Run that to the trigger (pin 2) input on your circuit.
|
H: Opamp delay off circuit check (updated)
I have a relay that I want to switch off with a 3 second delay after a pressure switch (normally open) returns to the open state. When the pressure is below a certain level the pressure switch closes; once the pressure has reached the set level the switch opens.
I've built this circuit on my breadboard tonight running at 12 V and it gives me a 3 second delay, driving an automotive relay 100 ohm coil resistance.
Does anyone have any suggestions on improvements?
I've updated the design with the great feedback received. The LED was a white LED at the time, but I'm intending to use a green one in the final design, so I altered the resistor. I've made the changes as suggested, such as a current limiting resistor for SW1. With regards to the back-EMF diode, I saw this post (Why don't relays incorporate flyback diodes?) and used design 4. It said the Zener voltage should be 2*Vcc, so I put a 24 V Zener in. Does this sound right? Also, should I remove D1 now?
AI: If that's an ordinary red LED and the op amp saturates at say 10V then there is about 20mA through the LED. Although this would be just about within the LED's spec' it would usually be an unnecessarily high LED current when the LED is just being used as an indicator. I would consider increasing R4 to reduce the LED's current to below 10mA which should still give the LED sufficient brightness.
.... also, when an LED is being used as a "RELAY ON" indicator it (and its series resistor) would usually be seen in parallel with the relay coil so that the transistor is sinking the LED's current rather than the op amp sourcing it.
You could turn the comparator into a Schmitt trigger by adding some hysteresis to it. This would be done by adding, say, a 1 MΩ resistor between pins 1 and 3. The idea is to create two switching levels: one for when the capacitor is charging and another for when it is discharging. This should then remove any possible relay chatter problems caused by a slowly charging capacitor with noise present. Hysteresis will slightly alter the timing of the circuit as it currently stands.
If you decide not to utilise hysteresis then at the very least there should be a capacitor (say 47uF) across R2 to filter out any supply ripple/noise getting to the threshold level but you can't add this capacitor if you decide to add positive feedback (hysteresis).
EDIT
The zener, when used in that position, should have a value of between Vcc and the maximum Vce specification of the transistor (perhaps not too close to Vce(max) to give a safety margin). Using a zener, rather than an ordinary diode, has the advantage that the energy in the relay's coil is removed faster thereby switching off the relay quicker but this is only necessary if a faster switch off is required and in the vast majority of applications an ordinary diode will suffice. The higher the zener voltage, the faster the switch off but keep the zener value less than Vce(max). You require either an ordinary diode or a zener but not both. If you were to use both then the generated back EMF at transistor switch off time would be limited by the ordinary diode to 0.7V above Vcc and the zener, which would never come into service, would be redundant.
|
H: OPA344NA op-amp MEMS Mic
I'm planning on getting audio data from a MEMS analog microphone with an OPA344NA as the op-amp.
I'd like to just make sure the audio circuit looks correct.
(Analog/Audio is something a bit new to me, but I'd like to get it working for wake words / other)
I'm planning on taking readings using the ESP32's built-in I2S analog read at 16 kHz with 2 buffers for 1 second of audio data (further processing done on this later, but hardware first!).
What I'm asking is if the circuit looks correct.
Any advice or suggestions is welcome!
The whole application is a smartwatch, please refer to the sensors section
for the audio schematic.
AI: As shown, for self-bias, crystal load and audio spectrum fixes.
|
H: Can't connect to STM32 chip
I have soldered a STM32F103VF chip (LQFP100) on a breakout board, which I want to connect to via my ST-Link (clone). However, ST-Util (and STM32CubeProgrammer) cannot connect to the chip.
MCU Datasheet: https://www.st.com/resource/en/datasheet/stm32f103vf.pdf
Picture of the setup:
Pinout:
I've made the following connections:
VDD_1, VDD_2, VDD_3, VDD_4, VDD_5 to 3.3V
VSS_1, VSS_2, VSS_3, VSS_4, VSS_5 to GND
VDDA to 3.3V
VSSA to GND
VREF+ to 3.3V
VREF- to GND
BOOT0 via 4.7kOhm resistor to either 3.3V or GND (see bottom)
PB2 (BOOT1) to GND
NRST, PA13 (SWDIO), PA14 (SWCLK) to the ST-Link programmer
two 100µF 16V electrolytic capacitors between GND and 3.3V
one 2.2nF ceramic capacitor between GND and 3.3V
3.3V and GND are supplied by bench power supply, GND is shared with ST-Link
Schematic diagram: schematic
No matter what setting in ST-Util or STM32CubeProgrammer I try regarding reset mode (Software / Hardware / Core) or frequency, having NRST disconnected or connected, it only says
01:50:32 : Error: No STM32 target found!
Observations and notes:
when BOOT0 is LOW (aka "run firmware") and a reset is performed (NRST touches GND for a second), the current draw is 2mA
when BOOT0 is HIGH (aka "run internal bootloader") and a reset is performed, current draw jumps to 12mA
when NRST is connected to GND, current draw is ~1mA
datasheet table 18 says supply current in run mode on HSI 16MHz, all peripherals enabled, is 12.2mA
however, per bootloader document, when the bootloader is active, RCC is set to HSI+PLL to 24MHz, equating to 18.9mA (all peripherals enabled) or 11.6mA (all peripherals disabled)
oscillator pins (for HSE and LSE) have been left floating, they also shouldn't be necessary
chip has not been programmed before by me, comes from RS-Online retailer
I checked that 3.3V reaches all VDD pins on the chip, directly on the top side of the pin on the package
I checked for shorts after soldering the chip on the breakout board and removed them all before hooking it up (my soldering of that LQFP100 did not go very smoothly.. removing bridges took a dozen attempts with soldering braid)
The ST-Link clone has worked absolutely fine in the past, connecting to STM32, STM8 and GD32 chips
attempting to connect when the chip was reset with BOOT0 = HIGH or LOW does not make difference, both fail
I'm out of ideas on why I can't connect to the chip via SWD, and also why the current consumption is so low in "run firmware" mode. With nothing programmed, the chip should encounter an invalid instruction pretty soon, hardfault and be stuck in a loop, but consumption does not show that :/.
AI: Mystery is solved.
I did not connect the SWDIO pin PA13 to the ST-Link, but, because I plugged it into the wrong pin on the breakout board, I was connecting it to PA12, some unrelated GPIO pin.
Once that single wrong connection is fixed, the connection works immediately.
What got me onto this path is that per comments, I was able to connect to the chip via the UART bootloader, so the chip was not dead as I thought.
Lesson of the day: Always triple-check your connections.
Also, it shows that the above schematic works, if properly connected.
Still thanks for the help and nudge to check connections again.
|
H: Clipping when using a CD4051 analog multiplexer
Regarding the CD4051 analog multiplexer,
Data Sheet
If I connect VDD to +5 V and VSS, VEE to GND. The analog signals are coming from modules with the analog signal having a DC offset that can be as low as -300 mv.
Does this analog signal pass through the multiplexer as is, or will it be clipped at zero volts (VEE)?
AI: It will work okay; in fact the series resistance will be near minimum, but leakage will be relatively high (perhaps in the few-µA range), increasing at high junction temperature (perhaps >100 µA at Tj = 100 °C) due to forward biasing of the isolation diffusion junctions and input protection diodes.
The transmission gate resistance will typically be more than 200 \$\Omega\$ (more at higher temperatures), so the error could typically exceed 10% at Tj = 100 °C even if the source is low impedance.
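To see where an error figure of that order can come from, treat the on-resistance and whatever loads the output as a simple divider; the numbers below are illustrative assumptions, not datasheet values:
$$\frac{V_{out}}{V_{in}} = \frac{R_{load}}{R_{on}+R_{load}} \approx \frac{10\ \text{k}\Omega}{1\ \text{k}\Omega + 10\ \text{k}\Omega} \approx 0.91$$
i.e. roughly a 9% error if the on-resistance has risen to about 1 kΩ at high temperature and the output sees a 10 kΩ load.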
If that's unacceptable (and keep in mind that higher negative voltage can damage the chip should excess current be allowed to flow) you might want to consider a charge pump such as the TC7660 or ICL7660. An 8 pin package plus two capacitors are all that is required.
Since the CD4051 (unlike the 4016/4066) includes input level shifters, you can simply connect the 0/5V control inputs directly. That is my recommendation if you need to accurately pass signals of -300mV over a range of temperature.
|
H: What is this chip in a switching power supply marked 1ADJM?
I'm trying to identify a chip, possibly a switch, used in a switching power supply.
It's 6-pin, and marked 1ADJM (or maybe IADJM)
AI: It is the MP1470 step-down converter from MPS.
Datasheet here.
|
H: How to implement SPI SCK with ESP8266
I have an ESP8266-01 and a Microchip 23LC1024 memory chip which communicates via an SPI bus, and I want to drive it using the ESP8266-01. My thought is to implement the SPI protocol in software, i.e. bit-banging. My concern is about driving the clock signal with very specific timing. Basically this is a generic question about SPI: if SPI is a synchronous protocol, which means that the clock of the master tells the slave when to read/write data, does the clock frequency have to be very specific? I mean, does the clock have to go HIGH and LOW at very rigid time intervals?
AI: The clock frequency is irrelevant, as long as it's low enough.
The datasheet says the maximum frequency is 20 MHz, so that limits the highest speed. That equals to 50ns clock period.
The other requirement for the clock is that it must be high for at least 25ns and low for at least 25ns, which totals up to 50ns.
The third requirement for the clock is that it must transition fast enough, it must go from low to high, and from high to low, in 20ns.
There is no requirement for a minimum clock frequency or a maximum time for the clock to stay high or low. You can transmit bits as irregularly as you want, and even take arbitrarily long pauses between bits.
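To illustrate that the bit timing can be completely irregular, here is a minimal mode-0, MSB-first bit-bang sketch. It assumes MicroPython's machine.Pin API; the pin numbers are arbitrary placeholders and no attempt is made to reach any particular clock speed:

from machine import Pin   # MicroPython; pin numbers below are arbitrary

sck  = Pin(14, Pin.OUT, value=0)   # clock idles low -> SPI mode 0
mosi = Pin(13, Pin.OUT)
miso = Pin(12, Pin.IN)
cs   = Pin(15, Pin.OUT, value=1)   # chip select, active low

def transfer_byte(tx):
    """Clock one byte out (MSB first) and return the byte clocked in."""
    rx = 0
    for i in range(8):
        mosi.value((tx >> (7 - i)) & 1)   # data valid before the rising edge
        sck.value(1)                      # slave samples MOSI on this edge
        rx = (rx << 1) | miso.value()     # sample MISO around the same edge
        sck.value(0)                      # any amount of time may pass here
    return rx

Each pass through the loop can take as long as it likes (interrupts, other tasks, and so on); the slave only cares about the edges, not about when they arrive.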
|
H: Measuring AC voltages with battery-powered oscilloscope
I'm wondering if I can view, just for a few seconds, the waveform of a small transformer on my DSO.
It can handle up to 800 V according to the manual (the transformer outputs are 100 V maximum). From what I know, connecting the DSO to an AC voltage will make the BNC connector live; is this true? If so, it wouldn't be a problem because I just want to see it for 2 seconds and then disconnect it. Will it blow up my scope?
If you can give me some links to study (for battery-powered oscilloscopes, not the usual ones), it will be greatly appreciated.
AI: Using a battery powered oscilloscope to make measurements is as safe as using a battery powered digital multimeter. Since it is not connected to mains power and is rated to handle the voltage, you should have no problems. As you noted, however, the input BNC connector, which is not grounded, will have voltage with respect to earth ground so you should be careful. In any case you should read the manual for your oscilloscope as it should have information about safety precautions.
|
H: Why does the number of holes in an N-type semiconductor increase with temperature?
I was watching this video.
I understand that the dopant atoms introduce huge numbers of free electrons that essentially overshadow any additional electrons that break free due to higher temperatures.
What I don’t understand is why the hole concentration increases. Surely, since we have so many free electrons, any additional holes that are created due to higher temperatures should get instantly filled by the large number of electrons. Hence, shouldn’t P remain constant?
AI: The point is: nothing is ever instant. Nothing. So, while the generation of free charge carriers of both polarities increases with temperature, and so does the recombination rate, the number of carriers available at any point in time increases just as well; the extra holes are not filled the very instant they are created.
|
H: How does starting torque increase by increasing rotor resistance?
As the formula below shows, the torque is both directly and inversely proportional to the rotor resistance. However, since it's inversely proportional to the square of the resistance, that is the predominant relation.
Increasing the rotor resistance reduces the torque.
But, in the below figure it gives the opposite effect, whereby increasing the rotor resistance tends to increase the torque.
Since the effective rotor resistance term is R2/s, and the slip is maximum at starting (i.e. close to 1) and reduces as the motor speeds up, at 1750 rpm the slip is 1/36, so the effective rotor resistance becomes very high:
36*R2
This means increasing the rotor resistance actually reduces the torque both at low and high speeds.
Am I right?
Source:
FE Reference Handbook 10.0.1
by NCEES
ISBN 978-1-947801-11-0
AI: Key points from Fitzgerald, Kingsley, Umans:
In the normal operating range, increasing R2 increases the rotor impedance, necessitating higher slip for a given torque.
Slip at maximum torque is directly proportional to R2 but the maximum torque is not changed by changing R2.
$$\frac{R_2}{s} = R_2 + \frac{R_2(1-s)}{s}$$
The power dissipated in R2 represents rotor resistance losses.
The power dissipated in R2(1-s)/s represents the electrical power converted to mechanical power, torque x speed.
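The points about maximum torque and starting torque are easy to verify numerically with the standard Thevenin-equivalent torque expression; the machine parameters below are arbitrary illustrative values, not taken from the handbook:

import numpy as np

V_th, R_th, X = 220.0, 0.5, 2.0      # assumed Thevenin voltage/impedance per phase
w_sync = 2 * np.pi * 1800 / 60       # synchronous speed in rad/s (4-pole, 60 Hz)
s = np.linspace(0.001, 1.0, 2000)    # slip from ~0 (synchronous) to 1 (standstill)

def torque(R2):
    return 3 / w_sync * V_th**2 * (R2 / s) / ((R_th + R2 / s)**2 + X**2)

for R2 in (0.2, 0.6):
    T = torque(R2)
    print(f"R2={R2}: Tmax={T.max():.0f} N.m at s={s[T.argmax()]:.2f}, "
          f"starting torque={T[-1]:.0f} N.m")

The maximum torque comes out (nearly) identical for both values of R2, the slip at which it occurs scales with R2, and the starting torque (s = 1) is higher for the higher rotor resistance, which is exactly what the curves in the question show.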
|
H: Can MOSFET withstand the ESD from drain when a TVS diode is installed between gate and source?
Assume that an N-channel MOSFET is used.
A TVS diode is inserted between gate and source.
The source is connected to GND. (Not real grounded (not earthed.))
The drain is open and connected to the external connector.
connector is touchable.
Can ESD at the drain connector damage the MOSFET?
Vgs is suppressed by the TVS,
the drain-source will break down and the charge will then flow into GND.
I have no idea about the influence of the rise in Vdg.
AI: The MOSFET should have a maximum dV/dt rating in its datasheet. Multiply that rating with the FET's output capacitance to get the maximum allowed drain charging current. Any ESD current less than this current should not damage the FET and be dissipated via avalanche breakdown, which most FETs are rated to handle up to an often very large maximum energy.
Let's take the IRF1010E as an example: Max dV/dt is 4V/ns, output capacitance is 690pF typical. 4V/ns * 690pF = 2.76A. The Human Body Model's internal resistance is 1500 Ohms, meaning that a 4kV HBM ESD event (2.6A peak) will not exceed the maximum allowed drain dV/dt and should therefore not damage the FET.
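The arithmetic from that example, as a quick sketch:

# Numbers quoted above for the IRF1010E and a 4 kV HBM event.
dv_dt = 4e9        # 4 V/ns expressed in V/s
c_oss = 690e-12    # typical output capacitance in F
i_max = dv_dt * c_oss
print(f"Max allowed drain charging current: {i_max:.2f} A")   # ~2.76 A

v_hbm, r_hbm = 4000.0, 1500.0   # Human Body Model: 4 kV through 1.5 kOhm
i_peak = v_hbm / r_hbm
print(f"HBM peak current: {i_peak:.2f} A")                    # ~2.67 A < 2.76 A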
If possible, adding a small capacitor from drain to ground will improve the ESD handling capability further. A TVS diode of course won't hurt either; however, the FET itself can act as a very effective voltage clamp as well due to avalanche breakdown, so an external TVS diode might be redundant. Keeping dV/dt under control is more important for the FET's survival, hence the capacitor.
|
H: How do you calculate power loss for a diode?
For example, the DSB20I15PA Schottky diode datasheet mentions a threshold voltage of 0.23V and a slope resistance of 7.2mΩ.
I need to calculate the power loss at different voltages, as the battery drains. e.g. 14V down to 10V.
Is it as simple as subtracting 0.23V from your source voltage, or is it more complicated than that? Do you need to factor in the slope resistance too? Does current play a factor? How about diode temperature?
I'm using a 12V battery and a multimeter to measure the drop (by measuring pin to pin on the diode). In my test circuit, the drop appears to be 0.325V when there is a load, but only 0.168V when there is no load. The load is a DC motor.
AI: The two parameters given are a linear approximation of a portion of the V-I curve of the Schottky diode at a certain junction temperature. Vf0 is a theoretical intercept, not something you can measure directly, whereas rF is the average slope of the curve (in the example curve, between the points of about 65A and 210A, so you need to be able to produce those currents to measure it).
$$V_F = V_{F0} + I_F \cdot r_F$$
The power loss is the product of the instantaneous forward current and voltage.
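A minimal sketch of that calculation using the DSB20I15PA numbers quoted in the question; the example load currents are assumptions for illustration:

V_F0, r_F = 0.23, 7.2e-3   # threshold voltage (V) and slope resistance (ohm)

def diode_loss(i_f):
    """Conduction loss of the linearised diode model at forward current i_f."""
    v_f = V_F0 + i_f * r_F          # Vf = Vf0 + If*rF
    return v_f * i_f                # P  = Vf*If = Vf0*If + rF*If^2

for i in (1.0, 5.0, 10.0):          # example currents in amps
    print(f"If = {i:4.1f} A -> Vf = {V_F0 + i*r_F:.3f} V, P = {diode_loss(i):.2f} W")

Note that the loss depends on the current drawn by the load, not directly on the battery voltage; the battery voltage only matters insofar as it changes that current.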
Here is information on a different device, illustrating the approximation:
The maximum is going to be higher than the typical, as shown in the curve.
The maximum power loss from forward current will be higher at lower junction temperatures since the forward voltage will be higher, but usually that's not a concern.
|
H: Why don't I get 0 V when I put 5 V into my CMOS inverter?
When I have 5 V at the input of my CMOS inverter, my NMOS should be on and the output should be 0 V; however, there is a residual voltage. I am using the MbreakN3 NMOS in PSpice.
I don't think this is a drain-resistance (rd) issue, because a YouTube tutorial uses the same device and it works.
Thank you
AI: That's not a valid CMOS circuit.
The P-FET is upside down, you have drain up and source down.
|
H: Common grounding between two microcontrollers
I am trying to get two microcontrollers to communicate. One is a Tiva-C TM4C123GH6PM (3.3 V.) The other is an Arduino (5 V.) I will use UART.
Should the grounds of the two microcontrollers be connected to each other? Why is a common ground needed?
AI: Why is a common ground needed?
'Ground' is the reference point in a circuit which is assumed as 0V. Voltages at different nodes are quantified with respect to this ground point.
So if two microcontrollers are 'talking' to each other serially in terms of signals, then both micro-controllers should have a common reference point to 'understand' or 'agree upon' what's the voltage level of the signal coming from the other micro-controller. If one sends 3.3V, the other one needs to read it as 3.3V itself.
So yea, they should have a common ground.
|
H: where to place registers in VHDL modules
I'm a software guy by trade and I have been dabbling in digital design on FPGAs using the open source toolchain. I have made a few designs and generally understand how they handle Verilog and VHDL.
One of the things I'm wondering is not really a black and white question but rather I'm wondering whether there is a "best practice".
I generally divide the designs I make into modules. For example I have a uart that I can include in a design. Now the transmit part of the uart has a feedback mechanism that indicates whether it is ready to receive another 8 bit word to transmit.
Now I could choose to make either the input or the output registered, essentially forcing a pipeline. However, this would have some impact on users of the design. If I register the input or the output, the feedback is always delayed by 1 clock cycle. This is not so bad and might even be what you want. But if both input and output are registered, the feedback is delayed by 2 clock cycles, which might be confusing for users.
Is there some kind of best practice to follow here, as in: most of the time, register your input or your output?
Any help is appreciated!
AI: UART might not be the best example to drive this question, since the external activities associated with a UART are generally so slow relative to the other activities internal to the FPGA that an extra clock or two in a handshake hardly makes any difference at all.
However, in the high-speed video signal processing pipelines that I build (the bulk of my work these days, it seems), I generally register the outputs of each module. Then any modules receiving these signals can use combinatorial logic on them without worrying about excessive input delays, glitches, etc.
Inputs get registered only if they are known to be coming from a different clock domain or a completely asynchronous source, in which case, I have a library of CDC (clock domain crossing) modules that I use.
|
H: Low frequency transformer core material
For the first time in my life, I am trying to build a low frequency transformer. I'm used to seeing low frequency transformers built with what I think is laminated silicon steel.
I have some difficulty finding a graph that compares low frequency core materials as a function of core losses. I think that at those frequencies (50/60 Hz) the core losses are really low. Even if that is the case, I would appreciate having some data to convince myself.
If you know where I can find a graph which compares the different magnetic materials at those frequencies, or have any tips to give, it would be a pleasure to hear them :)
Thank you very much and have a nice day :)
AI: Core flux losses decrease with thinner silicon transformer steel and were originally graded by a number that represented the loss in W/kg. Then as they were able to produce very thin materials of cold rolled grain oriented steel or CRGOS the numbers did not always correlate to precise losses, yet were able to reduce losses from 0.9 W/kg down to 0.3 and even less.
Iron and hot rolled steel have far greater losses, while ferrite has far lower permeability.
With the key words in this answer you will be able to search and find suppliers with all the ratings and graphs with the tips you are seeking.
The losses are a function of eddy currents in the thickness of the material. Each layer is coated with a very thin silicate coating for electrical insulation, which gives each layer high capacitance across a tiny gap yet very high permeability, increasing the saturation level of the layers and giving greater linearity. The LC equivalent circuit of the core thus makes it suitable only for low frequencies.
MOT (microwave oven transformer) units are also made this way but are welded across the seams to eliminate hum; that shorts all of the insulated laminations and makes them far less efficient as a power transformer. For some high voltage cores it is important to remember, if you want higher performance, to keep the edges clean of magnetically conductive dust particles, which is another topic: insulation breakdown from contamination-induced partial discharge.
This is an answer with clues rather than what I would do to explicitly hand you the answer from a web search to refresh my experience.
|
H: Arduino EMF detector / non-contact voltage detector
I am trying to build an EMF detector or non-contact voltage detector based on an Arduino to be used as a prop. The detected EMF value should be read from an analog input and based on the level, 1-5 LEDs should light up. I found two circuit designs which didn't look too complicated.
The first "circuit" basically consists of a floating pin / wire which is pulled to ground with a high value resistor (1 or 3.3 MΩ).
However this design didn't work for me, I was able to detect turned on lamps, extension cords somehow. But the circuit wasn't very reliable. Often it triggered once and stopped after that or turned on randomly. I tried various resistor values and "antenna" types like copper plates, long wires and coils.
I had much better results with the following circuit. First I didn't use the Arduino at all and it worked perfectly, although the range was very limited. I also didn't use the 1 MΩ and 100 kΩ resistors and I don't really understand the purpose of them. Limit the current to avoid accidental triggering?
Now, instead of turning on a LED with the second circuit, I want to somehow measure the current at the third transistor. My first idea was to remove the LED and measure the voltage at the 220 Ω resistor in reference to ground. This didn't work at all, I always measured 5 V, which makes sense I guess. Do I need two resistors instead to create a voltage divider?
AI: The purpose of the high value resistors is essential to this circuit. Together with the transistor stages, they provide a very high amplification of the signal that is received on the copper strip.
You do not need to remove the LED. You should be able to measure the voltage of the node between the resistor and the LED with an Arduino. But you would need a 5V instead of 9V supply, because the Arduino ADC can only handle voltages up to 5V.
|
H: Why is this dynamic microphone grounded / wired with more than the two pins? is it important?
I am rewiring a dynamic microphone that does not require any phantom power (it's also wickedly simple, with no on/off switch), and I noticed that when I wire only the two wires to two of the three pins, it still works and seems to sound great.
When I have taken apart other microphones and looked at how they are wired, they all seem to have this same configuration of having one wire go to what I think may be the ground pin and then the third XLR pin.
There is probably some logical reason but I don't know why.
Is it not safe / proper to wire only the two even if it works? What is the ground for and what is the purpose it is wired like this?
I had a friend mention something like if you use phantom power on a dynamic microphone it can mess it up, but I leave my phantom power on all the time and it doesn't do that. Is that why it is wired like this? (I attached the pictures to show.)
I have very little knowledge about electronics / I mostly have just hacked my way through in the past / mess with it until it works, and I don't really know what the ground is for / what the lead means and all the proper terminology. I want to make sure I am doing it properly to the standards of the industry.
AI: This may be most easily understood by looking at an old-fashioned transformer balanced configuration.
simulate this circuit – Schematic created using CircuitLab
Figure 1. (a) An unbalanced microphone feeding a 600 Ω (Lo-Z) input. (b) A balanced version of the microphone feeding a high impedance unbalanced input using a transformer. (c) A phantom-powered microphone circuit using balanced / unbalanced transformers.
How they work:
Figure 1a shows a simple unbalanced configuration. One of the mic capsule's wires is connected to shield. The circuit is unbalanced and may be prone to hum pickup.
Figure 1b is balanced and will be much less susceptible to hum as the hum will be "common mode", appear on both inputs to the transformer and get cancelled out. This is likely the situation you are describing and it should be clear that disconnecting the screen doesn't affect the microphone but will likely result in more noise as the microphone case will not be grounded.
Figure 1c shows a phantom power configuration. The 48 V supply is fed in the centre-tap of XFMR2. Current splits both ways in the transformer with the result that the DC current is running opposite directions in each half of the winding so the flux in the core cancels out and the transformer doesn't saturate.
At the microphone end the reverse happens and the 48 V is collected, smoothed and used to power the microphone preamp.
Return current is through the screen or shield wire (pin 1 on an XLR).
I had a friend mention something like if you use phantom power on a dynamic mic it can mess it up, but I leave my phantom power on all the time and it doesn't do that is that why it is wired like this?
simulate this circuit
Figure 2. Connecting a balanced mic to a phantom-powered line.
As you can see from Figure 2, connecting your mic to a phantom-powered line will apply the +48 V to both terminals of the mic coil. Since there is no voltage difference between the two terminals no current will flow, there will be no degradation of performance and no damage will occur. This is not an accident; microphone phantom power was designed that way!
I've used transformers in all the examples above as it's very easy to see what's going on. Modern systems will use electronics in place of the transformers for reasons of cost and quality.
|
H: 8051 timer delay calculation
In the following program
MOV TMOD, #01H
HERE: MOV TL0, #0F2H
MOV TH0, #0FFH
CPL P1.2
ACALL DELAY
SJMP HERE
DELAY:SETB TR0
AGAIN: JB TF0,AGAIN
CLR TF0
CLR TR0
RET
It is asked to calculate the time delay generated by the delay subroutine.
Clearly the counter or timer counts a total of FFFF - FFF2 = 13, + 1 (one more count for setting TF0), i.e. 14 counts.
Note the crystal frequency used here is 11.0592 MHz, hence the timer frequency would be 11.0592 / 12 = 921.6 kHz, hence one cycle length of the timer is 1/921.6 = 1.085 μs.
Hence the delay should be 14 × 1.085
but the answer says it is 28 × 1.085 as shown here:
AI: 14 × 1.085 µs is closer to the correct answer for the question that was actually asked.
However, the solution ignores altogether the time required to execute the instructions in the DELAY subroutine.
When you insert a call to the subroutine into a sequence of instructions, the following additional instructions are executed:
ACALL DELAY 2 cycles
DELAY: SETB TR0 1 cycle
AGAIN: JB TF0,AGAIN 14 cycles (2 cycles * 7 iterations)
CLR TF0 1 cycle
CLR TR0 1 cycle
RET 2 cycles
---------
21 cycles total (22.786 us)
The final sentence of the solution is answering a different question: How long does it take for the output pin to finish a complete cycle?
That loop actually includes not only the delay calculated above, but also the additional instructions above and below the call.
HERE: MOV TL0, #0F2H 2 cycles
MOV TH0, #0FFH 2 cycles
CPL P1.2 1 cycle
ACALL DELAY 21 cycles (from above)
SJMP HERE 2 cycles
---------
28 cycles total (30.382 us)
So a complete cycle of the output bit, or two iterations of this loop, would require a total of 60.764 µs.
This illustrates why it is rarely useful to involve the hardware timer for short delays. A simple DJNZ loop uses much less code.
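The cycle counts above can be cross-checked with a few lines (a sketch; the per-instruction cycle counts are taken from the tables above):

f_xtal = 11.0592e6
t_cycle = 12 / f_xtal                 # one machine cycle = 12 oscillator periods
timer_counts = (0xFFFF - 0xFFF2) + 1  # 14 counts until TF0 is set

delay_sub  = 2 + 1 + 2 * 7 + 1 + 1 + 2   # ACALL plus DELAY subroutine = 21 cycles
loop_total = 2 + 2 + 1 + delay_sub + 2   # one pass of the HERE loop = 28 cycles

print(f"machine cycle        : {t_cycle*1e6:.3f} us")               # ~1.085 us
print(f"timer-only delay     : {timer_counts*t_cycle*1e6:.2f} us")  # ~15.19 us
print(f"one pass of the loop : {loop_total*t_cycle*1e6:.3f} us")    # ~30.38 us
print(f"full output period   : {2*loop_total*t_cycle*1e6:.3f} us")  # ~60.76 us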
|
H: What components would I need to receive SMS-CB messages?
I could simply use an LTE modem for this (with a SIM card). But since SMS-CB is an FTA (Free-To-Air) signal, having to buy a SIM card for this (I know, it's not expensive) would be overkill.
I am specifically looking to intercept EU-Alert/NL-Alert CB messages, for a little side project (using Arduino/uC).
What components could I use and where can I find some useful documentation to implement this into hardware and code (frequencies, data formats, etc)?
AI: You'd still want to use a 2G/LTE modem. That's it. You can certainly implement a GSM receiver as a software defined radio (there's really more than enough material on that), but it makes no sense in this context: you want to use a normal GSM functionality, so use normal GSM hardware.
And because you need to listen on the appropriate broadcast/cell organization channel in the case of GSM, or, much more complex technically, monitor the OFDM frames that each base station broadcasts to all subscribers in their cell¹, you'd really want to lock on to that base station.
That's basically all that a modem does in the absence of data to communicate.
So, get a modem. I'm not sure you need an active subscriber identity to make modems listen purely and receive cell broadcasts; in GSM, you could send SMSCB on the basic or extended cell broadcast channels; you should be able to read the basic channel without being registered, but whether or not your specific modem does that without a SIM is a different question.
In 3G, the cell broadcast service is called Service Area Broadcast, and due to the significantly more complex air interface, you'll really want a modem that has locked onto the basestation.
Since this is probably in context of German emergency systems, 3G won't help you much, but 2G is going to be around for a while; so I'd guess a pure GSM modem might be the way to go.
¹ intentionally not using the technically correct abbreviations here, because, man there are many
|
H: AC circuit working principle
When a power source is AC (alternating current) one is a phase and the other is neutral.
Does phase and neutral lines change their polarity over time or only phase line voltage and current up and down sinusoidal form? Which one is correct? If it is not changing the polarity, how do rectifiers work?
AI: When a power source is AC (Alternative Current) one is a phase and the other is neutral.
AC stands for alternating current, not "alternative" current.
The neutral is so-called because it has been neutralised by connecting it to earth / ground.
simulate this circuit – Schematic created using CircuitLab
Figure 1. Two AC supplies.
(a) has no ground reference and so neither line has been neutralised and neither can be called "neutral".
(b) has one secondary connection connected to earth and so that conductor is neutral and the other is live.
Does phase and neutral lines are changing their polarity over time or on phase line voltage and current up and down sinusoidal form?
Your sentence is rather confused. Neutral has very little voltage on it and can be considered to be at 0 V for basic analysis. The live wire voltage oscillates in a sinusoidal wave from peak positive voltage to peak negative voltage.
If it is not changing the polarity of how rectifiers work.
You can't "change the polarity of how rectifiers work". You can, however, edit your question to clarify. If you do that we can try and help you further.
simulate this circuit
Figure 2. (a) The full bridge rectifier. (b) When the AC is positive (as defined by the secondary dot) D5 and D8 conduct. (c) When the AC is negative D11 and D10 conduct.
No matter which polarity the AC is the diodes steer the current to the positive output.
|
H: Can't turn on NMOS when using 1M pull-down resistor on gate
I am using an ATmega328P MCU to control ignitors which need about 500 mA to work. To provide such current, I use an AO3400A N-MOS to control the ignitors. In the early design, I didn't use pull-down resistors. The N-MOS can fire the ignitors when the MCU outputs 5 V on its gate.
However, the ignitors may fire unintentionally at power-on (the MCU is resetting during this period of time and the I/O is unstable). So I put a 1 MΩ pull-down resistor on the N-MOS's gate, which holds it at 0 V while the MCU is in reset. It successfully avoids unintended ignition. But I found the N-MOS can't fire the ignitors anymore when the MCU outputs HIGH.
My question is: why does such a large resistor stop the N-MOS from working? According to the MCU's datasheet, its I/O can source a maximum of 40 mA. In my view that's quite large, which means a 1 MΩ resistor shouldn't reduce the I/O's ability to drive the N-MOS's gate.
AI: Most likely you are getting a low gate voltage because of an incorrect assumption that your R value is 1 MΩ; e.g. R1 and R5 might be in incorrect positions. You typically need Vgs > 2 V to start conducting __ amps. The FET seems to be “overkill” for the load current and voltage stated.
Add a flyback diode for any inductive load.
|
H: Math behind pull-up resistors
I have a question about the math behind pull-up resistors and how they work. I don't have any experience in the electrical engineering field, so it's probably a basic question.
I saw many explanations regarding WHAT pull up resistor is doing but rarely HOW it is doing it.
Consider that I have the following circuit, where:
$$V_{cc} = 5V$$
$$R_1 = 10K\Omega$$
$$R_2 = 100M\Omega$$
Now, there are two cases:
The button is unpressed. In such case we have: \$I = \dfrac{V_{cc}}{R_1+R_2} = \dfrac{5V}{10,000 \Omega + 100,000,000 \Omega} = 0.000000049A\$ and then the voltage at \$R_2\$ is \$V_2 = R_2 \cdot I = 100,000,000 \Omega \cdot 0.000000049A = 4.9V\$ and so the input pin see the input as HIGH. Are my calculations correct here or something else is going on in the circuit?
The button is pressed. I do know that the input pin see the input as LOW in such case (some value that is close to \$0V\$) but I don't know how to calculate it.
Thanks.
AI: The button is unpressed. Are my calculations correct here or something else is going on in the circuit?
Your calculations are correct. Few of us would bother with the calculations as the input impedance is so high. With those values of resistors you have loads of margin because the input voltage is so far above the logic 1 maximum threshold (which will be available in the datasheets).
The button is pressed. I do know that the input pin see the input as LOW in such case (some value that is close to 0V) but I don't know how to calculate it.
Do the same calculations again but with R1 as 10 kΩ and the button as, say, 100 mΩ in parallel with R2. You'll quickly see that you can ignore R2's contribution and that the result is ridiculously close to 0 V and well below the minimum low threshold. Again, few of us would bother to check this unless our switch had some odd properties that caused it to have a high resistance.
simulate this circuit – Schematic created using CircuitLab
Figure 1. A quick simulation for a 100 mΩ switch.
Can you show me an example of how to calculate it?
For parallel resistors the equivalent is \$ R = \frac {R_1 R_2} {R_1 + R_2} = \frac {0.1 \times 100M} {0.1 + 100M} = \frac {10M} {100M} = 0.1 \ \Omega\$. The 100 MΩ in parallel doesn't make any difference. Don't get hung up on precision. Your 100 MΩ resistor will be ±1% (1 MΩ).
I'm not sure how to calculate the voltage when you have two resistors in parallel after one resistor in series.
Get the equivalent of the parallel pair as I have done above and then do the series calculation. \$ V_{out} = \frac {0.1}{10k + 0.1} 5 = 50 \ \mathrm{\mu V} \$.
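The same numbers, as a throwaway sketch:

def divider(v_cc, r_top, r_bottom):
    """Voltage at the junction of a simple two-resistor divider."""
    return v_cc * r_bottom / (r_top + r_bottom)

def parallel(r1, r2):
    return r1 * r2 / (r1 + r2)

V_CC, R1, R2, R_SW = 5.0, 10e3, 100e6, 0.1   # R_SW: assumed closed-switch resistance

v_open   = divider(V_CC, R1, R2)                   # button not pressed
v_closed = divider(V_CC, R1, parallel(R2, R_SW))   # button pressed
print(f"open:   {v_open:.4f} V")         # ~4.9995 V, a solid logic HIGH
print(f"closed: {v_closed*1e6:.0f} uV")  # ~50 uV, a solid logic LOW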
The current should split somehow proportional to the resistor value?
Well, proportional to the inverse of the resistance values.
Also, the voltage after the switch should be close to 5V, isn't it?
Not when it's closed. It will be closer to zero.
|
H: What happens when we reach the maximum attenuation in Optical Network Terminals (optical fiber)?
Let's suppose we have the following Optical Network Terminal:
According to the datasheet, the receiver sensitivity is -27 dBm.
Now let's suppose we have the following fiber optic network:
As we can see, the optical power meter shows that we have reached the limit (-27 dBm). So I have several questions:
1) If we reach that limit (-27 dBm), does it affect the connection speed? For example, if we pay for 300 Mbps connectivity, will this speed drop to 50 Mbps, 30 Mbps or 10 Mbps? Or does it just cause intermittent connectivity?
2) This second question is a little more elaborate:
If we have -27 dBm, then we have:
Does it mean that our ONT can work down to 2 µW (microwatts)?
3) Why does the -27 dBm matter? Why not just raise the power (watts) at the Optical Line Terminal and trade off the losses and attenuation with more watts?
AI: 1) Below -27 dBm, it stops working. That's the definition of "sensitivity". Of course, if you're at -26 dBm, your mutual information is not great, so you might experience lower data rate. How specifically that is implemented depends completely on the communication standard and the actual device.
2) Yes. That is the definition of dBm. What's the question?
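For reference, the conversion behind that figure:
$$P = 1\ \text{mW}\times 10^{-27/10}\approx 2.0\ \mu\text{W}$$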
3) Because you can't. This might simply be the maximum output power your transmitter can do, or the fiber is too nonlinear at higher powers, or this is a passively split multi-user medium where you can't just adjust parameters per user.
|
H: Synchronous DC-DC buck converter I saw online
I constructed this synchronous buck converter circuit I found online and I wanted to know how reliable it actually is. (I used a battery source for the VCC+ pin on the op-amp because the 12 V line was too noisy for some reason.) Also, I've seen people say that the buck circuit's output voltage is supposed to INCREASE when you lower the load's resistance, which doesn't make much sense to me (more current => lower voltage?); besides, this circuit's output voltage drops the more you decrease the load resistance. I would also appreciate it if someone were to show me a better alternative for a synchronous buck converter. I am mainly interested in designing a buck converter to learn more about switch-mode power supplies rather than doing it out of necessity.
For people who want to simulate it using LTspice:
https://drive.google.com/file/d/1iq6pzlwegOPHI3BKrszCXosdomPM4ChM/view?usp=sharing
Note: I apologize for the messy circuit.
AI: This buck converter has no feedback loop. Consequently
It will have significant voltage transients when the input voltage changes (for example power on), or when the load changes.
The steady state output voltage will depend upon the input voltage.
An open loop buck converter such as this will thus only serve your purpose if the above behavior is acceptable to you. You probably want to use voltage feedback, to mitigate those problems, but you haven't specified the application for your buck converter, so whether or not such behavior is acceptable is only something we can guess at, with the information available to us at this point. (The guess is you won't be happy).
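To make the second point concrete: for an ideal synchronous buck in continuous conduction the average output simply tracks the input,
$$V_{out}\approx D\cdot V_{in}$$
so with a fixed duty cycle \$D\$, a 10% change in \$V_{in}\$ shows up directly as a 10% change in \$V_{out}\$; there is nothing in the loop to correct for it (or for load-induced drops across the parasitic resistances).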
Additionally, as has been pointed out in the comments, using op-amps as gate drivers is probably not the best choice. In particular, the op-amp you have chosen, OP07 is not designed to output less than 2V above the negative rail. The mosfet you have chosen, the Si7234DP has a maximum threshold voltage of 1.5V. Thus, there is the real possibility that the op-amp, even when saturated toward the negative rail will keep the mosfet partially on. Another problem with the op-amp part of your circuit is that you have the V+ held at ground. To drive the op-amp output high, V- needs to be driven negative, below any offset voltage the op-amp may have. However, the recommended operating condition is for the common mode input to be at least 2V above the negative rail. So, all in all, that op-amp, as configured, does not reliably switch that mosfet.
However, in simulations, I have done similar to you and used op-amps to drive mosfets.
That is because the op-amp model was available, and models for gate driver chips for that particular simulator were not available. However, if you are using spice, you can quite often download models of chips from vendors. In a real circuit, you most likely don't want to use an op-amp to drive the gate. You certainly don't want to use that op-amp in that configuration to drive that particular mosfet. You will not be happy.
What would work, for many applications, is to use one of the many available buck converter IC chips together with the appropriate external circuitry, such as inductor, capacitor(s) and feedback resistors and possibly compensation components (capacitor and resistor).
Edit: Answering a question from @mkeith
Can you elaborate on the problem with open-loop operation at light loads? For some reason I thought that a fully synchronous boost converter (always in synchronous mode) would act sort of like a DC transformer even at light loads, including no-load.
My answer originally made a statement that open loop (i.e. fixed duty cycle) synchronous buck converters will, under light loads, change their output voltage based upon the load. That was incorrect. (I did model the OP's circuit -- not exact components, and it did have a voltage rise when the load was increased. However, I now believe it was due to the op-amp being operated outside of its recommended operating conditions, and leaving the low side mosfet in my model partially, but not fully on).
Here is a model of an open-loop (i.e. fixed duty cycle) synchronous buck converter. I used two square waves 180\$^\circ\$ out of phase to drive the switches, as opposed to using the op-amp as an inverter as the OP did. It indeed acts sort of like a DC transformer even at light loads. The output is 6V as RL ranges from 1\$\Omega\$ to 1M\$\Omega\$. I did not attempt to avoid shoot through, and that may account for much of the noise after the initial transient has dissipated.
simulate this circuit – Schematic created using CircuitLab
Here is a sample output.
|
H: How does a half-flash ADC work?
Specifically:
Why does the first 4-bit ADC output the 4 MSBs? Say an input value of 8 corresponds to an 8-bit output of 0000.1111, where 0000 is the MSBs and 1111 the LSBs. If that same input value of 8 enters the first 4-bit ADC, that ADC has to output 1111 - and that is the LSBs of the 8-bit output, not the MSBs. What I also don't understand is that the input is supposed to be an 8-bit value (from 0 to 255) but a 4-bit ADC can process 0-15.
I guess the concept of the half-flash is to feedback the quantisation error into the first 4-bit ADC and per cycle I can raise the resolution by another 4 bit. First cycle 4-bit resolution (or 4 levels), second cycle 16 bit, and 64 bits on the 3rd cycle and so on. I could theoretically repeat the process and reach infinite resolution ... what is the point of having the second 4-bit ADC?
AI: Consider a 2-digit BCD half-flash ADC with a range of 0 to 1 (actually 0 to 0.99) volts. Assume an input of 0.72 volts, and the first ADC has a range of 1 volt. The first ADC will put out the first digit of the result - in this case, 7. The internal DAC will reconstruct that to 0.7 volts. At the subtractor, the 0.7 is subtracted from the input, leaving .02 volts.
What the block diagram you show does not make clear is that either the 2nd ADC has a much smaller range than the first, or the subtractor has gain. In this case let's say that the two ADCs are identical, and the subtractor has a gain of 10. Then the second ADC will have an input of 0.2 volts, and will produce an output of 2. The result will be 72, which is correct.
Of course, for a binary system as shown, the subtractor gain will be 16.
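The same idea for the binary case, as a minimal numerical sketch with ideal converters assumed:

def half_flash(vin, vref=1.0, bits=4):
    """Two-stage (half-flash) conversion built from two ideal 4-bit stages."""
    lsb_coarse = vref / 2**bits                    # coarse-stage LSB (1/16 V here)
    msb = min(int(vin / lsb_coarse), 2**bits - 1)  # first flash ADC: 4 MSBs
    residue = (vin - msb * lsb_coarse) * 2**bits   # subtract DAC value, gain of 16
    lsb = min(int(residue / lsb_coarse), 2**bits - 1)  # second flash ADC: 4 LSBs
    return (msb << bits) | lsb                     # assemble the 8-bit result

vin = 0.72
print(half_flash(vin), int(vin * 256))   # both print 184: matches an ideal 8-bit ADC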
The trick with a half-flash is that the first ADC produces a small number of bits, but the non-linearity is better than 1 lsb of the total range. That is, when the output is fed into the DAC, the output fed to the subtractor will not have an uncertainty of 1 bit at the first ADC, but rather 1 bit at the overall input.
In the case of the BCD version, the output of the DAC will step by 10 lsbs of the input range, and be accurate at that scale. If you will, the first ADC/DAC will provide values of 0, 10, 20, 30, etc.
EDIT - per a question in comment:
About the resolution and nonlinearity of the first ADC. Let's say that it is a "normal" ADC, with +/- 1/2 lsb uncertainty. Then, in the example, when the first ADC reports 7, this could mean anywhere from 0.65 to 0.75 volts. When you run this through the subtractor, the subtractor output (gain of 10) could be anywhere from .7 to -.3. This obviously isn't going to work. Instead, the first ADC must still represent 10 levels, but they must be accurate to +/- 1/2 lsb of the overall resolution. So, a 0 can be anywhere in the range of 0 to .1 +/- .01. 1 can be anywhere in the range of 0.1 to 0.2, with the limits accurate to +/- .01. And so on. I hope this helps.
|
H: Printing or "Stamping" a Transistor
I've looked around at quite a few websites to see if it is possible to make a transistor at home. I found that recently they have successfully "printed" nanocellulose using an ink jet printer (https://hackaday.com/2021/05/12/3d-printed-transistor-goes-green/). However, I doubt that ink would be something you could easily acquire at home.
I wondered why powdered doped silicon printed out of an inkjet wasn't done but I'm guessing it is too porous or something to make it effective. Also the "secret yellow dots" (https://en.wikipedia.org/wiki/Machine_Identification_Code) would probably really mess up your circuit. I looked into fox hole radios (there's a PN junction there) and that is not reliable because you have to find a "hot spot".
While searching stack exchange someone mentioned that DIYers had previously made "point-contact transistors". It appears that it is two pieces of gold foil with germanium in between. As exciting as printing out gold dust slurry and germanium slurry would be on an ink jet... ... would it be possible to use gold foil and germanium foil and a dot matrix printer to "stamp" a transistor out by layering them? I'm sure the best solution is "just try it", but hopefully there is someone that knows the physics well enough to know if it is possible.
AI: "Powdered" anything means amorphous. Amorphous means not crystalline. Not crystalline means no clearly defined band structure. No clearly defined band structure means no semiconductor behavior. No semiconductor behavior especially means no transistor.
You need a chemical/physical step that forms a periodic potential structure to get something like a semiconductor. So, growth of oxide crystals on an old razor blade works, taking that oxide as powder does not.
So, anything you can print easily won't work, sorry.
You can make transistors at home, but it will tend to look more like Sam Zeloof's garage than like a slightly modified inkjet:
|
H: Why the "Extra" Diode in this DC Motor Driver?
I've been shown this circuit, which uses a MOSFET to allow a 5V signal (from an Arduino Uno) to drive a 5V DC motor. I know that diode D1 is a flyback diode to protect the motor when it turns off, but what is the function of diode D2?
EDIT: I misspoke; D1 protects the MOSFET, not the motor. Whoops!
AI: The inductance of the motor will ring with the output capacitance of the FET. The Schottky diode in series with Q1 will reduce noise.
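For a sense of scale, the ring frequency is set by the motor inductance and the FET output capacitance; the values below are assumptions purely for illustration:
$$f_{ring}\approx\frac{1}{2\pi\sqrt{L\,C_{oss}}}\approx\frac{1}{2\pi\sqrt{1\ \text{mH}\times 500\ \text{pF}}}\approx 225\ \text{kHz}$$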
Motor control circuits are notoriously noisy. If noise is not a problem, the series diode may be removed. Modern standards for radiated and conducted noise are much more stringent that was the case many years ago.
|
H: Optocoupler Package Markings
I have a 4PIN dip octocoupler that’s cracked on the board of a power supply I’m fixing.
The markings on the optocoupler are
CT
817C
953K
What does the 953K mean? Is it a manufacturing batch number?
Thanks in advance!
AI: According to CT817C datasheet from CT Micro:
9 = Fiscal Year
53 = Work Week
K = Manufacturing Code
|
H: Recreating Arduino UNO - PCB design (How are the components connected?)
I'm trying to recreate the Arduino UNO in order to get good at PCB design using EasyEDA or Altium. I have recreated the Arduino Uno schematic and design, but it is hard for me to figure out a few things (please refer to the image):
Why are those parts left aside from the main microcontroller assembly (denoted by numbers)?
I have figured out how to connect those connections using an eye tool or net wiring, but could not figure out what those numbers (pointed to by arrowheads) on the resistors were.
Is there anything I should be reading or referring to in order to draw PCBs?
AI: The sections of the schematic you have numbered 1 to 7 are simply a way of making the schematic more readable. Even though they look separate, they are interconnected with the MCU by net names (Vin, +5V, GND, etc.). The resistors in box "5" are just leftover resistors that are not used, but are still a part of the resistor network.
As @wouter-van-ooijen says the resistors are actually a "resistor network"; a single component containing multiple resistors. See the image below of the two types of resistor networks available; yours are most likely the bottom one.
There are many good resources online; one of my favourites is Chris Gammel's Getting to Blinky (even though it is for KiCad, the principles are the same).
|
H: How does this "single-ended push-pull with transformerless driver" work?
I'm studying the circuit block below in an Italian textbook, "Dispositivi e circuiti elettronici", by M. Gasparini, D. Mirri.
The circuit is called "single-ended push-pull with transformerless driver".
Without giving any comprehensive explanation, the book says that, if C2 is sufficiently high, then, dynamically:
The potential on point B may rise higher than VCC
The potential on point A' (which is very near to VCC in the quiescent condition), may increase by about VCC/2
Now, what is not so clear to me is how to clearly explain the 2 points above and, effectively, what is the push-pull operating class (A, B, or AB) in this condition?
I've searched in different books (and on the net) the above scheme, without finding any reference/explanation or what the circuit is called.
AI: Answering the last question first, the T1/T2 pair must be operating class B. You would not want any significant current passing through them in the quiescent state, in order to allow the R1/R2/R1/R2 chain in the middle to hold node A at Vcc/2.
Assuming that R' is significantly less than R, then node A' will be close to Vcc in the quiescent state. This means that the voltage across C2 is close to Vcc/2.
Now suppose that the current through T3 is reduced. Since the voltage drop across the upper R drops with decreasing current through T3, the voltage at B rises faster than the voltage at A'. This drives current into the base of T1, turning it on. This will drive node A close to Vcc, and since the voltage across C2 can't change rapidly, this will drive A' above Vcc by about Vcc/2. This is a form of "bootstrapping". And even though B starts from a lower voltage in the quiescent state (presumably somewhere around Vcc/2), it too can reach a voltage that's higher than Vcc.
Basically, during this high output state, C2 is discharging through both R' (back to the power supply) and R (providing the base current for T1). That's why it has to be "sufficiently large".
Here's a simulatable version of the schematic. I'm not sure what to make of the results, however. Maybe more later.
simulate this circuit – Schematic created using CircuitLab
|
H: Unexpected voltage from my H bridge
I have a problem with my H-bridge. I don't understand why I get -1 V instead of 12 V or -12 V.
AI: According to the comments below my post, I used power MOSFETs and it works; I assume that it depends on the internal resistance.
I used the BSP171 P-MOS and power_mbreakn.
|
H: Combinatorial loop of SR latch
I implemented an SR latch in Verilog.
module sr_latch(
output Q,
output P,
input S,
input R
);
nor(P, S, Q);
nor(Q, R, P);
endmodule
However, Xilinx ISE reports a warning:
WARNING:Xst:2170 - Unit sr_latch : the following signal(s) form a combinatorial loop: n0000.
Is this warning avoidable for implementing an SR latch?
Should I just ignore it?
AI: It doesn't seem avoidable. Even in this document by Xilinx themselves they use this Verilog code to generate an SR-latch
module SR_latch_gate (input R, input S, output Q, output Qbar);
nor (Q, R, Qbar);
nor (Qbar, S, Q);
endmodule
module SR_latch_dataflow (input R, input S, output Q, output Qbar);
assign #2 Q_i = Q;
assign #2 Qbar_i = Qbar;
assign #2 Q = ~ (R | Qbar);
assign #2 Qbar = ~ (S | Q);
endmodule
which returns this warning when synthesizing (I couldn't recreate the warning you show in your question)
Critical Warning: 1 LUT cells form a combinatorial loop. This can create a race condition. Timing analysis may not be accurate. The preferred resolution is to modify the design to remove combinatorial logic loops.
|
H: What does the integrator do in a Delta-Sigma converter?
This question is in reference to, and an extension of, this earlier question: How a Delta-Sigma Modulator Works. The role of the integrator was discussed in the comment section, but I'm not following at all.
In the first and initial cycle, where I input 1.015V and the DAC initially outputs a 0, the first op-amp essentially acts as a subtractor and outputs the difference - 1.015V - to the inverting input of the integrator. Assuming the non-inverting input of the integrator is tied to ground, the integrator ramps down from 0V to -1.015V then feeds an inverted -1.015V to the ADC. Why not just skip the integrator and feed the 1.015V directly from the output of the first op-amp to the ADC? The integrator seems not to do anything other than delaying the conversion and inverting the polarity of the signal.
On the second cycle, the 8-bit ADC outputs the binary value of (negative) -1.010V and, through the DAC, I assume the polarity will flip back to positive, so the first op-amp will subtract 1.010V from the original 1.015V input. The first op-amp outputs (negative) -0.005V and the integrator discharges from -1.015V to -0.005V before feeding -0.005V into the ADC. The 8-bit ADC cannot resolve down to 5mV and will output 0. Again, why do we add the integrator and delay feeding the 0.005V error to the ADC?
AI: Assuming the non-inverting input of the integrator is tied to ground,
the integrator ramps down from 0V to -1.015V then feeds an inverted
-1.015V to the ADC.
and
The integrator seems not do anything other than delaying the the
conversion and inverting the polarity of the signal.
The integrator symbol used in the original answer is a triangle (like an op-amp) but it has one input terminal and one output terminal. It therefore shouldn't be regarded as an op-amp integrator because an op-amp integrator is in fact an inverting integrator. Regard it as a pure mathematical integrator where a constant positive input produces a positive ramping output.
This means that after a short time during which the integrator has magnified the error (due to integration), the ADC output becomes one LSB higher. This now forces the DAC to be one LSB higher and, the output from the subtractor must now become negative. This then causes the integrator to ramp down. In effect, the ADC output toggles between a slightly low digital value and a slightly high digital value. The ratio of high time to low time can be used to estimate more precisely the actual analogue input value.
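This hunting behaviour is easy to reproduce numerically. A minimal sketch of a first-order loop with an ideal 1-bit quantiser and a DC input (everything here is idealised and the values are chosen only for illustration):

def sigma_delta(vin, n=20000, vref=1.0):
    """First-order sigma-delta loop: returns the density of 1s in the bitstream."""
    integrator, bit, ones = 0.0, 0, 0
    for _ in range(n):
        feedback = vref if bit else -vref   # 1-bit DAC of the previous output
        integrator += vin - feedback        # take the difference, then integrate it
        bit = 1 if integrator > 0 else 0    # 1-bit ADC (comparator)
        ones += bit
    return ones / n

for vin in (0.1015, 0.5, -0.3):
    d = sigma_delta(vin)
    print(f"vin = {vin:+.4f} -> mean of the 1-bit DAC output = {2*d - 1:+.4f}")

The single-bit output toggles between +vref and -vref, yet its average (the ratio of high time to low time) lands on the analogue input value, which is exactly the mechanism described above.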
Maybe it's worth a simulation with a 2 volt p-p sinewave input and we make a comparison between a regular 4-bit conversion and a 4-bit sigma-delta conversion: -
Here's the waveforms side by side of the DAC outputs: -
As you can see, both systems share the same input, reference and conversion clock and, both use 4-bit resolution components but, can you see that at the top and bottom parts of the sinewave, the sigma-delta output is clearly a lot busier than the regular DAC - it's hunting around the true input value due to the extra process of sigma (difference) then delta (integration).
Here's an easier view of the valley of the sinewave: -
Use right-click to open the image in a new tab to see better resolution. Here's a comparative view using part of a triangle wave: -
The sigma-delta output is clearly trying to improve the basic 4 bit resolution by hunting around the true analogue value. Here's what it's like using a 2-bit ADC and DAC: -
And here we have a 1-bit ADC and DAC: -
The sigma-delta converter is producing an output that is much more amenable to extracting a truer version of the input signal.
|
H: At t=0 the voltage across the Inductor will immediately jump to battery voltage. Why?
While reading about transients I came across this:
"the voltage across the inductor will immediately jump to battery voltage (acting as though it were an open-circuit) and decay down to zero over time (eventually acting as though it were a short-circuit)."
Why will the inductor voltage immediately jump to the battery voltage? They say that's because the time-changing magnetic field across the inductor will induce an equal and opposite voltage in the inductor according to Lenz's law. But then at t = 0 they claim that the current in the inductor is zero. So if the current in the inductor is zero, how can a magnetic field be built up to induce an opposing voltage across the inductor?
So what's happening at t = 0? Zero current and an equal and opposite voltage? Since there is zero current, what makes the inductor voltage equal to the battery voltage? If there is current at t = 0, why is it never mentioned anywhere in theory or in the equations?
AI: Switch closes. Electric field starts to propagate in the wire, much faster than electrons.
Maxwell equation number 4 says:
The magnetic field induced around a closed loop is proportional to the electric current plus displacement current (rate of change of electric field) it encloses.
which is defined by the fourth formula here:
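For reference, the Ampère-Maxwell law in differential form is:
$$\nabla \times \mathbf{B} = \mu_0 \mathbf{J} + \mu_0 \varepsilon_0 \frac{\partial \mathbf{E}}{\partial t}$$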
Check the differential form: at the very moment of closing the switch the current is 0, therefore the current density (J) is 0. However, as the electric field arrives at the inductor, the sudden change in electric field induces a very high-magnitude rotational magnetic field.
Due to Lenz's law, another electric field will be induced to oppose this sudden magnetic field, in the opposite direction. Thus a counter electric potential is formed, without any currents involved. You can look up the phenomenon of the "displacement field" to see how this magic occurs.
|
H: How to read timing diagrams: ak4554 audio serial interface
As a newbie I struggle to understand the interface timing diagrams of the AK4554.
I am asking about this particular example, but I believe your answers will enlighten me about how to read any timing diagram.
It is an audio DAC, and here is its datasheet: http://www.akm.com/akm/en/file/datasheet/AK4554VT.pdf
In this datasheet it is written that LRCK is clocked at the sampling frequency, and data-in is 96 times the sampling frequency. The device is 16 bits, so what should I do with data-in during the rest of the 80 cycles? In the datasheet it is not written anywhere that the data is latched at a certain time... Should I repeat the data or am I allowed to let it float? Can I do other tasks in my processor during that time? What is the easy way to produce sound messages while struggling with other tasks in simple microprocessor-peripheral systems?
AI: In this datasheet it is written that LRCK is clocked at the sampling frequency.
Yes, LRCK is the sample clock.
data-in is 96 times the sampling frequency.
Nope. Check Table 1 on page 9: SCLK should be either 32Fs (which is the minimum for 16-bit stereo, i.e. 2x16=32 bits per sample) or 64Fs.
Page 7 mentions a minimum period of 1/(96Fs) for SCLK, but this is only a timing spec.
The device is 16 bits, so what should I do with data-in during the rest of the 80 cycles?
If you use SCLK=32Fs there are no extra cycles. If you use SCLK=64Fs then simply add 16 zero bits as shown in Fig. 3 on page 10. I suggest using 32Fs since this is simpler.
In the datasheet it is not written anywhere that the data is latched at a certain time...
Simply send the bits as shown in Fig. 4.
Can I do other tasks in my processor during that time?
Of course! Since your micro has an audio I2S/LJ/RJ interface with DMA, simply set it up and it will play your audio.
If your micro does not have this kind of interface, then use a micro that has one!
What is the easy way to produce sound messages while struggling with other tasks in simple microprocessor-peripheral systems?
If you want super-low-quality audio, use your micro's PWM and do it in software. If you want to use that CODEC chip then I suppose you want higher quality, so you will have several MBytes of flash to store your samples, and a micro with DMA able to stream them.
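If your micro's I2S peripheral and DMA expect an interleaved 16-bit left/right buffer (most do), feeding it looks roughly like the sketch below; startI2sDma() is a made-up placeholder for whatever your vendor's driver actually provides:
#include <cstdint>
#include <cstddef>

// Hypothetical vendor call: starts (circular) DMA on the I2S peripheral.
extern void startI2sDma(const int16_t *buf, size_t samples);

static int16_t txBuf[256 * 2];                  // 256 stereo frames, interleaved L/R

void queueAudio(const int16_t *left, const int16_t *right, size_t frames) {
    if (frames > 256) frames = 256;
    for (size_t i = 0; i < frames; ++i) {
        txBuf[2 * i]     = left[i];             // sent during the left half of the LRCK period
        txBuf[2 * i + 1] = right[i];            // sent during the right half
    }
    startI2sDma(txBuf, frames * 2);             // 16 bits shifted out per half-frame at SCLK = 32Fs
}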
|
H: How to calculate the maximum current rating of an ammeter manganin shunt?
I want to make a shunt for an ammeter using a manganin wire. I don't know how to chose the wire diameter for the desired current. For example, my shunt will have 0.05 ohms and it must withstand 5A for a continuous usage and 10A for short periods of time. What diameter of wire should I choose ?
AI: Start with the maximum power dissipation, which is 5 W.
The first thing you need to decide is how hot you are willing to let the wire get, which you haven't told us.
Then you find the length of wire that can dissipate 5 W while staying within your maximum temperature rise spec. This has nothing to do with the electrical properties of the wire. It's mostly just a geometry issue. You can probably look this up.
Once you have the minimum length of wire you need, find the diameter that gives you the desired 50 mΩ resistance over that length. Round up to the nearest available diameter, and make the wire longer to compensate.
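As a worked sketch of those last two steps, taking manganin resistivity as roughly 4.8e-7 ohm-metres and assuming the thermal calculation gave you a 10 cm length (both numbers are only examples):
#include <cstdio>
#include <cmath>

int main() {
    const double pi  = 3.141592653589793;
    const double rho = 4.8e-7;   // manganin resistivity in ohm*m (approximate)
    const double R   = 0.05;     // target shunt resistance in ohm
    const double L   = 0.10;     // example wire length in m, taken from the thermal step

    // R = rho*L/A with A = pi*d^2/4, so d = sqrt(4*rho*L/(pi*R))
    const double d = std::sqrt(4.0 * rho * L / (pi * R));
    std::printf("required wire diameter: %.2f mm\n", d * 1e3);  // about 1.1 mm for these numbers
}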
|
H: Unexpected output from TPS73633 LDO
A while ago I had some trouble with a TPS73633 (see Troubleshooting handsoldered TPS73633 SOT-23 LDO whose output is unexpectedly high). I didn't get the 3.3v output voltage that I was expecting, whatever I tried. Since I wasn't sure whether the error was in my board design or somewhere else, and I wanted to get to the bottom of it, I designed a new board specifically for testing the TPS73633.
My final goal would be to use the TPS73633 to power a ESP8266.
My testing board looks like this:
It follows the "typical application circuit" design in the datasheet (http://www.ti.com/lit/ds/symlink/tps73633-ep.pdf). The board design looks like this:
Unfortunately, I do not get 3.3v out of it when I hook it up to one of these batteries:
At first I thought the problem was that I bought these LDOs cheaply in China, so I bought another batch to see if it would make any difference, and then I even bought some from Farnell, so I would be sure I had the real deal, but whatever I tried, I never got an output voltage of 3.3v. What I did get was 2v, 4v and 0.2v:
I even bought a few TPS73733DCQ LDOs, which are basically the same thing in a SOT-223-6 package, but even those gave me 0.2v as output. So after all this testing I'm a bit lost as to why I can't get a simple LDO to do what it should do.
I did all my tests while a load was attached and with fully charged batteries. I also measured all connections and it doesn't seem like the problem is in my soldering skills.
The only other thing I could think of is that I either soldered at too high a temperature (300 °C) or that there's something wrong with these batteries.
Does anyone have a clue what I'm doing wrong here?
Update 19-03
As suggested by peufeu I removed all capacitors. That actually did the trick for most of the LDOs; I finally got 3.3v out of them (except for the Chinese ones from the first batch, which I expect to be counterfeits, as they just give an output voltage that is the same as the input voltage).
After that discovery, I started adding capacitors again, since the datasheet suggest doing so, in this diagram:
In my original design I was using capacitors of 1uf, 0.22uf and 0.01uf. To be honest, I don't remember why I chose those values.
In the datasheet I only found this information:
Although an input capacitor is not required for stability, it is good
analog design practice to connect a 0.1-µF to 1-µF low ESR capacitor
across the input
it doesn't say anything about the value of the other 2 optional capacitors. I tried using a 1uf capacitor for all three capacitors, and that seems to work fine. Is there some standard rule as to what values you would use in a case like this?
AI: Posting a schematic of your board would be helpful.
I would try a few things:
1) make sure that the layout matches the schematic;
2) make sure that the chip is real;
3) make sure that the battery provides the right voltage output when connected to the board;
4) make sure that the battery is correctly connected to the board;
5) make sure that your meters are accurate;
6) make sure that the chip is soldered on correctly;
7) put a small load to the ldo output;
...
basically, make as few assumptions as you can and start with the things that you are the most sure about.
|
H: Choosing Vds and id values for a Power MOSFET
I have a question regarding the choice of an appropriate power MOSFET in relation to the circuit below. What are the safety margins that I need to look for?
For example, Vdd = 100V and I = 5A. Obviously, buying a transistor rated for Vds = 100V and Id = 5A is not a good idea. What % safety margin do you suggest? And just a quick stupid question: Vds has to be greater than 100V, not 50V, right?
MOSFET is in saturation region, acting as a switch. (output is either Vdd or GND).
simulate this circuit – Schematic created using CircuitLab
AI: During switching, voltage spikes will occur.
Factors which make spikes worse:
fast switching
inductive load (incl. wiring)
bad layout with long, inductive traces
LC ringing
Load back-EMF if you use a motor as brake
This should be kept in mind when picking a MOSFET. For a 100V load, a 130-150V MOSFET should be adequate.
This document explains MOSFET avalanche breakdown on overvoltage:
http://www.vishay.com/docs/90160/an1005.pdf
Keep in mind that fast voltage spikes can also exceed maximum Vgs limit, which is usually pretty low. The very thin gate oxide layer is fragile and will be punctured almost instantly if Vgs spikes above the limit. This can happen on a burst of parasitic oscillations, for example.
Now, the current rating.
Usually, you don't select a FET based on current rating alone. Rather, you'd set a target for the maximum allowed dissipation, and this gives you a maximum RdsON value. Then, you decide on a compromise between RdsON and Qg which controls switching losses, with an aim to minimize total losses (resistive + switching).
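As a rough first-order sketch of that compromise, using made-up example numbers and the common 0.5·V·I·(tr+tf)·fsw hard-switching approximation for the switching term:
#include <cstdio>

int main() {
    const double V = 100.0, I = 5.0;       // supply voltage and load current from the question
    const double fsw = 50e3;               // example switching frequency
    const double RdsOn = 20e-3;            // example candidate FET: 20 milliohm
    const double tr = 30e-9, tf = 30e-9;   // example rise/fall times (set by Qg and the gate drive)
    const double D = 0.5;                  // example duty cycle

    const double Pcond = I * I * RdsOn * D;             // conduction loss
    const double Psw = 0.5 * V * I * (tr + tf) * fsw;   // rough hard-switching loss
    std::printf("conduction %.2f W, switching %.2f W, total %.2f W\n",
                Pcond, Psw, Pcond + Psw);               // 0.25 W + 0.75 W = 1.00 W here
}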
Most likely, the FET you will select will have a current rating well above the current you will actually use, and what limits the current will be its maximum dissipation, which depends on how you cool it.
Keep it in mind for short-circuit protection, though.
|
H: uA702 OpAmp internal circuit confusion
I know that I don't have to understand the internals of an op-amp IC to be able to use it, but I'm just curious.
Anyway, I've found this file which has some explanation of the IC internals "through the ages".
I came to this very first circuit and there's something I really want to understand.
The document says that Q3 solves the problem of the differential pair and lets us get the full gain at the single-ended output instead of half of the gain.
This is supposed to be done using Q3, which acts as an op-amp inside the circuit, but I don't get how, and it would be great if somebody explained it to me.
Thanks in advance..
AI: As the paper explains, Q3 functions as an op-amp over a limited domain of operation, in the sense that its output (collector voltage) is a function of the difference between its two inputs (base and emitter). Negative feedback via R1 ensures that the base (inverting input) is held at a relatively fixed voltage with respect to the emitter (noninverting input).
This assists the rest of the circuit by making sure that all of the signal voltage variation across R1, which is caused by the current variation in Q1, appears at the upper end of R1 rather than at its lower end, where it is connected to Q1's collector. This means that the signal voltage is also imposed on the upper end of R2, where it can help drive Q4 and the rest of the circuit.
Q3 has essentially the same function as the upper transistor in a cascode amplifier, holding Q1's collector at a fixed voltage, but via a different mechanism.
|
H: Can Blackfin DSPs boot from 'any type' of SPI external flash?
So I'm working on a project that contains a Blackfin processor, specifically the BF592 (datasheet).
One thing that confuses me is how exactly the boot up process will work. During development, I plan on loading programs through the JTAG interface. This will also allow me to debug.
However, when I want to actually deploy a program such that it can start up from scratch, I will need to place the program (loader file) on some type of external flash. Looking at the BF592 datasheet, I can have the processor boot from external SPI flash.
One of the boot modes is described as follows:
SPI1 master boot from flash (BMODE = 0x2) — In this
mode, SPI1 is configured to operate in master mode and to
connect to 8-, 16-, 24-, or 32-bit addressable devices. The
processor uses the PG11/SPI1_SSEL5 to select a single SPI
EEPROM/flash device, submits a read command and successive
address bytes (0x00) until a valid 8-, 16-, 24-, or 32-
bit addressable device is detected, and begins clocking data
into the processor. Pull-up resistors are required on the
SSEL and MISO pins. By default, a value of 0x85 is written
to the SPI_BAUD register.
I'm confused. How exactly would this work? How would the SPI master know what opcode to submit to the external flash such that memory is retrieved? For example, I've been looking at this external flash IC. It requires an op code of 0x0B to be sent followed by the desired 3 byte address. How exactly would the SPI master on the Blackfin know to issue this?
I've been digging deep into the Blackfin manuals and this part just seems to be hand waved. What exactly am I missing? Is there some configuration registers in the Blackfin where this would be configured? Do I need to purchase some compatible flash IC? Is there some standard for SPI flash where the opcode 0x0B is always used when pulling data?
Thanks!
Update: I was finally able to find the dedicated hardware manual which explicitly explains the 'boot from SPI' sequence. See page 690. This says that it issues a 0x03 and/or 0x0B opcode.
AI: Yes, the Blackfin "just knows" that it needs to issue a read command followed by an address (all zeros) in order load software from external SPI flash. Devices from different manufacturers (mostly) use the same set of commands.
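As a rough illustration (this is not the actual ADI ROM code, and spiTransfer()/spiSelect() are hypothetical placeholders for whatever low-level SPI access you have), the sequence boils down to:
#include <cstdint>
#include <cstddef>

// Hypothetical low-level helpers: exchange one byte on SPI and drive the SSEL line.
extern uint8_t spiTransfer(uint8_t out);
extern void spiSelect(bool assert);

// Read 'len' bytes starting at address 0, the way a typical boot ROM does:
// command 0x03 (READ), a 24-bit address of 0x000000, then clock the data in.
void bootRead(uint8_t *dst, size_t len) {
    spiSelect(true);
    spiTransfer(0x03);              // READ command accepted by most SPI NOR flash devices
    spiTransfer(0x00);              // address byte 2
    spiTransfer(0x00);              // address byte 1
    spiTransfer(0x00);              // address byte 0
    for (size_t i = 0; i < len; ++i)
        dst[i] = spiTransfer(0x00); // dummy bytes clock the data out of the flash
    spiSelect(false);
}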
The code that does this is stored in a small ROM inside the Blackfin chip itself. You have to dig for it, but Analog Devices usually has the source code for that ROM hidden away somewhere in the library files that come with the software development tools, in case you want to look at it.
I have occasionally needed to work around bugs in the ROM bootloader with respect to specific SPI flash devices by creating an extra stage of bootloading for my project. I use the ROM bootloader to load a second-stage bootloader from the SPI flash, which then loads the application code. The second-stage loader is mostly a copy of the ROM bootloader, but it is constructed to live within the limitations of the ROM code, while also fixing the bug related to getting the application code loaded.
If you're interested in the gory details, I wrote a Circuit Cellar article about it. The article itself is not available online, but the associated archive contains the source code for my replacement bootloader, and the comments there explain exactly what the problem was.
|
H: HVAC heating control with thermostat, on/off or continuous?
I have a honeywell rth8580wf heating thermostat.
I wonder what the output of the "white" terminal is (heating control).
Is it only going from 0V to 24V or is it continuous depending on the room temperature?
AI: It's a contact closure. When the thermostat wants heat, it closes a contact between the "Rb" and "W" terminals, which allows current to flow.
Here's a good reference for thermostat wiring in general.
|
H: Drain current of NMOS circuit is constantly changing on DMM
Hello, I am working on a lab that pertains to the I-V characteristics of N-type MOSFETs. The lab calls for recording the value of the drain current while increasing VDS/VGS from 0 to 6 volts in steps of 1 volt.
I constructed this circuit
As I increase the voltage for VDS/VGS I am experiencing a weird issue with the drain current readings. The issue is that the reading is not stable at all; the current is constantly changing. I took a video of it.
https://youtu.be/-LMfpRZQWVA
Is there anything I can do with the circuit to fix this?
AI: FETs have a temperature coefficient for Vgs versus Ids.
Thus heating changes Vgs and causes Ids to change as well.
The usual method of measuring IV characteristics uses a voltage supply for Vgs, and a 2nd voltage supply+current meter for Ids.
To measure at high currents (or high power), use a "pulse" test setup. Tektronix 576 curve tracers were popular for this.
|
H: Positioning of the stabilizing capacitor for the ESP 12
I have been researching the ESP 12, and I am ready to hook it up. Because the Wi-Fi uses high current, it has been advised to add a stabilizing capacitor across the Vcc and ground terminals of the ESP 12. These are on opposite sides of the ESP 12.
I am wondering whether the capacitor should be right beside the Vcc, or right beside the ground.
AI: Ideally, a decoupling capacitor should be as close as you can get it to both the supply terminals of the device. However, when the supply terminals are far away, you must compromise. The underlying goal should be to minimize impedance in the local power supply, which means minimizing trace inductance.
The best solution would probably be a solid ground pour, and then placing the decoupling capacitor near Vcc and tying it to ground with a big via or a few small vias.
If you do not have the stackup to do a ground pour, then place the capacitor near Vcc or ground, and connect the capacitor to the far terminal with a relatively wide trace.
Keep in mind, the ESP12 has its own local small value decoupling for handling high-frequency power requirements; the decoupling that you are placing should be designed to supply larger transient supply spikes (for instance, when the radio kicks in). As a result, the frequencies that this decoupling cap must handle are lower, and therefore the impedance requirements are somewhat relaxed (meaning a short length of trace in between cap and module isn't such a huge issue). I have done a design incorporating an ESP12 module, and I used a single 47 uF ceramic cap, placed near the VCC pin of the module. Depending on the nature of your power supply, you may want to use more decoupling, perhaps several 47 uF ceramics in parallel.
|
H: Secondary voltage referred to the primary
I am studying transformers and I am not completely understanding a basic detail, but it is annoying me.
When we draw the equivalent circuit of a transformer, why is the secondary voltage referred to the primary written as \$V_{2}'=\frac{N_1}{N_2}V_2\$ ?
AI: The currents in the primary and secondary develop a flux in the core that is common to all windings. The response of each winding to the common flux is proportional to the number of turns in that winding. Thus a transformer is a self-regulating machine, with back-electromotive force (back EMF) providing the negative feedback.
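A sketch of the usual argument behind that expression: both windings link the same core flux \$\Phi\$, so each winding's EMF is proportional to its number of turns:
$$E_1 = N_1\frac{d\Phi}{dt}, \qquad E_2 = N_2\frac{d\Phi}{dt} \quad\Rightarrow\quad \frac{E_1}{E_2}=\frac{N_1}{N_2}$$
Referring the secondary voltage to the primary therefore means scaling it by the turns ratio, which is where
$$V_2' = \frac{N_1}{N_2}V_2$$
comes from: it is the primary-side voltage that corresponds to \$V_2\$ on the secondary side.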
|
H: Electronic component that detects angular rotation like a potentiometer, but endless and digital
I am relatively new to this whole scene but am wanting to start a small project sending midi-signals from a board I am creating.
That said, I am a big fan of these endless, smooth digital potentiometers for example built into the Novation Circuit.
I am having trouble finding these things for sale as I am not sure what exactly they are called. I found - sorry for the direct and probably wrong translation - "digital increment-givers", but reading into it, the ones I found have sort of a grid, giving e.g. 20 signals in the course of one rotation.
As I understand it, the ones I would need keep giving a consistent signal. Are these the droids I am looking for or are there different ones?
I could imagine if these are the ones I need that I would listen to the incremental signals and then add or subtract one depending on the rotation.
AI: The name for the thing you're looking for is rotary encoder. In particular, the incremental rotary encoder.
If connected correctly, you will get pulses such as these:
Depending on the direction you're turning, those two pins will flip its state in a different order, so you can find out the direction.
Most of the ones that you want to pay for will have up to 24 "increments" in one revolution. This is not that great, but you can easily get 48 steps from that, if you think a bit about how the signals arrive. You can even get 96 steps per revolution iff the pulses are square and have a 90 degree offset such as in the illustration above. Some encoders don't, and then your 96 steps will not be evenly spaced.
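If you want to turn those two signals into up/down counts in software, the usual trick is to index a small table with the previous and current pin states; a minimal Arduino-style sketch (the pin numbers are assumptions):
const int pinA = 2, pinB = 3;      // assumed encoder pins
int lastState = 0;
long position = 0;

// Index = (previous state << 2) | current state; entry = -1, 0 or +1 count.
const int8_t quadTable[16] = { 0, +1, -1,  0,
                              -1,  0,  0, +1,
                              +1,  0,  0, -1,
                               0, -1, +1,  0 };

void setup() {
    pinMode(pinA, INPUT_PULLUP);
    pinMode(pinB, INPUT_PULLUP);
    lastState = (digitalRead(pinA) << 1) | digitalRead(pinB);
}

void loop() {   // poll fast, or do the same thing in a pin-change interrupt
    int state = (digitalRead(pinA) << 1) | digitalRead(pinB);
    position += quadTable[(lastState << 2) | state];
    lastState = state;
}
Whether clockwise counts up or down depends on how A and B are wired; swap the pins if the direction is reversed.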
|
H: Help identifying a connector where wires are directly inserted into "slots"
My question is fairly simple.
Would you be able to identify this type of connector:
I can't seem to find it anywhere. It's not supposed to need any kind of crimp (the wire is inserted directly, as shown in the picture).
AI: It looks like this connector from TE Connectivity (p/n 3-643813-3): http://www.digikey.ca/products/en?keywords=3-643813-3-ND
The tool to punch the wires down (also from TE, p/n 59803-1) is this: http://www.digikey.ca/product-detail/en/te-connectivity-amp-connectors/59803-1/A9982-ND/132430
|
H: Arm mode and Thumb mode makes the PC's bit 0
The ARM and Thumb modes are word-aligned and halfword-aligned. I understand this means that in ARM mode instruction addresses must be divisible by 4 (32-bit words), and in Thumb mode by 2 (16-bit halfwords). But how does this relate to the PC's bit 0 never being used for anything? (The instructor said that because of that they used the PC's bit 0 to show whether it's in Thumb or ARM mode since it wasn't being used for anything else, but before that I'm confused about the relation between alignment and the PC's bit 0.)
AI: The bx instruction copies bit 0 to the T status bit, so it selects between ARM and Thumb mode on branch.
So, to jump to ARM code at address 0:
mov r0, 0
bx r0
To jump to Thumb code at address 0:
mov r0, 1
bx r0
To jump to Thumb code at address 2:
mov r0, 3
bx r0
ARM code cannot exist at address 2, because that would violate the alignment constraint. Neither ARM nor Thumb code can start at odd addresses, so the LSB of the address is always zero, and the bit is reused to select the new mode in the bx instruction.
To allow easy return from subroutines, after a blx instruction bit 0 of the lr register reflects the Thumb state from before the call, so that a later bx lr resumes in the correct state. So:
ldr r0, =2f+1   @ address of Thumb label 2, with bit 0 set explicitly
blx r0
1:
b 1b
.thumb
2:
bx lr
will load the address of the 2 label with bit 0 set (set explicitly by the +1, since the label refers to Thumb code) into r0; blx then stores the address of the 1 label, with bit 0 clear because it refers to ARM code, into lr and jumps to the bx lr instruction, which uses lr to return to the endless loop in ARM mode.
|
H: What happens when we connect two generators in parallel with a single load?
I read that in large utility systems more and more generators are added to provide extra power. I'm lacking some basic knowledge here. So, what happens when we connect two generators in parallel? Is it like connecting 2 voltage sources in parallel with a resistor R? I'm trying to think of it as a circuit but it doesn't help. Does the current in the load increase? And if so, how?
AI: DC Sources:
First, look at each generator as if it were a DC battery. You can connect two DC batteries in parallel if they have the same voltage, because they are DC: they provide direct current.
What if the voltage of one battery is greater than the voltage of the other battery?
Current moves from the battery that has the higher voltage to the battery with the lower voltage, and the battery with the lower voltage will be damaged (unless it is a rechargeable battery).
AC Sources:
Generators do NOT provide DC voltage. They generate alternating voltage and current: the value changes with time, so it has a frequency and a maximum (peak) voltage value.
Imagine that there are two different generators operating separately, One of them generates the red voltage and the other generates the blue voltage as shown in figure:
Look at the 90-degree point in time: the blue voltage is higher than the red one.
Look at the 180-degree point in time: the red voltage is higher than the blue one.
So, at almost any instant, the voltages of the two generators are not the same. As I said before, current will flow from the generator with the higher voltage to the generator with the lower voltage and both generators will be damaged. That's because a generator is supposed to produce current, not to draw it, so each generator would try to work as a motor, and this causes damage.
How can we solve this issue?
A synchronisation process is the solution: a synchroniser is a device that controls the speed of the two generators so that they stay in phase, which means they produce the same voltage at the same time.
The waveform of two synchronised generators must look like this:
This way we can connect many generators together to increase the current capacity.
|
H: What happens when we connect two identical batteries in parallel with a resistance R?
I read somewhere that the voltage stays the same (it will be equal to the voltage of one of the batteries) but the output capacity increases. What does this mean? Will the current in the resistor increase?
Applying the superposition theorem I see short circuits and therefore almost zero current going through the resistor. Am I right? So where's the increase, and again, what is output capacity?
We have something like this with a resistor across AB:
AI: The capacity of a battery is a measure of its ability to deliver charge, i.e. a certain current for a certain amount of time. It is usually measured in A·h.
If the current through the resistor is such that a single battery gets discharged in one hour, adding an additional battery with the same characteristics as the first one will make a discharge time of two hours (i.e. capacity of two batteries is doubled compared to single battery).
The current through the resistor doesn't change since the applied voltage remains the same.
|
H: How to add a 32-bit input in Quartus 2
Can you help me add a multiple-pin input in Quartus 2? If there is not a default one, how can I add it with the MegaWizard Plug-in Manager? Thanks!
AI: I presume you mean in the Quartus Schematic editor?
If so, just change the name of the input (or output or bidir) to whatever[31..0]. The array indices are the same as if you did it in Verilog, except that a .. is used instead of a :.
|
H: Why does the stator field rotate at the same speed as the rotor field in a synchronous generator?
In the case of the induction motor the rotor never catches up with the rotating field of the stator because if it did the induced voltage would be zero as there is no relative movement between the rotor and the stator field.
What changes in the synchronous generator that makes the stator field rotate as fast as the rotor field?
(!) If they rotate at the same speed, there is no relative movement. So how is the voltage induced, and how is the current that creates the revolving stator field produced? (!)
However, there is relative movement between the rotor and the stator windings. Is this what causes the current?
Edit: I completely understand how the induction motor works. What I'm trying to work out is the synchronous generator and why isn't there a problem if the rotor and stator field are synchronized as there is in the case of an induction motor leading to the 'slip'. Why don't we have a slip in the synchronous generator?
AI: In an induction motor, the speed of the rotor structure is always less than the speed of the stator field. However the rotor field rotates faster than the rotor structure so that the rotor and stator fields are synchronized with each other.
In a synchronous motor, the rotor magnetic field is produced by permanent magnets or by DC current in the rotor winding. In either case, the rotation of the magnetic field of the rotor is mechanically fixed to the motion of the rotor. For uniform torque to be produced, the both the rotor structure and the rotor field must move synchronously with the rotor field.
In other words, both synchronous and induction motors have synchronously turning magnetic field with torque produced in proportion to the angular displacement between the stator and rotor magnetic fields. In the induction motor, the rotor structure must turn at a slower speed than the magnetic fields while in a synchronous motor, the rotor structure must move synchronously.
Re: Question Edit
In a synchronous generator, the stator magnetic field rotates behind the rotor magnetic field with respect to torque angle. It is the relative motion between the rotor magnetic field and the stator windings that allows the magnetic field of the rotor to produce current in the stator. The current produced produces a rotating magnetic field in the stator that is synchronous with the rotor magnetic field but has a torque angle displacement.
|
H: Why doesn't the LM7805 circuit short circuit?
I'm new to electronics and have made some simple projects using a LM7805. I'm using the following schematics:
I don't understand why this circuit works. Why doesn't it short circuit? I thought electricity always takes the "easiest" path which in my opinion would be the following:
Why does the electricity even bother going through the 7805 to begin with, when it could just fill up the 0.33 µF capacitor and then go right through it back to the battery?
AI: Let's have a look inside a capacitor to see what prevents the short circuit.
A capacitor consists of two conducting plates. And there's an isolating plate between the two conducting plates.
How is current able to pass through the isolating plate?
When there's a change in electrons (charge) on one conducting plate, a change in the charge of the other conducting plate occurs. The two conducting plates affect each other because the isolating plate is very thin. This way current can pass through a capacitor. I consider it as a virtual current.
When you apply a high-frequency AC signal, the capacitor behaves almost like a short circuit because there's a rapid change in the charge on the plates.
When the frequency is decreased, less current can pass. The capacitor then acts like a resistance, and this opposition to current is called "capacitive reactance".
When you apply a DC voltage (zero frequency):
At the beginning, the capacitor will charge, because applying the DC voltage is itself a change in charge.
Once charging is complete, current stops flowing because there is no further change in charge.
|
H: Finding voltages in a circuit with no power supply
I'm trying to solve the following circuit.
However, I cannot understand how it is possible that E2 and E1 have a voltage if there is no visible power supply in the circuit. What's the approach to solving this circuit?
AI: E1, E2, and ground are the visible voltage.
You haven't specified what "solve" this circuit means, but you have everything you need to find the voltages at every node and all the currents between the nodes.
Hint: All but one node voltage is explicitly given to you.
|
H: Manually storing the stack pointer rather than using push and pop (Assembly)
What will be the instruction sequence that implements PUSH R4?
A: SUB SP, SP, #4
STR R4, [SP, #0]
B: STR R4, [SP, #0]
SUB SP, SP, #4
C: ADD SP, SP, #4
STR R4, [SP, #0]
D: STR R4, [SP, #0]
ADD SP, SP, #4
E: All of the above are possible
The correct answer is E, but I don't quite follow.
For A, SP would point at the current stack value, and then the SUB command would move it to a new empty place in the stack where R4 is then stored.
For B, R4 is stored at the current place in the stack and then the stack pointer moves to an empty space. Wouldn't this overwrite the current value in the stack when R4 is written?
For C and D, wouldn't the ADD move it back from the stack? So in C, R4 overwrites a value, and in D, the next value pushed onto the stack would overwrite R4?
AI: These instruction sequences are not equivalent, they are 4 different ways of implementing stacks.
A is a top-down stack where the stack pointer addresses the most recent value.
B is a top-down stack where the stack pointer addresses the next insertion point
C is a bottom-up stack where the stack pointer addresses the most recent value
D is a bottom-up stack where the stack pointer addresses the next insertion point
All of them are valid ways for a CPU to implement a stack, and it would be up to the micro-architecture choice of the CPU, or the OS calling convention if the CPU doesn't have explicit stack instructions.
Traditionally, CPU stacks are top-down (stack starts at a high address in memory and grows down). The reason for this as I understand is primarily due to early CPUs with small address space and no virtual memory, it is convenient to have the stack and heap start at opposite ends of memory and grow to the middle. It was usually easier to do this with the stack than the heap. Now, with 64 bit address space and virtual memory, I doubt it makes much difference, but AFAIK, growing down is still the most common choice.
|
H: Accidentally connected an Atmega328p and an Atmega644p to 12v instead of 5v. Are they screwed? The 328 seems to be bricked, the 644 seems fine.
I'm working on a project using the Atmega328p and Atmega644p, so first I built myself a USBasp programmer board for each one and I've successfully programmed a blinking LED, which is awesome!
The not-so-awesome part is that in the process I apparently hooked each chip up to 12v DC on the VCC and AVCC pins. For testing whether my program was written successfully, I built out a small breadboard circuit using a power supply by "Elegoo Electronics" from an assorted kit I bought off Amazon. It's supposed to supply 3.3v or 5v depending on where you set a jumper. However, while debugging the fact that my simple LED blinker program/circuit wasn't working, I found out that every output on the power supply that says it provides 5v actually provides whatever voltage is coming into the DC input (if I use a 9v adapter, it provides 9v; if I use a 12v adapter, it provides 12v).
So now I have a 328 and a 644 that were each connected to 12v DC for a bit. The 644 still works as far as I can tell. I can still flash it, and my program switches its pin on/off every half second just as it's supposed to (as long as I connect it to the 3.3v instead of the so-called 5v). The 328 seems busted though. When I try to flash it with avrdude, I get the following error messages:
avrdude.exe: error: program enable: target doesn't answer. 1
avrdude.exe: initialization failed, rc=-1
Double check connections and try again, or use -F to override
this check.
Is it worth trying to salvage either chip? I bought one extra of each (now I regret not buying a few), so I don't need these two for the actual project. Is there any way I can test the 644 to see if anything was damaged by the 12v? Should I give up on the 328? Is there an obvious reason the 644 is still functioning while the 328 seems destroyed?
Thanks!
AI: 12 V on an AVR is dicey. Quite some time back, I experimented with overvoltage on a few chips (PIC, AVR, STM8, and older LM3S). The PICs held up quite well, but at 9 V Vcc it was about 75/25 survival. The AVRs were 20/80 survival and the LM3S was mostly dead. I would imagine that at 12 V it is worse for the AVR.
Older chips tend to fare better than newer ones.
|
H: Turning on/off leds very fast. Will it do harm?
I am using an Arduino board to control an LED strip. Three transistors and three resistors are between the board and the LEDs, one for each color (RGB). I am using analogWrite to change the brightness.
But I wanted to make the LEDs even less bright, without adding another resistor. I figured out that when I turn them on and off very fast they will not flicker but they will appear to be less bright.
Will this harm the LEDs? Or is this OK to do? Do other people also use this trick?
AI: That's what the analogWrite function is already doing: turning the LED on and off rapidly is called pulse-width modulation (PWM), and it's the standard way of dimming LEDs. It's perfectly safe at any reasonable frequency.
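For reference, a minimal Arduino-style sketch of exactly that (the pin number is an assumption); a small duty-cycle value makes the LED very dim:
const int redPin = 9;            // assumed PWM pin driving the red-channel transistor

void setup() {
    pinMode(redPin, OUTPUT);
}

void loop() {
    analogWrite(redPin, 5);      // 5/255 duty cycle: on about 2% of the time, so it looks very dim
    delay(1000);
    analogWrite(redPin, 64);     // 25% duty cycle for comparison
    delay(1000);
}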
|
H: Atmega1284P-PU - how to connect more than one device through SPI?
I need to connect 3 modules via SPI to an ATmega1284P-PU. I looked at the datasheet, but I found only one SPI.
On which pins are the other SPI interfaces?
AI: You can use several modules on the same exact pins. The trick is to not let these devices work at the same time. This is easily done by keeping only one Slave Select (SS) line low. What I mean is that you can have multiple Slave Select lines and control them from your application.
I would also advise using pull-up resistors on these Slave Select lines to avoid interference when writing your program to flash, because the programmer also uses the SPI interface. I've had a problem like that myself some time ago.
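A minimal Arduino-style sketch of that idea (pin numbers and the 0x00 command byte are assumptions); only the selected module sees its SS line pulled low during a transfer:
#include <SPI.h>

const int ssPins[3] = {10, 9, 8};       // one Slave Select line per module (assumed pins)

void setup() {
    for (int i = 0; i < 3; ++i) {
        pinMode(ssPins[i], OUTPUT);
        digitalWrite(ssPins[i], HIGH);  // deselect everything
    }
    SPI.begin();
}

uint8_t readFromModule(int module, uint8_t command) {
    digitalWrite(ssPins[module], LOW);  // select only this device
    SPI.transfer(command);
    uint8_t reply = SPI.transfer(0x00); // clock out a dummy byte to read the answer
    digitalWrite(ssPins[module], HIGH); // deselect again
    return reply;
}

void loop() {
    for (int i = 0; i < 3; ++i) {
        readFromModule(i, 0x00);        // 0x00 is just a placeholder command
    }
    delay(100);
}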
|
H: 25A USB port blew up a TP4056
I have made a bench power supply from an old PSU and added a USB port connected directly to the 5V rail (leaving the data pins disconnected).
When I connected a TP4056 charging board (like this), the TP4056 smoked and blew up.
I am trying to understand if the board was defective (or I messed up with the wiring) or if the USB must not provide more than 1A (the 5V rail of the PSU provides something like 25A).
As far as I know it does not matter how many amps the USB port can provide, since the connected device always draws what it needs, but I'd like confirmation from someone more expert than me :)
Cheers
AI: USB ports are supposed to limit current somehow. This can be totally disconnecting power for a while when overcurrent is detected (like a resettable fuse), or limiting the current. However, no device should rely on this.
Being able to provide more than the maximum normal current should be OK, as long as the device is working properly. Most likely, your output wasn't supplying 5 V. Check it with a voltmeter.
Of course you should have done that before plugging in something that could get smoked by the higher voltage.
|
H: Purpose of bare copper perimeters on HF PCBs
I have seen many RF PCBs with a thick, rounded area of exposed copper going all around the PCB, like in this example:
My intuitive thought is that this is for some kind of grounding or shielding can, however I have seen this on PCBs that had none of that.
What is the design reason for this characteristic, rounded strip?
AI: Think of your PCB as parallel plate waveguide, with the top and bottom ground planes acting as parallel plates.
Now, since you want waves neither exiting nor entering this waveguide, you'd try to build a "wall" around them – or more, a fence, which the small vias around the edge do.
The fact that the ground plane is exposed around the edge (i.e. no solder mask) might point to the PCB being mounted in a conductive enclosure, which provides shielding if it is grounded.
Rounded corners are nicer to handle, and don't break off. Also, æsthetics.
|
H: Why does stator current increase with increasing load in a synchronous motor?
Isn't the current, coming from the 3-phase source of the stator, constant? We don't change the source, so how does the current change? Is a voltage induced because of the magnetic field of the rotor?
This leads me to another question which might help: Does the rotor's magnetic field of an induction motor affect the stator's current value?
AI: Since the motor converts electrical energy to mechanical energy, the electrical input power must be equal to the mechanical power transmitted to the load plus power lost in the motor. Since the input voltage is constant, that would have to be reflected in the input current and power factor since power = voltage X current x power factor.
The mechanism by which electrical power is converted to mechanical power is explained using the equivalent circuit of the motor. Just as in a DC motor, a back EMF is generated in an AC synchronous motor. So the motor equivalent circuit is an AC back EMF generator in series with the internal impedance of the motor. The back EMF opposes the source voltage so that the stator current is proportional to the source voltage minus the back EMF divided by the internal impedance of the machine. With an AC machine, the voltage, current and impedance values are all complex numbers. The phase angle difference between the terminal voltage and the back EMF is determined by torque angle, the angle between the rotating stator and rotor magnetic fields. As the name implies, the torque angle is proportional to torque.
This leads me to another question which might help: Does the rotor's magnetic field of an induction motor affect the stator's current value?
I don't think that is related to the synchronous motor question. The induction motor rotor current is ultimately supplied by the stator, but the magnetic fields in both the stator and rotor are pretty much constant as long as the applied voltage and frequency are constant and the ratio of voltage to frequency is constant.
|
H: What humidity sensors allow the user to configure their I2C address?
I'm building something that will need to use eight humidity sensors, and I want to use I2C to communicate with them. However, all humidity sensors I've found either have a fixed I2C address, or sometimes they have an address pin to choose between two addresses. The most I've found was the TI HDC1010, which has two address pins, so four possible I2C addresses, but that's still not enough.
Are there any humidity sensors that either have three address pins, or that have a fully configurable I2C address?
AI: As far as I know there is no humidity sensor with 3 address bits available (at least I don't know any).
Yet, you could use an I2C mux (for example the PCA9548) to switch between the different sensors. Additionally, this will also give you the benefit of lower capacitance on the I2C bus, thus allowing higher speeds (although this is rarely required for humidity sensors).
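Selecting a channel on a PCA9548-type mux is just writing a one-byte channel mask to its own I2C address; a minimal Arduino-style sketch (0x70 is the default address with all address pins low, and the sensor read is left as a placeholder):
#include <Wire.h>

const uint8_t MUX_ADDR = 0x70;      // PCA9548 with A2..A0 tied low

void muxSelect(uint8_t channel) {   // channel 0..7
    Wire.beginTransmission(MUX_ADDR);
    Wire.write(1 << channel);       // control byte: one bit per enabled channel
    Wire.endTransmission();
}

void setup() {
    Wire.begin();
}

void loop() {
    for (uint8_t ch = 0; ch < 8; ++ch) {
        muxSelect(ch);
        // readHumiditySensor();    // placeholder: all eight sensors can now share one fixed address
    }
    delay(1000);
}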
|
H: In an AM radio do the electromagnetic waves cause the antenna to resonate to produce an alternating current?
Looking at Hertz's original experiment with electromagnetic waves and the spark gap, how did he know what size to build his antenna to receive the wave that generated the spark? In what way is an AM antenna like a spark-gap receiver? There is no spark within the antenna, is there? So will any metal object produce an AC current when an electromagnetic wave hits it? And is an antenna constructed to resonate at some particular frequencies in order to amplify the signal? Basically I am trying to understand what makes an antenna an antenna, so to speak.
AI: A current will be created in any length of wire that is subjected to a moving electromagnetic field. How much current is created depends on the potential difference in space along that wire.
The length of the wire vs the wavelength of the signal determines how much potential is along that wire. Obviously if the wire is short in comparison to the wavelength, not much current will be generated in the wire. If it is exactly half a wave length long, the maximum potential difference will be achieved and maximum current will flow.
So yes, you can design an antenna to work best at a particular frequency. However, they still work at other frequencies but produce less signal.
AM radio is fairly low frequency and has a long wavelength. You would need an antenna more than 200 m long, so you cannot tune it that way. Further, you want to pick up various frequencies so you can listen to multiple stations, so a tuned antenna would be a bad thing even if you could build one.
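For example, at the low end of the AM broadcast band (about 540 kHz), a half-wave antenna would need to be roughly
$$\frac{\lambda}{2} = \frac{c}{2f} = \frac{3\times 10^{8}\ \text{m/s}}{2\times 540\times 10^{3}\ \text{Hz}} \approx 278\ \text{m}$$
long, which is why ordinary AM receivers don't use resonant half-wave antennas.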
As for Hertz's experiment, a spark causes a very wideband radio pulse which is detectable by pretty much any piece of wire.
|
H: Switching input to an instrumentation amplifier with an analog mux (ECG, EEG)
I'm trying to find a cost effective method of taking samples of electrical signals of the body (and potentially plants).
Can you have one high-precision instrumentation amplifier and then use an analog mux to read different inputs (different electrodes on the scalp or chest)?
I looked at this mux (CD74HC4067). Is the on resistance a problem?
Do you have any other suggestions for a cheap EEG device?
Thank you for reading and best regards.
AI: For measuring biopotentials cheaply AND easily I would recommend using a dedicated analog front-end, e.g. the ADS1194. One chip plus some resistors and capacitors. All muxes, the right-leg drive and the ADCs are integrated; you just read your data over SPI.
|
H: Charging cells in series and parallel
I am looking into an electric vehicle charging system and using a cell that can charge at 1.6 A and a voltage of 4.2 V. Therefore I believe the max power I can expose this cell to while charging is:
P = IV = 1.6 * 4.2 = 6.72 Watts
Would I be correct in thinking that if I were to connect 90 cells in parallel then I could expose this system to:
6.72 * 90 = 605 Watts ?
Would it also be the case that if I connected two of these sets of 90 parallel cells together in series, I could expose this system to:
605 * 2 = 1210 Watts ?
AI: Theoretically it's correct, but you are probably talking about Li-ion cells.
Please note that the charging algorithm for Li-ion cells is different:
first the cell is charged at constant current (1-1.6 A per cell), and afterwards, as the current tapers off, it is charged at constant voltage (4.2-4.4 V per cell), so the charging power varies during the process.
|
H: Steady state response
Could you please explain what exactly this question wants me to find? Or, how should I start solving it?
AI: To solve this you need to:
Calculate the impedances of the circuit's branches (note the pulsation, i.e. the angular frequency, of the voltage source).
Calculate the RMS values of the voltages (for steps 1-2 use complex numbers).
Solve the circuit using a method of your choice, e.g. the loop (mesh) current method.
|
H: Determine the I and V of a logic gate
I have got this problem in my text book.
Determine V and I of the following circuit:
The result is I = 4mA and V = +1V (ideal diode)
How can I solve it using the assumption process? Please help me out.
AI: I assume you use the ideal diode model, i.e. for positive currents the voltage drop is zero, for negative voltages the current flowing is zero.
You have three diodes, for a total of \$2^3=8\$ combinations of on/off. But we can be smarter.
If the top diode is on, V=3, but this is not possible because of the middle and bottom diodes. So the top diode is off, and the same goes for the middle diode.
Now we know that the top and middle diodes are off, so no current can flow through them. If the bottom diode is also off, no current flows anywhere, so V=5 because there is no drop across the resistor; but this is not possible because of the diodes, so the bottom diode is on and thus V=1. Using Ohm's law you calculate I as (5V-1V)/1k = 4mA.
For more complex circuits, what you usually do is start by assuming a combination of ons and offs, you try to solve the circuit, and if the solution is compatible with the status of the diodes, then you are done, otherwise you change the assumptions and start again.
|
H: Is it 'normal' to use a DSP's internal timers to generate lower speed clock signals for audio codec sampling?
I'm currently working on a project that features a Blackfin BF592 processor (datasheet).
I'm trying to interface this DSP with a stereo audio codec (datasheet).
Originally, I assumed that this audio codec would have some high frequency clock signal (i.e. frequency much higher than the sample rate) that would likely be fed from some oscillator circuit. I believed that this input clock would be divided as necessary to get a lower clock signal (e.g. 44.1kHz).
However, after looking through the datasheet of this codec, it appears that the expected clock signal inputs are meant to be 'exact' and relatively low frequency (i.e. 44.1 kHz-ish). And because this chip is a delta-sigma codec, it actually expects two sample-rate-related frequencies: the actual sample rate clock, and then a multiple of the sample rate clock to achieve proper oversampling.
Because I plan on sampling at 44.1 kHz, with an oversampling rate of 32x, I will need two clock inputs: 44.1 kHz and 1.4112 MHz.
So my question: is it standard practice to use a DSP's internal timers to generate these signals? Or would it be more common to have a dedicated IC generate these two signals?
AI: In general you try very hard to avoid adding extra components to your circuit. It's just extra cost, extra board space, more circuits to keep in stock, more that can go obsolete, etc.
In your case, I would definitely use the timers in the DSP, if that's feasible. I have no direct experience with it, but it seems to have three externally accessible timers, so it should work well.
...however
I don't think you have read up on the timing, especially not on the features in the DSP. Your DSP has two serial ports which they call SPORT0 and SPORT1. According to the manual you have linked, each can operate in any of these modes:
Standard DSP serial mode
Multichannel (TDM) mode
I2S mode
Packed I²S mode
Left-justified mode
If you read a bit about I²S, you will soon realize that your CODEC actually doesn't have that many clock inputs. It has an I²S port. Your DSP should easily be able to connect directly without having to use any of its timers. The details are described further in the hardware reference.
If you're new to this, it will be very confusing. There's not much to do about that, except practicing, and even then you still have to do the job, it just gets slightly easier the more you've done it.
MCLK
You still have to take care of feeding a valid MCLK. As you can see from Table 1 in the CODEC manual, it doesn't have a 32X oversampling mode. Delta-Sigma converters are strange. Page 9 indicates that whatever you feed on MCLK, it will be internally converted to a clock of 256fs. Since you want 44.1 kHz, Table 1 gives you four equally valid options: ~11 MHz, ~17 MHz, ~23 MHz, and ~34 MHz.
This gives you some flexibility. To avoid unnecessary headache, run your DSP at a multiple of any of those, then use one of the timers to create one of the clocks above. Since there doesn't seem to be a quality difference between the options, I suggest generating the lowest multiple: 44100 x 256 = 11.2896 MHz.
I don't even know if the DSP timer can output a signal that high, but it seems to have a decent VCO/PLL, meaning that you can probably even feed the DSP and the CODEC the exact same clock signal, and then have the DSP scale it up internally:
That's a whole other question, though.
|