H: How can I get 17 VDC from 25 VDC at currents up to 10 A?
I'm new to electronics and electrical circuits, so my question may be naive.
I have an electrical transformer that takes 220 VAC as input, gives 25 VAC as output, and can supply 20 A.
I connect the 25 VAC output to a diode bridge with a capacitor (3300 µF, 25 V) across the positive and negative pins to eventually get 25 VDC.
What I want is to reduce the 25 VDC to 17 VDC. I thought I could build a voltage divider with a variable resistor to get the voltage I want, but it looks like the 10 A flowing through it damaged the resistor.
What can I do?
Here's what I tried to do:
simulate this circuit – Schematic created using CircuitLab
PS:
This schematic is only illustrative, to clarify the connections; the real circuit works as expected with the right parts.
AI: 25 V AC means 25 V RMS, which is sqrt(2) × 25 V ≈ 35 V peak. Hence when you rectify this to DC you can get up to about 35 V, because the capacitor charges to the peak of the AC wave.
1N4148 diodes are MUCH too feeble for this application. Use diodes rated for at least 10 A, plus ample margin. You might also need to limit the inrush current when you first plug your power supply in.
You can indeed reduce voltage with a resistor divider, but this is only practical when the current through the resistors is much smaller than the load current, which is clearly not the case for you.
A better way is to use a voltage regulator chip, like the 780x series or an LM317. These chips behave as series resistors, but they constantly adapt their resistance to achieve the desired output voltage. Like any resistor, they dissipate heat according to P = I × V, which in your case is 10 A × (35 V - 17 V) = 180 W. That is a lot of heat, which makes this solution impractical.
The best way to reduce (or increase!) a voltage at a substantial current is to use a switching regulator. Designing such a thing for the voltages and especially the current you need is difficult; I suspect it is beyond a beginner's reach.
Luckily, 17V adapters are available in most computer shops, because these values are often used by laptops. 10A is a bit heavy, but they might be available. I suggest you buy one :) |
H: Maximum LEDs on a matrix
There are several methods to drive LEDs using a microcontroller. The easiest method is just connecting each LED to its own pin on the microcontroller. Say you have \$n\$ pins available, you'll be able to drive \$n\$ LEDs.
However, there are different methods to drive LEDs as well:
A diode matrix:
Divide the pins into two sets: one for current source, one for current sink. Set the sources to 0 by default and the sinks to 1. Now, to light one LED, set the connected source to 1 and the sink to 0. Do this for all LEDs, after each other.
Charlieplexing:
With Charlieplexing, you set the unused pins to input, giving those a high-impedance state, 'disconnecting' them from the circuit.
What I'm looking for now is an overview of these methods to drive LEDs (and other much-used methods to drive LEDs, in a more efficient way than the 1:1-method described before, if they are noteworthy). What I need for every method is two calculations:
Given \$n\$ pins, what would be the maximum amount of LEDs to drive?
When you need \$n\$ LEDs, how many pins will you have to use as a minimum?
AI: First, let's say we work with these two functions:
\$L(n)\$ is the maximum amount of LEDs that can be driven from \$\mathsf{n}\$ pins.
\$p(n)\$ is the minimum amount of pins needed to drive \$\mathsf{n}\$ LEDs.
1:1-method
This one is easy:
$$L(n)=n$$
$$p(n)=n$$
A diode matrix
First, we need to determine the most efficient diode matrix. For example, you could divide 4 pins into two sets of 2, or one set of 1 and one of 3. Obviously, the amount of LEDs is given by \$\mathsf{length\cdot{}width}\$. We can say \$\mathsf{width=n-length}\$, so the amount of LEDs is: \$\mathsf{length\cdot{}(n-length)=-length^2+n\cdot{}length}\$. Given an \$\mathsf{n}\$, this is a parabola, which has a maximum when \$\mathsf{length=\frac{n}{2}}\$. You can also see this on gut feeling. So, the maximum amount of LEDs is reached when the two sets have an equal amount of pins, or differ by only 1 in the case of an odd number of pins. We can now say:
$$L(n)=\lfloor{}\frac{n}{2}\rfloor{}\cdot\lceil{}\frac{n}{2}\rceil{}$$
Inverting this relationship (we need \$\mathsf{\lfloor\frac{p}{2}\rfloor\cdot\lceil\frac{p}{2}\rceil\ge n}\$, i.e. roughly \$\mathsf{\frac{p^2}{4}\ge n}\$) gives the function \$\mathsf{p(n)}\$:
$$p(n)=
\begin{cases}
1&\text{ for }n=1\\
\lceil{}2\sqrt{n}\rceil&\text{ for }n\gt1
\end{cases}$$
I included the case for \$n=1\$ separately, as it is special. Normally, you can just use the second expression.
Charlieplexing
In this method, we have two LEDs between every set of two pins. We can calculate the amount of sets of two pins with:
$$(n-1)+(n-2)+\dots+1 = \frac{n\cdot(n-1)}{2}=\frac{n^2-n}{2}$$
Now we can say that:
$$L(n)=2\cdot\frac{n^2-n}{2}=n^2-n$$
We saw that the amount of LEDs equals \$\mathsf{n\cdot(n-1)}\$. Inverting this (we need \$\mathsf{p^2-p\ge n}\$) leads to:
$$p(n)=
\begin{cases}
1 &\text{ for } n=1\\
\left\lceil\frac{1+\sqrt{1+4n}}{2}\right\rceil &\text{ for } n\gt1
\end{cases}$$
I included the case for \$n=1\$ separately, as it is special. Normally, you can just use the second expression.
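To make the comparison concrete, here is a small C sketch (the helper names are mine; it ignores the n = 1 direct-drive special case and finds the minimum pin count by direct search rather than with the closed-form expressions above):
/* Maximum LEDs from n pins (n >= 2) for each method. */
unsigned max_leds_matrix(unsigned n)      { return (n / 2) * ((n + 1) / 2); } /* floor(n/2)*ceil(n/2) */
unsigned max_leds_charlieplex(unsigned n) { return n * (n - 1); }             /* n^2 - n              */
/* Minimum pins for n LEDs, found by searching upward from 2 pins. */
unsigned min_pins_matrix(unsigned n)      { unsigned p = 2; while (max_leds_matrix(p)      < n) p++; return p; }
unsigned min_pins_charlieplex(unsigned n) { unsigned p = 2; while (max_leds_charlieplex(p) < n) p++; return p; }
For example, min_pins_matrix(6) returns 5 while min_pins_charlieplex(6) returns 3, matching the formulas above.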
Other methods
I'm not aware of any other methods, as of Tuesday, March 12, 2013. |
H: Power supply for speakers
I have two big speakers with only bare wires for input, and a small alarm clock with a low-voltage output. I was wondering how I could supply more power to the speakers (there are two of them)? I was hoping to get the extra power from mains power (240 V in New Zealand). P.S. Yes, I am only a hobbyist, so please explain it like I'm stupid :)
AI: No, just providing an extra power supply, from mains or otherwise, will not allow the clock to drive the external speakers.
Here is a broad explanation of what needs to be done to arrive at a workable solution:
Determine if the alarm clock output is an actual audio signal, or (usually) a DC voltage that drives a piezoelectric buzzer.
If it is an audio signal, determine the AC component of the drive voltage of the sound output.
Determine the impedance (Ohms) of the external speakers you have, and the supported power (Watts) of each speaker.
Design or buy a power audio amplifier that is matched to the speaker impedance, to optimally drive speakers of that impedance, to the desired power level, as determined above. The amplifier should also accept an input signal of the voltage level the alarm clock's audio output provides.
Use the alarm clock audio output signal to feed the power amplifier signal input, and the power amplifier output to the speaker.
If, however, the question is essentially seeking some off-the-shelf product that does this job, this may not be a good site for the question. |
H: Is information lost when you downconvert an RF frequency?
I am trying to work out what happens when you downconvert an RF signal, such as what happens in an SDR device when tuning to a given frequency. For example if a device using a Zero-IF is tuned to 399MHz, then whatever signal you see at 400MHz will appear at 1MHz where it is then digitised.
Now imagine at 400MHz you see a signal consisting of just a carrier, which switches on and off very rapidly. On for one cycle, off for one cycle, on for another cycle, off again. If you assign a binary 1 to the 'on' cycle and a binary 0 to the 'off' cycle, I believe this would allow you to transmit 400,000,000 bits per second.
Now what happens if this signal is downconverted to 1MHz ready for the SDR to digitise? If the 1MHz carrier switches on and off at a rate of one cycle at a time, there will only be 1,000,000 transitions, although the original signal had 400,000,000 transitions in the same time period.
So what happens in this case? Does the 1MHz carrier cycle on and off at the original 400MHz frequency? Does that allow you to transmit your original 400,000,000 bits per second on a 1MHz carrier frequency? Or are the extra cycles lost somehow? What would the resulting signal at 1MHz look like?
AI: The thing to realize here is that if you take a sinusoidal carrier and switch it on and off, change its amplitude, frequency, or modulate it in any way, then it can be shown mathematically, but somewhat counter-intuitively, that what you are doing is introducing sinusoidal components at other frequencies. In fact, any periodic waveform can be represented as a sum of sine waves. Take for example, the square wave here:
The mathematical tool that allows this transformation is the Fourier transform. Here in the case of the square wave we can see it is made of the fundamental frequency, plus all of its odd harmonics. Even if the signal we care about isn't strictly periodic (they usually aren't), we can pick some segment of the signal that is periodic, or mostly so, and analyze that.
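For reference, the Fourier series of an ideal square wave (amplitude ±1, fundamental frequency \$\omega_0\$) is exactly the fundamental plus all odd harmonics, with amplitudes falling off as 1/k:
$$x_{square}(t)=\frac{4}{\pi}\sum_{k=1,3,5,\dots}\frac{\sin(k\omega_0 t)}{k}$$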
Similarly, your example of switching a carrier on and off also introduces higher frequency components than your carrier. In fact, any rapid departure from a perfect sine wave creates high-frequency components. This explains how information is not lost: these high-frequency components are also down-converted and detectable by your SDR, provided it has sufficient bandwidth to see them all.
It also explains why this modulation scheme is not used in practice: each switch on and off would create a lot of noise far away from the carrier spectrum. In fact, this might be one of the oldest modulation problems in radio: CW (the usual way to modulate Morse code, simply switching a carrier on and off) is exactly what you describe, albeit at a much slower rate. While it would be conceptually simplest to switch the carrier hard on and hard off, this creates what's called "key clicks", undesirable interference on other frequencies, as well as an audible "click" resulting from those high-frequency components being converted down to audio frequencies. Consequently, the carrier is actually slowly tapered on and tapered off to reduce the bandwidth occupied by the signal. The tapering is fast enough it's not perceived by the listener as a taper, but slow enough that the high-frequency components are negligible compared to the carrier. |
H: Calculate Potential Difference for capacitor
I have a simple circuit that controls a 5vdc power source. My question is; how do I know what the Potential Difference is? In the circuit below I am sending 5v DC to a component (dotted box is where the capacitor will go).
Edit: Not sure why this was voted down twice. It's a question. Some people are too quick to vote down on http://electronics.stackexchange.com. It shouldn't be this way.
AI: To know the potential difference across a capacitor, you also need to consider what is happening as a function of time. This is what makes a capacitor useful. If you apply a DC voltage to a capacitor, the potential difference across the capacitor will be that DC voltage.
If that voltage changes, then a current will flow with the aim of making the capacitor voltage the same as the applied voltage. Ideally, this current has no limit, and the voltages are always identical, but in practice there is some series resistance, even if only the non-ideal resistance of the wires and the capacitor, that limits the current and introduces a difference.
The magnitude of the current that will flow (ideally) is a function of the rate of change of voltage, and the capacitance:
\$ I = C\dfrac{dV(t)}{dt} \$
As others have said, it's unclear what you are trying to accomplish or what you are asking, so it's difficult to more directly answer your question. |
H: Why cable TV frequency range ends at 1 GHz?
Currently frequencies 50-1000 MHz are allocated for TV and data transfers. I understand why there are limitations for aerial or satellite transmissions, but why are they retained for cable transmissions? If a cable provider streams something only through RF cable network, why can't he use whatever frequencies he wants, higher than 1 GHz (ensuring that customers have correct receivers)?
AI: One factor (but not the only one) is certainly "plant leakage".
Plant in this case means the "cable plant", which is an odd term; but just as a factory is also known as a plant, the installed base of copper and cable is also an asset and is called plant. Very strange.
Because there is so much cable plant in a system, operators try to keep its cost down. All cables leak RF, and the higher the frequency, the greater the leakage. They have to decide on a cut-off point and trade that off against cost, since more shielding and better connectors cost more.
Hopefully someone else will chime in with other reasons, perhaps historical, amplifier limitations perhaps. |
H: Is there any real time clock (RTC) which provides time resolution in microseconds?
I've been searching for high precision RTCs on Google but almost all RTCs like DS12C887, DS1307 provide time resolution in seconds which is ok for general use. Are there any RTC ICs which can provide finer resolution like in milliseconds and microseconds?
AI: I've often found it frustrating that external RTC devices (and for that matter even internal ones, for reasons I can't fathom) seldom offer resolution anywhere near that of the incoming time base, but I would find it extremely unlikely that any conventional RTC device would offer microsecond accuracy in any case. RTC chips are designed to minimize power consumption when the system is idle, and an RTC chip which used a 1MHz clock would almost certainly use more than 30 times as much energy as one that used a 32KHz clock.
Depending upon what you are trying to do, and your system's waking/sleeping patterns, it may be possible to use an RTC with somewhat coarse resolution in conjunction with a higher-speed timer/counter circuit which is triggered by an external event. Operation would be something like:
Have everything wait while asleep until the external event occurs
When the event occurs, start up a ~5MHz non-precision clock and start counting
Log the count and RTC time at the first tick of the RTC clock following the event
Log the count at the second tick of the RTC clock following the event
The difference between the count values at the two RTC ticks will indicate the rate of the non-precision 5MHz clock, and the count value at the first event will indicate how long before the first RTC tick the event took place.
For example, suppose that the RTC can be read in units of 1/256 second. One receives an event when the RTC clock reads 12:34:56 78/256. The RTC clock advances to 12:34:56 79/256 when the count reads 1,000, and then advances to 80/256 when the count reads 20,000. One could then figure that the "5MHz" clock was running at a speed of 4,864,000 Hz, and thus the event happened 1,000/4,864,000 second before the first RTC tick it saw (the one at 79/256), which is to say--if the RTC clock was precisely correct--at a time of about 12:34:56.3083882.
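A small illustrative C program reproducing that arithmetic (the variable names and the use of floating point are my own simplifications, not from any particular RTC driver):
#include <stdio.h>
int main(void)
{
    double rtc_step = 1.0 / 256.0;   /* RTC resolution in seconds                                   */
    double t1       = 79.0 / 256.0;  /* RTC reading (seconds into the second) at the first tick seen */
    double count1   = 1000.0;        /* fast-counter value at that first tick                        */
    double count2   = 20000.0;       /* fast-counter value at the next tick                          */
    double clock_hz   = (count2 - count1) / rtc_step;   /* measured rate of the "5 MHz" clock        */
    double event_time = t1 - count1 / clock_hz;         /* event preceded the first tick by count1 cycles */
    printf("clock = %.0f Hz, event at %.7f s into the second\n", clock_hz, event_time);
    return 0;
}
This prints a clock rate of 4,864,000 Hz and an event time of about 0.3083882 s into the second, as in the worked example.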
Note that one would have to wait up to 1/128 second to know the exact time when an event had occurred, but once one had seen the RTC advance twice one would know within a fraction of a microsecond the time of the event [and could thus determine very precisely the time between two distinct events between which the CPU had been sleeping]. Some Lattice Mach series CPLDs have a fast-starting clock oscillator which runs at 5MHz nominal, and have relatively low sleep currents, and thus might be usable for this sort of timekeeping. |
H: About gatekeeper
"Gatekeeper" is a telecommunication term.
The following paragraphs are from a technical book.
My question is quite simple: is "Gatekeeper" a software or a hardware?
H.323 is one of those ITU (International Telecommunication Union)
recommendations that specifies an overall architecture and
methodology, and that incorporates several other recommendations.
An H.323 terminal is an endpoint that offers real-time communications
with other H.323 endpoints. Typically, a terminal is an end-user
communications device that supports at least one audio
coder/decoder(codec) and may optionally support other audio codecs
and/or video codecs.
A gateway is an H.323 endpoint that provides translation services
between the H.323 network and another type of network, such as an
Integrated Services Digital Network(ISDN) or the regular phone network, which in ITU parlance is known as the General Switched
Telephone Network(GSTN).....
A gatekeeper is an optional entity within an H.323 network. When
present, a gatekeeper controls a number of H.323 terminals, gateways,
and multipoint controllers (MCs). By control, we mean that the
gatekeeper authorizes network access from one or more endpoints and
may choose to permit or deny any given call from an endpoint
within its control. A gatekeeper may offer bandwidth control services,
which can help to ensure high quality of service(QoS) if used in
conjunction with bandwidth and/or resource management techniques. A
gatekeeper also offers address translation services, enabling the use
of aliases within the network.
AI: A gatekeeper has both hardware and software. To talk ISDN or POTS you need hardware. You also need Ethernet, I believe, on the H.323 side.
There is also software to handle signalling, e.g. to turn a phone number into a H.323 address. |
H: Do I need to use a resistor with a microswitch to limit current for an Arduino Uno?
I'm using an Arduino Uno to control a small robot and I'm starting to add micro-switches as bump-sensors. I know that when I'm creating a circuit with LEDs I need to use resistors to limit the current so I don't mess up the Arduino, but do I need to do the same with the micro-switches? If so, how do I calculate the necessary resistance?
If it makes a difference, my micro-switch is Pololu #1405, Snap-Action Switch with 15.6mm Bump Lever 3-pin, SPDT, 5A. The Arduino outputs 5V and has a maximum of 40mA.
EDIT: I've done some more research and I found a page that seems to describe exactly what I want to do (http://arduino.cc/en/tutorial/button), don't know how I missed it before. So if I'm understanding the situation correctly, the only resistor I need is the pull-up/down one between the switch and ground, but none between the switch and the pin that's reading the switch.
AI: You do not need a resistor, as long as you set up the pin to be input. The current protection resistors are necessary only in cases where the Arduino pin is used as a current source, i.e. output. If the pin is used as an output pin, you may need a current-limiting resistor, depending on what is connected to the pin. Usually we don't connect switches to outputs though (except for unusual cases when the switches need power, switches are actually circuits, etc).
For an input pin you may want a pull-up or a pull-down resistor. However, all AVR chips have built-in pull-ups. After you configure the pin as an input, you activate the pull-up resistor with digitalWrite(pin, HIGH) on that pin, or, as @mpflaga points out, on the newer Arduino toolchain you can set the input mode and pull-up in one call with pinMode(in_pin, INPUT_PULLUP);. See here for more information on pull-up/pull-down resistors.
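A minimal sketch of the internal pull-up approach (the pin number and the serial printing are arbitrary choices for illustration):
const int buttonPin = 2;

void setup() {
  Serial.begin(9600);
  pinMode(buttonPin, INPUT_PULLUP);   // input with the internal pull-up enabled
}

void loop() {
  // With a pull-up, the pin reads HIGH while the switch is open and LOW while it is pressed.
  if (digitalRead(buttonPin) == LOW) {
    Serial.println("pressed");
  }
}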
If you want a discrete pull-up resistor (i.e. not use the built-in one, though I don't know why you would do that) the circuit would look as follows:
simulate this circuit – Schematic created using CircuitLab
The pin will read high when the switch isn't pressed, and low when it is. |
H: Batteries in series with different amp-hour ratings.
I have a 1.5 amp-hour, 12 V battery and I have a 10 amp-hour 12 V battery. I know the voltage will increase to 24 V when they are put in series, but I don't know what goes on amp-hour wise.
Does the new amp-hour rating take the amp hour of the low amp hour rating, does it take the 10 amp hours or is it like an average?
AI: It is bad practice to connect batteries in series when they don't have the same capacity. The battery with the smaller capacity will be empty before the larger one, resulting in a lower voltage across the smaller battery. At that point things will start to get interesting, as the larger battery will start to charge the smaller one through the connected circuit, with reversed voltage. The cell is not designed to be reversed and charged, and bad things may happen, such as leaking acid or exploding. Neither of these situations is desirable. This is also the reason why most manuals of battery-operated devices urge you to replace all batteries at the same time. |
H: is my AND logic gate broken? HD74LS08P
When I turn on the circuit, the AND gate (74LS08) keeps sending a signal to the 4 outputs, even without a signal on the inputs. I may have broken it because I applied 7.5 volts before correcting the circuit voltage.
AI: At best the part is marginal. To clarify (and to check your IC), if your inputs are all 0 your output should be 0, if all inputs are 1 your output should be 1. If this is not the case your gate is toast.
My only worry is that you mention "without signal from the inputs". The output is not a tri-state device. The output will always be either 0 or 1 (and never off - a third state). Leaving your inputs floating by not connecting a signal to them (and expecting that the output is off) is incorrect. |
H: Arduino and Bluetooth USB dongle
Is it possible to use a mini USB Bluetooth dongle like the one in the following picture to improve my Arduino Uno, so it can communicate with other Bluetooth devices?
If it is, how can I do that?
AI: It is, in theory, possible to make your Arduino talk USB to the Bluetooth dongle. Usually, however, the better solution is to buy a serial-to-Bluetooth module and connect that to the serial pins on your Arduino, or to other pins usable by the SoftwareSerial library. |
H: Why does my PSU die every time I test it with a multimeter
I've set up my PSU to use as a "lab power supply", I'm trying to obtain 12 Volts at 14 amps. I have the red, orange, yellow, black wires separated, and connected the green wire to ground to keep the PSU on.
That being said I feel that the problem is not necessarily with those first few steps. Every time I connect my multimeter between all the black and red, or black and yellow wires the entire PSU shuts off and won't restart until I touch the power wires to a grounding source (Some plates of metal I have).
I'm trying to figure out how I can test the connection with my multimeter without shutting off the entire PSU.
AI: A multimeter set to measure current (an ammeter) looks like a short circuit, or just a wire. When you connect your multimeter to the PSU, you are overloading it by presenting it with such a low resistance that it can not provide enough current to maintain its designed output voltage, so the protection circuits kick in and turn it off. Were it not for those protection circuits, a fuse in your meter would blow, or some wires would get hot and melt, or something similarly bad would happen.
More likely what you want to do is set your meter to measure voltage. In this mode, the meter looks like an open circuit, as if it's not there at all. Or, if you are interested in measuring the current being supplied to a load, put the meter in series with the load, not in parallel with it. |
H: Wiring an illuminated toggle switch
I'm trying to build a model rocket launcher circuit. So far I have
this
The igniter requires 12V and 3A, BAT1 is 12.6V 9800mAh.
The arduino input pin should read HIGH when the igniter is connected and LOW when it isn't provided that SW2 is closed.
When the Arduino output is HIGH, the FET should trigger and the igniter should ignite, provided that both SW1 and SW2 are closed.
Various Datasheets (they're not links because I don't have enough rep):
M1: FQP30N06L 60V Logic N-Channel MOSFET
D1: 10A05 10 Ampere Rectifier
D4: 1n4148 Small Signal Diode
C1: No datasheet, 50V 100µF
Could someone please look over this schematic and make sure everything works? And additionally, I want to replace SW1 with an illuminated toggle switch that has 3 pins, POWER, ACC, and GND. Could someone please edit the schematic to use that LED toggle as I don't know how to wire it. I was told this might work but they weren't sure:
Additionally, this circuit can be simulated at this link (Falstad).
AI: I don't quite understand the use of D4 to the Arduino Vcc pin. If it's intended as the primary Arduino power source you're going to have some problems because there's a rather large resistor (R6) in the way. \$\frac{12V}{10k\Omega} = 1.2mA\$. This is way too small for running the Arduino reliably. There's no need for it otherwise.
I would just hook the Arduino Vcc pin directly to the battery. Use a schottky diode if you intend to use multiple supplies, otherwise you can get rid of D4 completely.
There's also the issue with D1 and having the sense line hooked through the igniter. Personally I would rather not have any current flowing through the igniter unless I intend to fire it. If D1 is a fly-back diode, hook it directly around the igniter only.
Also, your voltage divider S2 sensor circuit is outputting a nominal 6V. This is likely just barely in the acceptable range. You might be relying on the internal clamping diodes of the Arduino to clamp the logic level to ~5V. Since there are current-limiting resistors in place, the internal clamping diodes might be able to handle the higher input voltage all day, but it is not optimal. Consider using a voltage divider which will output a lower nominal voltage.
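As a rough illustration (the resistor values here are assumed, not taken from your schematic), the divider output is
$$V_{out}=V_{in}\frac{R_{bottom}}{R_{top}+R_{bottom}}$$
so, for example, a 22 kΩ over 10 kΩ divider from the 12.6 V battery would give about 3.9 V, comfortably below the 5 V logic limit.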
If you have the datasheet for the illuminated switch you want to use, post a link to it and I'll update my answer to include how to wire it up. |
H: Setting up an IBL2403 motor driver
I received an shiny new IBL2403 the other day:
IBL2403 http://www.realmotiontech.com/Images1/IBL2403.jpg
However, I've been having difficultly using the EasyMotion Studio software that comes with it.
When I try and set up the driver for the first time, the wizard tries to deploy some test code (i.e. to send back the terminal voltage) on the driver. However, this results in the error message:
Asking for details on test failure gives:
An internal error has occurred in the application. Possible causes:
The board is not responding;
Your board codification does not match with the selected amplifier codification;
Some of the configuration files are not related with the version of program;
You might be down with the system resources.
Regarding the first bullet, the driver is known to be responding, as I can successfully send it serial commands. I'm at a total loss as to the meaning of the other three error messages.
How can I set up this driver, when the reason for setup failing is that the driver is not set up?
AI: The EasyMotion studio software does not work properly with a brand new driver. Instead, you have to put an incomplete setup file on with an external tool.
Within EasyMotion studio, select Application→Create EEPROM Programmer File→Setup only..., and create a .sw file.
Now open the "EEPROM Programmer" software that comes with the install of EasyMotion, and write the file to the driver. For me, this succeeded without errors.
Now that there's some initial setup on the driver, the rest of the configuration should work from EasyMotion studio. |
H: Microcontroller/cpu for fast trigonometry in robot?
This concerns hardware that weighs little, because a walking robot (fat-cat sized, 6 legs with 3 DOF each) has to carry it around. Because of that walking it'll need to do a lot of trigonometry (using matrix math or not, I'm not sure yet), and this is where this question comes from.
A PIC, Arduino or cheap AVR is not fast enough to calculate everything 100 times per second while keeping things like inertia and obstacle avoidance in mind, or even to brute-force paths/gaits.
Plan A is to carry the brain on the robot.
Be it microprocessor, micro ITX, nettop or other; what is efficient hardware to do trigonometry / matrix math fast?
I searched online and expected to find out about AVR, x86, or ARM
microcontrollers specialized in this but no luck there.
Plan B is to have an x86 machine connected via WiFi to do the heavy
lifting. Great for prototyping too, but I'd like this to migrate to
plan A eventually when the hardware miniaturizes. But even then, what
desktop CPU can do trigonometry the fastest?
Plan C is to distribute the load and have one power-efficient microcontroller/core for each leg; although that is not the best solution for many reasons, I like the extensibility of it.
I have not decided on the language and/or library used yet, but prefer Pascal and C++.
(suggestions for more suitable tags welcome, i am new here)
AI: It does not sound like your application is really all that compute intensive. A dsPIC, for example, can execute 400k instructions for each one of your iterations. That's a lot. It will be useful, however, to have good low-level I/O capability, PWM generators, timers, and the like.
Sine and cosine are really not that hard to do on an integer machine like a dsPIC. I have done it a few times myself. The trick is to pick the right representation for angles. Radians may be nice from a theoretical point of view, but are inconvenient computationally. Degrees are artificial and just silly. Use the full range of whatever your machine-sized integer is to represent one full rotation. For example, on a dsPIC, which is a 16 bit processor, one full rotation is 65536 counts, which is way more accuracy and resolution than you need to control a robot or that you can measure anyway.
One advantage of this representation is that all the wrapping happens automatically just due to how unsigned integer adds and subtracts work. Another significant advantage is that this representation lends itself particularly well to using lookup tables for sine and cosine. You only need to store 1/4 cycle. The top two bits of the angle tell you which quadrant you are in, which tells you whether to index into the table forwards or backwards, and whether to negate the result or not. The next N lower bits are used to index into the table, with the table having \$2^N\$ segments (\$2^N+1\$ points). Note that indexing into the table backwards is then just complementing the table index bits.
You can give the table enough points so that picking the nearest answer is good enough. For example, if the table has 1024 segments, then sine and cosine will be computed to the nearest 1/4096 of a circle. That's going to be plenty for controlling a robot. If you want more accuracy, you can either make the table bigger or use the remaining lower bits of the angle to linearly interpolate between adjacent table entries.
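A minimal C sketch of that quarter-wave lookup, assuming a 16-bit binary angle (0..65535 = one full turn) and a result scaled to ±32767; the table size and names are illustrative, and the optional interpolation step is omitted:
#include <math.h>
#include <stdint.h>

#define TABLE_BITS 8                       /* 256 segments per quarter wave        */
#define TABLE_SIZE (1 << TABLE_BITS)

static int16_t sine_table[TABLE_SIZE + 1]; /* quarter-wave table, 0 .. full scale  */

void sine_table_init(void)                 /* or precompute and store in flash     */
{
    for (int i = 0; i <= TABLE_SIZE; i++)
        sine_table[i] = (int16_t)(32767.0f * sinf(1.5707963f * i / TABLE_SIZE) + 0.5f);
}

int16_t sine16(uint16_t angle)             /* angle: 0..65535 represents one full rotation */
{
    uint16_t quadrant = angle >> 14;                                   /* top two bits     */
    uint16_t index = (angle >> (14 - TABLE_BITS)) & (TABLE_SIZE - 1);

    if (quadrant & 1)                      /* 2nd and 4th quadrants: walk the table backwards */
        index = TABLE_SIZE - index;

    int16_t value = sine_table[index];

    if (quadrant & 2)                      /* 3rd and 4th quadrants: negative half cycle */
        value = -value;

    return value;
}
Cosine is then just sine16(angle + 16384), since 16384 counts is a quarter turn in this representation.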
Anyway, the point is it seems your requirements for this processor don't match up with the stated problem. I'd probably use a dsPIC33F. It is certainly small, lightweight, and much more power efficient than a full-blown general-purpose processor like an x86 on a single board computer. |
H: NVSRAM - Storage Capacitor Selection
I was reading this document from the Cypress website on various storage capacitor options for nvSRAM. On page 2, under the heading 'key characteristics' (see link below), I came across various values of capacitors for different densities of nvSRAM (e.g. 4 Mbit parallel nvSRAM: 68 μF with 10% tolerance).
My question is this: What basis did they use to arrive on particular value of a capacitor?
Capacitor options- nvSRAM
AI: The datasheet says that the capacitor needs to provide enough energy for the "store to nvram" action to work. Cypress will have done simulations of the chip to work out how much current is consumed, and for how long, during this phase. That gives a value in coulombs for the charge consumed in that time; combined with their knowledge of what the minimum voltage must be during this process, that gives a value for the capacitor.
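As a rough, illustrative calculation (the numbers below are assumptions for the sake of the example, not Cypress's figures): if the store operation draws an average of 25 mA for 10 ms and the capacitor is allowed to sag from 4.5 V down to a 3.0 V minimum, then
$$C \ge \frac{I\,t}{\Delta V} = \frac{0.025\,\mathrm{A}\times 0.01\,\mathrm{s}}{1.5\,\mathrm{V}} \approx 167\,\mathrm{\mu F}$$
The same charge-and-minimum-voltage reasoning, with the real numbers from their simulations, is how a value like 68 µF for the 4-Mbit part would be arrived at. |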
H: Voltage Follower Saturation
I'm trying to use a 0.5V signal as a source.
I've wired a TL072CN as follows:
VCC+ to 11V
VCC- to GND
OUT2 to IN2-
IN2+ to 0.5V
The thing is when I measure the voltage on OUT2 I get 10V where I am expecting 0.5V.
I made some test and for different values of IN2+ I get the values under:
IN2+ = 0.5V : OUT2 = 10V
IN2+ = 1.0V : OUT2 = 1.3V
IN2+ = 3.0V : OUT2 = 3V
IN2+ = 5.0V : OUT2 = 5V
Do you have any idea where this effect could come from?
Edit: the schematic (I didn't know you could do this)
simulate this circuit – Schematic created using CircuitLab
AI: From the datasheet you linked to:
ELECTRICAL CHARACTERISTICS VCC = ±15V, Tamb = +25°C (unless otherwise
specified)
Vicm : Input Common Mode Voltage Range min +/-11V, typ +15V -12V
Which means that at ±15 V supplies, it is guaranteed to work when the inputs are more than 4 V away from either supply, and will usually work from V- + 3 V up to V+.
Yours is already doing at least this well...
Now the good news : you can find a drop-in replacement if you search for "rail to rail opamps" which are designed to avoid this limitation. Depending on your application, you can either use one of these, or re-design to use the TL072 within its specifications. |
H: Using Audio Input of computer as Oscilloscope for Hobby Electronics
I just discovered an app for mac that uses the Audio Input of the computer and the AD-converter in the sound card as an Oscilloscope.
First, I was thinking this can't work... but theoretically, it's a great idea! I know it can only measure frequencies up to ~20 kHz, but if you don't need to measure MHz, then it's just fine.
My question is: how reliable is this technique, independent of the software? If you build a nice adapter with overvoltage protection (max 2 V), then I suppose there is no danger for the sound card of the computer, right? Any other experiences?
Thanks
AI: Generally speaking, the audio inputs of a PC are terrible. Lots of noise, terrible frequency response, etc.
For example, the frequency response of a PC audio input is typically 20Hz to 20 KHz, but could be much worse. Some audio inputs are only really designed for voice, which has a frequency band of about 300 Hz to 4 KHz.
But none of the PC audio inputs will be good down to DC (0 Hz). This makes the usefulness of this as an o-scope rather limited.
This is useful if you are looking at analog signals that are within the 20Hz to 20 KHz range. It can be very useful, actually, for this. But when looking at digital signals, or signals that have components outside of this range, then the usefulness quickly vanishes.
Here's the thing: You want your test gear to be trustworthy, and using a PC audio input is anything but trustworthy. Yes, there are some situations that you can probably make it work-- but it takes extra work to know if what you are seeing is valid or an artifact of the "scope". Your mileage may vary. For many things, it just isn't worth the effort to use the PC audio inputs in that way. |
H: How does one compute the ripple currents seen by a rectifier filter capacitor?
Suppose one is designing a full-wave rectifier bridge, with a known maximum current load, and a known input inductance. Assume three-phase for this example, though the same concept applies to single-phase:
The ripple current seen by the output capacitor is critical. If that current is too high, the capacitor will heat up, and its lifespan will be reduced. But how does one compute the ripple seen by this capacitor?
AI: Assume the system is already precharged and operating in a steady state. The bridge has two discrete states: either the capacitor is charging (a diode pair is forward biased), or the capacitor is discharging. Call the period P, the charge time DP, and the discharge time (1-D)P.
During the charge cycle, we can approximate the current entering the capacitor as a triangle, starting at 0, and rising to a peak.
$$
1: I_{charge}(t) = \frac{t I_{peak}}{DP}\\
$$
Assume that the output capacitance is large enough that its voltage ripple is small, meaning the current out of the cap during the discharge time is fixed.
$$
2: I_{discharge}(t) = I_{load}\\
$$
Computing the RMS:
$$
3: I_{RMS}=\sqrt{\frac{\int_0^{DP}I_{charge}^2(t) dt + \int_{DP}^{P}I_{discharge}^2(t) dt}{P}}
$$
Evaluating the integral:
$$
4: I_{RMS}=\sqrt{\frac{I_{peak}^2D}{3} + I_{load}^2(1-D)}
$$
Since we're in a steady state, the total charge into the capacitor during the charge cycle must be equal to the total charge leaving the capacitor during its discharge time:
$$
5: Q_{charge}=Q_{discharge}
$$
The total charge entering the capacitor is the area of the current triangle:
$$
6: Q_{charge}=\frac{I_{peak}DP}{2}.
$$
The charge leaving the capacitor during the discharge cycle is the product of the fixed current and time:
$$
7: Q_{discharge} = I_{load}(1-D)P.
$$
Which gives us:
$$
8: \frac{I_{peak}DP}{2} = I_{load}(1-D)P
$$
Solve for peak current:
$$
9: I_{peak}=\frac{2I_{load}(1-D)}{D}
$$
Substitute into equation 4:
$$
10: I_{RMS}=I_{load}\frac{\sqrt{D^3-5D^2+4D}}{D\sqrt{3}}
$$
From this we see that the ripple current seen by the output capacitor is a function of the load current and the fraction of the AC period spent charging the capacitor. As D approaches 0, the ripple current approaches infinity. As D approaches 1, the ripple current approaches 0. Longer charge times reduce the ripple.
Consider the choke currents and capacitor voltages during a charge cycle:
$$
11: V_{choke} = L\frac{di}{dt}\\
12: I_{cap} = C\frac{dv}{dt}
$$
During the charge cycle, we have approximated the current through the choke into the capacitor as a triangle with a height of I_peak. The average current into the capacitor during the charge cycle is half this peak. The length of the charge cycle is DP. The voltage across the choke starts at 0, rises to a peak approximately equal to the ripple voltage dv, then falls back to zero. We can approximate the average voltage across the choke as half the ripple voltage.
$$
di = I_{peak}\\
dt = DP\\
I_{cap} = \frac{I_{peak}}{2}\\
V_{choke} = \frac{dv}{2}
$$
Substituting into 11 and 12:
$$
13: \frac{dv}{2} = L\frac{I_{peak}}{DP}\\
14: \frac{I_{peak}}{2} = C\frac{dv}{DP}
$$
Solve both equations for dv, then solve for D:
$$
15: \frac{2LI_{peak}}{DP} = \frac{DPI_{peak}}{2C}\\
16: D = \frac{2\sqrt{CL}}{P}
$$
Substitute into equation 10 to find the RMS current seen by the capacitor.
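As a sanity check, here is a short C program that plugs assumed, illustrative numbers into equations 16 and 10 (a 50 Hz three-phase bridge, so one ripple pulse per sixth of a cycle; the L, C and load values are made up for the example):
#include <math.h>
#include <stdio.h>

int main(void)
{
    double P     = 1.0 / 300.0;   /* ripple period, s: 6 pulses per 50 Hz cycle (assumed) */
    double L     = 1e-3;          /* choke inductance, H (assumed)                        */
    double C     = 1000e-6;       /* output capacitance, F (assumed)                      */
    double Iload = 20.0;          /* DC load current, A (assumed)                         */

    double D    = 2.0 * sqrt(C * L) / P;                                   /* equation 16 */
    double Irms = Iload * sqrt(D*D*D - 5.0*D*D + 4.0*D) / (D * sqrt(3.0)); /* equation 10 */

    printf("D = %.2f, capacitor ripple current = %.1f A rms\n", D, Irms);  /* ~0.60, ~17.4 A */
    return 0;
}
With these numbers the charge fraction D comes out at 0.6 and the capacitor sees roughly 17 A RMS for a 20 A load; shrinking L or C pushes D down and the ripple current up, as described above.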
So the length of the charge cycle is twice the time constant of the LC resonant circuit. Increasing the size of the choke spreads the charge cycle over a longer time, reducing the RMS current (and improving line harmonics). Increasing the size of the capacitor lengthens the time the choke is forward-biased. And increasing the frequency (decreasing the period) means each charge pulse can be smaller and deliver the same current. Thus, three-phase rectifiers have lower ripple current on their output capacitors than single-phase. This math indicates that for a fixed capacitor ripple current, a three-phase rectifier run with a single-phase input can only run ~30% of the three-phase load current. |
H: How to drive a 20mA LED From a 4mA max GPIO Pin
I have an IC that has an GPIO with which I would like to drive a LED.
Since the device will be running off battery, keeping the power use low (Zero maybe) while the LED is off as a priority.
The GPIO supplies 3.3V when turned on and 0.0V when turned off.
It also has a limit of a maximum of 4mA.
The LED has a forward current of 20mA and a desired forward voltage of 2.0V.
When the LED is turned on it will most likely be blinking (using PWM) in the low kilohertz range.
After poking around I believe this may be the type of circuit I need.
Question 1: Am I even close to being on the right track.
Question 2: What is the correct component to use for item (5), (Transistor or Mosfet), and how do I go about finding one (at the local Frys, RadioShack, Online) and how are they identified(specified)?
Question 3: Will the choice of item (5) have any effect on the ohm value of the resistor item (3)? Apart from the normal Ohms law for the 3.0V power source and the 2.0V LED.
Question 4: What would be the ohm value of the resistor item (2), if any is required.
AI: The circuit you show should work, but is unnecessarily complicated and expensive. Here is something simpler and cheaper:
Just about any small NPN transistor you can find will work in this role. If the B-E drop of the transistor is 700 mV and the LED drops 2.0 V, then there will be 600 mV across R1 when the LED is on. In this example, that will allow 17 mA to flow thru the LED. Make the resistor higher if you can tolerate lower light from the LED and want to save some power.
Another advantage of this circuit is that the collector of the transistor can be connected to something higher than 3.3 V. This won't change the current thru the LED, just the voltage drop on the transistor and therefore how much it dissipates. This can be useful if the 3.3 V is coming from a small regulator and the LED current would add a significant load. In that case, connect the collector to the unregulated voltage. The transistor in effect becomes the regulator just for the LED, and the LED current will come from the unregulated supply and not use up the limited current budget of the 3.3 V regulator.
Added:
I see there is some confusion how this circuit works and why there is no base resistor.
The transistor is being used in emitter follower configuration to provide current gain, not voltage gain. The voltage from the digital output is sufficient to drive the LED, but it can not source enough current. This is why current gain is useful but voltage gain not necessary.
Let's look at this circuit assuming the B-E drop is a fixed 700 mV, the C-E saturation voltage is 200 mV, and the gain is 20. Those are reasonable values except that the gain is low. I am using a low gain deliberately for now because we'll see later that only a minimum gain is needed from the transistor. This circuit works fine as long as the gain is anywhere from that minimum value to infinity. So we'll analyze at the unrealistically low gain of 20 for a small signal transistor. If all works well with that, we're fine with any real small signal transistors you will come across. The 2N4401 I showed can be counted on to have a gain of about 50 in this case, for example.
The first thing to note is that the transistor can't saturate in this circuit. Since the base is driven to at most 3.3 V, the emitter is never more than 2.6 V due to the 700 mV B-E drop. That means there is always a minimum of 700 mV across C-E, which is well above the 200 mV saturation level.
Since the transistor is always in its "linear" region, we know that the collector current is the base current times the gain. The emitter current is the sum of these two currents. The emitter to base current ratio is therefore gain+1, or 21 in our example.
To calculate the various currents, it is easiest to start with the emitter and use the above relationships to get the other currents. When the digital output is at 3.3 V, the emitter is 700 mV less, or at 2.6 V. The LED is known to drop 2.0 V, so that leaves 600 mV across R1. From Ohm's law: 600mV / 36Ω = 16.7mA. That will light the LED nicely but leave a little margin to not exceed its 20 mA maximum. Since the emitter current is 16.7 mA, the base current must be 16.7 mA / 21 = 790 µA, and the collector current 16.7 mA - 790 µA = 15.9 mA. The digital output can source up to 4 mA, so we are well within spec and not even loading it significantly.
The net effect is that the base voltage controls the emitter voltage, but the heavy lifting to provide the emitter current is done by the transistor, not the digital output. The ratio of how much of the LED current (the emitter current) comes from the collector compared to the base is the gain of the transistor. In the example above that gain was 20. For every 21 parts of current thru the LED, 1 part comes from the digital output and 20 parts from the 3.3 V supply via the collector of the transistor.
What would happen if the gain were higher? Even less of the overall LED current would come from the base. With a gain of 20, 20/21 = 95.2% comes from the collector. With a gain of 50 it is 50/51 = 98.0%. With infinite gain it is 100%. This is why this circuit is actually very tolerant of part variation. Whether 95% or 99.9% of the LED current comes from the 3.3 V supply via the collector doesn't matter. The load on the digital output will change, but in all cases it will be well below its maximum, so that doesn't matter. The emitter voltage is the same in all cases, so the LED will see the same current whether the transistor has a gain of 20, 50, 200, or more.
Another subtle advantage of this circuit which I mentioned before is that the collector need not be tied to the 3.3 V supply. How do things change if the collector were tied to 5 V, for example? Nothing from the LED or the digital output's point of view. Remember that the emitter voltage is a function of the base voltage. The collector voltage doesn't matter as long as it's high enough to keep the transistor out of saturation, which 3.3 V already was. The only difference will be the C-E drop across the transistor. This will increase the power dissipation of the transistor, which in most cases will be the limiting factor on maximum collector voltage. Let's say the transistor can safely dissipate 150 mW. With the 16.7 mA collector current we can calculate the collector to emitter voltage that would cause 150 mW dissipation: 150 mW / 16.7 mA = 9 V. We already know the emitter will be at 2.6 V, so the maximum collector voltage would be 9.0 V + 2.6 V = 11.6 V.
This means that in this example we can tie the collector to any handy supply from 3.3V to 11.6 V. It doesn't even need to be regulated. It could actively fluctuate anywhere within that range and the LED current would remain nicely steady. This can be useful, for example, if the 3.3 V is made by a regulator with little current capability and most of that is already allocated. If it is running from a roughly 5 V supply, for example, then this circuit can get most of the LED current from that 5 V supply while still keeping the LED current nicely regulated. And, this circuit is very tolerant of transistor part variations. As long as the transistor has some minimum gain, which is well below what most small signal transistors provide, the circuit will work fine.
One of the lessons here is to think about how a circuit really works. There is no place in engineering for knee jerk reactions or superstitions like to always put a resistor in series with the base. Put one there when it's needed, but note that it isn't always, as this circuit shows. |
H: Working out thickness of a copper wire
This was one of the questions for the repetition paper before the test.
"A lit light bulb has a resistance of 1320 Ω. It uses tungsten as a filament, and when hot, has a resistivity of 0.92 µΩm. Assume the filament is 2.0 cm long, how thick is then the filament?"
I work it out as follows:
Using the formula for resistance in conductors:
ρ = Resistivity
l = length
A = cross section area
R = resistance
R = ρl/A
A = ρl/R
A = 0.92 µΩm * 0.02m / 1320 Ω
A = 1.39×10^-11 m^2
A ~= 13.9 µm^2
That's just ridiculous. The answer given by my professor is 4.2µm^2.
What am I doing wrong here?
AI: 4.2 microns, not 4.2 microns squared. Area is not a unit of thickness. Divide your answer by pi, take the square root, and multiply by 2 to yield the diameter.
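Worked out with the numbers from the question:
$$d = 2\sqrt{\frac{A}{\pi}} = 2\sqrt{\frac{1.39\times 10^{-11}\,\mathrm{m^2}}{\pi}} \approx 4.2\,\mathrm{\mu m}$$
which matches the professor's answer. |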
H: Is there a PCB-layout related reasoning behind DDR memory package and footprint?
BGA DDR packages have a unique footprint. There are two columns of pads on both sides of the device, and an empty column in between.
Is there a reasoning behind the placement of these pads (in terms of PCB layout), or is this just a consequence of the design of the ddr3 silicon die?
More specifically, what I am wondering is, are there any tips/tricks/guidelines to place DDR modules on both sides of the board, directly across, or very close to each other?
AI: You can take a look at a DDR3 die and Xray photo of the same chip here : http://chipworksrealchips.blogspot.com/2011/02/how-to-get-5-gbps-out-of-samsung.html
You can see that the memory is organized along a central spine and that the pads are placed along this spine. I can't tell you more about the internal layout, as it's not my field of expertise.
For DDR PCB layout you can read this Application Note :
Freescale AN 111 : http://www.freescale.com/files/training_pdf/VFTF09_AN111.pdf
For the chips' placement it's more a signal integrity issue, as the timings are sensitive.
If your PCB and process technologies allow you any placement and your design is compliant with the DDR/DDR2/DDR3 standard (mostly timing constraints), you are free to go with it.
I haven't seen a board with DDR3 memories so far; I have only worked with a board with DDR2 chips. The five chips were placed on the same side (same or opposite side of the CPU) and side by side.
I can only recommend you to simulate your DDR design to be sure that your placement and routing are ok. |
H: Arduino Ethernet not visible in Router Status
I am trying to send over my code to my Arduino Ethernet board via the Adafruit FTDI friend. I am using the 1.0.4 version of the IDE and have the following settings:
Tools -> Board -> Arduino Ethernet
Tools -> Serial Port -> COM 3
The sketch is sent over to the board without problems or errors... however I am unable to see the IP of the board when I check my "attached devices" in my router's settings menu.
Here is a snip of the code I used:
#include <SPI.h>      // needed by the Ethernet library
#include <Ethernet.h> // Arduino Ethernet library (required for this snippet to compile)
byte ip[] = { 192, 168, 9, 199 }; //Manual setup only
byte gateway[] = { 192, 168, 9, 1 }; //Manual setup only
byte subnet[] = { 255, 255, 255, 0 }; //Manual setup only
// if need to change the MAC address (Very Rare)
byte mac[] = { 0xDE, 0xAD, 0xBE, 0xEF, 0xFE, 0xED };
EthernetServer server = EthernetServer(199); //port 199
void setup(){
Serial.begin(9600);
//Pins 10,11,12 & 13 are used by the ethernet shield
pinMode(2, OUTPUT);
pinMode(3, OUTPUT);
pinMode(4, OUTPUT);
pinMode(5, OUTPUT);
pinMode(6, OUTPUT);
pinMode(7, OUTPUT);
pinMode(8, OUTPUT);
pinMode(9, OUTPUT);
//Ethernet.begin(mac);
Ethernet.begin(mac, ip, gateway, subnet); //for manual setup
server.begin();
Serial.println(Ethernet.localIP());
}
Here is a screen shot of the router settings:
And this is my Arduino ethernet hookup:
What could I be missing?
AI: On some routers the attached devices list only contains a list of devices that have requested a DHCP address. Try the following change to go back to using a DHCP address rather than a manual setup:
Ethernet.begin(mac);
//Ethernet.begin(mac, ip, gateway, subnet); //for manual setup |
H: dsPIC: Using FRC and OSC2 as an I/O pin
My question is about using FRC as an oscillator source and OSC2 pin as an I/O pin.
In several different manuals on dsPIC30F chips (ex: DS70139G, page 126) it is written:
LP Secondary 0 0 0 XXXXX (Note 1, 2)
FRC Internal FRC 0 0 1 XXXXX (Note 1, 2)
LPRC Internal LPRC 0 1 0 XXXXX (Note 1, 2)
Note 1: The OSC2 pin is either usable as a general purpose I/O pin or is completely
unusable, depending on the Primary Oscillator mode selection (FPR<4:0>).
I carefully searched manuals, errata, forums, etc., but I cannot find clarification on how to set the FPR bits in the FOSC register to be able to use OSC2 as an I/O pin, as Note 1 suggests. Is this possible at all?
The errata for the dsPIC30F3012 suggests using the FRC with PLL and postscaler to make this pin usable (DS80448D - pages 3, 15). But this errata states that this problem is applicable only to revision B0, which is relatively old. We have chips of revision B1. Was this issue really fixed in B1? If yes, the workaround should not be needed, but in that case how should this feature be used?
Are there any differences in dsPIC30F3010 and dsPIC30F4011 devices regarding this issue?
I use assembler from XC16 v1.11. The sample code is (the part that sets FOSC):
.section __FOSC.sec, code
.pword 0xC701; // FRC with 4x PLL.
Update:
I made experiments with programming undocumented FPR values into the device. As an environment I was using MPLAB X and an ICD3. It turns out that compiling always goes fine and the .hex file always contains the values that I put into the code. But some part of the tool chain blocks programming the undocumented FPR values into the device. After programming, the "Configuration Bits" tab shows the value C100 (FRC without PLL) and the device behaves as if this value were really programmed there.
The behavior described above (blocked programming) happens for the majority of the values. The values 0x02, 0x04, 0x06, 0x08, 0x09, 0x0b, 0x0c manage to get through, but the device does not seem to work properly.
These experiments have not destroyed the device.
AI: I haven't used the part in a while, but it should be just a case of setting the configuration bits correctly. You don't say what compiler you are using so I can't give a code example, but your compiler manual should let you know (the commands will probably be at the end of the include file)
Also, you are better off reading the 30F family reference manual (Section 7 is the Oscillator) as it's far more detailed than the datasheet.
If you let us know which compiler and maybe post the code you have already tried it would help give a more specific answer (e.g. a code example)
I notice in the table below the FPR bits are dashed out for the FRC without PLL option, even though Note 1 says the pin function is dependent on these bits (if so, how should they be set?). This seems suspicious; I think I would raise a support ticket with Microchip to see what they have to say on the matter. |
H: Is Rg, calculated in the linked PDF calculating the pulldowm/pullup resistor for a MOSFET gate?
I stumbled upon the following paper while looking for ways to calculate a specific value for the pullup/pulldown resistor for the gate of a MOSFET. I see several calculations for Rg. Is that the pullup/pulldown resistor for the MOSFET?
Related see this Q
AI: The lecture deals with DC steady-state analysis of the MOSFET. Pull-up and pull-down resistors come into play when there is switching or transient behavior.
I wouldn't call RG1 and RG2 in the lecture pull-up and pull-down. In the lecture, they form a resistor divider, which sets the gate voltage. They might as well have drawn an ideal voltage source between gate and ground. |
H: Meaning of a small rectangle in schematic drawings
What is the meaning of the rectangle at the top of this schematic?
AI: Yet another symbol for a rail or power supply voltage. According to W5VO it means "This is the symbol for ultra-low voltage CMOS supplies of 1.2V" |
H: i2c external pullup kills communication
I am using an MCP23017 (which shouldn't matter, though) to extend the number of GPIO pins of an ARM-based Linux computer using i2c-dev bit-banging. It all works fine with wires of a few centimetres.
The problem is that I would need 2 × ~3 m of wire between clusters of 3 such MCP23017-based modules. Now, if I use a 3 m wire (0.75 mm diameter) for SCL and SDA I do get an ACK for setting the address, but polling the IC for the pin states results in confusing replies.
Sidenote: The MCP23017 has 2 banks of 8 pins each; with the 3 metre wire only 1 bank seems to work, and bank B turns into A... I'm not sure what happens to the signal there; I guess the address in the request is being corrupted on a regular basis.
So I figured that to increase the signal strength I would add an external pull-up (the GPIO pins used for bit-banging already have an internal one), which again works nicely for some 10 cm wires but kills even the address ACK for the 3 m wire.
I tried 2k2, 4k7 and 10k ohms, all with the (exact) same result from my low resolution perspective.
Unfortunately I don't have an oscilloscope to look at the whole thing, and I'm not knowledgeable enough about all the details to figure this out by just staring at it, hence trying this route.
I hope someone has an idea what is happening here, what I could try, or where I could learn enough to understand this at a level that helps me fix it. Let me know if you need any more info.
UPDATE: The 3 m wire now works with 850 Ω external pull-ups. Going down to 150 Ω it also works with 6 m, but that isn't a great thing to do, since one easily uses more power (to pull the line low) than most GPIO pins are rated for.
AI: Given the high cable capacitance, you need a smaller resistor than 2.2k. Try 1k or 750. Also, a cable with lower capacitance will help. The communication to your I2C device is probably OK because bit-banging would be slow.
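For sizing, the I2C specification limits the rise time (1000 ns in standard mode) on a bus of at most 400 pF, and an RC rise from 30% to 70% of VDD takes about 0.847·R·C, so as a rough guide
$$R_{pullup(max)} \approx \frac{t_{r(max)}}{0.847\,C_{bus}}$$
With, say, 300 pF of cable and device capacitance (an assumed figure), that gives roughly 3.9 kΩ as the largest usable pull-up; several metres of cable can easily exceed the 400 pF limit, which is why such small resistors (and, longer term, a lower-capacitance cable or a bus buffer) were needed here. |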
H: What does this circuit do? Op-amp with transistors in feedback
I'm an electrical engineering student and I found this circuit in a guitar amplifier schematic. Basically, I'm just wondering what it does, and how one would go about working out what it does. What approach do you take when you're confronted with this kind of problem?
P.S. The voltage source supplies for the op-amp are actually 16V, not 15V.
AI: In opamp feedback circuits, it's all about the current flowing in the feedback network, which must balance the current flowing from the input. Clearly, the transistors are intended to modify how the feedback network passes current, so the question is to figure out how they do that.
The basic feedback is provided by the string of resistors in the middle, which pass current according to the voltage difference divided by the total resistance. As long as the voltage across either of the 22K resistors is less than about 0.6V, neither transistor will turn on, and you have a basic inverting amplifier with a gain of about -10.
However, if the output voltage exceeds about +14V (note that the "-" input of the opamp is held at "virtual ground"), the lower transistor will start to turn on. This will pass extra current through the feedback network, reducing the gain of the amplifier overall. In terms of the application, this provides a "soft clipping" or "limiting" function. The other transistor conducts when the output tries to go below -14V, making the operation symmetrical.
Note that the diodes are required in order to prevent current from flowing in the "wrong" direction through the 22K resistor and the B-C junction of the corresponding transistor. |
H: Save energy with PIC project
I want to implement a device with 2 push buttons. When I press the first button, it will increment a counter. When I press the second, the number of times the first one was pressed (counter content) will be displayed in an LCD screen.
I want to use a PIC microcontroller and save as much power as possible in order to extend battery life. I think one good way is to put the system in sleep mode when nothing happens and when I press the first button, wake the system up. Is this better to implement with interrupts or is there another way? Also if someone has a better idea that will save energy, please share with me.
AI: Yes, interrupts and sleep mode are the best way of doing this. There is a list of ways to minimise PIC energy consumption in an app note: http://ww1.microchip.com/downloads/en/devicedoc/01146b_chapter%202.pdf |
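As a rough illustration of the structure (an XC8-style PIC18 sketch; register and bit names vary between devices, and configuration bits, oscillator setup, debouncing, the second button and the LCD handling are all omitted), the first button wakes the part through an interrupt pin, the ISR bumps the counter, and the main loop simply goes back to sleep:

/* Sketch only: check your device's datasheet for the exact register names. */
#include <xc.h>

volatile unsigned int count = 0;

void __interrupt() isr(void)
{
    if (INTCONbits.INT0IF) {      /* button press on INT0 woke us up */
        INTCONbits.INT0IF = 0;    /* clear the flag                  */
        count++;                  /* count the press                 */
    }
}

void main(void)
{
    TRISBbits.TRISB0 = 1;         /* INT0 pin as input (button to GND, pulled up) */
    INTCON2bits.INTEDG0 = 0;      /* interrupt on falling edge                    */
    INTCONbits.INT0IF = 0;
    INTCONbits.INT0IE = 1;        /* enable the INT0 interrupt                    */
    INTCONbits.GIE = 1;           /* global interrupt enable                      */

    while (1) {
        SLEEP();                  /* CPU stops here; almost no current drawn      */
        /* execution resumes here after the ISR has run following a wake-up      */
    }
}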
H: PIC18F4580 - PORTD not being read correctly
I am experiencing a strange problem with reading back the value of PORTD while it is configured as an output. I am writing to LATD to set output on the port. The actual output is as I expect (verified with LEDs), but when I then read the value of PORTD, bits 0:3 always come back as zero...
My first thought was conflict with the ADC, but it is switched off, as are CCP1, ECCP and PSP which also share pins with port D. The code works fine when running in MPLAB SIM but the problem shows up both running normally and debugging with a PICkit3...
Here's the code:
void main (void)
{
char test;
ADCON0bits.ADON=0; //Turn adc off
ADCON1 = 0x0F; //Turn individual ADC channels off.
ECCP1CONbits.ECCP1M = 0; //Turn off ECCP module
CCP1CON = 0; //Disable CCP1
TRISEbits.PSPMODE = 0; //Turn off PSP module
TRISD = 0; //PORTD to output
LATD = 0xFF; //Set all bits high
Nop(); //Wait for output to propagate back to PORTD
test = PORTD; //test gets 0xF0???
}
This is really frustrating, all the more so because the hardware actually functions but my self tests keep giving false positives...
AI: I think it's the comparators still being attached - try adding:
CMCON = 0x07;
If you notice in the table CM2:CM0 have to be 111 (rather than 000) to have the pins disconnected and digital. Plus they correspond to pins RD0:RD3: |
H: How does one compute the ripple currents seen by a boost converter output capacitor?
Suppose one is designing a boost converter, with a known maximum current load, and a known inductance:
The ripple current seen by the output capacitor is critical. If that current is too high, the capacitor will heat up, and its lifespan will be reduced. But how does one compute the ripple seen by this capacitor?
AI: Assume the system is already precharged and operating in a steady state. The booster has two discrete states: either the diode is forward-biased (the booster switch is OFF), or the diode is reverse-biased (the booster switch is ON). Call the period P, and the duty cycle D. Thus the on-time is from 0 to DP, and the off-time is from DP to P.
Assume that the output capacitance is large enough that its voltage ripple is small, meaning the current out of the cap during the on-time is fixed.
$$
1: I_{on}(t) = I_{load}\\
$$
During the off-time, we can approximate the current through the diode as a triangle, starting at a peak, and falling to a trough
$$
2: I_{off}(t) = I_{trough} + \frac{(I_{peak} - I_{trough})(P-t)}{(1-D)P}\\
$$
The current through the diode during the off-time is the choke current, which averages around:
$$
3: I_{avg} = \frac{I_{load}}{1-D}\\
$$
Define R to be the fraction above and below the average choke current that the choke current reaches. The peak current into the capacitor is thus the peak current of the choke, less the current going to the load. Similarly for the troughs.
$$
4: I_{peak}=I_{avg}(1+R)-I_{load}\\
5: I_{trough}=I_{avg}(1-R)-I_{load}
$$
Computing the RMS:
$$
6: I_{RMS}=\sqrt{\frac{\int_{0}^{DP}I_{on}^2(t) dt + \int_{DP}^{P}I_{off}^2(t) dt}{P}}
$$
Substitute and evaluate the integral:
$$
7: I_{RMS}=I_{load}\sqrt{\frac{R^2+3D}{3(1-D)}}
$$
Consider the choke current during on-time.
$$
8: V_{choke} = L\frac{di}{dt}\\
$$
The voltage across the choke is the input voltage to the booster. The time this voltage is applied is DP. The change in current is the total ripple current seen by the choke.
$$
9: V_{input} = L\frac{2RI_{avg}}{DP}\\
$$
Solve for R, and substitute for I_avg:
$$
10: R= \frac{V_{in}D(1-D)P}{2LI_{load}}
$$
Substitute this value back into (7) to find the RMS current seen by the output capacitor.
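For reference, a quick numeric evaluation of these expressions, with made-up converter values (the input voltage, duty cycle, switching frequency, inductance and load below are placeholders, not from any particular design):

/* Numeric check of the expressions above (illustrative values only). */
#include <stdio.h>
#include <math.h>

int main(void)
{
    double vin   = 12.0;   /* input voltage, V        */
    double iload = 2.0;    /* output load current, A  */
    double d     = 0.5;    /* duty cycle              */
    double f     = 200e3;  /* switching frequency, Hz */
    double l     = 22e-6;  /* inductance, H           */

    double p = 1.0 / f;                                     /* period            */
    double r = vin * d * (1.0 - d) * p / (2.0 * l * iload); /* expression for R  */
    double irms = iload * sqrt((r * r + 3.0 * d) / (3.0 * (1.0 - d))); /* eq. 7  */

    printf("ripple fraction R = %.3f\n", r);
    printf("output capacitor RMS current = %.2f A\n", irms);
    return 0;
}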
As a check, one can assume that L is very large, meaning R is negligible. Further suppose a 50% duty cycle and a fixed 100A load. If the diode is forward-biased, the choke is delivering 200A, 100A to the capacitor and 100A to the load. If the diode is reverse-biased, the capacitor is delivering 100A to the load. So for half the cycle, the capacitor is absorbing 100A, and for the other half the capacitor is delivering 100A. The RMS seen by the capacitor is 100A. This matches our computations.
Now, this assumes a purely resistive load. If a boost converter is driving a capacitive load, and there's wiring inductance between the converter and the load, you can get ringing effects that drive the ripple currents much higher. I've simulated as much as 1.7x the full load current, and I suspect 2x is possible with the right combination of capacitances and inductances. |
H: Is there a glitch / race condition at the output of this circuit?
The input rising and falling edges occur at time t = 0. The propagation delays of the respective gates are 3 and 4 ns, as shown.
According to my analysis, the waveforms are as below:
As I've drawn above, I don't think there's any glitch in the final output Y2, since both the inputs i1 and i2 have a transition at t = 0; and if there was to be a glitch, it would have been at t = 4 only if the 2 inputs to the AND gate were High before t = 0.
But since Y1 is Low before t = 0 and turns High only after t = 3, and by the time Y1 goes High, the other input i2 is already at Low. So there is no glitch in output Y2.
Is this correct? Or am I missing something here?
AI: A proper propagation-time specification for a gate should state the guaranteed minimum time and the guaranteed maximum time. Although simulators don't usually do so, I would suggest that you should model an OR gate with a 1ns minimum propagation and a 3ns maximum propagation as being:
-1- High if, within the last nanosecond, either input has or had been at a valid high level for at least three solid nanoseconds.
-2- Low if, within the last nanosecond, both inputs have or had been at a valid low level for at least three solid nanoseconds.
-3- Indeterminate in all other cases.
Thus, if the OR gate in the above diagram had a 1ns minimum and 3ns maximum propagation, the output would have been low from time 0-1, indeterminate from time 1-3, and high thereafter.
Until time 1, the AND gate would have had its first input low (and its output would be low). Until time 2, it would, "within the last nanosecond", have seen input 0 low for at least four solid nanoseconds. At time 2, its output would go indeterminate until, at time 4, it would again have an input that had been low for four solid nanoseconds.
Thus, the output of the AND gate should be regarded as low until time 2, indeterminate from time 2-4, and low after time 4.
Note that if this approach indicates a circuit will work reliably, it will do so, but there are many circuit designs which will degenerate into having an "indeterminate state" everywhere even though in reality they would have useful behavior. Such problems may often be solved by assigning minimum propagation delay values which are very close to the maximum values. This is not entirely realistic, and introduces the possibility that a circuit may simulate correctly but fail in the real world. On the other hand, it's often necessary to tweak only a small number of circuit elements in such fashion; if one can validate the real-world behavior of those few elements while having the rest of the elements behave "more sloppily" than real-world circuits, one should be fairly confident that signals which are reported as high, will be high, and those that are reported as low, will be low (those that are "indeterminate" could in reality be valid high, valid low, oscillating, sitting at mid-rail, or anything else).
Note that many simulations assume either an "inertial model" or a "transport model" for propagation delays. The inertial model assumes that an output will not appear to change until the apparent correct value has been stable for a duration equal to the propagation delay. The transport model assumes that the reported state of an output will represent its computed state based upon the value the inputs had some time before. If an inverter has a 10ns propagation delay with the inertial model, a 6ns pulse on the input will produce nothing on the output, while a 15ns pulse would cause a 15ns pulse to appear on the output that was delayed by 10ns. Under the transport model, that same 6ns pulse would cause a 6ns pulse to appear on the output delayed by 10ns (such that the output would first switch 4ns after the end of the pulse), and the 15ns pulse would appear the same as with the inertial model. Neither model, unfortunately, generally has any provision for timing uncertainties.
H: Connect speaker output to microphone input
What exactly is the difference between a microphone signal and a speaker signal?
I want to make a simple cable to connect a PC speaker output to a microphone input. Does anyone have a good idea or circuit for doing this?
Edited
I want to transfer data over an audio cable, and need a way to connect a speaker output to a microphone input. I found this circuit. Is it a good solution?
AI: Better make R2 switchable or variable with 1k being the upper limit. But the basic approach is probably OK.
Speaker signals are relatively high voltage and relatively low impedance (meaning they can deliver a lot of current). What that means depends on the speaker; anything from a couple of volts and a fraction of an amp (total power 0.25 watts or so) for a little multimedia speaker up to tens of thousands of watts for an AC/DC gig...
Microphones are delicate devices delivering millivolts into a high impedance input (low current, very low power).
So you need to attenuate the speaker output (reduce the voltage) to avoid overloading the microphone input. The circuit you provided will protect it from damage, but it might still overload enough to distort the input signal.
HOWEVER if you are referring to the coloured connectors shown, the green one is "Line Out" - 0.1 to 0.5V rms, not enough power to drive a speaker directly; a lot of PC speakers have amplifiers built in to work with this low level signal.
In that case your suggested circuit is fine, but there is a simpler approach : just connect "Line Out" (green) to the "Line In" (blue) on the other end; they use the same signal levels and need no circuitry in between. |
H: Resistor values?
How can one see what value a resistor is given the colors? For example, what's the value of Red - Orange - Green - Gold?
AI: There are many resistor calculators on the 'net (e.g this one):
Normally the first two bands are the significant digits and the third is the multiplier, so for example, red/orange/green would indicate 23 * 10^5 = 2.3MΩ
The fourth band indicates tolerance, gold is +- 5%.
There are odd standards out there, including 5 (see asterisk in table above) and 6 band resistors and resistors with the values printed on them. For SMT resistors there are various systems also.
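If you want to automate the lookup, here is a toy sketch of the basic 3-band calculation (it ignores the tolerance band and the gold/silver fractional multipliers, and only handles the standard colour-to-digit mapping):

/* Small helper: decode digit/digit/multiplier colour bands into ohms. */
#include <stdio.h>
#include <string.h>
#include <math.h>

static const char *colors[] = { "black", "brown", "red", "orange", "yellow",
                                "green", "blue", "violet", "grey", "white" };

/* return the numeric value (0-9) of a colour band, or -1 if unknown */
static int band(const char *c)
{
    for (int i = 0; i < 10; i++)
        if (strcmp(c, colors[i]) == 0)
            return i;
    return -1;
}

int main(void)
{
    /* red - orange - green  =>  digits 2 and 3, multiplier 10^5 */
    double ohms = (10 * band("red") + band("orange")) * pow(10, band("green"));
    printf("%.0f ohm (%.1f Mohm)\n", ohms, ohms / 1e6);   /* 2300000 ohm = 2.3 Mohm */
    return 0;
}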
Further reading:
Electronic color code |
H: input vs output in ac adapters
This is probably stupid, but I have become very confused and would like someone's help.
I have brought some equipment from the usa, and these run on 110v. Where I live, I only have 220v. I brought an ac adapter, but became confused with something.
The adapter says "input 110v 50 Hz, output 220v 50w". My question is, is this a step up transformer or a step down transformer? Which is the input and which is the output on the adapter? Since my equipment runs on 110v, will plugging this adapter into a 220v outlet work, or is it the other way around?
AI: It steps up. It was designed for the States, you should try to find an adapter that outputs the power you need at a 220V input. |
H: Stepper motors - stride angle?
I am interested in using one of these cheap stepper motors for one of my projects, but need a step angle of ~2°. I came across the 28BYJ-48 and noticed that it has a "stride angle" of 5.625°/64. What exactly does this mean? I doubt it will give me the 2° accuracy which I desire, so maybe I could use a gear system to reduce that step angle further.
But in general, what is that stride angle referring to?
AI: 5.625 = 360 / 64, i.e. there are 64 steps per revolution of the motor's rotor. The "/64" in the quoted "5.625°/64" figure refers to the 28BYJ-48's built-in gear reduction (nominally 64:1), so each step moves the output shaft only about 5.625°/64 ≈ 0.09°.
However, the actual number of steps can be some multiple of that, depending on how you energize the windings. 2 to 4 times that number is easily achieved, and microstepping drivers can provide substantially finer interpolated resolution.
Your specification is not very clear - you seem to say both 2 degree steps, and 2 degree accuracy. Probably you want 2 degree steps with an accuracy of some fraction of that.
64 steps is relatively coarse - 200 step motors are widely available.
If you are looking at mechanical reduction, consider toothed timing belts and sprockets instead of gears. They have less critical mechanical-alignment requirements, and run more quietly. If your system must operate in both directions without slop, the fact that timing belts suffer minimal backlash when the direction of torque is reversed makes them strongly preferable versus gears. |
H: Basic rules to calculate the equivalent resistance of a resistor circuit
I have a certain circuit only containing resistors of different values. There is one 'input' and one 'output' for the current. How do I calculate the equivalent resistance of the circuit? Are there any basic rules to follow?
AI: If determining replacement value is the only goal then I can think of the following steps:
1) Analyse the circuit into the smallest solvable sub-circuits possible (series and parallel);
2) Calculate series resistors \$R_S = R_1 + R_2\$;
simulate this circuit – Schematic created using CircuitLab
3) Calculate parallel resistors: \$R_P = \frac{1}{\frac{1}{R_3}+\frac{1}{R_4}}\$
simulate this circuit
4) Apply wye-delta (Y-Δ) transform or reverse
5) Repeat until solved or run the circuit through a circuit simulator like SPICE.
Wye-delta (Y-Δ) transform
simulate this circuit
Y→Δ
$$R_{ab} = R_{an} + R_{bn} + \frac{ R_{an} \cdot R_{bn} }{ R_{cn} }$$
$$R_{ac} = R_{an} + R_{cn} + \frac{ R_{an} \cdot R_{cn} }{ R_{bn} }$$
$$R_{bc} = R_{bn} + R_{cn} + \frac{ R_{bn} \cdot R_{cn} }{ R_{an} }$$
Δ→Y
$$R_{an} = \frac{ R_{ab} \cdot R_{ac} }{ R_{ab} + R_{ac} + R_{bc} }$$
$$R_{bn} = \frac{ R_{ab} \cdot R_{bc} }{ R_{ab} + R_{ac} + R_{bc} }$$
$$R_{cn} = \frac{ R_{ac} \cdot R_{bc} }{ R_{ab} + R_{ac} + R_{bc} }$$ |
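If you prefer to grind through the reductions numerically, here is a minimal sketch of the helper functions for steps 2-4 (the example network at the end is arbitrary):

/* Tiny helpers for reducing a resistor network step by step. */
#include <stdio.h>

static double series(double r1, double r2)   { return r1 + r2; }
static double parallel(double r1, double r2) { return (r1 * r2) / (r1 + r2); }

/* delta -> wye: resistance from node a to the new centre node n */
static double delta_to_wye(double rab, double rac, double rbc)
{
    return (rab * rac) / (rab + rac + rbc);
}

int main(void)
{
    /* example: 100R and 220R in series, that in parallel with 470R */
    double r = parallel(series(100.0, 220.0), 470.0);
    printf("equivalent: %.1f ohm\n", r);   /* about 190.4 ohm */
    return 0;
}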
H: Connector type?
I am trying to find the connector type (see links below) for some time with out success.
The pitch is ~1.5mm.
AI: If you are looking for the name I think it is a JST connector from the ZH series (1.5mm pitch). |
H: Best place to place a decoupling capacitor
See this image which gives four options to place decoupling capacitors:
(from http://www.learnemc.com/tutorials/Decoupling/decoupling01.html)
I would say option (d) isn't good - I would recommend someone to place the capacitor near VDD instead of VSS. Is this right? The same goes for (c).
Generally: what's the best place to place a decoupling capacitor? Where would it have the most effect? And, more important, why? I'd like a theoretical explanation.
AI: Think of the copper tracks as series inductors. Series inductors are bad, you want them as small as possible. (B) is the better option.
Also loops in your tracks are bad, again they form an inductor and easily pick up (or radiate) an EM-field. You want the surface area of loops as small as possible, thus keep forward and return paths as close to each other as possible. (C) is the better option. |
H: Will 4 layer PCB isolate inner layers from moisture?
I've never done a 4 layer board and am not familiar with the process of manufacture for them. Will the external layers isolate inner layers from water if say, I will be putting the board into water for long periods (months) of time?
AI: No. Ordinary PCB technology does not protect against moisture ingress over the long term - that's actually a very, very difficult problem to solve where joints between different materials exist. |
H: All I want to do is send text to my Arduino and display it on an LCD Screen via Serial
I have been working on this for hours and have no idea where the issue is...
So I have the following code...which when I type a letter into the Serial Monitor I get the binary code for that letter on my LCD Screen...
#include <Adafruit_CharacterOLED.h>
Adafruit_CharacterOLED lcd(6, 7, 8, 9, 10, 11, 12);
void setup()
{
// Print a message to the LCD.
Serial.begin(9600);
lcd.begin(16, 2);
}
void loop() {
char TestData;
if(Serial.available() )
{
TestData = Serial.read();
lcd.setCursor(0, 0);
lcd.print (TestData, BIN); // echo the incoming character to the LCD
}
}
The issue I am having is, how do I then convert the text from either BIN or HEX or some other ASCII code to literal TEXT...
I have tried changing the lcd.print to simply lcd.print(TestData), because then it should print just the value of the input, but it does not; it gives me a few weird symbols and then turns off... like something is wrong.
I also need to figure out char arrays, but I'll get to that once I figure out how to display a freakin LETTER.
EDIT:
https://vimeo.com/61851351 This is video of what of comparison of lcd.print (TestData, BIN); vs lcd.print (TestData);
Per the last comment the follow works as expected:
#include <Adafruit_CharacterOLED.h>
Adafruit_CharacterOLED lcd(6, 7, 8, 9, 10, 11, 12);
void setup()
{
// Print a message to the LCD.
Serial.begin(9600);
lcd.begin(16, 2);
lcd.setCursor(0, 0);
char TestData='X';
lcd.print(TestData);
}
void loop() {
char TestData;
if(Serial.available() )
{
TestData = Serial.read();
lcd.setCursor(8, 0);
lcd.print (TestData, BIN); // echo the incoming character to the LCD
}
lcd.setCursor(0, 1);
int charcode = 65;
lcd.print(charcode);
}
AI: To recap what was done to partially solve the problem:
Verify that lcd.print("Hello, World"); worked OK in setup()
Tried to do print(char); in setup: that worked fine. The conclusion from this was that something was going wrong in the loop()
Put delay(1000); inside loop(): now the characters showed up correctly on the OLED display.
This suggests that Serial.available() was always returning non-zero for some reason, so that is something that still needs to be looked at. Most likely the characters coming in from the serial port were spaces since most of the time nothing was visible on the OLED display. |
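For completeness, a minimal sketch of a loop that prints the incoming character as text rather than as binary (the pin numbers and the Adafruit_CharacterOLED wiring are copied from the question; the column handling is just one possible choice):

// Sketch only: read a byte only when one is really waiting, then print the char itself.
#include <Adafruit_CharacterOLED.h>

Adafruit_CharacterOLED lcd(6, 7, 8, 9, 10, 11, 12);
int col = 0;

void setup()
{
  Serial.begin(9600);
  lcd.begin(16, 2);
}

void loop()
{
  if (Serial.available() > 0) {
    char c = Serial.read();
    lcd.setCursor(col, 0);
    lcd.print(c);              // prints the letter itself, e.g. 'A'
    col = (col + 1) % 16;      // wrap after 16 characters
  }
}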
H: Setting up Norton Equivalent of RLC circuit automatically?
I'm trying to gain a better understanding of how the internal SPICE solver works, and for the most part I understand the basics of DC analysis and transient analysis when it comes to RLC networks. The big issue I'm running into is how to handle circuits with voltage sources instead of current sources. Dr. Mitcheson's lecture notes suggest converting voltage sources to a Norton equivalent current source. However, I am having a hard time figuring out how to automatically calculate the Norton equivalent circuit params in code.
Current strategy (no pun intended):
Resistors have a basic 2x2 elemental conductance matrix.
\begin{equation}\boldsymbol G =
\begin{bmatrix}
\frac{1}{R} & \frac{-1}{R}\\
\frac{-1}{R} & \frac{1}{R}
\end{bmatrix}
\end{equation}
Capacitors are converted into a simple numerical derivative:
\begin{equation}
\frac{I_c^k + I_c^{k-1}}{2} = C \frac{V^k - V^{k-1}}{\Delta t}
\end{equation}
Here k is the current time step and k-1 is the previous time step. Can easily be converted into a 2x2 elemental conductance matrix.
Inductors are approximated with a numerical integration (trapezoidal rule).
\begin{equation}
I_L^k = \frac{\Delta t}{2 L} \left ( V^k + V^{k-1} \right ) + I_L^{k-1}
\end{equation}
Here k is the current time step and k-1 is the previous time step. Can easily be converted into a 2x2 elemental conductance matrix.
Currents should be known for all nodes at the previous time step and current time step.
Superimpose all elements together to get linear system which can be solved with any matrix solver. Repeat for each time step.
From the above capacitor and inductor approximations I can create an equivalent resistor model which is valid at that time step only and work only with a pseudo-resistive network for calculating the Norton equivalent, which would be quite simple by hand.
Is my strategy at all valid? If so, how would I take the next step and make a computer perform the same calculation?
AI: This doesn't quite answer the same question as how to resolve Norton Equivalent current/resistance using a computer, but it does offer an alternative route to handling voltage sources (which was the original goal): Modified Nodal Analysis
The basic idea is to treat the currents as additional unknowns. We can then add a few extra linear equations describing the current, and by adding in "trivial" equations we can continue to use known voltages as "unknown quantities". This makes combining the elemental matrices still a trivial task, and we just end up with a slightly larger matrix to solve. |
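As a tiny illustration of the idea, here is a sketch that stamps one voltage source and two resistors and solves the resulting 3×3 system (the component values are arbitrary, and the sign of the source current simply follows the stamp's reference direction):

/* Minimal MNA illustration: V1 from node1 to ground, R1 node1-node2, R2 node2-ground.
 * Unknowns x = [v1, v2, i_V1]; the source current is carried as an extra unknown. */
#include <stdio.h>

#define N 3

int main(void)
{
    double R1 = 1000.0, R2 = 2000.0, Vs = 5.0;

    /* A*x = b, built from the usual MNA stamps */
    double A[N][N] = {
        {  1.0/R1,        -1.0/R1,      1.0 },   /* KCL at node 1 (+ source current) */
        { -1.0/R1,  1.0/R1 + 1.0/R2,    0.0 },   /* KCL at node 2                    */
        {  1.0,            0.0,         0.0 }    /* branch equation: v1 = Vs         */
    };
    double b[N] = { 0.0, 0.0, Vs };
    double x[N];

    /* naive Gaussian elimination, no pivoting -- fine for this tiny example */
    for (int k = 0; k < N; k++) {
        for (int i = k + 1; i < N; i++) {
            double f = A[i][k] / A[k][k];
            for (int j = k; j < N; j++) A[i][j] -= f * A[k][j];
            b[i] -= f * b[k];
        }
    }
    for (int i = N - 1; i >= 0; i--) {
        x[i] = b[i];
        for (int j = i + 1; j < N; j++) x[i] -= A[i][j] * x[j];
        x[i] /= A[i][i];
    }

    /* expect v1 = 5 V, v2 = 3.333 V, i(V1) = -1.667 mA (negative per the stamp's reference) */
    printf("v1 = %.3f V, v2 = %.3f V, i(V1) = %.6f A\n", x[0], x[1], x[2]);
    return 0;
}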
H: Connecting push button to a controller pin
This problem is decades old but I do not see a direct answer to it on this site. I want the following:
This circuit just allows the state of the button to be checked from inside the microcontroller. Nothing fancy. My questions:
Is there anything better than this simple design?
Is the value of 100 kilo ohms resistor adequate for our days CMOS devices, like dsPIC30FXXXX and not a high power application?
AI: First, a lot of microcontrollers and digital signal controllers will have internal pull up resistors. Here's an example, an Atmel ATMega164.
There will typically be a register that allows the internal pull ups to be turned on and off. Due to variations in the fabrication process, these internal pullups come in a very wide range, and are not a good choice if you need very close control over current draw in ultra low power applications. If keeping component count low is important, this is an easy way to do it. Using internal pull ups for hardware debounce would not be a good idea, since it's not possible to predict their exact value.
Whether the 100\$k\Omega\$ value is adequate depends. If it's just a switch that will be periodically flipped by a user, then 100\$k\Omega\$ would be a good choice for minimizing power consumption. For things that are going to switch more rapidly, such as rotary encoders, the process I would go through is
Find maximum sink current in data sheet
Calculate a pull up value using Ohm's Law
Choose a standard sized resistor a few sizes bigger, depending on the speed actually needed
Test and see if the resistor size chosen gives the required rise and fall times
Adjust accordingly
So if the maximum sink current per GPIO pin were 10 mA and operating at 5V: \$R=\dfrac{V}{I}=\dfrac{5V}{10mA}=500\Omega\$. Keeping this R value small as possible will allow for the sharpest edges and highest switching frequencies. |
H: Which tool do I have to use for wire connector?
I want to make the wire connector shown in the image below. What tool do I need to use, and how can I make it?
AI: I believe that is a Molex Spox connector. The connectors are crimped on the wire leads and snapped into place in the connector head. |
H: How do I measure the characteristics of a voltage divider in a QUCS simulation?
How do I measure the characteristics of a voltage divider in QUCS?
Below is what I have tried thus far, using Qucs 0.0.15 on Ubuntu 10.04
I want to measure how the voltage at the center of a voltage divider changes based on R2 in the schematic below. I am attempting to simulate this in QUCS using a DC simulation and a parameter sweep.
This is how I have it set up below, but the output is constant from what I can tell, which does not make sense.
AI: Like SPICE simulators there's a difference between simulation parameters and reference designators.
Here, the reference designator for the lower resistor is R2. The R value of R2 is always 50 ohms. To use a parameter, you must assign it to the parameter name surrounded by curly braces. So the R value of resistor R2 should be {R2} to have resistor R2's R value set to the parameter R2.
To make things a bit more clear, change the parameter sweep param to RLOW and the value of R2 to {RLOW}. The simulation should work as expected and you'll get the correct results (yeah, I realize that my voltage source isn't at the same level as the OP so the actual values will be different). |
H: How can SCRs be effectively paralleled?
I'm designing a precharge circuit which will carry 200A when closed. I'd like to use MCO150 Silicon Controlled Rectifiers, but they won't handle the current. Two in parallel looks like the most cost-effective way to handle the current I need. However, I'm concerned about thermal runaway.
The effective impedance of any device will vary with temperature. If that impedance rises with temperature, two devices in parallel will share reasonably well. The device carrying more current heats up more, and its fraction of the total current drops. But if the impedance of the device drops as temperature rises, the device carrying more current starts carrying even more. Sharing fails, and one device hogs the current until it dies.
The datasheet for this SCR doesn't seem to address the issue of temperature coefficient. Is it just assumed that SCRs have a temperature coefficient in one direction or the other? If the temperature coefficient is negative, is there a way to force the two SCRs to share anyway? Or should I try another approach, like a single device or a large contactor?
AI: SCRs do not parallel well. Semiconductor junctions, like those in SCRs, diodes, and bipolar transistors, have a forward voltage drop that decreases with temperature. The hotter SCR will therefore draw more of the current making it even hotter, drawing more of the current, etc.
Why do you think you need SCRs? Their main attributes are their latching behavior and the fact that they can be produced with large current capabilities. If you just want the latter, several MOSFETs in parallel would work. These have a positive temperature coefficient so don't exhibit thermal runaway. Still, you need to derate from assuming each of the FETs will share the current equally.
Since FETs look mostly resistive when on, paralleling them not only decreases the dissipation on each, but also the total dissipation. At 200 A, only 5 mΩ will cause a 1 V drop and 200 W dissipation. That won't be trivial to design to no matter what you use as the switch. It will help if this 200 A is only in short pulses with a much lower average. Take a look at the SCR datasheet and see the forward drop at 200 A. It won't be all that nice either, and you'll have to deal with significant dissipation with the SCR too.
Fortunately 5 mΩ is not that far fetched for a few FETs in parallel. 4 FETs with maximum Rdson of 20 mΩ would do it, and each would dissipate only 50 W if they were to share the current equally. I'd probably derate to 100 W per FET when designing the thermal system. |
H: Connector for a certain type of lead-acid battery
I need to connect some lead-acid batteries together, to form a bank. A fair amount of current will be drawn from them.
However, I haven't seen the terminals used in those batteries before. What should I use to form a solid connection between them?
Thanks.
AI: These are called spade connectors.
They're often available with plastic covers to make them a little bit safer against someone dropping a spanner on the terminals. Here's some with a useful banana plug lead on:
Available from Gliders Distribution |
H: Implications of INL on the accuracy and resolution of an ADC
I was trying to understand the meaning of various specs given in a typical ADC datasheet, and I came across this article. Now, if I understand correctly, it seems to me that the accuracy of the ADC in bits can be taken as \$n - \log_2{{e}}\$ bits, where \$n\$ is the number of output bits of the device (its resolution?) and \$e\$ is the INL (integral non-linearity) error in terms of LSB.
My questions are:
Is my understanding correct? If not, what is the correct explanation?
If a 16-bit ADC has a INL of \$\pm 4\$ LSB, can we say that the accuracy of the ADC is 14-bit, and will have to ignore the lowest 2 bits for most practical purposes?
Here is a 16-bit ADC with a typical INL of \$\pm 0.4\$ LSB. Does this mean the device has a typical accuracy of more than 17-bits? How is this possible? Or does it mean the device typically has an accuracy of over 17 bits, but since it has only 16 output bits, it has to truncate the data to a lower resolution?
AI: Sort of ... if you look further down the page in the linked article, you'll find a good explanation of the gain and offset errors, particularly fig. 5. So if you only have gain errors, sometimes the digital range is suppressed and in other cases the analog input range is suppressed. The former case is explained by your formula; the latter is not. You need to account for gain differences.
That would be one way, however, if it's the analog that is suppressed AND you have sufficient noise in the sampled signal to hide your computational noise you could conceivably be able to post multiply to get your full 16 bit range (span) back. Because of the noise present you won't have a full resolution ADC (ENOB - Effective Number of Bits). If you don't have enough noise then you'll notice this fractional multiplication. You don't mention your application but in images this wouldn't be acceptable.
It just means that the INL is low; it doesn't speak to having to truncate the length, because that is limited by other factors like DNL. What it does mean is that the architecture (circuit technique) has promise for further extension to 17 bits.
Other factors do come into play in your decision. Monotonicity is one. A non-monotonic ADC will have high INL and NOT be correctable.
The article is good, but it does say some things that are applicable only to certain ADC architectures. One statement, paraphrasing the very first sentence in the INL section, is that "a low INL means a low DNL"; that is not necessarily true in all cases. |
H: Battery Profiling Device
I am working on building a battery pack for a solar car. I need to know if there is a device on the market that will "profile" the batteries (lifepo4) that I have, so that I can put like batteries together in a pack.
What would a device like this be called?
Just as a note, I am not looking for sales links, or which is the best, but I just want to know what to search for.
AI: These devices are called battery analyzers. Here's one made by West Mountain Radio . Don't know if these are big enough for a car propulsion battery. |
H: Definition of a System
A mathematical operator is generally a mapping between a domain set of functions and a range set of functions. One example is the derivative operator L (L[f] = f').
Can I say that every representation of an electronic/electrical system (input-output representation, state-space representation, block diagram representation) can be generalized to be formed by mathematical operators? Can I generalize to say that every electronic/electrical system (filter, AC/DC converter, amplifier, etc.) is simply an operator?
AI: Well, yes*, in the same way that all computer programs can be represented as a function of their inputs and time. Hence simulators exist for most kinds of electronics that people build.
The (*) is there because you have to make a number of simplifying assumptions. Real world components have properties that are temperature dependent, and real systems always have a certain level of noise. Most of the time you can approximate this out, but sometimes it matters; you can build a pretty good random number generator by amplifying a noise source. |
H: A basic question about oscilloscope probes
I have recently acquired a Tek 460A scope and a mixed bag of oscilloscope probes. I have a very basic question about the probes:
One of the probes is a Tek P2200. This is a 1X/10X switchable probe with a simple BNC connector on the back. If I connect this up to a 16V power supply and set the probe to 10X, the scope reads 1.6V. No surprise there.
The other probe is a Tek P6121, which has a chunkier BNC connector on the back that includes an additional pin on the outside ring (that is obviously engaging with some contacts on the scope). The probe is labelled 10X, but when connected to the same 16V power supply the scope registers 16V. Is this because (a) the probe is somehow communicating its attenuation factor to the scope, or (b) is this really a 1X probe?
AI: The extra pin is connected to ground via a resistor which is used to communicate the scaling factor of the probe. When you connect the probe it should show up somewhere on the scope channel settings that you are using a 10x probe. |
H: Why are pic18fxxx microcontrollers better for the C language?
The pic18fxxx family has a lot of advantages and improvements over the pic16fxxx family of microcontrollers. What is the particular feature that makes them better for C programming than the pic16fxxx devices?
AI: I don't know about "ideal for". That's a marketing term that has no place in a learned discussion.
However, one big advantage of the PIC 18 architecture versus the original PIC 16 architecture, especially related to compilers, is that it is possible to implement a software data stack on a PIC 18 with single instructions for PUSH and POP. The PIC 18 also has a deeper call stack, 31 versus 8, which can help when a compiler implicitly calls subroutines. The PLUSW indirect addressing mode is probably something a compiler would make more frequent use of than a human programmer.
The PIC 18 also has other advantages that are useful both for compilers and human programmers, like 3 hardware pointers instead of one, auto inc/dec indirect addressing modes, a 8x8 into 16 hardware multiplier, add with carry, and subtract with borrow, and a few other niceties. |
H: Kilowatt Hour Definition
This is a very simple question, but something that I'm not able to wrap my head around. This energy report states the amount of electricity generated. But what does "generated kilowatthours" mean? I know it can't mean that the US produced that much electricity every hour... but what exactly does it mean?
AI: You are confusing kilowatt hours with kilowatts per hour (which is a fairly useless measure of anything).
A kilowatt is a measure of the rate at which energy is delivered (also known as power). One kilowatt means that 1000 joules of energy is being delivered every second.
A kilowatt hour is a measure of total energy delivered.
kilowatt = energy/time
kilowatt hour = kilowatt x time
= (energy/time) x time
= energy
So if your water heater draws 6 kilowatts when it is on, and is on for 30 minutes, then it has consumed 3 kilowatt hours.
According to the report:
In 2011, the United States generated about 4,106 billion kilowatthours of electricity. About 68% of the electricity generated was from fossil fuel (coal, natural gas, and petroleum), with 42% attributed from coal.
kilowatt hours = 4,106,000,000,000 kWh
hours = 365.25 days * 24 hours/day
= 8766 hours
kilowatts = (kilowatt hours) / hours
= 4,106,000,000,000 kWh / 8766 hours
= 468,400,639 kW
So we can see that during 2011, if you were to measure the power being generated in the USA, it would be about half a Terawatt. |
H: Upgrade cheap power supply
I want to power my project (AVR uC + RF transceiver) with a fake iPhone charger whose internals are the same as this one, but with smaller input and output capacitors. I therefore suspect that the output will be similar: source (under Counterfeit iPhone)
The oscilloscope traces show the output signal (yellow) and frequency spectrum (orange). The left images provide high-frequency information on the output voltage. The right images show the low-frequency information on the output voltage.
How to smooth the output? Bigger input and output capacitors, buck converter, filter or extra rectifier + filter stage?
AI: You can further smooth the output by increasing the output capacitance, but to power an MCU or RF transceiver I would use an LDO or a buck converter (if efficiency is important).
Your voltage will have to come down slightly, but your chips will most likely run fine from 4 V if they were able to run at 5 V.
Your RF chip will probably have an input ripple requirement that will need to be taken into account when selecting an LDO along with peak current requirement. |
H: 9-Bit State Machine
I am working on a state machine that will drive a state bus for a personal project. The state machine will have an output pattern like the following:
100000000
010000000
001000000
000100000
000010000
000001000
000000100
000000010
000000001
100000000
As you can see, the active bit shifts and wraps back around. I've put together the circuitry to do this; however, the results are really hit and miss. I accomplished the above with two octal flip flops, two line drivers, and a clock to change the active bit. The flip flops have the output of the line drivers going back into their input, shifted by one bit. The last bit goes back around and feeds the first bit.
My circuit only works about 1/4 of the time... the majority of the time, several bits will be high at once OR it will wrap around once and eventually the high bit will 'fade' away.
I am seeking any recommendations and/or changes that I can do to make this work. I understand that the circuit isn't very elegant.. I have just been trying what I know.
Some notes:
The PCB layout above is meant to go on one layer - the different colors are only to help me. Furthermore, there are a few jumpers in between wires... they are always directly across from one another.
The schematic does not include the line drivers, only the flip-flops
AI: If your goal is to have one output at a time go high, you don't need a 9-bit state machine. You should probably either use a 4017 which has 10 outputs that are hit in sequence, and wire it so that it resets when the tenth output goes active, or else use a 4-bit state machine which will progress through nine states, along with a device that will output one of nine wires based upon the state of the 4-bit machine.
Alternatively, you could construct your machine so that a "1" will be shifted into an 8-bit latch only when all the other bits are zero. This could be accomplished using a 74HC688 and an inverter, or if you didn't mind having all but one of your wires be "1" (as opposed to all but one being "0") you could feed the output of your 8-bit latch into an 8-bit NAND gate. |
H: How to find Thevenin resistance of circuit with voltage source and three resistors?
The question is:
Replace the circuit with a Thevenin Equivalent Circuit and find the Thevenin Resistance (Rt) in ohms.
I've used Thevenin before and solved quite a few equivalent circuits as well.
Unfortunately, with this circuit, I'm getting very confused on how you would exactly go about combining these resistors. Are they in series or parallel? Or are two of them in series with the other one? That's where I get confused.
simulate this circuit – Schematic created using CircuitLab
AI: There are two steps to finding the Thevenin equivalent circuit: finding the Thevenin voltage and the Thevenin resistance.
Thevenin voltage is the voltage across the two points you are interested in (Vin). In this case it is easy to calculate, as there is no current flowing in the 43 and 60 \$\Omega\$ resistors and thus no voltage drop across them. The voltage at Vin is therefore the same as the voltage from the source, 72 V.
Thevenin resistance is calculated by 'turning off' all independent current and independent voltage sources and calculating the resistance between the two points. Turning off a voltage source sets the voltage across it to 0, which results in a short (0 \$\Omega\$) in parallel with the 275 \$\Omega\$ resistor. Any resistor combined in parallel with a short results in a short, leaving you with the 43 and 60 \$\Omega\$ resistors now in series, giving a Thevenin resistance of 103 ohms.
Putting the two together gives you a voltage source of 72 V in series with a 103 \$\Omega\$ resistor for your Thevenin equivalent circuit. |
H: Rationale for interleaving in G.709
According to the ITU-T G.709 (OTN) specification, a single frame has 4 'rows' and 4080 'columns' of octets, arranged like this:
1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 ... 3824 3825 ... 4080
+-----------------------------------------+-----+-----------+-------------+
1 | frame alignment | OTU header | | | |
+-----------------------------------------+ OPU | | |
2 | | he- | | |
| | ad- | payload | error |
3 | ODU header | er | | correction |
| | | | |
4 | | | | |
+-----------------------------------------+-----+-----------+-------------+
The OTU header contains stuff related to transport, and the ODU and OPU headers contain information related to the payload (I don't know the details here).
When the frame is sent, this 2 dimensional representation has to be translated into a (1 dimensional) sequence of octets. Section 11.1 of the specification says that this is done by transmitting all the octets in row 1, then all the octets in row 2, and so on up to row 4.
Why is the frame transmitted in this order? To me, it seems easier to transmit the whole header (columns 1-16) all at once, then transmit the entire payload (columns 17-3824), then transmit the error correction (columns 3825-4080). Is there some advantage to interleaving the header, payload, and error correction?
AI: It looks reasonable. The scheme generates four packets, each of which has a header and error-correcting information.
Since the packets have identical size, it is convenient to diagram them as rows.
The problem is your "I don't know the details here."
You may be assuming that the "2D" areas are monolithic. But perhaps they are decomposable by row. That is to say, for example, perhaps the error correction in row 1 only pertains to the payload of row 1, and isn't just one big block that applies to the payload as a whole. |
H: How do I find companies that would build prototypes for me?
I'm asking since I have projects that I would like to test, but honestly have little to no time to do this myself, particularly since I've never even welded since I was a kid.
I just need to know how to find these people and/or companies. Are there magazines? Web searches? I've come up empty, or with links that are not relevant.
Projects could include arduino or RasPi.
AI: As far as I understand your problem, you want someone to build a project for you.
Look around your local electronics market; there are many people doing this kind of job.
Ask your friends, or friends of friends, to recommend a reputable firm or individual.
If you find no one there (which would be an extreme case), then go online, look at freelancing sites, and choose people according to their reputation, reviews, and a budget that suits you. |
H: 3 bit synchronous counter design d flip flop
Any idea how I would go about designing a 3-bit synchronous counter that steps through the following states?
111->001->110->101->100->000->010->111
I drew up a present-state/next-state table, but I'm not really sure where to go from here. I have drawn a starting schematic in Logisim with CLK, CLR and PRE, and 3 D-type flip-flops (as these are the ones I am required to use), but I am unsure what the next step is.
AI: The key thing is to treat each bit individually.
For example, for the low-order bit, let's call it next_state[0].
state | next_state[0]
---------------
000 | 0
001 | 0
010 | 1
011 | X
100 | 0
101 | 0
110 | 0
111 | 1
So you can write
next_state[0] = ~state[0] & state[1] & ~state[2] | ~state[0] & state[1] & state[2] | state[0] & state[1] & state[2]
If you are building this with discrete logic you could simplify this to
next_state[0] = state[1] & (~state[0] | state[2])
and, since the state 011 never occurs, you can treat it as a don't-care, which reduces this further to next_state[0] = state[1].
And you have an equation you can use to drive the flip-flop that will generate the next condition for the low-bit of the state variable.
Finding the equations for the other two state bits works the same way. |
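If you want to double-check the hand-derived tables, here is a small brute-force sketch that tabulates the next-state bits directly from the required sequence (the bit ordering is assumed to be s2 s1 s0, matching the strings in the question):

/* Brute-force the per-bit next-state tables from the required sequence,
 * as a check on the hand-derived equations. */
#include <stdio.h>

int main(void)
{
    /* required sequence, written as (s2 s1 s0) */
    int seq[] = { 7, 1, 6, 5, 4, 0, 2 };   /* 111,001,110,101,100,000,010 */
    int n = sizeof seq / sizeof seq[0];
    int next[8];

    for (int i = 0; i < 8; i++) next[i] = -1;          /* -1 = don't care */
    for (int i = 0; i < n; i++) next[seq[i]] = seq[(i + 1) % n];

    printf("state | next[2] next[1] next[0]\n");
    for (int s = 0; s < 8; s++) {
        printf(" %d%d%d  |", (s >> 2) & 1, (s >> 1) & 1, s & 1);
        if (next[s] < 0) { printf("    x       x       x\n"); continue; }
        printf("    %d       %d       %d\n",
               (next[s] >> 2) & 1, (next[s] >> 1) & 1, next[s] & 1);
    }
    return 0;
}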
H: LEDs with two transistors very dim
I am building an LED Cube with my kids and a launchpad. I built a prototype with perf board and its seemed to work pretty well, except the LEDs were dim. They were pretty cheap LEDs, I didn't give it much thought. Well, I decided to have some boards made from seeedstudio for them. Also I bought very bright LEDs. That's the backstory, here's my problem:
I can't seem to get the LEDs to brighten enough.
Hooking the LED up without the transistors it works as expected on a
breadboard.
Hooking up one transistor works switching with the LED behind the
transistor.
Hooking up one transistor is dim switching with the LED before the
transistor (transistor switching to ground)
On breadboard:
3.3V -> resistor -> emitter -> collector -> LED -> GND - OK!
3.3V -> resistor -> LED -> emitter -> collector -> GND - DIM!
Here's the schematic:
Sorry, this probably isn't well worded. I hope the schematic can help explain.
AI: Well they're working now. Here's what I did:
I switched out the Transistors switching the cathodes to NPN transistors (the transistors switching anodes are PNP)
Apparently I need to learn more about transistors
I forgot to give Resistor values too, I apologize. If it helps anyone with my problem I am using 3.3Ohm Resistors, 3.3V power, with 3.2Vf LEDs, with 470Ohm resistors on the base
All columns are PNP and now all rows are NPN transistors |
H: Why do these LEDs have different brightnesses
I did a search for 0603 sized high brightness Kingbright LEDs on element14, results are here .
Why is there such a range of luminous intensity, even within the same colour, despite forward voltage, forward current, size and viewing angle being the same?
I am using the LEDs as indicators in a very low power battery powered device. The LEDs will be pulsed at times to give user feedback. Would the LED that gives the brightest output be the one that gives the highest luminous intensity, or is there another consideration?
AI: This is going to be really long, so just skip to the summary sentence at the end to avoid TL;DR.
There are several factors contributing to varying millicandela ratings of LEDs, and more importantly the relevance of the mcd rating to the intended purpose:
Angle of dispersion / beam angle:
This one is the most obvious, and fairly intuitive as has been pointed out in user20264's answer. The narrower the beam angle (how far off the axis the LED's light is visible) the greater the luminous intensity for a given luminous flux: Basically the same amount of energy being pushed through a greater or smaller solid-angle.
Paraphrasing Wikipedia, a light source emits one candela in a given direction if it emits monochromatic green light with a frequency of 540 THz (555 nm wavelength, yellow-green), with a radiant intensity of 1/683 watts per steradian in said direction.
(source)
This is why illumination grade LEDs are often rated in lumens rather than mCd, as the MCD can be quite misleading depending on added elements (lenses, diffusers, reflectors) that would change the effective beam angle, by definition.
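As a rough illustration of the intensity/flux distinction, here is a back-of-the-envelope conversion that treats the beam as a uniform cone (real LEDs are anything but uniform, and the 1500 mcd / 120° numbers are purely illustrative):

/* Back-of-envelope conversion from an on-axis intensity rating and a beam
 * angle to total flux, assuming (unrealistically) a uniform cone of light. */
#include <stdio.h>
#include <math.h>

int main(void)
{
    const double PI = 3.14159265358979;
    double mcd   = 1500.0;   /* datasheet "typical" luminous intensity, mcd */
    double angle = 120.0;    /* full viewing angle, degrees                 */

    double half_rad = (angle / 2.0) * PI / 180.0;
    double sr = 2.0 * PI * (1.0 - cos(half_rad));  /* solid angle of the cone        */
    double lm = (mcd / 1000.0) * sr;               /* flux = intensity * solid angle */

    printf("solid angle ~ %.2f sr, flux ~ %.2f lm\n", sr, lm);
    return 0;
}

Halve the beam angle and the same mcd figure corresponds to far fewer lumens, which is exactly how a narrow-beam part can post an impressive intensity number while emitting little total light.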
Practical measurement of "peak luminous intensity":
While peak luminous intensity is supposed to be measured as a single point, on-axis value, there is no global standard for the geometry and size of this "point" sensor:
Is it 1 degree around the axis, 0.01 sq.mm, square bare-wafer photosensor / PIN Photodiode, circular lensed sensor (if so, what diameter lens?), half-theta angle (yes, some scientific papers use this as a measurement area), or something else entirely? Is the distance to the sensor measured from the LED package surface, the wafer surface, or the LED lens inner or outer surface?
You will find nearly as many answers as there are manufacturers, and clearly, keeping this flexible allows for some "creative accounting", to favor one type of LED versus another.
Geometry of lens:
The specific optical arrangement used for the LED lens will change the distribution of light intensity across the illumination beam angle - One can get very intense light at the center of the beam and a long tail of fall-off, or fairly even distribution of the intensity between axis and maximum viewable angle, just like with camera optics.
This impacts the "half-theta" angle, the angle at which intensity falls off to half that at the axis. Depending on the lens and thus the intensity distribution curve, half theta angles can be a small fraction of beam angle (center-intense beams), or heading towards half the beam angle or more.
A smaller half-theta angle i.e. a skinny tall bell-curve with long tails, translates to high mcd values on the axis, but sharp drop-off of visibility off the axis. For greater range, such as for infrared remote controls, a smaller half-theta is of interest, while for visual indicator / illumination needs, a greater half-theta works out better, even for a fixed beam angle.
Angle of view:
This relates closely to the previous two points:
If the half-theta or beam angles are narrow, the mcd figures can look very high, but practical usability of the LED as an indicator by itself is questionable. Yet, if a light-pipe is used, such as on some indicator boards, or for fiber optics, a narrow half-theta is a good thing.
Transmission coefficient of lens
This relates to the specific light wavelength emitted by an LED:
Manufacturers typically standardize on one or a very small number of materials for the design of the lens element of their LEDs. Evidently, any given transparent material will have different transmission characteristics for different light wavelengths.
Thus, what may be the best possible lens material for a green LED would likely be less than ideal for red.
For white, this is even more complex, because common "white" LEDs have a phosphor layer of Yttrium Aluminum Garnet on a Gallium Nitride chip emitting a deep blue spectral line. The combination of the natural and the phosphorescence spectral lines requires compromises in transmission and phase, so the combination is anything but ideal in transmission for each spectral line, by the nature of the optical design.
Clear v/s translucent LEDs:
Milky LEDs make the mcd ratings practically irrelevant, since they are designed to disperse the generated light as evenly as possible across the surface of the LED - near-180 degree (or should it be, near 90 degree?) solid angles, and half-theta values of nearly the same, are common, and desirable.
Thus, a milky LED will typically have poor mcd values for the same chemistry and construction as a "water-clear" LED, and colored clear LEDs will lie somewhere in the middle. Yet, for indication purposes a translucent LED is perhaps the most ideal!
Wavelength of emitted light
As would be seen from the definitions of luminous intensity, this differs from radiant intensity in taking into account human-vision perceived intensity of the light in question.
Human beings characteristically are most sensitive to the yellow-green portion of the spectrum, around 555 nanometer wavelength:
(Source is Wikipedia, high resolution image here)
Thus, for a given amount of electrical power through an LED, the luminous intensity would vary widely with LED color, and of course drops down to zero for ultraviolet and infrared, which human vision cannot perceive.
Chemistry of LED junction:
Enough has been written about this, in other answers as well as elsewhere on the web, so just a brief mention: The chemistry determines the emitted color-spectrum (see previous point), as well as the conversion efficiency, of an LED's "Light Emitting" aspect. Also, minor variations cause spectral shifts, so two nominally identical chemistries need not be. It is thus stating the obvious that this determines both luminous flux and intensity.
Efficiency of wafer / batch:
Despite the best manufacturing process controls, LED manufacture is notorious for its variation in efficiency and output characteristics between batches of wafers, and even within a batch or a single wafer. Manufacturers address this by a process of "binning" - While white LEDs are binned by a complex process, by color as well as light output, color LEDs go through an essentially linear binning process for light output. Different light output levels are then packaged as differently rated products.
While reputable manufacturers typically do a sincere job of binning and published rating for their LEDs, no-name LEDs are infamous for wide variation in intensity within a stated datasheet rating, as much as 1:3 ratios in extreme cases.
n.b. Some manufacturers such as Philips (Luxeon range) are beginning to claim a binning-free process, due to modern improvements in manufacturing technique.
Encapsulation of LED:
While this is largely covered in the lens design discussion a few points previously, additional factors such as position of contact whisker / wire bond do make significant impact in LED light output. The wire bond creates occlusion of the light source, the nature of which varies between designs.
An obvious response to this would be, why not always design the wire bonds to occlude as little as possible? This isn't done because the wire bond positioning, material and thickness are not just about electrical conduction, but also thermal dissipation.
Some designs need better cooling, hence a whisker attached to the approximate middle of the chip, or even multiple wire bonds from a lead-frame, are opted for. Other designs do not really care about this, the power involved being too low or the substrate being better designed for thermal off-take.
These trade-offs determine the occlusion compromises and thus the actual measured luminous intensity at the axis of the LED's beam.
Orientation of LED substrate within package
This factor has little relevance to most modern LEDs, especially SMD parts. However, older LED designs, and possibly some still in production, sometimes had orientation tolerance issues on the LED emission surface. In simple terms, the actual LED chip may or may not be perfectly perpendicular to the axis of the LED package.
It is intuitive therefore that measured luminous intensity along the axis would vary from piece to piece, or between production runs, for such LEDs.
Actual power of LED:
While the rated current of an LED is typically controlled by your circuit to meet the datasheet specifications, the rated and actual junction voltages at that set current will invariably differ, both due to manufacturing tolerances, and due to shortcuts taken in datasheet specifications.
This means that the actual power converted from electricity to light will vary as per P = V x I, for each LED design, for each minor variation in semiconductor doping, and for a variety of other factors. Part of this is addressed by the binning process, and partly the datasheets for "different LED models" which just happen to be different batches of wafers, reflect the resultant change in measured intensity.
Most important, marketing mumbo-jumbo:
While this fudge-factor is perhaps the least recognized by the engineering community, several years of using and recommending LEDs for various products has shown that there is a very strong influence the marketing department of a manufacturer has upon the data shown in promotional materials and datasheets for a given LED product. This is probably more pronounced in the LED industry than in most other semiconductor trades.
If there are several different ways of measuring or representing any LED data, such as luminous intensity, and there are several standards or guidelines in place in the industry for any such parameter, you can be sure that the marketing drivers will ensure that different product lines or models will use different measures and measurement methodologies, even within a single manufacturer, so as to put the best possible spin on every LED.
While the more reputable manufacturers may stick to merely using different intensity measurement equipment as convenient, the less scrupulous ones do not shy away from outright prevarication for their product publications.
What makes this more amusing is that some of the most reputable manufacturers are also resellers, i.e. they source their non-premium product lines from the same factories as bulk sellers, so the only difference is the branding on the box or reel, and of course the 100% to 300% brand-value mark-up. How many of these resellers actually bother to re-validate the measurements and parameters, is anyone's guess.
TL;DR summary:
Don't trust the millicandela ratings on any LED, test them yourself if you absolutely need real data. |
H: How to wire a 12V DC fan to a Molex connection?
I have a 12V DC fan with two wires (black and red). I want to solder it to a Molex 8981 connection (powered by an ATX power supply).
Here are the details for the connection:
I'm guessing the fan's positive red wire goes to the +12V yellow wire, but I'm not sure what to connect the black wire to.
AI: You don't say what the Molex connector is coming from, but I'm assuming it's an ATX supply of some sort.
Anyway, connect the red wire to the +12V yellow, and the black wire to either of the 2 black ground wires. |
H: What is the origin of the "r" in resistance measurements?
In a post to the diyaudio forums, www.diyaudio.com/forums 67232, I see terms that are likely to be measurements of resistance: "0r1", "1r0", and "10r". I am thinking the "r" there means a decimal point. If that is true, is this some standard used in the electronics industry, and if so, what is the name of the standard and/or URLs to descriptions of that standard?
In case that forum ever disappears, below I quote that post's content.
now we know what you are trying to measure to, tell us what the probe
to probe resistance reading and also the probe vias winding to probe
resistance reading is. That way we and you can compare the extra
resistance as the likely winding resistance. <=0r1 indicates a big fat
secondary capable of many amperes. 1r0 indicates a big fat primary
capable of many hundreds of VA. 10r starts to show current
limitations.
AI: You are correct about it standing for a decimal point. So 2R2 means 2.2Ω, 0R5 means 0.5Ω, and so on. Also as pjc50 notes, it's common to see e.g. 47R used for 47Ω since using an Omega symbol is more difficult and time consuming (e.g it's a special symbol you can't just shift + something for on most keyboards)
Similarly, it is common to see 2K2 for 2.2KΩ, or 3M3 for 3.3MΩ
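If you ever need to handle this notation in software, the rule is mechanical: the letter sits where the decimal point would go and doubles as the multiplier. A rough C sketch (the function name and the set of letters handled are purely illustrative):
#include <stdio.h>
/* Convert strings like "0R5", "2R2", "47R", "2K2", "3M3" to a value in ohms. */
double rkm_to_ohms(const char *s)
{
    double mult = 1.0;
    char   buf[32];
    int    j = 0;
    for (int i = 0; s[i] != '\0' && j < 30; i++) {
        if (s[i] == 'R')      { mult = 1.0; buf[j++] = '.'; }   /* letter marks the decimal point */
        else if (s[i] == 'K') { mult = 1e3; buf[j++] = '.'; }   /* ...and sets the multiplier     */
        else if (s[i] == 'M') { mult = 1e6; buf[j++] = '.'; }
        else                  { buf[j++] = s[i]; }
    }
    buf[j] = '\0';
    double value = 0.0;
    sscanf(buf, "%lf", &value);
    return value * mult;
}
int main(void)
{
    const char *examples[] = { "0R5", "2R2", "47R", "2K2", "3M3" };
    for (int i = 0; i < 5; i++)
        printf("%s -> %g ohm\n", examples[i], rkm_to_ohms(examples[i]));
    return 0;
}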
As far as I know, these conventions started back in the days when paper schematics used to get copied a lot (e.g. photocopied, faxed, etc) and the reproduction quality was not very good. This meant that small decimal points could easily get lost or even "appear" if a speck of dust was on the sheet at the time of copying.
So to make things clearer and these type of errors less likely, this type of notation was used.
Here are a couple of example clips from old schematics:
One where they should have used the notation discussed (60's guitar amp schematic - is that .02uF or 2F? ;-) ):
Another bad one:
Two clips from a better schematic from the mid 70's with the same convention used for capacitance also, plus nanofarads are used (in earlier schematics 22nF would be presented as 0.022uF):
Schematics found at Dr.Tube and Vox Amps. It's interesting to browse through the schematics and note the differences in notation over time. |
H: Why do AC-to-DC adapters have long cables on both ends?
Why do AC adapters have long cables on both ends? I believe that almost everyone using one would want it either directly next to the gadget it is powering (even possibly joined together with it) or next to the power plug (possibly made into a single unit with the plug). It is inconvenient to have it hanging in the middle of a power line. Is this simply a design flaw, or does it have a particular reason?
If I were to replace either the AC side or the DC side cable with a very short one, does it have any difference between the two options electronically?
AI: Usually for thermal management or mechanical compliance.
thermal -- placing the power supply away from the potentially hot device and away from the wall (better airflow) results in the most efficient operation.
mechanical -- large power supplies would block adjacent outlets if connected directly. They can also be heavier than can be safely suspended (hung) from wall outlets.
Basically no... except for the above mentioned reasons. |
H: How to design voltage ladder on ADC for non linear purposes
I'm designing a 4-bit ADC for a project that receives a reference voltage ranging from 1.0 to 1.25 V. The problem is that the variation is not linear: 0001 can be 1.05 V and 0010 1.1 V, but 0011 could be 1.2 V - instead of the 0.05 V increase seen from 0001 to 0010, there is a 0.1 V increase to 0011. So the first comparator will need a 1.05 V reference, the second one 1.1 V, and the third one 1.2 V. Because of this I can't use a bunch of resistors with the same resistance; I need different resistors, and I can't work out how to calculate their values. Since it's a theoretical project, it doesn't matter whether such a resistor is actually manufactured - how can I calculate resistor values that feed the comparators a non-linear sequence of reference voltages?
AI: The easiest way to do this (kind of like counting on your fingers) is to set each resistor equal to the voltage of the step. For instance, the 0001-0010 step is 0.05 volts, so it gets an 0.05K resistor, the first step is 1.05 volts, so it gets a 1.05K resistor, and so on.
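To make that concrete, here is a quick C sketch that prints the ladder values; the thresholds are just the example values from the question, and turning "volts" straight into "kilohms" implies roughly 1 mA flowing through the resistor string (scale every resistor by the same factor for any other ladder current):
#include <stdio.h>
int main(void)
{
    /* Comparator thresholds from the question, in volts (example values only). */
    double thresholds[] = { 1.05, 1.10, 1.20, 1.25 };
    int    n = sizeof thresholds / sizeof thresholds[0];
    double i_ladder = 1e-3;   /* assumed current through the resistor string: 1 mA */
    double v_prev = 0.0;
    for (int k = 0; k < n; k++) {
        double step = thresholds[k] - v_prev;        /* voltage across this resistor */
        printf("R%d = %4.0f ohm (drops %.2f V)\n",
               k + 1, step / i_ladder, step);        /* R = V_step / I_ladder        */
        v_prev = thresholds[k];
    }
    return 0;
}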
But you need to learn Ohm's law!
H: Video camera exposed to lightning in clouds
Suppose you are sitting outside in the dark and the video camera in the mobile is ON. If lightning is flashing in the clouds again and again, will this damage the video camera? As our eyes can get damaged by light (not thunder, but maybe very intense light), I thought maybe the mobile camera can get damaged too.
AI: Basically, no.
Of course there is some luminous intensity above which the cell sensor will die, but I don't think Lightning will get there.
Lightning only lasts about 30 micro-seconds so the absorbed energy per pixel is actually quite small
You observe lightning from a great distance (intensity drops off at the square of the distance)
The PRP (pulse repetition period) is extremely high compared to the pulse-width (e.g. low energy waveform)
Let's do some math...
According to the National Oceanic and Atmospheric Administration (NOAA) the average lightning bolt contains enough energy to light a 100W incandescent bulb for 3 months.
That's almost a billion (777,600,000) Joules of electrical energy
Only a small fraction of that energy is converted to optical energy (light), just like the incandescent (~3%).
≈16 lumens per electrical watt (typical incandescent luminous efficacy)
683 lumens per optical watt (the conversion between lumens and optical watts)
→ 16 / 683 ≈ 2.35% conversion from electrical watts to optical watts
~18,200,000 Joules,optical per lightning strike (typical)
Assuming you are 1km from the lightning bolt (VERY CLOSE!), the emitted optical energy is spread over the surface of a sphere
100,000 cm in 1km
Area = 4 Pi r^2 = 125,660,000,000 cm^2
Irradiance = Energy / Area = 0.00015 Joules,optical/cm^2
Here is the human eye safety limit for collimated polarized light.
The average lightning bolt lasts 30 microseconds (same NOAA citation used above). Therefore the maximum safe irradiance from a lightning bolt is:
.035 * 30e-6 = 3e-7 J/cm^2
The cell phone sensor (let's use an iPhone1) has 2048 pixels in 0.358 cm:
1/(2048/0.358) = 1.75 micron width of a pixel
= 1.75^2 micron^2 area of a pixel
pixel area / illumination sphere = 2.43e-19
The fraction of the lightning bolt's optical energy that reaches an individual pixel in your imager is therefore only about 2.43e-19 - roughly 4×10^18 times smaller!
4.43e-12 Joules,optical/pixel
Therefore you are below the human damage threshold by five orders of magnitude. Even if we assume the full electrical energy of the bolt was light you get:
2.43e-10 Joules,optical/pixel
Still three orders of magnitude too small.
Cell phone sensors can withstand greater intensity than the human eye without damage, since they can handle internal temperatures in excess of 125°C - which the human eye obviously cannot.
Cell phone sensors are also a lot less sensitive to light than the human retinal cells, furthering their withstanding ability.
You can recalculate for being even closer to the lightning bolt and for farther away by recomputing with the equations above.
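If you want to redo the sums for other distances, here is a small C sketch of the same back-of-the-envelope calculation; the constants are simply the assumptions used above (NOAA energy figure, ~2.35% optical conversion, 1.75 µm pixel pitch), not measured data:
#include <stdio.h>
int main(void)
{
    const double pi           = 3.141592653589793;
    const double e_electrical = 777.6e6;   /* J, the NOAA "100 W for 3 months" figure  */
    const double eff_optical  = 0.0235;    /* assumed electrical-to-optical conversion */
    const double pixel_pitch  = 1.75e-4;   /* cm, assumed sensor pixel size            */
    const double e_optical    = e_electrical * eff_optical;
    const double pixel_area   = pixel_pitch * pixel_pitch;      /* cm^2 */
    for (double d_m = 100.0; d_m <= 10000.0; d_m *= 10.0) {
        double r_cm     = d_m * 100.0;                           /* distance in cm       */
        double a_sphere = 4.0 * pi * r_cm * r_cm;                /* illumination sphere  */
        double j_pixel  = e_optical * pixel_area / a_sphere;     /* J landing on a pixel */
        printf("distance %6.0f m -> %.2e J per pixel\n", d_m, j_pixel);
    }
    return 0;
}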
Expanding the assumptions
Optics is tricky and in the analysis I made a few assumptions that could be expanded to clarify.
Intensity reduction
One commenter bizarrely states:
"Intensity doesn't drop with distance"
Nope. Physically impossible for non-collimated sources. You can prove this to yourself by observing the lightning bolt from different positions. If you can walk around the lightning bolt and still see it, it's an approximate isotropic radiator (emits light in all directions). Given that lightning is a highly chaotic process involving the production of plasma (high-energy random motion), this is expected.
The camera cannot collect the light (energy) emitted in directions that do not point towards the camera. Therefore, the intensity, just the energy available over the area in which it is captured, will decrease with (the square of) distance in a loss-less medium.
The energy impinging the area of the camera represents an ever smaller quantity of the total illumination sphere of the lightning bolt as you back away from it.
The effect of the lens (focusing)
I modelled the lightning bolt as a point source and disregarded the lens of the camera. The lens has an almost insignificant effect. Let's put the lens back in and assume it's perfect (no spherical or chromatic aberration, perfect installation, and loss-less).
First lets validate the point-source assumption. Typical lightning bolt strikes from about 6,000 feet (~1.8km) to the ground and is 1 inch (2.54 cm) in diameter.
Standing 1km away you would need a camera with at least a 61 deg field-of-view, a requirement which our example iPhone1 camera meets. The camera therefore catches all of the light from the bolt radiated in its direction.
To validate our point-source assumption, we need to show that the total light reaching the camera from a single on-bore source is equivalent to the total light of a set of weaker point-sources distributed along the path of the bolt. To clarify the math, I'll model the lightning bolt as a straight arcing (uniform radial distance to the camera) line and the lens in a 2D plane (but the extension to 3D -- e.g. 2D image plane -- is logically trivial).
$$
\mid I \mid \; = \; n \cdot \frac{P}{n} \cdot \frac{A_{lens}}{A_{sphere}} \; = \; P \cdot \frac{A_{lens}}{A_{sphere}}
$$
That is to say that if we divide the lightning bolt into n-many points of light, distribute them along its length, and reduce their intensity by n (evenly distribute the output power among the points), the total intensity of the light on the lens is the same as if radiated by an on-bore point-source (assuming the lens can see the whole extent -- arc length -- of the lightning bolt).
The effect of the lens becomes:
$$
\mid I \mid = \frac{A_{lens}}{A_{sphere}\cdot n_{pixels}}
$$
The number of affected pixels is a function of the composition of the image (how much of the frame the lightning bolt covers), the size of the bolt (width and height), and the distance of the observer (how close are we?!).
Rather than roll-up all of these parameters, I took a sample set of lightning photographs (example) taken under approximately these conditions, threshold filtered them, and averaged the number of white pixels. Approximately 5% of the sensor pixels are covered by the lightning bolt.
For the iPhone1 camera with lens area approximately 0.1 cm^2, the capture ratio on an illuminated pixel is 7.48e-18 -- for a lens focusing gain of only about 31x. Given the magnitude of the margins, the focusing/lens just doesn't play an important-enough role.
On the origins of damage
[you assume] the damage caused by optical light would be by radiative heating
That's a perfectly fine assumption (a lot like assuming there is gravity). Heat transfer is really all energy transfer is. You can read more about CMOS sensors and how damage occurs here.
and that a device that can withstand a higher temperature than an eye is necessarily more resilient,
Given that damage comes from heat transfer, withstanding greater heat transfer is by definition more resilient.
also that being less sensitive would make a device more resilient
This deserves some nuance, but not really that important given the other factors. Being less sensitive requires that you are less reactive to or less absorbing of the phenomenon of interest. In the general case, that implies less energy transfer, because the assumption is that the phenomenon is taken in isolation (ceteris paribus). In practice, there are alternate parallel mechanisms, so it may not hold. |
H: Questions about connecting a passive infra-red sensor to the USB port
So, we are linking a sensor to a computer via USB, using a female USB connector and male USB wires. The sensor is connected to a circuit with 3 pins: Vcc, ground, and output signal. If I want the signal to be processed by the computer so that it outputs "I sense someone" whenever the sensor fires, what would I need to do? The basic questions to answer are:
How would the computer understand the signal from the usb? Using a device driver? or something else(please specify)
I read that the USB port can burn out if you are careless about the energy you put into it, so what is the healthy energy (Vcc, amps) range, and how do I achieve this?
P.S.: My professor told me that a device driver is only used for driving a certain device... I don't know if he is right or not :\ This totally contradicts my knowledge about device drivers - I thought that everything you connect to a computer whose input has to be understood needs a device driver (e.g. mouse, keyboard). It's hard for me to believe that the computer controls the mouse, since a standard mouse is input-only :)
P.S.: The sensor is a passive infrared sensor, with 2 jumper modes: fire only once every X interval while there is movement, or fire constantly while there is movement.
AI: You will need to use a device that has USB controller capability, or you can use an external chip. The actual device I would recommend depends on what kind of output the sensor has. Is this output On or off? If it's something so simple, I highly recommend using an ATTINY device like this. ATTiny simulates USB and works nicely without too much complexity.
However, if your IR sensor needs an ADC and some processing, you can use an MSP430, PIC, Cortex M3/M4 and many others depending on whether you need some processing of that signal.
Let's assume that you have the USB connectivity. In reality you need to choose a class for your USB device, and for this I recommend CDC, since it basically opens a COM port on the computer and you can send data as it arrives, then process it with any kind of program you like.
H: Considering Loading in analysis and design
What are the conditions under which loading can or cannot be ignored? I would like to be able to design a circuit in segments and then connect these segments together to create a circuit that performs all the operations of the individual circuits.
To my understanding, loading refers to the condition where the first stage's output voltage is reduced when it is driving the second stage, as compared with the output voltage if the second stage were not present. How can this be accounted for in one's analysis or design?
AI: In the general case, it's an iterative process, where you design all of the stages in isolation first, then adjust the design of each stage to account for the actual values found in the previouis pass. Usually, this will converge quickly.
However, if you can design your stages up front so that the input impedance is an order of magnitude or two (i.e., 10× to 100×) greater than the output impedance of the previous stage, then no iteration should be required.
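To put a number on it, model the interface as a simple voltage divider (a generic relationship, not something specific to any particular circuit here):
$$
V_{in,2} = V_{out,1(unloaded)} \cdot \frac{Z_{in,2}}{Z_{in,2} + Z_{out,1}}
$$
so with \$Z_{in,2} = 10\,Z_{out,1}\$ you lose about 9% of the signal, and with a 100:1 ratio only about 1% - usually small enough to ignore on the first pass.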
Alternatively, you can plan ahead what the interface impedances are going to be and design each stage for that specific impedance on each end. Usually, you have enough degrees of freedom to make this a straightforward task. |
H: PIC16F877A (with LCD) not working
I have designed a PIC16F877A microcontroller project to read temperature from an LM35 using the ADC, display it on an LCD and transmit it to a serial port.
When the program starts, sometimes it shows a startup message - sometimes it doesn't display anything. Also, the serial port connection is not working. Can anyone help - am I missing something? Are there any ground connections missing?
My code:
#include <16F877A.h>
#device adc=10
#fuses HS,NOWDT,NOPROTECT,NOLVP
#use delay(clock=20000000)
#use rs232 (baud=9600,rcv=PIN_C7, xmit=PIN_C6)
#include <lcd.c>
float value;
float temp;
float temp2;
float temp3;
float temp4;
float temp5[14];
float count[14];
int c;
void main(void)
{
//setup_adc_ports( ALL_ANALOG );//Initialize and Configure ADC
//setup_adc(ADC_CLOCK_INTERNAL );
while(1)
{
lcd_init();
lcd_gotoxy (1,1);
delay_ms(1000);
printf(lcd_putc," WELCOME TO\n Micro Tech Sol.");
delay_ms(3000);
lcd_gotoxy (1,1);
printf(lcd_putc," Fuel Monitoring \n PROJECT ");
delay_ms(3000);
}
}
AI: Your comment "Its working some times which may mean code is working." means (to me) that the hardware isn't fried (it wouldn't work at all otherwise) and that your software needs adjustment.
You might want to use an unused GPIO pin as a 'heartbeat' signal, and toggle it through various places in your while loop. This allows you to not only make sure your code isn't getting lost (with your simple program, it shouldn't be) but also whether or not your overall timing is valid. For example, you can set the pin before one of your delays then clear it afterwards. If you see the pin change state for 1 second, you know that your crystal is working, the PIC oscillator is set properly and that your delays are working.
There may be some incompatibility between the LCD driver you're using and the specific LCD that you're working with. You may need to tweak that LCD code - add extra delays, etc. until your LCD cooperates.
lcd_init() and a delay(1000) need to go outside the while loop, as others have said. You need that delay(1000) after calling lcd_init() before any commands are sent - you may need a slightly longer delay depending on your specific LCD hardware.
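A rough sketch of that restructuring, in the same CCS C style as your code (the heartbeat pin choice, PIN_B0, is arbitrary, and output_toggle() assumes the CCS compiler):
void main(void)
{
   lcd_init();              // initialize the LCD once, before entering the loop
   delay_ms(1000);          // let the LCD settle after initialization
   while(1)
   {
      output_toggle(PIN_B0);            // heartbeat on a spare pin for debugging
      lcd_gotoxy(1,1);
      printf(lcd_putc," WELCOME TO\n Micro Tech Sol.");
      delay_ms(3000);
      lcd_gotoxy(1,1);
      printf(lcd_putc," Fuel Monitoring \n PROJECT ");
      delay_ms(3000);
   }
}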
If you've added decoupling capacitors per the comments, please update your schematic sketch showing where you've added them. If you've updated your code, please update your code section. |
H: Light bulb exploded
This morning, one of the lights in my house began to flicker. I didn't think much of it, and left the room to get a drink. While I was in the other room, I heard a bang, and came back to find broken glass from the flickering bulb all over the floor.
Has anyone else ever had experience of an exploding light bulb? Should I be concerned? If I had been in the room at the time, I could have been cut by flying glass. This has never happened before in 25 years of living at my current property.
EDIT: Following advice from the comments,
can anybody explain what failure mechanism can cause a light bulb to explode? And do light bulbs often explode?
EDIT: A few more details. The metal part of the bulb remained in the light socket, with a few bits of jagged glass around it. I was able to unscrew this using pliers, but the socket has been damaged/blackened/melted, and needs replacing before a new bulb can go in. The bulb was a traditional filament bulb, and had been in place for about five or six years.
AI: My guess here is that there was a bad contact between the bulb and the socket. We can conclude this from the flickering part and the damage on the socket. If the contact surface between socket and the bulb is contaminated enough, you'll get arcing which will result in bulb flickering. As the arcing continues, it will corrode the contact surface making the contact even worse resulting in even more arcing.
As the process progresses, the affected area will heat up. After some time I suspect that the heat resulted in different expansion rates between the glass and the metal parts of the bulb resulting in "explosion".
In general, flickering lights are a symptom of bad contacts somewhere and should be investigated. If for example the source of arcing was a switch, it could have been very dangerous if its body softened up and someone tried to operate it. |
H: Context Information Saving on microcontrollers?
I searched for context-information saving methods in microcontrollers (especially PIC), and the only thing I found is a patent for a "tamper resistant microprocessor". According to the patent, the microprocessor saves context information for a program whose execution is to be interrupted, and the execution of that program can be restarted by restoring the execution state from the saved context information.
link: http://www.google.com/patents/US20050166069
Is this what I am searching for? If I have understood correctly, this method is used, for example, in interrupts: you save the register values that you need (and that the interrupt routine may change), and when you return to the normal execution of the program you take back the saved values? Or when you push a value onto the stack that indicates the address where a piece of code starts, and you retrieve this address when you return from a call instruction?
AI: You ask for "context saving", but you don't seem to know what that term means in your context?
The meaning I am most familiar with is in the context of interrupts and task switching, where everything that the main program or a task relies on is saved in RAM, to be restored later. In most cases this amounts to pushing all registers on the stack, so they can be popped later.
Things can get difficult when there is context outside the CPU registers that can be used by the interrupt (or by other tasks), so it must be saved too. Think for instance of floating point coprocessor registers.
On the old 12 and 14 bit core PICs context saving for an interrupt is a bit tricky, but it is explained in the datasheet - better read it there. Note that on these chips various memory-mapped registers can be context too, like the indirection register. If your interrupt routine uses such registers they probably must be saved (and restored) too.
Real context swapping for tasking switching is not possible on these PICs, because the stack can not be changed. There are some dirty tricks that achieve the same effect (like not using the hardware stack at all), but at a cost.
The 18F PICs and IIRC the enhanced midrange chips too have a stack that can be read and written, so real context switching is possible, but it is tedious. If you want multitasking, better look for a CPU that has a memory-mapped stack. (Nowadays a Cortex would be an obvious choice.) |
H: PIC12F output pins working differently than programmed
I'm trying to drive a shift register using a PIC12F683, so I wrote this code for a simple test:
#include <pic.h>
#include <pic12f683.h>
#define _XTAL_FREQ 4000000
// GP0 -> data
// GP1 -> latch
// GP2 -> clock
void clear_shift_register() {
GPIObits.GP1 = 0;
GPIObits.GP0 = 0;
GPIObits.GP2 = 1;
GPIObits.GP2 = 0;
GPIObits.GP2 = 1;
GPIObits.GP2 = 0;
GPIObits.GP2 = 1;
GPIObits.GP2 = 0;
GPIObits.GP2 = 1;
GPIObits.GP2 = 0;
GPIObits.GP2 = 1;
GPIObits.GP2 = 0;
GPIObits.GP2 = 1;
GPIObits.GP2 = 0;
GPIObits.GP2 = 1;
GPIObits.GP2 = 0;
GPIObits.GP2 = 1;
GPIObits.GP2 = 0;
GPIObits.GP1 = 1;
}
void main(void) {
TRISIO = 0x00;
clear_shift_register();
while(1) {
GPIObits.GP1 = 0;
GPIObits.GP0 = 1;
GPIObits.GP2 = 0;
GPIObits.GP2 = 1;
GPIObits.GP0 = 0;
//GPIObits.GP0 = 1;
GPIObits.GP2 = 0;
GPIObits.GP2 = 1;
GPIObits.GP0 = 0;
GPIObits.GP0 = 1;
GPIObits.GP2 = 0;
GPIObits.GP2 = 1;
GPIObits.GP0 = 0;
//GPIObits.GP0 = 1;
GPIObits.GP2 = 0;
GPIObits.GP2 = 1;
GPIObits.GP0 = 0;
GPIObits.GP1 = 1;
__delay_ms(500);
}
}
When I programmed the PIC and powered it, nothing happened. So I decided to hook up some LEDs directly to the PIC pins to know what it was doing:
GP0 is OFF all the time
GP1 is ON all the time, but periodically does a one-time flicker very fast
GP2 is OFF all the time
Why am I getting these weird results and how to correct it?
AI: First I second Leon's remark: NEVER use the output pins directly, ALWAYS use a shadow register (unless maybe when you know what you are doing, think Olin level). For an explanation of the read-modify-write issue check my answer on Interfacing a keypad with a microcontroller
Second, what do you expect? The changes you make to the I/O pins have no delay in between, so even if they appear on the pins at all you will need an oscilloscope set to the MHz range to see anything; LEDs will be much too slow (and your eye even slower).
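To illustrate the shadow-register idea and the missing delays, here is a rough sketch in the same HI-TECH/XC8 style as your code (the 10 µs delays are arbitrary placeholders - a shift register is happy with far less, but they make the point):
#include <pic.h>
#include <pic12f683.h>
#define _XTAL_FREQ 4000000
// GP0 -> data, GP1 -> latch, GP2 -> clock (same wiring as above)
unsigned char shadow = 0;              // shadow copy of the GPIO output latch
void gpio_set(unsigned char bit, unsigned char value)
{
    if (value) shadow |=  (1u << bit);
    else       shadow &= ~(1u << bit);
    GPIO = shadow;                     // one write: avoids read-modify-write on the pins
}
void shift_bit(unsigned char value)
{
    gpio_set(0, value);                // put the bit on the data line
    __delay_us(10);                    // let it settle (placeholder timing)
    gpio_set(2, 1);                    // clock it into the shift register
    __delay_us(10);
    gpio_set(2, 0);
}
void main(void)
{
    TRISIO = 0x00;                     // all pins outputs, as in your code
    GPIO   = shadow;                   // start from a known state
    while (1) {
        shift_bit(1);                  // example: clock a single 1 into the register
        gpio_set(1, 1);                // latch it to the outputs
        gpio_set(1, 0);
        __delay_ms(500);
    }
}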
H: Problem stacking accelerometer and SD card shields on Arduino UNO
I have an ADXL345 accelerometer on a Makershield stacked on top of a Seeed SD card shield. Both of those are stacked on an Arduino UNO R2.
I've got each working individually, but when stacked they share pin 12. The SD Card shield uses 12 for MISO of SPI and the ADXL345 uses it for SDO/Alt address.
I'm new to Arduino and have not stacked shields like this before. I'm not sure what the best course of action is here to get them both working, Ultimately, I want to log the data from the accelerometer to the SD card, but first things first!
I'd be grateful for any help. I've linked the data sheets for the ADXL345 and the SD shield above.
AI: Your best solution will depend on which mode you are operating the ADXL345 in.
If you are operating it in SPI mode, then pin 12 is being used for the same role on both boards, and they should be able to share simply by taking turns - only the one which has (had) its slave select activated should be driving return data.
If you are operating it in I2C mode, then you can cut the trace or remove the pin from the shield and permanently wire it high or low depending on which address you want to select. It's also possible that you could reconfigure pin 12 as an output and drive it high or low whenever you want to talk to the ADXL345, then make it an input again when talking to the SD. Even putting a pullup resistor on it would be likely to work - that would let the ADXL345 see it as a high (just a reliable value to control the address) and still let the SD card override it to send data.
If you don't know which mode is being used, look through the code (for the library) you are using to interact with it. (As a guess, it's probably I2C mode, because if it were SPI it would probably want to use most of the same pins the SD shield was using, instead of just one). Or check the /CS pin - if it's wired high you have I2C mode, if it's connected to an Arduino pin you might have SPI.
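As a rough Arduino-style sketch of that output/input trick (the pin number is the shared pin 12 from your boards; everything else here is illustrative):
const int ADDR_PIN = 12;     // shared pin: ADXL345 SDO/address select vs. SD card MISO
void adxlAddressPin(int level) {
  pinMode(ADDR_PIN, OUTPUT);        // drive the address pin while talking I2C to the ADXL345
  digitalWrite(ADDR_PIN, level);    // HIGH or LOW selects the I2C address
}
void releasePinForSd() {
  pinMode(ADDR_PIN, INPUT);         // release the pin so the SD shield can drive MISO again
}
void setup() {
  // call adxlAddressPin(HIGH) before accelerometer transactions,
  // and releasePinForSd() before going back to the SD library
}
void loop() { }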
H: Getting Right parts and building ZVS flyback correctly
I recently built a 555 timer flyback driver. I used 2 IRFP450 MOSFET's in parallel to drive it and I got pretty good arcs. Something always seemed to fail, and I always was buying another 555 chip or another MOSFET. I powered it at 12 volts with a lead acid battery. I now want to build a ZVS flyback driver but I really want to get every right parts and build this correctly so I don't have to keep ordering new parts. I want to use as much stuff that I have already as I can and I will be using only Digi-Key to buy the parts to keep the shipping low.
My first question is about the primary coil of the flyback. According to this schematic, I need to have "5+5" turns. I don't know what this exactly means, but I think it means to wind 5 turns with two separate coils and connect the two middle ones together. I also don't know the exact gauge of wire I should be using and whether mine is too small a gauge, but I used fairly thick magnet wire from a Radio Shack roll of three thicknesses (I used the thickest). (Tell me if I need thicker.) Here is the image:
*note that the two middle wires in the picture are connected together
Next, I am wondering about the inductor. The schematic calls for a 170μH 10 amp inductor. I am planning on winding one myself due to price, so what size toroid core should I buy, is that same magnet wire reasonable for 10 amps or do I have to buy larger, and approximately how many turns do I need to get close enough for the circuit to work?
Third, I have a bunch (like 30) Aerovox capacitors. The schematic calls for 6 1μF 270 volt capacitors to make a large bank, but I looked and they can get quite pricey, especially when buying 6 of them, so I am wondering if these would work. I have values from .33 μF to 5 μF and voltage ratings from 270-600 VAC (600 volts on some of the lower values). Here are a few pictures of them:
Next, is the flyback itself suitable for a ZVS driver? And is it possible I don't even have to wind my own primary? (maybe it has something like 5+5 turns already built in)
The label on the flyback itself says 8014-3 {new line} MSH1AAS92A. I couldn't find the datasheet anywhere and the closest thing I could find was this which was a little singing arc tutorial. Here is a little picture of the label on the flyback.
I'm pretty sure the rest is pretty straightforward: ordering 4 resistors, 2 1N5349B diodes, 2 MUR1560 diodes, and 2 IRFP250 MOSFETs. My main concern is winding the primary of the flyback CORRECTLY and EFFICIENTLY and the capacitor bank and the inductor.
AI: Wow, where to start...
If you blind yourself from the arc or electrocute yourself, it's your own fault. These sorts of do-it-yourself circuits can produce LETHAL amounts of energy and are easily FATAL.
Now to your questions:
I don't know what this exactly means, but I think it means to wind 5 turns with two separate coils and connect the two middle ones together.
Correct. What you're describing is two windings of 5 turns with the end of the first winding connected to the start of the second winding (technical speak for 'the middle ones').
I used fairly thick magnet wire from a Radio Shack roll of three thicknesses (I used the thickest). (Tell me if I need thicker.)
"Fairly thick" is completely relative and not helpful. The 5+5 turn windings are used to source energy to the arc that's formed by the open HV terminals. It's difficult to predict just how much current can flow since (I believe) this sort of self-oscillating, non-controlled design is going to be dominated by parasitic elements and hard-to-control elements like transformer coupling, the resistance of the windings, the layout of the switching devices with respect to the transformer, etc. - so, use the thickest magnet wire that you can fit on the core.
I am planning on winding one myself due to price, so what size toroid core should I buy, is that same magnet wire reasonable for 10 amps or do I have to buy larger, and approximately how many turns do I need to get close enough for the circuit to work?
You should do a complete inductor design. The number of turns on the toroid depends on the core's inductance factor (\$A_L\$) which of course depends on the exact toroid you're going to be using. There's no magic solution here. As for wire, I'd guesstimate 18AWG magnet wire or thicker to minimize DC losses. Go for a toroid that has room for more turns than you calculate, so that you can more easily add more turns if you find you need more inductance.
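As a worked example (the \$A_L\$ figure here is only an assumed value - use the one from your core's datasheet): for the 170 µH target on a core with \$A_L = 75\$ nH/turn²,
$$
N = \sqrt{\frac{L}{A_L}} = \sqrt{\frac{170\,\mu H}{75\,nH/turn^2}} \approx 48\ \text{turns}
$$
Then check that roughly 48 turns of your chosen wire actually fit through the core's window, and pick a bigger core if they don't.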
Third, I have a bunch (like 30) Aerovox capacitors. The schematic calls for 6 1μF 270 volt capacitors to make a large bank, but I looked and they can get quite pricey, especially when buying 6 of them, so I am wondering if these would work.
The idea is to use multiple capacitors to divide up the current, so these in parallel should work. The inductor and capacitor values define the operating frequency (or so a few websites say) so try and keep the same capacitance value as the original schematic as a starting point.
Next, is the flyback itself suitable for a ZVS driver? And is it possible I don't even have to wind my own primary? (maybe it has something like 5+5 turns already built in)
You tell us. It's your transformer, after all. Seriously, "flyback transformer" is a broad term that covers many more devices than those found in CRTs. And I wouldn't trust any windings other than the multi-turn high-voltage one (that's the reason you're recycling a CRT transformer and not building your own transformer, right?)
My main concern is winding the primary of the flyback CORRECTLY and EFFICIENTLY and the capacitor bank and the inductor.
This sort of homebrew work doesn't lend itself to immediate efficiency. You probably won't hit the sweet spot the first few times, especially if you don't have any power electronics knowledge.
H: ATmega168: UART 8 data bits, no parity, 1 stop bit
According to the datasheet I'm setting the UCSR0C register as following:
UCSR0C = 0b00000110
Description:
Bit 0: 0: TX rising, RX falling
Bit 2-1: 11: character size 8 bit
Bit 3: 0: 1 stop bit
Bit 5-4: 00: no parity
Bit 7-6: 00: asynchronous USART
(http://www.atmel.com/images/doc2545.pdf page 193)
Although I configured the UART to use 8 bit data mode I have to set my communicating terminal (picocom) to use 7 bit data mode. Otherwise I don't get correct data from my ATmega168. I also tried to use minicom and miniterm.py.
My target device I want to communicate with is a Raspberry Pi. I'm using Peter Fleury's UART library (http://homepage.hispeed.ch/peterfleury/group__pfleury__uart.html).
Any ideas what I'm be doing wrong? Did I forget something?
AI: I solved the problem by using an external oscillator. It seems that the internal oscillator is too imprecise for UART communication. |
H: Why MAX232 works without any capacitors?
I was playing today with a very simple setup of a PIC16F876 and a MAX232 connected to the COM port of a PC running a terminal emulator. I was just testing (the uC simply echoes anything it receives) when I decided to try using smaller capacitors for the MAX232. To my amazement, after I removed ALL FIVE capacitors (C1-4, Cbypass) from the breadboard, it turned out that the communication was still working just fine! It works at 9600 and 28.8k (couldn't test with higher speeds because the 4MHz crystal I use is not suitable for generating the higher baudrates).
But anyway, so far everything I tested works just fine. I even tried another analogous chip ST232CN and it also works without caps.
So my conclusion is that max232 already has inside all it needs for the generation of +/-8.5V. Until now I thought that the chip DEPENDS on the external caps to work. But it seems that they are not critical to max232 operation but ... I don't know, only provide extra stability? And yet with all my tests I never observed corrupt data transfer so I wonder what kind of interference do the caps actually protect from, what would the drawbacks be if I continue to use this setup with no caps at all?
PS: Since I didn't expect max232 to work without caps, at first I thought that I was mistaken and that I was observing some local echo at the terminal. But then I programmed the uc to echo not the same byte but byte+1. As expected, the incremented ASCII symbols were observed arriving at the terminal, so the local echo possibility was eliminated.
AI: It's not likely that Max232 is pumping out +/-7.5V or more without the capacitors, more that your terminal accepts the logic level voltage coming out of your pic controller.
The RS232 spec allows for anything between +3 to +15 and -3 to -15. If the supply line to the Max232 is 5V then simply providing +/-5V for the max232 is acceptable to your terminal.
Some equipment (often older) requires +/-12V, and the +/-7.5V that the MAX232 would boost a 5V line to would get you closer to that. As Gustavo mentioned, if you hook it up to a scope you're unlikely to see a sustained +/-12V or more coming out of the MAX232.
H: How can I improve my Freq to Voltage circuit?
I had a term project which was based around Analog-to-Digital convertors.
We used an LM2907 frequency-to-voltage converter to take a 150 mVpp sine wave (0-1 kHz) and convert it to a voltage. The voltage was sent to a PIC18F4685 MCU running on a Northmicro101 proto board, which through some embedded code would take the analog voltage and convert it to a 10-bit value. This value was translated back to a frequency value and displayed on the LCD.
It wasn't meant to be a bulletproof set-up, more about learning and implementing the concepts learned through the term. However, what we found was that the frequency on the LCD would creep upwards 1 or 2 Hz after about 15-20 minutes and would keep on creeping very slowly the longer we ran the circuit.
My question is what would be the most likely cause for this creep and how would we eliminate it in a future version?
The F2V circuit was tied to the NM101 protoboards +9V power supply and the output of the F2V circuit was set for +5V output max for PIC input pin protection.
AI: I suspect a big part of the drift you're seeing is related to this equation in the LM2907 datasheet:
This shows that the LM2907 output voltage will drift linearly if your Vcc drifts. If you are not using an extremely good regulator to power the LM2907, this could easily account for 0.2% of drift.
If you are using a precision reference circuit for your ADC (or a bandgap reference internal to the uC), you might try switching to using Vcc as the reference. With a Vcc-referenced ADC, any effect of Vcc drift on the LM2907 output will be compensated by the drift in the ADC operation.
Edit: I removed the prior advice because I see you are powering the LM2907 from 9 V, while the PIC is powered from 5 V; therefore, you can't expect the two power supplies to drift equally.
H: What does 3AG mean in "Fuse Block 3AG"?
I have read from a vendor website. It mentions 3AG fuse block. What does 3AG mean?
AI: 3AG is a standard form factors for fuses. It's very common. 3AG also known as "American fuse" (as opposed to "European fuse", which is smaller).
(source)
(source) |
H: USB to GPIO module
I'm working on a project which involves having users press buttons on a webpage which the server needs to send onto a microcontroller in the form of a GPIO signal. The only ports available on the server are USB.
I have had a couple of thoughts:
Just use an Arduino board as the USB to GPIO interface.
Use a purpose-built module such as the http://numato.com/8-channel-usb-gpio-module (this is pretty much the only one turned up in a quick Google search).
Use something like what is described in this question. Drive LEDs from USB
Can anyone with experience in this area make a recommendation?
AI: Perhaps the simplest solution to the stated requirement is the FTDI FT245R IC.
The key points of note in the context of the question are:
Royalty-free USB drivers from FTDI for Windows, MacOS and Linux (MacOS support only for USB VCP mode)
Complete USB device mode protocol handled on-chip without custom programming
Complete USB hardware on-chip, including USB resistors
8 GPIOs available - will allow future expansion for inputs from the microcontroller too
USB suspend / resume support, to switch device to low-power mode when not in use
Integrated level converter and 5 V, 3.3 V, 2.8 V and 1.8 V totem-pole output, so it can talk to most standard microcontrollers
No additional crystal or oscillator required, it's all integrated.
Free sample implementation code for both computer and FT245R ends
In terms of time saved in implementation, and future requirement bloat, this is perhaps the fastest, simplest and cheapest 1-chip answer to the requirement. Price for individual units is around $4-5, and in the $2 range in volumes. |
H: How do I reverse my electric motor?
For a pet project I am doing (trying to fly a kite using my computer) I need to move and reverse an electric motor to my liking (I took the motor from an old hand-held vacuum cleaner). I need to control this movement from a Raspberry Pi.
From what I understand there are 3 basic types of electric motors: DC, single-phase AC and 3-phase AC. Of these three, only the DC motor reverses when you switch polarity (which I guess means swapping positive and negative). On this page, however, I read that "for a DC motor which has a wound field winding instead of a permanent magnet you have to reverse the connections either to the field winding or to the armature." And here I'm lost.
So I thought to simply try it out: I hooked up the + of the battery to the - of the motor and vice versa. The motor did in fact move, but still in the same direction and a lot slower. Unfortunately, after a few seconds it smelled like something was burning. So I quickly took the load off the motor. Luckily it still works.
The things I now wonder about are:
What kind of motor do I have (see pics below)?
How do I reverse this motor?
Can I reverse this motor from the Raspberry Pi using the Gertboard's on board motorcontroller (as far as I understand the motorcontroller on the Gertboard simply reverses the positive and negative when it wants to reverse the motor)?
If this motor cannot be reversed using the Gertboard, where would I be able to get a motor that can actually be reversed using the Gertboard (preferably by getting it from an old home appliance)?
All tips are welcome!
AI: The components on the back of the motor are three capacitors to suppress the contacts, and a diode to protect whatever switched it from a back-emf.
If you reverse the polarity without removing the diode, then you will just put the current through the diode not the motor, and possibly either destroy it or damage your power supply, or both.
So you need to remove the diode, and if you are using a simple transistor to switch the motor, provide some other protection. If you're using an H-bridge with diodes, then you're OK (I'd imagine the Gertboard does, but I don't know for certain).
You also need to remove the larger black capacitor - it is polarised, so will be damaged and possibly explode if you connect it the wrong way round. Previously the diode prevented the current going through it when you did that. If the motor generates lots of electrical noise when you run it without the capacitor - interfering with radios and so on - then you will need to add a similar value of non-polarised capacitor to suppress that.
Also check that board you're using can supply the motor's required current. |
H: Use of LED driven over-current in a strobe lamp
I'd like a strobe lamp using RGB LEDs as its light source. I want to pulse the LEDs with very short duration pulses (ideally microseconds or less) at around 100 Hz.
The total on time for the LEDs per second is likely to be less than 1/1000 of a second. If the LEDs are driven at nominal power, the total light output will be low and the resulting illumination will be very poor. I'm interested in the idea of driving very short pulses through the LEDs that are constant power, but with a current well over the nominal. Ideally, 10x or even 100x over nominal.
A thread here: High Current Pulse on LED suggests that a few times over nominal current for short pulses is probably okay, but I think they're talking about longer pulses than I am imagining.
Could anyone comment on whether the LEDs are likely to survive long enough to be useful? I don't mind a drastically reduced total life. As long as they'll survive for a few tens of hours of usage (total on time probably less than an hour), that's fine.
AI: For a practical answer to the question, destructive testing of at least one LED, preferably a few, will be required.
Broadly:
LEDs are primarily destroyed by heat, not so much by current. Depending on the internal construction of the LED and its short-term thermal dissipation performance, an LED could conceivably survive 100x its rated current. Equally, if the thermal off-take from the junction is not quick enough, an LED could well be destroyed by as little as 5x the rated current.
Given the desired pulse duration mentioned in the question, I just tried the following:
I have a cheap no-name 20 mA red LED being pulsed at 0.8 Amperes at 12 Volts, with pulse duration 5 microseconds, duty cycle 1/256 (0.39%). It has not blown up in the last 15 minutes, in fact the leads are not even discernibly warm. It is not very brightly lit, though - which might be partly because of droop in switching waveforms.
For similar LED overdrive requirements, an in-house rule of thumb I follow is to derate the average power rating of the LED by 10% for every 100% increase of drive current over nominal. I believe this is overly conservative, but I have had success with as much as 30x nominal current for "camera flash" type applications using white Piranha LEDs.
Would this exceeding of rated values be considered acceptable engineering? Not by a long shot.
Update:
Subsequent to the test with the red LED described above, the PWM frequency was reduced such that each "on" pulse became 20 microseconds, from the previous 4.88 microseconds, keeping the duty cycle the same as before.
The result was true destructive testing: The LED exploded spectacularly, the top half has still not been found.
Hypothesis: With the pulse duration being comparable to the LED's rise time, the LED does not really light up much, nor does it exhibit expected thermal catastrophic effects.
While retaining the 20 microsecond pulse duration and 0.39% duty cycle, current limiting was introduced, systematically increasing allowed current from 50 mA to beyond 400 mA. The LED survives up to a point and is much brighter than in the 4.88 microsecond case throughout.
Beyond around 350 mA, the LED dies, magic smoke comes out, i.e. it transforms to SED (Smoke Emitting, Dead).
Conclusions:
Average power is not the only factor contributing to destruction (or survival), keeping pulses too short simply does not allow the LED to turn on enough to matter
With 20 microsecond pulses, the 20 mA LED survives approximately 17.5 times its nominal current rating before destruction
I need to buy more LEDs. |
H: Is it OK to have data lines and AC in the same multicore cable?
I need to provide the following over sections of 1m multicore cable, totaling around 15m.
240v AC
5v DC
GND
RX
TX
Will the AC cause problems for the Serial data?
AI: Operator safety and risks due to insulation failure are the primary concerns to address before even delving into the electronic side of the question.
Keeping the above aside, the answer would depend on the current passing through the power cables.
If the amount of current is large enough, the resultant electromagnetic interference on the other cables in the core would have severe detrimental effect.
This is easiest to visualize if you consider the way a clamp-type current meter operates - a signal is induced in the clamp's coil proportionate to the current flowing through the cable under test.
For moderate power supply currents, there exist special-purpose multicore cables specifically for the purpose described, such as Ambicore Cable:
Note that such cables have a separate insulation for the power conductors as a group, and that the signal conductors are typically shielded to protect signal integrity. |
H: FPGA Synthesis = 0 LE (Altera Quartus II)
Just starting with FPGAs and stuck with a synthesis issue.
Basically, the circuit I am designing is coming out with 0 logic elements and 0 for all resources except for the pin assignments. The code compiles (with a few warnings) and it seems fine, but when it comes to fitting/placement, nothing shows up.
The circuit is purely combinational, asynchronous, and has no clock.
I've tried some simple example circuits such as flip-flops and they work fine.
So does anyone know why, or in what scenarios/cases, FPGA synthesis on Quartus gives 0 LEs?
Thanks.
AI: Without the code, you can only expect general advice.
However the most likely scenario is that the outputs don't actually depend on the inputs, so that optimisation eliminates all the logic in between them and hardwires the outputs to '1', '0' or 'Z'.
This can often be due to a mistake in your logic, or a reflection of the fact that you are trying out an incomplete design, and the missing portions contain logic that would prevent the optimisation from deleting what you have done so far. |
H: Does nodal analysis work with alternating currents if every step is treated like DC?
I have a question that is difficult for me to describe.
I want to know if I will get accurate readings of an AC circuit by nodal analysis if I treat each significant voltage change as a DC circuit.
For example: if I have a 10 volt AC power supply at 60 hz, can I determine the currents (through a complex circuit) if I calculate each separate voltage up to my target point as a DC voltage?
Will I get approximately accurate results?
AI: Only if your circuit is purely resistive. If you have any inductance or capacitance then your circuit will be frequency dependent and you will need to take account of the complex impedance. |
H: Dual closed v.s. Dual open audio jack?
I've been looking at getting some 3.5 mm audio jacks for a project I'm working on and these seem to be the best: http://www.switchcraft.com/productsummary.aspx?Parent=529 The problem I'm having is that I don't know which is better for my application. I'm looking at the 35RASMT2BHNTRX, which is a dual open jack, vs. the 35RASMT4BHNTRX, which is a dual closed jack. If I use the dual closed jack, can I use those switches to detect that an audio cable is present?
AI: Unless you need to detect if a connector is plugged in (such as to select or connect something else like an internal speaker/mic instead) the difference is essentially irrelevant.
If you do need to do so, consider which will be most convenient for your circuit. You may find it clearer to look at the schematic symbols on the mechanical drawings than to try to interpret the written description.
H: Overwrote STM32F4 ST-Link pins
By mistake, I overwrote the ST-Link programming pins PA13 and PA14. Now I cannot reprogram my chip.
How could I fix the problem?
Datasheet
AI: To 'unbrick' the STM32F4, connect the BOOT0 pin to the 3V pin, reset the chip, and then you can erase or flash it.
H: What is the voltage at a node between two series voltage sources?
Hopefully the title explains it. As an example, what is the voltage at node one in the schematic below? And more importantly, why? I believe the answer is 0V, but I suppose what I'm thinking is: Why isn't it 125V? Thanks for any help you can give.
simulate this circuit – Schematic created using CircuitLab
AI: To say the "voltage at" is incorrect. Voltage is a difference.
When people say the voltage at a point, what they really mean is the voltage from said point to ground.
Because there is a ground symbol next to your node with nothing in between you can say that the voltage from the node to ground is zero.
see http://en.wikipedia.org/wiki/Voltage |
H: How do I connect a signal shield to minimise noise from close proximity AC
In my proposed application, I will have approximately 30 RS-485 nodes configured in 3 chains spaced ~1m apart. There is a need to pass through 240v L-N-E, +5v, A, B, GND in each node. The physical layout is shown below, as well as an example of the cable.
I am worried about the interference on my signal lines. There will be approximately 1.6 A of triac-switched current on each chain.
How do I connect the signal shield to offer the best protection?
AI: Is there an equivalent to this cable where the data member is a "star quad" cable? A differential signalling standard on correctly wired "star quad" cable is more resistant to interference.
Star quad cable has 4 conductors in a circle (I numbered them clockwise below)
1 2
4 3
Use 1 and 3 in parallel for one leg, and 2 and 4 for the other. This gives much better noise cancellation than a regular twisted pair.
The downside of star quad is a much higher capacitance between legs, but over short (1m) runs that won't be such a problem.
edit : this tight twist is what I think of as star quad; though ordinary quad used the same way will also provide reasonably good immunity to interference. Image from the Canford catalogue. |
H: Is a IIR digital filter with a0=1 effectively a FIR digital filter?
I have been experimenting with several digital filters for my thesis recently, and while learning about the FIR and IIR kind, they seem to have similar output expressions.
While using a Java class I found online to design a Butterworth IIR filter's coefficients, I realized that the 'a0' value turns out to be 1 and, consequently, that the resulting expression looks like a FIR filter.
Is this the case? Or is the fact that the 'y[n-i]' terms are present on the final implementation enough to warrant it not being FIR and definitely IIR?
AI: It's unlikely to be true. IIR filters use feedback paths and recirculate a fraction of the output (ever-diminishing, hopefully). Hence they have the name Infinite Impulse Response, meaning that an impulse on the input causes an output that keeps decaying indefinitely and never fully settles in finite time.
FIR filters do not have feedback paths - hence the name Finite Impulse Response, because the output following an input impulse changes and then returns to "normality" after a finite length of time.
The resulting expression may look like a FIR expression, but if it contains y[n-i] terms then it can't be FIR - it's IIR.
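In difference-equation form (generic textbook notation, not anything specific to your Java class):
$$
y[n] = \sum_{i=0}^{N} b_i\, x[n-i] \qquad \text{(FIR)}
$$
$$
y[n] = \sum_{i=0}^{N} b_i\, x[n-i] - \sum_{i=1}^{M} a_i\, y[n-i] \qquad \text{(IIR)}
$$
Normalising so that \$a_0 = 1\$ merely scales the other coefficients; as long as any of the remaining \$a_i\$ are non-zero, the feedback terms \$y[n-i]\$ remain and the filter is IIR.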
H: Is it possible to compute delay with an IIR filter between peaks?
I am developing a project which receives accelerometer signals as input, which must then be analysed via peak-detection algorithms (among other techniques).
In order to do so more effectively, I first filter the signal with a low-pass Butterworth digital filter (an IIR filter) with order 20.
By plotting the signal to a graph, I can then see a delay between the peaks to be detected, which wouldn't matter much if I just needed to detect the peaks themselves.
But since I need to compare the output of this filter with the input and then some other filtered outputs, I need to be able to match these peaks between themselves.
I know this can be done (approximately) with FIR filters by calculating their delay (which is n/2), so I ask: is this possible with an IIR filter?
Filter coefficients:
A = [ 1; -5.98758963; 16.67219332; -28.2587879; 32.15975649; -25.6017496; 14.40568743; -5.647074344; 1.473727937; -0.230919346; 0.016479631 ]
B = [ 1.68E-06; 1.68E-05; 7.58E-05; 2.02E-04; 3.54E-04; 4.24E-04; 3.54E-04; 2.02E-04; 7.58E-05; 1.68E-05; 1.68E-06 ]
AI: It is possible with an IIR filter, but not accurately with a plain Butterworth filter. As far as I can recall, the only IIR filter with flat group delay is the Bessel, whose stopband attenuation is inferior to the Butterworth.
The problem with the Butterworth is that the group delay is a function of frequency, therefore the different frequency components of your peaks are delayed by different times.
One solution is to follow the Butterworth filter with a "group delay equaliser" consisting of approximately as many "all pass" stages as the filter itself. This will give you a good approximation to a constant group delay. I wouldn't presume to advise you on actually designing the equaliser.
All in all, I think the FIR filter is a far simpler approach, as well as more accurate : I don't know why you regard its delay as "approximate". |
H: Processor - L1 Data cache interface
Sorry if the following looks like a very specialized (or programming) question, but I'm hoping there are people on this forum who have done VHDL/Verilog modeling, and might be able to answer:
I'm writing a simulation model of a multi-processor cache system. My processor model is a 32-bit Sparc V8 processor. I was trying to understand what the processor-L1 data cache interface looks like. I have the following doubts:
How wide is the processor-L1 interface? If it is 32 bits wide, then how are doubleword accesses handled atomically? Example: if the DoubleWord instruction is split into two word-accesses, can the block in the cache get invalidated between the first and the second word access? Doesn't it mean the instruction is not atomic? Is the load/store doubleword instruction required to be atomic?
How are atomic load/store or swap instructions implemented on this interface? Is there a signal going from the processor to cache that says "stall all other operations until I say so", and then execute a load followed by store?
I'd be thankful for any links pointing in this direction
AI: (I do not know any HDL, but I hope the following will be helpful anyway.)
One can use a 32-bit wide interface and implement atomic 64-bit loads/stores. For loads one can "cheat" by reading from the invalidated cache entry (only checking the tags on the first 32-bit load), since one knows that the two 32-bit accesses will be back-to-back and within the same cache block that is known to be a hit.
For stores, since the cache block must be in modified (or exclusive if silent updates are allowed) state to accept a store, an invalidate request (really read-for-ownership) generates a data response. Since a data response is provided and the total time of the write would typically only be two processor cycles, the data response could be delayed until the store has completed.
LDSTUB (load-and-store-unsigned-byte) and SWAP could be handled somewhat similarly to a 64-bit store by delaying the load until the cache block is in exclusive/modified state; the store part of the operation is known to be immediately after the read portion and a data response is required anyway, so the data response can be delayed slightly.
An alternative implementation of LDSTUB and SWAP could treat an invalidation between the load and the store as a miss for the load, effectively reissuing the load. However, this presents a danger of livelock. While livelock issues can be managed (e.g., various back-off techniques), the earlier mentioned implementation is probably much simpler. |
H: Circuits newb needs someone to help with a diagram
I'm trying to build an OBDII circuit for a STN1110 chip from OBD Solutions. My problem is I don't understand some of the values needed. For instance, the datasheet has most of the information I need, but in the ISO transceiver I cannot understand what the Q's are or what their values need to be. Also how do I tell what the right LM339 chip is? There are like 6 different versions on mouser.
Also, if anyone can tell me the difference between DLC_RAW and just DLC I would appreciate it...
AI: The Qs are NPN bipolar junction transistors; they do not have a specific value, so you have to find the right transistor for the application. Regarding the LM339, most likely you are looking at the different packages it comes in; I recommend you go with a DIP (dual in-line package) like this one. The difference between DLC_RAW and DLC is that DLC_RAW is the raw, unprotected signal coming from the outside, while DLC is protected against reversed polarity by the diode, current-limited by the PTC fuse, protected against overvoltage and ESD by the TVS diode, and filtered of high frequencies by the capacitors |
H: Shifted shift register
I've just had my first go at getting a shift register to work with my Arduino.. it was almost flawless, but I seem to be missing my first bit.
I followed the guide here, but using only one shift register:
http://arduino.cc/en/Tutorial/ShiftOut
It took a little while to figure out, but it appears as though pin Q1 on my shift register is lighting up when I send the number 2, Q2 is high for 4, Q3 = 8 etc.. when I call shiftout() with the number 1 no LEDs come on.
I checked my wiring and it seems fine.. then I double checked and it's definitely fine.
The only things I can think of are:
1. My shift register is slightly different to the one they're using.. it's a 595N, not a 595.
2. I've done something wrong in the code..
In loop():
digitalWrite(latchPin, LOW);
shiftOut(dataPin, clockPin, MSBFIRST, 1);
digitalWrite(latchPin, HIGH);
Does anyone have any tips on debugging this issue?
Cheers,
John
AI: The shift register is behaving exactly as designed.
The second-from-lowest significant bit should be lighting up for a value of 2. That's Q1.
To check for a value of "1", you need to be looking at the least significant output bit, which is Q0.
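If you want to double-check the mapping on the bench, a minimal test (a sketch only, reusing the latchPin/clockPin/dataPin names from the tutorial code you already have) is to walk a single bit along the outputs:

// Light Q0, then Q1, ... up to Q7, one per second
for (int i = 0; i < 8; i++) {
  digitalWrite(latchPin, LOW);
  shiftOut(dataPin, clockPin, MSBFIRST, 1 << i);  // 1 lights Q0, 2 lights Q1, 4 lights Q2, ...
  digitalWrite(latchPin, HIGH);
  delay(1000);
}

If the lit LED walks from Q0 upwards as i increases, the wiring and code are fine and what you saw earlier was just the expected bit numbering.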
Also, the 595 and 595N are not functionally different. |
H: How to wind a toroid for 170 uH Inductor
I am planning on buying a toroid core from Digi-Key. I want to be sure that this is the right type and that I can achieve 170 µH inductance with 22-18 AWG wire. How can I wind this, and what is the formula, so I can calculate this myself in the future? If this doesn't work, which one can I buy from Digi-Key to get the right inductance? (My budget is 4 dollars or under for the toroid core.) Lastly, I want this to handle up to 10 amps of current, so tell me if I can't use 18 AWG wire.
Edit: fixed broken link; I guess it was just because it was linked directly to my shopping cart.
From a comment it was suggested that I buy one ready-made, but I can't find any pre-wound 10 amp, 170 µH toroid, and the only thing close was about 10 dollars, so I would like to wind it myself!
AI: The next step after MMGM's excellent answer is to put a few numbers from his datasheet into the calculator linked in Mark B's answer.
Averaging the inside and outside diameters (6 mm and 10 mm) gives a mean radius of 0.4 cm; use MMGM's 10 turns.
Datasheet has "Ae=7.83mm^2" so enter 0.0783 (cm^2) in the "Area" box and it will calculate a coil radius. Enter 4300 for relative permeability (datasheet calls it ui, calc calls it k, these things happen!) and the calculator confirms inductance 0.168mh, pretty close... So far so good.
Now the crucial question : will the coil take 10 amps?
There is another calculator to answer that on the same site...
Enter the radius (0.004m this time!) 10 turns, k=4300 again. And new, the "Flux density near saturation" from the N30 data sheet - B = 380mT = 0.38T, and click the link to "current" above.
For this core size and material, with these turns, and this saturation flux density, the calculator says "0.177 amps".
So, no...
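If you would rather check the arithmetic yourself than trust the online calculators, here is a minimal C sketch of the standard toroid formulas (assuming a mean magnetic path length of 2*pi*r and ignoring fringing and core non-linearity):

#include <stdio.h>

int main(void)
{
    const double PI  = 3.14159265358979;
    const double MU0 = 4e-7 * PI;   /* permeability of free space, H/m     */

    double mu_r = 4300.0;           /* relative permeability (N30 "ui")    */
    double r    = 0.004;            /* mean core radius, m (0.4 cm)        */
    double A    = 7.83e-6;          /* core cross-section Ae, m^2          */
    double N    = 10.0;             /* number of turns                     */
    double Bsat = 0.38;             /* flux density near saturation, T     */

    double le   = 2.0 * PI * r;                  /* mean magnetic path, m  */
    double L    = MU0 * mu_r * N * N * A / le;   /* inductance, H          */
    double Isat = Bsat * le / (MU0 * mu_r * N);  /* current reaching Bsat  */

    printf("L = %.0f uH, Isat = %.2f A\n", L * 1e6, Isat);  /* ~168 uH, ~0.18 A */
    return 0;
}

This reproduces the calculator results: about 168 uH, but saturation at well under 0.2 A.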
As an experiment, try a 4cm radius, 1cm^2 area, 9 turns, same material. The first calculator says 0.174mh, again pretty close. The second now says 1.96 amperes which is heading in the right direction, but a MUCH bigger coil...
So, as MMGM says, magnetics design is hard.
But that was a first step. Now try some different core materials (lower ui = k), larger cores, and lower inductances, and see where you get.
(Also bear in mind that 10 A DC may translate to 20 A or more at AC. Try designing for 1 A, 5 V until you have something working.) |
H: PIC MCU long and short jumps in assembly
If I want to make a short jump from one memory address to another, let's say from 0x0040 to 0x0060, is the assembly code like the one below?
ORG 0x0020
GOTO 0X0060
And if I want to make a long jump from bank 1 to bank 3, for example from address 0x0810 to address 0x1810, is it like this:
ORG 0x0810
BSF PCLATH,3
BSF PCLATH,4
GOTO 0X1810
??
AI: It is not clear what your question actually is, but yes, you have to set PCLATH<4:3> to reliably jump to any location in program memory using a GOTO instruction on the original 14 bit PIC architecture. This is, of course, all very well described in the datasheet in several places. What exactly are you confused about?
You also have some other confusions. There is no distinction between a "long" or "short" jump. The GOTO instruction always does the same thing. The low 11 bits of the target address come from the instruction itself, and the upper 2 bits from PCLATH<4:3>. This has nothing to do with whether the two upper bits happen to be the same in the source and destination addresses or not.
No, you can't jump between bank 1 and bank 3. Data memory can not be executed, only program memory can. You can jump between pages of program memory, with the only distinction being that the upper 2 bits of the address change when jumping between pages.
If you use the convention that PCLATH<4:3> are always set to the page currently executing from and you know that the target is in the same page, then you don't have to explicitly set anything in PCLATH. However, this is a convention that is completely up to you. The hardware can be used various ways. |
H: Do I need FCC certification to sell if my device uses ISM bands?
I have made a portable XBee-like transceiver. The system will be operating at 2.44000 GHz (RX) and 2.46000 GHz (TX). The TX power to the antenna is well under +30 dBm; the EIRP is also less than +36 dBm. My understanding is that even though my device is unlicensed and uncertified, I am legally able to operate at these frequencies in the ISM band.
However, if I want to sell this device, do I need to have this device certified by the FCC? If so, what does it mean to be certified by the FCC for ISM bands?
P.S.
I only care about FCC certification, as I will be getting Intertek certification soon... and I don't really care about UL, cUL, or CE.
AI: All intentional radiators must be certified to the FCC regulations. Since you said "transceiver", it implies this device is in part a transmitter. Selling an intentional radiator, such as your device, without FCC certification in the United States is a federal offense. All your units can be confiscated, you can be fined, and in some cases worse punishments may be imposed.
Frankly, if you are asking such a basic question, you don't belong in this position. Get someone who knows what they are doing to help you through the process. Then maybe next time you can be the expert. This is really not a place newbies belong without the guidance of someone who does know what they are doing. |
H: What impedances do graphics cards expect on H-Sync and V-Sync for VGA monitors?
I have a graphics card for a laptop whose driver kills itself if it runs for longer than two minutes with no external monitor attached. To solve that, I decided to make a sort of dummy plug that would be detected as a monitor by the PC.
I have managed to find out what impedances the video card expects for most of the pins, but for H-Sync and V-Sync, Google brought up numerous conflicting answers, mostly from people who are guessing.
AI: According to the VESA Plug and Display standard, which uses compatible electrical signaling for analog video, the host is supposed to detect a minimally-capable monitor by looking for 75 Ohm termination on the video lines. Anything beyond basic functionality is determined by reading EDID information over the I2C lines.
Because EDID is so old, the video card may not even bother checking for a monitor without it. You could probably fool the video card into believing a monitor with any desired capability is present with an appropriately programmed I2C EEPROM or microcontroller. |
H: What is the cap for in Arduino reset circuit?
My Arduino ATmega328P board has this circuit for reset. I understand the switch (LTspice didn't have a switch symbol) pulls the line low, that's obvious, but what does the cap do when reset comes through DTR? Does it invert the signal or only allow a pulse?
AI: Yes.
It converts the level-triggered signal DTR into an edge-triggered signal and has the effect of level shifting it to within the operating voltage range of the MCU (+/- a forward diode drop).
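How long the resulting reset pulse lasts is set by that capacitor together with the reset pull-up resistor. As a rough example using the values found on many official Arduino boards (an assumption here; check your own schematic), \$R = 10\,\mathrm{k\Omega}\$ and \$C = 100\,\mathrm{nF}\$ give \$\tau = RC \approx 1\,\mathrm{ms}\$, far longer than the minimum reset pulse width in the ATmega328P datasheet yet short enough to be invisible to the user.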
DTR (Data Terminal Ready) is a signal line used for hardware flow-control in various EIA serial protocols (such as EIA-232). In your case, it is being "hacked" to serve as a host-initiated reset of your microcontroller.
Ease-of-use
The host has software control over the state of the DTR line, but implementing a short pulse may not be possible due to scheduling or other tasking in the way.
Speed
Perhaps you want a really fast reset (you don't want a user to notice the micro went through a reset); this wouldn't be possible with software control of the reset line directly through DTR. You'd have to send a low level, then a high level, via your host software.
Level Translation
The DTR line may be at EIA-232 levels, many volts higher (and lower) than the microcontroller can safely tolerate. AC coupling the reset edge severely limits the current (waveform energy) such that it may be safely clamped by the ESD protection diodes attached to the reset pin inside the microchip. |
H: I2C on explorer 16 board not working
I am using the Explorer 16 board to build an accelerometer interface connected over I2C.
Currently I have only the I2C part done, and I am trying to probe SCL1 and SDA1 with an oscilloscope. The accelerometer has yet to be connected to the MCU!
But on the oscilloscope I don't see any signals on either SCL1 or SDA1. Any help? Here is the code:
UINT config1 = 0,i=0;
UINT config2 = 0;
/* Turn off I2C modules */
CloseI2C1(); //Disable I2C1 module if enabled previously
ConfigIntI2C1(MI2C_INT_OFF); //Disable I2C interrupt
config1 = (I2C_ON | I2C_7BIT_ADD );
config2 = 157;
OpenI2C3(config1,config2); //configure I2C1
IdleI2C1();
StartI2C1();
while(I2C1CONbits.SEN ); //Wait till Start sequence is completed
MI2C1_Clear_Intr_Status_Bit;
AI: The documentation for OpenI2Cx() says that it configures the I2C control register and the I2C baud rate generator. You still need to set up the GPIO lines themselves.
What I did in my PIC24 I2C application (see the code sketch below) is:
Configure PPS (may not be necessary for you)
Disable ADC on the I2C lines
Set the TRIS registers on the I2C lines as inputs
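A rough C sketch of those steps (register and pin names are assumptions for a PIC24FJ-style part; adapt them to your device datasheet), to be run before OpenI2C1()/StartI2C1():

/* 1. PPS: only needed if your device has remappable I2C pins; on many
      PIC24s SCL1/SDA1 are dedicated pins and nothing needs mapping.    */

/* 2. Make the pins shared with the ADC digital (PIC24F naming: 1 = digital). */
AD1PCFG = 0xFFFF;

/* 3. Set the I2C pins as inputs (assuming SCL1 = RG2 and SDA1 = RG3, as on
      the PIC24FJ128GA010 PIM commonly fitted to the Explorer 16).       */
TRISGbits.TRISG2 = 1;
TRISGbits.TRISG3 = 1;

Also make sure there are external pull-up resistors on SCL1 and SDA1; an open-drain I2C bus never goes high without them, so the scope would show a flat line even with the module running. |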