H: Battery still drawing tiny current when fully charged I have a device (a vaporizer) which contains a rechargeable battery. The battery only takes a couple of hours to recharge, but sometimes I end up leaving it charging overnight. I have monitored the current drawn by the device when charging, and to my mild annoyance it still draws about 0.02A even when fully charged. I don't know how to stop this small current draw. I also don't know whether I need to worry about this (i.e. whether it's decreasing the battery life). I have two choices when leaving the device charging overnight -- I can either use the mains, or a portable USB battery (i.e. use a second battery to charge the device). My portable USB battery turns off automatically when current draw is zero -- however the current draw of 0.02A pulled by the device even when fully charged stops my USB battery from switching off. Is there something which I can buy, which has a USB input and output (or a micro USB input and output) and which stops conducting electricity if less than about 0.05A is flowing through it? Or is there a USB battery which turns off when less than about 0.05A is being drawn from it? Some more details -- I know very little about the rechargeable battery in the vaporizer. There is no mention of what battery type it is in the manual; it's a portable vaporizer, so its job is to heat something up to 200 degrees Celsius and keep it at this temperature. The specs on the device are that it wants a 5V input to charge and max power input is 10W; the charger that came with the device is 2A so this checks out. I bought one of these: https://www.amazon.co.uk/Muker-KCX17-Voltage-Multimeter-chargers-capacity/dp/B01BTRLVYQ to monitor the current of the device whilst charging; it typically starts at 1.5A and then drops to around 0.02A when fully charged. My main concern is that this constant drawing of a small amount of charge will degrade the battery in the device. A secondary concern is that leaving it to charge overnight represents a fire risk; most devices I have stop taking charge after some time and if I'm charging them with my USB battery then the battery switches off; this does not. Am I fussing about nothing? AI: Addressing your concerns in turn: Battery damage. All depends on how good the battery charger is. The best treatment for a lead acid battery is to maintain a trickle charge after recharge. The best treatment for a lithium ion battery is to cut the charge to zero. The charger could be doing either and still be drawing 20mA for its own purposes. The manufacturer could care about battery life or they could not. Whether it's causing damage is probably up to how much you trust the manufacturer! Fire risk. Certainly not while operating correctly and only drawing 20mA, but anything left on is more risky than being physically switched off. In general, I sympathise with the concern but think it's not worth worrying about: 20mA @ 5V is 100mW and won't make a dent in the grand scheme of things. Charging from another battery is its own issue. Whether the battery is being treated well or not is really a product quality consideration - is the product well made and likely to last a couple of years? Then the battery charger is probably well made too and nothing to worry about. Is the product crappy and likely to be chucked out in a year? Then who cares if the battery only lasts 2 years? Of all the things that could catch fire, is this one likely? Probably not.
Consumer products have safety requirements to meet, and there's bound to be someone that treats the product worse than you that will be the first to prompt a product safety recall. If you're not satisfied, then join the campaign against Phantom Power - there are plenty of guides, tips, products and advice associated with reducing phantom power. You can use automatic detection, timers, rules and triggers to turn off devices, but it's a rabbit warren to navigate. The Lawrence Berkeley Laboratory's Standby Power site is a good place to start - there's even a chart with battery chargers on it.
H: Added parameter variant does not take effect I want to add a parameter variant to my schematic. The parameter being varied, called "Load", does not exist in the DbLib from which I'm pulling my schematic symbols. Instead I've added the load parameter to certain symbols placed in my SchDoc. I'm able to add a parameter variant changing the Load parameter from empty to "DNI" in the Variant Manager Dialog, but when I switch to the newly added variant in my SchDoc, I see no change to the "Load" parameter of the target symbol. Any idea what I might be doing wrong from the above description? thanks AI: When you change the variant the value in the schematic editor does not change. In order to view the variant differences you must select the variant-specific sheet, which is selected using the tabs at the bottom of your SchDoc. I don't have one set up at the moment to show you, but hopefully you will be able to find it on your own. Switch to the variant with the modified value for "Load" and the extra tab(s) should appear at the bottom of the pane, next to the "Editor" tab. If it doesn't, after changing the variant close the SchDoc and reopen it, and the tab(s) should appear. All this is from memory, so hopefully it still works for you. I can look into it again when I get to work tomorrow -- I have various projects with multiple variants.
H: How to pull Triac gate to ground with an NPN transistor with AC present I am trying to figure out in this circuit, which lets the Triac 170 conduct normally, which prevents an engine from firing, how to use an NPN transistor to replace the seat switch 154, but I assume there is AC voltage on the 112a line, and I know that AC would not play nicely with the transistor. AI: You could use the transistor to switch a small relay. Or use a MOSFET-output SSR. Both those solutions also avoid any issues with where the grounding is. As this is apparently lifted from a patent for a safety device - if your application is similar, be sure to undertake a proper engineering review of the safety aspects. Keep in mind that semiconductors most typically fail on (but can also fail off or the connections can go open). Edit: I am suggesting you replace the switch with the SSR -- 400V or 600V units can be found, such as the TLP797J. You have to confirm that the voltage rating is adequate. This particular one can switch 100mA. In the case of the relay you can choose whether to use a normally open or normally closed contact of a sealed SPDT relay, and replace the entire circuit. I assume they didn't do that in the patent for some reason (perhaps just the wiring) and I would want to figure out exactly why. Again, the safety aspects need to be examined carefully at a system level as well.
H: How to use 350mA constant current driver with a 180mA LED As part of a domestic down-lighting system I've got a 350mA Constant Current driver, but I'm advised the LEDs I want to use should be driven with a 180mA CC driver. Changing the driver or LEDs is going to be tricky. Can anyone see any problem with putting a resistor in parallel with the LED to drop the current from 350mA down to the recommended 180mA? How would I go about calculating the appropriate resistance? The forward voltage of the LEDs is around 0.6V. AI: Can anyone see any problem with putting a resistor in parallel with the LED to drop the current from 350mA down to the recommended 180mA? Yes. That won't work out, or at least, it's hazardous. Diodes, unlike resistors, reduce their effective resistance when heating up. Thus, after the initial "correct" setting, the current through the diode will increase, due to the temperature increase and the resistance decrease. That, in turn, means the diode gets hotter. And then, its resistance reduces, further increasing the current through it… A vicious circle, often called thermal runaway. It usually ends with the diode failing by burning out. One thing you could do is have another constant current source feeding the diode, and a "waste power" resistor in parallel to that. But it's an ugly solution that's not going to be easier than ripping out the original CC source.
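To see why the parallel-resistor idea is so touchy even before thermal effects kick in, here is a rough worked example using the ~0.6 V forward voltage quoted in the question (the numbers are illustrative only). The naive shunt value would be

$$R_{shunt}\approx\frac{V_f}{I_{driver}-I_{LED}}=\frac{0.6\,\text{V}}{0.35\,\text{A}-0.18\,\text{A}}\approx 3.5\,\Omega$$

but the driver forces the full 350 mA regardless. If the LED warms up and its forward voltage sags by just 50 mV, the shunt current drops by roughly \$0.05/3.5\approx 14\,\text{mA}\$, and that 14 mA is diverted into the LED instead, heating it further -- exactly the runaway loop described above.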
H: Questions Regarding Symbols in a Schematic So I'm building the circuit below as an exercise: I have some questions on this schematic as I'm fairly new to reading schematics and electronics in general. I have every component in the diagram but I'm confused about some of the symbols in the diagram. What do Vin and Vs stand for? Where would the common ground in this circuit be, and how would you be able to determine that from the diagram? What does the BYPASS part of the schematic mean and how would I implement that in my circuit? Also, would I be able to put an SPDT switch and LED just after Vin (assuming Vin is the input voltage, which I'm still unsure about)? I would be greatly appreciative of any help, thank you. AI: \$V_{S}\$ = Supply Voltage Sometimes you might also see \$V_{S+}\$ and \$V_{S-}\$ for dual supply op amps, although this depends on the supplier and the type of device. The table below shows the possible different notations that can be used for positive and negative supply. \$V_{IN}\$ = Input voltage This is the voltage signal that you are providing your circuit with. Considering this is a typical audio amplifier, your input would be a low voltage input signal from something like a microphone. In a circuit schematic, ground is indicated wherever you see one of these symbols. All your signal grounds are linked together, all your earth grounds are linked together and all your chassis grounds are linked together. These are then connected to each other at a single point, but correct earthing is another topic entirely and isn't one of my strong points. For the circuit you have there, just link all the ground points together and connect it to the 0V on your power supply. The 'BYPASS' is a pin on the actual LM386 device and is shown in the datasheet. It is to do with the internal amplifier stages of the device. Putting a capacitor between the bypass pin and ground (the datasheet recommends 10uF) prevents any noise from your power supply feeding back into your amplifier and causing any humming or buzzing you might hear from your speaker. If you have a clean power supply you don't always need one. As for putting an LED and SPDT switch just after \$V_{IN}\$, it all depends on what your input voltage is. If this is a signal coming from an electret microphone then the voltage will be in the 100mV range and will not be able to light up an LED. If you let me know why it is you want the LED and what you expect it to do I might be able to give better advice.
H: Why does this MOSFET switch on so slowly? I am learning Tina and drew this simple MOSFET switch: Adjusting P1 will at a certain point switch on the MOSFET, which I expected to saturate immediately. However when I look at the DC transfer characteristic, it turns on 'slowly' when the gate voltage moves between 3.5V and 4.7V. I confirm this by setting P1 to 4.3%, where the MOSFET seems to be half-conducting and thus dissipating 249W: How can this be? I thought that this kind of MOSFET was a switch but it seems to behave like a BJT (albeit in a narrow gate voltage range)? EDIT: Thanks for the answers, all +1, which thoroughly enlightened me; I accepted Andy's answer simply because it corresponded best to my lack of understanding. AI: However when I look at the DC transfer characteristic, it turns on 'slowly' when the gate voltage moves between 3.5V and 4.7V. I'm assuming here that you don't mean "slow" in time but that you mean it doesn't instantly turn fully-on as you reach some precise voltage threshold. This is what a MOSFET does - if you wish to use it as a switch then you apply a suitable voltage level on the gate (relative to the source) in as quick a time as you can. Maybe 12 to 15 volts or, for logic level MOSFETs, only 5 volts is needed. Then it behaves like a switch. If you wish to use the MOSFET as an amplifier then you apply a linear voltage to the gate to get a (somewhat) linear voltage at the output. Notice the voltage gain in your circuit - for a change in gate voltage of about 1.15 volts you get a change in output voltage of about 100 volts. That's a gain of about 87 if you want it in simple numbers. If you looked at gate voltage from 4.3 volts to 4.5 volts, the output voltage changes about 25 volts - that's a gain of 125. it seems to behave like a BJT (albeit in a narrow gate voltage range)? A bipolar transistor might totally switch on with a base-emitter voltage change from 0.6 volts to 0.7 volts. As a range that is 0.1 volts with an offset of 0.65 volts, and if you compared this to your MOSFET switching on in a range of 3.5 to 4.65 volts (1.15 volts) it isn't that dissimilar. 0.1/0.65 = 15% and 1.15/4 is about 29%, so relatively speaking a BJT and a MOSFET are similar.
H: Magnetic induction on data lines from DC power lines I have experienced failures of RS-485 links due to magnetic induction from ~52V battery power lines running very close in parallel. Standard shielding around the RS-485 cables did not solve the problem but a clearance distance of 1 inch did. The fault probably occurred when a high current such 90A was interrupted by the opening of a relay or a breaker. I have the following questions about such cases: Standards: Is there any accepted guideline regarding clearance distances in this particular scenario? The recommendations I have seen are based on house wiring cases where there is a chance of shorting the data and power lines running through walls. Moreover, section 1.16 "Electrical Clearance" of IPC-A-620B (2012) leaves clearance spacing between cables up to design and only gives guidelines based on voltage. And while the IEC 61000-4-x tests appear to be very applicable here, they would apparently focus on protection at the transceiver level. Calculations and simulations: What is the recommended approach for calculating the induced voltage or power in such cases? Which tools are prevalent in the industry for computer simulations of such scenarios? Are such calculations followed by experimental testing? Behavior of DC current at trip: This is a more general question. For simulating scenarios like the one described, I will probably need to assume a maximum di/dt value. How fast and in what manner does a DC current terminate when its path is broken with an MCCB or a latching relay? Is it dependent on the source and load? Can waveforms from other cases be applied for this case or will I have to gather empirical data for my own system? AI: Standards: Is there any accepted guideline regarding clearance distances in this particular scenario? When you take a walk outside on a mild and overcast day you cover-up because you know that the sun can still give sunburn if exposed too long. This is equivalent to moving the cables further apart (as you did). If it gets warmer and brighter you put on a pair of shades and a hat. Warmer still, and you put on sunblock. It's the same with EMI to data cables - you take the action that is appropriate and convenient. If that means twisted pair cables then do that. If it means screened twisted pair then so be it. If you have to avoid earth fault currents getting through your data cables then you only terminate solidly at one end and maybe via a 10 nF at the other end. If the common mode induced voltage is high enough to potentially push the RS485 receiver out of its common-mode input range then you use an isolated receiver (sun block). In other words, there are precautions and you can take all of them but they come with a cost and some performance limitations (sometimes). Calculations and simulations: What is the recommended approach for calculating the induced voltage or power in such cases? Faraday's law of induction is a good starting point but it quickly becomes difficult to estimate induction levels when wires are twisted and the interfering source is close up (near field). So, you take the precautions that you can afford and advise installers not to do "this" and not to do "that". Behavior of DC current at trip You could try and model it. You know the cable type and therefore you can find out or estimate the inductance and capacitances. You can model it like a transmission line and see what di/dt you get. Plenty of simulators do this. I was using a t-line model only yesterday for a similar thing.
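As a rough back-of-the-envelope illustration of that induction estimate (the mutual inductance and interruption time below are assumed order-of-magnitude values, not measurements of your installation): for cable runs lying close together over a metre or so, a mutual inductance on the order of \$M\approx 0.1\,\text{to}\,0.5\,\mu\text{H}\$ is plausible, and a contact interrupting 90 A in roughly 1 µs gives

$$v_{induced}=M\frac{di}{dt}\approx 0.2\,\mu\text{H}\times\frac{90\,\text{A}}{1\,\mu\text{s}}=18\,\text{V}$$

which is comfortably outside the -7 V to +12 V common-mode range of a standard RS-485 receiver, consistent with the failures you saw. Separation, twisting and common-mode chokes all work by shrinking the effective \$M\$ or by rejecting the resulting common-mode voltage.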
H: MCU clock drift and radio frequency drift - are they the same? Let's assume that I have two sensor nodes, one with crystal oscillator running at 24 MHz frequency, the other at 24 MHz + 10 ppm frequency. To my understanding, in system-on-chip (for example, Texas instruments CC2650) based sensor nodes this single high-frequency crystal (MHz range) powers both the MCU and the radio (where GHz range is needed). I mean, the "local oscillator" component in RF diagrams that generates the 2.4 GHz sine wave is calibrated by using the MHz oscillator as the source (through PLL). Assume that on both nodes the radio is configured to use the 802.15.4 channel 11, which has 2405 MHz center frequency. Is it the case that one node will communicate using 2405 MHz and the other 2405 MHz + 10 ppm? AI: Is it the case that one node will communicate using 2405 MHz and the other 2405 MHz + 10 ppm? Yes, one will transmit at 2405 MHz and the other will use 2405.02405 MHz plus there will be jitter caused by the PLL (in both systems) and this may be in the region of +/- 100 ppm at a variable frequency in the hundreds of Hz to low kHz range (with some randomness too). This dwarfs the static 10 ppm error. When it comes to receiving, it is likely that the "misaligned" receiver will lock-onto the precise transmission frequency irrespective of its own slightly misaligned local clock. This can also be done using PLL techniques. On more complex transmissions where the actual centre frequency can be missing (such as in phase modulation or other suppressed carrier types), a special PLL (called a Costas loop) is employed.
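For anyone checking the arithmetic behind that figure: a 10 ppm error on a 2405 MHz carrier is

$$2405\,\text{MHz}\times 10\times 10^{-6}=24.05\,\text{kHz}$$

so the second node nominally transmits at about 2405.024 MHz, a fractional offset small enough for the receiver's carrier-recovery PLL to pull in easily.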
H: Triac fan speed control Are Triac based fan speed regulators harmful to ceiling fans? One shop guy told me that they're being replaced with capacitor based regulators nowadays. But then how can the fan speed be controlled via microcontrollers without using triacs? AI: Are Triac based fan speed regulators harmful to ceiling fans? Triac based fan speed controllers are not harmful to ceiling fans that are designed for speed control. Even fans that are designed for another type of speed control are not at much risk, since the main source of potential trouble is difficulty in dissipating motor losses at lower speed, leading to higher operating temperature. If the motor is designed for sufficient cooling at reduced speed with increased slip, it doesn't matter too much how the increased slip is accomplished. Triacs can cause a buzzing or humming noise that can be annoying. They can also cause electrical noise on the power lines and even radio frequency interference. A capacitor in series with an auxiliary motor winding is a common method of giving a single-phase motor starting torque and a defined operating direction. Reducing the capacitor value is a common method of speed control. See the question: "varying run capacitor for speed control of single phase motor." To use that method with a microcontroller, the microcontroller could be used to control switching devices to switch in different capacitor values. The switching elements would carry only the auxiliary circuit current, not the entire motor current. As described, that would provide several discrete alternative speeds like the common three-speed switch. However, a modulated switching method could be used in conjunction with capacitors to provide the effect of a continuously variable capacitance.
H: 12 potentiometers with multiplexer I'm going to connect 12 potentiometers (100k) to a couple of cd4051 analogue (8->1) multiplexers and to 2 of the Arduino (Uno) analogue pins. Now I think each pot draws 0.5mA (I'm going to measure it to be sure) so around 6mA for all 12 pots. I think that is OK since the Arduino permits 200mA as far as I know. Also I read that it is recommended to use 10k potentiometers (because of the ADC) but I have also read that 100k ones are going to work. So is there anything I'm missing here? (before soldering all the power and ground terminals). (The multiplexer in the schematic is a single module.) AI: Stray noise can be induced, both inductively (as currents) and capacitively (as voltages), on the cables connected to the pots or on the board itself, and any induced current flowing through the pot resistance appears as an error voltage (V = I·R). So a 100k pot gives roughly ten times the noise voltage of a 10k pot for the same induced current. Shielding and common-mode ferrite chokes improve the results, and may be mandatory anyway with twisted pairs, since the inputs are unbalanced.
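On the firmware side, a minimal Arduino-style sketch for scanning the 12 pots might look like the following. The pin assignments (shared select lines on D2-D4, mux outputs on A0/A1, eight pots on the first CD4051 and four on the second) are assumptions for illustration, not taken from your schematic:

const int SEL[3] = {2, 3, 4};        // S0, S1, S2 shared by both CD4051s
const int MUX_PIN[2] = {A0, A1};     // common pins of mux A and mux B

int readPot(int index) {             // index 0..11
  int mux = index / 8;               // which CD4051 (0..7 -> first, 8..11 -> second)
  int ch  = index % 8;               // which channel on that mux
  for (int b = 0; b < 3; b++) {
    digitalWrite(SEL[b], (ch >> b) & 1);   // set select lines to the channel number
  }
  delayMicroseconds(10);             // let the 100k source settle
  analogRead(MUX_PIN[mux]);          // throw-away read to charge the ADC's S/H cap
  return analogRead(MUX_PIN[mux]);   // actual reading
}

void setup() {
  for (int b = 0; b < 3; b++) pinMode(SEL[b], OUTPUT);
  Serial.begin(9600);
}

void loop() {
  for (int i = 0; i < 12; i++) {
    Serial.print(readPot(i));
    Serial.print('\t');
  }
  Serial.println();
  delay(100);
}

The throw-away analogRead() gives the ADC's sample-and-hold capacitor time to settle from the relatively high 100k source impedance, which is one practical way to live with 100k pots instead of 10k.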
H: Datasheet register bit number When it is written that to check if some flag bit is set, you have to look at for example 0x74[2], does that 2 means 3rd bit of register or 2nd? From which number they count in datasheet 0 or 1, how to determine that? AI: It is common for register bit numbering to start at bit 0, but there are exceptions. That is why, to be sure of the correct answer for a specific device, you need to read its datasheet: If you see a mention of bit 8 within a byte, then the bit numbering starts from 1 If you see a mention of bit 0 within a byte, then the bit numbering starts from 0 Obviously things are more complicated when referring to registers larger than a byte (i.e. larger than 8 bits), but the same principle applies. Here is an example from a datasheet which you've asked about before: [Source] You can see that in this device's datasheet, a register's bit numbering is from 0 to 7 as highlighted with the red border, so that answers your question about the starting bit number.
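As a concrete illustration in plain C: testing the flag written as 0x74[2] under 0-based numbering means masking bit 2, i.e. the third bit counting from bit 0. The register value below is a made-up example, not from any particular datasheet:

#include <stdint.h>
#include <stdio.h>

int main(void) {
    uint8_t reg_0x74 = 0x05;                 /* hypothetical value read from register 0x74: binary 0000 0101 */
    if (reg_0x74 & (1u << 2)) {              /* mask for bit 2 in 0-based numbering */
        printf("flag at 0x74[2] is set\n");  /* this branch runs, since bit 2 of 0x05 is 1 */
    }
    return 0;
}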
H: What happens to an electrical system at its pole? A pole is defined as a value of s at which the value of a transfer function becomes infinite. Let's say I have an LTI system with transfer function H(s) = 1/(s+1). How will the system behave when s = -1? From the Bode plot I can see that the slope changes at the pole frequency. Can someone clear this up? AI: This picture might help: - Along the top of the picture there are three Bode plot examples of the magnitude response for a 2nd order low pass filter. These are just examples that show how the damping ratio (\$\zeta\$) affects the peak of the response. Bottom left shows the fuller picture where you can see the Bode plot and pole-zero plot together. Finally, bottom right is the conventional pole-zero diagram (as viewed from above in the previous diagram). So, if you have a pole at -1, that pole exists along the \$\sigma\$ axis and is at a frequency where \$j\omega\$ = 0. Because the \$\sigma\$ axis is concerned with damping (à la \$\zeta\$, or equivalently \$Q = 1/(2\zeta)\$), the further to the left you travel the more damping there is. What happens to an electrical system at its pole? A lot of systems will begin to turn oscillatory as the pole advances towards and aligns with the \$j\omega\$ axis. If the pole advances further then, almost certainly, the system will become unstable and oscillate. Can someone clear this up? Hopefully this will help or maybe trigger a related question that I should be able to clarify.
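For reference, the standard second-order low-pass form behind those plots (textbook notation, not lifted from the original figure) is

$$H(s)=\frac{\omega_0^2}{s^2+2\zeta\omega_0 s+\omega_0^2},\qquad s_p=-\zeta\omega_0\pm j\omega_0\sqrt{1-\zeta^2}$$

so the real part \$-\zeta\omega_0\$ of each pole lies along the \$\sigma\$ axis (damping) and the imaginary part along the \$j\omega\$ axis; as \$\zeta\to 0\$ the poles slide onto the \$j\omega\$ axis and the response becomes a sustained oscillation, which is the instability described above.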
H: Why are 40 pin and above DIP packages generally wider than 28 pin and below Take for example this datasheet for a Microchip PIC, the 40 pin package has a width of between 0.485 - 0.580 inches, while the 28 pin variant is between 0.24 - 0.295 inches. I have noticed this is the case with many parts, and don't remember seeing any 'thin' 40 pin DIP parts. Is this due to fitting in the lead frame for higher pin counts, or is there another reason? AI: EDIT - First, notice that the two packages are very different in layout. One is called a DIP (Dual Inline Package - and yes, DIP package is redundant. Live with it.) package, and the pins (not the plastic body) are spaced on 0.1 inch spacings with a row spacing of 0.6. The other is a surface-mount package which does not use rows of pins. The difference is important. END EDIT First, you have to keep in mind that for the early logic chips, a 0.3 spacing became the de facto standard. It's important to realize that early (1960's) printed circuit fab techniques made the sort of narrow traces which we take for granted today very expensive, so running connections around a bunch of ICs was a problem for crowded footprints. Traces made on 0.1 inch center-to-center was the norm, with some daring designs using 0.050 pitch. To make matters worse, multilayer boards were almost unheard-of. Even at the low gate densities of the time, there were some chips (like the 74150 and 74181) which required more than the common 16-pin DIP. At the time there was a reluctance to get the extra pins by making a longer, narrow package, and this had two issues. The first was PCB trace issues, and the other was mechanical. DIPs were made using a ceramic substrate, and a long, narrow platform would have been prone to mechanical failure when applying extraction force to one end of a socketed part. So, since engineers and computer geeks tend to think in powers of 2 and 10, the larger pin counts were accomodated by doubling the row spacing to 0.6 and the standard length increased to 24 pins. It's not certain if this was a dominant issue, but much logic design at the prototype stage was done using wire-wrap boards, and going from 0.3 to 0.6 allowed the production of "universal" WW boards with rows of pins at 0.3 spacing, allowing easy mixing of the two sizes. It would be nice to think that the IC companies recognized that engineers will tend to choose parts which are easier to work with in development and then use them in production. It's worth pointing out that the choice was not universal. Some early RAMs with 22 pins were produced in 0.4 spacing, as well as the entire 100K ECL logic line, and occasional other chips as well, but the wild success of the TTL family made 0.3 and 0.6 the de facto standard. With the explosion of chip capacity due to uCs and memory, pinouts began to grow, although chip sizes stayed within the bounds of 0.6 row spacing. Early (E)PROMs, for instance went from 24 pins to 28 quite rapidly, and thence to 32. With the high pinouts needed for data busses, microprocessors jumped quickly to 40-pins, but once again mechanical constraints started to rear their ugly head. I believe there were a few 42-pin weirdos, but it was clear that going to longer chips would have bad reliability consequences due (again) to the fragility of ceramic substrates. As a result, bigger chips, such as the Motorola MC68000 processors and various specialized DSP products such as multipliers and multiplier/accumulators, jumped to 0.9 inch row spacing, with 68 pins as the norm. 
About this time, though, it became apparent that there were shortcomings. With the big packages the lead lengths from the chip to the pins started getting onerous, especially as speed increased. Signal integrity/termination issues get much harder when there exists a long stub within the package. The answer was to go to packages which were not enormously larger than the chip, using SMD packages with much finer connection pitches than the old 0.1. This was aided by the fact that PCB production techniques had gotten good enough to accommodate the tight clearances required, and to do so at reasonable cost. Just how bad the increased speed issues had become is shown by the introduction of some logic chips at the 20-pin DIP scale which had multiple ground pins instead of one, with the ground pins shifted from the convenient corner to the middle of one side, allowing very short ground connections from the chip to the PCB ground plane, with the specific aim of preventing ground bounce due to total lead inductance. During this period, of course, it also became common to accept long skinny parts in smaller packages, particularly 20 and 24 pin parts in 0.3 spacing. As experience was gained with these skinny packages, folks became more comfortable with them, and some specialty chips, such as FIFOs, were produced with 28 pins on 0.3 inch centers, but these were special cases (FIFOs, for instance, tend to have very simple signal connections).
H: Device/method for handling rapidly changing current demand in DC source A circuit in my project uses rapidly switching MOSFETs to control current flow, and as a side effect, the current demand on the DC source varies rapidly. The FET is switching at somewhere between 10kHz and 100kHz, and the current demand switches from 100A to about 150A every period at about that frequency. For example, this is a slightly less extreme example from LTSpice (ignore the small inductive spikes): The DC source is a large 200V/1000A generator/rectifier from the 1950s, and likely can't handle any changes above ~60Hz. Is there a device or component I could add that could draw an essentially constant amount of current from the rectifier, but handle the rapid changes in load to maintain a constant voltage? A (very) large capacitor across the device could do it, but I'd prefer something a little more elegant and less likely to kill me (or at least advice on how to use something like that safely). AI: Is there a device or component I could add that could draw an essentially constant amount of current from the rectifier, but handle the rapid changes in load to maintain a constant voltage? It's called a large capacitor A (very) large capacitor across the device could do it, but I'd prefer something a little more elegant and less likely to kill me (or at least advice on how to use something like that safely). I'm still saying capacitor and to avoid getting harmed you should consider interlocks and/or a discharge relay that removes charge within several seconds when power is removed. Whether you use a capacitor or not, it's likely that if you don't take care, the 200 volt supply will electrocute you. You should also consider that if one of the coil's reverse diodes failed, the electric arc produced from 6 mH and 100 amps is about one-tenth of the energy delivered by a fairly standard heart defibrillator. How safe do you feel given that the catch-diodes you are using have a peak repetitive forward current of only 20 amps and a non-repetitive current of only 150 amps. I'd be concerned about safety with or without a capacitor.
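To put a number on "large" (the ripple target below is an assumption; pick your own): the load swings roughly ±25 A about its average, and at 10 kHz each half-cycle lasts about 50 µs, so holding the bus ripple to 1 V needs a bulk capacitance on the order of

$$C\approx\frac{\Delta I\,\Delta t}{\Delta V}=\frac{25\,\text{A}\times 50\,\mu\text{s}}{1\,\text{V}}\approx 1.3\,\text{mF}$$

rated comfortably above 200 V and with low ESR, so that the capacitor rather than the 1950s rectifier supplies the 10 kHz to 100 kHz component of the load current. Note the stored energy, \$\tfrac{1}{2}CV^2\approx 26\,\text{J}\$ at 200 V, which is exactly why the interlocks and discharge relay mentioned above matter.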
H: Identification of component I was desoldering components from a power supply board for a printer and I came across this small component. It was pressed against a heat sink so I assume it is some sort of thermistor but I am not sure. The text on the component is "5104BL6" AI: It's a temperature sensor for laser printers. Between the stiff electrode wires is an SMD thermistor, which is used in laser printers to regulate the drum heater temperature so that when toner (carbon) is attracted to the charges on the photosensitive cartridge and transferred to the hot drum, it fuses into the paper. The high-temperature Kapton® or polyimide package withstands solder temperatures, protects the sensor from carbon getting stuck to it, and keeps the leads from getting spread apart and stressing the electrical joints. The tip helps regulate the drum temperature; the heater often peaks at 1500W and then runs at a low duty cycle once temperature is reached, within a minute or so.
H: Zener Diode + fuse overvoltage protection for LEDs with constant current driver- fast or slow blow fuse? I am working with a 6-axis robot (UR5) with a smart camera mounted on the end of arm, and I want to add a ring light to the camera lens which I will power using tool outputs from the robot interface. The tool outputs can supply either 12V or 24V, and the voltage is chosen programmatically using a touch screen or script. The LED light rings have a constant current driver requiring 12-18V, so there is a potential for a programming mistake sending 24V which would damage them. I want to prevent that by adding a voltage protection mechanism in front of the ring light, so that if I or anyone else mistakenly sets the outputs at 24V instead of 12V the ringlight is protected. As an aside, I also want my circuit to be very small, with the fuse fitting in a 150 series inline fuse holder from Littelfuse, catalog # 150274, And the zener diode fitting on this tiny PCB, which I can fit inside the case with existing electronics: 5050 LED breakout PCB, product ID 1762 from ADAFruit. I have located components already that fit these, so I don't need help with that part; I just wanted to point out my design requirement. I'm a mechanical engineer, and not well-versed in electronics, so I want to make sure I am doing this overvoltage protection correctly before I implement it. Below is a diagram of what I've come up with after extensive internet research. My question is specifically about the fuse, but if anything else stands out that I am doing wrong, please feel free to comment and correct me. I based my circuit off of this: (http://www.learningaboutelectronics.com/Articles/Overvoltage-protection-circuit.php) and I did a lot of reading on zener diodes, fuses, thermal runaway in LEDs, constant current vs. constant voltage drivers, etc. Here it is: Here is how I understand this: the Zener diode will prevent more than 12V from reaching the circuit. So if 12V is applied the Zener acts like an open circuit. If 24V is applied, the Zener dissipates 12V of that and the LEDs still get 12V...is that right? And am I correct in assuming that the Zener will draw the maximum current it can from the output, causing the fuse to blow? That is what I am hoping. I want the fuse to blow and alert the user to the programming error so the Zener doesn't just sit there and get hot. I've chosen a zener diode that is 12Vz with a 5W power rating. I couldn’t find a fast fuse in the size I want (more than 300mA, less than 600mA), but the slow blow fuse I spec’d out will take 3-20s to pop at 200% current rating. Is that good enough, or do I need a fast fuse? I've calculated 0.330Ax12V=3.96W across the Zener if the circuit has 24V applied to it, which is within its rating...Have I done this correctly? If someone can clear this up for me, that would be great, and as mentioned above, if I have made other mistakes or taken the wrong approach let me know. Thanks a bunch! AI: Why don't you just design (or buy) a small voltage regulator that feeds your bank of LEDs. Then there are no complex issues like fuses blowing and notifications etc. Think simple. Design a simple small buck regulator that can supply (say) 12 volts from the 24 volt power option. It looks like this would be OK: - But there are many other options from Linear technology and Texas Instruments. if I or anyone else mistakenly sets the outputs at 24V instead of 12V the ringlight is protected With this idea you set it at 24 volts and it regulates to 12 volts. 
If you set the input to be 12 volts then the LEDs will be a bit dimmer because the LT3970 won't be able to supply 12 volts from a 12 volt input (it'll be more like 11.75 volts as per the graph on pg 6 of the data sheet). But it won't fuse! As an aside, I also want my circuit to be very small, with the fuse fitting in a 150 series inline fuse holder from Littelfuse, catalog # 150274, And the zener diode fitting on this tiny PCB, which I can fit inside the case with existing electronics I expect the regulator circuit will be smaller. So if 12V is applied the Zener acts like an open circuit Of course if you do decide to go down the zener diode route and pick a 5% tolerance one (about as good as they come) it may only be regulating at 11.4 volts and this may blow the fuse under normal circumstances. Go for a 15 volt zener is my advice; then there is never any possibility of it blowing the fuse at 12 volts.
H: Footprint design for vibration resistance I have a product that will be subjected to reliability testing (random vibration and thermal shock) and I need to design a footprint for an SMT power inductor that will give the best mechanical strength to resist damage during vibration testing. I've looked through the IPC 7351 standard but I can't find anything that talks about how different pad size (resulting in different fillet shape) will affect the mechanical strength of the solder joint. The part I am working with is Murata LQH3NPN1R0MMEL. It has a mass of approximately 0.05 grams. Does anyone have a method for calculating a land pattern for optimal mechanical strength? AI: If you're serious about this, you must take into account the mounting geometry of the PCB, the material stiffness, and the location of the inductor. You'll need to do an FEA vibration analysis to identify nodes on the board and coupling of vibrational energy into the inductor footprint. If you don't have the in-house expertise for that, and don't have access to an outside expert you can hire, you're best off making a blank PCB with just the inductor and shaking/shocking the hell out of it until it fails. EDIT Sometimes the straightforward physical approach can save a lot of time over careful analysis. There's a story about Edison. Supposedly he assigned a task to a new engineer - determine the capacity of an oddly-shaped glass bulb. The engineer spent a day taking careful measurements of the bulb and calculating precisely how much volume this entailed, after making due allowances for the (measured, of course, at several places) thickness of the bulb. Edison, it is said, took the bulb, walked over to a water tap, and filled the bulb, then poured the water into a graduated cylinder.
H: PIC24FJ256DA210 issues with UART Receive, U1STAbits.URXDA never changes AND interrupt never triggers I am having a problem receiving data. Using the logic analyzer I can confirm that Tx is working from the PIC. The Rx register (U1RXREG) is not collecting the Rx data and the data available bit (U1STAbits.URXDA) is never being set to 1. I have tried swapping the pins and Tx works on both pins but still no receive into the PIC. I tried both interrupt driven Rx and polling the UART in the main loop. When using the interrupt the ISR is never called. Using polling the UART the U1RXREG never contains data and the U1STAbits.URXDA bit never changes. We've confirmed this using an ICD3 in circuit debugger by running the code in debug and looking at the register. It might be worth noting that the Loopback is working as expected, when enabled. I'm are using the following code: /******************************************************************** FileName: main.c Processor: PIC24FJ256DA210 Microcontroller Hardware: PCB revision xxxxxxxxxxxx Complier: Microchip XC16 (for PIC24/dsPIC) Company: ********************************************************************/ #ifndef MAIN_C #define MAIN_C //////////////////////////////////////////////////////////////////////////// //***PIC24FJ256DA210 CONFIGURATION BIT SETTINGS***************************// //////////////////////////////////////////////////////////////////////////// // CONFIG4--------------------- //NOT USED // CONFIG3--------------------- #pragma config WPFP = WPFP255 // Write Protection Flash Page Segment Boundary (Highest Page (same as page 170)) #pragma config SOSCSEL = EC // Secondary Oscillator Power Mode Select (External clock (SCLKI) or Digital I/O mode() #pragma config WUTSEL = LEG // Voltage Regulator Wake-up Time Select (Default regulator start-up time is used) #pragma config ALTPMP = ALPMPDIS // Alternate PMP Pin Mapping (EPMP pins are in default location mode) #pragma config WPDIS = WPDIS // Segment Write Protection Disable (Segmented code protection is disabled) #pragma config WPCFG = WPCFGDIS // Write Protect Configuration Page Select (Last page (at the top of program memory) and Flash Configuration Words are not write-protected) #pragma config WPEND = WPENDMEM // Segment Write Protection End Page Select (Protected code segment upper boundary is at the last page of program memory; the lower boundary is the code page specified by WPFP) // CONFIG2-------------------- #pragma config POSCMOD = HS // Primary Oscillator Select (HS Oscillator mode is selected) #pragma config IOL1WAY = OFF // IOLOCK One-Way Set Enable (The IOLOCK bit can be set and cleared as needed, provided the unlock sequence has been completed) #pragma config OSCIOFNC = OFF // OSCO Pin Configuration (OSCO/CLKO/RC15 functions as CLKO (FOSC/2)) #pragma config FCKSM = CSDCMD // Clock Switching and Fail-Safe Clock Monitor (Clock switching and Fail-Safe Clock Monitor are disabled) #pragma config FNOSC = PRIPLL // Initial Oscillator Select (Primary Oscillator with PLL module (XTPLL, HSPLL, ECPLL)) #pragma config PLL96MHZ = ON // 96MHz PLL Startup Select (96 MHz PLL is enabled automatically on start-up) #pragma config PLLDIV = DIV4 // 96 MHz PLL Prescaler Select (Oscillator input is divided by 4 (16 MHz input)) #pragma config IESO = OFF // Internal External Switchover (IESO mode (Two-Speed Start-up) is disabled) // CONFIG1--------------------- #pragma config WDTPS = PS32768 // Watchdog Timer Postscaler (1:32,768) #pragma config FWPSA = PR128 // WDT Prescaler (Prescaler ratio of 
1:128) #pragma config ALTVREF = ALTVREDIS // Alternate VREF location Enable (VREF is on a default pin (VREF+ on RA9 and VREF- on RA10)) #pragma config WINDIS = OFF // Windowed WDT (Standard Watchdog Timer enabled,(Windowed-mode is disabled)) #pragma config FWDTEN = OFF // Watchdog Timer (Watchdog Timer is disabled) #pragma config ICS = PGx1 // Emulator Pin Placement Select bits (Emulator functions are shared with PGEC2/PGED2) //turned on debuging on alternate pins to see if uart rx will work now #pragma config GWRP = OFF // General Segment Write Protect (Writes to program memory are allowed) #pragma config GCP = OFF // General Segment Code Protect (Code protection is disabled) #pragma config JTAGEN = OFF // JTAG Port Enable (JTAG port is disabled) // #pragma config statements should precede project file includes. // Use project enums instead of #define for ON and OFF. //////////////////////////////////////////////////////////////////////////// //***INCLUDES*************************************************************// //////////////////////////////////////////////////////////////////////////// #include <p24Fxxxx.h> #include <HardwareProfile.h> #include <pps.h> #include <uart.h> //////////////////////////////////////////////////////////////////////////// //***DEFINES**************************************************************// //////////////////////////////////////////////////////////////////////////// #ifndef __DELAY_H #define FOSC 32000000LL // clock-frequecy in Hz with suffix LL (64-bit-long), eg. 32000000LL for 32MHz #define FCY (FOSC/2) // MCU is running at FCY MIPS #define delay_us(x) __delay32(((x*FCY)/1000000L)) // delays x us #define delay_ms(x) __delay32(((x*FCY)/1000L)) // delays x ms #define __DELAY_H 1 #endif //////////////////////////////////////////////////////////////////////////// //***GLOBAL VARIABLES*****************************************************// //////////////////////////////////////////////////////////////////////////// volatile int gotData = 0; volatile char rxData; void Pins_Config(void); void SetupUART1(void); void WriteOKtoPC(void); void CheckBuffer(void); void __attribute__((__interrupt__, auto_psv)) _U1RXInterrupt(void); //////////////////////////////////////////////////////////////////////////// //***INTERRUPT SERVICE ROUTINES*******************************************// //////////////////////////////////////////////////////////////////////////// void __attribute__((__interrupt__, auto_psv)) _U1RXInterrupt(void){ rxData = U1RXREG; gotData = 1; IFS0bits.U1RXIF = 0; //Clear Interrupt Flag return; } //////////////////////////////////////////////////////////////////////////// //***INITIALIZE ROUTINES**************************************************// //////////////////////////////////////////////////////////////////////////// void Pins_Config(void){ //CONFIGURE THE UART PINS----------------------------------------------------- _ANSG9 = 1; /*Configure I/O ports as digital*/ _ANSB0 = 1; /*Configure I/O ports as digital*/ _LATG9 = 0; //Bring pins low _LATB0 = 0; //Bring pins low /* Setup analog functionality and port direction */ _TRISG9 = 1; // set RG9 to output; (U1RX) pin F3 _TRISB0 = 0; // set RB0 to input; (U1TX) pin K2 /* Initialize peripherals */ PPSUnLock; iPPSOutput(OUT_PIN_PPS_RP0,OUT_FN_PPS_U1TX); // Assign U1TX To Pin RP0 iPPSInput(IN_FN_PPS_U1RX,IN_PIN_PPS_RP27); // Assign U1RX To Pin RP27 PPSLock; } //////////////////////////////////////////////////////////////////////////////// //UART1 
SETUP------------------------------------------------------------------- //////////////////////////////////////////////////////////////////////////////// void SetupUART1(void){ //Setup Interrupts-------------------------------------------------------------- IEC0bits.U1RXIE = 1; //Enable UART1 Rx Iterrupt IPC2bits.U1RXIP = 1; //UART1 Rx Iterrupt Priority Level IFS0bits.U1RXIF = 0; //Clear Rx Interrupt Flag IEC0bits.U1TXIE = 0; //Disable UART1 Tx Iterrupt IPC3bits.U1TXIP = 1; //UART1 Tx Iterrupt Priority Level IFS0bits.U1TXIF = 0; //Clear Tx Interrupt Flag U1BRG = 34; //Baud Rate Generator (115200) //UxMODE: UARTx MODE REGISTER--------------------------------------------------- //ABAUD: Auto-Baud Enable bit U1MODEbits.ABAUD = 0; //1 = Enable baud rate measurement on the next character – requires reception of a Sync field (55h); cleared in hardware upon completion //0 = Baud rate measurement is disabled or completed //BRGH: High Baud Rate Enable bit U1MODEbits.BRGH = 1; //1 = High-Speed mode (4 BRG clock cycles per bit) //0 = Standard-Speed mode (16 BRG clock cycles per bit) //IREN: IrDA Encoder and Decoder Enable bit U1MODEbits.IREN = 0; //1 = IrDA encoder and decoder are enabled //0 = IrDA encoder and decoder are disabled //LPBACK: UARTx Loopback Mode Select bit U1MODEbits.LPBACK = 0; //1 = Enable Loopback mode //0 = Loopback mode is disabled //PDSEL<1:0>: Parity and Data Selection bits U1MODEbits.PDSEL = 0; //11 = 9-bit data, no parity //10 = 8-bit data, odd parity //01 = 8-bit data, even parity //00 = 8-bit data, no parity //RTSMD: Mode Selection for UxRTS Pin bit U1MODEbits.RTSMD = 1; //1 = UxRTS pin is in Simplex mode //0 = UxRTS pin is in Flow Control mode //RXINV: Receive Polarity Inversion bit U1MODEbits.RXINV = 1; //1 = UxRX Idle state is one //0 = UxRX Idle state is zero //STSEL: Stop Bit Selection bit U1MODEbits.STSEL = 0; //1 = Two Stop bits //0 = One Stop bit //UARTEN: UARTx Enable bit U1MODEbits.UARTEN = 1; //1 = UARTx is enabled; all UARTx pins are controlled by UARTx as defined by UEN<1:0> //0 = UARTx is disabled; all UARTx pins are controlled by port latches; UARTx power consumption is minima //UEN<1:0>: UARTx Enable bits U1MODEbits.UEN = 0; //11 = UxTX, UxRX and BCLKx pins are enabled and used; UxCTS pin is controlled by port latches //10 = UxTX, UxRX, UxCTS and UxRTS pins are enabled and used //01 = UxTX, UxRX and UxRTS pins are enabled and used; UxCTS pin is controlled by port latches //00 = UxTX and UxRX pins are enabled and used; UxCTS and UxRTS/BCLKx pins are controlled by port latches //USIDL: Stop in Idle Mode bit U1MODEbits.USIDL = 0; //1 = Discontinue module operation when device enters Idle mode //0 = Continue module operation in Idle mode //WAKE: Wake-up on Start Bit Detect During Sleep Mode Enable bit U1MODEbits.WAKE = 0; //1 = UARTx will continue to sample the UxRX pin; interrupt is generated on the falling edge, bit is cleared in hardware on the following rising edge //0 = No wake-up is enabled //UxSTA: UARTx STATUS AND CONTROL REGISTER-------------------------------------- //ADDEN: Address Character Detect bit (bit 8 of received data = 1) U1STAbits.ADDEN = 0; //1 = Address Detect mode is enabled. If 9-bit mode is not selected, this does not take effect. 
//0 = Address Detect mode is disabled //FERR: Framing Error Status bit (read-only) //U1STAbits.FERR //1 = Framing error has been detected for the current character (character at the top of the receive FIFO) //0 = Framing error has not been detected //OERR: Receive Buffer Overrun Error Status bit (clear/read-only) U1STAbits.OERR = 0; //1 = Receive buffer has overflowed //0 = Receive buffer has not overflowed (clearing a previously set OERR bit (1 -> 0 transition); will reset the receiver buffer and the RSR to the empty state //PERR: Parity Error Status bit (read-only) //U1STAbits.PERR //1 = Parity error has been detected for the current character (character at the top of the receive FIFO) //0 = Parity error has not been detected //RIDLE: Receiver Idle bit (read-only) //U1STAbits.RIDLE //1 = Receiver is Idle //0 = Receiver is active //TRMT: Transmit Shift Register Empty bit (read-only) //U1STAbits.TRMT //1 = Transmit Shift Register is empty and transmit buffer is empty (the last transmission has completed) //0 = Transmit Shift Register is not empty, a transmission is in progress or queued //URXDA: Receive Buffer Data Available bit (read-only) //U1STAbits.URXDA //1 = Receive buffer has data, at least one more character can be read //0 = Receive buffer is empty //URXISEL<1:0>: Receive Interrupt Mode Selection bits U1STAbits.URXISEL = 0; //11 = Interrupt is set on an RSR transfer, making the receive buffer full (i.e., has 4 data characters) //10 = Interrupt is set on an RSR transfer, making the receive buffer 3/4 full (i.e., has 3 data characters) //0x = Interrupt is set when any character is received and transferred from the RSR to the receive buffer; receive buffer has one or more characters //UTXBF: Transmit Buffer Full Status bit (read-only) //U1STAbits.UTXBF //1 = Transmit buffer is full //0 = Transmit buffer is not full, at least one more character can be written //UTXBRK: Transmit Break bit U1STAbits.UTXBRK = 0; //1 = Send Sync Break on next transmission - Start bit, followed by twelve 0 bits, followed by Stop bit; cleared by hardware upon completion //0 = Sync Break transmission is disabled or completed //UTXEN: Transmit Enable bit NOTE: If UARTEN = 1, the peripheral inputs and outputs must be configured to an available RPn/RPIn pin U1STAbits.UTXEN = 1; //1 = Transmit is enabled, UxTX pin controlled by UARTx //0 = Transmit is disabled, any pending transmission is aborted and the buffer is reset; UxTX pin is controlled by port. 
//UTXINV: IrDA Encoder Transmit Polarity Inversion bit NOTE: Value of bit only affects the transmit properties of the module when the IrDA encoder is enabled //U1STAbits.UTXINV //IREN = 0: //1 = UxTX is Idle 0 //0 = UxTX is Idle 1 //IREN = 1: //1 = UxTX is Idle 1 //0 = UxTX is Idle 0 //UTXISEL<1:0>: Transmission Interrupt Mode Selection bits U1STAbits.UTXISEL1 = 0; //11 = Reserved; do not use U1STAbits.UTXISEL0 = 0; //10 = Interrupt when a character is transferred to the Transmit Shift Register (TSR) and as a result, the transmit buffer becomes empty //01 = Interrupt when the last character is shifted out of the Transmit Shift Register; all transmit operations are completed //00 = Interrupt when a character is transferred to the Transmit Shift Register (this implies there is at least one character open in the transmit buffer) } /////////////////////////////////////////////////////////////////////////////// //***MAIN PROGRAM************************************************************// /////////////////////////////////////////////////////////////////////////////// int main(void){ INTCON1bits.NSTDIS = 0; //Interrupt nesting enabled. Note: The IPL Status bits are read-only when NSTDIS (INTCON1<15>) == 1. See Page 42-43 & 96-97 & 140 in data sheet. Also see Chapter 10 in the XC16 C COMPILER USER’S GUIDE CORCONbits.IPL3 = 0; //CPU CONTROL REGISTER - The IPL Status bits are concatenated with the IPL3 (CORCON<3>) bit to form the CPU Interrupt Priority Level (IPL). See Page 42-43 & 96-97 & 140 in data sheet. Also see Chapter 10 in the XC16 C COMPILER USER’S GUIDE SRbits.IPL = 4; //CPU interrupt priority level is 0 (turns on all interrupts) See Page 42-43 & 96-97 & 140 in data sheet. Also see Chapter 10 in the XC16 C COMPILER USER’S GUIDE Pins_Config(); SetupUART1(); //LOOP START******************************************************************** while(1){ while(U1STAbits.URXDA){ rxData = U1RXREG; gotData = 1; } CheckBuffer(); } //LOOP END********************************************************************** } void CheckBuffer(void){ if (gotData == 1){ gotData = 0; WriteOKtoPC(); } } void WriteOKtoPC(void){ /* Data to be transmitted using UART communication module */ char Txdata[] = {'6','\0'}; /* Load transmit buffer and transmit the same till null character is encountered */ putsUART1 ((unsigned int *)Txdata); /* Wait for transmission to complete */ while(BusyUART1()); } /** EOF main.c ****************************************************************/ #endif Can someone help explain why this is not working as expected? AI: These lines look suspect to me : _ANSG9 = 1; /Configure I/O ports as digital/ _ANSB0 = 1; /Configure I/O ports as digital/ It looks like your RX pin is configured as an analog input. When an input is set as analog the digital input circuit is disconnected to protect it from larger analog signals. The TX won't be disabled because it controls the voltage levels itself, but the RX circuit will be disabled. To configure the ports as digital I/O you need to assign 0's not 1's to the analog select bits.
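In other words, in Pins_Config() the analog-select assignments need to be 0 so that the digital input buffer on the RX pin is active; a sketch of just those two lines, with the rest of the routine left as posted:

_ANSG9 = 0;   /* RG9 digital, so the U1RX digital input buffer is enabled */
_ANSB0 = 0;   /* RB0 digital for U1TX (not strictly required, but consistent) */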
H: What are these elements and what do they do? I have a telephone circuit (Model No. Panasonic KX-TS500MXB) which has the following item between TIP and Ring. Do you have any idea what it might be? At first I thought it might be a back to back zener diode but after some testing I doubt I am correct. The following item is unknown to me as well. I have searched the part number but I couldn't find any item similar to this!! EDIT: Here is the symbol that corresponds to the first image: Thanks AI: The first item is a spark gap device, i.e. an over-voltage protector. The symbol clinched it for me. I have no definite thoughts about the second picture but it might be a bridge rectifier, given the + symbol on the top right pin. Modern telephones use bridges of course, usually two; one for the speech circuit and one for the ringer circuit.
H: What is the purpose of diodes after the RF power amplifier and before the LPF? I am planning to build an RF transmitter using an integrated PA module (RA30H1317M1). While studying schematics of similar transceivers, for example the IC-2200H (which also has a PA module), I see all the typical transmit chain components like the PA, filter and antenna connector, but I don't exactly know the purpose of diodes D27 and D12 (in the middle of the picture). Are they perhaps to prevent "backward" energy flow reflected from the filter and antenna, to prevent intermodulation in the PA? Are they necessary? AI: The diodes are part of an RF switch that switches between the RX/TX modes of the transceiver (see also my answer to another question). This can be accomplished by changing the DC potential at the RX LINE side of the diodes. You can see an RF filter on the TX LINE side consisting of R150, C174, C176 and L40. I'm sure there is a similar filter on the RX line side (connected to a switchable DC voltage source) not shown in the excerpt of the schematic. If the diodes are forward biased, RF power can pass from TX LINE to COMMON LINE. The transceiver is transmitting. If the diodes are reverse biased, RF power from TX LINE to COMMON LINE is blocked. The transceiver is receiving.
H: How do I reduce the rise and fall time to ideal case of 0 in my LTSpice model? Working with basic circuit with pulsed voltage yields a rise and fall time or 0.5ns. My maximum time step is 1ps. With no delay , rise or fall times assigned. ( default assigned to zero) . How do I reduce the rise and fall time to ideal case of 0 in my LTSpice model ? What are the minimum limits in LTSpice rise and fall times if it cannot be 0(zero) ? In this example simulation I changed my simulation parameters to picoseconds range. Even then there is a rise and fall time of 0.05ps. At a higher range the same happens. In the attachment simulation in the range of seconds is shown. Is there a setting that I need to change in LTSpice ? AI: Rise & fall times can't be truly zero due to the way that SPICE works. A "zero" risetime by definition will have an infinite dV/dt. This doesn't make sense from the perspective of the solver, which in the case of LTSpice is a modified trapezoidal method. The tightest risetime you can muster will be the smallest timestep you use (dt). This is OK, since in the real world risetimes aren't zero either. Even incredibly high speed digital communications buses will have rise times in the range of tens of picoseconds, and most practical signals the average EE deals with will be in the single or double digit nanosecond range. For practical purposes, you shouldn't worry too much about sharp risetimes in SPICE. It can actually be detrimental, since it will (1) slow down your solver by requiring a tighter timestep, and (2) may introduce numerical instability. If I want a sharp risetime, I'll usually set the risetime on pulse trains to two or three orders of magnitude less than the period of the pulse, and then forget about it. For more information on how LTSpice works internally, Mike Engelhardt (author of LTSpice) has a great blog entry: http://www.linear.com/solutions/5739. EDIT: jonk in the comments above pointed out this very useful LTWiki link describing the default behaviour when zero risetime is specified: http://ltwiki.org/index.php5?title=Most_frequently_asked_questions_for_beginners#I_have_a_pulse_source_in_my_schematic_with_zero_transition_times..C2.A0_LTspice_only_shows_slow_transition_times_of_2ns..C2.A0_What.27s_going_on_here.3F.
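As a concrete illustration of that rule of thumb (the values are arbitrary): for a pulse source with a 1 µs period you might write something like PULSE(0 5 0 1n 1n 499n 1u), i.e. 0 V to 5 V, zero delay, 1 ns rise and fall times (three orders of magnitude below the period), 499 ns on-time and a 1 µs period. The edges look essentially ideal on any plot of the full waveform, while the solver never has to chase an infinite dV/dt.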
H: How to find out whether a register contains inputs, outputs or both? I don't have much experience with embedded systems and I am trying to determine whether it is possible to tell if a register is made of inputs, outputs, or a mix of inputs and outputs. Let's say I have a 32-bit register like so: "11111111111000000000000000000000". Is there a way to check whether this is made of inputs/outputs/both? The project I work on already does this somehow when run on hardware, but I need to simulate a method that, given a register, determines what it is made of.

AI: How to find out whether a register contains inputs or outputs or both? By reading the register reference manual. If the only thing you can do is read and write registers, there's no way to tell the difference between the following three cases, all of which are reasonably common:

- a read-only bit that's currently zero
- a bit that's unused and always reads as zero
- a bit that's write-only and reads as zero

Worse, writing the wrong values to some registers may have unwanted, and possibly irreversible, effects on the peripheral.
H: Is the voltage across a turned-off transistor a problem? I was studying and came across a question that I couldn't find answered on the internet. I created this circuit to demonstrate my problem: The transistor Q6 is a normal NPN transistor, the only difference being that its maximum Vce is 30V. Now here is the situation: if I switch it on and off, when it's ON the voltage drop will be across the resistors and there will be very little voltage between collector and emitter. But when it's turned OFF the collector will be at 48V and the emitter at 0V. As long as the breakdown voltage is not reached we are good, but does turning the transistor on (that initial swing down from 48V through 30V) damage it? I assume the 48V drop at turn-on takes some time, and during that time the transistor will be operating at a voltage higher than it should; is that true? Does it damage the transistor? AI: But when it's turned OFF the collector will be at 48V and the emitter at 0V. Is 48 V more than 30 V? Yes, so you have a problem. The off state is exactly the state that's normally expected to produce the highest \$V_{ce}\$, and it is where the maximum \$V_{ce}\$ spec is most likely to become important.
H: Calculate gain of degenerated common-source stage using small-signal model I have difficulties calculating the gain of a degenerated common-source stage, with the output resistance of the MOSFET taken into account. I came up with the SS-model below. I calculated the following: The gain is defined as \$ A = \frac{v_{out}}{v_{in}} \$, whereas \$ v_{in} = v_1 + v_s \$. I started writing an expression for the current. \$ i = g_m v_1 + \frac{v_{out} - v_s}{r_{ds}}\$ Next I used that expression to solve for \$ v_{out} \$. \$ v_{out} = g_m v_1 R_d+ \frac{R_d}{r_{ds}} v_{out} - \frac{R_d}{r_{ds}} v_{s} \$ \$ v_{out} = (g_m v_1 R_d - \frac{R_d}{r_{ds}} v_{s})/(1 - \frac{R_d}{r_{ds}}) = (g_m v_1 R_d - \frac{R_d}{r_{ds}} v_{in} + \frac{R_d}{r_{ds}} v_{1})/(1 - \frac{R_d}{r_{ds}})\$ And for the gain: \$ v_{out}/v_{in} = (g_m v_1 R_d - \frac{R_d}{r_{ds}} v_{in} + \frac{R_d}{r_{ds}} v_{1})/((1 - \frac{R_d}{r_{ds}})(v_1 + v_s))\$ But now I have again \$ v_s \$ in the denominator, which I cannot substitute for anything useful. I feel like running in circles. Can someone help me out? What am I doing/approaching wrong? simulate this circuit – Schematic created using CircuitLab AI: The current equation you show is the source current: $$i_s=g_mv_1+\dfrac{v_{o}-v_s}{r_{ds}} $$ And \$i_s=\dfrac{v_s}{R_S}\$ So you have: (1) $$\dfrac{v_s}{R_s}=g_mv_1+\dfrac{v_{o}-v_s}{r_{ds}}$$ It's hard for me to follow what you did but you could use the equation that includes the current through \$R_d\$. That is, (2) $$ \dfrac{v_o}{R_d}+g_mv_1+\dfrac{v_o-v_s}{r_{ds}}=0$$ You could use either equation to start the process to find \$\dfrac{v_o}{v_i}\$ If you take equation 1, you can solve for \$v_o\$ to find: $$ v_o=v_s\dfrac{r_{ds}}{R_s}+v_s-g_mv_1r_{ds}$$ As you noted, \$v_i=v_1+v_s\$, so the previous equation becomes (after some algebra): (3) $$v_o=v_s\bigg(\dfrac{r_{ds}+R_S+g_mr_{ds}R_s}{R_s}\bigg)-g_mr_{ds}v_i $$ Everything looks good except for the fact that I still have a \$v_s\$ term and need to get rid of it so that we can solve for \$\dfrac{v_o}{v_i}\$. You can use the second equation (2), solve for \$v_s\$ and plug that in equation (3). Solving for \$v_s\$ in equation (2) results in: $$v_s=\dfrac{(R_d+r_{ds})v_o+g_mv_ir_{ds}R_d}{(g_mr_{ds}+1)R_d}$$ You still have to plug this value of \$v_s\$ in equation (3), but now you will have everything in terms of \$v_o\$ and \$v_i\$ so you should get the result you've been looking for. This is not the only way to do this but it kind of follows the path you took.
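If you would like to check the algebra without grinding through the substitutions by hand, the same two node equations can be handed to a symbolic solver. This is just a sketch using SymPy, keeping the sign conventions used above:

# Symbolic check of the derivation above (SymPy).
import sympy as sp

gm, rds, Rd, Rs, vi, vo, vs = sp.symbols('g_m r_ds R_d R_s v_i v_o v_s')
v1 = vi - vs                                   # since v_in = v_1 + v_s

# KCL at the source node (equation 1) and at the drain node (equation 2)
eq1 = sp.Eq(vs/Rs, gm*v1 + (vo - vs)/rds)
eq2 = sp.Eq(vo/Rd + gm*v1 + (vo - vs)/rds, 0)

sol = sp.solve([eq1, eq2], [vo, vs], dict=True)[0]
gain = sp.simplify(sol[vo] / vi)
print(gain)   # -> -R_d*g_m*r_ds / (R_d + R_s + g_m*R_s*r_ds + r_ds), up to term ordering

Dividing numerator and denominator by \$r_{ds}\$ gives the familiar form \$-g_m R_d / (1 + g_m R_S + (R_d + R_S)/r_{ds})\$, which is what the hand derivation above should also produce once \$v_s\$ is eliminated.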
H: What do I have to learn to make a digital wristwatch? I want to make a digital wristwatch for my school project, so I want to know what things I have to learn in order to make a digital watch by myself. As far as I know, I have to learn about microcontrollers. But what else? Thank you for answering.

AI: "Microcontrollers" is a very general word here. On the microcontroller side you will need to learn about:

- Interrupts (since you will be using a timer to keep track of time)
- An RTC (in case you don't want to use a timer and want a more reliable source for keeping track of time)
- An LCD or LED interface (to display the data on a screen)

On the electronics side, you just need to know how to connect these components properly and how to calculate the resistors in series with the LEDs. The power source (long battery life) is one very important thing you have to take care of (as pointed out by @jonk in the comments). So while designing the circuit, make sure that no component consumes more than the minimum current required. For example, if you plan to use LEDs as the display, then the size of the LEDs and the desired intensity will decide the runtime of the battery. Making the LEDs glow brighter than necessary will drain the battery faster, so you need to find the optimised values that work for you. You can also learn about the power-down features of controllers, which save a lot of power when the watch is not in use: it displays the time only when you want it to, so the LEDs are on only when required. An RTC is a great help for keeping track of time while the microcontroller is in power-down mode. For a basic digital watch, these should work fine.
H: Interfacing an ILI9341 with the PIC18Fxx series. I am new to embedded development, so please bear with me. I have to interface a TFT display (ILI9341) with a PIC18F452; it does not have any controller chip. I want to run a simulation in Proteus before getting my hands dirty with hardware. I am using mikroC PRO for coding the PIC and trying to use the built-in TFT library provided by mikroC (https://download.mikroe.com/documents/compilers/mikroc/pic/help/tft_library.htm), but I am not able to use the library, and the hardware connections at the end of the page don't explain how to connect the pins to the microcontroller. So I want to ask: 1) Is mikroC good enough for interfacing a touchscreen and taking input from the user, or should I use MPLAB X? 2) How do I connect the pins to the PIC? I have read the datasheet but I am not able to figure out whether I should use SPI or the 8-bit bus interface, and where the pins should be connected. AI: "the hardware connections at the end of the page don't explain how to connect the pins to the microcontroller" Why do you say that the schematic at the end of the page doesn't help? It is very clear in my opinion: it shows that the library supports an 8-bit parallel connection between the driver chip and the PIC. "2) How do I connect the pins to the PIC?" If you want to use the library you mention, then you have to connect the ILI9341 using an 8-bit parallel interface, meaning there will be an 8-bit data bus and some control signals. You need to go to Chapter 7.6.3 of the ILI9341 datasheet; there it is very clear how to do this.
H: How to use the Shutdown pin of a switching regulator? What are the systematic/standard methods for incorporating the SHDN (shutdown) pin of a buck/boost IC? Take MAX756 for example (datasheet). Here are its I_q profile: Suppose I'm using an Arduino nano (ATmega328) as the MCU and nRF24l01+ as the transmitter. I'm reading some sensor values via ADC and transferring the data over the RF link. So the system has a sampling frequency (suppose 100Hz). If I want to save battery, the system should sleep most of the time and wake up and transfer the data 100 times a second. All Vccs should come from the boost converter. Now: 1- I should make MCU and nRF sleep but how about the step-up IC? Should it also be shutdown and waked up 100 times a second? I guess I should take the IC's start-up delay into account? Here is its profile: Regarding this profile, I guess the IC wakes up in ~2ms... So the maximum sampling rate would be at best 500Hz? 2- What voltage would be on the OUT pin of MAX756 if it is shutdown? The same as Vin? or is it floating? 3- Is SHDN the same as Enable in different ICs? 4- If there is no way that the main MCU could control the SHDN pin, can I use like a 555 timer solution to control SHDN instead of an auxiliary tiny PIC MCU just for controlling the step-up IC? Because using a separate MCU just to control when an IC should shutdown seems to me rather an overdesign solution...Although if it is used in industry I have no problems then Thank you very much AI: 1- I should make MCU and nRF sleep but how about the step-up IC? Should it also be shutdown and waked up 100 times a second? No. Startup times will make that impractical. 2- What voltage would be on the OUT pin of MAX756 if it is shutdown? The same as Vin? or is it floating? 0v (turned off) 3- Is SHDN the same as Enable in different ICs? Yes, but the opposite label. 4- If there is no way that the main MCU could control the SHDN pin, can I use like a 555 timer solution to control SHDN instead of an auxiliary tiny PIC MCU just for controlling the step-up IC? Because using a separate MCU just to control when an IC should shutdown seems to me rather an overdesign solution...Although if it is used in industry I have no problems then If you power your MCU from the regulator and the MCU controls the regulator, then when the MCU turns the regulator off it will be committing suicide. No way to turn itself back on, since there is no power to run the MCU. It is more normal to power different sections of your circuit from different regulators and turn them on/off as you need them. You need to take the startup time of the regulator into account (most have a "power good" pin to see when they have started up properly). If that takes longer than the sleeping time then switching off while sleeping is not possible.
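To see why duty-cycling the regulator at 100 Hz is impractical, it helps to budget the awake time per sample. The roughly 2 ms regulator start-up comes from the plot in the question; the MCU and radio figures below are placeholder assumptions, so substitute numbers from your own datasheets:

# Rough timing budget for duty-cycling the regulator at a 100 Hz sample rate.
# Only the ~2 ms regulator start-up is taken from the question; the rest are assumptions.
sample_rate = 100            # Hz
period = 1.0 / sample_rate   # 10 ms per sample

t_reg_startup = 2e-3         # regulator start-up time (from the scope plot)
t_mcu_wake    = 0.5e-3       # assumed MCU wake-up + ADC sampling time
t_radio       = 1.0e-3       # assumed nRF24L01+ power-up + packet time

t_awake = t_reg_startup + t_mcu_wake + t_radio
print(f"awake {t_awake*1e3:.1f} ms of {period*1e3:.1f} ms "
      f"-> duty cycle {100*t_awake/period:.0f} %")

Spending roughly a third of every 10 ms period just waiting for the rail to come up wipes out most of the saving, which is why the usual approach is to leave the regulator running and only sleep the MCU and radio, or to power sections from separate regulators as described above.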
H: Question about the 3/~5 output-select pin of a boost IC. I want to power the Vcc of an Arduino Nano and an nRF24L01+ from a MAX756 step-up DC-DC converter (datasheet). I'm using a single AA battery (1.5V). There is a 3/5(bar) pin to select whether the output is 3.3V or 5V. The pin recognises at most 0.4V as low and at least 1.6V as high. Now, if I connect that pin to ground it will produce 5V, but what should the pin voltage be if I want 3.3V? 1) How can I select 3.3V out if I'm only using a 1.5V battery on the input? Should the pin be connected to OUT if I want 3.3V? 2) If so, how does the IC even work? It first wants to know what voltage to produce at OUT, and therefore checks the 3/~5 pin; but if that pin is connected to OUT, isn't this a chicken-and-egg situation? 3) The nRF24L01+ is very sensitive about its VCC; it should never see 5V. Would connecting OUT to the 3/~5 pin create a transient 5V at OUT and burn the RF module? 4) What is meant by this line ("bootstrapped"?) in the datasheet? "The device is internally bootstrapped, with power derived from the output voltage (via OUT)." Thank you very much. AI: The extract of the table in your question describes two pins. One is the shutdown pin and the other is the output voltage select pin (pin 2). Read the data sheet and look at the pin-out and all will become clear. Connect pin 2 to the output to select 3.3 volts. It seems that you should use a 1 Mohm resistor in series with the pin when connecting it to the output.
H: Mosfet as switch for high output current I'm working on an android controlled input/output/gauges manager for cars based on a pic24 micro controller. I'm really not an electronics expert so I had someone working on the schematics but he has no time anymore. It's mostly done and functional but I'm having trouble with the output schematics. The idea is to be able to drive various loads (leds, solenoids, dc motors, etc) and turn them on/off from the pic micro controller. Pin43 on the schematic is a pic pin that is 0v-3.3v, Vin is 12V and Pos_Out1 is connected to a load that is grounded. The issue I'm having is that testing it with a light for example, there is always current even when the pic pin is floating or grounded. When I send 3.3v on the pic pin, the light gets brighter so more current flows but there should be no current flowing when the pin is grounded or floating. Is something wrong in the schematic? Thank you for your time! AI: The MOSFET is shown connected incorrectly in the schematic- the body diode conducts so you get 11.3V out with 12V in, and when the transistor turns on you get 12V out with 12V in. This kind of inverse operation is used deliberately sometimes (for example in the classic reverse polarity protection circuit as shown below- not related directly to your application, of course). The 1M resistor is too large to allow the MOSFET to switch quickly so you will be unnecessarily stressing the MOSFET when it switches off, however that may save its life since there is no protection against inductive load flyback (a case where two wrongs may make a sorta-right) and there is no gate protection for the MOSFET in case of typical automotive transients. I'm not going to correct all those issues for you in this answer, there are plenty of examples of good design of high-side switches for automotive applications. It needs a few more parts- a couple of diodes and a resistor at a minimum- and a bit better choice of value for the pullup to be acceptable. If you don't fix those problems you will have failures caused by inductive loads. There is also a potential brownout issue that you should address at a system level if nowhere else- think about what happens during cranking or with a dead-ish battery- can the MOSFET go linear and burn out.
H: Hacking a DSLR battery to power the camera from a power supply. I would like to power my DSLR from a power supply for long captures. I know the voltage and inner structure of the official batteries: the battery has 2x 3.7V cells inside plus a circuit. People who have worked on this report that the battery's circuit is there to protect against short circuits or excessive current draw. The circuit also provides authentication and an ID to the camera; if this is not provided, the camera seems to detect it and displays an error message. What I am thinking of doing now is using the manufacturer's own charger to provide the power while the battery is still attached. That is, the power supply would simultaneously "charge" the battery and, in parallel, supply power to the camera. The authentication pin can this way stay connected to the camera. I don't want to kill my camera, so I am asking: can this go wrong in any way? PS: I will keep the actual battery outside (for the authentication pin), while a 3D-modelled/printed, power-supply-fed fake battery will go into the camera. Plan (blue line: authentication pin): Edit: I forgot to mention that Nikon sells an adapter that fits into the battery compartment and powers the camera from an external supply. However, >$45 is too expensive for a dumb power supply. If I can 3D-print my part, then I can have the same thing for <$5. This is also a matter of principle.

AI: There are many ways this could "go wrong". It almost certainly violates the terms of the warranty on the camera. The voltage of the charger may be out of tolerance for the camera (obviously, the charger voltage must be significantly higher than the terminal voltage of the battery in order to accomplish its function). The charger is designed to safely charge a specific battery; the load of the camera will almost certainly confuse the charger's internal logic. DSLRs are notorious for drawing a huge spike of current when a picture is taken. The charger may not be capable of supplying this current, so the terminal voltage will sag, possibly messing up the camera's logic. I have built external power supplies for DSLRs (for aerial photography). Trust me, they are not just "dumb power supplies": tight voltage regulation combined with high peak-current capability makes them non-trivial to design.
H: How to determine whether a system is stable without knowing the input? I was doing some exercises for an upcoming exam and ran into this. I am not sure how to find out whether the system is stable without knowing the actual input (the Routh array is the only method I'm familiar with). (The answer provided is a.) Can someone explain briefly what needs to be done? Thank you in advance!

AI: This looks like homework, so here is the outline. An unstable system produces an ever-growing output for essentially any excitation (other than a possible stabilising input generated by an external controller), so stability does not depend on knowing the input; answer (c) is therefore false from the start, and you only have to decide between (a) and (b). The procedure:

- Laplace transform the left and right sides, ignoring the initial-value terms
- replace the right side with 1 (it makes no difference what the input is)
- solve for Y(s); you get a rational function of s
- find the poles of Y(s) (the values of s that make Y(s) undefined due to division by zero)
- the system is stable if all poles are in the left half of the complex s-plane

Purely imaginary poles are a limit case: the output keeps oscillating after some excitation has occurred, but its amplitude does not grow. You must check whether that counts as stable or unstable under the definitions used in your course.
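If you would rather check the pole locations numerically than build a Routh array by hand, the last three steps take only a few lines. The polynomial below is an arbitrary example, not the one from your exercise; substitute the characteristic polynomial of your own system:

# Pole check for a hypothetical Y(s) = 1 / (s^3 + 2s^2 + 3s + 5)
# (the coefficients are made up for illustration only).
import numpy as np

den = [1, 2, 3, 5]                 # coefficients of the characteristic polynomial
poles = np.roots(den)
print(poles)
print("stable" if np.all(poles.real < 0) else "not stable")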
H: SST 39VF1602 - NOR or NAND? I have this flash chip, and I cannot figure out if it is a NOR or a NAND flash chip. Here is the datasheet. From the capacity of the chip (2M), I am assuming that it is a NOR chip, but I am not sure. How can you tell the type of this chip? AI: It's implemented as a single array of parallel addressed and accessed memory, with conventional address bus A[n:0], data bus D[15:0] and control bus (/OE, /WE, /CE). It is a NOR flash.
H: What is sensor radius? The manual for the BNO055 (link below) has an attribute for the sensor labelled "radius" on page 33. The radius takes up 2 bytes of memory, the MSB and LSB. This already confuses me, as MSB and LSB usually refer to bits, but each register is a whole byte. Continuing, the radius of the sensor has a property called the range of the radius. The range is supposedly unitless, and is equal to +/- 1000 × LSB. I have searched both the document and the internet, yielding nothing but data on proximity sensors. What is a sensor's radius, and what do the MSB and LSB represent? Why do the SBs take up entire bytes? What does the datasheet mean when it says +/- 1000 LSB for the value of the radius's range? Sheet: https://ae-bst.resource.bosch.com/media/_tech/media/datasheets/BST_BNO055_DS000_14.pdf AI: MSB means either 'most significant bit' or 'most significant byte' depending on context. Where the range is given as +/- 1000 LSB, it obviously means least significant bits. Where the bytes of memory are being identified, it means bytes. The radius calibration is the distance between the axis of rotation and the active point of the sensor. See US3470730 for a calibration method. The range, +/- 1000, gives the maximum and minimum values that this parameter can be set to, or be interpreted correctly, notwithstanding that 2 bytes can hold numbers from -32768 to +32767.
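To make the byte layout concrete: the LSB register holds the low byte and the MSB register holds the high byte of one signed 16-bit number, and the ±1000 range applies to that combined value. A minimal sketch (the example bytes are chosen to land exactly on the -1000 endpoint):

# Combine the two radius registers into one signed 16-bit value.
def radius_from_bytes(lsb_byte: int, msb_byte: int) -> int:
    """Combine the LSB and MSB register bytes into a signed 16-bit radius value."""
    return int.from_bytes(bytes([lsb_byte, msb_byte]), byteorder='little', signed=True)

print(radius_from_bytes(0x18, 0xFC))   # -> -1000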
H: How to check if an IC is fried? I was using my 74173 (d-type register) today and accidentally attached the vcc pin to ground and the ground pin to 5v. I didn't realize this and turned on the power. A couple seconds after turning it on smoke started to come out of the IC so I turned it off. I touched the IC a couple seconds after turning off the power and it was so hot it burned my finger a little. What I am wondering is what is the best way to check if this IC is fried? I could plug it in and see if it seems like it is behaving normal, but I am afraid that even if it works in my tests it may still have an edge case that it is broken for. Basically I want to know if there is any way to check if the IC is 100% functional still instead of just making sure it still works with the cases you test it. AI: The chip is dead. As you said yourself, it got hot enough to burn your finger and also burn the epoxy case (hence smoke), both of which mean it has quite literally fried. In general, the answer to the question "is there a way to check if an IC with unknown condition is 100% functional?" is simply maybe. The only way to test an IC is to try it, and if it behaves within the manufacturers specifications, then it is most likely functional - though it is not guaranteed to be. The only way to be more sure (still not certain though) that you have a functional IC is to buy a new one. A new chip is much more likely to be functional than one of unknown condition.
H: If I have 5% 100k resistors but wanted 1% precision, can I use 4 of them in series parallel to improve on the 5% precision? I made an error on ordering and am receiving a shipment of 5% 100K resistors when I wanted 1%. This is for a voltage divider circuit. Would it help improve the precision if I put two each in parallel then put them in series to achieve the 100K? Or would the extra parts be worse for the circuit than simply using a DMM to find the most accurate one and use that? Please excuse me if this sounds like a dumb question. The order was wrong. It is a simple case of the computer doing what I told it to do instead of what I wanted. (Old joke, I know). Thanks in advance for your answer(s). AI: No, the precision remains the same regardless of the configuration. You're stuck with 5%, unless you reorder the correct parts, or you test them with a reliable DMM. I have to say that the last time I tried this, all the resistors in the batch had almost precisely the same resistance though (I can't remember the value, but say it was nominally 1K, they were all 992 ohms or something like that). If you think about it, it has to be this way. If you could create higher precision from combining lower quality components, you wouldn't need the high precision ones in the first place...
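If you want to convince yourself, you can enumerate the corner cases of the series-parallel network. The sketch below assumes each resistor can independently sit anywhere in its ±5 % band:

# Worst-case tolerance of a series-parallel combination of four 5 % resistors.
import itertools

R_nom, tol = 100e3, 0.05
extremes = [R_nom * (1 - tol), R_nom * (1 + tol)]

def series_parallel(r1, r2, r3, r4):
    # two parallel pairs in series, nominally 100 k total
    return (r1 * r2) / (r1 + r2) + (r3 * r4) / (r3 + r4)

values = [series_parallel(*combo) for combo in itertools.product(extremes, repeat=4)]
print(min(values), max(values))   # still 95k ... 105k at the corners

The statistical spread would tighten if the four parts were truly independent, but the guaranteed worst-case tolerance stays at ±5 %, and as noted above parts from the same batch tend to be nearly identical anyway, so there is little to gain.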
H: Damage a phone from intermittent charging? Is it possible to damage a phone by charging it with an intermittent power source? For example, by using piezoelectric transducers in the soles of your shoes. Assuming they're wired to a 5V regulator, will the constant oscillation between 0V and 5V damage the phone in any way? AI: "Is it possible to damage a phone by charging it with an intermittent power source?" Probably not. Lithium batteries have no memory effect and have no problem handling a bit of charging followed by a bit of discharging. "For example, by using piezoelectric transducers in the soles of your shoes." Now, this is interesting. If you somehow manage to create a 5V supply with enough current, then the phone will start charging. Since the power generated by these piezo elements will be ridiculously small, it is highly likely that as soon as the phone starts drawing current to charge itself, your device's output voltage will collapse and the phone won't charge at all. However, the screen will light up for a few seconds and display "charging...", and this will most likely use a lot more energy than what your piezo elements can provide. Therefore, the net result will be that the phone's battery discharges quicker (due to the backlight) than if it had not been plugged into the sneaker-charger ;)
H: MCP6002 op amp without Vss Vdd connected? I'm building a circuit from a schematic - K6BEZ Antenna analyser Note there is no connection to pin 8 or 4 (Vss and Vdd) on the uppermost op-amp - as there is in the bottom one. The circuit is not working - there is never any voltage at A0 - leading me to think there is something wrong with the top "half" of the circuit. I have not used op amps before. It strikes me as odd there is no ground connection and no voltage supply to the op amp in this half of the circuit. Is it likely just an omission in the schematic (i.e. should I connect up pin 4 and 8 as in the bottom op-amp), or is this a valid application for an op-amp, and I have some other issue? Cheers in advance. AI: As JRE says, the MCP6002 is a dual op-amp - you only need one MCP6002 to build this circuit. Both op-amps in the package get their power and ground from the same pins. If you are using two MCP6002 packages, then both packages need power and ground connections (but then you're wasting two op-amps.)
H: Unexpectedly high drain-to-source voltage on an N-channel MOSFET in the "on" state. As part of a bigger project, I'm trying to control a simple resistive load (a heating element) with an N-channel power MOSFET. The load runs on two parallel lithium-ion cells, so at around 3.7V. The microcontroller (an Atmel ATtiny85) which I use to control the MOSFET also runs at this voltage and outputs this voltage from its pins. The MOSFET used is a P80NE03L-06. The diagram represents the on state of the MOSFET, with the IC output theoretically at the positive battery voltage, so Vgs should be 3.7 volts, right? The minimum threshold voltage is 1.8V, so I expect it to be fully on at this point. The datasheet indicates that the on-resistance should be under 0.006 ohms and the maximum current is 80A. The problem is that when running my load (17A), the voltage between the drain and source pins is a whopping 0.43V, creating an enormous power loss in the circuit and causing the MOSFET to heat up dramatically. I stumbled upon this answer while trying to find a solution, and it mentions an "on-region characteristics" diagram. The output characteristics of my MOSFET state that Vds should be no higher than 0.25V at 25A and Vgs=4V. Does anyone have any ideas on why the voltage drop is so high in my particular configuration? I must have forgotten something, but according to every MOSFET wiring schematic this is all that is needed.

AI: Pay careful attention to the datasheet. RdsOn = 9 mΩ max at Vgs = 5 V (6 mΩ at Vgs = 10 V), while the gate threshold is specified as:

Symbol    Parameter                Test Conditions          Min.  Typ.  Max.  Unit
VGS(th)   Gate Threshold Voltage   VDS = VGS, ID = 250 µA   1     1.7   2.5   V

Note how small the drain current is at the threshold voltage: Vgs(th) only tells you the device has barely started to conduct, not that it is fully on.

As a rule of thumb, to get good results drive the gate with at least 2x Vgs(th)max, and often 4x Vgs(th)max or more. For low-voltage applications, look for a device whose current rating is well above the application current. Most importantly, if you want a low temperature rise in the switch, use ΔT = Rθja × Pd [°C], or pick an acceptable conduction loss and make RdsOn that fraction of the load resistance (for 5 % loss with your roughly 0.2 Ω load, that means an RdsOn of around 10 mΩ).

Let's assume you used a 5 V logic drive: simulate this circuit – Schematic created using CircuitLab

I suspect you must be using more than 5 V, as the simulated worst case is worse than your measurement.
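Putting numbers on the measured operating point (plain arithmetic from the figures already quoted above; the only datasheet value used is the 9 mΩ worst case at Vgs = 5 V):

# Conduction loss at the measured operating point vs. the datasheet figure.
load_current = 17.0          # A
vds_measured = 0.43          # V, as measured
rds_measured = vds_measured / load_current
p_measured   = vds_measured * load_current

rds_datasheet = 0.009        # ohm, max at Vgs = 5 V
p_datasheet   = rds_datasheet * load_current**2

print(f"measured:  {rds_measured*1e3:.0f} mOhm, {p_measured:.1f} W")
print(f"datasheet: {rds_datasheet*1e3:.0f} mOhm, {p_datasheet:.1f} W")

Roughly 7 W of dissipation in a part that should show a few milliohms of channel resistance is why it runs hot; the remedy is more gate voltage (a gate driver, or a part whose RdsOn is specified at Vgs around 2.5 V), not a bigger heatsink.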
H: How to design for a spectrum analyzer analysis? I am designing a board that I will eventually have to hook up to a tracking spectrum analyzer to tune the pi network on. I have two different circuits. The first goes RF-OUT -> pi network -> U.FL connector -> external antenna, and the second goes RF-OUT -> pi network -> U.FL connector -> on-board antenna. In the board design, I assume that because I am going to use a tracking generator on a spectrum analyzer I need two U.FL connectors per line, so that I can hook up the analyzer. Where would I put them? My instinct is to go RF-OUT -> U.FL connector (for testing) -> pi network -> U.FL connector.

AI: If I were you I would include a stripline directional coupler at the antenna port on the PCB, so you can measure return loss and output power with a PIN-diode peak detector read by a DMM while the radio transmits a swept test pattern. Then you don't need a spectrum analyzer and you have a built-in test method. We call this Design for Testability (DFT), which is a must-have for any design, not an afterthought. You might use this on your prototype. Normally directional couplers (splitters) are used for this. You can choose the coupling factor to minimise loss, for example -20 dB sample ports, so the insertion loss is low. If you are OK with this, look for a microstrip or stripline DC-20. These have 4 ports on the PCB, so you have 2 U.FL ports for the DC-20 and 1 for the external antenna. If you choose to use only 3 of the 4 ports, with the antenna port as the output, then the unused port of the DC-20 is terminated with 50R.

Advice:

- You need a minimum number of accessories: U.FL to SMA to N cables.
- Do a tolerance analysis on your RF PCB design. The dielectric constant has a wide tolerance, as does the track width, and both affect the impedance. Impedance testing at the board shop, using test coupons, costs about $100 or more; it allows them to control your designated track impedances with TDR testing and then tune the track geometry for a given batch of laminate to obtain this result.
- Learn what a directional coupler is and how to measure return loss with it (such as for an antenna) without spending $600; do the research. Semi-rigid coax works best, after calibration, with SMA; learn to make those later.
- If the analyzer has only one port, save the generator response and then normalise it against the DUT response to get the ratio. Since the S11 input impedance may not be 50 ohms, the transfer function will then depend on the source impedance, so the ratios must take this into account using directional couplers (aka splitters).
- Bonus (make or buy?): build loop antennas for near-field EMI noise detection.

ref info: http://www.semtech.com/images/datasheet/rf_design_guidelines_semtech.pdf https://www.everythingrf.com/rf-calculators
H: VHDL process requires multiple clock cycles. I wrote a simple counter in VHDL for a program counter. Everything is done in a process, but what I don't understand is that in the simulation the addition for the program counter is only done at the next clock event, rather than immediately after PCNext has been output. Here is the code as well as the simulation:

LIBRARY ieee;
USE ieee.std_logic_1164.ALL;
USE ieee.numeric_std.ALL;

ENTITY dlatch IS
  PORT (
    Reset, Clock : IN std_logic;
    PC_out : OUT std_logic_vector(31 downto 0)
  );
end dlatch;

ARCHITECTURE d_arch OF dlatch IS
  SIGNAL PC     : std_logic_vector(31 downto 0);
  SIGNAL PCNext : std_logic_vector(31 downto 0);
BEGIN
  PROCESS(Clock, Reset)
  BEGIN
    IF Reset = '1' THEN
      PC <= x"00000000";
    ELSIF Clock'event and Clock = '1' THEN
      PC <= PCNext;
    END IF;
    PCNext <= std_logic_vector(unsigned(PC) + 4);
  END PROCESS;
  PC_out <= PC;
END d_arch;

Do you see how PCNext is only calculated at the falling edge of the clock? Why isn't it calculated immediately after PC <= PCNext?

AI: First of all, both doubts, (1) and (2), are really the same question: (1) the addition for the program counter is only done at the next clock event (here the falling edge) rather than immediately after PCNext has been output, and (2) why isn't PCNext calculated immediately after PC <= PCNext?

Answer: it is because the signal PC is not present in the sensitivity list. As shown in the simulation below, when I add it to the sensitivity list the addition takes place immediately, because PCNext <= std_logic_vector(unsigned(PC) + 4); then effectively behaves concurrently: a change on the signal PC, now in the sensitivity list, invokes the process again. In your original code the updated sum did not appear until the next event on something that was in the sensitivity list, i.e. the next clock edge. I hope this makes the role of the sensitivity list clearer so you can be more careful with it next time.

One more thing you can do is declare PC as a variable instead of a signal and use the blocking assignment :=. This will also update the addition result immediately, and you can learn more about the difference between blocking and non-blocking assignment along the way.

edit: As Dave Tweed already said, moving the statement PCNext <= std_logic_vector(unsigned(PC) + 4); outside the process block altogether will also work.
H: What is the ideal number of contacts to place on the NMOS and PMOS when drawing a layout in Cadence Virtuoso? I am trying to draw a layout for an inverter and I am not sure how many contacts should be placed in the diffusion regions of the NMOS and PMOS, and what the reason is behind selecting that particular number of contacts. I am using the Cadence Virtuoso tool for layout editing. AI: It is usually a matter of how many contacts you can fit without violating DRC rules. For a small inverter using minimum-size transistors, one or two contacts per drain/source/gate is often enough, but manufacturing reliability might improve when 2 contacts are used. That might increase the size of the cell, though, so in many logic gate libraries only one contact is used. There are many compromises to consider when laying out circuits, including logic circuits. The "ideal" layout does not exist; likewise there is no "ideal" number of contacts to place. It is always a compromise, and only experienced layouters know what makes a good one.
H: VHDL port declaration with different sizes. I'm writing a VHDL model and I'm stuck on a problem with a port declaration. Let's say that I have an entity entityA that instantiates N entityB components. Now, entityB has a port, out, of size M bits, and M can vary between instances, so out is std_logic_vector(M-1 downto 0). These ports need to be propagated outside through entityA. If all the entityB components had the same port size, say FIX_M, the solution would be to use a std_logic_vector(N*FIX_M-1 downto 0) in entityA. My problem is that the size M can vary. The first solution that comes to my mind is to do the same thing but with MAX_M instead of M, but in that case a lot of pins would be left unused (and for inputs that is a problem, right?). Do you have a better idea? Thank you in advance.

AI: Use an array of sizes to specify the individual widths of the EntityB instances. The port of EntityA gets its total width from a sum function, and the matching bits for each instance are sliced out with high and low index functions. Note that integer_vector is predefined from VHDL-2008 onwards; for older tools, declare a similar array type in a package.

Global function (put it in a package so it is visible to the entity declaration):

function sum(SIZES : integer_vector) return integer is
  variable count : integer := 0;
begin
  for i in SIZES'range loop
    count := count + SIZES(i);
  end loop;
  return count;
end function;

Example:

entity EntityA is
  generic (
    SIZES : integer_vector
  );
  port (
    data : out std_logic_vector(sum(SIZES) - 1 downto 0)
  );
end entity;

architecture rtl of EntityA is

  -- highest bit index of slice number idx (assumes SIZES is indexed from 0)
  function high(SIZES : integer_vector; idx : natural) return integer is
    variable pos : integer := 0;
  begin
    for i in 0 to idx loop
      pos := pos + SIZES(i);
    end loop;
    return pos - 1;
  end function;

  -- lowest bit index of slice number idx
  function low(SIZES : integer_vector; idx : natural) return integer is
    variable pos : integer := 0;
  begin
    for i in 0 to idx - 1 loop
      pos := pos + SIZES(i);
    end loop;
    return pos;
  end function;

begin

  genB : for i in SIZES'range generate
    instB : entity work.EntityB
      generic map (
        N => SIZES(i)
      )
      port map (
        data => data(high(SIZES, i) downto low(SIZES, i))
      );
  end generate;

end architecture;

Usage:

signal data_out : std_logic_vector(13 downto 0);

ex : entity work.EntityA
  generic map (
    SIZES => (2, 3, 4, 5)
  )
  port map (
    data => data_out
  );
H: How to simulate a MOSFET from a datasheet in ngspice. First of all, I'm pretty new to the simulation side of engineering. I'm using gschem to draw simple circuits and I'm using ngspice from the command line to run the simulation and plot the results. So far I've successfully done a simulation with a simple voltage source and a resistor, just to get used to the basic workflow. The next thing I would like to accomplish is to use a MOSFET in my simulation. This generates the following netlist:

* gnetlist -g spice-sdb -o sim1.ckt sim1.sch
*********************************************************
* Spice file generated by gnetlist                      *
* spice-sdb version 4.28.2007 by SDB --                 *
* provides advanced spice netlisting capability.        *
* Documentation at http://www.brorson.com/gEDA/SPICE/   *
*********************************************************
*============== Begin SPICE netlist of main design ============
M1 drain 1 0 unconnected_pin-1 STN2NF10
V1 Vcc 0 DC 12V
R1 drain Vcc 250
.tran 1ms 100ms
V2 1 0 pulse(0 5 0s 2ns 2ns 1ms 10ms)
.end

Now when I try to run this I get the following error:

Error on line 9 : m1 drain 1 0 unconnected_pin-1 stn2nf10
Unable to find definition of model stn2nf10 - default assumed

Which is not really a surprise; after all, how should ngspice know the characteristics of every component in the Digi-Key catalogue? So I understand that I have to specify the characteristics of this MOSFET. This is where I get stuck. I've read the part of the ngspice manual about MOSFETs (p. 127; sorry, not enough reputation to post 2 links: ngspice.sourceforge.net/docs/ngspice-manual.pdf). It says the general form to define a MOSFET is as follows:

MXXXXXXX nd ng ns nb mname <m=val> <l=val> <w=val>
+ <ad=val> <as=val> <pd=val> <ps=val> <nrd=val>
+ <nrs=val> <off> <ic=vds,vgs,vbs> <temp=t>

The 'm' parameter is for multiplicity; I understand this one. Now the 'l' and 'w' parameters are the length and width of the channel. How on earth should I know these? They are not in the datasheet for sure. Same for the 'ad' and 'as' parameters, the drain and source 'diffusions'. Why can't we just enter the characteristics as mentioned in the datasheet (STN2NF10): Vdss, Idss, Igss, Vgs, Rds, etc.? I believe there is a good reason why it's the way it is; I'm probably just missing or misunderstanding something. The question: how do I simulate a circuit containing a MOSFET, and how do I transform the values in the MOSFET's datasheet into ngspice? PS: The same question of course applies to other parts than MOSFETs; I would just like to stick to MOSFETs for now, in order to keep things simple and practical.

AI: "How do I simulate a circuit containing a MOSFET, and transform the values in the datasheet of the MOSFET into ngspice?" First of all, pick a MOSFET from within your simulator that is already present and supported, and try this out to make sure everything seems to work. Second, double-click (or whatever mechanism is needed) to open that MOSFET part so you can inspect the parameters; you should be able to edit them. Thirdly, forget about trying to convert datasheet values to SPICE parameters; just go and look for a ready-made model of the device you want to use and change the values by editing them. Sometimes models are printed in datasheets, but more often than not you have to dig around. Some simulators will allow you to paste the whole ASCII model text into a special area, and this will overwrite the model parameters contained in the device you chose.
I don't use ngspice so I can only guess at this bit and what facilities it has. The spice model for the STN2NF10 is found on this page:

*****************************************************
*        Model Generated by STmicroelectronics      *
*                 All Rights Reserved               *
*        Commercial Use or Resale Restricted        *
*****************************************************
* CREATION DATES: 21-04-2006                        *
*                                                   *
*           POWER MOSFET Model (level 3)            *
*                                                   *
*           EXTERNAL PINS DESCRIPTION:              *
*                                                   *
*                  PIN 1 -> Drain                   *
*                  PIN 2 -> Gate                    *
*                  PIN 3 -> Source                  *
*                                                   *
*                     ****C****                     *
*              **********************               *
*    ***************************************        *
*  PARAMETER MODELS EXTRACTED FROM MEASURED DATA    *
*            <<<<<<<<<<<>>>>>>>>>>>                 *
*    ***************************************        *
*  THIS MODEL CAN BE USED AT TEMPERATURE: 25 °C     *
*                                                   *
*****************************************************
* MODELLING FOR STN2NF10
.SUBCKT STN2NF10 1 2 3
LG 2 4 7.5E-09
LS 12 3 7.5E-09
LD 6 1 4.5E-09
RG 4 5 4.001
RS 9 12 0.325E-02
RD 7 6 0.142
RJ 8 7 0.445E-03
CGS 5 9 0.419E-09
CGD 7 10 0.467E-09
CK 11 7 0.307E-10
DGD 11 7 DGD
DBS 12 6 DBS
DBD 9 7 DBD
MOS 13 5 9 9 MOS L=1u W=1u
E1 10 5 101 0 1
E2 11 5 102 0 1
E3 8 13 POLY(2) 6 8 6 12 0 0 0 0 0.321
G1 0 100 7 5 1u
D1 100 101 DID
D2 102 100 DID
R1 101 0 1MEG
R2 102 0 1MEG
.ENDS STN2NF10
.MODEL MOS NMOS
+ LEVEL = 3
+ VTO = 4.184
+ PHI = 0.827
+ IS = 0.1E-12
+ JS = 0
+ THETA = 0.995
+ KP = 15.084
+ ETA = 0.199E-02
.MODEL DGD D
+ IS = 0.1E-12
+ CJO = 0.171E-10
+ VJ = 0.754
+ M = 0.367
.MODEL DBD D
+ IS = 0.1E-12
+ CJO = 0.202E-10
+ VJ = 0.755
+ M = 0.335
.MODEL DBS D
+ IS = 0.1E-12
+ BV = 117
+ N = 1
+ TT = 0.699E-07
+ RS = 0.505E-02
.MODEL DID D
+ IS = 0.01E-12
+ RS = 0
+ BV = 127
* END OF MODELLING
H: EEPROM values corrupted, but only sometimes. I am using a 25LC640A EEPROM with a 32-bit microcontroller. This EEPROM can store 8 KB of data and uses an SPI serial interface. In my application, during every power-down sequence the MCU writes a block of data into the EEPROM and then shuts down. This works fine; I have tested it as many times as possible with no issues with the EEPROM data. Issue: but sometimes the EEPROM values get corrupted, and my code throws a checksum error after the MCU is turned on. I have checked the code thoroughly and I am not able to find the bug, since it works fine most of the time. Can anyone suggest what could cause this data corruption? I have attached the schematic as well.

AI: What you are trying to do sounds reasonable, but this hints at something to look at more carefully: "during every power-down sequence the MCU writes a block of data into the EEPROM and then shuts down." You have to be careful that the EEPROM is done with its last write before you remove power. EEPROMs can take milliseconds to do writes; that's a very long time for even a modest microcontroller. During normal operation it's actually good to initiate the write, then let the chip go off and take its time while the software does something else. The software then only blocks waiting for the EEPROM to finish the write if another EEPROM operation is requested. In many cases, the write and other software operations overlap nicely. Even if another operation is requested immediately, that's never any worse than waiting explicitly after each write. If you are using layered routines you didn't write, this may be how they work under the hood. In that case, the final shutdown needs to be handled differently: you make the call to do the last write, but then you must call something else that explicitly waits until the write is finished. Powering down before then will cause corruption.
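To make the "explicitly wait" step concrete, here is a minimal sketch in Python-style pseudocode (your real code will of course be C on the MCU). It assumes the standard 25-series command set, where opcode 0x05 reads the STATUS register and bit 0 is the Write-In-Progress flag; verify both against the 25LC640A datasheet. spi_transfer() stands in for whatever full-duplex transfer routine your SPI driver provides:

# Sketch: wait until the EEPROM has finished its last write before removing power.
import time

RDSR = 0x05          # Read STATUS register opcode (typical for 25-series parts)
WIP  = 0x01          # Write-In-Progress bit in the status byte

def wait_for_write_complete(spi_transfer, timeout_s=0.02):
    """Poll the status register until WIP clears (page writes take a few ms)."""
    deadline = time.monotonic() + timeout_s
    while time.monotonic() < deadline:
        status = spi_transfer([RDSR, 0x00])[1]   # second byte clocked out is STATUS
        if not (status & WIP):
            return True
    return False     # still busy -- do NOT cut power yet

Only after this returns True is it safe to let the supply collapse. If your shutdown path cannot guarantee those few milliseconds of hold-up after the last write is issued, that alone would explain the occasional corruption.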
H: Implementing a 3-variable boolean function with 2:1 multiplexers. Recently I was given a task by my lecturer to implement \$f=\sum(1, 3, 4, 5, 7)\$ using "2-bit multiplexers". That's it. I assume he meant 2:1 MUXes. I've come up with the following K-map...

+------+---+---+
| AB/C | 0 | 1 |
+------+---+---+
|  00  | 0 | 1 |
|  01  | 0 | 1 |
|  11  | 0 | 1 |
|  10  | 1 | 1 |
+------+---+---+

\$A\overline B + C\$

... and the following two schematics: 1) 2) Since the exercise's description mentions multiplexers in the plural, is the second example OK? And if the description had said to use only one MUX, would the first example be OK?

AI: Both are OK. You have saved four 2:1 muxes compared with the canonical 8-input-multiplexer solution.
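If you want to double-check the simplification behind both schematics, a brute-force test of \$A\overline B + C\$ against the minterm list takes only a few lines (Python, with A as the most significant bit, as in your K-map):

# Brute-force check that f = A*notB + C reproduces the minterm list (1, 3, 4, 5, 7).
def f(a, b, c):
    return (a and not b) or c

minterms = [m for m in range(8) if f((m >> 2) & 1, (m >> 1) & 1, m & 1)]
print(minterms)   # [1, 3, 4, 5, 7]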
H: Arduino analog pin doesn't read correctly without connecting to the GND and Vcc of the Arduino. I'm using an LDR (photoresistor) with an Arduino and I have the following two cases. Case 1: I connect the input of the LDR to the Vcc of the Arduino, the ground of the LDR to the ground of the Arduino, and the output of the LDR to A0 (analog pin) of the Arduino, and it reads the values correctly (like the figure below). Case 2: I make exactly the same wiring, but instead of taking Vcc from the Arduino I take it from an external power supply (also providing 5V), and I connect the ground of the LDR to the ground of the power supply. So the input and ground are connected externally and the output pin is connected to A0. In this case the Arduino reads the values incorrectly! So what's wrong with case 2; why is it not working? Note: in case 2 I measured the voltage between the output of the LDR and ground and it gives a correct value, so why does connecting that output to the Arduino give incorrect values? In other words: why do I need to connect the input and ground of the LDR to the Arduino and not to an external power supply? AI: My answer is actually transferred from my comment above, so now I know it is correct. However, it was not clear from the original question how the connections were made in the second case with the "incorrect" measurements. So my initial assumption, which was later confirmed, was that the GND of the external power supply you used is not connected to the GND of the Arduino. That means the GND of the LDR circuit is not connected to the GND of the ADC of the Arduino. As a consequence, there is most probably some fluctuating voltage difference between the two GNDs. So you are basically measuring a voltage while the ADC doesn't have the same reference (GND) as the voltage you want to measure, which leads to the incorrect readings. The solution is to use the same GND for both the LDR circuit and the Arduino.
H: Remove switch off 'delay' from a Solid State Relay Board I'm switching audio signals (line level) with a solid state relay board (Keyes - 8x Q3MB-202P), but when I switch off one channel, it takes some milliseconds (maybe around 500ms to 2 seconds) to really stop the signal flow. I believe this happens due to either a capacitor or an inductor (I think it's an inductor) that's in the board, so I was wondering if I could remove (unsolder) one of these components from the board and make the switching off more precise. I don't have an individual piece of a Q3MB-202P, but I believe this wouldn't happen with the individual piece. This is probably caused by the board circuitry, which does that for integrating LEDs or for some safety reasons which I don't mind sacrificing. I couldn't find a schematics of this board, but probably someone will know by the picture. Attached are pictures of the board. What I have referred as inductor is the green piece (F), but I'm not sure if I'm right. In this piece it's also written 2A (or 2AE, it's not very legible). Note 1: the reason I'm switching signals with Solid State Relays is because I need low resistance. I'm avoiding using a IC (like 4066) as it has too much resistance for my audio signals and then the sound is too quiet. Note 2: I don't want to use a mechanical relay to avoid the click noise. Note 3: I don't have individual pieces of Solid State Relays (Q3MB-202P) and I don't have the time to buy them now. I need to sort out this problem quickly for a project as the deadline is in 3 days. I really appreciate any help!!! Thanks! AI: The green item is a fuse, not an inductor. There is nothing apparent on the board to delay other than what is inside the SSR itself. You will not get good results with this type of (triac) SSR. You could try to find a compatible MOSFET-output SSR and replace the triac SSRs in your PCB or just start from scratch. It is better to switch audio before an amplifier than to try to switch at the speakers, if that is what you are doing.
H: Analyzing a transistor relay driver I found this circuit for a relay driver. It is similar to others I have seen, though it appears to have two extra parts that others do not have. I mostly understand how it works, but I would like to understand it thoroughly, so I have some questions in my attempt to analyze it. This is not homework for a class. Rather, I am attempting to educate myself. My questions come after studying tutorials online, but I still have questions. I usually see an input-base resistor around 4K7. I think it determines the current flowing from base to emitter when an input is applied. I think it needs to be high enough for the transistor to saturate, and no more than the maximum CMOS load, which I think is 20mA. I want to understand why the resistor is 4K7. My analysis: The Vbe(sat) for the 2N3904 is about 0.7v. If the input voltage is 12v, the load on the input voltage is (12 - 0.7) / 4700, or about 2.3mA, ok for CMOS. I have a 12v relay that has a DC coil resistance measuring 392R, so at (12 - 0.7)v, that means about 29mA to actuate it (too much for a CMOS load without a driver). The 2N3904 has a beta / hFE of 300 (max), so the minimum Ib I need for saturation is 29mA / 300, or 96uA, though it should be higher to be reliable. I think the input base resistor could be as high as (12 - 0.7)v / 150uA, or about 75K, but a lower value would be more reliable for a wider range of loads. With the 4K7 resistor, 2.3mA is the load at 12v input, so a load of (300 * 2.3mA) or 67mA is possible. The 2N3904 can dissipate 200mADC, but for a load that high, the 4K7 resistor would be more like 1K5. Is my analysis correct? This circuit adds a 2K2 resistor from base to emitter (ground), something I do not usually see in other circuits. I think this may change the bias voltage for the transistor, but I don't know why you would do that here. What is the purpose of this resistor, i.e. what problem does it solve? For relay control, I always see a "flywheel diode" across the relay, with anode at transistor collector and cathode at Vcc, as shown in this diagram. I understand that the inductive effect of the relay coil causes a backward voltage spike to occur as the magnetic field collapses, and the diode protects the transistor from too large a reverse spike. I think if I connect a diode across a power supply with anode at Vcc and no current limiting resistor, it will burn out. The backward spike would seem to do the same. Can you explain why it does not? Is it because the spike is too brief to burn out the diode? Most circuits have only the one diode across the relay, but some also have one from the emitter to the collector. This circuit has one (it is not certain that it is actually connected to the collector, as there is no junction dot in the diagram there). I have a feeling there is another spike, perhaps on power on, but I don't know why that would happen, and I would like to understand. What problem does this second diode solve? I have a new oscilloscope, but I don't know how to apply it to this circuit. In particular, if there is a transient across the relay when it turns off, do I simply connect the scope across the relay to see it? If the answer to #4 is "another spike," where do I connect the scope to see it, across C and E? Do I have to remove the second diode to see it? Do you think that this is a suitable circuit using best practices? If not, please critique. AI: Answering your questions in order- If you look at beta(min) it is specified with a rather large Vce. 
Usually we want to ensure saturation in the transistor. In the case of 12V we might be able to live with a 1V drop across the transistor but it will compromise the relay life a bit and is not good practice. See the diagrams in this [datasheet]. The Vce(sat) is guaranteed at a forced beta of 10. In other words, you give the base 10mA to get the collector to switch 100mA. 20 is probably safe here, so let's use that. With a 4V input, the 4.7K will pass (4V - 0.7V)/4.7K = 0.7mA. The 2.2K resistor eats almost half that, leaving 380uA for the base. Using the 20:1 ratio, the collector can switch 7.6mA, which isn't much of a relay coil. The 2.2K resistor prevents AC picked up by the input lead (or DC leakage) from turning the transistor on partially if the lead is open circuit. It also could prevent damage to the transistor from applied negative voltage. The transistor is guaranteed to withstand Veb of 5V maximum, so you could apply more than -15.6V without violating that limit. The 'spike' does not exceed the relay coil operating current (it starts there and tails off) and it is brief (milliseconds). It does not stress the diode much at all. At the risk of sounding snarky, I don't see any problem it solves other than an excess of diodes in the storeroom. It doesn't do anything at all of value. I suppose it burns out along with the other diode if you apply negative voltage rather than +12, but it would be much more effective directly across the supply. If you connect the 'scope ground to the circuit ground, probe to the transistor collector, and turn the relay off you should see the transistor collector voltage rise to one diode drop above the 12V supply briefly then settle back to the supply voltage. That is the diode conducting. If you put a resistor in series with the diode equal to the coil resistance, the voltage will rise to about 24V then (more quickly) tail off to +12. I can't critique it without relay specifications and specs on how the circuit is supposed to perform. Looks like marginal base current, as previously mentioned, and a useless diode. Another possible issue, not related to the reliability of the drive circuit, is that clamping with a diode slows the relay release and thus shortens the life of the relay somewhat. Using a diode plus resistor, zener in series with a diode, or zener across the transistor (TVS may take the place of a Zener diode) can allow the relay coil voltage to rise higher, hastening the collapse of the magnetic field (but harder on the transistor). If you read the datasheets for relays carefully they usually specify the life without diode snubbing. Do not do this if the transistor is marginal in safe operating area (SOA)- if you don't know what that is or it is not specified (preferably it is), pick a transistor good for something like 600mA to switch a 150mA relay.
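For anyone rerunning the arithmetic in the second point above with their own supply voltage or resistor values, here it is as a few lines of Python (same assumptions as above: Vbe of about 0.7 V and a forced beta of 20 for hard saturation):

# Base-drive estimate for the 4.7 k / 2.2 k arrangement with a 4 V input.
v_in, v_be = 4.0, 0.7
r_base, r_shunt = 4.7e3, 2.2e3

i_total = (v_in - v_be) / r_base      # current through the 4.7 k
i_shunt = v_be / r_shunt              # lost in the 2.2 k base-emitter resistor
i_base  = i_total - i_shunt
i_collector_max = i_base * 20         # forced beta of ~20

print(f"I_base ~ {i_base*1e6:.0f} uA, usable collector current ~ {i_collector_max*1e3:.1f} mA")

This lands on the roughly 380 µA and 7.6 mA quoted above (rounding aside). With a 4-5 V logic drive and the 29 mA relay from the question you would want around 1.5 mA into the base, which is why item 6 calls the base current marginal; with the full 12 V drive assumed in the question, the 4.7 kΩ is comfortable.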
H: How does an output switching from HI or LO to HI-Z affect an input in CMOS? Let's say we have a tri-state buffer output connected to an inverter input, implemented in 7400 series CMOS chips. If the buffer output is HI, the inverter output is LO. If the buffer output is LO, the inverter output is HI. But what happens if the buffer output goes tri-stated (HI-Z) and we are not using a pull-up or pull-down resistor? Is the inverter input now floating and its output can vary? Or will the inverter output be stable and hold its logic level? Do TTL chips behave differently? AI: The Hi-Z output and thus the inverter input will reach a voltage level which will depend on several factors. The node will have a very high impedance to ground so any noise/charge/leakage currents it picks up can affect the voltage. If there is a clock line running nearby and that line is capacitively coupling to the node, the inverter could pick up that signal and give a clock at the output. Without much external disturbances like clocks and noise, the leakage currents from the NMOS and PMOS transistors in the tri-state buffer will work against eachother. If for example the PMOS transistors leak slightly more than the NMOS transistors then chances are the Hi-Z node will go up in voltage and eventually reach the supply voltage. But at a different temperature or same model chip but from a different manufacturer or even same model from the same manufacturer but from a different batch of chips the opposite could also happen (NMOS leaking more I mean). This is unpredictable so we want to avoid that always ! Anyway, it is bad practice to leave a CMOS gate input floating like that. So you'd never find this situation in a properly designed circuit. What most circuit designers do is define the voltage in Hi-Z mode by using a pull down or pull up resistor. In TTL a Hi-Z is usually interpreted as 1 (one). But again, this is bad practice and it is better design practice to define all inputs properly just like in CMOS logic.
H: How should I properlly draw on a schematic a shunt going through the center of a toroid coil? So I'm trying to reverse engineer the schematic for an Avair AV-200CN SWR bridge. Two toroid coils, used for sensing, are mounted on a shunt, so that the shunt goes through the center of the coils. I'm not 100% sure how to draw such an arrangement on a schematic. Was initially thinking of drawing it as a transformer with two secondaries, since the shunt acts as a one-turn primary, but I'm not sure if there's a better way to draw it, since I'd like to minimize the amount of thinking needed to understand the schematic. Here's the picture of the circuit: AI: Here's one example where a strict adherence to schematic symbols can cause confusion - that single-turn "winding" can easily be done wrong. Many SWR schematics do a pictorial representation of a toroid that helps make clear the single-turn nature (one wire through the toroid's centre) of the current transformer: Perhaps not clear is the phasing of the multi-turn winding of the transformer - the "dot" convention is easy to get wrong. It might be easier to leave the labels of input connector and output connector off until it is built. Then discover which direction gives a null with a dummy load in place.
H: Do I need to use capacitors with a 5V voltage regulator? I am using a solar panel with a regular 12V charge controller, but instead of using 12V batteries I am planning to use LiPo batteries with a dedicated charger. I know that the best way to step down the voltage would be a buck converter, but I already have an L7805 lying around. Do I need to use a capacitor at the output? I have seen some circuits that used one and some that did not, and both worked fine. If I do need one, how do I determine which one suits the circuit? I am a newbie to electronics. I see that capacitors have both voltage ratings and capacitance ratings. I figure that the voltage rating should be above the 12 volts the charge controller outputs, as otherwise the capacitor would blow up, right? Is its voltage rating relevant? How do I determine the right capacitance? AI: Read the datasheet. It will tell you exactly what you need to know. TI recommends a 0.22 µF capacitor on the input. A capacitor on the output is optional, but adding 0.1 µF there can help in some applications; for a battery charger, this is probably unnecessary. Larger capacitors are not necessary, and may damage the regulator if the input is shorted. (See "Shorting the Regulator Input" on page 13 of the datasheet for details.) That all being said, as mentioned in the comments, a linear regulator is probably a poor choice for this application, as you will end up wasting roughly 60% of the power you gather (7 V out of 12 V) in the regulator. A buck converter or MPPT charger will give you much better results.
H: Purpose of components in op-amp radio circuit I am trying to understand the purpose of a couple of components in a schematic I am looking at: My questions are: 1) What exactly is the 100k resistor attached to the diode doing? 2) Why is the output of the rightmost op-amp fed into a transistor? Could the audio signal not be pulled directly off the output of the op-amp? Thank you for your responses. AI: The \$100\:\textrm{k}\Omega\$ resistor is part of an RC filter (as well as a DC return path). Together with the \$300\:\textrm{pF}\$ capacitor, it filters out the RF that is still present in what passes through the diode detector (smoothing it out) while leaving the relatively low-frequency audio envelope undamaged. Take a look at the RC time constant. Also imagine that node without the resistor present -- all you see then are a couple of capacitors and a diode feeding it; it really needs a DC path added, as well. The last stage uses an emitter follower as a driver. It can source current fine, but it depends on the \$470\:\Omega\$ resistor to ground for pulling down and sinking current. I'd definitely arrange things differently.
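To put a number on that time constant (a quick sketch using only the values already named above): \$\tau = RC = 100\,\text{k}\Omega \times 300\,\text{pF} = 30\,\mu\text{s}\$, i.e. a corner frequency of \$f_c = 1/(2\pi RC) \approx 5.3\,\text{kHz}\$. A carrier at hundreds of kilohertz is therefore heavily smoothed, while the audio envelope, mostly below a few kilohertz, passes essentially untouched - exactly the filtering behaviour described above.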
H: Determining necessary resistance in connections to IC pins I'm building a portable 8x8 LED matrix with an ATtiny85 and two SN74HC164 shift registers, powered off two CR2450 coin cell batteries, and I'm having trouble determining where to put resistors in my design. To test the SN74HC164, I connected and powered it directly from an Arduino with resistors on the cathodes of the LEDs (connected to the outputs of the shift register) and nowhere else in the circuit, as shown in many tutorials. Would there be any reason to add resistors to any of the shift register's connections other than the outputs when it is powered off the batteries, or are all of the pins designed to draw only as much current as they need? I will have pins A, B and CLK connected to the I/O pins of the ATtiny85, VCC and GND connected directly to the batteries, and CLR connected to the voltage supply. Do I need to limit the current through VCC and GND on the ATtiny85? AI: No, resistors are not normally needed between logic chips. Logic chips already have high-resistance inputs, and the inputs consume very little current, so adding more resistance in series is not needed.
H: Designing a simple fan + resistor system for heating air I have a research application that requires a steady supply of heated air. My current plan is to 3D-print a small tank (6" x 6" x 4") to which a muffin fan will be fastened. The fan will blow air across a 20 W resistor into the tank to heat the air. The heated air will continuously leave the tank through an exit port and travel to the application for which it is needed. The exiting air will need to be maintained at 37 °C +/- 1 °C. Here is a schematic of the device: To get the correct temperature, I will adjust the fan speed (variable CFM). This is my attempt at calculating the necessary fan speed: Ambient air temp: T1 = 25 °C Final air temp: T2 = 37 °C Resistor power: P = 20 W Heat capacity of air: c = 1 J/gK Density of air: d = 1.15 g/L Air flow rate: f = P/[(c)(d)(T2-T1)] = 20/[(1)(1.15)(37-25)] = 1.45 L/s = 3.1 CFM This is not my area of expertise, so I was wondering if I could get some feedback on my proposed method. Do the calculations seem sound? Do you foresee any issues with this system as a whole? AI: All of the energy output of the resistor will end up in the exit airflow, regardless of fan speed (to first order). Your plan is to vary the output temperature by controlling the air flow rate, that is, the mass of air the energy is diluted in. Most people (myself included) would have adopted a fixed fan speed and varied the amount of power being delivered by the resistor. While basically equivalent, the two schemes have different features, none of which make either approach a no-brainer. The variable-airflow method will have a slightly lower latency from control input to temperature output, especially if a big fat resistor is used for the heater; many small resistors in parallel would be faster in that case. Latency is important for stability with feedback control, though I would expect most of the latency to come from the physical transport and from heating the tank, tube and components in line to the monitored exit. Set against that, the variable-fan-speed method will have variable latency, because the transit time through the tube changes. If you are servoing the temperature at the output, this could cause you loop-tuning and stability problems, unless you tune for the lowest fan speed. The variable fan speed will act as an audible monitor of what the loop is doing. That may be useful, or irritating, or inaudible over the rest of the lab. The fluid dynamics around the experiment may well be sensitive to fan speed. I might be concerned that, having got the layout right at one speed, things could change at another. Power control uses less energy: set the airflow to the minimum required, and the resistor output is automatically controlled to the minimum. Though at 20 W and presumably mains powered, this is a small consideration. Apparent overshoot is built into the flow-regulated system. Consider a flow-regulated shower. Let's say the water is too hot. I turn the flow up, and until the cooler water reaches the shower head, I feel even hotter as the faster flow delivers heat to my skin faster, and it doesn't cool down until after the transit latency. Depending on whether heat is being lost continuously from your experiment, this effect may or may not be relevant. I must confess I thought I'd find more reasons in support of the (for most people) 'normal' way of power control. If the plastic tube on the barb is intended to be the air supply to your experiment, then it looks a very poor match of fan and fan load resistance.
That's a high-flow low-head fan, and it will be essentially stalled into that thin long tube.
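As a rough check of the fixed-flow, variable-power alternative discussed above (a sketch using the question's own figures, with the ambient temperature treated as the thing that varies): rearranging the question's formula gives \$P = c\,d\,f\,(T_2 - T_1)\$. Holding the flow at the calculated 1.45 L/s, the heater must deliver about \$1 \times 1.15 \times 1.45 \times (37 - 25) \approx 20\,\text{W}\$ when the room is at 25 °C, and proportionally less on a warmer day - so a 20 W resistor fed from an adjustable supply (e.g. PWM), with the exit temperature fed back to the controller, spans the whole operating range.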
H: Weird behaviour from a 40x4 LCD Character Display We have a fire monitoring/management panel which has a 40x4 character display attached to provide information in the event of a fire. It has recently started "glitching out": it displays random characters/glyphs for split seconds in random places on the display. These are valid alphanumeric characters (not just random pixels), which leads me to think the LCD controller is damaged, and it happens to be on board the display PCB. We have been quoted for replacing the whole panel, but we'd much rather replace just the character display (a huge difference in price). I'd like to make sure, however, that the display is the part malfunctioning. What are your thoughts? This is a very similar display, for reference. AI: I suggest you let the manufacturer of the fire alarm work on it, to keep it in warranty. The LCD can easily be tested with an Arduino if you know the pinout: https://www.arduino.cc/en/Reference/LiquidCrystal The problem is that the issue might be caused by the LCD, the power supply, the main processor, or a combination of those. Replacing the LCD and seeing whether the issue remains is less work; those LCDs go for ~$20.
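If you do decide to bench-test the bare module yourself, a minimal sketch along the lines of the LiquidCrystal example linked above is enough (a sketch only: the pin numbers are placeholders to be matched to your wiring of RS, E and D4-D7, and a 40x4 module is usually built as two 40x2 halves with separate enable pins, so each half may need testing separately):
#include <LiquidCrystal.h>
// RS, E, D4, D5, D6, D7 - example pin numbers only; match them to your wiring
LiquidCrystal lcd(12, 11, 5, 4, 3, 2);
void setup() {
  lcd.begin(40, 2);            // drive one 40x2 half of the 40x4 module
  lcd.print("display test");
}
void loop() {
  lcd.setCursor(0, 1);
  lcd.print(millis() / 1000);  // a changing value makes intermittent glitches easy to spot
}
If the same random characters appear with the Arduino driving it, the display module is the culprit; if the display behaves, suspect the panel's supply or main processor instead.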
H: Heat dissipation in a DIP Solid State Relay I am working on an application that switches a high power (15 W typical, 100 W maximum) resistive load at 230 VAC (RMS) from a 5 VDC control line. For this I plan to use a Solid State Relay. I am currently planning to use the AQH3213A from Panasonic, but I am concerned about the heat dissipation of the part. The load will be switched on regularly anywhere from a few seconds to a few minutes. I performed the following calculation for power dissipation under maximum load conditions: $$ P = I_{Load}\bullet V_{TM} $$ $$ = 0.435 A \bullet 2.5 V $$ $$ =1.09 W $$ However the datasheet (or any other information I've been able to find from Panasonic) does not list any thermal resistance values for the DIP package the SSR uses. How can I figure out how hot the SSR will get under maximum conditions, and whether I need a heatsink? Furthermore how could I effectively heatsink a DIP 8? AI: "Peak ON-state voltage" is not necessarily something that will apply to your SSR the whole time it is switched on. It is listed in the datasheet so that you can account for the voltage loss your load can see, not to calculate the dissipated power. There's a figure in the datasheet which shows you how much current is OK for a given ambient temperature. This is what you should take into account.
H: Proper cut-off voltage for SLA battery in 120°F+ environment? This is related to this question as well, in which I inquired about the most appropriate float voltage for a battery that would be in very high ambient temperatures, as the float voltage should compensate for temperature changes to avoid over-charging. A question I'd asked in the comments, and was recommended to ask as a separate question here, is the following: What compensation is appropriate for the cut-off voltage of a sealed lead-acid battery? Ambient temperature is 130-140 °F [inside a vehicle], and I'm lowering the float voltage by 0.5 V to compensate (3 mV, i.e. 0.003 V, per cell per degree Celsius over 25 °C, for the 30 °C difference up to 55 °C). As an example, if my cut-off voltage would normally be 10.8 V for a deep-cycle battery at 25 °C, would that value be lower for a battery in an ambient temperature of 55 °C? If so, how much, and how would I calculate the compensation? AI: Float voltage is lowered at higher temperature to prevent gassing. On discharge the open-circuit voltage is barely affected, but internal resistance is lower, so the voltage will be a bit higher at high current. At the end of discharge the internal resistance increases and the voltage drops rapidly, so a fixed cut-off voltage should be fine. If the ambient temperature is normally 55 °C then battery life will be severely reduced. Deep Cycle Battery FAQ: "Battery life is reduced at higher temperatures - for every 15 degrees F over 77, battery life is cut in half. This holds true for ANY type of Lead-Acid battery, whether sealed, gelled, AGM..."
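For the float-voltage side referred to above, the arithmetic is (a sketch, using the commonly quoted figure of about -3 mV per cell per °C, which should be checked against the specific battery's datasheet): a 12 V battery has 6 cells, so a 30 °C rise above 25 °C gives \$6 \times 30 \times 0.003\,\text{V} \approx 0.54\,\text{V}\$ - the roughly 0.5 V reduction mentioned in the question. As noted above, no equivalent correction is normally applied to the discharge cut-off voltage.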
H: How much current can the PS-ON pin on the motherboard's power connector sink? How much current can the PS-ON pin on the motherboard's power connector sink? Can it sink 100-200 mA (to activate a relay/solid-state relay when the motherboard wants to turn the computer on or keep it on)? (e.g. motherboard's PS-ON pin >>> relay/solid-state relay >>> my controller board >>> computer's PSU's PS-ON wire) AI: According to the ATX Power Supply Design Guide, Version 2.01, page 24, the power supply is only allowed to draw 1.6 mA. Based on that, it is very unlikely that any standard motherboard will sink 100 mA.
H: Automatic switch-on when sensor is activated I'm trying to find a component, or to make a circuit, that switches a DC wire when a sensor is activated and outputs a voltage of 12 V, like in the image below. Thanks in advance. AI: A relay provides a simple solution. simulate this circuit – Schematic created using CircuitLab The schematic shows the coil, which is energised by the control circuit, and the contact, which closes when the coil is energised. Pick a relay with a 12 V DC coil (to suit your sensor) and a contact rating adequate (>=) for both the supply voltage and the load current you are switching.
H: HT7333-A LDO quiescent current I've sourced some HT7333-A LDOs in the SOT-89 package from a random AliExpress seller for my battery-powered project. They were quite affordable, and I saw some posts from people using them in battery-powered apps. The datasheet for that IC states it has about 4 µA quiescent current and requires only 2 x 10 µF capacitors. I've soldered two (input and output) 0603 SMD ceramic caps rated 10 µF, which actually measure around 9.3 µF. An ordinary Li-Ion battery is used as the input at 4.083 V, and the output from the LDO is 3.263 V (OK for me, within the datasheet-specified range). But when I measure the current flow with the output left floating, it reads around 1.2 mA, which is definitely not good for a battery app. I haven't connected a real load yet. Why is it drawing 1.2 mA with no load connected? Did I happen to get some sort of fake or flawed ICs (I've checked only 1 IC from the strip so far)? PS: Asking more out of curiosity - I've already ordered other low-quiescent-current LDOs from a reliable supplier. AI: random AliExpress seller ... why is it sinking 1.2 mA while no load The first statement seems to answer the second. You buy parts from a questionable seller in some dark corner of the internet, and you're actually wondering why they don't meet spec? Seriously!? If you can observe this symptom on real parts you know are full production from the same manufacturer the datasheet is from, we can delve into this more. Otherwise this is a waste of time. Think about it. You're a manufacturer of knockoff parts in east Asia someplace. 70% of parts meet full specs, so you sell those as second-source parts like the real thing. 20% are close enough, so you sell them to hobby places where the customers buy on price only and generally aren't sophisticated enough to notice the difference. The remaining 10% are pretty bad, so you sell them to your second cousin's wife's brother-in-law, who has about ten different "company" names through which he sells stuff on Alibaba and the like. These go to people that don't look past the low price and think they're getting a "deal". Hey, these marks will never find you, and it's better than tossing the lot in the ditch behind the factory. That's full of leaking drums of used paint thinner already anyway.
H: Will multiple chips outputting onto a bus for a few nanoseconds cause damage? I'm working on a home-brew CPU design, with the usual mix of parallel EEPROMs, static RAMs and registers, tri-stated onto a single 8-bit bus. My /output-enable logic for the three tri-stateable chips on the data bus is: /Ken = /a /Ren = /a nor /b /Men = /a nor (/b nor /b) Do I need to worry about the few nanoseconds during which more than one chip's /OE pin will be low, due to the differing number of gates? Will current flowing out of a high output of one chip, into a low output of another chip, for less than 10 ns cause damage? If it would cause damage, how was this situation avoided in the 1970s-80s? Update: Chips are: http://www.farnell.com/datasheets/32783.pdf http://www.farnell.com/datasheets/1911297.pdf http://www.farnell.com/datasheets/2047758.pdf AI: There are a number of ways to create non-overlapping enables for bus devices. Perhaps the simplest is to add the clock signal itself to your equations. Then only one device at a time is enabled while the clock is high, and no devices at all are enabled while it is low. (Or vice versa, if you're using the rising edge of the clock to capture data.) Normally, the output-enable function of most devices is fast enough that "wasting" half of each cycle in this way does not cause a timing problem. But if it does, one workaround is to modify the duty cycle of the clock as needed.
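To make that concrete in the question's own notation (a sketch only - it assumes the enables are active low and that a and b only change while the clock is low): keep the existing decode and OR each active-low enable with the inverted clock, e.g. /Ken = /a or /CLK, /Ren = (/a nor /b) or /CLK, /Men = (/a nor (/b nor /b)) or /CLK. While CLK is low, every /OE is forced high and nothing drives the bus; while CLK is high, the original decode applies, and the brief decode glitches caused by the differing gate counts are hidden in the half-cycle when everything is disabled.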
H: What if a capacitor is exposed to a slightly higher voltage than rated? I'm building 5 V circuitry for my bike's dynamo, rated 3 W 6 V. Today I went testing peak voltages without load or capacitors, just with a diode bridge of 4 x 1N5819. Unfortunately my multimeter doesn't have a peak function, so I made a peak detector from an LM324N: Capacitor used: 100 nF ceramic. The meter leads were connected to Vout and the GND pin of the LM324N. The maximum reading was 24 V at the highest speed I could manage. I tested this peak detector on my ATX PSU (bridge rectifier omitted) and got: real 5 V -> Vout = 3.9 + 0.1 V (Vdrop of the 1N5819) = 4 V, delta = 1 V; real 11.1 V -> Vout = 9.8 + 0.1 V = 9.9 V, delta = 1.2 V. Considering that the real peaks are higher than 24 V, I'd add roughly 2 V as a rough estimate, giving a real peak of about 26 V. Can those peaks damage a 25 V electrolytic capacitor with no load? And with a load? I personally think that if there is a load, the cap cannot be fully flooded (charged), hence cannot be damaged that way. But I'm not sure. UPDATE: Just in case, I'll use a 24 V transil. AI: Your open-circuit dynamo - actually it's an alternator, since the output is AC - may be irrelevant to real-world use. The unit will contain internal series resistance and inductance. As described in my answer to Non-led simple bicycle dynamo light system, the impedance of an inductor is given by \$ Z=\omega L=2\pi fL \$. This shows that the impedance is proportional to the frequency which, of course, is directly related to the speed of the bike. If designed correctly, the lamps will turn on to a reasonable brightness at quite low speed and will be noticeably brighter at high speed, but without blowing the lamps - the reason being that the inductors and lamps form an L-R voltage divider. The inductance helps keep the voltage more constant over a wide range of speeds than if it were minimised. Figure 1. Bicycle "dynamo" equivalent circuit. If you place your regulator circuit after the switch shown in Figure 1 it will never run with a no-load situation, and the alternator output voltage should be pulled down to a safe value for your capacitor. Can those peaks be damaging for 25 V capacitor with no load? You are right to be concerned, but given the short duration of exposure to the higher voltages it's unlikely to fail. If going into production, where warranty costs could become an issue, you might take the cautious approach. And with load? I personally think if there is a load, the cap cannot be fully flooded (charged), hence cannot be damaged that way. Agreed. Try your speed tests again with various loads connected to the output and I suspect the voltages obtained will be much less.
H: Cross section of ceramic capacitors I am trying to do a failure analysis on a batch of ceramic capacitors. Short description of the application: ten 220 µF ceramic capacitors in 1210 packages are placed in parallel with a 3.6 V battery. An MCU wakes up periodically (at most once per minute) and draws current (a maximum peak of 10-15 mA for a few milliseconds). The total time before going back to extremely low-power sleep is 130 ms. The capacitors are supposed to hold enough energy to cover this without dropping below 1.6 V (the minimum supply voltage for the MCU). This is needed since the operating temperature is low and the battery cannot deliver. The battery has enough time to recharge the capacitors while the MCU sleeps. I suspect shorts in the capacitors, because the battery has drained very quickly on some of my PCBs, and from what I have read, ceramic capacitors, especially in large packages, are sensitive to mechanical stress and can crack, causing shorts. To see this for myself I have attempted making cross sections, but I have a hard time understanding what I am seeing. How I made the cross sections: I used a Dremel to cut off the corner of the PCB where the capacitors are placed, moulded the cut-off pieces in epoxy glue to make handling easier, used a diamond circular saw blade to make a cross section approximately in the middle of the capacitors (lengthwise), then wet-ground and polished down to 1 micron and finished with 1 µm lapping film. I repeated this on two PCBs. There are 3 capacitors next to each other: here you can see a colour difference between the capacitors; top right and bottom middle are darker in colour, but, as you can see, not in the same position. The darker-coloured ones (top right, bottom middle) look like this close up: almost what I was expecting a ceramic capacitor to look like - at least you can see some kind of layering. But the layers are not solid as I expected. Can this be damage caused by the grinding and polishing? The distance between the layers is 2 µm. The lighter-coloured ones look like this: what is this?! Can e.g. high currents cause the layers to melt together like this? Or can this also be caused by my grinding and polishing? Here we can see an air bubble in the solder. But the gap close to the bottom - can that be damage caused by mechanical stress? I later tried grinding and polishing a bit further into the capacitors. It looks exactly the same. If the strange waviness and/or the broken-off layers had been caused by the grinding and polishing, I would expect the characteristics to have changed, e.g. a wavy one would now have broken-off layers instead, and the other way around. The exact capacitors used are Taiyo Yuden JMK325ABJ227MM-T AI: It looks to me like the grinding/polishing has been done fairly well (with more care you could have fewer scratches), and you're looking at an accurate and undamaged image of the capacitor cross section. The "dark" images are more or less what I'd expect to see from a capacitor cut across the planes of the electrodes: metal electrodes in a darker ceramic matrix. For lower-value capacitors I'd expect to see thicker parallel lines, but for the lines to be slightly wavy and broken isn't a huge surprise. I expect that this results from the special steps they've taken to get the very high capacitance in a tiny package.
Possibly a combination of grid electrodes rather than planes, and squashing/forming the ceramic after building the layers but before final firing in order to get the layers thinner. The "pale" images are more or less what I'd expect for a capacitor sectioned parallel to the electrode planes. Assuming you've used a metallographic grinder (looks like it) then your section plane is flat, but the electrodes aren't. So you get contour-like features where the electrode crosses the section plane. I doubt you'll find your leakage in these images. Other places to look: Check the datasheets for the expected resistance - is it as high as you thought? Check the conditions under which it is given in the datasheet, and see if your environment is likely to make it worse. Check a batch of new capacitors to see what the resistances are. Check a bunch of capacitors from your warranty returns to see if the capacitance or resistance has changed. Measure the resistances on your PCB before assembly (should be nice and high). Measure the resistance on a completed PCB (maybe sans MCU). Look for evidence of flux which has not been cleaned well enough and could reduce the resistance.
H: Altium: Including Text of Evaluated Parameter in Value of Second Parameter I would like to include the text value of one parameter in the value of a second parameter. In my case, I am trying to add on the text from the schematic SheetNumber parameter to the end of the text of the schematic DocumentNumber parameter. In other words, suppose the DocumentNumber parameter is populated with "ABCD" and the SheetNumber is "2". I want to populate the DocumentNumber parameter dynamically with "ABCD-2" (or "ABCD-1", "ABC-3", etc depending on the value in SheetNumber). It would be nice if there was a way for a parameter to be evaluated while within a second parameter, like "DocumentNumber-=SheetNumber" but this does not work. Is there a way to do this in Altium? AI: Sure, we do something similar with variants for our boards. In our case, we append the variant name to the end of the document number, but it should work the same with any schematic parameters. The way we do it is by editing the DocumentNumber parameter in the Parameters tab within the Document Options window, the same way you're inputting the base DocumentNumber. In that field, enter the following: ='ABCD-'+SheetNumber Now wherever the DocumentNumber parameter is used, it should appear as a string formatted per your specification (i.e., "ABCD-2", "ABCD-3", etc.) However, I'm not sure how it will behave in situations where there is no value for SheetNumber. In our case, if there is no variant defined or the active variant selection is "No Variant", then the string will appear as e.g., "ABCD-[NO VARIANT]". If you want to avoid this, you can always define a custom parameter such as NumberedDocString within that same dialog and set its value equal to the following: =DocumentNumber+'-'+SheetNumber That way, DocumentNumber is "ABCD", SheetNumber is your current sheet number, and the value of NumberedDocString wherever it is used should be formatted as you want it.
H: Understanding the working of a directional antenna I am trying to understand how directional antennas work. I want to implement Bluetooth communication with an external, directional antenna. Let's suppose that I buy this: https://cdn3.bigcommerce.com/s-blpsc02m/products/1667/images/1947/A2D5w__15836.1454272603.1280.1280.jpg?c=2 My question is: if this antenna "looks" to the north (the white head points north and the back with the cable points south), and a signal is transmitted from south to north, does the antenna receive the transmitted signal? AI: Antennas follow a property called reciprocity. This means that if the majority of the directivity is in one direction during transmit, then during receive the directivity will be the same and in the same direction. So in the case of the PDF you posted for the LogiLink antenna, you can see that it is directive in one direction (for both transmit and receive). If you try to pick up a signal coming from the opposite direction, the signal will be highly attenuated. This means you may or may not receive the signal, but you would certainly have a much better chance by pointing the antenna in the direction of the desired signal. There are also omnidirectional antennas that exhibit gain. These may be more suitable for you if you need to receive signals approaching the antenna from various directions. An omnidirectional antenna exhibits gain by restricting the pattern in the vertical orientation.
H: Oversampling on dsPIC33EPXXGS50X family I'm trying to implement oversampling on a dsPIC33EP64GS502. According to the datasheet and the 12-Bit High-Speed, Multiple SARs A/D Converter (ADC) reference manual, I have to configure the following bits to set the oversampling ratio: I'm struggling to understand why 2x and 4x oversampling both provide the same 13-bit result, and likewise for the other ratios. AFAIK 4x oversampling will increase the resolution by 1 bit, and a 2x ratio should increase the resolution by 0.5 bits. Is this an error in the datasheet, or am I misunderstanding something? AI: The quoted number of bits is the number actually present in the result, not the resolution. As such it should be >= the resolution. In other words, if the ADC has a resolution of 12 bits, 2x oversampling gives you 0.5 bit more resolution, but to report it you must use 13 bits. So the result with 4x oversampling is 'better' in terms of resolution than with 2x oversampling, but it is presented in the same number of bits.
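The underlying relationship (a general rule of thumb for oversampling plus decimation, not something specific to this part) is that each doubling of the oversampling ratio adds half a bit of effective resolution, i.e. extra bits \$= \tfrac{1}{2}\log_2 N\$: 2x gives 12.5 bits, 4x gives 13 bits, 16x gives 14 bits, and so on. Since a result register can only hold whole bits, both the 2x and 4x results are reported in a 13-bit field - which is exactly the distinction drawn above between the resolution and the number of bits used to present it.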
H: Does a fan need a flyback diode? I'm aware that inductive loads like relays need a flyback diode. Since a motor is also an inductive load, I'm inclined to put a reverse-biased 1N400x diode across the terminals of this fan. However, I've never heard any mention of using flyback diodes with fans. Am I correct in thinking I need a diode, or is there some reason why I wouldn't? Thanks! AI: It's a brushless DC motor. Most, if not all, of them have flyback diodes across the windings (inductors), which are switched electronically with transistors inside the motor (in contrast to the mechanical commutator in brushed motors). There shouldn't be any need for a flyback diode across this fan, unless there is some significant series inductance, e.g. an EMI filter inside the motor. But when in doubt, add a flyback diode: it's cheaper to add one unnecessary diode than to fry your circuit because you did need one.
H: Could sliding down a slide damage my camera? Specifically one of those plastic spiral slides - I was at the playground with my children yesterday, and when we went down the slide there was enough static generated to make our hair stand on end. Of course my son has more hair than I do by now... Anyway, there was a drastic increase in charge - such that I could feel the build-up as we got closer to the ground. And when we touched the ground it felt like a jolt from my feet to my hair. I'm not very familiar with electricity, but from what I know that seems like a lot. I had my cell phone in my pocket and was wearing my DSLR around my neck, but both devices seemed to be undamaged by the voltage - I'm guessing it just didn't pass through them. Is there a circumstance where the static could damage these electronics, based on how I was holding them or something? Or are they designed well enough to withstand this static-y sort of shenanigans? AI: ESD will create an arc when it finds "ground," which in this case could be the actual ground, with the path to ground being the slide and/or yourself. At that point you probably had several thousand volts between yourself and ground. Here, "ground" could also be found in the playset, maybe even the slide itself, depending on what it was made of. Basically, if you shocked yourself, you just found ground. :) It sounds like your camera and phone were on your person and therefore at the same voltage as you, and not in the path to ground. Therefore, no current passed through them and they were fine. If those devices changed position in the circuit and the ESD current passed THROUGH them to get to ground, then they might have been in trouble. This would have happened if you had connected a wire to your camera, the other end of the wire was stuck in the playset, and then you touched the camera. You would then have presented the camera with a couple of thousand volts across it, inducing a current to flow as a result. Generally it should survive such an ESD hit, though. The high voltage and current of the ESD shock would pass through the metal case, if its case was metal, and the electronics inside would not see much disturbance. Most devices have decent ESD protection, as it is a part of many product standards.
H: Optical perpendicular speed sensor required I've been looking into this for a while now, and so far I can't find what I need, but I think I'm missing the right search terms for it. I've got a little project in mind where I can measure the speed of a conveyor belt. I'd like to do this with an Arduino-sized box with the sensor inside: you could hold it on the belt and measure the speed, showing it on a few 7-segment displays or an LCD. How I'd like to do this is by using a system like an optical mouse; from my understanding they have a module inside that compares low-resolution pictures to determine how much the mouse has moved. This way I can keep my system fairly well sealed instead of using a wheel of some sort, and it looks fancy. Does someone know what I should be looking for? EDIT1: It has to calculate m/s, to 1, maybe 2, decimal places. I can't place extra detection equipment on the belt(s). It has to be one handheld device that can measure it all in a few seconds. AI: Your mouse-style linear motion sensor could be done; how much reliability you need is not addressed. Optical mouse sensor modules compatible with microcontrollers are available. This ADNS3080 mouse sensor chip combines: an optical sensor array, an array processor, and a digital communication output. The array processor processes image frames and updates X & Y position registers. When you read these registers they're reset to zero, so your microcontroller can calculate speed (it is up to the microcontroller to measure the time between successive reads). The XY array could be aligned so that one axis (say X) lies along the conveyor motion; the Y axis should then read near zero on average. Or the XY axes could be aligned at a 45-degree offset, so that the X and Y registers yield similar numbers. The mouse chip itself has no optics, so a simple external lens is required to focus an image of your conveyor belt onto its internal optical array. You'll also likely want to illuminate the conveyor belt with a lamp. Some speed calibration will be required, since the image size is affected by lens magnification, and hence the motion registers will yield different numbers. For this module, standard serial communication is used to configure, read and write the mouse-chip registers. Many of these chips can deal with video-speed image processing given enough illumination, but there will be an upper limit to the conveyor-belt speed that can be sensed. This is likely not a robust way of controlling conveyor speed, but as an open-loop sensor, as you have outlined, it seems workable.
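The microcontroller-side arithmetic might look roughly like the sketch below (hedged: readDeltaX() is a hypothetical placeholder for the datasheet-specific register read over the serial interface, and COUNTS_PER_METRE is a calibration constant that depends on your lens magnification, as noted above):
// Placeholder: return the accumulated X-motion counts since the last call,
// read from the sensor's delta register per the ADNS3080 datasheet.
int readDeltaX() { return 0; }

const float COUNTS_PER_METRE = 40000.0f;   // example value only - must be calibrated
const unsigned long PERIOD_MS = 100;       // sampling period

unsigned long lastSample = 0;

void setup() {
  // initialise the sensor's serial interface and the display here
}

void loop() {
  unsigned long now = millis();
  if (now - lastSample >= PERIOD_MS) {
    int counts = readDeltaX();                        // motion since the previous read
    float dt = (now - lastSample) / 1000.0f;          // elapsed time in seconds
    float speed = (counts / COUNTS_PER_METRE) / dt;   // metres per second
    lastSample = now;
    // show speed to 1-2 decimal places on the 7-segment or LCD display
  }
}
Averaging a few of these readings smooths frame-to-frame noise and still fits within the "measure in a few seconds" requirement.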
H: Beginner Q: Measuring current via USB charging cable I was trying to use a multimeter to find out what current my charging cable actually provides to the phone. How do I accomplish this without completely destroying my cable? Right now I can measure the voltage (~5.15 to 5.2 V) and V~ (AC voltage?, which reads around 10-11), but I am not able to figure out a proper way to find the current. I will be very grateful for any pointers. Thanks AI: One way or another, you'll have to break into your cable. USB cables are cheap. Get a short one just for this purpose, especially if you want to do this more than once. The best would be one of those USB "extension" cables with a type A plug on one end and a socket on the other. These things are not good for the USB signal, so get the shortest one you can find. Very carefully cut open the outer jacket of the cable. That should expose four separate wires. Finding which is which might be tricky. Worst case, expose a little bit of the bare wire of each, then use an ohmmeter or continuity tester to see which is connected to which pin of the plug. You can look up which plug pin is which conductor. Cut the power wire and solder about 100 mΩ in line with it. Bring out connections to both ends of the resistor. Now fix the cable up using electrical tape or hot glue. At this point, you have a regular USB extension cable with two wires coming out of it from the middle somewhere. Insert the cable in line with the device you want to test. Connect a voltmeter to the two leads coming out of the test cable. The voltage will be the current in the power wire times the resistance you added to the cable. For example, if you added a 100 mΩ resistor, then the voltmeter will show 100 mV for every 1 A of current.
H: How to reduce/remove piezo pickup hiss I made a stompbox, which is basically a small wooden box with a piezo element inside that, when stomped on, produces a sound similar to a bass drum, except that the thing is much more portable. The circuit works, but I have two problems with it that I don't know how to solve: it's extremely loud - I hooked it up to two different soundboards and even with the gain at 1 it was super loud - and the circuit produces a really loud and constant "crackly" hiss. The hiss remains at the same apparent loudness regardless of where the gain pot on the soundboard is. The circuit in question is this: (I replaced the MPF102 with a J113 because I didn't have one on hand.) AI: Having read extensively through the posts on diyAudio.com about the challenges of achieving low-noise performance with JFETs for RIAA vinyl playback with moving-coil phono cartridges: one noise source is hot-carrier currents needing an exit path out of the gate. Your exit path is 3.3 megohms. Experiment with reducing the gate resistor. Regarding the "super loud", you can reduce the gain by reducing the 1.5 kΩ to 560 Ω or 220 Ω. At 220 Ω your 'gain' is actually an attenuation of 10 dB. The favoured N-JFET on diyAudio is the 2SK170, usually cascaded with a bipolar transistor to hold the drain stress to 20 volts or less, for the purpose of avoiding hot-carrier gate-current noise. In this circuit, the J113 exhibits that noise with only 9 volts.
H: Matching sensor output with PLC input I am trying to add a height sensor to a production line where I work. I found this Banner sensor (QS30LLPQ), and it has 2 bipolar discrete outputs (PNP and NPN). Now, I am trying to decide which of those two outputs I should connect to the input card of my PLC. This input card is a sinking type. Now, I read some information online and from what I got on forums, NPN sensors are current sinking devices and PNP sensors are current sourcing devices. So that tells me that I have to go with a PNP output, since a sourcing sensor output must go with a sinking input. My question is: is that true? From what I understand, the sourcing output will supply current from the sensor, and the PLC will sink it to ground, thus bringing the input to a high state from the PLC's perspective. So, will that make the output of the sensor go to a high state when it is activated (it should detect boxes moving in front of it)? Thanks a lot. AI: Now, I read some information online and from what I got on forums, NPN sensors are current sinking devices and PNP sensors are current sourcing devices. So that tells me that I have to go with a PNP output, since a sourcing sensor output must go with a sinking input. You are correct. In Figures 1 and 2, below, an external sensor is connected to a PLC input. Let's assume that the PLC input has an opto-coupler to the internal logic to isolate the sensitive logic circuitry from the outside world. The circuit, as far as external switches are concerned, is an LED with a series resistor. If we short the input terminal to the common terminal we should expect about 5 to 15 mA to flow (determined by the internal series resistor). In Figure 1, the opto-LED anode is connected to the +24 V supply and the cathode is connected through the resistor to the input pin. When the input pin is connected to COMM- the LED will light, giving a logic '1' to the PLC CPU. The sensors usually use an NPN transistor in this configuration. Either way, the PLC input provides or sources the current through the LED (red arrow) and is known as a "sourcing" input. Since NPN transistors can be very easily switched in this configuration they are generally used - hence "NPN" inputs - and the sensor "sinks" the current. One major advantage of this arrangement is that the transistor can be powered from a supply of a different voltage to the PLC - e.g. a 5 V microcontroller - and once it shares the common negative it effectively becomes a level shifter between the two systems. The main disadvantage is that the logic is somewhat inverted: a high voltage on the input is logic 0 and a low voltage is logic 1. This can be confusing. Figure 2 shows the PNP / sinking circuit. Here current flows from the + supply, through the transistor, and the PLC "sinks" it. The logic is the right way up now, and this style of input is preferred on most industrial equipment at present. For outputs the situation is similar. A current sourcing output will supply the current from the + supply, through the load, to COMM-. For a current sinking output the current will flow from the + supply, through the load, into the PLC output, where an NPN transistor will "sink" it to COMM-. Note that some PLCs use bi-directional opto-isolators - two LEDs connected in opposite directions. By connecting the input common terminal to the + or - supply, the inputs can be made sourcing or sinking.
So, will that make my output in the sensor go to a high state when it is activated (it should detect boxes moving in front of it)? That depends on the sensor. Some provide a switch or teach mode to allow the user to configure it either way. Since it is being fed into a PLC you can invert the logic there anyway. A more important consideration is how you want the system to behave on loss of signal, i.e. if the sensor fails due to disconnection, etc., what will be the effect on the machine? If you are simply counting boxes then it won't matter. If you were filling boxes with stuff then you would want to make sure that the switch is configured to signal "box present" rather than "box absent". PLCs are great. Have fun.
H: What is the IRQ out in I2C? On this MPR121 capacitive keypad (link), what is the purpose of the IRQ out? AI: What is the IRQ out in I2C? I'll give a slightly different focus from some other answers. Remember that I2C Slaves cannot initiate an I2C bus transaction. Therefore if you have an I2C keypad or touch screen controller (or other HMI) how would the I2C Master know when to request data from the I2C keypad controller, to ask whether or not there had been a touch or release? Three possibilities include: I2C Master sometimes polls the I2C keypad controller, but not as a high priority. Problem - Potential delay between the touch/release and the I2C Master polling the I2C keypad controller, leads to poor user experience, due to perceived "lag" (delay) between a touch/release and the machine's response. or I2C Master spends lots of time polling the I2C keypad controller, to minimise any lag between a touch/release and the I2C Master actually detecting that this has occurred. Problem - I2C Master has fewer CPU cycles for doing anything else, since it is spending so much of its time polling the I2C keypad controller. The I2C bus also has reduced bandwidth for bus transactions to any other I2C devices, due to so many polls to the I2C keypad controller. or I2C Slave has an extra "interrupt" signal connected to the I2C Master (not part of the I2C specification, but this was introduced in SMBus). This allows the I2C Slave to alert the I2C Master and effectively say "poll me now!". Problem - Requires an extra signal line between the I2C Master and the I2C Slave. As you see, your I2C keypad controller chose the last option (some I2C touch screen controllers do the same thing.) This is an example of the poll vs. interrupt choice, which occurs in computer science and elsewhere in life e.g. you could stay awake and continuously check the clock to see when to get up in the morning (polling), or you could set an alarm and let that wake you (interrupt).
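On an Arduino-style master, the third option typically looks something like the sketch below (hedged: 0x5A is the MPR121's usual default address, and the two-byte read is only a placeholder for the real status-register transaction, which should be taken from the datasheet or the vendor library - the point is simply that the bus stays idle until the IRQ line pulls low):
#include <Wire.h>

const int IRQ_PIN = 2;                     // interrupt-capable pin wired to the IRQ output
volatile bool touchPending = false;

void irqHandler() { touchPending = true; } // keep the ISR trivial; do the I2C work in loop()

void setup() {
  Wire.begin();
  pinMode(IRQ_PIN, INPUT_PULLUP);          // IRQ is typically open-drain, active low
  attachInterrupt(digitalPinToInterrupt(IRQ_PIN), irqHandler, FALLING);
}

void loop() {
  if (touchPending) {
    touchPending = false;
    Wire.requestFrom(0x5A, 2);             // placeholder: fetch the touch status bytes
    while (Wire.available()) {
      byte status = Wire.read();           // decode touch/release bits per the datasheet
    }
  }
  // the CPU is free for other work here - no polling of the keypad required
}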
H: Does cassette tape change its velocity while playing? Looking at an old cassette tape: from the point of view of the head, let's say that it reads at speed \$v\$ (the magnetic medium scrolls past at speed \$v\$). But looking at the right wheel, which is the one pulling the magnetic medium, its radius is growing(!) over time. Now, \$v=r\omega\$, where \$\omega\$ is the angular velocity, i.e. a constant. Question: I don't think that's true. What is really going on here? The radius is growing over time, for sure. I also assume that \$\omega\$ is constant. So does \$v\$ increase? AI: The details of how a cassette drive works are well covered by this Wikipedia article. The tape is pulled by a capstan next to the playback head, and this capstan pulls the tape at a steady rate. (Picture from the Wikipedia article.) You probably need to click on the picture to see it full size. I have indicated the capstan with a red arrow. The take-up spool doesn't rotate at a fixed speed: since \$v\$ is fixed by the capstan, the spool's angular velocity has to fall as its radius grows (\$\omega = v/r\$). It uses a slipping drive, as badjohn says in his answer, so it takes up the tape at the speed the capstan moves it.
H: Should I use cascaded regulators, or connect them all to the same input? For a particular design, I need to take an unregulated voltage from a battery and regulate 5 different DC voltages from it. What's worse is that I need step-up and step-down converters. What's even worse is that some voltages need more current than others. Note: To clarify the question, I've given the specific voltages that I need. However, keep in mind that I'm wondering what to do in general, not just this specific case. The battery voltage is 11.1 V, which drops as it discharges. I need to create: 15 V 12 V 5.5 V 5.0 V 3.3 V The 12 V level needs to run at least 1 A, and the three lowest need to run about 200 mA each. The 15 V level doesn't use more than 50 mA. So this is my question: would it be better to go from the battery to 15 V to 12 V to 5.5 V to 5.0 V to 3.3 V, or would it be better to just connect all 5 regulators directly to the battery? AI: I have often wondered this myself, and each time what I come down to is a trade-off. The most obvious trade-off is the following: Pro: Cascading them generally causes the lower-voltage regulators to stay cooler (you're not dropping as much voltage with them, so you're wasting less power in the form of heat). Con: Your first regulator would need to be able to supply enough current for all of the circuitry running off each of the other regulators (and this is the case all the way down the chain). This means that your top-level regulator would have to be big and beefy compared to the regulators that are fed from it. Sometimes the pro wins (for example, all of the circuitry on each of the power rails only draws milliamps of current, so you don't need a powerful regulator at the top), and sometimes the con wins (you can't find a top-level regulator that can supply enough current, so you opt for large heat sinks and extensive cooling systems). You, as the designer, will need to analyze all potential cases and make sure the circuitry can handle any stress that it might see during normal operation.
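A worked comparison makes the first trade-off concrete (a sketch using the question's own rails and currents, and assuming linear regulators for the low-voltage outputs): fed straight from the 11.1 V battery, the 3.3 V / 200 mA regulator dissipates about \$(11.1 - 3.3)\,\text{V} \times 0.2\,\text{A} \approx 1.6\,\text{W}\$; fed from the 5.0 V rail instead, its share drops to \$(5.0 - 3.3)\,\text{V} \times 0.2\,\text{A} \approx 0.34\,\text{W}\$. The 5.0 V regulator must then carry that extra 200 mA, so with linear parts the total heat is unchanged and is merely spread across more packages, while the upstream regulator has to be rated for the summed current - exactly the pro and con described above. With switching converters the totals do change, since cascaded efficiencies multiply.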
H: First Arduino Board Design This is my first time designing an Arduino 'shield', so I wanted to get some input. This board interfaces with sensors using I2C. A Raspberry Pi interfaces with the Arduino, which reads magnetometer data from six to seven different sensors through an I2C expander. The schematic is shown below: I'm using KiCad as my CAD software, and the following are the parts I'm using: Arduino Uno Rev. 3, TCA9548A I2C Expander (Breakout) from Adafruit, Adafruit HMC5883L Breakout - Triple-Axis Magnetometer from Adafruit. Link to Fabrication Print: https://learn.adafruit.com/adafruit-tca9548a-1-to-8-i2c-multiplexer-breakout?view=all The magnetometer sensor breakout board has six pins, but I'm only using four of them. They'll connect to 4-pin headers to which I will attach a ribbon cable, so they will be used from a 1.5 meter distance inside a box. The pin layout at the sensor is different, but by crossing some wires on the connector at the sensor I can get the 4-pin configuration shown in the schematic, so SDA/SCL will be separated by +5V and GND. I'm designing the Arduino shield, which will attach to the Arduino, to eliminate the need for a breadboard. So, for my design, I figured I would add some 10k pull-up resistors to make the total pull-up resistance 5k for each sensor SDA/SCL line. The sensors already have a 10k pull-up resistor, and by adding another 10k resistor, we can read the sensor from a long distance. I tested all of this on a breadboard with 1.5 m wires, and it seems to work out well. I then added some capacitors on the +5V power pin from the Arduino to keep the voltage steady, since it has to power a lot of devices. For the board layout, I have a ground plane on both the top and bottom layers, but I turned it off in the picture to show the routing. The capacitors and resistors are all SMDs in the 0805 size standard, so I can easily hand-solder them. I placed most of the pull-up resistors on the bottom of the board so as to use as much space as I can. Judging by the size of the Arduino, it shouldn't touch the parts (unless I'm mistaken about the distance from the shield to the board). As a first Arduino shield design, do you see anything wrong with the design that might be a concern? Design-wise, the multiplexer has 0.6 inches (15.24 mm) between its two rows of pins, so there should be plenty of room for the 0805 resistors. But would their placement like this cause any issues that I don't know about? Similarly, the capacitor values were just picked based on another design that was working off +5V. I was powering an op-amp, and the reference design used those values; now that I'm powering more items, should I go with a larger value? EDIT: Apologies, but I wanted to clear something up. The part that I used for the TCA9548 I2C expander is a little strange: I just used a 2x12 part in the schematic and labeled it as the TCA9548. I didn't know that it was numbered left to right rather than CCW. Nevertheless, I made sure that the pin placements matched up with the component shown on Adafruit's page, so it is fine. AI: Perhaps it's because I'm not familiar with KiCad, but the sch and pcb symbols for the TCA9548 confuse me. The sch symbol has non-standard pin locations (back and forth, instead of CCW). For example, nReset should be on pin 3. Then on the pcb, the labels on the silk overlay don't match what I see in the datasheet for the TSSOP package. My point: double-check the 9548 pin assignments.
Also, I'm sure you realize that the I2C resistors are pull-ups, not pull-downs (as mentioned in your question). The web has some wonderful I2C resistor calculators (for length/speed considerations).
H: Wiring a switching regulator, two relays, and an LED… Currently I'm working on a small portable amp which will be run from 17 to 24 volts, depending on whether I drive it with a battery or a wall wart. It's my first electronics project, so I have some fairly basic questions... To power two single-coil non-latching relays (G6K-2P DC12 by Omron Electronics, coil current: 9.1 mA, coil voltage: 12 V, used as input selector and stereo-to-mono switch) and an LED, I plan to use a voltage regulator. After reading an input by Russell McMahon on the Electrical Engineering Stack Exchange, it seems to me that in my situation (voltage drop of up to 12 volts) it would be best to use a switching regulator, like the OKI-78SR-12/1.0-W36-C by Murata (input: 15 - 36 V, output: 12 V). I want to design a little PCB with the regulator, the relays, a power output for the LED, and all the circuits and resistors needed for the input selector and the stereo-to-mono switch. I have the audio part all figured out, but I'm not sure if I got the power part right. Here is my schematic (just the power part), showing how I would connect the regulator, the relays, and the LED: I have already tried to sketch out how I would put this on a PCB of the size that would fit into my amp case. I will not solder the LED and the SPST switches that control the relays directly on the PCB, therefore on the PCB you see the footprints of JST connectors in their place (SW1 = P2, SW2 = P3, LE1 = P7). Now, I have the following questions: Is it a good choice to use a switching regulator, such as the one linked from Murata? Is my schematic correct? Is the routing of my PCB correct, and is it supposed to work? AI: Don't use an SMPS. Yes, you have a 12 V dropout, but with only a ~20 mA load the power loss is approximately 250 mW, which won't be troublesome for many of the "off the shelf" linear regulators. Use ceramic capacitors for the input and output decoupling of the linear regulator. Seriously, your circuit will not work without them. About 470 nF at the input and 1 µF at the output will most probably do the job. However, read the datasheet of the regulator thoroughly! Both diodes are oriented wrongly! Swap the cathode and anode of each diode - you will blow them otherwise! Friendly advice - read some basic books about electronics before building anything. Predict what could possibly go wrong and deal with it. I mean it seriously; otherwise it's just a waste of money.
H: Clean water turbidity sensor I'm working on a project to determine water clarity, using an MCU with a turbidity sensor. I have tested sensors such as a cheap dishwasher sensor, and while it works for changes in turbidity in dirty water, it isn't great with clean water. For example, the water I am testing in the ocean could have enough clarity to see down 20 ft; the value returned by the sensor is about the same as if the clarity were 50 ft or 60 ft. It is able to easily determine the difference between 2 ft and 10 ft, when there is a huge difference in suspended particles. I am using an ATmega ADC with 10-bit resolution. I know I would be better off with an external ADC, and making my own sensor with a photoresistor/phototransistor and IR emitter. My question is: is there anything I can do with the sensor design to force the curve to be affected more on the clean-water side than the dirty side? If not, is my best bet just to ensure I have a high-resolution ADC with very little noise? Thank you very much AI: I'd suggest you are limited in resolution by the distance through the water sample with your current sensor. The datasheet shows you have no more than 5.7 mm of water depth providing resolvable transmissive information. You are also severely limited with this sensor since it provides only transmissive sensing, without any backscatter sensing. Perhaps you could make your own sensor from two flat glass plates and install multiple mirrors to bend the sensing path through multiple passes through the water. This would give a longer path but still lacks any backscatter sensing. This may be of help ... have a look at the GLI Method 2 sensor.
H: How to step down from 4.5 V to 3.3 V? I'm trying to step my source voltage of 4.5 V down to 3.3 V, suitable for the MPR121. At its best settings the chip draws a typical current of 393 µA. The 3.3 V has to be quite accurate so as not to damage the keypad. When researching, I've come across two possible solutions that could achieve what I want to do: Voltage divider - in this case, can the output be stable and accurate enough to step down the voltage, and if so, what value resistors would I use? Voltage regulator - this is then another IC just to step down the voltage (footprint-wise); any suggestions on which regulator I would use (linear, switching)? AI: Since the voltage drop ratio is relatively low and the output current requirements are also low, use a linear regulator. 3.3 V is a common voltage, so there are many fixed linear regulators available at that voltage. These things have only three pins and are very simple to use. The pins are the input voltage, ground, and the output voltage. You will also need a 1 µF or so ceramic cap between input and ground, and between output and ground. You are dropping (4.5 V) - (3.3 V) = 1.2 V. You have to be careful to choose a regulator that can work with that headroom. These are often called LDOs (Low DropOut). The efficiency from the voltage drop will be 73%, plus a little more loss for the quiescent current. At only 400 µA output, the overall wasted power will be very small. Also take a look at the quiescent current spec. For some linear regulators, that would add significantly to your 400 µA figure. Others work with only a few µA. Take a look at the MCP1700 series, but there are many, many others that would be fine too. Some older LDOs are not "0 ESR output cap stable". Simply stay away from them. They were designed before the era of small and cheap ceramic capacitors that could provide a few µF. The MCP1700 series I mentioned is stable with a 0-ESR output cap, requires a maximum of 350 mV headroom, has only 4 µA quiescent current, and can deliver up to 250 mA. These are my "jellybean" LDOs, meaning that's what I use unless there is a good reason not to. I don't see one in this case.
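For scale (a quick sketch using the numbers above): at 393 µA the regulator dissipates about \$1.2\,\text{V} \times 0.4\,\text{mA} \approx 0.5\,\text{mW}\$ from the voltage drop, plus the quiescent contribution - around \$4\,\mu\text{A} \times 4.5\,\text{V} \approx 18\,\mu\text{W}\$ for an MCP1700-class part - so the "wasted" power really is negligible despite the modest headline efficiency.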
H: What kind of LC circuit can produce this response? I am trying to come up with an equivalent circuit for a structure that I'm working on. The response is shown below: The modelling of the structure hasn't incorporated any loss, therefore the real part of the impedance is 0. I am not a circuit expert, and would like some pointers on what kind of circuits would be capable of producing a response like this. AI: simulate this circuit – Schematic created using CircuitLab http://www.falstad.com/circuit/e-filt-hipass-af.html (DIY Java simulator) This circuit applies to all crystals, ceramic resonators, SAWs and some ceramic caps. Here the 30 nH is a short length of wire, and C2 (0.1 pF) is 10x bigger than C1, the capacitance which determines the series resonance - and the same structure appears at any frequency, e.g. from kHz to GHz. Low-frequency crystals are in the henry range for the motional inductance but still in femtofarads [fF] for the "motional capacitance". Update, after some effort on my part: a 4th-order LC filter can be made to match the reversed pole-zero shape.
H: How to use a reference design in datasheet? I'm trying to gain a better understanding on what this is and how to use it. I know this is a datasheet for SAMB11 SoC, more specifically the reference design on page 50 is what I'm inquiring about. In a general sense, is a reference design the simplest/fastest way to get the target component up and running, or is this one of many examples of how to implement this SoC? Is it safe to use this as a starting point depending on how I'm using it? I guess I'm wondering when this circuit is built what will it give me? Thanks for your patience and time. AI: In a general sense, is a reference design the simplest/fastest way to get the target component up and running, or is this one of many examples of how to implement this SoC? Every manufacturer is free to make their own decisions of course, but generally a "Reference Design" is the design that the manufacturer used when they were compiling the documentation. Thus, the performance, features and behaviour described in the rest of the datasheet is with reference to the Reference Design. In practice, that tends to mean that it's a good example of how to get everything running on the IC. Is it safe to use this as a starting point depending on how I'm using it? Oh definitely. Unless you have something else to go by (eg. experience or similar design) the reference design is always an excellent place to start. It's possible you could come up with a more efficient/streamlined design if your requirements are narrower than everything the IC offers, but you can always leave bits out or re-design down the track. I guess I'm wondering when this circuit is built what will it give me? It'll give you a functional circuit that demonstrates all the functionality described in the datasheet. Note that building this circuit correctly is not trivial. The matching network in particular takes some design nous. At a glance the rest is straight-forward and well documented - just be sure to follow the comments!
H: How to direct input and output of circuit for multifunctional 4 bit calculator I am trying to build a 4-bit calculator that can add, subtract and multiply using an FPGA board (Altera DE2). I have an adder/subtractor and multiplier already built and functioning, but need to know how I can merge each function and give an output based on user input. For each module: there will be 8 inputs (X0-X3, Y0-Y3) which represent the two numbers used for the calculations; the adder/subtractor has an additional input to select the function (add or sub); the multiplier has an additional input to reset the clock that is used for the counter that shifts the bits; the multiplier has 8 outputs (A-H) that are fed into the …; the add/sub has 7 outputs for the sign display for results that are negative; the add/sub has 5 outputs; all outputs except the sign display are fed into an already functioning 7-segment BCD display. How do I connect the 2 modules in a way that only the selected module gets the input and the output from only the selected module goes to the BCD display? I have included a start to my schematic with the displays attached to their respective output ports. AI: Claudio Avi Chami is right, you probably don't need to select on the input - just hard-wire your input to both blocks. They will both do their thing but you can just ignore the output of one of them. Then, on the output, connect each pair of corresponding pins to the inputs of a 1-bit (2-input) mux. Connect the output of the mux to the corresponding pin on the LCD. Then connect the select line of the mux to your signal that selects which module to use. Repeat for all pairs of corresponding pins. Shown here is the result for just 3 pairs of corresponding pins: simulate this circuit – Schematic created using CircuitLab
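A behavioural sketch of that output-selection scheme, in Python rather than HDL, just to show the intent. The function names and bit ordering are illustrative only, not taken from the actual schematic.

    # Behavioral sketch of the output-selection idea: both modules always compute,
    # and a per-bit 2-input mux picks which result reaches the display pins.
    # Function and signal names here are illustrative, not from the actual design.

    def mux2(select, a_bit, b_bit):
        """1-bit 2-input mux: returns a_bit when select is 0, b_bit when select is 1."""
        return b_bit if select else a_bit

    def route_to_display(select_multiply, addsub_bits, mult_bits):
        # Pair up corresponding output pins and mux each pair with the same select line.
        return [mux2(select_multiply, a, m) for a, m in zip(addsub_bits, mult_bits)]

    # Example: select = 0 routes the adder/subtractor result, select = 1 the multiplier.
    print(route_to_display(0, [1, 0, 1, 0], [1, 1, 1, 1]))  # -> [1, 0, 1, 0]
    print(route_to_display(1, [1, 0, 1, 0], [1, 1, 1, 1]))  # -> [1, 1, 1, 1]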
H: Bright flash using capacitor and LED? I'm looking to produce an intensely bright single flash exceeding 2000 lumens for half a second. If I hook a charged 10,000 µF electrolytic capacitor up to a 100 W LED chip, will it work and how bright will it be? Is there a better way of doing this electronically? AI: If you want a super bright flash there is no better way than using a high-voltage xenon discharge. In other words, an electronic camera flash. LEDs will not give you anywhere near the same peak power. Your setup might get you a peak power of ten times rated continuous power, that is around 1 kW peak, but a camera flash can be in the hundreds of kilowatts. There is a good Wikipedia article on the techniques used. If you want the ultimate, there is a banned weapon called the isotropic radiator, which uses high explosive to pump a compressed noble gas like xenon. Anyone looking at it at night would probably get eye damage even from hundreds of meters away.
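A quick energy sanity check on the capacitor idea, assuming a 35 V rated part (the question does not give the capacitor voltage, so that figure is an assumption):

    # Quick energy sanity check for the capacitor-plus-LED idea.
    # The 35 V rating is an assumption (a common rating for 10,000 uF electrolytics).
    C = 10_000e-6          # farads
    V = 35.0               # volts (assumed rating)
    E_cap = 0.5 * C * V**2 # joules stored in the capacitor

    P_led = 100.0          # watts, LED chip rating
    t_flash = 0.5          # seconds
    E_needed = P_led * t_flash

    print(f"Energy in capacitor : {E_cap:.1f} J")    # ~6 J
    print(f"Energy for the flash: {E_needed:.1f} J") # 50 J
    # The capacitor holds only about a tenth of the energy needed to run the LED
    # at full power for half a second, and its voltage sags as it discharges.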
H: AM (Amplitude Modulation) Colpitts How would I amplitude modulate the Colpitts oscillator directly? Can this be done by making Vcc the audio signal and DC-offsetting the audio signal in order to get the Colpitts to fundamentally oscillate? AI: You can't push a Colpitts oscillator (or Hartley or Clapp) too far, because you will either get an overly distorted sine wave or you'll kill oscillations completely. Even if distortion wasn't too bad (i.e. you didn't push things too much) there would be an associated frequency modulation due to the changing bias conditions brought about by amplitude changes. The underlying mechanism here is the so-called "Miller" capacitance between base and collector - basically the depletion layer in the PN junction in that part of the BJT is modulated by the voltage across collector and base. In fact any oscillator of this type produces a cyclic distortion that is related to the change in capacitance due to the actual oscillation voltage appearing between base and collector. So, my advice is to add an amplitude modulator to the output of the Colpitts, and this can be easily done with a diode (and the appropriate DC control levels) plus an output filter resonant at the carrier frequency. Here's a very simple AM circuit idea you can experiment with: - The blue waveform is the modulated carrier and the red signal is a triangle-wave modulation signal. You can get quite respectable results with a really small handful of components.
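For reference, the waveform being aimed for is just the carrier multiplied by (1 + m·modulation). A short Python/NumPy sketch with arbitrary, illustrative frequencies (a sine is used here instead of the triangle wave in the plot):

    import numpy as np

    # Illustrative AM waveform: a carrier multiplied by (1 + m * modulation).
    # Frequencies here are arbitrary choices for illustration only.
    fs = 1_000_000            # samples per second
    t = np.arange(0, 5e-3, 1/fs)
    f_carrier = 50e3          # Hz
    f_mod = 1e3               # Hz
    m = 0.5                   # modulation index (keep below 1 to avoid overmodulation)

    modulation = np.sin(2 * np.pi * f_mod * t)
    am = (1 + m * modulation) * np.cos(2 * np.pi * f_carrier * t)

    # The envelope of `am` follows `modulation`; a diode plus a filter resonant
    # at f_carrier is one simple hardware way to approximate this multiplication.
    print(am[:5])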
H: Why does this isolation transformer have another transformer in it? The isolation transformer I'm looking at: http://www.mouser.com/ds/2/336/HX1188NL-515471.pdf My question is why is it there? Is it not enough to have just the transformer on the left? AI: That "transformer" is a common mode choke. It's used to suppress EMI (either being induced onto the line and affecting the circuit or being transmitted from the circuit out over the line). It's called "common mode" because it's very effective in suppressing HF currents that are common to both lines.
H: N-MOSFET and high frequencies I am using this MOSFET (http://www.vishay.com/docs/91017/91017.pdf) as a switch. Vd is a sinusoid, amplitude = 2 V. Vgs is either 0 V or 5 V. At low frequencies (less than 1 MHz), everything is fine: if Vgs is 0 V, Vs is 0 V; if Vgs is 5 V, Vd is a sinusoid. However, when I use frequencies higher than 3 MHz, Vd = Vs whatever Vgs is (0 V or 5 V). I suppose that the MOSFET cannot support every frequency, but I do not see such a limit in the documentation. Is it implicit? Thank you for your time! AI: A MOSFET gate is a capacitor. The data sheet will list the capacitance. The circuit you use to drive the gate will need sufficient current drive capability to charge such a capacitor within a time small compared to the period of your driving signal. Another way to think of this is as a low-pass filter comprised of the output impedance of your gate drive circuit and the gate capacitance of the MOSFET. This will be what limits your gate switching frequency.
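To put numbers on that RC picture, here is a hedged Python sketch. Both values are placeholders - read the actual input capacitance (Ciss) from the datasheet and estimate the source impedance of whatever drives the gate:

    import math

    # The gate looks like a capacitor; together with the driver's output
    # impedance it forms a low-pass (RC) network that limits how fast the
    # gate voltage can actually switch. Both values below are assumptions.
    C_iss = 1e-9      # F, assumed input (gate) capacitance
    R_drive = 1e3     # ohms, assumed driver output impedance

    tau = R_drive * C_iss                    # RC time constant
    f_3db = 1 / (2 * math.pi * tau)          # gate-drive bandwidth

    print(f"time constant  : {tau*1e9:.0f} ns")
    print(f"-3 dB frequency: {f_3db/1e6:.2f} MHz")
    # If the switching rate is not well below f_3db, the gate never charges
    # or discharges fully within a cycle.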
H: What does the reference designator NT mean? I found NT1 in a Microchip demo board schematic (103-00419-R1.pdf) but can't figure out what it means. There is a box with a note saying "Default connection between 1-2. User to cut the trace if 5V VDD is needed.", with NT1 as a little box with wires going to pins 1 and 2 on the header. I can't find the refdes on Wikipedia or Google in general, so I'm out of ideas. Here's a screenshot of the part in question (on the right side): AI: "Net Tie". You will note that on one side of the net tie 'component' the net is named 3.3V, on the other VDD. PCB CAD systems usually assume that all connected nets have the same name, but that components can have more than one pin, so to allow you to connect two nets together in a defined place you create a component having two pads, no BOM entry and a footprint that connects the two pads together. Placing this component then satisfies the need for each net to have a single name while allowing the nets to be connected together at a defined point. Altium (which is what that was drawn in) calls these net ties, hence NT.
H: Protecting circuit from railway's high voltage I am designing a control system which is going to be installed near a functioning railway. The system has many interfaces that are externally connected to different devices; it has Wiegand, RS485, Ethernet and a few digital IO pins. I am currently thinking of using varistors on every pin except the Ethernet of course, current-limiting resistors to avoid short circuits, and fuses on power pins. Are TVS diodes/Zener diodes or optocoupling necessary, or is the varistor enough in this case? (The MCU is likely going to be an ATmega1284 or another AVR8.) My question is: what are good practices and solutions to avoid EMI and ESD and any other interference caused either by humans or the railway? I know Ethernet and RS485 are protected; we will be using shielded cables for those. AI: I have monitored, with a spectrum analyzer, a 300 MHz CATV cable-TV plant 200 meters from an electric traction railway. The ingress was huge and caused noticeable interference when the train went by. The main causes were earth-ground oxidation at the coax and the quality of the coax itself (single vs. double shield, moisture ingress from cuts). This quality is characterised by a measurement of transfer impedance and CMRR - in other words, ingress due to imbalance of the differential lines (signal and return) over the entire spectrum of interest, and the current induced across this unbalanced differential impedance. The solutions are many: low balanced differential impedance, and high common-mode impedance over at least 5x the signal bandwidth, or whatever does not cause group-delay distortion. To assume your twisted pair is only susceptible to transient voltages, and not to signal-integrity problems, is naive. If you address signal integrity with a test using equivalent A/m impulse noise with xx ns rise time (for magnetic coupling) and V/m impulse noise with the same rise time (for capacitive coupling), you can do near-field experiments along your chosen signal cable, measure the signal induced into matched impedances on a spectrum analyzer or DSO, and extrapolate. Once you fully understand the ingress from these sources and can simulate it with dV/dt and dI/dt levels representative of reactive switching events, then you can be confident you are protected from transient levels. Assume your ground connections age from 10 ohms to >100 ohms over time unless you have a spec that says otherwise - better than power lines - and include this in your design and design verification test (DVT) plans. Common solutions are STP cables, controlled earth-ground impedance (braid, flat wire or Litz wire), CM chokes around the cables at both ends, and impedance-matched SMD Ethernet baluns on the board for each jack. In my "green weenie" days as a junior design engineer in aerospace in the late '70s, I once made the faulty assumption that my "long" 9600 baud RS485 differential cable link was fine at both ends for a rocket pre-launch SCADA network. After I left the company, having never had a chance to do a field trial, one engineer told me that whenever the VHF transmitter was used it caused all kinds of data errors. The moral: learn from my mistakes and include a Pi filter (essentially a common-mode choke/balun with an RF load cap) at both ends, to stop demodulated AM from VHF/UHF causing signal-integrity errors.
H: Quality parameters to include in contract with hardware supplier My company (not related to electronics at all) is now getting a simple control board for a product designed by a third party. The very same third party will also be in charge of building such control boards, realistically for the lifetime of the product. Last time such an arrangement was made, no quality specifications (MTBF, DOA, maximum acceptable number of returns from clients, etc.) were agreed, and it turned out a mess: low-quality design, 1% DOA and an 8% failure rate within the first year. Still, since no "quality agreement" was signed beforehand, we were unable to get compensated or to obtain the design and get it reviewed/built somewhere else. What clauses and what defectivity thresholds should I include in the contract? AI: You should understand that there's a contract, and there's reality. For the contract to become reality there must be a lever to pull (clauses based on measurable things), and you must be able to pull that lever. Putting clauses into the contract guarantees nothing if you cannot enforce them properly. "No quality specifications (MTBF, DOA, maximum acceptable number of returns from clients, etc.)" - You should review contract templates already available for your product family and compile the best from them, keeping in mind peculiarities like local law, legal-system differences (in case of an offshore contract), even accounting and finances (e.g. taxes). "Low quality design, 1% DOA and 8% failure rate within first year" - This is about the levers you really can pull against your supplier: can you terminate the contract? Can you impose penalties, and how collectable are they? What can the supplier do to mitigate issues in the field, and how quickly can they modify the design in production? Finally, what is the damage to the brands - yours and your supplier's? "We were unable to get compensated" - Compensation, or penalties, need not be tied to the partner's production costs; they may be tied to your customer-facing costs, which speak business rather than technology - for example an SLA (service level agreement) or system availability, including business services based on the device. You must think about which issues you will shift to the supplier - not only their design and production, but also business issues arising from their messing around with the design. "Unable ... to obtain the design and get it reviewed/built somewhere else" - Question: who owns the intellectual property (IP) for the design? Did you pay for the design development, and who is the owner of it? If it is not you, then you are totally out of control and at any moment you risk being left out of business (as far as usage of the design is concerned). Consult an IP lawyer to see what the outlook and realities are regarding the design ownership. And finally - if you experience, see or feel issues with the other party, you must consider termination as soon as possible; terminating at a later stage will be much more painful. Be decisive about your business. P.S. I have managed double-digit $M services contracts, so I am reasonably experienced in the subject. It does not actually matter how much money is involved: contracting is about knowing your business and its risks, and being rigorous with yourself and others around you - in the interests of your business. Update, following your questions: "Have the board tested to a specific ESD standard. What standard should we refer to for consumer products?"
I cannot answer that; an internet search shows that every country may have its own standard. You also need to consider both situations: when your device is affected by ESD, and when another entity is affected by your device (e.g. a human getting an electric shock). "Set a maximum percentage of in-field failures that would then trigger a "free" board redesign. What is a reasonable one?" - It is not possible to advise anything sensible here, as this depends on the type of device you have, its usage, the environment it is used in, etc. In general you must supply your customers with devices and with documentation on how, where and when to use those devices, and if a fault occurs there must be an investigation into why it occurred - the customer violated the T&C, the board is defective, or the design is defective. Board defective - you push the manufacturer; design defective - you push the designer. You do not want to wait until a horde of customers scream at you and dump your brand. "Where may I find related contract templates?" - Ask your lawyers, or search the internet, but you must be skilled enough in legal language and related matters to make a good piece of work out of it. "Root cause is bad design of the power stage. The HW design/supply though never recognized this; if I was in charge they would never work with us again. Sadly my boss' opinion is different (they are cheap), and that is why I need to get things written down really well - to protect the company and myself." - OK, now it is clear: you are in cover-my-own-*ss mode. I wonder what your position is in this organization. Anyway, to make things right: you design, or you request the supplier to design, very good user documentation which details how the device should be used, when and where, covering as many misuse situations as possible (aka "do not boil the watch"). In the contract, or in the terms and conditions with customers, you explicitly have the customer sign a paper stating that they have read and understood the usage terms and conditions, and that if misuse is proven, the customer will pay for repairs (how much - consult with the supplier). You amend the contract with the supplier so that (a) all devices returned from the field will be subject to their examination, root-cause analysis and a conclusion on whether the issue was misuse or a design/manufacturing defect, and (b) all devices whose conclusion shows a design/device defect will be repaired or replaced by the supplier for free, with shipping costs (back and forth) covered by the supplier. However, doing so you risk your customers becoming angry, because you will need to think about how to compensate them while their device is in for repair, and how to ensure the supplier does not trick you and the customer by concluding that the case was misuse. This is purely a business and business-model issue, which your boss should approve, and of course it is off-topic for discussion here. "I'm an engineer in this "organization"." - Some more insight then. You should not be forced to write clauses into contracts; it is simply not your job - if that stance and wording is at all acceptable in this organization. As an engineer, your task might be to give input into the contract - the values you asked about originally, e.g. MTBF, which can be calculated based on the MTBFs of the components used - a simple and crude method, but it gives at least some indication. However, you may be smarter than that; look here.
It lists the following items, which together compose a reliability testing and assurance framework: Single test: each device, before it is shipped to the customer, is tested for DOA (as the engineer, you develop the procedure); Test plan tailored to your product: have a quick questionnaire for the customer for when they think the device has failed, documenting what was done, how, which symptoms appeared, and what the customer expects; Life-cycle approval: get input from the supplier on the life cycle of the device they developed and manufacture (very similar to MTBF); Product benchmark: most probably you are not alone on the market, so look at how competitor products are doing (or similar products, if yours is unique) from a technical point of view (do not consider the business side). Then compile a presentation for your boss so that he can see your estimates and his risks.
H: Upper limit for resistance values in voltage divider? I am measuring the voltage of a 24 V battery using the ADC of an Arduino Uno. I use a voltage divider to bring the voltage below 5 V (the maximum the ADC can accept). I noticed that, as long as the ratio of R1 and R2 is the same for my voltage divider, it's better to use higher resistance values to avoid slowly discharging the battery. I tried using resistors up to the megaohm range. What do I sacrifice by using higher and higher resistances? Intuition tells me that this cannot go on forever, because a high enough resistance would be equivalent to an open circuit. From reading other similar questions it seems I might be sacrificing accuracy, but I am not sure, as the other questions typically have a different setup. If it is accuracy, is there a formula for that? When does the standard voltage divider formula break down? Gigaohms, teraohms? How do I calculate that? P.S. I am aware of this question; however, the answer there seems very convoluted, comparing load and no-load scenarios (where I don't have any load at all). If that is the answer, I do not understand it. AI: The voltage divider breaks when it is not a voltage divider anymore. And when does that happen? When the current through the load starts to be of the same order of magnitude as the current through the resistors of the divider. But this may not be the only factor in choosing the divider values. For the specific case of the Arduino ADC, and according to this link, Input impedance of Arduino Uno analog pins?, the recommended source impedance of anything connected to an Arduino ADC input is 10 kOhm max. Since your source impedance is dominated by your resistors, the resistors of the divider should also be on the order of tens of kOhm at most.
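As a concrete illustration, here is a short Python comparison of two dividers with the same ratio but very different impedance levels. The resistor values are examples, and the 10 kOhm limit is the recommendation quoted above:

    # Comparison of two dividers with the same ratio but different impedance levels.
    def divider(v_in, r_top, r_bottom):
        v_out = v_in * r_bottom / (r_top + r_bottom)
        r_thevenin = r_top * r_bottom / (r_top + r_bottom)  # source impedance seen by the ADC
        return v_out, r_thevenin

    for r_top, r_bottom in [(47e3, 10e3), (4.7e6, 1e6)]:
        v_out, r_th = divider(24.0, r_top, r_bottom)
        ok = "OK" if r_th <= 10e3 else "too high for the ADC sample-and-hold"
        print(f"R_top={r_top/1e3:.0f}k R_bottom={r_bottom/1e3:.0f}k -> "
              f"{v_out:.2f} V, source impedance {r_th/1e3:.0f} kOhm ({ok})")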
H: Opamp Unity gain follower stability Please explain the stability of a unity-gain follower, considering the op amp both as an ideal device and as a practical one. Edit: I found the explanation here: Decompensated Operational Amplifiers AI: Ideal op amp: stable with infinite bandwidth. Practical: look for op amps that are guaranteed "unity gain stable" (most modern op amps are unity gain stable). The bandwidth can be determined by looking at the "unity gain bandwidth" or sometimes the "gain-bandwidth product." The bandwidth can be further limited for large-amplitude signals by the slew rate, usually expressed as the change in output voltage per unit time.
H: How to find input impedance of RA30H1317M1 RF power amplifier module? I am building an amateur radio VHF transmitter. I want to use an off-the-shelf transmitter module like the DRA818V (or similar) as the drive amplifier and the RA30H1317M1 as the final amplifier. This is my idea: simulate this circuit – Schematic created using CircuitLab The driver outputs 500mW, while the RA30H1317M1 has an absolute maximum input power of 100mW. I have studied the datasheet of the RA30H1317M1, but I can't find the input impedance. If I knew the input impedance, then I could calculate the divider. My questions: Is it okay to drive the RA30H1317M1 this way, or what is the proper way to do it? Should there also be an LPF between the amplifiers? AI: The test block diagram shows the input matched into 50 ohms, and the electrical spec says the input match is better than 3:1, so it seems likely that the input is specified to be somewhere in the 3:1 circle around a nominal 50 ohms resistive. Note that full output is specified at 50mW input, meaning your driver is good for ~10dB more than you need; personally I would pick a smaller driver stage, as it will reduce your power consumption - the final is not linear anyway, so IMD is not a concern. Some pad at the input is not a horrible thing, maybe 3dB or so at a nominal 50 ohms; it helps with stability. A BPF at the input is usually pretty nasty unless you know what you are doing: you do not want the filter output to go high-Z where the amp still has gain, or it will honk.
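If you do decide to pad the drive down, a symmetric pi attenuator in a 50-ohm system is a simple option. Here is a Python sketch using the standard pi-pad formulas; the full 10 dB shown is just the arithmetic of 500 mW down to 50 mW, and as noted above a smaller pad plus a smaller driver may be the better choice:

    import math

    # Symmetric pi attenuator in a 50 ohm system.
    def pi_pad(atten_db, z0=50.0):
        k = 10 ** (atten_db / 20)                 # voltage ratio
        r_shunt = z0 * (k + 1) / (k - 1)          # the two shunt resistors
        r_series = z0 * (k * k - 1) / (2 * k)     # the series resistor
        return r_shunt, r_series

    needed_db = 10 * math.log10(500 / 50)         # 500 mW available, 50 mW wanted
    r_sh, r_se = pi_pad(needed_db)
    print(f"needed attenuation: {needed_db:.1f} dB")
    print(f"pi pad: shunt {r_sh:.0f} ohm each side, series {r_se:.0f} ohm")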
H: How to make a variable frequency circuit Can anybody explain how I can make my circuit give a variable-frequency output using a variable resistor, not a trimmer/variable capacitor? My circuit is shown in the figure. AI: If you replace C4 with back-to-back varicap diodes you will get some control of the filter frequency. A BB171 has about a 20:1 control range, from about 5 pF to about 80 pF, but don't quote me on this. The centre point of the two varicap diodes is used for DC control and can be connected to a pot via a 100 kohm resistor. There are plenty of Google pictures of examples. A 20:1 range in capacitance produces a bit more than a 4:1 range in resonant frequency tuning. It's a square-root thing. One thing to watch out for, of course, is the change in tank Q as the capacitance changes across the range. This may or may not be a problem for you. The output of your circuit is also very susceptible to loading impedance, so watch out for this too, and maybe consider using an emitter-follower stage after the tank.
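The "square-root thing" in numbers - a Python sketch of the tank resonance over the quoted capacitance span. The inductance is an assumed value purely for illustration:

    import math

    # Tuning range of an LC tank as the varicap swings over its capacitance range.
    L = 10e-6                     # henries (assumed tank inductance)
    c_min, c_max = 5e-12, 80e-12  # farads, span quoted in the answer

    def f_res(L, C):
        return 1 / (2 * math.pi * math.sqrt(L * C))

    f_low, f_high = f_res(L, c_max), f_res(L, c_min)
    print(f"f at {c_max*1e12:.0f} pF: {f_low/1e6:.2f} MHz")
    print(f"f at {c_min*1e12:.0f} pF: {f_high/1e6:.2f} MHz")
    print(f"tuning ratio: {f_high/f_low:.2f} : 1  (= sqrt of the capacitance ratio)")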
H: Load-line of MOSFET when analyzing triode mode When we are analyzing a MOSFET within a circuit (e.g. as an amplifier) with a drain or source resistor, we must consider the load line of the transistor, like so: When used in triode mode, a.k.a. ohmic mode, it is said that drain current increases linearly with drain-source voltage, and when used in the active region it is often said that drain current changes almost not at all with drain-source voltage. How is this possible if increasing drain current decreases drain-source voltage, and vice versa, when considering the load line? AI: The load line in your graph shows what current flows (Id) over a varying Vgs. Your text describing triode mode and saturation concerns the behavior of Id over varying Vds (not Vgs) while keeping Vgs constant. Here the load line crosses the horizontal (well, almost horizontal) part of the blue curves; that is the saturation region. The triode-mode region is the part on the left where the blue lines are nearly vertical.
H: Why do four Apple model numbers (A1457, A1518, A1528 & A1530) have two FCC IDs (BCG‑E2643A & BCG‑E2643B)? These are model numbers of the iPhone 5S. I want to know why two model numbers of iPhones have the same FCC ID. I am assuming that, as they are two models, they are NOT the same for Apple, yet they have the same FCC ID. What constitutes a change of FCC ID and when does it take place? Another example would be the iPhone 5, where models A1428, A1429 and A1442 have the same FCC ID, BCG‑E2599A. Why so? AI: I get the impression that Apple has tested different functions on multiple pieces of hardware with similarities. From the FCC: the FCC IDs apply to different frequencies. It could be that the Wi-Fi/Bluetooth module is tested and filed under one number for a specific model, but the radio was tested on another model. From the 2643B test report: hints of similarities with 2643A, except for the Wi-Fi. Usually similarity is used so you don't have to retest every configuration (memory, battery, etc.) of a hardware design. But the exact definition of similarity probably has to be discussed with the FCC.
H: Convert 24 V to 20 mA (none is variable) I have a PLC (take the "P" with a pinch of salt); it has some analogue inputs (4-20 mA) that I need to use to detect whether 24 V is present or not. I can program the logic to say that above X mA 24 V must be present, and below X mA it mustn't be present (the reason is I have some controllable relay outputs, but there is no feedback to say which state they are in, and the only spare inputs are analogue). Is it simply a case of I = V/R, so on the analogue input common I stick 0 V DC, and on the input I put a 1200 Ohm resistor and the 24 V DC into that? My logic could then be: if >10 mA, 24 V must be present; if <10 mA, it isn't? AI: Your idea is correct. simulate this circuit – Schematic created using CircuitLab Figure 1. The equivalent circuit for your setup. Most 4 - 20 mA inputs use a resistor to convert the current to a voltage of either 5 V (250 Ω) or 10 V (500 Ω) at 20 mA. Don't forget to add the series resistance of the 4 - 20 mA input to your calculation. With the configuration shown in Figure 1 you will get \$ I = \frac {24}{1450} = 16 \; mA \$ and a power dissipation in R1 of \$ P = I^2 R = (16m)^2 \cdot 1k2 = 330 \; mW \$. If the device has a 500 Ω input the current and power will be reduced.
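Re-running those numbers in a short Python sketch; the 250 Ω internal burden resistor is the common 5 V / 20 mA case shown in Figure 1 - check the PLC manual for the actual figure:

    # External 1200 ohm resistor in series with a typical 250 ohm internal
    # sense resistor on the analogue input.
    v_supply = 24.0
    r_external = 1200.0
    r_sense = 250.0       # ohms, assumed internal burden resistor

    i = v_supply / (r_external + r_sense)
    p_r = i**2 * r_external
    print(f"loop current        : {i*1e3:.1f} mA")  # ~16.6 mA, well above a 10 mA threshold
    print(f"resistor dissipation: {p_r:.2f} W")     # size R1 accordingly (a 0.5 W part or larger)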
H: How to connect a DIY audio signal to my "expensive" audio interface I'm a hobbyist and after some experimentation (around 4 years) I want to connect the signals I create on the breadboard to my expensive audio interface. Until now I was using the "mic" input on my laptop to connect the signals and I haven't damaged it. I'm using a 40 kΩ resistor in series with each of my 2 signals. I create my signals with a "modular synthesizer" running on an Arduino Due, DACs and low-voltage op-amps (MCP6002/4), and I power my circuit with the Due's power supply (3.3 V). Now my "expensive" audio interface, a DSP platform that costs around 4k euros, has two mic/DI inputs (located at the same plug) with just these specifications: "2 x MC (switchable phantom power), 2 x instrument input with impedance transformer, adjustable input level 2 x Mic, Hi-Z or Line XLR/jack phantom power 48 V (switchable)" So I would appreciate some info about the power of the mic/line inputs in general. Should I take any precautions electrically so as to avoid damaging the line inputs of my audio interface and the low-voltage signals of my audio experimentation circuit? (Will accidentally activating phantom power destroy my circuit and Arduino? Does it pass through the unbalanced 1/4" jack?) Are the 40 kΩ resistors a good value? Can you point me to an article that discusses line connections, levels, impedance, phantom power, direct boxes etc.? EDIT: The mic input (the physical one) takes an XLR cable, but at the center there is a hole that takes a jack input. It is labeled as MIC/DI. The signals appear at the same hardware source module in the software. So to make it clear: I don't want to connect to the "mic" input but to the "DI" line/instrument input (which is located at the center of the MIC XLR connector). I suspect phantom power only goes via the XLR cable (3 pins). The hardware also has balanced line inputs, but I'm using these to connect a secondary audio interface. Lastly: should I measure the voltage at the audio interface inputs with my multimeter? The physical input is like this: AI: Don't use the mic inputs, they're not for that. You plug microphones into microphone inputs. The phantom power level of 48 V might fry your Due as it's a DC level designed to operate active microphones. Kit is interconnected via line I/O. That's a higher-level signal but line in & line out are specifically designed for this. So use your line input, which must be available on a €4k DSP. This will tell you all you need to know:- The final thing to know is that the input impedance for a line input is circa 10kΩ, so pretty high. If you drive your line out with a simple op amp follower, you'll just impedance bridge between the two and all will be well. Only you can decide if your DSP is consumer or professional level. If you start with consumer based line levels, you'll be fine and can then see if it feels as if the input can take a higher rms level. Since you've built your kit yourself, it's likely that impedance bridging (low -> high) will also allow use of very high impedance inputs. These will be labelled high-Z and might be of the order of 500kΩ. They're for piezo equipment or a guitar but should also be okay for your Due. A tip is to feed your kit's output into a PC audio card's line input as a level test. If your PC doesn't toast and you can hear it, then there you go...
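For a feel of what the existing 40 kΩ series resistor does, here is a small Python estimate using the ballpark input impedances mentioned above (circa 10 kΩ for line, circa 500 kΩ for Hi-Z). These are assumptions, not specs of this particular interface:

    import math

    # Attenuation of a 40 kOhm series resistor into two ballpark input impedances.
    r_series = 40e3
    r_line_in = 10e3      # ohms, typical line input (assumed)
    r_hiz = 500e3         # ohms, typical Hi-Z instrument input (assumed)

    for label, r_in in [("line input", r_line_in), ("Hi-Z input", r_hiz)]:
        gain = r_in / (r_series + r_in)          # simple resistive divider
        print(f"{label}: {gain*100:.0f} % of the signal "
              f"({20*math.log10(gain):.1f} dB)")
    # Into ~10 kOhm the resistor drops about 14 dB; into ~500 kOhm it barely matters.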
H: Working out AT28C16 LED output from datasheet I'm trying to work out how to connect some LEDs so I can view the value stored in the AT28C16 (pdf) chip. I'm actually following a YouTube series. In the video he chose a 330Ω resistor, but I'm not sure how he would work this out given we don't know the maximum VOH value. My LEDs are rated at 1.8 V - 2.2 V, 20 mA. Please could you explain how to read the datasheet to calculate the resistor value required. Thanks. AI: Normally Voltage Output High will be a little less than VCC, as most ICs do not use internal boost circuits that could make it possible to be higher. In this case, you could look at the Absolute Maximum rating: Absolute Maximum Ratings* Temperature Under Bias............... -55°C to +125°C Storage Temperature ................. -65°C to +150°C All Input Voltages (including NC Pins) with Respect to Ground ...............-0.6V to +6.25V All Output Voltages with Respect to Ground ...............-0.6V to VCC + 0.6V Voltage on OE and A9 with Respect to Ground ...............-0.6V to +13.5V So the most you should ever see on this IC is a VOH of (VCC - GND) + 0.6V, but it will most likely be around 5V. Using Ohm's law for the LED resistor, (5V - 2.2V) / 330 ≈ 0.008A, or 8 mA. But since the LED would not show a 2.2 volt forward drop at 8 milliamps, you should adjust the resistor up a bit. 470Ω or 500Ω may be better. In any case, you are using the EEPROM in a non-standard way, which means it does not have specs for this type of operation. There is no way to know how well the outputs will deal with 8mA on each of the pins, whether the voltage drops due to the higher current, or whether they will blow. The VOH spec is only tested at 0.4mA source.
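A hedged sizing sketch in Python. VOH is taken as roughly VCC (5 V), and the 5 mA target is an assumption chosen to stay gentle on outputs that are not specified for LED loads:

    # Resistor sizing sketch for driving an LED from an output pin.
    v_oh = 5.0      # volts, assumed output-high level (roughly VCC)
    v_f = 2.0       # volts, LED forward drop at low current (check your LED)
    i_target = 5e-3 # amps, assumed per-pin target current

    r = (v_oh - v_f) / i_target
    print(f"ideal resistor: {r:.0f} ohm -> use the next standard value up (e.g. 620 or 680 ohm)")

    # Cross-check what a given standard resistor would actually draw:
    for r_std in (330, 470, 680):
        print(f"{r_std} ohm -> about {(v_oh - v_f)/r_std*1e3:.1f} mA")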
H: Driving a laser diode with an LM317 I want to make a line-of-sight sensor. I want to drive a laser diode with constant current and aim it at a photodiode; then, using a transimpedance amplifier, I will monitor any changes in light intensity. The reason I am using a photodiode and a laser diode is that I want fast response times. I have in my hands 2 OPV382 laser diodes that I fail even to turn on. I am using an LM317T to provide constant current to my laser diode. The datasheet states that the diode starts conducting at less than 2mA and that the maximum continuous current should be 12mA. I decided to try with a low current first, just to be on the safe side, but nothing happened. (Datasheet: http://eu.mouser.com/ProductDetail/Optek-TT-Electronics/OPV382/?qs=NVJATC80C48isfMgbQ%2FZnA%3D%3D) My circuit was a resistor between the adjust and output pins of the LM317T to set the current, and then the laser diode connected to the output and to ground. I used various resistor values from 220 Ohms down to about 70 (and connected the laser diode only momentarily, but still nothing). I also added another resistor from adjust to ground to set the voltage to 2.2V, which is the maximum forward voltage, and again nothing. Can you explain to me what I'm doing wrong? The supply voltage is 5V (measured 4.9V). The only thing I can think of is the series resistance of the laser diode, which is 20-55 ohms according to the datasheet; could this be affecting the rest of the circuit? AI: The laser diode requires 7 mA and it will drop a maximum of 2.2 volts at that current. If you have a regulated 5 volt supply, you can simply put a 400 ohm, or slightly larger, resistor in series with it. A 1/10 watt or higher power rating for the resistor will be more than sufficient. Since the laser is infrared, you will not be able to see it with the naked eye. But a clever trick is to look at it with your phone video camera. Most phones are quite sensitive in the infrared range. You can confirm that this works with your phone by trying it with your TV remote. You should see flashes from the IR LED when you are pushing a button on the remote.
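Following the answer's suggestion (plain series resistor from the regulated 5 V rail, skipping the LM317), a quick Python check of the resistor value and dissipation, using the datasheet figures quoted above:

    # Series-resistor sketch: drive the OPV382 straight from a regulated 5 V rail.
    v_supply = 5.0
    v_f = 2.2        # volts, laser diode max forward voltage
    i_target = 7e-3  # amps, recommended operating current

    r_ideal = (v_supply - v_f) / i_target
    print(f"ideal series resistor: {r_ideal:.0f} ohm")

    # With a standard 430 ohm part (next value up from 400 ohm):
    r_std = 430
    i = (v_supply - v_f) / r_std
    p_r = i**2 * r_std
    print(f"430 ohm -> about {i*1e3:.1f} mA, resistor dissipates {p_r*1e3:.1f} mW")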
H: 20 MHz AM transmitter and receiver - Receiver does not receive I am building a circuit for an AM transmitter and receiver. I took it from http://www.pyroelectro.com/projects/pyro_rf_receiver_27mhz/index.html. Receiver: I have already built the transmitter, and the antenna seems to radiate, as there is some interference with other devices. However, my question in this part of the circuit is whether the inductor L3 (the one right before the antenna) is needed, and why? Can I change its value? On the other hand, I have also built the receiver; however, it does not seem to receive anything. It is important to say that I tailored the design so that I could use a 20MHz carrier. In the transmitter I used a 20MHz crystal, and in the receiver (tank circuit) I used an inductor of 10uH and a combination of ceramic capacitors of around 6.66 pF. I suppose I do not detect anything because I used a choke inductor (the pre-made kind that comes in a green package) and did not wind my own (I have no way of measuring one). Also, I do not see any envelope detector in the receiver... which part of this circuit acts as the envelope detector? And finally, why does it not work? I really appreciate your answers. AI: In your text you describe this as a 20 MHz transmitter, but the original design is for a 27 MHz transmitter and receiver. If you have attempted to alter the design, it would be helpful for you to describe your design changes. The transmitter is a very marginal design that will have significant harmonic content in its output. Take care that you are meeting any local regulations, as this could cause interference to other services. On the other hand, you can use this harmonic content to confirm that your transmitter is working, if you have a receiver capable of detecting any of the harmonics. L3 in the transmitter helps to match the antenna impedance to the final amplifier of the transmitter. Removing it without lengthening the antenna will reduce the ERP and therefore shorten the effective range of the transmitter. On the receiver side, the detection is performed by the regenerative design. The values of L2 and C2 are critical for correct functioning of the receiver.
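As a quick check on the tank values given in the question (10 µH with roughly 6.66 pF), here is a short Python calculation of the resonance; the second figure shows the capacitance that would put the same inductor exactly on 20 MHz:

    import math

    # Receiver tank tuning check: 10 uH with about 6.66 pF of capacitance.
    L = 10e-6
    C = 6.66e-12

    f0 = 1 / (2 * math.pi * math.sqrt(L * C))
    print(f"tank resonance: {f0/1e6:.1f} MHz")   # ~19.5 MHz, close to the 20 MHz carrier

    # For exactly 20 MHz with the same 10 uH inductor, the required capacitance is:
    f_target = 20e6
    C_needed = 1 / (L * (2 * math.pi * f_target) ** 2)
    print(f"capacitance for 20.0 MHz: {C_needed*1e12:.2f} pF")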