H: Can two electrolytic capacitors be made into a bipolar? I know that if you put an electrolytic capacitor in backwards it explodes, and it's quite fun to observe them exploding. I am wondering if you can use two of them (and some diodes) to turn them into a bipolar capacitor so it may be used with AC signals? AI: Yes, you can. Connect them in the following manner: --|(--)|-- i.e. short the two -ve terminals together. The other two +ve terminals can then be used as the leads of a non-polar capacitor. Note that connecting them like this reduces the capacitance; with two equal capacitors it is halved.
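To make the halving explicit (a quick check of my own, assuming two equal capacitors of value \$C\$ connected back to back, i.e. in series): \$ C_{total} = \frac{C \times C}{C + C} = \frac{C}{2} \$. So two 100 µF electrolytics wired back to back behave roughly like one 50 µF non-polar capacitor, ignoring the leakage behaviour of whichever half is reverse-biased at any instant.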
H: Sending CAN protocol data (1 Mbps) via serial port I would like to build a tool for viewing CAN protocol data on my computer. The CAN data is captured using a microcontroller (STM32F4 Discovery board) and sent to a PC via a serial port. My requirement is to send only the CAN ID, DLC, DATA and a time stamp (i.e. at what time a particular CAN ID was received). I have a few questions that I couldn't find good answers to after searching the Internet: Is it possible to send CAN data via the serial port using some compression technique for the data packets, since the CAN bus operates at 1 Mbps and I am planning to run the serial port at 115200 baud? If yes, could you provide information or links? Have you seen any similar projects which send 1 Mbps CAN data via a serial port? If yes, could you provide information or links? AI: I suppose that your CAN devices won't be sending data all the time. Implement a buffer (e.g. a circular buffer) and fill it while you receive data, then send out the data via your serial port. If the buffer overflows, make sure you signal that to the host as well. Compression only makes sense when you know your data: if the data is not essentially random, you could use e.g. RLE (run-length encoding), which is a quite simple type of encoding and doesn't require much computing power. But as I said before, I don't think you will need that.
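A minimal sketch of such a circular buffer in C++ (the frame layout and the buffer size are my own illustrative assumptions, not part of the original answer):

#include <stdint.h>

// Illustrative record: ID, DLC, data and a timestamp, as listed in the question.
struct CanRecord {
    uint32_t id;
    uint8_t  dlc;
    uint8_t  data[8];
    uint32_t timestamp_us;
};

#define BUF_SIZE 256              // power of two, chosen arbitrarily here

static CanRecord buf[BUF_SIZE];
static volatile uint16_t head = 0, tail = 0;
static volatile bool overflowed = false;

// Called from the CAN receive interrupt: store a frame if there is room.
void buffer_put(const CanRecord &r) {
    uint16_t next = (head + 1) % BUF_SIZE;
    if (next == tail) {           // buffer full
        overflowed = true;        // remember to tell the host about the gap
        return;
    }
    buf[head] = r;
    head = next;
}

// Called from the main loop: fetch the next frame to push out over the UART.
bool buffer_get(CanRecord &r) {
    if (head == tail) return false;   // empty
    r = buf[tail];
    tail = (tail + 1) % BUF_SIZE;
    return true;
}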
H: Arduino and FTDI Communications Short Version Can I make a serial connection from an Arduino to a device with an FTDI connector? Long Version I have a WiFi shield for an Arduino that is misbehaving. I know there is an FTDI connector on the shield that you can use to do diagnostics on the chip, but I do not have an FTDI cable at the moment. Is the FTDI connector just a serial interface or is it something different? If it is serial, would it not be possible for me to write an application for the Arduino that would issue commands over the FTDI connector for diagnostic purposes? AI: "If it is serial, would it not be possible for me to write an application for the Arduino that would issue commands over the FTDI connector for diagnostic purposes?" Yes. You would need to implement a second UART or serial port, as the hardware serial port on the Arduino is tied to the serial/USB connection. The original serial port talks to your computer, and the second talks to whatever serial device you would like, in this case the WiFi shield. See http://www.arduino.cc/en/Reference/SoftwareSerial for a software UART.
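As a rough illustration, a minimal Arduino sketch along those lines might look like this (the pin numbers and baud rates are assumptions on my part; check what the shield's FTDI header actually expects):

#include <SoftwareSerial.h>

// Hypothetical pin choice: RX on pin 10, TX on pin 11, wired to the
// shield's FTDI header (TX->RX, RX->TX, GND->GND).
SoftwareSerial shieldSerial(10, 11);

void setup() {
  Serial.begin(115200);       // hardware UART to the PC
  shieldSerial.begin(9600);   // software UART to the shield (baud rate assumed)
}

void loop() {
  // Simple transparent bridge: forward bytes in both directions so the
  // PC's serial monitor can be used for diagnostics.
  if (Serial.available())       shieldSerial.write(Serial.read());
  if (shieldSerial.available()) Serial.write(shieldSerial.read());
}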
H: Five month countdown timer circuit? I'm a beginner in electronics, so I just have an idea in my head and don't know how to actually make it. I want to make a countdown circuit for an event in the next 5 months. The display format is like this: DDD HH MM SS Days have 3 digits; hours, minutes and seconds have 2 digits each, and I want to show the numbers on 7-segment displays. I just want to show the remaining time to a specific moment, some day in October. I want to make it with a PIC or anything else that you suggest. I googled it, but only found a MM:SS countdown circuit, and I can't expand it to solve my problem. How can I do that? AI: In order to expand the display from MM:SS to DDD HH MM SS, you need five more digit-drive I/O lines for the DDD and HH digits. The PIC16F84A microcontroller shown in the linked circuit has only 13 I/O lines and they are all used. (If you don't need the buzzer and relay outputs then potentially you have two spare outputs, but as I mentioned above, you need five.) So if you want to stay with the PIC16F series of microcontrollers in your linked circuit, I would expand it to meet your needs using a PIC16F883, which has 24 I/O pins. (They make PIC16F microcontrollers with 18 and 20 I/O pins, but only in SMD packages, and I assume you want through-hole.) Just add five more outputs going to additional transistors like T1-T4, driving the five new seven-segment displays, and connect the a-g inputs of the new displays to the parallel bus coming out of resistor pack R5-R11. For a crystal, you need to choose one with as tight a tolerance as possible to keep the drift to a minimum. Timer1 of the PIC16F883 can run off a 32 kHz, 30 ppm watch crystal. Software-wise, set up Timer1 to interrupt once a second. Use the Set and Select buttons as in the previous application to set up the initial DDD:HH:MM:SS values. On each interrupt, update the display -- decrement the seconds; when they wrap from 00 to 59, decrement the minutes, and so on.
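A rough sketch of the once-per-second countdown logic described above, written in plain C/C++ (register names and interrupt syntax vary between PIC compilers, so treat the surrounding firmware details as assumptions):

#include <stdint.h>

// Remaining time, set initially via the Set/Select buttons.
volatile uint16_t days = 150;         // e.g. roughly five months
volatile uint8_t  hours = 0, minutes = 0, seconds = 0;

// Called from the Timer1 interrupt, once per second.
void countdown_tick(void) {
    if (seconds > 0)      { seconds--; }
    else if (minutes > 0) { seconds = 59; minutes--; }
    else if (hours > 0)   { seconds = 59; minutes = 59; hours--; }
    else if (days > 0)    { seconds = 59; minutes = 59; hours = 23; days--; }
    // else: countdown has reached zero -- stop, sound the buzzer, etc.
}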
H: Using forward voltage drop of diodes with linear regulator I want to clamp the output of an adjustable three-terminal linear regulator Vout to -1.8V relative to a reference voltage Vref. My approach is to use the forward voltage drop of three diodes, but I am having trouble sizing the resistor to ground, R2. Clearly, it needs to be large enough to limit the current through D1...3. What other boundary conditions are there? I imagine it must not be too large either, to avoid pinching off the regulator's ground connection or creating a voltage rise due to Iadj? Or what are the relevant boundary conditions for sizing? The LM338 has: Iadj = 50 µA, Vref = Vout-Vadj = 1.25V simulate this circuit – Schematic created using CircuitLab In the alternative I thought of something along the lines of: simulate this circuit AI: Using an op-amp is probably the best way to deal with this requirement, but you might be better off with an emitter follower circuit: The op-amp will drive the NPN so that its emitter (the voltage across the load) is at the same voltage as the +Vin pin of the op-amp. I borrowed "The Photon's" drawing because, although it was wrong, it was easily amended to show an emitter follower. If you used a FET (N-channel, with source to load) this would also work, but you might find that Vin needs to be higher for it to work; a BJT's base drive only needs to be about 0.6V above the emitter. I'd still recommend using an op-amp with rail-to-rail output capability so that Vin can be a bit lower. I'm also assuming the op-amp is powered from Vin and GND.
H: Minimum files for Atmega1284p AVR-GCC? does someone have the minimum setup (ie. a 1284p Makefile and an example blink.c) for an ATMega1284p and AVR-GCC? I seem to be stuck setting things up with using the Arduino IDE and I'd like to test my breadbord setup with a simple blinking LED to rule out any mistakes here. These is some information: Makefile: #---------------------------------------------------------------------------- # On command line: # make all = Make software. # make clean = Clean out built project files. # make coff = Convert ELF to AVR COFF. # make extcoff = Convert ELF to AVR Extended COFF. # make program = Download the hex file to the device, using avrdude. # Please customize the avrdude settings below first! # make debug = Start either simulavr or avarice as specified for debugging, # with avr-gdb or avr-insight as the front end for debugging. # make filename.s = Just compile filename.c into the assembler code only. # make filename.i = Create a preprocessed source file for use in submitting # bug reports to the GCC project. # To rebuild project do "make clean" then "make all". #---------------------------------------------------------------------------- # MCU name MCU = atmega1284p # Processor frequency. # This will define a symbol, F_CPU, in all source code files equal to the # processor frequency. You can then use this symbol in your source code to # calculate timings. Do NOT tack on a 'UL' at the end, this will be done # automatically to create a 32-bit value in your source code. # Typical values are: # F_CPU = 1000000 # F_CPU = 1843200 # F_CPU = 2000000 # F_CPU = 3686400 # F_CPU = 4000000 # F_CPU = 7372800 F_CPU = 8000000 # F_CPU = 11059200 # F_CPU = 14745600 # F_CPU = 16000000 # F_CPU = 18432000 # F_CPU = 20000000 # F_CPU = 8000000 # Output format. (can be srec, ihex, binary) FORMAT = ihex # Target file name (without extension). TARGET = main # Object files directory OBJDIR = obj # List C source files here. (C dependencies are automatically generated.) SRC = $(TARGET).c # List C++ source files here. (C dependencies are automatically generated.) CPPSRC = # List Assembler source files here. # Make them always end in a capital .S. Files ending in a lowercase .s # will not be considered source files but generated files (assembler # output from the compiler), and will be deleted upon "make clean"! # Even though the DOS/Win* filesystem matches both .s and .S the same, # it will preserve the spelling of the filenames, and gcc itself does # care about how the name is spelled on its command-line. ASRC = # Optimization level, can be [0, 1, 2, 3, s]. # 0 = turn off optimization. s = optimize for size. # (Note: 3 is not always the best optimization level. See avr-libc FAQ.) OPT = s # Debugging format. # Native formats for AVR-GCC's -g are dwarf-2 [default] or stabs. # AVR Studio 4.10 requires dwarf-2. # AVR [Extended] COFF format requires stabs, plus an avr-objcopy run. DEBUG = dwarf-2 # List any extra directories to look for include files here. # Each directory must be seperated by a space. # Use forward slashes for directory separators. # For a directory that has spaces, enclose it in quotes. EXTRAINCDIRS = # Compiler flag to set the C Standard level. 
# c89 = "ANSI" C # gnu89 = c89 plus GCC extensions # c99 = ISO C99 standard (not yet fully implemented) # gnu99 = c99 plus GCC extensions CSTANDARD = -std=gnu99 # Place -D or -U options here for C sources CDEFS = -DF_CPU=$(F_CPU)UL # Place -D or -U options here for C++ sources CPPDEFS = -DF_CPU=$(F_CPU)UL #CPPDEFS += -D__STDC_LIMIT_MACROS #CPPDEFS += -D__STDC_CONSTANT_MACROS #---------------- Compiler Options C ---------------- # -g*: generate debugging information # -O*: optimization level # -f...: tuning, see GCC manual and avr-libc documentation # -Wall...: warning level # -Wa,...: tell GCC to pass this to the assembler. # -adhlns...: create assembler listing CFLAGS = -g$(DEBUG) CFLAGS += $(CDEFS) CFLAGS += -O$(OPT) #CFLAGS += -mint8 #CFLAGS += -mshort-calls CFLAGS += -funsigned-char CFLAGS += -funsigned-bitfields CFLAGS += -fpack-struct CFLAGS += -fshort-enums #CFLAGS += -fno-unit-at-a-time CFLAGS += -Wall CFLAGS += -Wstrict-prototypes CFLAGS += -Wundef #CFLAGS += -Wunreachable-code #CFLAGS += -Wsign-compare CFLAGS += -Wa,-adhlns=$(<:%.c=$(OBJDIR)/%.lst) CFLAGS += $(patsubst %,-I%,$(EXTRAINCDIRS)) CFLAGS += $(CSTANDARD) # cts CFLAGS+= --combine -fwhole-program CFLAGS+= -fwhole-program #---------------- Compiler Options C++ ---------------- # -g*: generate debugging information # -O*: optimization level # -f...: tuning, see GCC manual and avr-libc documentation # -Wall...: warning level # -Wa,...: tell GCC to pass this to the assembler. # -adhlns...: create assembler listing CPPFLAGS = -g$(DEBUG) CPPFLAGS += $(CPPDEFS) CPPFLAGS += -O$(OPT) #CPPFLAGS += -mint8 #CPPFLAGS += -mshort-calls CPPFLAGS += -funsigned-char CPPFLAGS += -funsigned-bitfields CPPFLAGS += -fpack-struct CPPFLAGS += -fshort-enums CPPFLAGS += -fno-exceptions #CPPFLAGS += -fno-unit-at-a-time CPPFLAGS += -Wall #CPPFLAGS += -Wstrict-prototypes CFLAGS += -Wundef #CPPFLAGS += -Wunreachable-code #CPPFLAGS += -Wsign-compare CPPFLAGS += -Wa,-adhlns=$(<:%.cpp=$(OBJDIR)/%.lst) CPPFLAGS += $(patsubst %,-I%,$(EXTRAINCDIRS)) #CPPFLAGS += $(CSTANDARD) #---------------- Assembler Options ---------------- # -Wa,...: tell GCC to pass this to the assembler. # -ahlms: create listing # -gstabs: have the assembler create line number information; note that # for use in COFF files, additional information about filenames # and function names needs to be present in the assembler source # files -- see avr-libc docs [FIXME: not yet described there] ASFLAGS = -Wa,-adhlns=$(<:%.S=$(OBJDIR)/%.lst),-gstabs #---------------- Library Options ---------------- # Minimalistic printf version PRINTF_LIB_MIN = -Wl,-u,vfprintf -lprintf_min # Floating point printf version (requires MATH_LIB = -lm below) PRINTF_LIB_FLOAT = -Wl,-u,vfprintf -lprintf_flt # If this is left blank, then it will use the Standard printf version. PRINTF_LIB = #PRINTF_LIB = $(PRINTF_LIB_MIN) #PRINTF_LIB = $(PRINTF_LIB_FLOAT) # Minimalistic scanf version SCANF_LIB_MIN = -Wl,-u,vfscanf -lscanf_min # Floating point + %[ scanf version (requires MATH_LIB = -lm below) SCANF_LIB_FLOAT = -Wl,-u,vfscanf -lscanf_flt # If this is left blank, then it will use the Standard scanf version. SCANF_LIB = #SCANF_LIB = $(SCANF_LIB_MIN) #SCANF_LIB = $(SCANF_LIB_FLOAT) MATH_LIB = -lm #---------------- External Memory Options ---------------- # 64 KB of external RAM, starting after internal RAM (ATmega128!), # used for variables (.data/.bss) and heap (malloc()). 
#EXTMEMOPTS = -Wl,-Tdata=0x801100,--defsym=__heap_end=0x80ffff # 64 KB of external RAM, starting after internal RAM (ATmega128!), # only used for heap (malloc()). #EXTMEMOPTS = -Wl,--defsym=__heap_start=0x801100,--defsym=__heap_end=0x80ffff EXTMEMOPTS = #---------------- Linker Options ---------------- # -Wl,...: tell GCC to pass this to linker. # -Map: create map file # --cref: add cross reference to map file LDFLAGS = -Wl,-Map=$(TARGET).map,--cref LDFLAGS += $(EXTMEMOPTS) LDFLAGS += $(PRINTF_LIB) $(SCANF_LIB) $(MATH_LIB) #LDFLAGS += -T linker_script.x #---------------- Programming Options (avrdude) ---------------- # Programming hardware: alf avr910 avrisp bascom bsd # dt006 pavr picoweb pony-stk200 sp12 stk200 stk500 # # Type: avrdude -c ? # to get a full listing. # AVRDUDE_PROGRAMMER = dragon_pp # com1 = serial port. Use lpt1 to connect to parallel port. AVRDUDE_PORT = usb AVRDUDE_WRITE_FLASH = -U flash:w:$(TARGET).hex #AVRDUDE_WRITE_EEPROM = -U eeprom:w:$(TARGET).eep # Uncomment the following if you want avrdude's erase cycle counter. # Note that this counter needs to be initialized first using -Yn, # see avrdude manual. #AVRDUDE_ERASE_COUNTER = -y # Uncomment the following if you do /not/ wish a verification to be # performed after programming the device. #AVRDUDE_NO_VERIFY = -V # Increase verbosity level. Please use this when submitting bug # reports about avrdude. See <http://savannah.nongnu.org/projects/avrdude> # to submit bug reports. #AVRDUDE_VERBOSE = -v -v AVRDUDE_FLAGS = -p $(MCU) -P $(AVRDUDE_PORT) -c $(AVRDUDE_PROGRAMMER) AVRDUDE_FLAGS += $(AVRDUDE_NO_VERIFY) AVRDUDE_FLAGS += $(AVRDUDE_VERBOSE) AVRDUDE_FLAGS += $(AVRDUDE_ERASE_COUNTER) #---------------- Debugging Options ---------------- # For simulavr only - target MCU frequency. DEBUG_MFREQ = $(F_CPU) # Set the DEBUG_UI to either gdb or insight. DEBUG_UI = gdb # DEBUG_UI = insight # Set the debugging back-end to either avarice, simulavr. #DEBUG_BACKEND = avarice DEBUG_BACKEND = simulavr # GDB Init Filename. GDBINIT_FILE = __avr_gdbinit # When using avarice settings for the JTAG JTAG_DEV = /dev/com1 # Debugging port used to communicate between GDB / avarice / simulavr. DEBUG_PORT = 4242 # Debugging host used to communicate between GDB / avarice / simulavr, normally # just set to localhost unless doing some sort of crazy debugging when # avarice is running on a different computer. DEBUG_HOST = localhost #============================================================================ # Define programs and commands. SHELL = sh CC = avr-gcc OBJCOPY = avr-objcopy OBJDUMP = avr-objdump SIZE = avr-size NM = avr-nm AVRDUDE = avrdude REMOVE = rm -f REMOVEDIR = rm -rf COPY = cp WINSHELL = cmd # Define Messages # English MSG_ERRORS_NONE = Errors: none MSG_BEGIN = -------- begin -------- MSG_END = -------- end -------- MSG_SIZE_BEFORE = Size before: MSG_SIZE_AFTER = Size after: MSG_COFF = Converting to AVR COFF: MSG_EXTENDED_COFF = Converting to AVR Extended COFF: MSG_FLASH = Creating load file for Flash: MSG_EEPROM = Creating load file for EEPROM: MSG_EXTENDED_LISTING = Creating Extended Listing: MSG_SYMBOL_TABLE = Creating Symbol Table: MSG_LINKING = Linking: MSG_COMPILING = Compiling C: MSG_COMPILING_CPP = Compiling C++: MSG_ASSEMBLING = Assembling: MSG_CLEANING = Cleaning project: MSG_CREATING_LIBRARY = Creating library: # Define all object files. OBJ = $(SRC:%.c=$(OBJDIR)/%.o) $(CPPSRC:%.cpp=$(OBJDIR)/%.o) $(ASRC:%.S=$(OBJDIR)/%.o) # Define all listing files. 
LST = $(SRC:%.c=$(OBJDIR)/%.lst) $(CPPSRC:%.cpp=$(OBJDIR)/%.lst) $(ASRC:%.S=$(OBJDIR)/%.lst) # Compiler flags to generate dependency files. GENDEPFLAGS = -MD -MP -MF .dep/$(@F).d # Combine all necessary flags and optional flags. # Add target processor to flags. ALL_CFLAGS = -mmcu=$(MCU) -I. $(CFLAGS) $(GENDEPFLAGS) ALL_CPPFLAGS = -mmcu=$(MCU) -I. -x c++ $(CPPFLAGS) $(GENDEPFLAGS) ALL_ASFLAGS = -mmcu=$(MCU) -I. -x assembler-with-cpp $(ASFLAGS) # Default target. all: begin gccversion sizebefore build sizeafter end # Change the build target to build a HEX file or a library. build: elf hex eep lss sym #build: lib elf: $(TARGET).elf hex: $(TARGET).hex eep: $(TARGET).eep lss: $(TARGET).lss sym: $(TARGET).sym LIBNAME=lib$(TARGET).a lib: $(LIBNAME) # Eye candy. # AVR Studio 3.x does not check make's exit code but relies on # the following magic strings to be generated by the compile job. begin: @echo @echo $(MSG_BEGIN) end: @echo $(MSG_END) @echo # Display size of file. HEXSIZE = $(SIZE) --target=$(FORMAT) $(TARGET).hex ELFSIZE = $(SIZE) -A $(TARGET).elf AVRMEM = avr-mem.sh $(TARGET).elf $(MCU) sizebefore: @if test -f $(TARGET).elf; then echo; echo $(MSG_SIZE_BEFORE); $(ELFSIZE); \ $(AVRMEM) 2>/dev/null; echo; fi sizeafter: @if test -f $(TARGET).elf; then echo; echo $(MSG_SIZE_AFTER); $(ELFSIZE); \ $(AVRMEM) 2>/dev/null; echo; fi # Display compiler version information. gccversion : @$(CC) --version # Program the device. program: $(TARGET).hex $(TARGET).eep $(AVRDUDE) $(AVRDUDE_FLAGS) $(AVRDUDE_WRITE_FLASH) $(AVRDUDE_WRITE_EEPROM) # Generate avr-gdb config/init file which does the following: # define the reset signal, load the target file, connect to target, and set # a breakpoint at main(). gdb-config: @$(REMOVE) $(GDBINIT_FILE) @echo define reset >> $(GDBINIT_FILE) @echo SIGNAL SIGHUP >> $(GDBINIT_FILE) @echo end >> $(GDBINIT_FILE) @echo file $(TARGET).elf >> $(GDBINIT_FILE) @echo target remote $(DEBUG_HOST):$(DEBUG_PORT) >> $(GDBINIT_FILE) ifeq ($(DEBUG_BACKEND),simulavr) @echo load >> $(GDBINIT_FILE) endif @echo break main >> $(GDBINIT_FILE) debug: gdb-config $(TARGET).elf ifeq ($(DEBUG_BACKEND), avarice) @echo Starting AVaRICE - Press enter when "waiting to connect" message displays. @$(WINSHELL) /c start avarice --jtag $(JTAG_DEV) --erase --program --file \ $(TARGET).elf $(DEBUG_HOST):$(DEBUG_PORT) @$(WINSHELL) /c pause else @$(WINSHELL) /c start simulavr --gdbserver --device $(MCU) --clock-freq \ $(DEBUG_MFREQ) --port $(DEBUG_PORT) endif @$(WINSHELL) /c start avr-$(DEBUG_UI) --command=$(GDBINIT_FILE) # Convert ELF to COFF for use in debugging / simulating in AVR Studio or VMLAB. COFFCONVERT = $(OBJCOPY) --debugging COFFCONVERT += --change-section-address .data-0x800000 COFFCONVERT += --change-section-address .bss-0x800000 COFFCONVERT += --change-section-address .noinit-0x800000 COFFCONVERT += --change-section-address .eeprom-0x810000 coff: $(TARGET).elf @echo @echo $(MSG_COFF) $(TARGET).cof $(COFFCONVERT) -O coff-avr $< $(TARGET).cof extcoff: $(TARGET).elf @echo @echo $(MSG_EXTENDED_COFF) $(TARGET).cof $(COFFCONVERT) -O coff-ext-avr $< $(TARGET).cof # Create final output files (.hex, .eep) from ELF output file. %.hex: %.elf @echo @echo $(MSG_FLASH) $@ $(OBJCOPY) -O $(FORMAT) -R .eeprom $< $@ %.eep: %.elf @echo @echo $(MSG_EEPROM) $@ -$(OBJCOPY) -j .eeprom --set-section-flags=.eeprom="alloc,load" \ --change-section-lma .eeprom=0 -O $(FORMAT) $< $@ # Create extended listing file from ELF output file. 
%.lss: %.elf @echo @echo $(MSG_EXTENDED_LISTING) $@ $(OBJDUMP) -h -S $< > $@ # Create a symbol table from ELF output file. %.sym: %.elf @echo @echo $(MSG_SYMBOL_TABLE) $@ $(NM) -n $< > $@ # Create library from object files. .SECONDARY : $(TARGET).a .PRECIOUS : $(OBJ) %.a: $(OBJ) @echo @echo $(MSG_CREATING_LIBRARY) $@ $(AR) $@ $(OBJ) # Link: create ELF output file from object files. .SECONDARY : $(TARGET).elf .PRECIOUS : $(OBJ) %.elf: $(OBJ) @echo @echo $(MSG_LINKING) $@ $(CC) $(ALL_CFLAGS) $^ --output $@ $(LDFLAGS) # Compile: create object files from C source files. $(OBJDIR)/%.o : %.c @echo @echo $(MSG_COMPILING) $< $(CC) -c $(ALL_CFLAGS) $< -o $@ # Compile: create object files from C++ source files. $(OBJDIR)/%.o : %.cpp @echo @echo $(MSG_COMPILING_CPP) $< $(CC) -c $(ALL_CPPFLAGS) $< -o $@ # Compile: create assembler files from C source files. %.s : %.c $(CC) -S $(ALL_CFLAGS) $< -o $@ # Compile: create assembler files from C++ source files. %.s : %.cpp $(CC) -S $(ALL_CPPFLAGS) $< -o $@ # Assemble: create object files from assembler source files. $(OBJDIR)/%.o : %.S @echo @echo $(MSG_ASSEMBLING) $< $(CC) -c $(ALL_ASFLAGS) $< -o $@ # Create preprocessed source for use in sending a bug report. %.i : %.c $(CC) -E -mmcu=$(MCU) -I. $(CFLAGS) $< -o $@ # Target: clean project. clean: begin clean_list end clean_list : @echo @echo $(MSG_CLEANING) $(REMOVE) $(TARGET).hex $(REMOVE) $(TARGET).eep $(REMOVE) $(TARGET).cof $(REMOVE) $(TARGET).elf $(REMOVE) $(TARGET).map $(REMOVE) $(TARGET).sym $(REMOVE) $(TARGET).lss $(REMOVEDIR) $(OBJDIR) $(REMOVE) $(SRC:.c=.s) $(REMOVE) $(SRC:.c=.d) $(REMOVEDIR) .dep # Create object files directory $(shell mkdir $(OBJDIR) 2>/dev/null) # Include the dependency files. -include $(shell mkdir .dep 2>/dev/null) $(wildcard .dep/*) # Listing of phony targets. .PHONY : all begin finish end sizebefore sizeafter gccversion \ build elf hex eep lss sym coff extcoff \ clean clean_list program debug gdb-config main.c #include <util/delay.h> #include <avr/io.h> int main (void) { DDRB = 0xff; while(1) { PORTB |= (1<<PB1); //Bit setzen - set bit _delay_ms(500); // halbe sekunde warten - wait half a second PORTB &= ~(1<<PB1); // Bit loeschen - reset bit _delay_ms(500); // halbe sekunde warten - wait half a second } return 0; } The "build process" (well, that's a little overstated): $ make -------- begin -------- avr-gcc (GCC) 4.7.2 Copyright (C) 2012 Free Software Foundation, Inc. Dies ist freie Software; die Kopierbedingungen stehen in den Quellen. Es gibt KEINE Garantie; auch nicht für MARKTGÄNGIGKEIT oder FÜR SPEZIELLE ZWECKE. Size before: main.elf : section size addr .data 0 8388864 .text 214 0 .stab 1848 0 .stabstr 231 0 .comment 17 0 .debug_aranges 32 0 .debug_info 363 0 .debug_abbrev 225 0 .debug_line 222 0 .debug_frame 36 0 .debug_str 201 0 .debug_ranges 16 0 Total 3405 Size after: main.elf : section size addr .data 0 8388864 .text 214 0 .stab 1848 0 .stabstr 231 0 .comment 17 0 .debug_aranges 32 0 .debug_info 363 0 .debug_abbrev 225 0 .debug_line 222 0 .debug_frame 36 0 .debug_str 201 0 .debug_ranges 16 0 Total 3405 -------- end -------- Uploading everything to the ATMega1284p through the AVR Dragon using HVPP: $ avrdude -c dragon_pp -P usb -p m1284p –u –U flash:w:main.hex avrdude: AVR device initialized and ready to accept instructions Reading | ################################################## | 100% 0.15s avrdude: Device signature = 0x1e9705 avrdude: safemode: Fuses OK avrdude: jtagmkII_close(): bad response to GO command: RSP_ILLEGAL_EMULATOR_MODE avrdude done. Thank you. 
This seems complete, but does not yield a blinkon PORTB PB1 (or any other pin for that). If I have forgotten anything that could be of help, please let me know. AI: This is my general purpose Makefile for use on Linux. Haven't tested it on Windows or iOS. Only the first few variables normally need to be edited. baudrate=19200 src=project avrType=atmega1284p avrFreq=20000000 programmerDev=/dev/ttyUSB003 programmerType=arduino cflags=-g -DF_CPU=$(avrFreq) -Wall -Os -Werror -Wextra memoryTypes=calibration eeprom efuse flash fuse hfuse lfuse lock signature application apptable boot prodsig usersig .PHONY: backup clean disassemble dumpelf edit eeprom elf flash fuses help hex makefile object program SHELL := /bin/bash ifdef baudrate baud=-b$(baudrate) else baud= endif help: @echo 'backup Read all known memory types from controller and write it into a file. Available memory types: $(memoryTypes)' @echo 'clean Delete automatically created files.' @echo 'disassemble Compile source code, then disassemble object file to mnemonics.' @echo 'dumpelf Dump the contents of the .elf file. Useful for information purposes only.' @echo 'edit Edit the .cpp source file.' @echo 'eeprom Extract EEPROM data from .elf file and program the device with it.' @echo 'elf Create $(src).elf' @echo 'flash Program $(src).hex to controller flash memory.' @echo 'fuses Extract FUSES data from .elf file and program the device with it.' @echo 'help Show this text.' @echo 'hex Create all hex files for flash, eeprom and fuses.' @echo 'object Create $(src).o' @echo 'program Do all programming to controller.' edit: [[ ! -f $(src).cpp ]] && echo '// VERSION: ' > $(src).cpp vi '$(src).cpp' -s <( printf ":set number\n:1 s/^\/\/ VERSION: .*$$/\/\/ VERSION: $$(date)/g\n" ) makefile: vi Makefile clean: rm $(src).elf $(src).eeprom.hex $(src).fuses.hex $(src).lfuse.hex $(src).hfuse.hex $(src).efuse.hex $(src).flash.hex $(src).o date object: avr-gcc $(cflags) -mmcu=$(avrType) -Wa,-ahlmns=$(src).lst -c -o $(src).o $(src).cpp elf: object avr-gcc $(cflags) -mmcu=$(avrType) -o $(src).elf $(src).o chmod a-x $(src).elf 2>&1 hex: elf avr-objcopy -j .text -j .data -O ihex $(src).elf $(src).flash.hex avr-objcopy -j .eeprom --set-section-flags=.eeprom="alloc,load" --change-section-lma .eeprom=0 -O ihex $(src).elf $(src).eeprom.hex avr-objcopy -j .fuse -O ihex $(src).elf $(src).fuses.hex --change-section-lma .fuse=0 srec_cat $(src).fuses.hex -Intel -crop 0x00 0x01 -offset 0x00 -O $(src).lfuse.hex -Intel srec_cat $(src).fuses.hex -Intel -crop 0x01 0x02 -offset -0x01 -O $(src).hfuse.hex -Intel srec_cat $(src).fuses.hex -Intel -crop 0x02 0x03 -offset -0x02 -O $(src).efuse.hex -Intel disassemble: elf avr-objdump -s -j .fuse $(src).elf avr-objdump -C -d $(src).elf 2>&1 eeprom: hex #avrdude -p$(avrType) -c$(programmerType) -P$(programmerDev) $(baud) -v -U eeprom:w:$(src).eeprom.hex date fuses: hex avrdude -p$(avrType) -c$(programmerType) -P$(programmerDev) $(baud) -v -U lfuse:w:$(src).lfuse.hex #avrdude -p$(avrType) -c$(programmerType) -P$(programmerDev) $(baud) -v -U hfuse:w:$(src).hfuse.hex #avrdude -p$(avrType) -c$(programmerType) -P$(programmerDev) $(baud) -v -U efuse:w:$(src).efuse.hex date dumpelf: elf avr-objdump -s -h $(src).elf program: flash eeprom fuses flash: hex avrdude -p$(avrType) -c$(programmerType) -P$(programmerDev) $(baud) -v -U flash:w:$(src).flash.hex date backup: @for memory in $(memoryTypes); do \ avrdude -p $(avrType) -c$(programmerType) -P$(programmerDev) $(baud) -v -U $$memory:r:./$(avrType).$$memory.hex:i; \ done Compiling is 
configured in such a way that any warning will throw an error and abort the build. Although it may sometimes seem a nuisance, it is actually good programming practice. I usually create a directory with the name of my project, e.g. blink. Then in that directory I copy my generic Makefile and create project.cpp. I always use the same name for the .cpp file, so I don't have to change my Makefile. To compile and flash project.cpp you just type make flash. If you want more information about the other options in the Makefile, type make help. What happens when you type make flash is: the dependency hex is executed; hex in turn has a dependency of its own, elf, which is executed; elf has a dependency as well, object, which is therefore executed; object has no dependencies, so it executes the commands on the following lines (avr-gcc ...); when the object dependency is fulfilled, control is given back to elf, which executes the commands on the lines below it; then control is given back to hex, and finally to flash. If you are not planning to use this Makefile for burning the fuses, you can comment out the srec_cat lines; srec_cat is a command that often needs to be installed separately (apt-get install srecord on Debian-like distributions).
H: Adding the AVR Dragon's HVPP to the Arduino IDE I would like to teach my Arduino IDE to use my AVR Dragon's HVPP (High Voltage Parallel Programming) interface to upload compiled sketches to my boards (an ATMega1284P-PU in this case, supported by mighty1284p). Working setup so far: Arduino IDE with mighty1284p, I added this to the boards.txt of mighty1284p to support my uC's internal-RC-oscillator clockrates of 1MHz and 8MHz. avrdude can talk to the m1284p and set the fuses: avrdude -c dragon_pp -P usb -p m1284p -U lfuse:w:0x62:m -U hfuse:w:0x99:m -U efuse:w:0xFF:m -U lock:w:0xFF:m I can compile / check a sketch successfully, then upload the temporarily created .hex manually to the 1284 using the Dragon: cd /var/folders/px/xmf1dygn6p5_fn1hh05c6gyr0000gn/T/build4082512148637729777.tmp/ avrdude -p m1284p -c dragon_pp -P usb -U flash:w:Fade.cpp.hex This works but includes the manual step of cding into the temporary build directory and manually uplading the .hex. How would I use the AVR Dragon's HVPP directly from within the Arduino IDE? I have added a section to programmers.txt as such: dragonpp.name=AVR Dragon HVPP dragonpp.communication=usb dragonpp.protocol=dragon_pp dragonpp.force=false which I assumed would be correct for the above avrdude parameters. Within the IDE I choose Tools - Programmer - AVR Dragon HVPP, the Upload. The debug output shows this: (...correct compiling, linking and such...) /Applications/Arduino.app/Contents/Resources/Java/hardware/tools/avr/bin/avrdude -C/Applications/Arduino.app/Contents/Resources/Java/hardware/tools/avr/etc/avrdude.conf -v -v -v -v -patmega1284p -carduino -P/dev/tty.Bluetooth-PDA-Sync -b115200 -D -Uflash:w:/var/folders/px/xmf1dygn6p5_fn1hh05c6gyr0000gn/T/build1852336263469118058.tmp/Fade.cpp.hex:i avrdude: Version 5.11, compiled on Sep 2 2011 at 18:52:52 Copyright (c) 2000-2005 Brian Dean, http://www.bdmicro.com/ Copyright (c) 2007-2009 Joerg Wunsch System wide configuration file is "/Applications/Arduino.app/Contents/Resources/Java/hardware/tools/avr/etc/avrdude.conf" User configuration file is "/Users/cts/.avrduderc" User configuration file does not exist or is not a regular file, skipping Using Port : /dev/tty.Bluetooth-PDA-Sync Using Programmer : arduino Overriding Baud Rate : 115200 avrdude: Send: 0 [30] [20] avrdude: Send: 0 [30] [20] avrdude: Send: 0 [30] [20] avrdude: ser_recv(): programmer is not responding avrdude: stk500_recv(): programmer is not responding avrdude done. Thank you. So the -p parameter is correct, but -c and -P are wrong. Any ideas how I could add the Dragon correctly? AI: Your problem is that the programmer settings in your part entry (shown in your other question) override the programmer settings in the programmer entry. Just lose these lines in the part entries: mighty_opt8.upload.protocol=arduino
H: How to plug 2 5m RGB LED light strips together? I got two 5 meters RGB LED light strips and would like to connect them together, controlling them as a single strip. Each strip power is 72 W, the input voltage is 12V, it has 300 5050 SMD LED emitters and the manual says it needs 6A of current. The controller that came with each fits that same spec. What should I do to achieve it? Each strip comes with a controller and a remote. Should I get another controller big enough or an amplifier maybe? This is the product: http://dx.com/p/jr-5050-waterproof-72w-4500lm-300-5050-smd-led-rgb-light-strip-w-remote-controller-12v-175950 AI: These strips are usually arranged in numerous parallel segments of a few series LEDs. The RGB strips can be a little different, but they work basically the same way. It looks like your strip is analog control and not digital (which would have little ICs in the strip to control individual LED color), so the specific color ground lines are all in parallel. You should be able to connect numerous strips in parallel - 12V to 12V, red to red, green to green, and blue to blue to blue. That is the same reason you can cut the strips into smaller segments if you want. There is a limit to how much current can flow through the strip itself, which varies between manufacturers. Your other limit to how many strips can be connected together is how much power your supply / remote controller can handle. It should specify a limit in watts / current or total number of LEDs or parallel segments. That information should be on a sticker on the power supply or be listed in the manual. From your product page: Power 72 W Which is 6A at 12V DC. That means your power supply/controller has to be able to handle at least 144 W (12A @ 12V DC) to connect two strips in parallel. Although, it is never a good idea to run something long term near its maximum ratings, so the supply should really be rated for something closer to 200 W. If you are able to power multiple strips from your supply, it would be best to power them with a star topology so the current for each strip is separated. For example, connecting Christmas lights together end to end is daisy chaining, and the current for every light strand has to flow through the strands before it. Powering multiple light strands from one power strip is more of a star topology, with the current to each strip only flowing through itself (think of the power strip as your power supply).
H: Sample and Hold When we have a data source oscillating at a frequency of, say, 4 kHz, this implies that the Nyquist sampling rate would be at least 8 kHz. We know that the inverse of the signal frequency is the time it takes for one period to complete, i.e. 1/(4 kHz) = 250 µs, but what if we take the inverse of the Nyquist frequency? We get 1/8000 = 125 µs, which is half the period of the original signal. What does that give us? Initially I was thinking it is the time at which each sampling pulse is sent, but it doesn't seem to make sense that only two sample points would be taken per cycle of the source signal. I need help, please. AI: As you're noticing, Nyquist really speaks to the minimum sampling frequency for a given bandwidth. You often get people who broadly apply the 2X+ factor and take it no further. But as you noticed, there are other effects: what about the phase alignment of the sampling (which will affect the measured amplitude)? What about the aperture size (the time period over which each sample is taken)? And so on. What is important is that you understand which features of your waveform are the salient ones. Say for an 8 kHz signal you really care about the shape of the waveform; that could very well mean you have to capture up to the 3rd or 4th harmonic. In that case you might want to oversample by 4 x 2.1 = 8.4X (I use 2.1 as an example simply because Nyquist says greater than, not equal to). Or conversely, if the 8 kHz is a carrier with a modulation at 100 Hz, then you could down-convert with a mixer and sample at ~250 Hz, or you can sample at 16 kHz and do the down-conversion in the digital domain, ending up with a 250 Hz data rate.
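To put the original arithmetic in context (my own summary of the standard result, not part of the answer above): for a signal whose highest frequency component is \$f_{max} = 4\,\text{kHz}\$, sampling requires \$f_s > 2 f_{max} = 8\,\text{kHz}\$, so the sample spacing \$T_s = 1/f_s\$ must be smaller than \$125\,\mu\text{s}\$. In other words, 125 µs is the maximum allowed gap between samples (just over two samples per period), which is enough to preserve the frequency content up to 4 kHz but not to show the detailed shape of each individual cycle — hence the oversampling discussion above.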
H: Logisim - Tunnel that outputs the OR result of all of its inputs Consider this screenshot: When the two values differ, I get this: This makes sense, but it's a problem. I would like the tunnel to function like this: when it's getting mixed values, it outputs the OR of those values instead of an error value. So, in the case of the above image, the tunnel's output would be '1'. Will I have to spam OR gates and a bunch of different tunnels to do this, or is there a built-in tool/plugin I can use to do it for me? AI: In Logisim, a tunnel is just a hidden wire — all tunnels with the same name (in this case, "Tunnel") are effectively connected together. When you try to force two instances of the same tunnel to different values, anything that tries to evaluate the value (including a third tunnel) will show an error. To create a new signal that is the logical OR of existing signals, you need to instantiate an actual OR gate.
H: What's the difference between delayed branch and branch prediction? I'm studying how delayed branch works and I'm trying to distinguish delayed branch from branch prediction. What is the difference? Is delayed branching a means of dealing with a control hazard? AI: Delayed branch and branch prediction are two different ways of mitigating the effects of a long execution pipeline. Without them, the pipeline needs to stall whenever it encounters a conditional branch, because the instruction fetch mechanism can't know which instruction should be executed next after the branch instruction until the computations on which it depends are completed. Delayed branch simply means that some number of instructions that appear after the branch in the instruction stream will be executed regardless of which way the branch ultimately goes. In many cases, a compiler can put instructions in those slots that don't actually depend on the branch itself, but if it can't, it must fill them with NOPs, which kills the performance anyway. This approach keeps the hardware simple, but puts a burden on the compiler technology. Branch prediction is a more hardware-oriented approach, in which the instruction fetcher simply "guesses" which way the branch will go, executes instructions down that path, and if it later turns out to have guessed wrong, the results of those instructions are thrown away. Various systems have different ways of improving the accuracy of the guess. Sometimes the compiler puts a clue into the instruction stream, and sometimes the hardware keeps track of which way each branch has gone in the past.
H: How to calculate battery capacity required for home applications? The issue I'm facing is calculating the current draw of different household items. For example, if a fan is rated at 75 watts, its current draw would be A = W/V, so A = 75/220 and A = 0.34 amperes. That doesn't sound right to me, because I have two 150 Ah batteries connected in series to an inverter/charger. Assuming that I use up only 60% of the batteries' capacity to prolong their life, I have 180 Ah of battery capacity. And if the inverter is only 80% efficient, I have 144 Ah of usable battery capacity. If the fan does indeed draw 0.34 amperes of current, that would mean that this battery setup should run the fan for around 424 hours. I know I'm doing something wrong, I just don't know what it is. Could you guys help me out here? What's the backup time for a 75 watt load drawing power from an inverter connected to two 150 Ah batteries in series? AI: Your 75 W fan requires 220 volts; it will not operate directly at typical battery voltages such as 12 V. A 75 W fan operating (indirectly) from a 12 V supply would draw 6.25 A, not 0.34 A. Even with a theoretically 100% efficient inverter, the current on the 12 V side is greater than the current on the 220 V side, to deliver the same power across the system from battery to fan. Note that the reason power is delivered through the grid at high voltage is to keep currents low; losses and cable sizes are a function of current. You convert 12 V DC into 220 V AC using your inverter. Depending on capacity and utilisation, the efficiency of an inverter might be the 80% you estimate or it may be worse - 50% Ref So your 6.25 A may become 13 A drawn from the 12 V battery. Therefore your 300 Ah battery set might power a 75 W fan for less than 24 hours. Update: for 2 x nominal 12 V SLA batteries, the following diagram may clarify what is happening. simulate this circuit – Schematic created using CircuitLab
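A worked example of the same estimate done in energy terms, using only the figures already given in the question (two nominal 12 V, 150 Ah batteries, 60% usable depth of discharge, 80% inverter efficiency): \$ E_{usable} \approx 2 \times 12\,\text{V} \times 150\,\text{Ah} \times 0.6 \times 0.8 = 1728\,\text{Wh} \$, so \$ t \approx 1728\,\text{Wh} / 75\,\text{W} \approx 23\,\text{hours} \$. Working in watt-hours rather than amp-hours avoids confusing the 0.34 A drawn on the 220 V side with the much larger current drawn on the battery side, and it agrees with the "less than 24 hours" figure above.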
H: What does DAC resolution mean? I am reading the specs of a DAC (this one), and it says the resolution is 10 bit. Does it mean it can generate 1024 voltage levels at the sample rate which is 12 GSa/s? It sounds pretty surreal. Can it really do communications at 120 Gb/s or what would be the practical limitation? AI: "Does it mean it can generate 1024 voltage levels at the sample rate which is 12 GSa/s?" : yes, although with limits on analog bandwidth. If you ask it to generate alternating 0 / 1023 levels, for a 6GHz signal, it will be attenuated a bit as the rated output bandwidth is only 5GHz. "Can it really do communications at 120 Gb/s or what would be the practical limitation?" : yes, via a number of parallel digital links that operate at a more reasonable 375Mbit each.
H: Using noise to increase effective resolution of ADC I managed to increase the effective resolution of a 10 bit ADC by several bits by repeatedly sampling a noisy input and taking an average over many samples. I'm wondering what some techniques are for creating this sort of "noise" purposely to achieve the same effect. Obviously the simpler the circuit the better. There's probably a name for this technique that I'm not aware of? I have some spare digital output pins on the microprocessor that I could some how use to generate the "noise". In case it matters, I'm using a 10 bit ADC with full scale 0-5v to measure the voltage across a thermistor (50k) in series with a 10k resistor, where the thermistor resistance will vary from about 7k to 160k. Thanks! AI: Dithering is one way, as in "rawb"'s answer. In audio, the usual accepted standard for plain dithering was a triangular PDF dither with a peak-peak amplitude of 1 LSB, added to the high res (e.g. analog) signal before quantisation (e.g. the ADC). The same applied not just to ADCs but to any other truncation process, such as going from studio equipment down to 16 bit for CD mastering. This triangular PDF signal was easily generated as the sum of two uniform PDF dither signals, each 0.5 LSB pk-pk amplitude, from indepenent (or at least uncorrelated) random or pseudorandom generators. A lot of work was done on this in the 1980s, among others by Decca in London who built their own studio equipment, and they showed that with TPDF dither, signals (pure tones) could be detected about 20dB below the (broad band) noise floor, with no observable harmonic distortion (i.e. nothing distinguishable from noise) Another way is applicable if the bandwidth of interest is less than the Nyquist bandwidth, as is usually the case in oversampling converters. Then you can improve massively on the plain dithered results. This approach, noise shaping, generally involves embedding the dithered quantiser in a closed loop with a filter in the feedback path. With a simple filter you can get one extra bit of resolution per halving in frequency as Jon Watte says in a comment, but with a third order filter you can do considerably better than this. Consider that a 256x oversampling converter ought to give 8 bits additional resolution according to the above equation, however 1-bit converters operating this way routinely give 16 to 20 bit resolution. You end up with very low noise in the bandwidth of interest (thanks to high loop gain at those frequencies), and very high out-of-band noise somewhere else, easy to filter out in a later stage (e.g. in a decimation filter). The exact result depends on the loop gain as a function of frequency. Third and higher order filters make it increasingly difficult to stabilise the loop, especially if it starts generating incorrect results during overload (clipping or overflow) conditions. If you're careless or unlucky you can get rail-to-rail noise... Lots of papers from circa 1990 and onwards by Bob Adams of dBX, Malcolm Hawksford of Essex University and many others about noise shaping converters, in the JAES (Journal of the Audio Engineering Society) and elsewhere. Interesting historical note : when CD was first being standardised, the Philips 14 bit CD proposal went head to head with Sony's 16-bit LP-sized disk. They compromised on the slightly larger CD we still have today with 16 bits and allegedly at Morita-san's insistence, enough recording time for Beethoven's Ninth Symphony. 
Which left Philips with a pile of very nice but now useless 14-bit DACs... So Philips first CD players drove these DACs at 4x the sampling rate, with a simple noise shaping filter (may have been 2nd order but probably first order) and achieved performance closer to 16 bits than contemporary 16-bit DACs could. For 1983, ... Genius.
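Returning to the plain TPDF dither described earlier in this answer, here is a small C++ sketch of the idea — two uncorrelated uniform sources of 0.5 LSB peak-to-peak each, summed and added before quantisation to 16 bits (the scaling and choice of random generator are my own illustrative assumptions):

#include <cmath>
#include <cstdint>
#include <random>

// Quantise a high-resolution sample (here a double in the range [-1, 1))
// to a signed 16-bit value, with triangular-PDF dither of 1 LSB peak-to-peak.
int16_t quantise_tpdf(double x, std::mt19937 &rng) {
    // Two independent uniform sources, each 0.5 LSB peak-to-peak,
    // sum to a triangular PDF spanning -0.5..+0.5 LSB.
    std::uniform_real_distribution<double> half_lsb(-0.25, 0.25);
    double dither = half_lsb(rng) + half_lsb(rng);   // in LSB units

    double scaled = x * 32768.0 + dither;            // convert to LSB units and add dither
    long q = std::lround(scaled);                    // round to the nearest output code
    if (q > 32767)  q = 32767;                       // clip to the 16-bit range
    if (q < -32768) q = -32768;
    return static_cast<int16_t>(q);
}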
H: How did handheld video games from the 70's and 80's work? I'm curious about how the early handheld video games from the 70's and 80's worked. You know, those small games with a LCD display with "fixed elements" meaning it was hard wired for one (or a small number of) specific games(s). For instance, this one: or this: Specifically, what type of micro controllers did these games use? As I recall, there was usually one big epoxy "blob" at the center of the board which probably controlled everything. So my question is, what was inside this blob? Was it a small standard microcontroller with a ROM running the actual game, or was it a custom CPU/controller for every design? Or maybe it wasn't even a CPU but more like a simple state machine hardwired for the game in question? I realize the answer may depend on the particular game. Has the design/schematics for any of these games been released or is there any good books/references about how they were made? Finally, what tools were used to design microcontrollers like this? Was it based on something like VHDL/Verilog synthesis, or was it a more low-level approach with wiring the individual elements? AI: While the 4004 and Z80 were available, they were both unlikely to be used in handhelds where low part count, cost and battery life were important. There were families of 4-bit and 8-bit microcontrollers from the 1970s that have been mostly forgotten - these usually had on-chip ROMs so after developing your program on a relatively expensive emulation system - always in assembly language - you paid your money and the silicon manufacturer made a mask - and printed YOUR specific ROM contents on a waferload of blank chips, then tested, sliced and packaged* them : six weeks later you had a delivery of 25000 micros with permanent ROM contents and prayed that the program would work... (* Or supplied die for you to fit your own epoxy blob over) Texas Instruments had the TMS-1000 and sold probably millions with the same damn set of 20 slightly-off-key tunes for doorbells. National Semi had the SC/MP ... don't know much about it except that I believe it was used in the first Sinclair computer, the "Science of Cambridge MK14", a year or two before the ZX80, ZX81 and Spectrum. Intel had the 8048 eight-bitter that may still be used in PC keyboards as far as I know ... they even had an EPROM version (8748) which appealed to the smaller company (no mask costs! YAY!) I was bitten when the promised CMOS version of the 8748 was quietly dropped... Zilog had (still have, I think) the Z8. Motorola, oddly, kind of missed this boat despite the 6800 - the 6802 may have been an attempt but the 6811 was (to my recollection) quite late in the game. RCA had the CMOS (low power!!!!) 1802 with the weirdest instruction set on the planet. And a long-forgotten company called General Instruments had another offering, which my "Preliminary" datasheet from 1983 calls the PIC1652 or PIC1654 (with 256 or 512 words of program ROM.) and a fairly freaky instruction set. I expect that sank without trace...
H: Thermocouple signal conditioning - how to deal with low output swing margin of amplifier? I have this thermocouple conditioning circuit. Let's say my gain is 100. The minimum low level for the AD8237 is 20 mV. As a K-type thermocouple outputs 40 µV per 1 °C, I will have 4 mV per degree C. This means I will not see any change on the output while the temperature goes from ambient to ambient + 5 °C. Suppose I don't want to lose these 5 degrees at the bottom of my range. What do I do? I think I have to inject some offset into the +IN input? If so, how do I do it? What are the other possibilities? I would like to avoid using a split supply. AI: Note that the 20 mV figure is a measure of how close the output can go to 0 V (using your supplies) against a 100 kilohm load pulling the output to "mid-supply", i.e. 2.5 V. Without that pull-up, as in your application, the output will go considerably closer to 0 V. However, if you need to inject an offset, the place to do it is "ref", i.e. pin 6 - whatever voltage is there is the "0V" reference point for the output. Note also that the thermocouple will ONLY generate 0 V at ambient if it is actually a pair of thermocouples connected back to back, one measuring the temperature of interest, the other being held at ambient (or 0 °C or some other known point) to act as a "cold reference". (This may be what your "thermocouple" does, but it's worth mentioning because the circuit doesn't show it.) Otherwise you have at least three thermocouples in the circuit, two of them being unknown quantities formed by the connector pins, in which case the actual voltage is anybody's guess. One further point - I see you connect the thermocouple via a connector. It's worth considering what happens when it is unplugged: I recommend a high-value resistor from -IN to +IN to prevent +IN floating. (1 megohm with the 1 nA bias current implies an offset of 1 mV with no thermocouple, and it will have no measurable effect with the thermocouple present.)
H: Are these pcb layouts good? I'm working on an USB audio interface; the board has an analog section, with amplifiers and filters for the input and the output signals, a digital section, with the DAC, ADC, and the microcontroller which will control both ICs, and a power supply unit, which generates all the necessary voltages for the whole circuit (dual supply for opamps, 5 volts for micro and ICs). I've been struggling the whole week to cram all this stuff together onto a single, 2-layer PCB, but eventually a modular design looks like a better solution to me; it is easier to design a single module, and it will be easier to debug later. I've read loads of somewhat similar stuff on the internet, about mixed signal layout considerations, and I've tried to keep those considerations in mind during the final layout. The separation of grounds looks like the main thing to look at; now, I'm using a dedicated, isolated PSU which gives power to all the circuitry, and it's designed so that the digital and analog grounds are the same; but, since I'm going modular, there can be actual separation of grounds between the building blocks of this circuit. Enough talking. I'd like you to take a look at the first modules I've finished, which are the power supply and the ADC preamp filter. The PSU generates +/9V,+5V, and a voltage reference of +2.5V used in the preamp filter; it uses a Murata DC-DC converter which takes 12V in and spits +/9V @ +/-111mA; the +9V rail is connected to a pair of LDOs, one which generates +5V, and one which genereates a precise series voltage reference of +2.5V, which is used to bias the input filter for the ADC. I separated the input ground from the DC-DC converted ground, as it is an isolated type converter. I stitched the whole ground plane to reduce its total impedance. The MCU needs 5V, 30 mA max, and it has a 3.3V regulator; the DAC and the ADC get 5V and 3.3V, and they consume together 60 mA max; will the remaining 21 mA be enough current to power four MC33078 and two AD797? (two MC33078 are for the input filter, the other two with the two AD797 are for the output filter). Schematic and PCB: The other module I've managed to finish is the ADC filter preamp; it is a stereo balanced line receiver filter/buffer to balanced out; it just gets a pair of balanced audio signals and filters unwanted noise from them, feeding the differential receivers of the ADC. This one was actually more difficult; I tought that using the top layer to accomodate two power planes (upper part is +9V, lower part is -9V), while using the bottom plane for the ground plane was the best solution, in terms of ease of layout, and in terms of pcb space required. I got some doubts about the +2.5 V trace going under those 1206 components, but that was the only way I had to route it; I also have some doubts about the need of electrolytic 10uF decoupling caps near the PSU connector, and of the smaller 1uF decoupling cap between 2+5V and ground, near the opamps. Schematic and PCB: Bear in mind that this is the first well-thought circuit design I'm making, and it's probably the second circuit I'm making that will actually get printed and populated. Moreover, the filter circuit was not designed by me, but was the suggested reference filter designed in the ADC schematic. In the case the PSU will not give enough power to the circuit, I can easily use an ATX power supply to power everything, at least for the sake of prototyping. Any help, hint, criticism, or whatever is really appreciated! 
AI: Let me start by saying that I have not done a complete review of your design. Doing that properly would take hours, and I just don't have that kind of time at the moment. Given that, below are my comments in no particular order. You need much more capacitance on the power input and the +/-9v outputs of your converter. I would recommend something in the range of 22-100 uF. What you have there is a switching regulator and you need some caps to keep it fed, and to smooth out the output ripple. I would also increase the size of the caps on the "output" side of the inductors to at least 10 uF. 22 uF would be better. The REF5025 needs a 22 uF cap on the output, according to the TI datasheet. Additionally, if you intend this to be low noise then the chip should be located on the same PCB, and ideally right next to, the chip that needs the 2.5v. The LP2985, the +5v regulator, needs (you guessed it) more caps. At least 10 uF, 22+ uF would be better. The filtering on your preamp will filter the noise out quite well. Unfortunately, it will also filter out your signal. It has a cutoff frequency near or below 1 KHz. You probably want something else. Here is the frequency response from my own simulations. You should put your preamps on the same PCB as the ADC's, for lowest noise. Consider removing the 22 ohm + 4.7 nF filter on the output of your preamps-- but look at the app notes for the ADC and follow their recommendation for filtering. Depending on the ADC, this could increase your total noise. That's all I have right now. Basically: More caps and less PCB's.
H: What's an alternate pin function on a microcontroller? I'd like to know what "Alternative Function" refers to in the context of the I/O ports of a microcontroller. I don't need to know how to activate it when connecting to a peripheral, but I'd like to know exactly what it is and why we'd need it. AI: Many pins of your microcontroller have more than one function. The 'normal' function would refer to GPIO, General Purpose Input/Output. In that case, you can use these pins directly by writing to and reading from the relevant registers. 'Alternate' functions refer to other functions, which may include I2C, SPI, USART, CCP, PWM, clock, ADC, etc. How you control a pin when it is in an alternate function depends on the peripheral, but it generally comes down to writing to and reading from special function registers (SFRs); the peripheral takes care of the rest. Which function is active after a RESET varies (it is not always GPIO!), and you can find that in the relevant datasheet. Most of the time, you can select the function you want to use on the fly, so you can switch between peripherals. By using one pin for several peripherals, you can make microcontrollers with many features. However, because most of the time you want a given peripheral on a given pin all the time (and don't want to switch functions on the fly), you can't use all peripherals in one program, or at least not at the same time. On the other hand, that isn't often needed anyway. As Connor points out, 'alternate function' can refer to something else as well, in a slightly different context: there it isn't about which function you put on a pin, but about which pin you use for a function. This is called Peripheral Pin Select, and basically means you can select which pin your peripheral is using. You could, for example, do RS232 over RA1 and RA2 or over RB1 and RB2. See Connor's answer for a more detailed description (and upvote him for this).
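As a concrete illustration (my own example, not from the answer above): on an ATmega328P, pin PD6 is a GPIO by default, but it also carries the OC0A alternate function, so Timer0 can take over the pin as a PWM output once the timer's compare-output bits are set:

#include <avr/io.h>

int main(void) {
    DDRD |= (1 << PD6);                  // PD6 as output (needed in either role)

    // GPIO role: drive the pin directly.
    PORTD |= (1 << PD6);

    // Alternate function: hand the pin to Timer0 as OC0A.
    // Fast PWM, non-inverting output on OC0A, clk/64 prescaler.
    TCCR0A = (1 << COM0A1) | (1 << WGM01) | (1 << WGM00);
    TCCR0B = (1 << CS01) | (1 << CS00);
    OCR0A  = 128;                        // ~50% duty cycle

    for (;;) { }                         // the timer now controls the pin, not PORTD
}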
H: Piano key touch detection I would like to make a glove with a pressure sensor on each finger to detect the moment when a piano player touches a key. What kind of sensor should I use? I tried using simple buttons, but they are very uncomfortable for the player :) AI: The other answers are just as valid as mine; I'm just offering yet another option. Use a piezo buzzer element. Something like what is in this picture: When the piezo element is struck/stressed/etc. it will produce a voltage spike that can be detected. They are commonly used in drum machine pads, elevator buttons, etc. They are fairly cheap, so you could easily get one, hook it up to an oscilloscope, and experiment.
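A minimal Arduino-style sketch of the detection idea (the pin, threshold and the 1 MΩ bleed resistor are assumptions of mine, not from the answer; a piezo disc is normally read with a large resistor in parallel to drain the charge):

const int piezoPin = A0;     // piezo disc between A0 and GND, ~1 M resistor in parallel
const int threshold = 100;   // ADC counts; tune experimentally for a light key press

void setup() {
  Serial.begin(115200);
}

void loop() {
  int reading = analogRead(piezoPin);   // 0..1023 on a 10-bit ADC
  if (reading > threshold) {
    Serial.println("key touched");
    delay(50);                          // crude debounce of the voltage spike
  }
}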
H: Optimal Solution for function using Distributed Law I was given the function: \$ h = A.B' + C + B.C'.D + B.E + B'.C.D'.F \$ I was told to construct it in the most optimal way using only any size NOR gate by applying distributive law. I proceeded as follows: \$ h = (C+B'C)(C+D'F) + B'CD + BE + AB' \$ Which means I applied the distributive law on C and B'CD'F. Assuming \$x\$ to be \$C\$, \$y\$ to be \$B'C\$ and \$z\$ to be \$D'F\$ I got \$(x+yz = (x+y).(x+z))\$ and continued as follows for the remaining terms to come up with a final equation: \$h = (AB' + BE + BD + B'C +C) . (C+D'F + BE +AB'+BD)\$ I then used de Morgan's law to come up with the NOR design. That is all I did. Can any other simplification be done? AI: There is an obvious simplification: \$ C + B'CD'F = C \$ Which makes the function a lot smaller by eliminating a redundant term: \$ h = A.B′+C+B.C′.D+B.E \$ The result \$ h \$ is not dependent on input \$ F\$.
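The redundant term drops out by the absorption identity \$x + x\cdot y = x\$: $$C + B'\cdot C\cdot D'\cdot F = C\,(1 + B'\cdot D'\cdot F) = C$$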
H: What type of stepper motor should I use in my project and how do I control them? I'm building a video camera pan and tilt head. Here's what I'm getting at: I'll need one motor to control the horizontal rotation of the camera, and one for the vertical. The horizontal motor shouldn't meet much resistance, as I'll be using bearings to carry the weight. The vertical motor on the other hand, will have to move the camera to a position and hold it there. The camera might be anything up to 5kgs. I'm designing it to obviously be as balanced as possible minimising strain on the motor, but there may still be a fair amount. I'm using stepper motors because I need to be able to do repeatable movements, as well as extremely slow movements, e.g. move 2 steps, wait 10 seconds, move 2 steps again. I've been looking at the SparkFun stepper motors, but I've been recommended Farnell and have been completely overwhelmed by choice. So what kind of stepper motors would you recommend? I'd even been thinking of using two different types of motors, given the different stresses they'll be under. For instance, using this for pan and this for tilt. I'd tried calculating the holding torque from the 125 oz.in figure, but haven't come up with anything meaningful. And finally, as I'm going to be controlling this from an Arduino, what is the best way to connect up the two stepper motors to the Arduino? The SparkFun stepper motor driver doesn't seem very cost-effective, given I'd be using two of them at $16 each. What would you recommend? Just to be clear, I don't mean this to be a shopping-list type post. I'm asking whether the motors I mentioned are suitable or not, and if not what kind of things I should be looking for. AI: One important factor when working with stepper motors is hold torque. You can do some simple calculations based on your camera weight and platform radius to determine the torque due to gravity, and your hold torque should be a fair amount larger than that or the motor will slip. You'll also need stepper motor drivers in order to control the motors and provide enough current as well as the proper signal to control the motor. I've had experience with the Pololu A4988 in conjunction with an Arduino; it's simple to use and only uses 2 digital IO pins (at minimum). You can also spare more pins and control things like the driver enable, etc. As for actually choosing a motor, that can be tough. I'd pick a price range and form factor that you'd like to be within and use that to narrow it down. I'd also stay away from the SparkFun motors as they are usually on the more expensive end and not the best bang for the buck.
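With an A4988-style driver the minimum hookup really is just STEP and DIR; a bare-bones Arduino sketch of the "move 2 steps, wait 10 seconds" motion might look like this (pin numbers are placeholders, and it assumes the carrier's RESET and SLEEP pins are tied high):

```cpp
// Bare-bones A4988 step/dir example (pin choices are arbitrary).
const int STEP_PIN = 2;
const int DIR_PIN  = 3;

void setup() {
  pinMode(STEP_PIN, OUTPUT);
  pinMode(DIR_PIN, OUTPUT);
}

// Move a given number of steps; direction true = one way, false = the other.
void moveSteps(int steps, bool direction) {
  digitalWrite(DIR_PIN, direction ? HIGH : LOW);
  for (int i = 0; i < steps; i++) {
    digitalWrite(STEP_PIN, HIGH);
    delayMicroseconds(800);      // pulse timing sets speed; tune for your motor
    digitalWrite(STEP_PIN, LOW);
    delayMicroseconds(800);
  }
}

void loop() {
  moveSteps(2, true);            // "move 2 steps, wait 10 seconds" style motion
  delay(10000);
}
```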
H: Powering several amplifiers with one power supply I need to power 5 industrial amplifiers for a project but I am not sure I can use one power supply for all. Here is the amplifier model I will use: http://www.hbm.com.pl/pdf/a0114.pdf And the manual says "Supply voltage 15 ... 30 V DC (unregulated)" I also didn't get this. Does it mean I don't need a regulated input? AI: Yes, you can use a common power supply for all five amplifiers. The supply must be capable of providing enough current for the peak demand of all five amplifiers at once (e.g., 5 × 125 mA = 625 mA). For example, the NT101A/NT102A power supply shown in the datasheet has a rating of 650 mA, which would be plenty. And yes, it does not need to be a regulated supply, as long as it always produces a voltage in the specified range.
H: Using an RC snubber with a triac. Is this design safe? (simulation included) I'm designing an ac switch for a washing machine motor. No speed control is intended, only a switch to turn the motor on for 10 seconds and off for 10 seconds. I'm using a MOC3063 zero-detecting triac driver and a BT137-600E triac. This circuit was recommended in the MOC3063 datasheet (I have changed the triac to BT137): I have empirically found the model of the motor: I haven't yet implemented this design and I was experimenting with the values using a circuit simulator. I used the following circuit to see what would happen without the snubber (for the sake of simplicity the triac(s) have been replaced with a switch): The switch disconnects after 160ms. The result of the simulation shows a 250000V spike!: Now I use the circuit with the snubber: The voltage taken from the same place shows: So it's obvious the snubber has reduced the voltage spike from 250000 to 2000 volts. And the dv/dt is about 40v/us which is lower than the triac rated dv/dt which is 50v/us. Question 1: I calculated dv/dt by dividing the maximum voltage of the spike by the time it took for the spike to reach its maximum (~2000v/50us). Is this correct? I can get higher or lower values depending on the section of the slope over which I calculate it. Is the dv/dt I have obtained reliable? Question 2: Supposing that I had obtained the dv/dt correctly in question 1, the triac can now handle the dv/dt but will it be able to handle the 2000v spike? Does it have a breakdown voltage? I don't see anything in the datasheet except the "repetitive peak off-state voltage" which is 600 volts. What is the one-time peak voltage a triac can handle while in the off state? Question 3: The simulations showed that almost all of the ringing spike voltage (2000v) shows up on the snubber capacitor and the snubber resistor only sees a 20 volt (ringing) spike. What kind of capacitor should I use to handle this voltage and with what voltage rating? Can a 600v cap handle this one-time spike? Question 4: The power dissipated in the resistor is shown below (Remember once the triac turns off it will remain off for 10 seconds) What kind of resistor (wattage that it can handle) should I use? And do I need to use a carbon composition resistor (which is usually used in snubbers) or will a normal carbon film resistor suffice here? Question 5: What will happen if I replace the 10nf snubber capacitor with a 100nf capacitor? If I do this the voltage spike will go down to nearly 600v (see figure below) and I will be on the safe side regarding all the parameters (dv/dt, spike voltage, capacitor voltage). So why shouldn't I use 100nf? Why did the datasheet of the MOC3063 recommend a 10nF cap while the simulations show much better performance using a 100nF cap? Sorry for the long post and many questions. But I really need to understand this. AI: Your simulation is far too pessimistic, because you are opening your switch at a zero-crossing of the source voltage. Because of the inductive nature of your load (the motor), this corresponds to nearly the peak current flow. In actuality, your triac will conduct until the current zero-crossing, at which time, there will be very little energy stored in the inductance of the load.
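To put a rough number on "very little energy": the energy the snubber must absorb at turn-off is about the magnetic energy left in the load inductance, \$E = \frac{1}{2} L I^2\$. With purely illustrative values (not your measured motor model) of \$L = 100\,\text{mH}\$ and \$I = 0.1\,\text{A}\$ at the moment of turn-off, \$E = 0.5\,\text{mJ}\$; at a true current zero-crossing \$I \approx 0\$, so there is essentially nothing left to produce a spike.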
H: 32 bit, 4 way multiplexer I'm relatively new to electronics and recently decided to design and build a very simple CPU as a personal project. My instruction size is 32 bits and I want to have 32 bit registers so I am going to need several 32 bit wide, 4 way multiplexers. I know how to construct one using 32 1 bit, 4 way multiplexers but I'm wondering if there is a better way to do this? EDIT: Essentially this: How to create an n-bit multiplexer but in terms of actual hardware. AI: One way to achieve what you want is to use dense tri-state buffers, such as: http://www.ti.com/lit/ds/sces422/sces422.pdf You will need one 2-to-4 decoder to decode the address and drive the enable controls for 4 32-bit buffers. Exactly one buffer will be enabled at any given time. Here is one example image:
H: Reducing LED PWM Noise: Which is the best option? Driving an LED with a microcontroller should be easy. But when looking into noise, things can get complex... As an instantaneous voltage indicator used in a variable power supply (which will be used to simulate a photovoltaic array of a student's designed satellite), I am using some LEDs controlled by PWM (~31 KHz). simulate this circuit – Schematic created using CircuitLab After the first revision of the PCB I have realized that each one of the LEDs is generating 200 mVpp in the 5V line: My question: which is the best way to reduce this noise? Why? From my (small) design experience I could come up with the following possibilities. Which is the most effective, taking into account real-world components (ESR in capacitors etc)? Any other suggestion? A combination of several? (Note: changing the PWM frequency is not a good option because that same signal drives other devices) a) Decoupling Capacitor simulate this circuit b) Low-pass filter (The resistor forms an RC low pass filter with the gate capacitance of the MOSFET, removing the high frequency components of the switching) simulate this circuit c) Snubber simulate this circuit AI: Your a) solution is a good one, just make sure the capacitor is as close to the resistor/LED/transistor branch as possible. Start with a 100nF value and see how that works. If ripple is still too large to your liking, add an electrolytic capacitor in parallel to the 100nF. The 100nF will suppress the higher frequency components and the electrolytic capacitor will do better for lower frequency components. As @pjc50 says in one of the comments, a gate series resistor as shown in b) is good practice too to avoid ringing. I'd personally pick a lower value, say 100Ω. It will suppress ringing and will avoid the transistor spending too much time in linear mode (=dissipating heat).
H: From pulse height to number of electrons Assume that there is a pulse on a scope with \$100\;mV\$ height, as shown in picture This pulse is coming from a photomultiplier tube. How to determine the number of electrons from which this pulse is constructed? AI: Assuming 1MOhm input resistance of the oscilloscope you can compute the current from the voltage and integrate it to get the charge. Divide by the charge of an electron and you get the number of electrons. For the lower pulse that would give about -200nA * 15ns = -3e-15 C ~ 20000e. Note that I'm not an expert w.r.t. photomultipliers, so I don't know whether this is of the right magnitude and whether there are some caveats of this measurement method.
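In formula form, with \$R\$ the scope input resistance and \$q_e \approx 1.6\times10^{-19}\,\text{C}\$, and using the ~200 mV, ~15 ns figures from the lower pulse: $$N = \frac{1}{q_e}\int \frac{|V(t)|}{R}\,dt \approx \frac{V_{peak}\,\Delta t}{R\,q_e} = \frac{0.2\,\text{V}\times 15\,\text{ns}}{1\,\text{M}\Omega \times 1.6\times10^{-19}\,\text{C}} \approx 1.9\times10^{4}\ \text{electrons}$$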
H: How do I work out what size resistor I need? After asking why the fuse in my ATV was blowing, I was told to insert a resistor between the switch and ground. The problem now is determining what size resistor to use. Since fuses are ~$2.00 apiece, I'd like to avoid the trial-and-error method. The power source is a 12V 12AH motorcycle style battery. The voltage will likely fluctuate during operation due to the alternator (12 - 16V?). The circuit is basically a direct short to ground through an LED: I have no knowledge about the LED, other than it's part of a switch rated for 20A @ 12VDC. The fuses are 15A. How can I determine what size resistor I need? AI: I have no knowledge about the LED, other than it's part of a switch rated for 20A @ 12VDC. There's your problem. Since you don't know anything about the LED, you'll need a variety of resistors (or an adjustable multi-turn potentiometer) and will have to experimentally establish what looks good to you. Assume the LED will have a forward drop of around 1V (could be less, could be more). Start with 1mA of forward current and go from there. \$ R = \dfrac{12V - 1V}{1mA} = 11 k\Omega \$ Then, install the resistor in series with the LED, apply power, and measure the battery voltage and the resistor voltage to actually figure out the LED voltage drop and power: \$ V_{LED} = V_{BAT} - V_R \$ \$ I_{LED} = I_R = \dfrac{V_R}{R} \$ \$ P_{LED} = V_{LED} \cdot I_R \$ \$ P_R = V_R \cdot I_R \$ Repeat this iteration of increasing the LED current / reducing the resistor value until you find the minimum LED current that's useable for you (i.e. bright enough to see) and go with it. Also make sure your resistor is sized appropriately. (Using a fixed resistor once you settle on a value will be more reliable than leaving a potentiometer in the circuit.) If you arbitrarily go with too much LED current, the LED will end up with a short lifespan and go dark prematurely.
H: Different RS485 Full-Duplex with RS422 Are RS485 Full-Duplex interface same with RS422 interface? AI: RS-422 and RS-485 use the same electrical interface specification. If you use that interface to build a bidirectional link on a single pair of wires, it's called RS-485, and it supports half-duplex operation. If you use that interface to build two unidirectional links on two pairs of wires, it's called RS-422, and it supports full-duplex operation. This is sometimes mistakenly referred to as "full-duplex RS-485", but that's really a nonsense phrase.
H: Detecting 1'bx and 1'bz bits in a Verilog variable I have a 128 bit wire: wire [127:0] test; I want to stop simulation if any of the bits of test is 1'bx or 1'bz. This is what I tried: if(&test == 1'bx || &test == 1'bz) begin $stop; end Unfortunately, 1'bx and 1'bz do not seem to propagate under &. I have also tried replacing &test with |test. How can I quickly check if test contains any "don't care" or "tristate" bits? AI: Use the reduction XOR operator. Any x or z bit in the packed array makes the reduction result 1'bx, which you can then test with the case-equality operator ===. if ( ^test === 1'bx ) begin $stop; end
H: what is a transceiver reconfiguration controller on FPGA I have come across this on (Altera) FPGAs that make use of high-speed protocols, but I don't know what it does. AI: A transceiver is a combination of analog and digital blocks. The analog block (the PMA) can have sub-blocks such as a clock recovery unit, a transmitter PLL, and a bit serializer/de-serializer, and the digital part consists of one or all of the following blocks: a data aligner, phase compensation, and in many cases something like an 8b/10b encoder and decoder. To simplify the whole thing, when you connect 2 devices through a high-speed serial interface, you need to create some kind of control to make sure your data transfers correctly between these devices. This gets even more complicated when you have multiple lines of data running at multiple gigabits per second. The transceiver 'aligns' those lines and takes care of the electrical issues between those devices. And at the same time, it makes sure that faulty data is detected and in some cases fixed before sending it to the higher layer of the communication protocol. You can create most of the digital parts in HDL, but many modern FPGAs have those implemented in hardware already to make it easier for people to use. Both Altera and Xilinx have some wizards to generate a wrapper for the type of serial interface you want to access. You can find more in-depth information about them on these pages: Altera Transceiver Overview: Stratix IV and HardCopy IV Wikipedia - Multi-gigabit transceiver Xilinx 7 Series FPGA Transceivers Wizard You can also read this interesting short booklet from Xilinx: High-Speed Serial I/O Made Simple
H: What voltage is put across the Tx+, Tx-, Rx+, and Rx- lines over ethernet? I'm trying to figure out how to wire up an LED, probably controlled by a FET, for an activity indicator for ethernet. I'm not trying to make one that supports official error codes or anything, but just something that blinks on and off as the line goes high or low. So what is the voltage differential applied to the lines, and would they be high enough to power an LED or control a FET spliced into the line? EDIT: The point of this indicator is actually not to verify transmission or proper TCP/IP activity, but instead just to make sure that something is happening on the line. AI: This isn't practical. Since modern Ethernet uses a line code, there are transitions even when there are no packets. The LEDs you see on networking equipment are driven by the PHY, not straight off the cable. Even given that, the transitions are way too fast to see it go off and on with the transitions. The light would be apparently always on, and not blinking. If you wish to wire up a signal, the fast Ethernet (100BASE-TX) voltages are +1 V, 0 V, and -1 V. I believe it is ~100 ohms. 1 V/100 ohms = 10 mA. If your LED is in parallel, you need it to not affect this too much.
H: Doping concentration in anode region I'm stuck with this doping problem and can't figure out where to go with it. The built-in voltage of a GaAs pn junction diode is 1.25 V when the diode's temperature is T = 320K. The cathode region of the diode is doped with phosphorus at a concentration of 1e17 \$cm^{-3}\$. Determine the required doping concentration in the anode region. Would I go about this by using the equation \$V_{bi} = V_T\ln\left(\dfrac{N_a N_d}{n_i^2}\right)\$? \$V_T\$ being the thermal voltage \$N_a\$ being the acceptor concentration on the p side \$N_d\$ being the donor concentration on the n side \$n_i\$ being the intrinsic carrier concentration I have completed the following work for the above question. Can my work be checked for accuracy, please? AI: That is the correct formula for an abrupt junction, just be sure to use the right temperature for the thermal voltage \$V_T=\dfrac{kT}{q_e}\$ and the right intrinsic carrier concentration \$n_i\$ at that temperature.
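Rearranged for the unknown anode doping (you still need the intrinsic carrier concentration of GaAs at 320 K, which is not quoted here): $$N_a = \frac{n_i^2}{N_d}\exp\!\left(\frac{V_{bi}}{V_T}\right), \qquad V_T = \frac{kT}{q_e} \approx 27.6\,\text{mV at } T = 320\,\text{K}$$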
H: Ceramic vs. Film Capacitor: Which one is preferred in audio circuits? I'm considering building an amplifier following instructions from a MAKE magazine article. However, as I was reading the circuit schematic, I noticed that the author denotes that the capacitors C101, C104, and C105 are supposed to be "film capacitors." Is there a reason as to why one would use film instead of ceramic capacitors in this application? Also, if a website denotes "metal film capacitors", is that the same as a "film capacitor"? Right now, the only difference I know of in types of capacitors is that electrolytic capacitors have a polarity, while ceramic ones don't. I was wondering if film vs. ceramic has a similar differentiation. AI: "Film Capacitor" typically denotes polyester or polymer film as the dielectric - as another answer points out, metallized film capacitors are the same thing: A metallic coating being applied to an extremely thin polymer film, to create the conducting electrodes of the capacitor. In general, ceramic capacitors are somewhat non-linear in their frequency and voltage responses, compared to film capacitors. Another issue with ceramic capacitors is that they tend to behave as microphones, thus picking up ambient sound and modulating the voltage across them accordingly. Also, for smaller values (a few pF), ceramics were more commonly used, while larger values were somewhat likely to have film considered as an option - or at least that was how it used to be, before capacitors became so inexpensive with the advent of SMT, that the price difference became negligible except for huge volumes. Both film caps and ceramic ones are non-polarized, so that isn't a difference.
H: Transmitting power and PWM signal through the same cable I am working on an Arduino based project, which requires supplying 24VDC at 2A and 3 PWM outputs from the main board to the other board. The 24VDC is for powering the LEDs, and PWMs are for dimming control. To make wiring simpler, instead of running 5 separate wires (3 PWMs, 24VDC, and GND), I plan to use a multi-conductor cable like this and circular connectors like this for this purpose. Is this a proper way of delivering both power and signal through the same cable? What other methods are recommended? AI: I would suggest using separate wires for current return ("supply ground") and digital ground. Connect them at each end via a small-value resistor. Suppose the wires in your cable are 0.5 ohms. If you didn't separate out your supply and digital ground, then you'd lose two volts (out of 24) in your cable. An 8% energy loss in the cable isn't wonderful, but it's not the worst thing in the world. On the other hand, the ground level at the power consumer would be a volt higher than at the supply, meaning that a logic 1 which was output as 3.3 volts would be seen at the other end as only 2.3 volts. If the main power supply was connected to the digital logic with a 1 ohm resistor on the ground side, there would be two paths via which ground currents could flow: directly through the supply return (0.5 ohms), or else through the two 1 ohm resistors and the data ground wire (2.5 ohms total). The downstream supply ground would be 0.83 volts above the source ground level, but the two digital grounds would be within 0.17 volts of each other (at the supply, digital ground would be 0.33 volts above supply ground; at the load, the digital ground would be 0.33 volts below supply ground).
H: Improving a Push-Button Debouncing Circuit I'm having problems testing a 7490 decade counter; the switch sometimes toggles 2 or 3 counts (it's a pushbutton). This is already using this debouncing circuit I found: (well great, I can't post images yet. It's an inverter that's connected to another inverter that feeds back the first inverter in a loop and is also the output. The push-button for the zero output is connected between ground and output and the one output push-button is connected between the two inverters and ground.) Does anyone know any ways I can improve my mechanical switch toggling? An easy fix to the circuit I'm using would be ideal, but if that's not possible, what's a decent debouncing circuit I can use that works 100%? I just need it to be negative-edge triggered. Help much appreciated EDIT: Here's the image: AI: Implement your debounce like this to fully eliminate the effect of the switch contact chatter. Captured courtesy of this site. Update After discussion in the comments section I am adding the below picture to help visualize the "NAND latch" in the DeMorgan equivalent of its logic to show how the latch really is a low-true input latch.
H: How can I figure out an unknown UART's speed? So I have a piece of hardware I'm reverse engineering. I believe I've found a UART on the board. I have an FPGA connected to it that just does: ExtTX <= PCRX; PCTX <= ExtRX; So, in this way I can theoretically handle any speed the FTDI on my FPGA can(which is fairly fast). So, I have the hardware portion covered. Now though, using Linux(or Windows), how can I figure out what the speed and stop bits and other configuration options of the serial connection should be? AI: You want to measure the shortest transition-transition time. This will, assuming the bus you're looking at has a decently varied content, tell you the bit-time for the serial interface. You can then go from that to the overall baud rate. Really though, if you're serious about doing reverse-engineering or any sort of electrical engineering at all, you will need an oscilloscope (and probably a logic analyzer), so you should think about buying one soon.
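If you can export edge timestamps from a capture (logic analyzer or the FPGA itself), a rough sketch of the "shortest transition time gives the bit time" idea, snapping to common rates, could look like this (C++, purely illustrative):

```cpp
#include <vector>
#include <cstdint>
#include <cstddef>
#include <cmath>

// Estimate baud rate from a list of edge timestamps in microseconds.
// The shortest gap between edges approximates one bit time.
long estimateBaud(const std::vector<uint64_t>& edgesUs) {
  uint64_t minGap = UINT64_MAX;
  for (std::size_t i = 1; i < edgesUs.size(); i++) {
    uint64_t gap = edgesUs[i] - edgesUs[i - 1];
    if (gap > 0 && gap < minGap) minGap = gap;
  }
  double rawBaud = 1e6 / (double)minGap;            // bits per second
  const long standard[] = {4800, 9600, 19200, 38400, 57600, 115200, 230400};
  long best = standard[0];
  for (long rate : standard)                        // snap to nearest standard rate
    if (std::fabs(rawBaud - rate) < std::fabs(rawBaud - best)) best = rate;
  return best;
}
```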
H: Is this a common LCD display? I'd like to know if this LCD display is a common one like the HD44780. Sadly there is no model number on it, only the brand(?) "onv" or maybe it should be "auo" when you read it upside down. It's part of a car radio. It has a 40 pins connector. connector: whole display: AI: That display is almost assuredly a graphics display. That would make it MUCH different from a HD44780 style display that natively displays characters. Enough information is not provided to determine if this display has a controller onboard that is capable of receiving the graphics display pixels and managing the refreshing of the screen to keep the display alive. Alternatively this display could be one of the type where a real time video signal needs to be fed to the display in order to keep the screen active. In this alternate mode the display would be working similar to a computer driven LCD monitor that requires a continuous feed of a VGA or DVI video signal to keep the screen active. In either case be prepared for a fair amount of work to get the display going. Personally I would suggest that you begin work with something that you can get some decent information on it in order to figure out what it does and how you can "talk" to it.
H: Will LEDs in opposing directions be damaged? I was trying to build a simple RC circuit that causes an LED to fade in and out. However, I have electrolytic capacitors, which means that they have polarity. If I connect a circuit as follows , is the circuit capable of damaging an LED due to back voltage? (The capacitor is supposed to have polarity, but I didn't know how to put the symbol in CircuitLab.) I think not, because the other LED has very little resistance when the voltage is at any substantial level. simulate this circuit – Schematic created using CircuitLab When making the circuit in CircuitLab, I left the component values (e.g. 100 ohms or 1V) as default, as I'm mainly interested in the generic circuit, not a specific one. AI: Note that each diode shunts the other's reverse voltage. Neither diode is permitted to have more reverse drop across it than that which is established by the forward drop of the other. This kind of circuit basically shows a symmetric voltage drop in either polarity. Back-to-back diodes are used (for instance in op-amp feedback) to create symmetric clipping (before the signal swings to the power rails). The clipping can be softened with resistors. In your circuit, when the switch is in the battery position, and the capacitor is initially empty, the capacitor initially looks like a short. This is when the most current flows. The conducting diode is protected by the 100 ohm resistor, and the reverse diode is off. As the capacitor fills, the current dwindles down to zero. When the switch is flipped to the ground position, the capacitor empties. This time the other diode conducts and the first diode is off. Initially, the same current flows which flowed during the other diode at the start of charging. The current eventually dwindles to zero. The resistor protects the diode stack by dropping most of the voltage, and this works in either direction. To notate a polar capacitor in CircuitLab, right click on the part, click Edit Parameters, and then use the combo box next to the DISP: field to choose polar.
H: AC DC Analysis of diode in the following circuit: I'm looking to find what \$v_D(t)\$ is given \$V_γ = 0.7 V\$ and \$r_f = 0\$. Now when doing the DC analysis, I find that \$VDQ = 0.7 V.\$ But I'm having trouble finding IDQ. Would that be \$4.7/7V\$? Once I find that, I use \$r_d = V_T/I_{DQ}\$, correct? Thank you all very much! AI: DC analysis: The voltage drop across the diode is 0.7 volts, and the DC current through the diode is only determined by the resistor: $$IDQ=\frac{7V-0.7V}{4700\Omega}=1.34mA$$ For the AC analysis you can use the differential resistance $$r_d=\frac{kT}{qIDQ}\approx\frac{0.025}{IDQ}\Omega=18.65\Omega\quad\text{(at room temperature)}$$ So for \$v_D(t)\$ we get $$v_D(t)=-\frac{r_d}{r_d+4700\Omega}0.4\sin(\omega t)V=-1.58\sin(\omega t)mV$$
H: Party-line / Full-duplex intercom - mixing audio signals Party-line or full-duplex intercoms allow for the simultaneous transmission and reception of audio from all connected stations, i.e. effectively there is an audio 'bus' and each station takes a feed from it and routes this to its headset, and also adds a signal from its microphone that is then available to the other stations. The connecting wire can be either balanced or unbalanced. simulate this circuit – Schematic created using CircuitLab My question is: how do you add these signals together? Do you simply connect the outputs of the output op amps at each station together onto the wire (via some form of AC coupling, perhaps?). Is that what this guy does in this circuit here?: ComClone2 Circuit Diagram AI: No, that's not what he does. He feeds each microphone through an amplifier then through an output impedance that feeds the line through R14 and C4 shown in thick red circle below: - I've shown the wire that is the actual line used by the intercom in a thin red/brown circle. Because of the output impedances of several units connected all at once there will be no clash of signals and, within reason, the line voltage will be an approximate sum of the contributions of all active intercoms. AC coupling via C4 is needed for the op-amps but note that there is a call circuit that needs to put DC onto the line to "ring" the other intercoms. Side-tone cancellation takes place around U2B and for a 1-to-1 call you won't get feedback in your ear of your own voice. With several of these units connecting at once you may get feedback from mic to ear and this could cause havoc to other users too. So be aware of this. Balanced Line out with DC signalling: -
H: Heatsink earthing? I'm currently working on my first circuit that dissipates enough power that I need to use a heatsink. The component in question is a TS1085CZ33 which I'm running with a maximum load of 2.8A dissipating around 7.5W. I'm just wondering a couple of things: Is the heatsink tab on these chips (I'm using the TO-220 variant) electrically connected internally? If so, what is it connected to? If it isn't connected should I ground the heatsink? AI: I think if you used an up-to-date spec you'll see the above in red - this spec has stuff about RoHS compliance and the device being lead-free so it's probably more recent than the one in the OP link. TS1085 newer link
H: Why use a pull-down resistor with a LDR and microcontroller? I am looking at this: http://learn.adafruit.com/photocells/using-a-photocell It's connecting VCC <-> LDR <-> resistor <-> GND. And the analog input is between the LDR and the resistor. I understand that a resistor may be necessary to control the current (in case the LDR resistance is very low). But why can't I just do the following: VCC <-> LDR <-> resistor <-> Analog Input And forget about the pull-down? AI: When working with digital circuitry that senses analog voltage, for example a microcontroller, or let's say Arduino, you are measuring voltages. However, without current, voltage cannot be present. For a voltage to be created on a component, there needs to be a current flowing through it. According to Ohm's law, \$V=I*R\$, when \$I=0\$, the equation becomes \$V=0*R=0\$. Thus, no voltage will be present, and the microcontroller will not be able to measure anything. Proper way of sensor connection Check out the schematic below. First, have a look at the left side, a proper LDR connection with a proper pull-down resistor. A current will flow through R2 and create a voltage drop on it. \$Vanalog =Isensor*R_2\$, where \$Isensor\$ is determined by the total resistance of the sensor and \$R_2\$. Since the LDR's resistance changes with the light, the current, hence the voltage, will change. You may have noticed that there is a resistor I drew at the input. This is called the input resistance, or impedance, of the microcontroller and is generally very big, such as \$10M\Omega\$. In this configuration the input resistance and \$R_2\$ are connected in parallel, so their effective resistance is going to be \$R_{total}=\left(\dfrac{1}{R_2}+\dfrac{1}{R_{in}}\right)^{-1}=9990.01\Omega\$ which is almost equal to \$10k\Omega\$. So, there will be no change. The voltage AnalogValue is then \$Vanalog=Isensor*10k\$, where \$Isensor\$ is \$\dfrac{Vcc}{R_1+10k}\$ Let's say our sensor \$R_1\$ is 10k at the current lighting condition. And \$Vcc=5V\$; \$Isensor=\dfrac{Vcc}{R_1+10k}=\dfrac{5}{10k+10k}=250 \mu A\$, \$Vanalog=2.5V\$ What if there was no pull-down resistor? If there was no pull-down resistor, the configuration would be as shown in the diagram below. The sensor current, \$Isensor\$, would be the same as \$Iinput\$, since all the currents on a wire are the same. Our microcontroller measures the AnalogValue, the voltage on the pin. Let's calculate the values for this scenario, too: We know that \$Isensor=Iinput\$; now let's assume, again, that the LDR is \$10k\$. AnalogValue is calculated as follows: \$Vanalog=Isensor*R_{in}\$, where \$Isensor\$ is \$\dfrac{Vcc}{R_1+R_{in}}=\dfrac{5}{10k+10M}\approx500*10^{-9}A=500nA\$ Thus, \$Vanalog=Isensor*10M\approx(500*10^{-9})(10*10^{6})\approx5V\$ As you can see, since almost no current flows, there is almost no voltage dropped on the sensor, and even though we could read 2.5V in the previous proper configuration, we read 5V with the same light, i.e. when \$R_1=10k\$. This configuration will not work.
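On the Arduino side, reading the divider and converting back to the LDR resistance is only a few lines (A0 and the 10 k pull-down are the assumed values from above):

```cpp
// Read an LDR in a divider: Vcc -- LDR -- A0 -- 10k -- GND (assumed wiring).
const int   LDR_PIN = A0;
const float R_PULLDOWN = 10000.0;   // ohms
const float VCC = 5.0;

void setup() { Serial.begin(9600); }

void loop() {
  int raw = analogRead(LDR_PIN);                // 0..1023
  if (raw > 0) {
    float v = raw * VCC / 1023.0;               // voltage across the pull-down
    float rLdr = R_PULLDOWN * (VCC - v) / v;    // divider equation solved for the LDR
    Serial.println(rLdr);                       // LDR resistance in ohms
  }
  delay(500);
}
```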
H: Analysis of Shunt Circuit I am having trouble computing the closed loop gain of the circuit below: It seems clear that this configuration is a shunt-voltage configuration but I just can't figure out a way of computing the feedback factor and the open loop gain. I know the gain has to be a transimpedance and so it can't just be the transistor gain beta. Given: A = Open Loop Gain, Acl = Closed Loop Gain, K = feedback constant \$ I_{in}=I_{fb}+I_b=I_b+kVout \$ \$ A = \frac{V_{out}}{I_b} \$ Can someone please help me prove that: \$ A = -\beta R_c \$ \$ k = -\frac{1}{R_f} \$ \$ A_{cl} = \frac{\beta R_c R_f}{R_f + \beta R_c} \$ AI: I'm not sure what your second diagram is trying to show, but it together with all the stuff following (I tuned out after the first diagram) suggests that you are making this complicated. It is good to see that you have thought about this problem and put some effort into solving it, but it's easier to just solve it than to try to figure out exactly what you did. First, let's define "gain" in this context. Since the input is a current and the output a voltage, the gain is the ratio of the output voltage change to the input current change that caused it. Unlike more common voltage to voltage amps, gain is therefore not dimensionless and is actually a resistance. This is OK as long as it's kept in mind. This is called a transimpedance amplifier. Note that this kind of gain can't be expressed in dB, for example. Another condition is that the input current be within the range so that the output does not clip, which is also often referred to as the linear region. Everything below here assumes this condition has been met. Let's also make the simplifying assumption that the transistor has a fixed gain over the region of interest. That's actually a reasonable enough assumption even in real applications. In general, you want to design actual circuits to be tolerant of a wide gain range, preferably from whatever minimum the transistor datasheet guarantees to infinity. In this case we'll call the transistor gain B, which is the ratio of the change in collector current to the change in base current. To solve for the gain, start by removing Rf. The current gain from input to what goes thru Rc is then simply B by definition, making the overall gain magnitude B*Rc. Note that this circuit inverts as you have defined the input and output quantities, so the gain is actually -B*Rc for the simplified case where Rf is removed. So far we have:   Vout = Iin * (-B*Rc) Actually this is really: the change in Iin times the gain gives the change in Vout. Let's not forget that, but it's less cluttered to write it as above. So what does Rf add to this? If we make the simplifying assumption that the base voltage of the transistor stays constant, then the change in Vout feeds a current Vout/Rf back into the input node. Since Vout moves in the opposite direction to Iin, this subtracts from the input current by the proportionality factor of Rf. Put another way, from the point of view of the rest of the circuit Iin is really (Iin + Vout/Rf). Plugging that into the equation above we get:   Vout = (Iin + Vout/Rf) * (-B*Rc) Rearranging to get Vout as a function of Iin, we get:   Vout = -(B Rc)/(1 + (B Rc/Rf)) * Iin which can also be written as -(B Rc Rf)/(Rf + B Rc) * Iin, matching the closed-loop gain you were asked to prove. Again, Iin and Vout in the equation above actually denote incremental changes in those quantities. Note that when Rf is removed, meaning it goes to infinity, the gain becomes -B*Rc as we had before.
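Written in the form the question asks for, the magnitude of the closed-loop transimpedance is $$A_{cl} = \left|\frac{V_{out}}{I_{in}}\right| = \frac{\beta R_C R_f}{R_f + \beta R_C}$$ which tends to \$R_f\$ when the loop gain \$\beta R_C / R_f\$ is large - the usual result for a shunt-shunt feedback stage with \$k = -1/R_f\$.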
H: Problem with the 74160 decade counter I am experimenting with different TTL components and am having a lot of trouble with the 74160 decade counter. I don't know if it's my inexperience or if I'm dealing with a malfunctioning IC, but the IC isn't doing what it's supposed to, and it heats up pretty darn fast (I always check for heating once I power up) After some testing, I noted that all I have to do is connect the GND to ground and the +5V pin to 5V and it starts heating up fast. If I add a resistor connecting to ground, it doesn't heat up, but it heats up once I connect the remaining pins (as it stands, pins 7 and 8 are grounded, 9 and 16 are in +5). As soon as I power it up, all the outpins turn HIGH. So here is what I'm wondering: Do I ever need a resistor once a TTL datasheet tells me to connect such and such pins to ground or to +5V or can I connect them directly? Can I conclude that my 74160 is short-circuited/malfunctioning? Is there anything else I might be doing wrong? AI: After looking at the datasheet, the part normally pulls 60-70mA at 5V, dissipates 350mW. This will be warm to the touch, not hot. Not sure why you're using an old TTL device, newer fast CMOS parts have standby currents in the 100uA range. You mention adding a resistor to gnd, what's the other end of the resistor connected to? You may float TTL inputs, hopefully there are no DC loads on the outputs, otherwise the chip will draw current unnecessarily. Try this: connect a series resistor (5 ohms) inline with Vcc. If you get a voltage drop > 0.5V across the resistor, then either the chip is damaged or the outputs (including RCO) are driving heavy loads.
H: DC Motor Overvoltage Operation I am new to PWM control. Is it advisable to operate a motor at higher than rated voltage with the "PWM average voltage" more than the nominal voltage specified in the datasheets? After some researching, the majority of the information that I kept seeing is that it should be OK (with a recommendation to operate at high PWM frequency). Secondly, if the "average PWM voltage" is kept less than the nominal voltage (with the supply voltage much higher) is it correct to refer to it as over-voltage operation? Or is it only when the average voltage is higher than nominal voltage? AI: As long as the PWM frequency is fast enough, its average voltage is what counts. No, the average PWM voltage should not exceed the motor's rated voltage, at least not for long. This is no different than applying a DC voltage to the motor. Using a high voltage supply and then less than 100% PWM to compensate is a perfectly legitimate way to run a motor, again, as long as the PWM frequency is fast enough. In effect you are creating a switching power supply that converts the high voltage to the lower one used to drive the motor. It may not look that way because the inductance of the motor windings is an integral part of this power supply. It can be useful in controlling a motor to occasionally, for short periods of time, drive it with a higher than rated voltage. If using this kind of control scheme, care should be taken to guarantee that the overvoltage does not persist. You could go so far as measuring the temperature or calculating the energy dissipated by the motor along with the assumed decay to ambient to calculate the maximum drive you are allowed to apply at any instant. You can't go too far with this. Generally 1.5x the specified drive voltage will be OK for short periods of time, especially when heating is taken into account. At higher voltages you will hit other limits, like arcing and insulation breakdown. Note that these apply to the peaks of the PWM waveform, not the average. For example, a "5 V" motor run from 30 V with a PWM duty cycle of 1/6 at 25 kHz will likely be fine. 30 V insulation capability pretty much happens by applying any insulation at all. However, the same factor of 6 applied to a 110 V motor would be a bad idea without consulting the manufacturer. 660 V applied to a "110 V" motor could cause damage, even if this is only at the peaks of the PWM and the average is still within specs.
H: DTMF based robot Does anyone know anything about connecting an audio phone jack to a microcontroller? I am making a DTMF based mobile controlled robot and I do not know how to connect a 3.5mm audio phone jack to the microcontroller. I do not know the colored wire connections. Somebody help me out please. AI: It's usually this pinout: ground on the barrel, audio on the tip and ring. If you have some wires coming from a cut plug, use a multimeter in resistance testing mode touched to the plug to work out which is which.
H: What are the factors that significantly affect the life span of a DC motor? I'm currently planning to run a DC motor continuously on solar power. So I would like to know which things I should be very careful about for optimum lifespan of the motor. I found the following two, and would like to know what other factors matter: Voltage Current Source AI: Voltage and current - Basically because you will run your motor continuously, you will want to run your motor at or below its nameplate rating. Too high of a current means your motor will heat up too much (see below). Higher current also increases the current density in your brushes which will shorten life. Too high of a voltage would mean your motor is rotating faster than typical and could reduce bearing life. At higher speeds your brush pressure may not be optimized, either, so you could reduce brush life. Temperature - Most standard motors are designed for a maximum ambient operating temperature of 40 °C. This temperature takes into account the temperature rise that will result from running your motor at its rated load. If your ambient temperature goes above 40 °C (or if you run your motor above its nameplate rating) your motor may get hotter than it should. If the motor gets too hot, it may affect the life of the insulation, the life of the bearings, or the life of the brushes. If insulation temp is an issue, you can go to a higher class insulation (for example, from Class B to Class F or from Class F to Class H). If bearing temp is an issue, you can look at getting higher temperature grease in your bearings. The temp of your brushes can cause a change in the friction between brush and commutator. You can get different grade brushes with different coefficients of friction that can help you with this. Low temperatures can also affect life. For example, your bearing grease has a lower temp limit as well and you can specify a low temp grease if necessary. Moisture - If your motor is outdoors or needs to be washed down, then your motor should be sealed. Basically you want to prevent corrosion on metal parts. You can get sealed bearings, water-proof wire exits, o-rings between end bells and housings, etc. to prevent leaking. High humidity can actually cause brushes to have a shorter life, too. UV - Some plastics and rubbers will degrade in sunlight, so if the motor will see direct exposure to UV, you will want to make sure your materials are compatible. This may affect lead wires, grommets, brush holders, etc.
H: How do I control a seven segment display with a shift register and an Arduino? I recently purchased 4 seven segment displays from sure electronics. They came with their own shift register board which combines the 74HC595 IC with the ULN2003. I want to be able to tie into this board using the Arduino and after several hours of research and trial and error I have had no luck. The data sheet for the seven segment displays and the shift register board can be found here: http://www.sure-electronics.net/mcu,display/DE-DP005.pdf . I can't even figure out the functions or names of the pins on the shift register board at this point. Any help would be much appreciated. AI: The names of the pins are clearly labeled on both the back of the board, and the datasheet. PWR_IN is 5v to 15v, 12v preferred. The two regulators on the board provide the appropriate voltages from there. DIMM IN is the '595's OE, Output Enable Pin CLK IN is '595's SH_CP AND ST_CP Clock Pins DATA IN is the '595's SI Serial Input Pin CLK IN and DIMM IN are buffered to the output side with a buffer chip. DATA OUT is the '595's Q7", Serial Output Pin, which goes to the next display. And PWR_IN is directly tied to PWR_Out. Since Latch and Clock are tied together, a quick search online indicates: I had to add one more clock up/dw because, as the datasheet says, latch clock is one step behind the shift clock, when tied together as a fix to any issues. Other than that, this can be driven with the regular Arduino ShiftOut library. The DIMM IN pin must be driven low/0 for the display to be on.
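A minimal sketch of that approach might look like the following (pin assignments are placeholders; note the extra clock pulse to latch the data, since the shift and latch clocks are tied together on this board):

```cpp
// Drive the Sure Electronics 7-segment board via its 74HC595 inputs.
// Pin choices below are arbitrary; wire DATA IN, CLK IN and DIMM IN accordingly.
const int DATA_PIN = 2;   // board DATA IN
const int CLK_PIN  = 3;   // board CLK IN (shift + latch clock tied together)
const int DIM_PIN  = 4;   // board DIMM IN (OE, active low)

void setup() {
  pinMode(DATA_PIN, OUTPUT);
  pinMode(CLK_PIN, OUTPUT);
  pinMode(DIM_PIN, OUTPUT);
  digitalWrite(DIM_PIN, LOW);        // enable outputs (full brightness)
}

void writeSegments(byte pattern) {
  shiftOut(DATA_PIN, CLK_PIN, MSBFIRST, pattern);
  // The latch clock lags the shift clock by one edge when tied together,
  // so give one extra clock pulse to move the data to the outputs.
  digitalWrite(CLK_PIN, HIGH);
  digitalWrite(CLK_PIN, LOW);
}

void loop() {
  writeSegments(0b00111111);         // example segment pattern; check your segment wiring
  delay(500);
}
```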
H: Double side PCB with solder via? Normally vias are made with through hole plating. In the past they used to make vias by soldering a piece of wire to connect the top and bottom layers (like a short piece of 24 gauge copper wire). Could the same be done with solder? Question: Can a VIA in a double sided PCB be connected with solder? Is there any precedent for such? AI: I don't see why not, but I think you will find that it's easier to assemble with wire. The trouble I suspect is heating both sides of the board simultaneously such that you can get the solder to flow on both sides, then overcoming the surface tension of the solder and getting it to go through the hole. That, and solder is more expensive than wire.
H: How does this circuit multiplex an antenna between a transmitter and receiver? I've built a Softrock RXTX kit (full schematic: pdf gif, see lower-left corner). It has one connector for one antenna used for both receive and transmit, but I don't fully understand (likely due to my inexperience with RF design) what all the parts do. The full schematic is a mess, so I've done my best to transcribe the most relevant parts: (source: circuitlab.com) So, I understand that when \$\overline{PTT}\$ is low, then Q10 is off, which isolates the receiver. Q11 is on: why? When \$\overline{PTT}\$ is high, Q11 is off, which is good, since I don't suppose we want to ground the receiver's input. How do we know that Q10 will be on, given that its source is floating? What does C27 accomplish? Say I had an RF amplifier and I wanted to add a similar circuit to bypass it when receiving. Would I have to worry about disturbing the impedance matching? What should I consider when selecting the transistors to use? I intend to cover HF at least up to 30 MHz. AI: In your excerpted schematic, you left out the rest of the receiver input, which includes a DC path to ground through the coil L4 and the primary of the transformer T5. That's why you can assume the source of Q10 is at DC ground. Depending on its value, C27 is probably there to provide an AC impedance that helps to isolate the switching circuitry from the low-pass filter in the transmit path (which is there to reduce harmonic emissions). It's a compromise between providing sufficient isolation while transmitting and providing sufficient signal when receiving. Yes, you do need to pay attention to the impedance issues (see answer about C27 above). One of the things that C27 does is give you a more definite value of capacitance to incorporate into the design of the transmit path. The main factors for transistor selection would be the peak voltage that it (Q10) can withstand when off, and the terminal-to-terminal capacitance, which, if too high, will limit the amount of isolation you can achieve. Note that you can't use a "power MOSFET" in this application; its body diode will conduct on the negative peaks and create problems.
H: Why are copper pours called copper pours? I was adding a ground plane to a PCB layout today, and my curiosity was piqued. Virtually every EDA package uses the term "Copper Pour." Why is it called a copper pour? It doesn't really make sense considering the copper is etched away. Is it because the pour fills in all areas that aren't part of the net, or is there some historical reason? AI: I suspect it's because it's as if you poured copper and it filled the empty space on the board like a liquid. See the paint bucket tool in many graphics programs.
H: FPGA Logic Gate Count I found an FPGA board that I liked. It uses a Xilinx Spartan 6 LX45. When I went to the datasheet for the Spartan 6 series, it only said that there were 43,661 logic cells. How many gates does that equate to? Or rther, how would I figure out the number of total gates from the number of logic cells? AI: FPGA manufacturers don't use equivalent gate counts much any more, even in the hand-wavyest marketing materials. Like lines of code or megahertz of processor speed, it's a highly inaccurate metric for measuring the device capability, and in the FPGA markets the customers wised up enough to suppress its use. To estimate the size device you need, you'll need to look at the summary on p. 2 of the datasheet you linked. Usually you can get a decent idea early on in your design process how many flip-flops, how many i/o's and how much ram your design needs. One or the other of those will typically be the critical resource that determines the size of part you need. If you aren't tightly cost-constrained, use a device 2x or more bigger than you think you need. It will give you room for feature creep in your design and also speeds up development because the design tools won't need to work so hard to fit your design into the available resources. Edit, pulling in things from comments, You mentioned that your design is mostly unclocked. The issue with this is that FPGA design tools depend on clocking and the resulting timing constraints to drive optimization of the synthesized design. If you want to do unclocked design in an FPGA it's possible in principle, but you're not going to get much help from the tools (or vendors) and you'll probably need to find a specialized community who do that kind of thing to get any support. In any case, you can look at the Spartan 6 Configurable Logic Block User's Guide to see what resources are available in each block. Then mentally map your design to those resource to see how many blocks you need. That should be enough to let you pick the right size device. For example, you can see in that document that the LX45 part contains about 27,000 6-input LUTs. Each LUT can be used to implement an arbitrary combinatorial logic with up to 6 inputs. If you can express your logic in terms of this primitive, you can estimate whether your design fits into the device.
H: How to turn DC motor to specified angle We have bought 2WD turtle bot with Arduino. http://www.dfrobot.com/index.php?route=product/product&product_id=65#.UZxObIoW3RQ Now we need to turn it to 90 angle. We need to understand how we can do it. We don't have encoders. AI: Drive the left and right motors in opposite directions. If your design requirements constrain you to having no feedback, your best bet is to experiment with the time you turn on your motors. Note that this value can change based on things like the battery voltage and floor texture. Since it has an Arduino, I'd suggest adding a nice gyroscope and integrating the output to find your angular displacement. Or some other way to find out how far you've turned.
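A rough sketch of the timed-turn approach (the driver pins and the 600 ms figure are placeholders you would calibrate for your floor and battery):

```cpp
// Timed 90-degree spin for a 2WD base with no encoders. All pins and the
// turn time are assumptions for a generic dual H-bridge (IN1..IN4 style).
const int L_FWD = 5, L_REV = 6, R_FWD = 9, R_REV = 10;
const unsigned long TURN_90_MS = 600;   // calibrate experimentally

void setup() {
  pinMode(L_FWD, OUTPUT); pinMode(L_REV, OUTPUT);
  pinMode(R_FWD, OUTPUT); pinMode(R_REV, OUTPUT);
}

void stopMotors() {
  digitalWrite(L_FWD, LOW); digitalWrite(L_REV, LOW);
  digitalWrite(R_FWD, LOW); digitalWrite(R_REV, LOW);
}

void turnRight90() {
  digitalWrite(L_FWD, HIGH); digitalWrite(L_REV, LOW);   // left wheel forward
  digitalWrite(R_FWD, LOW);  digitalWrite(R_REV, HIGH);  // right wheel backward
  delay(TURN_90_MS);
  stopMotors();
}

void loop() {
  turnRight90();
  delay(5000);
}
```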
H: Substitute a MOSFET in a motor controller I'm trying to build a CoffeeBot from MAKE magazine, and I had a question about one of the components. The circuit is pretty simple; it uses an Arduino to control a gear motor via a MOSFET transistor. The Arduino is using a digital pin which I believe is capable of PWM, but in the code provided it merely turns ON/OFF. So my question is about the MOSFET. I understand the basics of transistors, but why this particular part? The part listed is a 12N10L Logic Level MOSFET. What makes a MOSFET in this instance better than a regular NPN transistor? Is it the current that it can handle? Or is it somehow better at high frequency switching, were I to use PWM to control speed? Also, if it is the current/heat capabilities of the part that makes it ideal for this job, what advantages would it have over something like a TIP31 or a TIP120 transistor? Edit: People have rightly noted that I forgot an important detail, the current required by the motor. This is a small gear motor GM8 that seems to stall out at 400mA. Does that change the issues with the TIP120? AI: Two reasons: In a BJT, the current that must pass through the base is related to the current that flows from collector to emitter, by the DC gain of the device. The GPIO pin on the Arduino would need to supply this base current. In either device, thermal power i.e. heat generated at the switching device is related to the current through it, thus: P = V x I = I^2 x R where V is the voltage Vcesat between Collector and emitter for the BJT, or in the MOSFET case R is the Rdson. The TIP31 mentioned, has a DC gain of as low as 10 at 3 Ampere load, and 25 at 1 Ampere. This means to drive just 1 Ampere through your motor, a base current of 40 mA is needed, which is the maximum rated current for any GPIO on the AVR chips used in most Arduino boards. In practice, devices should never be operated at maximum rated values, so the TIP31 is not an option. The TIP120 has a better DC current gain, so base current wouldn't be such a problem. However, it has a Vcesat of 2 Volts at 3 Amperes and 4 Volts at 5 Amperes. This means between 6 Watts and 20 Watts of heat will be generated at the BJT for such currents. Not nice. The MOSFET, on the other hand, has a rated Rdson of 0.12 Ohms with gate at 5 Volts. So heat generated would be around 120 milliWatts at 1 Ampere, 1.08 W at 3 Amps, and 3 W at 5 Amperes load current. Much cooler than the BJT, though one would still use a heat sink at 3 Amps and up. Gate current is also not an issue, since MOSFETs being voltage driven devices, pass negligible current at the gate anyway, except a small amount instantaneously at turn-on, to charge the gate capacitance. The question does not specify current needed by the motor, but there are many super-cheap logic level MOSFETs available that show excellent low Rdson characteristics even at gate voltages as low as 2.5 Volts. A fine and really inexpensive such MOSFET is the IRLML2502, sold for under 25 cents, which you could consider in place of the specified MOSFET, if the load voltage and current specifications are met: Under 0.08 Ohms Rdson for merely 2.5 Volt gate voltage, and good for up to 3.4 Amps without any trouble.
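For the roughly 400 mA stall current of the GM8 mentioned in the edit, the MOSFET numbers (using the 0.12 Ω figure above) work out to \$P = I^2 R_{DS(on)} = (0.4\,\text{A})^2 \times 0.12\,\Omega \approx 19\,\text{mW}\$, so no heatsink is needed, whereas a Darlington's volt-or-so of saturation drop at the same current would dissipate on the order of a few hundred milliwatts.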
H: What does the RNF pin do on an H-bridge? The H-bridge in question is a BD6212FP. Pins 7 and 8 are marked "RNF". Below is a link to the datasheet. http://www.farnell.com/datasheets/46169.pdf AI: If you look at the block diagram, it's apparent that those pins are where the H-bridge return current comes out. The reason they're labeled RNF and not GND, is that often times you will see a low value (e.g., under an ohm) resistor inserted in that return path. The resistor provides a voltage drop proportional to the current the bridge is passing. If you measure the voltage and you know the resistance, then you can compute the current via good old Ohms law. You can connect these straight to ground if you like, and ignore the current, but knowing the current is really useful. It gives you an idea of the kind of load you have on the system. For instance, if you're driving a motor, then the difference between a happily spinning motor and one stalled out (and therefore probably overheating) is that the latter will pull a lot more current.
H: Topology of solar PWM charger I am looking into the design of the simplest (yet functional) PWM solar chargers for lead acid batteries. (Numerous examples can be found on Ebay for about 10 bucks.) I have a hunch that they should be able to work as constant voltage sources without any magnetic filter components since they operate from a current rather than a voltage source. - But I am not sure. My understanding is that the simplest charge controller is something like this: simulate this circuit – Schematic created using CircuitLab My questions are twofold: Given that the cell and the supply wire have some capacitance which is charged to Voc while the switch is open, does the battery see spikes of Voc when the switch closes or are the capacitances typically small enough and the supply wire LC filter typically low enough to prevent this? - Is there a back-of-the-envelope calculation to convince myself either way? Assuming a fully charged battery, some chargers (claim to?) realise a constant voltage maintenance mode. Does this require an explicit LC filter at node N2 or can this be realised with the inductance of the wire to the battery combined with a low-value shunt capacitance at N2? - My hunch is as follows: 10 cm of 2mm diam. wire should have about 100nH. Combined with 47µF this gives a cut-off frequency of about 70 kHz. PWM at a faster rate should thus not require an explicit inductance. Many thanks. Edit 1 Below is an image of the interior of the Juta CMP12 solar controller. (Source: http://solar-nm.blogspot.de/) It looks to me like there are no magnetic components... Connections from left to right: 1&2 pv panel 2&3 battery 5&6 load Edit 2: In response to Russell's amended answer. Two things still have me stumped: If you connect a PV panel to a battery then current flow will reduce as PWM duty cycle reduces. It will not be a linear reduction as Vpanel will rise as load is reduced, tending to act against the current reduction from the PWM. This is at the core of what I was trying to get at with my question: I agree that average current flow will reduce with PWM duty cycle, however I would think that it will in fact be a linear reduction even if Vpanel rises during the off-part of the cycle. Vpanel should have no effect on an on/off switch. Vpanel should be x Volts during the on-part of the cycle and Vmax during the off-part of the cycle. Only with a linear regulator (partly-on switch) would Vpanel be somewhere between x Volts and Vmax, and here the non-linear effect would occur. Right? That out of the way, on to how constant voltage PWM of a current source is realised without any filters: For argument's sake, let's assume the battery was "utterly" full and shall now be kept on float voltage. The current would then have to match the self-discharge rate of the battery. Let's assume it is orders of magnitude smaller than the current produced by the PV panel. - But on/off PWM without a filter is not able to produce a smooth instantaneous current. (Only a linear regulator with a partly-on switch could.) With PWM on/off regulation, I would expect the following should then happen: If the battery was full and it was (over-)charged with a non-negligible current (during the on-phase), its voltage should increase linearly from 14.4 V upwards, similarly to a capacitor. (Otherwise one would not need to reduce current during the absorption phase.) But if no charge current is applied to a full battery, its voltage is only its open circuit voltage, say 12.8V.
Now if one modulates an on/off switch, should the instantaneous voltage seen by the battery not be as follows: A rise during the on-phase starting from some value in excess of 14.4 V upwards. Then an instantaneous drop by (14.4V-12.8V) = 1.6V when the switch opens. Followed by a slow fall, due to self-discharge, to the battery open circuit voltage 12.8V. Knowing that a PWM controller must low-pass filter the voltage seen by the battery over many cycles and regulate it to a temporal average of 13.8V, we can calculate the "some value in excess of 14.4 V" above: Since the target voltage around which the PWM controlled voltage shall oscillate is 1V larger than the open circuit voltage (13.8V-12.8V)=1V, the instantaneous value seen by the battery oscillates around 14.4V+1V=15.4V. The instantaneous voltage seen by the battery during the off-phase of the switch must be 13.8V+x decreasing to 13.8V-x (since the temporal average is then 13.8V assuming the duty cycle << 1); by the same argument, during the on-phase the battery should see 15.4V-x increasing to 15.4V+x. So my question: How can this not be very bad for a battery? The effect should be grid corrosion during the on-phase (15.4V overvoltage) and possibly gassing during about half of the off-phase (if the modulation depth x is large enough?). I suspect that the voltage profile drawn above is inaccurate but I cannot see where my error in thinking lies. AI: My original answer (below) was based on my incorrect assumption that Arik was suggesting using the wiring inductance in a controller to replace formally provided inductance. In fact, he is saying that in the controllers of interest there is NO formal inductance, and he was wondering if the wiring inductance served a useful role overall. Simple PWM can be used to vary the current which a PV panel will deliver and to control battery voltage. It can act as a current limiter, constant current controller or voltage controller. A PV panel used without an energy-converting controller, such as an MPPT controller, usually acts much like a CC (constant current) source. This is because Vmp (voltage at max power) is > Vbattery under most sun conditions and the panel is loaded with a lower effective resistance load than is optimum. An MPPT controller increases the effective load resistance so the supply voltage can rise to the optimum value. If you connect a PV panel to a battery then current flow will reduce as PWM duty cycle reduces. It will not be a linear reduction as Vpanel will rise as load is reduced, tending to act against the current reduction from the PWM. However, in practice you can set current to any value equal to or lower than what you'd get with a hard connection. If you want to limit battery voltage to a certain value then simply reducing or stopping current flow when the voltage is high enough will work as a "constant voltage" source. Simple MPPT can be little more than the buck converter I outlined plus a controller. By doing no more than holding panel voltage at about 80% - 85% of Voc_panel_full sun you will get very close to true MPPT performance. Second addition As the question evolves, so can the answer :-). There is no doubt that simple bang/bang on/off control is undesirable and causes undesirable battery current and voltage variations. My comments about the controller being able to control voltage are true over a long time period relative to a PWM cycle, but all sorts of interesting stuff may happen over a single cycle or a small number of cycles.
Adding an inductor allows energy storage and smoothing - an existing controller MAY be able to be "improved" by just adding an inductor and flyback diode and maybe one or two reservoir caps depending on what is there now, BUT the existing control circuitry may have a fit (or not) due to the changed response. It would probably in many cases be easier to use the existing power-level hardware with L, C and D as requisite, plus either new software or (possibly more easily) a new control core. An MPPT controller need cost little more than what is there now. Pricing is often controlled by "because we can" and "because they can't" factors. Having the series switch (probably a MOSFET) in linear or resistive mode would help make behaviour nicer at the expense of power dissipation in the switch. The heatsink size is uncertain as it can only be seen end-on, but it looks substantial. If the switch is run as a resistor then it COULD be set up to provide a steady current feed to the battery. If desired this could be done only in holding mode where current is low. E.g. if panel V (light load) is say 17V and Vbat_hold is say 12.6V and Itrickle is say 100 mA, then dissipation in a FET in this mode is P = V x I = (17 - 12.6) x 0.1 = 0.44 Watts = minimal. If you could sink say 5 Watts and needed to provide current from 18V to 12V you could have I = W/V = 5/(18-12) ≈ 800 mA. Using on/off PWM is non-ideal and will lead to waveforms somewhat similar to those shown by Arik in his second edit. The magnitude of the spikes will depend on how much capacitance is present BEFORE the switch, to a lesser extent how much capacitance is after the switch, wiring resistance (and to some extent inductance), battery characteristics and state of charge and, importantly, PWM frequency. Arik has shown the signals as step changes at switching boundaries followed by linear ramps. I would expect the step changes to be modified by the effect of capacitances, and the linear ramps to tail off into more or less steady-state flat spots as PWM on or off time became long relative to the battery and PV panel settling down to steady state under the given conditions. I do not show a capacitor on the PV panel in my outline schematic below, but if there is one then the PV panel will slew more slowly and can be held near a constant voltage more closely. This would limit the more objectionable spikes and excursions shown by Arik. Also, an ideal battery may exhibit step change and "instantaneous steady state" conditions as suggested, but it is likely that in real life you get more complex responses - an oscilloscope would be your friend here. Original answer - useful but not what was asked for. It is extremely likely that your circuit diagram is incorrect and that the simple PWM controllers are buck converters as shown below. Wiring inductance could notionally be used but in practice is too small to use in sensibly practical converters. Resonant frequency is not the critical factor. At resonance Vcap would swing 'widely'. What is required is an inductor such that delta V is small during the on cycle - perhaps 1V p-p and ideally quite a lot less. Using wiring inductance would probably require MHz-range switching and would be likely to produce ill-defined, low-efficiency, high-RFI situations. With a suitable controller such a circuit can provide constant current or current limiting or voltage-controlled output. D1 is usually either a Schottky diode or a synchronously controlled FET switch.
H: DC/DC Converter minimum load? I'm using the NTE0506MC DC/DC converter which boosts 5V to 6V. The output voltage I'm measuring with no load is 9V, not 6V. On page 3 of the datasheet they mention that it needs a minimum load of 10% of the rated load, which in my case is 16.7mA (0.1*167mA). So I decided to put a dummy resistance of 232 Ohms (after trying several larger values) in parallel with the output decoupling capacitor. The measured output voltage in this case is 6.33V, which is much better but still not 6V (my acceptable tolerance is +-300mV). The 232 Ohms should maintain an output current of ~27mA (6.33V/232), which is more than the required minimum current (16.7mA). Why is the output still not close to 6V? How can I solve this issue? My circuit is exactly the same as the one on page 4 of the datasheet for the part NTE0506MC and my load needs a max of 20mA. Update 25/5/2013: I added two 200 Ohm 0.25W resistors in parallel to the output of the regulator and the output voltage is now 6.18V, which is within my specs. AI: The converter is operating within its specifications. See the "Tolerance Envelopes" at the top of page 3 of the datasheet. The output is allowed to be as much as 10% high at 10% load, which could be as high as 6.6 V in your case. Try other loads, up to 100%, and see if it generally follows the load line that they show.
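As a quick sanity check on the numbers in the update (just arithmetic; the exact figures depend on the parts used): two 200 Ω resistors in parallel present 100 Ω, which at 6.18 V draws about 62 mA, i.e. roughly 37% of the 167 mA rated load, and each resistor dissipates about 6.18² / 200 ≈ 0.19 W, uncomfortably close to a 0.25 W rating, so 0.5 W parts would give more thermal margin.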
H: DC-DC converter, 12V to 400V? Is this possible? To be able to charge a high voltage battery (~400V) from solar panels I need a DC-DC converter that can boost up the voltage from the low voltage system (~12V) to the higher voltage. The power needed is about 400W, or 1A at the output. I have read that it is not practical to boost more than a factor of 6, and this is a factor of about 30-35. Would two boost (step-up) converters connected in series work or is there a better solution? AI: In general, a single boost circuit is preferable to using two boosters in series. I say this because, given the powers you require, a transformer is the most efficient way to do the step-up from 12V to 400V. Not using a transformer will limit the power you can handle in stepping up the voltage. At these sorts of power levels you'd be hard-pushed to find a booster that doesn't use one. Once it is accepted that a transformer is a requirement, any turns ratio from 100:1 (step-down) to 1:100 (step-up) is just a matter of fact (see comment below next paragraph). Because a transformer is necessary to economically achieve the power output, the turns ratio is going to be an easily achievable 40:1. A 24Vp-p input at a frequency of (say) 10kHz will, under no-load conditions, produce an output of 960Vp-p which after rectification/smoothing is going to be about 475VDC. But, high turns ratios and efficiencies don't always go hand-in-hand - I'm thinking of X-ray tube power supplies that I've used - they were 50kV/4mA output (200W). The step-up transformer was a large ferrite operating at 50kHz and, with about 1200 turns on the secondary, you were starting to hit self-resonance. It was a resonant transformer circuit so that was no problem, but more than 1200 turns and you were on the downward slope of decreasing efficiency. Input turns, from memory, were about 6, so a turns ratio of about 1:200. The output fed a multi-stage Cockcroft-Walton voltage multiplier to take the output up from about 2000Vrms to 50kV. More than likely (given the power), you'll need an H-bridge driver which will apply nearly 24Vp-p to your primary winding and the turns ratio will be 40:1 to produce +400VDC. It will need a reasonably sophisticated control system that involves pulse-width modulating the drive to the primary and decent monitoring of the output to ensure you stay regulated. I would also say that a secondary shut-down circuit would be needed should a single fault arise. Caution - fire hazard, electric shock hazard. Given the power supply's sophistication I'd be very hesitant to believe that two boosters would be at all more effective than one booster on the grounds of efficiency, performance, physical size or cost.
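One more back-of-the-envelope figure worth keeping in mind (simple arithmetic from the numbers in the question, with an assumed efficiency): 400 W out from a 12 V input means at least 400/12 ≈ 33 A of input current, and more like 37-40 A once converter losses are included (assuming roughly 85-90% efficiency), so the 12 V wiring, connectors and switching devices all need to be sized for tens of amps.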
H: 5-bit AD converter I just started working with AD converters and I got a trivial question about it. In this one scenario, I have a 5-bit AD converter with a resolution of 312.5 mV. How can I calculate Vfs based on that? What does the equation look like? Let's say you have a resolution of 0.5 V with a 4-bit AD converter; the answer should be 0.03125 V, but how do you get to that result? That's basically my question. The equation or method is what I'm looking for. AI: I'm not entirely sure how you're using the term "resolution" here, since it seems to mean different things in your two examples. In your first example, I would think that 312.5 mV is the step size, or resolution, for a one-count change (also called one LSB). A 5-bit converter would have 2^5 = 32 steps, so the full-scale range would be 312.5 mV × 32 = 10.000 V. In your second example, you seem to be saying that 0.5 V is the full-scale range, and that a 4-bit converter (2^4 = 16 steps) would have a step size, or resolution, of 0.5 V / 16 = 31.25 mV.
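In formula form: for an N-bit converter, one LSB (the resolution) and the full-scale range are related by $$V_{LSB} = \frac{V_{FS}}{2^N} \quad\Leftrightarrow\quad V_{FS} = V_{LSB}\cdot 2^N$$ which reproduces both cases above: 312.5 mV × 2^5 = 10.000 V, and 0.5 V / 2^4 = 31.25 mV.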
H: Arduino: Measure external voltage I have a "problem" while trying to measure the external voltage from a coin cell. I have simplified my circuit, but basically I have a DS1307 real time clock powered by the Arduino 5v output, and a coin cell connected to vBat to keep the clock alive while the Arduino is shut down. My idea was to measure the battery level with the Arduino by connecting its positive to an analog input and reading the voltage level. So, I thought, as the max voltage is 3v, I can use the 3v3 as AREF, set the analog reference to external and then read accurate results. The result is pretty cool and the readings from the Arduino are the same as reading the battery voltage with my multimeter. The problem comes when I disconnect the source power from the Arduino (a USB connection). To my surprise, the Arduino does not shut down (some LEDs stay turned on), and it seems to be powered by the 3V input (at the moment not a coin cell, but two AA batteries) connected to the analog pin. Why is this happening? How is the Arduino taking current from the analog pin? Is this little circuit an electronic aberration? AI: Most, if not all, input pins on AVR controllers are internally protected by clamping diodes. These diodes prevent an input voltage from going higher than the supply voltage. Why these internal diodes are there: It is important to realize that if an input pin were to rise above the supply voltage and these diodes weren't there, the chip might act as an SCR, latch up, short the battery, and the chip itself would probably get damaged by overcurrent/overheating. Why your Arduino runs from the battery Take a look at the circuit diagram below. When Vcc is detached from the normal power source, there is a path from the battery, via D1, to Vcc. The battery voltage is just high enough for the controller to operate. simulate this circuit – Schematic created using CircuitLab How to solve it The solution is simple: add a high-value resistor in series between the battery and the controller's input. Most AVRs have a maximum input leakage current specified (IIL) of 1μA, so you want to make sure the voltage drop is acceptable with that current, say 100kΩ. But beware! You do have to realize that the input may draw this current from your button cell and shorten its lifetime.
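To put a number on that (using the figures from the answer): the worst-case 1 μA of input leakage through a 100 kΩ series resistor drops only 1 μA × 100 kΩ = 0.1 V, so the ADC reading of the coin cell is barely affected, while the resistor still limits the current the battery can push into the unpowered Vcc rail through the clamp diode.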
H: Raspberry Pi: driving Servo with PWM I'm experimenting with a small servo motor with the Raspberry Pi using PWM (pulse-width modulation). My understanding is that the servo should respond to different lengths of pulse. I should be seeing some motion if I output a constant square wave, right? And if I output a constant HIGH signal the servo shouldn't do anything. Is there a way to test for sure that the servo is working properly? I don't have an oscilloscope so I can't really confirm that my code is outputting the way it should - thinking about putting an LED on it so I should see a blinking light. AI: Have a look at the specs for your servo. It will expect to see a pulse every 20 ms or so. The duration of the pulse will determine the angle. N.B. This 20 ms timing may vary depending on the servo type used. The length of the pulse within this time will determine the position (min about 1 ms, max about 2 ms). The three connection wires are Power (+5V), Control Signal and Ground. Check the maker for the specific colour code. Test that the servo is working by setting up a 1.25 ms pulse (0 - 5 - 0) every 20 ms. The arm should swing to the 0 degree position. Then change the pulse length to 1.75 ms and the arm should swing to 180 degrees. Putting a constant high signal on the input is not recommended. As regards testing the output, you could try a high impedance speaker (>64R) connected in series with a small capacitor (say 0.1 uF). Connect one side to ground and the other to a probe (piece of wire). You should hear a low frequency hum if the output line is switching on and off.
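If you want to sanity-check this in software before wiring up anything else, here is a minimal sketch of the idea. Assumptions (not from the answer above): the RPi.GPIO Python library, BCM pin 18, and a 50 Hz frame; RPi.GPIO's software PWM has noticeable jitter, so the servo may buzz a little, but it is enough to confirm the servo responds. At 50 Hz the period is 20 ms, so a 1.25 ms pulse corresponds to a 6.25% duty cycle and 1.75 ms to 8.75%.

import time
import RPi.GPIO as GPIO

SERVO_PIN = 18              # any free GPIO; 18 is just an example
GPIO.setmode(GPIO.BCM)
GPIO.setup(SERVO_PIN, GPIO.OUT)

pwm = GPIO.PWM(SERVO_PIN, 50)   # 50 Hz -> 20 ms frame
pwm.start(6.25)                 # 1.25 ms pulse = 6.25 % duty
time.sleep(2)
pwm.ChangeDutyCycle(8.75)       # 1.75 ms pulse = 8.75 % duty
time.sleep(2)
pwm.stop()
GPIO.cleanup()

If the arm swings between the two positions, the servo and wiring are fine; for smooth, jitter-free pulses you would move to hardware-timed PWM (e.g. the pigpio library) rather than software PWM.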
H: DC Fan Control (Switching on/off) I'm using an MC25060V2 fan, and I'd like to switch it on and off through software occasionally (i.e. not PWM or anything fancy like that). My thought is to either put an NPN BJT or an N-channel MOSFET on the low side and control its gate (or base) with a 5V microcontroller GPIO pin. Any reason I should choose one technology over the other (or another approach altogether)? Here are schematic representations of what I'm proposing (just logical, representing the fan as an inductor; part numbers / values not important). simulate this circuit – Schematic created using CircuitLab I'm thinking of using something like a BS207FS in the case of a FET, or a 2N3904 in the case of a BJT. AI: There are plenty of ways; here are two: - Note the diode across the motor - this is needed to protect the BJT (or MOSFET) when the transistor is switched off - this circuit will be OK for low-power fans. Rate the transistor in accordance with the current the fan needs. Or... This circuit will run from 5V or a lot higher should your fan be (say) 24V or 12V. Not shown is the reverse-connected diode which you must have. Pick a P-channel MOSFET with low "Vgs(threshold)" to ensure it turns on properly.
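A rough sizing sketch for the BJT option (all of the numbers here are assumptions for illustration - check the fan's datasheet for its actual current): if the fan draws around 100 mA, driving a 2N3904 well into saturation needs roughly a tenth of that as base current, about 10 mA, so from a 5 V GPIO the base resistor would be about (5 V - 0.7 V) / 10 mA ≈ 430 Ω, i.e. a standard 390-470 Ω part. The MOSFET option avoids this continuous base current entirely, which is one reason to prefer it if its gate threshold is low enough for 5 V drive.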
H: What is the relationship between bandwidth and noise bandwidth What is the mathematical formulae for noise bandwidth for a 1st order low pass system (and a second order LP systems analysis if possible). What is the importance of this concept, how and when is it applied? A nice graph showing the noise BW and the BW and mathematical derivations would be excellent (for both 1st and 2nd order systems). Extensions to bandpass filters would be helpful too. AI: The noise bandwidth \$B_N\$ of a (linear time-invariant) system is defined as the bandwidth which an ideal filter with a rectangular frequency response would need to have to get the same noise power at the output, given that the input noise to both systems is identical and white. The ideal filter is usually assumed to have the same maximum gain as the system under consideration. From this definition, it follows that the noise bandwidth is given by $$B_N=\frac{1}{H_{max}^2}\int_{0}^{\infty}|H(f)|^2\;df\tag{1}$$ where \$H(f)\$ is the system's frequency response and $$H_{max}^2=\max_{f}|H(f)|^2$$ Definition (1) is valid irrespective of the specific characteristics of \$H(f)\$. So it is valid for lowpass systems as well as for bandpass systems, or other types of filters. For a simple first order lowpass system (e.g. an \$RC\$ lowpass), we have $$H(f)=\frac{1}{1+jf/f_c}$$ with \$f_c\$ the -3dB cut-off frequency. From (1) we get for the noise bandwidth $$B_N=\int_{0}^{\infty}\frac{1}{1+(f/f_c)^2}df= f_c\int_{0}^{\infty}\frac{1}{1+x^2}dx= f_c\arctan(x)|_{0}^{\infty}=f_c\frac{\pi}{2}$$ In this case we see that the noise bandwidth is larger than the -3dB frequency by a factor of \$\pi/2\$. For a second order system everything is similar, but slightly more complex. The integral gets a bit more involved and the value of \$H^2_{max}\$ needs to be determined, because there can be overshoot in the frequency response, depending on the damping. If I have more time later on I might add details about second order systems. For the time being I hope that the answer is clear enough so that everybody can derive the noise bandwidth of any system they are interested in.
H: Is it ok to power several op amps with the same pair of 9V cells? I'm experimenting for the first time in the wonderful world of operational amplifiers. I'm processing signals in various ways - amplify, envelope detection, voltage differences, voltage-to-current, and so on. Many of these steps use an op amp. Is it usually acceptable to power several (dual-supply) op amps with the same pair of 9V cells? AI: Yes. In general all the circuitry on a board that uses the same voltage has the same supply. The only exception is when you have (for example) "digital" 5 V that will have noise on it and "analog" 5 V that is cleaner. I am assuming that the batteries can supply enough current to power your several op-amps. 25 mA is fine. Higher currents will result in a lower effective mAh capacity from the batteries.
H: Modulation that is used in WiMAX Which modulation is used in WiMAX? And how does it work? I found QAM-64 but I'm not sure about that. AI: More than one, that's for sure: http://www.tutorialspoint.com/wimax/wimax_physical_layer.htm
H: Fix Conflicting IO Standards I am using the Basys 2 Spartan-3E FPGA board with Xilinx. I need the pmod i/o to be at 1.8v so I am using LVCMOS18 IOSTANDARD. You can find all of the IOSTANDARD's available for Spartan-3E in this document. Whenever I try to compile my project it always errors out on the Place & Route phase that says: ERROR:Place:864 - Incompatible IOB's are locked to the same bank 0 Conflicting IO Standards are: IO Standard 1: Name = LVCMOS25, VREF = NR, VCCO = 2.50, TERM = NONE, DIR = INPUT, DRIVE_STR = NR List of locked IOB's: mclk ERROR:Place:864 - Incompatible IOB's are locked to the same bank 3 Conflicting IO Standards are: IO Standard 1: Name = LVCMOS25, VREF = NR, VCCO = 2.50, TERM = NONE, DIR = OUTPUT, DRIVE_STR = NR List of locked IOB's: LEDs<7> I have done some research on that error and I believe it means that there are different IOSTANDARD's on the same bank. The pins that I directly set are on bank 1 but the conflicts are on bank 0 and bank 3. When I remove the IOSTANDARD from the pmod i/o it compiles successfully but those pins are not 1.8v then. My constraints file looks like: # clock pin for Basys2 Board NET "mclk" LOC = "B8" ; # Bank = 0, Signal name = MCLK NET "mclk" CLOCK_DEDICATED_ROUTE = FALSE; # Pin assignment for LEDs NET "LEDs<7>" LOC = "G1" ; # Bank = 3, Signal name = LD7 NET "LEDs<6>" LOC = "P4" ; # Bank = 2, Signal name = LD6 NET "LEDs<5>" LOC = "N4" ; # Bank = 2, Signal name = LD5 NET "LEDs<4>" LOC = "N5" ; # Bank = 2, Signal name = LD4 NET "LEDs<3>" LOC = "P6" ; # Bank = 2, Signal name = LD3 NET "LEDs<2>" LOC = "P7" ; # Bank = 3, Signal name = LD2 NET "LEDs<1>" LOC = "M11" ; # Bank = 2, Signal name = LD1 NET "LEDs<0>" LOC = "M5" ; # Bank = 2, Signal name = LD0 # Loop Back only tested signals NET "spi_si" LOC = "B2" | DRIVE = 2 | IOSTANDARD = LVCMOS18 | PULLDOWN ; # Bank = 1, Signal name = JA1 NET "spi_so" LOC = "A3" | DRIVE = 2 | IOSTANDARD = LVCMOS18 | PULLDOWN ; # Bank = 1, Signal name = JA2 NET "spi_cs" LOC = "J3" | DRIVE = 2 | IOSTANDARD = LVCMOS18 | PULLDOWN ; # Bank = 1, Signal name = JA3 NET "spi_sck" LOC = "B5" | DRIVE = 2 | IOSTANDARD = LVCMOS18 | PULLDOWN ; # Bank = 1, Signal name = JA4 NET "sw0" LOC = "P11"; # Bank = 2, Signal name = SW0 How do I fix these conflicts? The Basys2 has the CP132 package. Here is the chip diagram and bank schematic: AI: As you have surmised, you get errors when you have incompatible IO standards in the same bank. It's best to know exactly how this stuff works, because the tools will gladly give you a bitfile that ends up burning out your FPGA due to incompatible IO. As you posted above, we can consult the Xilinx datasheet for the device family, DS312. Supported IOSTANDARDs are set by the VCCO of a given bank. Notice that input and output are supported only when the number in the IOSTANDARD matches the voltage; input is supported for anything equal or lower. Now look at the last page of the Basys2 schematic. It appears that all the VCCO's are connected to 3.3 volts. This means that you can do LVCMOS33 input/output, and all other LVCMOS input. You cannot do the LVCMOS18 output that you would like to do. If the UCF was set up for LVCMOS33 on all the other pins, the tools would recognize the impossibility of what you're trying to do and give an error. As it stands, the IOSTANDARDs in the UCF are blank, which default to LVCMOS25, and so the tools detect a mismatch. You might be wondering why Diligent set up their UCF the way they did, since everything defaults to 2.5 volts when the board is actually 3.3. 
I don't know. The point is that the FPGA accepts pin settings without complaint, as long as the settings are consistent within the bank; it's up to you to make sure the external voltage matches. So you could "fix" your problem by setting a whole bank to LVCMOS18, but then you would probably fry your SPI device with 3.3 volts. The way I see it, you have several options. Serious surgery on the Basys2 board, changing the VCCO's appropriately. Probably not worth it. External level shifting circuitry. You can read the serial input just fine, but the other SPI signals have to be shifted down. You can either do this with something like an 74LVC series logic chip, some series FETs, or just resistor dividers. Current limiting resistors. This method relies on the clamping diode in the target device to limit the voltage, and the resistor limits the current to a safe level. There are resistors out to the PMOD pins, but they're probably not enough. Do due diligence if you choose this option.
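For the resistor-divider flavour of level shifting, a hedged back-of-the-envelope example (values chosen purely for illustration): a divider with 1.5 kΩ from the FPGA output to the target pin and 1.8 kΩ from that pin to ground gives 3.3 V × 1.8k / (1.5k + 1.8k) = 1.8 V at the SPI device. Keep the values reasonably low so that the RC formed with the device's input capacitance and any trace capacitance doesn't round off your SPI clock edges too much.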
H: High voltage meter I am working with high voltage arcs a lot but I have no way to measure the voltage. What is a good way to measure very high voltages (up to tens of thousands of volts)? AI: One way is to use a high-voltage probe. For example, if you check the B&K PR 28A datasheet you'll see it's rated for 1kV to 40kV DC and 1kV to 28kV RMS AC. As a ballpark price they are currently $US72 at Digikey. They include a 1000:1 divider to bring the voltage down to levels that normal test equipment can deal with. I have seen DIY solutions in the past, but of course you'd have to be very careful with the materials used and the construction, so I wouldn't really recommend it. Especially if you don't have the knowledge/equipment to test insulation breakdown.
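To be concrete about how the divider is used (numbers chosen only as an example): with a 1000:1 probe, a 25 kV source shows up as 25 V on an ordinary multimeter, and you simply multiply the reading by 1000. Check the probe's datasheet for its accuracy and for the meter input impedance it expects, since the division ratio is normally specified into a particular load.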
H: What is the magnetic analog of resistivity? Would the magnetic analog of resistivity be the inverse of permeability? I ask this because resistivity is defined by E/J, and so taking the magnetic analogs as H/B, I see that this is simply the inverse of permeability. AI: I believe that "magnetic resistivity" is referred to as reluctance (inverse of permeance) and is given in units of \$A\cdot\text{turns}/Wb\$ (Amp-turns per weber) in an equivalent magnetic circuit. Reluctance can be calculated as: $$\mathcal{R} = \frac{F}{\phi}$$ where \$F=N \cdot i\$ is the magnetomotive force and \$\phi=B\cdot A\$ is the flux, thus giving: $$\text{reluctance} = \mathcal{R} = \frac{l}{\mu\cdot A}$$ Reluctance is defined by the magnetic permeability of the material (as well as its physical dimensions) which is given as: $${\mu = \mu_0\mu_r}$$ Where \$\mu_0\$ is the magnetic constant and \$\mu_r\$ is the relative permeability of the material. Magnetic permeability is a measure of how easily a magnetic field is formed within the material and is affected by temperature, humidity, and magnetic field strength (among other things). Hopefully someone can provide a more complete answer, as this is just off the top of my head.
H: What is the easiest way to make a stepper motor go? What is the easiest/ cheapest way to make that motor move like a dc hobby motor would when connected to a battery? Speed doesn't matter. I just want it to infinitely rotate. AI: Assuming you really are using a stepper motor (?), you'll need either a stepper motor driver, or a microcontroller and a basic current driver (chip or a few FETs). You'll also need to know whether your motor is unipolar or bipolar. Without these, there is no way to apply current to the correct coils in such a way that the motor will move. I'm always keen to suggest the Pololu A4988. It's very simple to use, relatively inexpensive, robust, and makes driving a stepper motor almost exactly the same as a DC motor. Bringing the "STEP" pin high and leaving it there will run the motor at full speed, just like a basic DC motor. Pulsing the input allows for slower, finer rotation control. It's also simple to drive a stepper from an Arduino; All you need is an Arduino and a ULN2003A current driver or similar. It's also useful to understand how the device you're trying to work with works. I'd suggest you check out this quick little bit on steppers.
H: Physically determine unknown wire's gauge Is there any way to easily and accurately determine the gauge of stranded wire by measuring it? I know you can pretty easily determine the gauge of solid-core wire with just a pair of calipers by measuring the conductor diameter, but with stranded wire, wouldn't there be variance in packing-efficiency of the strands? I also had little luck finding a wire-gauge to stranded-wire-diameter table. I have a bunch of spools of wire that are incorrectly labeled. It appears that someone has respooled a number of the reels, as there are (at least) multiple reels all labeled "22 gauge" with different diameter wire on them. AI: http://www.zierick.com/pdf/wire.pdf Image is high-resolution to maintain legibility of the small text. This shows the stranded diameter and sub wire gauge+qty.
H: ASIC vs ? -- Performance & Cost I'm interested in creating a custom Linux-based solution that will crunch SHA-256 cryptographic hashes at insane levels of speed. I'm also on a limited budget... I have no background in EE but I do w/ software. My question is: custom ASICs have to be designed by some big company w/ some crazy fab process, and it takes a while. Is there anything out there that has the low power consumption and high speed processing of an ASIC? I read about SoCs and what look to be hybrid type chips... i.e. the Zynq-7000, and the Parallella makes use of the same board/chip I believe. Wondering if I can build something to produce tens of thousands of hashes per second, even if I need to cluster them (at decent power and price constraints). Thanks for any help. Currently reading research papers about ICs and everything; I just don't understand how something economical yet powerful doesn't exist. AI: If your budget is less than 10-20 thousand dollars (or more realistically 100K+), you have no hope of getting an ASIC made. The common device used instead of an ASIC, in situations where you cannot afford the NRE (non-recurring engineering costs - basically the cost of producing the masks for etching your ASIC, as well as the design costs), is an FPGA. There is a list of papers related to using FPGAs for hash-cracking here. A lot depends on what you mean when you say your budget is "limited". Big FPGAs can be several hundred dollars apiece, and that's just for the IC (integrated circuit)! The great majority of FPGA-based hash crackers that have been built by amateurs are done by reverse-engineering existing products from eBay that use FPGAs, commonly real-time video encoding/processing devices. Really, if you're serious about this project, you should spend some time researching and understanding HDLs. FPGAs (as well as ASICs) are not "programmed" in C, or any other common language you may be familiar with. Instead, they use a language (called a "hardware description language") that describes a series of logic operations and registers that all operate in parallel. It's quite different from C. Then, you should purchase an inexpensive FPGA development board (Digilent makes some nice ones), and get your hashing algorithm working and tuned. Then you could look at scaling up. It's worth noting that none of this would run Linux at all. Everything ASICs (as well as FPGAs) do is so-called "bare metal": the way an ASIC/FPGA works is fundamentally different from a microcontroller, and there is no such thing as an FPGA "operating system". The extent to which Linux (or any operating system) is likely to be involved is simply communicating with the special-purpose hardware and feeding it things to hash / reading back hash hits. This would probably occur over USB, or, if you're using a platform like the Zynq, through shared memory. It's interesting that you mention the Parallella people, as that's a pretty good example of the costs involved in doing a fairly complex ASIC. They needed about $750,000 to get their design produced as hardware.
H: How to know about multiplexed inputs/outputs I am working on the AM3352 processor, and I need to know which of the input/output pins are multiplexed. How do I do this using the datasheet? It may sound like a simple thing, but I am new to processors. Please help. Link to datasheet of AM3352 AI: Look at Table 2 in the datasheet. Some of the pins have several signal names: these ones are multiplexed. For example, this pin is not multiplexed: A4 DDR_WEn ddr_wen and this one is: E18 ECAP0_IN_PWM0_OUT eCAP0_in_PWM0_out uart3_txd spi1_cs1 pr1_ecap0_ecap_capin_apwm_o spi1_sclk mmc0_sdwp xdma_event_intr2 gpio0_7
H: Use an active PS/2 -> USB adapter as USB-Serial Bridge I'm working on a small robotics project where I need to enable my old phone (Nokia 5230, shows up as a USB serial port on Windows but the USB cable is straight-through) to talk to an Arduino UNO R3 (doesn't support USB Host mode natively). My idea is to wire together the USB cable for the phone and the USB end of the adapter, wire the DATA/CLOCK lines out of the PS/2 socket to the Arduino, and communicate over USB serially. Essentially, the phone would act the same way a connected PS/2 keyboard does to the Arduino. Sorry if my question is lacking in original research; I am just inquiring if it's feasible (and asking for some general advice) so I can start reading more on it in depth. PS: By "active" I mean that Windows shows the adapter as a composite USB device (with nothing connected to the PS/2 ports) and there is a circuit inside with some capacitors and a big epoxy blob on it. AI: Basic USB is an asymmetric connection: a master connects to a slave. You can't connect two masters or two slaves (you can physically, if you rewire cables, but it won't work). The USB cable for your phone connects a master (the PC) to a slave (your phone). The PS2-USB converter is probably a slave that connects to your PC (the master). (Your PS2-USB converter could be less than a slave, if it relies on the keyboard to do part of the job.) Hence you can't (successfully) connect your PS2-USB slave to your phone slave.
H: Solution to this signal even or odd Can you please explain step by step whether this function is even or odd: \$x(t) = \cos(t) + \sin(t)\cos^2(t)\$ AI: This function is neither even nor odd: $$x(-t)=\cos(-t)+\sin(-t)\cos^2(-t)=\cos(t)-\sin(t)\cos^2(t)$$ And since $$x(-t)\neq x(t)\quad\text{and}\quad x(-t)\neq -x(t)$$ this function is neither even nor odd.
H: What advantages to permeability tuning as against capacitive tuning? I learnt recently about the RBZ80, a vintage valve receiver which used variable permeability tuning rather than variable capacitor tuning. In thinking about the possible reasons for permeability tuning in lieu of capacitive tuning, the thoughts that come immediately to mind are as follows: The receiver was designed for a hostile environment. Air dielectric would be bulky. Any other dielectric may be damaged/corrupted by exposure to the elements (e.g. salt water). What advantages does permeability tuning offer as compared to capacitive tuning? AI: Permeability tuning (variable inductance) is essentially a slug of ferrite that can be screwed in and out of a coil wound around a former. Once the circuit is tuned, wax can be used to seal it into position, or the screw thread can be locked mechanically. The variable capacitor has large inter-meshing plates that are subject to mechanical damage or vibration and is physically a large component (size matters). The range of tuning using variable capacitors is greater than inductive tuning for a single tuning coil. Switching coils (bands) is easy to do. Variable capacitors are quite expensive (not a military consideration) to produce, so there is a cost saving as well. For a portable radio used in warfare, size, mechanical robustness and functionality are important.
H: SPICE Model For Voltage Reference Is anyone aware of where I can find SPICE models for the MAX6126, MAX6250, etc.? I searched a lot and could not find any. If anyone knows, can you please share the link/file? AI: These are complex devices containing many transistors and components, so any SPICE model will be a macro model, made up of voltage-controlled current sources, current-controlled current sources, voltage-controlled voltage sources, etc., and the only people who can do that reasonably are the designers themselves. Look on the Maxim website for these models; if they don't have them, look to someone who second-sources these parts and might have done it. The company that is best at this is LT itself, so I'd suggest seeing if there is a close match - if so, it is likely that LT has the device in the library for LTspice.
H: How are BGA dies constructed? Are the dies made for a BGA package any different from the dies for a QFN or a DIP Package? Do BGA dies have connections on the lower side of the die? In the substrate? How are terminals put in the package under the die? I came across this picture. Are BGA packages actually PCBs? and are signals just taken off the die edge and routed to the bottom of the package? If so, how does this help in decreasing inductance at high frequencies? I also came across the image below which makes it look like the BGA packages have connections from the bottom side - from the substrate. AI: Yes, BGA packages are like little circuit boards. On high pin count devices, in nearly all cases, the balls that lie directly under the die are mostly ground (and sometimes power) connections. The ground balls are fed directly through to the substrate of the die, while the power balls connect to internal power planes. Also, because of their direct metallic connection to the die, the ground balls help remove heat from the package. All of the I/O is connected to balls near the periphery of the package, keeping their internal traces — as well as the PCB traces they connect to — shorter and inductances lower.
H: Calculating the resistor needed to replace batteries with another power supply The project I'm working on is going to use a LiPo battery to run a Raspberry Pi, Arduino, and some LEDs. I currently have all of that working. One final thing I want to do is have a "speaker" plugged into the Raspberry Pi. I went to the local electronics store and got a little battery-operated speaker thingy that has a 3.5mm audio jack. The speaker runs off of two AAA batteries. I would like to wire it up so that the speaker runs off of the LiPo battery instead of the AAAs. That way when I connect/disconnect the battery the whole system turns on and off as one. My question is: what resistor would I need to appropriately power the device? Two AAA batteries only put out 3V but my power source is 5V. I don't really know how much current the device needs though, so I don't know how to calculate a resistor. EDIT: Did a little search on the internet and found a link to the product description: http://www.radioshack.com.eg/en/auvio-universal-speaker-for-media-players-40-036 That is the item I have. Due to complexity, budget, and size constraints I am unwilling to add another voltage regulator to the system. The battery I have is a 7.4V LiPo battery running through a BEC that is set to the default (5.1V). If I can use the power source directly, great. If not then I'll have to just omit the sound feature of the project, which will make it significantly simpler because the only reason for the Raspberry Pi is to provide sound. EDIT 2: This is my BEC: http://www.quadrocopter.com/Battery-Eliminator-Circuit-BEC_p_625.html AI: No resistor at all. Once again, questions should stick to what you want to know or accomplish, not how you think it should be done. Your basic question is apparently how to power this "speaker" (clearly more than just a speaker) from the power source you supply rather than the two AAA batteries it is designed for. First, you need to find out whether the batteries in your speaker unit are ground-referenced. If they are, you can proceed. If not, then this is beyond your level at this time and you either need to find a different speaker unit or an altogether different approach. Run the speaker normally with a fairly strong signal into it. With a voltmeter, measure between the negative terminal of the combined AAA battery pack and the outer ring of the 3.5 mm plug. There should be 0 V, both when measuring AC and DC. Of course exactly 0 will never happen, so in this case anything over about 10 mV means the two points aren't really connected. If they are connected, then the battery is ground-referenced and you can proceed. If the lithium battery voltage is around 3 V, then use it directly. If this battery is a single cell, this might just work. Basically, if the lithium battery voltage is below the regulated 5 V output, try connecting the battery to the + side of where the AAA pair would go, and ground to the - side. If the lithium battery voltage is higher than 5 V, then it would be best to use that directly to make some sort of regulated 3 V to drive the speaker unit with. A linear 3.3 V regulator is a quick and simple answer, but might get warm when the speaker is producing loud sound. Try it and see. If that is not acceptable or the lithium battery voltage is substantially higher than 5 V, then use a switching regulator instead.
There are many switching regulator chips out there that can do this with a few external parts. You can even use one that has a fixed 3.3 V output. Added: You now say the lithium battery puts out 7.4 V and the link to the speaker unit rates it as 1/2 W, but it's not clear if that is input power or power to the speaker. Just to see where you're at, 0.5 W / 3 V ≈ 170 mA. We can't really tell from the sparse information in the link, but let's say the top current draw of the speaker unit is 200 mA at 3 V. With just a linear regulator, the regulator would dissipate (7.4V - 3V) * 200mA = 880 mW. That's rather wasteful, and something like a TO-220 package will get hot but probably be OK with a modest heat sink. You can try a 7803 regulator. The other thing to try is to power the speaker unit from your existing 5 V source. I don't know what a "BEC" is, so I can't tell if this is a linear or switching regulator and how much current it can support. The speaker will draw more current at 5 V than at 3 V. If a lot more, it may get damaged. After all, it's meant to run from 3 V. 5 V may be OK, but you're a test pilot then and you can't complain if it vanishes into a greasy puff of black smoke.
H: Altium help with Routing I am working with a new computer at my company learning Altium. I have created the schematic and the PCB files inside my project and successfully imported everything from the schematic to the PCB. My problem now is that all of the Route buttons are grey and unusable. Even all of my Auto Route options are grey. What problem would be causing this and how do I fix it? AI: If you have multiple licenses, make sure you're using the one with proper permissions.
H: N-Ch FET with open-drain output I stumbled across this line in this datasheet (page 10): Serial Data Output Pin. Internal N-Ch FET with open-drain output that requires external pull-up resistor. It shifts out the previous eight SDI bits that allow daisy-chain operation of multiple packages. Could somebody briefly explain what this means? Specifically, if I'm not planning on using the daisy-chain functionality, can I leave this pin unconnected or do I need to connect it to an external pull-up resistor regardless? AI: If you are not using the serial daisy chain then you can leave the SDO pin open and do not have to install a pullup resistor on this pin. The AD5290 part is designed to allow its use in analog circuits that have power supplies up to and including +/- 15V. This allows the "digital pot" terminals to connect into OpAmp circuits (and other similar types) that operate on voltage rails of this magnitude. The main reason that the SDO pin on this part is open drain is that it allows the device to be built into a package of 10 pins and eliminates the need for a digital logic-level supply pin. If the SDO pin (the only digital output on the part) were designed with a push-pull output then an extra digital supply pin would be required. As others have stated, the open-drain nature of the pin is such that the pin could be tied together with multiple other open-drain outputs, but that has very little utility in the use of this particular part. Instead it can have a role in daisy-chain operation when the digital supply levels are other than a 5V supply. A close look at the VIH and VIL specifications for the part will show that the inputs to this device can be driven equally well from a digital realm of 5V or from 3.3V. The open-drain SDO can be externally wired up to a 5V or a 3.3V pullup with equal utility depending on which digital rail is used in the system.
H: Using an N-MOSFET to switch a common-cathode LED strip I am trying to drive an RGB LED strip using the following circuit: http://learn.adafruit.com/rgb-led-strips/usage As I am using a common-cathode LED strip instead of the common-anode one used in the above link, I came up with this 'modified circuit'. Because my experience with FETs was nil, this didn't actually work, as I would need a gate voltage >= Vcc to saturate the MOSFET. [ASCII schematic: high-side STP50N06FI N-MOSFET between VCC and the common-cathode LED strip to GND, with its gate driven by the 5V PWM logic signal - created by AACircuit v1.28.6 beta 04/19/05 www.tech-chat.de] Is it possible to use an N-channel MOSFET at all for this application? And if so, what do I need to do to make this work (control the LED strip brightness with a PWM signal, 5V logic)? AI: If all you need is to turn all the LEDs on/off using PWM, you can attach the N-chan MOSFET between the common cathode and ground. Here's an example circuit, ignoring LED current limiting schemes for now. simulate this circuit – Schematic created using CircuitLab If what you want is to control individual LEDs, then you might still be able to use an N-chan MOSFET, but your control signal will need to be at least Vthresh above the source voltage to turn the LEDs on. Alternatively, if you use an upstream P-chan MOSFET, your control signal will need to be at least the level of the source voltage to turn the LEDs off (this is the MOSFET pin source voltage, not the supply voltage). If your micro is not able to achieve either of these levels, you can use a second transistor to drive the transistor in series with the LED(s). Here's a basic example which uses a P-chan MOSFET to drive the LEDs, and an N-chan MOSFET/pullup resistor to control the P-chan MOSFET: (again, ignoring any LED current limiting schemes) simulate this circuit Resistor values were chosen semi-arbitrarily. You can probably get away with anywhere from 1k up to 100k. Smaller values will draw more supply current when the N-chan MOSFET is on; larger values are more susceptible to noise when the N-chan MOSFET is off.
H: Pull Up and Pull Down Resistor Values I usually use 10K Ohm resistors for pulling up or pulling down microcontroller lines. This is only because I have tons of them. Recently, while looking up pick and place surface mount information, I found out that oftentimes the number of types of components you have can play a part in the cost. For example it might save money to change the 10K Ohm resistors to some other value already present on the board. Given that, my question is: what factors are in play to determine appropriate ranges of resistance for pull up or pull down resistors? AI: Too small: This causes excess current draw when you want to "counter-act" the pull-up/pull-down. Say you have a pull-up circuit like this: simulate this circuit – Schematic created using CircuitLab When M1 is "closed", you're effectively shorting output to ground. So the current flowing through R1 is: \begin{equation} I = \frac{Vs}{R1} \end{equation} Depending on what resistor power rating you have, limitations of Vs, and limitations of the switch M1 (doesn't have to be a MOSFET or even a transistor), you can figure out a lower resistance limit which will start to cause problems. You can do a similar calculation for pull-down resistors. When R1 gets way small (or the on resistance of the switch gets large), you start getting voltage-divider circuits, so all of a sudden the output level isn't a digital signal anymore. You could use this to improve the lower limit value, but in reality there's usually no need to get anywhere near this lower limit. Too large: Large pull-up resistors will excessively limit the amount of current which can be provided to the output. There are a few consequences to this: If the output has some capacitance (trace parasitics, gate capacitances, etc.), the output voltage could have some significant rise time, similar to an RC circuit. The assumption that all external "sources" and "couplings" are insignificant starts to break down. For example, cross-talk or antenna-like effects could become significant. I can't really say how to account for the second point or other points in general, but let's take a look at the first point: An RC circuit has a time constant of t=RC. If you have a signal line which relies on pull-up/pull-down resistors (I2C), the fastest signal rise time you can transmit is limited by this time constant.
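A rough worked example of both limits (the specific numbers are assumptions for illustration): with a 10 kΩ pull-up on a 3.3 V line, holding the line low wastes 3.3 V / 10 kΩ = 0.33 mA; and if the line carries about 20 pF of stray capacitance, the time constant is t = RC = 10 kΩ × 20 pF = 200 ns, so the line needs roughly 2-3 time constants (0.4-0.6 µs) to reach a valid high. That is plenty fast for 100 kHz I2C, but it starts to matter as you push towards MHz-class signalling or much larger pull-up values.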
H: calculating the spectrum of digital modulation methods I have just started studying the techniques of digital modulation. The book I am reading has only mentioned the PSD of BPSK, BFSK, QPSK and MSK. I would like to know how the PSD is actually calculated, and I would also like to know why the angles used in QPSK are (2m+1)pi/4: why can't one spacing be 89 degrees and another 91 degrees instead of 90 degrees and 90 degrees? By these degrees I indicate the angle between two signal vectors. AI: Either a sine or a cosine has a 0 at a multiple of pi/4. So one or the other is zero (say I is 0, and Q is the one you want). This allows you to just read the value; the one that isn't zero (Q here) is that value. If you read the value at (say) 89 or 91 degrees, the one (I) wouldn't be quite 0, and the one you wish to read (Q) isn't quite its full value. This makes it harder to get the right values, and this reading isn't any easier.
H: Stable voltage and current source I want to build a simple and stable constant voltage source (voltage regulator) and current source. I built one using a LM317 but the output is not stable, it varies when temperature changes. Do you have any suggestions to improve the design? CURRENT SOURCE VOLTAGE SOURCE AI: For a temperature compensated, high precision voltage source, the LM317 is not the way to go. It is a very basic voltage regulator, of ancient design, about as viable to electronic designers today as the LM741. They serve at best as teaching tools, preferably to teach what not to use. Further, any variable regulator that depends on external components (e.g. the potentiometer in the question's schematic) for set-point determination will be at the mercy of all temperature effects on all those external components, and all connections to them - even poor solder joints can be a problem. This is why for temperature-sensitive designs, fixed voltage regulators are a preferred option. There are many modern parts with considerably better temperature compensation than the LM317. The Micrel MIC5219 regulator, for instance, has a temperature coefficient of 40 ppm/oC, compared to the 200 ppm/oC of the LM317. For something that is more easily available in retail in India, consider the Texas Instruments µA723 series of voltage regulators (30 to 100 ppm/oC). It is not a great part, but way better than the LM317. These can occasionally be found in the electronics components markets, selling for less than a third of the nearest Digikey price, though the shopkeepers will typically have no clue what they are selling. A precision voltage regulator with temperature compensation will typically require more support components than a basic regulator such as the ones in the question. For instance, from the above datasheet: This part, and other similar parts, can also incorporate current limiting, if that is the original purpose of the current source in the question. If, on the other hand, the compensated current source is an independent requirement, read on... As with voltage regulation, current regulation too requires fixed set-point parts for the best stability: Variable current designs will necessarily suffer from temperature effects on all set-point setting components. One family of current regulators I have used with excellent temperature compensation are the SuperTex CLxx series of regulators: The SuperTex CL2 is rock solid for 20 mA temperature-stable current regulation. Similarly, on the SuperTex product selection page, they offer parts for various other current settings and supply voltages. These current regulators are very easy to use 2-terminal parts, no external support components needed: To gain a bit of insight into the challenges faced in temperature compensated current regulation, please see this paper: "A Precision Low-TC Wide-Range CMOS Current Reference".
H: Measuring power consumption / current of an Arduino using a multitester What is the correct way to measure the power consumption and / or current draw of an Arduino using a multitester? AI:
Select the milliamp range on the meter.
Connect the red lead to the meter's milliamp socket.
Disconnect the power supply from the board.
Connect the ground of the supply connector to the ground of the board connector.
Connect the red probe of the meter to the positive connector on the supply.
Connect the black probe of the meter to the positive connector on the board.
Observe the meter reading.
Multiply milliamps/1000 by voltage to get power in watts, where the voltage is either:
- the voltage measured with a second meter,
- the rated voltage of the supply, or
- the voltage measured earlier or later.
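A worked example of the last step (the reading here is only an illustration; your board will differ depending on what is connected): if the meter shows 45 mA and the board runs from 5 V, the power is 45/1000 × 5 = 0.225 W.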
H: ATtiny85, ARduino IDE, TinyDebug: Error sending int values The setup: An ATtiny85 is programmed using the Arduino IDE, it is supposed to read the ambient light from an LDR and turn on/off an LED accordingly. To get a better understanding of what my sensor readings are in different lighting situations I'd like to send the reading to my serial console using TinyDebug, a part of the Arduino-on-ATtiny libraries. Here's the minimum sketch: String myString; void setup() { pinMode(0, OUTPUT); Serial.begin(9600); } void loop() { int sensorValue = analogRead(2); String myString = String(sensorValue); Serial.write(sensorValue); Serial.write("\n\r"); if (sensorValue > 500) { digitalWrite(0, HIGH); } else { digitalWrite(0, LOW); } delay(500); } On my serial console all I receive is garbage: l x x l { t n z u l | q m n o (...) Changing the codes Serial.write(sensorValue); to Serial.write("hello"); nicely prints out hello hello hello (...) so I suppose the serial communication itself works. Trying to convert the int value to a string and sending that string, ie Serial.write(myString); gives a compiler error: (...) sketch_may24b.ino: In function 'void loop()': sketch_may24b:11: error: no matching function for call to 'TinyDebugSerial::write(String&)' /Users/cts/Documents/Arduino/hardware/tiny/cores/tiny/TinyDebugSerial.h:728: note: candidates are: virtual size_t TinyDebugSerial::write(uint8_t) /Users/cts/Documents/Arduino/hardware/tiny/cores/tiny/Print.h:75: note: virtual void Print::write(const uint8_t*, size_t) /Users/cts/Documents/Arduino/hardware/tiny/cores/tiny/Print.h:74: note: virtual void Print::write(const char*) Any idea how I can send my sensor values to my serial console instead? AI: Assuming you're using Arduino-Tiny with the hardware serial port (and I think the soft-serial port too), I think the solution should be as simple as changing your Serial.write(intVar) to Serial.print(intVar) As far as I can tell, the write function seems to be a lower-level thing, that only works on strings. What I think is happening is that you're passing an int, and the tiny is treating that int as the pointer to a zero-terminated string, and is proceeding to print the contents of a random memory address until it finally hits a zero. As usual, the solution is to go to the source. It's worth noting that I'm not sure how the actual serial library (either HardwareSerial.cpp or TinyDebugSerial.h is actually involved in the build process. I suspect it's something done in a makefile or the arduino tool, but I don't see any direct reference from either of the actual serial libraries to the print library, that I believe handles the actual formatting of arguments before passing them off to the serial libraries as a set of bytes. The whole arduino build process is very confusing, and I have not done much C++ at all, so I'm not familiar with how inheritance works in it.
H: common mode choke inductance question In a common mode choke, my understanding is that common mode currents will produce fluxes that add together, which increases the effective inductance between them and so attenuating these currents. When differential currents are flowing through the common mode choke, the fluxes will flow in opposite directions, so that there is no coupling and the individual fluxes will cancel for a net 0 flux, effectively making the inductance appear very low if at all. So when I am looking at specs for common mode chokes with an inductance specification, does this rating pertain to common mode signals only? That is my guess, but I'm still trying to get a better understanding. AI: Your assumptions are spot on. Another thing to bear in mind is that any leakage inductance between the common-mode windings manifests itself as a parasitic differential mode inductor - this is why it's common to see toroidal common-mode chokes with the windings opposite each other (not interleaved) to maximize leakage inductance and get that 'free' differential mode filtering.
H: gSpice: 'model is not a mosfet' error message I'm trying to simulate a circuit. So: I run netlist on the .sch file. It completed successfully. I've opened the .net file with gSpiceUI and tried to simulate, but an error was given: M1: model is not a mosfet The netlist file: ************************************************************** * Electronic circuit simulation file generated by gSpiceUI * * Version 0.9.98 Alpha (14/10/2009) * ************************************************************** * Component Definitions C1 1 2 100nF CLoad 0 5 100uF DOff 0 1 1N4007 DOn 1 3 1N4007 L1 3 2 22uH M1 2 4 5 DMP2004WK RLoad 0 5 1M V1 5 3 DC 5V * Signal source (V2 4 3 pulse 0 5 10n 10n 100n 1u 2u) V2 4 3 GENERATOR(1) * Model Definitions .MODEL 1N4007 D(IS=76.9p RS=42.0m BV=1.00k IBV=5.00u CJO=26.5p M=0.333 N=1.45 TT=4.32u) .MODEL DMP2004WK PMOS( LEVEL=1 VTO=-1.00 KP=0.200 GAMMA=1.24 + PHI=.75 LAMBDA=208u RD=0.126 RS=0.126 + IS=200f PB=0.800 MJ=0.460 CBD=40.6p + CBS=48.7p CGSO=240n CGDO=200n CGBO=1.31u ) * GNU-Cap Simulation Commands .OPTIONS NOPAGE .PRINT TR V(RLoad) .GENERATOR AMP=5.00 OFFSET=0.00 FREQ=1.00K PHASE=0.00 INIT=0.00 MIN=0.00 MAX=5.00 DELAY=0.00 RISE=0.00 WIDTH=5.00u FALL=0.00 PERIOD=1.00m .TR 0.00m 10.00 50.00u TE 27.00 BASIC .END Do you know why this error appears how to make this working? If any additional information is required plz ask, cuz I'm not sure if there's additional info I need to supply. Thank you. AI: I don't know gSpice specifically, but in ngSpice (a SPICE3-based free SPICE), the syntax for a MOSFET card is MXXXXXXX nd ng ns nb mname ... Notice, four terminals must be specified: drain, gate, source, and body. In your deck you specified only three connections, and the simulator probably thought DMP2004WK was the name of a new node where the body is connected. Typically the body of a PMOS should be connected to the source or to a positive voltage rail.
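A minimal fix, assuming the body should be tied to the source (node 5), is to give M1 all four nodes, e.g. M1 2 4 5 5 DMP2004WK - drain, gate, source, bulk, then the model name. That matches the usual situation for a discrete PMOS like the DMP2004WK, whose body is internally bonded to its source.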
H: Weird current at 8051 port 1 pin The pins in Port 1 have internal pull-ups when used for input. The MOSFET is off and the drain is pulled high using an internal pull-up resistor. I connect an ammeter and see that the current (IL) flowing to ground is about 15uA. I inferred that the value of the internal pull-up resistor is 333k. Now I connect a 10k resistor and measure the current. I get 0.5mA (5/10k). What happened to the internal pull-up resistor? Aren't the internal pull-up resistor and the 10k resistor in series? What is happening? When I use an external pull-up at Port 0, things appear as expected - resistors in series, both acting as a voltage divider to create a voltage drop at the pin. PS: I am using an AT89S52. PS: The same problem exists on ports 2 and 3. PS: Here are the currents for different resistors: 330 ohms - 15uA, 1k - 15uA, 100k - 50uA, 220k - 22uA. The max IL (as per the datasheet) is -50uA. AI: This is an informed guess rather than a definite answer, but I suspect there isn't a resistor there at all: resistors take up a lot of area in CMOS and aren't very accurate. Instead there will be a small current source made out of MOSFETs and limited to 50uA. It might be non-linear to guarantee that, no matter what pulldown you connect, the input is close to one rail or the other.
H: VHDL - Convert from binary/integer to BCD and display it on the 7-segment display As part of my project for the Digital System Design course I have to use a component to display on the 7-segment display a INTEGER range 0 to 9999 (or a std_logic_vector(13 downto 0)). I used the double dabble algorithm to convert a 14 bit binary number into BCD and then extracted the 4 digits. I tested the component in Active-HDL and it works good, but once loaded on my Basys2 FPGA board, it displays the wrong numbers. For example, instead of "1770", I get "0064". I am pretty sure that the problem is caused by something from the first process, where the BCD digits are generated, but I couldn't figure out what the problem is. Here is the code of the component: library ieee; use ieee.std_logic_1164.all; use ieee.std_logic_arith.all; use ieee.std_logic_unsigned.all; entity DISP_SOLD is port (EN: in std_logic; CLK_PLACA: in std_logic; SUMA: in integer range 0 to 9999; -- Cathodes L18_CA: out std_logic; F18_CB: out std_logic; D17_CC: out std_logic; D16_CD: out std_logic; G14_CE: out std_logic; J17_CF: out std_logic; H14_CG: out std_logic; -- Anodes AN0_F17: out std_logic; AN1_H17: out std_logic; AN2_C18: out std_logic; AN3_F15: out std_logic); end DISP_SOLD; architecture ARH of DISP_SOLD is -- digit_pattern_array = current_BCD_digit signal digit_pattern_array : std_logic_vector(6 downto 0) := "0000000"; signal current_segment : std_logic_vector(1 downto 0) := "00"; signal cathode_select : std_logic_vector(3 downto 0) := "0000"; -- count use for the clock divider signal count : std_logic_vector(6 downto 0) := "0000000"; -- MUX_CLK is the clock resulting from the clock divider signal MUX_CLK : std_logic; signal cifra_mii: std_logic_vector(3 downto 0):="0000"; -- 1st digit signal cifra_sute: std_logic_vector(3 downto 0):="0000"; -- 2nd digit signal cifra_zeci: std_logic_vector(3 downto 0):="0000"; -- 3rd digit signal cifra_unitati: std_logic_vector(3 downto 0):="0000"; -- 4th digit begin process(EN, SUMA) variable BIN: std_logic_vector(13 downto 0); variable BCD: std_logic_vector(15 downto 0):=(others => '0'); variable i: integer:=0; variable CONVERTED: std_logic:='0'; begin if EN='1' and CONVERTED='0' then BIN := conv_std_logic_vector(SUMA, 14); -- Convert Binary to BCD (Double Dabble algorithm) for i in 0 to 13 loop bcd(15 downto 1) := bcd(14 downto 0); --shifting the bits. bcd(0) := bin(13); bin(13 downto 1) := bin(12 downto 0); bin(0) :='0'; if(i < 13 and bcd(3 downto 0) > "0100") then --add 3 if BCD digit is greater than 4. bcd(3 downto 0) := bcd(3 downto 0) + "0011"; end if; if(i < 13 and bcd(7 downto 4) > "0100") then --add 3 if BCD digit is greater than 4. bcd(7 downto 4) := bcd(7 downto 4) + "0011"; end if; if(i < 13 and bcd(11 downto 8) > "0100") then --add 3 if BCD digit is greater than 4. bcd(11 downto 8) := bcd(11 downto 8) + "0011"; end if; if(i < 13 and bcd(15 downto 12) > "0100") then --add 3 if BCD digit is greater than 4. 
bcd(15 downto 12) := bcd(15 downto 12) + "0011"; end if; end loop; if SUMA /= 0 then CONVERTED:='1'; end if; cifra_mii <= BCD(15 downto 12); cifra_sute <= BCD(11 downto 8); cifra_zeci <= BCD(7 downto 4); cifra_unitati <=BCD(3 downto 0); end if; end process; -- CLK_PLACA: from 50MHz to MUX_CLK (~390Hz) divizor_CLK: process(CLK_PLACA) begin if rising_edge(CLK_PLACA) then count <= count + '1'; end if; MUX_CLK <= count(6); end process; variable current_BCD_digit: std_logic_vector(3 downto 0); begin if rising_edge(MUX_CLK) then current_segment <= current_segment + '1'; case current_segment is when "00" => current_BCD_digit := cifra_mii; cathode_select <= "1110"; when "01" => current_BCD_digit := cifra_sute; cathode_select <= "1101"; when "10" => current_BCD_digit := cifra_zeci; cathode_select <= "1011"; when "11" => current_BCD_digit := cifra_unitati; cathode_select <= "0111"; when others => null; end case; case current_BCD_digit is when "0000" => digit_pattern_array <= "0000001"; when "0001" => digit_pattern_array <= "1001111"; when "0010" => digit_pattern_array <= "0010010"; when "0011" => digit_pattern_array <= "0000110"; when "0100" => digit_pattern_array <= "1001100"; when "0101" => digit_pattern_array <= "0100100"; when "0110" => digit_pattern_array <= "0100000"; when "0111" => digit_pattern_array <= "0001111"; when "1000" => digit_pattern_array <= "0000000"; when "1001" => digit_pattern_array <= "0001100"; when others => null; end case; end if; end process; L18_CA <= digit_pattern_array(6); F18_CB <= digit_pattern_array(5); D17_CC <= digit_pattern_array(4); D16_CD <= digit_pattern_array(3); G14_CE <= digit_pattern_array(2); J17_CF <= digit_pattern_array(1); H14_CG <= digit_pattern_array(0); AN0_F17 <= cathode_select(0) when EN='1' else '0'; AN1_H17 <= cathode_select(1) when EN='1' else '0'; AN2_C18 <= cathode_select(2) when EN='1' else '0'; AN3_F15 <= cathode_select(3) when EN='1' else '0'; end ARH; AI: In VHDL (and HDLs in general) a for loop does not denote sequential execution like it does in a software programming language — it denotes the construction of multiple parallel instances of the hardware described in the body of the loop. In your case, you have many assignments to the same variable BCD/bcd, and these are conflicting with each other. If you really intend to construct a system based on two shift registers (one binary, one bcd) that takes 14 clock periods to do the conversion, then you need to describe the hardware that way, and set up a state machine to control it. On the other hand, if you really want to do it entirely combinatorially, then you need to create different intermediate variables (e.g., arrays that are indexed by the loop control variable) to hold the results at each stage.
H: A heat sensor that powers an LED with no controller I'm a programmer, and I can wrap my head around having a controller board which monitors a temperature sensor like this http://www.digikey.com/scripts/dksearch/dksus.dll?pv469=113&FV=fff4001e%2Cfff80137&mnonly=0&newproducts=0&ColumnSort=0&page=1&quantity=0&ptm=0&fid=0&pageSize=25 , and then tells an LED how bright to glow based on the temperature. Is it possible to do this with just a breadboard and a power source? Pipe the output of the sensor reading into some series of components to regulate how bright the LED would glow? AI: Assumptions: The question is really about accepting input from a temperature sensor, and indicating it via an LED's intensity, without using a microcontroller. The temperature sensor shown is merely indicative; any temperature sensor that suits this kind of implementation would do. The temperature sensor linked in the question has an I2C digital interface. I2C requires some form of processor or microcontroller at the receiving end, to query the device, retrieve data (the temperature in this case) and process it. Instead of such a digital temperature sensor, an analog sensor would allow an implementation such as described, i.e. lighting an LED based on temperature, without using any microcontroller. For example, the Analog Devices TMP37 has a 20 mV/°C output. This output voltage can be amplified using a DC-coupled op-amp, then used to drive another op-amp as a current driver, such that the output current varies with the sensed temperature. If the temperature sensing range is wide enough, the voltage amplification can be dispensed with, and a single op-amp can be used as a voltage-controlled current source, with the temperature sensor's output as the control signal. LEDs are current-driven rather than voltage-driven devices (as long as their required forward voltage is met); in other words, the light output is governed by the current through the LED rather than the voltage across it. Hence, this arrangement would vary the intensity according to temperature. Some careful tuning of the amplification stages will be required to achieve the right intensity gradient over the desired temperature range, but that is not too complex. Such a circuit can easily be built on a breadboard.
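As a rough illustration of the scaling (the numbers here are only an example, not from the original answer): the TMP37 gives \$20\ \text{mV}/^\circ C\$, so a 0-50 °C span produces a 0-1 V control voltage. A basic op-amp current sink - op-amp output driving a transistor, with a sense resistor \$R\$ from emitter to ground fed back to the inverting input - sets \$I = V_{ctrl}/R\$, so \$R = 50\ \Omega\$ would map that span onto 0-20 mA of LED current. If you only care about a narrower band (say 20-40 °C), amplify and offset the sensor voltage first, as the answer suggests.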
H: Need Help: Wiring LEDs with 2 source voltages I have a series of LEDs I would like to have display at 2 brightness levels and am confused how to properly wire this. This is for additional running lights/brake lights on my bike. LEDs - Typical Voltage 2.2V, Max Voltage 2.6V, Forward Current 150ma simulate this circuit – Schematic created using CircuitLab Source voltage 1 (running lights): 11.5V Source voltage 2 (brake lights): 0.5V at all times, 11.7V when brake is engaged When using the series LED calculator I believe for the Source voltage 1 I should set the forward voltage to 1.1V and 75ma (half of typical voltage and half of forward current) and then set the Source Voltage 2 to 1.1V and 75ma (the other half of voltage and forward current). That in my eyes would result 2.2V and 150ma. http://led.linear1.org/led.wiz Is this right or should I be approaching this wiring different? AI: Note: The Voltage sources are indicated the wrong way around in the question's schematic, going by the LED direction shown (negative ground circuit). First the always-on running light case. For the LED resistance calculator, use: supply voltage = 11.5 Volts Vf = 2.2 Volts ( x number of LEDs) current = 75 mA The entire forward voltage of all the LEDs would appear at the supply leads (not half), but you would want only about half the rated current to flow. Now, the brake lights: In order to prevent current from flowing from the running light lead to the 0.5 Volt brake light line when the brakes are off, you would need a diode on the brake light line, connected so as to be forward biased, same direction as the LEDs. A 1n4001 diode should do fine. simulate this circuit – Schematic created using CircuitLab For the LED calculator, use: Supply voltage = 11.7 - 0.7 = 11 Volts (the diode drops around 0.7 Volts) Vf = 2.2 Volts ( x number of LEDs) current = 75 mA The reason for doing this is, the currents from the two sources add up in going through the LEDs. Hence, when both supplies are high, 150 mA will flow. When just the running lights are on, 75 mA will flow. The voltages do not add up between leads. As an added twist, if the LEDs need to light up only at one-third intensity for running lights, and full intensity for braking, this is easy: Just take 50 mA for the running lights calculation, and the remaining 100 mA calculation for the brake lights calculation, in the bullet points above.
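To put numbers on it, suppose each string has three LEDs in series (adjust for however many you actually use): for the running-light feed, \$R \approx (11.5 - 3 \times 2.2)\ \text{V} / 0.075\ \text{A} \approx 65\ \Omega\$ (68 ohms as a standard value), and for the brake feed, after the diode drop, \$R \approx (11 - 6.6)\ \text{V} / 0.075\ \text{A} \approx 59\ \Omega\$ (56 or 62 ohms). Each resistor then dissipates roughly \$4.9\ \text{V} \times 0.075\ \text{A} \approx 0.37\ \text{W}\$, so use half-watt or larger parts. These values are only an example; rerun the numbers for your actual LED count.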
H: 5v solar batterypack to 12v output This is just a concept I was thinking about. I have an arduino project with some dc motors that needs to run outdoors for a long time. The DC motors themselves do not need to run all the time, usually a 2-3 minute burst and at most for 30 mins at a time. They are 12v motors, drawing 0.4A at peak. I expect the overall circuit to draw around 2A during load. I was looking at these solar battery packs, meant for charging phones with 5v output (has 3000mAh battery). I was wondering if I use a DC/DC booster and boost it to 12v to drive the motors, is it going to work well? I expect the battery will just drain faster, but since the motors are not used all the time, the solar panels can top up the battery when the system is resting for rest of the day. For reference: Solar batteypack: http://www.amazon.com/Opteka-BP-SC4000-Powered-Capacity-4000mAh/dp/B005ZSVZRY/ref=sr_1_6?ie=UTF8&qid=1369413420&sr=8-6&keywords=solar+battery DC/DC step up regulator: http://www.robotshop.com/ca/productinfo.aspx?pc=RB-Dfr-121&lang=en-US AI: That could almost work. The thing to consider with DC/DC voltage converters is that they can change voltage but they can not create or destroy energy, thus the power demand from your energy source is the same as the power demand from the load, plus inefficiencies in the converter. Power is the product of voltage and current, so if your 12V motors draw at most 2A, then the power (at most) is: \$ 12V \cdot 2A = 24W \$ Knowing that the solar battery you referenced is approximately a 5V voltage source, we can calculate what the current must be: \$ 24W = 5V \cdot I \$ \$ 24W / 5V = I = 4.8A \$ That's assuming 100% conversion efficiency. The website for your DC/DC converter says the efficiency is 90%, but that seems overly optimistic to me. Let's say we need 6 amps to allow for some inefficiency. The question then is this: can your battery pack provide 6 amps? It doesn't look like this one can. So, either you need to find one that can, or reduce your motor load, or store the energy in a 12V battery that can supply 6 amps at least long enough to run your motors (and then recharge it slowly).
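As a quick sanity check on run time (rough numbers only): a 3000 mAh pack at a nominal 5 V stores about \$5\ \text{V} \times 3\ \text{Ah} = 15\ \text{Wh}\$, so at the 24 W worst case you would get only around \$15\ \text{Wh} / 24\ \text{W} \approx 37\$ minutes of motor running per full charge, less after converter losses. The small solar panel built into packs like that typically delivers on the order of a watt, so it needs many hours of direct sun to put that energy back. The duty cycle you describe is therefore the thing to budget carefully, even once you find a pack (or a 12 V battery) that can supply the current.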
H: How to realize this transfer function with OP-AMPs? This is a homework question. I know that: Given a transfer function of \$H(s)\$ below, we can realize it with an OP-AMP as follows. \$H(s)=-\dfrac{2}{s+2}=-\dfrac{\dfrac{1}{2}}{\dfrac{s}{4}+\dfrac{1}{2}}=-\dfrac{Z_f}{Zi}=-\dfrac{\dfrac{R_f}{R_f*s*C_f+1}}{Rin}\$ where \$R_{in}=R_f=\dfrac{1}{2}\Omega\$ and \$C_f=1\text{F}\$ However, now that I have to realize a transfer function with complex numbers, I am puzzled on how to do so. Could you point me in the right direction on realizing the following transfer function using OP-AMP(s)? \$H(s)=\dfrac{1}{s + 0.383 + j*0.924}\$ The above equation is a part of: \$H(s)=\dfrac{1}{s^2 + 0.765*s + 1}=\dfrac{1}{s + 0.383 + j*0.924}*\dfrac{1}{s + 0.383 - j*0.924}\$ Note: In the big picture, I have to realize a 4th-order HPF using the cascaded (serial) decomposition method. The normalized transfer function of the filter is given as: \$H(s)=\dfrac{s^4}{s^4 + 2.613*s^3 + 3.414*s^2 + 2.613*s + 1}\$ This can be written as: \$H(s)=\dfrac{s^2}{s^2 + 0.765*s + 1}*\dfrac{s^2}{s^2 + 1.848*s + 1}\$ \$=\frac{s}{s + 0.383 + j*0.924}*\frac{s}{s + 0.383 - j*0.924}*\frac{s}{s + 0.924 + j*0.383}*\frac{s}{s + 0.924 - j*0.383}\$ AI: The vital part of your question is the note that the normalized 4th-order high-pass factors into \$\dfrac{s^2}{s^2 + 0.765*s + 1}*\dfrac{s^2}{s^2 + 1.848*s + 1}\$. This tells me that you'll need two cascaded 2nd-order high-pass filters; cascading the filters is the same as the multiplication in the middle of that equation. A Sallen-Key high-pass filter is the standard way to build each section (remember you'll need two, cascaded; the schematic and its transfer function aren't reproduced here). Now you need to convert your individual 2nd-order equations into a form that suits the Sallen-Key formulas. From experience (and with a little help from Google and Wikipedia) your formulas are of the form \$H(s)=\dfrac{s^2}{s^2 + \dfrac{\omega_0}{Q}s + \omega_0^2}\$, and this means that for the left-hand part of your equation, 0.765 = Wo/Q and 1 = (Wo)^2. By my reckoning, this means Q = 1/0.765 and Wo = 1. Equate these values to the Sallen-Key formulas for Wo and Q to get the resistor and capacitor values for the left-hand stage, then repeat for the right-hand stage of your formula. This isn't as easy as it sounds and a little trial and error will be needed. Assume both capacitors are the same value and that R1 is half of R2 - try to get values that match Q and Wo - if Q is too low, make R1 a bit smaller and iterate. Alternatively, use a website where you can enter F (Wo/2Pi) and Q; one such calculator gave sensible component values for the first part of your transfer function (its results aren't reproduced here). Note that there may be a tiny discrepancy in the numbers if the calculator defaults to standard resistor and capacitor sizes - maybe you can find one that doesn't.
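To sketch how the numbers fall out (using the common unity-gain Sallen-Key high-pass relations \$\omega_0 = 1/\sqrt{R_1 R_2 C_1 C_2}\$ and, for equal capacitors, \$Q = \tfrac{1}{2}\sqrt{R_2/R_1}\$ - check which resistor is which against your particular schematic): the first section needs \$Q = 1/0.765 \approx 1.31\$, so \$R_2/R_1 = 4Q^2 \approx 6.8\$. Denormalizing to, say, a 1 kHz corner with \$C = 10\ \text{nF}\$ gives \$\sqrt{R_1 R_2} = 1/(2\pi \cdot 1000 \cdot 10\ \text{nF}) \approx 15.9\ \text{k}\Omega\$, hence \$R_1 \approx 6.1\ \text{k}\Omega\$ and \$R_2 \approx 41\ \text{k}\Omega\$. The second section repeats the exercise with \$Q = 1/1.848 \approx 0.54\$. The 1 kHz corner and 10 nF are arbitrary illustrative choices; if the exercise wants the normalized network itself, keep \$\omega_0 = 1\$ rad/s and pick C = 1 F, giving \$\sqrt{R_1 R_2} = 1\ \Omega\$.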
H: Gyroscope questions: building a stable platform I'm embarking on a summer project and have a few questions. I'm planning to build a 3-axis stable platform for electronics. (Picture a flat surface that stays parallel with respect to the ground while the supports for this platform are free to rotate in any axis) My questions are: Is it better to use 3 1-axis gyros each located on the center of the axis of rotation or 1 3-axis gyro located in the middle of the platform? Could I accurately compensate for gyro drift by using accelerometers and implementing a kalman filter? I need an accurate and fast gyro for this application. I've been looking through a digikey catalog (http://www.digikey.com/product-search/en/sensors-transducers/gyroscopes/1967243?k=gyro&ColumnSort=1000011&fid=0&pageSize=25) and was wondering what specs I should look for. Do I want a gyro with high sensitivity (mV/°/s) or high range (°/s)? Thanks Guys!! AI: Doesn't matter a whole lot. However, in consideration of question 2, you might consider getting one "6 DOF" sensor (3 gyros, 3 accelerometers). Integrated sensors probably have better sensor-to-sensor alignment than you can achieve with individual assembly. Yes, but it isn't trivial to get it right, especially in highly dynamic situations. Look up terms such as "coning" and "sculling". Sensitivity is more important than range. Remember, if the system is working correctly, the platform isn't moving at all. You want to catch any deviations (errors) as quickly and accurately as possible. If you had perfectly frictionless gimbals, you wouldn't need an active system at all — the mass of the platform would keep it oriented in space.
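To give a flavour of what the drift compensation looks like in code, here is a minimal complementary filter for a single axis - a much simpler cousin of the Kalman filter, with the names gyroRate, accelAngle and dt purely illustrative:

float angle = 0.0f;  // current tilt estimate for one axis, in degrees

// gyroRate: angular rate from the gyro (deg/s); accelAngle: tilt computed
// from the accelerometer (degrees); dt: time since the last update (s)
void updateAngle(float gyroRate, float accelAngle, float dt) {
    const float alpha = 0.98f;  // trust the gyro short-term, the accelerometer long-term
    angle = alpha * (angle + gyroRate * dt) + (1.0f - alpha) * accelAngle;
}

A Kalman filter plays the same role but chooses the blend according to the estimated noise of each sensor rather than a fixed constant, which is where the real work lies.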
H: Connect different devices (Ground/Voltage mismatch) To begin with, I realize that voltage is relative, but I do not understand how to get a common ground so that everything is relatively correct going in and out of the various chips. The picture summarizes the setup: Xbox plugged into the wall -> wired USB Xbox controller whose inputs run at 1.8v. The controller button inputs stream through a level shifter (SN74LVC245A) which shifts the logic to 3.3v -> SPI chip (MCP23S17) -> FPGA (Basys 2). The FPGA is connected to the computer via USB, which is plugged into the wall. I would expect the multimeter to read 1.8v because of the assumed common ground, but I guess this is not the case, judging by the value it shows. Right now both the laptop and the Xbox are plugged into the same power strip, but I could see how there might be further mismatch if they were on separate rails. How do I work around this issue? Have I overlooked something simple, or is it a flaw in the design? I realize that 4 buttons would be easier to take straight into the FPGA after coming out of the level shifter, but I plan to add a lot more of the controller's inputs, for which I do not have enough pins. AI: Just to keep things simple, I'm going to assume that everything in your consumer electronics is designed properly (a reasonable assumption) and is still operating to its design standards. Another assumption I am going to make is that at least one of these units, and perhaps both, has only a 2-pronged plug receiving mains voltage. The power converted in the wall wart/power brick is delivered as an AC waveform, and there should be no direct DC connection to the mains. If there were, there would be a third prong (safety ground) on the mains plug - and even then, that ground may not be carried through to the USB port or controller line. Basically, one or both of the DC sides of the power bricks has no ground reference relative to the AC side: the DC is "floating", with no assumed ground. The solution is to establish that common ground yourself, by connecting the grounds of all of the devices together. For proof, simply measure the difference between the controller's ground and the USB-based FPGA's ground. Do so on both DC and AC ranges and you'll find that it drifts around. If there is a fault, or something has failed, you'll notice it by the circuit breaker tripping. Have you ever noticed that when you plug a USB cable from one computer into another there is sometimes a small spark, yet everything works once connected? It's the same thing happening here.
H: Circuit with a MCP9700 temp sensor - what is this capacitor used for? I have built a circuit that uses a MCP9700 to read the temperature, as explained here. This is the circuit schematic: I understand that C2 is used as a decoupling capacitor to clean up noise (datasheet). However, I am not sure about C1. Is it a decoupling capacitor as well? I did the experiment without C1 and the temperature reading was not significantly affected. AI: The answer is on the first page of the datasheet: in short, C1 doesn't hurt - it slows the response down to prevent overshoot, presumably at power-up or in response to power-supply ripple. The power supply rejection (PSR) is \$0.1\ ^\circ C/V\$, which is not bad.
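To put that PSR figure in perspective (simple arithmetic, not a datasheet number): even 100 mV of ripple on the supply corresponds to only \$0.1\ ^\circ C/V \times 0.1\ V = 0.01\ ^\circ C\$ of apparent temperature error, far below anything you would notice in a reading.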
H: Storage in embedded Linux I come from a background of small microcontrollers (ARM7, Cortex-M series). Having started working with a Cortex-A8 board (BeagleBoard-xM), I wonder how applications are stored in flash/RAM. On small controllers you specify the location directly (and you know whether it is flash or RAM), but when I store an application in /home/root, which memory is it stored in? Basically, where is the root file system located in memory? AI: Just as with Linux on full-size machines, the bootloader passes the location of the root file system to the kernel as the root= argument. The bootloader itself (usually Das U-Boot) is responsible for checking NOR flash, NAND flash, SD, USB, SATA, PXE, etc. to see whether a kernel and optional initrd can be found, and for passing options as defined in the board's environment variables. TL;DR: Check the bootloader configuration and the kernel command line.
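For example (illustrative values only - your board's environment will differ), a U-Boot setup for a BeagleBoard-xM booting from the SD card might use something like: setenv bootargs console=ttyO2,115200n8 root=/dev/mmcblk0p2 rw rootfstype=ext4 rootwait followed by saveenv. Here /dev/mmcblk0p2 is the second partition of the SD card holding the root file system, so that partition is the "memory" your /home/root actually lives in. You can see what your board really uses with printenv at the U-Boot prompt, or cat /proc/cmdline from within Linux.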
H: MPLAB assembly issue code jumps I am having some issues with what I think to be good code. This is for a uni assignment but this is not the code I'm submitting. This is a test file to understand how it all works. Some of this code was given in the assignment: the snum & binary sections. I need to get each number in snum in turn and display the equivalent binary number on a 7 segment for a few seconds. It runs in the debugger until 'call binary', then it jumps back to start (after that instructino). I don't know what I am doing wrong. Any ideas? ; Directive sets processor type ............................. list p=16F84A #include "P16F84A.INC" ; Set configuration fuses ................................... __CONFIG _CP_OFF & _WDT_OFF &_PWRTE_ON & _RC_OSC ; Code protection off, watchdog timer off, power up timer on, RC Clock errorlevel -302 ;No warnings, register not in Bank 0 PCL EQU 02 ; Program Counter Low PORTB EQU 06 ; Port B Data Register TRISB EQU 86 ; Port B Data direction register STATUS EQU 03 ; Status register RP0 EQU 05 ; Bank select bit timer EQU 0C ; GPR1 used as delay counter point EQU 0D ; GPR2 used as table pointer org 000 goto start snum addwf PCL,F dt "0001035020" ;Substitute your student number ;(10 ASCII digits) ; Pattern table for seven segment display on Port B .. binary addwf PCL,F retlw b'00111111' ;Set display to 0 retlw b'00000110' ;Set display to 1 retlw b'01011011' ;Set display to 2 retlw b'01001111' ;Set display to 3 retlw b'01100110' ;Set display to 4 retlw b'01101101' ;Set display to 5 retlw b'01111111' ;Set display to 6 retlw b'00000111' ;Set display to 7 retlw b'01111111' ;Set display to 8 retlw b'01101111' ;Set display to 9 ; Initialise Port B (Port A defaults to inputs)........ start bcf STATUS,RP0 ;Bank select 0 clrf PORTB ;Clear Port B data latches bsf STATUS,RP0 ;Bank select 1 movlw 0x00 ; movwf TRISB ;Set port B lines to output bcf STATUS,RP0 ;Bank select 0 ; MAIN LOOP nextdigit movlw d'10' subwf point,W btfsc 3,2 goto nextdigit movf point,W call snum call binary movwf PORTB NOP NOP incf point goto nextdigit end AI: You don't initialize the point register. That means it can hold any value when you start looping the nextdigit loop! Try adding a clrf point under the start label. For debugging: in MPLAB -> View -> File Registers, you can see the value of point during runtime.
H: Transistor current limit circuit - power handling? I am designing a simple current limiter which charges a large (4.7mF) capacitor with a charge current of roughly 500mA from a supply voltage of about 10-20V - see the circuit below. My dilemma is that I already have a bunch of MMBT2222As and it would be nice to use this part without adding another line item. Whilst it can happily push 500mA through the collector, it only has a power rating of 350mW, which will be massively exceeded when the capacitor first charges: the voltage across the collector will be about 10V, so the power will be about 5W for the first 200ms or so. In the circuit above, I have added an extra 2N2222A in parallel with the existing one (with the idea of adding more as required), but I feel this strategy is fraught with danger - mismatched gains will cause unequal currents and defeat the whole purpose of the exercise. So, is there a clever way to achieve this circuit? Or do I just have to bite the bullet and put in a transistor with a better power rating? AI: Use a small emitter resistor on each transistor to counter the differences in transistor gain, like this: simulate this circuit – Schematic created using CircuitLab
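As a rough sizing guide (these numbers are mine, not from the answer's schematic): pick the emitter resistors so they drop a few hundred millivolts at the intended per-device current, enough to swamp the transistors' \$V_{BE}\$ and gain spread. With two devices sharing 500mA, that is 250mA each, so \$R_E \approx 0.3\ V / 0.25\ A \approx 1.2\ \Omega\$ (1 to 1.5 ohms, dissipating about 75mW per resistor). Bear in mind the resistors only fix the sharing: during the initial charge each of N paralleled transistors still sees roughly \$5\ W / N\$, so check that (and the 200ms duration) against the 350mW rating before settling on how many to use.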