H: Finding new props that will fit I have an old wifi drone with 2-blade props that I would like to upgrade to 3-blade props, but I'm having a hard time finding any that will fit my drone, and I'm not familiar with the different methods by which props are attached. Are some attachments more common/standardized, or do they basically just vary by manufacturer? When I search around, most props seem to be made to screw down directly onto a rotor, but my drone has a configuration where each motor turns a gear that turns an axle, and the props are fitted over this axle and then secured with a tiny screw from the side. Is there a name for that kind of attachment? Here's a picture of the propeller, which might give a better idea of how it attaches: AI: I've never seen a prop attached like that, so I don't think you'll be able to find replacements. However, as you have a shaft, you can get a collet prop adapter and then use props that are designed to bolt on. These clamp onto a smooth shaft as you tighten the bolt. You just need to find one the right size for your shaft, and then find props with the right size hole to fit over the threads. Brydon raised the point that you won't find collet adapters small enough. Really small drones usually just use press-fit props. You can use a small drop of superglue if necessary.
H: How does the Switchblade 300 drone maneuver? I was wondering how the Switchblade 300 drone (https://www.avinc.com/tms/switchblade) is able to maneuver. In all the photos I can find, there is no sign of any ailerons - so does it wiggle the wings at the hinge point, does it use thrust vectoring, or are the ailerons simply hard to see? The drone has both a forward wing and a rear wing (similar to a wing and elevator, but sized about evenly), two rudders (or simply vertical stabilizers), and a pusher-configured propeller in the tail. AI: After investigating the very few detailed images available of this loitering munition UAV, it appears to use ailerons on the front set of wings. I've found one image which shows the underside of the UAV and its possible ailerons. If you look closely, there appears to be some form of crease hinge in front of the trailing edge of the wing, as well as two things (potentially pushrods, aileron limiters, or simply mounting latches for when the UAV is in its folded form). While there is no documentation I know of that supports this or any other maneuvering method, the image makes it appear that the Switchblade 300 uses ailerons as its primary method of maneuvering mid-air. Another image (credit to sempaiscuba) shows the front ailerons in even greater clarity.
H: Drone recommended for quality shooting but for non-experts I need to buy a serious drone to make quality videos and footage. However, searching online, there is a lot of stuff that is passed off as "better than others". Could someone recommend a drone that is simple to use and is able to: Shoot at a quality of at least 1080p 60fps Have macros to follow a subject The drone should be durable and smart (avoid obstacles) Particularly resistant to sand and wind I'd rather not spend too much, so if you can recommend the best quality/price ratio drone with these fairly general features I would be grateful. AI: This is exactly the niche that DJI have dominated. Near-professional-level features but easy to learn. They're not priced as 'toys' but they're a lot cheaper than professional gear. I'm not an expert in their various models, but features like following a subject and obstacle avoidance are quite advanced and rule out most of the cheaper copycat drones. 'Real' TV and movie filming will use much more expensive drones that require a lot more knowledge and experience, but it seems that any YouTuber can buy a Mavic and produce great shots after skimming the manual. At the other end of the scale, there are cheaper options, but none of them are nearly as popular. Either they don't have the same features, just don't work as well, or they aren't as easy to use. There are a lot of cheap imitations aimed at people who don't do much research. They look similar and claim similar features for a lot less, but are missing features you don't realise are important, or just aren't much good.
H: Can additional meta information be automatically stored with DJI-mini-2 images When a DJI Mini 2 drone takes a photo, is it possible to automatically store the information that the drone clearly knows at the time of the snapshot? For example: lat/lng of the location height of the drone angle of the camera AI: The Mavic Mini, the Mavic, and the Mavic 2 do store these data. The latitude, longitude, altitude (meters above the WGS84 ellipsoid), and azimuth will be written to standard EXIF tags. The pitch, roll, and yaw of the drone and the camera gimbal are written to XMP tags. Depending on your EXIF viewing method you may not be aware of those tags. Try using gdalinfo at the command line, or ExifTool, to view more of the image metadata. See this post for more information on exploiting these data.
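As a minimal sketch of reading those tags programmatically: this calls ExifTool from Python (assuming exiftool is installed and on your PATH). The XMP tag names below are typical of DJI images but vary by model and firmware, so treat them as assumptions to verify against your own files:

import json
import subprocess

def read_drone_tags(path):
    # -j emits JSON, -n keeps values numeric; gimbal angles live in XMP tags
    out = subprocess.check_output(
        ["exiftool", "-j", "-n",
         "-GPSLatitude", "-GPSLongitude", "-AbsoluteAltitude",
         "-GimbalPitchDegree", "-GimbalYawDegree", path])
    return json.loads(out)[0]

print(read_drone_tags("DJI_0001.JPG"))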
H: How to know current image bands? I work with sets of images from sensors such as: DJI P4 Multispectral DJI P4 PRO MicaSense RedEdge-MX MicaSense RedEdge-P Then I extract attributes from the images. For this, I need to know the current band configuration for each orthomosaic I am working on. Is there a way to know the current band configuration? I use Agisoft Metashape for image processing and QGIS for analysis. Does the Raster Calculator in Metashape tell the current band configuration? What about the Camera Calibration in Metashape? If you change the band configuration there, is it reflected in the Raster Calculator? AI: Download ExifTool (freeware). Run your images through ExifTool and check the Band Name in the EXIF tags. My RedEdge-MX image IMG_0031_.tif shows it as a blue band, for example. With respect to the Phantom 4 images: that is an RGB image, so unless you are using software to exploit and reassemble your image bands you can expect band 1 to be red and band 2 to be green... ExifTool should help you identify the bands of the DJI multispectral, but typically you can expect band 4 to be the IR image. My Parrot Sequoia images come with named bands; is that not the case with the RedEdge? Either way, again, ExifTool.
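To batch-check band names, something like the sketch below works (assuming exiftool is installed; the BandName tag is what MicaSense imagery typically carries, and the folder path is just an example):

import glob
import subprocess

# Print the Band Name tag for every TIFF so you can map band index -> color
for tif in sorted(glob.glob("ortho_inputs/*.tif")):
    out = subprocess.check_output(["exiftool", "-s", "-BandName", tif])
    print(tif, out.decode().strip())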
H: RC Airplane Fuel Tank Issue I am using a RC Fuel tank which looks like: Is there anything special about the cap? The problem is I opened the cap. Should it never be opened? There are a few drops that leak in intervals when I put it back from the circled region. Should I buy a new one? Or should I just tighten the cap hard enough or put some seal on the lid? EDIT1: On a website selling the fuel tank the following is stated: "High quality and durable shatter proof tank of 1000ml. This fuel tank has 3 openings. One for fuel, another for fuel refill and another for compression from muffler." What is meant by "for compression from muffler"? Thanks AI: My experience with glow engine models is that the tanks are made to be opened and resealed; the user needs to be able to replace the tubing inside the tank (which deteriorates over time due to methanol and nitromethane exposure) as well as in some cases change the length of the flexible "clunk" pickup tube to optimize for tank position and type of flight regime (trainer, sport, aerobatic, or pattern?). Additionally, this appears to be a metal cap, so it should be capable of handling enough tightening torque to slightly compress whatever seal is inside the cap. When the tank pressurizes (assuming you're running pressure), the tank pressure will slightly push the cap away from the tank body; there needs to be enough compression on the seal to stay sealed in this state. Prompted by comments and the edited question: tank pressure, also sometimes broken down to "muffler pressure" or "crankcase pressure" is a method of pressurizing a model fuel tank to ensure high G maneuvers (tight loops, for instance) don't starve the engine for fuel exactly when you need power the most. By pressurizing the tank (typically to only 0.1 bar or so) the tank pressure can force fuel up some distance, even against, say, 20 G. For trainers, it's unneeded; for most sport aircraft, very optional, but for aerobatic or pattern flying it's almost a necessity (last time I flew a lot, pattern models were starting to use actual fuel pumps for this).
H: FPV System recommendations Here are my desired specs for an FPV camera - let me know if they are impossible: 5.8 GHz Digital rather than analog, for less interference/artifacts? CCD instead of CMOS, because a global shutter would be better. This can be flexible if not possible. Could anyone point me in the right direction? AI: It sounds like what you are wanting isn't actually an FPV system, but rather a wireless video downlink. There are a number of systems used in commercial video production capable of streaming video from basically any HD source to an HDMI endpoint. If you're using a device with HDMI capture, you can then use it for your image processing on the computer. Unfortunately most of these systems are quite expensive as they're primarily used for video production. There are a few in the $600-800 range such as the CVW Swift 800Pro, which I've seen used on cinema drones for live video feeds. There's also the Hollyland Mars 400. Some of them actually transmit the video to an app over wifi as well, so there may be a way to use that for your image acquisition. Check out B&H for other options as well. They're a knowledgeable distributor and could probably help you look into options.
H: PhaseOne iXM-100 IIQ to JPG I was given some PhaseOne iXM-100 .IIQ photos and I need to convert them into .jpg format. The iXM-100 was used with a DJI drone, though I am unsure if it was the M300 or M600. Because I have thousands of them, I'd like the conversion to happen programmatically (Python). I've tried a couple of different methods, but both have failed. First code block: works, but the output image is static. It seems that when I change the image mode, it changes the static color. Some modes work and some do not.

from PIL import Image

rawData = open("D:/Downloads/P0036906.IIQ", 'rb').read()
imgSize = (640, 480)
img = Image.frombytes('RGB', imgSize, rawData)
img.save("D:/Downloads/P0036906.jpg")

Second code block: errors out. It seems that the library cannot read the .iiq format.

import rawpy
import imageio

path = "D:/Downloads/P0036906.IIQ"
with rawpy.imread(path) as raw:
    rgb = raw.postprocess()
imageio.imwrite('D:/Downloads/P0036906.jpg', rgb)

Traceback (most recent call last):
  File "iqq_jpg.py", line 23, in <module>
    rgb = raw.postprocess()
  File "rawpy\_rawpy.pyx", line 887, in rawpy._rawpy.RawPy.postprocess
  File "rawpy\_rawpy.pyx", line 790, in rawpy._rawpy.RawPy.dcraw_process
  File "rawpy\_rawpy.pyx", line 438, in rawpy._rawpy.RawPy.ensure_unpack
  File "rawpy\_rawpy.pyx", line 432, in rawpy._rawpy.RawPy.unpack
  File "rawpy\_rawpy.pyx", line 936, in rawpy._rawpy.RawPy.handle_error
rawpy._rawpy.LibRawIOError: b'Input/output error'

Any suggestions on how to convert IIQ to JPG? Input: Output RGB: Output CMYK: AI: What I found is that IIQ is a proprietary raw format that PhaseOne built and uses. I was unable to find any Python tools that could convert it into JPEG. I did find a discussion on using ExifTool with IIQ, if anyone finds that helpful. I eventually found that I needed to download a free trial version of Capture One in order to convert.
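One programmatic middle ground worth noting: many PhaseOne raw files embed a large JPEG preview that ExifTool can pull out without decoding the raw data. A sketch, assuming exiftool is on the PATH and that your IIQ files actually carry a PreviewImage tag (check one file first):

import glob
import subprocess

for iiq in glob.glob("D:/Downloads/*.IIQ"):
    jpg = iiq.rsplit(".", 1)[0] + ".jpg"
    # -b dumps the named binary tag; this fails if no preview is embedded
    data = subprocess.check_output(["exiftool", "-b", "-PreviewImage", iiq])
    with open(jpg, "wb") as f:
        f.write(data)

This gives camera-processed JPEGs rather than a true raw conversion, so it may not match the quality Capture One produces.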
H: Rewiring 2S2P as 4S I have a LiPo battery in a 2S2P configuration, but I need a higher-voltage (4S) battery for my project. From my understanding, a 2S2P battery has 4 cells, the same as a 4S (2 in series and 2 in parallel). Would I be able to rewire the cells as shown in the image below to achieve this, assuming all the cells are at the same voltage? If I understand correctly, this would halve the capacity and double the voltage. Would this cause any side effects or potentially damage the battery? AI: Yes, that's exactly right. It's just a difference in how the cells are connected. Don't worry about balancing the voltage - your charger will do that for you when you charge it. No current will flow between the cells when you connect them. The biggest issue here is securing the wires so that the delicate cell tabs aren't pulled or moved during use. Look at how everything is insulated and secured, and try to do as good a job when reassembling it.
H: Amperage of FC, ESC There is something I totally don't understand about amperage. According to the spec, my AIO FC has 5 A and 6 A (for 5 seconds). However, sometimes I see in OCD far beyond 5A and 6A, and the FC and ESC are OK. Shouldn't they burn down? More about my ignorance. As I understand it, more C is better in a LiPo. Let's take a very good LiPo battery: "TATTU R-Line Version 5.0 V5 6S LiPo Battery 1200mAh 22.2V 150C 6S1P XT60 Plug RC FPV Racing Quadcopter Drone Battery" According to the spec: Discharge Rate: 150C Max Burst Discharge Rate: 260C 1c = 12 A 12 A * 150 = 1,800 A !!! What ??? !!! An ESC that can handle 1,800 A! Please, enlighten my ignorance. AI: sometimes I see in OCD far beyond 5A and 6A Those specs are probably per motor, so 4 motors could pull 20A. And I think you mean OSD, for On Screen Display. 1c = 12 A Nope, 1200mAh is 1.2A, so 1C is 1.2A. I think some battery manufacturers measure C ratings as the maximum the battery will produce when shorted across a current meter. It will do that once, briefly, and then die. As a basic rule of thumb, if your battery gets hot, you're overloading it and shortening its life.
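The corrected arithmetic in one place (a trivial sketch; the numbers come from the battery label above):

capacity_ah = 1200 / 1000      # 1200 mAh = 1.2 Ah, so 1C = 1.2 A
c_rating = 150
print(capacity_ah * c_rating)  # 180.0 A on paper, not 1,800 A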
H: What is this pad on the HappyModel EP2 2.4G ExpressLRS Receiver? I have soldered wires to a "HappyModel EP2 2.4G ExpressLRS Receiver". There is one pad I don't recognize and can't find the purpose of. See the pic "EP2 RX connection diagram". The problem is that this was my first time soldering something real (I haven't even practiced on test boards). As you can see in the picture, the GND pad and the "unknown" pad are really close. Maybe my solder leaked from GND to "unknown"; I can't tell by eye. But what I can do is a continuity test with my multimeter. I can hear the buzzer, so the two pads are connected. But I don't know if that is my bad soldering or by design. Interestingly, the EP1 receiver model seems to have GND and "unknown" connected, as you can see in the picture "EP1 RX connection diagram". BTW, I have seen an ELRS receiver from BetaFPV with a pad very similar to "unknown" (BETAFPV ELRS Lite Receiver 2.4GHz). However, there is documentation about that one: "Firmware upgrade button (Use a metal object to short-circuit the GND while powering on)" When I soldered wires to the board (connected the receiver), sometimes when I touch or move the receiver, the drone disconnects from the computer (USB connection) and even all the lights stop blinking. It looks like a short, or some joints I didn't solder reliably. Anyway, I carefully stuck the receiver down with mounting tape and closed up the drone. And it flies. It was the first flight test - take off, hover a little bit, fly forward, turn, and land. However, I am afraid that with more aggressive flight the receiver could disconnect. Long story short: does somebody know what the "unknown" pad is, and whether it should be connected to GND by default? P.S. Actually it is very easy to check with a new receiver - just do a continuity test. I would do it myself, but I am a beginner at soldering and really afraid of re-soldering everything. AI: The pad in question appears to be a continuation of the Ground pad, which when connected to the pad immediately 'above' (per the images) puts the receiver into 'boot' mode. See https://www.expresslrs.org/2.0/quick-start/receivers/hmep2400/ Image from above link
H: Can poor soldering of ESC wires lead to low voltage draw from a LiPo battery? Question: Would improper soldering of the power wires of my electronic speed controller limit the amount of voltage that my ESC draws from the LiPo battery? My situation: I have to solder an XT60 male connector onto the power terminals of my ESC. As you can see in the highlighted part of the picture below, my ESC has no connectors on it. My ESC is the QWinOut 2-4S 30A ESC, which can be found here: https://qwinout.com/products/qwinout-2-4s-30a-rc-brushless-esc-simonk-firmware-electric-speed-controller-with-5v-3a-bec-for-2-to-4s-lipo-battery-diy-multicopter-quadcopter This male connector will be joined with a female connector attached to my LiPo battery, which is a 1300mAh 2S 20C battery. I need this connector so that I can power my motor, a 1000KV motor. This motor came packaged with the ESC and is designed for that ESC. It can be found at: https://www.amazon.com.au/QWinOut-Brushless-Outrunner-Multi-Copter-Quadcopter/dp/B07CVDHQKS I had a great deal of difficulty soldering the terminals of my ESC onto the male connector. My fear is that improper soldering will mean that when I connect my ESC to the battery, the ESC will not draw its rated voltage. As a result, would this damage my LiPo battery? For extra information: my code and wiring will be identical to the one in this video: https://www.youtube.com/watch?v=uOQk8SJso6Q AI: Your concerns are in the right place, but the reasoning is incorrect. First things first: You don't draw voltage from a battery Current flows as a result of differential voltage. The ESC doesn't tell the battery how much voltage to put out, or even how much current. Current flows based on the load of the motor. Having a bad connection will cause the voltage at the ESC to be lower than the battery voltage, but only because the bad connection is acting as a resistor. It will get very hot, wasting energy as heat that could have been sent to the motor. Sometimes this can cause the solder to melt at the connection and come loose. This is then dangerous, as the loose cable can cause a short circuit. The battery won't be damaged The battery will not be damaged by a poor connection (unless you cause a short circuit). In fact, the current will be lower and your battery will be happy to chug along at the lower current. This doesn't mean you don't have to keep an eye on the battery voltage - if it is low, then you can damage the battery by over-discharging. Always keep the battery within 20-80% state of charge (you can go up to 100% before using it, but don't store it like that). Proper solder joints are critical Practice soldering, get a practice kit, or hopefully you have enough spare XT60s to practice on. Use flux and a flux-core solder, keep the tip wetted with solder all the time, and make sure you're getting enough heat that the solder flows nicely, without overheating the connector. Add solder at the joint, not on the iron. A 40W iron is a little small for XT60s, so hopefully you have a 60W or higher. Post pictures of your solder joints on Reddit (or another forum-like site), and ask what you can improve. You will always be able to improve something. Your setup seems weird A 1000KV motor (I'm guessing a Suppo 2212, based on the ESC that it came pre-packaged with) on a 2S LiPo would need to spin a large prop to be worthwhile. This is a discussion for another question entirely, but the aircraft you intend to put this on might be underpowered.
H: What kind of drone could be remote controlled over 4km of open water? Apparently Taiwan shot down a civilian drone near the Lion Islet off the coast of Xiamen (source). The islet is 4km away from the Chinese city of Xiamen. My question: what kind of drone would be able to fly 4km over the open ocean while being remote controlled from land? I would think that for a low-end commercial drone the range of the remote control is nowhere near 4km, but I have no experience with that. Essentially, is something like that within the capabilities of a commercially available hobby pilot drone, or does it require military-grade hardware? AI: Any purpose-built machine can make it out that far: https://rotorbuilds.com/build/29119 That is well within the range of hobby-grade analog and digital control and video systems. The DJI digital FPV system has a hard limit of 14km due to a timing anomaly; however, for long-rangers it is generally pretty reliable out to that point. HDZero has definitely made it out that far. Analog long-range FPV records are pushing 100km. These are fixed-wing machines, but multirotors can definitely make it that far in terms of battery life, and the radio equipment used is the same. My personal FPV distance record on a multirotor is 4.7km with analog 5.8GHz video and ELRS radio. Worth noting that navigation is a much more difficult problem than raw range when flying over open ocean. Magnetometers must be carefully calibrated so that a course can be maintained effectively.
H: ESC wiring to FC for wing - 2 or 3 pins? I'm new to wing builds and I'm a bit confused. I ordered the kit for the AR Wing Pro, which includes a BLDC motor and ESC. The ESC comes with the mandatory 2 power lines and 3 thick lines for the motor. It also provides the common 3-pin servo plug and an additional two-wire plug that looks like a low-voltage DC output to me. On the other side, my H743-WLITE flight controller manual notes that you should not connect all 3 pins of the servo connector, as that would also feed power back to the FC and could cause damage. Now I'm confused about what needs to be connected to get DSHOT150 with telemetry for INAV. Is the middle red wire power or TX? Would love to get some tips from more experienced pilots :) AI: I don't know that model and couldn't find any instructions for the ESC, but plugs like that are usually used for power. It's very unusual to have two sets of battery plugs on an ESC like this. I guess it's there in case you want to use it to power the VTX and camera. Note that the ESC may not have been designed specifically for this model, and there's some variation in the voltages used by various VTXs and cameras. One of the challenges of fixed-wing FPV is that you often don't need a power distribution board for the motor, so finding somewhere to power the FPV gear involves modifying the ESC, or soldering extra wires to the battery plug. You could use a voltmeter to see if it's 5v from the BEC or battery voltage. It could even be a 9 or 12v output. If you don't have a voltmeter, you could open the heat shrink on the ESC slightly to see if it's just connected to the battery wires. Just because it's there doesn't mean you have to use it, but it may make connecting the FPV gear easier.
H: Software recommendations to combine aerial images I am taking aerial images which look like this: As all of them are taken from approximately the same height and have GPS coordinates (but can overlap a little), I would like to combine all of them into one big image. This seems like a standard procedure, and I assume there are already available tools (preferably open source) which can do it, but I am not sure what the right keywords are to find them. Does anyone know a tool which can do the task (and what is the proper terminology for this task)? AI: A useful keyword is 'orthophoto', and more specifically in this case a mosaic. An effective open-source tool I have used for this is OpenDroneMap: https://www.opendronemap.org. I usually combine this with the (closed source but free) Pix4D Capture app for collecting the images: https://www.pix4d.com/product/pix4dcapture To specifically address your DJI tag, I have used both of the above with pictures from Mavic 2 Pros and Inspire 2s; you can also install Pix4D Capture onto a DJI CrystalSky: https://support.pix4d.com/hc/en-us/articles/360016627091-How-to-install-latest-compatible-APKs-on-DJI-CrystalSky-and-Yuneec-ST16-ST16S-PIX4Dcapture
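If you prefer to drive OpenDroneMap from Python, a minimal sketch with PyODM (pip install pyodm) against a local NodeODM instance looks like the following; the host/port and the option names are assumptions to check against the ODM docs for your version:

import glob
from pyodm import Node

# NodeODM started separately, e.g.: docker run -p 3000:3000 opendronemap/nodeodm
node = Node("localhost", 3000)
task = node.create_task(glob.glob("images/*.JPG"),
                        {"fast-orthophoto": True, "orthophoto-resolution": 5})
task.wait_for_completion()
task.download_assets("./odm_results")  # the mosaic ends up under odm_orthophoto/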
H: How to account for propeller (fixed pitch) in model aircraft performance calculations? I'm working on performance calculations for a plane I'm building. By setting the sum of my drag formulas (one for the wing, one for the fuselage, both with predicted drag coefficients) equal to the thrust at a given power setting, I'm able to come up with some predictions for speed. They seem unrealistically fast, given my experience with previous builds for which I did not do performance calculations. I figure that this is because of the prop, but I do not know how prop performance varies with airspeed. My questions are: As airspeed increases, does the propeller produce drag similar to a wing while producing the same amount of thrust, or does it produce less thrust? How do I take the prop into account in these calculations? I'm using a 1550 Kv motor; for calculations I am using 7.4v at full power with an APC 8x4 prop, which produces 468 grams/4.59 newtons at 8,620 rpm (rpm from the data sheet). AI: The easiest way to account for propeller thrust is just by using basic physics. With $P$ being the power supplied to the propeller, the propeller converts this power into a thrust $T$ moving the airplane at a speed $V$. In this process, some 20% of the power (at the design point) is lost to inefficiencies, which are incorporated in an efficiency factor $\eta$: $T=\eta \frac{P}{V}$ $\eta$ depends on the speed, with a typical trend visible in the following plot taken from McCormick B.W., Aerodynamics, Aeronautics & Flight Mechanics, John Wiley & Sons, Inc.: As the plot shows, $\eta$ depends on the blade pitch as measured at 75% of the span and on the advance ratio $J=\frac{V}{nD}$, where $n$ is the rotational speed and $D$ the diameter. The theory behind this plot, and therefore the plot itself, is valid for a more or less broad range around the peak: at $V=0$ it gives $\eta=0 \Rightarrow T=0$, which is not realistic, since the propeller provides thrust even at zero speed. For that part of the thrust (the so-called static thrust) you can refer to this answer. Note that this plot refers to "real size" propellers; I can't say if those values are also valid for an RC propeller, due to the lower Reynolds number, i.e. larger viscosity-related effects. The design point for a propeller is normally around its $\alpha$ for max $L/D$, and this latter value reduces a lot at lower Reynolds numbers. In any case, the propeller's manufacturer should provide that plot.
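As a quick numeric sketch of the $T=\eta P/V$ relation: the fixed $\eta$ and the shaft power below are placeholder assumptions (a real prop's efficiency varies with advance ratio, per the plot above), but it shows why predicted top speeds drop once the prop is accounted for:

eta = 0.8       # assumed efficiency near the design point
power_w = 80.0  # assumed shaft power for a small 2S setup

for v in (5, 10, 15, 20, 25):        # airspeed, m/s
    thrust_n = eta * power_w / v     # available thrust falls as speed rises
    print(f"{v:>2} m/s -> {thrust_n:.2f} N")

Top speed is roughly where this falling thrust curve crosses your rising drag curve.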
H: Can I use a reversible drone BLHeli ESC directly from a PWM receiver? If I wanted reversible brushless motors in a boat or robotics project, where I don't have a flight controller to handle DSHOT communications, could I use BLHeli ESCs? Do they still understand PWM signals? Do they need configuration? This is largely because I don't want the brake and reduced-speed reverse that car ESCs have. AI: I don't know about BLHeli, but you should be able to use BLHeli_S in "3D mode" with PWM. They do need configuration; you will have to set the motor direction to bi-directional.
H: Does mounting motors at different heights have any impact on performance? Multirotor motors are typically mounted at the same height. I'm wondering what I should expect if they are mounted at different heights from each other. Will the model work exactly the same way as if they were on the same level, or is there some significant impact on the model's dynamics? For better clarity, take into account the following arrangement cases for a quadcopter: CASE A Different height levels between the front motor and back motor mounts. So, a CW motor and a CCW motor with their propellers are on arms at height H1, and a CW and a CCW motor with their propellers are at another height H2. CASE B Different height levels between CW motors and CCW motors. So, both CW motors are on arms at height H1 and both CCW motors are at another height H2. AI: Designs with different-height motors are generally aiming to increase the separation of the airflow through the front motors and rear motors. As the quad flies forward, the rear motors enter air that has been disturbed by the front motors. Lifting the rear motors theoretically puts them in cleaner air, increasing efficiency. Motors are generally at the same height to simplify frame design, as the highest stresses are found between the motors. Attempting to pass these stresses between the top and bottom plate requires extra frame material, which generally outweighs any aerodynamic gains.
H: Could model plane propellers be used effectively for quadcopters? There are CW & CCW propellers made specifically for multirotors, and other propellers, such as the APC E style, distributed as propellers specific to planes. I'm wondering if such plane propellers, mounted on a quadcopter: a) work correctly; b) work correctly with some adaptation, mounting the CCW pair upside down; c) work correctly but with a noticeable loss of performance/efficiency compared to proper CW/CCW propellers; d) impact the performance significantly and shouldn't be used on a multirotor at all. Thanks in advance. EDIT To clarify why I mentioned turning a prop upside down: I was considering that the propeller's thin, blade-like edge cuts through the air when the propeller rotates on a clockwise motor, but on a motor rotating CCW the "dull" edge would lead instead, which is clearly inefficient; however, if the propeller is turned upside down on the CCW motor, the thin, blade-like edge would again be the one that cuts through the air. So it's unclear to me why the airflow design shouldn't be, if not equally efficient, at least better than a CW APC normally mounted on a CCW-rotating motor. AI: The multirotor APC propellers were introduced because APC had a very limited CCW range. They also tend to have lower pitches than their aircraft propellers. Apart from those two things, there's very little difference. If you can find a CW/CCW pair with the pitch and diameter that suit your multirotor, it should work just as well as the 'multirotor' propellers. Similarly, 'multirotor' propellers are handy for planes with two electric motors, allowing you to have contra-rotating props. Note that you can't turn a CW prop into a CCW prop by mounting it upside down. It will still blow in the same direction, just at about half the efficiency. Another minor difference is that 'multirotor' propellers don't include the set of shaft adapter rings. You need to order these separately (or reuse some of the spares you have from flying planes). I should add that APC 'multirotor' propellers are designed for slow, long-duration multirotors. They don't have propellers suitable for FPV race drones.
H: How much torque do I need for the servomotor which turns my canard? I don't know whether my question belongs here or in the physics community. I want to make a supersonic micro-jet model plane, powered by a model rocket engine, that reaches just under Mach 3, and it has a pair of canards for pitch control. THERE ARE TWO CANARDS, AND EACH CANARD HAS ITS OWN SERVOMOTOR, SO THERE ARE TWO SERVOMOTORS FOR THE TWO CANARDS (a servomotor for each canard). The maximum angle of attack of a canard is 15 degrees, and somehow the model plane can reach 561 m/s (about Mach 2). The problem is: how much torque do I need for rotating each canard? Is there any formula to calculate it? There is a picture of the canard and its dimensions: Root chord = 10 cm Tip chord = 8 cm Height = 6 cm Sweep angle = 18.4 degrees (Obviously the leading edge is the leg of the shape for which no dimension is given) The distance between the connection point of the servomotor shaft and the root chord from the leading edge of the canard is equal to the distance of the center of pressure from its leading edge. How much torque do I need for rotating each canard? Is there any formula to calculate it? How much torque is needed at 250 m/s (its normal speed)? AI: The standard equation for aerodynamic forces is a half, times the density of air, times the speed squared, times the area, times an aerodynamic coefficient. $$L=\frac{1}{2}\cdot\rho\cdot v^2\cdot A\cdot C_L$$ You usually see it like this, for calculating lift, but you can calculate drag or moment if you swap in the coefficient of drag or moment. The half seems to just be tradition. Possibly the equation was originally derived by integrating over speed. It's quite handy though, because it means that the coefficient rarely exceeds 1. You have to be careful with your units. Speed and density will use meters, so calculate your area in meters, not centimetres: 0.09 m × 0.06 m = 0.0054 m². ρ (the Greek letter rho) is the density of air, which at sea level is about 1. 250 m/s is extremely fast for a model, but squared it gives you 62,500. The coefficient varies depending on the aerofoil and angle of attack. There's more information here. You could use 1 as a worst case, or use the sample aerofoils in the chart at the end. I'll use 0.04. If you really think you'll reach transonic speeds, it could be higher. That gives us 0.5 * 0.0054 * 1 * 62,500 * 0.04 = 6.75 newton-meters. A typical servo (Futaba S3115) has 3 kg-cm of torque (often mislabelled as kg/cm), which is 0.3 Nm. You can get high-torque servos - this one has 16.2 kg-cm, or 1.6 Nm. OTOH, if you are 'only' flying at 250 mph (111 m/s), the torque will be 1.3 Nm. And you won't need 120 degrees of movement, so you could gain some mechanical advantage in the pushrod geometry.
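The answer's estimate, reproduced as a sketch (the 0.04 coefficient is the answer's assumption, and this is a rough sizing number, not a full hinge-moment analysis):

rho = 1.0            # air density near sea level, kg/m^3 (the approximation used above)
area = 0.09 * 0.06   # mean chord x span, m^2

def aero_load(v, coeff=0.04):
    # 1/2 * rho * v^2 * A * C, per the standard aerodynamic force equation
    return 0.5 * rho * v**2 * area * coeff

print(aero_load(250.0))  # ~6.75, the answer's 250 m/s case
print(aero_load(111.0))  # ~1.33, the 'only' 250 mph case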
H: Would the DLE 55RA be enough to power my (ducted) paramotor? I plan on using the DLE 55RA engine, which produces 5hp at 8,500 rpm. I am using it in a paramotor, and to save space I am considering using a ducted propeller similar to the Fulcrum power pod. The duct intake would have to be directly behind my back, so I plan to add around 6 inches of space to allow air intake. The engine would be mounted on my back, facing into the intake. Effectively, it would be a normal paramotor with a duct around the fan blades. My estimated duct wall clearance is 1 mm or less. I need to produce 15kg of thrust, so how would I design this fan? I considered using a heating duct, which is an 18-16 reducer. This would let me get some of the benefits of a thrust tube and would make it easy to have a front-mounted propeller. Would this propeller work? It is 18x7". TLDR: Would my combination of propeller, duct, and engine produce 30kg of thrust? Thank you! P.S. Aviation Stack Exchange sent me here. AI: Probably not. Our usual rule of thumb is that you need 50W per lb. So 5hp (3,750W) would only be good for 75lb. I'm no expert in paragliders, but a quick google indicates that 20-30hp is common. There are some online calculators like eCalc where you can try different motor speeds and propeller sizes to estimate the thrust, although they rely on test results for individual propellers and may well not have accurate data for anything this large. A smaller prop will be less efficient, and a duct will add weight and drag. A really well designed duct can recover some of the efficiency lost by reducing the prop size, but (in the RC world) it's common for ducted fans to use twice the power for the same performance as a prop.
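The 50 W-per-pound rule of thumb from the answer, as a one-off calculation (746 W per hp; the answer rounds to 750):

hp = 5
supported_lb = hp * 746 / 50
print(supported_lb)  # ~74.6 lb, in line with the answer's ~75 lb figure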
H: How to choose a flight controller for my DIY drone? I am assembling a drone. So far, I have settled on a 450mm frame. I'm using 8 to 12 21700 Li-Ion batteries. When I search for AIO flight controllers, I believe I have to search for "30A" and "450mm", but I'm not sure. Is there an app or online service for winnowing down these decisions? AI: You need a few things from a power system/flight controller. I don't think you should use an AIO, but that's a personal opinion. Cell count First things first - you say you're using "8 to 12" lithium cells. I imagine you won't be running these all in series, but it's necessary to point that out. Flight controllers and motor controllers have a maximum voltage they can support; 4S is common, and 6S is also quite common but slightly more expensive. 8S support is less common, and 12S support is in the X-class range and is prohibitively expensive for most people. I would start with either a 4S or 6S 21700 pack, and then consider adding a second one in parallel, but I digress. Find a flight controller whose voltage rating is at least as high as the number of cells you plan to use (it's okay if it's higher). Motor control/AIO The term AIO is well overused in this community to mean too many things. The way you've worded it, it seems you're looking for a flight controller with a built-in ESC. That's acceptable, but you may have better luck finding suitable components if you use a separate ESC and flight controller. You're including 30A in your search - this is good, but may be falling into a common misconception. You don't need a "30 amp" ESC, you need one that is rated for at least 30 amps (you could probably get away with 20). That means a 100A ESC is perfectly suitable for this application. Be careful not to limit your search by specifying a specific rating when you don't have to. Find an ESC (or AIO flight controller) that is rated for at least 25 amps, and for 6S if you plan on running a 6S battery. Frame size You're building a 450-size quadcopter. That's a great place to get started - however, the flight controller doesn't care. The frame you have will tell you (or you can measure) the mounting holes for a flight controller. They will likely be 45mm, 30mm (commonly 30.5), or 20mm apart, and may be any combination of these. This is the "stack size", the spacing between the bolt holes where the flight controller mounts. You need a flight controller that matches these holes - if your frame supports multiple sizes, that opens up your options. Find a flight controller with mounting holes that match those in your frame.
H: Error in the explanation of Ingenuity: The Mars Helicopter by NASA members? I was watching this video by Veritasium on YouTube, where he talks with NASA engineers about the Mars helicopter. At the attached timestamp they talk about how a completely overlapping rotor configuration is actually more efficient than non-overlapping rotors. This is in direct contradiction with this post. Which one of them is correct? AI: For the same thrust, a coaxial design is going to need some 15% more power than a single-rotor helicopter. This increase in power is due to the aerodynamic interference between the upper and lower rotors and the fact that the lower rotor works in the slipstream of the upper one. This decrease in efficiency with respect to a single-rotor design is partially offset by the fact that in a contra-rotating design the tail rotor is not needed anymore: a tail rotor normally uses some 10 to 15% of the total power. Plus, a contra-rotating design is more compact, since each rotor has to provide only half of the thrust and the radius can be reduced accordingly. So, both are right, depending on how one defines efficiency: for Ingenuity, compactness is for sure the way to define efficiency.
H: Can I put LiPo and Li-ion batteries in series? I have two 3.7 volt 900 mAh LiPo batteries, which I connected in series to provide 7.4 volts. But now I want 11.1 volts. Instead of buying another LiPo battery, am I able to put these two LiPo batteries and a 3.7 volt 900 mAh Li-ion in series to achieve 11.1 volts? Is it safe to power a 1000KV brushless motor, 30A ESC, and an Arduino Nano with this connection? AI: What I think you needed to ask is "are lithium-polymer and lithium-ion battery cells of the same capacity compatible in a single pack?" The answer to that is that those are two different names for the same battery chemistry. As long as the discharge current limits match (as well as cell voltage and capacity), you should be good to go within the current limits of the cells. Where there's a potential for problems is that the two older cells may have some number of charge-discharge cycles already elapsed, while the new cell won't, so the discharge curves may no longer match (assuming they did when all the cells were new). So, while this should be theoretically okay (assuming all the cells can source your 30 A requirement without damage), I'd probably be more inclined to use all new cells for the 3S pack and save the 2S pack for its original application. By sourcing the new cells all together, you can ensure that all their specs match, including their life-cycle condition. Lithium cells are relatively cheap compared to an entire drone stack, and compared to the potential liability if you have an in-flight battery fire and wind up igniting something on the ground.
H: How can I calculate the weight needed in the nose? I have built this plane: But the nose cone is too light. I have tried adding different objects in the front but each thing was either too light or too heavy. How can I calculate the weight needed in the front? Wingspan: 30cm (front wings) Wingspan: 18cm (back wings) Length of plane: 30 cm Weight of plane: 12 grams AI: This aircraft is "close coupled" meaning the tail is very close to the wing. Lengthening the fuselage will make it easier to balance. This can be done both fore and aft of the main wing. Then add weight to the nose as needed. Secondly, the tail can be flat. Let the wing do the lifting. Airflow interactions between a fore and aft wing can affect stability. Finally, have the tail pitched down a bit to help raise the wing angle of attack in flight. You can do this with "elevators" or with the whole tail. The leading edge of the tail should be lower than its trailing edge, providing some "down force" in flight. A little experimentation may be needed to get it right. Use balsa wood to save weight.
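For the actual calculation the question asks about, a simple moment balance works: added mass = (current weight) x (distance the CG must move) / (distance from the target CG to where the weight goes). A sketch with placeholder measurements (the CG positions below are example assumptions, not measured values for this plane):

plane_g = 12.0       # current all-up weight, g
cg_now_cm = 14.0     # measured CG, cm from the nose (assumed)
cg_target_cm = 11.0  # desired CG, e.g. ~25-30% of the wing chord (assumed)
nose_cm = 0.0        # the added weight sits at the nose tip

added_g = plane_g * (cg_now_cm - cg_target_cm) / (cg_target_cm - nose_cm)
print(f"add ~{added_g:.1f} g at the nose")  # ~3.3 g for these numbers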
H: What type of charts are on the Remote Pilot exam? What type of aeronautical charts are on the remote pilot certificate exam? Are they VFR charts? AI: If you are referring to the USA FAA Part 107 exam, then yes, VFR charts. You can download them here: https://www.faa.gov/air_traffic/flight_info/aeronav/digital_products/vfr/
H: Can you still fly under the exemption for recreational flyers after obtaining a remote pilot certificate? I have two drones. One has a camera, and I might use it commercially at some point. It could carry a Remote ID transmitter. The other does not have a camera, and has no use other than recreational flight. It's way too small to carry any "cargo." If I get a remote pilot certificate to be able to fly under Part 107, can I still fly the unregistered drone under the exemption for recreational flyers after Remote ID rules take effect? AI: Yes. If you hold an FAA Part 107 license, and you are not flying in a manner that requires a 107 license, you are free to fly as a recreational pilot. See here: https://www.faa.gov/uas/recreational_flyers
H: Do FAA regulations apply indoors? If I fly my drone around inside my house, do FAA regulations apply? (Ignore the fact that they will almost certainly not care.) AI: No, FAA Rules do not include indoor spaces. See this link for clarity: https://www.faa.gov/faq?combine&field_faq_category_target_id=1491&page=1
H: Does VLOS under Part 107 allow for glasses? I'm studying for the Part 107 exam, and one of the sample questions is "What vision aids can a Remote PIC or a Visual Observer use to help them maintain Visual Line-Of-Sight (VLOS) while flying an sUAS?" The answer is "None. VLOS must be maintained with no visual aids." Is this true? At significant distances, all I would see of a small drone without glasses would be a fuzzy dot in the sky. I could see it clearly with glasses, though. Is that enough for Part 107? AI: Yes, per the citation below. (Emphasis mine) § 107.31 Visual line of sight aircraft operation. (a) With vision that is unaided by any device other than corrective lenses, the remote pilot in command...
H: How do I store a Li-ion pouch battery without any discharger or balancer? I have a single-cell 3.7V 900 mAh Li-ion pouch battery and a 7.4V 30C 2S 850mAh Li-ion pouch battery. I want to store these two batteries (the manufacturer suggests I should store the 2S pack at 3.8V per cell). The problem is that I do not have a discharger or any equipment that can bring down the voltage of the batteries so I can safely store them. I have a few questions regarding this... Can I use up the remaining charge (plugging the battery into a random appliance) until the multimeter reads 3.8V per cell for the 2S pack? When I check the voltage of each cell in the 2S pack, the first cell reads roughly 4 volts, and the second cell reads around 8 volts. Why are the cells showing two different voltages? At what voltage should I store the single cell? Thanks. AI: I will answer your questions following the text you have posted (in italics), adding my answer after each one. I assume that your 1S battery is a typical LiPo cell and that your 2S battery has the typical wiring diagram shown below (the cable colors may differ, but the ones shown in the picture are the most common): Answering your questions: 1. *Can I use up the remaining charge (plugging the battery into a random appliance) until the multimeter reads 3.8V per cell for the 2S pack?* You may discharge each cell by connecting, for instance, a resistor, until each cell reaches 3.8V (measuring the voltage between the blue and red wires for the first cell, and between the black and blue wires for the second cell). That means the 2S pack will reach 3.8+3.8 = 7.6 volts (measuring the voltage between the red and black wires as shown in the picture). Do not try to discharge any cell below the minimum voltage shown in the battery data sheet - usually never below 3V per cell, which means a minimum of 6V for a 2S pack. If you discharge a cell further, be aware that the battery may get damaged and in extreme cases catch fire or explode. 2. *When I check the voltage of each cell in the 2S pack, the first cell reads roughly 4 volts, and the second cell reads around 8 volts. Why are the cells showing two different voltages?* If you get those readings, it probably means that in the first case you are measuring one cell, reading about 4V (for instance between the black and blue wires), and in the second case you are measuring two cells in series (for instance between the black and red wires), reading about 8V. Measuring each cell individually, you should read about 4V per cell. 3. *At what voltage should I store the single cell?* Usually 3.8V is a good figure, but please refer to the data sheet of your battery. Please be careful performing these measurements; treat, store, and use LiPo batteries properly, reading the product data sheet and acting according to the manufacturer's recommendations. Enjoy and be safe.
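To size a discharge resistor for point 1, a rough sketch (constant-current approximation; the resistor value and the charge to remove below are example assumptions, and you should watch the cell voltage with a multimeter the whole time and stop at 3.8V):

v_cell = 4.0    # measured cell voltage, V
r_ohms = 10.0   # load resistor; it must dissipate ~V^2/R, here about 1.6 W
i_a = v_cell / r_ohms          # ~0.4 A discharge current
mah_to_remove = 200            # example: guessing ~200 mAh above storage charge
hours = (mah_to_remove / 1000.0) / i_a
print(f"~{i_a:.2f} A, roughly {hours * 60:.0f} minutes")  # ~30 minutes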
H: Is my FC for regular RC planes or for drones with multiple props? Recently I decided to build a drone. While learning everything, I picked an FC, but while researching it I'm not sure I picked the right one. https://www.alibaba.com/product-detail/2023-New-20X20mm-MATEK-H743-WLITE_1600761573416.html?spm=a2700.galleryofferlist.normal_offer.d_title.4f695995q9B3Ya Can I use this FC for a quadcopter? What are the differences between plane and drone FCs? AI: This controller is just for planes. In the schematic wiring picture on the manufacturer's website you can see the wiring to the motors: there are just two S pins for motor control. http://www.mateksys.com/?portfolio=h743-wlite#tab-id-4 For a quadcopter you need 4 S pins, to control each motor separately.
H: Do all BLHeli_32 ESCs support bi-directional rotation? Hello, I'm new to drone stuff; I'm just wondering if every BLHeli_32 ESC is compatible with bidirectional rotation. Since the BLHeli_32 software supports bidirectional rotation, is it safe to assume that all BLHeli_32 boards can do it? I was planning to use an ESC as a motor driver and an Arduino as the controller. I'm getting this ESC from HAKRC, although it is not advertised as bidirectional; I have not seen a 4-in-1 drone ESC that advertised its rotation (I assumed this is common knowledge in the drone community?), so I was hoping it probably can. AI: Reverse rotation for 'turtle mode' or 'flip after crash' has been around for a good few years, so I'd guess that anything you buy these days will support it.
H: Difficulty with purchasing drone frames Recently I purchased the following five-inch frame to house my Racerstar BR2212 2200KV brushless motors: The arms of this frame contain slits that are too small for the screws used on the base of my motor. I was under the assumption that drone frames had standards for their sizes, e.g. the mounting holes of 5-inch frames would always be X mm in diameter and evenly spaced along a Y mm PCD. What makes my issue even more bewildering is that the holes on the base of my motor perfectly align with the slits. Were the slits 1mm wider, my motor would have been able to fit on this frame. My questions are: a) What is the method for attaching motors onto the slits of those frames? I cannot find anything on YouTube or anywhere on the internet. b) Is there a standard for the mounting holes on drone frames? I need to know so that I do not purchase another incompatible frame. AI: There is no 'standard'. Motors are available in a wide range of sizes for different applications. You wouldn't expect to fit a tinywhoop motor or a 10hp giant-scale motor on this frame. Most motors at about this size use M3 bolts. However, some slightly smaller ones (such as the 1608 motors that used to be common on 250-size quads) use M2. It looks like your frame has both 2mm and 3mm slots, giving you the option of smaller motors. 2212 motors are quite big for a 5" prop / 250mm diagonal frame. 2207 or 2306 is common now; 2204 or 1608 used to be common. So it looks like you're putting large motors on an old frame. There are a variety of common bolt spacings, and most motors and frames have two slightly different bolt spacings. This increases the chance of being able to use at least 2 bolts, which is adequate for most uses. 4 bolts is better than 2, so I'd drill out the frame to accept M3 bolts. If you have a metal 'X' mount for the motor, you could use that as a guide. You could also use needle files to enlarge the holes. Either way, avoid breathing the carbon dust.
H: Should I use a tandem wing for my RC aircraft? I am designing an RC airplane. My goal weight is under 10 lbs, so I'm designing parameters based on a 10 lb weight to allow for some overshoot. I also would like the wingspan to be around 4.5 ft for convenience. I want it to be a high-endurance flyer, so I'm using a wing loading of about 20-25 oz/ft². The issue is that based on these parameters I end up needing a huge chord length (assuming a straight wing) and always have an aspect ratio of under 5, which seems way too low for an endurance plane. I was trying to explore ways to increase the aspect ratio without increasing the wingspan or wing loading, and I found some interesting tandem-wing airplanes. However, it seems that in terms of efficiency tandem wings have proven to be inferior. Does anyone have any suggestions on how I could design around these parameters? Or do I need to either increase wingspan or decrease weight? Thank you. AI: An "endurance plane" is best built like a glider: nice long, high-aspect-ratio wings. For "convenience", do what the Navy does and use foldable or removable wingtips for storage. Decreasing weight always helps, but G-loading parameters and overall strength must also be considered. From experience, a 5 foot × 8 inch wing with an Enya 40 (around 1 hp) turning a 10 × 6 prop weighs around 5 lbs fuelled: Wing area: 5 ft × 0.67 ft = 3.35 ft$^2$ 5 lbs ÷ 3.35 ft$^2$ × 16 oz/lbs = 24 oz/ft$^2$ So: 10 lbs × 16 oz/lbs × ft$^2$/24 oz = 6.7 ft$^2$ 6.7 ft$^2$ × 144 inch$^2$/ft$^2$ = 965 inch$^2$ Essentially, an 8 foot × 10 inch wing (aspect ratio around 10) would be a good place to start. 10 lbs with engine and fuel included is reasonable. The glider-like Reaper comes in at around 10 lbs of weight per horsepower (1 HP = 0.746 kW). For your endurance aircraft, 1 kW may be sufficient for cruising, and you may be able to lift a bit more weight given a sufficient runway.
H: To which device should the FPV camera be connected: flight controller or video transmitter? I have a Matek F405-WMN FC and a TBS UNIFY PRO32 HV 5.8GHz transmitter. The F405 wiring diagram (http://www.mateksys.com/?portfolio=f405-wmn#tab-id-4) shows that two cameras can be connected to it. But I also see camera pins on the transmitter. So, to which device should I connect the camera (CADDX ANT 1200TVL)? Also, I see on the wiring diagram how to connect the control RX module (I'm using a Crossfire Nano RX), but I don't see the video TX module there. Should I use any free appropriate pins? AI: Originally the camera's video output was sent directly to the VTX. However, it is now common for it to be connected to the flight controller, which adds the on-screen-display (OSD) data, and then to the VTX. The flight controller has two pads for video-in (C1 and C2) and a video-out pad for the VTX (labelled VTX). Note that the camera and VTX may not use the same voltage. You should look up the voltage they require, and connect to the relevant pad (either 5v or VBat). The 0v wires will all be common, so any 'ground' pad will work. The Tx/Rx pins on the VTX are for 'SmartAudio', which has nothing to do with audio, but allows the flight controller to communicate digitally with the VTX. That allows you to change VTX settings, such as the channel. Connect these to any spare UART Tx/Rx pads on the flight controller. Note that the Tx output on one device connects to the Rx input on the other. This is entirely optional; the VTX will still send video without it.
H: How to connect TBS Crossfire Nano RX to Matek F405-WMN? How do I connect a Crossfire Nano RX to the Matek F405-WMN FC? I don't see the command receiver connection on the wiring diagram: http://www.mateksys.com/?portfolio=f405-wmn#tab-id-4 Is it possible to connect two receivers (2.4GHz and 868/915MHz) to one FC and switch between them in flight? AI: This is the section of the layout diagram with the four white boxes, in the bottom right. The white boxes represent common receiver types. The top white box, labelled CRSF, is for Crossfire receivers. The section of the flight controller marked by a yellow line indicates the pads used by receivers. The reason for this is that some receivers don't include telemetry, so they don't use the TX2 pad, and SBUS receivers send an inverted signal that needs to be handled slightly differently. No, you can't connect multiple receivers. Instead, many receivers have two antennas and automatically use the one with the best signal.
H: How to plan Mavic Mini flight using an iOS/Android app? I want to create a flight plan (automate a flight) for my Mavic Mini 1 drone using a phone app. I want to move my drone along both the x and y axes. So, it should gain some height, then move ahead, then get low again, with some forward/backward movement. This is my requirement. I tried to find a solution but found no simple, straight answer. (iOS apps are expensive, but if any app confirms that I can also lower and raise the height, basically the y axis, then I am good to buy that app.) AI: It looks like Litchi is compatible with the Mavic Mini 1. With Litchi you can set waypoints to which your drone will move, and you can set altitudes, drone angles, camera angles, etc., then capture images or video. Litchi is a low-cost ($24.99) software solution for both iOS and Android.
H: Should I physically solder the components to the flight controller, or should I solder quick-disassembly connectors? How does the community do it? Should I solder the components directly to the flight controller, or should I solder connectors so I can disassemble the model? Which types of connectors can I use? I plan to use XT60 for the motor ESCs (it's a two-motor plane) and XT90 for the battery connection, but what connectors should be used for other devices like cameras, transmitters, etc.? AI: There isn't a perfect answer for this, although XT60 and XT90 connectors are very popular for batteries. 3.5mm bullet connectors are also common for motors in fixed wings. Soldering is great if you're short of space (like on a racing quad), but it's not always practical to solder everything, especially in a confined equipment bay in a foam fixed wing (where you could melt the foam). Additionally, soldering can lead to the wire breaking due to fatigue, because the solder creates a hard spot. This isn't an issue for the big power wires, but it can be for small wires, particularly on large models where they could move around a lot. In theory a crimped connector has more strain relief, especially where one plug contains many wires. Where these are factory-made, they're usually neater than soldering, and are often necessary to squeeze enough connections onto a board. If the equipment comes with them, I'd use them. The 3-pin servo plug is a widely used connector for low-power devices in models. It's a bit too big for a racing quadcopter (and likely to work loose after multiple crashes), but it allows you to feed wires through holes in foam mouldings. They're also robust enough to handle being re-connected regularly, if you have to remove your wings for transport (note that many of the smaller connectors won't stand up to that). Another popular connector is the 6-pin Multiplex MPX plug. This is commonly used to replace up to four ordinary servo plugs on models with removable wings. One plug instead of 4 saves time and confusion.
H: Smooth landing with Betaflight?

With Betaflight I often find myself crashing the drone while landing. It gets better with practice, but in general there seems to be no throttle command for a constant-speed descent. Any throttle value below stable hovering means the craft will go down with ever-increasing speed. With some luck I manage to reapply throttle just before the craft hits the ground, reducing the touchdown speed, but it's quite hard for a novice pilot to do it right.

This is in contrast to most commercial drones, which seem to limit the speed in general, and especially near the ground. Most actually have a landing button, but simply descending until the drone touches the ground works too.

I have a barometer on my FC, so Betaflight must be well aware of the vertical speed. Is there a configuration which makes landings more novice-friendly (by limiting the speed or otherwise)? Are there alternatives (e.g. iNav, Cleanflight, etc.) which are more noob-friendly?

AI: iNav has altitude hold (or rather, the stick controls the rate of ascent/descent) like a photography drone.

Betaflight does not have this mode, and it is unlikely to get it. Alt hold does not align with the philosophy of the project as a whole: Betaflight is for acro flying. Cleanflight has a GPS mode, which presumably controls altitude, but it's marked as a 'WORK-IN-PROGRESS. This mode is not reliable yet'.

You're right that throttle position gives you vertical acceleration, not speed. Effectively it's always in 'rate' mode. Like an RC helicopter, you have to constantly juggle the stick to maintain a steady hover. Converting your transmitter to remove the notches on the throttle can help, as they're never quite in the right place. I also found that pinching the stick gave me finer control.

Race drones are pretty tough, and also tend to over-react when they touch down, so I usually disarm when a few inches above the ground. Some people stick half-inch thick foam pads to the underside.
H: Using a raspberry pi 4 for motor control

I am fairly new to electronics and am trying to build a drone from scratch as a hobby project. I want a setup where I can control all the motors individually through software. Through some research I found that ESCs are the way to go for BLDC motors, and I am planning on getting this one. I also came across this post, which is almost identical to the thing that I am trying to do, except that I am using a Raspberry Pi. After multiple failed purchases due to not knowing what would work with my setup, I am reluctant to buy this and want to know how to connect it before the purchase. This is the diagram of the ESC:

My biggest worry is that the Raspberry Pi's pinout only has 2 PWM (pulse width modulation) pins (32, 33). How would I connect these 2 to have access to the speeds of all the motors individually?

AI: I work with the Raspberry Pi a lot, so I will keep it short. There are two big problems:

1. The Pi is slow, or rather far too non-deterministic: a general-purpose OS is a poor fit for the drone's real-time feedback loop.

2. Where will the thousands of lines of flight-control C code come from? The software in use is very complex, and we have not seen any truly new FC firmware in 10+ years (correct me if I'm wrong). Betaflight is the same code base as Baseflight (2010 era), which was itself forked from another project, and KISS/FETtec has been around for 10+ years as well. Hardly anyone writes new firmware from scratch, although many have tried.

Finally, the LINGYUN ESC should be skipped. If you are actually going to dedicate all this time to a total scratch build, at least get good parts, things that you can still replace in a month or a year.

The differences between Arduinos are large, but many are fast little units, and similar microcontrollers serve as the base for all flight controllers. The Pi doesn't have the dedicated C/assembly real-time potential you need. Your code will compile and run, but I don't think the performance will be any good.
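That said, if you want to experiment with driving ESCs from the Pi for bench testing (not as a flight controller), you are not limited to the two hardware PWM pins: the pigpio library can generate servo-style pulses on any GPIO using DMA-based timing. Below is a minimal sketch, assuming pigpio is installed, the pigpiod daemon is running, and the four ESC signal wires happen to be on the GPIOs listed (those pin numbers are just an example):

import time
import pigpio

ESC_PINS = [17, 18, 27, 22]   # example GPIO numbers, one per ESC signal wire
pi = pigpio.pi()              # connect to the pigpiod daemon

# Standard hobby ESCs expect pulses of roughly 1000 us (stop) to 2000 us (full throttle)
for pin in ESC_PINS:
    pi.set_servo_pulsewidth(pin, 1000)   # arm all ESCs at minimum throttle
time.sleep(2)

pi.set_servo_pulsewidth(ESC_PINS[0], 1200)   # spin motor 0 slowly, independently of the others
time.sleep(1)

for pin in ESC_PINS:
    pi.set_servo_pulsewidth(pin, 0)      # 0 = stop sending pulses
pi.stop()

This gives you individual control of each motor from one Pi, but note that it does nothing about the feedback-loop problem described above.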
H: QGIS KML to DJI Pilot 2 for Flight Route

I am unable to import KML polygon layers I generated in QGIS into DJI Pilot 2 on my DJI remote. I get the following error:

Attempt to invoke interface method java.util.iterator java.util.List.iterator()

AI: Based on this post, this solution works great:

<?xml version="1.0" encoding="UTF-8"?>
<kml xmlns="http://www.opengis.net/kml/2.2">
  <Document>
    <Style id="transBluePoly">
      <LineStyle>
        <width>1.5</width>
      </LineStyle>
      <PolyStyle>
        <color>7dff0000</color>
      </PolyStyle>
    </Style>
    <Placemark>
      <name>QGISexport</name>
      <styleUrl>#transBluePoly</styleUrl>
      <Polygon>
        <extrude>1</extrude>
        <altitudeMode>relativeToGround</altitudeMode>
        <outerBoundaryIs>
          <LinearRing>
            <coordinates>
              Copy here and remove this text
            </coordinates>
          </LinearRing>
        </outerBoundaryIs>
      </Polygon>
    </Placemark>
  </Document>
</kml>

To automate this process for regular workflows, I wrote this script which makes the process super fast for many KML files.
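For illustration only (this is not the script linked above, just a hedged sketch of the same idea): a small helper can pull the coordinate string out of a QGIS export and splice it into the working template. It assumes the QGIS KML uses the standard KML namespace and that the first coordinates element is the one you want; the file names are placeholders.

import xml.etree.ElementTree as ET

NS = {"kml": "http://www.opengis.net/kml/2.2"}

def extract_coordinates(qgis_kml_path):
    """Pull the first coordinate string out of a QGIS-exported KML."""
    root = ET.parse(qgis_kml_path).getroot()
    return root.find(".//kml:coordinates", NS).text.strip()

def build_pilot_kml(template_path, qgis_kml_path, out_path):
    """Splice the coordinates into the template shown above."""
    template = open(template_path).read()
    kml = template.replace("Copy here and remove this text",
                           extract_coordinates(qgis_kml_path))
    open(out_path, "w").write(kml)

build_pilot_kml("template.kml", "qgis_export.kml", "pilot_ready.kml")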
H: iOS DJI Pilot App no option to connect to device

I just got a DJI Mavic 3 Enterprise Multispectral (M3M) for work and am learning how to use it. I downloaded all of the DJI apps from the iPhone App Store, and the only one that is supposedly compatible with my drone is the "DJI Pilot" app. (I notice that there is no "DJI Pilot 2" app on the App Store...) However, I see absolutely no option to connect my device to the app and cannot figure out how to do so. Has anyone else had this problem, where they cannot even find an option to connect to their drone through the DJI Pilot iOS app?

AI: Evidently there is no phone app for the Mavic Enterprise models at this time. Here is DJI's response:

"Dear Customer, Thank you for contacting DJI Technical Support. On the App Store, there is not an APP for the M3M, because the M3M is using the pilot 2 APP which is built into your RC. Here is the only software which is mentioned on our download center for the computer: DJI Assistant 2 for MG"

Here is a link they provided listing the specific software available for the product.
H: How do I make 2 Cetus X drones use different frequencies, so I can fly them at the same time?

My friend and I both have a Cetus X drone. We want to fly them at the same time; however, both the video and the controller signal interfere. As a controller I use the BetaFPV LiteRadio that came with the Cetus X starter kit. The goggles can automatically search for the right frequency, so I only need to change the sending frequency of the drone. I downloaded the Betaflight Configurator, however I am kind of overwhelmed with all the settings. Also I couldn't figure out how to edit the controller at all. Can someone please help me figure this out? Sorry if this is kind of a basic question, I am very new to the drone community.

AI: The BetaFPV RC transmitter should already use a modern frequency-hopping protocol (much like a phone) that allows multiple radios to be used without interfering with each other. One of the advantages of broadcasting in the GHz range is that they can send hundreds of data packets every second. Each one is sent at a different frequency, in a pseudo-random pattern that the receiver can follow. There are about 100 different frequencies in the band, so one packet in 100 may conflict with the second transmitter, but the receiver just ignores it and you don't notice the slight delay.

The FPV video is more old-fashioned. It uses a single frequency, and there's only room for 6-8 channels in the whole 5.8GHz band because they're much higher-bandwidth signals. (Note that different manufacturers have split it in different ways, but they're all using the same band.) You have to pick a frequency manually.

There are two ways to do this. Some (older) video transmitters have a button to step through the frequencies and LEDs to indicate what channel they're using. Most VTXs these days can be configured by the flight controller, through a menu in the on-screen display.

When you first power up, the OSD displays instructions for getting into the menu for a few seconds:

Throttle center, Yaw left, Pitch forward

Once you have done this, you will see the menu open up on the screen. You can now navigate the menu using the right stick (pitch/roll) to enter, exit, and scroll items. With the OSD menu open, navigate to Features > VTX SA. This will open all of the SmartAudio settings. Within this menu, you can now set the video band, channel, and power option to your preferred setting. To save your new settings, use the Set item near the bottom and navigate back to the main menu.

There are some pictures here: https://www.kiwiquads.co.nz/how-to-change-your-vtx-settings-in-betaflight-osd/
H: MINI F7 (HD +OSD +VTX SWITCH)

How does the switch in the MINI F7 (HD +OSD +VTX SWITCH) flight controller work? I know that you have to connect the pads, but then how can you change, for example, from a DJI system to an SBUS input? Thanks!

AI: I think 'VTX switch' means that you can switch the power to the VTX on and off, with the Betaflight 'pit mode'. That could be useful if your VTX overheats when it's not flying. 'Pit mode' normally refers to an option in the VTX where it runs at very low power to avoid interfering with other pilots, but not all VTXs support it, so I guess cutting the power is a way for the FC to offer a similar feature.

There only appears to be one video-in pad, so I don't think it can be used to switch between cameras. SBUS is a receiver protocol, not a video signal.
H: Why is Random Forest feature importance biased towards high cardinality features?

I understand how a random forest algorithm works, but could someone tell me the rationale behind Random Forest feature importance being biased towards high cardinality features?

AI: Random forest (like almost any other algorithm) is prone to selecting variables that can form a near one-to-one relationship with the $Y$ variable. Why? Because such variables lead it to overfit. If a variable has high cardinality, it forms many little groups (in the leaf nodes), and your model ends up "learning" the individuals rather than generalizing over them. The higher the cardinality of the variable, the more overfitted the model.

For example, if you use a social security number as a variable (the highest cardinality possible), this variable will almost surely receive the biggest feature importance, because in the leaf nodes you will find every individual with his social security number and his output. But the generalization capacity of such a model is zero. In almost every data science task you have to make sure this doesn't happen; see the demonstration sketched below.
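Here is a minimal, hedged sketch of the effect with scikit-learn: the data is synthetic, the "ID" column carries no real signal, yet an unconstrained forest still assigns it a large share of the importance because it can memorize the noisy labels (exact numbers vary with the seed and tree depth):

import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)
n = 1000
X_informative = rng.normal(size=(n, 2))              # two genuinely useful features
noise = rng.normal(scale=2, size=n)                  # label noise the forest will try to memorize
y = (X_informative[:, 0] + X_informative[:, 1] + noise > 0).astype(int)
id_feature = rng.permutation(n).reshape(-1, 1)       # unique per row, like a social security number
X = np.hstack([X_informative, id_feature])

rf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X, y)
print(rf.feature_importances_)   # the high-cardinality ID column gets substantial importance
                                 # despite carrying no generalizable signal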
H: Applying Hold-out and CV technique

I have a methodology question: are hold-out and CV generalization-optimization techniques mutually exclusive? It gets really confusing to me at times, because in the most recent project I have been doing something as follows:

I split the dataset into TRAIN and TEST sets (with stratified distribution).
I applied gridsearchCV on the TRAIN set with cv = 10, so effectively the model was splitting the TRAIN set into TRAIN' and VALIDATION sets at each fold.
I used the optimized trained model on the TEST set, and ended up with results of 0.97 (TRAIN set) vs 0.68 (TEST set).

Normally, when I see results like that, my immediate reaction would be to assume that I am overfitting to the TRAIN set and that the model does not generalize well enough. However, I have already used CV on the TRAIN set to make sure I was tuning the model to generalize as well as possible. I do not have any immediate tools (nor should I touch the already-trained model) to improve this score now, since as I understand the hold-out technique, it just gives me the final score of how my model performs on previously unseen data. At the same time, it feels kind of strange, since when using both CV and hold-out, I am guaranteed to get some outliers in the hold-out set (TEST in my text above), which will fail. Any suggestions or ideas?

AI: You are correct in the sense that when tuning your model via grid search you are technically not leaking any data. But recall that tuning your model (via a specific procedure such as grid search) is only one of many steps you probably took in fitting your model pipeline, alongside pre-processing, feature engineering, imputation, model tuning, data aggregations, etc. The point of the test set is to capture the entire model building process, not just the process of model tuning.

Furthermore, it is well known that validation scores reported during model tuning tend to be optimistically biased (and this bias tends to be worse with smaller datasets). This is because the probability of finding a set of hyperparameters that coincidentally minimizes the error for the validation set but not for the overall population (i.e., overfitting to the validation set) becomes higher the finer your grid is. Imagine theoretically tuning your model over one million different hyperparameter combinations. The probability of selecting a bogus set of hyperparameters (optimal only for the validation set) is now quite large due to the sheer number of candidates you have elected to try. A test set still won't prevent overfitting to a validation set, but it will allow you to detect the problem and give you a true, unbiased measure of model performance.

"At the same time, it feels kind of strange, since when using both CV and hold-out, I am guaranteed to get some outliers in the hold-out set (TEST in my text above), which will fail."

Indeed, and this is a major drawback of cross validation and data splitting, especially for smaller datasets (with a lot of outliers/noise). Basically, the performance measure you observe tends to be highly variable with how you split the data in the first place (that is, the seed you choose when splitting your data can lead to large changes in estimated model performance, depending on where your outliers fall). The solution is unfortunately not very glamorous, and time consuming: in order to gain more certainty in our estimate, we need more than just a single estimate of model performance.
Thus, simply repeat the entire model building process again with a different data partition (a different test set). Repeat this however many times, and average over all repeats. Possibly, form a confidence interval that allows you to see for yourself how variable your model's performance is. I will also note that there are other ways to get around this problem, such as "optimism adjusted bootstrap", but recent issues have arisen with this method which have potentially shown that for high dimensional data this method does not do well, despite being more efficient than cross validation. Since high dimensional data is the norm these days, I have my doubts but perhaps it may be of use to you.
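A hedged sketch of what "repeat the entire process" looks like in scikit-learn (the dataset and model are placeholders; the essential part is that the split, the tuning, and the test evaluation are all redone per repeat):

import numpy as np
from sklearn.datasets import load_breast_cancer
from sklearn.model_selection import train_test_split, GridSearchCV
from sklearn.svm import SVC

X, y = load_breast_cancer(return_X_y=True)

scores = []
for seed in range(20):   # 20 different TRAIN/TEST partitions
    X_tr, X_te, y_tr, y_te = train_test_split(
        X, y, test_size=0.2, stratify=y, random_state=seed)
    grid = GridSearchCV(SVC(), {"C": [0.1, 1, 10]}, cv=10)
    grid.fit(X_tr, y_tr)                    # tuning happens inside the TRAIN set only
    scores.append(grid.score(X_te, y_te))   # unbiased score for this particular split

print(f"test accuracy: {np.mean(scores):.3f} +/- {np.std(scores):.3f}")

The spread of the scores shows how much a single 0.97 vs 0.68 comparison depends on the particular split you happened to draw.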
H: Clustering, Mixed Data Set with Ordinal and Nominal Scale Data

After reading a bit about how categorical data can be considered in clustering, I came to the conclusion that most posts do not make a distinction between nominal scale data, e.g. colour: red, green, blue, and ordinal scale data, e.g. size: small, medium, large. However, the distances between the items make sense on the ordinal scale, even if they are not necessarily the same between all items. My questions:

Can I simply convert ordinal scale data to a numeric scale without causing much trouble in clustering? I think yes, for the above reasons, but I would be pleased if you could confirm.

For nominal scale data, where the distances between the items make no sense, this would be harder to capture. The easiest way I have found, if there are not too many items on the scale, was to break down the scale and add a variable for each item. E.g. originally we have colour: red, green, blue, and we make variables colour_red, colour_green, and colour_blue, where each of them can take the value 0 or 1. See the post from Jordan A on K-Means clustering for mixed numeric and categorical data. It seems to me a valid numeric scale of type ratio, as it has a non-arbitrary zero value expressing the complete absence of something and 1 expressing the presence of something. Do you have experience with this in clustering? Is this a valid approach?

Otherwise I know I should use e.g. kproto (Kproto) for mixed data sets or kmodes (Kmodes) for plain, nominal data sets. Thank you for your responses.

AI: Yes, it causes almost no trouble. The only caution you must take is the possibility of assuming a regular ordinality where the ordinality is actually irregular. For example, having shirt sizes 1, 2 and 3 where size "3" is not three times bigger than size "1".

Dummy variable creation is a very valid approach when you have categorical variables in your dataset, not only for clustering but for almost every model you could construct.
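For concreteness, a minimal pandas/scikit-learn sketch of both encodings (the mapping values and the choice of k are illustrative; with mixed scales you would normally also standardize the columns before clustering, as done here):

import pandas as pd
from sklearn.preprocessing import StandardScaler
from sklearn.cluster import KMeans

df = pd.DataFrame({
    "size":   ["small", "large", "medium", "small", "large", "medium"],
    "colour": ["red", "green", "blue", "red", "blue", "green"],
})

df["size_num"] = df["size"].map({"small": 1, "medium": 2, "large": 3})  # ordinal -> numeric
X = pd.get_dummies(df[["size_num", "colour"]], columns=["colour"])      # nominal -> 0/1 dummies

X_scaled = StandardScaler().fit_transform(X)
labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(X_scaled)
print(labels)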
H: Estimating effort of building machine learning model

I know it depends on the problem and various other factors like data availability, the complexity of the use case, the workload of the developer, etc., but can someone suggest an effort estimation for building a minimum viable product? I have taken the following parameters into consideration while preparing an effort estimation (close to 3 months) for building a price prediction model based on two problem definitions (no sample data supplied by the client), which would be integrated with a web interface:

Data loading & pre-processing - 1 week
Development of model - 2.5 months
Deployment - 2 weeks

I'm not sure whether it's overestimated or underestimated. Note: Please do not close with the comment "too broad", as I cannot provide more information. Requesting expert advice.

AI: There is something called the 80/20 rule in data science. It comes from surveys showing that data scientists usually spend 80% of their time gathering and cleaning data and only 20% actually using it to build models and everything else. No one can tell you how long each step is going to take, since it depends on your case. But I can tell you that 1 week / 2.5 months differs a lot from 80/20, and I think you should consider re-calibrating those two; only you know the facts.

You can read more in this survey: CrowdFlower Data Science Report 2016 - check the section called "How a Data Scientist Spends Their Day".
H: What is the difference between semantic segmentation, object detection and instance segmentation?

I'm fairly new at computer vision and I've read an explanation in a Medium post, however it still isn't clear to me how they truly differ.

AI: Object detection is a computer vision and image processing technique whose aim is to detect objects in an image.

Semantic segmentation is a technique that detects, for each pixel, the object category it belongs to. All object categories (labels) must be known to the model.

Instance segmentation is the same as semantic segmentation, but dives a bit deeper: it identifies, for each pixel, the object instance it belongs to. The main difference is that it differentiates between two objects with the same label.

Here's an example of the main difference. In the second image, where semantic segmentation is applied, the category (chair) is the output class, and all chairs are colored the same. In the third image, instance segmentation goes a step further and separates the instances (the individual chairs) from one another, in addition to identifying the category (chair) in the first step.

Hope this clears it up for you a bit.
H: clustering before or after PCA?

I'm a newbie in data science, and I have some problems with my project. I'm trying to visualize multidimensional data in 2D after clustering, using a lot of methods (kmeans, DBSCAN, OPTICS, agglomerative, spectral...).

I have multidimensional data (11 columns - attributes, 150K rows - number of data points). It is somewhat sparse data, which means one datum has numeric values like (0, 0, 6.5, 0, 0, 7.5, 0, 0, 4.5, 0, 0). So, each datum has approximately 2~5 non-zero attribute values...

The following is not exactly the same as my project, but it's similar: https://scikit-learn.org/stable/auto_examples/cluster/plot_cluster_comparison.html

But, as I'm new to this, I am curious about the order of PCA and clustering. I think there are 2 scenarios.

[1. Do clustering before PCA] That means I am using PCA just for visualization. But I have a question: in that case, if I cluster the raw data, do all the clustering algorithms (mentioned above) fit my data type well?

[2. Do clustering after PCA] In this case, I have other problems. My data's component importances are like below:

                         PC1     PC2     PC3     PC4     PC5      PC6      PC7      PC8     PC9      PC10
Standard deviation       1.4173  1.1836  1.1141  1.0108  0.99109  0.95231  0.89091  0.8456  0.71542  0.64610
Proportion of Variance   0.2009  0.1401  0.1241  0.1022  0.09823  0.09069  0.07937  0.0715  0.05118  0.04174
Cumulative Proportion    0.2009  0.3410  0.4651  0.5673  0.66551  0.75620  0.83558  0.9071  0.95826  1.00000

As far as I've understood about visualizing multivariate data in 2D, I have to choose 2 PCs (e.g. PC1, PC2). However, my data's proportion of variance is slightly lower than I'd expected. So, is it okay if I choose (PC1, PC2) as the coordinates to be clustered and run the clustering? Also, can I choose other PCs (e.g. PC5, PC8) as the coordinates to be clustered?

AI: This is often done to visualize whether there is any structure in the data. Often you color the clustering differently to check if samples from the same cluster are close.

Often data contains a lot of redundant information. With many dimensions, you get the curse of dimensionality. This can lead to a few large and many small clusters. By reducing the dimensionality to a few informative features, the clustering solution often improves. This is very dependent on the dataset: perhaps all of your dimensions are informative, and then dimensionality reduction won't help much. Look at scoring metrics such as silhouette analysis/score, the gap statistic, or the elbow method. In your case, most of the dimensions seem informative. Perhaps you can remove one or two dimensions, but it likely won't have much effect on the clustering solution.

You can choose whatever PCs you'd like, but they are ordered according to the amount of variance they explain. You can look at how much the different original features contribute to the different PCs.
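A hedged sketch of how you might compare "cluster on raw data" against "cluster after PCA" (synthetic data stands in for yours; the silhouette score is one of the metrics mentioned above, and comparing silhouettes across different spaces is only a rough guide):

import numpy as np
from sklearn.datasets import make_blobs
from sklearn.preprocessing import StandardScaler
from sklearn.decomposition import PCA
from sklearn.cluster import KMeans
from sklearn.metrics import silhouette_score

X, _ = make_blobs(n_samples=2000, n_features=11, centers=4, random_state=0)
X = StandardScaler().fit_transform(X)

for n_comp in (None, 5, 2):   # None = cluster on the raw standardized data
    Xr = PCA(n_components=n_comp).fit_transform(X) if n_comp else X
    labels = KMeans(n_clusters=4, n_init=10, random_state=0).fit_predict(Xr)
    print(n_comp, round(silhouette_score(Xr, labels), 3))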
H: Is the F1 Score sensitive to the threshold?

Is the F1 score sensitive or indifferent to the threshold (for defining positive or negative)?

AI: Yes, it is. The formula:

$$F_1 = \frac{2TP}{2TP + FP + FN}$$

There is no symmetry: you only have $TP$ in the numerator, which means, for instance, that $F_1=0$ if $threshold=1$ (everything is predicted negative, so there are no true positives).
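A quick numeric sketch with toy scores makes the sensitivity visible:

import numpy as np
from sklearn.metrics import f1_score

y_true   = np.array([0, 0, 1, 1])
y_scores = np.array([0.1, 0.4, 0.35, 0.8])

for t in (0.2, 0.5, 0.9):
    y_pred = (y_scores >= t).astype(int)
    print(t, f1_score(y_true, y_pred))   # 0.8, 0.667, 0.0 -- F1 moves with the threshold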
H: xgboostclassifier prediction error after saving the model and restoring it

I have trained an xgboost model, and during training the prediction works fine. But if I stop the script and start a restoring script to restore and predict, then for the same test dataset I get every sample classified into one class. The weird part is that even for the first (working) prediction, I restore the model, as shown below:

X = df.drop(['label'], axis=1)
y = df['label']
training_count = 0
X_train, test_data, y_train, test_label = train_test_split(X, y, test_size=0.1, random_state=7)

model = XGBClassifier(learning_rate=0.5, n_estimators=250, max_depth=5)
model.fit(X_train, y_train)
model.save_model('trained_model_full')

# validation
model = XGBClassifier(learning_rate=0.5, n_estimators=250, max_depth=5)
booster = xgb.Booster()
booster.load_model('trained_model_full')
model._Booster = booster
model._le = LabelEncoder().fit(test_label)

start = time.time()
pred = model.predict(test_data)
end = time.time()

The above code works and gives me 99% accuracy. But if I remove the training part and just restore it like below, then it fails to work; I get 50% accuracy.

X = df.drop(['label'], axis=1)
y = df['label']
training_count = 0
X_train, test_data, y_train, test_label = train_test_split(X, y, test_size=0.1, random_state=7)

# validation
model = XGBClassifier(learning_rate=0.5, n_estimators=250, max_depth=5)
booster = xgb.Booster()
booster.load_model('trained_model_full')
model._Booster = booster
model._le = LabelEncoder().fit(test_label)

start = time.time()
pred = model.predict(test_data)
end = time.time()

This is a strange issue. Has anyone come across something like this? If so, could you help me out?

AI: It's better to use joblib to save the model:

import joblib

# This one saves the model
joblib.dump(model, 'trained_model_full')

# This one loads the model
model = joblib.load('trained_model_full')
H: Prerequisites for openCV

I am looking to start learning openCV to build a car detection program. Should I know data science and/or machine learning before going for openCV? If not, where can I start learning openCV with Python?

AI: OpenCV has modules to detect faces, eyes, etc. With pretrained models you may also be able to detect cars: https://www.pyimagesearch.com/2018/11/12/yolo-object-detection-with-opencv/

There are many good tutorials online; just have a look. So if you are happy with this, you don't need to know much about data science. However, if you want to go a step further and train your own models, you will need to look into e.g. Keras (with pretrained models): https://keras.rstudio.com/articles/applications.html

My view is that OpenCV is a cool tool and you can do a lot with it. However, off-the-shelf applications have their limitations (of course).
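To give a flavour of how little code a first experiment takes, the sketch below runs a Haar-cascade detector over a still image. The file names are placeholders, and "cars.xml" stands for a car-trained cascade you would have to obtain from a third-party source (OpenCV itself ships cascades mainly for faces, eyes, and a few other objects):

import cv2

car_cascade = cv2.CascadeClassifier("cars.xml")   # hypothetical car cascade file
img = cv2.imread("street.jpg")                    # your input photo
gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)

cars = car_cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=3)
for (x, y, w, h) in cars:
    cv2.rectangle(img, (x, y), (x + w, y + h), (0, 255, 0), 2)

cv2.imwrite("detected.jpg", img)

None of this requires machine learning knowledge up front, which is rather the point: you can get results first and dig into the theory later.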
H: Improving the results of CNN

Edit 2: I solved my problem. The issue was caused by the validation_generator. I used the method flow_from_directory with shuffle=True. Changing the value to False and calling the method validation_generator.reset() before model.predict_generator() for computing the confusion matrix solved my problem. The reset() method seems to be very important.

Edit: I was able to isolate the problem a little. I noticed that the evaluate_generator method returns the correct values from the training, e.g. [0.068286080902908, 0.9853515625]. However, the predict_generator() method behaves strangely. The results look like this:

[[8.8930092e-06 5.8127771e-04 3.8436747e-06 7.7528159e-07 9.9940526e-01]
 [1.4138629e-03 9.9854565e-01 5.4473304e-07 3.9719587e-05 1.8904993e-07]
 [9.0803866e-07 2.7020766e-05 7.9189061e-07 4.9350000e-09 9.9997127e-01]
 ...
 [5.0964586e-06 4.5610027e-04 2.6184430e-06 1.6962146e-07 9.9953604e-01]
 [2.9692460e-08 3.1284328e-10 4.7919415e-09 1.0000000e+00 1.4161311e-12]
 [2.1354626e-06 9.6519925e-06 1.9460406e-07 4.6475903e-09 9.9998796e-01]]

####

I did some image classification with a CNN. The accuracy of the training and validation sets is high and the losses for both of them are low. However, my confusion matrix does not have the typical diagonal from the upper left to lower right. If I understand the confusion matrix correctly, I have a lot of misclassifications. So, how can I improve my model to get better results?

The distribution of samples per class is:

early: 800
healthy: 749
late: 764
leaf mold: 761
yellow: 708

The structure of the model:

model = models.Sequential()
model.add(layers.Conv2D(32, (3, 3), activation='relu', input_shape=(150, 150, 3)))
model.add(layers.MaxPooling2D((2, 2)))
model.add(layers.Dropout(0.15))
model.add(layers.Conv2D(64, (3, 3), activation='relu'))
model.add(layers.MaxPooling2D((2, 2)))
model.add(layers.Dropout(0.2))
model.add(layers.Conv2D(128, (3, 3), activation='relu'))
model.add(layers.MaxPooling2D((2, 2)))
model.add(layers.Conv2D(256, (3, 3), activation='relu'))
model.add(layers.MaxPooling2D((2, 2)))
model.add(layers.Conv2D(256, (3, 3), activation='relu'))
model.add(layers.MaxPooling2D((2, 2)))
model.add(BatchNormalization())
model.add(layers.Flatten())
model.add(layers.Dropout(0.6))
model.add(layers.Dense(150, activation='relu', kernel_regularizer=regularizers.l2(0.002)))
model.add(layers.Dense(5, activation='softmax'))

model.compile(loss='categorical_crossentropy',
              optimizer=optimizers.Adam(lr=1e-3),
              metrics=['acc'])

These are the accuracy and losses of the training:

Epoch 00067: val_loss did not improve from 0.08283
Epoch 68/200
230/230 [==============================] - 56s 243ms/step - loss: 0.0893 - acc: 0.9793 - val_loss: 0.0876 - val_acc: 0.9784
Epoch 00068: val_loss did not improve from 0.08283
Epoch 69/200
230/230 [==============================] - 58s 250ms/step - loss: 0.0874 - acc: 0.9774 - val_loss: 0.1209 - val_acc: 0.9684
Epoch 00069: val_loss did not improve from 0.08283
Epoch 70/200
230/230 [==============================] - 57s 246ms/step - loss: 0.0879 - acc: 0.9803 - val_loss: 0.1384 - val_acc: 0.9706
Epoch 00070: val_loss did not improve from 0.08283
Epoch 71/200
230/230 [==============================] - 59s 257ms/step - loss: 0.0903 - acc: 0.9783 - val_loss: 0.1352 - val_acc: 0.9728
Epoch 00071: val_loss did not improve from 0.08283
Epoch 72/200
230/230 [==============================] - 58s 250ms/step - loss: 0.0852 - acc: 0.9798 - val_loss: 0.1324 - val_acc: 0.9621
Epoch 00072: val_loss did not improve from 0.08283
Epoch 73/200
230/230 [==============================] - 58s 250ms/step - loss: 0.0831 - acc: 0.9815 - val_loss: 0.1634 - val_acc: 0.9574
Epoch 00073: val_loss did not improve from 0.08283
Epoch 74/200
230/230 [==============================] - 57s 246ms/step - loss: 0.0824 - acc: 0.9816 - val_loss: 0.1280 - val_acc: 0.9640
Epoch 00074: val_loss did not improve from 0.08283
Epoch 75/200
230/230 [==============================] - 57s 247ms/step - loss: 0.0869 - acc: 0.9774 - val_loss: 0.0777 - val_acc: 0.9882
Epoch 00075: val_loss improved from 0.08283 to 0.07765, saving model to C:/Users/xxx/Desktop/best_model_7.h5
Epoch 76/200
230/230 [==============================] - 56s 243ms/step - loss: 0.0739 - acc: 0.9851 - val_loss: 0.0683 - val_acc: 0.9851
Epoch 00076: val_loss improved from 0.07765 to 0.06826, saving model to C:/Users/xxx/Desktop/best_model_7.h5

AI: This is a bit strange... one problem may be that you do not have too many training samples. Do you use a pretrained model? If not, using a pretrained model can potentially improve classification accuracy (especially with limited training samples). https://keras.io/applications/

-Edit-

This is a good sample code: https://github.com/fchollet/deep-learning-with-python-notebooks/blob/master/5.3-using-a-pretrained-convnet.ipynb

Adjusted for multiclass:

import keras
from keras.applications import VGG16

conv_base = VGG16(weights='imagenet',
                  include_top=False,
                  input_shape=(150, 150, 3))

import os
import numpy as np
from keras.preprocessing.image import ImageDataGenerator

base_dir = 'C:/kerasimages'
train_dir = os.path.join(base_dir, 'train')
validation_dir = os.path.join(base_dir, 'val')
test_dir = os.path.join(base_dir, 'test')

datagen = ImageDataGenerator(rescale=1./255)
batch_size = 20

def extract_features(directory, sample_count):
    features = np.zeros(shape=(sample_count, 4, 4, 512))
    labels = np.zeros(shape=(sample_count))
    generator = datagen.flow_from_directory(
        directory,
        target_size=(150, 150),
        batch_size=batch_size,
        class_mode='binary')
    i = 0
    for inputs_batch, labels_batch in generator:
        features_batch = conv_base.predict(inputs_batch)
        features[i * batch_size : (i + 1) * batch_size] = features_batch
        labels[i * batch_size : (i + 1) * batch_size] = labels_batch
        i += 1
        if i * batch_size >= sample_count:
            # Note that since generators yield data indefinitely in a loop,
            # we must `break` after every image has been seen once.
            break
    return features, labels

train_features, train_labels = extract_features(train_dir, 2000)
validation_features, validation_labels = extract_features(validation_dir, 1000)
test_features, test_labels = extract_features(test_dir, 1000)

from keras.utils import to_categorical

print(train_labels)
print(train_labels.shape)
train_labels = to_categorical(train_labels)
print(train_labels)
print(train_labels.shape)
validation_labels = to_categorical(validation_labels)
test_labels = to_categorical(test_labels)

train_features = np.reshape(train_features, (2000, 4 * 4 * 512))
validation_features = np.reshape(validation_features, (1000, 4 * 4 * 512))
test_features = np.reshape(test_features, (1000, 4 * 4 * 512))

from keras import models
from keras import layers
from keras import optimizers

model = models.Sequential()
model.add(conv_base)
model.add(layers.Flatten())
model.add(layers.Dense(256, activation='relu'))
# NUMBER OF CLASSES
model.add(layers.Dense(3, activation='softmax'))
model.summary()

conv_base.trainable = False

from keras.preprocessing.image import ImageDataGenerator

train_datagen = ImageDataGenerator(
    rescale=1./255,
    rotation_range=40,
    width_shift_range=0.2,
    height_shift_range=0.2,
    shear_range=0.2,
    zoom_range=0.2,
    horizontal_flip=True,
    fill_mode='nearest')

# Note that the validation data should not be augmented!
test_datagen = ImageDataGenerator(rescale=1./255)

train_generator = train_datagen.flow_from_directory(
    # This is the target directory
    train_dir,
    # All images will be resized to 150x150
    target_size=(150, 150),
    batch_size=20,
    # Since we use categorical_crossentropy loss, we need categorical labels
    class_mode='categorical')

validation_generator = test_datagen.flow_from_directory(
    validation_dir,
    target_size=(150, 150),
    batch_size=20,
    class_mode='categorical')

model.compile(loss='categorical_crossentropy',
              optimizer=optimizers.RMSprop(lr=2e-5),
              metrics=['acc'])

history = model.fit_generator(
    train_generator,
    steps_per_epoch=100,
    epochs=30,
    validation_data=validation_generator,
    validation_steps=50,
    verbose=2)

#######################################
# Fine tuning

#conv_base.summary()

conv_base.trainable = True

set_trainable = False
for layer in conv_base.layers:
    if layer.name == 'block5_conv1':
        set_trainable = True
    if set_trainable:
        layer.trainable = True
    else:
        layer.trainable = False

model.compile(loss='categorical_crossentropy',
              optimizer=optimizers.RMSprop(lr=1e-5),
              metrics=['acc'])

history = model.fit_generator(
    train_generator,
    steps_per_epoch=100,
    epochs=100,
    validation_data=validation_generator,
    validation_steps=50)

model.save('my_model_multiclass.hdf5')
H: Churn Prediction Model based on Customer Activities

I am new to data science, so forgive me if I have not done my research well. I want to build a system that calculates a churn score for each customer and hence tries to prevent churn. I just want to test the concept, so the result is not really important. The business is about sending newsletters to customers and receiving ad revenue for it.

The data: we have customer activity like opened emails, read emails, clicked emails, and that is about it. The idea is to look into the customer activity and see if we can identify whether the customer is about to cancel the subscription. I have looked into several classification models and wonder if there is a model that can look into the activity history, not just the current events. Any help will be appreciated!

AI: First of all, you need to look at your churn rate to see how rare it is. If your churn rate is really low (below 5% of total labels), then you could treat this as an anomaly detection problem: you train your model on non-churn users (if churn is the label you're predicting, then the non-churns would have 0 as the label), and flag any activity different from what you trained your model on as an anomaly (in case you don't have enough examples of activities that led to churn). Or, if you have enough 'churners' to train your model on, treat this as a classification problem.

I'll suggest some indicators for you, even though I haven't worked on a churn model before. Assuming you only have email activity (following your statement), you could:

Create feature counts on the email activity, to see if there's a pattern of churn after certain numbers of non-reads, non-clicks, non-opens.

Try to collect more features if there are any, because 3 is a low feature count for a predictive model (maybe not only the activity but also the user's personal information, like age / type of profession, etc.).

You can also see this article discussing user churn. Hope this helps you.
H: Problem reconstructing Keras's IMDB reviews

Keras's sample IMDB reviews are represented as sequences of word indexes. However, a dictionary is provided that matches each index to a word. I'm trying to use this to reconstruct the text of a review, but I'm getting gibberish. Here's my code:

from keras.datasets import imdb

(x_train, y_train), (x_test, y_test) = imdb.load_data()
ind = imdb.get_word_index()

def find_word(val):
    for key, value in ind.items():
        if value == val:
            return key

rev = x_train[0]   # which review to look at
rev_recon = []     # hold the reconstructed review
for i in range(len(rev)):
    word = find_word(rev[i])
    rev_recon.append(word)

print(' '.join(rev_recon))

>>> the as you with out themselves powerful lets loves their becomes reaching had journalist of lot from anyone to have after out atmosphere never more room titillate it so heart shows to years of every never going villaronga help moments

I thought the sequence of word indexes was supposed to represent the sequence in which the words originally appeared in the review. Am I missing something?

AI: Never mind, it wasn't very clear to me that the defaults for load_data() actually shift the sequence indices by 3, and so to obtain the proper indices for the dictionary you either need to offset those indices during lookup or shift the dictionary values, e.g.:

ind = imdb.get_word_index()
ind = {k: (v + 3) for k, v in ind.items()}

Also, every review starts with a 1 at position [0] (the start-of-sequence marker), so reconstructing the review means starting at [1] or passing start_char=None in load_data().
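Putting the two fixes together, a complete decoding sketch looks like this (the special-token names shown are the conventional ones; indices 0-3 are reserved by load_data()'s defaults):

from keras.datasets import imdb

(x_train, y_train), _ = imdb.load_data()
word_index = imdb.get_word_index()

# shift every index by the 3 reserved slots, then add the special tokens
index_to_word = {v + 3: k for k, v in word_index.items()}
index_to_word.update({0: "<PAD>", 1: "<START>", 2: "<UNK>", 3: "<UNUSED>"})

print(" ".join(index_to_word.get(i, "?") for i in x_train[0]))

Building the reverse dictionary once is also far faster than the linear find_word() scan per token.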
H: About sklearn.metrics.average_precision_score documentation

There is an example in the sklearn.metrics.average_precision_score documentation:

import numpy as np
from sklearn.metrics import average_precision_score

y_true = np.array([0, 0, 1, 1])
y_scores = np.array([0.1, 0.4, 0.35, 0.8])
average_precision_score(y_true, y_scores)
0.83

But when I plot the precision_recall_curve

precision, recall, _ = precision_recall_curve(y_true, y_scores)
plt.plot(recall, precision)

I got the picture: why is the area under the precision_recall_curve not 0.83?

AI: According to the documentation, the value is not exactly the area under the curve, it is

$$\text{AP} = \sum_n(R_n - R_{n-1})P_n.$$

which is a rectangular approximation. For your specific example, i.e.

     R    P
1    0.0  1.0
2    0.5  1.0
3    0.5  0.5
4    1.0  0.66

it is calculated as

$$\begin{align*} \text{AP} & = \overbrace{(0.5 - 0.0)\times1.0}^{(R_2 - R_1)P_2} + \overbrace{(0.5 - 0.5)\times 0.5}^{(R_3 - R_2)P_3} + \overbrace{(1.0 - 0.5) \times0.66}^{(R_4 - R_3)P_4} \\ &= 0.5 + 0.00+ 0.33 = 0.83 \end{align*}$$

which is the area under the red curve as illustrated below:

compared to

$$\text{AUPR}=0.5 + \overbrace{\frac{0.5 + 0.66}{2} \times 0.5}^{\text{trapezoid area}} = 0.79$$

which is the area under the blue curve.
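You can confirm the two numbers directly, since sklearn exposes both the rectangular rule (average_precision_score) and a generic trapezoidal integrator (auc):

import numpy as np
from sklearn.metrics import precision_recall_curve, average_precision_score, auc

y_true = np.array([0, 0, 1, 1])
y_scores = np.array([0.1, 0.4, 0.35, 0.8])

precision, recall, _ = precision_recall_curve(y_true, y_scores)
print(average_precision_score(y_true, y_scores))  # ~0.83, the rectangular sum
print(auc(recall, precision))                     # ~0.79, the trapezoidal area you plotted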
H: Use machine learning to predict next scheduled meeting for sales officers

I have a project with data on field sales officers who visit their customers and enter the progress details. A visit can be an order or any kind of customer interaction. Let's say one sales guy has around 1000 customers; it's only natural that he might skip some customers, which can result in losses over time. I have the data for the visits done by him since each customer was onboarded.

What model should I use to check his old frequency of visits, together with the factors stated below, so that I can suggest something like: "Do you want to get in touch with this customer?" Basically, by checking his previous interaction rate, I want to recommend a customer in case he forgot to visit.

Data points are like:

date/time - date when the visit was done
remarks - what was discussed
there is a status which is related to internal matters, and some other customer details.

So what model should I use, or which technique do you think is best suited for this problem? I'm fairly new to machine learning and am kind of learning it by doing.

AI: What I understood is: since your salesperson skipped the customer meeting, you want to display details or send a notification (last date of conversation, discussion points, and so on), such that the salesperson can decide her/his priority based upon the importance of the discussion.

If so, I'm not sure how much of that is possible with ML. But one thing is possible: you can train an RNN (Recurrent Neural Network) based time-series model. This model will predict the next meeting time, based upon the pattern in the historical meeting data. That way you can book the salesperson's calendar, along with the key discussion points from the last meeting, or send an update in case the salesperson hasn't visited that customer.
H: k-means classifies one data point as a group I have 1000 sets of one dimensional data (360 each in length), and I want k means to classify what is a small/medium/large value (n_clusters=3) for each set of data, but I'm getting a lot of instances where the large group only has 1 data point because that value is so far away from the rest, but the rest look like they can clearly create 3 clusters. In some other cases, it does seem to make sense to use 1 data point as the large group since the rest are so close together. It's not clear if there can be 3 distinctive clusters. What would be an efficient way to deal with this? AI: Two ideas come to mind, which could be combined or not. Try to identify the single point as an outlier, and remove it from consideration for the clustering. Allow $k$ to vary a little. Using both and allowing $k\in\{2,3\}$ allows you to find only two groups in the main set of points, plus the outlier. Using just (2) with $k\in\{3,4\}$ could find clusters Low/Med/Large/Outlier...that has the nicety that outlier detection is done by the k-means algorithm rather than another preprocessing step, but runs the risk of finding four honest clusters when you only wanted three.
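A compact sketch combining both ideas (synthetic 1D data stands in for one of your 360-point sets; the screening threshold and the candidate k values are illustrative):

import numpy as np
from sklearn.cluster import KMeans
from sklearn.metrics import silhouette_score

rng = np.random.default_rng(0)
x = np.concatenate([rng.normal(0, 1, 150), rng.normal(5, 1, 150),
                    rng.normal(10, 1, 59), [80.0]])   # three groups plus one extreme value
X = x.reshape(-1, 1)

# idea 1: screen out extreme points before clustering
keep = np.abs(X[:, 0] - np.median(X)) < 3 * np.std(X)
X_main = X[keep].reshape(-1, 1)

# idea 2: let k vary a little and pick the best-scoring solution
for k in (2, 3, 4):
    labels = KMeans(n_clusters=k, n_init=10, random_state=0).fit_predict(X_main)
    print(k, round(silhouette_score(X_main, labels), 3))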
H: Why do we have to divide by 2 in the ML squared error cost function?

I'm not sure why you need to multiply by $\frac1{2m}$ in the beginning. I understand that you would have to divide the whole sum by $m$, but why do we have to multiply $m$ by two? Is it because we have two $\theta$ here in the example?

AI: It is simple: when you take the derivative of the cost function, which is used for updating the parameters during gradient descent, the $2$ from the power gets cancelled by the $\frac{1}{2}$ multiplier, so the derivative is cleaner. This technique, or something similar, is widely used in math in order "to make the derivations mathematically more convenient". You can simply remove the multiplier, see here for example, and expect the same result.
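Written out for the usual linear hypothesis $h_\theta(x) = \theta^T x$ (the standard setup this cost function comes from), the cancellation looks like this:

$$J(\theta) = \frac{1}{2m}\sum_{i=1}^{m}\left(h_\theta(x^{(i)}) - y^{(i)}\right)^2$$

$$\frac{\partial J}{\partial \theta_j} = \frac{1}{2m}\sum_{i=1}^{m} 2\left(h_\theta(x^{(i)}) - y^{(i)}\right)x_j^{(i)} = \frac{1}{m}\sum_{i=1}^{m}\left(h_\theta(x^{(i)}) - y^{(i)}\right)x_j^{(i)}$$

Note that the $\frac{1}{2}$ has nothing to do with the number of $\theta$ parameters; it only pairs with the exponent $2$.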
H: What is the point of getting rid of overfitting?

I'm having trouble understanding why I would use dropout, regularization, data augmentation, etc. to get rid of overfitting in the first place. I get that if your model is too large or your data is too sparse, then your model may start to memorize data and not perform well on new data. However, are there any cases in which adding dropout, regularization, etc. would increase accuracy on the validation set? For instance, if my training accuracy is 95% and validation accuracy is 70%, would removing overfitting simply bring the training accuracy down to the validation accuracy? Or is there a way to actually improve accuracy? I assume there is, but some intuition on this would be very much appreciated!

AI: It's like studying for an exam with only past year papers (PYP), where you are the classifier. It would not be wise to not practice any PYP at all before the exam, as that leads to poor performance in the exam (under-fitting). On the other hand, it would be terrible to simply memorize the answers to the PYP, as you then cannot generalize to the actual exam paper, which is definitely different from the PYP (over-fitting). So yes: regularization typically lowers training accuracy, but by forcing the model to generalize rather than memorize, it can genuinely raise validation accuracy, not merely pull the training score down to meet it.
H: Neural network for linear regression?

In this example on the TensorFlow website, I was surprised to see a neural network's predicted values graphed as a straight line. Isn't the purpose of neural networks to approximate the non-linearities in data? Please let me know what I am missing here.

AI: Neural networks are very good function approximators, so they can approximate a wide range of nonlinear functions. Remember that linear functions are easier to represent than nonlinear functions; hence, the neural network will clearly be able to approximate a linear function. This is most easily seen if we only use linear activation functions, but we can also use a nonlinear ReLU and still be able to arbitrarily approximate the function on a compact set.

The question we should ask is rather: is it overkill to approximate a linear function with a neural network when a linear regression would do the job? The answer should be clear: you should rather use a linear regression instead of a neural network. The given example just wants to demonstrate that even without knowing the relationship between our predictors and criterion (this is sometimes called domain knowledge), the neural network will still be able to approximate the function without the need for additional domain knowledge.
H: What is the correct procedure when "joining" data takes ~6 hours?

I am dealing with bike-share data. I have 2 DataFrames:

trips_df (subset shown), total entries = 1,048,568
weather_df (subset shown), total entries = 2,654

I am trying to calculate and attach the total_precipitation for each trip, as a column. I do this by looking up the start_timestamp and end_timestamp datetimes for each trip in trips_df, then summing the precipitation_amount in weather_df within those times, then attaching that value back to trips_df under the new column. I can attach the code if it's helpful.

I ran the code on a subset of 65 entries and it took ~1.3s (CPU times: user 1.27 s, sys: 8.77 ms, total: 1.28 s, Wall time: 1.28 s). Extrapolating that performance to my entire data, it would take (1.3 * 1048568)/65 = 20971.36 seconds, or 5.8 hours.

What am I supposed to do in this situation? For context, this is a Kaggle-style data science project, so I'll have to do further data wrangling and data extraction, then apply a predictive model.

AI: As hinted at by @spacedman, I was doing the timestamp lookups wrong: I was forming set(weather_data['start_precipitation_datetime']) and set(weather_data['end_precipitation_datetime']) for every lookup. Here is the working (more efficient) code, which builds each set once and passes it in. It runs in CPU times: user 118 ms, sys: 4.23 ms, total: 122 ms, Wall time: 124 ms for 65 rows - roughly 35 minutes instead of 6 hours for the entire DataFrame.

def sum_precipitation(datetime1, datetime2, weather_data, start_dates_set, end_dates_set):
    # round the trip start down and the trip end up to whole hours
    time1_rd = datetime1.replace(minute=0, second=0)
    time2_ru = datetime2.replace(minute=0, second=0) + dt.timedelta(hours=1)

    if time1_rd in start_dates_set:
        start_idx = weather_data.start_precipitation_datetime[
            weather_data.start_precipitation_datetime == time1_rd].index[0]
        if time2_ru in end_dates_set:
            end_idx = weather_data.end_precipitation_datetime[
                weather_data.end_precipitation_datetime == time2_ru].index[0]
            precipitation_sum = weather_data.iloc[start_idx:end_idx + 1, 7].sum()
        else:
            precipitation_sum = 0
    else:
        precipitation_sum = 0

    return round(precipitation_sum, 3)

def join_weather_to_trips(trips_data, weather_data):
    trips_weather_df = trips_data.copy()

    # build the lookup sets ONCE, not per row
    start_hr_set = set(weather_data['start_precipitation_datetime'])
    end_hr_set = set(weather_data['end_precipitation_datetime'])

    fn = lambda row: sum_precipitation(row.start_timestamp, row.end_timestamp,
                                       weather_data, start_hr_set, end_hr_set)
    col = trips_data.apply(fn, axis=1)
    trips_weather_df = trips_weather_df.assign(total_precipitation=col.values)
    return trips_weather_df
H: Which animal has similar brain complexity to VGGNet I know that it sounds weird, but I read somewhere that VGGnet has roughly 14.5k neurons and 138/144M parameters depending on the exact architecture as it's one of the biggest. Also I read here, that the biggest neural networks trained on super computers have roughly 80M neurons, which is similar to frog's brain. If we had to compare the complexity of VGGNet to brain of some kind of animal (even simple one) what would it be? AI: According to this wiki: List of animals by number of neurons The closest ones you got are: Pond snail with 11,000 neurons. Sea slug with 18,000 neurons. I think this was more of a biology question since you covered the data science part.
H: Applying mean encoding before or after splitting into train and test set I have a dataset of 50000 observations with columns of high cardinality. The best way to encode them is with mean encoding, then to use regularization. I will use CV rather than smoothing. But when I see people use it, they use it on train and test set. Should I first split my dataset into train and test set and then encode or can I encode directly from the beginning on my full dataset? If I should split the data into train and test set first, can someone tell me why? AI: The purpose of having a test set or a validation set is to be able to check the performance of your model on data it has not seen before. If you perform feature engineering with the test data present you will get a data leakage. That happens when give your model information about your test data during training. It is especially bad when doing target encoding with the label mean since it will give your model information about the distribution of the labels in the test set. The effect is that you will get an overly optimistic test score that will not reflect the performance on new truly unseen data.
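Here is a minimal sketch of target (mean) encoding done the leak-free way: the statistics are computed on the training rows only and merely applied to the test rows (the column names are illustrative):

import pandas as pd

train = pd.DataFrame({"city": ["a", "a", "b", "c"], "y": [1, 0, 1, 1]})
test  = pd.DataFrame({"city": ["a", "b", "d"]})      # "d" never seen in training

means = train.groupby("city")["y"].mean()            # fitted on TRAIN only
global_mean = train["y"].mean()

train["city_enc"] = train["city"].map(means)
test["city_enc"]  = test["city"].map(means).fillna(global_mean)  # unseen categories fall back
print(test)

If you encoded on the full dataset first, the test labels would leak into these means, which is exactly the over-optimism described above.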
H: Abstraction in Neural Networks

In my nonlinear dynamics class in college, we discussed a simple perceptron with two input neurons and one output neuron that is trained on the patterns

pattern | Input | Output
1       | 1,0   | 2
2       | 3,1   | 6
3       | 0,0   | 0

Solving the system of equations to determine the weights

$$w_1(1) + w_2(0) = 2$$
$$w_1(3) + w_2(1) = 6$$
$$w_1(0) + w_2(0) = 0$$

gives $w_1 = 2$ and $w_2 = 0$, which means the perceptron has made an abstraction. In more advanced neural networks, (a) how do you test for abstractions in your network and (b) how do you interpret what that abstraction means?

AI: Since the weight of each feature (column) determines how important that column is in determining the output value, the abstraction here means the most efficient, correct set of weights. For instance, the features for pattern 1 are 1 and 0, and each of them is given a weight from which the output is calculated:

feature1 * weight1 + feature2 * weight2 = output

So, based upon the given patterns, the weight values shown are the best ones for reproducing the existing outputs. In practice, these weights are found through training: first, initial weights are assigned to the features; then a single record or a batch of data passes through the network and a cost function is calculated; then the error is backpropagated and new weights are set for the features (based on how much each of them is responsible for the error), and this continues.
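For this toy example you can recover the abstraction numerically with a least-squares solve, which is what the closed-form version of such training amounts to:

import numpy as np

X = np.array([[1, 0], [3, 1], [0, 0]])   # the three training patterns
y = np.array([2, 6, 0])

w, *_ = np.linalg.lstsq(X, y, rcond=None)
print(w)   # [2. 0.] -- the second input is ignored: the learned "abstraction"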
H: TDIDT Decision Trees algorithm

What is the difference between TDIDT, ID3, CART, and C4.5? My main concern is TDIDT: is it the first-ever decision tree algorithm? Is it a predecessor or successor of ID3, CART, and C4.5? What are the differences from the others (ID3, CART, and C4.5)?

AI: TDIDT stands for "top-down induction of decision trees"; I haven't found evidence that it refers to a specific algorithm, rather just to the greedy top-down construction method. Therefore (seemingly) all the other algorithms you mention are implementations of TDIDT.

The first iteration is due to Hunt, the "Concept Learning System" in 1966. ID3 is due to Quinlan in 1979, improving upon the CLS. (Fun fact: it was originally designed to tackle the problem of deciding winnability of king-rook vs king-knight chess endgames.) This was further improved to C4.5, then to C5.0. This branch only works for classification.

CART ("classification and regression trees") was developed roughly in parallel with ID3, by Breiman, Friedman, Stone and Olshen in 1984. As the name suggests, this branch allows prediction of continuous variables.

The main difference between branches is how splits are determined, using different metrics. Early on it seems there were differences in handling missing data and such, but these appear to have mostly been included in all the more recent iterations. Of course, there have been numerous other improvements or variations. Some (historical?) tree building algorithms generate nonbinary trees, some make splits on linear combinations of features instead of just one at a time, some try to reduce the greedy nature of the algorithm by looking ahead, some produce regression models in each leaf instead of constant functions, ...

References:
https://en.wikipedia.org/wiki/Decision_tree_learning#Decision_tree_types
W-Y Loh, "A Brief History of Classification and Regression Trees" (slides)
Lecture slides (Aida Nordman?)
Quinlan's 1986 paper
Similar question on Quora
Similar SO question
C5.0 introduction
H: Training LSTM with different sequence lengths in Keras functional api

I am trying to train an LSTM model using the Keras functional API. My training data is of shape:

>>> data.shape()
(100000, variable_sequence_lengths, 295)

where 100000 corresponds to the number of instances (the whole number of sequences) and 295 denotes the number of features in each element of a given sequence. I am getting errors regarding the shape of the input data. How do I define the shape of the data in the input layer and the subsequent one (LSTM), considering variable sequence lengths?

from keras.layers import Input, Dense, concatenate, LSTM
from keras.models import Model
import numpy as np

inputs = Input(shape=(x, y, z))
x = LSTM(128, return_sequences=True, input_shape=(a, b, c))(inputs)
....

What values should x, y, z and a, b, c take?

AI: Model specification is per sample; the number of samples should not be included in the dimensions. I.e. (a, b, c), read as (sample_size, timesteps, dimension), should be changed to (timesteps, dimension). Also, for variable-length sequences, timesteps should be None. That is, (timesteps, dimension) should be (None, 295). The same goes for (x, y, z). Please check out this post for feeding variable-length multi-dimensional sequences to a Keras LSTM.
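Concretely, that advice looks like the sketch below (the layer sizes and the Dense head are placeholders; the essential part is shape=(None, 295)):

from keras.layers import Input, LSTM, Dense
from keras.models import Model

# timesteps = None lets each batch carry its own sequence length;
# 295 is the per-timestep feature dimension
inputs = Input(shape=(None, 295))
x = LSTM(128, return_sequences=True)(inputs)
outputs = Dense(1)(x)

model = Model(inputs, outputs)
model.compile(optimizer="adam", loss="mse")

Note that sequences within a single batch still need equal length (they live in one array), so in practice you either pad them or group sequences of the same length into each batch.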
H: What are some good machine learning techniques for developing rules that govern changes in data?

Say I read a set of data in and it looks like so:

[ 1, 2, 3, 4, 5 ]

Now, there's an unknown rule set by my data's source that says if the 3rd element in my list is set to 5, the fifth element must be greater than 10. So, subsequent data reads look like this:

read #2: [ 2, 2, 3, 4, 5 ]
read #3: [ 3, 2, 5, 4, 11 ]
read #4: [ 4, 2, 5, 4, 11 ]

If I'm only seeing the data, is there a way I can extrapolate that rule, so I end up with this:

if (field3 == 5 and field5 < 10) return false;
return true;

I know it would be hard for a machine to differentiate the rules without far more data than this example, but in reality I have thousands of these data records, with hundreds of rules that govern their state. Is there an established method for doing this type of pattern recognition?

AI: Yes. There are different methods for extracting rules from data; which is suitable depends on the type of rules that you want to extract. For instance, different types of decision trees (such as fuzzy decision trees), or association-rule mining to find rules with strong support and confidence, could help you here.
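As a concrete illustration of the decision tree route, here is a small sketch with scikit-learn. The toy records are hypothetical (labelled to follow the rule in the question), and export_text prints the learned splits as human-readable if/else rules:

import numpy as np
from sklearn.tree import DecisionTreeClassifier, export_text

# hypothetical records: label is 0 (invalid) exactly when field3 == 5 and field5 < 10
X = np.array([[1, 2, 3, 4, 5],
              [2, 2, 3, 4, 5],
              [3, 2, 5, 4, 11],
              [4, 2, 5, 4, 11],
              [5, 2, 5, 4, 3],
              [6, 2, 5, 4, 2]])
y = np.array([1, 1, 1, 1, 0, 0])

tree = DecisionTreeClassifier(max_depth=3).fit(X, y)
print(export_text(tree, feature_names=[f"field{i}" for i in range(1, 6)]))

With enough records per rule, the printed thresholds approximate the hidden conditions (here, splits on field5 and/or field3).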
H: Splitting train and test in multilabel classification to avoid missing data in the train set

I have a dataset (600 rows) composed of two columns:

Summary: contains the text of a document
Keywords: contains the keywords that identify that document

   Summary                                             KeyWords_in_Array_wo_insurance
0  court sanction scheme transfer insur reinsur b...  [insolvency]
1  immedi custodi sentenc month week impos direct...  [administration of justice, civil evidence, se...
2  motorist injur hit run collis car identifi dri...  [negligence, road traffic]
3  claimant given permiss continu claim compani a...  [insolvency, civil procedure]
4  court gave guidanc approach taken applic relea...  [civil procedure, costs]
5  plaintiff solicitor entitl declar life critic ...  [trusts]
6  claimant insur establish requir standard road ...  [personal injury, torts]
7  minimum indemn requir institut charter account...  [arbitration, civil procedure, costs, accounta...
8  applic secur cost court could take account eve...  [civil procedure, insolvency, cpr, costs]

I want to classify a summary with specific keywords. The keywords are not mutually exclusive. My code is:

X_train, X_test, y_train, y_test = train_test_split(
    df_final["Summary"], df_final["KeyWords_in_Array_wo_insurance"],
    test_size=0.20, random_state=42)

mlb = MultiLabelBinarizer()
y_train_mlb = mlb.fit_transform(y_train)

classifier = Pipeline([
    ('vectorizer', CountVectorizer()),
    ('tfidf', TfidfTransformer()),
    ('clf', OneVsRestClassifier(LinearSVC()))])

classifier.fit(X_train, y_train_mlb)
y_predicted = classifier.predict(X_test)
all_labels = mlb.inverse_transform(y_predicted)

y_test_mlb = mlb.transform(y_test)
print("Accuracy = ", accuracy_score(y_test_mlb, y_predicted))

I am getting a low accuracy score: 20%. Therefore I am thinking that my classification is not good enough. The reason might be that some keywords are used only once. For example, the keywords "animal", "partnership" and "succession" are each used in only 1 row. (Meaning, they are assigned to only one summary.)

I think, therefore, that when I split the dataset into training and test, some "lonely keywords" enter the test dataset but not the training one. Thus, the model will never be trained on them. Is this the reason why my accuracy is so low? Or am I doing something else wrong?

AI: Accuracy is an awkward measure to use to assess a model classifying into multiple classes, and rare events are hard for models to predict well. Outcome categories that exist in your test set but not your training set will of course lead to lower accuracy, but that's far from the only thing that can go wrong. And, if these outcomes are rare relative to the number of observations you have, it's unlikely that your model is highly predictive but reduced to 20% accuracy based solely on the failure to classify a handful of rare events. It's more likely that the model has fairly poor performance overall.

Importantly, this is not necessarily the result of you doing something wrong. Some things are difficult to model, and the best-performing model possible may still have a poor success rate (however you want to measure that). Without knowing more about your modelling procedure it's hard to make specific suggestions, but a few things come to mind (this is nowhere near an exhaustive list):

Summarizing the documents may not be a very reliable process.
If you have strong protocols for summarizing each document, reviewing those might help to find odd edge cases where similar observations get summarized differently in some systematic way, or differing observations get summarized together improperly. If you are manually summarizing each document, it may be worth having multiple people perform the summarization and then evaluating the inter-rater reliability of the summaries. The documents from which the summaries are derived may differ from one another. If we're describing highly formulaic documents this might not be an issue. But if the documents are written in prose by different people, it's very possible that issues of personal style and preference on the part of the writers are producing systematic differences from other writers, making it hard to classify a document from an imaginary, generic author. There could be a mismatch between the number of predictors and outcome classes and the number of observations available to train the model on. 600 observations may be enough for a model classifying a document into one of three categories based on the presence or absence of 15 predictor variables. It is not enough for a model classifying a document into one of 400 categories based on 600 predictor keywords.
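On the mechanical side of the splitting issue, one concrete fix is to fit the binarizer on the complete label column before splitting, so the label space is fixed up front and transform never encounters an unknown keyword (a sketch against the variable names in the question):

from sklearn.model_selection import train_test_split
from sklearn.preprocessing import MultiLabelBinarizer

mlb = MultiLabelBinarizer()
mlb.fit(df_final["KeyWords_in_Array_wo_insurance"])   # learn the full label space first

X_train, X_test, y_train, y_test = train_test_split(
    df_final["Summary"], df_final["KeyWords_in_Array_wo_insurance"],
    test_size=0.20, random_state=42)

y_train_mlb = mlb.transform(y_train)   # no unseen labels at transform time
y_test_mlb = mlb.transform(y_test)

This fixes the encoding mismatch only; the rare labels themselves still need more examples, or a stratified multilabel split (e.g. iterative_train_test_split from the scikit-multilearn package), before the model can learn them.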
H: Terminology: "flat geometry" in the context of clustering Sklearn's documentation refers to "flat" or "non-flat" geometry of clusters to describe the use-cases of their implemented clustering algorithms. Those terms are not directly defined. However, the following quote states: Non-flat geometry clustering is useful when the clusters have a specific shape, i.e. a non-flat manifold, and the standard euclidean distance is not the right metric. So far, I do understand that a manifold type of shape is poorly represented by a cluster centroïd and that clustering such dataset should rather rely on local density, nearest neighbors, or connectivity constraint. However, I do not understand how it relates to the concept of flatness according to Wikipedia's definition: In geometry, a flat is a subset of a Euclidean space that is congruent to a Euclidean space of lower dimension. It seems contradictory to me. For instance, a hypersphere is not geometrically flat but would be flat using sklearn's terminology. My questions are then: Are flat and non-flat geometry a legit terminology in machine learning and statistics? If yes, what is the mathematical definition? If no, what is the most appropriate alternative? (e.g. manifold vs convex ???) AI: Are flat and non-flat geometry a legit terminology in machine learning and statistics? These are terminologies from Mathematics, they are valid in any field. What is the mathematical definition? (Non) flat manifold In mathematics, a (Riemannian) manifold is said to be flat if its curvature is everywhere zero; otherwise non-flat. This is very different than the definition of flat object in geometry. According to that definition, only points, lines, and hyper-planes are flat (not for example hyperspheres or segments). For example, set $\left\{(t,t):t\in(-1,1)\right\}$ is a 1D flat manifold in ${\Bbb R}^2$, set $\left\{(t,t^2):t\in(-1,1)\right\}$ is a 1D non-flat (positively curved) manifold in ${\Bbb R}^2$, and a hypersphere is an $n$D non-flat (positively curved) manifold in ${\Bbb R}^{n+1}$. Accordingly, a cluster with a (non) flat shape corresponds to a (non) flat manifold. Here are some examples from the document. Points are concentrated around (A) two 1D non-flat manifolds (circles) which are non-convex, (B) two 1D non-flat manifolds (arcs) which are non-convex, (C) three 1D flat manifolds (segments) which are convex, (D) three 0D flat manifolds (centers as points) which are convex. (Non) flat geometry vs a Flat By "flat geometry" the document means "Euclidean geometry" not "a flat object defined in geometry". If we measure distances (consequently lengths, areas, volumes, etc.) via Euclidean distance we are obeying the Euclidean geometry, otherwise, we are obeying a non-Euclidean geometry. For example, if we measure a distance between two points by following a non-flat manifold (a geodesic), we are obeying a non-Euclidean geometry. As an illustration, In (A), the red line measures a distance obeying a flat geometry, the blue line measures the distance obeying a non-flat geometry (by moving along the non-flat manifold). If an appropriate map of manifold to a lower dimension is possible (B), obeying a flat geometry would be equivalent to obeying the non-flat geometry before mapping.
H: Selection of base model for transfer learning Is there a golden rule which gives intuition on which base model needs to be used for a given image classification problem? Most of the articles give details, like the ones below, on how to train the model based on the dataset. However, I was not able to find a good reference for the selection of the base model. Thank you AI: There is no specific rule associated with the base model selection for transfer learning. It is generally a trade-off between model precision and resource allocation. As the number of layers increases, the number of parameters increases and the model becomes more and more resource-heavy, but deeper models tend to have better accuracy than shallower counterparts. Here's a comparison: Apart from that, also refer to this question: Which is the fastest image pre-trained model? if the speed of the model matters.
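As a sketch of how cheaply base models can be swapped and compared on your own data in Keras (the application classes are real; the input size, class count and data are placeholders):

from tensorflow.keras.applications import MobileNetV2   # try ResNet50, etc. the same way
from tensorflow.keras import layers, models

base = MobileNetV2(weights="imagenet", include_top=False, pooling="avg",
                   input_shape=(224, 224, 3))
base.trainable = False   # keep the pretrained features frozen at first

model = models.Sequential([
    base,
    layers.Dense(num_classes, activation="softmax")   # num_classes is your problem's class count
])
model.compile(optimizer="adam", loss="categorical_crossentropy", metrics=["accuracy"])

Benchmarking two or three candidates this way on a validation split is usually faster than trying to pick the "right" one up front.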
H: Neural network or other algorithms? I have a regression problem, with a million rows or so, around 10-15 features. What should work better on that particular setting? Neural network or regular regressors? AI: This is more a question of how to select the correct machine learning algorithm; I would refer you to the following blog: Which machine learning algorithm should I use? Regression algorithms model the relationship between variables, which is iteratively refined using a measure of error in the predictions. The most popular examples are: Ordinary Least Squares Regression (OLSR), Linear Regression, Logistic Regression, etc. On the other hand, Artificial Neural Network models are inspired by the structure and/or function of biological neural networks. "Neural networks currently provide the best solutions to many problems in image recognition, speech recognition, and natural language processing." Neural Networks and Deep Learning. Neural networks are hard to train; thus my recommendation is not to start with a neural network.
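With a million rows and only 10-15 features, it is also cheap to simply benchmark both families on a subsample and let the data decide; a scikit-learn sketch (X and y are placeholders for your feature matrix and target):

from sklearn.model_selection import cross_val_score
from sklearn.linear_model import Ridge
from sklearn.neural_network import MLPRegressor

for model in (Ridge(),
              MLPRegressor(hidden_layer_sizes=(64, 64), max_iter=300)):
    scores = cross_val_score(model, X, y, cv=3, scoring="neg_mean_absolute_error")
    print(type(model).__name__, -scores.mean())

If the two come out close, the simpler, faster-to-train linear model is usually the better engineering choice.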
H: Plot misclassified values in matplotlib I have two lists. One is target and another is predicted values in a binary classification. The target is always zero. And I have to plot the results of the prediction against target in such a way that when I get a value of 1 in prediction, then I need to plot it with a black dot on a square plane. The plane should contain white area (which means it has both target and prediction as zero) and black dots (in case of target zero but prediction as one). How can I achieve this? Thanks for the help in advance. Something like this : AI: Since the points are only shown when the target is zero and the predicted value is $1$, they would all be located at the same place and would overlap. So I think you can give these points random coordinates and then plot them. If the white space is the density of $(0,0)$ values and depicts the ratio of $(0,0)$ points, then you should change the size of the figure or the points. Do you want to do so?

import random
import numpy as np
import matplotlib.pyplot as plt

predicted = [random.choice([0, 1]) for i in range(100)]
target = np.zeros(100)

# give the (target 0, prediction 1) points random coordinates so they don't overlap
randomed_predicted = [random.random() for i in range(predicted.count(1))]
randomed_target = [random.random() for i in range(len(randomed_predicted))]

plt.figure(figsize=(5, 5))
plt.gca().axes.get_yaxis().set_visible(False)   # hiding the y-axis labels
plt.gca().axes.get_xaxis().set_visible(False)   # hiding the x-axis labels
plt.scatter(randomed_target, randomed_predicted, c="black")
H: Manually creating plants images dataset for machine learning plant type classification and leaf segmentation A group I work with wants to create its own plants data set that will be used for multiple projects like plant type classification and leaf segmentation for starters. They are willing to provide all the means necessary for the creation of this data set. A partner agriculture company is willing to assist with planting whatever is needed and have someone monitor and capture images every X duration as per requirements. Different growth stages are desired as well. I was asked to write the requirements for how the images should be acquired, however I'm not quite sure of that myself. What are some key points one should know for such a task and are there any known guidelines or tutorials available for data set creation? Note that currently for this task only 1 plant will be captured in each image as the non-learning methods that will be used to label and segment the images support only 1 plant per image. I was thinking that based on the time it takes a specific plant to fully grow (which is known, approximately), N images should be captured every X days covering 360° of the plant. Also, images are to be captured from different angles and distances. Some questions I have in mind that I can't seem to answer yet: Number of images: the more the merrier as long as there are differences? What about the location of the (single) plant in the image? Is it crucial or could it be augmented after the data set has been acquired? What about camera resolution? Is any decent camera sufficient for the task or should it be high end? I'm still researching what other considerations there are and would appreciate any information on the matter AI: Yes, in deep learning (image classification) more is merrier, and there are several ways to achieve this. The approach you have mentioned of taking N images every X days is fine if you have the workforce to complete this task. Another method would be to artificially blow up your dataset using image augmentation. This augmentation can include rotating the plants, cropping, blurring, zooming, and much more. There are several libraries available for this task, and you can increase the size of your dataset many times over. The location of the plant inside the image is important. If the picture does not have anything important in it, then there is nothing that the model would gain from it. Augmenting a useless image will create more useless images. The plant does not need to be in the center of the picture, but the picture should contain something important, i.e. a feature that can be helpful to classify that plant. Any decent camera will be more than enough for the task as long as the classification does not depend on very fine features like the patterns on a leaf or its edges, and even then, any decent modern camera nowadays is more than enough to capture these fine details.
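A sketch of library-based augmentation with Keras (the directory path and target size are placeholders):

from tensorflow.keras.preprocessing.image import ImageDataGenerator

augmenter = ImageDataGenerator(
    rotation_range=40,        # random rotations, in degrees
    width_shift_range=0.2,    # random horizontal shifts
    height_shift_range=0.2,   # random vertical shifts
    zoom_range=0.2,
    horizontal_flip=True,
    fill_mode="nearest")

# yields endlessly varied versions of the captured photos
flow = augmenter.flow_from_directory("plants/", target_size=(224, 224), batch_size=32)

This also speaks to the question about plant location: random shifts and crops during augmentation reduce how much the plant's exact position in the frame matters.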
H: k means clustering when k = n I want to find the minimum value of the objective function if we set K equal to the number of samples. I know the objective function is $J=\Sigma_{n=1}^{N}\Sigma_{k=1}^{K}r_{nk}||x_n-\mu_k||^2$ And we take the derivative to get $\mu_k=\frac{\Sigma_{n=1}^{N}r_{nk}x_n}{\Sigma_{n=1}^{N}r_{nk}}$ However, when plugging that back in and setting K=N, it seems unable to be simplified: $J=\Sigma_{n=1}^{N}\Sigma_{k=1}^{N}r_{nk}||x_n-\frac{\Sigma_{n=1}^{N}r_{nk}x_n}{\Sigma_{n=1}^{N}r_{nk}}||^2$ Any help? AI: Well, when $K = N$, the obvious global minimum is $J = 0$: assign each point to its own cluster, so that $\mu_k = x_k$ for every $k$ and every squared distance in the sum vanishes. This also falls out of your derivative, because with $r_{nk} = 1$ iff $n = k$, the centroid formula reduces to $\mu_k = x_k$.
H: How can I measure if a population has the same distribution as another? Is there a population similarity index of some kind which could help me determine if two populations in two different datasets are the same or at least similar? The datasets have the exact same number of variables. I would like a measure which assesses the similarity variable-by-variable, but an overall measure works too. The context of my question is that I would like to know if a variable is suitable or not for use in a classification method, given the similarity of it between populations. AI: What I'm going to say might seem too simple, but it might not be a bad idea to fit a multivariate distribution (such as a Gaussian) to each population and then compare the estimated means and covariance matrices. The mean alone might not carry much information, but the variances and correlations exposed in the covariance matrix can be quite helpful. For the case of determining whether a variable is useful or not, analyzing its correlation with the other variables might also turn out to be useful. (These are my own thoughts.)
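A minimal numpy sketch of that comparison (pop_a and pop_b are placeholders for your two datasets, with rows as samples and columns as variables):

import numpy as np

mean_a, mean_b = pop_a.mean(axis=0), pop_b.mean(axis=0)
cov_a = np.cov(pop_a, rowvar=False)
cov_b = np.cov(pop_b, rowvar=False)

print("mean difference per variable:", mean_a - mean_b)
print("covariance difference (Frobenius norm):", np.linalg.norm(cov_a - cov_b))

For the variable-by-variable check asked about, a two-sample Kolmogorov-Smirnov test on each column (scipy.stats.ks_2samp) is a common non-parametric alternative that does not assume Gaussianity.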
H: Understanding softmax layer in Deep Neural Network In a DNN, once the logits vector is produced, say $[y_0,y_1]$ where the number of neurons in the logits layer is 2, the condition holds where $y_0 \geq 0$ and $y_1 \leq 1$. This vector is then passed into the softmax layer, where it squeezes the values to be within the range of 0 to 1. However, how do I prove that $y_0$ will always be $\geq 0$ and $y_1$ will always be $\leq 1$? From the way I understand it, $y_0 + y_1 = 1$, hence $y_0 \geq 0$ and $y_1 \leq 1$, but I find that it doesn't really prove why $y_0 \geq 0$ and $y_1 \leq 1$. Could anyone assist me with my understanding? AI: Look at the definition of softmax: https://en.wikipedia.org/wiki/Softmax_function The exp() gives a positive value. Since you "standardize" each value $e^{z_i}$ in the numerator by the sum of all the $e^{z_j}$, the values sum to one in total. Is this what you are looking for? Maybe you should have a look at Hastie et al., "Elements of Statistical Learning" (Ch. 11.2 "Neural Nets"). A little R example:

# For standardization (denominator)
base = exp(-2) + exp(5)

# Single exp() values are positive
exp(-2)
[1] 0.1353353
exp(5)
[1] 148.4132

# Standardize
exp(-2)/base
[1] 0.0009110512
exp(5)/base
[1] 0.9990889

# Total sum equals 1
exp(-2)/base + exp(5)/base
[1] 1
H: Using sigmoid in binary DNN output layer instead of softmax? For a binary DNN, the output is $y_0 + y_1 = 1$ since they are the probability distribution, hence the sum must equate to 1. However, I've been told that $y_1$ is sufficient to represent the output of the DNN where: $y_1 = \frac{e^{h_1}}{{e^{h_0} + e^{h_1}}}$ Hence, $h = h_1 - h_0$, why is it that $y$ which is the final output can now be represented as $y = \sigma(h)$, where $\sigma$ is the sigmoid function. Could anyone please explain why? How did it derive to the point where $y = \sigma(h)$? AI: In a binary classification problem you have only 2 classes, let's call them the negative and the positive class. You only need to ouptut 1 number which corresponds to the probability of your input point to belong to the positive class. The sigmoid activation function is good for that because it maps any input value to the range ]0,1[ which is what we want for a probability (it is not a real issue that 0 and 1 are excluded). Since you have only one output number, it makes no sense to use a softmax activation. Softmax activation is used in the multiclass problem where you must predict 1 of N classes where N is greater or equal than 3 and in this case the number of outputs is N (1 probability by class). The softmax function makes that all your outputs sum to 1 and it amplifies the gap between high and low probabilities.
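To make the derivation the question asks for explicit, divide the numerator and denominator of the softmax output by $e^{h_1}$:

$$y_1 = \frac{e^{h_1}}{e^{h_0} + e^{h_1}} = \frac{1}{1 + e^{h_0 - h_1}} = \frac{1}{1 + e^{-(h_1 - h_0)}} = \sigma(h_1 - h_0) = \sigma(h)$$

So a two-class softmax over the logits $(h_0, h_1)$ is exactly a sigmoid applied to the single logit $h = h_1 - h_0$, which is why one output unit with a sigmoid activation is sufficient for binary classification.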
H: training neural nets for OCR We are trying to build an in-house OCR system to extract alphanumeric strings from images (kindly note, most of our clients can't afford to send their data onto the cloud, so that rules out any API from the Googles and Microsofts, and also Tesseract is way too slow for our purpose: we are talking tens of users simultaneously uploading images; Tesseract, I guess, was never designed to be used as such). We have already started using a CTC-based approach to train on our data sets, but there's a fundamental/conceptual issue. By definition, CTC finds a happy path to the most probable unit (in this case, a word), but alphanumeric strings by definition DO NOT have a pattern! So I have a feeling CTC will not work for us. But juxtaposed to that is the multiclass classification approach (where we just predict every individual char), which has this big problem of how to split any image into specific characters before passing it to a CNN layer. We are a little clueless about the approach to be taken here. Any pointers will be heartily appreciated :) AI: CTC can also work with totally random text (and thus without any "word pattern"). I know that because I trained an OCR network with CTC loss on random text, which was then able to read non-random text (fortunately). The CTC just sums the probabilities of all the paths which are "compatible" with the target text (random or not). The higher the sum of probabilities of "compatible" paths, the better the prediction. If not done yet, you should read this very good article explaining the CTC loss. Edit to clarify and answer questions asked in comments: "Do you mean specifically alphanumeric strings?" Yes, I mean random strings with any character, alpha or numeric, it does not matter (even the Chinese or Arabic alphabet if you want). "It needs to have seen a lot of data with patterns to generalize weights for the transition C->A->T, correct?" No. When you train on random text, the network is not using the context of previous chars to predict the current time step, but it is still forced to predict each time step (slice of text) correctly because of the CTC loss. The CTC loss pushes the CNN to compute good features and the RNN to give good probabilities by character for all time steps. In other words, even with random text, the CTC loss still pushes the network to predict each char correctly, independently of the other chars of the text. And actually a system trained on random text generalizes better on unseen words than a system trained on real words, because the latter is biased toward words it has seen during training. For example, a system trained on random text with the Latin alphabet could read English, French and German, but a system trained only on English words won't work well to read another language with the same alphabet. "How can we ever generalize A623BBCEF?" Just because a network trained on random text learnt to read each char independently of the other chars. "The output of such a RNN is a matrix containing character probabilities for each time-step .. how do we calculate character probabilities?" The RNN takes the feature map for the current time step and computes a "score vector" from it. A softmax activation then maps the scores to a probability distribution over the characters (plus the CTC blank symbol). In the end it is just a fully connected layer with as many units as characters, followed by a softmax across those units.
H: What should I check if model accuracy is no better than baseline level (random guess)? I have a dataset with only 8 columns: id, created_time, employee_id, rank_position, hourly_price, num_work_completed, work_category, hired. Hired is the target variable, with 1 representing hired and 0 representing not hired, and it is imbalanced with 5.7% hired (1), which makes the baseline accuracy 94.3%. I am trying to build a model that predicts whether an employee will be hired. After I finished the EDA and feature engineering (dealing with NAs, encoding categorical variables, normalizing numeric variables), I used an 80/20 split and built a random forest with rank_position, hourly_price, num_work_completed, work_category_dummy:

clf = RandomForestClassifier(n_estimators=100, class_weight='balanced')
clf.fit(X_train, y_train)
y_pred = clf.predict(X_test)

However, the model's accuracy (test accuracy) turned out to be 93% while the baseline is 94.3%. The training accuracy is 99%. Compared to the test accuracy of 93%, I don't think there's an over-fitting problem. The logistic regression also has the same problem. Based on the correlation plot, most independent variables have a pretty weak relationship with the target variable, smaller than +/- 0.3. What should I do next to improve my model accuracy? I tried parameter tuning but it doesn't help a lot. AI: This is a common problem with rare events modelling, and your options are relatively limited (as far as I am aware, at least). It may well be the case that the features you're using are not very informative with respect to predicting these outcomes. The model tries to balance false positives and false negatives, but with so few true positives, any mistakenly-predicted positive outcome will have a large effect on your classification accuracy. It seems likely in this case that your predictors do not offer enough information to predict outcomes well. You may have reached the ceiling of what this model can do. This could be an artifact of the rarity of the "hired" outcome in your data set, or it could simply be that the relationship between these predictors and the outcome is weak. There are a few options, involving the use of different techniques (like a Firth regression, designed for rare events modelling). But using different predictors may be the best option, if it is possible to do so. Not every event can be modelled well with some arbitrary set of features, and it may be that you've found one of those.
H: Softmax function result for already normalized probabilities Isn't the aim of softmax function normalizing the probabilities such that they all sum to 1? So when we apply this method to the already normalized numbers, it would change them. what do these new outcomes depict? Since they are changed, are they reliable? from scipy import special scipy.special.softmax([0.4,0.6]) gives: array([0.450166, 0.549834]) AI: Summing to 1 is just one property of the softmax function. The softmax function takes the exponential of each value and divides it by the sum of the exponentials of all values. This tends to cluster values towards the mean, as you've seen in your example. While the outputs of a softmax look and smell like probabilities, their values don't actually correspond to the likelihood of sampling from that distribution. They're better off thought of as the confidence of your prediction.
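You can reproduce scipy's result by hand with numpy, which makes the behaviour transparent:

import numpy as np

x = np.array([0.4, 0.6])
print(np.exp(x) / np.exp(x).sum())   # [0.450166 0.549834], same as scipy.special.softmax

# numerically stable variant (identical output): shift by the max before exponentiating
z = x - x.max()
print(np.exp(z) / np.exp(z).sum())

Note that softmax is not idempotent: feeding it values that already sum to 1 still re-weights them through the exponential, which is exactly what happened in the question.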
H: Random Forest application with 40+ Predictor Variables I am using R package randomForest to build a Random Forest model for classification. Ultimately, I need to choose one of five programs for a group of individuals based on historical data. The final variable that is being predicted is a categorical "1, 2, 3, 4, or 5" variable. I need to incorporate 30-40 variables--their importance will also be analyzed-- to reach a decision for each individual. I have no problem training a model, initially. Below is a sample data set being trained with R's randomForest package. I am running up-to-date versions of both the package and RStudio. set.seed(101) train <- sample(1:nrow(Boston), 300) Boston.rf <- randomForest(medv~., data = Boston, subset = train) Boston.rf My question is: Once I have a model trained, tested, and cross-validated, how do I actually apply new data to this model? Is it viable to use that many variables--all categorical, some binary--for this? AI: The randomForest package supports various tasks using an existing randomForest object. For instance, it offers the predict method which will perform prediction using a given trained forest and a given dataset. As an example, if you have a trained forest called mod and a dataset called my_data, predictions can be obtained by running predict(object=mod,newdata=my_data). It's worth reading through the package documentation which gives good examples of using the functionality. I would note that any new data you supply to a trained forest object should have the same encodings and transformations as the data used to train the forest with.
H: Hand-crafted decision tree inspired from learned decision tree Goal of this question: As I am the only 'machine learning guy' in our group, I wanted to get an outsider's view, that is, a sanity check on whether what I am doing adheres at least to 'decent practices' in machine learning (I know it's not best practice :) ). Problem setup: I am working on a classification task on biomedical signals (detection of hypertension from physiological signals other than blood pressure). Since I do not have too much high quality labeled data for training a powerful classifier (say a larger conv-net), currently my procedure is as follows: Feature engineering (manually engineered features, mainly driven by (physiology) domain insights). Train supervised learning classifiers, in particular tree-based algorithms such as random forests and simple decision trees. Now, since currently I do not have enough high quality labels and I need to ship some classifier soon (in addition, as it is a medical application, I really care about stability and to some degree interpretability), I thought I could go for a manually built expert system, that is, some rule-based system (if-else) using the most relevant features according to the learned random forest or decision tree classifier. For the cutoffs on the features I can use the values learned by the single decision tree. In addition, I would start from a learned decision tree and adapt it (as some learned splits are total nonsense, i.e., we are in the overfitting regime). Question: Is this procedure fine as long as I test my (hand-crafted and machine learning inspired) classifier on unseen data? I'd be happy to hear your experiences in similar situations! Thanks AI: I did this once, in a position similar to yours. My constraints were: People skeptical of machine learning Required interpretability Required very fast inference Explainable in text documentation Implementable in any language Traditionally, this would have been done by an expert system, so training a decision tree was essentially just one step further. On my end, all of the work was being done in Python. I tried a handful of ML models, with and without hand-crafted features. A single decision tree performed nearly as well as anything I tried, and outperformed a fully handcrafted expert system. I then created an "inference model" in the form of if-statements for each split in the tree to confirm performance was equal to the decision tree itself. I even manually pruned away some low-entropy splits and splits where a single feature was underutilized, and found that, for the same number of branches, a larger trained tree with some nodes manually removed outperformed a small tree that I replicated exactly. Some of the splits were unintuitive, but that's partially the point of throwing a model at it. Be sure to really dig into the failure modes of every alternative to help everyone involved have confidence in your work. Given the constraints, it was a great solution to the problem.
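If you take this route, scikit-learn can dump the learned splits for you, which makes the "tree to if-statements" step mechanical; a sketch (clf, X, y and the feature names are placeholders, and the printed rule below is purely illustrative):

from sklearn.tree import DecisionTreeClassifier, export_text

clf = DecisionTreeClassifier(max_depth=4, min_samples_leaf=50).fit(X, y)
print(export_text(clf, feature_names=list(feature_names)))
# illustrative output:
# |--- some_feature <= 0.52
# |   |--- class: 0
# ...

You can then hand-edit the printed rules (drop low-value splits, round cutoffs to physiologically sensible values) and re-test the edited rule set against the original tree on held-out data.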
H: Is there a way (like planning algorithms) to draw a successive polyline to fill a specific shape (like triangle)? There is a specific shape (like a triangle) area; I am trying to develop a program to draw a successive polyline inside the triangle to fill it, one line, 3 layers. This post demonstrates some algorithms to solve this category of problems, but none of them is suitable for this task. Any ideas? AI: Wouldn't it be easier to simply apply geometric computation to your triangle to get smaller triangles whose vertices can be used for the polyline? With Wolfram Language: You have a triangle.

shp = Triangle[];
Graphics[{LightGray, shp}]

A ScalingTransform about the RegionCentroid can be performed with TransformedRegion to get smaller inner triangles. Below, shp is scaled by factors 0.8, 0.6, and 0.4 for the inner triangles.

scaledShps = TransformedRegion[shp, ScalingTransform[{#, #}, RegionCentroid@shp]] & /@ Range[.8, .4, -.2];
SeedRandom[456]
Graphics[{LightGray, shp, Riffle[Hue /@ RandomReal[1, Length@scaledShps], scaledShps]}]

Each of scaledShps has the vertices as the first argument. Map (/@) First to collect these and Flatten them into a single Line.

Graphics[{
  LightGray, shp,
  Orange, Thick,
  Line@Flatten[First /@ scaledShps, 1]
}]

Get more layers by increasing the number of scaling factors. For example, from .95 to .05 in steps of -.05:

scaledShps = TransformedRegion[shp, ScalingTransform[{#, #}, RegionCentroid@shp]] & /@ Range[.95, .05, -.05];
Graphics[{
  LightGray, shp,
  Orange, Thick,
  Line@Flatten[First /@ scaledShps, 1]
}]

Wolfram recently released a free Wolfram Engine that can be called from many languages (Python, C, ...), so you can use the above code directly in your project. Hope this helps.
H: Is it a good idea to use tensorflow instead of numpy for numerical approximations? I intend on performing some numerical approximations for a problem in physics. The main gist of the program will be to perform an SVD on large sparse matrices and also to calculate the trace of a large matrix. I used numpy/scipy and multiprocessing modules in python to do this, but it is not fast enough. I also implemented the same code using tensorflow on some GPUs. Tensorflow does the calculations much faster, but it takes a long time to send the data into the GPUs and bring it out. Each instance of data is sent individually as float values. Is there any way to send all the data at once into the GPU? Is it a good or bad idea to use tensorflow for numerical calculations? Below is a snippet of my code....

def Expectation_Value(density, N):
    expec = 0
    for i in np.arange(1, N+1):
        a = tf.linalg.trace(tf.matmul(Sigma(i, N, Sz), density))
        expec += a
    return expec/N

def main():
    df = pd.read_csv('.....')
    sess = tf.Session()
    for l in range(len(df)):
        Delta = df['Delta'].iloc[l]
        Omega = df['Omega'].iloc[l]
        Gamma = df['Gamma'].iloc[l]
        J = df['J'].iloc[l]
        Sz_tens = tf.zeros(len(df))
        Expec = Expectation_Value(DMT, N)
        Sz_tens[l] = Expec
    Sz_arr = Sz_tens.eval(session=sess)
    df['Sz'] = Sz_arr
    sess.close()
    df.head()

AI: TensorFlow is a library that supports the dataflow paradigm, which makes it a good candidate for numerical calculations. In general, TensorFlow data loading performance can be optimized using the tf.data module. In particular, there is a concept called GPU starvation, where GPUs are waiting for data to be loaded. GPU starvation can be minimized in TensorFlow with pipelining, overlapping the preprocessing and model execution of a training step. In particular, your code can be optimized by refactoring Expectation_Value to operate on whole tensors instead of scalars. Right now N and i are scalars and there is a for-loop that processes elements one at a time. tf.linalg.trace can take a batch of matrices as input, so the loop over i can be replaced by a single batched multiply-and-trace. The function will be much faster once it operates on whole tensors and no longer uses a Python for-loop.
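A hedged sketch of that vectorization (sigma_stack is assumed to be the N sigma matrices precomputed and stacked into a single [N, d, d] tensor; whether this fits in GPU memory depends on d and N):

import tensorflow as tf

def expectation_value(density, sigma_stack):
    # sigma_stack: [N, d, d]; density: [d, d]
    density_batch = tf.broadcast_to(density, tf.shape(sigma_stack))   # [N, d, d]
    products = tf.matmul(sigma_stack, density_batch)                  # one matmul for all i
    traces = tf.linalg.trace(products)                                # trace per matrix, shape [N]
    n = tf.cast(tf.shape(sigma_stack)[0], traces.dtype)
    return tf.reduce_sum(traces) / n

One batched call keeps the whole computation on the GPU instead of paying a host-device round trip per loop iteration.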
H: Why does my linear regression model converge to a non-zero gradient value? I have a basic 2D Linear Regression model coded out (using gradient descent), yet it doesn't seem to work as well as it should. What I expect is that m and c should approach 4 and 3 respectively, and m's slope or c's slope should tend to 0; yet what is actually happening is that c's slope approaches a non-zero value, and c itself approaches a value depending on the epoch (around 0.5 with an epoch of 100). If I look at the graph of c, it very slowly tends up over time, though. Code here:

import random, math
import matplotlib.pyplot as plt

def linreg(x, y):
    """ Performs linear regression: input x, output y. """
    n = float(len(x))
    m = random.random()
    c = random.random()
    dm, dc = [], []
    rate = 0.00001
    epoch = 100
    for run in range(epoch):
        d_m = 0
        d_c = 0
        for i in range(len(x)):
            d_m += (y[i] - m*x[i] - c)*x[i]
            d_c += (y[i] - m*x[i] - c)
        d_m *= -2/n
        d_c *= -2/n
        m -= d_m * rate
        c -= d_c * rate
        dm.append(d_m)
        dc.append(d_c)
    return m, c, dm, dc

x = [i for i in range(400)]
y = [4*i + 3 for i in x]
m, c, dm, dc = linreg(x, y)
print(m, c)

plt.grid()
plt.scatter(x, y)
plt.plot(x, [m*i + c for i in x], color='red')
plt.show()

plt.grid()
plt.plot([i for i in range(len(dm))], dm)
plt.plot([i for i in range(len(dc))], dc, color='red')
plt.show()

AI: Those x values are much larger than they should be. If you normalize your x and increase the learning rate, it will converge easily. The general assumption is that the input comes from a (roughly) standard normal distribution, so we transform the input accordingly. Here is the code (notice that I only subtracted the mean from x and divided it by its standard deviation, and I changed the rate to 0.05):

import random, math
import matplotlib.pyplot as plt
import numpy as np

def linreg(x, y):
    """ Performs linear regression: input x, output y. """
    n = float(len(x))
    m = random.random()
    c = random.random()
    dm, dc = [], []
    rate = 0.05
    epoch = 100
    for run in range(epoch):
        d_m = 0
        d_c = 0
        for i in range(len(x)):
            d_m += (y[i] - m*x[i] - c)*x[i]
            d_c += (y[i] - m*x[i] - c)
        d_m *= -2/n
        d_c *= -2/n
        m -= d_m * rate
        c -= d_c * rate
        dm.append(d_m)
        dc.append(d_c)
    return m, c, dm, dc

x = [i for i in range(400)]
x_mean = np.mean(x)
x_std = np.std(x)
x = [(i - x_mean) / x_std for i in x]
y = [4*i + 3 for i in x]
m, c, dm, dc = linreg(x, y)
print(m, c)

plt.grid()
plt.scatter(x, y)
plt.plot(x, [m*i + c for i in x], color='red')
plt.show()

plt.grid()
plt.plot([i for i in range(len(dm))], dm)
plt.plot([i for i in range(len(dc))], dc, color='red')
plt.show()

the output in my case: 3.999898446167822 2.999938454995635
H: When I try to predict with my model I get an Attribute error After I've created my model using keras sequential, I tried to start predicting on a small sample to see if it would work however I get this error and I have no idea why. error : AttributeError Traceback (most recent call last) in () ----> 1 example_results = model.predict(example_batch) AttributeError: 'NoneType' object has no attribute 'predict' The code for my model can be found here : https://colab.research.google.com/drive/1mxRIx5ufVsA0ljdTpL0Ud0qm2y39eyGX I've done it using this method before but it refuses to work this time around. AI: In cell 10 of your notebook you write: model = build_model() #Perfect In cell 11 you write: model = model.summary() #Uh oh!! later you call example_result = model.predict(example_batch) #model.summary() returns None and python tells you AttributeError: 'NoneType' object has no attribute 'predict' This is because you reassigned model in cell 11 to, well, nothing. You should remove the model = in cell 11 and your code will run perfectly! redo cell 11 to read: model.summary()
H: Normalized metric for comparing regression models performance I was recently trying to explain to someone whether the performance of my estimation approach is good or bad. For instance, whether a model with a Mean Absolute Error (MAE) of 17000 is a bad solution. It was also hard for me to explain whether a performance loss of 225 (in terms of MAE), when switching from one model to another, is significant or not. To me it was clear that both are small because I knew the context: we were talking about predicting house prices ranging from \$34,900 (min) to \$755,000 (max), so that MAE=17,000 is just 2.5% of the difference between max and min, and a change in MAE of \$225 is just 0.03% of the difference between max and min. Are there some normalized metrics for comparing the performance of regression models without the need to know the context? Which of those metrics are available in scikit-learn? For instance, it provides mean_absolute_error and mean_squared_error, but they are not normalized. Edit: After Joe B's suggestions, I've plotted a graph to see deviations between predicted and expected price. In fact, that gives more insight than a single-number metric: AI: A couple of suggestions: MAE represents your mean error. This is essential to recognize, as having an error of even half of your MAE on the low end of your spectrum (\$34,900) is huge. However, to your point, at the high end of your spectrum it is quite small. To address the above, you should plot a graph to identify the patterns in your error: plot your predicted y value vs. your actual y value and see where your predictions deviate from the y=x line. Normalized metrics: you may use an r-squared value. This might be ideal because it is looking at the percent of variance explained by your regression and thus is a relative measure of fit. Your r-squared value will always be between 0 and 1 (assuming your model is not worse than always guessing your mean value), where 1 is perfect. Implement as follows:

from sklearn.metrics import r2_score

y_pred = model.predict(X_test)
print(r2_score(y_test, y_pred))
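Since the question already frames MAE relative to the price range, that can be turned into a metric directly: a range-normalized MAE (sometimes reported as NMAE). A sketch, reusing the y_test and y_pred above:

from sklearn.metrics import mean_absolute_error

nmae = mean_absolute_error(y_test, y_pred) / (y_test.max() - y_test.min())
print(f"NMAE: {nmae:.4f}")   # e.g. 17000 / (755000 - 34900) is roughly 0.0236

This is unitless, so it is comparable across targets with different scales; normalizing by the mean or the interquartile range of y are common variants.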
H: Installing skmca python package I can't figure out how to install the python package named skmca ( MCA for sickit-learn ) : https://github.com/TomAugspurger/skmca There is no pip or conda command available. Thanks for your help. AI: Check their reference to the following page for PIP install: https://github.com/MaxHalford/Prince
H: How can I train a many-to-one RNN with an array of 2D matrices? I have eye tracking data for every word of a novel. Features for every word are given separately. I want to take groups of 100 words to make a sample and then use each of these samples as a single training instance (instead of a single word). My samples are 2D matrices of shape (no. of words, features). How can I train my many-to-one RNN with these samples? AI: You can use a CNN just before feeding your raw matrix into the RNN. Another option, which is made for exactly that purpose, is ConvLSTM. Several research papers show that ConvLSTM can be used to process sequential frames of video, which is similar to your input type.
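For the shape described in the question, samples of 100 words with some number of features each, a plain many-to-one LSTM already fits; a Keras sketch (n_features, X and y are placeholders):

from tensorflow.keras import layers, models

model = models.Sequential([
    # 100 words per sample, n_features per word; returns only the last state (many-to-one)
    layers.LSTM(64, input_shape=(100, n_features)),
    layers.Dense(1)   # single output per 100-word sample
])
model.compile(optimizer="adam", loss="mse")
model.fit(X, y, epochs=10)   # X has shape (n_samples, 100, n_features)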
H: Keras MLP Classifer Model not converging I'm trying to make MLP based classifier based on numerical and categorical data The train_X (Input) data that I'm working with is look like that each data type is 20 numerical and 1 categorical data ,categorical here I've converted that using cat.codes My train_Y (prediction) data looks like this : winPlacePerc 0 0.4444 1 0.6400 2 0.7755 3 0.1667 4 0.1875 5 0.0370 6 0.0000 7 0.7368 8 0.3704 9 0.2143 10 0.3929 its between 0 to 1 I am using here keras sequential model : from keras.models import Sequential from keras.layers import Dense from keras.layers import Dense, Dropout, Activation from keras.callbacks import EarlyStopping #create model model = Sequential() #get number of columns in training data n_cols = train_X.shape[1] #add layers to model model.add(Dense(500, activation='relu', input_shape=(n_cols,))) model.add(Dropout(0.5)) model.add(Dense(1000, activation='relu')) model.add(Dropout(0.5)) model.add(Dense(1000, activation='relu')) model.add(Dropout(0.5)) model.add(Dense(500, activation='relu')) model.add(Dropout(0.5)) model.add(Dropout(0.5)) model.add(Dense(250, activation='relu')) model.add(Dropout(0.5)) model.add(Dense(250, activation='relu')) model.add(Dropout(0.5)) model.add(Dense(250, activation='relu')) model.add(Dropout(0.5)) model.add(Dense(1, activation='sigmoid')) model.compile(optimizer='Adam', loss='binary_crossentropy', metrics=['accuracy']) #set early stopping monitor so the model stops training when it won't improve anymore early_stopping_monitor = EarlyStopping(patience=3) #train model model.fit(train_X, train_y, validation_split=0.4, epochs=30, callbacks=[early_stopping_monitor] here's the model summary Layer (type) Output Shape Param # ================================================================= dense_1 (Dense) (None, 500) 11000 _________________________________________________________________ dropout_1 (Dropout) (None, 500) 0 _________________________________________________________________ dense_2 (Dense) (None, 1000) 501000 _________________________________________________________________ dropout_2 (Dropout) (None, 1000) 0 _________________________________________________________________ dense_3 (Dense) (None, 1000) 1001000 _________________________________________________________________ dropout_3 (Dropout) (None, 1000) 0 _________________________________________________________________ dense_4 (Dense) (None, 500) 500500 _________________________________________________________________ dropout_4 (Dropout) (None, 500) 0 _________________________________________________________________ dropout_5 (Dropout) (None, 500) 0 _________________________________________________________________ dense_5 (Dense) (None, 250) 125250 _________________________________________________________________ dropout_6 (Dropout) (None, 250) 0 _________________________________________________________________ dense_6 (Dense) (None, 250) 62750 _________________________________________________________________ dropout_7 (Dropout) (None, 250) 0 _________________________________________________________________ dense_7 (Dense) (None, 250) 62750 _________________________________________________________________ dropout_8 (Dropout) (None, 250) 0 _________________________________________________________________ dense_8 (Dense) (None, 1) 251 ================================================================= Total params: 2,264,501 Trainable params: 2,264,501 Non-trainable params: 0 _________________________________________________________________ since 
the output is between 0 and 1, I used sigmoid as my last layer. But the problem is that the loss is not converging at all. What's the issue here? I also happen to have some 400k training rows, could that be the reason? AI: The dense layers will expect numerical features, so you have to transform the categorical features to make them numerical, for example using one-hot encoding. One-hot encoding transforms a categorical feature into a set of booleans. For example, if you have two features, temperature and day of week, and you one-hot encode the day of the week, then you get 6 features for it: (is_monday, is_tuesday, is_wednesday, ...). If the day of the week is a Monday you get a 1 for that feature and 0's for all the others. If it's a Sunday, you get 0's for all of them, which is why you don't need 7 features to encode 7 values. Now about your problem: Start with a simpler model, for example one or two layers, and make step-wise improvements from there. Two subsequent drop-out layers make no sense; if you wanted that, use one drop-out layer whose keep probability is the product of the two existing keep probabilities (two dropouts of 0.5 behave like a single dropout of 0.75), but I don't think you want that. Why are you saying it doesn't converge? Plot the loss as a function of the number of iterations (or after each epoch, that is, after each pass through the data). It should be shrinking. Maybe you are just not training long enough.
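A sketch of the "start simpler and look at the curves" advice, reusing the variable names from the question:

from keras.models import Sequential
from keras.layers import Dense
import matplotlib.pyplot as plt

model = Sequential([
    Dense(64, activation='relu', input_shape=(n_cols,)),
    Dense(1, activation='sigmoid')
])
model.compile(optimizer='Adam', loss='binary_crossentropy')

history = model.fit(train_X, train_y, validation_split=0.4, epochs=30)

plt.plot(history.history['loss'], label='train')
plt.plot(history.history['val_loss'], label='validation')
plt.legend()
plt.show()   # only add capacity back once this simple model's curves look sane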
H: What is the standard model for CNN to compare against other classifier technique? I have created a new method to do binary image classification. I think it would be interesting to compare it to the convolutional neural network that would do the same binary classification given the same data sets. The question I have is what would be a standard architecture of such CNN model? How much should I tune it to increase the accuracy of the CNN model so it is fair to compare it to my methodology? I want this to be as fair as possible with the chance of maybe having my new methodology published. I tried googling my question but I could not stumble upon any blog about it. I would appreciate some guidance or links. AI: If your model is simple and you don't have a lot of training data, then you need a model with few parameters to compare to, or else you won't be able to train it. Using standard CNN architectures may not be a good option in this case, because even if the point of using a CNN rather than a full-connected network is to reduce the parameters, they still have many millions of parameters. Depending on the details of the problem, I think you have to design something yourself. Have a look at this Kaggle guide on choosing a CNN architecture. If you have a lot of input data and a complex model that you're comparing to, I would recommend using a standard architecture that people are familiar with, such as VGG-16 or VGG-19, so that a performance comparison means something to the reader. You can get these models with pretrained weights, so you don't need to do any training necessarily. However, they are set up for multiclass classification and typically trained on ImageNet, so you need to adapt the classification part. The simplest way is to replace the last (output) layer with a single output node for your binary classification. To make a fair comparison you then need to train the entire model on the same training dataset used for your model, and evaluate on the same test set. But I would start by retraining the final FC layers first, to get a quick preview of how it's going. Then retrain the full model for the fair comparison.
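A sketch of the "replace the head" adaptation with a pretrained VGG-16 in Keras (the image size and the hidden-layer width are placeholder choices):

from tensorflow.keras.applications import VGG16
from tensorflow.keras import layers, models

base = VGG16(weights="imagenet", include_top=False, input_shape=(224, 224, 3))
base.trainable = False   # first pass: retrain only the new head

model = models.Sequential([
    base,
    layers.Flatten(),
    layers.Dense(256, activation="relu"),
    layers.Dense(1, activation="sigmoid")   # single output node for binary classification
])
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])

For the fair comparison described above, set base.trainable = True afterwards and fine-tune the whole network on the same training set used by your own method, evaluating on the same test set.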
H: K-Fold and Random Subsampling (RSS) Dataset generation? Let's say I have a large dataset of 300k instances with 200 features, and I want to reduce its size. Can I apply the K-Fold technique to the 200 features, and then apply RSS to the trimmed dataset to trim the instances? It's like K-Fold to reduce features, then RSS to reduce instances, so that I get a small-scale dataset with fewer features and randomized, smaller instances. Can this be done? AI: K-fold cross-validation is a technique you use when you have too little training data: it lets you train on almost all of the data while still estimating performance reliably, leaving only a tiny bit for testing each time (for example, 5%). You use k-fold cross-validation to select a different 5% for testing each time, and a different 95% of the full dataset for training each time. That way, you can get a good idea of the performance of your model, even though only 5% of the data is held out for testing at a time. This is just an example; it could be that 5% is fine, or even too much, to hold out for testing. It depends mainly on the size of your dataset, and in the case of classification, for example, it matters whether one class has very few examples, so that perhaps your 5% for testing would not contain a single example of that class. What you're trying to do is called dimensionality reduction. The go-to technique for that is PCA. Make sure you normalize your features first (subtract the mean and divide by the standard deviation), and then drop features until you still have, for example, 95% of the variance of your data explained. If in the end you find that you cannot get good performance from your model, it could be that you have lost too much information. In that case, you may have to increase this to 99%, adding features back in. RSS is OK to reduce the number of training examples in your dataset, but it is only necessary if the order of examples is not random already. If it's random already, then you can just train on the first 10%, or whatever you need, and save yourself the trouble.
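A sketch of the variance-threshold approach with scikit-learn (X stands for the 300k x 200 matrix from the question):

from sklearn.preprocessing import StandardScaler
from sklearn.decomposition import PCA

X_scaled = StandardScaler().fit_transform(X)   # zero mean, unit variance per feature
pca = PCA(n_components=0.95)                   # keep enough components for 95% of the variance
X_reduced = pca.fit_transform(X_scaled)
print(X_reduced.shape, pca.n_components_)

To also reduce the number of rows, take a random subsample afterwards, for example with sklearn.utils.resample or numpy.random.choice on the row indices.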
H: Transfer learning for a regression problem If my understanding is correct, in the case of image classification and NLP, if I have a pre-trained model, to train on new data I can reshape the data according to the pre-trained model. So there is no problem even if the new data is slightly different from the previous data. I am trying to use transfer learning for a regression problem. Consider that I train a base model with 15 parameters and 1 million rows. Now I want to use this model for a similar problem, but I have only 14 parameters; one parameter is missing. Will the pre-trained model be of any use? Is there a way I can use transfer learning in such cases? Thank you. AI: The question is whether you can provide the trained model with the data it expects, so that it can produce a good output. In your example, for the model to provide an output, you will have to provide 15 inputs. If you have only 14, can you put a value for input 15 that will make sense to the model? If feature 15 is a person's age, can you make an estimate? If you trained a model on 16x16 images (256 features) but now you have images that are 4x4 (16 features), you may be able to just provide 256 - 16 = 240 black pixels and the model may work fine. Or you could reformat your data to multiply each pixel into a 4x4 patch, and get a 16x16 image that way. What you cannot do is simply reshape the input so that it's 15 columns wide. If you do that, the first example you provide to your model will contain 14 features from the first real example plus feature number 1 from the next case. That way, the model is using data from two different examples to make a prediction, and it's getting a value for feature 1 on the input node where it expects a value for feature 15, and feature 1 and feature 15 can have very different meanings.
H: What do pandas describe() percentile values tell us about our data? Let's say this is my dataframe:

x = [0.09, 0.95, 0.93, 0.93, 0.34, 0.29, 0.14, 0.23, 0.91, 0.31, 0.62, 0.29, 0.71, 0.26, 0.79, 0.3, 0.1, 0.73, 0.63, 0.61]
x = pd.DataFrame(x)

When we x.describe() this dataframe we get a result like this:

>>> x.describe()
              0
count  20.000000
mean    0.508000
std     0.302770
min     0.090000
25%     0.282500
50%     0.475000
75%     0.745000
max     0.950000

What is meant by the 25, 50, and 75 percentile values? Is it saying 25% of values in x are less than 0.28250? AI: It describes the distribution of your data: the 50% value is "the middle" of the data, also known as the median. The 25% and 75% values are the borders of the lower and upper quarters of the data. So yes: 25% of the values in x fall at or below 0.28250, and likewise 75% fall at or below 0.74500. From these you can get an idea of how skewed your data is. Note that the mean (0.508) is higher than the median (0.475), which means your data is right-skewed. Try:

import pandas as pd
x = [1, 2, 3, 4, 5]
x = pd.DataFrame(x)
x.describe()
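You can verify the percentile interpretation directly (same x as above):

import pandas as pd

x = pd.DataFrame([0.09, 0.95, 0.93, 0.93, 0.34, 0.29, 0.14, 0.23, 0.91, 0.31,
                  0.62, 0.29, 0.71, 0.26, 0.79, 0.30, 0.10, 0.73, 0.63, 0.61])

print(x.quantile(0.25))                        # 0.2825, same as the 25% row of describe()
print((x[0] <= x[0].quantile(0.25)).mean())    # fraction of values at or below it: 0.25

(By default pandas interpolates linearly between order statistics, which is why 0.2825 is not itself one of the data points.)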
H: How to import statsmodels module to use OLS class? I am trying multiple regression:

import numpy as np
import pandas as pd
import matplotlib.pyplot as plt

# Importing Dataset
dataset = pd.read_csv(
    'C:/Users/Rupali Singh/Desktop/ML A-Z/Machine Learning A-Z Template Folder/Part 2 - Regression/Section 5 - Multiple Linear Regression/50_Startups.csv')
print(dataset)
X = dataset.iloc[:, :-1].values
Y = dataset.iloc[:, 4].values

# Categorical Data
from sklearn.preprocessing import LabelEncoder, OneHotEncoder
labelencoder = LabelEncoder()
X[:, 3] = labelencoder.fit_transform(X[:, 3])
onehotencoder = OneHotEncoder(categorical_features=[3])
X = onehotencoder.fit_transform(X).toarray()

# Splitting the dataset into training set and test set
from sklearn.model_selection import train_test_split
X_train, X_test, Y_train, Y_test = train_test_split(X, Y, test_size=0.2)
print(Y_train)

# Fitting Multiple Linear Regression
from sklearn.linear_model import LinearRegression
regressor = LinearRegression()
regressor.fit(X_train, Y_train)

# predicting the test result
Y_pred = regressor.predict(X_test)

This is the part where the error is occurring:

# Building the optimal model with Backward Elimination
import statsmodels.formula.api as sm
X = np.append(arr=np.ones((50, 1)).astype(int), values=X, axis=1)
print(X)
X_opt = X[:, [0, 1, 2, 3, 4, 5]]
regressor_ols = sm.OLS(endog=Y, exog=X_opt).fit()
print(regressor_ols.summary())

This is the error message:

Traceback (most recent call last):
  File "C:/Users/Rupali Singh/PycharmProjects/Machine_Learning/Muliple_Linear_Regression.py", line 39, in <module>
    import statsmodels.formula.api as sm
  File "C:\Users\Rupali Singh\PycharmProjects\Machine_Learning\venv\lib\site-packages\statsmodels\formula\api.py", line 15, in <module>
    from statsmodels.discrete.discrete_model import MNLogit
  File "C:\Users\Rupali Singh\PycharmProjects\Machine_Learning\venv\lib\site-packages\statsmodels\discrete\discrete_model.py", line 45, in <module>
    from statsmodels.distributions import genpoisson_p
  File "C:\Users\Rupali Singh\PycharmProjects\Machine_Learning\venv\lib\site-packages\statsmodels\distributions\__init__.py", line 2, in <module>
    from .edgeworth import ExpandedNormal
  File "C:\Users\Rupali Singh\PycharmProjects\Machine_Learning\venv\lib\site-packages\statsmodels\distributions\edgeworth.py", line 7, in <module>
    from scipy.misc import factorial
ImportError: cannot import name 'factorial'

Process finished with exit code 1

AI: See https://stackoverflow.com/a/56284155/9524424. Your installed statsmodels imports factorial from scipy.misc, which was removed in SciPy 1.3, so you need a matching SciPy version (1.2 instead of 1.3), or a statsmodels release that no longer relies on that import.
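One way to pin the compatible version with pip (run inside the project's virtualenv; upgrading statsmodels instead should also resolve it):

pip install "scipy==1.2.1"
# or, alternatively:
pip install --upgrade statsmodels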
H: Is there any advantage in using Particle Swarm Optimization for clustering over K-Means? I have read some papers about using particle swarm optimization. It doesn't seem to give much different results than K-Means. I tried to use PSO for clustering, but the result is pretty much the same as K-Means, with some drawbacks like longer execution time and a lot of variation in the results caused by the random factor. AI: K-means makes locally optimal decisions. Most of the time (in particular when this objective works reasonably well at all) this works quite well to find a good local optimum. I doubt that using an approach such as PSO gives you any advantage here: the problem is just too simple, and k-means has the speed advantage, while PSO is unlikely to find other optima. In particular, the better k-means algorithms such as Hamerly's, Elkan's, ... are so fast that they'll be able to run with hundreds of random restarts in the time needed for a single PSO run.
H: .h5 file format does not close properly

import h5py  # added
hf = h5py.File('../images.h5', 'w')  # added
hf.close()  # added
h5_file = tables.open_file("images.h5", mode="w")

I also tried:

h5py.File.close(hf)

The error that pops up in both cases is:

ValueError: The file 'restricted_images.h5' is already opened. Please close it before reopening in write mode.

I've also tried:

if isinstance(obj, h5py.File):   # Just HDF5 files
    obj.close()

While In[]: hf gives Out[]: <Closed HDF5 file>, the file is not closed yet. AI: You may want to use a context manager (a with block), which guarantees the file is closed as soon as the block is left:

with h5py.File("some_path.h5") as f:
    f["data1"] = some_data

Applied to your code:

import h5py  # added

with h5py.File('../images.h5', 'w') as f:
    # do whatever you want to do; it will be closed by itself when the block exits
    ...

Also note that the ValueError complains about 'restricted_images.h5', and that your snippet opens the file both via h5py and via PyTables (tables.open_file): make sure every handle to the same file is closed, or opened inside such a with block, before reopening it in write mode.
H: Cross validation Vs. Train Validate Test I have a doubt regarding the cross validation approach and the train-validation-test approach. I was told that I can split a dataset into 3 parts: Train: we train the model. Validation: we validate and adjust model parameters. Test: never seen before data. We get an unbiased final estimate. So far, we have split into three subsets. Until here everything is okay. Attached is a picture: Then I came across the K-fold cross validation approach, and what I don't understand is how I can relate the Test subset from the above approach. Meaning, in 5-fold cross validation we split the data into 5, and in each iteration the non-validation subset is used as the train subset and the validation is used as the test set. But, in terms of the above mentioned example, where is the validation part in k-fold cross validation? We either have a validation or a test subset. When I refer to train/validation/test, that "test" is the scoring: Model development is generally a two-stage process. The first stage is training and validation, during which you apply algorithms to data for which you know the outcomes to uncover patterns between its features and the target variable. The second stage is scoring, in which you apply the trained model to a new dataset. Then, it returns outcomes in the form of probability scores for classification problems and estimated averages for regression problems. Finally, you deploy the trained model into a production application or use the insights it uncovers to improve business processes. As an example, I found the Sci-Kit learn cross validation version as you can see in the following picture: When doing the splitting, you can see that the algorithm that they give you only takes care of the training part of the original dataset. So, in the end, we are not able to perform the final evaluation process as you can see in the attached picture. Thank you! scikitpage AI: If k-fold cross-validation is used to optimize the model parameters, the training set is split into k parts. Training happens k times, each time leaving out a different part of the training set. Typically, the error of these k models is averaged. This is done for each of the model parameters to be tested, and the model with the lowest error is chosen. The test set has not been used so far. Only at the very end is the test set used to test the performance of the (optimized) model.

# example: k-fold cross validation for hyperparameter optimization (k=3)

original data split into training and test set:

|---------------- train ---------------------| |--- test ---|

cross-validation: test set is not used, error is calculated from validation set (k-times) and averaged:

|---- train ------------------|- validation -| |--- test ---|
|---- train ---|- validation -|---- train ---| |--- test ---|
|- validation -|----------- train -----------| |--- test ---|

final measure of model performance: model is trained on all training data and the error is calculated from the test set:

|---------------- train ---------------------|--- test ---|

In some cases, k-fold cross-validation is used on the entire data set if no parameter optimization is needed (this is rare, but it happens). In this case there would not be a validation set and the k parts are used as a test set one by one. The error of each of these k tests is typically averaged.
# example: k-fold cross validation

|----- test -----|------------ train --------------|
|----- train ----|----- test -----|----- train ----|
|------------ train --------------|----- test -----|
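A scikit-learn sketch of the full workflow described above (placeholder data; GridSearchCV runs the inner k-fold loop over the training set only, and the held-out test set is touched exactly once at the end):

from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split, GridSearchCV
from sklearn.svm import SVC

X, y = make_classification(n_samples=1000, random_state=0)

# hold out the final test set first
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=0)

# k-fold cross-validation (k=3) over the training set to pick hyperparameters
search = GridSearchCV(SVC(), param_grid={"C": [0.1, 1, 10]}, cv=3)
search.fit(X_train, y_train)   # refits the best model on all training data by default

# unbiased final estimate on data never used for training or model selection
print(search.best_params_, search.score(X_test, y_test))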