Search Results

Search found 5786 results on 232 pages for 'fast'.


  • MySQL Fast Insert dependent on flag from a separate table

    - by Stuart P
    Hi all. For work I'm dealing with a large database (160 million+ rows a year, 10 years of data) and have a quandary: a large percentage of the data we upload is null data and I'd like to stop it from being uploaded. The data in question is spatial in nature, so I have one table like so:

        idLocations (auto-increment int, PK)
        X (float)
        Y (float)
        Alwaysignore (bool)

    which is used as a reference in a second table like so:

        idLocations (int, PK, "FK")
        idDates (int, PK, "FK")
        DATA1 (float)
        DATA2 (float)
        ...
        DATA7 (float)

    Ideally I'd like to find a method where I can do something like:

        INSERT INTO tblData (idLocations, idDates, DATA1, ..., DATA7)
        VALUES (...), ..., (...)
        WHERE VALUES(idLocations) NOT IN (SELECT idLocations FROM tblLocation WHERE alwaysignore = TRUE)
        ON DUPLICATE KEY UPDATE DATA1 = VALUES(DATA1)

    That is, for my large batch of input data (250 values in a block), ignore the inserts where idLocations matches a location flagged with alwaysignore. Anyone have any suggestions? Cheers. -Stuart

    Other details: running MySQL on a semi-dedicated machine, MyISAM engine for the tables.
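
    A hedged sketch of one way to get that effect (table and column names follow the question; tmpBatch is a hypothetical staging table holding each 250-row block): bulk-load the block into the staging table, then let an INSERT ... SELECT filter out the flagged locations server-side:

        -- Load the 250-row block into tmpBatch first, then:
        INSERT INTO tblData (idLocations, idDates, DATA1, DATA2, DATA3,
                             DATA4, DATA5, DATA6, DATA7)
        SELECT b.idLocations, b.idDates, b.DATA1, b.DATA2, b.DATA3,
               b.DATA4, b.DATA5, b.DATA6, b.DATA7
        FROM tmpBatch b
        JOIN tblLocation L ON L.idLocations = b.idLocations
        WHERE L.Alwaysignore = FALSE     -- skip rows flagged as always-ignore
        ON DUPLICATE KEY UPDATE
            DATA1 = VALUES(DATA1), DATA2 = VALUES(DATA2), DATA3 = VALUES(DATA3),
            DATA4 = VALUES(DATA4), DATA5 = VALUES(DATA5), DATA6 = VALUES(DATA6),
            DATA7 = VALUES(DATA7);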

    Read the article

  • Fast search in XML files in .NET (or How to index XML files)

    - by codymanix
    I have to implement a search feature which is able to quickly perform arbitrarily complex queries on XML data. If the user makes a query, all XML files must be searched to find possible matches. The users will have lots of XML files (a few ten thousand or more) which are typically a few kilobytes in size. All the XML files have almost the same structure. I already benchmarked XPath; it is too slow for my needs. How can it be done most efficiently? Is it possible to create indexes for the contents of the XML files (preserving content semantics, not just plain full-text search)? Would it be useful to put the XML data into an (embedded) SQL database and do the queries with SQL? What other possibilities do I have?
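
    Since the files share a structure, one hedged sketch in C# (the "Item"/"Name" element and attribute names are placeholders, not your real schema): stream every file once with XmlReader and build an inverted index from field values to the files containing them, so a query only has to open candidate files:

        using System.Collections.Generic;
        using System.Xml;

        static Dictionary<string, List<string>> BuildIndex(IEnumerable<string> xmlFiles)
        {
            // Maps a field value to the paths of the files containing it.
            var index = new Dictionary<string, List<string>>();
            foreach (string path in xmlFiles)
            {
                using (XmlReader reader = XmlReader.Create(path))
                {
                    while (reader.Read())
                    {
                        // Placeholders for whatever fields your queries touch.
                        if (reader.NodeType == XmlNodeType.Element && reader.Name == "Item")
                        {
                            string key = reader.GetAttribute("Name");
                            if (key == null) continue;
                            List<string> files;
                            if (!index.TryGetValue(key, out files))
                                index[key] = files = new List<string>();
                            files.Add(path);
                        }
                    }
                }
            }
            return index;
        }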

    Read the article

  • Neural Network settings for fast training

    - by danpalmer
    I am creating a tool for predicting the time and cost of software projects based on past data. The tool uses a neural network to do this and so far the results are promising, but I think I can do a lot more optimisation just by changing the properties of the network. There don't seem to be any rules, or even many best practices, when it comes to these settings, so if anyone with experience could help me I would greatly appreciate it.

    The input data is made up of a series of integers that could go as high as the user wants, but most will be under 100,000, I would have thought. Some will be as low as 1. They are details like the number of people on a project and the cost of a project, as well as details about database entities and use cases. There are 10 inputs in total and 2 outputs (the time and cost). I am using Resilient Propagation to train the network. Currently it has 10 input nodes, 1 hidden layer with 5 nodes, and 2 output nodes. I am training to get under a 5% error rate.

    The algorithm must run on a webserver, so I have put in a measure to stop training when it looks like it isn't going anywhere; this is set to 10,000 training iterations. Currently, when I try to train it with data that is a bit varied, but well within the limits of what we expect users to put into it, it takes a long time to train, hitting the 10,000-iteration limit over and over again. This is the first time I have used a neural network and I don't really know what to expect. If you could give me some hints on what sort of settings I should be using for the network and for the iteration limit, I would greatly appreciate it. Thank you!
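
    One general-purpose thing to check before touching the topology (this is standard neural-network practice, not specific to any library): RPROP usually converges far better when inputs are squashed into a comparable range, because raw values spanning 1 to 100,000 saturate the hidden layer. A minimal C# sketch of log-scaling:

        using System;

        static double[] Normalize(double[] raw)
        {
            // Squash raw inputs (1 .. ~100,000) into roughly [0, 1];
            // log10(100,000) = 5, hence the divisor.
            var scaled = new double[raw.Length];
            for (int i = 0; i < raw.Length; i++)
                scaled[i] = Math.Log10(raw[i] + 1) / 5.0;
            return scaled;
        }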

    Read the article

  • Should I use Python or Assembly for a super fast copy program

    - by PyNEwbie
    As a maintenance issue I need to routinely (3-5 times per year) copy a repository that now has over 20 million files and exceeds 1.5 terabytes in total disk space. I am currently using RICHCOPY, but have tried others. RICHCOPY seems the fastest, but I do not believe I am getting close to the limits of the capabilities of my XP machine.

    I am toying with using what I have read in The Art of Assembly Language to write a program to copy my files. My other thought is to start learning how to multi-thread in Python to do the copies. I am toying with the idea of doing this in assembly because it seems interesting, but while my time is not incredibly precious, it is precious enough that I am trying to get a sense of whether or not I will see significant enough gains in copy speed. I am assuming that I would, but I only really started learning to program 18 months ago and it is still more or less a hobby. Thus I may be missing some fundamental concept of what happens with interpreted languages. Any observations or experiences would be appreciated.

    Note, I am not looking for any code. I have already written a basic copy program in Python 2.6 that is no slower than RICHCOPY. I am looking for some observations on which will give me more speed. Right now it takes me over 50 hours to make a copy from a disk to a Drobo and then back from the Drobo to a disk. I have a LogicCube for when I am simply duplicating a disk, but sometimes I need to go from a disk to the Drobo or the reverse. I am thinking that, given that I can sector-copy a 3/4-full 2-terabyte drive using the LogicCube in under seven hours, I should be able to get close to that using assembly, but I don't know enough to know if this is valid. (Yes, sometimes ignorance is bliss.)

    The reason I need to speed it up is that I have had two or three cycles where something has happened during the copy (fifty hours is a long time to expect the world to hold still) that has caused me to have to trash the copy and start over. For example, last week the water main broke under our building and shorted out the power.
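
    For the multi-threading experiment, a minimal Python 2.6 sketch (standard library only; SRC and DST are placeholder roots). Worth noting up front: a copy like this is almost always disk/bus-bound rather than interpreter-bound, so assembly is unlikely to help and more than a handful of threads may not either:

        import os, shutil, threading, Queue   # Python 2.6 standard library

        SRC, DST = '/path/to/source', '/path/to/drobo'   # hypothetical roots

        def worker(q):
            while True:
                src = q.get()
                if src is None:          # sentinel: no more work
                    return
                dst = os.path.join(DST, os.path.relpath(src, SRC))
                try:
                    os.makedirs(os.path.dirname(dst))
                except OSError:          # directory already exists
                    pass
                shutil.copyfile(src, dst)

        q = Queue.Queue(maxsize=1000)
        threads = [threading.Thread(target=worker, args=(q,)) for _ in range(4)]
        for t in threads:
            t.start()
        for dirpath, _, names in os.walk(SRC):
            for name in names:
                q.put(os.path.join(dirpath, name))
        for t in threads:
            q.put(None)                  # one sentinel per worker
        for t in threads:
            t.join()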

    Read the article

  • Is C# fast enough for games

    - by Matt
    Will a game written in C# have any speed issues after long periods of play, like 24 hours at a time? I'm specifically talking about a 2D RPG similar to the old Final Fantasy or Dragon Quest games. I know that languages like Python will slow down too much; I'm curious how C# would fare.

    Read the article

  • Delphi: Fast(er) widestring concatenation

    - by Ian Boyd
    I have a function whose job is to convert an ADO Recordset into HTML:

        class function RecordsetToHtml(const rs: _Recordset): WideString;

    The guts of the function involve a lot of widestring concatenation:

        while not rs.EOF do
        begin
           Result := Result+CRLF+'<TR>';
           for i := 0 to rs.Fields.Count-1 do
              Result := Result+'<TD>'+VarAsString(rs.Fields[i].Value)+'</TD>';
           Result := Result+'</TR>';
           rs.MoveNext;
        end;

    With a few thousand results, the function takes what any user would feel is too long to run. The Delphi Sampling Profiler shows that 99.3% of the time is spent in widestring concatenation (@WStrCatN and @WStrCat). Can anyone think of a way to improve widestring concatenation? I don't think Delphi 5 has any kind of string builder, and Format doesn't support Unicode. And to make sure nobody tries to weasel out, pretend you are implementing the interface:

        IRecordsetToHtml = interface(IUnknown)
           function RecordsetToHtml(const rs: _Recordset): WideString;
        end;

    Update One: I thought of using an IXMLDOMDocument to build up the HTML as XML. But then I realized that the final HTML would be XHTML and not HTML - a subtle, but important, difference.

    Update Two: Microsoft Knowledge Base article: How To Improve String Concatenation Performance
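
    A minimal sketch of the usual remedy where no string builder exists (the names are mine, not a standard API): preallocate the WideString and track the logical length yourself, so each append is a Move instead of a reallocate-and-copy of the whole result:

        procedure WideAppend(var Buffer: WideString; var Len: Integer;
          const S: WideString);
        var
          SLen: Integer;
        begin
          SLen := Length(S);
          if Len + SLen > Length(Buffer) then
            SetLength(Buffer, (Len + SLen) * 2);  // grow geometrically
          Move(PWideChar(S)^, Buffer[Len + 1], SLen * SizeOf(WideChar));
          Inc(Len, SLen);
        end;

        // Build Result with repeated WideAppend calls, then trim once at
        // the end: SetLength(Result, Len);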

    Read the article

  • How fast can you make linear search?

    - by Mark Probst
    I'm looking to optimize this linear search:

        static int linear (const int *arr, int n, int key) {
            int i = 0;
            while (i < n) {
                if (arr[i] >= key)
                    break;
                ++i;
            }
            return i;
        }

    The array is sorted and the function is supposed to return the index of the first element that is greater than or equal to the key. The array is not large (below 200 elements) and will be prepared once for a large number of searches. Array elements after the n-th can, if necessary, be initialized to something appropriate if that speeds up the search. No, binary search is not allowed, only linear search.
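
    Since elements past the n-th may be initialized, one classic trick is a sentinel that removes the index bound check, cutting the comparisons per element in half. A sketch, assuming the buffer has room for n+1 entries:

        #include <limits.h>

        /* Call once when the array is prepared. */
        void prepare(int *arr, int n) {
            arr[n] = INT_MAX;   /* sentinel: >= any possible key */
        }

        /* One comparison per element; always terminates at the sentinel. */
        static int linear_sentinel(const int *arr, int key) {
            int i = 0;
            while (arr[i] < key)
                ++i;
            return i;
        }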

    Read the article

  • Mootools not loading fast enough in IE6

    - by Tom
    Very random and annoying problem with IE6. We keep our common JS files on a resources server so we only have to update them in one place. As well as our custom classes, we also keep our build of MooTools (Core and More) on the resources server and link to it in the head of our sites. This is fine in all browsers except IE6. In IE6 it seems not to load the core quickly enough from the external link before trying to process the MooTools code in my site.js file; it will go wrong on the first line, "window.addEvent". If I put a MooTools core in a folder where the site is, though, it's fine. Does anyone know why it might be doing this and, if so, a way around it that still keeps the files on the resources domain? Thanks, Tom
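
    One hedged workaround sketch (initSite is a placeholder for whatever site.js currently does at the top level): defer the startup code until the externally hosted core has actually arrived, by polling for the MooTools global the core defines:

        // site.js
        (function waitForCore() {
            if (typeof window.MooTools != 'undefined') {
                initSite();                    // the code that calls window.addEvent etc.
            } else {
                setTimeout(waitForCore, 50);   // check again in 50 ms
            }
        })();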

    Read the article

  • DSP - Problems using the inverse Fast Fourier Transform

    - by Trap
    I've been playing around a little with the Exocortex implementation of the FFT, but I'm having some problems.

    First, after calculating the inverse FFT of an unchanged frequency spectrum obtained by a previous forward FFT, one would expect to get the original signal back, but this is not the case. I had to figure out that I needed to scale the FFT output by about 1/fftLength to get the amplitudes right. Why is this?

    Second, whenever I modify the amplitudes of the frequency bins before calling the iFFT, the signal gets distorted at low frequencies. However, this does not happen if I attenuate all the bins by the same factor. Let me put a very simplified example of the output buffer of a 4-sample FFT:

        // Bin 0 (DC)
        FFTOut[0] = 0.0000610351563
        FFTOut[1] = 0.0

        // Bin 1
        FFTOut[2] = 0.000331878662
        FFTOut[3] = 0.000629425049

        // Central bin
        FFTOut[4] = -0.0000381469727
        FFTOut[5] = 0.0

        // Bin 3, this is a negative frequency bin.
        FFTOut[6] = 0.000331878662
        FFTOut[7] = -0.000629425049

    The output is composed of pairs of floats, each representing the real and imaginary parts of a single bin. So, bin 0 (array indexes 0, 1) would represent the real and imaginary parts of the DC frequency. As you can see, bins 1 and 3 both have the same values (except for the sign of the Im part), so I guess these are the negative frequency values, and finally indexes (4, 5) would be the central frequency bin. To attenuate frequency bin 1, this is what I do:

        // Attenuate the 'positive' bin
        FFTOut[2] *= 0.5;
        FFTOut[3] *= 0.5;

        // Attenuate its corresponding negative bin.
        FFTOut[6] *= 0.5;
        FFTOut[7] *= 0.5;

    For the actual tests I'm using a 1024-length FFT and I always provide all the samples, so no zero-padding is needed.

        // Attenuate
        var halfSize = fftWindowLength / 2;
        float leftFreq = 0f;
        float rightFreq = 22050f;
        for( var c = 1; c < halfSize; c++ )
        {
            var freq = c * (44100d / halfSize);

            // Calc. positive and negative frequency locations.
            var k = c * 2;
            var nk = (fftWindowLength - c) * 2;

            // This kind of attenuation corresponds to a high-pass filter.
            // The attenuation at the transition band is linearly applied; could
            // this be the cause of the distortion of low frequencies?
            var attn = (freq < leftFreq) ? 0 :
                       (freq < rightFreq) ? ((freq - leftFreq) / (rightFreq - leftFreq)) : 1;

            mFFTOut[ k ] *= (float)attn;
            mFFTOut[ k + 1 ] *= (float)attn;
            mFFTOut[ nk ] *= (float)attn;
            mFFTOut[ nk + 1 ] *= (float)attn;
        }

    Obviously I'm doing something wrong but I can't figure out what or where.
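
    On the first point, a sketch of the usual convention (many FFT libraries, apparently including this one, leave normalization to the caller): the forward-plus-inverse round trip gains a factor of N, so divide once by the length after the inverse transform. "samples" here is a hypothetical buffer holding the iFFT output:

        // Undo the factor of N picked up by the forward + inverse round trip.
        for (int i = 0; i < fftWindowLength; i++)
            samples[i] /= fftWindowLength;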

    Read the article

  • Fast or asynchronous AS3 JPEG encoding

    - by Bart van Heukelom
    I'm currently using the JPGEncoder from the AS3 core lib to encode a bitmap to JPEG:

        var enc:JPGEncoder = new JPGEncoder(90);
        var jpg:ByteArray = enc.encode(bitmap);

    Because the bitmap is rather large (3000 x 2000), the encoding takes a long while (about 20 seconds), causing the application to seemingly freeze while encoding. To solve this, I need either an asynchronous encoder, so I can keep updating the screen (with a progress bar or something) while encoding, or an alternative encoder which is simply faster. Is either possible?
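
    Flash Player of that era has no worker threads, but long jobs can be time-sliced across frames so the UI keeps painting. A hedged sketch of the pattern only; encodeSlice is hypothetical, i.e. the JPGEncoder would need modifying to expose an "encode a few rows per call" method:

        import flash.events.Event;
        import flash.utils.getTimer;

        // Assumes this runs in a display object with access to `stage`.
        function encodeAsync(onProgress:Function, onDone:Function):void {
            stage.addEventListener(Event.ENTER_FRAME, function step(e:Event):void {
                var start:int = getTimer();
                while (getTimer() - start < 30) {        // ~30 ms of work per frame
                    if (!encodeSlice()) {                // false once all rows are done
                        stage.removeEventListener(Event.ENTER_FRAME, step);
                        onDone();
                        return;
                    }
                }
                onProgress();                            // chance to redraw the progress bar
            });
        }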

    Read the article

  • Writing a fast parser in Python

    - by panzi
    I've written a hands-on recursive pure-Python parser for a file format (ARFF) we use in one lecture. Now running my exercise submission is awfully slow. Turns out by far the most time is spent in my parser. It's consuming a lot of CPU time; the HD is not the bottleneck. I wonder what performant ways there are to write a parser in Python? I'd rather not rewrite it in C. I tried to use Jython, but that decreased performance a lot! The files I parse are partially huge (> 150 MB) with very long lines. My current parser only needs a look-ahead of one character. I'd post the source here but I don't know if that's such a good idea; after all, the submission deadline has not yet ended. But then, the focus in this exercise is not the parser. You can choose whatever language you want to use and there already is a parser for Java.
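
    One generic speed-up for a character-at-a-time pure-Python parser, sketched below: push the inner loop into the re module so the scanning happens in C. The token pattern here is a guess at ARFF-ish syntax (quoted strings or runs of non-separator characters), not the real grammar:

        import re

        _TOKEN = re.compile(r"'[^']*'|[^,\s]+")

        def tokenize(line):
            # One C-level scan per line instead of thousands of Python-level
            # single-character advances.
            return _TOKEN.findall(line)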

    Read the article

  • Fast single thread comet server, possible?

    - by Pepijn
    I recently encountered a few cases where a server would distribute an event stream that contains the exact same data for all listeners, such as a 'recent activity' box. It occurred to me that it is quite strange and inefficient to have a server like Apache run a thread processing and querying the database for every single comet stream containing the same data. What I would do for those global (not per-user) streams is run a single thread that continuously emits data, and a new (green) thread for every new request that outputs the headers and then 'merges' into the main thread. Is it possible for one thread to serve multiple sockets, or for multiple clients to listen to the same socket? An example:

        o = event      # threads   received
        |  a b         # 3
        o / /          # 3
        |/_/           # 1
        o   c          # 2         a, b
        |  /
        o-/            # 2         a, b
        o              # 1         a, b, c
        |              # connection b closed
        o              # 1         a, c

    Does something like this exist? Would it work? Is it possible to do?

    Disclaimer: I'm not a server expert.
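
    Yes: one thread can watch many sockets; that is exactly what select() (and epoll/kqueue) is for. A minimal single-threaded broadcast sketch in Python; poll_for_event is a placeholder for whatever produces the shared stream:

        import select
        import socket

        server = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
        server.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
        server.bind(('', 8080))
        server.listen(50)

        clients = []
        while True:
            # One thread watches the listening socket; wait briefly for connects.
            readable, _, _ = select.select([server], [], [], 0.5)
            if readable:
                conn, _ = server.accept()
                clients.append(conn)
            event = poll_for_event()          # hypothetical shared event source
            if event:
                for c in clients[:]:
                    try:
                        c.sendall(event)      # same payload to every listener
                    except socket.error:
                        clients.remove(c)     # drop closed connections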

    Read the article

  • Fast way to test if a port is in use using Python

    - by directedition
    I have a Python server that listens on a couple of sockets. At startup, I try to connect to these sockets before listening, so I can be sure that nothing else is already using the port. This adds about three seconds to my server's startup (which is about 0.54 seconds without the test) and I'd like to trim it down. Since I'm only testing localhost, I think a timeout of about 50 milliseconds is more than ample. Unfortunately, the socket.setdefaulttimeout(50) method doesn't seem to work for some reason. How can I trim this down?
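
    One likely culprit: socket timeouts are specified in seconds, so socket.setdefaulttimeout(50) asks for a 50-second timeout, not 50 ms. A sketch with the timeout set per socket (binding to the port instead of connecting to it is another common way to test availability):

        import socket

        def port_in_use(port, host='127.0.0.1'):
            s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
            s.settimeout(0.05)               # 50 ms, in seconds
            try:
                s.connect((host, port))
                return True                  # something accepted: port taken
            except socket.error:
                return False                 # refused or timed out: looks free
            finally:
                s.close()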

    Read the article

  • Fast Ruby HTTP library for large XML downloads

    - by Vlad Zloteanu
    I am consuming various XML-over-HTTP web services returning large XML files (> 2 MB). What would be the fastest Ruby HTTP library to reduce the 'downloading' time?

    Required features:

        both GET and POST requests
        gzip/deflate downloads (Accept-Encoding: deflate, gzip) - very important

    I am deciding between:

        open-uri
        Net::HTTP
        curb

    but you can also come up with other suggestions.

    P.S. To parse the response, I am using a pull parser from Nokogiri, so I don't need an integrated solution like rest-client or hpricot.
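
    On the gzip requirement, a standard-library sketch (the URL is a placeholder): Net::HTTP of that era does not decompress for you, so ask for gzip explicitly and inflate the body by hand:

        require 'net/http'
        require 'uri'
        require 'zlib'
        require 'stringio'

        uri = URI.parse('http://example.com/big.xml')   # hypothetical endpoint
        Net::HTTP.start(uri.host, uri.port) do |http|
          req = Net::HTTP::Get.new(uri.request_uri)
          req['Accept-Encoding'] = 'gzip, deflate'
          res = http.request(req)
          body = res['Content-Encoding'] == 'gzip' ?
                   Zlib::GzipReader.new(StringIO.new(res.body)).read :
                   res.body
          # hand `body` to the Nokogiri pull parser
        end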

    Read the article

  • WPF ListView.CurrentChanged too fast for binding

    - by matt
    My case:

        MVVM
        ListView + Details (custom UserControl)
        List bound to MV.Items (IsSynchronizedWithCurrent=true)
        Details bound to MV.Items.Current
        MV.Items.Count == 100
        about 0.2 sec to read details (lazy mode)

    When I hold the down arrow on the list, very strange things happen: the list items change order, the current item changes in random order, and CPU usage climbs drastically until eventually everything hangs. I've read some posts saying one should start a timer or run the handler in the background, but I am not able to do that, since WPF does all the binding for me. Is there some way to instruct the binding in my DetailsControl to wait a while before accepting the CurrentItem? Or should I give up on the clean solution and write custom code in my MV to handle that?
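
    One hedged approach that stays inside the view model: debounce the current-item change with a DispatcherTimer, so the expensive 0.2 s details read only fires once the selection has been stable for a moment. A sketch; the property and method names are illustrative, not from the question:

        using System;
        using System.Windows.Threading;

        public class ItemsViewModel
        {
            private readonly DispatcherTimer _debounce;
            private object _pendingItem;

            public ItemsViewModel()
            {
                _debounce = new DispatcherTimer
                {
                    Interval = TimeSpan.FromMilliseconds(250)
                };
                _debounce.Tick += delegate
                {
                    _debounce.Stop();
                    LoadDetails(_pendingItem);   // the expensive 0.2 s read
                };
            }

            public object CurrentItem
            {
                set
                {
                    _pendingItem = value;
                    _debounce.Stop();
                    _debounce.Start();           // restart the quiet period
                }
            }

            private void LoadDetails(object item) { /* ... */ }
        }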

    Read the article

  • Java/Swing: the fast/slow UI binding problem

    - by Jason S
    I need a way to bind UI indicators to rapidly changing values. I have a class NumberCruncher which does a bunch of heavy processing in a critical non-UI thread, thousands of iterations of a loop per second, and some number of those iterations result in changes to a set of parameters I care about (think of them as a key-value store). I want to display those at a slower rate in the UI thread; 10-20 Hz would be fine. How can I add MVC-style notification so that my NumberCruncher code doesn't need to know about the UI code/binding?
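
    A minimal sketch of one common decoupling: the cruncher writes into a thread-safe snapshot it owns, and a javax.swing.Timer polls that snapshot on the EDT at ~15 Hz. Neither side references the other directly (class names here are illustrative):

        import java.awt.event.ActionEvent;
        import java.awt.event.ActionListener;
        import java.util.concurrent.ConcurrentHashMap;
        import javax.swing.JLabel;
        import javax.swing.Timer;

        // Shared between NumberCruncher (writer thread) and the UI (reader).
        class ParameterStore {
            private final ConcurrentHashMap<String, Double> values =
                new ConcurrentHashMap<String, Double>();
            void put(String key, double v) { values.put(key, v); }
            Double get(String key) { return values.get(key); }
        }

        // UI side: a Swing Timer fires on the EDT, so touching the label is safe.
        class IndicatorBinding {
            IndicatorBinding(final ParameterStore store, final JLabel label,
                             final String key) {
                new Timer(66, new ActionListener() {       // 66 ms = ~15 Hz
                    public void actionPerformed(ActionEvent e) {
                        Double v = store.get(key);
                        if (v != null) label.setText(v.toString());
                    }
                }).start();
            }
        }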

    Read the article

  • Fast way to code forms in C# that are bound to SQL data

    - by adopilot
    I am coming from the MS Access world and its programming habits. There was a nice utility to make a form from a table: you could simply right-click a table and make a form for it. Now I am looking for something similar for Visual Studio and WinForms. I am trying to develop a simple application for which I need more than 30 forms for handling data. So far I have designed the database tables, keys, and sprocs in SQL 2008, and before I start coding the forms for handling data, I am asking you for guidelines on how to save time while coding them.
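
    The closest WinForms equivalent to the Access right-click is the Data Sources window (Data > Add New Data Source...), from which you can drag a table onto a form to get a bound grid or details view generated for you. Under the covers the designer emits roughly this kind of binding code; a hand-written sketch (connection string and table name are placeholders):

        using System.Data;
        using System.Data.SqlClient;
        using System.Windows.Forms;

        void BindCustomers(DataGridView grid, string connectionString)
        {
            var table = new DataTable();
            using (var adapter = new SqlDataAdapter("SELECT * FROM Customers",
                                                    connectionString))
            {
                adapter.Fill(table);   // load the whole table in one call
            }
            grid.DataSource = new BindingSource { DataSource = table };
        }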

    Read the article

  • postgresql: Fast way to update the latest inserted row

    - by Anonymous
    What is the best way to modify the most recently added row without using a temporary table? E.g. the table structure is:

        id | text | date

    My current approach would be an insert with the PostgreSQL-specific clause RETURNING id, so that I can update the table afterwards with:

        UPDATE myTable SET date = '2013-11-11' WHERE id = lastRow

    However, I have the feeling that PostgreSQL is not simply using the last row but is iterating through millions of entries until "id = lastRow" is found. How can I directly access the last added row?
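
    For what it's worth, the pattern in the question is already the right shape, and the feared scan should not happen: id is the primary key, so the UPDATE is a B-tree index lookup, not an iteration over millions of rows. Spelled out as a sketch:

        -- Insert and capture the generated id in one round trip.
        INSERT INTO myTable (text, date)
        VALUES ('some text', '2013-11-10')
        RETURNING id;

        -- Later, using the id the client stored (42 as an example):
        UPDATE myTable SET date = '2013-11-11' WHERE id = 42;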

    Read the article

  • ffmpeg libxvid settings for optimal quality and preferably fast encoding

    - by dropson
    What ffmpeg settings should I use to convert a video into Xvid with a mixed speed and quality ratio, using two passes, and alternatively one pass? Currently I use the following for just one pass, but I need a better suggestion:

        -acodec libmp3lame -ab 128 -ar 44100 -ac 2 -vcodec libxvid -qmin 3 -qmax 5 -mbd 2 -bf 2 -flags +4mv -trellis -aic -cmp 2 -subcmp 2 -g 2 -maxrate 1300 -b 1200 -threads 0
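
    A hedged two-pass sketch built on the same options (flag spellings match ffmpeg builds of that era; newer builds renamed -b/-ab to -b:v/-b:a). The first pass only gathers rate statistics, so audio is disabled and the output discarded:

        # Pass 1: collect stats only - no audio, throw the output away.
        ffmpeg -i input.avi -an -vcodec libxvid -b 1200k -mbd 2 -bf 2 \
               -flags +4mv -pass 1 -f avi -y /dev/null

        # Pass 2: the real encode, reusing the pass-1 stats.
        ffmpeg -i input.avi -acodec libmp3lame -ab 128k -ar 44100 -ac 2 \
               -vcodec libxvid -b 1200k -mbd 2 -bf 2 -flags +4mv \
               -pass 2 output.avi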

    Read the article

  • Fast Lightweight Image Comparison Metric Algorithm

    - by gav
    Hi All, I am developing an application for the Android platform which contains 1000+ image filters that have been 'evolved'. When a user selects a photo I want to present the most relevant filters first. This 'relevance' should be dependent on previous use cases. I have already developed tools that register when a filtered image is saved; this combination of filter and image can be seen as the training data for my system.

    The issue is that the comparison must occur between selecting an image and the next screen coming up. From a UI point of view I need the whole process to take less than 4 seconds: select an image, obtain a metric to use for similarity, check against use cases, return the 6 closest matches. I figure with 4 seconds I can use animations and progress dialogs to keep the user happy. Due to platform constraints I am fairly limited in the computational expense of the algorithm. I have implemented a technique adapted from various online tutorials for running C code on the G1, hence this language is available.

    Specific constraints:

        Qualcomm® MSM7201A™, 528 MHz processor
        320 x 480 pixel bitmap in 32-bit ARGB
        ~2 seconds computational time for the native method to get the metric
        ~2 seconds to compare the metric of the current image with the training data

    This is an academic project so all ideas are welcome; anything you can think of or have heard about would be of interest to me.

    My ideas:

        I want to keep the complexity down (O(n*m)?) by using pixel data only rather than a neighbourhood function.
        I was looking at using the colour histogram/greyscale histogram/texture/entropy of the image, combining them to make the measure.
        There will be an obvious loss of information, but I need the resultant metric to be substantially smaller than the memory footprint of the image (~0.512 MB).

    As I said, any ideas to direct my research would be fantastic. Kind regards, Gavin
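
    As a starting point for one component of such a metric, a sketch of a cheap O(n) pass (plain Java, nothing Android-specific): a normalized 64-bin luminance histogram over the raw ARGB pixels, which two images can then compare by histogram intersection:

        // Build a normalized 64-bin luminance histogram from ARGB pixels.
        static float[] luminanceHistogram(int[] argb) {
            float[] hist = new float[64];
            for (int p : argb) {
                int r = (p >> 16) & 0xFF, g = (p >> 8) & 0xFF, b = p & 0xFF;
                int lum = (r * 77 + g * 151 + b * 28) >> 8;  // integer luma, 0..255
                hist[lum >> 2]++;                            // 256 levels -> 64 bins
            }
            for (int i = 0; i < hist.length; i++) hist[i] /= argb.length;
            return hist;
        }

        // Similarity in [0,1]: sum of per-bin minima of two normalized histograms.
        static float intersection(float[] a, float[] b) {
            float s = 0;
            for (int i = 0; i < a.length; i++) s += Math.min(a[i], b[i]);
            return s;
        }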

    Read the article

  • Fast, very lightweight algorithm for camera motion detection?

    - by Ertebolle
    I'm working on an augmented reality app for iPhone that involves a very processor-intensive object recognition algorithm (pushing the CPU to 100%, it can get through maybe 5 frames per second), and in an effort to both save battery power and make the whole thing less "jittery" I'm trying to come up with a way to only run that object recognizer when the user is actually moving the camera around.

    My first thought was to simply use the iPhone's accelerometer/gyroscope, but in testing I found that very often people would move the iPhone at a consistent enough attitude and velocity that there wouldn't be any way to tell that it was still in motion. So that left the option of analyzing the actual video feed and detecting movement in that. I got OpenCV working and tried running their pyramidal Lucas-Kanade optical flow algorithm, which works well but seems to be almost as processor-intensive as my object recognizer - I can get it to an acceptable framerate if I lower the depth levels / downsample the image / track fewer points, but then accuracy suffers and it starts to miss some large movements and trigger on small hand-shaking-y ones.

    So my question is: is there another optical flow algorithm that's faster than Lucas-Kanade if I just want to detect the overall magnitude of camera movement? I don't need to track individual objects, I don't even need to know which direction the camera is moving; all I really need is a way to feed something two frames of video and have it tell me how far apart they are.
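
    If all that's needed is "how different are these two frames", a hedged alternative far cheaper than optical flow is plain frame differencing: mean absolute difference over a sparse sample of grayscale pixels, thresholded (with a value tuned empirically) to decide moving versus still. A C sketch:

        #include <stdlib.h>

        /* Mean absolute difference between two grayscale frames, sampling
         * every `stride`-th pixel to keep the cost a small fraction of a
         * full-frame pass. Higher return value = more camera motion. */
        double frame_difference(const unsigned char *prev,
                                const unsigned char *cur,
                                int n_pixels, int stride)
        {
            long sum = 0;
            int count = 0;
            for (int i = 0; i < n_pixels; i += stride) {
                sum += abs((int)cur[i] - (int)prev[i]);
                count++;
            }
            return (double)sum / count;
        }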

    Read the article
