Search Results

Search found 6670 results on 267 pages for 'speed dial'.

Page 234 of 267

  • How can I solve the Log Pile wooden puzzle with a computer program?

    - by craig1410
    Can anyone suggest how to solve the Log Pile wooden puzzle using a computer program? See here to visualise the puzzle: http://www.puzzlethis.co.uk/products/madcow/the_log_pile.htm The picture only shows some of the pieces. The full set of 10 pieces is configured as follows, with 1 representing a peg, -1 representing a hole and 0 representing neither a peg nor a hole:

      -1,1,0,-1,0
      1,0,1,0,0
      1,-1,1,0,0
      -1,-1,0,0,-1
      -1,1,0,1,0
      0,1,0,0,1
      1,0,-1,0,-1
      0,-1,0,1,0
      0,0,-1,1,-1
      1,0,-1,0,0

    The pieces can be interlocked in two layers of 5 pieces each, with the top layer at 90 degrees to the bottom layer as shown in the above link. I have already created a solution to this problem myself using Java, but I feel that it was a clumsy solution and I am interested to see some more sophisticated solutions. Feel free to either suggest a general approach or to provide a working program in the language of your choice. My approach was to use the numeric notation above to create an array of "Logs". I then used a combination/permutation generator to try all possible arrangements of the Logs until a solution was found where all the intersections equated to zero (i.e. peg to hole, hole to peg or blank to blank). I used some speed-ups to detect the first failed intersection for a given permutation and move on to the next permutation. I hope you find this as interesting as I have. Thanks, Craig.
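
    For what it's worth, a minimal brute-force sketch of the approach described above, in Python rather than the original Java. It assumes each piece is used in the orientation listed (no flipping) and that the j-th position of bottom log i meets the i-th position of top log j; both are modelling assumptions, not facts taken from the puzzle's instructions.

      from itertools import permutations

      # The ten pieces exactly as listed above (1 = peg, -1 = hole, 0 = blank).
      PIECES = [
          [-1, 1, 0, -1, 0], [1, 0, 1, 0, 0], [1, -1, 1, 0, 0], [-1, -1, 0, 0, -1],
          [-1, 1, 0, 1, 0], [0, 1, 0, 0, 1], [1, 0, -1, 0, -1], [0, -1, 0, 1, 0],
          [0, 0, -1, 1, -1], [1, 0, -1, 0, 0],
      ]

      def fits(bottom, top):
          # Bottom log i crosses top log j exactly once; the two features there must cancel.
          return all(bottom[i][j] + top[j][i] == 0
                     for i in range(5) for j in range(5))

      def solve():
          # Try every ordering of the ten logs: first five on the bottom, last five on top.
          for layout in permutations(PIECES):
              bottom, top = layout[:5], layout[5:]
              if fits(bottom, top):
                  return bottom, top
          return None

    The all() call gives the same early exit on the first failed intersection that the question mentions.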

    Read the article

  • How to improve Visual C++ compilation times?

    - by dtrosset
    I am compiling 2 C++ projects in a buildbot, on each commit. Both are around 1000 files; one is 100 kloc, the other 170 kloc. Compilation times are very different between gcc (4.4) and Visual C++ (2008). Visual C++ compilations for one project take around 20 minutes. They cannot take advantage of the multiple cores because one project depends on the other. In the end, a full compilation of both projects in Debug and Release, in 32 and 64 bits, takes more than 2 1/2 hours. gcc compilations for one project take around 4 minutes. Each can be parallelized on the 4 cores and then takes around 1 min 10 secs. All 8 builds for the 4 versions (Debug/Release, 32/64 bits) of the 2 projects are compiled in less than 10 minutes. What is happening with Visual C++ compilation times? They are basically 5 times slower. What is the average time that can be expected to compile a C++ kloc? Mine are 7 s/kloc with VC++ and 1.4 s/kloc with gcc. Can anything be done to speed up compilation times on Visual C++?

    Read the article

  • Setting up Qt and PyQt on Mac OS X so my app is also deployable on Windows

    - by hk_programmer
    Hi, I've been coding with Python and C++ and now need to work on building a GUI for data visualization purposes. I work on Mac OS X Snow Leopard (Intel), Python 3.1, using gcc 4.2.1 (from Xcode 3.1). I want to first install Qt and then PyQt. My goals are to be able to:

    - quickly prototype the GUI and the accompanying logic that drives it, using PyQt and Python
    - translate my app into C++ if I decide I need the speed, or if it turns out to be fairly easy to translate the GUI into C++ using the Qt tools
    - deploy my application onto Windows (both the Python and the C++ versions of the app)

    Given the goals above, what are the correct steps I should take and what issues should I be aware of when setting up Qt and PyQt? Which other deployment tools do I need? From my readings so far, here's what I have:

    - download the Qt source for Mac and configure it with -platform macx-g++42 -arch x86_64 -no-framework (I've read somewhere that building as a framework causes some trouble in deployment and/or debugging, but I can't find the article anymore)
    - download the latest SIP source and build
    - download the latest PyQt and build from source (any special options I should pay attention to?)

    For deployment, I've read that I would need to use py2exe/cx_freeze for Windows and py2app for Mac: http://arstechnica.com/open-source/guides/2009/03/how-to-deploying-pyqt-applications-on-windows-and-mac-os-x.ars but it seems like what the article describes is deploying an app you build on Windows on the Windows platform, and vice versa. How do you deploy to Windows (is it even possible?) if you are writing your Qt app on a Mac? Really appreciate the help.
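
    On the deployment question: py2exe runs only on Windows, and neither it nor cx_Freeze can cross-build a Windows package from a Mac, so in practice the Windows build is produced on a Windows machine or VM while py2app handles the Mac side. Purely as an illustration of how small the Mac side is, a minimal py2app setup script might look like the sketch below (main.py is a placeholder for the actual entry point):

      # setup.py -- minimal py2app configuration; names here are placeholders
      from setuptools import setup

      setup(
          app=["main.py"],                                # the PyQt entry point
          options={"py2app": {"argv_emulation": True}},   # a commonly used py2app option
          setup_requires=["py2app"],
      )

    It is built with "python setup.py py2app"; the equivalent py2exe or cx_Freeze script is run on the Windows side.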

    Read the article

  • Java: fastest way to do random reads on huge disk file(s)

    - by cocotwo
    I've got a moderately big set of data, about 800 MB or so, that is basically a big precomputed table that I need in order to speed up some computation by several orders of magnitude (creating that file took several multicore computers several days, using an optimized and multi-threaded algorithm... I really do need that file). Now that it has been computed once, that 800 MB of data is read-only. I cannot hold it in memory. As of now it is one big 800 MB file, but splitting it into smaller files isn't a problem if it can help. I need to read about 32 bits of data here and there in that file a lot of times. I don't know beforehand where I'll need to read the data: the reads are uniformly distributed. What would be the fastest way in Java to do my random reads in such a file or files? Ideally I should be doing these reads from several unrelated threads (but I could queue the reads in a single thread if needed). Is Java NIO the way to go? I'm not familiar with 'memory mapped file': I think I don't want to map the 800 MB into memory. All I want is the fastest random reads I can get to access these 800 MB of disk-based data. By the way, in case people wonder, this is not at all the same as the question I asked not long ago: http://stackoverflow.com/questions/2346722/java-fast-disk-based-hash-set
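
    A quick note on the memory-mapping worry: mapping the file does not pull all 800 MB into memory; only the pages actually touched are faulted in by the OS. In Java the usual route is FileChannel.map, which returns a MappedByteBuffer you read with getInt(position). The same pattern, sketched in Python only because it is compact (the file name and record layout are assumptions):

      import mmap
      import struct

      # Random 32-bit reads from a large read-only file via the OS page cache.
      with open("table.bin", "rb") as f:                       # placeholder file name
          table = mmap.mmap(f.fileno(), 0, access=mmap.ACCESS_READ)

          def read_u32(record_index):
              # Only the page containing this offset is actually paged in.
              return struct.unpack_from("<I", table, 4 * record_index)[0]

          print(read_u32(123456))                              # one uniform random lookup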

    Read the article

  • Currently using View, Should I use a hard table instead?

    - by 1001010101
    I am currently debating whether my table mapping_uGroups_uProducts, which is a view defined as follows, should be replaced by a hard table:

      CREATE ALGORITHM=UNDEFINED DEFINER=`root`@`localhost` SQL SECURITY DEFINER
      VIEW `db`.`mapping_uGroups_uProducts` AS
      select distinct `X`.`upID` AS `upID`, `Z`.`ugID` AS `ugID`
      from ((`db`.`mapping_uProducts_Products` `X`
             join `db`.`productsInfo` `Y` on ((`X`.`pID` = `Y`.`pID`)))
             join `db`.`mapping_uGroups_Groups` `Z` on ((`Y`.`gID` = `Z`.`gID`)));

    My current query is:

      SELECT upID FROM uProductsInfo \
      JOIN fs_uProducts USING (upID) \
      JOIN mapping_uGroups_uProducts USING (upID) -- could be faster if we use a hard table and index \
      JOIN mapping_fs_key USING (fsKeyID) \
      WHERE fsName="OVERALL" \
      AND ugID=1 \
      ORDER BY score DESC \
      LIMIT 0,30;

    which is pretty slow (about 10 seconds for 30 results). I think the query is so slow because it relies on a VIEW, which has no index to speed things up.

      +----+-------------+----------------+--------+----------------+---------+---------+--------------------------------+-------+---------------------------------+
      | id | select_type | table          | type   | possible_keys  | key     | key_len | ref                            | rows  | Extra                           |
      +----+-------------+----------------+--------+----------------+---------+---------+--------------------------------+-------+---------------------------------+
      | 1  | PRIMARY     | mapping_fs_key | const  | PRIMARY,fsName | fsName  | 386     | const                          | 1     | Using temporary; Using filesort |
      | 1  | PRIMARY     | <derived2>     | ALL    | NULL           | NULL    | NULL    | NULL                           | 19706 | Using where                     |
      | 1  | PRIMARY     | uProductsInfo  | eq_ref | PRIMARY        | PRIMARY | 4       | mapping_uGroups_uProducts.upID | 1     | Using index                     |
      | 1  | PRIMARY     | fs_uProducts   | ref    | upID           | upID    | 4       | db.uProductsInfo.upID          | 221   | Using where                     |
      | 2  | DERIVED     | X              | ALL    | PRIMARY        | NULL    | NULL    | NULL                           | 40772 | Using temporary                 |
      | 2  | DERIVED     | Y              | eq_ref | PRIMARY        | PRIMARY | 4       | db.X.pID                       | 1     | Distinct                        |
      | 2  | DERIVED     | Z              | ref    | PRIMARY        | PRIMARY | 4       | db.Y.gID                       | 2     | Using index; Distinct           |
      +----+-------------+----------------+--------+----------------+---------+---------+--------------------------------+-------+---------------------------------+
      7 rows in set (0.48 sec)

    The EXPLAIN output looks pretty cryptic to me, and I don't know whether I should drop the view and write a script that just inserts everything from the view into a hard table (obviously losing some of the view's flexibility, since the mapping changes quite frequently). Does anyone have any ideas on how I can optimize my schema better?
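
    If the answer turns out to be "materialize it", the refresh script hinted at above can stay very small. A sketch in Python; the client library, the credentials and the _hard table name are all placeholders, not part of the schema above:

      import MySQLdb  # assumption: any MySQL client library would do

      conn = MySQLdb.connect(db="db", user="root", passwd="placeholder")
      cur = conn.cursor()
      # Rebuild the hard table from the view; rerun whenever the mappings change.
      cur.execute("DROP TABLE IF EXISTS mapping_uGroups_uProducts_hard")
      cur.execute("CREATE TABLE mapping_uGroups_uProducts_hard "
                  "SELECT upID, ugID FROM mapping_uGroups_uProducts")
      cur.execute("ALTER TABLE mapping_uGroups_uProducts_hard "
                  "ADD INDEX idx_ugID_upID (ugID, upID)")
      conn.commit()

    With the (ugID, upID) index in place, the ugID=1 filter no longer has to scan a 19706-row derived table on every query.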

    Read the article

  • Python: speeding up retrieving data from an extremely large string

    - by Burninghelix123
    I have a list that I converted to a very, very long string, as I am trying to edit it; as you can gather, it's called tempString. It works, it just takes way too long to run at the moment, probably because of the several different regex subs. They are as follows:

      tempString = ','.join(str(n) for n in coords)
      tempString = re.sub(',{2,6}', '_', tempString)
      tempString = re.sub("[^0-9\-\.\_]", ",", tempString)
      tempString = re.sub(',+', ',', tempString)
      clean1 = re.findall(('[-+]?[0-9]*\.?[0-9]+,[-+]?[0-9]*\.?[0-9]+,'
                           '[-+]?[0-9]*\.?[0-9]+'), tempString)
      tempString = '_'.join(str(n) for n in clean1)
      tempString = re.sub(',', ' ', tempString)

    Basically it's a long string containing commas and about 1-5 million sets of 4 floats/ints (a mixture of both is possible):

      -5.65500020981,6.88999986649,-0.454999923706,1,,,-5.65500020981,6.95499992371,-0.454999923706,1,,,

    The 4th number in each set I don't need/want; I'm essentially just trying to split the string into a list with 3 floats in each, separated by a space. The above code works flawlessly but, as you can imagine, is quite time-consuming on large strings. I have done a lot of research on here for a solution, but they all seem geared towards words, i.e. swapping out one word for another.

    EDIT: OK, so this is the solution I'm currently using:

      def getValues(s):
          output = []
          while s:
              # get the three values you want, discard the 3 commas, and the
              # remainder of the string
              v1, v2, v3, _, _, _, s = s.split(',', 6)
              output.append("%s %s %s" % (v1.strip(), v2.strip(), v3.strip()))
          return output

      coords = getValues(tempString)

    Anyone have any advice to speed this up even further? After running some tests it still takes much longer than I'm hoping for. I've been glancing at NumPy, but I honestly have absolutely no idea how to do the above with it. I understand that after the above has been done and the values are cleaned up I could use them more efficiently with NumPy, but I'm not sure how NumPy could apply to the above. Cleaning 50k sets with the above takes around 20 minutes; I can't imagine how long it would take on my full string of 1 million sets. It's just surprising that the program that originally exported the data took only around 30 secs for the 1 million sets.
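
    Since every set is known to hold exactly four comma-separated numbers, one option is to skip both the regexes and the repeated split(',', 6) calls and let NumPy do the reshaping. This is only a sketch and assumes the only junk in the string is the empty fields between sets:

      import numpy as np

      def get_values(temp_string):
          # Drop the empty fields left by the ',,,' separators, then view the
          # numbers as rows of four and discard the unwanted fourth column.
          numbers = np.array([float(v) for v in temp_string.split(',') if v])
          triples = numbers.reshape(-1, 4)[:, :3]
          return [' '.join(map(str, row)) for row in triples]

    The Python-level float conversion is still the slow part; if the final consumer can work with the NumPy array directly, returning triples instead of re-stringifying it saves another large chunk of time.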

    Read the article

  • Using Lucene to index private data, should I have a separate index for each user or a single index?

    - by Nathan Bayles
    I am developing an Azure-based website and I want to provide search capabilities using Lucene (structured JSON objects would be indexed and stored in Lucene, and other content such as Word documents, etc. would be indexed in Lucene but stored in blob storage). I want the search to be secure, such that one user would never see a document belonging to another user. I want to allow ad-hoc searches as typed by the user. Lastly, I want to query programmatically to return predefined sets of data, such as "all notes for user X". I think I understand how to add properties to each document to achieve these 3 objectives. (I am listing them here so that if anyone is kind enough to answer, they will have a better idea of what I am trying to do.) My questions revolve around performance and security. Can I improve document security by having a separate index for each user, or is including the user's ID as a parameter in each search sufficient? Can I improve indexing speed and total throughput of the system by having a separate index for each user? My thinking is that having separate indexes would allow me to scale the system by having multiple index writers (perhaps even on different server instances) working at the same time, each on their own index. Any insight would be greatly appreciated. Regards, Nate

    Read the article

  • Lightweight HTTP application/server for static content

    - by PartlyCloudy
    Hi, I am in need of a scalable and performant HTTP application/server that will be used for static file serving/uploading, so I only need support for GET and PUT operations. However, there are a few extra features that I need:

    - Custom authentication: I need to check credentials against a database for each request, so I must be able to integrate proprietary database interaction.
    - Support for signed access keys: access to resources via PUT should be signed using a key like http://uri/?key=foo where the key contains information about the request, such as md5(user + path + secret), which allows me to block unwanted requests. The application/server should allow me to check for this.
    - Performance: I'd like to avoid piping content as much as possible; otherwise the whole application could be implemented in a few lines of Perl/etc. as CGI.

    Perlbal (in webserver mode) looks nice, however its single-threaded model does not fit with my database lookups, and it also does not support query strings. Lighttpd/Nginx/… have some modules for these tasks, however it is not feasible to put everything together without ending up writing my own extensions/modules. So how would you solve this? Are there other lightweight webservers available for this? Should I implement an application inside of a webserver (i.e. CGI)? How can I avoid/speed up piping content between the webserver and my application? Thanks in advance!
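
    Whichever server ends up in front, the signed-key check itself is tiny. A sketch of the scheme described above in Python (the secret is of course a placeholder, and md5 is used only because that is what the question proposes):

      import hashlib
      import hmac

      SECRET = "server-side-secret"   # placeholder; known only to the server

      def make_key(user, path):
          return hashlib.md5((user + path + SECRET).encode("utf-8")).hexdigest()

      def key_is_valid(user, path, presented_key):
          # compare_digest avoids leaking the result through timing differences
          return hmac.compare_digest(make_key(user, path), presented_key)

    The per-request credential lookup against the database happens alongside this check, which is the part that makes a purely single-threaded front end awkward.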

    Read the article

  • Is there a recommended way to communicate scientific/engineering programming to C developers?

    - by ggkmath
    Hi, I have a lot of MATLAB code that needs to get ported to C (execution speed is critical for this work) as part of a back-end process for a web application. When I attempt to outsource this code to a C developer, I assume (correct me if I'm wrong) that few C developers also understand MATLAB code (things like indexing and memory management are different, etc.). I wonder if there are any C developers out there who can recommend a procedure for me to follow to best communicate what the code does? For example, should I provide the MATLAB code and explain what it's doing line by line? Or should I just provide the math/algorithm, explain it in plain English, and let the C developer implement it with this understanding in his/her own way (e.g. can I assume the developer understands how to work with complex math (i.e. imaginary numbers), how to generate histograms, perform an FFT, etc.)? Or is there a better method? I expect I'm not the first to need to do this, so I wonder if any C developers out there have run into this situation and can share any conventional wisdom on how they'd like this task to be handed over? Thanks in advance for any comments.

    Read the article

  • ASP.NET Web Optimization - confusion about loading order

    - by Ciel
    Using the ASP.NET Web Optimization Framework, I am attempting to load some JavaScript files. It works fine, except I am running into a peculiar situation with either the loading order, the loading speed, or the execution. I cannot figure out which. Basically, I am using the Ace code editor for JavaScript, and I also want to include its autocompletion package. This requires two files: /ace.js and /ext-language_tools.js. This isn't an issue; if I load both of these files the normal way (with <script> tags) it works fine. But when I try to use the Web Optimization bundles, it seems as if something goes wrong. Trying this out...

      bundles.Add(new ScriptBundle("~/bundles/js")
          .Include("~/js/ace.js")
          .Include("~/js/ext-language_tools.js"));

    and then in the view...

      @Scripts.Render("~/bundles/js")

    I get the error "ace is not defined". This means that the ace.js file hasn't run, or hasn't loaded, because if I break it apart into two bundles, it starts working:

      bundles.Add(new ScriptBundle("~/bundles/js")
          .Include("~/js/ace.js"));
      bundles.Add(new ScriptBundle("~/bundles/js/language_tools")
          .Include("~/js/ext-language_tools.js"));

    Can anyone explain why this would behave in this fashion?

    Read the article

  • Gradual memory leak and slowdown in loop

    - by Benji XVI
    I have a simple Foundation tool that exports every frame of a movie as a .tiff file. Here is the relevant code:

      NSString *movieLoc = [NSString stringWithCString:argv[1]];
      QTMovie *sourceMovie = [QTMovie movieWithFile:movieLoc error:nil];
      int i = 0;
      while (QTTimeCompare([sourceMovie currentTime], [sourceMovie duration]) != NSOrderedSame) {
          // save image of movie to disk
          NSAutoreleasePool *arp = [[NSAutoreleasePool alloc] init];
          NSString *filePath = [NSString stringWithFormat:@"/somelocation_%d.tiff", i++];
          NSData *currentImageData = [[sourceMovie currentFrameImage] TIFFRepresentation];
          [currentImageData writeToFile:filePath atomically:NO];
          NSLog(@"%@", filePath);
          [sourceMovie stepForward];
          [arp release];
      }
      [pool drain];
      return 0;

    As you can see, in order to prevent very large memory buildups from the various transparently autoreleased variables in the loop, we create, and flush, an autorelease pool on every run through the loop. However, over the course of stepping through a movie, the amount of memory used by the program still gradually increases, and the speed at which frames are processed drops precipitously (from ~0.5 seconds per frame at the start, to ~2 seconds per frame by the 250th frame). The only thing I can think of that could be causing the gradual memory leak is a buildup of the NSAutoreleasePool objects themselves. Am I right in thinking they will only be deallocated when the outer pool is released? If so, is there a better memory management solution here? Creating a pool on every run through the loop seems a little hacky. And if not, what is causing the slow memory leak? (It is not NSStrings, and it is much too slow to be NSImages or NSDatas.) And what could be causing the slowdown?

    Read the article

  • Approach for caching data from data logger

    - by filip-fku
    Greetings, I've been working on a C#.NET app that interacts with a data logger. The user can query and obtain logs for a specified time period, and view plots of the data. Typically a new data log is created every minute and stores a measurement for a few parameters. To get meaningful information out of the logger, a reasonable number of logs need to be acquired - data for at least a few days. The hardware interface is a UART to USB module on the device, which restricts transfers to a maximum of about 30 logs/second. This becomes quite slow when reading in the data acquired over a number of days/weeks. What I would like to do is improve the perceived performance for the user. I realize that with the hardware speed limitation the user will have to wait for the full download cycle at least the first time they acquire a larger set of data. My goal is to cache all data seen by the app, so that it can be obtained faster if ever requested again. The approach I have been considering is to use a light database, like SqlServerCe, that can store the data logs as they are received. I am then hoping to first search the cache prior to querying a device for logs. The cache would be updated with any logs obtained by the request that were not already cached. Finally my question - would you consider this to be a good approach? Are there any better alternatives you can think of? I've tried to search SO and Google for reinforcement of the idea, but I mostly run into discussions of web request/content caching. Thanks for any feedback!

    Read the article

  • Using static variables for Strings

    - by Vivart
    The content below is taken from "Best practice: Writing efficient code", but I didn't understand why private static String x = "example"; is faster than private static final String x = "example";. Can anybody explain this?

    Using static variables for Strings

    When you define static fields (also called class fields) of type String, you can increase application speed by using static variables (not final) instead of constants (final). The opposite is true for primitive data types, such as int. For example, you might create a String object as follows:

      private static final String x = "example";

    For this static constant (denoted by the final keyword), each time that you use the constant, a temporary String instance is created. The compiler eliminates "x" and replaces it with the string "example" in the bytecode, so that the BlackBerry® Java® Virtual Machine performs a hash table lookup each time that you reference "x". In contrast, for a static variable (no final keyword), the String is created once. The BlackBerry JVM performs the hash table lookup only when it initializes "x", so access is faster.

      private static String x = "example";

    You can use public constants (that is, final fields), but you must mark variables as private.

    Read the article

  • Looking for a good C++/.NET book

    - by Michael Minerva
    I have recently started to feel that I need to greatly improve my C++ skills, especially in the realm of .NET. I graduated from a good four-year university with a degree in computer science about 9 months ago, and I have since been doing full-time contract work for a small software company in my local area. Most of my work has been done using Java/Lisp/Cocoa/XML, and before that most of my programming in my senior year was in Java/C#. I did a decent amount of C++ in my sophomore year and in my free time before that, but I feel that my general knowledge of C++/.NET is very lacking for the opportunities that are now coming my way. Can anyone recommend a good book that could help me get up to speed? I feel I do not need a very basic introduction to C++, but something that covers the fundamentals of .NET would be good for me. So basically what I need is a book or books that would be good for a .NET novice and a C++ developer who is just beyond novice. Also, a book that would help me in an interview by giving me a conversational understanding of C++ would be great. Thanks a lot!

    Read the article

  • Guidance required: first time working with a real high-end database (size = 50GB)

    - by claws
    I've got a project designing a database. This is going to be my first big-scale project. The good thing about it is that the information is mostly organized and currently stored in text files. The size of this information is 50GB. There are going to be a few million records in each table, and around 50 tables. I need to provide a web interface for searching and browsing. I'm going to use the MySQL DBMS. I've never worked with a database of more than 200MB before, so speed and performance were never a concern, but I followed things like normalization and indexes. I never used any kind of testing/benchmarking/query optimization/whatever because I never had to care about them. But here the purpose of creating the database is to make it quickly searchable, so I need to consider all possible aspects in the design. I was browsing the archives and found: http://stackoverflow.com/questions/1981526/what-should-every-developer-know-about-databases http://stackoverflow.com/questions/621884/database-development-mistakes-made-by-app-developers I'm going to keep the points mentioned in the above answers in mind. What else should I know? What else should I keep in mind?

    Read the article

  • Efficient update of SQLite table with many records

    - by blackrim
    I am trying to use SQLite (sqlite3) for a project to store hundreds of thousands of records (I would like SQLite so users of the program don't have to run a [my]sql server). I sometimes have to update hundreds of thousands of records to enter left/right values (they are hierarchical), but have found the standard

      update table set left_value = 4, right_value = 5 where id = 12340;

    to be very slow. I have tried surrounding every thousand or so with

      begin;
      ....
      update...
      update table set left_value = 4, right_value = 5 where id = 12340;
      update...
      ....
      commit;

    but again, very slow. Odd, because when I populate it with a few hundred thousand records (with inserts), it finishes in seconds. I am currently trying to test the speed in Python (the slowness shows up both at the command line and in Python) before I move it to the C++ implementation, but right now this is way too slow and I need to find a new solution, unless I am doing something wrong. Thoughts? (I would take an open-source alternative to SQLite that is portable as well.)
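
    For what it's worth, a sketch of the usual fixes in Python's sqlite3, since that is where the timing is being done: make sure id is indexed (or is the INTEGER PRIMARY KEY), do the whole batch inside one transaction, and hand the statements to executemany. The table name "tree" and the file name are placeholders:

      import sqlite3

      conn = sqlite3.connect("nested_sets.db")
      # Only needed if id is not already the (indexed) primary key.
      conn.execute("CREATE INDEX IF NOT EXISTS idx_tree_id ON tree (id)")

      updates = [(4, 5, 12340), (6, 7, 12341)]   # (left_value, right_value, id)
      with conn:                                 # one transaction for the whole batch
          conn.executemany(
              "UPDATE tree SET left_value = ?, right_value = ? WHERE id = ?",
              updates)

    If each UPDATE really was running in its own implicit transaction, the per-statement sync to disk is what made it so much slower than the bulk inserts.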

    Read the article

  • Should I be using Expression Blend to design really dynamic UIs?

    - by Robert Rossney
    My company's product is, at its core, a framework for developing metadata-driven UIs. I don't know how to characterize it less succinctly than that, and hope I won't need to for purposes of this question, but we'll see. I've been trying to come up to speed on WPF, and have been building UI prototypes here and there, and recently I decided to see if I could use Expression Blend to help with the design of these UIs. And I'm pretty mystified at this point. It appears to me as though Expression Blend is designed with the expectation that you already know all of the objects that are going to be present in the UI at design time. But our program generates these objects dynamically at runtime. For instance, a data row might be presented in a horizontal StackPanel containing alternating TextBlocks (for captions) and TextBoxes (for data fields). The number of these objects depends on metadata about the number of columns in the data row. I can, pretty readily, write code that runs through a metadata record and populates a StackPanel dynamically, setting up the binding of all of the controls to properties in either the data or metadata. (A TextBox's Width might be bound to metadata, while its Text is bound to data.) But I can't even begin to figure out how to do something like this in Expression Blend. I can manually create all these controls, so that I have a set of controls that I can apply styles to and work out the visual design of the app, but it's really a pain to do this. I can write code that goes through my data model and emits XAML for all these controls, I suppose, and then copy and paste it. But I'm going to feel really stupid if it turns out there's a way to do this sort of thing in Expression Blend and I've dropped back and punted because I'm too dim to figure out the right way to think of it. Is this enough information for someone to try formulating an answer?

    Read the article

  • Create a variable which is named depending on an ID number?

    - by gray
    Is there any way to create a variable and add an ID to the end of the actual variable name? I have a variable called 'gauge', used to create a new Gauge object:

      var gauge = new Gauge(target).setOptions(opts);

    I want to add an ID to the variable, so something like:

      var gauge+id = new Gauge(target).setOptions(opts);

    It's because I'm creating a number of these objects, and I already have a specific ID for each one, which I want to attach to the gauge object if possible. All my code for this function is below, if it gives you a better idea of what I need to do:

      function go(id, votes) {
          var val = $('#votes_'+id).text();
          var target = document.getElementById('foo_'+id); // your canvas element
          var gauge = new Gauge(target).setOptions(opts);   // create sexy gauge!
          gauge.maxValue = 100;        // set max gauge value
          gauge.animationSpeed = 20;   // set animation speed (32 is default value)
          var a = votes+0.5;
          gauge.set(val);              // set actual value
      }

    My main problem arises on the last line. It says gauge.set(val), and it only sets the last object, ignoring all the other ones...

    Read the article

  • Images from URL to ListView

    - by Andres
    I have a ListView in which I show video results from YouTube. Everything works fine, but one thing I noticed is that it seems to be a bit slow, and it might be due to my code. Are there any suggestions on how I can make this better? Maybe loading the images directly from the URL instead of using a WebClient? I am adding the ListView items in a loop from the video feeds returned by a query using the YouTube API. The piece of code which I think is slowing it down is this:

      Feed<Video> videoFeed = request.Get<Video>(query);
      int i = 0;
      foreach (Video entry in videoFeed.Entries)
      {
          string[] info = printVideoEntry(entry).Split(',');
          WebClient wc = new WebClient();
          wc.DownloadFile(@"http://img.youtube.com/vi/" + info[0].ToString() + "/hqdefault.jpg", info[0].ToString() + ".jpg");
          string[] row1 = { "", info[0].ToString(), info[1].ToString() };
          ListViewItem item = new ListViewItem(row1, i);
          YoutubeList.Items.Add(item);
          imageListSmall.Images.Add(Bitmap.FromFile(info[0].ToString() + @".jpg"));
          imageListLarge.Images.Add(Bitmap.FromFile(info[0].ToString() + @".jpg"));
      }

      public static string printVideoEntry(Video video)
      {
          return video.VideoId + "," + video.Title;
      }

    As you can see, I use a WebClient which downloads the images so I can then use them as images in my ListView. It works, but what I'm concerned about is speed... Any suggestions? Maybe a different control altogether?

    Read the article

  • F# ref-mutable vars vs object fields

    - by rwallace
    I'm writing a parser in F#, and it needs to be as fast as possible (I'm hoping to parse a 100 MB file in less than a minute). As normal, it uses mutable variables to store the next available character and the next available token (i.e. both the lexer and the parser proper use one unit of lookahead). My current partial implementation uses local variables for these. Since closure variables can't be mutable (anyone know the reason for this?) I've declared them as ref:

      let rec read file includepath =
          let c = ref ' '
          let k = ref NONE
          let sb = new StringBuilder()
          use stream = File.OpenText file
          let readc() =
              c := stream.Read() |> char
          // etc

    I assume this has some overhead (not much, I know, but I'm trying for maximum speed here), and it's a little inelegant. The most obvious alternative would be to create a parser class object and have the mutable variables be fields in it. Does anyone know which is likely to be faster? Is there any consensus on which is considered better/more idiomatic style? Is there another option I'm missing?

    Read the article

  • Read files from a directory and filter files in Java

    - by Adnan
    The following code goes through all directories and sub-directories and outputs just .java files:

      import java.io.File;

      public class DirectoryReader {

          private static String extension = "none";
          private static String fileName;

          public static void main(String[] args) {
              String dir = "C:/tmp";
              File aFile = new File(dir);
              ReadDirectory(aFile);
          }

          private static void ReadDirectory(File aFile) {
              File[] listOfFiles = aFile.listFiles();
              if (aFile.isDirectory()) {
                  listOfFiles = aFile.listFiles();
                  if (listOfFiles != null) {
                      for (int i = 0; i < listOfFiles.length; i++) {
                          if (listOfFiles[i].isFile()) {
                              fileName = listOfFiles[i].toString();
                              int dotPos = fileName.lastIndexOf(".");
                              if (dotPos > 0) {
                                  extension = fileName.substring(dotPos);
                              }
                              if (extension.equals(".java")) {
                                  System.out.println("FILE:" + listOfFiles[i]);
                              }
                          }
                          if (listOfFiles[i].isDirectory()) {
                              ReadDirectory(listOfFiles[i]);
                          }
                      }
                  }
              }
          }
      }

    Is this efficient? What could be done to increase the speed? All ideas are welcome.

    Read the article

  • gcc compilations (sometimes) result in CPU underload

    - by confusedCoder
    I have a large C++ program which starts out by reading thousands of small text files into memory and storing the data in STL containers. This takes about a minute. Periodically, a compilation will exhibit behavior where that initial part of the program runs at about 22-23% CPU load. Once that step is over, it goes back to ~100% CPU. It is more likely to happen with the -O2 flag turned on, but not consistently. It happens even less often with the -p flag, which makes it almost impossible to profile. I did capture it once, but the gprof output wasn't helpful - everything runs at the same relative speed, just at low CPU usage. I am quite certain that this has nothing to do with multiple cores. I do have a quad-core CPU, and most of the code is multi-threaded, but I tested this issue running a single thread. Also, when I run the problematic step in multiple threads, each thread only runs at ~20% CPU. I apologize ahead of time for the vagueness of the question, but I have run out of ideas as to how to troubleshoot it further, so any hints might be helpful.

    UPDATE: Just to make sure it's clear, the problematic part of the code does sometimes (~30-40% of the compilations) run at 100% CPU, so it's hard to buy the (otherwise reasonable) argument that I/O is the bottleneck.

    Read the article

  • Correct way to switch between UIViews with ARC. My way leads to memory leaks :( (iOS)

    - by Andrei Golubev
    I use Xcode 4.4 with ARC on. I have dynamically created UIViews in ViewController.m:

      UIView *myviews[10];

    Then in the - (void)viewDidLoad function I fill each of them with the pictures I need:

      myviews[viewIndex] = [[UIView alloc] initWithFrame:myrec];
      UIImage *testImg;
      UIImageView *testImgView;
      testImg = [UIImage imageNamed:[NSString stringWithFormat:@"imgarray%d.png", viewIndex]];
      testImgView.image = testImg;
      viewIndex++;

    So all seems to be fine. When I want to jump from one view to another, I do it with two buttons:

      [self.view addSubview:views[viewIndex]];
      CATransition *animation = [CATransition animation];
      [animation setDelegate:self];
      [animation setDuration:1.0f];
      [animation setType:@"rippleEffect"];
      [animation setSubtype:kCATransitionFromTop];
      //[animation setTimingFunction:[CAMediaTimingFunction functionWithName:kCAMediaTimingFunctionEaseInEaseOut]];
      [self.view.layer addAnimation:animation forKey:@"transitionViewAnimation"];

    Nothing seems to be bad, but the memory consumption grows very quickly when I switch between views, and then I get a low-memory warning or sometimes the application just crashes. I have tried using an array of UIViewControllers and switching between the controllers instead: nothing changes, the low-memory warning is what I end up with. Maybe I need to clean up the memory somehow? But how? ARC does not allow calling release and so on. The last thing I tried (sorry, maybe not very professional) before adding a new subview is this:

      NSArray *viewsToRemove = [self.view subviews];
      for (UIView *views in viewsToRemove) {
          [views removeFromSuperview];
      }

    But this does not help either. Please don't judge too harshly, I am still new to iOS and Objective-C.

    Read the article

  • Why doesn't this CompiledQuery give a performance improvement?

    - by Grammarian
    I am trying to speed up an often used query. Using a CompiledQuery seemed to be the answer. But when I tried the compiled version, there was no difference in performance between the compiled and non-compiled versions. Can someone please tell me why using Queries.FindTradeByTradeTagCompiled is not faster than using Queries.FindTradeByTradeTag?

      static class Queries
      {
          // Pre-compiled query, as per http://msdn.microsoft.com/en-us/library/bb896297
          private static readonly Func<MyEntities, int, IQueryable<Trade>> mCompiledFindTradeQuery =
              CompiledQuery.Compile<MyEntities, int, IQueryable<Trade>>(
                  (entities, tag) => from trade in entities.TradeSet
                                     where trade.trade_tag == tag
                                     select trade);

          public static Trade FindTradeByTradeTagCompiled(MyEntities entities, int tag)
          {
              IQueryable<Trade> tradeQuery = mCompiledFindTradeQuery(entities, tag);
              return tradeQuery.FirstOrDefault();
          }

          public static Trade FindTradeByTradeTag(MyEntities entities, int tag)
          {
              IQueryable<Trade> tradeQuery = from trade in entities.TradeSet
                                             where trade.trade_tag == tag
                                             select trade;
              return tradeQuery.FirstOrDefault();
          }
      }

    Read the article

  • Faster Insertion of Records into a Table with SQLAlchemy

    - by Kyle Brandt
    I am parsing a log and inserting it into either MySQL or SQLite using SQLAlchemy and Python. Right now I open a connection to the DB, and as I loop over each line, I insert it after it is parsed (this is just one big table right now; I'm not very experienced with SQL). I then close the connection when the loop is done. The summarized code is:

      log_table = schema.Table('log_table', metadata,
                               schema.Column('id', types.Integer, primary_key=True),
                               schema.Column('time', types.DateTime),
                               schema.Column('ip', types.String(length=15))
      ....
      engine = create_engine(...)
      metadata.bind = engine
      connection = engine.connect()
      ....
      for line in file_to_parse:
          m = line_regex.match(line)
          if m:
              fields = m.groupdict()
              pythonified = pythoninfy_log(fields)  # Turn them into ints, datetimes, etc
              if use_sql:
                  ins = log_table.insert(values=pythonified)
                  connection.execute(ins)
                  parsed += 1

    My two questions are:

    - Is there a way to speed up the inserts within this basic framework? Maybe have a queue of inserts and some insertion threads, some sort of bulk inserts, etc?
    - When I used MySQL, for about ~1.2 million records the insert time was 15 minutes. With SQLite, the insert time was a little over an hour. Does that time difference between the db engines seem about right, or does it mean I am doing something very wrong?
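
    One low-effort change that usually helps a lot here: collect the parsed rows into batches and pass the whole list to connection.execute, which makes SQLAlchemy issue a single executemany per batch instead of one round trip per row. A sketch against the code above (the batch size is an arbitrary choice):

      batch = []
      BATCH_SIZE = 1000   # arbitrary; tune to taste

      for line in file_to_parse:
          m = line_regex.match(line)
          if m:
              batch.append(pythoninfy_log(m.groupdict()))
              parsed += 1
          if use_sql and len(batch) >= BATCH_SIZE:
              # a list of dicts triggers a single executemany under the hood
              connection.execute(log_table.insert(), batch)
              batch = []
      if use_sql and batch:
          connection.execute(log_table.insert(), batch)

    For SQLite specifically, wrapping the whole loop in one explicit transaction (trans = connection.begin() ... trans.commit()) avoids a sync to disk per statement, which is where most of the extra hour tends to go.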

    Read the article
