Search Results

Search found 13288 results on 532 pages for 'high speed'.

Page 68 of 532

  • Are any of these quad-tree libraries any good?

    - by Noctis Skytower
    It appears that a certain project of mine will require the use of quad-trees, something that I have never worked with before. From what I have read, they should yield substantial performance improvements over a brute-force attempt at the problem. Are any of these Python modules any good?

        Quadtree 0.1.2  <= No:  unable to execute in Python 3.1
        QuadTree        <= Yes: simple when working with rectangles
        quadtree.py     <= No:  no support for the needed operations

    EDIT: Does anyone know of a better implementation than the one presented in the pygame wiki article?
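
    For reference, a minimal point-quadtree sketch in pure Python (the names and bucket capacity are illustrative, not taken from any of the modules above); it shows the insert/subdivide behaviour such a library needs to support:

        class QuadTree:
            """Stores 2D points; splits a node into four children when it fills up."""
            def __init__(self, x, y, w, h, capacity=4):
                self.x, self.y, self.w, self.h = x, y, w, h
                self.capacity = capacity
                self.points = []
                self.children = None

            def insert(self, px, py):
                if not (self.x <= px < self.x + self.w and self.y <= py < self.y + self.h):
                    return False                      # point lies outside this node
                if self.children is None:
                    if len(self.points) < self.capacity:
                        self.points.append((px, py))
                        return True
                    self._subdivide()
                return any(child.insert(px, py) for child in self.children)

            def _subdivide(self):
                hw, hh = self.w / 2, self.h / 2
                self.children = [QuadTree(self.x,      self.y,      hw, hh, self.capacity),
                                 QuadTree(self.x + hw, self.y,      hw, hh, self.capacity),
                                 QuadTree(self.x,      self.y + hh, hw, hh, self.capacity),
                                 QuadTree(self.x + hw, self.y + hh, hw, hh, self.capacity)]
                for p in self.points:                 # push existing points down a level
                    for child in self.children:
                        if child.insert(*p):
                            break
                self.points = []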

  • Quickly retrieve the subset of properties used in a huge collection in C#

    - by ccornet
    I have a huge Collection of objects (which I can cast to an enumerable using OfType<T>()). Each of these objects has a Category property, which is drawn from a list somewhere else in the application. This Collection can reach sizes of hundreds of items, but it is possible that only, say, 6 of the 30 possible Categories are actually used. What is the fastest method to find these 6 Categories? The size of the huge Collection discourages me from simply iterating across the entire thing and returning all unique values, so is there a faster method of accomplishing this? Ideally I'd collect the categories into a List.
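
    For what it's worth, any approach still has to look at each item once, but LINQ keeps it to a single pass. A sketch (the item type and its string Category property are assumptions standing in for the real objects):

        using System.Collections.Generic;
        using System.Linq;

        class Item { public string Category { get; set; } }   // placeholder item type

        static class CategoryHelper
        {
            // One pass over the collection; Distinct() uses a hash set internally,
            // so each category is recorded the first time it is seen.
            public static List<string> UsedCategories(IEnumerable<Item> items)
            {
                return items.Select(i => i.Category).Distinct().ToList();
            }
        }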

  • How to make active services highly available?

    - by Jader Dias
    I know that with Network Load Balancing and Failover Clustering we can make passive services highly available. But what about active apps? Example: one of my apps retrieves some content from an external resource at a fixed interval. I have imagined the following scenarios:

    1. Run it on a single machine. Problem: if this instance goes down, the content won't be retrieved.
    2. Run it on every machine in the cluster. Problem: the content will be retrieved multiple times.
    3. Have it on every machine in the cluster, but run it on only one of them. Each instance checks some sort of common resource to decide whether it is its turn to do the task or not.

    When I was thinking about solution #3, I wondered what the common resource should be. I have thought of creating a table in the database that we could use to obtain a global lock. Is this the best solution? How do people usually do this? By the way, it's a C# .NET WCF app running on Windows Server 2008.
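
    A sketch of option #3 using a SQL Server application lock instead of a hand-rolled lock table (the resource name and connection string are assumptions; whichever node wins the lock does the work, the others skip that round):

        using System;
        using System.Data;
        using System.Data.SqlClient;

        static class SingletonTask
        {
            public static void RunIfLockAcquired(string connectionString, Action task)
            {
                using (var conn = new SqlConnection(connectionString))
                {
                    conn.Open();
                    using (var cmd = new SqlCommand("sp_getapplock", conn))
                    {
                        cmd.CommandType = CommandType.StoredProcedure;
                        cmd.Parameters.AddWithValue("@Resource", "content-fetch-task");
                        cmd.Parameters.AddWithValue("@LockMode", "Exclusive");
                        cmd.Parameters.AddWithValue("@LockOwner", "Session");
                        cmd.Parameters.AddWithValue("@LockTimeout", 0);   // don't wait: lose the race, skip this round
                        var result = cmd.Parameters.Add("@RetVal", SqlDbType.Int);
                        result.Direction = ParameterDirection.ReturnValue;
                        cmd.ExecuteNonQuery();
                        if ((int)result.Value >= 0)   // 0 or 1 means the lock was granted
                            task();                   // the lock is released when the connection closes
                    }
                }
            }
        }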

  • .NET values lookup

    - by Maciej
    Hi, I have a feeling I'm missing something obvious. This is a UDP receiver application. It holds a collection of valid UDP sender IPs; only senders whose IP is on that list will be considered. Since that list must be checked on every packet and UDP traffic is so high-volume, the lookup must be as fast as possible. A Dictionary would be a good choice, but it is a key-value structure, and what I actually need here is a dictionary-like (hash lookup) key-only structure. Is there something like that? It's a small annoyance rather than a bug, since I can still use a Dictionary. Thanks, M.
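
    HashSet<T> (added in .NET 3.5) is exactly that: the key-only counterpart of Dictionary<TKey, TValue>, with the same hashed O(1) Contains. A small sketch (the sample addresses are placeholders):

        using System.Collections.Generic;
        using System.Net;

        static class SenderFilter
        {
            static readonly HashSet<IPAddress> AllowedSenders = new HashSet<IPAddress>
            {
                IPAddress.Parse("10.0.0.5"),
                IPAddress.Parse("10.0.0.6"),
            };

            // Called once per received packet with the remote endpoint's address.
            public static bool IsAllowed(IPAddress sender)
            {
                return AllowedSenders.Contains(sender);
            }
        }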

  • generic async loading method for page web scripts?

    - by boomhauer
    The Google Analytics code moved to an async load model some time back. I've noticed that a lot of the other scripts I use on many sites are causing slow load times, specifically the AddThis script and the Facebook Like button. The slow load times of these scripts are causing the Google bot to calculate my page load times as being much slower than before. I'd like to know if there is a standard/generic way of making these scripts load asynchronously as well, or perhaps a pointer to someone who has already done the work for this. It seems this would be a popular thing to do, but I've had little luck searching around.
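
    A generic sketch of the same pattern the async Analytics snippet uses: inject the script tag from JavaScript so the download doesn't block rendering (the URL is a placeholder, and some widgets still expect their own setup globals, so check each vendor's docs):

        // Append a script element instead of writing a blocking <script src="..."> tag.
        function loadScriptAsync(src) {
          var s = document.createElement('script');
          s.type = 'text/javascript';
          s.async = true;
          s.src = src;
          var first = document.getElementsByTagName('script')[0];
          first.parentNode.insertBefore(s, first);
        }

        loadScriptAsync('http://example.com/widget.js');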

  • Should one use PHP to print all of a page's HTML?

    - by Mala
    Hi. So I've always developed PHP pages like this: <?php goes at the top, ?> goes at the bottom, and all the HTML gets either print()ed or echo()ed out. Is that slower than having the non-dynamic HTML output outside of the <?php ?> tags? I can't seem to find any info about this. Thanks! --Mala

    UPDATE: The consensus seems to be that my old way is hard to read. This is not the case if you break your strings up line by line, as in:

        print("\n".
              "first line goes here\n".
              "second line goes here\n".
              "third line");

    This actually makes it a lot easier to read than having HTML outside of the PHP structures, since everything is properly indented. That being said, it does involve a lot of string concatenation.
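
    For comparison, a minimal sketch of the other style, with static markup left outside the PHP tags and only the dynamic values echoed (any speed difference between the two approaches is usually negligible; readability is the real argument):

        <?php $title = 'Hello'; $name = 'Mala'; ?>
        <html>
          <head><title><?php echo $title; ?></title></head>
          <body>
            <!-- static markup needs no print() call at all -->
            <p>Welcome, <?php echo htmlspecialchars($name); ?>!</p>
          </body>
        </html>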

  • database vs flat file, which is a faster structure for "regex" matching with many simultaneous requests

    - by Jamex
    Hi, which structure returns results faster and/or is less taxing on the host server: a flat file or a database (MySQL)? Assume many users (100 users) are simultaneously querying the file/db. Searches involve pattern matching against a static file/db. The file has 50,000 unique lines (all the same data type). There could be many matches. There is no writing to the file/db, just reads. Is it possible to duplicate the file/db and write a logic switch to use the backup file/db if the main one is in use? Which language is best for this type of structure: Perl for the flat file and PHP for the db? Additional info: say I want to find all the cities that have the pattern "cis" in their names. Which is better/faster, using regex or string functions? Please recommend a strategy. TIA
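
    For the "cis" example on the database side, a sketch of the two obvious MySQL queries (the table and column names are assumed):

        SELECT name FROM cities WHERE name LIKE '%cis%';   -- simple substring match
        SELECT name FROM cities WHERE name REGEXP 'cis';   -- regular-expression match
        -- Neither an infix LIKE nor REGEXP can use an ordinary B-tree index,
        -- so both end up scanning the table, much like grepping the flat file.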

  • Should I be regularly shrinking my DB or at least my log file?

    - by Tom
    My question is: should I be running one or both of the shrink commands regularly, DBCC SHRINKDATABASE or DBCC SHRINKFILE?

    Background: SQL Server; the database is 200 GB and the logs are 150 GB. Running this command

        SELECT name,
               size/128.0 - CAST(FILEPROPERTY(name, 'SpaceUsed') AS int) / 128.0 AS AvailableSpaceInMB
        FROM sys.database_files;

    produces this output:

        MyDB:     159.812500 MB free
        MyDB_Log: 149476.390625 MB free

    So it seems there is some free space. We back up transaction logs every hour, take a differential backup 5 nights a week, and a full backup the other 2 nights of the week.
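
    Routine shrinking generally just makes the files grow (and fragment) again, so the usual advice is to shrink only after a one-off growth event. If the log really does have ~146 GB free, a one-time shrink of just that file might look like this (the logical file name and target size are examples):

        USE MyDB;
        -- Shrink only the log file, down to roughly 4 GB; run once, not on a schedule.
        DBCC SHRINKFILE (MyDB_Log, 4096);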

  • Best scaling methodologies for a high-traffic web application?

    - by tester2001
    We have a new project for a web app that will display banner ads on websites (as a network), and our estimate is for it to handle 20 to 40 billion impressions a month. Our current language is ASP... but we are moving to PHP. Does PHP 5 have limits when scaling a web application, or should I have our team invest in picking up JSP? Or is it more a matter of the app server and/or DB? We plan to use Oracle 10g as the database.

  • Writing shorter code/algorithms: is it more efficient (performance-wise)?

    - by Carlos
    After coming across the code-golf trivia around the site, it is obvious people try to find ways to write code and algorithms as short as they possibly can in terms of characters, lines and total size, even if that means writing something like:

        n=input()
        while n>1:n=(n/2,n*3+1)[n%2];print n

    So as a beginner I start to wonder whether size actually matters. :D It is obviously a very subjective question, highly dependent on the actual code being used, but what is the rule of thumb in the real world? In the case that size doesn't matter, why don't we focus more on performance rather than size?

  • Rewriting a simple Pygame 2D drawing function in C++

    - by Dominic Bou-Samra
    I have a 2D list of vectors (say 20x20 / 400 points) and I am drawing these points on a screen like so:

        for row in grid:
            for point in row:
                pygame.draw.circle(window, white, (point.x, point.y), 2, 0)
        pygame.display.flip()  # redraw the screen

    This works perfectly, however it's much slower than I expected. I want to rewrite this in C++ and hopefully learn some stuff on the way (I am doing a unit on C++ at the moment, so it'll help). What's the easiest way to approach this? I have looked at DirectX, and have so far followed a bunch of tutorials and drawn some rudimentary triangles. However, I can't find a simple draw-point call.
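
    One possible starting point, sketched with SDL2 rather than DirectX (SDL is an assumption here, chosen because its 2D calls map closely onto what the pygame loop above does):

        #include <SDL.h>

        int main(int, char**) {
            SDL_Init(SDL_INIT_VIDEO);
            SDL_Window*   win = SDL_CreateWindow("points", SDL_WINDOWPOS_CENTERED,
                                                 SDL_WINDOWPOS_CENTERED, 640, 480, 0);
            SDL_Renderer* ren = SDL_CreateRenderer(win, -1, SDL_RENDERER_ACCELERATED);

            SDL_SetRenderDrawColor(ren, 0, 0, 0, 255);        // black background
            SDL_RenderClear(ren);
            SDL_SetRenderDrawColor(ren, 255, 255, 255, 255);  // white points
            for (int y = 0; y < 20; ++y)                      // the 20x20 grid from the question
                for (int x = 0; x < 20; ++x)
                    SDL_RenderDrawPoint(ren, 50 + x * 20, 50 + y * 20);
            SDL_RenderPresent(ren);                           // like pygame.display.flip()

            SDL_Delay(3000);
            SDL_DestroyRenderer(ren);
            SDL_DestroyWindow(win);
            SDL_Quit();
            return 0;
        }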

  • HA with nginx and cloud environment

    - by gotts
    I have a node in a cloud environment which is currently used to run nginx with mongrels behind it. This is what the nginx config looks like:

        upstream mongrel {
            server 127.0.0.1:8000;
            server 127.0.0.1:8001;
            server 127.0.0.1:8002;
        }

    I want to achieve the following: when I add another node, nginx has to learn about the new node automatically, without my stopping it, changing the config (manually adding the new node's mongrels) and starting it again. How can I make my load balancer (nginx) work in such a way that it is self-aware of the nodes in the cloud?
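
    A common low-tech sketch (paths are examples): keep the server list in a separate file that a provisioning script rewrites whenever a node comes or goes, then ask nginx for a graceful reload, which re-reads the config without dropping in-flight connections:

        upstream mongrel {
            # one "server host:port;" line per mongrel, regenerated by the provisioning script
            include /etc/nginx/upstreams/mongrel_servers.conf;
        }

    After rewriting the included file, "nginx -s reload" (or sending SIGHUP to the master process) picks up the new upstream list without a restart.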

  • Fastest way to pad a number in Java to a certain number of digits

    - by Martin
    I am trying to create a well-optimised bit of code to create a number X digits in length (where X is read from a runtime properties file), based on a DB-generated sequence number (Y), which is then used as a folder name when saving a file. I've come up with three ideas so far, the fastest of which is the last one, but I'd appreciate any advice people may have on this...

    1) Instantiate a StringBuilder with initial capacity X. Append Y. While length < X, insert a zero at position zero.
    2) Instantiate a StringBuilder with initial capacity X. While length < X, append a zero. Create a DecimalFormat based on the StringBuilder value, and then format the number when it's needed.
    3) Create a new int of Math.pow(10, X) and add Y. Use String.valueOf() on the new number and then substring(1) it.

    The second one can obviously be split into outside-loop and inside-loop sections. So, any tips? Using a for-loop of 10,000 iterations, I'm getting similar timings from the first two, and the third method is approximately ten times faster. Does this seem correct? Full test-method code below...

        // Setup test variables
        int numDigits = 9;
        int testNumber = 724;
        int numIterations = 10000;
        String folderHolder = null;
        DecimalFormat outputFormat = new DecimalFormat( "#,##0" );

        // StringBuilder test
        long before = System.nanoTime();
        for ( int i = 0; i < numIterations; i++ ) {
            StringBuilder sb = new StringBuilder( numDigits );
            sb.append( testNumber );
            while ( sb.length() < numDigits ) {
                sb.insert( 0, 0 );
            }
            folderHolder = sb.toString();
        }
        long after = System.nanoTime();
        System.out.println( "01: " + outputFormat.format( after - before ) + " nanoseconds" );
        System.out.println( "Sanity check: Folder = \"" + folderHolder + "\"" );

        // DecimalFormat test
        before = System.nanoTime();
        StringBuilder sb = new StringBuilder( numDigits );
        while ( sb.length() < numDigits ) {
            sb.append( 0 );
        }
        DecimalFormat formatter = new DecimalFormat( sb.toString() );
        for ( int i = 0; i < numIterations; i++ ) {
            folderHolder = formatter.format( testNumber );
        }
        after = System.nanoTime();
        System.out.println( "02: " + outputFormat.format( after - before ) + " nanoseconds" );
        System.out.println( "Sanity check: Folder = \"" + folderHolder + "\"" );

        // Substring test
        before = System.nanoTime();
        int baseNum = (int)Math.pow( 10, numDigits );
        for ( int i = 0; i < numIterations; i++ ) {
            int newNum = baseNum + testNumber;
            folderHolder = String.valueOf( newNum ).substring( 1 );
        }
        after = System.nanoTime();
        System.out.println( "03: " + outputFormat.format( after - before ) + " nanoseconds" );
        System.out.println( "Sanity check: Folder = \"" + folderHolder + "\"" );
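
    A fourth option worth timing alongside these, sketched below: String.format with a zero-padding width (typically not the fastest of the four, but the shortest to write; the values reuse the test values above):

        public class PadDemo {
            public static void main(String[] args) {
                int numDigits = 9;      // X, read from a properties file in the real code
                int testNumber = 724;   // Y, the DB-generated sequence number

                // Build the format pattern once ("%09d" here) and reuse it for every number.
                String pattern = "%0" + numDigits + "d";
                String folderHolder = String.format(pattern, testNumber);
                System.out.println(folderHolder);   // prints 000000724
            }
        }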

  • Very difficult question!

    - by cibercitizen1
    I'm absolutely impressed by how fast the questions are answered. Some guys seem to be spending all day long waiting for questions: congratulations! Your help is very much appreciated, really. But... I guess you don't work, don't have friends or family, or even sleep at all? Do you? Or how do you manage to be so fast?

  • Quickly or concisely determine the longest string per column in a row-based data collection

    - by ccornet
    Judging from the failure of my last inquiry, I need to calculate and preset the widths of a set of columns in a table that is being made into an Excel file. Unfortunately, the string data is stored in a row-based format, but the widths must be calculated in a column-based format. The data for the spreadsheets are generated from the following two collections:

        var dictFiles = l.Items.Cast<SPListItem>()
                               .GroupBy(foo => foo.GetSafeSPValue("Category"))
                               .ToDictionary(bar => bar.Key);
        StringDictionary dictCols = GetColumnsForItem(l.Title);

    where l is an SPList whose title determines which columns are used. Each SPListItem corresponds to a row of data, and the rows are sorted into separate worksheets based on Category (hence the dictionary). The second line is just a simple StringDictionary that has the column name (A, B, C, etc.) as a key and the corresponding SPListItem field display name as the value. So for each Category, I enumerate through dictFiles[somekey] to get all the rows in that sheet, and get the particular cell data using SPListItem.Fields[dictCols[colName]].

    What I am asking is: is there a quick or concise method, for any one dictFiles[somekey], to retrieve a readout of the longest string in each column provided by dictCols? If it is impossible to get both quickness and conciseness, I can settle for either (since I always have the O(n*m) route of just enumerating the collection and updating an array whenever strCurrent.Length > strLongest.Length). For example, if the goal table was the following...

        Item#  Field1     Field2      Field3
        1      Oarfish    Atmosphere  Pretty
        2      Raven      Radiation   Adorable
        3      Sunflower  Flowers     Cute

    I'd like a function which could cleanly take the collection of items 1, 2, and 3 and output, in the correct order:

        Sunflower, Atmosphere, Adorable

    Using .NET 3.5 and C# 3.0.
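
    Still O(n*m) underneath, but LINQ keeps it to one concise expression per column. A sketch (the row type is simplified to a string-to-string dictionary standing in for the SPListItem field lookups):

        using System.Collections.Generic;
        using System.Linq;

        static class ColumnWidths
        {
            // columns is e.g. dictCols.Values in display order; each row maps a
            // field display name to the cell text for that row.
            public static IEnumerable<string> LongestPerColumn(
                IEnumerable<IDictionary<string, string>> rows,
                IEnumerable<string> columns)
            {
                var rowList = rows.ToList();   // avoid re-enumerating the source once per column
                return columns.Select(col =>
                    rowList.Select(row => row[col] ?? string.Empty)
                           .OrderByDescending(s => s.Length)
                           .First());
            }
        }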

  • I'm doing a lot of lists and dictionary sorting... and this is causing memory errors in my Python website

    - by alex
    I retrieved data from the log table in my database. Then I started finding unique users, comparing/sorting lists, etc. In the end I got down to this:

        stats = {'2010-03-19': {'date': '2010-03-19', 'unique_users': 312, 'queries': 1465},
                 '2010-03-18': {'date': '2010-03-18', 'unique_users': 329, 'queries': 1659},
                 '2010-03-17': {'date': '2010-03-17', 'unique_users': 379, 'queries': 1845},
                 '2010-03-16': {'date': '2010-03-16', 'unique_users': 434, 'queries': 2336},
                 '2010-03-15': {'date': '2010-03-15', 'unique_users': 390, 'queries': 2138},
                 '2010-03-14': {'date': '2010-03-14', 'unique_users': 460, 'queries': 2221},
                 '2010-03-13': {'date': '2010-03-13', 'unique_users': 507, 'queries': 2242},
                 '2010-03-12': {'date': '2010-03-12', 'unique_users': 629, 'queries': 3523},
                 '2010-03-11': {'date': '2010-03-11', 'unique_users': 811, 'queries': 4274},
                 '2010-03-10': {'date': '2010-03-10', 'unique_users': 171, 'queries': 1297},
                 '2010-03-26': {'date': '2010-03-26', 'unique_users': 299, 'queries': 1617},
                 '2010-03-27': {'date': '2010-03-27', 'unique_users': 323, 'queries': 1310},
                 '2010-03-24': {'date': '2010-03-24', 'unique_users': 352, 'queries': 2112},
                 '2010-03-25': {'date': '2010-03-25', 'unique_users': 330, 'queries': 1290},
                 '2010-03-22': {'date': '2010-03-22', 'unique_users': 329, 'queries': 1798},
                 '2010-03-23': {'date': '2010-03-23', 'unique_users': 329, 'queries': 1857},
                 '2010-03-20': {'date': '2010-03-20', 'unique_users': 368, 'queries': 1693},
                 '2010-03-21': {'date': '2010-03-21', 'unique_users': 329, 'queries': 1511},
                 '2010-03-29': {'date': '2010-03-29', 'unique_users': 325, 'queries': 1718},
                 '2010-03-28': {'date': '2010-03-28', 'unique_users': 340, 'queries': 1815},
                 '2010-03-30': {'date': '2010-03-30', 'unique_users': 329, 'queries': 1891}}

    It's not a big dictionary. But when I try to do one last thing... it craps out on me:

        for k, v in stats:
            mylist.append(v)

        too many values to unpack

    What the heck does that mean??? TOO MANY VALUES TO UNPACK.
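
    This particular error isn't about memory: iterating over a dict yields only its keys, so each key string like '2010-03-19' is being unpacked into (k, v), which is what raises "too many values to unpack". A sketch of the fix for the stats dictionary above:

        mylist = []
        for k, v in stats.items():   # .iteritems() on Python 2 avoids building an intermediate list
            mylist.append(v)

        # or, more simply:
        mylist = list(stats.values())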

  • How to improve DrawDib's quality?

    - by sxingfeng
    I am coding in C++ with GDI. I use StretchDIBits to draw images to a DC:

        ::SetStretchBltMode(hDC, HALFTONE);
        ::StretchDIBits( hDC,
                         des.left, des.top, des.right - des.left, des.bottom - des.top,
                         0, 0, img.getWidth(), img.getHeight(),
                         (img.accessPixels()), (img.getInfo()),
                         DIB_RGB_COLORS, SRCCOPY );

    However, it is slow, so I changed to the DrawDib function:

        ::SetStretchBltMode(hDC, HALFTONE);
        DrawDibDraw( hdd, hDC,
                     des.left, des.top, des.right - des.left, des.bottom - des.top,
                     (LPBITMAPINFOHEADER)(img.getInfo()), (img.accessPixels()),
                     0, 0, img.getWidth(), img.getHeight(),
                     DDF_HALFTONE );

    However, the result looks as if it were drawn with the COLORONCOLOR mode. How can I improve the drawing quality?

  • [Python] Tips for making fraction calculator code more optimized (faster and using less memory)

    - by Logic Named Joe
    Hello everyone. Basically, what I need the program to do is to act as a simple fraction calculator (for addition, subtraction, multiplication and division) on a single line of input, for example:

        input:  1/7 + 3/5
        output: 26/35

    My initial code:

        import sys

        def euclid(numA, numB):
            while numB != 0:
                numRem = numA % numB
                numA = numB
                numB = numRem
            return numA

        for wejscie in sys.stdin:
            wyjscie = wejscie.split(' ')
            a, b = [int(x) for x in wyjscie[0].split("/")]
            c, d = [int(x) for x in wyjscie[2].split("/")]
            if wyjscie[1] == '+':
                licz = a * d + b * c
                mian = b * d
                nwd = euclid(licz, mian)
                konA = licz/nwd
                konB = mian/nwd
                wynik = str(konA) + '/' + str(konB)
                print(wynik)
            elif wyjscie[1] == '-':
                licz = a * d - b * c
                mian = b * d
                nwd = euclid(licz, mian)
                konA = licz/nwd
                konB = mian/nwd
                wynik = str(konA) + '/' + str(konB)
                print(wynik)
            elif wyjscie[1] == '*':
                licz = a * c
                mian = b * d
                nwd = euclid(licz, mian)
                konA = licz/nwd
                konB = mian/nwd
                wynik = str(konA) + '/' + str(konB)
                print(wynik)
            else:
                licz = a * d
                mian = b * c
                nwd = euclid(licz, mian)
                konA = licz/nwd
                konB = mian/nwd
                wynik = str(konA) + '/' + str(konB)
                print(wynik)

    Which I reduced to:

        import sys

        def euclid(numA, numB):
            while numB != 0:
                numRem = numA % numB
                numA = numB
                numB = numRem
            return numA

        for wejscie in sys.stdin:
            wyjscie = wejscie.split(' ')
            a, b = [int(x) for x in wyjscie[0].split("/")]
            c, d = [int(x) for x in wyjscie[2].split("/")]
            if wyjscie[1] == '+':
                print("/".join([str((a * d + b * c)/euclid(a * d + b * c, b * d)), str((b * d)/euclid(a * d + b * c, b * d))]))
            elif wyjscie[1] == '-':
                print("/".join([str((a * d - b * c)/euclid(a * d - b * c, b * d)), str((b * d)/euclid(a * d - b * c, b * d))]))
            elif wyjscie[1] == '*':
                print("/".join([str((a * c)/euclid(a * c, b * d)), str((b * d)/euclid(a * c, b * d))]))
            else:
                print("/".join([str((a * d)/euclid(a * d, b * c)), str((b * c)/euclid(a * d, b * c))]))

    Any advice on how to improve this further is welcome. Edit: one more thing that I forgot to mention - the code cannot make use of any libraries apart from sys.
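
    One possible refactoring, sketched below, that stays within the "only sys" constraint: compute the numerator/denominator once per operator via a small lambda table, then reduce in a single place instead of repeating the gcd call in every branch:

        import sys

        def gcd(a, b):
            while b:
                a, b = b, a % b
            return a

        # numerator/denominator for each operator, given the two fractions a/b and c/d
        OPS = {
            '+': lambda a, b, c, d: (a * d + b * c, b * d),
            '-': lambda a, b, c, d: (a * d - b * c, b * d),
            '*': lambda a, b, c, d: (a * c, b * d),
            '/': lambda a, b, c, d: (a * d, b * c),
        }

        for line in sys.stdin:
            left, op, right = line.split()
            a, b = map(int, left.split('/'))
            c, d = map(int, right.split('/'))
            num, den = OPS[op](a, b, c, d)
            g = gcd(num, den)
            print("%d/%d" % (num // g, den // g))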

  • Fastest Way To Format Plain Text Using Javascript

    - by Nathan Campos
    I have a huge plain-text document, about 700 KB, which is very big for plain text, and I need to format it on load, converting it to HTML. The only things I need to replace (i.e., format to HTML so the browser can display them) are bold and italic. In the plain text, bold looks like this:

        Not on bold... **bold text here** not bold here

    And italic like this:

        Not italic... *italic text* no italic

    Just like StackOverflow does for its formatting. The problem is that I need to make it fast, since the text is so big... One of my ideas was to add paging, so the script only needs to format part of the text rather than all of it; then, after the user changes the page, the script would be called again. But the problem is: how can I write the code for all this?
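
    A minimal sketch of the replacement itself (regexes assumed to be good enough for non-nested markers; the double-asterisk pattern has to run before the single one):

        function formatChunk(text) {
          return text
            .replace(/\*\*([^*]+)\*\*/g, '<strong>$1</strong>')
            .replace(/\*([^*]+)\*/g, '<em>$1</em>');
        }

        // formatChunk('Not on bold... **bold text here** and *italic text*')
        // => 'Not on bold... <strong>bold text here</strong> and <em>italic text</em>'

    Running this per page-sized chunk, as the paging idea suggests, keeps each call cheap regardless of the total document size.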

  • Can sIFR js files be combined into 1 js file?

    - by Logan Stellway
    In the new sIFR 3, the author has implemented the ability to combine the CSS files into one file. There are 4 .js files in total, and I was wondering whether anyone has ideas on how to combine these files for a faster loading time on the client side. I was also wondering whether there is a way to just include the sIFR CSS styles in an already existing CSS file. These modifications would greatly decrease the load/waiting time for the end user, so I was wondering whether there is any thought of this for upcoming builds, or any ideas on how to accomplish it.
