Search Results

Search found 6199 results on 248 pages for 'fast enumeration'.


  • Efficient data structure for fast random access, search, insertion and deletion

    - by Leonel
    I'm looking for a data structure (or structures) that would let me keep an ordered list of integers with no duplicates, where indexes and values are in the same range. I need four main operations to be efficient, in rough order of importance: (1) getting the value at a given index, (2) finding the index of a given value, (3) inserting a value at a given index, (4) deleting the value at a given index. With an array, operation 1 is O(1), but 2 is O(N), and insertions and deletions are expensive as well (also O(N), I believe). A linked list has O(1) insertion and deletion (once you have the node), but 1 and 2 are O(N), which negates the gains. I tried keeping two arrays, a[index] = value and b[value] = index, which makes 1 and 2 O(1) but makes 3 and 4 even more costly. Is there a data structure better suited for this?
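
    One classic fit, offered as a hedged sketch rather than the definitive answer: a balanced search tree augmented with subtree sizes (an order-statistic tree) performs all four operations in O(log N), assuming the list is kept sorted so that a value's index is its rank. GCC ships one as the non-standard __gnu_pbds policy tree:

        #include <ext/pb_ds/assoc_container.hpp>
        #include <ext/pb_ds/tree_policy.hpp>
        #include <functional>
        using namespace __gnu_pbds;

        // Ordered set of ints with rank queries: an order-statistic tree.
        typedef tree<int, null_type, std::less<int>,
                     rb_tree_tag, tree_order_statistics_node_update> ordered_set;

        int main() {
            ordered_set s;
            s.insert(10); s.insert(20); s.insert(30);  // O(log N) insertion
            int value = *s.find_by_order(1);           // value at index 1 -> 20
            auto index = s.order_of_key(20);           // index of value 20 -> 1
            s.erase(20);                               // O(log N) deletion
            (void)value; (void)index;
        }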

    Read the article

  • Hashtable: is it that fast?

    - by Costa
    Hi. The hash function of a Java string is s[0]*31^(n-1) + s[1]*31^(n-2) + ... + s[n-1], and I assume most other languages use a similar or close implementation. Suppose we have a hashtable and a list of 50 elements, where each element is a 7-character string: ABCDEF1, ABCDEF2, ABCDEF3, ..., ABCDEFn. Assume each bucket of the hashtable contains 5 strings (I think this function would make it one string per bucket, but let us assume it is 5). If we call col.Contains("ABCDEFn"), it will do 6 comparisons and discover the difference on the 7th character. The hashtable will take around 70 operations (multiplications and additions) to compute the hash code and compare with the 5 strings in the bucket, and bang, it's found. The list will take around 300 comparisons to find it. In the case where there are only 10 elements, the list will take around 70 operations but the hashtable around 50, and note that hashtable operations are more time-consuming (multiplications). I conclude that HybridDictionary in .NET is probably the best choice for most cases that require a hashtable of unknown size, because it uses a list until the collection grows beyond 10 elements. I still need something like a HashSet rather than a Dictionary of keys and values, though; I wonder why there is no HybridSet! So what do you think? Thanks
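
    For reference, a hedged C++ rendition of that polynomial hash (the function name is mine): it costs exactly one multiply and one add per character, which is where the operation counts above come from.

        #include <cstdint>
        #include <string>

        // Java-style String.hashCode: h = s[0]*31^(n-1) + ... + s[n-1].
        // Unsigned arithmetic reproduces Java's wrap-on-overflow ints.
        int32_t javaStringHash(const std::string& s) {
            uint32_t h = 0;
            for (unsigned char c : s)
                h = 31u * h + c;   // one multiply, one add per character
            return static_cast<int32_t>(h);
        }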

    Read the article

  • Fast search of a 2-dimensional array

    - by Tim
    I need a method of quickly searching a large 2-dimensional array. I extract the array from Excel, so one dimension represents the rows and the second the columns. I wish to obtain a list of the rows where the columns match certain criteria, and I need to know the row number (or index into the array). For example, if I extract a range from Excel, I may need to find all rows where column A = "dog" and column B = 7 and column J <> "a". I only know which columns and which values to find at run time, so I can't hard-code the column index. I could use a simple loop, but is this efficient? I need to run it several thousand times, searching for different criteria each time.

        For r As Integer = 0 To UBound(myArray, 0) - 1
            match = True
            For c = 0 To UBound(myArray, 1) - 1
                If Not doesValueMeetCriteria(myArray(r, c)) Then
                    match = False
                    Exit For
                End If
            Next
            If match Then addRowToMatchedRows(r)
        Next

    The doesValueMeetCriteria function is a simple function that checks the value of the array element against the query requirement, e.g. column A = dog, etc. Is it more efficient to create a DataTable from the array and use the .Select method? Can I use LINQ in some way? Perhaps some form of dictionary or hashtable? Or is the simple loop the most efficient? Your suggestions are most welcome.
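
    One common speed-up, sketched here in C++ and assuming all criteria are equality tests (the helper names are illustrative): build a per-column hash index over the array once, then answer each query by intersecting the matching row lists instead of rescanning every cell.

        #include <algorithm>
        #include <iterator>
        #include <string>
        #include <unordered_map>
        #include <vector>

        // index[c][value] -> ascending list of rows where column c holds value.
        using ColumnIndex = std::unordered_map<std::string, std::vector<int>>;

        std::vector<ColumnIndex> buildIndex(const std::vector<std::vector<std::string>>& grid) {
            std::size_t cols = grid.empty() ? 0 : grid[0].size();
            std::vector<ColumnIndex> index(cols);
            for (std::size_t r = 0; r < grid.size(); ++r)
                for (std::size_t c = 0; c < cols; ++c)
                    index[c][grid[r][c]].push_back((int)r);
            return index;
        }

        // Rows where column c0 == v0 and column c1 == v1:
        // an intersection of two already-sorted row lists.
        std::vector<int> match(const std::vector<ColumnIndex>& index,
                               std::size_t c0, const std::string& v0,
                               std::size_t c1, const std::string& v1) {
            std::vector<int> out;
            auto a = index[c0].find(v0);
            auto b = index[c1].find(v1);
            if (a == index[c0].end() || b == index[c1].end()) return out;
            std::set_intersection(a->second.begin(), a->second.end(),
                                  b->second.begin(), b->second.end(),
                                  std::back_inserter(out));
            return out;
        }

    Building the index costs one pass over the array; after that, each of the several thousand queries touches only the rows that can possibly match.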

    Read the article

  • Delphi fast large bitmap creation (without clearing)

    - by Ritsaert Hornstra
    When using the TBitmap wrapper for a GDI bitmap from the unit Graphics, I noticed it will always clear out the bitmap (using a PatBlt call) when setting up a bitmap with SetSize(w, h). When I copy in the bits later on (see routine below), ScanLine seems to be the fastest option, not SetDIBits.

        function ToBitmap: TBitmap;
        var
          i, N, x: Integer;
          S, D: PAnsiChar;
        begin
          Result := TBitmap.Create();
          Result.PixelFormat := pf32bit;
          Result.SetSize(width, height);
          S := Src;
          D := Result.ScanLine[0];
          x := Integer(Result.ScanLine[1]) - Integer(D);
          N := width * SizeOf(Longword);
          for i := 0 to height - 1 do
          begin
            Move(S^, D^, N);
            Inc(S, N);
            Inc(D, x);
          end;
        end;

    The bitmaps I need to work with are quite large (150 MB of RGB memory). With these images it takes 150 ms just to create an empty bitmap and a further 140 ms to overwrite its contents. Is there a way to initialize a TBitmap with the correct size WITHOUT initializing the pixels themselves, leaving the pixel memory uninitialized (i.e. dirty)? Or is there another way to do such a thing? I know we could work on the pixels in place, but this still leaves the 150 ms of unnecessary initialization of the pixels.

    Read the article

  • Fast parsing of PHP in C#

    - by Jessica Shea
    Hello there, I've got a requirement for parsing PHP files in C#. We essentially require some devs in another country to upload PHP files, and once uploaded we need to check the PHP files and get a list of all the methods, classes, functions, etc. I thought of using a regex, but I can't work out whether a function belongs to a class that way, so I was wondering if there's already something out there that will parse PHP files and spit out their functions (I'm trying to avoid writing a full-blown AST implementation). Does anyone have any idea? I looked at Coco/R but I couldn't find a PHP grammar file. I'm using .NET 2.0 and C#.

    Read the article

  • Optimizing MySQL to avoid redundancy but still have fast access to calculable data

    - by diglettpotato
    An example for the sake of the question: I have a database which contains users, questions, and answers. Each user has a score which can be calculated from the data in the questions and answers tables, so a score field in the users table would be redundant. However, if I don't use a score field, then calculating the score on every request would significantly slow down the website. My current solution is to keep a score field and have a cron job that runs every few hours, recalculating everybody's score and updating the field. Is there a better way to handle this?

    Read the article

  • Programming language for fast calculations with big integers

    - by sub
    I'm doing Project Euler problems at the moment, and I can solve most of them using my own programming language, which uses native C++ integers (so they are bound to 2^32 on my machine). However, at times there are problems which require me to work with very large numbers, and I can't do that with native integers. So I implemented a BigInt library in my language, which unfortunately gets extremely slow at times. Is there a programming language suitable for very efficient handling of big numbers? I mean that I want to do the things I can do in other programming languages (variables, loops, etc.), but faster. If you have tips for workarounds of the 2^32 limit in my language/C++/other languages, please tell me too!
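
    For the C++ route, one well-trodden workaround is an arbitrary-precision library such as GNU GMP; a minimal sketch, assuming GMP and its C++ wrapper are installed (link with -lgmpxx -lgmp):

        #include <gmpxx.h>
        #include <iostream>

        int main() {
            mpz_class f = 1;              // arbitrary-precision integer
            for (int i = 2; i <= 100; ++i)
                f *= i;                   // never overflows, grows as needed
            std::cout << "100! = " << f << '\n';
        }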

    Read the article

  • Fast read of certain bytes of multiple files in C/C++

    - by Alejandro Cámara
    I've been searching the web about this question, and although there are many similar questions about reading/writing in C/C++, I haven't found this specific task. I want to read from multiple files (256x256 of them) only sizeof(double) bytes located at a certain position in each file. Right now my solution is, for each file:

    1. Open the file (read, binary mode): fstream fTest("current_file", ios_base::in | ios_base::binary);
    2. Seek the position I want to read: fTest.seekg(position * sizeof(test_value), ios_base::beg);
    3. Read the bytes: fTest.read((char *) &(output[i][j]), sizeof(test_value));
    4. Close the file: fTest.close();

    This takes about 350 ms to run inside a for { for { } } structure with 256x256 iterations (one per file). Q: Do you think there is a better way to implement this operation? How would you do it?
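
    If the file layout can't change, one micro-optimisation to try is dropping the iostream layer and issuing a single positioned read per file; a POSIX sketch (error handling kept minimal):

        #include <fcntl.h>
        #include <unistd.h>

        // Read one double at index `position` from `path`; false on failure.
        bool readDoubleAt(const char* path, long position, double* out) {
            int fd = open(path, O_RDONLY);
            if (fd < 0) return false;
            ssize_t n = pread(fd, out, sizeof(double),
                              (off_t)(position * sizeof(double)));
            close(fd);
            return n == (ssize_t)sizeof(double);
        }

    That said, with 65,536 files the open/close calls likely dominate; merging the data into one big file would turn 65,536 opens into a single sequential read.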

    Read the article

  • How fast should an interpreted language be today?

    - by Tarbal
    Is the speed of the (main or only viable) implementation of an interpreted programming language a criterion today? What would be the optimal balance between speed and abstraction? Should scripting languages completely ignore all thoughts about performance and just follow the concepts of rapid development, readability, etc.? I'm asking because I'm currently designing some experimental languages and interpreters.

    Read the article

  • Fast exchange of data between unmanaged code and managed code

    - by vizcaynot
    Hello: Without using P/Invoke, from C++/CLI I have succeeded in integrating various methods of a third-party DLL library built in C. One of these methods retrieves information from a database and stores it in different structures. The C++/CLI program I wrote reads those structures and stores them in a List<T>, which is then returned to an application programmed completely in C# that reads and uses it. I understand that the double handling of the data (first filling several structures, then copying all of those structures into a List<T>) may generate unnecessary overhead, at which point I wish C++/CLI had the keyword "yield". Given the above scenario, do you have recommendations to avoid or reduce this overhead? Thanks.

    Read the article

  • App is fast on 3GS but slow on 3G

    - by Anthony Chan
    Hi all, I'm new to coding and have just finished an app, testing it on both a 3G and a 3GS. On the 3GS it worked as well as on the simulator. However, when I ran it on the 3G, the app became extremely slow. I'm not sure of the reason and I hope someone could shed some light on this. Generally, my app has a couple of view controller classes, one being the title page, one the main page, one the settings, etc. I used a dissolve to transition from the title page to the main page, but even this simple transition performs un-smoothly on the 3G! Other parts of the app involve zooming into images by scaling them up, switching images by push or dissolve on touch events, saving photos to the photo library, and storing and retrieving some photos in a folder and some data in an SQLite database; all of these actions stutter. Compared with graphics-heavy or math-heavy apps, I think mine is pretty simple. I have no clue why the app is so slow and choppy that it is barely usable on a 3G. Any help or direction would be much appreciated. Thanks for helping out.

    Read the article

  • NHibernate - fast way to clear out database

    - by csetzkorn
    Hi, I intend to perform some automated integration tests. This requires the database to be put back into a 'clean state'. Is this the fastest/best way to do it?

        var cfg = new Configuration();
        cfg.Configure();
        cfg.AddAssembly("Bla");
        new SchemaExport(cfg).Execute(false, true, false);

    Thanks. Christian

    Read the article

  • Fast scrolling background

    - by Andre
    I want a game that scrolls the background in a similar way to a UITableView. I solved it with a timer that moves the background up and brings another copy of the same picture up:

        if (bg1.center.y <= -self.view.bounds.size.height / 2) {
            bg1.center = CGPointMake(bg1.center.x, 690);
        }
        if (bg2.center.y <= -self.view.bounds.size.height / 2) {
            bg2.center = CGPointMake(bg2.center.x, 690);
        }
        bg1.center = CGPointMake(bg1.center.x, bg1.center.y - movement);
        bg2.center = CGPointMake(bg2.center.x, bg2.center.y - movement);

    But the faster I move the pictures, the more problems occur: gaps appear between the backgrounds, and they get bigger the faster I move them (movement is defined by the speed of swiping over the screen). Any idea how to solve that?

    Read the article

  • Fast path cache generation for a connected node graph

    - by Sukasa
    I'm trying to get a faster pathfinding mechanism in place in a game I'm working on, for a connected node graph. The nodes are classed into two types, "networks" and "routers"; in the accompanying picture, the blue circles represent routers and the grey rectangles networks. Each network keeps a list of which routers it is connected to, and vice versa. Routers cannot connect directly to other routers, and networks cannot connect directly to other networks. I need an algorithm that will map out a path, measured in the number of networks crossed, for each possible source and destination network, excluding paths where the source and destination are the same network. I have one right now, but it is unusably slow, taking about two seconds to map the paths, which becomes incredibly noticeable for all connected players. The current algorithm is a depth-first brute-force search (it was thrown together in about an hour just to get the path caching working) which returns an array of networks in the order they are traversed, which explains why it's so slow. Are there any algorithms that are more efficient? As a side note, while these example graphs have four networks, the in-practice graphs have 55 networks and about 20 routers in use. Impossible paths can also occur, and at any time the network/router graph topology can change, requiring the path cache to be rebuilt. What approach/algorithm would likely provide the best results for this type of graph?
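
    Since the edges are unweighted (cost = networks crossed), a breadth-first search from each network yields all shortest paths and is dramatically cheaper than depth-first brute force; with 55 networks and ~20 routers the whole cache rebuild is tiny. A sketch, assuming networks and routers share one node-id space in a single adjacency list (raw hop counts alternate network/router, so network-to-network distance is hops/2):

        #include <queue>
        #include <vector>

        // dist[s][t] = fewest edges from node s to node t, or -1 if unreachable.
        std::vector<std::vector<int>> allPairsBfs(const std::vector<std::vector<int>>& adj) {
            int n = (int)adj.size();
            std::vector<std::vector<int>> dist(n, std::vector<int>(n, -1));
            for (int s = 0; s < n; ++s) {
                std::queue<int> q;
                dist[s][s] = 0;
                q.push(s);
                while (!q.empty()) {            // classic O(V + E) BFS per source
                    int u = q.front(); q.pop();
                    for (int v : adj[u])
                        if (dist[s][v] == -1) {
                            dist[s][v] = dist[s][u] + 1;
                            q.push(v);
                        }
                }
            }
            return dist;
        }

    Unreachable pairs stay at -1, which also covers the impossible-path case; rebuild the table whenever the topology changes.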

    Read the article

  • Fast ceiling of an integer division in C / C++

    - by andand
    Given integer values x and y, C and C++ return as the quotient q = x/y the floor of the exact quotient (at least for positive operands, where truncation toward zero coincides with the floor). I'm interested in a method of returning the ceiling instead. For example, ceil(10/5) = 2 and ceil(11/5) = 3. The obvious approach involves something like:

        q = x / y;
        if (q * y < x) ++q;

    This requires an extra comparison and multiplication, and other methods I've seen (and used, in fact) involve casting to float or double. Is there a more direct method that avoids the additional multiplication (or a second division) and branch, and that also avoids casting to a floating-point number?
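
    The standard branch-free idiom adjusts the numerator before dividing; a sketch for positive operands (assumes x + y - 1 does not overflow):

        // Ceiling of x / y: one addition and one division.
        unsigned ceil_div(unsigned x, unsigned y) {
            return (x + y - 1) / y;
        }

        // Overflow-safe variant; on most targets the compiler derives the
        // remainder from the same machine divide, so there is no second division.
        unsigned ceil_div_safe(unsigned x, unsigned y) {
            return x / y + (x % y != 0);
        }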

    Read the article

  • Python Tkinter after loop not working fast enough

    - by user2658538
    I am making a simple metronome that plays a tick sound every few milliseconds, depending on the BPM, using the winsound module. I use Tkinter because there will be a GUI component later, but for now the metronome code is working: it plays the sound at a constant rate. However, even though I set the after loop to play the sound every few milliseconds, it waits longer than that, and the beat is slower than it should be. Is it a problem with the code or a problem with the way I calculate the time? Thanks. Here is my code:

        from Tkinter import *
        import winsound, time, threading

        root = Tk()
        c = Canvas(root)
        c.pack()

        class metronome():
            def __init__(self, root, canvas, tempo=100):
                self.root = root
                self.root.bind("<1>", self.stop)
                self.c = canvas
                self.thread = threading.Thread(target=self.play)
                self.thread.daemon = True
                self.pause = False
                self.tempo = tempo / 60.0        # beats per second
                self.tempo = 1.0 / self.tempo    # seconds per beat
                self.tempo *= 1000               # milliseconds per beat

            def play(self):
                winsound.PlaySound("tick.wav", winsound.SND_FILENAME)
                self.sound = self.c.after(int(self.tempo), self.play)

            def stop(self, e):
                self.c.after_cancel(self.sound)

        beat = metronome(root, c, 120)
        beat.thread.start()
        root.mainloop()

    Read the article

  • Finding open contiguous blocks of time for every day of a month, fast

    - by Chris
    I am working on a booking availability system for a group of several venues, and am having a hard time generating the availability of time blocks for days in a given month. This happens server-side in PHP, but the concept itself is language-agnostic; I could be doing this in JS or anything else.

    Given a venue_id, month, and year (6/2012 for example), I have a list of all events occurring in that range at that venue, represented as unix timestamps start and end. This data comes from the database. I need to establish what, if any, contiguous blocks of time of a minimum length (different per venue) exist on each day. For example, on 6/1 I have an event between 2:00pm and 7:00pm. The minimum time is 5 hours, so there's a block open from 9am - 2pm and another from 7pm - 12am. This continues for the 2nd, 3rd, etc., every day of June. Some (most) of the days have nothing happening at all; some have 1 - 3 events.

    The solution I came up with works, but it also takes waaaay too long to generate the data. Basically, I loop over every day of the month and create an array of timestamps for each 15 minutes of that day. Then, I loop over the day's events in 15-minute steps, marking any "taken" slot as false. What remains is an array of free time vs. taken time:

        // one day's array after processing through the loops (not real timestamps)
        array(
            12345678 => 12345678,  // <--- available
            12345878 => 12345878,
            12346078 => 12346078,
            12346278 => false,     // <--- not available
            12346478 => false,
            12346678 => false,
            12346878 => false,
            12347078 => 12347078,  // <--- available
            12347278 => 12347278
        )

    Now I would need to loop over THIS array to find continuous blocks, then check whether they are long enough (each venue has a minimum), and if so, build the descriptive text for their start and end (i.e. 9am - 2pm). WHEW! By the time all this looping is done, the user has grown bored and wandered off to YouTube to watch videos of puppies; it takes ages to examine 30 or so days. Is there a faster way to solve this issue? To summarize the problem: given time ranges t1 and t2 on day d, how can I determine the remaining time in d that is longer than the minimum time block m?

    This data is assembled on demand via AJAX as the user moves between calendar months. Results are cached per page load, so if the user goes to July a second time, the data generated the first time is reused. Any other details that would help, let me know.

    Edit: per request, the database structure (or the part that is relevant here):

        events
            id (bigint)
            title (varchar)

        event_times
            id (bigint)
            event_id (bigint)
            venue_id (bigint)
            start (bigint)
            end (bigint)

        venues
            id (bigint)
            name (varchar)
            min_block (int)
            min_start (varchar)
            max_start (varchar)
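
    The 15-minute grid is what makes this slow; the usual approach works directly on the intervals. Sort each day's events by start time, then sweep once, emitting every gap of at least the venue's minimum length. Sketched here in C++ for concreteness (the same sweep is a few lines of PHP):

        #include <algorithm>
        #include <utility>
        #include <vector>

        using Interval = std::pair<long, long>;  // [start, end) unix timestamps

        // Free blocks of at least minLen seconds inside [dayStart, dayEnd),
        // given the day's (possibly overlapping) booked intervals.
        std::vector<Interval> freeBlocks(long dayStart, long dayEnd,
                                         std::vector<Interval> events, long minLen) {
            std::sort(events.begin(), events.end());   // by start time
            std::vector<Interval> out;
            long cursor = dayStart;
            for (const Interval& e : events) {
                if (e.first - cursor >= minLen)
                    out.push_back({cursor, e.first});  // gap before this event
                cursor = std::max(cursor, e.second);   // merge overlaps
            }
            if (dayEnd - cursor >= minLen)
                out.push_back({cursor, dayEnd});       // gap after the last event
            return out;
        }

    With 0 - 3 events per day this is effectively constant work per day, versus 96 slots times the event count in the grid approach.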

    Read the article

  • Unsafe, super-fast cross-process memory buffer?

    - by John
    Cross-process memory buffers always have some overhead, and my understanding is that it is quite high. But suppose you're implementing a cross-process render buffer; this isn't critically important in the same way as other data, so are there techniques we can use to get 'raw' access to a chunk of memory from multiple processes, with no safety nets apart from it not crashing? Or do modern operating systems simply not expose unabstracted memory in a way that makes this possible? I'm working in C++, but the question applies to Win XP/Vista/7 and Mac OS X 10.5+ (and Linux, less importantly).
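
    On the POSIX side (Mac OS X, Linux), shm_open plus mmap gives exactly this: a raw shared mapping with no locking or safety added; the Windows counterpart is CreateFileMapping/MapViewOfFile. A minimal sketch, error handling mostly omitted:

        #include <cstddef>
        #include <fcntl.h>
        #include <sys/mman.h>
        #include <unistd.h>

        // Both processes call this with the same name (e.g. "/renderbuf")
        // and get raw, unsynchronised access to the same bytes.
        void* mapSharedBuffer(const char* name, std::size_t size) {
            int fd = shm_open(name, O_CREAT | O_RDWR, 0600);
            if (fd < 0) return nullptr;
            if (ftruncate(fd, (off_t)size) != 0) { close(fd); return nullptr; }
            void* p = mmap(nullptr, size, PROT_READ | PROT_WRITE,
                           MAP_SHARED, fd, 0);
            close(fd);  // the mapping stays valid after the fd is closed
            return p == MAP_FAILED ? nullptr : p;
        }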

    Read the article

  • Fast Fibonacci sequence implementation

    - by user2947615
    I have written this function in Scala to calculate the Fibonacci number at a given index n:

        def fibonacci(n: Long): Long = {
          if (n <= 1) n
          else fibonacci(n - 1) + fibonacci(n - 2)
        }

    However, it is not efficient when calculating with large indexes. Therefore I need to implement a function using a tuple, and this function should return two consecutive values as the result. Can somebody give me any hints about this? I have never used Scala before. Thanks!
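
    The tuple idea the question asks about, sketched in C++ rather than Scala for illustration: carry the pair (F(n), F(n+1)) forward so each step costs one addition, making the computation linear instead of exponential.

        #include <cstdint>
        #include <utility>

        // Returns (F(n), F(n+1)); each value is computed exactly once.
        // uint64_t holds Fibonacci numbers up to F(93).
        std::pair<uint64_t, uint64_t> fibPair(uint64_t n) {
            std::pair<uint64_t, uint64_t> p{0, 1};   // (F(0), F(1))
            for (uint64_t i = 0; i < n; ++i)
                p = {p.second, p.first + p.second};  // slide the window
            return p;
        }

        uint64_t fibonacci(uint64_t n) { return fibPair(n).first; }

    The same shape transfers directly to a tail-recursive Scala function that threads the tuple through an accumulator.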

    Read the article

  • Fast comparison of char arrays?

    - by StackedCrooked
    I'm currently working in a codebase where IPv4 addresses are represented as pointers to u_int8. The equality operator is implemented like this:

        bool Ipv4Address::operator==(const u_int8 * inAddress) const
        {
            return (*(u_int32*) this->myBytes == *(u_int32*) inAddress);
        }

    This is probably the fastest solution, but it causes this GCC compiler warning:

        ipv4address.cpp:65: warning: dereferencing type-punned pointer will break strict-aliasing rules

    How can I rewrite the comparison correctly without breaking strict-aliasing rules and without losing performance? I have considered using either memcmp or this macro:

        #define IS_EQUAL(a, b) \
            (a[0] == b[0] && a[1] == b[1] && a[2] == b[2] && a[3] == b[3])

    I'm thinking that the macro is the fastest solution. What do you recommend?
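
    The strict-aliasing-safe rewrite is usually memcpy into a local integer: modern compilers recognise the fixed four-byte copy and reduce it to a single 32-bit load, so the one-comparison performance is kept. A sketch (the function name is illustrative):

        #include <cstdint>
        #include <cstring>

        // Compare two 4-byte addresses as single 32-bit values without
        // violating strict aliasing: memcpy is the sanctioned punning route.
        bool sameIpv4(const uint8_t* a, const uint8_t* b) {
            uint32_t x, y;
            std::memcpy(&x, a, sizeof x);  // compiles to one 32-bit load
            std::memcpy(&y, b, sizeof y);
            return x == y;
        }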

    Read the article

  • How to quickly compute distances between high-dimensional vectors

    - by chyojn
    Assume there are three groups of high-dimensional vectors: {a_1, a_2, ..., a_N}, {b_1, b_2, ..., b_N}, {c_1, c_2, ..., c_N}. Each of my vectors can be represented as x = a_i + b_j + c_k, where 1 <= i, j, k <= N. The vector is then encoded as (i, j, k), which can later be decoded as x = a_i + b_j + c_k. My question is: given two vectors x = (i_1, j_1, k_1) and y = (i_2, j_2, k_2), is there a method to compute the Euclidean distance between them without decoding x and y?
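
    There is, and it never touches the high-dimensional data at query time; a sketch under the assumption that you can afford six N x N tables of precomputed dot products (AA[i][j] = <a_i, a_j>, AB[i][j] = <a_i, b_j>, and so on). Expanding <x, y> for two encoded vectors gives nine table lookups, and ||x - y||^2 = <x,x> - 2<x,y> + <y,y>.

        #include <cmath>
        #include <vector>

        using Table = std::vector<std::vector<double>>;

        // Precomputed Gram tables within and between the three codebooks.
        struct Gram { Table AA, BB, CC, AB, AC, BC; };

        // <x, y> for x = a_i1 + b_j1 + c_k1, y = a_i2 + b_j2 + c_k2:
        // nine lookups, no high-dimensional arithmetic.
        double dot(const Gram& g, int i1, int j1, int k1,
                                  int i2, int j2, int k2) {
            return g.AA[i1][i2] + g.AB[i1][j2] + g.AC[i1][k2]
                 + g.AB[i2][j1] + g.BB[j1][j2] + g.BC[j1][k2]
                 + g.AC[i2][k1] + g.BC[j2][k1] + g.CC[k1][k2];
        }

        double distance(const Gram& g, int i1, int j1, int k1,
                                       int i2, int j2, int k2) {
            double xx = dot(g, i1, j1, k1, i1, j1, k1);
            double yy = dot(g, i2, j2, k2, i2, j2, k2);
            double xy = dot(g, i1, j1, k1, i2, j2, k2);
            return std::sqrt(xx - 2.0 * xy + yy);
        }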

    Read the article
