Search Results

Search found 9032 results on 362 pages for 'fast math'.

Page 41/362

  • Fast read of certain bytes of multiple files in C/C++

    - by Alejandro Cámara
    I've been searching the web about this, and although there are many similar questions about reading/writing in C/C++, I haven't found one about this specific task. I want to read, from multiple files (256x256 of them), only sizeof(double) bytes located at a certain position in each file. Right now my solution is, for each file:
        Open the file (read, binary mode): fstream fTest("current_file", ios_base::out | ios_base::binary);
        Seek the position I want to read: fTest.seekg(position*sizeof(test_value), ios_base::beg);
        Read the bytes: fTest.read((char *) &(output[i][j]), sizeof(test_value));
        And close the file: fTest.close();
    This takes about 350 ms to run inside a for { for { } } structure with 256x256 iterations (one per file). Q: Do you think there is a better way to implement this operation? How would you do it?
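
    A minimal sketch of the same open/seek/read/close pattern, written in C# for comparison (not from the original question; the file-name pattern and the element index are made-up placeholders):

        // Hedged sketch: open each file, seek to the element, read 8 bytes, close.
        using System;
        using System.IO;

        class OffsetReader
        {
            static double ReadDoubleAt(string path, long index)
            {
                using (var fs = new FileStream(path, FileMode.Open, FileAccess.Read))
                using (var br = new BinaryReader(fs))
                {
                    fs.Seek(index * sizeof(double), SeekOrigin.Begin); // jump to the element
                    return br.ReadDouble();                            // read exactly 8 bytes
                }
            }

            static void Main()
            {
                var output = new double[256, 256];
                for (int i = 0; i < 256; i++)
                    for (int j = 0; j < 256; j++)
                        // Hypothetical file naming and element index, for illustration only.
                        output[i, j] = ReadDoubleAt($"block_{i}_{j}.bin", 42);
            }
        }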

  • Is a Hashtable really that fast?

    - by Costa
    Hi. s[0]*31^(n-1) + s[1]*31^(n-2) + ... + s[n-1] is the hash function of the Java String class; I assume most other languages use something similar or close to this implementation. Suppose we have a hashtable and a list, each holding 50 elements, where each element is a 7-character string: ABCDEF1, ABCDEF2, ABCDEF3, ... ABCDEFn. Assume each bucket of the hashtable contains 5 strings (I think this function will actually put one string per bucket, but let us assume it is 5). If we call col.Contains("ABCDEFn") on the list, each comparison will do 6 character comparisons and discover the difference on the 7th. The hashtable will take around 70 operations (multiplications and additions) to compute the hash code and compare against the 5 strings in the bucket, and bang, it's found. For the list it will take around 300 comparisons to find it. For the case where there are only 10 elements, the list will take around 70 operations but the hashtable will take around 50 operations, and note that hashtable operations are more time consuming (multiplications). I conclude that HybridDictionary in .NET is probably the best choice for most cases that require a hashtable of unknown size, because it uses a list until the collection grows beyond roughly 10 elements. I still need something like a HashSet rather than a Dictionary of keys and values; I wonder why there is no HybridSet! So what do you think? Thanks
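
    As a side note (not from the original post), HybridDictionary lives in System.Collections.Specialized and switches internally from a list-based dictionary to a Hashtable as it grows; a rough "HybridSet" can be faked by storing a dummy value. A minimal sketch:

        using System;
        using System.Collections.Specialized;

        class HybridDemo
        {
            static void Main()
            {
                // HybridDictionary uses a linked-list implementation while small and
                // switches to a hashtable once it grows past a small threshold.
                var set = new HybridDictionary();
                for (int i = 1; i <= 50; i++)
                    set["ABCDEF" + i] = true;   // dummy value: use the dictionary as a set

                Console.WriteLine(set.Contains("ABCDEF42")); // True
            }
        }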

  • Calculate proportional width of object (proportion: 1600x1080)

    - by Hans Stauden
    Hello, a jQuery question: when I set a specific height on a "#div", I want to set the width of an inner object automatically too (because I need a width to read out). [ "#div" ["object"] ] Example: (object).css(width: [ CALCULATION ], height: ($("#div").height())+'px' ). The original proportion of the image is 1600x1080. Here's the link to the attachment, take a look at it (tinypic): link text. The heights "500px", "600px" and "700px" you can see in the attachment are just examples; the height could also be "711px", "623px", "998px" etc., because the "#div" scales with the browser (I can read out the height of the window, that works). My math skills aren't really good, it would be great if someone could help me out :-)
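
    For reference (not part of the original question), the width follows directly from the aspect ratio: width = height × (1600 / 1080) ≈ height × 1.4815. For example, a #div height of 623px would give an inner-object width of roughly 923px.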

  • What's a better choice for SQL-backed number crunching - Ruby 1.9, Python 2, Python 3, or PHP 5.3?

    - by Ivan
    Criteria for 'better': fast at math and at simple (few fields, many records) db transactions; convenient to develop/read/extend; flexible; connectible. The task is to use a common web-development scripting language to process and calculate long time series and multidimensional surfaces (mostly selecting/inserting sets of floats and doing math with them). The choices are Ruby 1.9, Python 2, Python 3, PHP 5.3, Perl 5.12, and JavaScript (node.js). All the data is to be stored in a relational database (due to its heavily multidimensional nature), and all communication with the outside world is to be done by means of web services.

  • How do you create a formula that has diminishing returns?

    - by egervari
    I guess this is a math question rather than a programming question, but what is a good way to create a formula that has diminishing returns? Here are some example points showing how I want the curve to look: f(1) = 1, f(1.5) = .98, f(2) = .95, f(2.5) = .9, f(3) = .8, f(4) = .7, f(5) = .6, f(10) = .5, f(20) = .25. Notice that as the input gets higher, the output decreases rapidly. Is there any way to model a smooth function that fits this? Another way to put it is with a real example. You know how in Diablo II they have Magic Find? There are diminishing returns for magic find: if you have 100%, the effective magic find is still 100%, but the more you stack, the less each additional point contributes, so much so that if you had 1200% your effective magic find might only be around 450%. So they have a function like: actualMagicFind(magicFind) = // some way to reduce magic find
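
    One family of curves that behaves this way (a hedged sketch, not the formula Diablo II actually uses) is k / (k + x - 1), which starts at 1 for x = 1 and decays smoothly toward 0; the constants below are illustrative only, though k = 9 happens to reproduce f(10) = 0.5:

        using System;

        class DiminishingReturns
        {
            // f(x) = k / (k + x - 1): equals 1 at x = 1, decays smoothly as x grows.
            static double F(double x, double k = 9.0) => k / (k + x - 1.0);

            // The "effective magic find" shape follows the same idea:
            // effective = raw * k / (raw + k), which flattens out as raw grows.
            static double EffectiveMagicFind(double raw, double k = 720.0) => raw * k / (raw + k);

            static void Main()
            {
                Console.WriteLine(F(1));                      // 1.0
                Console.WriteLine(F(10));                     // 0.5  (with k = 9)
                Console.WriteLine(EffectiveMagicFind(1200));  // 450  (with the illustrative k = 720)
            }
        }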

  • Obtain Latitude and Longitude from a GeoTIFF File

    - by Mikee
    Using GDAL in Python, how do you get the latitude and longitude of a GeoTIFF file? GeoTIFFs do not appear to store any coordinate information directly. Instead, they store the XY origin coordinates. However, the XY coordinates do not give the latitude and longitude of the top-left and bottom-left corners. It appears I will need to do some math to solve this problem, but I don't have a clue where to start. What procedure is required? I know that the GetGeoTransform() method is important for this, but I don't know what to do with it from there.
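
    A sketch of the math involved: the six coefficients returned by GetGeoTransform() define an affine mapping from pixel (column, row) to georeferenced coordinates. Written out in C# rather than Python for illustration; the coefficient values in Main are invented, and the result is lat/lon only if the raster's coordinate system is geographic (otherwise it still needs reprojecting):

        using System;

        static class GeoTransformDemo
        {
            // gt[0], gt[3]: top-left X/Y; gt[1], gt[5]: pixel width/height; gt[2], gt[4]: rotation terms.
            static (double X, double Y) PixelToGeo(double[] gt, double col, double row)
            {
                double x = gt[0] + col * gt[1] + row * gt[2];
                double y = gt[3] + col * gt[4] + row * gt[5];
                return (x, y);
            }

            static void Main()
            {
                // Example coefficients (made up for illustration); real values come from
                // the dataset's GetGeoTransform() in the GDAL bindings.
                double[] gt = { -122.0, 0.001, 0.0, 47.0, 0.0, -0.001 };
                var topLeft = PixelToGeo(gt, 0, 0);
                var bottomRight = PixelToGeo(gt, 512, 512);
                Console.WriteLine($"{topLeft.X}, {topLeft.Y} .. {bottomRight.X}, {bottomRight.Y}");
            }
        }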

  • Fast search of a 2-dimensional array

    - by Tim
    I need a method of quickly searching a large 2-dimensional array. I extract the array from Excel, so one dimension represents the rows and the second the columns. I wish to obtain a list of the rows where the columns match certain criteria; I need to know the row number (or index into the array). For example, if I extract a range from Excel, I may need to find all rows where column A = "dog" and column B = 7 and column J "a". I only know which columns and which values to find at run time, so I can't hard-code the column index. I could use a simple loop, but is this efficient? I need to run it several thousand times, searching for different criteria each time.
        For r As Integer = 0 To UBound(myArray, 0) - 1
            match = True
            For c = 0 To UBound(myArray, 1) - 1
                If Not doesValueMeetCriteria(myArray(r, c)) Then
                    match = False
                    Exit For
                End If
            Next
            If match Then addRowToMatchedRows(r)
        Next
    The doesValueMeetCriteria function is a simple function that checks the value of the array element against the query requirement, e.g. column A = dog etc. Is it more efficient to create a DataTable from the array and use the .Select method? Can I use LINQ in some way? Perhaps some form of dictionary or hashtable? Or is the simple loop the most efficient? Your suggestions are most welcome.
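
    A hedged sketch of the LINQ idea mentioned above, in C# (the VB translation is direct): build the run-time criteria as predicates keyed by column index and filter the row indices. Note it is still a linear scan per query, so for thousands of repeated queries an index keyed on the most selective columns may pay off more:

        using System;
        using System.Collections.Generic;
        using System.Linq;

        class RowSearch
        {
            // Return the indices of all rows for which every (columnIndex -> predicate) pair matches.
            static List<int> MatchingRows(object[,] data, Dictionary<int, Func<object, bool>> criteria)
            {
                int rows = data.GetLength(0);
                return Enumerable.Range(0, rows)
                    .Where(r => criteria.All(c => c.Value(data[r, c.Key])))
                    .ToList();
            }

            static void Main()
            {
                object[,] data = { { "dog", 7 }, { "cat", 7 }, { "dog", 3 } };
                var criteria = new Dictionary<int, Func<object, bool>>
                {
                    { 0, v => Equals(v, "dog") },  // column A = "dog"
                    { 1, v => Equals(v, 7) }       // column B = 7
                };
                foreach (int r in MatchingRows(data, criteria))
                    Console.WriteLine(r); // prints 0
            }
        }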

  • Delphi fast large bitmap creation (without clearing)

    - by Ritsaert Hornstra
    When using the TBitmap wrapper for a GDI bitmap from the Graphics unit, I noticed it will always clear out the bitmap (using a PatBlt call) when setting up a bitmap with SetSize(w, h). When I copy in the bits later on (see routine below), ScanLine seems to be the fastest option, not SetDIBits.
        function ToBitmap: TBitmap;
        var
          i, N, x: Integer;
          S, D: PAnsiChar;
        begin
          Result := TBitmap.Create();
          Result.PixelFormat := pf32bit;
          Result.SetSize( width, height );
          S := Src;
          D := Result.ScanLine[ 0 ];
          x := Integer( Result.ScanLine[ 1 ] ) - Integer( D );
          N := width * sizeof( longword );
          for i := 0 to height - 1 do
          begin
            Move( S^, D^, N );
            Inc( S, N );
            Inc( D, x );
          end;
        end;
    The bitmaps I need to work with are quite large (150 MB of RGB memory). With these images it takes 150 ms just to create an empty bitmap and a further 140 ms to overwrite its contents. Is there a way of initializing a TBitmap with the correct size WITHOUT initializing the pixels themselves, leaving the pixel memory uninitialized (i.e. dirty)? Or is there another way to do such a thing? I know we could work on the pixels in place, but this still leaves the 150 ms of unnecessary initialization of the pixels.

  • Fast parsing of PHP in C#

    - by Jessica Shea
    Hello there, I've got a requirement for parsing PHP files in C#. We essentially require some of the devs in another country to upload PHP files, and once uploaded we need to check the PHP files and get a list of all the methods, classes, functions, etc. I thought of using a regex, but I can't work out whether a function belongs to a class that way, so I was wondering if there's already something 'out there' that will parse PHP files and spit out their functions (I'm trying to avoid writing a full-blown AST implementation). Does anyone have any idea? I looked at Coco/R but I couldn't find a PHP grammar file. I'm using .NET 2.0 and C#.

  • Optimizing MySQL to avoid redundancy but still have fast access to calculable data

    - by diglettpotato
    An example for the sake of the question: I have a database which contains users, questions, and answers. Each user has a score which can be calculated from the data in the questions and answers tables. Therefore, if I had a score field in the users table, it would be redundant. However, if I don't use a score field, then calculating the score every time would significantly slow down the website. My current solution is to keep a score field and have a cron job running every few hours which recalculates everybody's score and updates the field. Is there a better way to handle this?

  • Fastest implementation of the frac function in C#

    - by user349937
    I would like to implement a frac function in C# (just like the one in HLSL, here: http://msdn.microsoft.com/en-us/library/bb509603%28VS.85%29.aspx), but since it is for a very processor-intensive application I would like the best version possible. I was using something like:
        public float Frac(float value)
        {
            return value - (float)Math.Truncate(value);
        }
    but I'm having precision problems; for example, for 2.6f the unit test reports:
        Expected: 0.600000024f
        But was:  0.599999905f
    I know that I can convert the value to decimal and then convert back to float at the end to obtain the expected result, something like this:
        public float Frac(float value)
        {
            return (float)((decimal)value - Decimal.Truncate((decimal)value));
        }
    But I wonder if there is a better way without resorting to decimals...
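
    For what it's worth (not from the original post), the "wrong" result is actually the correct single-precision answer: 2.6f is stored as roughly 2.5999999, so its fractional part really is about 0.5999999; the decimal detour only looks right because decimal rounds the input back to exactly 2.6. A small sketch illustrating this:

        using System;

        class FracDemo
        {
            // Same idea as the question's version; % 1f avoids the Math.Truncate round-trip,
            // but the precision behaviour is identical: the imprecision is in the input float.
            static float Frac(float value) => value % 1f;

            static void Main()
            {
                Console.WriteLine(Frac(2.6f).ToString("G9")); // ~0.599999905
                Console.WriteLine(2.6f.ToString("G9"));       // ~2.5999999  (2.6 is not exactly representable)
            }
        }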

  • Programming language for fast calculations with big integers

    - by sub
    I'm doing Project Euler problems at the moment, and I can solve most of them using my own programming language, which uses native C++ integers (so they are bound to 2^32 on my machine). However, at times there are problems which require me to work with very large numbers, and I can't do that with native integers. So I implemented a BigInt library in my language, which unfortunately gets extremely slow at times. Is there a programming language suitable for very efficient handling of big numbers? I mean that I want to do the things I could do in other programming languages (variables, loops, etc.), but faster. If you have tips for workarounds of the 2^32 limit in my language/C++/other languages, please tell me too!
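
    On the .NET side (a sketch for comparison, not a recommendation from the original post), System.Numerics.BigInteger, available since .NET 4, gives arbitrary-precision integers with ordinary operator syntax:

        using System;
        using System.Numerics;   // reference System.Numerics

        class BigIntDemo
        {
            static void Main()
            {
                // 100! overflows any fixed-width integer but is trivial with BigInteger.
                BigInteger factorial = 1;
                for (int i = 2; i <= 100; i++)
                    factorial *= i;

                Console.WriteLine(factorial);               // 158-digit number
                Console.WriteLine(BigInteger.Pow(2, 1000)); // far beyond 2^32
            }
        }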

  • How fast should an interpreted language be today?

    - by Tarbal
    Is the speed of the (main/only viable) implementation of an interpreted programming language a criterion today? What would be the optimal balance between speed and abstraction? Should scripting languages completely ignore all thoughts about performance and just follow the concepts of rapid development, readability, etc.? I'm asking this because I'm currently designing some experimental languages and interpreters.

  • Using exponential smoothing with NaN values

    - by Eric
    I have a sample of some kind that can produce somewhat noisy output. The sample is the result of some image processing from a camera, which indicates the heading of a blob of a certain color. It is an angle from around -45° to +45°, or a NaN, which means that the blob is not actually in view. In order to combat the noisy data, I felt that exponential smoothing would do the trick. However, I'm not sure how to handle the NaN values. On the one hand, involving them in the math would result in a NaN average, which would prevent any meaningful results. On the other hand, ignoring NaN values completely would mean that a "no detection" scenario would never be reported. And just to complicate things, the data is also noisy in that it can produce false NaN values, which ideally would be smoothed away somehow to prevent random noise. Any ideas on how I could implement such an exponential smoother?
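
    One possible approach (a hedged sketch with arbitrary constants, not a tuned filter for this camera): smooth the angle only over valid samples, and track consecutive NaNs separately, so isolated false NaNs are ignored while a sustained run of them eventually reports "no detection":

        using System;

        class AngleSmoother
        {
            readonly double alpha;       // smoothing factor for the angle
            readonly int maxMisses;      // consecutive NaNs before reporting "no detection"
            double smoothed = double.NaN;
            int misses = 0;

            public AngleSmoother(double alpha, int maxMisses = 3)
            {
                this.alpha = alpha;
                this.maxMisses = maxMisses;
            }

            public double Update(double sample)
            {
                if (double.IsNaN(sample))
                {
                    // Ignore isolated NaNs, but give up after a sustained run of them.
                    if (++misses >= maxMisses) smoothed = double.NaN;
                    return smoothed;
                }

                misses = 0;
                smoothed = double.IsNaN(smoothed)
                    ? sample                                   // first valid sample after a gap
                    : alpha * sample + (1 - alpha) * smoothed; // standard exponential smoothing
                return smoothed;
            }
        }

        class Demo
        {
            static void Main()
            {
                var s = new AngleSmoother(0.3);
                foreach (var x in new[] { 10.0, 12.0, double.NaN, 11.0, double.NaN, double.NaN, double.NaN })
                    Console.WriteLine(s.Update(x)); // only the trailing run of NaNs produces NaN output
            }
        }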

  • App is fast on 3GS but slow on 3G

    - by Anthony Chan
    Hi all, I'm new to coding and have just finished an app and tested it on both the 3G and 3GS. On the 3GS it worked as well as on the simulator. However, when I tried to run it on the 3G, the app became extremely slow. I'm not sure what the reason is and I hope someone could shed some light on it. Generally, my app has a couple of view controller classes, with one being the title page, one being the main page, one being settings, etc. I used a dissolve transition from the title page to the main page, but even this simple transition is not smooth on the 3G! Other parts of the app involve zooming into images by scaling them up, switching images by push or dissolve upon receiving touch events, saving photos into the photo library, and storing and retrieving some photos in a folder and some data in a SQLite database, and all of these actions are choppy. Compared with heavy-graphics or heavy-math apps, I think mine is pretty simple. I have no clue why the app is so slow and choppy that it is barely usable on a 3G. Any help/direction would be much appreciated. Thanks for helping out.

  • Android: Constructing a triangle based on Geographical information

    - by Aidan
    Hi guys, I'm building a geolocation-based application and I'm trying to figure out a way for my application to recognise when a user is facing the direction of a given location (a particular long/lat coordinate). I've got the math figured out; I just have the triangle to construct. Here's a further clarification of what I want to do: is there a way to get Java to construct 2 other coordinates based on my orientation in relation to true north and my current coordinate? I'd like to construct a triangle, 45 degrees out each way from my current location (one of the points), and 1 kilometer in that direction. The problem is I don't know how to make Android/Java recognise that I want to find that point in the direction I'm currently facing. Anyone got any ideas?
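
    A sketch of the standard "destination point given distance and bearing" formula (spherical-earth approximation), written in C# but translating directly to Java's Math class; the start coordinates and heading in Main are placeholder values, and the two triangle corners come from heading ± 45°:

        using System;

        class GeoTriangle
        {
            const double EarthRadiusKm = 6371.0;

            // Destination point given a start lat/lon (degrees), a bearing (degrees from
            // true north) and a distance in kilometres, on a spherical earth.
            static (double Lat, double Lon) Destination(double latDeg, double lonDeg,
                                                        double bearingDeg, double distKm)
            {
                double lat = latDeg * Math.PI / 180, lon = lonDeg * Math.PI / 180;
                double brg = bearingDeg * Math.PI / 180, d = distKm / EarthRadiusKm;

                double lat2 = Math.Asin(Math.Sin(lat) * Math.Cos(d) +
                                        Math.Cos(lat) * Math.Sin(d) * Math.Cos(brg));
                double lon2 = lon + Math.Atan2(Math.Sin(brg) * Math.Sin(d) * Math.Cos(lat),
                                               Math.Cos(d) - Math.Sin(lat) * Math.Sin(lat2));
                return (lat2 * 180 / Math.PI, lon2 * 180 / Math.PI);
            }

            static void Main()
            {
                double myLat = 53.35, myLon = -6.26, heading = 90.0; // placeholder values
                var left  = Destination(myLat, myLon, heading - 45, 1.0); // 1 km, 45° left of heading
                var right = Destination(myLat, myLon, heading + 45, 1.0); // 1 km, 45° right of heading
                Console.WriteLine($"{left.Lat},{left.Lon}  {right.Lat},{right.Lon}");
                // The triangle is (myLat, myLon), left, right.
            }
        }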

  • Fast exchange of data between unmanaged code and managed code

    - by vizcaynot
    Hello: Without using P/Invoke, from C++/CLI I have succeeded in integrating various methods of a third-party DLL written in C. One of these methods retrieves information from a database and stores it in different structures. The C++/CLI program I wrote reads those structures and stores them in a List<>, which is then returned for reading and use by an application written entirely in C#. I understand that the double handling of data (first filling several structures and then copying all of those structures into a List<>) may generate unnecessary overhead, at which point I wish C++/CLI had the keyword "yield". Given the above scenario, do you have recommendations to avoid or reduce this overhead? Thanks.

  • NHibernate - fast way to clear out database

    - by csetzkorn
    Hi, I intend to perform some automated integration tests. This requires the db to be put back into a 'clean state'. Is this the fastest/best way to do this?
        var cfg = new Configuration();
        cfg.Configure();
        cfg.AddAssembly("Bla");
        new SchemaExport(cfg).Execute(false, true, false);
    Thanks. Christian

  • Determining if and where a photon will collide with a polygon in 3D space.

    - by Peter
    The problem is straightforward: 1) We have a photon traveling from Point 1 (x,y,z) to Point 2 (x,y,z), both of which could be located anywhere in 3D space. 2) We have a polygon that is rotated arbitrarily about the x-axis and/or y-axis and also located anywhere in 3D space. 3) We want to find: a) whether the photon will collide with the polygon at all, and b) if it does, where that will be (x,y,z). An image of the problem: http://dl.dropbox.com/u/3150177/Programming/3D/Math/Photon%20Path/Photon%20Path.png The aim of this is to calculate how the photon's path should be altered by interaction(s) with the polygon(s). I am reading up on this subject now, but I was wondering if anyone could give me a head start. Thanks in advance.
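
    A head-start sketch (assuming the polygon is a triangle, or a convex polygon fanned into triangles): intersect the photon's segment with the polygon's plane, then check that the hit point lies on the inner side of every edge. A non-null result is the collision point; null means the photon misses the plane along its path or hits the plane outside the polygon:

        using System;

        struct Vec3
        {
            public double X, Y, Z;
            public Vec3(double x, double y, double z) { X = x; Y = y; Z = z; }
            public static Vec3 operator -(Vec3 a, Vec3 b) => new Vec3(a.X - b.X, a.Y - b.Y, a.Z - b.Z);
            public static Vec3 operator +(Vec3 a, Vec3 b) => new Vec3(a.X + b.X, a.Y + b.Y, a.Z + b.Z);
            public static Vec3 operator *(Vec3 a, double s) => new Vec3(a.X * s, a.Y * s, a.Z * s);
            public static double Dot(Vec3 a, Vec3 b) => a.X * b.X + a.Y * b.Y + a.Z * b.Z;
            public static Vec3 Cross(Vec3 a, Vec3 b) =>
                new Vec3(a.Y * b.Z - a.Z * b.Y, a.Z * b.X - a.X * b.Z, a.X * b.Y - a.Y * b.X);
        }

        static class PhotonHit
        {
            // Segment (p1 -> p2) against a single triangle (v0, v1, v2).
            public static Vec3? Intersect(Vec3 p1, Vec3 p2, Vec3 v0, Vec3 v1, Vec3 v2)
            {
                Vec3 dir = p2 - p1;
                Vec3 n = Vec3.Cross(v1 - v0, v2 - v0);        // plane normal
                double denom = Vec3.Dot(n, dir);
                if (Math.Abs(denom) < 1e-12) return null;      // segment parallel to the plane

                double t = Vec3.Dot(n, v0 - p1) / denom;       // where along the segment
                if (t < 0 || t > 1) return null;               // plane hit lies outside p1..p2

                Vec3 p = p1 + dir * t;                         // intersection with the plane

                // Inside test: the point must lie on the same side of every edge.
                bool SameSide(Vec3 a, Vec3 b) =>
                    Vec3.Dot(Vec3.Cross(b - a, p - a), n) >= 0;
                return SameSide(v0, v1) && SameSide(v1, v2) && SameSide(v2, v0) ? p : (Vec3?)null;
            }
        }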

  • Best way to search for a saturation value in a sorted list

    - by AB Kolan
    A question from Math Battle. This particular question was also asked in one of my job interviews. "A monkey has two coconuts. It is fooling around by throwing coconuts down from the balconies of an M-storey building. The monkey wants to know the lowest floor from which a dropped coconut breaks. What is the minimal number of attempts needed to establish that fact?" Conditions: if a coconut breaks, you cannot reuse it; you are left with only the other coconut. Possible approaches/strategies I can think of are: binary break-ups, and once you find the floor on which the coconut breaks, count up from the lower index of the last binary break-up; or windows/slices of smaller sets of floors, using binary break-up within each window/slice (but on the downside this would require a slicing algorithm of its own). Wondering if there are any other ways to do this.
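
    For reference, the classic answer (sketched rather than proven here): drop the first coconut from floor n, then n + (n-1), then n + (n-1) + (n-2), and so on, so that wherever it breaks the second coconut has just enough floors left to scan one by one; the minimal number of attempts is the smallest n with n(n+1)/2 ≥ M:

        using System;

        class TwoCoconuts
        {
            // Smallest n such that n(n+1)/2 >= floors.
            static int MinAttempts(int floors)
            {
                int n = 0;
                while (n * (n + 1) / 2 < floors) n++;
                return n;
            }

            static void Main()
            {
                Console.WriteLine(MinAttempts(100)); // 14 for a 100-storey building
                Console.WriteLine(MinAttempts(36));  // 8
            }
        }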

  • Fast method of directory enumeration on Win32?

    - by BillyONeal
    Hello everyone :) I'm trying to speed up directory enumeration in C++, where I'm recursing into subdirectories. I currently have an app which spends 95% of its time in the FindFirstFile/FindNextFile APIs, and it takes several minutes to enumerate all the files on a given volume. I know it's possible to do this faster because there is an app that does: Everything. It enumerates my entire drive in seconds. How might I accomplish something like this?
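
    As a point of comparison (a widely reported explanation, not confirmed here): Everything is understood to read the NTFS master file table directly instead of walking the directory tree, which is why it is so much faster than FindFirstFile/FindNextFile. Staying within the normal API, here is a C# sketch of a walk that at least streams results and skips inaccessible directories instead of aborting:

        using System;
        using System.Collections.Generic;
        using System.IO;

        class EnumerateDemo
        {
            // Manual breadth-first walk so access-denied directories can be skipped
            // instead of aborting the whole enumeration.
            static IEnumerable<string> AllFiles(string root)
            {
                var pending = new Queue<string>();
                pending.Enqueue(root);
                while (pending.Count > 0)
                {
                    string dir = pending.Dequeue();
                    string[] files = null, subdirs = null;
                    try
                    {
                        files = Directory.GetFiles(dir);
                        subdirs = Directory.GetDirectories(dir);
                    }
                    catch (UnauthorizedAccessException) { /* skip protected directories */ }
                    catch (IOException) { /* skip unreadable directories */ }

                    if (files != null)
                        foreach (var f in files) yield return f;
                    if (subdirs != null)
                        foreach (var d in subdirs) pending.Enqueue(d);
                }
            }

            static void Main()
            {
                long count = 0;
                foreach (var _ in AllFiles(@"C:\")) count++;
                Console.WriteLine(count);
            }
        }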
