Search Results

Search found 13151 results on 527 pages for 'performance counters'.

Page 477 of 527

  • C Population Count of unsigned 64-bit integer with a maximum value of 15

    - by BitTwiddler1011
    I use a population count (Hamming weight) function intensively in a Windows C application and have to optimize it as much as possible to boost performance. In more than half the cases where I use the function, I only need to know the value up to a maximum of 15. The software will run on a wide range of processors, both old and new. I already make use of the POPCNT instruction when Intel's SSE4.2 or AMD's SSE4a is present, but I would like to optimize the software implementation (used as a fallback when no SSE4 is present) as much as possible. Currently I have the following software implementation of the function:

        inline int population_count64(unsigned __int64 w)
        {
            w -= (w >> 1) & 0x5555555555555555ULL;
            w = (w & 0x3333333333333333ULL) + ((w >> 2) & 0x3333333333333333ULL);
            w = (w + (w >> 4)) & 0x0f0f0f0f0f0f0f0fULL;
            return int((w * 0x0101010101010101ULL) >> 56);
        }

    So to summarize: (1) I would like to know if it is possible to optimize this for the case when I only want to know the value up to a maximum of 15. (2) Is there a faster software implementation (for both Intel and AMD CPUs) than the function above?
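
    Since only min(count, 15) matters for those calls, one classic specialization is Kernighan's clear-the-lowest-set-bit loop with an early exit at 15: it runs once per set bit, capped at 15 iterations, and very few when the word is sparse. A minimal sketch (written in Java here; the bit operations carry over to C unchanged):

        // Returns min(popcount(w), 15). Each "w &= w - 1" clears the
        // lowest set bit, so the loop runs once per set bit, capped at 15.
        static int popcount15(long w) {
            int count = 0;
            while (w != 0 && count < 15) {
                w &= w - 1;
                count++;
            }
            return count;
        }

    Whether this beats the branch-free version depends on the typical bit density of the inputs; for dense words the original SWAR code stays faster.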

    Read the article

  • Getting the item count of a large SharePoint list in the fastest way

    - by sooraj
    I am trying to get the count of the items in a SharePoint document library programmatically. The scale I am working with is 30-70,000 items. We have a user control in a SmartPart to display the count. Ours is a TEAM site. This is the code to get the total count:

        SPList VoulnterrList = web.Lists[ListTitle];
        SPQuery query = new SPQuery();
        query.ViewAttributes = "Scope=\"Recursive\"";
        string queries = "<Where><Eq><FieldRef Name='ApprovalStatus' /><Value Type='Choice'>Pending</Value></Eq></Where>";
        query.Query = queries;
        SPListItemCollection lstitemcollAssoID = VoulnterrList.GetItems(query);
        lblCount.Text = "Total Proofs: " + VoulnterrList.Items.Count.ToString() + " Pending Proofs: " + lstitemcollAssoID.Count.ToString();

    The problem is that this has a serious performance issue: it takes 75 to 80 seconds to load the page. If we comment this out, page load decreases to 4 seconds. Is there a better approach to this problem? Ours is SharePoint 2007.

    Read the article

  • JavaScript array random index insertion and deletion

    - by Tomi
    I'm inserting some items into an array with randomly created indexes, for example like this:

        var myArray = new Array();
        myArray[123] = "foo";
        myArray[456] = "bar";
        myArray[789] = "baz";
        ...

    In other words, the array indexes do not start at zero and there will be numeric gaps between them. My questions are: Will these numeric gaps be somehow allocated (and therefore take some memory) even when they do not have assigned values? When I delete myArray[456] from the example above, will the items after it be relocated? EDIT: Regarding my question/concern about relocation of items after insertion/deletion: I want to know what happens with the memory, not the indexes. More information from the Wikipedia article: "Linked lists have several advantages over dynamic arrays. Insertion of an element at a specific point of a list is a constant-time operation, whereas insertion in a dynamic array at random locations will require moving half of the elements on average, and all the elements in the worst case. While one can 'delete' an element from an array in constant time by somehow marking its slot as 'vacant', this causes fragmentation that impedes the performance of iteration."
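
    For intuition: JS engines typically back an array this sparse with a dictionary rather than a contiguous buffer, so it behaves like a hash map, where unassigned indexes cost nothing and deletion moves nothing. A sketch of that behavior in Java terms (for comparison only):

        import java.util.HashMap;
        import java.util.Map;

        class SparseDemo {
            public static void main(String[] args) {
                // Dictionary-backed "array": the gaps between keys allocate
                // nothing, and removing a key never relocates other entries.
                Map<Integer, String> myArray = new HashMap<>();
                myArray.put(123, "foo");
                myArray.put(456, "bar");
                myArray.put(789, "baz");
                myArray.remove(456);          // O(1); 123 and 789 are untouched
                System.out.println(myArray);  // e.g. {789=baz, 123=foo} (order unspecified)
            }
        }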

    Read the article

  • Planning a programming project by example (C# or C++)

    - by Lunan
    I am in the last year of my undergraduate degree and I am stumped by the lack of large C++ and C# project examples at my university. All the mini projects and assignments are based on text-based databases, which is very inefficient, and console display and commands, which are frustrating. I want to develop a complete prototype of corporate software dealing with inventory, sales, marketing, etc.; everything you would usually find in SAP. I would be grateful if any of you could direct me to books, articles, or sample programs. Some of my questions are: How do I plan this kind of project? Should each domain object (such as inventory) have its own process and program, with an integrator sitting over all the programs, or should I integrate everything into one big program? How do I build and address a database? I have a little knowledge of databases and I know SQL, but I have never addressed a database from a program before. Databases are tables, so how are you supposed to represent a table in an OOP way? For the development stack, which is better: PHP and C++, or C# and ASP.NET? I am planning to use a web interface for forms and information, but a background program to handle the computation. .NET is very well integrated and coding should be much faster, but I really wonder about performance compared to a PHP and C++ package. Thank you for the info.

    Read the article

  • class selector refuses to work after append to body

    - by supersize
    I'm appending loads of divs to a wrapper:

        var cubes = [], allCubes = '';
        for (var i = 0; i < 380; i++) {
            var randomleft = Math.floor(Math.random()*Math.floor(Math.random()*1000)),
                randomtop = Math.floor(Math.random()*Math.floor(Math.random()*1000));
            allCubes += '<div id="cube'+i+'" class="cube" style="position: absolute; border: 2px #000 solid; left: '+randomleft+'px; top: '+randomtop+'px; width: 9px; height: 9px; z-index: -1"></div>';
        }
        $('#wrapper').append(allCubes); // performance
        for (var i = 0; i < 380; i++) {
            cubes.push($('#cube'+i));
        }

    Then I would like to make them all draggable with jQuery UI and log their current position:

        var allc = $('.cube');
        allc.draggable().on('mouseup', function(i) {
            allc.each(function() {
                var nleft = $(this).offset().left;
                var ntop = $(this).offset().top;
                console.log('cubes['+i+'].animate({ left:'+nleft+',top:'+ntop+'})');
            });
        });

    Unfortunately it does not work: the divs are neither draggable nor does any log appear. Thanks

    Read the article

  • Color banding only on Android 4.0+

    - by threeshinyapples
    On emulators running Android 4.0 or 4.0.3, I am seeing horrible colour banding which I can't seem to get rid of. On every other Android version I have tested, gradients look smooth. I have a SurfaceView which is configured as RGBX_8888, and the banding is not present in the rendered canvas. If I manually dither the image by overlaying a noise pattern at the end of rendering I can make the gradients smooth again, though obviously at a cost to performance which I'd rather avoid. So the banding is being introduced later. I can only assume that, on 4.0+, my SurfaceView is being quantized to a lower bit-depth at some point between it being drawn and being displayed, and I can see from a screen capture that gradients are stepping 8 values at a time in each channel, suggesting a quantization to 555 (not 565). I added the following to my Activity onCreate function, but it made no difference:

        getWindow().setFormat(PixelFormat.RGBA_8888);
        getWindow().addFlags(WindowManager.LayoutParams.FLAG_DITHER);

    I also tried putting the above in onAttachedToWindow() instead, but there was still no change. (I believe that RGBA_8888 is the default window format anyway for 2.2 and above, so it's little surprise that explicitly setting that format has no effect on 4.0+.) Which leaves the question: if the source is 8888 and the destination is 8888, what is introducing the quantization/banding, and why does it only appear on 4.0+? Very puzzling. I wonder if anyone can shed some light?

    Read the article

  • Thread-safe lazy instantiation using MEF

    - by Xaqron
        // Member variable
        private static readonly object _syncLock = new object();

        // Now inside a static method
        foreach (var lazyObject in plugins)
        {
            if ((string)lazyObject.Metadata["key"] == "something")
            {
                lock (_syncLock)
                {
                    // It seems `IsValueCreated` is not up to date
                    if (!lazyObject.IsValueCreated)
                        lazyObject.Value.DoSomething();
                }
                return lazyObject.Value;
            }
        }

    Here I need synchronized access per loop iteration. There are many threads iterating this loop, and based on the key they are looking for, a lazy instance is created and returned. lazyObject should not be created more than once. Although the Lazy class exists for exactly this, and despite the lock used, under heavy threading I get more than one instance created (I track this with Interlocked.Increment on a volatile static int and log it somewhere). The problem is that I don't have access to the definition of Lazy, and MEF defines how the Lazy class creates objects. I should note that the CompositionContainer has a thread-safe option in its constructor, which is already used. My questions: 1) Why doesn't the lock work? 2) Should I use an array of locks instead of one lock to improve performance?
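
    For reference, the canonical thread-safe lazy-initialization shape this code is reaching for, sketched here in Java (the reasoning, one publish under a lock plus a re-check inside it against a properly visible field, carries over to .NET's Lazy<T>):

        // Double-checked locking: the second check runs under the lock,
        // and the volatile field makes the published instance visible to
        // all threads, so exactly one instance is ever created.
        class LazyHolder {
            private static volatile Object instance;
            private static final Object lock = new Object();

            static Object get() {
                Object result = instance;
                if (result == null) {                 // fast path, no lock
                    synchronized (lock) {
                        result = instance;
                        if (result == null) {         // re-check under the lock
                            result = new Object();
                            instance = result;
                        }
                    }
                }
                return result;
            }
        }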

    Read the article

  • Batch Inserts And Prepared Query Error

    - by ircmaxell
    OK, so I need to populate an MS Access database table with results from a MySQL query. That's not hard at all. I've got the program written to the point where it copies a template .mdb file to a temp name and opens it via ODBC. No problem so far. I've noticed that Access does not support batch inserting (VALUES (foo, bar), (second, query), (third, query)). So that means I need to execute one query per row (and there are potentially hundreds of thousands of rows). Initial performance tests show a rate of around 900 inserts/sec into Access. With our largest data sets, that could mean execution times of minutes (which isn't the end of the world, but obviously the faster the better). So I tried testing a prepared statement. But I keep getting an error (Warning: odbc_execute() [function.odbc-execute]: SQL error: [Microsoft][ODBC Microsoft Access Driver]COUNT field incorrect , SQL state 07001 in SQLExecute in D:\....php on line 30). Here's the code I'm using (line 30 is odbc_execute):

        $sql = 'INSERT INTO table ([field0], [field1], [field2], [field3], [field4], [field5]) VALUES (?, ?, ?, ?, ?, ?)';
        $stmt = odbc_prepare($conn, $sql);
        for ($i = 200001; $i < 300001; $i++) {
            $a = array($i, "Field1 $i", "Field2 $i", "Field3 $i", "Field4 $i", $i);
            odbc_execute($stmt, $a);
        }

    So my question is twofold. First, any idea why I'm getting that error (I've checked, and the number of elements in the array matches the field list, which matches the number of ? parameter markers)? And second, should I even bother with this, or just use the straight INSERT statements? Like I said, time isn't critical, but if possible I'd like to get that time as low as possible (then again, I may be limited by disk throughput, since 900 operations/sec is already high)... Thanks

    Read the article

  • Parallel programming, are we not learning from history again?

    - by mezmo
    I started programming because I was a hardware guy who got bored; I thought the problems being solved on the software side of things were much more interesting than those in hardware. At that time, most of the electrical buses I dealt with were serial, some moving data as fast as 1.5 megabit!! ;) Over the years these evolved into parallel buses in order to speed communication up; after all, transferring 8/16/32/64 or whatever bits at a time incredibly speeds up the transfer. Well, our ability to create and detect state changes got faster and faster, to the point where we could push data so fast that interference between parallel traces or cable wires made cleaning up the signal too expensive to continue, and we still got reasonable performance from serial interfaces; heck, some graphics interfaces have even been running over USB for a while now. I think I'm seeing a similar trend in software now: our processors were getting faster and faster, so we got good at building "serial" software. Now we've hit a speed bump in raw processor speed, so we're adding cores, or "traces", to the mix, and spending a lot of time and effort on learning how to properly use those. But I'm also seeing what I feel are advances in things like optical switching and even quantum computing that could take us, far more quickly than I was expecting, back to the point where "serial programming" again makes the most sense. What are your thoughts?

    Read the article

  • Is there a Java data structure that is effectively an ArrayList with double indices and built-in interpolation?

    - by Bob Cross
    I am looking for a pre-built Java data structure with the following characteristics: It should look something like an ArrayList but should allow indexing via double-precision values rather than integers. Note that this means you will likely see indices that don't line up with the original data points (i.e., asking for the value that corresponds to key "1.5"). As a consequence, the value returned will likely be interpolated. For example, if the key is 1.5, the value returned could be the average of the value at key 1.0 and the value at key 2.0. The keys will be sorted, but the values are not ensured to be monotonically increasing. In fact, there's no assurance that the first derivative of the values will be continuous (making it a poor fit for certain types of splines). Freely available code only, please. For clarity, I know how to write such a thing. In fact, we already have an implementation of this and some related data structures in legacy code that I want to replace due to some performance and coding issues. What I'm trying to avoid is spending a lot of time rolling my own solution when there might already be such a thing in the JDK, Apache Commons, or another standard library. Frankly, that's exactly the approach that got this legacy code into the situation it's in right now.... Is there such a thing out there in a freely available library?
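
    The closest JDK building block is TreeMap, whose floorEntry/ceilingEntry calls return the bracketing keys directly; a minimal sketch of a linearly interpolating wrapper over it (class and method names are illustrative, not a library API):

        import java.util.Map;
        import java.util.TreeMap;

        // Sorted double keys with linear interpolation between the two
        // entries that bracket the requested key. Assumes at least one entry.
        class InterpolatingMap {
            private final TreeMap<Double, Double> points = new TreeMap<>();

            void put(double key, double value) { points.put(key, value); }

            double get(double key) {
                Map.Entry<Double, Double> lo = points.floorEntry(key);
                Map.Entry<Double, Double> hi = points.ceilingEntry(key);
                if (lo == null) return hi.getValue();       // below the first key
                if (hi == null) return lo.getValue();       // above the last key
                if (lo.getKey().equals(hi.getKey())) return lo.getValue(); // exact hit
                double t = (key - lo.getKey()) / (hi.getKey() - lo.getKey());
                return lo.getValue() + t * (hi.getValue() - lo.getValue());
            }
        }

    With keys 1.0 and 2.0 in place, get(1.5) returns their midpoint, matching the behaviour the question describes.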

    Read the article

  • Optimising Ruby regexp -- lots of match groups

    - by Farcaller
    I'm working on a Ruby-based lexer. To improve performance, I joined all the tokens' regexps into one big regexp with named match groups. The resulting regexp looks like:

        /\A(?<__anonymous_-1038694222803470993>(?-mix:\n+))|\A(?<__anonymous_-1394418499721420065>(?-mix:\/\/[^\n]*))|\A(?<__anonymous_3077187815313752157>(?-mix:include\s+"[^"]+"))|\A(?<LET>(?-mix:let\s))|\A(?<IN>(?-mix:in\s))|\A(?<CLASS>(?-mix:class\s))|\A(?<DEF>(?-mix:def\s))|\A(?<DEFM>(?-mix:defm\s))|\A(?<MULTICLASS>(?-mix:multiclass\s))|\A(?<FUNCNAME>(?-mix:![a-zA-Z_][a-zA-Z0-9_]*))|\A(?<ID>(?-mix:[a-zA-Z_][a-zA-Z0-9_]*))|\A(?<STRING>(?-mix:"[^"]*"))|\A(?<NUMBER>(?-mix:[0-9]+))/

    I'm matching it against my string, producing a MatchData in which exactly one token is parsed:

        bigregex =~ "\n ... garbage"
        puts $~.inspect

    which outputs:

        #<MatchData "\n" __anonymous_-1038694222803470993:"\n" __anonymous_-1394418499721420065:nil __anonymous_3077187815313752157:nil LET:nil IN:nil CLASS:nil DEF:nil DEFM:nil MULTICLASS:nil FUNCNAME:nil ID:nil STRING:nil NUMBER:nil>

    So the regex actually matched the "\n" part. Now I need to find the match group it belongs to (it's clearly visible from the #inspect output that it's __anonymous_-1038694222803470993, but I need to get it programmatically). I could not find any option other than iterating over #names:

        m.names.each do |n|
          if m[n]
            type = n.to_sym
            resolved_type = (n.start_with?('__anonymous_') ? nil : type)
            val = m[n]
            break
          end
        end

    which verifies that the match group did have a match. The problem is that it's slow (I spend about 10% of the time in this loop, and another 8% grabbing @input[@pos..-1] to make sure that \A matches the start of the string; I do not discard input, just shift @pos within it). You can check the full code at the GH repo. Any ideas on how to make it at least a bit faster? Is there any option to find the "successful" match group more easily?

    Read the article

  • Creation of a model in Core Data on the fly

    - by user1740045
    How can we create a model in Core Data on the fly, i.e., getting the schema of the database from somewhere and then creating a Core Data object graph? Question: Yes, that's fine, agreed with all the advantages. But can anybody tell me, practically, what the benefit is of integrating Core Data into a project instead of using SQL directly?

    1. No need to write SQL boilerplate code [but you need to learn the Core Data model (a steep curve)].
    2. We can undo and redo changes [but practically, who needs it?].
    3. We can migrate to another schema [but that can be done with SQLite as well; you just need to add another field to the table].
    4. For, say, aggregation on some field in a table: in Core Data we need to loop through Core Data objects, whereas in SQLite we need to first write the SQLite boilerplate code and then the basic aggregation SQL query, which is easy to write; only the length of the code increases... but in the case of Core Data there is a lot to learn.

    So apart from reducing the length of code, does it actually add value to a project, in terms of memory efficiency, performance, etc.? PS: If anybody has actually worked with Core Data (model creation on the fly), please share some pointers if possible. Thanks!

    Read the article

  • Why can't we just use a hash of passphrase as the encryption key (and IV) with symmetric encryption algorithms?

    - by TX_
    Inspired by my previous question, now I have a very interesting idea: Do you really ever need to use Rfc2898DeriveBytes or similar classes to "securely derive" the encryption key and initialization vector from the passphrase string, or will a simple hash of that string work equally well as a key/IV when encrypting the data with a symmetric algorithm (e.g. AES, DES, etc.)? I see tons of AES encryption code snippets where the Rfc2898DeriveBytes class is used to derive the encryption key and initialization vector (IV) from the password string. It is assumed that one should use a random salt and a shitload of iterations to derive a secure enough key/IV for the encryption. While deriving bytes from a password string using this method is quite useful in some scenarios, I think it's not applicable when encrypting data with symmetric algorithms! Here is why: using a salt makes sense when there is a possibility of building precalculated rainbow tables, so that when an attacker gets his hands on a hash he can look up the original password. But... with symmetric data encryption, I think this is not required, as the hash of the password string, i.e. the encryption key, is never stored anywhere. So, if we just take the SHA1 hash of the password and use it as the encryption key/IV, isn't that going to be equally secure? What is the purpose of using the Rfc2898DeriveBytes class to generate the key/IV from the password string (a very performance-intensive operation), when we could just use a SHA1 (or any other) hash of that password? A hash would give a random bit distribution in the key (as opposed to using the string bytes directly). And an attacker would have to brute-force the whole key range anyway (e.g. if the key length is 256 bits, he would have to try 2^256 combinations). So either I'm wrong in a dangerous way, or all those samples of AES encryption (including many upvoted answers here on SO) that use the Rfc2898DeriveBytes method to generate the encryption key and IV are just wrong.
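
    For concreteness, the kind of derivation Rfc2898DeriveBytes performs is PBKDF2; a minimal sketch in Java (iteration count and output size illustrative). The point of the iterations is that an attacker brute-forces the low-entropy passphrase space, not the full 2^256 key space, and each passphrase guess then costs the full iteration count:

        import javax.crypto.SecretKeyFactory;
        import javax.crypto.spec.PBEKeySpec;

        // PBKDF2: the salt defeats precomputation, and the iterations
        // multiply the cost of every passphrase guess by ~100,000.
        class KeyDerivation {
            static byte[] deriveKey(char[] passphrase, byte[] salt) throws Exception {
                PBEKeySpec spec = new PBEKeySpec(passphrase, salt, 100000, 256);
                SecretKeyFactory f = SecretKeyFactory.getInstance("PBKDF2WithHmacSHA1");
                return f.generateSecret(spec).getEncoded();  // 256 bits of key material
            }
        }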

    Read the article

  • How might the security of a system be improved using database procedures?

    - by Centurion
    The usage of Oracle PL/SQL procedures for controlling access to data is often emphasized in PL/SQL books and other sources as the more secure approach. I've seen several systems where all business logic related to the data is performed through packages, procedures, and functions, so the application code becomes quite "dumb" and is only responsible for the visualization part. I've even heard some devs call such approaches, and the architects driving them, database nazis :) because all the logic code resides in the database. I do know about DB procedure performance benefits, but now I'm interested in the "better security" claim when using a thick-client model. I assume such a design is mostly used when Oracle (and maybe MS SQL Server) databases are involved. I do agree such an approach improves security, but only if there are not many users and every system user has a database account, so we can control and monitor data access through standard database user security. However, how could such an approach increase security for an average web system where thick clients are used: for example, one database user with DML grants on all tables, and the other users handled via "users" and "user_rights" tables? We could use DB procedures and save usernames into a context to use for filtering, but the vulnerability remains at the root: if the main database account is compromised, nothing will help. Of course, in a real system we might consider at least several main users (for example frontend_db_user, backend_db_user).

    Read the article

  • Moving the x,y position of all array objects every frame in ActionScript 3?

    - by Dylan Gallardo
    I have my code set up so that a movie clip from my library (a class called "block") is duplicated multiple times and added into an array, like this:

        function makeblock(e:Event){
            newblock = new block();
            newblock.x = 10;
            newblock.y = 10;
            addChild(newblock);
            myarray[counter] = newblock; // adds a new block object into the array
            counter += 1;
        }

    Then I have a loop with a currently primitive way of handling my problem:

        stage.addEventListener(Event.ENTER_FRAME, gameloop);
        function gameloop(evt:Event):void {
            if (moveright == true){
                myarray[0].x += 5;
                myarray[1].x += 5;
                myarray[2].x += 5;
                -(and so on)-

    My question is: how can I change the x,y values every frame for new objects added to the array, along with the previous ones? Of course in a more elegant way than writing it out myself... array[0].x += 5, array[1], array[2], array[3], etc. Ideally I would like this to scale to 500 or more objects in one array, so obviously I don't want to write them out individually haha. I also need it to perform consistently; would using a for loop or something to run through the whole array and move each x += 5 be too slow? Anyway, if anyone has any ideas that'd be great!

    Read the article

  • Prototype or jQuery for DOM manipulation (client-side dynamic content)

    - by luiggitama
    I need to know which of these two JavaScript frameworks is better for client-side dynamic content modification of known DOM elements (by id), in terms of performance, memory usage, etc.:

        Prototype's  $('id').update(content)
        jQuery's     jQuery('#id').html(content)

    BTW, both libraries coexist with no conflict in my app, because I'm using RichFaces for JSF development; that's why I use "jQuery" instead of "$". I have at least 20 updatable areas in my page, and for each one I prepare content (tables, option lists, etc.) based on some user-defined client-side criteria filtering or some AJAX event, like this:

        var html = [];
        var idx = 0;
        ...
        html[idx++] = '<tr><td class="cell"><span class="link" title="View" onclick="myFunction(';
        html[idx++] = param;
        html[idx++] = ')"></span>';
        html[idx++] = someText;
        html[idx++] = '</td></tr>';
        ...

    So here comes the question: which is better to use?

        // Prototype's
        $('myId').update(html.join(''));
        // or jQuery's
        jQuery('#myId').html(html.join(''));

    Other functions I need are hide() and show(), which are present in both frameworks. Which is better? I also need to enable/disable form controls and read/set their values. Note that I know my updatable areas' ids (I don't need CSS selectors at this point). And I should mention that I'm saving these queried objects in a data structure for later use, so they are requested just once, when the page is rendered, like this:

        MyData = {div1:jQuery('#id1'), div2:$('id2'), ...};
        ...
        MyData.div1.html('content 1');
        MyData.div2.update('content 2');

    So, which is the best practice?

    Read the article

  • Iterate over defined elements of a JS array

    - by sibidiba
    I'm using a JS array as a map from IDs to actual elements, i.e., a key-value store. I would like to iterate over all elements. I tried several methods, but all have their caveats:

        for (var item in map) {...}

    iterates over all properties of the array, and therefore will also include functions and extensions to Array.prototype. For example, someone dropping in the Prototype library in the future will break existing code.

        var length = map.length;
        for (var i = 0; i < length; i++) {
            var item = map[i];
            ...
        }

    does work, but just like

        $.each(map, function(index, item) {...});

    it iterates over the whole range of indexes 0..max(id), which has horrible drawbacks:

        var x = [];
        x[1] = 1;
        x[10] = 10;
        $.each(x, function(i,v) {console.log(i+": "+v);});
        0: undefined
        1: 1
        2: undefined
        3: undefined
        4: undefined
        5: undefined
        6: undefined
        7: undefined
        8: undefined
        9: undefined
        10: 10

    Of course my IDs won't resemble a continuous sequence either. Moreover, there can be huge gaps between them, so skipping undefined as in the latter case is unacceptable for performance reasons. How is it possible to safely iterate over only the defined elements of an array (in a way that works in all browsers and IE)?
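
    The underlying need is a dictionary whose iteration visits only the keys that actually exist; for comparison, that is exactly what a hash map's entry set gives in Java, no matter how large the gaps between IDs are:

        import java.util.HashMap;
        import java.util.Map;

        class DefinedOnly {
            public static void main(String[] args) {
                Map<Integer, Integer> x = new HashMap<>();
                x.put(1, 1);
                x.put(10, 10);
                // Visits exactly two entries; the gap 2..9 is never touched.
                for (Map.Entry<Integer, Integer> e : x.entrySet()) {
                    System.out.println(e.getKey() + ": " + e.getValue());
                }
            }
        }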

    Read the article

  • Designing for varying mobile device resolutions, i.e. iPhone 4 & iPhone 3G

    - by Josh
    As the design community moves to design applications & interfaces for mobile devices, a new problem has arisen: varying screen DPIs. Here's the situation:

    Touch:
        * iPhone 3G/S ~ 160 dpi
        * iPhone 4 ~ 300 dpi
        * iPad ~ 126 dpi
        * Android device @ 480p ~ 200 dpi

    Point / click:
        * Laptop @ 720p ~ 96 dpi
        * Desktop @ 720p ~ 72 dpi

    There is certainly a clear distinction between desktop and mobile, so having two separate front-ends to the same app is logical, especially when considering one is "touch"-based and the other is "point/click"-based. The challenge lies in designing static graphical elements that will scale between, say, 160 dpi and 300+ dpi, and getting consistent, clean design across zoom levels. Any thoughts on how to approach this? Here are some scenarios, but each has drawbacks as well:

        * Design a single set of assets (high resolution), then adjust zoom levels based on detected resolution / device
          o Drawbacks: performance cost of code layering, varying device support for zoom
        * Develop & optimize multiple variations of image and CSS assets, then hide / show each based on device
          o Drawbacks: extra work in design & QA

    Anyone have thoughts or experience on how to deal with this? We should certainly be looking at methods that use / support HTML5 and CSS3.

    Read the article

  • Database solution for 200million writes/day, monthly summarization queries

    - by sb
    Hello. I'm looking for help deciding on which database system to use. (I've been googling and reading for the past few hours; it now seems worthwhile to ask for help from someone with firsthand knowledge.) I need to log around 200 million rows (or more) per 8-hour workday to a database (which works out to a sustained rate of roughly 7,000 inserts per second), then perform weekly/monthly/yearly summary queries on that data. The summary queries would be for collecting data for things like billing statements, e.g. "How many transactions of type A did each user run this month?" (they could be more complex, but that's the general idea). I can spread the database amongst several machines as necessary, but I don't think I can take old data offline. I'll definitely need to be able to query a month's worth of data, maybe a year's. These queries would be for my own use, and wouldn't need to be generated in real time for an end user (they could run overnight if needed). Does anyone have any suggestions as to which databases would be a good fit? P.S. Cassandra looks like it would have no problem handling the writes, but what about the huge monthly table scans? Is anyone familiar with Cassandra/Hadoop MapReduce performance?

    Read the article

  • My HTML5 web app crashes and I have no clue how to debug

    - by Shouvik
    Hi All, I have written a word game using the HTML5 canvas tag and a little bit of audio. I developed the application in the Chrome web browser on a Linux system. Recently, during the testing phase, it was tried on Safari 5.0.3 on a Mac and the webpage froze. Not just the canvas element; every interactive element on the page froze. I have at times experienced this problem in Google Chrome while developing, but since the console did not throw any error before it happened, I did not give it much thought. Now, as per requirements, I am supposed to support both Chrome and Safari, but this dismal performance on Safari has left me shocked, and I cannot see what error could be thrown that might lead to such a situation. Worse yet, CPU usage of this application peaks at 70-80 percent on my 2-year-old MacBook running Ubuntu... I can only pity the person who uses a Mac to run this app, as that is undoubtedly a heavier OS. Could someone help me out with a place I can start to find out what exactly is causing this issue? I have run profiles of this web app in Google Chrome's console and noticed that the heap snapshot value increases steadily while playing the game; specifically, the (root) value jumps up by 900 counts. Any help would be very appreciated! Thanks. EDIT: I don't know if this helps, but I have noticed that even on refreshing the page after the app becomes unresponsive, the page reloads and I am still not able to interact with the page elements, yet the tab scroll bar continues to work and I can see my application window completely. So to summarize: the tab stops accepting any sort of user interaction inside the page. Edit2: Nope, it still doesn't work... The app crashes on double-click on the canvas element. The console is not throwing any errors either! =/ I have noticed this problem is isolated to Safari!

    Read the article

  • Setting pixel values in Nvidia NPP ImageCPU objects?

    - by solvingPuzzles
    In the Nvidia Performance Primitives (NPP) image-processing examples in the CUDA SDK distribution, images are typically stored on the CPU as ImageCPU objects, and images are stored on the GPU as ImageNPP objects. boxFilterNPP.cpp is an example from the CUDA SDK that uses these ImageCPU and ImageNPP objects. When using a filter (convolution) function like nppiFilter, it makes sense to define a filter as an ImageCPU object. However, I see no clear way of setting the values of an ImageCPU object:

        npp::ImageCPU_32f_C1 hostKernel(3,3); // allocate space for a 3x3 convolution kernel
        // want to set hostKernel to [-1 0 1; -1 0 1; -1 0 1]
        hostKernel[0][0] = -1;   // this doesn't compile
        hostKernel(0,0) = -1;    // this doesn't compile
        hostKernel.at(0,0) = -1; // this doesn't compile

    How can I manually put values into an ImageCPU object? Notes: I didn't actually use nppiFilter in the code snippet; I'm just mentioning nppiFilter as a motivating example for writing values into an ImageCPU object. The boxFilterNPP.cpp example doesn't involve writing directly to an ImageCPU object, because nppiFilterBox is a special case of nppiFilter that uses a built-in smoothing filter (probably something like [1 1 1; 1 1 1; 1 1 1]).

    Read the article

  • What are good ways to implement search and search results using AJAX?

    - by Amr ElGarhy
    I have some textboxes on a page, and on the same page there will be a table ('grid') to hold the search results. When the user starts editing any of the textboxes, the search must start by sending all the textbox values to the server via AJAX, and the results come back to fill the grid below. Notes: this grid should support paging and sorting by clicking on headers, and it will contain some controls beside the results, such as checkboxes for boolean values and links for opening details in another page. I know many ways to do this, some of which are: 1- an UpdatePanel around all of these controls, and that's it (the fast, dirty solution); 2- send the search criteria in an AJAX request (using jQuery's post function, for example), get back the JSON result, and draw the grid using a template (clean, but it will take time to finish and will be harder to edit later); 3- .... My question is: what do you think would be the best choice to implement this scenario? I face this scenario a lot, and I want to know which implementation is better regarding performance, optimization, and time to finish. I just want to know your thoughts about this issue.

    Read the article

  • Architecture of finding movable geotagged objects

    - by itsme
    I currently have a Postgres DB filled with approx. 300,000 data sets of moving vehicles all over the world. My very frequently repeated query is: give me all vehicles in a 5/10/20-mile radius. Currently I spend around 600 to 1200 ms in the DB preparing the set of located vehicle objects. I am looking to vastly improve this time, ideally by one or two orders of magnitude if possible. I am working in a Ruby on Rails 3.0beta environment, if that is relevant. Any ideas how to architect the whole system to accelerate this query? Any NoSQL database able to deliver this kind of geolocation performance? I know of MongoDB working on an extension to facilitate this scenario, but I haven't tried it yet. Any intelligent use of Redis to achieve this? One problem with SQL DBs here seems to be that I can't really use indexes, because my vehicles are mostly moving around, meaning I would have to constantly re-create DB indexes, which, by itself, is probably more expensive than just doing the search without an index. Looking forward to your thoughts. Thanks!
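
    One architecture that sidesteps index churn for moving points is a spatial hash: bucket vehicles into coarse lat/lon grid cells in memory, move a vehicle between buckets when it moves, and answer a radius query by scanning only the cells the radius overlaps. A rough sketch in Java (cell size and the flat-earth distance check are simplifying assumptions that only hold for small radii; names are illustrative):

        import java.util.ArrayList;
        import java.util.HashMap;
        import java.util.List;
        import java.util.Map;

        // Spatial hash: vehicles live in coarse grid cells keyed by
        // (cellX, cellY); a radius query scans only the nearby cells.
        class SpatialHash {
            static final double CELL_DEG = 0.1;  // roughly 7 miles of latitude per cell

            record Vehicle(int id, double lat, double lon) {}

            private final Map<Long, List<Vehicle>> grid = new HashMap<>();

            private static long key(double lat, double lon) {
                long cx = (long) Math.floor(lon / CELL_DEG);
                long cy = (long) Math.floor(lat / CELL_DEG);
                return (cx << 32) ^ (cy & 0xffffffffL);  // pack both cell indices
            }

            void insert(Vehicle v) {
                grid.computeIfAbsent(key(v.lat(), v.lon()), k -> new ArrayList<>()).add(v);
            }

            // All vehicles within radiusDeg (in degrees) of (lat, lon).
            List<Vehicle> near(double lat, double lon, double radiusDeg) {
                List<Vehicle> out = new ArrayList<>();
                int cells = (int) Math.ceil(radiusDeg / CELL_DEG);
                for (int dx = -cells; dx <= cells; dx++) {
                    for (int dy = -cells; dy <= cells; dy++) {
                        List<Vehicle> bucket =
                            grid.get(key(lat + dy * CELL_DEG, lon + dx * CELL_DEG));
                        if (bucket == null) continue;  // empty cells cost nothing
                        for (Vehicle v : bucket) {
                            double dLat = v.lat() - lat, dLon = v.lon() - lon;
                            if (dLat * dLat + dLon * dLon <= radiusDeg * radiusDeg) out.add(v);
                        }
                    }
                }
                return out;
            }
        }

    Updating a moving vehicle is just a remove from the old bucket and an insert into the new one, so there is no index to rebuild.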

    Read the article

  • Java: multi-threaded maps: how do the implementations compare?

    - by user346629
    I'm looking for a good hash map implementation. Specifically, one that's good for creating a large number of maps, most of them small. So memory is an issue. It should be thread-safe (though losing the odd put might be an OK compromise in return for better performance), and fast for both get and put. And I'd also like the moon on a stick, please, with a side-order of justice. The options I know are: HashMap. Disastrously un-thread safe. ConcurrentHashMap. My first choice, but this has a hefty memory footprint - about 2k per instance. Collections.sychronizedMap(HashMap). That's working OK for me, but I'm sure there must be faster alternatives. Trove or Colt - I think neither of these are thread-safe, but perhaps the code could be adapted to be thread safe. Any others? Any advice on what beats what when? Any really good new hash map algorithms that Java could use an implementation of? Thanks in advance for your input!

    Read the article

  • Built-in background-scheduling system in .NET?

    - by Lasse V. Karlsen
    I ask though I doubt there is any such system. Basically I need to schedule tasks to execute at some point in the future (usually no more than a few seconds or possibly minutes from now), and have some way of cancelling that request unless too late. Ie. code that would look like this: var x = Scheduler.Schedule(() => SomethingSomething(), TimeSpan.FromSeconds(5)); ... x.Dispose(); // cancels the request Is there any such system in .NET? Is there anything in TPL that can help me? I need to run such future-actions from various instances in a system here, and would rather avoid each such class instance to have its own thread and deal with this. Also note that I don't want this (or similar, for instance through Tasks): new Thread(new ThreadStart(() => { Thread.Sleep(5000); SomethingSomething(); })).Start(); There will potentially be a few such tasks to execute, they don't need to be executed in any particular order, except for close to their deadline, and it isn't vital that they have anything like a realtime performance concept. I just want to avoid spinning up a separate thread for each such action.

    Read the article
