Search Results

Search found 6670 results on 267 pages for 'speed dial'.


  • wordpress generating slow mysql queries - is it an index problem?

    - by tash
    Hello Stack Overflow, I've got very slow MySQL queries coming from my WordPress site. It's making everything slow and I think this is eating up CPU usage. I've pasted the EXPLAIN results for the two most frequently problematic queries below. This is a typical result - although very occasionally the queries do seem to be performed at a more normal speed. I have the usual WordPress indexes on the database tables. You will see that one of the queries is generated from WordPress core code, and not from anything specific - like the theme - for my site. I have a vague feeling that the database is not always using the indexes/is not using them properly... Is this right? Does anyone know how to fix it? Or is it a different problem entirely? Many thanks in advance for any help anyone can offer - it is hugely appreciated.

    Query: [wp-blog-header.php(14): wp()]

        SELECT SQL_CALC_FOUND_ROWS wp_posts.* FROM wp_posts
        WHERE 1=1 AND wp_posts.post_type = 'post'
          AND (wp_posts.post_status = 'publish' OR wp_posts.post_status = 'private')
        ORDER BY wp_posts.post_date DESC LIMIT 0, 6

        id  select_type  table     type  possible_keys     key               key_len  ref    rows  Extra
        1   SIMPLE       wp_posts  ref   type_status_date  type_status_date  63       const  427   Using where; Using filesort

    Query time: 34.2829 (ms)

    Query: [wp-content/themes/LMHR/index.php(40): query_posts()]

        SELECT SQL_CALC_FOUND_ROWS wp_posts.* FROM wp_posts
        WHERE 1=1
          AND wp_posts.ID NOT IN (
            SELECT tr.object_id
            FROM wp_term_relationships AS tr
            INNER JOIN wp_term_taxonomy AS tt ON tr.term_taxonomy_id = tt.term_taxonomy_id
            WHERE tt.taxonomy = 'category' AND tt.term_id IN ('217', '218', '223', '224')
          )
          AND wp_posts.post_type = 'post'
          AND (wp_posts.post_status = 'publish' OR wp_posts.post_status = 'private')
        ORDER BY wp_posts.post_date DESC LIMIT 0, 6

        id  select_type         table     type    possible_keys                      key               key_len  ref                                       rows  Extra
        1   PRIMARY             wp_posts  ref     type_status_date                   type_status_date  63       const                                     427   Using where; Using filesort
        2   DEPENDENT SUBQUERY  tr        ref     PRIMARY,term_taxonomy_id           PRIMARY           8        func                                      1     Using index
        2   DEPENDENT SUBQUERY  tt        eq_ref  PRIMARY,term_id_taxonomy,taxonomy  PRIMARY           8        antin1_lovemusic2010.tr.term_taxonomy_id  1     Using where

    Query time: 70.3900 (ms)

    Read the article

  • delete row from result set in web sql with javascript

    - by Kaijin
    I understand that the result set from web sql isn't quite an array, more of an object? I'm cycling through a result set and to speed things up I'd like to remove a row once it's been found. I've tried "delete" and "splice", the former does nothing and the latter throws an error. Here's a piece of what I'm trying to do, notice the delete on line 18:

        function selectFromReverse(reverseRay, suggRay) {
            var reverseString = reverseRay.toString();
            db.transaction(function (tx) {
                tx.executeSql('SELECT votecount, comboid FROM counterCombos WHERE comboid IN (' + reverseString + ') AND votecount>0', [],
                    function (tx, results) {
                        processSelectFromReverse(results, suggRay);
                    });
            }, function () { onError });
        }
        function processSelectFromReverse(results, suggRay) {
            var i = suggRay.length;
            while (i--) {
                var j = results.rows.length;
                while (j--) {
                    console.log('searching');
                    var found = 0;
                    if (suggRay[i].reverse == results.rows.item(j).comboid) {
                        delete results.rows.item(j);
                        console.log('found');
                        found++;
                        break;
                    }
                }
                if (found == 0) {
                    console.log('lost');
                }
            }
        }

    Read the article

  • increase number of photos from flickr using json

    - by Andrew Welch
    Hi, this is my code. Is it possible to get more photos from Flickr? What is the standard / default number?

        $(document).ready(function(){
            $.getJSON("http://api.flickr.com/services/feeds/photos_public.gne?id=48719970@N07&lang=en-us&format=json&jsoncallback=?", function(data){
                $.each(data.items, function(i, item){
                    var newurl = 'url(' + item.media.m + ')';
                    $("<div class='images'/>").css('background', newurl)
                        .css('backgroundPosition', 'top center')
                        .css('backgroundRepeat', 'no-repeat')
                        .appendTo("#images")
                        .wrap("<a target=\"_blank\" href='" + item.link + "'></a>");
                })
                $("#title").html(data.title);
                $("#description").html(data.description);
                $("#link").html("<a href='" + data.link + "' target=\"_blank\">Visit the Viget Inspiration Pool!</a>");
                // Notice that the object here is "data" because that information sits outside of "items" in the JSON feed
                $('.jcycleimagecarousel').cycle({
                    fx: 'fade',
                    speed: 300,
                    timeout: 3000,
                    next: '#next',
                    prev: '#prev',
                    pause: 1,
                    random: 1
                });
            });
        });

    Read the article

  • What hash algorithms are parallelizable? Optimizing the hashing of large files utilizing multi-core CPUs

    - by DanO
    I'm interested in optimizing the hashing of some large files (optimizing wall clock time). The I/O has been optimized well enough already and the I/O device (local SSD) is only tapped at about 25% of capacity, while one of the CPU cores is completely maxed-out. I have more cores available, and in the future will likely have even more cores. So far I've only been able to tap into more cores if I happen to need multiple hashes of the same file, say an MD5 AND a SHA256 at the same time. I can use the same I/O stream to feed two or more hash algorithms, and I get the faster algorithms done for free (as far as wall clock time). As I understand most hash algorithms, each new bit changes the entire result, and it is inherently challenging/impossible to do in parallel. Are any of the mainstream hash algorithms parallelizable? Are there any non-mainstream hashes that are parallelizable (and that have at least a sample implementation available)? As future CPUs will trend toward more cores and a leveling off in clock speed, is there any way to improve the performance of file hashing? (other than liquid nitrogen cooled overclocking?) or is it inherently non-parallelizable?
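
    A minimal sketch of the chunked ("tree hash") idea in Python, assuming a fixed leaf size and assuming that a digest of per-chunk digests, rather than a standard whole-file MD5/SHA-256, is acceptable; the leaf size and worker count here are arbitrary:

        import hashlib
        import os
        from concurrent.futures import ProcessPoolExecutor

        LEAF = 64 * 1024 * 1024  # 64 MiB per leaf (assumed; tune to balance cores vs. overhead)

        def hash_leaf(job):
            path, offset, length = job
            h = hashlib.sha256()
            with open(path, "rb") as f:
                f.seek(offset)
                h.update(f.read(length))
            return h.digest()

        def tree_hash(path, workers=os.cpu_count()):
            size = os.path.getsize(path)
            jobs = [(path, off, min(LEAF, size - off)) for off in range(0, size, LEAF)]
            with ProcessPoolExecutor(max_workers=workers) as pool:
                leaves = list(pool.map(hash_leaf, jobs))
            # The root digest depends on LEAF, so it is only comparable to
            # digests produced with the same parameters.
            return hashlib.sha256(b"".join(leaves)).hexdigest()

    This keeps every core busy at the cost of producing a non-standard digest; Skein's tree mode and MD6 formalise the same idea inside the hash itself.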

    Read the article

  • Good way to identify similar images?

    - by Nick
    I've developed a simple and fast algorithm in PHP to compare images for similarity. It's fast to hash (~40 per second for 800x600 images) and an unoptimised search algorithm can go through 3,000 images in 22 mins, comparing each one against the others (3/sec). The basic overview is: you take an image, rescale it to 8x8 and then convert those pixels to HSV. The hue, saturation and value are each truncated to 4 bits and it becomes one big hex string. Comparing images basically walks along the two strings and adds up the differences it finds. If the total is below 64 then it's the same image; different images are usually around 600 - 800, and below 20 is extremely similar. Are there any improvements upon this model I can use? I haven't looked at how relevant the different components (hue, saturation and value) are to the comparison. Hue is probably quite important, but the others? To speed up searches I could probably split the 4 bits from each part in half and put the most significant bits first, so if they fail the check then the LSBs don't need to be checked at all. I don't know an efficient way to store bits like that yet still allow them to be searched and compared easily. I've been using a dataset of 3,000 photos (mostly unique) and there haven't been any false positives. It's completely immune to resizes and fairly resistant to brightness and contrast changes.
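
    For reference, a small Python sketch of the scheme as described (8x8 rescale, HSV, 4 bits per channel, per-digit difference), assuming Pillow is available; it is an illustration of the question's algorithm, not an improvement on it:

        from PIL import Image

        def hsv_hash(path):
            # 8x8 thumbnail -> HSV -> keep the top 4 bits of each channel as one hex digit.
            pixels = Image.open(path).convert("RGB").resize((8, 8)).convert("HSV").getdata()
            return "".join("%x%x%x" % (h >> 4, s >> 4, v >> 4) for h, s, v in pixels)

        def difference(hash_a, hash_b):
            # Walk both strings and sum the per-digit differences (0 = identical).
            return sum(abs(int(a, 16) - int(b, 16)) for a, b in zip(hash_a, hash_b))

    Weighting hue more heavily than saturation or value would only need per-channel multipliers inside difference().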

    Read the article

  • What is the fastest way for reading huge files in Delphi?

    - by dummzeuch
    My program needs to read chunks from a huge binary file with random access. I have got a list of offsets and lengths which may have several thousand entries. The user selects an entry and the program seeks to the offset and reads length bytes. The program internally uses a TMemoryStream to store and process the chunks read from the file. Reading the data is done via a TFileStream like this: FileStream.Position := Offset; MemoryStream.CopyFrom(FileStream, Size); This works fine but unfortunately it becomes increasingly slower as the files get larger. The file size starts at a few megabytes but frequently reaches several tens of gigabytes. The chunks read are around 100 kbytes in size. The file's content is only read by my program. It is the only program accessing the file at the time. Also the files are stored locally so this is not a network issue. I am using Delphi 2007 on a Windows XP box. What can I do to speed up this file access?
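
    One way to tell whether the slowdown is in the OS/file-system layer or in the stream handling is to time the raw seek-and-read pattern on its own. A rough Python harness for that, with arbitrary chunk size and sample count, and a hypothetical file path in the example:

        import os
        import random
        import time

        def time_random_reads(path, chunk=100 * 1024, samples=200):
            """Seek to random offsets and read fixed-size chunks, mimicking the
            offset/length access pattern, and report the average time per read."""
            size = os.path.getsize(path)
            offsets = [random.randrange(0, max(1, size - chunk)) for _ in range(samples)]
            start = time.perf_counter()
            with open(path, "rb", buffering=0) as f:  # unbuffered, like a plain file stream
                for off in offsets:
                    f.seek(off)
                    f.read(chunk)
            return (time.perf_counter() - start) / samples

        # Example: print(time_random_reads(r"D:\data\huge.bin"))  # path is hypothetical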

    Read the article

  • Programming exercises in Java inheritance for intern

    - by Tenner
    I work for a small software development team, working primarily in Java, for a very large company. Our new intern showed up sight-unseen (not uncommon in my company). He has some C++ experience but no Java. Worse, he's never worked with inheritance in C++. Our code has a great deal of abstraction and a heavy reliance on inheritance. We need to get him up to speed as quickly as possible. Of course the rest of the team is busy, and so we can't take the time out of our day to teach a one-student 200-level CS course. Instead, I'd like to give him an actual programming project to work on which highlights how classes, interfaces, method overrides, etc. work. I've had him look at Project Euler, but most of the solutions end up being procedural, and not object-oriented programs. Do any of you have any somewhat-straightforward (and relatively quick) projects which you would give to an intern in this situation? Or, any recent (or current) students have a school project they'd be willing to share? Anyone else had this experience?

    Read the article

  • Animation management in cocos2d-iphone.

    - by shreya
    Hi All, I have about 255 image frames for the background animation, 99 frames for the enemy sprite and 125 frames for the player sprite. All animations run simultaneously on the screen: the background animation is running, 4-5 enemies are present at a time, and the player is there as well. Take a look at the code below:

        CCAnimation *_enemyAnimation = [CCAnimation animationWithName:@"Enemy" delay:0.1f];
        for (int i = 1; i < 99; i++) {
            [_enemyAnimation addFrameWithFilename:[NSString stringWithFormat:@"enemy %02d.jpg", i]];
        }
        id action1 = [CCAnimate actionWithAnimation:_enemyAnimation];
        [_enemySprite runAction:[CCRepeatForever actionWithAction:action1]];
        [self schedule:@selector(BackToGameLogic:) interval:5.0];

    This makes my game much slower and it consumes about 65 MB of memory in the allocations. How should I manage my animations so that speed improves and memory consumption is reduced? Please suggest a way. Thanks.

    Read the article

  • Proper way to scan a range of IP addresses

    - by Josh G
    Given a range of IP addresses entered by a user (through various means), I want to identify which of these machines have software running that I can talk to. Here's the basic process:

        1. Ping these addresses to find available machines
        2. Connect to a known socket on the available machines
        3. Send a message to the successfully established sockets
        4. Compare the response to the expected response

    Steps 2-4 are straightforward for me. What is the best way to implement the first step in .NET? I'm looking at the System.Net.NetworkInformation.Ping class. Should I ping multiple addresses simultaneously to speed up the process? If I ping one address at a time with a long timeout it could take forever, but with a small timeout I may miss some machines that are available. Sometimes pings appear to fail even when I know that the address points to an active machine. Do I need to ping twice in the event of the request getting discarded? To top it all off, when I scan large collections of addresses with the network cable unplugged, Ping throws a NullReferenceException in FreeUnmanagedResources(). !? Any pointers on the best approach to scanning a range of IPs like this?
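
    The question is about .NET's Ping class, but the concurrency idea is language-agnostic. A hedged Python sketch of pinging many addresses in parallel with short timeouts (note that ping's flag names and timeout units vary by platform):

        import platform
        import subprocess
        from concurrent.futures import ThreadPoolExecutor

        def is_alive(ip, timeout_s=1):
            # One echo request per host; -n/-w on Windows, -c/-W elsewhere (units differ by OS).
            if platform.system() == "Windows":
                cmd = ["ping", "-n", "1", "-w", str(timeout_s * 1000), ip]
            else:
                cmd = ["ping", "-c", "1", "-W", str(timeout_s), ip]
            result = subprocess.run(cmd, stdout=subprocess.DEVNULL, stderr=subprocess.DEVNULL)
            return result.returncode == 0

        def scan(addresses, workers=50):
            # Ping many hosts concurrently so one unreachable host doesn't stall the sweep.
            with ThreadPoolExecutor(max_workers=workers) as pool:
                return [ip for ip, up in zip(addresses, pool.map(is_alive, addresses)) if up]

    Retrying the hosts that failed once (the "ping twice" idea) is then just a second scan() over the misses.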

    Read the article

  • Fastest inline-assembly spinlock

    - by sigvardsen
    I'm writing a multithreaded application in C++, where performance is critical. I need to use a lot of locking while copying small structures between threads; for this I have chosen to use spinlocks. I have done some research and speed testing on this and I found that most implementations are roughly equally fast:

        - Microsoft's CRITICAL_SECTION, with SpinCount set to 1000, scores about 140 time units
        - Implementing this algorithm with Microsoft's InterlockedCompareExchange scores about 95 time units
        - I've also tried some inline assembly with __asm {} using something like this code, and it scores about 70 time units, but I am not sure that a proper memory barrier has been created.

    Edit: The times given here are the time it takes for 2 threads to lock and unlock the spinlock 1,000,000 times.

    I know this isn't a lot of difference, but as a spinlock is a heavily used object, one would think that programmers would have agreed on the fastest possible way to make a spinlock. Googling it leads to many different approaches, however. I would think the aforementioned method would be the fastest if implemented using inline assembly and using the instruction CMPXCHG8B instead of comparing 32bit registers. Furthermore, memory barriers must be taken into account; this could be done by LOCK CMPXCHG8B (I think?), which guarantees "exclusive rights" to the shared memory between cores. At last, [some suggest] that busy waits should be accompanied by NOP:REP, which would enable Hyper-threading processors to switch to another thread, but I am not sure whether this is true or not. From my performance test of different spinlocks, it is seen that there is not much difference, but for purely academic purposes I would like to know which one is fastest. However, as I have extremely limited experience with assembly language and memory barriers, I would be happy if someone could write the assembly code for the last example I provided, with LOCK CMPXCHG8B and proper memory barriers, in the following template:

        __asm
        {
        spin_lock:
            ; locking code.
        spin_unlock:
            ; unlocking code.
        }

    Read the article

  • jQuery cycle plugin and image tag from code-behind

    - by Geetha
    Hi All, I am using the cycle plugin to show images. I am binding the HTML code for the images from code-behind.

    Problem: Images are not getting displayed. If I hard-code the image tag it works.

    Code:

        $(window).load(function() {
            $.ajax({
                type: "POST",
                contentType: "application/json; charset=utf-8",
                data: "{}",
                url: "Default.aspx/GetImage",
                dataType: "json",
                success: function(data) {
                    alert(data.d);
                    $("#sample").html(data.d);
                    $('#sample').cycle({
                        fx: 'fade',
                        continuous: true,
                        speed: 7500,
                        timeout: 55000,
                        pause: 1,
                        sync: 1
                    });
                }
            });
        });

    Data.d will give:

        <img src="Images/Frontbanner/s0.jpg" width="664" height="428" border="0" />
        <img src="Images/Frontbanner/s111.jpg" width="664" height="428" border="0" />
        <img src="Images/Frontbanner/s112.jpg" width="664" height="428" border="0" />
        <img src="Images/Frontbanner/s113.jpg" width="664" height="428" border="0" />
        <img src="Images/Frontbanner/s114.jpg" width="664" height="428" border="0" />
        <img src="Images/Frontbanner/s115.jpg" width="664" height="428" border="0" />
        <img src="Images/Frontbanner/s116.jpg" width="664" height="428" border="0" />

    Read the article

  • Splitting a set of objects into several subsets of 'similar' objects

    - by doublep
    Suppose I have a set of objects, S. There is an algorithm f that, given a set S, builds a certain data structure D on it: f(S) = D. If S is large and/or contains vastly different objects, D becomes large, to the point of being unusable (i.e. not fitting in allotted memory). To overcome this, I split S into several non-intersecting subsets: S = S1 + S2 + ... + Sn and build a Di for each subset. Using n structures is less efficient than using one, but at least this way I can fit into memory constraints. Since the size of f(S) grows faster than S itself, the combined size of the Di is much less than the size of D. However, it is still desirable to reduce n, i.e. the number of subsets, or to reduce the combined size of the Di. For this, I need to split S in such a way that each Si contains "similar" objects, because then f will produce a smaller output structure if the input objects are "similar enough" to each other. The problem is that while "similarity" of objects in S and the size of f(S) do correlate, there is no way to compute the latter other than just evaluating f(S), and f is not quite fast. The algorithm I currently have is to iteratively add each next object from S into one of the Si, so that this results in the least possible (at this stage) increase in combined Di size:

        for x in S:
            i = such i that size(f(Si + {x})) - size(f(Si)) is min
            Si = Si + {x}

    This gives practically useful results, but certainly pretty far from the optimum (i.e. the minimal possible combined size). Also, this is slow. To speed it up somewhat, I compute size(f(Si + {x})) - size(f(Si)) only for those i where x is "similar enough" to the objects already in Si. Is there any standard approach to such kinds of problems? I know of the branch and bound family of algorithms, but it cannot be applied here because it would be prohibitively slow. My guess is that it is simply not possible to compute the optimal distribution of S into the Si in reasonable time. But is there some common iteratively-improving algorithm?
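
    A compact Python rendering of the greedy loop described above, assuming f() accepts any subset (including an empty one) and size() returns the size of its structure; note it re-evaluates f for every candidate subset, which is exactly what makes the heuristic slow:

        def greedy_split(S, f, size, n):
            """Assign each object to whichever of the n subsets grows the combined
            size of the Di the least (the iterative heuristic from the question)."""
            subsets = [set() for _ in range(n)]
            sizes = [size(f(s)) for s in subsets]
            for x in S:
                deltas = [size(f(subsets[i] | {x})) - sizes[i] for i in range(n)]
                best = min(range(n), key=lambda i: deltas[i])
                subsets[best].add(x)
                sizes[best] += deltas[best]
            return subsets

    The "similar enough" shortcut then just restricts the deltas computation to a candidate subset of indices instead of all n.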

    Read the article

  • Javascript changing source of an image

    - by Pete Herbert Penito
    Hi Everyone! I have some JavaScript which runs a timer that animates something on the website. It all works perfectly, but I want to get an image to change when the animation is run. It uses jQuery:

        if(options.auto && dir=="next" && !clicked) {
            timeout = setTimeout(function(){
                if (document.images['bullet1'].src == "img/bulletwhite.png") {
                    document.images['bullet1'].src = "img/bullet.png";
                    document.images['bullet2'].src = "img/bulletwhite.png";
                }
                animate("next",false);
            }, diff*options.speed+options.pause);
        }

    options.auto means that it's automatically cycling, dir is the direction of the motion and clicked is whether or not the user clicked it. Is there something wrong with my syntax here? I ran Firebug with it, and it doesn't throw any errors, it simply won't work. Any advice would help! I should mention that the src of bullet1 starts at bulletwhite.png and then I was hoping for it to change to bullet.png and have bullet2 change to bulletwhite.png.

    Read the article

  • Fast, Unicode-capable, cross-platform programmer's text editor that shows invisibles like ZWSP?

    - by Roger_S
    Our publishing workflow includes Windows and Linux machines (there are some Macs too, but not in the critical-path workflow). Many texts include both English and Khmer and are marked-up in XML. XML Copy Editor is the best cross-platform open-source XML editor I've discovered. It utilizes the Scintilla editing component, which is generally good with Unicode but which does not enable non-printing or invisible characters like U+200B (zero-width space) and U+200C (zero-width non-joiner) to be displayed. Khmer does not separate words with a space character as Western languages do, so ZWSP is used in electronic texts to enable applications to break lines easily. Ideally I'd edit the markup and the content in a single editor, but XML awareness is less important at times than being able to display invisibles. (OpenOffice.org Writer and Microsoft Word are the only two apps I know that will display ZWSP. They are not suitable for the markup and text manipulations that need to be done to prepare manuscripts for publication, unfortunately, although I guess they're fine for authoring.) I tried out a promising editor last week, but a search-and-replace regex operation that took under a second in TextPad 4.7.3 lasted over twenty seconds. So I want to mention that speed and the ability to handle large (up to 150mb) files is also a concern. Is there a good, fast, free or not too expensive text editor, with versions on Windows and Linux and maybe mac too, Unicode-aware and capable of displaying invisibles like ZWSP? That has syntax highlighting, can handle large files and is customizable enough that I won't tear my hair out in frustration? Thanks, Roger_S

    Read the article

  • Program structure in long running data processing python script

    - by fmark
    For my current job I am writing some long-running (think hours to days) scripts that do CPU-intensive data processing. The program flow is very simple - it proceeds into the main loop, completes the main loop, saves output and terminates. The basic structure of my programs tends to be like so:

        <import statements>
        <constant declarations>
        <misc function declarations>

        def main():
            for blah in blahs():
                <lots of local variables>
                <lots of tightly coupled computation>
                for something in somethings():
                    <lots more local variables>
                    <lots more computation>
                <etc., etc.>
            <save results>

        if __name__ == "__main__":
            main()

    This gets unmanageable quickly, so I want to refactor it into something more manageable. I want to make this more maintainable without sacrificing execution speed. Each chunk of code relies on a large number of variables however, so refactoring parts of the computation out to functions would make parameter lists grow out of hand very quickly. Should I put this sort of code into a Python class, and change the local variables into class variables? It doesn't make a great deal of sense to me conceptually to turn the program into a class, as the class would never be reused and only one instance would ever be created per run. What is the best practice structure for this kind of program? I am using Python, but the question is relatively language-agnostic, assuming modern object-oriented language features.
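
    One hedged sketch of the class-based option: the sprawling locals become attributes of a per-run object, so helper methods can share state without long parameter lists. blahs, somethings and compute stand in for the placeholders in the skeleton above:

        class Pipeline:
            """One run of the long computation; create a fresh instance per run."""

            def __init__(self, config):
                self.config = config
                self.results = []

            def blahs(self):
                return range(self.config["n_outer"])

            def somethings(self, blah):
                return range(blah)

            def compute(self, something):
                return something * something  # placeholder for the tightly coupled computation

            def run(self):
                for blah in self.blahs():
                    for something in self.somethings(blah):
                        self.results.append(self.compute(something))
                return self.results

        if __name__ == "__main__":
            print(Pipeline({"n_outer": 4}).run())

    Attribute access is marginally slower than local-variable access in CPython, so the hottest inner loops can still copy attributes into locals before iterating.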

    Read the article

  • Parallel.For Batching

    - by chibacity
    Is there built-in support in the TPL for batching operations? I was recently playing with a routine to carry out character replacement on a character array which required a lookup table, i.e. transliteration:

        for (int i = 0; i < chars.Length; i++)
        {
            char replaceChar;
            if (lookup.TryGetValue(chars[i], out replaceChar))
            {
                chars[i] = replaceChar;
            }
        }

    I could see that this could be trivially parallelized, so I jumped in with a first stab which I knew would perform worse as the tasks were too fine-grained:

        Parallel.For(0, chars.Length, i =>
        {
            char replaceChar;
            if (lookup.TryGetValue(chars[i], out replaceChar))
            {
                chars[i] = replaceChar;
            }
        });

    I then reworked the algorithm to use batching so that the work could be chunked onto different threads in less fine-grained batches. This made use of threads as expected and I got some near linear speed up. I'm sure that there must be built-in support for batching in the TPL. What is the syntax, and how do I use it?

        const int CharBatch = 100;
        int charLen = chars.Length;
        Parallel.For(0, ((charLen / CharBatch) + 1), i =>
        {
            int batchUpper = ((i + 1) * CharBatch);
            for (int j = i * CharBatch; j < batchUpper && j < charLen; j++)
            {
                char replaceChar;
                if (lookup.TryGetValue(chars[j], out replaceChar))
                {
                    chars[j] = replaceChar;
                }
            }
        });
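
    The TPL specifics aside, the batching knob looks similar in most parallel APIs. A Python illustration of the same idea, where multiprocessing.Pool.map's chunksize argument hands each worker a batch of elements instead of one at a time:

        from multiprocessing import Pool

        LOOKUP = {"a": "b", "x": "y"}  # example transliteration table

        def replace_char(c):
            return LOOKUP.get(c, c)

        if __name__ == "__main__":
            chars = list("axbca" * 200_000)
            with Pool() as pool:
                # chunksize batches the work: each task a worker picks up covers
                # 10,000 characters rather than a single one.
                chars = pool.map(replace_char, chars, chunksize=10_000)
            print("".join(chars[:10]))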

    Read the article

  • HTTP Compression problems on IIS7

    - by Jonathan Wood
    I've spent quite a bit of time on this but seem to be going nowhere. I have a large page that I really want to speed up. The obvious place to start seems to be HTTP compression, but I just can't seem to get it to work for me. After considerable searching, I've tried several variations of the code below. It kind of works, but after refreshing the browser, the results seem to fall apart. They were turning to garbage when the page used caching. If I turn off caching, then the page seems right but I lose my CSS formatting (stored in a separate file) and get an error that an included JS file contains invalid characters. Most of the resources I've found on the Web were either very old or focused on accessing IIS directly. My page is running on a shared hosting account and I do not have direct access to IIS7, which it's running on.

        protected void Application_BeginRequest(object sender, EventArgs e)
        {
            // Implement HTTP compression
            if (Request["HTTP_X_MICROSOFTAJAX"] == null) // Avoid compressing AJAX calls
            {
                // Retrieve accepted encodings
                string encodings = Request.Headers.Get("Accept-Encoding");
                if (encodings != null)
                {
                    // Verify support for deflate or gzip (deflate takes preference)
                    encodings = encodings.ToLower();
                    if (encodings.Contains("gzip") || encodings == "*")
                    {
                        Response.Filter = new GZipStream(Response.Filter, CompressionMode.Compress);
                        Response.AppendHeader("Content-Encoding", "gzip");
                        Response.Cache.VaryByHeaders["Accept-encoding"] = true;
                    }
                    else if (encodings.Contains("deflate"))
                    {
                        Response.Filter = new DeflateStream(Response.Filter, CompressionMode.Compress);
                        Response.AppendHeader("Content-Encoding", "deflate");
                        Response.Cache.VaryByHeaders["Accept-encoding"] = true;
                    }
                }
            }
        }

    Is anyone having better success with this?

    Read the article

  • Stopping cookies being set from a domain (aka "cookieless domain") to increase site performance

    - by Django Reinhardt
    I was reading in Google's documentation about improving site speed. One of their recommendations is serving static content (images, css, js, etc.) from a "cookieless domain":

        "Static content, such as images, JS and CSS files, don't need to be accompanied by cookies, as there is no user interaction with these resources. You can decrease request latency by serving static resources from a domain that doesn't serve cookies."

    Google then says that the best way to do this is to buy a new domain and set it to point to your current one:

        "To reserve a cookieless domain for serving static content, register a new domain name and configure your DNS database with a CNAME record that points the new domain to your existing domain A record. Configure your web server to serve static resources from the new domain, and do not allow any cookies to be set anywhere on this domain. In your web pages, reference the domain name in the URLs for the static resources."

    This is pretty straightforward stuff, except for the bit where it says to "configure your web server to serve static resources from the new domain, and do not allow any cookies to be set anywhere on this domain". From what I've read, there's no setting in IIS that allows you to say "serve static resources", so how do I prevent ASP.NET from setting cookies on this new domain? At present, even if I'm just requesting a .jpg from the new domain, it sets a cookie on my browser, even though our application's cookies are set to our old domain. For example, ASP.NET sets an ".ASPXANONYMOUS" cookie that (as far as I'm aware) we're not telling it to do. Apologies if this is a real newb question, I'm new at this! Thanks.

    Read the article

  • What is the fastest way to filter a list of strings when making an Intellisense/Autocomplete list?

    - by user559548
    Hello everyone, I'm writing an Intellisense/Autocomplete like the one you find in Visual Studio. It's all fine up until the list contains probably 2000+ items. I'm using a simple LINQ statement for doing the filtering:

        var filterCollection = from s in listCollection
                               where s.FilterValue.IndexOf(currentWord, StringComparison.OrdinalIgnoreCase) >= 0
                               orderby s.FilterValue
                               select s;

    I then assign this collection to a WPF Listbox's ItemsSource, and that's the end of it; it works fine. Note that the Listbox is also virtualised, so there will only be at most 7-8 visual elements in memory and in the visual tree. However, the caveat right now is that when the user types extremely fast in the richtextbox, and on every key up I execute the filtering + binding, there's this semi-race condition, or out-of-sync filtering: the first key stroke's filtering could still be doing its filtering or binding work while the fourth key stroke is also doing the same. I know I could put in a delay before applying the filter, but I'm trying to achieve a seamless filtering much like the one in Visual Studio. I'm not sure where my problem exactly lies, so I'm also attributing it to IndexOf's string operation, or perhaps my list of strings could be optimised with some kind of index that could speed up searching. Any suggestions or code samples are much welcomed. Thanks.
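
    Language aside, a common cure for the out-of-sync updates is to tag each filter request with a generation number and discard any result that is no longer the latest. A small Python sketch of that pattern (the filtering itself is just the substring test from the question, and the class and method names are illustrative):

        import threading

        class Completer:
            def __init__(self, items):
                self.items = items
                self._generation = 0
                self._lock = threading.Lock()

            def request_filter(self, current_word, publish):
                # Called on every keystroke; publish() only ever receives up-to-date results.
                with self._lock:
                    self._generation += 1
                    generation = self._generation
                threading.Thread(target=self._filter, args=(current_word, generation, publish)).start()

            def _filter(self, word, generation, publish):
                word = word.lower()
                matches = sorted(s for s in self.items if word in s.lower())
                with self._lock:
                    if generation == self._generation:  # a newer keystroke supersedes this one
                        publish(matches)

    Cancelling the stale work early, rather than just discarding its output, is the same idea with a cancellation flag checked inside the filtering loop.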

    Read the article

  • How can I accelerate the generation of an MD5 checksum within vb.net?

    - by Richard
    I'm working with some very large files residing on P2 (Panasonic) cards. Part of the process we employ is to first generate a checksum of the file we are going to copy, then copy the file, then run a checksum on the file to confirm that it copied OK. The problem is that the files are large (70 GB+) and take a long time to complete. It's an issue since we will eventually be dealing with thousands of these files. I would like to find a faster way to generate the checksum other than using the System.Security.Cryptography.MD5CryptoServiceProvider. I don't care if this means using a specialized hardware card, provided it works and is not too ungodly expensive. I would prefer a method that provides some feedback as to how far the process has gone along, so I can display it like I do now. The application is written in vb.net. I would prefer to be able to use it as a component, library or reference within my application, but I'm willing to call an outside application if there is enough improvement in the speed of generating the checksum. Needless to say, the checksum must be consistent and correct. :-) Thank you in advance for your time and efforts, Richard
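
    The streaming-with-progress part is straightforward in any language; a Python sketch with hashlib, reading in large blocks and reporting after each one (the block size is an assumption to tune against the card's throughput, and the example filename is hypothetical):

        import hashlib
        import os

        def md5_with_progress(path, block=8 * 1024 * 1024, report=None):
            """Stream the file through MD5; call report(done, total) after each block."""
            total = os.path.getsize(path)
            done = 0
            digest = hashlib.md5()
            with open(path, "rb") as f:
                while True:
                    chunk = f.read(block)
                    if not chunk:
                        break
                    digest.update(chunk)
                    done += len(chunk)
                    if report:
                        report(done, total)
            return digest.hexdigest()

        # Example: md5_with_progress("clip.mxf", report=lambda d, t: print(f"{100 * d // t}%"))

    Since the same bytes are read for the copy anyway, feeding the hash from the copy loop avoids a second pass over the file.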

    Read the article

  • gl_FragColor and glReadPixels

    - by chun0216
    I am still trying to read pixels from the fragment shader and I have some questions. I know that gl_FragColor returns a vec4, meaning RGBA, 4 channels. After that, I am using glReadPixels to read the FBO and write it into data:

        GLubyte *pixels = new GLubyte[640*480*4];
        glReadPixels(0, 0, 640, 480, GL_RGBA, GL_UNSIGNED_BYTE, pixels);

    This works fine, but it really has a speed issue. Instead of this, I want to read just RGB and ignore the alpha channel. I tried:

        GLubyte *pixels = new GLubyte[640*480*3];
        glReadPixels(0, 0, 640, 480, GL_RGB, GL_UNSIGNED_BYTE, pixels);

    instead, and this didn't work though. I guess it's because gl_FragColor returns 4 channels and maybe I should do something before this? Actually, since my returned image (gl_FragColor) is grayscale, I did something like:

        float gray = 0.5; // or some other values
        gl_FragColor = vec4(gray, gray, gray, 1.0);

    So is there any efficient way to use glReadPixels instead of using the first 4-channel method? Any suggestion? By the way, this is OpenGL ES 2.0 code.

    Read the article

  • Is there any performance issue using Row_Number to implement table paging in Sql Server 2008?

    - by majkinetor
    I want to implement table paging using this method:

        SET @PageNum = 2;
        SET @PageSize = 10;

        WITH OrdersRN AS
        (
            SELECT ROW_NUMBER() OVER(ORDER BY OrderDate, OrderID) AS RowNum, *
            FROM dbo.Orders
        )
        SELECT *
        FROM OrdersRN
        WHERE RowNum BETWEEN (@PageNum - 1) * @PageSize + 1 AND @PageNum * @PageSize
        ORDER BY OrderDate, OrderID;

    Is there anything I should be aware of? The table has millions of records. Thx.

    EDIT: After using the suggested MAXROWS method for some time (which works really, really fast) I had to switch back to the ROW_NUMBER method because of its greater flexibility. I am also very happy about its speed so far (I am working with a view having more than 1M records with 10 columns). To use any kind of query I use the following modification:

        PROCEDURE [dbo].[PageSelect]
        (
            @Sql nvarchar(512),
            @OrderBy nvarchar(128) = 'Id',
            @PageNum int = 1,
            @PageSize int = 0
        )
        AS
        BEGIN
            SET NOCOUNT ON

            Declare @tsql as nvarchar(1024)
            Declare @i int, @j int

            if (@PageSize <= 0) OR (@PageSize > 10000) SET @PageSize = 10000 -- never return more than 10K records

            SET @i = (@PageNum - 1) * @PageSize + 1
            SET @j = @PageNum * @PageSize

            SET @tsql = 'WITH MyTableOrViewRN AS
                (
                    SELECT ROW_NUMBER() OVER(ORDER BY ' + @OrderBy + ') AS RowNum, *
                    FROM MyTableOrView
                    WHERE ' + @Sql + '
                )
                SELECT * FROM MyTableOrViewRN
                WHERE RowNum BETWEEN ' + CAST(@i as varchar) + ' AND ' + cast(@j as varchar)

            exec(@tsql)
        END

    If you use this procedure, make sure you prevent SQL injection.

    Read the article

  • Calculating and saving space in Postgresql

    - by punkish
    I have a table in Pg like so:

        CREATE TABLE t (
            a BIGSERIAL NOT NULL,  -- 8 b
            b SMALLINT,            -- 2 b
            c SMALLINT,            -- 2 b
            d REAL,                -- 4 b
            e REAL,                -- 4 b
            f REAL,                -- 4 b
            g INTEGER,             -- 4 b
            h REAL,                -- 4 b
            i REAL,                -- 4 b
            j SMALLINT,            -- 2 b
            k INTEGER,             -- 4 b
            l INTEGER,             -- 4 b
            m REAL,                -- 4 b
            CONSTRAINT a_pkey PRIMARY KEY (a)
        )

    The above adds up to 50 bytes per row. My experience is that I need another 40% to 50% for system overhead, without even any user-created indexes on the above. So, about 75 bytes per row. I will have many, many rows in the table, potentially upward of 145 billion rows, so the table is going to be pushing 13-14 terabytes. What tricks, if any, could I use to compact this table? My possible ideas are below:

        - Convert the REAL values to INTEGERs. If they can be stored as SMALLINT, that is a saving of 2 bytes per field.
        - Convert the columns b .. m into an array. I don't need to search on those columns, but I do need to be able to return one column's value at a time. So, if I need column g, I could do something like SELECT a, arr[5] FROM t;

    Would I save space with the array option? Would there be a speed penalty? Any other ideas?

    Read the article

  • Quickest way to compare a bunch of arrays or lists of values.

    - by zapping
    Can you please let me know the quickest and most efficient way to compare a large set of values? It's like this: there is a list of parent codes (strings) and each code has a series of child values (strings). The child lists have to be compared with each other to find duplicates and count how many times they repeat.

        code1(code1_value1, code1_value2, code3_value3, ..., code1_valueN);
        code2(code2_value1, code1_value2, code2_value3, ..., code2_valueN);
        code3(code2_value1, code3_value2, code3_value3, ..., code3_valueN);
        ...
        codeN(codeN_value1, codeN_value2, codeN_value3, ..., codeN_valueN);

    The lists are huge, say 100 parent codes, each with about 250 values. There will not be duplicates within a code list. I'm doing it in Java and the solution I could figure out is: store the values of the first code list as codeMap.put(codeValue, duplicateCount), with the count initialized to 0. Then compare the rest of the values with this; if a value is in the map then increment the count, otherwise add it to the map. The downside of this is that to get the duplicates, another iteration needs to be performed on a very large map. An alternative is to maintain another hashmap for duplicates, like duplicateCodeMap.put(codeValue, duplicateCount), and change the initial hashmap to codeMap.put(codeValue, codeValue). Speed is the requirement. Hope one of you can help me with it.
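
    The hashmap-counting idea in the question, shown compactly in Python (the entry itself is Java, so this is only an illustration of the approach): one pass builds the counts, and the duplicates are just the entries seen under more than one code:

        from collections import Counter

        def duplicate_counts(code_lists):
            """code_lists maps each parent code to its list of child values
            (no duplicates within a single list)."""
            counts = Counter(value for values in code_lists.values() for value in values)
            return {value: n for value, n in counts.items() if n > 1}

        # Example with two parent codes sharing one value:
        print(duplicate_counts({"code1": ["a", "b"], "code2": ["b", "c"]}))  # {'b': 2}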

    Read the article

  • "variable tracking" is eating my compile time!

    - by wowus
    I have an auto-generated file which looks something like this...

        static void do_SomeFunc1(void* parameter)
        {
            // Do stuff.
        }
        // Continues on for another 4000 functions...

        void dispatch(int id, void* parameter)
        {
            switch(id)
            {
                case ::SomeClass1::id: return do_SomeFunc1(parameter);
                case ::SomeClass2::id: return do_SomeFunc2(parameter);
                // This continues for the next 4000 cases...
            }
        }

    When I build it like this, the build time is enormous. If I inline all the functions automagically into their respective cases using my script, the build time is cut in half. GCC 4.5.0 says ~50% of the build time is being taken up by "variable tracking" when I use -ftime-report. What does this mean and how can I speed up compilation while still maintaining the superior cache locality of pulling the functions out of the switch?

    EDIT: Interestingly enough, the build time has exploded only on debug builds, as per the following profiling information for the whole project (which isn't just the file in question, but still a good metric; the file in question takes the most time to build):

        Debug: 8 minutes 50 seconds
        Release: 4 minutes 25 seconds

    Read the article
