Search Results

Search found 6355 results on 255 pages for 'slow downs'.


  • Complex query making site extremely slow

    - by Basit
        select SQL_CALC_FOUND_ROWS DISTINCT media.*, username
        from album as album, album_permission as permission, user as user, media as media, word_tag as word_tag, tag as tag
        where (
                (media.album_id = album.album_id and album.private = 'yes' and album.album_id = permission.album_id and (permission.email = '' or permission.user_id = ''))
                or (media.album_id = album.album_id and album.private = 'no')
                or media.album_id = '0'
              )
          and media.status = '1'
          and media.user_id = user.user_id
          and word_tag.media_id = media.media_id
          and word_tag.tag_id = tag.tag_id
          and tag.name in ('justin','bieber','malfunction','katherine','heigl','wardrobe','cinetube')
          and media.media_type = 'video'
          and media.media_id not in ('YHL6a5z8MV4')
        group by media.media_id
        order by RAND()
        -- there is a LIMIT of 20 rows as well

    I don't know where to begin explaining this query, so please forgive me and ask if you have any questions. Here is the explanation: SQL_CALC_FOUND_ROWS counts the total number of matching rows for pagination, even though only 20 are shown. DISTINCT stops repeated rows from being displayed. username comes from the user table. album and album_permission are used to check whether the album is private and, if it is, whether the user has permission (by user_id). I think the rest is easy to understand, but please ask if you need to know more. I'm really frustrated by this query; the site is very slow, and sometimes it won't load at all, because of it. Please help.
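    One approach often suggested for OR-heavy joins like this (a sketch reusing the table names above, not something from the original post) is to split the album-visibility branches into separate SELECTs combined with UNION, so each branch can use its own index, and then join the resulting ids back to media, user and the tag tables:

        -- sketch only: ids of videos in public albums
        select media.media_id
        from media
        join album on media.album_id = album.album_id
        where album.private = 'no' and media.status = '1' and media.media_type = 'video'
        union
        -- sketch only: ids of videos in private albums the viewer has permission for
        select media.media_id
        from media
        join album on media.album_id = album.album_id
        join album_permission as permission on album.album_id = permission.album_id
        where album.private = 'yes' and media.status = '1' and media.media_type = 'video'

    ORDER BY RAND() combined with LIMIT is itself a common cause of slowness on large tables, because MySQL has to materialise and sort the whole result; picking random ids in application code, or ordering by an indexed column, usually scales better.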

    Read the article

  • Extremely slow MFMailComposeViewControllerDelegate

    - by Jeff B
    I have a bit of a strange problem. I am trying to send in-app email, and I am also using Cocos2d. It works, in so far as I get the mail window and I can send mail, but it is extremely slow: it only seems to accept touches every second or so. I checked the CPU usage, and it is quite low. I paused my director, so nothing else should be happening. Any ideas? I am pulling my hair out. I looked at some examples and did the following. I made my scene the mail delegate:

        @interface MyLayer : CCLayer <MFMailComposeViewControllerDelegate>
        {
            ...
        }

    And implemented the following method in the scene:

        -(void) showEmailWindow:(id)sender
        {
            [[CCDirector sharedDirector] pause];
            MFMailComposeViewController *picker = [[MFMailComposeViewController alloc] init];
            picker.mailComposeDelegate = self;
            [picker setSubject:@"My subject here"];
            NSString *emailBody = @"<h1>Here is my email!</h1>";
            [picker setMessageBody:emailBody isHTML:YES];
            [myMail presentModalViewController:picker animated:NO];
            [picker release];
        }

    I also implemented the mailComposeController:didFinishWithResult:error: delegate callback.

    Read the article

  • Compiler optimization causing the performance to slow down

    - by aJ
    I have a strange problem. I have the following piece of code:

        template<class index, class policy>
        inline int CBase<index, policy>::func(const A& test_in, int* srcPtr, int* dstPtr)
        {
            int width = test_in.width();
            int height = test_in.height();
            double d = 0.0; // here is the problem
            for(int y = 0; y < height; y++)
            {
                // Pointer initializations
                // multiplication involving y
                // ex: int z = someBigNumber*y + someOtherBigNumber;
                for(int x = 0; x < width; x++)
                {
                    // multiplication involving x
                    // ex: int z = someBigNumber*x + someOtherBigNumber;
                    if(someCondition)
                    {
                        // floating point calculations
                    }
                    *dstPtr++ = array[*srcPtr++];
                }
            }
        }

    The inner loop executes nearly 200,000 times and the entire function takes 100 ms to complete (profiled using AQTimer). I found an unused variable, double d = 0.0;, outside the outer loop and removed it. After this change, the method suddenly takes 500 ms for the same number of executions, i.e. 5 times slower. This behavior is reproducible on different machines with different processor types (Core2, dual-core processors). I am using the VC6 compiler with optimization level O2; the other compiler options used are: -MD -O2 -Z7 -GR -GX -G5 -X -GF -EHa. I suspected compiler optimizations and removed /O2; after that the function became normal again and took 100 ms, like the old code. Could anyone throw some light on this strange behavior? Why should compiler optimization slow down performance when I remove an unused variable? Note: the assembly code (before and after the change) looked the same.

    Read the article

  • PostgreSQL - Why are some queries on large datasets so incredibly slow

    - by Brad Mathews
    Hello, I have two types of queries I run often on two large datasets, and they run much slower than I would expect them to. The first type is a sequential scan updating all records:

        Update rcra_sites Set street = regexp_replace(street,'/','','i')

    rcra_sites has 700,000 records, and this takes 22 minutes from pgAdmin! I wrote a VB.NET function that loops through each record and sends an update query for each one (yes, 700,000 update queries!) and it runs in less than half the time. Hmmm....

    The second type is a simple update with a join and then a sequential scan:

        Update rcra_sites as sites Set violations='No'
        From narcra_monitoring as v
        Where sites.agencyid=v.agencyid and v.found_violation_flag='N'

    narcra_monitoring has 1,700,000 records, and this takes 8 minutes. The query planner refuses to use my indexes; the query runs much faster if I start with set enable_seqscan = false;. I would prefer the query planner to do its job. I have appropriate indexes, and I have vacuumed and analyzed. I tuned shared_buffers and effective_cache_size as best I know how to use more memory, since I have 4 GB. My hardware is pretty darn good. I am running v8.4 on Windows 7. Is PostgreSQL just this slow, or am I still missing something? Thanks! Brad
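    Two things that often help with bulk UPDATEs like these (suggestions, not from the original post): skip rows that would not change, so unmodified tuples are not rewritten, and make sure the join column on the large joined table is indexed. A sketch, reusing the table and column names above:

        -- only rewrite rows that actually contain a slash
        UPDATE rcra_sites
        SET street = regexp_replace(street,'/','','i')
        WHERE street LIKE '%/%';

        -- index to support the join in the second update (if one does not already exist)
        CREATE INDEX narcra_monitoring_agencyid_idx ON narcra_monitoring (agencyid);

    Even then, a full-table UPDATE rewrites every affected tuple and updates every index on the modified column, so some of the runtime is simply expected.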

    Read the article

  • MySQL ORDER BY DESC is fast but ASC is very slow

    - by Pepper
    Hello, I'm completely stumped on this one. For some reason when I sort this query by DESC it's super fast, but if sorted by ASC it's extremely slow. This takes about 150 milliseconds: SELECT posts.id FROM posts USE INDEX (published) WHERE posts.feed_id IN ( 4953,622,1,1852,4952,76,623,624,10 ) ORDER BY posts.published DESC LIMIT 0, 50; This takes about 32 seconds: SELECT posts.id FROM posts USE INDEX (published) WHERE posts.feed_id IN ( 4953,622,1,1852,4952,76,623,624,10 ) ORDER BY posts.published ASC LIMIT 0, 50; The EXPLAIN is the same for both queries. id select_type table type possible_keys key key_len ref rows Extra 1 SIMPLE posts index NULL published 5 NULL 50 Using where I've tracked it down to "USE INDEX (published)". If I take that out it's the same performance both ways. But the EXPLAIN shows the query is less efficient overall. id select_type table type possible_keys key key_len ref rows Extra 1 SIMPLE posts range feed_id feed_id 4 \N 759 Using where; Using filesort And here's the table. CREATE TABLE `posts` ( `id` int(20) NOT NULL AUTO_INCREMENT, `feed_id` int(11) NOT NULL, `post_url` varchar(255) NOT NULL, `title` varchar(255) NOT NULL, `content` blob, `author` varchar(255) DEFAULT NULL, `published` int(12) DEFAULT NULL, `updated` datetime NOT NULL, `created` datetime NOT NULL, PRIMARY KEY (`id`), UNIQUE KEY `post_url` (`post_url`,`feed_id`), KEY `feed_id` (`feed_id`), KEY `published` (`published`) ) ENGINE=InnoDB AUTO_INCREMENT=196530 DEFAULT CHARSET=latin1; Is there a fix for this? Thanks!
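    One commonly suggested fix for this pattern (an assumption about the cause, not something confirmed in the post): scanning the single-column published index newest-first finds 50 matching feeds quickly, but scanning it oldest-first means wading through huge numbers of old rows that fail the feed_id filter. A composite index lets MySQL satisfy the filter and the sort together, without the USE INDEX hint:

        ALTER TABLE posts ADD INDEX feed_id_published (feed_id, published);

        SELECT posts.id
        FROM posts
        WHERE posts.feed_id IN (4953,622,1,1852,4952,76,623,624,10)
        ORDER BY posts.published ASC
        LIMIT 0, 50;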

    Read the article

  • C# String.Replace with a start/index (Added my (slow) implementation)

    - by Chris T
    I'd like an efficient method that would work something like this. EDIT: Sorry, I didn't put what I'd tried before; I have updated the example now.

        // Method signature; only replaces the first instance, or up to as many as are specified in max
        public int MyReplace(ref string source, string org, string replace, int start, int max)
        {
            int ret = 0;
            int len = replace.Length;
            int olen = org.Length;
            for(int i = 0; i < max; i++)
            {
                // Find the next instance of the search string
                int x = source.IndexOf(org, ret + olen);
                if(x > ret)
                    ret = x;
                else
                    break;
                // Insert the replacement
                source = source.Insert(x, replace);
                // And remove the original
                source = source.Remove(x + len, olen); // removes original string
            }
            return ret;
        }

        string source = "The cat can fly but only if he is the cat in the hat";
        int i = MyReplace(ref source, "cat", "giraffe", 8, 1);
        // Results in the string "The cat can fly but only if he is the giraffe in the hat"
        // i contains the index of the first letter of "giraffe" in the new string

    The only reason I'm asking is that I'd imagine my implementation getting slow with thousands of replaces.
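    A sketch of one way to avoid the repeated Insert/Remove allocations (untested, and the handling of start is an assumption about the intended semantics): build the result once in a StringBuilder, scanning the source with IndexOf from the start position.

        using System;
        using System.Text;

        static class StringReplaceSketch
        {
            // Returns the index of the first replacement in the new string, or -1 if nothing matched.
            public static int MyReplace(ref string source, string org, string replace, int start, int max)
            {
                var sb = new StringBuilder(source.Length);
                int pos = Math.Min(Math.Max(start, 0), source.Length);
                sb.Append(source, 0, pos);                    // copy the untouched prefix
                int firstMatch = -1, replaced = 0;
                while (replaced < max)
                {
                    int hit = source.IndexOf(org, pos, StringComparison.Ordinal);
                    if (hit < 0) break;
                    sb.Append(source, pos, hit - pos);        // copy text between matches
                    if (firstMatch < 0) firstMatch = sb.Length;
                    sb.Append(replace);
                    pos = hit + org.Length;
                    replaced++;
                }
                sb.Append(source, pos, source.Length - pos);  // copy the tail
                source = sb.ToString();
                return firstMatch;
            }
        }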

    Read the article

  • Slow retrieval of data from SQLite takes a long time when using a ContentProvider

    - by Arlyn
    I have an application in Android (running 4.0.3) that stores a lot of data in Table A. Table A resides in an SQLite database, and I am using a ContentProvider as an abstraction layer above the database. "Lots of data" here means almost 80,000 records per month. Table A is structured like this:

        String SQL_CREATE_TABLE = "CREATE TABLE IF NOT EXISTS " + TABLE_A + " ( "
                + COLUMN_ID + " INTEGER PRIMARY KEY NOT NULL"
                + "," + COLUMN_GROUPNO + " INTEGER NOT NULL DEFAULT(0)"
                + "," + COLUMN_TIMESTAMP + " DATETIME UNIQUE NOT NULL"
                + "," + COLUMN_TAG + " TEXT"
                + "," + COLUMN_VALUE + " REAL NOT NULL"
                + "," + COLUMN_DEVICEID + " TEXT NOT NULL"
                + "," + COLUMN_NEW + " NUMERIC NOT NULL DEFAULT(1)"
                + " )";

    Here is the index statement:

        String SQL_CREATE_INDEX_TIMESTAMP = "CREATE INDEX IF NOT EXISTS "
                + TABLE_A + "_" + COLUMN_TIMESTAMP
                + " ON " + TABLE_A + " (" + COLUMN_TIMESTAMP + ") ";

    I have defined the columns as well as the table name as String constants. I am already experiencing a significant slowdown when retrieving data from Table A. The problem is that when I retrieve data from this table, I first put it in an ArrayList and then display it; obviously this may be the wrong way of doing things, and I am trying to find a better approach using a ContentProvider. But that is not the problem that bothers me. The problem is that, for some reason, it takes a lot longer to retrieve data from the other tables, which have at most 12 records each, and I see this delay increase as the number of records in Table A increases. This does not make any sense. I can understand a delay when retrieving data from Table A, but why the delay when retrieving data from the other tables? To clarify, I do not experience this delay if Table A is empty or has fewer than 3000 records. What could be the problem?
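    One mitigation worth sketching (an assumption, since the post does not show the provider code): page the big table instead of loading all rows into an ArrayList, using the SQLiteDatabase.query() overload that accepts a LIMIT clause inside the provider. Names other than the Android APIs are placeholders.

        // Inside the ContentProvider's query() implementation.
        Cursor queryLatest(SQLiteDatabase db, String tableA, String timestampColumn,
                           int pageSize, int offset) {
            return db.query(
                    tableA,                        // table
                    null,                          // all columns
                    null, null,                    // no selection
                    null, null,                    // no group by / having
                    timestampColumn + " DESC",     // newest first
                    offset + "," + pageSize);      // LIMIT offset,count
        }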

    Read the article

  • Asp.Net Login Control very slow initial connection to Non-Trusted AD Domain

    - by Eric Brown - Cal
    The ASP.NET Login control is very slow making the initial connection to AD when authenticating against a different domain than the one the web server is a member of. The problem occurs on the IIS server and when using Visual Studio's built-in web server. It takes about 30 seconds the first time the control is used to connect to another domain. There is no trust relationship between the web server's domain and the other domains (we attempted connecting to several different domains). Subsequent connections execute quickly until the connection times out. Using Sysinternals Process Monitor to troubleshoot, there are two OpenQuery operations right before the delay, to "C:\WINDOWS\assembly\GAC_MSIL\System.DirectoryServices\2.0.0.0_b03f5f7f11d50a3a\Netapi32.dll" with a result of NAME NOT FOUND, and right after the 30-second delay the TCP Send and TCP Receive operations indicate communication begins with the AD server. Things we have tried: impersonating an administrator on the web server in web.config; granting permissions on the CryptoKeys to NetworkService and ASPNET; specifying by IP instead of DNS name; multiple variations of specifying the name and LDAP server with domains and OUs; local hosts entries; looking for blocked ports (SYN_SENT) with netstat -an. Nslookup resolves all the domains and systems involved correctly, and tracert shows the correct routes. Any ideas or hints are greatly appreciated.

    Read the article

  • Should I expect Comet to be this slow?

    - by Chad Johnson
    I have the following in a Rails controller: def poll records = [] start_time = Time.now.to_i while records.length == 0 do records = Something.uncached{Something.find(:all, :conditions => { :some_condition => false})} if records.length > 0 break end sleep 1 if Time.now.to_i - start_time >= 20 break end end responseData = [] records.each do |record| responseData << { 'something' => record.some_value } # Flag message as received. record.some_condition = true record.save end render :text => responseData.to_json end and then I have Javascript performing an AJAX request. The request sits there for 20 seconds or until the controller method finds a record in the database, waiting. That works. function poll() { $.ajax({ url: '/my_controller/poll', type: 'GET', dataType: 'json', cache: false, data: 'time=' + new Date().getTime(), success: function(response) { // show response here }, complete: function() { poll(); }, error: function() { alert('error'); poll(); } }); } When I have 5 - 10 tabs open in my browser, my web application becomes super slow. Is this to be expected? Or is there some obvious improvement(s) I can make?

    Read the article

  • Python: slow read & write for millions of small files

    - by Jami
    I am building a directory tree which has tons of subdirectories and files. The total directory count is somewhere around 256^32 subdirectories, with 256 files at each end, which are only a few bytes long. I did this so I would have fast access to these files (since I'm not searching; I'm just directly accessing them via a known file path). I have a Python script that builds this filesystem and reads & writes those files. The problem is that when I reach more than 1 GB of total file size, the read and write methods become extremely slow. Here's the function I have that reads the contents of a file (the file contains an integer string), adds a certain number to it, then writes it back to the original file:

        def addInFile(path, scoreToAdd):
            num = scoreToAdd
            try:
                shutil.copyfile(path, '/tmp/tmp.txt')
                fp = open('/tmp/tmp.txt', 'r')
                num += int(fp.readlines()[0])
                fp.close()
            except:
                pass
            fp = open('/tmp/tmp.txt', 'w')
            fp.write(str(num))
            fp.close()
            shutil.copyfile('/tmp/tmp.txt', path)

    I previously tried performing Linux console commands, but that was slower. I copy the file to a temporary file first, then access/modify it, then copy it back, because I found this was faster than accessing the file directly. I think the cause of the slowdown is that there are tons of files: performing this function 1000 times sometimes takes a minute now, whereas before (when there were only a few files) 1000 calls completed in less than a second. How do you suggest I fix this?
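    As a sketch (assuming each file really does hold just one small integer), the temporary-file copies can be dropped entirely, since each one adds two extra file creations and deletions per update:

        def add_in_file(path, score_to_add):
            """Read the integer stored at path (if any), add score_to_add, write it back in place."""
            num = score_to_add
            try:
                with open(path, 'r') as fp:
                    num += int(fp.read().strip() or 0)
            except (IOError, ValueError):
                pass  # missing or empty file: treat the stored value as 0
            with open(path, 'w') as fp:
                fp.write(str(num))

    That said, with this many directories the dominant cost is likely filesystem metadata (directory lookups and inode updates), so batching the counters into far fewer files, or into a dbm/sqlite store keyed by the path, is the change most likely to help.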

    Read the article

  • Returning Database Blobs in TurboGears 2.x / FCGI / Lighttpd extremely slow

    - by Tom
    Hey everyone, I am running a TG2 app on lighttpd via flup/FastCGI. We read images (~30 KB each) from BlobFields in a MySQL database and return those images with a custom MIME type via a controller method. Caching these images on disk makes no sense because they change with every request; the only reason we cache them in the DB is that creating them is quite expensive, and the data used to create them is also present in plain text on the website. Now to the problem itself: when returning such an image, things get extremely slow. The code runs totally fine under paster with no visible delay, but as soon as it runs via FastCGI/lighttpd the described phenomenon happens. I profiled the controller method that returns the blob, and the entire method runs in a few milliseconds, but when the return executes, the entire app hangs for roughly 10 seconds. We could not reproduce the same behaviour with PHP on FastCGI; this only seems to happen with TurboGears or Pylons. Here, for your consideration, is the piece of source code concerned:

        @expose(content_type=CUSTOM_CONTENT_TYPE)
        def return_img(self, img_id):
            """ Return a DB-persisted image when requested """
            img = model.Images.by_id(img_id)  # get image from DB
            response.headers['content-type'] = 'image/png'
            return img.data  # this causes the app to hang for 10 seconds

    Read the article

  • jQuery image fader slow in IE6 & 7

    - by Jamie
    Hi guys, I'm using the following jQuery script to rotate through a series of images pulled into an unordered list using PHP: function theRotator() { $('#rotator li').css({opacity: 0.0}); $('#rotator li:first').css({opacity: 1.0}); setInterval('rotate()',5000); }; function rotate() { var current = ($('#rotator li.show') ? $('#rotator li.show') : $('#rotator li:first')); var next = ((current.next().length) ? ((current.next().hasClass('show')) ? $('#rotator li:first') :current.next()) : $('#rotator li:first')); next.css({opacity: 0.0}).addClass('show').animate({opacity: 1.0}, 2000); current.animate({opacity: 0.0}, 2000).removeClass('show'); }; $(document).ready(function() { theRotator(); }); It works brilliantly in FF, Safari, Chrome and even IE8 but IE6 & 7 are really slow. Can anyone make any suggestions on making it more efficient or just work better in IE6 & 7? The script is from here btw. Thanks.

    Read the article

  • Slow performance with MVC2 DisplayFor

    - by PretzelSteelersFan
    I'm iterating through a collection on my view model. The collection contains HealthLifeDentalRates, which implement IBeginsEnds, and I have a display template, BeginsEnds, that displays the date range. This results in very slow performance. If I simplify the date range (see the commented code), it's fine. I think the way I'm defining the lambda is the problem, but I'm not sure how to change it.

        <% foreach (var item in Model.Data.OrderBy(x => x.HealthLifeDentalRateCode)) { %>
            <tr>
                <td><%= Html.Encode(item.FiscalPeriodString()) %></td>
                <td><%= Html.Encode(item.HealthLifeDentalRateCode) %></td>
                <td><%= Html.Encode(item.Rate) %></td>
                <td><%= Html.Encode(item.IsDental.YesNo()) %></td>
                <td>
                    <%= Html.DisplayFor(i => item, "BeginsEnds") %> -
                    <%= Html.Encode(item.Ends.ToDateString()) %> --
                    <%= Html.Encode(String.Format("{0:g}", item.Loaded)) %> -
                    <%= Html.Encode(item.LoadedBy) %>
                </td>
            </tr>
        <% } %>
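    One frequently cited workaround (an assumption about the cause: MVC 2's templated helpers are expensive per call, so invoking DisplayFor inside a large loop multiplies that cost) is to replace the template with a plain HtmlHelper extension. A sketch, where the Begins property name is a guess based on the IBeginsEnds interface mentioned above:

        using System.Web.Mvc;

        public static class BeginsEndsExtensions
        {
            // Renders the same "begins - ends" range as the BeginsEnds display template,
            // without going through the templated-helper machinery.
            public static string BeginsEnds(this HtmlHelper html, IBeginsEnds item)
            {
                return html.Encode(string.Format("{0:d} - {1:d}", item.Begins, item.Ends));
            }
        }

    It would then be used inside the loop as <%= Html.BeginsEnds(item) %>.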

    Read the article

  • MySQL "OR MATCH" hangs (very slow) on multiple tables

    - by Kerry
    After learning how to do MySQL Full-Text search, the recommended solution for multiple tables was OR MATCH and then do the other database call. You can see that in my query below. When I do this, it just gets stuck in a "busy" state, and I can't access the MySQL database. SELECT a.`product_id`, a.`name`, a.`slug`, a.`description`, b.`list_price`, b.`price`, c.`image`, c.`swatch`, e.`name` AS industry, MATCH( a.`name`, a.`sku`, a.`description` ) AGAINST ( '%s' IN BOOLEAN MODE ) AS relevance FROM `products` AS a LEFT JOIN `website_products` AS b ON (a.`product_id` = b.`product_id`) LEFT JOIN ( SELECT `product_id`, `image`, `swatch` FROM `product_images` WHERE `sequence` = 0) AS c ON (a.`product_id` = c.`product_id`) LEFT JOIN `brands` AS d ON (a.`brand_id` = d.`brand_id`) INNER JOIN `industries` AS e ON (a.`industry_id` = e.`industry_id`) WHERE b.`website_id` = %d AND b.`status` = %d AND b.`active` = %d AND MATCH( a.`name`, a.`sku`, a.`description` ) AGAINST ( '%s' IN BOOLEAN MODE ) OR MATCH ( d.`name` ) AGAINST ( '%s' IN BOOLEAN MODE ) GROUP BY a.`product_id` ORDER BY relevance DESC LIMIT 0, 9 Any help would be greatly appreciated. EDIT All the tables involved are MyISAM, utf8_general_ci. Here's the EXPLAIN SELECT statement: id select_type table type possible_keys key key_len ref rows Extra 1 PRIMARY a ALL NULL NULL NULL NULL 16076 Using temporary; Using filesort 1 PRIMARY b ref product_id product_id 4 database.a.product_id 2 1 PRIMARY e eq_ref PRIMARY PRIMARY 4 database.a.industry_id 1 1 PRIMARY <derived2> ALL NULL NULL NULL NULL 23261 1 PRIMARY d eq_ref PRIMARY PRIMARY 4 database.a.brand_id 1 Using where 2 DERIVED product_images ALL NULL NULL NULL NULL 25933 Using where I don't know how to make that look neater -- sorry about that UPDATE it returns the query after 196 seconds (I think correctly). The query without multiple tables takes about .56 seconds (which I know is really slow, we plan on changing to solr or sphinx soon), but 196 seconds?? If we could add a number to the relevance if it was in the brand name ( d.name ), that would also work
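    One thing worth checking (a guess from the query text alone): AND binds more tightly than OR in MySQL, so as written the website_id/status/active filters only apply to the first MATCH branch, and the second MATCH is evaluated against the whole join. Parenthesising the two MATCH clauses keeps the filters applied to both, e.g.:

        WHERE b.`website_id` = %d
          AND b.`status` = %d
          AND b.`active` = %d
          AND ( MATCH( a.`name`, a.`sku`, a.`description` ) AGAINST ( '%s' IN BOOLEAN MODE )
             OR MATCH( d.`name` ) AGAINST ( '%s' IN BOOLEAN MODE ) )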

    Read the article

  • Resque: Slow worker startup and Forking

    - by David John
    I'm currently moving my application from a Linode setup to EC2. Redis is installed on a remote instance, with various worker instances interacting with the queue, and that is all going fantastically. My problem is the amount of time it takes for a worker to be 'instantiated', and slow forking. Starting a worker will usually take between 30 seconds and a minute (from god.rb starting the worker rake task to the worker actively starting work on the queue). I could live with that, but I've not experienced such a wait time on my current Linode production box, so I believe it's one symptom of a bigger problem. The next issue is that jobs that took a second or less in my previous environment now seem to take about 5 to 10 times longer. I'm assuming this must be some sort of issue with my Ubuntu install on EC2? One notable difference is that I'm running REE 1.8.7-2010.01 in my new setup, and REE 1.8.6 on the old Linode boxes. Has anyone else experienced these issues?

    Read the article

  • Mysql slow query: INNER JOIN + ORDER BY causes filesort

    - by Alexander
    Hello! I'm trying to optimize this query: SELECT `posts`.* FROM `posts` INNER JOIN `posts_tags` ON `posts`.id = `posts_tags`.post_id WHERE (((`posts_tags`.tag_id = 1))) ORDER BY posts.created_at DESC; The size of tables is 38k rows, and 31k and mysql uses "filesort" so it gets pretty slow. I tried to use different indexes, no luck. CREATE TABLE `posts` ( `id` int(11) NOT NULL auto_increment, `created_at` datetime default NULL, PRIMARY KEY (`id`), KEY `index_posts_on_created_at` (`created_at`), KEY `for_tags` (`trashed`,`published`,`clan_private`,`created_at`) ) ENGINE=InnoDB AUTO_INCREMENT=44390 DEFAULT CHARSET=utf8 COLLATE=utf8_unicode_ci CREATE TABLE `posts_tags` ( `id` int(11) NOT NULL auto_increment, `post_id` int(11) default NULL, `tag_id` int(11) default NULL, `created_at` datetime default NULL, `updated_at` datetime default NULL, PRIMARY KEY (`id`), KEY `index_posts_tags_on_post_id_and_tag_id` (`post_id`,`tag_id`) ) ENGINE=InnoDB AUTO_INCREMENT=63175 DEFAULT CHARSET=utf8 +----+-------------+------------+--------+--------------------------+--------------------------+---------+---------------------+-------+-----------------------------------------------------------+ | id | select_type | table | type | possible_keys | key | key_len | ref | rows | Extra | +----+-------------+------------+--------+--------------------------+--------------------------+---------+---------------------+-------+-----------------------------------------------------------+ | 1 | SIMPLE | posts_tags | index | index_post_id_and_tag_id | index_post_id_and_tag_id | 10 | NULL | 24159 | Using where; Using index; Using temporary; Using filesort | | 1 | SIMPLE | posts | eq_ref | PRIMARY | PRIMARY | 4 | .posts_tags.post_id | 1 | | +----+-------------+------------+--------+--------------------------+--------------------------+---------+---------------------+-------+-----------------------------------------------------------+ 2 rows in set (0.00 sec) What kind of index I need to define to avoid mysql using filesort? Is it possible when order field is not in where clause?
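    A hedged suggestion (not from the post): the existing posts_tags index starts with post_id, so filtering on tag_id alone cannot seek into it. An index that leads with the filtered column at least makes the join side cheap:

        CREATE INDEX index_posts_tags_on_tag_id_and_post_id
            ON posts_tags (tag_id, post_id);

    The filesort itself is harder to remove, because the ORDER BY column lives in posts while the WHERE filter lives in posts_tags; a common workaround is to denormalise created_at into posts_tags and index (tag_id, created_at), so a single index covers both the filter and the sort.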

    Read the article

  • How slow are bit fields in C++

    - by Shane MacLaughlin
    I have a C++ application that includes a number of structures with manually controlled bit fields, something like:

        #define FLAG1 0x0001
        #define FLAG2 0x0002
        #define FLAG3 0x0004

        class MyClass
        {
            ...
            unsigned Flags;
            int IsFlag1Set()  { return Flags & FLAG1; }
            void SetFlag1()   { Flags |= FLAG1; }
            void ResetFlag1() { Flags &= 0xffffffff ^ FLAG1; }
            ...
        };

    For obvious reasons I'd like to change this to use bit fields, something like:

        class MyClass
        {
            ...
            struct Flags
            {
                unsigned Flag1 : 1;
                unsigned Flag2 : 1;
                unsigned Flag3 : 1;
            };
            ...
        };

    The one concern I have with making this switch is that I've come across a number of references on this site stating how slow bit fields are in C++. My assumption is that they are still faster than the manual code shown above, but is there any hard reference material covering the speed implications of using bit fields on various platforms, specifically 32-bit and 64-bit Windows? The application deals with huge amounts of data in memory and must be both fast and memory efficient, which could well be why it was written this way in the first place.

    Read the article

  • glDrawArrays() slow on iPad?

    - by Nick
    Hey guys, I was wondering how to speed up my iPad application using OpenGLES 2.0. At the moment we have every drawable object draw itself with a call to glDrawArrays(). Blend mode is on, we really need it. Without disabling blendmode, how would we improve performance for this app? For instances, if we now draw 1 texture across the whole screen, the app only gets 15FPS, which is really slow I think? Are we doing something terribly wrong? Our drawing code (for each drawable), is as follows: - (void) draw { GLuint textureAvailable = 0; if(texture != nil){ textureAvailable = 1; } glActiveTexture(GL_TEXTURE0); glBindTexture(GL_TEXTURE_2D, texture.name); glVertexAttribPointer(ATTRIB_VERTEX, 2, GL_FLOAT, 0, 0, vertices); glEnableVertexAttribArray(ATTRIB_VERTEX); glVertexAttribPointer(ATTRIB_COLOR, 4, GL_FLOAT, 1, 0, colorsWithMultipliedAlpha); glEnableVertexAttribArray(ATTRIB_COLOR); glVertexAttribPointer(ATTRIB_TEXTUREMAP, 2, GL_FLOAT, 1, 0, textureMapping); glEnableVertexAttribArray(ATTRIB_TEXTUREMAP); //Note that we are NOT using position.z here because that is only used to determine drawing order int *jnUniforms = JNOpenGLConstants::getInstance().uniforms; glUniform4f(jnUniforms[UNIFORM_TRANSLATE], position.x, position.y, 0.0, 0.0); glUniform4f(jnUniforms[UNIFORM_SCALE], scale.x, scale.y, 1.0, 1.0); glUniform1f(jnUniforms[UNIFORM_ROTATION], rotation); glUniform1i(jnUniforms[UNIFORM_TEXTURE_SAMPLE], 0); glUniform2f(jnUniforms[UNIFORM_TEXTURE_REPEAT], textureRepeat.x, textureRepeat.y); glUniform1i(jnUniforms[UNIFORM_TEXTURE_AVAILABLE], textureAvailable); glDrawArrays(GL_TRIANGLE_STRIP, 0, 4); }

    Read the article

  • Creating many polygons with OpenGL is slow?

    - by user146780
    I want to draw many polygons to the screen, but I'm quickly noticing that it slows down. As a test I did this:

        for(int i = 0; i < 50; ++i)
        {
            glBegin(GL_POLYGON);
            glColor3f(0.0f, 1, 0.0f);
            glVertex2f(500.0 + frameGL.GetCameraX(), 0.0f + frameGL.GetCameraY());
            glColor3f(0.0f, 1.0f, 0.0f);
            glVertex2f(900.0 + frameGL.GetCameraX(), 0.0f + frameGL.GetCameraY());
            glColor3f(0.0f, 0.0f, 0.5);
            glVertex2f(900.0 + frameGL.GetCameraX(), 500.0f + frameGL.GetCameraY() + (150));
            glColor3f(0.0f, 1.0f, 0.0f);
            glVertex2f(500 + frameGL.GetCameraX(), 500.0f + frameGL.GetCameraY());
            glColor3f(1.0f, 1.0f, 0.0f);
            glVertex2f(300 + frameGL.GetCameraX(), 200.0f + frameGL.GetCameraY());
            glEnd();
        }

    This is only 50 polygons and it's already getting slow. I can't upload them directly to the card because my program will allow the user to reshape the vertices. My question is: how can I speed this up? I'm not using depth. I also know it's not my GetCamera() functions, because if I create 500,000 polygons spread apart it's fine; it just has trouble showing them all in the view. If a graphics card can handle 500,000,000 on-screen polygons per second, this should be easy, right? Thanks
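    A sketch of the usual alternative to immediate mode (assuming the polygons are triangulated, and reusing the fixed-function pipeline the snippet above already relies on): fill one client-side vertex array whenever the geometry changes and submit it with a single glDrawArrays call instead of 50 glBegin/glEnd pairs:

        #include <vector>

        struct Vertex { GLfloat x, y, r, g, b; };

        std::vector<Vertex> verts;  // one entry per triangle corner, rebuilt when the user edits shapes
        // ... fill verts ...

        if (!verts.empty())
        {
            glEnableClientState(GL_VERTEX_ARRAY);
            glEnableClientState(GL_COLOR_ARRAY);
            glVertexPointer(2, GL_FLOAT, sizeof(Vertex), &verts[0].x);
            glColorPointer(3, GL_FLOAT, sizeof(Vertex), &verts[0].r);
            glDrawArrays(GL_TRIANGLES, 0, (GLsizei)verts.size());
            glDisableClientState(GL_COLOR_ARRAY);
            glDisableClientState(GL_VERTEX_ARRAY);
        }

    Since the user only reshapes vertices occasionally, the array (or a vertex buffer object built from it) can be rebuilt on edits rather than every frame, and the camera offset can be applied once with glTranslatef instead of being added to every vertex.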

    Read the article

  • NHibernate - Retrieving Lots of Data Becomes Exponentially Slow

    - by nfplee
    Hi, I have an issue where retrieving lots of data in NHibernate (such as when producing a report) makes the page exponentially slower the more data it has to retrieve. I found the following article: http://nhforge.org/blogs/nhibernate/archive/2008/10/30/bulk-data-operations-with-nhibernate-s-stateless-sessions.aspx It explains that bulk data operations in NHibernate are slow because the first-level cache grows too large, and that you should use IStatelessSession instead. The trouble is that I don't wish to tie my application to NHibernate, so I've added a wrapper around ISession. I then use LINQ as my query mechanism, but IStatelessSession does not support LINQ (it may do in NHibernate 3, but that LINQ provider is not stable as it stands at the moment). I then read that you could clear the session after so many iterations to empty the first-level cache. The problem now is that you can't use lazy loading: the LINQ provider doesn't allow you to override the defined mapping (or eagerly fetch the additional data), so whenever I touch data that is lazy-loaded after I have cleared the session, an exception is thrown. I'm completely lost on what to do now. I like the ease of producing reports with LINQ, but the limitations of the built-in LINQ provider in NHibernate seem to be holding me back. I'd really appreciate it if someone could show me an alternative approach. Thanks

    Read the article

  • Tracking down slow managed DLL loading

    - by Alex K
    I am faced with the following issue, and at this point I feel like I'm severely lacking some sort of tool; I just don't know what that tool is, or what exactly it should be doing. Here is the setup: I have a 3rd-party DLL that has to be registered in the GAC. This all works fine on pretty much every machine our software was deployed on before. But now we have two machines, seemingly identical to the ones we know work (they are cloned from the same image and stuffed with the same hardware, so pretty much the only difference is software settings, which I have gone over and over, and they seem fine). Now the problem: the DLL in the GAC takes a very long time to load. At least I believe this is the issue; what I can say definitively is that instantiating a single class from that DLL is the slow part. Once it is loaded, things fly as they always have. But while on known-good machines the DLL loads so fast that a timestamp in the log doesn't even change, on these two machines it takes over a minute to load. Knowns: I have no access to the source, so I can't debug through the DLL. Our app is the only one that uses it (so there shouldn't be simultaneous-access issues). There is only one version of this DLL in existence, so it shouldn't be a matter of version conflict. The GAC reference is being used (if I uninstall the DLL from the GAC, an exception is thrown about the missing GAC reference). Could someone with greater skill in debug-fu suggest what I can do to track down the root cause of this issue?
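    One cause worth ruling out (a guess that fits the "identical hardware, different settings" description): if the assembly is Authenticode-signed, the CLR tries to verify the publisher certificate online when generating publisher evidence, and on machines that cannot reach the certificate revocation servers this stalls for a long time on first load. It can be switched off in the application's config file:

        <configuration>
          <runtime>
            <generatePublisherEvidence enabled="false"/>
          </runtime>
        </configuration>

    The Fusion log viewer (fuslogvw.exe) is also useful here, since it records the binder's probing attempts for each assembly load.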

    Read the article

  • Msysgit bash is horrendously slow in Windows 7

    - by Kevin L.
    I love git and use it on OS X pretty much constantly at home. At work, we use svn on Windows, but want to migrate to git as soon as the tools have fully matured (not just TortoiseGit, but also something akin the really nice Visual Studio integration provided by VisualSVN). But I digress... I recently installed msysgit on my Windows 7 machine, and when using the included version of bash, it is horrendously slow. And not just the git operations; clear takes about five seconds. AAAAH! Has anyone experienced a similar issue? Edit: It appears that msysgit is not playing nicely with UAC and might just be a tiny design oversight resulting from developing on XP or running Vista or 7 with UAC disabled; starting Git Bash using Run as administrator results in the lightning speed I see with OS X (or on 7 after starting Git Bash w/o a network connection - see @Gauthier answer). Edit 2: AH HA! See my answer.

    Read the article

  • Batch insert mode with hibernate and oracle: seems to be dropping back to slow mode silently

    - by Chris
    I'm trying to get batch inserts working with Hibernate into Oracle, according to what I've read here: http://docs.jboss.org/hibernate/core/3.3/reference/en/html/batch.html, but with my benchmarking it doesn't seem any faster than before. Can anyone suggest a way to prove whether Hibernate is using batch mode or not? I hear there are numerous reasons why it may silently drop back to normal mode (e.g. associations and generated ids), so is there some way to find out why it has gone non-batch? My hibernate.cfg.xml contains this line, which I believe is all I need to enable batch mode:

        <property name="jdbc.batch_size">50</property>

    My insert code looks like this:

        List<LogEntry> entries = ...; // a list of 100 LogEntry data classes
        Session sess = sessionFactory.getCurrentSession();
        for (LogEntry e : entries) {
            sess.save(e);
        }
        sess.flush();
        sess.clear();

    My LogEntry class has no associations; the only interesting field is the id:

        @Entity
        @Table(name = "log_entries")
        public class LogEntry {
            @Id
            @GeneratedValue
            public Long id;
            // ...other fields - strings and ints...
        }

    However, since it is Oracle, I believe @GeneratedValue will use the sequence generator, and I believe that only the 'identity' generator stops bulk inserts. So if anyone can explain why it isn't running in batch mode, or how I can find out for sure whether it is, or find out why Hibernate is silently dropping back to slow mode, I'd be most grateful. Thanks
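    For reference, a sketch of the pattern the Hibernate batch documentation itself recommends (flushing and clearing every batch_size entities inside one transaction); the only assumption here is that sessionFactory and entries come from the surrounding code:

        Session session = sessionFactory.openSession();
        Transaction tx = session.beginTransaction();
        int batchSize = 50; // keep in sync with hibernate.jdbc.batch_size
        for (int i = 0; i < entries.size(); i++) {
            session.save(entries.get(i));
            if ((i + 1) % batchSize == 0) {
                session.flush();  // push the pending INSERTs as one JDBC batch
                session.clear();  // detach them so the first-level cache stays small
            }
        }
        tx.commit();
        session.close();

    To see whether batching actually happens, watching the JDBC layer is more reliable than timing: a logging/proxy JDBC driver (or the database's own trace) will show whether the INSERTs arrive as addBatch/executeBatch calls or as individual executeUpdate calls.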

    Read the article

  • VSTO Outlook - Contact iteration is SO SLOW!

    - by DustinDavis
    I'm working on an Outlook add-in, and I have a dialog window that allows the user to select contacts. I haven't been able to find a way to use the Outlook contact window, so I am looping through the contact folder's Items collection and doing my work that way. The problem is that I have to handle up to 70,000 contacts. I tried multi-threading and many other things, but it is just so slow: it takes 15 seconds to load 30,000 contacts. I can load and bind 500,000 POCO objects in milliseconds, but when I need to get the contact items from Outlook it just takes forever. The problem seems to be that when you actually read a property from a contact item, it has to be fetched from the store. Is there a contact cache I can pull from? I only need the display name and email address, nothing else; an ID would be nice, but I don't need it. Can someone please tell me a better way of getting contacts from Outlook, or at least tell me how to open the Outlook contact selection window? I was able to find code to open it, but it won't work for me because I'm showing a modal dialog, and it won't open while a modal dialog is up.
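    A sketch of one commonly recommended approach for bulk reads (the property names passed to Columns.Add are assumptions to verify against the Outlook object model): Folder.GetTable returns a lightweight rowset, so only the requested columns are fetched instead of materialising a ContactItem per entry:

        // using Outlook = Microsoft.Office.Interop.Outlook;
        // contactsFolder is the Contacts MAPIFolder obtained elsewhere in the add-in.
        Outlook.Table table = contactsFolder.GetTable(
            "[MessageClass] = 'IPM.Contact'", Outlook.OlTableContents.olUserItems);
        table.Columns.RemoveAll();
        table.Columns.Add("FullName");
        table.Columns.Add("Email1Address");
        while (!table.EndOfTable)
        {
            Outlook.Row row = table.GetNextRow();
            string name  = row["FullName"] as string;
            string email = row["Email1Address"] as string;
            // add (name, email) to the dialog's list here
        }

    For the built-in picker, Application.Session.GetSelectNamesDialog() (Outlook 2007 and later) shows the standard address-book selection dialog, though it too cannot be displayed while another modal dialog is up.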

    Read the article

  • Word frequency tally script is too slow

    - by Dave Jarvis
    Background: I created a script to count the frequency of words in a plain text file. The script performs the following steps:

        1. Count the frequency of words from a corpus.
        2. Retain each word in the corpus found in a dictionary.
        3. Create a comma-separated file of the frequencies.

    The script is at: http://pastebin.com/VAZdeKXs

    Problem: The following lines continually cycle through the dictionary to match words:

        for i in $(awk '{if( $2 ) print $2}' frequency.txt); do
          grep -m 1 ^$i\$ dictionary.txt >> corpus-lexicon.txt;
        done

    It works, but it is slow, because it scans the dictionary for every single word found in order to remove any that are not in the dictionary. (The -m 1 parameter stops the scan when the match is found.)

    Question: How would you optimize the script so that the dictionary is not scanned from start to finish for every single word? The majority of the words will not be in the dictionary. Thank you!
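    As a sketch of one way to do this (same files as above, replacing the per-word grep loop): load the dictionary into an awk associative array once, then test each word with a constant-time hash lookup:

        awk 'NR == FNR { dict[$1]; next }              # first file: store dictionary words as keys
             ($2 != "") && ($2 in dict) { print $2 }   # second file: keep words found in the dictionary
            ' dictionary.txt frequency.txt > corpus-lexicon.txt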

    Read the article
