Search Results

Search found 14643 results on 586 pages for 'performance comparison'.

Page 9 of 586

  • Slow Firefox Javascript Canvas Performance?

    - by jujumbura
    As a follow-up from a previous post, I have been trying to track down some slowdown I am having when drawing a scene using JavaScript and the canvas element. I decided to narrow down my focus to a REALLY barebones animation that only clears the canvas and draws a single image, once per frame. This of course runs silky smooth in Chrome, but it still stutters in Firefox. I added a simple FPS calculator, and indeed it appears that my page is typically getting an FPS in the 50s when running Firefox. This doesn't seem right to me; I must be doing something wrong here. Can anybody see anything I might be doing that is causing this drop in FPS?

        <!DOCTYPE HTML>
        <html>
        <head>
        </head>
        <body bgcolor=silver>
        <canvas id="myCanvas" width="600" height="400"></canvas>
        <img id="myHexagon" src="Images/Hexagon.png" style="display: none;">
        <script>
        window.requestAnimFrame = (function(callback) {
            return window.requestAnimationFrame ||
                   window.webkitRequestAnimationFrame ||
                   window.mozRequestAnimationFrame ||
                   window.oRequestAnimationFrame ||
                   window.msRequestAnimationFrame ||
                   function(callback) {
                       window.setTimeout(callback, 1000 / 60);
                   };
        })();

        var animX = 0;
        var frameCounter = 0;
        var fps = 0;
        var time = new Date();

        function animate() {
            var canvas = document.getElementById("myCanvas");
            var context = canvas.getContext("2d");
            context.clearRect(0, 0, canvas.width, canvas.height);

            animX += 1;
            if (animX == canvas.width) {
                animX = 0;
            }
            var image = document.getElementById("myHexagon");
            context.drawImage(image, animX, 128);

            context.lineWidth = 1;
            context.fillStyle = "#000000";
            context.lineStyle = "#ffffff";
            context.font = "18px sans-serif";
            context.fillText("fps: " + fps, 20, 20);

            ++frameCounter;
            var currentTime = new Date();
            var elapsedTimeMS = currentTime - time;
            if (elapsedTimeMS >= 1000) {
                fps = frameCounter;
                frameCounter = 0;
                time = currentTime;
            }

            // request new frame
            requestAnimFrame(function() {
                animate();
            });
        }

        window.onload = function() {
            animate();
        };
        </script>
        </body>
        </html>

    Read the article

  • Website performance tips?

    - by Michael Schinis
    I'm having some trouble with the loading of my website. Here's a link to the website: link. It sometimes loads fast, but when you refresh it, most of the time it will just keep trying to load images for a minute or so, and none of the JavaScript will execute. I have followed most of the tips given by Yahoo, except caching, which I couldn't get working properly. Does anyone know how to do proper caching of image and JavaScript files using .htaccess? Most of the code I found online doesn't work. Any advice whatsoever is extremely helpful. Thanks
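
    A hedged example of the kind of .htaccess caching rules the Yahoo guidelines point at (this assumes Apache with mod_expires and mod_deflate enabled; the MIME types and lifetimes are illustrative, not taken from the original post):

        # Far-future expiry headers for static assets (requires mod_expires)
        <IfModule mod_expires.c>
            ExpiresActive On
            ExpiresByType image/png              "access plus 1 month"
            ExpiresByType image/jpeg             "access plus 1 month"
            ExpiresByType text/css               "access plus 1 week"
            ExpiresByType application/javascript "access plus 1 week"
        </IfModule>

        # Compress text responses (requires mod_deflate)
        <IfModule mod_deflate.c>
            AddOutputFilterByType DEFLATE text/html text/css application/javascript
        </IfModule>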

    Read the article

  • Google bots are severely affecting site performance

    - by Lynn
    I have an aggregator site on a Linux server that pulls in feeds from a universe of about 2,000 blogs. It runs WordPress 3.4.2, and I have a cron job, staggered to run five times an hour on another server, that pulls in the stories and then publishes them to the front page of this site. This was done so as not to put too much pressure on one server. However, the Google bots, which visit a few times every hour, bring the server to its knees in the mornings and evenings when there is an increase in traffic on the site. The bots have something like 30,000 links to follow at this point. How do I throttle the bots to simply grab the new stories off the front page and stop there? EDIT - Details of my server configuration: The server that handles all the publishing is an unmanaged instance on AWS. It mounts the NFS server and connects to the RDS to update content, etc. You get to this publishing instance via a plugin that detects the wp-admin link and then redirects you there. The front-end app server also mounts the NFS and requests data from the RDS. It is the only one that has WP Super Cache on it. The OS is Ubuntu on the app server and the NFS server runs CentOS. The front end is Nginx and the publishing server is Apache.
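
    A hedged sketch of how the crawl scope can be narrowed with robots.txt (the disallowed paths are illustrative, not from the original post). Googlebot honors Disallow rules but ignores Crawl-delay; its crawl rate is adjusted in Google Webmaster Tools instead, while Bing and Yandex do honor the delay:

        User-agent: *
        # Keep crawlers out of the URL spaces that balloon the link count (example paths)
        Disallow: /feed/
        Disallow: /tag/
        Disallow: /page/
        Crawl-delay: 10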

    Read the article

  • DDD Model Design and Repository Persistence Performance Considerations

    - by agarhy
    So I have been reading about DDD for some time and trying to figure out the best approach to several issues. I tend to agree that I should design my model in a persistence-agnostic manner, and that repositories should load and persist my models in valid states. But are these approaches realistic in practice? I mean, it's normal for a model to hold a reference to a collection of another type. Persisting that model should mean persisting the entire collection. Fine. But do I really need to load the entire collection every time I load the model? Probably not. So I can have specialized repositories: some that load maybe a subset of the object graph via DTOs, and others that load the entire object graph. But when do I use which? If I have DTOs, what's stopping client code from calling them directly and completely bypassing the model? I can have mappers and factories to create my models from DTOs, maybe? But depending on the design of my models that might not always work, or it might not allow my models to be created in a valid state. What's the correct approach here?
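
    One common compromise, sketched below in Java (my own illustration, not from the original question; Order, OrderLine and OrderSummary are hypothetical types), is to keep a single repository whose contract states which shape of the aggregate you get back, so client code never reaches for DTOs or loaders directly:

        import java.util.List;

        // Hypothetical aggregate root with a potentially large child collection.
        class Order {
            long id;
            List<OrderLine> lines;   // loading this eagerly is the expensive part
        }
        class OrderLine { long productId; int quantity; }

        // Read-only summary for listing/reporting screens; deliberately not a writable model.
        class OrderSummary { long id; int lineCount; }

        // The repository makes explicit which parts of the graph are loaded.
        interface OrderRepository {
            Order findById(long id);               // full aggregate, invariants enforceable
            OrderSummary findSummaryById(long id); // partial load, display only
            void save(Order order);
        }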

    Read the article

  • Ubuntu with KDE and kernel 3.6.2 - performance issues

    - by Pelda
    I have recently switched to Ubuntu and installed KDE on it. I like the software included in Ubuntu but the interface from Kubuntu. My problem is that after installing kernel 3.6.2 (the default in Ubuntu 12.04 is 3.2, I think), the whole KDE interface is laggy and I have to render using XRender because OpenGL doesn't work. So please tell me - I didn't find it anywhere - does KDE have problems with the new kernel? Should I downgrade back to the stock 3.2 kernel? Thank you for your answers and your time. Pelda

    Read the article

  • Optimizing hash lookup & memory performance in Go

    - by Moishe
    As an exercise, I'm implementing HashLife in Go. In brief, HashLife works by memoizing nodes in a quadtree so that once a given node's value in the future has been calculated, it can just be looked up instead of being re-calculated. So, e.g., if you have a node at the 8x8 level, you remember it by its four children (each at the 4x4 level). The next time you see an 8x8 node, when you calculate the next generation you first check whether you've already seen a node with those same four children. This is extended up through all levels of the quadtree, which gives you some pretty amazing optimizations if, e.g., you're 10 levels above the leaves. Unsurprisingly, it looks like the performance crux of this is the lookup of nodes by child-node values. Currently I have a hashmap of {&upper_left_node, &upper_right_node, &lower_left_node, &lower_right_node} -> node, so my lookup function is this:

        func FindNode(ul, ur, ll, lr *Node) *Node {
            var node *Node
            var ok bool
            nc := NodeChildren{ul, ur, ll, lr}
            node, ok = NodeMap[nc]
            if ok {
                return node
            }
            node = &Node{ul, ur, ll, lr, 0, ul.Level + 1, nil}
            NodeMap[nc] = node
            return node
        }

    What I'm trying to figure out is whether the "nc := NodeChildren..." line causes a memory allocation each time the function is called. If it does, can I/should I move the declaration to the global scope and just modify the values each time this function is called? Or is there a more efficient way to do this? Any advice/feedback would be welcome (even coding style nits; this is literally the first thing I've written in Go, so I'd love any feedback).
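
    One way to answer the allocation question empirically (a sketch that assumes the Node, NodeChildren and NodeMap definitions above; the package and benchmark names are made up) is to let Go's testing package report allocations per lookup:

        package life

        import "testing"

        // Run with: go test -bench=FindNode -benchmem
        // Assumes NodeMap has already been initialized, e.g. in the package's init().
        func BenchmarkFindNode(b *testing.B) {
            leaf := &Node{}                  // placeholder children for the lookup
            FindNode(leaf, leaf, leaf, leaf) // warm the map so later calls hit the cache
            b.ReportAllocs()
            b.ResetTimer()
            for i := 0; i < b.N; i++ {
                FindNode(leaf, leaf, leaf, leaf)
            }
        }

    If allocs/op comes back as 0 for the cache-hit path, the composite-literal key is staying on the stack and there is nothing to gain from moving it to global scope; the benchmark output settles it for your particular compiler version.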

    Read the article

  • Performance cost of running Ubuntu from external hard drive

    - by dandan78
    A friend just complained to me about Ubuntu being slow. Although I've noticed a certain lack of snappiness with Linux vs. Windows in the past, I really can't say I've had much to grumble about with the recent distributions of Ubuntu. That said, his objections seem much worse than the ones I used to have, and I know that his current setup is significantly more powerful than my laptop. And then it turned out he is running Ubuntu off an external HDD hooked up via USB 2.0. The HDD enclosure is USB 3.0, but apparently he can't manage to get it to boot over USB 3.0, so he switched to one of the USB 2.0 ports and that works, albeit not very well. Now I would expect USB to add some overhead to communication between the computer and the HDD; SATA is, after all, designed to get the maximum out of a hard drive, whereas USB is, well, universal. What are your experiences with booting off external HDDs? Edit: Does anybody know just how much of a slowdown can be expected?

    Read the article

  • AWR Performance Report and Read by Other Session Waits

    - by user702295
    For the questions regarding "read by other session" and its relation to "db file sequential/scattered read", the logic is like this: when a "db file sequential/scattered read" is done, the blocks are either already in the cache or on disk. Since any operation on blocks is done in the cache, and since the issue is "read by other session", I will address the case where the blocks are on disk. Process A reads the needed block from disk into the cache. During that time, if process B (and C, and others) needs the same block, it will wait on "read by other session". A and B can be threads of the same process running in parallel, or unrelated processes - for example, two processes doing a full table scan on mdp_matrix. Solutions for this can be lowering the number of processes competing for the same blocks or increasing PCTFREE. If it is a full table scan, maybe an index is missing; that can result in fewer blocks being read into the cache, and so on.
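
    A hedged example of how to see which segments are behind the wait (this assumes an Oracle release with Active Session History available and licensed for the Diagnostics Pack; the one-hour window is illustrative):

        -- Which objects are sessions waiting on for "read by other session"?
        SELECT o.owner, o.object_name, COUNT(*) AS samples
        FROM   v$active_session_history ash
        JOIN   dba_objects o ON o.object_id = ash.current_obj#
        WHERE  ash.event = 'read by other session'
        AND    ash.sample_time > SYSDATE - 1/24
        GROUP  BY o.owner, o.object_name
        ORDER  BY samples DESC;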

    Read the article

  • Pain of the Week/Expert's Perspective: Performance Tuning for Backups and Restores

    - by KKline
    First off - the Pain of the Week webcast series has been renamed. It's now known as The Expert's Perspective . Please join us for future webcasts and, if you're interested in speaking, drop me a note to see if we can get you on the roster! The bigger your databases get, the longer backups take. That doesn't really seem like a huge problem — until disaster strikes and you need to restore your databases as fast as possible. Join my buddy Brent Ozar ( blog | twitter ), a Microsoft Certified Master of...(read more)

    Read the article

  • Intel programming "performance" books? [closed]

    - by user997112
    I vaguely remember seeing that Intel has produced a few good books, especially with regard to low-latency programming, but I cannot remember the titles. Could people suggest the titles of Intel books (or ones relating to Intel products)? Examples include books on:
    - the Intel compiler
    - the Intel assembler
    - any low-level programming in Intel assembly
    - the Intel CPU architecture
    - the Intel Threading Building Blocks library

    Read the article

  • EBS Seed Data Comparison Reports Now Available

    - by Steven Chan (Oracle Development)
    Earlier this year we released a reporting tool that reports on the differences in E-Business Suite database objects between one release and another. That's a very useful reference, but EBS defaults are delivered as seed data within the database objects themselves. What about the differences in this seed data between one release and another? I'm pleased to announce the availability of a new tool that provides comparison reports of E-Business Suite seed data between EBS 11.5.10.2, 12.0.4, 12.0.6, 12.1.1, and 12.1.3. This new tool complements the information in the data model comparison tool. You can download the new seed data comparison tool here: EBS ATG Seed Data Comparison Report (Note 1327399.1)

    The EBS ATG Seed Data Comparison Report reports on the changes between different EBS releases based upon the seed data changes delivered by product data loader files (.ldt extension), driven by EBS ATG loader control (.lct extension) files. You can use this new tool to report on the differences in the following types of seed data:
    - Concurrent Program definitions
    - Descriptive Flexfield entity definitions
    - Application Object Library profile option definitions
    - Application Object Library (AOL) key flexfield, function, lookup, and value set definitions
    - Application Object Library (AOL) menu and responsibility definitions
    - Application Object Library messages
    - Application Object Library request set definitions
    - Application Object Library printer style definitions
    - Report Manager / WebADI component and integrator entity definitions
    - Business Intelligence Publisher (BI Publisher) entity definitions
    - BIS Request Set Generator entity definitions
    - ... and more

    Your feedback is welcome
    This new tool was produced by our hard-working EBS Release Management team, and they're actively seeking your feedback. Please feel free to share your experiences with it by posting a comment here. You can also request enhancements to this tool via the distribution list address included in Note 1327399.1.

    Related Articles
    - Oracle E-Business Suite Release 12.1.3 Now Available
    - New Whitepaper: Upgrading EBS 11i Forms + OA Framework Personalizations to EBS 12
    - EBS 12.0 Minimum Requirements for Extended Support Finalized
    - Five Key Resources for Upgrading to E-Business Suite Release 12
    - E-Business Suite Release 12.1.1 Consolidated Upgrade Patch 1 Now Available
    - New Whitepaper: Planning Your E-Business Suite Upgrade from Release 11i to 12.1

    Read the article

  • Displaying performance data per engine subsystem

    - by liortal
    Our game (Android-based) traces how long it takes to do the world logic updates and how long it takes to render a frame to the device screen. These traces are collected every frame and displayed at a constant interval (currently every second). I've seen games where on-screen data for various engine subsystems is displayed, with the time they consume shown either as text or as horizontal colored bars. I am wondering how to implement such a feature?
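
    A minimal sketch of one approach (my own illustration; the class name and numbers are made up, and it assumes a plain android.graphics.Canvas render path): time each subsystem with System.nanoTime(), refresh the displayed values once per second, and draw them as text plus a bar scaled against the frame budget:

        import android.graphics.Canvas;
        import android.graphics.Color;
        import android.graphics.Paint;

        // Collects per-subsystem frame times and draws them as a simple overlay.
        class PerfOverlay {
            private final Paint paint = new Paint();
            private long logicNanos, renderNanos;   // most recent measurements
            private long shownLogic, shownRender;   // values currently on screen
            private long lastRefresh = System.nanoTime();

            void recordLogic(long nanos)  { logicNanos = nanos; }
            void recordRender(long nanos) { renderNanos = nanos; }

            void draw(Canvas canvas) {
                long now = System.nanoTime();
                if (now - lastRefresh > 1000000000L) {   // update the display once per second
                    shownLogic = logicNanos;
                    shownRender = renderNanos;
                    lastRefresh = now;
                }
                paint.setColor(Color.WHITE);
                paint.setTextSize(24f);
                canvas.drawText("logic:  " + shownLogic / 1000000.0 + " ms", 10, 30, paint);
                canvas.drawText("render: " + shownRender / 1000000.0 + " ms", 10, 60, paint);
                // The same data as a horizontal bar, scaled to a 16 ms frame budget.
                float barWidth = (float) (shownRender / 1000000.0 / 16.0 * 200);
                canvas.drawRect(10, 70, 10 + barWidth, 80, paint);
            }
        }

    The game loop would call recordLogic/recordRender with System.nanoTime() deltas around its update and draw phases, then call draw(canvas) last so the overlay sits on top of the frame.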

    Read the article

  • Use Your Android Phone to Comparison Shop: 4 Scanner Apps Reviewed

    - by Jason Fitzpatrick
    A smart phone in your pocket is great for on-the-go news, web browsing, and—of course—mobile gaming. It’s also fantastic for comparison shopping. Today we take a look at four Android scanners and price comparison engines. It’s quite a neat time to be a consumer. Historically, if you wanted to do serious price comparisons you had to haul yourself around town, gather flyers from the newspapers, and otherwise invest way too much energy into potential savings that might not even break into double digits. Now you can comparison shop with an ease that borders on magic: simply pull out your smart phone and scan the barcode or type in the name of the item you wish to compare. Today we’re taking a look at some of the more popular and powerful barcode scanners and price comparison engines available for the Android platform. Before we get to that, a word on our methodology. To test the barcode scanners and the resulting search results, we wandered around and rounded up some relatively random items from around the How-To Geek offices. This included a children’s graphic novel, a Wii game, a board game, a pack of razors, a box of tea, and a bottle of nail polish. It’s a decent spread of consumer items that covers several genres. For each application we scanned all the items, looked for the best price at the time, and noted any other relevant benefits of using one scanner over another. It’s worth noting that our primary focus was on speed and ease of use. You may find that certain scanners have specific features that best suit your needs. What we focused on was how fast you could scan, compare prices, and purchase items if you desired. Since all the scanners are free-as-in-beer, feel free to download them all and run your own tests to confirm our conclusions.

    Read the article

  • Practical RAID Performance?

    - by wag2639
    I've always thought the following to be a general rule of thumb for RAID:
    - RAID 0: best performance for READ and WRITE from striping, greatest risk
    - RAID 1: redundant, decent for READ (I believe it can read different parts of a file from different hard drives), not the best for WRITE
    - RAID 0+1 (01): combines the redundancy of RAID 1 with the performance of RAID 0
    - RAID 1+0 (10): slightly better version of RAID 0+1
    - RAID 5: good READ performance, bad WRITE performance, redundant
    Is this assumption correct? (And how do they compare to a JBOD setup for read/write IO performance?) Are certain practical RAID setups better for different applications: gaming, video editing, databases (Access or SQL)? I was thinking about hard disk drives, but does this apply to solid state drives as well?
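
    For a rough sense of why RAID 5 writes lag (a textbook approximation, not from the original question): a small random write on RAID 5 costs about four disk I/Os (read old data, read old parity, write new data, write new parity), while on RAID 10 it costs two (one write per mirror side). With disks that each sustain roughly 150 random IOPS, an 8-disk array works out to about 8 x 150 / 4 = 300 small-write IOPS on RAID 5 versus 8 x 150 / 2 = 600 on RAID 10, while both can read at close to the full 8 x 150 = 1,200 IOPS.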

    Read the article

  • OpenVZ vs Xen, how much difference in performance?

    - by Aleksandr Levchuk
    There is a Xen vs. KVM performance question on Server Fault. What will the performance difference be if the choice is between Xen and OpenVZ? How is it best to measure? Some may say "you're comparing apples and oranges," but I have to choose one of the two and it needs to be a wise choice. Performance is most important to us. We may switch to Xen from OpenVZ because Xen is more ubiquitous, but only if the performance difference is not significant. In January 2011 I'm thinking of doing a head-to-head performance comparison - here is my project proposal for our bioinformatics facility director.
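
    A hedged sketch of the kind of head-to-head run that was common with the sysbench 0.4-era syntax (the parameters are illustrative; the point is to run the identical guest configuration and workload on both hypervisors):

        # CPU: prime-number benchmark
        sysbench --test=cpu --cpu-max-prime=20000 run

        # Disk: random read/write over a 4 GB working set
        sysbench --test=fileio --file-total-size=4G prepare
        sysbench --test=fileio --file-total-size=4G --file-test-mode=rndrw run
        sysbench --test=fileio --file-total-size=4G cleanup

        # OLTP: MySQL transactions (needs a test schema the tool can populate)
        sysbench --test=oltp --mysql-db=test --mysql-user=root prepare
        sysbench --test=oltp --mysql-db=test --mysql-user=root run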

    Read the article

  • SQL SERVER – Transcript of Learning SQL Server Performance: Indexing Basics – Interview of Vinod Kumar by Pinal Dave

    - by pinaldave
    Recently I wrote a blog post about Learning SQL Server Performance: Indexing Basics, and I received lots of requests to share some insight into the course. Here is a 200-second interview of Vinod Kumar that I took right after completing the course. We have a few free codes to watch the course; please leave your comment at http://facebook.com/SQLAuth and we will send the code to a few of the first ones. Many people said they would like to read the transcript of the video, so I have generated it here.

    Pinal: Vinod, we recently released this course, SQL Server Indexing. It is about performance tuning. So tell me - how do indexes help performance?

    Vinod: I think what happens in the industry when it comes to performance is that developers and DBAs look at indexes first. So that's the first step for any performance tuning exercise; indexing is one of the most critical aspects and it is important to learn it the right way.

    Pinal: Correct. So what you mean to say is that if you know indexing you can pretty much tune any server and query.

    Vinod: So I might contradict my own statement now. Indexing is usually a stepping stone, but it does not lead you to the end. It's good to start with indexing, though, and there are lots of nuances to indexing that you need to understand, like how SQL uses indexing and how performance can improve because of the strategies that you have made.

    Pinal: But now I'm confused. First you said indexes are good, and then you said that indexes can degrade your performance. So what is this course about? I mean, how does this course really make an impact?

    Vinod: OK - so from the course perspective, what we are trying to do is give you a capsule which gives you a good start. Every journey needs a beginning; you need that first step. This course is that first step in understanding. This is the most basic, fundamental course that we have tried to attack. This is the fundamentals of indexing, some of the key things that you must know about indexing. Some of the basics of indexing are lesser known, and so I think this course is geared towards each and every one of you out there who wants to understand a little bit more about indexing.

    Pinal: So what I understand is that if I enroll in this course I will have a minimum understanding about indexing when dealing with performance tuning. Right?

    Vinod: Exactly. In this course we have tried to give you a nice summary. We are talking about clustered indexes, nonclustered indexes, too many indexes, too few indexes, over-indexing, under-indexing, duplicate indexes, columnstore indexes with SQL Server 2012. There's lots to learn.

    Pinal: You can see the URL [http://bit.ly/sql-index] of the course on the screen. Go ahead, attend, and let us know what you think about it. Thank you.

    Vinod: Thank you.

    Reference: Pinal Dave (http://blog.sqlauthority.com) Filed under: PostADay, SQL, SQL Authority, SQL Index, SQL Performance, SQL Query, SQL Server, SQL Tips and Tricks, SQLServer, T SQL, Technology, Video

    Read the article

  • Basic C++ Speed (initialization vs adding) and comparison speed

    - by seld
    I was curious if anyone knows which of the following executes faster (I know this seems like a weird question, but I'm trying to shave as much time and resources as possible off my program):

        int i;
        i += 1;

    or

        int i;
        i = 1;

    I was also curious about which comparison is faster:

        // given some integer i
        // X is some constant
        i < X + 1

    or

        i <= X
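
    One low-effort way to settle micro-questions like this (an aside, not from the original post; the file names are illustrative) is to compare the assembly the compiler emits at the optimization level you actually ship with - identical output means identical speed:

        g++ -O2 -S -o increment.s version_a.cpp
        g++ -O2 -S -o assign.s    version_b.cpp
        diff increment.s assign.s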

    Read the article

  • Java Integer: what is faster comparison or subtraction?

    - by Vladimir
    I've found that the java.lang.Integer implementation of the compareTo method looks as follows:

        public int compareTo(Integer anotherInteger) {
            int thisVal = this.value;
            int anotherVal = anotherInteger.value;
            return (thisVal < anotherVal ? -1 : (thisVal == anotherVal ? 0 : 1));
        }

    The question is why use comparison instead of subtraction:

        return thisVal - anotherVal;
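
    A hedged illustration of the usual reason (my own example, not part of the original question): int subtraction can overflow and flip the sign, so it reports the wrong ordering for values that are far apart, while a three-way comparison cannot:

        public class SubtractionOverflow {
            public static void main(String[] args) {
                int small = -2000000000;
                int large =  2000000000;

                // -4,000,000,000 does not fit in an int, so the subtraction wraps
                // around to a positive number and wrongly claims small > large.
                System.out.println(small - large);                  // 294967296
                System.out.println(Integer.compare(small, large));  // -1 (correct)
            }
        }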

    Read the article

  • After writing SQL statements in MySQL, how to measure the speed / performance of them?

    - by Jian Lin
    I saw something from an "execution plan" article: 10 rows fetched in 0.0003s (0.7344s). How come there are two durations shown? And what if I don't have a large data set yet? For example, if I have only 20, 50, or even just 100 records, can I really measure how two different SQL statements compare in terms of speed in a real-life situation? In other words, does there need to be at least hundreds of thousands of records, or even a million, to accurately compare the performance of two different SQL statements?
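
    A hedged example of getting timings from the MySQL server itself rather than from a client tool's readout (SET profiling and SHOW PROFILES exist in the MySQL versions of that era; the table and queries are made up):

        SET profiling = 1;

        SELECT * FROM orders WHERE customer_id = 42;      -- first candidate statement
        SELECT * FROM orders WHERE customer_id IN (42);   -- second candidate statement

        SHOW PROFILES;              -- one row per statement with its total duration
        SHOW PROFILE FOR QUERY 1;   -- stage-by-stage breakdown of the first statement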

    Read the article

  • DB comparison tools

    - by Dimi Toulakis
    Does anyone have experience with database comparison tools? Which one would you recommend? We are currently using "SQLCompare" from Redgate, but I am curious to know if there are better tools on the market. The main requirement is that they should be able to compare a scripts folder against a live database. Thanks, Dimi

    Read the article

  • Any good SQL Anywhere database schema comparison tools?

    - by Lurker Indeed
    Are there any good database schema comparison tools out there that support Sybase SQL Anywhere version 10? I've seen a litany of them for SQL Server, a few for MySQL and Oracle, but nothing that supports SQL Anywhere correctly. I tried using DB Solo, but it turned all my non-unique indexes into unique ones, and I didn't see any options to change that.

    Read the article

  • SQL Comparison Tools

    - by David Ward
    Which SQL comparison tool would you recommend for SQL Server database comparisons? I've been looking at SQL Compare and SQL Delta. I'd like the ability to compare and sync both database schema and data.

    Read the article

  • facial comparison software

    - by chris beardmore
    I have images of my children - we all do - but I have found a picture of a toddler on the internet that bears a great resemblance to one of my own sons. Is there any software available that I can perform a side-by-side comparison with, or maybe software that can give an X% probability match?

    Read the article

  • How does a website latency simulator work

    - by nighthawk457
    Sites like WebPagetest allow users to enter a website URL and a test location, to run a speed test on the site from multiple locations using real browsers. Can anyone give me a basic idea of how sites like this work? You also have plugins like the Aptimize latency simulator or the Charles web debugging proxy app, which simulate the delay while accessing a site from different locations. I am assuming that since these are plugins, they function in a different way. How do these plugins work?
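
    As a rough sketch of the proxy-style simulators (a toy illustration of the idea, not how Charles or Aptimize is actually implemented): a local proxy accepts the browser's connection, sleeps for the latency it wants to simulate, and only then shuttles bytes to and from the real server, so every request appears that much slower:

        import java.io.InputStream;
        import java.io.OutputStream;
        import java.net.ServerSocket;
        import java.net.Socket;

        // Minimal one-connection-at-a-time delay proxy: browser -> localhost:8080 -> example.com:80
        public class DelayProxy {
            static final int DELAY_MS = 200;   // simulated extra latency per request

            public static void main(String[] args) throws Exception {
                try (ServerSocket listener = new ServerSocket(8080)) {
                    while (true) {
                        try (Socket client = listener.accept();
                             Socket upstream = new Socket("example.com", 80)) {
                            Thread.sleep(DELAY_MS);                                      // inject the delay
                            pump(client.getInputStream(), upstream.getOutputStream());   // forward the request
                            pump(upstream.getInputStream(), client.getOutputStream());   // forward the response
                        }
                    }
                }
            }

            // Crude copy loop: forwards bytes until the sender pauses (good enough for a demo,
            // not for keep-alive connections or large responses).
            static void pump(InputStream in, OutputStream out) throws Exception {
                byte[] buf = new byte[8192];
                int n;
                while ((n = in.read(buf)) > 0) {
                    out.write(buf, 0, n);
                    out.flush();
                    if (in.available() == 0) break;
                }
            }
        }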

    Read the article
