Search Results

Search found 14643 results on 586 pages for 'performance comparison'.


  • Optimizing hierarchical transform

    - by Geotarget
    I'm transforming objects in 3D space by transforming each vector with the object's 4x4 transform matrix. To achieve hierarchical transforms, I transform the child by its own matrix and then by the parent matrix. This becomes costly because objects deeper in the display tree have to be transformed by the matrices of all their parents. In summary, this is what's happening:

    Root -- transform its verts by the Root matrix
    Parent -- transform its verts by the Parent, then Root matrices
    Child -- transform its verts by the Child, then Parent, then Root matrices

    Is there a faster way to transform vertices to achieve hierarchical transforms? What if I first concatenated each transform matrix with its parents' matrices and then transformed the verts by the single resulting matrix? Would that work, and wouldn't it be faster?

    Root -- transform its verts by the Root matrix
    Parent -- concat the Parent and Root matrices, transform its verts by the concatenated matrix
    Child -- concat the Child, Parent and Root matrices, transform its verts by the concatenated matrix
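
    Concatenating works because matrix multiplication is associative: transforming by Child, then Parent, then Root is the same as transforming once by the product of the three. A minimal sketch of that approach using System.Numerics.Matrix4x4 (the Node shape and field names are illustrative, not from the question):

        using System.Collections.Generic;
        using System.Numerics;

        class Node
        {
            public Matrix4x4 Local = Matrix4x4.Identity;   // this node's own transform
            public List<Node> Children = new List<Node>();
            public Vector3[] Verts = new Vector3[0];
            public Vector3[] WorldVerts = new Vector3[0];

            // Walk the tree once, concatenating matrices on the way down.
            // Each vertex is then transformed by one combined matrix
            // instead of once per ancestor.
            public void UpdateWorld(Matrix4x4 parentWorld)
            {
                // System.Numerics uses row vectors, so local-to-world is Local * parentWorld.
                Matrix4x4 world = Local * parentWorld;

                WorldVerts = new Vector3[Verts.Length];
                for (int i = 0; i < Verts.Length; i++)
                    WorldVerts[i] = Vector3.Transform(Verts[i], world);

                foreach (Node child in Children)
                    child.UpdateWorld(world);   // children reuse the combined matrix
            }
        }

    Calling root.UpdateWorld(Matrix4x4.Identity) then costs one matrix multiply per node plus one transform per vertex, instead of one transform per vertex per ancestor.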

    Read the article

  • Wait Statistics in Microsoft SQL Server

    - by KKline
    When it comes to troubleshooting in relational databases, there's no better place to start than wait statistics. In a nutshell, a wait statistic is an internal counter that tells you how long the database spent waiting for a particular resource, activity, or process. Since wait statistics are categorized by type, one look will quickly tell you the kind of problem that needs your attention, assuming you know the meaning of Microsoft's lingo for each wait type....(read more)

    Read the article

  • Beginners guide to developing optimization software

    - by Florenc
    I am a novice in "serious" programming, i.e. applications that deal with real-life problems and software projects that go beyond school assignments. My interests include optimization, operations research and algorithms, and lately I discovered how much I like software design/development/engineering. I have already developed some simple desktop applications for "famous" problems like TSP using heuristic approaches, a VRP solver (in progress), and so on. While developing this kind of software I actually used basic concepts taught at school, such as object-oriented analysis and design. But I found these courses rather elementary and quite boring (for my expectations). So I decided to go a little further and start developing "real" software (and this is where I realized how important and interesting software engineering/design is). Now, here's my issue: I cannot find a "study guide" for developing software of this kind. Currently, there are numerous resources out there (books, websites, tutorials) on designing and developing complex IS, web applications and smartphone apps, but I can't find a book entitled, for example, "Optimization Software Development". Definitely, someone could claim that "design patterns apply to software in general", but that's not my point. My point is that I could simply use my imagination for "simple" implementations, but what happens when my imagination can not go further? In other words, I'm looking for a guide/path to bridge the gap between: Mathematics - Algorithm Design - Software Engineering - Optimization - Software Development

    Read the article

  • What are the Crappy Code Games - What are the challenges?

    - by simonsabin
    This is part of a series on the Crappy Code Games: The background | Who can enter? | What are the challenges? | What are the prizes? | Why should I attend? | Tips on how to win

    What are the challenges? There are 4 games that you can enter, each one testing a different aspect of SQL Server:

    The High Jump: Generate the highest I/O per second
    The 100m dash: Cumulative highest number of I/Os in 60 seconds
    The SSIS-athon: Load a one billion row fact table in the shortest time
    The Marathon: Generate the highest...(read more)

    Read the article

  • Updated sp_indexinfo

    - by TiborKaraszi
    It was time to give sp_indexinfo some love. The procedure is meant to be the "ultimate" index information procedure, providing lots of information about all indexes in a database or all indexes for a certain table. Here is what I did in this update: Changed the second query that retrieves missing index information so it generates the index name (based on schema name, table name and column name - limited to 128 characters). Re-arranged and shortened column names to make output more compact and more...(read more)

    Read the article

  • Slow VirtualBox guest

    - by ecoologic
    I run an Ubuntu 12.04 guest on an Ubuntu 12.04 host with VirtualBox, and the guest is much, much slower than the host (Alt+Tab takes 4-5 seconds). I had a look around and found contradicting opinions on VirtualBox vs VMware (free), so I thought I'd keep the former. Both systems are updated, I installed the guest additions, and I evenly split memory and video memory (64 MB) between guest and host. I am running a Toshiba M200 laptop with 4 GB RAM and shared video memory. The host BIOS does not include a configuration option for machine virtualization. I have 2 CPUs and I can't give them both to the VM. Is there anything I overlooked that could solve my problem? Feel free to ask for more info, and thank you for any help.

    EDIT: Idling with the system monitor open, the (single) guest CPU never gets below 55% and can rise to 80-90% just from moving the mouse around; opening Firefox pushes the guest to 100%, while the host shows both CPUs evenly working at around 60%. My CPU is an Intel® Core™2 Duo CPU T5450 @ 1.66GHz × 2. If this is not a configuration problem, does it mean my machine is too weak for virtualization?

    Read the article

  • Why are slower programming languages considered worse than faster ones?

    - by Emanuil
    Here's how I see it. There's machine code, and it's all that computers need in order to run something. Computers don't care about programming languages. It doesn't matter to them whether the machine code comes from Perl, Python or PHP. Programming languages exist to serve programmers. Some programming languages run slower than others, but that's not because there is something wrong with them. It's often because they do more of the things that programmers would otherwise have to do themselves, and by doing those things they do better what they are supposed to do: serve programmers. So why are slower programming languages considered worse than faster ones?

    Read the article

  • Unity environment way too slow in Ubuntu 13.10

    - by Santiago
    Unity and its apps open too slowly whenever I launch one; it takes a while for them to appear completely. Everything works properly once the window is already open. The biggest problem is the Dash: it's SO SLOW when I'm looking for an app, even though I have removed some lenses. What should I do, or what can I do? These issues only occur with Ubuntu 13.04 and 13.10, whereas 12.04 works AMAZINGLY, but there I have issues when updating a package or installing a new one, which is why I don't opt for it. Specifications: RAM: 2GB, Processor: Intel® Atom™ CPU N2600 @ 1.60GHz × 4, Graphics card: Gallium 0.4 on llvmpipe (LLVM 3.3, 128 bits)

    Read the article

  • Does connection pooling work fine to execute 60 DB queries to load a page?

    - by willem
    We use Linq2Sql in an ASP.NET application. Unfortunately the eager-loading in Linq2Sql isn't as powerful as in Entity Framework, so a lot of the data has to be lazy-loaded as needed. Taking connection pooling into account, is it OK for a web page to execute 60 queries to load a page? Executing a single big query probably won't be much better, since those 60 queries will all use pooled connections rather than opening a new connection each time (which I realize is slow). Any thoughts?
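
    That said, Linq2Sql can eager-load one level of relationships via DataLoadOptions, which can cut the query count substantially. A minimal sketch, assuming a hypothetical MyDataContext with Order/Lines/Customer entities (none of these names are from the question):

        using System.Data.Linq;
        using System.Linq;

        static class OrderLoader
        {
            public static void LoadEagerly()
            {
                using (var context = new MyDataContext())   // MyDataContext is assumed
                {
                    var options = new DataLoadOptions();
                    options.LoadWith<Order>(o => o.Lines);      // fetch lines with each order
                    options.LoadWith<Order>(o => o.Customer);   // and the customer as well
                    context.LoadOptions = options;              // must be set before the first query

                    var orders = context.Orders
                                        .Where(o => o.CustomerId == 12345)
                                        .ToList();
                    // Accessing order.Lines or order.Customer here no longer
                    // triggers one lazy-load query per access.
                }
            }
        }

    This only spans one level of the object graph, but it often turns 60 small round trips into a handful of joined queries.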

    Read the article

  • Material usage, one per model or per object?

    - by WSkid
    Is it better (for memory, developer time, and space) to use a single model that is unwrapped and uses a single material, or to break a model down into appropriate pieces, each with its own smaller texture/material? Or does what is acceptable depend on the target platform, i.e. PC vs. tablet? An example: say you have a typical house with a tiled roof. You could model it, make sure everything is attached, and unwrap the walls/roof so that in your UV template the walls and roof sit in one texture file, side by side in, say, a 512x512 file. Or you could model the roof and walls as separate objects, unwrap them individually, and have two UV templates; you could then have a 256x256 file for each one.

    Read the article

  • How one could use a live editor

    - by Sathvik
    I was thinking about a live editing environment where code / a source file is synchronized so that changes made by one user are carried across to all others editing the file. Something like Google Wave, but for code. Could this kind of environment be better for the code, as changes are shared instantly (with revision control, of course)? Has anyone tried (or had a need for) using a shared environment for code?

    Read the article

  • How to speed up rsync/tar of large Maildir

    - by psusi
    I have a very large Maildir I am copying to a new machine (over 100BaseT) with rsync. The progress is slow. VERY SLOW. Like 1 MB/s slow. I think this is because it is a lot of small files being read in an order that is essentially random with respect to where the blocks are stored on disk, causing a massive seek storm. I get similar results when trying to tar the directory. Is there a way to get rsync/tar to read in disk block order, or otherwise overcome this problem?

    Read the article

  • How best to merge/sort/page through tons of JSON arrays?

    - by Joshiatto
    Here's the scenario: say you have millions of JSON documents stored as text files. Each JSON document is an array of "activity" objects, each of which contains a "created_datetime" attribute. What is the best way to merge/sort/filter/page through these activities via a web UI? For example, say we want to take a few thousand of the documents, merge them into a gigantic array, sort the array by the "created_datetime" attribute descending and then page through it 10 activities at a time. Also keep in mind that roughly 25% of these JSON documents are updated every day, and updates have to make it into the view within 5 minutes. My first thought is to parse all of the documents into an RDBMS table, and then it would just be a simple query such as "select top 10 name, created_datetime from Activity where user_id=12345 order by created_datetime desc". Some have suggested I use NoSQL techniques such as Hadoop or map/reduce instead. How exactly would this work? For more background, see: Why is NoSQL better for this scenario?
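
    The brute-force version of that merge/sort/page step is a few lines of LINQ; a sketch, assuming an Activity class with a CreatedDatetime property (the names are illustrative):

        using System;
        using System.Collections.Generic;
        using System.Linq;

        class Activity
        {
            public string Name { get; set; }
            public DateTime CreatedDatetime { get; set; }
        }

        static class ActivityPager
        {
            // Merge many per-document arrays, sort newest-first, return one page.
            public static List<Activity> GetPage(
                IEnumerable<IEnumerable<Activity>> documents, int page, int pageSize)
            {
                return documents
                    .SelectMany(doc => doc)                       // merge
                    .OrderByDescending(a => a.CreatedDatetime)    // sort descending
                    .Skip(page * pageSize)                        // skip earlier pages
                    .Take(pageSize)                               // take one page
                    .ToList();
            }
        }

    The catch is that this materializes and sorts everything on every request, which is exactly the work an indexed store (the RDBMS approach above, or a precomputed view in a NoSQL system) avoids.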

    Read the article

  • What are the Crappy Code Games - What are the prizes?

    - by simonsabin
    This is part of a series on the Crappy Code Games: The background | Who can enter? | What are the challenges? | What are the prizes? | Why should I attend? | Tips on how to win

    What are the prizes? There are loads of them, at both the heats and the final. At the heats the top three coders at each event will take home Gold, Silver and Bronze medals, along with some great prizes such as Steve Wozniak-signed iPods, developer laptops, Win-Mo phones, Xbox 360 S consoles, t-shirts and more. And then in the final...(read more)

    Read the article

  • What are the crappy code games - the background?

    - by simonsabin
    This is part of a series on the Crappy Code Games: The background | Who can enter? | What are the challenges? | What are the prizes? | Why should I attend? | Tips on how to win

    The Background: Fusion IO came to us a while back wanting to run a competition to highlight how bad code can really impact your system. We've all seen it; I saw an example yesterday where someone had implemented a cursor over a whole table just to update a few rows, something like this:

        declare cUpdateCursor cursor for ...(read more)

    Read the article

  • Fast lighting with multiple lights

    - by codymanix
    How can I implement fast lighting with multiple lights? I don't want to restrain the player: he can place an unlimited number of possibly overlapping (point) lights into the level. The problem is that shaders containing the dynamic loops needed to calculate the lighting tend to be very slow. My idea is to compile the shader n times, where n is the number of lights; if n is known at compile time, the loops can be unrolled automatically. Is it possible to generate n versions of the same shader, each with a different number of lights? At runtime I could then decide which shader to use for each part of the level.
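
    Generating the variants up front is one way to do it: prepend a light-count define to the shader source, compile once per count, and cache the result. A sketch in C# (the CompiledShader type and Compile call stand in for whatever your engine's shader API actually is):

        using System.Collections.Generic;

        class CompiledShader
        {
            public static CompiledShader Compile(string source)
            {
                // Placeholder: call your engine's / graphics API's shader compiler here.
                return new CompiledShader();
            }
        }

        class ShaderVariants
        {
            readonly string baseSource;   // shader whose light loop runs NUM_LIGHTS times
            readonly Dictionary<int, CompiledShader> cache =
                new Dictionary<int, CompiledShader>();

            public ShaderVariants(string baseSource) { this.baseSource = baseSource; }

            public CompiledShader ForLightCount(int n)
            {
                CompiledShader shader;
                if (!cache.TryGetValue(n, out shader))
                {
                    // With NUM_LIGHTS fixed at compile time, the compiler can unroll the loop.
                    string source = "#define NUM_LIGHTS " + n + "\n" + baseSource;
                    shader = CompiledShader.Compile(source);
                    cache[n] = shader;
                }
                return shader;
            }
        }

    In practice you would cap n (say, the 4-8 nearest lights per object) so the number of variants stays small.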

    Read the article

  • Input of mouseclick not always registered in XNA Update method

    - by LordrAider
    I have a problem: not all of my mouse click inputs seem to be registered. The update logic checks a 10x10 two-dimensional array; it's the logic for a jewel-matching game. When I switch a jewel, I can't click on another jewel for about half a second. I tested it with a click-counter variable, and it doesn't hit the debugger when I click a second time right after the jewel switch, only if I wait about half a second longer. Could it be that the update logic is so heavy that my click happens while it is executing, and so isn't registered? What am I not seeing here :)? Or doing wrong? It is my first game. My update method looks like this:

        public void UpdateBoard()
        {
            MouseState currentMouseState = Mouse.GetState();

            // Only react on the frame where the button goes down (edge detection).
            if (currentMouseState.LeftButton == ButtonState.Pressed &&
                prevMouseState.LeftButton != ButtonState.Pressed)
            {
                UpdatingLogic = true;
                // this.CheckDropJewels(currentMouseState);
                // this.CheckMatches(3);
                // this.RemoveMatches();
                this.CheckForSwitch(currentMouseState);
                this.MarkJewel(currentMouseState);
                UpdatingLogic = false;

                // reIndexMissingJewels = true;
                reIndexSwitchedJewels = true;
            }

            prevMouseState = currentMouseState;
            this.ReIndex();
            this.UpdateJewels();
        }

    Read the article

  • SQL TuneIn Zagreb 2014 – Session material

    - by Hugo Kornelis
    I spent the last few days in Zagreb, Croatia, at the third edition of the SQL TuneIn conference, and I had a very good time there. Nice company, good sessions, and awesome audiences. I presented my "Understanding Execution Plans" precon to a small but interested audience on Monday. Participants have received a download link for the slide deck. On Tuesday I had a larger crowd for my session on cardinality estimation. The slide deck and demo code used for that presentation will be available through...(read more)

    Read the article

  • Need a Quick Sure Method to Produce a Formatted Explain Plan? This will help!

    - by user702295
    Please use the following on the production machine to get a formatted explain plan and SQL trace, using the SLOW SQL (e.g. 'T_COMB_LIST.COMB_ID = 216') or any other value that takes longer:

        -- Open a new session in SQL*Plus
        -- Make sure you are using an updated PLAN_TABLE.
        -- This can be done by dropping it and recreating it by running:
        -- SQL> @?/rdbms/admin/utlxplan.sql
        set lines 1000
        set pages 1000
        spool xplan_1.txt
        EXPLAIN PLAN FOR
        <<<< Replace this line with exactly the same query you used above. Force a hard parse by modifying the case of a character. >>>>
        @?/rdbms/admin/utlxplp
        spool off
        EXIT

        -- Open a second session in SQL*Plus
        ALTER SESSION SET max_dump_file_size = unlimited;
        ALTER SESSION SET tracefile_identifier = '10046';
        ALTER SESSION SET statistics_level = ALL;
        ALTER SESSION SET events '10046 trace name context forever, level 12';
        <<<< Replace this line with exactly the same query you used above. Force a hard parse by modifying the case of a character. >>>>
        select 'verify cursor closed' from dual;
        ALTER SYSTEM SET EVENTS '10046 trace name context off';
        EXIT

    Make sure the spooled file is formatted properly and that the 10046 trace has the relevant explain plan in it. Please upload both files (the 10046 trace is generated in udump). Need instructions to find udump?

        sqlplus "/ as sysdba"
        show parameters dump_dest

    This will show you the bdump, cdump and udump locations.

    Read the article

  • Using a CDN for CMS software (multiple sites)

    - by SmokeyPHP
    I'm currently researching ideas for the media management side of a CMS I'm writing. I was looking at having images served from a CDN, which is fine for a single site, but I want all sites that run the CMS to make use of a CDN (which will most likely be custom-developed rather than a third-party service like S3). My main question is: is a multi-site CDN a good idea? I can't think of a downside, but have probably missed something. Obviously the sites won't share the same folder; I envisage the requests to be css.cdnsite.com/example.com/style.css or something along those lines. Having multiple sites in the same place will obviously make it easier for us to manage, as well as being cheaper, but then I wonder if it'll be worth it... Long story short: how should the CMS handle user-uploaded media across separate installations?

    Just keep a local copy of all assets and serve them from the same site, like in days of yore?
    Keep a local copy, force the site to use www. and have CDN subdomains per site?
    Or use a single separate CDN for all sites?

    Apologies for the length of this question; I'm not sure if this should be multiple questions or not, as all parts are kind of related and could affect each other.
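
    For what it's worth, the per-site namespacing in the third option is a one-liner to resolve; a sketch (the CDN host name and URL layout here are made up):

        using System;

        static class Cdn
        {
            // Maps a site's asset to its namespace on one shared CDN host, e.g.
            // ("example.com", "style.css") -> https://cdn.cmshost.net/example.com/style.css
            public static Uri AssetUrl(string siteDomain, string assetPath)
            {
                return new Uri("https://cdn.cmshost.net/" + siteDomain + "/"
                               + assetPath.TrimStart('/'));
            }
        }

    Keeping the site domain in the path lets one cache cluster serve every installation while assets can never collide across sites.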

    Read the article

  • OpenGL VBOs are slower than glDrawArrays

    - by Arelius
    So, this seems odd to me. I upload a large buffer of vertices, then every frame I call glBindBuffer and then the appropriate gl*Pointer functions with offsets into the buffer, then I use glDrawArrays to draw all of my triangles. I'm only drawing about 100K triangles, yet I'm getting about 15 FPS. Here's where it gets weird: if I change it to not call glBindBuffer, change the gl*Pointer calls to use actual pointers into the array I have in system memory, and then call glDrawArrays the same way, my framerate jumps up to about 50 FPS. Any idea what weird thing I could be doing that would cause this? Did I maybe forget to call glEnable(GL_ALLOW_VBOS_TO_RUN_FAST) or something?
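
    For comparison, a typical static-geometry VBO path looks like the sketch below, written against OpenTK's C# bindings (the enum spellings vary between binding versions, so treat it as an outline). A missing or wrong usage hint at upload time is one of the classic causes of a VBO path running slower than client-side arrays:

        using System;
        using OpenTK.Graphics.OpenGL;

        static class VboExample
        {
            // One-time setup: upload the vertex data with a static usage hint.
            public static int CreateVbo(float[] vertices)
            {
                int vbo = GL.GenBuffer();
                GL.BindBuffer(BufferTarget.ArrayBuffer, vbo);
                GL.BufferData(BufferTarget.ArrayBuffer,
                              (IntPtr)(vertices.Length * sizeof(float)),
                              vertices,
                              BufferUsageHint.StaticDraw);   // promise the driver it won't change
                return vbo;
            }

            // Per frame: bind, point into the buffer at offset 0, draw.
            public static void Draw(int vbo, int vertexCount)
            {
                GL.BindBuffer(BufferTarget.ArrayBuffer, vbo);
                GL.EnableClientState(ArrayCap.VertexArray);
                GL.VertexPointer(3, VertexPointerType.Float, 3 * sizeof(float), IntPtr.Zero);
                GL.DrawArrays(PrimitiveType.Triangles, 0, vertexCount);
            }
        }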

    Read the article

  • How to test high load on a website? [closed]

    - by rFactor
    Possible Duplicate: How do you load test your application?

    I am nearing the point of finalizing a website, and it will soon be released. We have bought some traffic and advertisement packages, and the nature of the site makes it heavier than typical static-like websites. I am looking to hear about good ways to test how well the site performs under heavy load. I already know ab. Got any other tips to spare?
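
    Beyond ab, even a naive concurrent load generator gives you a first number; a minimal C# sketch (the URL and request count are placeholders, and dedicated tools like ab or JMeter give far better reporting):

        using System;
        using System.Diagnostics;
        using System.Linq;
        using System.Net.Http;
        using System.Threading.Tasks;

        class NaiveLoadTest
        {
            static async Task Main()
            {
                const string url = "http://localhost/";   // placeholder target
                const int requests = 1000;

                using (var client = new HttpClient())
                {
                    var sw = Stopwatch.StartNew();
                    // Fire all requests concurrently and wait for them to finish.
                    var tasks = Enumerable.Range(0, requests)
                                          .Select(_ => client.GetAsync(url));
                    var responses = await Task.WhenAll(tasks);
                    sw.Stop();

                    Console.WriteLine("{0} requests in {1} ms, {2} succeeded",
                        requests, sw.ElapsedMilliseconds,
                        responses.Count(r => r.IsSuccessStatusCode));
                }
            }
        }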

    Read the article

  • Structuring cascading properties - parent only or parent + entire child graph?

    - by SB2055
    I have a Folder entity that can be Moderated by users. Folders can contain other folders, so I may have a structure like this:

    Folder 1
        Folder 2
            Folder 3
        Folder 4

    I have to decide how to implement Moderation for this entity. I've come up with two options:

    Option 1: When the user is given moderation privileges to Folder 1, define a moderator relationship between Folder 1 and User 1. No other relationships are added to the db. To determine whether the user can moderate Folder 3, I check whether User 1 is the moderator of any parent folder. This seems to alleviate some of the complexity of handling updates / moved entities / additions under Folder 1 after the relationship has been defined, and reverting the relationship means I only have to deal with one entity.

    Option 2: When the user is given moderation privileges to Folder 1, define a new relationship between User 1 and Folder 1 and all child entities down to the grandest of grandchildren when the relationship is created; if it's ever removed, iterate back down the graph to remove the relationships. If I add something under Folder 2 after this relationship has been made, I just copy all Moderators into the new entity. But when I need to show only the top-level folders that a user is moderating, I need to query all folders that have a parent folder the user does not moderate, as opposed to Option 1, where I just query the items the user is moderating.

    I think it comes down to determining whether users will query for parent items more than they'll query child items... if so, then Option 1 seems better. But I'm not sure. Is either approach better than the other? Why? Or is there another approach that's better than both? I'm using Entity Framework in case it matters.
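
    Option 1's lookup is just a walk up the parent chain; a small sketch of that check, assuming in-memory Folder objects with a Parent reference (the shapes are illustrative, not an Entity Framework mapping):

        using System.Collections.Generic;

        class Folder
        {
            public Folder Parent;   // null for a root folder
            public HashSet<int> ModeratorUserIds = new HashSet<int>();

            // Option 1: a user moderates a folder if the relationship exists
            // on the folder itself or on any ancestor.
            public bool IsModeratedBy(int userId)
            {
                for (Folder f = this; f != null; f = f.Parent)
                    if (f.ModeratorUserIds.Contains(userId))
                        return true;
                return false;
            }
        }

    Against a database the same walk becomes either a recursive query up the ancestor chain or, for a shallow tree, a handful of point lookups, which is the cost Option 2 trades away in exchange for fan-out writes.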

    Read the article

  • Rules of Holes #3 -A Better Shovel is NOT the Answer!

    - by ArnieRowland
    You stopped digging. You looked around and saw that you were still in the Hole. You needed to get out. AHA! Problem solved, you thought. You'll just get a better and more efficient shovel! Sorry, I have to tell you that switching to a more efficient shovel is unlikely to help you get out of the Hole. Yes, your resumed digging may be faster, more directed, and even well planned and articulated. But you will still be in the Hole, and digging. And that's just not the solution. A new process (scrum,...(read more)

    Read the article
