Search Results

Search found 22065 results on 883 pages for 'performance testing'.

Page 129/883 | < Previous Page | 125 126 127 128 129 130 131 132 133 134 135 136  | Next Page >

  • Material usage, one per model or per object?

    - by WSkid
    Is it better (memory, developer time, space) to use a single model that is unwrapped and uses a single material, or to break a model down into appropriate bits, each with its own smaller texture/material? Or does it depend on the target platform as to what is acceptable, i.e. PC vs tablet? An example: say you have a typical house with a tiled roof. One option is to model it, make sure everything is attached, and unwrap the walls/roof so that in your UV template the walls and roof sit side by side in one texture file, say a 512x512 file. The other option is to model the roof and walls as separate objects, unwrap them individually, and have two UV templates; you could then have a 256x256 file for each one.
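
    Just to put numbers on the memory side of the comparison (simple arithmetic, assuming an uncompressed 32-bit RGBA format and no mipmaps, which is an assumption on my part): one 512x512 texture is 512 x 512 x 4 = 1,048,576 bytes (~1 MB); two 256x256 textures are 2 x 256 x 256 x 4 = 524,288 bytes (~0.5 MB), but that is only half as many texels, so keeping the same per-surface resolution would actually mean two 512x512 textures at ~2 MB total.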

    Read the article

  • How one could use a live editor

    - by Sathvik
    I was thinking about a live editing environment where code / a source file is synchronized, so that changes made by one user are carried across to all others editing the file. Something like Google Wave, but for code. Could this kind of environment be better for the code, since changes are shared instantly (with revision control, of course)? Has anyone tried (or had a need for) using a shared environment for code?

    Read the article

  • How to Convince management that a specific product training is important to QA?

    - by Rahul
    I am leading a QA team of 10 people. We have received a request for training on an ETL data warehousing tool for QA, Support and Development. However, management does not feel it is important for QA to be involved in such a training, as it is the support and development teams who will be developing or fixing issues in the product. How do I convince management that this training is very important from the QA perspective, since this is the team that will find the bugs and thereby reduce the maintenance cost?

    Read the article

  • Automated tests for differencing algorithm

    - by Matthew Rodatus
    We are designing a differencing algorithm (based on Longest Common Subsequence) that compares a source text and a modified copy to extract the new content (i.e. content that is only in the modified copy). I'm currently compiling a library of test case data. We need to be able to run automated tests that verify the test cases, but we don't want to verify strict accuracy. Given the heuristic nature of our algorithm, we need our test pass/failures to be fuzzy. We want to specify a threshold of overlap between the desired result and the actual result (i.e. the content that is extracted). I have a few sketches in my mind as to how to solve this, but has anyone done this before? Does anyone have guidance or ideas about how to do this effectively?
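
    To illustrate the kind of fuzzy pass/fail I have in mind, here is a rough sketch in C# (the OverlapRatio helper, the line-by-line comparison and the 0.9 threshold are just placeholders, not our actual harness):

        using System.Collections.Generic;
        using System.Linq;

        static class FuzzyAssert
        {
            // Fraction of the desired lines that also appear in the actual extracted content.
            public static double OverlapRatio(IReadOnlyCollection<string> desired, IEnumerable<string> actual)
            {
                if (desired.Count == 0) return 1.0;
                var actualSet = new HashSet<string>(actual);
                int hits = desired.Count(line => actualSet.Contains(line));
                return (double)hits / desired.Count;
            }
        }

        // In a test case: pass if at least 90% of the desired content was extracted.
        // Assert.IsTrue(FuzzyAssert.OverlapRatio(desiredLines, extractedLines) >= 0.9);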

    Read the article

  • Quality Assurance tools discrepancies

    - by Roudak
    It is a bit ironic: yesterday I answered a question related to this topic that was marked as good, and today I'm the one who asks. These are my thoughts and a question. Let's agree on the terms: QA is a set of activities that defines and implements processes during software development; the common tool is the process audit. However, my colleague at work holds the opinion that reviews and inspections are also quality assurance tools, although most sources classify them as quality control. I would say both sides are partially right: during inspections we evaluate a physical product (clearly QC), but we see it as a white box, so we can check its compliance with the established processes (QA). Do you think this is the reason for the dichotomy among the authors? I know it is more of an academic question, but it deserves an answer :)

    Read the article

  • slow virtualbox guest

    - by ecoologic
    I run an Ubuntu 12.04 guest on an Ubuntu 12.04 host with VirtualBox, and the guest is much, much slower than the host (ALT+TAB takes 4-5 seconds). I had a look around and found contradicting opinions on VirtualBox vs VMware (free), so I thought I'd keep the former. Both systems are updated, I installed the Guest Additions on the guest, and I evenly split memory and video memory (64 MB) between guest and host. I am running a Toshiba M200 laptop with 4 GB RAM and shared video memory. The host BIOS does not include a configuration option for hardware virtualization. I have 2 CPUs and I can't give them both to the VM. Is there anything I overlooked that could solve my problem? Feel free to ask for more info, and thank you for any help. EDIT: Idling with the monitor open, the (single) guest CPU never gets below 55% and can rise to 80-90% just from moving the mouse around; opening Firefox will push the guest CPU to 100%, while the host shows both CPUs evenly working at around 60%. My CPU is an Intel® Core™2 Duo T5450 @ 1.66GHz × 2. If this is not a configuration problem, does it mean my machine is too weak for virtualization?

    Read the article

  • Unity environment way too slow in Ubuntu 13.10

    - by Santiago
    Unity and its apps open too slowly; whenever I open one, it takes a while for it to appear completely. Everything works properly once the window is already open. The biggest problem is the Dash: it's SO SLOW when I'm looking for an app, although I have removed some lenses. What should I do, or what can I do? These issues only occur with Ubuntu 13.04 and 13.10, whereas 12.04 works AMAZINGLY, but there I have issues when updating a package or installing a new one, which is why I don't opt for that one. Specifications: RAM: 2 GB; Processor: Intel® Atom™ CPU N2600 @ 1.60GHz × 4; Graphics card: Gallium 0.4 on llvmpipe (LLVM 3.3, 128 bits).

    Read the article

  • Need Help Hiring a Perfectionist Programmer [closed]

    - by Bryan Hadaway
    I understand my question may be in a gray area, but I'm not able to use the Meta to ask whether this question is appropriate or not, so I'll simply have to risk it. My project is complete in the sense that it's a fully functional, ready-to-go 1.0 version. However, that's not good enough for my standards. My expertise is in HTML/CSS, not jQuery and PHP. I'm looking for someone to refine every character of my code for quality, speed, security and compatibility; I want everything to be as bug-free as possible for launch. So I need an expert programmer who is a perfectionist in their coding and cares about the quality of their work (not just making it work) to review and refine my code. I'm sure I can't outright post the project's details and hope for interested parties to contact me, as that wouldn't be beneficial to the community, so instead I'm looking for advice from programmers about where some of the best places to hire quality programmers are, and the best strategies for hiring the right programmer. In other words, screening applicants off of Craigslist isn't going to cut it for this project. Thanks

    Read the article

  • What are the Crappy Code Games - What are the challenges?

    - by simonsabin
    This is part of a series on the Crappy Code Games: The background | Who can enter? | What are the challenges? | What are the prizes? | Why should I attend? | Tips on how to win
    What are the challenges? There are 4 games that you can enter. Each one is to test a different aspect of SQL Server:
        The High Jump: Generate the highest I/O per second
        The 100 m dash: Cumulative highest number of I/O's in 60 seconds
        The SSIS-athon: Load one billion row fact table in the shortest time
        The Marathon: Generate the highest...(read more)

    Read the article

  • how to architect this to make it unit testable

    - by SOfanatic
    I'm currently working on a project where I'm receiving an object via web service (WSDL). The overall process is the following: receive the object, add/delete/update parts (or all) of it, and return the object with the changes made. The thing is that sometimes these changes are complicated and there is some logic involved, other databases, other web services, etc., so to facilitate this I'm creating a custom object that mimics the original one but has some enhanced functionality to make some things easier. So I'm trying to have this process: receive the original object, convert/copy it to the custom object, add/delete/update, convert/copy it back to the original object, return the original object. Example:

        public class Row
        {
            public List<Field> Fields { get; set; }
            public string RowId { get; set; }

            public Row()
            {
                this.Fields = new List<Field>();
            }
        }

        public class Field
        {
            public string Number { get; set; }
            public string Value { get; set; }
        }

    So for example, one of the "actions" to perform on this would be to find all Fields in a Row whose Value equals something and update them with some other value. I have a CustomRow class that represents the Row class; how can I make this class unit testable? Do I have to create an interface ICustomRow to mock it in the unit test? If one of the actions is to sum all of the Values in the Fields that have a Number equal to 10, like the sample function below, how can I design the custom class to facilitate unit tests? Sample function:

        public int Sum(string number)
        {
            return row.Fields.Where(x => x.Number == number).Sum(x => int.Parse(x.Value));
        }

    Am I approaching this the wrong way?
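
    To illustrate what I mean by the ICustomRow idea, here is a rough sketch (the interface, the wrapping constructor and the test at the bottom are placeholders I made up for this question, not my actual code):

        using System.Linq;

        public interface ICustomRow
        {
            int Sum(string number);
        }

        public class CustomRow : ICustomRow
        {
            private readonly Row row;

            public CustomRow(Row row) { this.row = row; }

            public int Sum(string number)
            {
                // Sum the integer Values of all Fields whose Number matches.
                return row.Fields.Where(f => f.Number == number).Sum(f => int.Parse(f.Value));
            }
        }

        // A unit test then only needs an in-memory Row, no web service call:
        // var row = new Row { Fields = { new Field { Number = "10", Value = "2" },
        //                                new Field { Number = "10", Value = "3" } } };
        // Assert.AreEqual(5, new CustomRow(row).Sum("10"));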

    Read the article

  • Fast lighting with multiple lights

    - by codymanix
    How can I implement fast lighting with multiple lights? I don't want to restrain the player; he can place an unlimited number of possibly overlapping (point) lights into the level. The problem is that shaders containing the dynamic loops necessary to calculate the lighting tend to be very slow. I had the idea that it might be possible to compile a shader n times, where n is the number of lights: if n is known at compile time, the loops can be unrolled automatically. Is it possible to generate n versions of the same shader with just a different number of lights? At runtime I could then decide which shader to use for which part of the level.

    Read the article

  • Is deserializing complex objects instead of creating them a good idea, in test setup?

    - by Chris Bye
    I'm writing tests for a component that takes very complex objects as input. These tests are a mix of tests against already existing components and test-first tests for new features. Instead of re-creating my input objects (which would be a large chunk of code) or reading one from our data store, I had the thought of serializing a live instance of one of these objects and just deserializing it in test setup. I can't decide if this is a reasonable idea that will save effort in the long run, or whether it's the worst idea I've ever had, and those who maintain this code will hunt me down as soon as they read it. Is deserialization of inputs a valid means of test setup in some cases? To give a sense of the scale of what I'm dealing with, the serialization output for one of these input objects is 93 KB, obtained in C# with:

        new BinaryFormatter().Serialize((Stream)fileStream, myObject);
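
    For illustration, the setup side of the idea would be roughly this (the fixture file name and the ComplexInput type are placeholders, and it assumes the file was produced beforehand with the Serialize call above):

        using System.IO;
        using System.Runtime.Serialization.Formatters.Binary;

        // In the test fixture's setup:
        ComplexInput input;
        using (FileStream fileStream = File.OpenRead("TestData/complexInput.bin"))
        {
            input = (ComplexInput)new BinaryFormatter().Deserialize(fileStream);
        }
        // 'input' is now a realistic object graph ready to feed into the component under test.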

    Read the article

  • Where should Acceptance tests be written against?

    - by Jonn
    I'm starting to get into writing automated acceptance tests and I'm quite confused about which layer of the app to write these tests against. Most examples I've seen are acceptance tests written against the Domain, but how about tests like:
        Given incorrect data
        When the user submits the form
        Then play an error beep
    These seem to be a fit for the UI and not for the Domain, or probably even the Service layer.

    Read the article

  • What are the Crappy Code Games - What are the prizes?

    - by simonsabin
    This is part of a series on the Crappy Code Games: The background | Who can enter? | What are the challenges? | What are the prizes? | Why should I attend? | Tips on how to win
    What are the prizes? There are loads of them at both the heats and the final. At the heats the top three coders at each event will take home Gold, Silver and Bronze medals, along with some great prizes such as Steve Wozniak-signed iPods, developer laptops, Win-Mo phones, Xbox 360 S consoles, t-shirts and more. And then in the final...(read more)

    Read the article

  • Input of mouseclick not always registered in XNA Update method

    - by LordrAider
    I have a problem where not all of my mouse click inputs seem to be registered. The update logic is checking a 2-dimensional array of 10x10; it's logic for a jewel-matching game. When I switch my jewel I can't click on another jewel for about half a second. I tested it with a click counter variable and it doesn't hit the debugger when I click a second time right after the jewel switch, only if I do the second click after waiting about half a second longer. Could it be that the update logic is so heavy that my click happens while it is still executing and therefore doesn't get registered? What am I not seeing here :)? Or doing wrong? It is my first game. My update method looks like this:

        public void UpdateBoard()
        {
            MouseState currentMouseState = Mouse.GetState();

            if (currentMouseState.LeftButton == ButtonState.Pressed &&
                prevMouseState.LeftButton != ButtonState.Pressed)
            {
                UpdatingLogic = true;
                // this.CheckDropJewels(currentMouseState);
                // this.CheckMatches(3);
                // this.RemoveMatches();
                this.CheckForSwitch(currentMouseState);
                this.MarkJewel(currentMouseState);
                UpdatingLogic = false;
                // reIndexMissingJewels = true;
                reIndexSwitchedJewels = true;
            }

            prevMouseState = currentMouseState;
            this.ReIndex();
            this.UpdateJewels();
        }

    Read the article

  • How best to merge/sort/page through tons of JSON arrays?

    - by Joshiatto
    Here's the scenario: say you have millions of JSON documents stored as text files. Each JSON document is an array of "activity" objects, each of which contains a "created_datetime" attribute. What is the best way to merge/sort/filter/page through these activities via a web UI? For example, say we want to take a few thousand of the documents, merge them into a gigantic array, sort the array by the "created_datetime" attribute descending, and then page through it 10 activities at a time. Also keep in mind that roughly 25% of these JSON documents are updated every day, and updates have to make it into the view within 5 minutes. My first thought is to parse all of the documents into an RDBMS table, and then it would just be a simple query such as "select top 10 name, created_datetime from Activity where user_id=12345 order by created_datetime desc". Some have suggested I use NoSQL techniques such as Hadoop or map/reduce instead. How exactly would that work? For more background, see: Why is NoSQL better for this scenario?
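
    To be concrete, the naive in-memory version of the merge/sort/page step I'm describing would be something like this (Document, Activity and pageIndex are just placeholder names, and this ignores the 5-minute freshness requirement entirely):

        using System.Collections.Generic;
        using System.Linq;

        static class ActivityPaging
        {
            public static IEnumerable<Activity> GetPage(IEnumerable<Document> documents, int pageIndex, int pageSize = 10)
            {
                return documents
                    .SelectMany(d => d.Activities)              // merge all activity arrays
                    .OrderByDescending(a => a.CreatedDatetime)  // newest first
                    .Skip(pageIndex * pageSize)                 // page offset
                    .Take(pageSize);                            // 10 activities per page
            }
        }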

    Read the article

  • Empirical evidence regarding testability

    - by Xodarap
    A Google Scholar search turns up numerous papers on testability, including models for computing testability, recommendations for how one's code can be made more testable, etc. They all come with the assertion that more testable code is more stable, but I can't find any studies which actually demonstrate this. Can someone link me to a study evaluating the relationship between testability and quality? The closest I can find is Improving the Testability of Object Oriented Systems, which discusses the relationship between design flaws and testability.

    Read the article

  • How to keep the trunk stable when tests take a long time?

    - by Oak
    We have three sets of test suites:
        - A "small" suite, taking only a couple of hours to run
        - A "medium" suite that takes multiple hours, usually run every night (nightly)
        - A "large" suite that takes a week+ to run
    We also have a bunch of shorter test suites, but I'm not focusing on them here. The current methodology is to run the small suite before each commit to the trunk. Then the medium suite runs every night, and if in the morning it turns out it failed, we try to isolate which of yesterday's commits is to blame, roll back that commit and retry the tests. A similar process, only at a weekly instead of a nightly frequency, is done for the large suite. Unfortunately, the medium suite does fail pretty frequently. That means the trunk is often unstable, which is extremely annoying when you want to make modifications and test them: when I check out from the trunk I cannot know for certain it's stable, and if a test fails I cannot know for certain whether it's my fault or not. My question is, is there some known methodology for handling these kinds of situations in a way that leaves the trunk always in top shape? For example, "commit into a special pre-commit branch which will then periodically update the trunk every time the nightly passes". And does it matter whether it's a centralized source control system like SVN or a distributed one like Git? By the way, I am a junior developer with limited ability to change things; I'm just trying to understand if there's a way to handle this pain I am experiencing.

    Read the article

  • Need a Quick Sure Method to Produce a Formatted Explain Plan? This will help!

    - by user702295
    Please use the following on the production machine to get a formatted explain plan and SQL trace using the slow SQL (e.g. 'T_COMB_LIST.COMB_ID = 216') or any other value that takes longer:

        -- Open a new session in SQL*Plus
        -- Make sure you are using an updated PLAN_TABLE.
        -- This can be done by dropping it and recreating it by running:
        --   SQL> @?/rdbms/admin/utlxplan.sql
        set lines 1000
        set pages 1000
        spool xplan_1.txt
        EXPLAIN PLAN FOR
        <<<<Replace this line with exactly the same query you used above. Force hard parse by modifying the case of a character>>>>
        @?/rdbms/admin/utlxplp
        spool off
        EXIT

        -- Open a second session in SQL*Plus
        ALTER SESSION SET max_dump_file_size = unlimited;
        ALTER SESSION SET tracefile_identifier = '10046';
        ALTER SESSION SET statistics_level = ALL;
        ALTER SESSION SET events '10046 trace name context forever, level 12';
        <<<<Replace this line with exactly the same query you used above. Force hard parse by modifying the case of a character>>>>
        select 'verify cursor closed' from dual;
        ALTER SYSTEM SET EVENTS '10046 trace name context off';
        EXIT

    Make sure the spooled file is formatted properly and that the 10046 trace has the relevant explain plan in it. Please upload both files (the 10046 trace is generated in udump). Need instructions to find udump?

        sqlplus "/ as sysdba"
        show parameters dump_dest

    This will show you the bdump, cdump and udump locations.

    Read the article

  • Skip CodedUI Tests, use Selenium for Web Automation

    - by Aligned
    Originally posted on: http://geekswithblogs.net/Aligned/archive/2013/10/31/skip-codedui-tests-use-selenium-for-web-automation.aspx
    I recently joined a team that was using Agile methodologies to create a new product. They have a working beta product after 10 or so two-week sprints, and their UI had already changed several times as they went through iterations of it. As a result, the QA team was falling behind with automated tests, and I was tasked with helping them catch up and expand their tests. The project is a website. I heard many complaints about how hard it is to work with CodedUI (writing our own code, not relying on the recorder, as we wanted re-usable and more maintainable code); then it took me 4+ hours to fix one issue. It was hard to traverse the UI and debug the objects with breakpoints… I said out loud "there has to be a better way, or a framework that uses jQuery to run through the tests." Plus it seemed really slow (wait… finding the object… wait… start putting in text…). Plus some tests would randomly fail on the test agents (using the test settings and an automated build, they are run on VMs using Microsoft test agents). Enough complaining. Selenium to the rescue (mostly). The lead QA guy decided to try it out and we haven't turned back. We are now running tests in Chrome and Firefox and they run a lot faster. We had IE running too, but some of the tests were running fine locally while hanging on the test agents. I'll add some hints and lessons learned in a later post.
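
    For anyone who hasn't seen the Selenium WebDriver C# API we switched to, a minimal test looks roughly like this (the URL, element IDs and title check are placeholders, not our actual tests):

        using OpenQA.Selenium;
        using OpenQA.Selenium.Chrome;

        class LoginSmokeTest
        {
            static void Main()
            {
                using (IWebDriver driver = new ChromeDriver())
                {
                    driver.Navigate().GoToUrl("http://example.com/login");
                    driver.FindElement(By.Id("userName")).SendKeys("testuser");
                    driver.FindElement(By.Id("password")).SendKeys("secret");
                    driver.FindElement(By.Id("loginButton")).Click();

                    // Fail loudly if the expected landing page never appears.
                    if (!driver.Title.Contains("Dashboard"))
                        throw new System.Exception("Login did not reach the dashboard page.");
                }
            }
        }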

    Read the article

  • Does it make sense to write tests for legacy code when there is no time for a complete refactoring?

    - by is4
    I usually try to follow the advice of the book Working Effectively with Legacy Code. I break dependencies, move parts of the code to @VisibleForTesting public static methods and to new classes to make the code (or at least some part of it) testable. And I write tests to make sure that I don't break anything when I'm modifying or adding new functions. A colleague says that I shouldn't do this. His reasoning: The original code might not work properly in the first place. And writing tests for it makes future fixes and modifications harder since devs have to understand and modify the tests too. If it's GUI code with some logic (~12 lines, 2-3 if/else block, for example), a test isn't worth the trouble since the code is too trivial to begin with. Similar bad patterns could exist in other parts of the codebase, too (which I haven't seen yet, I'm rather new); it will be easier to clean them all up in one big refactoring. Extracting out logic could undermine this future possibility. Should I avoid extracting out testable parts and writing tests if we don't have time for complete refactoring? Is there any disadvantage to this that I should consider?

    Read the article

  • How do you QA and release software quickly with a large team?

    - by sadadasd
    My work used to be a smaller team; we had fewer than 13 devs for a while. We are now growing rapidly and are over 20, with plans to be over 30 in a few months. Our process for QA'ing and releasing each build is no longer working. We currently have everyone develop the new code and stick it onto a staging environment. A few days before our weekly release, we would freeze the staging environment and QA everything. By our normal release time, everything was usually deemed acceptable and pushed out the door to the main site. We reached a point where our codebase got too big, so we could no longer regress the entire site each week in QA. We were OK with that; we just made a list of everything important and only covered that and the new stuff. Now we are reaching a point where all the new stuff each week is becoming too big and too unstable. Our staging environment is really buggy week after week, and we are usually 1-2 hours behind the normal release time. As the team grows further, we are going to drown under this same process. We are re-evaluating everything, and I personally am looking for suggestions / success stories. Many companies have been here before and progressed beyond it; we need to do the same.

    Read the article

  • What are the crappy code games - the background?

    - by simonsabin
    This is part of a series on the Crappy Code Games: The background | Who can enter? | What are the challenges? | What are the prizes? | Why should I attend? | Tips on how to win
    The Background: Fusion IO came to us a while back wanting to run a competition to highlight how badly written code can really impact your system. We've all seen it; I saw an example yesterday where someone had implemented a cursor over a whole table just to update a few rows, something like this: declare cUpdateCursor cursor for  ...(read more)

    Read the article

  • Structuring cascading properties - parent only or parent + entire child graph?

    - by SB2055
    I have a Folder entity that can be moderated by users. Folders can contain other folders, so I may have a structure like this: Folder 1, Folder 2, Folder 3, Folder 4, nested inside one another. I have to decide how to implement moderation for this entity. I've come up with two options:
    Option 1: When the user is given moderation privileges to Folder 1, define a moderator relationship between Folder 1 and User 1. No other relationships are added to the db. To determine whether the user can moderate Folder 3, I check whether User 1 is the moderator of any parent folder. This seems to alleviate some of the complexity of handling updates / moved entities / additions under Folder 1 after the relationship has been defined, and reverting the relationship means I only have to deal with one entity.
    Option 2: When the user is given moderation privileges to Folder 1, define a new relationship between User 1 and Folder 1 and all child entities down to the grandest of grandchildren when the relationship is created, and if it's ever removed, iterate back down the graph to remove the relationship. If I add something under Folder 2 after this relationship has been made, I just copy all moderators into the new entity. But when I need to show only the top-level folders that a user is moderating, I need to query all folders that have a parent folder the user does not moderate, as opposed to Option 1, where I just query any items the user is moderating.
    I think it comes down to determining whether users will query for parent items more often than child items... if so, then Option 1 seems better. But I'm not sure. Is either approach better than the other? Why? Or is there another approach that's better than both? I'm using Entity Framework in case it matters.
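
    To make Option 1 concrete, the permission check I have in mind would look roughly like this (Folder.Parent, Folder.Moderators and the User type are placeholder names for illustration, not my actual model):

        // Walk up the folder hierarchy: the user may moderate this folder
        // if they moderate it directly or moderate any ancestor.
        static bool CanModerate(User user, Folder folder)
        {
            for (Folder current = folder; current != null; current = current.Parent)
            {
                if (current.Moderators.Contains(user))
                    return true;
            }
            return false;
        }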

    Read the article

  • Olympic clock stops after a few hours: can this even be a software problem? [closed]

    - by mvexel
    I fail to understand how something as uncomplicated as a countdown clock can fail - much to the public embarrassment of sponsor and renowned clockmaker Omega - after only a few hours of operation. The clock was 'developed by our experts and fully tested', according to a spokesperson, who goes on to say that it is 'not immediately apparent what has caused the problem'. Can this even be a software problem? What has gone wrong here?

    Read the article
