Search Results

Search found 13608 results on 545 pages for 'performance dashboard'.

Page 487/545

  • Given a typical Rails 3 environment, why am I unable to execute any tests?

    - by Tom
    I'm working on writing simple unit tests for a Rails 3 project, but I'm unable to actually execute any tests. Case in point, attempting to run the test auto-generated by Rails fails:

        require 'test_helper'

        class UserTest < ActiveSupport::TestCase
          # Replace this with your real tests.
          test "the truth" do
            assert true
          end
        end

    Results in the following error:

        <internal:lib/rubygems/custom_require>:29:in `require': no such file to load -- test_helper (LoadError)
            from <internal:lib/rubygems/custom_require>:29:in `require'
            from user_test.rb:1:in `<main>'

    Commenting out the require 'test_helper' line and attempting to run the test results in this error:

        user_test.rb:3:in `<main>': uninitialized constant Object::ActiveSupport (NameError)

    The Action Pack gems appear to be properly installed and up to date:

        actionmailer (3.0.3, 2.3.5)
        actionpack (3.0.3, 2.3.5)
        activemodel (3.0.3)
        activerecord (3.0.3, 2.3.5)
        activeresource (3.0.3, 2.3.5)
        activesupport (3.0.3, 2.3.5)

    Ruby is at 1.9.2p0 and Rails is at 3.0.3. A sample dump of my test directory is as follows:

        /fixtures
        /functional
        /integration
        /performance
        /unit
          -- /helpers
          -- user_helper_test.rb
          -- user_test.rb
        test_helper.rb

    I've never seen this problem before - I've run the typical rake tasks for preparing the test environment. I have nothing out of the ordinary in my application or environment configuration files, nor have I installed any unusual gems that would interfere with the test environment.

    Edit: Xavier Holt's suggestion of explicitly specifying the path to the test_helper worked; however, this revealed an issue with ActiveSupport. Now when I attempt to run the test, I receive the following error message (as also listed above):

        user_test.rb:3:in `<main>': uninitialized constant Object::ActiveSupport (NameError)

    But as you can see above, Action Pack is all installed and up to date.
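    A plausible explanation, consistent with the path fix mentioned in the edit: Ruby 1.9.2 removed "." from $LOAD_PATH, so a test file run directly can no longer find test_helper by bare name. A minimal sketch of two common workarounds (assuming the standard Rails test layout):

        # Run with the test directory on the load path:
        #   ruby -Itest test/unit/user_test.rb
        # or make the require relative to this file instead:
        require File.expand_path('../../test_helper', __FILE__)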

    Read the article

  • I'm having an issue using GLshort to represent vertices and normals

    - by Xylopia
    As my project gets close to the optimization stage, I've noticed that reducing vertex metadata could vastly improve the performance of 3D rendering. Eventually I searched around and found the following advice on Stack Overflow:

    - Using GL_SHORT instead of GL_FLOAT in an OpenGL ES vertex array
    - How do you represent a normal or texture coordinate using GLshorts?
    - Advice on speeding up OpenGL ES 1.1 on the iPhone

    Simple experiments show that switching from "FLOAT" to "SHORT" for vertex and normal isn't tough, but what troubles me is that when you scale the vertices back to their original size (with glScalef), the normals are multiplied by the reciprocal of the scale. The natural remedy is to multiply the normals by the scale before you submit them to the GPU. But then my short normals almost become 0, because the scale factor is usually smaller than 1. Duh! How do you use "short" for both vertex and normal at the same time? I've been trying this and that for about a full day, but so far I could only get "float vertex with byte normal" or "short vertex with float normal" to work. Your help would be truly appreciated.
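    One direction worth sketching (my assumption, not something from the question): GLshort normals are, as I understand the conversion rules, mapped to [-1, 1], so you can store them at full short range and let GL renormalize, keeping the vertex scale out of the normals entirely:

        /* A sketch for OpenGL ES 1.1; vertices, normals and vertexCount are
         * assumed to be set up elsewhere, and 1/128 is an illustrative scale. */
        glEnable(GL_NORMALIZE);                    /* or GL_RESCALE_NORMAL */
        glPushMatrix();
        glScalef(1.0f/128.0f, 1.0f/128.0f, 1.0f/128.0f);
        glVertexPointer(3, GL_SHORT, 0, vertices); /* shorts scaled up 128x */
        glNormalPointer(GL_SHORT, 0, normals);     /* full-range shorts */
        glDrawArrays(GL_TRIANGLES, 0, vertexCount);
        glPopMatrix();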

    Read the article

  • Event feed implementation - will it scale?

    - by SlappyTheFish
    Situation: I am currently designing a feed system for a social website whereby each user has a feed of their friends' activities. I have two possible methods of generating the feeds and I would like to ask which is best in terms of ability to scale. Events from all users are collected in one central database table, event_log. Users are paired as friends in the table friends. The RDBMS we are using is MySQL.

    Standard method: When a user requests their feed page, the system generates the feed by inner joining event_log with friends. The result is then cached and set to time out after 5 minutes. Scaling is achieved by varying this timeout.

    Hypothesised method: A task runs in the background and, for each new, unprocessed item in event_log, creates entries in the database table user_feed, pairing that event with all of the users who are friends with the user who initiated the event. One table row pairs one event with one user (see the sketch below).

    The problems with the standard method are well known - what if a lot of people's caches expire at the same time? The solution also does not scale well - the brief is for feeds to update as close to real time as possible.

    The hypothesised solution in my eyes seems much better; all processing is done offline so no user waits for a page to generate, and there are no joins so database tables can be sharded across physical machines. However, if a user has 100,000 friends and creates 20 events in one session, that results in inserting 2,000,000 rows into the database.

    Question: It boils down to two points: Is the worst-case scenario mentioned above problematic, i.e. does table size have an impact on MySQL performance, and are there any issues with this mass inserting of data for each event? Is there anything else I have missed?
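    For concreteness, the fan-out step of the hypothesised method might look like this (the column names and the processed flag are assumptions, not from the question):

        INSERT INTO user_feed (user_id, event_id)
        SELECT f.friend_id, e.event_id
        FROM event_log e
        JOIN friends f ON f.user_id = e.user_id
        WHERE e.processed = 0;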

    Read the article

  • Rails: creating a custom data type, to use with generator classes and a bunch of questions related t

    - by Shyam
    Hi, after being productive with Rails for some weeks, I have learned some tricks and got some experience with the framework. About 10 days ago, I figured out it is possible to build a custom data type for migrations by adding some code to the table definition. Also, after learning a bit about floating point (and how evil it is) vs integers, the money gem and other possible solutions, I decided I didn't WANT to use the money gem, but instead try to learn more about programming and find a solution myself. Some suggestions said that I should be using integers, one for the whole numbers and one for the cents. When playing in script/console, I discovered how easy it is to work with calculations and arrays. But I am talking too much (and the reason I am is to give sufficient background).

    Right now, while playing with the scaffold generator (yes, I use it, because I like the way I can quickly set up a prototype while I am still researching my objectives), I would like to use a DRY method. In my opinion, I should build a custom "object" that can hold two variables (Fixnum), one for the whole part, one for the cents. In my big dream, I would be able to do the following:

        script/generate scaffold Cake name:string description:text cost:mycustom

    where mycustom would create two integer columns (one for wholes, one for cents). Right now I could do this by running:

        script/generate scaffold Cake name:string description:text cost_w:integer cost_c:integer

    I also had the idea of creating a "cost model", which would hold two columns of integers and add a cost_id column to my scaffold. But wouldn't that be an extra table that would cause some kind of performance penalty? And wouldn't that defeat the purpose of the Cake model in the first place, because the costs are an attribute of individual Cake entries? The reason I would want such functionality is that I am thinking of having multiple "costs" inside my Rails application. Thank you for your feedback, comments and answers! I hope my message got through as understandable; my apologies for incorrect grammar or weird sentences, as English is not my native language.
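    For the two-integer-columns route, one pattern that avoids both an extra cost table and a custom generator type is an aggregation on the model (a sketch; the Money value class and column names are illustrative):

        class Cake < ActiveRecord::Base
          composed_of :cost,
                      :class_name => 'Money',
                      :mapping => [%w(cost_w whole), %w(cost_c cents)]
        end

        # Simple immutable value object constructed by the mapping above:
        class Money
          attr_reader :whole, :cents
          def initialize(whole, cents)
            @whole, @cents = whole, cents
          end
        end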

    Read the article

  • Coding in Linux through a virtual machine on Windows vs. partitioning

    - by ihaveitnow
    I already have experience with setting up virtual machines, running them and other minor tasks. I'm a gamer, so I won't get rid of Windows (for now at least...) but I do want to be a great programmer and to be involved with the open-source community. I'd like to know if it's a good idea to do my programming in Linux through a virtual machine, vs giving it a partitioned section of the HDD. I'd like to know about performance pros and cons and functionality. All responses are appreciated, thanks in advance.

    The type of programming I intend to dive into: Android dev, web dev, desktop dev... more Android and web right now though. So I'm looking at C#, C, C++, Java, PHP, HTML, MySQL... off the top of the dome. I do web design as well, so Dreamweaver is added as an "essential". But I'm sure I can create Dreamweaver files and upload them to the server after programming in Linux... right? And any info on IDEs in Linux for the above-mentioned languages is appreciated, but I would prefer going the coding route and understanding the essence of what's happening "under the covers". Thanks to all for reading, I appreciate it. Hope this isn't confusing :S

    Read the article

  • What is different about C++ math.h abs() compared to my abs()

    - by moka
    I am currently writing some GLSL-like vector math classes in C++, and I just implemented an abs() function like this:

        template<class T>
        static inline T abs(T _a)
        {
            return _a < 0 ? -_a : _a;
        }

    I compared its speed to the default C++ abs from math.h like this:

        clock_t begin = clock();
        for(int i=0; i<10000000; ++i)
        {
            float a = abs(-1.25);
        };
        clock_t end = clock();
        unsigned long time1 = (unsigned long)((float)(end-begin) / ((float)CLOCKS_PER_SEC/1000.0));

        begin = clock();
        for(int i=0; i<10000000; ++i)
        {
            float a = myMath::abs(-1.25);
        };
        end = clock();
        unsigned long time2 = (unsigned long)((float)(end-begin) / ((float)CLOCKS_PER_SEC/1000.0));

        std::cout<<time1<<std::endl;
        std::cout<<time2<<std::endl;

    Now the default abs takes about 25ms while mine takes 60. I guess there is some low-level optimisation going on. Does anybody know how math.h abs works internally? The performance difference is nothing dramatic, but I am just curious!
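    As an aside, micro-benchmarks like this are easy to distort: with a constant argument the compiler may evaluate one abs() at compile time and not the other. A sketch of a fairer loop (my addition, not from the question):

        #include <cmath>

        volatile float input = -1.25f;  // volatile defeats constant folding
        float sink = 0.0f;
        for (int i = 0; i < 10000000; ++i) {
            sink += std::fabs(input);   // use sink afterwards so the loop survives
        }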

    Read the article

  • PreparedStatement question in Java against Oracle.

    - by fardon57
    Hi everyone, I'm working on modifying some code to use PreparedStatement instead of a normal Statement, for security and performance reasons. Our application currently stores information in an embedded Derby database, but we are going to move to Oracle soon. I've found two things about Oracle and PreparedStatement that I need your help with:

    1. I've found a document saying that Oracle doesn't handle bind parameters in IN clauses, so we cannot supply a query like:

        Select pokemon from pokemonTable where capacity in (?,?,?,?)

    Is that true? Is there any workaround? ... Why?

    2. We have some fields which are of type TIMESTAMP. With our current Statement, the query looks like this:

        Select raichu from pokemonTable where evolution = TO_TIMESTAMP('2500-12-31 00:00:00.000', 'YYYY-MM-DD HH24:MI:SS.FF')

    What should be done for a PreparedStatement? Should I put '2500-12-31' or TO_TIMESTAMP('2500-12-31 00:00:00.000', 'YYYY-MM-DD HH24:MI:SS.FF') into the array of parameters?

    Thanks for your help, I hope my questions are clear! Regards,
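    For the second question, the usual pattern (a sketch using the standard JDBC API) is to bind a java.sql.Timestamp object rather than embedding TO_TIMESTAMP in the SQL text:

        PreparedStatement ps = connection.prepareStatement(
            "SELECT raichu FROM pokemonTable WHERE evolution = ?");
        ps.setTimestamp(1, java.sql.Timestamp.valueOf("2500-12-31 00:00:00.0"));
        ResultSet rs = ps.executeQuery();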

    Read the article

  • OptimisticLockException in inner transaction ruins outer transaction

    - by Pace
    I have the following code (OLE = OptimisticLockException)...

        public void outer() {
            try {
                middle();
            } catch (OLE) {
                updateEntities();
                outer();
            }
        }

        @Transactional
        public void middle() {
            try {
                inner();
            } catch (OLE) {
                updateEntities();
                middle();
            }
        }

        @Transactional
        public void inner() {
            //Do DB operation
        }

    inner() is called by other non-transactional methods, which is why both middle() and inner() are transactional. As you can see, I deal with OLEs by updating the entities and retrying the operation. The problem I'm having is that when I designed things this way I was assuming that the only time one could get an OLE was when a transaction closed. This is apparently not the case, as the call to inner() throws an OLE even when the stack is outer()->middle()->inner(). Now, middle() properly handles the OLE and the retry succeeds, but when it comes time to close the transaction, it has been marked rollbackOnly by Spring. When the middle() method call finally returns, the closing aspect throws an exception because it can't commit a transaction marked rollbackOnly.

    I'm uncertain what to do here. I can't clear the rollbackOnly state. I don't want to force-create a transaction on every call to inner() because that kills my performance. Am I missing something, or can anyone see a way I can structure this differently?

    EDIT: To clarify what I'm asking, let me explain my main question: is it possible to catch and handle an OLE if you are inside an @Transactional method?

    FYI: The transaction manager is a JpaTransactionManager and the JPA provider is Hibernate.
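    For reference, the standard Spring knob for isolating inner() is sketched below; it is exactly the per-call transaction the poster wants to avoid for performance reasons, but it illustrates the mechanism:

        // A sketch: REQUIRES_NEW suspends the caller's transaction and runs
        // inner() in its own, so a rollback there no longer marks the outer
        // transaction rollback-only.
        @Transactional(propagation = Propagation.REQUIRES_NEW)
        public void inner() {
            // Do DB operation
        }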

    Read the article

  • How many users are sufficient to make a heavy load for a web application

    - by galymzhan
    I have a web application which has been suffering high load in recent days. The application runs on a single server with an 8-core Intel CPU and 4 GB of RAM. Software: Drupal 5 (Apache 2, PHP 5, MySQL 5) running on Debian. After reaching 500 authenticated and 200 anonymous simultaneous users, the application's performance drastically decreases, up to total failure. The biggest load comes from authenticated users, who perform activities causing inserts/updates/deletes on the db. I think MySQL is the bottleneck. Is it normal to slow down with this number of users?

    EDIT: I forgot to mention that I did some kind of profiling. I ran the commands top and htop, and they showed me that all memory was being used by MySQL! After some time MySQL starts to perform terribly slowly, the site goes down, and we have to restart/stop Apache to reduce load. The administrators said that there were about 200 active MySQL connections at that moment. The worst point is that we need to solve this ASAP, and I can't do deep profiling analysis/code refactoring, so I'm considering two ways: my tables are MyISAM; I heard they use table-level locking, which is very slow - is that right? Could I change them to InnoDB without worry? What if I take MySQL and move it to a dedicated machine with a lot of RAM?
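    For the MyISAM-to-InnoDB question, the conversion itself is a per-table statement (a sketch; take a backup first, and the table name is illustrative):

        SHOW TABLE STATUS WHERE Engine = 'MyISAM';
        ALTER TABLE node ENGINE = InnoDB;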

    Read the article

  • The CLR has been unable to transition from COM context [...] for 60 seconds

    - by BlueRaja The Green Unicorn
    I am getting this error on code that used to work. I have not changed the code. Here is the full error:

        The CLR has been unable to transition from COM context 0x3322d98 to COM context 0x3322f08 for 60 seconds. The thread that owns the destination context/apartment is most likely either doing a non pumping wait or processing a very long running operation without pumping Windows messages. This situation generally has a negative performance impact and may even lead to the application becoming non responsive or memory usage accumulating continually over time. To avoid this problem, all single threaded apartment (STA) threads should use pumping wait primitives (such as CoWaitForMultipleHandles) and routinely pump messages during long running operations.

    And here is the code that caused it:

        var openFileDialog1 = new System.Windows.Forms.OpenFileDialog();
        openFileDialog1.DefaultExt = "mdb";
        openFileDialog1.Filter = "Management Database (manage.mdb)|manage.mdb";

        //Stalls indefinitely on the following line, then gives the CLR error
        //one minute later. The dialog never opens.
        if(openFileDialog1.ShowDialog() == DialogResult.OK)
        {
            ....
        }

    Yes, I am sure the dialog is not open in the background, and no, I don't have any explicit COM code or unmanaged marshalling or multithreading. I have no idea why the OpenFileDialog won't open - any ideas?
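    One thing worth checking (an assumption about the cause, not a confirmed diagnosis): ShowDialog() requires an STA thread with a message pump, and these symptoms match the call running on a non-STA or non-pumping thread. A sketch of forcing an STA thread:

        var t = new System.Threading.Thread(() =>
        {
            using (var dlg = new System.Windows.Forms.OpenFileDialog())
            {
                if (dlg.ShowDialog() == System.Windows.Forms.DialogResult.OK)
                {
                    // ...
                }
            }
        });
        t.SetApartmentState(System.Threading.ApartmentState.STA);
        t.Start();
        t.Join();  // blocks the calling thread until the dialog closes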

    Read the article

  • Detaching all entities of T to get fresh data

    - by Goran
    Let's take an example where two types of entities are loaded: Product and Category, with Product.CategoryId -> Category.Id. We have CRUD operations available on products (not categories). If categories are updated on another screen (or by another user on the network), we would like to be able to reload the categories while preserving the context we currently use, since we could be in the middle of editing data and do not want changes to be lost (and we cannot depend on saving, since we have incomplete data). Since there is no easy way to tell EF to get fresh data (added, removed and modified), we thought of two possible ways:

    1) Keep products attached to the context and categories detached from it. This would mean we lose the ability to access Product.Category.Name, which we do sometimes require, so we would need to resolve it manually (for example when printing data).

    2) Detach/attach all categories from the current context:

        Context.ChangeTracker.Entries().Where(x => x.Entity.GetType() == typeof(T)).ForEach(x => x.State = EntityState.Detached);

    and then reload the categories, which will get fresh data. Do you see any problem with this second approach? We understand that this will require all constraints to be put on foreign keys, not navigation properties, since when detaching all categories the Product.Category navigation properties would be reset to null. Also, there could be a potential performance problem, which we did not test, since a couple of thousand products could be loaded and all would need to resolve the navigation property when reloading. Which of the two do you prefer, and is there a better way (EF6 + .NET 4.0)?
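    Spelled out, the second approach is roughly the sketch below (EF6; the re-query at the end is what actually fetches the fresh rows):

        foreach (var entry in Context.ChangeTracker.Entries()
                                     .Where(e => e.Entity is Category))
        {
            entry.State = EntityState.Detached;
        }
        var categories = Context.Set<Category>().ToList(); // fresh data from the db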

    Read the article

  • MySQL Database Design with Internationalization

    - by Some name
    Hello, I'm going to start work on a medium-sized application, and I'm planning its db design. One thing that I'm not sure about is this: I will have many tables which will need internationalization, such as membership_options, gender_options, language_options etc. Each of these tables will share common i18n fields, like title, alternative_title, short_description, description.

    In your opinion, which is the best way to do it? Have an i18n table with the same fields for each of the tables that will need them? Or do something like:

        Membership table           Gender table
        ----------------           --------------
        id | created_at            id | created_at
        1  | 22.03.2001            1  | 14.08.2002
        2  | 22.03.2001            2  | 14.08.2002

        General translation table
        -------------------------
        record_id | table_name | string_name | alternative_title | ... | id_language
        1         | membership | regular     | null              |     | 1 (english)
        1         | membership | normale     | null              |     | 2 (italian)
        1         | gender     | man         | null              |     | 1 (english)
        1         | gender     | uomo        | null              |     | 2 (italian)

    This would avoid me repeating something like:

        membership_translation table
        -----------------------------
        membership_id | name    | alternative_title | id_lang
        1             | regular | null              | 1
        1             | normale | null              | 2

        gender_translation table
        -----------------------------
        gender_id | name | alternative_title | id_lang
        1         | man  | null              | 1
        1         | uomo | null              | 2

    and so on, so I would probably reduce the number of db tables, but I'm not sure about performance. I'm not much of a DB designer, so please let me know.
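    As a concrete sketch of the shared-table option, the general translation table could be declared like this (column types and the key choice are assumptions):

        CREATE TABLE translations (
          record_id         INT          NOT NULL,
          table_name        VARCHAR(64)  NOT NULL,
          string_name       VARCHAR(255) NOT NULL,
          alternative_title VARCHAR(255) NULL,
          id_language       INT          NOT NULL,
          PRIMARY KEY (record_id, table_name, id_language)
        );

    Note that record_id cannot be a real foreign key here, since it points into a different table per row; that is the usual trade-off of the polymorphic design.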

    Read the article

  • Optimizing an embedded SELECT query in mySQL

    - by Crazy Serb
    Ok, here's a query that I am running right now on a table that has 45,000 records and is 65MB in size... and is just about to get bigger and bigger (so I've got to think about future performance as well here):

        SELECT count(payment_id) as signup_count, sum(amount) as signup_amount
        FROM payments p
        WHERE tm_completed BETWEEN '2009-05-01' AND '2009-05-30'
        AND completed > 0
        AND tm_completed IS NOT NULL
        AND member_id NOT IN (SELECT p2.member_id FROM payments p2
                              WHERE p2.completed=1
                              AND p2.tm_completed < '2009-05-01'
                              AND p2.tm_completed IS NOT NULL
                              GROUP BY p2.member_id)

    And as you might or might not imagine - it chokes the MySQL server to a standstill... What it does is simply pull the number of new users who signed up, have at least one "completed" payment, and whose tm_completed is not empty (it is only populated for completed payments), where (the embedded SELECT) the member has never had a "completed" payment before - meaning he's a new member (the system does rebills and whatnot, and this is the only way to differentiate between an existing member who just got rebilled and a new member who got billed for the first time).

    Now, is there any possible way to optimize this query to use fewer resources, and to stop bringing my MySQL server to its knees...? Am I missing any info to clarify this further? Let me know...

    EDIT: Here are the indexes already on that table:

        Keyname       Type     Cardinality  Columns
        PRIMARY       PRIMARY  46757        payment_id
        member_id     INDEX    23378        member_id
        payer_id      INDEX    11689        payer_id
        coupon_id     INDEX    1            coupon_id
        tm_added      INDEX    46757        tm_added, product_id
        tm_completed  INDEX    46757        tm_completed, product_id
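    One rewrite worth trying (a sketch of the classic NOT IN to LEFT JOIN transformation; older MySQL versions often re-ran the subquery per row, so this can be dramatically cheaper - verify with EXPLAIN):

        SELECT COUNT(p.payment_id) AS signup_count, SUM(p.amount) AS signup_amount
        FROM payments p
        LEFT JOIN payments p2
               ON p2.member_id = p.member_id
              AND p2.completed = 1
              AND p2.tm_completed < '2009-05-01'
        WHERE p.tm_completed BETWEEN '2009-05-01' AND '2009-05-30'
          AND p.completed > 0
          AND p2.member_id IS NULL;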

    Read the article

  • OpenGLES - Rendering a background image only once and not wiping it

    - by chaosbeaker
    Hello, first time asking a question here, but I've been watching others' answers for a while. My question is about improving the performance of my program. Currently I'm wiping the view framebuffer on each pass through my program and then rendering the background image first, followed by the rest of my scene. I was wondering how to go about rendering the background image once, and only wiping the rest of the scene for updating/re-rendering. I tried using a separate buffer, but I'm not sure how to present this new buffer to the render buffer.

        // Set the current EAGLContext and bind to the framebuffer. This will direct all OGL commands to the
        // framebuffer and the associated renderbuffer attachment which is where our scene will be rendered
        [EAGLContext setCurrentContext:context];
        glBindFramebufferOES(GL_FRAMEBUFFER_OES, viewFramebuffer);

        // Define the viewport. Changing the settings for the viewport can allow you to scale the viewport
        // as well as the dimensions etc and so I'm setting it for each frame in case we want to change it
        glViewport(0, 0, screenBounds.size.width, screenBounds.size.height);

        // Clear the screen. If we are going to draw a background image then this clear is not necessary
        // as drawing the background image will destroy the previous image
        glClearColor(0.0f, 1.0f, 0.0f, 1.0f);
        glClear(GL_COLOR_BUFFER_BIT);

        // Setup how the images are to be blended when rendered. This could be changed at different points during your
        // render process if you wanted to apply different effects
        glBlendFunc(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA);

        switch (currentViewInt) {
            case 1: {
                [background render:CGPointMake(240, 0) fromTopLeftBottomRightCenter:@"Bottom"];
                // Other Rendering Code
            }
        }

        // Bind to the renderbuffer and then present this image to the current context
        glBindRenderbufferOES(GL_RENDERBUFFER_OES, viewRenderbuffer);
        [context presentRenderbuffer:GL_RENDERBUFFER_OES];

    Hopefully by solving this I'll also be able to implement another buffer just for rendering particles, as I can set them to always use a black background as their alpha source. Any help is greatly appreciated.
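    The usual shape of the render-once idea here is render-to-texture (a sketch only; backgroundFramebuffer, backgroundTexture and the two draw helpers are illustrative names, and the FBO/texture setup is omitted):

        // One-time setup: draw the expensive background into a texture.
        glBindFramebufferOES(GL_FRAMEBUFFER_OES, backgroundFramebuffer);
        drawBackground();
        glBindFramebufferOES(GL_FRAMEBUFFER_OES, viewFramebuffer);

        // Every frame: draw that texture as a full-screen quad, then the scene.
        glBindTexture(GL_TEXTURE_2D, backgroundTexture);
        drawFullScreenQuad();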

    Read the article

  • How to justify using a scripting language as part of a project

    - by sylvanaar
    I have a specific project in which I want to use either a scripting language + C, or as an alternative a 100% Java solution. The program adapts a legacy system for use with other modern systems. Basically, I have few choices as to what language I can use. I have C/C++, Java 1.4, and I have also compiled Lua for this environment. The program does 'screen scraping' and has to deal with a lot of strings. That part of the code is highly variable. Most of the developers at my company use C, so my original design was to write some portions in C and use Lua for the part that dealt with strings and changed frequently. I was told, 'You have to justify your use of the scripting language.' So I reworked my design using 100% Java, and was told Java won't have enough performance; you should do the whole thing in C. I'm not controlling lasers or doing image processing - just some screen scraping. I still have to provide justification for using anything but C - so what justification can I provide?

    Read the article

  • ReaderWriterLockSlim and Pulse/Wait

    - by Jono
    Is there an equivalent of Monitor.Pulse and Monitor.Wait that I can use in conjunction with a ReaderWriterLockSlim? I have a class where I've encapsulated multi-threaded access to an underlying queue. To enqueue something, I acquire a lock that protects the underlying queue (and a couple of other objects), then add the item and Monitor.Pulse the locked object to signal that something was added to the queue:

        public void Enqueue(ITask task)
        {
            lock (mutex)
            {
                underlying.Enqueue(task);
                Monitor.Pulse(mutex);
            }
        }

    On the other end of the queue, I have a single background thread that continuously processes messages as they arrive on the queue. It uses Monitor.Wait when there are no items in the queue, to avoid unnecessary polling. (I consider this to be good design, but any flames (within reason) are welcome if they help me learn otherwise.)

        private void DequeueForProcessing(object state)
        {
            while (true)
            {
                ITask task;
                lock (mutex)
                {
                    while (underlying.Count == 0)
                    {
                        Monitor.Wait(mutex);
                    }
                    task = underlying.Dequeue();
                }
                Process(task);
            }
        }

    As more operations are added to this class (requiring read-only access to the lock-protected underlying), someone suggested using ReaderWriterLockSlim. I've never used the class before, and assuming it can offer some performance benefit, I'm not against it, but only if I can keep the Pulse/Wait design.
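    ReaderWriterLockSlim has no built-in Pulse/Wait. One way to keep the blocking-consumer shape (a sketch using .NET 4's SemaphoreSlim; a plain Semaphore works the same way on older frameworks) is to count queued items in a semaphore and use the write lock only around the queue itself:

        private readonly SemaphoreSlim available = new SemaphoreSlim(0);
        private readonly ReaderWriterLockSlim rwLock = new ReaderWriterLockSlim();

        public void Enqueue(ITask task)
        {
            rwLock.EnterWriteLock();
            try { underlying.Enqueue(task); }
            finally { rwLock.ExitWriteLock(); }
            available.Release();  // wakes one waiting consumer
        }

        private void DequeueForProcessing(object state)
        {
            while (true)
            {
                available.Wait();  // blocks until something was enqueued
                ITask task;
                rwLock.EnterWriteLock();
                try { task = underlying.Dequeue(); }
                finally { rwLock.ExitWriteLock(); }
                Process(task);
            }
        }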

    Read the article

  • How would I go about sharing variables in a C++ class with Lua?

    - by Nicholas Flynt
    I'm fairly new to Lua. I've been working on implementing Lua scripting for logic in a game engine I'm putting together, and I've had no trouble so far getting Lua up and running through the engine; I'm able to call Lua functions from C and C functions from Lua. The way the engine works now, each Object class contains a set of variables that the engine can quickly iterate over to draw or process for physics. While game objects all need to access and manipulate these variables in order for the game engine itself to see any changes, they are free to create their own variables, and Lua is exceedingly flexible about this, so I don't foresee any issues.

    Anyway, currently the game engine side of things is sitting in C land, and I really want it to stay there for performance reasons. So in an ideal world, when spawning a new game object, I'd need to be able to give Lua read/write access to this standard set of variables as part of the Lua object's base class, which its game logic could then proceed to run wild with. So far, I'm keeping two separate tables of objects in place: Lua spawns a new game object which adds itself to a numerically indexed global table of objects, and then proceeds to call a C++ function, which creates a new GameObject class and registers the Lua index (an int) with the class. So far so good: C++ functions can now see the Lua object and easily perform operations or call functions in Lua land using dostring.

    What I need to do now is take the C++ variables, part of the GameObject class, and expose them to Lua, and this is where Google is failing me. I've encountered a very nice method here which details the process using tags, but I've read that this method is deprecated in favor of metatables. What is the ideal way to accomplish this? Is it worth the hassle of learning how to pass class definitions around using Luabind or some equivalent method, or is there a simple way I can just register each variable (once, at spawn time) with the global Lua object? What's the "current" best way to do this, as of Lua 5.1.4?
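    A minimal sketch of the metatable route in the Lua 5.1 C API (GameObject, its fields and the userdata layout are illustrative; registering the metatable with luaL_newmetatable and setting __index/__newindex on it is omitted):

        static int obj_index(lua_State *L) {
            GameObject *obj = *(GameObject **)luaL_checkudata(L, 1, "GameObject");
            const char *key = luaL_checkstring(L, 2);
            if (strcmp(key, "x") == 0) { lua_pushnumber(L, obj->x); return 1; }
            return 0;  /* unknown key: nil */
        }

        static int obj_newindex(lua_State *L) {
            GameObject *obj = *(GameObject **)luaL_checkudata(L, 1, "GameObject");
            const char *key = luaL_checkstring(L, 2);
            if (strcmp(key, "x") == 0) obj->x = (float)luaL_checknumber(L, 3);
            return 0;
        }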

    Read the article

  • Boost ASIO async_write "Vector iterator not dereferencable"

    - by xeross
    Hey, I've been working on an async Boost server program, and so far I've got it to connect. However, I'm now getting a "Vector iterator not dereferencable" error. I suspect the vector gets destroyed or dereferenced before the packet gets sent, thus causing the error.

        void start()
        {
            Packet packet;
            packet.setOpcode(SMSG_PING);
            send(packet);
        }

        void send(Packet packet)
        {
            cout << "DEBUG> Transferring packet with opcode " << packet.GetOpcode() << endl;
            async_write(m_socket, buffer(packet.write()),
                boost::bind(&Session::writeHandler, shared_from_this(),
                            placeholders::error, placeholders::bytes_transferred));
        }

        void writeHandler(const boost::system::error_code& errorCode, size_t bytesTransferred)
        {
            cout << "DEBUG> Transferred " << bytesTransferred << " bytes to "
                 << m_socket.remote_endpoint().address().to_string() << endl;
        }

    start() gets called once a connection is made. packet.write() returns a uint8_t vector. Would it matter if I changed void send(Packet packet) to void send(Packet& packet)? Not in relation to this problem, but performance-wise.
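    The suspicion about lifetime is well founded: buffer() does not copy, and the temporary vector returned by packet.write() dies before async_write completes. One fix (a sketch) is to keep the bytes alive by binding a shared_ptr into the handler:

        void send(Packet& packet)
        {
            boost::shared_ptr<std::vector<uint8_t> > data(
                new std::vector<uint8_t>(packet.write()));
            async_write(m_socket, buffer(*data),
                boost::bind(&Session::writeHandler, shared_from_this(),
                            data,  // the bound copy keeps the buffer alive
                            placeholders::error, placeholders::bytes_transferred));
        }

        // writeHandler gains a matching first parameter:
        void writeHandler(boost::shared_ptr<std::vector<uint8_t> > data,
                          const boost::system::error_code& errorCode,
                          size_t bytesTransferred);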

    Read the article

  • HTML5 audio object doesn't play on iPad (when called from a setTimeout)

    - by Dan Halliday
    I have a page with a hidden <audio> object which is being started and stopped via a custom button in javascript. (The reason being I want to customise the button, and drawing an audio player seems to destroy rendering performance on iPad anyway.)

    A simplified example (in coffeescript):

        # Works fine on all browsers
        constructor: (@_button, @_audio) ->
          @_button.on 'click', @_play   # Bind button's click event with jQuery

        _play: (e) =>
          @_audio[0].play()             # Call play() on audio element

    The audio plays fine when triggered from a function bound to a click event, but I actually want an animation to complete before the file plays, so I put .play() inside a setTimeout. However, I just can't get this to work:

        # Will not play on iPad
        constructor: (@_button, @_audio) ->
          @_button.on 'click', @_play   # Bind button's click event with jQuery

        _play: (e) =>
          setTimeout (=>                # Declare a 300ms timeout
            @_audio[0].play()           # Call play() on audio element
          ), 300

    I've checked that @_audio (this._audio) is in scope and that its play() method exists. Why doesn't this work on iPad?
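    A likely culprit (my assumption from known iOS behaviour, not confirmed in the question): iOS only allows audio playback that starts synchronously inside a user gesture, and a setTimeout callback no longer counts as one. A common workaround is to "unlock" the element in the click handler and merely resume it later:

        _play: (e) =>
          @_audio[0].play()    # runs inside the gesture, unlocking the element
          @_audio[0].pause()
          setTimeout (=>
            @_audio[0].play()  # now permitted: the element was unlocked above
          ), 300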

    Read the article

  • Testing approach for multi-threaded software

    - by Shane MacLaughlin
    I have a piece of mature geospatial software that has recently had areas rewritten to take better advantage of the multiple processors available in modern PCs. Specifically, display, GUI, spatial searching, and main processing have all been hived off to separate threads. The software has a pretty sizeable GUI automation suite for functional regression, and another smaller one for performance regression. While all automated tests are passing, I'm not convinced that they provide nearly enough coverage in terms of finding bugs relating to race conditions, deadlocks, and other nasties associated with multi-threading. What techniques would you use to see if such bugs exist? What techniques would you advocate for rooting them out, assuming there are some in there to root out?

    What I'm doing so far is running the GUI functional automation on the app under a debugger, so that I can break out of deadlocks and catch crashes, and I plan to make a bounds-checker build and repeat the tests against that version. I've also carried out a static analysis of the source via PC-lint in the hope of locating potential deadlocks, but haven't had any worthwhile results. The application is C++, MFC, multiple document/view, with a number of threads per doc. The locking mechanism I'm using is based on an object that holds a pointer to a CMutex, which is locked in the ctor and freed in the dtor. I use local variables of this object to lock various bits of code as required, and my mutex has a timeout that fires a warning if the timeout is reached. I avoid locking where possible, using resource copies instead. What other tests would you carry out?
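    For reference, the RAII lock described above usually looks something like this in MFC (a sketch; the timeout value and warning are illustrative):

        class ScopedLock {
        public:
            explicit ScopedLock(CMutex& m, DWORD timeoutMs = 5000)
                : m_lock(&m)
            {
                if (!m_lock.Lock(timeoutMs))
                    TRACE("ScopedLock: timeout - possible deadlock\n");
            }
            // CSingleLock releases the mutex in its own destructor.
        private:
            CSingleLock m_lock;
        };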

    Read the article

  • AppFabric caching's local cache isn't working for us... What are we doing wrong?

    - by Olly
    We are using AppFabric as the second-level cache for an NHibernate asp.net application comprising a customer-facing website and an admin website. They are both connected to the same cache, so when admin updates something, the customer-facing site is updated. It seems to be working OK - we have a cache cluster on a separate server and all is well - but we want to enable the local cache to get better performance. However, it doesn't seem to be working. We have enabled it like this:

        bool UseLocalCache = ...;
        int LocalCacheObjectCount = int.MaxValue;
        TimeSpan LocalCacheDefaultTimeout = TimeSpan.FromMinutes(3);
        DataCacheLocalCacheInvalidationPolicy LocalCacheInvalidationPolicy = DataCacheLocalCacheInvalidationPolicy.TimeoutBased;

        if (UseLocalCache)
        {
            configuration.LocalCacheProperties = new DataCacheLocalCacheProperties(
                LocalCacheObjectCount,
                LocalCacheDefaultTimeout,
                LocalCacheInvalidationPolicy
            );
            // configuration.NotificationProperties = new DataCacheNotificationProperties(500, TimeSpan.FromSeconds(300));
        }

    Initially we tried using a timeout invalidation policy (3 mins) and our app felt like it was running faster. HOWEVER, we noticed that if we changed something in the admin site, it was immediately updated in the live site. As we are using timeouts, not notifications, this demonstrates that the local cache isn't being queried (or is, but always misses). cache.GetType().Name returns "LocalCache" - so the factory has made a local cache. Running "Get-Cache-Statistics MyCache" in PS on my dev environment (asp.net app running locally from VS 2008, cache cluster running on a separate W2K8 machine) shows a handful of Request Counts. However, on the production environment, the Request Count increases dramatically.

    We tried following the method here to see the cache client-server traffic: http://blogs.msdn.com/b/appfabriccat/archive/2010/09/20/appfabric-cache-peeking-into-client-amp-server-wcf-communication.aspx - but the log file had nothing but the initial header in it, i.e. no logging either. I can't find anything on SO or Google. Have we done something wrong? Have we got a screwy install of AppFabric? We installed it via Web Platform Installer, I think. (Note: the IIS box running asp.net isn't in the cluster - it is just the client.) Any insights gratefully received!

    Read the article

  • Python optimization problem?

    - by user342079
    Alright, I had this homework recently (don't worry, I've already done it, but in C++), but I got curious how I could do it in Python. The problem is about 2 light sources that emit light. I won't get into details though. Here's the code (that I've managed to optimize a bit in the latter part):

        import math, array
        import numpy as np
        from PIL import Image

        size = (800,800)
        width, height = size
        s1x = width * 1./8
        s1y = height * 1./8
        s2x = width * 7./8
        s2y = height * 7./8
        r,g,b = (255,255,255)

        arr = np.zeros((width,height,3))
        hy = math.hypot
        print 'computing distances (%s by %s)'%size,
        for i in xrange(width):
            if i%(width/10)==0: print i,
            if i%20==0: print '.',
            for j in xrange(height):
                d1 = hy(i-s1x,j-s1y)
                d2 = hy(i-s2x,j-s2y)
                arr[i][j] = abs(d1-d2)
        print ''

        arr2 = np.zeros((width,height,3),dtype="uint8")
        for ld in [200,116,100,84,68,52,36,20,8,4,2]:
            print 'now computing image for ld = '+str(ld)
            arr2 *= 0
            arr2 += abs(arr%ld-ld/2)*(r,g,b)/(ld/2)
            print 'saving image...'
            ar2img = Image.fromarray(arr2)
            ar2img.save('ld'+str(ld).rjust(4,'0')+'.png')
            print 'saved as ld'+str(ld).rjust(4,'0')+'.png'

    I have managed to optimize most of it, but there's still a huge performance gap in the part with the two for loops, and I can't seem to think of a way to bypass that using common array operations... I'm open to suggestions :D
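    The double loop is where nearly all the time goes; numpy can compute the whole distance field at once (a sketch of the vectorised equivalent):

        i, j = np.mgrid[0:width, 0:height]
        d1 = np.hypot(i - s1x, j - s1y)
        d2 = np.hypot(i - s2x, j - s2y)
        arr = np.abs(d1 - d2)[:, :, np.newaxis].repeat(3, axis=2)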

    Read the article

  • Undefined Web.config error in VS 2008

    - by user1066050
    I'm working on a web app using VS 2008, .NET 3.5 and C#. Most of the projects in the solution are classic asp.net pages with some MVC 1 in the mix; the rest are shared libraries. The solution is some 5 years old, has gone through a variety of developers working on it, and clearly has some performance and architectural issues. Previously, I worked on the project using VS 2008 on a Win XP machine, but I have just transitioned to a new box running Win 7 Ultimate. To do so, I installed VS 2008 and asp.net 3.5. To support future work on the solution I also installed VS 2010 and asp.net 4.0.

    Opening the solution on the new box with VS 2008 works fine, and it builds without error. However, when I attempt to run it with the debugger, I get the following message: "There is an error in web.config. Please correct before proceeding. (You might rename the current web.config and add a new one.)"

    I think it's clear that there is some sort of environmental issue regarding web.config on the new machine, but the error message is not helpful. Adding a new web.config is not an option, as the existing one is quite long and involved (too much to post here). I'm hoping someone has a suggestion or two about where I might look for missing elements or changed configurations that might produce such an error message. Failing that, I'll revisit this post and provide the web.config in the hope that will elicit further help. Thanks to all in advance for taking a look at this. The StackOverflow community has helped me many times in the past with pertinent answers, although this is my first posting. Jeff

    Read the article

  • How good is the memory mapped Circular Buffer on Wikipedia?

    - by abroun
    I'm trying to implement a circular buffer in C, and have come across this example on Wikipedia. It looks as if it would provide a really nice interface for anyone reading from the buffer, as reads which wrap around from the end to the beginning of the buffer are handled automatically, so all reads are contiguous. However, I'm a bit unsure about using it straight away, as I don't really have much experience with memory mapping or virtual memory and I'm not sure that I fully understand what it's doing.

    What I think I understand is that it maps a shared-memory file the size of the buffer into memory twice. Then, whenever data is written into the buffer, it appears in memory in two places at once. This allows all reads to be contiguous.

    What would be really great is if someone with more experience of POSIX memory mapping could have a quick look at the code and tell me if the underlying mechanism used is really that efficient. Am I right in thinking, for example, that the file in /dev/shm used for the shared memory always stays in RAM, or could it get written to the hard drive (a performance hit) at some point? Are there any gotchas I should be aware of? As it stands, I'm probably going to use a simpler method for my current project, but it'd be good to understand this and have it in my toolbox for the future. Thanks in advance for your time.
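    For what it's worth, the core of the trick is small enough to sketch (POSIX; error handling omitted, size must be a multiple of the page size, and the shm name is illustrative):

        int fd = shm_open("/ringbuf", O_RDWR | O_CREAT, 0600);
        ftruncate(fd, size);
        char *base = mmap(NULL, 2 * size, PROT_NONE,
                          MAP_ANONYMOUS | MAP_PRIVATE, -1, 0);  /* reserve range */
        mmap(base,        size, PROT_READ | PROT_WRITE,
             MAP_SHARED | MAP_FIXED, fd, 0);
        mmap(base + size, size, PROT_READ | PROT_WRITE,
             MAP_SHARED | MAP_FIXED, fd, 0);
        /* a write at base + n is now also visible at base + size + n */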

    Read the article

  • Heavy MySQL operation & time constraints [closed]

    - by Rahul Jha
    There is a performance issue that I am stuck on in my application, which is based on PHP & MySQL. The application is for data migration, where data has to be uploaded and, after various processes (cleaning of foreign characters, duplicate checks, id generation), inserted into one central table and then into 5 different tables. There, an id is generated and that id has to be updated back to the central table. There are different sets of records and validation rules.

    The problem I am facing is that when I insert, say, a 4K-row file (containing 20 columns), it works fine: within 15 minutes everything gets inserted. But when I insert the same records again, it takes one hour (ideally it should finish quickly, marking the earlier inserted data as duplicate). Going through the log file, I noticed there is a MySQL SELECT statement where I check for duplicates and get the IDs that are duplicates. Then I call a function inside a for loop which inserts records into the 5 tables and updates the id in the central table. This called function takes the majority of the time in the whole process.

    P.S. The records have to be inserted record by record. Kindly suggest a solution.

        //This is the sample code
        $query = mysql_query("SELECT DISTINCT p1.ID
                              FROM table1 p1, table2 p2, table3 a
                              WHERE p2.datatype = 0
                              AND (p1.datatype = 1 || p1.datatype = 2)
                              AND p2.ID = 0
                              AND p1.ID = a.ID
                              AND p1.coulmn1 = p2.column1
                              AND p1.coulmn2 = p2.coulmn2
                              AND a.coulmn3 = p2.column3");
        $num = mysql_num_rows($query);
        for ($i = 0; $i < $num; $i++)
        {
            $f = mysql_result($query, $i, "ID");
            //calling function
            RecordInsert($f);
        }
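    One small, low-risk change worth sketching: mysql_result() seeks into the result set on every call, so iterating with a fetch function is usually cheaper for large result sets (same old mysql_* API as in the question):

        while ($row = mysql_fetch_assoc($query)) {
            RecordInsert($row['ID']);
        }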

    Read the article
