Search Results

Search found 25532 results on 1022 pages for 'call graph'.

Page 157/1022 | < Previous Page | 153 154 155 156 157 158 159 160 161 162 163 164  | Next Page >

  • Asynchronous background processes in Python?

    - by Geuis
    I have been using this as a reference, but I have not been able to accomplish exactly what I need: http://stackoverflow.com/questions/89228/how-to-call-external-command-in-python/92395#92395 I have also been reading this: http://www.python.org/dev/peps/pep-3145/ For our project, we have 5 svn checkouts that need to update before we can deploy our application. In my dev environment, where speedy deployments are a bit more important for productivity than in a production deployment, I have been working on speeding up the process. I have a bash script that has been working decently but has some limitations. I fire up multiple svn updates with the following bash command:

        (svn update /repo1) & (svn update /repo2) & (svn update /repo3) &

    These all run in parallel and it works pretty well. I also use this pattern in the rest of the build script for firing off each ant build, then moving the wars to Tomcat. However, I have no control over stopping the deployment if one of the updates or a build fails. I'm rewriting my bash script in Python so I have more control over branches and the deployment process. I am using subprocess.call() to fire off the 'svn update /repo' commands, but each one acts sequentially. I tried '(svn update /repo) &' and those all fire off, but the result code returns immediately, so I have no way to determine whether a particular command failed in the asynchronous mode.

        import subprocess
        subprocess.call( 'svn update /repo1', shell=True )
        subprocess.call( 'svn update /repo2', shell=True )
        subprocess.call( 'svn update /repo3', shell=True )

    I'd love to find a way to have Python fire off each Unix command, and if any of the calls fails at any time, the entire script stops.
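    A minimal sketch of that idea with subprocess.Popen: start everything in parallel, then wait and fail fast (the repository paths are illustrative):

        import subprocess

        repos = ['/repo1', '/repo2', '/repo3']

        # Start all updates in parallel; Popen returns immediately.
        procs = [subprocess.Popen(['svn', 'update', repo]) for repo in repos]

        # Wait for each one; abort the whole script on the first failure seen.
        for repo, proc in zip(repos, procs):
            if proc.wait() != 0:
                raise SystemExit('svn update failed for ' + repo)

    Polling proc.poll() in a loop instead of wait() would let the script abort the moment any update fails, rather than in list order.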

    Read the article

  • ANTS Memory Profiler 7.0

    - by James Michael Hare
    I had always been a fan of ANTS products (Reflector is absolutely invaluable, and their performance profiler is great as well – very easy to use!), so I was curious to see what the ANTS Memory Profiler could show me.

    Background

    While a performance profiler will track how much time is typically spent in each unit of code, a memory profiler gives you much more detail on how and where your memory is being consumed and released in a program. As an example, I’d been working on a data access layer at work to call a market data web service.  This web service would take a list of symbols to quote and would return back the quote data.  To help consolidate the thousands of web requests per second we get and reduce load on the web services, we implemented a 5-second cache of quote data.  Not quite long enough that customers will typically notice a quote go “stale”, but just long enough to be able to collapse multiple quote requests for the same symbol in a short period of time. A 5-second cache may not sound like much, but it actually pays off by saving us roughly 42% of our web service calls, while still providing relatively up-to-date information.  The question is whether or not the extra memory involved in maintaining the cache was worth it, so I decided to fire up the ANTS Memory Profiler and take a look at memory usage.

    First Impressions

    The main thing I’ve always loved about the ANTS tools is their ease of use.  Pretty much everything is right there in front of you in a way that makes it easy for you to find what you need with little digging required.  I’ve worked with other, older profilers before (that shall remain nameless other than to hint it was created by a very large chip maker) where it was a mind-boggling experience to figure out how to do simple tasks. Not so with AMP.  The opening dialog is very straightforward.  You can choose from here whether to debug an executable, a web application (either in IIS or from VS’s web development server), windows services, etc. So I chose a .NET Executable and navigated to the build location of my test harness.  Then began profiling. At this point, while the application is running, you can see a chart of the memory as it ebbs and wanes with allocations and collections.  At any given point in time, you can take snapshots (to compare states), zoom in, or choose to stop at any time.

    Snapshots

    Taking a snapshot also gives you a breakdown of the managed memory heaps for each generation, so you get an idea how many objects are staying around for extended periods of time (as an object lives and survives collections, it gets promoted into higher generations where collection becomes less frequent). Generating a snapshot brings up an analysis view with very handy graphs that show your generation sizes.  Almost all my memory is in Generation 1 in the managed memory component of the first graph, which is good news to me, because Gen 2 collections are much rarer.  I once made the mistake of caching data for 30 minutes and found it didn’t get collected very quickly after I released my reference because it had been promoted to Gen 2 – doh!

    Analysis

    It looks like (from the second pie chart) that the majority of the allocations were in the string class. This is also expected for me because the majority of the memory allocated is in the web service responses, so the entities I’m adapting to (to prevent being too tightly coupled to the web service proxy classes, which can change easily out from under me) don’t seem to be taking a significant portion of memory. I also appreciate that they have clear summary text in key places such as “No issues with large object heap fragmentation were detected”.  For novice users, this type of summary information can be critical to getting them to use a tool and develop a good working knowledge of it. There is also a handy link at the bottom for “What to look for on the summary” which loads a web page of help on key points to look for. Clicking over to the session overview, it’s easy to compare the samples at each snapshot to see how your memory is growing, shrinking, or staying relatively the same.  Looking at my snapshots, I’m pretty happy with the fact that memory allocation and heap size seem to be fairly stable and in control. Once again, you can check on the large object heap, generation one heap, and generation two heap across each snapshot to spot trends. Back on the analysis tab, we can go to the [Class List] button to get an idea what classes are making up the majority of our memory usage.  As was little surprise to me, System.String was the clear majority of my allocations, though I found it surprising that System.Reflection.RuntimeMethodInfo came in second.  I was curious about this, so I selected it and went into the [Instance Categorizer].  This view let me see where these instances of RuntimeMethodInfo were coming from. So I scrolled back through the graph, and discovered that these were being held by the System.ServiceModel.ChannelFactoryRefCache, and I was satisfied this was just an artifact of my WCF proxy. I also like that down at the bottom of the Instance Categorizer it gives you a series of filters and offers to guide you on which filter to use based on the problem you are trying to find.  For example, if I suspected a memory leak, I might try to filter for survivors in growing classes.  This means that for instances of a class that are growing in memory (more are being created than cleaned up), which ones are survivors (not collected) from garbage collection.  This might allow me to drill down and find places where I’m holding onto references by mistake and not freeing them! Finally, if you want to really see all your instances and who is holding onto them (preventing collection), you can go to the “Instance Retention Graph”, which creates a graph showing what references are being held in memory and who is holding onto them.

    Visual Studio Integration

    Of course, VS has its own profiler built in – and for a free bundled profiler it is quite capable – but AMP gives a much cleaner and easier-to-use experience, and when you install it you also get the option of letting it integrate directly into VS. So once you go back into VS after installation, you’ll notice an ANTS menu which lets you launch the ANTS profiler directly from Visual Studio. Clicking on one of these options fires up the project in the profiler immediately, allowing you to get right in.  It doesn’t integrate with the Visual Studio windows themselves (like the VS profiler does), but still the plethora of information it provides and the clear and concise manner in which it presents it makes it well worth it.

    Summary

    If you like the ANTS series of tools, you shouldn’t be disappointed with the ANTS Memory Profiler. It was so easy to use that I was able to jump in with very little product knowledge and get the information I was looking for. I’ve used other profilers before that came with 3-inch-thick tomes that you had to read in order to get anywhere with the tool, and this one is not like that at all.  It’s built for your everyday developer to get in and find their problems quickly, and I like that!

    Technorati Tags: Influencers,ANTS,Memory,Profiler
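    The 5-second quote cache described above can be sketched roughly like this (a simplified illustration only; the Quote type, the fetch delegate, and the locking strategy are all assumptions, not the author's actual code):

        using System;
        using System.Collections.Generic;

        // Hypothetical sketch of a 5-second TTL quote cache.
        public class Quote { /* price fields elided */ }

        public class QuoteCache
        {
            private readonly Dictionary<string, KeyValuePair<DateTime, Quote>> _cache =
                new Dictionary<string, KeyValuePair<DateTime, Quote>>();
            private readonly object _lock = new object();
            private readonly Func<string, Quote> _fetch;   // the real web service call

            public QuoteCache(Func<string, Quote> fetch) { _fetch = fetch; }

            public Quote GetQuote(string symbol)
            {
                lock (_lock)
                {
                    KeyValuePair<DateTime, Quote> entry;
                    if (_cache.TryGetValue(symbol, out entry) &&
                        (DateTime.UtcNow - entry.Key).TotalSeconds < 5.0)
                        return entry.Value;   // fresh enough: collapse this request
                }

                Quote quote = _fetch(symbol);   // cache miss: hit the web service

                lock (_lock)
                    _cache[symbol] = new KeyValuePair<DateTime, Quote>(DateTime.UtcNow, quote);

                return quote;
            }
        }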

    Read the article

  • Any porting available of backtrace for uclibc?

    - by user303967
    We are running uClibc Linux on an ARM9. The problem is that uClibc doesn't support backtrace, so when a core dump happens we cannot grab the call stack. Does anyone have a good solution for this? For example, an existing port of backtrace for uClibc, or any other good method to grab the call stack when a core dump happens (uClibc + ARM + Linux)? Thanks.
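    One approach that avoids glibc's backtrace() entirely is the unwinder that ships with GCC's libgcc; a rough sketch (this assumes the toolchain provides unwind.h and the code is built with unwind tables, e.g. -funwind-tables on ARM):

        #include <unwind.h>
        #include <stdio.h>

        /* Called once per stack frame by _Unwind_Backtrace. */
        static _Unwind_Reason_Code trace_frame(struct _Unwind_Context *ctx, void *arg)
        {
            int *depth = (int *)arg;
            printf("#%d: pc=0x%lx\n", *depth, (unsigned long)_Unwind_GetIP(ctx));
            (*depth)++;
            return _URC_NO_REASON;
        }

        void print_call_stack(void)
        {
            int depth = 0;
            _Unwind_Backtrace(trace_frame, &depth);
        }

    Installing a SIGSEGV handler that calls print_call_stack() would give a stack trace at crash time instead of relying on the core file.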

    Read the article

  • Modifying CollectionEditor in PropertyGrid

    - by Chris
    I currently have a list containing Calls, where Call is the base class. If I want to add derived classes of Call to the list, I know to do the following:

        public class CustomCollectionEditor : System.ComponentModel.Design.CollectionEditor
        {
            private Type[] types;

            public CustomCollectionEditor(Type type) : base(type)
            {
                types = new Type[] { typeof(Call), typeof(CappedCall) };
            }

            protected override Type[] CreateNewItemTypes()
            {
                return types;
            }
        }

        public class DisplayList
        {
            public DisplayList() { }

            [Editor(typeof(CustomCollectionEditor), typeof(UITypeEditor))]
            [DataMember]
            public List<Call> ListCalls { get; set; }
        }

    My question is: is there any way of moving where you mark up the Type[] containing all the possible types the list can contain? I thought of adding the following to my CustomCollectionEditor class, but this doesn't work:

        public CustomCollectionEditor(Type type, List<Type> types_) : base(type)
        {
            types = types_.ToArray();
        }

    It would be ideal if I could mark up which classes the CustomCollectionEditor needs to be aware of in the DisplayList class somehow.
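    One way to get that effect is a custom attribute on the property that the editor reads back through its Context; a sketch (the attribute class is invented here, and it assumes CreateNewItemTypes runs after the editor has been invoked, when Context is populated):

        // Hypothetical attribute naming the item types a collection may hold.
        [AttributeUsage(AttributeTargets.Property)]
        public class CollectionItemTypesAttribute : Attribute
        {
            public Type[] Types { get; private set; }

            public CollectionItemTypesAttribute(params Type[] types)
            {
                Types = types;
            }
        }

        public class CustomCollectionEditor : System.ComponentModel.Design.CollectionEditor
        {
            public CustomCollectionEditor(Type type) : base(type) { }

            protected override Type[] CreateNewItemTypes()
            {
                // Read the item types off the property being edited.
                var attr = (CollectionItemTypesAttribute)Context.PropertyDescriptor
                    .Attributes[typeof(CollectionItemTypesAttribute)];
                return attr != null ? attr.Types : base.CreateNewItemTypes();
            }
        }

        // Marked up where the list lives:
        // [Editor(typeof(CustomCollectionEditor), typeof(UITypeEditor))]
        // [CollectionItemTypes(typeof(Call), typeof(CappedCall))]
        // public List<Call> ListCalls { get; set; }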

    Read the article

  • WebSphere Application Server EJB Optimization

    - by Chris Aldrich
    We are working on developing a Java EE based application. Our application is Java 1.5 compatible and will be deployed to WAS ND 6.1.0.21 with the EJB 3.0 and Web Services feature packs. The configuration is currently one cell with two clusters. Each cluster will have two nodes. Our application, or our system as I should rather say, comes in two or three parts.

    Part 1: An ear deployed to one cluster that contains 3rd party vendor code combined with customization code. Their code is EJB 2.0 compliant and has a lot of Remote Home interfaces.

    Part 2: An ear deployed to the same cluster as the first ear. This ear contains EJB 3's that make calls into the EJB 2's supplied by the vendor and the custom code. These EJB 3's are used by the JSF UI also packaged with the EAR, and some of them are also exposed as web services (JAX-WS 2.0 with SOAP 1.2 compliance) for other clients.

    Part 3: There may be other services that do not depend on our vendor/custom code app. These services will be EJB 3.0's and web services that are deployed to the other cluster.

    Per a recommendation from some IBM staff on site here, communication between nodes in a cluster can be EJB RMI. But if we are going across clusters and/or other cells, then the communication should be web services. That said, some of us are wondering about performance and optimizing communication for the speed of our applications that will use our web services and EJB's. Right now most EJB's are exposed as remote (and our vendor set theirs up that way, rather than also exposing local home interfaces). We are wondering if WAS does any optimizations between apps in the same node/cluster node space. If two apps are installed in the same area and they call each other via a remote home interface, is WAS smart enough to make it a local home interface call? Are there other optimization techniques? Should we consider them? Should we not? What are the costs/benefits?

    Here is the question from one of our team members as sent in their email: Supposing we develop our EJBs as remote EJBs, where our UI controller code is talking to our EXT java services via EJB3... what are our options for performance optimization when both the EJB server and client are running in the same container? As one point of reference, google has given me some oooooold websphere performance tuning documentation from 2000 that explains a tuning configuration you can set to enable Call By Reference for EJB communication when they're in the same application server JVM. It states the following:

    Because EJBs are inherently location independent, they use a remote programming model. Method parameters and return values are serialized over RMI-IIOP and returned by value. This is the intrinsic RMI "Call By Value" model. WebSphere provides the "No Local Copies" performance optimization for running EJBs and clients (typically servlets) in the same application server JVM. The "No Local Copies" option uses "Call By Reference" and does not create local proxies for called objects when both the client and the remote object are in the same process. Depending on your workload, this can result in a significant overhead savings. Configure "No Local Copies" by adding the following two command line parameters to the application server JVM:

        -Djavax.rmi.CORBA.UtilClass=com.ibm.CORBA.iiop.Util
        -Dcom.ibm.CORBA.iiop.noLocalCopies=true

    CAUTION: The "No Local Copies" configuration option improves performance by changing "Call By Value" to "Call By Reference" for clients and EJBs in the same JVM. One side effect of this is that the Java object derived (non-primitive) method parameters can actually be changed by the called enterprise bean.

    We will also be using Process Server 6.2 and WESB 6.2 in the future. Any ideas? Recommendations? Thanks

    Read the article

  • Big Visible Charts

    - by Robert May
    An important part of Agile is the concept of transparency and visibility. In properly functioning teams, stakeholders can look at any team at any time in the iteration or release and see how that team is doing by simply looking at what we call Big Visible Charts. If you’ve done Scrum, you’ve seen these charts. However, interpreting these charts can often be an art form. There are several different charts that can be useful. In this newsletter, I’ll focus on the Iteration Burndown and Cumulative Flow charts. I’ve included a copy of the spreadsheet that I used to create the charts, and if you don’t have a tool that creates them for you, you can use this spreadsheet to do so. Our preferred tool for managing Scrum projects is Rally. Rally creates all of these charts for you, saving you quite a bit of time.

    The Iteration Burndown and Cumulative Flow Charts

    This is the main chart that teams use. Although less useful to stakeholders, this chart is critical to the team and provides quite a bit of information to the team about how their iteration is going. Most charts are a combination of the charts below, so you may need to combine aspects of each section to understand what is happening in your iterations.

    Ideal

    Ah, isn’t that a pretty picture? Unfortunately, it’s also very unrealistic. I’ve seen iterations that come close to ideal, but never one that matches perfectly. If your iteration matches perfectly, chances are someone is playing with the numbers. Reality is just too difficult to have a burndown chart that matches this exactly.

    Late Planning

    The iteration started, but the team didn’t. You can tell this by the fact that the real number of estimated hours didn’t appear until day two. In the cumulative flow, you can also see that nothing was defined on days one and two. You want to avoid situations like this. You’ll note that the team had to burn faster than is ideal to meet the iteration because of the late planning. This often results in long weeks and days.

    Testing Starved

    Determining whether or not testing is starved is difficult without the cumulative flow. The pattern in the burndown could be nothing more than developers not completing stories early enough, or could be caused by stories being too big. With the cumulative flow, however, you see that only small bites are in progress and stories were completed early, but testers didn’t start testing until the end of the iteration, and didn’t complete testing all stories in the iteration. When this happens, question whether or not your testing resources are sufficient for your team and whether or not acceptance is adequately defined.

    No Testing

    With this one, both graphs show the same thing: the team needs testers and testing! Without testing, what was completed cannot be verified to make sure that it is acceptable to the business. If you find yourself in this situation, review your testing practices and acceptance testing process and make changes today.

    Late Development

    With this situation, both graphs tell a story. In the top graph, you can see that the hours failed to burn down as quickly as the team expected. This could be caused by the team not correctly estimating their hours, or the team could have had illness or some other issue that affected them. Often, when teams are tackling something that is more unknown, they’ll run into technical barriers that cause the burndown to happen slower than expected. In the cumulative flow graph, you can see that not much was completed in the first few days. This could be because of illness or technical barriers or simply poor estimation. Testing was able to keep up with everything that was completed, however.

    No Tool Updating

    When you see graphs that look like this, you can be assured that it’s because the team is not updating the tool that generates the graphs. Review your policy for when they are to update. On the teams that I run, I require that each team member updates the tool at least once daily. You should also check to see how well the team is breaking down stories into tasks. If they’re creating few large tasks, graphs can look similar to this. As a general rule, I never allow tasks, other than Unit Testing and Uncertainty, to be greater than eight hours in duration.

    Scope Increase

    I always encourage team members to enter in however much time they think they have left on a task, even if that means increasing the total amount of time left to do. You get a much better and more realistic picture this way. Increasing time remaining could explain the burndown graph, but by looking at the cumulative flow graph, we can see that stories were added to the iteration and scope was increased. Since planning should consume all of the hours in the iteration, this is almost always a bad thing. If the scope change happened late in the iteration and the hours remaining were well below the ideal burn, then increasing scope is probably OK, but estimation needs to get better. However, with the charts above, that’s clearly not what happened, and the team was required to do extra work to make the iteration. If you find this happening, your product owner and ScrumMasters need training. The team also needs to learn to say no.

    Scope Decrease

    Scope decreases are just as bad as scope increases. Usually, graphs like the ones above show that the team did a poor job of estimating their stories and part way through had to reduce scope to make the iteration. This will happen once in a while, but if you find it’s a pattern on your team, you need to re-evaluate planning. Some teams are hopelessly optimistic. In those cases, I’ll introduce a task I call “Uncertainty.” With Uncertainty, the team estimates how many hours they might need if things don’t go well with the tasks they’ve defined. They try to estimate things that could go poorly and increase the time appropriately. Having an Uncertainty task allows them to have a low and high estimate. Uncertainty should not just be an arbitrary buffer; it must correlate to real uncertainty in the tasks that have been defined.

    Stories Are Too Big

    Often, we see graphs like the ones above. Note that the burndown looks fairly good, other than the chunky acceptance of stories. However, when you look at cumulative flow, you can see that at one point, everything is in progress. This is a bad thing. When you see graphs like this, you’re in one of two states. You may just have a very small team and can only handle one or two stories in your iteration. If you have more than one or two people, then the most likely problem is that your stories are far too big. To combat this, break large high-hour stories into smaller pieces that can be completed and accepted independently. If you don’t, you’ll likely be requiring your testers to do heroic things to complete testing on the last day of the iteration, and you’re much more likely to have the entire iteration fail because of the limited number of things that can be completed.

    Summary

    There are other charts that can be useful when doing Scrum. If you don’t have any big visible charts, you really need to evaluate your process and change. These charts can provide the team a wealth of information and help you write better software. If you have any questions about charts that you’re seeing on your team, contact me with a screen capture of the charts and I’ll tell you what I’m seeing in those charts. I always want this information to be useful, so please let me know if you have other questions.

    Technorati Tags: Agile

    Read the article

  • 'Generic' ViewModel

    - by Ian MacPherson
    Using EF 4, I have several subtypes of a 'Business' entity (customers, suppliers, haulage companies, etc.). They DO need to be subtypes. I am building a general view model which calls into a service from which a generic repository is accessed. As I have 4 subtypes, it would be good to have a 'generic' view model used for all of these. The problem, of course, is that I have to pass a specific type into my generic repository, for example:

        BusinessToRetrieve = _repository.LoadEntity<Customer>(o => o.CustomerID == customerID);

    It would be good to be able to call <SomethingElse> (somethingElse being one or another of the subtypes); otherwise I shall have to create 4 near-identical view models, which seems a waste of course! The subtype entity name is available to the view model, but I've been unable to figure out how to turn the above call into a type. An issue with achieving what I want is that presumably the lambda expression being passed in wouldn't be able to resolve on a 'generic' call?
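    A sketch of the generic-view-model idea, assuming the key can live on the Business base type (the IRepository interface and member names here are illustrative, not the actual code):

        // Hypothetical: one view model shared by all Business subtypes.
        public class BusinessViewModel<T> where T : Business
        {
            private readonly IRepository _repository;

            public BusinessViewModel(IRepository repository)
            {
                _repository = repository;
            }

            public T BusinessToRetrieve { get; private set; }

            public void Load(int businessID)
            {
                // The lambda resolves because BusinessID is declared on the base type.
                BusinessToRetrieve = _repository.LoadEntity<T>(o => o.BusinessID == businessID);
            }
        }

        // var vm = new BusinessViewModel<Customer>(repository); vm.Load(customerID);

    This only works if the predicate uses members of the base type; a per-subtype key like CustomerID would still force a subtype-specific expression.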

    Read the article

  • Scheduling a recurring alarm/event

    - by martinjd
    I have a class that extends Application. In that class I make a call to AlarmManager and pass in an intent. As scheduled, my EventReceiver class, which extends BroadcastReceiver, processes the call in its onReceive method. How would I call the intent again from the onReceive method to schedule another event?
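    A rough sketch of re-arming the alarm from inside EventReceiver's onReceive (the request code, flags, and five-minute interval are illustrative):

        @Override
        public void onReceive(Context context, Intent intent) {
            // ... handle the current event ...

            // Schedule the next run of this same receiver.
            AlarmManager am =
                (AlarmManager) context.getSystemService(Context.ALARM_SERVICE);
            Intent next = new Intent(context, EventReceiver.class);
            PendingIntent pi = PendingIntent.getBroadcast(
                context, 0, next, PendingIntent.FLAG_UPDATE_CURRENT);
            am.set(AlarmManager.ELAPSED_REALTIME_WAKEUP,
                   SystemClock.elapsedRealtime() + 5 * 60 * 1000, pi);
        }

    If the interval is fixed, a single call to AlarmManager.setRepeating() when the Application starts avoids rescheduling by hand.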

    Read the article

  • Constant game speed independent of variable FPS in OpenGL with GLUT?

    - by Nazgulled
    I've been reading Koen Witters' detailed article about different game loop solutions, but I'm having some problems implementing the last one with GLUT, which is the recommended one. After reading a couple of articles, tutorials, and code from other people on how to achieve a constant game speed, I think that what I currently have implemented (I'll post the code below) is what Koen Witters called Game Speed dependent on Variable FPS, the second one in his article.

    First, through my searching experience, there's a couple of people that probably have the knowledge to help out on this but don't know what GLUT is, so I'm going to try and explain (feel free to correct me) the relevant functions of this OpenGL toolkit for my problem. Skip this section if you know what GLUT is and how to play with it.

    GLUT Toolkit: GLUT is an OpenGL toolkit and helps with common tasks in OpenGL. The glutDisplayFunc(renderScene) takes a pointer to a renderScene() function callback, which will be responsible for rendering everything. The renderScene() function will only be called once after the callback registration. The glutTimerFunc(TIMER_MILLISECONDS, processAnimationTimer, 0) takes the number of milliseconds to pass before calling the callback processAnimationTimer(). The last argument is just a value to pass to the timer callback. The processAnimationTimer() will not be called each TIMER_MILLISECONDS but just once. The glutPostRedisplay() function requests GLUT to render a new frame, so we need to call this every time we change something in the scene. The glutIdleFunc(renderScene) could be used to register a callback to renderScene() (this does not make glutDisplayFunc() irrelevant), but this function should be avoided because the idle callback is continuously called when events are not being received, increasing the CPU load. The glutGet(GLUT_ELAPSED_TIME) function returns the number of milliseconds since glutInit was called (or the first call to glutGet(GLUT_ELAPSED_TIME)). That's the timer we have with GLUT. I know there are better alternatives for high-resolution timers, but let's keep with this one for now. I think this is enough information on how GLUT renders frames, so people that didn't know about it can also pitch in on this question to try and help if they feel like it.

    Current Implementation: Now, I'm not sure I have correctly implemented the second solution proposed by Koen, Game Speed dependent on Variable FPS. The relevant code for that goes like this:

        #define TICKS_PER_SECOND 30
        #define MOVEMENT_SPEED 2.0f

        const int TIMER_MILLISECONDS = 1000 / TICKS_PER_SECOND;

        int previousTime;
        int currentTime;
        int elapsedTime;

        void renderScene(void)
        {
            (...)
            // Setup the camera position and looking point
            SceneCamera.LookAt();

            // Do all drawing below...
            (...)
        }

        void processAnimationTimer(int value)
        {
            // setups the timer to be called again
            glutTimerFunc(TIMER_MILLISECONDS, processAnimationTimer, 0);

            // Get the time when the previous frame was rendered
            previousTime = currentTime;

            // Get the current time (in milliseconds) and calculate the elapsed time
            currentTime = glutGet(GLUT_ELAPSED_TIME);
            elapsedTime = currentTime - previousTime;

            /* Multiply the camera direction vector by constant speed then by the
               elapsed time (in seconds) and then move the camera */
            SceneCamera.Move(cameraDirection * MOVEMENT_SPEED * (elapsedTime / 1000.0f));

            // Requests to render a new frame (this will call my renderScene() once)
            glutPostRedisplay();
        }

        void main(int argc, char **argv)
        {
            glutInit(&argc, argv);
            (...)
            glutDisplayFunc(renderScene);
            (...)

            // Setup the timer to be called one first time
            glutTimerFunc(TIMER_MILLISECONDS, processAnimationTimer, 0);
            // Read the current time since glutInit was called
            currentTime = glutGet(GLUT_ELAPSED_TIME);

            glutMainLoop();
        }

    This implementation doesn't feel right. It works in the sense that it helps the game speed stay constant regardless of the FPS, so that moving from point A to point B takes the same time no matter the high/low framerate. However, I believe I'm limiting the game framerate with this approach. Each frame will only be rendered when the timer callback is called, which means the framerate will be roughly around TICKS_PER_SECOND frames per second. This doesn't feel right; you shouldn't limit your powerful hardware, it's wrong. It's my understanding though, that I still need to calculate the elapsedTime. Just because I'm telling GLUT to call the timer callback every TIMER_MILLISECONDS, it doesn't mean it will always do that on time. I'm not sure how I can fix this, and to be completely honest, I have no idea what the game loop is in GLUT, you know, the while( game_is_running ) loop in Koen's article. But it's my understanding that GLUT is event-driven and that the game loop starts when I call glutMainLoop() (which never returns), yes? I thought I could register an idle callback with glutIdleFunc() and use that as a replacement for glutTimerFunc(), only rendering when necessary (instead of all the time as usual), but when I tested this with an empty callback (like void gameLoop() {}) that was basically doing nothing (only a black screen), the CPU spiked to 25% and remained there until I killed the game, then it went back to normal. So I don't think that's the path to follow. Using glutTimerFunc() is definitely not a good approach to perform all movements/animations based on that, as I'm limiting my game to a constant FPS, which is not cool. Or maybe I'm using it wrong and my implementation is not right? How exactly can I have a constant game speed with variable FPS? More exactly, how do I correctly implement Koen's Constant Game Speed with Maximum FPS solution (the fourth one in his article) with GLUT? Maybe this is not possible at all with GLUT? If not, what are my alternatives? What is the best approach to this problem (constant game speed) with GLUT? I originally posted this question on Stack Overflow before being pointed to this site. The following is a different approach I tried after creating the question on SO, so I'm posting it here too.

    Another Approach: I've been experimenting, and here's what I was able to achieve now. Instead of calculating the elapsed time in a timed function (which limits my game's framerate), I'm now doing it in renderScene(). Whenever changes to the scene happen, I call glutPostRedisplay() (i.e. camera moving, some object animation, etc.), which will make a call to renderScene(). I can use the elapsed time in this function to move my camera, for instance. My code has now turned into this:

        int previousTime;
        int currentTime;
        int elapsedTime;

        void renderScene(void)
        {
            (...)

            // Get the time when the previous frame was rendered
            previousTime = currentTime;

            // Get the current time (in milliseconds) and calculate the elapsed time
            currentTime = glutGet(GLUT_ELAPSED_TIME);
            elapsedTime = currentTime - previousTime;

            /* Multiply the camera direction vector by constant speed then by the
               elapsed time (in seconds) and then move the camera */
            SceneCamera.Move(cameraDirection * MOVEMENT_SPEED * (elapsedTime / 1000.0f));

            // Setup the camera position and looking point
            SceneCamera.LookAt();

            // All drawing code goes inside this function
            drawCompleteScene();

            glutSwapBuffers();

            /* Redraw the frame ONLY if the user is moving the camera
               (similar code will be needed to redraw the frame for other events) */
            if(!IsTupleEmpty(cameraDirection))
            {
                glutPostRedisplay();
            }
        }

        void main(int argc, char **argv)
        {
            glutInit(&argc, argv);
            (...)
            glutDisplayFunc(renderScene);
            (...)

            currentTime = glutGet(GLUT_ELAPSED_TIME);

            glutMainLoop();
        }

    In conclusion, it's working, or so it seems. If I don't move the camera, the CPU usage is low and nothing is being rendered (for testing purposes I only have a grid extending for 4000.0f, while zFar is set to 1000.0f). When I start moving the camera, the scene starts redrawing itself. If I keep pressing the move keys, the CPU usage will increase; this is normal behavior. It drops back when I stop moving. Unless I'm missing something, it seems like a good approach for now. I did find this interesting article on iDevGames, and this implementation is probably affected by the problem described in that article. What are your thoughts on that? Please note that I'm just doing this for fun; I have no intention of creating a game to distribute or anything like that, not in the near future at least. If I did, I would probably go with something else besides GLUT. But since I'm using GLUT, and other than the problem described on iDevGames, do you think this latest implementation is sufficient for GLUT? The only real issue I can think of right now is that I'll need to keep calling glutPostRedisplay() every time the scene changes something, and keep calling it until there's nothing new to redraw. A little complexity added to the code for a better cause, I think. What do you think?
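    For reference, a sketch of what Koen's fourth loop (constant game speed, maximum FPS) might look like in GLUT, using the idle callback; updateGame() and renderInterpolated() are placeholders for your own update and draw code:

        #define TICKS_PER_SECOND 25
        #define SKIP_TICKS (1000 / TICKS_PER_SECOND)
        #define MAX_FRAMESKIP 5

        static int nextGameTick;

        void gameLoop(void)   /* registered with glutIdleFunc(gameLoop) */
        {
            int loops = 0;

            /* Catch the game state up in fixed steps, at most MAX_FRAMESKIP per frame. */
            while (glutGet(GLUT_ELAPSED_TIME) > nextGameTick && loops < MAX_FRAMESKIP) {
                updateGame();              /* fixed-step movement, physics, etc. */
                nextGameTick += SKIP_TICKS;
                loops++;
            }

            /* Render as fast as the hardware allows, interpolating between ticks. */
            float interpolation =
                (glutGet(GLUT_ELAPSED_TIME) + SKIP_TICKS - nextGameTick) / (float)SKIP_TICKS;
            renderInterpolated(interpolation);   /* draw the scene + glutSwapBuffers() */
        }

    Note the caveat from the question still applies: an idle callback runs continuously, so this trades the earlier event-driven CPU savings for an uncapped framerate.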

    Read the article

  • masm division overflow

    - by Help I'm in college
    I'm trying to divide two numbers in assembly. I'm working out of the Irvine assembly for Intel computers book and I can't make division work for the life of me. Here's my code:

        .code
        main PROC
            call division
            exit
        main ENDP

        division PROC
            mov eax, 4
            mov ebx, 2
            div ebx
            call WriteDec
            ret
        division ENDP

        END main

    WriteDec should write whatever number is in the eax register (which should be set to the quotient after the division). Instead, every time I run it Visual Studio crashes (the program does compile, however).
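    The crash is most likely a divide overflow: div ebx divides the 64-bit value in edx:eax by ebx, and edx still holds whatever garbage was there before the call. Zeroing edx first should fix it; a sketch:

        division PROC
            mov  eax, 4
            xor  edx, edx      ; div uses edx:eax, so clear the high half first
            mov  ebx, 2
            div  ebx           ; quotient -> eax, remainder -> edx
            call WriteDec      ; prints eax
            ret
        division ENDP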

    Read the article

  • to stop trigger in jquery

    - by Alexander Corotchi
    Hi everybody, I have an AJAX call which runs every 5 seconds, and when the call succeeds I have a trigger:

        success: function (msg) {
            ...
            $('#test').trigger('click');
            return false;
        },
        ...

    But I need to fire this trigger just once, the first time, not every 5 seconds! Can somebody suggest how to stop this trigger, or maybe another way to trigger this 'click'? Thanks!
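    A minimal sketch with a guard flag so the trigger fires only on the first successful call (the polling URL is illustrative):

        var clickTriggered = false;

        function poll() {
            $.ajax({
                url: '/status',                    // illustrative endpoint
                success: function (msg) {
                    if (!clickTriggered) {
                        clickTriggered = true;     // all later successes skip this
                        $('#test').trigger('click');
                    }
                },
                complete: function () {
                    setTimeout(poll, 5000);        // keep polling every 5 seconds
                }
            });
        }
        poll();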

    Read the article

  • Counting the number of visitors for a web page - JavaScript and Struts

    - by Dj
    Hi, I am Dhananjay. I need to keep track of the number of visitors to a web page. I have planned it like this: on load of the home page, I will call a JavaScript function callCounter(); from the JavaScript, I then need to call an action and update a record in the database. Please help me with this. How do I call the action? I should stay on the same home JSP after updating the database. Thanks in advance, Dhananjay
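    One common sketch: have callCounter() fire a background request at a counter action, so the browser never leaves the page (the /counter.do mapping is an invented example of a Struts action URL):

        function callCounter() {
            // A 1x1 image request is a simple fire-and-forget beacon;
            // the timestamp defeats browser caching.
            var beacon = new Image();
            beacon.src = '/counter.do?page=home&t=' + new Date().getTime();
        }
        // invoked from the home page: <body onload="callCounter();">

    The Struts action behind that URL would then increment the visitor row in the database and can return an empty response.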

    Read the article

  • Run python file -- what function is main?

    - by Jasie
    I have a simple python script, 'first.py':

        # first.py
        def firstFunctionEver():
            print "hello"

        firstFunctionEver()

    I want to call this script using python first.py and have it call firstFunctionEver(). But the script is ugly -- what function can I put the call to firstFunctionEver() in so that it runs when the script is loaded?
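    Python has no implicit main function; the idiomatic guard is the __name__ check, which runs the call when the file is executed directly but not when it is imported as a module:

        # first.py
        def firstFunctionEver():
            print "hello"

        if __name__ == "__main__":
            firstFunctionEver()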

    Read the article

  • Erlang: How to view output of io:format/2 calls in processes spawned on remote nodes.

    - by jkndrkn
    Hello, I am working on a decentralized Erlang application. I am currently working on a single PC and creating multiple nodes by initializing erl with the -sname flag. When I spawn a process using spawn/4 on its home node, I can see output generated by calls io:format/2 within that process in its home erl instance. When I spawn a process remotely by using spawn/4 in combination with register_name, output of io:format/2 is sometimes redirected back to the erl instance where the remote spawn/4 call was made, and sometimes remains completely invisible. Similarly, when I use rpc:call/4, output of io:format/2 calls is redirected back to the erl instance where the `rpc:call/4' call is made. How do you get a process to emit debugging output back to its parent erl instance?
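    io:format/2 writes to the calling process's group leader, which a remotely spawned process inherits from the process that spawned it -- hence the output appearing back on the spawning node. A sketch of pointing the output at the remote node's own console instead (assuming a standard `user` process is registered there):

        %% Spawn on Node, but route io output to Node's local `user` process.
        spawn(Node, fun() ->
            group_leader(whereis(user), self()),
            io:format("now printed on ~p~n", [node()])
        end).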

    Read the article

  • Which cross threading invoke function is better for a control that is inherited?

    - by Stevoni
    I have a relatively simple question regarding the best way to call the DataGridView.Rows.Add function when it is inherited into the current control. Which is the best way to make the call on the inherited control: call it directly in the Invoke, or call it using a recursive-like function? They both seem to produce the same result -- a row gets added and the quantity is returned -- but which is the most efficient?

    The delegate:

        Private Delegate Function ReturnDelegate() As Object

    The two ways are:

    A)

        Private Overloads Function AddRow() As Integer
            If InvokeRequired Then
                Return CInt(Invoke(New ReturnDelegate(AddressOf AddRow)))
            Else
                Return Rows.Add()
            End If
        End Function

    Or B)

        Private Function RowsAdd() As Integer
            If Me.InvokeRequired Then
                Return CInt(Me.Invoke(New ReturnDelegate(AddressOf MyBase.Rows.Add)))
            Else
                Return MyBase.Rows.Add
            End If
        End Function

    Read the article

  • Perform selector on parent NSOperation

    - by user326943
    I extend NSOperation (call it A), which contains an NSOperationQueue for other NSOperations (these are another extended class different from A; call them B). When operation A is running (executing B operations), how do I call a specific function/method on operation A when a certain event takes place in a B operation? For example, can every operation B that finishes call a function on operation A, returning itself? (Nested NSOperations and NSOperationQueues.) Hopefully this mockup pseudo-code can help draw the picture:

        //My classes extended from NSOperation
        NSOperation ClassA
        NSOperation ClassB

        //MainApp
        -(void)applicationDidFinishLaunching:(NSNotification *)aNotification
        {
            ClassA A1;
            ClassA A2;
            NSOperationQueue Queue;
            Queue AddOperation: A1;
            Queue AddOperation: A2;
        }

        //Main of ClassA
        -(void)main
        {
            ClassB B1;
            ClassB B2;
            NSOperationQueue Queue;
            Queue AddOperation: B1;
            Queue AddOperation: B2;
        }

        //Main of ClassB
        -(void)main
        {
            //Do some work and when done call selector on ClassA above
        }
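    One sketch of the notification: have A set a completionBlock on each B (available on NSOperation since Mac OS X 10.6 / iOS 4), capturing a back-reference to itself; childOperationFinished: is an invented method name:

        // Inside ClassA's -main, when building a B operation:
        ClassB *b1 = [[ClassB alloc] init];
        __block ClassA *parent = self;   // non-retaining back-reference (pre-ARC)
        [b1 setCompletionBlock:^{
            // Runs when b1 finishes; notify the parent operation.
            [parent childOperationFinished:b1];   // hypothetical method on ClassA
        }];
        [queue addOperation:b1];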

    Read the article

  • C++ arithmetic with pointers

    - by user69514
    I am trying to do the following: I have an array of double pointers, call it A. I have another array of double pointers, call it B, and I have an unsigned int, call it C. So I want to do: A[i] = B[i] - C; How do I do it? I did: A[i] = &B[i] - C; I don't think I am doing this correctly.
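    If A and B really are arrays of double* and C is an element count, the first form is already correct pointer arithmetic; a small sketch (names match the question):

        #include <cstdio>

        int main() {
            double storage[10];
            double *B[3] = { &storage[5], &storage[6], &storage[7] };
            double *A[3];
            unsigned int C = 2;

            for (int i = 0; i < 3; ++i)
                A[i] = B[i] - C;   // double* minus integer: steps back C doubles

            // By contrast, &B[i] - C is arithmetic on double** -- it walks the
            // B array itself, which is almost certainly not what was intended.
            std::printf("%p\n", (void *)A[0]);   // address of storage[3]
            return 0;
        }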

    Read the article

  • XSLT multiple string replacement with recursion

    - by John Terry
    I have been attempting to perform multiple (different) string replacements with recursion, and I have hit a roadblock. I have successfully gotten the first replacement to work, but the subsequent replacements never fire. I know this has to do with the recursion and how the with-param string is passed back into the call-template. I see my error and why the next xsl:when never fires, but I just can't seem to figure out exactly how to pass the complete modified string from the first xsl:when to the second xsl:when. Any help is greatly appreciated.

        <xsl:template name="replace">
          <xsl:param name="string" select="." />
          <xsl:choose>
            <xsl:when test="contains($string, '&#13;&#10;')">
              <xsl:value-of select="substring-before($string, '&#13;&#10;')" />
              <br/>
              <xsl:call-template name="replace">
                <xsl:with-param name="string" select="substring-after($string, '&#13;&#10;')"/>
              </xsl:call-template>
            </xsl:when>
            <xsl:when test="contains($string, 'TXT')">
              <xsl:value-of select="substring-before($string, '&#13;TXT')" />
              <xsl:call-template name="replace">
                <xsl:with-param name="string" select="substring-after($string, '&#13;')" />
              </xsl:call-template>
            </xsl:when>
            <xsl:otherwise>
              <xsl:value-of select="$string"/>
            </xsl:otherwise>
          </xsl:choose>
        </xsl:template>

    Read the article

  • PocketPC c++ windows message processing recursion problem

    - by user197350
    Hello, I am having a problem in a large-scale application that seems related to Windows messaging on the Pocket PC. What I have is a Pocket PC application written in C++. It has only one standard message loop:

        while (GetMessage (&msg, NULL, 0, 0))
        {
            TranslateMessage (&msg);
            DispatchMessage (&msg);
        }

    We also have standard dlgProcs. In the switch of the dlgProc, we will call a proprietary 3rd party API. This API uses a socket connection to communicate with another process. The problem I am seeing is this: whenever two of the same messages come in quickly (from the user clicking the screen twice faster than they should), it seems as though recursion is created. Windows begins processing the first message, gets the API into a thread-safe state, and then jumps to process the next (identical UI) message. Well, since the second message also makes the API call, the call fails because it is locked. Because of the design of this legacy system, the API will be locked until the recursion comes back out (which is also triggered by the user; so it could be locked the entire working day). I am struggling to figure out exactly why this is happening and what I can do about it. Is this because Windows recognizes the socket communication will take time and preempts it? Is there a way I can force this API call to complete before preemption? Is there a way I can slow down the message processing or re-queue the message to ensure the first will execute (capturing it and doing a PostMessage back to itself didn't work)? We don't want to lock the UI down while the first call completes. Any insight is greatly appreciated! Thanks!!

    Read the article

  • Star-schema: Separate dimensions for clients and non-clients or shared dimension for attendants?

    - by celopes
    I'm new to modeling star schemas, fresh from reading the Data Warehouse Toolkit. I have a business process that has clients and non-clients calling into conference calls with some of our employees. My fact table, call it "Audience", will contain a measure of how long an attending person was connected to the call, and the cost per minute of this person's connection to the call. The grain is "individual connection to the conference call". Should I use my conformed Client dimension and create a non-client dimension (for the callers that are not yet clients), omitting dimensions that are not part of this question (diagram omitted)? Or would it be OK/better to have a non-conformed Attending dimension related to the conformed Client dimension (diagram omitted)? Or is there a better/standard mechanism to model business processes like this one?

    Read the article

  • Assembly stack persistency

    - by user246100
    Hello. I would like to know whether, after calling functions, the data I have on the stack persists. That is, assuming the cdecl convention, can I do this (independently of function X and independently of optimizations)?

        push 1
        push 2
        push 3
        call X
        call X
        call X
        add  esp, 12

    Also, let's say that before the calls I save the address of where the pushed values are in a global variable. Can I, inside X, alter the values it contains by accessing the global variable? Like, for some reason I want that in X I'm able to alter the values on the stack so that the second and third calls to X receive different values.

    Read the article

  • Changing the system time zone succeeds once and then no longer changes

    - by Adam Driscoll
    I'm using the WinAPI to set the time zone on a Windows XP SP3 box. I'm reading the time zone information from the HKLM\Software\Microsoft\Windows NT\CurrentVersion\Time Zones\<time zone name> key and then setting the time zone to the specified one. I enumerate the keys under the Time Zones key, grab the TZI value, and stuff it into a TIME_ZONE_INFORMATION struct to be passed to SetTimeZoneInformation. All seems to work on the first pass: the time zone changes and no error is returned. The second time I perform this operation (same user, new session, on login before userinit) the call succeeds but the system does not reflect the time zone change. Neither the clock nor the time stamps on files are updated to the new time zone. When I navigate to HKLM\System\CurrentControlSet\Control\TimeZoneInformation, my new time zone information is present. A couple of strange things happen when I set the time zone. First, when I parse the TZI binary value from the registry to store in my TIME_ZONE_INFORMATION struct, I notice the struct always has the DaylightDate.wDay and StandardDate.wDay fields set to 0. Second, if I call GetTimeZoneInformation right after SetTimeZoneInformation, the call fails with a 1300 error (Not all privileges or groups referenced are assigned to the caller). I'm also making sure to broadcast a WM_SETTINGCHANGE message so Explorer knows what's going on. Think it's the parsing of the byte array to the TIME_ZONE_INFORMATION struct? Or am I missing something else important? EDIT: Found a document stating why this is happening: here. The privilege was introduced in Vista... thanks, MSDN docs... Per the Microsoft documentation, I'm enabling the SE_TIME_ZONE_NAME privilege for the current process's token. But when I attempt to call LookupPrivilegeValue for SE_TIME_ZONE_NAME, I get a 1313 error (A specified privilege does not exist).
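    On Vista and later, the usual pattern is to enable SeTimeZonePrivilege on the process token before calling SetTimeZoneInformation; a sketch (on XP this fails with ERROR_NO_SUCH_PRIVILEGE, since the privilege does not exist there, which matches the 1313 error above):

        #include <windows.h>

        /* Enable SeTimeZonePrivilege on the current process token (Vista+). */
        BOOL EnableTimeZonePrivilege(void)
        {
            HANDLE token;
            TOKEN_PRIVILEGES tp;
            BOOL ok;

            if (!OpenProcessToken(GetCurrentProcess(),
                                  TOKEN_ADJUST_PRIVILEGES | TOKEN_QUERY, &token))
                return FALSE;

            tp.PrivilegeCount = 1;
            tp.Privileges[0].Attributes = SE_PRIVILEGE_ENABLED;
            if (!LookupPrivilegeValue(NULL, SE_TIME_ZONE_NAME,   /* "SeTimeZonePrivilege" */
                                      &tp.Privileges[0].Luid)) {
                CloseHandle(token);
                return FALSE;   /* ERROR_NO_SUCH_PRIVILEGE on XP */
            }

            ok = AdjustTokenPrivileges(token, FALSE, &tp, 0, NULL, NULL)
                 && GetLastError() != ERROR_NOT_ALL_ASSIGNED;
            CloseHandle(token);
            return ok;
        }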

    Read the article
