Search Results

Search found 13151 results on 527 pages for 'performance counters'.

  • Determining cause of random latency and loading issues

    - by Sherwin Flight
    I'm not sure exactly what details to post in regards to my issue, because I'm not sure what is relevant. Prior to the end of September my websites all loaded quickly, in almost all cases. Loading time wasn't usually more than a few seconds. However, since the end of September I noticed a big increase in page loading times. In some cases pages were taking 30 seconds or more to load. I do have a remote monitoring service monitoring some of the sites as well, and the image below shows the response times over the past month. The response times shown at the beginning of this graph were the usual response times prior to this issue occurring. You can see that there has been a significant increase in response times from the beginning to the end of this graph.

    The thing is, the problem is not happening 100% of the time. If I click through the site, or even just keep refreshing the page, about 25% of the time the pages load quickly; the remaining 75% of the time they load slowly. Sometimes the pages take so long to load that they time out and don't load at all. I have contacted my hosting provider, and they said things at their end were fine. I don't believe the problem is my home internet provider, because all other websites load without a problem. The server is located in Texas, USA.

    This also raises another interesting point. My remote monitor checks my site from two locations: California, USA, and London, England. As you can see in the chart below, the response time is actually shorter when checked from London, which doesn't seem to make sense, since the server is physically closer to the California monitoring location. I would have expected the London monitoring location to have higher response times since it is physically farther away. I should also point out that in some traceroute tests I've done, the first connection to the server seems to take the longest; after that, the rest of the page loads quickly. Below is a little chart showing the times for the first connection to the server.

    So, what could be causing this problem, and what steps can I take to resolve it or at least narrow it down? Sending the request to the server is very quick, and receiving the reply back seems pretty quick, but the WAIT time is really long. So it connects, sends the request, but then waits close to 30 seconds before it starts receiving data back.

    I am also aware that there are things I can do to speed up page loading times, like reducing the number of CSS and JS files used on a page, compressing images, etc. That is not really the source of the problem, though, because nothing has really changed on the site since before the problem started, and other sites on the same server are loading slowly as well.
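
    A quick way to confirm where that 30-second wait sits is to time each phase of a request separately. The sketch below is illustrative only and is not from the original post (the host name is a placeholder); it uses Python's standard library to time the TCP connect, the request send, the wait before the first byte arrives, and the download:

        import http.client, time

        def time_request(host, path="/"):
            t0 = time.perf_counter()
            conn = http.client.HTTPConnection(host, 80, timeout=60)
            conn.connect()
            t1 = time.perf_counter()                 # TCP connect finished
            conn.request("GET", path)
            t2 = time.perf_counter()                 # request sent
            resp = conn.getresponse()
            resp.read(1)                             # wait for the first byte of the body
            t3 = time.perf_counter()
            resp.read()                              # drain the rest of the response
            t4 = time.perf_counter()
            conn.close()
            print(f"connect {t1-t0:.3f}s  send {t2-t1:.3f}s  wait {t3-t2:.3f}s  download {t4-t3:.3f}s")

        time_request("example.com")                  # placeholder host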

  • When to start thinking about scalability?

    - by Rits
    I'm having a funny but also terrible problem. I'm about to launch a new (iPhone) app. It's a turn-based multiplayer game running on my own custom backend. But I'm afraid to launch. For some reason, I think it might become something big and that its popularity will kill my poor lonely single server + MySQL database. On one hand I'm thinking that if it grows, I'd better be prepared and have a scalable infrastructure already in place. On the other hand I just feel like getting it out into the world and seeing what happens. I often read things like "premature optimization is the root of all evil", or people saying that you should just build your killer game now, with the tools at hand, and worry about other things like scalability later. I'd love to hear some opinions on this from experts or people with experience in this area. Thanks!

  • Checking who is connected to your server, with PowerShell.

    - by Fatherjack
    There are many occasions when, as a DBA, you want to see who is connected to your SQL Server, along with how they are connecting and what sort of activities they are carrying out. I'm going to look at a couple of ways of getting this information and compare the effort required and the results achieved by each. SQL Server comes with a couple of stored procedures to help with this sort of task – sp_who and its undocumented counterpart sp_who2. There is also the pumped-up version of these called sp_whoisactive, written by Adam Machanic, which does way more than these procedures. I wholly recommend you try it out if you don't already know how it works. When it comes to serious interrogation of your SQL Server activity it is absolutely indispensable. Anyway, back to the point of this blog: we are going to look at getting the information from sp_who2 for a remote server. I wrote this PowerShell script a week or so ago and was quietly happy with it for a while. I'm relatively new to PowerShell, so forgive both my rather low threshold for entertainment and the fact that something so simple is a moderate achievement for me.

        $Server = 'SERVERNAME'
        $SMOServer = New-Object Microsoft.SQLServer.Management.SMO.Server $Server
        # connection and query stuff
        $ConnectionStr = "Server=$Server;Database=Master;Integrated Security=True"
        $Query = "EXEC sp_who2"
        $Connection = new-object system.Data.SQLClient.SQLConnection
        $Table = new-object "System.Data.DataTable"
        $Connection.connectionstring = $ConnectionStr
        try {
            $Connection.open()
            $Command = $Connection.CreateCommand()
            $Command.commandtext = $Query
            $result = $Command.ExecuteReader()
            $Table.Load($result)
        }
        catch {
            # Show error
            $error[0] | format-list -Force
        }
        $Title = "Data access processes (" + $Table.Rows.Count + ")"
        $Table | Out-GridView -Title $Title
        $Connection.close()

    So this is pretty straightforward: create an SMO object that represents our chosen server, define a connection to the database and a table object for the results when we get them, execute our query over the connection, load the results into our table object and then, if everything is error free, display these results in the PowerShell grid viewer. The query simply gets the results of 'EXEC sp_who2' for us. How many connections there are will influence how long the query runs. The grid viewer lets me sort and search the results, so it can be a pretty handy way to locate troublesome connections. Like I say, I was quite pleased with this; it seems a pretty simple script and was working well for me, and I have added a few parameters to control the output and give me more specific details. But then I saw a script that uses the $SMOServer object itself to provide the process information, which saves having to define the connection object and query specifications.

        $Server = 'SERVERNAME'
        $SMOServer = New-Object Microsoft.SQLServer.Management.SMO.Server $Server
        $Processes = $SMOServer.EnumProcesses()
        $Title = "SMO processes (" + $Processes.Rows.Count + ")"
        $Processes | Out-GridView -Title $Title

    Create the SMO object for our server and then call the EnumProcesses method to get all the process information from the server. Staggeringly simple! The results are a little different though. Some columns are the same and we can see the same basic information, so my first thought was to see which runs faster – so that I can get my results more quickly and also so that I place less stress on my server(s). PowerShell comes with a great way of testing this – the Measure-Command function.
    All you have to do is wrap your piece of code in Measure-Command {[your code here]} and it will spit out the time taken to execute the code. So, I placed both of the above methods of getting SQL Server process connections in two Measure-Command wrappers and pressed F5! The PowerShell console goes blank for a while as the code is executed internally when Measure-Command is used, but the grid viewer windows appear and the console shows this. You can take the output from Measure-Command and format it for easier reading, but in a simple comparison like this we can simply cross-reference the TotalMilliseconds values from the two result sets to see how the two methods performed. The query execution method (running EXEC sp_who2) is the first set of timings and the SMO EnumProcesses is the second. I have run these on a variety of servers, and while the results vary from execution to execution, I have never seen the SMO version slower than the other. The difference has varied, and the time for both has ranged from sub-second, as we see above, to almost 5 seconds on other systems. This difference, I would suggest, is partly due to the cost overhead of having to construct the data connection and so on, whereas the SMO EnumProcesses method has the connection to the server already in place and just needs to call back the process information. There is also the difference in the data sets to consider. Let's take a look at what we get and where the two methods differ:

        sp_who2        EnumProcesses        Description
        -              Urn                  What looks like an XML or JSON representation of the server name and the process ID
        SPID           Spid                 The process ID
        Status         Status               The status of the process
        Login          Login                The login name of the user executing the command
        HostName       Host                 The name of the computer where the process originated
        BlkBy          BlockingSpid         The SPID of a process that is blocking this one
        DBName         Database             The database that this process is connected to
        Command        Command              The type of command that is executing
        CPUTime        Cpu                  The CPU activity related to this process
        DiskIO         -                    The disk IO activity related to this process
        LastBatch      -                    The time the last batch was executed from this process
        ProgramName    Program              The application that is facilitating the process connection to the SQL Server
        SPID1          -                    In my experience this is always the same value as SPID
        REQUESTID      -                    In my experience this is always 0
        -              Name                 In my experience this is always the same value as SPID, and so could be seen as analogous to SPID1 from sp_who2
        -              MemUsage             An indication of the memory used by this process, but I don't know what it is measured in (bytes, KB, MB...)
        -              IsSystem             True or False depending on whether the process is internal to the SQL Server instance or has been created by an external connection requesting data
        -              ExecutionContextID   In my experience this is always 0, so could be analogous to REQUESTID from sp_who2

    Please note, these are my own very brief descriptions of these columns; detail can be found on MSDN for the columns in the sp_who results here: http://msdn.microsoft.com/en-GB/library/ms174313.aspx. Where the columns are common I would use that description; in other cases the information returned is purely for interpretation by the reader. Rather annoyingly, both result sets have useful information that the other doesn't. sp_who2 returns DiskIO and LastBatch information, which is really useful, but the SMO processes method gives you IsSystem and MemUsage, which have their place in fault diagnosis too. So which is better?
    On reflection I think I prefer to use the sp_who2 method primarily, but knowing that the SMO EnumProcesses method is there when I need it is really useful, and I'm sure I'll use it regularly. I'm OK with the fact that it is the slower method, because Measure-Command has shown me how close it is to the other option, and that the margin really isn't large enough to matter.

  • High Usage of RAM by wxPython's GUI and need some advice to reduce it

    - by user67024
    I've recently developed a GUI in wxPython for the Windows platform. It contains five tabs; four of them are just RichTextCtrl boxes, and the other one has controls for uploading files, buttons, TextCtrls, a slider, etc. As I was new to GUI development in Python, I used wxFormBuilder to generate some of the code, using a good number of sizers. So, now the problem is that the GUI starts off with an initial memory footprint of around 40 MB, which is too much for such a simple application (or so I think). Also, the functions handling the data use huge lists, as the program is for debugging large data logs and identifying the problems in them, which means I can't afford to spend memory on the GUI. So, how can I reduce that starting memory size? Is it a general issue in wxPython? I'm currently trying to use profilers, but I'm not sure if that's going to help.
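
    One way to see where that 40 MB actually goes is to snapshot allocations with Python's built-in tracemalloc module. The following is only a sketch of a starting point, not something from the original post; the GUI-construction step is left as a placeholder comment:

        import tracemalloc

        tracemalloc.start()

        # ... construct the wx.App, frames and controls here, as the GUI normally would ...

        snapshot = tracemalloc.take_snapshot()
        for stat in snapshot.statistics("lineno")[:10]:
            print(stat)          # the ten biggest allocation sites, by total size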

  • What is the benefit of triple buffering?

    - by user782220
    I read everything written in a previous question. From what I understand, in double buffering the program must wait until the finished drawing is copied or swapped before starting the next drawing. In triple buffering the program has two back buffers and can immediately start drawing in the one that is not involved in such copying. But with triple buffering, if you're in a situation where you can take advantage of the third buffer, doesn't that suggest that you are drawing frames faster than the monitor can refresh? So then you don't actually get a higher frame rate. So what is the benefit of triple buffering?
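
    The case where triple buffering pays off is when a frame takes longer than one refresh interval but less than two. The rough timing model below is purely illustrative (it assumes a simple vsync model and is not from the original question): with double buffering the renderer stalls until the next vsync before it can reuse the back buffer, while with triple buffering it keeps drawing into the spare back buffer.

        import math

        REFRESH = 1 / 60      # seconds between vsyncs (~16.7 ms)
        RENDER = 0.020        # seconds to draw one frame (slower than one refresh)
        FRAMES = 600          # number of frames to simulate

        def double_buffered():
            # The single back buffer cannot be reused until vsync swaps it to the front,
            # so the renderer stalls after every frame.
            t, presented = 0.0, 0
            for _ in range(FRAMES):
                t += RENDER                                # draw the frame
                t = math.ceil(t / REFRESH) * REFRESH       # wait for the next vsync to swap
                presented += 1
            return presented / t

        def triple_buffered():
            # With a spare back buffer the renderer never waits; each vsync simply
            # shows the newest frame that has finished by then.
            finish = [(i + 1) * RENDER for i in range(FRAMES)]
            presented, i, vsync = 0, 0, REFRESH
            while vsync <= finish[-1] + REFRESH:
                newer = False
                while i < len(finish) and finish[i] <= vsync:
                    i, newer = i + 1, True
                presented += newer
                vsync += REFRESH
            return presented / finish[-1]

        print(f"double buffering: ~{double_buffered():.0f} new frames per second")   # ~30
        print(f"triple buffering: ~{triple_buffered():.0f} new frames per second")   # ~50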

  • Java Alphabetize Algorithm Insertion sort vs Bubble Sort

    - by Chris Okyen
    I am supposed to "Develop a program that alphabetizes three strings. The program should allow the user to enter the three strings, and then display the strings in alphabetical order." It's instructed that I need to use the String library compareTo()/charAt()/toLowerCase() to make all the characters lowercase, so that the lexicographic comparison is also an alphabetical comparison. Input pseudocode:

        String input[3];
        Scanner keyboard = new Scanner(System.in);
        System.out.println("Enter three strings: ");
        for(byte i = 0; i < 3; i++)
            input[i] = keyboard.next();

    The sorting would be:

    Insertion sort: 321  2 3 1  2 31  231  1 23  1 2 3  1 23  1 23  123

    Bubble sort: 321  231  213  123

    Which would be more efficient in this case? The bubble sort seems to be more efficient, though they seem to have equal stats for worst, best and average case, but I read that insertion sort is quicker for small amounts of data, like my case.
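
    For three strings either algorithm finishes in at most three comparisons and a handful of swaps, so the difference is academic; insertion sort is usually the one quoted as faster on tiny or nearly sorted inputs. The sketch below shows the insertion-sort idea with case-insensitive comparisons; it is written in Python for brevity rather than the Java the assignment requires, so treat it purely as an illustration of the approach:

        def alphabetize(strings):
            items = list(strings)
            for i in range(1, len(items)):
                current = items[i]
                j = i - 1
                # shift entries that sort after `current` one slot to the right
                while j >= 0 and items[j].lower() > current.lower():
                    items[j + 1] = items[j]
                    j -= 1
                items[j + 1] = current
            return items

        print(alphabetize(["banana", "Apple", "cherry"]))   # ['Apple', 'banana', 'cherry']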

  • Ways to break the "Syndrome of the perfect programmer"

    - by Rushino
    I am probably not the only one who feels this way, but I have what I tend to call "the syndrome of the perfect programmer", which many might say is the same as being a perfectionist, but in this case it's in the domain of programming. However, the domain of programming is a bit problematic for such a syndrome. Have you ever felt that, when you are programming, you're not confident, or never confident enough, that your code is clean, good code that follows most of the best practices? There are so many rules to follow that I feel somewhat overwhelmed. Not that I don't like to follow the rules; of course I am a programmer and I love programming. I see it as an art and I must follow the rules. But I love it too; I mean, I want to follow the rules and I love following them in order to have a good feeling that what I'm doing is going the right way. I only wish I could have everything a bit more under "control" regarding best practices and good code. Maybe it's a lack of organization? Maybe it's a lack of experience? Maybe a lack of practice? Maybe it's a lack of something else someone could point out? Is there any way to get rid of that syndrome somehow?

  • What is the best way to check if there is overlap between the player and static, non-collidable items in the Bullet physics engine

    - by tigrou
    I'd like to add non-collidable objects (e.g. power-ups, items, ...) to a game world using the Bullet physics engine, and to know if there is a collision between the player and them. Some info: there are a lot of items (1000), all are box shapes and they don't overlap. Here are the things I have tried:

        btDbvt* bvtItems = new btDbvt(); //btDbvt is a hierarchical AABB tree, used by Bullet
        foreach(var item ...)
        {
            btDbvtVolume volume = ... //compute item AABB
            bvtItems->insert(volume, (void*)someExtraData);
        }

    Then, to find collisions between items and the player:

        playerRigidBody->getAabb(min, max);
        btDbvtVolume playervolume = ... //compute player AABB
        bvtItems->collideTV(bvtItems->m_root, playervolume, *someCollisionHandler);

    This works fairly well (and it's very fast); however, there is a problem: it only checks item AABBs against the player AABB. That loss of precision is acceptable for items, but not for the player, which is not a box. It would actually need another check to make sure the player really collides with the item, but I don't know how to do this in Bullet. It would have been nice to have a function like this: playerRigidBody->checkCollisionWithAABB();

    After trying that, I discovered that btGhostObject exists and seems to have been made for this. I changed my code like this:

        foreach(var item...)
        {
            btCollisionObject* ghostObject = new btGhostObject();
            ghostObject->setCollisionShape(boxShape);
            ghostObject->setCollisionFlags(ghostObject->getCollisionFlags() | btCollisionObject::CF_NO_CONTACT_RESPONSE);
            startTransform.setOrigin(...); //item position
            ghostObject->setWorldTransform(startTransform);
            dynamicsWorld->addCollisionObject(ghostObject, btBroadphaseProxy::SensorTrigger, btBroadphaseProxy::CharacterFilter);
        }

    It also works OK, but there is a huge FPS drop (almost ten times slower), which is not acceptable. Maybe there is something missing (perhaps I forgot to set a flag) and Bullet is doing extra work for nothing, or maybe all those ghost objects are polluting the broad phase and btGhostObject is not the right thing for this. Any help would be appreciated.
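
    Bullet specifics aside, the usual pattern is exactly the two-phase test described above: keep the cheap AABB broad phase, then run a precise narrow-phase test only for the few items whose AABB overlaps the player's. Below is a generic sketch of such a narrow-phase test, assuming for illustration that the player can be approximated by a sphere; it is not Bullet API code and is not from the original post:

        def sphere_intersects_aabb(center, radius, aabb_min, aabb_max):
            # Precise check after the coarse AABB pass: clamp the sphere centre onto
            # the box and compare the remaining distance with the radius.
            dist_sq = 0.0
            for c, lo, hi in zip(center, aabb_min, aabb_max):
                nearest = max(lo, min(c, hi))     # closest point of the box on this axis
                dist_sq += (c - nearest) ** 2
            return dist_sq <= radius * radius

        # item AABB from the broad phase, player approximated as a sphere
        print(sphere_intersects_aabb((1.0, 0.5, 0.0), 0.6, (0.0, 0.0, 0.0), (1.0, 1.0, 1.0)))  # True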

  • EXALYTICS - If Oracle BI Server Does Not Fail Over to the TimesTen Instance

    - by Ahmed Awan
    If the BI Server does not fail over to the second TimesTen instance on the scaled-out node, then ensure that the logical table source (LTS) for the repository has mapped both TimesTen physical data sources. This mapping ensures that at the logical table source level, a mapping exists to both TimesTen instances. If one TimesTen instance is not available, then failover logic for the BI Server at the DSN level tries to connect to the other TimesTen instance. Reference: http://docs.oracle.com/cd/E23943_01/bi.1111/e24706/toc.htm

  • SQL Azure Federations and Semantic Search by the SQL product team in London tonight (Monday)

    - by simonsabin
    Don't forget that tonight we have Michael Rys from the SQL Server product team presenting on the Federation support coming to SQL Azure and the semantic search coming in SQL Server Denali. This is a must-attend evening for anyone who is serious about scaling SQL or doing search in SQL Server. Michael also wears a few other hats, including being Microsoft's representative on the W3C XML Query Working Group. To register, go to http://sqlsocial20110613.eventbrite.com/   PS: Beer and pizza will be laid on...(read more)

  • Shared to Dedicated or Amazon CloudFront to improve performance and stay secure?

    - by user978548
    I have a WordPress site which currently takes about 1.8 s to 2.5 s for the home page to completely load in my country. The page weight is about 700 KB (static content included). In order to increase performance, I'm considering two solutions:
    1. Switching to a dedicated host.
    2. Using Amazon S3/CloudFront to serve static content.
    My current shared hosting has servers in a neighboring country but not exactly in mine, and both Amazon and the dedicated host have servers in my country, so that's already an advantage. So, considering all that, I still have three questions remaining:
    1. With currently low traffic (100 unique visitors/day, but growing), will a dedicated server make a huge difference compared with my shared hosting?
    2. Knowing that I already use a cookie-less domain to deliver static content (but via a redirection to the same server), would using Amazon S3 make a real difference?
    3. Talking about the cons of dedicated vs Amazon S3: if I choose something like Ubuntu Server for the dedicated server, do daily package updates, and have only port 80 open, would that be sufficient in terms of security (compared with my current shared hosting, which manages everything for me)?

  • Demantra Partitioning and the First PK Column

    - by user702295
    We have found that it is necessary in Demantra to have an index that matches the partition key, although it does not have to be the PK. It is OK to create a new index instead of changing the PK. For example, if my PK on SALES_DATA is (ITEM_ID, LOCATION_ID, SALES_DATE) and I decide to partition by SALES_DATE, then I should add an index starting with the partition key, like this: (SALES_DATE, ITEM_ID, LOCATION_ID). Note that the first column of the new index matches the partition key. It might also be helpful to create a second index with the other PK columns reversed (SALES_DATE, LOCATION_ID, ITEM_ID). Again, the first column matches the partition key.

  • What is the best way to keep track of the median?

    - by Steven Mou
    I read a question in one book: numbers are randomly generated and stored into an (expanding) array; how would you keep track of the median? There are two data structures that can solve the problem. One is a balanced binary tree; the other is a pair of heaps which keep track of the biggest half and the smallest half of the elements. I think these two solutions have the same running time of O(n lg n), but I am not sure of my judgement. In your opinion, what is the best way to keep track of the median?
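
    For reference, here is a minimal sketch of the two-heap approach (an illustration added here, not from the book being quoted): a max-heap holds the smaller half and a min-heap holds the larger half, so each insertion costs O(lg n), reading the median is O(1), and n insertions give the same O(n lg n) total:

        import heapq

        class MedianTracker:
            def __init__(self):
                self.lower = []   # max-heap of the smaller half (values stored negated)
                self.upper = []   # min-heap of the larger half

            def add(self, x):
                if not self.lower or x <= -self.lower[0]:
                    heapq.heappush(self.lower, -x)
                else:
                    heapq.heappush(self.upper, x)
                # rebalance so the halves never differ in size by more than one
                if len(self.lower) > len(self.upper) + 1:
                    heapq.heappush(self.upper, -heapq.heappop(self.lower))
                elif len(self.upper) > len(self.lower):
                    heapq.heappush(self.lower, -heapq.heappop(self.upper))

            def median(self):
                if len(self.lower) > len(self.upper):
                    return -self.lower[0]
                return (-self.lower[0] + self.upper[0]) / 2

        tracker = MedianTracker()
        for n in [5, 2, 8, 1, 9]:
            tracker.add(n)
        print(tracker.median())   # 5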

  • Java Spotlight Episode 98: Cliff Click on Benchmarkings

    - by Roger Brinkley
    Interview with Cliff Click of 0xdata on benchmarking. Recorded live at JFokus 2012. Right-click or Control-click to download this MP3 file. You can also subscribe to the Java Spotlight Podcast Feed to get the latest podcast automatically. If you use iTunes you can open iTunes and subscribe with this link: Java Spotlight Podcast in iTunes.

    Show Notes

    News
      Bean Validation 1.1
      Java EE 7 Roadmap
      Java JRE Update 7u7 and 6u35 available
      Change to Java SE 7 and Java SE 6 Update Release Numbers
      JCP 2012 Award Nominations Announced
      Griffon JavaFX Plugin

    Events
      Sep 3-6, Herbstcampus, Nuremberg, Germany
      Sep 10-15, IMTS 2012 Conference, Chicago
      Sep 12, The Coming M2M Revolution: Critical Issues for End-to-End Software and Systems Development, Webinar
      Sep 30-Oct 4, JavaONE, San Francisco
      Oct 3-4, Java Embedded @ JavaONE, San Francisco
      Oct 15-17, JAX London
      Oct 30-Nov 1, Arm TechCon, Santa Clara
      Oct 22-23, Freescale Technology Forum - Japan, Tokyo
      Nov 2-3, JMagreb, Morocco
      Nov 13-17, Devoxx, Belgium

    Feature Interview
    Cliff Click is the CTO and Co-Founder of 0xdata, a firm dedicated to creating a new way to think about web-scale data storage and real-time analytics. I wrote my first compiler when I was 15 (Pascal to TRS Z-80!), although my most famous compiler is the HotSpot Server Compiler (the Sea of Nodes IR). I helped Azul Systems build an 864 core pure-Java mainframe that keeps GC pauses on 500Gb heaps to under 10ms, and worked on all aspects of that JVM. Before that I worked on HotSpot at Sun Microsystems, and am at least partially responsible for bringing Java into the mainstream. I am invited to speak regularly at industry and academic conferences and have published many papers about HotSpot technology. I hold a PhD in Computer Science from Rice University and about 15 patents.

    What's Cool
    Shaun Smith's Devoxx 2011 talk "JPA Multi-Tenancy & Extensibility" now freely available at Parleys.

  • How to become a "faster" programmer?

    - by Nick Gotch
    My last job evaluation included just one weak point: timeliness. I'm already aware of some things I can do to improve this, but what I'm looking for are some more. Does anyone have tips or advice on what they do to increase the speed of their output without sacrificing its quality? How do you estimate timelines and stick to them? What do you do to get more done in shorter time periods? Any feedback is greatly appreciated. Thanks!

  • Current trends in Random Access Memory

    - by Nutel
    As far as I know, because of the laws of physics there will not be any tangible improvement in CPU cycles per second for the foreseeable future. However, because of the von Neumann bottleneck, that seems not to be an issue for non-server applications. So what about RAM: are there any upcoming technologies that promise to improve memory speed, or are we stuck with the current situation until quantum computers come out of the labs?

  • Cat 6 Only 100mbit speed

    - by Stu2000
    I tried two different Cat 6 cables directly connected between my two Ubuntu machines. This one I ordered online: http://www.amazon.co.uk/gp/product/B002SQPDXS/ref=wms_ohs_product only achieves 100 Mbit speeds, but it does appear to support a direct crossover connection (PC to PC); the other Cat 6 cable worked perfectly and gets the full 1 gigabit speed. Both tests were performed using FTP and checking the network monitor with a direct PC-to-PC connection. Did the product from Amazon lie to me, or do I need to manually set something somewhere in Ubuntu for some cables? I had thought 10 quid for 20 m of gigabit Ethernet cable was a bit cheap; you get what you pay for... Regards, Stu

    Update: It seems that after rebooting, the device is set to 1000 Mbit/s when looking it up with sudo ethtool eth0. However, after a while this will drop down to just 100, and to reset it to 1000 again I have to reboot; simply unplugging and re-plugging the cable doesn't do it. I tried setting this in the networking config file as suggested here:

        auto eth0
        iface eth0 inet static
        pre-up /usr/sbin/ethtool -s eth0 speed 1000 duplex full

    but that resulted in my networking failing to start. Is there a problem with my 'auto-negotiation' or something? Can I manually override the setting to 1000 Mbit?

  • Spatial data from shapefiles (for T-SQL Tuesday #006)

    - by Rob Farley
    I'm giving a presentation on May 12th at the Adelaide .NET User Group on the topic of spatial data and, in particular, the visualization of said data. Given that it's about one of the larger types, this post should also count towards Michael Coles' T-SQL Tuesday on BLOB data. I wrote recently about my experience with exploded data, but what I didn't go on to talk about was how using a shapefile like this would translate into a scenario with a much larger number of shapes, such as all the postcode...(read more)

  • Designing a flexible tile-based engine

    - by Vee
    I'm trying to create a flexible tile-based game engine for making all sorts of non-realtime puzzle games, such as Bejeweled, Civilization, Sokoban, and so on. The first approach I had was to have a 2D array of Tile objects, and then have classes inheriting from Tile that represented the game objects. Unfortunately, that way I couldn't stack more game elements on the same Tile without having a 3D array. Then I did something different: I still had the 2D array of Tile objects, but every Tile object contained a List where I put the different entities. This worked fine until 20 minutes ago, when I realized that it's too expensive to do many things. Look at this example: I have a Wall entity. Every update I have to check the 8 adjacent Tiles, then check all of the entities in each Tile's List, check if any of those entities is a Wall, and then finally draw the correct sprite. (This is done to draw walls that are next to each other seamlessly.) The only solution I see now is having a 3D array with many layers, which could suit every situation, but that way I can't stack two entities that share the same layer on the same tile; whenever I want to do that I have to create a new layer. Is there a better solution? What would you do?
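
    One possible middle ground, offered only as a sketch of the idea rather than a definitive answer: keep the per-tile entity lists for flexibility, but also maintain a per-type index of occupied coordinates, so that a question like "is there a Wall on the adjacent tile?" becomes a constant-time set lookup instead of a scan of eight lists every update:

        class TileMap:
            def __init__(self, width, height):
                self.width, self.height = width, height
                self.tiles = [[[] for _ in range(width)] for _ in range(height)]
                self.by_type = {}   # e.g. {"Wall": {(x, y), ...}}, one entry per entity type

            def add(self, entity, x, y):
                self.tiles[y][x].append(entity)
                self.by_type.setdefault(type(entity).__name__, set()).add((x, y))

            def has(self, type_name, x, y):
                # constant-time lookup, no list scanning
                return (x, y) in self.by_type.get(type_name, set())

            def wall_neighbours(self, x, y):
                # which of the 8 adjacent tiles contain a Wall (used to pick the sprite)
                offsets = [(-1, -1), (0, -1), (1, -1), (-1, 0),
                           (1, 0), (-1, 1), (0, 1), (1, 1)]
                return [(dx, dy) for dx, dy in offsets if self.has("Wall", x + dx, y + dy)]

        class Wall:
            pass

        world = TileMap(10, 10)
        world.add(Wall(), 4, 5)
        world.add(Wall(), 5, 5)
        print(world.wall_neighbours(5, 5))   # [(-1, 0)] -- a wall to the left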

  • Database Maintenance Scripting Done Right

    - by KKline
    I first wrote about useful database maintenance scripts on my SQLBlog account way back in 2008. Hmmm - now that I think about it, I first wrote about my own useful database maintenance scripts in a journal called SQL Server Professional back in the mid-1990's on SQL Server v6.5 or some such. But I digress... Anyway, I pointed out a couple useful sites where you could get some good scripts that would take care of preventative maintenance on your SQL Server, such as index defragmentation, updating...(read more)

  • Regulating how much to draw based on how much was drawn last frame.

    - by Mike Howard
    I have a 3D game world on an iPhone (limited graphics speed), and I'm already regulating whether I draw each shape on the screen based on its size and distance from the camera. Something like... if (how_big_it_looks_from_the_camera > constant) then draw. What I want to do now is also take into account how many shapes are being drawn, so that in busier areas of the game world I can draw less than I otherwise would. I tried to do this by dividing how_big_it_looks by the number of shapes that were drawn last frame (well, the square root of this, but I'm simplifying - the problem is the same): if (how_big_it_looks / shapes_drawn > constant2) then draw. But the check happens at the level of objects which represent many drawn shapes, and if an object containing many shapes is switched on, it increases shapes_drawn a lot and switches itself back off the next frame. It flickers on and off. I tried keeping a kind of weighted average of previous values, by each frame doing something like shapes_drawn_recently = 0.9 * shapes_drawn_recently + 0.1 * shapes_just_drawn, but of course it only slows the flickering down because of the nature of the feedback loop. Is there a good way of solving this? My project is in Objective-C, but a general algorithm or pseudo-code is good too. Thanks.
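
    One general trick for this kind of feedback loop, offered as a sketch rather than something from the original post, is hysteresis: use two thresholds so that an object needs a clearly higher score to switch on than it needs to stay on, which stops borderline objects from toggling every frame. The thresholds and the Drawable attributes below are illustrative assumptions:

        from dataclasses import dataclass

        @dataclass
        class Drawable:
            apparent_size: float      # how_big_it_looks_from_the_camera
            visible: bool = False

        ON_THRESHOLD = 1.2            # score needed before an object starts being drawn
        OFF_THRESHOLD = 0.8           # score below which a drawn object is switched off

        def update_visibility(objects, shapes_drawn_recently):
            load = max(1.0, shapes_drawn_recently) ** 0.5
            for obj in objects:
                score = obj.apparent_size / load
                if obj.visible:
                    obj.visible = score > OFF_THRESHOLD   # keep drawing unless it drops well below
                else:
                    obj.visible = score > ON_THRESHOLD    # only start drawing when clearly worth it

        scene = [Drawable(3.0), Drawable(2.0), Drawable(0.5)]
        update_visibility(scene, shapes_drawn_recently=4)
        print([d.visible for d in scene])   # [True, False, False]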

  • Add-ons for Firefox - Java Plugin has been blocked for JRE versions below 1.6.0_31 or between 1.7.0 and 1.7.0_2

    - by user702295
    As Java 1.6u31 is not certified for use with EBS or Demantra, you may notice issues in relation to the Java plug-in. Demantra Development is currently working to certify Java 1.6u31; they are recommending that you upgrade to that version. EBS customers should not be installing 1.6u31, as it is not certified. If you do upgrade your browser, you will either need to downgrade to a lower release of Firefox or find a way of allowing Firefox to use the older version of the Java plug-in.

  • Slides and demo code for Columnstore Index session

    - by Hugo Kornelis
    Almost a week has passed since SQLBits X in London, so I guess it's about time for me to share the slides and demo code of my session on columnstore indexes. After all, I promised people I would do that – especially when I found out that I had enough demos prepared to fill two sessions! I made some changes to the demo code. I added extra comments, not only to the demos I could not explain and run during the session, but also to the rest, so that people who missed the session will also be able to...(read more)

  • OpenGL profiling with AMD PerfStudio 2

    - by Aurus
    I'm rendering just a really small number of polygons for my UI, but I still tried to increase the FPS. In the end I removed redundant calls, which increased the FPS. I really don't want to lose FPS for nothing, so I keep looking for more improvements. The first thing I noticed is the "huge" stretch of time where no calls are made before SwapBuffer (the black one). Well, I know that OpenGL works asynchronously, so SwapBuffer has to wait until everything is done. But shouldn't PerfStudio mark this time as black too? Correct me if I am wrong. The second thing I noticed is that some glUniform2f calls just take longer (the brown ones). I mean, they should all upload two floats to the GPU; how can the time be so different from call to call? The program isn't even changed or anything like that. I also tried to look at other programs like gDebugger or CodeXL, but they often crashed and they show fewer statistics (only the number of calls, redundant calls, etc.). EDIT: I also realized that the draw calls have different durations too, which was obvious to me, but sometimes drawing more vertices is faster than drawing fewer vertices.

  • How to efficiently map tokens to code in a script interpreter?

    - by lithander
    I'm writing an interpreter for a simple scripting language where each line is a complete, executable command (like the instructions in assembler). When parsing a line I have to map the requested command to actual code. My current solution looks like this:

        std::string op, param1, param2;
        //parse line, identify op, param1, param2
        ...
        //call command specific code
        if(op == "MOV") ExecuteMov(AsNumber(param1));
        else if(op == "ROT") ExecuteRot(AsNumber(param1));
        else if(op == "SZE") ExecuteSze(AsNumber(param1));
        else if(op == "POS") ExecutePos(AsNumber(param1), AsNumber(param2));
        else if(op == "DIR") ExecuteDir(AsNumber(param1), AsNumber(param2));
        else if(op == "SET") ExecuteSet(param1, AsNumber(param2));
        else if(op == "EVL") ...

    The more commands are supported, the more string comparisons I'll have to do to identify and call the associated method. Can you point me to a more efficient implementation for the described scenario?
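
    The usual answer is to build the mapping once in a lookup table; in C++ that is typically a std::unordered_map from the opcode string to a handler, so dispatch becomes a single hash lookup instead of a chain of string comparisons. Below is a small illustrative sketch of the idea in Python; the opcodes and handler names simply mirror the ones in the question, and the handler bodies are placeholders:

        def execute_mov(a):        print("MOV", a)
        def execute_rot(a):        print("ROT", a)
        def execute_pos(a, b):     print("POS", a, b)
        def execute_set(name, b):  print("SET", name, b)

        def as_number(token):
            return float(token)

        # opcode -> (handler, argument converters), built once at start-up
        DISPATCH = {
            "MOV": (execute_mov, (as_number,)),
            "ROT": (execute_rot, (as_number,)),
            "POS": (execute_pos, (as_number, as_number)),
            "SET": (execute_set, (str, as_number)),
        }

        def run_line(line):
            op, *params = line.split()
            handler, converters = DISPATCH[op]          # one hash lookup instead of many comparisons
            handler(*(conv(p) for conv, p in zip(converters, params)))

        run_line("POS 10 20")
        run_line("SET speed 3.5")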
