Search Results

Search found 19055 results on 763 pages for 'high performance'.


  • Quick ways to boost performance and scalability of ASP.NET, WCF and Desktop Clients

    - by oazabir
    There are some simple configuration changes that you can make to machine.config and IIS to give your web applications a significant performance boost. These are simple, harmless changes, but they make a lot of difference in terms of scalability. By tweaking system.net settings, you can increase the number of parallel calls that can be made from the services hosted on your servers as well as from desktop computers, and thus increase scalability. By changing the WCF throttling config you can increase the number of simultaneous calls WCF can accept and thus make the most of your hardware. By changing the ASP.NET process model, you can increase the number of concurrent requests that can be served by your website. And finally, by turning on IIS caching and dynamic compression, you can dramatically increase page download speed in browsers and the overall responsiveness of your applications. Read the CodeProject article for more details. http://www.codeproject.com/KB/webservices/quickwins.aspx Please vote for me if you find the article useful.

    Read the article

  • Java performance of StringBuilder append chains

    - by ultimate_guy
    In Java, if I am building a significant number of strings, is there any difference in performance between the following two examples?

        StringBuilder sb = new StringBuilder();
        for (int i = 0; i < largeNumber; i++) {
            sb.append(var[i]);
            sb.append('=');
            sb.append(value[i]);
            sb.append(',');
        }

    or

        StringBuilder sb = new StringBuilder();
        for (int i = 0; i < largeNumber; i++) {
            sb.append(var[i]).append('=').append(value[i]).append(',');
        }

    Thanks!
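    Both loops end up issuing the same sequence of append calls (each append returns the builder itself), so any difference should be negligible; the chained form mostly saves reloading the local variable between calls. A minimal, illustrative timing sketch if you want to measure it yourself - a proper comparison would use a harness such as JMH, and keys/vals here stand in for the question's var[]/value[] arrays:

        // Crude wall-clock comparison; run several times and ignore the first (warm-up) results.
        public class AppendBenchmark {
            public static void main(String[] args) {
                int largeNumber = 1_000_000;
                String[] keys = new String[largeNumber];   // stands in for var[]
                String[] vals = new String[largeNumber];   // stands in for value[]
                for (int i = 0; i < largeNumber; i++) { keys[i] = "k" + i; vals[i] = "v" + i; }

                long t0 = System.nanoTime();
                StringBuilder a = new StringBuilder();
                for (int i = 0; i < largeNumber; i++) {
                    a.append(keys[i]);
                    a.append('=');
                    a.append(vals[i]);
                    a.append(',');
                }
                long t1 = System.nanoTime();

                StringBuilder b = new StringBuilder();
                for (int i = 0; i < largeNumber; i++) {
                    b.append(keys[i]).append('=').append(vals[i]).append(',');
                }
                long t2 = System.nanoTime();

                System.out.printf("separate appends: %d ms, chained appends: %d ms%n",
                        (t1 - t0) / 1_000_000, (t2 - t1) / 1_000_000);
            }
        }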

    Read the article

  • SQL SERVER – Video – Performance Improvement in Columnstore Index

    - by pinaldave
    I earlier wrote an article about SQL SERVER – Fundamentals of Columnstore Index and it was very well received by the community. However, one of the suggestions I keep receiving for that article is that many readers wanted to see the columnstore index in action but were not able to do so. Some of the readers had not installed SQL Server 2012, or did not have a machine powerful enough to recreate the big table involved in the demo. For that reason, I have created a small video. I have written two more articles on the columnstore index. Please read them as a follow-up to the video: SQL SERVER – How to Ignore Columnstore Index Usage in Query SQL SERVER – Updating Data in A Columnstore Index Reference: Pinal Dave (http://blog.SQLAuthority.com) Filed under: Pinal Dave, PostADay, SQL, SQL Authority, SQL Index, SQL Performance, SQL Query, SQL Scripts, SQL Server, SQL Tips and Tricks, T SQL, Technology, Video

    Read the article

  • How can I improve overall system performance?

    - by Decio Lira
    What are your tips for improving overall system performance on Ubuntu? Inspired by this question, I realized that some default settings may be rather conservative on Ubuntu and that it's possible to tweak them with little or no risk if you wish to make things faster. This is not meant to be application specific (e.g. make Firefox load pages faster), but system wide. Preferably 1 tip per answer, with enough detail for people to implement it. A couple of mine would be: Install Preload (via Software Center or sudo apt-get install preload); Change the swappiness value - "which controls the degree to which the kernel prefers to swap when it tries to free memory". What are yours? PS: Since this is not intended to have a unique answer but rather several useful tips, I'm making this a community wiki from the start.

    Read the article

  • Coded ui to measure performance

    - by Mike Weber
    I have been tasked with using Coded UI to measure performance on a proprietary Windows desktop application. The need is to measure how long it takes for the next page/screen to display after a user clicks on a control. For example, a user enters their ID and password and clicks sign-in; the need is to measure how long it takes for the next screen to display once the user clicks the sign-in button. I understand the need to define what indicates that the screen is loaded and ready for use. One approach is to use control.WaitForControlReady together with BeginTimer/EndTimer. Is Coded UI a dependable and accurate way of measuring time? Is WaitForControlReady the best method to determine when a control is ready for use?

    Read the article

  • Will having many timers affect my game performance?

    - by iQue
    I'm making a game for Android, and earlier today I was trying to add some cool stuff to my game. The problem is that this thing needs something like 5 timers. I build my timers like this:

        timer += deltaTime;
        if (timer >= 2.0f) {
            doStuff();
            timer -= 2.0f;
        } // this timer gets stuff done every 2 secs

    Will having too many timers like this, checked every frame, hurt my game's performance? The effect I wanted to add was a crosshair every 2 seconds, then remove it after 2 seconds and do a timed animation - so an array of crosshairs dependent on a bunch of timers, to be exact. This caused my game to shut down when used, so that's why I'm wondering if using that many timers causes my game to flip out.
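    For scale: a handful of float accumulators compared once per frame costs a few additions and branches, which is negligible next to rendering, so the shutdown is more likely to come from whatever doStuff allocates or decodes every 2 seconds than from the timers themselves. A minimal sketch of the same accumulator pattern wrapped in a small reusable class - the class and method names here are invented for illustration:

        import java.util.ArrayList;
        import java.util.List;

        // One accumulator per repeating action, all advanced once per frame.
        class CountdownTimer {
            private final float interval;
            private final Runnable action;
            private float elapsed;

            CountdownTimer(float intervalSeconds, Runnable action) {
                this.interval = intervalSeconds;
                this.action = action;
            }

            void update(float deltaTime) {
                elapsed += deltaTime;
                while (elapsed >= interval) {   // catch up if a frame ran long
                    elapsed -= interval;
                    action.run();
                }
            }
        }

        class TimerPool {
            private final List<CountdownTimer> timers = new ArrayList<>();

            void add(CountdownTimer t) { timers.add(t); }

            // Call once per frame from the game loop.
            void update(float deltaTime) {
                for (CountdownTimer t : timers) {
                    t.update(deltaTime);
                }
            }
        }

    Each crosshair can then own one CountdownTimer, and the panel's update loop calls pool.update(deltaTime) once per frame.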

    Read the article

  • Poor mobile performance when running from Eclipse

    - by Yajirobe_LOL
    So after weeks of thinking my rendering code was bad, I accidentally discovered the following when running my game on a Nexus S:

    - From Eclipse (Debug as - Android application): 12 fps
    - From the device while still attached to USB (still getting log info in Eclipse): 24 fps
    - From the device while not attached via USB: 56 fps

    I was wondering if anyone else has issues like this? I mean, the problem really isn't a problem, since the final release build will likely have good performance, but for the time being I don't want to have to keep (un)plugging my device in and out when testing code all day long. Is there some remedy for this, or does anyone have any input/advice? Thanks.

    Read the article

  • Setter Validation can affect performance?

    - by TiagoBrenck
    Within a scenario where you use an ORM to map your entities to the DB, and you have setter validations (nullable checks, date-earlier-than-today validation, etc.), every time the ORM reads a result it passes through the setters to instantiate the object. If I have a grid that usually returns 500 records, I assume that each record passes through all the validations. If my entity has 5 setter validations, then I have run 2,500 validations. Will those 2,500 validations affect performance? Would 15,000 validations be any different? In my opinion, and according to this answer (http://stackoverflow.com/questions/4893558/calling-setters-from-a-constructor/4893604#4893604), setter validation is more useful than constructor validation. Is there a way to avoid unnecessary validation, since I know that the values I send to the DB when saving the entity won't change until I edit them in my system?
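    A common way out is to give the ORM a hydration path that bypasses the setters (many ORMs can be configured to use field access rather than property access), keeping the validation for user-driven edits only. A rough Java sketch of that idea - the entity, field, and factory names are invented for illustration:

        import java.time.LocalDate;

        class Appointment {
            private LocalDate date;

            // Validating setter used by application code (user edits).
            public void setDate(LocalDate date) {
                if (date == null || !date.isBefore(LocalDate.now())) {
                    throw new IllegalArgumentException("date must be non-null and earlier than today");
                }
                this.date = date;
            }

            // Hydration path for trusted rows coming back from the database:
            // direct field assignment, so 500 rows x 5 setters adds no validation cost.
            static Appointment fromDatabaseRow(LocalDate date) {
                Appointment a = new Appointment();
                a.date = date;
                return a;
            }
        }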

    Read the article

  • Performance tracking/monitoring in games

    - by vitaliy kotik
    Let's say I have an online game with a downloadable client / browser plugin. I want to track the performance of my software and automatically send a summary to the server. Let it be fps, latency, load time, physics step calculation time, whatever... I also want tools to perform data analysis: per-session stats, per-hardware stats, averages, totals, diagrams, etc., so that I can see what the real-world hotspots / bottlenecks are. Is there any common out-of-the-box / SaaS solution?
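    Even if the analysis side ends up in an off-the-shelf service, the collection side is usually a small amount of code. A rough Java sketch of a per-session aggregator that posts a summary when the session ends - the endpoint URL and JSON field names are placeholders:

        import java.io.OutputStream;
        import java.net.HttpURLConnection;
        import java.net.URL;
        import java.nio.charset.StandardCharsets;
        import java.util.Locale;

        // Accumulates frame times during a session and posts a small summary at the end.
        class SessionMetrics {
            private long frames;
            private double totalFrameMs;
            private double worstFrameMs;

            void onFrame(double frameMs) {
                frames++;
                totalFrameMs += frameMs;
                if (frameMs > worstFrameMs) worstFrameMs = frameMs;
            }

            void send() throws Exception {
                String json = String.format(Locale.ROOT,
                        "{\"avg_fps\":%.1f,\"worst_frame_ms\":%.1f,\"frames\":%d}",
                        frames * 1000.0 / totalFrameMs, worstFrameMs, frames);
                HttpURLConnection c = (HttpURLConnection)
                        new URL("https://stats.example.com/session").openConnection(); // placeholder endpoint
                c.setRequestMethod("POST");
                c.setDoOutput(true);
                c.setRequestProperty("Content-Type", "application/json");
                try (OutputStream out = c.getOutputStream()) {
                    out.write(json.getBytes(StandardCharsets.UTF_8));
                }
                c.getResponseCode(); // forces the request to be sent
            }
        }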

    Read the article

  • Using a subset of GetHashCode() to increase AzureTable performance through partitioning

    - by makerofthings7
    Generally speaking, Azure Table IO performance improves as more partitions are used (with some tradeoffs in continuation tokens and batch updates I won't go into). Since the partition key is always a string I am considering using a "natural" load balancing technique based on a subset of the GetHashCode() of the partition key, and appending this subset to the partition key itself. This will allow all direct PK/RK queries to be computed with little overhead and with ease. Batch updates may just need an intermediate to group similar PKs together prior to submission. Question: Should I use GetHashCode() to compute the partition key? Is a better function available? If I use GetHashCode() does it matter which character I use for my PK? Is there an abstraction for Azure Table and Blob storage that does this for me already?
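    Whichever hash is used, the property that matters is stability: the bucket must come out the same across processes, machines, and runtime versions, and as far as the documentation has long warned, .NET string hash codes are not guaranteed to, which is the main caution with GetHashCode(). The pattern itself is just deriving a small bucket number from the natural key and folding it into the partition key; a sketch of that pattern in Java using CRC32, with an illustrative bucket count:

        import java.nio.charset.StandardCharsets;
        import java.util.zip.CRC32;

        final class PartitionKeys {
            private static final int BUCKETS = 16; // number of partitions to spread load across

            // Derives a stable bucket from the natural key and appends it to the partition key,
            // so the same key always lands in the same partition and point lookups stay cheap.
            static String withBucket(String naturalKey) {
                CRC32 crc = new CRC32();
                crc.update(naturalKey.getBytes(StandardCharsets.UTF_8));
                long bucket = crc.getValue() % BUCKETS;
                return naturalKey + "-" + bucket;
            }
        }

    A point lookup recomputes withBucket(key) and still hits exactly one partition; only queries that cannot reconstruct the key need to fan out across all 16 buckets.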

    Read the article

  • Improving grepping over a huge file performance

    - by rogerio_marcio
    I have FILE_A, which has over 300K lines, and FILE_B, which has over 30M lines. I created a bash script that greps each line of FILE_A in FILE_B and writes the result of the grep to a new file. This whole process takes over 5 hours. I'm looking for suggestions on any way to improve the performance of my script. I'm using grep -F -m 1 as the grep command. FILE_A looks like this:

        123456789
        123455321

    and FILE_B is like this:

        123456789,123456789,730025400149993,
        123455321,123455321,730025400126097,

    So with bash I have a while loop that picks the next line in FILE_A and greps it in FILE_B. When the pattern is found in FILE_B, I write it to result.txt.

        while read -r line; do
            grep -F -m1 "$line" 30MFile
        done < 300KFile

    Thanks a lot in advance for your help.
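    The loop restarts a scan of the 30M-line file once for every one of the 300K keys; loading the keys into a hash set and streaming the big file a single time removes that repeated work (on the shell side, a single grep -F -f 300KFile 30MFile achieves much the same thing). A rough Java sketch of the one-pass approach; it assumes each key should match FILE_B's first comma-separated field, which is my reading of the sample data:

        import java.io.BufferedReader;
        import java.io.IOException;
        import java.io.PrintWriter;
        import java.nio.file.Files;
        import java.nio.file.Paths;
        import java.util.HashSet;
        import java.util.Set;

        public class JoinFiles {
            public static void main(String[] args) throws IOException {
                // Load the 300K keys once.
                Set<String> keys = new HashSet<>(Files.readAllLines(Paths.get("300KFile")));

                // Stream the 30M-line file a single time, keeping lines whose first field is a key.
                try (BufferedReader in = Files.newBufferedReader(Paths.get("30MFile"));
                     PrintWriter out = new PrintWriter(Files.newBufferedWriter(Paths.get("result.txt")))) {
                    String line;
                    while ((line = in.readLine()) != null) {
                        int comma = line.indexOf(',');
                        String firstField = (comma >= 0) ? line.substring(0, comma) : line;
                        if (keys.contains(firstField)) {
                            out.println(line);
                        }
                    }
                }
            }
        }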

    Read the article

  • How to force high-dpi scaling?

    - by Ian Boyd
    How can I force an application to be high-dpi scaled? Or, alternatively, how can an application that is not manifested as high-dpi aware, and doesn't have dpi-scaling disabled from the Properties-Compatibility tab, not be scaled? Pretend I have an application whose developers decided to manifest it as high-dpi aware when it really isn't. How can I force dpi-scaling? Pretend I have an application that Microsoft has applied the "HighDpiAware" compatibility shim to, when the application really isn't. How can I force dpi-scaling? Pretend I have an application (i.e. Shareaza) that does not have a dpi-aware manifest and does not disable dpi-scaling from the application's Properties tab, but high-dpi scaling is not being applied - how can I force it? How can I force Windows to apply dpi-scaling to an application?

    Read the article

  • Scalable WordPress Host for High-Volume Site?

    - by Jonathan Eunice
    I need recommendations for a scalable web host for a high volume WordPress web site. For my purposes, high-volume might be 100K-500K visitors/hour. Might think towards a 1M/hour burst rate as a "high water mark." I know WP isn't the highest-performing platform out there (ha!), but it's non-negotiable. I can do "the usual optimizations" (e.g. put static files in a CDN, run and follow the advice of performance analyzers like YSlow, etc). But it will still be WordPress, and there will be a dozen or so plugins involved. So, where to host the site? Most "what's the best WordPress host?" discussions seem to focus on lowest-cost. I need the opposite. What are the high-volume, scalable, or clustered WordPress hosts with which you've had great experiences?

    Read the article

  • Missing processor/memory counters in the Windows XP Performance Monitor application (perfmon)

    - by Jader Dias
    Perfmon is a Windows utility that helps the developer find bottlenecks in his applications by measuring system counters. I was reading a perfmon tutorial, and from its list of essential counters I have found the following ones on my machine:

        PhysicalDisk\Bytes/sec_Total
        Network Interface\Bytes Total/Sec\nic name

    But I haven't been able to find the following counters anywhere:

        Processor\% Processor Time_Total
        Process\Working Set_Total
        Memory\Available MBytes

    Where do I find them? Note that my Windows is pt-BR (instead of en-US). Where do I find language-specific documentation for Windows tools like PerfMon?

    Read the article

  • WCF Service Layer in n-layered application: performance considerations

    - by Marconline
    Hi all. When I went to university, teachers used to say that in a well-structured application you have a presentation layer, a business layer and a data layer. This is what I heard for more than 5 years. When I started working I discovered that this is true, but sometimes it is better to have more than just three layers. Two or three days ago I discovered this article by John Papa that explains how to use Entity Framework in a layered application. According to that article you should have: UI Layer and Presentation Layer (Model View Pattern), Service Layer (WCF), Business Layer, Data Access Layer. The Service Layer is, to me, one of the best ideas I've heard since I started working: your UI is then completely "disconnected" from the Business and Data Layers. Now, when I went deeper by looking into the provided source code, I began to have some questions. Can you help me answer them? Question #0: is this a good enterprise application template in your opinion? Question #1: where should I host the service layer? Should it be a Windows Service or something else? Question #2: in the source code provided, the service layer exposes just an endpoint with WSHttpBinding. This is the most interoperable binding but (I think) the worst in terms of performance (due to serialization and deserialization of objects). Do you agree? Question #3: if you agree with me on Question 2, which kind of binding would you use? Looking forward to hearing from you. Have a nice weekend! Marco

    Read the article

  • Performance profiler for a java application

    - by Nitin Garg
    I need to optimize a Java application. It makes some 3rd-party calls. I need a good tool to accurately measure the time taken by individual API calls. To give an idea of the complexity: the application takes a data source file containing 10 lakh (1 million) rows, and it takes around one hour to complete the processing. As part of the processing, it makes some 3rd-party calls (including some network calls). I need to identify which calls are taking more time than others and, based on that, find a way to optimize the application. Any suggestions would be appreciated.
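    A profiler such as VisualVM or JProfiler will attribute time per method; when only a handful of known third-party calls matter, a crude wall-clock wrapper is often enough to rank them. A sketch of such a wrapper - the labels and the wrapped calls are whatever you choose:

        import java.util.Map;
        import java.util.concurrent.ConcurrentHashMap;
        import java.util.concurrent.atomic.LongAdder;
        import java.util.function.Supplier;

        // Accumulates total time and call count per labelled call site.
        final class CallTimer {
            private static final Map<String, LongAdder> totalNanos = new ConcurrentHashMap<>();
            private static final Map<String, LongAdder> calls = new ConcurrentHashMap<>();

            static <T> T time(String label, Supplier<T> call) {
                long start = System.nanoTime();
                try {
                    return call.get();
                } finally {
                    totalNanos.computeIfAbsent(label, k -> new LongAdder()).add(System.nanoTime() - start);
                    calls.computeIfAbsent(label, k -> new LongAdder()).increment();
                }
            }

            static void report() {
                totalNanos.forEach((label, nanos) -> System.out.printf(
                        "%-30s %8d calls %10d ms total%n",
                        label, calls.get(label).sum(), nanos.sum() / 1_000_000));
            }
        }

    Wrapping a call then looks like CallTimer.time("geo-lookup", () -> thirdPartyClient.lookup(row)) - where "geo-lookup" and thirdPartyClient are made-up names - and CallTimer.report() at the end of the run prints the totals.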

    Read the article

  • Performance-Driven Development

    - by BuckWoody
    I was reading a blog yesterday about the evils of SELECT *. The author pointed out that it's almost always a bad idea to use SELECT * for a query, but in the case of SQL Azure (or any cloud database, for that matter) it's especially bad, since you're paying for each transmission that comes down the line. A very good point indeed. This got me to thinking - shouldn't we treat ALL programming that way? In other words, wouldn't it make sense to pretend that we are paying for every chunk of data - a little less for a bit, a lot more for a BLOB or VARCHAR(MAX), that sort of thing? In effect, we really are paying for that. Which led me to the thought of Performance-Driven Development, or the act of programming with the goal of having the fastest code from the very outset. This isn't an original title, since a quick Bing-search shows me a couple of offerings from Forrester and a professional in Israel who already used that title, but the general idea I'm thinking of is assigning a "cost" to each code round-trip, be it network, storage, trip time and other variables, and then rewarding the developers that come up with the fastest code. I wonder what kind of throughput and round-trip times you could get if your developers were paid on a scale of how fast the application performed...

    Read the article

  • What causes bad performance in consumer apps?

    - by Crashworks
    My Comcast DVR takes at least three seconds to respond to every remote control keypress, making the simple task of watching television into a frustrating button-mashing experience. My iPhone takes at least fifteen seconds to display text messages and crashes ¼ of the times I try to bring up the iPad app; simply receiving and reading an email often takes well over a minute. Even the navcom in my car has mushy and unresponsive controls, often swallowing successive inputs if I make them less than a few seconds apart. These are all fixed-hardware end-consumer appliances for which usability should be paramount, and yet they all fail at basic responsiveness and latency. Their software is just too slow. What's behind this? Is it a technical problem, or a social one? Who or what is responsible? Is it because these were all written in managed, garbage-collected languages rather than native code? Is it the individual programmers who wrote the software for these devices? In all of these cases the app developers knew exactly what hardware platform they were targeting and what its capabilities were; did they not take it into account? Is it the guy who goes around repeating "optimization is the root of all evil," did he lead them astray? Was it a mentality of "oh it's just an additional 100ms" each time until all those milliseconds add up to minutes? Is it my fault, for having bought these products in the first place? This is a subjective question, with no single answer, but I'm often frustrated to see so many answers here saying "oh, don't worry about code speed, performance doesn't matter" when clearly at some point it does matter for the end-user who gets stuck with a slow, unresponsive, awful experience. So, at what point did things go wrong for these products? What can we as programmers do to avoid inflicting this pain on our own customers?

    Read the article

  • Do or can robots cause considerable performance issues?

    - by Anicho
    So the question in the title is exactly what I am trying to find out. My case is: at work we are in a discussion with team members who seem to think bots will cause us problems relating to performance when running on our services website.

    Our setup: let's say I have the site www.mysite.co.uk; this is a shop window to our online services, which sit on www.mysiteonline.co.uk. When people search Google for mysite they see mysiteonline.co.uk as well as mysite.co.uk.

    Cases against stopping bots from crawling:
    - We don't store GBs of data publicly available on the web.
    - Most friendly bots, if they were going to cause issues, would have done so already.
    - In our instance the bots can't crawl the site because it requires a username & password.
    - Stopping bots with robots.txt causes an issue with SEO (ref. 1).
    - If it were a malicious bot, it would ignore robots.txt or meta tags anyway.

    Ref 1: If we were to block mysiteonline.co.uk from having robots crawl it, this would affect SEO rankings and make it inconvenient for users who actively search for mysite to find mysiteonline, which we can prove is the case for a good portion of our users.

    Read the article

  • android game performance regarding timers

    - by iQue
    I'm new to the game-dev world and I have a tendency to over-simplify my code, and sometimes this costs me a lot of memory. I'm using a custom TimerTask that looks like this:

        public class Task extends TimerTask {
            private MainGamePanel panel;

            public Task(MainGamePanel panel) {
                this.panel = panel;
            }

            /**
             * When the timer executes, this code is run.
             */
            public void run() {
                panel.createEnemies();
            }
        }

    This task calls this method on my view:

        public void createEnemies() {
            Bitmap bmp = BitmapFactory.decodeResource(getResources(), R.drawable.female);
            if (enemyCounter < 24) {
                enemies.add(new Enemy(bmp, this));
            }
            enemyCounter++;
        }

    Since I call this in the onCreate method instead of in my view's constructor (because my enemies need the width and height of the view), I'm wondering if this will work when I have multiple levels in the game (starting a new intent), and whether this kind of timer really is the best way to add a delay between the spawning times of my enemies, performance-wise. Adding the code for my timer in case anyone came here because they don't understand timers:

        private Timer timer1 = new Timer();
        private long delay1 = 5 * 1000; // 5 sec delay

        public void surfaceCreated(SurfaceHolder holder) {
            timer1.schedule(new Task(this), 0, delay1); // I schedule my timer with the delay
            thread.setRunning(true);
            thread.start();
        }
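    One caveat worth noting: java.util.Timer fires on its own background thread, so createEnemies() runs off the game/render thread and each level's Timer keeps its own thread alive. A common alternative is to drive spawning from the game loop itself with an elapsed-time accumulator. A rough, plain-Java sketch of the idea, assuming the loop already knows each frame's deltaTime in seconds - the class name is invented, and the Object placeholder stands in for the question's Enemy:

        import java.util.ArrayList;
        import java.util.List;

        class EnemySpawner {
            private static final float SPAWN_INTERVAL = 5f; // seconds between spawns
            private static final int MAX_ENEMIES = 24;

            private final List<Object> enemies = new ArrayList<>(); // stand-in for List<Enemy>
            private float spawnTimer = 0f;

            // Called once per frame from the game thread, so there is no cross-thread
            // access to the enemy list and no extra Timer thread per level.
            void update(float deltaTime) {
                spawnTimer += deltaTime;
                if (spawnTimer >= SPAWN_INTERVAL && enemies.size() < MAX_ENEMIES) {
                    spawnTimer -= SPAWN_INTERVAL;
                    enemies.add(new Object()); // replace with new Enemy(bmp, panel)
                }
            }
        }

    Decoding the bitmap once and reusing it for every spawn also avoids repeating BitmapFactory.decodeResource every 5 seconds.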

    Read the article

  • 12.10 visual performance using nvidia driver

    - by user100485
    My fresh Ubuntu 12.10 install is slow - not something extreme, but dragging windows, switching workspaces and things like that are just slow and look horrible; it feels like the fps is dropping in a game. Doing some Photoshop work in Windows was even a relief! This effect gets worse if I connect my external monitor. My system is an Intel Pentium dual-core T4500 with 4 GB of memory and a GeForce 8200M G/integrated/SSE2 graphics chip. Nothing fancy, but it should be able to run OK. My "experience" in Ubuntu is set to standard. (MSI CR500 laptop) I've installed the nvidia drivers and tried both current and experimental; the experimental drivers seem to perform a bit better, but overall it's bad anyway. I set the mode to adaptive in the nvidia-settings tool and it goes to the maximum setting directly and doesn't come back. Using htop I found out that compiz or the X server always uses a few percent of my CPU, more than I think it should, and the time consumed is 5:18 for compiz, 4:33 for /usr/bin/X and 2:41 for google chrome (about 30 tabs open, so not too strange I think). What can I do to increase the visual performance? This makes me not want to use Ubuntu in public!

    Read the article

  • Count a row VS Save the Row count after each update

    - by SAFAD
    I want to know whether saving a row count in a table is better than counting the rows on every request. Quick example: a visitor goes to a clan's page, which displays the clan information and the members who have joined the group. Should the page look up all the users who joined the clan and count them, or just display the number of members already saved in the table? I think counting on each request can't get out of sync or be manipulated, but it might cost performance. Your ideas?
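    One middle ground is to keep the stored count but update it in the same transaction that adds or removes a member, so it cannot drift from the real count while reads stay a single cheap column fetch. A JDBC-style sketch of that idea - the table and column names are invented:

        import java.sql.Connection;
        import java.sql.PreparedStatement;
        import java.sql.SQLException;

        final class ClanMembership {
            // Inserts the membership row and bumps the cached counter atomically,
            // so displaying the member count never needs a COUNT(*) over the join table.
            static void addMember(Connection con, long clanId, long userId) throws SQLException {
                boolean oldAutoCommit = con.getAutoCommit();
                con.setAutoCommit(false);
                try (PreparedStatement insert = con.prepareStatement(
                         "INSERT INTO clan_members (clan_id, user_id) VALUES (?, ?)");
                     PreparedStatement bump = con.prepareStatement(
                         "UPDATE clans SET member_count = member_count + 1 WHERE id = ?")) {
                    insert.setLong(1, clanId);
                    insert.setLong(2, userId);
                    insert.executeUpdate();
                    bump.setLong(1, clanId);
                    bump.executeUpdate();
                    con.commit();
                } catch (SQLException e) {
                    con.rollback();
                    throw e;
                } finally {
                    con.setAutoCommit(oldAutoCommit);
                }
            }
        }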

    Read the article

  • SQL SERVER – What is Incremental Statistics? – Performance improvements in SQL Server 2014 – Part 1

    - by Pinal Dave
    This is the first part of the series on Incremental Statistics. Here is the index of the complete series:

    - What is Incremental Statistics? – Performance improvements in SQL Server 2014 – Part 1
    - Simple Example of Incremental Statistics – Performance improvements in SQL Server 2014 – Part 2
    - DMV to Identify Incremental Statistics – Performance improvements in SQL Server 2014 – Part 3

    Statistics are considered one of the most important aspects of SQL Server performance tuning. You have probably often heard the phrase, with regard to performance tuning: "Update statistics before you take any other steps to tune performance." Honestly, I have said the above many, many times, and I have personally updated statistics before starting any performance tuning exercise. You may agree or disagree with the point, but there is no denying that statistics play an extremely vital role in performance tuning. SQL Server 2014 has a new feature called Incremental Statistics. I have been playing with this feature for quite a while and I find it very interesting. After spending some time with it, I decided to write about the subject here.

    New in SQL Server 2014 – Incremental Statistics: It seems like lots of people want to start using SQL Server 2014's new Incremental Statistics feature. However, let us understand what this feature actually does and how it can help. I will try to simplify it before I start working on the demo code.

    Code for all versions of SQL Server: Here is the code which you can execute on all versions of SQL Server, and it will update the statistics of your table. The keyword you should pay attention to is WITH FULLSCAN. It will scan the entire table and build brand new statistics which SQL Server can use for better estimation of your execution plan.

        UPDATE STATISTICS TableName(StatisticsName) WITH FULLSCAN

    Who should learn about this, and why? If you are using partitions in your database, you should consider implementing this feature; otherwise, it is pretty much not applicable to you. If you are using a single partition and your table data is in a single place, you still have to update your statistics the same way you have been doing. If you are using multiple partitions, this may be a very useful feature for you. In most cases, users have multiple partitions because they have lots of data in their table, and each partition holds the data that belongs to it. It is very common for each partition to be populated separately in SQL Server.

    Real-world example: If your table contains sales data, you will have plenty of entries in it, and it can be a good idea to divide the table into multiple partitions - for example, 3 semesters, 4 quarters, or even 12 months. Let us assume that we have divided our table into 12 partitions. For the month of January our first partition is populated, and for the month of February our second partition is populated. Now assume that you have plenty of data in your first and second partitions, the month of March has just started, and your third partition has started to populate. If, for some reason, you want to update your statistics, what will you do?

    In SQL Server 2012 and earlier versions: You will just use the WITH FULLSCAN code above and update the entire table. That means that even though you only have new data in the third partition, you will still update the statistics for the entire table. This will be a VERY resource-intensive process, as you will be updating the statistics of partitions 1 and 2 where the data has not changed at all.

    In SQL Server 2014: You will just update the statistics of partition 3. There is a special syntax where you can now specify which partition you want to update. The impact is that it smartly merges the new data with the old statistics and updates the entire statistics object without doing a FULLSCAN of your entire table. This has a huge impact on performance. Remember that the new feature in SQL Server 2014 does not change anything besides the ability to update a single partition. However, there is one more attractive detail: previously, a statistics update was triggered when 20% of the table's data had changed; now the same threshold applies to a single partition. That means that if a partition sees a 20% data change, it will also trigger a partition-level statistics update which, when merged into your final statistics, will give you better performance.

    In summary: If you are not using partitions, this feature is not applicable to you. If you are using partitions, this feature can be very helpful to you.

    Tomorrow: We will see working code for SQL Server 2014 Incremental Statistics.

    Reference: Pinal Dave (http://blog.sqlauthority.com) Filed under: PostADay, SQL, SQL Authority, SQL Performance, SQL Query, SQL Server, SQL Tips and Tricks, T SQL Tagged: SQL Statistics, Statistics

    Read the article

  • Increase application performance

    - by Prayos
    I'm writing a program for a company that will generate a daily report for them. All of the data that they use for this report is stored in a local SQLite database. For this report, they utilize pretty much every bit of the information in the database. So currently, when I query the database, I retrieve everything and store the information in lists. Here's what I've got:

        using (var dataReader = _connection.Select(query))
        {
            if (dataReader.HasRows)
            {
                while (dataReader.Read())
                {
                    _date.Add(Convert.ToDateTime(dataReader["date"]));
                    _measured.Add(Convert.ToDouble(dataReader["measured_dist"]));
                    _bit.Add(Convert.ToDouble(dataReader["bit_loc"]));
                    _psi.Add(Convert.ToDouble(dataReader["pump_press"]));
                    _time.Add(Convert.ToDateTime(dataReader["timestamp"]));
                    _fob.Add(Convert.ToDouble(dataReader["force_on_bit"]));
                    _torque.Add(Convert.ToDouble(dataReader["torque"]));
                    _rpm.Add(Convert.ToDouble(dataReader["rpm"]));
                    _pumpOneSpm.Add(Convert.ToDouble(dataReader["pump_1_strokes_pm"]));
                    _pumpTwoSpm.Add(Convert.ToDouble(dataReader["pump_2_strokes_pm"]));
                    _pullForce.Add(Convert.ToDouble(dataReader["pull_force"]));
                    _gpm.Add(Convert.ToDouble(dataReader["flow"]));
                }
            }
        }

    I then utilize these lists for the calculations. Obviously, the more information there is in this database, the longer the initial query will take. I'm curious whether there is a way to increase the performance of the query at all? Thanks for any and all help.

    EDIT: One of the report rows is called Daily Drilling Hours. For this calculation, I use this method:

        // Retrieves the timestamps where measured depth == bit depth and PSI >= 50
        public double CalculateDailyProjectDrillingHours(DateTime date)
        {
            var dailyTimeStamps = _time.Where((t, i) => _date[i].Equals(date) && _measured[i].Equals(_bit[i]) && _psi[i] >= 50).ToList();
            return _dailyDrillingHours = Convert.ToDouble(Math.Round(TimeCalculations(dailyTimeStamps).TotalHours, 2, MidpointRounding.AwayFromZero));
        }

        // Checks that the interval is at most 10 seconds, then adds the interval to the total time
        private static TimeSpan TimeCalculations(IList<DateTime> timeStamps)
        {
            var interval = new TimeSpan(0, 0, 10);
            var totalTime = new TimeSpan();
            TimeSpan timeDifference;
            for (var j = 0; j < timeStamps.Count - 1; j++)
            {
                if (timeStamps[j + 1].Subtract(timeStamps[j]) <= interval)
                {
                    timeDifference = timeStamps[j + 1].Subtract(timeStamps[j]);
                    totalTime = totalTime.Add(timeDifference);
                }
            }
            return totalTime;
        }

    Read the article
