Search Results

Search found 30217 results on 1209 pages for 'website performance'.


  • Tracking state of a one time event on a big website

    - by Mattis
    Assume a website with 250 million active users. I add a new feature to the website. Once a user visits, I want to use a short tutorial to teach them how to use said feature, and I only want them to complete the tutorial once (or actively click it away). What is a smart way to code the verification check for this? How do I track the progress in the database? Having a separate table with something like NewTutorial_completed = 1 for user_id = 21312315 would just snowball. It also feels intuitively bad to check every one-time event for every user on every page view.

    While writing the question I got one idea: have a separate event log that is checked periodically for any new action the user needs to see or perform. I push events to this log, and once they are completed they are removed from the log. There is no need to store NewTutorial_completed = 1-type variables this way.

    I am sure this is a common problem. I would appreciate any input on what the best practice is.
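
    One way to picture the event-log idea is the sketch below. The table and helper names are made up, and the database access is plain DB-API-style Python, so treat it as an illustration of the shape of the solution rather than a recommendation: each user has rows only for events they still have to see, the check runs once per login (and can be cached in the session), and completing or dismissing the tutorial simply deletes the row, so no NewTutorial_completed flag is ever stored.

        # Hypothetical sketch: "db" is any DB-API style connection and the
        # table user_pending_events(user_id, event_name) is an assumption.
        def pending_events(db, user_id):
            """Return the one-time events this user still has to see (usually none)."""
            cur = db.cursor()
            cur.execute(
                "SELECT event_name FROM user_pending_events WHERE user_id = %s",
                (user_id,),
            )
            return [row[0] for row in cur.fetchall()]

        def complete_event(db, user_id, event_name):
            """Remove the event once the tutorial is finished or dismissed."""
            cur = db.cursor()
            cur.execute(
                "DELETE FROM user_pending_events WHERE user_id = %s AND event_name = %s",
                (user_id, event_name),
            )
            db.commit()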

    Read the article

  • Infrastructure to effectively set up experiments and learn from them

    - by David
    Open-org.com is in the early stages of creating our first product: a place on the web where one can ask lawyers questions at a fraction of their normal cost. An early-stage front page can be found here. I was inspired by this video (recommended by Jeff Atwood), which talks about getting feedback faster, and that is the reason for this question.

    The problem
    Needless to say, we want our conversion rates to be as high as possible. Therefore, we want to be able to rapidly set up a new experiment where we change something on the site (moving an image slightly, rewriting a sentence, etc.). We then want to present the modified page to a random subset of the users and compare the conversion rate of the experiment with that of another version. I could very well imagine us running 10-100 experiments simultaneously, and it would be nice to be able to end experiments early when they are clearly performing worse. See the sketch after this description for the assignment mechanism I have in mind.

    My question
    Does infrastructure to support the whole process exist?

    A short description of our infrastructure: we use EC2 and PHP and have a script to automatically start up new instances with all needed software. Still, starting up a new server for every experiment seems like overkill, so I am wondering what other options exist.

    Btw, if you feel like working for Open-org.com, you can pick a task and start working, or suggest a new task. All profits are given out to the contributors.
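
    For the experiment-assignment part specifically, the usual trick is to bucket users deterministically by hashing the user id together with the experiment name, so the same visitor always sees the same variant and nothing has to be provisioned or stored per experiment. The sketch below is a minimal illustration of that idea in Python; the names and the two-variant split are my own assumptions, not anything Open-org.com uses:

        import hashlib

        def variant_for(user_id, experiment, variants=("control", "b")):
            """Deterministically assign a user to one variant of one experiment."""
            digest = hashlib.md5("{}:{}".format(experiment, user_id).encode()).hexdigest()
            return variants[int(digest, 16) % len(variants)]

        # e.g. in the page template:
        # if variant_for(user_id, "frontpage-headline-2") == "b": render the new headline

    With assignment done this way, running 10-100 experiments is mostly a matter of logging (user, experiment, variant, converted) events and comparing rates afterwards, rather than starting anything new per experiment.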

    Read the article

  • How can I exclude content in my notifications bar from being indexed?

    - by Liam E-p
    Of course I want my content to be indexed quickly by search engines, but not my notifications bar. The notifications bar contains the last 30 changes to content on the site, and I don't want it to show up in my pages' search snippets ("SEO meta"). The notifications are generic, so they rarely provide any relevant information: if an article named "123" is created, a notification is generated that says "Article "123" was created by xxx at 12:00AM". I'm now wondering if this is a content design problem, since only a third of that information (the title and what happened) is actually relevant to users. Basically, what I am wondering is how I can optimise this so that search engines won't show this generic nonsense.

    Read the article

  • Is it okay to showcase templates/layouts recreated with different technologies in a portfolio?

    - by Souta
    I have several different templates/layouts, both simple and complex, that I have recreated multiple times using different technologies. (Say a complex one was originally built with only HTML and CSS; I recreated it with HTML, JavaScript and CSS, then again with an HTML and PHP concoction, and so on.) I wanted to showcase my work and skills by doing this, but I don't know whether it would all be okay to put in a resumé/portfolio. Here is why:

    Freelancing
    Does a potential client really care about how their site is made, as long as it looks and functions to their liking? (As in, should I just show one example of each template/layout and not the multiple recreations?)

    Potential hire
    However, if a potential employer were to stumble across my resumé/portfolio, would having the multiple recreations do any good for my career outlook? (As in, this potential employer is a company where I could be working on a team to create and develop sites rather than freelancing; would a lack of skill-showing turn this employer away because I didn't set myself apart and show that I'm not just like every other budding web designer?)

    Those two issues have me wondering whether it is okay to have a combined resumé/portfolio for this specific reason, or whether something like this doesn't matter to potential clients (as a freelancer) because they wouldn't care either way as long as it looks and functions to their liking, and it is therefore fine to showcase the recreations alongside the originals.

    Read the article

  • Creating an online community - use templates or self-develop?

    - by ican ican
    PHPMotion, Joomla, or develop my own? I'm thinking of developing a common-interest online community. It will have UGC, stats and similar functionality, and perhaps an online store. Though cost is an issue at this time, I want to be professional and effective. Should I use an existing free platform, like PHPMotion or Joomla, or should I develop my own? What are the advantages and disadvantages of either option? As a rough estimate, how much will it cost me to develop and manage my own, and how long will it take? In general, what should I be careful about on this journey?

    Read the article

  • Python performance: iteration and operations on nested lists

    - by J.J.
    Problem
    Hey folks. I'm looking for some advice on Python performance. Some background on my problem. Given:

    - A mesh of nodes of size (x,y), each with a value (0...255), starting at 0
    - A list of N input coordinates, each at a specified location within the range (0...x, 0...y)

    Increment the value of the node at each input coordinate, and of the node's neighbors within range Z, up to a maximum of 255. Neighbors beyond the mesh edge are ignored (no wrapping).

    BASE CASE: a mesh of 1024x1024 nodes, with 400 input coordinates and a range Z of 75 nodes. Processing should be O(x*y*Z*N). I expect x, y and Z to remain roughly around the values in the base case, but the number of input coordinates N could increase up to 100,000. My goal is to minimize processing time.

    Current results
    I have 2 current implementations: f1, f2. Running speed on my 2.26 GHz Intel Core 2 Duo with Python 2.6.1:

        f1: 2.9 s
        f2: 1.8 s

    f1 is the initial naive implementation: three nested for loops. f2 replaces the inner for loop with a list comprehension. Code is included below for your perusal.

    Question
    How can I further reduce the processing time? I'd prefer sub-1.0 s for the test parameters. Please keep the recommendations to native Python. I know I can move to a third-party package such as numpy, but I'm trying to avoid any third-party packages. Also, I've generated random input coordinates and simplified the definition of the node value updates to keep our discussion simple. The specifics have to change slightly and are outside the scope of my question. Thanks much!

    f1 is the initial naive implementation: three nested for loops. 2.9 s

        def f1(x, y, n, z):
            rows = []
            for i in range(x):
                rows.append([0 for i in xrange(y)])
            for i in range(n):
                inputX, inputY = (int(x*random.random()), int(y*random.random()))
                topleft = (inputX - z, inputY - z)
                for i in xrange(max(0, topleft[0]), min(topleft[0]+(z*2), x)):
                    for j in xrange(max(0, topleft[1]), min(topleft[1]+(z*2), y)):
                        if rows[i][j] <= 255:
                            rows[i][j] += 1

    f2 replaces the inner for loop with a list comprehension. 1.8 s

        def f2(x, y, n, z):
            rows = []
            for i in range(x):
                rows.append([0 for i in xrange(y)])
            for i in range(n):
                inputX, inputY = (int(x*random.random()), int(y*random.random()))
                topleft = (inputX - z, inputY - z)
                for i in xrange(max(0, topleft[0]), min(topleft[0]+(z*2), x)):
                    l = max(0, topleft[1])
                    r = min(topleft[1]+(z*2), y)
                    rows[i][l:r] = [j+1 for j in rows[i][l:r] if j < 255]
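
    As a further illustration of where the time goes in f1/f2, here is a hedged sketch (my own, Python 3, standard library only) that keeps the same neighborhood bounds but stores each row as a bytearray and applies the saturating increment with translate(), so the per-node work in the inner loop happens at C speed rather than in a Python list comprehension. It follows the stated spec (values cap at 255) rather than reproducing f2 exactly:

        import random

        # Translation table: value v maps to min(v + 1, 255), i.e. a saturating increment.
        INC = bytes(bytearray(min(v + 1, 255) for v in range(256)))

        def f3(x, y, n, z):
            rows = [bytearray(y) for _ in range(x)]   # all nodes start at 0
            for _ in range(n):
                input_x, input_y = int(x * random.random()), int(y * random.random())
                top, left = input_x - z, input_y - z
                lo, hi = max(0, left), min(left + 2 * z, y)
                for i in range(max(0, top), min(top + 2 * z, x)):
                    # translate() rewrites the whole slice in C,
                    # replacing the Python-level inner loop of f1/f2.
                    rows[i][lo:hi] = rows[i][lo:hi].translate(INC)
            return rows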

    Read the article

  • Performance Enhancement in Full-Text Search Query

    - by Calvin Sun
    Ever since its first release, we have been consolidating and developing the InnoDB Full-Text Search feature, and there is one recent improvement that is worth blogging about. It is a joint effort with the MySQL Optimizer team that simplifies the query plans of some common queries and dramatically shortens query times. I will describe the issue, our solution, and the end result, with some performance numbers, to demonstrate our continuing effort to enhance the Full-Text Search capability.

    The Issue:
    As discussed in previous blogs, InnoDB implements the Full-Text index as inverted-index auxiliary tables. Once parsed, a query is reinterpreted into several queries against the related auxiliary tables, and the results are then merged and consolidated into the final result. So at the end of the query we have all matching records on hand, sorted by their ranking or by their Doc IDs. Unfortunately, MySQL's optimizer and query processing were initially designed for the MyISAM Full-Text index and sometimes did not fully utilize the complete result set from InnoDB. Here are a couple of examples:

    Case 1: Query result ordered by rank, returning only the top N results:

        mysql> SELECT FTS_DOC_ID, MATCH (title, body) AGAINST ('database') AS SCORE FROM articles ORDER BY score DESC LIMIT 1;

    In this query, the user tries to retrieve the single record with the highest ranking. It should have a quick answer once we have all the matching documents on hand, especially since they are already ranked. However, before this change MySQL would retrieve rankings for almost every row in the table, sort them, and only then come up with the top-ranked result. This whole retrieve-and-sort is quite unnecessary given that InnoDB already has the answer. In a real-life case the user could have millions of rows, so under the old scheme MySQL would retrieve and sort millions of rows' rankings even if the Full-Text Search had found only 3 matching rows. That million-row ranking retrieval is done in vain: in the case above, the server only needs the rankings of the 3 matched rows (all other rows have a ranking of 0), and if it wants the top ranking it can simply take the first record from our already-sorted result.

    Case 2: SELECT COUNT(*) on matching records:

        mysql> SELECT COUNT(*) FROM articles WHERE MATCH (title,body) AGAINST ('database' IN NATURAL LANGUAGE MODE);

    In this case, the InnoDB search finds the matching rows quickly and already has all of them. However, before our change, every row in the table was requested by MySQL one by one, just to check whether its ranking was larger than 0, and the count was computed afterwards. In fact, there is no need for MySQL to fetch all rows; InnoDB already has all the matching records, and the only thing needed is a call to an InnoDB API to retrieve the count.

    The difference can be huge. The following query output shows how big it can be:

        mysql> select count(*) from searchindex_inno where match(si_title, si_text) against ('people');
        +----------+
        | count(*) |
        +----------+
        |   666877 |
        +----------+
        1 row in set (16 min 17.37 sec)

    So the query took almost 16 minutes. Let's see how quickly InnoDB itself can come up with the result. In InnoDB, you can obtain extra diagnostic printout by turning on "innodb_ft_enable_diag_print"; this prints extra query information to the error log:

        keynr=2, 'people' NL search Total docs: 10954826 Total words: 0
        UNION: Searching: 'people'
        Processing time: 2 secs: row(s) 666877: error: 10
        ft_init()
        ft_init_ext()
        keynr=2, 'people' NL search Total docs: 10954826 Total words: 0
        UNION: Searching: 'people'
        Processing time: 3 secs: row(s) 666877: error: 10

    The output shows it took InnoDB only 3 seconds to get the result, while the whole query took 16 minutes to finish. A large amount of time was wasted on the unneeded row fetching.

    The Solution:
    The solution is obvious: MySQL can skip some of its steps, optimize its plan, and obtain useful information directly from InnoDB. The savings from doing this include:
    1) Avoid redundant sorting. Since InnoDB has already sorted the result according to ranking, the MySQL query processing layer does not need to sort again to get the top matching results.
    2) Avoid row-by-row fetching to get the matching count. InnoDB provides all the matching records; everything not in that result list has a ranking of 0 and does not need to be retrieved, and InnoDB already has a count of the total matching records on hand. There is no need to recount.
    3) Covered index scan. The InnoDB result always contains the matching records' Document IDs and their rankings, so if only the Document ID and ranking are needed there is no need to go to the user table to fetch the record itself.
    4) Narrow the search result early to reduce user-table access. If the user wants the top N matching records, we do not need to fetch all matching records from the user table. We can first select the top N matching Doc IDs and then fetch only the corresponding records.

    Performance Results and comparison with MyISAM
    The effect of this change is very obvious. I include six test results, performed by Alexander Rubin, to demonstrate how fast InnoDB queries now are compared with MyISAM Full-Text Search. These tests are based on English Wikipedia data of 5.4 million rows and an approximately 16 GB table. The test was performed on a machine with 1 dual-core CPU, an SSD drive, 8 GB of RAM, and the InnoDB buffer pool set to 8 GB.

    Table 1: SELECT with LIMIT clause

        mysql> SELECT si_title, match(si_title, si_text) against('family') as rel FROM si WHERE match(si_title, si_text) against('family') ORDER BY rel desc LIMIT 10;

        Time for the query:  InnoDB 1.63 sec | MyISAM 3 min 26.31 sec | 127 times faster

    You can see that for this particular query (retrieve the top 10 records), InnoDB Full-Text Search is now approximately 127 times faster than MyISAM.

    Table 2: SELECT COUNT query

        mysql> select count(*) from si where match(si_title, si_text) against('family');
        +----------+
        | count(*) |
        +----------+
        |   293955 |
        +----------+

        Time for the query:  InnoDB 1.35 sec | MyISAM 28 min 59.59 sec | 1289 times faster

    In this particular case, where there are 293k matching results, InnoDB took only 1.35 seconds to get all of them, while MyISAM took almost half an hour: about 1289 times faster!

    Table 3: SELECT ID with ORDER BY and LIMIT clause for selected terms

        mysql> SELECT <ID>, match(si_title, si_text) against(<TERM>) as rel FROM si_<TB> WHERE match(si_title, si_text) against (<TERM>) ORDER BY rel desc LIMIT 10;

        Term                                        InnoDB     MyISAM      Times Faster
        family                                      0.5 sec    5.05 sec    10.1
        family film                                 0.95 sec   25.39 sec   26.7
        Pizza restaurant orange county California   0.93 sec   32.03 sec   34.4
        President united states of America          2.5 sec    36.98 sec   14.8

    Table 4: SELECT title and text with ORDER BY and LIMIT clause for selected terms

        mysql> SELECT <ID>, si_title, si_text, ... as rel FROM si_<TB> WHERE match(si_title, si_text) against (<TERM>) ORDER BY rel desc LIMIT 10;

        Term                                        InnoDB     MyISAM      Times Faster
        family                                      0.61 sec   41.65 sec   68.3
        family film                                 1.15 sec   47.17 sec   41.0
        Pizza restaurant orange county California   1.03 sec   48.2 sec    46.8
        President united states of America          2.49 sec   44.61 sec   17.9

    Table 5: SELECT ID with ORDER BY and LIMIT clause for selected terms

        mysql> SELECT <ID>, match(si_title, si_text) against(<TERM>) as rel FROM si_<TB> WHERE match(si_title, si_text) against (<TERM>) ORDER BY rel desc LIMIT 10;

        Term                                        InnoDB     MyISAM      Times Faster
        family                                      0.5 sec    5.05 sec    10.1
        family film                                 0.95 sec   25.39 sec   26.7
        Pizza restaurant orange county California   0.93 sec   32.03 sec   34.4
        President united states of America          2.5 sec    36.98 sec   14.8

    Table 6: SELECT COUNT(*)

        mysql> SELECT count(*) FROM si_<TB> WHERE match(si_title, si_text) against (<TERM>) LIMIT 10;

        Term                                        InnoDB     MyISAM      Times Faster
        family                                      0.47 sec   82 sec      174.5
        family film                                 0.83 sec   131 sec     157.8
        Pizza restaurant orange county California   0.74 sec   106 sec     143.2
        President united states of America          1.96 sec   220 sec     112.2

    Again, tables 3 to 6 all show InnoDB consistently outperforming MyISAM in these queries by a large margin. It is obvious that InnoDB has a great advantage over MyISAM in handling large data searches.

    Summary:
    These results demonstrate the performance that can be achieved by coupling the MySQL optimizer and InnoDB Full-Text Search more tightly. I think there are still many cases where InnoDB's result information is not fully taken advantage of, which means we still have great room to improve, and we will continue exploring this area to get more dramatic results for InnoDB full-text searches. Jimmy Yang, September 29, 2012

    Read the article

  • OBIEE 11.1.1 - User Interface (UI) Performance Is Slow With Internet Explorer 8

    - by Ahmed A
    The OBIEE 11g UI performance is slow in IE 8 and faster in Firefox. For VPN or WAN users it takes a long time to display links on Dashboards via IE 8. The cause is that IE 8 generates many HTTP 304 round trips, which makes the 11g UI slower compared with the Mozilla Firefox browser. To resolve this issue you can implement HTTP compression and caching. This is a best practice.

    Why use web server compression / caching for OBIEE?
    Bandwidth savings: enabling HTTP compression can dramatically improve the latency of responses. By compressing static files and dynamic application responses, it significantly reduces response times for remote (high-latency) users.
    Improved request/response latency: caching makes it possible to suppress the payload of the HTTP reply using the 304 status code. Minimizing round trips over the web to re-validate cached items can make a huge difference in browser page load times. (A screen shot in the original post depicts the flow and where the compression and decompression occur.)

    Solution:
    a. How to enable HTTP caching / compression in Oracle HTTP Server (OHS) 11.1.1.x

    1. To implement HTTP compression / caching, install and configure Oracle HTTP Server (OHS) 11.1.1.x for the bi_serverN Managed Servers (refer to the "OBIEE Enterprise Deployment Guide for Oracle Business Intelligence" document for details).

    2. On the OHS machine, open the HTTP Server configuration file (httpd.conf) for editing. This file is located in the OHS installation directory, for example:
        ORACLE_HOME/Oracle_WT1/instances/instance1/config/OHS/ohs1

    3. In the httpd.conf file, verify that the following directives are included and not commented out:
        LoadModule expires_module "${ORACLE_HOME}/ohs/modules/mod_expires.so"
        LoadModule deflate_module "${ORACLE_HOME}/ohs/modules/mod_deflate.so"

    4. Add the following lines to httpd.conf below the LoadModule section and restart the OHS. Note: for the Windows platform you will need to enclose the paths in double quotes ("), for example:
        Alias "/analytics ORACLE_HOME/bifoundation/web/app"
        <Directory "ORACLE_HOME/bifoundation/web/app">

        Alias /analytics ORACLE_HOME/bifoundation/web/app
        # Please replace ORACLE_HOME with your actual BI ORACLE_HOME path
        <Directory ORACLE_HOME/bifoundation/web/app>
        # We don't generate proper cross-server ETags, so disable them
        FileETag none
        SetOutputFilter DEFLATE
        # Don't compress images
        SetEnvIfNoCase Request_URI \.(?:gif|jpe?g|png)$ no-gzip dont-vary
        <FilesMatch "\.(gif|jpeg|png|js|x-javascript|javascript|css)$">
        # Enable future expiry of static files
        ExpiresActive on
        # 1 week; this stops the HTTP 304 calls generated by IE 8
        ExpiresDefault "access plus 1 week"
        Header set Cache-Control "max-age=604800"
        </FilesMatch>
        DirectoryIndex default.jsp
        </Directory>
        # Restrict access to WEB-INF
        <Location /analytics/WEB-INF>
        Order Allow,Deny
        Deny from all
        </Location>

    Note: make sure you replace the placeholder "ORACLE_HOME" above with the correct path to your BI ORACLE_HOME. For example, my BI Oracle Home path is /Oracle/BIEE11g/Oracle_BI1/bifoundation/web/app.

    Important notes: the caching rules above are restricted to static files found inside the /analytics directory (/web/app). This approach is safer than setting static file caching globally. In some customer environments you may not get the full performance gain in the IE 8.0 browser; in that case you need to extend the caching rules to other directories with static file content. If OHS is installed on a separate dedicated machine, make sure the static files in your BI ORACLE_HOME (../Oracle_BI1/bifoundation/web/app) are accessible to the OHS instance. (A final screen shot in the original post summarizes the before-and-after results after enabling compression and caching.)

    Read the article

  • Merge sort versus quick sort performance

    - by Giorgio
    I have implemented merge sort and quick sort in C (GCC 4.4.3 on Ubuntu 10.04, running on a laptop with 4 GB of RAM and an Intel Core 2 Duo at 2 GHz) and I wanted to compare the performance of the two algorithms. The prototypes of the sorting functions are:

        void merge_sort(const char **lines, int start, int end);
        void quick_sort(const char **lines, int start, int end);

    i.e. both take an array of pointers to strings and sort the elements with index i such that start <= i <= end. I have produced some files containing random strings with an average length of 4.5 characters. The test files range from 100 lines to 10000000 lines.

    I was a bit surprised by the results because, even though I know that merge sort has complexity O(n log(n)) while quick sort is O(n^2) in the worst case, I have often read that on average quick sort should be as fast as merge sort. However, my results are the following:

    - Up to 10000 strings, both algorithms perform equally well; for 10000 strings both require about 0.007 seconds.
    - For 100000 strings, merge sort is slightly faster: 0.095 s against 0.121 s.
    - For 1000000 strings, merge sort takes 1.287 s against 5.233 s for quick sort.
    - For 5000000 strings, merge sort takes 7.582 s against 118.240 s for quick sort.
    - For 10000000 strings, merge sort takes 16.305 s against 1202.918 s for quick sort.

    So my question is: are my results as expected, meaning that quick sort is comparable in speed to merge sort for small inputs but, as the size of the input data grows, the fact that its worst-case complexity is quadratic becomes evident?

    Here is a sketch of what I did. In the merge sort implementation, the partitioning consists of calling merge sort recursively, i.e.

        merge_sort(lines, start, (start + end) / 2);
        merge_sort(lines, 1 + (start + end) / 2, end);

    Merging of the two sorted sub-arrays is performed by reading the data from the array lines and writing it to a global temporary array of pointers (this global array is allocated only once). After each merge the pointers are copied back to the original array. So the strings are stored once, but I need twice as much memory for the pointers.

    For quick sort, the partition function chooses the last element of the range to sort as the pivot and scans the preceding elements in one loop. After it has produced a partition of the type

        start ... {elements <= pivot} ... pivotIndex ... {elements > pivot} ... end

    it calls itself recursively:

        quick_sort(lines, start, pivotIndex - 1);
        quick_sort(lines, pivotIndex + 1, end);

    Note that this quick sort implementation sorts the array in place and does not require additional memory; therefore it is more memory-efficient than the merge sort implementation.

    So my question is: is there a better way to implement quick sort that is worthwhile trying out? If I improve the quick sort implementation and perform more tests on different data sets (computing the average of the running times on different data sets), can I expect better performance of quick sort with respect to merge sort?

    EDIT
    Thank you for your answers. My implementation is in place and is based on the pseudo-code I found on Wikipedia in the section "In-place version": function partition(array, 'left', 'right', 'pivotIndex'), where I choose the last element in the range to be sorted as the pivot, i.e. pivotIndex := right. I have checked the code over and over again and it seems correct to me. In order to rule out the possibility that I am using the wrong implementation, I have uploaded the source code on github (in case you would like to take a look at it).
Your answers seem to suggest that I am using the wrong test data. I will look into it and try out different test data sets. I will report as soon as I have some results.
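
    On the "is there a better way to implement quick sort" part: the variants usually suggested are a randomized (or median-of-three) pivot, which guards against already-ordered input, and a three-way partition, which keeps runs of equal keys (likely with 4.5-character random strings) from producing maximally unbalanced splits. The sketch below is my own illustration of both ideas in Python, not a translation of the C code from the question:

        import random

        def quick_sort(lines, start, end):
            """In-place quick sort: random pivot + three-way (Dutch flag) partition."""
            while start < end:
                pivot = lines[random.randint(start, end)]
                lt, i, gt = start, start, end
                while i <= gt:
                    if lines[i] < pivot:
                        lines[lt], lines[i] = lines[i], lines[lt]
                        lt += 1
                        i += 1
                    elif lines[i] > pivot:
                        lines[i], lines[gt] = lines[gt], lines[i]
                        gt -= 1
                    else:
                        i += 1
                # Recurse into the smaller side, loop on the larger one to bound stack depth.
                if lt - start < end - gt:
                    quick_sort(lines, start, lt - 1)
                    start = gt + 1
                else:
                    quick_sort(lines, gt + 1, end)
                    end = lt - 1

        # usage: quick_sort(lines, 0, len(lines) - 1)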

    Read the article

  • ATI proprietary driver performance?

    - by Axel
    I'm about to buy (at least, I want to...) a laptop with an ATI Radeon HD 4250, and I don't have a good opinion of ATI's drivers. How is the actual performance of the open and proprietary drivers? (Currently I have nVidia, and I'm very satisfied.) The intended use for the laptop is watching videos and programming in Java/PHP/maybe Qt... but I'd like to know if Compiz runs well. Yes, I'm a hardcore (?) programmer who uses Compiz. :P Does someone have this GPU? Experiences? Thoughts? Thanks! :D

    Read the article

  • Intel Xeon 5600 (Westmere-EP) and AMD Magny-Cours Performance Update

    - by jchang
    HP has just released TPC-C and TPC-E results for the ProLiant DL380G7 with 2 Xeon 5680 3.33GHz 6-core processors, allowing a direct comparison with their DL385G7 with 2 Opteron 6176 2.3GHz 12-core processors. Last month I complained about the lack of performance results for the Intel Xeon 5600 6-core 32nm processor line for 2-way systems. This might have been deliberate, to avoid complicating the message for the Xeon 7500 8-core 45nm (for 4-way+ systems) launch two weeks later. http://sqlblog.com/blogs/joe_chang/archive/2010/04/07/intel-xeon-5600-westmere-ep-and-7500-nehalem-ex.aspx...(read more)

    Read the article

  • How does ecryptfs impact hard disk performance?

    - by Freddi
    I have my home directory encrypted with ecryptfs. Does ecryptfs lead to fragmentation? I have the feeling that reading files, displaying folders and logging in have become slower and slower (although it was not noticeably slow at the beginning). The hard disk makes a lot of seek noise even if I open only a text file. In /home/.ecryptfs I see many big archives (which probably contain the encrypted files), so I'm wondering whether Linux file system online defragmentation gains anything here. What options do I have to increase performance? Or should I decide that I'm better off without encryption?

    Read the article

  • Quick ways to boost performance and scalability of ASP.NET, WCF and Desktop Clients

    - by oazabir
    There are some simple configuration changes you can make in machine.config and IIS to give your web applications a significant performance boost. These are simple, harmless changes, but they make a lot of difference in terms of scalability. By tweaking system.net settings, you can increase the number of parallel calls that can be made from services hosted on your servers as well as from desktop computers, and thus increase scalability. By changing the WCF throttling config, you can increase the number of simultaneous calls WCF will accept and thus make the most of your hardware. By changing the ASP.NET process model, you can increase the number of concurrent requests your website can serve. And finally, by turning on IIS caching and dynamic compression, you can dramatically increase page download speed in browsers and the overall responsiveness of your applications. Read the CodeProject article for more details: http://www.codeproject.com/KB/webservices/quickwins.aspx Please vote for me if you find the article useful.

    Read the article

  • Java performance of StringBuilder append chains

    - by ultimate_guy
    In Java, if I am building a significant number of strings, is there any difference in performance in the following two examples?

        StringBuilder sb = new StringBuilder();
        for (int i = 0; i < largeNumber; i++) {
            sb.append(var[i]);
            sb.append('=');
            sb.append(value[i]);
            sb.append(',');
        }

    or

        StringBuilder sb = new StringBuilder();
        for (int i = 0; i < largeNumber; i++) {
            sb.append(var[i]).append('=').append(value[i]).append(',');
        }

    Thanks!

    Read the article

  • SQL SERVER – Video – Performance Improvement in Columnstore Index

    - by pinaldave
    I earlier wrote an article about SQL SERVER – Fundamentals of Columnstore Index, and it was very well received by the community. However, one suggestion I keep receiving about that article is that many readers wanted to see the columnstore index in action but were not able to: some had not installed SQL Server 2012, and some did not have a machine powerful enough to recreate the big table involved in the demo. For that reason, I have created a small video. I have written two more articles on the columnstore index; please read them as a follow-up to the video: SQL SERVER – How to Ignore Columnstore Index Usage in Query SQL SERVER – Updating Data in A Columnstore Index Reference: Pinal Dave (http://blog.SQLAuthority.com) Filed under: Pinal Dave, PostADay, SQL, SQL Authority, SQL Index, SQL Performance, SQL Query, SQL Scripts, SQL Server, SQL Tips and Tricks, T SQL, Technology, Video

    Read the article

  • How can I improve overall system performance?

    - by Decio Lira
    What are your tips for improving overall system performance on Ubuntu? Inspired by this question, I realized that some default settings may be rather conservative on Ubuntu and that it's possible to tweak them with little or no risk if you wish to make the system faster. This is not meant to be application-specific (e.g. make Firefox load pages faster), but system-wide. Preferably one tip per answer, with enough detail for people to implement it. A couple of mine would be: install Preload (via Software Center or sudo apt-get install preload); change the swappiness value, "which controls the degree to which the kernel prefers to swap when it tries to free memory". What are yours? PS: Since this is not intended to have a unique answer but rather several useful tips, I'm making this community wiki out of the box.

    Read the article

  • Coded UI to measure performance

    - by Mike Weber
    I have been tasked with using Coded UI to measure performance of a proprietary Windows desktop application. The need is to measure how long it takes for the next page/screen to display after a user clicks on a control. For example, a user enters their ID and password and clicks sign-in; the need is to measure how long it takes for the next screen to display once the user clicks the sign-in button. I understand the need to define what indicates that the screen is loaded and ready for use. One approach is to use control.WaitForControlReady together with BeginTimer/EndTimer. Is Coded UI a dependable and accurate way of measuring time? Is WaitForControlReady the best method to determine when a control is ready for use?

    Read the article

  • Poor mobile performance when running from Eclipse

    - by Yajirobe_LOL
    So after weeks of thinking my rendering code was bad, I accidentally discovered the following when running my game on a Nexus S:

    - From Eclipse (Debug as - Android application): 12 fps
    - From the device while still attached to USB (still getting log info in Eclipse): 24 fps
    - From the device while not attached via USB: 56 fps

    I was wondering if anyone else has issues like this? I mean, the problem really isn't a problem, since the final release build will likely have good performance, but for the time being I don't want to have to keep (un)plugging my device in and out when testing code all day long. Is there some remedy for this, or does anyone have any input/advice? Thanks.

    Read the article

  • Will having many timers affect my game performance?

    - by iQue
    I'm making a game for Android, and earlier today I was trying to add some cool stuff to it. The problem is that this thing needs about 5 timers. I build my timers like this:

        timer += deltaTime;
        if (timer >= 2.0f) {
            doStuff;
            timer -= 2.0f;
        }
        // this timer gets stuff done every 2 secs

    Will having too many timers like this, checked every frame, hurt my game's performance? The effect I wanted to add shows a crosshair every 2 seconds, then removes it after 2 seconds and plays a timed animation; so, to be exact, an array of crosshairs dependent on a bunch of timers. This caused my game to shut down when used, so that's why I'm wondering whether using that many timers causes my game to flip out.

    Read the article

  • Can setter validation affect performance?

    - by TiagoBrenck
    Within a scenario where you use an ORM to map your entities to the DB and you have setter validations (nullable, date earlier than today, etc.), every time the ORM gets a result it goes through the setters to instantiate the object. If I have a grid that usually returns 500 records, I assume that each record passes through all the validations. If my entity has 5 setter validations, then that is 2,500 validations. Will those 2,500 validations affect performance? Would 15,000 validations be any different? In my opinion, and according to this answer (http://stackoverflow.com/questions/4893558/calling-setters-from-a-constructor/4893604#4893604), setter validation is more useful than constructor validation. Is there a way to avoid unnecessary validation, since I know the values I sent to the DB when saving the entity won't change until I edit them in my system?
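
    One way to avoid re-running the checks for trusted rows is to give the entity a hydration path that bypasses the setters, which is effectively what many ORMs do when they are configured to populate fields directly. The sketch below shows the pattern in Python with made-up names; it is an illustration of the idea, not any particular ORM's API:

        class Order(object):
            def __init__(self):
                self._due_date = None

            @property
            def due_date(self):
                return self._due_date

            @due_date.setter
            def due_date(self, value):
                # Runs for user input coming through application code.
                if value is None:
                    raise ValueError("due_date is required")
                self._due_date = value

            @classmethod
            def from_row(cls, row):
                # Hydration path for data already validated when it was saved:
                # assign the backing field directly and skip the setter checks.
                obj = cls()
                obj._due_date = row["due_date"]
                return obj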

    Read the article

  • Performance tracking/monitoring in games

    - by vitaliy kotik
    Let's say I have an online game with a downloadable client / browser plugin. I want to track the performance of my software and automatically send a summary to the server. Whether it's fps, latency, load time, physics step calculation time, whatever... I also want tools to perform data analysis: per-session stats, per-hardware stats, averages, totals, diagrams, etc., so that I can see where the real-world hotspots and bottlenecks are. Is there any common out-of-the-box / SaaS solution?

    Read the article

  • Using a subset of GetHashCode() to increase AzureTable performance through partitioning

    - by makerofthings7
    Generally speaking, Azure Table IO performance improves as more partitions are used (with some trade-offs in continuation tokens and batch updates that I won't go into). Since the partition key is always a string, I am considering a "natural" load-balancing technique based on a subset of the GetHashCode() of the partition key, appending this subset to the partition key itself. This would allow all direct PK/RK queries to be computed easily and with little overhead; batch updates may just need an intermediate step to group similar PKs together prior to submission. Questions: Should I use GetHashCode() to compute the partition key, or is a better function available? If I use GetHashCode(), does it matter which characters of the hash I use for my PK? Is there an abstraction for Azure Table and Blob storage that does this for me already?
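
    A minimal sketch of the bucketing idea, written in Python with hashlib rather than GetHashCode(), since .NET's GetHashCode() is not guaranteed to be stable across processes or framework versions, which matters if the prefix ever has to be recomputed later. The bucket count and key format here are arbitrary choices of mine:

        import hashlib

        def bucketed_partition_key(natural_key, buckets=16):
            """Prefix a partition key with a stable hash bucket, e.g. '07_customer-42'."""
            digest = hashlib.md5(natural_key.encode("utf-8")).hexdigest()
            bucket = int(digest, 16) % buckets
            return "{:02d}_{}".format(bucket, natural_key)

        # Point lookups recompute the same prefix, so direct PK/RK queries stay cheap,
        # and a pre-grouping step by bucket keeps batch updates within one partition.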

    Read the article

  • Improving the performance of grepping over a huge file

    - by rogerio_marcio
    I have FILE_A, which has over 300K lines, and FILE_B, which has over 30M lines. I created a bash script that greps each line of FILE_A in FILE_B and writes the result of the grep to a new file. This whole process is taking over 5 hours. I'm looking for suggestions on any way to improve the performance of my script. I'm using grep -F -m 1 as the grep command. FILE_A looks like this:

        123456789
        123455321

    and FILE_B is like this:

        123456789,123456789,730025400149993,
        123455321,123455321,730025400126097,

    So with bash I have a while loop that picks the next line in FILE_A and greps for it in FILE_B. When the pattern is found in FILE_B, I write it to result.txt.

        while read -r line; do
            grep -F -m1 "$line" 30MFile
        done < 300KFile

    Thanks a lot in advance for your help.
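
    One commonly suggested alternative to launching one grep per key is to do a single pass: either grep -F -f 300KFile 30MFile, which loads all the fixed strings at once, or a small script that keeps the 300K keys in a set and streams the big file. The sketch below shows the latter in Python, assuming (as in the sample data above) that the key is always the first comma-separated field of FILE_B:

        # Load the 300K keys once, then stream the 30M-line file a single time.
        with open("300KFile") as small:
            keys = set(line.strip() for line in small)

        with open("30MFile") as big, open("result.txt", "w") as out:
            for line in big:
                # The first field of FILE_B matches the keys listed in FILE_A.
                if line.split(",", 1)[0] in keys:
                    out.write(line)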

    Read the article

  • How do you promote your blog or website?

    - by zcourts
    I tend to get what I think are good ideas, and I go out and either build the software/website from scratch or use existing software/tools such as WordPress. But when I'm done, even though I get a few users who say they really like it, I can't seem to get my apps out there, or rather get a large set of eyes on them. So I'm interested in knowing how others do it. I read people's stories of how they built this amazing thing and within 2-3 months they're getting thousands or hundreds of thousands of users per month. It just seems to be all smoke and mirrors. So how have you done it, or how has anyone you know done it? Does everyone throw lots of money into promotion, or is it something else?

    Read the article
