Search Results

Search found 30217 results on 1209 pages for 'website performance'.


  • lapply slower than for-loop when used for a BiomaRt query. Is that expected?

    - by ptocquin
    I would like to query a database using the biomaRt package. I have loci and want to retrieve some related information, say the description. I first tried lapply but was surprised by the time the task took. I then tried a more basic for-loop and got a faster result. Is that expected, or is something wrong with my code or with my understanding of *apply? I have read other posts dealing with *apply vs. for-loop performance (here, for example) and was aware that improved performance should not be expected, but I don't understand why performance here is actually worse. Here is a reproducible example.

    1) Loading the library and selecting the database:

        library("biomaRt")
        athaliana <- useMart("plants_mart_14")
        athaliana <- useDataset("athaliana_eg_gene", mart = athaliana)

    2) Querying the database:

        loci <- c("at1g01300", "at1g01800", "at1g01900", "at1g02335", "at1g02790",
                  "at1g03220", "at1g03230", "at1g04040", "at1g04110", "at1g05240")

    I create a function for use in lapply:

        foo <- function(loci) {
          getBM("description", "tair_locus", loci, athaliana)
        }

    When I use this function on the first element:

        > system.time(foo(cwp_loci[1]))
        utilisateur     système      écoulé
              0.020       0.004       1.599

    (The French headers are R's user / system / elapsed.) When I use lapply to retrieve the data for all values:

        > system.time(lapply(loci, foo))
        utilisateur     système      écoulé
              0.220       0.000      16.376

    I then created a new function using a for-loop:

        foo2 <- function(loci) {
          for (i in loci) {
            getBM("description", "tair_locus", loci[i], athaliana)
          }
        }

    Here is the result:

        > system.time(foo2(loci))
        utilisateur     système      écoulé
              0.204       0.004      10.919

    Of course, this will be applied to a big list of loci, so the best-performing option is needed. Thank you for your assistance.

    EDIT: Following the recommendation of @MartinMorgan: simply passing the vector loci to getBM greatly improves the query efficiency. Simpler is better.

        > system.time(lapply(loci, foo))
        utilisateur     système      écoulé
              0.236       0.024     110.512
        > system.time(foo2(loci))
        utilisateur     système      écoulé
              0.208       0.040     116.099
        > system.time(foo(loci))
        utilisateur     système      écoulé
              0.028       0.000       6.193

    Read the article

  • Phonegap: Will my mobile app 'feel' faster or slower once ported to phonegap?

    - by user15872
    So I'm designing everything in mobile Safari and I know that PhoneGap is essentially a stripped webview, but...

    Question: Will my application run better in PhoneGap? (revised below)

    a) I imagine my navigation and core app will load faster, as the scripts and images are on the hard drive. Is this true?
    b) I assume, since they've been working on it for two years now, that they may have made some optimizations to make it quicker than just an average Safari window. Is this true? (Assuming both html5/js/css code bases are pretty much the same and the app is running on iOS.)

    Update: Sorry, I meant to compare apples to slightly different apples.

    Question 1 revised: Will my app see any performance benefits running within a PhoneGap environment vs. standard mobile Safari? (comparing mobile to mobile)
    1b) In what ways, other than loading time, has PhoneGap optimized performance over standard mobile Safari?

    Follow-ups:
    1) Are there any pitfalls, other than large libraries, that may cause PhoneGap to take a serious performance hit vs. standard mobile Safari?
    2) Can I mix native and webview rendering? (i.e. the top half of my app is rendered with html/css/js and the bottom half is native)

    Read the article

  • Why use shorter VARCHAR(n) fields?

    - by chryss
    It is frequently advised to choose database field sizes to be as narrow as possible. I am wondering to what degree this applies to SQL Server 2005 VARCHAR columns: storing 10-letter English words in a VARCHAR(255) field will not take up more storage than in a VARCHAR(10) field. Are there other reasons to restrict the size of VARCHAR fields to stick as closely as possible to the size of the data? I'm thinking of:

    - Performance: Is there an advantage to using a smaller n when selecting, filtering and sorting on the data?
    - Memory, including on the application side (C++)?
    - Style/validation: How important do you consider restricting column size to force nonsensical data imports to fail (such as 200-character surnames)?
    - Anything else?

    Background: I help data integrators with the design of data flows into a database-backed system. They have to use an API that restricts their choice of data types. For character data, only VARCHAR(n) with n <= 255 is available; CHAR, NCHAR, NVARCHAR and TEXT are not. We're trying to lay down some "good practice" rules, and the question has come up whether there is a real detriment to using VARCHAR(255) even for data whose real maximum size will never exceed 30 bytes or so. Typical data volumes for one table are 1-10 million records with up to 150 attributes. Query performance (SELECT, with frequently extensive WHERE clauses) and application-side retrieval performance are paramount.

    Read the article

  • Dealing with large number of text strings

    - by Fadrian
    My project, when it is running, will collect a large number of text blocks (about 20K of them; the largest run I have seen is about 200K of them) in a short span of time and store them in a relational database. Each text block is relatively small, averaging about 15 short lines (about 300 characters). The current implementation is in C# (VS2008) on .NET 3.5, and the backend DBMS is MS SQL Server 2005. Performance and storage are both important concerns of the project, but the priority is performance first, then storage. I am looking for answers to these:

    - Should I compress the text before storing it in the DB, or let SQL Server worry about compacting the storage?
    - Do you know what would be the best compression algorithm/library to use in this context for the best performance? Currently I just use the standard GZip in the .NET framework.
    - Do you know of any best practices for dealing with this? I welcome outside-the-box suggestions as long as they are implementable in the .NET framework (it is a big project and this requirement is only a small part of it).

    EDITED (I will keep adding to this to clarify points raised): I don't need text indexing or searching on these texts; I just need to be able to retrieve them at a later stage for display as a text block using their primary key. I have a working solution implemented as above and SQL Server has no issue at all handling it. This program will run quite often and needs to work with a large data context, so you can imagine the size will grow very rapidly; hence every optimization I can do will help.
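
    The project is C#/.NET, but before deciding whether to compress ~300-character blocks at all, it is worth measuring what gzip actually buys on payloads that small. A throwaway sketch of that measurement (Python here, purely for illustration, with a made-up sample string):

        import gzip
        import time

        # Hypothetical sample: roughly 300 bytes, standing in for one stored text block.
        sample = ("The quick brown fox jumps over the lazy dog. " * 7).encode("utf-8")

        start = time.perf_counter()
        compressed = gzip.compress(sample)
        elapsed_us = (time.perf_counter() - start) * 1e6

        print(f"original: {len(sample)} bytes, gzipped: {len(compressed)} bytes "
              f"({100 * len(compressed) / len(sample):.0f}%), {elapsed_us:.0f} microseconds")
        # This toy sample is highly repetitive and compresses unusually well; real blocks
        # may not, and gzip's fixed header/trailer overhead matters at this size, so run
        # the same measurement over a sample of the real data before changing the schema.

    The equivalent few lines in C#, using the GZipStream the project already relies on, would answer the question for the real data.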

    Read the article

  • PERC H710 mini raid controller advanced settings (BIOS)

    - by gregg
    I upgraded from a PERC H310 to an H710 controller on my Dell R620 but didn't get any increase in performance. This is an ESXi host with a 5-disk RAID 5. I noticed when going into the RAID BIOS that the advanced settings section was not activated/checked. In that section are the stripe element size (64 KB, the default), the read policy (No Read Ahead) and the write policy (Write-Through). Will checking that section do any harm to the existing RAID array, or will it simply enable those policies and hopefully boost performance? Or, lastly, is it already using those policies and the checkbox simply activates them for changes?

    Read the article

  • Helicon ISAPI Rewrite Proxy 500 Internal Server Error

    - by Rob Stevenson-Leggett
    Hi, I have a website running at www.domain.com. The client now wants the website to appear to be running under www.otherdomain.com/whatson/brand/. Since the website is Umbraco, it won't run under a subfolder. I wanted to use ISAPI Rewrite to proxy requests to www.domain.com, using the following rule in a .htaccess at www.otherdomain.com/whatson/brand/:

        RewriteRule ^(.*)$ http://www.domain.com/$1 [P,L]

    However, when I apply this I get an ugly 500 Internal Server Error. There's nothing in the event log, so I turned on ISAPI logging and can see the following:

        111.111.111.111 111.111.111.111 Tue, 12-Jan-2010 13:05:24 GMT [www.otherdomain.com/sid#2045305275][rid#26337200/initial] (2) init rewrite engine with requested uri /whatson/brand/home.aspx

    Then it tests all the other rewrite rules on the server. Then this:

        111.111.111.111 111.111.111.111 Tue, 12-Jan-2010 13:05:24 GMT [www.otherdomain.com/sid#2045305275][rid#26337200/initial] (1) Htaccess process request w:\websites\otherdomain.com\docs2\whatson\brand\.htaccess
        111.111.111.111 111.111.111.111 Tue, 12-Jan-2010 13:05:24 GMT [www.otherdomain.com/sid#2045305275][rid#26337200/initial] (3) applying pattern '^(.*)$' to uri 'home.aspx'
        111.111.111.111 111.111.111.111 Tue, 12-Jan-2010 13:05:24 GMT [www.otherdomain.com/sid#2045305275][rid#26337200/initial] (2) forcing proxy-throughput with http://www.domain.com/home.aspx
        111.111.111.111 111.111.111.111 Tue, 12-Jan-2010 13:05:24 GMT [www.otherdomain.com/sid#2045305275][rid#26337200/initial] (1) go-ahead with proxy request http://www.domain.com/home.aspx [OK]
        111.111.111.111 111.111.111.111 Tue, 12-Jan-2010 13:05:24 GMT [www.otherdomain.com/sid#2045305275][rid#26337200/initial] (2) rewrite 'home.aspx' -> '/whatson/brand/home.aspxx.rwhlp?p=0'
        111.111.111.111 111.111.111.111 Tue, 12-Jan-2010 13:05:24 GMT [www.otherdomain.com/sid#2045305275][rid#26337200/initial] (2) internal redirect with /whatson/brand/home.aspxx.rwhlp?p=0 [INTERNAL REDIRECT]

    So it appears to work according to the logs, but I'm not seeing the page come through. It's worth noting that www.domain.com and www.otherdomain.com are on the same box. LogLevel is 3 and RewriteLogLevel is 3 (I've tried 9 and debug, but there is too much traffic going through the other sites on the box). Any ideas?

    Read the article

  • Install a web certificate on an Android device

    - by martani_net
    To gain access to Wi-Fi at university I have to log in with my user/pass credentials. The certificate of their website (the local home page that asks for the credentials) is not recognized as a trusted certificate, so we install it separately on our computers. The problem is that I don't often take my laptop with me to university, so I usually want to connect using my HTC Magic, but I have no clue how to install the certificate separately on Android; it is always rejected.

    [Edit 2]: this is what their website states (roughly translated from French):

        Need for installation of official certificates: CyberTrust, validated by the CRU (http://www.cru.fr/wiki/scs/). The certificates contain certified information used to generate encryption keys for exchanging "sensitive" data, such as a user's password. When connecting to CanalIP-UPMC, for example, the user must validate the identity of the server by accepting the certificate that appears on screen in a pop-up window. In reality, the user cannot knowingly validate a certificate, because a simple visual check of the certificate is impossible. Therefore, the certificates of the certification authority (CRU-Cybertrust-Educationnal-ca.ca and Cybertrust-global-root-ca.ca) must be installed in the browser beforehand so that the validity of the server certificate can be checked automatically. Before you connect to the CanalIP-UPMC network you must register the certification authority Cybertrust-Educationnal-ca.ca in your browser. Download Cybertrust-Educationnal-ca.ca, depending on your browser, using the link below: with Internet Explorer, click the following link; with Firefox, click the following link; with Safari, click the following link. If this procedure is not followed, the user runs a real risk: that of having their UPMC LDAP directory password stolen. A malicious server could in fact very easily mount a "man-in-the-middle" attack by posing as the legitimate UPMC server. The theft of a password allows the attacker to steal an identity for transactions over the Internet, which can engage the responsibility of the user who was trapped...

    This is their website: http://www.canalip.upmc.fr/doc/Default.htm (in French, Google-translate it :))

    Does anyone know how to install a web certificate on Android?

    Read the article

  • Intel® Core™2 Duo Desktop Processor vs Intel® Core™ i3 Desktop Processor?

    - by metal gear solid
    Intel® Core™2 Duo desktop processor vs. Intel® Core™ i3 desktop processor -- which CPU is better to buy?

    - Intel® Core™ i3-530 processor (4 MB cache, 2.93 GHz); it also supports DDR3
    - Intel® Core™2 Duo processor E7500 (3 MB cache, 2.93 GHz, 1066 MHz FSB); it supports DDR2 only

    Although I do not play games on my PC, I need good performance in Adobe Photoshop, when watching Full HD movies, and in multitasking. Along with either of these CPUs I would purchase 2 x 2 GB sticks of RAM, and I will use Windows 7. I will also use Microsoft VPC images with MS Virtual PC.

    Read the article

  • Coldfusion on VPS, how much JVM heap memory?

    - by Steven Filipowicz
    Recently I got a VPS server and I'm running ColdFusion. The website was running fine until it got more and more traffic and I started to encounter 'OutOfMemory' exceptions. I thought simply to raise the memory of the VPS server, but this didn't help. After doing some Google searches I found a setting in the CF Admin settings for the JVM heap memory. It was at the default: max heap size 512 MB, and min heap size empty. After playing around a bit I have now set it to min 50 MB and max 200 MB; the good thing is that I'm not getting the 'OutOfMemory' exceptions anymore. So far so good! But with about 50 active visitors on the website, the website starts to get slow. The CPU usage is only about 8% (Windows Task Manager), and the Task Manager also shows only about 30% of the 3 GB RAM in use. So I'm thinking that my values could be tweaked to use more of the RAM. Honestly, I don't understand these JVM heap memory settings, so I have no clue what a good setting is for me. I found a CF script that displays the memory usage; the details are:

        Heap Memory Usage - Committed       194 MB
        Heap Memory Usage - Initial         50.0 MB
        Heap Memory Usage - Max             194 MB
        Heap Memory Usage - Used            163 MB
        JVM - Free Memory                   31.2 MB
        JVM - Max Memory                    194 MB
        JVM - Total Memory                  194 MB
        JVM - Used Memory                   163 MB
        Memory Pool - Code Cache - Used     13.0 MB
        Memory Pool - PS Eden Space - Used  6.75 MB
        Memory Pool - PS Old Gen - Used     155 MB
        Memory Pool - PS Perm Gen - Used    64.2 MB
        Memory Pool - PS Survivor Space - Used  1.07 MB
        Non-Heap Memory Usage - Committed   77.4 MB
        Non-Heap Memory Usage - Initial     18.3 MB
        Non-Heap Memory Usage - Max         240 MB
        Non-Heap Memory Usage - Used        77.2 MB
        Free Allocated Memory: 30mb
        Total Memory Allocated: 194mb
        Max Memory Available to JVM: 194mb
        % of Free Allocated Memory: 16%
        % of Available Memory Allocated: 100%

    My JVM arguments are:

        -server -Dsun.io.useCanonCaches=false -XX:MaxPermSize=192m -XX:+UseParallelGC -Dcoldfusion.rootDir={application.home}/../ -Dcoldfusion.libPath={application.home}/../lib

    Can I give the JVM more memory? If so, what settings should I use? Thanks very much!

    Read the article

  • Can't connect to computer via SBS2011 RWA

    - by sbrattla
    I've got an SBS 2011 Essentials server. Users are able to log on to Remote Web Access using their username and password. However, the trouble starts when a user attempts to log on remotely to his/her computer from the Remote Web Access website. When the user clicks on his/her computer (in the RWA website), the user is first presented with a window listing Publisher, Type, Remote Computer name and Gateway Server. Everything seems fine here, and the user clicks Connect. The user credentials are provided, and a connection is attempted. However, the logon attempt always fails with the message "The logon attempt failed". The logon attempt always generates three log events in the server log:

        EventId: 4672 - Special Logon
        EventId: 4624 - Logon
        EventId: 4634 - Logoff

    All events have the same timestamp. No events are logged on the client machine the user attempts to log on to. Others have solved this by going to their IIS server and enabling "Windows Authentication" for Rpc and RpcWithCert (in Default Web Site); however, this is already in place on the server. I've also got RD CAPs and RD RAPs in place. As a side note: if I try to connect to any of the machines using Remote Desktop Connection with the "Connect from anywhere" functionality, things work flawlessly! In other words, the error only occurs when attempting to log in to a computer via the Remote Web Access website. I've run out of ideas for how to solve this (too many hours spent). Any ideas highly appreciated!

    Read the article

  • Remove automatic Aero disabling in Windows 7

    - by Jani Hartikainen
    Sometimes when I'm playing games that are heavy on the GPU, Windows decides to helpfully disable Aero, causing everything to freeze for a bit and, in the worst case, combined with ATI's brilliant drivers, causing the game to crash. So, how do I stop Windows from automatically disabling Aero when playing games? Disabling it has absolutely no effect on the performance of the game itself. Also, I'd like to get rid of the "You should disable Aero to improve performance" helpful-hint popup which sometimes shows up, but I suppose getting rid of the first will get rid of the second, assuming anyone knows how.

    Read the article

  • Why are my uWSGI processes dying immediately?

    - by orokusaki
    I'm using Supervisor and the uWSGI Emperor mode. When I set limit-as to 512 (MB), workers die instantly (respawn, die, respawn, die, every 3/4 of a second or so):

        [uwsgi]
        workers = 4
        threads = 40
        limit-as = 512
        harakiri = 20
        max-requests = 1600
        ... non-performance/memory/processor-related settings omitted

    But if I change limit-as to:

        [uwsgi]
        workers = 4
        threads = 40
        limit-as = 1024
        harakiri = 20
        max-requests = 1600
        ... non-performance/memory/processor-related settings omitted

    and restart uWSGI, the problem is gone immediately. To confirm this, I changed the setting back to 512, restarted again, and the problem was back immediately. Notes: my app is a simple Django app without much additional Python setup during start-up time.
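
    For context on why 512 makes workers die instantly: uWSGI's limit-as caps each worker's address space (VSZ, in megabytes), and a process that hits the cap simply fails to allocate and gets respawned. A minimal sketch of the same mechanism, assuming limit-as maps to RLIMIT_AS:

        import resource

        MB = 1024 * 1024

        # Assumption: uWSGI's "limit-as = 512" translates to an RLIMIT_AS of 512 MB per worker.
        resource.setrlimit(resource.RLIMIT_AS, (512 * MB, 512 * MB))

        chunks = []
        try:
            while True:
                chunks.append(bytearray(10 * MB))  # keep allocating until the address-space cap is hit
        except MemoryError:
            print(f"MemoryError after ~{len(chunks) * 10} MB of explicit allocations")
            # Thread stacks, allocator arenas and shared libraries all count towards VSZ,
            # so with 40 threads per worker far less than the cap is left for the application.

    That would explain why doubling the cap to 1024 makes the symptom disappear; checking the workers' actual VSZ (e.g. with ps) is the way to confirm it.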

    Read the article

  • How to automatically remove Flash history/privacy trail? Or stop Flash from storing it?

    - by Arjan van Bentem
    Many people have heard about third-party cookies, and some browsers even block those by default. Some people may even be using Private Browsing modes. However, only a few seem to realise that Adobe's Flash player also leaves a cross-browser trail on your local hard drive, and allows for sending cookie-like information back to the server, including to third-party sites. And because it is a plugin, Flash does not take any of the browser's privacy settings into account. Sorry for the long post, but first some details about why using Flash raises a privacy concern, followed by the results of my tests:

    - The Flash player keeps a cross-browser history of the domain names of the Flash sites your computer has visited. Unlike your browser's history, this history is not limited to a certain number of days. History is also recorded while using so-called Private Browsing modes. It is stored on your hard drive (though, as described below, without going to Adobe's site you won't know what is stored). I am not sure if any date and time information is kept about each visit, but to see the domain names: right-click on some Flash content, open the settings dialog, and click the Help icon or click the Advanced button within the Privacy tab. This opens a browser to the help pages on Adobe.com, where one can click through to the Website Storage Settings panel. One can clear the existing list, but one cannot stop it from being recorded again.

    - Flash allows for storing data on your local hard drive, using so-called Local Shared Objects (aka "Flash cookies"). Just like HTTP cookies, this data can be sent back to the server, for tracking purposes. They are cross-browser, have no expiration date, and no user-defined maximum lifetime can be set in the Flash preferences either. These not being HTTP cookies, they are (of course) not blocked by a browser's cookie preferences and are not removed when the normal HTTP cookies are deleted. Adobe has announced that version 10.1 will obey Private Browsing in most popular browsers, but unfortunately there is no word about also removing the data whenever normal cookies are deleted manually. And its implementation might be confusing: "[..] if the browser is in normal browsing mode when the Flash Player instance is created, then that particular instance will forever be in normal browsing mode (private browsing is turned off). Accordingly, toggling private browsing on or off without refreshing the page or closing the private browsing window will not impact Flash Player."

    - Local Shared Objects are not limited to the site you visit, and third-party storage is enabled by default. At the Global Storage Settings panel one can deselect the default "Allow third-party Flash content to store data on your computer". Because of the cross-browser and expiration-less nature (and the fact that few people know about it), I feel that the cross-browser third-party Flash cookies are more dangerous for visitor tracking than third-party normal HTTP cookies. They are even used to restore plain HTTP cookies that the user tried to delete:

        "All advertisers, websites and networks use cookies for targeted advertising, but cookies are under attack. According to current research they are being erased by 40% of users creating serious problems," says Mookie Tenembaum, founder of United Virtualities. "From simple frequency capping to the more sophisticated behavioral targeting, cookies are an essential part of any online ad campaign. PIE ["Persistent Identification Element"] will give publishers and third-party providers a persistent backup to cookies effectively rendering them unassailable", adds Tenembaum. [..] To justify this tracking mechanism, UV's Tenembaum said, "The user is not proficient enough in technology to know if the cookie is good or bad, or how it works."

    - When selecting None (zero KB) for "Specify the amount of disk space that websites that you haven't yet visited can use to store information on your computer", and checking "Never ask again", then some sites do not work. However, the same site might work when setting it to None but without selecting "Never ask again", and then choosing Deny whenever prompted. Both options would result in zero KB of data being allowed, but the behaviour differs.

    - The plugin also provides a Flash Player cache for Adobe-signed files. I guess these files are not an issue.

    So: how to automatically delete that information? On a Mac, one can find a settings.sol file and a folder for each visited Flash website in:

        $HOME/Library/Preferences/Macromedia/Flash Player/macromedia.com/support/flashplayer/sys/

    Deleting the settings.sol file and all the folders in sys removes the trail from the settings panels. However, the actual Local Shared Objects are elsewhere (see Wikipedia for locations on other operating systems), in a randomly named subfolder of:

        $HOME/Library/Preferences/Macromedia/Flash Player/#SharedObjects

    But then: how to remove this automatically? Simply removing the folders and the settings.sol file every now and then (like by using launchd or Windows' Task Scheduler) may interfere with active browsers. Or is it safe to assume that, given the cross-browser nature, the plugin would not care if things are removed while it is active? Only clearing during log-off may not work for those who hibernate all the time. Firefox users can install BetterPrivacy or Objection to delete the Local Shared Objects (for all other browsers as well). I don't know if that also deletes the trail of website domain names.

    Or: how to stop Flash from storing a history trail? Change of plans: I'm currently testing prohibiting Flash from writing to its own sys and #SharedObjects folders. So far, Flash has not tried to restore permissions (though, when deleting the folders, Flash will of course recreate them). I've not encountered any problems, but this may take some while to validate, using multiple browsers and sites. I've not yet found a log that reports errors. On a Mac:

        cd "$HOME/Library/Preferences/Macromedia/Flash Player/macromedia.com/support/flashplayer"
        rm -r sys/*
        chmod u-w sys
        cd "$HOME/Library/Preferences/Macromedia/Flash Player"
        # preserve the randomly named subfolders (only preserving the latest would suffice; see below)
        rm -r \#SharedObjects/*/*
        chmod -R u-w \#SharedObjects

    I guess the above chmods cannot be achieved on an old Windows system (I'm not sure about XP and Vista?). Though maybe on Windows one could replace the folders sys and #SharedObjects with dummy files with the same names? Anyone?

    Obviously, keeping Flash from storing those Local Shared Objects for all sites may cause problems. Some test results (Flash 10 on Mac OS X):

    - When blocking the sys folder (even when leaving the #SharedObjects folder writable), YouTube won't remember your volume settings while viewing multiple videos. Temporarily allowing write access to the blocked folders while visiting trusted sites (to only create folders for domains you like, maybe including references in settings.sol) solves that. This way, for YouTube, Flash could be allowed to write to sys/#s.ytimg.com and #SharedObjects/s.ytimg.com, while Flash could not create new folders for other domains. One may also need to make settings.sol read-only afterwards, or delete it again.

    - When blocking both the sys and #SharedObjects folders, YouTube and Vimeo work fine (though they might not remember any settings). However, Bits on the Run refuses to even show the video player. This is solved by temporarily unblocking the #SharedObjects folder, to allow Flash to create a subfolder with some random name. Within this folder, it would create yet another folder for the current Flash website (content.bitsontherun.com). Removing that website-specific folder, and blocking both #SharedObjects and the randomly named subfolder, still seems to allow Bits on the Run to operate, even though it still cannot write anything to disk. So: the existence of the randomly named subfolder (even when write-protected) is important for some sites.

    When I first found the #SharedObjects folder, it held many subfolders with random names, some created on the very same day. I wonder when Flash decides it wants a new folder, and how it determines (and remembers) that random name.

    For a moment I considered not blocking write access for sys and #SharedObjects, but explicitly creating read-only folders for well-known third-party tracking domains (based, for example, on a list from AdBlock Plus). That way, any other domain could still create Local Shared Objects. But the list would be long, and the domains from AdBlock Plus are probably all third-party domains anyway, so disabling "Allow third-party Flash content to store data on your computer" might have the very same result. Any experience, anyone?

    (Final notes: if the above links to the settings panels do not work in the future, then use the URL that is known to Flash Player as a starting point: www.adobe.com/go/settingsmanager. See also "You Deleted Your Cookies? Think Again" at Wired.com -- which uses Flash cookies itself as well... For the very suspicious using Time Machine: you may want to exclude both folders, for each user, and remove the trace that is already on your backup.)
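
    For the scheduled-cleanup option discussed above (launchd on a Mac, Task Scheduler on Windows), the rm commands can be wrapped in a small script and run periodically. A minimal sketch for the macOS paths quoted in the post, assuming it is acceptable to wipe every site's data while keeping the parent folders themselves in place:

        import shutil
        from pathlib import Path

        # Paths as quoted above for macOS; other operating systems keep these elsewhere.
        base = Path.home() / "Library/Preferences/Macromedia/Flash Player"
        targets = [
            base / "macromedia.com/support/flashplayer/sys",  # domain-name trail + settings.sol
            base / "#SharedObjects",                          # the Local Shared Objects themselves
        ]

        for folder in targets:
            if not folder.is_dir():
                continue
            for child in folder.iterdir():
                # Remove every per-site entry but keep the parent folder itself,
                # so an active Flash instance can still recreate what it needs.
                if child.is_dir():
                    shutil.rmtree(child, ignore_errors=True)
                else:
                    child.unlink(missing_ok=True)

    Whether doing this while a browser is running confuses the plugin is exactly the open question raised above, so scheduling it at login or logout is the safer choice.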

    Read the article

  • CgiModule and FastCgiModule in IIS7

    - by Hari
    My web server is IIS7 running on Windows 2008 Web Edition. There are nearly 40 pre-installed modules listed under "Modules", including CgiModule and FastCgiModule. All the websites installed on this server run purely on ASP.NET. Can I remove these two modules to improve performance? Similarly, my application uses Forms Authentication only; in that case, can I delete "Windows Authentication" and WindowsAuthenticationModule? Please also suggest any other modules that could be deleted to improve performance.

    Read the article

  • Is there a Windows equivalent of Unix 'CPU steal time'?

    - by Steffen Opel
    In order to assess performance monitoring accuracy on virtualization platforms, CPU steal time has become an increasingly relevant metric -- see EC2 monitoring: the case of stolen CPU for an instructive summary in the context of Amazon EC2, and IBM's paper on CPU time accounting for a more in-depth technical explanation (including illustrations) of the concept:

        Steal time is the percentage of time a virtual CPU waits for a real CPU while the hypervisor is servicing another virtual processor.

    Accordingly, it is exposed in most related Unix/Linux monitoring tools nowadays -- see e.g. columns %steal or st in sar or top:

        st -- Steal Time
        The amount of CPU 'stolen' from this virtual machine by the hypervisor for other tasks (such as running another virtual machine).

    I've been unable to figure out how to capture the same metric on Windows, though. Is this possible already? (Ideally for the Windows 2008 Server R2 AMIs on EC2, and via a respective Windows Performance Counter, of course.)
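
    For reference, the Unix-side number quoted above comes straight from the kernel: on Linux, steal is the eighth value on the cpu line of /proc/stat (kernels 2.6.11 and later), and sar/top just report its share of the delta between two samples. A minimal sketch of that calculation (Linux only; it does not answer the Windows question, but shows exactly what a Windows counter would need to expose):

        import time

        def cpu_times():
            # First line of /proc/stat: "cpu  user nice system idle iowait irq softirq steal ..."
            with open("/proc/stat") as f:
                return [int(x) for x in f.readline().split()[1:]]

        before = cpu_times()
        time.sleep(1)
        after = cpu_times()

        delta = [b - a for a, b in zip(before, after)]
        total = sum(delta)
        steal = delta[7] if len(delta) > 7 else 0  # 8th column is steal
        print(f"%steal over the last second: {100.0 * steal / total:.2f}")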

    Read the article

  • Intel P6100 CPU and Mobile Intel® HM55 Express Chipset

    - by Christopher Painter
    I have an Asus K52F-BBR5 notebook that uses an Intel P6100 (2 GHz, 15x multiplier) and the HM55 Express chipset. I'm looking to replace its 3 GB of RAM with 8 GB. The Crucial database seems to indicate that both PC3-8500 CAS 7 and PC3-10666 CAS 9 will work. I'm not up to date on the latest DDR3 nomenclature and I'm wondering which would provide better performance; the price difference is negligible. Drawing on past experiences from many years ago, I could make an argument for either based on sync/async bus speed and CAS latency differences, but the truth is I don't know enough about the HM55 chipset to know which would be the correct choice. Does anyone know the answer, or can you point me to information that would help me make the choice? I'm pretty sure the performance difference will be somewhat negligible, but I'd still like to make the optimal choice.

    Read the article

  • Linux memory fragmentation

    - by Raghu
    Hi all, is there a way to detect memory fragmentation on Linux? I ask because on some long-running servers I have noticed performance degradation, and only after I restart the process do I see better performance again. I noticed it more when using Linux huge page support -- are huge pages in Linux more prone to fragmentation? I have looked at /proc/buddyinfo in particular. I want to know whether there are any better ways (not just CLI commands per se; any program or theoretical background would do) to look at it.
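
    To make /proc/buddyinfo easier to read than raw columns of counts, a small script can turn it into a per-zone summary of how much free memory is still available as large contiguous blocks. A minimal sketch, assuming the usual format (one line per node/zone followed by free-block counts for order 0 and up) and 4 KB base pages:

        # Summarize external fragmentation from /proc/buddyinfo: per zone, how much of
        # the free memory sits in blocks of order >= 4 (i.e. >= 64 KB with 4 KB pages).
        PAGE_KB = 4  # assumption: 4 KB base pages (x86/x86_64)

        with open("/proc/buddyinfo") as f:
            for line in f:
                parts = line.split()
                node, zone = parts[1].rstrip(","), parts[3]
                counts = [int(x) for x in parts[4:]]            # free blocks per order 0..N
                kb = [c * PAGE_KB * (1 << order) for order, c in enumerate(counts)]
                total = sum(kb)
                high = sum(kb[4:])                              # blocks of 64 KB and larger
                if total:
                    print(f"node {node} zone {zone:8s}: {total / 1024:8.1f} MB free, "
                          f"{100.0 * high / total:5.1f}% in blocks >= 64 KB")

    A zone with plenty of free memory but a low percentage in high-order blocks is fragmented, which is when huge page (2 MB) allocations start failing even though plain 4 KB allocations still succeed.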

    Read the article

  • How do I access the web server on my desktop from my laptop?

    - by Steven
    I'm running Apache on my desktop computer and I would like to access a website on it from my laptop. This is some of the Apache config:

        NameVirtualHost 127.0.0.1:80
        <VirtualHost 127.0.0.1:80>
            ServerName mysite.com
            DocumentRoot I:/wamp/www/mysite/
        </VirtualHost>

        ServerName localhost:80
        <Directory />
            Options FollowSymLinks
            AllowOverride all
            Order deny,allow
            Deny from all
        </Directory>

    On my laptop I've added the following to the HOSTS file:

        10.0.0.3 mysite.com

    But accessing the page through mysite.com is not very successful. If I enter the IP address directly, I only get a Forbidden message. What do I need to do in order to get this to work?

    Update:
    - I'm running WampServer 2.1 (Apache 2.2.17)
    - Apache is up and running
    - I can ping 10.0.0.3 from the laptop
    - I'm not able to ping http://mysite.com from the laptop
    - IE gives me "403 Forbidden - The website declined to show this webpage"
    - The only log that gets entries when trying to access the website from my laptop is access.log.

    access.log:

        10.0.0.4 - - [13/Jun/2011:10:14:04 +0200] "GET / HTTP/1.1" 403 202

    apache_error.log:

        [Mon Jun 13 10:08:16 2011] [error] VirtualHost localhost:0 -- mixing * ports and non-* ports with a NameVirtualHost address is not supported, proceeding with undefined results

    UPDATE 2: My Apache config has the following entry:

        AllowOverride all
        Order Deny,Allow
        Deny from all
        Allow from 127.0.0.1

    Could it be that this Allow from is stopping other computers from accessing the page?

    Read the article

  • Chroot jail of Nginx and php

    - by sqren
    I'm hosting multiple websites on one VPS and want to chroot each website, e.g.:

        /chroot/website1
        /chroot/website2

    I'm using makejail, which is a high-level tool, for creating the jails and copying the libraries and dependencies. Easy peasy. Each website will need nginx, PHP and MySQL. For PHP I'm using php5-fpm, which actually supports chroot by configuration; however, I'm not using that (maybe I should?). My question is which of the following approaches is better:

    1) Every website has its own separate instance of nginx, PHP and MySQL. The downside is that each web server + PHP has to listen on a different port. I also need a "master" nginx web server in front of them, reverse proxying to the chrooted servers behind it. Probably the most secure, but also the most advanced.

    2) I don't make any chroot jails manually. I set up one nginx web server that proxies PHP requests to php-fpm on different ports. I can have multiple php-fpm configurations, each with its own chrooted folder. This is quite manageable; however, only PHP will be chrooted, not the actual web server. Is this secure enough? Also, I tried this option out, and it seems I will need to use TCP instead of sockets for connecting to MySQL.

    3) You tell me ;)

    I'm quite new to chroot jailing, so please correct me if I'm wrong in my assumptions. I've been reading all the tutorials I could find; however, I find chroot guides very scarce. Any help or input is much appreciated!

    Read the article

  • Oracle on NFS vmdk beats native NFS!?

    - by fletch00
    Hi, my colleagues are pursuing this with NetApp and Oracle, but I thought I'd post here on the off chance someone else has seen this. We have a Red Hat 5 VM (fully up2date) running Oracle 11i, with data disks mounted via the VM's Linux kernel NFS client using Oracle's recommended mount options, and the performance is very inconsistent (queries that should take < 2 seconds sometimes take 60 seconds). The funny thing is that we can run the same queries perfectly consistently in < 2 seconds on a VMDK residing on the SAME NetApp NFS datastore! Makes me wish Oracle and NetApp collaborated as closely as VMware and NetApp did on the Virtual Storage Console, which we used to set the NFS options perfectly and keep them in compliance... We have tried a few Linux NFS options others have posted and have not seen improvement so far. We are now creating VMDKs for the VM to replace the Linux NFS mounts and work around the issue, as our developers need consistent performance ASAP.

    Read the article

  • How to use router QoS?

    - by Nathaniel
    N00b question. How exactly does one use a router's quality of service settings? I've read up on it a bit but I'm still not exactly sure how to use it. So, my real questions are these: Generally, how does QoS work? And how would one use it, say, to guarantee smooth performance in a latency-sensitive application (cough online gaming cough)? Performance for that sort of thing bombs out on our connection when somebody is uploading files. I apologize if this is kind of sprawling; suggestions to clean it up / edits welcome.

    Read the article

  • ZFS/Btrfs/LVM2-like storage with advanced features on Linux?

    - by Easter Sunshine
    I have 3 identical internal 7200 RPM SATA hard disk drives in a Linux machine. I'm looking for a storage set-up that will give me all of this:

    - Different data sets (filesystems or subtrees) can have different RAID levels, so I can choose performance, space overhead, and risk trade-offs differently for different data sets while having a small number of physical disks (very important data can be 3x RAID 1, important data can be 3x RAID 5, unimportant reproducible data can be 3x RAID 0).
    - If each data set has an explicit size or size limit, the ability to grow and shrink the size limit (offline if need be).
    - Avoidance of out-of-kernel modules.
    - R/W or read-only COW snapshots. If they are block-level snapshots, the filesystem should be synced and quiesced during a snapshot.
    - Ability to add physical disks and then grow/redistribute RAID 1, RAID 5, and RAID 0 volumes to take advantage of the new spindle and make sure no spindle is hotter than the rest (e.g., in NetApp, growing a RAID-DP raid group by a few disks will not balance the I/O across them without an explicit redistribution).

    Not required but nice-to-haves:

    - Transparent compression, per file or per subtree. Even better if, like NetApp, it analyzes the data first for compressibility and only compresses compressible data.
    - Deduplication that doesn't have huge performance penalties or require obscene amounts of memory (NetApp does scheduled deduplication on weekends, which is good).
    - Resistance to silent data corruption, like ZFS (this is not required because I have never seen ZFS report any data corruption on these specific disks).
    - Storage tiering, either automatic (based on caching rules) or via user-defined rules (yes, I have all-identical disks now, but this would let me add a read/write SSD cache in the future). If it's user-defined rules, these rules should have the ability to promote to SSD at a file level and not a block level.
    - Space-efficient packing of small files.

    I tried ZFS on Linux, but the limitations were:

    - Upgrading is additional work because the package is in an external repository and is tied to specific kernel versions; it is not integrated with the package manager.
    - Write IOPS do not scale with the number of devices in a raidz vdev.
    - Disks cannot be added to raidz vdevs.
    - Selected data cannot be put on RAID 0 to reduce overhead and improve performance without additional physical disks or giving ZFS a single partition of the disks.

    ext4 on LVM2 looks like an option, except I can't tell whether I can shrink, extend, and redistribute RAID-type logical volumes onto new spindles (of course, I can experiment with LVM on a bunch of files). As far as I can tell, it doesn't have any of the nice-to-haves, so I was wondering if there is something better out there. I did look at LVM dangers and caveats, but then again, no system is perfect.

    Read the article

  • does my machine configuration make sense?

    - by user1227914
    I couldn't think of a better place to ask this question, so here it goes. We're putting together a dedicated server for a website that will initially host the web server and the MySQL database. As the website grows, we'll move the database to a different server and this machine will eventually serve only the actual website. So the question is: does my configuration look okay? It's the first time I'm building a server from scratch, so I want to make sure I don't combine components that don't fit together -- things like whether the drives I picked work in the hot-swap bays, etc. What do you guys think? Am I good to go with this configuration? :)

    - Chassis: Supermicro SuperServer 6016T-MTHF (6x DDR3 SDRAM ECC DIMM 240-pin slots, 2x LGA1366 sockets, 600 W power supply, 4 free 3.5" hot-swap bays)
    - CPU: Intel BX80614E5620 Xeon E5620 - 4 cores, 2.40 GHz, LGA 1366, 5.86 GT/s QPI, 12 MB cache, 64-bit, 80 W, Hyper-Threading
    - Memory: Crucial CT51272BB1339 4 GB PC3-10600 DDR3 - 1333 MHz, ECC, registered, 1x 4096 MB (possibly 3 or 4 of them)
    - Hard drives: Western Digital WD2002FAEX Caviar Black - 2 TB, 3.5", SATA 6 Gbps, 7200 RPM, 64 MB cache (possibly 2 or 3)

    Thank you very much for any professional advice :)

    Read the article
