Search Results

Search found 13584 results on 544 pages for 'loading variables'.


  • Starting/Stopping IBM WebSphere Application Server (WAS) 7 from the Command Line

    - by Christopher Parker
    I've written a script to automate the process of starting, stopping, and restarting WAS7 from the command line. Nothing starts automatically on one of our staging servers, so I have to start everything: deployment manager, node agent, app server, and Web server. The script I wrote seems to work pretty well. A coworker of mine recommended that I structure my commands differently. I'm wondering if there's a good, valid reason for doing so. First, my variables:

        WAS_HOME="/opt/IBM/WebSphere/AppServer"
        WAS_PROFILE_NAME="AppSrv01"
        WAS_APP_SERVER="server1"
        WAS_WEB_SERVER="webserver1"

    How I had the start commands:

        "${WAS_HOME}/bin/startManager.sh"
        "${WAS_HOME}/bin/startNode.sh" -profileName $WAS_PROFILE_NAME
        "${WAS_HOME}/bin/startServer.sh" -profileName $WAS_PROFILE_NAME $WAS_APP_SERVER
        "${WAS_HOME}/bin/startServer.sh" -profileName $WAS_PROFILE_NAME $WAS_WEB_SERVER

    I was told that I should do it like this, instead:

        WAS_DMGR="Dmgr01" # Added variable
        "${WAS_HOME}/profiles/${WAS_PROFILE_NAME}/bin/startNode.sh"
        "${WAS_HOME}/profiles/${WAS_DMGR}/bin/startManager.sh"
        "${WAS_HOME}/profiles/${WAS_PROFILE_NAME}/bin/startServer.sh" $WAS_APP_SERVER
        "${WAS_HOME}/profiles/${WAS_PROFILE_NAME}/bin/startServer.sh" $WAS_WEB_SERVER

    How is the second way of starting up everything for WebSphere any better or more correct than the first, original, way?
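
    For reference, here is a minimal sketch of how the second style might be wrapped into one start script. The deployment manager profile name (Dmgr01) and the idea of driving everything from the per-profile bin directories are assumptions based on a default WAS 7 layout, not a verified setup:

        #!/bin/bash
        # Hypothetical start-all script using the per-profile bin directories,
        # so -profileName is implied by each script's location. Untested sketch.
        WAS_HOME="/opt/IBM/WebSphere/AppServer"
        WAS_DMGR_PROFILE="Dmgr01"       # assumed deployment manager profile name
        WAS_PROFILE_NAME="AppSrv01"
        WAS_APP_SERVER="server1"
        WAS_WEB_SERVER="webserver1"

        "${WAS_HOME}/profiles/${WAS_DMGR_PROFILE}/bin/startManager.sh"
        "${WAS_HOME}/profiles/${WAS_PROFILE_NAME}/bin/startNode.sh"
        "${WAS_HOME}/profiles/${WAS_PROFILE_NAME}/bin/startServer.sh" "${WAS_APP_SERVER}"
        "${WAS_HOME}/profiles/${WAS_PROFILE_NAME}/bin/startServer.sh" "${WAS_WEB_SERVER}"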

    Read the article

  • racoon-tool doesn't generate full racoon.conf file in /var/lib/racoon/racoon.conf

    - by robthewolf
    I am using ipsec-tools/racoon to create my VPN. I am using racoon-tool to configure racoon.conf, but when I run racoon-tool reload it only generates the first section - Global items. When I run racoon-tool I get:

        # racoon-tool reload
        Loading SAD and SPD...
        SAD and SPD loaded.
        Configuring racoon...done.

    This is the entire file /var/lib/racoon/racoon.conf:

        #
        # Racoon configuration for Samuel
        # Generated on Wed Jan 5 21:31:49 2011 by racoon-tool
        #
        #
        # Global items
        #
        path pre_shared_key "/etc/racoon/psk.txt";
        path certificate "/etc/racoon/certs";
        log debug;

    I cannot find a solution anywhere as to why this is happening. Please help.

    Read the article

  • pam_exec.so PAM module does not export variable PAM_USER as stated in the documentation

    - by davidparks21
    I'm trying to use the pam_exec.so PAM module to execute a script which needs to know the username/password coming from the application (OpenVPN in this case). I have a script that executes printenv >> afile, but I don't see all the environment variables that the man page states pam_exec.so exports (namely PAM_USER, I think). I only see the following:

        PAM_SERVICE=openvpn
        PAM_TYPE=auth
        PWD=/usr/local/openvpn/bin
        SHLVL=1
        A__z="*SHLVL

    I do successfully pick up the password off of STDIN and output it with this same script, but for the life of me I can't get the username. Any thoughts on what I should try next?
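
    For comparison, here is a minimal sketch of the kind of setup being described. The script path, the PAM service file line, and the use of the expose_authtok option to receive the password on stdin are assumptions for illustration, not the poster's actual configuration:

        #!/bin/sh
        # Hypothetical /usr/local/openvpn/bin/auth-debug.sh, assuming a PAM line such as:
        #   auth required pam_exec.so expose_authtok /usr/local/openvpn/bin/auth-debug.sh
        read PASSWORD                    # password arrives on stdin via expose_authtok
        {
            echo "user: ${PAM_USER}"     # pam_exec documents PAM_USER as an exported variable
            printenv | grep '^PAM_'     # dump every PAM_* variable visible to the script
        } >> /tmp/pam_exec_debug.log
        exit 0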

    Read the article

  • serving static assets via http is really slow compared to sshfs (apache2/nginx)

    - by s1lv3r
    After migrating to a new VPS I had some users complaining about slow-loading images on their sites. After creating some test files with dd I realized that I can download all files via sshfs at full speed, while downloads via the web are painfully slow. The larger the file is and the longer the transfer takes, the slower the transfer speed gets. I thought I had a problem with Apache and just spent the whole evening replacing Apache2 with nginx for static file serving - with no effect at all. There are no I/O wait states in top, tons of RAM free, no high CPU utilization, and hdparm shows decent I/O performance at all times. I just have no idea anymore what's happening on this server. This is a link to a demo file: http://master.dealux.de/file.tgz Does anybody have an idea what I can check?

    Read the article

  • [metasploit] Has anyone gotten multi/browser/java_signed_applet to work?

    - by marc
    Welcome. Today I want to test the exploit "exploit/multi/browser/java_signed_applet" on my Ubuntu 10.04 desktop using the Metasploit framework. I'm following this guide: http://pauldotcom.com/wiki/index.php/Episode185 When I try to start the exploit, I get this error:

        JVM not initialized. You must install the Java Development Kit, the rjb ruby gem, and set the $JAVA_HOME variable.
        [-] Falling back to static signed applet. This exploit will still work, but the CERTCN and APPLETNAME variables will be ignored.

    I have installed sun-java6-jdk and ran gem install rjb, and the path to Java looks to be working, because:

        ls $JAVA_HOME
        bin  ext  jre  LICENSE  README.html
        COPYRIGHT  include  lib  man  THIRDPARTYLICENSEREADME.txt

    If anyone has any idea... except installing BackTrack, which is not possible because I need to use this on my Ubuntu (I have to virtualize XP for the test). Regards

    Read the article

  • Screenshot before windows starts: without another computer?

    - by Nano8Blazex
    I'm pretty sure that this must have been asked before, but I haven't found a duplicate question, thus I shall ask again... Is it possible to take a screenshot of my computer before the OS boots up, WITHOUT an external computer or virtualization software like VMware? E.g. at the BIOS, at the "Windows is loading" screen, or even at the login screen? Is it possible to take screenshots of BSODs as well? EDIT: I'm using a desktop PC running Windows 7 Ultimate in a home environment. Thanks.

    Read the article

  • XNA: Load and read an XML file?

    - by Rosarch
    I'm having difficulty doing this seemingly simple task. I want to load XML files with the same ease as loading art assets:

        content = new ContentManager(Services);
        content.RootDirectory = "Content";
        background = content.Load<Texture2D>("images\\ice");

    I'm not sure how to do this. This tutorial seems helpful, but how do I get a StorageDevice instance? I do have something working now, but it feels pretty hacky:

        public IDictionary<string, string> Get(string typeName)
        {
            IDictionary<String, String> result = new Dictionary<String, String>();
            xmlReader.Read(); // get past the XML declaration

            string element = null;
            string text = null;
            while (xmlReader.Read())
            {
                switch (xmlReader.NodeType)
                {
                    case XmlNodeType.Element:
                        element = xmlReader.Name;
                        break;
                    case XmlNodeType.Text:
                        text = xmlReader.Value;
                        break;
                }

                if (text != null && element != null)
                {
                    result[element] = text;
                    text = null;
                    element = null;
                }
            }
            return result;
        }

    I apply this to the following XML file:

        <?xml version="1.0" encoding="utf-8" ?>
        <zombies>
          <zombie>
            <health>100</health>
            <positionX>23</positionX>
            <positionY>12</positionY>
            <speed>2</speed>
          </zombie>
        </zombies>

    And it is able to pass this unit test:

        internal virtual IPersistentState CreateIPersistentState(string fullpath)
        {
            IPersistentState target = new ReadWriteXML(File.Open(fullpath, FileMode.Open));
            return target;
        }

        /// <summary>
        /// A test for Get with one zombie.
        /// </summary>
        //[TestMethod()]
        public void SimpleGetTest()
        {
            string fullPath = "C:\\pathTo\\Data\\SavedZombies.xml";
            IPersistentState target = CreateIPersistentState(fullPath);
            string typeName = "zombie";

            IDictionary<string, string> expected = new Dictionary<string, string>();
            expected["health"] = "100";
            expected["positionX"] = "23";
            expected["positionY"] = "12";
            expected["speed"] = "2";

            IDictionary<string, string> actual = target.Get(typeName);
            foreach (KeyValuePair<string, string> entry in expected)
            {
                Assert.AreEqual(entry.Value, actual[entry.Key]);
            }
        }

    Downsides to the current approach: file loading is done poorly, and matching keys to values seems like it's way more effort than necessary. Also, I suspect this approach would fall apart with more than one entry in the XML.

    Read the article

  • Bash preexecute

    - by Alex_Bender
    I'm trying to write a bash command wrapper which will patch the current bash command on the fly, but I've run into a problem. As I'm not a good shell user, I can't write the right expression for the variable assignment in the string. See below. I set a trap for pre-execution like this:

        alex@bender:~$ trap "caller >/dev/null || xxx \"\${BASH_COMMAND}"\" DEBUG;

    I want to change the variable BASH_COMMAND, i.e. do something like BASH_COMMAND=xxx ${BASH_COMMAND}, but I don't know how I need to escape the variables in this string. NOTE: xxx is my custom function, which must return some value if the word teststr is at the end of the command:

        function xxx(){
            # find by grep, if teststr is at the end
            `echo "$1" | grep "teststr$" >/dev/null`;
            # if true ==> do
            if [ "$?" == "0" ]; then
                # cut the last 7 chars (len('teststr')==7)
                var=`echo "$1" | sed 's/.......$//'`;
                echo "$var";
            fi
        }

    How can I do this stuff?

        alex@bender:~$ trap "caller >/dev/null || ${BASH_COMMAND}=`xxx $BASH_COMMAND`" DEBUG;
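
    For what it's worth, a minimal sketch of the quoting side of this, reusing the xxx/teststr names from the post: single quotes keep $BASH_COMMAND from being expanded when the trap is set, so it is expanded each time the trap fires instead. This only observes the command; as far as I know a DEBUG trap cannot rewrite BASH_COMMAND to change what actually executes:

        #!/bin/bash
        # Hypothetical helper: print the command minus a trailing 'teststr', if present.
        xxx() {
            if echo "$1" | grep -q 'teststr$'; then
                echo "${1%teststr}"     # strip the literal trailing word
            fi
        }

        # Single-quoted so $BASH_COMMAND is evaluated when the trap fires, not now.
        trap 'caller >/dev/null || xxx "$BASH_COMMAND"' DEBUG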

    Read the article

  • handling long running large transactions with perl dbi

    - by 1stdayonthejob
    I've got a large transaction that involves getting lots of data from database A, doing some manipulation on that data, then inserting the manipulated data into database B. I've only got permission to select in database A, but I can create tables and insert/update etc. in database B. The manipulation and insertion part is written in Perl and already in use for loading data into database B from other data sources, so all that's required is to get the necessary data from database A and use it to initialize the Perl classes. How can I go about doing this so I can easily track back and pick up from where the error happened if any error occurs during the manipulation or insertion procedures (database disconnection, problems with class initialization because of invalid values, hard disk failure, etc.)? Doing the transaction in one go doesn't seem like a good option, because the amount of data from database A means it would take at least a day or two for the data manipulation and insertion into database B. The data from database A can be grouped into around 1000 groups using unique keys, with each key covering thousands of rows. One way I thought I could do this is to write a script that commits per group, meaning I've got to track which groups have already been inserted into database B. The only way I can think of to track which groups have been processed is either a log file or a table in database B. A second way I thought could work is to dump all the fields needed for loading the classes into a flat file, then read the file to initialize the classes and insert into database B. This also means I've got to do some logging, but it should narrow things down to the exact row in the flat file if any error occurs. The script will look something like this:

        use strict;
        use warnings;
        use DBI;

        # connect to database A
        my $dbh = DBI->connect('dbi:oracle:my_db', $user, $password,
                               { RaiseError => 1, AutoCommit => 0 });

        # statement to get data based on the group unique key
        my $sth = $dbh->prepare($my_sql);

        my @groups; # I have a list of this already

        open my $fh, '>>', 'my_logfile' or die "can't open logfile $!";

        eval {
            foreach my $g (@groups) {
                # subroutine to check if the group has already been processed,
                # either from the log file or from a database table
                next if is_processed($g);

                $sth->execute($g);
                my $data = $sth->fetchall_arrayref;

                # manipulate $data, then use it to load perl classes for
                # insertion into database B
                # .
                # .
                # .

                print $fh "$g\n";
            }
        };
        if ($@) {
            $dbh->rollback;
            die "something wrong...rollback";
        }

    So if any errors do occur, I can just run this script again and it should skip the groups or rows that have been processed and continue. Both of these methods are just variations on the same theme, and both require going back to wherever I've been tracking my progress (in a table or file), skipping the ones that have been committed to database B, and processing the remaining data. I'm sure there's a better way of doing this but I am struggling to think of other solutions. Is there another way of handling large transactions between databases that require data manipulation between getting data out of one and inserting into the other? The process doesn't need to be all in Perl, as long as I can reuse the Perl classes for manipulating and inserting the data into the database.

    Read the article

  • PowerShell format.ps1xml not reachable

    - by blsub6
    I'm trying to load the Exchange Management Shell and it gives me a big ol' red error that says:

        Import-Module : There were errors in loading the format data file: Microsoft.PowerShell, , %APPDATA%\Roaming\Microsoft\Exchange\RemotePowerShell\DOMAINNAME.format.ps1xml : File skipped because of the following validation exception: File %APPDATA%\Roaming\Microsoft\Exchange\RemotePowerShell\DOMAINNAME.format.ps1xml cannot be loaded. The file %APPDATA%\Roaming\Microsoft\ExchangeRemotePowerShell\DOMAINNAME\DOMAINNAME.format.ps1xml is not digitally signed. The script will not execute on the system. Please see "get-help about_signing" for more details...

    The %APPDATA% is stored on an external server on my network (which I can ping without problems). I am also missing a ton of PS cmdlets, which I'm presuming are stored in '*.format.ps1xml'. I tried finding the directory in which format.ps1xml is supposed to reside on the external server and it's not even created. Can someone tell me where to start?

    Read the article

  • Windows Server 2008 hangs up while booting

    - by Jim R
    Windows Server 2008 hangs while booting after Windows Update applied several updates. The server is a virtual instance on a Server 2008 Hyper-V host. Other virtual servers are fine, but have not been updated. A normal boot shows the horizontal barber pole forever. When I do a safe boot it also hangs, with a "Please Wait..." after loading many '.sys' files. The last successfully loaded file listed is '\Windows\system32\drivers\crcdisk.sys'. That is the extent of what I have been able to determine.

    Read the article

  • Drupal - Site Building, Menus

    - by Nicholas O'Neil
    Hi, I'm working with Drupal for the first time and trying to figure out how to change the menu item names through the Administer > Site Building configuration menus. I have logged into the Drupal admin site, navigated to Administer / Site Building / Menus, selected the menu item, clicked Edit, and changed the Title from 'About' to 'About Us'; however, when loading the site page it still says 'About'. Please pardon my lack of understanding of Drupal - I am somewhat familiar with Joomla, another CMS - but I just need some pointers in the right direction as to how to edit the links and rename them within Drupal. The links are not images, as far as I can tell from viewing the source and the properties on the links. Thank you!

    Read the article

  • How To Start NX/VNC Session At Ubuntu Boot Time?

    - by darkAsPitch
    I want an NX or VNC session to start automatically when an Ubuntu server boots up - without a monitor connected - loading a certain user's desktop and keeping it loaded and ready until I log in via NX or VNC. How would one accomplish that? This command works from my terminal when I am logged in as the NX user, but not via root, and not in the init.d folder - no idea why:

        /usr/NX/bin/nxclient --session /home/user/.nx/config/SavedSession.nxs

    Please provide somewhat simplified instructions! I am a certified Linux newb.
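
    As a starting point only, here is a minimal sketch of the generic "run a command as a specific user at boot" pattern. The user name and session path are taken from the post, but the rc.local approach, the log path, and whether nxclient will run without a logged-in desktop session are assumptions, not a verified fix:

        #!/bin/sh
        # Hypothetical addition to /etc/rc.local (runs as root late in the boot sequence).
        # Launch the saved NX session as the normal user, in the background, logging
        # output so boot-time failures can be inspected afterwards.
        su - user -c '/usr/NX/bin/nxclient --session /home/user/.nx/config/SavedSession.nxs' \
            >> /var/log/nx-boot.log 2>&1 &
        exit 0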

    Read the article

  • PHP DL Function

    - by Pete Herbert Penito
    Is allowing dynamic extension loading dangerous for some reason? I ask because I need it to load the PECL oauth.so extension with dl() to make the Google AdWords PHP SDK work. I've tried all other alternatives but just can't get it to work: http://php.net/manual/en/function.dl.php enable_dl is set to off by default inside my php.ini; I enabled it, restarted Apache, and it works. If it's safe to use, why is it disabled by default? I'm the only user with access to the server and it will be hosting a web application. Any advice would be helpful!
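
    As an aside, the extension can usually be loaded permanently from the ini configuration instead of via dl(), which lets enable_dl stay off. A rough sketch follows; the conf.d path assumes a Debian/Ubuntu-style PHP 5 layout and may differ on other systems:

        # Assumed paths for a Debian/Ubuntu-style PHP install; adjust as needed.
        echo "extension=oauth.so" | sudo tee /etc/php5/conf.d/oauth.ini
        sudo service apache2 restart
        php -m | grep -i oauth    # confirm the extension is now loaded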

    Read the article

  • nginx dynamic servername with regular expression doesn't work for co.uk

    - by redn0x
    I'm trying to set up an nginx server which dynamically loads content from a folder for a domain. To do this I'm using regular expressions in the server name, like so:

        server_name ((?<subdomain>.+)\.)?(?<domain>.+)\.(?<tld>.*);

    This creates 3 variables for nginx to use later on. For example, for the URL test.foo.example.com this evaluates to:

        $subdomain = test.foo
        $domain = example
        $tld = com

    The problem arises when the co.uk top-level domain is used. In that case, for the URL test.foo.example.co.uk it evaluates to:

        $subdomain = test.foo.example
        $domain = co
        $tld = uk

    How can I edit the regular expression so that it will also work for co.uk?

    Read the article

  • dual boot preinstalled windows 8 laptop with windows 7

    - by sarathi
    I have a hard disk with a GPT partition table, and Windows 8 is pre-installed in UEFI mode. While trying to dual boot using a USB bootable disk (made with Rufus, GPT partition scheme, FAT32), it displays "Windows is loading files", then hangs while displaying "Starting Windows". So I tried to install it from inside Windows using setup.exe. Everything was running fine, but when it restarted itself, it got stuck at "Starting Windows" again. When restarted into Windows 8, it showed an .htm file stating that "Windows cannot be installed on a computer using battery power. If the battery runs out of power during the installation, you might lose data. To continue the installation, plug in the computer's power adapter." I am sure that the power adapter is connected. I have Googled a lot on this, but I didn't find a solution.

    Read the article

  • Internet connection slower than network connection speed

    - by Mike Pateras
    I've got a computer connected to a wireless router on a different floor. When I look at the network connection, I'm told the signal strength is low and that I've got a connection of about 26 Mbps (often higher). However, the internet connection on that machine is very slow. Speed tests show it at about 1-2 Mbps, and it really shows when loading pages and video. I have fiber optic internet access, and the machine that's connected to the router/modem via cable gets 20 Mbps on speed tests, and is extremely fast in everyday use. My question is: is the advertised 26 Mbps+ connection speed perhaps inaccurate, so that my wireless bandwidth is the likely bottleneck here? Or is the signal strength what's key here? And what might I do about this? Power cycling the router helped a bit; a speed test went as high as 6 Mbps after doing that.

    Read the article

  • Incremental PCA

    - by smichak
    Hi, Lately, I've been looking into an implementation of an incremental PCA algorithm in python - I couldn't find something that would meet my needs so I did some reading and implemented an algorithm I found in some paper. Here is the module's code - the relevant paper on which it is based is mentioned in the module's documentation. I would appreciate any feedback from people who are interested in this. Micha

        #!/usr/bin/env python
        """
        Incremental PCA calculation module.

        Based on P.Hall, D. Marshall and R. Martin "Incremental Eigenalysis for
        Classification" which appeared in British Machine Vision Conference,
        volume 1, pages 286-295, September 1998.

        Principal components are updated sequentially as new observations are
        introduced. Each new observation (x) is projected on the eigenspace
        spanned by the current principal components (U) and the residual vector
        (r = x - U(U.T*x)) is used as a new principal component (U' = [U r]).
        The new principal components are then rotated by a rotation matrix (R)
        whose columns are the eigenvectors of the transformed covariance matrix
        (D=U'.T*C*U) to yield p + 1 principal components. From those, only the
        first p are selected.
        """

        __author__ = "Micha Kalfon"

        import numpy as np

        _ZERO_THRESHOLD = 1e-9      # Everything below this is zero


        class IPCA(object):
            """Incremental PCA calculation object.

            General Parameters:
                m - Number of variables per observation
                n - Number of observations
                p - Dimension to which the data should be reduced
            """

            def __init__(self, m, p):
                """Creates an incremental PCA object for m-dimensional observations
                in order to reduce them to a p-dimensional subspace.

                @param m: Number of variables per observation.
                @param p: Number of principle components.
                @return: An IPCA object.
                """
                self._m = float(m)
                self._n = 0.0
                self._p = float(p)
                self._mean = np.matrix(np.zeros((m, 1), dtype=np.float64))
                self._covariance = np.matrix(np.zeros((m, m), dtype=np.float64))
                self._eigenvectors = np.matrix(np.zeros((m, p), dtype=np.float64))
                self._eigenvalues = np.matrix(np.zeros((1, p), dtype=np.float64))

            def update(self, x):
                """Updates with a new observation vector x.

                @param x: Next observation as a column vector (m x 1).
                """
                m = self._m
                n = self._n
                p = self._p
                mean = self._mean
                C = self._covariance
                U = self._eigenvectors
                E = self._eigenvalues

                if type(x) is not np.matrix or x.shape != (m, 1):
                    raise TypeError('Input is not a matrix (%d, 1)' % int(m))

                # Update covariance matrix and mean vector and centralize input
                # around new mean
                oldmean = mean
                mean = (n*mean + x) / (n + 1.0)
                C = (n*C + x*x.T + n*oldmean*oldmean.T - (n+1)*mean*mean.T) / (n + 1.0)
                x -= mean

                # Project new input on current p-dimensional subspace and calculate
                # the normalized residual vector
                g = U.T*x
                r = x - (U*g)
                r = (r / np.linalg.norm(r)) if not _is_zero(r) else np.zeros_like(r)

                # Extend the transformation matrix with the residual vector and find
                # the rotation matrix by solving the eigenproblem DR=RE
                U = np.concatenate((U, r), 1)
                D = U.T*C*U
                (E, R) = np.linalg.eigh(D)

                # Sort eigenvalues and eigenvectors from largest to smallest to get
                # the rotation matrix R
                sorter = list(reversed(E.argsort(0)))
                E = E[sorter]
                R = R[:,sorter]

                # Apply the rotation matrix
                U = U*R

                # Select only p largest eigenvectors and values and update state
                self._n += 1.0
                self._mean = mean
                self._covariance = C
                self._eigenvectors = U[:, 0:p]
                self._eigenvalues = E[0:p]

            @property
            def components(self):
                """Returns a matrix with the current principal components as columns.
                """
                return self._eigenvectors

            @property
            def variances(self):
                """Returns a list with the appropriate variance along each principal
                component.
                """
                return self._eigenvalues


        def _is_zero(x):
            """Return a boolean indicating whether the given vector is a zero vector
            up to a threshold.
            """
            return np.fabs(x).min() < _ZERO_THRESHOLD


        if __name__ == '__main__':
            import sys

            def pca_svd(X):
                X = X - X.mean(0).repeat(X.shape[0], 0)
                [_, _, V] = np.linalg.svd(X)
                return V

            N = 1000
            obs = np.matrix([np.random.normal(size=10) for _ in xrange(N)])

            V = pca_svd(obs)
            print V[0:2]

            pca = IPCA(obs.shape[1], 2)
            for i in xrange(obs.shape[0]):
                x = obs[i,:].transpose()
                pca.update(x)

            U = pca.components
            print U

    Read the article

  • Is there a way of disabling byte-range requests in Apache?

    - by Sam Minnée
    I have a web page with a link to a PDF file (target="_blank"). If I click the link, the PDF reader just shows a grey screen within the Firefox browser. If I copy that link and manually open it in a new tab, the PDF will display correctly, and subsequent requests made by clicking the original link now work, suggesting that the problem occurs when loading the file into the cache. It appears as though the Adobe PDF reader plugin is making byte-range requests (I see lots of 206 responses) and I suspect that this may be the cause of the issue. I am running an Apache webserver. Has anyone had problems with Apache and Adobe's byte-range requests? Are there any workarounds? Perhaps a way of configuring Apache to ignore byte-range requests on PDFs?
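
    For framing, here is a rough sketch of the kind of workaround being asked about: stop advertising range support and drop Range request headers for PDFs using mod_headers. The config path assumes a Debian-style Apache layout, and whether this actually makes the Adobe plugin behave is exactly the open question, so treat it as an untested idea:

        # Assumed Debian-style layout; requires mod_headers.
        sudo a2enmod headers
        printf '%s\n' \
            '<FilesMatch "\.pdf$">' \
            '    Header set Accept-Ranges none' \
            '    RequestHeader unset Range' \
            '</FilesMatch>' | sudo tee /etc/apache2/conf.d/pdf-no-ranges.conf
        sudo service apache2 restart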

    Read the article

  • Adobe Plugin in Firefox showing grey screen; something to do with range requests on Apache?

    - by Sam Minnée
    I have a web page with a link to a PDF file (target="_blank"). If I click the link, the PDF reader just shows a grey screen within the Firefox browser. If I copy that link and manually open it in a new tab, the PDF will display correctly, and subsequent requests made by clicking the original link now work, suggesting that the problem occurs when loading the file into the cache. It appears as though the Adobe PDF reader plugin is making byte-range requests (I see lots of 206 responses) and I suspect that this may be the cause of the issue. I am running an Apache webserver. Has anyone had problems with Apache and Adobe's byte-range requests? Are there any workarounds?

    Read the article

  • Webpage redirection time

    - by Abhijeet Ashok Muneshwar
    I want to calculate the time consumed in redirecting from one webpage to another. For example:

    1) I am using Facebook in the Google Chrome browser. I have shared a link on my Facebook profile like the one below: http://www.webdeveloper.com/ (It's not only Facebook; it can be any domain having a link to another domain.)

    2) When I click on this link from my Facebook profile, the website opens in a new tab.

    3) I want to calculate the time difference, in milliseconds or microseconds, between the two events below:

    First event: the time of clicking the link "http://www.webdeveloper.com/" from my Facebook profile.
    Second event: the time at which the webpage "http://www.webdeveloper.com/" has completely loaded.

    Thank you in advance.

    Read the article

  • How to use iTunes USB File Transfer to copy files from PC to Apple iPad, e.g. PDF files for viewer a

    - by Chris W. Rea
    I'm interested in reading PDF-format ebooks on my Apple iPad. I have half a gig of PDFs I want to transfer to it, from my PC. I'm familiar already with loading EPUB-format titles through iBooks – unfortunately, iBooks doesn't read PDFs so I am looking at using a third-party application. I know many such third-party media viewer applications for the iPad support download from web or email, but that's a hassle. I've heard iTunes 9.1 added support for USB File Transfer, specifically for iPad devices. How does USB File Transfer work in iTunes, for transferring files from my PC to my iPad? Please provide example steps. Moderators: Please remember the FAQ's "except insofar as they interface with your computer." ;-)

    Read the article

  • Gigabyte Motherboard + Adaptec RAID = No Booting from any drives

    - by Farseeker
    I have a brand new PC, just out of the box. It has a Gigabyte GA-P55-USB3 motherboard. I also have an Adaptec ASR-2504 SAS RAID card with 2x 15k Seagate Barracuda SAS drives attached. After the motherboard initializes its on-board RAID, it then initializes the Adaptec RAID. It detects all the RAID devices OK, but when it gets to "Loading Operating System..." (i.e. right before it should load the OS) it just sits there forever, doing nothing. If I force it to boot from the optical drive, I see it spin up for a few seconds and then die down again. If I remove the Adaptec RAID card, everything works perfectly. As soon as it's plugged back in, it never gets past that stage. The RAID card should be perfectly fine (it was before), but I have raised a case with Adaptec anyway. Any suggestions on what I can try to get these two to play nicely together?

    Read the article

  • Chrome Java Plugin not working after moving Java installation directory

    - by Florian Peschka
    I recently moved all my Java installations from C:\Program Files\Java to D:\Java, as my C:\ partition only had a few MB of free space left and Windows kept bugging me about it. I also edited every registry entry in HKEY_LOCAL_MACHINE\SOFTWARE\JavaSoft that pointed to the old directory to point to the new one. Then I changed the PATH variables and removed the old files, did a restart, and all Java applications work just fine. Yet Chrome is the only application that seems to have a problem with what I did, as the Java plugin isn't recognized any more. Are there any registry entries I may have missed? Maybe in the Chrome registry? How can I make this work again?

    Read the article

  • Windows Swap (Page File): Enable or Disable?

    - by d03boy
    From my personal experience I've noticed that disabling the page file in Windows XP has given me, in general, the biggest speed gain of any software change I can make. Obviously this has to be done when a significant amount of RAM is available; typically I find that it works nicely with 2 GB or more of RAM. The only issues I've ever really had were when loading up Adobe Photoshop. Is this really a speed improvement or am I imagining it? Note: in order to actually turn it off, you must not just set it to 0 MB, but disable it. Otherwise Windows will just expand it when it needs to in order to meet its needs.

    Read the article
