Search Results

Search found 13909 results on 557 pages for 'old man'.

  • Performance of virtual machines on very low-end machines

    - by TheLQ
    I am managing a few cheap servers, as my user base isn't large enough to justify much more powerful ones, and I don't have the money lying around to invest in a server to prepare for a larger user base. So I'm stuck with the old hardware I have. I am toying with the idea of virtualizing all the current OSes, most likely with Xen (I first considered VMware vSphere Hypervisor, AKA ESXi, but its HCL is too strict and my hardware is too old). Big reasons for doing so:
    - Ability to upgrade and scale hardware rapidly: this is most likely what I'll be doing as I distribute services, get a bigger server, centralize (electricity bills are horrible), distribute, get a bigger server, etc. Manually doing this by reinstalling the entire OS would be a big pain.
    - Safety from me: I've made many rookie mistakes, like doing lots of risky work on a vital production server. With a VM I can just back up the state, work on my machine, test, and revert if necessary. No worries, and no OS reinstallation.
    - Safety from other factors: as I scale, servers might go down, and a backup VM can be started instantly. Various other reasons.
    However, the limiting factor here is hardware, and I mean very depressing hardware: the current servers run off a Pentium 3 and a Pentium 4, with 512 MB and 768 MB of RAM respectively (the RAM can be upgraded soon, however). Is the virtualization layer small enough to run itself and a Linux OS effectively? Will performance be acceptable (50% CPU overhead for every operation isn't)? Does it leave enough RAM for the Linux OS? Is this even feasible?

  • Using multiple computers effectively

    - by Benjamin Oakes
    I have some extra (old) Macs and PCs around the house and a MacBook that's sometimes overworked. I'm looking for tips on using multiple computers effectively; basically, I'd like to add to the following list. Here's what I'm using so far:
    - Teleport: lets you use a single mouse and keyboard to control several Macs, like Synergy
    - Built-in file sharing: lets me run programs on another Mac, but only maintain one copy of the data
    - Bazaar: distributed version control
    - Mail.app, Thunderbird, etc.: IMAP for my mail accounts
    - TuneConnect: control iTunes on another Mac with a nice interface, using the library on my MacBook (if I choose it by pressing Option at startup) over file sharing
    - OmniFocus: syncs across computers pretty seamlessly
    - Web browsing across computers
    - VNC/Remote Desktop
    - Running X11 programs using ssh -Y hostname for headless operation (but they die when I sleep the connecting computer; something like GNU screen would be ideal, as sketched below)
    - Plain old ssh with GNU screen
    Really, a better idea of what I do might be necessary. Generally, though, I'd like to distribute tasks across more than one computer when possible, but without much overhead in doing so. The perfect solution? An Xgrid-like program that pushes processing across multiple computers automatically and seamlessly (although that seems unlikely). Here's what I have, in case it makes a difference:
    - MacBook (Dual 2.16 GHz, OS X 10.6.3)
    - eMac (1.25 GHz, OS X 10.4.11, soon to be 10.5)
    - Dell Dimension (800 MHz, some version of Ubuntu); no dedicated monitor
    - PowerMac G3 (400 MHz, OS X 10.4.11); no dedicated monitor
    - iMac G3 DV (400 MHz, OS X 10.4.11); currently in the kitchen for recipes, email, web browsing, music, movies (DVDs), etc.
    (In total they cost me around $650, mostly for the MacBook. Freecycle is wonderful, just in case you haven't heard of it.) I'm really only using the MacBook and eMac at this point, but I'd like to push more onto them, and possibly onto the PowerMac and Dell.
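
    One way to keep remote terminal work alive across sleep is to run it inside a screen session on the remote machine and reattach after waking; X11 apps are trickier, since they cannot outlive the forwarded display. A minimal sketch (assumes GNU screen, and optionally xpra, are installed on the remote host; "remote-host" is a placeholder):

        # terminal programs: run them inside screen on the remote host
        ssh remote-host
        screen -dR work          # -dR: reattach the "work" session if it exists, else create it
        long-running-task        # survives the SSH connection dropping

        # X11 programs can't survive the death of the ssh -Y display, but
        # xpra (where available) acts like "screen for X":
        xpra start :100 --start-child=some-x11-app   # on the remote host
        xpra attach ssh:remote-host:100              # from the MacBook, before or after sleeping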

  • Serving images from another hostname vs Apache overload for the rewrites

    - by luison
    We are trying to further improve the speed of some sites with older HTML, and to obtain better SEO results as well. We have now applied some minify measures, combined HTML, CSS, etc. We use a small virtualized infrastructure, and we've always wanted a light + standard HTTP server configuration, so the first one can serve images and static content and the other one PHP, rewrites, etc. We can easily do that now with a VM using the same files and vhost configuration (bind mounts) on Apache, but with hardly any modules loaded. This means the light httpd will have a smaller footprint, which would allow us to serve more and quicker, have more MinSpareServers running, etc.
    So, as browsers also benefit from loading static content from different hostnames, we've thought about building a rewrite rule on our main server (main.com) to "redirect" all images and CSS (*.jpg, *.gif, *.css, etc.) to the same files at, say, cdn.main.com, thus letting the browser open more connections (see the sketch below).
    The question is: assuming we already have a very complex rewrite ruleset (we manually manipulate many old URLs for SEO), will it be worth it? I mean, will the additional load on main's Apache of having to redirect main.com/image.jpg (I understand we'll have to do a 301) to cdn.main.com/image.jpg, plus cdn.main.com then having to serve it, be larger than the gain we would be achieving in the browser? Could the excess of 301s for all images on a page be penalized by Google? How do large companies work this out; does the original code already include images linked from the CDN with absolute paths?
    EDIT: Just to clarify, our concern is not so much server performance or bandwidth. We could obviously employ an external CDN server, but we have plenty of CPU and bandwidth. Our concern is how to have "old" sites with plenty of semi-static HTML content benefit from splitting connections for images and static content via Apache, without having to change the HTML to absolute paths (i.e. image.jpg to cdn.main.com/image.jpg happening on the server, not in the code).
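
    For reference, the redirect itself is only a couple of lines of mod_rewrite. A minimal sketch (assumes cdn.main.com serves the same document root; the extension list is illustrative):

        RewriteEngine On
        # bounce static assets that still hit main.com over to the static host
        RewriteCond %{HTTP_HOST} ^main\.com$ [NC]
        RewriteRule ^/?(.+\.(?:jpe?g|gif|png|css|js))$ http://cdn.main.com/$1 [R=301,L]

    Note that every asset still costs the browser a round trip to main.com before the 301 is followed, which is why rewriting the HTML itself, or substituting the URLs server-side with something like mod_substitute, avoids the per-request penalty entirely.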

  • How to install ImageMagick 6.6.2 on Ubuntu 10.04 (Lucid)

    - by Svyatoslavik
    How do I install ImageMagick 6.6.2 on Ubuntu 10.04 (Lucid)? The problem is that Lucid has an old ImageMagick version (6.5.2). This is very important because I need to work with SVG graphics; on my local PC I have Ubuntu 11.04 and ImageMagick 6.6.2 and everything works fine, but on the server I have 6.5.x and I have problems. Reinstalling Ubuntu as 11.* is not a solution.
    I tried changing /etc/apt/sources.list from the Ubuntu 10.04 (Lucid) list to the one from Ubuntu 11.04 (Natty) and updating ImageMagick. After this I had ImageMagick 6.6.2 (I checked phpinfo()), but ImageMagick no longer works. If I try to do any action I get this error:
        [error] 8996#0: *19983 FastCGI sent in stderr: "PHP Fatal error: Uncaught exception 'ImagickException' with message 'no decode delegate for this image format `/tmp/magick-XXnYKWKC' @ error/constitute.c/ReadImage/532'
    How do I fix it? Or how do I return to the old version of ImageMagick? And this is the problem if I try to install from source:
        /tmp/image/ImageMagick-6.7.2-7# ./configure
        configuring ImageMagick 6.7.2-7
        checking build system type... i686-pc-linux-gnu
        checking host system type... i686-pc-linux-gnu
        checking target system type... i686-pc-linux-gnu
        checking whether build environment is sane... yes
        checking for a BSD-compatible install... /usr/bin/install -c
        checking for a thread-safe mkdir -p... /bin/mkdir -p
        checking for gawk... no
        checking for mawk... mawk
        checking whether make sets $(MAKE)... yes
        checking for style of include used by make... GNU
        checking for gcc... gcc
        checking whether the C compiler works... no
        configure: error: in `/tmp/image/ImageMagick-6.7.2-7':
        configure: error: C compiler cannot create executables
        See `config.log' for more details
        /tmp/image/ImageMagick-6.7.2-7#
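
    The "C compiler cannot create executables" failure usually just means the build toolchain isn't installed. A sketch of the source-build route on Lucid (assumes the stock repositories with deb-src lines enabled; the build-dep line pulls in the delegate -dev libraries, including the librsvg pieces needed for SVG):

        sudo apt-get install build-essential
        sudo apt-get build-dep imagemagick
        cd /tmp/image/ImageMagick-6.7.2-7
        ./configure && make
        sudo make install

    The runtime "no decode delegate" error is the same theme: ImageMagick was upgraded without the delegate libraries it expects, so checking the output of `convert -list delegate`, or reinstalling the delegate packages, is worth a try.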

  • Numeric UIDs/GIDs in ACLs on OS X server (10.6)

    - by Oliver Humpage
    Hi. On one (old, OS X 10.4) server I'm tarring up some files which have ACLs. I'm then using "tar -xp" to untar the archive onto a new 10.6 server, which doesn't have any users/groups configured on it yet except the default admin (UID 501). (There's a reason for that; don't ask!) Obviously this means an "ls -lne" will list files and ACLs with numeric UIDs and GIDs.
    Now, for the normal file permissions it makes sense: you get UIDs like "1037". And for some ACLs it also makes sense: you get things like "AAAABBBB-CCCC-DDDD-EEEE-FFFF00000402" for groups (0x402 = GID 1026) and "FFFFEEEE-DDDD-CCCC-BBBB-AAAA000001F5" for users (0x1F5 = UID 501). However, some ACLs have UIDs like "E51DA674-AE70-41BC-8340-9B06C243A262" or GIDs like "0A3FCD24-0012-46FA-B085-88519E55EF29", and I have absolutely no idea how to translate these IDs back into something that could be matched to the original IDs (UID 1072 and GID 1047 respectively, in this example). Can anyone help me translate these weird long hex strings?
    (Basically we're moving from local users to an Active Directory setup, so I want to move all files to the new server with permissions intact, then chmod, chgrp and set ACLs such that we translate old IDs to the new AD IDs. Hence needing some way to map between the sets. I don't believe there's an easier way to do this?)
    Many thanks, Oliver.
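
    Directory Services assigns real user and group records their own opaque UUIDs (the AAAABBBB.../FFFFEEEE... patterns in the question are just the compatibility prefixes it synthesizes for raw GIDs and UIDs). On 10.5 and later, dsmemberutil can map in both directions; a sketch, run on a machine that still knows the original accounts:

        # UUID -> numeric id
        dsmemberutil getid -X "E51DA674-AE70-41BC-8340-9B06C243A262"
        # numeric id -> UUID, useful for building a lookup table before migrating
        dsmemberutil getuuid -u 1072
        dsmemberutil getuuid -g 1047

    The catch is that the old 10.4 box predates dsmemberutil, so the table may need to be built on the 10.6 side while the old directory data is still reachable.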

  • Update BIOS on Sun Fire X4150 server

    - by Massimo
    I have some Sun Fire X4150 servers with a very old BIOS release (1ADQW015), which seems to have some compatibility problems with VMware ESX Server 3.5 and Windows 2008 R2 virtual machines, so I want to update the BIOS on them. The problem: according to this page, if your servers run ELOM (mine do), you first need to update to the latest ELOM release, then to the interim transition release, then finally you can update to the latest one. OK, I'm willing to do that... but it looks like Sun (now Oracle) will happily let you download the latest firmware DVD (3.3.0), but it will not let you download the transition release (2.0) if you don't have a support contract.
    Well, I actually don't care at all about the servers' management controllers (we don't even use them), so upgrading from ELOM to ILOM is totally irrelevant to me; but I do need to update the servers' BIOS. So my question is: can I update the servers' BIOS to the latest version without doing the full ELOM-to-ILOM migration, or will this not work (or even make the servers unusable)? Do BIOS versions and SP ones need to be matched, or can one be updated without bothering with the other?
    Bonus question: if this whole ELOM-to-ILOM thing actually is needed in order to update the BIOS, can that 2.0 CD-ROM be obtained without having a support contract with Sun/Oracle (which we are definitely not going to sign, given that it's quite old hardware)?
    Update: I tried upgrading only the BIOS on one of the servers, and it didn't boot anymore. So it really looks like a full firmware upgrade is needed, and the management controller and BIOS versions should be kept in sync. So... where can I find that *&!£%$% 2.0 CD-ROM? Or at least the transition firmware that can be found on it?

  • Changing order of Thunderbird email address autocomplete?

    - by Brooks Moses
    I recently did a system wipe and installed Thunderbird 3.0, importing all of my email setup from a previous Thunderbird 2.0 installation. Almost everything is working fine, but I'm having a problem with the autocomplete for email addresses when writing messages.
    The relevant behavior is this: in the old 2.0 installation, the autocomplete appeared to know which email addresses I used most frequently, so when I typed "m" in the address line, it would pick as the default selection the "m" person I frequently write email to. (It's possible this is an illusion and it simply picked people in the order I added them to my address book.) Thus, I became used to typing "m"-"enter" in the address field and getting this person.
    In the current 3.0 installation, however, the autocomplete order has changed. It's not the same as it was, and it's not alphabetical. The result is that I'm spending extra time looking at the address field, and, more annoyingly, half the time the old muscle memory kicks in and I find myself with an email that's addressed to a couple of customers rather than to my boss and coworker.
    Thus, two questions: How does Thunderbird determine this autocomplete order, among a set of addresses all of which are in the same address book? And how can I change this ordering to be what I want? (I have tried Google-searching, and found a number of incomplete answers, nearly all of which were for version 1.0 or thereabouts and reference settings dialog boxes that no longer exist.)

  • Moving domain and keeping IMAP email - Linux Evolution, Mac Mail

    - by Douglas Squirrel
    This question is about keeping email during a server move, where the clients are Linux (me) and Mac (my wife), using IMAP. I receive email at [email protected] using a webmail service that my hosting company (1and1) provides. I read it via IMAP in Evolution, so I should have copies of all the emails on my local machine.
    I have just moved mydomain.com from one type of account to another, and the hosting company don't move my existing email on the server when I do this. I assume they move my account to a different mailserver, and don't choose to provide a migration path for the email to move too (yes, this is annoying). Before migrating, I backed up Evolution (File - Backup settings) and did a spot check in the evolution-backup.tar.gz file to be sure that my mail was in there. After migrating, I restored (File - Restore settings) and had hoped that I would see all my mail again. Unfortunately, Evolution just shows me new mail sent to the account, not the old mail.
    Is there a way to get the old mail back on the mailserver, or at least displaying in Evolution, as it was before the move? If not, can I read it in some convenient way, e.g. in Evolution offline or in a text file (then I can pick the mails I really want to keep and resend them to myself)?
    Also, I am about to do a similar move for my wife's domain, [email protected]. She reads her mail on a Mac using IMAP to Apple Mail. Is there anything I can do to make the move smooth for her? (I have backed up [her user]/Library/Mail already, but I'm not sure what to do once the move is done.)
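
    Whether the old mail is recoverable locally depends on what the IMAP cache in that backup contains. A way to check without restoring anything (a sketch only; Evolution 2.x keeps its IMAP cache under .evolution/mail/imap, and the exact paths inside the tarball may differ):

        # list whatever cached IMAP mail the backup holds
        tar -tzf evolution-backup.tar.gz | grep '\.evolution/mail/imap'
        # unpack only that subtree into a scratch directory for inspection
        mkdir /tmp/evo
        tar -xzf evolution-backup.tar.gz -C /tmp/evo --wildcards '*.evolution/mail/imap*'

    If full message bodies are in there, they can be uploaded back to the new server with any IMAP tool, or dragged into a local folder in Evolution.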

  • Symantec Endpoint Protection Virus Definitions

    - by Gus Denton
    I have done some Googling, but I cannot get a definitive answer, certainly not from the Symantec KB. I have a virtualised Windows 2003 R2 server, 32-bit. It has been provisioned to me with the Symantec Endpoint Protection 11.0.62xxx CLIENT (not a definitions server). The directory C:\Program Files\Common Files\Symantec Shared\VirusDefs is 750 MB. It doesn't contain .tmp directories, so it is NOT a corrupt definitions server. It does contain directories named with a date pattern, YYYYMMDD.xxx. Some of these folders are 12 months old, and I would like to recover the space.
    The Symantec forums are full of this stuff, but a lot of the postings contain links back to documents that are not specific to the Endpoint Protection client. It appears that I should be able to delete the older folders and all will be OK with a service restart; however, there is a warning about having LiveUpdate Administrator installed. Firstly, I have no idea if I have this installed; how do I check? And secondly, can I just ditch these old files and restart?
    Regards, Gus Denton, Learning and Teaching, Uni of New South Wales, Sydney, Australia
    For those trying to assist me, I thank you. I have followed some instructions found on the Symantec site and assumed that the response from Nixphoe would resolve my issue. It appears that, as I am on a provisioned VM from a central IT unit, I cannot run the Symantec commands from the Run prompt with the admin creds that get me in (smc -stop). Basically I need to claw back some disk space from the C: drive, which is being filled up with WSUS patches and Symantec files. I have managed to delete one Symantec cache through the LiveUpdate control panel and recovered 470 MB.
    I suppose my last question, for those more experienced than myself, is: can I simply remove, say, the two oldest virus definition folders without completely foobaring the Endpoint Protection client and the server?
    Regards, Gus
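
    For reference, the usual manual-cleanup sequence looks like the following. This is a sketch only (it needs local admin rights, which may rule it out on a provisioned VM, and the folder name shown is illustrative; substitute the oldest YYYYMMDD.xxx directories actually present):

        smc -stop
        rem delete the dated definition folders, oldest first, leaving the
        rem newest set and the loose files (e.g. definfo.dat) alone
        rd /s /q "C:\Program Files\Common Files\Symantec Shared\VirusDefs\20090101.017"
        smc -start

    Testing on a snapshot first would be prudent, given the warning about LiveUpdate Administrator in the Symantec documents.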

  • Why is Windows 7 rejecting my key?

    - by acidzombie24
    I'm extremely confused. I have a genuine key and CD (I can take photos) and I am trying to install Windows on my new tower. I believe this would be the 3rd PC I've installed it on (old tower, laptop, now the new tower). I did change the HDD once, but I doubt Windows would think it's a different computer because of that. After going through phone activation, it said I had installed it on too many PCs... I'm extremely confused. I'd be happy to deactivate it on my old tower if I knew how; I've already grabbed all the files off of it.
    I tried to look up the number of boxes I can install Windows Home Premium on and found this: http://en.wikipedia.org/wiki/Windows_7_editions. What stuck out was the row "Maximum physical CPUs supported: 1 1 1 2 2 2". My new tower has 4 cores (it's an Intel i7) with two threads each, so it sees 8. Does that have anything to do with this? But apparently Ultimate supports "2", and I'm sure Windows supports more than 2 CPUs, so... what gives? Actually, it's physically one CPU, so I guess the number of cores doesn't matter? Why is Windows 7 rejecting my key?

  • Apache + CodeIgniter + New Server + Unexpected Errors

    - by ngl5000
    Alright, here is the situation: I used to have my CodeIgniter site at Bluehost, where I did not have root access; I have since moved that site to Rackspace. I have not changed any of the PHP code, yet there has been some unexpected behavior.
    Unexpected behavior:
    http://mysite.com/robots.txt
    Both old and new resolve to the robots file.
    http://mysite.com/robots.txt/
    The old Bluehost setup resolves to my CodeIgniter 404 error page. The Rackspace config resolves to:
        Not Found
        The requested URL /robots.txt/ was not found on this server.
    This instance leads me to believe that there could be a problem with my mod_rewrite rules, or the lack thereof. The first case produces the error correctly through PHP, while it seems the second scenario lets the server handle the error.
    The next instance of this problem is even more troubling:
    'http://mysite.com/search/term/9 x 1-1%2F2 white/'
    The new site results in:
        Bad Request
        Your browser sent a request that this server could not understand.
    The old site results in the actual page being loaded and the search term being unencoded.
    I have to assume that this has something to do with the fact that when I went to the new server, I went from a root-level .htaccess file to httpd.conf plus the virtual-server files default and default-ssl. Here they are:
    Default file:
        <VirtualHost *:80>
            ServerAdmin webmaster@localhost
            ServerName mysite.com
            DocumentRoot /var/www
            <Directory />
                Options +FollowSymLinks
                AllowOverride None
            </Directory>
            <Directory /var/www>
                Options -Indexes +FollowSymLinks -MultiViews
                AllowOverride None
                Order allow,deny
                allow from all
                RewriteEngine On
                RewriteBase /
                # force no www. (also does the IP thing)
                RewriteCond %{HTTPS} !=on
                RewriteCond %{HTTP_HOST} !^mysite\.com [NC]
                RewriteRule ^(.*)$ http://mysite.com/$1 [R=301,L]
                RewriteCond %{REQUEST_FILENAME} !-f
                RewriteCond %{REQUEST_FILENAME} !-d
                RewriteRule ^(.+)\.(\d+)\.(js|css|png|jpg|gif)$ $1.$3 [L]
                # index.php remove any index.php parts
                RewriteCond %{THE_REQUEST} /index\.(php|html)
                RewriteRule (.*)index\.(php|html)(.*)$ /$1$3 [r=301,L]
                # codeigniter direct
                RewriteCond $0 !^(index\.php|assets|robots\.txt|sitemap\.xml|favicon\.ico)
                RewriteRule ^.*$ index.php [L]
            </Directory>
            ScriptAlias /cgi-bin/ /usr/lib/cgi-bin/
            <Directory "/usr/lib/cgi-bin">
                AllowOverride None
                Options +ExecCGI -MultiViews +SymLinksIfOwnerMatch
                Order allow,deny
                Allow from all
            </Directory>
            ErrorLog ${APACHE_LOG_DIR}/error.log
            # Possible values include: debug, info, notice, warn, error, crit,
            # alert, emerg.
            LogLevel warn
            CustomLog ${APACHE_LOG_DIR}/access.log combined
            Alias /doc/ "/usr/share/doc/"
            <Directory "/usr/share/doc/">
                Options Indexes MultiViews FollowSymLinks
                AllowOverride None
                Order deny,allow
                Deny from all
                Allow from 127.0.0.0/255.0.0.0 ::1/128
            </Directory>
        </VirtualHost>
    Default-ssl file:
        <IfModule mod_ssl.c>
        <VirtualHost _default_:443>
            ServerAdmin webmaster@localhost
            ServerName mysite.com
            DocumentRoot /var/www
            <Directory />
                Options +FollowSymLinks
                AllowOverride None
            </Directory>
            <Directory /var/www>
                Options -Indexes +FollowSymLinks -MultiViews
                AllowOverride None
                Order allow,deny
                allow from all
                RewriteEngine On
                RewriteBase /
                RewriteCond %{SERVER_PORT} !^443
                RewriteRule ^ https://mysite.com%{REQUEST_URI} [R=301,L]
                RewriteCond %{REQUEST_FILENAME} !-f
                RewriteCond %{REQUEST_FILENAME} !-d
                RewriteRule ^(.+)\.(\d+)\.(js|css|png|jpg|gif)$ $1.$3 [L]
                # index.php remove any index.php parts
                RewriteCond %{THE_REQUEST} /index\.(php|html)
                RewriteRule (.*)index\.(php|html)(.*)$ /$1$3 [r=301,L]
                # codeigniter direct
                RewriteCond $0 !^(index\.php|assets|robots\.txt|sitemap\.xml|favicon\.ico)
                RewriteRule ^.*$ index.php [L]
            </Directory>
            ScriptAlias /cgi-bin/ /usr/lib/cgi-bin/
            <Directory "/usr/lib/cgi-bin">
                AllowOverride None
                Options +ExecCGI -MultiViews +SymLinksIfOwnerMatch
                Order allow,deny
                Allow from all
            </Directory>
            ErrorLog ${APACHE_LOG_DIR}/error.log
            # Possible values include: debug, info, notice, warn, error, crit,
            # alert, emerg.
            LogLevel warn
            CustomLog ${APACHE_LOG_DIR}/ssl_access.log combined
            Alias /doc/ "/usr/share/doc/"
            <Directory "/usr/share/doc/">
                Options Indexes MultiViews FollowSymLinks
                AllowOverride None
                Order deny,allow
                Deny from all
                Allow from 127.0.0.0/255.0.0.0 ::1/128
            </Directory>
            # SSL Engine Switch:
            # Enable/Disable SSL for this virtual host.
            SSLEngine on
            # Use our self-signed certificate by default
            SSLCertificateFile /etc/apache2/ssl/certs/www.mysite.com.crt
            SSLCertificateKeyFile /etc/apache2/ssl/private/www.mysite.com.key
            # A self-signed (snakeoil) certificate can be created by installing
            # the ssl-cert package. See
            # /usr/share/doc/apache2.2-common/README.Debian.gz for more info.
            # If both key and certificate are stored in the same file, only the
            # SSLCertificateFile directive is needed.
            # SSLCertificateFile /etc/ssl/certs/ssl-cert-snakeoil.pem
            # SSLCertificateKeyFile /etc/ssl/private/ssl-cert-snakeoil.key
            # Server Certificate Chain:
            # Point SSLCertificateChainFile at a file containing the
            # concatenation of PEM encoded CA certificates which form the
            # certificate chain for the server certificate. Alternatively
            # the referenced file can be the same as SSLCertificateFile
            # when the CA certificates are directly appended to the server
            # certificate for convinience.
            #SSLCertificateChainFile /etc/apache2/ssl.crt/server-ca.crt
            # Certificate Authority (CA):
            # Set the CA certificate verification path where to find CA
            # certificates for client authentication or alternatively one
            # huge file containing all of them (file must be PEM encoded)
            # Note: Inside SSLCACertificatePath you need hash symlinks
            # to point to the certificate files. Use the provided
            # Makefile to update the hash symlinks after changes.
            #SSLCACertificatePath /etc/ssl/certs/
            #SSLCACertificateFile /etc/apache2/ssl.crt/ca-bundle.crt
            # Certificate Revocation Lists (CRL):
            # Set the CA revocation path where to find CA CRLs for client
            # authentication or alternatively one huge file containing all
            # of them (file must be PEM encoded)
            # Note: Inside SSLCARevocationPath you need hash symlinks
            # to point to the certificate files. Use the provided
            # Makefile to update the hash symlinks after changes.
            #SSLCARevocationPath /etc/apache2/ssl.crl/
            #SSLCARevocationFile /etc/apache2/ssl.crl/ca-bundle.crl
            # Client Authentication (Type):
            # Client certificate verification type and depth. Types are
            # none, optional, require and optional_no_ca. Depth is a
            # number which specifies how deeply to verify the certificate
            # issuer chain before deciding the certificate is not valid.
            #SSLVerifyClient require
            #SSLVerifyDepth 10
            # Access Control:
            # With SSLRequire you can do per-directory access control based
            # on arbitrary complex boolean expressions containing server
            # variable checks and other lookup directives. The syntax is a
            # mixture between C and Perl. See the mod_ssl documentation
            # for more details.
            #<Location />
            #SSLRequire ( %{SSL_CIPHER} !~ m/^(EXP|NULL)/ \
            # and %{SSL_CLIENT_S_DN_O} eq "Snake Oil, Ltd." \
            # and %{SSL_CLIENT_S_DN_OU} in {"Staff", "CA", "Dev"} \
            # and %{TIME_WDAY} >= 1 and %{TIME_WDAY} <= 5 \
            # and %{TIME_HOUR} >= 8 and %{TIME_HOUR} <= 20 ) \
            # or %{REMOTE_ADDR} =~ m/^192\.76\.162\.[0-9]+$/
            #</Location>
            # SSL Engine Options:
            # Set various options for the SSL engine.
            # o FakeBasicAuth:
            # Translate the client X.509 into a Basic Authorisation. This means that
            # the standard Auth/DBMAuth methods can be used for access control. The
            # user name is the `one line' version of the client's X.509 certificate.
            # Note that no password is obtained from the user. Every entry in the user
            # file needs this password: `xxj31ZMTZzkVA'.
            # o ExportCertData:
            # This exports two additional environment variables: SSL_CLIENT_CERT and
            # SSL_SERVER_CERT. These contain the PEM-encoded certificates of the
            # server (always existing) and the client (only existing when client
            # authentication is used). This can be used to import the certificates
            # into CGI scripts.
            # o StdEnvVars:
            # This exports the standard SSL/TLS related `SSL_*' environment variables.
            # Per default this exportation is switched off for performance reasons,
            # because the extraction step is an expensive operation and is usually
            # useless for serving static content. So one usually enables the
            # exportation for CGI and SSI requests only.
            # o StrictRequire:
            # This denies access when "SSLRequireSSL" or "SSLRequire" applied even
            # under a "Satisfy any" situation, i.e. when it applies access is denied
            # and no other module can change it.
            # o OptRenegotiate:
            # This enables optimized SSL connection renegotiation handling when SSL
            # directives are used in per-directory context.
            #SSLOptions +FakeBasicAuth +ExportCertData +StrictRequire
            <FilesMatch "\.(cgi|shtml|phtml|php)$">
                SSLOptions +StdEnvVars
            </FilesMatch>
            <Directory /usr/lib/cgi-bin>
                SSLOptions +StdEnvVars
            </Directory>
            # SSL Protocol Adjustments:
            # The safe and default but still SSL/TLS standard compliant shutdown
            # approach is that mod_ssl sends the close notify alert but doesn't wait for
            # the close notify alert from client. When you need a different shutdown
            # approach you can use one of the following variables:
            # o ssl-unclean-shutdown:
            # This forces an unclean shutdown when the connection is closed, i.e. no
            # SSL close notify alert is send or allowed to received. This violates
            # the SSL/TLS standard but is needed for some brain-dead browsers. Use
            # this when you receive I/O errors because of the standard approach where
            # mod_ssl sends the close notify alert.
            # o ssl-accurate-shutdown:
            # This forces an accurate shutdown when the connection is closed, i.e. a
            # SSL close notify alert is send and mod_ssl waits for the close notify
            # alert of the client. This is 100% SSL/TLS standard compliant, but in
            # practice often causes hanging connections with brain-dead browsers. Use
            # this only for browsers where you know that their SSL implementation
            # works correctly.
            # Notice: Most problems of broken clients are also related to the HTTP
            # keep-alive facility, so you usually additionally want to disable
            # keep-alive for those clients, too. Use variable "nokeepalive" for this.
            # Similarly, one has to force some clients to use HTTP/1.0 to workaround
            # their broken HTTP/1.1 implementation. Use variables "downgrade-1.0" and
            # "force-response-1.0" for this.
            BrowserMatch "MSIE [2-6]" \
                nokeepalive ssl-unclean-shutdown \
                downgrade-1.0 force-response-1.0
            # MSIE 7 and newer should be able to use keepalive
            BrowserMatch "MSIE [17-9]" ssl-unclean-shutdown
    httpd.conf file: just a lot of stuff from HTML5 Boilerplate; I will post it if need be.
    Old .htaccess file:
        <IfModule mod_rewrite.c>
            # index.php remove any index.php parts
            RewriteCond %{THE_REQUEST} /index\.(php|html)
            RewriteRule (.*)index\.(php|html)(.*)$ /$1$3 [r=301,L]
            RewriteCond $1 !^(index\.php|assets|robots\.txt|sitemap\.xml|favicon\.ico)
            RewriteRule ^(.*)/$ /$1 [r=301,L]
            # codeigniter direct
            RewriteCond $1 !^(index\.php|assets|robots\.txt|sitemap\.xml|favicon\.ico)
            RewriteRule ^(.*)$ /index.php/$1 [L]
        </IfModule>
    Any help would be hugely appreciated!!
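
    For what it's worth, two concrete differences stand out between the old .htaccess and the new <Directory> blocks, shown here as a sketch rather than a tested drop-in:

        # 1) The old CodeIgniter rule forwarded the request path as path info;
        #    the new one drops it, so CI never sees URLs like /robots.txt/ and
        #    Apache serves its own 404 instead of CI's. Restoring the capture:
        RewriteCond $1 !^(index\.php|assets|robots\.txt|sitemap\.xml|favicon\.ico)
        RewriteRule ^(.*)$ index.php/$1 [L]

        # 2) The "Bad Request" URL contains an encoded slash (%2F), which Apache
        #    rejects in the path by default; relaxing that per vhost is an
        #    assumption worth testing:
        AllowEncodedSlashes On

    The old rules also had a trailing-slash-stripping rule (RewriteRule ^(.*)/$ /$1) with no counterpart in the new config, which is a third behavioral difference worth carrying over.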

  • Why would my HDD main partition suddenly become hidden?

    - by Luis Oscar
    A few days ago I was using my computer as usual, and I turned it off. The next day it wouldn't boot up; it just stayed, after the hardware diagnostic window, on the blinking-underscore screen. So clearly it wasn't booting. I tried turning it on and off a couple of times, to no avail. Finally I used a Windows 7 disk, and it seemed as if there was no HDD; not even the installer would see it. So I thought it was dead. I bought a new drive, installed it with Windows, and put the old HDD in an external case. I plugged it in and still couldn't see it. I finally downloaded a partition program (EASEUS or something), and my HDD was there, listed WITHOUT a drive letter. I could, however, explore it, and I set it as unhidden and it came back to life. I can see it normally now.
    I really just wish someone could explain to me what happened here. Was it a virus? Does it mean the HDD is about to die? How can I prevent this, and what should I do now? Should I stop using this OLD HDD? Thanks

  • How to enable 2 concurrent (+1 console) sessions on Windows Server 2012

    - by Dai
    I have a Windows Server 2012 VM running on Windows Azure. I want to enable the ability to have 2 simultaneous administrative sessions over Remote Desktop. This is permitted under the EULA for Windows Server 2012, and it is NOT the same thing as the fully blown Terminal Services / RDS feature.
    In Windows Server 2000 and 2003, multiple concurrent sessions (up to a limit of 2, plus the root/console session) were enabled by default, such that logging in via RDP without logging out first would create a new session rather than reconnecting to the old session. Server 2008 and later use single sessions by default, as this simplifies administration (most people want to reconnect to their old sessions). In Windows Server 2008 R2, you can add the MMC snap-in for Remote Desktop Host Configuration, which allows you to re-enable concurrent sessions. However, in Server 2012, after adding the Remote Administration snap-ins from Server Manager, it seems the Remote Desktop Host Configuration snap-in has been removed.
    How can I re-enable multiple concurrent sessions for Remote Desktop for Administration in Windows Server 2012?
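
    With the configuration snap-in gone, the commonly cited workaround is to flip the underlying registry value directly; a sketch (this is the value the old snap-in toggled, but treat it as an assumption and snapshot the VM first):

        reg add "HKLM\SYSTEM\CurrentControlSet\Control\Terminal Server" /v fSingleSessionPerUser /t REG_DWORD /d 0 /f

    A value of 0 allows an account to hold multiple sessions; 1 restores the single-session default.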

  • Configuring port forwarding for SSH - no response outside LAN [migrated]

    - by WinnieNicklaus
    I recently moved, and at the same time purchased a new router (Linksys E1200). Prior to the move, I had my old router set up to forward a port for SSH to servers on my LAN, and I was using DynDNS to manage the external IP address. Everything worked great. I moved, set up the new router (unfortunately, the old one is busted, so I can't try things out with it), updated the DynDNS address, and attempted to restore my port forwarding settings. No joy: SSH connections time out, and pings go unanswered.
    But here's the weird part (i.e., the key to the whole thing?): I can ping and SSH just fine from within this LAN. I'm not talking about the local 192.168.1.* addresses; I can actually SSH from a computer on my LAN to the DynDNS external address. It's only when the client is outside the LAN that connections are dropped. This surely suggests a particular point of failure, but I don't know enough to figure out what it is. I can't figure out why it would make a difference where the connections originate, unless there's a filter for "trusted" IP addresses, which is perhaps just restricted to my own.
    No settings have been touched on the servers, and I can't find any settings suggesting this on the router admin interface. I disabled the router's SPI firewall and "Filter anonymous traffic" setting, to no avail. Has anyone heard of this behavior, and what can I do to get past it?
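
    A quick way to localize this kind of failure is to watch for the inbound packets on the server itself while connecting from outside (a sketch; the interface name and port are assumptions):

        # on the LAN server
        sudo tcpdump -ni eth0 'tcp port 22'

        # from a host outside the LAN (e.g. a phone hotspot)
        nc -vz your-name.dyndns.org 22

    If the SYNs never show up in tcpdump, the traffic is dying upstream of the server: either the router's forwarding rule or an ISP/modem in front of it. Working inside-LAN access only proves NAT hairpinning, since those packets never leave the building.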

  • Dropbox failing to update?

    - by Mick
    I have two PCs at home, one XP and the other Windows 7 64-bit. I have a Word document that I edit on either PC; to enable this, the file is stored in a Dropbox folder. This usually works fine, but sometimes I find that the file does not get updated on my Windows 7 PC. That is, I edit the file on my Windows XP machine, then go to my Windows 7 PC and see a previous, old date stamp on the file, and sure enough, if I open the file to have a look, the latest edits are not included. If I right-click on the file and select "Browse on Dropbox website", I see that the latest file is correctly there. Surely there must be some option to say "please update this file", but I can find no such thing. Has something gone wrong?
    I should point out that my wireless internet connection is a little intermittent; could this have caused some glitch? By the way, I do not leave the old file open in Word on my Windows 7 PC, as I can well imagine that would cause trouble. Also, I should mention that the icon on the document has the little green tick on it, showing that Dropbox is not in the process of doing a transfer. The Dropbox icon in my system tray also has the green tick, so Dropbox is not busy transferring some other file(s). If I hover the mouse over the Dropbox icon I get the tooltip "All files up to date".

  • Dual monitor setup with ATI Radeon HD 5700 results in unusable desktop (Windows 7)

    - by NorthPole
    I have an ATI Radeon HD 5700 card, which I've been using under fully updated Windows 7 with its latest drivers. My monitor is a 2004 NEC LCD1703M which, despite being pretty old, runs fine. A friend gave me an iiyama ProLite E1900WS monitor (2009 or 2010). Both monitors are VGA-only. I've been using a DL-DVI-to-VGA adapter to connect my old monitor; I tested the same adapter with the new monitor and it worked fine. So I bought an HDMI-to-VGA adapter with the purpose of having a dual monitor setup.
    But when both screens are connected to the card, the following problem occurs: the monitor connected to the HDMI port cycles between sleep and a black screen, while the other shows the operating system for about two seconds before going black for another two seconds. I can "use" the computer (move the mouse, click, type, etc.) while this happens, but it's not pleasant. I tried reinstalling the driver and booting with both screens connected (in which case the power-up messages and the BIOS are mirrored on both screens until I get to the login screen, where everything falls apart).
    Funny thing is, everything works if I disconnect the ATI graphics card and use the onboard Intel one. So, any suggestions as to what might be the problem and how I can fix it?


  • regsvr32 fails on Windows Server 2003 - LoadLibrary("my.dll") failed

    - by John Mo
    The DLL in question is a VB6 web class. When deploying, the old one needs to be unregistered and the new one registered. This has worked for many years, but it has now failed with the following error in a message box:
        LoadLibrary("c:...\my.dll") failed - The filename, directory name, or volume label syntax is incorrect.
    I don't think there's a problem with the DLL being (un)registered, because it's failing to unregister the existing DLL that was at one time successfully registered. I don't think there's a filename or path problem: regsvr32 is executed using a batch file where the full path to the DLL is specified. The path has not changed, the batch file has not changed, and the DLL has not changed (at least not the old one I need to unregister).
    It's a programming problem to me because I need to release an updated DLL, but I'm thinking it's maybe an admin problem best suited to the talents of the serverfault audience. I'm hoping somebody knows about a Windows Update or something that might hose regsvr32. Thank you.
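
    That particular error can come from the shell mangling the path (stray quotes, trailing whitespace, or unquoted spaces in a batch variable) rather than from the DLL itself, so echoing exactly what regsvr32 receives is a cheap first check. A sketch of a defensive batch call (the path shown is a placeholder):

        set DLL=C:\inetpub\myapp\my.dll
        echo Unregistering "%DLL%"
        regsvr32 /u /s "%DLL%"
        echo Registering "%DLL%"
        regsvr32 /s "%DLL%"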

  • IBM Thinkpad 240 - best way to boot from floppy to USB - best Linux for 300 MHz, 128 MB RAM, 800x600 screen

    - by zillion
    I still have that old "ultraportable" laptop, mostly like a pre-netbook-era laptop, and a friend and programmer needs a computer because the one he was using just broke, and he has to wait 4-6 weeks until the new one arrives... This laptop has no LAN connection and no CD-ROM, so be prepared for a real challenge! All the hardware is well supported on Windows XP (drivers included on the Windows XP CD) and on Linux out of the box (but the screen needs a special configuration). Mostly, any Linux that works well with Skype (USB or regular headset), any MSN client, and a text editor for code will do.
    What I have tested so far: Slitaz 2 doesn't boot, because the GRUB4DOS floppy doesn't see the USB drive (the drive is fully working and tested on my regular laptop). Damn Small Linux was working but needed a special screen configuration that I don't remember (in the boot options of the floppy). And now I'm thinking about Puppy Linux, which is said to work totally out of the box with it, but I will need an old Puppy version (1 or 2, I think) and the WakePup floppy... If you have some ideas to help, or things to try, I'm open!

  • Mac WLAN 802.11b+g WPA1 connection issues

    - by Peto
    Hi, I have a Telewell TW-EA510v4 ADSL modem + WLAN router configured as follows:
        Mode: 802.11b+g
        Security Mode: WPA1 Pre-shared Key
        WPA Algorithms: TKIP
    Connections are allowed from only certain MAC addresses, and the MAC address of my Mac is in that list. The WLAN works just fine with an iPhone and an old Acer laptop. It also worked for about two months with my MacBook Pro (a model about a year and a half old). Occasionally I've had minor problems with it, which have required either rebooting the ADSL modem or rebooting my Mac. However, for the last week or so I haven't been able to connect to it at all. This is what I get in the console when I try to connect:
        5.5.2010 20.54.53 airportd[73731] Apple80211Associate() failed -3924 (Invalid PMK)
        5.5.2010 20.54.53 Apple80211 framework[584] airportd MIG failed (Associate Event) = -3924 (Invalid PMK) (port = 104599)
        5.5.2010 20.54.53 SystemUIServer[584] Error joining WLAN-M: Invalid password (-3924 Invalid master key)
    The pre-shared key I use is not incorrect; I'm 100% sure of that. The router's error log only says this when I try to connect:
        May 05 21:09:54 home.gateway:i802_1x:none: <my mac address> associated
        May 05 21:10:00 home.gateway:i802_1x:none: <my mac address> disassociated
        May 05 21:10:01 home.gateway:i802_1x:none: <my mac address> disassociated
    Any ideas or tips to troubleshoot this further?

  • Windows 7 64-bit installation via USB asks for drivers

    - by Shikiryu
    I just bought a new config (ASUS P8Z68-V LX, i5-2500K, plus RAM and a new graphics card). When I got home and installed it all in my old computer's case, I saw that my DVD player was on IDE (yup...). So I needed to install Windows 7 64-bit from my USB key. Fine: I made my USB key bootable, copied the official DVD onto it (the same version that was on my old computer), set the BIOS to boot from it first, and started the computer.
    It worked fine until it asked me for CD/DVD drivers (which is funny, since I'm installing via USB precisely because I can't plug in my DVD player :D). I have 3 SATA HDDs plugged in, and that's it. A quick Google search suggested it could be SATA or RAID drivers, so, fine: I took another USB key and put all my motherboard drivers on it (from the CD sold with the motherboard), but none of those drivers seemed to work. I tried downloading new drivers from the ASUS website: same effect.
    Any ideas? But no "buy a new DVD player", please; I'm now broke for the month :)

  • GlassFish v3: Security related updates + Repository/Publisher?

    - by chris_l
    I've used GlassFish v3.0 as my main development application server for a few weeks now. Now that I want to install it on my VPS, I'd like to get the latest security updates, because GlassFish v3 release 3.0 (Open Source Edition or not) is already a few months old, and v3.1 is only available as "early access" nightlies (see https://glassfish.dev.java.net/public/downloadsindex.html).
    GlassFish offers an update mechanism (via pkg or updateTool), but when I simply try to get the latest updates (pkg image-update), it finds nothing. However, when I change the preferred publisher to dev.glassfish.org, I get a list with lots of updates. The interesting thing is that I haven't been able to find any description of the contents of the various publishers/repositories (release, stable, contrib and dev) anywhere on the web, most importantly one answering the question: am I supposed to use the dev repository for security updates, or does it contain unstable updates? (The name suggests unstable updates, but version numbers like "3.0.1,0-11:20100331T082227Z" leave me guessing. The build is more than a week old, so it's obviously not "nightly" or "weekly", but what is it?)
    Where do I get security updates from, then? Or are there simply no security updates yet? Asking on the GlassFish forum resulted in 56 views but 0 answers.

  • Running Windows Update inside Windows XP Mode

    - by Noam Gal
    I am working on a Windows 7 Ultimate machine and was using XP Mode for some development tasks (compatibility checks). Everything worked like a charm on the XP side, including updates. Two days ago I had to switch computers (mainly a new motherboard/CPU), and I just stuck my old HD inside the newer case. Windows 7 worked like a charm: it installed all the new drivers and identified everything automatically, no sweat. The trouble started when I tried running my old XP Mode; it wouldn't launch, complaining about the CPU change. I figured it was no big deal, so I deleted the VM and re-ran XP Mode. It told me it couldn't find the VM and offered to create a new one, just what I wanted.
    I finished setting up the new XP Mode VM, and it seems to work just fine. I got it to use the host network adapter, so I can surf from "inside". But I can't get Windows Update to run: whenever I click the "Custom" button on the WU site, after a short while I get the [Error number: 0x80072EFD] page. I've tried several solutions for it from around the web (clearing some cache and restarting wuauserv, even a Microsoft Fix it run), but still nothing seems to work. Anyone here have a new tip for me? Thanks.
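
    0x80072EFD is a connectivity/proxy error, so one more avenue worth trying is resetting the WinHTTP proxy that Windows Update on XP uses; a sketch, run in a cmd prompt inside the XP Mode VM:

        net stop wuauserv
        rem clear any WinHTTP proxy setting inherited from the old network
        proxycfg -d
        net start wuauserv

    `proxycfg -d` sets direct access; `proxycfg -u` would instead import the current Internet Explorer proxy settings.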

  • Windows XP SP3 - non-functioning applications

    - by Josh
    OK, here is a crazy problem. I have an HP dv6000 laptop that can no longer hold a charge, so I hooked it up to my TV, bought a wireless mouse and keyboard, and configured XP to run with the lid closed. It gets medium-to-heavy usage, mainly streaming from sites like Netflix, Hulu, ABC, etc., and playing movies I ripped off DVD. It ran fine for a while, but recently it has been having some weird problems:
    Problem one: I used to use Firefox, but now when I run it I can type, yet as soon as I click something it just shuts down completely; I can't even close it unless I use Task Manager. So I went and got Google Chrome, which is better but still hit-or-miss: it never completely shuts down, but I can't click anything or type anything, or sometimes I just can't type anything, or vice versa. Also, when I open a new tab and try to move back to my old one, it automatically closes the old tab when I click on it.
    Problem two: when using the internet, I can't use any other application or anything in Windows (e.g. Windows Explorer) until I force-quit all browsers with Task Manager. The reason I can't run anything is that I can't click on anything.
    Problem three: when I try to play a movie (with VLC), once it starts playing I can't click on anything, but I can use hotkeys, and once it stops everything is fine again.
    Well, I hope somebody knows what's going on, because I have no clue. If you need clarification or more info on something, I'd be happy to provide it...
