Search Results

Search found 16902 results on 677 pages for 'strange errors'.


  • Windows7 - Mouse not working, all windows + tray act like inactive

    - by Teo.sk
    I have had this problem for two days now. When it first happened, logging out and back in helped, but now it occurs right after booting. I cannot activate any window by clicking into it, and also cannot minimize or close windows with the mouse. In a few applications I cannot interact with the mouse at all (e.g. clicking on links in the browser). I can scroll an active window with the trackball, and sometimes everything works normally for a little while. Everything works fine when I use the keyboard instead; I can resize, move, and close windows with keyboard shortcuts, etc. I ran a complete Avast! scan, but it found nothing important. The Event Viewer also did not show any relevant errors. Here's a log from HijackThis if it helps anything.

    Read the article

  • Installing Java 6 on Ubuntu 10.04 fails on missing Java 6 JRE package

    - by David S
    I'm trying to install Java 6 on Ubuntu 10.04 and it's been harder than it should be. In another question about installing Java on Ubuntu/Linux, it said that I needed to do the following:

        sudo add-apt-repository "deb http://archive.canonical.com/ lucid partner"

    However, that failed and I kept getting:

        sudo: add-apt-repository: command not found

    The solution to this was to run:

        sudo apt-get install python-software-properties

    That seemed to work, and the add-apt-repository command above now completes with no errors. I have run the following to confirm the repository got added:

        sudo vi /etc/apt/sources.list

    But now when I run:

        sudo apt-get install sun-java6-jre

    I get:

        Reading package lists... Done
        Building dependency tree
        Reading state information... Done
        Package sun-java6-jre is not available, but is referred to by another package.
        This may mean that the package is missing, has been obsoleted, or
        is only available from another source
        E: Package sun-java6-jre has no installation candidate

    Where do I go from here?
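
    One step worth checking, since it doesn't appear in the sequence above: after a repository is added, apt's package index must be refreshed before packages from it become installable. A minimal sequence, assuming the partner line really was added to sources.list:

        sudo apt-get update                  # refresh the index so apt sees the partner repository
        sudo apt-get install sun-java6-jre   # the package should now have an installation candidate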

    Read the article

  • Nginx + PHP-FPM on Centos 6.5 gives me 502 Bad Gateway (fpm error: unable to read what child say: Bad file descriptor)

    - by Latheesan Kanes
    I am setting up a standard LEMP stack. My current setup is giving me the following error: 502 Bad Gateway. Can someone take a look at the configurations I've created/updated so far and see where the error might be? I've already checked my logs; there's nothing in there (http://i.imgur.com/iRq3ksb.png), but I did see the fpm error from the question title ("unable to read what child say: Bad file descriptor") in the /var/log/php-fpm/error.log file. Sidenote: both nginx and php-fpm have been configured to run under a local account called www-data, and the folders referenced below exist on the server.

    nginx.conf (global nginx configuration):

        user www-data;
        worker_processes 6;
        worker_rlimit_nofile 100000;
        error_log /var/log/nginx/error.log crit;
        pid /var/run/nginx.pid;

        events {
            worker_connections 2048;
            use epoll;
            multi_accept on;
        }

        http {
            include /etc/nginx/mime.types;
            default_type application/octet-stream;

            # cache information about FDs; frequently accessed files can boost performance
            open_file_cache max=200000 inactive=20s;
            open_file_cache_valid 30s;
            open_file_cache_min_uses 2;
            open_file_cache_errors on;

            # to boost IO on HDD we can disable access logs
            access_log off;

            # copies data between one FD and the other from within the kernel,
            # faster than read() + write()
            sendfile on;

            # send headers in one piece; it's better than sending them one by one
            tcp_nopush on;

            # don't buffer data sent; good for small data bursts in real time
            tcp_nodelay on;

            # server will close connection after this time
            keepalive_timeout 60;

            # number of requests client can make over keep-alive -- for testing
            keepalive_requests 100000;

            # allow the server to close connection on non-responding client; this will free up memory
            reset_timedout_connection on;

            # request timed out -- default 60
            client_body_timeout 60;

            # if client stops responding, free up memory -- default 60
            send_timeout 60;

            # reduce the data that needs to be sent over the network
            gzip on;
            gzip_min_length 10240;
            gzip_proxied expired no-cache no-store private auth;
            gzip_types text/plain text/css text/xml text/javascript application/x-javascript application/xml;
            gzip_disable "MSIE [1-6]\.";

            # Load vHosts
            include /etc/nginx/conf.d/*.conf;
        }

    conf.d/www.domain.com.conf (my vhost entry):

        ## Nginx php-fpm Upstream
        upstream wwwdomaincom {
            server unix:/var/run/php-fcgi-www-data.sock;
        }

        ## Global Config
        client_max_body_size 10M;
        server_names_hash_bucket_size 64;

        ## Web Server Config
        server {
            ## Server Info
            listen 80;
            server_name domain.com *.domain.com;
            root /home/www-data/public_html;
            index index.html index.php;

            ## Error log
            error_log /home/www-data/logs/nginx-errors.log;

            ## DocumentRoot setup
            location / {
                try_files $uri $uri/ @handler;
                expires 30d;
            }

            ## These locations would be hidden by .htaccess normally
            #location /app/ { deny all; }

            ## Disable .htaccess and other hidden files
            location /. { return 404; }

            ## Magento uses a common front handler
            location @handler {
                rewrite / /index.php;
            }

            ## Forward paths like /js/index.php/x.js to relevant handler
            location ~ .php/ {
                rewrite ^(.*.php)/ $1 last;
            }

            ## Execute PHP scripts
            location ~ \.php$ {
                try_files $uri =404;
                expires off;
                fastcgi_read_timeout 900;
                fastcgi_pass wwwdomaincom;
                fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
                include fastcgi_params;
            }

            ## GZip Compression
            gzip on;
            gzip_comp_level 8;
            gzip_min_length 1000;
            gzip_proxied any;
            gzip_types text/plain application/xml text/css text/js application/x-javascript;
        }

    I've got a file at /home/www-data/public_html/index.php with the code <?php phpinfo(); ?> (file uploaded as user www-data).
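
    For orientation, since a frequent cause of a 502 over a unix socket is a mismatch between the pool's socket path or permissions and the nginx upstream: the question references a pool file at /etc/php-fpm.d/www-data.conf but its contents are not reproduced above, so the following is only a sketch of a typical pool matching the upstream (the pool name, user and socket path are taken from the question; the pm values are illustrative):

        ; /etc/php-fpm.d/www-data.conf - hypothetical sketch, not the asker's file
        [www-data]
        user = www-data
        group = www-data
        listen = /var/run/php-fcgi-www-data.sock
        ; the socket must be readable/writable by the nginx worker user
        listen.owner = www-data
        listen.group = www-data
        listen.mode = 0660
        pm = dynamic
        pm.max_children = 10
        pm.start_servers = 4
        pm.min_spare_servers = 2
        pm.max_spare_servers = 6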

    Read the article

  • Srvany Starts but Application doesn't

    - by mharran
    I used Instsrv and Srvany to create a service on Windows Server 2008; the Srvany service starts okay, but the application does not start. The application is TeamSpeak 3, btw. I don't think it's an issue with my W2008 setup, as I have a previous version of the application set up the exact same way and running perfectly. Also, I have no problem starting the application manually, even when I copy and paste into the 'Run' box the exact path Srvany uses for the application. I looked at Events, but there's nothing there except a notification that the service has entered the running state; I didn't really expect any errors, since the service has started even though the application hasn't. Any suggestions on what could be the problem?
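
    For reference, the usual Srvany wiring: the service itself just runs srvany.exe, and the target program is named in an Application value under the service's Parameters registry key. A sketch with hypothetical paths (the srvany and TeamSpeak paths here are illustrative, not taken from the question):

        :: Create the service, then tell srvany which program to launch.
        instsrv TeamSpeak3 C:\Tools\srvany.exe
        reg add "HKLM\SYSTEM\CurrentControlSet\Services\TeamSpeak3\Parameters" /v Application /t REG_SZ /d "C:\TeamSpeak3\ts3server_win64.exe"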

    Read the article

  • Out Of Memory Error - Magento

    - by robobobobo
    Ok, normally I understand when my server is giving me out-of-memory errors, but this one has me stumped! I'm running a Magento-based site with one or two plugins; the rest is pretty basic. The site runs and loads fine with no issues. However, in the backend, Configuration - Payment Methods gives me the following out-of-memory error:

        Fatal error: Out of memory (allocated 39059456) (tried to allocate 85 bytes) in ########/Varien/Simplexml/Element.php on line 84

    Now this is where I'm confused... it had already allocated far more than it then tried to allocate? Am I correct there? So how is it running out of memory? My server has 6GB RAM, an SSD and 2 CPUs, running WHM with a few other low-traffic sites on it. I set my PHP memory limit to 100MB, 1000MB and finally unlimited, but all to no avail! I'm completely lost here, would really appreciate some expertise on this. Cheers

    Read the article

  • How much does the geographical location of DNS servers matter?

    - by GreatFire
    We have started to run our own DNS servers located in Asia since that's where our main audience is. However, it seems that some users in the US are having difficulties accessing our website sometimes. I've noticed myself that DNS lookups of our domain from the US are relatively slow (500 msec+). Maybe the problems some users are having are due to other DNS configuration errors, but in general, how much of an issue is the geographical location of DNS servers? Should we have an additional server in the US?
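
    For anyone wanting to reproduce the measurement, resolution latency from a given vantage point can be checked directly against an authoritative server (the nameserver and domain below are placeholders):

        dig @ns1.example.com example.com A +noall +stats   # the "Query time:" line reports the round trip in ms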

    Read the article

  • Macbook Pro 2.2ghz 2011 (OSX 10.6.7) problem with NTFS 3G

    - by James
    I installed NTFS-3G, but now I get the following error message when I try to plug in my external drive. I also get it on startup about my Windows partition. Uninstalling/reinstalling does not help.

        NTFS-3G could not mount /dev/disk1s1 at /Volumes/FreeAgent GoFlex Drive because the following error occurred:
        /Library/Filesystems/fuse.fs/Support/fusefs.kext failed to load - (libkern/kext) link error;
        check the system/kernel logs for errors or try kextutil(8).
        The MacFUSE file system is not available (71)

    Any help would be great. I'd like to avoid reinstalling OS X if possible!
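
    Since the message itself points at kextutil(8), one diagnostic step is to test-load the kext and capture the specific link failure (the path is the one from the error message):

        sudo kextutil -t /Library/Filesystems/fuse.fs/Support/fusefs.kext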

    Read the article

  • Why am I getting 'undefined method' exceptions when executing 'run_list add', 'run_list remove' and 'rackspace server delete'?

    - by Peter Groves
    [Originally posted this to the Opscode forum, got no response] I'm testing out a free hosted chef-server account, and multiple subcommands are failing with 'Unexpected Errors'. Perhaps my version and the server version are incompatible?

        OS: Ubuntu 12.04 LTS
        Local Chef: 10.12.0 (installed through gem)
        Local Ruby: 1.8.7

    Also, the workstation machine has been manually configured, but the clients I've been experimenting with are launched with the Rackspace plugin (using 'knife rackspace server create...'). The problem commands seem to fail when talking to the hosted chef-server, however, before they ever try to modify the client, so I don't believe that's where the problem lies. The virtual servers launched by 'knife rackspace server create' come up properly, but deleting them with knife fails. If I include a recipe in the run_list when I create the server, the recipe is properly added to the run_list. If I try to add one later, or remove the one the server was initialized with, those commands fail. Here is the output of a few relevant commands (with stacktraces): https://gist.github.com/7100ada3fd6690113697
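
    For anyone reproducing this, the failing subcommands have this shape (node name, recipe, and instance id are placeholders):

        knife node run_list add web1 'recipe[apache2]'      # fails with an undefined method exception
        knife node run_list remove web1 'recipe[apache2]'   # likewise
        knife rackspace server delete <instance-id>         # likewise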

    Read the article

  • Windows Login Failure

    - by Chris Bateson
    I'm getting an error in the Event Viewer which is also generating a lot of Logon Failure messages on our syslog server, and I'm pretty much stuck on how to resolve it.

        EventID: 536
        Logon Type: 3
        Reason: The NetLogon component is not active

    This is for a Windows Server 2003 system. I have checked here. We're using Shavlik Protect 9 to scan and deploy patches; Shavlik stores the credentials for the systems and uses those stored credentials to deploy patches. This system is able to scan and deploy to other systems on the network using those credentials, and no errors are generated. But when installing to the local system that Shavlik physically runs on, this error is generated. What's interesting is that it doesn't appear during a scan, and the patches install fine. We've contacted Shavlik, only to get the response that they are unable to help since it's a Microsoft error. Has anyone seen this?
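
    Since the reason text names the Netlogon service directly, a quick first check is whether that service is actually running on the box Shavlik lives on (standard Windows service commands):

        :: Check whether the Netlogon service is running, and start it if not:
        sc query netlogon
        net start netlogon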

    Read the article

  • Ubuntu can't install an older version of a package

    - by Trevor Newhook
    When I try to do an apt-get install, I keep getting an error:

        Depends: libgtk-3-common (= 3.4.1-0ubuntu1) but 3.4.2-0ubuntu0.4 is to be installed

    When I run sudo apt-get -f install, I get several warnings of the form:

        dpkg: warning: files list file for package 'XXX' missing, assuming package has no files currently installed.

    then:

        Preparing to replace libgtk-3-bin 3.4.1-0ubuntu1 (using .../libgtk-3-bin_3.4.2-0ubuntu0.4_i386.deb) ...
        Adding 'diversion of /usr/sbin/update-icon-caches to /usr/sbin/update-icon-caches.gtk2 by libgtk-3-bin'
        dpkg-divert: error: rename involves overwriting `/usr/sbin/update-icon-caches.gtk2' with different file `/usr/sbin/update-icon-caches', not allowed
        dpkg: error processing /var/cache/apt/archives/libgtk-3-bin_3.4.2-0ubuntu0.4_i386.deb (--unpack):
         subprocess new pre-installation script returned error exit status 2
        Errors were encountered while processing:
         /var/cache/apt/archives/libgtk-3-bin_3.4.2-0ubuntu0.4_i386.deb
        E: Sub-process /usr/bin/dpkg returned an error code (1)

    I'm not sure why it's complaining about a newer version of a package, but any help would be appreciated.
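
    The dpkg-divert complaint suggests a leftover diversion from an earlier install attempt; before removing anything, the existing diversions can be inspected read-only:

        dpkg-divert --list | grep update-icon-caches   # show any diversion already registered for this path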

    Read the article

  • Terminal mail delivery delay in Mac OS X

    - by cmaughan
    I'm using mail from the Mac OS X terminal to send the results of a database query to me via email. Most of the time it works, but sometimes there is a long delay before the mail arrives (often when another similar script is run). It looks like there is some kind of send queue, but I can't find any documentation mentioning this. Is there something I need to do to flush mail from the terminal? UPDATE: Sometimes delivery doesn't even seem to happen, though I get no errors at the console. Very weird.
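
    There is indeed a local queue: on Mac OS X the mail command hands messages to the local Postfix instance, which exposes the traditional sendmail-compatible queue tools. A minimal way to inspect and flush it:

        mailq                # list messages waiting in the local queue, with deferral reasons
        sudo sendmail -q     # ask the MTA to attempt delivery of everything queued now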

    Read the article

  • How do I analyze an Apache Bench result?

    - by Alan Hoffmeister
    I need some help with analyzing a log from Apache Bench:

        Benchmarking texteli.com (be patient)
        Completed 100 requests
        Completed 200 requests
        Completed 300 requests
        Completed 400 requests
        Completed 500 requests
        Completed 600 requests
        Completed 700 requests
        Completed 800 requests
        Completed 900 requests
        Completed 1000 requests
        Finished 1000 requests

        Server Software:
        Server Hostname:        texteli.com
        Server Port:            80

        Document Path:          /4f84b59c557eb79321000dfa
        Document Length:        13400 bytes

        Concurrency Level:      200
        Time taken for tests:   37.030 seconds
        Complete requests:      1000
        Failed requests:        0
        Write errors:           0
        Total transferred:      13524000 bytes
        HTML transferred:       13400000 bytes
        Requests per second:    27.01 [#/sec] (mean)
        Time per request:       7406.024 [ms] (mean)
        Time per request:       37.030 [ms] (mean, across all concurrent requests)
        Transfer rate:          356.66 [Kbytes/sec] received

        Connection Times (ms)
                      min  mean[+/-sd] median   max
        Connect:       27    37   19.5     34    319
        Processing:    80  6273 1673.7   6907   8987
        Waiting:       47  3436 2085.2   3345   8856
        Total:        115  6310 1675.8   6940   9022

        Percentage of the requests served within a certain time (ms)
          50%   6940
          66%   6968
          75%   6988
          80%   7007
          90%   7025
          95%   7078
          98%   8410
          99%   8876
         100%   9022 (longest request)

    What can these results tell me? Isn't 27 rps too slow?
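
    One way to sanity-check the report using only its own numbers: the two "Time per request" lines differ by exactly the concurrency factor, so the throughput and latency figures are internally consistent:

        Per-client latency  = Concurrency / Requests per second = 200 / 27.01 ~= 7.405 s ~= 7406 ms   (matches)
        Server-wide pace    = Time taken / Complete requests    = 37.030 s / 1000 = 37.03 ms          (matches)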

    Read the article

  • Error When Trying to Exchange Encrypted Emails with Sender Outside Domain

    - by LucidLuniz
    I have an end user who is trying to exchange encrypted messages with a person outside of our company domain. When receiving emails from our user, the outside person gets a message that says:

        Signed By: (There were errors displaying the signers of this message, click on the signature icon for more details.)

    However, when you click on the signature icon, it says:

        The digital signature on this message is Valid and Trusted.

    And when you look at the "Message Security Properties", it shows two layers, each with a green checkmark beside them. The layers are presented as below:

        Subject:
        Digital Signature Layer
        Description: OK: Signed message

    The end result of all this is that when the user on my side tries to send this person an encrypted message, it says:

        Microsoft Outlook had problems encrypting this message because the following recipients had missing or invalid certificates, or conflicting or unsupported encryption capabilities: Continue will encrypt and send the message but the listed recipients may not be able to read it.

    However, the only options actually offered are "Send Unencrypted" and "Cancel" ("Continue" is grayed out). If anybody can assist I would greatly appreciate it!

    Read the article

  • Nginx & Apache Cannot get try_files to work with permalinks

    - by tcherokee
    I have been working on this for the past two weeks now, and for some reason I cannot seem to get nginx's try_files to work with my WordPress permalinks. I am hoping someone will be able to tell me where I am going wrong, and also hopefully tell me if I made any major errors with my configurations as well (I am an nginx newbie... but learning :) ). Here are my configuration files:

    nginx.conf:

        user www-data;
        worker_processes 4;
        pid /var/run/nginx.pid;

        events {
            worker_connections 768;
            # multi_accept on;
        }

        http {
            ##
            # Basic Settings
            ##
            sendfile on;
            tcp_nopush on;
            tcp_nodelay on;
            keepalive_timeout 65;
            types_hash_max_size 2048;
            # server_tokens off;
            # server_names_hash_bucket_size 64;
            # server_name_in_redirect off;

            include /etc/nginx/mime.types;
            default_type application/octet-stream;

            ##
            # Logging Settings
            ##
            # Defines the cache log format, cache log location
            # and the main access log location.
            log_format cache '***$time_local '
                             '$upstream_cache_status '
                             'Cache-Control: $upstream_http_cache_control '
                             'Expires: $upstream_http_expires '
                             '$host '
                             '"$request" ($status) '
                             '"$http_user_agent" ';

            access_log /var/log/nginx/access.log;
            error_log /var/log/nginx/error.log;

            include /etc/nginx/conf.d/*.conf;
            include /etc/nginx/sites-enabled/*;
        }

    mydomain.com.conf:

        server {
            listen 123.456.78.901:80; # IP goes here.
            server_name www.mydomain.com mydomain.com;
            #root /var/www/mydomain.com/prod;
            index index.php;

            ## mydomain.com -> www.mydomain.com (301 - Permanent)
            if ($host !~* ^(www|dev)) {
                rewrite ^/(.*)$ $scheme://www.$host/$1 permanent;
            }

            # Add trailing slash to */wp-admin requests.
            rewrite /wp-admin$ $scheme://$host$uri/ permanent;

            # All media (including uploaded) is under wp-content/ so
            # instead of caching the response from apache, we're just
            # going to use nginx to serve directly from there.
            location ~* ^/(wp-content|wp-includes)/(.*)\.(jpg|png|gif|jpeg|css|js|m$
                root /var/www/mydomain.com/prod;
            }

            # Don't cache these pages.
            location ~* ^/(wp-admin|wp-login.php) {
                proxy_pass http://backend;
            }

            location / {
                if ($http_cookie ~* "wordpress_logged_in_[^=]*=([^%]+)%7C") {
                    set $do_not_cache 1;
                }
                proxy_cache_key "$scheme://$host$request_uri $do_not_cache";
                proxy_cache main;
                proxy_pass http://backend;
                proxy_cache_valid 30m; # 200, 301 and 302 will be cached.

                # Fallback to stale cache on certain errors.
                # 503 is deliberately missing, if we're down for maintenance
                # we want the page to display.
                #try_files $uri $uri/ /index.php?q=$uri$args;
                #try_files $uri =404;
                proxy_cache_use_stale error timeout invalid_header http_500 http_502 http_504 http_404;
            }

            # Cache purge URL - works in tandem with WP plugin.
            # location ~ /purge(/.*) {
            #     proxy_cache_purge main "$scheme://$host$1";
            # }

            # No access to .htaccess files.
            location ~ /\.ht {
                deny all;
            }
        } # End server

    gzip.conf:

        # Gzip Configuration.
        gzip on;
        gzip_disable msie6;
        gzip_static on;
        gzip_comp_level 4;
        gzip_proxied any;
        gzip_types text/plain text/css application/x-javascript text/xml application/xml application/xml+rss text/javascript;

    proxy.conf:

        # Set proxy headers for the passthrough
        proxy_redirect off;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_max_temp_file_size 0;
        client_max_body_size 10m;
        client_body_buffer_size 128k;
        proxy_connect_timeout 90;
        proxy_send_timeout 90;
        proxy_read_timeout 90;
        proxy_buffer_size 4k;
        proxy_buffers 4 32k;
        proxy_busy_buffers_size 64k;
        proxy_temp_file_write_size 64k;
        add_header X-Cache-Status $upstream_cache_status;

    backend.conf:

        upstream backend {
            # Defines backends.
            # Extracting here makes it easier to load balance
            # in the future. Needs to be specific IP as Plesk
            # doesn't have Apache listening on localhost.
            ip_hash;
            server 127.0.0.1:8001; # IP goes here.
        }

    cache.conf:

        # Proxy cache and temp configuration.
        proxy_cache_path /var/www/nginx_cache levels=1:2 keys_zone=main:10m max_size=1g inactive=30m;
        proxy_temp_path /var/www/nginx_temp;
        proxy_cache_key "$scheme://$host$request_uri";
        proxy_redirect off;

        # Cache different return codes for different lengths of time
        # We cache normal pages for 10 minutes
        proxy_cache_valid 200 302 10m;
        proxy_cache_valid 404 1m;

    The two commented-out try_files directives in the location / block of the vhost config are the ones I tried. The error I found in the error log is below:

        ...rewrite or internal redirection cycle while internally redirecting to "/index.php"

    Thanks in advance
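
    For comparison, the canonical WordPress permalink fallback looks like the sketch below (not the asker's config). Note that it needs an active root at server level so try_files can test $uri on disk, and the final fallback target must land somewhere that can actually serve it; if the fallback maps back into the same location, nginx can report exactly the redirection-cycle error seen in the log:

        server {
            root /var/www/mydomain.com/prod;  # must be active (not commented out) for try_files to test $uri

            location / {
                # Serve the file if it exists, else the directory, else hand the
                # request to WordPress's front controller with the query string intact.
                try_files $uri $uri/ /index.php?$args;
            }
        }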

    Read the article

  • Why does m4 error "linux-gnu.m4 - No such file or directory" appear the first time after updating sendmail.mc?

    - by Mike B
    SendMail 8.14.x | CentOS 5.x

    I've noticed that if I manually update /etc/mail/sendmail.mc (for example, to enable TLS support) and then bounce sendmail, I get the following error:

        Shutting down sm-client:                                   [  OK  ]
        Shutting down sendmail:                                    [  OK  ]
        Starting sendmail: sendmail.mc:18: m4: cannot open `/usr/share/sendmail-cf/ostype/linux-gnu.mf': No such file or directory
                                                                   [  OK  ]
        Starting sm-client:                                        [  OK  ]

    This only happens one time after I update the sendmail.mc file. If I bounce sendmail again (without making any other change), I don't see the error any more. Any idea why this happens? It doesn't seem to cause any problems - I'm just curious.
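
    The m4 step only runs when sendmail.cf has to be regenerated, which the CentOS init script does once when it notices sendmail.mc is newer - likely why the message appears exactly once per edit. A way to reproduce that step by hand (standard sendmail-cf layout; requires the sendmail-cf package):

        make -C /etc/mail                                  # CentOS's makefile regenerates sendmail.cf from sendmail.mc
        m4 /etc/mail/sendmail.mc > /etc/mail/sendmail.cf   # or invoke m4 directly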

    Read the article

  • uTorrent error: Access is denied

    - by Ben Franchuk
    When I booted my computer today and opened uTorrent, it went on, as usual, to check the torrents I had been running before. On one it returned an "Access is denied" error after it had been checked. I have absolutely no idea why this is happening, as I was downloading the same torrent just the night before, and the other torrents going to the same downloads folder do not have any such errors. It may be worth noting that on said defective torrent there appear to be no seeds and no connected peers (of the 33 available), and three of the trackers appear to be inactive as well (those being Peer Exchange, Local Peer Discovery and DHT). I have rebooted a few times in an attempt to fix this problem, but with no success. I also tried force-rechecking and force-starting a few times, but neither of those worked either. EDIT: I booted today and it was working, for whatever reason. Thanks for all of the help anyways, guys!

    Read the article

  • Is there a lightweight MTA for Ubuntu 9.10 Desktop?

    - by Joe Casadonte
    I'm writing a Perl script to run as a cron job, and I want to email results & errors to a local account on the laptop. I'd like something that can talk SMTP (do any MTAs not adhere to SMTP?). I use Thunderbird 3, so I'll also need a POP/IMAP server (unless T-Bird can read straight from an mbox file; I'll have to check into that). No need for spam controls as I'll lock it down real tight, only accepting mail originating from the laptop itself. Thanks!

    Read the article

  • Expanding RAID 1 array size (Adaptec 1420SA controller)

    - by cincura.net
    I have an Adaptec 1420SA controller in a server, with a RAID 1 array created on it. The old disks slowly started to report S.M.A.R.T. errors, so I replaced the first one, rebuilt the array, then replaced the other one and rebuilt the array again. But the new drives are bigger, so I'd like to expand the array to use the full disk capacity. Is that somehow possible? In the Adaptec configuration utility, after POST, I didn't find anything that could do it. I still have the old drives, if that helps. The array is used for system (Windows 2008 R2 SP1) booting. Thanks.

    Read the article

  • SugarCRM CE Won't Install on Ubuntu 10.10

    - by Trenton Scott
    I have a fresh copy of Ubuntu 10.10 server with a working LAMP installation. I downloaded SugarCRM and browsed to its directory to open the installer (via Firefox). The installer appears fine, I accept the license agreement, and it proceeds to check file permissions. It advises that several directories need looser permissions (chmod 766), and I adjust them accordingly. After making the changes, I click "recheck" and the page just reloads as blank (white). There are no errors visible, nothing in the server logs (Apache/PHP) and installation cannot continue. I'm able to get back to the installation tool by readjusting permissions back to my default (0755 for directories, 0644 for files). All files/folders are owned by root and the www-data group. Any idea about what's wrong?
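
    A blank white page from a PHP application usually means a fatal error with error display switched off. A way to surface the real error before digging further (assuming a stock php.ini; restart Apache after editing, and revert once done):

        ; in php.ini (or the equivalent ini_set() calls in the installer's entry script)
        display_errors = On
        error_reporting = E_ALL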

    Read the article

  • Help, I need to debug my BrowserHelperObject (BHO) (in C++) after an Internet Explorer 8 crash in Release mode

    - by BHOdevelopper
    Hi, here is the situation: I'm developing a Browser Helper Object (BHO) in C++ with Visual Studio 2008, and I learned that memory isn't managed the same way in Debug mode as in Release mode. When I run my BHO in Debug mode, Internet Explorer 8 works just fine and I get no errors at all; the browser stays alive forever. But as soon as I compile it in Release mode, I get no errors, no message, nothing, yet after 5 minutes I can see through Task Manager that the Internet Explorer instances are just eating memory, and then the browser stops responding every time. Please, I really need some hint on how to get feedback on what the error could be. I heard that this often happens because of memory mismanagement. I need a piece of software that just grabs a memory dump or something when iexplore crashes, to help me find the problem. Any help is appreciated; I'll be looking for responses every single day, thank you.
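
    One tool matching that request (grab a dump when iexplore crashes) is Sysinternals ProcDump; a sketch of a typical invocation, with a placeholder dump directory:

        :: -ma  full memory dump   -e  dump on an unhandled exception   -w  wait for the process to start
        procdump -ma -e -w iexplore.exe C:\dumps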

    Read the article

  • VirtualBox VM settings between different CPUs (Processor/Acceleration)

    - by Level1Coder
    I have 2 PCs, one with an Intel Q9550 (has VT-x) and another with an Intel E2160 (no VT-x). I will be using the Q9550 first to create a new VM, install XP, and install the stuff I'll need. Once this VM is golden, I plan on migrating it over to the E2160 machine. The question is: while creating/preparing this VM on the Q9550, should I disable all the following settings to make sure the E2160 can start this VM without errors?

        Q9550 Default VM Settings
        =========================
        PROCESSOR
          [X] Enable PAE/NX
        ACCELERATION
          [X] Enable VT-x/AMD-V
          [X] Enable Nested Paging
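
    For reference, the same toggles can be flipped from the command line with VBoxManage before moving the VM (the VM name is a placeholder; PAE/NX is a separate setting from VT-x):

        VBoxManage modifyvm "WinXP" --hwvirtex off --nestedpaging off   # drop the VT-x features the E2160 lacks
        VBoxManage modifyvm "WinXP" --pae on                            # PAE/NX does not require VT-x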

    Read the article

  • Will Windows Barf on Constant Video Driver Overwrite?

    - by Maarx
    So, I've got a Win7 64-bit gaming PC with GTX 260's. Recently, StarCraft 2 had an issue with flickering, which NVidia fixed with a new set of drivers. However, these new drivers induce unplayable graphical errors with Neverwinter Nights 2, something me and my friends still play from time to time. I am seeking advice on the "best" way to rectify this situation, to be able to switch between two driver releases, without compromising the stability of my system (if Windows stability isn't an oxymoron). I'm wondering if Windows 7 is structured in such a way that I can constantly reinstall these two sets of drivers back and forth overtop each other, possibly six or eight times a day, without very quickly driving myself to reformat to maintain that "just like new" performance. I'm loathe to have to reformat the drive and maintain two copies of the operating system, but I'll do it if I have to.

    Read the article

  • Resizing a System Partition Windows Server 2003 VM (Getting GParted Error)

    - by Dina
    I am getting an error while trying to resize a system partition for Windows Server 2003 (this is a VM on Hyper-V, Windows Server 2008) using the GParted Live CD ISO. I followed this tutorial: http://malaysiavm.com/blog/how-to-resize-windows-2003-server-virtual-disk-on-vmware-esx/ and the GParted docs: http://gparted.sourceforge.net/larry/resize/resizing.htm (they are very similar). The VM has a dynamic VHD file, which I have already grown using Hyper-V. GParted doesn't give any clues or details about the error; it simply errors when trying to grow the partition. Any ideas what I can do? Thanks! GParted version: gparted-live-0.13.1-2

    Read the article

  • Exim: Change sender address when sending mails out of local network

    - by Esa Varemo
    We have a working Exim setup at a site where users can send and receive mail. We are trying to set up a server to send warnings and errors by email to an address outside the local network. The problem is: the program that sends the mails sends them using the username it runs under and the local hostname of the server. This causes the mails to have a sender of the format [email protected]. Exim sends these mails to the ISP's SMTP server, which rejects them as having an illegal or unverifiable sender (the internal address). I'm thinking I should configure Exim to rewrite the sender when: the sender's domain is on the local network, and the receiver's domain is outside the local network. I tried setting up some kind of rewriting in the Exim config, but did not manage to get it to work. I'd show what I tried, but I ran out of time on the last visit to the site and had to revert to the original version, losing all the changes I had tried.
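
    For orientation, sender rewriting in Exim lives in its rewrite section; a sketch of the kind of rule involved (the domains and target address are placeholders, not from the question; the flags apply the rule to the envelope sender plus the From:, Reply-To: and Sender: headers):

        begin rewrite

        # Rewrite mail that claims to come from the internal host so the ISP sees a valid external sender.
        *@server.internal.lan    alerts@example.com    Ffrs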

    Read the article

  • Trying to get a new user up on VPN

    - by Chris
    Caveat: I am not a sysadmin, so please forgive the n00bness of the query. We have a new user and I'm trying to get them up on VPN. We use pfSense as an IPsec endpoint, and this person is using Shrew Soft for the client. I created an entry in pfSense for them and then edited a previous user's config file; Shrew Soft didn't import the config file very well, and I had to hand-edit the information. Now we are getting gateway errors. One thing I've noticed is that there is a difference between the value of the preshared key stored on the firewall and the PSK stored in the config file. I assume it has something to do with a hash, but I've no idea if that's the case, or whether that might be what's causing the problem. Any suggestions greatly appreciated! Tangentially, is there some software used to generate these config files?

    Read the article
