Search Results

Search found 24814 results on 993 pages for 'linux distro'.


  • VirtualBox Port Forward not working when Guest IP *IS* specified (while doc says opposite)

    - by Patrick
    Trying to port forward from the host (Mac OS X) 127.0.0.1:8282 to the guest (CentOS) at 10.10.10.10:8080. Existing port forwards include 127.0.0.1:8181 and 9191 to the guest without any IP specified (so whatever it gets through DHCP, as explained in the documentation). Here is how the non-working binding was added:
      VBoxManage modifyvm "VM name" --natpf1 "rule3,tcp,127.0.0.1,8282,10.10.10.10,8080"
    Here is how the working ones were added:
      VBoxManage modifyvm "VM name" --natpf1 "rule1,tcp,127.0.0.1,8181,,80"
      VBoxManage modifyvm "VM name" --natpf1 "rule2,tcp,127.0.0.1,9191,,9090"
    And by "non-working", I of course mean not listening (as a prerequisite to forwarding):
      $ lsof -Pi -n | grep Virtual | grep LISTEN
      VirtualBo 27050 user 21u IPv4 0x2bbdc68fd363175d 0t0 TCP 127.0.0.1:9191 (LISTEN)
      VirtualBo 27050 user 22u IPv4 0x2bbdc68fd0e0af75 0t0 TCP 127.0.0.1:8181 (LISTEN)
    There should be a similar line above but with 127.0.0.1:8282. Just to be clear, this port is listening perfectly fine on the guest itself. And when I remove the guest IP (i.e., clear the 10.10.10.10) the forward works fine, albeit to eth0 (not eth1 where I need it); I can tcpdump and watch the traffic flow back and forth. And yes, I've disabled iptables entirely while testing -- it's not getting blocked anywhere on the guest. As VirtualBox writes in their documentation, you are required to specify the guest IP if it's static (which makes sense, since there is no DHCP lease for it to look up): "If for some reason the guest uses a static assigned IP address not leased from the built-in DHCP server, it is required to specify the guest IP when registering the forwarding rule:". However, doing so (as I need to) seems to break the port forward, with nary a report in any log file I can find (I've reviewed everything in ~/Library/VirtualBox/). Other notes: while I used the above command to add the third rule, I've also verified it showed up correctly in the GUI and then removed/re-added it from there just to make sure. This forum link -- while very dated -- looks somewhat related in that a port forward to a static IP was not appearing (perhaps they think due to a lack of gratuitous ARP being sent, so the host doesn't know the IP is there/available?). Anyway, what gives? Is this still buggy? Any suggestions? If not, easy enough workarounds? What's interesting is that this works perfectly fine on another user's Mac, however he's running a slightly older version (4.3.6 vs. 4.3.12).
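
    One possible workaround, sketched below and not a confirmed fix for this VirtualBox version: re-create the specific rule on the running VM with controlvm and check whether the listener appears, or keep the rule in the form that is known to create a listener (no guest IP) and redirect the traffic to the eth1 address from inside the guest. The rule name "rule3" and the addresses are taken from the question; everything else is an assumption.

        # re-create the rule while the VM is running, then check for the listener
        VBoxManage controlvm "VM name" natpf1 delete rule3
        VBoxManage controlvm "VM name" natpf1 "rule3,tcp,127.0.0.1,8282,10.10.10.10,8080"
        lsof -Pi -n | grep 8282

        # alternative: forward without a guest IP (this style of rule is known to listen),
        # then push the traffic across to eth1 inside the guest
        VBoxManage modifyvm "VM name" --natpf1 "rule3,tcp,127.0.0.1,8282,,8080"
        # on the guest (assumes the service only listens on 10.10.10.10):
        iptables -t nat -A PREROUTING -i eth0 -p tcp --dport 8080 -j DNAT --to-destination 10.10.10.10:8080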

    Read the article

  • Mint 13 does nothing when laptop lid is closed

    - by ewok
    I have a laptop running Mint 13. I have it hooked up to a 30" monitor and have no use for the laptop being open, so I put it on a shelf and close it. When I do that, the monitor goes blank. The power manager does not have an option for doing nothing when the lid is closed. The options are "Blank Screen", "Suspend", and "Shutdown". Is there a way to make the laptop not go to a blank screen when the lid is closed?
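
    A possible approach, untested on Mint 13 specifically: if the session uses gnome-settings-daemon's power plugin (the Cinnamon edition of that era did; MATE uses its own power manager and differs), the lid action can be set to 'nothing' from the command line even though the GUI does not offer it:

        # check that the keys exist in this session before setting anything
        gsettings list-keys org.gnome.settings-daemon.plugins.power | grep lid-close
        gsettings set org.gnome.settings-daemon.plugins.power lid-close-ac-action 'nothing'
        gsettings set org.gnome.settings-daemon.plugins.power lid-close-battery-action 'nothing'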

    Read the article

  • Use preforker(ruby gem) with supervisor

    - by user1548832
    I also asked the same question on stackoverflow.com (http://stackoverflow.com/questions/13871169/use-preforkerruby-gem-with-supervisor), but superuser.com might be more helpful. Can anyone answer this? I want to run a server program using the preforker ruby gem under supervisor, but an error occurs. I wrote the following test program using preforker:
      #!/usr/bin/env ruby
      require 'rubygems'
      require 'preforker'
      Preforker.new(:app_name => 'test-preforker', :timeout => 60, :workers => 1) do |master|
        while master.wants_me_alive? do
          puts "hello"
          sleep 10
        end
      end.run
    And the following supervisor config:
      [program:test-preforker]
      command=/home/tkono/tmp/test-preforker.rb
      stdout_logfile_maxbytes=1MB
      stderr_logfile_maxbytes=1MB
      stdout_logfile=/var/log/%(program_name)s.log
      stderr_logfile=/var/log/%(program_name)s.log
      autorestart=true
    Then I reload supervisor:
      # supervisorctl reload
      Restarted supervisord
    Here is the log file of supervisor:
      2012-12-13 17:50:47,161 CRIT Supervisor running as root (no user in config file)
      2012-12-13 17:50:47,163 WARN Included extra file "/etc/supervisor.d/test-preforker.ini" during parsing
      2012-12-13 17:50:47,209 INFO RPC interface 'supervisor' initialized
      2012-12-13 17:50:47,213 CRIT Server 'unix_http_server' running without any HTTP authentication checking
      2012-12-13 17:50:47,215 INFO supervisord started with pid 12437
      2012-12-13 17:50:48,231 INFO spawned: 'test-preforker' with pid 12440
      2012-12-13 17:50:48,233 INFO exited: test-preforker (exit status 1; not expected)
      2012-12-13 17:50:49,248 INFO spawned: 'test-preforker' with pid 12441
      2012-12-13 17:50:49,261 INFO exited: test-preforker (exit status 1; not expected)
      2012-12-13 17:50:51,267 INFO spawned: 'test-preforker' with pid 12442
      2012-12-13 17:50:51,284 INFO exited: test-preforker (exit status 1; not expected)
      2012-12-13 17:50:54,305 INFO spawned: 'test-preforker' with pid 12443
      2012-12-13 17:50:54,308 INFO exited: test-preforker (exit status 1; not expected)
      2012-12-13 17:50:55,311 INFO gave up: test-preforker entered FATAL state, too many start retries too quickly
    Please tell me what is wrong. Can a program using preforker not run under supervisor?
      preforker: https://github.com/dcadenas/preforker
      supervisor: http://supervisord.org/index.html
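
    The immediate exit with status 1 suggests the command fails before preforker even starts. A first debugging step (a sketch, not a confirmed diagnosis) is to run the exact command supervisord runs, as the same user, and rule out the usual suspects: a missing execute bit, a ruby/shebang mismatch, or gems installed only for another user. The /usr/local/bin/ruby path below is just an example.

        # run it exactly as supervisord would, and see the real error
        sudo -u root /home/tkono/tmp/test-preforker.rb; echo "exit status: $?"
        tail /var/log/test-preforker.log

        # make sure the script is executable and ruby can load the gem as this user
        chmod +x /home/tkono/tmp/test-preforker.rb
        ruby -e "require 'rubygems'; require 'preforker'; puts 'preforker loads fine'"

        # if supervisord's PATH points at a different ruby (rvm/rbenv), pin it in the ini:
        # command=/usr/local/bin/ruby /home/tkono/tmp/test-preforker.rb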

    Read the article

  • Is there a man in the middle attacking my server machine?

    - by GongT
    My server has worked fine for about half a year, but a strange thing happened a few hours ago. The server has two IP addresses: 58.17.85.19 and 117.21.178.19. When I navigate to http://58.17.85.19, nothing is different from before. But http://117.21.178.19 returns a "302 Object moved" and becomes a redirect loop. I ran some tests with ($cmd = "wget http://117.21.178.19/?xx=$RANDOM --max-redirect 0 -S --no-cache -O -"), step by step:
      run $cmd on my PC and on my friend's (we live on opposite sides of China, far apart) -- got 302
      run $cmd on this server -- got 200 OK (content is the correct result of index.php)
      run $cmd on another server in the same computer room -- got 200 OK
      telnet from my PC and build an HTTP request by hand -- got 200 OK
      shut down php-fpm, run $cmd on my PC -- got 302
      run $cmd on the server -- 502 Bad Gateway
      shut down nginx, run $cmd on both the server and my PC -- Connection refused
      create an iptables rule refusing any connection to 58.17.85.19:80, run "nc -l 80 -k -vvv" on the server and run $cmd on my PC -- nc shows the server accepted the connection (Connection from [my ip]) and then my connection was closed (Remove fd xx from list), yet wget still dumps out a response -- got 302
    I know that normally nc will accept the connection, dump the HTTP request from the client, and the client will wait for a response; that connection stays open (in fact the client closes it on timeout), because nc can't give any response. So... where did my request go? Who sent a response to the client? Some virus on my server system? If so, why doesn't 58.17.85.19 have this error? Or was I attacked by a man in the middle?
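
    A sketch for narrowing down whether the 302 is being injected somewhere on the path (for example by a transparent proxy) rather than generated on the server -- the nc test already points that way, since the client got a response the server never sent. The <client ip> placeholder is whatever machine you test from.

        # on the server: capture everything that actually arrives for port 80
        tcpdump -i any -nn -s0 -A 'tcp port 80 and host <client ip>'

        # on the client: fetch with full headers and look at the Server:, Via:
        # and Location: headers of the 302 -- they often betray an injecting proxy
        curl -sv 'http://117.21.178.19/?xx=123' -o /dev/null

        # compare TTLs: a reply injected mid-path usually arrives with a TTL that
        # does not match replies you know come from the real server
        ping -c 3 117.21.178.19
        ping -c 3 58.17.85.19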

    Read the article

  • How to log when an HDD wakes up?

    - by NumberFour
    I'm looking for a small script or application which could log the time when a non-system disk wakes up. I cannot identify which application or script wakes up my non-system drive (which should stay asleep until I actually work with it). I have already set the noatime flag and tried to use powertop and iotop to determine which application could be preventing it from going to sleep, but with no result. So my plan is to put this drive to sleep (hdparm -Y) and see at what times it gets woken up. Thanks for any advice.
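
    One way to catch the culprit (a sketch; vm.block_dump is a stock kernel knob, but it is noisy and should only be left on briefly): enable block-I/O logging, spin the drive down, and watch the kernel log for the first process that touches the device. /dev/sdb is an example device name.

        hdparm -Y /dev/sdb          # put the data disk to sleep
        sysctl vm.block_dump=1      # log every block read/write to the kernel log
        # ... wait until the drive spins up again ...
        dmesg | grep -i sdb         # lines look like: progname(PID): READ block N on sdb
        sysctl vm.block_dump=0      # turn the logging off again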

    Read the article

  • Recurring Apache 2.0.52 error on CentOS 4 - 'could not create `rewrite_log_lock`'

    - by warren
    I have been seeing a recurring issue on my web server:
      [Sun May 16 03:10:19 2010] [crit] (28)No space left on device: mod_rewrite: could not create rewrite_log_lock Configuration Failed
      [Sun May 16 04:10:05 2010] [crit] (28)No space left on device: mod_rewrite: could not create rewrite_log_lock Configuration Failed
      [Sun May 16 05:10:04 2010] [crit] (28)No space left on device: mod_rewrite: could not create rewrite_log_lock Configuration Failed
      [Sun May 16 05:17:13 2010] [crit] (28)No space left on device: mod_rewrite: could not create rewrite_log_lock Configuration Failed
    So far, the only fix I have found when this happens is to reboot my server. This is non-ideal :-\ Restarting httpd does not clear the error. df indicates I have 20+ gigs free, and top and free both report 800+ megs (or 1.2 gigs) free:
      > df -h
      Filesystem            Size  Used Avail Use% Mounted on
      /dev/simfs             40G   18G   23G  44% /
      > free
                   total       used       free     shared    buffers     cached
      Mem:       1474560     300832    1173728          0          0          0
      -/+ buffers/cache:     300832    1173728
    Any ideas on why this would recur, and how to prevent/fix it?
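
    That particular "No space left on device" usually refers to SysV IPC, not disk: Apache cannot allocate a semaphore for the mod_rewrite lock because the system-wide (or, on a /dev/simfs OpenVZ/Virtuozzo container, per-container) limit is exhausted, often by semaphores leaked by crashed httpd children. A sketch for checking and clearing them; the grep pattern assumes httpd runs as the apache user.

        # how many semaphore arrays exist, and who owns them
        ipcs -s

        # current kernel limits (semmsl, semmns, semopm, semmni); on OpenVZ also
        # check failcnt in /proc/user_beancounters for exhausted container limits
        cat /proc/sys/kernel/sem

        # remove semaphores leaked by apache (do this with httpd stopped)
        ipcs -s | awk '/apache/ {print $2}' | xargs -r -n1 ipcrm -s
        # (older ipcrm versions use the form: ipcrm sem <id>)

        # optionally raise the limits so it takes longer to run out
        # echo "kernel.sem = 250 32000 32 256" >> /etc/sysctl.conf && sysctl -p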

    Read the article

  • How can I change the flow through this PAM (Pluggable Authentication Modules) file?

    - by Jamie
    I'd like the PAM configuration to skip the pam_mount.so line when a UNIX login succeeds. I've tried various things, including:
      auth    [success=2 default=ignore]    pam_unix.so nullok_secure
      auth    [success=2 default=ignore]    pam_winbind.so krb5_auth krb5_ccache_type=FILE cached_login try_first_pass
      auth    requisite                     pam_deny.so
      auth    requisite                     pam_permit.so
      auth    required                      pam_permit.so
      auth    optional                      pam_mount.so
    But I can't get it to work. Conversely, when a session shuts down, how can I modify the following so that an unmount command (via pam_mount.so) is avoided during a UNIX login?
      session    [default=1]    pam_permit.so
      session    requisite      pam_deny.so
      session    required       pam_permit.so
      session    required       pam_unix.so
      session    optional       pam_winbind.so
      session    optional       pam_mount.so
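
    One possible ordering for the auth stack, sketched below and untested; note that PAM skip counts refer to the following lines in this file, so they must be re-checked whenever a line is added or removed. The idea is that a pam_unix success jumps over winbind, the deny line and pam_mount, while a winbind success only jumps over the deny line and therefore still runs pam_mount:

        auth    [success=3 default=ignore]    pam_unix.so nullok_secure
        auth    [success=1 default=ignore]    pam_winbind.so krb5_auth krb5_ccache_type=FILE cached_login try_first_pass
        auth    requisite                     pam_deny.so
        auth    optional                      pam_mount.so
        auth    required                      pam_permit.so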

    Read the article

  • using "touch" to create directories?

    - by user66732
    1) In directory "A":
      find . -type f > a.txt
    2) In directory "B":
      cat a.txt | while read FILENAMES; do touch "$FILENAMES"; done
    3) Result: step 2) "creates the files" (I mean only files with the same names, but 0 bytes in size), which is fine. But if there are subdirectories in directory "A", then step 2) can't create the files inside them, because those directories don't exist in "B". Question: is there a way to make "touch" create the directories as well?
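
    touch itself only creates files, so a sketch of the usual workaround is to create each file's parent directory first with mkdir -p:

        while IFS= read -r f; do
            mkdir -p "$(dirname "$f")"   # create the subdirectory path if it is missing
            touch "$f"                   # then create the empty file
        done < a.txt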

    Read the article

  • Xen virtual host can reach some sites but not others

    - by Tun H S Lee
    Okay, this is killing me. Debian Squeeze, Xen 4.0, brand-new install. No iptables rules whatsoever except for the ones added by the default Xen bridge script. Dom0 can reach the entire world, no problems. DomU can receive packets from some hosts, but not from others. For instance, if I ping Host A, it works fine. If I ping Host B, the DomU reports 100% packet loss. The hosts are random, but consistent (even after reboots). I can see no pattern in why some work and others don't. In fact, in some cases, the virtual hosts on the same server (another server at a different data center) are split: some work and others do not. I can reboot (the DomU, or the Dom0 too) and the same hosts will work or fail as before. If I tcpdump on Host B while pinging from the DomU, everything looks fine: it sees the echo request coming in and says it's sending one back. However, if I tcpdump peth0 on the Dom0, it never sees the echo reply. Any ideas what could be happening? I'm tearing my hair out here.
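
    Since Host B claims to send an echo reply that never shows up on peth0, a sketch for narrowing down where it disappears (interface names are the Xen defaults and the <...> values are placeholders): confirm the request really leaves with the DomU's own source address, compare paths from Dom0 and DomU, and make sure the upstream gateway can resolve the DomU's address at all.

        # while the DomU pings Host B, confirm the request leaves peth0 with the
        # DomU's IP as source (not rewritten), and watch for the missing reply
        tcpdump -nni peth0 'icmp and host <HostB-IP>'

        # compare routes from Dom0 and from inside the DomU; divergence points upstream
        traceroute -n <HostB-IP>      # run on Dom0, then again inside the DomU

        # announce/verify the DomU's presence to the gateway over the bridge
        arping -I peth0 -c 3 <gateway-IP>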

    Read the article

  • Determine Configured Location of MySQL's data directory OR all loaded *.cnf Locations

    - by alanstorm
    I'm not a sysadmin, but sometimes I play one at work. I've inherited a virtual server that had MySQL installed from source. I'm gathering as much information about the install as I can (the original people who installed it are, of course, not a resource). How can I find: the default/current location of MySQL's data files (often stored in a directory named data), and any default or custom cnf files that get loaded? Looking for solutions that are a bit more sophisticated than a find / -iname '*.cnf' :)
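
    Both questions can usually be answered by the running server and the binaries themselves; a short sketch:

        # ask the running server where its data directory is
        mysql -u root -p -e "SHOW VARIABLES LIKE 'datadir'"

        # ask the binaries which option files they read, and in what order
        mysqld --verbose --help 2>/dev/null | grep -A1 'Default options'
        mysql --help | grep -A1 'Default options'

        # for a source build the compiled-in defaults are also listed here
        mysqld --verbose --help 2>/dev/null | grep -E '^(basedir|datadir)'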

    Read the article

  • Where does gcc keep its built-in include directory paths

    - by Charles
    GCC has built-in include directories for certain standard headers. I just need to know where this list is. My newly compiled gcc will not compile my little test C++ program because it cannot find standard headers. I think it fails because of some configure options I used to make my file system more organized: I set the bindir and libdir, which I think might have screwed up the built-in include paths for some reason.
    Program (dummy.c):
      #include <iostream>
      void main(){}
    Command:
      g++ dummy.c
    Error:
      dummy.c:1:20: fatal error: iostream: No such file or directory
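
    The search list is not a file you edit; it is compiled into the driver and can be printed by it. A quick sketch for dumping it, and for checking which configure options the compiler was actually built with:

        # print the built-in include search directories for C++
        echo | g++ -E -x c++ - -v 2>&1 | sed -n '/#include <...> search starts here:/,/End of search list./p'

        # show the configure line (prefix, bindir, libdir, ...) recorded at build time
        g++ -v 2>&1 | grep 'Configured with'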

    Read the article

  • How to mount LUKS partition securely on server

    - by Ency
    I'm curious whether it is possible to mount a partition encrypted by cryptsetup with LUKS securely and automatically on Ubuntu 10.04 LTS. For example, if I use a key file for the encrypted partition, then that key has to be present on a device that is not encrypted, and if someone steals my disk they'll be able to find the key and decrypt the partition. Is there any safe way to mount an encrypted partition automatically? If not, does anything exist to do what I want?
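
    A fully unattended mount can never be safe against theft of the whole machine, because whatever unlocks the disk automatically is stored somewhere the thief also gets. A common compromise, sketched below and assuming Ubuntu's cryptsetup ships the Debian "passdev" keyscript, is to keep the key file on a removable USB stick that only has to be plugged in at boot; device names, labels and paths are examples.

        # add a key file (kept only on the USB stick) as an additional LUKS key
        cryptsetup luksAddKey /dev/sdb2 /media/usbkey/data.key

        # /etc/crypttab -- passdev reads the key from the named device at boot (device:path)
        data_crypt  /dev/sdb2  /dev/disk/by-label/USBKEY:/data.key  luks,keyscript=passdev

        # /etc/fstab -- mount the decrypted mapping as usual
        /dev/mapper/data_crypt  /data  ext4  defaults  0  2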

    Read the article

  • RHEL 5.3 Kickstart - How to specify the location of an individual package in the Workstation folder?

    - by Ed
    I keep getting "package does not exist" errors during the install. I made a kickstart ISO to create an unattended install of a RHEL 5.3 build machine for C++ software releases. It pulls the kickstart config file from our internal web server. This is handy; it makes it easy to test and modify without having to make a new ISO, and I plan to check it in to version control if I can get it working. Anyway, the rpm packages are located in two folders on the disk: Client and Workstation. The packages that are physically located under the Client folder install fine. It cannot find those under the Workstation folder, such as doxygen and subversion, complaining that the packages do not exist. Is there a way to specify the individual package location?
      # -----------------------------------------------------------------------------
      # P A C K A G E S
      # -----------------------------------------------------------------------------
      %packages
      @gnome-desktop
      @core
      @base
      @base-x
      @printing
      @development-tools
      emacs
      kexec-tools
      fipscheck
      xorg-x11-server-Xnest
      xorg-x11-server-Xvfb
      # Packages located in the Workstation folder *** the install cannot find any of these ??
      bison
      doxygen
      gcc-c++
      subversion
      zlib-devel
      freetype-devel
      libxml2-devel
    Thanks in advance, -Ed
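
    Anaconda only resolves packages from repositories it knows about, not from arbitrary folders on the media. One approach, sketched below under two assumptions -- that this release's kickstart supports the repo directive, and that the Workstation tree has its own repodata/ like the stock RHEL 5 media -- is to publish the Workstation tree on the internal web server and declare it as an extra repo in the kickstart file (the URL is a placeholder):

        # near the top of the kickstart file, alongside the install source
        repo --name=Workstation --baseurl=http://kickstart.example.com/rhel53/Workstation/

        %packages
        @development-tools
        doxygen
        subversion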

    Read the article

  • How to change my commandline locale after CentOS decided to change it?

    - by Aron Rotteveel
    So apparently, CentOS decided I was Dutch, and thus should not have an English locale. Apart from the fact that this greatly bothers me, I am having a pretty hard time actually changing it back. There does not seem to be a setlocale command, and system-config-language tells me I am using an English locale, even though my environment says otherwise. Any help would be appreciated. Output from locale:
      LANG=nl_NL.UTF-8
      LC_CTYPE="nl_NL.UTF-8"
      LC_NUMERIC="nl_NL.UTF-8"
      LC_TIME="nl_NL.UTF-8"
      LC_COLLATE="nl_NL.UTF-8"
      LC_MONETARY="nl_NL.UTF-8"
      LC_MESSAGES="nl_NL.UTF-8"
      LC_PAPER="nl_NL.UTF-8"
      LC_NAME="nl_NL.UTF-8"
      LC_ADDRESS="nl_NL.UTF-8"
      LC_TELEPHONE="nl_NL.UTF-8"
      LC_MEASUREMENT="nl_NL.UTF-8"
      LC_IDENTIFICATION="nl_NL.UTF-8"
      LC_ALL=
    Neither my ~/.bashrc nor my ~/.bash_profile contains any locale settings. Additionally, /etc/bashrc does not contain any locale references either.
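
    On CentOS/RHEL the system-wide default lives in /etc/sysconfig/i18n, which login shells pick up via /etc/profile.d/lang.sh, so a sketch of the fix is to set it there and override it for the current session:

        # system-wide default (takes effect on the next login)
        cat /etc/sysconfig/i18n
        # edit it so it reads:
        #   LANG="en_US.UTF-8"

        # immediate override for the current shell
        export LANG=en_US.UTF-8
        locale    # verify: all LC_* categories should now follow LANG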

    Read the article

  • Input/output (read) errors in Bacula while setting up a Tape Drive + Autochanger

    - by Kyle Brandt
    When running the label barcode command in bacula I am getting Input/output errors. I am just getting started in trying to set this up:
      Connecting to Storage daemon TapeDevice at ny-back01.ny.stackoverflow.com:9103 ...
      Sending label command for Volume "ACJ332" Slot 1 ... 3307 Issuing autochanger "unload slot 8, drive 0" command. 3304 Issuing autochanger "load slot 1, drive 0" command. 3305 Autochanger "load slot 1, drive 0", status is OK. block.c:1010 Read error on fd=5 at file:blk 0:0 on device "ULTRIUM-HH4" (/dev/st0). ERR=Input/output error. 3000 OK label. VolBytes=64512 DVD=0 Volume="ACJ332" Device="ULTRIUM-HH4" (/dev/st0) Catalog record for Volume "ACJ332", Slot 1 successfully created.
      Sending label command for Volume "ACJ331" Slot 2 ... 3307 Issuing autochanger "unload slot 1, drive 0" command. 3304 Issuing autochanger "load slot 2, drive 0" command. 3305 Autochanger "load slot 2, drive 0", status is OK. block.c:1010 Read error on fd=5 at file:blk 0:0 on device "ULTRIUM-HH4" (/dev/st0). ERR=Input/output error. 3000 OK label. VolBytes=64512 DVD=0 Volume="ACJ331" Device="ULTRIUM-HH4" (/dev/st0) Catalog record for Volume "ACJ331", Slot 2 successfully created.
      Sending label command for Volume "ACJ328" Slot 3 ... 3307 Issuing autochanger "unload slot 2, drive 0" command. 3304 Issuing autochanger "load slot 3, drive 0" command. 3305 Autochanger "load slot 3, drive 0", status is OK. block.c:1010 Read error on fd=5 at file:blk 0:0 on device "ULTRIUM-HH4" (/dev/st0). ERR=Input/output error. 3000 OK label. VolBytes=64512 DVD=0 Volume="ACJ328" Device="ULTRIUM-HH4" (/dev/st0) Catalog record for Volume "ACJ328", Slot 3 successfully created.
      Sending label command for Volume "ACJ329" Slot 4 ... 3307 Issuing autochanger "unload slot 3, drive 0" command. 3304 Issuing autochanger "load slot 4, drive 0" command. 3305 Autochanger "load slot 4, drive 0", status is OK. block.c:1010 Read error on fd=5 at file:blk 0:0 on device "ULTRIUM-HH4" (/dev/st0). ERR=Input/output error. 3000 OK label. VolBytes=64512 DVD=0 Volume="ACJ329" Device="ULTRIUM-HH4" (/dev/st0) Catalog record for Volume "ACJ329", Slot 4 successfully created.
      Sending label command for Volume "ACJ335" Slot 5 ... 3307 Issuing autochanger "unload slot 4, drive 0" command. 3304 Issuing autochanger "load slot 5, drive 0" command. 3305 Autochanger "load slot 5, drive 0", status is OK. block.c:1010 Read error on fd=5 at file:blk 0:0 on device "ULTRIUM-HH4" (/dev/st0). ERR=Input/output error. 3000 OK label. VolBytes=64512 DVD=0 Volume="ACJ335" Device="ULTRIUM-HH4" (/dev/st0) Catalog record for Volume "ACJ335", Slot 5 successfully created.
      Sending label command for Volume "ACJ334" Slot 6 ... 3307 Issuing autochanger "unload slot 5, drive 0" command. 3304 Issuing autochanger "load slot 6, drive 0" command. 3305 Autochanger "load slot 6, drive 0", status is OK. block.c:1010 Read error on fd=5 at file:blk 0:0 on device "ULTRIUM-HH4" (/dev/st0). ERR=Input/output error. 3000 OK label. VolBytes=64512 DVD=0 Volume="ACJ334" Device="ULTRIUM-HH4" (/dev/st0) Catalog record for Volume "ACJ334", Slot 6 successfully created.
      Sending label command for Volume "ACJ333" Slot 7 ... 3307 Issuing autochanger "unload slot 6, drive 0" command. 3304 Issuing autochanger "load slot 7, drive 0" command. 3305 Autochanger "load slot 7, drive 0", status is OK. block.c:1010 Read error on fd=5 at file:blk 0:0 on device "ULTRIUM-HH4" (/dev/st0). ERR=Input/output error. 3000 OK label. VolBytes=64512 DVD=0 Volume="ACJ333" Device="ULTRIUM-HH4" (/dev/st0) Catalog record for Volume "ACJ333", Slot 7 successfully created.
      Sending label command for Volume "ACJ330" Slot 8 ... 3307 Issuing autochanger "unload slot 7, drive 0" command.
    Bacula-dir:
      # Definition of file storage device
      Storage {
        Name = TapeDevice
        # Do not use "localhost" here
        Address = ny-back01....        # N.B. Use a fully qualified name here
        SDPort = 9103
        Password = "..."
        Device = ULTRIUM-HH4
        Media Type = LTO-4
        Media Type = File
        Autochanger = Yes
      }
    Bacula-sd:
      Autochanger {
        Name = StorageLoader1U
        Device = ULTRIUM-HH4
        Changer Command = "/etc/bacula/scripts/mtx-changer %c %o %S %a %d"
        Changer Device = /dev/sg5
      }
      Device {
        Name = ULTRIUM-HH4
        Media Type = LTO-4
        Archive Device = /dev/st0
        AutomaticMount = yes;
        AlwaysOpen = yes;
        RemovableMedia = yes;
        RandomAccess = no;
        AutoChanger = yes;
        RandomAccess = no;
      }
    Does anyone know what this means / why I am getting this?
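
    Two hedged observations rather than a definitive diagnosis: a read error at file:blk 0:0 while labeling a brand-new (blank) tape is often harmless, because Bacula first tries to read an existing label and a blank tape returns an I/O error before the new label is written -- and every volume above does end with "3000 OK label" and a catalog record; also, the Director's Storage resource lists two Media Type lines (LTO-4 and File), and a Media Type that does not match the SD's Device resource is a classic source of trouble later. A sketch for exercising the drive and changer outside of a real job, using tools that ship with Bacula:

        # low-level drive test against the SD configuration (then run "test" at the prompt)
        btape -c /etc/bacula/bacula-sd.conf /dev/st0

        # exercise the changer script exactly as the SD would call it
        /etc/bacula/scripts/mtx-changer /dev/sg5 slots
        /etc/bacula/scripts/mtx-changer /dev/sg5 load 1 /dev/st0 0
        mt -f /dev/st0 status
        /etc/bacula/scripts/mtx-changer /dev/sg5 unload 1 /dev/st0 0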

    Read the article

  • My NTFS Partition keeps becoming "unusable" on Ubuntu, Any Ideas?

    - by gopherman
    I just purchased a new 2TB external Seagate drive. My main system uses both Windows and Ubuntu, so I am pretty much stuck with keeping the drive as NTFS. I have done this without any problems before, but since I got this new drive I have been having issues. When I first load up Ubuntu the drive mounts and runs fine; after an unspecified amount of time I start getting Input/Output errors when accessing the drive. When I go to the Disk Utility I get a message stating the drive is "Unknown or Unused". If I disconnect and reconnect the drive, or reboot, everything is fine again. There are no errors coming up with S.M.A.R.T., and it seems to work fine under Windows. Any thoughts?
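
    A sketch of the usual first checks when an external drive drops out only under Linux (device names are examples, and the autosuspend sysfs file may be named autosuspend_delay_ms on newer kernels): look at the kernel log at the moment it fails -- USB resets and NTFS errors look very different -- and rule out a dirty NTFS flag and USB autosuspend.

        # right after the drive becomes "unusable"
        dmesg | tail -n 50                 # USB disconnect/reset messages vs. ntfs read errors

        # clear the NTFS dirty flag from Linux (ntfs-3g / ntfsprogs)
        sudo ntfsfix /dev/sdb1

        # see whether the kernel is autosuspending the USB device
        grep . /sys/bus/usb/devices/*/power/autosuspend 2>/dev/null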

    Read the article

  • rsync without password: none of the Google (Server Fault) tutorials worked

    - by Jake Armstrong
    I need to use rsync for a daily backup operation, and in the past (on different servers) I managed to just use an RSA key etc., but now none of the Google (Server Fault) tutorials work at all: it keeps asking me for a password. I have webmin and ssh/root access to both servers. My steps:
      create a key on server 1
      send key.pub to server 2
      add key.pub to .ssh/authorized_keys
      chmod 700 .ssh/authorized_keys
      go back to server 1 and try rsync, and it keeps asking for a password
    rsync command:
      rsync -avz -e ssh file.txt root@server2:/root
    EDIT: well, I cleaned up everything and this time, instead of giving the key a custom name, I used the standard one on server1, sent the .pub to server2, and it worked like a charm... So the answer is that server1's ssh wasn't even using the right key.
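
    For completeness, a sketch of how to make ssh pick up a key that does not have the default name, which is what was going wrong here; the key path is an example:

        # option 1: point rsync/ssh at the key explicitly
        rsync -avz -e "ssh -i /root/.ssh/backup_rsa" file.txt root@server2:/root/

        # option 2: declare the key in ~/.ssh/config on server1
        # Host server2
        #     IdentityFile /root/.ssh/backup_rsa

        # and if it still prompts, watch which keys ssh actually offers
        ssh -v -i /root/.ssh/backup_rsa root@server2 true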

    Read the article

  • forbidden access on addon domains

    - by ehmad11
    I have one domain hosted on the server, domain.com, and there are about 20 subdomains set up as addon domains under it. For no good reason, someone changed the group (chgrp) on all files in the domain.com directory to the domain.com user, and now all the websites are showing a 403 Forbidden error. What should I do now to get the websites back up? I have tried changing the PHP handler, but no luck yet :/ The php5 handler is suphp and Apache suEXEC is on.
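
    With suPHP/suEXEC enabled, files generally have to be owned by the account's own user and group, and directories must not be group- or world-writable, or Apache refuses to execute them. A sketch of the recovery on a cPanel-style layout; the user name, paths and log locations below are assumptions about this server:

        # restore user:group ownership on the account's web tree
        chown -R domainuser:domainuser /home/domainuser/public_html

        # suEXEC/suPHP also reject overly permissive modes
        find /home/domainuser/public_html -type d -exec chmod 755 {} \;
        find /home/domainuser/public_html -type f -exec chmod 644 {} \;

        # the exact reason for each 403 is usually logged here
        tail -f /usr/local/apache/logs/suexec_log /usr/local/apache/logs/error_log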

    Read the article

  • Creating a fallback error page for nginx when root directory does not exist

    - by Ruirize
    I have set up an any-domain config on my nginx server -- to reduce the amount of work needed when I open a new site/domain. This config allows me to simply create a folder in /usr/share/nginx/sites/ with the name of the domain/subdomain and then it just works.™
      server {
          # Catch all domains starting with only "www." and boot them to non "www." domain.
          listen 80;
          server_name ~^www\.(.*)$;
          return 301 $scheme://$1$request_uri;
      }
      server {
          # Catch all domains that do not start with "www."
          listen 80;
          server_name ~^(?!www\.).+;
          client_max_body_size 20M;
          # Send all requests to the appropriate host
          root /usr/share/nginx/sites/$host;
          index index.html index.htm index.php;
          location / {
              try_files $uri $uri/ =404;
          }
          recursive_error_pages on;
          error_page 400 /errorpages/error.php?e=400&u=$uri&h=$host&s=$scheme;
          error_page 401 /errorpages/error.php?e=401&u=$uri&h=$host&s=$scheme;
          error_page 403 /errorpages/error.php?e=403&u=$uri&h=$host&s=$scheme;
          error_page 404 /errorpages/error.php?e=404&u=$uri&h=$host&s=$scheme;
          error_page 418 /errorpages/error.php?e=418&u=$uri&h=$host&s=$scheme;
          error_page 500 /errorpages/error.php?e=500&u=$uri&h=$host&s=$scheme;
          error_page 501 /errorpages/error.php?e=501&u=$uri&h=$host&s=$scheme;
          error_page 503 /errorpages/error.php?e=503&u=$uri&h=$host&s=$scheme;
          error_page 504 /errorpages/error.php?e=504&u=$uri&h=$host&s=$scheme;
          location ~ \.(php|html) {
              include /etc/nginx/fastcgi_params;
              fastcgi_pass 127.0.0.1:9000;
              fastcgi_intercept_errors on;
          }
      }
    However, there is one issue that I'd like to resolve: when a domain doesn't have a folder in the sites directory, nginx throws its internal 500 error page, because it cannot redirect to /errorpages/error.php as that doesn't exist either. How can I create a fallback error page that will catch these failed requests?
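
    One possible shape for the fallback, sketched below and untested: keep a small "default" docroot that always exists (the /usr/share/nginx/default path and page name are assumptions), detect the missing per-host directory up front, and route those requests to a location with its own fixed root, so nginx never has to resolve the error page under the nonexistent $host directory.

        server {
            listen 80;
            server_name ~^(?!www\.).+;
            root /usr/share/nginx/sites/$host;

            # if no folder exists for this host, serve a static fallback page
            if (!-d /usr/share/nginx/sites/$host) {
                rewrite ^ /unknown-host.html last;
            }

            location = /unknown-host.html {
                root /usr/share/nginx/default;   # assumed always-present directory
                internal;
            }

            # ... rest of the existing configuration ...
        }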

    Read the article

  • Is there a way to rsync in batches?

    - by Chris
    I have a huge chunk of data (11G) in a Subversion repository that I'm using rsync to migrate to Alfresco, where Lucene indexes new files as they hit the file system. I'm using a dav mount as a proxy to allow me to rsync. The issue I'm having is that the post-rsync indexing is quite an expensive operation for such a huge chunk of data, so I was wondering whether there's a way I could logically separate the rsync into identically-sized batches (say 500MB each) so I could schedule them in cron. At the moment, I'm traversing the top-level folders and taking the smallest ones across first, but once I'm done with those, the much larger sub-directories are going to be quite troublesome. Please let me know if you need any further info. Thanks in advance.
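
    A sketch of one way to do this with stock rsync (no batching feature required): build the file list once, cut it into roughly 500 MB chunks with a small awk pass, and feed each chunk to rsync --files-from, one batch per cron run. Paths are placeholders, and filenames containing newlines are not handled.

        cd /path/to/svn-export
        # group files into ~500 MB batches written as /tmp/batch.000, /tmp/batch.001, ...
        find . -type f -printf '%s %p\n' | awk '
            {
              sz += $1
              file = sprintf("/tmp/batch.%03d", n)
              sub(/^[0-9]+ /, "")           # strip the size field, keep the path
              print > file
              if (sz > 500*1024*1024) { sz = 0; n++ }
            }'

        # transfer one batch (kick off the next one from cron when this finishes)
        rsync -av --files-from=/tmp/batch.000 . /mnt/alfresco-dav/target/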

    Read the article

  • Wireless driver activation issue in Compaq c700 in Ubuntu 9.04

    - by Fazil
    I am using Ubuntu 9.04 and I can't get my wireless working. I activated the madwifi driver in Administration -> Hardware Drivers, but the wireless still couldn't be enabled. When I type lspci I get the following:
      00:00.0 Host bridge: Intel Corporation Mobile PM965/GM965/GL960 Memory Controller Hub (rev 03)
      00:02.0 VGA compatible controller: Intel Corporation Mobile GM965/GL960 Integrated Graphics Controller (rev 03)
      00:02.1 Display controller: Intel Corporation Mobile GM965/GL960 Integrated Graphics Controller (rev 03)
      00:1b.0 Audio device: Intel Corporation 82801H (ICH8 Family) HD Audio Controller (rev 04)
      00:1c.0 PCI bridge: Intel Corporation 82801H (ICH8 Family) PCI Express Port 1 (rev 04)
      00:1d.0 USB Controller: Intel Corporation 82801H (ICH8 Family) USB UHCI Controller #1 (rev 04)
      00:1d.1 USB Controller: Intel Corporation 82801H (ICH8 Family) USB UHCI Controller #2 (rev 04)
      00:1d.2 USB Controller: Intel Corporation 82801H (ICH8 Family) USB UHCI Controller #3 (rev 04)
      00:1d.7 USB Controller: Intel Corporation 82801H (ICH8 Family) USB2 EHCI Controller #1 (rev 04)
      00:1e.0 PCI bridge: Intel Corporation 82801 Mobile PCI Bridge (rev f4)
      00:1f.0 ISA bridge: Intel Corporation 82801HEM (ICH8M) LPC Interface Controller (rev 04)
      00:1f.1 IDE interface: Intel Corporation 82801HBM/HEM (ICH8M/ICH8M-E) IDE Controller (rev 04)
      00:1f.2 SATA controller: Intel Corporation 82801HBM/HEM (ICH8M/ICH8M-E) SATA AHCI Controller (rev 04)
      00:1f.3 SMBus: Intel Corporation 82801H (ICH8 Family) SMBus Controller (rev 04)
      01:00.0 Ethernet controller: Atheros Communications Inc. AR242x 802.11abg Wireless PCI Express Adapter (rev 01)
      02:01.0 Ethernet controller: Realtek Semiconductor Co., Ltd. RTL-8139/8139C/8139C+ (rev 10)
    But when I tried in Windows, I found that the driver for my laptop's adapter is:
      Atheros AR5007 802.11b/g WiFi Adapter
    So what can I do to solve this problem?
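
    The AR242x/AR5007 chipset is normally handled by the in-kernel ath5k driver on Ubuntu 9.04, and the madwifi (ath_pci) driver is known to conflict with it, so a sketch of what to try (untested on this exact laptop; install the rfkill package if that command is missing): deactivate madwifi in Hardware Drivers again, make sure ath5k is loaded and not blacklisted, and check the Wi-Fi hardware switch.

        sudo modprobe ath5k                 # load the native driver
        lsmod | grep -E 'ath5k|ath_pci'     # ath_pci is madwifi; it should not be loaded
        grep -r ath5k /etc/modprobe.d/      # make sure nothing blacklists ath5k
        iwconfig                            # a wlan0 entry should appear
        rfkill list                         # check for a hard/soft block (Fn wireless key)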

    Read the article

  • Why is 'grep -i' so slow? How to do it faster for ASCII?

    - by Vi.
    Consider:
      $ time lzop -d < tvtropes-index.lzo | egrep -B 5 '[Dd][eE][sS][cC][eE][nN][dD] ?[Ff][rR][oO][mM]'
      real    0m0.438s
      $ time lzop -d < tvtropes-index.lzo | egrep -B 5 'descend ?from' -i
      real    0m11.294s
    Both search case-insensitively. Why is the -i version so slow? How do I make grep -i fast without entering things like [iI][nN] [tT][hH][iI][sS] [wW][aA][Yy]? For example, perl -ne 'print if /descend ?from/i' works fast, but '-B 5' is not as trivial to implement as in grep (as well as other options).
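
    The slowdown almost always comes from case-insensitive matching in a multibyte (UTF-8) locale; if the data is plain ASCII, a sketch of the usual fix is to force the C locale just for the grep:

        # ASCII-only case folding: typically orders of magnitude faster than UTF-8 -i
        time lzop -d < tvtropes-index.lzo | LC_ALL=C egrep -i -B 5 'descend ?from'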

    Read the article
