Search Results

Search found 26947 results on 1078 pages for 'util linux'.

  • Installing software: configure.in

    - by ant2009
    Hello, I'm on Fedora 12 (kernel 2.6.32.9-70.fc12.i686). I have downloaded kdirstat from CVS and want to compile and install it. However, there is no configure script; the only file I have is configure.in.in. How can I create the configure script? Many thanks for any advice.
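
    A minimal sketch of generating configure; autoreconf is the generic autotools route, while KDE3-era CVS checkouts such as kdirstat usually ship a Makefile.cvs that drives the same step (package names assume Fedora):

        su -c 'yum install autoconf automake libtool'
        # if the checkout has a Makefile.cvs, this generates ./configure:
        make -f Makefile.cvs
        # otherwise, for a plain autotools tree:
        autoreconf --install --verbose
        ./configure && make && su -c 'make install'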

  • How can I change the flow through this PAM (Pluggable Authentication Modules) file?

    - by Jamie
    I'd like the PAM module to skip the pam_mount.so line when a unix login succeeds. I've tried various things, including:

        auth [success=2 default=ignore] pam_unix.so nullok_secure
        auth [success=2 default=ignore] pam_winbind.so krb5_auth krb5_ccache_type=FILE cached_login try_first_pass
        auth requisite pam_deny.so
        auth requisite pam_permit.so
        auth required pam_permit.so
        auth optional pam_mount.so

    but I can't get it to work. Conversely, when a session shuts down, how can I modify the following so that an unmount command (via pam_mount.so) is avoided during a unix login?

        session [default=1] pam_permit.so
        session requisite pam_deny.so
        session required pam_permit.so
        session required pam_unix.so
        session optional pam_winbind.so
        session optional pam_mount.so
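
    A sketch of the jump arithmetic rather than a tested stack: in [success=N default=ignore], N counts how many of the following lines in the same stack to skip, so the jump has to land beyond pam_mount.so:

        # hypothetical auth stack: success=4 from pam_unix skips winbind,
        # deny, permit and mount, landing on the final pam_permit line
        auth [success=4 default=ignore] pam_unix.so nullok_secure
        auth [success=3 default=ignore] pam_winbind.so krb5_auth cached_login try_first_pass
        auth requisite pam_deny.so
        auth required pam_permit.so
        auth optional pam_mount.so
        auth required pam_permit.so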

  • Random Server shutdown? - CentOS

    - by Kevin Hammett
    My system was working fine, and then it just had a random restart. Anyone have any idea of the problem? The message log:

        Jul 6 22:56:34 909I7 shutdown[719711]: shutting down for system halt
        Jul 6 22:56:34 909I7 init: Switching to runlevel: 0
        Jul 6 22:56:35 909I7 smartd[10743]: smartd received signal 15: Terminated
        Jul 6 22:56:35 909I7 smartd[10743]: smartd is exiting (exit status 0)
        Jul 6 22:56:42 909I7 hcid[8749]: Got disconnected from the system message bus
        Jul 6 22:56:42 909I7 auditd[8430]: The audit daemon is exiting.
        Jul 6 22:56:42 909I7 kernel: audit(1341640602.922:344412): audit_pid=0 old=8430 by auid$
        Jul 6 22:56:43 909I7 pcscd: pcscdaemon.c:572:signal_trap() Preparing for suicide
        Jul 6 22:56:43 909I7 pcscd: hotplug_libusb.c:376:HPRescanUsbBus() Hotplug stopped
        Jul 6 22:56:44 909I7 pcscd: readerfactory.c:1379:RFCleanupReaders() entering cleaning f$
        Jul 6 22:56:44 909I7 pcscd: pcscdaemon.c:532:at_exit() cleaning /var/run
        Jul 6 22:56:44 909I7 kernel: Kernel logging (proc) stopped.
        Jul 6 22:56:44 909I7 kernel: Kernel log daemon terminating.
        Jul 6 22:56:45 909I7 exiting on signal 15
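
    The shutdown[719711] line shows an orderly halt was requested rather than a crash, so someone or something ran shutdown; some hedged places to look for the origin:

        last -x shutdown runlevel | head          # halt/runlevel history from wtmp
        grep -i 'shutdown\|halt' /var/log/secure /var/log/messages
        # if auditd was recording at the time, look for the shutdown binary:
        ausearch -x shutdown | tail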

  • Configure iptables with a bridge and static IPs

    - by Andrew Koester
    I have my server set up with several public IP addresses, with a network configuration as follows (with example IPs):

        eth0
         \- br0 - 1.1.1.2
             |- [VM 1's eth0]
             |   |- 1.1.1.3
             |   \- 1.1.1.4
             \- [VM 2's eth0]
                 \- 1.1.1.5

    My question is, how do I set up iptables with different rules for the actual physical server as well as the VMs? I don't mind having the VMs doing their own iptables, but I'd like br0 to have a different set of rules. Right now I can only let everything through, which is not the desired behavior (as br0 is exposed). Thanks!
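
    One hedged approach, assuming bridged frames are being passed to iptables (net.bridge.bridge-nf-call-iptables = 1): address the host by its own IP on INPUT and handle purely bridged traffic on FORWARD:

        # rules for the physical host itself (1.1.1.2 in the example)
        iptables -A INPUT -d 1.1.1.2 -p tcp --dport 22 -j ACCEPT
        iptables -A INPUT -d 1.1.1.2 -j DROP
        # frames merely bridged through to the VMs traverse FORWARD, not
        # INPUT; the physdev match singles them out explicitly
        iptables -A FORWARD -m physdev --physdev-is-bridged -j ACCEPT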

  • How can I make grub2 boot into Windows 7?

    - by Grzenio
    I had Windows 7 installed on my system, then I installed Debian testing with grub2 as its boot manager. Initially I couldn't see the Windows entry in grub at all, so I ran:

        aptitude install os-prober kcpuload
        update-grub

    Now I can see the entry, but when I select it I get only the Win7 system restore instead of the real thing. Any ideas how to make it work? EDIT: I tried the suggested approach of adding a new file to /etc/grub.d, which generated an entry in grub.cfg, but it does not appear in the grub menu on boot :( I have this:

        grzes:/home/ga# cat /etc/grub.d/11_Windows
        #! /bin/sh -e
        echo Adding Windows >&2
        cat << EOF
        menuentry "Windows 7" {
            set root=(hd0,2)
            chainloader +1
        }
        EOF

    And I have the following grub.cfg file:

        grzes:/home/ga# cat /boot/grub/grub.cfg
        #
        # DO NOT EDIT THIS FILE
        #
        # It is automatically generated by /usr/sbin/grub-mkconfig using templates
        # from /etc/grub.d and settings from /etc/default/grub
        #
        ### BEGIN /etc/grub.d/00_header ###
        if [ -s $prefix/grubenv ]; then
          load_env
        fi
        set default="0"
        if [ ${prev_saved_entry} ]; then
          set saved_entry=${prev_saved_entry}
          save_env saved_entry
          set prev_saved_entry=
          save_env prev_saved_entry
          set boot_once=true
        fi
        function savedefault {
          if [ -z ${boot_once} ]; then
            saved_entry=${chosen}
            save_env saved_entry
          fi
        }
        insmod ext2
        set root=(hd0,3)
        search --no-floppy --fs-uuid --set 6ce3ff31-0ef7-41df-a6f5-b6b886db3a94
        if loadfont /usr/share/grub/unicode.pf2 ; then
          set gfxmode=640x480
          insmod gfxterm
          insmod vbe
          if terminal_output gfxterm ; then true ; else
            # For backward compatibility with versions of terminal.mod that don't
            # understand terminal_output
            terminal gfxterm
          fi
        fi
        set locale_dir=/boot/grub/locale
        set lang=en
        insmod gettext
        set timeout=5
        ### END /etc/grub.d/00_header ###
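
    A hedged alternative to the hand-written hook: grub2 ships /etc/grub.d/40_custom precisely for raw entries, which sidesteps a script missing its here-document terminator or its executable bit:

        cat >> /etc/grub.d/40_custom << 'EOF'
        menuentry "Windows 7" {
            set root=(hd0,2)
            chainloader +1
        }
        EOF
        chmod +x /etc/grub.d/40_custom   # grub-mkconfig skips non-executable hooks
        update-grub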

  • Prefork or Worker MPM for an Amazon xlarge server?

    - by Netismine
    I'm trying to work out whether the prefork or the worker MPM would be the better Apache module for the server I'm working on: an Amazon X-Large instance (15 GB memory, 8 EC2 Compute Units: 4 virtual cores with 2 EC2 Compute Units each) that will run a Magento website with about 50 active users at once. The site serves a lot of images, about 45 requests per page. Images sometimes hang, so it seems worker would be a better option? Thanks
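
    If worker is tried, a hedged starting point for a box this size; every number below is an assumption to tune against the measured footprint of one process, and PHP would normally move from mod_php to FastCGI since mod_php is not considered thread-safe:

        <IfModule mpm_worker_module>
            StartServers          4
            ServerLimit          16
            ThreadsPerChild      25
            MaxClients          400    # ServerLimit x ThreadsPerChild
            MaxRequestsPerChild   0
        </IfModule>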

  • How do I allow users to execute commands via ssh without allocating a pseudo-terminal?

    - by Dani El
    I need to allow users to run a limited set of commands, but not to allow them to create interactive sessions. Just like GitHub does: if you ssh to it without a command, it greets you and closes the session. I can achieve this with ForceCommand some-script, but inside some-script I then need to eval the user's input. Perhaps there is some other NoTTY-like option in sshd_config? --- UPDATE --- I'm looking for a pure SSH / Bash solution, not Perl/Python/etc. hacks.
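
    A sketch of the pure sshd+bash pattern: refuse TTY allocation in sshd_config and dispatch on SSH_ORIGINAL_COMMAND with case instead of eval. PermitTTY needs a reasonably recent OpenSSH; on older versions the no-pty option in authorized_keys has the same effect. Group and command names are placeholders:

        # /etc/ssh/sshd_config
        Match Group limited
            PermitTTY no
            ForceCommand /usr/local/bin/limited-shell

        #!/bin/bash
        # /usr/local/bin/limited-shell
        case "$SSH_ORIGINAL_COMMAND" in
            "")      echo "Hi! Interactive sessions are not allowed."; exit 0 ;;
            status)  /etc/init.d/myapp status ;;
            uptime)  uptime ;;
            *)       echo "command not allowed" >&2; exit 1 ;;
        esac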

  • Using mongodump with an auth enabled mongodb server

    - by bb-generation
    I'm trying to do a daily backup of my mongodb server (auth enabled) using the mongodump tool. mongodump provides two parameters to set the credentials:

        -u [ --username ] arg    username
        -p [ --password ] arg    password

    Unfortunately they don't provide any parameter to read the password from stdin, so every time I run this command, everyone on the server can read the password (e.g. by using ps aux). The only workaround I have found is stopping the database and directly accessing the database files using the --dbpath parameter. Is there any other solution which allows me to back up the mongodb database without stopping the server and without "publishing" my password? I am using Debian squeeze 6.0.5 amd64 with mongodb 1.4.4-3.
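
    For what it's worth, the rewritten MongoDB Database Tools (the modern 100.x series, far newer than squeeze's 1.4.4) added a --config option for exactly this; a sketch assuming such a version is available:

        # /root/mongodump.yaml, chmod 600, keeps the secret off the command line:
        #   password: s3cret
        mongodump --username backup --config /root/mongodump.yaml \
                  --out /backup/$(date +%F)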

  • How do I change the .bash_history file location?

    - by Brian Graham
    I'm running CentOS 6.x and want to move the .bash_history to a different location. The home directories of my users are (because I run a VPS) in /var/www/vhost/<domain>.<tld> which is FTP accessible (and it should be). Because of this, I have changed the AuthorizedKeysFile for SSH connections out of the normal ~/.ssh/authorized_keys since FTP connections would easily be able to locate them. At the same time I want to move the .bash_history file to /home/%u/.bash_history where %u is the current user.
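
    Bash takes the history location from HISTFILE, so a profile drop-in can relocate it; a sketch assuming all interactive shells are bash and the /home/<user> directories exist and are writable:

        # /etc/profile.d/histfile.sh
        export HISTFILE="/home/$(id -un)/.bash_history"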

  • Write to stdin of a running process using pipe

    - by aditya
    I am in a similar situation as in this post, but I couldn't get the solution provided there to work in my situation, as the answer seems related to that question only. In particular, I couldn't understand the purpose of:

        cat my.fifo | nc remotehost.tld 10000

    In my case, I have a process running and waiting for input. How can I send input to that process using named pipes? I've tried:

        echo 'h' > /proc/PID/fd/0

    but it just displays 'h' on the process's window.
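
    A sketch of the usual FIFO pattern. Writing to /proc/PID/fd/0 writes to the terminal the process reads from, which is why the 'h' merely shows up on screen; the process has to be started with its stdin on the pipe (myprogram is a placeholder):

        mkfifo /tmp/ctl
        myprogram < /tmp/ctl &     # consumer reads its stdin from the FIFO
        exec 3> /tmp/ctl           # hold a writer open so the FIFO never sees EOF
        echo 'h' >&3               # this now reaches the program's stdin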

  • Cloning OpenVZ container

    - by Tiffany Walker
    I have an OpenVZ container on one host and I would like to clone it over to my server; both run SolusVM. I only have root access to my server and would like to host the container on my server now. Can I use rsync to clone the drive while the OS is running on both, using a command like this?

        rsync -uazPx --exclude='/boot' --exclude='/proc' --exclude='/dev' \
              --exclude='/lib' --exclude='/tmp' --exclude='/var/lock' \
              / [email protected]:/

    Are there any other areas I should probably not copy over?
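
    A hedged variant with the other kernel-managed trees excluded; note that excluding /lib, as above, would leave the target without its shared libraries unless they are restored separately (root@target is a placeholder):

        rsync -uazPx --dry-run \
              --exclude='/boot' --exclude='/proc' --exclude='/sys' \
              --exclude='/dev' --exclude='/tmp' --exclude='/run' \
              --exclude='/var/lock' / root@target:/
        # drop --dry-run once the reported file list looks sane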

  • Is there a good FAT driver for FUSE? (Lightweight, not mountlo)

    - by Vi
    The FUSE filesystem list shows FuseFat and FatFuse. Both are old: FatFuse is read-only, and FuseFat is non-buildable and probably depends on glib. Right now I'm using mountlo for the task (mounting USB drives in a generic way without root access or suid things, except for fusermount itself), but it looks too big for such a task. Is there a good vfat FUSE driver?

  • pcfg_openfile: unable to check htaccess file, ensure it is readable

    - by rxt
    After moving a website folder on my local development machine to another drive, then moving it back, I got a 403 error. Most of this problem probably had to do with rights that got messed up. After deleting the code and restoring it from SVN, the rights seemed all right; the error stayed, however. The setup is a bit complex, as follows:

      • I have Ubuntu 10.4 as development machine, trying to mimic the server as much as possible
      • We use Eclipse + SVN and I create all projects in a local folder under my user account
      • In /var/www-vhosts I create folders for each vhost, like this one: test.localhost
      • test.local/index.php includes the index file of the project
      • test.local/.htaccess is a dynamic link to the htaccess file in a project subfolder

    I get the following error in the apache error log:

        [Thu Jul 08 15:55:56 2010] [crit] [client 127.0.0.1] (13)Permission denied: /var/www-vhosts/test.localhost/.htaccess pcfg_openfile: unable to check htaccess file, ensure it is readable

    The problem seems to be the .htaccess file, or the link to it:

      • When I empty the htaccess, nothing changes
      • When I remove the link, the index-include produces some output (in the apache error log)
      • When I remove the link and replace it with the actual file, I get another error:

        [Thu Jul 08 16:47:54 2010] [error] [client 127.0.0.1] Symbolic link not allowed or link target not accessible: /var/www-vhosts/test.localhost/test

    I'm lost here and don't know what to do next. Do you have any ideas what I can try? This setup has worked before, but I don't know what is different now.
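
    Some hedged checks: errno 13 is EACCES, a plain filesystem permission failure, so the Apache user must be able to traverse every directory component on the path to the link target (commonly a missing x bit on a home directory after a move):

        namei -l /var/www-vhosts/test.localhost/.htaccess    # perms per path component
        sudo -u www-data cat /var/www-vhosts/test.localhost/.htaccess   # direct read test
        # the second, symlink error additionally wants this in the vhost:
        #   Options +FollowSymLinks   (or +SymLinksIfOwnerMatch)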

  • Nginx Ubuntu Postfix Config - Can't connect to incoming IMAP server 'server not responding' but can send mail via outgoing using same details?

    - by daveaspinall
    I'm pretty new to server admin and especially nginx, but I seem to be getting on fine apart from accessing my mail via my iPhone. I've changed my domain to 'domain.com' here. The thing is, I can send mail via my outgoing (SMTP) server but can't connect to the incoming one; I just get the message "the mail server at mail.domain.com is not responding".

        # /etc/postfix/main.cf
        alias_database = hash:/etc/aliases
        alias_maps = hash:/etc/aliases
        append_dot_mydomain = no
        biff = no
        broken_sasl_auth_clients = yes
        config_directory = /etc/postfix
        home_mailbox = Maildir/
        inet_interfaces = all
        inet_protocols = all
        mailbox_command =
        mailbox_size_limit = 0
        mydestination = domain.com, mail.domain.com, localhost.com, , localhost, localhost.localdomain
        mydomain = domain.com
        myhostname = mail.domain.com
        mynetworks = 127.0.0.0/8 [::ffff:127.0.0.0]/104 [::1]/128
        myorigin = /etc/mailname
        recipient_delimiter = +
        relayhost =
        smtp_tls_note_starttls_offer = yes
        smtp_tls_security_level = may
        smtpd_banner = $myhostname ESMTP $mail_name (Ubuntu)
        smtpd_recipient_restrictions = permit_sasl_authenticated,permit_mynetworks,reject_unauth_destination
        smtpd_sasl_auth_enable = yes
        smtpd_sasl_local_domain =
        smtpd_sasl_security_options = noanonymous
        smtpd_tls_CAfile = /etc/ssl/certs/cacert.pem
        smtpd_tls_auth_only = no
        smtpd_tls_cert_file = /etc/ssl/certs/smtpd.crt
        smtpd_tls_key_file = /etc/ssl/private/smtpd.key
        smtpd_tls_loglevel = 1
        smtpd_tls_received_header = yes
        smtpd_tls_security_level = may
        smtpd_tls_session_cache_timeout = 3600s
        tls_random_source = dev:/dev/urandom

        telnet localhost 25
        ehlo locahost
        250-mail.domain.com
        250-PIPELINING
        250-SIZE 10240000
        250-VRFY
        250-ETRN
        250-STARTTLS
        250-AUTH LOGIN PLAIN
        250-AUTH=LOGIN PLAIN
        250-ENHANCEDSTATUSCODES
        250-8BITMIME
        250 DSN

    Using the following details to connect: username, password, hostname mail.domain.com, port 25.

        iptables --list
        Chain INPUT (policy ACCEPT)
        target     prot opt source               destination

        Chain FORWARD (policy ACCEPT)
        target     prot opt source               destination

        Chain OUTPUT (policy ACCEPT)
        target     prot opt source               destination

    I also sent mail to the server as a test and got this message, if it helps:

        Technical details of temporary failure: [mail.domain.com. (10): Connection refused]

    I also looked in /var/log/mail.log and it has multiple entries of:

        postfix/smtpd[12239]: connect from 5acefc9a.bb.sky.com[90.206.252.xxx]
        Mar 23 06:47:09 new-domain postfix/smtpd[12239]: lost connection after CONNECT from 5acefc9a.bb.sky.com[90.206.252.154]

    Notice new-domain, which is incorrect, but the server hostname and the hostname in the configs are correct? I recently moved servers and the host has set the primary domain on the service as new-domain.com, so this may be the issue? Like I said, it works to connect to the outgoing server, but incoming gets the not-responding error. Any ideas would be much appreciated!
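
    A hedged observation with diagnostics: Postfix only speaks SMTP, so the working port-25 connection proves nothing about IMAP; a daemon such as Dovecot has to be listening on 143/993 before an iPhone's incoming server settings can connect:

        netstat -tlnp | grep -E ':(143|993) '    # is any IMAP daemon listening?
        telnet mail.domain.com 143               # should answer '* OK ...' if so
        apt-get install dovecot-imapd            # one way to provide IMAP on Ubuntu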

  • CentOS 5.8 - Can't login to tty1 as root after updates?

    - by slashp
    I've run a yum update on my CentOS 5.8 box and now I am unable to log into the console as root. Basically what happens is I receive the login prompt, enter the correct username and password, and am immediately spat back to the login prompt. If I enter an incorrect password, I am told the password is incorrect, so I know that I am using the proper credentials. The only log I can seem to find of what's going on is /var/log/secure, which simply contains:

        15:33:41 centosbox login: pam_unix(login:session): session opened for user root by (uid=0)
        15:33:41 centosbox login: ROOT LOGIN ON tty1
        15:33:42 centosbox login: pam_unix(login:session): session closed for user root

    The shell is never spawned. I've checked my inittab, which looks like so:

        1:2345:respawn:/sbin/mingetty tty1
        2:2345:respawn:/sbin/mingetty tty2
        3:2345:respawn:/sbin/mingetty tty3
        4:2345:respawn:/sbin/mingetty tty4
        5:2345:respawn:/sbin/mingetty tty5
        6:2345:respawn:/sbin/mingetty tty6

    And my /etc/passwd, which properly has bash listed for the root user:

        root:x:0:0:root:/root:/bin/bash

    Permissions on /tmp (1777) and /root (750) look fine. I've attempted re-installing bash, pam, and mingetty to no avail, and confirmed /bin/login exists. Any thoughts would be greatly appreciated. Thanks!! -slashp
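
    A few hedged checks for a session that opens and closes within a second (run from a still-working shell, e.g. over ssh):

        bash -lc 'echo login shell ok'     # does a login shell survive startup?
        grep -nE '\bexit\b' /etc/profile /etc/profile.d/*.sh /root/.bash_profile
        getenforce                         # did the update flip SELinux to Enforcing?
        touch /.autorelabel                # if so, a full relabel on reboot may help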

  • Setting umask for all users

    - by Yarin
    I'm trying to set the default umask to 002 for all users, including root, on my CentOS box. According to this and other answers, this can be achieved by editing /etc/profile. However the comments at the top of that file say:

        It's NOT a good idea to change this file unless you know what you are
        doing. It's much better to create a custom.sh shell script in
        /etc/profile.d/ to make custom changes to your environment, as this
        will prevent the need for merging in future updates.

    So I went ahead and created the file /etc/profile.d/myapp.sh with the single line:

        umask 002

    Now, when I create a file logged in as root, the file is born with 664 permissions, the way I had hoped. But files created by my Apache wsgi application, or files created with sudo, still default to 644 permissions:

      • $ touch newfile (as root): Result = 664 (works)
      • $ sudo touch newfile: Result = 644 (doesn't work)
      • Files created by Apache wsgi app: Result = 644 (doesn't work)
      • Files created by Python's RotatingFileHandler: Result = 644 (doesn't work)

    Why is this happening, and how can I ensure 664 file permissions system-wide, no matter what creates the file? UPDATE: I ended up finding a cleaner solution to this on a per-directory basis using ACLs, which I describe here.
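
    The pattern in those results is that /etc/profile.d is only sourced by login shells, which sudo, Apache and a wsgi daemon are not. The ACL route the update alludes to, sketched for a hypothetical directory on a filesystem mounted with ACL support:

        setfacl -R -m d:u::rwX,d:g::rwX,d:o::rX /srv/shared   # default ACL entries
        getfacl /srv/shared                                   # verify the d: lines
        # new files created inside by any process now inherit group-writable bits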

  • free -m output: should I be concerned about this server's low memory?

    - by Michael
    This is the output of free -m on a production database machine (MySQL). 83 MB free looks pretty bad, but I assume the buffer/cache will be used instead of swap?

        [admin@db1 www]$ free -m
                     total       used       free     shared    buffers     cached
        Mem:         16053      15970         83          0        122       5343
        -/+ buffers/cache:      10504       5549
        Swap:         2047          0       2047

    top output sorted by memory:

        top - 10:51:35 up 140 days, 7:58, 1 user, load average: 2.01, 1.47, 1.23
        Tasks: 129 total, 1 running, 128 sleeping, 0 stopped, 0 zombie
        Cpu(s): 6.5%us, 1.2%sy, 0.0%ni, 60.2%id, 31.5%wa, 0.2%hi, 0.5%si, 0.0%st
        Mem: 16439060k total, 16353940k used, 85120k free, 122056k buffers
        Swap: 2096472k total, 104k used, 2096368k free, 5461160k cached

          PID USER      PR  NI  VIRT  RES  SHR S %CPU %MEM    TIME+  COMMAND
        20757 mysql     15   0 10.2g 9.7g 5440 S 29.0 61.6 28588:24 mysqld
        16610 root      15   0  184m  18m 4340 S  0.0  0.1   0:32.89 sysshepd
         9394 root      15   0  154m 8336 4244 S  0.0  0.1   0:12.20 snmpd
        17481 ntp       15   0 23416 5044 3916 S  0.0  0.0   0:02.32 ntpd
         2000 root       5 -10 12652 4464 3184 S  0.0  0.0   0:00.00 iscsid
         8768 root      15   0 90164 3376 2644 S  0.0  0.0   0:00.01 sshd
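
    A hedged reading: the -/+ buffers/cache row already subtracts the page cache, so roughly 5.5 GB is reclaimable and swap is untouched; that figure can be pulled out directly:

        free -m | awk '/buffers\/cache/ {print "available (MB):", $4}'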

  • Moving a lot of small files between servers using rsync

    - by Adirael
    Hello guys, I'm moving a lot of files (about 2 million) between two servers in different locations using rsync over ssh. It seems to work fine, but I just realised I'm losing some files in the process. I got server 1, with the original data, and server 2, with the copy. Server 1 runs CentOS 5 and Server 2 runs Ubuntu 10. I'm doing the transfer from server 2's command line like this:

        rsync -e ssh -avzn usr@server1:/remote/path /local/path

    The first file movement I did using tar, but I didn't think of piping it through ssh and it failed because the disk on server 1 was almost full, so I transferred it anyway (it was about 200GB) and got about 80% of the files. Then I piped another tar with the rest of the files (they're in folders; I got 100 folders with about 30 subfolders each, with files inside) and now I got everything on server 2. I wanted to be sure, so my two options are getting the md5sum of all the files and checking them, or running an rsync on server 2 against server 1, which is what I did. It got some missing stuff and now it says there's nothing more to do (DRY RUN). But I have at least two files that are missing inside a subfolder. I ran that same rsync on that folder, but it's still a dry run. Am I doing something wrong? Thanks, and sorry for the wall of text.
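
    A sketch worth checking against the command above: the 'n' in -avzn is --dry-run, so that invocation only reports what would be copied; dropping it transfers for real, and --checksum compares content instead of size and mtime:

        rsync -e ssh -avz --checksum usr@server1:/remote/path /local/path
        # afterwards, a file count on both ends is a cheap sanity check:
        find /local/path -type f | wc -l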

  • What to do when Ctrl-C can't kill a process?

    - by Dustin Boswell
    Ctrl-C doesn't always work to kill the current process (for instance, if that process is busy in certain network operations). In that case, you just see "^C" by your cursor, and can't do much else. What's the easiest way to force that process to die now without losing my terminal? Summary of answers below:

      • Usually, you can Ctrl-Z to put the process to sleep, and then do "kill -9 process-pid", where you find the process's pid with 'ps' and other tools.
      • On Bash (and possibly other shells) you can do "kill -9 %1" (or '%N' in general), which is easier.
      • If Ctrl-Z doesn't work, you'll have to open another terminal and kill from there.
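
    The summarised sequence as a quick sketch:

        # in the stuck terminal, press Ctrl-Z to suspend the foreground job
        jobs -l          # list job numbers with their PIDs
        kill -9 %1       # SIGKILL job 1 (or: kill -9 <pid>)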

  • I have a perl script that is supposed to run indefinitely. It's being killed... how do I determine who or what kills it?

    - by John O
    I run the perl script in screen (I can log in and check debug output). Nothing in the logic of the script should be capable of killing it quite this dead. I'm one of only two people with access to the server, and the other guy swears that it isn't him (and we both have quite a bit of money riding on it continuing to run without a hitch). I have no reason to believe that some hacker has managed to get a shell or anything like that. I have very little reason to suspect the admins of the host operation (bandwidth/cpu-wise, this script is pretty lightweight). Screen continues to run, but at the end of the output of the perl script I see "Killed" and it has dropped back to a prompt. How do I go about testing what is whacking the damn thing? I've checked crontab, nothing in there that would kill random/non-random processes. Nothing in any of the log files gives any hint. It will run from 2 to 8 hours, it would seem (and on my mac at home, it will run well over 24 hours without a problem). The server is running Ubuntu version something or other, I can look that up if it matters.
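
    A bare "Killed" printed by the shell means SIGKILL, and the kernel's OOM killer is the usual unattended sender worth ruling out first; hedged checks (log paths assume Ubuntu defaults, <pid> is a placeholder):

        grep -i 'out of memory\|oom' /var/log/kern.log /var/log/syslog
        dmesg | grep -i 'killed process'
        # watch the script's memory over time; a slow leak fits a 2-8 hour lifespan
        while sleep 60; do ps -o rss=,vsz=,etime= -p <pid>; done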

  • Unable to set initcwnd on a Hetzner server

    - by Sergi
    We just ordered a bunch of Hetzner EX40SSD servers with the minimal Debian install image that they provide, and everything is just fine except that, looking at tcpdumps from various locations to fine-tune the network, the initcwnd param seems to be stuck at 6 no matter how we change it. By default, Debian 3.2 kernels should have that setting at 10, so it's pretty strange. Is it possible that the NIC driver or a custom setting in the Hetzner Debian image is limiting this param? Even if we set it to 4, like the old kernel default, it doesn't work. Any ideas would be much appreciated! Does anyone know if the NIC drivers provided by default by Debian have some kind of limitation? In a long thread at http://www.webhostingtalk.com/showthread.php?t=1200617&highlight=hetzner they talk about a page http://wiki.hetzner.de/index.php/Installation_des_r8168-Treibers/en where Hetzner states that the included Realtek r8168 driver is not working properly, but nowhere do they say that the initcwnd could be affected. Tomorrow I will try to install a CentOS image and see if Debian is the problem... Last resort would be to install a custom Debian image, but that is a pain in the ass! Thanks!
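
    For reference, the route-based way of setting it, which bypasses any distro default (gateway and device are placeholders; a tcpdump of a fresh handshake is the real verification):

        ip route show                        # note the current default route
        ip route change default via 192.0.2.1 dev eth0 initcwnd 10
        tcpdump -n -i eth0 'tcp[tcpflags] & tcp-syn != 0'   # watch new connections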

  • How to debug an old init.d script under systemd?

    - by Gene Vincent
    I have an older init.d script to start my application. It worked fine under older versions of SuSE, but fails on openSUSE 12.3. The strange thing is that cd /etc/init.d ; ./script start works fine, while /etc/init.d/script start shows a redirection to systemctl but doesn't start my application (and also doesn't show any output from the init.d script). I don't see any log entries showing me what goes wrong. The only entry I see is in /var/log/messages, saying the application was started. How do I debug this?
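
    Hedged diagnostics: when the path form is used, systemd's sysv compatibility runs the script as a unit, from / and with a clean environment, which fits "works only after cd /etc/init.d":

        systemctl status script.service     # unit name mirrors /etc/init.d/script
        journalctl -u script.service        # whatever output systemd captured
        cd / && env -i /etc/init.d/script start   # reproduce the clean-env, cwd=/ case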

  • Nginx server block not working? - Other vhosts run fine, just this one doesn't

    - by daveaspinall
    I'm running a Debian 6 LEMP server with multiple virtual hosts and everything has been fine for 5 or so sites. But I've just tried adding another and for some reason it's just not working. By not working I mean in Chrome I get the "Oops! Google Chrome could not connect to subdomain.domain.net" error. I've changed the domain to subdomain.example.com for security and the IP is masked. Hosts file (I have multiple sub domains):

        xxx.xxx.xx.xxx *.example.com *.example

    Server block:

        server {
            listen 80;
            server_name subdomain.example.com;

            access_log /srv/www/subdomain.example.com/logs/access.log;
            error_log /srv/www/subdomain.example.com/logs/error.log;
            root /srv/www/subdomain.example.com/public_html;

            location / {
                index index.html index.htm index.php;
            }

            location ~ \.php$ {
                include fastcgi_params;
                fastcgi_pass 127.0.0.1:9000;
                fastcgi_index index.php;
                fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
            }
        }

    I've created the symlink to the file in the /etc/nginx/sites-enabled/ directory and restarted/reloaded nginx. DNS seems fine:

        # ping -c 2 subdomain
        PING subdomain.example.com (xxx.xxx.xx.xxx) 56(84) bytes of data.
        64 bytes from www.example.com (xxx.xxx.xx.xxx): icmp_req=1 ttl=64 time=0.035 ms
        64 bytes from www.example.com (xxx.xxx.xx.xxx): icmp_req=2 ttl=64 time=0.048 ms

    Checking the file with cURL works:

        # curl http://subdomain.example.com
        HTML - OK

    Emptied browser cache but still no dice. Anything I'm missing? Like I mentioned, I have a few sites running fine on the server currently, so php-fpm etc. are working. Any help would be much appreciated! Cheers, Dave
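
    A hedged observation plus client-side checks: the ping and curl tests both run on the server, where /etc/hosts short-circuits DNS (and note glibc does not expand wildcards like *.example.com in /etc/hosts anyway), so the browser failure points at public DNS for the new subdomain:

        nslookup subdomain.example.com 8.8.8.8    # does the record exist publicly?
        # bypass DNS from any client to test the vhost itself:
        curl -v --resolve subdomain.example.com:80:xxx.xxx.xx.xxx http://subdomain.example.com/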

  • Model M Keyboard inputs incorrect characters after logging in to Fedora

    - by mickburkejnr
    I recently bought a 24-year-old IBM Model M keyboard. From what I gather, it'd been left on a shelf for the last 5 years, so you can imagine the amount of dust, dirt and crap that was on it. Before cleaning it, I plugged it in to my laptop (running Fedora 17) using a PS/2 to USB adapter. What I found was that, while it still works, the keys I press don't correspond to what is displayed on the screen. So for example, when I type S on the keyboard, I get ß displayed on the screen instead. At the time, I put this down to the adapter not working properly. Since then, I stripped the keys off the keyboard and cleaned the whole thing. It looks like it's just come out of a box! I then plugged it in to my computer (also running Fedora 17) via a standard PS/2 plug. The computer loaded up to the login screen, and I typed in my password. Pressed enter, and I logged straight in to my machine. At this point, I opened up a text editor and started typing some stuff. To my horror, the keystrokes I was entering weren't coming up as intended. What came up instead were characters that would map to the pressed keys but only under a different keyboard language setting. I opened up a program to see what keyboard language had been selected, and the correct one for the keyboard was selected (which is UK in my case). I opened up a window that would show what characters mapped to what keys, and I pressed every single key on the keyboard, and every corresponding block representing each key lit up. I went back to the text editor to try again, but I was still getting these random characters. What I know is that at the login screen the keyboard must have entered the correct characters, otherwise I wouldn't have been able to log in. Furthermore, keys that don't respond in a text editor are sending signals to the computer, as illustrated in that keyboard utility. The question is why random characters are displayed when they really shouldn't be? Would this be a hardware fault or a software issue?
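
    Getting ß for S reads like a stray layout or variant being applied to the X session but not to the login screen, which would explain the password still working; some hedged checks from within the session:

        setxkbmap -query    # the layout the running session actually uses
        setxkbmap gb        # force the UK layout for this session
        xev                 # watch the keysym reported for each key press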
