Search Results

Search found 8381 results on 336 pages for 'bad neighbor'.

Page 12/336

  • Recovering from bad ownership

    - by Christian Sciberras
    I was going to change the ownership of a directory to apache:apache, but I ended up running:

        chown -R apache:apache /

    Bad! Very bad! I knew what was going on when it started saying:

        chown: changing ownership of `/proc/2694/fd/48': Permission denied

    That's when I stopped everything (Ctrl+C). The system is a server running VirtualBox with a CentOS 5 guest, and the problem happened inside the VM. Currently everything seems to be working, but I have not restarted the system yet, and to be honest I'm afraid that something will break if I do. I don't know what order chown walks the filesystem in; should I be concerned and assume something will break after a reboot? Is there a way to recover from this problem without having to rely on backups? I do have a daily one, but I thought there might be a simpler way out.
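
    A possible starting point, assuming an RPM-based system like this CentOS 5 guest (a sketch, not a guaranteed fix): RPM records the owner, group, and mode of every packaged file, so those can be reset from the package database. Anything not owned by a package (home directories, web roots, and so on) still has to be fixed by hand.

        # Reset owners/groups of all packaged files, plus permissions,
        # since a recursive chown also clears setuid/setgid bits:
        for pkg in $(rpm -qa); do
            rpm --setugids "$pkg"
            rpm --setperms "$pkg"
        done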

    Read the article

  • What are good and bad jitter times for a LAN

    - by garyb32234234
    I've just run jperf (a frontend to iperf) on our network between two workstations, and it recorded jitter between 0.033 ms and 0.048 ms. Is this good or bad? Are there more variables I would need to consider to make the decision? EDIT: It's a TCP/IP Ethernet LAN: 43 PCs and 1 server, a 100 Mbit/s main switch, and various small 8-port switches; the test was done using UDP, and it's a Windows domain. I want to install a few VoIP softphones on the workstations and see how many I can run that work reliably, so I'm testing a few different workstations around the network to find the best-quality network paths. I will also change some equipment if I identify bad connections.
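
    For reference, a plain-iperf UDP jitter test looks something like the sketch below (the server address and traffic profile are illustrative; -b 100k with 160-byte datagrams roughly mimics a single G.711 VoIP stream):

        # On the receiving workstation:
        iperf -s -u
        # On the sending workstation, for 60 seconds; the final report
        # includes measured jitter and packet loss:
        iperf -c 192.168.0.10 -u -b 100k -l 160 -t 60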

    Read the article

  • Repair BAD Sectors or Buy a new HDD?

    - by Nehal J. Wani
    I have a Seagate internal hard disk drive in a Dell Inspiron N5010 laptop (warranty has expired). I recently opened the laptop up and cleaned it, and it worked normally after waking up from hibernation. However, when I restarted it, it got stuck on the Windows loading screen, then tried to boot from the Dell recovery partition but failed with this error:

        Windows has encountered a problem communicating with a device connected to
        your computer. This error can be caused by unplugging a removable storage
        device such as an external USB drive while the device is in use, or by
        faulty hardware such as a hard drive or CD-ROM drive that is failing. Make
        sure any removable storage is properly connected and then restart your
        computer. If you continue to receive this error message, contact the
        hardware manufacturer.
        Status: 0xc00000e9
        Info: An unexpected I/O error has occurred.

    While cleaning, I mistakenly touched the round silvery thing at the bottom of the HDD; I don't know whether this caused the problem or not. Since I also have Fedora installed on the same HDD, I can boot from it, but it shows strange read errors when I ask it to mount the Windows partitions, and the disk utility says that the hard disk has many bad sectors and needs to be replaced. I downloaded SeaTools from the Seagate website and ran the long test, giving it permission to repair the first 100 errors, which it did successfully.

    Now I am unsure what to do. Costs:

        a.   Internal HDD, 500 GB: Rs 3518
        b.1  External HDD, 500 GB: Rs 3472
        b.2  External HDD, 1 TB:   Rs 5500
        c.   Internal-to-external converter case: Rs 650

    I have the following options:

    (i) Buy an external HDD and back up my data, then try to repair the bad sectors of the HDD. Then two cases arise: (a) my internal HDD gets repaired (almost), or (b) it doesn't, in which case I need to buy another internal HDD and replace the damaged one, OR break the seal of the external drive and put it inside my laptop as the internal disk. Breaking the case involves risks.

    (ii) Buy an internal HDD and an internal-to-external converter case (not very reliable) and back up my data, then try to repair the bad sectors. Then two cases arise: (a) my internal HDD gets repaired (almost), or (b) it doesn't, in which case I just put in the new internal HDD I bought.

    Experts, please guide me: which will be the best value-for-money option? Also, if an HDD is failing, should I avoid even reading from it, since reading risks further sector failures? What I mean is: is it wrong to read from the HDD without taking a backup first?
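
    One general aside (an assumption about tooling, not something from the question): when a drive is already throwing read errors, imaging it once with GNU ddrescue from a live Linux environment is usually gentler than copying files around, because it skips bad areas on the first pass and only retries them at the end:

        # First pass: copy everything readable, skip bad areas quickly.
        ddrescue -f -n /dev/sda /dev/sdb rescue.log
        # Second pass: go back and retry the bad areas a few times.
        ddrescue -f -r3 /dev/sda /dev/sdb rescue.log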

    Read the article

  • Why is nesting or piggybacking errors within errors bad in general?

    - by dietbuddha
    Why is nesting or piggybacking errors within errors bad in general? It seems bad to me intuitively, but I'm suspicious in that I cannot adequately articulate why. This may be because it is not bad in general, only in specific instances. Why is it detrimental to design error/exception handling in such a way? The specific instance is a REST service. There is a desire by some to use HTTP errors (specifically the 500 response) as a way to indicate any problem with specific instances of a resource. An example of an instance resource in this case would be:

        http://server/ticket/80   # instance
        http://server/ticket      # not an instance

    So this is the behavior that is being proposed: if ticket 80 does not exist, return an HTTP response code of 500; within the body of the error, return the "real" error as an additional error code and description. If the ticket resource itself doesn't exist, return a response code of 404.
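
    To make the contrast concrete, a sketch of the two designs (the JSON bodies are illustrative, not from the question):

        # Proposed: a resource-level "not found" wrapped inside a generic 500
        HTTP/1.1 500 Internal Server Error
        {"error_code": "TICKET_NOT_FOUND", "description": "ticket 80 does not exist"}

        # Plain HTTP already expresses the same thing directly
        HTTP/1.1 404 Not Found
        {"description": "ticket 80 does not exist"}

    The practical cost of the first form is that every client and intermediary (caches, monitors, load balancers) must parse the body to distinguish a genuine server fault from a merely missing ticket.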

    Read the article

  • MPLS basic configuration

    - by Vineet Menon
    I want to test out MPLS VPN in my lab. I have 3 routers: 2 PEs and 1 P router, all Cisco 2921, connected something like this:

         -----                          ---                          -----
        | PE1 |.1____192.168.1.0____.2 | P | .2____192.168.2.0____.1| PE2 |
         -----                          ---                          -----
        lo0: 10.1.1.1              lo0: 10.1.1.2              lo0: 10.1.1.3

    Here's the configuration file for each of them.

    PE1 router:

        hostname PE1
        !
        no ipv6 cef
        ip source-route
        ip cef
        !
        ip vrf cust1
         rd 100:100
         route-target export 100:100
         route-target import 100:100
        !
        interface Loopback0
         ip address 10.1.1.1 255.255.255.255
        !
        interface GigabitEthernet0/0
         ip address 192.168.1.1 255.255.255.0
         duplex auto
         speed auto
        !
        interface GigabitEthernet0/1
         ip vrf forwarding cust1
         ip address 172.16.1.1 255.255.255.0
         duplex auto
         speed auto
        !
        router ospf 1
         network 10.1.1.1 0.0.0.0 area 0
         network 192.168.1.0 0.0.0.255 area 0
        !
        router bgp 100
         bgp log-neighbor-changes
         neighbor 10.1.1.3 remote-as 100
         neighbor 10.1.1.3 update-source Loopback0
         neighbor 172.16.1.2 remote-as 65001
         !
         address-family vpnv4
          neighbor 10.1.1.3 activate
          neighbor 10.1.1.3 send-community extended
         exit-address-family

    For the P router:

        hostname P
        !
        no ipv6 cef
        ip source-route
        ip cef
        !
        interface Loopback0
         ip address 10.1.1.2 255.255.255.255
        !
        interface GigabitEthernet0/1
         ip address 192.168.1.2 255.255.255.0
         duplex auto
         speed auto
        !
        interface GigabitEthernet0/2
         ip address 192.168.2.2 255.255.255.0
         duplex auto
         speed auto
        !
        router ospf 1
         network 10.1.1.2 0.0.0.0 area 0
         network 192.168.1.0 0.0.0.255 area 0
         network 192.168.2.0 0.0.0.255 area 0
        !

    For the PE2 router:

        hostname PE2
        !
        no ipv6 cef
        ip source-route
        ip cef
        !
        ip vrf cust1
         rd 100:100
         route-target export 100:100
         route-target import 100:100
        !
        interface Loopback0
         ip address 10.1.1.3 255.255.255.0
        !
        interface GigabitEthernet0/0
         ip address 192.168.2.1 255.255.255.0
         duplex auto
         speed auto
        !
        interface GigabitEthernet0/1
         ip vrf forwarding cust1
         ip address 172.16.2.1 255.255.255.0
         duplex auto
         speed auto
        !
        router ospf 1
         network 10.1.1.3 0.0.0.0 area 0
         network 192.168.2.0 0.0.0.255 area 0
        !
        router bgp 100
         bgp log-neighbor-changes
         neighbor 10.1.1.1 remote-as 100
         neighbor 10.1.1.1 update-source Loopback0
         neighbor 172.16.2.2 remote-as 65001
         !
         address-family vpnv4
          neighbor 10.1.1.1 activate
          neighbor 10.1.1.1 send-community extended
         exit-address-family
        !

    I am following this article from Cisco, but things are not working properly. Any help would be appreciated.
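
    A few IOS verification commands that may help narrow down where the control plane breaks (a hedged checklist, not from the question; the interface and VRF names follow the configuration above):

        show mpls interfaces              ! is MPLS actually enabled on the core links?
        show mpls ldp neighbor            ! LDP sessions on PE1-P and P-PE2
        show ip bgp vpnv4 all summary     ! is the PE1-PE2 iBGP VPNv4 session up?
        show ip route vrf cust1           ! are customer routes reaching the VRF?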

    Read the article

  • MySQL SSL: bad other signature confirmation

    - by samJL
    I am trying to enable SSL connections for MySQL. SSL shows as enabled in MySQL, but I can't make any connections due to this error:

        ERROR 2026 (HY000): SSL connection error: ASN: bad other signature confirmation

    I am running the following:

        Ubuntu version:  14.04.1 LTS (GNU/Linux 3.13.0-34-generic x86_64)
        MySQL version:   5.5.38-0ubuntu0.14.04.1
        OpenSSL version: OpenSSL 1.0.1f 6 Jan 2014

    I used these commands to generate my certificates (all generated in /etc/mysql):

        openssl genrsa -out ca-key.pem 2048
        openssl req -new -x509 -nodes -days 3650 -key ca-key.pem -out ca-cert.pem -subj "/C=US/ST=NY/O=MyCompany/CN=ca"
        openssl req -newkey rsa:2048 -nodes -days 3650 -keyout server-key.pem -out server-req.pem -subj "/C=US/ST=NY/O=MyCompany/CN=server"
        openssl rsa -in server-key.pem -out server-key.pem
        openssl x509 -req -in server-req.pem -CA ca-cert.pem -CAkey ca-key.pem -set_serial 01 -out server-cert.pem
        openssl req -newkey rsa:2048 -nodes -days 3650 -keyout client-key.pem -out client-req.pem -subj "/C=US/ST=NY/O=MyCompany/CN=client"
        openssl rsa -in client-key.pem -out client-key.pem
        openssl x509 -req -in client-req.pem -CA ca-cert.pem -CAkey ca-key.pem -set_serial 01 -out client-cert.pem

    I put the following in my.cnf:

        [mysqld]
        ssl-ca=/etc/mysql/ca-cert.pem
        ssl-cert=/etc/mysql/server-cert.pem
        ssl-key=/etc/mysql/server-key.pem

    When I attempt to connect specifying the client certificates, I get the following error:

        mysql -uroot -ppassword --ssl-ca=/etc/mysql/ca-cert.pem --ssl-cert=/etc/mysql/client-cert.pem --ssl-key=/etc/mysql/client-key.pem
        ERROR 2026 (HY000): SSL connection error: ASN: bad other signature confirmation

    If I connect without SSL, I can see that MySQL has correctly loaded the certificates:

        mysql -uroot -ppassword --ssl=false
        mysql> SHOW VARIABLES LIKE '%ssl%';
        +---------------+----------------------------+
        | Variable_name | Value                      |
        +---------------+----------------------------+
        | have_openssl  | YES                        |
        | have_ssl      | YES                        |
        | ssl_ca        | /etc/mysql/ca-cert.pem     |
        | ssl_capath    |                            |
        | ssl_cert      | /etc/mysql/server-cert.pem |
        | ssl_cipher    |                            |
        | ssl_key       | /etc/mysql/server-key.pem  |
        +---------------+----------------------------+
        7 rows in set (0.00 sec)

    My generated certificates pass OpenSSL verification and modulus checks:

        openssl verify -CAfile ca-cert.pem server-cert.pem client-cert.pem
        server-cert.pem: OK
        client-cert.pem: OK

    What am I missing? I used this same process before on a different server and it worked, but that was Ubuntu 12.04 LTS with an older OpenSSL (I don't remember the version specifically). Has something changed with the latest OpenSSL? Any help would be appreciated!
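
    One hedged observation (an inference from the versions listed, not a confirmed diagnosis): stock MySQL 5.5 community builds link against yaSSL rather than OpenSSL, and yaSSL cannot validate SHA-256 signatures, while OpenSSL 1.0.1+ signs certificates with SHA-256 by default. Checking the signature algorithm, and re-signing with SHA-1 if it turns out to be SHA-256, would confirm or rule this out:

        openssl x509 -in /etc/mysql/server-cert.pem -noout -text | grep 'Signature Algorithm'
        # If it reports sha256WithRSAEncryption, try re-signing the request
        # with an explicit SHA-1 digest:
        openssl x509 -req -sha1 -in server-req.pem -CA ca-cert.pem -CAkey ca-key.pem \
            -set_serial 01 -out server-cert.pem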

    Read the article

  • another "SSH connect to host github.com port 22: Bad file number"

    - by Mariusz
    Hello. I have a problem with my first-time SSH connection. Yes, I've already been through your guides, already tried your "Dealing with firewalls and proxies" article, and the problem still occurs. I am using Win7 32-bit, Windows Firewall is disabled, I don't have any third-party firewalls, ESET NOD32 Antivirus is not blocking any ports, and I am not using any proxy (not even a local one). Here are the logs.

    Ordinary SSH connection attempt:

        C:\Users\Mariusz>ssh -vvv git@github.com
        OpenSSH_4.6p1, OpenSSL 0.9.8e 23 Feb 2007
        debug2: ssh_connect: needpriv 0
        debug1: Connecting to github.com [207.97.227.239] port 22.
        debug1: connect to address 207.97.227.239 port 22: Not owner
        ssh: connect to host github.com port 22: Bad file number

    NCAT connection attempt:

        C:\Users\Mariusz>ncat github.com 22
        Strange connect error from 207.97.227.239 (10013): No error

    (10013 = WSAEACCES.) I think the method called "smart HTTP support" won't be usable for me because I haven't created the repo yet. I have just run git init locally and got as far as git push, which returns the same thing:

        ssh: connect to host github.com port 22: Bad file number
        fatal: The remote end hung up unexpectedly

    As for the corkscrew method (the first article from your guide): while PuTTYing (with Pageant in the background), after entering the login an error occurs (in a message box): "Disconnected: No supported authentication methods available", and the terminal prints "Server refused our key". I generated my key correctly, using ssh-keygen. I have not tried the method of editing ~/.ssh/config yet, because I assumed that since I haven't pushed anything to my remote repo, I won't be able to clone anything either. The ssh-forwarding method is not for me, because it "requires access to an external ssh server" and I don't have one at this time. What else could I do? Thanks in advance for any help. Mariusz.
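
    One workaround that GitHub documents (hedged here, since it depends on outbound port 443 being allowed) is tunnelling SSH over the HTTPS port. In ~/.ssh/config:

        Host github.com
            Hostname ssh.github.com
            Port 443
            User git

        # then test the connection:
        ssh -T git@github.com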

    Read the article

  • 502 Bad Gateway error after failed requests using Passenger

    - by Nicolas Buduroi
    I've got a staging server running nginx 1.0.5, using Rails 3.1 under Passenger 3.0.9. The problem is that a request sent just after one where there's an application error returns 502 Bad Gateway. To test it I've set up a simple controller with an action that just raises a dummy exception. One request will show the Rails error message, the next one will show nginx's 502 Bad Gateway error, then it goes back to the Rails application error, and so on. While investigating this problem I've found that load testing the application (which increases the number of application processes) makes the issue disappear, until the extra processes are shut down, when it reappears. I've tried setting the passenger_min_instances option, but doing so doesn't change anything: in this case, each time an application error happens one instance is killed, whereas after load testing all instances are kept alive. P.S.: Some people on my team told me that they've seen the 502 error even when there's no application error, but I've not been able to reproduce that. Update: I just found out how to display the response status codes using ab, and most of them are 502!
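
    For what it's worth, the Passenger documentation pairs passenger_min_instances with passenger_pre_start, since the minimum only takes effect once the application has been spawned. A sketch of the relevant nginx configuration (hostname and paths are illustrative, not from the question):

        http {
            passenger_max_pool_size 6;

            server {
                listen 80;
                server_name staging.example.com;
                root /var/www/app/public;
                passenger_enabled on;
                # keep at least 3 processes alive instead of letting
                # Passenger reap idle ones down to zero
                passenger_min_instances 3;
            }

            # spawn the app at nginx startup, not on the first request
            passenger_pre_start http://staging.example.com/;
        }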

    Read the article

  • Network connection keeps dropping - bad hardware?

    - by Bill Sambrone
    Hello all, I've run into a bit of a wall with a client of mine. In an office of 20 people, he is the only one who experiences broken connections to his mapped network drives. I have everyone set up with about 6 mapped drives, all pointing to the same server (no DFS), and everyone else can access them lightning fast. The environment consists of a mix of Windows 7 and XP machines, all 32-bit. The server holding the data everyone maps to runs Server 2008 R2 and is a domain controller. We recently swapped out their old 10/100 switch for a shiny new Dell PowerConnect gigabit switch, and we have also replaced an old dying SonicWall with a shiny new one. Everything runs on an ESX host except for the DC, where everyone is getting data from. In my client's office, we have done the following:

        - Swapped out his computer (Win7 and XP box)
        - Swapped out the desktop switch in his office
        - Removed the desktop switch in his office
        - Changed out the network cable going to the wall
        - Ran 'net config server /autodisconnect:-1' on the server
        - Disabled remote differential compression on his current Win7 box

    When we swapped out his network cable, everything seemed fine for about 4 days. Normally I would get a phone call a couple of times per day letting me know that Outlook has crashed (there is a 9 GB PST living on the server he is always connected to), or that the software he runs from his L: drive has crashed. I almost thought I had this solved, but after we rebooted the DC the other night he suddenly couldn't stay connected to his mapped network drives for more than 10 minutes. When I ran 'net use' from the command prompt, it listed all the network drives, which were randomly in a state of 'OK', 'Disconnected', or 'Reconnecting'. What else should I try? Maybe there is bad wiring in the wall, the patch panel, or a bad port in the new switch I have in the server room?

    Read the article

  • Hard Disk Not Counting Reallocated Sectors

    - by MetaNova
    I have a drive that is reporting 45 current pending sectors. I have used badblocks to identify the sectors, and I have been trying to write zeros to them with dd. From what I understand, when I attempt to write data directly to a bad sector, it should trigger a reallocation, reducing the current pending sector count by one and increasing the reallocated sector count. However, on this disk both the Reallocated_Sector_Ct and Reallocated_Event_Count raw values are 0, and dd fails with I/O errors when I attempt to write zeros to the bad sectors. dd works fine, however, when I write to a good sector:

        # dd if=/dev/zero of=/dev/sdb bs=512 count=1 seek=217152
        dd: error writing ‘/dev/sdb’: Input/output error

    Does this mean that my drive, in some way, has no spare sectors to be used for reallocation? Is my drive just in general a terrible person? (The drive isn't actually mine, I'm helping a friend out. They might have just gotten a cheap drive or something.) In case it is relevant, here is the output of smartctl -i:

        Model Family:     Western Digital Caviar Green (AF)
        Device Model:     WDC WD15EARS-00Z5B1
        Serial Number:    WD-WMAVU3027748
        LU WWN Device Id: 5 0014ee 25998d213
        Firmware Version: 80.00A80
        User Capacity:    1,500,301,910,016 bytes [1.50 TB]
        Sector Size:      512 bytes logical/physical
        Device is:        In smartctl database [for details use: -P show]
        ATA Version is:   ATA8-ACS (minor revision not indicated)
        SATA Version is:  SATA 2.6, 3.0 Gb/s
        Local Time is:    Fri Oct 18 17:47:29 2013 CDT
        SMART support is: Available - device has SMART capability.
        SMART support is: Enabled

    UPDATE: I have run shred on the disk, which has caused Current_Pending_Sector to go to zero. However, Reallocated_Sector_Ct and Reallocated_Event_Count are still zero, and dd is now able to write data to the sectors it was previously unable to. This leaves me with several other questions:

        - Why aren't the reallocations being recorded by the disk? I'm assuming reallocation took place, as I can now write data directly to the sectors and couldn't before.
        - Why did shred cause reallocation and not dd? Does the fact that shred writes random data instead of just zeros make a difference?
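
    As an aside (a hedged suggestion, not something from the question): hdparm can poke a single suspect sector directly, bypassing the block layer, which sometimes succeeds in triggering a reallocation where dd fails:

        # Destroys the data in that one sector; the sector number matches
        # the dd example above.
        hdparm --write-sector 217152 --yes-i-know-what-i-am-doing /dev/sdb
        # Read a suspect sector without involving the page cache:
        hdparm --read-sector 217152 /dev/sdb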

    Read the article

  • Transfer disk contents *without* cloning tools

    - by Chris Cummins
    Is it possible to "clone" a disk which contains programs by performing a copy of all the disk contents (preserving file attributes) from source to destination disk, and unplugging the source disk and changing the drive letter of the destination disk to match that of the source? Context I have a two disk Windows 8 system with a system drive and a data drive. Recently, the data drive developed a number of bad sectors leading to IO errors. I have been sent a replacement drive so I simply need to clone the contents of this data drive onto the replacement. The drive contents include documents & media, user folders (My Documents and related), and some programs (games etc). Problem The problem is that the bad sectors on the source disk causes most disk cloning tools to fail with read errors. Attempted approaches include: Disk clone from live boot environment with Acronis True Image. Fails due to read errors. Disk clone from live boot environment with Clonezilla. Fails due to read errors. Disk clone using Roadkil's Unstoppable Copier. Fails due to hardware timeouts in the HDD (application hangs indefinitely). A straightforward copy from source to destination disk using FreeFileSync (preserving file attributes and metadata). This succeeds. So at the moment I have a replacement disk which contains all of the data from the original disk. Now all I need to is somehow get Windows to replace all references to the old disk to the new one. Is this possible by simply swapping the assigned drive letters? Any help would be greatly appreciated, thanks!

    Read the article

  • Mounting NAS share: Bad Address

    - by Korben
    I've run into a problem that I can't solve; hope you can help me with it. I have a QNAP TS-459U storage unit, with its own Linux, and a 'massive1' folder shared, which I need to mount on my Debian server. They are connected by a regular patch cord. The Debian server has two network interfaces: eth0 for the Internet, eth1 for the QNAP. So, I'm running this (where 169.254.100.100 is the IP of the QNAP's interface):

        mount -t cifs //169.254.100.100/massive1/ /mnt/storage -o user=admin

    The result I get (after entering the password) is:

        mount error(14): Bad address
        Refer to the mount.cifs(8) manual page (e.g. man mount.cifs)

    I've tried mount.cifs and smbmount, with and without the trailing '/' on the network share, and many other variations of that command, and it's always:

        mount error(14): Bad address

    The funny thing is that when I was in the data center, I connected my netbook (running Fedora 16) to the QNAP by the same scheme, and it connected without any problems; I could read and write files on the QNAP's share! So I'm really stuck with the Debian box. I can't understand what difference from Fedora is causing this error. Yes, I've used Google and couldn't find any useful info. Ping to the QNAP's IP works, I can log into the QNAP's Linux over ssh, and telnet to port 139 works. This is the network interface configuration I use on Debian:

        IP:      169.254.100.1
        Netmask: 255.255.0.0

    The only difference between connecting from Fedora and Debian is that on Fedora I had added a gateway, 169.254.100.129, but ping to that IP doesn't work, so I don't think it's necessary at all.

    P.S.

        ~# cat /etc/debian_version
        wheezy/sid
        ~# uname -a
        Linux host 2.6.32-5-openvz-amd64 #1 SMP Mon Mar 7 22:25:57 UTC 2011 x86_64 GNU/Linux
        ~# smbtree
        WORKGROUP
        \\HOST                 host server
            \\HOST\IPC$        IPC Service (host server)
            \\HOST\print$      Printer Drivers
        NAS
        \\MASSIVE1             NAS Server
            \\MASSIVE1\IPC$                   IPC Service (NAS Server)
            \\MASSIVE1\massive1
            \\MASSIVE1\Network Recycle Bin 1  [RAID5 Disk Volume: Drive 1 2 3 4]
            \\MASSIVE1\Public                 System default share
            \\MASSIVE1\Usb                    System default share
            \\MASSIVE1\Web                    System default share
            \\MASSIVE1\Recordings             System default share
            \\MASSIVE1\Download               System default share
            \\MASSIVE1\Multimedia             System default share

    Please help me with solving this strange issue. Thanks in advance.
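
    A few hedged things to try (the password is a placeholder; note also that this is an OpenVZ kernel, which matters because the cifs module has to be available on the host, not just inside the container):

        # spell out the full option names and force an explicit security mode
        mount -t cifs //169.254.100.100/massive1 /mnt/storage \
            -o username=admin,password=secret,sec=ntlm
        # check that the cifs module is actually loaded/loadable here
        modprobe cifs && lsmod | grep cifs
        # rule out the kernel client entirely by listing the share from userspace
        smbclient //169.254.100.100/massive1 -U admin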

    Read the article

  • Bad motherboard / controller / HDs?

    - by quidpro
    On a leased server, I am running into some timing issues with an application that requires precise timing. The server is a dual Xeon E5410 on a Supermicro X7DVL-3 motherboard under CentOS 5.5 x64. The application I am running is timer-sensitive and keeps sensing drift, whether under load or at idle, but especially under load. I did some investigating with atop and dd and found some mind-blowing numbers. Mind you, I am no Linux guru, but something sure seems out of whack. I ran:

        dd bs=4096 if=/dev/zero of=/bigtestfile

    to generate disk activity. Regardless of whether I wrote to sda or sdb, my DSK value in atop would go over 100%, at one time peaking at 1700%. Again, it does not matter if I am writing to sda or sdb:

        DSK | sdb | busy 675% | read 0 | write 110 | avio 78 ms |

    Here are the smartctl outputs:

        # smartctl -A /dev/sda
        smartctl version 5.38 [x86_64-redhat-linux-gnu] Copyright (C) 2002-8 Bruce Allen
        Home page is http://smartmontools.sourceforge.net/

        === START OF READ SMART DATA SECTION ===
        SMART Attributes Data Structure revision number: 16
        Vendor Specific SMART Attributes with Thresholds:
        ID# ATTRIBUTE_NAME          FLAG   VALUE WORST THRESH TYPE     UPDATED WHEN_FAILED RAW_VALUE
          1 Raw_Read_Error_Rate     0x000b 200   200   051    Pre-fail Always  -           0
          3 Spin_Up_Time            0x0007 165   165   021    Pre-fail Always  -           2750
          4 Start_Stop_Count        0x0032 100   100   040    Old_age  Always  -           21
          5 Reallocated_Sector_Ct   0x0033 200   200   140    Pre-fail Always  -           0
          7 Seek_Error_Rate         0x000a 200   200   051    Old_age  Always  -           0
          9 Power_On_Hours          0x0032 065   065   000    Old_age  Always  -           25831
         10 Spin_Retry_Count        0x0012 100   253   051    Old_age  Always  -           0
         11 Calibration_Retry_Count 0x0012 100   253   051    Old_age  Always  -           0
         12 Power_Cycle_Count       0x0032 100   100   000    Old_age  Always  -           21
        194 Temperature_Celsius     0x0022 116   093   000    Old_age  Always  -           27
        196 Reallocated_Event_Count 0x0032 200   200   000    Old_age  Always  -           0
        197 Current_Pending_Sector  0x0012 200   200   000    Old_age  Always  -           0
        198 Offline_Uncorrectable   0x0012 200   200   000    Old_age  Always  -           0
        199 UDMA_CRC_Error_Count    0x000a 200   253   000    Old_age  Always  -           0
        200 Multi_Zone_Error_Rate   0x0008 200   200   051    Old_age  Offline -           0

        # smartctl -A /dev/sdb
        smartctl version 5.38 [x86_64-redhat-linux-gnu] Copyright (C) 2002-8 Bruce Allen
        Home page is http://smartmontools.sourceforge.net/

        === START OF READ SMART DATA SECTION ===
        SMART Attributes Data Structure revision number: 16
        Vendor Specific SMART Attributes with Thresholds:
        ID# ATTRIBUTE_NAME          FLAG   VALUE WORST THRESH TYPE     UPDATED WHEN_FAILED RAW_VALUE
          1 Raw_Read_Error_Rate     0x000f 200   200   051    Pre-fail Always  -           0
          3 Spin_Up_Time            0x0003 180   180   021    Pre-fail Always  -           3958
          4 Start_Stop_Count        0x0032 100   100   000    Old_age  Always  -           22
          5 Reallocated_Sector_Ct   0x0033 200   200   140    Pre-fail Always  -           0
          7 Seek_Error_Rate         0x000f 200   200   051    Pre-fail Always  -           0
          9 Power_On_Hours          0x0032 068   068   000    Old_age  Always  -           24087
         10 Spin_Retry_Count        0x0013 100   253   051    Pre-fail Always  -           0
         11 Calibration_Retry_Count 0x0013 100   253   051    Pre-fail Always  -           0
         12 Power_Cycle_Count       0x0032 100   100   000    Old_age  Always  -           21
        194 Temperature_Celsius     0x0022 122   096   000    Old_age  Always  -           25
        196 Reallocated_Event_Count 0x0032 200   200   000    Old_age  Always  -           0
        197 Current_Pending_Sector  0x0012 200   200   000    Old_age  Always  -           0
        198 Offline_Uncorrectable   0x0010 200   200   000    Old_age  Offline -           0
        199 UDMA_CRC_Error_Count    0x003e 200   200   000    Old_age  Always  -           0
        200 Multi_Zone_Error_Rate   0x0009 200   200   051    Pre-fail Offline -           0

    Any idea what's wrong here? Bad motherboard?
    It would seem rare for both drives to be going bad (smartctl says they PASS), so that leaves the motherboard as the culprit in my eyes.
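
    One more data point that might support or undercut that conclusion (a hedged suggestion; iostat ships in the sysstat package): watch per-disk service times while the dd load is running.

        iostat -x 1
        # pathological await/%util on BOTH disks at once points more toward
        # the controller/motherboard path than toward two drives failing
        # simultaneously; one sick disk would show up alone.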

    Read the article

  • Bad ways to secure wireless network

    - by Moshe
    I was wondering if anybody had any thoughts on this, as I recently saw a Verizon DSL network set up where the WEP key was the last 8 characters of the router's MAC address. (It's bad enough that they were using WEP in the first place...)

    Read the article

  • MyService.svc?wsdl shows 400 Bad Request IIS 7.5

    - by Omu
    I'm on Windows 7 Ultimate with IIS 7.5. I have deployed the services to the web server, and when I try them in IE like this: MyService.svc?wsdl, I get the 400 "Bad Request" page. I should get the description of the web service instead; does anybody know how to fix this?
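
    For reference, serving the WSDL generally requires metadata publishing to be enabled on the service. A sketch of the web.config pieces involved (the service and behavior names are placeholders, not from the question):

        <system.serviceModel>
          <behaviors>
            <serviceBehaviors>
              <behavior name="MetadataBehavior">
                <serviceMetadata httpGetEnabled="true" />
              </behavior>
            </serviceBehaviors>
          </behaviors>
          <services>
            <service name="MyNamespace.MyService"
                     behaviorConfiguration="MetadataBehavior">
              <endpoint address="mex" binding="mexHttpBinding"
                        contract="IMetadataExchange" />
            </service>
          </services>
        </system.serviceModel>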

    Read the article

  • Tor in virtual machine - 502 bad gateway

    - by Kon
    I'm trying to run Tor in a virtual machine. It used to work, but now when I try to access sites I get a "502 bad gateway" error from Privoxy instead of the requested site. I tried fixing the time to the correct one with the date command, but I still get the 502 error. I use VirtualBox, a Linux guest, and a Tor+Privoxy setup.
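
    Two quick checks worth doing inside the VM (the config path is the Debian/Ubuntu default; adjust as needed):

        # /etc/privoxy/config must still forward to Tor's SOCKS port; the
        # trailing dot is part of the syntax (older Privoxy versions use
        # forward-socks4a instead):
        forward-socks5 / 127.0.0.1:9050 .

        # and Tor itself must actually be listening:
        netstat -tlnp | grep 9050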

    Read the article

  • Cause for bad font rendering in Chrome?

    - by OverTheRainbow
    I notice that the text on some web pages looks bad when viewed in Chrome (16.0.912.77 m) while it's fine in Firefox (10.0). FWIW, I'm using the Windows versions of both applications, with default settings. As an (ironic) example: www.google.com/webfonts. Does someone know why that is, and whether something can be done about it? Thank you. Edit: screenshots comparing the rendering in Chrome and Firefox were attached to the original question.

    Read the article

  • Is my Cisco switch port bad?

    - by ewwhite
    I've been chasing a packet-loss and network stability issue for a handful of end-users on an internal network for the past few days. These issues surfaced last week; however, the location was struck by lightning six weeks ago. I was seeing 5-10% packet loss between a stack of four Cisco 2960s and several PCs and phones on the other side of a 77-meter run. The PCs were run inline with the phones over a trunked link (switchport configuration pastebin). We were seeing dropped calls and interruptions in client-server applications and Microsoft Exchange connectivity. I tried the usual troubleshooting steps remotely, having a local technician do the following during breaks in user and production activity:

        - change cables between the wall jack and device.
        - change patch cables between the patch panel and switch port(s).
        - try different switch ports within the 2960 stack.
        - change end-user devices with known-good equipment (new phones, different PCs).
        - clear switch port interface counters and monitor incrementing errors closely. (Pastebin output of sh int)
        - pore over the device logs and Observium RRD graphs. No link up/down issues from the switch side.
        - change power strips on the end-user side.
        - test cable runs from the Cisco 2960 using test cable-diagnostics tdr int Gi4/0/9 (clean)
        - test cable runs with a Tripp-Lite cable tester. (clean)
        - run diagnostics on the switch stack members. (clean)

    In the end, it took three changes of switch ports to find a stable solution. The only logical conclusion is that a few Cisco 2960 switch ports are bad or flaky... Not dead, but not consistent in behavior either. I'm not used to seeing individual ports die in this manner. What else can I test or check to determine if these devices are bad? Is it common for single ports to have problems, rather than a contiguous bank of ports?

    BTW - show cable-diagnostics tdr int Gi4/0/14 is very cool...

        Interface Speed Local pair Pair length        Remote pair Pair status
        --------- ----- ---------- ------------------ ----------- --------------------
        Gi4/0/14  1000M Pair A     79 +/- 0 meters    Pair B      Normal
                        Pair B     75 +/- 0 meters    Pair A      Normal
                        Pair C     77 +/- 0 meters    Pair D      Normal
                        Pair D     79 +/- 0 meters    Pair C      Normal
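
    A couple of additional IOS data points that might help (hedged; command availability varies by platform and IOS version):

        show interfaces Gi4/0/14 counters errors
        show controllers ethernet-controller Gi4/0/14

    The first breaks out FCS, alignment, and collision errors per port; the second exposes low-level transmit/receive statistics from the port hardware, which can help separate a dying port from a dying cable run.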

    Read the article

  • Should I always be checking every neighbor when building voxel meshes?

    - by Raven Dreamer
    I've been playing around with Unity3D, seeing if I can make a voxel-based engine out of it (a la Castle Story, or Minecraft). I've dynamically built a mesh from a volume of cubes, and now I'm looking into reducing the number of vertices built into each mesh, as right now I'm "rendering" vertices and triangles for cubes that are fully hidden within the larger voxel volume. The simple solution is to check each of the 6 directions for each cube, and only add the face to the mesh if the neighboring voxel in that direction is "empty". Parsing a voxel volume is O(N^3), and checking the 6 neighbors keeps it at O(7*N^3) = O(N^3). The one thing this results in is a lot of redundant calls, as the same voxel will be polled up to 7 times just to build the mesh. My question, then, is: is there a way to parse a cubic volume (and find which faces have neighbors) with fewer redundant calls? And perhaps more importantly, does it matter (as the big-O complexity is the same in both cases)?
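
    One standard trick, sketched below in Python rather than Unity's C# purely for illustration (the array layout and face encoding are assumptions of the sketch): instead of testing all 6 neighbors of every cube, walk each pair of adjacent cells exactly once per axis, emitting a face for whichever side of the boundary is solid. Each voxel is then read about 6 times instead of 7, so the constant factor shrinks slightly while the O(N^3) bound stays the same; consistent with the suspicion that it may not matter much.

        import numpy as np

        def visible_faces(vox):
            """vox: 3-D boolean array, True = solid cube.
            Returns [(cell, face_id)] with face_id 0..5 = +x,-x,+y,-y,+z,-z."""
            nx, ny, nz = vox.shape

            def solid(x, y, z):
                # cells outside the volume count as empty
                return 0 <= x < nx and 0 <= y < ny and 0 <= z < nz and bool(vox[x, y, z])

            faces = []
            # Visit every adjacent pair of cells once per axis, including the
            # pairs that straddle the volume boundary (hence range(-d, n)).
            for axis, (dx, dy, dz) in enumerate([(1, 0, 0), (0, 1, 0), (0, 0, 1)]):
                for x in range(-dx, nx):
                    for y in range(-dy, ny):
                        for z in range(-dz, nz):
                            a = solid(x, y, z)
                            b = solid(x + dx, y + dy, z + dz)
                            if a and not b:    # face of a pointing in +axis
                                faces.append(((x, y, z), 2 * axis))
                            elif b and not a:  # face of b pointing in -axis
                                faces.append(((x + dx, y + dy, z + dz), 2 * axis + 1))
            return faces

        # tiny smoke test: a single solid cube should expose all 6 faces
        assert len(visible_faces(np.ones((1, 1, 1), dtype=bool))) == 6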

    Read the article

  • How to connect to a server continuously over a bad internet connection

    - by Nikhil
    I have a bad Internet connection: it disconnects frequently, and on reconnect I'm assigned a different IP address by the ISP. The problem is that I connect to a remote VPS (Ubuntu), and when the Internet connection is disrupted and then reconnected, I can no longer do anything in the terminal. I have to restart the terminal and re-initiate the connection. Is there a way I can have a persistent connection with the server?
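
    Two common approaches, sketched here under the assumption that packages can be installed on both ends:

        # 1. Run the real work inside screen (or tmux) on the VPS; when the
        #    link drops, reconnect and pick up exactly where you left off:
        ssh user@vps
        screen -dR work     # creates, or detaches-and-reattaches, session "work"

        # 2. Let autossh tear down and rebuild the SSH session automatically
        #    when keepalives stop flowing:
        autossh -M 0 -o "ServerAliveInterval 15" -o "ServerAliveCountMax 3" user@vps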

    Read the article

  • Intermittent 400 bad request header field is missing ':' with Apache and SSL

    - by David Tinker
    Apache is returning rare, intermittent 400 errors: "bad request header field is missing ':' olhuaqv3o1t29flvr0" (the trailing string is random). This seems to be related to HTTPS access and happens from Firefox, IE, Chrome, etc. I am using a certificate from RapidSSL. Server stack:

        Apache/2.2.14 (Ubuntu) DAV/2 SVN/1.6.6 mod_jk/1.2.28
        PHP/5.3.2-1ubuntu4.5 with Suhosin-Patch mod_ssl/2.2.14 OpenSSL/0.9.8k

    Does anyone know how to fix this?
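
    One hedged line of investigation (a guess, not a confirmed diagnosis): a random, binary-looking token inside a 400 error can mean that undecrypted TLS bytes reached a plain-HTTP listener, for example if a vhost, proxy, or redirect occasionally sends HTTPS traffic to port 80. Verifying what each port actually speaks is cheap (hostname illustrative):

        openssl s_client -connect www.example.com:443
        # compare against the plain port; an HTTP banner here is expected,
        # while TLS on 80 or plaintext on 443 would explain sporadic 400s:
        curl -v http://www.example.com/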

    Read the article

  • Eject a bad disk from optical drive

    - by Chuck
    I have an Alienware computer with one of those optical DVD drives that does not have a manual tray, just a slot to insert the disk. I recently inserted a disk that was apparently bad: it is unreadable and does not show up in Windows Explorer. I tried right-clicking on the drive letter and hitting Eject, but I get an error message that there is no disk in the drive. How do I get the d--ned disk out so I can use the drive?

    Read the article
