Search Results

  • Problems mounting HPUX LVM+VXFS filesystem on Linux

    - by golimar
    I have a physical disk from a HPUX system that I need to access from a Debian Linux for ia64 system. From the hpux-lvm-tools project I have the tools to access the HPUX LVMs (Linux LVM has a different format) and I also have the freevxfs driver. I know beforehand that the disk has three partitions, and that the biggest one contains LVM volumes, and some of those are VxFS filesystems.
    I can see the partitions:

        # cat /proc/partitions
        major minor  #blocks  name
           8     32  143374744 sdc
           8     33     512000 sdc1
           8     34  142452736 sdc2
           8     35     409600 sdc3

    It finds a VG in one of the disk partitions:

        # ./vgscan_hpux
        On /dev/sdc2 - vg1328874723
        # ./pvdisplay_hpux /dev/sdc2
        PV General Information
        ----------------------
        VG Creation Time        Fri Feb 10 12:52:03 2012
        Physical Volume ID      1766760336 1328874723
        Volume Group ID         1766760336 1328874723
        Physical Volumes in VG  1766760336 1328874723
        VG Actication Mode      0 - LOCAL
        PE Size                 64 MBs
        Lvol sizes
        ----------
        lvol1 - 8 Extents - 512 MBs
        lvol2 - 192 Extents - 12288 MBs
        lvol3 - 16 Extents - 1024 MBs
        ...
        lvol21 - 13 Extents - 832 MBs
        lvol22 - 224 Extents - 14336 MBs
        lvol23 - 16 Extents - 1024 MBs

    Then I activate that VG and some new devices appear in my system:

        # ./pvactivate_hpux /dev/sdc2
        VG vg1328874723 Activated succesfully with 23 lvols.
        #
        # ll /dev/mapper/
        total 0
        crw------- 1 root root 10, 59 Nov 26 16:08 control
        lrwxrwxrwx 1 root root      7 Nov 26 16:38 vg1328874723-lvol1 -> ../dm-0
        lrwxrwxrwx 1 root root      7 Nov 26 16:38 vg1328874723-lvol10 -> ../dm-9
        ...
        lrwxrwxrwx 1 root root      7 Nov 26 16:38 vg1328874723-lvol8 -> ../dm-7
        lrwxrwxrwx 1 root root      7 Nov 26 16:38 vg1328874723-lvol9 -> ../dm-8

    But:

        # mount /dev/mapper/vg1328874723-lvol18 /mnt/tmp
        mount: you must specify the filesystem type
        # mount -t vxfs /dev/mapper/vg1328874723-lvol18 /mnt/tmp
        mount: wrong fs type, bad option, bad superblock on /dev/mapper/vg1328874723-lvol18,
               missing codepage or helper program, or other error
               In some cases useful info is found in syslog - try
               dmesg | tail or so
        # lsmod | grep vxfs
        freevxfs               23905  0

    I also tried to identify the raw data with the file command and it just says 'data':

        # file -s /dev/mapper/vg1328874723-lvol18
        /dev/mapper/vg1328874723-lvol18: symbolic link to `../dm-17'
        # file -s /dev/dm-17
        /dev/dm-17: data
        #

    Any clues?
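
    One quick sanity check worth adding (my sketch, not from the post): look for the VxFS magic number directly in the lvol, before blaming the driver. This assumes the magic value 0xa501fcf5 and that the superblock sits at its traditional 8 KiB offset - both are assumptions to verify against your VxFS version:

        # Hedged sketch: dump 4 bytes at the assumed superblock offset (8192).
        # On a little-endian box the magic 0xa501fcf5 would read "f5fc 01a5".
        dd if=/dev/dm-17 bs=1 skip=8192 count=4 2>/dev/null | xxd

    If the magic shows up at a different offset (for example, try xxd over the first megabyte and grep for "f5fc 01a5"), the lvol may start with padding or metadata that freevxfs does not expect.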

  • MacBook Pro (OSX Lion) - shuts down automatically before reaching login screen

    - by mkk
    When I try to boot my MacBook Pro I can see a progress bar on the loading screen. It gets to about 1/15 and then the machine shuts down - I cannot even reach the login screen. This happened to me 2 months ago, and I 'fixed' it by formatting my hard drive and installing OSX (Lion) again. This time I think the situation is a little bit different - I am able to enter single-user mode by pressing cmd + s. If I then type /sbin/fsck -yf, I get the error:

        ** Checking Journaled HFS Plus volume.
           The volume name is Macintosh HD
        ** Checking extents overflow file.
        ** Checking catalog file.
           Invalid node structure (4, 24704)
        ** The volume Macintosh HD could not be verified completely.
        /dev/rdisk0s2 (hfs) EXITED WITH SIGNAL 8

    but when I type exit, I get the login screen and I can log in. I have tried a lot of things, including booting from the recovery partition and choosing Disk Utility to repair the disk, but I get an error that it cannot be repaired. I have googled for hours and the only real solution I have found was to buy DiskWarrior, which might fix the issue. Any other suggestions?
    A secondary question is what causes this issue. I thought the reason was bad sectors, but Smart Utility hasn't found any. I found a suggestion that RAM could cause this kind of issue as well, so I downloaded Rember and ran a memory test - all tests passed. Right now I rely on my workaround of entering single-user mode and then typing exit, but I am not sure how long it will keep 'working'. Of course I have backed up what I considered important. Thanks for the help in advance!
    UPDATE: I guess Smart Utility was not very useful after all - I managed to get an input/output error, which I believe is equivalent to a bad sector.
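
    A common first step (my suggestion, not from the post, and no guarantee against a damaged catalog B-tree) is to run fsck repeatedly from single-user mode, since each pass can fix damage that unblocks the next pass. Stop if it keeps reporting the same error:

        # Sketch: repeat fsck until the volume checks clean; fsck exits
        # non-zero while it is still finding (or failing to fix) damage.
        while ! /sbin/fsck -fy /dev/rdisk0s2; do
            echo "volume still dirty, checking again..."
        done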

  • Windows 7: Event 55 "The file system structure on the disk is corrupt and unusable"

    - by Radio
    Here is a real bad one! Windows 7 RTM with SP1 installed [Version 6.1.7601]. Recently I tried to delete a folder on my hard drive and Windows prompted "Error 0x80070570: The file or directory is corrupted and unreadable", and at the same time logged an Event 55: "The file system structure on the disk is corrupt and unusable. Please run the chkdsk utility on \Device\HarddiskVolume2."
    I ran chkdsk, first with the /f option, then with /r. The result in both cases was: no errors found, 0 bad sectors. chkdsk found no problems at all! I went through StackExchange and Google and spent over 6 hours on this, and still cannot figure out the problem. Re-installing/re-formatting is not an option! What did I try:

    • Hotfix Windows6.1-KB982927-x64.msu - gave me an error about incompatibility with my computer, even though it matches my system exactly. The CRC of the hotfix was ok.
    • The Windows Repair Console found startup errors and fixed those, but this didn't help the issue, even after running chkdsk c: /R from it.
    • http://support.microsoft.com/kb/246026 does not promise anything good.
    • sfc /scannow does not help either.
    • Replaced the hard drive by cloning the old one using True Image, then repeated all the steps above.

    At the same time, some minor glitches have started to appear in Windows, like side panel and notification area settings getting reset.
    The goal is to delete the folder and get rid of Event 55. Sounds like an NTFS bug. Please help. This is completely ridiculous.
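
    One low-risk check worth adding to the list above (my suggestion, not from the post): see whether NTFS has actually flagged the volume dirty, and then force the most thorough offline pass. The /b flag exists on Windows 7 and later and re-evaluates clusters previously marked bad:

        REM Sketch: check the NTFS dirty bit, then schedule a thorough
        REM offline check (runs at next reboot for the system volume).
        fsutil dirty query C:
        chkdsk C: /f /b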

  • Diagnosing a BSOD involving USB

    - by David Ebbo
    [Running Win7 Ultimate 64 bit] My new HP Pavilion Elite HPE-450t has been plagued by BSOD crashes since I got it about 5 weeks ago. The crashes are somewhat rare, sometimes not occurring for 3 or 4 days. I have spent a lot of time trying to isolate the device that could be at fault, but I have seen crashes with only the keyboard and mouse plugged in (as USB devices), and I tried two sets of keyboard/mouse, so I'm running out of ideas. :(
    The WhoCrashed tool gave this info about my latest BSOD:

        crash dump file: C:\Windows\Minidump\121310-11887-01.dmp
        This was probably caused by the following module: usbport.sys (USBPORT+0x2DE4E)
        Bugcheck code: 0xFE (0x5, 0xFFFFFA8008F571A0, 0x80863B34, 0xFFFFFA80092F2510)
        Error: BUGCODE_USB_DRIVER
        file path: C:\Windows\system32\drivers\usbport.sys
        product: Microsoft® Windows® Operating System
        company: Microsoft Corporation
        description: USB 1.1 & 2.0 Port Driver
        Bug check description: This indicates that an error has occurred in a
        Universal Serial Bus (USB) driver.
        The crash took place in a standard Microsoft module. Your system
        configuration may be incorrect. Possibly this problem is caused by another
        driver on your system which cannot be identified at this time.

    I looked at http://msdn.microsoft.com/en-us/library/ff560407(VS.85).aspx, and for Parameter1 = 0x5 it says "A hardware failure has occurred due to a bad physical address found in a hardware data structure. This is not due to a driver bug". Should I conclude that it's a hardware issue in the machine itself, rather than a bad USB driver or USB device?
    Here is the minidump, in case someone can get more info out of it: http://ewt52q.blu.livefilestore.com/y1peS4Ce8nSK1SXghzMDoxDWXlaEu-EKCJsv25y8y5DXXIUzZ9U0_tYgFJXd939fykwa0zRmx98IW0PYG18GioqKAuARYjtspSA/121310-11887-01.dmp?download&psid=2
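
    For anyone wanting to dig into the dump themselves, a minimal WinDbg session (my sketch; WinDbg ships with the Debugging Tools for Windows, and the symbol path shown uses Microsoft's public symbol server) would be:

        REM Sketch: open the minidump with public symbols.
        windbg -z C:\Windows\Minidump\121310-11887-01.dmp ^
               -y srv*C:\symbols*http://msdl.microsoft.com/download/symbols

    Inside the debugger, "!analyze -v" prints the detailed bugcheck analysis, and "lmvm usbport" shows the version and timestamp of the module being blamed.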

  • How to find the real IP to which IPVS is routing a virtual IP

    - by Wayne Conrad
    I'm trying to find a problem server hiding behind a virtual IP (using LVS/ipvs). I've got a test program that sends requests to the virtual IP until it gets the bad response, but how can I tell to which real IP a request to the virtual IP got routed?
    On the box doing the virtual IP magic, here's the virtual IP configuration (for the service I care about):

        IP Virtual Server version 1.2.1 (size=4096)
        Prot LocalAddress:Port Scheduler Flags
          -> RemoteAddress:Port           Forward Weight ActiveConn InActConn
        ...
        TCP  10.1.0.254:5025 nq
          -> 10.1.0.5:5025                Route   1      0          1
          -> 10.1.0.6:5025                Route   1      0          5
          -> 10.1.0.7:5025                Route   1      0          2
          -> 10.1.0.9:5025                Local   1      0          3
          -> 10.1.0.11:5025               Route   1      0          3
        ...

    My client program is sending TCP requests to 10.1.0.254:5025, usually getting a good response but sometimes a bad response. With this few servers, I could send my request to each server in turn until I discover the culprit, but I wonder if that technique will scale as we add servers. What means exist for me to find out where requests got routed?
    Kernel: Linux 2.6.32. OS: Debian testing (whatever that's called these days). ipvsadm is version 1.25, compiled with ipvs v1.2.1.
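
    One approach (a sketch based on standard ipvsadm usage, not from the post): IPVS keeps a per-connection table, so you can dump it right after the test client gets a bad response and match the client's source address/port against the destination column:

        # Sketch: list the IPVS connection table numerically and keep only
        # entries for this virtual service; the last column is the real
        # server each client connection was scheduled to.
        ipvsadm -L -c -n | grep 10.1.0.254:5025

    If the test client records its own local port per request, matching that port in the table pins each good or bad response to a specific real server without touching the backends.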

  • transparent git-svn gateway

    - by azatoth
    Currently we have a subversion repository with the following layout:

        /trunc
          /group1
            /proj1
            /proj2
          /group2
            /proj3
          /etc..
        /tags
          /group1
            /proj1
            /proj2
          /group2
            /proj3
          /etc..
        /branch
          /anything temporary

    I believe this is a rather bad layout, but at the moment it's difficult to change it fully. Personally I dislike subversion, mostly due to the long time it takes to check history, and also because branching and merging are cumbersome, so I really want to use git instead. Sadly we can't just switch to git, as the mental leap for some might be too overwhelming, so I was looking into git-svn to see if I could practically use that to solve the issue. Sadly that directly ends up in a bad situation, as I want to break down each project into one git repo, and I don't want to have to recreate the git-svn checkout on each computer I work on. So I thought perhaps there is a possibility to create some sort of transparent git <-> svn proxy/gateway, so that a push to that repo "commits" to the svn repo, and a commit to the svn repo updates the git repo. Google hasn't been my friend - I have only found generic usage help for git-svn - so I ask you if you have some good ideas to accomplish this.
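
    A minimal sketch of such a gateway (my assumptions, not from the post: one central box that can reach the svn server, a cron-driven fetch, and the svn paths below as placeholders for the real layout):

        # One-time setup on the gateway box: clone one project out of the
        # monolithic repository, mapping its trunk and tags directories.
        git svn clone --trunk=trunc/group1/proj1 \
                      --tags=tags/group1/proj1 \
                      svn://svnserver/repo proj1-gateway

        # Periodically (cron) pull new svn revisions into the git mirror:
        cd proj1-gateway && git svn rebase

        # To publish git-side work back to svn, rebase it onto the
        # svn-tracking branch and push each commit as an svn revision:
        git svn dcommit

    The asymmetry is the catch: git svn rebase/dcommit is safe to automate in one direction, but accepting arbitrary pushes and dcommitting them automatically rewrites the pushed commits, so pushers have to re-fetch rather than keep their local history.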

  • Microsoft Windows DHCP: Steering IPv4 clients into specific scopes based on MAC

    - by Easter Sunshine
    We have visitors on our campus who bring their own laptops and devices and use our wireless and wired networks. When we receive a copyright infringement notice (typically BitTorrenting), we are required to quarantine that MAC address so that it no longer has Internet access. No matter what website it tries to visit, it is sent to a web page explaining to the user that the device has been quarantined.
    We have thus far implemented this in ISC DHCP on Linux. We have multiple VLANs with one or more public-IP subnets and one RFC1918 quarantine subnet each. All clients are leased IPs in the public-IP subnet(s) unless they're in a list of known bad MACs, in which case they are sent to the quarantine subnet so that their traffic is unroutable on the Internet (isolation is by subnet only, not by VLAN).
    We would like to move to Windows DHCP in light of the IPAM role, but I cannot figure out how to replicate this in Windows DHCP 2012 ("Assign DHCP IPs for specific MAC prefixes on Windows Server 2008 R2" suggests it was not possible in 2008 R2), even while using policies. So here's what I'd like: the administrator/help desk provides and maintains a list of MAC addresses that are to be quarantined, and the DHCP server places those MACs into the quarantine subnet on the respective VLAN, no matter which VLAN the client is in.
    I don't think reservations would work: we currently have about 300 registered bad MACs and about 12 VLANs. I don't want to make 300 x 12 reservations, nor have to add 12 reservations per new MAC address. Not to mention all of the quarantine subnets are /24s. We do not have NPS/NAC; you do not have to register your MAC address to get network access. We use Cisco routers/switches. Thanks.
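
    For what it's worth, Server 2012 policies can at least match on MAC address and pin matching clients to an address range inside a scope. A sketch (cmdlet names are from the 2012 DhcpServer PowerShell module; the scope, range, and MACs are placeholders, and this still has to be repeated per scope):

        # Sketch: per scope, create a policy matching quarantined MACs and
        # hand those clients addresses only from a dedicated range.
        Add-DhcpServerv4Policy -Name "Quarantine" -ScopeId 10.10.1.0 `
            -Condition OR -MacAddress EQ,"001122334455","66778899AABB"
        Add-DhcpServerv4PolicyIPRange -Name "Quarantine" -ScopeId 10.10.1.0 `
            -StartRange 10.10.1.200 -EndRange 10.10.1.250

    The caveat: this keeps quarantined clients inside the same scope/subnet rather than moving them to a separate RFC1918 subnet, so it only approximates the ISC setup unless the quarantine range is filtered at the router.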

  • Improving browser performance while using lots of tabs?

    - by Andrew
    My browsing habits cause me to open lots of windows and tabs, either related to different projects I'm working on or things I may want to read later. I use OSX with about 5 spaces and multiple windows in each space. The problem is that eventually I'll have around 200 or more tabs open (spread over 15-20 windows) that I don't want to close, and needless to say, my computer's performance starts to degrade. As I write this on my mobile, Safari on my laptop is locking up the computer. I used to use Chrome but found better performance with Safari.
    What I'd like to know: is there a comparison of browser performance based on tab count? I don't need a browser that keeps all tabs active - it would be great if the browser could improve performance by "putting tabs to sleep", or if there were some sort of tool for saving a "workspace" of tabs that you could reactivate the next time you are working on that project. What sort of solution can you recommend to solve this problem?

  • Trouble cloning a Macbook Pro hard drive

    - by Mirko Froehlich
    I am trying to upgrade the 250GB hard drive in my MacBook Pro (early 2008 model) to a 750GB drive. I have connected the new drive via an external USB enclosure. The drive is recognized fine, I can format it, etc. However, every time I try to clone the drive, I get Input/Output errors. Before the clone operation, I verified both the internal and the external drive using Disk Utility, and they both check out fine. After the clone operation, the external drive shows multiple "Invalid node structure" errors.
    I have tried two approaches for cloning the drive:

    • Using Disk Utility, by starting from the OSX install DVD
    • Using Carbon Copy Cloner

    The outcome is the same in both cases. The Carbon Copy Cloner logs show a handful of the following types of errors:

        rsync: mkstemp "<... an external filename ...>" failed: Input/output error (5)
        rsync: stat "<... an external filename ...>" failed: Input/output error (5)

    The actual files affected seem to differ across runs of the application. Before the last run, I used Disk Utility to (once more) reformat the external drive and explicitly overwrite it with zeros, but this made no difference. I also tried running a surface scan in Tech Tool Pro overnight. It got about 2/3 of the way through before I had to disconnect the drive (I had to take my MacBook Pro to work), but so far it hasn't reported any bad blocks. Assuming it scans the drive in the same order in which blocks would be allocated during actual use, it seems that if bad blocks were to blame for the clone failures, they should have been found already (given that the source drive is only 250GB).
    As a last attempt, I may try SuperDuper as well, although my understanding is that it uses the same underlying rsync approach as Carbon Copy Cloner, so it's unlikely to perform any better. Are there any other things I should try before I send the drive in for a replacement? Could these problems be caused by my internal drive, even though it works fine and checks out fine in Disk Utility?
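
    If file-level copies keep failing, one block-level alternative (my suggestion, assuming GNU ddrescue is installed, e.g. via MacPorts; the rdisk device numbers below are placeholders, and getting if= and of= backwards is destructive):

        # Sketch: block-level rescue copy from the internal disk to the USB
        # drive. -n skips the slow scraping phase on the first pass; the map
        # file lets the second run retry only the areas that failed.
        ddrescue -f -n  /dev/rdisk0 /dev/rdisk1 rescue.map
        ddrescue -f -r3 /dev/rdisk0 /dev/rdisk1 rescue.map

    A block copy that stalls or logs errors at fixed offsets would also point the finger at the internal drive rather than the new one.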

  • dovecot login issue with plain passwords

    - by user3028
    I am having an odd problem in dovecot: the first time I try to log in via telnet, dovecot gives an error; the second time it works, both within the same telnet session. This is the telnet session - note the 'BAD Error in IMAP command received by server' and the "a OK" just after it:

        telnet 192.168.1.2 143
        * OK Waiting for authentication process to respond..
        * OK [CAPABILITY IMAP4rev1 LITERAL+ SASL-IR LOGIN-REFERRALS ID ENABLE IDLE STARTTLS AUTH=PLAIN] Dovecot ready.
        a login someUserLogin supersecretpassword
        * BAD Error in IMAP command received by server.
        a login someUserLogin supersecretpassword
        a OK [CAPABILITY IMAP4rev1 LITERAL+ SASL-IR LOGIN-REFERRALS ID ENABLE IDLE SORT SORT=DISPLAY THREAD=REFERENCES THREAD=REFS MULTIAPPEND UNSELECT CHILDREN NAMESPACE UIDPLUS LIST-EXTENDED I18NLEVEL=1 CONDSTORE QRESYNC ESEARCH ESORT SEARCHRES WITHIN CONTEXT=SEARCH LIST-STATUS] Logged in

    dovecot configuration:

        > dovecot -n
        # 2.0.19: /etc/dovecot/dovecot.conf
        # OS: Linux 3.5.0-34-generic x86_64 Ubuntu 12.04.2 LTS
        auth_debug = yes
        auth_verbose = yes
        disable_plaintext_auth = no
        login_trusted_networks = 192.168.1.0/16
        mail_location = maildir:~/Maildir
        passdb {
          driver = pam
        }
        protocols = " imap"
        ssl_cert = </etc/ssl/certs/dovecot.pem
        ssl_key = </etc/ssl/private/dovecot.pem
        userdb {
          driver = passwd
        }

    This is the log file:

        Jul 3 12:27:51 linuxServer dovecot: auth: Debug: Loading modules from directory: /usr/lib/dovecot/modules/auth
        Jul 3 12:27:51 linuxServer dovecot: auth: Debug: auth client connected (pid=23499)
        Jul 3 12:28:06 linuxServer dovecot: auth: Debug: client in: AUTH#0111#011PLAIN#011service=imap#011secured#011no-penalty#011lip=192.168.1.2#011rip=192.169.1.3#011lport=143#011rport=50438#011resp=<hidden>
        Jul 3 12:28:06 linuxServer dovecot: auth-worker: Debug: Loading modules from directory: /usr/lib/dovecot/modules/auth
        Jul 3 12:28:06 linuxServer dovecot: auth-worker: Debug: pam(someUserLogin,192.169.1.3): lookup service=dovecot
        Jul 3 12:28:06 linuxServer dovecot: auth-worker: Debug: pam(someUserLogin,192.169.1.3): #1/1 style=1 msg=Password:
        Jul 3 12:28:06 linuxServer dovecot: auth: Debug: client out: OK#0111#011user=someUserLogin
        Jul 3 12:28:06 linuxServer dovecot: auth: Debug: master in: REQUEST#0111823473665#01123499#0111#0113a58da53e091957d3cd306ac4114f0b9
        Jul 3 12:28:06 linuxServer dovecot: auth: Debug: passwd(someUserLogin,192.169.1.3): lookup
        Jul 3 12:28:06 linuxServer dovecot: auth: Debug: master out: USER#0111823473665#011someUserLogin#011system_groups_user=someUserLogin#011uid=1000#011gid=1000#011home=/home/someUserLogin
        Jul 3 12:28:06 linuxServer dovecot: imap-login: Login: user=<someUserLogin>, method=PLAIN, rip=192.169.1.3, lip=192.168.1.2, mpid=23503, secured
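
    One way to rule out the telnet client itself (stray characters, CR/LF translation, typing artifacts often cause exactly one BAD followed by success) is to drive the session with exact scripted bytes instead - a sketch, using the same test credentials:

        # Sketch: send CRLF-terminated IMAP commands verbatim, so any
        # remaining "BAD Error in IMAP command" cannot be blamed on telnet.
        { printf 'a login someUserLogin supersecretpassword\r\na2 logout\r\n'; sleep 2; } \
            | nc 192.168.1.2 143

    If the scripted login succeeds on the first try, the server is fine and the BAD was an artifact of the interactive session.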

  • Running Debian as guest operating system on a Hyper-V VM

    - by kce
    Hello. Layer-9 considerations are prompting a migration from Citrix XenServer to Hyper-V as our shop's virtualization platform of choice. This will require me to migrate our existing virtual machines from XenServer to Hyper-V. A handful of these VMs are running Debian. Unfortunately, Debian does not seem to be on the list of approved/supported guest operating systems. In fact it seems that running Debian as a guest operating system on Hyper-V is rather difficult, although apparently not impossible. I have two interrelated questions:

    • Does anyone have any experience running a Debian guest on Hyper-V? Is it one of those things where it just will not work at all, or is it more along the lines of "it will probably work fine, but we won't support it"? Any experience here, positive or negative, would be helpful.
    • How much of a bad idea is it to deviate from Hyper-V's list of supported guest operating systems? Again, is it basically asking for Bad Things (TM) to happen, or is it just another instance of "it will probably work fine, but we won't support it"? Or is it somewhere in the middle?

    Thank you.
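
    In case it helps anyone attempting this, a sketch of the usual first step on a Debian guest (assumes a kernel that includes the Hyper-V paravirtual drivers, which entered mainline around 2.6.32 as staging modules; the hv_* names are the standard ones):

        # Sketch: load the Hyper-V drivers early so the synthetic storage
        # and network devices are available at boot.
        cat >> /etc/initramfs-tools/modules <<'EOF'
        hv_vmbus
        hv_storvsc
        hv_netvsc
        EOF
        update-initramfs -u

    Without the synthetic drivers, the VM can usually still run on Hyper-V's emulated (legacy) IDE disk and network adapter, just more slowly - which is roughly the practical meaning of "works but unsupported".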

  • Unable to receive emails from Ubuntu postfix mail server

    - by Paddington
    I am unable to receive emails on an Ubuntu 11.04 server running postfix with the Plesk control panel. I can't see the mails even in webmail. I am able to send emails, and I get no error messages on the email client when I try to receive. Here is the output of the logs:

        tail -f /usr/local/psa/var/log/maillog
        Aug 29 10:38:31 cp9 postfix/tlsmgr[3811]: fatal: open database /var/lib/postfix/smtpd_scache.db: Invalid argument
        Aug 29 10:38:32 cp9 postfix/master[27738]: warning: process /usr/lib/postfix/tlsmgr pid 3811 exit status 1
        Aug 29 10:38:32 cp9 postfix/master[27738]: warning: /usr/lib/postfix/tlsmgr: bad command startup -- throttling
        Aug 29 10:38:36 cp9 pop3d: Connection, ip=[::ffff:196.201.7.158]
        Aug 29 10:38:36 cp9 pop3d: IMAP connect from @ [::ffff:196.201.7.158]INFO: LOGIN, [email protected], ip=[::ffff:196.201.7.158]
        Aug 29 10:38:37 cp9 pop3d: 1346229517.874008 LOGOUT, [email protected], ip=[::ffff:196.201.7.158], top=0, retr=0, time=1, rcvd=24, sent=1716, maildir=/var/qmail/mailnames/essentialhuku.co.za/earle/Maildir
        Aug 29 10:14:05 cp9 postfix/tlsmgr[1133]: fatal: open database /var/lib/postfix/smtpd_scache.db: Invalid argument
        Aug 29 10:14:06 cp9 postfix/master[27738]: warning: process /usr/lib/postfix/tlsmgr pid 1133 exit status 1
        Aug 29 10:14:06 cp9 postfix/master[27738]: warning: /usr/lib/postfix/tlsmgr: bad command startup -- throttling
        Aug 29 10:14:08 cp9 pop3d: Connection, ip=[::ffff:196.201.7.158
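
    The repeating tlsmgr fatal error usually points at a stale or incompatible Berkeley DB session-cache file (often left behind by an upgrade). A sketch of the common fix - hedged, since it assumes the cache files are the whole problem:

        # Sketch: the TLS session caches are disposable; remove them and
        # let postfix recreate them cleanly on startup.
        postfix stop
        rm -f /var/lib/postfix/smtpd_scache.db /var/lib/postfix/smtp_scache.db
        postfix start

    With tlsmgr no longer throttling the master process, the next test is whether inbound SMTP connections reach smtpd at all.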

  • How can I mount dd image of a partition?

    - by Puneet Arora
    I created a dd image of a partition (containing an HFS+ FS) of one of my disks (not the entire disk) a few days ago, using the following command:

        dd conv=sync,noerror bs=8k if=/dev/sdc2 of=/path/to/img

    How can I mount it? I tried the following but it doesn't work:

        mount -o loop,ro -t hfsplus /path/to/img /path/to/mntDir

    It gives me:

        mount: wrong fs type, bad option, bad superblock on /dev/loop1,
               missing codepage or helper program, or other error
               In some cases useful info is found in syslog - try
               dmesg | tail or so

    and dmesg | tail gives me:

        [5248455.568479] hfs: invalid secondary volume header
        [5248455.568494] hfs: unable to find HFS+ superblock
        [5248462.674836] hfs: invalid secondary volume header
        [5248462.674843] hfs: unable to find HFS+ superblock
        [5248550.672105] hfs: invalid secondary volume header
        [5248550.672115] hfs: unable to find HFS+ superblock
        [5248993.612026] hfs: unable to find HFS+ superblock
        [5248998.103385] hfs: unable to find HFS+ superblock
        [5249031.441359] hfs: unable to find HFS+ superblock
        [5249036.274864] hfs: unable to find HFS+ superblock

    Is there something wrong that I am doing? I tried searching on how to do this, but all the results I get only talk about mounting a partition from within a full disk image, using the offset option with mount - none talk about the case where the image itself is that of a partition. Thanks.
    PS: I'm running 64-bit Arch Linux, and the partition from the original disk /dev/sdc2 mounts fine.
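
    One thing worth checking (my speculation, not from the post): conv=sync pads the final partial block up to the 8k block size, so the image can end up slightly larger than the real volume - and HFS+ keeps its secondary (alternate) volume header at a fixed distance from the end of the volume, which would explain the "invalid secondary volume header" lines. Two quick checks:

        # Sketch: the primary HFS+ volume header lives at byte offset 1024
        # and starts with the signature "H+" (0x482B); confirm it is there.
        xxd -s 1024 -l 16 /path/to/img

        # Compare the exact source partition size with the image size;
        # conv=sync may have padded the image past the real end of volume.
        blockdev --getsize64 /dev/sdc2
        stat -c %s /path/to/img

    If the image turns out larger than the partition, truncating a copy of it back to the exact byte size (truncate -s SIZE img) should put the alternate header back where the hfsplus driver expects it - hedged, since I have not verified this on an 8k-padded image.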

  • How do I ensure a process is running, even if it kills itself? (it needs to be restarted then)

    - by le_me
    I'm using linux. I want a process (an irc bot) to run every time I start the computer. But I've got a problem: the network is bad and it disconnects often, so I need to manually restart the bot a few times a day. How do I automate that?
    Additional information:

    • The bot creates a pid file, called bot.pid
    • The bot reconnects itself, but only a few times. The network is too bad, so the bot sometimes kills itself because it gets no response.

    What I do currently (aka my approach ;) ): I have a cron job executing startbot.rb every 5 minutes. (The script itself is in the same directory as the bot.) The script:

        #!/usr/bin/ruby
        require 'fileutils'

        if File.exists?(File.expand_path('tmp/bot.pid'))
          @pid = File.read(File.expand_path('tmp/bot.pid')).chomp!.to_i
          begin
            raise "ouch" if Process.kill(0, @pid) != 1
          rescue
            puts "Removing abandoned pid file"
            FileUtils.rm(File.expand_path('tmp/bot.pid'))
            puts "Starting the bot!"
            Kernel.exec(File.expand_path('./bot.rb'))
          else
            puts "Bot up and running!"
          end
        else
          puts "Starting the bot!"
          Kernel.exec(File.expand_path('./bot.rb'))
        end

    What this does: it checks if the pid file exists; if so, it checks whether kill -s 0 BOT_PID succeeds (i.e. whether the bot is running), and it starts the bot if either check fails. My approach seems quite dirty, so how do I do it better?
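
    One cleaner option (a sketch, not from the post - it assumes monit is installed, the bot daemonizes itself, and the absolute paths below are placeholders) is to let a supervisor own the pidfile-and-restart logic instead of cron:

        # Sketch /etc/monit/conf.d/ircbot: restart the bot whenever the
        # process behind the pidfile disappears, checked every cycle.
        check process ircbot with pidfile /home/bot/tmp/bot.pid
            start program = "/home/bot/bot.rb"
            stop program  = "/bin/sh -c 'kill $(cat /home/bot/tmp/bot.pid)'"

    The same idea works with runit or daemontools if the bot can run in the foreground; either way, the stale-pidfile race in the cron script goes away because one long-running supervisor owns the state.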

  • How can I make Internet Explorer 6 render Web pages like Internet Explorer 11?

    - by gparyani
    Now, I know that this may seem like a bad question, in that I could just upgrade to Internet Explorer 8, but I am sticking with IE6 because IE8 removes valuable features, like the ability to save favorites offline, and the fact that a file path turns into a Windows Explorer window and typing a Web address into Windows Explorer changes it into an IE window.
    I know that Internet Explorer 6 does a really bad job at rendering some pages. I know of the Google Chrome Frame extension that brings Chrome-style rendering into IE, but that will soon be discontinued. So, I tried another thing: I know that C:\Windows\System32\mshtml.dll contains the Trident rendering engine used by IE, so I first backed up the original file by renaming it on Windows XP to mshtml-old.dll, then tried to copy in the DLL from a computer running Windows 7 with Internet Explorer 10. I noticed that, after copying, the system had replaced the new DLL with the old one, but left the one I backed up intact.
    Is there any way I can get the system to not replace the DLL like that, so that I can transfer IE11's mshtml.dll into Windows XP and make IE6 render like IE11? I'm looking for an answer that describes how to tweak my system to make IE6 render like IE11 (or IE10), not one that tells me to upgrade IE or install another browser. I don't care how tedious the method is, just as long as it works.

  • SSH multi-hop connections with netcat mode proxy

    - by aef
    Since OpenSSH 5.4 there is a new feature called netcat mode, which allows you to bind STDIN and STDOUT of the local SSH client to a TCP port accessible through the remote SSH server. This mode is enabled by simply calling

        ssh -W [HOST]:[PORT]

    Theoretically this should be ideal for use in the ProxyCommand setting in per-host SSH configurations, which was previously often used with the nc (netcat) command. ProxyCommand allows you to configure a machine as a proxy between your local machine and the target SSH server, for example if the target SSH server is hidden behind a firewall.
    The problem now is that instead of working, it throws a cryptic error message in my face:

        Bad packet length 1397966893.
        Disconnecting: Packet corrupt

    Here is an excerpt from my ~/.ssh/config:

        Host *
          Protocol 2
          ControlMaster auto
          ControlPath ~/.ssh/cm_socket/%r@%h:%p
          ControlPersist 4h

        Host proxy-host proxy-host.my-domain.tld
          HostName proxy-host.my-domain.tld
          ForwardAgent yes

        Host target-server target-server.my-domain.tld
          HostName target-server.my-domain.tld
          ProxyCommand ssh -W %h:%p proxy-host
          ForwardAgent yes

    As you can see here, I'm using the ControlMaster feature so I don't have to open more than one SSH connection per host. The client machine I tested this with is Ubuntu 11.10 (x86_64), and both proxy-host and target-server are Debian Wheezy Beta 3 (x86_64) machines. The error happens when I call ssh target-server. When I call it with the -v flag, here is what I get additionally:

        OpenSSH_5.8p1 Debian-7ubuntu1, OpenSSL 1.0.0e 6 Sep 2011
        debug1: Reading configuration data /home/aef/.ssh/config
        debug1: Applying options for *
        debug1: Applying options for target-server.my-domain.tld
        debug1: Reading configuration data /etc/ssh/ssh_config
        debug1: Applying options for *
        debug1: auto-mux: Trying existing master
        debug1: Control socket "/home/aef/.ssh/cm_socket/[email protected]:22" does not exist
        debug1: Executing proxy command: exec ssh -W target-server.my-domain.tld:22 proxy-host.my-domain.tld
        debug1: identity file /home/aef/.ssh/id_rsa type -1
        debug1: identity file /home/aef/.ssh/id_rsa-cert type -1
        debug1: identity file /home/aef/.ssh/id_dsa type -1
        debug1: identity file /home/aef/.ssh/id_dsa-cert type -1
        debug1: identity file /home/aef/.ssh/id_ecdsa type -1
        debug1: identity file /home/aef/.ssh/id_ecdsa-cert type -1
        debug1: permanently_drop_suid: 1000
        debug1: Remote protocol version 2.0, remote software version OpenSSH_6.0p1 Debian-3
        debug1: match: OpenSSH_6.0p1 Debian-3 pat OpenSSH*
        debug1: Enabling compatibility mode for protocol 2.0
        debug1: Local version string SSH-2.0-OpenSSH_5.8p1 Debian-7ubuntu1
        debug1: SSH2_MSG_KEXINIT sent
        Bad packet length 1397966893.
        Disconnecting: Packet corrupt
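
    A hunch worth testing (mine, not confirmed by the post): 1397966893 is 0x5353482D, which is literally the ASCII bytes "SSH-" - a plaintext version banner landing where an encrypted packet length should be. That smells like the inner ssh -W being routed through the Host * ControlMaster settings, so an extra handshake ends up on the stdout that is supposed to carry the raw tunnel. Disabling multiplexing just for the proxy hop would look like:

        Host target-server target-server.my-domain.tld
          HostName target-server.my-domain.tld
          # -S none: keep the netcat-mode hop out of any control master
          ProxyCommand ssh -S none -W %h:%p proxy-host
          ForwardAgent yes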

  • both ssl and non-ssl on single port

    - by Zulakis
    I would like to make my apache2 webserver serve both http and https on the same port. With the different methods I tried, it was either not working for http or not for https. How can I do this?
    Update: If I enable SSL and then visit the port with plain http, I get a page like this:

        <!DOCTYPE HTML PUBLIC "-//IETF//DTD HTML 2.0//EN">
        <html><head>
        <title>400 Bad Request</title>
        </head><body>
        <h1>Bad Request</h1>
        <p>Your browser sent a request that this server could not understand.<br />
        Reason: You're speaking plain HTTP to an SSL-enabled server port.<br />
        Instead use the HTTPS scheme to access this URL, please.<br />
        <blockquote>Hint: <a href="https://server/"><b>https://server/</b></a></blockquote></p>
        <hr>
        <address>Apache/2.2.9 (Debian) PHP/5.2.6-1+lenny16 with Suhosin-Patch mod_ssl/2.2.9 OpenSSL/0.9.8g Server at server Port 443</address>
        </body></html>

    Because of this, it seems very much possible to have both http and https on the same port: mod_ssl clearly detects plain HTTP and answers it with a plain-HTTP error page. A first step would be to change this default page so that it sends a redirect instead. Update2: According to this, it is possible. Now the question is just how to configure apache to do it.
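
    A sketch of that first step (my configuration, not from the post - it leans on the fact that mod_ssl answers plain-HTTP requests on the SSL port with a 400, whose error document can be an external URL; the hostname is a placeholder):

        # Sketch, inside the SSL virtual host: turn the "speaking plain HTTP
        # to an SSL port" 400 response into a redirect to the https:// URL.
        <VirtualHost *:443>
            SSLEngine on
            # ... ServerName, certificates etc. ...
            ErrorDocument 400 https://server.example.com/
        </VirtualHost>

    One caveat: Apache issues a 302 (not a 301) for URL-valued error documents, and the original request path is lost, so every plain-HTTP visitor lands on the front page.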

  • Private IP getting routed over Internet

    - by WernerCD
    We are setting up an internal program, on an internal server, that uses the private 172.30.x.x subnet... when we ping the address 172.30.138.2, it routes across the internet:

        C:\>tracert 172.30.138.2

        Tracing route to 172.30.138.2 over a maximum of 30 hops

          1     6 ms     1 ms     1 ms  xxxx.xxxxxxxxxxxxxxx.org [192.168.28.1]
          2     *        *        *     Request timed out.
          3    12 ms    13 ms     9 ms  xxxxxxxxxxx.xxxxxx.xx.xxx.xxxxxxx.net [68.85.xx.xx]
          4    15 ms    11 ms    55 ms  te-7-3-ar01.salisbury.md.bad.comcast.net [68.87.xx.xx]
          5    13 ms    14 ms    18 ms  xe-11-0-3-0-ar04.capitolhghts.md.bad.comcast.net [68.85.xx.xx]
          6    19 ms    18 ms    14 ms  te-1-0-0-4-cr01.denver.co.ibone.comcast.net [68.86.xx.xx]
          7    28 ms    30 ms    30 ms  pos-4-12-0-0-cr01.atlanta.ga.ibone.comcast.net [68.86.xx.xx]
          8    30 ms    43 ms    30 ms  68.86.xx.xx
          9    30 ms    29 ms    31 ms  172.30.138.2

        Trace complete.

    This has a number of us confused. If we had a VPN set up, it wouldn't show up as being routed across the internet. If it hit an internet server, private IPs (such as 192.168.x.x) shouldn't get routed. What would let a private IP address get routed across servers? Would the fact that it's all Comcast mean that they have their routers set up wrong?
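
    For anyone hitting the same confusion: RFC 1918 only says these ranges must not be advertised on the public internet; nothing technically stops a provider that controls every hop from carrying them internally, which is what the trace suggests is happening here. The usual local fix (my sketch - the internal router address is a placeholder) is an explicit route on your own gateway or clients so 172.30.0.0/16 never follows the default route out:

        REM Sketch (Windows): send the whole private range to the internal
        REM router instead of the ISP default gateway; -p makes it persist.
        route -p add 172.30.0.0 mask 255.255.0.0 192.168.28.254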

  • How to get an inactive RAID device working again?

    - by Jonik
    After booting, my RAID1 device (/dev/md_d0 *) sometimes goes into some funny state and I cannot mount it. (* Originally I created /dev/md0, but it has somehow changed itself into /dev/md_d0.)

        # mount /opt
        mount: wrong fs type, bad option, bad superblock on /dev/md_d0,
               missing codepage or helper program, or other error
               (could this be the IDE device where you in fact use
               ide-scsi so that sr0 or sda or so is needed?)
               In some cases useful info is found in syslog - try
               dmesg | tail or so

    The RAID device appears to be inactive somehow:

        # cat /proc/mdstat
        Personalities : [linear] [multipath] [raid0] [raid1] [raid6] [raid5] [raid4] [raid10]
        md_d0 : inactive sda4[0](S)
              241095104 blocks

        # mdadm --detail /dev/md_d0
        mdadm: md device /dev/md_d0 does not appear to be active.

    Question is, how do I make the device active again (using mdadm, I presume)? Other times it's alright (active) after boot, and I can mount it manually without problems. But it still won't mount automatically even though I have it in /etc/fstab:

        /dev/md_d0        /opt              ext4    defaults        0       0

    So a bonus question: what should I do to make the RAID device automatically mount at /opt at boot time? This is an Ubuntu 9.10 workstation. Background info about my RAID setup in this question.
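
    The usual recipe for kicking an inactive array back to life (a sketch based on standard mdadm usage; device names taken from the output above, and appending to mdadm.conf assumes no ARRAY line exists there yet):

        # Sketch: tear down the half-assembled device, reassemble by scan,
        # then persist the array definition so boot-time assembly matches.
        mdadm --stop /dev/md_d0
        mdadm --assemble --scan
        mdadm --detail --scan >> /etc/mdadm/mdadm.conf
        update-initramfs -u

    The md_d0 name itself is a hint: it is the partitionable-array device node, which mdadm falls back to when it auto-assembles an array that is not declared in mdadm.conf - so pinning the ARRAY line down plausibly fixes both the inactive state and the /dev/md0 -> /dev/md_d0 rename.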

  • Server 2003 Remote Desktop loses its virtual printer image of the local printer

    - by Charles Hart
    Server 2003 Remote Desktop provides service to stores served by several ISPs. The server loses its virtual printer image of the local printer (as seen from the remote store site), and a copy of the original local printer appears on the local computer with a different driver, without notice. Specifically:
    A remote desktop session is opened on a local computer that has a Brother HL2140 USB printer connected and the associated software installed, with a correct driver shown under the "advanced" button. The server has the same Brother software and driver. An application running on the server attempts to print on the local printer connected to the local computer running Vista Pro or XP Pro. Either it works correctly (good), or it does not print (bad), or it prints on another local printer connected to another local computer logged into the server (bad and odd).
    When it doesn't print (or prints somewhere else), we ask the customer to look for the (virtual) printer using the Remote Desktop view of the server, and the printer is gone. Then we ask the customer to look at the printers folder on the local computer. There are several possibilities:

    • The printer is there, but the driver has mysteriously changed in the drop-down to MDX-something; we have the customer select the other (proper) Brother driver, and all is well again - after the change, the virtual printer on the server (which now matches the local printer) reappears, and printing can resume.
    • A "copy" of the printer mysteriously appears in the local printers folder, and after we delete it, the virtual printer on the server reappears and printing can resume.

    Note that in both cases, the server sometimes sends the print job elsewhere, to some other local computer. Meanwhile the log file reports endless errors, and the server eventually crashes, sometimes twice a day. I'm puzzled what changes the local printer driver, and I'm puzzled what loads the copy 2 or copy 3 of the printer in the local printers folder. This entire scenario occurs randomly on any of 40+ local computers in eight different locations on different ISPs, all sharing one domain.

  • Are there any tests I can run on a network to simulate 100 heavy network users?

    - by marc.gayle
    I will be hosting a Ruby on Rails workshop at a small hotel in the near future. While they have 'Wifi' everywhere on the property, and the property normally hosts 150-300 people, I am not 100% confident that they have ever hosted 150 tech people with heavy web surfing habits/needs. Their tech department is also only 1 or 2 guys.
    Are there any automated tests I can download and run from my laptop, on the network, that would simulate 100 'heavy users' on the network at the same time? Their broadband pipe is a 15 Mbps cable connection. Would that suffice for the general surfing needs of 100-150 techies? I know all it takes is 1 or 2 BitTorrenters to kill the entire network, but assuming we can at the very least block those ports or encourage the attendees not to file-share on the network, would that speed suffice for general surfing needs?
    What are good resources online that would allow me to quickly get up to speed on the IT-related issues, so that I can ask their sysadmins the right questions?
    Edit: Note that I am fairly technical, so assume I can get up to speed quickly even with technical manuals, etc.
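
    One simple way to generate that kind of load from a single laptop (a sketch, not a complete answer - it exercises the uplink and the NAT/firewall box, but not the access points' ability to serve 100 separate radios, which is usually the real bottleneck):

        # Sketch: 100 concurrent simulated browsers fetching an external
        # page for 5 minutes, with random 0-3s think time between requests;
        # siege prints aggregate throughput and response times at the end.
        siege -c 100 -t 5M -d 3 http://www.example.com/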

  • What kinds of protections against viruses does Linux provide out of the box for the average user?

    - by ChocoDeveloper
    I know others have asked this, but I have other questions related to it. In particular, I'm concerned about the damage a virus can do to the user himself (his files), not to the OS in general, nor to other users of the same machine.
    This question came to my mind because of that ransomware virus that is encrypting machines all over the world and then asking the user to send a payment in Bitcoin if he wants to recover his files. I have already received and opened the email that is supposed to contain the virus, so I guess I didn't do too badly, because nothing happened. But would I have survived if I had opened the attachment and it had been aimed at Linux users? I guess not.
    One of the advantages is that files are not executable by default right after downloading them. Is that just a bad default in Windows that could be fixed with proper configuration?
    As a Linux user, I thought my machine was pretty secure by default, and I was even told that I shouldn't bother installing an antivirus. But I have read some people saying that the most important (or only?) difference is that Linux is just less popular, so almost no one writes viruses for it. Is that right?
    What else can I do to be safe from this kind of ransomware virus? Not automatically executing random files from unknown sources seems to be more than enough, but is it? I can't think of many other things a user can do to protect his own files (not the OS, not other users), because he has full permissions on them.

  • New to building computers - worried about temps

    - by dave
    I'm new to building my own computers and I was wondering about maximum temperatures. I understand that the room temperature can affect a computer's temperature - if my room is 20°C, none of my computer parts could be cooler than that - but how relevant is it? If my room were 27°C instead of 20°C, would this cause my computer's parts to heat up more, or faster?
    The new computer I built myself for gaming is:

    • i7 2600k
    • 16 GB DDR3-1600 RAM
    • HD6970 2 GB
    • 240 GB SSD (I also bought a NAS with three 2 TB drives in RAID 5 for my home network)
    • 850 W modular PSU

    I also have my old HP computer:

    • i3 2120
    • 8 GB RAM
    • HD6770
    • 1 TB HDD

    There are also 3 laptops in my household, but I am not worried about their temps - they heat up my legs, but they are never under stress.
    For size and money reasons I used an old case, and it only has one of its side panels left on it. Is this bad for the computer, and will the extra dust cause problems? Should I leave it this way, or brave the missus' wrath and buy a case? If so, is there any particular case I should get? I don't care about looks; I just want card reader and USB slots, and for it to run as cool as or cooler than now. My current case has 1 fan.
    Also, what are the max temps for my new and old computer parts? Is 40°C under load OK for my CPU? What about 70°C for my GPU - is that OK too, or should I worry? What are normal and safe temps for my components? I have looked around, but there seem to be lots of different answers. I know that 100°C is bad, but I want my parts to last as long as possible, and this site always seems to give good replies without arguing or flaming.

  • What can cause a kernel hang on redhat 4?

    - by Ivan Buttinoni
    I have to solve a nasty problem on a ten-machine "cluster": randomly, one of these machines hangs during a hard computation; sometimes it still pings, sometimes not. The problem was described to me over the phone - I have yet to touch or see these machines, so I can't be more precise. It seems there's no (real) keyboard or monitor attached to them, so I have nothing from keyboard LEDs or on-screen messages.
    Don't worry - what I really need are suggestions on where to search for the problem, and on what can cause a kernel hang on a working machine. I also saw this post, but it seems to be the same need in a different situation.
    My ideas so far:

    • HW problem (RAM, CPU, fan, etc.)
    • bad autofs configuration
    • bad nfs(?) configuration
    • presence of a trojan/hacker/etc.
    • /dev/"swap" linked to /dev/zero
    • kernel out of memory(??)
    • kernel bug

    In other words, I'm trying to imagine what kind of event can crash the kernel itself, instead of the application that generated the event. What hangs have YOU experienced before? Write it to me! TIA
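
    Since the machines are headless, one cheap preparation step (my suggestion; this works on RHEL 4's 2.6 kernel) is to enable the magic SysRq interface ahead of time, so that when a box half-hangs but is still reachable over the network or a serial line, you can dump its state into the kernel log:

        # Sketch: allow SysRq, then (on a wedged but reachable box) dump
        # blocked tasks and memory state to the kernel log / console.
        echo 1 > /proc/sys/kernel/sysrq
        echo t > /proc/sysrq-trigger   # task list with kernel stacks
        echo m > /proc/sysrq-trigger   # memory usage summary

    A task dump taken during a hang will usually show whether everything is stuck in NFS/autofs waits, memory reclaim, or something else entirely.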

  • How to log kernel panics without KVM

    - by Spacedust
    My server is crashing and I can't find an answer as to why. It all started after my datacenter upgraded the RAM from 16 GB to 32 GB. I also found these entries in dmesg - they started to show up just before the first kernel panic:

        EXT4-fs error (device md2): ext4_ext_find_extent: bad header/extent in inode #97911179: invalid magic - magic 5f69, entries 28769, max 26988(0), depth 24939(0)
        EXT4-fs error (device md2): ext4_ext_remove_space: bad header/extent in inode #97911179: invalid magic - magic 5f69, entries 28769, max 26988(0), depth 24939(0)
        EXT4-fs error (device md2): ext4_mb_generate_buddy: EXT4-fs: group 20974: 8589 blocks in bitmap, 54896 in gd
        JBD: Spotted dirty metadata buffer (dev = md2, blocknr = 0). There's a risk of filesystem corruption in case of system crash.
        EXT4-fs error (device md2): ext4_ext_split: inode #97911179: (comm pdflush) eh_entries 28769 != eh_max 26988!
        EXT4-fs (md2): delayed block allocation failed for inode 97911179 at logical offset 1039 with max blocks 1 with error -5
        This should not happen!! Data will be lost
        EXT4-fs error (device md2): ext4_mb_generate_buddy: EXT4-fs: group 21731: 5 blocks in bitmap, 60762 in gd
        JBD: Spotted dirty metadata buffer (dev = md2, blocknr = 0). There's a risk of filesystem corruption in case of system crash.

    My system is CentOS 5.8 64-bit with the latest kernel, 2.6.18-308.20.1.el5. How can I check what the reason for the kernel panic is without having access to a KVM? I have told my datacenter admins to check the memory in the server.
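
    Without a KVM, the standard trick is netconsole, which streams kernel messages - including the panic backtrace - over UDP to another machine. A sketch (the IPs, interface, and MAC are placeholders for a second box on the same LAN):

        # Sketch: send kernel messages from local port 6665 on eth0 to
        # 192.168.0.2:514; on the receiving box run e.g. "nc -u -l 514".
        modprobe netconsole \
            netconsole=6665@192.168.0.1/eth0,514@192.168.0.2/00:16:3e:12:34:56

    The heavier alternative on CentOS 5 is kdump (crashkernel= boot parameter plus the kexec-tools service), which captures a full vmcore. Either way, corruption that appears right after a RAM upgrade is a classic bad-DIMM signature, so a memtest pass is worth insisting on.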
