Search Results

Search found 16009 results on 641 pages for 'side effect'.


  • Mounting 2.5" SSD in Antec Atlas 550's 3.5" bay

    - by cecilkorik
    Just got some new SSDs to add to my servers; unfortunately, there doesn't appear to be anywhere to actually mount them. The cases are Antec Atlas 550 mid-tower server cases. The problem is that the 3.5" bays are intended for 3.5" hard drives (obviously), not SSDs, so drives are mounted from the bottom with rubber grommets to reduce vibration. There are NO side holes in any of the 3.5" bays; all mounting has to be done from the bottom of the drive, through the thick rubber grommets, which requires special extra-long screws with large heads. Those screws will not work with the SSD: the thread is too coarse, as 2.5" drives apparently use M3 screws with a finer pitch. The case's 3.5" bays have holes aligned for 2.5" drives, and by moving the grommets into the appropriate holes they line up with the SSD. But that's as far as I can get. I don't have, and cannot find, any screws with the finer M3 pitch that are long enough and have the wide head. I can't remove the grommets because the screw holes are much too wide without them - the entire head of the screw fits through. And again, there are no side holes, so I can't use the mounting bracket that came with the drive, nor any other 2.5"-to-3.5" bracket I've found. Here is a pic of what I am dealing with (note that the SSD is just resting on the case; it's not currently screwed in, if that's not clear). Any ideas for safely mounting these little guys without resorting to duct tape or superglue would be very appreciated. If anyone knows where I could buy the appropriate type of M3 threaded screw, or has any other ideas, please help me out here.


  • How to configure amavisd-new for only scanning on particular senders/servers?

    - by mailq
    I'd like to know how to configure amavisd-new to scan for spam only for particular clients (IPs, CIDRs or hostnames) or, alternatively, for the sender's email domain. I know it is possible to do this based on the recipient's mail address, but not how to do it for the sender's mail address. It is even possible to do it based on the recipient's IP address with policy banks. But my approach should be independent of the recipient and rely only on the sender. What I want to accomplish is to scan only mail originating from Yahoo, Google, Hotmail and the other big senders, since it is easier to configure which senders should be observed than which ones shouldn't. I know this is easier to achieve on the MTA side, but that is not part of the question because I already have a solution on the MTA side; I want to do it in amavisd-new. And it doesn't help to know how to put senders on a whitelist, as that still means the mail goes through all the scanning but then gets a high negative score. The mail shouldn't be scanned at all unless sent by the big players. So which parameter in amavisd-new is the right one to enable scanning for particular senders and only for these?


  • Limiting bandwidth on internal interface on Linux gateway

    - by Jack Scott
    I am responsible for a Linux-based (it runs Debian) branch office router that takes a single high-speed Internet connection (eth2) and turns it into about 20 internal networks, each with a separate subnet (192.168.1.0/24 to 192.168.20.0/24) and a separate VLAN (eth0.101 to eth0.120). I am trying to restrict bandwidth on one of the internal subnets that is consistently chewing up more bandwidth than it should. What is the best way to do this? My first try was wondershaper, which I heard about on SuperUser here. Unfortunately, it is useful for exactly the opposite of my situation: it helps on the client side, not on the Internet side. My second attempt was the script found at http://www.topwebhosts.org/tools/traffic-control.php, which I modified so the active part is:

        tc qdisc add dev eth0.113 root handle 13: htb default 100
        tc class add dev eth0.113 parent 13: classid 13:1 htb rate 3mbps
        tc class add dev eth0.113 parent 13: classid 13:2 htb rate 3mbps
        tc filter add dev eth0.113 protocol ip parent 13:0 prio 1 u32 match ip dst 192.168.13.0/24 flowid 13:1
        tc filter add dev eth0.113 protocol ip parent 13:0 prio 1 u32 match ip src 192.168.13.0/24 flowid 13:2

    What I want this to do is restrict the bandwidth on VLAN 113 (subnet 192.168.13.0/24) to 3mbit up and 3mbit down. Unfortunately, it seems to have no effect at all! I'm very inexperienced with the tc command, so any help getting this working would be appreciated.
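    Two hedged observations, offered as a sketch rather than a definitive fix: tc only shapes traffic leaving an interface, so the src 192.168.13.0/24 filter on eth0.113 never matches upload traffic (that traffic leaves via eth2), and in tc's units "3mbps" means 3 megabytes per second while "3mbit" is 3 megabits. Something along these lines, assuming eth2 is the upstream interface:

        # Download cap: all traffic leaving eth0.113 toward the subnet
        tc qdisc add dev eth0.113 root handle 13: htb default 1
        tc class add dev eth0.113 parent 13: classid 13:1 htb rate 3mbit

        # Upload cap: traffic leaving eth2 that originated in the subnet
        tc qdisc add dev eth2 root handle 1: htb
        tc class add dev eth2 parent 1: classid 1:2 htb rate 3mbit
        tc filter add dev eth2 protocol ip parent 1:0 prio 1 u32 \
            match ip src 192.168.13.0/24 flowid 1:2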


  • Odd IIS FTP Failure

    - by Monkey Boson
    We're running a script on our production box that zips up our database and FTPs it to a backup box every night. Our production box is running Red Hat Enterprise 5. Our backup box is running Windows XP Pro / IIS 5.1. Both machines are on the same VLAN (not sure if this is important). The backup file usually clocks in at around 3GB. Every now and again (~5% of the time), the backup script fails. The shell script on the "client side" - which looks at return codes - never identifies any problem, since ftp always returns 0. On the "server side", IIS writes out a log that looks like this:

        #Software: Microsoft Internet Information Services 5.1
        #Version: 1.0
        #Date: 2009-08-08 07:04:25
        #Fields: time c-ip cs-method cs-uri-stem sc-status sc-win32-status
        07:04:25 192.168.111.235 [15]USER backup 331 0
        07:04:25 192.168.111.235 [15]PASS - 230 0
        07:05:54 192.168.111.235 [15]created backup_20090808.zip 426 10035
        07:06:16 192.168.111.235 [15]QUIT - 426 0

    Now, I know that 426 means "Connection closed, transfer aborted", which is sort of a catch-all for "IIS was not happy". The real puzzler is the Win32 status code: 10035 (WSAEWOULDBLOCK - resource temporarily unavailable). My understanding is that this code is normal when using non-blocking socket calls, which would almost certainly be used by any FTP server implementation. My first guess, that it might be a timeout issue, doesn't make sense, since we're only talking about a few minutes here and the timeout was left at the default 900 s. Does anybody have any ideas about what is causing this problem, and how it may be fixed? Thanks!


  • Software distribution from web server to client using PHP/FTP

    - by Jenolan
    I develop and maintain a number of add-ons and utilities for various widgets (mainly aMember), which generally means I need to install PHP-based code onto other people's systems. Whilst I have a VPS and have access to rsync and all sorts of yummy tools, most of the people I deal with have basic FTP access and that's all, folks. Uploading from my local system is also a problem, as I am satellite-based (two-way), so it is fairly slow and expensive, and in any case the files are already on my server. So there is no rsync, fxp or ssh, and I can't really install anything, as it is obviously not my system; they would be justifiably miffed if I started installing file managers or other things onto their sites. What I have been trying to find is a utility that I can run on my server from the web, preferably PHP-based, that will be like a file manager but a bit different. Two panels:

        LH side: the local server - pretty much like a standard FM application
        RH side: ability to log in via FTP to the client's system

    Then I can fiddle as required. The closest thing I have found is net2ftp, but it doesn't have the GUI interface. At the moment I simply ssh into my server, power up ncftp and run that way, but something easier to use would be mucho niceness. Thanks in advance! Larry
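    One scripted (if non-GUI) possibility, hedged since it assumes the NcFTP suite is installed on the VPS and uses placeholder host, credentials and paths:

        # Push an add-on directory that already lives on this server
        # straight to a client's site over plain FTP, non-interactively.
        ncftpput -u clientuser -p 'clientpass' -R \
            ftp.clientsite.example /public_html/amember \
            /srv/addons/mywidget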


  • Lighttpd - byte range request doesn't work. can't stream mp4

    - by w-01
    I am attempting to use the latest Flowplayer (if it could work it would be pretty awesome, btw): http://flowplayer.org. One of the cool things about it is that it uses the new HTML5 video element and supports random seeking/playback. In order to do this, you need a byte-range-request-capable server on the backend. Luckily I'm using Lighttpd 1.5.0 on the backend. Unfortunately the current behavior is that when I do a random seek, the video simply restarts itself from the beginning. The docs say: "For HTML5 video you don't have to do any client side configuration. If your server supports byte range requests then seeking should work on the fly. Most servers including Apache, Nginx and Lighttpd support this." On my page, using the Chrome developer tools, I can see that when the video is requested, the server response headers indicate it is able to accept byte ranges:

        Accept-Ranges: bytes

    When I do a random seek in the player, I can see that byte ranges are requested appropriately in the request header:

        Range: bytes=5668-10785

    I can also verify the moov atom is at the front of the video file. My question here is whether there is something else on the lighttpd side I'm missing in order to enable byte-range requests. The reason I ask is because the current behavior suggests that lighttpd simply doesn't understand the byte-range request and is just re-serving the video from the beginning. Update: it's clearer to put this here. As per RJS' suggestion I ran a curl command. In the response it looks like lighttpd is working as expected:

        Content-Range: bytes 1602355-18844965/18844966
        Content-Length: 17242611
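    For anyone reproducing this, a quick hedged check along the lines of the curl test RJS suggested (the URL is a placeholder):

        # A range-capable server answers "206 Partial Content" with a
        # matching Content-Range; a plain "200 OK" with the full body
        # means the Range header was ignored.
        curl -s -D - -o /dev/null -H "Range: bytes=0-99" \
            http://example.com/video.mp4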


  • Migrating to CF9: trouble getting JRun working with SSL

    - by DaveBurns
    I have a client on MX7 who wants to migrate to CF9. I have a dev environment for them on my WinXP machine where I've configured MX7 to run with JRun's built-in web server. I've had that working for a long time with both regular and SSL connections. I installed CF9 yesterday side-by-side with the existing MX7 install to start testing. The install was smooth and detected MX7, adjusted CF9's port numbers to avoid conflicts, etc. Testing started well: MX7 worked over both regular HTTP and SSL, and CF9 worked over regular HTTP. But I can't get CF9 to work with SSL. I installed a new certificate with keytool; Firefox (v3.6) complained about it being unsigned, I added it to the exception list, and now I get this:

        Secure Connection Failed
        An error occurred during a connection to localhost:9101.
        Peer reports it experienced an internal error.
        (Error code: ssl_error_internal_error_alert)

    I've been Googling that in all variations but can't find much help to get past this. I don't see any info in any log files either. FWIW, here's my SSL config from SERVER-INF/jrun.xml:

        <service class="jrun.servlet.http.SSLService" name="SSLService">
          <attribute name="enabled">true</attribute>
          <attribute name="interface">*</attribute>
          <attribute name="port">9101</attribute>
          <attribute name="keyStore">{jrun.rootdir}/lib/mykey</attribute>
          <attribute name="keyStorePassword">*deleted*</attribute>
          <attribute name="trustStore">{jrun.rootdir}/lib/trustStore</attribute>
          <attribute name="socketFactoryName">jrun.servlet.http.JRunSSLServerSocketFactory</attribute>
          <attribute name="deactivated">false</attribute>
          <attribute name="bindAddress">*</attribute>
          <attribute name="clientAuth">false</attribute>
        </service>

    Anyone here know of any issues with setting up SSL and CF9? Anyone had success with it? Dave
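    One hedged thing to rule out (alias, path and validity below are assumptions): older keytool releases generate DSA keys by default, and some TLS stacks abort the handshake with exactly this kind of internal error when offered a DSA certificate. Regenerating the keystore entry with an explicit RSA key is cheap to try:

        keytool -genkey -alias jrun -keyalg RSA -keysize 2048 \
            -validity 365 -keystore /path/to/jrun/lib/mykey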


  • How to auto-cc a system email account any time a user creates an appointment

    - by Ferdy
    I will not bother explaining my full architecture or reasons for wanting this, in order to keep this question short: is it possible to auto-cc a certain email account any time an Exchange user creates an appointment or meeting in his own calendar?

        Is it possible using rules?
        Our Exchange 2007 server is outsourced; I cannot change the configuration or install plugins server-side
        Preferably, it should still work server-side, because users may use the Outlook client but also Outlook Web Access
        Is there any other way, perhaps using group policies?

    My conclusion so far is that the only viable way to accomplish this is to build an Outlook add-on. The problem there is that it would need to be managed for thousands of desktop users, and the add-on will not work when using another client (OWA, mobile). An alternative architecture could be to pull the information from each user's calendar on a scheduled basis. Given that we are talking about a lot of users, scalability is a major issue; this has also been confirmed by Microsoft. Can you confirm that my thinking is correct, or do you have any other solutions?


  • Macbook Pro Triple Boot OS X Lion, Windows 7 and Windows 8

    - by Lloyd Sparkes
    MacBook Pro (Summer 2010, basic model). I currently have OS X Lion and Windows 7 running side by side on my MacBook Pro. However, I need to get Windows 8 running as well in this mix (a virtual machine is not good enough; I need the performance). I have created a suitably sized partition (80GB) that is recognizable in Boot Camp. However, every time I try to boot from the USB stick (which worked to install Windows 8 on my PC) using the latest version of rEFIt, it just boots Windows 7 and not the Windows 8 installer. I cannot start the installation from within Windows 7, as it would just install over Windows 7. I'm guessing the Boot Camp emulation is doing something weird to stop the "Press any key to install Windows..." message from appearing (which should happen if the installer detects Windows is already installed, e.g. if you left your install disk in). Is there a way to get around this / force the installer to start? (Note I cannot start the Windows 7 installer either, if I wanted to install a second copy of Windows 7 to upgrade to Windows 8.)


  • Configuring OS X L2TP VPN to use Certificate for IPSEC layer instead of Pre Shared Key

    - by Matthew Savage
    I'm trying to set up an L2TP VPN on an OS X Snow Leopard Server, and have had success using a pre-shared key; however, I would rather not rely on a simple string, and use a certificate instead. Setting this up on the server side is seemingly easy: you simply select a certificate you have generated from the list and hit apply. However, when I try to use the certificate on the client side, it fails. I have exported the certificate into a P12 file, transferred it to the client, and imported it into the login keychain. But when I try to choose the certificate (from Network preferences, clicking Authentication Settings, then selecting Certificate and pressing Select) I am shown the following error:

        No machine certificates found
        Certificate authentication cannot be used because your keychain
        does not contain any suitable certificates. Use Keychain Access
        to import the certificate into your keychain. If you do not have
        the certificates required for authentication, contact your
        network administrator.

    Unfortunately, even when I generate a certificate where I override the defaults and ensure the DNS name etc. are set properly, this doesn't change. When I select Certificate Authentication for the user auth and click Select, the certificate for the server shows up there, but obviously this isn't where I need it to be available.
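    A hedged hunch worth testing, since the error says "machine certificates": the IPSec layer looks in the System keychain, not the login keychain, so importing the P12 there may be all that's missing (the path and filename are placeholders):

        # Import the identity where the IPSec (machine) layer can see it.
        sudo security import /path/to/client-identity.p12 \
            -k /Library/Keychains/System.keychain -f pkcs12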


  • osx bash grep - finding search terms in a large file with one single line

    - by unsynchronized
    Is there a simple Unix command line I can enter which lets me isolate, say, 512 bytes either side of a search term, even if there is only one "line" in a very large text file? OK, this should be easy. Famous last words. I'm not that familiar with grep, but it seems it is mainly used to filter out lines in the input that contain search terms. I have a very large JSON file that I downloaded and want to search for a particular term. Before you click the link - it's over 244MB, so be warned - it is from the Internet Wayback Machine and contains lists of zip files of archived photos. I am trying to find mine. Their web interface is broken, so I found the JSON file that they make public here - it's the last one on the list. When I grep looking for my username, it finds it, but proceeds to dump that line to the console. The problem is that the line is 244MB long, and it's the only line in the file. I tried using less, but could not get that to do much - it's very slow, and seems to have the same issue. Is there a simple Unix command line I can enter which lets me isolate, say, 512 bytes either side of a search term?
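    Two hedged sketches of what such a command line could look like ("myusername", the filename and the offset are placeholders):

        # 1) Print only the match plus up to 512 bytes of context each side.
        grep -oE '.{0,512}myusername.{0,512}' bigfile.json

        # 2) Or locate the match's byte offset, then carve out the region.
        grep -bo 'myusername' bigfile.json    # prints "<offset>:<match>"
        tail -c +$((1234567 - 512)) bigfile.json | head -c 1024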


  • Redundant OpenVPN connections with advanced Linux routing over an unreliable network

    - by konrad
    I am currently living in a country that blocks many websites and has unreliable network connections to the outside world. I have two OpenVPN endpoints (say: vpn1 and vpn2) on Linux servers that I use to circumvent the firewall. I have full access to these servers. This works quite well, except for the high packet loss on my VPN connections. This packet loss varies between 1% and 30% depending on the time, and seems to have low correlation; most of the time it seems random. I am thinking about setting up a home router (also on Linux) that maintains OpenVPN connections to both endpoints and sends all packets twice, to both endpoints. vpn2 would send all packets from home on to vpn1. Return traffic would be sent both directly from vpn1 to home, and also through vpn2.

        +------------+
        |    home    |
        +------------+
           |      |
           | OpenVPN |
           | links   |
           |      |
        ~~~~~~~~~~~~~~~~~~ unreliable connection
           |      |
        +----------+   +----------+
        |   vpn1   |---|   vpn2   |
        +----------+   +----------+
             |
        +------------+
        | HTTP proxy |
        +------------+
             |
         (internet)

    For clarity: all packets between home and the HTTP proxy will be duplicated and sent over different paths, to increase the chances one of them will arrive. If both arrive, the second one can be silently discarded. Bandwidth usage is not an issue, on either the home side or the endpoint side. vpn1 and vpn2 are close to each other (3ms ping) and have a reliable connection. Any pointers on how this could be achieved using the advanced routing policies available in Linux?
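    Not a complete answer, but one hedged building block: the iptables TEE target (xt_TEE, Linux 2.6.35 or later) clones matching packets to a second gateway, which could produce the duplicates on the home side; tun0 and the gateway address below are assumptions, and discarding the duplicate on the far end still needs solving:

        # Clone everything leaving the first tunnel and route the copy
        # toward vpn2's tunnel address as well.
        iptables -t mangle -A POSTROUTING -o tun0 \
            -j TEE --gateway 10.9.0.1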


  • OpenVPN Client timing out

    - by Austin
    I recently installed OpenVPN on my Ubuntu VPS. Whenever I try to connect to it, I can establish a connection just fine. However, everything I try to connect to times out. If I try to ping something, it will resolve the IP, but will time out after resolving the IP. (So the DNS server seems to be working correctly.) My server.conf has this relevant information (at least I think it's relevant; I'm not sure if you need more or not):

        # Which local IP address should OpenVPN
        # listen on? (optional)
        ;local a.b.c.d

        # Which TCP/UDP port should OpenVPN listen on?
        # If you want to run multiple OpenVPN instances
        # on the same machine, use a different port
        # number for each one. You will need to
        # open up this port on your firewall.
        port 1194

        # TCP or UDP server?
        ;proto tcp
        proto udp

        # "dev tun" will create a routed IP tunnel,
        # "dev tap" will create an ethernet tunnel.
        # Use "dev tap0" if you are ethernet bridging
        # and have precreated a tap0 virtual interface
        # and bridged it with your ethernet interface.
        # If you want to control access policies
        # over the VPN, you must create firewall
        # rules for the TUN/TAP interface.
        # On non-Windows systems, you can give
        # an explicit unit number, such as tun0.
        # On Windows, use "dev-node" for this.
        # On most systems, the VPN will not function
        # unless you partially or fully disable
        # the firewall for the TUN/TAP interface.
        ;dev tap
        dev tun

        # Windows needs the TAP-Win32 adapter name
        # from the Network Connections panel if you
        # have more than one. On XP SP2 or higher,
        # you may need to selectively disable the
        # Windows firewall for the TAP adapter.
        # Non-Windows systems usually don't need this.
        ;dev-node MyTap

        # SSL/TLS root certificate (ca), certificate
        # (cert), and private key (key). Each client
        # and the server must have their own cert and
        # key file. The server and all clients will
        # use the same ca file.
        #
        # See the "easy-rsa" directory for a series
        # of scripts for generating RSA certificates
        # and private keys. Remember to use
        # a unique Common Name for the server
        # and each of the client certificates.
        #
        # Any X509 key management system can be used.
        # OpenVPN can also use a PKCS #12 formatted key file
        # (see "pkcs12" directive in man page).
        ca ca.crt
        cert server.crt
        key server.key  # This file should be kept secret

        # Diffie hellman parameters.
        # Generate your own with:
        #   openssl dhparam -out dh1024.pem 1024
        # Substitute 2048 for 1024 if you are using
        # 2048 bit keys.
        dh dh1024.pem

        # Configure server mode and supply a VPN subnet
        # for OpenVPN to draw client addresses from.
        # The server will take 10.8.0.1 for itself,
        # the rest will be made available to clients.
        # Each client will be able to reach the server
        # on 10.8.0.1. Comment this line out if you are
        # ethernet bridging. See the man page for more info.
        server 10.8.0.0 255.255.255.0

        # Maintain a record of client <-> virtual IP address
        # associations in this file. If OpenVPN goes down or
        # is restarted, reconnecting clients can be assigned
        # the same virtual IP address from the pool that was
        # previously assigned.
        ifconfig-pool-persist ipp.txt

        # Configure server mode for ethernet bridging.
        # You must first use your OS's bridging capability
        # to bridge the TAP interface with the ethernet
        # NIC interface. Then you must manually set the
        # IP/netmask on the bridge interface, here we
        # assume 10.8.0.4/255.255.255.0. Finally we
        # must set aside an IP range in this subnet
        # (start=10.8.0.50 end=10.8.0.100) to allocate
        # to connecting clients. Leave this line commented
        # out unless you are ethernet bridging.
        ;server-bridge 10.8.0.4 255.255.255.0 10.8.0.50 10.8.0.100

        # Configure server mode for ethernet bridging
        # using a DHCP-proxy, where clients talk
        # to the OpenVPN server-side DHCP server
        # to receive their IP address allocation
        # and DNS server addresses. You must first use
        # your OS's bridging capability to bridge the TAP
        # interface with the ethernet NIC interface.
        # Note: this mode only works on clients (such as
        # Windows), where the client-side TAP adapter is
        # bound to a DHCP client.
        ;server-bridge

        # Push routes to the client to allow it
        # to reach other private subnets behind
        # the server. Remember that these
        # private subnets will also need
        # to know to route the OpenVPN client
        # address pool (10.8.0.0/255.255.255.0)
        # back to the OpenVPN server.
        ;push "route 192.168.10.0 255.255.255.0"
        ;push "route 192.168.20.0 255.255.255.0"

        # To assign specific IP addresses to specific
        # clients or if a connecting client has a private
        # subnet behind it that should also have VPN access,
        # use the subdirectory "ccd" for client-specific
        # configuration files (see man page for more info).

        # EXAMPLE: Suppose the client
        # having the certificate common name "Thelonious"
        # also has a small subnet behind his connecting
        # machine, such as 192.168.40.128/255.255.255.248.
        # First, uncomment out these lines:
        ;client-config-dir ccd
        ;route 192.168.40.128 255.255.255.248
        # Then create a file ccd/Thelonious with this line:
        #   iroute 192.168.40.128 255.255.255.248
        # This will allow Thelonious' private subnet to
        # access the VPN. This example will only work
        # if you are routing, not bridging, i.e. you are
        # using "dev tun" and "server" directives.

        # EXAMPLE: Suppose you want to give
        # Thelonious a fixed VPN IP address of 10.9.0.1.
        # First uncomment out these lines:
        ;client-config-dir ccd
        ;route 10.9.0.0 255.255.255.252
        # Then add this line to ccd/Thelonious:
        #   ifconfig-push 10.9.0.1 10.9.0.2

        # Suppose that you want to enable different
        # firewall access policies for different groups
        # of clients. There are two methods:
        # (1) Run multiple OpenVPN daemons, one for each
        #     group, and firewall the TUN/TAP interface
        #     for each group/daemon appropriately.
        # (2) (Advanced) Create a script to dynamically
        #     modify the firewall in response to access
        #     from different clients. See man
        #     page for more info on learn-address script.
        ;learn-address ./script

        # If enabled, this directive will configure
        # all clients to redirect their default
        # network gateway through the VPN, causing
        # all IP traffic such as web browsing and
        # and DNS lookups to go through the VPN
        # (The OpenVPN server machine may need to NAT
        # or bridge the TUN/TAP interface to the internet
        # in order for this to work properly).
        push "redirect-gateway def1 bypass-dhcp"
        push "dhcp-option DNS 8.8.8.8"

        # Certain Windows-specific network settings
        # can be pushed to clients, such as DNS
        # or WINS server addresses. CAVEAT:
        # http://openvpn.net/faq.html#dhcpcaveats
        # The addresses below refer to the public
        # DNS servers provided by opendns.com.
        ;push "dhcp-option DNS 8.8.8.8"
        push "dhcp-option DNS 8.8.4.4"

        # Uncomment this directive to allow different
        # clients to be able to "see" each other.
        # By default, clients will only see the server.
        # To force clients to only see the server, you
        # will also need to appropriately firewall the
        # server's TUN/TAP interface.
        ;client-to-client

        # Uncomment this directive if multiple clients
        # might connect with the same certificate/key
        # files or common names. This is recommended
        # only for testing purposes. For production use,
        # each client should have its own certificate/key
        # pair.
        #
        # IF YOU HAVE NOT GENERATED INDIVIDUAL
        # CERTIFICATE/KEY PAIRS FOR EACH CLIENT,
        # EACH HAVING ITS OWN UNIQUE "COMMON NAME",
        # UNCOMMENT THIS LINE OUT.
        ;duplicate-cn

        # The keepalive directive causes ping-like
        # messages to be sent back and forth over
        # the link so that each side knows when
        # the other side has gone down.
        # Ping every 10 seconds, assume that remote
        # peer is down if no ping received during
        # a 120 second time period.
        keepalive 10 120

        # For extra security beyond that provided
        # by SSL/TLS, create an "HMAC firewall"
        # to help block DoS attacks and UDP port flooding.
        #
        # Generate with:
        #   openvpn --genkey --secret ta.key
        #
        # The server and each client must have
        # a copy of this key.
        # The second parameter should be '0'
        # on the server and '1' on the clients.
        ;tls-auth ta.key 0  # This file is secret

        # Select a cryptographic cipher.
        # This config item must be copied to
        # the client config file as well.
        ;cipher BF-CBC        # Blowfish (default)
        ;cipher AES-128-CBC   # AES
        ;cipher DES-EDE3-CBC  # Triple-DES

        # Enable compression on the VPN link.
        # If you enable it here, you must also
        # enable it in the client config file.
        comp-lzo

        # The maximum number of concurrently connected
        # clients we want to allow.
        ;max-clients 100

        # It's a good idea to reduce the OpenVPN
        # daemon's privileges after initialization.
        #
        # You can uncomment this out on
        # non-Windows systems.
        ;user nobody
        ;group nogroup

        # The persist options will try to avoid
        # accessing certain resources on restart
        # that may no longer be accessible because
        # of the privilege downgrade.
        persist-key
        persist-tun

        # Output a short status file showing
        # current connections, truncated
        # and rewritten every minute.
        status openvpn-status.log

        # By default, log messages will go to the syslog (or
        # on Windows, if running as a service, they will go to
        # the "\Program Files\OpenVPN\log" directory).
        # Use log or log-append to override this default.
        # "log" will truncate the log file on OpenVPN startup,
        # while "log-append" will append to it. Use one
        # or the other (but not both).
        ;log         openvpn.log
        ;log-append  openvpn.log

        # Set the appropriate level of log
        # file verbosity.
        #
        # 0 is silent, except for fatal errors
        # 4 is reasonable for general usage
        # 5 and 6 can help to debug connection problems
        # 9 is extremely verbose
        verb 3

        # Silence repeating messages. At most 20
        # sequential messages of the same message
        # category will be output to the log.
        ;mute 20

    I've tried on multiple computers, by the way - the same result on all of them. What could be wrong? Thanks in advance, and if you need other information I'll gladly post it.

    Information for new comments:

        root@vps:~# iptables -L -n -v
        Chain INPUT (policy ACCEPT 862K packets, 51M bytes)
         pkts bytes target prot opt in  out  source        destination

        Chain FORWARD (policy ACCEPT 3 packets, 382 bytes)
         pkts bytes target prot opt in  out  source        destination
            0     0 ACCEPT all  --  *   *    0.0.0.0/0     0.0.0.0/0    state RELATED,ESTABLISHED
         4641  298K ACCEPT all  --  *   *    10.8.0.0/24   0.0.0.0/0
            0     0 REJECT all  --  *   *    0.0.0.0/0     0.0.0.0/0    reject-with icmp-port-unreachable

        Chain OUTPUT (policy ACCEPT 1671K packets, 2378M bytes)
         pkts bytes target prot opt in  out  source        destination

    And:

        root@vps:~# iptables -t nat -L -n -v
        Chain PREROUTING (policy ACCEPT 17937 packets, 2013K bytes)
         pkts bytes target prot opt in  out  source        destination

        Chain POSTROUTING (policy ACCEPT 8975 packets, 562K bytes)
         pkts bytes target prot opt in  out  source        destination
         1579  103K SNAT   all  --  *   *    10.8.0.0/24   0.0.0.0/0    to:SERVERIP

        Chain OUTPUT (policy ACCEPT 8972 packets, 562K bytes)
         pkts bytes target prot opt in  out  source        destination
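    For readers hitting the same symptom, a hedged sketch of the usual suspects for "connects and resolves DNS, but TCP times out", assuming eth0 is the VPS's public interface:

        # 1) Confirm the kernel forwards between tun0 and the public NIC.
        sysctl net.ipv4.ip_forward        # should print "= 1"
        sysctl -w net.ipv4.ip_forward=1

        # 2) Small UDP (DNS) succeeding while TCP stalls is the classic
        #    MTU symptom; clamping MSS to the path MTU often fixes it.
        iptables -t mangle -A FORWARD -p tcp --tcp-flags SYN,RST SYN \
            -j TCPMSS --clamp-mss-to-pmtu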


  • Windows XP dual screen problems, user account related

    - by Chris
    I have had this issue with a few laptops now, and it looks like it is some sort of user account problem. Specifics of the system are:

        Dell laptop
        Windows XP Pro SP3
        Non-domain-member computer
        DLP projector connected to the laptop via VGA

    I use this setup almost daily to do presentations, always in the mirrored display mode where I can see on the laptop monitor the same thing that is displayed on the projector. Today, when I boot up, I get the mirrored display at the login screen, but after I log in, it switches to extended desktop (like two desktops side by side). Fn+F8 just cycles through all the normal settings except the mirrored display. I created a new user account on the computer and it performs normally: mirrored display works as expected. I have run into this about 4 times now, and it can always be solved by creating a new user account on the computer, and then all is well. I would like to either:

        1. Find a way to reset the customized settings for a specific user account, which would hopefully make this go away, or
        2. Find the specific setting that causes this, so that I can easily fix it when the problem comes up.

    Creating new user accounts is kind of a pain, and an easy fix must be out there somewhere.


  • How can I tell GoogleBot that a subdirectory is now a subdomain? [migrated]

    - by cwd
    I had about a million pages of a catalog indexed under a subdirectory, and now that's moved to a subdomain. GoogleBot is crawling each one of them and getting a 301 redirect to the new location. Even though I have set up the redirect rule in the Apache sites-enabled configuration file (i.e. the redirect happens early in Apache's handling - PHP is not even getting loaded), the server isn't handling the load well. GoogleBot is making around 5 requests per second, and on top of my normal traffic that is hiking up the CPU for a few hours at a time. I checked Webmaster Tools and the corresponding documentation for a way to let Google know that the content had been moved from a subdirectory to a subdomain, but with little luck. Basically, the most helpful thing I saw said to just send 301 headers for the new location. How can I tell GoogleBot that a subdirectory is now a subdomain? If that is not an option, how can I more efficiently send 301 redirects for a particular subdomain? I was thinking perhaps the Nginx server, but I'm not sure that I can run both Apache and Nginx side by side on port 80 for different subdomains.


  • Free web-based software for team collaboration/documentation

    - by Jason Antman
    Looking for some advice here, as my search has turned out to be pretty fruitless. My group (9 people - SAs, programmers, and two network guys) is looking for some sort of web tool to... ahem... "facilitate increased collaboration" (we didn't use a buzzword generator, I swear). At the moment, we have a unified ticketing system that's braindead, but it is here to stay for political/logistical reasons. We've got 2 wikis ("old" and "new"), neither of which fulfills our needs, and which are therefore not used very often. We're looking for a free (as in both cost and open source) web-based tool.

    Management side: wants to be able to track project status, who's doing what, whether deadlines are being met, etc. Doesn't want a full-fledged "project management" app, just something where we can update "yeah this was done" or "waiting for Bob to configure the widgets". TeamBox (www.teambox.com) was suggested, but it seems almost too gimmicky, and doesn't meet any of the other requirements.

    Non-management side:

        flexible, powerful wiki for all documentation (i.e. includes good tables, easy markup, syntax highlighting, etc.)
        good full-text search of everything (i.e. type in a hostname and get every instance anyone ever uttered that name)
        task lists or to-do lists, hopefully able to be grouped into a number of "projects"
        file uploads
        RSS or Atom feeds, email alerts of updates

    We're open to doing some customizations (adding some features, notification/feeds, searching, SVN integration, etc.) but need something F/OSS that will run under Apache. My conundrum is that most of the choices I've found so far fall into one of these categories:

        project management/task tracking with poor wiki/documentation/knowledge base support
        wiki with no task tracking support
        ticketing system with everything else bolted on (we already have one that we're stuck with)
        code-centric application (we do little "development", mostly SA work)

    Any suggestions? Or, lacking that, any comments on which software would be easiest to add the lacking features to (hopefully ending up with something that actually looks good and works well)?


  • Script to unstick VNC keys?

    - by MidnightLightning
    I've run into dropped key events when connecting via VNC clients, leading to a "stuck key" (usually a meta key like CTRL or ALT) and searching around the common answer on how to solve it is often "press and release each meta key individually until the problem resolves". However, I've found this to be annoying and time consuming to try and solve it this way. Plus on a bad connection, it sometimes will miss the "key up" event for the meta key again, and still keep the key stuck. So I'm looking for an automated way to do this: From a script on the client side or the server side, is there a way to trigger "key up" events for all the meta keys (CTRL, ALT, SHIFT, and WIN/CMD, both Left and Right versions)? Or just a command to release all keys the server thinks are down at the moment? Or some scripted way to at least list which keys the server end thinks are down so I know which key to keep pressing and releasing to try and release it? I've got a Mac on the server end, so a Mac/Linux solution would be needed for my situation.
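    If the VNC server runs under X11 on Linux, a hedged one-liner with xdotool would do it; on the Mac server end there is no bundled equivalent, so treat this only as the Linux half of an answer:

        # Force key-up events for every common modifier in one shot.
        xdotool keyup Control_L Control_R Alt_L Alt_R \
            Shift_L Shift_R Super_L Super_R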


  • Listing group members using ldapsearch

    - by colemanm
    Our corporate LDAP directory is housed on a Snow Leopard Server Open Directory setup. I'm trying to use the ldapsearch tool to export an .ldif file to import into an external LDAP server to authenticate against; basically I'm trying to be able to use the same credentials internally and externally. I've got ldapsearch working and giving me the contents and attributes of everything in the "Users" OU, and even filtering down to only the attributes I need:

        ldapsearch -xLLL -H ldap://server.domain.net \
            -b "cn=users,dc=server,dc=domain,dc=net" objectClass \
            uid uidNumber cn userPassword > directorycontents.ldif

    That gives me a list of users and properties that I can import to my remote OpenLDAP server:

        dn: uid=username1,cn=users,dc=server,dc=domain,dc=net
        objectClass: inetOrgPerson
        objectClass: posixAccount
        objectClass: organizationalPerson
        uidNumber: 1000
        uid: username1
        userPassword:: (hashedpassword)
        cn: username1

    However, when I try the same query on an OD "group" instead of a "container," the results are something like this:

        dn: cn=groupname,cn=groups,dc=server,dc=domain,dc=net
        objectClass: posixGroup
        objectClass: apple-group
        objectClass: extensibleObject
        objectClass: top
        gidNumber: 1032
        cn: groupname
        memberUid: username1
        memberUid: username2
        memberUid: username3

    What I really want is a list of users from the top example, filtered based on their group memberships, but it looks like membership is set from the group side, rather than the user account side. There must be a way to filter this down and only export what I need, right?
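    A hedged client-side sketch of one way to get there, reusing the base DNs above ("groupname" is a placeholder): read the group's memberUid values first, then export each matching user entry:

        BASE="dc=server,dc=domain,dc=net"
        HOST="ldap://server.domain.net"

        # Collect the uids listed in the group, then export each user entry.
        for uid in $(ldapsearch -xLLL -H "$HOST" -b "cn=groups,$BASE" \
                "(cn=groupname)" memberUid | awk '/^memberUid:/ {print $2}'); do
            ldapsearch -xLLL -H "$HOST" -b "cn=users,$BASE" "(uid=$uid)" \
                objectClass uid uidNumber cn userPassword
        done > directorycontents.ldif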



  • How do I only dp or do just the lines, not the entire block in Vim diff?

    - by hobbes3
    I'm currently using MacVim's (Snapshot 64) "Split Diff by..." menu option. The file is Django's settings.py, comparing my file from version 1.3.1 against a fresh file from version 1.4. (Open the image in a separate window/tab to enlarge.) I know the two basic commands: do to "obtain" (and replace) a block from the other side, and dp to "put" (and replace) a block to the other side. But those two commands write the entire block, which in MacVim is shown with the purple highlights. If you look at the 2nd block, you can see that lines 2 and 3 have only two words that differ: mysite and hobbes3. I just want to replace per line, not the entire block. So is there a command to do do and dp per line, as opposed to an entire block, or do I have to type it out manually? Bonus question: I noticed that once I manually edit a block, I lose the purple highlighting. How do I "refresh" the diff to restore the highlights without reopening the file? Please try to keep the answers Vim-general as opposed to MacVim-specific. Thanks!


  • SSH login very slow on OS X Leopard

    - by acjohnson55
    My SSH sessions take a very long time to initiate. This applies for logins with and without passwords, interactive and non-interactive. I have tried setting 'GSSAPIAuthentication no' and 'IPQoS 0x00' on the client side, and 'UseDNS no' on the server side, but no dice. I'm really stumped and frustrated. The worst part is that SFTP takes forever to establish connections too, making file transfers much longer than they would be otherwise. I thought the problem might be something with PAM, because of where the hang is in the sshd log below, so I tried commenting out each line one by one in the /etc/pam.d/sshd file. Some caused login to be impossible, some had no apparent effect. I can't really tell if PAM is stalling on other services, but I can say that su'ing into my account from another account with 'su -l' has no apparent delay. I tried creating a new user account, just to see if there was something wrong with my existing account, and the same problem persisted. Any ideas what's going on? On the client side, the most verbose mode outputs (redacted where reasonable):

        OpenSSH_5.9p1, OpenSSL 0.9.8r 8 Feb 2011
        debug1: Reading configuration data ...
        debug1: ... line 1: Applying options for ...
        debug1: Reading configuration data /etc/ssh_config
        debug1: /etc/ssh_config line 20: Applying options for *
        debug1: /etc/ssh_config line 53: Applying options for *
        debug2: ssh_connect: needpriv 0
        debug1: Connecting to ... [x.x.x.x] port 22.
        debug1: Connection established.
        debug1: identity file /.../.ssh/id_rsa type -1
        debug1: identity file /.../.ssh/id_rsa-cert type -1
        debug3: Incorrect RSA1 identifier
        debug3: Could not load "/.../.ssh/id_dsa" as a RSA1 public key
        debug1: identity file /.../.ssh/id_dsa type 2
        debug1: identity file /.../.ssh/id_dsa-cert type -1
        debug1: Remote protocol version 2.0, remote software version OpenSSH_5.2
        debug1: match: OpenSSH_5.2 pat OpenSSH*
        debug1: Enabling compatibility mode for protocol 2.0
        debug1: Local version string SSH-2.0-OpenSSH_5.9
        debug2: fd 3 setting O_NONBLOCK
        debug3: load_hostkeys: loading entries for host "..." from file "/.../.ssh/known_hosts"
        debug3: load_hostkeys: found key type RSA in file /.../.ssh/known_hosts:9
        debug3: load_hostkeys: loaded 1 keys
        debug3: order_hostkeyalgs: prefer hostkeyalgs: [email protected],[email protected],ssh-rsa
        debug1: SSH2_MSG_KEXINIT sent
        debug1: SSH2_MSG_KEXINIT received
        debug2: kex_parse_kexinit: diffie-hellman-group-exchange-sha256,diffie-hellman-group-exchange-sha1,diffie-hellman-group14-sha1,diffie-hellman-group1-sha1
        debug2: kex_parse_kexinit: [email protected],[email protected],ssh-rsa,[email protected],[email protected],ssh-dss
        debug2: kex_parse_kexinit: aes128-ctr,aes192-ctr,aes256-ctr,arcfour256,arcfour128,aes128-cbc,3des-cbc,blowfish-cbc,cast128-cbc,aes192-cbc,aes256-cbc,arcfour,[email protected]
        debug2: kex_parse_kexinit: aes128-ctr,aes192-ctr,aes256-ctr,arcfour256,arcfour128,aes128-cbc,3des-cbc,blowfish-cbc,cast128-cbc,aes192-cbc,aes256-cbc,arcfour,[email protected]
        debug2: kex_parse_kexinit: hmac-md5,hmac-sha1,[email protected],hmac-sha2-256,hmac-sha2-256-96,hmac-sha2-512,hmac-sha2-512-96,hmac-ripemd160,[email protected],hmac-sha1-96,hmac-md5-96
        debug2: kex_parse_kexinit: hmac-md5,hmac-sha1,[email protected],hmac-sha2-256,hmac-sha2-256-96,hmac-sha2-512,hmac-sha2-512-96,hmac-ripemd160,[email protected],hmac-sha1-96,hmac-md5-96
        debug2: kex_parse_kexinit: none,[email protected],zlib
        debug2: kex_parse_kexinit: none,[email protected],zlib
        debug2: kex_parse_kexinit:
        debug2: kex_parse_kexinit:
        debug2: kex_parse_kexinit: first_kex_follows 0
        debug2: kex_parse_kexinit: reserved 0
        debug2: kex_parse_kexinit: diffie-hellman-group-exchange-sha256,diffie-hellman-group-exchange-sha1,diffie-hellman-group14-sha1,diffie-hellman-group1-sha1
        debug2: kex_parse_kexinit: ssh-rsa,ssh-dss
        debug2: kex_parse_kexinit: aes128-ctr,aes192-ctr,aes256-ctr,arcfour256,arcfour128,aes128-cbc,3des-cbc,blowfish-cbc,cast128-cbc,aes192-cbc,aes256-cbc,arcfour,[email protected]
        debug2: kex_parse_kexinit: aes128-ctr,aes192-ctr,aes256-ctr,arcfour256,arcfour128,aes128-cbc,3des-cbc,blowfish-cbc,cast128-cbc,aes192-cbc,aes256-cbc,arcfour,[email protected]
        debug2: kex_parse_kexinit: hmac-md5,hmac-sha1,[email protected],hmac-ripemd160,[email protected],hmac-sha1-96,hmac-md5-96
        debug2: kex_parse_kexinit: hmac-md5,hmac-sha1,[email protected],hmac-ripemd160,[email protected],hmac-sha1-96,hmac-md5-96
        debug2: kex_parse_kexinit: none,[email protected]
        debug2: kex_parse_kexinit: none,[email protected]
        debug2: kex_parse_kexinit:
        debug2: kex_parse_kexinit:
        debug2: kex_parse_kexinit: first_kex_follows 0
        debug2: kex_parse_kexinit: reserved 0
        debug2: mac_setup: found hmac-md5
        debug1: kex: server->client aes128-ctr hmac-md5 none
        debug2: mac_setup: found hmac-md5
        debug1: kex: client->server aes128-ctr hmac-md5 none
        debug1: SSH2_MSG_KEX_DH_GEX_REQUEST(1024<1024<8192) sent
        debug1: expecting SSH2_MSG_KEX_DH_GEX_GROUP
        debug2: dh_gen_key: priv key bits set: 136/256
        debug2: bits set: 523/1024
        debug1: SSH2_MSG_KEX_DH_GEX_INIT sent
        debug1: expecting SSH2_MSG_KEX_DH_GEX_REPLY
        debug1: Server host key: RSA ...
        debug3: load_hostkeys: loading entries for host "..." from file "/.../.ssh/known_hosts"
        debug3: load_hostkeys: found key type RSA in file /.../.ssh/known_hosts:9
        debug3: load_hostkeys: loaded 1 keys
        debug3: load_hostkeys: loading entries for host "x.x.x.x" from file "/.../.ssh/known_hosts"
        debug3: load_hostkeys: found key type RSA in file /.../.ssh/known_hosts:9
        debug3: load_hostkeys: loaded 1 keys
        debug1: Host '...' is known and matches the RSA host key.
        debug1: Found key in /.../.ssh/known_hosts:9
        debug2: bits set: 492/1024
        debug1: ssh_rsa_verify: signature correct
        debug2: kex_derive_keys
        debug2: set_newkeys: mode 1
        debug1: SSH2_MSG_NEWKEYS sent
        debug1: expecting SSH2_MSG_NEWKEYS
        debug2: set_newkeys: mode 0
        debug1: SSH2_MSG_NEWKEYS received
        debug1: Roaming not allowed by server
        debug1: SSH2_MSG_SERVICE_REQUEST sent
        debug2: service_accept: ssh-userauth
        debug1: SSH2_MSG_SERVICE_ACCEPT received
        debug2: key: /.../.ssh/id_dsa (0x7f8b7b41d6c0)
        debug2: key: /.../.ssh/id_rsa (0x0)
        debug1: Authentications that can continue: publickey,password,keyboard-interactive
        debug3: start over, passed a different list publickey,password,keyboard-interactive
        debug3: preferred publickey,keyboard-interactive,password
        debug3: authmethod_lookup publickey
        debug3: remaining preferred: keyboard-interactive,password
        debug3: authmethod_is_enabled publickey
        debug1: Next authentication method: publickey
        debug1: Offering DSA public key: /.../.ssh/id_dsa
        debug3: send_pubkey_test
        debug2: we sent a publickey packet, wait for reply
        debug1: Server accepts key: pkalg ssh-dss blen 434
        debug2: input_userauth_pk_ok: fp ...
        debug3: sign_and_send_pubkey: DSA ...
        debug1: Authentication succeeded (publickey).
        Authenticated to ... ([x.x.x.x]:22).
        debug1: channel 0: new [client-session]
        debug3: ssh_session2_open: channel_new: 0
        debug2: channel 0: send open
        debug1: Requesting [email protected]
        debug1: Entering interactive session.

        ****** Hangs here ******

        debug2: callback start
        debug2: client_session2_setup: id 0
        debug2: fd 3 setting TCP_NODELAY
        debug2: channel 0: request pty-req confirm 1
        debug1: Sending environment.
        debug3: Ignored env TERM_PROGRAM
        debug3: Ignored env SHELL
        debug3: Ignored env TERM
        debug3: Ignored env TMPDIR
        debug3: Ignored env Apple_PubSub_Socket_Render
        debug3: Ignored env TERM_PROGRAM_VERSION
        debug3: Ignored env TERM_SESSION_ID
        debug3: Ignored env USER
        debug3: Ignored env COMMAND_MODE
        debug3: Ignored env SSH_AUTH_SOCK
        debug3: Ignored env Apple_Ubiquity_Message
        debug3: Ignored env __CF_USER_TEXT_ENCODING
        debug3: Ignored env PATH
        debug3: Ignored env MKL_NUM_THREADS
        debug3: Ignored env PWD
        debug1: Sending env LANG = en_US.UTF-8
        debug2: channel 0: request env confirm 0
        debug3: Ignored env HOME
        debug3: Ignored env SHLVL
        debug3: Ignored env DYLD_LIBRARY_PATH
        debug3: Ignored env PYTHONPATH
        debug3: Ignored env LOGNAME
        debug3: Ignored env DISPLAY
        debug3: Ignored env SECURITYSESSIONID
        debug3: Ignored env _
        debug2: channel 0: request shell confirm 1
        debug2: callback done
        debug2: channel 0: open confirm rwindow 0 rmax 32768
        debug2: channel_input_status_confirm: type 99 id 0
        debug2: PTY allocation request accepted on channel 0
        debug2: channel 0: rcvd adjust 2097152
        debug2: channel_input_status_confirm: type 99 id 0
        debug2: shell request accepted on channel 0

    On the server side, the debug output looks like:

        Sep 16 18:46:40 ... sshd[31435]: debug1: inetd sockets after dupping: 3, 4
        Sep 16 18:46:40 ... sshd[31435]: Connection from x.x.x.x port 52758
        Sep 16 18:46:40 ... sshd[31435]: debug1: Current Session ID is 56AC0FB0 / Session Attributes are 00008000
        Sep 16 18:46:40 ... sshd[31435]: debug1: Running in inetd mode in a non-root session... assuming inetd created the session for us.
        Sep 16 18:46:40 ... sshd[31435]: debug1: Client protocol version 2.0; client software version OpenSSH_5.9
        Sep 16 18:46:40 ... sshd[31435]: debug1: match: OpenSSH_5.9 pat OpenSSH*
        Sep 16 18:46:40 ... sshd[31435]: debug1: Enabling compatibility mode for protocol 2.0
        Sep 16 18:46:40 ... sshd[31435]: debug1: Local version string SSH-2.0-OpenSSH_5.2
        Sep 16 18:46:40 ... sshd[31435]: debug1: Checking with Service ACLs for ssh login restrictions
        Sep 16 18:46:40 ... sshd[31435]: debug1: call to mbr_user_name_to_uuid with <...> suceeded to retrieve user_uuid
        Sep 16 18:46:40 ... sshd[31435]: debug1: Call to mbr_check_service_membership failed with status <0>
        Sep 16 18:46:40 ... sshd[31435]: debug1: PAM: initializing for "..."
        Sep 16 18:46:40 ... sshd[31435]: debug1: PAM: setting PAM_RHOST to "x.x.x.x"
        Sep 16 18:46:40 ... sshd[31435]: Failed none for ... from x.x.x.x port 52758 ssh2
        Sep 16 18:46:40 ... sshd[31435]: debug1: temporarily_use_uid: 509/20 (e=0/0)
        Sep 16 18:46:40 ... sshd[31435]: debug1: trying public key file /.../.ssh/authorized_keys
        Sep 16 18:46:40 ... sshd[31435]: debug1: restore_uid: 0/0
        Sep 16 18:46:40 ... sshd[31435]: debug1: temporarily_use_uid: 509/20 (e=0/0)
        Sep 16 18:46:40 ... sshd[31435]: debug1: trying public key file /.../.ssh/authorized_keys2
        Sep 16 18:46:40 ... sshd[31435]: debug1: fd 5 clearing O_NONBLOCK
        Sep 16 18:46:40 ... sshd[31435]: debug1: matching key found: file /.../.ssh/authorized_keys2, line 1
        Sep 16 18:46:40 ... sshd[31435]: Found matching DSA key: ...
        Sep 16 18:46:40 ... sshd[31435]: debug1: restore_uid: 0/0
        Sep 16 18:46:40 ... sshd[31435]: debug1: temporarily_use_uid: 509/20 (e=0/0)
        Sep 16 18:46:40 ... sshd[31435]: debug1: trying public key file /.../.ssh/authorized_keys
        Sep 16 18:46:40 ... sshd[31435]: debug1: restore_uid: 0/0
        Sep 16 18:46:40 ... sshd[31435]: debug1: temporarily_use_uid: 509/20 (e=0/0)
        Sep 16 18:46:40 ... sshd[31435]: debug1: trying public key file /.../.ssh/authorized_keys2
        Sep 16 18:46:40 ... sshd[31435]: debug1: fd 5 clearing O_NONBLOCK
        Sep 16 18:46:40 ... sshd[31435]: debug1: matching key found: file /.../.ssh/authorized_keys2, line 1
        Sep 16 18:46:40 ... sshd[31435]: Found matching DSA key: ...
        Sep 16 18:46:40 ... sshd[31435]: debug1: restore_uid: 0/0
        Sep 16 18:46:40 ... sshd[31435]: debug1: ssh_dss_verify: signature correct
        Sep 16 18:46:40 ... sshd[31435]: debug1: do_pam_account: called
        Sep 16 18:46:40 ... sshd[31435]: Accepted publickey for ... from x.x.x.x port 52758 ssh2
        Sep 16 18:46:40 ... sshd[31435]: debug1: monitor_child_preauth: ... has been authenticated by privileged process
        Sep 16 18:46:40 ... sshd[31435]: debug1: PAM: establishing credentials

        ***** Hangs here *****

        Sep 16 18:46:54 ... sshd[31435]: User child is on pid 31654
        Sep 16 18:46:54 ... sshd[31654]: debug1: PAM: establishing credentials
        Sep 16 18:46:54 ... sshd[31654]: debug1: permanently_set_uid: 509/20
        Sep 16 18:46:54 ... sshd[31654]: debug1: Entering interactive session for SSH2.
        Sep 16 18:46:54 ... sshd[31654]: debug1: server_init_dispatch_20
        Sep 16 18:46:54 ... sshd[31654]: debug1: server_input_channel_open: ctype session rchan 0 win 1048576 max 16384
        Sep 16 18:46:54 ... sshd[31654]: debug1: input_session_request
        Sep 16 18:46:54 ... sshd[31654]: debug1: channel 0: new [server-session]
        Sep 16 18:46:54 ... sshd[31654]: debug1: session_new: session 0
        Sep 16 18:46:54 ... sshd[31654]: debug1: session_open: channel 0
        Sep 16 18:46:54 ... sshd[31654]: debug1: session_open: session 0: link with channel 0
        Sep 16 18:46:54 ... sshd[31654]: debug1: server_input_channel_open: confirm session
        Sep 16 18:46:54 ... sshd[31654]: debug1: server_input_global_request: rtype [email protected] want_reply 0
        Sep 16 18:46:54 ... sshd[31654]: debug1: server_input_channel_req: channel 0 request pty-req reply 1
        Sep 16 18:46:54 ... sshd[31654]: debug1: session_by_channel: session 0 channel 0
        Sep 16 18:46:54 ... sshd[31654]: debug1: session_input_channel_req: session 0 req pty-req
        Sep 16 18:46:54 ... sshd[31654]: debug1: Allocating pty.
        Sep 16 18:46:54 ... sshd[31435]: debug1: session_new: session 0
        Sep 16 18:46:54 ... sshd[31654]: debug1: session_pty_req: session 0 alloc /dev/ttys008
        Sep 16 18:46:54 ... sshd[31654]: debug1: server_input_channel_req: channel 0 request env reply 0
        Sep 16 18:46:54 ... sshd[31654]: debug1: session_by_channel: session 0 channel 0
        Sep 16 18:46:54 ... sshd[31654]: debug1: session_input_channel_req: session 0 req env
        Sep 16 18:46:54 ... sshd[31654]: debug1: server_input_channel_req: channel 0 request shell reply 1
        Sep 16 18:46:54 ... sshd[31654]: debug1: session_by_channel: session 0 channel 0
        Sep 16 18:46:54 ... sshd[31654]: debug1: session_input_channel_req: session 0 req shell
        Sep 16 18:46:54 ... sshd[31655]: debug1: Setting controlling tty using TIOCSCTTY.


  • Ubuntu Server mdadm drbd ocfs2 kvm hangs under heavy file reading

    - by Stefano Annese
    I have deployed four Ubuntu 10.04 servers. They are coupled two by two in a cluster scenario. On both sides we have software RAID1 disks, DRBD8 and OCFS2, and on top of that some KVM machines run with qcow2 disks. I followed this: Link. Corosync is just used for DRBD and OCFS; the KVM machines are run "manually". When it works it is fine: good performance, good I/O. But at a given point one of the two clusters started hanging. Then we tried with just one server turned on and it hangs the same. It seems to happen when a heavy READ occurs in one of the virtual machines, that is, during the rsync backup. When it happens, the virtual machines are not reachable any more; the real server responds to ping with long delay, but no screen and no ssh are available. All we can do is force shutdown (hold the button) and restart, and when it turns on again, the RAID that DRBD relies on is resyncing. We see this every time it hangs. After a couple of weeks of pain on one side, this morning the other cluster hung too, but it has a different motherboard, RAM, and KVM instances. What is similar is the heavy-read rsync scenario and the Western Digital RAID Edition disks on both sides. Can anybody give me some input to solve this issue?


  • IIS FTP 7.5 Data Channel Problem (SSL)

    - by user59050
    Hey there, I wonder if anyone can get me pointed in the right direction. I am setting up both an FTPS client and server, the FTPS server using Microsoft's IIS FTP 7.5. The client side will be running on Linux and I am using M2Crypto for the OpenSSL wrapping (Python). I am worried the problem is on the server side (IIS 7.5) due to the following discovery: if I host using FileZilla with BOTH the control and data channels forced to be encrypted, it works 100% (100% file transmission); if I use IIS as the server, everything works up to the point where the data channel takes over - i.e. all the data of the retrieved file is already received correctly in my basket! The FTP server just won't send the final '226 Transfer complete.' on the command socket. Why? If I force the client or server to close the connection, the file is 100% intact. If I use IIS 7.5 with forced encryption on the control channel only, everything works 100% as long as I don't force the data channel. Here are some screenshots to demo this (client view after killing the client): pics @ http://forums.iis.net/p/1172936/1960994.aspx#1960994. Summary: we can establish the connection, do directory listings, start the upload, see the file (0 bytes) created on the server, but then the client hangs. If we terminate the client, the uploaded file on the server suddenly jumps up to full size.


  • Creating a tar file with checksums included

    - by wazoox
    Here's my problem: I need to archive a lot (up to 60 TB) of big files (usually 30 to 40 GB each) to tar files. I would like to make checksums (md5, sha1, whatever) of these files before archiving; however, not reading every file twice (once for checksumming, twice for tar'ing) is more or less a necessity to achieve very high archiving performance (LTO-4 wants 120 MB/s sustained, and the backup window is limited). So I need some way to read a file, feeding a checksumming tool on one side, and building a tar to tape on the other side, something along the lines of:

        tar cf - files | tee tarfile.tar | md5sum -

    Except that I don't want the checksum of the whole archive (this sample shell code does just that) but a checksum for each individual file in the archive. I've studied the GNU tar, Pax and Star options. I've looked at the source of Archive::Tar. I see no obvious way to achieve this. It looks like I'll have to hand-build something in C or similar to achieve what I need. Perl/Python/etc. simply won't cut it performance-wise, and the various tar programs miss the necessary "plugin architecture". Does anyone know of an existing solution to this before I start code-churning?
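    One hedged stream-once sketch, assuming GNU tar (its --to-command option pipes each extracted member to a command and exports TAR_FILENAME into that command's environment); the tape device and paths are placeholders. First a tiny helper script:

        #!/bin/sh
        # tarsum.sh - reads one archive member on stdin; tar sets
        # TAR_FILENAME for each member it hands us, so replace the
        # trailing "-" in md5sum's output with the member name.
        md5sum | sed "s|-$|$TAR_FILENAME|"

    The file data is then read from disk only once; tee forks the archive to tape while a second tar feeds every member through the helper:

        tar -cf - /data/files \
          | tee /dev/nst0 \
          | tar -xf - --to-command=./tarsum.sh > checksums.md5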


  • Expanding to dual video cards

    - by Anthony Greco
    I know a lot of factors can come into play here, so I will list my current hardware and setup:

        MOBO: GIGABYTE GA-890FXA-UD5 [http://www.newegg.com/Product/Product.aspx?Item=N82E16813128441]
        Processor: AMD Phenom II X6 1090T Black Edition Thuban 3.2GHz [http://www.newegg.com/Product/Product.aspx?Item=N82E16819103849]
        RAM: G.SKILL Ripjaws Series 16GB (4 x 4GB) [https://secure.newegg.com/NewMyAccount/OrderHistory.aspx?RandomID=4933910872745320111128011418]
        Current video card: EVGA 01G-P3-1366-TR GeForce GTX 460 SE [http://www.newegg.com/Product/Product.aspx?Item=N82E16814130591]
        OS: Windows 7 Ultimate x64

    Currently I can run 2 monitors just fine in my setup. However, I want to upgrade this to 4 monitors. My question is, what is the best way to do this? I remember in the past reading that I need the same type of video card; however, would any GeForce GTX work, or do I need that very specific model (EVGA 01G-P3-1366-TR GeForce GTX 460 SE)? Are there any issues I should be aware of before I order 2 new monitors and a video card? Are there video cards better suited for this? I know NVIDIA offers SLI; however, I do not know if my mobo is compliant. My mobo also offers a CrossFireX configuration, though from what it says only Radeon cards are compliant. Any suggestions / feedback on the best route with my current setup is appreciated. Even if you suggest buying 2 new identical video cards, as long as you mention which and why that is better, I really appreciate it. Note: I really do not do any gaming. I sometimes do some 3D work in Unity and very rarely in Maya. Besides that, I mostly do all my computer work in Visual Studio and Photoshop. I do, however, need the 2 extra monitors because I sometimes monitor 5 remote desktops at once, and switching between them on only 2 is becoming a very big pain. Also, seeing 3 side by side while I work on the 4th will be very helpful. Again, I appreciate any feedback, as I have googled a bunch and just want to make sure what I buy will work.

