Search Results

Search found 9036 results on 362 pages for 'somebody is in trouble'.

Page 8/362 | < Previous Page | 4 5 6 7 8 9 10 11 12 13 14 15  | Next Page >

  • Trouble connecting to vsftpd on ubuntu server

    - by littleK
    I have installed Ubuntu Server 10.10 and I am using it to host a domain that I have. I am trying to set up FTP for the server, but I am running into some problems. I have successfully installed vsFTPd and I have opened up ports 20 and 21 on my firewall. In my vsFTPd configuration, I have enabled SSL. Every time I try to connect to my server via FTP, I receive a "Connection Refused" error. I have had a little more success with SSL disabled; however, the connection then times out after the LIST command (but it does accept my authentication). Here is my vsFTPd configuration; the SSL stuff is at the bottom:

        # Example config file /etc/vsftpd.conf
        #
        # The default compiled in settings are fairly paranoid. This sample file
        # loosens things up a bit, to make the ftp daemon more usable.
        # Please see vsftpd.conf.5 for all compiled in defaults.
        #
        # READ THIS: This example file is NOT an exhaustive list of vsftpd options.
        # Please read the vsftpd.conf.5 manual page to get a full idea of vsftpd's
        # capabilities.
        #
        # Run standalone? vsftpd can run either from an inetd or as a standalone
        # daemon started from an initscript.
        listen=YES
        #
        # Run standalone with IPv6?
        # Like the listen parameter, except vsftpd will listen on an IPv6 socket
        # instead of an IPv4 one. This parameter and the listen parameter are mutually
        # exclusive.
        #listen_ipv6=YES
        #
        # Allow anonymous FTP? (Disabled by default)
        anonymous_enable=NO
        #
        # Uncomment this to allow local users to log in.
        local_enable=YES
        #
        # Uncomment this to enable any form of FTP write command.
        write_enable=YES
        #
        # Default umask for local users is 077. You may wish to change this to 022,
        # if your users expect that (022 is used by most other ftpd's)
        #local_umask=022
        #
        # Uncomment this to allow the anonymous FTP user to upload files. This only
        # has an effect if the above global write enable is activated. Also, you will
        # obviously need to create a directory writable by the FTP user.
        #anon_upload_enable=YES
        #
        # Uncomment this if you want the anonymous FTP user to be able to create
        # new directories.
        #anon_mkdir_write_enable=YES
        #
        # Activate directory messages - messages given to remote users when they
        # go into a certain directory.
        dirmessage_enable=YES
        #
        # If enabled, vsftpd will display directory listings with the time
        # in your local time zone. The default is to display GMT. The
        # times returned by the MDTM FTP command are also affected by this
        # option.
        use_localtime=YES
        #
        # Activate logging of uploads/downloads.
        xferlog_enable=YES
        #
        # Make sure PORT transfer connections originate from port 20 (ftp-data).
        connect_from_port_20=YES
        #
        # If you want, you can arrange for uploaded anonymous files to be owned by
        # a different user. Note! Using "root" for uploaded files is not
        # recommended!
        #chown_uploads=YES
        #chown_username=whoever
        #
        # You may override where the log file goes if you like. The default is shown
        # below.
        #xferlog_file=/var/log/vsftpd.log
        #
        # If you want, you can have your log file in standard ftpd xferlog format.
        # Note that the default log file location is /var/log/xferlog in this case.
        #xferlog_std_format=YES
        #
        # You may change the default value for timing out an idle session.
        #idle_session_timeout=600
        #
        # You may change the default value for timing out a data connection.
        #data_connection_timeout=120
        #
        # It is recommended that you define on your system a unique user which the
        # ftp server can use as a totally isolated and unprivileged user.
        #nopriv_user=ftpsecure
        #
        # Enable this and the server will recognise asynchronous ABOR requests. Not
        # recommended for security (the code is non-trivial). Not enabling it,
        # however, may confuse older FTP clients.
        #async_abor_enable=YES
        #
        # By default the server will pretend to allow ASCII mode but in fact ignore
        # the request. Turn on the below options to have the server actually do ASCII
        # mangling on files when in ASCII mode.
        # Beware that on some FTP servers, ASCII support allows a denial of service
        # attack (DoS) via the command "SIZE /big/file" in ASCII mode. vsftpd
        # predicted this attack and has always been safe, reporting the size of the
        # raw file.
        # ASCII mangling is a horrible feature of the protocol.
        #ascii_upload_enable=YES
        #ascii_download_enable=YES
        #
        # You may fully customise the login banner string:
        #ftpd_banner=Welcome to blah FTP service.
        #
        # You may specify a file of disallowed anonymous e-mail addresses. Apparently
        # useful for combatting certain DoS attacks.
        #deny_email_enable=YES
        # (default follows)
        #banned_email_file=/etc/vsftpd.banned_emails
        #
        # You may restrict local users to their home directories. See the FAQ for
        # the possible risks in this before using chroot_local_user or
        # chroot_list_enable below.
        #chroot_local_user=YES
        #
        # You may specify an explicit list of local users to chroot() to their home
        # directory. If chroot_local_user is YES, then this list becomes a list of
        # users to NOT chroot().
        #chroot_local_user=YES
        #chroot_list_enable=YES
        # (default follows)
        #chroot_list_file=/etc/vsftpd.chroot_list
        #
        # You may activate the "-R" option to the builtin ls. This is disabled by
        # default to avoid remote users being able to cause excessive I/O on large
        # sites. However, some broken FTP clients such as "ncftp" and "mirror" assume
        # the presence of the "-R" option, so there is a strong case for enabling it.
        #ls_recurse_enable=YES
        #
        # Debian customization
        #
        # Some of vsftpd's settings don't fit the Debian filesystem layout by
        # default. These settings are more Debian-friendly.
        #
        # This option should be the name of a directory which is empty. Also, the
        # directory should not be writable by the ftp user. This directory is used
        # as a secure chroot() jail at times vsftpd does not require filesystem
        # access.
        secure_chroot_dir=/var/run/vsftpd/empty
        #
        # This string is the name of the PAM service vsftpd will use.
        pam_service_name=vsftpd
        #
        # This option specifies the location of the RSA certificate to use for SSL
        # encrypted connections.
        rsa_cert_file=/etc/ssl/private/vsftpd.pem

        # SSL
        ssl_enable=YES
        allow_anon_ssl=NO
        force_local_data_ssl=YES
        force_local_logins_ssl=YES
        ssl_tlsv1=YES
        ssl_sslv2=YES
        ssl_sslv3=YES

    Thanks!
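
    A hedged guess at the LIST timeout: with only ports 20 and 21 open, the FTP data channel often cannot be established, especially once SSL hides the PORT/PASV exchange from any firewall helpers. Explicitly enabling passive mode with a fixed port range, and opening that range, is a common remedy. The port range below is an example choice, not taken from the question:

        # append hypothetical passive-mode settings to /etc/vsftpd.conf
        sudo tee -a /etc/vsftpd.conf <<'EOF'
        pasv_enable=YES
        pasv_min_port=10100
        pasv_max_port=10200
        EOF
        sudo ufw allow 10100:10200/tcp   # open the same range on the firewall
        sudo service vsftpd restart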

    Read the article

  • WinXP - Having trouble sharing internet with 3G USB modem via ICS

    - by Carlos Nunez
    Hi all! I've been banging my head against a wall with this issue for a few days now and am hoping someone can help out. I recently signed up for T-Mobile's webConnect 3G/4G service to replace the faltering (and slow) DSL connection in my apartment. The goal was to put the SIM in one of my old phones and use its built-in WLAN tethering feature to share Internet out to the rest of my computers. I quickly found out that webConnect-provisioned SIMs do not work with regular smartphones, so I was forced to either buy a 4G-compatible router or tether one of my old laptops to my wireless router and share out that way. I chose the latter, and it's sharpening my inner masochistic self by the day. Here's the setup:

    - GSM USB modem (via hub)
    - ICS host: 10/100 Mbps Ethernet NIC
    - ICS "guest": WAN port of my SMC WGBR14N wireless router in bridged mode (i.e. a wireless access point)

    Ideally, this would make my laptop the DHCP server and Internet gateway, with the WAP giving everyone wireless coverage. I can browse the Internet on the host laptop fine. However, when clients try to connect, they get a DHCP-assigned IP from the laptop and are able to use the Internet for a few minutes before the connection completely dies. After that happens, they are able to re-associate with the WAP and get IP addresses, but are unable to use the Internet or resolve IP addresses until the laptop and router are restarted. If they do get access, it's very, very slow. After running Wireshark on the host machine, it turns out that this is because every TCP connection keeps getting RST. DNS seems to work. I would normally think the firewall is the culprit here, but when it drops packets, it drops them completely; the fact that TCP connections are being ACK'ed by the destination rules that out. Of course, the event log isn't saying anything about what's going on. I also tried disabling power management on the NIC, since that's caused problems in the past; that didn't help either. I finally disabled receive-side scaling as per a Microsoft KB (which applied to Windows Server 2003 SP2) to no avail. I'm thinking of trying it with a different NIC (that will be tough; I don't have a spare Ethernet NIC around for the laptop), but I'm getting the impression that this simply doesn't work. Can anyone please advise? I apologise for the length of this post; all contributions are much appreciated! -Carlos.
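
    Since the KB mentioned above targets Server 2003 SP2, it may be worth applying the equivalent registry change directly; whether these Scalable Networking Pack values have any effect on XP SP3 is an assumption on my part. From a command prompt, followed by a reboot:

        rem Disable receive-side scaling and TCP chimney offload (values taken from
        rem the Server 2003 SP2 guidance; their effect on XP SP3 is unverified)
        reg add HKLM\SYSTEM\CurrentControlSet\Services\Tcpip\Parameters /v EnableRSS /t REG_DWORD /d 0 /f
        reg add HKLM\SYSTEM\CurrentControlSet\Services\Tcpip\Parameters /v EnableTCPChimney /t REG_DWORD /d 0 /f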

    Read the article

  • Trouble installing Lagarith Lossless Codec

    - by Xeoncross
    I had been using the Lagarith Lossless Codec with CamStudio on Windows XP Pro SP3 for a little while before I switched to Ubuntu. Now I'm back trying to do something on Windows, and the codec is missing. I tried to install it via the .exe and even manually with the .inf, but it's not being listed by CamStudio as installed. I'm wondering if one of the other codecs I installed might have messed with it while some of its files remained intact, so that whatever method the installer uses to check for existing files before installing is returning positive and not re-installing it. How can I get Lagarith to work again? Maybe I need to know how to uninstall codecs.
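
    One way to check whether a stale Video for Windows registration is the problem, offered with the caveat that the value name is inferred from Lagarith's FOURCC (LAGS) rather than confirmed: VfW codecs register under the Drivers32 key, so query it and look for an entry pointing at lagarith.dll:

        reg query "HKLM\SOFTWARE\Microsoft\Windows NT\CurrentVersion\Drivers32"
        rem look for a line such as  vidc.lags  REG_SZ  lagarith.dll ; removing a
        rem stale entry here (after exporting a backup) is one way to force the
        rem installer to treat the codec as absent and reinstall it cleanly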

    Read the article

  • Trouble installing php memcache extension

    - by user35346
    I'm trying to install memcache on MAMP, but I get the warning below, and when I continue it seems to complete properly. I add the line extension=memcache.so to the php.ini and restart MAMP, but phpinfo() doesn't list the memcache extension.

        $ ./pecl install memcache
        downloading memcache-2.2.5.tgz ...
        Starting to download memcache-2.2.5.tgz (35,981 bytes)
        ..........done: 35,981 bytes
        11 source files, building
        WARNING: php_bin /Applications/MAMP/bin/php5/bin/php appears to have a suffix 5/bin/php,
        but config variable php_suffix does not match
        running: phpize
        Configuring for:
        PHP Api Version:         20041225
        Zend Module Api No:      20060613
        Zend Extension Api No:   220060519
        Enable memcache session handler support? [yes] : yes
        ...
        Build process completed successfully
        Installing '/Applications/MAMP/bin/php5/lib/php/extensions/no-debug-non-zts-20060613/memcache.so'
        install ok: channel://pecl.php.net/memcache-2.2.5
        configuration option "php_ini" is not set to php.ini location
        You should add "extension=memcache.so" to php.ini
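
    The last two lines of that transcript are the clue: pecl doesn't know which php.ini to manage, and MAMP ships separate ini files for the CLI and for Apache, where the Apache one is what phpinfo() reads. A sketch of pointing pecl at the right file; the Apache-side path is my assumption about MAMP's layout, so confirm it against the "Loaded Configuration File" row in phpinfo() first:

        # tell pecl which php.ini it should be editing
        ./pecl config-set php_ini /Applications/MAMP/conf/php5/php.ini
        # then make sure extension=memcache.so lives in that same file
        # and restart MAMP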

    Read the article

  • Trouble with site-to-site OpenVPN & pfSense not passing traffic

    - by JohnCC
    I'm trying to get an OpenVPN tunnel going on pfSense 1.2.3-RELEASE running on embedded routers. I have a local LAN, 10.34.43.0/24. The remote LAN is 10.200.1.0/24. The local pfSense is configured as the client, and the remote is configured as the server. My OpenVPN tunnel uses the IP range 10.99.89.0/24 internally. There are also some additional LANs on the remote side routed through the tunnel, but the issue is not with those, since my connectivity fails before that point in the chain. The tunnel comes up fine and the logs look healthy. What I find is this:

    - I can ping and telnet to the remote LAN and the additional remote LANs from the local pfSense box's shell.
    - I cannot ping or telnet to any remote LANs from the local network.
    - I cannot ping or telnet to the local network from the remote LAN or the remote pfSense box's shell.
    - If I tcpdump the tun interfaces on both sides and ping from the local LAN, I see the packets hit the tunnel locally, but they do not appear on the remote side (nor do they appear on the remote LAN interface if I tcpdump that).
    - If I tcpdump the tun interfaces on both sides and ping from the local pfSense shell, I see the packets hit the tunnel locally and exit the remote side. I can also tcpdump the remote LAN interface and see them pass there too.
    - If I tcpdump the tun interfaces on both sides and ping from the remote pfSense shell, I see the packets hit the remote tun, but they do not emerge from the local one.

    Here is the config file the remote side is using:

        #user nobody
        #group nobody
        daemon
        keepalive 10 60
        ping-timer-rem
        persist-tun
        persist-key
        dev tun
        proto udp
        cipher BF-CBC
        up /etc/rc.filter_configure
        down /etc/rc.filter_configure
        server 10.99.89.0 255.255.255.0
        client-config-dir /var/etc/openvpn_csc
        push "route 10.200.1.0 255.255.255.0"
        lport <port>
        route 10.34.43.0 255.255.255.0
        ca /var/etc/openvpn_server0.ca
        cert /var/etc/openvpn_server0.cert
        key /var/etc/openvpn_server0.key
        dh /var/etc/openvpn_server0.dh
        comp-lzo
        push "route 205.217.5.128 255.255.255.224"
        push "route 205.217.5.64 255.255.255.224"
        push "route 165.193.147.128 255.255.255.224"
        push "route 165.193.147.32 255.255.255.240"
        push "route 192.168.1.16 255.255.255.240"
        push "route 192.168.2.16 255.255.255.240"

    Here is the local config:

        writepid /var/run/openvpn_client0.pid
        #user nobody
        #group nobody
        daemon
        keepalive 10 60
        ping-timer-rem
        persist-tun
        persist-key
        dev tun
        proto udp
        cipher BF-CBC
        up /etc/rc.filter_configure
        down /etc/rc.filter_configure
        remote <host> <port>
        client
        lport 1194
        ifconfig 10.99.89.2 10.99.89.1
        ca /var/etc/openvpn_client0.ca
        cert /var/etc/openvpn_client0.cert
        key /var/etc/openvpn_client0.key
        comp-lzo

    You can see the relevant parts of the routing tables extracted from pfSense here: http://pastie.org/5365800. The local firewall permits all ICMP from the LAN, and my PC is allowed everything to anywhere. The remote firewall treats its LAN as trusted and permits all traffic on that interface. Can anyone suggest why this is not working, and what I could try next?
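
    One hedged guess based on the configs above: when client-config-dir is in use, a routed site-to-site OpenVPN server normally needs an iroute for the client's LAN in that client's CCD file, in addition to the route statement already present. Without it, the server knows 10.34.43.0/24 exists but not which connected client owns it, which fits the symptoms: pings sourced from the tunnel IPs (the pfSense shells) work, while packets carrying LAN source or destination addresses die at the server. A minimal sketch; the file name must match the client certificate's common name, and "client1" here is a placeholder:

        # /var/etc/openvpn_csc/client1  ("client1" is a hypothetical client CN)
        iroute 10.34.43.0 255.255.255.0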

    Read the article

  • Trouble with NFS file sharing on Synology 211 NAS and Ubuntu Client

    - by Aglystas
    I'm attempting to set up NFS file sharing and keep getting this error:

        mount.nfs: access denied by server while mounting 192.168.1.110:/myshared

    Here is the exact command I'm using to mount:

        sudo mount -o nolock 192.168.1.110:/myshared /home/emiller/MyShared

    I have enabled NFS in DSM and set NFS privileges in the Shares section of the control panel. Here is the /etc/exports entry from the NAS:

        volume1/myshared 192.168.1.*(rw,sync,no_wdelay,no_root_squash,insecure_locks,anonuid=0,anongid=0)

    I read some things about hosts.allow and hosts.deny, but it seems that if they are empty they aren't used for anything. I can see the share when I run:

        showmount -e 192.168.1.110

    Any help would be appreciated in this matter.
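
    One detail worth a try, offered as a guess from the listing above: the NAS exports the share under volume1/myshared, but the mount command asks for /myshared. NFS servers generally require the path to be requested exactly as exported, so mounting the full path may be all that's needed:

        sudo mount -o nolock 192.168.1.110:/volume1/myshared /home/emiller/MyShared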

    Read the article

  • Why am I having trouble installing Xenserver?

    - by Lee Tickett
    I have two almost identical servers:

    - Supermicro X8SIL-F
    - Intel X3470
    - 16GB RAM
    - 16GB USB pendrive

    I have tried installing XenServer 6 on both servers, but when the server then attempts to boot, it hangs at the loading screen. If I attempt to validate the installation source at the beginning of the installation, I get validation errors. Yet I've checked that the MD5 hash is spot on, and I've tried from virtual disc, physical disc, HTTP and NFS: all yield the same result! What's going on? Thanks
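
    For completeness, re-checking the hash of the exact file the media was written from rules out corruption introduced between download and burn; the filename below is a placeholder, not taken from the question:

        md5sum XenServer-6.0.0-install-cd.iso
        # compare the output against the checksum published on the download page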

    Read the article

  • Trouble installing SSL Certificate on Apache

    - by jahufar
    We have a dedicated server with GoDaddy running Plesk that requires SSL. I've generated the certificate files, and I created a vhost_ssl.conf (since I can't edit the default Plesk Apache configuration httpd.include, vhost_ssl.conf gets Included into httpd.include) that tells Apache where to find the certificate files:

        SSLCertificateFile /usr/local/psa/var/certificates/domain.com.crt
        SSLCertificateKeyFile /usr/local/psa/var/certificates/domain.com.key
        SSLCertificateChainFile /usr/local/psa/var/certificates/sub.class1.server.ca.pem

    When I stop/start Apache, it refuses to start up. The error_log does not have anything in it either (which is strange). Then I opened up httpd.include and found this bit:

        <VirtualHost 208.xxx.xxx.xxx:443>
            ServerName domain.com:443
            ServerAlias www.domain.com
            UseCanonicalName Off
            SSLEngine on
            SSLVerifyClient none
            SSLCertificateFile /usr/local/psa/var/certificates/certagC9054
            Include /var/www/vhosts/domain.com/conf/vhost_ssl.conf

    Then I commented out SSLCertificateFile /usr/local/psa/var/certificates/certagC9054 (which is Plesk's SSL certificate), restarted Apache, and it worked perfectly fine. It seems that Apache does not like multiple SSLCertificateFile directives within the same VirtualHost? As anyone who has worked with Plesk knows, I can't just remove the SSLCertificateFile directive in httpd.include, as Plesk will overwrite my changes when someone uses it, which is why mine is in vhost_ssl.conf. So I'm stuck, and this is beyond my meager admin skills. I would appreciate someone who knows what (s)he's doing telling me what's going on. Thanks in advance.
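
    Once Apache is up, a quick way to verify which certificate it is actually serving on 443 (standard OpenSSL tooling; domain.com is the placeholder from the question):

        openssl s_client -connect domain.com:443 < /dev/null 2>/dev/null \
          | openssl x509 -noout -subject -issuer -dates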

    Read the article

  • Trouble setting up PATH for Java on Debian

    - by milkmansrevenge
    I am trying to get Oracle Java 7 Update 3 working correctly on Debian 6. I have downloaded and set up the files in /usr/java/jre1.7.0_03. I have also added the following two lines at the end of /etc/bash.bashrc:

        export JAVA_HOME=/usr/java/jre1.7.0_03
        export PATH=$PATH:$JAVA_HOME/bin

    Logging in as other users and as root is fine; Java can be found:

        chris@mc:~$ java -version
        java version "1.7.0_03"
        Java(TM) SE Runtime Environment (build 1.7.0_03-b04)
        Java HotSpot(TM) 64-Bit Server VM (build 22.1-b02, mixed mode)

    However, there are two cases where Java cannot be found, as detailed below. Note that both of these worked fine when I previously installed OpenJDK Java 6 via aptitude, but I need Oracle Java 7 for various reasons.

    Most importantly, I cannot run commands as another user via su, despite the PATH showing that Java should be present. The user was created with adduser chris:

        root@mc:~# su chris -c "echo $PATH"
        /usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/java/jre1.7.0_03/bin:/bin
        root@mc:~# su chris -c "java -version"
        bash: java: command not found
        root@mc:~# su chris -c "/usr/java/jre1.7.0_03/bin/java -version"
        java version "1.7.0_03"
        ...

    How can it be in the PATH but not be found? Update 05/04/2012: explained by Daniel; it's because su -c runs a non-interactive shell, so files such as /etc/profile and /etc/bash.bashrc are not executed. Doing a full switch to that user and running Java works:

        root@mc:~# su chris
        chris@mc:/root$ java -version
        java version "1.7.0_03"
        ...

    I also run a script on startup which exhibits similar but slightly different problems. The script is located in /etc/init.d/start-mystuff.sh and calls a jar:

        #!/bin/bash
        # /etc/init.d/start-mystuff.sh
        java -jar /opt/Mars.jar

    I can confirm that the script runs on startup and that the exit code is 127, which indicates command not found. Inserting a line to print/save the PATH shows that it is:

        /sbin:/usr/sbin:/bin:/usr/bin

    This second problem isn't as important, because I can just point directly to the Java executable in the script, but I am still curious! I have tried setting the full PATH and JAVA_HOME explicitly in /etc/environment, which didn't help. I have also tried setting them in /etc/profile, which doesn't seem to help either. I have tried logging in and out again after setting PATH in the various locations (duh!). Anyway, a long post for what will probably have a simple one-line solution :( Any help with this would be greatly appreciated; I have spent far too long trying to fix it by myself.

    Motivation: the first problem may seem obscure, but on my system I have users that are not allowed SSH access, yet I still want to run processes as them. I have a ton of scripts operating in this way and don't want to have to change them all.
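
    Since init scripts run with that minimal /sbin:/usr/sbin:/bin:/usr/bin environment and never source the shell profiles, one low-risk pattern is to set the path inside the script itself. A sketch using only the paths already given in the question:

        #!/bin/bash
        # /etc/init.d/start-mystuff.sh
        # Init scripts don't read /etc/profile or /etc/bash.bashrc,
        # so define JAVA_HOME and PATH explicitly here.
        export JAVA_HOME=/usr/java/jre1.7.0_03
        export PATH="$PATH:$JAVA_HOME/bin"

        java -jar /opt/Mars.jar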

    Read the article

  • Trouble getting FTP login to work in IIS6

    - by Frank Rosario
    Hello all, I'm trying to set up an FTP site in IIS6 for one of my clients to pick up files from us. I've created the FTP site and configured it not to isolate users (not necessary, as the FTP will be read-only with authentication). Here's the problem: the FTP is to be password protected, so I turned off anonymous access on the FTP site. I then created an ftpuser account on the machine and gave it read and browse-directory permissions on the FTP's root directory. However, when I test the ftpuser login, I get a 530 "ftpuser cannot login" error. Yet if I browse to the same directory over HTTP (with anonymous access turned off there as well) and enter the ftpuser login info, I can download files and browse directories successfully. Why is the ftpuser working over HTTP but not FTP? Shouldn't I be able to log in over FTP with the ftpuser login information I just created? Thanks in advance, - Frank
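
    One avenue to rule out, offered as a hedged suggestion: an IIS6 FTP logon is a local logon to the box, so the account needs the relevant logon right as well as NTFS permissions, and HTTP succeeding proves neither. The NTFS side can be checked quickly with the built-in cacls tool (the path below is a placeholder for wherever the FTP root actually lives):

        cacls C:\inetpub\ftproot
        rem ftpuser, or a group containing it, should appear with at least R access;
        rem if it does, the logon right is the next thing to inspect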

    Read the article

  • Hard drive trouble trying to recover data

    - by DEmetria
    How do I get my information off my old hard drive and onto my new one? My laptop stopped working and gave me a hard disk error. I finally got it to boot up again after playing with it, but when I tried to copy my stuff it froze on me, and it hasn't booted since. I bought another hard drive and ended up making an image of my friend's computer, but I couldn't get my stuff off my old drive, so now I'm trying the freezer method: I put the drive in for two hours and it didn't boot, so I'm putting it back in for 12 hours. My end goal is just to get my old drive up long enough to create an image of it on my new hard drive. Is there another way I could do that if the drive won't boot? Here's the kicker: I've already loaded the image of my friend's computer onto the new hard drive, and I have my old hard drive plugged into a USB enclosure. So I need help. Thanks in advance!!!
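
    If the old drive will spin up at all in the USB enclosure, one hedged option is to image it from a Linux live CD with GNU ddrescue, which keeps reading past bad sectors instead of freezing the way a normal copy does. The device name and paths below are placeholders; double-check them before running, since imaging the wrong device is destructive:

        # /dev/sdb = the failing drive in the enclosure,
        # /mnt/backup = any location with enough free space
        sudo ddrescue -d -r3 /dev/sdb /mnt/backup/old-drive.img /mnt/backup/old-drive.log
        # the log file lets ddrescue resume and retry only the bad areas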

    Read the article

  • Trouble with IIS SMTP relaying to Gmail

    - by saille
    I appreciate that similar questions have been asked about how to set up SMTP relaying with IIS's virtual SMTP server, but I'm still completely stumped on this problem. Here's the setup: an IIS 6.0 SMTP server running on a Win2k3 box with a NAT'ed IP. The company uses Gmail for all email services. An app on the box needs to send email, so normally we'd just set the app up to talk to smtp.gmail.com directly, but this app doesn't support TLS. Easy, we just set up a local SMTP relay, right? So I thought. What we have done so far:

    - Set up the IIS SMTP server to relay to smtp.gmail.com, as per these excellent instructions: http://fmuntean.wordpress.com/2008/10/26/how-to-configure-iis-smtp-server-to-forward-emails-using-a-gmail-account/
    - The local SMTP relay allows anonymous access. Both the local IP and the loopback IP have been explicitly allowed in the Connection and Relay dialogs.
    - Tried sending email from two different apps via the local SMTP server, but failed (the emails end up in the Queue folder, but never get sent).
    - The IIS logs show the conversation with the local app, but zero conversation happening with smtp.gmail.com.
    - The port used by Gmail is open outbound, and indeed the apps we have that support TLS can send email directly via smtp.gmail.com, so there is no problem with the network.

    At this point I changed the SMTP settings in the IIS SMTP server to use a different external SMTP server and, hey presto, the local apps can send email via the local IIS SMTP relay. So smtp.gmail.com fails to work with our IIS SMTP relay, but another third-party SMTP service works fine. We need to use smtp.gmail.com, so how do I troubleshoot this one?
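
    A basic reachability check from the Win2k3 box itself can confirm whether the relay can even open the SMTP session it needs (telnet is built in; port 25 is what the IIS relay uses by default):

        telnet smtp.gmail.com 25

    If the session opens, typing EHLO yourdomain.example (a placeholder name) should return a 250 response list that includes STARTTLS; if nothing connects at all, the failure is in the network path from this box rather than in the relay configuration.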

    Read the article

  • SYSLOG-NG - Having trouble with a destination

    - by Samuurai
    Hi, I'm trying to set up a separate log file for all Windows messages. I've set up a match for MSWinEventLog, but syslog-ng is completely ignoring my configuration. Here's my config, which comes straight after the src object:

        filter f_windows { match("MSWinEventLog"); };

        destination winFIFO { file("/var/log/splunk/syslog-ng/winFIFO"); };

        log { source(src); filter(f_windows); destination(winFIFO); flags(final); };

    It all ends up in this one instead:

        filter f_messages { not facility(news, mail) and not filter(f_iptables); };

        destination messages { file("/var/log/messages"); };

        log { source(src); filter(f_messages); destination(messages); };

    Can anyone see what I'm doing wrong?
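
    Two hedged guesses rather than a confirmed diagnosis: first, log statements are evaluated in file order, so flags(final) only shields later statements if the winFIFO block really does precede the messages block in the assembled config; second, on syslog-ng 3.x a bare match() is deprecated and should name the field it searches, otherwise it may not match where expected. A sketch of the second point:

        # syslog-ng 3.x form: search the message text explicitly
        filter f_windows { match("MSWinEventLog" value("MESSAGE")); };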

    Read the article

  • Trouble editing ~/.bash_profile: -bash: $'\r': command not found

    - by Dave
    I installed CygWin on Windows 7. Using Notepad, I edited my ~/.bash_profile file to add on to the PATH variable:

        PATH="${PATH}:/cygdrive/c/apache-ant-1.8.2/bin"

    Now, when I SSH in to my Windows machine, I get this error:

        -bash: $'\r': command not found
        -bash: $'\r': command not found
        -bash: $'\r': command not found
        -bash: $'\r': command not found
        -bash: $'\r': command not found
        -bash: /home/dev/.bash_profile: line 39: syntax error: unexpected end of file

    and my PATH is not set. Anyone know how I can correct this?
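
    Those $'\r' errors are the classic symptom of CRLF (Windows) line endings, which Notepad writes and bash cannot parse. A sketch of stripping the carriage returns from inside a Cygwin shell; the sed form needs no extra packages:

        # rewrite ~/.bash_profile with LF-only line endings
        sed -i 's/\r$//' ~/.bash_profile
        # or, if Cygwin's dos2unix utility happens to be installed:
        dos2unix ~/.bash_profile

    Re-editing the file afterwards with an editor that preserves Unix line endings avoids a repeat.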

    Read the article

  • Trouble with wireless driver on a Dell Latitude D830

    - by Kevin
    After uninstalling Dell's wireless utility, I get a "new hardware found" dialog that cannot find any driver for my wifi card on its own. I'm running Windows XP Professional Service Pack 3, and I would like to use the default Windows wifi handler (Wireless Zero Configuration), since Dell's utility does not work with my company's wireless switch. I did try downloading the recommended driver from the Dell support site. The card's details:

        Network Adapter 2
        Model:       Intel(R) PRO/Wireless 3945ABG Network Connection
        Description: [12] Intel(R) PRO/Wireless 3945ABG Network Connection
        Status:      Connected

    Read the article

  • Trouble joining Windows Server 2008 to Domain

    - by Jim R
    When I try to join my new server to my existing domain, I get the following error:

        An attempt to resolve the DNS name of a DC in the domain being joined has failed.
        Please verify this client is configured to reach a DNS server that can resolve DNS
        names in the target domain.

    I have tried all of the following already:

    - Successfully pinged the domain controller.
    - Pinged the new server from the domain controller by IP address and by DNS name.
    - Pinged the DC server from the new server by IP address and by DNS name.
    - Changed the network to DHCP (it was originally static). No joy as static or DHCP.
    - Turned off all firewall settings.
    - Added the domain name to the 'hosts' file.
    - Added the server name of the primary domain controller to the 'hosts' file on the new server.

    Any ideas? Thanks in advance for any help! Jim

    Update: With help from J. Brian Kelly (thanks) I have managed to narrow down the problem to a DNS issue. Specifically, UDP/53 packets are being sent (they are seen in Network Monitor) but are not getting to the DNS server. I do not yet know why.

    Update: The requested output from IPCONFIG for the Hyper-V host and the virtual machine.

    IPCONFIG from the Hyper-V server:

        Windows IP Configuration

        Host Name . . . . . . . . . . . . : HYPER
        Primary Dns Suffix  . . . . . . . : sfi-wfc.com
        Node Type . . . . . . . . . . . . : Hybrid
        IP Routing Enabled. . . . . . . . : No
        WINS Proxy Enabled. . . . . . . . : No
        DNS Suffix Search List. . . . . . : sfi-wfc.com

        Ethernet adapter Local Area Connection 4:

        Connection-specific DNS Suffix  . :
        Description . . . . . . . . . . . : Primary Network
        Physical Address. . . . . . . . . : 00-30-48-CA-CC-7A
        DHCP Enabled. . . . . . . . . . . : No
        Autoconfiguration Enabled . . . . : Yes
        Link-local IPv6 Address . . . . . : fe80::cd16:3ac2:3d4f:e275%679(Preferred)
        IPv4 Address. . . . . . . . . . . : 192.168.100.1(Preferred)
        Subnet Mask . . . . . . . . . . . : 255.255.255.0
        Default Gateway . . . . . . . . . : 192.168.100.10
        DHCPv6 IAID . . . . . . . . . . . : -1476382648
        DHCPv6 Client DUID. . . . . . . . : 00-01-00-01-12-10-20-E9-00-30-48-CA-CC-7A
        DNS Servers . . . . . . . . . . . : 192.168.100.5
        NetBIOS over Tcpip. . . . . . . . : Enabled

        Ethernet adapter Local Area Connection 3:

        Media State . . . . . . . . . . . : Media disconnected
        Connection-specific DNS Suffix  . : sfi
        Description . . . . . . . . . . . : Intel(R) 82576 Gigabit Dual Port Network Connection #2
        Physical Address. . . . . . . . . : 00-30-48-CA-CC-7B
        DHCP Enabled. . . . . . . . . . . : Yes
        Autoconfiguration Enabled . . . . : Yes

    IPCONFIG from the virtual machine:

        Windows IP Configuration

        Host Name . . . . . . . . . . . . : DB
        Primary Dns Suffix  . . . . . . . :
        Node Type . . . . . . . . . . . . : Hybrid
        IP Routing Enabled. . . . . . . . : No
        WINS Proxy Enabled. . . . . . . . : No
        DNS Suffix Search List. . . . . . : sfi

        Ethernet adapter Local Area Connection 2:

        Connection-specific DNS Suffix  . : sfi
        Description . . . . . . . . . . . : Microsoft Virtual Machine Bus Network Adapter
        Physical Address. . . . . . . . . : 00-15-5D-66-03-02
        DHCP Enabled. . . . . . . . . . . : Yes
        Autoconfiguration Enabled . . . . : Yes
        IPv4 Address. . . . . . . . . . . : 192.168.100.128(Preferred)
        Subnet Mask . . . . . . . . . . . : 255.255.255.0
        Lease Obtained. . . . . . . . . . : Saturday, August 29, 2009 10:44:45 AM
        Lease Expires . . . . . . . . . . : Tuesday, September 01, 2009 3:08:33 PM
        Default Gateway . . . . . . . . . : 192.168.100.10
        DHCP Server . . . . . . . . . . . : 192.168.100.5
        DNS Servers . . . . . . . . . . . : 192.168.102.5
        Primary WINS Server . . . . . . . : 192.168.100.5
        NetBIOS over Tcpip. . . . . . . . : Enabled

        Tunnel adapter Local Area Connection* 8:

        Media State . . . . . . . . . . . : Media disconnected
        Connection-specific DNS Suffix  . : sfi
        Description . . . . . . . . . . . : isatap.sfi
        Physical Address. . . . . . . . . : 00-00-00-00-00-00-00-E0
        DHCP Enabled. . . . . . . . . . . : No
        Autoconfiguration Enabled . . . . : Yes

        Tunnel adapter Local Area Connection* 9:

        Media State . . . . . . . . . . . : Media disconnected
        Connection-specific DNS Suffix  . :
        Description . . . . . . . . . . . : Teredo Tunneling Pseudo-Interface
        Physical Address. . . . . . . . . : 02-00-54-55-4E-01
        DHCP Enabled. . . . . . . . . . . : No
        Autoconfiguration Enabled . . . . : Yes
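
    For what it's worth, the VM's DNS server reads 192.168.102.5 while every other infrastructure address in the dump (host DNS, DHCP, WINS) sits on 192.168.100.x; whether that is a transcription typo or the actual misconfiguration is worth verifying directly. A quick way to test whether a given DNS server can answer the domain-controller SRV query a join performs, using the domain name from the host's suffix above:

        nslookup -type=SRV _ldap._tcp.dc._msdcs.sfi-wfc.com 192.168.100.5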

    Read the article

  • Trouble upgrading OS X, because HD doesn't use GUID Partition Table Scheme

    - by Erik Vold
    So I have a MacBook with Mac OS X 10.5 and I'm trying to upgrade to 10.6, but when I run the upgrade install I quickly get to a page where I am supposed to 'Select the disk where you want to install Mac OS X'. There is only the one hard drive, so it is auto-selected, and below that I see a warning message; the only button available is the 'Go Back' button. The warning message says:

        "Macintosh HD" can't be used because it doesn't use the GUID Partition Table scheme.
        Use Disk Utility to change the partition scheme. Select the disk, choose the
        Partition tab, select the Volume Scheme and then click Options.

    So I followed the above instructions and got to the last step, where I'm supposed to click the 'Options' button. The problem is that I cannot click that button; it is disabled. So what am I supposed to do?
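
    For reference, the current partition scheme can be confirmed from Terminal before repartitioning anything; diskutil is built into OS X, and disk0 is usually (but not always) the internal drive:

        diskutil list
        # the row labelled 0: names each disk's scheme, e.g.
        # FDisk_partition_scheme, Apple_partition_scheme or GUID_partition_scheme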

    Read the article

  • Trouble getting SSL to work with django + nginx + wsgi

    - by Kevin
    I've followed a couple of examples for Django + nginx + WSGI + SSL, but I can't get them to work; I simply get an error in my browser that it can't connect. I'm running two websites off the host. The config files are identical except for the IP addresses, server names, and directories. When neither uses SSL, they work fine. When I try to listen on 443 with one of them, I can't connect to either. My config files are below, and any suggestions would be appreciated.

        server {
            listen xxx.xxx.xxx.xxx:80;
            server_name sub.domain.com;
            access_log /home/django/logs/nginx_customerdb_http_access.log;
            error_log /home/django/logs/nginx_customerdb_http_error.log;
            location / {
                proxy_pass http://127.0.0.1:8080;
                proxy_redirect off;
                proxy_set_header Host $host;
                proxy_set_header X-Real-IP $remote_addr;
                proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
                client_max_body_size 10m;
                client_body_buffer_size 128k;
                proxy_connect_timeout 90;
                proxy_send_timeout 90;
                proxy_read_timeout 90;
                proxy_buffers 32 4k;
            }
            location /site_media/ {
                alias /home/django/customerdb_site_media/;
            }
            location /admin-media/ {
                alias /home/django/django_admin_media/;
            }
        }

        server {
            listen xxx.xxx.xxx.xxx:443;
            server_name sub.domain.com;
            access_log /home/django/logs/nginx_customerdb_http_access.log;
            error_log /home/django/logs/nginx_customerdb_http_error.log;
            ssl on;
            ssl_certificate sub.domain.com.crt;
            ssl_certificate_key sub.domain.com.key;
            ssl_prefer_server_ciphers on;
            location / {
                proxy_pass http://127.0.0.1:8080;
                proxy_redirect off;
                proxy_set_header Host $host;
                proxy_set_header X-Real-IP $remote_addr;
                proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
                proxy_set_header X-Forwarded-Protocol https;
                client_max_body_size 10m;
                client_body_buffer_size 128k;
                proxy_connect_timeout 90;
                proxy_send_timeout 90;
                proxy_read_timeout 90;
                proxy_buffers 32 4k;
            }
            location /site_media/ {
                alias /home/django/customerdb_site_media/;
            }
            location /admin-media/ {
                alias /home/django/django_admin_media/;
            }
        }

        <VirtualHost *:8080>
            ServerName xxx.xxx.xxx.xxx
            ServerAlias xxx.xxx.xxx.xxx
            LogLevel warn
            ErrorLog /home/django/logs/apache_customerdb_error.log
            CustomLog /home/django/logs/apache_customerdb_access.log combined
            WSGIScriptAlias / /home/django/customerdb/apache/django.wsgi
            WSGIDaemonProcess customerdb_wsgi processes=4 threads=5
            WSGIProcessGroup customerdb_wsgi
            SetEnvIf X-Forwarded-Protocol "^https$" HTTPS=on
        </VirtualHost>

    UPDATE: The existence of two sites (on separate IPs) on the host is the issue. If I delete the other site, the settings above mostly work. Doing so also brings up another issue: Chrome doesn't accept the site as secure, saying that some content is not encrypted.
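
    Two generic nginx checks that can narrow this down before touching the SSL setup; both commands are standard and make no assumptions about this particular install:

        sudo nginx -t                      # parse and validate the combined config
        sudo netstat -tlnp | grep ':443'   # confirm nginx is bound to the expected IP:port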

    Read the article

  • Trouble with HP printer

    - by reyjavikvi
    I have an HP Photosmart C3180 (it's a printer/scanner). For some reason, it recently stopped working. The power light is blinking (I think all the other lights are on, but I don't remember; I'm not there right now), and the only way to turn it off is to unplug it. It won't print anything, and when you put a page in the tray it sucks it in without printing anything on it and stops while the paper is on its way out (again, I don't remember how we managed to get the paper out, sorry). The printer is hooked up to a computer running XP, but it doesn't work when printing from the network either. Weirdly, the scanner works fine. Do you have any ideas on what the problem could be? Could it be a driver problem? By the way, sorry if the question lacks a bit of detail; I don't know much about printers, and I don't have the device here, so I can't remember all the details exactly. If needed I can update the question tonight or tomorrow.

    Read the article

  • Trouble subnetting...

    - by ???
    I have to learn how to subnet by hand for a test, and I'm having real problems doing it. I keep getting stuck. Here's an example:

        138.248.184.17/18  - IP
        255.255.192.0      - Subnet mask
        192 = 1100 0000 in binary

    And I know 184 in the IP address is the "octet of interest". OK, I get that far... and then I'm lost. I know I need to set the network bits of 192 (I think?) to all 0 for the network ID and then to all 1 for the broadcast ID. The problem is: how do I know which part of 1100 0000 is network and which part is host?
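
    For reference, a worked pass at this exact example: /18 means 18 network bits, so two mask bits reach into the third octet (255.255.192.0 = 11111111.11111111.11000000.00000000), and in the mask octet 1100 0000 the two 1-bits are network, the six 0-bits are host. The third octet of the address is 184 = 1011 1000; ANDing with 1100 0000 keeps the top two bits, giving 1000 0000 = 128, so the network ID is 138.248.128.0, and filling the host bits with 1s gives the broadcast 138.248.191.255. Hand results can be checked against the ipcalc utility, assuming it is installed:

        ipcalc 138.248.184.17/18
        # prints the network (138.248.128.0/18), broadcast (138.248.191.255)
        # and the usable host range in between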

    Read the article

  • Trouble with Debian Lenny and Sphinx

    - by Ando
    I have a very basic understanding of Linux systems, but I have a server that was set up a while ago to host some web apps. Recently I decided to test out and implement Sphinx, but unfortunately I can't get the install to work. I'm running a Debian Lenny distro, and when I try to install Sphinx it says:

        checking MySQL include files... configure: error: missing include files.
        ******************************************************************************
        ERROR: cannot find MySQL include files.

        Check that you do have MySQL include files installed. The package name
        is typically 'mysql-devel'.

        If include files are installed on your system, but you are still getting
        this message, you should do one of the following:
        1) either specify includes location explicitly, using --with-mysql-includes;
        2) or specify MySQL installation root location explicitly, using --with-mysql;
        3) or make sure that the path to 'mysql_config' program is listed in
           your PATH environment variable.

        To disable MySQL support, use --without-mysql option.
        ******************************************************************************

    I do have MySQL 5.1 installed, but I can't find the include files. And one more thing: I read around the net that I probably need libmysqlclient15-dev, but when I try to install that using apt-get, I receive the following error:

        The following packages were automatically installed and are no longer required:
          libxcb-aux0 libts-0.0-0 libxcb-atom1 ttf-dejavu-extra hunspell-en-us g++-4.3
          libmysql++3 libnspr4-0d libdirectfb-1.0-0 libxcb-event1 libasound2
          libstdc++6-4.3-dev libhunspell-1.2-0 ttf-dejavu libmozjs2d
          conkeror-spawn-process-helper libnss3-1d
        Use 'apt-get autoremove' to remove them.
        The following NEW packages will be installed:
          libmysqlclient15-dev
        0 upgraded, 1 newly installed, 0 to remove and 276 not upgraded.
        Need to get 7590 kB of archives.
        After this operation, 26.3 MB of additional disk space will be used.
        WARNING: The following packages cannot be authenticated!
          libmysqlclient15-dev
        Install these packages without verification [y/N]? Y
        Err http://ftp.us.debian.org/debian/ lenny/main libmysqlclient15-dev amd64 5.0.51a-24+lenny5
          404 Not Found [IP: 35.9.37.225 80]
        Err http://security.debian.org/ lenny/updates/main libmysqlclient15-dev amd64 5.0.51a-24+lenny5
          404 Not Found [IP: 149.20.20.6 80]
        Failed to fetch http://security.debian.org/pool/updates/main/m/mysql-dfsg-5.0/libmysqlclient15-dev_5.0.51a-24+lenny5_amd64.deb  404 Not Found [IP: 149.20.20.6 80]
        E: Unable to fetch some archives, maybe run apt-get update or try with --fix-missing?

    Can you help me out by suggesting how to install the required packages and run Sphinx?
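
    The 404s suggest the regular mirrors no longer carry Lenny packages; Debian releases that old are moved to archive.debian.org. A hedged sketch of repointing APT there and retrying (back up the existing sources first, and expect authentication warnings, since the archive's signing keys have long expired):

        # point APT at the Debian archive for lenny (assumption: lenny has been archived)
        sudo tee /etc/apt/sources.list.d/lenny-archive.list <<'EOF'
        deb http://archive.debian.org/debian lenny main
        EOF
        sudo apt-get update
        sudo apt-get install libmysqlclient15-dev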

    Read the article

  • Migrating to CF9: trouble getting JRun working with SSL

    - by DaveBurns
    I have a client on MX7 who wants to migrate to CF9. I have a dev environment for them on my WinXP machine where I've configured MX7 to run with JRun's built-in web server. I've had that working for a long time with both regular and SSL connections. I installed CF9 yesterday side-by-side with the existing MX7 install to start testing. The install was smooth and detected MX7, adjusted CF9's port numbers for no conflict, etc. Testing started well: MX7 over regular and SSL still worked, and CF9 worked over regular HTTP. But I can't get CF9 to work with SSL. I installed a new certificate with keytool, FireFox (v3.6) complained about it being unsigned, I added it to the exception list, and now I get this:

        Secure Connection Failed
        An error occurred during a connection to localhost:9101.
        Peer reports it experienced an internal error.
        (Error code: ssl_error_internal_error_alert)

    I've been Googling that in all variations but can't find much help to get past this. I don't see any info in any log files either. FWIW, here's my SSL config from SERVER-INF/jrun.xml:

        <service class="jrun.servlet.http.SSLService" name="SSLService">
          <attribute name="enabled">true</attribute>
          <attribute name="interface">*</attribute>
          <attribute name="port">9101</attribute>
          <attribute name="keyStore">{jrun.rootdir}/lib/mykey</attribute>
          <attribute name="keyStorePassword">*deleted*</attribute>
          <attribute name="trustStore">{jrun.rootdir}/lib/trustStore</attribute>
          <attribute name="socketFactoryName">jrun.servlet.http.JRunSSLServerSocketFactory</attribute>
          <attribute name="deactivated">false</attribute>
          <attribute name="bindAddress">*</attribute>
          <attribute name="clientAuth">false</attribute>
        </service>

    Anyone here know of any issues re setting up SSL and CF9? Anyone had success with it? Dave
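
    One sanity check worth running, as a hedged suggestion: confirm that the keystore jrun.xml points at actually contains a private key and that its password matches the config. keytool can list the store's contents (the path mirrors the {jrun.rootdir}/lib/mykey entry above; substitute the real JRun root):

        keytool -list -v -keystore /path/to/jrun/lib/mykey
        # look for an entry of type PrivateKeyEntry (keyEntry on older JDKs);
        # a store holding only trusted certificates would make the server side
        # of the SSL handshake fail with an internal error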

    Read the article

  • Trouble cloning a Macbook Pro hard drive

    - by Mirko Froehlich
    I am trying to upgrade the 250GB hard drive in my MacBook Pro (early 2008 model) to a 750GB drive. I have connected the new drive via an external USB enclosure. The drive is recognized fine; I can format it, etc. However, every time I try to clone the drive, I am getting Input/Output errors. Before the clone operation, I have verified both the internal and the external drive using Disk Utility, and they both check out fine. After the clone operation, the external drive shows multiple "Invalid node structure" errors. I have tried two approaches for cloning the drive:

    - Using Disk Utility, by starting from the OSX install DVD
    - Using Carbon Copy Cloner

    The outcome is the same in both cases. The Carbon Copy Cloner logs show a handful of the following types of errors:

        rsync: mkstemp "<... an external filename ...>" failed: Input/output error (5)
        rsync: stat "<... an external filename ...>" failed: Input/output error (5)

    The actual files affected seem to be different across different runs of the application. Before the last run, I used Disk Utility to (once more) reformat the external drive and explicitly overwrite it with zeros, but this made no difference. I also tried running a surface scan in Tech Tool Pro overnight. It got about 2/3 of the way through before I had to disconnect the drive (had to take my MacBook Pro to work), but so far it didn't report any bad blocks. Assuming it scans the drive in the same order in which blocks would be allocated during actual use, it seems like if bad blocks were to blame for the clone failures, they should have been found already (given that the source drive is only 250GB). As a last attempt, I may try SuperDuper as well, although my understanding is that it uses the same underlying rsync approach as Carbon Copy Cloner, so it's unlikely to perform any better. Are there any other things I should try before I send the drive in for a replacement? Could these problems be caused by my internal drive, even though it works fine and checks out fine in Disk Utility?
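
    One more approach that sidesteps the file-by-file rsync copy entirely is Apple's asr, which performs a volume-level block copy; if the I/O errors are enclosure- or cable-related, they may behave differently (or disappear) under a sustained sequential write. A sketch with placeholder volume names; note that --erase wipes the target volume:

        sudo asr restore --source "/Volumes/Macintosh HD" --target "/Volumes/New Drive" --erase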

    Read the article

  • Trouble exporting quicktime movie in Final Cut Pro

    - by Kato
    I'm having a very strange problem exporting a sequence to QuickTime in FCP 6. I've never had this problem before. After the sequence is exported, the last 3 minutes play back very choppily, and the last minute does not export at all! There is nothing unrendered there, and the first 10 minutes play perfectly. The only thing that is different about the last bit is that there are titles there, but the export goes wonky at least 2 minutes before that. I'm exporting using Apple ProRes 422 (HQ) at 1920x1080 30p, and the footage is from an HMC150, shot in 1080p30. Any advice would be appreciated!

    Read the article

  • Trouble using gitweb with nginx

    - by Rayne
    I have a git repository in a directory inside of /home/raynes/pubgit/. I'm trying to use gitweb to provide a web interface to it. I use nginx as my web server for everything else, so I don't really want to have to use another just for this. I'm mostly following this guide: http://michalbugno.pl/en/blog/gitweb-nginx, which is the only guide I can find via Google and is really recent. fcgiwrap apparently isn't in Lucid Lynx's repositories, so I installed it manually. I spawn instances via spawn-fcgi:

        spawn-fcgi -f /usr/local/sbin/fcgiwrap -a 127.0.0.1 -p 9001

    That's all good. My /etc/gitweb.conf is as follows:

        # path to git projects (<project>.git)
        #$projectroot = "/home/raynes/pubgit";
        $my_uri = "http://mc.raynes.me";
        $home_link = "http://mc.raynes.me/";
        # directory to use for temp files
        $git_temp = "/tmp";
        # target of the home link on top of all pages
        #$home_link = $my_uri || "/";
        # html text to include at home page
        $home_text = "indextext.html";
        # file with project list; by default, simply scan the projectroot dir.
        $projects_list = $projectroot;
        # stylesheet to use
        $stylesheet = "/gitweb/gitweb.css";
        # logo to use
        $logo = "/gitweb/git-logo.png";
        # the 'favicon'
        $favicon = "/gitweb/git-favicon.png";

    And my nginx server configuration is this:

        server {
            listen 80;
            server_name mc.raynes.me;

            location / {
                root /usr/share/gitweb;
                if (!-f $request_filename) {
                    fastcgi_pass 127.0.0.1:9001;
                }
                fastcgi_index index.cgi;
                fastcgi_param SCRIPT_FILENAME /scripts$fastcgi_script_name;
                include fastcgi_params;
            }
        }

    The only difference here is that I've set fastcgi_pass to 127.0.0.1:9001. When I go to http://mc.raynes.me I'm greeted with a page that simply says "403" and nothing else. I have not the slightest clue what I did wrong. Any ideas?
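
    One detail that stands out, flagged as a guess: SCRIPT_FILENAME is built from /scripts, a prefix copied from the stock nginx examples that probably doesn't exist here, so fcgiwrap is handed a script it cannot find or execute, which it commonly reports as 403. A sketch of a location block that points at the actual gitweb CGI; the paths assume the Ubuntu gitweb package layout:

        location / {
            root /usr/share/gitweb;
            index gitweb.cgi;
            if (!-f $request_filename) {
                fastcgi_pass 127.0.0.1:9001;
            }
            include fastcgi_params;
            # hand fcgiwrap the real script path instead of /scripts...
            fastcgi_param SCRIPT_FILENAME /usr/share/gitweb/gitweb.cgi;
        }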

    Read the article

< Previous Page | 4 5 6 7 8 9 10 11 12 13 14 15  | Next Page >