Search Results

Search found 10527 results on 422 pages for 'ccnet config'.


  • Multiple vlan issue with procurve switch

    - by Chris-AZ
    I have a Cisco ASA 5505 as my rtr/fw (10.1.3.254). I have VLAN 1 and VLAN 3: VLAN 1 is my default all-access VLAN, and VLAN 3 is my guest (DMZ) VLAN. I can't seem to get a DHCP address when my laptop is plugged into port 42 on my ProCurve. When I plug the laptop directly into the firewall it gets a DHCP address fine (the firewall is the DHCP server). The firewall is plugged into port 41, and only VLAN 3 needs to go over port 41. I'm sure I have a bonehead config problem, but I'm about ready to pull out what little hair I have left.

      vlan 1
         name "Computers"
         forbid 45
         untagged 1-41,43-44
         ip helper-address 10.1.1.16
         ip address 10.1.1.1 255.255.255.0
         tagged 46-48
         no untagged 42,45
         exit
      vlan 3
         name "Guest Wireless"
         ip helper-address 10.1.3.254
         ip address 10.1.3.1 255.255.255.0
         tagged 41-42,44-48
         exit
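
    One detail stands out, assuming the laptop sends ordinary untagged frames (laptops normally don't tag): port 42 is only a tagged member of VLAN 3, so the laptop's untagged DHCP broadcasts have no VLAN to land in. A sketch of the change that would make port 42 an untagged (access) member of VLAN 3:

      vlan 3
         no tagged 42
         untagged 42
         exit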


  • how to properly enable drag-and-drop in Windows 7 (with iTunes in particular)

    - by user38795
    On two computers running Windows 7 (one at work, one at home) with the same version of iTunes (10.1.0.56), I can drag and drop files onto the iPod at work, but not at home. At home it only works if I first add the files to the Library through a dialog box and then drag and drop them onto the iPod (or iPad). Initially I thought the problem was in iTunes, but since the versions match, it's most likely a Windows 7 config parameter. Both machines run 64-bit Windows 7, and I have admin privileges in both cases. The only difference I can think of is that it's Enterprise at work and Ultimate at home. What should I change at home to enable dragging and dropping files into the application?


  • Unknown protocol when trying to connect to remote host with stunnel

    - by RaYell
    I'm trying to set up an stunnel for WebDAV on Windows. I want to connect port 80 on my local interface to port 443 on another machine in my network. I can ping the remote machine, but when I use the tunnel I get this error every time:

      SSL state (accept): before/accept initialization
      SSL_accept: 140760FC: error:140760FC:SSL routines:SSL23_GET_CLIENT_HELLO:unknown protocol

    There is nothing in the logs on the other machine, and here's my stunnel connection config:

      [https]
      accept = 127.0.0.2:80
      connect = 10.0.0.60:443
      verify = 0

    I've set it up to accept all certificates, so this shouldn't be a problem with the self-signed certificate the remote host uses. Does anyone know why this connection cannot be established?
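
    That error is stunnel trying to read an SSL handshake on the accept side and receiving plain HTTP instead — the default mode is server mode. To take plaintext locally and wrap it in SSL toward the remote host, the section needs client mode; a sketch against the config above:

      [https]
      client = yes
      accept = 127.0.0.2:80
      connect = 10.0.0.60:443
      verify = 0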


  • configuring two network interfaces in ubuntu 10.04.1

    - by Bill Smith
    I have got two NICs configured on a VM; each is tied to a specific network — one is a DMZ, the other an internal network. I want MySQL to listen on the internal network only, and Apache to listen on the DMZ for HTTP and HTTPS. But as soon as I add the second interface I run into trouble: I can hit HTTP on either interface, but I cannot hit 3306 on the internal network for MySQL. Here's the config — could someone sanity-check this, please?

      auto lo
      iface lo inet loopback

      auto eth0
      iface eth0 inet static
          address 10.153.24.230
          netmask 255.255.255.240
          network 10.153.24.224
          broadcast 10.153.24.239
          dns-nameservers 8.8.8.8

      auto eth1
      iface eth1 inet static
          address 10.153.24.195
          netmask 255.255.255.224
          gateway 10.153.24.193
          broadcast 10.153.23.223
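
    Two things worth ruling out. First, Apache answering on both interfaces while port 3306 answers on neither suggests MySQL is bound to 127.0.0.1 (the Debian/Ubuntu default) rather than the interface config being wrong; a quick check and a guess at the fix, with the bind address assumed to be the internal NIC's:

      # is mysqld listening on anything besides 127.0.0.1?
      netstat -plnt | grep 3306

      # in /etc/mysql/my.cnf, a guess at the fix:
      #   bind-address = 10.153.24.195
      # (or comment the line out to listen on all interfaces), then restart mysql

    Second, the eth1 broadcast looks off: 10.153.24.195 with netmask 255.255.255.224 puts the network at 10.153.24.192 and the broadcast at 10.153.24.223, not 10.153.23.223.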


  • Tomcat 6 HTTPS connector: keep alive timeout not being respected

    - by sehugg
    I'm using Tomcat 6.0.24 on Ubuntu (JDK 1.6) with an app that does Comet-style requests against an HTTPS connector (directly against Tomcat, not using APR). I'd like to set the keep-alive to 5 minutes so I don't have to refresh my long-polling connections. Here is my config:

      <Connector port="8443" protocol="HTTP/1.1" SSLEnabled="true"
                 maxThreads="1000" keepAliveTimeout="330000"
                 scheme="https" secure="true" clientAuth="false"
                 sslProtocol="TLS" />

    Unfortunately, the server seems to close the connection after 65 seconds. The pcap from a sample session goes something like this:

      T=0    Client sends SYN to server, handshake etc.
      T=65   Server sends FIN to client
      T=307  Client sends FIN to server

    (I'm guessing the 5-minute timeout on the client is due to the HTTP lib not detecting the socket close on the server end, but in any case, the server shouldn't be closing the connection that early.)

    (edit: this works as expected when using the standard HTTP connector)
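
    One variable worth pinning down, purely as a guess: keepAliveTimeout falls back to connectionTimeout when unset, and timeout handling differs between the blocking SSL connector and the plain HTTP one, so setting both explicitly at least rules out the fallback path:

      <Connector port="8443" protocol="HTTP/1.1" SSLEnabled="true"
                 maxThreads="1000" keepAliveTimeout="330000"
                 connectionTimeout="330000"
                 scheme="https" secure="true" clientAuth="false"
                 sslProtocol="TLS" />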


  • Why do I get error, Invalid command 'PythonHandler'?

    - by nbolton
    I'm trying to deploy a Django application, but I've hit a brick wall. On Debian (latest), I've run these commands so far:

      apt-get install apache2 apache2-doc apache2-mpm-prefork apache2-utils libexpat1 ssl-cert libapache2-mod-python python-django

    I've tried adding the module manually in the Apache 2 config files, but to be honest I'm totally lost; it's totally different from Apache version 1, which I used years ago.

      Syntax error on line 7 of /etc/apache2/sites-enabled/000-default:
      Invalid command 'PythonHandler', perhaps misspelled or defined by a module not included in the server configuration

    I've added the following to my sites-available/default file, between the tags:

      <Location "/">
          SetHandler python-program
          PythonHandler django.core.handlers.modpython
          SetEnv DJANGO_SETTINGS_MODULE hellodjango1.settings
          PythonDebug Off
      </Location>

    Here's what tutorials I've used so far, without much luck: "How to use Django with Apache and mod_python" (Django documentation) and "How To Install Django On Debian Etch (Apache2/mod_python)".
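
    "Invalid command 'PythonHandler'" means Apache parsed the directive before mod_python was loaded. The libapache2-mod-python package installs the module but doesn't necessarily enable it; a guess at the fix on Debian:

      a2enmod python
      /etc/init.d/apache2 restart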


  • Crontab script on Mac OS X Lion does not work anymore

    - by Nopster
    I have a problem with cron tasks. This script previously worked fine on Mac OS X 10.6 Server, but when I set it up on Lion (client), it stopped working. Basically, this .bat file calls a jar file (which runs a loop of mysqldump commands) to back up several databases on several servers, and it runs perfectly when launched from the shell:

      cd /Users/nameoftheuser/Desktop/backupper
      /usr/bin/java -cp .:Backupper.jar:lib/mail.jar backupper.Main "/Users/nameoftheuser/Desktop/backupper/listasiti.txt" "/Users/nameofthe/Desktop/backupper/config.properties"

    But when cron launches the same .bat file, the generated database backups are 0 bytes. The cron entry is:

      0 0 sh /Users/path/to/file.bat

    I believe the problem is that cron doesn't run as root. Or is it something else?
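
    Two things stand out before permissions do. As quoted, the crontab entry has only two time fields, and cron expects five; and cron runs with a minimal PATH, so a jar that shells out to mysqldump by bare name will quietly produce empty dumps. A sketch that rules out both (the mysql bin path and the log path are assumptions):

      PATH=/usr/bin:/bin:/usr/sbin:/sbin:/usr/local/mysql/bin
      0 0 * * * sh /Users/path/to/file.bat >> /tmp/backupper.log 2>&1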


  • Cannot install xdebug

    - by Nathan Mann
    I'm getting a large number of undefined references when running ld xdebug.so. I have added zend_extension="/bla/xdebug.so" to the conf for Apache, and the config test fails because it cannot find the file. The file has been chown'd to www-data, so I do not believe it is a permissions error. I ran the wizard at xdebug.org/wizard.php to make sure the version was correct, updated to the version it recommended with an install from source, and still get the same error from Apache and the same output from ld. I originally installed xdebug with:

      apt-get install php5-xdebug

    And I have also tried:

      pecl install xdebug
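
    The ld output is a red herring: xdebug.so is a PHP module that resolves its symbols against PHP at load time, so running ld on it directly will always print undefined references. "Cannot find the file" points at the path itself; a few checks, with the path taken from the line above:

      ls -l /bla/xdebug.so            # zend_extension needs the exact absolute path
      php -m | grep -i xdebug         # did PHP actually pick it up?
      php -i | grep -i extension_dir  # where PHP looks for relative paths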


  • mod_fcgid process doesn't respawn

    - by aaronsw
    I have a Python script running on my server as a FastCGI using Apache 2 and mod_fcgid. I let it spawn up to five processes, but I soon get messages like these in the Apache logs:

      [Wed Sep 02 23:16:34 2009] [warn] (103)Software caused connection abort: mod_fcgid: ap_pass_brigade failed in handle_request function
      [Wed Sep 02 23:16:35 2009] [warn] (103)Software caused connection abort: mod_fcgid: ap_pass_brigade failed in handle_request function

    Then Apache doesn't seem to recognize that all its processes are dead (I have a max of 5 backends) and refuses to spawn new ones:

      [Wed Sep 02 23:26:16 2009] [notice] mod_fcgid: /var/www/hacks.og.theinfo.org/picker.fcgi total process count 5 >= 5, skip the spawn request
      [Wed Sep 02 23:26:17 2009] [notice] mod_fcgid: /var/www/hacks.og.theinfo.org/picker.fcgi total process count 5 >= 5, skip the spawn request

    At that point it refuses to respond to requests from the outside world. This doesn't happen with my other FastCGIs, which all use the same Apache config:

      <IfModule mod_fcgid.c>
        AddHandler fcgid-script .fcgi
        IPCConnectTimeout 20
        MaxProcessCount 5
        DefaultMaxClassProcessCount 2
        DefaultMinClassProcessCount 1
      </IfModule>

    Any idea what causes it?
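
    The ap_pass_brigade warning itself usually just means the HTTP client went away mid-response, which is routine for long-polling clients; the real problem is the five backends wedging and never being reclaimed. A sketch of timeouts that let mod_fcgid kill and respawn stuck processes (directive names are the old-style ones this era of mod_fcgid used; the values are guesses):

      <IfModule mod_fcgid.c>
        BusyTimeout 300
        IdleTimeout 300
        ProcessLifeTime 3600
      </IfModule>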


  • Apache mpm-itk Performance

    - by Matt Beckman
    I manage a bunch of VPSs with memory ranging from 1 GB to 8 GB. Most of the websites are Joomla sites, and the servers must support multiple sites/users/SFTP. I use mpm-itk almost exclusively (mostly due to its convenience in these shared environments). However, I'm aware it isn't known for performance, so I need some advice on making it faster. Due to the lack of documentation when I first went the way of mpm-itk, I included only one setting in the config, which limits each user to 50 clients (the rest I left at the defaults):

      <IfModule mpm_itk_module>
        MaxClientsVHost 50
      </IfModule>

    Are there any better alternatives available? Are there any settings supported in mpm-prefork or mpm-worker that are also supported in mpm-itk? Thanks!
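
    Since mpm-itk is built on top of prefork, the usual prefork knobs (StartServers, MinSpareServers, MaxSpareServers, MaxClients, MaxRequestsPerChild) still apply globally alongside it; per vhost, the itk-specific directives are few. A sketch of a typical per-vhost stanza (the user/group names and nice value are placeholders):

      <IfModule mpm_itk_module>
        AssignUserId siteuser sitegroup
        NiceValue 5
        MaxClientsVHost 50
      </IfModule>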


  • claws-mail suddenly does not recognize my .claws-mail folder

    - by Mala
    Hi. I turned my computer on today like any other day and fired up my email client, claws-mail. However, instead of showing me my inbox, it just pops up the new-account wizard, as though my .claws-mail folder did not exist! I've double-checked that it's there and that it (seems to) contain everything it should. I tried running claws-mail from the terminal to see if it returned any useful messages about why it couldn't find my email account, but no dice. For reference, this is CM version 3.7.2 running on Gentoo, kernel 2.6.31-gentoo-r6 (x86_64). Any clues as to how to get my beloved inbox/config back? Thanks!
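
    One way to narrow it down: point claws-mail at the folder explicitly. If this brings the inbox back, the problem is the default config-path lookup rather than the data itself:

      claws-mail --alternate-config-dir ~/.claws-mail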


  • Poor upload/download speed on 2 x ADSL lines into a Cisco 2621XM

    - by 2020mobile
    Hi. Sorry, I've never been on this site before, so I apologise if this isn't the right section or even the right forum. Users are complaining of very slow internet connectivity on site. I have checked with our ISP, who say the line is testing at 8 Mb. We have 2 x BT lines with our ISP's broadband on them. Both lines go into a Cisco 2600-series router, which has a PIX firewall behind it. Connectivity is successful; it has just become really slow, and nothing can be downloaded. Config is below:

      version 12.3
      no service pad
      service tcp-keepalives-in
      service tcp-keepalives-out
      service timestamps debug datetime msec
      service timestamps log datetime msec
      service password-encryption
      !
      hostname ROUTER-ADSL-INTERNET
      !
      logging buffered 16384 informational
      enable secret xxx
      enable password xxx
      !
      username xxx
      username xxx
      clock summer-time UK recurring last Sun Mar 1:00 last Sun Oct 1:00
      aaa new-model
      !
      aaa authentication login default local
      aaa authorization exec default local
      aaa session-id common
      ip subnet-zero
      no ip source-route
      !
      ip audit notify log
      ip audit po max-events 100
      no ip bootp server
      ip name-server 213.208.106.212
      no mpls ldp logging neighbor-changes
      no ftp-server write-enable
      !
      no voice hpi capture buffer
      no voice hpi capture destination
      !
      interface ATM0/0
       description 01270 111111
       no ip address
       no atm ilmi-keepalive
       pvc 0/38
        encapsulation aal5mux ppp dialer
        dialer pool-member 1
       !
       dsl operating-mode auto
      !
      interface FastEthernet0/0
       ip address 82.133.32.9 255.255.255.248
       shutdown
       speed 100
       full-duplex
       no cdp enable
      !
      interface ATM0/1
       description 01270 222222
       no ip address
       no atm ilmi-keepalive
       pvc 0/38
        encapsulation aal5mux ppp dialer
        dialer pool-member 1
       !
       dsl operating-mode auto
      !
      interface FastEthernet0/1
       ip address 217.146.115.49 255.255.255.240
       duplex auto
       speed auto
       no cdp enable
      !
      interface Dialer0
       ip address 217.146.115.250 255.255.255.248
       encapsulation ppp
       dialer pool 1
       dialer-group 1
       ppp authentication chap callin
       ppp chap hostname [email protected]
       ppp chap password 7 xxxxx
       ppp multilink
      !
      ip classless
      ip route 0.0.0.0 0.0.0.0 Dialer0
      !
      no ip http server
      no ip http secure-server
      !
      no logging trap
      access-list 10 permit 217.146.115.50
      access-list 10 permit 82.133.32.10
      access-list 10 deny any
      access-list 22 permit 217.146.115.50
      access-list 22 permit 217.206.239.86
      access-list 22 permit 82.133.32.10
      access-list 22 deny any
      dialer-list 1 protocol ip permit
      no cdp run
      !
      snmp-server community xxxxxx RO 10
      snmp-server enable traps tty
      radius-server authorization permit missing Service-Type
      !
      line con 0
       exec-timeout 5 0
       password 7 xxxxxx
      line aux 0
       no exec
      line vty 0 4
       access-class 22 in
       exec-timeout 5 0
       password 7 xxxxxx
       transport input telnet ssh
       transport output none
      line vty 5 15
       password 7 xxxxxx
       transport input telnet ssh
      !
      ntp clock-period 17180095
      ntp server 130.88.200.98
      !
      end

    Now, my knowledge is very limited, but the ISP have said that while the lines are bonded, each needs a separate login: they recently changed their L2TP router, and it enforces the use of separate logins. When the lines were configured we were given two logins. So, my question is: what changes do I need to make to the config to get this working? It was OK before their change, and I do have the other login:

      01270 111111 - [email protected]
      01270 222222 - [email protected]

    Apologies for the long post, and thanks for taking the time to read it. Any more info I can provide, please let me know. Thanks.
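
    If each ADSL line now needs its own PPP login, the usual approach is to stop pooling both ATM interfaces into one dialer and give each its own dialer interface and credentials. A sketch, assuming the second login drives ATM0/1 (the usernames are redacted above, so the chap hostname here is a placeholder), with per-destination load sharing over two default routes:

      interface ATM0/1
       pvc 0/38
        dialer pool-member 2
      !
      interface Dialer1
       ip address negotiated
       encapsulation ppp
       dialer pool 2
       dialer-group 1
       ppp authentication chap callin
       ppp chap hostname second-login@isp
       ppp chap password 7 xxxxx
      !
      ip route 0.0.0.0 0.0.0.0 Dialer1

    Note that true bonding (MLPPP across both sessions) only works if the ISP terminates both logins into one bundle; otherwise this gives two independent paths.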


  • Bluehost: 1 Minute Delays?

    - by feklee
    On Bluehost shared hosting (Apache 2.2 + FastCGI + APC), I have the problem that some requests take almost exactly one minute to respond, yet time spent in PHP is only two seconds. To demonstrate the issue, I created a temporary test page. When I asked Bluehost support about the issue, I got the following reply:

      "the fastcgi process don't stay running they will only stay running for a certan period which would explain the timeouts you are seeing it traffic would spawn new ones. [...]"

    I understand that spawning new FastCGI processes takes some time. But almost exactly one minute? That must be some timeout — but which one? What I want in the end: no request should take longer than five seconds to respond, even if it fails. When I asked Bluehost support to set the Apache TimeOut directive accordingly, they told me: "we do not modify the Apache Config File even on a virtual host level."
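
    Since the server config is off-limits, the delay can at least be pinned down from outside. If time_connect is instant but time_starttransfer sits at ~60 s, the wait is a backend spawn/proxy timeout rather than the network; a diagnostic sketch (the URL is a placeholder):

      curl -o /dev/null -s -w 'connect: %{time_connect}s  first byte: %{time_starttransfer}s  total: %{time_total}s\n' https://example.com/test.php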


  • How to set up a VirtualBox appliance as a portable webdev solution?

    - by tenshimsm
    I just want to set up a VirtualBox virtual appliance to make it portable — meaning a network config that does not need to be changed when I use my laptop on a different network. I want the virtual machine to have internet access so it stays updated, and I always want direct access from the host using, for example, the IP 10.0.2.100, even when I am on a 192.168.0.x network. So the first virtual network adapter would have a static IP (10.0.2.100) and the second would get one from DHCP. I don't know if two virtual adapters are needed to accomplish this, or just one.
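
    One caveat: 10.0.2.x is VirtualBox's NAT network, which is only visible from inside the guest, so the host can't reach 10.0.2.100 directly. The usual portable setup is exactly two adapters: a host-only adapter for a stable host-to-guest address (192.168.56.x by default) plus a NAT adapter for internet access; neither depends on whatever network the laptop is on. A sketch with VBoxManage (the VM name and vboxnet0 are assumptions):

      VBoxManage modifyvm "WebDev" --nic1 hostonly --hostonlyadapter1 vboxnet0
      VBoxManage modifyvm "WebDev" --nic2 nat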


  • open_basedir vs sessions

    - by liquorvicar
    On a virtual hosting server I have open_basedir set to .:/path/to/vhost/web:/tmp:/usr/share/pear for each virtual host. I have a client who's running WordPress, and he's complaining about open_basedir errors like this:

      PHP WARNING: file_exists() [function.file-exists]: open_basedir restriction in effect. File(/var/lib/php/session/sess_42k7jn3vjenj43g3njorrnrmf2) is not within the allowed path(s): (.:/path/to/vhost/web:/tmp:/usr/share/pear)

    So the PHP session save_path isn't included in open_basedir, yet sessions across all sites on the server seem to work fine apart from this intermittent instance. I thought that perhaps the default session handler ignored open_basedir and this warning was caused by WP accessing the session file directly. However, from what I can see, PHP 5.2.4 introduced open_basedir checking of session.save_path (http://www.php.net/ChangeLog-5.php#5.2.4), and I am on PHP 5.2.13. Any ideas?
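
    Whatever the trigger, the two standard ways out are to admit the session directory into open_basedir or to move each vhost's sessions inside its allowed tree. A per-vhost sketch of both, assuming PHP runs as mod_php (paths mirror the ones above):

      php_admin_value open_basedir ".:/path/to/vhost/web:/tmp:/usr/share/pear:/var/lib/php/session"
      # or:
      php_admin_value session.save_path "/path/to/vhost/tmp"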


  • Amavis-new whitelist

    - by pigeon
    Hi to the Linux community. I'm coming from a Windows Server background, so have mercy. I'm attempting to whitelist some domains, and although I know this isn't the best way of doing so, it's a one-off for a couple of domains, so I thought it would be the quickest approach. Current setup: Amavis passes email to ClamAV and SpamAssassin, and I make my changes in /etc/amavis/conf.d/50-user, as that overrides the other settings. I have created a whitelist file that looks like this:

      .domaintowhitelist.com
      .domain2towhitelist.com

    In the 50-user config file I have tried variants like this:

      read_hash(\%whitelist_sender, '/etc/amavis/whitelist');
      read_hash(\%virus_lovers, '/etc/amavis/whitelist');

    And I restart amavis after making those changes. Am I going about this the wrong way? Any help is appreciated.
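
    In amavisd-new 2.x, lookup tables are usually wired up through the *_maps variables rather than by filling the old hashes directly; a sketch for 50-user, assuming the same whitelist path (and the config file must still end by returning true):

      @whitelist_sender_maps = ( read_hash("/etc/amavis/whitelist") );

      1;  # ensure a defined return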


  • OpenLdap TLS authentication setup

    - by CrazycodeMonkey
    I am trying to set up OpenLDAP on Ubuntu 12.04 by following https://help.ubuntu.com/12.04/serverguide/openldap-server.html. When I tried to enable TLS on the server by creating a self-signed certificate as described in the guide, I got an error. The command I ran:

      ldapmodify -Y EXTERNAL -H ldapi:/// -f /etc/ssl/certinfo.ldif

    Content of the LDIF file:

      dn: cn=config
      add: olcTLSCACertificateFile
      olcTLSCACertificateFile: /etc/ssl/certs/cacert.pem
      -
      add: olcTLSCertificateFile
      olcTLSCertificateFile: /etc/ssl/certs/ldap01_slapd_cert.pem
      -
      add: olcTLSCertificateKeyFile
      olcTLSCertificateKeyFile: /etc/ssl/private/ldap01_slapd_key.pem

    Error message:

      ldap_modify: Inappropriate matching (18)
              additional info: modify/add: olcTLSCertificateFile: no equality matching rule

    After hours of searching on Google, I have not found anything that explains this error. Does anyone have more information?
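
    These TLS attributes have no equality matching rule, so slapd can refuse a modify/add even though the guide shows one; the widely used workaround is to use replace, which sidesteps the matching-rule check. A sketch of the same LDIF with that change:

      dn: cn=config
      replace: olcTLSCACertificateFile
      olcTLSCACertificateFile: /etc/ssl/certs/cacert.pem
      -
      replace: olcTLSCertificateFile
      olcTLSCertificateFile: /etc/ssl/certs/ldap01_slapd_cert.pem
      -
      replace: olcTLSCertificateKeyFile
      olcTLSCertificateKeyFile: /etc/ssl/private/ldap01_slapd_key.pem

    It's also worth confirming that the slapd user (openldap on Ubuntu) can actually read /etc/ssl/private/ldap01_slapd_key.pem.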


  • nginx terminates connection after 65k bytes

    - by David Wolever
    I've got nginx configured as a front end to a Python application running under gunicorn, but nginx is terminating connections after about 65k of data has been sent. For example, I've got a view that looks like this:

      def debug_big_file(request):
          return HttpResponse("x" * 500000)

    But when I access that URL through nginx, I only get 65283 bytes:

      $ curl https://example.com/debug/big-file | wc
      …
      curl: (18) transfer closed with outstanding read data remaining
            0       1   65283

    Note that everything works as expected when accessing gunicorn directly:

      $ curl http://localhost:1234/debug/big-file | wc
      …
            0       1  500000

    The relevant nginx config:

      location / {
          proxy_pass http://localhost:1234/;
          proxy_redirect off;
          proxy_headers_hash_bucket_size 96;
      }

    And nginx version 1.7.0. Some other facts:

    - The number of bytes is consistent from request to request, but it varies based on the content (I first noticed it with a large PNG file, which was cut off after 65,372 bytes, not 65,283).
    - 110k bytes are sent correctly (i.e., "x" * 110000 returns all 110,000 bytes), but 120k bytes are not.
    - tcpdump suggests that nginx is sending a RST packet to gunicorn.
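
    That boundary is suspicious in a specific way: responses small enough to fit in nginx's in-memory proxy buffers come through intact, while anything larger spills to proxy_temp_path on disk — and if nginx can't write there, it aborts the upstream connection mid-transfer, which would match the RST. Worth checking (the paths and user are the common defaults, not certainties):

      tail -n 50 /var/log/nginx/error.log
      # if it shows: open() "/var/lib/nginx/proxy/..." failed (13: Permission denied)
      chown -R www-data /var/lib/nginx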


  • How to get nginx to serve up on an elastic IP

    - by geekbri
    I have an EC2 instance serving PHP pages with nginx and php-fpm. This works perfectly when accessed through the instance's public DNS. However, if I try to access the site via the Elastic IP bound to the instance, it serves up a generic "Welcome to nginx" page, even though my server block has listen 80 (which I thought listened on all incoming IPs on port 80). Here is my nginx config:

      server {
          listen 80;
          access_log /var/log/nginx/access.log;
          root "/var/www/clipperz/";
          index index.html index.php;

          # Default location
          location / {
              try_files $uri $uri/ index.html;
          }

          # Parse all .php files in the $document_root directory
          location ~ .php$ {
              include fastcgi_params;
              fastcgi_pass 127.0.0.1:9000;
              fastcgi_index index.php;
              fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
          }
      }
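
    listen 80 does accept on all addresses, but nginx still picks a server block by the Host header, and a request addressed to the bare Elastic IP won't match a block with no server_name — it falls through to the default server, which here is evidently the stock "Welcome to nginx" site. A sketch of the usual fix, making this block the default:

      server {
          listen 80 default_server;
          ...
      }

    Removing the stock default site from the nginx config directory works as well.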


  • Is it possible to change the content database of an existing SharePoint '07 web app?

    - by mcjabberz
    Disclaimer: I'm very much an accidental SharePoint admin, so I greatly appreciate your patience. My SharePoint farm's database server is EOL and about to die. I have managed to move the config database (SharePoint_Config) to the new database server and even got the Central Administration site to use the new server. I have also added the new database server to the farm as a content database server. My problem now is the web app's content database (WSS_Content). I already have a copy of the content database set up and ready to go on the new database server, but for the life of me I can't figure out how to point the SharePoint web app at it. Is this even possible? If so, how would I do it? If not, would I have to create a new web app and then restore (and overwrite) its WSS_Content database with my existing one?
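
    It should be possible without recreating the web app: detach the old content database from the web app and attach the copy on the new server. A sketch with stsadm (the URL and server name are placeholders):

      stsadm -o deletecontentdb -url http://yourwebapp -databasename WSS_Content
      stsadm -o addcontentdb -url http://yourwebapp -databasename WSS_Content -databaseserver NEWDBSERVER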


  • How do I get my Nvidia monitor position settings (in Linux) to persist after a restart?

    - by machineghost
    I have two monitors, and I run them both in Linux using the proprietary Nvidia drivers with "TwinView". I just installed Linux Mint 13, and ever since the install, my monitors come up in the wrong position after every reboot (the computer thinks the left monitor is on the right). After boot-up I can run the Nvidia config tool and fix the monitors' position, and I can even save the configuration file successfully. But as soon as I restart, the monitors appear switched again. Does anyone have any idea what might be causing this (and, more importantly, how I can solve it)?
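
    A guess at what's happening: the GUI saves per-user settings to ~/.nvidia-settings-rc, which nothing reapplies at the next login, while the layout X actually boots with lives in the X configuration, which only root can write. Two things worth trying:

      # write the layout into the X config (use the GUI's "Save to X Configuration File"):
      sudo nvidia-settings

      # or reapply the saved per-user settings at session startup:
      nvidia-settings --load-config-only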


  • disallow anonymous bind in openldap

    - by shashank prasad
    Folks, I have followed the instructions at http://tuxnetworks.blogspot.com/2010/06/howto-ldap-server-on-1004-lucid-lynx.html to set up my OpenLDAP, and it's working just fine — except that an anonymous user can bind to my server and see the whole user/group structure. LDAP is running over SSL. I have read online that I can add "disallow bind_anon" and "require authc" to slapd.conf to disable this, but there is no slapd.conf file to begin with; as I understand it, OpenLDAP has moved to a cn=config setup, so it won't read that file even if I create one. I have looked online without any luck. I believe I need to change something in here, but I am not sure what:

      olcAccess: to attrs=userPassword by dn="cn=admin,dc=tuxnetworks,dc=com" write by anonymous auth by self write by * none
      olcAccess: to attrs=shadowLastChange by self write by * read
      olcAccess: to dn.base="" by * read
      olcAccess: to * by dn="cn=admin,dc=tuxnetworks,dc=com" write by * read

    Any help is appreciated. Thank you! -shashank
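
    The slapd.conf directives have direct cn=config equivalents — olcDisallows and olcRequires on the cn=config entry itself — rather than anything in the olcAccess rules. A sketch (the file name is a placeholder):

      # disable_anon.ldif
      dn: cn=config
      changetype: modify
      add: olcDisallows
      olcDisallows: bind_anon
      -
      add: olcRequires
      olcRequires: authc

    Applied with:

      ldapmodify -Y EXTERNAL -H ldapi:/// -f disable_anon.ldif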


  • VSFTPD and Implicit SSL

    - by Luma
    Hi everyone. I have a Debian dedicated server, and I want to enable implicit SSL on it using VSFTPD, but I am having a hard time. All I can find online is how to enable (explicit) SSL, and the man page lists a single implicit-SSL option. Since implicit SSL uses a second listener (990 by default), I have no idea how to make it work on Debian. Has anyone managed to get this working? Here is my config:

      listen=YES
      anonymous_enable=NO
      local_enable=YES
      write_enable=YES
      local_umask=022
      connect_from_port_20=YES
      pam_service_name=vsftpd
      ssl_enable=YES
      allow_anon_ssl=NO
      force_local_data_ssl=NO
      force_local_logins_ssl=NO
      ssl_tlsv1=YES
      ssl_sslv2=NO
      ssl_sslv3=NO
      rsa_cert_file=/etc/ssl/certs/vsftpd.pem

    If I include Implicit_SSL=YES, the server won't even start. Thanks.
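
    vsftpd option names are case-sensitive and lowercase, which is likely why Implicit_SSL=YES refuses to start. Also, vsftpd doesn't open a second listener by itself: its one listener becomes implicit, so the port has to move too (serving explicit SSL on 21 and implicit on 990 simultaneously would take two vsftpd instances with separate config files). A sketch, assuming vsftpd >= 2.1:

      implicit_ssl=YES
      listen_port=990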


  • How do I remove a repository from yum

    - by sunil
    When I search for a package with yum (CentOS 6), it tries to search in a repo named 'c6-media' and gives a bunch of errors, as follows:

      file:///media/CentOS/repodata/repomd.xml: [Errno 14] Could not open/read file:///media/CentOS/repodata/repomd.xml
      Trying other mirror.
      file:///media/cdrecorder/repodata/repomd.xml: [Errno 14] Could not open/read file:///media/cdrecorder/repodata/repomd.xml
      Trying other mirror.
      file:///media/cdrom/repodata/repomd.xml: [Errno 14] Could not open/read file:///media/cdrom/repodata/repomd.xml
      Trying other mirror.
      Error: Cannot retrieve repository metadata (repomd.xml) for repository: c6-media. Please verify its path and try again

    Obviously, yum is trying to read the CD/DVD that installed the OS, which I no longer have. All I want to do now is delete this repository from yum. I went to the package manager graphical tool and removed it from the sources, but it seems yum and the graphical tool do not use the same config — though that is just my guess.
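
    On CentOS 6 the c6-media repo is defined in its own file under /etc/yum.repos.d/, which the graphical tool doesn't necessarily touch; disabling (or deleting) it there is enough. A sketch:

      sed -i 's/enabled=1/enabled=0/' /etc/yum.repos.d/CentOS-Media.repo
      # or simply:
      rm /etc/yum.repos.d/CentOS-Media.repo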


  • PHP throwing XDebug errors ONLY in command line mode...

    - by Wilhelm Murdoch
    Hey, all! I've been having problems running PHP-based utilities from the command line ever since I enabled XDebug. Everything runs fine when executing a script through a browser, but once I execute a script on the command line, it throws the following errors:

      h:\www\test>@php test.php
      PHP Warning:  PHP Startup: Unable to load dynamic library 'E:\development\xampplite\php\ext\php_curl.dll' - The specified module could not be found in Unknown on line 0
      PHP Warning:  Xdebug MUST be loaded as a Zend extension in Unknown on line 0

      h:\www\test>

    The script runs just fine after this, but it's something I can't seem to wrap my head around. Could it be a path issue in my php.ini? I'm not sure that's the case, considering it throws the same error no matter where I invoke the @php environment variable. Also, all paths in my php.ini are absolute. Not really sure what's going on here. Any ideas? Thanks!
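
    The second warning is specific: it appears when xdebug is pulled in via a plain extension= line instead of the Zend-extension loader. A sketch of the php.ini lines for a thread-safe PHP 5.2 build like XAMPP's (the xdebug filename is an assumption):

      extension_dir = "E:\development\xampplite\php\ext"
      ; remove any "extension=php_xdebug.dll" line, then:
      zend_extension_ts = "E:\development\xampplite\php\ext\php_xdebug.dll"

    The php_curl.dll warning is usually a separate issue: on the CLI, curl's dependent DLLs (libeay32/ssleay32) must be findable via PATH, whereas the Apache process often has them on its own path already.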

