Search Results

Search found 11568 results on 463 pages for 'config spec'.

Page 101/463 | < Previous Page | 97 98 99 100 101 102 103 104 105 106 107 108  | Next Page >

  • cisco 2900xl - SNMP - Get mac address of device connected to an interface

    - by ankit
    Hello all. Basically, what I want to do is find out the MAC address of a device plugged into an interface on the switch (FastEthernet0/1, for example). Reading through the switch documentation, I found that I can configure an SNMP trap to notify me of any new MAC address the switch detects, using the command snmp-server enable traps mac-notification, but for some reason my switch does not support this feature. The only options I see are:

        CORE_SWITCH(config)#snmp-server enable traps ?
          c2900            Enable SNMP c2900 traps
          cluster          Enable Cluster traps
          config           Enable SNMP config traps
          entity           Enable SNMP entity traps
          hsrp             Enable SNMP HSRP traps
          snmp             Enable SNMP traps
          vlan-membership  Enable VLAN Membership traps
          vtp              Enable SNMP VTP traps
          <cr>

    So the other way would be for me to run a cron job on my gateway to poll the switch periodically over SNMP for new MAC addresses. I have looked everywhere but can't seem to find the OID that would provide me this information. Any help I can get would be very much appreciated! Here's the output from "show version" on my switch:

        Cisco Internetwork Operating System Software
        IOS (tm) C2900XL Software (C2900XL-C3H2S-M), Version 12.0(5.4)WC(1), MAINTENANCE INTERIM SOFTWARE
        Copyright (c) 1986-2001 by cisco Systems, Inc.
        Compiled Tue 10-Jul-01 11:52 by devgoyal
        Image text-base: 0x00003000, data-base: 0x00333CD8
        ROM: Bootstrap program is C2900XL boot loader
        CORE_SWITCH uptime is 1 hour, 24 minutes
        System returned to ROM by power-on
        System image file is "flash:c2900XL-c3h2s-mz.120-5.4.WC.1.bin"
        cisco WS-C2912-XL (PowerPC403GA) processor (revision 0x11) with 8192K/1024K bytes of memory.
        Processor board ID FAB0409X1WS, with hardware revision 0x01
        Last reset from power-on
        Processor is running Enterprise Edition Software
        Cluster command switch capable
        Cluster member switch capable
        12 FastEthernet/IEEE 802.3 interface(s)
        32K bytes of flash-simulated non-volatile configuration memory.
        Base ethernet MAC Address: 00:01:42:D0:67:00
        Motherboard assembly number: 73-3397-08
        Power supply part number: 34-0834-01
        Motherboard serial number: FAB040843G4
        Power supply serial number: DAB05030HR8
        Model revision number: A0
        Motherboard revision number: C0
        Model number: WS-C2912-XL-EN
        System serial number: FAB0409X1WS
        Configuration register is 0xF

    Thanks, -ankit
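
    For what it's worth, on Catalyst switches of this era the learned MAC table is usually exposed through the standard BRIDGE-MIB rather than a Cisco-specific OID, with per-VLAN access via "community@vlan" indexing. A minimal polling sketch, assuming SNMP v1/v2c read access with community "public", VLAN 1, and <switch-ip> as a placeholder for the switch address:

        # dot1dTpFdbAddress: learned MAC addresses (BRIDGE-MIB)
        snmpwalk -v2c -c public@1 <switch-ip> 1.3.6.1.2.1.17.4.3.1.1
        # dot1dTpFdbPort: bridge port each MAC was learned on
        snmpwalk -v2c -c public@1 <switch-ip> 1.3.6.1.2.1.17.4.3.1.2
        # dot1dBasePortIfIndex: map bridge port -> ifIndex (then ifName/ifDescr)
        snmpwalk -v2c -c public@1 <switch-ip> 1.3.6.1.2.1.17.1.4.1.2

    A cron job can walk these per VLAN, join port-to-ifIndex, and diff the result against the previous run to spot new MACs.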

    Read the article

  • Apache debugging: where to find error logs?

    - by AP257
    I'm new to Apache and web serving generally, so apologies if this is a very stupid question. I want to configure a new sub-domain on a working site and install a forum there. I'm using a Debian server that already has Apache, mod_wsgi and a bunch of virtual hosts successfully running on it. I first installed my forum app (Django's OSQA). Following the OSQA instructions, I then created an Apache config file that specified ServerName as the new sub-domain. I also created a .wsgi file for the app, and pointed WSGIScriptAlias at it. I then restarted Apache. However, when I go to the new sub-domain, I get a 404 error message. Two questions: (1) Is there a step missing above, or is simply creating a new Apache config file in sites-available enough to 'tell' Apache about a new sub-domain? (2) If there's something else going wrong, how can I debug it? The ErrorLog and CustomLog specified in the config file are both empty. apache2.conf, which I guess is the Apache-wide configuration, specifies ErrorLog /var/log/apache2/error.log, but this is yet another empty file.
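
    For reference, on Debian a vhost file in sites-available is inert until it is symlinked into sites-enabled, which would explain both the 404 and the untouched logs. A quick check sequence, assuming the stock Debian layout and a hypothetical config file named after the sub-domain:

        sudo a2ensite forum.example.com     # hypothetical file name in sites-available
        sudo apache2ctl -S                  # list the vhosts Apache actually loaded
        sudo /etc/init.d/apache2 reload
        tail -f /var/log/apache2/error.log

    If apache2ctl -S does not show the new ServerName, Apache never read the config, and the 404 is coming from whichever vhost caught the request instead.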

    Read the article

  • How to connect 2 routers (Asmax and D-link) RJ11 vs RJ45 issue

    - by piobyz
    I just bought a new router, a D-link DSL-2641B, and want to connect it to another one provided by my ISP, an Asmax AR 804MP. Previously I had a Linksys WRT350N, and there was no problem: I had an Ethernet cable plugged into one of the LAN ports on the Asmax and into the INTERNET (RJ45) port on the Linksys, the connection used the PPPoE protocol, and it worked OK. The D-link has a DSL (RJ11) port (which I don't want to use as an Asmax replacement, since there is a separate Ethernet cable with a TV plugged into the Asmax, which I don't want to configure from scratch on the D-link). How should I connect my new D-link to work with the Asmax? Via the DSL port? Via one of the LAN ports (in which case I probably should change the purpose of this port in the config, I guess)? I tried connecting the D-link both ways: LAN (Asmax) to LAN (D-link), and LAN (Asmax) to DSL (D-link) (using an RJ11-to-RJ45 cable). I hope there is some setting in the D-link's config that I overlooked. I haven't tried to see what's in the Asmax's config, but I guess I don't need to change anything there, since the Linksys worked just fine. The only difference I see is that the D-link has an RJ11 DSL port as WAN, and the Linksys has RJ45 (called INTERNET by them) as its main WAN port.

    Read the article

  • Dynamic nginx domain root path based on hostname?

    - by Xeoncross
    I am trying to set up my development nginx/PHP server with a basic master/catch-all vhost config so that I can create unlimited ___.framework.loc domains as needed.

        server {
            listen 80;
            index index.html index.htm index.php;

            # Test 1
            server_name ~^(.+)\.frameworks\.loc$;
            set $file_path $1;
            root /var/www/frameworks/$file_path/public;
            include /etc/nginx/php.conf;
        }

    However, nginx responds with a 404 error for this setup. I know nginx and PHP are working and have permission, because the localhost config I'm using works fine:

        server {
            listen 80 default;
            server_name localhost;
            root /var/www/localhost;
            index index.html index.htm index.php;
            include /etc/nginx/php.conf;
        }

    What should I be checking to find the problem? Here is a copy of the php.conf they are both loading:

        location / {
            try_files $uri $uri/ /index.php$is_args$args;
        }

        location ~ \.php$ {
            try_files $uri =404;
            include fastcgi_params;
            fastcgi_index index.php;

            # Keep these parameters for compatibility with old PHP scripts using them.
            fastcgi_param PATH_INFO $fastcgi_path_info;
            fastcgi_param PATH_TRANSLATED $document_root$fastcgi_path_info;
            fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;

            # Some default config
            fastcgi_connect_timeout 20;
            fastcgi_send_timeout 180;
            fastcgi_read_timeout 180;
            fastcgi_buffer_size 128k;
            fastcgi_buffers 4 256k;
            fastcgi_busy_buffers_size 256k;
            fastcgi_temp_file_write_size 256k;
            fastcgi_intercept_errors on;
            fastcgi_ignore_client_abort off;
            fastcgi_pass 127.0.0.1:9000;
        }
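
    Two things worth checking here. First, the prose above says "framework.loc" while the regex matches "frameworks.loc"; requesting the singular form alone would produce exactly this 404. Second, numbered captures like $1 are fragile because later regex matches (such as the ~ \.php$ location in php.conf) can overwrite them, so a named capture is the safer pattern. A sketch, assuming the domains really are *.frameworks.loc:

        server {
            listen 80;
            server_name ~^(?<site>.+)\.frameworks\.loc$;
            root /var/www/frameworks/$site/public;
            index index.html index.htm index.php;
            include /etc/nginx/php.conf;
        }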

    Read the article

  • I need help choosing between two configurations of the Dell Studio 14

    - by Adnan
    There are two configurations of the Dell Studio 14 (1458) which I'm looking at:

    Config 1: Core i7-720QM @ 1.6 GHz; ATI Mobility Radeon HD 5450 1GB; 4GB DDR3 RAM @ 1066 MHz; 500 GB SATA HDD @ 7200 RPM; Price: $999

    Config 2: Core i5-430M; ATI Mobility Radeon HD 4530 512MB; 4GB DDR3 RAM @ 1066 MHz; 500 GB SATA HDD @ 7200 RPM; Price: $874

    What I want to know is: would Config 1 still be able to do decent gaming (maybe some StarCraft II), and is there a great performance difference between the i5 and i7 processors? Is the extra $130 worth it for the i7 and better graphics card? I do more than just basic computing: I plan on getting into web design (specifically using Photoshop and Dreamweaver), and I wish to do gaming. I know Config 1 is the better value, but I want to be sure that the $130 more is truly worth it. I don't have too much money and want to spend as wisely as possible, yet I am a computer geek and plan on doing a lot more than the average user.

    Read the article

  • Linking to a chat room via XMPP: URI

    - by Coderer
    I found out how to link directly to a chat room on a Jabber conference server -- it took a bit of digging, and I wound up actually looking at the spec before I was sure I was doing it right. I confirmed here, so I'm pretty sure I've got it. The results, though, are puzzling. If I click a link of the style xmpp:dude@example.com I get a new chat session with user "dude" at example.com, as expected. If I tack on a nonsense query (xmpp:dude@example.com?foobar), it's ignored, which is what the spec says should happen. However, if I use xmpp:dude@example.com?join, as in the link above, nothing happens. I dug a little deeper and found out that on my (Linux) system, xmpp URIs are handled via purple-url-handler, so I dropped to a terminal and ran it manually. The result was that any xmpp URI ran fine except one that includes a ?join query. The ?join query results in a dbus crash, pointing specifically to line 2356 of dbus-message.c -- a little Googling suggests this is probably dbus's less-than-elegant way of telling me that somebody is using dbus incorrectly. Am I crafting my link correctly? Is this an OS or maybe application issue? Does this work on other platforms / browsers / etc.? More importantly, is there any easy way to fix it?

    Read the article

  • How to get my W2003-server (back) into the web (after setting up bridged networking)

    - by MBaas
    I have recently set up VirtualBox on a W2003 server (which is also used as a webserver, accessed from the web). My VBox worked nicely, but then I wanted more: I wanted the VM to appear in the intranet like any ordinary PC. I was advised to set up bridged networking as opposed to NAT. I did so, and in the server's network connections have bridged the LAN connection and the "VirtualBox Host-Only Network" (yes, it says "host-only network", but I assure you that VBox networking is configured to use a network bridge). So now my VM is visible in the intranet and it also has web access; the server can also access the web. The only problem that came up is that the server is no longer accessible from the web. I've traced an HTTP request and it says "Can't connect to *:80 (connect: No route to host)". So maybe something in the router's config needs to be adjusted (yeah, well, the server's IP address changed from 192.168.1.199 to ...198). So I went into the router config, reviewed port forwarding for port 80 and adjusted the IP there, but it still didn't work. Unsure whether it was a router problem or rather something in the server's config, I've set up a "demilitarized zone" in the router and have put the server into it. (My understanding is that this would put the server straight onto the web...) But the result of the HTTP requests is still the same :(

    Read the article

  • Unable to install mod_wsgi on CentOS 5.5 VPS...

    - by jasonaburton
    I am trying to install mod_wsgi on my VPS, but it won't work. This is what I am doing:

        wget http://modwsgi.googlecode.com/files/mod_wsgi-2.5.tar.gz
        tar xzvf mod_wsgi-2.5.tar.gz
        cd mod_wsgi-2.5
        ./configure --with-python=/opt/python2.5/bin/python

    After I run the above command, I get this error:

        checking for apxs2... no
        checking for apxs... no
        checking Apache version... ./configure: line 1298: apxs: command not found
        ./configure: line 1298: apxs: command not found
        ./configure: line 1299: /: is a directory
        ./configure: line 1461: apxs: command not found
        configure: creating ./config.status
        config.status: creating Makefile
        config.status: error: cannot find input file: Makefile.in

    Through some research I've discovered that I need to modify my command:

        ./configure --with-apxs=/usr/local/apache/bin/apxs \
                    --with-python=/usr/local/bin/python

    But /usr/local/apache/ doesn't exist, or so it tells me. If it doesn't exist, how do I create it with all the files needed? Or, if Apache is located elsewhere on my VPS, where would it be? I'd also like to mention that I ran a command to install Apache before this entire deal:

        yum install httpd

    so I assumed that was all I needed, but apparently not (I am very new at all this server administration stuff, so please be gentle). EDIT: This is the tutorial I have been using to get this all set up: http://binarysushi.com/blog/2009/aug/19/CentOS-5-3-python-2-5-virtualevn-mod-wsgi-and-mod-rpaf/ I got stuck at the heading "Installing mod_wsgi". Thanks for any help!
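
    The missing piece is apxs, which on CentOS ships in the httpd-devel package rather than with httpd itself. A sketch, assuming the stock CentOS 5 repositories:

        yum install httpd-devel gcc
        which apxs                      # typically /usr/sbin/apxs on CentOS
        ./configure --with-apxs=/usr/sbin/apxs \
                    --with-python=/opt/python2.5/bin/python
        make && make install

    Note the --with-python path should stay pointed at the custom /opt/python2.5 build from the tutorial, not /usr/local/bin/python.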

    Read the article

  • configure Squid3 proxy server on Ubuntu with caching and logging

    - by Panshul
    I have an Ubuntu 11.10 machine with Squid3 installed. When I configure squid with http_access allow all, everything works fine. My current configuration, mostly default, is as follows:

        2012/09/10 13:19:57| Processing Configuration File: /etc/squid3/squid.conf (depth 0)
        2012/09/10 13:19:57| Processing: acl manager proto cache_object
        2012/09/10 13:19:57| Processing: acl localhost src 127.0.0.1/32 ::1
        2012/09/10 13:19:57| Processing: acl to_localhost dst 127.0.0.0/8 0.0.0.0/32 ::1
        2012/09/10 13:19:57| Processing: acl SSL_ports port 443
        2012/09/10 13:19:57| Processing: acl Safe_ports port 80          # http
        2012/09/10 13:19:57| Processing: acl Safe_ports port 21          # ftp
        2012/09/10 13:19:57| Processing: acl Safe_ports port 443         # https
        2012/09/10 13:19:57| Processing: acl Safe_ports port 70          # gopher
        2012/09/10 13:19:57| Processing: acl Safe_ports port 210         # wais
        2012/09/10 13:19:57| Processing: acl Safe_ports port 1025-65535  # unregistered ports
        2012/09/10 13:19:57| Processing: acl Safe_ports port 280         # http-mgmt
        2012/09/10 13:19:57| Processing: acl Safe_ports port 488         # gss-http
        2012/09/10 13:19:57| Processing: acl Safe_ports port 591         # filemaker
        2012/09/10 13:19:57| Processing: acl Safe_ports port 777         # multiling http
        2012/09/10 13:19:57| Processing: acl CONNECT method CONNECT
        2012/09/10 13:19:57| Processing: http_access allow manager localhost
        2012/09/10 13:19:57| Processing: http_access deny manager
        2012/09/10 13:19:57| Processing: http_access deny !Safe_ports
        2012/09/10 13:19:57| Processing: http_access deny CONNECT !SSL_ports
        2012/09/10 13:19:57| Processing: http_access allow localhost
        2012/09/10 13:19:57| Processing: http_access deny all
        2012/09/10 13:19:57| Processing: http_port 3128
        2012/09/10 13:19:57| Processing: coredump_dir /var/spool/squid3
        2012/09/10 13:19:57| Processing: refresh_pattern ^ftp: 1440 20% 10080
        2012/09/10 13:19:57| Processing: refresh_pattern ^gopher: 1440 0% 1440
        2012/09/10 13:19:57| Processing: refresh_pattern -i (/cgi-bin/|\?) 0 0% 0
        2012/09/10 13:19:57| Processing: refresh_pattern (Release|Packages(.gz)*)$ 0 20% 2880
        2012/09/10 13:19:57| Processing: refresh_pattern . 0 20% 4320
        2012/09/10 13:19:57| Processing: http_access allow all
        2012/09/10 13:19:57| Processing: cache_mem 512 MB
        2012/09/10 13:19:57| Processing: logformat squid3 %ts.%03tu %6tr %>a %Ss/%03>Hs %<st %rm %ru
        2012/09/10 13:19:57| Processing: access_log /home/panshul/squidCache/log/access.log squid3

    The problem starts when I enable the following line:

        access_log /home/panshul/squidCache/log/access.log

    I start to get a "proxy server is refusing connections" error in the browser. On commenting out the above line in my config, things go back to normal. The second problem starts when I add the following line to my config:

        cache_dir ufs /home/panshul/squidCache/cache 100 16 256

    The squid server fails to start. Any suggestions on what I am missing in the config? Please help!
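
    Both symptoms are consistent with Squid, which runs as the "proxy" user on Ubuntu, being unable to write under /home/panshul, and with the cache directory never having been initialized. A sketch of the usual sequence, assuming the paths above (granting ownership to the proxy user is one way to test the theory):

        sudo chown -R proxy:proxy /home/panshul/squidCache
        sudo squid3 -z                   # create the cache_dir swap directories once
        sudo service squid3 restart
        tail /var/log/squid3/cache.log   # startup errors land here, not in access.log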

    Read the article

  • Flushing iptables broke my pipe, how can I save my instance?

    - by Niels
    I was setting up my iptables when I performed an iptables -F and my SSH pipe broke. This is the last output of my session:

        root@alfapaints:~# iptables -L
        Chain INPUT (policy DROP)
        target     prot opt source                       destination
        ACCEPT     all  --  anywhere                     anywhere
        ACCEPT     tcp  --  anywhere                     anywhere      state NEW,ESTABLISHED tcp dpt:2222
        ACCEPT     tcp  --  li465-68.members.linode.com  anywhere      state NEW,ESTABLISHED tcp dpt:nrpe
        ACCEPT     tcp  --  anywhere                     anywhere      tcp dpt:9200 state NEW,ESTABLISHED
        ACCEPT     tcp  --  anywhere                     anywhere      tcp dpt:http state NEW,ESTABLISHED
        ACCEPT     udp  --  anywhere                     anywhere      udp spt:domain

        Chain FORWARD (policy DROP)
        target     prot opt source                       destination

        Chain OUTPUT (policy DROP)
        target     prot opt source                       destination
        ACCEPT     all  --  anywhere                     anywhere
        ACCEPT     tcp  --  anywhere                     anywhere      state ESTABLISHED tcp spt:2222
        ACCEPT     tcp  --  anywhere                     anywhere      state ESTABLISHED tcp spt:nrpe
        ACCEPT     tcp  --  anywhere                     anywhere      tcp spt:9200 state ESTABLISHED
        ACCEPT     tcp  --  anywhere                     anywhere      tcp spt:http state ESTABLISHED
        ACCEPT     udp  --  anywhere                     anywhere      udp dpt:domain
        root@alfapaints:~# iptables -F
        Write failed: Broken pipe

    I tested my connection just before and was able to connect with SSH. Then I did an nmap scan, and not a single port is open anymore. I know my VPS is running on VMware ESXi. Could a reboot help? Or, if not, could I attach and mount the disk to another VM to save the data? Does anybody have some advice? And maybe an explanation of what happened, or what could have caused my pipe to break? P.S. I didn't save my rules in iptables' config directories, but used a file I stored in ~/rules.config to apply my rules like this:

        iptables-restore < rules.config

    So probably a reboot would help? Thanks a lot in advance.
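
    What happened is predictable from the listing above: all three chains have policy DROP, so flushing the ACCEPT rules left the box dropping every packet, including the SSH session that issued the flush. Since the rules were only ever loaded by hand from ~/rules.config and nothing reapplies them (or their DROP policies) at boot, a reboot from the ESXi console should bring the host back with empty chains and default ACCEPT policies. The safe flush pattern for next time, as a sketch:

        iptables -P INPUT ACCEPT     # relax the policies first...
        iptables -P FORWARD ACCEPT
        iptables -P OUTPUT ACCEPT
        iptables -F                  # ...so flushing cannot lock you out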

    Read the article

  • Basic OpenVPN setup

    - by WalterJ89
    I am attempting to connect 2 Windows 7 computers (x64 + x86; there will be 4 in total) using OpenVPN. Right now they are on the same network, but the intention is to be able to access the client remotely regardless of its location. The problem I am having is that I am unable to ping or tracert between the two computers. They seem to be on different subnets, even though I have the mask set to 255.255.255.0. The server ends up as 10.8.0.1 255.255.255.252 and the client 10.8.0.6 255.255.255.252, and a third ends up as 10.8.0.10. I don't know if this is a Windows 7 problem or something I have wrong in my config. It's a very simple setup; I'm not connecting two LANs. This is the server config (I removed all the extra lines because it was too ugly):

        port 1194
        proto udp
        dev tun
        ca keys/ca.crt
        cert keys/server.crt
        key keys/server.key  # This file should be kept secret
        dh keys/dh1024.pem
        server 10.8.0.0 255.255.255.0
        ifconfig-pool-persist ipp.txt
        client-to-client
        duplicate-cn
        keepalive 10 120
        comp-lzo
        persist-key
        persist-tun
        status openvpn-status.log
        verb 6

    This is the client config:

        client
        dev tun
        proto udp
        remote thisdomainis.random.com 1194
        resolv-retry infinite
        nobind
        persist-key
        persist-tun
        ca keys/ca.crt
        cert keys/client.crt
        key keys/client.key
        ns-cert-type server
        comp-lzo
        verb 6

    Is there anything I missed in this? The keys are all correct and the VPNs connect fine; it's just the subnet or route issue. Thank you.
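
    The /30-per-client addressing is not a mistake in itself: it is OpenVPN's default "net30" topology, which hands every client a tiny point-to-point subnet, and clients still reach each other through the server as long as client-to-client is set (it is, above). A sketch of the usual cleanup, assuming OpenVPN 2.1 or later on both ends; in the server config:

        topology subnet              # flat /24 instead of per-client /30s
        server 10.8.0.0 255.255.255.0

    If pings still fail after that, the Windows 7 firewall is the usual suspect: by default it drops ICMP echo requests arriving on an "unidentified network" such as a TAP adapter.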

    Read the article

  • OpenVPN bad source address from client

    - by Bogdan
    I have a problem with OpenVPN. There are a lot of dropped-packet records in the OpenVPN log file on the server:

        Mon Oct 22 10:14:41 2012 us=726541 laptop/???:1194 MULTI: bad source address from client [192.168.1.107], packet dropped

        grep -E "^[a-z]" server.conf
        -----
        port 1194
        proto udp
        dev tun
        ca data/ca.crt
        cert data/server.crt
        key data/server.key
        dh data/dh1024.pem
        tls-server
        tls-auth data/ta.key 0
        remote-cert-tls client
        cipher AES-256-CBC
        tun-mtu 1200
        server 10.10.10.0 255.255.255.0
        ifconfig-pool-persist ipp.txt
        push "redirect-gateway def1 bypass-dhcp"
        push "dhcp-option DNS 8.8.8.8"
        client-to-client
        client-config-dir /etc/openvpn/ccd
        route 10.10.10.0 255.255.255.0
        keepalive 10 120
        comp-lzo
        persist-key
        persist-tun
        max-clients 5
        status /var/log/status-openvpn.log
        log /var/log/openvpn.log
        verb 4
        auth-user-pass-verify /etc/openvpn/verify.sh via-file
        tmp-dir /tmp
        script-security 2
        -----

        cat ccd/laptop
        -----
        iroute 10.10.10.0 255.255.255.0
        -----

        cat client.conf
        -----
        remote server ip 1194
        client
        dev tun
        ping 10
        comp-lzo
        proto udp
        tls-client
        tls-auth data/ta.key 1
        pkcs12 data/vpn.laptop.p12
        remote-cert-tls server
        #ns-cert-type server
        persist-key
        persist-tun
        cipher AES-256-CBC
        verb 3
        pull
        auth-user-pass /home/user/.openvpn/users.db
        -----

    According to "Jan Just Keijser - OpenVPN 2 Cookbook", the root of the problem is incorrect config options (see the screenshot). But, as you see, my config has such options. Could you please help me solve this problem? @week: verb level=6; client log:

        Mon Oct 22 16:06:02 2012 do_ifconfig, tt->ipv6=0, tt->did_ifconfig_ipv6_setup=0
        Mon Oct 22 16:06:02 2012 /sbin/ifconfig tun0 10.10.10.3 pointopoint 10.10.10.5 mtu 1500
        Mon Oct 22 16:06:02 2012 /sbin/route add -net xxxx netmask 255.255.255.255 gw 192.168.1.1
        Mon Oct 22 16:06:02 2012 /sbin/route add -net 0.0.0.0 netmask 128.0.0.0 gw 10.10.10.5
        Mon Oct 22 16:06:02 2012 /sbin/route add -net 128.0.0.0 netmask 128.0.0.0 gw 10.10.10.5
        Mon Oct 22 16:06:02 2012 Initialization Sequence Completed

        cat ccd/latop
        -----
        iroute 10.10.10.0 255.255.255.0
        ifconfig-push 10.10.10.3 10.10.10.5
        -----
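
    Two things stand out in the transcript above, offered as a sketch of where to look rather than a confirmed fix. First, "MULTI: bad source address from client [192.168.1.107]" normally means packets are arriving from a network behind the client (here the client's own LAN, 192.168.1.0/24) that the server has no iroute for; an iroute for the VPN subnet itself (10.10.10.0/24) does not cover that. Assuming the laptop's LAN really is 192.168.1.0/24, the usual pattern is:

        # in ccd/laptop:
        iroute 192.168.1.0 255.255.255.0
        # and in server.conf:
        route 192.168.1.0 255.255.255.0

    Second, one listing reads "cat ccd/laptop" and the other "cat ccd/latop": if the file on disk is really named "latop" while the client's common name is "laptop", the ccd file is never applied at all.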

    Read the article

  • nconf deployment.ini configuration for a basic Nagios server on CentOS 6.2

    - by jshin47
    I have set up nconf and Nagios, but I cannot figure out how to configure deployment.ini to properly deploy the generated configuration to /usr/local/nagios/etc. Here are the directory listings of interest:

        [jshin@nag0 tmp]$ ls
        Default_collector  global
        [jshin@nag0 tmp]$ cd Default_collector/
        [jshin@nag0 Default_collector]$ ls
        advanced_services.cfg  hostgroups.cfg  service_dependencies.cfg  services.cfg
        host_dependencies.cfg  hosts.cfg       servicegroups.cfg
        [jshin@nag0 Default_collector]$ cd ..
        [jshin@nag0 tmp]$ cd global/
        [jshin@nag0 global]$ ls
        checkcommands.cfg  contacts.cfg        misccommands.cfg       timeperiods.cfg
        contactgroups.cfg  host_templates.cfg  service_templates.cfg
        [jshin@nag0 global]$ cd ..
        [jshin@nag0 tmp]$ cd /usr/local/nagios/etc/
        [jshin@nag0 etc]$ ls
        cgi.cfg  htpasswd.users  nagios.cfg  objects  resource.cfg
        [jshin@nag0 etc]$ cd objects/
        [jshin@nag0 objects]$ ls
        commands.cfg  localhost.cfg  switch.cfg     timeperiods.cfg
        contacts.cfg  printer.cfg    templates.cfg  windows.cfg

    Here is my deployment.ini (pretty much the default setting):

        ;; LOCAL deployment ;;
        [extract config]
        type = local
        source_file = "/var/www/html/nconf/output/NagiosConfig.tgz"
        target_file = "/tmp/"
        action = extract

        [copy collector config]
        type = local
        source_file = "/tmp/Default_collector/"
        target_file = "/usr/local/nagios/etc/Default_collector/"
        action = copy

        [copy global config]
        type = local
        source_file = "/tmp/global/"
        target_file = "/usr/local/nagios/etc/global"
        action = copy
        reload_command = "service nagios restart"

    What I am wondering is why the directory structure that the default deployment.ini seems to suggest, with Default_collector and global, is different from the one Nagios has by default, with only a folder called objects. What am I missing? Or, more importantly, how does your deployment.ini look?
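
    The two layouts can coexist: nconf deploys its generated files into its own directories, and Nagios only loads what nagios.cfg points at. A sketch of the usual glue, assuming the paths from the deployment.ini above; in /usr/local/nagios/etc/nagios.cfg:

        cfg_dir=/usr/local/nagios/etc/Default_collector
        cfg_dir=/usr/local/nagios/etc/global
        # the stock cfg_file=.../objects/*.cfg lines can then be removed (or kept,
        # as long as object names don't collide with the nconf-generated ones)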

    Read the article

  • IIS7 - how to place application in a folder inside application web site

    - by Nir
    I have a static web site with a blog (an asp.net application); the blog is in a subdirectory of the web site, so: example.com/, example.com/Something.htm, example.com/folder/somefile.htm, etc. are all static files, while example.com/blog, example.com/blog/categories.aspx, example.com/blog/2011/11/09/post-name.aspx, etc. all go to the blog app. I'm upgrading the static part of the web site to a dynamic site (also an asp.net application), and the blog is incompatible with the new app (the app needs handlers and modules loaded in web.config that don't work with the blog). Also, I have to keep all the old URLs the same, so I can't move the blog to a subdomain or the new app to a folder; and the blog generates links based on its folder, so clever redirection tricks wouldn't work. Is there a way to place an asp.net application in a folder inside another application (either as a real or a virtual folder) so that the root web.config settings don't apply to the application folder? Or some other trick I didn't think of? The system is running IIS7 on Windows Server 2008 64-bit, and I have full control over the server's configuration. I can't modify the blog's source code, but I can edit its web.config and other configuration. I can modify the source of the new application, but I can't make it compatible with the blog (most of its usefulness comes from a 3rd-party library that is not compatible with the blog). The blog is an asp.net 3.5 webforms application; the new root application is an asp.net 4.0 mvc application.
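
    IIS can do exactly this split when the blog folder is marked as its own application: settings in the parent's web.config stop flowing down if they are wrapped in a location element with inheritInChildApplications turned off. A sketch for the root site's web.config:

        <!-- root web.config: keep the new app's handlers/modules out of /blog -->
        <location path="." inheritInChildApplications="false">
          <system.web>
            <!-- the new app's httpModules, httpHandlers, compilation, etc. -->
          </system.web>
        </location>

    The system.webServer section can be wrapped the same way, and the blog keeps its own web.config untouched.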

    Read the article

  • Incorrect deployment of WSGI app to AWS using Elastic Beanstalk

    - by Dzmitry Zhaleznichenka
    cross-link to AWS forums

    I have developed a simple Python web service using WSGI and would like to deploy it to the AWS cloud using Elastic Beanstalk. My problem is that I cannot make all the options I specify in the Elastic Beanstalk configuration be correctly configured in the cloud. For deployment, I use the Elastic Beanstalk CLI utility. I have run the eb init command and set up the required parameters. After this, a directory named .elasticbeanstalk was created in my source tree. It has two config files that are used for deployment, namely config and optionsettings. The latter, among other options, contains the WSGI configuration that has to update /etc/httpd/conf.d/wsgi.conf on the instances. After some of my adjustments the file has the following settings:

        [aws:elasticbeanstalk:application:environment]
        DJANGO_SETTINGS_MODULE =
        PARAM1 =
        PARAM2 =
        PARAM4 =
        PARAM3 =
        PARAM5 =

        [aws:elasticbeanstalk:container:python]
        WSGIPath = handler.py
        NumProcesses = 2
        StaticFiles = /static=
        NumThreads = 10

        [aws:elasticbeanstalk:container:python:staticfiles]
        /static = static/

        [aws:elasticbeanstalk:hostmanager]
        LogPublicationControl = false

        [aws:autoscaling:launchconfiguration]
        InstanceType = t1.micro
        EC2KeyName = zmicier-aws

        [aws:elasticbeanstalk:application]
        Application Healthcheck URL =

        [aws:autoscaling:asg]
        MaxSize = 10
        MinSize = 1
        Custom Availability Zones =

        [aws:elasticbeanstalk:monitoring]
        Automatically Terminate Unhealthy Instances = true

        [aws:elasticbeanstalk:sns:topics]
        Notification Endpoint =
        Notification Protocol = email

    It turns out that not all of these options are considered when I start the environment or update it. Thus, when I update NumThreads or NumProcesses, the respective parameters get changed in wsgi.conf as expected. But whatever I write to the WSGIPath and StaticFiles parameters, I'm not able to automatically change the respective values of wsgi.conf; they remain

        Alias /static /opt/python/current/app/
        WSGIScriptAlias / /opt/python/current/app/application.py

    which drives me nuts. Moreover, when I deploy my application using git aws.push with the following contents of the .ebextensions/python.config file, none of the options I specify in it affect the deployment:

        option_settings:
          - namespace: aws:elasticbeanstalk:container:python
            option_name: WSGIPath
            value: mysite/wsgi.py
          - namespace: aws:elasticbeanstalk:container:python
            option_name: NumProcesses
            value: 5
          - namespace: aws:elasticbeanstalk:container:python
            option_name: NumThreads
            value: 25
          - namespace: aws:elasticbeanstalk:container:python:staticfiles
            option_name: /static/
            value: app/static/

    I wonder what I should do to force AWS to use all the parameters I specify in the configuration, namely the WSGI path and the path to my static data.

    Read the article

  • Extend RAID 1 (HP SmartArray P410i) running Linux

    - by Oliver
    I took over a fairly simple server setup with the following RAID 1 config, running Ubuntu 11.10 (kernel 3.0.0-12-server x86_64):

        => ctrl all show config

        Smart Array P410i in Slot 0 (Embedded)    (sn: removed)

           array A (SAS, Unused Space: 1335535 MB)
              logicaldrive 1 (279.4 GB, RAID 1, OK)
              physicaldrive 1I:1:1 (port 1I:box 1:bay 1, SAS, 1 TB, OK)
              physicaldrive 1I:1:2 (port 1I:box 1:bay 2, SAS, 1 TB, OK)

    Initially there were two 300 GB disks that got replaced by 1 TB disks, and I now have to extend the logical volume to use that extra space. However, when trying to do so I get the following warning:

        => ctrl slot=0 ld 1 modify size=max

        Warning: Extension may not be supported on certain operating systems.
                 Performing extension on these operating systems can cause data to
                 become inaccessible. See ACU documentation for details.
                 Continue? (y/n)

    Is it safe to say yes, or am I at risk of corrupting the file system / losing data? Rearranging and extending the file system afterwards shouldn't be an issue, as I can take the server offline and boot from a gparted live disk. Here's the config of the RAID controller in use:

        => ctrl all show detail

        Smart Array P410i in Slot 0 (Embedded)
           Bus Interface: PCI
           Slot: 0
           Serial Number: removed
           RAID 6 (ADG) Status: Disabled
           Controller Status: OK
           Hardware Revision: Rev C
           Firmware Version: 5.12
           Rebuild Priority: Medium
           Expand Priority: Medium
           Surface Scan Delay: 15 secs
           Surface Scan Mode: Idle
           Wait for Cache Room: Disabled
           Surface Analysis Inconsistency Notification: Disabled
           Post Prompt Timeout: 0 secs
           Cache Board Present: False
           Drive Write Cache: Disabled
           SATA NCQ Supported: True

    And the partition table:

        Number  Start   End     Size    Type      File system     Flags
         1      1049kB  274GB   274GB   primary   ext4            boot
         2      274GB   300GB   25.8GB  extended
         5      274GB   300GB   25.8GB  logical   linux-swap(v1)
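
    For context, the extension only rewrites the logical drive's size in controller metadata; the partitions and the ext4 file system are untouched until they are grown separately, which is what the warning is hedging about. A sketch of the usual follow-up from the gparted live disk mentioned above (device names assumed; verify before running):

        parted /dev/sda print         # confirm the controller now reports ~1 TB
        # move or delete the swap partitions at the end if they block growth,
        # grow partition 1 with gparted, then:
        e2fsck -f /dev/sda1
        resize2fs /dev/sda1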

    Read the article

  • Apache2 - Hosting two sites on the same domain with different ports

    - by user1026361
    I am hosting a staging site (test.mydomain.com) which currently works well on port 80 for two sites (test.mydomain.com and test.FRmydomain.com). I am working on a new backend and I would like to deploy a third site on this server for testing. My hope is that it will live at test.mydomain.com:4204. I've got some experience with Apache and quickly added the statements:

        Listen 4204
        NameVirtualHost *:4204

    and created a new config for my site. What I imagine are the relevant parts of my config:

        <VirtualHost *:4204>
            ServerAdmin [email protected]
            ServerName test.mydomain.com:4204

    However, the site is not publicly available, by name or IP. If I curl localhost:4204 from the server, I get the expected page content. At this point I'm a bit at a loss on how to go forward. It seems like my config is correct but not available to be served. Am I better off defining a proxy definition so that, for instance, test.mydomain.com/4204 proxies to my localhost server, or is there a way to make the site available via the internet? EDIT: After further Googling, I have added an iptables rule with the command:

        iptables -I INPUT -p tcp --dport 4204 -j ACCEPT

    I can see Apache listening on 4204 and the rule is definitely in place, but I still can't reach the site.
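
    Since curl works locally and Apache is listening, the remaining suspects are mostly outside the host. A sketch of checks, assuming a Linux server with root access:

        netstat -tlnp | grep 4204              # bound to 0.0.0.0:4204, not 127.0.0.1?
        iptables -L INPUT -n --line-numbers    # is the 4204 ACCEPT above any REJECT/DROP?
        tcpdump -ni eth0 tcp port 4204         # do outside SYNs even arrive?

    If tcpdump shows nothing arriving, the block is upstream (a hosting-provider firewall filtering non-standard ports is common), and the proxy-on-port-80 fallback mentioned above becomes the pragmatic option.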

    Read the article

  • What does this mean: "SATP VMW_SATP_LOCAL does not support device configuration"?

    - by Jason Tan
    Can anyone tell me what this means in ESXi 5.1?

        SATP VMW_SATP_LOCAL does not support device configuration

    I've googled it and I get a lot of results, but as yet all the pages that contain the string are discussing other matters. The storage array is an HDS HUS-VM and the hosts are HP BL460c G8 blades with FlexFabric and FlexFabric VCs, which I am in the process of commissioning and would like to get started on the right foot, i.e. error- and warning-free!

        naa.600508b1001c56ee3d70da65f071da23
           Device Display Name: HP Serial Attached SCSI Disk (naa.600508b1001c56ee3d70da65f071da23)
           Storage Array Type: VMW_SATP_LOCAL
           Storage Array Type Device Config: SATP VMW_SATP_LOCAL does not support device configuration.
           Path Selection Policy: VMW_PSP_FIXED
           Path Selection Policy Device Config: {preferred=vmhba0:C0:T0:L1;current=vmhba0:C0:T0:L1}
           Path Selection Policy Device Custom Config:
           Working Paths: vmhba0:C0:T0:L1
           Is Local SAS Device: true
           Is Boot USB Device: false

    This is the same LUN:

        ~ # esxcli storage core device list -d naa.60060e80132757005020275700000016
        naa.60060e80132757005020275700000016
           Display Name: HITACHI Fibre Channel Disk (naa.60060e80132757005020275700000016)
           Has Settable Display Name: true
           Size: 204800
           Device Type: Direct-Access
           Multipath Plugin: NMP
           Devfs Path: /vmfs/devices/disks/naa.60060e80132757005020275700000016
           Vendor: HITACHI
           Model: OPEN-V
           Revision: 5001
           SCSI Level: 2
           Is Pseudo: false
           Status: degraded
           Is RDM Capable: true
           Is Local: false
           Is Removable: false
           Is SSD: false
           Is Offline: false
           Is Perennially Reserved: false
           Queue Full Sample Size: 0
           Queue Full Threshold: 0
           Thin Provisioning Status: unknown
           Attached Filters: VAAI_FILTER
           VAAI Status: supported
           Other UIDs: vml.020001000060060e801327570050202757000000164f50454e2d56
           Is Local SAS Device: false
           Is Boot USB Device: false
        ~ #

    Read the article

  • Debian: Should I add vlan interface into bridge for KVM?

    - by javano
    I am setting up a Debian Squeeze box as a KVM host. I want to add multiple interfaces to each KVM guest, so I want them to be on different VLANs. After reading about this, I believe the best method is to add multiple logical VLAN (sub-)interfaces to the physical NICs, then create a bridge adapter for each VLAN interface, and assign each bridge as a NIC for the KVM guests. Does this make good sense, or madness? Do I have to use bridged interfaces with KVM like this? Can't I just add eth1.xx and eth1.yy to my interfaces config below and then configure those directly as bridged KVM guest NICs? If so, how should this look in the interfaces config file below?

        user@host:~$ cat /etc/network/interfaces
        # This file describes the network interfaces available on your system
        # and how to activate them. For more information, see interfaces(5).

        # The loopback network interface
        auto lo
        iface lo inet loopback

        # Management Interface
        auto eth0
        iface eth0 inet static
            address 172.22.0.31
            netmask 255.255.255.0
            gateway 172.22.0.1

        # Interface for guest VMs
        auto eth1

        # Guest1 : Use VLAN 117
        auto eth1.117
        iface eth1.117 inet manual

        # Set up br1 for guest 1, bridging with vlan 117
        auto br1.117
        iface br1.117 inet manual
            bridge_ports eth1.117
            bridge_stp off

        user@host:~$ uname -a
        Linux hostname 3.4.9 #1 SMP Wed Aug 22 19:08:46 BST 2012 x86_64 GNU/Linux

    UPDATE: I would really like it if someone could clarify the config for me, as I have also seen the above configured with this syntax, and I don't see why one would be preferred over the other:

        # Interface for guest VMs
        auto eth1
        allow-hotplug eth1
        iface eth1 inet static

        # Vlan 117 for guest 1
        auto vlan117
        iface vlan117 inet static
            vlan_raw_device eth1

        # Guest 1 : NIC 1
        auto br1.117
        iface br1.117 inet manual
            bridge_ports vlan117
            bridge_stp off
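
    The approach is sound, and yes, the guests do need a bridge (or macvtap) to attach to; a bare eth1.117 cannot be shared by several guests at once. One detail worth changing, offered as a sketch: a bridge named "br1.117" can itself be parsed as "VLAN 117 on br1" by some tooling, so a dot-free name is the safer convention. Assuming the vlan and bridge-utils packages are installed:

        auto eth1.117
        iface eth1.117 inet manual

        auto br117
        iface br117 inet manual
            bridge_ports eth1.117
            bridge_stp off
            bridge_fd 0

    KVM guest NICs then name br117 as their bridge, and the pattern repeats per VLAN (eth1.118 + br118, and so on). The eth1.117 and vlan117/vlan_raw_device forms are equivalent ways of creating the same tagged subinterface; the dotted form needs no extra stanza.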

    Read the article

  • Update Grub on Squeeze - Kernel downgrade due VMware Server

    - by vodoo_boot
    Hi! I keep running into various problems regarding grub and kernels. I don't really care about the kernel internals; all I want is VMware Server on that dedicated root server.

    1.) What is a bzImage vs. vmlinuz?

        kaze:~# ls /boot/
        System.map-2.6.32-5-amd64  bzImage-2.6.33.2       config-2.6.33.2  initrd.img-2.6.32-5-amd64
        System.map-2.6.33.2        bzImage-2.6.35.6       config-2.6.35.6  vmlinuz-2.6.32-5-amd64
        System.map-2.6.35.6        config-2.6.32-5-amd64  grub

    I updated my menu.lst (grub2):

        timeout 5
        default 0
        fallback 1

        title 2.6.32.5
        kernel (hd0,1)/boot/vmlinuz-2.6.32-5-amd64 root=/dev/sda2 panic=60 noapic acpi=off

        title 2.6.35.6
        kernel (hd0,1)/boot//bzImage-2.6.35.6 root=/dev/sda2 panic=60 noapic acpi=off

        title 2.6.32.3
        kernel (hd0,1)/boot//bzImage-2.6.33.2 root=/dev/sda2 panic=60 noapic acpi=off

    That doesn't do well... I think the vmlinuz entry is missing its initrd or so. Dunno. In fact, I don't care too much about kernel boot voodoo as long as it works. update-grub(2) does not work. Does anybody know what magical trick there is to get 2.6.32-5 booting?

    2.) I tried to follow the Debian wiki, but I cannot get header files for the installed 35.6 or 33.2 kernels from the repositories. I cannot build foreign headers because they will not match the running kernel. So how does one deal with that situation? I'd prefer not to have to downgrade the kernel. Thanks for any answers!
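
    On the first question: bzImage and vmlinuz are both compressed kernel images; bzImage is simply the raw name the kernel build produces, while vmlinuz-* is how installed kernels are conventionally named. What the Debian-packaged 2.6.32-5 entry above is missing is its initrd line, without which the stock Debian kernel cannot mount the root file system. A sketch of the stanza (legacy-GRUB syntax, matching the menu.lst above, despite the "grub2" label), using the initrd.img-2.6.32-5-amd64 already present in /boot:

        title 2.6.32-5 (Debian)
        kernel (hd0,1)/boot/vmlinuz-2.6.32-5-amd64 root=/dev/sda2 panic=60 noapic acpi=off
        initrd (hd0,1)/boot/initrd.img-2.6.32-5-amd64

    (For the self-built bzImage kernels, headers live in their own source trees; linux-headers packages exist only for Debian-packaged kernels, which is why the repositories offer nothing for 2.6.33.2 or 2.6.35.6.)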

    Read the article

  • Conditionally permitting HTTP-only requests to Tomcat?

    - by Mike
    I have 2 versions of a system: (1) a Tomcat webserver, and (2) an nginx reverse proxy sitting in front of a Tomcat webserver. In version 2, nginx only ever talks to Tomcat over HTTP. A user could configure the system so that only HTTPS requests are allowed. If the user does this in version 1, the XML configuration files for Tomcat take care of it. In version 2, nginx takes care of it. The problem is this: I cannot force a user to update their Tomcat XML config files when they upgrade from version 1 to version 2 (it will be recommended that they do so), because this is done as part of a larger process. This means that if they upgrade and don't update the Tomcat config, an HTTPS request will arrive at nginx, which will proxy it over HTTP to Tomcat, which will reject the request because it is not HTTPS. So I can't force an update to the Tomcat XML, and I have to use HTTP between nginx and Tomcat. Any ideas? Is there some way I can affect how Tomcat reads its config in version 2 so that it ignores the HTTPS-only section?
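
    One pattern worth considering, sketched under the assumption that the Tomcat in question is 6.0.24 or newer: Tomcat's RemoteIpValve can be told to trust a protocol header from the proxy, so a request that reached nginx over HTTPS is treated as secure by Tomcat even though the hop to Tomcat is plain HTTP. nginx sets the header:

        proxy_set_header X-Forwarded-Proto $scheme;

    and server.xml gains:

        <Valve className="org.apache.catalina.valves.RemoteIpValve"
               protocolHeader="X-Forwarded-Proto" />

    This still touches the Tomcat config once, but the resulting server.xml behaves correctly in both versions (with no proxy the header is simply absent), which sidesteps the "can't force an update" problem rather than solving conditional config loading, which Tomcat does not really support.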

    Read the article

  • Setting up Windows network on Xen

    - by samyboy
    I'm trying to install a Windows XP server in a Xen environment. The OS is booting fine. Unfortunately, I can't figure out how to set up the network settings. Dom0 is a Debian Lenny box currently hosting around 10 Linux virtual servers. Windows tells me I have a "limited connection": it can't get any DHCP response, nor access other hosts in the network. Here is the Xen client config file:

        kernel = '/usr/lib/xen-3.2-1/boot/hvmloader'
        builder = 'hvm'
        memory = '1024'
        device_model='/usr/lib/xen-3.2-1/bin/qemu-dm'
        acpi=1
        apic=1
        pae=1
        vcpus=1
        name = 'winexchange'

        # Disks
        disk = [ 'phy:/dev/wnghosts/exchange-disk,ioemu:hda,w', 'file:/mnt/freespace/ISO/DVD1_Installation.iso,ioemu:hdc:cdrom,r' ]

        # Networking
        vif = [ 'mac=00:16:3E:0A:D0:1B, type=ioemu, bridge=xenbr0']

        # Video
        stdvga=0
        serial='pty'
        ne2000=0

        # Behaviour
        boot='c'
        sdl=0

        # VNC
        vfb = [ 'type=vnc' ]
        vnc=1
        vncdisplay=1
        vncunused=1
        usbdevice='tablet'

    Server config (/etc/xen/xend-config.sxp):

        (network-script network-bridge)
        (network-script network-dummy)
        (vif-script vif-bridge)
        (dom0-min-mem 512)
        (dom0-cpus 0)
        (vnc-listen '0.0.0.0')

    Since I use Debian, I had to create a link like this:

        /etc/xen/qemu-ifup -> /etc/xen/scripts/qemu-ifup

    What did I do wrong? Please tell me if you want some more info (logs, etc).
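
    One suspect stands out in the xend-config.sxp above: there are two network-script lines, and only one of them takes effect. If the network-dummy line wins, the xenbr0 bridge that the vif line expects is never created, which would look exactly like this (link up, no DHCP). A sketch of the fix, assuming bridged networking is the goal:

        (network-script network-bridge)
        (vif-script vif-bridge)
        # (network-script network-dummy)   <- comment this out, then restart xend

    After restarting xend, "brctl show" on dom0 should list xenbr0 with the guest's vif attached; if the bridge is there and DHCP still fails, the Windows side (the emulated Realtek NIC vs. GPLPV drivers) is the next place to look.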

    Read the article

  • Forward real IP through Haproxy => Nginx => Unicorn

    - by Hendrik
    How do I forward the real visitor's IP address to Unicorn? The current setup is: HAProxy => nginx => Unicorn. How can I forward the real IP address from HAProxy, to nginx, to Unicorn? Currently it is always only 127.0.0.1. I also read that the X- headers are going to be deprecated (http://tools.ietf.org/html/rfc6648) -- how will this impact us?

    HAProxy config:

        # haproxy config
        defaults
            log global
            mode http
            option httplog
            option dontlognull
            option httpclose
            retries 3
            option redispatch
            maxconn 2000
            contimeout 5000
            clitimeout 50000
            srvtimeout 50000

        # Rails Backend
        backend deployer-production
            reqrep ^([^\ ]*)\ /api/(.*) \1\ /\2
            balance roundrobin
            server deployer-production localhost:9000 check

    Nginx config:

        upstream unicorn-production {
            server unix:/tmp/unicorn.ordify-backend-production.sock fail_timeout=0;
        }

        server {
            listen 9000 default;
            server_name manager.ordify.localhost;
            root /home/deployer/apps/ordify-backend-production/current/public;
            access_log /var/log/nginx/ordify-backend-production_access.log;
            rewrite_log on;

            try_files $uri/index.html $uri @unicorn;

            location @unicorn {
                proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
                proxy_set_header Host $http_host;
                proxy_set_header X-Real-IP $remote_addr;
                proxy_redirect off;
                proxy_pass http://unicorn-production;
                proxy_connect_timeout 90;
                proxy_send_timeout 90;
                proxy_read_timeout 90;
            }

            error_page 500 502 503 504 /500.html;
            client_max_body_size 4G;
            keepalive_timeout 10;
        }
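
    A sketch of the usual fix: the client IP has to be injected at the first hop, because only HAProxy ever sees it. In the HAProxy config (defaults section or the frontend):

        option forwardfor            # adds X-Forwarded-For with the real client IP

    nginx already appends to that header via $proxy_add_x_forwarded_for, but X-Real-IP is set to $remote_addr, which at nginx is HAProxy's own address. Assuming HAProxy connects from 127.0.0.1 and nginx was built with the realip module (the default in most distro packages), nginx can swap $remote_addr for the forwarded value:

        set_real_ip_from 127.0.0.1;
        real_ip_header X-Forwarded-For;

    With that in place, both X-Real-IP and the Rack-level REMOTE_ADDR seen by Unicorn carry the visitor's address. As for RFC 6648: it deprecates minting *new* X- prefixed headers; established ones like X-Forwarded-For remain in wide use, and the standardized alternative is the Forwarded header (RFC 7239).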

    Read the article
