Search Results

Search found 7757 results on 311 pages for 'dynamic assemblies'.


  • Asterisk Outgoing CDR Logging To MySQL

    - by user3295551
    I'm trying to use CDR logging (to MySQL) with custom fields. The problem occurs only when an outbound call is placed; during inbound calls I can log the custom field with no problem. The complication is that the custom CDR field I need is a unique value for each user on the system.

    sip.conf:

        ...
        [sales_department](!)
        type=friend
        host=dynamic
        context=SalesAgents
        disallow=all
        allow=ulaw
        allow=alaw
        qualify=yes
        qualifyfreq=30

        ;; company sales agents:
        [11](sales_agent)
        secret=xxxxxx
        callerid="<...>"

        [12](sales_agent)
        secret=xxxxxx
        callerid="<...>"

        [13](sales_agent)
        secret=xxxxxx
        callerid="<...>"

        [14](sales_agent)
        secret=xxxxxx
        callerid="<...>"

    extensions.conf:

        [SalesAgents]
        include => Services

        ; Outbound calls
        exten => _1NXXNXXXXXX,1,Dial(SIP/${EXTEN}@myprovider)

        ; Inbound calls
        exten => 100,1,NoOp()
         same => n,Set(CDR(agent_id)=11)
         same => n,CELGenUserEvent(Custom Event)
         same => n,Dial(${11_1},25)
         same => n,GotoIf($["${DIALSTATUS}" = "BUSY"]?busy:unavail)
         same => n(unavail),VoiceMail(11@asterisk)
         same => n,Hangup()
         same => n(busy),VoiceMail(11@asterisk)
         same => n,Hangup()

        exten => 101,1,NoOp()
         same => n,Set(CDR(agent_id)=12)
         same => n,CELGenUserEvent(Custom Event)
         same => n,Dial(${12_1},25)
         same => n,GotoIf($["${DIALSTATUS}" = "BUSY"]?busy:unavail)
         same => n(unavail),VoiceMail(12@asterisk)
         same => n,Hangup()
         same => n(busy),VoiceMail(12@asterisk)
         same => n,Hangup()
        ...

    In the inbound section of the dialplan above I am able to insert the custom CDR field (agent_id). But in the outbound section I am stumped on how to tell the dialplan which agent_id is making the outbound call. My question: how do I take agent_id=[11], agent_id=[12], agent_id=[13], agent_id=[14] and so on, and use them as a custom CDR field on outbound calls?
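    One possible direction (a hedged sketch, not taken from the original thread): on an outbound call, chan_sip exposes the originating peer's name through the CHANNEL(peername) function, so if the peer names ([11], [12], ...) are themselves the agent IDs, the outbound pattern could set the field directly from the channel:

        ; Sketch: derive agent_id from the calling SIP peer on outbound calls.
        ; Assumes the peer name (11, 12, ...) is itself the agent_id.
        exten => _1NXXNXXXXXX,1,Set(CDR(agent_id)=${CHANNEL(peername)})
         same => n,Dial(SIP/${EXTEN}@myprovider)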


  • Tracking down rogue disk usage

    - by Amadan
    I found several other questions regarding the theory behind my problem (e.g. this, this), but I don't know how to apply the answers to my machine.

        # du -hsx /
        11000283    /
        # df -kT /
        Filesystem               Type 1K-blocks      Used  Available Use% Mounted on
        /dev/mapper/csisv13-root ext4 516032952 361387456  128432532  74% /

    There is a big difference between 11G (du) and 345G (df). Where are the remaining 334G? It's not in deleted files. There was only one, it was short, and I truncated it just in case. This is what remains:

        # lsof -a +L1 /
        COMMAND    PID   USER FD TYPE DEVICE SIZE/OFF NLINK     NODE NAME
        zabbix_ag 4902 zabbix 1w  REG  252,0        0     0 28836028 /var/log/zabbix-agent/zabbix_agentd.log.1 (deleted)
        zabbix_ag 4902 zabbix 2w  REG  252,0        0     0 28836028 /var/log/zabbix-agent/zabbix_agentd.log.1 (deleted)
        zabbix_ag 4906 zabbix 1w  REG  252,0        0     0 28836028 /var/log/zabbix-agent/zabbix_agentd.log.1 (deleted)
        zabbix_ag 4906 zabbix 2w  REG  252,0        0     0 28836028 /var/log/zabbix-agent/zabbix_agentd.log.1 (deleted)
        zabbix_ag 4907 zabbix 1w  REG  252,0        0     0 28836028 /var/log/zabbix-agent/zabbix_agentd.log.1 (deleted)
        zabbix_ag 4907 zabbix 2w  REG  252,0        0     0 28836028 /var/log/zabbix-agent/zabbix_agentd.log.1 (deleted)
        zabbix_ag 4908 zabbix 1w  REG  252,0        0     0 28836028 /var/log/zabbix-agent/zabbix_agentd.log.1 (deleted)
        zabbix_ag 4908 zabbix 2w  REG  252,0        0     0 28836028 /var/log/zabbix-agent/zabbix_agentd.log.1 (deleted)
        zabbix_ag 4909 zabbix 1w  REG  252,0        0     0 28836028 /var/log/zabbix-agent/zabbix_agentd.log.1 (deleted)
        zabbix_ag 4909 zabbix 2w  REG  252,0        0     0 28836028 /var/log/zabbix-agent/zabbix_agentd.log.1 (deleted)
        zabbix_ag 4910 zabbix 1w  REG  252,0        0     0 28836028 /var/log/zabbix-agent/zabbix_agentd.log.1 (deleted)
        zabbix_ag 4910 zabbix 2w  REG  252,0        0     0 28836028 /var/log/zabbix-agent/zabbix_agentd.log.1 (deleted)

    I rebooted to see if fsck does anything. But, from /var/log/boot.log, it seems there are no issues:

        /dev/mapper/server-root: clean, 3936097/32768000 files, 125368568/131064832 blocks

    Thinking maybe someone overzealously reserved root space, I checked the master record:

        # tune2fs -l /dev/mapper/server-root
        tune2fs 1.42 (29-Nov-2011)
        Filesystem volume name:   <none>
        Last mounted on:          /
        Filesystem UUID:          86430ade-cea7-46ce-979c-41769a41ecbe
        Filesystem magic number:  0xEF53
        Filesystem revision #:    1 (dynamic)
        Filesystem features:      has_journal ext_attr resize_inode dir_index filetype needs_recovery extent flex_bg sparse_super large_file huge_file uninit_bg dir_nlink extra_isize
        Filesystem flags:         signed_directory_hash
        Default mount options:    user_xattr acl
        Filesystem state:         clean
        Errors behavior:          Continue
        Filesystem OS type:       Linux
        Inode count:              32768000
        Block count:              131064832
        Reserved block count:     6553241
        Free blocks:              5696264
        Free inodes:              28831903
        First block:              0
        Block size:               4096
        Fragment size:            4096
        Reserved GDT blocks:      992
        Blocks per group:         32768
        Fragments per group:      32768
        Inodes per group:         8192
        Inode blocks per group:   512
        Flex block group size:    16
        Filesystem created:       Fri Feb  1 13:44:04 2013
        Last mount time:          Tue Aug 19 16:56:13 2014
        Last write time:          Fri Feb  1 13:51:28 2013
        Mount count:              9
        Maximum mount count:      -1
        Last checked:             Fri Feb  1 13:44:04 2013
        Check interval:           0 (<none>)
        Lifetime writes:          1215 GB
        Reserved blocks uid:      0 (user root)
        Reserved blocks gid:      0 (group root)
        First inode:              11
        Inode size:               256
        Required extra isize:     28
        Desired extra isize:      28
        Journal inode:            8
        First orphan inode:       28836028
        Default directory hash:   half_md4
        Directory Hash Seed:      bca55ff5-f530-48d1-8347-25c004f66d43
        Journal backup:           inode blocks

    The system is:

        # uname -a
        Linux server 3.2.0-67-generic #101-Ubuntu SMP Tue Jul 15 17:46:11 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux
        # cat /etc/lsb-release
        DISTRIB_ID=Ubuntu
        DISTRIB_RELEASE=12.04
        DISTRIB_CODENAME=precise
        DISTRIB_DESCRIPTION="Ubuntu 12.04.2 LTS"

    Does anyone have any tips on what exactly to do to find and hopefully reclaim the missing space?
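    One frequent cause of a gap this size is data hidden underneath an active mount point, which df counts but du cannot see. A quick check (a sketch; /mnt/rootfs is an arbitrary path):

        # Bind-mount the root filesystem so files shadowed by other mounts become visible
        mkdir -p /mnt/rootfs
        mount --bind / /mnt/rootfs
        du -hsx /mnt/rootfs
        umount /mnt/rootfs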


  • nginx, php-cgi and "No input file specified."

    - by Stephen Belanger
    I'm trying to get nginx to play nice with php-cgi, but it's not quite working how I'd like. I'm using some set variables to allow for dynamic host names, basically anything.local. I know that stuff is working because I can access static files properly; however, PHP files don't work. I get the standard "No input file specified." error, which normally occurs when the file doesn't exist, but it definitely does exist, and the path is correct because I can access static files in the same path. It could possibly be a permissions thing, but I'm not sure how that could be an issue. I'm running this on Windows under my own user account, so I think it should have permission, unless php-cgi is running under a different user without me telling it to. Here's my config:

        worker_processes  1;

        events {
            worker_connections  1024;
        }

        http {
            include       mime.types;
            default_type  application/octet-stream;
            sendfile      on;
            keepalive_timeout  65;
            gzip  on;

            server {
                # Listen for HTTP
                listen 80;

                # Match to local host names.
                server_name *.local;

                # We need to store a "cleaned" host.
                set $no_www $host;
                set $no_local $host;

                # Strip out www.
                if ($host ~* www\.(.*)) {
                    set $no_www $1;
                    rewrite ^(.*)$ $scheme://$no_www$1 permanent;
                }

                # Strip local for directory names.
                if ($no_www ~* (.*)\.local) {
                    set $no_local $1;
                }

                # Define default path handler.
                location / {
                    root ../Users/Stephen/Documents/Work/$no_local.com/hosts/main/docs;
                    index index.php index.html index.htm;

                    # Route non-existent paths through Kohana system router.
                    try_files $uri $uri/ /index.php?kohana_uri=$request_uri;
                }

                # Pass PHP scripts to FastCGI server listening on 127.0.0.1:9000
                location ~ \.php$ {
                    root ../Users/Stephen/Documents/Work/$no_local.com/hosts/main/docs;
                    fastcgi_pass 127.0.0.1:9000;
                    fastcgi_index index.php;
                    include fastcgi.conf;
                }

                # Prevent access to system files.
                location ~ /\. { return 404; }
                location ~* ^/(modules|application|system) { return 404; }
            }
        }
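    One hedged thing to check: "No input file specified." from php-cgi usually means the SCRIPT_FILENAME it receives does not resolve to a real file. The relative root (../Users/...) is resolved against nginx's own prefix, which php-cgi does not share, so an absolute root plus an explicit SCRIPT_FILENAME is a common fix (the Windows path below is an assumption):

        location ~ \.php$ {
            # Absolute docroot assumed -- adjust to the real location
            root C:/Users/Stephen/Documents/Work/$no_local.com/hosts/main/docs;
            fastcgi_pass 127.0.0.1:9000;
            fastcgi_index index.php;
            include fastcgi.conf;
            # Set last so it wins over anything fastcgi.conf defines
            fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
        }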


  • Need help making site available externally

    - by White Island
    I'm trying to open a hole in the firewall (ASA 5505, v8.2) to allow external access to a Web application. Via ASDM (6.3?), I've added the server as a Public Server, which creates a static NAT entry [I'm using the public IP that is assigned to 'dynamic NAT--outgoing' for the LAN, after confirming on the Cisco forums that it wouldn't bring everyone's access crashing down] and an incoming rule "any... public_ip... https... allow", but traffic is still not getting through. When I look at the log viewer, it says it's denied by access-group outside_access_in, implicit rule, which is "any any ip deny". I haven't had much experience with Cisco management. I can't see what I'm missing to allow this connection through, and I'm wondering if there's anything else special I have to add. I tried adding a rule (several variations) within that access-group to allow https to the server, but it never made a difference. Maybe I haven't found the right combination? :P I also made sure the Windows firewall is open on port 443, although I'm pretty sure the current problem is Cisco, because of the logs. :) Any ideas? If you need more information, please let me know. Thanks.

    Edit: First of all, I had this backward. (Sorry.) Traffic is being blocked by access-group "inside_access_out", which is what confused me in the first place. I guess I confused myself again in the midst of typing the question. Here, I believe, is the pertinent information. Please let me know what you see wrong.

        access-list acl_in extended permit tcp any host PUBLIC_IP eq https
        access-list acl_in extended permit icmp CS_WAN_IPs 255.255.255.240 any
        access-list acl_in remark Allow Vendor connections to LAN
        access-list acl_in extended permit tcp host Vendor any object-group RemoteDesktop
        access-list acl_in remark NetworkScanner scan-to-email incoming (from smtp.mail.microsoftonline.com to PCs)
        access-list acl_in extended permit object-group TCPUDP any object-group Scan-to-email host NetworkScanner object-group Scan-to-email
        access-list acl_out extended permit icmp any any
        access-list acl_out extended permit tcp any any
        access-list acl_out extended permit udp any any
        access-list SSLVPNSplitTunnel standard permit LAN_Subnet 255.255.255.0
        access-list nonat extended permit ip VPN_Subnet 255.255.255.0 LAN_Subnet 255.255.255.0
        access-list nonat extended permit ip LAN_Subnet 255.255.255.0 VPN_Subnet 255.255.255.0
        access-list inside_access_out remark NetworkScanner Scan-to-email outgoing (from scanner to Internet)
        access-list inside_access_out extended permit object-group TCPUDP host NetworkScanner object-group Scan-to-email any object-group Scan-to-email
        access-list inside_access_out extended permit tcp any interface outside eq https
        static (inside,outside) PUBLIC_IP LOCAL_IP[server object] netmask 255.255.255.255

    I wasn't sure if I needed to reverse that "static" entry, since I got my question mixed up... and also, with that last access-list entry, I tried interface inside and outside; neither proved successful. I also wasn't sure whether it should be www, since the site is running on https. I assumed it should only be https.
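    For reference, a hedged sketch of the pieces that usually have to line up on an 8.2 ASA for this flow -- assuming inside_access_out is applied "in interface inside" and LOCAL_IP is the server's private address (both assumptions):

        ! Let the web server itself talk HTTPS (names as in the question)
        access-list inside_access_out extended permit tcp host LOCAL_IP eq https any
        ! Inbound rule keyed to the *public* IP, as pre-8.3 NAT expects
        access-list acl_in extended permit tcp any host PUBLIC_IP eq https
        access-group acl_in in interface outside
        access-group inside_access_out in interface inside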


  • mounting ext4 fs with block size of 65536

    - by seaquest
    I am doing some benchmarking of EXT4 performance on Compact Flash media. I have created an ext4 fs with a block size of 65536; however, I cannot mount it on ubuntu-10.10-netbook-i386 (it already mounts ext4 filesystems with 4096-byte block sizes). According to my reading on ext4, it should allow such big block sizes. I want to hear your comments.

        root@ubuntu:~# mkfs.ext4 -b 65536 /dev/sda3
        Warning: blocksize 65536 not usable on most systems.
        mke2fs 1.41.12 (17-May-2010)
        mkfs.ext4: 65536-byte blocks too big for system (max 4096)
        Proceed anyway? (y,n) y
        Warning: 65536-byte blocks too big for system (max 4096), forced to continue
        Filesystem label=
        OS type: Linux
        Block size=65536 (log=6)
        Fragment size=65536 (log=6)
        Stride=0 blocks, Stripe width=0 blocks
        19968 inodes, 19830 blocks
        991 blocks (5.00%) reserved for the super user
        First data block=0
        1 block group
        65528 blocks per group, 65528 fragments per group
        19968 inodes per group

        Writing inode tables: done
        Creating journal (1024 blocks): done
        Writing superblocks and filesystem accounting information: done

        This filesystem will be automatically checked every 37 mounts or
        180 days, whichever comes first.  Use tune2fs -c or -i to override.

        root@ubuntu:~# tune2fs -l /dev/sda3
        tune2fs 1.41.12 (17-May-2010)
        Filesystem volume name:   <none>
        Last mounted on:          <not available>
        Filesystem UUID:          4cf3f507-e7b4-463c-be11-5b408097099b
        Filesystem magic number:  0xEF53
        Filesystem revision #:    1 (dynamic)
        Filesystem features:      has_journal ext_attr resize_inode dir_index filetype extent flex_bg sparse_super large_file huge_file uninit_bg dir_nlink extra_isize
        Filesystem flags:         signed_directory_hash
        Default mount options:    (none)
        Filesystem state:         clean
        Errors behavior:          Continue
        Filesystem OS type:       Linux
        Inode count:              19968
        Block count:              19830
        Reserved block count:     991
        Free blocks:              18720
        Free inodes:              19957
        First block:              0
        Block size:               65536
        Fragment size:            65536
        Blocks per group:         65528
        Fragments per group:      65528
        Inodes per group:         19968
        Inode blocks per group:   78
        Flex block group size:    16
        Filesystem created:       Sat Feb  5 14:39:55 2011
        Last mount time:          n/a
        Last write time:          Sat Feb  5 14:40:02 2011
        Mount count:              0
        Maximum mount count:      37
        Last checked:             Sat Feb  5 14:39:55 2011
        Check interval:           15552000 (6 months)
        Next check after:         Thu Aug  4 14:39:55 2011
        Lifetime writes:          70 MB
        Reserved blocks uid:      0 (user root)
        Reserved blocks gid:      0 (group root)
        First inode:              11
        Inode size:               256
        Required extra isize:     28
        Desired extra isize:      28
        Journal inode:            8
        Default directory hash:   half_md4
        Directory Hash Seed:      afb5b570-9d47-4786-bad2-4aacb3b73516
        Journal backup:           inode blocks

        root@ubuntu:~# mount -t ext4 /dev/sda3 /mnt/
        mount: wrong fs type, bad option, bad superblock on /dev/sda3,
               missing codepage or helper program, or other error
               In some cases useful info is found in syslog - try
               dmesg | tail or so
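    For context, and the likely explanation of the warning text: Linux can only mount a filesystem whose block size is at most the kernel page size, which is 4096 bytes on i386/x86_64 -- that is exactly the "max 4096" mkfs complains about, and why mkfs succeeds (it just writes structures) while mount fails. A quick way to confirm the limit on a given machine:

        # The kernel page size is the hard upper bound for a mountable block size
        getconf PAGE_SIZE    # prints 4096 on i386/x86_64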


  • phpMyAdmin causes php-fpm worker to restart (502 Bad Gateway)

    - by rndbit
    I am trying to set up a test site for myself. Everything works fine except phpMyAdmin. The PHP installation runs my test site's scripts fine; however, trying to load phpMyAdmin I get a 502 Bad Gateway error. Judging from the logs (which are not too helpful), it seems the php-fpm worker is crashing each time phpMyAdmin is accessed. No clue how or why. Does anyone have any idea?

    nginx log:

        *3 recv() failed (104: Connection reset by peer) while reading response header from upstream,

    And php-fpm log:

        [07-Jun-2012 14:19:51] WARNING: [pool www] child 32179 exited on signal 11 (SIGSEGV) after 3.217902 seconds from start
        [07-Jun-2012 14:19:51] NOTICE: [pool www] child 32351 started

    My nginx conf:

        user nginx;
        worker_processes 1;

        error_log /var/log/nginx/error.log;
        pid /var/run/nginx.pid;

        events {
            worker_connections 1024;
        }

        http {
            include /etc/nginx/mime.types;
            default_type application/octet-stream;

            log_format main '$remote_addr - $remote_user [$time_local] "$request" '
                            '$status $body_bytes_sent "$http_referer" '
                            '"$http_user_agent" "$http_x_forwarded_for"';

            access_log /var/log/nginx/access.log main;

            sendfile on;
            keepalive_timeout 65;

            fastcgi_buffers 8 16k;
            fastcgi_buffer_size 32k;

            include /etc/nginx/conf.d/*.conf;

            server {
                listen 443 ssl;
                listen 80;
                server_name testsite.net www.testsite.net;

                ssl on;
                ssl_certificate /var/www/html/admin/ssl/certificate.pem;
                ssl_certificate_key /var/www/html/admin/ssl/privatekey.pem;
                ssl_session_timeout 1m;
                ssl_protocols SSLv2 SSLv3 TLSv1;
                ssl_ciphers HIGH:!aNULL:!MD5:!kEDH;
                ssl_prefer_server_ciphers on;

                access_log off;

                location ~ \.php$ {
                    root /var/www/html;
                    fastcgi_pass 127.0.0.1:9000;
                    fastcgi_index index.php;
                    fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
                    include /etc/nginx/fastcgi_params;
                }

                location / {
                    root /var/www/html;
                    index index.php;
                }
            }
        }

    php.ini is standard, with cgi.fix_pathinfo=0. php-fpm.conf:

        include=/etc/php-fpm.d/*.conf

        [global]
        pid = /var/run/php-fpm/php-fpm.pid
        error_log = /var/log/php-fpm/error.log
        log_level = notice

    php-fpm.d/www.conf:

        [www]
        listen = 127.0.0.1:9000
        listen.allowed_clients = 127.0.0.1

        user = nginx
        group = nginx

        pm = dynamic
        pm.max_children = 10
        pm.start_servers = 1
        pm.min_spare_servers = 1
        pm.max_spare_servers = 10

        slowlog = /var/log/php-fpm/www-slow.log

        php_flag[display_errors] = on
        php_admin_value[error_log] = /var/log/php-fpm/www-error.log
        php_admin_flag[log_errors] = on
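    A hedged way forward: a worker dying with SIGSEGV is very often a crashing native extension (opcode caches such as APC or eAccelerator are frequent offenders with phpMyAdmin), so listing extensions, disabling suspects, and letting workers dump core narrows it down. A sketch (paths are illustrative; note the global config above does not raise rlimit_core, which suppresses core files):

        # List loaded extensions; try disabling opcode caches first
        php -m
        # After setting rlimit_core = unlimited in the pool config, inspect a core:
        gdb /usr/sbin/php-fpm /var/tmp/core.32179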


  • Why might one host be unable to access the Internet, when it can ping the router and when all other hosts can?

    - by user1444233
    I have a Draytek Vigor 2830n. It's kicking out a 192.168.3.0 LAN. It performs load-balancing across dual WAN ports, although I've turned off the second WAN to simplify testing. There are many hosts on the LAN. All IPs are allocated through DHCP, most freely allocated from the pool, but one or two are bound to NIC MAC addresses.

    All hosts can access the Internet, save one. That host (192.168.3.100, or 'dot100' for short) gets allocated an IP address (and the right gateway address, DNS server addresses, subnet, etc.). dot100 can ping itself. It can ping the gateway and access the latter's web interface via port 80. It's responsive and loss-free (a sustained ping over a couple of minutes reports no data loss). Yet, for some reason that evades me, dot100 can't ping an external IP address or domain name. I suspect it never could, because it was getting some Internet access from a second adaptor (on a different subnet), but that's now been turned off, which exposed the problem.

    In dot100, I've tried:

      • two operating systems (Windows 8 and Knoppix), to rule out anti-virus programs etc.
      • two physical adaptors
      • two cables, on each adaptor
      • two IPs (e.g. .100 and .103 assigned by MAC, and .26 from the pool)
      • both dynamic and assigned (MAC-bound) DHCP-allocated IPs

    but none of these experiments yielded any variation in the result. dot100 is a crucial host. It's a file server for the network, so I need it to be reliably allocated a consistent IP. Can anyone offer a potential solution or a way forward with the analysis, please?

    My guess: my analysis so far leads me to believe it's a router issue. I've checked the web interface very carefully. There are no filters set up in Firewall - General Setup or Filter Setup. I suspect a corrupted internal routing table, but the web UI shows this as the routing table:

        Key: C - connected, S - static, R - RIP, * - default, ~ - private
        *    0.0.0.0/        0.0.0.0          via 62.XX.XX.X       WAN1
        *    62.XX.XX.X/     255.255.255.255  via 62.XX.XX.X       WAN1
        S    82.YY.YYY.YYY/  255.255.255.255  via 82.YY.YYY.YYY    WAN1
        C    192.168.1.0/    255.255.255.0    directly connected   WAN2
        C~   192.168.3.0/    255.255.255.0    directly connected   LAN2
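    A few hedged diagnostics that could narrow this down from dot100's side (192.168.3.1 below is an assumed gateway address -- use whatever the DHCP lease actually hands out):

        REM From dot100 (Windows): test each step of the path separately
        ping 192.168.3.1
        ping 8.8.8.8
        tracert -d 8.8.8.8
        arp -a

    If the traceroute dies at the Vigor, the router is the problem; if 8.8.8.8 answers but names do not, it is DNS instead.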


  • Server currently under DDoS, not sure what to do.

    - by Volex
    Hi, my web server is currently under a DDoS attack, I believe. The messages log is full of these kinds of messages:

        May 13 15:51:19 kernel: nf_conntrack: table full, dropping packet.
        May 13 15:51:19 last message repeated 9 times
        May 13 15:51:24 kernel: __ratelimit: 78 callbacks suppressed
        May 13 15:51:24 kernel: nf_conntrack: table full, dropping packet.
        May 13 15:52:06 kernel: possible SYN flooding on port 80. Sending cookies.

    and netstat shows a huge number of entries like the following:

        tcp 0 0 my.host.com:http bb176da0.virtua.com.br:4998   SYN_RECV
        tcp 0 0 my.host.com:http 187.0.43.109:2694             SYN_RECV
        tcp 0 0 my.host.com:http 109.229.4.145:1722            SYN_RECV
        tcp 0 0 my.host.com:http 189-84-163-244.sodobr:63267   SYN_RECV
        tcp 0 0 my.host.com:http bd66839d.virtua.com.br:3469   SYN_RECV
        tcp 0 0 my.host.com:http 69.101.56.190.dsl.int:52552   SYN_RECV
        tcp 0 0 my.host.com:http pc-62-230-47-190.cm.vt:2262   SYN_RECV
        tcp 0 0 my.host.com:http 189-84-163-244.sodobr:63418   SYN_RECV
        tcp 0 0 my.host.com:http pc-62-230-47-190.cm.vt:1741   SYN_RECV
        tcp 0 0 my.host.com:http zaq3d739320.zaq.ne.jp:2141    SYN_RECV
        tcp 0 0 my.host.com:http netacc-gpn-4-80-73.po:52676   SYN_RECV

    tcpdump shows:

        17:11:08.564510 IP 187-4-1xx-4.xxx.ipd.brasiltelecom.net.br.54821 > my.host.com.http: S 999692166:999692166(0) win 65535
        17:11:08.566347 IP 114-44-171-67.dynamic.hinet.net.1129 > my.host.com.http: S 605369055:605369055(0) win 65535
        17:11:08.570210 IP 200-101-13-130.pvoce300.ipd.brasiltelecom.net.br.5590 > my.host.com.http: S 2813379182:2813379182(0) win 16384
        17:11:08.571290 IP dsl-189-143-30-99-dyn.prod-infinitum.com.mx.1615 > my.host.com.http: S 281542700:281542700(0) win 65535
        17:11:08.583847 IP dsl-189-143-30-99-dyn.prod-infinitum.com.mx.1617 > my.host.com.http: S 499413892:499413892(0) win 65535
        17:11:08.588680 IP 170.51.229.112.2569 > my.host.com.http: S 2195084898:2195084898(0) win 65535
        17:11:08.588773 IP gw2-1.211.ru.3180 > my.host.com.http: F 2315901786:2315901786(0) ack 2620913033 win 64240
        17:11:08.590656 IP 200-101-13-130.pvoce300.ipd.brasiltelecom.net.br.5614 > my.host.com.http: S 2813715032:2813715032(0) win 16384
        17:11:08.591212 IP 203.82.82.54.15848 > my.host.com.http: S 4070423507:4070423507(0) win 16384
        17:11:08.591254 IP 203.82.82.54.2545 > my.host.com.http: S 1790910784:1790910784(0) win 16384
        17:11:08.591289 IP 203.82.82.54.28306 > my.host.com.http: S 578615626:578615626(0) win 16384
        17:11:08.591591 IP gw2-1.211.ru.3191 > my.host.com.http: F 2316435991:2316435991(0) ack 2634205972 win 64240
        17:11:08.591790 IP 200-101-13-130.pvoce300.ipd.brasiltelecom.net.br.5593 > my.host.com.http: S 2813659017:2813659017(0) win 16384
        17:11:08.593691 IP gw2-1.211.ru.3203 > my.host.com.http: F 2316834420:2316834420(0) ack 2629074987 win 64240

    I'm not sure what I can do to limit/mitigate this. Currently no web pages are being served. Any help gratefully appreciated.
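    First-aid measures that commonly address exactly these two symptoms (conntrack table full, SYN flood) -- a sketch; the values are illustrative, the sysctl names vary slightly by kernel version, and NOTRACK is only safe if no firewall rules match on connection state for port 80:

        # Keep SYN cookies on (the log shows they are already kicking in)
        sysctl -w net.ipv4.tcp_syncookies=1
        sysctl -w net.ipv4.tcp_max_syn_backlog=8192
        # Relieve "nf_conntrack: table full, dropping packet"
        sysctl -w net.netfilter.nf_conntrack_max=262144
        # Or stop tracking web traffic altogether so the table cannot fill
        iptables -t raw -A PREROUTING -p tcp --dport 80 -j NOTRACK
        iptables -t raw -A OUTPUT -p tcp --sport 80 -j NOTRACK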


  • DNS settings for resolving Host name to IP not working?

    - by Hasas Ali Khan
    I want to access my IIS-hosted application over the LAN. First I installed a DNS server. The DNS configuration steps were:

      1. Go to DNS Manager, right-click the system name and click "Configure a DNS Server".
      2. The DNS Server wizard opens; click Next, select the "forward lookup zone" radio button, and click Next.
      3. In the second window, select "The server maintains the zone" and click Next.
      4. Give the zone name "example.com" and click Next.
      5. Select "Do not allow dynamic updates" and click Next.
      6. In the next window, select "No, it should not forward queries" and click Next.
      7. Complete the wizard and click Finish.

    After that comes managing the DNS records:

      1. In DNS Manager, open the forward lookup zone tree, right-click the new zone name "example.com", choose Properties, click "Start of Authority" and enter these values: serial number = 1, primary server = systemname.domainname, responsible person = hostmaster.domainname.
      2. Click the server name, highlight the domain name, click the Edit button and enter the IP address of the server where I host my application.
      3. Highlight the new zone name, right-click it and choose "New Host".
      4. In this window there are three text boxes: Name (uses parent name if blank) = scoring; Fully Qualified Domain Name = scoring.example.com; IP Address = my IP address. Check "Create associated pointer (PTR) record", click the "Add Host" button and then click Done.

    I have a host header "scoring" for my application on port 80, and it works fine on the server. In the application settings I changed Advanced Settings -> Application Pool Identity -> Local System. The application can be reached on the server itself with the host name "scoring", but it cannot be reached from machines on the LAN. When I edit a LAN machine's hosts file (under C:\Windows\System32\drivers\etc\hosts) and enter the host name with the hosting machine's IP, like this:

        scoring 192.168.1.20

    then I can run the application on that LAN machine. What I want from the DNS setup, as described above, is to run the app over the LAN without editing each client's hosts file. What mistake am I making in this configuration?
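    A hedged check that usually locates this kind of failure quickly: LAN machines only consult the new zone if their TCP/IP settings point at this DNS server, so querying it directly from a client separates a zone problem from a client-configuration problem (192.168.1.20 is assumed to be the DNS host):

        REM Query the new DNS server directly
        nslookup scoring.example.com 192.168.1.20
        REM Then check which DNS servers this client is actually configured to use
        ipconfig /all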


  • Can't get DHCPd to assign IPs to unknown clients

    - by Jakobud
    I'm using Webmin to administer our DHCPd server, but I'm having a hard time getting it to assign IP addresses to unknown clients. The only way I can get it to assign an IP is to add the host to DHCPd so that it gets a static-lease IP. I thought "allow unknown-clients" was the key, but it still isn't assigning IPs to unknown clients. I have a pool set up so that unknown clients should get an IP between 10.20.0.200 and 10.20.0.249. Here is the config file. What am I missing?

        allow unknown-clients;

        # Primary DHCP server config
        authoritative;
        ddns-update-style none;

        failover peer "dhcp-failover" {
                primary;
                address 10.20.0.30;
                port 647;
                peer address 10.20.0.25;
                peer port 647;
                max-response-delay 60;
                max-unacked-updates 10;
                load balance max seconds 3;
                mclt 3600;
                split 128;
        }

        subnet 10.20.0.0 netmask 255.255.255.0 {
                allow unknown-clients;
                option subnet-mask 255.255.255.0;
                option broadcast-address 10.20.0.255;
                option routers 10.20.0.100;
                option domain-name "ourdomain.com";
                option domain-name-servers 192.168.10.20;
                default-lease-time 86400;
                max-lease-time 86400;
                option ntp-servers 192.168.10.20;
                option time-offset -25200;

                pool {
                        allow unknown-clients;
                        failover peer "dhcp-failover";
                        max-lease-time 86400;
                        range 10.20.0.200 10.20.0.249;
                        deny dynamic bootp clients;
                }

                host Server-myserver {
                        option host-name "whatever.ourdomain.com";
                        hardware ethernet 00:89:D4:35:4F:13;
                        fixed-address 10.20.0.23;
                }
        }
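    One hedged thing to try: dhcpd will not hand out dynamic leases from a pool that references a failover peer unless the two peers are actually communicating, while fixed-address hosts keep working regardless -- which matches the symptom here. Temporarily taking failover out of the pool isolates that (a sketch; also watch syslog for failover state messages):

        pool {
                allow unknown-clients;
                # failover peer "dhcp-failover";   # removed for testing only
                range 10.20.0.200 10.20.0.249;
                max-lease-time 86400;
        }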


  • IIS6 Virtual Directory 500 Error on Remote Share

    - by David Boike
    We have our servers at the server farm in a domain. Let's call it LIVE. Our developer computers live in a completely separate corporate domain, miles and miles away. Let's call it CORP. We have a large central storage unit (Unix) that houses images and other media needed by many webservers in the server farm. The IIS application pools run as (let's say) LIVE\MediaUser and use those credentials to connect to a central storage share as a virtual directory, retrieve the images, and serve them as if they were local on each server.

    The problem is in development, on my development machine. I log in as CORP\MyName. My IIS 6 application pool runs as Network Service. I can't run it as a user from the LIVE domain because my machine isn't (and cannot be) joined to that domain. I try to create a virtual directory, point it to the same network directory, click Connect As, uncheck the "Always use the authenticated user's credentials when validating access to the network directory" checkbox so that I can enter the login info, enter the credentials for LIVE\MediaUser, click OK, verify the password, etc.

    This doesn't work. I get "HTTP Error 500 - Internal server error" from IIS. The IIS log file reports sc-status = 500, sc-substatus = 16, and sc-win32-status = 1326. The documentation says this means "UNC authorization credentials are incorrect", and the Win32 status means "Logon failure: unknown user name or bad password." This would be all well and good if it were anywhere close to accurate. I double- and triple-checked it, and tried multiple known-good logins. The IIS manager allows me to view the file tree in its window; it's only the browser that kicks me out. I even tried going to the virtual directory's Directory Security tab and, under Authentication and Access Control, using the same LIVE domain username for the anonymous access credential. No luck.

    I'm not trying to run any ASP, ASP.NET, or other dynamic anything out of the virtual directory. I just want IIS to be able to load static images, CSS, and JS files. If anyone has some bright ideas I would be most appreciative!
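    A hedged sanity check: verifying from the workstation that the LIVE credentials can open the share at all separates a credential/trust problem from an IIS problem (the share path is illustrative; the trailing * prompts for the password):

        net use \\storageserver\media /user:LIVE\MediaUser *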


  • Wireless traffic stops when downloading large files at high speed: packets lost (Linksys WRT120N router)

    - by Torious
    The problem

    Note: first I'd like to understand WHY this is happening. Of course, a solution would be nice too. :)

    When downloading a large file over HTTP at high speed, my wireless traffic basically stops: I can't open web pages, and the download itself pauses. It pauses pretty much immediately after starting; sometimes at 800 KB, sometimes at a few MB. After some time, the download (and other traffic) resumes, but the problem keeps reoccurring during the same download. The problem does not occur when using a wired connection through the same router (Linksys WRT120N). Also note that the connection is not dropped when this happens; it's just that the traffic stops and I can't browse to web pages, etc. (SYN packets are sent but nothing is received, etc.)

    Inspection with Wireshark shows that the following happens:

      1. Server sends data packets, which are acknowledged by the client.
      2. Server sends a packet, but SEQ indicates some packets were lost (6 packets in one occurrence).
      3. Server sends a few more packets, and the client acknowledges these using "selective acknowledgement".
      4. Server stops sending data for a while (since the lost packets were not acknowledged, or the router stops forwarding them?).
      5. Eventually, the server does a "retransmission" and traffic resumes as normal.

    This all seems like normal behavior to me when packet loss occurs. It's the consistent packet loss throughout a large, high-speed download that puzzles me. What might cause this?

    My own idea is the following: my Internet connection is pretty fast (100 Mbps), so when starting a large-file download, the router buffers the incoming data (since wireless introduces some slight delay / lower speed, in part due to other networks), but the buffer overflows and the router drops packets to regulate traffic (because it has no choice). But how could that happen? Doesn't the TCP window size limit the amount of data that can go unacknowledged? So how can the router's buffer overflow if there can only be about 64 KB waiting to be acknowledged?

    Note: I've disabled TCP window scaling and dynamic window size through netsh options, in an attempt to fix this, but it doesn't seem to matter. Also, Wireshark shows a pattern of the server sending 2 packets (of 1514 bytes) and the client sending an ACK, so does that rule out a possible buffer overflow? And a few more subsequent packets are received... I'm at a loss here. Thanks for any insights.

    Things that are (probably) NOT the cause / I have experimented with:

      • The browser
      • Various TCP options in Windows 7 (netsh etc.)
      • Router settings such as MTU, beacon interval, UPnP, ...
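    For reference, the netsh knobs mentioned above look like this (a sketch; Windows 7, elevated prompt):

        REM Disable TCP receive-window auto-tuning, then confirm the change
        netsh interface tcp set global autotuninglevel=disabled
        netsh interface tcp show global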


  • PHP 5.3.3 access log

    - by irolla
    Hi, I'm using php-fpm. In 5.3.2, when I open the phpinfo page, I get one line in the access log:

        ip - - [26/Aug/2010:16:35:32 +0400] "GET /phpinfo.php HTTP/1.1" 200 13322 "-" "Mozilla/5.0 (X11; U; Linux i686; en-US; rv:1.9.1.5) Gecko/20091102 Firefox/3.5.5"

    But in 5.3.3 I'm getting:

        ip - - [26/Aug/2010:16:30:30 +0400] "GET /phpinfo.php HTTP/1.1" 200 11891 "-" "Mozilla/5.0 (X11; U; Linux i686; en-US; rv:1.9.1.5) Gecko/20091102 Firefox/3.5.5"
        ip - - [26/Aug/2010:16:30:30 +0400] "GET /phpinfo.php?=PHPE9568F34-D428-11d2-A769-00AA001ACF42 HTTP/1.1" 200 2536 "http://site.com/phpinfo.php" "Mozilla/5.0 (X11; U; Linux i686; en-US; rv:1.9.1.5) Gecko/20091102 Firefox/3.5.5"
        ip - - [26/Aug/2010:16:30:30 +0400] "GET /phpinfo.php?=SUHO8567F54-D428-14d2-A769-00DA302A5F18 HTTP/1.1" 200 2825 "http://site.com/phpinfo.php" "Mozilla/5.0 (X11; U; Linux i686; en-US; rv:1.9.1.5) Gecko/20091102 Firefox/3.5.5"
        ip - - [26/Aug/2010:16:30:30 +0400] "GET /phpinfo.php?=PHPE9568F35-D428-11d2-A769-00AA001ACF42 HTTP/1.1" 200 2158 "http://site.com/phpinfo.php" "Mozilla/5.0 (X11; U; Linux i686; en-US; rv:1.9.1.5) Gecko/20091102 Firefox/3.5.5"

    Why are there 4 lines instead of 1? And what does "?=PHPE..." mean? Is it PHP sessions? My php 5.3.3 fpm config:

        [global]
        pid = /var/run/php5-fpm.pid
        error_log = /var/log/php5-fpm.log
        log_level = notice

        [pool_0]
        listen = 127.0.0.1:9000
        listen.backlog = -1
        listen.allowed_clients = 127.0.0.1

        user = www-data
        group = www-data

        pm = dynamic
        pm.max_children = 50
        pm.min_spare_servers = 5
        pm.max_spare_servers = 35
        pm.max_requests = 500
        pm.status_path = /pool_0/status

        rlimit_files = 1024
        rlimit_core = 0
        catch_workers_output = yes

        php_admin_flag[register_globals] = true
        php_admin_value[error_reporting] = E_ALL & ~E_DEPRECATED
        php_admin_value[max_execution_time] = 15
        php_admin_flag[short_open_tag] = true
        php_admin_flag[display_errors] = false
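    Probably not sessions: those GUID query strings match the logo images that phpinfo() embeds (the PHP and Zend logos, plus a Suhosin logo on Suhosin-patched builds), so the extra three lines look like the browser fetching those images. If that is what is happening here, they should disappear with the logos turned off (a sketch):

        ; php.ini -- also removes the X-Powered-By header
        expose_php = Off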


  • configuration required for HIVE to be installed on a node

    - by ????? ????????
    I went through the process of manually installing Ambari (not through SSH, because I couldn't get keyless access to work) and everything installed OK, except for HIVE and GANGLIA. I got this message:

        stderr: None
        stdout:
        warning: Unrecognised escape sequence '\;' in file /var/lib/ambari-agent/puppet/modules/hdp-hive/manifests/hive/service_check.pp at line 32
        warning: Dynamic lookup of $configuration is deprecated. Support will be removed in Puppet 2.8. Use a fully-qualified variable name (e.g., $classname::variable) or parameterized classes.
        notice: /Stage[1]/Hdp::Snappy::Package/Hdp::Snappy::Package::Ln[32]/Hdp::Exec[hdp::snappy::package::ln 32]/Exec[hdp::snappy::package::ln 32]/returns: executed successfully
        notice: /Stage[2]/Hdp-hive::Hive::Service_check/File[/tmp/hiveserver2Smoke.sh]/ensure: defined content as '{md5}7f1d24221266a2330ec55ba620c015a9'
        notice: /Stage[2]/Hdp-hive::Hive::Service_check/File[/tmp/hiveserver2.sql]/ensure: defined content as '{md5}0c429dc9ae0867b5af74ef85b5530d84'
        notice: /Stage[2]/Hdp-hcat::Hcat::Service_check/File[/tmp/hcatSmoke.sh]/ensure: defined content as '{md5}bae7742f7083db968cb6b2bd208874cb'
        notice: /Stage[2]/Hdp-hcat::Hcat::Service_check/Exec[hcatSmoke.sh prepare]/returns: 13/06/25 03:11:56 WARN conf.HiveConf: DEPRECATED: Configuration property hive.metastore.local no longer has any effect. Make sure to provide a valid value for hive.metastore.uris if you are connecting to a remote metastore.
        notice: /Stage[2]/Hdp-hcat::Hcat::Service_check/Exec[hcatSmoke.sh prepare]/returns: FAILED: SemanticException org.apache.hadoop.hive.ql.parse.SemanticException: org.apache.hadoop.hive.ql.metadata.HiveException: java.lang.RuntimeException: Unable to instantiate org.apache.hadoop.hive.metastore.HiveMetaStoreClient
        notice: /Stage[2]/Hdp-hcat::Hcat::Service_check/Exec[hcatSmoke.sh prepare]/returns: 13/06/25 03:12:06 WARN conf.HiveConf: DEPRECATED: Configuration property hive.metastore.local no longer has any effect. Make sure to provide a valid value for hive.metastore.uris if you are connecting to a remote metastore.
        notice: /Stage[2]/Hdp-hcat::Hcat::Service_check/Exec[hcatSmoke.sh prepare]/returns: FAILED: SemanticException [Error 10001]: Table not found hcatsmokeida8c07401_date102513
        notice: /Stage[2]/Hdp-hcat::Hcat::Service_check/Exec[hcatSmoke.sh prepare]/returns: 13/06/25 03:12:15 WARN conf.HiveConf: DEPRECATED: Configuration property hive.metastore.local no longer has any effect. Make sure to provide a valid value for hive.metastore.uris if you are connecting to a remote metastore.
        notice: /Stage[2]/Hdp-hcat::Hcat::Service_check/Exec[hcatSmoke.sh prepare]/returns: FAILED: SemanticException o

    When I go to the alerts and health checks, I'm getting this:

        Hive Metastore status check  CRIT for 42 minutes
        CRITICAL: Error accessing hive-metaserver status [13/06/25 03:44:06 WARN conf.HiveConf: DEPRECATED: Configuration property hive.metastore.local no longer has any effect.

    What am I doing wrong? I have already tried ambari-server reset on the database, without results.
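    The recurring error underneath is "Unable to instantiate org.apache.hadoop.hive.metastore.HiveMetaStoreClient", so a hedged first step is checking whether the metastore service is up and reachable at all (9083 is the usual default port -- an assumption here):

        # Is the Hive metastore listening?
        netstat -tlnp | grep 9083
        # Does a trivial query get through it?
        hive -e 'show databases;'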


  • PXE net install of CentOS with a static IP

    - by Stu2000
    I seem to be unable to perform a kickstart installation of CentOS 5.8 with a netinstall. It correctly gets into the text installer, but keeps sending out requests to the DHCP server and failing. I have tried to set the IP manually everywhere. Here is my pxelinux.cfg file:

        DEFAULT menu
        PROMPT 0
        MENU TITLE Ubuntu MAAS
        TIMEOUT 200
        TOTALTIMEOUT 6000
        ONTIMEOUT local

        LABEL centos5.8-net
            kernel /images/centos5.8-net/vmlinuz
            MENU LABEL centos5.8-net
            append initrd=/images/centos5.8-net/initrd.img ip=192.168.1.163 netmask=255.255.255.0 hostname=client101 gateway=192.168.1.1 ksdevice=eth0 dns=8.8.8.8 ks=http://192.168.1.125/cblr/svc/op/ks/profile/centos5.8-net
        MENU end

    and here is my kickstart file:

        # Kickstart file for a very basic Centos 5.8 system
        # Assigns the server ip: 192.211.48.163
        # DNS 8.8.8.8, 8.8.4.4
        # London TZ
        install
        url --url http://mirror.centos.org/centos-5/5.8/os/i386
        lang en_US.UTF-8
        keyboard us
        network --device=eth0 --bootproto=static --ip=192.168.1.163 --netmask=255.255.255.0 --gateway=192.168.1.1 --nameserver=8.8.8.8,8.8.4.4 --hostname=client1-server --onboot=on
        rootpw --iscrypted $1$Snrd2bB6$CuD/07AX2r/lHgVTPZyAz/
        firewall --enabled --port=22:tcp
        authconfig --enableshadow --enablemd5
        selinux --enforcing
        timezone --utc Europe/London
        bootloader --location=mbr --driveorder=xvda --append="console=xvc0"

        # The following is the partition information you requested
        # Note that any partitions you deleted are not expressed
        # here so unless you clear all partitions first, this is
        # not guaranteed to work
        part /boot --fstype ext3 --size=100 --ondisk=xvda
        part pv.2 --size=0 --grow --ondisk=xvda
        volgroup VolGroup00 --pesize=32768 pv.2
        logvol swap --fstype swap --name=LogVol01 --vgname=VolGroup00 --size=528 --grow --maxsize=1056
        logvol / --fstype ext3 --name=LogVol00 --vgname=VolGroup00 --size=1024 --grow

        %packages
        @base
        @core
        @dialup
        @editors
        @text-internet
        keyutils
        iscsi-initiator-utils
        trousers
        bridge-utils
        fipscheck
        device-mapper-multipath
        sgpio
        emacs

    Here is my dhcp file:

        ddns-update-style interim;
        allow booting;
        allow bootp;
        ignore client-updates;
        set vendorclass = option vendor-class-identifier;

        subnet 192.168.1.0 netmask 255.255.255.0 {
            host tower {
                hardware ethernet 50:E5:49:18:D5:C6;
                fixed-address 192.168.1.163;
                option routers 192.168.1.1;
                option domain-name-servers 8.8.8.8,8.8.4.4;
                option subnet-mask 255.255.255.0;
                filename "/pxelinux.0";
                default-lease-time 21600;
                max-lease-time 43200;
                next-server 192.168.1.125;
            }
        }

    Is it impossible to prevent it from asking for a dynamic IP before trying to install from the net? Perhaps there is an error in one of my files? My DHCP server is set to ignore client-updates, and is set to only work with one MAC address whilst testing.


  • Mixed sessions with Classic ASP on IIS 7.5 and Windows 2008 R2 64 bit

    - by Marcin
    Recently I had an issue with a server upgrade from IIS 6 on Windows 2003 to IIS 7.5 on Windows 2008 R2 64-bit. We have a number of websites running on Classic ASP. All the sites sit under a particular site, e.g. www.example.com/foo and www.example.com/foobar. On IIS 6 each site was set up as a virtual directory and things worked fine. Since moving to the new setup, a lot of websites seem to have mixed sessions. To be clear, this is not an app pool recycling issue; rather, the sessions are populated with information when the user hits the site, and while browsing they get sessions from different sites. We've determined this based on:

      • a few customers called up and reported having items in their shopping cart with names belonging to a different site
      • our own testing showed that some queries being run would try to bring products in from a different site

    We've tried:

      • disabling dynamic caching
      • converting each site to be a virtual application (if I understand correctly, the virtual directory/application concepts were changed/refined somewhat in IIS 7, although to be honest I'm not clear what the difference is)
      • various application pool changes (using the .NET 2 framework, classic and integrated modes, changing the process model to NetworkIdentity), all to no avail

    The only thing we haven't tried is changing it to run as a 32-bit application. We're not using HTTP-only cookies, so when I open up a browser and type document.cookie into the dev console in Firefox/Chrome/IE, there are multiple ASPSESSIONID=... values, whereas previously I believe there was only one. Finally, we use server-side JScript for the Classic ASP pages, not VBScript, so we have code similar to the below:

        //the user's login account as a jscript object
        Session("user") = { email : "[email protected]", id : 123 };

    and if we execute a line of code like below:

        Response.Write( typeof(Session("user")) );

    When things are running correctly, we get "object", as expected. When the session gets trashed, the output is "unknown" and we are also unable to access the fields within the JScript object (e.g. the .email or .id fields). Much appreciated if anyone can provide any pointers about how to resolve this; everything on Google seems to point to different issues.


  • Adaptec 5805 not recognized after reboot

    - by Rakedko ShotGuns
    After rebooting the system, the controller is not recognized. It only works if the computer is shut down and turned off. I have recently updated the firmware to "Adaptec RAID 5805 Firmware Build 18948". How do I fix the problem?

        Configuration summary
        ---------------------------
        1. Server name.....................raid_test
           Adaptec Storage Manager agent...7.31.00 (18856)
           Adaptec Storage Manager console.7.31.00 (18856)
           Number of controllers...........1
           Operating system................Windows

        Configuration information for controller 1
        -------------------------------------------------------
        Type............................Controller
        Model...........................Adaptec 5805
        Controller number...............1
        Physical slot...................2
        Installed memory size...........512 MB
        Serial number...................8C4510C6C9E
        Boot ROM........................5.2-0 (18948)
        Firmware........................5.2-0 (18948)
        Device driver...................5.2-0 (16119)
        Controller status...............Optimal
        Battery status..................Charging
        Battery temperature.............Normal
        Battery charge amount (%).......37
        Estimated charge remaining......0 days, 16 hours, 12 minutes
        Background consistency check....Disabled
        Copy back.......................Disabled
        Controller temperature..........Normal (40C / 104F)
        Default logical drive task priority High
        Performance mode................Dynamic
        Number of logical devices.......1
        Number of hot-spare drives......0
        Number of ready drives..........0
        Number of drive(s) assigned to MaxCache cache0
        Maximum drives allowed for MaxCache cache8
        MaxCache Read Cache Pool Size...0 GB
        NCQ status......................Enabled
        Stay awake status...............Disabled
        Internal drive spinup limit.....0
        External drive spinup limit.....0
        Phy 0...........................No device attached
        Phy 1...........................No device attached
        Phy 2...........................No device attached
        Phy 3...........................1.50 Gb/s
        Phy 4...........................No device attached
        Phy 5...........................No device attached
        Phy 6...........................No device attached
        Phy 7...........................No device attached
        Statistics version..............2.0
        SSD Cache size..................0
        Pages on fetch list.............0
        Fetch list candidates...........0
        Candidate replacements..........0
        69319...........................31293

        Logical device..................0
        Logical device name.............
        RAID level......................Simple volume
        Data space......................148,916 GB
        Date created....................09/19/2012
        Interface type..................Serial ATA
        State...........................Optimal
        Read-cache mode.................Enabled
        Preferred MaxCache read cache settingEnabled
        Actual MaxCache read cache setting Disabled
        Write-cache mode................Enabled (write-back)
        Write-cache setting.............Enabled (write-back)
        Partitioned.....................Yes
        Protected by hot spare..........No
        Bootable........................Yes
        Bad stripes.....................No
        Power Status....................Disabled
        Power State.....................Active
        Reduce RPM timer................Never
        Power off timer.................Never
        Verify timer....................Never
        Segment 0.......................Present: controller 1, connector 0, device 0, S/N 9RX3KZMT
        Overall host IOs................99075
        Overall MB......................4411203
        DRAM cache hits.................71929
        SSD cache hits..................0
        Uncached IOs....................29239
        Overall disk failures...........0
        DRAM cache full hits............71929
        DRAM cache fetch / flush wait...0
        DRAM cache hybrid reads.........3476
        DRAM cache flushes..............--
        Read hits.......................0
        Write hits......................0
        Valid Pages.....................0
        Updates on writes...............0
        Invalidations by large writes...0
        Invalidations by R/W balance....0
        Invalidations by replacement....0
        Invalidations by other..........0
        Page Fetches....................0
        0...............................0
        73..............................10822
        8...............................3
        46138...........................4916
        27184...........................15226
        20875...........................323
        16982...........................1771
        1563............................5317
        1948............................2969

        Serial attached SCSI
        -----------------------
        Type............................Disk drive
        Vendor..........................Unknown
        Model...........................ST3160815AS
        Serial Number...................9RX3KZMT
        Firmware level..................3.AAD
        Reported channel................0
        Reported SCSI device ID.........0
        Interface type..................Serial ATA
        Size............................149,05 GB
        Negotiated transfer speed.......1.50 Gb/s
        State...........................Optimal
        S.M.A.R.T. error................No
        Write-cache mode................Write back
        Hardware errors.................0
        Medium errors...................0
        Parity errors...................0
        Link failures...................0
        Aborted commands................0
        S.M.A.R.T. warnings.............0
        Solid-state disk (non-spinning).false
        MaxCache cache capable..........false
        MaxCache cache assigned.........false
        NCQ status......................Enabled
        Phy 0...........................1.50 Gb/s
        Power State.....................Full rpm
        Supported power states..........Full rpm, Powered off
        0x01............................113
        0x03............................98
        0x04............................99
        0x05............................100
        0x07............................83
        0x09............................75
        0x0A............................100
        0x0C............................99
        0xBB............................100
        0xBD............................100
        0xBE............................61
        0xC2............................39
        0xC3............................69
        0xC5............................100
        0xC6............................100
        0xC7............................200
        0xC8............................100
        0xCA............................100
        Aborted commands................0
        Link failures...................0
        Medium errors...................0
        Parity errors...................0
        Hardware errors.................0
        SMART errors....................0

        End of the configuration information for controller 1


  • How much did it cost our competitor to DDoS us at 50 Gbps for two weeks?

    - by MiniQuark
    I know that this question may sound like an invalid Server Fault question, but I believe it's quite valid: the amount of time and effort a sysadmin should spend on DDoS protection is a direct function of typical DDoS prices. Let me rephrase this: protecting a web site against small attacks is one thing, but resisting 50 Gbps of UDP flood is another and requires time and money. Deciding whether or not to spend that time and money depends on whether such an attack is likely, and this in turn depends on how cheap and simple such an attack is for the attacker.

    So here's the full story: our company has been the victim of a massive DDoS attack (over 50 Gbps of UDP traffic, full-time for 2 weeks). We are pretty sure it's one of our competitors, and we actually know which one, because we were the only two remaining competitors on a very big request for proposal, and the DDoS attack magically stopped the day we won (double hurray, by the way)! These people have proved in the past that they are very dishonest, but we know they are not technical at all, so we believe they simply paid for some botnet DDoS service.

    I would like to know how much these services typically cost, for such a large-scale attack. Please do not give any link to such services; I would really hate to give these people any publicity. I understand that a hacker could very well do this for free, but what's a typical price for such an attack if our competitors paid for it through some kind of botnet service?

    It is really starting to scare me (if we're talking thousands of dollars here, then I am really going to freak out: who knows, they might just hire a hit-man one day?). Of course we filed a complaint, but the police say they cannot do much about it (DDoS attacks are virtually untraceable, so they say), and our suspicions are not enough to justify raiding our competitor's offices to search for proof.

    For your information, we have now changed our infrastructure to be able to sustain such attacks: we now use a major CDN service so that our servers are not directly affected by DDoS attacks. Requests for dynamic pages do get proxied to our servers, but for low-level attacks (UDP floods or SYN floods, for example) we only receive legitimate traffic, so we're fine. If they decide to launch higher-level attacks (HTTP floods or slowloris attacks, for example), most of the load should be handled by the CDN... at least I hope so!

    Thank you very much for your help.


  • How do I troubleshoot an IPsec tunnel (from a cellular router to a public server)?

    - by Hanno Fietz
    I'm new to IPsec and struggling with a setup that might soon be widely used in our operations (provided I understand it, eventually...). A cellular router (a black box by NetModule; from its log messages it seems to be running Linux and Openswan) connects a sensor network on customers' sites with our public server. We need to be able to connect into the local network, so I had the cell provider give me a public IP (a dynamic one). The way their setup works, the public IPs only allow IPsec traffic.

    I set up Openswan on our Ubuntu server (running Jaunty). This is my connection config from /etc/ipsec.conf:

        conn gprs-field-devices
            left=my.pub.lic.ip
            [email protected]
            #leftsubnet=192.168.1.129/25
            right=%any
            [email protected]
            #rightsubnet=192.168.1.1/25
            #rightnexthop=%defaultroute
            auto=add

    On the router, all I have is the Web UI, in which I made the following settings:

      • "Remote endpoint": public IP of the server, same as "left" above
      • "Local Network Address": 192.168.1.1
      • "Local Network Mask": 255.255.255.128
      • "Remote Network Address": 192.168.1.129
      • "Remote Network Mask": 255.255.255.128

    The pluto process on the server is listening for connections on port 500. It can't open a tunnel on its own, obviously, because it doesn't know at which IP the client is. I set up a passphrase as PSK for @field.econemon.com in /etc/ipsec.secrets and also configured it in the router (which doesn't seem to support certificates).

    My problem is, nothing happens. The router just says IPsec is "down". When I copy-paste the router's IP into ipsec.conf (for "right=") and ask the server to ipsec auto --up gprs-field-devices, it just hangs until I press Ctrl-C. Is there anything wrong with my setup? How can I debug this further? My router gives the following log lines, which seem related but don't tell me anything:

        Feb 21 23:08:20 Netbox authpriv.warn pluto[2497]: loading secrets from "/etc/ipsec.secrets"
        Feb 21 23:08:20 Netbox authpriv.warn pluto[2497]: loading secrets from "/etc/ipsec.d/hostkey.secrets"
        Feb 21 23:08:20 Netbox authpriv.warn pluto[2497]: loading secrets from "/etc/ipsec.d/netbox0.secrets"
        Feb 21 23:08:20 Netbox authpriv.warn pluto[2497]: "netbox00" #1: initiating Main Mode
        Feb 21 23:08:20 Netbox daemon.err ipsec__plutorun: 104 "netbox00" #1: STATE_MAIN_I1: initiate
        Feb 21 23:08:20 Netbox daemon.err ipsec__plutorun: ...could not start conn "netbox00"
        Feb 21 23:08:22 Netbox authpriv.warn pluto[2497]: packet from 188.40.57.4:500: ignoring informational payload, type NO_PROPOSAL_CHOSEN
        Feb 21 23:08:22 Netbox authpriv.warn pluto[2497]: packet from 188.40.57.4:500: received and ignored informational message
        Feb 21 23:08:28 Netbox user.warn parrot.system_controller[762]: IPSECCTRLR: Tunnel 0 is down for 0 seconds
        Feb 21 23:08:40 Netbox user.warn parrot.system_controller[762]: IPSECCTRLR: Tunnel 0 is down for 10 seconds
        Feb 21 23:08:52 Netbox authpriv.warn pluto[2497]: packet from 188.40.57.4:500: ignoring informational payload, type NO_PROPOSAL_CHOSEN
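    Two hedged observations from the logs: with right=%any and auto=add the server can only respond, never initiate (so the hang on ipsec auto --up is expected), and the router receiving NO_PROPOSAL_CHOSEN usually means the IKE proposals or the authentication method don't match. A sketch of the server-side conn with the pieces that often fix this (the cipher choices are assumptions, picked to match a typical embedded client):

        conn gprs-field-devices
            left=my.pub.lic.ip
            [email protected]
            right=%any
            [email protected]
            authby=secret          # PSK -- Openswan otherwise defaults to RSA sigs
            ike=3des-sha1;modp1024 # pin phase-1 proposals both ends support (assumed)
            esp=3des-sha1          # phase-2 likewise
            auto=add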


  • Can't get my Raspberry Pi to keep a static IP

    - by JonnyIrving
    I recently got given a Raspberry Pi, and I would like to be able to remote into it using PuTTY from my laptop, so I don't have to sit next to my TV with a keyboard and mouse to use it. I am able to get a PuTTY session going when I know the IP address that my router has given the Pi, but it keeps changing on each reboot, as I would expect. So I followed a number of instructions to configure the RPi to keep a static IP address. This involved changing the file at /etc/network/interfaces, which now contains (password removed):

        auto lo
        iface lo inet loopback

        iface eth0 inet static
        address 192.168.1.82
        netmask 255.255.255.0
        gateway 192.168.1.254

        auto wlan0
        allow-hotplug wlan0
        iface wlan0 inet dhcp
        wpa-ssid "BeBoxD304BF"
        wpa-psk "**********"

    Despite this, each time I reboot my RPi it still gets a new dynamic IP address. I also noticed in the ifconfig output below that the details for eth0 don't contain an IP for inet addr, Bcast or Mask, which have been present in all other examples I have seen online:

        eth0      Link encap:Ethernet  HWaddr b8:27:eb:b5:95:da
                  UP BROADCAST MULTICAST  MTU:1500  Metric:1
                  RX packets:0 errors:0 dropped:0 overruns:0 frame:0
                  TX packets:0 errors:0 dropped:0 overruns:0 carrier:0
                  collisions:0 txqueuelen:1000
                  RX bytes:0 (0.0 B)  TX bytes:0 (0.0 B)

        lo        Link encap:Local Loopback
                  inet addr:127.0.0.1  Mask:255.0.0.0
                  UP LOOPBACK RUNNING  MTU:16436  Metric:1
                  RX packets:0 errors:0 dropped:0 overruns:0 frame:0
                  TX packets:0 errors:0 dropped:0 overruns:0 carrier:0
                  collisions:0 txqueuelen:0
                  RX bytes:0 (0.0 B)  TX bytes:0 (0.0 B)

        wlan0     Link encap:Ethernet  HWaddr 00:87:c6:00:33:77
                  inet addr:192.168.1.83  Bcast:192.168.1.255  Mask:255.255.255.0
                  UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
                  RX packets:918 errors:0 dropped:0 overruns:0 frame:0
                  TX packets:277 errors:0 dropped:0 overruns:0 carrier:0
                  collisions:0 txqueuelen:1000

    Also, I'm not sure if this is relevant, but it can't hurt! The file at /etc/resolv.conf contains:

        domain config
        search config
        nameserver 192.168.1.254

    ...I heard it might mean something on one of the pages I was looking at. I would be very grateful for any help with this. I have tried everything I can think of and would really like to get this working this weekend so I can use it from work.
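    A hedged reading of the ifconfig output above: the Pi is actually connected over wlan0 (it holds 192.168.1.83, while eth0 has no address at all), yet the interfaces file only makes eth0 static and leaves wlan0 on dhcp. Moving the static stanza to the wireless interface may be all that is needed (a sketch reusing the addresses from the question):

        auto wlan0
        allow-hotplug wlan0
        iface wlan0 inet static
        address 192.168.1.82
        netmask 255.255.255.0
        gateway 192.168.1.254
        wpa-ssid "BeBoxD304BF"
        wpa-psk "**********"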


  • PHP-FPM performing worse than mod_php

    - by lordstyx
    Recently the website I maintain has been growing a lot, and I saw the point coming where I'd want to switch from Apache to nginx, because I kept reading that it performs way better. Now I've done the switch, and I have to say nginx is keeping up just fine. However, php-fpm is a problem. Where the PHP pages used to take 0.1 seconds to generate under the same load, they now take around 3 seconds! Furthermore, the error.log from nginx is being spammed with errors like:

        upstream timed out (110: Connection timed out) while connecting to upstream, client: ...

    I also tried using unix sockets instead, but those would complain about the following:

        connect() to unix:/tmp/php5-fpm.sock failed (11: Resource temporarily unavailable) while connecting to upstream

    I've fiddled with settings here and there but nothing seems to work. Changing the amount of pm.max_children doesn't seem to help a lot either, but at its current value of 350 it seems to be the lesser of all evils. The server has 3 GB RAM (not all of it free, due to a MySQL server also running) along with 2 dual-core processors (4 cores in total). Am I doing something majorly wrong with the settings here, or is the server simply not capable enough?

    EDIT: Here is the nginx server block:

        server {
            listen 80;
            listen [::]:80 default ipv6only=on;

            root /var/www;
            index index.php index.html index.htm;

            server_name localhost;

            location / {
                try_files $uri $uri/ /index.html;
            }

            location /doc/ {
                alias /usr/share/doc/;
                autoindex on;
                allow 127.0.0.1;
                deny all;
            }

            location = /50x.html {
                root /usr/share/nginx/www;
            }

            location ~ \.php$ {
                fastcgi_split_path_info ^(.+\.php)(/.+)$;
                # NOTE: You should have "cgi.fix_pathinfo = 0;" in php.ini
                try_files $uri = 404;

                # With php5-cgi alone:
                fastcgi_pass 127.0.0.1:9000;
                # With php5-fpm:
                #fastcgi_pass unix:/tmp/php5-fpm.sock;
                fastcgi_index index.php;
                include fastcgi_params;
            }

            location ~ /\.ht {
                deny all;
            }
        }

    And the php-fpm pool:

        [www]
        user = www-data
        group = www-data

        listen = 127.0.0.1:9000
        ;listen = /tmp/php5-fpm.sock
        listen.backlog = -1

        pm = dynamic
        pm.max_children = 350
        pm.start_servers = 200
        pm.min_spare_servers = 10
        pm.max_spare_servers = 350
        pm.max_requests = 1536

        rlimit_files = 65536
        rlimit_core = unlimited
        chdir = /
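    A hedged observation: pm.start_servers = 200 with pm.max_children = 350 on a 3 GB box that also runs MySQL very likely pushes the machine into swap, and upstream connect timeouts are a classic symptom of exactly that. A more conservative pool to test with (the numbers assume roughly 30-50 MB per PHP worker -- measure with ps to be sure):

        pm = dynamic
        pm.max_children = 40
        pm.start_servers = 10
        pm.min_spare_servers = 5
        pm.max_spare_servers = 15
        pm.max_requests = 500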

    Read the article

  • Htaccess strange behaviour with Nginx

    - by Termos
    I have a site running on nginx (v1.0.14) serving as a reverse proxy which passes requests to Apache (v2.2.19). Nginx runs on port 80, Apache on 8080. Overall the site works fine, except that I cannot block access to certain directories with an .htaccess file. For example, I have 'my-protected-directory' on 'www.site.com'. Inside it I have an htaccess file with the following code:

        <Files *>
        order deny,allow
        deny from all
        allow from 1.2.3.4   <--- my IP address here
        </Files>

    When I try to access this page with my IP (1.2.3.4) I get a 404 error, which is not what I expect: http://www.site.com/my-protected-directory However, everything works as expected when the page is served directly by Apache - I can see it and everyone else can't: http://www.site.com:8080/my-protected-directory

    Update. Nginx config (7.1.3.7 is the site IP):

        user apache;
        worker_processes 4;
        error_log logs/error.log;
        pid logs/nginx.pid;

        events {
            worker_connections 1024;
        }

        http {
            include mime.types;
            default_type application/octet-stream;
            log_format main '$remote_addr - $remote_user [$time_local] "$request" '
                            '$status $body_bytes_sent "$http_referer" '
                            '"$http_user_agent" "$http_x_forwarded_for"';
            sendfile on;
            keepalive_timeout 65;
            gzip on;
            gzip_min_length 1024;
            gzip_http_version 1.1;
            gzip_proxied any;
            gzip_comp_level 5;
            gzip_types text/plain text/css application/x-javascript text/xml application/xml application/xml+rss text/javascript image/x-icon;

            server {
                listen 80;
                server_name www.site.com site.com 7.1.3.7;
                access_log logs/host.access.log main;

                # serve static files
                location ~* ^.+.(jpg|jpeg|gif|png|ico|css|zip|tgz|gz|rar|bz2|doc|xls|exe|pdf|ppt|txt|tar|mid|midi|wav|bmp|rtf|js)$ {
                    root /var/www/vhosts/www.site.com/httpdocs;
                    proxy_set_header Range "";
                    expires 30d;
                }

                # pass requests for dynamic content to Apache
                location / {
                    proxy_redirect off;
                    proxy_set_header X-Real-IP $remote_addr;
                    proxy_set_header Host $http_host;
                    proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
                    proxy_set_header Range "";
                    proxy_pass http://7.1.3.7:8080;
                }
            }
        }

    Could anyone please tell me what is wrong and how it can be fixed?
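
    Two things in that setup conspire against the .htaccess: requests matching the static-file regex never reach Apache at all, and for requests that are proxied, Apache sees the connection coming from the nginx host's address rather than the visitor's, so an IP-based allow/deny can never match (unless Apache is taught to trust X-Forwarded-For, e.g. via mod_rpaf). One option is to enforce the restriction at the nginx layer instead. A minimal sketch, assuming the same IP as in the .htaccess; the '^~' modifier makes this prefix location win over the static-file regex, so files inside the directory are covered too:

        location ^~ /my-protected-directory {
            allow 1.2.3.4;    # assumed: the IP from the .htaccess
            deny all;
            proxy_redirect off;
            proxy_set_header X-Real-IP $remote_addr;
            proxy_set_header Host $http_host;
            proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
            proxy_pass http://7.1.3.7:8080;
        }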

    Read the article

  • Setting up Virtual Hosts with Apache on Windows 2008 server for multiple sites. Complicated setup, including subversion

    - by Roeland
    I am setting up Apache on my Windows 2008 server at home. It will serve two functions:

    1. Subversion hosting, to allow me and some others to manage company documents with version control.
    2. Local website hosting for web development. It will need to run several websites, since I generally work on more than one site at a time.

    Here's what I have done so far. I set up Subversion and Apache 2.2 using some walkthroughs. I changed the default port to 1337. (I'm a nerd.) Using dyndns.com I created a domain that forwards to my home IP, which is dynamic (company.gotdns.org). I then went into my DNS for company.com and added a record to point repo.company.com to company.gotdns.org. At this point people who need access to my file repository can get it by going to repo.company.com/repo, which is good so far.

    My question concerns the next step: setting up virtual hosts with Apache. Ideally I would like my local websites to be viewable by some others in the company from their homes. Say I am working on site1; I would like them to be able to view it by going to site1.roeland.bythepixel.com. At the same time, I would like site10.wouter.bythepixel.com to go to his local setup for site10. What I have done for this: I went into my DNS for company.com and added a record to point roeland.company.com to company.gotdns.org (which translates to my IP), and I added code to my httpd-vhosts.conf and to my hosts file (both listed below). Of course this doesn't work as expected: going to site1.roeland.bythepixel.com doesn't bring up my test1 site. Could anyone point out where I may be going wrong? Thanks!

    hosts:

        127.0.0.1    localhost
        127.0.0.1    sensenich.roeland.bythepixel.com
        ::1          localhost

    httpd-vhosts.conf:

        <VirtualHost *:80>
            ServerAdmin [email protected]
            DocumentRoot "F:/Current Projects/sensenich.com"
            ServerName sensenich.roeland.bythepixel.com
            ErrorLog "logs/sensenich.roeland.bythepixel.com-error.log"
            CustomLog "logs/sensenich.roeland.bythepixel.com-access.log" common
        </VirtualHost>
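
    One detail that often bites on Apache 2.2: name-based virtual hosting must be enabled explicitly with NameVirtualHost, and Apache has to be listening on the port the vhosts declare (the vhost above says *:80, while the server was moved to 1337). A minimal sketch, assuming the dev sites should answer on port 80; the site1 entries are hypothetical:

        # httpd.conf / httpd-vhosts.conf
        Listen 80
        NameVirtualHost *:80

        <VirtualHost *:80>
            ServerName sensenich.roeland.bythepixel.com
            DocumentRoot "F:/Current Projects/sensenich.com"
        </VirtualHost>

        <VirtualHost *:80>
            ServerName site1.roeland.bythepixel.com
            # hypothetical path, for illustration only
            DocumentRoot "F:/Current Projects/site1"
        </VirtualHost>

    Each name tested locally also needs its own hosts-file line (e.g. '127.0.0.1 site1.roeland.bythepixel.com'), the DNS side needs a record per name (or a wildcard pointing at company.gotdns.org), and the router must forward port 80 for outside visitors.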

    Read the article

  • Windows server detected error with hard disk

    - by user53864
    We have a Windows Server 2008 R2 hosting server and I work as the admin in a small company. The server keeps hanging and restarting, and the hard disk seems to have been damaged by power fluctuation (though we have an inverter), as it shows the following error message on reboot:

        Problem detected with the hard disk
        Press any key to continue

    It's a Seagate 1 TB SATA hard disk, and it boots after pressing Enter, so it's clear the disk is dying. Yes, it's under warranty, but the warranty won't recover the licensed Windows Server 2008 installation or its data. Since it still boots, I backed up the essentials and I am thinking of cloning the entire disk. The first thing that struck me was to check the Seagate site for a cloning tool; I found Seagate DiskWizard, but it isn't specified for Windows Server 2008. I'd be grateful for your best ideas on the following:

    1. Urgently: what's the best (free) way for me to clone to a new disk of the same size?
    2. It's a one-time license, and I cannot use the same key again if I reinstall the server. Will the license carry over to the new disk if it's cloned? Or is there a way to contact Microsoft, explain what happened, and obtain a new key at no charge?
    3. I want to take measures for the future. How do I keep two disks in continuous sync? Are mirroring and RAID (converting the disks to dynamic) the only options, or is there a better approach at no additional cost?

    Any help is greatly appreciated. Thank you!

    EDIT 1: I started cloning the disk with Clonezilla and it seemed to be going properly, showing progress in the GUI. But after some time the GUI disappeared, leaving a black screen with codes (they look like disk location numbers) scrolling page by page (I have attached the screenshots below, captured from my phone). Do you think it's actually cloning? I started in the morning and it's evening now. I've left the office to let it finish whatever it's doing, and I'll check it tomorrow. I've slowly lost hope and don't know what it will show tomorrow. Any ideas?
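
    On question 3, for the data volumes (not the boot volume) a scheduled mirror job costs nothing and is built into Windows Server 2008. A minimal sketch using robocopy - the drive letters and log path here are made up for illustration:

        robocopy D:\ E:\ /MIR /COPYALL /R:1 /W:1 /XJ /LOG:C:\logs\disk-sync.log

    Here /MIR mirrors the tree including deletions, /COPYALL preserves NTFS attributes and ACLs, and /XJ skips junction points so the job cannot loop; run it from Task Scheduler at whatever interval counts as continuous enough. This does not make the second disk bootable - for the system volume, Windows software mirroring on dynamic disks or an image-based clone is still required.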

    Read the article

  • Expendable, Redundant, Easily recoverable

    - by MeIr
    I am desperate at this point; I have been looking for a "big storage" solution for a while on my own, and I can't find anything that suits my needs. But now push has come to shove.

    Current situation: I have about 6 TB of data storage (already full) - a Drobo. Yesterday the Drobo died on me, which puts me in a bad situation - I can't recover my data without buying another Drobo. From extensive research online I realized that Drobo is not the safest bet, and by now it seems a very poor choice. I ordered a new Drobo to try to get my data back; however, I don't want to be in the same situation later, and continuing to use Drobo promises this event will recur.

    What I am looking for:

    1) Inexpensive setup.
    2) Dynamically extendable - add more drives and/or replace a drive with a bigger one.
    3) Redundant - protection against 1-3 drive failures, depending on the total number of drives. For the sake of argument, let's assume that for every 4 drives, one should be able to fail without data loss.
    4) Easy data recovery - if the unforeseen happens, I would like to be able to recover my data without buying new tools or replacements (example: a new Drobo).
    5) Should be USB or network-attached storage.
    6) No demands on speed. It doesn't have to be fast; I am not doing video editing on this setup. But if the option exists, a decent speed would be nice.

    After thoughts: I reviewed a few options, and FreeNAS looks nice, but it doesn't have #2, dynamic extendability. There are workarounds with pools, but that seems a bit complicated and unnecessary. Moreover, data safety seems to be a big question - I saw some horror stories. Please advise on what options I have and what seems like an optimal solution (if any). I don't care if it has to be a Windows or Linux box or any other OS and/or software on top, but a simple solution is more attractive. Thank you!

    P.S.: Feel free to ignore "After thoughts".
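
    Since FreeNAS is already on the shortlist, it's worth spelling out how the pool workaround addresses requirement 2: a ZFS pool grows a vdev at a time rather than a drive at a time. A minimal sketch with hypothetical device names:

        # initial pool: 4 drives, any 1 may fail (matches the 1-in-4 target)
        zpool create tank raidz1 da0 da1 da2 da3
        # later expansion: add a second 4-drive raidz1 vdev to the same pool
        zpool add tank raidz1 da4 da5 da6 da7

    The trade-off is that capacity grows in whole-vdev increments, and the only way to enlarge an existing vdev is to replace each of its drives with bigger ones (with the pool's autoexpand property on); single-drive add-as-you-go, Drobo style, is not something ZFS offers.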

    Read the article
