Search Results

Search found 11208 results on 449 pages for '200 success'.

Page 39/449 | < Previous Page | 35 36 37 38 39 40 41 42 43 44 45 46  | Next Page >

  • PAM Winbind Expired Password

    - by kernelpanic
    We've got Winbind/Kerberos setup on RHEL for AD authentication. Working fine however I noticed that when a password has expired, we get a warning but shell access is still granted. What's the proper way of handling this? Can we tell PAM to close the session once it sees the password has expired? Example: login as: ad-user [email protected]'s password: Warning: password has expired. [ad-user@server ~]$ Contents of /etc/pam.d/system-auth: auth required pam_env.so auth sufficient pam_unix.so nullok try_first_pass auth requisite pam_succeed_if.so uid >= 500 quiet auth sufficient pam_krb5.so use_first_pass auth sufficient pam_winbind.so use_first_pass auth required pam_deny.so account [default=2 success=ignore] pam_succeed_if.so quiet uid >= 10000000 account sufficient pam_succeed_if.so user ingroup AD_Admins debug account requisite pam_succeed_if.so user ingroup AD_Developers debug account required pam_access.so account required pam_unix.so broken_shadow account sufficient pam_localuser.so account sufficient pam_succeed_if.so uid < 500 quiet account [default=bad success=ok user_unknown=ignore] pam_krb5.so account [default=bad success=ok user_unknown=ignore] pam_winbind.so account required pam_permit.so password requisite pam_cracklib.so try_first_pass retry=3 password sufficient pam_unix.so md5 shadow nullok try_first_pass use_authtok password sufficient pam_krb5.so use_authtok password sufficient pam_winbind.so use_authtok password required pam_deny.so session [default=2 success=ignore] pam_succeed_if.so quiet uid >= 10000000 session sufficient pam_succeed_if.so user ingroup AD_Admins debug session requisite pam_succeed_if.so user ingroup AD_Developers debug session optional pam_mkhomedir.so umask=0077 skel=/etc/skel session optional pam_keyinit.so revoke session required pam_limits.so session optional pam_mkhomedir.so session [success=1 default=ignore] pam_succeed_if.so service in crond quiet use_uid session required pam_unix.so session optional pam_krb5.so
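
    One hedged knob to try, assuming the expired-password result actually reaches the pam_krb5/pam_winbind account lines (the earlier "account sufficient pam_succeed_if.so user ingroup ..." entries can end the account phase with success before those modules are consulted), is to map the expired-password return code to a hard failure so the login is refused instead of merely warned about. These are illustrative edits to the two account lines above, not a tested configuration:

        account [default=bad success=ok user_unknown=ignore new_authtok_reqd=die] pam_krb5.so
        account [default=bad success=ok user_unknown=ignore new_authtok_reqd=die] pam_winbind.so

    Test with a throwaway AD account first; getting the account stack wrong can lock everyone out of the box.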

    Read the article

  • Nginx location regex is not matching

    - by shtuff.it
    The following has been working to cache css and js for me: location ~ "^(.*)\.(min.)?(css|js)$" { expires max; } results: $ curl -I http://mysite.com/test.css HTTP/1.1 200 OK Server: nginx Date: Thu, 16 Jan 2014 18:55:28 GMT Content-Type: text/css Content-Length: 19578 Last-Modified: Mon, 13 Jan 2014 18:54:53 GMT Connection: keep-alive Expires: Thu, 31 Dec 2037 23:55:55 GMT Cache-Control: max-age=315360000 X-Backend: stage01 Accept-Ranges: bytes I am trying to set up versioning for my js/css using a 10-digit unix timestamp, and am having issues getting a match with the following valid regex. location ~ "^(.*)([\d]{10})\.(min\.)?(css|js)$" { expires max; } results: $ curl -I http://mysite.com/test_1234567890.css HTTP/1.1 200 OK Server: nginx Date: Thu, 16 Jan 2014 19:05:03 GMT Content-Type: text/css Content-Length: 19578 Last-Modified: Mon, 13 Jan 2014 18:54:53 GMT Connection: keep-alive X-Backend: stage01 Accept-Ranges: bytes
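
    One thing to keep in mind: when several regex locations could match, nginx uses the first one in the order they appear in the config, so if the original "\.(min.)?(css|js)$" block sits above the timestamp block it will win for the versioned URLs as well. A hedged debugging sketch (the X-Loc header name is made up) is to tag each block and let curl -I show which one actually handled the request:

        location ~ "^(.*)([\d]{10})\.(min\.)?(css|js)$" { add_header X-Loc versioned; expires max; }
        location ~ "^(.*)\.(min\.)?(css|js)$"           { add_header X-Loc plain;     expires max; }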

    Read the article

  • KVM-Guests can't get past bridge - no internet connection

    - by tmn29a
    I'm running a backported KVM on Debian Squeeze. At the moment the KVM guests can't connect to the internet through the bridge I have set up. The guests can reach each other and the host, but nothing outside. I can't ping, nslookup or do anything else to a remote address. The guests are configured with static IPs. When I didn't have this bridge but the virtual bridge (the KVM default), the guests could connect fine. After setting up the bridge things broke, so I think the problem lies there. # The loopback network interface auto lo br0 iface lo inet loopback # Bonding Interface auto bond0 iface bond0 inet static address 10.XXX.XXX.84 netmask 255.255.255.192 network 10.XXX.XXX.64 gateway 10.XXX.XXX.65 slaves eth0 eth1 bond_mode active-backup bond_miimon 100 bond_downdelay 200 bond_updelay 200 iface br0 inet static bridge_ports eth0 eth1 address 172.xxx.xxx.65 broadcast 172.xxx.xxx.127 netmask 255.255.255.192 gateway 172.xxx.xxx.65 bridge_stp on bridge_maxwait 0 Thanks in advance for your help!
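
    Two things stand out in that interfaces file: eth0 and eth1 are claimed twice (as bond0 slaves and again as bridge_ports), and br0's gateway is set to br0's own address, either of which can leave the bridge with no working uplink. A hedged rework, untested and with placeholder addresses, is to bridge on top of the bond instead of on the raw NICs:

        auto bond0
        iface bond0 inet manual
            slaves eth0 eth1
            bond_mode active-backup
            bond_miimon 100
            bond_downdelay 200
            bond_updelay 200

        auto br0
        iface br0 inet static
            bridge_ports bond0
            address 10.XXX.XXX.84
            netmask 255.255.255.192
            gateway 10.XXX.XXX.65
            bridge_stp off
            bridge_maxwait 0

    The guests then attach to br0 as before; the host keeps its address on the bridge rather than on the bond.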

    Read the article

  • Can I alias all directory requests to a single file in nginx?

    - by user749618
    I'm trying to figure out how to take all requests made to a particular directory and return a json string without a redirect, in nginx. Example: curl -i http://example.com/api/call1/ Expected result: HTTP/1.1 200 OK Accept-Ranges: bytes Content-Type: application/json Date: Fri, 13 Apr 2012 23:48:21 GMT Last-Modified: Fri, 13 Apr 2012 22:58:56 GMT Server: nginx X-UA-Compatible: IE=Edge,chrome=1 Content-Length: 38 Connection: keep-alive {"logout": true} Here's what I have so far in my nginx conf: location ~ ^/api/(.*)$ { index /api_logout.json; alias /path/to/file/api_logout.json; types { } default_type "application/json; charset=utf-8"; break; } However, when I try to make the request the Content-Type doesn't stick: $ curl -i http://example.com/api/call1/ HTTP/1.1 200 OK Accept-Ranges: bytes Content-Type: application/octet-stream Date: Fri, 13 Apr 2012 23:48:21 GMT Last-Modified: Fri, 13 Apr 2012 22:58:56 GMT Server: nginx X-UA-Compatible: IE=Edge,chrome=1 Content-Length: 38 Connection: keep-alive {"logout": true} Is there a better way to do this? How can I get the application/json type to stick?
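
    If every request under /api/ should get the same JSON body anyway, one hedged alternative (needs nginx 0.8.42 or later for a body argument to return) is to skip the filesystem and the alias entirely, which also sidesteps the MIME-type guessing:

        location ~ ^/api/ {
            default_type "application/json; charset=utf-8";
            return 200 '{"logout": true}';
        }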

    Read the article

  • Optimized CSF LFD to minimize false positive emails on new install? Centos6.2 + ISPConfig3

    - by Damainman
    I have a remote dedicated server running CentOS 6.2 x64bit with ISPConfig3. This is a brand new install. Server Purpose: Basic LAMP Web Hosting with PureFTPD, BIND, CLAMAV, RKHunter. Any advice or link to a guide which will clearly explain how to optimize the CSF+LFD configuration is greatly appreciated. I am not exactly sure where to start, or which restrictions I should and shouldn't loosen. At the moment my inbox is flooded with alerts from LFD such as: Suspicious process running under user postfix Excessive resource usage: haldaemon Account: haldaemon Resource: Process Time Exceeded: 1823 1800 (seconds) Executable: /usr/sbin/hald Command Line: hald PID: 1031 Killed: No Excessive resource usage: amavis Time: Tue Jun 5 12:43:35 2012 -0700 Account: amavis Resource: Virtual Memory Size Exceeded: 330 200 (MB) Executable: /usr/bin/perl Command Line: amavisd (virgin child) PID: 27931 Killed: No Excessive resource usage: apache Time: Tue Jun 5 12:35:33 2012 -0700 Account: apache Resource: Virtual Memory Size Exceeded: 437 200 (MB) Executable: /usr/sbin/httpd Command Line: /usr/sbin/httpd PID: 27286 Killed: No
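
    All of the alerts quoted above come from LFD's process-tracking thresholds, so a hedged first pass (directive names as in recent csf.conf versions; verify against the copy installed on the server) is to raise those limits and whitelist the daemons that are known to be legitimate, then reload:

        # /etc/csf/csf.conf
        PT_USERMEM = "512"       # MB of virtual memory before LFD alerts (amavis/apache trip the 200 default above)
        PT_USERTIME = "7200"     # seconds of process time before LFD alerts (hald trips the 1800 default)

        # /etc/csf/csf.pignore
        exe:/usr/sbin/hald
        exe:/usr/bin/perl
        exe:/usr/sbin/httpd

        # apply the changes
        csf -r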

    Read the article

  • using flock in bash, why does killing a child process kill the parent process too?

    - by Robby
    In the code snippet below, I want the script to be limited to only running one copy at a time, and for it to restart server.x if it dies for any reason. Without flock involved, the loop correctly restarts if I kill the server process, but once I use flock to ensure the script is only running once, if I kill server.x it also kills the parent process. How can I ensure that killing the child process in a flock script keeps the parent around? #!/bin/bash set -e ( flock -x -n 200 while true do ./server.x $1 done ) 200>/var/lock/.m_rst.$1.lock
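
    The likely culprit is set -e rather than flock: when server.x is killed it exits non-zero, and set -e then aborts the subshell and the wrapper script with it, lock or no lock. A hedged rework that keeps the locking but survives the child dying:

        #!/bin/bash
        (
            flock -x -n 200 || exit 1        # another copy already holds the lock
            while true
            do
                ./server.x "$1" || true      # a killed or crashed server must not abort the loop
            done
        ) 200>/var/lock/.m_rst.$1.lock

    Dropping set -e, or keeping it and guarding the one command that is allowed to fail as above, is the key change.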

    Read the article

  • Router reconfigures PC's and they can no longer access the internet via hardwired connection to DSL Modem

    - by zchads
    Router reconfigures PC's and they can no longer access the internet via hardwired connection to DSL Modem. Hardware Information: Buffalo Wireless Router/access point, Model: WZR-HP-G300NH-AP DSL Modem: Generic (actually not sure of manufacture). Service Provider: TOT (Thailand) Laptop-1: Windows XP and Belkin PCMIA Network Card Laptop-2: Windows XP unknown network card & Wifi Laptop-3: Windows 7 unknown network card & Wifi Outline of Problem/actions taken: After a recent power failure the router and laptop-1 connected to router were no longer able to access the internet. Actions taken to try and recover internet access: Using Laptop-1 tried to configure Router with PPPoe settings using software from Router Manufacture. During the installation process a Timeout error was experienced, unable to connect to WAN. Used Internet Explorer to communicate directly with Router using IP address. Changed settings to use PPPoe settings given by ISP. Router was not able to communicate with Internet. Repeated steps 1-4 again with no success. Reset Router and DSL modem. Repeated steps 1-4 again still no success. Tried connecting Laptop-1 directly to DSL to gain access to internet to research problem. No Network connection with DSL could be established…connection would be established for a second and then be lost and didn’t appear long enough to actually connect to DSL. Replugged LAN back into Router and connection was regained with laptop-1. Replugged Laptop-1 directly into DSL and again unable to establish network connection. Uninstalled network card and all of its drivers on Laptop-1. Reinstalled network card and drivers and tried connecting directly to DSL. Still unable to make network connection. Plugged DSL into Laptop-3 and Internet connection was established. Being Laptop-3 does not have a CD-Rom, Laptop-2 was tried to connect to the router. With Laptop-2 steps 1-7 ended up being repeated without success. Tried plugging Laptop-2 directly into DSL and again no network connection could be established. Using Laptop-3 with a direct connection to DSL downloaded latest Router FW. Installed router FW using Laptop-1. Tired the installation process again without success. Being desperate reinstalled OS on Laptop-1 still not success. Tried using “ipconfig” with router to see what was going on without success. With laptop-1 connected to DSL went through the “ipconfig /…” inputs to see if anything made a difference. Being the network card was not able to make a connection this provide very little information “media disconnected”. So now I have a router and two laptops which are unable to connect to the internet and sure could use some advice/help.

    Read the article

  • Will open files limit in Centos affect HTTP connections? Does the limit apply to a single session or all sessions?

    - by forestclown
    When I do a ulimit -n I get 256, which I assume means I can open 256 files at the same time. Does that mean 256 files within one single session, or across all sessions? For example, I log in to my server as user "abc" (via PuTTY/SSH) and open 200 files; with that session still running, I log in to the same server again as "abc" (via PuTTY/SSH). Can I then open only another 56 files, or another 256? Lastly, does this limit also restrict the number of HTTP connections? E.g. with the above example, I have opened 200 files, and then I use "wget" or "curl" to make HTTP connections. Thanks
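
    The nofile limit is per process, not a per-user or per-machine pool: every process you start inherits its own allowance of 256 descriptors, and sockets (including the HTTP connections wget/curl open) count against the same limit as regular files. A hedged set of checks and a persistent bump, with paths as on a stock CentOS install:

        ulimit -n            # soft limit for this shell and anything it spawns
        ulimit -Hn           # hard limit the soft value may be raised to
        ulimit -n 4096       # raise it for the current session only (up to the hard limit)

        # persistent change via /etc/security/limits.conf (takes effect on new logins):
        #   abc   soft   nofile   4096
        #   abc   hard   nofile   8192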

    Read the article

  • Send files ending in .mp4 in Apache with HTTP 206 Partial Content

    - by Pacha
    I am using Apache as web server and the return code is always HTTP/1.1 200. I want to set some kind of handler or use a mod to return HTTP/1.1 206 when the extension of the file requested is .mp4 so it can do video seeking, my web server is already returning some headers to do seeking, but it doesn't work. Is this possible? The HTTP headers http://*hidden*/media/movies/file/1080/d3191cd83109c593ec908f3a47efa8a2.mp4 GET /media/movies/file/1080/d3191cd83109c593ec908f3a47efa8a2.mp4 HTTP/1.1 Host: *hidden* User-Agent: Mozilla/5.0 (Windows NT 6.1; WOW64; rv:31.0) Gecko/20100101 Firefox/31.0 Accept: text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8 Accept-Language: en-US,en;q=0.5 Accept-Encoding: gzip, deflate Referer: http://vjs.zencdn.net/4.6/video-js.swf Cookie: csrftoken=zXngwwS1S827g7aAJYbHJS3ajn5BGq9M; sessionid=uj1hlj00c85aoehw0n5fye8waggb7uod Connection: keep-alive HTTP/1.1 200 OK Date: Thu, 21 Aug 2014 15:04:46 GMT Server: Apache/2.2.22 (Debian) X-Mod-H264-Streaming: version=2.2.7 Content-Length: 2148905782 Last-Modified: Wed, 13 Aug 2014 11:36:46 GMT Etag: "8e002a-8015b345-5008133ff23c4;-2146061514" Accept-Ranges: bytes Keep-Alive: timeout=5, max=100 Connection: Keep-Alive Content-Type: video/mp4
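
    Apache only answers 206 when the client actually sends a Range header; a plain GET is always a 200, so the status above is not by itself proof that seeking is broken. The X-Mod-H264-Streaming response header also suggests mod_h264_streaming is handling these files, and that module does its own ?start= style pseudo-streaming. A hedged way to check whether byte-range requests work at all (the URL is a placeholder for the hidden host):

        # a seek-capable setup should answer "HTTP/1.1 206 Partial Content" with a Content-Range header
        curl -I -H "Range: bytes=0-1023" "http://your-host/media/movies/file/1080/d3191cd83109c593ec908f3a47efa8a2.mp4"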

    Read the article

  • Copying 500GB Data to EC2 Instances Local Drive

    - by iCode
    Please do not ask me why (they made me), but I have to copy 500GB of data to the local drive of every one of the 200 nodes/instances that I am launching in EC2. For reasons beyond this post, this data must be on the local drive and not an EBS drive, so I cannot benefit from snapshots. What is the fastest way I can manage this? Copying from S3 to each node takes a long time. I tried attaching an EBS volume with the data to every node and then copying the data from EBS to the local drive, but that also takes a long time (several hours). Now I am also thinking of using BitTorrent, but I'm not sure how well it will work. What is the best way to copy 500GB of static data to the local drive of each of 200 EC2 instances? The 500GB of data is composed of several hundred files of varying size, but the biggest file is 20GB.

    Read the article

  • WCF .svc Accessible Over HTTP But Accessing WSDL Causes "Connection Was Reset"

    - by Wolfwyrd
    I have a WCF service which is hosted on IIS6 on a Win2003 SP2 machine. The WCF service is hosting correctly and is visible over HTTP giving the usual "To test this service, you will need to create a client and use it to call the service" message. However accessing the .svc?WSDL link causes the connection to be reset. The server itself is returning a 200 in the logs for the WSDL request, an example of which is shown here, the first call gets a connection reset, the second is a successful call for the .svc file. 2010-04-09 11:00:21 W3SVC6 MACHINENAME 10.79.42.115 GET /IntegrationService.svc wsdl 80 - 10.75.33.71 HTTP/1.1 Mozilla/4.0+(compatible;+MSIE+7.0;+Windows+NT+5.1;+.NET+CLR+2.0.50727;+.NET+CLR+1.1.4322;+.NET+CLR+1.0.3705;+InfoPath.1;+.NET+CLR+3.0.04506.30;+MS-RTC+LM+8;+.NET+CLR+3.0.4506.2152;+.NET+CLR+3.5.30729;) - - devsitename.mydevdomain.com 200 0 0 0 696 3827 2010-04-09 11:04:10 W3SVC6 MACHINENAME 10.79.42.115 GET /IntegrationService.svc - 80 - 10.75.33.71 HTTP/1.1 Mozilla/5.0+(Windows;+U;+Windows+NT+5.1;+en-GB;+rv:1.9.1.9)+Gecko/20100315+Firefox/3.5.9+(.NET+CLR+3.5.30729) - - devsitename.mydevdomain.com 200 0 0 3144 457 265 My Web.Config looks like: <system.serviceModel> <serviceHostingEnvironment > <baseAddressPrefixFilters> <add prefix="http://devsitename.mydevdomain.com" /> </baseAddressPrefixFilters> </serviceHostingEnvironment> <behaviors> <serviceBehaviors> <behavior name="My.Service.IntegrationServiceBehavior"> <serviceMetadata httpGetEnabled="true" /> <serviceDebug includeExceptionDetailInFaults="false" /> </behavior> </serviceBehaviors> </behaviors> <services> <service behaviorConfiguration="My.Service.IntegrationServiceBehavior" name="My.Service.IntegrationService"> <endpoint address="" binding="wsHttpBinding" contract="My.Service.Interfaces.IIntegrationService" bindingConfiguration="NoSecurityConfig" /> <endpoint address="mex" binding="mexHttpBinding" contract="IMetadataExchange" /> </service> </services> <bindings> <wsHttpBinding> <binding name="NoSecurityConfig"> <security mode="None" /> </binding> </wsHttpBinding> </bindings> </system.serviceModel> I'm pretty much stumped by this one. It works fine through the local dev server in VS2008 but fails after deployment. For further information, the targeted machine does not have a firewall (it's on the internal network) and the logs show the site thinks it's fine with 200 OK responses. I've also tried updating the endpoint with the full URL to my service however it still makes no difference. I have looked into some of the other options - creating a separate WSDL manually and exposing it through the metadata properties (really don't want to do that due to the maintenance issues). If anyone can offer any thoughts on this or any other workarounds it would be greatly appreciated.

    Read the article

  • How to calculate CIDR notation from entries in a routing table

    - by febreezey
    I have some entries in a routing table that were created using longest prefix matching, and I have to use those entries to determine the a.b.c.d/x notation (CIDR). This is an example entry: 11001000 00010111 00010. That was calculated from the range 11001000 00010111 00010000 00000000 through 11001000 00010111 00010111 11111111. I know the range is from IP addresses 200.23.16.0 to 200.23.23.255, but getting the /x for the subnet # doesn't make sense to me. Anyone know how to properly go about calculating it?
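
    The /x is just the number of bits in the stored prefix: 11001000 00010111 00010 is 8 + 8 + 5 = 21 bits, so the entry is 200.23.16.0/21. The remaining 11 bits are free, which is why the range covers 2^11 = 2048 addresses, from 200.23.16.0 to 200.23.23.255 (third octet 00010000 through 00010111, i.e. 16 through 23). A quick sanity check from a shell, if ipcalc happens to be installed (its exact output varies by distribution):

        echo $(( 2 ** (32 - 21) ))     # 2048 addresses in the block
        ipcalc 200.23.16.0/21          # should report the same network and broadcast as above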

    Read the article

  • Internal DNS server limiting the speed to 55KB/sec?

    - by kartook
    Hi all, thanks in advance to everyone. Here is my question. 1. We have an internal LAN DNS server (192.168.205.200). 2. The DNS server runs on my additional domain controller. 3. Tested with nslookup; IP address and hostname resolve without any error. 4. The DHCP server runs on a 3750 switch (checked with Cisco, who confirmed the configuration); the DNS name server is pointed to 192.168.205.200. Issue: 1. Hosts get their IP address and DNS from the DHCP server; maximum file transfer bandwidth is 55KB/sec. 2. With a static DNS assigned on the host (the ISP's DNS server address), the host gets full bandwidth, which is 1Mb/sec. Thanks, Kartook

    Read the article

  • Send individual e-mails to each contact in Gmail?

    - by Robert C. Cartaino
    I'm trying to send an e-mail to a group of contacts in Gmail (200 recipients, no spam). Is there a way to force Gmail to send the e-mail as 200 individual e-mails, each addressed to that specific person in the list? But I'm trying to protect their privacy: Sending to a contact group puts all their e-mail addresses in the To: field. Adding their addresses to the cc: field means everyone can see all the addresses. Adding their addresses to the bcc: field means that no one sees their address (not even their own) in the to: field. That looks odd and seems like that would trigger a lot of spam filters. So how can I force Gmail to send the e-mail addressed specifically to each contact in the list?

    Read the article

  • TomCat starts, but does not load properly

    - by user37136
    Hey guys, I've been working on this for a day now and still don't know what's wrong. I am essentially building a second environment for our web and app server. I got apache to load up just fine, but tomcat is proving to be difficult. It appears to start and load just fine, but when it comes to loading our application, its just got stuck for 2-5 minutes and then shut down. Here is the log on the original machine where it works fine: 2010-02-12 11:52:40,506 INFO Web application servlet context is initializing... 2010-02-12 11:52:40,540 DEBUG Servlet context attribute added: select_jobType=[{1,Undefined}, {100,Completion}, {200,Plugging}, {300,R+M}, {400,Workover}, {500,Swab - tubing}, {600,Swab - fluid}] 2010-02-12 11:52:40,540 DEBUG Servlet context attribute added: select_jobTaskType=[{1,Undefined}, {100,Rod part}, {200,Tubing leak}, {300,Pump change}, {400,Stripping job}, {500,Long stroke}, {600,A/L optimization}] 2010-02-12 11:52:40,541 DEBUG Servlet context attribute added: select_wellType=[{1,Undefined}, {100,Rod pump}, {200,ESP}, {300,Injector}, {400,PC pump}, {500,Co-Rod}, {600,Flowing}, {700,Storage}] 2010-02-12 11:52:40,541 DEBUG Servlet context attribute added: select_assetType=[{1,Rig}, {100,Disabled rig}] 2010-02-12 11:52:40,542 DEBUG Servlet context attribute added: select_state=[{AL,Alabama}, {AK,Alaska}, {AZ,Arizona}, {AR,Arkansas}, {CA,California}, {CO,Colorado}, {CT,Connecticut}, {DE,Delaware}, {FL,Florida}, {GA,Georgia}, {HI,Hawaii}, {ID,Idaho}, {IL,Illinois}, {IN,Indiana}, {IA,Iowa}, {KS,Kansas}, {KY,Kentucky}, {LA,Louisiana}, {ME,Maine}, {MD,Maryland}, {MA,Massachusetts}, {MI,Michigan}, {MN,Minnesota}, {MS,Mississippi}, {MO,Missouri}, {MT,Montana}, {NE,Nebraska}, {NV,Nevada}, {NH,New Hampshire}, {NJ,New Jersey}, {NM,New Mexico}, {NY,New York}, {NC,North Carolina}, {ND,North Dakota}, {OH,Ohio}, {OK,Oklahoma}, {OR,Oregon}, {PA,Pennsylvania}, {RI,Rhode Island}, {SC,South Carolina}, {SD,South Dakota}, {TN,Tennessee}, {TX,Texas}, {UT,Utah}, {VT,Vermont}, {VA,Virginia}, {WA,Washington}, {WV,West Virginia}, {WI,Wisconsin}, {WY,Wyoming}, {ACO,Atlantic Coast Offshore}, {FOAK,Federal Offshore Alaska}, {NGOM,Northern Gulf of Mexico}, {PCO,Pacific Coastal Offshore}] 2010-02-12 11:52:40,542 INFO KeyviewContextMonitor.contextInitialized: Loaded drop-down lists:com/key/portal/web/common/lists.properties 2010-02-12 11:52:40,937 DEBUG Servlet context attribute added: org.apache.struts.action.SERVLET_MAPPING=*.do 2010-02-12 11:52:40,937 DEBUG Servlet context attribute added: org.apache.struts.action.ACTION_SERVLET=org.apache.struts.action.ActionServlet@155d578 2010-02-12 11:52:41,939 DEBUG Servlet context attribute added: org.apache.struts.action.MODULE=org.apache.struts.config.impl.ModuleConfigImpl@e08e9d 2010-02-12 11:52:41,962 DEBUG Servlet context attribute added: org.apache.struts.action.FORM_BEANS=org.apache.struts.action.ActionFormBeans@b31c3c 2010-02-12 11:52:41,967 DEBUG Servlet context attribute added: org.apache.struts.action.FORWARDS=org.apache.struts.action.ActionForwards@102c646 2010-02-12 11:52:41,973 DEBUG Servlet context attribute added: org.apache.struts.action.MAPPINGS=org.apache.struts.action.ActionMappings@127276a 2010-02-12 11:52:41,974 DEBUG Servlet context attribute added: org.apache.struts.action.MESSAGE=org.apache.struts.util.PropertyMessageResources@18cae13 2010-02-12 11:52:41,984 DEBUG Servlet context attribute added: org.apache.struts.action.PLUG_INS=[Lorg.apache.struts.action.PlugIn;@f875ae 2010-02-12 11:52:46,816 INFO Sucessfully loaded application 
properties com/key/core/properties/application On my second environment, it didn't execute the last line. I start tomcat with the exact same command line !/bin/ksh export JAVA_HOME=/app/java export CATALINA_HOME=/app/tomcat export CATALINA_BASE=/app/keyview/appserver CATALINA_OPTS=" -Xms128m -Xmx800m -Dapplication.props=com/key/core/properties/application -Dlog4j.configuration=com/key/core/log/log4j.xml -Djava.awt.headless=true -Dlog4j.debug" export CATALINA_OPTS ${CATALINA_HOME}/bin/startup.sh I bolded the line that I think are in error. Thanks
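
    A hedged way to see where the second environment gets stuck: watch catalina.out while the webapp deploys, and take a thread dump during the 2-5 minute stall so the startup thread shows what it is blocked on (a database host, an LDAP/JNDI lookup, a remote service, and so on). jstack ships with the JDK; the pgrep pattern below assumes the standard Tomcat bootstrap class:

        tail -f $CATALINA_BASE/logs/catalina.out &
        jstack $(pgrep -f org.apache.catalina.startup.Bootstrap) > /tmp/tomcat-threads.txt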

    Read the article

  • Email being marked as spam

    - by Rodrigo Ferrari
    Hello friends! Friends, I tried a lot of changes, but no success to send the email correctly formated, I'm using the same domain to send mail and the email pass trough spf and authentication, but has been marked as spam for some accounts using gmail ou google app's. The header's are: Delivered-To: [email protected] Received: by 10.231.208.5 with SMTP id ga5cs194453ibb; Mon, 17 Jan 2011 11:08:33 -0800 (PST) Received: by 10.142.213.18 with SMTP id l18mr4141524wfg.192.1295291312735; Mon, 17 Jan 2011 11:08:32 -0800 (PST) Return-Path: <[email protected]> Received: from hm1315-29.locaweb.com.br (hm1315-29.locaweb.com.br [201.76.49.185]) by mx.google.com with ESMTP id a70si8528144yhd.33.2011.01.17.11.08.32; Mon, 17 Jan 2011 11:08:32 -0800 (PST) Received-SPF: pass (google.com: domain of [email protected] designates 201.76.49.185 as permitted sender) client-ip=201.76.49.185; Authentication-Results: mx.google.com; spf=pass (google.com: domain of [email protected] designates 201.76.49.185 as permitted sender) [email protected] Received: from hm1974.locaweb.com.br (189.126.112.86) by hm1315-38.locaweb.com.br (PowerMTA(TM) v3.5r15) id h6i9r00nvfo8 for <[email protected]>; Mon, 17 Jan 2011 17:08:31 -0200 (envelope-from <[email protected]>) X-Spam-Status: No Received: from bart0020.locaweb.com.br (bart0020.email.locaweb.com.br [200.234.210.22]) by hm1974.locaweb.com.br (Postfix) with ESMTP id 9C03511E00B5; Mon, 17 Jan 2011 17:08:31 -0200 (BRST) X-LocaWeb-COR: locaweb_2009_x-mail Received: from admin.domain.com.br (hm686.locaweb.com.br [200.234.200.116]) (Authenticated sender: [email protected]) by bart0020.locaweb.com.br (Postfix) with ESMTPA id 4B2F08CAFD6B; Mon, 17 Jan 2011 17:08:31 -0200 (BRST) Message-ID: <[email protected]> Date: Mon, 17 Jan 2011 17:08:31 -0200 Subject: Domain - Assunto From: Sistema <[email protected]> Reply-To: rodrigo <[email protected]> To: balucia <[email protected]> MIME-Version: 1.0 Content-Type: text/html; charset=utf-8 Content-Transfer-Encoding: quoted-printable X-Virus-Scanned: clamav-milter 0.96.5 at hm1974 X-Virus-Status: Clean This header has been marked as spam, I had no more ideas how to fix it and there are people borrowing me about this. Thanks and best regard's.
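
    SPF already passes in the headers above, so the usual remaining suspects are reverse DNS on the sending relay, the lack of a DKIM signature, and the From/Reply-To being different addresses. A few hedged checks from a shell, using the domain and relay IP shown in the headers (the DKIM selector name below is only a guess):

        dig +short TXT domain.com                        # the published SPF record
        dig +short -x 201.76.49.185                      # PTR for the relay; should resolve to a real hostname
        dig +short TXT default._domainkey.domain.com     # DKIM key, if a selector named "default" is published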

    Read the article

  • Unmounting a zfs pool while it is shared with sharenfs

    - by Ted W.
    I have a Solaris (OpenIndiana) system which is getting poor disk write performance. In order to enable ZIL in this version of ZFS I need to add a line to /etc/system. This will not take effect until I've unmounted and remounted the zpool. The trick is that this pool is shared via NFS to about 200 other servers to host users' home directories. I can guarantee that no users will be accessing the disks during this period of maintenance, but I would like to avoid having to issue an unmount on 200 systems just to unmount the disk on the Solaris box. My question is: with sharenfs, is it necessary to have all systems disconnected before unmounting the filesystem on the host? If it's possible, how do you go about it? I've already tried unmounting the normal way, and it reports the disk is busy. There is no lsof in Solaris, and pfiles (I think that's what it was) does not show anything obviously using the mounts.
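
    As for the NFS side of the question: clients do not normally have to unmount first; they just see the export disappear and recover once it is shared again (processes with files open at that exact moment will block or get stale-handle errors). The "busy" report usually means something on the Solaris box itself still holds the dataset. A hedged sequence on the OpenIndiana side, with a placeholder dataset name:

        zfs set sharenfs=off tank/home       # stop serving the export for the window
        fuser -c /tank/home                  # Solaris stand-in for lsof: PIDs holding the mount point
        zfs umount tank/home && zfs mount tank/home
        zfs set sharenfs=on tank/home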

    Read the article

  • Internet Explorer percent based layout issue

    - by Tom
    Heya, My goal is to make a layout that is 200% width and height, with four containers of equal height and width (100% each), using no javascript as the bear minimum (or preferably no hacks). Right now I am using HTML5, and CSS display:table. It works fine in Safari 4, Firefox 3.5, and Chrome 5. I haven't tested it yet on older versions. Nonetheless, in IE7 and IE8 this layout fails completely. (I do use the Javascript HTML5 enabling script /cc../, so it should not be the use of new HTML5 tags) Here is what I have: <!DOCTYPE html> <html> <head> <meta http-equiv="Content-Type" content="text/html; charset=utf-8" /> <meta charset="UTF-8" /> <title>IE issue with layout</title> <style type="text/css" media="all"> /* styles */ @import url("reset.css"); /* Generall CSS */ .table { display:table; } .row { display:table-row; } .cell { display:table-cell; } /* Specific CSS */ html, body { //overflow:hidden; I later intend to limit the viewport } section#body { position:absolute; width:200%; height:200%; overflow:hidden; } section#body .row { width:200%; height:50%; overflow:hidden; } section#body .row .cell { width:50%; overflow:hidden; } section#body .row .cell section { display:block; width:100%; height:100%; overflow:hidden; } section#body #stage0 section header { text-align:center; height:20%; display:block; } section#body #stage0 section footer { display:block; height:80%; } </style> </head> <body> <section id="body" class="table"> <section class="row"> <section id="stage0" class="cell"> <section> <header> <form> <input type="text" name="q" /> <input type="submit" value="Search" /> </form> </header> <footer> <table id="scrollers"> </table> </footer> </section> </section> <section id="stage1" class="cell"> <section> content </section> </section> </section> <section class="row"> <section id="stage2" class="cell"> <section> content </section> </section> <section id="stage3" class="cell"> <section> content </section> </section> </section> </section> </body> </html> You can see it live here: http://www.tombarrasso.com/ie-issue/

    Read the article

  • How to set parameters of CGContextAddArcToPoint method using a slider

    - by Manish Sahni
    am making an app in which i have to control a smile of a face like graphics. For example if i slide the slider down(to a value less than middle) it should give the arc a sad face(like :-( ) effect and if i slide the slider up the arc should give the effect of smile(like :-) ).initially lips are like :-| . I need to control the lips which is an arc using a slider? smileSliderViewController.h #import <UIKit/UIKit.h> @interface smileSliderViewController : UIViewController { IBOutlet UISlider *slider; } -(IBAction)valueChange:(id)sender; @end smileSliderViewController.m #import "smileSliderViewController.h" //#import "smile.h" @implementation smileSliderViewController -(IBAction)valueChange:(id)sender { int value = (int)(slider.value); if(value>0 && value<25) { //value--; CGContextRef context = UIGraphicsGetCurrentContext(); CGContextSetLineWidth(context, 2.0); CGColorSpaceRef colorspace = CGColorSpaceCreateDeviceRGB(); CGFloat components[] = {0.0,0.0,1.0,1.0}; CGColorRef color = CGColorCreate(colorspace, components); CGContextSetStrokeColorWithColor(context, color); CGContextMoveToPoint(context, 120,180); CGContextAddArcToPoint(context, 190 , 170, 270, 200, 0 ); CGContextStrokePath(context); } } smile.h #import <UIKit/UIKit.h> #import "smile.m" @interface smile : UIView { IBOutlet UISlider *slider; } @end smile.m #import "smile.h" #import "smileSliderViewController.h" @implementation smile - (id)initWithFrame:(CGRect)frame { if (self = [super initWithFrame:frame]) { // Initialization code } return self; } - (void)drawRect:(CGRect)rect { CGContextRef context = UIGraphicsGetCurrentContext(); CGContextSetLineWidth(context, 2.0); //CGContextSetStrokeColor(context, [UIColor redColor].CGColor); CGColorSpaceRef colorspace = CGColorSpaceCreateDeviceRGB(); CGFloat components[] = {0.0,0.0,1.0,1.0}; CGColorRef color = CGColorCreate(colorspace, components); CGContextSetStrokeColorWithColor(context, color); CGRect rectangle = CGRectMake(60, 20, 200,200); CGContextAddEllipseInRect(context, rectangle); CGContextStrokePath(context); /*CGRect rectangle1 = CGRectMake(130,170,50,10); CGContextAddRect(context,rectangle1); CGContextStrokePath(context);*/ /*CGContextMoveToPoint(context,100,200); CGContextAddArcToPoint(context,120,150,400,150,70 ); CGContextStrokePath(context);*/ /*CGContextMoveToPoint(context, 100,100); CGContextAddArcToPoint(context, -100,200, -400,200, 80); CGContextStrokePath(context);*/ CGContextSetStrokeColorWithColor(context,color); CGRect rectangle2 = CGRectMake(80, 90, 50, 8); CGContextAddEllipseInRect(context, rectangle2); CGContextStrokePath(context); CGContextSetStrokeColorWithColor(context,color); CGRect rectangle3 = CGRectMake(180, 90, 50, 8); CGContextAddEllipseInRect(context, rectangle3); CGContextStrokePath(context); CGContextSetStrokeColorWithColor(context,color); CGContextMoveToPoint(context, 155,120); CGContextAddLineToPoint(context, 155,160); CGContextStrokePath(context); /*CGContextSetStrokeColorWithColor(context, color); CGRect rectangle4 = CGRectMake(130, 180, 50,8); CGContextAddEllipseInRect(context, rectangle4); CGContextStrokePath(context);*/ /*CGContextSetStrokeColorWithColor(context,color); CGContextMoveToPoint(context, 120,180); CGContextAddLineToPoint(context, 190,180); CGContextStrokePath(context);*/ int value = (int)(slider.value); if(value == 25) { CGContextMoveToPoint(context, 120,180); CGContextAddArcToPoint(context, 190 , 180, 250 , 180, 0 ); CGContextStrokePath(context); } }

    Read the article

  • Apache2 memory usage when uploading large files

    - by abhaga
    Hi, I am running apache2.2.12 along with PHP 5.2.10. PHP is configured to run as a separate process through fcgid. The problem is that when users upload a file, size of the apache process swells by almost the same amount. So if somebody tries to upload a 200 MB file, one of the child process swells to current size+200 MB. If 2 users simultaneously start uploading, my server crashes. Now it is the virtual memory size which is increasing but since I am on a OpenVZ based VPS, that is what counts. My questions are: Is it the normal Apache behavior or can I do something to fix this? If not, is there a more memory efficient way of handling big file uploads. Going by the current behavior, I will need 1 GB of free RAM for every apache child accepting a upload. Thanks! Abhaya -
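
    With mod_fcgid the request body is read by Apache before it is handed to the PHP process, and how much of it is held in RAM (versus spooled to a temp file) is configurable in newer releases. A hedged starting point, assuming mod_fcgid 2.3.6 or later where these directive names exist:

        # keep only a small slice of each upload in memory and spool the rest to disk
        FcgidMaxRequestInMem  1048576        # bytes of the request body held in RAM (~1 MB)
        FcgidMaxRequestLen    314572800      # largest accepted request body (~300 MB)

    If the module is older than that, upgrading it is likely simpler than tuning around the buffering.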

    Read the article

  • Nginx no longer serves uwsgi application behind HAProxy - Looks for static file instead

    - by Ralph
    We implemented our web application using web2py. It consists of several modules offering a REST API at various resources (e.g. /dids, /replicas, ...). The API is used by clients implementing requests.py. My problem is that our web app works fine if it's behind HAProxy and hosted by Apache using mod_wsgi. It also works fine if the clients interact with nginx directly. It doesn't work though when using HAProxy in front of nginx. My guess is that HAProxy somehow modifies the request and thus nginx behaves differently i.e. looking for a static file instead of calling the WSGI container. Unfortunately I can't figure out what's exactly going (wr)on(g). Here are the relevant config sections of these three component's config files. At least I guess they are interesting. If you miss anything, please let me know. 1) haproxy.conf frontend app-lb bind loadbalancer:443 ssl crt /etc/grid-security/hostcertkey.pem default_backend nginx-servers mode http backend nginx-servers balance leastconn option forwardfor server nginx-01 nginx-server-int-01.domain.com:80 check 2) nginx.conf: sendfile off; #tcp_nopush on; keepalive_timeout 65; include /etc/nginx/conf.d/*.conf; server { server_name nginx-server-int-01.domain.com; root /path/to/app/; location / { uwsgi_pass unix:///tmp/app.sock; include uwsgi_params; uwsgi_read_timeout 600; # Requests can run for a serious long time } 3) uwsgi.ini [uwsgi] chdir = /path/to/app/ chmod-socket = 777 no-default-app = True socket = /tmp/app.sock manage-script-name = True mount = /dids=did.py mount = /replicas=replica.py callable = application Now when I let my clients go against nginx-server-int-01.domain.com everything is fine. In the access.log of nginx lines like these are appearing: 128.142.XXX.XX0 - - [23/Aug/2014:01:29:20 +0200] "POST /dids/attachments HTTP/1.1" 201 17 "-" "python-requests/2.3.0 CPython/2.6.6 Linux/2.6.32-358.23.2.el6.x86_64" "-" 128.142.XXX.XX0 - - [23/Aug/2014:01:29:20 +0200] "POST /dids/attachments HTTP/1.1" 201 17 "-" "python-requests/2.3.0 CPython/2.6.6 Linux/2.6.32-358.23.2.el6.x86_64" "-" 128.142.XXX.XX0 - - [23/Aug/2014:01:29:20 +0200] "POST /dids/user.ogueta/cnt_mc12_8TeV.16304.stream_name_too_long.other.notype.004202218365415e990b9997ea859f20.user/dids HTTP/1.1" 201 17 "-" "python-requests/2.3.0 CPython/2.6.6 Linux/2.6.32-358.23.2.el6.x86_64" "-" 128.142.XXX.XX0 - - [23/Aug/2014:01:29:20 +0200] "POST /replicas/list HTTP/1.1" 200 5282 "-" "python-requests/2.3.0 CPython/2.6.6 Linux/2.6.32-358.23.2.el6.x86_64" "-" 128.142.XXX.XX0 - - [23/Aug/2014:01:29:20 +0200] "POST /replicas/list HTTP/1.1" 200 5094 "-" "python-requests/2.3.0 CPython/2.6.6 Linux/2.6.32-358.23.2.el6.x86_64" "-" 128.142.XXX.XX0 - - [23/Aug/2014:01:29:20 +0200] "POST /replicas/list HTTP/1.1" 200 528 "-" "python-requests/2.3.0 CPython/2.6.6 Linux/2.6.32-358.23.2.el6.x86_64" "-" 128.142.XXX.XX0 - - [23/Aug/2014:01:29:21 +0200] "GET /dids/mc13_14TeV/dids/search?project=mc13_14TeV&stream_name=%2Adummy&type=dataset&datatype=NTUP_SMDYMUMU HTTP/1.1" 401 73 "-" "python-requests/2.3.0 CPython/2.6.6 Linux/2.6.32-358.23.2.el6.x86_64" "-" 128.142.XXX.XX0 - - [23/Aug/2014:01:29:21 +0200] "POST /replicas/list HTTP/1.1" 200 713 "-" "python-requests/2.3.0 CPython/2.6.6 Linux/2.6.32-358.23.2.el6.x86_64" "-" 128.142.XXX.XX0 - - [23/Aug/2014:01:29:21 +0200] "POST /dids/attachments HTTP/1.1" 201 17 "-" "python-requests/2.3.0 CPython/2.6.6 Linux/2.6.32-358.23.2.el6.x86_64" "-" But when I switch the clients to go against HAProxy (loadbalancer.domain.com:443), the error.log of nginx shows lines like 
these: 2014/08/23 01:26:01 [error] 1705#0: *21231 open() "/usr/share/nginx/html/dids/attachments" failed (2: No such file or directory), client: 128.142.XXX.XX1, server: localhost, request: "POST /dids/attachments HTTP/1.1", host: "loadbalancer.domain.com" 2014/08/23 01:26:02 [error] 1705#0: *21232 open() "/usr/share/nginx/html/replicas/list" failed (2: No such file or directory), client: 128.142.XXX.XX1, server: localhost, request: "POST /replicas/list HTTP/1.1", host: "loadbalancer.domain.com" 2014/08/23 01:26:02 [error] 1705#0: *21233 open() "/usr/share/nginx/html/dids/attachments" failed (2: No such file or directory), client: 128.142.XXX.XX1, server: localhost, request: "POST /dids/attachments HTTP/1.1", host: "loadbalancer.domain.com" 2014/08/23 01:26:02 [error] 1705#0: *21234 open() "/usr/share/nginx/html/replicas/list" failed (2: No such file or directory), client: 128.142.XXX.XX1, server: localhost, request: "POST /replicas/list HTTP/1.1", host: "loadbalancer.domain.com" 2014/08/23 01:26:02 [error] 1705#0: *21235 open() "/usr/share/nginx/html/dids/attachments" failed (2: No such file or directory), client: 128.142.XXX.XXX, server: localhost, request: "POST /dids/attachments HTTP/1.1", host: "loadbalancer" 2014/08/23 01:26:02 [error] 1705#0: *21238 open() "/usr/share/nginx/html/replicas/list" failed (2: No such file or directory), client: 128.142.XXX.XXX, server: localhost, request: "POST /replicas/list HTTP/1.1", host: "loadbalancer.domain.com" 2014/08/23 01:26:02 [error] 1705#0: *21239 open() "/usr/share/nginx/html/dids/attachments" failed (2: No such file or directory), client: 128.142.XXX.XXX, server: localhost, request: "POST /dids/attachments HTTP/1.1", host: "loadbalancer.domain.com" 2014/08/23 01:26:02 [error] 1705#0: *21242 open() "/usr/share/nginx/html/replicas/list" failed (2: No such file or directory), client: 128.142.XXX.XXX, server: localhost, request: "POST /replicas/list HTTP/1.1", host: "loadbalancer.domain.com" 2014/08/23 01:26:02 [error] 1705#0: *21244 open() "/usr/share/nginx/html/dids/attachments" failed (2: No such file or directory), client: 128.142.XXX.XXX, server: localhost, request: "POST /dids/attachments HTTP/1.1", host: "loadbalancer.domain.com" As you can see, that request looks the same, only the client IP changed, from the client's host to the one from loadbalancer.domain.com. But due to what ever reasons ngxin seems to assume that it is a static file to be served which eventually results in the file not found message. I searched the web for multiple hours already, but without much luck so far. Any help is very much appreciated. Cheers, Ralph
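
    The error log is the big clue: "server: localhost" and a document root of /usr/share/nginx/html mean the requests coming through HAProxy (which arrive with Host: loadbalancer.domain.com) are not matching the server_name of the uwsgi block and are falling into the distribution's default server, which just looks for static files. A hedged fix is to answer for the balancer's name as well, or make the uwsgi block the default:

        server {
            listen 80 default_server;
            server_name nginx-server-int-01.domain.com loadbalancer.domain.com;
            root /path/to/app/;

            location / {
                uwsgi_pass unix:///tmp/app.sock;
                include uwsgi_params;
                uwsgi_read_timeout 600;
            }
        }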

    Read the article

  • SQL Azure Federation - how much data before performance benefits?

    - by Donald Hughes
    To avoid premature optimization, I don't want to implement SQL Azure's Federation too early. Is there a rule of thumb for how much data a table would need to have before seeing performance benefits from sharding? I know there won't be a precise answer as there are too many variables to consider, especially with much of SQL Azure's resources being hidden/unknown. To put it into several, more concrete examples, would Federation improve performance in any of the below table scenarios: 100,000 rows (~ 200 MB) 1,000,000 rows (~ 2 GB) 10,000,000 rows (~ 20 GB) 100,000,000 rows (~ 200 GB) For the sake of elaboration, we can assume this is the largest table that would be federated, which consists of order details, which is joined to an orders table with a 'customer_id' foreign key, which would be the distribution key. This is a fairly standard multi-tenant, CRUD order entry system, with a typical assortment of reporting needs (customer order totals by day/month/year, etc).

    Read the article

  • Using wget to download pdf files from a site that requires cookies to be set

    - by matt74tm
    I want to access a newspaper site and then download their epaper copies (in PDF). The site requires me to login using my email address and password and then it permits me to access those PDF URLs. I'm having trouble 'setting my session' in wget. When I login into the site from my browser, it sets two cookie values: [email protected] Password=12345 I tried: wget --post-data "[email protected]&Password=12345" http://epaper.abc.com/login.aspx However, that just downloaded the login page and saved it locally The FORM on the login page has two fields: txtUserID txtPassword and radiobuttons like this: <input id="rbtnManchester" type="radio" checked="checked" name="txtpub" value="44"> Another button: <input id="rbtnLondon" type="radio" name="txtpub" value="64"> If I post this to the login.aspx page, I get the same output wget --post-data "[email protected]&txtPassword=12345&txtpub=44" http://epaper.abc.com/login.aspx If I do: --save-cookies abc_cookies.txt it doesnt seem to have anything other than the default content. For the last if I do --debug as well it says: ... Set-Cookie: ASP.NET_SessionId=05kphcn4hjmblq45qgnjoe41; path=/; HttpOnly ... Stored cookie epaper.abc.com -1 (ANY) / <session> <insecure> [expiry none] ASP.NET_SessionId 05kphcn4hjmblq45qgnjoe41 Length: 107253 (105K) [text/html] Saving to: `login.aspx' ... Saving cookies to abc_cookies.txt. However, abc_cookies.txt shows ONLY the following: # HTTP cookie file. # Generated by Wget on 2011-08-16 08:03:05. # Edit at your own risk. (Not sure why I'm not getting any responses on SO - perhaps SU is a better forum - http://stackoverflow.com/questions/7064171/using-wget-to-download-pdf-files-from-a-site-that-requires-cookies-to-be-set) EDIT 1 C:\Temp>wget --cookies=on --keep-session-cookies --save-cookies abc_cookies.txt --post-data "txtUserID=abc%40gmail.com&txtPassword=password&txtpub=44&chkbox=checkbox&submit.x=48&submit.y=7" http://epaper.abc.com/login.aspx --debug SYSTEM_WGETRC = c:/progra~1/wget/etc/wgetrc syswgetrc = C:\Program Files (x86)\GnuWin32/etc/wgetrc DEBUG output created by Wget 1.11.4 on Windows-MinGW. --2011-08-18 08:15:59-- http://epaper.abc.com/login.aspx Resolving epaper.abc.com... seconds 0.00, 999.999.99.99 Caching epaper.abc.com => 999.999.99.99 Connecting to epaper.abc.com|999.999.99.99|:80... seconds 0.00, connected. Created socket 300. Releasing 0x00a2ae80 (new refcount 1). ---request begin--- POST /login.aspx HTTP/1.0 User-Agent: Wget/1.11.4 Accept: */* Host: epaper.abc.com Connection: Keep-Alive Content-Type: application/x-www-form-urlencoded Content-Length: 100 ---request end--- [POST data: txtUserID=abc%40gmail.com&txtPassword=password&txtpub=44&chkbox=checkbox&submit.x=48&submit.y=7] HTTP request sent, awaiting response... ---response begin--- HTTP/1.1 200 OK Connection: keep-alive Date: Thu, 18 Aug 2011 02:46:17 GMT Server: Microsoft-IIS/6.0 X-Powered-By: ASP.NET X-AspNet-Version: 2.0.50727 Set-Cookie: ASP.NET_SessionId=owcrje55yl45kgmhn43gq145; path=/; HttpOnly Cache-Control: private Content-Type: text/html; charset=utf-8 Content-Length: 107253 ---response end--- 200 OK Registered socket 300 for persistent reuse. 
Stored cookie epaper.abc.com -1 (ANY) / <session> <insecure> [expiry none] ASP.NET_SessionId owcrje55yl45kgmhn43gq145 Length: 107253 (105K) [text/html] Saving to: `login.aspx.1' 100%[======================================================================================================================>] 107,253 24.9K/s in 4.2s 2011-08-18 08:16:05 (24.9 KB/s) - `login.aspx.1' saved [107253/107253] Saving cookies to abc_cookies.txt. Done saving cookies. C:\Temp>wget --referer=http://epaper.abc.com/login.aspx --cookies=on --load-cookies abc_cookies.txt --keep-session-cookies --save-cookies abc_cookies.txt http://epaper.abc.com/PagePrint/16_08_2011_001.pdf --debug SYSTEM_WGETRC = c:/progra~1/wget/etc/wgetrc syswgetrc = C:\Program Files (x86)\GnuWin32/etc/wgetrc DEBUG output created by Wget 1.11.4 on Windows-MinGW. Stored cookie epaper.abc.com -1 (ANY) / <session> <insecure> [expiry none] ASP.NET_SessionId owcrje55yl45kgmhn43gq145 --2011-08-18 08:16:12-- http://epaper.abc.com/PagePrint/16_08_2011_001.pdf Resolving epaper.abc.com... seconds 0.00, 999.999.99.99 Caching epaper.abc.com => 999.999.99.99 Connecting to epaper.abc.com|999.999.99.99|:80... seconds 0.00, connected. Created socket 300. Releasing 0x00598290 (new refcount 1). ---request begin--- GET /PagePrint/16_08_2011_001.pdf HTTP/1.0 Referer: http://epaper.abc.com/login.aspx User-Agent: Wget/1.11.4 Accept: */* Host: epaper.abc.com Connection: Keep-Alive Cookie: ASP.NET_SessionId=owcrje55yl45kgmhn43gq145 ---request end--- HTTP request sent, awaiting response... ---response begin--- HTTP/1.1 200 OK Connection: keep-alive Date: Thu, 18 Aug 2011 02:46:30 GMT Server: Microsoft-IIS/6.0 X-Powered-By: ASP.NET X-AspNet-Version: 2.0.50727 content-disposition: attachement; filename=Default_logo.gif Cache-Control: private Content-Type: image/GIF Content-Length: 4568 ---response end--- 200 OK Registered socket 300 for persistent reuse. Length: 4568 (4.5K) [image/GIF] Saving to: `16_08_2011_001.pdf' 100%[======================================================================================================================>] 4,568 7.74K/s in 0.6s 2011-08-18 08:16:14 (7.74 KB/s) - `16_08_2011_001.pdf' saved [4568/4568] Saving cookies to abc_cookies.txt. Done saving cookies. Contents of abc_cookies.txt epaper.abc.com FALSE / FALSE 0 ASP.NET_SessionId owcrje55yl45kgmhn43gq145
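
    Two hints in that output: the cookie jar only ever holds the ASP.NET_SessionId, and the "PDF" that came back is a 4.5 KB GIF (content-disposition: filename=Default_logo.gif), which is what an unauthenticated session would typically be served. ASP.NET login forms normally also require the hidden __VIEWSTATE and __EVENTVALIDATION fields to be posted back, so a hedged sketch is to fetch the form, pull those values out, and include them in the login POST (curl is used because --data-urlencode handles the encoding; the extraction below is naive and assumes the name attribute precedes value):

        curl -s -c abc_cookies.txt -o login.html http://epaper.abc.com/login.aspx
        VS=$(grep -o 'name="__VIEWSTATE"[^>]*value="[^"]*"' login.html | sed 's/.*value="//;s/"$//')
        EV=$(grep -o 'name="__EVENTVALIDATION"[^>]*value="[^"]*"' login.html | sed 's/.*value="//;s/"$//')
        curl -s -b abc_cookies.txt -c abc_cookies.txt -o after_login.html \
             --data-urlencode "txtUserID=abc@gmail.com" \
             --data-urlencode "txtPassword=12345" \
             --data-urlencode "txtpub=44" \
             --data-urlencode "__VIEWSTATE=$VS" \
             --data-urlencode "__EVENTVALIDATION=$EV" \
             http://epaper.abc.com/login.aspx
        curl -s -b abc_cookies.txt -o 16_08_2011_001.pdf \
             http://epaper.abc.com/PagePrint/16_08_2011_001.pdf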

    Read the article

  • Server currently under DDOS, not sure what to do.

    - by Volex
    Hi, My web server is currently under a DDOS attack I believe, the messages log is full of these kind of messages: May 13 15:51:19 kernel: nf_conntrack: table full, dropping packet. May 13 15:51:19 last message repeated 9 times May 13 15:51:24 kernel: __ratelimit: 78 callbacks suppressed May 13 15:51:24 kernel: nf_conntrack: table full, dropping packet. May 13 15:52:06 kernel: possible SYN flooding on port 80. Sending cookies. and a netstat has a huge amount of the following: tcp 0 0 my.host.com:http bb176da0.virtua.com.br:4998 SYN_RECV tcp 0 0 my.host.com:http 187.0.43.109:2694 SYN_RECV tcp 0 0 my.host.com:http 109.229.4.145:1722 SYN_RECV tcp 0 0 my.host.com:http 189-84-163-244.sodobr:63267 SYN_RECV tcp 0 0 my.host.com:http bd66839d.virtua.com.br:3469 SYN_RECV tcp 0 0 my.host.com:http 69.101.56.190.dsl.int:52552 SYN_RECV tcp 0 0 my.host.com:http pc-62-230-47-190.cm.vt:2262 SYN_RECV tcp 0 0 my.host.com:http 189-84-163-244.sodobr:63418 SYN_RECV tcp 0 0 my.host.com:http pc-62-230-47-190.cm.vt:1741 SYN_RECV tcp 0 0 my.host.com:http zaq3d739320.zaq.ne.jp:2141 SYN_RECV tcp 0 0 my.host.com:http netacc-gpn-4-80-73.po:52676 SYN_RECV tcpdump shows: 7:11:08.564510 IP 187-4-1xx-4.xxx.ipd.brasiltelecom.net.br.54821 my.host.com.http: S 999692166:999692166(0) win 65535 17:11:08.566347 IP 114-44-171-67.dynamic.hinet.net.1129 my.host.com.http: S 605369055:605369055(0) win 65535 17:11:08.570210 IP 200-101-13-130.pvoce300.ipd.brasiltelecom.net.br.5590 my.host.com.http: S 2813379182:2813379182(0) win 16384 17:11:08.571290 IP dsl-189-143-30-99-dyn.prod-infinitum.com.mx.1615 my.host.com.http: S 281542700:281542700(0) win 65535 17:11:08.583847 IP dsl-189-143-30-99-dyn.prod-infinitum.com.mx.1617 my.host.com.http: S 499413892:499413892(0) win 65535 17:11:08.588680 IP 170.51.229.112.2569 my.host.com.http: S 2195084898:2195084898(0) win 65535 17:11:08.588773 IP gw2-1.211.ru.3180 my.host.com.http: F 2315901786:2315901786(0) ack 2620913033 win 64240 17:11:08.590656 IP 200-101-13-130.pvoce300.ipd.brasiltelecom.net.br.5614 my.host.com.http: S 2813715032:2813715032(0) win 16384 17:11:08.591212 IP 203.82.82.54.15848 my.host.com.http: S 4070423507:4070423507(0) win 16384 17:11:08.591254 IP 203.82.82.54.2545 my.host.com.http: S 1790910784:1790910784(0) win 16384 17:11:08.591289 IP 203.82.82.54.28306 my.host.com.http: S 578615626:578615626(0) win 16384 17:11:08.591591 IP gw2-1.211.ru.3191 my.host.com.http: F 2316435991:2316435991(0) ack 2634205972 win 64240 17:11:08.591790 IP 200-101-13-130.pvoce300.ipd.brasiltelecom.net.br.5593 my.host.com.http: S 2813659017:2813659017(0) win 16384 17:11:08.593691 IP gw2-1.211.ru.3203 my.host.com.http: F 2316834420:2316834420(0) ack 2629074987 win 64240 I'm not sure what I can do to limit/mitigate this, currently no webpages are being served, any help gratefully appreciated.
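
    The log lines name two separate problems: the conntrack table is full (so legitimate packets get dropped) and the kernel is already falling back to SYN cookies for port 80. A hedged immediate mitigation, with sysctl paths as on 2.6-era kernels and values that are starting points rather than tuned numbers:

        echo 1      > /proc/sys/net/ipv4/tcp_syncookies                               # keep SYN cookies on
        echo 262144 > /proc/sys/net/netfilter/nf_conntrack_max                        # more room in the table
        echo 30     > /proc/sys/net/netfilter/nf_conntrack_tcp_timeout_syn_recv       # expire half-open entries faster

        # or stop tracking web traffic entirely so the table cannot fill up on it
        # (skip this if your firewall rules depend on connection state for port 80):
        iptables -t raw -A PREROUTING -p tcp --dport 80 -j NOTRACK
        iptables -t raw -A OUTPUT     -p tcp --sport 80 -j NOTRACK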

    Read the article

  • I am unable to connect to my netbook from any machine on my network until the netbook has pinged it

    - by Samuel Husky
    I have a rather strange issue with my netbook on my local network. When trying to connect to it in any way from a remote system it does not appear to find it. However if I get the netbook to ping the machine trying to connect it mystically appears to work. Below is the ping test from my main PC to the netbook. C:\Users\Sam>ping 192.168.8.102 Pinging 192.168.8.102 with 32 bytes of data: Reply from 192.168.8.100: Destination host unreachable. Reply from 192.168.8.100: Destination host unreachable. Reply from 192.168.8.100: Destination host unreachable. Reply from 192.168.8.100: Destination host unreachable. Ping statistics for 192.168.8.102: Packets: Sent = 4, Received = 4, Lost = 0 (0% loss), Now a ping from the netbook to my main PC sam@malamute ~ $ ping 192.168.8.100 PING 192.168.8.100 (192.168.8.100) 56(84) bytes of data. 64 bytes from 192.168.8.100: icmp_req=1 ttl=128 time=2.46 ms 64 bytes from 192.168.8.100: icmp_req=2 ttl=128 time=0.835 ms 64 bytes from 192.168.8.100: icmp_req=3 ttl=128 time=1.60 ms 64 bytes from 192.168.8.100: icmp_req=4 ttl=128 time=1.32 ms 64 bytes from 192.168.8.100: icmp_req=5 ttl=128 time=1.34 ms ^C --- 192.168.8.100 ping statistics --- 5 packets transmitted, 5 received, 0% packet loss, time 4004ms rtt min/avg/max/mdev = 0.835/1.514/2.460/0.536 ms And the same ping again from the main PC after the netbook has made a connection to it C:\Users\Sam>ping 192.168.8.102 Pinging 192.168.8.102 with 32 bytes of data: Reply from 192.168.8.102: bytes=32 time=1ms TTL=64 Reply from 192.168.8.102: bytes=32 time=1ms TTL=64 Reply from 192.168.8.102: bytes=32 time=1ms TTL=64 Reply from 192.168.8.102: bytes=32 time=1ms TTL=64 Ping statistics for 192.168.8.102: Packets: Sent = 4, Received = 4, Lost = 0 (0% loss), Approximate round trip times in milli-seconds: Minimum = 1ms, Maximum = 1ms, Average = 1ms The netbook is running Gentoo and is currently connected via wireless. My main PC is running Windows 7 however I get the same result no matter what PC I use on this network. Please see this example from a CentOS machine on the same network [root@tiger ~]# ping 192.168.8.102 PING 192.168.8.102 (192.168.8.102) 56(84) bytes of data. From 192.168.8.200 icmp_seq=2 Destination Host Unreachable From 192.168.8.200 icmp_seq=3 Destination Host Unreachable From 192.168.8.200 icmp_seq=4 Destination Host Unreachable --- 192.168.8.102 ping statistics --- 6 packets transmitted, 0 received, +3 errors, 100% packet loss, time 5000ms , pipe 3 If you need any more information or require logs or config files please let me know and any assistance is greatly appreciated. Additional info: No responses on TCP dump from the netbook. Same result when booting into Ubuntu from a USB key. No issue when using a wired Ethernet connection.
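
    "Destination host unreachable" reported from the pinging machine's own address means ARP for 192.168.8.102 is never answered until the netbook has talked to that host first, which usually points at wireless power saving, AP client isolation, or an arp_ignore/ARP filtering setting rather than anything at the IP layer. A few hedged checks on the netbook while another box pings it (the interface name wlan0 is an assumption):

        sudo tcpdump -ni wlan0 arp                      # do "who-has 192.168.8.102" requests arrive, and are they answered?
        cat /proc/sys/net/ipv4/conf/all/arp_ignore      # non-zero values make the kernel ignore some ARP queries
        ip neigh show                                   # the netbook's own neighbour table
        sudo iw dev wlan0 set power_save off            # rule out wireless power management (iw availability varies)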

    Read the article
