Search Results

Search found 50591 results on 2024 pages for 'http protocols'.


  • How to implement a network protocol?

    - by gotch4
    Here is a generic question. I'm not looking for the single best answer; I'd just like you to share your favourite practices. I want to implement a network protocol in Java (though this is a rather general question; I've faced the same issues in C++), and it's not the first time, as I have done this before. But I think I'm missing a good way to implement it. Usually it comes down to exchanging text messages and some byte buffers between hosts, storing the current status, and waiting until the next message arrives. The problem is that I usually end up with a pile of switch statements and more or less complex if statements that react to the different statuses and messages. The whole thing gets complicated and hard to maintain. Not to mention that what comes out sometimes has "blind spots": protocol states that were never covered and that behave unpredictably. I have tried writing state machine classes that check the start and end status of each action in more or less clever ways. That makes programming the protocol very laborious, as I have to write lines and lines of code to cover every possible situation. What I'd like is a good pattern, or a best practice used in programming complex protocols, that is easy to maintain, easy to extend, and very readable. What are your suggestions?
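    One pattern that holds up well here is an enum-based state machine, where each state owns the set of messages it accepts and names its successor, so there is exactly one place where an unexpected message can fall through. A minimal Java sketch, with all protocol message names invented for illustration:

        // Enum-based protocol state machine; HELLO/DATA/BYE/END are invented
        // message names. Each state decides which messages it accepts and which
        // state follows, so no central switch over status/message pairs exists.
        enum ProtocolState {
            AWAITING_HELLO {
                @Override ProtocolState onMessage(String msg) {
                    return msg.startsWith("HELLO") ? READY : ERROR;
                }
            },
            READY {
                @Override ProtocolState onMessage(String msg) {
                    if (msg.startsWith("DATA")) return RECEIVING;
                    if (msg.equals("BYE")) return CLOSED;
                    return ERROR;
                }
            },
            RECEIVING {
                @Override ProtocolState onMessage(String msg) {
                    return msg.equals("END") ? READY : RECEIVING;
                }
            },
            CLOSED, ERROR;

            // Default for terminal states: any further message is a violation.
            ProtocolState onMessage(String msg) {
                throw new IllegalStateException(this + " accepts no messages: " + msg);
            }
        }

        // The session loop then stays trivial: feed messages, track one field.
        class Session {
            private ProtocolState state = ProtocolState.AWAITING_HELLO;
            void handle(String msg) { state = state.onMessage(msg); }
        }

    The "blind spots" then fail loudly in one spot (the terminal default and the ERROR state) instead of slipping through a forgotten case.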

    Read the article

  • What is the best way to download files via HTTP using .NET?

    - by Shamika
    In one of my applications I'm using the WebClient class to download files from a web server. Depending on the web server, the application sometimes downloads millions of documents. It seems that when there are a lot of documents, the WebClient doesn't scale up well performance-wise. It also seems that the WebClient doesn't immediately close the connection it opened to the web server, even after it has successfully downloaded the particular document. I would like to know what other alternatives I have.
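    Whatever the WebClient alternative ends up being, the pattern that scales is to reuse one client and its keep-alive connections and to release every response deterministically. The question is about .NET, but the shape of the fix is stack-independent; here it is sketched in Java purely for illustration (the file-naming helper is hypothetical):

        import java.io.*;
        import java.net.HttpURLConnection;
        import java.net.URL;
        import java.nio.file.*;
        import java.util.List;

        // Download many documents over reused keep-alive connections.
        // HttpURLConnection reuses the underlying socket as long as each
        // response body is fully read and then closed.
        public class BulkDownloader {
            static void downloadAll(List<String> urls, Path dir) throws IOException {
                byte[] buf = new byte[8192];
                for (String u : urls) {
                    HttpURLConnection conn = (HttpURLConnection) new URL(u).openConnection();
                    try (InputStream in = conn.getInputStream();
                         OutputStream out = Files.newOutputStream(dir.resolve(fileNameFor(u)))) {
                        int n;
                        while ((n = in.read(buf)) != -1) out.write(buf, 0, n);
                    } // closing the drained stream returns the socket to the pool
                }
            }

            // Hypothetical helper: derive a local file name from the URL.
            static String fileNameFor(String u) {
                return Integer.toHexString(u.hashCode()) + ".bin";
            }
        }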

    Read the article

  • Fastest reliable way for Clojure (Java) and Ruby apps to communicate

    - by jkndrkn
    Hi there, We have cloud-hosted (Rackspace Cloud) Ruby and Java apps that will interact as follows: the Ruby app sends a request to the Java app, consisting of a map structure containing strings, integers, other maps, and lists (analogous to JSON); the Java app analyzes the data and sends a reply to the Ruby app. We are interested in evaluating both messaging formats (JSON, Protocol Buffers, Thrift, etc.) and message transmission channels/techniques (sockets, message queues, RPC, REST, SOAP, etc.). Our criteria: short round-trip time; low round-trip-time standard deviation (we understand that garbage collection pauses and network usage spikes can affect this value); high availability; scalability (we may want multiple instances of the Ruby and Java apps exchanging point-to-point messages in the future); ease of debugging and profiling; and good documentation and community support. Bonus points for Clojure support. What combination of message format and transmission method would you recommend, and why? I've gathered here some materials we have already collected for review: a comparison of various Java serialization options; a comparison of Thrift and Protocol Buffers (old); a comparison of various data interchange formats; a newer comparison of Thrift and Protocol Buffers; "Fallacies of Protocol Buffers RPC features"; a discussion of RPC in the context of AMQP (message queueing); a comparison of RPC and message-passing in distributed systems (PDF); a criticism of RPC from the perspective of a message-passing fan; and an overview of Avro from a Ruby programmer's perspective.
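    Whichever candidates make the shortlist, it helps to measure round-trip time and its spread with the real payload shape before committing. A throwaway Java timer over one candidate transport (a plain TCP socket) and one candidate encoding (a JSON string), as a sketch; host, port, and payload are placeholders:

        import java.io.*;
        import java.net.Socket;
        import java.util.Arrays;

        // Sends the same request repeatedly and reports median and p99 latency,
        // which speaks to both the round-trip-time and standard-deviation criteria.
        public class RoundTripBench {
            public static void main(String[] args) throws IOException {
                String payload = "{\"op\":\"analyze\",\"values\":[1,2,3]}";
                long[] samples = new long[1000];
                try (Socket s = new Socket("java-app.internal", 9090)) {
                    BufferedWriter out = new BufferedWriter(
                        new OutputStreamWriter(s.getOutputStream(), "UTF-8"));
                    BufferedReader in = new BufferedReader(
                        new InputStreamReader(s.getInputStream(), "UTF-8"));
                    for (int i = 0; i < samples.length; i++) {
                        long t0 = System.nanoTime();
                        out.write(payload);
                        out.newLine();
                        out.flush();
                        in.readLine(); // block until the reply line arrives
                        samples[i] = System.nanoTime() - t0;
                    }
                }
                Arrays.sort(samples);
                System.out.printf("median=%dus p99=%dus%n",
                    samples[samples.length / 2] / 1000,
                    samples[(int) (samples.length * 0.99)] / 1000);
            }
        }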

    Read the article

  • HTTP DOM: request.Use() usage?

    - by Jim G.
    I'm looking at the following code block in JavaScript:

        var request = new Request();
        if (request.Use())   // What exactly does this do?
        {
            // ...do stuff
        }
        else
        {
            // no ajax support?
        }

    I've never seen anyone invoke the request.Use() method. My question: What exactly does request.Use() check? Does it in fact check for AJAX support? Can anyone redirect me to an online API reference?

    Read the article

  • Is there any way to validate the Open Graph protocol meta tags for Facebook integration?

    - by John
    Is there any way to validate the Facebook Open Graph protocol meta tags in the head section of my website? Code below.

        <meta property="og:title" content="my content" />
        <meta property="og:type" content="company" />
        <meta property="og:url" content="http://mycompany.com/" />
        <meta property="og:image" content="http://mycompany.com/image.png" />
        <meta property="og:site_name" content="my site name" />
        <meta property="fb:admins" content="my_id" />
        <meta property="og:description" content="my description" />

    -edit- I mean validating the HTML. Sorry for the confusion! Right now my site isn't valid because of these tags.

    Read the article

  • Has anyone properly interpreted an HTTP request based on this winpcap demo?

    - by httpinterpret
    The example is here, and I tried it by changing the filter to tcp and dst port 80 and using the following handler:

        void packet_handler(u_char *param, const struct pcap_pkthdr *header, const u_char *pkt_data)
        {
            ....
            /* IP header length in bytes (low nibble of ver_ihl is in 32-bit words) */
            ip_len = (ih->ver_ihl & 0xf) * 4;
            /* TCP data offset (high nibble of byte 12 of the TCP header, in words) */
            tcp_len = (((u_char*)ih)[ip_len + 12] >> 4) * 4;
            tcpPayload = (u_char*)ih + ip_len + tcp_len;

            /* start of url - skip "GET " */
            url = tcpPayload + 4;
            end_url = strchr((char*)url, ' ');
            url_length = end_url - url;

            final_url = (u_char*)malloc(url_length + 1);
            strncpy((char*)final_url, (char*)url, url_length);
            final_url[url_length] = '\0';
            printf("%s\n", final_url);
            ....
        }

    But when I debug, I see that tcpPayload is full of garbage rather than the expected "GET ..." data. What's wrong with my implementation?

    Read the article

  • Android: while saving an HTTP response to a file, how to know it wasn't fully loaded?

    - by Stan
    I'm using this approach to store a big response from the server in a file, to parse it later:

        final HttpClient client = new DefaultHttpClient(new BasicHttpParams());
        final HttpGet mHttpGetRequest = new HttpGet(strUrl);
        mHttpGetRequest.setHeader("Content-Type", "application/x-www-form-urlencoded");
        FileOutputStream fos = null;
        try {
            final HttpResponse response = client.execute(mHttpGetRequest);
            final StatusLine statusLine = response.getStatusLine();
            lastHttpErrorCode = statusLine.getStatusCode();
            lastHttpErrorMsg = statusLine.getReasonPhrase();
            if (lastHttpErrorCode == 200) {
                HttpEntity entity = response.getEntity();
                fos = new FileOutputStream(reponseFile);
                entity.writeTo(fos);
                entity.consumeContent();
                fos.flush();
            }
        } catch (ClientProtocolException e) {
            e.printStackTrace();
            lastHttpErrorMsg = e.toString();
            return null;
        } catch (final ParseException e) {
            e.printStackTrace();
            lastHttpErrorMsg = e.toString();
            return null;
        } catch (final UnknownHostException e) {
            e.printStackTrace();
            lastHttpErrorMsg = e.toString();
            return null;
        } catch (IOException e) {
            e.printStackTrace();
            lastHttpErrorMsg = e.toString();
        } finally {
            if (fos != null) try { fos.close(); } catch (IOException e) {}
        }

    Now, how can I ensure the response was completely received before it was saved to the file? Assume the client's device lost its Internet connection while this code was running, so the app received only part of the real response. I'm pretty sure this happens, because I get parsing exceptions like "tag not closed" and "unexpected end of file". So I need to detect this situation somehow, to prevent the code from parsing a partial response, but I can't see how. Is it possible at all, and how do I do it? Or does it have to raise an IOException in such cases?
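    One practical check, sketched against the same HttpClient API the question already uses (and assuming the fields from the snippet above): when the server sends a Content-Length header, compare it with the number of bytes actually written before trusting the file.

        HttpEntity entity = response.getEntity();
        long expected = entity.getContentLength(); // -1 if no Content-Length was sent
        fos = new FileOutputStream(reponseFile);
        entity.writeTo(fos);
        fos.flush();
        long written = reponseFile.length();
        if (expected >= 0 && written != expected) {
            // Connection dropped mid-transfer: discard the partial file
            reponseFile.delete();
            lastHttpErrorMsg = "Truncated response: got " + written
                    + " of " + expected + " bytes";
        }

    For chunked responses (no Content-Length), an abrupt disconnect usually surfaces as an IOException out of writeTo, so treating any IOException as "file incomplete" and deleting the partial file covers that path as well.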

    Read the article

  • How to perform an action when a remote (HTTP) file changes?

    - by ZeissS
    Hi, I want to create a script that checks a URL and performs an action (download + unzip) whenever the "Last-Modified" header of the remote file changes. I thought about fetching the header with curl, but then I'd have to store it somewhere for each file and do a date comparison. Does anyone have a different idea using (mostly) standard unix tools? Thanks
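    Worth noting: HTTP can do the comparison server-side. Send the stored timestamp back as If-Modified-Since and the server answers 304 Not Modified when nothing changed (curl's -z/--time-cond flag does exactly this). Since the mechanism is protocol-level rather than tool-specific, here is a minimal Java sketch of the same idea, for illustration:

        import java.io.IOException;
        import java.net.HttpURLConnection;
        import java.net.URL;

        // Conditional GET: the server replies 304 Not Modified when the file is
        // unchanged since the timestamp we pass, so no local date parsing is needed.
        public class CheckModified {
            public static void main(String[] args) throws IOException {
                String url = args[0];
                long lastSeen = Long.parseLong(args[1]); // epoch millis from last run
                HttpURLConnection conn =
                    (HttpURLConnection) new URL(url).openConnection();
                conn.setIfModifiedSince(lastSeen);
                if (conn.getResponseCode() == HttpURLConnection.HTTP_NOT_MODIFIED) {
                    System.out.println("unchanged");
                } else {
                    // download + unzip here, then persist this for the next run:
                    System.out.println("changed; Last-Modified=" + conn.getLastModified());
                }
                conn.disconnect();
            }
        }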

    Read the article

  • HTTP Push from SQL Server — Comet SQL

    This article provides an example solution for presenting data in "real time" from Microsoft SQL Server in an HTML browser. It shows how to implement Comet functionality in ASP.NET and how to connect Comet to SQL Server's Query Notifications.
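    The article targets ASP.NET, but the Comet idea itself is small: the server parks each request until the data changes (or a timeout passes), answers, and the browser immediately asks again. A long-polling sketch in Java servlet terms, with invented names, just to fix the concept:

        import java.io.IOException;
        import java.util.concurrent.BlockingQueue;
        import java.util.concurrent.LinkedBlockingQueue;
        import java.util.concurrent.TimeUnit;
        import javax.servlet.annotation.WebServlet;
        import javax.servlet.http.*;

        // Long polling: doGet blocks until a change arrives or 30s elapse.
        // (A production Comet server would free the thread with async I/O;
        // this sketch holds it, which is the simplest form of the idea.)
        @WebServlet("/updates")
        public class UpdatesServlet extends HttpServlet {
            // Filled by whatever watches the database (e.g. Query Notifications).
            static final BlockingQueue<String> changes = new LinkedBlockingQueue<>();

            @Override
            protected void doGet(HttpServletRequest req, HttpServletResponse resp)
                    throws IOException {
                try {
                    String change = changes.poll(30, TimeUnit.SECONDS);
                    resp.setContentType("application/json");
                    resp.getWriter().write(change != null ? change : "{}");
                } catch (InterruptedException e) {
                    Thread.currentThread().interrupt();
                }
            }
        }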

    Read the article

  • Getting attacked: what do I do?

    - by E3pO
    I'm getting millions of these requests! How can I stop them?

        173.59.227.11 - - [30/May/2012:18:23:45 -0400] "GET /?id=1338416620414 HTTP/1.1" 200 28 "http://108.166.97.22/" "Mozilla/5.0 (Windows NT 6.1; WOW64) AppleWebKit/536.5 (KHTML, like Gecko) Chrome/19.0.1084.52 Safari/536.5"
        173.72.197.39 - - [30/May/2012:18:23:45 -0400] "GET /?id=1338416641552 HTTP/1.1" 200 28 "http://108.166.97.22/" "Mozilla/5.0 (Windows NT 6.1; WOW64) AppleWebKit/536.5 (KHTML, like Gecko) Chrome/19.0.1084.52 Safari/536.5"
        2.222.7.143 - - [30/May/2012:18:23:45 -0400] "GET /?id=1338416647004 HTTP/1.1" 200 28 "http://108.166.97.22/" "Mozilla/5.0 (Windows NT 5.1) AppleWebKit/536.5 (KHTML, like Gecko) Chrome/19.0.1084.52 Safari/536.5"
        62.83.154.11 - - [30/May/2012:18:23:45 -0400] "GET /?id=1338416572373 HTTP/1.1" 200 28 "http://108.166.97.22/" "Mozilla/5.0 (Windows NT 6.1) AppleWebKit/536.5 (KHTML, like Gecko) Chrome/19.0.1084.52 Safari/536.5"
        65.35.221.207 - - [30/May/2012:18:23:45 -0400] "GET /?id=1338416453921 HTTP/1.1" 200 28 "http://108.166.97.22/" "Mozilla/4.0 (compatible; MSIE 8.0; Windows NT 5.1; Trident/4.0; BOIE8;ENUS)"
        68.40.182.244 - - [30/May/2012:18:23:45 -0400] "GET /?id=1338415880184 HTTP/1.1" 200 28 "http://108.166.97.22/" "Mozilla/5.0 (Windows NT 5.1; rv:12.0) Gecko/20100101 Firefox/12.0"
        99.244.26.33 - - [30/May/2012:18:23:45 -0400] "GET /?id=1338384208421 HTTP/1.1" 200 28 "http://108.166.97.22/" "Mozilla/5.0 (Windows NT 6.1; WOW64; rv:12.0) Gecko/20100101 Firefox/12.0"
        65.12.234.229 - - [30/May/2012:18:23:45 -0400] "GET /?id=1338415812217 HTTP/1.1" 200 28 "http://108.166.97.22/" "Mozilla/5.0 (Windows NT 5.1) AppleWebKit/536.5 (KHTML, like Gecko) Chrome/19.0.1084.52 Safari/536.5"
        173.59.227.11 - - [30/May/2012:18:23:45 -0400] "GET /?id=1338416620415 HTTP/1.1" 200 28 "http://108.166.97.22/" "Mozilla/5.0 (Windows NT 6.1; WOW64) AppleWebKit/536.5 (KHTML, like Gecko) Chrome/19.0.1084.52 Safari/536.5"
        68.40.182.244 - - [30/May/2012:18:23:45 -0400] "GET /?id=1338415881181 HTTP/1.1" 200 28 "http://108.166.97.22/" "Mozilla/5.0 (Windows NT 5.1; rv:12.0) Gecko/20100101 Firefox/12.0"
        188.82.242.197 - - [30/May/2012:18:23:45 -0400] "GET /?id=1338414398872 HTTP/1.1" 200 28 "http://108.166.97.22/" "Mozilla/5.0 (Macintosh; Intel Mac OS X 10.7; rv:12.0) Gecko/20100101 Firefox/12.0"
        99.244.26.33 - - [30/May/2012:18:23:45 -0400] "GET /?id=1338384208454 HTTP/1.1" 200 28 "http://108.166.97.22/" "Mozilla/5.0 (Windows NT 6.1; WOW64; rv:12.0) Gecko/20100101 Firefox/12.0"
        173.59.227.11 - - [30/May/2012:18:23:45 -0400] "GET /?id=1338416620424 HTTP/1.1" 200 28 "http://108.166.97.22/" "Mozilla/5.0 (Windows NT 6.1; WOW64) AppleWebKit/536.5 (KHTML, like Gecko) Chrome/19.0.1084.52 Safari/536.5"
        68.40.182.244 - - [30/May/2012:18:23:45 -0400] "GET /?id=1338415882180 HTTP/1.1" 200 28 "http://108.166.97.22/" "Mozilla/5.0 (Windows NT 5.1; rv:12.0) Gecko/20100101 Firefox/12.0"
        65.12.234.229 - - [30/May/2012:18:23:45 -0400] "GET /?id=1338415812229 HTTP/1.1" 200 28 "http://108.166.97.22/" "Mozilla/5.0 (Windows NT 5.1) AppleWebKit/536.5 (KHTML, like Gecko) Chrome/19.0.1084.52 Safari/536.5"
        95.34.134.51 - - [30/May/2012:18:23:45 -0400] "GET /?id=1338416367865 HTTP/1.1" 200 28 "http://108.166.97.22/" "Mozilla/5.0 (Macintosh; Intel Mac OS X 10_7_0) AppleWebKit/536.5 (KHTML, like Gecko) Chrome/19.0.1084.52 Safari/536.5"
        65.35.221.207 - - [30/May/2012:18:23:45 -0400] "GET /?id=1338416453937 HTTP/1.1" 200 28 "http://108.166.97.22/" "Mozilla/4.0 (compatible; MSIE 8.0; Windows NT 5.1; Trident/4.0; BOIE8;ENUS)"

    How can I filter GET requests containing "http://108.166.97.22/"?

    Read the article

  • /server-status shows over 240 requests like "OPTIONS * HTTP/1.0" 200 - "-" "Apache (internal dummy connection)"

    - by Stefan Lasiewski
    Some details:

        Webserver: Apache/2.2.13 (FreeBSD) mod_ssl/2.2.13 OpenSSL/0.9.8e
        OS: FreeBSD 7.2-RELEASE (running in a FreeBSD jail)

    I believe I use the Apache 'prefork' MPM (I run the default for FreeBSD) and the default value for MaxClients (256). I have enabled mod_status with "ExtendedStatus On". When I view /server-status, I see a handful of regular requests. I also see over 240 requests from the localhost, like these:

        37-0 - 0/0/1 . 0.00 1510 0 0.0 0.00 0.00 127.0.0.2 www.example.gov OPTIONS * HTTP/1.0
        38-0 - 0/0/1 . 0.00 1509 0 0.0 0.00 0.00 127.0.0.2 www.example.gov OPTIONS * HTTP/1.0
        39-0 - 0/0/3 . 0.00 1482 0 0.0 0.00 0.00 127.0.0.2 www.example.gov OPTIONS * HTTP/1.0
        40-0 - 0/0/6 . 0.00 1445 0 0.0 0.00 0.00 127.0.0.2 www.example.gov OPTIONS * HTTP/1.0

    I also saw about 2417 requests from the localhost yesterday, like this one:

        Apr 14 11:16:40 192.168.16.127 httpd[431]: www.example.gov 127.0.0.2 - - [15/Apr/2010:11:16:40 -0700] "OPTIONS * HTTP/1.0" 200 - "-" "Apache (internal dummy connection)"

    The page at http://wiki.apache.org/httpd/InternalDummyConnection says "These requests are perfectly normal and you do not, in general, need to worry about them", but I'm not so sure. Why are there over 240 of these? Are these active connections? If I have "MaxClients 256" and over 240 of these connections, it seems my webserver is dangerously close to running out of available connections. It also seems like Apache should only need a handful of these "internal dummy connections". We actually had two unexplained outages last night, and I am wondering if these "internal dummy connections" caused us to run out of available connections.

    UPDATE 2010/04/16: It is 8 hours later. The /server-status page still shows 243 lines that say "www.example.gov OPTIONS *". I believe these connections are not active: the server is mostly idle (1 request currently being processed, 9 idle workers), and there are only 18 active httpd processes on the Unix host. If these connections are not active, why do they show up under /server-status? I would have expected them to expire a few minutes after they were initialized.

    Read the article

  • Dealing with HTTP w00tw00t attacks

    - by Saif Bechan
    I have a server with Apache, and I recently installed mod_security2 because I get attacked a lot by this. My Apache version is v2.2.3 and I use mod_security2.c. These were the entries from the error log:

        [Wed Mar 24 02:35:41 2010] [error] [client 88.191.109.38] client sent HTTP/1.1 request without hostname (see RFC2616 section 14.23): /w00tw00t.at.ISC.SANS.DFind:)
        [Wed Mar 24 02:47:31 2010] [error] [client 202.75.211.90] client sent HTTP/1.1 request without hostname (see RFC2616 section 14.23): /w00tw00t.at.ISC.SANS.DFind:)
        [Wed Mar 24 02:47:49 2010] [error] [client 95.228.153.177] client sent HTTP/1.1 request without hostname (see RFC2616 section 14.23): /w00tw00t.at.ISC.SANS.DFind:)
        [Wed Mar 24 02:48:03 2010] [error] [client 88.191.109.38] client sent HTTP/1.1 request without hostname (see RFC2616 section 14.23): /w00tw00t.at.ISC.SANS.DFind:)

    Here are the errors from the access_log:

        202.75.211.90 - - [29/Mar/2010:10:43:15 +0200] "GET /w00tw00t.at.ISC.SANS.DFind:) HTTP/1.1" 400 392 "-" "-"
        211.155.228.169 - - [29/Mar/2010:11:40:41 +0200] "GET /w00tw00t.at.ISC.SANS.DFind:) HTTP/1.1" 400 392 "-" "-"
        211.155.228.169 - - [29/Mar/2010:12:37:19 +0200] "GET /w00tw00t.at.ISC.SANS.DFind:) HTTP/1.1" 400 392 "-" "-"

    I tried configuring mod_security2 like this:

        SecFilterSelective REQUEST_URI "w00tw00t\.at\.ISC\.SANS\.DFind"
        SecFilterSelective REQUEST_URI "\w00tw00t\.at\.ISC\.SANS"
        SecFilterSelective REQUEST_URI "w00tw00t\.at\.ISC\.SANS"
        SecFilterSelective REQUEST_URI "w00tw00t\.at\.ISC\.SANS\.DFind:"
        SecFilterSelective REQUEST_URI "w00tw00t\.at\.ISC\.SANS\.DFind:\)"

    The thing is that in mod_security2 SecFilterSelective cannot be used; it gives me errors. Instead I use rules like this:

        SecRule REQUEST_URI "w00tw00t\.at\.ISC\.SANS\.DFind"
        SecRule REQUEST_URI "\w00tw00t\.at\.ISC\.SANS"
        SecRule REQUEST_URI "w00tw00t\.at\.ISC\.SANS"
        SecRule REQUEST_URI "w00tw00t\.at\.ISC\.SANS\.DFind:"
        SecRule REQUEST_URI "w00tw00t\.at\.ISC\.SANS\.DFind:\)"

    Even this does not work. I don't know what to do anymore. Does anyone have any advice?

    Update 1: I see that nobody can solve this problem using mod_security. So far, using iptables seems like the best option, but I think the rule file will become extremely large because the IPs change several times a day. I came up with two other solutions; can someone comment on whether they are good or not? The first solution that comes to mind is excluding these attacks from my Apache error logs. This will make it easier for me to spot other, urgent errors as they occur, without having to sift through a long log. The second option is better, I think: blocking hosts whose requests are not sent in the correct way. In this example the w00tw00t attack is sent without a hostname, so I think I can block hosts whose requests are not in the correct form.

    Update 2: After going through the answers, I came to the following conclusions. Custom logging for Apache will consume some unnecessary resources, and if there really is a problem you will probably want to look at the full log with nothing missing. It is better to just ignore the hits and concentrate on a better way of analyzing your error logs; using filters on your logs is a good approach for this.

    Final thoughts on the subject: The attack mentioned above will not reach your machine if you at least have an up-to-date system, so there are basically no worries. It can be hard to filter out all the bogus attacks from the real ones after a while, because both the error logs and access logs get extremely large. Preventing this from happening in any way will cost you resources, and it is good practice not to waste your resources on unimportant things. The solution I use now is Linux logwatch. It sends me summaries of the logs, filtered and grouped. This way you can easily separate the important from the unimportant. Thank you all for the help, and I hope this post can be helpful to someone else too.

    Read the article

  • TMG Forefront Proxy blocking internal HTTP requests

    - by Pascal
    I have TMG Forefront with the proxy installed and configured. However, whenever I make internal HTTP requests to servers on the internal network using a fully qualified DNS name, the proxy denies the connection:

        Denied Connection FRW-02 18/03/2011 20:06:37
        Log type: Web Proxy (Forward)
        Status: 12202 Forefront TMG denied the specified Uniform Resource Locator (URL).
        Rule: Default rule
        Source: Internal (10.50.75.21:21492)
        Destination: Internal (10.50.75.10:8080)
        Request: GET http://app-01.mydomain.com.br:9871/internalwebserver_deploy/MyServiceService.svc?wsdl
        Filter information: Req ID: 0a157279; Compression: client=No, server=No, compress rate=0% decompress rate=0%
        Protocol: http
        User: anonymous

    How can I get around this block? This is an internal call, so it shouldn't be blocked. If I use only http://app-01:9871/internalwebserver_deploy/MyServiceService.svc?wsdl, without the domain after the server name, it doesn't get blocked. 10.50.75.10 is the firewall's IP and the internal network's gateway.

    Read the article

  • How to filter http traffic in Wireshark?

    - by par
    I suspect my server gets a huge load of HTTP requests from its clients. I want to measure the volume of HTTP traffic. How can I do this with Wireshark? Or is there an alternative solution using another tool? This is how a single HTTP request/response exchange looks in Wireshark. The ping is generated by the WinAPI function ::InternetCheckConnection(). Thanks!

    Read the article

  • Random haproxy 408 errors

    - by Errol Fitzgerald
    I keep getting random 408 timeout messages from haproxy, but I can't figure out whether it's a bug in 1.5-dev25 or something in my config file:

        global
            log 127.0.0.1 local0
            log 127.0.0.1 local1 info
            #log loghost local0 info
            maxconn 80000
            #debug
            #quiet
            user haproxy
            group haproxy
            stats socket /tmp/haproxy.sock

        defaults
            log global
            mode http
            errorfile 503 /etc/haproxy/errors/503.http
            option httplog
            option dontlognull
            retries 3
            option redispatch
            maxconn 80000
            timeout client 12s
            timeout server 120s
            timeout queue 120s
            timeout connect 10s
            timeout http-request 30s
            option http-server-close
            timeout http-keep-alive 3000
            option abortonclose
            option httpchk
            ...

    Does anyone see anything wrong with these config settings?

    Read the article

  • Google I/O 2012 - Beyond Paper: Google Cloud Print and the Future of Printing

    Presented by Akshay Kannan. Use Google Cloud Print's API to send documents to a printer (or anywhere else) quickly and easily. We're currently integrated with Chrome, ChromeOS, mobile Gmail/Docs, and most new printers, and that's just the start. We provide a configurable JavaScript API, an Android Intent, as well as HTTP and XMPP interfaces for sending documents and receiving them in virtually any format. Come learn how to enable printing from your web and mobile apps on any device to any printer in the world, with just a few lines of code! For all I/O 2012 sessions, go to developers.google.com

    Read the article

  • Xampp: http://localhost/xampp/splash.php : english: nothing happens

    - by Oussama Abdallah
    After I start Xampp on my Ubuntu 12.04, I go to localhost and reach the page where all languages are listed. When I click any of these languages, nothing happens. The following steps are already done: I started Xampp with sudo /opt/lamp/lamp start, and after doing so I see in the console that everything is running:

        Starting XAMPP for Linux ...
        XAMPP: Starting Apache... ok.
        XAMPP: Starting MySQL... ok.
        XAMPP: Starting ProFTPD... ok.

    Can you please help me solve this problem?

    Read the article

  • Virtualbox HTTP load testing, host CPU overload issues

    - by aschuler
    I'm doing HTTP load-testing benchmarks (using Apache Bench and Siege) on a small Java 1.7.0 / Tomcat 7.0.26 application running on Debian Squeeze 6.0.4 x64, virtualized with VirtualBox 4.1.8. The host is Ubuntu 11.10 x64. I've modified these parameters in the Tomcat server.xml:

        <Connector port="8080" protocol="HTTP/1.1"
                   connectionTimeout="200000" redirectPort="8443"
                   acceptCount="2000" maxThreads="150" minSpareThreads="50" />

    The application executed on the server takes around 300 ms. The app runs well until a certain number of concurrent connections, like these:

        ab -n 500 -c 150 http://xx.xx.xx.xx:8080/myapp/
        ab -n 1000 -c 50 http://xx.xx.xx.xx:8080/myapp/
        siege -b -c 100 -r 20 http://xx.xx.xx.xx:8080/myapp/

    Then a lot of socket connection timeouts happen, and this completely overloads the host processor (though the CPU load inside the VM is normal). Doing an htop on the host, I can see that the VirtualBox process is running at over 300% CPU and never comes down, even after the load test is finished. (I've allocated 4 processors to the VM; if I allocate only one processor, the CPU load stays under 100%.) Restarting Tomcat doesn't do anything; I'm forced to restart the whole VM. I've tried launching those ab/siege commands locally on the VM, and everything goes well. I first thought it was related to a Linux network limit, as explained here: "Running some benchmarks using ab, and tomcat starts to really slow down". So I modified these TCP parameters:

        echo 15 > /proc/sys/net/ipv4/tcp_fin_timeout
        echo 30 > /proc/sys/net/ipv4/tcp_keepalive_intvl
        echo 1 > /proc/sys/net/ipv4/tcp_tw_recycle
        echo 1 > /proc/sys/net/ipv4/tcp_tw_reuse

    It seems to be better, but it still overloads the host CPU and produces socket connection timeouts at a certain number of concurrent connections. I'm wondering whether this is related to how VirtualBox handles external concurrent connections.

    Read the article

  • How to go to a website on a shared server by its IP address?

    - by user1502776
    I have a few questions, please help. First, I can access Google search just by typing http://74.125.224.211, because this is the IP address returned by nslookup. However, I could not do the same with the IP addresses returned for www.yahoo.com. How do I get to the Yahoo search page by its IP? Another example: http://www.allaboutcircuits.com resolves to 68.233.243.63, but if I go to http://68.233.243.63 I get "Hello world!", lol! Second, for some reason there is something wrong with the DNS resolvers of my web hosting service (and it will not be fixed!), so a call like:

        file_get_contents("http://www.allaboutcircuits.com");

    returns:

        php_network_getaddresses: getaddrinfo failed: Name or service not known

    How do I get around this with the IP address 68.233.243.63? I mean, how do I somehow attach the HTTP hostname to the file_get_contents() request? I would like to solve this on my own side (in my code); no troubleshooting/adjustment will be done by the server admin.
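    What the raw IP misses is name-based virtual hosting: a shared server picks the site from the request's Host header, and browsing by IP sends the IP as that header (hence "Hello world!"). The fix in any language is to connect to the IP but still send the real hostname. A Java sketch of the idea (in PHP, a stream context that adds a "Host:" header plays the same role):

        import java.io.*;
        import java.net.Socket;

        // Connect by IP address but present the real site name in the Host
        // header, so the server's virtual hosting serves the right site.
        public class FetchByIp {
            public static void main(String[] args) throws IOException {
                try (Socket s = new Socket("68.233.243.63", 80)) {
                    Writer out = new OutputStreamWriter(s.getOutputStream(), "US-ASCII");
                    out.write("GET / HTTP/1.1\r\n"
                            + "Host: www.allaboutcircuits.com\r\n" // what DNS would supply
                            + "Connection: close\r\n\r\n");
                    out.flush();
                    BufferedReader in = new BufferedReader(
                        new InputStreamReader(s.getInputStream(), "UTF-8"));
                    for (String line; (line = in.readLine()) != null; ) {
                        System.out.println(line);
                    }
                }
            }
        }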

    Read the article
