Search Results

Search found 127829 results on 5114 pages for 'http status code 403'.


  • C# SOCKS proxy service for HTTP requests

    - by Ed
    I'm trying to build a service that will forward HTTP requests from agents like a browser to the Tor service. Problem is, the Tor service only accepts SOCKS4a connections. So my solution is to listen for HTTP requests, get the URL they're requesting, and make a request via Tor with the help of the Starksoft.Net.Proxy library, then return the response. The library kind of works, but I'm not happy: it returns HTTP headers with the response, and it can't handle images, so the responses are messed up. How could I improve my code? I'm very new to network programming. Sorry for the long example.

        public AnonymiserService(ILogger logger)
        {
            try
            {
                _logger = logger;
                _logger.Log("Listening on port {0}...", Properties.Settings.Default.ListeningPort);
                StartListener(new string[] { string.Format("http://*:{0}/", Properties.Settings.Default.ListeningPort) });
            }
            catch (Exception ex)
            {
                _logger.LogError("Exception!", ex);
            }
        }

        private void StartListener(string[] prefixes)
        {
            if (!HttpListener.IsSupported)
            {
                _logger.LogError("HttpListener isn't supported on this machine!");
                return;
            }
            HttpListener listener = new HttpListener();
            foreach (string s in prefixes)
                listener.Prefixes.Add(s);
            while (true)
            {
                listener.Start();
                IAsyncResult result = listener.BeginGetContext(new AsyncCallback(ListenerCallback), listener);
                result.AsyncWaitHandle.WaitOne();
            }
        }

        private void ListenerCallback(IAsyncResult result)
        {
            try
            {
                // Get HTTP request
                HttpListener listener = (HttpListener)result.AsyncState;
                HttpListenerContext context = listener.EndGetContext(result);
                _logger.Log("Retrieving [{0}]", context.Request.RawUrl);

                // Create connection
                // Use Tor as proxy
                IProxyClient proxyClient = new Socks4aProxyClient("localhost", 9050);
                TcpClient tcpClient = proxyClient.CreateConnection(context.Request.UserHostName, 80);

                // Create message
                // Need to set Connection: close to close the connection as soon as it's done
                byte[] data = Encoding.UTF8.GetBytes(String.Format(
                    "GET {0} HTTP/1.1\r\nHost: {1}\r\nConnection: close\r\n\r\n",
                    context.Request.Url.PathAndQuery, context.Request.UserHostName));

                // Send message
                NetworkStream ns = tcpClient.GetStream();
                ns.Write(data, 0, data.Length);

                // Pass on HTTP response
                HttpListenerResponse responseOut = context.Response;
                if (ns.CanRead)
                {
                    byte[] buffer = new byte[32768];
                    int read = 0;
                    string responseString = string.Empty;

                    // Read response
                    while ((read = ns.Read(buffer, 0, buffer.Length)) > 0)
                    {
                        responseString += Encoding.UTF8.GetString(buffer, 0, read);
                    }

                    // Remove headers
                    if (responseString.IndexOf("HTTP/1.1 200 OK") > -1)
                        responseString = responseString.Substring(responseString.IndexOf("\r\n\r\n"));

                    // Forward response
                    byte[] byteArray = Encoding.UTF8.GetBytes(responseString);
                    responseOut.OutputStream.Write(byteArray, 0, byteArray.Length);
                }

                // Close streams
                responseOut.OutputStream.Close();
                ns.Close();

                // Close connection
                tcpClient.Close();
                _logger.Log("Retrieved [{0}]", context.Request.RawUrl);
            }
            catch (Exception ex)
            {
                _logger.LogError("Exception in ListenerCallback!", ex);
            }
        }
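
    The symptoms described here (headers echoed into the body, broken images) are exactly what round-tripping arbitrary bytes through UTF-8 produces, so one illustrative fix is to keep the upstream reply as raw bytes and only skip past the header block. A minimal sketch, not the poster's code - CopyBody is a hypothetical helper, and a real proxy would still need to honor Content-Length and chunked transfer encoding:

        using System.IO;
        using System.Net.Sockets;

        static class ProxyHelper
        {
            // Buffer the whole upstream reply as bytes, locate the CRLFCRLF that
            // ends the header block, and forward only the body untouched.
            public static void CopyBody(NetworkStream ns, Stream output)
            {
                using (MemoryStream ms = new MemoryStream())
                {
                    byte[] buffer = new byte[32768];
                    int read;
                    while ((read = ns.Read(buffer, 0, buffer.Length)) > 0)
                        ms.Write(buffer, 0, read);

                    byte[] response = ms.ToArray();
                    int bodyStart = 0;
                    for (int i = 0; i + 3 < response.Length; i++)
                    {
                        if (response[i] == (byte)'\r' && response[i + 1] == (byte)'\n' &&
                            response[i + 2] == (byte)'\r' && response[i + 3] == (byte)'\n')
                        {
                            bodyStart = i + 4;
                            break;
                        }
                    }
                    output.Write(response, bodyStart, response.Length - bodyStart);
                }
            }
        }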

    Read the article

  • Server returned HTTP response code: 500 for URL

    - by user617162
    java.io.IOException: Server returned HTTP response code: 500 for URL: http://ww.huadt.com.cn/zh-cn/i/l/@357671030745308@V500@0000@AUTOLOW@1@11d590f7$GPRMC,06548.000,A,3959.8587,N,11617.2447,E,0.00,55.32,210311,,,A*56@@
        at sun.net.www.protocol.http.HttpURLConnection.getInputStream(Unknown Source)
        at hdt.SendCmdToP.Sendplatform(SendCmdToP.java:67)
        at hdt.SendCmdToP.process(SendCmdToP.java:198)
        at hdt.SendCmdToP.run(SendCmdToP.java:131)
    java.lang.NullPointerException
        at hdt.SendCmdToP.Sendplatform(SendCmdToP.java:91)
        at hdt.SendCmdToP.process(SendCmdToP.java:198)
        at hdt.SendCmdToP.run(SendCmdToP.java:131)

    A NullPointerException appears along with the 500 error when the connection is closed. Is this a firewall issue? If not, is it a problem in my code? Please help me see how to solve this. Thanks.

    Read the article

  • Squid closing the connection on long HTTP GET requests

    - by Rhys
    Hello. When running a database query on a specific external site we use, Squid seems to cut off the connection after a consistent period of time (just over a minute). The query is submitted through a standard web form that uses GET to query their database. Firefox 3 just displays a blank page; Internet Explorer throws a 'Page Cannot Be Displayed' error (tested in v6 and v8). When we perform the same query on the same machine but bypass the Squid proxy, it works fine. The query takes about two and a half minutes to complete. There are a few timeout settings in Squid, but I honestly don't know which one to be looking at. Any possible solutions would be much appreciated. Cheers
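
    For reference, the Squid timeout directives most often involved when a slow upstream reply gets cut off are shown below. The values are the usual defaults, quoted from memory, so verify them against your own squid.conf documentation:

        # squid.conf
        connect_timeout 1 minute      # establishing the upstream TCP connection
        read_timeout 15 minutes       # idle time allowed between reads from the server
        request_timeout 5 minutes     # waiting for the client to finish sending its request
        forward_timeout 4 minutes     # overall limit on attempts to forward a request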

    Read the article

  • Multithread http downloader with webui [closed]

    - by kiler129
    I'm looking for software similar to JDownloader or PyLoad. JD is pretty good, but it uses heavyweight Java and for now has a very weak web interface. PyLoad is awesome and includes a simple but powerful web UI, but downloading 10 files (10 threads each, so about 100 connections running at around 8 MB/s in total) consumes a lot of CPU - a whole core, in my case. Do you know any lightweight alternatives? aria2c is good for the console, but I failed to find any good web UI for it; the official one is decent, but after adding more files it almost crashes Chrome :)

    Read the article

  • Windows HTTP tunnel through 2 Linux hosts?

    - by Darkmage
    The localhost has a connection only to Host1. Host1 has connections to Host2 and to localhost. How can I set this up to use Host2 as a proxy for web traffic from localhost? I have seen similar topics but can't get it to work. How do I set it up on the Windows XP client?
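
    One hedged approach, assuming both Linux hosts run an SSH server and Host2 runs (or can run) an HTTP proxy such as Squid on port 3128 - neither of which is stated in the question: forward a local port over SSH to Host1 and let Host1's sshd make the final hop to Host2, then point the XP browser's proxy setting at localhost:8080.

        # On the Windows XP client (plink from the PuTTY suite accepts the same flags):
        # local port 8080 is carried over SSH to Host1, which connects on to Host2:3128
        ssh -N -L 8080:host2:3128 user@host1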

    Read the article

  • HTTP caching headers: how should must-revalidate work?

    - by Bobby Jack
    Using Trac, I'm getting a response with the following header:

        Cache-Control: must-revalidate

    Moreover, no Expires header is being sent. Our local proxy, however, is caching these responses, so when an edit is made, pages need to be 'hard refreshed' to update. Is the proxy misbehaving? Other headers that might be relevant:

        Connection: Keep-Alive
        Proxy-Connection: Keep-Alive
        Keep-Alive: timeout=15, max=100
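
    For what it's worth, must-revalidate only forbids a cache from serving a *stale* entry without revalidating; with no Expires and no max-age, the response never becomes demonstrably stale, and a cache is permitted to assign its own heuristic freshness, so the proxy is arguably within spec. A hedged fix is to make the response explicitly stale on arrival:

        Cache-Control: max-age=0, must-revalidate

    or, more bluntly:

        Cache-Control: no-cache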

    Read the article

  • Handling UTF-8 with BOM in HTTP

    - by Alois Mahdal
    Say I have a script which at some point serves a plain text file as content (right after "\n\n"). These files are provided by users, but I can expect them to be UTF-8, so I hard-wire Content-Type: text/plain; charset=UTF-8. But while I can teach users to save everything in UTF-8, I can't be very sure that the files will be without a BOM ("\xEF\xBB\xBF"), since at least on Windows this is not clearly distinguished in common plain-text editors, and not every one of them uses the same default. So what about these files created on Windows, which may or may not start with a BOM? Should/will the server or UA get rid of this debris for me? Or is it my task to prepare clean UTF-8, i.e. open each file and check whether the BOM needs to be removed?
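
    In practice, neither the server nor the UA can be relied on to strip a BOM from a text/plain body, so if clean output matters it has to happen when the file is served. The question doesn't name a language, so the sketch of the check below is purely illustrative, in C#:

        using System;
        using System.IO;

        static class BomStripper
        {
            private static readonly byte[] Utf8Bom = { 0xEF, 0xBB, 0xBF };

            // Return the file's bytes with a leading UTF-8 BOM removed, if present.
            public static byte[] ReadWithoutBom(string path)
            {
                byte[] data = File.ReadAllBytes(path);
                if (data.Length >= 3 && data[0] == Utf8Bom[0] &&
                    data[1] == Utf8Bom[1] && data[2] == Utf8Bom[2])
                {
                    byte[] trimmed = new byte[data.Length - 3];
                    Array.Copy(data, 3, trimmed, 0, trimmed.Length);
                    return trimmed;
                }
                return data;
            }
        }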

    Read the article

  • Why do users get an HTTP 404 error when attempting to clone a Mercurial repository over HTTP?

    - by Geoffrey van Wyk
    The repository is hosted on my PC. I use Apache with WAMP and TortoiseHg. I have set up users and passwords, and they are able to browse the repository in their browsers after entering their usernames and passwords. The problem is that when they try to clone the repository, they get an HTTP 404 'file not found' error. However, I can clone the repository on my own PC using their credentials. The problem must lie somewhere with the Mercurial setup.

    Read the article

  • Dynamically blocking excessive HTTP bandwidth use?

    - by Jeff Atwood
    We were a little surprised to see this on our Cacti graphs for June 4 web traffic: we ran Log Parser on our IIS logs, and it turns out this was a perfect storm of Yahoo and Google bots indexing us. In that 3-hour period, we saw 287k hits from 3 different Google IPs, plus 104k from Yahoo. Ouch. While we don't want to block Google or Yahoo, this has come up before. We have access to a Cisco PIX 515E, and we're thinking about putting that in front so we can dynamically deal with bandwidth offenders without touching our web servers directly. But is that the best solution? I'm wondering if there is any software or hardware that can help us identify and block excessive bandwidth use, ideally in real time - perhaps some bit of hardware or open-source software we can put in front of our web servers? We are mostly a Windows shop, but we have some Linux skills as well; we're also open to buying hardware if the PIX 515E isn't sufficient. What would you recommend?
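
    One low-tech lever worth noting before new hardware: crawler load specifically can be throttled at the application layer. Yahoo's Slurp honors the non-standard Crawl-delay directive in robots.txt (Google ignores it, but Googlebot's rate can be lowered in Webmaster Tools). The delay value here is illustrative:

        # robots.txt - ask Yahoo's crawler to wait between fetches
        User-agent: Slurp
        Crawl-delay: 10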

    Read the article

  • Tunneling over HTTP

    - by Morgan
    Hello, I have a network at work that is locked behind a firewall and Internet connection is available only by using a proxy server. At work, I can connect to databases that are distributed across the network. However, at home, I cannot connect to the proxy server or the databases. How can this be done? I can access my workstation via LogMeIn, so I can install anything on it. I thought of installing some kind of tunneling mechanism in my workstation. Then, at home, I could connect to this mechanism, which would in turn do the required connections. So essentially, what I'd like to do can be represented by the following diagram: Home = Workstation = Database. For example, whenever I connect to, say, 10.140.0.1:1234 at home, this would be redirected to 10.140.0.1:1234 of my Workstation, because 10.140.0.1:1234 is only available through the corporate network. NOTE: I'm using Windows XP.

    Read the article

  • HTTP resource caching / fetching

    - by Bobby Jack
    I'm trying to optimise a page, and I'm seeing some strange behaviour. Each time I click on a link to the page, all resources are fetched from the server, responding with 200s. However, when I refresh the page (specifically, F5 in Firefox), all resources return a 304 and - of course - the page loads much faster as a result. The main page returns a 200 in both cases. In the refresh case, If-Modified-Since headers are sent with the requests to the resources. However, in the 'clicking a link' case, they are not. What's the reason for that, and can I control it?
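
    If the goal is for plain navigations to reuse the cache as well, the usual hedge is to send explicit freshness information with every resource, since browsers differ on when they revalidate rather than refetch in the absence of it. Illustrative response headers (all values are placeholders):

        Cache-Control: max-age=86400
        Last-Modified: Sat, 01 May 2010 12:00:00 GMT
        ETag: "abc123"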

    Read the article

  • cannot connect via http but can via ssh in Windows 7

    - by Tim
    Hi, I have a strange problem on my Windows 7 machine. Sometimes web browsers (i.e. Firefox and Chrome) work, sometimes they don't. But SSH always works. What could be the reason, and how can I fix it? My router is a Linksys WRT54GL. Web browsing via Firefox in my Ubuntu install is okay. Thanks and regards!

    Read the article

  • How to implement a secure authentication over HTTP?

    - by Zagorax
    I know that we have HTTPS, but I would like to know if there's an algorithm/approach/strategy that grants a reasonable security level without using SSL. I have read many solutions on the internet. Most of them are based on adding some time metadata to the hashes, but that requires the server and client clocks to be synchronized. Moreover, it seems to me that none of these solutions could prevent a man-in-the-middle attack.
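
    For illustration only: a nonce-based challenge-response removes the clock-synchronization requirement, because the server issues a random nonce instead of trusting timestamps - though, as the poster suspects, nothing at this layer stops an active man in the middle, who can simply relay the exchange. A minimal C# sketch, assuming a secret already shared between client and server:

        using System;
        using System.Security.Cryptography;
        using System.Text;

        static class ChallengeResponse
        {
            // Server side: issue a fresh random nonce per login attempt.
            public static string NewNonce()
            {
                byte[] nonce = new byte[16];
                using (var rng = RandomNumberGenerator.Create())
                    rng.GetBytes(nonce);
                return Convert.ToBase64String(nonce);
            }

            // Client side: prove knowledge of the shared secret without sending it.
            public static string Respond(string nonce, string sharedSecret)
            {
                using (var hmac = new HMACSHA256(Encoding.UTF8.GetBytes(sharedSecret)))
                {
                    byte[] mac = hmac.ComputeHash(Encoding.UTF8.GetBytes(nonce));
                    return Convert.ToBase64String(mac);
                }
            }
        }

    The server verifies by recomputing Respond(nonce, secret) and comparing; the password itself never crosses the wire, but everything after login is still plaintext HTTP.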

    Read the article

  • Redirecting HTTP traffic from a local server on the web

    - by MrJackV
    Here is the situation: I have a webserver (let's call it C1) that is running an Apache/PHP server, and it is port-forwarded so that I can access it anywhere. However, there is another computer within the webserver's LAN that has an Apache server too (let's call it C2). I cannot change the port forwarding, nor can I change the Apache server (i.e., install custom modules). My question is: is there a way to access C2 within a directory of C1? (E.g., going to www.website.org/random_dir would let me browse the root of C2's Apache server.) I am trying to change as little of the configuration as possible (e.g., activating modules, etc.). Is there a possible solution? Thanks in advance.

    Read the article

  • Remote HTTP to FTP

    - by jamd12
    I am on a very slow internet connection, and unfortunately I only have access to expensive and slow wireless or satellite. I have set up an FTP server with a computer supplier locally who has a nice 2 Mbps connection, and I am trying to set up a way of adding links remotely (Hotfile, Rapidshare, Fileserve, etc.) so that they can be downloaded onto the FTP server and then transferred a few times a week, manually, onto a portable HDD. On my home PC I use Internet Download Manager for all my downloads. Is there a simple way that I can add links remotely to Internet Download Manager on the FTP server, or perhaps another solution? The OS on the FTP server is Linux; I use Windows XP SP3 and Windows 7. I have not used FTP very much before, so any suggestions on how best to do this would be much appreciated.

    Read the article

  • Browser http port-forwarding

    - by Kakao
    When using a browser like Firefox, I need any URL on the domain example.com to have the port :8008 appended - not only when I type it in the address bar, but anywhere it is referenced within the served HTML page. All other domains should be left as-is. I know I can set up a proxy like Squid or use a PAC file on a web site, but I want it simpler if possible.

    Read the article

  • Logging Into a site that uses Live.com authentication

    - by Josh
    I've been trying to automate a log in to a website I frequent, www.bungie.net. The site is associated with Microsoft and Xbox Live, and as such makes use of the Windows Live ID API when people log in to their site. I am relatively new to creating web spiders/robots, and I worry that I'm misunderstanding some of the most basic concepts. I've simulated logins to other sites such as Facebook and Gmail, but live.com has given me nothing but trouble. Anyway, I've been using Wireshark and the Firefox addon Tamper Data to try to figure out what I need to post and which cookies I need to include with my requests. As far as I know, these are the steps one must follow to log in to this site:

    1. Visit https://login.live.com/login.srf?wa=wsignin1.0&rpsnv=11&ct=1268167141&rver=5.5.4177.0&wp=LBI&wreply=http:%2F%2Fwww.bungie.net%2FDefault.aspx&id=42917
    2. Receive the cookies MSPRequ and MSPOK.
    3. Post the values from the form ID "PPSX", the values from the form ID "PPFT", your username, and your password, all to a changing URL similar to https://login.live.com/ppsecure/post.srf?wa=wsignin1.0&rpsnv=11&ct= (a few numbers change at the end of that URL).
    4. Live.com returns the user a page with more hidden forms to post. The client then posts the values from the form "ANON", the value from the form "ANONExp", and the values from the form "t" to the URL http://www.bungie.net/Default.aspx?wa=wsignin1.0
    5. After posting that data, the user is returned a variety of cookies, the most important of which is "BNGAuth", the login cookie for the site.

    Where I am having trouble is the fifth step, but that doesn't necessarily mean I've done all the other steps correctly. I post the data from "ANON", "ANONExp" and "t", but instead of being returned a BNGAuth cookie, I'm returned a cookie named "RSPMaybe" and redirected to the home page. When I review the Wireshark log, I notice something that instantly stood out to me as different between the log from when I logged in with Firefox and the log from when my program ran. It could be nothing, but I'll include the picture here for you to review. I'm being returned an HTTP packet from the site before I post the data in the fourth step. I'm not sure how this is happening, but it must be a side effect of something I'm doing wrong in the HTTPS steps.
        using System;
        using System.Collections.Generic;
        using System.Collections.Specialized;
        using System.Text;
        using System.Net;
        using System.IO;
        using System.IO.Compression;
        using System.Security.Cryptography;
        using System.Security.Cryptography.X509Certificates;
        using System.Web;

        namespace SpiderFromScratch
        {
            class Program
            {
                static void Main(string[] args)
                {
                    CookieContainer cookies = new CookieContainer();
                    Uri url = new Uri("https://login.live.com/login.srf?wa=wsignin1.0&rpsnv=11&ct=1268167141&rver=5.5.4177.0&wp=LBI&wreply=http:%2F%2Fwww.bungie.net%2FDefault.aspx&id=42917");
                    HttpWebRequest http = (HttpWebRequest)HttpWebRequest.Create(url);
                    http.Timeout = 30000;
                    http.UserAgent = "Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.9.1.8) Gecko/20100202 Firefox/3.5.8 (.NET CLR 3.5.30729)";
                    http.Accept = "text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8";
                    http.Headers.Add("Accept-Language", "en-us,en;q=0.5");
                    http.Headers.Add("Accept-Charset", "ISO-8859-1,utf-8;q=0.7,*;q=0.7");
                    http.Headers.Add("Keep-Alive", "300");
                    http.Referer = "http://www.bungie.net/";
                    http.ContentType = "application/x-www-form-urlencoded";
                    http.CookieContainer = new CookieContainer();
                    http.Method = WebRequestMethods.Http.Get;

                    HttpWebResponse response = (HttpWebResponse)http.GetResponse();
                    StreamReader readStream = new StreamReader(response.GetResponseStream());
                    string HTML = readStream.ReadToEnd();
                    readStream.Close();

                    // gets the cookies (they are set in the eighth header)
                    string[] strCookies = response.Headers.GetValues(8);
                    response.Close();

                    string name, value;
                    Cookie manualCookie;
                    for (int i = 0; i < strCookies.Length; i++)
                    {
                        name = strCookies[i].Substring(0, strCookies[i].IndexOf("="));
                        value = strCookies[i].Substring(strCookies[i].IndexOf("=") + 1,
                            strCookies[i].IndexOf(";") - strCookies[i].IndexOf("=") - 1);
                        manualCookie = new Cookie(name, "\"" + value + "\"");
                        Uri manualURL = new Uri("http://login.live.com");
                        http.CookieContainer.Add(manualURL, manualCookie);
                    }

                    // stores the cookies to be used later
                    cookies = http.CookieContainer;

                    // Get the PPSX value
                    string PPSX = HTML.Remove(0, HTML.IndexOf("PPSX"));
                    PPSX = PPSX.Remove(0, PPSX.IndexOf("value") + 7);
                    PPSX = PPSX.Substring(0, PPSX.IndexOf("\""));

                    // Get this random PPFT value
                    string PPFT = HTML.Remove(0, HTML.IndexOf("PPFT"));
                    PPFT = PPFT.Remove(0, PPFT.IndexOf("value") + 7);
                    PPFT = PPFT.Substring(0, PPFT.IndexOf("\""));

                    // Get the random URL you POST to
                    string POSTURL = HTML.Remove(0, HTML.IndexOf("https://login.live.com/ppsecure/post.srf?wa=wsignin1.0&rpsnv=11&ct="));
                    POSTURL = POSTURL.Substring(0, POSTURL.IndexOf("\""));

                    // POST with cookies
                    http = (HttpWebRequest)HttpWebRequest.Create(POSTURL);
                    http.UserAgent = "Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.9.1.8) Gecko/20100202 Firefox/3.5.8 (.NET CLR 3.5.30729)";
                    http.Accept = "text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8";
                    http.Headers.Add("Accept-Language", "en-us,en;q=0.5");
                    http.Headers.Add("Accept-Charset", "ISO-8859-1,utf-8;q=0.7,*;q=0.7");
                    http.Headers.Add("Keep-Alive", "300");
                    http.CookieContainer = cookies;
                    http.Referer = "https://login.live.com/login.srf?wa=wsignin1.0&rpsnv=11&ct=1268158321&rver=5.5.4177.0&wp=LBI&wreply=http:%2F%2Fwww.bungie.net%2FDefault.aspx&id=42917";
                    http.ContentType = "application/x-www-form-urlencoded";
                    http.Method = WebRequestMethods.Http.Post;

                    Stream ostream = http.GetRequestStream();

                    // used to convert strings into bytes
                    System.Text.ASCIIEncoding encoding = new System.Text.ASCIIEncoding();

                    // Post information
                    byte[] buffer = encoding.GetBytes("PPSX=" + PPSX + "&PwdPad=IfYouAreReadingThisYouHaveTooMuc&login=YOUREMAILGOESHERE&passwd=YOURWORDGOESHERE" + "&LoginOptions=2&PPFT=" + PPFT);
                    ostream.Write(buffer, 0, buffer.Length);
                    ostream.Close();

                    HttpWebResponse response2 = (HttpWebResponse)http.GetResponse();
                    readStream = new StreamReader(response2.GetResponseStream());
                    HTML = readStream.ReadToEnd();
                    response2.Close();
                    ostream.Dispose();

                    foreach (Cookie cookie in response2.Cookies)
                    {
                        Console.WriteLine(cookie.Name + ": ");
                        Console.WriteLine(cookie.Value);
                        Console.WriteLine(cookie.Expires);
                        Console.WriteLine();
                    }

                    // SET POSTURL value
                    string POSTANON = "http://www.bungie.net/Default.aspx?wa=wsignin1.0";

                    // Get the ANON value
                    string ANON = HTML.Remove(0, HTML.IndexOf("ANON"));
                    ANON = ANON.Remove(0, ANON.IndexOf("value") + 7);
                    ANON = ANON.Substring(0, ANON.IndexOf("\""));
                    ANON = HttpUtility.UrlEncode(ANON);

                    // Get the ANONExp value
                    string ANONExp = HTML.Remove(0, HTML.IndexOf("ANONExp"));
                    ANONExp = ANONExp.Remove(0, ANONExp.IndexOf("value") + 7);
                    ANONExp = ANONExp.Substring(0, ANONExp.IndexOf("\""));
                    ANONExp = HttpUtility.UrlEncode(ANONExp);

                    // Get the t value
                    string t = HTML.Remove(0, HTML.IndexOf("id=\"t\""));
                    t = t.Remove(0, t.IndexOf("value") + 7);
                    t = t.Substring(0, t.IndexOf("\""));
                    t = HttpUtility.UrlEncode(t);

                    // POST the info and accept the Bungie cookies
                    http = (HttpWebRequest)HttpWebRequest.Create(POSTANON);
                    http.UserAgent = "Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.9.1.8) Gecko/20100202 Firefox/3.5.8 (.NET CLR 3.5.30729)";
                    http.Accept = "text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8";
                    http.Headers.Add("Accept-Language", "en-us,en;q=0.5");
                    http.Headers.Add("Accept-Encoding", "gzip,deflate");
                    http.Headers.Add("Accept-Charset", "ISO-8859-1,utf-8;q=0.7,*;q=0.7");
                    http.Headers.Add("Keep-Alive", "115");
                    http.CookieContainer = new CookieContainer();
                    http.ContentType = "application/x-www-form-urlencoded";
                    http.Method = WebRequestMethods.Http.Post;
                    http.Expect = null;

                    ostream = http.GetRequestStream();
                    int test = ANON.Length;
                    int test1 = ANONExp.Length;
                    int test2 = t.Length;
                    buffer = encoding.GetBytes("ANON=" + ANON + "&ANONExp=" + ANONExp + "&t=" + t);
                    ostream.Write(buffer, 0, buffer.Length);
                    ostream.Close();

                    // Here lies the problem: I am not returned the correct cookies.
                    HttpWebResponse response3 = (HttpWebResponse)http.GetResponse();
                    GZipStream gzip = new GZipStream(response3.GetResponseStream(), CompressionMode.Decompress);
                    readStream = new StreamReader(gzip);
                    HTML = readStream.ReadToEnd();

                    // gets both cookies
                    string[] strCookies2 = response3.Headers.GetValues(11);
                    response3.Close();
                }
            }
        }

    This has given me problems, and I've put many hours into learning about HTTP protocols, so any help would be appreciated. If there is an article detailing a similar log in to live.com, feel free to point the way; I've been looking far and wide for any articles with working solutions. If I could be clearer, feel free to ask, as this is my first time using Stack Overflow.

    Read the article

  • Default /etc/apt/sources.list?

    - by piemesons
    I need the default sources.list for Ubuntu 10.04. Can anybody help me? Here is mine:

        # Ubuntu supported packages
        deb http://archive.ubuntu.com/ubuntu/ lucid main restricted multiverse universe
        deb http://archive.ubuntu.com/ubuntu/ lucid-backports main restricted universe multiverse
        deb http://archive.ubuntu.com/ubuntu/ lucid-updates main restricted multiverse universe
        deb http://security.ubuntu.com/ubuntu lucid-security main restricted universe multiverse
        deb http://security.ubuntu.com/ubuntu lucid-proposed main restricted universe multiverse
        deb-src http://archive.ubuntu.com/ubuntu/ lucid main restricted multiverse universe
        deb-src http://archive.ubuntu.com/ubuntu/ lucid-backports main restricted universe multiverse
        deb-src http://archive.ubuntu.com/ubuntu/ lucid-updates main restricted multiverse universe
        deb-src http://security.ubuntu.com/ubuntu lucid-security main restricted universe multiverse
        deb-src http://security.ubuntu.com/ubuntu lucid-proposed main restricted universe multiverse

        # Canonical Commercial Repository
        deb http://archive.canonical.com/ubuntu lucid partner
        deb http://archive.canonical.com/ubuntu lucid-backports partner
        deb http://archive.canonical.com/ubuntu lucid-updates partner
        deb http://archive.canonical.com/ubuntu lucid-security partner
        deb http://archive.canonical.com/ubuntu lucid-proposed partner
        deb-src http://archive.canonical.com/ubuntu lucid partner
        deb-src http://archive.canonical.com/ubuntu lucid-backports partner
        deb-src http://archive.canonical.com/ubuntu lucid-updates partner
        deb-src http://archive.canonical.com/ubuntu lucid-security partner
        deb-src http://archive.canonical.com/ubuntu lucid-proposed partner

        # Medibuntu
        deb http://packages.medibuntu.org/ lucid free non-free
        deb-src http://packages.medibuntu.org/ lucid free non-free

        # PlayOnLinux
        deb http://deb.playonlinux.com/ lucid main

        # Opera
        deb http://deb.opera.com/opera/ lenny non-free

        # Google
        deb http://dl.google.com/linux/deb/ stable non-free main

        # Dropbox Official Source
        deb http://linux.dropbox.com/ubuntu karmic main

        # Skype
        deb http://download.skype.com/linux/repos/debian/ stable non-free

    This is the error I am getting (sudo apt-get update):

        Get:9 http://dl.google.com stable/main Packages [1,076B]
        Err http://ppa.launchpad.net lucid/main Packages
          404 Not Found
        Get:10 http://dl.google.com stable/main Packages [735B]

    and finally:

        Fetched 9,724B in 3s (2,645B/s)
        W: Failed to fetch http://ppa.launchpad.net/bisig/ppa/ubuntu/dists/lucid/main/binary-i386/Packages.gz 404 Not Found
        E: Some index files failed to download, they have been ignored, or old ones used instead.
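
    Note that the failing URL points at the bisig PPA (ppa.launchpad.net/bisig/...), which does not appear in the list above, so its entry presumably lives in a file under /etc/apt/sources.list.d/. A hedged cleanup - the exact file name below is a guess, so list the directory first to find the real one:

        ls /etc/apt/sources.list.d/
        sudo rm /etc/apt/sources.list.d/bisig-ppa-lucid.list
        sudo apt-get update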

    Read the article

  • When to do code reviews when doing continuous integration?

    - by SpecialEd
    We are trying to switch to a continuous-integration environment but are not sure when to do code reviews. From what I've read about continuous integration, we should be attempting to check in code as often as multiple times a day. I assume this even means for features that are not yet complete. So the question is: when do we do the code reviews? We can't do them before we check in the code, because that would slow the process down to the point where we couldn't do daily check-ins, let alone multiple check-ins per day. Also, if the code we are checking in merely compiles but is not feature-complete, doing a code review then is pointless, as most code reviews are best done as the feature is finalized. Does this mean we should do code reviews when a feature is completed, but that unreviewed code will get into the repository?

    Read the article

  • Twitter status id conundrum

    - by jamiet
    I have an interest, a slightly perverse one some might say, in using online services and trying to figure out what the underlying (logical) data model is, and in this day and age Twitter is one that lends itself very well to scrutiny. Consider this recent tweet of mine: the URL that enables you to see it is http://twitter.com/jamiet/status/12154647354. We can interpret that URL to mean "a tweet by jamiet with an id of 12154647354", and hence we might further assume that the unique identifier for the tweet is {jamiet,12154647354}. However, it's well known that Twitter gives each status a unique ID regardless of who tweeted it, so we might expect to reach that tweet just by using the URL http://twitter.com/status/12154647354; however (at the time of writing) that only redirects to Twitter's homepage. That seems strange to me, especially given that we can use Twitter's API to access information about that tweet using only the id of the status. Witness http://api.twitter.com/1/statuses/show/12154647354.xml. [We can also access a JSON version of that information using http://api.twitter.com/1/statuses/show/12154647354.json] I'm puzzled as to why a tweet can't be accessed on the main Twitter website using the id alone. Anyone have any suggestions? @jamiet

    Read the article

  • Richmond Code Camp 2010.1

    - by andyleonard
    I can't believe it - Richmond Code Camp 2010.1 is less than two weeks away! Once again, the leadership team has outdone themselves. We have a bunch of great speakers, 9 tracks, 45 sessions - there's something for everyone. If you're going to be in the area and are interested, register today. :{>

    Read the article

  • What is this code?

    - by Aerovistae
    This is from the Evolution of a Programmer "joke", at the "Master Programmer" level. It seems to be C++, but I don't know what all this bloated extra stuff is, nor did any Google searches turn up anything except the joke I took it from. Can anyone tell me more about what I'm reading here?

        [
        uuid(2573F8F4-CFEE-101A-9A9F-00AA00342820)
        ]
        library LHello
        {
            // bring in the master library
            importlib("actimp.tlb");
            importlib("actexp.tlb");

            // bring in my interfaces
            #include "pshlo.idl"

            [
            uuid(2573F8F5-CFEE-101A-9A9F-00AA00342820)
            ]
            cotype THello
            {
                interface IHello;
                interface IPersistFile;
            };
        };

        [
        exe,
        uuid(2573F890-CFEE-101A-9A9F-00AA00342820)
        ]
        module CHelloLib
        {
            // some code related header files
            importheader(<windows.h>);
            importheader(<ole2.h>);
            importheader(<except.hxx>);
            importheader("pshlo.h");
            importheader("shlo.hxx");
            importheader("mycls.hxx");

            // needed typelibs
            importlib("actimp.tlb");
            importlib("actexp.tlb");
            importlib("thlo.tlb");

            [
            uuid(2573F891-CFEE-101A-9A9F-00AA00342820),
            aggregatable
            ]
            coclass CHello
            {
                cotype THello;
            };
        };

        #include "ipfix.hxx"

        extern HANDLE hEvent;

        class CHello : public CHelloBase
        {
        public:
            IPFIX(CLSID_CHello);

            CHello(IUnknown *pUnk);
            ~CHello();

            HRESULT __stdcall PrintSz(LPWSTR pwszString);

        private:
            static int cObjRef;
        };

        #include <windows.h>
        #include <ole2.h>
        #include <stdio.h>
        #include <stdlib.h>
        #include "thlo.h"
        #include "pshlo.h"
        #include "shlo.hxx"
        #include "mycls.hxx"

        int CHello::cObjRef = 0;

        CHello::CHello(IUnknown *pUnk) : CHelloBase(pUnk)
        {
            cObjRef++;
            return;
        }

        HRESULT __stdcall CHello::PrintSz(LPWSTR pwszString)
        {
            printf("%ws\n", pwszString);
            return(ResultFromScode(S_OK));
        }

        CHello::~CHello(void)
        {
            // when the object count goes to zero, stop the server
            cObjRef--;
            if( cObjRef == 0 )
                PulseEvent(hEvent);
            return;
        }

        #include <windows.h>
        #include <ole2.h>
        #include "pshlo.h"
        #include "shlo.hxx"
        #include "mycls.hxx"

        HANDLE hEvent;

        int _cdecl main( int argc, char * argv[] )
        {
            ULONG ulRef;
            DWORD dwRegistration;
            CHelloCF *pCF = new CHelloCF();

            hEvent = CreateEvent(NULL, FALSE, FALSE, NULL);

            // Initialize the OLE libraries
            CoInitializeEx(NULL, COINIT_MULTITHREADED);

            CoRegisterClassObject(CLSID_CHello, pCF, CLSCTX_LOCAL_SERVER, REGCLS_MULTIPLEUSE, &dwRegistration);

            // wait on an event to stop
            WaitForSingleObject(hEvent, INFINITE);

            // revoke and release the class object
            CoRevokeClassObject(dwRegistration);
            ulRef = pCF->Release();

            // Tell OLE we are going away.
            CoUninitialize();

            return(0);
        }

        extern CLSID CLSID_CHello;
        extern UUID LIBID_CHelloLib;

        CLSID CLSID_CHello = { /* 2573F891-CFEE-101A-9A9F-00AA00342820 */
            0x2573F891, 0xCFEE, 0x101A,
            { 0x9A, 0x9F, 0x00, 0xAA, 0x00, 0x34, 0x28, 0x20 }
        };

        UUID LIBID_CHelloLib = { /* 2573F890-CFEE-101A-9A9F-00AA00342820 */
            0x2573F890, 0xCFEE, 0x101A,
            { 0x9A, 0x9F, 0x00, 0xAA, 0x00, 0x34, 0x28, 0x20 }
        };

        #include <windows.h>
        #include <ole2.h>
        #include <stdlib.h>
        #include <string.h>
        #include <stdio.h>
        #include "pshlo.h"
        #include "shlo.hxx"
        #include "clsid.h"

        int _cdecl main( int argc, char * argv[] )
        {
            HRESULT hRslt;
            IHello *pHello;
            ULONG ulCnt;
            IMoniker * pmk;
            WCHAR wcsT[_MAX_PATH];
            WCHAR wcsPath[2 * _MAX_PATH];

            // get object path
            wcsPath[0] = '\0';
            wcsT[0] = '\0';
            if( argc > 1) {
                mbstowcs(wcsPath, argv[1], strlen(argv[1]) + 1);
                wcsupr(wcsPath);
            }
            else {
                fprintf(stderr, "Object path must be specified\n");
                return(1);
            }

            // get print string
            if(argc > 2)
                mbstowcs(wcsT, argv[2], strlen(argv[2]) + 1);
            else
                wcscpy(wcsT, L"Hello World");

            printf("Linking to object %ws\n", wcsPath);
            printf("Text String %ws\n", wcsT);

            // Initialize the OLE libraries
            hRslt = CoInitializeEx(NULL, COINIT_MULTITHREADED);

            if(SUCCEEDED(hRslt)) {
                hRslt = CreateFileMoniker(wcsPath, &pmk);
                if(SUCCEEDED(hRslt))
                    hRslt = BindMoniker(pmk, 0, IID_IHello, (void **)&pHello);

                if(SUCCEEDED(hRslt)) {
                    // print a string out
                    pHello->PrintSz(wcsT);

                    Sleep(2000);
                    ulCnt = pHello->Release();
                }
                else
                    printf("Failure to connect, status: %lx", hRslt);

                // Tell OLE we are going away.
                CoUninitialize();
            }

            return(0);
        }

    Read the article
