Search Results

Search found 58228 results on 2330 pages for 'http api'.

Page 10 of 2330

  • HTTP header 302 error

    - by Katherine Katie
    The response headers are:

        HTTP/1.1 302 Found
        Connection: close
        Pragma: no-cache
        Cache-Control: no-cache
        Location: /
        Location: /NKiXN/

    I don't know how it got this way. I was using the W3 Total Cache plugin, but I have since deactivated it and the redirect remains. Please help me solve this: the site is dropping in the search engine rankings and Googlebot is unable to follow it, so urgent help is required. If this is a configuration problem with the server, please let me know the solution. Site: http://onlinecheapestcarinsurance.co.uk/

  • HTTP Header - ntCoent-Length

    - by DMcKenna
    I get the following HTTP response headers in a particular response. All looks okay, except that I have noticed the content length appears twice: once as Content-Length and once, scrambled, as ntCoent-Length (both 2424). Is there a particular reason why the content length is returned a second time as ntCoent-Length?

        HTTP/1.0 200 OK
        Date: Wed, 26 May 2010 09:38:19 GMT
        Server: Apache
        P3P: CP="NOI DSP COR CURa ADMa TA1a OUR BUS IND UNI COM NAV INT"
        Accept-Charset: iso-8859-1, unicode-1-1;q=0.8
        Expires: Sun, 15 Jul 1990 00:00:00 GMT
        Pragma: no-cache
        Cache-Control: no-cache
        Content-Language: en
        ntCoent-Length: 2424
        Connection: close
        Content-Type: text/html;charset=iso-8859-1
        Content-Length: 2424

  • Oracle HTTP Server access_log - GET /error/404.html HTTP/1.0 200 7001 entries

    - by Pavan
    The access_log shows the following entries repeatedly; it looks like something is polling the server. So many entries keep being added to the log that it is difficult to find actual error messages.

        aaa.bbb.ccc.ddd - - [07/Nov/2012:00:02:48 -0800] "HEAD /index.html HTTP/1.1" 200 -
        abc.bcd.cda.dab - - [07/Nov/2012:00:02:50 -0800] "GET /error/404.html HTTP/1.0" 200 7001
        abc.bcd.cda.dac - - [07/Nov/2012:00:02:51 -0800] "GET /error/404.html HTTP/1.0" 200 7001
        abc.bcd.cda.dab - - [07/Nov/2012:00:02:56 -0800] "GET /error/404.html HTTP/1.0" 200 7001
        abc.bcd.cda.dac - - [07/Nov/2012:00:02:56 -0800] "GET /error/404.html HTTP/1.0" 200 7001
        abc.bcd.cda.dab - - [07/Nov/2012:00:03:01 -0800] "GET /error/404.html HTTP/1.0" 200 7001
        abc.bcd.cda.dac - - [07/Nov/2012:00:03:01 -0800] "GET /error/404.html HTTP/1.0" 200 7001
        abc.bcd.cda.dab - - [07/Nov/2012:00:03:06 -0800] "GET /error/404.html HTTP/1.0" 200 7001
        abc.bcd.cda.dac - - [07/Nov/2012:00:03:06 -0800] "GET /error/404.html HTTP/1.0" 200 7001
        aaa.bbb.ccc.ddd - - [07/Nov/2012:00:03:08 -0800] "HEAD /index.html HTTP/1.1" 200 -

    How do I avoid these repeating entries?
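
    Oracle HTTP Server is Apache-based, so one way to keep this noise out of access_log is Apache-style conditional logging (a sketch - the log path and format name are placeholders and must match your existing CustomLog directive; note this only hides the entries, it does not stop whatever is polling, which here looks like a load-balancer health check):

        # tag the polled URLs so they can be excluded from the access log
        SetEnvIf Request_URI "^/error/404\.html$" dontlog
        SetEnvIf Request_Method "^HEAD$" dontlog
        # log only requests that were not tagged above
        CustomLog /path/to/access_log common env=!dontlog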

  • How reliable is HTTP compression using gzip?

    - by Liam
    YSlow has suggested that I use HTTP compression to improve the performance of my site. However, as noted by Yahoo, there are some problems:

        There are known issues with browsers and proxies that may cause a mismatch in what the browser expects and what it receives with regard to compressed content. Fortunately, these edge cases are dwindling as the use of older browsers drops off. The Apache modules help out by adding appropriate Vary response headers automatically.

    I understand that the most common problem occurs with IE6 behind a proxy. But how common are these problems today? To quantify it: roughly what percentage of web users experience bugs with HTTP compression?
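
    For reference, the configuration Yahoo alludes to is essentially the stock recipe from Apache's mod_deflate documentation, which hedges against these legacy-browser cases and adds the Vary header for proxies (directives as in the Apache 2.2 example; the image regex is illustrative, and the last line needs mod_headers):

        SetOutputFilter DEFLATE
        # Netscape 4.x has problems with types other than text/html
        BrowserMatch ^Mozilla/4 gzip-only-text/html
        # Netscape 4.06-4.08 have problems with compression altogether
        BrowserMatch ^Mozilla/4\.0[678] no-gzip
        # IE masquerades as Netscape but handles gzip fine
        BrowserMatch \bMSIE !no-gzip !gzip-only-text/html
        # don't compress already-compressed images, and don't vary on them
        SetEnvIfNoCase Request_URI \.(?:gif|jpe?g|png)$ no-gzip dont-vary
        # tell proxies to cache compressed and uncompressed variants separately
        Header append Vary User-Agent env=!dont-vary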

  • java.io.IOException: Server returned HTTP response code: 403 for URL

    - by Adao
    I want to download the MP3 file from the URL http://upload13.music.qzone.soso.com/30671794.mp3, but I always get "java.io.IOException: Server returned HTTP response code: 403 for URL". It works fine when I open the URL in a browser. Below is part of my code:

        BufferedInputStream bis = null;
        BufferedOutputStream bos = null;
        try {
            URL url = new URL(link);
            URLConnection urlConn = url.openConnection();
            urlConn.addRequestProperty("User-Agent",
                    "Mozilla/4.0 (compatible; MSIE 6.0; Windows NT 5.1)");
            String contentType = urlConn.getContentType();
            System.out.println("contentType:" + contentType);
            InputStream is = urlConn.getInputStream();
            bis = new BufferedInputStream(is, 4 * 1024);
            bos = new BufferedOutputStream(new FileOutputStream(fileName.toString()));

    Could anyone help me? Thanks in advance!
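
    For completeness, the snippet is truncated at that point; it would typically continue with a copy loop like this (a sketch, not the poster's code - and note the 403 itself is decided server-side, before any of this runs):

        // minimal completion of the truncated snippet: stream the response to disk
        byte[] buffer = new byte[4 * 1024];
        int bytesRead;
        while ((bytesRead = bis.read(buffer)) != -1) {
            bos.write(buffer, 0, bytesRead);
        }
        bos.flush();
        bos.close();
        bis.close();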

  • HTTP status 405 - HTTP method POST is not supported by this URL

    - by Wont Say
    I am getting this "HTTP method POST is not supported by this URL" error when I run my project. The funny thing is, it was running perfectly fine two days ago. After I made a few changes to my code but then restored my original code and its giving me this error. Could you please help me? Here is my index.html: <form method="post" action="login.do"> <div> <table> <tr><td>Username: </td><td><input type="text" name="e_name"/> </td> </tr> <tr><td> Password: </td><td><input type="password" name="e_pass"/> </td> </tr> <tr><td></td><td><input type="submit" name ="e_submit" value="Submit"/> Here is my Login servlet: public class Login extends HttpServlet { /** * Processes requests for both HTTP * <code>GET</code> and * <code>POST</code> methods. * * @param request servlet request * @param response servlet response * @throws ServletException if a servlet-specific error occurs * @throws IOException if an I/O error occurs */ protected void processRequest(HttpServletRequest request, HttpServletResponse response) throws ServletException, IOException, SQLException { response.setContentType("text/html;charset=UTF-8"); PrintWriter out = response.getWriter(); try { /* * TODO output your page here. You may use following sample code. */ int status; String submit = request.getParameter("e_submit"); String submit2 = request.getParameter("a_submit"); out.println("Here1"); String e_name = request.getParameter("e_name"); String e_password = request.getParameter("e_pass"); String a_name = request.getParameter("a_name"); String a_password = request.getParameter("a_pass"); out.println(e_name+e_password+a_name+a_password); Author author = new Author(a_name,a_password); Editor editor = new Editor(e_name,e_password); // If it is an AUTHOR login: if(submit==null){ status = author.login(author); out.println("Author Login"); //Incorrect login details if(status==0) { out.println("Incorrect"); RequestDispatcher view = request.getRequestDispatcher("index_F.html"); view.forward(request, response); } //Correct login details --- AUTHOR else { out.println("Correct login details"); HttpSession session = request.getSession(); session.setAttribute(a_name, "a_name"); RequestDispatcher view = request.getRequestDispatcher("index_S.jsp"); view.forward(request, response); } } //If it is an EDITOR login else if (submit2==null){ status = editor.login(editor); //Incorrect login details if(status==0) { RequestDispatcher view = request.getRequestDispatcher("index_F.html"); view.forward(request, response); } //Correct login details --- EDITOR else { out.println("correct"); HttpSession session = request.getSession(); session.setAttribute(e_name, "e_name"); session.setAttribute(e_password, "e_pass"); RequestDispatcher view = request.getRequestDispatcher("index_S_1.html"); view.forward(request, response); } } out.println("</body>"); out.println("</html>"); } finally { out.close(); } } @Override protected void doPost(HttpServletRequest req, HttpServletResponse resp) throws ServletException, IOException { super.doPost(req, resp); } @Override protected void doGet(HttpServletRequest req, HttpServletResponse resp) throws ServletException, IOException { super.doGet(req, resp); }} And my web.xml looks like this: <?xml version="1.0" encoding="UTF-8"?> <servlet> <servlet-name>action</servlet-name> <servlet-class>org.apache.struts.action.ActionServlet</servlet-class> <init-param> <param-name>config</param-name> <param-value>/WEB-INF/struts-config.xml</param-value> </init-param> <init-param> <param-name>debug</param-name> <param-value>2</param-value> 
</init-param> <init-param> <param-name>detail</param-name> <param-value>2</param-value> </init-param> <load-on-startup>2</load-on-startup> </servlet> <servlet> <servlet-name>Login</servlet-name> <servlet-class>controller.Login</servlet-class> </servlet> <servlet-mapping> <servlet-name>action</servlet-name> <url-pattern>*.do</url-pattern> </servlet-mapping> <servlet-mapping> <servlet-name>Login</servlet-name> <url-pattern>/login.do</url-pattern> </servlet-mapping> <session-config> <session-timeout> 30 </session-timeout> </session-config> <welcome-file-list> <welcome-file>index.html</welcome-file> </welcome-file-list> I use Glassfish v3 server - let me know anything else you need to know
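
    A likely culprit is the doPost override above: it calls super.doPost(), and HttpServlet's default implementation answers POST with "405 Method Not Allowed". A minimal delegating override would look like this (a sketch; processRequest also declares SQLException, which doPost's signature cannot, so it has to be wrapped):

        @Override
        protected void doPost(HttpServletRequest req, HttpServletResponse resp)
                throws ServletException, IOException {
            try {
                processRequest(req, resp); // delegate instead of calling super.doPost()
            } catch (SQLException e) {
                throw new ServletException(e);
            }
        }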

  • Unserializing an API return object (PHP/Ebay API)

    - by DavidYell
    I have been working with the eBay API for a project and have found it great. However, I have now hit a problem that is more PHP-related. When I read my items from eBay, I store a bunch of details in the database. Currently - just for the sake of it, really - I serialize the whole return object and store it in a related table, the idea being that when I display my information, I have all the details to hand should I need them. The problem arises in that the pricing information is always in a sub-object:

        [ConvertedAdjustmentAmount] => __PHP_Incomplete_Class Object
            (
                [__PHP_Incomplete_Class_Name] => eBayAmountType
                [_] => 0
                [currencyID] => USD
            )

    As you can see, when I unserialize my object, my cunning plan falls foul of the incomplete-class problem. I have checked the following question without success: http://stackoverflow.com/questions/965611/forcing-access-to-php-incomplete-class-object-properties. The main issue, as far as I can see, is that the price class is defined inside the eBay API, so how do I recreate it? I have also been reading http://uk3.php.net/manual/en/function.unserialize.php and trying to figure out unserialize_callback_func, which I can't figure out either, so any help would be appreciated!
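
    For what it's worth, unserialize_callback_func simply names a function that PHP calls with the undefined class name when unserialize() encounters it, giving you a chance to define or load the class first. A minimal sketch (the loader body is an assumption - ideally it would require the eBay SDK file that defines eBayAmountType; the eval stub is a last resort):

        <?php
        // called whenever unserialize() meets a class that isn't defined yet
        ini_set('unserialize_callback_func', 'load_missing_class');

        function load_missing_class($className)
        {
            // best: include the SDK file that defines $className here;
            // as a fallback, declare a permissive stub so the properties survive
            if (!class_exists($className)) {
                eval("class $className {}");
            }
        }

        $object = unserialize($serializedApiResponse); // e.g. the column from your table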

  • Fixing the Model Binding issue of ASP.NET MVC 4 and ASP.NET Web API

    - by imran_ku07
    Introduction:

    Yesterday, while checking the ASP.NET forums, I found an important issue/bug in ASP.NET MVC 4 and ASP.NET Web API. The issue is in the System.Web.PrefixContainer class, which is used by both the ASP.NET MVC and ASP.NET Web API assemblies. The details of this issue are available in this thread. This bug can be a breaking change for you if you have upgraded your application to ASP.NET MVC 4 and your application's model properties use the naming convention described in the thread. So, I have created a package which fixes this issue in both ASP.NET MVC and ASP.NET Web API. In this article, I will show you how to use this package.

    Description:

    Create or open an ASP.NET MVC 4 project and install the ImranB.ModelBindingFix NuGet package. Then add this using statement to your global.asax.cs file:

        using ImranB.ModelBindingFix;

    Then just add this line in the Application_Start method:

        Fixer.FixModelBindingIssue();
        // To fix only MVC, call this instead:
        // Fixer.FixMvcModelBindingIssue();
        // To fix only Web API, call this instead:
        // Fixer.FixWebApiModelBindingIssue();

    This line will fix the model binding issue. If you are using Html.Action or Html.RenderAction, you should use Html.FixedAction or Html.FixedRenderAction instead to avoid this bug (make sure to reference the ImranB.ModelBindingFix.SystemWebMvc namespace). If you are using the FormDataCollection.ReadAs extension method, you should use FormDataCollection.FixedReadAs instead (make sure to reference the ImranB.ModelBindingFix.SystemWebHttp namespace). The source code of this package is available on GitHub.

    Summary:

    There is a small but important issue/bug in ASP.NET MVC 4. In this article, I showed you how to fix it using a package. Hopefully you will enjoy this article too.

  • Questions re: Eclipse Jobs API

    - by BenCole
    Similar to http://stackoverflow.com/questions/8738160/eclipse-jobs-api-for-a-stand-alone-swing-project. That question mentions the Jobs API from the Eclipse IDE:

        ...The disadvantage of the pre-3.0 approach was that the user had to wait until an operation completed before the UI became responsive again. The UI still provided the user the ability to cancel the currently running operation, but no other work could be done until the operation completed. Some operations were performed in the background (resource decoration and JDT file indexing are two such examples), but these operations were restricted in the sense that they could not modify the workspace. If a background operation did try to modify the workspace, the UI thread would be blocked if the user explicitly performed an operation that modified the workspace and, even worse, the user would not be able to cancel the operation. A further complication with concurrency was that the interaction between the independent locking mechanisms of different plug-ins often resulted in deadlock situations. Because of the independent nature of the locks, there was no way for Eclipse to recover from the deadlock, which forced users to kill the application...

        ...The functionality provided by the workspace locking mechanism can be broken down into the following three aspects:

        - Resource locking, to ensure multiple operations do not concurrently modify the same resource
        - Resource change batching, to ensure UI stability during an operation
        - Identification of an appropriate time to perform incremental building

        With the introduction of the Jobs API, these areas have been divided into separate mechanisms and a few additional facilities have been added. The following list summarizes the facilities added:

        - Job class: support for performing operations or other work in the background.
        - ISchedulingRule interface: support for determining which jobs can run concurrently.
        - WorkspaceJob and two IWorkspace#run() methods: support for batching of delta change notifications.
        - Background auto-build: running of incremental build at a time when no other running operations are affecting resources.
        - ILock interface: support for deadlock detection and recovery.
        - Job properties for configuring user feedback for jobs run in the background.

        The rest of this article provides examples of how to use the above-mentioned facilities...

    In regard to the above API: is this an implementation of a particular design pattern? If so, which one?
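
    For context, the canonical use of the Job class looks like this inside plug-in code (a sketch against org.eclipse.core.runtime.jobs; the job name and body are placeholders):

        import org.eclipse.core.runtime.IProgressMonitor;
        import org.eclipse.core.runtime.IStatus;
        import org.eclipse.core.runtime.Status;
        import org.eclipse.core.runtime.jobs.Job;

        Job job = new Job("Background indexing") { // name shown in the Progress view
            @Override
            protected IStatus run(IProgressMonitor monitor) {
                // long-running work goes here; poll monitor.isCanceled() to honour Cancel
                return Status.OK_STATUS;
            }
        };
        job.setPriority(Job.LONG); // hint: long-running, lower-priority background work
        job.schedule();            // queued and run asynchronously by the platform's job manager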

  • PHP - Internal APIs/Libraries - What makes sense?

    - by Mark Locker
    I've been having a discussion lately with some colleagues about the best way to approach a new project, and I thought it would be interesting to get some external thoughts thrown into the mix. Basically, we're redeveloping a fairly large site (written in PHP) and have differing opinions on how the platform should be set up.

    Requirements: The platform will need to support multiple internal websites, as well as external (non-PHP) projects, which at the moment consist of a mobile app and a toolbar. We have no plans or need in the foreseeable future to open up an API externally (for use in products other than our own).

    My opinion: We should have a library of well-documented native model classes which can be shared between projects. These models would represent everything in our database and could take advantage of object-oriented features such as inheritance, traits, magic methods, etc., as well as employing an ORM. We could then add an API layer on top of these models which accepts requests and routes them to the appropriate methods, translating the response so that it can be used platform-independently. This routing could be set up per method, as and when it's required.

    Their opinion: We should have a single HTTP API which is used by all projects (internal PHP ones or otherwise).

    My thoughts: To me, there are a number of issues with the sole-HTTP-API approach:

    - It will be very expensive performance-wise. One page request will result in several additional HTTP requests (which, although local, are still requests that Apache needs to handle).
    - You lose the best features PHP has for OO development, from simple inheritance to employing the likes of an ORM, which can save you writing a lot of code.
    - For internal projects, the actual process makes me cringe. To get a user's name, for example, a request would go out of our box, over the LAN, back in, then run through a script which calls a method, JSON-encodes the output, and feeds that back. That response would then need to be JSON-decoded and presented as an array ready to use. Working with arrays, as opposed to objects, makes me sad in a modern PHP framework.

    Their thoughts (and my responses):

    - Having one way of doing things keeps it simple. (You'd only do things differently if you were using a different language anyway.)
    - It will be robust. (Seeing as the API would run off the library of models, I think my option would be just as robust.)

    What do you think? I'd be really interested to hear the thoughts of others on this, especially as opinions on both sides are not founded on any past experience.

  • Uploading to YouTube via HTTP POST

    - by sajid.nizami
    I am following the steps described at http://code.google.com/apis/youtube/2.0/developers_guide_dotnet.html#Browser_based_Upload. Whenever I try to upload anything using this method, I get an HTTP 400 error saying that the next_url is not provided. The code is pretty simple and is a copy of Google's own sample:

        <%@ Page Language="C#" AutoEventWireup="true" CodeFile="BrowserUpload.aspx.cs" Inherits="BrowserUpload" %>
        <%@ Import Namespace="Google.YouTube" %>
        <%@ Import Namespace="Google.GData.Extensions.MediaRss" %>
        <%@ Import Namespace="Google.GData" %>
        <%@ Import Namespace="Google.GData.YouTube" %>
        <%@ Import Namespace="Google.GData.Client" %>
        <!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Transitional//EN" "http://www.w3.org/TR/xhtml1/DTD/xhtml1-transitional.dtd">
        <html xmlns="http://www.w3.org/1999/xhtml">
        <head runat="server">
            <title></title>
            <script type="text/javascript">
                function checkForFile() {
                    if (document.getElementById('file').value) {
                        return true;
                    }
                    document.getElementById('errMsg').style.display = '';
                    return false;
                }
            </script>
        </head>
        <body>
            <%
                YouTubeRequestSettings settings = new YouTubeRequestSettings("Danat", "API-KEY", "loginid", "password");
                YouTubeRequest request = new YouTubeRequest(settings);
                Video newVideo = new Video();
                newVideo.Title = "My Test Movie";
                newVideo.Tags.Add(new MediaCategory("Autos", YouTubeNameTable.CategorySchema));
                newVideo.Keywords = "cars, funny";
                newVideo.Description = "My description";
                newVideo.YouTubeEntry.Private = false;
                newVideo.Tags.Add(new MediaCategory("mydevtag, anotherdevtag", YouTubeNameTable.DeveloperTagSchema));
                FormUploadToken token = request.CreateFormUploadToken(newVideo);
            %>
            <form action="<%= token.Url %>?next_url=<%= Server.UrlEncode("http://www.danatev.com") %>"
                  name="PostToYoutube" method="post" enctype="multipart/form-data"
                  onsubmit="return checkForFile();">
                <input id="file" type="file" name="file" />
                <div id="errMsg" style="display: none; color: red">
                    You need to specify a file.
                </div>
                <input type="hidden" name="token" value="<%= token.Token %>" />
                <input type="submit" value="go" />
            </form>

  • Harvesting Dynamic HTTP Content to produce Replicating HTTP Static Content

    - by Neil Pitman
    I have a slowly evolving dynamic website served from J2EE. The response time and load capacity of the server are inadequate for client needs, and ad hoc requests can unexpectedly affect other services running on the same application server/database. I know the reasons and can't address them in the short term. I understand HTTP caching hints (expiry, ETags, ...), and for the purpose of this question, please assume that I have maxed out the opportunities to reduce load. I am thinking of doing a brute-force traversal of all URLs in the system to prime a cache, and then copying the cache contents to geo-dispersed cache servers near the clients. I'm considering Squid or Apache HTTPD with mod_disk_cache. I want to prime one copy and (manually) replicate the cache contents; I don't need federation or intelligence among the slaves. When the data changes, invalidating the cache, I will refresh my master cache and update the slave versions, probably once a night. Has anyone done this? Is it a good idea? Are there other technologies I should investigate? I can program this, but I would prefer a solution built from configuring open-source technologies. Thanks
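
    The brute-force traversal itself can be as simple as pointing a recursive wget through the intended cache (a sketch; the host name is a placeholder, and GNU wget's manual describes --delete-after as intended for exactly this kind of proxy pre-fetching):

        # warm the cache: fetch every linked page plus its requisites, keep nothing locally
        wget --mirror --page-requisites --delete-after --no-host-directories \
             -e robots=off http://your-site-via-cache.example/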

  • Integrate Bing Search API into an ASP.NET application

    - by sreejukg
    A couple of months back, I wrote an article about how to integrate the Bing Search engine (API 2.0) with an ASP.NET website: http://weblogs.asp.net/sreejukg/archive/2012/04/07/integrate-bing-api-for-search-inside-asp-net-web-application.aspx

    Things change rapidly in the tech world, and Bing has changed too! The Bing Search API 2.0 will work until August 1, 2012; after that it will not return results. Shocked? Don't worry - the API has moved to the Windows Azure Marketplace, where you can sign up and continue using it, and a free tier is available depending on your usage. In this article, I am going to explain how you can integrate the new Bing API, now available in the Windows Azure Marketplace, with your website.

    You can access the Windows Azure Marketplace at https://datamarket.azure.com/. There are lots of applications available to subscribe to and use; Bing is one of them. You can find the new Bing Search API at https://datamarket.azure.com/dataset/5BA839F1-12CE-4CCE-BF57-A49D98D29A44

    To get access to the Bing Search API, you first need to register an account with the Windows Azure Marketplace. Sign in to the Marketplace site using your Windows Live account: from the Marketplace, click the sign-in button at the top right of the page, which takes you to the Windows Live ID authentication page. Once logged in, you will see the registration page for the Windows Azure Marketplace. You can agree or disagree to email from Microsoft (selecting the checkbox means you will get email about what is happening in the Marketplace); click Continue when you are done. On the next page you must accept the terms of use - this is not optional. Scroll down, select the "I agree" checkbox, and click the Register button. You are now a registered member of the Windows Azure Marketplace and can subscribe to data applications.

    To use the Bing API in your application, you must obtain an Account Key. The previous version of Bing required an API key; the current version uses an Account Key instead. Once you are logged in to the Marketplace, go to the "My Account" section in the top menu, where you can manage your subscriptions and Account Keys (Account Keys are used by your applications to access subscriptions). Click the "My Account" link, then "Account Keys" in the left menu; you can add an account key or use the default one. Creating an account key is a very simple process, and you can remove keys you have created, if necessary.

    The next step is to subscribe to the Bing Search API. At the moment, Bing offers two search APIs:

    1. Bing Search API - https://datamarket.azure.com/dataset/5ba839f1-12ce-4cce-bf57-a49d98d29a44
    2. Bing Search API - Web Results Only - https://datamarket.azure.com/dataset/8818f55e-2fe5-4ce3-a617-0b8ba8419f65

    The difference is that the latter returns only web results, while the former lets you specify the source type, such as image, video, web, or news. Carefully choose the API based on your application's requirements. In this article I am going to use the Web Results Only API, but the steps are similar for both.

    Go to the API page (https://datamarket.azure.com/dataset/8818f55e-2fe5-4ce3-a617-0b8ba8419f65); the subscription options are on the right side, and the free option is at the bottom of the page. Since I am going to use the free option, click its Sign Up link, select the "I agree" checkbox, and click the Sign Up button. You will get a receipt page detailing your subscription. You are now ready to use the Bing Search API - Web Results.

    The next step is to integrate the API into your ASP.NET application. On the Search API page (and also on the receipt page), you will see a ".NET C# Class Library" link; click it to download a code file named "BingSearchContainer.cs". The following sections demonstrate the use of the Bing Search API from an ASP.NET application.

    Create an empty ASP.NET web application and add the downloaded "BingSearchContainer.cs" to the project: right-click the project in Solution Explorer, choose Add -> Existing Item, browse to the download location, and select the file. To build the code file you need to add a reference to the System.Data.Services.Client library, which you can find in the .NET tab of the Add -> Reference dialog. The project should now build without errors.

    Next, add an ASP.NET page to the project containing a text box, a button, and a GridView. The idea is to search for the entered text and display the results in the GridView.
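
    The page markup is roughly as follows (a reconstruction - the original article showed it only as an image; only the control IDs txtSearch, btnSearch, and lstResults are fixed by the code-behind shown below, the rest is assumed):

        <asp:TextBox ID="txtSearch" runat="server" />
        <asp:Button ID="btnSearch" runat="server" Text="Search" OnClick="btnSearch_Click" />
        <asp:GridView ID="lstResults" runat="server" />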
    In the click event handler for the Search button, the page queries the Bing container and binds the results to the grid; the handler is essentially the btnSearch_Click method shown later in this article, minus the two paging parameters. Now run your project, enter some text in the text box, and click the Search button - you will see the results coming from Bing. For example, entering the text "Microsoft" returns a page of Bing web results.

    Searching Specific Websites

    If you want to search a particular website, pass the site URL as site:<site url>; to search more than one site, use a pipe (|). For example, the following search query will search for the word "design" and return results from Microsoft.com and Adobe.com:

        site:microsoft.com | site:adobe.com design

    This sample code searches only Microsoft.com for the entered text:

        var webResults = bingContainer.Web("site:www.Microsoft.com " + txtSearch.Text,
            null, null, null, null, null, null);

    Paging the results returned by the API

    By default, the Bing API returns 100 results for a query, and the downloaded code file doesn't include any paging option, but you can modify it to support paging. The Bing API accepts two parameters: $top (the number of results to return) and $skip (the number of results to skip). So if you want the 3rd page of results with a page size of 10, you pass $top = 10 and $skip = 20.

    Open BingSearchContainer.cs in the editor; the Web method looks like this:

        public DataServiceQuery<WebResult> Web(String Query, String Market, String Adult,
            Double? Latitude, Double? Longitude, String WebFileType, String Options)
        {

    I added two more parameters to the method signature:

        public DataServiceQuery<WebResult> Web(String Query, String Market, String Adult,
            Double? Latitude, Double? Longitude, String WebFileType, String Options,
            int resultCount, int pageNo)
        {

    and, inside the method, passed them to the query variable:

        query = query.AddQueryOption("$top", resultCount);
        query = query.AddQueryOption("$skip", (pageNo - 1) * resultCount);
        return query;

    Note that I didn't perform any validation, but you should check that resultCount and pageNo are greater than or equal to 1; if the parameters are not valid, the Bing Search API will throw an error.

    Now see the following code in the SearchPage.aspx.cs file:

        protected void btnSearch_Click(object sender, EventArgs e)
        {
            var bingContainer = new Bing.BingSearchContainer(
                new Uri("https://api.datamarket.azure.com/Bing/SearchWeb/"));
            // replace this value with your account key
            var accountKey = "your key";
            // the next line configures the bingContainer to use your credentials
            bingContainer.Credentials = new NetworkCredential(accountKey, accountKey);
            var webResults = bingContainer.Web("site:microsoft.com " + txtSearch.Text,
                null, null, null, null, null, null, 3, 2);
            lstResults.DataSource = webResults;
            lstResults.DataBind();
        }

    This call returns 3 results starting from the second page (i.e. it skips the first 3 results).

    Bing provides complete integration with its offerings: when you develop search-based applications, you can use the power of Bing to perform the search. Integrating the Bing Search API into an ASP.NET application is a simple process, and without investing much time you can develop a good search-based application. Make sure you read the terms of use before designing the application, and decide which API usage is suitable for you.

    Further reading:

        Bing API Migration Guide - http://go.microsoft.com/fwlink/?LinkID=248077
        Bing API FAQ - http://go.microsoft.com/fwlink/?LinkID=252146
        Bing API Schema Guide - http://go.microsoft.com/fwlink/?LinkID=252151

  • Understanding HTTP Cookies in Indy 10 for Delphi XE2

    - by Jerry Dodge
    I have been working with Indy 10 HTTP servers/clients lately in Delphi XE2, and I need to make sure I'm understanding session management correctly. In the server, I have a "bucket" of sessions: a list of objects, each representing a unique session. I don't use a username and password to authenticate users; instead I use a unique API key, which is issued to a client and has an expiration. When a client wishes to connect to the server, it first logs in by calling the "login" command, which is a path like this:

        http://localhost:1234/login?APIKey=abcdefghij

    The server checks this API key against the database and, if it's valid, creates a new session in the bucket, issues a new cookie (a unique string), and sets the response cookies with Success=Y and Cookie=abcdefghij. This is where I have the question: assuming the client end has its own method of cookie management, the client will receive this login response from the server and automatically save the cookies as necessary. Any future request from the client to the server will automatically send along these cookies, and the client side doesn't necessarily have to worry about setting them when sending requests to the server. Right?

    PS - I'm asking this question here on programmers.stackexchange.com because I didn't see it as a fit for stackoverflow.com. If anyone thinks it is appropriate enough for stackoverflow.com, please let me know.

  • Apache DVB HTTP video streaming bandwidth or priority problem

    - by igino manfre'
    I'm streaming a few precompressed DVB videos from the cloud. The streams are generated by VLC on "impossible" ports (such as 64085, 64086, etc.) and reverse-proxied by Apache on ports 80 and 8080. All the generated streams are listed at http://95.110.164.61/indexv.html. From an ADSL connection with enough downlink bandwidth, requesting a stream directly from VLC (such as http://95.110.164.61:64087/mpg2_6.4) plays fluently; requesting the same stream proxied by Apache (http://95.110.164.61/mpg2_6.4), the stream stops and goes. The only situation in which the Apache-proxied streams flow regularly is from a site connected through a 64 Mbps guaranteed-bandwidth link with an RTT to the server of less than 10 ms. Please note that streams below 2 Mbps proxy fluently. The system is a single-core Xeon running Windows 2008 R2, with 4 GB of RAM and 1 Gbps of network bandwidth. The drain on computational and bandwidth resources is negligible, with RAM usage always below 50%. On the system I run many VLC streamers; each consumes a variable amount of RAM (from about 25 to 70 MB), while the couple of httpd.exe processes consume no more than 7 MB. Using Wireshark on the server, I see that VLC directly sends the client many more packets than Apache does, and the stream is fragmented across many frames. I'm not a programmer and am new to Apache. Can anyone please point me to the relevant portion of Apache's huge documentation? Thank you. igino

  • sudo apt-get update errors

    - by Adrian Begi
    Here is what I get in my terminal when running sudo apt-get update. I don't know if the issue is with my sources.list or with my proxy setup (I have not made any changes to the proxies). Thank you in advance for any help.

        Ign http://security.ubuntu.com oneiric-security Release.gpg
        Ign http://security.ubuntu.com oneiric-security Release
        Ign http://security.ubuntu.com oneiric-security/main Sources/DiffIndex
        Ign http://security.ubuntu.com oneiric-security/restricted Sources/DiffIndex
        Ign http://security.ubuntu.com oneiric-security/universe Sources/DiffIndex
        Ign http://security.ubuntu.com oneiric-security/multiverse Sources/DiffIndex
        Ign http://security.ubuntu.com oneiric-security/main amd64 Packages/DiffIndex
        Ign http://security.ubuntu.com oneiric-security/restricted amd64 Packages/DiffIndex
        Ign http://security.ubuntu.com oneiric-security/universe amd64 Packages/DiffIndex
        Ign http://security.ubuntu.com oneiric-security/multiverse amd64 Packages/DiffIndex
        Ign http://security.ubuntu.com oneiric-security/main i386 Packages/DiffIndex
        Ign http://security.ubuntu.com oneiric-security/restricted i386 Packages/DiffIndex
        Ign http://security.ubuntu.com oneiric-security/universe i386 Packages/DiffIndex
        Ign http://security.ubuntu.com oneiric-security/multiverse i386 Packages/DiffIndex
        Ign http://security.ubuntu.com oneiric-security/main TranslationIndex
        Ign http://security.ubuntu.com oneiric-security/multiverse TranslationIndex
        Ign http://security.ubuntu.com oneiric-security/restricted TranslationIndex
        Ign http://security.ubuntu.com oneiric-security/universe TranslationIndex
        Err http://security.ubuntu.com oneiric-security/main Sources  404 Not Found [IP: 91.189.91.15 80]
        Err http://security.ubuntu.com oneiric-security/restricted Sources  404 Not Found [IP: 91.189.91.15 80]
        Err http://security.ubuntu.com oneiric-security/universe Sources  404 Not Found [IP: 91.189.91.15 80]
        Err http://security.ubuntu.com oneiric-security/multiverse Sources  404 Not Found [IP: 91.189.91.15 80]
        Err http://security.ubuntu.com oneiric-security/main amd64 Packages  404 Not Found [IP: 91.189.91.15 80]
        Err http://security.ubuntu.com oneiric-security/restricted amd64 Packages  404 Not Found [IP: 91.189.91.15 80]
        Err http://security.ubuntu.com oneiric-security/universe amd64 Packages  404 Not Found [IP: 91.189.91.15 80]
        Err http://security.ubuntu.com oneiric-security/multiverse amd64 Packages  404 Not Found [IP: 91.189.91.15 80]
        Err http://security.ubuntu.com oneiric-security/main i386 Packages  404 Not Found [IP: 91.189.91.15 80]
        Err http://security.ubuntu.com oneiric-security/restricted i386 Packages  404 Not Found [IP: 91.189.91.15 80]
        Err http://security.ubuntu.com oneiric-security/universe i386 Packages  404 Not Found [IP: 91.189.91.15 80]
        Err http://security.ubuntu.com oneiric-security/multiverse i386 Packages  404 Not Found [IP: 91.189.91.15 80]
        Ign http://security.ubuntu.com oneiric-security/main Translation-en_US
        Ign http://security.ubuntu.com oneiric-security/main Translation-en
        Ign http://security.ubuntu.com oneiric-security/multiverse Translation-en_US
        Ign http://security.ubuntu.com oneiric-security/multiverse Translation-en
        Ign http://security.ubuntu.com oneiric-security/restricted Translation-en_US
        Ign http://security.ubuntu.com oneiric-security/restricted Translation-en
        Ign http://security.ubuntu.com oneiric-security/universe Translation-en_US
        Ign http://security.ubuntu.com oneiric-security/universe Translation-en
        W: Failed to fetch http://security.ubuntu.com/ubuntu/dists/oneiric-security/main/source/Sources  404 Not Found [IP: 91.189.91.15 80]
        W: Failed to fetch http://security.ubuntu.com/ubuntu/dists/oneiric-security/restricted/source/Sources  404 Not Found [IP: 91.189.91.15 80]
        W: Failed to fetch http://security.ubuntu.com/ubuntu/dists/oneiric-security/universe/source/Sources  404 Not Found [IP: 91.189.91.15 80]
        W: Failed to fetch http://security.ubuntu.com/ubuntu/dists/oneiric-security/multiverse/source/Sources  404 Not Found [IP: 91.189.91.15 80]
        W: Failed to fetch http://security.ubuntu.com/ubuntu/dists/oneiric-security/main/binary-amd64/Packages  404 Not Found [IP: 91.189.91.15 80]
        W: Failed to fetch http://security.ubuntu.com/ubuntu/dists/oneiric-security/restricted/binary-amd64/Packages  404 Not Found [IP: 91.189.91.15 80]
        W: Failed to fetch http://security.ubuntu.com/ubuntu/dists/oneiric-security/universe/binary-amd64/Packages  404 Not Found [IP: 91.189.91.15 80]
        W: Failed to fetch http://security.ubuntu.com/ubuntu/dists/oneiric-security/multiverse/binary-amd64/Packages  404 Not Found [IP: 91.189.91.15 80]
        W: Failed to fetch http://security.ubuntu.com/ubuntu/dists/oneiric-security/main/binary-i386/Packages  404 Not Found [IP: 91.189.91.15 80]
        W: Failed to fetch http://security.ubuntu.com/ubuntu/dists/oneiric-security/restricted/binary-i386/Packages  404 Not Found [IP: 91.189.91.15 80]
        W: Failed to fetch http://security.ubuntu.com/ubuntu/dists/oneiric-security/universe/binary-i386/Packages  404 Not Found [IP: 91.189.91.15 80]
        W: Failed to fetch http://security.ubuntu.com/ubuntu/dists/oneiric-security/multiverse/binary-i386/Packages  404 Not Found [IP: 91.189.91.15 80]
        E: Some index files failed to download. They have been ignored, or old ones used instead.

    Here is my sources.list:

        #
        # deb cdrom:[Ubuntu-Server 11.10 _Oneiric Ocelot_ - Release amd64 (20111011)]/ dists/oneiric/main/binary-i386/
        # deb cdrom:[Ubuntu-Server 11.10 _Oneiric Ocelot_ - Release amd64 (20111011)]/ dists/oneiric/restricted/binary-i386/
        # deb cdrom:[Ubuntu-Server 11.10 _Oneiric Ocelot_ - Release amd64 (20111011)]/ oneiric main restricted
        #deb cdrom:[Ubuntu-Server 11.10 _Oneiric Ocelot_ - Release amd64 (20111011)]/ dists/oneiric/main/binary-i386/
        #deb cdrom:[Ubuntu-Server 11.10 _Oneiric Ocelot_ - Release amd64 (20111011)]/ dists/oneiric/restricted/binary-i386/
        #deb cdrom:[Ubuntu-Server 11.10 _Oneiric Ocelot_ - Release amd64 (20111011)]/ oneiric main restricted
        # See http://help.ubuntu.com/community/UpgradeNotes for how to upgrade to
        # newer versions of the distribution.
        deb http://us.archive.ubuntu.com/ubuntu/ oneiric main restricted
        deb-src http://us.archive.ubuntu.com/ubuntu/ oneiric main restricted
        ## Major bug fix updates produced after the final release of the
        ## distribution.
        deb http://us.archive.ubuntu.com/ubuntu/ oneiric-updates main restricted
        deb-src http://us.archive.ubuntu.com/ubuntu/ oneiric-updates main restricted
        ## N.B. software from this repository is ENTIRELY UNSUPPORTED by the Ubuntu
        ## team. Also, please note that software in universe WILL NOT receive any
        ## review or updates from the Ubuntu security team.
        deb http://us.archive.ubuntu.com/ubuntu/ oneiric universe
        deb-src http://us.archive.ubuntu.com/ubuntu/ oneiric universe
        deb http://us.archive.ubuntu.com/ubuntu/ oneiric-updates universe
        deb-src http://us.archive.ubuntu.com/ubuntu/ oneiric-updates universe
        ## N.B. software from this repository is ENTIRELY UNSUPPORTED by the Ubuntu
        ## team, and may not be under a free licence. Please satisfy yourself as to
        ## your rights to use the software. Also, please note that software in
        ## multiverse WILL NOT receive any review or updates from the Ubuntu
        ## security team.
        deb http://us.archive.ubuntu.com/ubuntu/ oneiric multiverse
        deb-src http://us.archive.ubuntu.com/ubuntu/ oneiric multiverse
        deb http://us.archive.ubuntu.com/ubuntu/ oneiric-updates multiverse
        deb-src http://us.archive.ubuntu.com/ubuntu/ oneiric-updates multiverse
        ## N.B. software from this repository may not have been tested as
        ## extensively as that contained in the main release, although it includes
        ## newer versions of some applications which may provide useful features.
        ## Also, please note that software in backports WILL NOT receive any review
        ## or updates from the Ubuntu security team.
        deb http://us.archive.ubuntu.com/ubuntu/ oneiric-backports main restricted universe multiverse
        deb-src http://us.archive.ubuntu.com/ubuntu/ oneiric-backports main restricted universe multiverse
        deb http://security.ubuntu.com/ubuntu oneiric-security main restricted
        deb-src http://security.ubuntu.com/ubuntu oneiric-security main restricted
        deb http://security.ubuntu.com/ubuntu oneiric-security universe
        deb-src http://security.ubuntu.com/ubuntu oneiric-security universe
        deb http://security.ubuntu.com/ubuntu oneiric-security multiverse
        deb-src http://security.ubuntu.com/ubuntu oneiric-security multiverse
        ## Uncomment the following two lines to add software from Canonical's
        ## 'partner' repository.
        ## This software is not part of Ubuntu, but is offered by Canonical and the
        ## respective vendors as a service to Ubuntu users.
        # deb http://archive.canonical.com/ubuntu oneiric partner
        # deb-src http://archive.canonical.com/ubuntu oneiric partner
        ## Uncomment the following two lines to add software from Ubuntu's
        ## 'extras' repository.
        ## This software is not part of Ubuntu, but is offered by third-party
        ## developers who want to ship their latest software.
        # deb http://extras.ubuntu.com/ubuntu oneiric main
        # deb-src http://extras.ubuntu.com/ubuntu oneiric main
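
    If Oneiric has meanwhile reached end of life, these 404s from security.ubuntu.com are expected: EOL releases move to the old-releases archive. In that case, the usual fix is to repoint sources.list there (a sketch - back up the file first and verify the release really is EOL before doing this):

        # back up, then rewrite both archive hosts to the old-releases mirror
        sudo cp /etc/apt/sources.list /etc/apt/sources.list.bak
        sudo sed -i -e 's|us.archive.ubuntu.com|old-releases.ubuntu.com|g' \
                    -e 's|security.ubuntu.com|old-releases.ubuntu.com|g' /etc/apt/sources.list
        sudo apt-get update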

  • Wireshark does not show any HTTP traffic

    - by hayat
    My PC is running Windows XP. I have been trying to capture HTTP traffic, but when I apply the following filters, Wireshark gives results I don't need:

        http                         -> only shows SSDP traffic
        http (!udp)                  -> does not show anything
        http (!http contains ssdp)   -> does not show anything
        http && tcp                  -> does not show anything
        http (!udp.dstport == 1900)  -> does not show anything
        tcp                          -> shows TCP traffic, but I cannot get to my required HTTP messages

    Please advise what the problem may be. The same thing happens when I run Wireshark on a different OS (Windows 7) on my laptop.

  • API access to a manually-created Google Map

    - by rutherford
    I have a number of public custom Google Maps created via http://maps.google.com/ - obviously associated with my Google account. Can I access these maps via the Google Maps JavaScript API? From what I can tell, the API doesn't appear to work with manually created maps on maps.google.com. And if not, is there another way to store overlay data (markers, etc.) that the JavaScript API can grab and load into the map in the client's browser? I am thinking of a service like DabbleDB, except that I don't think they offer write access via JavaScript (which would be necessary for the user to add markers to the map, for example). Obviously I could create a DB layer on my own server, but I am looking for a 'cloud' solution that takes the strain off my databases!

  • How to get JSON back from HTTP POST Request (to another domain)

    - by roman m
    I'm trying to use the API on a website. Here is the relevant part of the manual (taken from here):

    Authenticated Sessions

    To create an authenticated session, you need to request an authToken from the '/auth' API resource.

        URL: http://stage.amee.com/auth (this is not my domain)
        Method: POST
        Request format: application/x-www-form-urlencoded
        Response format: application/xml, application/json
        Response code: 200 OK
        Response body: details of the authenticated user, including API version.
        Extra data: "authToken" cookie and header, containing the authentication token
                    that should be used for subsequent calls.
        Parameters: username / password

    Example request:

        POST /auth HTTP/1.1
        Accept: application/xml
        Content-Type: application/x-www-form-urlencoded

        username=my_username&password=my_password

    Response:

        HTTP/1.1 200 OK
        Set-Cookie: authToken=1KVARbypAjxLGViZ0Cg+UskZEHmqVkhx/Pm...;
        authToken: 1KVARbypAjxLGViZ0Cg+UskZEHmqVkhx/PmEvzkPGp...==
        Content-Type: application/xml; charset=UTF-8

    QUESTION: How do I get this to work? I tried jQuery, but it seems to have a problem with cross-site (XSS) restrictions. An actual code snippet would be greatly appreciated.

    P.S. All I was looking for was the WebClient class in C#.
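
    Since the postscript points at WebClient, here is a minimal sketch of that approach (the URL and field names come from the manual excerpt above; error handling is omitted):

        using System;
        using System.Collections.Specialized;
        using System.Net;
        using System.Text;

        class AuthExample
        {
            static void Main()
            {
                using (var client = new WebClient())
                {
                    client.Headers[HttpRequestHeader.Accept] = "application/json";
                    var form = new NameValueCollection
                    {
                        { "username", "my_username" },
                        { "password", "my_password" }
                    };
                    // POSTs as application/x-www-form-urlencoded and returns the response body
                    byte[] body = client.UploadValues("http://stage.amee.com/auth", "POST", form);
                    Console.WriteLine(Encoding.UTF8.GetString(body));
                    // the token also comes back in a response header, per the manual excerpt
                    Console.WriteLine(client.ResponseHeaders["authToken"]);
                }
            }
        }

    Run server-side, this also sidesteps the browser's cross-domain restriction that was breaking the jQuery attempt.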

  • Consume REST API from .NET

    - by Ajish
    Hi all, I am trying to consume a REST API from my .NET application. The API is written in Java, and I have been asked to pass the authentication credentials via HTTP headers - specifically 'Date', 'Authorization', and 'Accept'. How can I pass these credentials via HTTP headers, and which .NET class can I use to accomplish this task? Can anyone help me with this? All your help will be appreciated. Ajish.
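
    A minimal sketch with HttpWebRequest (the URL and header values are placeholders - the API's documentation defines how the Authorization string must be built):

        using System;
        using System.IO;
        using System.Net;

        class RestHeadersExample
        {
            static void Main()
            {
                var request = (HttpWebRequest)WebRequest.Create("http://example.com/api/resource");
                request.Method = "GET";
                // Accept and Date are restricted headers, exposed as typed properties
                // (the Date property requires .NET 4 or later)
                request.Accept = "application/json";
                request.Date = DateTime.UtcNow;
                // Authorization can be set through the Headers collection
                request.Headers[HttpRequestHeader.Authorization] = "token-per-api-docs";

                using (var response = (HttpWebResponse)request.GetResponse())
                using (var reader = new StreamReader(response.GetResponseStream()))
                {
                    Console.WriteLine(reader.ReadToEnd());
                }
            }
        }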

  • How do I use an API?

    - by GRardB
    Background

    I have no idea how to use an API. I know that all APIs are different, but I've been doing research and I don't fully understand the documentation that comes with them. There's a programming competition at my university in a month and a half that I want to compete in (it revolves around APIs), but nobody on my team has ever used one. We're computer science majors, so we have programming experience; we've just never been exposed to an API. I tried looking at Twitter's documentation, but I'm lost. Would anyone be able to give me some tips on how to get started? Maybe a very easy API with examples, or an explanation of the essential elements common to different APIs? I don't need a full-blown tutorial on Stack Overflow; I just need to be pointed in the right direction.

    Update

    The programming languages I'm most fluent in are C (usually with a simple text editor) and Java (Eclipse). In an attempt to be more specific with my question: I understand that APIs (and yes, external libraries are what I was referring to) are simply sets of functions.

    Question

    I guess what I'm trying to ask is how I would go about accessing those functions. Do I need to download specific files and include them in my programs, or do they need to be accessed remotely, etc.?

  • Nginx Browser Caching using HTTP Headers outside server/location block

    - by Danny O'Sullivan
    I am having difficulty setting HTTP expires headers in Nginx outside of specific server (and then location) blocks. What I want is something like the following:

        location ~* \.(png|jpg|jpeg|gif|ico)$ {
            expires 1y;
        }

    but without having to repeat it in every single server block, because I am hosting a large number of sites. I can put it in every server block, but that's not very DRY. If I try to put it in the http block, or outside all other blocks, I get "location directive is not allowed here". It seems I have to put it in a server block, and I have a different server block for every virtual host. Any help/clarification would be appreciated.
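
    One common workaround (a sketch; the snippet path is an assumption): nginx will not accept location at the http level, but it will happily pull the same location block into each server block from a shared file via the include directive, so the rule is written once.

        # /etc/nginx/snippets/static-expires.conf  (path of your choosing)
        location ~* \.(png|jpg|jpeg|gif|ico)$ {
            expires 1y;
        }

        # then, in each server block:
        server {
            listen 80;
            server_name example.com;
            include snippets/static-expires.conf;
            # ...
        }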
