Search Results

Search found 19155 results on 767 pages for 'url redirection'.

Page 372 of 767

  • Retrieving JSON via HTTP request from a JBoss server

    - by Seth Solomon
    I am running into a java.net.SocketException: Unexpected end of file from server when I try to query some JSON from my JBoss server. I am hoping someone can spot where I am going wrong. Or does anyone have a suggestion for a better way to pull this JSON from my JBoss server?

        try {
            URL u = new URL("http://localhost:9990/management/subsystem/datasources/data-source/MySQLDS/statistics?read-resource&include-runtime=true&recursive=true");
            HttpURLConnection c = (HttpURLConnection) u.openConnection();
            String encoded = Base64.encode(("username" + ":" + "password").getBytes());
            c.setRequestMethod("POST");
            c.setRequestProperty("Authorization", "Basic " + encoded);
            c.setRequestProperty("Content-Type", "application/json");
            c.setUseCaches(false);
            c.setAllowUserInteraction(false);
            c.setConnectTimeout(5000);
            c.setReadTimeout(5000);
            c.connect();
            int status = c.getResponseCode(); // throws the exception here
            switch (status) {
                case 200:
                case 201:
                    BufferedReader br = new BufferedReader(new InputStreamReader(c.getInputStream()));
                    StringBuilder sb = new StringBuilder();
                    String line;
                    while ((line = br.readLine()) != null) {
                        sb.append(line + "\n");
                    }
                    br.close();
                    System.out.println(sb.toString());
                    break;
                default:
                    System.out.println(status);
                    break;
            }
        } catch (Exception e) {
            e.printStackTrace();
        }
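
    One hedged variant worth trying is sketched below: since read-resource is a read-only query, issue the same call as a GET, and read the error stream when the status is not 2xx so the server's own error body becomes visible. Whether the JBoss management endpoint accepts a GET for this URL is an assumption, and java.util.Base64 stands in here for the Base64 helper used above.

        import java.io.BufferedReader;
        import java.io.InputStream;
        import java.io.InputStreamReader;
        import java.net.HttpURLConnection;
        import java.net.URL;
        import java.util.Base64;

        public class DatasourceStatsClient {
            public static void main(String[] args) throws Exception {
                // Same endpoint as above; using GET instead of POST is an assumption about the management API.
                URL u = new URL("http://localhost:9990/management/subsystem/datasources/data-source/MySQLDS/statistics?read-resource&include-runtime=true&recursive=true");
                HttpURLConnection c = (HttpURLConnection) u.openConnection();
                String encoded = Base64.getEncoder().encodeToString("username:password".getBytes("UTF-8"));
                c.setRequestMethod("GET");
                c.setRequestProperty("Authorization", "Basic " + encoded);
                c.setRequestProperty("Accept", "application/json");
                c.setConnectTimeout(5000);
                c.setReadTimeout(5000);

                int status = c.getResponseCode();
                // On an error status, getInputStream() throws; the server's error body is on getErrorStream().
                InputStream body = (status >= 200 && status < 300) ? c.getInputStream() : c.getErrorStream();
                StringBuilder sb = new StringBuilder();
                if (body != null) {
                    BufferedReader br = new BufferedReader(new InputStreamReader(body, "UTF-8"));
                    String line;
                    while ((line = br.readLine()) != null) {
                        sb.append(line).append('\n');
                    }
                    br.close();
                }
                System.out.println(status + ": " + sb);
            }
        }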

    Read the article

  • Rails routing problem

    - by Steve
    I am new to Rails routing and I currently have a problem and hope someone can explain it to me. I am using Rails 2.3.5. Firstly, let me describe my working code: I have a test example with a controller (cars_controller) and an update action (along with some other actions). The update action needs the :id parameter. The edit.html.erb has a form:

        <% form_for :car, :url => { :controller => 'cars', :action => 'update' } %>
        ... # rest of the form content

    In config/routes.rb, I have a self-defined routing rule for update:

        map.connect 'car/update/:id', :controller => 'cars', :action => 'update'

    This works fine. Secondly, I change the code. All I change is the self-defined routing rule, to:

        map.connect 'car/:action/:id', :controller => 'cars'

    To me, this rule covers the self-written routing rule. Of course, this rule is also used by other actions such as edit. But now edit.html.erb doesn't work: it complains that the update action is missing the :id parameter. I have to change the form_for helper to:

        <% form_for :car, :url => { :controller => 'cars', :action => 'update', :id => @car } %>
        ... # @car is the instance passed to the edit view

    I know that if the :id parameter is missing, the update action will complain. What I don't understand is why my first code works (with my self-defined routing rule) but my second code fails. It seems to me that I didn't provide the :id parameter in my self-defined routing rule. Does anyone have an idea?

    Read the article

  • Google Reader API - feed/[FEEDURL]/ is coming back as Not found

    - by JustinXXVII
    There is one feed I'm subscribed to which always turns up as NOT FOUND when I try to use the API. I return an array of Dictionaries, containing 3 objects. The first in the list represents the user himself, like so: { FeedID = "user/MY_UNIQUE_NUMBER/state/com.google/reading-list"; Timestamp = 1273448807271463; Unread = 59; } The Unread count is very important. My client depends on downloading 59 items from Google before it refreshes. If a feed doesn't download properly, the count is off and the client won't update. An example of a working Feed is here: { FeedID = "feed/http://arstechnica.com/index.rssx"; Timestamp = 1273447158484528; Unread = 13; } The FeedID value combines with a specially formatted URL string and gives back a list of articles. The above example works fine. However, the following feed always returns NOT FOUND on Google, and if I paste the URL verbatim into a browser, it never turns up. See here: { FeedID = "feed/http://www.peopleofwalmart.com/?feed=rss2"; Timestamp = 1273424138183529; Unread = 6; } http://www.google.com/reader/api/0/stream/contents/feed/http://www.peopleofwalmart.com/?feed=rss2?ot=1&r=n&xt=user/-/state/com.google/read&n=6&ck=1273449028&client=testClient If you are at all proficient with the API, can you please help me? Like I said, since Google always says NOT FOUND when I search for that feed, my download count is off by N articles and won't update. I would rather not hack around it, honestly. Thanks!
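
    One thing to check, offered as an assumption rather than a confirmed fix: this feed's URL itself contains ?feed=rss2, so the ?, = and & need to be percent-encoded when the feed id is embedded in the stream/contents path; otherwise they merge with the API call's own query string. A small Java sketch, purely to illustrate the encoding (the endpoint and parameters are copied from the question):

        import java.net.URLEncoder;

        public class ReaderFeedUrl {
            public static void main(String[] args) throws Exception {
                String feedId = "feed/http://www.peopleofwalmart.com/?feed=rss2";

                // Percent-encode the feed id so its '?', '=' and '&' survive inside the path.
                // URLEncoder also encodes '/' and ':'; whether Reader requires or merely tolerates
                // that is an assumption worth testing in a browser first.
                String encoded = URLEncoder.encode(feedId, "UTF-8");

                String url = "http://www.google.com/reader/api/0/stream/contents/" + encoded
                        + "?ot=1&r=n&xt=user/-/state/com.google/read&n=6&ck=1273449028&client=testClient";
                System.out.println(url);
            }
        }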

    Read the article

  • Updating a specific key/value inside of an array field with MongoDB

    - by Jesta
    As a preface, I've been working with MongoDB for about a week now, so this may turn out to have a pretty simple answer. I have data already stored in my collection — call it content, as it contains articles, news, etc. Each of these articles contains an array called authors, which holds each author's information (address, phone, title, etc.). The goal: I am trying to create a query that will update the author's address on every article that the specific author exists in, and only the specified author block (not the others that exist within the array) — sort of a "global update" for a specific author that affects his/her information on every piece of content that exists. Here is an example of what the content with the authors looks like:

        {
            "_id" : ObjectId("4c1a5a948ead0e4d09010000"),
            "authors" : [
                { "user_id" : null, "slug" : "joe-somebody", "display_name" : "Joe Somebody", "display_title" : "Contributing Writer", "display_company_name" : null, "email" : null, "phone" : null, "fax" : null, "address" : null, "address2" : null, "city" : null, "state" : null, "zip" : null, "country" : null, "image" : null, "url" : null, "blurb" : null },
                { "user_id" : null, "slug" : "jane-somebody", "display_name" : "Jane Somebody", "display_title" : "Editor", "display_company_name" : null, "email" : null, "phone" : null, "fax" : null, "address" : null, "address2" : null, "city" : null, "state" : null, "zip" : null, "country" : null, "image" : null, "url" : null, "blurb" : null }
            ],
            "tags" : [ "tag1", "tag2", "tag3" ],
            "title" : "Title of the Article"
        }

    I can find every article that this author has created by running the following command:

        db.content.find({authors: {$elemMatch: {slug: 'joe-somebody'}}});

    So theoretically I should be able to update the author record for the slug joe-somebody but not jane-somebody (the second author); I am just unsure exactly how you reach in and update every record for that author. I thought I was on the right track, and here's what I've tried:

        db.content.update(
            {authors: {$elemMatch: {slug: 'joe-somebody'}}},
            {$set: {address: '1234 Avenue Rd.'}},
            false, true
        );

    I just believe there's something I am missing in the $set statement to specify the correct author and point inside the correct array. Any ideas?

    Update: I've also tried this now:

        db.content.update(
            {authors: {$elemMatch: {slug: 'joe-somebody'}}},
            {$set: {'authors.$.address': '1234 Avenue Rd.'}},
            false, true
        );
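
    For reference, here is a hedged sketch of the same multi-document update written against the legacy MongoDB Java driver. It assumes that a dot-notation query on authors.slug, combined with the positional $ operator in the $set, selects only the matching author element; the collection and field names are taken from the question, and the database name is made up.

        import com.mongodb.BasicDBObject;
        import com.mongodb.DB;
        import com.mongodb.DBCollection;
        import com.mongodb.Mongo;

        public class AuthorAddressUpdate {
            public static void main(String[] args) throws Exception {
                Mongo mongo = new Mongo("localhost", 27017);
                DB db = mongo.getDB("mydb");                 // database name is an assumption
                DBCollection content = db.getCollection("content");

                // Match documents containing the author, then use the positional $ operator
                // so only the matched array element's address is set.
                BasicDBObject query = new BasicDBObject("authors.slug", "joe-somebody");
                BasicDBObject update = new BasicDBObject("$set",
                        new BasicDBObject("authors.$.address", "1234 Avenue Rd."));

                content.update(query, update, false /* upsert */, true /* multi */);
                mongo.close();
            }
        }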

    Read the article

  • NoClassDefFoundError: HttpClient 4 (APACHE)

    - by Pujan Srivastava
    I am using Apache HttpClient (HC). I have added both httpcore-4.0.1.jar and httpclient-4.0.1.jar to the classpath in NetBeans. I am getting the error: java.lang.NoClassDefFoundError: org/apache/http/impl/client/DefaultHttpClient. My code is as follows. Please help.

        import org.apache.http.client.HttpClient;
        import org.apache.http.client.ResponseHandler;
        import org.apache.http.client.methods.HttpGet;
        import org.apache.http.impl.client.BasicResponseHandler;
        import org.apache.http.impl.client.DefaultHttpClient;

        public class HttpClientManager {

            public HttpClient httpclient;

            public HttpClientManager() {
                this.init();
            }

            public void init() {
                try {
                    httpclient = new DefaultHttpClient();
                } catch (Exception e) {
                    e.printStackTrace();
                }
            }

            public void getCourseList() {
                String url = "http://exnet.in.th/api.php?username=demoinst&ha=2b62560&type=instructor";
                HttpGet httpget = new HttpGet(url);
                ResponseHandler<String> responseHandler = new BasicResponseHandler();
                try {
                    String responseBody = httpclient.execute(httpget, responseHandler);
                    System.out.println(responseBody);
                } catch (Exception e) {
                }
            }
        }
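
    A NoClassDefFoundError for a class that compiles fine usually means the jar is on the compile classpath but not on the runtime classpath (for example, it is not copied next to the built application or not listed in the manifest's Class-Path). A minimal check, using the class name from the error message, that can be run from the same launch configuration:

        public class ClasspathCheck {
            public static void main(String[] args) {
                try {
                    // The class named in the NoClassDefFoundError above.
                    Class.forName("org.apache.http.impl.client.DefaultHttpClient");
                    System.out.println("httpclient jar is visible at runtime");
                } catch (ClassNotFoundException e) {
                    System.out.println("httpclient jar is NOT on the runtime classpath: " + e);
                }
            }
        }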

    Read the article

  • Login with Kohana auth module - what am I doing wrong?

    - by keithjgrant
    I'm trying to login with the following controller action, but my login attempt keeps failing (I get the 'invalid username and/or password' message). What am I doing wrong? I also tried the other method given in the examples in the auth documentation, Auth::instance()->login($user->username, $form->password);, but I get the same result. Kohana version is 2.3.4. public function login() { $auth = Auth::instance(); if ($auth->logged_in()) { url::redirect('/account/summary'); } $view = new View('login'); $view->username = ''; $view->password = ''; $post = $this->input->post(); $form = new Validation($post); $form->pre_filter('trim', 'username') ->pre_filter('trim', 'password') ->add_rules('username', 'required'); $failed = false; if (!empty($post) && $form->validate()) { $login = array( 'username' => $form->username, 'password' => $form->password, ); if (ORM::factory('user')->login($login)) { url::redirect('/accounts/summary'); } else { $view->username = $form->username; $view->message = in_array('required', $form->errors()) ? 'Username and password are required.' : 'Invalid username and/or password.'; } } $view->render(true); }

    Read the article

  • How do I sign requests reliably for the Last.fm api in C#?

    - by Arda Xi
    I'm trying to implement authorization through Last.fm. I'm submitting my arguments as a Dictionary to make the signing easier. This is the code I'm using to sign my calls: public static string SignCall(Dictionary<string, string> args) { IOrderedEnumerable<KeyValuePair<string, string>> sortedArgs = args.OrderBy(arg => arg.Key); string signature = sortedArgs.Select(pair => pair.Key + pair.Value). Aggregate((first, second) => first + second); return MD5(signature + SecretKey); } I've checked the output in the debugger, it's exactly how it should be, however, I'm still getting WebExceptions every time I try. Here's my code I use to generate the URL in case it'll help: public static string GetSignedURI(Dictionary<string, string> args, bool get) { var stringBuilder = new StringBuilder(); if (get) stringBuilder.Append("http://ws.audioscrobbler.com/2.0/?"); foreach (var kvp in args) stringBuilder.AppendFormat("{0}={1}&", kvp.Key, kvp.Value); stringBuilder.Append("api_sig="+SignCall(args)); return stringBuilder.ToString(); } And sample usage to get a SessionKey: var args = new Dictionary<string, string> { {"method", "auth.getSession"}, {"api_key", ApiKey}, {"token", token} }; string url = GetSignedURI(args, true); EDIT: Oh, and the code references an MD5 function implemented like this: public static string MD5(string toHash) { byte[] textBytes = Encoding.UTF8.GetBytes(toHash); var cryptHandler = new System.Security.Cryptography.MD5CryptoServiceProvider(); byte[] hash = cryptHandler.ComputeHash(textBytes); return hash.Aggregate("", (current, a) => current + a.ToString("x2")); }

    Read the article

  • JSF works only with the .xhtml ending

    - by ThreeFingerMark
    Hello, I am starting out with JSF web programming. At the moment all files have the .xhtml extension. When I go to http://localhost:8080/myProject/start.jsf everything is all right. But when I rename the file from start.xhtml to start.jsf I get a NoClassDefFound error. What is my mistake?

        <servlet>
            <servlet-name>Faces Servlet</servlet-name>
            <servlet-class>javax.faces.webapp.FacesServlet</servlet-class>
        </servlet>
        <servlet-mapping>
            <servlet-name>Faces Servlet</servlet-name>
            <url-pattern>*.jsf</url-pattern>
        </servlet-mapping>
        <context-param>
            <param-name>javax.faces.PROJECT_STAGE</param-name>
            <param-value>Development</param-value>
        </context-param>

    Read the article

  • Dynamic checkboxlist

    - by Steve
    Hi, I am currently building a dynamic URL tab system in which a user can say which URLs they want displayed on a tab page control. In the database I have the following columns:

        userID   int
        URLName  varchar
        Enabled  bit

    I am pulling the data back OK, but what I am trying to do is populate a checkbox list with the URL name and its status on a user options page, so users can say which tabs they want displayed. I have written the following methods to get the URLs and create the checkboxes, however I keep getting the following error:

        ex = {"InvalidArgument=Value of '1' is not valid for 'index'.\r\nParameter name: index"}

    It reads the first row OK, but when it hits the second row being returned I get that error. Has anyone got any ideas? Thanks.

        private void GetUserURLS()
        {
            db.initiateCommand("[Settings].[LoadAllUserURLS]", CommandType.StoredProcedure);
            sqlp = db.addParameter("@UserID", _UserID, SqlDbType.Int, ParameterDirection.Input);
            sqlp = db.addParameter("@spErrorID", DBNull.Value, SqlDbType.Int, ParameterDirection.InputOutput);
            db.executeCommand();
            CreateCheckBoxes(db.getTable(0).Rows);
            db.releaseCommand();
        }

        private void CreateCheckBoxes(DataRowCollection rows)
        {
            try
            {
                int i = 0;
                foreach (DataRow row in rows)
                {
                    // Gets the URL name and path when the status is enabled. The Enabled/Disabled status is set up in the user options page.
                    string URLName = row["URLName"].ToString();
                    bool enabled = Convert.ToBoolean(row["Enabled"]);

                    CheckedListBox CB = new CheckedListBox();
                    CB.Items.Insert(i, URLName);
                    CB.Tag = "CB" + i.ToString();
                    checkedListBox1.Items.Add(CB);
                    i++;
                }
            }
            catch (Exception ex)
            {
                // Log error
                Functionality func = new Functionality();
                func.LogError(ex);

                // Error message the user will see
                string FriendlyError = "There has been an error populating checkboxes with the urls - A notification has been sent to development";
                Classes.ShowMessageBox.MsgBox(FriendlyError, "There has been an Error", MessageBoxButtons.OK, MessageBoxIcon.Error);
            }
        }

    Read the article

  • Increase width of divs, displayed side by side, using draggable events

    - by Vaibhav Shukla
    I have two divs of fixed length as of now, which loads external URL by dynamically embedding iframes inside them. Divs are appearing next to each other - one on left and other right. As of now, I have fixed their width to 50% each. But, I want to give user a flexibility to increase the width of any div to view the URL inside easily without scrolling horizontally. Something like dragging the border separating the two divs to either left or right according to his need. Is there a way I could achieve this? Please suggest any library or something. I have gone through a library twentytwenty which is used for images. I don't know how will that work for dynamic iframes. Here is the JSFiddle which displays the divs. <div> <div id="originalPage" style="width:54%;height: 730px;float:left"> <p>one div </p> </div> <div id="diffReport" style="width:45%; height: 730px;float:right"> <p>another div</p> </div> </div>

    Read the article

  • Webbrowser locks first Excel Instance

    - by figus
    VB.NET 2008 - WinForms. I've been looking for an answer to my problem but I only find more questions related to it. I'm using an AxWebBrowser control in VB.NET 2008 to open an Excel file in a Windows Form, and it opens successfully. The problem is this: if I already had an Excel file open before opening another Excel file in my WebBrowser control, the window that contains the file that was already open gets locked (I can't give focus to that Excel window; if I click on it, the focus returns to my application). I tried creating a new Excel instance with CreateObject("Excel.Application") just before loading the file, but the WebBrowser locks the first Excel instance. Is there a way to make it lock the last instance instead? I use the WebBrowser1.Navigate(Url) method to load the file, where Url is a String like C:\myExcelFile.xlsm. WebBrowser1.Document.Application.UserControl is also True, but it still gets locked. Any suggestions? Thanks in advance!

    Read the article

  • .NET: Is it possible to get HttpWebRequest to automatically decompress gzip'd responses?

    - by Cheeso
    In this answer, I described how I resorted to wrapping a GZipStream around the response stream in an HttpWebResponse in order to decompress it. The relevant code looks like this:

        HttpWebRequest hwr = (HttpWebRequest) WebRequest.Create(url);
        hwr.CookieContainer = PersistentCookies.GetCookieContainerForUrl(url);
        hwr.Accept = "text/xml, */*";
        hwr.Headers.Add(HttpRequestHeader.AcceptEncoding, "gzip, deflate");
        hwr.Headers.Add(HttpRequestHeader.AcceptLanguage, "en-us");
        hwr.UserAgent = "My special app";
        hwr.KeepAlive = true;

        var resp = (HttpWebResponse) hwr.GetResponse();
        using (Stream s = resp.GetResponseStream())
        {
            Stream s2 = s;
            if (resp.ContentEncoding.ToLower().Contains("gzip"))
                s2 = new GZipStream(s2, CompressionMode.Decompress);
            else if (resp.ContentEncoding.ToLower().Contains("deflate"))
                s2 = new DeflateStream(s2, CompressionMode.Decompress);
            ... use s2 ...
        }

    Is there a way to get HttpWebResponse to provide a decompressing stream automatically? In other words, a way for me to eliminate the following from the above code:

        Stream s2 = s;
        if (resp.ContentEncoding.ToLower().Contains("gzip"))
            s2 = new GZipStream(s2, CompressionMode.Decompress);
        else if (resp.ContentEncoding.ToLower().Contains("deflate"))
            s2 = new DeflateStream(s2, CompressionMode.Decompress);

    Thanks.

    Read the article

  • ASP.NET MVC: Returning a view with querystring intact

    - by ajbeaven
    I'm creating a messaging web app in ASP.NET MVC and am having some problems displaying an error message to the user when they go to send a message and something is wrong. A user can look through people's profiles and then click 'send a message'. The following action is called (the URL is /message/create?to=username) and shows them a page where they can enter their message and send it:

        public ActionResult Create(string to)
        {
            ViewData["recipientUsername"] = to;
            return View();
        }

    On the page that is displayed, the username is entered into a hidden input field. When the user clicks 'send':

        [AcceptVerbs(HttpVerbs.Post)]
        public ActionResult Create(FormCollection collection, string message)
        {
            try
            {
                // do message stuff that errors out
            }
            catch
            {
                ModelState.AddModelErrors(message.GetRuleViolations()); // adding errors to ModelState
            }
            return View();
        }

    So now the error message is displayed to the user fine, however the URL has changed in that it no longer has the querystring (it is just /message/create). Again, this would be fine except that when the user clicks the refresh button, the page errors out because the Create action no longer has the 'to' parameter. So I'm guessing that I need to maintain my querystring somehow. Is there any way to do this, or do I need to use a different method altogether?

    Read the article

  • problems with Console.SetOut in Release Mode?

    - by Matt Jacobsen
    I have a bunch of Console.WriteLine calls in my code that I can observe at runtime. I communicate with a native library that I also wrote. I'd like to stick some printf calls in the native library and observe them too, but I don't see them at runtime. I've created a convoluted hello-world app to demonstrate my problem. When the app runs, I can debug into the native library and see that the hello world is called. The output never lands in the TextWriter, though. Note that if the same code is run as a console app, then everything works fine. C#:

        [DllImport("native.dll")]
        static extern void Test();

        StreamWriter writer;

        public Form1()
        {
            InitializeComponent();
            writer = new StreamWriter(@"c:\output.txt");
            writer.AutoFlush = true;
            System.Console.SetOut(writer);
        }

        private void button1_Click(object sender, EventArgs e)
        {
            Test();
        }

    and the native part:

        __declspec(dllexport) void Test()
        {
            printf("Hello World");
        }

    Update: hamishmcn below started talking about debug/release builds. I removed the native call in the button1_Click method above and replaced it with a standard Console.WriteLine .NET call. When I compiled and ran this in debug mode, the messages were redirected to the output file. When I switched to release mode, however, the calls weren't redirected. Console redirection only seems to work in debug mode. How do I get around this?

    Read the article

  • UITableView Inactive Until Scroll iPhone

    - by dubbeat
    HI, I'm trying to find the cause of some annoying behaviour with my UITableView. In each cell of my table (10 of them ) I asynchronously load an image. The problem is, is that if I don't touch the app at all the imageviews will permanently show a loading icon. However as soon as I scroll a cell off the screen and back on again the image shows straight away. What could be causing this "stuck in the mud" behaviour? Is there a way to force whatever gets called when the list is scrolled to make the needed update happen? #define ASYNC_IMAGE_TAG 9999 #define LABEL_TAG 8888 // Customize the appearance of table view cells. - (UITableViewCell *)tableView:(UITableView *)tableView cellForRowAtIndexPath:(NSIndexPath *)indexPath { PromoListItem *promo = [promoList objectAtIndex:indexPath.row]; static NSString *CellIdentifier = @"Cell"; AsyncImageView *asyncImageView = nil; UILabel *label = nil; UITableViewCell *cell = [tableView dequeueReusableCellWithIdentifier:CellIdentifier]; if (cell == nil) { cell = [[[UITableViewCell alloc] initWithStyle:UITableViewCellStyleDefault reuseIdentifier:CellIdentifier] autorelease]; CGRect frame; frame.origin.x = 0; frame.origin.y = 0; frame.size.width = 100; frame.size.height = 100; asyncImageView = [[[AsyncImageView alloc] initWithFrame:frame] autorelease]; asyncImageView.tag = ASYNC_IMAGE_TAG; [cell.contentView addSubview:asyncImageView]; frame.origin.x = 110; frame.size.width =100; label = [[[UILabel alloc] initWithFrame:frame] autorelease]; label.tag = LABEL_TAG; [cell.contentView addSubview:label]; cell.accessoryType = UITableViewCellAccessoryDisclosureIndicator; } else { asyncImageView = (AsyncImageView *) [cell.contentView viewWithTag:ASYNC_IMAGE_TAG]; label = (UILabel *) [cell.contentView viewWithTag:LABEL_TAG]; } NSURL *url = [NSURL URLWithString:promo.imgurl]; [asyncImageView loadImageFromURL:url]; label.text = promo.artistname; return cell; }

    Read the article

  • Equal Column Heights

    - by cjmcjm
    I want the east and west divs to extend down to match the height of the center div... possible? Thanks so much.

        .con {
            padding-left: 22px;
            padding-right: 22px;
            overflow: hidden;
        }
        .col {
            position: relative;
            float: left;
        }
        .west {
            width: 7px;
            right: 22px;
            margin-left: -100%;
            background: url(http://www.example.com/west.png) repeat-y;
            padding: 0 0 0 15;
        }
        .east {
            width: 7px;
            margin-right: -22px;
            background: url(http://www.example.com/east.png) repeat-y;
            padding: 0 15 0 0;
        }
        .center {
            width: 100%;
        }

        <div class="con">
            <div class="center col">
                Test Text<br/>
                Test Text<br/>
                Test Text<br/>
                Test Text<br/>
            </div>
            <div class="west col">
            </div>
            <div class="east col">
            </div>
        </div>

    Read the article

  • How do I reject if exists? for non-nested attributes?

    - by GoodGets
    Currently my controller lets a user submit muliple "links" at a time. It collects them into an array, creates them for that user, but catches any errors for the User to go back and fix. How can I ignore the creation of any links that already exist for that user? I know that I can use validates_uniqueness_of with a scope for that user, but I'd rather just ignore their creation completely. Here's my controller: @links = params[:links].values.collect{ |link| current_user.links.create(link) }.reject { |p| p.errors.empty? } Each link has a url, so I thought about checking if that link.url already exists for that user, but wasn't really sure how, or where, to do that. Should I tack this onto my controller somehow? Or should it be a new method in the model, like as in a before_validation Callback? (Note: these "links" are not nested, but they do belong_to :user.) So, I'd like to just be able to ignore the creation of these links if possible. Like if a user submits 5 links, but 2 of them already exist for him, then I'd just like for those 2 to be ignored, while the other 3 are created. How should I go about doing this?

    Read the article

  • jQuery class changing works, but doesn't affect another function like it should

    - by Qwibble
    So I have content boxes that close and expand when you click an arrow. The content box has two classes for telling whether it is open or closed (.box_open, .box_closed). A hover function is assigned to box_open so when it is open and you hover over the header, the arrow appears. However, I don't want this to happen when the box is closed, as I want to arrow to remain visible when the box is closed. When the box closes, the box_open class is removed, but the function assigned to that class still works. Here's the jquery code for the two functions. You can also see them in the head of the demo below. // Display Arrow on Box Header Hover $(".box_open").hover( function () { $(this).find('a').show(); }, function () { $(this).find('a').hide(); } ); // Open and Close Boxes: $(".box_header a").click( function () { $(this).parent().next('.box_border').stop().toggle(); $(this).parent().toggleClass("box_open"); $(this).parent().toggleClass("box_closed"); return false; } ); Can anyone take a look at what the problem is? Here's the demo url: Demo Url

    Read the article

  • UISplitView's DetailView not changing its content

    - by Knodel
    I have an iPad app with a UISplitView. In the Root View I have a two level UITableView (it takes its content from a plist). In the Detail View I have a UIWebView. - (void)tableView:(UITableView *)tableView didSelectRowAtIndexPath:(NSIndexPath *)indexPath { //Get the dictionary of the selected data source. NSDictionary *dictionary = [self.tableDataSource objectAtIndex:indexPath.row]; //Get the children of the present item. NSArray *Children = [dictionary objectForKey:@"Children"]; RootViewController *rvController = [[RootViewController alloc] init]; rvController.CurrentTitle = [dictionary objectForKey:@"Title"]; rvController.CurrentLevel = +1; //Push the new table view on the stack [self.navigationController pushViewController:rvController animated:YES]; rvController.tableDataSource = Children; NSString *name = [NSString stringWithFormat:@"%@.rtfd.zip", rvController.CurrentTitle ]; NSString* resourcePath = [[NSBundle mainBundle] resourcePath]; NSString* sourceFilePath = [resourcePath stringByAppendingPathComponent:name]; NSURL* url = [NSURL fileURLWithPath:sourceFilePath isDirectory:NO]; NSURLRequest* request = [NSURLRequest requestWithURL:url]; [detailViewController.webView loadRequest:request]; } That's how I want the UIWebView's content to be changed. If I name the .rtfd.zip file the same as a first level cell of the UITableView, it works, UIWebView's content changes. But this doesn't work with the second level UITableView. What am I doing wrong? Thanks in advance!

    Read the article

  • Java webapp: how to implement a web bug (1x1 pixel)?

    - by NoozNooz42
    In the accepted answer in the following question, a SO regular with 13K+ rep suggests to use a "web bug" (non-cacheable 1x1 img) to be able to track requests in the logs: http://stackoverflow.com/questions/1784893 How can I do this in Java? Basically, I've got two issues: how to make sure the 1x1 image is not cacheable (how to set the header)? how to make sure the query for these 1x1 image will appear in the logs? I'm looking for exact piece of code because I know how to write a .jsp/servlet and I know how to serve an 1x1 image :) My question is really about the exact .jsp/servlet that I should write and how/what needs to be done so that Tomcat logs the request. For example I plan to use the following mapping: <servlet-mapping> <servlet-name>WebBugServlet</servlet-name> <url-pattern>/webbug*</url-pattern> </servlet-mapping> and then use an img tag referencing a "webbug.png" (or .gif), so how do I write the .jsp/servlet? What/where should I look for in the logs?
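
    A minimal sketch of such a servlet is given below. The class name matches the mapping above; the byte values are assumed to form a 1x1 transparent GIF (substitute your own image if you prefer a PNG, and adjust the img src accordingly). It sets the usual no-cache headers so every page view reaches the server, and it relies on Tomcat's access log (an AccessLogValve configured in server.xml) to record each request; nothing extra is needed in the servlet itself for logging. Note also that servlet url-patterns only support exact matches, path-prefix matches (/webbug/*), and extension matches (*.gif), so /webbug* may need to be adjusted to one of those forms.

        import java.io.IOException;
        import java.io.OutputStream;
        import javax.servlet.ServletException;
        import javax.servlet.http.HttpServlet;
        import javax.servlet.http.HttpServletRequest;
        import javax.servlet.http.HttpServletResponse;

        public class WebBugServlet extends HttpServlet {

            // 1x1 transparent GIF (43 bytes); assumed values — replace with your own image if needed.
            private static final byte[] PIXEL = new byte[] {
                0x47, 0x49, 0x46, 0x38, 0x39, 0x61, 0x01, 0x00, 0x01, 0x00,
                (byte) 0x80, 0x00, 0x00, (byte) 0xff, (byte) 0xff, (byte) 0xff,
                0x00, 0x00, 0x00, 0x21, (byte) 0xf9, 0x04, 0x01, 0x00, 0x00,
                0x00, 0x00, 0x2c, 0x00, 0x00, 0x00, 0x00, 0x01, 0x00, 0x01,
                0x00, 0x00, 0x02, 0x02, 0x44, 0x01, 0x00, 0x3b
            };

            @Override
            protected void doGet(HttpServletRequest req, HttpServletResponse resp)
                    throws ServletException, IOException {
                // Make the response uncacheable so every page view produces a request in the access log.
                resp.setHeader("Cache-Control", "no-cache, no-store, must-revalidate");
                resp.setHeader("Pragma", "no-cache");
                resp.setDateHeader("Expires", 0);

                resp.setContentType("image/gif");
                resp.setContentLength(PIXEL.length);
                OutputStream out = resp.getOutputStream();
                out.write(PIXEL);
                out.flush();
            }
        }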

    Read the article

  • Really frustrated: Help writing a sample twitter app

    - by Jack
    I have installed WAMP. I have enable cURL in php.ini. I want to implement a twitter app that posts a new status message for a user. Here's my code <?php function updateTwitter($status) { $username = 'xxxxxx'; $password = 'xxxx'; $url = 'http://twitter.com/statuses/update.xml'; $postargs = 'status='.urlencode($status); $responseInfo=array(); $ch = curl_init($url); curl_setopt($ch, CURLOPT_CONNECTTIMEOUT, 2); curl_setopt($ch, CURLOPT_RETURNTRANSFER, 1); curl_setopt($ch, CURLOPT_PROXY,"localhost:80"); curl_setopt ($ch, CURLOPT_POST, true); // Give CURL the arguments in the POST curl_setopt ($ch, CURLOPT_POSTFIELDS, $postargs); // Set the username and password in the CURL call curl_setopt($ch, CURLOPT_USERPWD, $username.':'.$password); // Set some cur flags (not too important) $response = curl_exec($ch); if($response === false) { echo 'Curl error: ' . curl_error($ch); } else { echo 'Operation completed without any errors<br/>'; } // Get information about the response $responseInfo=curl_getinfo($ch); // Close the CURL connection curl_close($ch); // Make sure we received a response from Twitter if(intval($responseInfo['http_code'])==200){ // Display the response from Twitter echo $response; }else{ // Something went wrong echo "Error: " . $responseInfo['http_code']; } curl_close($ch); } updateTwitter("Just finished a sweet tutorial on http://brandontreb.com"); ?> I am getting the following output Operation completed without any errors Error: 404 Please help.

    Read the article

  • Show progress during a long Ajax call

    - by kousen
    I have a simple web site (http://www.kousenit.com/twitterfollowervalue) that computes a quantity based on a person's Twitter followers. Since the Twitter API only returns followers 100 at a time, a complete process may involve lots of calls. At the moment, I have an Ajax call to a method that runs the Twitter loop. The method looks like (Groovy): def updateFollowers() { def slurper = new XmlSlurper() followers = [] def next = -1 while (next) { def url = "http://api.twitter.com/1/statuses/followers.xml?id=$id&cursor=$next" def response = slurper.parse(url) response.users.user.each { u -> followers << new TwitterUser(... process data ...) } next = response.next_cursor.toBigInteger() } return followers } This is invoked from a controller called renderTTFV.groovy. I call the controller via an Ajax call using the prototype library: On my web page, in the header section (JavaScript): function displayTTFV() { new Ajax.Updater('ttfv','renderTTFV.groovy', {}); } and there's a div in the body of the page that updates when the call is complete. Everything is working, but the updateFollowers method can take a fair amount of time. Is there some way I could return a progress value? For example, I'd like to update the web page on every iteration. I know ahead of time how many iterations there will be. I just can't figure out a way to update the page in the middle of that loop. Any suggestions would be appreciated.
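
    One common pattern, sketched below under the assumption that the follower count (and therefore the number of 100-user pages) is known up front: have the long-running call write its progress somewhere the web tier can read (the HTTP session is the simplest place), expose a tiny second endpoint that returns the current value, and poll that endpoint from the page while the main Ajax.Updater request is still running. A hedged Java-servlet sketch of the reporting side; the Groovy code could update the same session attributes from inside its paging loop.

        import java.io.IOException;
        import javax.servlet.ServletException;
        import javax.servlet.http.HttpServlet;
        import javax.servlet.http.HttpServletRequest;
        import javax.servlet.http.HttpServletResponse;
        import javax.servlet.http.HttpSession;

        // Hypothetical endpoint that reports how far the follower download has progressed.
        // The long-running code would set the same attributes inside its cursor loop, e.g.
        //   session.setAttribute("followersFetched", followers.size());
        //   session.setAttribute("followersTotal", expectedTotal);
        public class ProgressServlet extends HttpServlet {
            @Override
            protected void doGet(HttpServletRequest req, HttpServletResponse resp)
                    throws ServletException, IOException {
                HttpSession session = req.getSession(true);
                Integer fetched = (Integer) session.getAttribute("followersFetched");
                Integer total = (Integer) session.getAttribute("followersTotal");

                resp.setContentType("application/json");
                resp.setHeader("Cache-Control", "no-cache");
                resp.getWriter().write("{\"fetched\": " + (fetched == null ? 0 : fetched)
                        + ", \"total\": " + (total == null ? 0 : total) + "}");
            }
        }

    On the page, a Prototype PeriodicalExecuter could fetch this URL every second or two and update the div until the main request completes.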

    Read the article

  • realtime visitors with nodejs & redis & socket.io & php

    - by orhan bengisu
    I am new to these technologies. I want to get realtime visitors for each product on my site — a notification like "X users viewing this product". Whenever a user connects to a product, the counter for that product should be increased, and when they disconnect it should be decreased, just for that product. I searched a lot of documents but got confused. I am using the Predis library for PHP. What I have done may be totally wrong; I am not sure where to put createClient, when to subscribe, and when to unsubscribe. What I have done so far, on the product detail page:

        $key = "product_views_".$product_id;
        $counter = $redis->incr($key);
        $redis->publish("productCounter", json_encode(array("product_id"=> "1000", "counter"=> $counter )));

    In app.js:

        var app = require('express')()
          , server = require('http').createServer(app)
          , socket = require('socket.io').listen(server,{ log: false })
          , url = require('url')
          , http = require('http')
          , qs = require('querystring')
          , redis = require("redis");

        var connected_socket = 0;
        server.listen(8080);

        var productCounter = redis.createClient();
        productCounter.subscribe("productCounter");

        productCounter.on("message", function(channel, message){
            console.log("%s, the message : %s", channel, message);
            io.sockets.emit(channel, message);
        }

        productCounter.on("unsubscribe", function(){
            // I think to decrease the counter here. Should I? And how?
        }

        io.sockets.on('connection', function (socket) {
            connected_socket++;
            socket_id = socket.id;
            console.log("["+socket_id+"] connected");
            socket.on('disconnect', function (socket) {
                connected_socket--;
                console.log("Client disconnected");
                productCounter.unsubscribe("productCounter");
            });
        })

    Thanks a lot for your answers!

    Read the article

  • What techniques can be used to detect so called "black holes" (a spider trap) when creating a web crawler?

    - by Tom
    When creating a web crawler, you have to design some kind of system that gathers links and adds them to a queue. Some, if not most, of these links will be dynamic: they appear to be different, but do not add any value, as they are specifically created to fool crawlers. An example: we tell our crawler to crawl the domain evil.com by entering an initial lookup URL. Let's assume we let it crawl the front page initially, evil.com/index. The returned HTML will contain several "unique" links:

        evil.com/somePageOne
        evil.com/somePageTwo
        evil.com/somePageThree

    The crawler will add these to the buffer of uncrawled URLs. When somePageOne is being crawled, the crawler receives more URLs:

        evil.com/someSubPageOne
        evil.com/someSubPageTwo

    These appear to be unique, and so they are. They are unique in the sense that the returned content is different from previous pages and that the URL is new to the crawler; however, it appears that this is only because the developer has made a "loop trap" or "black hole". The crawler will add this new sub page, and the sub page will have another sub page, which will also be added. This process can go on infinitely. The content of each page is unique but totally useless (it is randomly generated text, or text pulled from a random source). Our crawler will keep finding new pages which we are actually not interested in. These loop traps are very difficult to find, and if your crawler does not have anything in place to prevent them, it will get stuck on a certain domain indefinitely. My question is: what techniques can be used to detect so-called black holes? One of the most common answers I have heard is the introduction of a limit on the number of pages to be crawled, however I cannot see how this can be a reliable technique when you do not know what kind of site is to be crawled. A legitimate site, like Wikipedia, can have hundreds of thousands of pages, and such a limit could return a false positive for those kinds of sites. Any feedback is appreciated. Thanks.
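
    For what it's worth, the limits crawlers typically combine are cheap structural heuristics rather than a single per-domain page cap: a maximum URL path depth, and a cap on how many URLs that differ only in their query string are accepted for the same path. A hedged Java sketch of that idea (the thresholds are arbitrary assumptions, not tuned values):

        import java.net.URI;
        import java.util.HashMap;
        import java.util.Map;

        // Illustrative heuristics only; real crawlers also use robots.txt, politeness delays,
        // and content-based duplicate detection (e.g. shingling/simhash) alongside checks like these.
        public class TrapHeuristics {

            private static final int MAX_PATH_DEPTH = 12;       // assumed threshold
            private static final int MAX_URLS_PER_PATH = 200;   // assumed threshold

            private final Map<String, Integer> urlsPerPath = new HashMap<>();

            public boolean looksLikeTrap(String url) {
                URI uri = URI.create(url);
                String path = uri.getPath() == null ? "" : uri.getPath();

                // 1. Very deep paths (/a/b/c/... repeated) are a classic sign of a generated loop.
                int depth = path.split("/").length;
                if (depth > MAX_PATH_DEPTH) {
                    return true;
                }

                // 2. Huge numbers of URLs that differ only in their query string for the same path.
                String key = uri.getHost() + path;
                int seen = urlsPerPath.merge(key, 1, Integer::sum);
                return seen > MAX_URLS_PER_PATH;
            }

            public static void main(String[] args) {
                TrapHeuristics h = new TrapHeuristics();
                System.out.println(h.looksLikeTrap("http://evil.com/somePageOne"));                 // false
                System.out.println(h.looksLikeTrap("http://evil.com/a/a/a/a/a/a/a/a/a/a/a/a/a/x")); // true
            }
        }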

    Read the article

  • is it possible to display video information from an rtsp stream in an android app UI

    - by Joseph Cheung
    I have managed to get a working video player that can stream RTSP links, however I'm not sure how to display the video's current time position in the UI. I have used the getDuration and getCurrentPosition calls, stored this information in a string, and tried to display it in the UI, but it doesn't seem to work. In main.xml:

        <TextView android:id="@+id/player"
            android:layout_width="wrap_content"
            android:layout_height="wrap_content"
            android:layout_margin="1px"
            android:text="@string/cpos" />

    In strings.xml:

        <string name="cpos">""</string>

    In Player.java:

        private void playVideo(String url) {
            try {
                media.setEnabled(false);
                if (player == null) {
                    player = new MediaPlayer();
                    player.setScreenOnWhilePlaying(true);
                } else {
                    player.stop();
                    player.reset();
                }
                player.setDataSource(url);
                player.getCurrentPosition();
                player.setDisplay(holder);
                player.setAudioStreamType(AudioManager.STREAM_MUSIC);
                player.setOnPreparedListener(this);
                player.prepareAsync();
                player.setOnBufferingUpdateListener(this);
                player.setOnCompletionListener(this);
            } catch (Throwable t) {
                Log.e(TAG, "Exception in media prep", t);
                goBlooey(t);
                try {
                    try {
                        player.prepare();
                    } catch (IOException e) {
                        // TODO Auto-generated catch block
                        e.printStackTrace();
                    }
                    Log.v(TAG, "Duration: === " + player.getDuration());
                } catch (IllegalStateException e) {
                    // TODO Auto-generated catch block
                    e.printStackTrace();
                }
            }
        }

        private Runnable onEverySecond = new Runnable() {
            public void run() {
                if (lastActionTime > 0 && SystemClock.elapsedRealtime() - lastActionTime > 3000) {
                    clearPanels(false);
                }
                if (player != null) {
                    timeline.setProgress(player.getCurrentPosition());
                    // stores getCurrentPosition as a string
                    cpos = String.valueOf(player.getCurrentPosition());
                    System.out.print(cpos);
                }
                if (player != null) {
                    timeline.setProgress(player.getDuration());
                    // stores getDuration as a string
                    cdur = String.valueOf(player.getDuration());
                    System.out.print(cdur);
                }
                if (!isPaused) {
                    surface.postDelayed(onEverySecond, 1000);
                }
            }
        };
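
    The position strings above are written to System.out but never pushed into the TextView, so the layout keeps showing the initial @string/cpos value. A hedged excerpt showing one way to wire the position into the view from the same once-a-second runnable: it assumes the Runnable lives inside the Activity (so findViewById is available), that R.id.player is the TextView declared above, and that surface.postDelayed runs the runnable on the UI thread, which makes setText safe to call directly. DateUtils.formatElapsedTime expects seconds, hence the division by 1000.

        import android.text.format.DateUtils;
        import android.widget.TextView;

        // Looked up once in the Activity (e.g. in onCreate()); R.id.player matches the TextView above.
        TextView positionView = (TextView) findViewById(R.id.player);

        // Then, inside onEverySecond's run() method, instead of System.out.print(...):
        if (player != null) {
            int posMs = player.getCurrentPosition();
            int durMs = player.getDuration();
            positionView.setText(
                    DateUtils.formatElapsedTime(posMs / 1000) + " / "
                    + DateUtils.formatElapsedTime(durMs / 1000));
        }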

    Read the article
