Search Results

Search found 13987 results on 560 pages for 'browser scrollbars'.


  • How can I set QNetworkReply properties to get correct NCBI pages?

    - by Claire Huang
    I am trying to fetch the following URL with the downloadURL function below: http://www.ncbi.nlm.nih.gov/nuccore/27884304. The data I get back is not what we see through a browser. I now understand this is because I need to supply the correct request information, such as the browser identification. How can I find out which pieces of information I need to set, and how do I set them (with the setHeader function?)? In VC++ we can use the CInternetSession and CHttpConnection objects to get the correct response without setting any other detail information; is there a similar way in Qt or another cross-platform C++ network library? (Yes, I do need it to be cross-platform.)

        QNetworkReply::NetworkError downloadURL(const QUrl &url, QByteArray &data)
        {
            QNetworkAccessManager manager;
            QNetworkRequest request(url);
            request.setHeader(QNetworkRequest::ContentTypeHeader,
                              "Mozilla/5.0 (Windows; U; Windows NT 6.0; en-US; rv:1.9.1.7) Gecko/20091221 Firefox/3.5.7 (.NET CLR 3.5.30729)");
            QNetworkReply *reply = manager.get(request);

            QEventLoop loop;
            QObject::connect(reply, SIGNAL(finished()), &loop, SLOT(quit()));
            loop.exec();

            QVariant statusCodeV = reply->attribute(QNetworkRequest::RedirectionTargetAttribute);
            QUrl redirectTo = statusCodeV.toUrl();
            if (!redirectTo.isEmpty()) {
                if (redirectTo.host().isEmpty()) {
                    const QByteArray newaddr = ("http://" + url.host() + redirectTo.encodedPath()).toAscii();
                    redirectTo.setEncodedUrl(newaddr);
                    redirectTo.setHost(url.host());
                }
                return downloadURL(redirectTo, data);
            }
            if (reply->error() != QNetworkReply::NoError) {
                return reply->error();
            }
            data = reply->readAll();
            delete reply;
            return QNetworkReply::NoError;
        }

    Read the article

  • Which language should I use to program a GUI application?

    - by Roman
    I would like to write a GUI application for managing information (text documents). In more detail, it should be similar to TiddlyWiki. I would like it to have some good visual effects (such as a nice representation of tree structures that you can rotate, and some sound). I would also like to include some communication over the Internet (for sharing and collaboration). It should include some features of applications such as a web browser, a word processor and Skype. Which programming language should I use? I like the idea of using JavaScript (like TiddlyWiki). The good thing about that is that users do not have to install anything: they open a file in a browser and it works! The bad thing is that JavaScript cannot communicate over the Internet with other applications. I think the choice of programming language, in my case, is conditioned by two things: what can be done with the language (what restrictions there are), and how easy it is to program. I would like to have building blocks that can do a lot of things, rather than having to program them myself and, in this way, reinvent the wheel. ADDED: I would also like to make it platform independent.

    Read the article

  • jQuery: preventDefault() not working on input/click events?

    - by Jason
    I want to disable the default context menu when a user right-clicks on an input field so that I can show a custom context menu. Generally speaking, it's pretty easy to disable the right-click menu by doing something like:

        $([whatever]).bind("click", function(e) { e.preventDefault(); });

    And in fact, I can do this on just about every element EXCEPT for input fields in FF - anyone know why, or could point me towards some documentation? Here is the relevant code I am working with, thanks guys. HTML:

        <script type="text/javascript">
            var r = new RightClickTool();
        </script>
        <div id="main">
            <input type="text" class="listen rightClick" value="0" />
        </div>

    JS:

        function RightClickTool(){
            var _this = this;
            var _items = ".rightClick";

            $(document).ready(function() {
                _this.init();
            });

            this.init = function() {
                _this.setListeners();
            }

            this.setListeners = function() {
                $(_items).click(function(e) {
                    var webKit = !$.browser.msie && e.button == 0;
                    var ie = $.browser.msie && e.button == 1;
                    if(webKit||ie) {
                        // Left mouse...do something()
                    }
                    else if(e.button == 2) {
                        e.preventDefault();
                        // Right mouse...do something else();
                    }
                });
            }
        } // Ends Class
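
    One approach worth trying (a hedged sketch, not from the original question): bind the contextmenu event itself rather than inspecting e.button inside a click handler, since Firefox fires contextmenu for right-clicks on input fields and honours preventDefault() there (unless the user has disabled script access to the context menu in preferences). The .rightClick selector is taken from the question; showCustomMenu is a hypothetical placeholder for whatever draws the custom menu.

        $(document).ready(function() {
            $(".rightClick").bind("contextmenu", function(e) {
                e.preventDefault();                  // suppress the native context menu
                // showCustomMenu(e.pageX, e.pageY); // hypothetical custom-menu call
                return false;                        // belt-and-braces for older jQuery/browsers
            });
        });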

    Read the article

  • Pass Variable to Java Method from an Ant Target

    - by user200317
    At the moment I have a .properties file that stores settings related to the framework. Example:

        default.auth.url=http://someserver-at008:8080/
        default.screenshots=false
        default.dumpHTML=false

    And I have written a class to extract those values; here is the relevant method of that class:

        public static String getResourceAsStream(String defaultProp) {
            String defaultPropValue = null;
            //String keys = null;
            try {
                InputStream inputStream = SeleniumDefaultProperties.class.getClassLoader().getResourceAsStream(PROP_FILE);
                Properties properties = new Properties();
                // load the input stream using properties.
                properties.load(inputStream);
                defaultPropValue = properties.getProperty(defaultProp);
            } catch (IOException e) {
                log.error("Something wrong with .properties file, check the location.", e);
            }
            return defaultPropValue;
        }

    Throughout the application I use methods like the following to extract just the property needed:

        public String getBrowserDefaultCommand() {
            String bcmd = SeleniumDefaultProperties.getResourceAsStream("default.browser.command");
            if (bcmd.equals(""))
                handleMissingConfigProperties(SeleniumDefaultProperties.getResourceAsStream("default.browser.command"));
            return bcmd;
        }

    Now I have decided to change this and use Ant to pass a parameter instead of reading it from the .properties file. I was wondering how I could pass a value to a Java method using Ant. None of these classes have main methods and never will, which is why I was unable to use Java system properties for this. Thanks in advance.

    Read the article

  • Rails: fighting long HTTP response times with ajax. Is it a good idea? Please help with implementation

    - by baranov
    Hi, everybody! I've googled some tutorials and browsed some SO answers, but was unable to find a recipe for my problem. I'm writing a web site that is supposed to display an almost-realtime stock chart. The data is stored in a constantly updating MySQL database, and I wrote a find_by_sql query that fetches all the data I need to get my chart drawn. Everything is OK except performance - it takes from one second to one minute for different queries to fetch all the data from the database, and this time includes the necessary (My)SQL-server-side calculations. This is simply unacceptable. I have the following idea: if the data is queried from the MySQL server one point at a time instead of as the entire dataset, it takes only about 1-100 ms to get an individual point. I imagine the data-fetch process might be browser-driven. After the user presses the button to get a chart drawn, the controller makes one request to the database and renders, say, a progress bar at "1% ready". When the browser gets the response, it immediately makes an (ajax) request, the server fetches the next piece of data and renders "2%", and so on, until all the data is ready and the server displays the requested chart. Could this be implemented in Rails + JS, and is there a tutorial for solving a similar problem on the web? I suppose if this is feasible at all, somebody has already done it before. I have read several articles about ajax and believe I understand the general principles, but I have never done nontrivial ajax programming myself. Thanks for your time!
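
    A minimal client-side sketch of the polling loop described above (not from the question; the /chart_progress endpoint, the #progress and #chart elements and the JSON field names are assumptions - the Rails side would need a matching action that fetches one chunk per call and reports progress):

        function pollChartProgress(jobId) {
            $.getJSON("/chart_progress", { id: jobId }, function (resp) {
                $("#progress").text(resp.percent + "% ready");    // update the progress bar
                if (resp.percent < 100) {
                    pollChartProgress(jobId);                     // fetch the next chunk immediately
                } else {
                    $("#chart").attr("src", resp.chart_url);      // all data ready: show the chart
                }
            });
        }

    Each response triggers the next request, so the server only ever does one small piece of work per round trip; adding a short setTimeout between calls would trade latency for server load.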

    Read the article

  • production vs dev server content-disposition filename encoding

    - by rgripper
    I am using ASP.NET MVC 3 and downloading the file in the same browser (Chrome 22) in both cases. Here is the controller code:

        [HttpPost]
        public ActionResult Uploadfile(HttpPostedFileBase file) //HttpPostedFileBase file, string excelSumInfoId)
        {
            ...
            return File(
                result.Output,
                "application/vnd.ms-excel",
                String.Format("{0}_{1:yyyy.MM.dd-HH.mm.ss}.xls", "Суммирование", DateTime.Now));
        }

    On my dev machine I download the programmatically created file with the correct name "Суммирование_2012.10.18-13.36.06.xls". Response:

        Content-Disposition:attachment; filename*=UTF-8''%D0%A1%D1%83%D0%BC%D0%BC%D0%B8%D1%80%D0%BE%D0%B2%D0%B0%D0%BD%D0%B8%D0%B5_2012.10.18-13.36.06.xls
        Content-Length:203776
        Content-Type:application/vnd.ms-excel
        Date:Thu, 18 Oct 2012 09:36:06 GMT
        Server:ASP.NET Development Server/10.0.0.0
        X-AspNet-Version:4.0.30319
        X-AspNetMvc-Version:3.0

    From the production server I download a file named after the controller's action plus the correct extension, "Uploadfile.xls", which is wrong. Response:

        Content-Disposition:attachment; filename="=?utf-8?B?0KHRg9C80LzQuNGA0L7QstCw0L3QuNC1XzIwMTIuMTAuMTgtMTMuMzYu?=%0d%0a =?utf-8?B?NTUueGxz?="
        Content-Length:203776
        Content-Type:application/vnd.ms-excel
        Date:Thu, 18 Oct 2012 09:36:55 GMT
        Server:Microsoft-IIS/7.5
        X-AspNet-Version:4.0.30319
        X-AspNetMvc-Version:3.0
        X-Powered-By:ASP.NET

    The Web.config files are the same on both machines. Why does the filename get encoded differently for the same browser? Are there any default settings in web.config, differing between the machines, that I am missing?

    Read the article

  • How can I change the function of the back arrow in the browser navigation toolbar?

    - by jackrobert
    Hi, I have developed a simple web application using JSP.

    Step 1: I have a login page (username, password and a submit button).
    Step 2: I have a logout page with one link ("Go to login page"); clicking it returns you to the login page.
    Step 3: I also have several other pages (page1, page2, page3, page4, ...).

    Normally the application works like this: after login you land on page1, then some action takes you to page2, then another to page3, then another to page4. Now I am on page4. If I click the back arrow in the browser's navigation toolbar, it normally goes to the previous page (page3) - for example, if you are viewing a Gmail inbox message and click the back arrow, you automatically go back to the previous page. But what I need is this: if the back arrow is clicked from any page in my application, the user should go to the logout page. For example, if I am on page4 and click the back arrow, I want to go to the logout page, not to page3. How can I achieve this? Is it possible with JavaScript or in any other way? Please help me.
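
    A hedged client-side sketch of one common way to trap the back button (not from the question, and it assumes the HTML5 history API is available in the target browsers; /logout.jsp is a hypothetical URL). Each page pushes an extra history entry on load, so pressing Back fires popstate instead of leaving the page, and the handler redirects to logout:

        // push a sentinel entry so the first Back press stays on this page
        history.pushState({ trap: true }, "", location.href);

        window.addEventListener("popstate", function () {
            // user pressed the back arrow: send them to the logout page instead
            window.location.href = "/logout.jsp";   // hypothetical logout URL
        }, false);

    Sending Cache-Control: no-store from the JSPs additionally prevents the browser from showing stale copies of the protected pages when history navigation is not fully suppressed.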

    Read the article

  • Problem with TTWebController… alternative view controller template for UIWebView?

    - by prendio2
    I have a UIWebView with content populated from a Last.fm API call. The content contains links, many of which are handled by parsing info from the URL in:

        - (BOOL)webView:(UIWebView *)aWbView shouldStartLoadWithRequest:(NSURLRequest *)request navigationType:(UIWebViewNavigationType)navigationType;

    …and passing that info on to appropriate view controllers. When I cannot handle a URL, I would like to push a new view controller on the stack and load the web page into a UIWebView there. The TTWebController in Three20 looks very promising since it also implements a navigation toolbar, loading indicators etc. Somewhat naively, perhaps, I thought I would be able to use this controller to display web content in my app without adopting Three20 throughout the app. I followed the instructions on GitHub to add Three20 to my project and added the following code to deal with URLs in shouldStartLoadWithRequest:

        TTWebController* browser = [[TTWebController alloc] init];
        [browser openURL:@"http://three20.info"]; // initially test with this URL
        [self.navigationController pushViewController:navigator animated:YES];

    However, this crashes my app with the following notice:

        *** -[TTNavigator setParentViewController:]: unrecognized selector sent to instance 0x3d5db70

    What else do I need to do to use TTWebController in my app? Or do you know of an alternative view controller template that I can use instead of the Three20 implementation?

    Read the article

  • Problem running a report from another machine

    - by shabi
    I am using SQL Server 2008 Reporting Services configured in remote mode. Everything works fine and the reports run on my machine. I am not using the report viewer control; I access the reports in the browser instead. The problem is that when I access a report from any other system in a browser, using the required URL, I get the following permission error:

        Server Error in /ReportServer Application.
        Access is denied.
        Description: An error occurred while accessing the resources required to serve this request. You might not have permission to view the requested resources.
        Error message 401.3: You do not have permission to view this directory or page using the credentials you supplied.

    I have gone through all the steps in this article, http://msdn.microsoft.com/en-us/library/ms365170.aspx, and set the remote permissions, but after all the changes I have had no success and still get the same error. Can someone tell me, or provide a list of steps for, how to set the permissions so that the report can be run from another machine? A quick and detailed response would be appreciated.

    Read the article

  • SEO: Where do I start?

    - by James
    Hi, I am primarily a software developer, but I tend to delve into some web development from time to time. I have recently been asked to look at a friend's website, as they want to improve their position in search engine results (Google, Yahoo, etc.). I am aware there is no guarantee that their position will change; however, I do know there are techniques and ways to make a website more visible to search engine spiders and consequently improve its position in the rankings, i.e. by performing SEO. Before I started looking at the SEO of the site I did the following prerequisite checks:

        - Ran the website through the W3C Markup Validator and the W3C CSS Validator services.
        - Looked through the markup manually (checking for meta tags etc.).
        - Performed a thorough cross-browser compatibility test.

    From those checks, the following was evident:

        - No SEO has been performed on the site before.
        - The website has been developed using a visual editing tool such as Dreamweaver (it failed the validation services miserably and tables are being used everywhere!).
        - The site is fairly cross-browser compatible (only some slight issues with IE8, which are easily resolved).
        - The site navigation is not very search-engine friendly (e.g. index.php?page=home).

    I can see right away that a major improvement for SEO (or so I think) would be to change the way the website is structured, i.e. move away from dynamic pages such as "index.php?page=home" and actually have pages called "home.html". Other areas would be adding meta tags to identify keywords, and then sprinkling these keywords over the pages. As I am a rookie in this department, could anyone give me some advice on how to perform thorough SEO on this website? Thanks in advance.

    Read the article

  • Python - Open default mail client using mailto, with multiple recipients

    - by victorhooi
    Hi, I'm attempting to write a Python function that sends an email to a list of users using the default installed mail client. I want to open the mail client and give the user the opportunity to edit the list of recipients or the email body. I did some searching, and according to this page: http://www.sightspecific.com/~mosh/WWW_FAQ/multrec.html it is apparently against the RFC spec to put multiple comma-delimited recipients in a mailto link. However, that seems to be the way everybody else does it. What exactly is the modern stance on this? Anyhow, I found the following two pages:

        http://2ality.blogspot.com/2009/02/generate-emails-with-mailto-urls-and.html
        http://www.megasolutions.net/python/invoke-users-standard-mail-client-64348.aspx

    which suggest solutions using urllib.parse (urllib.parse.quote for me) and webbrowser.open. I tried the sample code from the first link (2ality.blogspot.com) and it worked fine, opening my default mail client. However, when I try to use the code in my own module, it seems to open my default browser instead, for some weird reason - no funny text in the address bar, it just opens the browser. The email_incorrect_phone_numbers() function is in the Employees class, which contains a dictionary (employee_dict) of Employee objects, which themselves have a number of employee attributes (sn, givenName, mail etc.). The full code is actually here: http://stackoverflow.com/questions/2963975/python-converting-csv-to-objects-code-design

        from urllib.parse import quote
        import webbrowser
        ....

        def email_incorrect_phone_numbers(self):
            email_list = []
            for employee in self.employee_dict.values():
                if not PhoneNumberFormats.standard_format.search(employee.telephoneNumber):
                    print(employee.telephoneNumber, employee.sn, employee.givenName, employee.mail)
                    email_list.append(employee.mail)
            recipients = ', '.join(email_list)
            webbrowser.open("mailto:%s?subject=%s&body=%s" % (recipients, quote("testing"), quote('testing')))

    Any suggestions? Cheers, Victor

    Read the article

  • Monitor web sites visited using Internet Explorer, Opera, Chrome, Firefox and Safari in Python

    - by Zachary Brown
    I am working on a project for work and seem to have run into a small problem. The project is a program similar to Web Nanny, but branded for my client's company. It will have features such as website blocking by URL, keyword blocking and web-activity logs. I would also need it to be able to "pause" downloads until an acceptable username and password is entered. I found a script to monitor the URLs visited in Internet Explorer (shown below), but it seems to slow the browser down considerably, and I have not found any support or ideas on how to implement this for other browsers. So, my questions are: 1) How do I monitor other browsers' activity / visited URLs? 2) How do I prevent downloading unless an acceptable username and password is entered?

        from win32com.client import Dispatch, WithEvents
        import time, threading, pythoncom, sys

        stopEvent = threading.Event()

        class EventSink(object):
            def OnNavigateComplete2(self, *args):
                print "complete", args
                stopEvent.set()

        def waitUntilReady(ie):
            if ie.ReadyState != 4:
                while 1:
                    print "waiting"
                    pythoncom.PumpWaitingMessages()
                    stopEvent.wait(.2)
                    if stopEvent.isSet() or ie.ReadyState == 4:
                        stopEvent.clear()
                        break
            time.clock()

        ie = Dispatch('InternetExplorer.Application', EventSink)
        ev = WithEvents(ie, EventSink)
        ie.Visible = 1
        ie.Navigate("http://www.google.com")
        waitUntilReady(ie)
        print "location", ie.LocationName
        ie.Navigate("http://www.aol.com")
        waitUntilReady(ie)
        print "location", ie.LocationName
        print ie.LocationName, time.clock()
        print ie.ReadyState

    Read the article

  • server caching problem on ASP.NET MVC page

    - by Rita
    Hi, I have a server caching problem on ASP.NET MVC pages. The scenario is this: I have two applications, (1) an external Website and (2) an internal Adminsite, both pointing to the same database. There is an EditProfile page on the external Website where a registered customer can update his profile information (first name, last name, address, etc.). There is similar functionality on the internal Adminsite on a CustomerProfile page, where the site admin can update all of these fields. When an admin updates the profile information from the Adminsite, those updates are not reflected back on the Website. Restarting the Website in IIS didn't help; only restarting the Website in IIS and opening a new browser made the updates show up. I am wondering how I can get out of this caching problem without restarting the site and opening a new browser window every time. Are there any IIS settings that could help? The caching happens on only a couple of tables, and all the updates do show up in the database. I appreciate your responses. Thanks.

    Read the article

  • How to access and run field events from extension js?

    - by Dan Roberts
    I have an extension that helps submit forms automatically for a process at work. We are running into a problem with dual select boxes, where one option is selected and that selection changes another field's options. Since setting an option's selected property to true doesn't trigger the field's onchange event, I am trying to trigger it through code. The problem I've run into is that if I try to access or call functions on the field object from the extension, I get this error:

        Error: uncaught exception: [Exception... "Component is not available" nsresult: "0x80040111 (NS_ERROR_NOT_AVAILABLE)" location: "JS frame :: chrome://webformsidebar/content/webformsidebar.js :: WebFormSidebar_FillProcess :: line 499" data: no]

    The line causing the error is:

        if (typeof thisField.onchange === 'function')

    The line right before it works just fine:

        thisField.options[t].selected = true;

    ...so I'm not sure why this results in such an error. What surprises me most, I guess, is that merely checking for the existence of the function leads to an error. It feels like the problem is related to the code running in the context of the extension instead of the browser window's document. If so, is there any way to call a function in the browser window context instead? Do I need to actually inject code into the page somehow? Any other ideas? Thanks.
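
    A hedged sketch of the usual way to fire the field's change handlers without touching onchange directly (not from the question, and not necessarily a fix for the wrapper error itself): create and dispatch a real DOM change event on the page's element, so whatever listeners the page registered run in the page's own context. thisField is the select element referenced in the question.

        // create a synthetic "change" event and dispatch it on the page's element
        var doc = thisField.ownerDocument;       // use the content document, not the chrome document
        var evt = doc.createEvent("HTMLEvents");
        evt.initEvent("change", true, true);     // type, bubbles, cancelable
        thisField.dispatchEvent(evt);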

    Read the article

  • PHP Single Sign On (SSO) generating new session id

    - by bigstylee
    I am trying to create a single sign-on process. The method I have implemented stores session data in a database. When a new user comes to the website (www.example2.com), a table of authentications is checked. As this is their first visit to the website, there will be no match. The browser is redirected to the authentication server at www.example1.com/authenticate.php?session_id=ABC123, where ABC123 represents the session id created on www.example2.com. The session id which is then generated on www.example1.com is stored alongside the session id passed in the URL parameter. The user is then redirected back to www.example2.com, and a match of session ids should be found. This WAS working fine in Firefox, but when I tried it in Chrome I noticed that the session id generated when the browser is redirected back to www.example2.com is a new session id. As a result, an infinite loop is created. This behaviour has not manifested itself in Firefox. What is causing the new session id to be generated? More importantly, what can I do to stop it? Thanks in advance! EDIT: I had a logic error that was causing an infinite loop. This now works fine again in Firefox, but the infinite loop still occurs in Chrome and Internet Explorer.

    Read the article

  • E-Commerce Security: Only Credit Card Fields Encrypted?!

    - by bizarreunprofessionalanddangerous
    I'd like your opinions on how a major bricks-and-mortar company is running the security of its shopping web site. After a recent update, when you are logged into your shopping account the session is no longer secured: no https, no browser lock icon. All the personal contact info and shopping history -- and, if I'm not mistaken, password submission and change -- are being sent unencrypted. There is a small frame around the credit card fields that is https, with a little notice: "Our website is secure. Our website uses frames and because of this the secure icon will not appear in your browser." On top of this, the most prominent login fields for the site are broken and haven't been fixed for a week or longer (giving the distinct impression they have no clue what's going on and can't be trusted with anything). Now, is it just me, or is this simply incomprehensible for a billion-dollar company and significant shopping site in the year 2010? No lock. "We use frames" (maybe they forgot "Best viewed in IE4"). Customers are complaining, as you can see from their FAQ "explaining" why you aren't seeing https. I'm getting nowhere trying to convince customer service that they REALLY need to do something about this, and am about to head for the CEO. I just want to make sure this is as BIZARRE and unprofessional and dangerous a situation as I think it is. (I'm trying to visualize what their web technical team consists of. I'm picturing A) some customer service reps who were given a 3-hour training course on web site maintenance, B) a 14-year-old boy in his bedroom masquerading as a major technical services company, or C) a guy in a hut in a jungle with an e-commerce book from 1996.)

    Read the article

  • django - variable declared in base project does not appear in app

    - by unsorted
    I have a variable called STATIC_URL, declared in settings.py in my base project:

        STATIC_URL = '/site_media/static/'

    This is used, for example, in my site_base.html, which links to CSS files as follows:

        <link rel="stylesheet" href="{{ STATIC_URL }}css/site_tabs.css" />

    I have a bunch of templates related to different apps which extend site_base.html, and when I look at them in my browser the CSS is linked correctly as

        <link rel="stylesheet" href="/site_media/static/css/site_tabs.css" />

    (These came with a default Pinax distribution.) I created a new app called 'courses', which lives in the ...../apps/courses folder. I have a view for one of the pages in courses, called courseinstance.html, which extends site_base.html just like the other ones. However, when this one renders in my browser it comes out as

        <link rel="stylesheet" href="css/site_tabs.css" />

    as if STATIC_URL were equal to "" for this app. Do I have to make some sort of declaration to get my app to take on the same variable values as the project? I don't have a settings.py file for the app. By the way, the app is listed in INSTALLED_APPS and its pages get served up fine, just without the link to the CSS file (so the page looks funny). Thanks in advance for your help.

    Read the article

  • Simulating mouse clicks on Mac OS X does not work for some applications.

    - by Thomi
    I'm writing an application for Mac OS X 10.6 and later in C++. One part of the application needs to simulate mouse movement and mouse clicks. I currently do this by posting CGEvent objects using CGEventPost(kCGHIDEventTap, event); This works for the most part - I can simulate mouse movement and clicks just fine - but it seems to fail in some areas. For example:

        - In Mozilla Firefox and Safari I can click on all the menus, but I cannot click on a link within a website. When I try, the link is highlighted but the browser never follows it. However, I can right-click on a link, select "open link in new tab", and everything works as expected. (Solved - creating the mouse event with CGEventCreateMouseEvent(...) makes the event work within the web browsers.)
        - I can click on the "Dashboard" icon to bring up the Dashboard, but I cannot click on the "i" button on any of the Dashboard widgets. Similarly, clicking on any of the search results from the Spotlight search widget doesn't work either.

    The inconsistency follows application boundaries. What might be the cause?

    Read the article

  • How to implement web cache: internal fragmentation VS external fragmentation

    - by Summer_More_More_Tea
    I came up with this question while playing with the Firefox web cache: within a limited amount of disk space (in my configuration, 50 MB is the upper bound), how does a browser cache responses? I can think of two approaches. One is to cache each response object whole, one after another; this is inefficient and introduces external fragmentation, so the total cache space may not be fully used. The other is to treat the total space (50 MB) as one contiguous file split into fixed-length slots; incoming response objects are likewise treated as blocks of data the same length as the slots. Slots are filled until the file is exhausted, and then some replacement algorithm is used to evict old cached objects. The latter approach of course brings in internal fragmentation, but in my opinion it is easier to implement and maintain than the first strategy. However, when I look inside Firefox's Cache directory, it seems to use a different method: there are many files of varying length, all filled with non-displayable characters. I don't know, but I really want to know, what mechanism a commercial browser such as Firefox uses to implement its web cache. Regards.
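
    A tiny sketch of the fixed-slot idea described above (illustrative only, not how Firefox actually does it): the store is divided into equal-sized slots, each object occupies a whole number of slots, and when space runs out the least recently used entry is evicted.

        function SlotCache(totalBytes, slotBytes) {
            this.slotCount = Math.floor(totalBytes / slotBytes);
            this.slotBytes = slotBytes;
            this.freeSlots = this.slotCount;
            this.entries = {};      // key -> { slots: n, lastUsed: t }
            this.clock = 0;
        }

        SlotCache.prototype.put = function (key, sizeBytes) {
            var needed = Math.ceil(sizeBytes / this.slotBytes);   // internal fragmentation lives here
            if (needed > this.slotCount) return false;            // object larger than the whole cache
            while (this.freeSlots < needed) {
                var oldest = null;
                for (var k in this.entries) {                     // find the least recently used entry
                    if (!oldest || this.entries[k].lastUsed < this.entries[oldest].lastUsed) oldest = k;
                }
                this.freeSlots += this.entries[oldest].slots;     // evict it
                delete this.entries[oldest];
            }
            this.entries[key] = { slots: needed, lastUsed: ++this.clock };
            this.freeSlots -= needed;
            return true;
        };

    Rounding each object up to a whole number of slots is exactly the internal fragmentation the question mentions; the smaller the slot, the less waste but the more bookkeeping.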

    Read the article

  • Cancel page forward/back hotkeys in Firefox with Greasemonkey

    - by Stimulating Pixels
    First the background: In Firefox 3.6.3 on Mac OS X 10.5.8 when entering text into a standard the hotkey combination of Command+LeftArrow and Command+RightArrow jump the cursor to the start/end of the current line, respectively. However, when using CKEditor, FCKEditor and YUI Editor, Firefox does not seem to completely recognize that it's a text area. Instead, it drops back to the default function for those hotkeys which is to move back/forward in the browser history. After this occurs, the text in the editor is also cleared when you return to the page making it very easy to loose whatever is being worked on. I'm attempting to write a greasemonkey script that I can use to capture the events and prevent the page forward/back jumps from being executed. So far, I've been able to see the events with the following used as a .user.js script in GreaseMonkey: document.addEventListener('keypress', function (evt) { // grab the meta key var isCmd = evt.metaKey; // check to see if it is pressed if(isCmd) { // if so, grab the key code; var kCode = evt.keyCode; if(kCode == 37 || kCode == 39) { alert(kCode); } } }, false ); When installed/enabled, pressing command+left|right arrow key pops an alert with the respective code, but as soon as the dialog box is closed, the browser executes the page forward/back move. I tried setting a new code with evt.keyCode = 0, but that didn't work. So, the question is, can this Greasemonkey script be updated so that it prevents the back/forward page moves? (NOTE: I'm open to other solutions as well. Doesn't have to be Greasemonkey, that's just the direction I've tried. The real goal is to be able to disable the forward/back hotkey functionality.)

    Read the article

  • Applet Not Loading In Java 1.6.0_16

    - by Wayne Hartman
    I am running a Java applet compiled under 1.5 and am seeing odd behavior on computers running 1.6.0_07 versus 1.6.0_16. On the *_07 version, the applet initializes and loads in the browser perfectly. However, on computers with *_16 it does not load at all. Even stranger, there is nothing in the Java Console to indicate any problem with loading the applet in the browser. If I run the applet as a standalone app on those machines, it loads up just fine. The compatibility mode in the JNLP is set to 1.5+. Firefox reports no errors while attempting to load the applet. Even with full tracing and logging enabled, nothing is reported in the console window.

    Quick facts:

        - The JAR is signed
        - Compiled in 1.5
        - Works flawlessly in browsers (FF & IE) running *_07
        - Does not work in browsers (FF & IE) running *_16
        - Works when run as a standalone app on *_16
        - JNLP set to 1.5+
        - Clients are a mix of *_07, *_16 and other versions of 1.6

    Things I have tried:

        - Forcing the JVM to use version 1.6.0_07. This requires the user to download *_07, which in my situation is unfortunately not an option.
        - Running the app on *_16 as a standalone app. This works fine, but it needs to run as an applet.

    Ideas?

    Read the article

  • Why am I getting "(304) Not Modified" error on some links when using HttpWebRequest?

    - by Greg
    Hi, any ideas why, on some links that I try to access using HttpWebRequest, I am getting "The remote server returned an error: (304) Not Modified." in the code? The code I'm using is from Jeff's post here. The concept of the code is a simple proxy server: I point my browser at this locally running piece of code, which takes my browser's request and then proxies it on by creating a new HttpWebRequest, as you'll see in the code. It works great for most sites/links, but for some this error comes up. One key bit in the code is where it copies the HTTP header settings from the browser's request to its own request out to the site, including this header attribute:

        case "If-Modified-Since":
            request.IfModifiedSince = DateTime.Parse(listenerContext.Request.Headers[key]);
            break;

    I'm not sure if the issue has something to do with how it mimics this aspect of the request and what then happens when the result comes back. I get the issue, for example, from http://en.wikipedia.org/wiki/Main_Page. Thanks.

    Read the article

  • User Friendly Video Review with Locking

    - by James Cori
    We have a jQuery/PHP/MySQL system that allows a user to log in and review videos built by another system for online viewing. When a user begins reviewing a video, the video is marked as being reviewed. But now we've cornered ourselves into the classic browser-based application problem: the user navigates away or closes the browser without completing the review, and that video enters a limbo state of constantly "being reviewed" but never completed, and never re-enters the queue. The options we have are:

        1. Build a service (we already have others) that finds review sessions that have exceeded a duration boundary and resets them back into the queue.
        2. Reset review sessions outside a duration boundary when that user logs in. Essentially, if a user locks a video for review, it is unlocked the next time they log in.
        3. A suggestion made to me was to use the PHP/Apache session length and, on expiration, reset any pending review jobs. I don't even know where to look to implement this, as this is one project on a shared server, so it shouldn't be an Apache config - and the reset mechanism would need to know the database credentials to be able to do the reset.
        4. The worst solution, which everyone hates, is preventing the user from navigating away with JavaScript, asking "Are you sure?!"

    This system is used by a few hired reviewers, so I'm not exactly dealing with the public here, but I can't prevent users from sharing logins for speedier review, which rules out option 2, because it would unlock a video being reviewed by someone else using the same login.
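
    One more approach, sketched here as a suggestion rather than something from the original post: keep the lock alive with a lightweight heartbeat while the review page is open, and let a server-side job release any lock whose heartbeat is stale. The /review/heartbeat endpoint and currentVideoId are hypothetical names; the server side would simply update a last_seen timestamp on the lock row.

        // ping the server every 60 seconds while the review page is open
        var heartbeat = setInterval(function () {
            $.post("/review/heartbeat", { video_id: currentVideoId });   // hypothetical endpoint
        }, 60 * 1000);

        // stop pinging once the review is submitted
        function reviewCompleted() {
            clearInterval(heartbeat);
        }

    If the browser is closed or the user navigates away, the pings stop, and the cleanup job (option 1 above) can safely requeue the video after a couple of missed heartbeats - no reliance on the user logging back in or on unload prompts.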

    Read the article

  • Libraries and pseudocode for physical Dashboard/Status board

    - by dani
    OK, so I bought a 46" screen for the office yesterday, and with the imminent risk of being accused of setting up an "elaborate World Cup procrastination scheme", I'd better show my colleagues what it's meant for ;) Looking at my simple sketch, and at these great projects which inspired me, I would like some input on the following:

        1. Pseudocode for the skeleton: as some methods should be called every 24 hours ("today's date in the heading") and others at 60-second intervals ("Twitter results"), what would be a good approach using JavaScript (jQuery) and PHP? EDIT: Alsciende: I can agree that #1 and #8 are too vague, so I have removed #8 and will try to clarify #1: by "pseudocode for the skeleton" I basically mean, could this be done entirely with JavaScript timers, and how would you set up the various timers? (See the sketch after this list.)
        2. Library for Google Analytics: which libraries support the Google Analytics API and can produce neat charts? Preferably HTML5 and JavaScript-based, like Protovis.
        3. Library for Twitter: which libraries would you recommend for fetching Twitter search results and the latest tweets from profiles?
        4. Libraries for typography/CSS/HTML5: I'm trying to learn some HTML5 etc. in the process; please advise on any other typography/CSS libraries that could be relevant.
        5. Scraping/parsing: I'll give you a concrete example: trying to fetch today's menu from this restaurant's website, how would you go about it? (It's in Swedish - but you get the point - sorry ;) )
        6. Real-time stats: I'm using the WassUp plugin for WordPress to track real-time visitors on our website. Other logging software (AWStats etc.) is probably also installed on the web server. Any ideas on how to extract information from these and present it in real time on the dashboard?
        7. Browser choice: which browser and OS would you pick? Stable, full-screen, HTML5.
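
    For item 1, a minimal sketch of the timer skeleton (an assumption about structure, not something from the post; the update functions are hypothetical placeholders): one repeating timer per refresh rate, each calling a function that fetches and redraws its widget.

        // refresh the fast-moving widgets every 60 seconds
        setInterval(function () {
            updateTwitterResults();      // hypothetical: fetch and redraw the Twitter panel
            updateRealTimeStats();       // hypothetical: poll the stats endpoint
        }, 60 * 1000);

        // refresh the slow-moving widgets once every 24 hours
        setInterval(function () {
            updateHeadingDate();         // hypothetical: set today's date in the heading
        }, 24 * 60 * 60 * 1000);

        // draw everything once at startup instead of waiting for the first tick
        updateTwitterResults();
        updateRealTimeStats();
        updateHeadingDate();

    Kicking the 24-hour work off at a fixed time of day (or simply reloading the whole page nightly) is a common variation, since a timer that long drifts and does not survive a browser restart.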

    Read the article

  • Rails' page caching vs. HTTP reverse proxy caches

    - by John Topley
    I've been catching up with the Scaling Rails screencasts. In episode 11, which covers advanced HTTP caching (using reverse proxy caches such as Varnish and Squid etc.), they recommend only considering a reverse proxy cache once you've already exhausted the possibilities of page, action and fragment caching within your Rails application (as well as memcached etc., but that's not relevant to this question). What I can't quite understand is how an HTTP reverse proxy cache can provide a performance boost for an application that already uses page caching. To simplify matters, let's assume that I'm talking about a single host here. This is my understanding of how both techniques work (maybe I'm wrong):

        - With page caching, the Rails process is hit initially and then generates a static HTML file that is served directly by the web server for subsequent requests, for as long as the cache for that request is valid. If the cache has expired, Rails is hit again and the static file is regenerated with the updated content, ready for the next request.
        - With an HTTP reverse proxy cache, the Rails process is hit when the proxy needs to determine whether the content is stale or not. This is done using various HTTP headers such as ETag, Last-Modified etc. If the content is fresh, Rails responds to the proxy with an HTTP 304 Not Modified and the proxy serves its cached content to the browser, or even better, responds with its own HTTP 304. If the content is stale, Rails serves the updated content to the proxy, which caches it and then serves it to the browser.

    If my understanding is correct, doesn't page caching result in fewer hits to the Rails process? There isn't all that back and forth to determine if the content is stale, meaning better performance than reverse proxy caching. Why might you use both techniques in conjunction?

    Read the article
