Search Results

Search found 13705 results on 549 pages for 'out of browser'.

Page 214/549 | < Previous Page | 210 211 212 213 214 215 216 217 218 219 220 221  | Next Page >

  • PHP Single Sign On (SSO) generating new session id

    - by bigstylee
    I am trying to create a single sign-on process. The method I have implemented stores session data in a database. When a new user comes to the website (www.example2.com), a table of authentication is checked. As this is their first visit to the website, there will be no match. The browser is redirected to the authentication server www.example1.com/authenticate.php?session_id=ABC123, where ABC123 represents the session id created on www.example2.com. The session id which is then generated on www.example1.com is stored alongside the session id passed in the URL parameter. The user is then redirected back to www.example2.com and a match of session ids should be found. This WAS working fine in Firefox, but when I tried it in Chrome I noticed that the session id generated when the browser is redirected back to www.example2.com is a new session id. As a result an infinite loop is created. This behaviour does not occur in Firefox. What is causing the new session id to be generated? More importantly, what can I do to stop it? Thanks in advance!

    EDIT: I had a logic error that was causing an infinite loop. This now works fine again in Firefox, but the infinite loop is still occurring in Chrome and Internet Explorer.

    Read the article

  • django - variable declared in base project does not appear in app

    - by unsorted
    I have a variable called STATIC_URL, declared in settings.py in my base project:

        STATIC_URL = '/site_media/static/'

    This is used, for example, in my site_base.html, which links to CSS files as follows:

        <link rel="stylesheet" href="{{ STATIC_URL }}css/site_tabs.css" />

    I have a bunch of templates related to different apps which extend site_base.html, and when I look at them in my browser the CSS is linked correctly as

        <link rel="stylesheet" href="/site_media/static/css/site_tabs.css" />

    (These came with a default Pinax distribution.) I created a new app called 'courses' which lives in the ...../apps/courses folder. I have a view for one of the pages in courses called courseinstance.html which extends site_base.html just like the other ones. However, when this one renders in my browser it comes out as

        <link rel="stylesheet" href="css/site_tabs.css" />

    as if STATIC_URL were equal to "" for this app. Do I have to make some sort of declaration to get my app to take on the same variable values as the project? I don't have a settings.py file for the app. By the way, the app is listed in my INSTALLED_APPS and it gets served up fine, just without the link to the CSS file (so the page looks funny). Thanks in advance for your help.

    Read the article

  • E-Commerce Security: Only Credit Card Fields Encrypted?!

    - by bizarreunprofessionalanddangerous
    I'd like your opinions on how a major bricks-and-mortar company is running the security for its shopping Web site. After a recent update, when you are logged into your shopping account, the session is now not secured. No 'https', no browser 'lock'. All the personal contact info, shopping history -- and if I'm not mistaken submit and change password -- are being sent unencrypted. There is a small frame around the credit card fields that is https. There's a little notice: "Our website is secure. Our website uses frames and because of this the secure icon will not appear in your browser". On top of this the most prominent login fields for the site are broken, and haven't been fixed for a week or longer (giving the distinct impression they have no clue what's going on and can't be trusted with anything).

    Now is it just me -- or is this simply incomprehensible for a billion dollar company, significant shopping site, in the year 2010? No lock. "We use frames" (maybe they forgot "Best viewed in IE4"). Customers are complaining, as you can see from their FAQ "explaining" why you aren't seeing https. I'm getting nowhere trying to convince customer service that they REALLY need to do something about this, and am about to head for the CEO. But I just want to make sure this is as BIZARRE and unprofessional and dangerous a situation as I think it is. (I'm trying to visualize what their Web technical team consists of. I'm getting A) some customer service reps who were given a 3 hour training course on Web site maintenance, B) a 14 year old boy in his bedroom masquerading as a major technical services company, C) a guy in a hut in a jungle with an e-commerce book from 1996.)

    Read the article

  • How to implement web cache: internal fragmentation VS external fragmentation

    - by Summer_More_More_Tea
    Hi there: I came up with this question while playing with the Firefox web cache: how does the browser cache responses within limited disk space (take my configuration as an example, where 50MB is the upper bound)? I think two approaches could be employed. One is to cache each response object whole, one by one, but this is inefficient and will introduce external fragmentation, so the total cache space may not be fully used. The second is to treat the total space (50MB) as one contiguous file, splitting it into fixed-length slots; incoming response objects will also be treated as blocks of data with the same length as the slots. We can fill slots until the whole file is used up, then some replacement algorithm can be used to swap out the old cached objects. The latter approach will of course bring in internal fragmentation, but in my opinion it is easier to implement and maintain than the first strategy. But when I look in Firefox's Cache directory, I find it (maybe) uses a different method: a lot of variable-length files reside in that directory and all those files are filled with non-displayable characters. I don't know, but really want to know, what mechanism a commercial browser, e.g. Firefox, employs to implement its web cache. Regards.
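
    To make the trade-off concrete, here is a minimal JavaScript sketch of the second approach described above (fixed-length slots plus an LRU replacement policy). It is only an illustration of the idea, not how Firefox actually implements its cache; the slot size, slot count and data structures are made up for the example.

        // Fixed-slot cache: every object occupies a whole number of slots, so the
        // unused tail of its last slot is wasted (internal fragmentation), but the
        // free list never degrades into unusable holes (no external fragmentation).
        function SlotCache(slotSize, slotCount) {
            this.slotSize = slotSize;
            this.free = [];                       // indices of unused slots
            for (var i = 0; i < slotCount; i++) this.free.push(i);
            this.entries = {};                    // url -> { slots: [...], size: bytes }
            this.lru = [];                        // urls, least recently used first
        }

        SlotCache.prototype.put = function (url, body) {
            var needed = Math.ceil(body.length / this.slotSize);
            while (this.free.length < needed && this.lru.length > 0) {
                this.evict(this.lru.shift());     // displacement / replacement step
            }
            if (this.free.length < needed) return false;  // object bigger than cache
            this.entries[url] = { slots: this.free.splice(0, needed), size: body.length };
            this.lru.push(url);
            return true;
        };

        SlotCache.prototype.evict = function (url) {
            var e = this.entries[url];
            if (e) {
                this.free = this.free.concat(e.slots);
                delete this.entries[url];
            }
        };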

    Read the article

  • Simulating mouse clicks on Mac OS X does not work for some applications.

    - by Thomi
    I'm writing an application for Mac OS X 10.6 and later in C++. One part of the application needs to simulate mouse movement and mouse clicks. I currently do this by posting CGEvent objects using CGEventPost(kCGHIDEventTap, event);. This works, for the most part - I can simulate mouse movement and clicks just fine, but it seems to fail in some areas. For example:

    - In Mozilla Firefox and Safari, I can click on all the menus, but cannot click on a link within a website. When I try, the link is highlighted, but the browser never follows the link. However, I can right-click on a link, select "open link in new tab", and everything works as expected. Solved - creating the mouse event using CGEventCreateMouseEvent(...) makes the event work within web browsers.
    - I can click on the "Dashboard" icon to bring up the dashboard, but I cannot click on the "i" button on any of the dashboard widgets. Similarly, clicking on any of the search results from the Spotlight search widget doesn't work either.

    This inconsistency is along application boundaries. What might be the cause?

    Read the article

  • Cancel page forward/back hotkeys in Firefox with Greasemonkey

    - by Stimulating Pixels
    First the background: in Firefox 3.6.3 on Mac OS X 10.5.8, when entering text into a standard text field, the hotkey combinations Command+LeftArrow and Command+RightArrow jump the cursor to the start/end of the current line, respectively. However, when using CKEditor, FCKEditor and YUI Editor, Firefox does not seem to completely recognize that it's a text area. Instead, it drops back to the default function for those hotkeys, which is to move back/forward in the browser history. After this occurs, the text in the editor is also cleared when you return to the page, making it very easy to lose whatever is being worked on. I'm attempting to write a Greasemonkey script that I can use to capture the events and prevent the page forward/back jumps from being executed. So far, I've been able to see the events with the following used as a .user.js script in Greasemonkey:

        document.addEventListener('keypress', function (evt) {
            // grab the meta key
            var isCmd = evt.metaKey;
            // check to see if it is pressed
            if (isCmd) {
                // if so, grab the key code
                var kCode = evt.keyCode;
                if (kCode == 37 || kCode == 39) {
                    alert(kCode);
                }
            }
        }, false);

    When installed/enabled, pressing Command+left|right arrow pops an alert with the respective code, but as soon as the dialog box is closed, the browser executes the page forward/back move. I tried setting a new code with evt.keyCode = 0, but that didn't work. So, the question is, can this Greasemonkey script be updated so that it prevents the back/forward page moves? (NOTE: I'm open to other solutions as well. It doesn't have to be Greasemonkey, that's just the direction I've tried. The real goal is to be able to disable the forward/back hotkey functionality.)
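
    One direction to experiment with, sketched below in plain JavaScript: cancel the event before the default history navigation runs. This is only a sketch, not a confirmed fix; it assumes the shortcut can still be intercepted at the keydown stage and that preventDefault() is honoured for it in Firefox 3.6.

        // Listen in the capture phase on keydown, as early as possible, and cancel
        // Cmd+Left / Cmd+Right before the browser's default back/forward action.
        document.addEventListener('keydown', function (evt) {
            if (evt.metaKey && (evt.keyCode == 37 || evt.keyCode == 39)) {
                evt.preventDefault();   // ask the browser to skip its default action
                evt.stopPropagation();  // keep other handlers from acting on it
            }
        }, true); // true = capture phase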

    Read the article

  • Applet Not Loading In Java 1.6.0_16

    - by Wayne Hartman
    I am running a Java applet compiled against 1.5 and am experiencing odd behavior when running it on computers with 1.6.0_07 and 1.6.0_16. On the *_07 version, the applet initializes and loads in the browser perfectly fine. However, on computers with *_16 it does not load at all. Even more strange, there is nothing in the Java Console to indicate any problems while loading the applet in the browser. If I run the applet as a standalone app on those machines, it loads up just fine. The compatibility mode in the JNLP is set to 1.5+. Firefox reports no errors attempting to load the applet. Even with full tracing and logging set to all, nothing is reported in the console window.

    Quick facts:
    - The JAR is signed
    - Compiled in 1.5
    - Works flawlessly in browsers (FF & IE) running *_07
    - Does not work in browsers (FF & IE) running *_16
    - Works running as a standalone app in *_16
    - JNLP set to 1.5+
    - Clients are a mix of *_07, *_16, and other versions of 1.6

    Things I have tried:
    - Forcing the JVM to use version 1.6.0_07. This requires the user to download *_07; in my situation this is not an option, unfortunately.
    - Running the app on *_16 as a standalone app. This works fine, but it needs to run as an applet.

    Ideas?

    Read the article

  • Libraries and pseudocode for physical Dashboard/Status board

    - by dani
    OK, so I bought a 46" screen for the office yesterday, and with the imminent risk of being accused of setting up an "elaborate World Cup procrastination scheme", I'd better show my colleagues what it's meant for ;) Looking at my simple sketch, and at these great projects from which I was inspired, I would like to get some input on the following:

    - Pseudocode for the skeleton: As some methods should be called every 24 hours ("Today's date in the heading") and others at 60 second intervals ("Twitter results"), what would be a good approach using JavaScript (jQuery) and PHP? (See the timer sketch below.) EDIT: Alsciende: I can agree that #1 and #8 are too vague. Therefore I remove #8 and try to clarify #1: with "Pseudocode for the skeleton", I basically mean could this be done entirely using JavaScript timers, and how would you set up the various timers?
    - Library for Google Analytics: Which libraries support the Google Analytics API and can produce neat charts? Preferably HTML5/JavaScript-based, like Protovis.
    - Library for Twitter: Which libraries would you recommend for fetching Twitter search results and the latest tweets from profiles?
    - Libraries for Typography/CSS/HTML5: Trying to learn some HTML5 etc. in the process; please advise on any other typography/CSS libraries that could be of relevance.
    - Scraping/Parsing? I'll give you a concrete example: trying to fetch today's menu from this restaurant's website, how would you go about it? (It's in Swedish - but you get the point - sorry ;) )
    - Real-time stats? I'm using the WassUp plugin for WordPress to track real-time visitors on our website. Other logging software (AWStats etc.) is probably also installed on the webserver. Any ideas on how to extract information from these and present it in real time on the dashboard?
    - Browser choice? Which browser and OS would you pick? Stable, full-screen, HTML5.
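
    Regarding the first point, here is a minimal sketch of how the timer skeleton could look using plain JavaScript timers. The element id, the update functions and the intervals are placeholders; in the real dashboard the update functions would presumably fetch fresh data from the server (e.g. via jQuery AJAX) before redrawing.

        // Skeleton: run each widget's refresh function on its own interval.
        function updateHeading() {
            // "Today's date in the heading"
            document.getElementById('heading').textContent = new Date().toDateString();
        }

        function updateTwitter() {
            // e.g. jQuery.getJSON('/twitter-feed', renderTweets) in the real thing
        }

        window.onload = function () {
            updateHeading();                                   // draw once on load
            updateTwitter();
            setInterval(updateHeading, 24 * 60 * 60 * 1000);   // every 24 hours
            setInterval(updateTwitter, 60 * 1000);             // every 60 seconds
        };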

    Read the article

  • Why am I getting "(304) Not Modified" error on some links when using HttpWebRequest?

    - by Greg
    Hi, any ideas why, for some links that I try to access using HttpWebRequest, I get "The remote server returned an error: (304) Not Modified." in the code? The code I'm using is from Jeff's post here. Note the concept of the code is a simple proxy server, so I'm pointing my browser at this locally running piece of code, which gets my browser's request and then proxies it on by creating a new HttpWebRequest, as you'll see in the code. It works great for most sites/links, but for some this error comes up. One key bit in the code is where it copies the HTTP header settings from the browser request onto its own request out to the site, including the header attributes. I'm not sure if the issue has something to do with how it mimics this aspect of the request and what then happens as the result comes back?

        case "If-Modified-Since":
            request.IfModifiedSince = DateTime.Parse(listenerContext.Request.Headers[key]);
            break;

    I get the issue, for example, from http://en.wikipedia.org/wiki/Main_Page. Thanks

    Read the article

  • User Friendly Video Review with Locking

    - by James Cori
    We have a jQuery/PHP/MySQL system that allows a user to log in and review videos built by a system for online viewing. When a user begins reviewing a video, the video is marked as such. But now we've cornered ourselves into the classic browser-based application problem of the user navigating away or closing the browser without completing the review. That video would then enter a state of limbo of constantly being reviewed, but never completed, and never re-entering the queue. Options we have are:

    - Build a service (we already have others) to find review sessions that are outside a duration boundary and reset them back into the queue.
    - Reset review sessions outside a duration boundary when that user logs in. Essentially, if a user locks out a video for review, it'll be unlocked the next time they log in.
    - A suggestion made to me was to use the PHP/Apache session length and, on expiration, reset any pending review jobs. I don't even know where to look to implement this, as this is one project on a shared server, so it shouldn't be an Apache config, but the reset mechanism would need to know the database credentials to be able to reset it...
    - The worst solution, which everyone hates, is preventing the user from navigating away with JavaScript, asking "Are you sure?!"

    This system is used by a few hired reviewers, so I'm not exactly dealing with the public here, but I can't prevent users from sharing logins for speedier review, which would knock out the 2nd option above because it would unlock a video being reviewed by someone else using the same login.

    Read the article

  • Rails' page caching vs. HTTP reverse proxy caches

    - by John Topley
    I've been catching up with the Scaling Rails screencasts. In episode 11, which covers advanced HTTP caching (using reverse proxy caches such as Varnish and Squid etc.), they recommend only considering a reverse proxy cache once you've already exhausted the possibilities of page, action and fragment caching within your Rails application (as well as memcached etc., but that's not relevant to this question). What I can't quite understand is how using an HTTP reverse proxy cache can provide a performance boost for an application that already uses page caching. To simplify matters, let's assume that I'm talking about a single host here. This is my understanding of how both techniques work (maybe I'm wrong):

    - With page caching, the Rails process is hit initially and then generates a static HTML file that is served directly by the Web server for subsequent requests, for as long as the cache for that request is valid. If the cache has expired then Rails is hit again and the static file is regenerated with the updated content, ready for the next request.
    - With an HTTP reverse proxy cache, the Rails process is hit when the proxy needs to determine whether the content is stale or not. This is done using various HTTP headers such as ETag, Last-Modified etc. If the content is fresh then Rails responds to the proxy with an HTTP 304 Not Modified and the proxy serves its cached content to the browser, or even better, responds with its own HTTP 304. If the content is stale then Rails serves the updated content to the proxy, which caches it and then serves it to the browser.

    If my understanding is correct, then doesn't page caching result in fewer hits to the Rails process? There isn't all that back and forth to determine if the content is stale, meaning better performance than reverse proxy caching. Why might you use both techniques in conjunction?

    Read the article

  • Query to MySQL from c# returns System.Byte[]

    - by karthik
    I am using the stored procedure below to return the value of a generated insert statement, and it works fine when executed in Query Browser. When I try to get the value from C#, it gives me "System.Byte[]" as the return value. When I try to get the value from MySQL Query Browser, it gives me the return value: 'insert into admindb.accounts values("54321","2","karthik2","karthik2","1");' I guess the problem is with the single quotes of the returned value. Is it so?

        DELIMITER $$

        DROP PROCEDURE IF EXISTS `admindb`.`InsGen` $$
        CREATE DEFINER=`root`@`localhost` PROCEDURE `InsGen`(
            in_db varchar(20),
            in_table varchar(20),
            in_ColumnName varchar(20),
            in_ColumnValue varchar(20)
        )
        BEGIN
            declare Whrs varchar(500);
            declare Sels varchar(500);
            declare Inserts varchar(2000);
            declare tablename varchar(20);
            declare ColName varchar(20);

            set tablename=in_table;

            # Comma separated column names - used for Select
            select group_concat(concat('concat(\'"\',','ifnull(',column_name,','''')',',\'"\')'))
            INTO @Sels
            from information_schema.columns
            where table_schema=in_db and table_name=tablename;

            # Comma separated column names - used for Group By
            select group_concat('`',column_name,'`')
            INTO @Whrs
            from information_schema.columns
            where table_schema=in_db and table_name=tablename;

            #Main Select Statement for fetching comma separated table values
            set @Inserts=concat("select concat('insert into ", in_db,".",tablename,
                " values(',concat_ws(',',",@Sels,"),');') as MyColumn from ",
                in_db,".",tablename, " where ", in_ColumnName, " = " , in_ColumnValue,
                " group by ",@Whrs, ";");

            PREPARE Inserts FROM @Inserts;
            EXECUTE Inserts;
        END $$

        DELIMITER ;

    Read the article

  • Dynamic margin (or simulation of margin) between left floated divs

    - by BugBusterX
    I have a number of divs floated left. When the browser is resized they move down or up based on how many can fit on the line. I was wondering if there is a way (with CSS) to have those divs align, or have margins, such that they always fill the entire screen width by having their margins resize dynamically? In other words, the margin between them would resize as the browser is resized, but as soon as another div can fit it would be added to the line, or if the minimum margin is reached and passed, another div goes to the next line while the margins expand again. Here's an example of how it is now; resize the window to see the leftover space that I want to "fill":

        <html>
        <head>
            <style>
                .test {
                    float: left;
                    width: 100px;
                    height: 100px;
                    background-color: grey;
                    margin: 0 10px 10px 0;
                }
            </style>
        </head>
        <body>
            <div class="test"></div>
            <div class="test"></div>
            <div class="test"></div>
            <div class="test"></div>
            <div class="test"></div>
            <div class="test"></div>
            <div class="test"></div>
            <div class="test"></div>
            <div class="test"></div>
            <div class="test"></div>
            <div class="test"></div>
            <div class="test"></div>
        </body>
        </html>

    Read the article

  • Two column layout, navigation div on the right, solution from previous thread didn't seem to work

    - by Tom
    I tried the solution from this thread, but I must be missing something because it doesn't work:

        <div style="float:left;margin-right:200px">
            <p>Lorem ipsum dolor sit amet, consectetur adipisicing elit, sed do eiusmod tempor incididunt ut labore et dolore magna aliqua. Ut enim ad minim veniam, quis nostrud exercitation ullamco laboris nisi ut aliquip ex ea commodo consequat. Duis aute irure dolor in reprehenderit in voluptate velit esse cillum dolore eu fugiat nulla pariatur. Excepteur sint occaecat cupidatat non proident, sunt in culpa qui officia deserunt mollit anim id est laborum.</p>
        </div>
        <div style="float:right;width:200px">
            <p>navigation</p>
        </div>

    It works when the text in the content div (the left one) is short, but when it's long the div takes up the whole width of the browser and the margin is there, but the right div is pushed below the first one nevertheless. What am I missing?

    Edit: The goal is to have a fixed-size navigation column on the right of the browser window, and the left div should get all the space left over by the right navigation column (liquid layout).

    Read the article

  • Android : Handle OAuth callback using intent-filter

    - by Dave Allison
    I am building an Android application that requires OAuth. I have all the OAuth functionality working except for handling the callback from Yahoo. I have the following in my AndroidManifest.xml, where www.test.com will be substituted with a domain that I own:

        <intent-filter>
            <action android:name="android.intent.action.VIEW"></action>
            <category android:name="android.intent.category.DEFAULT"></category>
            <category android:name="android.intent.category.BROWSABLE"></category>
            <data android:host="www.test.com" android:scheme="http"></data>
        </intent-filter>

    It seems:

    - This filter is triggered when I click on a link on a page.
    - It is not triggered on the redirect by Yahoo; the browser opens the website at www.test.com.
    - It is not triggered when I enter the domain name directly in the browser.

    So can anybody help me with:

    - When exactly this intent-filter will be triggered?
    - Any changes to the intent-filter or permissions that will widen the filter to apply to redirect requests?
    - Any other approaches I could use?

    Thanks for your help.

    Read the article

  • Apache HttpClient 4.0. Weird behavior.

    - by Mikhail T
    Hello. I'm using Apache HttpClient 4.0 for my web crawler and I've found some strange behavior: I try to get a page via an HTTP GET and I receive a 404 HTTP error in response, but if I request the same page with a browser it loads successfully. Details: first, I upload a multipart form to the server this way:

        HttpPost httpPost = new HttpPost("http://[host here]/in.php");
        MultipartEntity entity = new MultipartEntity(HttpMultipartMode.BROWSER_COMPATIBLE);
        entity.addPart("method", new StringBody("post"));
        entity.addPart("key", new StringBody("223fwe0923fjf23"));
        FileBody fileBody = new FileBody(new File("photo.jpg"), "image/jpeg");
        entity.addPart("file", fileBody);
        httpPost.setEntity(entity);
        HttpResponse response = httpClient.execute(httpPost);
        HttpEntity result = response.getEntity();
        String responseString = "";
        if (result != null) {
            InputStream inputStream = result.getContent();
            byte[] buffer = new byte[1024];
            while (inputStream.read(buffer) > 0)
                responseString += new String(buffer);
            result.consumeContent();
        }

    The upload completes successfully. Then I fetch the result from the web server:

        HttpGet httpGet = new HttpGet("http://[host here]/res.php?key=" + myKey + "&action=get&id=" + id);
        HttpResponse response = httpClient.execute(httpGet);
        HttpEntity entity = response.getEntity();

    I get a ClientProtocolException when the execute method runs. I debugged this with log4j: the server answers "404 Not Found". But my browser loads that page with no problem. Can anybody help me? Thank you.

    Read the article

  • a question related to URL

    - by Robert
    Dear all, now I have this question in my Java program. I think it should be classified as a URL problem, but I'm not 100% sure; if you think I am wrong, feel free to recategorize this problem, thanks. I would state my problem as simply as possible. I did a search on the famous Chinese search engine baidu.com for the Chinese keyword "奥巴马" (Obama in English), and the way I do that is to pass a URL (in a Java program) to the browser like: http://news.baidu.com/ns?word=奥巴马 and it works perfectly, just as if I had input the "奥巴马" keyword in the text field on baidu.com. However, now my advisor wants another thing. Since he cannot read Chinese webpages but wants to make sure the webpages I got from baidu.com are related to "Obama", he asked me to Google-translate them back, i.e. use Google Translate to translate the Chinese webpage to an English one. This sounds straightforward. However, I met my problem here. If I simply pass the URL "http://news.baidu.com/ns?word=奥巴马" into Google Translate and tick the "Chinese to English" translating option, the result looks awful (I don't know the cause here, maybe it is related to Chinese character encoding). Alternatively, if my browser opens the "http://news.baidu.com/ns?word=奥巴马" webpage and I click on the "百度一下" button (that simply means "search"), you will notice the URL gets changed; now if I pass this URL into Google Translate and do the same thing, the result works much better. I hope I am not making this problem sound too complicated, and I apologize for the Chinese words involved, but I really need your guys' help here. Because I did all this in a Java program, I couldn't figure out how to perform that "百度一下" (pressing the search button) step and then get the new URL. If I could get that new URL, things would be easy; I could just call Google Translate in my Java code and pop up a new window to show my advisor. Please share any of your ideas or thoughts here. Thanks a lot. Robert

    Read the article

  • managing html rich text selections

    - by swami
    Hi, I am writing a component for a web app which will display some HTML and let me capture and manipulate the selection boundaries (of the text selected by the user). I have done this successfully (for Mozilla) with a simple div element using window.getSelection(). However, the browser selection API is different for IE. If I were to use a textarea instead (for interacting with the selection API), is there a uniform API across the browsers? Then I would need to overlay a DIV on top of this to display the styled text, and presumably I'd need to manage the cursor etc... Basically I want a rich text editor - but without editing. Does anyone have any advice on the best way to go about this, which is quick, simple and cross-browser compatible? I don't want to spend ages reinventing the wheel... (If anyone's interested - this is for an online XML editor. I capture the user's selection on an HTML version of the XML doc and then send the selection offset info to the server, where the real XML doc gets marked up.) Kind regards, Swami
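
    For reference, here is a small sketch of the browser split being described, in plain JavaScript. It only grabs the selected text; mapping that selection to character offsets in the underlying document is the harder part and is not shown here.

        // Standards-based engines (Firefox, Safari, Chrome, Opera) expose
        // window.getSelection(); older IE exposes document.selection, which hands
        // back a TextRange rather than a DOM Range.
        function getSelectedText() {
            if (window.getSelection) {
                return window.getSelection().toString();
            }
            if (document.selection && document.selection.createRange) {
                return document.selection.createRange().text;
            }
            return '';
        }

        // e.g. document.onmouseup = function () { console.log(getSelectedText()); };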

    Read the article

  • Low-Hanging Fruit: Obfuscating non-critical values in JavaScript

    - by Piskvor
    I'm making an in-browser game of the type "guess what place/monument/etc. is in this satellite/aerial view", using Google Maps JS API v3. However, I need to protect against cheaters - you have to pass a google.maps.LatLng and a zoom level to the map constructor, which means a cheating user only needs to view source to get to this data. I am already unsetting every value I possibly can without breaking the map (such as center and the manipulation functions like setZoom()), and initializing the map in an anonymous function (so the object is not visible in global namespace). Now, this is of course in-browser, client-side, untrusted JavaScript; I've read much of the obfuscation tag and I'm not trying to make the script bullet-proof (it's just a game, after all). I only need to make the obfuscation reasonably hard against the 1337 Java5kryp7 haxz0rz - "kid sister encryption", as Bruce Schneier puts it. Anything harder than base64 encoding would deter most cheaters by eliminating the lowest-hanging fruit - if the cheater is smart and determined enough to use a JS debugger, he can bypass anything I can do (as I need to pass the value to Google Maps API in plaintext), but that's unlikely to happen on a mass scale (there will also be other, not-code-related ways to prevent cheating). I've tried various minimizers and obfuscators, but those will mostly deal with code - the values are still shown verbatim. TL;DR: I need to obfuscate three values in JavaScript. I'm not looking for bullet-proof armor, just a sneeze-guard. What should I use?
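
    As an illustration of the "slightly harder than base64" bar set above, here is a hedged sketch: XOR the serialized values with a short key and base64 the result (server-side or in a build step), then decode just before constructing the map. The key, the coordinates and the variable names are made up for the example, and atob/btoa are assumed to be available (they are not in older IE).

        // The same function encodes and decodes, since XOR is symmetric.
        function xorB64(s, key, decode) {
            var raw = decode ? atob(s) : s, out = '';
            for (var i = 0; i < raw.length; i++) {
                out += String.fromCharCode(raw.charCodeAt(i) ^ key.charCodeAt(i % key.length));
            }
            return decode ? out : btoa(out);
        }

        // Produced ahead of time (server side / build step):
        var hidden = xorB64(JSON.stringify({ lat: 48.8583, lng: 2.2945, zoom: 15 }), 's3cr3t');

        // In the page, inside the anonymous initializer, just before building the map:
        var p = JSON.parse(xorB64(hidden, 's3cr3t', true));
        // new google.maps.Map(el, { center: new google.maps.LatLng(p.lat, p.lng), zoom: p.zoom });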

    Read the article

  • Trying to use HttpWebRequest to load a page in a file.

    - by Malcolm
    Hi, I have an ASP.NET MVC app that works fine in the browser. I am using the following code to write the HTML of a retrieved page to a file (this is for use in a PDF conversion component). But this code errors out continually, even though the same page loads fine in the browser: I sometimes get timeout errors and sometimes 500 errors.

        Public Function GetPage(ByVal url As String, ByVal filename As String) As Boolean
            Dim request As HttpWebRequest
            Dim username As String
            Dim password As String
            Dim docid As String
            Dim poststring As String
            Dim bytedata() As Byte
            Dim requestStream As Stream
            Try
                username = "pdfuser"
                password = "pdfuser"
                docid = "docid=inv12154"
                poststring = String.Format("username={0}&password={1}&{2}", username, password, docid)
                bytedata = Encoding.UTF8.GetBytes(poststring)
                request = WebRequest.Create(url)
                request.Method = "Post"
                request.ContentLength = bytedata.Length
                request.ContentType = "application/x-www-form-urlencoded"
                requestStream = request.GetRequestStream()
                requestStream.Write(bytedata, 0, bytedata.Length)
                requestStream.Close()
                request.Timeout = 60000

                Dim response As HttpWebResponse
                Dim responseStream As Stream
                Dim reader As StreamReader
                Dim sb As New StringBuilder()
                Dim line As String = String.Empty

                response = request.GetResponse()
                responseStream = response.GetResponseStream()
                reader = New StreamReader(responseStream, System.Text.Encoding.ASCII)
                line = reader.ReadLine()
                While (Not line Is Nothing)
                    sb.Append(line)
                    line = reader.ReadLine()
                End While
                File.WriteAllText(filename, sb.ToString())
            Catch ex As Exception
                MsgBox(ex.Message)
            End Try
            Return True
        End Function

    Read the article

  • javascript toolkit for offline webapps

    - by anjanb
    Hi all, we're building a survey webapp which will let the user add new records to the survey when offline and will upload them when the browser reconnects with the server. We've identified that this will need offline storage, and hence Google Gears seems to be an obvious choice (we understand that Adobe Flash has offline storage, but are not sure if that is the best way). I am aware of the Dojo Offline JavaScript toolkit, which uses Google Gears for the underlying functionality. However, Dojo Offline is not part of the Dojo toolkit after version 1.3 (currently Dojo is at 1.4.2). The Google Gears toolkit is currently frozen except for critical vulnerability fixes (it has not been updated for almost a year) because they think that HTML5 is the way to go. Hence, we're looking for a higher abstraction on top of the Google Gears engine TODAY, AND which will (in the future) switch the underlying engine to HTML5 if the browser supports HTML5 standards. We'd love to use Dojo, but they have discontinued Dojo Offline -- we'd prefer something that will be maintained for some time. What are possible good strategies and JS toolkits/libraries to use for building this webapp? Please advise.
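
    As a sketch of what such an abstraction layer could look like, the snippet below codes against a tiny save/load interface, uses HTML5 localStorage where the browser provides it, and otherwise falls back to Gears. Only the feature detection is shown for the Gears branch; the actual beta.database calls are left as a stub and would need to be checked against the Gears documentation.

        var surveyStore = (function () {
            if (window.localStorage) {               // HTML5 path
                return {
                    save: function (id, record) { localStorage.setItem(id, JSON.stringify(record)); },
                    load: function (id) { return JSON.parse(localStorage.getItem(id) || 'null'); }
                };
            }
            if (window.google && google.gears) {     // Gears path (stubbed)
                // var db = google.gears.factory.create('beta.database'); db.open('surveys'); ...
                return {
                    save: function (id, record) { /* db.execute('insert ...', [...]) */ },
                    load: function (id) { /* db.execute('select ...', [id]) */ return null; }
                };
            }
            throw new Error('No offline storage available in this browser');
        }());

        // Usage: surveyStore.save('answer-42', { question: 42, value: 'yes' });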

    Read the article

  • Inconsistent behavior working with "Flex on Rails" example.

    - by kmontgom
    I'm experimenting with Flex and Rails right now (Rails is cool). I'm following the examples in the book "Flex on Rails", and I'm getting some puzzling and inconsistent behavior. Here's the Flex MXML:

        <mx:HTTPService id="index" url="http://localhost:3000/people.xml" resultFormat="e4x" />
        <mx:DataGrid dataProvider="{index.lastResult.person}" width="100%" height="100%">
            <mx:columns>
                <mx:DataGridColumn headerText="First Name" dataField="first-name"/>
                <mx:DataGridColumn headerText="Last Name" dataField="last-name"/>
            </mx:columns>
        </mx:DataGrid>
        <mx:Script>
            <![CDATA[
                import mx.controls.Alert;
                private function main():void {
                    Alert.show( "In main()" );
                }
            ]]>
        </mx:Script>

    When I run the app from my IDE (Amethyst beta, also cool), the DataGrid appears, but is not populated. The Alert.show() also triggers. When I go out to a web browser and manually enter the URL (http://localhost:3000/people.xml), the Mongrel console shows the request coming through and the browser shows the web response. No exceptions or other error messages occur. What's the difference? Do I need to alter some OS setting? I'm using Win7 on an x64 machine.

    Read the article

  • Using Javascript to generate and save a file

    - by Toji
    I've been fiddling with WebGL lately, and have gotten a Collada reader working. The problem is that it's pretty slow (Collada is a very verbose format), so I'm going to start converting files to an easier-to-use format (probably JSON). Thing is, I already have the code to parse the file in JavaScript, so I may as well use it as my exporter too! The problem is saving. Now, I know that I can parse the file, send the result to the server, and have the browser request the file back from the server as a download. But in reality the server has nothing to do with this particular process, so why get it involved? I already have the contents of the desired file in memory. Is there any way that I could present the user with a download using pure JavaScript? (I doubt it, but might as well ask...) And to be clear: I am not trying to access the filesystem without the user's knowledge! The user will provide a file (probably via drag and drop), the script will transform the file in memory, and the user will be prompted to download the result. All of which should be "safe" activities as far as the browser is concerned.
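
    One pure-JavaScript route that is sometimes suggested for this is to pack the converted text into a data: URI and hand it to the browser. This is only a sketch; the exact behaviour (whether a download is prompted, and the fact that no filename can be suggested) varies by browser, which is the usual reason people fall back to a server round-trip.

        // Offer the given text as a download via a data: URI.
        // application/octet-stream nudges most browsers into saving rather than
        // displaying the content inline.
        function offerDownload(text) {
            var uri = 'data:application/octet-stream,' + encodeURIComponent(text);
            window.open(uri);   // or: location.href = uri;
        }

        // e.g. after parsing the dropped Collada file:
        // offerDownload(JSON.stringify(convertedModel));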

    Read the article

  • Is it possible to render PDF (fPDF) via a javascript?

    - by J. LaRosee
    So, I'm passing some values via jQuery to the server, which generates PDF garble. It goes something like this:

        $.post('/admin/printBatch', data, // Some vars and such
            function(data){
                if(data) {
                    var batch = window.open('','Batch Print','width=600,height=600,location=_newtab');
                    var html = data; // Perhaps some header info here?!
                    batch.document.open();
                    batch.document.write(html);
                    batch.document.close();
                    $( this ).dialog( "close" ); // jQuery UI
                } else {
                    alert("Something went wrong, dawg.");
                }
                return false;
            });

    The output file looks roughly like so:

        $pdf->AddPage(null, null, 'A PDF Page');
        //....
        $pdf->Output('', 'I'); // 'I' sends the file inline to the browser (http://fpdf.org/en/doc/output.htm)

    What gets rendered to the browser window:

        %PDF-1.3 3 0 obj <> endobj 4 0 obj <> stream ...

    I'm missing something major, I just know it... thoughts? Thanks, guys.
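
    A common alternative approach, sketched below (not necessarily the fix for the code above): instead of pulling the raw PDF bytes into JavaScript and document.write()-ing them, let the new window request the URL itself so the browser or its PDF plugin can render the response. Here a throwaway form posts the values into a named window; the field name "payload" is made up for illustration.

        function openBatchPdf(data) {
            var win = window.open('', 'batchPrint');      // named target window
            var form = document.createElement('form');
            form.method = 'POST';
            form.action = '/admin/printBatch';
            form.target = 'batchPrint';                   // response renders in that window
            var field = document.createElement('input');
            field.type = 'hidden';
            field.name = 'payload';
            field.value = JSON.stringify(data);
            form.appendChild(field);
            document.body.appendChild(form);
            form.submit();
            document.body.removeChild(form);
        }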

    Read the article

  • Changing Apache2.2.11 httpd.conf has no effect

    - by Adrian
    Hi, hopefully someone can help here. I recently installed WampServer 2.0 with Apache 2.2.11. My issue is that I have some large PHP scripts which time out at the default 5 minute (300 second) browser limit (I'm using IE8). It is critical that I get this limit extended. I have tried changing the httpd.conf file to include the following:

        TimeOut 1200

    My objective was to set the timeout to 1200 seconds, or 20 minutes. I had just chosen a random location to place this directive within the httpd.conf file, as I cannot locate any documentation to suggest it belongs in a specific place within the file. Regardless, the changes I make appear in the httpd.conf file that can be found in the system tray for WampServer, however they have no effect - the browser still times out after 5 minutes. I thought perhaps I had the capitals incorrect, so I changed it to:

        Timeout 1200

    This change had no effect either. Can someone please help; this is very frustrating. Maybe the directive can only be used within a specific module? If so, I have no idea which one, nor do I know the syntax to specify this. Regards, Adrian.

    Read the article
