Search Results

Search found 13940 results on 558 pages for 'chromium browser'.


  • ASP.net download page

    - by Russel
    Hi, I have a Reports.aspx ASP.NET page that allows users to download Excel report files by clicking on several hyperlinks. When a report hyperlink is clicked, I open a new window using the JavaScript window.open method and navigate off to the download.aspx page. The code-behind for the download page creates an Excel file on the fly using OpenXML (in memory) and sends it back to the browser. Here is some code from the download.aspx page:

        byte[] outputFileBytes = CreateExcelReport().ToArray();
        Response.Clear();
        Response.BufferOutput = true;
        Response.ContentType = "application/vnd.openxmlformats-officedocument.spreadsheetml.sheet";
        Response.AddHeader("Content-Disposition", string.Format("attachment; filename={0}", "tempReport.xlsx"));
        Response.BinaryWrite(outputFileBytes);
        Response.Flush();
        Response.Close();
        Response.End();

    My problem: some of these reports take some time to generate. I would like to display a loading.gif on my Reports.aspx page while the download.aspx page is being requested, and hide it once the request completes. Is there a way to achieve this, perhaps some kind of event? I have MooTools at my disposal. Thanks. PS: I know that generating reports like this is not ideal, but that's a different story altogether...
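
    A common workaround (not something from the post) is to have download.aspx set a short-lived cookie right before it streams the file, and to have Reports.aspx poll document.cookie so it knows when the response has arrived. Below is a minimal browser-side sketch in TypeScript; the cookie name downloadToken, the element id loadingGif, and the polling interval are assumptions, not anything defined above.

        // Show the spinner, open the download window, then poll for a cookie
        // that download.aspx is assumed to set alongside the file response.
        function startDownload(url: string): void {
          const spinner = document.getElementById("loadingGif");
          if (spinner) {
            spinner.style.display = "block";
          }
          window.open(url, "_blank");

          const timer = window.setInterval(() => {
            // Assumes the server appends "Set-Cookie: downloadToken=done; path=/"
            // just before writing the file bytes (not shown in the post's code).
            if (document.cookie.indexOf("downloadToken=done") !== -1) {
              window.clearInterval(timer);
              if (spinner) {
                spinner.style.display = "none";
              }
              // Clear the marker so the next download can reuse it.
              document.cookie = "downloadToken=; path=/; expires=Thu, 01 Jan 1970 00:00:00 GMT";
            }
          }, 500);
        }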

    Read the article

  • Wordpress Custom Posts

    - by codedude
    I'm having a serious problem with custom post types in WordPress. I made a post type called "Sermons". I then added a meta box with some text fields and echo the results out onto the web page. But here's my problem: the first time you add a "Sermon", it works fine and the meta box fields output correctly. However, when I edit one of the meta box fields without editing the others (say, after closing the browser I remember that I need to add something to one of the fields), the fields that were not edited become blank and their content is erased... not good at all. So, just to simplify this: the first time the meta boxes are filled in, they work fine. However, when editing the post a second time, the fields that are not filled out, but left as they were, become blank upon saving the post. Help... I'm not much of a developer, so I'm not exactly sure how to fix this (it was hard enough getting the meta fields to work). If you want the actual code used, please tell me and I will add it somewhere.

    Read the article

  • CSRF Protection in AJAX Requests using MVC2

    - by mnemosyn
    The page I'm building depends heavily on AJAX. Basically, there is just one "page" and every data transfer is handled via AJAX. Since overoptimistic caching on the browser side leads to strange problems (data not reloaded), I have to perform all requests (also reads) using POST - that forces a reload. Now I want to protect the page against CSRF. With form submission, using Html.AntiForgeryToken() works neatly, but in AJAX requests I guess I will have to append the token manually? Is there anything out-of-the-box available? My current attempt: I'd love to reuse the existing magic, but HtmlHelper.GetAntiForgeryTokenAndSetCookie is private and I don't want to hack around in MVC. The other option is to write an extension like

        public static string PlainAntiForgeryToken(this HtmlHelper helper)
        {
            // extract the actual field value from the hidden input
            return helper.AntiForgeryToken().DoSomeHackyStringActions();
        }

    which is somewhat hacky and leaves the bigger problem unsolved: how to verify that token? The default verification implementation is internal and hard-coded against using form fields. I tried to write a slightly modified ValidateAntiForgeryTokenAttribute, but it uses an AntiForgeryDataSerializer which is private, and I really didn't want to copy that too. At this point it seems easier to come up with a homegrown solution, but that is really duplicate code. Any suggestions how to do this the smart way? Am I missing something completely obvious?
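
    On the client side, one pattern (a sketch, not the asker's code) is to render a single Html.AntiForgeryToken() somewhere on the page and copy its hidden field into every AJAX POST body, so a server-side validation attribute can find it in the form collection. A minimal TypeScript sketch follows; the field name __RequestVerificationToken is the one MVC renders, but the use of fetch instead of MooTools and the assumption that the token sits in a hidden input on the page are mine.

        // Read the anti-forgery token MVC rendered into a hidden input and
        // attach it to an AJAX POST body so server-side validation can check it.
        function getAntiForgeryToken(): string {
          const input = document.querySelector<HTMLInputElement>(
            "input[name='__RequestVerificationToken']");
          return input ? input.value : "";
        }

        function postWithToken(url: string, data: Record<string, string>): Promise<Response> {
          const body = new URLSearchParams(data);
          body.append("__RequestVerificationToken", getAntiForgeryToken());
          return fetch(url, {
            method: "POST",
            headers: { "Content-Type": "application/x-www-form-urlencoded" },
            body: body.toString(),
          });
        }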

    Read the article

  • Firefox: Can I use a relative path in the BASE tag?

    - by Aaron Digulla
    I have a little web project where I have many pages and an index/ToC file. The ToC file is at the root of my project in toc.html. The pages are spread over a couple of subdirectories and include the ToC with an iframe. The project doesn't need a web server, so I can create the HTML in a directory and browse it in my browser. The problem is that I'm running into XSS issues when JavaScript from toc.html wants to call a function in a page (violation of the same origin policy). So I added base tags in the header with a relative URL to the directory in which toc.html resides. This works in Konqueror, but in Firefox I have to use absolute paths or the ToC won't even display :( Here is an example:

        <?xml version='1.0' encoding='utf-8' ?>
        <html xmlns="http://www.w3.org/1999/xhtml">
        <head>
            <base href="../" target="_top" />
            <title>Project 1</title>
        </head>
        <body>
            <iframe class="toc" frameborder="0" src="toc.html">
            </iframe>
        </body>
        </html>

    This file is in a subdirectory named page. Firefox won't even load it, saying that it can't find page/toc.html. Is there a workaround? I would really like to avoid absolute paths in my export to keep it the same everywhere (locally and when I upload it to the web server later).

    Read the article

  • jQuery.load doesn't execute javascript with document.write

    - by Garfield
    I am trying to use jQuery.load to load an ad call that contains a document.write, and for some reason it's not able to; in Firefox at least, it reloads the page with the entire ad. Here is a simplified version of the code.

    DynamicLoad.html:

        <html>
        <head>
        <script src="http://www.prweekus.com/js/scripts.js?3729212881" type="text/javascript"></script>
        <meta http-equiv="Content-Type" content="text/html; charset=utf-8" />
        <title>jQuery Load of Script</title>
        <script type="text/javascript" src="http://www.google.com/jsapi"></script>
        <script type="text/javascript">
            google.load("jquery", "1.3.2");
        </script>
        <script type="text/javascript">
            $(document).ready(function(){
                $("#myButton").click(function() {
                    $("#myDiv").load("source.html");
                });
            });
        </script>
        </head>
        <body>
            <button id="myButton">Click Me</button>
            <div id="myDiv"></div>
            <div id="slideAdUnit"></div>
        </body>
        </html>

    Source.html:

        <script language="javascript" type="text/javascript">
            document.write('<script language="javascript" type="text/javascript"><\/script>');
        </script>
        test

    Once you click the button in FF, the browser just waits for something to load. Any thoughts? Eventually I would be passing a src attribute in the document.write which points to our ad server. Thanks for your help.
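
    A document.write that runs after the page has finished loading implicitly reopens the document, which is why the browser appears to hang. One workaround (an illustration, not the poster's solution) is to have the loaded fragment create the ad's script element dynamically instead of writing it. A TypeScript sketch; the container id slideAdUnit comes from the markup above, while the ad URL is a placeholder.

        // Instead of document.write, build the <script> tag and append it,
        // so it executes without clobbering the already-loaded document.
        function injectAdScript(containerId: string, src: string): void {
          const container = document.getElementById(containerId);
          if (!container) {
            return;
          }
          const script = document.createElement("script");
          script.type = "text/javascript";
          script.src = src; // e.g. the ad server URL (placeholder)
          container.appendChild(script);
        }

        // Usage: injectAdScript("slideAdUnit", "https://adserver.example.com/ad.js");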

    Read the article

  • google static maps via TIdHTTP

    - by cloudstrif3
    Hi all. I'm trying to return content from maps.google.com from within Delphi 2006 using the TIdHTTP component. My code is as follows:

        procedure TForm1.GetGoogleMap();
        var
          t_GetRequest: String;
          t_Source: TStringList;
          t_Stream: TMemoryStream;
        begin
          t_Source := TStringList.Create;
          try
            t_Stream := TMemoryStream.Create;
            try
              t_GetRequest := 'http://maps.google.com/maps/api/staticmap?' +
                'center=Brooklyn+Bridge,New+York,NY' +
                '&zoom=14' +
                '&size=512x512' +
                '&maptype=roadmap' +
                '&markers=color:blue|label:S|40.702147,-74.015794' +
                '&markers=color:green|label:G|40.711614,-74.012318' +
                '&markers=color:red|color:red|label:C|40.718217,-73.998284' +
                '&sensor=false';
              IdHTTP1.Post(t_GetRequest, t_Source, t_Stream);
              t_Stream.SaveToFile('google.html');
            finally
              t_Stream.Free;
            end;
          finally
            t_Source.Free;
          end;
        end;

    However, I keep getting the response HTTP/1.0 403 Forbidden. I assume this means that I don't have permission to make this request, but if I copy the URL into my web browser (IE 8), it works fine. Is there some header information that I need, or something else? Thank you in advance.
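
    For comparison, when the URL is pasted into a browser it is issued as a plain GET with no request body. The following TypeScript sketch (Node 18+ global fetch, shortened URL) only illustrates that request shape; it is not the Delphi fix itself, and whether Google accepts the call depends on the API's current requirements.

        // A sketch of the same request issued as a GET, which is what the
        // browser does when the URL is pasted into the address bar.
        async function fetchStaticMap(): Promise<ArrayBuffer> {
          const url = "http://maps.google.com/maps/api/staticmap"
            + "?center=Brooklyn+Bridge,New+York,NY"
            + "&zoom=14&size=512x512&maptype=roadmap"
            + "&sensor=false";
          const response = await fetch(url, { method: "GET" });
          if (!response.ok) {
            throw new Error(`Static map request failed: ${response.status}`);
          }
          return response.arrayBuffer(); // raw image bytes
        }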

    Read the article

  • Mouse bugginess - SWFObject, Firefox 3 for Mac, and Flash

    - by justinbach
    I'm pulling my hair out over a problem I'm encountering in Firefox 3.5 & 3.6 on OS X. I'm using SWFObject to embed an AmMap of the US, which has rollover tooltips for various states. The rollovers are working fine in every other browser I've tested, but they're very buggy in Firefox for Mac - most of the time they don't show up at all, but if I persistently click a state that's supposed to have a hover event, I might catch a glimpse of the tooltip. Here's the code for the SWFObject embed (incidentally, this isn't being done in the document head for templating reasons). The SWFObject initialization is wrapped in jQuery's document.ready handler because the swf wasn't even appearing in FF 3.5.9 for Mac until I added that in:

        $(document).ready(function() {
            var params = {
                quality: "high",
                scale: "noscale",
                allowscriptaccess: "always",
                allowfullscreen: "true",
                bgcolor: "#FFFFFF",
                base: "/<?php print LANG . "/locations/" ?>"
            };
            var flashvars = {
                path: "",
                settings_file: "mapsettings",
                data_file: "mapdata"
            };
            var attributes = {
                id: "flashmap",
                name: "flashmap"
            };
            swfobject.embedSWF("/assets/flash/ammap.swf", "flashmap", "470", "300", "8",
                null, flashvars, params, attributes);
        });

    Any feedback would be greatly appreciated... the site goes live in 48 hours! Thanks!

    Read the article

  • Crystal Reports - export to pdf in MVC

    - by BhejaFry
    Hi folks, I have integrated the code below into my application to generate a PDF file using Crystal Reports in an MVC project. However, after the request is processed, I see only 2 pages in the PDF file while my data returns more than 2 records. Also, the PDF isn't rendered as soon as the page is processed; instead I have to refresh at least once before the PDF is rendered in the browser.

        using CrystalDecisions.CrystalReports.Engine;

        public FileStreamResult Report()
        {
            ReportClass rptH = new ReportClass();
            List<sampledataset> data = objdb.getdataset();
            rptH.FileName = Server.MapPath("[reportName].rpt");
            rptH.Load();
            rptH.SetDatabaseLogon("un", "pwd", "server", "db");
            rptH.SetDataSource(data);
            Stream stream = rptH.ExportToStream(CrystalDecisions.Shared.ExportFormatType.PortableDocFormat);
            stream.Seek(0, System.IO.SeekOrigin.Begin);
            return new FileStreamResult(stream, "application/pdf");
        }

    I took the code from here on SO but modified it as above. TIA.

    Read the article

  • asp.net does not Redirect when in frameset

    - by Snoop Dogg
    I have developed an application in ASP.NET and uploaded it to my host, let's say at http://myhost/app. My manager wrapped this address in an empty frameset at http://anotherhost/somename and set the src of the frame to http://myhost/app. And so nobody can log in. When the login button is hit, the page posts back (the browser loads, the progress bar fills up and ends) but nothing happens - it does not redirect. (I have set IE to always allow cookies and it now works for me, but other people still cannot.) I think there is something I have no clue about regarding framesets and ASP.NET.

    PS: I never use frames but could not convince my manager otherwise. He likes to develop in FrontPage :) What's happening? Thanks in advance.

        protected void btnLogin_Click(object sender, ImageClickEventArgs e)
        {
            Member member = Logic.DoLogin(txtUsername.Text.Trim(), txtPassword.Text.Trim());
            if (null == member)
            {
                lblError.Text = "Invalid Login !";
                return;
            }
            CurrentMember = member; // CurrentMember is an inherited property that accesses Session["member"]
            Response.Redirect("Default.aspx");
        }

    Read the article

  • Fastest Method to Learn Web Design for a Developer

    - by hekevintran
    I am a Web developer, and in my projects I have noticed that my weakest point is not being good at front-end design. Relying on other designers can be annoying if they are not able to produce as quickly as I want. My perspective on HTML/CSS is that it is basically a big hack that amazingly works. There are too many CSS and browser-specific bugs/quirks to learn and remember them all without spending extreme amounts of time trying to untangle everything. Is there a fast-track route to getting CSS into my brain? I have looked at some CSS books, but to me they really read as long lists of how to render things correctly in IE6 and how to make corners rounded. (Seriously, why does it require so many tricks to make a sharp corner round? On any platform but the Web this would be called a major oversight.) Does there exist something that does for CSS what jQuery does for JavaScript? Using jQuery you don't need to know JavaScript well to make things that work. I am not interested in learning why IE6 does things in weird ways because I don't care about supporting it at all. I am more interested in a method of learning how to use CSS to do what I want without spending hours and hours reading obscure blogs.

    Read the article

  • Making HTTP POST request

    - by infrared
    I'm trying to make a POST request to retrieve information about a book. Here is the code, which returns HTTP code 302, Moved:

        import httplib, urllib

        params = urllib.urlencode({
            'isbn': '9780131185838',
            'catalogId': '10001',
            'schoolStoreId': '15828',
            'search': 'Search'
        })
        headers = {"Content-type": "application/x-www-form-urlencoded",
                   "Accept": "text/plain"}
        conn = httplib.HTTPConnection("bkstr.com:80")
        conn.request("POST", "/webapp/wcs/stores/servlet/BuybackSearch", params, headers)
        response = conn.getresponse()
        print response.status, response.reason
        data = response.read()
        conn.close()

    When I try from a browser, from this page: http://www.bkstr.com/webapp/wcs/stores/servlet/BuybackMaterialsView?langId=-1&catalogId=10001&storeId=10051&schoolStoreId=15828 , it works. What am I missing in my code? Thanks

    EDIT: Here's what I get when I call print response.msg:

        302 Moved
        Date: Tue, 07 Sep 2010 16:54:29 GMT
        Vary: Host,Accept-Encoding,User-Agent
        Location: http://www.bkstr.com/webapp/wcs/stores/servlet/BuybackSearch
        X-UA-Compatible: IE=EmulateIE7
        Content-Length: 0
        Content-Type: text/plain; charset=utf-8

    Seems that the Location points to the same URL I'm trying to access in the first place?

    EDIT 2: I've tried using urllib2 as suggested here. Here is the code:

        import urllib, urllib2

        url = 'http://www.bkstr.com/webapp/wcs/stores/servlet/BuybackSearch'
        values = {'isbn': '9780131185838',
                  'catalogId': '10001',
                  'schoolStoreId': '15828',
                  'search': 'Search'}

        data = urllib.urlencode(values)
        req = urllib2.Request(url, data)
        response = urllib2.urlopen(req)
        print response.geturl()
        print response.info()
        the_page = response.read()
        print the_page

    And here is the output:

        http://www.bkstr.com/webapp/wcs/stores/servlet/BuybackSearch
        Date: Tue, 07 Sep 2010 16:58:35 GMT
        Pragma: No-cache
        Cache-Control: no-cache
        Expires: Thu, 01 Jan 1970 00:00:00 GMT
        Set-Cookie: JSESSIONID=0001REjqgX2axkzlR6SvIJlgJkt:1311s25dm; Path=/
        Vary: Accept-Encoding,User-Agent
        X-UA-Compatible: IE=EmulateIE7
        Content-Length: 0
        Connection: close
        Content-Type: text/html; charset=utf-8
        Content-Language: en-US
        Set-Cookie: TSde3575=225ec58bcb0fdddfad7332c2816f1f152224db2f71e1b0474c866f3b; Path=/
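
    A 302 with Content-Length: 0 and a Set-Cookie header usually means the client is expected to carry the session cookie to the Location URL. The sketch below (TypeScript, Node 18+ fetch) shows that handling done by hand; it illustrates following a redirect with cookies in general and is not a claim about what this particular site requires.

        // POST the form, read the 302's Set-Cookie and Location headers, then
        // GET the redirect target while carrying the cookie along.
        async function postThenFollow(url: string, params: Record<string, string>): Promise<string> {
          const body = new URLSearchParams(params).toString();
          const post = await fetch(url, {
            method: "POST",
            headers: { "Content-Type": "application/x-www-form-urlencoded" },
            body,
            redirect: "manual", // keep the 302 so its headers stay readable
          });
          const cookie = post.headers.get("set-cookie") ?? "";
          const location = post.headers.get("location") ?? url;
          const follow = await fetch(location, {
            method: "GET",
            // Only the first cookie pair is carried here, for brevity.
            headers: cookie ? { Cookie: cookie.split(";")[0] } : {},
          });
          return follow.text();
        }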

    Read the article

  • Positioning / Scrolling problem with Flex popup.

    - by user284163
    Hi all, I'm trying to work out a specific problem I'm having with positioning in Flex using the PopUpManager. Basically I'm wanting to create a popup which will scroll with the parent container - this is necessary because the parent container is large, and if the user's browser window isn't large enough (this will be the case the majority of the time) they will have to use the scrollbar of the container to scroll down. The problem is that the popup is positioned relative to another component, and it needs to stay by that component.

        <?xml version="1.0" encoding="utf-8"?>
        <mx:Application xmlns:mx="http://www.adobe.com/2006/mxml" layout="absolute">
            <mx:Script>
                <![CDATA[
                    import mx.core.UITextField;
                    import mx.containers.TitleWindow;
                    import mx.managers.PopUpManager;

                    private function clickeroo(event:MouseEvent):void
                    {
                        var popup:TitleWindow = new TitleWindow();
                        popup.width = 250;
                        popup.height = 300;
                        popup.title = "Example";

                        var tf:UITextField = new UITextField();
                        tf.wordWrap = true;
                        tf.width = popup.width - 30;
                        tf.text = "This window stays put and doesn't scroll when the hbox is scrolled (even with using the hbox as parent in the addPopUp method), I need the popup to be local to the HBox.";
                        popup.addChild(tf);

                        PopUpManager.addPopUp(popup, hbox, false);
                        PopUpManager.centerPopUp(popup);
                    }
                ]]>
            </mx:Script>

            <mx:HBox width="100%" height="2000" id="hbox">
                <mx:Button label="Click Me" click="clickeroo(event)"/>
            </mx:HBox>
        </mx:Application>

    Could anyone give me any pointers in the right direction? Thanks.

    Read the article

  • PHP header redirection does not reload <iframe> in IE

    - by Marco Demaio
    When displaying data from the DB I'm usually in this situation:

    1. I'm on page A.php, which shows data from the DB;
    2. the user performs some action (like edit/delete, etc.) and page B.php is loaded to perform the action;
    3. once page B has performed the action, it redirects the browser back to page A;
    4. page A is reloaded by step (3) and therefore shows the updated data.

    To make page B redirect to page A, I use a simple PHP header:

        header("Location: " . "A.php", TRUE, 302);

    This works well in all situations, except when page A.php is displayed inside an <iframe>: in that case it does not reload (step 4 does not happen). This seems to happen only in IE7 (I don't know about IE8); it works perfectly in FF/Safari. And only when using an <iframe>: if page A.php is not in an <iframe> it gets refreshed in IE7 too. To solve this I simply added a couple of headers to page A.php to mark it as not cacheable:

        header("Cache-Control: no-cache, must-revalidate"); // HTTP/1.1
        header("Expires: Sat, 26 Jul 1997 05:00:00 GMT");   // Date in the past

    But I was curious whether you might have experienced the same issue in the past, and if you could give me some advice about it. Thanks!
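
    A complementary belt-and-braces option (an assumption on my part, not from the post) is to force the iframe to reload with a cache-busting query string, so IE cannot serve the frame from its cache at all. A small TypeScript sketch; the iframe id dataFrame is made up for the example.

        // Reload an iframe with a throwaway query parameter so IE7 bypasses
        // its cache instead of showing the stale copy of A.php.
        function reloadFrameBypassingCache(frameId: string): void {
          const frame = document.getElementById(frameId) as HTMLIFrameElement | null;
          if (!frame) {
            return;
          }
          const base = frame.src.split("?")[0];
          frame.src = `${base}?nocache=${Date.now()}`;
        }

        // Usage from the parent page once B.php has finished its work:
        // reloadFrameBypassingCache("dataFrame");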

    Read the article

  • How to best show progress info when using ADO.NET?

    - by Yadyn
    I want to show the user detailed progress information when performing potentially lengthy database operations - specifically, when inserting/updating data that may be on the order of hundreds of KB or MB. Currently, I'm using in-memory DataTables and DataRows which are then synced with the database via TableAdapter.Update calls. This works fine and dandy, but the single call leaves little opportunity to glean any kind of progress info to show to the user. I have no idea how much data is passing through the network to the remote DB or its progress. Basically, all I know is when Update returns, and it is assumed complete (barring any errors or exceptions). But this means all I can show is 0%, then a pause, then 100%. I can count the number of rows, even going so far as to count how many are actually Modified or Added, and I could maybe even calculate, per DataRow, its estimated size based on the datatype of each column, using sizeof for value types like int and checking length for things like strings or byte arrays. With that, I could probably determine an estimated total transfer size before updating, but I'm still stuck without any progress info once Update is called on the TableAdapter. Am I stuck just using an indeterminate progress bar or waiting mouse cursor? Would I need to radically change our data access layer to be able to hook into this kind of information? Even if I can't get it down to the precise KB transferred (like a web browser file download progress bar), could I at least know when each DataRow/DataTable finishes, or something? How do you best show this kind of progress info using ADO.NET?
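
    When the API only exposes one blocking call, the usual workaround is to split the work into batches yourself and report after each batch, e.g. one TableAdapter.Update call per chunk of rows. The shape of that loop is sketched below in TypeScript as a language-neutral illustration; the batch size, callbacks, and names are all assumptions rather than ADO.NET APIs.

        // Process items in fixed-size batches, invoking a progress callback
        // after each batch so the UI can advance a progress bar.
        async function updateInBatches<T>(
          rows: T[],
          batchSize: number,
          updateBatch: (batch: T[]) => Promise<void>,
          onProgress: (done: number, total: number) => void,
        ): Promise<void> {
          for (let i = 0; i < rows.length; i += batchSize) {
            const batch = rows.slice(i, i + batchSize);
            await updateBatch(batch);
            onProgress(Math.min(i + batchSize, rows.length), rows.length);
          }
        }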

    Read the article

  • Seam:token tag not being respected

    - by JBristow
    When I click a command button and then hit the browser back button to return to the form and click it again, it submits a second time without throwing the proper exception... Even stranger, the form id itself is DIFFERENT when I come back, which implies it has regenerated a "valid" form id at some point. Here's the relevant code:

        <h:form id="accountActivationForm">
            <s:token/>
            <a4j:commandButton id="cancelActivateAccountButton"
                action="#{controller[cancelAction]}"
                image="/images/button-Cancel-gray.gif"
                reRender="#{reRenderList}"
                oncomplete="#{onCancelComplete}" />
            &#160;
            <a4j:commandButton id="activateAccountButton"
                action="#{controller[agreeAction]}"
                image="/images/button-i-agree-continue.gif"
                styleClass="activate-account-button"
                reRender="#{reRenderList}"
                oncomplete="#{onActivationComplete}"/>
        </h:form>

    Any ideas? Clarifications: I inherited this, so I'm trying to change it as little as possible (it's used in a couple of places). Each action returns a view, not null; I have confirmed this by stepping through line by line. The reRenderList is empty in my current test case. onActivationComplete is also empty. I'm going to go template by template to see if someone made one with nested forms, because my coworkers have had unrelated problems due to that, so it couldn't hurt to eliminate that as a possible cause.

    Read the article

  • Frameset isn't working in IE

    - by Cameroon
    First of all, why use a frameset in the first place, you ask? Answer: because my boss told me. That being said, I have two files, index.html and head.html.

    Contents of index.html:

        <!DOCTYPE HTML PUBLIC "-//W3C//DTD HTML 4.01 Frameset//EN" "http://www.w3.org/TR/1999/REC-html401-19991224/frameset.dtd">
        <html>
        <head>
            <meta http-equiv="Content-Type" content="text/html; charset=ISO-8859-1" />
            <title>Site Title</title>
        </head>
        <frameset rows="122,*" FRAMEBORDER=NO FRAMESPACING=2 BORDER=0>
            <frame name="t" src="head.html" scrolling="no" marginheight="0" marginwidth="0">
            <frame name="b" src="http://www.website.com">
        </frameset>
        <noframes>
            <p>You have frames turned off in your browser, please turn them on and reload this page.</p>
        </noframes>
        </html>

    Contents of head.html:

        <div style="border-bottom:2px solid #000;height:120px">
            <center>This is the frame head.</center>
        </div>

    The code works fine in all browsers except Internet Explorer 7 and 8 (I don't care about 6). Is there anything I am doing wrong? And if not, can the same effect be achieved without frames, and if so, how?

    Read the article

  • What is the right way to make a new XMLHttpRequest from an RJS response in Ruby on Rails?

    - by Yuri Baranov
    I'm trying to come closer to a solution for the problem of my previous question. The scheme I would like to try is the following:

    1. The user requests an action from the RoR controller.
    2. The action makes some database queries, does some calculations, sets some session variable(s) and returns some RJS code as the response. This code could either update a progress bar and make another AJAX request, or display the final result (e.g. a chart graphic) if all the processing is finished.
    3. The browser evaluates the JavaScript representation of the RJS. It may make another (recursive? is recursion allowed at all?) request, or just display the result for the user.

    So, my question this time is: how can I embed an XMLHttpRequest call into RJS code properly? Some things I'd like to know are: Should I create a new thread to avoid stack overflow? What Rails helpers (if any) should I use? Has anybody ever done something similar before, on Rails or with other frameworks? Is my idea sane?
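
    The client side of such a scheme is essentially a polling loop: ask the server for progress, update the bar, and schedule another request until the work is done. Below is a browser-side sketch in TypeScript; the progress endpoint, the JSON shape, and the element ids are assumptions, not part of the Rails code in question.

        interface ProgressResponse {
          percent: number;   // 0..100
          done: boolean;
          html?: string;     // final chart markup once processing is finished
        }

        async function pollProgress(jobUrl: string): Promise<void> {
          const bar = document.getElementById("progressBar") as HTMLElement;
          const result = document.getElementById("result") as HTMLElement;

          const response = await fetch(jobUrl, { method: "POST" });
          const progress: ProgressResponse = await response.json();

          bar.style.width = `${progress.percent}%`;
          if (progress.done) {
            result.innerHTML = progress.html ?? "";
          } else {
            // Schedule the next request instead of recursing synchronously,
            // so the call stack never grows.
            window.setTimeout(() => pollProgress(jobUrl), 1000);
          }
        }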

    Read the article

  • Why do the page posts take so long?

    - by Olle
    Hi! I am having some problems with page postbacks that take a loooong time to execute. If I do an "appcmd list requests" I get something like this:

        REQUEST "79000001800004e3" (url:POST /dir/file.aspx, time:87219 msec, client:xxx.xxx.xxx.xxx, stage:ExecuteRequestHandler, module:ManagedPipelineHandler)
        REQUEST "8600000080002f82" (url:POST /dir/file.aspx, time:61391 msec, client:xxx.xxx.xxx.xxx, stage:AcquireRequestState, module:Session)
        REQUEST "5e00010280000420" (url:POST /dir/file.aspx, time:21047 msec, client:xxx.xxx.xxx.xxx, stage:AcquireRequestState, module:Session)

    It's one particular file that causes the problem (dir/file.aspx in this case). The requests come from the same IP address. The first one is in the ManagedPipelineHandler module and the two after that are in the Session module. I do not have any details about the web browser, or anything more about the client for that matter. I have looked for SQL deadlocks and did not find any. There are no long-running SQL queries at all. Do you have any idea of what the problem could be? Regards.

    Read the article

  • Manually Writing the HTML in TWebBrowser Pt. 2

    - by nomad311
    As the name suggests, this is (sort of) a continuation of http://stackoverflow.com/questions/2784679/manually-writing-the-html-in-twebbrowser. This time around I'm trying to add some auto-refresh logic to the HTML I get. I have pieced together an approach from several sources (see below). In short, I am trying to locate the title node and add a meta node after it (in the HTML head node), but I get an access violation. Here is the source:

        iHtmlDoc := IHTMLDocument3(WebBrowser1.Document);
        iHtmlEleTitle := IHTMLElement2(iHtmlDoc.getElementsByName('title').item(0, 0));
        iHtmlEle := IHTMLElement2(IHTMLDocument2(iHtmlDoc).createElement(
          Format('<meta http-equiv="refresh" content="%d">', [1])));
        iHtmlEleTitle.insertAdjacentElement('afterEnd', IHTMLElement(iHtmlEle));

    And a (technically, not functionally) different way of doing it - the casting is slightly different here:

        IHTMLElement2(IHtmlDocument3(WebBrowser1.Document).getElementsByName('title').item(0, 0))
          .insertAdjacentElement('afterEnd',
            IHTMLDocument2(WebBrowser1.Document).createElement(
              Format('<meta http-equiv="refresh" content="%d">', [VPI_ISSUANCE_AUTO_RELOAD])));

    Again, all I get from Delphi is an access violation, and I fished through the MSDN documentation on it, but now I'm hoping someone out there has gone through the same and has some insight. Any help?

    Sources (I think this is all of them):
    http://webdesign.about.com/od/metataglibraries/a/aa080300a.htm (auto-reload)
    http://delphi.about.com/od/adptips2005/qt/webbrowserhtml.htm (web browser document as an HTML document)
    http://msdn.microsoft.com/en-us/library/system.windows.forms.htmlelement.insertadjacentelement(VS.80).aspx (GetElementsByName)
    http://www.experts-exchange.com/Web_Development/Components/ActiveX/Q_26131034.html (insertAdjacentElement)
    http://www.experts-exchange.com/Programming/Languages/Pascal/Delphi/Q_23407977.html (GetElementsByName)

    Read the article

  • Which Javascript framework (jQuery vs Dojo vs ... )?

    - by cletus
    There are a few Javascript frameworks/toolkits out there, such as: jQuery; Dojo; Prototype; YUI; MooTools; ExtJS; SmartClient; and others I'm sure. It certainly seems that jQuery is ascendant in terms of mindshare at the moment. For example, Microsoft (ASP.NET MVC) and Nokia will use it. I also found this performance comparison of Dojo, jQuery, MooTools and Prototype (Edit: Updated Comparison), which looks highly favourable to Dojo and jQuery. Now, my previous experience with Javascript has been the old-school HTML + Javascript most of us have done, and RIA frameworks like Google Web Toolkit ("GWT") and Ext-GWT, which were a fairly low-stress entry into the Ajax world for someone from a Java background such as myself. But, after all this, I find myself leaning towards the more PHP + Ajax type solution, which just seems that much more lightweight. So I've been looking into jQuery and I really like its use of commands, the use of fluent interfaces and method chaining, its cross-browser CSS selector superset, the fact that it's lightweight and extensible, the brevity of the syntax, unobtrusive Javascript, and the plug-in framework. Now obviously many of these aren't unique to jQuery, but on the basis that some things are greater than the sum of their parts, it just seems that it all fits together and works well. So jQuery seems to have a lot going for it and it looks to be the frontrunner for what I choose to concentrate on. Is there anything else I should be aware of, or any particular reasons not to choose it or to choose something else? EDIT: Just wanted to add this trend comparison of Javascript frameworks.

    Read the article

  • viewstack causing error 1065 variable not defined issue?

    - by jason
    I've got a Flex application with a left-side TREE control and a ViewStack on the right; when someone makes a selection in the tree, it loads the named ViewStack child based on the hidden node value of the tree's XML. But it's throwing error 1065 (variable not defined) on a ViewStack child which worked on the last browser refresh/reload. From what I can tell it's not related to a particular ViewStack child; it just seems to throw the error on certain render events. I've tried to use creationPolicy="all" on the ViewStack but it seems to be of no help.

        public function treeChanged(event:Event):void
        {
            selectedNode = Tree(event.target).selectedItem as XML;
            //trace(selectedNode.@hidden);
            //Alert.show(selectedNode.@hidden.toString() + " *");
            if (selectedNode.@hidden.toString() == '' || selectedNode.@hidden.toString() == null) {
                //Alert.show("NULL !");
                return;
            }
            mainviewstack.selectedChild = Container(mainviewstack.getChildByName(selectedNode.@hidden.toString()));
            //Container(mainviewstack.getChildByName(selectedNode.@hidden));
        }

    If I add an alert box before the getChildByName call, the ViewStack has time to render and everything works fine, so it leads me to believe the app is not giving it enough time to load the ViewStack?

    Read the article

  • (rsErrorOpeningConnection) Could not obtain information about Windows NT group/user

    - by ChelleATL
    I am trying to deploy a report to the Reporting Services server but keep running up against this error:

        An error occurred during client rendering.
        An error has occurred during report processing. (rsProcessingAborted)
        Cannot create a connection to data source 'dataSource1'. (rsErrorOpeningConnection)
        Could not obtain information about Windows NT group/user 'DOMAIN\useradmin', error code 0x5.

    Here's my situation: everything is being run as DOMAIN\useradmin and the report uses a remote database. Reporting Services and SQL Server are both run under DOMAIN\useradmin. DOMAIN\useradmin is a Windows AD login and is part of the server machine's Administrators group. My test report uses a data source model that in turn uses a data source connecting to a database on a different SQL Server. The data source uses "Credentials stored securely in the report server" with the options "Use as Windows credentials when connecting to the data source" and "Impersonate the authenticated user after a connection has been made to the data source". It uses the credentials of DOMAIN\useradmin, which is the db owner of the remote database. DOMAIN\useradmin is assigned the roles System Administrator, System User, and Browser, Content Manager, My Reports, Publisher, Report Builder. So if everything is being run under an über AD account, why am I getting this "Could not obtain information about Windows NT group/user 'DOMAIN\useradmin'" error? Under normal circumstances, an AD login with Publisher permissions will be developing reports using a data source model created by DOMAIN\useradmin, but using one of the remote database's users, which is mapped from yet another AD login. I ran the following statements and no errors were returned:

        use master
        go
        xp_grantlogin 'DOMAIN\useradmin'
        go
        xp_logininfo 'DOMAIN\useradmin'
        go

    Read the article

  • PyGTK/GIO: monitor directory for changes recursively

    - by detly
    Take the following demo code (from the GIO answer to this question), which uses a GIO FileMonitor to monitor a directory for changes:

        import gio

        def directory_changed(monitor, file1, file2, evt_type):
            print "Changed:", file1, file2, evt_type

        gfile = gio.File(".")
        monitor = gfile.monitor_directory(gio.FILE_MONITOR_NONE, None)
        monitor.connect("changed", directory_changed)

        import glib
        ml = glib.MainLoop()
        ml.run()

    After running this code, I can then create and modify child nodes and be notified of the changes. However, this only works for immediate children (I am aware that the docs don't say otherwise). The last of the following shell commands will not result in a notification:

        touch one
        mkdir two
        touch two/three

    Is there an easy way to make it recursive? I'd rather not manually code something that looks for directory creation and adds a monitor, removing them on deletion, etc. The intended use is for a VCS file browser extension, to be able to cache the statuses of files in a working copy and update them individually on changes. So there might be anywhere from tens to thousands (or more) of directories to monitor. I'd like to just find the root of the working copy and add the file monitor there. I know about pyinotify, but I'm avoiding it so that this works under non-Linux kernels such as FreeBSD or... others. As far as I'm aware, the GIO FileMonitor uses inotify underneath where available, and I can understand not emphasising the implementation to maintain some degree of abstraction, but it suggested to me that it should be possible. (In case it matters, I originally posted this on the PyGTK mailing list.)

    Read the article

  • Explaining the need to avoid horizontal scroll

    - by Bradley Herman
    I need help explaining to my boss why her design is poor on a client's website. She has no knowledge of the web, and it can be difficult as a web developer working with a woman who is a graphic designer (not even a web designer, really). On a current site she has designed, an image bar "needs" to be ~1200px wide according to her, though it isn't necessary for the content. A quick sketch to illustrate what's going on: as you see, the banner spills out past the 960px of the content, as wide as 1200px. This creates a horizontal scrollbar even though all the content is viewable within the 960px-wide viewport. I need to make this an <img> and not a CSS background because it's a jQuery slideshow that fades from image to image. I think this is a big problem because a lot of people are going to get a horizontal scrollbar imposed on their browser when they're still able to see all the relevant content. She thinks no one will notice and it'll be fine; I think it's very bad practice and confusing to the end user. How do I explain the problem to her?

    Read the article

  • Sharepoint isn't accepting new Credentials initially when switching users.

    - by Tiziani
    Hi all, I have a standard website (one web application and one site collection) with some custom pages and web parts. The issue I'm having is that when I try to switch users using "Sign In as a Different User" and enter new credentials (even for another site collection admin account), IE tries the account three times and then presents a 401 Access Denied screen. After that, if I erase all the Access Denied stuff from the browser's URL, I'm logged in as the new account I had just entered and which was not accepted. After researching for a while on Google, I found a KB article ( http://support.microsoft.com/kb/970814 ) that might be related, but I just tested it here and it didn't work at all. The modified method suggested by the KB is the following:

        function LoginAsAnother(url, bUseSource)
        {
            document.cookie = "loginAsDifferentAttemptCount=0";
            if (bUseSource == "1")
            {
                GoToPage(url);
            }
            else
            {
                //var ch = url.indexOf("?") >= 0 ? "&" : "?";
                //url += ch + "Source=" + escapeProperly(window.location.href);
                //STSNavigate(url);
                document.execCommand("ClearAuthenticationCache");
            }
        }

    But after making this change, it no longer asks for new credentials. Any ideas?

    Read the article
