Search Results

Search found 22447 results on 898 pages for 'cpu load'.

Page 382/898 | < Previous Page | 378 379 380 381 382 383 384 385 386 387 388 389  | Next Page >

  • Efficient Clojure workflow?

    - by Alex B
    I am developing a pet project with Clojure, but wonder if I can speed up my workflow a bit. My current workflow (with Compojure) is:
      1. Start Swank with lein swank.
      2. Go to Emacs, connect with M-x slime-connect.
      3. Load all existing source files one by one. This also starts a Jetty server and an application.
      4. Write some code in the REPL.
      5. When satisfied with experiments, write a full version of the construct I had in mind.
      6. Eval (C-c C-c) it.
      7. Switch the REPL to the namespace where this construct resides and test it.
      8. Switch to the browser and reload the browser tab with the affected page.
      9. Tweak the code, eval it, check in the browser.
      10. Repeat any of the above.
    There are a number of annoyances with it:
      - I have to switch between Emacs and the browser (or browsers if I am testing things like templating with multiple browsers) all the time. Is there a common idiom to automate this? I used to have a JavaScript bit that reloads the page continuously, but it's of limited utility, obviously, when I have to interact with the page for more than a few seconds.
      - My JVM instance becomes "dirty" when I experiment and write test functions. Basically namespaces become polluted, especially if I'm refactoring and moving functions between namespaces. This can lead to symbol collisions and I need to restart Swank. Can I undef a symbol?
      - I load all source files one by one (C-c C-k) upon restarting Swank. I suspect I'm doing it all wrong.
      - Switching between the REPL and the file editor can be a bit irritating, especially when I have a lot of Emacs tabs open, alongside the browser(s).
    I'm looking for ways to improve the above points and the entire workflow in general, so I'd appreciate it if you'd share yours. P.S. I have also used Vimclojure before, so Vimclojure-based workflows are welcome too.
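    For the namespace-pollution point, a minimal sketch of the kind of cleanup I have in mind (assuming ns-unmap and remove-ns are the right tools; the myapp.* names below are hypothetical):

        ;; drop a single stale definition from a namespace (hypothetical names)
        (ns-unmap 'myapp.handlers 'old-handler)

        ;; or throw away an entire scratch namespace
        (remove-ns 'myapp.scratch)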

    Read the article

  • Amazon EC2- micro-instance vs single small/medium instance

    - by shashankaholic
    I have a chat application using a stack of Openfire, Tomcat 6 and MySQL. Currently, I have installed all these servers on a single Linux micro-instance (613 MB memory). Even with a low user base of 10-20 I am encountering CPU overload, which is quite obvious here. As I am new to Amazon EC2, can somebody suggest how to scale up my architecture according to traffic?
      - Should I use separate micro instances for every app server (Openfire, MySQL, Tomcat 6)?
      - Should I use a single small or medium instance for the whole server stack?
    Some factors in context: high reliance on MySQL, high memory usage due to file transfer, and a web application interacting with other Amazon services like S3 and SES.

    Read the article

  • Trying to avoid two separate solutions for an x86 and x64 program

    - by Sean Anderson
    Hi all, I have a program which needs to function in both an x86 and an x64 environment. It is using Oracle's ODBC drivers. I have a reference to Oracle.DataAccess.DLL. This DLL is different depending on whether the system is x64 or x86, though. Currently, I have two separate solutions and I am maintaining the code in both. This is atrocious. I was wondering what the proper solution is? I have my platform set to "Any CPU," and it is my understanding that VS should compile my code to an intermediate language such that it should not matter whether I use the x86 or x64 version. Yet, if I attempt to use the x64 DLL I receive the error "Could not load file or assembly 'Oracle.DataAccess, Version=2.102.3.2, Culture=neutral, PublicKeyToken=89b483f429c47342' or one of its dependencies. An attempt was made to load a program with an incorrect format." I am running on a 32-bit machine, so the error message makes sense, but it leaves me wondering how I am supposed to efficiently develop this program when it needs to work on x64. Thanks.
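    One direction I've been looking at is keeping a single project and letting MSBuild pick the right DLL per platform; a hedged sketch of the project-file fragment (the ..\libs\x86 and ..\libs\x64 paths are placeholders for wherever the two copies of Oracle.DataAccess.dll actually live, and this assumes x86 and x64 platform configurations exist):

        <!-- hypothetical .csproj fragment -->
        <ItemGroup Condition="'$(Platform)' == 'x86'">
          <Reference Include="Oracle.DataAccess">
            <HintPath>..\libs\x86\Oracle.DataAccess.dll</HintPath>
          </Reference>
        </ItemGroup>
        <ItemGroup Condition="'$(Platform)' == 'x64'">
          <Reference Include="Oracle.DataAccess">
            <HintPath>..\libs\x64\Oracle.DataAccess.dll</HintPath>
          </Reference>
        </ItemGroup>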

    Read the article

  • css layout for footer at bottom with dynamic ajax content changing height of page

    - by m42
    [Update] I actually compromised on this problem for now by foregoing the fixed footer design. It seems that there is no problem with dynamic content moving the footer and resizing containers appropriately unless the footer is fixed to the browser bottom initially. I hope others will eventually provide a great solution that encompasses the best of both worlds.
    I spent all day trying to get the footer to move down the page to accommodate dynamically added (via ajax) content. I really need some pointers or links because I haven't found anything that helps.
    Basically: My site has some pages that begin with only a text box and a button, so that the total height of the content area is only a few inches beneath the header area. I don't have any problem getting the sticky footer working so that the footer appears at the bottom of the browser window even when there is very little content on screen. That same css layout works fine for other pages that have content that extends beneath the browser window.
    The catch: The content has to be rendered and passed to the browser with the initial load.
    The problem: Any content that is added to the page via AJAX after the initial load paints down the page correctly -- but the footer remains in its initial location. Please tell me there is a fix for this.
    I can't post the css until checking with my boss first - if possible - and if needed, I will later - but it's just a very basic version of the many sticky footer css solutions floating around the web. Thanks.
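    For reference, the layout is essentially one of the common sticky-footer patterns; a minimal sketch of that pattern (the #wrap/#main/#footer ids and the 60px footer height are placeholders, not my actual markup):

        html, body { height: 100%; }
        #wrap   { min-height: 100%; }                      /* grows with the content, pushing the footer down */
        #main   { overflow: auto; padding-bottom: 60px; }  /* padding must match the footer height */
        #footer { position: relative; margin-top: -60px;   /* pulled up into the wrapper's padding */
                  height: 60px; clear: both; }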

    Read the article

  • Cannot send HTML Emails

    - by Zen Savona
    Well, I'm trying to send an HTML email using Gmail SMTP from CI, and it seems to reject my emails when they have any amount of tables. No error is given, they just do not appear in my inbox. If I send an email with light HTML and no tables, they go through. Anyone have any insight?

        $config = Array(
            'protocol'  => 'smtp',
            'smtp_host' => 'ssl://smtp.googlemail.com',
            'smtp_port' => 465,
            'smtp_user' => '[email protected]',
            'smtp_pass' => '--------',
            'mailtype'  => 'html',
            'charset'   => 'iso-8859-1'
        );
        $this->load->library('email', $config);
        $this->email->set_newline("\r\n");
        $this->email->from('myEmail', 'myName');
        $this->email->to($this->input->post('email'));
        $this->email->subject('mySubject');
        $msg = $this->load->view('partials/email', '', true);
        $this->email->message($msg);
        $this->email->send();

    Read the article

  • jQuery dynamic CSS loading weird behavior

    - by jimpsr
    The app I am working on requires dynamic loading of CSS and JS. Right now the solution is as follows:

        myapp.loadCss = function(css) {
            $("head").append("<link>");
            cssDom = $("head").children(":last");
            cssDom.attr({rel: "stylesheet", type: "text/css", href: css});
        }

        myapp.loadJs = function(js) {
            ... // $.ajax call is used in synchronized mode to make sure the js is fully loaded
        }

    When some widgets need to be loaded, the usual call will be:

        myapp.loadCss('/widgets/widget1/css/example.css');
        myapp.loadJs('/widgets/widget1/js/example.js');

    The weird thing is that once in a while (1 out of 10 or 20), the newly created DOM elements from example.js will not be able to get their CSS from example.css. Could it be that my loadCss method does not load the CSS synchronously? I have tried to replace my loadCss with the following code:

        myapp.loadCss = function(css) {
            $('<link href="' + css + '" rel="stylesheet" type="text/css" />').appendTo($('head'));
        }

    It seems to be OK then (I refreshed the webpage a hundred times for verification :-( ) But unfortunately this method fails in IE (IE7, not tested in IE6, 8). Is there any better solution for this?
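    What I'm currently experimenting with is branching on IE's own stylesheet API; a hedged sketch (this assumes document.createStyleSheet, an IE-only call, is available in the IE versions in question):

        myapp.loadCss = function (css) {
            if (document.createStyleSheet) {      // IE-only API (IE7/8)
                document.createStyleSheet(css);
            } else {
                $('<link rel="stylesheet" type="text/css" />')
                    .attr('href', css)
                    .appendTo('head');
            }
        };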

    Read the article

  • how to return a list using SwingWorker

    - by Ender
    I have an assignment where I have to create an image gallery which uses a SwingWorker to load the images from a file; once the images are loaded you can flip through them and have a slideshow play. I am having trouble getting the list of loaded images out of the SwingWorker. This is what happens in the background; it just publishes the results to a TextArea:

        // In a thread
        @Override
        public List<Image> doInBackground() {
            List<Image> images = new ArrayList<Image>();
            for (File filename : filenames) {
                try {
                    //File file = new File(filename);
                    System.out.println("Attempting to add: " + filename.getAbsolutePath());
                    images.add(ImageIO.read(filename));
                    publish("Loaded " + filename);
                    System.out.println("Added file" + filename.getAbsolutePath());
                } catch (IOException ioe) {
                    publish("Error loading " + filename);
                }
            }
            return images;
        }

    When it is done, it just inserts the images into a List<Image>, and that is all it does:

        // In the EDT
        @Override
        protected void done() {
            try {
                for (Image image : get()) {
                    list.add(image);
                }
            } catch (Exception e) {
            }
        }

    I also created a method that returns the list, called getImages(). What I need is to get the list from getImages(), but it doesn't seem to work when I call execute(). For example:

        MySwingWorkerClass swingworker = new MySwingWorkerClass(log, list, filenames);
        swingworker.execute();
        imageList = swingworker.getImages();

    Once it reaches imageList, it doesn't return anything. The only way I was able to get the list was when I used run() instead of execute(). Is there another way to get the list, or is the run() method the only way? Or perhaps I am not understanding the SwingWorker class.
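    A minimal sketch of what I think the timing issue is, assuming imageList is a field of the enclosing class: execute() returns immediately, so the list has to be read either from done() (as above) or only once the worker reports it is finished, e.g.:

        // hypothetical wiring around my existing MySwingWorkerClass
        // (needs java.beans.PropertyChangeListener / PropertyChangeEvent imports)
        final MySwingWorkerClass swingworker = new MySwingWorkerClass(log, list, filenames);
        swingworker.addPropertyChangeListener(new PropertyChangeListener() {
            public void propertyChange(PropertyChangeEvent evt) {
                if ("state".equals(evt.getPropertyName())
                        && SwingWorker.StateValue.DONE == evt.getNewValue()) {
                    try {
                        imageList = swingworker.get();   // worker has finished, get() returns at once
                    } catch (Exception e) {
                        e.printStackTrace();
                    }
                }
            }
        });
        swingworker.execute();   // returns immediately; imageList is filled later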

    Read the article

  • Alternative or successor to GDBM

    - by Anon Guy
    We have a GDBM key-value database as the backend to a load-balanced, web-facing application that is implemented in C++. The data served by the application has grown very large, so our admins have moved the GDBM files from "local" storage (on the webservers, or very close by) to a large, shared, remote, NFS-mounted filesystem. This has affected performance. Our performance tests (in a test environment) show page load times jumping from hundreds of milliseconds (for local disk) to several seconds (over NFS, local network), and sometimes getting as high as 30 seconds. I believe a large part of the problem is that the application makes lots of random reads from the GDBM files, and that these are slow over NFS; this will be even worse in production (where the front-end and back-end have even more network hardware between them) and as our database gets even bigger.
    While this is not a critical application, I would like to improve performance, and I have some resources available, including application developer time and Unix admins. My main constraint is time: I only have the resources for a few weeks. As I see it, my options are:
      - Improve NFS performance by tuning parameters. My instinct is we won't get much out of this, but I have been wrong before, and I don't really know very much about NFS tuning.
      - Move to a different key-value database, such as memcachedb or Tokyo Cabinet.
      - Replace NFS with some other protocol (iSCSI has been mentioned, but I am not familiar with it).
    How should I approach this problem?

    Read the article

  • What is fastest way to backup a disk image over LAN?

    - by David Balažic
    Sometimes I boot sysrescd or a similar live Linux on a PC to back up the hard drive over the local network to my server. I have noticed many times that the transfer speed is not optimal (slower than HDD and network speed). Any rules of thumb on what to do and what to avoid? What I typically do is something like:

        dd bs=16M if=/dev/sda | nc ...                       # on client
        nc ... | dd bs=16M of=/destination/disk/backup1      # on server

    I also "throw in" lzop (others are way too slow) and sometimes on-the-fly md5sum calculation (of both the uncompressed and compressed source). I try to add (m)buffer (or other alternatives) to improve throughput (and get a progress indicator). I have noticed that even with enough free CPU, adding commands to the pipeline slows things down. Typically the destination is on an NTFS volume (accessed via ntfs-3g, with the big_writes option).
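    For illustration, the kind of pipeline I end up with looks roughly like this (a sketch only - the port 2000, the 512M buffer size and the host name "server" are placeholders, and the nc listen syntax differs between netcat variants):

        # on the server (receiver), started first
        nc -l -p 2000 | mbuffer -m 512M | lzop -d | dd bs=16M of=/destination/disk/backup1

        # on the client (sender)
        dd bs=16M if=/dev/sda | lzop | mbuffer -m 512M | nc server 2000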

    Read the article

  • VMware ESX installation on a SATA disk

    - by ilansch
    I have a PC with a Gigabyte H77 motherboard, an Intel i5-3550 CPU, 8 GB of 1600 MHz RAM and a 500 GB hard disk (7200 RPM) - a WD SATA III disk. I wish to install ESX on it and run some virtual machines - not a lot, something like 2-3 VMs. My hard disk is SATA; is it possible to install ESX Server on it? I am not worried about loading issues. When I try loading the installation, it says it cannot detect my disk (since it is not a SCSI disk). How can I bypass this or find a solution? Thanks.

    Read the article

  • Classic ASP application-wide initializations and object caching

    - by slack3r
    In classic ASP (which I am forced to use), I have a few factory functions, that is, functions that return classes. I use JScript. In one include file I use these factory functions to create some classes that are used throughout the application. This include file is included with the #include directive in all pages. These factory functions do some "heavy lifting" and I don't want them to be executed on every page load. So, to make this clear, I have something like this:

        // factory.inc
        function make_class(arg1, arg2) {
            function klass() {
                //...
            }
            // ... Some heavy stuff
            return klass;
        }

        // init.inc, included everywhere
        <!-- #include FILE="factory.inc" -->
        // ...
        MyClass1 = make_class(myarg01, myarg02);
        MyClass2 = make_class(myarg11, myarg12);
        //...

    How can I achieve the same effect without calling make_class on every page load?
      - I know that I can't cache the classes in the Application object
      - I can't use the Application_OnStart hook in Global.asa
      - I could probably create a scripting component, but I really don't want to do that
    So, is there something else I can do? Maybe some way to achieve caching of these classes, which are really objects in JScript.
    PS: [further clarification] In the above code "heavy stuff" is not so heavy, but I just want to know if there's a way to avoid it being executed all the time. It reads database meta information, builds a table of the primary keys in the database and another table that resolves strings to classes, etc.

    Read the article

  • iPhone Localization: simple project not working

    - by gonso
    Hello, I'm doing my first localized project and I've been fighting with it for several hours with no luck. I have to create an app that, based on the user's selection, shows texts and images in different languages. I've read most of Apple's documents on the matter but I can't make a simple example work. These are my steps so far:
      1) Create a new project.
      2) Manually create an "en.lproj" directory in the project's folder.
      3) Using TextEdit, create a file called "Localizable.strings" and save it as Unicode UTF-16. The file looks like this:

        /*
          Localizable.strings
          Multilanguage02

          Created by Gonzalo Floria on 5/6/10.
          Copyright 2010 __MyCompanyName__. All rights reserved.
        */
        "Hello" = "Hi";
        "Goodbye" = "Bye";

      4) I drag this file to the Resources folder in Xcode and it appears with the "subdir" "en" underneath it (with the dropdown triangle to the left). If I try to view it in Xcode it looks all wrong, with lots of ? symbols, but I'm guessing that's because it's a UTF-16 file. Right?
      5) Now in my viewDidLoad I can access these strings like this:

        NSString *translated;
        translated = NSLocalizedString(@"Hello", @"User greetings");
        NSLog(@"Translated text is %@", translated);

    My problem is allowing the user to switch language. I have created an es.lproj with the Localizable.strings file (in Spanish), but I CAN'T access it. I've tried this line:

        [[NSUserDefaults standardUserDefaults] setObject:[NSArray arrayWithObjects:@"es", nil] forKey:@"AppleLanguages"];

    But that only works the NEXT time you load the application. Is there no way to allow the user to switch languages while running the application? Do I have to implement my own dictionary files and forget all about the NSLocalizedString family? Thanks for ANY advice or pointers. Gonso
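    One direction I'm considering (a hedged sketch, assuming the Spanish strings live in es.lproj inside the main bundle) is bypassing AppleLanguages and reading straight from the chosen .lproj:

        // load the Spanish sub-bundle explicitly instead of relying on AppleLanguages
        NSString *path = [[NSBundle mainBundle] pathForResource:@"es" ofType:@"lproj"];
        NSBundle *languageBundle = [NSBundle bundleWithPath:path];

        // same keys as in Localizable.strings, looked up in that bundle
        NSString *translated = [languageBundle localizedStringForKey:@"Hello"
                                                                value:@""
                                                                table:nil];
        NSLog(@"Translated text is %@", translated);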

    Read the article

  • OSMF seek with Amazon Cloudfront

    - by giorrrgio
    I've written a little OSMF player that streams via RTMP from Amazon CloudFront. There's a known issue: the mp3 duration is not correctly read from the metadata and thus the seek function is not working. I know there's a workaround involving the getStreamLength call on the NetConnection, which I successfully implemented in a previous non-OSMF player, but now I don't know how and when to call it, in terms of OSMF events and traits. This code is not working:

        protected function initApp():void
        {
            // the pointer to the media
            var resource:URLResource = new URLResource( STREAMING_PATH );

            // Create a mediafactory instance
            mediaFactory = new DefaultMediaFactory();

            // creates and sets the MediaElement (generic) with a resource and path
            element = mediaFactory.createMediaElement( resource );

            var loadTrait:NetStreamLoadTrait = element.getTrait(MediaTraitType.LOAD) as NetStreamLoadTrait;
            loadTrait.addEventListener(LoaderEvent.LOAD_STATE_CHANGE, _onLoaded);

            player = new MediaPlayer( element );

            // Marker 5: Add MediaPlayer listeners for media size and current time change
            player.addEventListener( DisplayObjectEvent.MEDIA_SIZE_CHANGE, _onSizeChange );
            player.addEventListener( TimeEvent.CURRENT_TIME_CHANGE, _onProgress );

            initControlBar();
        }

        private function onGetStreamLength(result:Object):void
        {
            Alert.show("The stream length is " + result + " seconds");
            duration = Number(result);
        }

        private function _onLoaded(e:LoaderEvent):void
        {
            if (e.newState == LoadState.READY)
            {
                var loadTrait:NetStreamLoadTrait = player.media.getTrait(MediaTraitType.LOAD) as NetStreamLoadTrait;
                if (loadTrait && loadTrait.netStream)
                {
                    var responder:Responder = new Responder(onGetStreamLength);
                    loadTrait.connection.call("getStreamLength", responder, STREAMING_PATH);
                }
            }
        }

    Read the article

  • Loading a gallery of external images with JQuery

    - by pingu
    Hi guys, I'm creating a gallery of images pulled in via a Twitter image hosting service, and I want to show a loading graphic before they are displayed, but the logic seems to be failing me. I'm trying to do it the following way:

        <ul>
          <li>
            <div class="holder loading"> </div>
          </li>
          <li>
            <div class="holder loading"> </div>
          </li>
          <li>...</li>
          <li>...</li>
        </ul>

    where the "loading" class has an animated spinner set as its background image. The jQuery:

        $('.holder').each(function(){
            var imgsrc = 'http://example.com/imgsrc';
            var img = new Image();
            $(img).load(function () {
                $(this).hide();
                $(this).parent('.holder').removeClass('loading');
                $(this).parent().append(this);
                $(this).fadeIn();
            })
            .error(function () {
            }).attr('src', imgsrc);
        });

    This isn't working - I'm not sure that the logic is correct, or how to access the "holder" from inside the img.load nested function. What would be the best way to build this gallery so that the images display only after they've been loaded?
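    A hedged sketch of the direction I'd expect to work, keeping the holder in a closure variable instead of calling .parent() on an image that has not been appended yet (the URL is still a placeholder):

        $('.holder').each(function () {
            var $holder = $(this);                        // captured for use inside the load handler
            var imgsrc = 'http://example.com/imgsrc';     // placeholder URL, one per holder in practice

            $('<img />')
                .hide()
                .load(function () {
                    $holder.removeClass('loading').append(this);
                    $(this).fadeIn();
                })
                .error(function () {
                    // leave the spinner, or swap in a fallback image here
                })
                .attr('src', imgsrc);
        });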

    Read the article

  • Tomcat access logs - are failed requests included?

    - by Maxim Eliseev
    We have a RESTful web service (Java, hosted in Tomcat on Ubuntu on Amazon EC2). From time to time it fails (not every week). When it fails, Java CPU consumption goes to 100% and it takes all available memory. It does not finish by itself. I have to restart the server. There is nothing suspicious in Tomcat access logs. I guess one of our users could submit a very "heavy" request which brought the server down. Is it possible this request is not in Tomcat logs since it never finished?

    Read the article

  • eclipse - starting with android sdk

    - by dontHaveName
    I want to start programming for Android. What I have:
      - Windows 7
      - Eclipse Classic 4.2
      - all the required files downloaded - http://developer.android.com/sdk/installing/adding-packages.html
      - the ADT plugin
    I want to install the new ADT plugin. At first I tried to download it from http://dl-ssl.google.com/android/eclipse; I added it, but when I select it there is only "pending.." and nothing loads (maybe an internet connection issue? I have selected Native connection in the preferences). After pending, it writes: "Unable to connect to repository http://dl-ssl.google.com/android/eclipse/content.xml org.eclipse.equinox.p2.core.ProvidesException". That's why I downloaded the ADT plugin instead. If I select the downloaded ADT plugin, its content loads - developer tools and the NDK plugin - so I select all and click Next. It loads and writes this: "Cannot complete the install because one or more required items could not be found. Software being installed: Android Development Tools 20.0.3.v201208082019-427395 (com.android.ide.eclipse.adt.feature.group 20.0.3.v201208082019-427395) Missing requirement: Android Development Tools 20.0.3.v201208082019-427395 (com.android.ide.eclipse.adt.feature.group 20.0.3.v201208082019-427395) requires 'org.eclipse.wst.sse.core 0.0.0' but it could not be found". The 'org.eclipse.wst.sse.core 0.0.0' problem is shown here: http://developer.android.com/resources/faq/troubleshooting.html#installeclipsecomponents but there is a solution only for versions 3.3 and 3.4 (I have 4.2). I tried it anyway - I looked for updates but nothing was found. I really don't know where the problem could be. Thanks for any answer. (Sorry for my English.) I will send 1€ to somebody who can solve my problem ;) (I think all the problems are due to the internet connection, but I can't set it up.)

    Read the article

  • curl_multi_exec stops if one url is 404, how can I change that?

    - by Rob
    Currently, my cURL multi exec stops if one url it connects to doesn't work, so a few questions:
      1: Why does it stop? That doesn't make sense to me.
      2: How can I make it continue?
    EDIT: Here is my code:

        $SQL = mysql_query("SELECT url FROM shells");
        $mh = curl_multi_init();
        $handles = array();
        while($resultSet = mysql_fetch_array($SQL)){
            //load the urls and send GET data
            $ch = curl_init($resultSet['url'] . $fullcurl);
            //Only load it for two seconds (Long enough to send the data)
            curl_setopt($ch, CURLOPT_TIMEOUT, 5);
            curl_multi_add_handle($mh, $ch);
            $handles[] = $ch;
        }

        // Create a status variable so we know when exec is done.
        $running = null;
        //execute the handles
        do {
            // Call exec. This call is non-blocking, meaning it works in the background.
            curl_multi_exec($mh,$running);
            // Sleep while it's executing. You could do other work here, if you have any.
            sleep(2);
            // Keep going until it's done.
        } while ($running > 0);

        // For loop to remove (close) the regular handles.
        foreach($handles as $ch) {
            // Remove the current array handle.
            curl_multi_remove_handle($mh, $ch);
        }
        // Close the multi handle
        curl_multi_close($mh);
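    For what it's worth, one way to see what each handle actually returned is to inspect the handles after the loop; a hedged sketch (this assumes a 404 completes the transfer rather than aborting the multi handle):

        // after the do/while loop, check every handle individually
        foreach ($handles as $ch) {
            $status = curl_getinfo($ch, CURLINFO_HTTP_CODE);
            $url    = curl_getinfo($ch, CURLINFO_EFFECTIVE_URL);
            if ($status != 200) {
                echo "Request to $url came back with HTTP $status\n";
            }
            curl_multi_remove_handle($mh, $ch);
            curl_close($ch);
        }
        curl_multi_close($mh);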

    Read the article

  • ASP .NET page runs slow in production

    - by Brandi
    I have created an ASP .NET page that works flawlessly and quickly from Visual Studio. It does a very large database read from a database on our network to load a gridview inside of an update panel. It displays progress in an Ajax modalpopupextender. Of course I don't expect it to be instant what with the large db reads, but it takes on the order of seconds, not on the order of minutes. This is all working great until I put it up on the server - it is very, VERY slow when I access it via the internet - takes several minutes to load the database information into the gridview. I'm baffled why it would not perform the exact same as it had from Visual Studio. (It is in release mode and I have taken off the debug flag) I have since been trying things like eliminating unneeded update panels and throwing out the ajax tool. Nothing has made it any faster on production. It is not the database as far as I know, since it has been consistently fast from my computer (from visual studio) and consistently slow from the server. I am wondering, where do I look next? Has anyone else had this problem before? Could this be caused by update panels or Ajax modalpopupextenders in different parts of the application? Why would the live behaviour differ so much from the localhost behaviour? Both the server with the ASP .NET page and the server with the database are servers on our network. I'm using Visual Studio 2008. Thank you in advance for any insight or advice.

    Read the article

  • recommendations for a lightweight linux distribution for a test server

    - by Jack
    I'm planning on setting up a test server to experiment with some application servers (Tomcat/JBoss/...) and with some portals. Now, the machine I've set aside for this is lightweight CPU/GPU-wise (Atom D510, 4 GB RAM, 500 GiB HDD, onboard GPU), but it should suffice for most things; I'm more interested in the stability of JBoss/Tomcat for my purposes than in raw performance. However, I'm having a bit of trouble picking an appropriate distribution (size-, performance-, setup-time- and security-wise), since it seems I can't sneeze without another distribution popping up. I've been thinking about going for Fedora, since I've read that that distribution has been optimized for Atom, but I'm not really familiar with it. My experience with Linux has mostly been limited to Ubuntu and some tinkering with puppylinux. I'm not afraid to get my hands dirty using the command line. I'm not planning on starting a discussion per se; I'm mostly after the pros/cons people have encountered with particular distributions.

    Read the article

  • How to only fade content once it is ready in jQuery?

    - by Jared
    Hello, I've seen examples for load() and ready(), but they always seem to apply to the window or document body, or a div or img or something like that. I have a hover() function, and when you hover over an element, it uses fadeIn() to reveal a popup. The problem is, the images inside the popup div aren't loaded yet, so it ends up just appearing anyway. I need code that will only allow the popup to fade in when its contents are fully loaded. When I just plug in other selectors like the following, it doesn't work:

        $(this).next(".menuPopup").ready(function () {  // or load(), neither works
            fadeTo(300, 1);
        });

    EDIT: Relevant code

        $( '.leftMenuProductButton' ).hover (
            function () {
                $(this).next(".menuPopup").stop().css("opacity", 1).fadeIn('slow');
            },
            function () {
                $(this).next(".menuPopup").stop().hide();
            }
        );

    HTML

        <div class="leftMenuProductButton"> Product 1</div>
        <div id="raceramps56" class="menuPopup"><IMG SRC="myimage.jpeg"> </div>
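    A hedged sketch of the approach I'm imagining: count the load events of the images inside the popup and only fade in once they have all fired (the cached-image check is there because already-loaded images never fire load again):

        $('.leftMenuProductButton').hover(function () {
            var $popup = $(this).next('.menuPopup');
            var $imgs = $popup.find('img');
            var remaining = $imgs.length;

            if (remaining === 0) {               // nothing to wait for
                $popup.stop().fadeIn('slow');
                return;
            }
            $imgs.one('load', function () {
                if (--remaining === 0) {
                    $popup.stop().fadeIn('slow');
                }
            }).each(function () {
                if (this.complete) {             // image came from cache, fire the handler manually
                    $(this).trigger('load');
                }
            });
        }, function () {
            $(this).next('.menuPopup').stop().hide();
        });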

    Read the article

  • Programmatically rebuild .exd files when loading VBA

    - by aspartame
    Hi, after updating Microsoft Office 2007 to Office 2010, some custom VBA scripts embedded in our software failed to compile with the following error message: "Object library invalid or contains references to object definitions that could not be found." As far as I know, this error is a result of a security update from Microsoft (Microsoft Security Advisory 960715). When ActiveX controls are added to VBA scripts, information about the controls is stored in cache files on the local hard drive (.exd files). The security update modified some of these controls, but the .exd files were not automatically updated. When the VBA scripts try to load the old versions of the controls stored in the cached files, the error occurs. These cache files must be removed from the hard drive in order for the controls to load successfully (which will create new, updated .exd files automatically).
    What I would like to do is to programmatically (using Visual C++) remove the outdated .exd files when our software loads. When opening a VBA project using CApcProject::ApcProject.Open I set the following flag: axProjectThrowAwayCompiledState.

        TestHR(ApcProject.Open(pHost, (MSAPC::AxProjectFlag) (MSAPC::axProjectNormal | MSAPC::axProjectThrowAwayCompiledState)));

    According to the documentation, this flag should cause the VBA project to be recompiled and the temporary files to be deleted and rebuilt. I've also tried to update the checksum of the host application type library, which should have the same effect. However, none of these fixes seems to do the job and I'm running out of ideas. Help is very much appreciated!
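    If it comes down to deleting the caches ourselves, a hedged sketch of what I have in mind (the %TEMP%\VBE and %TEMP%\Excel8.0 locations are an assumption on my part about where the .exd caches live on our machines):

        // Sketch: remove any *.exd control caches from a couple of candidate folders.
        #include <windows.h>
        #include <string>

        static void DeleteExdFiles(const std::wstring& dir)
        {
            WIN32_FIND_DATAW fd;
            HANDLE h = FindFirstFileW((dir + L"\\*.exd").c_str(), &fd);
            if (h == INVALID_HANDLE_VALUE)
                return;                                    // folder missing or no .exd files
            do {
                DeleteFileW((dir + L"\\" + fd.cFileName).c_str());
            } while (FindNextFileW(h, &fd));
            FindClose(h);
        }

        void PurgeControlCaches()
        {
            wchar_t tempPath[MAX_PATH];
            if (GetTempPathW(MAX_PATH, tempPath) == 0)
                return;
            // assumed cache locations - adjust to wherever the host actually writes them
            DeleteExdFiles(std::wstring(tempPath) + L"VBE");
            DeleteExdFiles(std::wstring(tempPath) + L"Excel8.0");
        }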

    Read the article

  • Virtualbox and getting the Locked VT-x Error when trying to use more than 1 processor on the session

    - by Dave
    I have installed the latest version of VirtualBox on my Hewlett-Packard (h8-1170uk). I have an Intel i7-2600 CPU and 8 GB of RAM. I can get VirtualBox to create several sessions of different operating systems at the same time, but whenever I try to get one session to open up using more than one processor (I wanted to give one of my sessions access to 2 processors), I keep getting this error message: "VT-x features locked or unavailable in MSR. (VERR_VMX_MSR_LOCKED_OR_DISABLED). Result Code: E_FAIL (0x80004005) Component: Console Interface: IConsole {1968b7d3-e3bf-4ceb-99e0-cb7c913317bb}". I have searched many times and cannot find an option to correct this. I have checked my BIOS and there are no options about VT-x or virtualisation or anything. Am I doing something wrong? Why does VirtualBox run fine when just using the 1 processor option?

    Read the article

  • How to implement a 'safe' periodical executer without using the Rails helpers?

    - by Robbie
    I am very new to Ruby on Rails and was never really big on writing JavaScript, so the built-in helpers were like a tiny slice of heaven. However, I have recently learned that using the helper methods creates "obtrusive JavaScript", so I am doing a tiny bit of refactoring to get all this messy code out of my view. I'm also using the Prototype API to figure out what all these functions do. Right now, I have:

        <%= periodically_call_remote(:url => {:action => "tablerefresh", :id => 1 },
                                     :frequency => '5',
                                     :complete => "load('26', 'table1', request.responseText)") %>

    Which produces:

        <script type="text/javascript">
        //<![CDATA[
        new PeriodicalExecuter(function() {new Ajax.Request('/qrpsdrail/grids/tablerefresh/1', {asynchronous:true, evalScripts:true, onComplete:function(request){load('26', 'table1', request.responseText)}, parameters:'authenticity_token=' + encodeURIComponent('dfG7wWyVYEpelfdZvBWk7MlhzZoK7VvtT/HDi3w7gPM=')})}, 5)
        //]]>
        </script>

    My concern is that the "encodeURIComponent" and the presence of "authenticity_token" are generated by Rails. I'm assuming these are used to assure the validity of a request (ensuring a request comes from a currently active session?). If that is the case, how can I implement this in application.js 'safely'? It seems that the built-in method, although obtrusive, does add some beneficial security. Thanks, in advance, to all who answer.
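    What I'm picturing for application.js is something like the sketch below. This assumes I expose the token myself in the layout, e.g. with a <meta name="csrf-token" content="<%= form_authenticity_token %>"/> tag - that meta tag is my own addition, not something Rails emits for me here:

        // application.js - unobtrusive replacement for periodically_call_remote
        document.observe('dom:loaded', function () {
            var meta  = $$('meta[name="csrf-token"]').first();
            var token = meta ? meta.readAttribute('content') : '';

            new PeriodicalExecuter(function () {
                new Ajax.Request('/qrpsdrail/grids/tablerefresh/1', {
                    method: 'post',
                    parameters: { authenticity_token: token },
                    onComplete: function (request) {
                        load('26', 'table1', request.responseText);
                    }
                });
            }, 5);
        });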

    Read the article

  • Programmatically enable/disable MenuBar buttons in Flex 4

    - by Hamid
    I have the following XML in my Flex 4 (AIR) project that defines the start of my menu interface:

        <mx:MenuBar x="0" y="0" width="100%" id="myMenuBar" labelField="@label" itemClick="menuChange(event)">
          <mx:dataProvider>
            <s:XMLListCollection>
              <fx:XMLList xmlns="">
                <menu label="File">
                  <item label="New"/>
                  <item label="Load"/>
                  <item label="Save" enabled="false"/>
                </menu>
                <menu label="Help">
                  <item label="About"/>
                </menu>
              </fx:XMLList>
            </s:XMLListCollection>
          </mx:dataProvider>
        </mx:MenuBar>

    I am trying to find the syntax that will let me set the Save button to enabled="true" after a file has been loaded by clicking "Load", but I can't figure it out; can someone make a suggestion please? Currently the way button clicks are detected is by a switch/case testing the String result of the MenuEvent's event.item.@label. Maybe this isn't the best way?
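    A hedged sketch of what I've been trying (this assumes the XML-based dataProvider can simply be updated in place with E4X once the "Load" branch of my switch/case runs):

        // called from the "Load" branch of menuChange(), after the file has loaded
        private function enableSaveItem():void
        {
            var menus:XMLListCollection = XMLListCollection(myMenuBar.dataProvider);
            var fileMenu:XML = menus.getItemAt(0) as XML;          // the "File" menu node
            fileMenu.item.(@label == "Save").@enabled = "true";    // flip the attribute via E4X
        }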

    Read the article

  • Computer turns off and on after start, then goes dead

    - by Shiki
    I built a new PC from the following components:
      - CPU: Intel Core i7 950
      - MB: Gigabyte X58A-UD3R
      - RAM: 2 x 2 GB i7 Corsair memory
      - VGA: Zotac AMP2 GTX260
      - HDD: 1 GreenSATA HDD (Western Digital 500 GB RE2)
    When I turn it on, it runs for a few seconds with the fans at maximum speed, then turns off. Then it starts again by itself, runs with the fans at maximum speed, and nothing happens. First I suspected my PSU; it's a Chieftec 450AA. After I borrowed a Chieftec 550AA PSU, I tried to start with that - exact same story. Any idea? Do I need a bigger PSU? The reason I can't localize the problem is that I have never seen this turn-on, turn-off, turn-on behaviour before. If you can give an answer for that, it would already help people like me with the same problem.

    Read the article
