Search Results

Search found 4240 results on 170 pages for 'delayed execution'.

Page 25 of 170

  • Is there an alternative to JavaScript for the web that can do multi-threading and synchronous execution?

    - by rambodash
    I would like to program web applications the way I do in desktop programming languages, where code executes synchronously and the browser doesn't freeze while a loop runs. Yes, I know there are workarounds using callbacks and setTimeout, but they are workarounds after all, and they don't give the same flexibility as programming in the orthodox way. I've been looking at Dart as a possibility, but I can't seem to find where it says it can do either of these. The same goes for Haxe, Emscripten, and the hundreds of other converters that try to circumvent JavaScript: in the end it all gets converted to JavaScript, so you ultimately have to stay conscious of asynchronous/multi-threading concerns.
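
    As a concrete illustration of the setTimeout workaround the asker mentions (a minimal sketch; the array and chunk size are made up for the example), a long loop can be split into chunks so the browser stays responsive between them:

        // Process a large array without freezing the UI thread.
        var items = new Array(1000000);

        function processChunk(start) {
          var end = Math.min(start + 1000, items.length);
          for (var i = start; i < end; i++) {
            // ... do the real work on items[i] here ...
          }
          if (end < items.length) {
            // Yield to the browser, then schedule the next chunk.
            setTimeout(function () { processChunk(end); }, 0);
          }
        }

        processChunk(0);

    Web Workers are the closest thing the browser offers to real multi-threading, but they communicate by message passing rather than shared state, which is part of what the asker is pushing back against.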

    Read the article

  • How do I motivate myself to put my ideas into execution, or to get a job at an MNC like Google or Microsoft?

    - by Demla Pawan
    I mean: how do I motivate myself to get a job at Google, or to create another Google in the future? There is no mentor who can guide me on this topic, so I'm asking here. I'm a graduate in BE IT, but with low grades, and with an interest in learning new programming languages; I haven't yet done anything notable like developing a system of my own, and I have two more years left to prove my worth to someone. So, is there a quick guide for learning a language and then just going on to implement my ideas, so that they get appreciated or I get a good job at a big MNC? By the way, I have built one website for a client and run my own WordPress blog, and in the past I have tried my hand at the basics of C++, Java, JS, JSP, PHP, Ubuntu, and web design.

    Read the article

  • What other bash variables, such as $USER, are available during execution to assist in my script?

    - by semi-newbie
    This is related to question 19245, where one of the responders answered the question in an awesome way that was very, VERY clear to any newbie. Now here is a question that I can't seem to figure out. I wrote a script for starting the VMware Firefox plugin (don't worry, I gave that up and now run vBox VERY happily; I left VMware for my servers :) ). I needed to start the plugin as sudo, but I also needed to pass it an argument (a password) that happened to be the same. So, if my password was Hello123, the command would be:

        sudo ./myscript.sh hi other Hello123

    Running from the command line, the script would ask for my sudo password and then run. I wanted to capture THAT password and pass it along as well. I also wanted to run it graphically, so I tried gksudo, which has a -p option that returns the password for variable assignment. Well, that was a nightmare, because I would still get prompted for the original sudo; see below:

        # Find username
        vUser=$USER
        # Find password (and hopefully enable sudo)
        vP=gksudo -p -D somedescriptiontext echo
        # Execute command
        gksudo ./myscript.sh hi $vUser $vP

    and I still get prompted twice. So my question is threefold:

        1. Is there a variable I can use for the password, just like there is one for the user, $USER?
        2. Is there a different way I should be assigning the result of the command I have in $vP?
        3. Could executing it the way I have it run in an uninitiated session rather than the current one? I am getting some additional warning-type errors on some variables.

    I tried using Zenity to just capture the text, but then of course I couldn't pass that value to sudo, so I could only use it as a parameter, which puts me back at two prompts. Thanks!
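
    A hedged sketch of what may be going wrong (assuming gksudo's -p/--print-pass option prints the entered password to stdout, as its documentation describes): the assignment above never captures that output, because it lacks command substitution. Capturing it and feeding it to sudo -S, which reads the password from standard input, should reduce this to a single graphical prompt. The description text and script arguments are illustrative:

        #!/bin/bash
        vUser=$USER
        # One graphical prompt: capture the password gksudo -p prints to stdout.
        vP=$(gksudo -p -D "somedescriptiontext" echo)
        # Reuse it: sudo -S reads the password from stdin instead of prompting.
        echo "$vP" | sudo -S ./myscript.sh hi "$vUser" "$vP"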

    Read the article

  • Web application execution is slower under iOS 4.3. How can this problem be solved?

    Web application rendering is slower under iOS 4.3. How can this problem be solved? Update of 18.03.2011 by Katleen. If you are among the update pioneers who have already moved to the very recently released iOS 4.3, perhaps you have noticed a few malfunctions? Indeed, following various complaints and remarks posted online by unhappy users, attention has turned to the mobile operating system, which seems to handle web applications poorly in terms of fluidity. This information has just been confirmed by Apple, whose spokesperson Trudy Muller explained that online applications ...

    Read the article

  • Which is more effective in coding: reducing lines of code, or reducing executions of code?

    - by Ayyappan.Anbalagan
    I have had this doubt for many years. Say I am writing some code to achieve some functionality: I write 20 lines of code to achieve it, while my co-worker writes code for the same functionality in just 5 lines, because he used a looping statement; but that code will execute around 30 to 50 times. So which is the better way of coding? As far as I know, I always try to reduce code length as much as I can.

    Read the article

  • How does having the debugger attached change game execution on an Xbox 360?

    - by Sebastian Gray
    So I thought my issue related to the difference between a Debug and a Release build, as per this question: What's the difference between a "Release" Xbox 360 build and a "Debug" one? But I've since found that if I go ahead and build a Creators Club version of the game using a Debug build and deploy it to the Xbox, I get the same experience I had with the Release version of my game. However, if I run the game from Visual Studio using F5, having set the Xbox as the default platform, then the game runs as expected. If I change from Debug to Release and run with CTRL+F5, then the game also works as expected. How would running the game with the debugger attached change the results I am getting in game? Is there any way that I can use the same approach, or change the default compilation of the game, so that I can use this approach to release my game?

    Read the article

  • How to chain actions/animations together and delay their execution?

    - by codinghands
    I'm trying to build a simple game with a number of screens ('TitleScreen', 'LoadingScreen', 'PlayScreen', 'HighScoreScreen'), each of which has its own draw and update logic methods, sprites, other useful fields, etc. This works well for me as a game dev beginner, and it runs. However, on my 'PlayScreen' I want to run some animations before the player gets control: dropping in some artwork, playing some sound effects, generally prettifying things a little. But I'm not sure of the best way to chain animations, sound effects, and other timed general events together. I could make an intermediary screen, 'PrePlayScreen', which simply has all of this hardcoded like so:

        Update() {
            Animation anim1 = new Animation(.....);
            Animation anim2 = new Animation(.....);
            anim1.Run();
            if (anim1.State == AnimationState.Complete)
                anim2.Run();
            if (anim2.State == AnimationState.Complete)
                // Load 'PlayScreen' screen
        }

    But this doesn't seem so great; surely there must be a better way? I then thought, 'Hey, an AnimationManager! That'd be awesome!' But then that creeping OOP panic set in as I thought about it some more. If I create the Animation in my Screen, then add it to the AnimationManager (which may or may not be a GameComponent hooked up to Update/Draw), how can I get 'back' to it, to signal commands like start/end/repeat? I'd still need to keep a reference to the object in my Screen so that I could communicate with it once it's buried in the bosom of a List in my AnimationManager. This seems bad. I've also tried using events: call 'Update' on all the animations in the PlayScreen update loop, but crucially give every animation a bool flag ('Active') which determines whether it should begin. The first animation has this set to 'true', all others 'false'. On completion the first animation raises an event, which sets animation 2's bool flag to true (so it then runs). Once animation 2 is complete, another 'anim complete' event is raised, and the screen state changes. Considering the game I'm making is basically as simple as it gets, I know I'm overthinking this... it's just that the paradigm shift from web to game development is making me break out in a serious case of the stupids.
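
    One pattern that addresses this (a minimal sketch, not from any particular framework; every type and member name here is illustrative) is a small sequencer that owns a queue of timed steps, advances them from a single Update call, and raises one event when the whole chain finishes, so the screen holds exactly one reference, to the sequencer, instead of one per animation:

        using System;
        using System.Collections.Generic;

        // One timed step in the chain: OnStart fires once, then the step
        // is considered finished after Duration seconds have elapsed.
        class Step
        {
            public float Duration;
            public Action OnStart;
        }

        class Sequencer
        {
            private readonly Queue<Step> steps = new Queue<Step>();
            private float elapsed;
            private bool started;

            // Raised once, after the final step completes.
            public event Action Completed;

            public void Add(float duration, Action onStart)
            {
                steps.Enqueue(new Step { Duration = duration, OnStart = onStart });
            }

            // Call once per frame with the frame's elapsed seconds.
            public void Update(float dt)
            {
                if (steps.Count == 0) return;
                Step current = steps.Peek();
                if (!started)
                {
                    started = true;
                    if (current.OnStart != null) current.OnStart();
                }
                elapsed += dt;
                if (elapsed >= current.Duration)
                {
                    steps.Dequeue();
                    elapsed = 0f;
                    started = false;
                    if (steps.Count == 0 && Completed != null) Completed();
                }
            }
        }

    The PlayScreen would then enqueue its artwork drop-ins and sound cues once, call the sequencer's Update from its own update loop, and flip player control on in the Completed handler.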

    Read the article

  • Why does AQTime slow execution even when profiling is not on, and can anything be done about it?

    - by Antti Suni
    AQTime for Delphi boasts being very fast at getting to the trouble spots by using areas and triggers, etc. But it seems to me that, especially if you have a lot of code in the areas to profile, execution slows down dramatically even when profiling is NOT on. For example, if I want to profile a specific routine late in the program flow, but don't know what is called there, I'd think to put only this routine in as a trigger, set the initial status for threads to Off, and then choose "Full check by Routines/Lines". However, when I do this, program execution slows down heavily before the trigger routine has ever been hit. For example, if the "preparation flow" takes around 5 minutes without AQTime, then when I run it with profiling disabled it has already been running for 30 minutes and still goes, even though I know the trigger has not yet been reached. I know I can try to work around this by reducing the number of routines/lines profiled, but that is not really a good solution for me, since I'd like to profile all of them once I get to the actual trigger routine. Another, often better workaround is to start the application without AQTime and use Attach to Process after the "preparation flow" has finished, but this works well only when execution pauses in the GUI in the proper place or otherwise provides a suitable time frame for doing the attaching, and that is not always the case. Any comments on why this is so, and is there anything else to do besides reducing the code in the areas or attaching to the process later?

    Read the article

  • SSMS Tools Pack 2.0 is out! With huge productivity booster features that will blow your mind and ease your job even more.

    - by Mladen Prajdic
    What better way to end the summer and start those productive autumn days ahead than with a fresh new version of the SSMS Tools Pack? This is a big release with two new features that are huge productivity boosters. The first new feature is Tab Sessions. Every SQL tab you open is saved every N minutes (default 2) and stored in a session. This works similarly to internet browser sessions. Once you reopen SSMS you can restore your last session with the click of a button, and every window even gets connected to the server it was previously connected to. The Tab History window looks like this: [screenshot]

    The second feature is the Execution Plan Analyzer. It is designed to quickly help you find the costliest operators by a number of properties. If that's not enough, you can easily search through the whole execution plan for whatever you like. And to top it off, you can auto-analyze the execution plan: the analysis reports various problems the execution plan has and suggests the most common solution for each. The ultimate purpose of the Execution Plan Analyzer is to make your troubleshooting quicker and easier. It uses a simple user interface that is easy to navigate and is built directly into the execution plan itself. The Execution Plan Analyzer looks like this: [screenshot]

    Smaller fixes include a completely redesigned SQL History Search window and various other bug fixes. You can download the new version 2.0 at the Download page. For more detailed feature descriptions go to the main Features page. Enjoy it!

    Read the article

  • Introducing sp_ssiscatalog (v1.0.0.0)

    - by jamiet
    Regular readers of my blog may know that over the last year I have made available a suite of SQL Server Reporting Services (SSRS) reports that provide visualisations of the data in the SQL Server Integration Services (SSIS) 2012 Catalog. Those reports are available at http://ssisreportingpack.codeplex.com. As I have built these reports and used them myself on a real-life project, a couple of things have dawned on me:

        • As soon as your SSIS Catalog gets a significant amount of data in it, the performance of the reports degrades rapidly. This is hampered by the fact that there are limitations on the SQL statements that I can embed within an SSRS report.
        • SSIS professionals are data guys at heart, and those types of people feel more comfortable in a query environment rather than having to go through the rigmarole of standing up a reporting server (well, I know I do anyway).

    Hence I have decided to take a different tack with the reporting pack. Taking my lead from Adam Machanic's sp_whoisactive and Brent Ozar's sp_blitz, I have produced sp_ssiscatalog, a stored procedure that makes it easy to get at the crucial data in the SSIS Catalog. I will spend the rest of this blog explaining exactly what sp_ssiscatalog does and how to use it, but if you would rather just download the bits yourself and start to play, you can download v1.0.0.0 from DB v1.0.0.0.

    Usage Scenarios

    Most Recent Execution. I find that the information one most frequently needs from the SSIS Catalog pertains to the most recent execution. Hence if you execute sp_ssiscatalog with no parameters, that is exactly what you will get:

        EXEC [dbo].[sp_ssiscatalog]

    This will return up to 5 resultsets:

        EXECUTION - Summary information about the execution, including status, start time and end time
        EVENTS - All events that occurred during the execution
        OnError,OnTaskFailed - All events where event_name is either OnError or OnTaskFailed
        OnWarning - All events where event_name is OnWarning
        EXECUTABLE_STATS - Duration and execution result of every executable in the execution

    All 5 resultsets will be displayed if there is any data satisfying that resultset. In other words, if there are no (for example) OnWarning events, then the OnWarning resultset will not be displayed. The display of these 5 resultsets can be toggled respectively by these 5 optional parameters (all of which are of type BIT):

        @exec_execution
        @exec_events
        @exec_errors
        @exec_warnings
        @exec_executable_stats

    Any Execution. As just explained, the default behaviour is to supply data for the most recent execution. If you wish to specify which execution the data should be returned for, simply supply the execution_id as a parameter:

        EXEC [dbo].[sp_ssiscatalog] 6

    All Executions. sp_ssiscatalog can also return information about all executions:

        EXEC [dbo].[sp_ssiscatalog] @operation_type='execs'

    The most recent execution will appear at the top. sp_ssiscatalog provides a number of parameters that enable you to filter the resultset:

        @execs_folder_name
        @execs_project_name
        @execs_package_name
        @execs_executed_as_name
        @execs_status_desc

    Some typical usages might be:

        -- Return all failed executions
        EXEC [dbo].[sp_ssiscatalog] @operation_type='execs', @execs_status_desc='failed'
        -- Return all executions for a specified folder
        EXEC [dbo].[sp_ssiscatalog] @operation_type='execs', @execs_folder_name='My folder'
        -- Return all executions of a specified package in a specified project
        EXEC [dbo].[sp_ssiscatalog] @operation_type='execs', @execs_project_name='My project', @execs_package_name='Pkg.dtsx'

    Installing sp_ssiscatalog. Under the covers sp_ssiscatalog actually calls many other stored procedures and functions, hence creating it on your server is not simply a case of running a CREATE PROCEDURE script. I maintain the code in a SQL Server Data Tools (SSDT) database project, which means that you have two ways of obtaining it.

    Download the source code. You can download the latest (at the time of writing) source code from http://ssisreportingpack.codeplex.com/SourceControl/changeset/view/70192. Hit the download button to download all the source code in a zip file. The contents of that zip file will include an SSDT database project, which you can open up in SSDT and publish just like any other SSDT database project. You can publish to a new database or any existing database, even [SSISDB] if you prefer.

    Download a dacpac. Maintaining the code in an SSDT database project means that it can all get packaged up into a dacpac that you can then publish to your SQL Server. That dacpac is available from DB v1.0.0.0. Ordinarily a dacpac can be deployed to a SQL Server from SSMS using the Deploy Dacpac wizard; however, in this case there is a limitation. Because sp_ssiscatalog refers to objects in the SSIS Catalog (which it has to do, of course), the dacpac contains a SqlCmd variable to store the name of the database that underpins the SSIS Catalog; unfortunately the Deploy Dacpac wizard in SSMS has a rather gaping limitation in that it cannot deploy dacpacs containing SqlCmd variables. Hence, we can use the command-line tool, sqlpackage.exe, instead. Don't worry if reverting to the command line sounds a little daunting; I assure you it is not. Simply open a Visual Studio command prompt, cd to the folder containing the downloaded dacpac, and type:

        "%PROGRAMFILES(x86)%\Microsoft SQL Server\110\DAC\bin\sqlpackage.exe" /action:Publish /TargetDatabaseName:SsisReportingPack /SourceFile:SSISReportingPack.dacpac /Variables:SSISDB=SSISDB /TargetServerName:(local)

    or the shortened form:

        "%PROGRAMFILES(x86)%\Microsoft SQL Server\110\DAC\bin\sqlpackage.exe" /a:Publish /tdn:SsisReportingPack /sf:SSISReportingPack.dacpac /v:SSISDB=SSISDB /tsn:(local)

    remembering to set your server name appropriately (here mine is set to "(local)"). If everything works successfully, you're done: you'll have a new database called [SsisReportingPack] which contains sp_ssiscatalog.

    Good luck with sp_ssiscatalog. I have been using it extensively on my own projects recently and it has proved to be very useful indeed. Rest assured, however, that I will be adding many new capabilities in the future. Feedback is welcome. @Jamiet

    Read the article

  • What can I send back to the browser while I wait for PHP execution?

    - by Matt Malesky
    So... I have a PHP page that involves a lot of backend execution, namely 'exec' calls that run shell commands on the host server. This can take upwards of a few minutes depending on the calls involved. (If you look below, each pass through the exec calls mounts a LUN; I'd sometimes like to do upwards of 100 per execution.) I'm curious what I can do to send content back to the browser (and prevent it from timing out) in the meantime.

        <!DOCTYPE html>
        <html>
        <head>
            <title>sfvmtk</title>
        </head>
        <body>
        <?php
        // TEMPORARY VARIABLES FOR TESTING
        $hba = 'vmhba38';
        $svip = '10.10.20.100';
        $targets = array (
            0 => array (
                'iqn' => 'iqn.2010-01.com.sf:t5np.esxtest.41',
                'account' => 'esx',
                'isecret' => 'isecret00000',
                'tsecret' => 'tsecret00000'
            ),
            1 => array (
                'iqn' => 'iqn.2010-01.com.sf:t5np.esxtest2.42',
                'account' => 'esx2',
                'isecret' => 'isecret00001',
                'tsecret' => 'tsecret00001'
            )
        );
        $hostname = $_REQUEST['hostname'];
        $username = $_REQUEST['username'];
        $password = $_REQUEST['password'];
        foreach ($targets as $ctarget) {
            exec('esxcli -s '.$hostname.' -u '.$username.' -p '.$password.' iscsi adapter discovery statictarget add -A '.$hba.' -a '.$svip.' -n '.$ctarget['iqn'], $out);
            exec('esxcli -s '.$hostname.' -u '.$username.' -p '.$password.' iscsi adapter target portal auth chap set -A '.$hba.' -a '.$svip.' -N '.$ctarget['account'].' -d uni -l required -n '.$ctarget['iqn'].' -S '.$ctarget['isecret'], $out);
            exec('esxcli -s '.$hostname.' -u '.$username.' -p '.$password.' iscsi adapter target portal auth chap set -A '.$hba.' -a '.$svip.' -N '.$ctarget['account'].' -d mutual -l required -n '.$ctarget['iqn'].' -S '.$ctarget['tsecret'], $out);
        }
        exec('vicfg-rescan --server '.$hostname.' --username '.$username.' --password '.$password.' '.$hba, $out);
        ?>
        </body>
        </html>
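
    A minimal sketch of one common approach, reusing the question's $targets loop and assuming no intermediate buffering (gzip, FastCGI, or proxy buffering can still hold output back): lift the execution time limit and flush a progress line to the browser after each exec() call.

        <?php
        set_time_limit(0);       // lift max_execution_time for this request
        while (ob_get_level() > 0) {
            ob_end_flush();      // remove any PHP output-buffering layers
        }
        ob_implicit_flush(true); // flush PHP's buffer after every echo

        foreach ($targets as $i => $ctarget) {
            exec('...long-running esxcli call...', $out);
            // Push a progress line to the browser between calls.
            echo "Configured target " . ($i + 1) . " of " . count($targets) . "<br>\n";
            flush();
        }
        ?>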

    Read the article

  • C++ execution error: This application has requested the Runtime to terminate it in an unusual way.

    - by user1846547
    I am trying to run a C++ program and am getting the following error message when I run it, using the Code::Blocks IDE and SQL API:

        "This application has requested the Runtime to terminate it in an unusual way.
        Please contact the application's support team for more information.
        Process returned 3 (0x3)   execution time : 7.547 s
        Press any key to continue."

    The program compiles fine but throws this error on execution. My current OS is Windows Server 2003 SP2 (32-bit). The program compiles and executes fine on Windows XP (32-bit) without any hassle. I checked the services (services.msc) running on my XP machine, compared them with Windows Server 2003, and found the settings to be the same. Can someone please have a look and help me resolve the issue? Thanks, Dhruv

    Read the article

  • HELP!! Rake aborted! Can't modify frozen hash

    - by pmneve
    Too bad even the trace doesn't say which hash is involved. Sorry this post is long: I am trying to provide enough context to be meaningful. The error occurs intermittently when rake jobs:work is pulling a command out of delayed_jobs while my status observer is in the process of parsing a log file for the detailed results of the previous delayed_job denizen. I have an observer class (in RAILS_ROOT/lib) which listens for the events, makes a copy of them, and calls the owner class (in app/models), which then calls on the log parser (also in /lib) to do the actual work. (Should both of those classes, the observer and the parser, be in app/models?) I am due to deliver this application in a few days and this is killing it (and me). I am using DirectoryWatcher to look for flag files that indicate the start and finish of the delayed_jobs. It is started at the end of environment.rb like this:

        require 'directory_watcher'

        $scriptStatusObserver = ScriptStatusObserver.new
        dirToWatch = "#{RAILS_ROOT}/tmp/flags"
        $directoryWatcher = DirectoryWatcher.new(dirToWatch)
        $directoryWatcher.glob = "*.flg"
        $directoryWatcher.interval = 15
        $directoryWatcher.add_observer($scriptStatusObserver)
        $directoryWatcher.persist = "#{RAILS_ROOT}/tmp/flags/dw_state.yml"
        $directoryWatcher.start
        at_exit { $directoryWatcher.stop }

    This code is outside of the run method. (By the way, is that the best place, or is inside run better?) Here is the observer:

        require 'script_run'

        class ScriptStatusObserver
          def initialize
            @rcvdEvents = []
          end

          def update(*events)
            begin
              puts "#{__LINE__.to_s}: ScriptStatusObserver events: \n" + events.to_yaml
              cnt = 0
              events.each do |e|
                if e.to_s.match(/^\s*added/)
                  cnt = cnt + 1
                  @rcvdEvents << e
                end
              end
              ScriptRun.new.catch_up(@rcvdEvents) if cnt > 0
              @rcvdEvents.clear
            rescue
              puts $!
            end
          end
        end

    Here is ScriptRun (it attaches to an associative table built with has_many :through):

        require 'observer'

        class ScriptRun < ActiveRecord::Base
          set_table_name "scripts_runs"
          belongs_to :script
          belongs_to :run

          def parse(result)
            parser = LogParser.new
            parser.parse(result)
          end

          def catch_up(events)
            events.each do |e|
              typ = e.type
              path = e.path
              thisMatch = path.match(/flags\/(\d+)_(\d+)_([\d\.]+)_(\w+)\.flg/)
              run_id = thisMatch[1]
              script_id = thisMatch[2]
              ts = thisMatch[3]
              status = thisMatch[4]
              if e.to_s.match(/^\s*added/)
                status_update(script_id, run_id, status, ts, path)
              end
            end
          end

          def status_update(script_id, run_id, status, ts, path)
            scriptrun = ScriptRun.find(:first,
              :conditions => ["run_id = ? and script_id = ?", run_id.to_i, script_id.to_i])
            if scriptrun.kind_of?(ScriptRun)
              currStatus = scriptrun.status
              if not currStatus == 'completed'
                scriptrun.update_attribute(:status, status)
                if status == 'parse'
                  flag = File.new(path)
                  logSpec = flag.gets
                  flag.close
                  logName = File.basename(logSpec)
                  logPath = logSpec.sub(logName, '')
                  logName =~ /^(([\w_]+)_([\w]+)_(\d+))\.log$/
                  name = $1
                  basename = $2
                  runenv = $3
                  tsOrPid = $4
                  result = Result.new
                  result.log_path = logPath
                  result.basename = basename
                  result.name = name
                  result.script_id = script_id.to_i
                  result.run_id = run_id.to_i
                  if runenv == 'sit'
                    runenv = 'SIT3348'
                  end
                  result.application_environment_id = ApplicationEnvironment.find(:first,
                    :conditions => ["nodename = ?", runenv]).id
                  parse(result)
                  if run_completed?(run_id)
                    myRun = Run.find(run_id.to_i)
                    if myRun.kind_of?(Run)
                      myRun.update_attribute(:completed, Time.now.to_f)
                    end
                  end
                end
              end
            else
              puts "#{__LINE__.to_s}: ScriptRun.status_update: ScriptRun not found for run #{run_id} script #{script_id} ts #{ts.to_s}"
            end
            File.delete(path)
          end

          def run_completed?(id)
            scriptruns = ScriptRun.find(:all, :conditions => ["run_id = ?", id.to_i])
            scriptruns.each do |sr|
              if not sr.status == 'completed'
                return false
              end
            end
            return true
          end
        end

    LogParser is too long even for this post, but it reads the script log, pulls detailed information (counts and timings) out of it, and writes that to a details table. It also tallies and calculates averages and rolls those up into summary tables for quicker access from the web pages. Here is the error trace (don't ask why everything is under my Windows profile; it's a long story):

        Scanner running 1270239731.43
        directory_watcher.notify_observers: [#<...>, #<...>]
        update: [#<... added>, "/pneve/workspace/waftt-0.29/tmp/flags/100039_18_1270239550.108_parse.flg"]
        rake aborted!
        can't modify frozen hash
        C:/Documents and Settings/pneve/workspace/waftt-0.29/vendor/rails/activerecord/lib/active_record/attribute_methods.rb:313:in `[]='
        C:/Documents and Settings/pneve/workspace/waftt-0.29/vendor/rails/activerecord/lib/active_record/attribute_methods.rb:313:in `write_attribute_without_dirty'
        C:/Documents and Settings/pneve/workspace/waftt-0.29/vendor/rails/activerecord/lib/active_record/dirty.rb:139:in `write_attribute'
        C:/Documents and Settings/pneve/workspace/waftt-0.29/vendor/rails/activerecord/lib/active_record/attribute_methods.rb:211:in `last_error='
        C:/Documents and Settings/pneve/workspace/waftt-0.29/vendor/plugins/delayed_job/lib/delayed/worker.rb:141:in `handle_failed_job'
        C:/Documents and Settings/pneve/workspace/waftt-0.29/vendor/plugins/delayed_job/lib/delayed/worker.rb:115:in `run'
        C:/Documents and Settings/pneve/workspace/waftt-0.29/vendor/plugins/delayed_job/lib/delayed/worker.rb:162:in `reserve_and_run_one_job'
        C:/Documents and Settings/pneve/workspace/waftt-0.29/vendor/plugins/delayed_job/lib/delayed/worker.rb:92:in `work_off'
        C:/Documents and Settings/pneve/workspace/waftt-0.29/vendor/plugins/delayed_job/lib/delayed/worker.rb:91:in `times'
        C:/Documents and Settings/pneve/workspace/waftt-0.29/vendor/plugins/delayed_job/lib/delayed/worker.rb:91:in `work_off'
        C:/Documents and Settings/pneve/workspace/waftt-0.29/vendor/plugins/delayed_job/lib/delayed/worker.rb:66:in `start'
        C:/Documents and Settings/pneve/workspace/waftt-0.29/vendor/rails/activesupport/lib/active_support/core_ext/benchmark.rb:10:in `realtime'
        C:/Documents and Settings/pneve/workspace/waftt-0.29/vendor/plugins/delayed_job/lib/delayed/worker.rb:65:in `start'
        C:/Documents and Settings/pneve/workspace/waftt-0.29/vendor/plugins/delayed_job/lib/delayed/worker.rb:62:in `loop'
        C:/Documents and Settings/pneve/workspace/waftt-0.29/vendor/plugins/delayed_job/lib/delayed/worker.rb:62:in `start'
        C:/Documents and Settings/pneve/workspace/waftt-0.29/vendor/plugins/delayed_job/lib/delayed/tasks.rb:13
        c:/Documents and Settings/pneve/ruby/lib/ruby/gems/1.8/gems/rake-0.8.7/lib/rake.rb:636:in `call'
        c:/Documents and Settings/pneve/ruby/lib/ruby/gems/1.8/gems/rake-0.8.7/lib/rake.rb:636:in `execute'
        c:/Documents and Settings/pneve/ruby/lib/ruby/gems/1.8/gems/rake-0.8.7/lib/rake.rb:631:in `each'
        c:/Documents and Settings/pneve/ruby/lib/ruby/gems/1.8/gems/rake-0.8.7/lib/rake.rb:631:in `execute'
        c:/Documents and Settings/pneve/ruby/lib/ruby/gems/1.8/gems/rake-0.8.7/lib/rake.rb:597:in `invoke_with_call_chain'
        c:/Documents and Settings/pneve/ruby/lib/ruby/1.8/monitor.rb:242:in `synchronize'
        c:/Documents and Settings/pneve/ruby/lib/ruby/gems/1.8/gems/rake-0.8.7/lib/rake.rb:590:in `invoke_with_call_chain'
        c:/Documents and Settings/pneve/ruby/lib/ruby/gems/1.8/gems/rake-0.8.7/lib/rake.rb:583:in `invoke'
        c:/Documents and Settings/pneve/ruby/lib/ruby/gems/1.8/gems/rake-0.8.7/lib/rake.rb:2051:in `invoke_task'
        c:/Documents and Settings/pneve/ruby/lib/ruby/gems/1.8/gems/rake-0.8.7/lib/rake.rb:2029:in `top_level'
        c:/Documents and Settings/pneve/ruby/lib/ruby/gems/1.8/gems/rake-0.8.7/lib/rake.rb:2029:in `each'
        c:/Documents and Settings/pneve/ruby/lib/ruby/gems/1.8/gems/rake-0.8.7/lib/rake.rb:2029:in `top_level'
        c:/Documents and Settings/pneve/ruby/lib/ruby/gems/1.8/gems/rake-0.8.7/lib/rake.rb:2068:in `standard_exception_handling'
        c:/Documents and Settings/pneve/ruby/lib/ruby/gems/1.8/gems/rake-0.8.7/lib/rake.rb:2023:in `top_level'
        c:/Documents and Settings/pneve/ruby/lib/ruby/gems/1.8/gems/rake-0.8.7/lib/rake.rb:2001:in `run'
        c:/Documents and Settings/pneve/ruby/lib/ruby/gems/1.8/gems/rake-0.8.7/lib/rake.rb:2068:in `standard_exception_handling'
        c:/Documents and Settings/pneve/ruby/lib/ruby/gems/1.8/gems/rake-0.8.7/lib/rake.rb:1998:in `run'
        c:/Documents and Settings/pneve/ruby/lib/ruby/gems/1.8/gems/rake-0.8.7/bin/rake:31
        c:/Documents and Settings/pneve/ruby/bin/rake:16:in `load'
        c:/Documents and Settings/pneve/ruby/bin/rake:16

    Read the article

  • delayed evaluation of code in subroutines - 5.8 vs. 5.10 and 5.12

    - by Brock
    This bit of code behaves differently under Perl 5.8 than it does under Perl 5.12:

        my $badcode = sub { 1 / 0 };
        print "Made it past the bad code.\n";

        [brock@chase tmp]$ /usr/bin/perl -v
        This is perl, v5.8.8 built for i486-linux-gnu-thread-multi
        [brock@chase tmp]$ /usr/bin/perl badcode.pl
        Illegal division by zero at badcode.pl line 1.
        [brock@chase tmp]$ /usr/local/bin/perl -v
        This is perl 5, version 12, subversion 0 (v5.12.0) built for i686-linux
        [brock@chase tmp]$ /usr/local/bin/perl badcode.pl
        Made it past the bad code.

    Under Perl 5.10.1, it behaves as it does under 5.12:

        brock@laptop:/var/tmp$ perl -v
        This is perl, v5.10.1 (*) built for i486-linux-gnu-thread-multi
        brock@laptop:/var/tmp$ perl badcode.pl
        Made it past the bad code.

    I get the same results with a named subroutine, e.g. sub badcode { 1 / 0 }. I don't see anything about this in the perl5100delta pod. Is this an undocumented change? An unintended side effect of some other change? (For the record, I think 5.10 and 5.12 are doing the Right Thing.)

    Read the article

  • What can cause the PowerShell execution policy not to be taken into account?

    - by Stephane
    We have in our infrastructure a number of PowerShell scripts used for various tasks, ranging from user logon to a support technician simulating a user context. These scripts are centralized on our file server (through DFS) for easier management. Some of them run at logon; some run through published Citrix applications. We have applied a policy for the whole domain and all users that sets the PowerShell execution policy to "unrestricted" so that the scripts can run from the file server. This works perfectly fine for logon scripts (at least, so far), but for scripts that run later (usually through a published application, though the same applies when using Terminal Services and a full desktop), the results are inconsistent: some users can run the scripts fine, some are always prompted in the PowerShell console before the scripts will run. I cannot find anything that could cause this behavior, and it's really inconsistent: if I start PowerShell manually and run Get-ExecutionPolicy, I am told that the current policy is unrestricted. Yet if, from the same session, I try to run a script through a program that calls

        powershell <script file name> <parameters>

    I get prompted before the script can run. What could cause such behavior?

    Read the article

  • Why is C# IntelliSense delayed compared to VB.NET?

    - by Sphynx
    In VB.NET projects, errors are highlighted immediately after the cursor leaves the line. In C#, I have to wait several seconds for IntelliSense to highlight them. Also, the C# version doesn't show all project errors in the Error List unless you start a build. Actually, it seems to work differently in every way. Is it possible to adjust that behavior?

    Read the article

  • How can I consolidate deferred/delayed calls in Objective-C?

    - by thrusty
    I'd like to ensure that certain maintenance tasks are executed "eventually". For example, after I detect that some resources might no longer be used in a cache, I might call:

        [self performSelector:@selector(cleanupCache) withObject:nil afterDelay:0.5];

    However, there might be numerous places where I detect this, and I don't want to be calling cleanupCache continuously. I'd like to consolidate multiple calls to cleanupCache so that we only periodically get ONE call to it. Here's what I've come up with to do this. Is it the best way?

        [NSObject cancelPreviousPerformRequestsWithTarget:self
                                                 selector:@selector(cleanupCache)
                                                   object:nil];
        [self performSelector:@selector(cleanupCache) withObject:nil afterDelay:0.5];

    Read the article

  • Why do multiple calls to xalloc result in delayed output?

    - by Me myself and I
    When I print the ids of a stream in separate expressions, they come out in order:

        std::stringstream ss;
        std::cout << ss.xalloc() << '\n';
        std::cout << ss.xalloc() << '\n';
        std::cout << ss.xalloc();

    Output:

        4
        5
        6

    But when I do it in one expression, it prints backwards. Why?

        std::stringstream ss;
        std::cout << ss.xalloc() << '\n'
                  << ss.xalloc() << '\n'
                  << ss.xalloc();

    Output:

        6
        5
        4

    I know the order of evaluation is unspecified, but then why does the following always result in the correct order?

        std::cout << 4 << 5 << 6;

    Can someone explain why xalloc behaves differently? Thanks.

    Read the article

  • How to prevent the command prompt from closing after execution?

    - by Sk8erPeter
    My problem is that in Windows, there are command-line windows that close immediately after execution. To solve this, I want the default behavior to be that the window is kept open. Normally, this can be achieved with three methods that come to my mind:

        1. Putting a pause line after batch programs, to prompt the user to press a key before exiting.
        2. Running these batch files or other command-line tools (even service starting/restarting, with net start xy or anything similar) from within cmd.exe (Start - Run - cmd.exe).
        3. Running these programs with cmd /k, like this: cmd /k myprogram.bat

    But there are other cases, in which the user:

        • Runs the program for the first time and doesn't know that the given program will run in the Command Prompt (Windows Command Processor), e.g. when running a shortcut from the Start menu (or from somewhere else), OR
        • Finds it a little uncomfortable to run cmd.exe all the time, and doesn't have the time or opportunity to rewrite the code of these commands everywhere to put a pause after them or otherwise avoid exiting.

    I've read an article about changing the default behavior of cmd.exe when opening it explicitly, by creating an AutoRun entry and manipulating its content in these locations:

        HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\Command Processor\AutoRun
        HKEY_CURRENT_USER\SOFTWARE\Microsoft\Command Processor\AutoRun

    (The AutoRun items are String values.) I put cmd /d /k in as its value to give it a try, but this didn't change the behavior of the cases mentioned above at all; it only changed the behavior of the command-line window when opening it explicitly (Start - Run - cmd.exe). So how does this work? Can you give me any ideas to solve this problem?
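
    For the shortcut case specifically, one workaround (a sketch; the path is illustrative) is to edit the shortcut's target so the batch file runs inside a persistent cmd instance:

        C:\Windows\System32\cmd.exe /k "C:\scripts\myprogram.bat"

    The /k switch tells cmd to execute the command and then keep the window open, whereas /c closes the window when the command finishes.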

    Read the article

  • Is it possible to set the path of the database for delayed_job in Rails?

    - by WitchOfCloud
    I am developing a mailing system with the delayed_job gem. When I run it in the development environment, it works well, but after deploying the application to the server, it does not act. This is my database.yml:

        development:
          adapter: sqlite3
          database: db/development.sqlite3
          pool: 5
          timeout: 5000

        test:
          adapter: sqlite3
          database: db/test.sqlite3
          pool: 5
          timeout: 5000

        production:
          adapter: sqlite3
          database: /var/www/service/shared/db/production.sqlite3
          pool: 5
          timeout: 5000

    I checked the queue (in /var/www/...) and the jobs are there. I also started delayed_job (rake jobs:work). So I think the problem is that delayed_job crawls db/development.sqlite3. How can I solve this problem?
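
    A hedged guess at the cause: the worker reads whatever database the Rails environment it boots under points at, and rake jobs:work defaults to development. Starting the worker with the environment set explicitly should make it poll the production database instead:

        RAILS_ENV=production rake jobs:work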

    Read the article

  • Why is execution-time method resolution faster than compile-time resolution?

    - by Felix
    At school, we learned about virtual functions in C++ and how they are resolved (or found, or matched; I don't know what the terminology is, as we're not studying in English) at execution time instead of compile time. The teacher also told us that compile-time resolution is much faster than execution-time resolution (and it would make sense for that to be so). However, a quick experiment would suggest otherwise. I've built this small program:

        #include <iostream>
        #include <limits.h>

        using namespace std;

        class A {
        public:
            void f() {
                // do nothing
            }
        };

        class B: public A {
        public:
            void f() {
                // do nothing
            }
        };

        int main() {
            unsigned int i;
            A *a = new B;
            for (i = 0; i < UINT_MAX; i++)
                a->f();
            return 0;
        }

    where I made A::f() once normal, once virtual. Here are my results:

        [felix@the-machine C]$ time ./normal
        real    0m25.834s
        user    0m25.742s
        sys     0m0.000s
        [felix@the-machine C]$ time ./virtual
        real    0m24.630s
        user    0m24.472s
        sys     0m0.003s
        [felix@the-machine C]$ time ./normal
        real    0m25.860s
        user    0m25.735s
        sys     0m0.007s
        [felix@the-machine C]$ time ./virtual
        real    0m24.514s
        user    0m24.475s
        sys     0m0.000s
        [felix@the-machine C]$ time ./normal
        real    0m26.022s
        user    0m25.795s
        sys     0m0.013s
        [felix@the-machine C]$ time ./virtual
        real    0m24.503s
        user    0m24.468s
        sys     0m0.000s

    There seems to be a steady ~1 second difference in favor of the virtual version. Why is this? Relevant or not: dual-core Pentium @ 2.80GHz, no extra applications running between the two tests, Arch Linux with gcc 4.5.0. Compiling normally, like:

        $ g++ test.cpp -o normal

    Also, -Wall doesn't spit out any warnings either.

    Read the article

  • Why would I be seeing execution timeouts when setting $_SESSION values?

    - by Kev
    I'm seeing the following errors in my PHP error logs:

        PHP Fatal error: Maximum execution time of 60 seconds exceeded in D:\sites\s105504\www\index.php on line 3
        PHP Fatal error: Maximum execution time of 60 seconds exceeded in D:\sites\s105504\www\search.php on line 4

    The lines in question are:

        index.php:

        01 <?php
        02 session_start();
        03 ob_start();
        04 error_reporting(E_All);
        05 $_SESSION['nav'] = "range"; // <-- Error generated here

        search.php:

        01 <?php
        02
        03 session_start();
        04 $_SESSION['nav'] = "range";
        05 $_SESSION['navselected'] = 21; // <-- Error generated here

    Would it really take 60+ seconds to assign a $_SESSION[] value? The platform is:

        Windows 2003 SP2 + IIS 6
        FastCGI for Windows 2003 (original RTM build)
        PHP 5.2.6, non-thread-safe build

    There aren't any issues with session data files failing to be cleaned up on the server as sessions expire; the oldest sess_XXXXXXXXXXXXXX file I'm seeing is around 2 hours old. There are no disk timeouts evident in the event logs, or other disk health issues that might suggest difficulty creating session data files. The site is also on a server that isn't under heavy load: it is busy but not being hammered, and it is very responsive. It's just that we get these errors, three or four in a row, every three or four hours. I should also add that I'm not the original developer of this code; it belongs to a customer whose developer has long since departed.
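
    One hedged explanation consistent with these symptoms: PHP's default file-based session handler takes an exclusive lock on the session file in session_start() and holds it until the script ends, so if another long-running request from the same visitor holds the lock, the next request blocks at its first session access until the 60-second limit kills it. A minimal sketch of the usual mitigation, releasing the lock as soon as the session writes are done:

        <?php
        session_start();
        $_SESSION['nav'] = "range";
        $_SESSION['navselected'] = 21;
        session_write_close(); // release the session file lock immediately
        // ...the rest of the request no longer blocks other requests...

    If the errors always come in small clusters, checking for a slow script that shares the same session (a long report, an export, a hung external call) would be the first place to look.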

    Read the article

  • Can Vagrant point to a directory of Puppet manifests for execution?

    - by SeligkeitIstInGott
    I am using Vagrant to jump-start some initial Puppet config, and I am confused about how to include/run multiple manifests (other than just site.pp) in the Puppet execution workflow without turning the extra manifests into modules and including them that way. In the Puppet manifests directory that I point Vagrant to (see below) I have two manifests that I want executed: site.pp and hierasetup.pp.

        config.vm.provision "puppet" do |puppet|
          puppet.manifests_path = "puppet_files/manifests"
          puppet.module_path    = "puppet_files/modules"
          puppet.manifest_file  = "site.pp"
          puppet.options        = "--verbose --debug"
        end

    Currently site.pp is the manifest that calls hierasetup.pp. My site.pp looks like this:

        File {
          owner => 'root',
          group => 'root',
          mode  => '0644',
        }

        import "hierasetup.pp"
        include jboss

    But I get this error about the deprecation of "import":

        Warning: The use of 'import' is deprecated at /tmp/vagrant-puppet-1/manifests/site.pp:33. See http://links.puppetlabs.com/puppet-import-deprecation
        (at grammar.ra:610:in `_reduce_190')

    According to the referenced URL, under "Things to try instead" it says: "To keep your node definitions in separate files, specify a directory as your main manifest". Further, the Puppet doc on main manifests says: "Recommended: If you're using the main manifest heavily instead of relying on an ENC, consider changing the manifest setting to $confdir/manifests. This lets you split up your top-level code into multiple files while avoiding the import keyword. It will also match the behavior of simple environments."

    So it appears that Puppet can reference an entire directory instead of just a specific manifest file, and I would expect Vagrant to make a provision for this, allowing me to drop the puppet.manifest_file = "site.pp" line and point to the parent directory instead, so that all the *.pp files there are executed. However, removing that line in Vagrant merely generates a complaint about an expected default.pp in its stead:

        puppet provisioner:
        * The configured Puppet manifest is missing. Please specify a path to an existing manifest:
          /some/path/puppet_files/manifests/default.pp

    So: firstly, do I understand the "new" (non-import) way of calling multiple manifests correctly, in that a directory is to be pointed to, in which all the *.pp files inside it will be executed? And secondly, has Vagrant "caught up" with this change to accommodate referencing directories, in line with Puppet's deprecation of "import"?

    Update: Thanks to Shane, the issue in #2 (Vagrant's code not being caught up to allow pointing to Puppet manifest directories) was reported on Vagrant's GitHub issue tracker and has since been patched: https://github.com/mitchellh/vagrant/issues/4169

    Read the article
