Search Results

Search found 471 results on 19 pages for 'delayed durability'.

Page 1/19 | 1 2 3 4 5 6 7 8 9 10 11 12  | Next Page >

  • Delayed Durability–I start to like it!

    - by Michael Zilberstein
    In my previous post about the subject I complained that, according to BOL, this feature is enabled for Hekaton only. Panagiotis Antonopoulos from Microsoft commented that BOL is actually wrong – delayed durability can be used with all sorts of transactions, not just In-Memory ones. There is a database-level setting for delayed durability: the default value is “Disabled”; the other two options are “Allowed” and “Forced”. We’ll switch between “Disabled” and “Forced” and measure the IO generated by a simple...(read more)
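
    A minimal sketch of the database-level switch the post measures, issued from Ruby via the tiny_tds gem (the server, credentials, and the TestDB database name are placeholder assumptions, not from the post):

        require 'tiny_tds'

        client = TinyTds::Client.new(:host => "sqlhost", :username => "sa", :password => "secret")

        # Database-level delayed durability: DISABLED (the default), ALLOWED or FORCED
        client.execute("ALTER DATABASE TestDB SET DELAYED_DURABILITY = FORCED").do

        # run the write-heavy benchmark here, then restore the default
        client.execute("ALTER DATABASE TestDB SET DELAYED_DURABILITY = DISABLED").do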

    Read the article

  • Delayed job problem in Rails.

    - by krunal shah
    My controller, data_files_controller.rb:

        def upload_balances
          DataFile.load_balances(params)
        end

    My model, data_file.rb:

        def self.load_balances(params)
          # Pull the file out of the http request, write it to file system
          name = params['Filename']
          directory = "public/uploads"
          errors_table_name = "snapshot_errors"
          upload_file = File.join(directory, name)
          File.open(upload_file, "wb") { |f| f.write(params['Filedata'].read) }
          # Remove the old data from the table
          Balance.destroy_all
          # ------ more code -----
        end

    It's working fine. Now I want to use delayed_job in my controller to call my model method, like this:

        def upload_balances
          DataFile.send_later(:load_balances, params)
        end

    Is it possible? What's the other way to do it? Does it create any problem? With send_later I get this error in the last_error column of the delayed_job table:

        uninitialized stream
        C:/cyncabc/app/models/data_file.rb:12:in `read'
        C:/cyncabc/app/models/data_file.rb:12:in `load_balances'
        C:/cyncabc/app/models/data_file.rb:12:in `open'
        C:/cyncabc/app/models/data_file.rb:12:in `load_balances'
        c:/ruby/lib/ruby/gems/1.8/gems/delayed_job-2.0.3/lib/delayed/performable_method.rb:35:in `send'
        c:/ruby/lib/ruby/gems/1.8/gems/delayed_job-2.0.3/lib/delayed/performable_method.rb:35:in `perform'
        c:/ruby/lib/ruby/gems/1.8/gems/delayed_job-2.0.3/lib/delayed/backend/base.rb:66:in `invoke_job'
        c:/ruby/lib/ruby/gems/1.8/gems/delayed_job-2.0.3/lib/delayed/worker.rb:120:in `run'
        c:/ruby/lib/ruby/1.8/timeout.rb:62:in `timeout'
        c:/ruby/lib/ruby/gems/1.8/gems/delayed_job-2.0.3/lib/delayed/worker.rb:120:in `run'
        c:/ruby/lib/ruby/gems/1.8/gems/activesupport-2.3.8/lib/active_support/core_ext/benchmark.rb:10:in `realtime'
        c:/ruby/lib/ruby/gems/1.8/gems/delayed_job-2.0.3/lib/delayed/worker.rb:119:in `run'
        c:/ruby/lib/ruby/gems/1.8/gems/delayed_job-2.0.3/lib/delayed/worker.rb:180:in `reserve_and_run_one_job'
        c:/ruby/lib/ruby/gems/1.8/gems/delayed_job-2.0.3/lib/delayed/worker.rb:104:in `work_off'
        c:/ruby/lib/ruby/gems/1.8/gems/delayed_job-2.0.3/lib/delayed/worker.rb:103:in `times'
        c:/ruby/lib/ruby/gems/1.8/gems/delayed_job-2.0.3/lib/delayed/worker.rb:103:in `work_off'
        c:/ruby/lib/ruby/gems/1.8/gems/delayed_job-2.0.3/lib/delayed/worker.rb:78:in `start'
        c:/ruby/lib/ruby/gems/1.8/gems/activesupport-2.3.8/lib/active_support/core_ext/benchmark.rb:10:in `realtime'
        c:/ruby/lib/ruby/gems/1.8/gems/delayed_job-2.0.3/lib/delayed/worker.rb:77:in `start'
        c:/ruby/lib/ruby/gems/1.8/gems/delayed_job-2.0.3/lib/delayed/worker.rb:74:in `loop'
        c:/ruby/lib/ruby/gems/1.8/gems/delayed_job-2.0.3/lib/delayed/worker.rb:74:in `start'
        c:/ruby/lib/ruby/gems/1.8/gems/delayed_job-2.0.3/lib/delayed/command.rb:93:in `run'
        c:/ruby/lib/ruby/gems/1.8/gems/delayed_job-2.0.3/lib/delayed/command.rb:72:in `run_process'
        c:/ruby/lib/ruby/gems/1.8/gems/daemons-1.0.10/lib/daemons/application.rb:215:in `call'
        c:/ruby/lib/ruby/gems/1.8/gems/daemons-1.0.10/lib/daemons/application.rb:215:in `start_proc'
        c:/ruby/lib/ruby/gems/1.8/gems/daemons-1.0.10/lib/daemons/application.rb:225:in `call'
        c:/ruby/lib/ruby/gems/1.8/gems/daemons-1.0.10/lib/daemons/application.rb:225:in `start_proc'
        c:/ruby/lib/ruby/gems/1.8/gems/daemons-1.0.10/lib/daemons/application.rb:255:in `start'
        c:/ruby/lib/ruby/gems/1.8/gems/daemons-1.0.10/lib/daemons/controller.rb:72:in `run'
        c:/ruby/lib/ruby/gems/1.8/gems/daemons-1.0.10/lib/daemons.rb:188:in `run_proc'
        c:/ruby/lib/ruby/gems/1.8/gems/daemons-1.0.10/lib/daemons/cmdline.rb:105:in `call'
        c:/ruby/lib/ruby/gems/1.8/gems/daemons-1.0.10/lib/daemons/cmdline.rb:105:in `catch_exceptions'
        c:/ruby/lib/ruby/gems/1.8/gems/daemons-1.0.10/lib/daemons.rb:187:in `run_proc'
        c:/ruby/lib/ruby/gems/1.8/gems/delayed_job-2.0.3/lib/delayed/command.rb:71:in `run_process'
        c:/ruby/lib/ruby/gems/1.8/gems/delayed_job-2.0.3/lib/delayed/command.rb:65:in `daemonize'
        c:/ruby/lib/ruby/gems/1.8/gems/delayed_job-2.0.3/lib/delayed/command.rb:63:in `times'
        c:/ruby/lib/ruby/gems/1.8/gems/delayed_job-2.0.3/lib/delayed/command.rb:63:in `daemonize'
        script/delayed_job:5

    Without send_later it works fine. Is there any solution?
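
    A hedged guess at what is going on: send_later serializes its arguments into the delayed_jobs table, and the Tempfile behind params['Filedata'] cannot survive that round trip, so by the time the worker calls read the stream is gone ("uninitialized stream"). One common workaround is to read the upload while the request is still alive and enqueue only plain, serializable data; a minimal sketch, where load_balances_from_file is a hypothetical helper, not from the question:

        # app/controllers/data_files_controller.rb
        def upload_balances
          upload_file = File.join("public/uploads", params['Filename'])
          # Persist the upload while the request (and its Tempfile) still exists
          File.open(upload_file, "wb") { |f| f.write(params['Filedata'].read) }
          # Enqueue only a serializable file path, not the IO object
          DataFile.send_later(:load_balances_from_file, upload_file)
        end

        # app/models/data_file.rb
        def self.load_balances_from_file(upload_file)
          Balance.destroy_all
          # parse upload_file and load the balances, as in the original model
        end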

    Read the article

  • Durability of Websockets Server

    - by smitchell360
    I am starting to experiment with websockets. Does anyone know of a websockets server (open source or paid) that provides a durable store of the websocket "channel"? All of the examples that I have found do not address durability -- if a websockets server goes down, all "channel" data is lost. Services such as Pusher do not really discuss whether they address the durability issue (and I have not received a response from tech support yet). Happy to roll my own, but I would rather not reinvent the wheel. EDIT: I'm not looking for websockets 101 information. That is readily available and understood. I'm looking for a server (open source or paid) that supports websockets and has a durable store for the websocket data, so that in the event that a server fails, a new server can take over where the original one left off. Two main purposes:

    1. Support the failover scenarios contemplated by the websockets Network Working Group, http://tools.ietf.org/html/draft-ibc-websocket-dns-srv-02#section-5.1 (most importantly, so that missed messages are sent when a client connects to a failover server).
    2. Support scenarios where new subscribers must receive all past messages that were published.

    Of course this can be handled at the application layer... but that is not what I am looking for.
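
    The durable channel the question describes is simple enough to sketch in a few lines of Ruby: persist every published message under an increasing sequence number, and on (re)connect replay everything after the last id the client acknowledged. This illustrates the requirement rather than any particular server's API; all names are hypothetical, and the Array would be a Redis list or SQL table in practice so a failover server could read it:

        class DurableChannel
          def initialize
            @messages = []  # swap for Redis or SQL so another server can take over
            @next_id  = 0
          end

          def publish(payload)
            @next_id += 1
            @messages << { :id => @next_id, :payload => payload }
            @next_id
          end

          # Called when a client connects (to this server or a failover server)
          # with the last message id it saw; returns everything it missed.
          def replay_since(last_seen_id)
            @messages.select { |m| m[:id] > last_seen_id }
          end
        end

        channel = DurableChannel.new
        channel.publish("tick 1")
        channel.publish("tick 2")
        channel.replay_since(1)  # => [{ :id => 2, :payload => "tick 2" }]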

    Read the article

  • SQL Server 2014 – delayed transaction durability

    - by Michael Zilberstein
    As I’m downloading SQL Server 2014 CTP2 at this very moment, I’ve noticed a fascinating new feature that hadn’t been announced in CTP1: delayed transaction durability. It means that if your system is heavy on writes and, on the other hand, you can tolerate data loss on some rare occasions, you can consider declaring a transaction with DELAYED_DURABILITY = ON. In this case the transaction is committed when the log is written to a buffer in memory – not to disk as usual. This way transactions can become...(read more)
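
    A minimal sketch of the transaction-level syntax the post describes, sent from Ruby through the tiny_tds gem. The connection details and the dbo.EventLog table are assumptions, and the database must also have its delayed durability option set to Allowed or Forced for the commit hint to take effect:

        require 'tiny_tds'

        client = TinyTds::Client.new(:host => "sqlserver2014", :username => "sa", :password => "secret")

        # The commit returns once the log record is in the in-memory log buffer,
        # accepting a small window of possible loss in exchange for throughput.
        client.execute(<<-SQL).do
          BEGIN TRANSACTION;
          INSERT INTO dbo.EventLog (Message) VALUES (N'fire and forget');
          COMMIT TRANSACTION WITH (DELAYED_DURABILITY = ON);
        SQL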

    Read the article

  • Use Delayed::Job to manage multiple job queues

    - by Alex
    I want to use Delayed::Job (or perhaps a job queue more appropriate for my problem) to dispatch jobs to multiple background daemons. I have several background daemons that carry out different responsibilities, and each one is interested in different jobs in the queue from the Rails app. Is this possible using Delayed::Job, or is there a different job queue that better fits this task?
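
    Later delayed_job releases (3.x and up) added named queues, which match this setup directly; a minimal sketch, with the class and queue names invented for illustration:

        class ReportGenerator
          def generate(report_id)
            # expensive report work would go here
          end
        end

        class NewsletterMailer
          def deliver_digest
            # bulk mail work would go here
          end
        end

        # Tag work for different daemons with different queues
        ReportGenerator.new.delay(:queue => "reports").generate(42)
        NewsletterMailer.new.delay(:queue => "mail").deliver_digest

        # Each daemon then works only its own queue:
        #   QUEUES=reports rake jobs:work
        #   QUEUES=mail rake jobs:work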

    Read the article

  • Rails Delayed Job & Library Class

    - by Lee
    Hey, we have a library class (lib/Mixpanel) that calls delayed_job as follows:

        class Mixpanel
          attr_accessor :options
          attr_accessor :event

          def track!()
            # ..
            dj = send_later :access_api # also tried with self.send_later
            # ..
          end

          def access_api
            # ..
          end
        end

    The problem is that when we run rake jobs:work we get the following error:

        undefined method `access_api' for #

    Any idea why?
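
    One hedged possibility, an assumption rather than something from the thread: delayed_job 2.x stores the object as YAML, and if the worker process has not loaded the lib/ class, the deserialized object is a placeholder that has no access_api method. Forcing the file to load when Rails boots is a common workaround; the initializer path and filename are guesses:

        # config/initializers/preload_mixpanel.rb
        require File.join(RAILS_ROOT, "lib", "mixpanel")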

    Read the article

  • PLINQ delayed execution

    - by tbischel
    I'm trying to understand how parallelism might work using PLINQ, given delayed execution. Here is a simple example:

        string[] words = { "believe", "receipt", "relief", "field" };
        bool result = words.AsParallel().Any(w => w.Contains("ei"));

    With LINQ, I would expect the execution to reach the "receipt" value and return true, without executing the query for the rest of the values. If we do this in parallel, the evaluation of "relief" may have begun before the result of "receipt" has returned. But once the query knows that "receipt" will cause a true result, will the other threads yield immediately? In my case this is important, because the "any" test may be very expensive and I would want to free up the processors for execution of other tasks.

    Read the article

  • Delayed::Job is not finding records and failing

    - by Trip
    In my app, delayed_job isn't running automatically on my server anymore. It used to. When I manually ssh in and run rake jobs:work, I get this:

        * Starting job worker host:ip-(censored) pid:21458
        * [Worker(host:ip-(censored) pid:21458)] acquired lock on PhotoJob
        * [JOB] host:ip-(censored) pid:21458 failed with ActiveRecord::RecordNotFound: Couldn't find Photo with ID=9237 - 4 failed attempts

    This repeats roughly 20 times over what I think is several jobs. Then I get a few of these:

        [Worker(host:ip-(censored) pid:21458)] failed to acquire exclusive lock for PhotoJob

    And then finally one of these:

        12 jobs processed at 73.6807 j/s, 12 failed ...

    Any ideas what I should be mulling over? Thanks so much!
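
    The log suggests the queue still holds jobs for Photo records that have since been deleted. One hedged way to clear those out from script/console, assuming the standard delayed_job ActiveRecord backend (the failure text lives in the last_error column):

        # Purge queued jobs whose recorded failure was a missing record
        Delayed::Job.all.each do |job|
          job.destroy if job.last_error =~ /RecordNotFound/
        end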

    Read the article

  • HELP!! Delayed-job: Rake aborted! Can't modify frozen hash

    - by pmneve
    Too bad even the trace doesn't say which hash is involved. Sorry this post is long: I am trying to provide enough context to be meaningful. It occurs intermittently when rake jobs:work is pulling a command out of delayed_jobs while my status observer is in the process of parsing a log file for detailed results of the previous delayed_job denizen. I have an observer class (in RAILS_ROOT/lib) which listens for the events, makes a copy of them and calls the owner class (in app/models), which then calls on the log parser (also in /lib) to do the actual work. (Should both of those classes, the observer and the parser, be in app/models?) I am due to deliver this application in a few days and this is killing it (and me). I am using DirectoryWatcher to look for flag files that indicate the start and finish of the delayed_jobs. It is started at the end of environment.rb like this:

        require 'directory_watcher'

        $scriptStatusObserver = ScriptStatusObserver.new
        dirToWatch = "#{RAILS_ROOT}/tmp/flags"
        $directoryWatcher = DirectoryWatcher.new(dirToWatch)
        $directoryWatcher.glob = "*.flg"
        $directoryWatcher.interval = 15
        $directoryWatcher.add_observer($scriptStatusObserver)
        $directoryWatcher.persist = "#{RAILS_ROOT}/tmp/flags/dw_state.yml"
        $directoryWatcher.start
        at_exit { $directoryWatcher.stop }

    This code is outside of the run method (by the way, is that the best place, or is inside run better?). Here is the observer:

        require 'script_run'

        class ScriptStatusObserver
          def initialize
            @rcvdEvents = []
          end

          def update(*events)
            begin
              puts "#{__LINE__.to_s}: ScriptStatusObserver events: \n" + events.to_yaml
              cnt = 0
              events.each do |e|
                if e.to_s.match(/^\s*added/)
                  cnt = cnt + 1
                  @rcvdEvents << e
                end
              end
              ScriptRun.new.catch_up(@rcvdEvents) if cnt > 0
              @rcvdEvents.clear
            rescue
              puts $!
            end
          end
        end

    Here is ScriptRun (it attaches to an associative table built with has_many :through):

        require 'observer'

        class ScriptRun < ActiveRecord::Base
          set_table_name "scripts_runs"
          belongs_to :script
          belongs_to :run

          def parse(result)
            parser = LogParser.new
            parser.parse(result)
          end

          def catch_up(events)
            events.each do |e|
              typ = e.type
              path = e.path
              thisMatch = path.match(/flags\/(\d+)_(\d+)_([\d\.]+)_(\w+)\.flg/)
              run_id = thisMatch[1]
              script_id = thisMatch[2]
              ts = thisMatch[3]
              status = thisMatch[4]
              if e.to_s.match(/^\s*added/)
                status_update(script_id, run_id, status, ts, path)
              end
            end
          end

          def status_update(script_id, run_id, status, ts, path)
            scriptrun = ScriptRun.find(:first, :conditions => ["run_id = ? and script_id = ?", run_id.to_i, script_id.to_i])
            if scriptrun.kind_of?(ScriptRun)
              currStatus = scriptrun.status
              if not currStatus == 'completed'
                scriptrun.update_attribute(:status, status)
                if status == 'parse'
                  flag = File.new(path)
                  logSpec = flag.gets
                  flag.close
                  logName = File.basename(logSpec)
                  logPath = logSpec.sub(logName, '')
                  logName =~ /^(([\w_]+)_([\w]+)_(\d+))\.log$/
                  name = $1
                  basename = $2
                  runenv = $3
                  tsOrPid = $4
                  result = Result.new
                  result.log_path = logPath
                  result.basename = basename
                  result.name = name
                  result.script_id = script_id.to_i
                  result.run_id = run_id.to_i
                  if runenv == 'sit'
                    runenv = 'SIT3348'
                  end
                  result.application_environment_id = ApplicationEnvironment.find(:first, :conditions => ["nodename = ?", runenv]).id
                  parse(result)
                  if run_completed?(run_id)
                    myRun = Run.find(run_id.to_i)
                    if myRun.kind_of?(Run)
                      myRun.update_attribute(:completed, Time.now.to_f)
                    end
                  end
                end
              end
            else
              puts "#{__LINE__.to_s}: ScriptRun.status_update: ScriptRun not found for run #{run_id} script #{script_id} ts #{ts.to_s}"
            end
            File.delete(path)
          end

          def run_completed?(id)
            scriptruns = ScriptRun.find(:all, :conditions => ["run_id = ?", id.to_i])
            scriptruns.each do |sr|
              if not sr.status == 'completed'
                return false
              end
            end
            return true
          end
        end

    LogParser is too long even for this post, but it reads the script log, pulls detailed information (counts and timings) out of the log, and writes it to a details table. It also tallies and calculates averages and rolls those up into summary tables for quicker access from the web pages. Here is the error trace (don't ask why everything is under my Windows profile; it's a long story):

        Scanner running 1270239731.43
        directory_watcher.notify_observers: #, #] update:[#, /pneve/workspace/waftt-0.29/tmp/flags/100039_18_1270239550.108_parse.flg"]
        rake aborted!
        can't modify frozen hash
        C:/Documents and Settings/pneve/workspace/waftt-0.29/vendor/rails/activerecord/lib/active_record/attribute_methods.rb:313:in `[]='
        C:/Documents and Settings/pneve/workspace/waftt-0.29/vendor/rails/activerecord/lib/active_record/attribute_methods.rb:313:in `write_attribute_without_dirty'
        C:/Documents and Settings/pneve/workspace/waftt-0.29/vendor/rails/activerecord/lib/active_record/dirty.rb:139:in `write_attribute'
        C:/Documents and Settings/pneve/workspace/waftt-0.29/vendor/rails/activerecord/lib/active_record/attribute_methods.rb:211:in `last_error='
        C:/Documents and Settings/pneve/workspace/waftt-0.29/vendor/plugins/delayed_job/lib/delayed/worker.rb:141:in `handle_failed_job'
        C:/Documents and Settings/pneve/workspace/waftt-0.29/vendor/plugins/delayed_job/lib/delayed/worker.rb:115:in `run'
        C:/Documents and Settings/pneve/workspace/waftt-0.29/vendor/plugins/delayed_job/lib/delayed/worker.rb:162:in `reserve_and_run_one_job'
        C:/Documents and Settings/pneve/workspace/waftt-0.29/vendor/plugins/delayed_job/lib/delayed/worker.rb:92:in `work_off'
        C:/Documents and Settings/pneve/workspace/waftt-0.29/vendor/plugins/delayed_job/lib/delayed/worker.rb:91:in `times'
        C:/Documents and Settings/pneve/workspace/waftt-0.29/vendor/plugins/delayed_job/lib/delayed/worker.rb:91:in `work_off'
        C:/Documents and Settings/pneve/workspace/waftt-0.29/vendor/plugins/delayed_job/lib/delayed/worker.rb:66:in `start'
        C:/Documents and Settings/pneve/workspace/waftt-0.29/vendor/rails/activesupport/lib/active_support/core_ext/benchmark.rb:10:in `realtime'
        C:/Documents and Settings/pneve/workspace/waftt-0.29/vendor/plugins/delayed_job/lib/delayed/worker.rb:65:in `start'
        C:/Documents and Settings/pneve/workspace/waftt-0.29/vendor/plugins/delayed_job/lib/delayed/worker.rb:62:in `loop'
        C:/Documents and Settings/pneve/workspace/waftt-0.29/vendor/plugins/delayed_job/lib/delayed/worker.rb:62:in `start'
        C:/Documents and Settings/pneve/workspace/waftt-0.29/vendor/plugins/delayed_job/lib/delayed/tasks.rb:13
        c:/Documents and Settings/pneve/ruby/lib/ruby/gems/1.8/gems/rake-0.8.7/lib/rake.rb:636:in `call'
        c:/Documents and Settings/pneve/ruby/lib/ruby/gems/1.8/gems/rake-0.8.7/lib/rake.rb:636:in `execute'
        c:/Documents and Settings/pneve/ruby/lib/ruby/gems/1.8/gems/rake-0.8.7/lib/rake.rb:631:in `each'
        c:/Documents and Settings/pneve/ruby/lib/ruby/gems/1.8/gems/rake-0.8.7/lib/rake.rb:631:in `execute'
        c:/Documents and Settings/pneve/ruby/lib/ruby/gems/1.8/gems/rake-0.8.7/lib/rake.rb:597:in `invoke_with_call_chain'
        c:/Documents and Settings/pneve/ruby/lib/ruby/1.8/monitor.rb:242:in `synchronize'
        c:/Documents and Settings/pneve/ruby/lib/ruby/gems/1.8/gems/rake-0.8.7/lib/rake.rb:590:in `invoke_with_call_chain'
        c:/Documents and Settings/pneve/ruby/lib/ruby/gems/1.8/gems/rake-0.8.7/lib/rake.rb:583:in `invoke'
        c:/Documents and Settings/pneve/ruby/lib/ruby/gems/1.8/gems/rake-0.8.7/lib/rake.rb:2051:in `invoke_task'
        c:/Documents and Settings/pneve/ruby/lib/ruby/gems/1.8/gems/rake-0.8.7/lib/rake.rb:2029:in `top_level'
        c:/Documents and Settings/pneve/ruby/lib/ruby/gems/1.8/gems/rake-0.8.7/lib/rake.rb:2029:in `each'
        c:/Documents and Settings/pneve/ruby/lib/ruby/gems/1.8/gems/rake-0.8.7/lib/rake.rb:2029:in `top_level'
        c:/Documents and Settings/pneve/ruby/lib/ruby/gems/1.8/gems/rake-0.8.7/lib/rake.rb:2068:in `standard_exception_handling'
        c:/Documents and Settings/pneve/ruby/lib/ruby/gems/1.8/gems/rake-0.8.7/lib/rake.rb:2023:in `top_level'
        c:/Documents and Settings/pneve/ruby/lib/ruby/gems/1.8/gems/rake-0.8.7/lib/rake.rb:2001:in `run'
        c:/Documents and Settings/pneve/ruby/lib/ruby/gems/1.8/gems/rake-0.8.7/lib/rake.rb:2068:in `standard_exception_handling'
        c:/Documents and Settings/pneve/ruby/lib/ruby/gems/1.8/gems/rake-0.8.7/lib/rake.rb:1998:in `run'
        c:/Documents and Settings/pneve/ruby/lib/ruby/gems/1.8/gems/rake-0.8.7/bin/rake:31
        c:/Documents and Settings/pneve/ruby/bin/rake:16:in `load'
        c:/Documents and Settings/pneve/ruby/bin/rake:16

    Read the article

  • An affordable way to use multiple Delayed::Job queues

    - by NudeCanalTroll
    I have a Ruby on Rails app that needs to process many background jobs simultaneously: anywhere from 5-6 at a time up to 50-60 at a time, depending on the time of day. Right now my app is running on Heroku, which charges $0.05/hour per worker, regardless of how much CPU or memory the worker is using. This is costing me a boatload each month... up to $1200/mo. Are there any hosts that will allow me to do what I'm doing for significantly cheaper?

    Read the article

  • Emails sent using Flex app are delayed

    - by user363825
    I'm currently building an application in Flex that uses SMTP Mailer to automatically send out emails to the user when a particular condition is satisfied. The application checks this condition every 30 seconds; the condition is satisfied based on new records being returned from a database table. The problem is as follows: the first time the condition is satisfied, the email is delivered to the user with no issues. The second time, the email is not delivered. In the SMTP logs, the delivery attempt appears to get hung up on the following line:

        354 Start mail input; end with <CRLF>.<CRLF>

    No error codes are present in the SMTP logs, but I do trace the following event from the SMTP Mailer class:

        [Event type="mailError" bubbles=false cancelable=false eventPhase=2]

    When the condition is satisfied a third time, the email that was not delivered the previous time is now delivered, along with the email for this instance. This pattern then repeats itself: the next email is not sent, followed by two emails being sent simultaneously when the condition is met again. The SMTP server is Windows 2003, on an internal network. The email is sent to an Outlook account hosted on an Exchange server that is also on this internal network. Here is the ActionScript code that creates the SMTPMailer object:

        public var testMail:SMTPMailer = null;

        public function alertNotify() {
          Security.loadPolicyFile("crossdomain.xml");
          this.testMail = new SMTPMailer("myserver.ec.local", 25);
          this.testMail.addEventListener(SMTPEvent.MAIL_SENT, onEmailEvent);
          this.testMail.addEventListener(SMTPEvent.MAIL_ERROR, onEmailError);
          this.testMail.addEventListener(SMTPEvent.DISCONNECTED, onEmailConn);
          this.testMail.addEventListener(SecurityErrorEvent.SECURITY_ERROR, onEmailError);
        }

    Here is the code that creates the email body and calls the method that sends the email:

        public function alertUser(emailAC:ArrayCollection):void {
          trace("In alertUser() before send, testMail.connected = " + testMail.connected.toString());
          var testStr:String = " Key Location Event Type Comment Update Time ";
          for each (var event:rEntity in emailAC) {
            testStr = testStr + "" + event.key.toString() + "" + event.xml.address.toString() + " "
              + [email protected]() + "" + [email protected]() + "" + [email protected]()
              + "" + event.xml.attribute("update-time").toXMLString() + "";
          }
          testStr = testStr + "";
          testMail.flush();
          testMail.sendHTMLMail("[email protected]", "[email protected]", "Event Notification", testStr);
        }

    I'm really not sure where the email that gets hung up is stored until it is finally sent... Any suggestions as to how to begin to remedy this issue would be much appreciated.

    Read the article

  • Tip 14 : Solve SmtpClient issues of delayed email and high CPU usage

    - by StanleyGu
    1. It is quite straightforward to use the SmtpClient class to send out an email.
    2. However, when you step through the above code executing smtpClient.Send(), you will notice a delay of about 2 minutes before the email is received.
    3. My first try at solving the delayed-email issue was to set MaxIdleTime=1.
    4. The first try solved the delayed email very well but introduced another issue: high CPU usage. The CPU usage of my deployed Windows service was consistently at 50%, much higher than the expected near-zero CPU usage.
    5. The second try was to set MaxIdleTime=2, which solves both issues.

    Read the article

  • INSERT DELAYED on locked tables blocks PHP processes to continue

    - by sw0x2A
    Our webservers write some tracking information into a MySQL database (using INSERT DELAYED into a MyISAM table). When a huge SELECT query is executed on this table, or when it is locked for another reason, the webserver processes (doing INSERT DELAYED) wait for the database, and in some cases the MaxServer limit is reached in Apache, so it stops serving requests. We use INSERT DELAYED because:

        The DELAYED option for the INSERT statement is a MySQL extension to standard SQL that is very useful if you have clients that cannot or need not wait for the INSERT to complete. This is a common situation when you use MySQL for logging and you also periodically run SELECT and UPDATE statements that take a long time to complete.

    (Quote from the MySQL documentation.) I am wondering why the Apache processes are waiting for the INSERT DELAYED to finish, and what I can do to just send the data and forget about it. (Since this is logging data, I do not care if we lose some entries.) Even when the table is locked, the PHP script should just go on and not wait for an answer from MySQL. (We do not want to set up master-slave replication for this table, but we are thinking about moving this data to a NoSQL database. For now, though, I would like to know why INSERT DELAYED is not working as expected.)

    Read the article

  • Mouse click events are delayed when moving the cursor in xubuntu 12.04

    - by bobbaluba
    I am trying out live Xubuntu on my Samsung N150 (a netbook, similar to the Eee PC). So far everything seems perfect, except for this one thing. When I click the mouse while moving the cursor, the click event is delayed. This is extremely annoying when I try to move windows, because I end up clicking the window behind the window I am trying to move, thereby changing focus. This goes for all applications; selecting text, for example, is difficult. There is no lag in the cursor movement, just the click event. The click event is registered with the new mouse position, not the position when the mouse was clicked. I tried searching for the problem, but all the other cases were either somewhat different or had no solution. EDIT: After testing some more, here is some more information. The click doesn't occur until I stop moving the cursor. If I move the cursor smoothly around, the click never gets through. I have Ubuntu Netbook Remix 10.04 installed and have no problems with the mouse there. EDIT2: Once I connected a USB mouse, the problem disappeared. Dragging now works perfectly with both the trackpad and the USB mouse. I will update once I have installed Xubuntu and figured out whether this is a persistent problem. EDIT3: I have installed, and the problem is still there. It disappears when I have a second mouse connected, however, and after I resume from suspend. My solution for now is just to keep the dongle connected all the time.

    Read the article

  • Resulting .exe from PyInstaller with wxPython crashing

    - by Helgi Hrafn Gunnarsson
    I'm trying to compile a very simple wxPython script into an executable using PyInstaller on Windows Vista. The Python script is nothing but a Hello World in wxPython. I'm trying to get that up and running as a Windows executable before I add any of the features the program needs to have, but I'm already stuck. I've jumped through some hoops in regards to MSVCR90.DLL, MSVCP90.DLL and MSVCPM90.DLL, which I ended up copying from my Visual Studio installation (C:\Program Files\Microsoft Visual Studio 9.0\VC\redist\x86\Microsoft.VC90.CRT). As per the instructions for PyInstaller, I run:

        Command: Configure.py
        Output:
            I: computing EXE_dependencies
            I: Finding TCL/TK...
            I: could not find TCL/TK
            I: testing for Zlib...
            I: ... Zlib available
            I: Testing for ability to set icons, version resources...
            I: ... resource update available
            I: Testing for Unicode support...
            I: ... Unicode available
            I: testing for UPX...
            I: ...UPX available
            I: computing PYZ dependencies...

    So far, so good. I continue.

        Command: Makespec.py -F guitest.py
        Output:
            wrote C:\Code\PromoUSB\guitest.spec
            now run Build.py to build the executable

    Then there's the final command.

        Command: Build.py guitest.spec
        Output:
            checking Analysis
            building Analysis because out0.toc non existent
            running Analysis out0.toc
            Analyzing: C:\Python26\pyinstaller-1.3\support\_mountzlib.py
            Analyzing: C:\Python26\pyinstaller-1.3\support\useUnicode.py
            Analyzing: guitest.py
            Warnings written to C:\Code\PromoUSB\warnguitest.txt
            checking PYZ
            rebuilding out1.toc because out1.pyz is missing
            building PYZ out1.toc
            checking PKG
            rebuilding out3.toc because out3.pkg is missing
            building PKG out3.pkg
            checking ELFEXE
            rebuilding out2.toc because guitest.exe missing
            building ELFEXE out2.toc

    I get the resulting guitest.exe file, but upon execution it "simply crashes"... and there is no debug info; it's just one of those standard Windows Vista crashes. The script itself, guitest.py, runs just fine by itself. It only crashes as an executable, and I'm completely lost. I don't even know what to look for, since nothing I've tried has returned any relevant results. Another file, warnguitest.txt, is generated as a result of the compilation process. Here are its contents:

        W: no module named posix (conditional import by os)
        W: no module named optik.__all__ (top-level import by optparse)
        W: no module named readline (delayed, conditional import by cmd)
        W: no module named readline (delayed import by pdb)
        W: no module named pwd (delayed, conditional import by posixpath)
        W: no module named org (top-level import by pickle)
        W: no module named posix (delayed, conditional import by iu)
        W: no module named fcntl (conditional import by subprocess)
        W: no module named org (top-level import by copy)
        W: no module named _emx_link (conditional import by os)
        W: no module named optik.__version__ (top-level import by optparse)
        W: no module named fcntl (top-level import by tempfile)
        W: __all__ is built strangely at line 0 - collections (C:\Python26\lib\collections.pyc)
        W: delayed exec statement detected at line 0 - collections (C:\Python26\lib\collections.pyc)
        W: delayed conditional __import__ hack detected at line 0 - doctest (C:\Python26\lib\doctest.pyc)
        W: delayed exec statement detected at line 0 - doctest (C:\Python26\lib\doctest.pyc)
        W: delayed conditional __import__ hack detected at line 0 - doctest (C:\Python26\lib\doctest.pyc)
        W: delayed __import__ hack detected at line 0 - encodings (C:\Python26\lib\encodings\__init__.pyc)
        W: __all__ is built strangely at line 0 - optparse (C:\Python26\pyinstaller-1.3\optparse.pyc)
        W: __all__ is built strangely at line 0 - dis (C:\Python26\lib\dis.pyc)
        W: delayed eval hack detected at line 0 - os (C:\Python26\lib\os.pyc)
        W: __all__ is built strangely at line 0 - __future__ (C:\Python26\lib\__future__.pyc)
        W: delayed conditional __import__ hack detected at line 0 - unittest (C:\Python26\lib\unittest.pyc)
        W: delayed conditional __import__ hack detected at line 0 - unittest (C:\Python26\lib\unittest.pyc)
        W: __all__ is built strangely at line 0 - tokenize (C:\Python26\lib\tokenize.pyc)
        W: __all__ is built strangely at line 0 - wx (C:\Python26\lib\site-packages\wx-2.8-msw-unicode\wx\__init__.pyc)
        W: __all__ is built strangely at line 0 - wx (C:\Python26\lib\site-packages\wx-2.8-msw-unicode\wx\__init__.pyc)
        W: delayed exec statement detected at line 0 - bdb (C:\Python26\lib\bdb.pyc)
        W: delayed eval hack detected at line 0 - bdb (C:\Python26\lib\bdb.pyc)
        W: delayed eval hack detected at line 0 - bdb (C:\Python26\lib\bdb.pyc)
        W: delayed __import__ hack detected at line 0 - pickle (C:\Python26\lib\pickle.pyc)
        W: delayed __import__ hack detected at line 0 - pickle (C:\Python26\lib\pickle.pyc)
        W: delayed conditional exec statement detected at line 0 - iu (C:\Python26\pyinstaller-1.3\iu.pyc)
        W: delayed conditional exec statement detected at line 0 - iu (C:\Python26\pyinstaller-1.3\iu.pyc)
        W: delayed eval hack detected at line 0 - gettext (C:\Python26\lib\gettext.pyc)
        W: delayed __import__ hack detected at line 0 - optik.option_parser (C:\Python26\pyinstaller-1.3\optik\option_parser.pyc)
        W: delayed conditional eval hack detected at line 0 - warnings (C:\Python26\lib\warnings.pyc)
        W: delayed conditional __import__ hack detected at line 0 - warnings (C:\Python26\lib\warnings.pyc)
        W: __all__ is built strangely at line 0 - optik (C:\Python26\pyinstaller-1.3\optik\__init__.pyc)
        W: delayed exec statement detected at line 0 - pdb (C:\Python26\lib\pdb.pyc)
        W: delayed conditional eval hack detected at line 0 - pdb (C:\Python26\lib\pdb.pyc)
        W: delayed eval hack detected at line 0 - pdb (C:\Python26\lib\pdb.pyc)
        W: delayed conditional eval hack detected at line 0 - pdb (C:\Python26\lib\pdb.pyc)
        W: delayed eval hack detected at line 0 - pdb (C:\Python26\lib\pdb.pyc)

    I don't know what to make of any of that. Again, my searches have been fruitless.

    Read the article

  • US GAAP and IFRS Convergence May Be Delayed Even More

    - by Theresa Hickman
    Yesterday, on March 10, the Financial Accounting Standards Board (FASB) met to discuss the changes in financial statement presentation. Over the last six months, the FASB and IASB have been working feverishly to converge US GAAP and IFRS standards to meet the 2011 deadline. In March alone, the standards-setters met eight times. Many people fear that this accelerated pace is compromising the quality of the end product and that maybe they should slow down and do their due diligence. According to WG&L Accounting & Compliance Alert Checkpoint 3/10/2010, (which requires a subscription to view the full article) "Some statement preparers and investors who advise the FASB believe that the process would be better served if it was slowed down so that more attention could be paid to quality." "Should 2011 be looked at as a line in the sand?" asked Joan Amble, executive vice president and corporate comptroller for American Express Co. "We don't think that due process should be compromised for the due date," concurred Lewis Dulitz, vice president of accounting policies and research for medical products supplier Covidien plc. I personally have mixed emotions about this. On one hand, I have been growing impatient with how slow the US has jumped on the IFRS band wagon. On the other hand, being the conservative that I am and knowing this convergence will be costly and disruptive to businesses, I would prefer to be safe than sorry and get it right the first time.

    Read the article

  • How do I cancel a time-delayed screenshot?

    - by coversnail
    I'm using the default screenshot application that comes with Ubuntu, gnome-screenshot. When I was using it earlier to take screenshots of the lock screen, I set a long time delay but forgot to change it back after I'd finished. When I next took a timed screenshot, I had to wait a long time for it to fire because the delay was still set so long. Clicking the icon to relaunch the screenshot application has no effect while the timer is in effect. I imagine there is probably a simple terminal command to shut down an application, but I don't know it! Is there a way to do this?

    Read the article

  • Reduce durability in MySQL for performance

    - by Paul Prescod
    My site occasionally has fairly predictable bursts of traffic that increase the throughput to 100 times more than normal. For example, we are going to be featured on a television show, and I expect that in the hour after the show I'll get more than 100 times my normal traffic. My understanding is that MySQL (InnoDB) generally keeps my data in a bunch of different places:

    - RAM buffers
    - the commit log
    - the binary log
    - the actual tables
    - all of the above places on my DB slave

    This is too much "durability" given that I'm on an EC2 node and most of the writes go across the same network pipe (file systems are network attached). Plus, the drives are just slow. The data is not high value, and I'd rather take a small chance of a few minutes of data loss than a high probability of an outage when the crowd arrives. During these traffic bursts I would like to do all of that I/O only if I can afford it. I'd like to keep as much in RAM as possible (I have a fair chunk of RAM compared to the data size that would be touched over an hour). If buffers get scarce, or the I/O channel is not too overloaded, then sure, I'd like things to go to the commit log or the binary log to be sent to the slave. If, and only if, the I/O channel is not overloaded, I'd like to write back to the actual tables. In other words, I'd like MySQL/InnoDB to use a "write back" cache algorithm rather than a "write through" cache algorithm. Can I convince it to do that? If this is not possible, I am interested in general MySQL write-performance optimization tips. Most of the docs are about optimizing read performance, but when I get a crowd of users I am creating accounts for all of them, so mine is a write-heavy workload.
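
    The closest standard knobs for this write-back behaviour are InnoDB's log-flush setting and the binlog sync setting; both are real MySQL system variables, though whether they relax durability far enough for this workload is a judgment call. A hedged sketch, issued from Ruby via the mysql2 gem (connection details are placeholders):

        require 'mysql2'

        client = Mysql2::Client.new(:host => "localhost", :username => "root", :password => "secret")

        # Write the InnoDB log at each commit but fsync it only about once per
        # second, so a crash can lose at most roughly a second of transactions.
        client.query("SET GLOBAL innodb_flush_log_at_trx_commit = 2")

        # Let the OS decide when to flush the binary log, instead of MySQL
        # syncing it per transaction group.
        client.query("SET GLOBAL sync_binlog = 0")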

    Read the article

  • Start or ensure that Delayed Job runs when an application/server restarts.

    - by btelles
    Hi there. We have to use delayed_job (or some other background-job processor) to run jobs in the background, but we're not allowed to change the boot scripts/boot levels on the server. This means that the daemon is not guaranteed to remain available if the provider restarts the server (since the daemon would have been started by a Capistrano recipe that is only run once per deployment). Currently, the best way I can think of to ensure the delayed_job daemon is always running is to add an initializer to our Rails application that checks whether the daemon is running. If it's not running, the initializer starts the daemon; otherwise, it just leaves it be. The question, therefore, is how do we detect from inside a script that the delayed_job daemon is running? (We should be able to start up a daemon fairly easily, but I don't know how to detect whether one is already active.) Anyone have any ideas? Regards, Bernie. Based on the answer below, this is what I came up with. Just put it in config/initializers and you're all set:

        # config/initializers/delayed_job.rb
        DELAYED_JOB_PID_PATH = "#{Rails.root}/tmp/pids/delayed_job.pid"

        def start_delayed_job
          Thread.new do
            `ruby script/delayed_job start`
          end
        end

        def process_is_dead?
          begin
            pid = File.read(DELAYED_JOB_PID_PATH).strip
            Process.kill(0, pid.to_i)  # signal 0 only checks liveness
            false
          rescue
            true
          end
        end

        # Start when there is no pid file, or when the recorded process is gone
        # (e.g. a stale pid file left behind after a server restart)
        if !File.exist?(DELAYED_JOB_PID_PATH) || process_is_dead?
          start_delayed_job
        end

    Read the article

  • SQLite3, "ALTER TABLE" and durability

    - by Bill
    I'd like to run some ALTER TABLE statements on a sqlite3 database. What happens if the user kills the process or the power is cut while the ALTER TABLE is running? Will the database be left in a corrupt intermediate state?

    Read the article

  • Generating PDFs via Delayed Job while maintaining a RESTful pattern.

    - by Jeff
    Hi, currently I am running a Rails app on Heroku, and everything is working great with the exception of generating PDF documents that sometimes contain thousands of records. Heroku has a built-in timeout of 30 seconds, so if a request takes more than 30 seconds, it's abandoned. That's fine, since they offer delayed_job support built in. However, all of the PDFs I generate follow a typical RESTful pattern. For instance, a request to "/posts.pdf" generates a PDF (using Prawn and Prawnto) and delivers it to the browser. So my basic question is: how do I create dynamically generated PDFs with delayed_job while maintaining the basic RESTful patterns Rails so conveniently provides? Thanks.
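
    One common shape for the answer, sketched here as an assumption rather than a known-good recipe: promote the PDF to a first-class resource, enqueue its generation, and let the client poll until the file is ready. The Export model, its columns, and file_url are all invented for illustration:

        class ExportsController < ApplicationController
          # POST /exports -- enqueue the PDF build instead of rendering inline
          def create
            export = Export.create!(:status => "queued", :kind => "posts")
            export.send_later(:generate_pdf)   # runs Prawn inside the delayed_job worker
            redirect_to export_path(export)
          end

          # GET /exports/1 -- poll until the worker marks the export done
          def show
            export = Export.find(params[:id])
            if export.status == "done"
              redirect_to export.file_url      # e.g. a file the worker pushed to S3
            else
              render :text => "Still generating", :status => 202
            end
          end
        end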

    Read the article

  • HP server delayed boot

    - by jjrab
    I'm currently using HP ProLiant DL120 G5 servers running VMware ESXi 4 to run server VMs. They connect to an iSCSI SAN for shared storage. I'd like to implement a delayed boot of these host servers so that, after a power failure, they don't boot up and try to connect to the SAN before the SAN is ready for connections. Does anyone know of a good way to do this?

    Read the article
