Search Results

Search found 3089 results on 124 pages for 'lock'.


  • block write access to table from an application in mysql

    - by hoberion
    Hello, we have a CMS plugin that writes statistics to one table, and this creates performance issues on the entire platform. We decided to switch to another statistics plugin that can connect to a different database server (the first plugin couldn't!), but we still need parts of the first plugin. I want to lock the statistics table to prevent misuse (the developer should not be allowed to write to it or drop it). So I was wondering whether a table lock could do this, or whether I can implement some sort of read-only table.
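    A LOCK TABLES lock only lasts for the session that takes it, so it is a poor fit here. A more durable approach, assuming the old plugin connects with its own MySQL account, is to revoke that account's write privileges on just the statistics table. Below is a minimal JDBC sketch of that idea; the schema, table, account, and credential names are all hypothetical, and the REVOKE only succeeds if the privileges were originally granted at the table level:

        // Hedged sketch: make the statistics table effectively read-only for the
        // old plugin's MySQL account. All names below are placeholders.
        import java.sql.Connection;
        import java.sql.DriverManager;
        import java.sql.Statement;

        public class MakeStatsReadOnly {
            public static void main(String[] args) throws Exception {
                // Connect as an administrative user (credentials are placeholders).
                try (Connection admin = DriverManager.getConnection(
                        "jdbc:mysql://localhost/cms", "root", "secret");
                     Statement st = admin.createStatement()) {
                    // Strip every table-level privilege the plugin account holds...
                    st.executeUpdate("REVOKE ALL ON cms.statistics FROM 'plugin'@'%'");
                    // ...then grant back read access only.
                    st.executeUpdate("GRANT SELECT ON cms.statistics TO 'plugin'@'%'");
                }
            }
        }

    As long as DROP is not granted to that account at the database or global level, this also prevents the developer's code from dropping the table.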

    Read the article

  • How to create a database deadlock using JDBC and JUnit

    - by Isawpalmetto
    I am trying to create a database deadlock and I am using JUnit. I have two concurrent tests running, both updating the same rows over and over again in a loop. The idea is that one test updates row A in table A and then row B in table B, over and over, while at the same time the other test updates row B in table B and then row A in table A, over and over. From my understanding this should eventually result in a deadlock. Here is the code for the first test:

        public static void testEditCC() {
            try {
                int rows = 0;
                int counter = 0;
                int large = 10000000;
                Connection c = DataBase.getConnection();
                while (counter < large) {
                    int pid = 87855;
                    int cCode = 655;
                    String newCountry = "Egypt";
                    int bpl = 0;
                    stmt = c.createStatement();
                    rows = stmt.executeUpdate("UPDATE main " +                // create lock on main table
                            "SET BPL=" + cCode + " WHERE ID=" + pid);
                    rows = stmt.executeUpdate(
                            "UPDATE BPL SET DESCRIPTION='SomeWhere' WHERE ID=602"); // create lock on BPL table
                    counter++;
                }
                assertTrue(rows == 1);
                //rows = stmt.executeUpdate("Insert into BPL (ID, DESCRIPTION) VALUES (" + cCode + ", '" + newCountry + "')");
            } catch (SQLException ex) {
                ex.printStackTrace();
            }
        }

    And here is the code for the second test:

        public static void testEditCC() {
            try {
                int rows = 0;
                int counter = 0;
                int large = 10000000;
                Connection c = DataBase.getConnection();
                while (counter < large) {
                    int pid = 87855;
                    int cCode = 655;
                    String newCountry = "Jordan";
                    int bpl = 0;
                    stmt = c.createStatement();
                    //stmt.close();
                    rows = stmt.executeUpdate(
                            "UPDATE BPL SET DESCRIPTION='SomeWhere' WHERE ID=602"); // create lock on BPL table
                    rows = stmt.executeUpdate("UPDATE main " +                // create lock on main table
                            "SET BPL=" + cCode + " WHERE ID=" + pid);
                    counter++;
                }
                assertTrue(rows == 1);
                //rows = stmt.executeUpdate("Insert into BPL (ID, DESCRIPTION) VALUES (" + cCode + ", '" + newCountry + "')");
            } catch (SQLException ex) {
                ex.printStackTrace();
            }
        }

    I am running these two separate JUnit tests at the same time, connecting to an Apache Derby database that I am running in network mode within Eclipse. Can anyone help me figure out why a deadlock is not occurring? Perhaps I am using JUnit wrong.
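    One likely culprit, offered as a hypothesis rather than a confirmed diagnosis: JDBC connections default to auto-commit, so each executeUpdate() commits and releases its row locks immediately, and neither test ever holds one lock while waiting for the other. Disabling auto-commit so both updates run inside a single transaction produces the classic lock-ordering deadlock. A minimal sketch of the change, reusing the question's DataBase helper and table names (only the transaction handling is new):

        // Hedged sketch: the same two updates inside one explicit transaction.
        public static void testEditCCInTransaction() throws SQLException {
            Connection c = DataBase.getConnection();
            c.setAutoCommit(false); // hold row locks until commit(), not per statement
            try (Statement stmt = c.createStatement()) {
                // Lock the row in main first...
                stmt.executeUpdate("UPDATE main SET BPL=655 WHERE ID=87855");
                // ...then request the row in BPL. The mirror-image test already
                // holds BPL and is now waiting for main -> lock-ordering deadlock.
                stmt.executeUpdate("UPDATE BPL SET DESCRIPTION='SomeWhere' WHERE ID=602");
                c.commit();
            } catch (SQLException ex) {
                c.rollback(); // Derby aborts one transaction as the deadlock victim
                throw ex;
            }
        }

    Note that Derby only reports a deadlock after its detection interval (the derby.locks.deadlockTimeout property, 20 seconds by default), so the aborted victim may take a little while to surface.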

    Read the article

  • Multi-threaded Application with Readonly Properties

    - by Shiftbit
    Should my multithreaded application with read-only properties require locking? Since nothing is being written, I assume there is no need for locks, but I would like to make sure. Would the answer to this question be language agnostic?

    Without lock:

        Private m_strFoo As String = "Foo"

        Public ReadOnly Property Foo() As String
            Get
                Return String.Copy(m_strFoo)
            End Get
        End Property

    With lock:

        Private m_strBar As String = "Bar"

        Public ReadOnly Property Bar() As String
            Get
                SyncLock (Me)
                    Return String.Copy(m_strBar)
                End SyncLock
            End Get
        End Property
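    In general, no lock is needed as long as the fields are assigned once before the object is shared and never reassigned afterwards; the answer is broadly language agnostic in spirit, but each language's memory model supplies the guarantee differently. A minimal Java sketch of the same situation, offered as an analogue since the question's code is VB.NET:

        // Immutable holder: the field is written once, before other threads can
        // see the object, so readers need no lock. In Java, "final" additionally
        // guarantees safe publication; in .NET, initialize the field before the
        // object becomes visible to other threads.
        public final class Holder {
            private final String foo = "Foo";

            public String getFoo() {
                return foo; // no SyncLock/synchronized needed: no write can race this read
            }
        }

    The lock only becomes necessary if some code path can reassign the field after the object is visible to other threads.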

    Read the article

  • locking on dictionary of structs not working between 2 threads?

    - by Rancur3p1c
    C#, .NET 2.0, XP, Zen. I have 2 threads accessing a shared dictionary of structures, each thread via an event. At the beginning of the event I lock the dictionary, remove some structures, and exit the lock and the event. Yet somehow the second thread's event is finding some of the removed structures. Conceptually, what must I be doing wrong for this to be happening? I thought locking was supposed to make it thread safe?
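    Locking only makes the dictionary thread safe if every reader and writer takes the same lock, and structs add a twist: they are value types, so any entry read out of the dictionary is a copy that survives a later Remove. A hedged Java sketch of the discipline that usually fixes this (names are illustrative; the same shape applies with C#'s lock statement):

        // Hedged sketch: ONE monitor guards every access, reads included, and no
        // thread caches an entry across lock boundaries.
        import java.util.HashMap;
        import java.util.Map;

        public class SharedStructs {
            private final Map<String, long[]> shared = new HashMap<>();
            private final Object mapLock = new Object(); // the single shared monitor

            // Event handler on thread 1: remove under the lock.
            public void onFirstEvent(String key) {
                synchronized (mapLock) {
                    shared.remove(key);
                }
            }

            // Event handler on thread 2: read under the SAME lock. A struct
            // fetched earlier is a copy, and a copy still "contains" data that
            // has since been removed from the dictionary.
            public void onSecondEvent(String key) {
                synchronized (mapLock) {
                    long[] value = shared.get(key);
                    if (value != null) {
                        // consume the value while the lock is still held
                    }
                }
            }
        }

    If the second event reads the dictionary without the lock, or keeps struct copies taken before the removal, it will appear to "find" removed items even though the locking around the removal itself is correct.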

    Read the article

  • ThreadQueue problems in "Accelerated C# 2008"

    - by Singlet
    The thread-queue example from the book "Accelerated C# 2008" (the CrudeThreadPool class) does not work correctly. If I put a long job in WorkFunction() on a 2-processor machine, the next task does not run before the first one is over. How can I solve this problem? I want to load the processors to 100 percent.

        public class CrudeThreadPool {
            static readonly int MAX_WORK_THREADS = 4;
            static readonly int WAIT_TIMEOUT = 2000;

            public delegate void WorkDelegate();

            public CrudeThreadPool() {
                stop = 0;
                workLock = new Object();
                workQueue = new Queue();
                threads = new Thread[ MAX_WORK_THREADS ];
                for( int i = 0; i < MAX_WORK_THREADS; ++i ) {
                    threads[i] = new Thread( new ThreadStart(this.ThreadFunc) );
                    threads[i].Start();
                }
            }

            private void ThreadFunc() {
                lock( workLock ) {
                    int shouldStop = 0;
                    do {
                        shouldStop = Interlocked.Exchange( ref stop, stop );
                        if( shouldStop == 0 ) {
                            WorkDelegate workItem = null;
                            if( Monitor.Wait(workLock, WAIT_TIMEOUT) ) {
                                // Process the item on the front of the queue
                                lock( workQueue ) {
                                    workItem = (WorkDelegate) workQueue.Dequeue();
                                }
                                workItem();
                            }
                        }
                    } while( shouldStop == 0 );
                }
            }

            public void SubmitWorkItem( WorkDelegate item ) {
                lock( workLock ) {
                    lock( workQueue ) {
                        workQueue.Enqueue( item );
                    }
                    Monitor.Pulse( workLock );
                }
            }

            public void Shutdown() {
                Interlocked.Exchange( ref stop, 1 );
            }

            private Queue workQueue;
            private Object workLock;
            private Thread[] threads;
            private int stop;
        }

        public class EntryPoint {
            static void WorkFunction() {
                Console.WriteLine( "WorkFunction() called on Thread {0}", Thread.CurrentThread.GetHashCode() );
                // some long job
                double s = 0;
                for (int i = 0; i < 100000000; i++)
                    s += Math.Sin(i);
            }

            static void Main() {
                CrudeThreadPool pool = new CrudeThreadPool();
                for( int i = 0; i < 10; ++i ) {
                    pool.SubmitWorkItem( new CrudeThreadPool.WorkDelegate(EntryPoint.WorkFunction) );
                }
                pool.Shutdown();
            }
        }
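    The serialization comes from ThreadFunc() holding workLock for its entire lifetime: Monitor.Wait releases the lock only while waiting, so once a worker wakes up and starts running workItem(), every other worker is blocked from re-acquiring workLock until that item finishes. The usual fix is to dequeue under the lock but invoke the item outside it. A minimal Java sketch of that shape (BlockingQueue bundles the lock/wait/pulse logic; this illustrates the pattern, it is not the book's code):

        // Hedged sketch: take work out of the queue under the queue's internal
        // lock, but run it UNLOCKED so workers execute items in parallel.
        import java.util.concurrent.BlockingQueue;
        import java.util.concurrent.LinkedBlockingQueue;

        public class TinyPool {
            private final BlockingQueue<Runnable> queue = new LinkedBlockingQueue<>();

            public TinyPool(int threads) {
                for (int i = 0; i < threads; i++) {
                    Thread t = new Thread(() -> {
                        try {
                            while (true) {
                                Runnable item = queue.take(); // blocks; holds no user lock
                                item.run();                   // runs unlocked -> true parallelism
                            }
                        } catch (InterruptedException e) {
                            // treat interruption as shutdown
                        }
                    });
                    t.start();
                }
            }

            public void submit(Runnable item) {
                queue.add(item);
            }
        }

    In the C# class, the equivalent change is to exit the lock(workLock) block before calling workItem().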

    Read the article

  • How to configure emacs by using this file?

    - by Andy Leman
    From http://public.halogen-dg.com/browser/alex-emacs-settings/.emacs?rev=1346 I got:

        (setq load-path (cons "/home/alex/.emacs.d/" load-path))
        (setq load-path (cons "/home/alex/.emacs.d/configs/" load-path))

        (defconst emacs-config-dir "~/.emacs.d/configs/" "")

        (defun load-cfg-files (filelist)
          (dolist (file filelist)
            (load (expand-file-name (concat emacs-config-dir file)))
            (message "Loaded config file:%s" file)))

        (load-cfg-files '("cfg_initsplit"
                          "cfg_variables_and_faces"
                          "cfg_keybindings"
                          "cfg_site_gentoo"
                          "cfg_conf-mode"
                          "cfg_mail-mode"
                          "cfg_region_hooks"
                          "cfg_apache-mode"
                          "cfg_crontab-mode"
                          "cfg_gnuserv"
                          "cfg_subversion"
                          "cfg_css-mode"
                          "cfg_php-mode"
                          "cfg_tramp"
                          "cfg_killbuffer"
                          "cfg_color-theme"
                          "cfg_uniquify"
                          "cfg_tabbar"
                          "cfg_python"
                          "cfg_ack"
                          "cfg_scpaste"
                          "cfg_ido-mode"
                          "cfg_javascript"
                          "cfg_ange_ftp"
                          "cfg_font-lock"
                          "cfg_default_face"
                          "cfg_ecb"
                          "cfg_browser"
                          "cfg_orgmode"
                          ; "cfg_gnus"
                          ; "cfg_cyrillic"
                          ))

        ; enable disabled advanced features
        (put 'downcase-region 'disabled nil)
        (put 'scroll-left 'disabled nil)
        (put 'upcase-region 'disabled nil)

        ; narrow cursor
        ;(setq-default cursor-type 'hbar)

        (cua-mode)

        ; highlight current line
        (global-hl-line-mode 1)

        ; AV: non-aggressive scrolling
        (setq scroll-conservatively 100)
        (setq scroll-preserve-screen-position 't)
        (setq scroll-margin 0)

        (custom-set-variables
         ;; custom-set-variables was added by Custom.
         ;; If you edit it by hand, you could mess it up, so be careful.
         ;; Your init file should contain only one such instance.
         ;; If there is more than one, they won't work right.
         '(ange-ftp-passive-host-alist (quote (("redbus2.chalkface.com" . "on") ("zope.halogen-dg.com" . "on") ("85.119.217.50" . "on"))))
         '(blink-cursor-mode nil)
         '(browse-url-browser-function (quote browse-url-firefox))
         '(browse-url-new-window-flag t)
         '(buffers-menu-max-size 30)
         '(buffers-menu-show-directories t)
         '(buffers-menu-show-status nil)
         '(case-fold-search t)
         '(column-number-mode t)
         '(cua-enable-cua-keys nil)
         '(user-mail-address "[email protected]")
         '(cua-mode t nil (cua-base))
         '(current-language-environment "UTF-8")
         '(file-name-shadow-mode t)
         '(fill-column 79)
         '(grep-command "grep --color=never -nHr -e * | grep -v .svn --color=never")
         '(grep-use-null-device nil)
         '(inhibit-startup-screen t)
         '(initial-frame-alist (quote ((width . 80) (height . 40))))
         '(initsplit-customizations-alist (quote (("tabbar" "configs/cfg_tabbar.el" t) ("ecb" "configs/cfg_ecb.el" t) ("ange\\-ftp" "configs/cfg_ange_ftp.el" t) ("planner" "configs/cfg_planner.el" t) ("dired" "configs/cfg_dired.el" t) ("font\\-lock" "configs/cfg_font-lock.el" t) ("speedbar" "configs/cfg_ecb.el" t) ("muse" "configs/cfg_muse.el" t) ("tramp" "configs/cfg_tramp.el" t) ("uniquify" "configs/cfg_uniquify.el" t) ("default" "configs/cfg_font-lock.el" t) ("ido" "configs/cfg_ido-mode.el" t) ("org" "configs/cfg_orgmode.el" t) ("gnus" "configs/cfg_gnus.el" t) ("nnmail" "configs/cfg_gnus.el" t))))
         '(ispell-program-name "aspell")
         '(jabber-account-list (quote (("[email protected]"))))
         '(jabber-nickname "AVK")
         '(jabber-password nil)
         '(jabber-server "halogen-dg.com")
         '(jabber-username "alex")
         '(remember-data-file "~/Plans/remember.org")
         '(safe-local-variable-values (quote ((dtml-top-element . "body"))))
         '(save-place t nil (saveplace))
         '(scroll-bar-mode (quote right))
         '(semantic-idle-scheduler-idle-time 432000)
         '(show-paren-mode t)
         '(svn-status-hide-unmodified t)
         '(tool-bar-mode nil nil (tool-bar))
         '(transient-mark-mode t)
         '(truncate-lines f)
         '(woman-use-own-frame nil))

        ; ask with a short y or n instead of yes or no
        (fset 'yes-or-no-p 'y-or-n-p)

        (custom-set-faces
         ;; custom-set-faces was added by Custom.
         ;; If you edit it by hand, you could mess it up, so be careful.
         ;; Your init file should contain only one such instance.
         ;; If there is more than one, they won't work right.
         '(compilation-error ((t (:foreground "tomato" :weight bold))))
         '(cursor ((t (:background "red1"))))
         '(custom-variable-tag ((((class color) (background dark)) (:inherit variable-pitch :foreground "DarkOrange" :weight bold))))
         '(hl-line ((t (:background "grey24"))))
         '(isearch ((t (:background "orange" :foreground "black"))))
         '(message-cited-text ((((class color) (background dark)) (:foreground "SandyBrown"))))
         '(message-header-name ((((class color) (background dark)) (:foreground "DarkGrey"))))
         '(message-header-other ((((class color) (background dark)) (:foreground "LightPink2"))))
         '(message-header-subject ((((class color) (background dark)) (:foreground "yellow2"))))
         '(message-separator ((((class color) (background dark)) (:foreground "thistle"))))
         '(region ((t (:background "brown"))))
         '(tooltip ((((class color)) (:inherit variable-pitch :background "IndianRed1" :foreground "black")))))

    The above is an Emacs Lisp configuration file. Where should I put it to use it? And are there any other changes I need to make?

    Read the article

  • Outlook 2013 keeps freezing, semi-consistently

    - by AviD
    I have an odd problem with my Outlook's stability. It seems to be freezing up, not at random intervals, but based on a seemingly strange combination of configurations. I have been trying many different combinations; I've even devolved to "cargo-cult" debugging, since I have no clue what is causing this... Here is my setup - since I don't know for sure which settings are causing the lockup, I'll probably mention irrelevant things:

    - (relatively) clean install of Windows 8 (on Hyper-V, if that matters)
    - Clean install of Outlook 2013, fully updated
    - 3 accounts configured:
      - Hotmail account, configured with ActiveSync
      - Gmail account: large-ish (several GB), connected with IMAP; only a few folders are IMAP-subscribed; Outlook is set to only display subscribed folders; configured to keep messages permanently
      - Google Apps account: small, connected with IMAP; all folders IMAP-subscribed; Outlook is set to only display subscribed folders; configured to keep messages permanently
    - Several Send/Receive groups configured, to try different combinations of enabled/disabled/partial accounts, with different send times from 60 minutes down to 5 minutes

    The problem is that at certain points Outlook completely freezes up and I have to kill it. This is not consistent: some things cause it almost every time, and sometimes it just happens by itself after some period of time (sometimes a few moments, sometimes a few hours; sometimes while using it, sometimes after I've been away from it for a few hours). I have searched all over, and there seem to be many people with (apparently) similar problems, and I found numerous "solutions" (some even more cargo-cultish than mine), but so far none of them worked. What I have tried:

    - I've removed all the accounts, both all together and one at a time, and re-configured them - eventually it freezes up.
    - I've uninstalled Outlook, cleaned it up completely (removing files, app settings, registry keys, etc.), then reinstalled - eventually it freezes up.
    - I've enabled only the Hotmail account, disabling (but not removing) the Google accounts - apparently this does not lock up.
    - I've enabled the Hotmail and Gmail accounts, leaving the Apps one disabled - it seems this does not lock up.
    - With all accounts enabled, it locks up almost immediately after a send/receive.
    - With only the Apps account enabled, it seems not to lock up.
    - With the Hotmail and Apps accounts enabled (Gmail disabled), it seems to lock up after a random amount of time.
    - With Hotmail enabled, and Gmail and Apps both enabled but set to download only custom folders (not all subscribed folders) - sometimes it locks up right after a send/receive, sometimes it goes for hours without locking up, and sometimes it only locks up when I send an email.
    - I've tried switching the ports for the Google accounts (SSL/465 vs TLS/587), though I have no idea whether this should matter - no real difference.

    In short, I honestly have no idea what is actually causing Outlook to lock up; I might be completely barking up the wrong tree. At this point I don't really know what else to try; I'm flipping switches at random here. I would like to have all 3 accounts enabled, ideally in several groups (e.g. pull down only important folders in a group with a short interval, and all other folders at a longer interval) - obviously without freezing up at all.
    I've tried to include all the important details; if there is anything else important to add, please let me know. Another issue that occurred to me might also be connected: the Google accounts don't always synchronize properly, even after a send/receive or "update folder". At least not consistently... though I haven't been able to find a significant connection between this and that.

    Read the article

  • DMV {dm_os_ring_buffers} - Queries to help pinpoint current Issues / usual usage patterns

    - by NeilHambly
    I've been running some queries (below) to help me identify when I have had time-sensitive performance issues around memory/CPU. I didn't want to add extra overhead to the system (unless absolutely necessary) with traces or Profiler; naturally we have various other methods for this - Perfmon counters, DBCC, DMVs, etc. One quick way I like is to run a few DMV queries (normally back in seconds) to find those recent specific time periods when the system changed substantially, using the DMV dm_os_ring_buffers.

    This one helps me identify when I'm experiencing timeout errors (1222); modify the code to look for other errors, as highlighted below.

        DECLARE @ts_now BIGINT, @dt_max BIGINT, @dt_min BIGINT

        SELECT @ts_now = cpu_ticks / CONVERT(FLOAT, cpu_ticks_in_ms)
        FROM sys.dm_os_sys_info

        SELECT @dt_max = MAX(timestamp), @dt_min = MIN(timestamp)
        FROM sys.dm_os_ring_buffers
        WHERE ring_buffer_type = N'RING_BUFFER_EXCEPTION'

        SELECT
            record_id,
            DATEADD(ms, -1 * (@ts_now - [timestamp]), GETDATE()) AS EventTime,
            y.Error,
            UserDefined,
            b.description AS NormalizedText
        FROM
            (
            SELECT
                record.value('(./Record/@id)[1]', 'int')                   AS record_id,
                record.value('(./Record/Exception/Error)[1]', 'int')       AS Error,
                record.value('(./Record/Exception/UserDefined)[1]', 'int') AS UserDefined,
                TIMESTAMP
            FROM
                (
                SELECT TIMESTAMP, CONVERT(XML, record) AS record
                FROM sys.dm_os_ring_buffers
                WHERE ring_buffer_type = N'RING_BUFFER_EXCEPTION'
                AND record LIKE '% %'
                ) AS x
            ) AS y
        INNER JOIN sys.sysmessages b ON y.Error = b.error
        WHERE b.msglangid = 1033 AND y.Error = 1222
        ORDER BY record_id DESC

    Sample output:

        record_id  EventTime         Error  UserDefined  NormalizedText
        15199195   18/03/2010 14:00  1222   0            Lock request time out period exceeded.
        15199194   18/03/2010 14:00  1222   0            Lock request time out period exceeded.
        15199193   18/03/2010 14:00  1222   0            Lock request time out period exceeded.
        15199192   18/03/2010 14:00  1222   0            Lock request time out period exceeded.
        15199191   18/03/2010 14:00  1222   0            Lock request time out period exceeded.

    This one helps me identify when I have unusually high processing (> 50%) or numbers of page faults:

        SELECT
            record.value('(./Record/@id)[1]', 'int')                                                    AS record_id,
            record.value('(./Record/SchedulerMonitorEvent/SystemHealth/SystemIdle)[1]', 'int')          AS SystemIdle,
            record.value('(./Record/SchedulerMonitorEvent/SystemHealth/ProcessUtilization)[1]', 'int')  AS SQLProcessUtilization,
            record.value('(./Record/SchedulerMonitorEvent/SystemHealth/UserModeTime)[1]', 'bigint')     AS UserModeTime,
            record.value('(./Record/SchedulerMonitorEvent/SystemHealth/KernelModeTime)[1]', 'bigint')   AS KernelModeTime,
            record.value('(./Record/SchedulerMonitorEvent/SystemHealth/PageFaults)[1]', 'bigint')       AS PageFaults,
            record.value('(./Record/SchedulerMonitorEvent/SystemHealth/WorkingSetDelta)[1]', 'bigint')  AS WorkingSetDelta,
            record.value('(./Record/SchedulerMonitorEvent/SystemHealth/MemoryUtilization)[1]', 'int')   AS MemoryUtilization,
            TIMESTAMP
        FROM
            (
            SELECT TIMESTAMP, CONVERT(XML, record) AS record
            FROM sys.dm_os_ring_buffers
            WHERE ring_buffer_type = N'RING_BUFFER_SCHEDULER_MONITOR'
            AND record LIKE '% %'
            ) AS x

    Example, showing entries > 50% SQL CPU:

        record_id  SystemIdle  SQLProcessUtilization  UserModeTime  KernelModeTime  PageFaults  WorkingSetDelta  MemoryUtilization  TIMESTAMP
        111916     66          29                     36718750      1374843750      21333       -40960           100                7991061289
        111917     54          41                     50156250      1954062500      26914       -28672           100                7991121290
        111918     57          39                     42968750      1838437500      30096       20480            100                7991181290
        111919     41          53                     43906250      2530156250      22088       -4096            100                7991241307
        111920     48          45                     40937500      2124062500      26395       8192             100                7991301310
        111921     52          43                     35625000      2052812500      21996       155648           100                7991361311
        111922     40          55                     36875000      2637343750      33355       -262144          100                7991421311
        111923     36          58                     44843750      2786562500      47019       28672            100                7991481311
        111924     31          64                     53437500      3046562500      31027       61440            100                7991541314
        111925     36          57                     43906250      2711250000      37074       -8192            100                7991601317
        111926     52          43                     43437500      2060156250      29176       20480            100                7991661318
        111927     71          24                     33750000      1141250000      14478       16384            100                7991721320
        111928     71          23                     34531250      1116250000      12711       -20480           100                7991781320
        111929     53          36                     46562500      1714062500      26684       200704           100                7991841323

    Finally, one to provide some understanding of the level of memory state changes that are occurring:

        SELECT
            record.value('(./Record/@id)[1]', 'int')                                   AS 'record_id',
            record.value('(./Record/ResourceMonitor/Notification)[1]', 'VARCHAR(100)') AS 'ReservedMemory',
            record.value('(./Record/ResourceMonitor/Indicators)[1]', 'int')            AS 'Indicators',
            record.value('(./Record/ResourceMonitor/Effect/@state)[1]', 'VARCHAR(100)')
              + ' - ' + record.value('(./Record/ResourceMonitor/Effect/@reversed)[1]', 'VARCHAR(100)')
              + ' - ' + record.value('(./Record/ResourceMonitor/Effect)[1]', 'VARCHAR(100)')   AS 'APPLY-HIGHPM',
            record.value('(./Record/ResourceMonitor/Effect/@state)[2]', 'VARCHAR(100)')
              + ' - ' + record.value('(./Record/ResourceMonitor/Effect/@reversed)[2]', 'VARCHAR(100)')
              + ' - ' + record.value('(./Record/ResourceMonitor/Effect)[2]', 'VARCHAR(100)')   AS 'APPLY-HIGHPM',
            record.value('(./Record/ResourceMonitor/Effect/@state)[3]', 'VARCHAR(100)')
              + ' - ' + record.value('(./Record/ResourceMonitor/Effect/@reversed)[3]', 'VARCHAR(100)')
              + ' - ' + record.value('(./Record/ResourceMonitor/Effect)[3]', 'VARCHAR(100)')   AS 'REVERT_HIGHPM',
            record.value('(./Record/MemoryNode/ReservedMemory)[1]', 'int')             AS 'ReservedMemory',
            record.value('(./Record/MemoryNode/CommittedMemory)[1]', 'int')            AS 'CommittedMemory',
            record.value('(./Record/MemoryNode/SharedMemory)[1]', 'int')               AS 'SharedMemory',
            record.value('(./Record/MemoryNode/AWEMemory)[1]', 'int')                  AS 'AWEMemory',
            record.value('(./Record/MemoryNode/SinglePagesMemory)[1]', 'int')          AS 'SinglePagesMemory',
            record.value('(./Record/MemoryNode/CachedMemory)[1]', 'int')               AS 'CachedMemory',
            record.value('(./Record/MemoryRecord/MemoryUtilization)[1]', 'int')        AS 'MemoryUtilization',
            record.value('(./Record/MemoryRecord/TotalPhysicalMemory)[1]', 'int')      AS 'TotalPhysicalMemory',
            record.value('(./Record/MemoryRecord/AvailablePhysicalMemory)[1]', 'int')  AS 'AvailablePhysicalMemory',
            record.value('(./Record/MemoryRecord/TotalPageFile)[1]', 'int')            AS 'TotalPageFile',
            record.value('(./Record/MemoryRecord/AvailablePageFile)[1]', 'int')        AS 'AvailablePageFile',
            record.value('(./Record/MemoryRecord/TotalVirtualAddressSpace)[1]', 'bigint')                 AS 'TotalVirtualAddressSpace',
            record.value('(./Record/MemoryRecord/AvailableVirtualAddressSpace)[1]', 'bigint')             AS 'AvailableVirtualAddressSpace',
            record.value('(./Record/MemoryRecord/AvailableExtendedVirtualAddressSpace)[1]', 'bigint')     AS 'AvailableExtendedVirtualAddressSpace',
            TIMESTAMP
        FROM
            (
            SELECT TIMESTAMP, CONVERT(XML, record) AS record
            FROM sys.dm_os_ring_buffers
            WHERE ring_buffer_type = N'RING_BUFFER_RESOURCE_MONITOR'
            AND record LIKE '% %'
            ) AS x

    Read the article

  • lxc containers hang after upgrade to 13.10

    - by doug123
    I have 3 lxc containers. They were all working fine on 12.10 and I upgraded the containers with do-release-upgrade on the containers to 13.04 and 13.10 and that worked great. Then I upgraded the host to 13.04 and then 13.10 and now the 3 containers hang with this: >lxc-start -n as1 -l DEBUG -o $(tty) lxc-start 1383145786.513 INFO lxc_start_ui - using rcfile /var/lib/lxc/as1/config lxc-start 1383145786.513 WARN lxc_log - lxc_log_init called with log already initialized lxc-start 1383145786.513 INFO lxc_apparmor - aa_enabled set to 1 lxc-start 1383145786.514 DEBUG lxc_conf - allocated pty '/dev/pts/2' (5/6) lxc-start 1383145786.514 DEBUG lxc_conf - allocated pty '/dev/pts/13' (7/8) lxc-start 1383145786.514 DEBUG lxc_conf - allocated pty '/dev/pts/14' (9/10) lxc-start 1383145786.514 DEBUG lxc_conf - allocated pty '/dev/pts/15' (11/12) lxc-start 1383145786.514 DEBUG lxc_conf - allocated pty '/dev/pts/17' (13/14) lxc-start 1383145786.514 DEBUG lxc_conf - allocated pty '/dev/pts/18' (15/16) lxc-start 1383145786.514 DEBUG lxc_conf - allocated pty '/dev/pts/19' (17/18) lxc-start 1383145786.514 DEBUG lxc_conf - allocated pty '/dev/pts/20' (19/20) lxc-start 1383145786.514 INFO lxc_conf - tty's configured lxc-start 1383145786.514 DEBUG lxc_start - sigchild handler set lxc-start 1383145786.514 DEBUG lxc_console - opening /dev/tty for console peer lxc-start 1383145786.514 DEBUG lxc_console - using '/dev/tty' as console lxc-start 1383145786.514 DEBUG lxc_console - 6242 got SIGWINCH fd 25 lxc-start 1383145786.514 DEBUG lxc_console - set winsz dstfd:22 cols:177 rows:53 lxc-start 1383145786.514 INFO lxc_start - 'as1' is initialized lxc-start 1383145786.522 DEBUG lxc_start - Not dropping cap_sys_boot or watching utmp lxc-start 1383145786.524 DEBUG lxc_conf - mac address of host interface 'vethB4L35W' changed to private fe:7c:96:a0:ae:29 lxc-start 1383145786.525 DEBUG lxc_conf - instanciated veth 'vethB4L35W/vethVC61K2', index is '26' lxc-start 1383145786.529 DEBUG lxc_cgroup - cgroup 'memory.limit_in_bytes' set to '20G' lxc-start 1383145786.529 DEBUG lxc_cgroup - cgroup 'cpuset.cpus' set to '12-23' lxc-start 1383145786.529 INFO lxc_cgroup - cgroup has been setup lxc-start 1383145786.555 DEBUG lxc_conf - move 'eth0' to '6249' lxc-start 1383145786.555 INFO lxc_conf - 'as1' hostname has been setup lxc-start 1383145786.575 DEBUG lxc_conf - 'eth0' has been setup lxc-start 1383145786.575 INFO lxc_conf - network has been setup lxc-start 1383145786.575 INFO lxc_conf - looking at .44 42 252:0 / / rw,relatime - ext4 /dev/mapper/limitorderbook1-root rw,errors=remount-ro,data=ordered . lxc-start 1383145786.575 INFO lxc_conf - now p is . /. lxc-start 1383145786.575 INFO lxc_conf - looking at .52 44 0:5 / /dev rw,relatime - devtmpfs udev rw,size=32961632k,nr_inodes=8240408,mode=755 . lxc-start 1383145786.575 INFO lxc_conf - now p is . /dev. lxc-start 1383145786.575 INFO lxc_conf - looking at .61 52 0:11 / /dev/pts rw,nosuid,noexec,relatime - devpts devpts rw,mode=600,ptmxmode=000 . lxc-start 1383145786.575 INFO lxc_conf - now p is . /dev/pts. lxc-start 1383145786.575 INFO lxc_conf - looking at .68 44 0:15 / /run rw,nosuid,noexec,relatime - tmpfs tmpfs rw,size=6594456k,mode=755 . lxc-start 1383145786.575 INFO lxc_conf - now p is . /run. lxc-start 1383145786.575 INFO lxc_conf - looking at .69 68 0:18 / /run/lock rw,nosuid,nodev,noexec,relatime - tmpfs none rw,size=5120k . lxc-start 1383145786.575 INFO lxc_conf - now p is . /run/lock. 
lxc-start 1383145786.575 INFO lxc_conf - looking at .72 68 0:19 / /run/shm rw,nosuid,nodev,relatime - tmpfs none rw . lxc-start 1383145786.575 INFO lxc_conf - now p is . /run/shm. lxc-start 1383145786.575 INFO lxc_conf - looking at .73 68 0:21 / /run/user rw,nosuid,nodev,noexec,relatime - tmpfs none rw,size=102400k,mode=755 . lxc-start 1383145786.575 INFO lxc_conf - now p is . /run/user. lxc-start 1383145786.575 INFO lxc_conf - looking at .76 44 0:14 / /sys rw,nosuid,nodev,noexec,relatime - sysfs sysfs rw . lxc-start 1383145786.575 INFO lxc_conf - now p is . /sys. lxc-start 1383145786.575 INFO lxc_conf - looking at .77 76 0:16 / /sys/fs/cgroup rw,relatime - tmpfs none rw,size=4k,mode=755 . lxc-start 1383145786.575 INFO lxc_conf - now p is . /sys/fs/cgroup. lxc-start 1383145786.575 INFO lxc_conf - looking at .78 77 0:20 / /sys/fs/cgroup/cpuset rw,relatime - cgroup cgroup rw,cpuset,clone_children . lxc-start 1383145786.575 INFO lxc_conf - now p is . /sys/fs/cgroup/cpuset. lxc-start 1383145786.575 INFO lxc_conf - looking at .79 77 0:23 / /sys/fs/cgroup/cpu rw,relatime - cgroup cgroup rw,cpu . lxc-start 1383145786.575 INFO lxc_conf - now p is . /sys/fs/cgroup/cpu. lxc-start 1383145786.575 INFO lxc_conf - looking at .80 77 0:24 / /sys/fs/cgroup/cpuacct rw,relatime - cgroup cgroup rw,cpuacct . lxc-start 1383145786.575 INFO lxc_conf - now p is . /sys/fs/cgroup/cpuacct. lxc-start 1383145786.575 INFO lxc_conf - looking at .81 77 0:25 / /sys/fs/cgroup/memory rw,relatime - cgroup cgroup rw,memory . lxc-start 1383145786.575 INFO lxc_conf - now p is . /sys/fs/cgroup/memory. lxc-start 1383145786.575 INFO lxc_conf - looking at .82 77 0:26 / /sys/fs/cgroup/devices rw,relatime - cgroup cgroup rw,devices . lxc-start 1383145786.575 INFO lxc_conf - now p is . /sys/fs/cgroup/devices. lxc-start 1383145786.575 INFO lxc_conf - looking at .83 77 0:27 / /sys/fs/cgroup/freezer rw,relatime - cgroup cgroup rw,freezer . lxc-start 1383145786.575 INFO lxc_conf - now p is . /sys/fs/cgroup/freezer. lxc-start 1383145786.575 INFO lxc_conf - looking at .84 77 0:28 / /sys/fs/cgroup/blkio rw,relatime - cgroup cgroup rw,blkio . lxc-start 1383145786.575 INFO lxc_conf - now p is . /sys/fs/cgroup/blkio. lxc-start 1383145786.575 INFO lxc_conf - looking at .85 77 0:29 / /sys/fs/cgroup/perf_event rw,relatime - cgroup cgroup rw,perf_event . lxc-start 1383145786.575 INFO lxc_conf - now p is . /sys/fs/cgroup/perf_event. lxc-start 1383145786.575 INFO lxc_conf - looking at .94 77 0:30 / /sys/fs/cgroup/hugetlb rw,relatime - cgroup cgroup rw,hugetlb . lxc-start 1383145786.575 INFO lxc_conf - now p is . /sys/fs/cgroup/hugetlb. lxc-start 1383145786.575 INFO lxc_conf - looking at .95 77 0:31 / /sys/fs/cgroup/systemd rw,nosuid,nodev,noexec,relatime - cgroup systemd rw,name=systemd . lxc-start 1383145786.575 INFO lxc_conf - now p is . /sys/fs/cgroup/systemd. lxc-start 1383145786.575 INFO lxc_conf - looking at .96 76 0:17 / /sys/fs/fuse/connections rw,relatime - fusectl none rw . lxc-start 1383145786.575 INFO lxc_conf - now p is . /sys/fs/fuse/connections. lxc-start 1383145786.575 INFO lxc_conf - looking at .98 76 0:6 / /sys/kernel/debug rw,relatime - debugfs none rw . lxc-start 1383145786.575 INFO lxc_conf - now p is . /sys/kernel/debug. lxc-start 1383145786.575 INFO lxc_conf - looking at .101 76 0:10 / /sys/kernel/security rw,relatime - securityfs none rw . lxc-start 1383145786.575 INFO lxc_conf - now p is . /sys/kernel/security. lxc-start 1383145786.575 INFO lxc_conf - looking at .102 76 0:22 / /sys/fs/pstore rw,relatime - pstore none rw . 
lxc-start 1383145786.575 INFO lxc_conf - now p is . /sys/fs/pstore. lxc-start 1383145786.575 INFO lxc_conf - looking at .103 44 0:3 / /proc rw,nosuid,nodev,noexec,relatime - proc proc rw . lxc-start 1383145786.575 INFO lxc_conf - now p is . /proc. lxc-start 1383145786.575 INFO lxc_conf - looking at .104 44 9:2 / /data rw,relatime - ext4 /dev/md2 rw,errors=remount-ro,data=ordered . lxc-start 1383145786.575 INFO lxc_conf - now p is . /data. lxc-start 1383145786.575 INFO lxc_conf - looking at .105 44 8:1 / /boot rw,relatime - ext2 /dev/sda1 rw,errors=continue . lxc-start 1383145786.575 INFO lxc_conf - now p is . /boot. lxc-start 1383145786.576 DEBUG lxc_conf - mounted '/data/srv/lxc/as1' on '/usr/lib/x86_64-linux-gnu/lxc' lxc-start 1383145786.576 DEBUG lxc_conf - mounted 'none' on '/usr/lib/x86_64-linux-gnu/lxc//dev/pts', type 'devpts' lxc-start 1383145786.576 DEBUG lxc_conf - mounted 'none' on '/usr/lib/x86_64-linux-gnu/lxc//proc', type 'proc' lxc-start 1383145786.576 DEBUG lxc_conf - mounted 'none' on '/usr/lib/x86_64-linux-gnu/lxc//sys', type 'sysfs' lxc-start 1383145786.576 DEBUG lxc_conf - mounted 'none' on '/usr/lib/x86_64-linux-gnu/lxc//run', type 'tmpfs' lxc-start 1383145786.576 INFO lxc_conf - mount points have been setup lxc-start 1383145786.577 INFO lxc_conf - console has been setup lxc-start 1383145786.577 INFO lxc_conf - 8 tty(s) has been setup lxc-start 1383145786.577 INFO lxc_conf - rootfs path is ./data/srv/lxc/as1., mount is ./usr/lib/x86_64-linux-gnu/lxc. lxc-start 1383145786.577 INFO lxc_apparmor - I am 1, /proc/self points to 1 lxc-start 1383145786.577 DEBUG lxc_conf - created '/usr/lib/x86_64-linux-gnu/lxc/lxc_putold' directory lxc-start 1383145786.577 DEBUG lxc_conf - mountpoint for old rootfs is '/usr/lib/x86_64-linux-gnu/lxc/lxc_putold' lxc-start 1383145786.577 DEBUG lxc_conf - pivot_root syscall to '/usr/lib/x86_64-linux-gnu/lxc' successful lxc-start 1383145786.577 DEBUG lxc_conf - umounted '/lxc_putold/dev/pts' lxc-start 1383145786.577 DEBUG lxc_conf - umounted '/lxc_putold/run/lock' lxc-start 1383145786.577 DEBUG lxc_conf - umounted '/lxc_putold/run/shm' lxc-start 1383145786.577 DEBUG lxc_conf - umounted '/lxc_putold/run/user' lxc-start 1383145786.577 DEBUG lxc_conf - umounted '/lxc_putold/sys/fs/cgroup/cpuset' lxc-start 1383145786.577 DEBUG lxc_conf - umounted '/lxc_putold/sys/fs/cgroup/cpu' lxc-start 1383145786.577 DEBUG lxc_conf - umounted '/lxc_putold/sys/fs/cgroup/cpuacct' lxc-start 1383145786.577 DEBUG lxc_conf - umounted '/lxc_putold/sys/fs/cgroup/memory' lxc-start 1383145786.577 DEBUG lxc_conf - umounted '/lxc_putold/sys/fs/cgroup/devices' lxc-start 1383145786.577 DEBUG lxc_conf - umounted '/lxc_putold/sys/fs/cgroup/freezer' lxc-start 1383145786.577 DEBUG lxc_conf - umounted '/lxc_putold/sys/fs/cgroup/blkio' lxc-start 1383145786.577 DEBUG lxc_conf - umounted '/lxc_putold/sys/fs/cgroup/perf_event' lxc-start 1383145786.577 DEBUG lxc_conf - umounted '/lxc_putold/sys/fs/cgroup/hugetlb' lxc-start 1383145786.577 DEBUG lxc_conf - umounted '/lxc_putold/sys/fs/cgroup/systemd' lxc-start 1383145786.577 DEBUG lxc_conf - umounted '/lxc_putold/sys/fs/fuse/connections' lxc-start 1383145786.577 DEBUG lxc_conf - umounted '/lxc_putold/sys/kernel/debug' lxc-start 1383145786.577 DEBUG lxc_conf - umounted '/lxc_putold/sys/kernel/security' lxc-start 1383145786.577 DEBUG lxc_conf - umounted '/lxc_putold/sys/fs/pstore' lxc-start 1383145786.577 DEBUG lxc_conf - umounted '/lxc_putold/proc' lxc-start 1383145786.577 DEBUG lxc_conf - umounted '/lxc_putold/data' lxc-start 1383145786.577 
DEBUG lxc_conf - umounted '/lxc_putold/boot' lxc-start 1383145786.577 DEBUG lxc_conf - umounted '/lxc_putold/dev' lxc-start 1383145786.577 DEBUG lxc_conf - umounted '/lxc_putold/run' lxc-start 1383145786.577 DEBUG lxc_conf - umounted '/lxc_putold/sys/fs/cgroup' lxc-start 1383145786.577 DEBUG lxc_conf - umounted '/lxc_putold/sys' lxc-start 1383145786.577 DEBUG lxc_conf - umounted '/lxc_putold' lxc-start 1383145786.577 INFO lxc_conf - created new pts instance lxc-start 1383145786.578 DEBUG lxc_conf - drop capability 'sys_boot' (22) lxc-start 1383145786.578 DEBUG lxc_conf - capabilities have been setup lxc-start 1383145786.578 NOTICE lxc_conf - 'as1' is setup. lxc-start 1383145786.578 DEBUG lxc_cgroup - cgroup 'memory.limit_in_bytes' set to '20G' lxc-start 1383145786.578 DEBUG lxc_cgroup - cgroup 'cpuset.cpus' set to '12-23' lxc-start 1383145786.578 INFO lxc_cgroup - cgroup has been setup lxc-start 1383145786.578 INFO lxc_apparmor - setting up apparmor lxc-start 1383145786.578 INFO lxc_apparmor - changed apparmor profile to lxc-container-default lxc-start 1383145786.578 NOTICE lxc_start - exec'ing '/sbin/init' lxc-start 1383145786.578 INFO lxc_conf - looking at .15 20 0:14 / /sys rw,nosuid,nodev,noexec,relatime - sysfs sysfs rw . lxc-start 1383145786.578 INFO lxc_conf - now p is . /sys. lxc-start 1383145786.578 INFO lxc_conf - looking at .16 20 0:3 / /proc rw,nosuid,nodev,noexec,relatime - proc proc rw . lxc-start 1383145786.578 INFO lxc_conf - now p is . /proc. lxc-start 1383145786.578 INFO lxc_conf - looking at .17 20 0:5 / /dev rw,relatime - devtmpfs udev rw,size=32961632k,nr_inodes=8240408,mode=755 . lxc-start 1383145786.578 INFO lxc_conf - now p is . /dev. lxc-start 1383145786.578 INFO lxc_conf - looking at .18 17 0:11 / /dev/pts rw,nosuid,noexec,relatime - devpts devpts rw,mode=600,ptmxmode=000 . lxc-start 1383145786.578 INFO lxc_conf - now p is . /dev/pts. lxc-start 1383145786.578 INFO lxc_conf - looking at .19 20 0:15 / /run rw,nosuid,noexec,relatime - tmpfs tmpfs rw,size=6594456k,mode=755 . lxc-start 1383145786.578 INFO lxc_conf - now p is . /run. lxc-start 1383145786.578 INFO lxc_conf - looking at .20 1 252:0 / / rw,relatime - ext4 /dev/mapper/limitorderbook1-root rw,errors=remount-ro,data=ordered . lxc-start 1383145786.578 INFO lxc_conf - now p is . /. lxc-start 1383145786.578 INFO lxc_conf - looking at .22 15 0:16 / /sys/fs/cgroup rw,relatime - tmpfs none rw,size=4k,mode=755 . lxc-start 1383145786.578 INFO lxc_conf - now p is . /sys/fs/cgroup. lxc-start 1383145786.578 INFO lxc_conf - looking at .23 15 0:17 / /sys/fs/fuse/connections rw,relatime - fusectl none rw . lxc-start 1383145786.578 INFO lxc_conf - now p is . /sys/fs/fuse/connections. lxc-start 1383145786.578 INFO lxc_conf - looking at .24 15 0:6 / /sys/kernel/debug rw,relatime - debugfs none rw . lxc-start 1383145786.579 INFO lxc_conf - now p is . /sys/kernel/debug. lxc-start 1383145786.579 INFO lxc_conf - looking at .25 15 0:10 / /sys/kernel/security rw,relatime - securityfs none rw . lxc-start 1383145786.579 INFO lxc_conf - now p is . /sys/kernel/security. lxc-start 1383145786.579 INFO lxc_conf - looking at .26 19 0:18 / /run/lock rw,nosuid,nodev,noexec,relatime - tmpfs none rw,size=5120k . lxc-start 1383145786.579 INFO lxc_conf - now p is . /run/lock. lxc-start 1383145786.579 INFO lxc_conf - looking at .27 19 0:19 / /run/shm rw,nosuid,nodev,relatime - tmpfs none rw . lxc-start 1383145786.579 INFO lxc_conf - now p is . /run/shm. 
lxc-start 1383145786.579 INFO lxc_conf - looking at .28 22 0:20 / /sys/fs/cgroup/cpuset rw,relatime - cgroup cgroup rw,cpuset,clone_children . lxc-start 1383145786.579 INFO lxc_conf - now p is . /sys/fs/cgroup/cpuset. lxc-start 1383145786.579 INFO lxc_conf - looking at .29 19 0:21 / /run/user rw,nosuid,nodev,noexec,relatime - tmpfs none rw,size=102400k,mode=755 . lxc-start 1383145786.579 INFO lxc_conf - now p is . /run/user. lxc-start 1383145786.579 INFO lxc_conf - looking at .30 15 0:22 / /sys/fs/pstore rw,relatime - pstore none rw . lxc-start 1383145786.579 INFO lxc_conf - now p is . /sys/fs/pstore. lxc-start 1383145786.579 INFO lxc_conf - looking at .31 22 0:23 / /sys/fs/cgroup/cpu rw,relatime - cgroup cgroup rw,cpu . lxc-start 1383145786.579 INFO lxc_conf - now p is . /sys/fs/cgroup/cpu. lxc-start 1383145786.579 INFO lxc_conf - looking at .32 22 0:24 / /sys/fs/cgroup/cpuacct rw,relatime - cgroup cgroup rw,cpuacct . lxc-start 1383145786.579 INFO lxc_conf - now p is . /sys/fs/cgroup/cpuacct. lxc-start 1383145786.579 INFO lxc_conf - looking at .33 22 0:25 / /sys/fs/cgroup/memory rw,relatime - cgroup cgroup rw,memory . lxc-start 1383145786.579 INFO lxc_conf - now p is . /sys/fs/cgroup/memory. lxc-start 1383145786.579 INFO lxc_conf - looking at .34 22 0:26 / /sys/fs/cgroup/devices rw,relatime - cgroup cgroup rw,devices . lxc-start 1383145786.579 INFO lxc_conf - now p is . /sys/fs/cgroup/devices. lxc-start 1383145786.579 INFO lxc_conf - looking at .35 22 0:27 / /sys/fs/cgroup/freezer rw,relatime - cgroup cgroup rw,freezer . lxc-start 1383145786.579 INFO lxc_conf - now p is . /sys/fs/cgroup/freezer. lxc-start 1383145786.579 INFO lxc_conf - looking at .36 22 0:28 / /sys/fs/cgroup/blkio rw,relatime - cgroup cgroup rw,blkio . lxc-start 1383145786.579 INFO lxc_conf - now p is . /sys/fs/cgroup/blkio. lxc-start 1383145786.579 INFO lxc_conf - looking at .37 22 0:29 / /sys/fs/cgroup/perf_event rw,relatime - cgroup cgroup rw,perf_event . lxc-start 1383145786.579 INFO lxc_conf - now p is . /sys/fs/cgroup/perf_event. lxc-start 1383145786.579 INFO lxc_conf - looking at .38 22 0:30 / /sys/fs/cgroup/hugetlb rw,relatime - cgroup cgroup rw,hugetlb . lxc-start 1383145786.579 INFO lxc_conf - now p is . /sys/fs/cgroup/hugetlb. lxc-start 1383145786.579 INFO lxc_conf - looking at .39 20 9:2 / /data rw,relatime - ext4 /dev/md2 rw,errors=remount-ro,data=ordered . lxc-start 1383145786.579 INFO lxc_conf - now p is . /data. lxc-start 1383145786.579 INFO lxc_conf - looking at .40 20 8:1 / /boot rw,relatime - ext2 /dev/sda1 rw,errors=continue . lxc-start 1383145786.579 INFO lxc_conf - now p is . /boot. lxc-start 1383145786.579 INFO lxc_conf - looking at .41 22 0:31 / /sys/fs/cgroup/systemd rw,nosuid,nodev,noexec,relatime - cgroup systemd rw,name=systemd . lxc-start 1383145786.579 INFO lxc_conf - now p is . /sys/fs/cgroup/systemd. lxc-start 1383145786.579 NOTICE lxc_start - '/sbin/init' started with pid '6249' lxc-start 1383145786.579 WARN lxc_start - invalid pid for SIGCHLD <4>init: ureadahead main process (7) terminated with status 5 <4>init: console-font main process (94) terminated with status 1 And it will just sit there like that for hours at least. The container becomes pingable but I can't ssh and if I try lxc-console -n as1 I get a blank screen. If I do lxc-stop -n as1 or ^C in the window where it has hung I get: ^CTERM environment variable not set. 
<4>init: plymouth-upstart-bridge main process (192) terminated with status 1 <4>init: hwclock-save main process (187) terminated with status 70 * Asking all remaining processes to terminate... ...done. * All processes ended within 1 seconds... ...done. * Deactivating swap... ...fail! mount: cannot mount block device /dev/md2 read-only * Will now restart But after 20 minutes it hasn't restarted. Any ideas why these containers are hanging?

    Read the article

  • CLSF & CLK 2013 Trip Report by Jeff Liu

    - by jamesmorris
    This is a contributed post from Jeff Liu, lead XFS developer for the Oracle mainline Linux kernel team. Recently, I attended both the China Linux Storage and Filesystem workshop (CLSF) and the China Linux Kernel conference (CLK), which were held in Shanghai. Here are the highlights of both events.

    CLSF - 17th October

    XFS update (led by Jeff Liu)

    XFS keeps up a rapid pace of change, focused especially on infrastructure/performance improvements as well as new feature development. This is reflected in a sample statistic comparing XFS/Ext4+JBD2/Btrfs via:

        # git diff --stat --minimal -C -M v3.7..v3.12-rc4 -- fs/xfs|fs/ext4+fs/jbd2|fs/btrfs
        XFS: 141 files changed, 27598 insertions(+), 19113 deletions(-)
        Ext4+JBD2: 39 files changed, 10487 insertions(+), 5454 deletions(-)
        Btrfs: 70 files changed, 19875 insertions(+), 8130 deletions(-)

    What made up those changes in XFS?

    - Self-describing metadata (CRC32c). This is a new feature and contributed about 70% of the code changes; it can be enabled via `mkfs.xfs -m crc=1 /dev/xxx` for the v5 superblock.
    - Transaction log space reservation improvements. With this change, we can calculate the log space reservation at mount time rather than at runtime, reducing CPU overhead.
    - User namespace support, so both XFS and USERNS can be enabled in the kernel configuration from Linux 3.10 onward. Thanks to Dwight Engen for his efforts on this.
    - Split project/group quota inodes. Originally, project quota could not be enabled together with group quota because they shared the same quota file inode; now it works, but only for the v5 superblock, i.e. with CRC enabled.
    - CONFIG_XFS_WARN, a new lightweight runtime debug mode which can be deployed in a production environment.
    - Readahead of log objects during recovery; this change speeds up log replay significantly.
    - Speculative preallocation inode tracking, clearing and throttling. The main purpose is to deal with inodes that have post-EOF space due to speculative preallocation, and to support improved quota management, freeing up a significant amount of unwritten space when at or near EDQUOT. It supports background scanning, which occurs at a longish interval (5 minutes by default, tunable), and on-demand scanning/trimming via ioctl(2).

    Bitter arguments ensued in this session, especially around the comparison between Ext4 and Btrfs in different areas; I had to spend a whole morning of the first day answering those questions. We basically agreed that XFS is the best choice in Linux nowadays because:

    - Stability: XFS has a good stability record over the past 10 years. Fengguang Wu, who leads the 0-day kernel test project, also said that he has observed fewer errors than in other filesystems over the past year or more. I owe this to the XFS upstream code reviewers, who always perform serious code review as well as testing.
    - Good performance for large and small files; the claim that XFS does not work well for small files is an old story by now.
    - Best choice (maybe) for distributed PB-scale filesystems, e.g. Ceph recommends deploying the OSD daemon on XFS because Ext4 has a limited xattr size.
    - Best choice for large storage (>16TB). Ext4 does not support a single file larger than around 15.95TB.
    - Scalability: any objection that XFS is best on this point? :)
    - XFS deals with transaction concurrency better than Ext4. Why? The maximum size of the log in XFS is 2038MB, compared to 128MB in Ext4.
    - Misc: Ext4 is widely used and has been proved fast and stable in various loads and scenarios; XFS just needs more customers, and Btrfs is still on the road to maturity.

    Ceph Introduction (led by Li Wang)

    This is a hot topic. Li gave us a nice introduction to the design as well as their current work. Actually, the Ceph client has been included in the Linux kernel since 2.6.34 and supported by OpenStack since Folsom, but it seems that it has not yet been widely deployed in production environments. Their major work focuses on inline data support: separating the metadata and data storage to reduce file access time. A file access requires two round trips - fetch the metadata from the MDS, then get the data from the OSD - and small-file access is also limited by network latency. The solution is, for small files, to store the data with the metadata, so that when accessing a small file the metadata server can push both metadata and data to the client at the same time. In this way they reduce the overhead of calculating the data offset and save the communication to the OSD. For this feature they have only run some small-scale testing, but they really saw noticeable improvements. Test environment: Intel 2-CPU 12-core, 64GB RAM, Ubuntu 12.04, Ceph 0.56.6 with a 200GB SATA disk, 15 OSDs, 1 MDS, 1 MON. Sequential read performance for 1K-size files improved by about 50%. I asked Li and Zheng Yan (the core developer of Ceph, who also worked on Btrfs) whether Ceph is really stable enough to be deployed in a production environment for large-scale PB-level storage, but they could not give a positive answer; it looks as if Ceph is not even rolled out across DreamHost (subject to confirmation). According to Li, they have only deployed Ceph for small-scale storage (32 nodes), although they would like to try 6000 nodes in the future.

    Improving Linux swap for flash storage (led by Shaohua Li)

    Because of its high density, low power, and low price, flash storage (SSD) is a good candidate to partially replace DRAM. A quick answer for this is to use SSD as swap. But Linux swap is designed for slow hard-disk storage, so there are a lot of challenges to using SSD efficiently for swap.

    SWAPOUT:

    - swap_map scan: swap_map is the in-memory data structure that tracks swap disk usage, but it is a slow linear scan. It becomes a bottleneck when finding many adjacent pages in the use of SSD. Shaohua Li changed it to a cluster (128K) list, resulting in an O(1) algorithm. However, this approach needs restrictive cluster alignment and is only enabled for SSDs.
    - IO pattern: in most cases, the swap I/O is interleaved, because of multiple reclaimers or because a free cluster is shared by all reclaimers. Even though the block layer can merge interleaved I/O to some extent, we cannot count on it completely. Hence a per-CPU cluster was added on top of the previous change; it helps a reclaimer do sequential I/O, and the block layer finds it easier to merge the I/O.
    - TLB flush: if we are reclaiming one active page, we should first move the page from the active LRU list to the inactive LRU list, and then reclaim the page from the inactive LRU to swap it out. During the process we need to clear the PTE twice: first the 'A' (ACCESSED) bit, second the 'P' (PRESENT) bit. Processors need to send lots of IPIs, which make the TLB flush really expensive. Some work has been done to improve this, including reworking smp_call_function_many() or removing the first TLB flush on x86, but there are still some arguments here and only parts of the work have been pushed to mainline.

    SWAPIN: A page fault does iodepth=1 synchronous I/O, but it is a bit wasteful to issue only a page-sized I/O. The obvious solution is swap readahead. But the current in-kernel swap readahead is arbitrary (always 8 pages), and it does not perform well for either random or sequential access workloads. Shaohua introduced a new flag for madvise(MADV_WILLNEED) to do swap prefetch, so the changes happen in the userspace API and leave the in-kernel readahead unchanged (though I think some improvement can also be done there).

    SWAP discard: As we know, discard is important for SSD write throughput, but the current swap discard implementation is synchronous. He changed it to asynchronous discard, which allows discard and write to run at the same time. Meanwhile, the unit of discard was also optimized to the cluster.

    Misc: lock contention. With many concurrent swapouts and swapins, contention on locks such as anon_vma or swap_lock is high, so he changed swap_lock to a per-swap-device lock. But there is still some lock contention on very high-speed SSDs because of the swapcache address_space lock.

    Zproject (led by Bob Liu)

    Bob gave us a very nice introduction to the current memory-compression status. There are now 3 projects (zswap/zram/zcache) which all aim at smoothing swap I/O storms and improving performance, but they each have their own pros and cons.

    ZSWAP: Implemented on the frontswap API, it uses a dynamic allocator named zbud to allocate free pages. Zbud means pairs of zpages are "buddied", and it can store at most two compressed pages in one page frame, so the maximum compression ratio is 50%. Each page frame is LRU-linked and can be shrunk under memory pressure. If the compressed memory pool reaches its limit, shrinking or reclaim happens: it decompresses the page frame into two newly allocated pages and then writes them to the real swap device, but this can fail when allocating the two pages.

    ZRAM: Acts as a compressed ramdisk used as a swap device, and it uses zsmalloc as its allocator, which has high density but may have fragmentation issues. Besides, page reclaim is hard, since it needs more pages to uncompress and free just one page. ZRAM is preferred by embedded systems, which may not have any real swap device. Now both ZRAM and ZSWAP are in the driver/staging tree, and in the mm community there have been some discussions about merging ZRAM into ZSWAP, or vice versa, but no agreement yet.

    ZCACHE: Handles file page compression, but it was removed from staging recently.

    From industry (led by Tang Jie, LSI)

    An LSI engineer introduced several new products to us. The first is RAID 5/6 cards that use full-stripe writes to improve performance. The second is the SandForce flash controller, which can understand data file types (data entropy) to reduce write amplification (WA) for nearly all writes; it's called DuraWrite, and a typical WA is 0.5. What's more, if its Dynamic Logical Capacity function module is enabled, the controller can do data compression that is transparent to the upper layers. LSI testing shows that with this virtual capacity enabled, a 1x TB drive can support up to 2x TB of capacity, but the application must monitor free flash space to maintain optimal performance and to guard against free-flash-space exhaustion. He said the most useful application is for databases. Another thing worth mentioning is an NV-DRAM memory in NMR/Raptor which is directly exposed to the host system. Applications can directly access the NV-DRAM via a memory address, using the standard system call mmap(). He said that it is very useful for database logging now.
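    That mmap() point is worth a tiny illustration. The sketch below is a hedged Java analogue: it maps a region of a file into the address space and writes a log record straight into the mapped memory. The /dev/nvdram0 path is hypothetical (on a stock system you would map an ordinary file), and Java's MappedByteBuffer is simply a thin wrapper over mmap():

        // Hedged sketch: map 4 KB of a (hypothetical) NV-DRAM device node and
        // write a log record directly into the mapped region.
        import java.nio.MappedByteBuffer;
        import java.nio.channels.FileChannel;
        import java.nio.file.Path;
        import java.nio.file.StandardOpenOption;

        public class NvLogSketch {
            public static void main(String[] args) throws Exception {
                try (FileChannel ch = FileChannel.open(Path.of("/dev/nvdram0"),
                        StandardOpenOption.READ, StandardOpenOption.WRITE)) {
                    MappedByteBuffer log = ch.map(FileChannel.MapMode.READ_WRITE, 0, 4096);
                    log.putLong(0, System.nanoTime()); // the store lands in mapped memory
                    log.force();                       // flush the dirty range
                }
            }
        }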
    This kind of NVM product has begun to appear in recent years, and it is said that Samsung is building a research center in China for related products. IMHO, NVM will have an effect on the current OS layers, especially on file systems; e.g. journaling may need to be redesigned to fully utilize these nonvolatile memories.

    OCFS2 (led by Canquan Shen)

    Without a doubt, Huawei is the biggest contributor to OCFS2 over the past two years. They have posted 46 upstream patches, and 39 patches have been merged. Their current project is based on 32/64-node clusters, but they have also tried 128 nodes at the experimental stage. The major work they are doing is to support ATS (atomic test and set), which can work together with DLM. This idea looks inspired by VMware's VMFS locking, i.e. http://blogs.vmware.com/vsphere/2012/05/vmfs-locking-uncovered.html

    CLK - 18th October 2013

    Improving Linux Development with Better Tools (Andi Kleen)

    This talk focused on how to find and solve bugs as Linux grows in complexity. Generally, we can do this with the following kinds of tools:

    - Static code checkers, e.g. sparse, smatch, coccinelle, clang checker, checkpatch, gcc -W/LTO, stanse. These can check a lot of things, from simple mistakes to complex problems, but the challenges are: some are very slow; false positives; it may take a concentrated effort to get false positives down. In particular, no static checker I have found can follow indirect calls ("OO in C", common in the kernel): struct foo_ops { int (*do_foo)(struct foo *obj); }; foo->do_foo(foo);
    - Dynamic runtime checkers, e.g. thread checkers, kmemcheck, lockdep. Ideally all kernel code would come with a test suite; then someone could run all the dynamic checkers.
    - Fuzzers/test suites. E.g. Trinity is a great tool; it finds many bugs, but needs a manual model for each syscall. Modern fuzzers use automatic feedback, but not for the kernel yet: http://taviso.decsystem.org/making_software_dumber.pdf
    - Debuggers/tracers to understand code, e.g. ftrace; these can dump on events/oopses/custom triggers, but in many cases there is still too much overhead to run them at all times during debugging.
    - Tools to read and understand source, e.g. grep/cscope work great for many cases, but they do not understand indirect pointers (the OO-in-C model used in the kernel) - "give us all do_foo instances": struct foo_ops { int (*do_foo)(struct foo *obj); } = { .do_foo = my_foo }; foo->do_foo(foo); It would be great to have a cscope-like tool that understands this based on types/initializers.

    XFS: The High Performance Enterprise File System (Jeff Liu) [slides]

    I gave a talk introducing the disk layout and unique features, as well as the recent changes. The slides include some charts reflecting the performance of XFS/Btrfs/Ext4 for small files. About a dozen attendees raised their hands when I asked who had experience with XFS. I remember that when I asked the same question at LinuxCon Japan, only 3 people raised their hands, but they were Chris Mason, Ric Wheeler, and another attendee. The attendee questions were mainly focused on stability and comparison with other file systems.

    Linux Containers (Feng Gao)

    The speaker introduced the purpose of the various kinds of namespaces, including mount/UTS/IPC/network/PID/user, as well as the system API/ABI. For the userspace tools, he mainly focused on libvirt LXC rather than LXC itself. Libvirt LXC is another userspace container management tool, implemented as one type of libvirt driver; it can manage containers, create namespaces, create a private filesystem layout for a container, create devices for a container, and set up resource controllers via cgroups. In this talk, Feng also mentioned two other possible new namespaces for the future: the first is audit, though it is not yet settled whether it should be assigned to the user namespace; the other is syslog, but the question is, do we really need it?

    In-memory Compression (Bob Liu)

    Same as at CLSF, a nice introduction that I have already mentioned above.

    Misc

    There were some other talks related to ACPI-based memory hotplug, smart wake-affinity in the scheduler, etc., but my head is not big enough to record all those things.

    -- Jeff Liu

    Read the article

  • sudo apt-get install mysql-server fails

    - by danwoods
    Hi all, I'm coming from a fresh install of Ubuntu Server 9.10 and trying to install mysql-server by using 'sudo apt-get install mysql-server'. I get the following errors:

        dan@dev:~$ sudo apt-get install mysql-server
        [sudo] password for dan:
        Reading package lists... Done
        Building dependency tree
        Reading state information... Done
        The following extra packages will be installed:
          libdbd-mysql-perl libdbi-perl libhtml-template-perl libnet-daemon-perl
          libplrpc-perl mysql-client-5.1 mysql-server-5.1
        Suggested packages:
          dbishell libipc-sharedcache-perl tinyca
        The following NEW packages will be installed:
          libdbd-mysql-perl libdbi-perl libhtml-template-perl libnet-daemon-perl
          libplrpc-perl mysql-client-5.1 mysql-server mysql-server-5.1
        0 upgraded, 8 newly installed, 0 to remove and 0 not upgraded.
        Need to get 16.5MB of archives.
        After this operation, 39.0MB of additional disk space will be used.
        Do you want to continue [Y/n]? y
        Get:1 http://us.archive.ubuntu.com karmic/main libnet-daemon-perl 0.43-1 [46.9kB]
        Get:2 http://us.archive.ubuntu.com karmic/main libplrpc-perl 0.2020-2 [36.0kB]
        Get:3 http://us.archive.ubuntu.com karmic/main libdbi-perl 1.609-1 [800kB]
        Get:4 http://us.archive.ubuntu.com karmic/main libdbd-mysql-perl 4.011-1ubuntu1 [136kB]
        Get:5 http://us.archive.ubuntu.com karmic-updates/main mysql-client-5.1 5.1.37-1ubuntu5.1 [8,202kB]
        Get:6 http://us.archive.ubuntu.com karmic-updates/main mysql-server-5.1 5.1.37-1ubuntu5.1 [7,186kB]
        Get:7 http://us.archive.ubuntu.com karmic/main libhtml-template-perl 2.9-1 [65.8kB]
        Get:8 http://us.archive.ubuntu.com karmic-updates/main mysql-server 5.1.37-1ubuntu5.1 [64.3kB]
        Fetched 16.5MB in 1min 34s (175kB/s)
        Preconfiguring packages ...
        Selecting previously deselected package libnet-daemon-perl.
        (Reading database ... 123083 files and directories currently installed.)
        Unpacking libnet-daemon-perl (from .../libnet-daemon-perl_0.43-1_all.deb) ...
        Selecting previously deselected package libplrpc-perl.
        Unpacking libplrpc-perl (from .../libplrpc-perl_0.2020-2_all.deb) ...
        Selecting previously deselected package libdbi-perl.
        Unpacking libdbi-perl (from .../libdbi-perl_1.609-1_i386.deb) ...
        Selecting previously deselected package libdbd-mysql-perl.
        Unpacking libdbd-mysql-perl (from .../libdbd-mysql-perl_4.011-1ubuntu1_i386.deb) ...
        Selecting previously deselected package mysql-client-5.1.
        Unpacking mysql-client-5.1 (from .../mysql-client-5.1_5.1.37-1ubuntu5.1_i386.deb) ...
        Selecting previously deselected package mysql-server-5.1.
        Unpacking mysql-server-5.1 (from .../mysql-server-5.1_5.1.37-1ubuntu5.1_i386.deb) ...
        Selecting previously deselected package libhtml-template-perl.
        Unpacking libhtml-template-perl (from .../libhtml-template-perl_2.9-1_all.deb) ...
        Selecting previously deselected package mysql-server.
        Unpacking mysql-server (from .../mysql-server_5.1.37-1ubuntu5.1_all.deb) ...
        Processing triggers for man-db ...
        Processing triggers for ureadahead ...
        ureadahead will be reprofiled on next reboot
        Setting up libnet-daemon-perl (0.43-1) ...
        Setting up libplrpc-perl (0.2020-2) ...
        Setting up libdbi-perl (1.609-1) ...
        Setting up libdbd-mysql-perl (4.011-1ubuntu1) ...
        Setting up mysql-client-5.1 (5.1.37-1ubuntu5.1) ...
        Setting up mysql-server-5.1 (5.1.37-1ubuntu5.1) ...
         * Stopping MySQL database server mysqld    [ OK ]
         * Starting MySQL database server mysqld    [fail]
        invoke-rc.d: initscript mysql, action "start" failed.
        dpkg: error processing mysql-server-5.1 (--configure):
         subprocess installed post-installation script returned error exit status 1
        Setting up libhtml-template-perl (2.9-1) ...
        dpkg: dependency problems prevent configuration of mysql-server:
         mysql-server depends on mysql-server-5.1; however:
          Package mysql-server-5.1 is not configured yet.
        dpkg: error processing mysql-server (--configure):
         dependency problems - leaving unconfigured
        No apport report written because the error message indicates its a followup error from a previous failure.
        Errors were encountered while processing:
         mysql-server-5.1
         mysql-server
        E: Sub-process /usr/bin/dpkg returned an error code (1)

    What am I missing?

    [Update] mysqld returns:

        dan@dev:~$ sudo mysqld
        [sudo] password for dan:
        100220 12:18:17 [Note] Plugin 'FEDERATED' is disabled.
        InnoDB: Unable to lock ./ibdata1, error: 11
        InnoDB: Check that you do not already have another mysqld process
        InnoDB: using the same InnoDB data or log files.
        100220 12:18:17 InnoDB: Retrying to lock the first data file
        InnoDB: Unable to lock ./ibdata1, error: 11
        InnoDB: Check that you do not already have another mysqld process

    This goes on for a while...

        InnoDB: Unable to lock ./ibdata1, error: 11
        InnoDB: Check that you do not already have another mysqld process
        InnoDB: using the same InnoDB data or log files.
        100220 12:19:57 InnoDB: Unable to open the first data file
        InnoDB: Error in opening ./ibdata1
        100220 12:19:57 InnoDB: Operating system error number 11 in a file operation.
        InnoDB: Error number 11 means 'Resource temporarily unavailable'.
        InnoDB: Some operating system error numbers are described at
        InnoDB: http://dev.mysql.com/doc/refman/5.1/en/operating-system-error-codes.html
        InnoDB: Could not open or create data files.
        InnoDB: If you tried to add new data files, and it failed here,
        InnoDB: you should now edit innodb_data_file_path in my.cnf back
        InnoDB: to what it was, and remove the new ibdata files InnoDB created
        InnoDB: in this failed attempt. InnoDB only wrote those files full of
        InnoDB: zeros, but did not yet use them in any way. But be careful: do not
        InnoDB: remove old data files which contain your precious data!
        100220 12:19:57 [ERROR] Plugin 'InnoDB' init function returned error.
        100220 12:19:57 [ERROR] Plugin 'InnoDB' registration as a STORAGE ENGINE failed.
        100220 12:19:57 [ERROR] Can't start server: Bind on TCP/IP port: Address already in use
        100220 12:19:57 [ERROR] Do you already have another mysqld server running on port: 3306 ?
        100220 12:19:57 [ERROR] Aborting
        100220 12:19:57 [Warning] Forcing shutdown of 1 plugins
        100220 12:19:57 [Note] mysqld: Shutdown complete

    How can I check what process is using port 3306?

    [Update]: sudo netstat -anp | grep LISTEN returns:

        dan@dev:~$ sudo netstat -anp | grep LISTEN
        [sudo] password for dan:
        tcp   0   0 0.0.0.0:25       0.0.0.0:*   LISTEN   1372/master
        tcp   0   0 127.0.0.1:3306   0.0.0.0:*   LISTEN   4391/mysqld
        tcp   0   0 127.0.0.1:631    0.0.0.0:*   LISTEN   1409/cupsd
        tcp6  0   0 ::1:631          :::*        LISTEN   1409/cupsd

    [More Updates]: I can log into mysql, if that makes a difference.

    Read the article

  • This task is currently locked by a running workflow and cannot be edited. Limitation to both Nintex and SPD workflow

    - by ybbest
    Note, this post is from Nintex Forum here. These limitations apply to both SharePoint Designer Workflow and Nintex Workflow, as Nintex uses the SharePoint workflow engine. The common cause that I experience is the ‘parent’ workflow generating more than one task at once. This is common, as you can have multiple approvers for a given approval process. You could also have a workflow running when the task is created; a common scenario is setting a custom column value in your approval task. For me this is a huge limitation; as a Nintex lover I really hope Nintex can solve this problem with Microsoft going forward. Introduction “This task is currently locked by a running workflow and cannot be edited” is a common message that is seen when an error occurs while the SharePoint workflow engine is processing a task item associated with a workflow. When a workflow processes a task normally, the following sequence of events is expected to occur: 1. The process begins. 2. The workflow places a ‘lock’ on the task so nothing else can change the values while the workflow is processing. 3. The workflow processes the task. 4. The lock is released when the task processing is finished. When the message is encountered, it usually indicates that an error occurred between step 2 and 4. As a result, the lock is never released. Therefore, the ‘task locked’ message is not an error itself, rather a symptom of another error – the ‘task locked’ message does not indicate what went wrong. In most cases, once this message is encountered, the workflow cannot be made to continue and must be terminated and started again. The following is a guide that can help troubleshoot the cause of these messages. Some initial observations to narrow down the potential causes are: Is the error consistent or intermittent? When the error is consistent, it will happen every time the workflow is run. When it is intermittent, it may happen regularly, but not every time. Does the error occur the first time the user tries to respond to a task, or do they respond and notice the workflow does not continue, and when they respond again the error occurs? If the message is present when the user first responds to the task, the issue would have occurred when the task was created. Otherwise, it would have occurred when the user attempted to respond to the task. Causes Modifying the task list A cause of this error appearing consistently the first time a user tries to respond to a task is a modification to the default task list schema. For example, changing the ‘Assigned to’ field in a task list to be a multiple selection will cause the behaviour. Deleting the workflow task then restoring it from the Recycle bin If you start a workflow, delete the workflow task then restore it from the Recycle Bin in SharePoint, the workflow will fail with the ‘task locked’ error.  This is confirmed behaviour whether using a SharePoint Designer or a Nintex workflow.  You will need to terminate the workflow and start it again. Parallel simultaneous responses A cause of this error appearing inconsistently is multiple users responding to tasks in parallel at the same time. In this scenario, one task will complete correctly and the other will not process. When the user tries again, the ‘task locked’ message will display. Nintex included a workaround for this issue in build 11000. 
In build 11000 and later, one of the users will receive a message on the task form when they attempt to respond, stating that they need to try again in a few moments. Additional processing on the task A cause of this error appearing consistently and inconsistently is having an additional system running on the items in the task list. Some examples include: a workflow running on the task list, an event receiver running on the task list or another automated process querying and updating workflow tasks. Note: This Microsoft help article (http://office.microsoft.com/en-us/sharepointdesigner/HA102376561033.aspx#5) explains creating a workflow that runs on the task list to update a field on the task. Our experience shows that this causes the ‘Task Locked’ issues when the ‘parent’ workflow is generating more than one task at once. Isolated system error If the error is a rare event, or a ‘one off’ event, then an isolated system error may have occurred. For example, if there is a database connectivity issue while the workflow is processing the task response, the task will lock. In this case, the user will respond to a task but the workflow will not continue. When they respond again, the ‘task locked’ message will display. In this case, there will be an error in the SharePoint ULS Logs at the time that the user originally responded. Temporary delay while workflow processes If the workflow is taking a long time to process after a user submits a task, they may notice and try to respond to the task again. They will see the task locked error, but after a number of attempts (or after waiting some time) the task response page eventually indicates the task has been responded to. In this case, nothing actually went wrong, and the error message gives an accurate indication of what is happening – the workflow temporarily locked the task while it was processing. This scenario may occur in a very large workflow, or after the SharePoint application pool has just started. Modifying the task via a web service with an invalid url If the Nintex Workflow web service is used to respond to or delegate a task, the site context part of the url must be a valid alternative access mapping url. For example, if you access the web service via the IP address of the SharePoint server, and the IP address is not a valid AAM, the task can become locked. The workflow has become stuck without any apparent errors This behaviour can occur as a result of a bug in the SharePoint 2010 workflow engine.  If you do not have the August 2010 Cumulative Update (or later) for SharePoint, and your workflow uses delays, “Flexi-task”, State machine”, “Task Reminder” actions or variables, you could be affected. Check the SharePoint 2010 Updates site here: http://technet.microsoft.com/en-us/sharepoint/ff800847.  The October CU is recommended http://support.microsoft.com/kb/2553031.   The fix is described as “Consider the following scenario. You add a Delay activity to a workflow. Then, you set the duration for the Delay activity. You deploy the workflow in SharePoint Foundation 2010. In this scenario, the workflow is not resumed after the duration of the Delay activity”. If you find this is occurring in your environment, install the October CU, terminate all the running workflows affected and run them afresh. Investigative steps The first step to isolate the issue is to create a new task list on the site and configure the workflow to use it.  Any customizations that were made to the original task list should not be made to the new task list. 
If the new task list eliminates the issue, then the cause can be attributed to the original task list or a change that was made to it. To change the task list that the workflow uses: In Workflow Designer select Settings -> Startup Options, then configure the task list as required. If none of the scenarios above help, check the SharePoint logs for any messages with a category of ‘Workflow Infrastructure’. Conclusion The information in this article has been gathered from observations and investigations by Nintex. The source of these issues is the underlying SharePoint workflow engine. This article will be updated if further causes are discovered. From <http://connect.nintex.com/forums/thread/6503.aspx>

    Read the article

  • Ardour wont start Jack problem

    - by Drew S
    I downloaded Ardour yesterday, it worked, edited an audio file done. Come back today it wont start I get this: Ardour could not start JACK There are several possible reasons: 1) You requested audio parameters that are not supported.. 2) JACK is running as another user. Please consider the possibilities, and perhaps try different parameters. So I try and look at qjackctl to see what happening there. When I try to start JACK I get D-BUS: JACK server could not be started. then Could not connect to JACK server as client. - Overall operation failed. - Unable to connect to server. Please check the messages window for more info. and this is the message box in JACK. 15:22:12.927 Patchbay deactivated. 15:22:12.927 Statistics reset. 15:22:12.944 ALSA connection change. 15:22:12.951 D-BUS: Service is available (org.jackaudio.service aka jackdbus). Cannot connect to server socket err = No such file or directory Cannot connect to server request channel jack server is not running or cannot be started 15:22:12.959 ALSA connection graph change. 15:22:45.850 ALSA connection graph change. 15:22:46.021 ALSA connection change. 15:22:56.492 ALSA connection graph change. 15:22:56.624 ALSA connection change. 15:23:42.340 D-BUS: JACK server could not be started. Sorry Cannot connect to server socket err = No such file or directory Cannot connect to server request channel jack server is not running or cannot be started Wed Oct 23 15:23:42 2013: Starting jack server... Wed Oct 23 15:23:42 2013: JACK server starting in realtime mode with priority 10 Wed Oct 23 15:23:42 2013: ERROR: Cannot lock down 82274202 byte memory area (Cannot allocate memory) Wed Oct 23 15:23:42 2013: Acquired audio card Audio0 Wed Oct 23 15:23:42 2013: creating alsa driver ... hw:0|hw:0|1024|2|44100|0|0|nomon|swmeter|-|32bit Wed Oct 23 15:23:42 2013: ERROR: ATTENTION: The playback device "hw:0" is already in use. The following applications are using your soundcard(s) so you should check them and stop them as necessary before trying to start JACK again: pulseaudio (process ID 2553) Wed Oct 23 15:23:42 2013: ERROR: Cannot initialize driver Wed Oct 23 15:23:42 2013: ERROR: JackServer::Open failed with -1 Wed Oct 23 15:23:42 2013: ERROR: Failed to open server Wed Oct 23 15:23:43 2013: Saving settings to "/home/drew/.config/jack/conf.xml" ... 15:26:41.669 Could not connect to JACK server as client. - Overall operation failed. - Unable to connect to server. Please check the messages window for more info. Cannot connect to server socket err = No such file or directory Cannot connect to server request channel jack server is not running or cannot be started 15:26:49.006 D-BUS: JACK server could not be started. Sorry Wed Oct 23 15:26:48 2013: Starting jack server... Wed Oct 23 15:26:48 2013: JACK server starting in non-realtime mode Wed Oct 23 15:26:48 2013: ERROR: Cannot lock down 82274202 byte memory area (Cannot allocate memory) Cannot connect to server socket err = No such file or directory Cannot connect to server request channel jack server is not running or cannot be started Wed Oct 23 15:26:48 2013: ERROR: cannot register object path "/org/freedesktop/ReserveDevice1/Audio0": A handler is already registered for /org/freedesktop/ReserveDevice1/Audio0 Wed Oct 23 15:26:48 2013: ERROR: Failed to acquire device name : Audio0 error : A handler is already registered for /org/freedesktop/ReserveDevice1/Audio0 Wed Oct 23 15:26:48 2013: ERROR: Audio device hw:0 cannot be acquired... 
Wed Oct 23 15:26:48 2013: ERROR: Cannot initialize driver Wed Oct 23 15:26:48 2013: ERROR: JackServer::Open failed with -1 Wed Oct 23 15:26:48 2013: ERROR: Failed to open server Wed Oct 23 15:26:50 2013: Saving settings to "/home/drew/.config/jack/conf.xml" ... 15:26:52.441 Could not connect to JACK server as client. - Overall operation failed. - Unable to connect to server. Please check the messages window for more info. Cannot connect to server socket err = No such file or directory Cannot connect to server request channel jack server is not running or cannot be started 15:26:55.997 D-BUS: JACK server could not be started. Sorry Wed Oct 23 15:26:55 2013: Starting jack server... Wed Oct 23 15:26:55 2013: JACK server starting in non-realtime mode Wed Oct 23 15:26:55 2013: ERROR: Cannot lock down 82274202 byte memory area (Cannot allocate memory) Cannot connect to server socket err = No such file or directory Cannot connect to server request channel jack server is not running or cannot be started Wed Oct 23 15:26:55 2013: ERROR: cannot register object path "/org/freedesktop/ReserveDevice1/Audio0": A handler is already registered for /org/freedesktop/ReserveDevice1/Audio0 Wed Oct 23 15:26:55 2013: ERROR: Failed to acquire device name : Audio0 error : A handler is already registered for /org/freedesktop/ReserveDevice1/Audio0 Wed Oct 23 15:26:55 2013: ERROR: Audio device hw:0 cannot be acquired... Wed Oct 23 15:26:55 2013: ERROR: Cannot initialize driver Wed Oct 23 15:26:55 2013: ERROR: JackServer::Open failed with -1 Wed Oct 23 15:26:55 2013: ERROR: Failed to open server Wed Oct 23 15:26:57 2013: Saving settings to "/home/drew/.config/jack/conf.xml" ... 15:26:59.054 Could not connect to JACK server as client. - Overall operation failed. - Unable to connect to server. Please check the messages window for more info. Cannot connect to server socket err = No such file or directory Cannot connect to server request channel jack server is not running or cannot be started 15:29:24.624 ALSA connection graph change. 15:29:24.641 ALSA connection change. 15:33:11.760 D-BUS: JACK server could not be started. Sorry Cannot connect to server socket err = No such file or directory Cannot connect to server request channel jack server is not running or cannot be started Wed Oct 23 15:33:11 2013: Starting jack server... Wed Oct 23 15:33:11 2013: JACK server starting in non-realtime mode Wed Oct 23 15:33:11 2013: ERROR: Cannot lock down 82274202 byte memory area (Cannot allocate memory) Wed Oct 23 15:33:11 2013: ERROR: cannot register object path "/org/freedesktop/ReserveDevice1/Audio0": A handler is already registered for /org/freedesktop/ReserveDevice1/Audio0 Wed Oct 23 15:33:11 2013: ERROR: Failed to acquire device name : Audio0 error : A handler is already registered for /org/freedesktop/ReserveDevice1/Audio0 Wed Oct 23 15:33:11 2013: ERROR: Audio device hw:0 cannot be acquired... Wed Oct 23 15:33:11 2013: ERROR: Cannot initialize driver Wed Oct 23 15:33:11 2013: ERROR: JackServer::Open failed with -1 Wed Oct 23 15:33:11 2013: ERROR: Failed to open server Wed Oct 23 15:33:12 2013: Saving settings to "/home/drew/.config/jack/conf.xml" ... 15:34:09.439 Could not connect to JACK server as client. - Overall operation failed. - Unable to connect to server. Please check the messages window for more info. Cannot connect to server socket err = No such file or directory Cannot connect to server request channel jack server is not running or cannot be started

    Read the article

  • Establishing WebLogic Server HTTPS Trust of IIS Using a Microsoft Local Certificate Authority

    - by user647124
    Everyone agrees that self-signed and demo certificates for SSL and HTTPS should never be used in production and are preferably not used elsewhere. Most self-signed and demo certificates are provided by vendors with the intention that they are used only to integrate within the same environment. In a vendor’s perfect world all application servers in a given enterprise are from the same vendor, which makes this lack of interoperability in a non-production environment an advantage. For us working in the real world, where not only do we not use a single vendor everywhere but have to make do with self-signed certificates for all but production, testing HTTPS between an IIS ASP.NET service provider and a WebLogic J2EE consumer application can be very frustrating to set up. It was for me, especially having found many blogs and discussion threads where various solutions were described but did not quite work and were all mostly similar but just a little bit different. To save both you and my future self (who always seems to forget the hardest-won lessons) all of the pain and suffering, I am recording the steps that finally worked here for reference and sanity. How You Know You Need This The first cold clutch of dread that tells you it is going to be a long day is when you attempt to consume a WSDL published by IIS in WebLogic over HTTPS and you see the following: <Jul 30, 2012 2:51:31 PM EDT> <Warning> <Security> <BEA-090477> <Certificate chain received from myserver.mydomain.com - 10.555.55.123 was not trusted causing SSL handshake failure.> weblogic.wsee.wsdl.WsdlException: Failed to read wsdl file from url due to -- javax.net.ssl.SSLKeyException: [Security:090477]Certificate chain received from myserver02.mydomain.com - 10.555.55.123 was not trusted causing SSL handshake failure. The above is what started a three-day sojourn into searching for a solution. Even people who had solved it before would tell me how they did it, and then shrug when I demonstrated that the steps did not end in the success they claimed I would experience. Rather than torture you with the details of everything I did that did not work, here is what finally did work. Export the Certificates from IE First, take the offending WSDL URL and paste it into IE (if you have an internal Microsoft CA, you have IE, even if you don’t use it in favor of some other browser). To state the semi-obvious, if you received the error above there is a certificate configured for the IIS host of the service and the SSL port has been configured properly. Otherwise there would be a different error, usually about the site not found or connection failed. Once the WSDL loads, to the right of the address bar there will be a lock icon. Click the lock and then click View Certificates in the resulting dialog (if you do not have a lock icon but do have a Certificate Error message, see http://support.microsoft.com/kb/931850 for steps to install the certificate, then you can continue from the point of finding the lock icon). Figure 1: View Certificates in IE Next, select the Details tab in the resulting dialog. Figure 2: Use Certificate Details to Export Certificate Click Copy to File, then Next, then select the Base-64 encoded option for the format. Figure 3: Select the Base-64 encoded option for the format For the sake of simplicity, I chose to save this to the root of the WebLogic domain. It will work from anywhere, but later you will need to type in the full path rather than just the certificate name if you save it elsewhere. 
Figure 4: Browse to Save Location Figure 5: Save the Certificate to the Domain Root for Convenience This is the point where I ran into some confusion. Some articles mentioned exporting the entire chain of certificates. This supposedly works for some types of certificates, or if you have a few other tools and the time to learn them. For the SSL experts out there, they already have these tools, know how to use them well, and should not be wasting their time reading this article meant for folks who just want to get things wired up and back to unit testing and development. For the rest of us, the easiest way to make sure things will work is to just export all the links in the chain individually and let WebLogic Server worry about re-assembling them into a chain (which it does quite nicely). While perhaps not the most elegant solution, the multi-step process is easy to repeat and uses only tools that are immediately available and require no learning curve. So… Next, go to Tools then Internet Options then the Content tab and click Certificates. Go to the Trusted Root Certification Authorities tab and find the certificate root for your Microsoft CA cert (look for the Issuer of the certificate you exported earlier). Figure 6: Trusted Root Certification Authorities Tab Export this one the same way as before, with a different name. Figure 7: Use a Unique Name for Each Certificate Repeat this once more for the Intermediate Certification Authorities tab. Import the Certificates to the WebLogic Domain Now, open a command prompt, navigate to [WEBLOGIC_DOMAIN_ROOT]\bin and execute setDomainEnv. You should then be in the root of the domain. If not, CD to the domain root. Assuming you saved the certificate in the domain root, execute the following: keytool -importcert -alias [ALIAS-1] -trustcacerts -file [FULL PATH TO .CER 1] -keystore truststore.jks -storepass [PASSWORD] An example with the variables filled in is: keytool -importcert -alias IIS-1 -trustcacerts -file microsoftcert.cer -keystore truststore.jks -storepass password After several lines of output you will be prompted with: Trust this certificate? [no]: The correct answer is ‘yes’ (minus the quotes, of course). You’ll know you were successful if the response is: Certificate was added to keystore If not, check your typing, as that is generally the source of an error at this point. Repeat this for all three of the certificates you exported, changing the [ALIAS-1] and [FULL PATH TO .CER 1] values each time. For example: keytool -importcert -alias IIS-1 -trustcacerts -file microsoftcert.cer -keystore truststore.jks -storepass password keytool -importcert -alias IIS-2 -trustcacerts -file microsoftcertRoot.cer -keystore truststore.jks -storepass password keytool -importcert -alias IIS-3 -trustcacerts -file microsoftcertIntermediate.cer -keystore truststore.jks -storepass password In the above we created a new JKS key store. You can re-use an existing one by changing the name of the JKS file to one you already have and changing the password to the one that matches that JKS file. For the DemoTrust.jks that is included with WebLogic the password is DemoTrustKeyStorePassPhrase. 
An example here would be: keytool -importcert -alias IIS-1 -trustcacerts -file microsoft.cer -keystore DemoTrust.jks -storepass DemoTrustKeyStorePassPhrase keytool -importcert -alias IIS-2 -trustcacerts -file microsoftRoot.cer -keystore DemoTrust.jks -storepass DemoTrustKeyStorePassPhrase keytool -importcert -alias IIS-3 -trustcacerts -file microsoftInter.cer -keystore DemoTrust.jks -storepass DemoTrustKeyStorePassPhrase Whichever keystore you use, you can check your work with: keytool -list -keystore truststore.jks -storepass password Where “truststore.jks” and “password” can be replaced appropriately if necessary. The output will look something like this: Figure 8: Output from keytool -list -keystore Update the WebLogic Keystore Configuration If you used an existing keystore rather than creating a new one, you can restart your WebLogic Server and skip the rest of this section. For those of us who created a new one because those are the instructions we found online… Next, we need to tell WebLogic to use the JKS file (truststore.jks) we just created. Log in to the WebLogic Server Administration Console and navigate to Servers > AdminServer > Configuration > Keystores. Scroll down to “Custom Trust Keystore:” and change the value to “truststore.jks” and the value of “Custom Trust Keystore Passphrase:” and “Confirm Custom Trust Keystore Passphrase:” to the password you used earlier, then save your changes. You will get a nice message similar to the following: Figure 9: To Be Safe, Restart Anyways The “No restarts are necessary” is somewhat of an exaggeration. If you want to be able to use the keystore you may need to restart the server(s). To save myself aggravation, I always do. Your mileage may vary. Conclusion That should get you there. If there are some erroneous steps included for your situation in particular, I will offer up a semi-apology, as the process described above does not take long at all, and even if there is one step that could be dropped from it, it is still much faster than trying to figure this out from other sources.
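    As a quick sanity check before (or after) restarting WebLogic, the imported chain can also be exercised from a plain JVM client. The following is a minimal sketch and not part of the original walkthrough: the WSDL URL is a placeholder modeled on the example host above, and the trust store name and password are the ones used in the earlier keytool commands.

    import java.net.URL;
    import javax.net.ssl.HttpsURLConnection;

    public class TrustCheck {
        public static void main(String[] args) throws Exception {
            // Point the default SSL context at the newly built trust store.
            System.setProperty("javax.net.ssl.trustStore", "truststore.jks");
            System.setProperty("javax.net.ssl.trustStorePassword", "password");
            // Placeholder URL; use the WSDL address that failed in WebLogic.
            URL wsdl = new URL("https://myserver.mydomain.com/service?wsdl");
            HttpsURLConnection conn = (HttpsURLConnection) wsdl.openConnection();
            conn.connect(); // throws SSLHandshakeException if the chain is still untrusted
            System.out.println("Handshake OK: " + conn.getCipherSuite());
            conn.disconnect();
        }
    }

    If the handshake succeeds here but still fails inside WebLogic, the server is most likely reading a different keystore than the one configured in the console.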

    Read the article

  • WhatsApp & Tasker for Android – Read & Write messages

    - by Shaurya Anand
    So, I finally gave up on all my previous Microsoft Mobile/Phone OS devices and made my switch to Android this year. I am using my Samsung Galaxy Note GT-N7000 with CyanogenMod 9.1.0 (http://get.cm/get/jenkins/7086/cm-9.1.0-n7000.zip) and ClockworkMod 6.0.1.2 (http://download2.clockworkmod.com/recoveries/recovery-clockwork-6.0.1.2-n7000.zip) since August this year and I am so happy with the performance and the flexibility it offers me. As a software developer by profession, I would expect most of my gadgets to be highly customizable and programmable (one time or at intervals) to suit my needs as closely as they can. I was introduced to Automation for Android – Tasker (https://play.google.com/store/apps/details?id=net.dinglisch.android.taskerm&hl=en) via reddit (http://www.reddit.com/r/tasker) and the word ‘automation’ was enough for me to dive right into this app. The only automation I did earlier was switching profiles depending on location on those phones. And now, just imagine a complete set of possibilities that can be automated on the phone or via the phone. I did my research and found a couple of other tools that do the same, or close to what Tasker can do, and a few of them are even free. There’s one even by Microsoft called on{X} (https://play.google.com/store/apps/details?id=com.microsoft.onx.app&hl=en). Microsoft’s on{X} really caught my eye. You can write code for your phone on their web application, deploy it on your phone and even trace the flow all using your PC. Really brilliant, I love the fact that it’s all JavaScript. Here comes the but: it is still very, very young, and its policy of accessing my News Feed on Facebook is not something that I can digest. On{X} is good, but as I said earlier, the API is not very mature and hence, I gave up on it. I bought Tasker, the best 5,00 € I spent in ages, and I want to talk about it in this post. I am still a “noob” while operating this tool, but I tried my shot at automating WhatsApp (https://play.google.com/store/apps/details?id=com.whatsapp&hl=en), a popular messenger for various platforms. The requirement for the automation is that if I send a WhatsApp ‘wru’ message to the phone, it should respond back giving the location and battery level of my phone. It could be useful if you would like to locate your misplaced phone or automatically reply to your partner/friend; honestly, I don’t know what you will use it for - through this post, I am just introducing automating WhatsApp using Tasker. Before we begin, note that the following script only works when your phone is rooted, as we will be accessing the WhatsApp database and typing some special characters like ‘:’. Let’s follow the code line by line: Profile:         Location request from XYZ. (12) // Name of your profile. Event:         Notification [ Owner Application:WhatsApp Title:* ] // When a new notification comes from WhatsApp, this event is fired. Read the end note, if you face problems with Chrome app after enabling Tasker accessibility. Enter:         A1: Run Shell [ Command:sqlite3 // We will access the WhatsApp database and check if the message comes from the designated phone number or not. We mustn’t reply to every message.                 /data/data/com.whatsapp/databases/msgstore.db "SELECT _id, data FROM                  messages WHERE key_from_me='0' AND key_remote_jid LIKE '%XXXXXXXXXXX%' // Replace XXXXXXXXXXX with the phone number of your message sender.                 
ORDER BY _id DESC LIMIT 1;" Timeout (Seconds):10 Use Root:On Store // I made a timeout for 10 seconds, if in case WhatsApp is busy accessing the database.                 Result In:%WHATSAPP_CURRREQ ] // Store the read Id and the last message on to the variable %WHATSAPP_CURRREQ         A2: If [ %WHATSAPP_CURRREQ ~R .*[wW][rR][uU].* ] // Check if the pattern of the message is correct and we are all set to send the location.                 A3: If [ %WHATSAPP_CURRREQ !~ %WHATSAPP_LASTREQ ] // Verify that the message is different from the last request. Remember every message has a unique Id.                         A4: Notify [ Title:WhatsApp location request... Text:Sending location // Just a notification that the location message is being prepared.                                 to Krati Gupta... Icon:<icon> Number:0 Permanent:On Priority:3 ] // Make a note it is a permanent notification, we will clear it later.                         A5: Secure Settings [ Configuration:Pattern Lock Disabled // I am disabling the pattern lock, that I use using the plugin Secure Settings.                                 Package:com.intangibleobject.securesettings.plugin Name:Secure // You can download the plugin from here: https://play.google.com/store/apps/details?id=com.intangibleobject.securesettings.plugin&hl=en                                 Settings ]                         A6: Secure Settings [ Configuration:Keyguard Disabled // Disable the keygaurd, it is useful, when your phone is on lock and you want to automate everything, even the typing.                                 Package:com.intangibleobject.securesettings.plugin Name:Secure                                 Settings ]                         A7: Secure Settings [ Configuration:GPS Enabled // Pretty clear, turn on the GPS and get location at A8                                 Package:com.intangibleobject.securesettings.plugin Name:Secure                                 Settings ]                         A8: AutoShortcut [ Configuration:WhatsApp: Some One // I am using AutoShortcut plugin (https://play.google.com/store/apps/details?id=com.joaomgcd.autoshortcut) to start WhatsApp with the indented recipient.                                 Package:com.joaomgcd.autoshortcut Name:AutoShortcut ] // Replace Some One, actually choose it from the plugin, the right recipient.                         A9: Get Location [ Source:Any Timeout (Seconds):30 Continue Task // I am getting the location, timeout is 30 seconds, adjust it accordingly.                                 Immediately:Off Keep Tracking:Off ]                         A10: Secure Settings [ Configuration:Screen Dim // Now, this extension of the plugin Secure Settings, wakes your device so that you can type out the string on the WhatsApp app.                                 5 Seconds Package:com.intangibleobject.securesettings.plugin                                 Name:Secure Settings ]                         A11: Run Shell [ Command:input text // Now, I am using the shell script to type the text to the window, because the ‘:’ while not be typed from the Type task in Tasker.                                 LOCATION:maps.google.com/maps?q=%LOC Timeout (Seconds):0 Use Root:On // And also, this is way faster, but remember you need root for this, not for the other way of typing.                                 
Store Result In: ]                         A12: Dpad [ Button:Right Repeat Times:1 ] // Focus the Send button                         A13: Dpad [ Button:Press Repeat Times:1 ] // And press it.                         A14: Dpad [ Button:Left Repeat Times:1 ] // Get back to the typing box.                         A15: Run Shell [ Command:input text LOCATION_ACCURACY:%LOCACC Timeout                                 (Seconds):0 Use Root:On Store Result In: ]                         A16: Dpad [ Button:Right Repeat Times:1 ]                         A17: Dpad [ Button:Press Repeat Times:1 ]                         A18: Dpad [ Button:Left Repeat Times:1 ]                         A19: Run Shell [ Command:input text BATTERY_LEVEL:%BATT% Timeout // I am adding Battery level in my case as well.                                 (Seconds):0 Use Root:On Store Result In: ]                         A20: Dpad [ Button:Right Repeat Times:1 ]                         A21: Dpad [ Button:Press Repeat Times:1 ]                         A22: Variable Set [ Name:%WHATSAPP_LASTREQ To:%WHATSAPP_CURRREQ Do // And now, we say, request is done.                                 Maths:Off Append:Off ]                         A23: Button [ Button:Back ] // I am exiting the WhatsApp nicely and not killing it. If you are the murderer kind, kill it, just know, you don’t have any place in the heaven.                         A24: Button [ Button:Back ]                         A25: Notify Cancel [ Title: Warn Not Exist:Off ] // Remove the permanent notification.                         A26: Notify [ Title:WhatsApp location request Text:Location sent // Make a temporary notification, and say, location is sent.                                 successfully. Icon:<icon> Number:0 Permanent:Off Priority:3 ]                                                         A27: Secure Settings [ Configuration:GPS Disabled // Disable all the horrible things we turned on earlier.                                 Package:com.intangibleobject.securesettings.plugin Name:Secure                                 Settings ]                         A28: Secure Settings [ Configuration:Pattern Lock Enabled                                 Package:com.intangibleobject.securesettings.plugin Name:Secure                                 Settings ]                         A29: Secure Settings [ Configuration:Keyguard Enabled                                 Package:com.intangibleobject.securesettings.plugin Name:Secure                                 Settings ]                 A30: End If         A31: End If Download this Task from here: http://db.tt/9vRmbhyb That’s it in the above small example – you can read/write messages from/to WhatsApp app. I am using n7000-cm9.1-cwr6. Oh yea, and if you are having the Talkback auto enabled for Chrome browser, you need to turn Off the Web scripts to run. Tasker is amazing, I have automated a lot of tasks using this tool. I will share a few none generic ones with you in my coming post here.

    Read the article

  • Steps for MySQL DB Replication

    - by Manish Agrawal
    Following are the steps for MySQL Replication implementation on Linux machine: Pre-implementation steps for DB Replication:   1.    Identify the databases to be replicated 2.    Identify the tables to be ignored during replication per database for example log tables 3.  Carefully identify and replace the variables and paths(locations) mentioned (in bold) in the commands given below with appropriate values 4.  Schedule the maintenance activity in odd hours as these activities will affect all the databases on Master database server       Implementation steps for DB Replication:     1.    Configure the /etc/my.cnf file on Master database server to enable Binary logging, setting of server id and configuring of dbnames for which logging should be done. [mysqld] log-bin=mysql-bin server-id=1 binlog-do-db = dbname   Note: You can specify multiple DB in binlog-do-db by using comma separated dbname values like: dbname1, dbname2, …, dbnameN   2.    On Master database, Grant Replication Slave Privileges, by executing following command on mysql prompt mysql> GRANT REPLICATION SLAVE ON *.* TO slaveuser@<hostname> identified by ‘slavepassword’;   3.    Stop the Master & Slave database by giving the command      mysqladmin shutdown   4.    Start the Master database by giving the command      /usr/local/mysql-5.0.22/bin/mysqld_safe --user=user&     5.    mysql> FLUSH TABLES WITH READ LOCK; Note: Leave the client (putty session) from which you issued the FLUSH TABLES statement running, so that the read lock remains in effect. If you exit the client, the lock is released. 6.    mysql > SHOW MASTER STATUS;          +---------------+----------+--------------+------------------+          | File          | Position | Binlog_Do_DB | Binlog_Ignore_DB |          +---------------+----------+--------------+------------------+          | mysql-bin.003 | 117       | dbname       |                  |          +---------------+----------+--------------+------------------+ Note: Note this information as this will be required while starting of Slave and replication in later steps   7.    Take MySQL dump by giving the following command, In another session window (putty window) run the following command: mysqldump –u user --ignore-table=dbname.tbl_name -–ignore-table=dbname.tbl_name2 --master-data dbname > dbname_dump.db Note: When choosing databases to include in the dump, remember that you will need to filter out databases on each slave that you do not want to include in the replication process.     8.    Unlock the tables on Master by giving following command: mysql> UNLOCK TABLES;   9.    Copy the dump file to Slave DB server   10.  Startup the Slave by using option --skip-slave      /usr/local/mysql-5.0.22/bin/mysqld_safe --user=user --skip-slave&   11.  Restore the dump file on Slave DB server      mysql –u user dbname < dbname_dump.db   12.  Stop the Slave database by giving the command      mysqladmin shutdown   13.  Configure the /etc/my.cnf file on the Slave database server [mysqld] server-id=2 replicate-ignore-table = dbname.tablename   14.  Start the Slave Mysql Server with 'replicate-do-db=DB name' option.      /usr/local/mysql-5.0.22/bin/mysqld_safe --user=user --replicate-do-db=dbname --skip-slave   15.  
Configure the settings at Slave server for Master host name, log filename and position within the log file as shown in Step 6 above Use Change Master statement in the MySQL session mysql> CHANGE MASTER TO MASTER_HOST='<master_host_name>', MASTER_USER='<replication_user_name>', MASTER_PASSWORD='<replication_password>', MASTER_LOG_FILE='<recorded_log_file_name>', MASTER_LOG_POS=<recorded_log_position>;   16.  On Slave Servers mysql prompt give the following command: a.     mysql > START SLAVE; b.    mysql > SHOW SLAVE STATUS;         Note: To stop slave for backup or any other activity you can use the following command on the Slave Servers mysql prompt: mysql> STOP SLAVE     Refer following links for more information on MySQL DB Replication: http://dev.mysql.com/doc/refman/5.0/en/replication-options.html http://crazytoon.com/2008/04/21/mysql-replication-replicate-by-choice/ http://dev.mysql.com/doc/refman/5.0/en/mysqldump.html

    Read the article

  • Uploadify with ruby on rails 'bad content body' 500 Internal Server Error

    - by Mr_Nizzle
    I'm Getting this error in my development log while uploadify is uploading the file and in the view i get an 'IO ERROR' beside filename. /!\ FAILSAFE /!\ Thu Mar 18 11:54:53 -0500 2010 Status: 500 Internal Server Error bad content body /usr/lib/ruby/gems/1.8/gems/rack-1.0.1/lib/rack/utils.rb:351:in `parse_multipart' /usr/lib/ruby/gems/1.8/gems/rack-1.0.1/lib/rack/utils.rb:323:in `loop' /usr/lib/ruby/gems/1.8/gems/rack-1.0.1/lib/rack/utils.rb:323:in `parse_multipart' /usr/lib/ruby/gems/1.8/gems/rack-1.0.1/lib/rack/request.rb:133:in `POST' /usr/lib/ruby/gems/1.8/gems/rack-1.0.1/lib/rack/methodoverride.rb:15:in `call' /usr/lib/ruby/gems/1.8/gems/actionpack-2.3.3/lib/action_controller/params_parser.rb:15:in `call' /usr/lib/ruby/gems/1.8/gems/actionpack-2.3.3/lib/action_controller/session/cookie_store.rb:93:in `call' /usr/lib/ruby/gems/1.8/gems/actionpack-2.3.3/lib/action_controller/reloader.rb:29:in `call' /usr/lib/ruby/gems/1.8/gems/actionpack-2.3.3/lib/action_controller/failsafe.rb:26:in `call' /usr/lib/ruby/gems/1.8/gems/rack-1.0.1/lib/rack/lock.rb:11:in `call' /usr/lib/ruby/gems/1.8/gems/rack-1.0.1/lib/rack/lock.rb:11:in `synchronize' /usr/lib/ruby/gems/1.8/gems/rack-1.0.1/lib/rack/lock.rb:11:in `call' /usr/lib/ruby/gems/1.8/gems/actionpack-2.3.3/lib/action_controller/dispatcher.rb:106:in `call' /usr/lib/ruby/gems/1.8/gems/rack-1.0.1/lib/rack/content_length.rb:13:in `call' /usr/lib/ruby/gems/1.8/gems/rack-1.0.1/lib/rack/handler/fastcgi.rb:58:in `serve' /usr/lib/ruby/gems/1.8/gems/rails-2.3.3/lib/fcgi_handler.rb:103:in `process_request' /usr/lib/ruby/gems/1.8/gems/rails-2.3.3/lib/fcgi_handler.rb:153:in `with_signal_handler' /usr/lib/ruby/gems/1.8/gems/rails-2.3.3/lib/fcgi_handler.rb:101:in `process_request' /usr/lib/ruby/gems/1.8/gems/rails-2.3.3/lib/fcgi_handler.rb:78:in `process_each_request' /usr/lib/ruby/gems/1.8/gems/rails-2.3.3/lib/fcgi_handler.rb:77:in `each' /usr/lib/ruby/gems/1.8/gems/rails-2.3.3/lib/fcgi_handler.rb:77:in `process_each_request' /usr/lib/ruby/gems/1.8/gems/rails-2.3.3/lib/fcgi_handler.rb:76:in `catch' /usr/lib/ruby/gems/1.8/gems/rails-2.3.3/lib/fcgi_handler.rb:76:in `process_each_request' /usr/lib/ruby/gems/1.8/gems/rails-2.3.3/lib/fcgi_handler.rb:51:in `process!' /usr/lib/ruby/gems/1.8/gems/rails-2.3.3/lib/fcgi_handler.rb:23:in `process!' dispatch.fcgi:24 any idea on this?

    Read the article

  • c# Named Pipe Asynchronous Peeking

    - by KJ Tsanaktsidis
    Hey all, I need to find a way to be notified when a System.IO.Pipes.NamedPipeServerStream opened in asynchronous mode has more data available for reading on it - a WaitHandle would be ideal. I cannot simply use BeginRead() to obtain such a handle because it's possible that I might be signaled by another thread which wants to write to the pipe - so I have to release the lock on the pipe and wait for the write to be complete, and NamedPipeServerStream doesn't have a CancelAsync method. I also tried calling BeginRead(), then calling the win32 function CancelIO on the pipe if the thread gets signaled, but I don't think this is an ideal solution because if CancelIO is called just as data is arriving and being processed, it will be dropped - I still wish to keep this data, but process it at a later time, after the write. I suspect the win32 function PeekNamedPipe might be useful but I'd like to avoid having to continuously poll for new data with it. In the likely event that the above text is a bit unclear, here's roughly what I'd like to be able to do... NamedPipeServerStream pipe; ManualResetEvent WriteFlag; //initialise pipe lock (pipe) { //I wish this method existed WaitHandle NewDataHandle = pipe.GetDataAvailableWaitHandle(); WaitHandle[] BreakConditions = new WaitHandle[2]; BreakConditions[0] = NewDataHandle; BreakConditions[1] = WriteFlag; int breakcode = WaitHandle.WaitAny(BreakConditions); switch (breakcode) { case 0: //do a read on the pipe break; case 1: //break so that we release the lock on the pipe break; } }

    Read the article

  • quartz throws deadlocks under load

    - by Khandelwal
    We are using Quartz with Spring and our configuration is throwing deadlocks when quartz has more than 1 thread configured. I'm starting to believe that it's because we don't have our quartz configured correctly with Spring, but I can't find enough documentation on how to configure the two to play nicely. We are running on both Windows and Linux environments - pointing at MSSQL and Oracle DBs. With both OS, using either DB, we can throw the following deadlock errors... We're consistently throwing these deadlock errors. We run under heavy load, inserting hundreds of quartz triggers in a matter of minutes. 2010-03-17 18:52:31,737 [] [] ERROR [DFScheduler_Worker-42] core.ErrorLogger core.ErrorLogger (QuartzScheduler.java:2185) - An error occured while marking executed job complete. job= 'BPM.6e41a6567f0000020100362a51dc7a50' org.quartz.JobPersistenceException: Couldn't remove trigger: Transaction (Process ID 87) was deadlocked on lock resources with another process and has been chosen as the deadlock victim. Rerun the transaction. [See nested exception: com.microsoft.sqlserver.jdbc.SQLServerException: Transaction (Process ID 87) was deadlocked on lock resources with another process and has been chosen as the deadlock victim. Rerun the transaction.] at org.quartz.impl.jdbcjobstore.JobStoreSupport.removeTrigger(JobStoreSupport.java:1469)at org.quartz.impl.jdbcjobstore.JobStoreSupport.triggeredJobComplete(JobStoreSupport.java:2978)at org.quartz.impl.jdbcjobstore.JobStoreSupport$39.execute(JobStoreSupport.java:2962) at org.quartz.impl.jdbcjobstore.JobStoreSupport$41.execute(JobStoreSupport.java:3713)at org.quartz.impl.jdbcjobstore.JobStoreSupport.executeInNonManagedTXLock(JobStoreSupport.java:3747) at org.quartz.impl.jdbcjobstore.JobStoreSupport.executeInNonManagedTXLock(JobStoreSupport.java:3709)at org.quartz.impl.jdbcjobstore.JobStoreSupport.triggeredJobComplete(JobStoreSupport.java:2958)at org.quartz.core.QuartzScheduler.notifyJobStoreJobComplete(QuartzScheduler.java:1727)at org.quartz.core.JobRunShell.run(JobRunShell.java:273) at org.quartz.simpl.SimpleThreadPool$WorkerThread.run(SimpleThreadPool.java:534) Caused by: com.microsoft.sqlserver.jdbc.SQLServerException: Transaction (Process ID 87) was deadlocked on lock resources with another process and has been chosen as the deadlock victim. Rerun the transaction. at com.microsoft.sqlserver.jdbc.SQLServerException.makeFromDatabaseError(Unknown Source at com.microsoft.sqlserver.jdbc.SQLServerStatement.getNextResult(Unknown Source) at com.microsoft.sqlserver.jdbc.SQLServerPreparedStatement.doExecutePreparedStatement(Unknown Source) at ... org.quartz.impl.jdbcjobstore.StdJDBCDelegate.deleteSimpleTrigger(StdJDBCDelegate.java:1820) at org.quartz.impl.jdbcjobstore.JobStoreSupport.deleteTriggerAndChildren(JobStoreSupport.java:1345 at org.quartz.impl.jdbcjobstore.JobStoreSupport.removeTrigger(JobStoreSupport.java:1453 ... 9 more
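    One mitigation that is commonly suggested for JDBC-JobStore deadlocks under heavy trigger churn is to serialize trigger acquisition. The sketch below shows how that property could be passed through Spring's SchedulerFactoryBean; it illustrates the wiring only and is not a confirmed fix for the failure above, and the property values are assumptions to be tuned for the environment.

    import java.util.Properties;
    import org.springframework.scheduling.quartz.SchedulerFactoryBean;

    public class QuartzSchedulerConfig {
        // Builds the factory bean that Spring would otherwise define in XML.
        public static SchedulerFactoryBean schedulerFactory() {
            SchedulerFactoryBean factory = new SchedulerFactoryBean();
            Properties props = new Properties();
            // Acquire triggers inside the job store's database lock so worker
            // threads do not take row locks in conflicting orders.
            props.setProperty("org.quartz.jobStore.acquireTriggersWithinLock", "true");
            // A modest pool keeps contention manageable while diagnosing.
            props.setProperty("org.quartz.threadPool.threadCount", "5");
            factory.setQuartzProperties(props);
            return factory;
        }
    }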

    Read the article

  • Cannot use READPAST in snapshot isolation mode

    - by Marcus
    I have a process which is called from multiple threads which does the following: Start transaction Select unit of work from work table with by finding the next row where IsProcessed=0 with hints (UPDLOCK, HOLDLOCK, READPAST) Process the unit of work (C# and SQL stored procedures) Commit the transaction The idea of this is that a thread dips into the pool for the "next" piece of work, and processes it, and the locks are there to ensure that a single piece of work is not processed twice. (the order doesn't matter). All this has been working fine for months. Until today that is, when I happened to realise that despite enabling snapshot isolation and making it the default at the database level, the actual transaction creation code was manually setting an isolation level of "ReadCommitted". I duly changed that to "Snapshot", and of course immediately received the "You can only specify the READPAST lock in the READ COMMITTED or REPEATABLE READ" error message. Oops! The main reason for locking the row was to "mark the row" in such a way that the "mark" would be removed when the transaction that applied the mark was committed and the lock seemed to be the best way to do this, since this table isn't read otherwise except by these threads. If I were to use the IsProcessed flag as the lock, then presumably I would need to do the update first, and then select the row I just updated, but I would need to employ the NOLOCK flag to know whether any other thread had set the flag on a row. All sounds a bit messy. The easiest option would be to abandon the snapshot isolation mode altogether, but the design of step #3 requires it. Any bright ideas on the best way to resolve this problem? Thanks Marcus
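    For illustration, here is a minimal JDBC sketch of the dequeue step described above, with the transaction pinned to READ COMMITTED so the READPAST hint remains legal even when the database default is snapshot. The table and column names (WorkTable, Id, IsProcessed) and the connection string are assumptions standing in for the real schema.

    import java.sql.Connection;
    import java.sql.DriverManager;
    import java.sql.ResultSet;
    import java.sql.Statement;

    public class DequeueSketch {
        public static void main(String[] args) throws Exception {
            // Placeholder connection string.
            Connection con = DriverManager.getConnection(
                    "jdbc:sqlserver://localhost;databaseName=Work;user=sa;password=secret");
            con.setAutoCommit(false);
            // READPAST is only allowed under READ COMMITTED or REPEATABLE READ,
            // so this one transaction must not run under SNAPSHOT isolation.
            con.setTransactionIsolation(Connection.TRANSACTION_READ_COMMITTED);
            Statement st = con.createStatement();
            ResultSet rs = st.executeQuery(
                    "SELECT TOP 1 Id FROM WorkTable WITH (UPDLOCK, HOLDLOCK, READPAST) "
                    + "WHERE IsProcessed = 0");
            if (rs.next()) {
                long id = rs.getLong(1);
                // ... process the unit of work here ...
                st.executeUpdate("UPDATE WorkTable SET IsProcessed = 1 WHERE Id = " + id);
            }
            con.commit(); // releases the UPDLOCK/HOLDLOCK marks
            con.close();
        }
    }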

    Read the article

  • Java FileLock for Reading and Writing

    - by bobtheowl2
    I have a process that will be called rather frequently from cron to read a file that has certain move related commands in it. My process needs to read and write to this data file - and keep it locked to prevent other processes from touching it during this time. A completely separate process can be executed by a user to (potential) write/append to this same data file. I want these two processes to play nice and only access the file one at a time. The nio FileLock seemed to be what I needed (short of writing my own semaphore type files), but I'm having trouble locking it for reading. I can lock and write just fine, but when attempting to create lock when reading I get a NonWritableChannelException. Is it even possible to lock a file for reading? Seems like a RandomAccessFile is closer to what I need, but I don't see how to implement that. Here is the code that fails: FileInputStream fin = new FileInputStream(f); FileLock fl = fin.getChannel().tryLock(); if(fl != null) { System.out.println("Locked File"); BufferedReader in = new BufferedReader(new InputStreamReader(fin)); System.out.println(in.readLine()); ... The exception is thrown on the FileLock line. java.nio.channels.NonWritableChannelException at sun.nio.ch.FileChannelImpl.tryLock(Unknown Source) at java.nio.channels.FileChannel.tryLock(Unknown Source) at Mover.run(Mover.java:74) at java.lang.Thread.run(Unknown Source) Looking at the JavaDocs, it says Unchecked exception thrown when an attempt is made to write to a channel that was not originally opened for writing. But I don't necessarily need to write to it. When I try creating a FileOutpuStream, etc. for writing purposes it is happy until I try to open a FileInputStream on the same file.
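    To sketch the RandomAccessFile route mentioned above: opening the file in "rw" mode yields a channel that is writable, which is what tryLock() requires for an exclusive lock, and the same handle can then be used for reading. This is a minimal illustration; the file name is a placeholder.

    import java.io.File;
    import java.io.RandomAccessFile;
    import java.nio.channels.FileChannel;
    import java.nio.channels.FileLock;

    public class LockedReadWrite {
        public static void main(String[] args) throws Exception {
            File f = new File("moves.dat"); // placeholder data file
            RandomAccessFile raf = new RandomAccessFile(f, "rw");
            FileChannel channel = raf.getChannel();
            FileLock fl = channel.tryLock(); // exclusive lock needs a writable channel
            if (fl != null) {
                try {
                    System.out.println("Locked File");
                    System.out.println(raf.readLine()); // read through the same handle
                    // ... process move commands and write back via raf ...
                } finally {
                    fl.release();
                }
            }
            raf.close();
        }
    }

    Note that java.nio file locks are held on behalf of the whole process, so they coordinate the cron-launched process with the user-run process, but not two threads inside the same JVM.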

    Read the article

  • Svn import with auto-props & pre-commit hook

    - by James Tisato
    My company's svn repo has a lot of MS Word docs in it. We've implemented a policy that all .doc files must have the svn:needs-lock property set to prevent parallel access on files that are hard to merge (we've also done this for xls, ppt, pdf etc.). We've implemented the policy by distributing a svn config with auto-props set appropriately for all relevant document types. We've also set up a pre-commit hook that checks that all added files of these types have the needs-lock property set (i.e. if they forget/are too lazy to update their svn config file, they won't be able to add any docs to the repo). The problem I'm having, however, is that the pre-commit hook fails when users try to import files into the repo, e.g. some users like to add files directly thru TortoiseSVN's Repo Browser, which effectively is an svn import. Through testing on other file types, I have seen that doing an import does in fact apply the auto-props listed in my config, but they don't seem to be applied at the point that the pre-commit hook runs. When importing .doc files, the hook fails, saying that the needs-lock property is missing. Is there really much difference between adding a single file to a working copy and committing it vs importing a file directly? Do we need to tailor our precommit hook in some way to cater for this scenario?

    Read the article

  • Java Concurrency : Synchronized(this) => and this.wait() and this.notify()

    - by jens
    Hello Experts, I would appreciate your help in understanding a "Concurrency Example" from: http://forums.sun.com/thread.jspa?threadID=735386 Quote Start: public synchronized void enqueue(T obj) { // do addition to internal list and then... this.notify(); } public synchronized T dequeue() { while (this.size()==0) { this.wait(); } return // something from the queue } Quote End: My question is: why is this code valid? When I declare a method as "public synchronized", I synchronize on the instance of the object, i.e. "this". However, in the example above: calling "dequeue" I will get the lock/monitor on this, and now I am in the dequeue method. As the list size is zero, the calling thread will wait. From my understanding I now have a deadlock situation, as I will have no chance of ever enqueuing an object (from another thread), as the "dequeue" method is not yet finished and the dequeue method holds the lock on this: so I will never get the possibility to call "enqueue", as I will not get the "this" lock. Background: I have exactly the same problem: I have some kind of connection pool (a list of connections) and need to block if all connections are checked out. What is the correct way to synchronize the list so it blocks if the size exceeds a limit or is zero? Thank you very much Jens
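    The reasoning step the question is missing is that this.wait() releases the monitor while the thread is parked, so another thread can still enter the synchronized enqueue method and call notify(); there is no deadlock. Below is a minimal sketch of the connection-pool variant asked about (class and method names are invented for illustration):

    import java.util.LinkedList;

    public class ConnectionPool<T> {
        private final LinkedList<T> available = new LinkedList<T>();

        // Blocks while the pool is empty; wait() releases the lock on 'this'
        // for the duration, so release() below can still be entered.
        public synchronized T acquire() throws InterruptedException {
            while (available.isEmpty()) {
                this.wait();
            }
            return available.removeFirst();
        }

        public synchronized void release(T conn) {
            available.addLast(conn);
            this.notify(); // wake one thread blocked in acquire()
        }
    }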

    Read the article

  • In .NET when Aborting Thread, can this piece of code get corrupted?

    - by bosko
    Little intro: In a complex multithreaded application (an enterprise service bus, EBS), I need to use Thread.Abort, because this EBS accepts user-written modules which communicate with hardware security modules. So if such a module gets deadlocked or the hardware stops responding, I need to just unload the module, and the rest of the server application must keep running. So there is an abort sync mechanism which ensures that code can be aborted only in a user section, and this section must be marked as AbortAble. If this happens, there is a possibility that ThreadAbortException will be thrown in this piece of code: public void StopAbortSection() { var id = Thread.CurrentThread.ManagedThreadId; lock (threadIdMap[id]) { .... } } If the module is in an AbortSection and the application decides to abort the module, but after this decision and before the actual Thread.Abort the module enters a NonAbortableSection by calling this method, the lock may already be taken on that locking object. So the lock will block until the Abort, or the abort can be executed before this code reaches the lock block. But the object with this method is essential, and I need to be sure that this piece of code is safe to abort at any moment. I should probably mention that threadIdMap is a Dictionary(int, ManualResetEvent), so the locking object is an instance of ManualResetEvent. I hope you now understand my question. Sorry for its largeness.

    Read the article

< Previous Page | 33 34 35 36 37 38 39 40 41 42 43 44  | Next Page >