Search Results

Search found 5084 results on 204 pages for 'brute force'.


  • mdadm raid5 recover double disk failure - with a twist (drive order)

    - by Peter Bos
    Let me acknowledge first off that I have made mistakes, and that I have a backup for most but not all of the data on this RAID. I still have hope of recovering the rest of the data. I don't have the kind of money to take the drives to a recovery expert company. Mistake #0: not having a 100% backup. I know.

    I have an mdadm RAID5 system of 4x3TB drives, /dev/sd[b-e], each with one partition /dev/sd[b-e]1. I'm aware that RAID5 on very large drives is risky, yet I did it anyway.

    Recent events: The RAID became degraded after a two-drive failure. One drive [/dev/sdc] is really gone; the other [/dev/sde] came back up after a power cycle, but was not automatically re-added to the RAID. So I was left with a 4-device RAID with only 2 active drives [/dev/sdb and /dev/sdd]. Mistake #1: not using dd copies of the drives for restoring the RAID. I did not have the drives or the time. Mistake #2: not making a backup of the superblock and mdadm -E of the remaining drives.

    Recovery attempt: I reassembled the RAID in degraded mode with

        mdadm --assemble --force /dev/md0

    using /dev/sd[bde]1. I could then access my data. I replaced /dev/sdc with a spare, empty, identical drive. I removed the old /dev/sdc1 from the RAID:

        mdadm --fail /dev/md0 /dev/sdc1

    Mistake #3: not doing this before replacing the drive. I then partitioned the new /dev/sdc and added it to the RAID:

        mdadm --add /dev/md0 /dev/sdc1

    It then began to restore the RAID, ETA 300 mins. I followed the process via /proc/mdstat to 2% and then went to do other stuff.

    Checking the result: Several hours (but less than 300 mins) later, I checked the process. It had stopped due to a read error on /dev/sde1.

    Here is where the trouble really starts: I then removed /dev/sde1 from the RAID and re-added it. I can't remember why I did this; it was late.

        mdadm --manage /dev/md0 --remove /dev/sde1
        mdadm --manage /dev/md0 --add /dev/sde1

    However, /dev/sde1 was now marked as spare. So I decided to recreate the whole array using --assume-clean, using what I thought was the right order, and with /dev/sdc1 missing:

        mdadm --create /dev/md0 --assume-clean -l5 -n4 /dev/sdb1 missing /dev/sdd1 /dev/sde1

    That worked, but the filesystem was not recognized while trying to mount. (It should have been EXT4.)

    Device order: I then checked a recent backup I had of /proc/mdstat, and found the drive order:

        md0 : active raid5 sdb1[0] sde1[4] sdd1[2] sdc1[1]
              8790402048 blocks super 1.2 level 5, 512k chunk, algorithm 2 [4/4] [UUUU]

    I then remembered this RAID had suffered a drive loss about a year ago, and recovered from it by replacing the faulty drive with a spare one. That may have scrambled the device order a bit... so there was no drive [3], but only [0], [1], [2], and [4]. I tried to find the drive order with the Permute_array script (https://raid.wiki.kernel.org/index.php/Permute_array.pl), but that did not find the right order.

    Questions: I now have two main questions:

    1. I screwed up all the superblocks on the drives, but only gave mdadm --create --assume-clean commands (so I should not have overwritten the data itself on /dev/sd[bde]1). Am I right that in theory the RAID can be restored [assuming for a moment that /dev/sde1 is ok] if I just find the right device order?

    2. Is it important that /dev/sde1 be given the device number [4] in the RAID? When I create it with

        mdadm --create /dev/md0 --assume-clean -l5 -n4 \
              /dev/sdb1 missing /dev/sdd1 /dev/sde1

    it is assigned the number [3]. I wonder if that is relevant to the calculation of the parity blocks. If it turns out to be important, how can I recreate the array with /dev/sdb1[0] missing[1] /dev/sdd1[2] /dev/sde1[4]? If I could get that to work, I could start it in degraded mode, add the new drive /dev/sdc1, and let it resync again. It's OK if you would like to point out to me that this may not have been the best course of action, but you'll find that I realized this. It would be great if anyone has any suggestions.
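    A brute-force sketch of the order search (essentially what the Permute_array script attempts, redone with the current mdadm): since --assume-clean only rewrites superblocks, each candidate order can be tried and then checked read-only with a no-write fsck. The chunk size and metadata version are taken from the /proc/mdstat backup above; --run (to suppress the "appears to be part of an array" prompt) is an assumption about how this mdadm build behaves.

        #!/bin/sh
        # Try every order of the three surviving partitions plus one 'missing'
        # slot; a read-only fsck that finds a filesystem flags the candidate.
        set -- /dev/sdb1 /dev/sdd1 /dev/sde1 missing
        for a in "$@"; do for b in "$@"; do for c in "$@"; do for d in "$@"; do
            # skip permutations that reuse a device
            [ "$a" = "$b" ] || [ "$a" = "$c" ] || [ "$a" = "$d" ] && continue
            [ "$b" = "$c" ] || [ "$b" = "$d" ] || [ "$c" = "$d" ] && continue
            mdadm --stop /dev/md0 2>/dev/null
            mdadm --create /dev/md0 --assume-clean --run --metadata=1.2 \
                  -l5 -n4 -c 512 "$a" "$b" "$c" "$d" || continue
            fsck.ext4 -n /dev/md0 >/dev/null 2>&1 \
                && echo "candidate order: $a $b $c $d"   # e2fsck -n also works
        done; done; done; done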

    Read the article

  • How do I catch this WPF Bitmap loading exception?

    - by mmr
    I'm developing an application that loads bitmaps off of the web using .NET 3.5 SP1 and C#. The loading code looks like:

        try
        {
            CurrentImage = pics[unChosenPics[index]];
            bi = new BitmapImage(CurrentImage.URI);
            // BitmapImage.UriSource must be in a BeginInit/EndInit block.
            bi.DownloadCompleted += new EventHandler(bi_DownloadCompleted);
            AssessmentImage.Source = bi;
        }
        catch
        {
            System.Console.WriteLine("Something broke during the read!");
        }

    and the code to load on bi_DownloadCompleted is:

        void bi_DownloadCompleted(object sender, EventArgs e)
        {
            try
            {
                double dpi = 96;
                int width = bi.PixelWidth;
                int height = bi.PixelHeight;
                int stride = width * 4; // 4 bytes per pixel
                byte[] pixelData = new byte[stride * height];
                bi.CopyPixels(pixelData, stride, 0);
                BitmapSource bmpSource = BitmapSource.Create(width, height, dpi, dpi,
                    PixelFormats.Bgra32, null, pixelData, stride);
                AssessmentImage.Source = bmpSource;
                Loading.Visibility = Visibility.Hidden;
                AssessmentImage.Visibility = Visibility.Visible;
            }
            catch
            {
                System.Console.WriteLine("Exception when viewing bitmap.");
            }
        }

    Every so often, an image comes along that breaks the reader. I guess that's to be expected. However, rather than being caught by either of those try/catch blocks, the exception is apparently getting thrown outside of where I can handle it. I could handle it using global WPF exception handling, like this SO question. However, that will seriously mess up the control flow of my program, and I'd like to avoid that if at all possible. I have to do the double Source assignment because it appears that many images are lacking width/height parameters in the places where the Microsoft bitmap loader expects them to be. So the first assignment appears to force the download, and the second assignment makes the dpi/image dimensions come out properly. What can I do to catch and handle this exception?
    Stack trace:

        at MS.Internal.HRESULT.Check(Int32 hr)
        at System.Windows.Media.Imaging.BitmapFrameDecode.get_ColorContexts()
        at System.Windows.Media.Imaging.BitmapImage.FinalizeCreation()
        at System.Windows.Media.Imaging.BitmapImage.OnDownloadCompleted(Object sender, EventArgs e)
        at System.Windows.Media.UniqueEventHelper.InvokeEvents(Object sender, EventArgs args)
        at System.Windows.Media.Imaging.LateBoundBitmapDecoder.DownloadCallback(Object arg)
        at System.Windows.Threading.ExceptionWrapper.InternalRealCall(Delegate callback, Object args, Boolean isSingleParameter)
        at System.Windows.Threading.ExceptionWrapper.TryCatchWhen(Object source, Delegate callback, Object args, Boolean isSingleParameter, Delegate catchHandler)
        at System.Windows.Threading.DispatcherOperation.InvokeImpl()
        at System.Threading.ExecutionContext.runTryCode(Object userData)
        at System.Runtime.CompilerServices.RuntimeHelpers.ExecuteCodeWithGuaranteedCleanup(TryCode code, CleanupCode backoutCode, Object userData)
        at System.Threading.ExecutionContext.Run(ExecutionContext executionContext, ContextCallback callback, Object state)
        at System.Windows.Threading.DispatcherOperation.Invoke()
        at System.Windows.Threading.Dispatcher.ProcessQueue()
        at System.Windows.Threading.Dispatcher.WndProcHook(IntPtr hwnd, Int32 msg, IntPtr wParam, IntPtr lParam, Boolean& handled)
        at MS.Win32.HwndWrapper.WndProc(IntPtr hwnd, Int32 msg, IntPtr wParam, IntPtr lParam, Boolean& handled)
        at MS.Win32.HwndSubclass.DispatcherCallbackOperation(Object o)
        at System.Windows.Threading.ExceptionWrapper.InternalRealCall(Delegate callback, Object args, Boolean isSingleParameter)
        at System.Windows.Threading.ExceptionWrapper.TryCatchWhen(Object source, Delegate callback, Object args, Boolean isSingleParameter, Delegate catchHandler)
        at System.Windows.Threading.Dispatcher.InvokeImpl(DispatcherPriority priority, TimeSpan timeout, Delegate method, Object args, Boolean isSingleParameter)
        at MS.Win32.HwndSubclass.SubclassWndProc(IntPtr hwnd, Int32 msg, IntPtr wParam, IntPtr lParam)
        at MS.Win32.UnsafeNativeMethods.DispatchMessage(MSG& msg)
        at System.Windows.Threading.Dispatcher.TranslateAndDispatchMessage(MSG& msg)
        at System.Windows.Threading.Dispatcher.PushFrameImpl(DispatcherFrame frame)
        at System.Windows.Application.RunInternal(Window window)
        at LensComparison.App.Main() in C:\Users\Mark64\Documents\Visual Studio 2008\Projects\LensComparison\LensComparison\obj\Release\App.g.cs:line 48
        at System.AppDomain._nExecuteAssembly(Assembly assembly, String[] args)
        at Microsoft.VisualStudio.HostingProcess.HostProc.RunUsersAssembly()
        at System.Threading.ExecutionContext.Run(ExecutionContext executionContext, ContextCallback callback, Object state)
        at System.Threading.ThreadHelper.ThreadStart()
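    The frames point at BitmapFrameDecode.get_ColorContexts(), so one avenue worth trying is to skip color-profile handling altogether. A hedged sketch using only standard WPF APIs (whether IgnoreColorProfile actually sidesteps this particular decoder failure is an assumption):

        // Build the BitmapImage explicitly so CreateOptions can be set;
        // DownloadFailed gives a second, catchable channel for bad images.
        var bi = new BitmapImage();
        bi.BeginInit();
        bi.CreateOptions = BitmapCreateOptions.IgnoreColorProfile; // skip the ColorContexts work
        bi.UriSource = CurrentImage.URI; // CurrentImage as in the code above
        bi.EndInit();
        bi.DownloadCompleted += bi_DownloadCompleted;
        bi.DownloadFailed += (s, args) =>
            System.Console.WriteLine("Download failed: " + args.ErrorException.Message);
        AssessmentImage.Source = bi;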

    Read the article

  • css ul li gap in ie7

    - by Gidon
    I have a CSS ul li nested menu that works perfectly in IE 8 and Firefox, but in IE 7 it produces a small gap between the elements. This is my CSS:

        #nav, #nav ul { margin: 0; padding: 0; list-style-type: none; list-style-position: outside; position: static; /* the key for ie7 */ line-height: 1.5em; }
        #nav li { float: inherit; position: relative; width: 12em; }
        #nav ul { position: absolute; width: 12em; top: 1.5em; display: none; left: auto; }
        #nav a:link, #nav a:active, #nav a:visited { display: block; padding: 0px 5px; border: 1px solid #258be8; /*#333;*/ color: #fff; text-decoration: none; background-color: #258be8; /*#333;*/ }
        #nav a:hover { background-color: #fff; color: #333; }
        #nav ul li a { display: block; top: -1.5em; position: relative; width: 100%; overflow: auto; /* force hasLayout in IE7 */ right: 12em; padding: 0.5em; }
        #nav ul ul { position: absolute; }
        #nav ul li ul { right: 13em; margin: 0px 0 0 10px; top: 0; position: absolute; }
        #nav li:hover ul ul, #nav li:hover ul ul ul, #nav li:hover ul ul ul ul { display: none; }
        #nav li:hover ul, #nav li li:hover ul, #nav li li li:hover ul, #nav li li li li:hover ul { display: block; }
        #nav li { background: url(~/Scripts/ourDDL/ddlArrow.gif) no-repeat center left; }
        #divHead, #featuresDivHead { padding: 5px 10px; width: 12em; cursor: pointer; position: relative; background-color: #99ccFF; margin: 1px; }
        /* Holly Hack for IE \*/
        * html #nav li { float: left; height: 1%; }
        * html #nav li a { height: 1%; }
        /* End */

    and here is an example of a menu:

        <ul id='nav'>
          <li><a href="#">Bookstore Online</a></li>
          <li><a href="#">Study Resources</a></li>
          <li><a href="#">Service Information</a></li>
          <li><a href="#">TV Broadcast</a></li>
          <li><a href="#">Donations</a></li>
        </ul>
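    A commonly suggested IE7-only workaround, offered as a hedged sketch rather than a verified fix for this exact menu: IE7 renders the whitespace between vertical list items unless the items have layout and their baseline alignment is neutralized.

        /* IE7 gap workaround (untested against the menu above):
           zoom: 1 forces hasLayout; vertical-align: bottom stops IE7
           from turning inter-item whitespace into visible gaps. */
        #nav li {
            vertical-align: bottom;
            zoom: 1;
        }

    If the gaps persist, removing the literal whitespace between the closing and opening li tags in the markup (</li><li>) is the other classic remedy.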

    Read the article

  • Saving a .xls file with fwrite

    - by kielie
    Hi guys, I have to create a script that takes a MySQL table, exports it into .xls format, and then saves that file into a specified folder on the web host. I got it working, but now I can't seem to get it to automatically save the file to the location without prompting the user. It needs to run every day at a specified time, so it can save the previous day's leads into a .xls file on the web host. Here is the code:

        <?php
        // DB TABLE Exporter
        //
        // How to use:
        //
        // Place this file in a safe place, edit the info just below here
        // browse to the file, enjoy!

        // CHANGE THIS STUFF FOR WHAT YOU NEED TO DO
        $dbhost = "-";
        $dbuser = "-";
        $dbpass = "-";
        $dbname = "-";
        $dbtable = "-";
        // END CHANGING STUFF

        $cdate = date("Y-m-d"); // get current date

        // first thing that we are going to do is make some functions for writing out
        // an excel file. These functions do some hex writing and to be honest I got
        // them from some where else but hey it works so I am not going to question it
        // just reuse

        // This one makes the beginning of the xls file
        function xlsBOF() {
            echo pack("ssssss", 0x809, 0x8, 0x0, 0x10, 0x0, 0x0);
            return;
        }

        // This one makes the end of the xls file
        function xlsEOF() {
            echo pack("ss", 0x0A, 0x00);
            return;
        }

        // this will write text in the cell you specify
        function xlsWriteLabel($Row, $Col, $Value) {
            $L = strlen($Value);
            echo pack("ssssss", 0x204, 8 + $L, $Row, $Col, 0x0, $L);
            echo $Value;
            return;
        }

        // make the connection and DB query
        $dbc = mysql_connect($dbhost, $dbuser, $dbpass) or die(mysql_error());
        mysql_select_db($dbname);
        $q = "SELECT * FROM " . $dbtable . " WHERE date ='$cdate'";
        $qr = mysql_query($q) or die(mysql_error());

        // Ok now we are going to send some headers so that this
        // thing that we are going to make comes out of the browser
        // as an xls file.
        header("Pragma: public");
        header("Expires: 0");
        header("Cache-Control: must-revalidate, post-check=0, pre-check=0");
        header("Content-Type: application/force-download");
        header("Content-Type: application/octet-stream");
        header("Content-Type: application/download");

        // this line is important, it makes the file name
        header("Content-Disposition: attachment;filename=export_" . $dbtable . ".xls ");
        header("Content-Transfer-Encoding: binary ");

        // start the file
        xlsBOF();

        // these will be used for keeping things in order.
        $col = 0;
        $row = 0;

        // This tells us that we are on the first row
        $first = true;

        while ($qrow = mysql_fetch_assoc($qr)) {
            // Ok we are on the first row
            // lets make some headers of sorts
            if ($first) {
                foreach ($qrow as $k => $v) {
                    // take the key and make a label:
                    // make it upper case and replace _ with ' '
                    xlsWriteLabel($row, $col, strtoupper(ereg_replace("_", " ", $k)));
                    $col++;
                }
                // prepare for the first real data row
                $col = 0;
                $row++;
                $first = false;
            }
            // go through the data
            foreach ($qrow as $k => $v) {
                // write it out
                xlsWriteLabel($row, $col, $v);
                $col++;
            }
            // reset col and go to next row
            $col = 0;
            $row++;
        }

        xlsEOF();
        exit();
        ?>

    I tried using fwrite to accomplish this, but it didn't seem to go very well; I removed the header information too, but nothing worked. Here is the original code, as I found it. Any help would be greatly appreciated. :-) Thanks in advance. :-)
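    A hedged sketch of the save-to-disk variant: instead of the header()/echo download path, buffer the same xlsBOF()/xlsWriteLabel()/xlsEOF() output and write it with file_put_contents. The target directory is an assumption; it must exist and be writable by the web or cron user. A script built this way can run from cron with no browser involved.

        <?php
        // ... same config, helper functions and query as the script above ...

        ob_start();                 // capture everything the helpers echo
        xlsBOF();
        // ... identical header-row and data-row loop as above ...
        xlsEOF();
        $data = ob_get_clean();

        // Hypothetical path; adjust to a real writable folder on the host.
        $path = '/var/www/exports/export_' . $dbtable . '_' . $cdate . '.xls';
        file_put_contents($path, $data);
        ?>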

    Read the article

  • How to show form in front in C#

    - by corlettk
    Folks, please does anyone know how to show a Form from an otherwise invisible application, and have it get the focus (i.e. appear on top of other windows)? I'm working in C# .NET 3.5.

    I suspect I've taken "completely the wrong approach"... I do not Application.Run(new TheForm()); instead I (new TheForm()).ShowModal(). The Form is basically a modal dialogue with a few check-boxes, a text-box, and OK and Cancel Buttons. The user ticks a checkbox and types in a description (or whatever), then presses OK; the form disappears and the process reads the user input from the Form, Disposes it, and continues processing. This works, except that when the form is shown it doesn't get the focus; instead it appears behind the "host" application, until you click on it in the taskbar (or whatever). This is a most annoying behaviour, which I predict will cause many "support calls", and the existing VB6 version doesn't have this problem, so I'm going backwards in usability... and users won't accept that (and nor should they).

    So... I'm starting to think I need to rethink the whole shebang... I should show the form up front, as a "normal application", and attach the remainder of the processing to the OK-button-click event. It should work, but that will take time which I don't have (I'm already over time/budget)... so first I really need to try to make the current approach work, even by quick-and-dirty methods. So please, does anyone know how to "force" a .NET 3.5 Form (by fair means or foul) to get the focus? I'm thinking "magic" Windows API calls.

    I know Twilight Zone: this only appears to be an issue at work, where I'm using Visual Studio 2008 on Windows XP SP3... I've just failed to reproduce the problem with an SSCCE (see below) at home on Visual C# 2008 on Vista Ultimate... This works fine. Huh? WTF? Also, I'd swear that at work yesterday it showed the form when I ran the EXE, but not when F5'ed (or Ctrl-F5'ed) straight from the IDE (which I just put up with)... At home the form shows fine either way. Totally confusterpating!

    It may or may not be relevant, but Visual Studio crashed-and-burned this morning when the project was running in debug mode and I was editing the code "on the fly"... it got stuck in what I presumed was an endless loop of error messages. The error message was something about "can't debug this project because it is not the current project", or something... So I just killed it off with Process Explorer. It started up again fine, and even offered to recover the "lost" file, an offer which I accepted.

        using System;
        using System.Windows.Forms;

        namespace ShowFormOnTop {
            static class Program {
                [STAThread]
                static void Main() {
                    Application.EnableVisualStyles();
                    Application.SetCompatibleTextRenderingDefault(false);
                    //Application.Run(new Form1());
                    Form1 frm = new Form1();
                    frm.ShowDialog();
                }
            }
        }

    Background: I'm porting an existing VB6 implementation to .NET... It's a "plugin" for a "client" GIS application called MapInfo. The existing client "worked invisibly" and my instructions are "to keep the new version as close as possible to the old version", which works well enough (after years of patching); it's just written in an unsupported language, so we need to port it.

    About me: I'm pretty much a noob to C# and .NET generally, though I've got a bottoms wiping certificate, and I have been a professional programmer for 10 years; so I sort of "know some stuff". Any insights would be most welcome... and thank you all for taking the time to read this far. Conciseness isn't (apparently) my forte. Cheers. Keith.
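    A hedged quick-and-dirty sketch along the "magic API calls" line: TopMost plus Activate() is plain WinForms, and SetForegroundWindow is the usual P/Invoke fallback when Windows declines to hand focus to a process with no visible main window. Whether XP SP3 honours it in the MapInfo-hosted case is exactly the kind of thing that needs testing on the misbehaving machine.

        using System;
        using System.Runtime.InteropServices;
        using System.Windows.Forms;

        static class ForceFront {
            [DllImport("user32.dll")]
            static extern bool SetForegroundWindow(IntPtr hWnd);

            public static DialogResult ShowInFront(Form form) {
                form.Shown += delegate {
                    form.TopMost = true;            // jump the z-order
                    form.Activate();                // request keyboard focus
                    SetForegroundWindow(form.Handle);
                    form.TopMost = false;           // don't stay always-on-top
                };
                return form.ShowDialog();
            }
        }

        // usage, in place of frm.ShowDialog():
        //     ForceFront.ShowInFront(frm);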

    Read the article

  • SQL Server 2005 Blocking Problem (ASYNC_NETWORK_IO)

    - by ivankolo
    I am responsible for a third-party application (no access to source) running on IIS and SQL Server 2005 (500 concurrent users, 1TB data, 8 IIS servers). We have recently started to see significant blocking on the database (after months of running this application in production with no problems). This occurs at random intervals during the day, approximately every 30 minutes, and affects between 20 and 100 sessions each time. All of the sessions eventually hit the application timeout and the sessions abort. The problem disappears and then gradually re-emerges.

    The SPID responsible for the blocking always has the following features: WAIT TYPE = ASYNC_NETWORK_IO. The SQL being run is "(@claimid varchar(15))SELECT claimid, enrollid, status, orgclaimid, resubclaimid, primaryclaimid FROM claim WHERE primaryclaimid = @claimid AND primaryclaimid < claimid". This is relatively innocuous SQL that should only return one or two records, not a large dataset. NO OTHER SQL statements have been implicated in the blocking, only this SQL statement. This is parameterized SQL for which an execution plan is cached in sys.dm_exec_cached_plans. This SPID has an object-level S lock on the claim table, so all UPDATEs/INSERTs to the claim table are also blocked. HOST ID varies: different web servers are responsible for the blocking sessions, e.g. sometimes we trace back to web server 1, sometimes web server 2.

    When we trace back to the web server implicated in the blocking, we see the following: there is always some sort of application-related error in the Event Log on the web server, linked to the Host ID and Host Process ID from the SQL session. The error messages vary, usually some sort of SystemOutOfMemory. (These error messages seem to be similar to error messages that we have seen in the past without such dramatic consequences. We think this was happening before, but didn't lead to blocking. Why now?) There are no known problems with the network adapters on either the web servers or the SQL server. (In any event, the record set returned by the offending query would be small.)

    Things ruled out: indexes are regularly defragmented; statistics are regularly updated; we increased the sample size of statistics on claim.primaryclaimid; forced recompilation of the cached execution plan; created a compound index with primaryclaimid, claimid; no networking problems; no known issues on the web server; no changes to application software on the web servers.

    We hypothesize that the chain of events goes something like this: the web server process submits the SQL above; SQL Server executes the SQL, during which it acquires a lock on the claim table; the web server process gets an error and dies; the SQL Server session is hung waiting for the web server process to read the data set; SQL Server sessions that need to get X locks on parts of the claim table (anyone processing claims) are blocked by the lock on the claim table and remain blocked until they all hit the application timeout.

    Any suggestions for troubleshooting while waiting for the vendor's assistance would be most welcome. Is there a way to force SQL Server to lock at the row/page level for this particular SQL statement only? Is there a way to set a threshold on ASYNC_NETWORK_IO waits only?
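    For the monitoring side, a hedged sketch using only stock SQL Server 2005 DMVs: run it from a scheduled job or an ad hoc window during an episode to capture the ASYNC_NETWORK_IO session, its host and process id (which ties back to the Event Log errors), and everything blocked behind it. It observes only; it does not change locking behaviour.

        -- Snapshot blockers and ASYNC_NETWORK_IO waiters with host info.
        SELECT r.session_id,
               r.blocking_session_id,
               r.wait_type,
               r.wait_time,          -- ms in the current wait
               s.host_name,
               s.host_process_id,    -- matches the web server's Host Process ID
               t.text AS sql_text
        FROM sys.dm_exec_requests AS r
        JOIN sys.dm_exec_sessions AS s
            ON s.session_id = r.session_id
        CROSS APPLY sys.dm_exec_sql_text(r.sql_handle) AS t
        WHERE r.blocking_session_id <> 0
           OR r.wait_type = N'ASYNC_NETWORK_IO';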

    Read the article

  • Perl IO modules possibly causing issues in Net::DNS module

    - by Rich
    Hi! I'm porting some software that I wrote for a White Russian OpenWRT system to a new Kamikaze 8.09.1 OpenWRT system, but I am having some serious issues that I'm hoping you can help me with.

    Old system: Linux kernel 2.4.34, MIPSEL arch, Perl 5.8.7, Net::DNS 0.48, IO 1.21, IO::Socket 1.28, IO::Socket::INET 1.28

    New system: Linux kernel 2.6.26.8, MIPS arch, Perl 5.10.0, Net::DNS 0.66, IO 1.23_01, IO::Socket 1.30_01, IO::Socket::INET 1.31

    First, let me provide some background information... I am trying to resolve my server (clearprobe.winbeam.com) from within my Perl program and see the following if I enable debugging in Net::DNS:

        resolve: Server 'clearprobe-ddns.winbeam.com'
        ;; query(clearprobe-ddns.winbeam.com)
        ;; setting up an AF_INET() family type UDP socket
        ;; send_udp(192.168.88.1:53)
        ;; send_udp(4.2.2.2:53)
        ;; send_udp(192.168.88.1:53)
        ;; send_udp(4.2.2.2:53)
        resolve: res->errorstring: query timed out

    Both of these servers resolve clearprobe.winbeam.com fine from the command line:

        root@cwb-2-11:~# echo "nameserver 192.168.88.1" > /etc/resolv.conf
        root@cwb-2-11:~# nslookup clearprobe-ddns.winbeam.com
        Server: 192.168.88.1
        Address 1: 192.168.88.1 router
        Name: clearprobe-ddns.winbeam.com
        Address 1: 64.13.48.40 64-13-48-40.war.clearwire-dns.net

        root@cwb-2-11:~# echo "nameserver 4.2.2.2" > /etc/resolv.conf
        root@cwb-2-11:~# nslookup clearprobe-ddns.winbeam.com
        Server: 4.2.2.2
        Address 1: 4.2.2.2 vnsc-bak.sys.gtei.net
        Name: clearprobe-ddns.winbeam.com
        Address 1: 64.13.48.40 64-13-48-40.war.clearwire-dns.net

    Using Perl's call to the C gethostbyaddr() function works fine, but I need to do another lookup later in the software which requires that I specify the nameserver (clearprobe-ddns.winbeam.com is the authority for my internal DNS zone), hence my Net::DNS requirement.

    Now, here is the IO module-specific information: what I am seeing is that the reply is coming back from the nameserver (confirmed via tcpdump - I can send the captures if you'd like), but the UDP packets are sitting in the process's UDP receive queue pending reception by Net::DNS (the approx 1752 bytes per response stay queued waiting for $sel->can_read()):

        root@cwb-2-11:~# netstat -una
        Active Internet connections (servers and established)
        Proto Recv-Q Send-Q Local Address   Foreign Address   State
        udp     1752      0 0.0.0.0:52680  0.0.0.0:*

        root@cwb-2-11:~# netstat -una
        Active Internet connections (servers and established)
        Proto Recv-Q Send-Q Local Address   Foreign Address   State
        udp     5256      0 0.0.0.0:52680  0.0.0.0:*

    If I force $sock[AF_INET]->recv($buf, $self->_packetsz) around line 803 of /usr/lib/perl5/5.10/Net/DNS/Resolver/Base.pm, instead of waiting for IO::Select's can_read() function (@ready = $sel->can_read($timeout)) to populate @ready, the response is received and processed.

    Any idea what could be causing this issue? In a possibly related matter, I noticed in another script that the following code fails in the same manner (network responses stay in the process's TCP receive queue) on the new system:

        $sock = new IO::Socket::INET(
            PeerAddr => "$server",
            PeerPort => 37,
            Proto    => 'tcp',
            Timeout  => 5
        );

    Whereas the following code works:

        $sock = new IO::Socket::INET(
            PeerAddr => "$server",
            PeerPort => 37,
            Proto    => 'tcp'
        );

    I have looked through the Net::DNS code and don't see a timeout passed for the UDP sockets, so I am not sure whether this is related or not. Please let me know if I can provide you with any further information in order to help diagnose this issue. Thanks! -Rich
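    A hedged, self-contained test to isolate whether IO::Select is the broken layer on the new platform (the query bytes are a hand-packed A lookup for example.com; the nameserver address is one from the output above). If recv() works here but can_read() never fires, the bug is in select()/IO::Select on this Perl/kernel combination rather than in Net::DNS itself.

        #!/usr/bin/perl
        use strict;
        use warnings;
        use IO::Socket::INET;
        use IO::Select;

        my $sock = IO::Socket::INET->new(
            PeerAddr => '4.2.2.2',
            PeerPort => 53,
            Proto    => 'udp',
        ) or die "socket: $!";

        # Minimal DNS header + question: A record for example.com
        $sock->send("\x12\x34\x01\x00\x00\x01\x00\x00\x00\x00\x00\x00"
                  . "\x07example\x03com\x00\x00\x01\x00\x01");

        my $sel = IO::Select->new($sock);
        if (my @ready = $sel->can_read(5)) {
            $ready[0]->recv(my $buf, 512);
            print "can_read fired; got ", length($buf), " bytes\n";
        } else {
            # Mirror of the workaround above: blocking recv without select
            $sock->recv(my $buf, 512);
            print "can_read timed out; blocking recv got ",
                  length($buf), " bytes\n";
        }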

    Read the article

  • sshd running but no PID file

    - by dunxd
    I recently started using monit to monitor the status of sshd on my CentOS 5.4 server. This works fine, but every so often monit reports that sshd is no longer running. This isn't true - I am still able to log in to the server via ssh - however I note the following: there is no longer any PID file at /var/run/sshd.pid; after a reboot this file exists. Once it is gone, restarting sshd via service sshd restart does not create the PID file. sudo service sshd status reports openssh-daemon is stopped - again, restarting sshd does not change this, but a reboot does. sudo service sshd stop reports failed, presumably because of the missing PID file. Any idea what is going on?

    Update: sudo netstat -lptun gives the following output relating to port 22:

        tcp    0    0 :::22    :::*    LISTEN    20735/sshd

    Killing the process with this PID as suggested by @Henry and then starting sshd via service results in service sshd status recognising the process by PID again. I would still like to understand this better. RPM verify, suggested by a couple of answerers, shows this:

        sudo rpm -vV openssh openssh-server openssh-clients | grep 'S\.5'
        S.5....T c /etc/pam.d/sshd
        S.5....T c /etc/ssh/sshd_config

    /etc/pam.d/sshd has the following contents:

        #%PAM-1.0
        auth       include      system-auth
        account    required     pam_nologin.so
        account    include      system-auth
        password   include      system-auth
        session    optional     pam_keyinit.so force revoke
        session    include      system-auth
        #session   required     pam_loginuid.so

    Should that last line be commented out?

    Update: Here's the output of @YannickGirouard's script:

        $ sudo ./sshd_test
        Searching for the process listening on port 22...
        Found the following PID: 21330
        Command line for PID 21330: /usr/sbin/sshd
        Listing process(es) relating to PID 21330:
        UID   PID    PPID  C STIME TTY  TIME      CMD
        root  21330  1     0 14:04 ?    00:00:00  /usr/sbin/sshd
        Listing RPM information about openssh packages:
        Name        : openssh                        Relocations: (not relocatable)
        Version     : 4.3p2                          Vendor: CentOS
        Release     : 72.el5_7.5                     Build Date: Tue 30 Aug 2011 12:34:14 AM BST
        Install Date: Sun 06 Nov 2011 12:50:57 AM GMT    Build Host: builder10.centos.org
        Group       : Applications/Internet          Source RPM: openssh-4.3p2-72.el5_7.5.src.rpm
        Size        : 745390                         License: BSD
        Signature   : DSA/SHA1, Fri 02 Sep 2011 01:13:01 AM BST, Key ID a8a447dce8562897
        URL         : http://www.openssh.com/portable.html
        Summary     : The OpenSSH implementation of SSH protocol versions 1 and 2
        ------------------------------------------------------
        Name        : openssh-clients                Relocations: (not relocatable)
        Version     : 4.3p2                          Vendor: CentOS
        Release     : 72.el5_7.5                     Build Date: Tue 30 Aug 2011 12:34:14 AM BST
        Install Date: Sun 06 Nov 2011 12:51:04 AM GMT    Build Host: builder10.centos.org
        Group       : Applications/Internet          Source RPM: openssh-4.3p2-72.el5_7.5.src.rpm
        Size        : 871132                         License: BSD
        Signature   : DSA/SHA1, Fri 02 Sep 2011 01:13:01 AM BST, Key ID a8a447dce8562897
        URL         : http://www.openssh.com/portable.html
        Summary     : The OpenSSH client applications
        ------------------------------------------------------
        Name        : openssh-server                 Relocations: (not relocatable)
        Version     : 4.3p2                          Vendor: CentOS
        Release     : 72.el5_7.5                     Build Date: Tue 30 Aug 2011 12:34:14 AM BST
        Install Date: Sun 06 Nov 2011 12:51:04 AM GMT    Build Host: builder10.centos.org
        Group       : System Environment/Daemons     Source RPM: openssh-4.3p2-72.el5_7.5.src.rpm
        Size        : 492478                         License: BSD
        Signature   : DSA/SHA1, Fri 02 Sep 2011 01:13:01 AM BST, Key ID a8a447dce8562897
        URL         : http://www.openssh.com/portable.html
        Summary     : The OpenSSH server daemon
        ------------------------------------------------------

    However, I've since got things working by killing the process and starting afresh, as suggested by @Henry below, so perhaps I am no longer seeing the same thing. Will try again if I see the issue again after the next reboot.

    Update - 14 March: Monit alerted me that sshd had disappeared, and again I am able to ssh onto the server. So now I can run the script:

        $ sudo ./sshd_test
        Searching for the process listening on port 22...
        Found the following PID: 2208
        Command line for PID 2208: /usr/sbin/sshd
        Listing process(es) relating to PID 2208:
        UID   PID   PPID  C STIME TTY  TIME      CMD
        root  2208  1     0 Mar13 ?    00:00:00  /usr/sbin/sshd
        root  1885  2208  0 21:50 ?    00:00:00  sshd: dunx [priv]

    [The RPM information listed by this run was identical to the first run, above.]

    Again, when I look for /var/run/sshd.pid I don't find it:

        $ cat /var/run/sshd.pid
        cat: /var/run/sshd.pid: No such file or directory
        $ sudo netstat -anp | grep sshd
        tcp    0    0 0.0.0.0:22    0.0.0.0:*    LISTEN    2208/sshd
        $ sudo kill 2208
        $ sudo service sshd start
        Starting sshd: [ OK ]
        $ cat /var/run/sshd.pid
        3794
        $ sudo service sshd status
        openssh-daemon (pid 3794) is running...

    Is it possible that sshd is restarting and not creating a pidfile for some reason?
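    A hedged stopgap until the root cause is found: if the listener is healthy but /var/run/sshd.pid has vanished, the file can be rewritten from the live process so monit and the init script agree again. This assumes a single master sshd daemon; pgrep ships with CentOS 5 as part of procps.

        #!/bin/sh
        # Recreate sshd's pidfile from the running master process.
        PIDFILE=/var/run/sshd.pid
        if [ ! -s "$PIDFILE" ]; then
            PID=$(pgrep -o -x sshd)        # -o: oldest sshd = the master daemon
            [ -n "$PID" ] && echo "$PID" > "$PIDFILE"
        fi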

    Read the article

  • Delphi 5: Ideas for simulating "Obsolete" or "Deprecated" methods?

    - by Ian Boyd
    I want to mark a method as obsolete, but Delphi 5 doesn't have such a feature. For the sake of an example, here is a made-up method with its deprecated and new preferred forms:

        procedure TStormPeaksQuest.BlowHodirsHorn; overload; //obsolete
        procedure TStormPeaksQuest.BlowHodirsHorn(UseProtection: Boolean); overload;

    Note: for this hypothetical example, we assume that using the parameterless version is just plain bad. There are problems with not "using protection" which have no good solution. Nobody likes having to use protection, but nobody wants to not use protection. So we make the caller decide if they want to use protection or not when blowing Hodir's horn. If we default the parameterless version to continue not using protection:

        procedure TStormPeaksQuest.BlowHodirsHorn;
        begin
            BlowHodirsHorn(False); //No protection. Bad!
        end;

    then the developer is at risk of all kinds of nasty stuff. If we force the parameterless version to use protection:

        procedure TStormPeaksQuest.BlowHodirsHorn;
        begin
            BlowHodirsHorn(True); //Use protection; crash if there isn't any
        end;

    then there's a potential for problems if the developer didn't get any protection, or doesn't own any. Now I could rename the obsolete method:

        procedure TStormPeaksQuest.BlowHodirsHorn_Deprecated; overload; //obsolete
        procedure TStormPeaksQuest.BlowHodirsHorn(UseProtection: Boolean); overload;

    But that will cause a compile error, and people will bitch at me (and I really don't want to hear their whining). I want them to get a nag, rather than an actual error. I thought about adding an assertion:

        procedure TStormPeaksQuest.BlowHodirsHorn; //obsolete
        begin
            Assert(false, 'TStormPeaksQuest.BlowHodirsHorn is deprecated. Use BlowHodirsHorn(Boolean)');
            ...
        end;

    But I cannot guarantee that the developer won't ship a version without assertions, causing a nasty crash for the customer. I thought about throwing an assertion only if the developer is debugging:

        procedure TStormPeaksQuest.BlowHodirsHorn; //obsolete
        begin
            if DebugHook > 0 then
                Assert(false, 'TStormPeaksQuest.BlowHodirsHorn is deprecated. Use BlowHodirsHorn(Boolean)');
            ...
        end;

    But I really don't want to be causing a crash at all. I thought of showing a MessageDlg if they're in the debugger (which is a technique I've used in the past):

        procedure TStormPeaksQuest.BlowHodirsHorn; //obsolete
        begin
            if DebugHook > 0 then
                MessageDlg('TStormPeaksQuest.BlowHodirsHorn is deprecated. Use BlowHodirsHorn(Boolean)', mtWarning, [mbOk], 0);
            ...
        end;

    but that is still too disruptive. And it has caused problems where the code is stuck showing a modal dialog, but the dialog box wasn't obviously visible. I was hoping for some sort of warning message that will sit there nagging them - until they gouge their eyes out and finally change their code. I thought perhaps I could add an unused variable:

        procedure TStormPeaksQuest.BlowHodirsHorn; //obsolete
        var
            ThisMethodIsObsolete: Boolean;
        begin
            ...
        end;

    I was hoping this would cause a hint only if someone referenced the code. But Delphi shows a hint even if you don't actually call the obsolete method. Can anyone think of anything else?
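    One more candidate, sketched with the usual hedges (Delphi 5 syntax; Windows must be in the uses clause): OutputDebugString writes to the IDE's Event Log view on every call, so it nags persistently while debugging, never raises anything, and is inert for end users.

        procedure TStormPeaksQuest.BlowHodirsHorn; //obsolete
        begin
          if DebugHook > 0 then
            OutputDebugString(PChar('TStormPeaksQuest.BlowHodirsHorn is deprecated. ' +
                                    'Use BlowHodirsHorn(Boolean)'));
          // assumption: the protected call is the safer default here
          BlowHodirsHorn(True);
        end;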

    Read the article

  • Explicitly instantiating a generic member function of a generic structure

    - by Dennis Zickefoose
    I have a structure with a template parameter, Stream. Within that structure, there is a function with its own template parameter, Type. If I try to force a specific instance of the function to be generated and called, it works fine, if I am in a context where the exact type of the structure is known. If not, I get a compile error. This feels like a situation where I'm missing a typename, but there are no nested types. I suspect I'm missing something fundamental, but I've been staring at this code for so long all I see are redheads, and frankly writing code that uses templates has never been my forte. The following is the simplest example I could come up with that illustrates the issue.

        #include <iostream>

        template<typename Stream>
        struct Printer {
            Stream& str;
            Printer(Stream& str_) : str(str_) { }

            template<typename Type>
            Stream& Exec(const Type& t) {
                return str << t << std::endl;
            }
        };

        template<typename Stream, typename Type>
        void Test1(Stream& str, const Type& t) {
            Printer<Stream> out = Printer<Stream>(str);
            /****** vvv This is the line the compiler doesn't like vvv ******/
            out.Exec<bool>(t);
            /****** ^^^ That is the line the compiler doesn't like ^^^ ******/
        }

        template<typename Type>
        void Test2(const Type& t) {
            Printer<std::ostream> out = Printer<std::ostream>(std::cout);
            out.Exec<bool>(t);
        }

        template<typename Stream, typename Type>
        void Test3(Stream& str, const Type& t) {
            Printer<Stream> out = Printer<Stream>(str);
            out.Exec(t);
        }

        int main() {
            Test2(5);
            Test3(std::cout, 5);
            return 0;
        }

    As it is written, gcc-4.4 gives the following:

        test.cpp: In function 'void Test1(Stream&, const Type&)':
        test.cpp:22: error: expected primary-expression before 'bool'
        test.cpp:22: error: expected ';' before 'bool'

    Test2 and Test3 both compile cleanly, and if I comment out Test1 the program executes, and I get "1 5" as I expect. So it looks like there's nothing wrong with the idea of what I want to do, but I've botched something in the implementation. If anybody could shed some light on what I'm overlooking, it would be greatly appreciated.
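    For what it's worth, the error reads like the classic dependent-name problem rather than a missing typename: inside Test1, the type of out depends on the template parameter Stream, so the parser cannot assume Exec is a member template, and out.Exec<bool> is parsed with '<' as less-than. The template disambiguator fixes the parse (hedged only in that this matches the gcc message shown; it's the standard resolution for exactly this diagnostic):

        template<typename Stream, typename Type>
        void Test1(Stream& str, const Type& t) {
            Printer<Stream> out = Printer<Stream>(str);
            out.template Exec<bool>(t);  // 'template' tells the parser Exec is a member template
        }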

    Read the article

  • Help Repainting a Line

    - by serhio
    I am doing a custom control (inherited from VisualBasic.PowerPacks.LineShape) that should be painted like a standard one, but also have an icon displayed near it. So I just overrode OnPaint like this:

        protected override void OnPaint(System.Windows.Forms.PaintEventArgs e)
        {
            e.Graphics.DrawIcon(myIcon, StartPoint.X, StartPoint.Y);
            base.OnPaint(e);
        }

    Now, everything is OK, but when my control moves, the icon remains drawn in the old place. Is there a way to paint it properly? The sample code for tests:

        using Microsoft.VisualBasic.PowerPacks;
        using System.Windows.Forms;
        using System.Drawing;

        namespace LineShapeTest
        {
            /// <summary>Test Form</summary>
            public class Form1 : Form
            {
                IconLineShape myLine = new IconLineShape();
                ShapeContainer shapeContainer1 = new ShapeContainer();
                Panel panel1 = new Panel();

                public Form1()
                {
                    this.panel1.Dock = DockStyle.Fill;
                    // load your back image here
                    this.panel1.BackgroundImage =
                        global::WindowsApplication22.Properties.Resources._13820t;
                    this.panel1.Controls.Add(shapeContainer1);
                    this.myLine.StartPoint = new Point(20, 30);
                    this.myLine.EndPoint = new Point(80, 120);
                    this.myLine.Parent = this.shapeContainer1;
                    MouseEventHandler panelMouseMove =
                        new MouseEventHandler(this.panel1_MouseMove);
                    this.panel1.MouseMove += panelMouseMove;
                    this.shapeContainer1.MouseMove += panelMouseMove;
                    this.Controls.Add(panel1);
                }

                private void panel1_MouseMove(object sender, MouseEventArgs e)
                {
                    if (e.Button == MouseButtons.Left)
                    {
                        myLine.StartPoint = e.Location;
                    }
                }
            }

            /// <summary>Test LineShape</summary>
            public class IconLineShape : LineShape
            {
                Icon myIcon = SystemIcons.Exclamation;

                protected override void OnPaint(System.Windows.Forms.PaintEventArgs e)
                {
                    e.Graphics.DrawIcon(myIcon, StartPoint.X, StartPoint.Y);
                    base.OnPaint(e);
                }
            }
        }

    Nota bene, for the LineShape: Parent = ShapeContainer, Parent.Parent = Panel.

    Update 1 - TRACES: In this variant of OnPaint, we have traces:

        protected override void OnPaint(System.Windows.Forms.PaintEventArgs e)
        {
            Graphics g = Parent.Parent.CreateGraphics();
            g.DrawIcon(myIcon, StartPoint.X, StartPoint.Y);
            base.OnPaint(e);
        }

    Update 2 - BLINKS: In this variant of OnPaint, we have a blinking image:

        protected override void OnPaint(System.Windows.Forms.PaintEventArgs e)
        {
            Parent.Parent.Invalidate(this.Region, true);
            Graphics g = Parent.Parent.CreateGraphics();
            g.DrawIcon(myIcon, StartPoint.X, StartPoint.Y);
            base.OnPaint(e);
        }

    Update 3 - External invalidation: This variant works well, but... from the exterior of the IconLineShape class:

        private void panel1_MouseMove(object sender, MouseEventArgs e)
        {
            if (e.Button == MouseButtons.Left)
            {
                Region r = myLine.Region;
                myLine.StartPoint = e.Location;
                panel1.Invalidate(r);
            }
        }

        /// <summary>Test LineShape</summary>
        public class IconLineShape : LineShape
        {
            Icon myIcon = SystemIcons.Exclamation;
            Graphics parentGraphics;

            protected override void OnPaint(System.Windows.Forms.PaintEventArgs e)
            {
                parentGraphics.DrawIcon(myIcon, StartPoint.X, StartPoint.Y);
                base.OnPaint(e);
            }

            protected override void OnParentChanged(System.EventArgs e)
            {
                // Parent is a ShapeContainer
                // Parent.Parent is a Panel
                parentGraphics = Parent.Parent.CreateGraphics();
                base.OnParentChanged(e);
            }
        }

    Even though this resolves the problem of the test example, I need this to be done inside the control, because I can't force the external "clients" of this control to remember to save the old region and invalidate the parent each time the location changes...
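    A hedged way to keep the invalidation inside the control: shadow the StartPoint setter (LineShape's property is assumed non-virtual here, hence 'new') and invalidate the icon's old and new rectangles on the host panel around the move. This only kicks in when callers use an IconLineShape-typed reference, as the test code above does.

        public class IconLineShape : LineShape
        {
            private readonly Icon myIcon = SystemIcons.Exclamation;

            public new Point StartPoint
            {
                get { return base.StartPoint; }
                set
                {
                    InvalidateIconArea();   // erase the icon at the old position
                    base.StartPoint = value;
                    InvalidateIconArea();   // repaint it at the new one
                }
            }

            private void InvalidateIconArea()
            {
                // Parent is the ShapeContainer; Parent.Parent is the Panel
                Control host = (Parent == null) ? null : Parent.Parent;
                if (host != null)
                    host.Invalidate(new Rectangle(base.StartPoint, myIcon.Size));
            }

            protected override void OnPaint(System.Windows.Forms.PaintEventArgs e)
            {
                e.Graphics.DrawIcon(myIcon, StartPoint.X, StartPoint.Y);
                base.OnPaint(e);
            }
        }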

    Read the article

  • Setup routing and iptables for new VPN connection to redirect **only** ports 80 and 443

    - by Steve
    I have a new VPN connection (using openvpn) to allow me to route around some ISP restrictions. While it is working fine, it is taking all the traffic over the VPN. This is causing me issues for downloading (my internet connection is a lot faster than the VPN allows), and for remote access. I run an ssh server, and have a daemon running that allows me to schedule downloads via my phone.

    I have my existing ethernet connection on eth0, and the new VPN connection on tun0. I believe I need to set up the default route to use my existing eth0 connection on the 192.168.0.0/24 network, and set the default gateway to 192.168.0.1 (my knowledge is shaky as I haven't done this for a number of years). If that is correct, then I'm not exactly sure how to do it! My current routing table is:

        Kernel IP routing table
        Destination   Gateway       Genmask          Flags  Metric Ref Use Iface  MSS Window irtt
        0.0.0.0       10.51.0.169   0.0.0.0          UG     0      0   0   tun0   0   0      0
        10.51.0.1     10.51.0.169   255.255.255.255  UGH    0      0   0   tun0   0   0      0
        10.51.0.169   0.0.0.0       255.255.255.255  UH     0      0   0   tun0   0   0      0
        85.25.147.49  192.168.0.1   255.255.255.255  UGH    0      0   0   eth0   0   0      0
        169.254.0.0   0.0.0.0       255.255.0.0      U      1000   0   0   eth0   0   0      0
        192.168.0.0   0.0.0.0       255.255.255.0    U      1      0   0   eth0   0   0      0

    After fixing the routing, I believe I need to use iptables to configure prerouting or masquerading to force everything for destination port 80 or 443 over tun0. Again, I'm not exactly sure how to do this! Everything I've found on the internet is trying to do something far more complicated, and trying to sort the wood from the trees is proving difficult. Any help would be much appreciated.

    UPDATE: So far, from the various sources, I've cobbled together the following:

        #!/bin/sh
        DEV1=eth0
        IP1=`ifconfig|perl -nE'/dr:(\S+)/&&say$1'|grep 192.`
        GW1=192.168.0.1
        TABLE1=internet
        TABLE2=vpn
        DEV2=tun0
        IP2=`ifconfig|perl -nE'/dr:(\S+)/&&say$1'|grep 10.`
        GW2=`route -n | grep 'UG[ \t]' | awk '{print $2}'`

        ip route flush table $TABLE1
        ip route flush table $TABLE2

        ip route show table main | grep -Ev ^default | while read ROUTE ; do
            ip route add table $TABLE1 $ROUTE
            ip route add table $TABLE2 $ROUTE
        done

        ip route add table $TABLE1 $GW1 dev $DEV1 src $IP1
        ip route add table $TABLE2 $GW2 dev $DEV2 src $IP2
        ip route add table $TABLE1 default via $GW1
        ip route add table $TABLE2 default via $GW2

        echo "1" > /proc/sys/net/ipv4/ip_forward
        echo "1" > /proc/sys/net/ipv4/ip_dynaddr

        ip rule add from $IP1 lookup $TABLE1
        ip rule add from $IP2 lookup $TABLE2
        ip rule add fwmark 1 lookup $TABLE1
        ip rule add fwmark 2 lookup $TABLE2

        iptables -t nat -A POSTROUTING -o $DEV1 -j SNAT --to-source $IP1
        iptables -t nat -A POSTROUTING -o $DEV2 -j SNAT --to-source $IP2
        iptables -t nat -A PREROUTING -m state --state ESTABLISHED,RELATED -j CONNMARK --restore-mark
        iptables -A OUTPUT -m state --state ESTABLISHED,RELATED -j CONNMARK --restore-mark
        iptables -t nat -A PREROUTING -i $DEV1 -m state --state NEW -j CONNMARK --set-mark 1
        iptables -t nat -A PREROUTING -i $DEV2 -m state --state NEW -j CONNMARK --set-mark 2
        iptables -t nat -A PREROUTING -m connmark --mark 1 -j MARK --set-mark 1
        iptables -t nat -A PREROUTING -m connmark --mark 2 -j MARK --set-mark 2
        iptables -t nat -A PREROUTING -m state --state NEW -m connmark ! --mark 0 -j CONNMARK --save-mark
        iptables -t mangle -A PREROUTING -i $DEV2 -m state --state NEW -p tcp --dport 80 -j CONNMARK --set-mark 2
        iptables -t mangle -A PREROUTING -i $DEV2 -m state --state NEW -p tcp --dport 443 -j CONNMARK --set-mark 2

        route del default
        route add default gw 192.168.0.1 eth0

    Now this seems to be working. Except it isn't! Connections to the blocked websites are going through, and connections not on ports 80 and 443 are using the non-VPN connection. However, port 80 and 443 connections that aren't to the blocked websites are using the non-VPN connection too! As the general goal has been reached, I'm relatively happy, but it would be nice to know why it isn't working exactly right. Any ideas?

    For reference, I now have 3 routing tables: main, internet, and vpn. The listing of them is as follows...

    Main:

        default via 192.168.0.1 dev eth0
        10.38.0.1 via 10.38.0.205 dev tun0
        10.38.0.205 dev tun0 proto kernel scope link src 10.38.0.206
        85.removed via 192.168.0.1 dev eth0
        169.254.0.0/16 dev eth0 scope link metric 1000
        192.168.0.0/24 dev eth0 proto kernel scope link src 192.168.0.73 metric 1

    Internet:

        default via 192.168.0.1 dev eth0
        10.38.0.1 via 10.38.0.205 dev tun0
        10.38.0.205 dev tun0 proto kernel scope link src 10.38.0.206
        85.removed via 192.168.0.1 dev eth0
        169.254.0.0/16 dev eth0 scope link metric 1000
        192.168.0.0/24 dev eth0 proto kernel scope link src 192.168.0.73 metric 1
        192.168.0.1 dev eth0 scope link src 192.168.0.73

    VPN:

        default via 10.38.0.205 dev tun0
        10.38.0.1 via 10.38.0.205 dev tun0
        10.38.0.205 dev tun0 proto kernel scope link src 10.38.0.206
        85.removed via 192.168.0.1 dev eth0
        169.254.0.0/16 dev eth0 scope link metric 1000
        192.168.0.0/24 dev eth0 proto kernel scope link src 192.168.0.73 metric 1
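    A hedged observation with a sketch: all of the --set-mark rules above live in PREROUTING chains, which only see packets arriving on an interface - traffic generated on this host itself never passes through PREROUTING. If the port 80/443 traffic that should go over the VPN originates locally, the usual fix is to mark it in the mangle OUTPUT chain so the fwmark 2 rule can steer it into the vpn table:

        # Mark locally generated web traffic so 'ip rule ... fwmark 2 lookup vpn'
        # applies to it; PREROUTING never sees host-originated packets.
        iptables -t mangle -A OUTPUT -p tcp --dport 80  -j MARK --set-mark 2
        iptables -t mangle -A OUTPUT -p tcp --dport 443 -j MARK --set-mark 2

    This is a sketch of the likely gap, not a drop-in tested ruleset; the SNAT rules may also need a MASQUERADE variant on tun0, depending on how the VPN assigns addresses.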

    Read the article

  • Issue with translating a delegate function from c# to vb.net for use with Google OAuth 2

    - by Jeremy
    I've been trying to translate a Google OAuth 2 example from C# to VB.NET for a co-worker's project. I'm having no end of issues translating the following methods:

        private OAuth2Authenticator<WebServerClient> CreateAuthenticator()
        {
            // Register the authenticator.
            var provider = new WebServerClient(GoogleAuthenticationServer.Description);
            provider.ClientIdentifier = ClientCredentials.ClientID;
            provider.ClientSecret = ClientCredentials.ClientSecret;
            var authenticator =
                new OAuth2Authenticator<WebServerClient>(provider, GetAuthorization) { NoCaching = true };
            return authenticator;
        }

        private IAuthorizationState GetAuthorization(WebServerClient client)
        {
            // If this user is already authenticated, then just return the auth state.
            IAuthorizationState state = AuthState;
            if (state != null)
            {
                return state;
            }

            // Check if an authorization request already is in progress.
            state = client.ProcessUserAuthorization(new HttpRequestInfo(HttpContext.Current.Request));
            if (state != null && (!string.IsNullOrEmpty(state.AccessToken) ||
                                  !string.IsNullOrEmpty(state.RefreshToken)))
            {
                // Store and return the credentials.
                HttpContext.Current.Session["AUTH_STATE"] = _state = state;
                return state;
            }

            // Otherwise do a new authorization request.
            string scope = TasksService.Scopes.TasksReadonly.GetStringValue();
            OutgoingWebResponse response = client.PrepareRequestUserAuthorization(new[] { scope });
            response.Send(); // Will throw a ThreadAbortException to prevent sending another response.
            return null;
        }

    The main issue is this particular line:

        var authenticator =
            new OAuth2Authenticator<WebServerClient>(provider, GetAuthorization) { NoCaching = true };

    The method signature for this line reads as follows:

        Public Sub New(tokenProvider As TClient,
                       authProvider As System.Func(Of TClient, DotNetOpenAuth.OAuth2.IAuthorizationState))

    My understanding of delegate functions in VB.NET isn't the greatest. I have read over all of the MSDN documentation and other relevant resources on the web, but I'm still stuck as to how to translate this particular line. So far all of my attempts have resulted in either a cast error (see below) or no call to GetAuthorization.

    The code (VB.NET on .NET 3.5):

        Private Function CreateAuthenticator() As OAuth2Authenticator(Of WebServerClient)
            ' Register the authenticator.
            Dim client As New WebServerClient(GoogleAuthenticationServer.Description, oauth.ClientID, oauth.ClientSecret)
            Dim authDelegate As Func(Of WebServerClient, IAuthorizationState) = AddressOf GetAuthorization
            Dim authenticator = New OAuth2Authenticator(Of WebServerClient)(client, authDelegate) With {.NoCaching = True}
            'Dim authenticator = New OAuth2Authenticator(Of WebServerClient)(client, GetAuthorization(client)) With {.NoCaching = True}
            'Dim authenticator = New OAuth2Authenticator(Of WebServerClient)(client, New Func(Of WebServerClient, IAuthorizationState)(Function(c) GetAuthorization(c))) With {.NoCaching = True}
            'Dim authenticator = New OAuth2Authenticator(Of WebServerClient)(client, New Func(Of WebServerClient, IAuthorizationState)(AddressOf GetAuthorization)) With {.NoCaching = True}
            Return authenticator
        End Function

        Private Function GetAuthorization(arg As WebServerClient) As IAuthorizationState
            ' If this user is already authenticated, then just return the auth state.
            Dim state As IAuthorizationState = AuthState
            If (Not state Is Nothing) Then
                Return state
            End If

            ' Check if an authorization request already is in progress.
            state = arg.ProcessUserAuthorization(New HttpRequestInfo(HttpContext.Current.Request))
            If (state IsNot Nothing) Then
                If ((String.IsNullOrEmpty(state.AccessToken) = False Or String.IsNullOrEmpty(state.RefreshToken) = False)) Then
                    ' Store Credentials
                    HttpContext.Current.Session("AUTH_STATE") = state
                    _state = state
                    Return state
                End If
            End If

            ' Otherwise do a new authorization request.
            Dim scope As String = AnalyticsService.Scopes.AnalyticsReadonly.GetStringValue()
            Dim _response As OutgoingWebResponse = arg.PrepareRequestUserAuthorization(New String() {scope})
            ' Add Offline Access and forced Approval
            _response.Headers("location") += "&access_type=offline&approval_prompt=force"
            _response.Send() ' Will throw a ThreadAbortException to prevent sending another response.
            Return Nothing
        End Function

    The cast error:

        Server Error in '/' Application.
        Unable to cast object of type 'DotNetOpenAuth.OAuth2.AuthorizationState' to type
        'System.Func`2[DotNetOpenAuth.OAuth2.WebServerClient,DotNetOpenAuth.OAuth2.IAuthorizationState]'.
        Description: An unhandled exception occurred during the execution of the current web request.
        Please review the stack trace for more information about the error and where it originated in the code.
        Exception Details: System.InvalidCastException: Unable to cast object of type
        'DotNetOpenAuth.OAuth2.AuthorizationState' to type
        'System.Func`2[DotNetOpenAuth.OAuth2.WebServerClient,DotNetOpenAuth.OAuth2.IAuthorizationState]'.

    I've spent the better part of a day on this, and it's starting to drive me nuts. Help is much appreciated.
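    A hedged sketch of the wiring to try first (nothing here is specific to DotNetOpenAuth beyond the signatures quoted above). The reported cast error - an AuthorizationState where a Func(...) was expected - is what happens when GetAuthorization gets invoked and its result passed, rather than the method itself, which Option Strict Off will happily allow at compile time. An explicitly typed single-line lambda (supported in VB 9 / .NET 3.5) pins the conversion down:

        Private Function CreateAuthenticator() As OAuth2Authenticator(Of WebServerClient)
            Dim client As New WebServerClient(GoogleAuthenticationServer.Description, _
                                              oauth.ClientID, oauth.ClientSecret)
            ' Explicitly typed lambda: defers the call instead of invoking it now.
            Dim authProvider As Func(Of WebServerClient, IAuthorizationState) = _
                Function(c As WebServerClient) GetAuthorization(c)
            Return New OAuth2Authenticator(Of WebServerClient)(client, authProvider) _
                With {.NoCaching = True}
        End Function

    Turning Option Strict On for this file should also surface exactly which conversion the compiler is choosing, which is worth doing regardless.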

    Read the article

  • Forcing a transaction to rollback on validation errors in Seam

    - by Chris Williams
    Quick version: we're looking for a way to force a transaction to roll back when specific situations occur during the execution of a method on a backing bean, but we'd like the rollback to happen without having to show the user a generic 500 error page. Instead, we'd like the user to see the form she just submitted and a FacesMessage that indicates what the problem was.

    Long version: we've got a few backing beans that use components to perform a few related operations in the database (using JPA/Hibernate). During the process, an error can occur after some of the database operations have happened. This could be for a few different reasons, but for this question, let's assume there's been a validation error that is detected after some DB writes have happened and that wasn't detectible before the writes occurred. When this happens, we'd like to make sure all of the db changes up to this point will be rolled back. Seam can deal with this, because if you throw a RuntimeException out of the current FacesRequest, Seam will roll back the current transaction. The problem with this is that the user is shown a generic error page. In our case, we'd actually like the user to be shown the page she was on with a descriptive message about what went wrong, and have the opportunity to correct the bad input that caused the problem.

    The solution we've come up with is to throw an Exception from the component that discovers the validation problem, with the annotation:

        @ApplicationException( rollback = true )

    Then our backing bean can catch this exception, assume the component that threw it has published the appropriate FacesMessage, and simply return null to take the user back to the input page with the error displayed. The ApplicationException annotation tells Seam to roll back the transaction, and we're not showing the user a generic error page.

    This worked well for the first place we used it, which happened to only be doing inserts. In the second place we tried to use it, we have to delete something during the process. In this second case, everything works if there's no validation error. If a validation error does happen, the rollback Exception is thrown and the transaction is marked for rollback. Even if no database modifications have happened to be rolled back, when the user fixes the bad data and re-submits the page, we're getting:

        java.lang.IllegalArgumentException: Removing a detached instance

    The detached instance is lazily loaded from another object (there's a many-to-one relationship). That parent object is loaded when the backing bean is instantiated. Because the transaction was rolled back after the validation error, the object is now detached. Our next step was to change this page from conversation scope to page scope. When we did this, Seam can't even render the page after the validation error, because our page has to hit the DB to render and the transaction has been marked for rollback.

    So my question is: how are other people dealing with handling errors cleanly and properly managing transactions at the same time? Better yet, I'd love to be able to use everything we have now if someone can spot something I'm doing wrong that would be relatively easy to fix. I've read the Seam Framework article on unified error page and exception handling, but it is geared more towards the generic errors your application might encounter.
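    For concreteness, a hedged sketch of the pattern described above (the exception class and component names are mine; the annotation is Seam's, and as I understand its semantics an application exception is thrown as-is rather than wrapped, so the bean can catch it):

        // The marker exception: Seam rolls the transaction back, but the
        // backing bean still gets to catch it and keep the user on the form.
        @ApplicationException(rollback = true)
        public class ValidationRollbackException extends RuntimeException {
            public ValidationRollbackException(String message) {
                super(message);
            }
        }

        // In the backing bean: the component that threw is assumed to have
        // published the FacesMessage; returning null redisplays the form.
        public String save() {
            try {
                orderComponent.persistEverything();
                return "success";
            } catch (ValidationRollbackException e) {
                return null; // transaction already marked rollback-only
            }
        }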

    Read the article

  • InputText inside Primefaces dataTable is not refreshed

    - by robson
    I need to have inputTexts inside a dataTable when the form is in editable mode. Everything works properly except form cleaning with immediate="true" (without form validation); then the PrimeFaces dataTable behaves unpredictably. After filling the dataTable with new data, it still shows the old values. Short example:

    test.xhtml:

        <h:body>
            <h:form id="form">
                <p:dataTable var="v" value="#{test.list}" id="testTable">
                    <p:column headerText="Test value">
                        <p:inputText value="#{v}"/>
                    </p:column>
                </p:dataTable>
                <h:dataTable var="v" value="#{test.list}" id="testTable1">
                    <h:column>
                        <f:facet name="header">
                            <h:outputText value="Test value" />
                        </f:facet>
                        <p:inputText value="#{v}" />
                    </h:column>
                </h:dataTable>
                <p:dataTable var="v" value="#{test.list}" id="testTable2">
                    <p:column headerText="Test value">
                        <h:outputText value="#{v}" />
                    </p:column>
                </p:dataTable>
                <p:commandButton value="Clear" actionListener="#{test.clear()}" immediate="true"
                                 update=":form:testTable :form:testTable1 :form:testTable2"/>
                <p:commandButton value="Update" actionListener="#{test.update()}"
                                 update=":form:testTable :form:testTable1 :form:testTable2"/>
            </h:form>
        </h:body>

    And the Java:

        import java.io.Serializable;
        import java.util.ArrayList;
        import java.util.List;
        import javax.annotation.PostConstruct;
        import javax.faces.bean.ViewScoped;
        import javax.inject.Inject;
        import javax.inject.Named;

        @Named
        @ViewScoped
        public class Test implements Serializable {

            private static final long serialVersionUID = 1L;
            private List<String> list;

            @PostConstruct
            private void init() {
                update();
            }

            public List<String> getList() {
                return list;
            }

            public void setList(List<String> list) {
                this.list = list;
            }

            public void clear() {
                list = new ArrayList<String>();
            }

            public void update() {
                list = new ArrayList<String>();
                list.add("Item 1");
                list.add("Item 2");
            }
        }

    In the example above I have 3 configurations:

    1. p:dataTable with p:inputText
    2. h:dataTable with p:inputText
    3. p:dataTable with h:outputText

    And 2 buttons: the first clears the data, the second applies data.

    Workflow:

    1. Try to change the data in the inputTexts in p:dataTable and h:dataTable.
    2. Clear the data of the list (an ArrayList of String) by clicking the "Clear" button (imagine that you click cancel on a form because you don't want to store the data to the database).
    3. Load new data by clicking the "Update" button (imagine that you are opening a new form with new data).

    Question: why does p:dataTable still show the manually changed data, not the loaded data? Is there any way to force p:dataTable to behave like h:dataTable in this case?
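    A hedged workaround sketch using only plain JSF 2 APIs (no PrimeFaces specifics): when validation is skipped with immediate="true", the inputs keep their submitted local values, so the next render shows those instead of the model. Visiting the table's subtree and resetting every EditableValueHolder forces the re-read; the component id "form:testTable" matches the example above.

        // Additional imports assumed: javax.faces.context.FacesContext,
        // javax.faces.component.*, javax.faces.component.visit.*
        public void clear() {
            list = new ArrayList<String>();

            FacesContext ctx = FacesContext.getCurrentInstance();
            UIComponent table = ctx.getViewRoot().findComponent("form:testTable");
            if (table != null) {
                table.visitTree(VisitContext.createVisitContext(ctx),
                        new VisitCallback() {
                            public VisitResult visit(VisitContext context, UIComponent target) {
                                if (target instanceof EditableValueHolder) {
                                    // drop the stale submitted/local value
                                    ((EditableValueHolder) target).resetValue();
                                }
                                return VisitResult.ACCEPT;
                            }
                        });
            }
        }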

    Read the article

  • Bad disks in ancient server

    - by Joel Coel
    I have a 1998-era Netware 3.12 server that runs everything on our campus: general ledger, purchasing, payroll, student information, grades, you name it. The server has an Adaptec RAID controller with two volumes: RAID 1, 2 17GB scsi disks, Seagate ST318417W RAID 5, 3 4GB scsi disks, 2 Seagate ST34573W and 1 ST34572W. We are currently in the early stages of a project to replace this system, but you don't just jump into a new system like that and so I need to keep this server running until at least November 2011. This week we had not one but two hard drives fail. Thankfully they are from different volumes and we're able to keep running for the moment, but given the close nature of these failures I have serious doubts that I'll be able to avoid catastrophic failure from this server through the November target as is without restoring the RAID redundancy — it'll only take one more drive failure anywhere and I'm completely hosed. We are fortunate enough to have exact match "spares" lying around for both drives, but the spares are in unknown condition. I tried swapping just them in, but the RAID controller isn't smart enough to handle this and it renders the system unbootable. As for the RAID controller itself, there is utility I can get into during POST via a Ctrl-A shortcut, but I can't do much useful from there. To actually manage volumes I must first boot in to Netware, at which point I can use CI/O Array Management Software Version 2.0 to actually look at volume information. I suspect that the normal way to manage things is to boot from a special floppy with the controller software on it, but that floppy is long gone. Going through the options in the RAID software, I think the only supported way to replace a disk in an existing RAID volume is to physically add the disk, boot up and configure it as a "spare" for a volume, force the volume to use the spare to replace an existing down disk (and at this point I'm only guessing) so that the down disk becomes the spare, repair the volume, remove the spare from the volume, and then shut down and remove the disk. Then start all over for the other failed disk. All this amounts to a lot of downtime, assuming I can even make it work and that my spares are any good. As for finding reliable spares, I have no clue where to even begin looking to find a new 4GB scsi drive, or even which exact scsi system I'm looking for, as it's gone through a few different iterations over time. Another option is to migrate this to a virtual machine (hyper-v), but all previous attempts we've made in this area have failed to get very far. When this machine was installed I was just graduating from high school, and so it requires lower level knowledge of netware and dos than I ever developed, or if I did have since forgotten (I'm not exactly a dos neophyte, either). Part of my problem is this is a high-use server, and taking it down for a few days to figure things out isn't gonna fly very well. As for the question, I'm looking for anything that might be helpful in this situation: a recommendation on a place to find good spares from this era, personal experience repairing RAID volumes using a similar controller or building a hyper-v vm from an old netware server, a line on a floppy with better software for the RAID controller, recommendation on a good Novell consultant in Nebraska that would be able to put things right, a whole other option I haven't considered yet, etc. 
Update: For backups, we have good (recently verified via restore) backups of the data only -- nothing for the software that actually runs things.

Update 2: Just a progress report: I currently have a working Netware 3.12 install in VMware Server 2.0, thanks largely to the guide I found here: http://cerbulescubogdan.blogspot.com/2010/11/novell-netware-312-on-vmware.html The next steps are preparing empty Netware volumes to match the additional volumes on my existing server, taking a dump of everything on the C:\ drive and the Netware volumes on my existing server, figuring out from that information what modules need to be added to Netware, installing my licenses (we do still have that disk, if it's any good), and moving data over. I have approval to bring the server down for a week after the first of the year (sadly not before), so, aside from creating empty volumes, the rest of the work will have to wait until then.

Final Update (Jan 5, 2011): I was able to get spares working in both RAID arrays without data loss this week. Both are now listed by the controller as "FAULT TOLLERANT" (yay!). I was also able to build on the progress from my last update and now have a functional "spare" server in VMware Server 2.0. The spare can run and use our ERP software, but I can't put it into production because I can't (yet) print from that box (and I have no idea why). Even so, this VM will do in a pinch if I have no other choice, and between it and the repaired RAID arrays I'm comfortable pushing on until I can junk the machine in November.

  • surfaceDestroyed called out of turn

    - by Avasulthiris
    I'm currently developing on minimum sdk version 3 (Android 1.5 - cupcake) and I'm having a strange unexplained issue that I have not been able to solve on my own. It is now becoming a rather urgent issue as I've already missed 1 deadline... I'm writing a high-level library to make long term android development easier and quicker. The one specific module has to capture images for a application... I've gotten everything right so far over the last couple months, except this one little thing and I don't know what to do any more: When I use the Camera object and implement a SurfaceHolder.Callback, the methods surfaceCreated() and surfaceChanged() are called one after the other. Then when the activity finishes, surfaceDestroyed() is called. This is how it should be, but when I stick the exact same code in my library (plain Java library that references the Android API - not in an activity), surfaceDestroyed() is called directly after created and changed. As a result - the camera object is closed before I can use it and the application force closes. What a pain. I can't do anything! This method call is controlled by the device.. Why does the surface close for no reason? Even when I post it to run on the activity thread through my own invokeAndWait(Runnable) method, like I do for many other things. I have 5 different working examples of different ways and implementations of capturing images in android but I still get the same issue when I plug it into my library. I don't understand what the difference is. The code is pretty much the same - and I post all the related code to the UI thread so its not a thread handling issue or anything like that. I've rewritten it about 20 times in different ways - same issue every time.. The only other way to approach it that I know of is creating a new Camera and setting it to the VideoView. The android source (c++ native code) however provides no Camera constructor, only an open() method which automatically forwards the camera's state to 'prepared' but I can only set the camera to the VideoView from the 'initialized' state. Pretty silly, I know, but there is no way around it unless I modify the Android library source code haha. not an option! The API does not allow for this method - you are expected to use it like my first example. So essentially - i just need to understand exactly why surfaceDestroyed() is called out of turn and if there is anything I can do to avoid it closing? If i can just understand the exact logic behind it and how it works! The documentation isn't much help! Secondly, if someone knows of any alternative ways to do it, as my second example, but hopefully one which the API actually allows for? haha Thanks guys. I would post code, but its fairly complicated, a couple thousand lines for this specific class and it would probably take a couple days to explain with all the threading and event listeners and what not. I just need help with this 1 single thing. Please let me know if you have any questions.

  • How do I create something like a negated character class with a string instead of characters?

    - by Chas. Owens
    I am trying to write a tokenizer for Mustache in Perl. I can easily handle most of the tokens like this: #!/usr/bin/perl use strict; use warnings; my $comment = qr/ \G \{\{ ! (?<comment> .+? ) }} /xs; my $variable = qr/ \G \{\{ (?<variable> .+? ) }} /xs; my $text = qr/ \G (?<text> .+? ) (?= \{\{ | \z ) /xs; my $tokens = qr/ $comment | $variable | $text /x; my $s = do { local $/; <DATA> }; while ($s =~ /$tokens/g) { my ($type) = keys %+; (my $contents = $+{$type}) =~ s/\n/\\n/; print "type [$type] contents [$contents]\n"; } __DATA__ {{!this is a comment}} Hi {{name}}, I like {{thing}}. But I am running into trouble with the Set Delimiters directive: #!/usr/bin/perl use strict; use warnings; my $delimiters = qr/ \G \{\{ (?<start> .+? ) = [ ] = (?<end> .+?) }} /xs; my $comment = qr/ \G \{\{ ! (?<comment> .+? ) }} /xs; my $variable = qr/ \G \{\{ (?<variable> .+? ) }} /xs; my $text = qr/ \G (?<text> .+? ) (?= \{\{ | \z ) /xs; my $tokens = qr/ $comment | $delimiters | $variable | $text /x; my $s = do { local $/; <DATA> }; while ($s =~ /$tokens/g) { for my $type (keys %+) { (my $contents = $+{$type}) =~ s/\n/\\n/; print "type [$type] contents [$contents]\n"; } } __DATA__ {{!this is a comment}} Hi {{name}}, I like {{thing}}. {{(= =)}} If I change it to my $delimiters = qr/ \G \{\{ (?<start> [^{]+? ) = [ ] = (?<end> .+?) }} /xs; It works fine, but the point of the Set Delimiters directive is to change the delimiters, so the code will wind up looking like my $variable = qr/ \G $start (?<variable> .+? ) $end /xs; And it is perfectly valid to say {{{== ==}}} (i.e. change the delimiters to {= and =}). What I want, but maybe not what I need, is the ability to say something like (?:not starting string)+?. I figure I am just going to have to give up being clean about it and drop code into the regex to force it to match only what I want. I am trying to avoid that for four reasons: I don't think it is very clean. It is marked as experimental. I am not very familier with it (I think it comes down to (?{CODE}) and returning special values. I am hoping someone knows some other exotic feature that I am not familiar with that fits the situation better (e.g. (?(condition)yes-pattern|no-pattern)). Just to make things clear (I hope), I am trying to match a constant length starting delimiter followed by the shortest string that allows a match and does not contain the starting delimiter followed by a space followed by an equals sign followed by the shortest string that allows a match that ends with the ending delimiter.

  • Getting the DirectShow VideoRender filter to respond to MediaType changes on its Input Pin?

    - by Jonathan Websdale
    Below is the code extract from my decoder transform filter which takes in data from my source filter which is taking RTP network data from an IP camera. The source filter, decode filter can dynamically respond to changes in the camera image dimensions since I need to handle resolution changes in the decode library. I've used the 'ReceiveConnection' method as described in the DirectShow help, passing the new MediaType data in the next sample. However, I can't get the Video Mixing Renderer to accept the resolution changes dynamically even though the renderer will render the different resolution if the graph is stopped and restarted. Can anyone point out what I need to do to get the renderer to handle dynamic resolution changes? HRESULT CDecoder::Receive(IMediaSample* pIn) { //Input data does not necessarily correspond one-to-one //with output frames, so we must override Receive instead //of Transform. HRESULT hr = S_OK; //Deliver input to library long cBytes = pIn->GetActualDataLength(); BYTE* pSrc; pIn->GetPointer(&pSrc); try { hr = m_codec.Decode(pSrc, cBytes, (hr == S_OK)?&tStart : NULL); } catch (...) { hr = E_UNEXPECTED; } if (FAILED(hr)) { if (theLog.enabled()){theLog.strm() << "Decoder Error " << hex << hr << dec << " - resetting input"; theLog.write();} //Force reset of decoder m_bReset = true; m_codec.ResetInput(); //We have handled the error -- don't pass upstream or the source may stop. return S_OK; } //Extract and deliver any decoded frames hr = DeliverDecodedFrames(); return hr; } HRESULT CDecoder::DeliverDecodedFrames() { HRESULT hr = S_OK; for (;;) { DecodedFrame frame; bool bFrame = m_codec.GetDecodedFrame(frame); if (!bFrame) { break; } CMediaType mtIn; CMediaType mtOut; GetMediaType( PINDIR_INPUT, &mtIn); GetMediaType( PINDIR_OUTPUT, &mtOut); //Get the output pin's current image resolution VIDEOINFOHEADER* pvi = (VIDEOINFOHEADER*)mtOut.Format(); if( pvi->bmiHeader.biWidth != m_cxInput || pvi->bmiHeader.biHeight != m_cyInput) { HRESULT hr = GetPin(PINDIR_OUTPUT)->GetConnected()->ReceiveConnection(GetPin(PINDIR_OUTPUT), &mtIn); if(SUCCEEDED(hr)) { SetMediaType(PINDIR_OUTPUT, &mtIn); } } IMediaSamplePtr pOut; hr = m_pOutput->GetDeliveryBuffer(&pOut, 0, 0, NULL); if (FAILED(hr)) { break; } AM_MEDIA_TYPE* pmt; if (pOut->GetMediaType(&pmt) == S_OK) { CMediaType mt(*pmt); DeleteMediaType(pmt); SetMediaType(PINDIR_OUTPUT, &mt); pOut->SetMediaType(&mt); } // crop, tramslate and deliver BYTE* pDest; pOut->GetPointer(&pDest); m_pConverter->Convert(frame.Width(), frame.Height(), frame.GetY(), frame.GetU(), frame.GetV(), pDest); pOut->SetActualDataLength(m_pOutput->CurrentMediaType().GetSampleSize()); pOut->SetSyncPoint(true); if (frame.HasTimestamp()) { REFERENCE_TIME tStart = frame.Timestamp(); REFERENCE_TIME tStop = tStart+1; pOut->SetTime(&tStart, &tStop); } m_pOutput->Deliver(pOut); } return hr; }

  • Build and migrated to software raid (mdadm) on GPT disk, now can't assemble array

    - by John H
    mdadm, gpt issues, unrecognized partitions. Simplified question: How do I get mdadm to recognize GPT partitions? I have been attempting to convert/copy my Ubuntu 11.10 OS from a single drive to software raid 1. I have done similar in the past, but in this case, I was adding in a drive that has been configured for GPT and I tried to work with that without fully looking into the implications. Currently, I have a non-booting mdadm RAID 1 array of /dev/md127 (the OS assigned that and it keeps picking up). I am booting off of live USB keys, currently System Rescue CD from sysresccd. While gdisk and parted can see all the partitions, most of the OS utilities do not, including mdadm. My main goal is just to make the raid array accessible so I can get pull the data and start fresh (without using GPT). /dev/md127 /dev/sda /dev/sda1 <- GPT type partition /dev/sda1 <- exists within the GPT part, member of md127 /dev/sda2 <- exists within the GPT part, empty /dev/sdb /dev/sdb1 <- GPT type partition /dev/sdb1 <- exists within the GPT part, member of md127 History: POINT A: The original OS was install on sda (actually /dev/sda6). I used a the Ubuntu live usb to add sdb. I got warning from fdisk about GPT so I used gdisk to create a raid partition (sdb1) and mdadm to create a raid1 mirror with a missing drive. I had many issues getting this working (including being unable to get grub to install) but I eventually got it to boot using grub on sda and /dev/md127 off of sdb. So at point A, I had copied my OS from sda6 to md127 on sdb. I then booted into a rescue mode and attempted to get a bootloader onto sdb, which failed. I then discovered my mistake: I had installed the raid onto sdb instead of sdb1, essentially overwriting the sdb1 partition. POINT B: I now had two copies of my data- one on md127/sdb, and one on sda. I destroyed data on sda and created a new GPT table on sda. I then created sda1 for the raid array, and sda2 for a scratch partition. I added sda1 into the raid array and let it rebuild. md127 now covered /dev/sdb and /dev/sda1 as fully active and synced. POINT C: I rebooted onto linux rescue again and was still able to access the raid array. I then removed /dev/sdb from the array and created /dev/sdb1 for the raid. I added sdb1 to the array and let it sync. I was able to mount and access /dev/md127 without issues. Once it completed, both /dev/sda1 and /dev/sdb1 were GPT partitions and actively syncing. POINT D (current): I rebooted again to test if the array would boot and grub failed to load. I booted off of my live thumb drive and found that I can no longer assemble the raid array. mdadm doesn't see the required partitions. 
--
root@freshdesk /root % uname -a
Linux freshdesk 3.0.24-std251-amd64 #2 SMP Sat Mar 17 12:08:55 UTC 2012 x86_64 AMD Athlon(tm) II X4 645 Processor AuthenticAMD GNU/Linux

=== /proc/partitions and parted look good:

root@freshdesk /root % cat /proc/partitions
major minor  #blocks  name
   7        0     301788 loop0
   8        0  976762584 sda
   8        1  732579840 sda1
   8        2  244181703 sda2
   8       16  732574584 sdb
   8       17  732573543 sdb1
   8       32    7876607 sdc
   8       33    7873349 sdc1

(parted) print all
Model: ATA ST31000528AS (scsi)
Disk /dev/sda: 1000GB
Sector size (logical/physical): 512B/512B
Partition Table: gpt

Number  Start   End     Size    File system  Name                Flags
 1      1049kB  750GB   750GB   ext4
 2      750GB   1000GB  250GB                Linux/Windows data

Model: ATA SAMSUNG HD753LJ (scsi)
Disk /dev/sdb: 750GB
Sector size (logical/physical): 512B/512B
Partition Table: gpt

Number  Start   End    Size   File system  Name        Flags
 1      1049kB  750GB  750GB  ext4         Linux RAID  raid

Model: SanDisk SanDisk Cruzer (scsi)
Disk /dev/sdc: 8066MB
Sector size (logical/physical): 512B/512B
Partition Table: msdos

Number  Start   End     Size    Type     File system  Flags
 1      31.7kB  8062MB  8062MB  primary  fat32        boot, lba

===
# no sda2, and I double-checked that the sdb1 is the one shown in parted
root@freshdesk /root % blkid
/dev/loop0: TYPE="squashfs"
/dev/sda1: UUID="75dd6c2d-f0a8-4302-9da4-792cc7d72355" TYPE="ext4"
/dev/sdc1: LABEL="PENDRIVE" UUID="1102-3720" TYPE="vfat"
/dev/sdb1: UUID="2dd89f15-65bb-ff88-e368-bf24bd0fce41" TYPE="linux_raid_member"

root@freshdesk /root % mdadm -E /dev/sda1
mdadm: No md superblock detected on /dev/sda1.
# this is probably a result of me attempting to force the array up, putting superblocks on the GPT partition

root@freshdesk /root % mdadm -E /dev/sdb1
/dev/sdb1:
          Magic : a92b4efc
        Version : 0.90.00
           UUID : 2dd89f15:65bbff88:e368bf24:bd0fce41
  Creation Time : Fri Mar 30 19:25:30 2012
     Raid Level : raid1
  Used Dev Size : 732568320 (698.63 GiB 750.15 GB)
     Array Size : 732568320 (698.63 GiB 750.15 GB)
   Raid Devices : 2
  Total Devices : 2
Preferred Minor : 127

    Update Time : Sat Mar 31 12:39:38 2012
          State : clean
 Active Devices : 1
Working Devices : 2
 Failed Devices : 1
  Spare Devices : 1
       Checksum : a7d038b3 - correct
         Events : 20195

      Number   Major   Minor   RaidDevice State
this     2       8       17        2      spare   /dev/sdb1

   0     0       8        1        0      active sync   /dev/sda1
   1     1       0        0        1      faulty removed
   2     2       8       17        2      spare   /dev/sdb1

===
root@freshdesk /root % mdadm -A /dev/md127 /dev/sda1 /dev/sdb1
mdadm: no recogniseable superblock on /dev/sda1
mdadm: /dev/sda1 has no superblock - assembly aborted

root@freshdesk /root % mdadm -A /dev/md127 /dev/sdb1
mdadm: cannot open device /dev/sdb1: Device or resource busy
mdadm: /dev/sdb1 has no superblock - assembly aborted
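As a raw-level cross-check of what mdadm -E reports, here is a small Python sketch. It assumes the v0.90 metadata format shown in the output above, whose superblock sits at the start of the last 64 KiB-aligned 64 KiB of the device; the device paths are just the ones from this question, and it would need to run as root in the rescue environment:

import struct

# Check whether a partition carries a v0.90 md superblock. The offset rule
# and magic follow the 0.90 format; 0.90 superblocks are written in host
# byte order, and '<I' assumes the little-endian x86_64 box shown in uname.
MD_MAGIC = 0xa92b4efc

def md090_superblock_offset(size_bytes):
    # 0.90 superblock: start of the last full 64 KiB-aligned 64 KiB chunk.
    chunk = 64 * 1024
    return (size_bytes & ~(chunk - 1)) - chunk

def has_md090_superblock(path):
    with open(path, 'rb') as dev:
        dev.seek(0, 2)                      # seek to end to learn device size
        size = dev.tell()
        dev.seek(md090_superblock_offset(size))
        (magic,) = struct.unpack('<I', dev.read(4))
    return magic == MD_MAGIC

for dev in ('/dev/sda1', '/dev/sdb1'):
    print("%s: %s" % (dev, has_md090_superblock(dev)))

If sdb1 reports True and sda1 False, that confirms at the byte level that only sdb1 still carries usable metadata.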

  • KB2667402 update fails to install with 0x80004005. Yet it's also showing as installed

    - by growse
    I've got 15 Windows 2008 R2 x64 servers that I manage with SCCM2012. I've noticed during my Windows updates reporting that there's two boxes that are showing as 'error' for total updates installed. Digging around, it looks like the update that is failing to install is KB2667402. The Software Centre on the server itself shows the following: The software change returned error code 0x80004005(-2147467259). So SCCM thinks it hasn't installed the update. However, if I go to the Programs and Features application and select 'Windows Updates', I can see an entry for KB2667402: If I try and uninstall this, I get an error: An error occurred. Not all of the updates were successfully uninstalled If I try and download the patch from Microsoft directly, I get the same error installing as displayed in Software Centre. The only odd thing about this setup that I can think would affect this is that I run the RDP service on a non-standard port. However, I do this across all the servers, so it seems odd that it would fail on just 2 out of 15. The tail of the WindowsUpdate.log file is below: 2012-06-26 15:33:53:184 3924 1608 COMAPI ------------- 2012-06-26 15:33:53:190 3924 1608 COMAPI -- START -- COMAPI: Install [ClientId = CcmExec] 2012-06-26 15:33:53:190 3924 1608 COMAPI --------- 2012-06-26 15:33:53:190 3924 1608 COMAPI - Allow source prompts: No; Forced: No; Force quiet: Yes 2012-06-26 15:33:53:190 3924 1608 COMAPI - Updates in request: 1 2012-06-26 15:33:53:190 3924 1608 COMAPI - ServiceID = {3DA21691-E39D-4DA6-8A4B-B43877BCB1B7} Managed 2012-06-26 15:33:53:199 860 1198 Agent ************* 2012-06-26 15:33:53:199 860 1198 Agent ** START ** Agent: Installing updates [CallerId = CcmExec] 2012-06-26 15:33:53:199 860 1198 Agent ********* 2012-06-26 15:33:53:199 860 1198 Agent * Updates to install = 1 2012-06-26 15:33:53:201 860 1198 Agent * Title = Security Update for Windows Server 2008 R2 x64 Edition (KB2667402) 2012-06-26 15:33:53:201 860 1198 Agent * UpdateId = {48859BE4-1331-4CD2-8E70-3B537180A0D0}.103 2012-06-26 15:33:53:201 860 1198 Agent * Bundles 1 updates: 2012-06-26 15:33:53:201 860 1198 Agent * {D854ECF1-99A7-4D67-B435-2D041BF79565}.103 2012-06-26 15:33:53:204 3924 1608 COMAPI - Updates to install = 1 2012-06-26 15:33:53:204 3924 1608 COMAPI <<-- SUBMITTED -- COMAPI: Install [ClientId = CcmExec] 2012-06-26 15:33:53:221 860 1198 Agent WARNING: failed to calculate prior restore point time with error 0x80070002; setting restore point 2012-06-26 15:33:53:222 860 1198 Agent WARNING: LoadLibrary failed for srclient.dll with hr:8007007e 2012-06-26 15:33:53:322 860 1198 DnldMgr Preparing update for install, updateId = {D854ECF1-99A7-4D67-B435-2D041BF79565}.103. 
2012-06-26 15:33:53:325 5700 117c Misc =========== Logging initialized (build: 7.5.7601.17514, tz: +0100) =========== 2012-06-26 15:33:53:325 5700 117c Misc = Process: C:\Windows\system32\wuauclt.exe 2012-06-26 15:33:53:325 5700 117c Misc = Module: C:\Windows\system32\wuaueng.dll 2012-06-26 15:33:53:324 5700 117c Handler ::::::::::::: 2012-06-26 15:33:53:325 5700 117c Handler :: START :: Handler: CBS Install 2012-06-26 15:33:53:325 5700 117c Handler ::::::::: 2012-06-26 15:33:53:330 5700 117c Handler Starting install of CBS update D854ECF1-99A7-4D67-B435-2D041BF79565 2012-06-26 15:33:53:342 5700 117c Handler CBS package identity: Package_for_KB2667402~31bf3856ad364e35~amd64~~6.1.2.0 2012-06-26 15:33:53:366 5700 117c Handler Installing self-contained with source=C:\Windows\SoftwareDistribution\Download\44059e0415033d6f699a50ef69dd5ff2\windows6.1-kb2667402-v2-x64.cab, workingdir=C:\Windows\SoftwareDistribution\Download\44059e0415033d6f699a50ef69dd5ff2\inst 2012-06-26 15:33:56:270 5700 3b8 Handler FATAL: CBS called Error with 0x80004005, 2012-06-26 15:33:56:402 5700 117c Handler FATAL: Completed install of CBS update with type=0, requiresReboot=0, installerError=1, hr=0x80004005 2012-06-26 15:33:56:405 5700 117c Handler ::::::::: 2012-06-26 15:33:56:406 5700 117c Handler :: END :: Handler: CBS Install 2012-06-26 15:33:56:406 5700 117c Handler ::::::::::::: 2012-06-26 15:33:56:433 860 1198 Agent ********* 2012-06-26 15:33:56:433 860 1198 Agent ** END ** Agent: Installing updates [CallerId = CcmExec] 2012-06-26 15:33:56:433 860 1198 Agent ************* 2012-06-26 15:33:56:433 860 d14 AU Can not perform non-interactive scan if AU is interactive-only 2012-06-26 15:33:56:450 3924 e40 COMAPI >>-- RESUMED -- COMAPI: Install [ClientId = CcmExec] 2012-06-26 15:33:56:450 3924 e40 COMAPI - Install call complete (succeeded = 0, succeeded with errors = 0, failed = 1, unaccounted = 0) 2012-06-26 15:33:56:450 3924 e40 COMAPI - Reboot required = No 2012-06-26 15:33:56:450 3924 e40 COMAPI - WARNING: Exit code = 0x00000000; Call error code = 0x80240022 2012-06-26 15:33:56:451 3924 e40 COMAPI --------- 2012-06-26 15:33:56:451 3924 e40 COMAPI -- END -- COMAPI: Install [ClientId = CcmExec] 2012-06-26 15:33:56:451 3924 e40 COMAPI ------------- 2012-06-26 15:33:56:536 860 13a4 AU Triggering Offline detection (non-interactive) 2012-06-26 15:33:56:536 860 d14 AU ############# 2012-06-26 15:33:56:536 860 d14 AU ## START ## AU: Search for updates 2012-06-26 15:33:56:536 860 d14 AU ######### 2012-06-26 15:33:56:539 860 d14 AU <<## SUBMITTED ## AU: Search for updates [CallId = {2DBB046C-2265-421B-A37B-93BDECC6C261}] 2012-06-26 15:33:56:539 860 1788 Agent ************* 2012-06-26 15:33:56:539 860 1788 Agent ** START ** Agent: Finding updates [CallerId = AutomaticUpdates] 2012-06-26 15:33:56:539 860 1788 Agent ********* 2012-06-26 15:33:56:539 860 1788 Agent * Online = No; Ignore download priority = No 2012-06-26 15:33:56:539 860 1788 Agent * Criteria = "IsInstalled=0 and DeploymentAction='Installation' or IsPresent=1 and DeploymentAction='Uninstallation' or IsInstalled=1 and DeploymentAction='Installation' and RebootRequired=1 or IsInstalled=0 and DeploymentAction='Uninstallation' and RebootRequired=1" 2012-06-26 15:33:56:539 860 1788 Agent * ServiceID = {3DA21691-E39D-4DA6-8A4B-B43877BCB1B7} Managed 2012-06-26 15:33:56:539 860 1788 Agent * Search Scope = {Machine} 2012-06-26 15:33:58:562 860 1788 Agent * Found 0 updates and 70 categories in search; evaluated appl. 
rules of 180 out of 1072 deployed entities 2012-06-26 15:33:58:565 860 1788 Agent ********* 2012-06-26 15:33:58:565 860 1788 Agent ** END ** Agent: Finding updates [CallerId = AutomaticUpdates] 2012-06-26 15:33:58:565 860 1788 Agent ************* 2012-06-26 15:33:58:650 860 f2c AU >>## RESUMED ## AU: Search for updates [CallId = {2DBB046C-2265-421B-A37B-93BDECC6C261}] 2012-06-26 15:33:58:650 860 f2c AU # 0 updates detected 2012-06-26 15:33:58:650 860 f2c AU ######### 2012-06-26 15:33:58:650 860 f2c AU ## END ## AU: Search for updates [CallId = {2DBB046C-2265-421B-A37B-93BDECC6C261}] 2012-06-26 15:33:58:650 860 f2c AU ############# 2012-06-26 15:33:58:650 860 f2c AU Featured notifications is disabled. 2012-06-26 15:33:58:651 860 f2c AU Successfully wrote event for AU health state:0 2012-06-26 15:33:58:652 860 f2c AU Successfully wrote event for AU health state:0

  • How do I launch a WPF app from command.com. I'm getting a FontCache error.

    - by jttraino
    I know this is not ideal, but my constraint is that I have a legacy application written in Clipper. I want to launch a new, WinForms/WPF application from inside the application (to ease transition). This legacy application written in Clipper launches using: SwpRunCmd("C:\MyApp\MyBat.bat",0) The batch file contains something like this command: C:\PROGRA~1\INTERN~1\iexplore "http://QASVR/MyApp/AppWin/MyCompany.MyApp.AppWin.application#MyCompany.MyApp.AppWin.application" It is launching a WinForms/WPF app that is we deploy via ClickOnce. Everything has been going well until we introduced WPF into the application. We were able to easily launch from the legacy application. Since we have introduced WPF, however, we have the following behavior. If we launch via the Clipper application first, we get an exception when launching the application. The error text is: The type initializer for 'System.Windows.FrameworkElement' threw an exception. at System.Windows.FrameworkElement..ctor() at System.Windows.Controls.Panel..ctor() at System.Windows.Controls.DockPanel..ctor() at System.Windows.Forms.Integration.AvalonAdapter..ctor(ElementHost hostControl) at System.Windows.Forms.Integration.ElementHost..ctor() at MyCompany.MyApp.AppWin.Main.InitializeComponent() at MyCompany.MyApp.AppWin.Main..ctor(String[] args) at MyCompany.MyApp.AppWin.Program.Main(String[] args) The type initializer for 'System.Windows.Documents.TextElement' threw an exception. at System.Windows.FrameworkElement..cctor() The type initializer for 'System.Windows.Media.FontFamily' threw an exception. at System.Windows.Media.FontFamily..ctor(String familyName) at System.Windows.SystemFonts.get_MessageFontFamily() at System.Windows.Documents.TextElement..cctor() The type initializer for 'MS.Internal.FontCache.Util' threw an exception. at MS.Internal.FontCache.Util.get_WindowsFontsUriObject() at System.Windows.Media.FontFamily.PreCreateDefaultFamilyCollection() at System.Windows.Media.FontFamily..cctor() Invalid URI: The format of the URI could not be determined. at System.Uri.CreateThis(String uri, Boolean dontEscape, UriKind uriKind) at System.Uri..ctor(String uriString, UriKind uriKind) at MS.Internal.FontCache.Util..cctor() If we launch the application via the URL (in IE) or via the icon on the desktop first, we do not get the exception and application launches as expected. The neat thing is that whatever we launch with first determines whether the app will launch at all. So, if we launch with legacy first, it breaks right away and we can't get the app to run even if we launch with the otherwise successful URL or icon. To get it to work, we have to logout and log back in and start it from the URL or icon. If we first use the URL or the icon, we have no problem launching from the legacy application from that point forward (until we logout and come back in). One other piece of information is that we are able to simulate the problem in the following fashion. If we enter a command prompt using "cmd.exe" and execute a statement to launch from a URL, we are successful. If, however, we enter a command prompt using "command.com" and we execute that same statement, we experience the breaking behavior. We assume it is because the legacy application in Clipper uses the equivalent of command.com to create the shell to spawn the other app. We have tried a bunch of hacks like having command.com run cmd.exe or psexec and then executing, but nothing seems to work. 
We have some ideas for workarounds (like launching the app on startup so we force the successful launch from a URL, making all subsequent launches successful), but they are all sub-optimal, even though we have a great deal of control over our workstations. To reduce the chance that this is related to permissions, we have given the launching account administrative rights (as well as non-administrative rights, in case that made a difference). Any ideas would be greatly appreciated. Like I said, we have some workarounds, but I would love to avoid them. Thanks!

  • Decentralized synchronized secure data storage

    - by Alberich
    Introduction Hi, I am going to ask a question which seems utopic for me, but I need to know if there is a way to achieve what I need. And if not, I need to know why not. The idea Suppose I have a database structure, in MySql. I want to create some solution to allow anyone (no matter who, no matter where) to have a synchronized copy (updated clone) of this database (with its content) Well, and it is not going to be just one synchronized copy, it could (and should) be a multiple replication (supposing the basic, this means, for example, ten copies all over the world) And, the most important thing: It must be secure. By secure I mean only real-accepted transactions will be synchronized with all the others (no matter how many) database copies/clones. Note: Since it would be quite difficult to make the synchronization in real-time, I will design everything to make this feature dispensable. So it is not required. My auto-suggestion This is how I am thinking to manage it: Time identifiers and Updates checking: Every action (insert, update, delete...) will be stored as the action instruction itself, associated to the time identifier. [I think better than a DATETIME field, it'll be an INT one, with the number of miliseconds passed from 1st january 2013 on, for example]. So each copy is going to ask to the "neighbour copy" for new actions done since last update, and execute them after checking they are allowed. Problem 1: the "neighbour copy" could be outdated too. Solution 1: do not ask just one neighbour, create a random list with some of the copies/clones and ask them for news (I could avoid the list and ask ALL the clones for updates, but this will be inefficient if clones number ascends too much). Problem 2: Real-time global synchronization is not active. What if... Someone at CLONE_ENTERPRISING inserts a row into TABLE. ... this row goes to every clone ... Someone at CLONE_FIXEMALL deletes this row. ... and at the same time, somewhere in an outdated clone ... Someone at CLONE_DROPOUT edits this row (now inexistent at the other clones) Solution 2: easy stuff, force a GLOBAL synchronization before doing any new "depending-on-third-data action" (edit, for example). This global synch. will be unnecessary when making an INSERT, for instance. Note: Well, someone could have some fun, and make the same insert in two clones... since they're not getting updated in real-time, this row will exist twice. But, it's the same as when we have one single database, in some needed cases we check if there is an existing same-row before doing the final action. Not a problem. Problem 3: It is possible to edit the code and do not filter actions, so someone could spread instructions to delete everything, or just make some trolling activity. This is not a problem, since good clones will always be somewhere. Those who got bad won't interest anymore. I really appreciate if you read. I know this is not the perfect solution, it has possibly hundred of holes, but it is my basic start. I will now appreciate anything you can teach me now. Thanks a lot. PS.: It could be that all this I am trying already exists and has its own name. Sorry for asking then (I'd anyway thank this name, if it exists)

  • save the transient instance before flushing

    - by eugenn
    Exception: object references an unsaved transient instance - save the transient instance before flushing: Child How to reproduce issue: 1. Hibernate is load the entity "Parent". The property "child" is null 2. The "Parent" is rendered on the screen and after that the "child" property is auto instantiated. So I have the following graph: Parent.child != null Parent.child.childId = null Parent.child.childKey = "" Parent.child.childName = "" Question: How I could to force the Hibernate to ignore updating or inserting Child entity WHEN childId = null? If childId != null I would like just create relation. <hibernate-mapping> <class name="com.test.Parent" entity-name="ParentObject" table="parent" dynamic-insert="false" dynamic-update="true" optimistic-lock="version"> <id name="rowId" type="long"> <column name="RowID" /> <generator class="native" /> </id> <version name="versionSequence" type="integer" unsaved-value="null" generated="never" insert="false"> <column name="VersionSequence" /> </version> <many-to-one name="child" entity-name="Child" fetch="select" optimistic-lock="true" embed-xml="false" update="true" insert="false"> <column name="ChildID" /> </many-to-one> <property name="dateCreated" type="timestamp"> <column name="DateCreated" length="0" /> </property> <property name="dateUpdated" type="timestamp" update="false"> <column name="DateUpdated" length="0" /> </property> </class> </hibernate-mapping> <hibernate-mapping> <class name="com.Child" entity-name="Child" table="Child" dynamic-insert="false" dynamic-update="true" optimistic-lock="version"> <id name="childId" type="long" > <column name="ChildID" /> <generator class="native" /> </id> <version name="versionSequence" type="integer" insert="false" generated="never" > <column name="VersionSequence" /> </version> <property name="childKey" type="string" > <column name="ChildKey" length="20" /> </property> <property name="childName" type="string" > <column name="ChildName" length="30" /> </property> <property name="childNumber" type="string" > <column name="ChildNumber" /> </property> <property name="dateCreated" type="timestamp"> <column name="DateCreated" /> </property> <property name="dateUpdated" type="timestamp" update="false"> <column name="DateUpdated" /> </property> </class> </hibernate-mapping>

  • How can I disable 'output escaping' in minidom

    - by William
    I'm trying to build an xml document from scratch using xml.dom.minidom. Everything was going well until I tried to make a text node with a ® (Registered Trademark) symbol in. My objective is for when I finally hit print mydoc.toxml() this particular node will actually contain a ® symbol. First I tried: import xml.dom.minidom as mdom data = '®' which gives the rather obvious error of: File "C:\src\python\HTMLGen\test2.py", line 3 SyntaxError: Non-ASCII character '\xae' in file C:\src\python\HTMLGen\test2.py on line 3, but no encoding declared; see http://www.python.or g/peps/pep-0263.html for details I have of course also tried changing the encoding of my python script to 'utf-8' using the opening line comment method, but this didn't help. So I thought import xml.dom.minidom as mdom data = '&#174;' #Both accepted xml encodings for registered trademark data = '&reg;' text = mdom.Text() text.data = data print data print text.toxml() But because when I print text.toxml(), the ampersands are being escaped, I get this output: &reg; &amp;reg; My question is, does anybody know of a way that I can force the ampersands not to be escaped in the output, so that I can have my special character reference carry through to the XML document? Basically, for this node, I want print text.toxml() to produce output of &reg; or &#174; in a happy and cooperative way! EDIT 1: By the way, if minidom actually doesn't have this capacity, I am perfectly happy using another module that you can recommend which does. EDIT 2: As Hugh suggested, I tried using data = u'®' (while also using data # -*- coding: utf-8 -*- Python source tags). This almost helped in the sense that it actually caused the ® symbol itself to be outputted to my xml. This is actually not the result I am looking for. As you may have guessed by now (and perhaps I should have specified earlier) this xml document happens to be an HTML page, which needs to work in a browser. So having ® in the document ends up causing rubbish in the browser (® to be precise!). I also tried: data = unichr(174) text.data = data.encode('ascii','xmlcharrefreplace') print text.toxml() But of course this lead to the same origional problem where all that happens is the ampersand gets escaped by .toxml(). My ideal scenario would be some way of escaping the ampersand so that the XML printing function won't "escape" it on my behalf for the document (in other words, achieving my original goal of having &reg; or &#174; appear in the document). Seems like soon I'm going to have to resort to regular expressions! EDIT 2a: Or perhaps not. Seems like getting my html meta information correct <META http-equiv="Content-Type" Content="text/html; charset=UTF-8"> could help, but I'm not sure yet how this fits in with the xml structure...
