Search Results

Search found 3205 results on 129 pages for 'unexpected shutdown'.


  • Copying a file with PHP Command

    - by Tom
    Hi, I'm having a problem using the copy function in PHP. What is wrong with it? I get the error "Parse error: syntax error, unexpected T_VARIABLE" on the bottom line:

        $targetDir = 'file.txt';
        $targetDir2 = 'file2.txt';
        copy($targetDir, $targetDir2);

    Thanks. The entire file is:

        <?PHP
        $targetDir = 'file.txt';
        $targetDir2 = 'file2.txt';
        copy($targetDir, $targetDir2);
        ?>

    copied and pasted from the docs.


  • Java application return codes

    - by doele
    I have a Java program that processes one file at a time. This Java program is called from a wrapper script which logs the return code from the Java program. There are two types of errors, expected errors and unexpected errors, and in both cases I just need to log them. My wrapper knows about three different states: 0 = OK, 1 = PROCESSING_FAILED, 2 = ERROR. Is this a valid approach? Here is my approach:

        enum ReturnCodes { OK, PROCESSING_FAILED, ERROR };

        public static void main(String[] args) {
            try {
                ...
                proc.processMyFile();
                ...
                System.exit(ReturnCodes.OK.ordinal());
            } catch (Throwable t) {
                ...
                System.exit(ReturnCodes.ERROR.ordinal());
            }
        }

        private void processMyFile() {
            try {
                ...
            } catch (ExpectedException e) {
                ...
                System.exit(ReturnCodes.PROCESSING_FAILED.ordinal());
            }
        }
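    The snippet above is Java, but the wrapper-only-sees-exit-codes contract is language-agnostic. As an illustration (not the asker's code), here is the same pattern sketched in C#, with the names above carried over as assumptions; pinning explicit numeric values, rather than deriving codes from declaration order as ordinal() does, protects the wrapper if the enum is ever reordered:

        using System;

        enum ReturnCode { Ok = 0, ProcessingFailed = 1, Error = 2 }

        class ExpectedException : Exception { }

        static class Program
        {
            static int Main(string[] args)
            {
                try
                {
                    ProcessFile(args[0]);                    // may throw ExpectedException
                    return (int)ReturnCode.Ok;
                }
                catch (ExpectedException)
                {
                    return (int)ReturnCode.ProcessingFailed; // expected failure: wrapper just logs it
                }
                catch (Exception)
                {
                    return (int)ReturnCode.Error;            // anything unexpected
                }
            }

            // placeholder for the real per-file processing
            static void ProcessFile(string path)
            {
                if (!System.IO.File.Exists(path))
                    throw new ExpectedException();
            }
        }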


  • "if" expression question

    - by Here
    Hi, I test some simple F# code for "if" expression, but the result is unexpected for me: > let test c a b = if c then a else b;; val test : bool -> 'a -> 'a -> 'a However > test true (printfn "a") (printfn "b");; a b val it : unit = () I'd expect only "a" is printed out but here I got both "a" and "b". I wonder why it comes out this way? Thanks!

    Read the article

  • php: parse error on mysql query

    - by dwstein
    I'm getting the following error: "Parse error: syntax error, unexpected T_VARIABLE in /home/a4999406/public_html/willingLog.html on line 48" on the following code (line 48 is the first row of this code):

        $rows = mysql_num_rows($result);
        for ($j=0; $j<$rows: ++$j) {
            echo 'ID: ' . mysql_result($result, $j, 'id') . '<br />';
            echo 'First: ' . mysql_result($result, $j, 'first') . '<br />';
            echo 'Last: ' . mysql_result($result, $j, 'last') . '<br />';
            echo 'Email: ' . mysql_result($result, $j, 'email') . '<br />';
        }

    Anyone know what I'm doing wrong?


  • F# "if" expression question

    - by Here
    Hi, I'm testing some simple F# code for the "if" expression, but the result is unexpected to me:

        > let test c a b = if c then a else b;;
        val test : bool -> 'a -> 'a -> 'a

    However:

        > test true (printfn "a") (printfn "b");;
        a
        b
        val it : unit = ()

    I'd expect only "a" to be printed, but here I get both "a" and "b". I wonder why it comes out this way? Thanks!
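    For what it's worth, the behavior isn't F#-specific: in any strict language the arguments are evaluated before the function body runs, so both printfn calls fire before test ever looks at c. A C# analogue, illustrative only and not from the post:

        using System;

        class Demo
        {
            static T Test<T>(bool c, T a, T b) => c ? a : b;

            static int Print(string s) { Console.WriteLine(s); return 0; }

            static void Main()
            {
                // Both arguments are evaluated before Test runs: prints "a" AND "b".
                Test(true, Print("a"), Print("b"));

                // Deferring evaluation: pass functions and invoke only the chosen one.
                Func<int> lazyA = () => Print("a");
                Func<int> lazyB = () => Print("b");
                Test(true, lazyA, lazyB)();   // prints only "a"
            }
        }

    The F# equivalent of the deferred form is to pass (fun () -> printfn "a") and apply the function that test returns.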


  • What is the proper way to test if a variable is empty in a batch file, IF NOT "%1" == "" GOTO SomeLabel

    - by blak3r
    I need to test if a variable is set or not. I've tried several techniques, but they seem to fail whenever %1 is surrounded by quotes, as in the case where %1 is "c:\some path with spaces".

        IF NOT %1 GOTO MyLabel        // This is invalid syntax
        IF "%1" == "" GOTO MyLabel    // Works unless %1 itself contains quotes, otherwise fatally kills bat execution
        IF %1 == GOTO MyLabel         // Gives an unexpected GOTO error.

    According to this site, these are the supported IF syntax types, so I don't see a way to do it:

        IF [NOT] ERRORLEVEL number command
        IF [NOT] string1==string2 command
        IF [NOT] EXIST filename command


  • Using FiddlerCore to capture HTTP Requests with .NET

    - by Rick Strahl
    Over the last few weeks I've been working on my Web load testing utility, West Wind WebSurge. One of the key components of a load testing tool is the ability to capture URLs effectively so that you can play them back later under load. One of the options in WebSurge for capturing URLs is its built-in capture tool, which acts as an HTTP proxy to capture HTTP and HTTPS traffic from most Windows HTTP clients, including web browsers as well as standalone Windows applications and services. To make this happen, I used Eric Lawrence's awesome FiddlerCore library, which provides most of the functionality of his desktop Fiddler application, all rolled into an easy-to-use library that you can plug into your own applications. FiddlerCore makes it almost too easy to capture HTTP content! For WebSurge I needed to capture all HTTP traffic in order to capture the full HTTP request: URL, headers, and any content posted by the client. The result of what I ended up creating is a semi-generic capture form. In this post I'm going to demonstrate how easy it is to use FiddlerCore to build this HTTP capture form. If you want to jump right in, here are the links to get Telerik's FiddlerCore and the code for the demo provided here:

    FiddlerCore Download
    FiddlerCore on NuGet
    Show me the Code (WebSurge Integration code from GitHub)
    Download the WinForms Sample Form
    West Wind Web Surge (example implementation in live app)

    Note that FiddlerCore is bound by a license for commercial usage; see license.txt in the FiddlerCore distribution for details.

    Integrating FiddlerCore

    FiddlerCore is a library that simply plugs into your application. You can download it from the Telerik site and manually add the assemblies to your project, or you can simply install the NuGet package via:

        PM> Install-Package FiddlerCore

    The library consists of FiddlerCore.dll as well as a couple of support libraries (CertMaker.dll and BCMakeCert.dll) that are used for installing SSL certificates. I'll have more on SSL captures and certificate installation later in this post. But first let's see how easy it is to use FiddlerCore to capture HTTP content by looking at how to build the capture form.

    Capturing HTTP Content

    Once the library is installed, it's super easy to hook up Fiddler functionality. Fiddler includes a number of static class methods on the FiddlerApplication object that can be called to hook up callback events as well as to actually start monitoring HTTP URLs. In the following code, directly lifted from WebSurge, I configure a few filter options on the form-level object from the user inputs shown on the form, by assigning them to a capture options object. In the live application these settings are persisted configuration values, but in the demo they are one-time values initialized and set on the form.
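The capture options class itself isn't shown in the post. Here is a minimal sketch consistent with the properties the code below reads; the member names mirror the calls, but the types and defaults are my assumptions, not WebSurge's actual implementation:

    using System.Collections.Generic;

    public class UrlCaptureConfiguration
    {
        public bool IgnoreResources { get; set; }
        public int ProcessId { get; set; }
        public string CaptureDomain { get; set; } = string.Empty;

        // typical static-resource extensions to skip when IgnoreResources is set
        public List<string> ExtensionFilterExclusions { get; set; } =
            new List<string> { ".css", ".js", ".png", ".jpg", ".gif", ".ico" };

        public List<string> UrlFilterExclusions { get; set; } = new List<string>();
    }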
Once these options are set, I hook up the AfterSessionComplete event to capture every URL that passes through the proxy after the request is completed, and start up the proxy service:

    void Start()
    {
        if (tbIgnoreResources.Checked)
            CaptureConfiguration.IgnoreResources = true;
        else
            CaptureConfiguration.IgnoreResources = false;

        string strProcId = txtProcessId.Text;
        if (strProcId.Contains('-'))
            strProcId = strProcId.Substring(strProcId.IndexOf('-') + 1).Trim();
        strProcId = strProcId.Trim();

        int procId = 0;
        if (!string.IsNullOrEmpty(strProcId))
        {
            if (!int.TryParse(strProcId, out procId))
                procId = 0;
        }
        CaptureConfiguration.ProcessId = procId;
        CaptureConfiguration.CaptureDomain = txtCaptureDomain.Text;

        FiddlerApplication.AfterSessionComplete += FiddlerApplication_AfterSessionComplete;
        FiddlerApplication.Startup(8888, true, true, true);
    }

The key lines for FiddlerCore are just the last two lines of code, which include the event hookup as well as the Startup() method call. Here I only hook up the AfterSessionComplete event, but there are a number of other events that hook various stages of the HTTP request cycle you can also hook into. Other events include BeforeRequest, BeforeResponse, RequestHeadersAvailable, ResponseHeadersAvailable and so on. In my case I want to capture the request data, and I actually have several options for capturing it. AfterSessionComplete is the last event that fires in the request sequence and it's the most common choice for capturing all request and response data. I could have used several other events, but AfterSessionComplete is one place where you can look at both the request and response data, so this will be the most common place to hook into if you're capturing content.

The implementation of AfterSessionComplete is responsible for capturing all HTTP request headers, and it looks something like this:

    private void FiddlerApplication_AfterSessionComplete(Session sess)
    {
        // Ignore HTTPS connect requests
        if (sess.RequestMethod == "CONNECT")
            return;

        if (CaptureConfiguration.ProcessId > 0)
        {
            if (sess.LocalProcessID != 0 && sess.LocalProcessID != CaptureConfiguration.ProcessId)
                return;
        }

        if (!string.IsNullOrEmpty(CaptureConfiguration.CaptureDomain))
        {
            if (sess.hostname.ToLower() != CaptureConfiguration.CaptureDomain.Trim().ToLower())
                return;
        }

        if (CaptureConfiguration.IgnoreResources)
        {
            string url = sess.fullUrl.ToLower();

            var extensions = CaptureConfiguration.ExtensionFilterExclusions;
            foreach (var ext in extensions)
            {
                if (url.Contains(ext))
                    return;
            }

            var filters = CaptureConfiguration.UrlFilterExclusions;
            foreach (var urlFilter in filters)
            {
                if (url.Contains(urlFilter))
                    return;
            }
        }

        if (sess == null || sess.oRequest == null || sess.oRequest.headers == null)
            return;

        string headers = sess.oRequest.headers.ToString();
        var reqBody = sess.GetRequestBodyAsString();

        // if you wanted to capture the response
        //string respHeaders = sess.oResponse.headers.ToString();
        //var respBody = sess.GetResponseBodyAsString();

        // replace the HTTP line to inject the full URL
        string firstLine = sess.RequestMethod + " " + sess.fullUrl + " " + sess.oRequest.headers.HTTPVersion;
        int at = headers.IndexOf("\r\n");
        if (at < 0)
            return;
        headers = firstLine + "\r\n" + headers.Substring(at + 1);

        string output = headers + "\r\n" +
                        (!string.IsNullOrEmpty(reqBody) ? reqBody + "\r\n" : string.Empty) +
                        Separator + "\r\n\r\n";

        BeginInvoke(new Action<string>((text) =>
        {
            txtCapture.AppendText(text);
            UpdateButtonStatus();
        }), output);
    }

The code starts by filtering out some requests based on the capture options I set before the capture is started. These options/filters are applied when requests actually come in. This is very useful to help narrow down the requests that are captured for playback, based on options the user picked. I find it useful to limit requests to a certain domain for captures, as well as filtering out some request types like static resources: images, css, scripts etc. This is of course optional, but I think it's a common scenario and WebSurge makes good use of this feature.

AfterSessionComplete, like other FiddlerCore events, provides a Session object parameter which contains all the request and response details. There are oRequest and oResponse objects to hold their respective data. In my case I'm interested in the raw request headers and body only; as you can see in the commented code, you can also retrieve the response headers and body. Here the code captures the request headers and body and simply appends the output to the textbox on the screen. Note that the Fiddler events are asynchronous, so in order to display the content in the UI it has to be marshaled back to the UI thread with BeginInvoke, which here simply takes the generated headers and appends them to the existing textbox text on the form.

As each request is processed, the headers are captured and appended to the bottom of the textbox, resulting in a session HTTP capture in the format that WebSurge internally supports, which is basically raw request headers with a customized first HTTP header line that includes the full URL rather than a server-relative URL. When the capture is done, the user can either copy the raw HTTP session to the clipboard or directly save it to a file. This raw capture format is the same format WebSurge and also Fiddler use to import/export request data.

While this code is application specific, it demonstrates the kind of logic that you can easily apply to the request capture process, which is one of the reasons why FiddlerCore is so powerful. You get to choose what content you want to look at as part of your own application logic, and you can then decide how to capture or use that data as part of your application. The actual captured data in this case is only a string. The user can edit the data by hand or, in the case of WebSurge, save it to disk and automatically open the captured session as a new load test.

Stopping the FiddlerCore Proxy

Finally, to stop capturing requests you simply disconnect the event handler and call the FiddlerApplication.Shutdown() method:

    void Stop()
    {
        FiddlerApplication.AfterSessionComplete -= FiddlerApplication_AfterSessionComplete;

        if (FiddlerApplication.IsStarted())
            FiddlerApplication.Shutdown();
    }

As you can see, adding HTTP capture functionality to an application is very straightforward. FiddlerCore offers tons of features I'm not even touching on here; I suspect basic captures are the most common scenario, but a lot of different things can be done with FiddlerCore's simple API interface. Sky's the limit! The source code for this sample capture form (WinForms) is provided as part of this article.
Adding Fiddler Certificates with FiddlerCore

One of the sticking points in West Wind WebSurge has been that if you wanted to capture HTTPS/SSL traffic, you needed to have the full version of Fiddler with HTTPS decryption enabled. Essentially you had to use Fiddler to configure HTTPS decryption and the associated installation of the Fiddler local client certificate that is used for local decryption of incoming SSL traffic. While this works just fine, requiring Fiddler to be installed and then using a separate application to configure the SSL functionality isn't ideal. Fortunately, FiddlerCore actually includes the tools to register the Fiddler certificate directly using FiddlerCore.

Why does Fiddler need a Certificate in the first Place?

Fiddler and FiddlerCore are essentially HTTP proxies, which means they inject themselves into the HTTP conversation by re-routing HTTP traffic to a special HTTP port (8888 by default for Fiddler) and then forwarding the HTTP data to the original client. Fiddler injects itself as the system proxy using the WinInet Windows settings, which are the same settings that Internet Explorer uses and that are configured in the Windows and Internet Explorer Internet Settings dialog. Most HTTP clients running on Windows pick up and apply these system-level proxy settings before establishing new HTTP connections, and that's why most clients automatically work once Fiddler, or FiddlerCore/WebSurge, is running.

For plain HTTP requests this just works: Fiddler intercepts the HTTP requests on the proxy port and then forwards them to the original port (80 for HTTP and 443 for SSL typically, but it could be any port). For SSL, however, this is not quite as simple. Fiddler can easily act as an HTTPS/SSL client to capture inbound requests from the server, but when it forwards the request to the client it has to also act as an SSL server and provide a certificate that the client trusts. This won't be the original certificate from the remote site, but rather a custom local certificate that effectively simulates an SSL connection between the proxy and the client. If there is no custom certificate configured for Fiddler, the SSL request fails with a certificate validation error. The key for this to work is that a custom certificate has to be installed that the HTTPS client trusts on the local machine. For a much more detailed description of the process you can check out Eric Lawrence's blog post on Certificates.

If you're using the desktop version of Fiddler, you can install a local certificate into the Windows certificate store. Fiddler proper does this from the Options menu. This operation does several things:

- It installs the Fiddler Root Certificate
- It sets trust for this Root Certificate
- A new client certificate is generated for each HTTPS site monitored

Certificate Installation with FiddlerCore

You can also provide this same functionality using FiddlerCore, which includes a CertMaker class.
CertMaker is straightforward to use, and it provides an easy way to create some simple helpers that can install and uninstall a Fiddler root certificate:

    public static bool InstallCertificate()
    {
        if (!CertMaker.rootCertExists())
        {
            if (!CertMaker.createRootCert())
                return false;
            if (!CertMaker.trustRootCert())
                return false;
        }
        return true;
    }

    public static bool UninstallCertificate()
    {
        if (CertMaker.rootCertExists())
        {
            if (!CertMaker.removeFiddlerGeneratedCerts(true))
                return false;
        }
        return true;
    }

InstallCertificate() works by first checking whether the root certificate is already installed, and if it isn't, goes ahead and creates a new one. Creating the certificate is a two-step process: first the actual certificate is created, and then it's moved into the certificate store to become trusted. I'm not sure why you'd ever split these operations up, since a cert created without trust isn't going to be of much value, but there are two distinct steps.

When you trigger the trustRootCert() method, a message box will pop up on the desktop that lets you know that you're about to trust a local private certificate. This is a security feature to ensure that you really want to trust the Fiddler root, since you are essentially installing a man-in-the-middle certificate. It's quite safe to use this generated root certificate, because it's been specifically generated for your machine and thus is not usable from external sources; the only way to use this certificate in a trusted way is from the local machine. IOW, unless somebody has physical access to your machine, there's no useful way to hijack this certificate and use it for nefarious purposes (see Eric's post for more details).

Once the root certificate has been installed, FiddlerCore/Fiddler creates new certificates for each site that is connected to with HTTPS. You can end up with quite a few temporary certificates in your certificate store. To uninstall, you can either use Fiddler and simply uncheck the Decrypt HTTPS traffic option followed by the Remove Fiddler certificates button, or you can use FiddlerCore's CertMaker.removeFiddlerGeneratedCerts(), which removes the root cert and any of the intermediary certificates Fiddler created. Keep in mind that when you uninstall, you uninstall the certificate for both FiddlerCore and Fiddler, so use UninstallCertificate() with care and realize that you might affect the Fiddler application's operation by doing so as well.

When to check for an installed Certificate

Note that the check to see whether the root certificate exists is pretty fast, while the actual process of installing the certificate is a relatively slow operation that even on a fast machine takes a few seconds. Further, the trust operation pops up a message box, so you probably don't want to install the certificate repeatedly. Since the check for the root certificate is fast, you can easily put a call to InstallCertificate() in any capture startup code, in which case the certificate installation only triggers when a certificate is in fact not installed. Personally, I like to make certificate installation explicit, just like Fiddler does, so in WebSurge I use a small drop-down option on the menu to install or uninstall the SSL certificate. This code calls the InstallCertificate() and UninstallCertificate() functions respectively; the experience is similar to what you get in Fiddler, with the extra dialog box popping up to prompt confirmation for installation of the root certificate.
Once the cert is installed you can capture SSL requests. There's a gotcha, however…

Gotcha: FiddlerCore Certificates don't stick by Default

When I originally tried to use the Fiddler certificate installation I ran into an odd problem. I was able to install the certificate, and immediately after installation I was able to capture HTTPS requests. Then I would exit the application, come back in, try the same HTTPS capture again, and it would fail due to a missing certificate. CertMaker.rootCertExists() would return false after every restart, and if I re-installed the certificate a new certificate would get added to the certificate store, resulting in a bunch of duplicated root certificates with different keys. What the heck?

CertMaker and BCMakeCert create non-sticky Certificates

It turns out that FiddlerCore by default uses different components from what the full version of Fiddler uses. Fiddler uses a Windows utility called MakeCert.exe to create the Fiddler root certificate. FiddlerCore, however, installs the CertMaker.dll and BCMakeCert.dll assemblies, which use a different crypto library (Bouncy Castle) for certificate creation than MakeCert.exe, which uses the Windows Crypto API. The assemblies provide support for non-Windows operation for Fiddler under Mono, as well as support for some non-Windows certificate platforms like iOS and Android for decryption.

The bottom line is that the FiddlerCore-provided Bouncy Castle assemblies are not sticky by default, as the certificates created with them are not cached the way they are in Fiddler proper. To get certificates to 'stick' you have to explicitly cache the certificates in Fiddler's internal preferences. A cache-aware version of InstallCertificate() looks something like this:

    public static bool InstallCertificate()
    {
        if (!CertMaker.rootCertExists())
        {
            if (!CertMaker.createRootCert())
                return false;
            if (!CertMaker.trustRootCert())
                return false;

            App.Configuration.UrlCapture.Cert =
                FiddlerApplication.Prefs.GetStringPref("fiddler.certmaker.bc.cert", null);
            App.Configuration.UrlCapture.Key =
                FiddlerApplication.Prefs.GetStringPref("fiddler.certmaker.bc.key", null);
        }
        return true;
    }

    public static bool UninstallCertificate()
    {
        if (CertMaker.rootCertExists())
        {
            if (!CertMaker.removeFiddlerGeneratedCerts(true))
                return false;
        }
        App.Configuration.UrlCapture.Cert = null;
        App.Configuration.UrlCapture.Key = null;
        return true;
    }

In this code I store the Fiddler cert and private key in application configuration settings that are stored with the application settings (the App.Configuration.UrlCapture object). These settings automatically persist when WebSurge is shut down. The values are read out of Fiddler's internal preferences store, which is set after a new certificate has been created. Likewise, I clear out the configuration settings when the certificate is uninstalled. In order for these settings to be used, you have to also load the configuration settings into the Fiddler preferences *before* a call to rootCertExists() is made.
I do this in the capture form's constructor:

    public FiddlerCapture(StressTestForm form)
    {
        InitializeComponent();

        CaptureConfiguration = App.Configuration.UrlCapture;
        MainForm = form;

        if (!string.IsNullOrEmpty(App.Configuration.UrlCapture.Cert))
        {
            FiddlerApplication.Prefs.SetStringPref("fiddler.certmaker.bc.key",
                                                   App.Configuration.UrlCapture.Key);
            FiddlerApplication.Prefs.SetStringPref("fiddler.certmaker.bc.cert",
                                                   App.Configuration.UrlCapture.Cert);
        }
    }

This is kind of a drag to do and not documented anywhere that I could find, so hopefully this will save you some grief if you want to work with the stock certificate logic that installs with FiddlerCore.

MakeCert provides sticky Certificates and the same functionality as Fiddler

But there's actually an easier way. If you want to skip the Fiddler preference configuration code above, you can choose to distribute MakeCert.exe instead of CertMaker.dll and BCMakeCert.dll. When you use MakeCert.exe, the certificate settings are stored in Windows, so they are available without any custom configuration inside of your application. It's easier to integrate, and as long as you run on Windows and don't need to support iOS or Android devices it's simply easier to deal with. To integrate it into your project, remove the reference to CertMaker.dll (and the BCMakeCert.dll assembly) from your project, and remove CertMaker.dll and BCMakeCert.dll from disk. Instead, copy MakeCert.exe into your output folder. To make sure MakeCert.exe gets pushed out, include MakeCert.exe in your project and set the Build Action to None and Copy to Output Directory to Copy if newer. Keep in mind that these DLLs are resources of the FiddlerCore NuGet package, so updating the package may end up pushing those files back into your project. Once MakeCert.exe is distributed, FiddlerCore checks for it first before using the assemblies, so as long as MakeCert.exe exists it'll be used for certificate creation (at least on Windows).

Summary

FiddlerCore is a pretty sweet tool, and it's absolutely awesome that we get to plug most of the functionality of Fiddler right into our own applications. A few years back I tried to build this sort of functionality myself for an app and ended up giving up because it's a big job to get HTTP right, especially if you need to support SSL. FiddlerCore now provides that functionality as a turnkey solution that can be plugged into your own apps easily. The only downside is FiddlerCore's documentation for more advanced features like certificate installation, which is pretty sketchy. While for the most part FiddlerCore's feature set is easy to work with without any documentation, advanced features are often not intuitive to glean by just using IntelliSense or the FiddlerCore help file reference (which is not terribly useful). While Eric Lawrence is very responsive on his forum and on Twitter, there simply isn't much useful documentation on Fiddler/FiddlerCore available online. If you run into trouble, the forum is probably the first place to look, and then ask a question if you can't find the answer. The best documentation you can find is Eric's Fiddler Book, which covers a ton of functionality of Fiddler and FiddlerCore. The book is a great reference to Fiddler's feature set as well as providing great insights into the HTTP protocol.
The second half of the book, which gets into the innards of HTTP, is an excellent read for anybody who wants to know more about some of the more arcane aspects and special behaviors of HTTP; it's well worth the read. While the book has tons of information in a very readable format, it's unfortunately not a great reference, as it's hard to find things in the book, and because it's not available online you can't electronically search for the great content in it. But it's hard to complain about any of this given the obvious effort and love that's gone into this awesome product for all of these years. A mighty big thanks to Eric Lawrence for having created this useful tool that so many of us use all the time, and also to Telerik for picking up Fiddler/FiddlerCore and providing Eric the resources to support and improve this wonderful tool full time while keeping it free for all. Kudos!

Resources

FiddlerCore Download
FiddlerCore NuGet
Fiddler Capture Sample Form
Fiddler Capture Form in West Wind WebSurge (GitHub)
Eric Lawrence's Fiddler Book

© Rick Strahl, West Wind Technologies, 2005-2014. Posted in .NET, HTTP


  • Grounded in Dublin

    - by Mike Dietrich
    Friday's hands-on workshop in the Oracle office in Dublin was quite good fun for everybody, except for Mick, who had just learned that his Ryanair flight back to Cork had been canceled (I hope you've returned home well!), and me, as my flights back to Munich via London City had been canceled as well. It's always good to have somebody in the workshop from Aer Lingus, so I got hourly information on what's going on in the Irish airspace (and now I know that the system dealing with such situations is a well-prepared Oracle database which runs just like a Swiss watch. Thanks again for all your support!!! It was great to talk to you!!!). But to be honest, there are worse places to be grounded for a few days than Dublin. At least it gave me the chance to do something I never had enough time for before when visiting Oracle Ireland: a bit of sightseeing.

    When I realized that nothing seemed to be moving over the weekend, I started organizing my travel back yesterday. It was no fun at all because there's no single system to book such a trip. Figuring out all the possibilities and options for getting back to Munich was the first challenge. Irish Ferries' webpage was moaning under all the unexpected load (currently it's fully down). Hotel booking websites showed vacancies in Holyhead but didn't let me book, and calling them just revealed that there are no rooms left. Haven't stayed overnight in a train station for quite a while ;-) The website of VirginTrains puzzled me by offering a seat at an enormous price for a train ride from Holyhead to London Euston (Thanks, Sir Richard Branson!), just to tell me after I had booked a ticket that there are no seats left (but I traveled German railways a few weeks ago from Düsseldorf to Frankfurt sitting on the floor as well). Eurostar's website let me choose tickets through the tunnel, only to tell me in the final step that the ticket cannot be confirmed as there are no seats left; but the next check again showed bookable seats. Must be a database from some other vendor which has no proper row-level locking ... hm ...?! Finally, the TGV page for the speed train to Stuttgart and then the ICE to Munich was not allowing searches for quite a while, but ultimately ... after 4.5 hours of searching, waiting, and sending credit card information again and again ... So if you have a few spare fingers please keep them crossed :-) And good luck to all my colleagues traveling back from the Exadata training in Berlin. As Mike Appleyard, my colleague from the UK presales team, wrote: "Dublin and Berlin aren't too bad a place to get stuck... ;-)"

    Read the article

  • HTTP Headers: Max-Age vs Expires – Which One To Choose?

    - by Gopinath
    Caching static content like images, scripts, and styles in the client browser reduces load on web servers and also improves the end user's browsing experience by loading web pages quickly. We can use the HTTP headers Expires or Cache-Control: max-age to cache content in the client browser and set an expiry time for it. The Expires header is an HTTP/1.0 standard, and Cache-Control: max-age was introduced in the HTTP/1.1 specification to solve the issues and limitations of the Expires header. Consider the following headers:

        Cache-Control: max-age=24560
        Expires: Tue, 15 May 2012 06:17:00 GMT

    The first header instructs web browsers to cache the content for 24560 seconds relative to the time the content is downloaded, and to expire it after that time period elapses. The second header instructs the web browser to expire the content after 15 May 2012 06:17. Out of these two options, which one should we use: max-age or Expires? I prefer the max-age header, for the following reasons:

    - max-age is a relative value, and in most cases it makes sense to set a relative expiry date rather than an absolute one.
    - Expires header values are complex to set: the time format must be proper and time zones appropriate. Even a small mistake in setting these values results in unexpected behaviour.
    - As Expires header values are absolute, we need to keep changing them at regular intervals. Let's say we set 2011 June 1 as the expiry date for all the image files of this blog; on 2011 June 2 we should modify the expiry date to something like 2012 Jan 1. This adds the burden of managing the Expires headers.

    Related: Amazon S3 Tips: Quickly Add/Modify HTTP Headers To All Files Recursively

    This article, titled HTTP Headers: Max-Age vs Expires – Which One To Choose?, was originally published at Tech Dreams.
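    For illustration, here is one way to emit these headers from server code. An ASP.NET (System.Web) handler is shown purely as an assumed example; the same idea applies on any stack:

        using System;
        using System.Web;

        public class CachedContentHandler : IHttpHandler
        {
            public void ProcessRequest(HttpContext context)
            {
                // Relative expiry: cacheable for 24560 seconds from download time
                context.Response.Cache.SetCacheability(HttpCacheability.Public);
                context.Response.Cache.SetMaxAge(TimeSpan.FromSeconds(24560));

                // If HTTP/1.0 caches must be served too, generate Expires relative
                // to "now" instead of hand-maintaining an absolute date.
                context.Response.Cache.SetExpires(DateTime.UtcNow.AddSeconds(24560));

                context.Response.ContentType = "text/plain";
                context.Response.Write("cached content");
            }

            public bool IsReusable { get { return true; } }
        }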


  • How to find and fix performance problems in ORM powered applications

    - by FransBouma
    Once in a while we get requests about how to fix performance problems with our framework. As it comes down to following the same steps and looking into the same things every single time, I decided to write a blog post about it instead, so more people can learn from it and solve performance problems in their O/R mapper powered applications. In some parts it's focused on LLBLGen Pro, but it's also usable for other O/R mapping frameworks, as the vast majority of performance problems in O/R mapper powered applications are not specific to a certain O/R mapper framework. Too often, the developer looks at the wrong part of the application, trying to fix what isn't a problem in that part, and getting frustrated that 'things are so slow with <insert your favorite framework X here>'. I've been in the O/R mapper business for a long time now (almost 10 years, full time) and as it's a small world, we O/R mapper developers know almost all the tricks to pull off by now: we all know what to do to make task ABC faster and what compromises (because there are almost always compromises) to deal with if we decide to make ABC faster that way. Some O/R mapper frameworks are faster in X, others in Y, but you can be sure the difference is mainly a result of a compromise some developers are willing to deal with and others aren't. That's why the O/R mapper frameworks on the market today are different in many ways, even though they all fetch and save entities from and to a database.

    I'm not suggesting there's no room for improvement in today's O/R mapper frameworks, there always is, but it's not a matter of 'the slowness of the application is caused by the O/R mapper' anymore. Perhaps query generation can be optimized a bit here, row materialization can be optimized a bit there, but it's mainly coming down to milliseconds. Still worth it if you're a framework developer, but it's not much compared to the time spent inside databases and in user code: if a complete fetch takes 40ms or 50ms (from call to entity object collection), a 10ms difference won't be noticed in your application. That's why it's very important to find the real locations of the problems, so developers can fix them properly and don't get frustrated because their quest for a fast, performing application failed.

    Performance tuning basics and rules

    Finding and fixing performance problems in any application is a strict procedure with four prescribed steps: isolate, analyze, interpret and fix, in that order. It's key that you don't skip a step nor make assumptions: these steps help you find the reason for a problem which seems to be there, and how to fix it or leave it as-is. Skipping a step, or assuming things will be bad/slow without doing analysis, will lead you down the path of premature optimization and won't actually solve your problems, only create new ones. The most important rule of finding and fixing performance problems in software is that you have to understand what 'performance problem' actually means. Most developers will say "when a piece of software / code is slow, you have a performance problem". But is that actually the case? If I write a Linq query which will aggregate, group and sort 5 million rows from several tables to produce a resultset of 10 rows, it might take more than a couple of milliseconds before that resultset is ready to be consumed by other logic.
If I solely look at the Linq query and the code consuming the resultset of the 10 rows, and then look at the time it takes to complete the whole procedure, it will appear to be slow: all that time taken to produce and consume 10 rows? But if you look closer, if you analyze and interpret the situation, you'll see it does a tremendous amount of work, and in that light it might even be extremely fast. With every performance problem you encounter, always realize that what you're trying to solve is perhaps not a technical problem at all, but a perception problem.

The second most important rule you have to understand is based on the old saying "Penny wise, pound foolish": the part which takes e.g. 5% of the total time T for a given task isn't worth optimizing if you have another part which takes a much larger share of that same total time T. Optimizing parts which are relatively insignificant for the total time taken is not going to bring you better results overall, even if you totally optimize that part away. This is the core reason why analysis of the complete set of application parts which participate in a given task is key to being successful in solving performance problems: no analysis -> no problem -> no solution.

One warning up front: hunting for performance will always include making compromises. Fast software can be made maintainable, but if you want to squeeze as much performance as possible out of your software, you will inevitably be faced with the dilemma of compromising one or more of {readability, maintainability, features} for the extra performance you think you'll gain. It's then up to you to decide whether it's worth it. In almost all cases it's not. The reason for this is simple: the vast majority of performance problems can be solved by implementing the proper algorithms, the ones with proven Big O characteristics, so you know the performance you'll get, plus you know the algorithm will work. The time taken by the code implementing the algorithm is then inevitable: you already implemented the best algorithm. You might find some optimizations on the technical level, but in general these are minor. Let's look at the four steps to see how they guide us through the quest to find and fix performance problems.

Isolate

The first thing you need to do is to isolate the areas in your application which are assumed to be slow. For example, if your application is a web application and a given page is taking several seconds or even minutes to load, it's a good candidate to check out. It's important to start with the isolate step because it allows you to focus on a single code path per area with a clear begin and end, and ignore the rest. The rest of the steps are taken per identified problematic area. Keep in mind that isolation focuses on tasks in an application, not code snippets. A task is something that's started in your application by another task, the user, or another program, and has a beginning and an end. You can see a task as a piece of functionality offered by your application.

Analyze

Once you've determined the problem areas, you have to perform analysis on the code paths of each area, to see where the performance problems occur and which areas are not the problem.
This is a multi-layered effort: an application which uses an O/R mapper typically consists of multiple parts: there's likely some kind of interface (web, webservice, Windows etc.), a part which controls the interface and business logic, the O/R mapper part, and the RDBMS, all connected with either a network or inter-process connections provided by the OS or other means. Each of these parts, including the connectivity plumbing, eats up a part of the total time it takes to complete a task, e.g. load a webpage with all orders of a given customer X. To understand which parts participate in the task / area we're investigating, and how much they contribute to the total time taken to complete the task, analysis of each participating part is essential.

Start with the code you wrote which starts the task, analyze the code, and track the path it follows through your application. Check what the code does along the way and verify whether it's correct or not. Analyze whether you have implemented the right algorithms in your code for this particular area. Remember we're looking at one area at a time, which means we're ignoring all other code paths: just the code path of the current problematic area, from begin to end and back. Don't dig in and start optimizing at the code level just yet. We're just analyzing. If your analysis reveals big architectural stupidity, it's perhaps a good idea to rethink the architecture at this point. For the rest, we're analyzing, which means we collect data about what could be wrong, for each participating part of the complete application.

Reviewing the code you wrote is a good tool to get a deeper understanding of what is going on for a given task, but ultimately it lacks precision and an overview of what really happens: humans aren't good code interpreters, computers are. We therefore need to utilize tools to get a deeper understanding of which parts contribute how much time to the total task, triggered by which other parts, and, for example, how many times they are called. There are two different kinds of tools which are necessary: .NET profilers and O/R mapper / RDBMS profilers.

.NET profiling

.NET profilers (e.g. dotTrace by JetBrains or ANTS by Red Gate Software) show exactly which pieces of code are called, how many times they're called, and the time it took to run that piece of code, at the method level and sometimes even at the line level. The .NET profilers are essential tools for understanding whether the time taken to complete a given task / area in your application is consumed by .NET code, where exactly in your code, the path to that code, and how many times that code was called by other code; they thus reveal where the hotspots are located: the areas where a solution can be found. Importantly, they also reveal which areas can be left alone. Remember our penny wise, pound foolish saying: if a profiler reveals that a group of methods is fast, or doesn't contribute much to the total time taken for a given task, ignore them. Even if the code in them is perhaps complex and looks like a candidate for optimization: you can work all day on that, it won't matter.

As we're focusing on a single area of the application, it's best to start profiling right before you actually activate the task/area. Most .NET profilers support this by starting the application without starting the profiling procedure just yet.
You navigate to the particular part which is slow, start profiling in the profiler, perform the actions which are considered slow in your application, and afterwards take a snapshot in the profiler. The snapshot contains the data collected by the profiler during the slow action, so most data is produced by code in the area to investigate. This is important, because it allows you to stay focused on a single area.

O/R mapper and RDBMS profiling

.NET profilers give you good insight into the .NET side of things, but not into the RDBMS side of the application. As this article is about O/R mapper powered applications, we're also looking at databases, and the software making it possible to consume the database in your application: the O/R mapper. To understand how much the parts of the O/R mapper and the database contribute to the total time taken for task T, we need different tools. There are two kinds of tools focusing on O/R mapper and database performance profiling: O/R mapper profilers and RDBMS profilers. For O/R mapper profilers, you can look at LLBLGen Prof by Hibernating Rhinos or the Linq to Sql/LLBLGen Pro profiler by Huagati. Hibernating Rhinos also has profilers for other O/R mappers like NHibernate (NHProf) and Entity Framework (EFProf), which work the same as LLBLGen Prof. For RDBMS profilers, you have to look at whether the RDBMS vendor ships a profiler. For SQL Server, for example, the profiler is shipped with SQL Server; for Oracle it's built into the RDBMS; there are also 3rd party tools. Which tool you're using isn't really important; what's important is that you get insight into which queries are executed during the task / area we're currently focused on, and how long they took. Here the O/R mapper profilers have an advantage, as they collect the time it took to execute the query from the application's perspective, so they also collect the time it took to transport data across the network. This is important because a query which returns a massive resultset, or a resultset with large blob/clob/ntext/image fields, takes more time to get transported across the network than a small resultset, and a database profiler doesn't take this into account most of the time.

Another tool to use in this case, which is more low-level and not supported by all O/R mappers (though LLBLGen Pro and NHibernate do support it), is tracing: most O/R mappers offer some form of tracing or logging system which you can use to collect the SQL generated and executed, and often other activity behind the scenes as well. While tracing can produce a tremendous amount of data in some cases, it also gives insight into what's going on.

Interpret

After we've completed the analysis step, it's time to look at the data we've collected. We've done code reviews to see whether we've done anything stupid, which parts actually take place, and whether the proper algorithms have been implemented. We've done .NET profiling to see which parts are choke points and how much time they contribute to the total time taken to complete the task we're investigating. We've performed O/R mapper profiling and RDBMS profiling to see which queries were executed during the task, how many queries were generated and executed, and how long they took to complete, including network transportation.

All this data reveals two things: which parts are big contributors to the total time taken, and which parts are irrelevant. Both aspects are very important. The parts which are irrelevant (i.e. don't contribute significantly to the total time taken) can be ignored from now on; we won't look at them. The parts which contribute a lot to the total time taken are important to look at. We now have to first look at the .NET profiler results, to see whether the time taken is consumed in our own code, in .NET framework code, in the O/R mapper itself, or somewhere else. For example, if most of the time is consumed by DbCommand.ExecuteReader, the time it took to complete the task depends on the time it takes to fetch the data from the database. If there was just one query executed, according to tracing or the O/R mapper / RDBMS profilers, check whether that query is optimal, uses indexes, or has to deal with a lot of data.

Interpret means that you follow the path from beginning to end through the data collected and determine where, along the path, the most time is contributed. It also means that you have to check whether this was expected or is totally unexpected. My previous example of the 10-row resultset of a query which groups millions of rows will likely reveal that a long time is spent inside the database and almost no time is spent in the .NET code, meaning the RDBMS part contributes the most to the total time taken; the rest is, compared to that time, irrelevant. Considering the vastness of the source data set, it's expected this will take some time. However, does it need tweaking? Perhaps all possible tweaks are already in place. In the interpret step you then have to decide whether further action in this area is necessary, based on what the analysis results show: if the analysis results were unexpected, and there is room for improvement in the area where the most time is contributed, action should be taken. If not, you can only accept the situation and move on. In all cases, document your decision together with the analysis you've done. If you decide that the perceived performance problem is actually expected due to the nature of the task performed, it's essential that in the future, when someone else looks at the application and starts asking questions, you can answer them properly, and new analysis is only necessary if the situation changes.

Fix

After interpreting the analysis results, you've concluded that some areas need adjustment. This is the fix step: you're actively correcting the performance problem with proper action targeted at the real cause. In many cases related to O/R mapper powered applications, it means you'll use different features of the O/R mapper to achieve the same goal, or apply optimizations at the RDBMS level. It could also mean you apply caching inside your application (compromising memory consumption for performance) to avoid unnecessarily re-querying data and re-consuming the results. After applying a change, it's key that you re-do the analysis and interpretation steps: compare the results and expectations with what you had before, to see whether your actions had any effect or whether they moved the problem to a different part of the application. Don't fall into the trap of doing partial analysis: do the full analysis again, .NET profiling and O/R mapper / RDBMS profiling. It might very well be that the changes you've made make one part faster but another part significantly slower, in such a way that the overall problem hasn't changed at all.
Performance tuning is dealing with compromises and making choices: to use one feature over another, to accept a higher memory footprint, to go away from the strict-OO path and execute queries directly onto the RDBMS. These are choices and compromises which will cross your path if you want to fix performance problems with respect to O/R mappers or data access and databases in general. In most cases it's not a big issue: alternatives are often good choices too, and the compromises aren't that hard to deal with. What is important is that you document why you made a choice or a compromise: which analysis data and which interpretation led you to the choice made. This is key for good maintainability in the years to come.

Most common performance problems with O/R mappers

Below is an incomplete list of common performance problems related to data-access / O/R mapper / RDBMS code. It will help you with fixing the hotspots you found in the interpretation step.

- SELECT N+1 (lazy-loading specific): lazy loading triggered performance bottlenecks. Consider a list of Orders bound to a grid, with a field mapped onto a related field in Order, Customer.CompanyName. Showing this column in the grid will make the grid fetch (indirectly) the Customer row for each row. This means you'll get, for the single list, not 1 query (for the orders) but 1 + (the number of orders shown) queries. To solve this, use eager loading with a prefetch path to fetch the customers with the orders. SELECT N+1 is easy to spot with an O/R mapper profiler or RDBMS profiler: if you see a lot of identical queries executed at once, you have this problem.

- Prefetch paths using many path nodes, or sorting, or limiting: an eager-loading problem. Prefetch paths can help with performance, but as one query is fetched per node, the amount of data fetched in a child node can be bigger than you think. Also consider that data in every node is merged on the client within the parent. This is fast, but it can still take some time if you fetch massive amounts of entities. If you keep fetches small, you can use tuning parameters like the ParameterizedPrefetchPathThreshold setting to get more optimal queries.

- Deep inheritance hierarchies of type Target Per Entity/Type: if you use inheritance of type Target per Entity / Type (each type in the inheritance hierarchy is mapped onto its own table/view), fetches will join subtype and supertype tables in many cases, which can lead to a lot of performance problems if the hierarchy has many types. With this problem, keep inheritance to a minimum if possible, or switch to a hierarchy of type Target Per Hierarchy, which means all entities in the inheritance hierarchy are mapped onto the same table/view. Of course this has its own set of drawbacks, but it's a compromise you might want to take.

- Fetching massive amounts of data by fetching large lists of entities: LLBLGen Pro supports paging (and limiting the number of rows returned), which is often key to processing large sets of data. Use paging on the RDBMS if possible (so a query is executed which returns only the rows in the page requested). When using paging in a web application, be sure to switch server-side paging on in the datasourcecontrol used; paging on the grid alone is not enough, as that can lead to fetching a lot of data which is then loaded into the grid and paged there. Keep in mind that analyzing queries for paging could lead to the false assumption that paging doesn't occur, e.g. when the query contains a field of type ntext/image/clob/blob and DISTINCT can't be applied while it should have been (e.g. due to a join): the datareader will do DISTINCT filtering on the client. This is a little slower, but it does perform paging on the data-reader, so it won't fetch all rows even if the query suggests it does.

- Fetching massive amounts of data because blob/clob/ntext/image fields aren't excluded: LLBLGen Pro supports field exclusion for queries. You can exclude fields (also in prefetch paths) per query to avoid fetching all fields of an entity, e.g. when you don't need them for the logic consuming the resultset. Excluding fields can greatly reduce the amount of time spent on data transport across the network. Use this optimization if you see a big difference between query execution time on the RDBMS and the time reported by the .NET profiler for the ExecuteReader method call.

- Doing client-side aggregates/scalar calculations by consuming a lot of data: if possible, try to formulate a scalar query or group-by query using the projection system or the GetScalar functionality of LLBLGen Pro, to do the data consumption on the RDBMS server. It's far more efficient to process data on the RDBMS server than to first load it all into memory and then traverse the data in memory to calculate a value.

- Using .ToList() constructs inside Linq queries: it might be that you use .ToList() somewhere in a Linq query, which makes the query run partially in memory. Example:

      var q = from c in metaData.Customers.ToList() where c.Country=="Norway" select c;

  This will actually fetch all customers into memory and do in-memory filtering, as the Linq query is defined on an IEnumerable<T> and not on an IQueryable<T>. Linq is nice, but it can often be a bit unclear where some parts of a Linq query might run (see the sketch after this list).

- Fetching all entities to delete into memory first: to delete a set of entities, it's rather inefficient to first fetch them all into memory and then delete them one by one. It's more efficient to execute a DELETE FROM ... WHERE query on the database directly to delete the entities in one go. LLBLGen Pro supports this feature, and so do some other O/R mappers. It's not always possible to do this operation in the context of an O/R mapper, however: if an O/R mapper relies on a cache, these kinds of operations are likely not supported, because they make it impossible to track whether an entity has actually been removed from the DB and thus can be removed from the cache.

- Fetching all entities to update with an expression into memory first: similar to the previous point, it is more efficient to update a set of entities directly with a single UPDATE query using an expression, instead of fetching the entities into memory first, updating them in a loop, and saving them afterwards. It might however be a compromise you don't want to take, as it works around the idea of having an object graph in memory which is manipulated, and instead makes the code fully aware there's an RDBMS somewhere.
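Referring to the .ToList() item above, here is a minimal side-by-side sketch, reusing the post's hypothetical metaData context:

    // ToList() too early: executes the fetch, pulls ALL customers into memory,
    // then filters the in-memory list.
    var inMemory = from c in metaData.Customers.ToList()
                   where c.Country == "Norway"
                   select c;

    // Without ToList(): stays IQueryable<T>, so the filter is translated to SQL
    // and only the Norwegian customers cross the wire.
    var inDatabase = from c in metaData.Customers
                     where c.Country == "Norway"
                     select c;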
Conclusion

Performance tuning is almost always about compromises and making choices. It's also about knowing where to look and how the systems in play behave and should behave. The four steps I provided should help you stay focused on the real problem and lead you towards the solution. Knowing how to optimally use the systems participating in your own code (.NET framework, O/R mapper, RDBMS, network/services) is key for success, as is knowing what's going on inside the application you built. I hope you'll find this guide useful in tracking down performance problems and dealing with them in a useful way.


  • What’s new in Silverlight 4 RC?

    - by pluginbaby
    I am here in Las Vegas for MIX10, where Scott Guthrie announced today the release of Silverlight 4 RC and the Visual Studio 2010 tools. You can now install VS2010 RC!!! As always, download links are here: www.silverlight.net He also said that the final version of Silverlight 4 will come next month (so April)! Four months ago I wrote a blog post on the new features of the Silverlight 4 beta, so… what's new in the RC?

    Rich Text
    · RichTextArea renamed to RichTextBox
    · Text position and selection APIs
    · "Xaml" property for serializing text content
    · XAML clipboard format
    · FlowDirection support on Runs tag
    · "Format then type" support when dragging controls to the designer
    · Thai/Vietnamese/Indic support
    · UI Automation Text pattern

    Networking
    · UploadProgress support (Client stack)
    · Caching support (Client stack)
    · Sockets security restrictions removal (Elevated Trust)
    · Sockets policy file retrieval via HTTP
    · Accept-Language header

    Out of Browser (Elevated Trust)
    · XAP signing
    · Silent install and emulation mode
    · Custom window chrome
    · Better support for COM Automation
    · Cancellable shutdown event
    · Updated security dialogs

    Media
    · Pinned full-screen mode on secondary display
    · Webcam/Mic configuration preview
    · More descriptive MediaSourceStream errors
    · Content & Output protection updates
    · Updates to H.264 content protection (ClearNAL)
    · Digital Constraint Token
    · CGMS-A
    · Multicast
    · Graphics card driver validation & revocation

    Graphics and Printing
    · HW accelerated Perspective Transforms
    · Ability to query page size and printable area
    · Memory usage and perf improvements

    Data
    · Entity-level validation support of INotifyDataErrorInfo for DataGrid
    · XPath support for XML

    Parser
    · New architecture enables future innovation
    · Performance and stability improvements
    · XmlnsPrefix & XmlnsDefinition attributes
    · Support setting order-dependent properties

    Globalization & Localization
    · Support for 31 new languages
    · Arabic, Hebrew and Thai input on Mac
    · Indic support

    More …
    · Update to DeepZoom code base with HW acceleration
    · Support for Private mode browsing
    · Google Chrome support (Windows)
    · FrameworkElement.Unloaded event
    · HTML Hosting accessibility
    · IsoStore perf improvements
    · Native hosting perf improvements (e.g., Bing Toolbar)
    · Consistency with Silverlight for Mobile APIs and Tooling
    · SDK
      - System.Numerics.dll
      - Dynamic XAP support (MEF)
      - Frame/Navigation refresh support

    That's a lot! You will find more details at the following links:
    http://timheuer.com/blog/archive/2010/03/15/whats-new-in-silverlight-4-rc-mix10.aspx
    http://www.davidpoll.com/2010/03/15/new-in-the-silverlight-4-rc-xaml-features/

    Read the article

  • Four Emerging Payment Stories

    - by David Dorf
    The world of alternate payments has been moving fast of late. Innovation in this area will help both consumers and retailers, but probably hurt the banks (at least that's the plan). Here are four recent news items in this area:

    Dwolla, a start-up in Iowa, is trying to make credit cards obsolete. Twelve guys in Des Moines are using $1.3M they raised to allow businesses to skip the credit card networks and avoid the fees. Today they move about $1M a day across their network with an average transaction size of $500. Instead of charging merchants 2.9% plus $.30 per transaction, Dwolla charges a quarter -- yep, that coin featuring George Washington. Dwolla (Web + Dollar = Dwolla) avoids the credit networks and connects directly to bank accounts using the banks' ACH network. They are signing up banks and merchants, targeting B2B and C2B as well as P2P payments. They leverage social networks to notify people they have a money transfer, and also have a mobile app that uses GPS location. However, all is not rosy. There have been complaints about unexpected chargebacks, and with debit fees being reduced by the big banks, the need is not as pronounced. The big banks are working on their own network called clearXchange that could provide stiff competition.

    VeriFone just bought European payment processor Point for around $1B. By itself this would not have caught my attention, except for the fact that VeriFone also announced the acquisition of GlobalBay earlier this month. In addition to their core business of selling stand-beside payment terminals, with GlobalBay they get employee-operated mobile selling tools, and with Point they get a very big payment processing platform.

    MasterCard and Intel announced a partnership around payments, starting with PayPass, MasterCard's new payment technology. Intel will lend its expertise to add additional levels of security, which seems to be the biggest barrier to consumer adoption. Everyone is scrambling to get their piece of cash transactions, which still represent 85% of all transactions.

    Apple was awarded another mobile payment patent, further cementing the rumors that the iPhone 5 will support NFC payments. As usual, Apple is upsetting the apple cart (sorry) by moving control of key data from the carriers to Apple. With its vast number of iTunes accounts, Apple has a ready-made customer base to use the payment infrastructure, which I bet will slowly transition people away from credit cards and toward cheaper ACH. Gary Schwartz explains the three-step process Apple is taking to become a payment processor.

    Below is a picture I drew representing payments in the retail industry. There's certainly a lot of innovation happening.

    Read the article

  • Create a Shortcut to Put Your Windows Computer into Hibernation

    - by Mysticgeek
    Putting your Windows computer into Hibernation Mode allows you to save power and quickly access your desktop again when you need it. Here we show how to create a shortcut to put your PC into Hibernation Mode quickly. Note: we show how to create the shortcut in Windows 7 and add it to the Taskbar, but creating the shortcut should work in XP and Vista as well.

    Create Shortcut
    Right-click an empty area on your desktop and select New \ Shortcut from the Context Menu. In the Create Shortcut window, type or copy the following into the location field:

        C:\Windows\System32\rundll32.exe powrprof.dll,SetSuspendState 0,1,0

    Now give the shortcut a name such as Hibernate Computer, or whatever you want to call it. Now you have the shortcut on your desktop, but you might want to change the icon to something else.

    Change Shortcut Icon
    Right-click the shortcut icon and select Properties. Select the Shortcut tab and click the Change Icon button. In the "Look for icons in this file" field, copy and paste the following, then click OK:

        %SystemRoot%\system32\SHELL32.dll

    This brings up a list of included Windows icons you can choose from. Select whatever you want it to be; there are a couple of Power icons in the directory. Click OK. Of course you can choose any icon you want; if you customize your icons, just browse to the directory they are in. For more on selecting icons, check out our article on how to customize your icons in Windows 7 or how to change a file type's icon. Now you will see the icon in the Shortcut Properties window; click OK. Here we have a nice looking shortcut that you can use to put your machine into Hibernation. Or here we used a customized Star Trek icon just to make things more interesting… You can pin the shortcut to the Taskbar for easy access.

    Conclusion
    If Hibernation is not enabled on your Windows 7 system, you can easily manage it. By creating a shortcut and pinning it to the Taskbar, you can put your machine into Hibernation Mode quickly and easily. If you like to customize your desktop with unique icons, check out our posts on a Sci-Fi icon pack or Video Game icon pack.
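
    For readers who would rather trigger hibernation from code than from a shortcut, here is a minimal C# sketch. It P/Invokes the same SetSuspendState entry point in powrprof.dll that the rundll32 shortcut targets; the class name and file layout here are just illustrative.

        using System.Runtime.InteropServices;

        class Hibernate
        {
            [DllImport("powrprof.dll", SetLastError = true)]
            static extern bool SetSuspendState(bool hibernate, bool forceCritical, bool disableWakeEvent);

            static void Main()
            {
                // Request hibernation: hibernate = true,
                // forceCritical = false, disableWakeEvent = false.
                SetSuspendState(true, false, false);
            }
        }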

    Read the article

  • How to upgrade boost lib using apt-get?

    - by sam
    I use Ubuntu 11.04. My boost version:

    sam@sam:~/code/ros/pcl$ apt-cache showpkg libboost-all-dev
    Package: libboost-all-dev
    Versions:
    1.42.0.1ubuntu1 (/var/lib/apt/lists/tw.archive.ubuntu.com_ubuntu_dists_natty_universe_binary-amd64_Packages) (/var/lib/dpkg/status)
    Description Language:
        File: /var/lib/apt/lists/tw.archive.ubuntu.com_ubuntu_dists_natty_universe_binary-amd64_Packages
        MD5: 72efad05a3c79394c125b79e1d4eb3a7
    Reverse Depends:
      libvtk5-dev,libboost-all-dev
      libfeel++-dev,libboost-all-dev
    Dependencies:
    1.42.0.1ubuntu1 - libboost-dev (0 (null)) libboost-date-time-dev (0 (null)) libboost-filesystem-dev (0 (null)) libboost-graph-dev (0 (null)) libboost-iostreams-dev (0 (null)) libboost-math-dev (0 (null)) libboost-program-options-dev (0 (null)) libboost-python-dev (0 (null)) libboost-regex-dev (0 (null)) libboost-serialization-dev (0 (null)) libboost-signals-dev (0 (null)) libboost-system-dev (0 (null)) libboost-test-dev (0 (null)) libboost-thread-dev (0 (null)) libboost-wave-dev (0 (null))
    Provides:
    1.42.0.1ubuntu1 -
    Reverse Provides:
    sam@sam:~/code/ros/pcl$

    How can I upgrade boost to 1.44+ using apt tools? Thank you~

    When I run apt-add-repository, it shows:

    sam@sam:~/code/ros/pcl$ sudo apt-add-repository ppa:timklingt/ppa
    Error reading https://launchpad.net/api/1.0/~timklingt/+archive/ppa: GnuTLS recv error (-9): A TLS packet with unexpected length was received.
    sam@sam:~/code/ros/pcl$

    How can I fix it? Thank you~

    I tried to install libboost1.46-all-dev:

    sam@sam:~/code/ros/pcl$ sudo apt-get install libboost1.46-all-dev
    Reading package lists... Done
    Building dependency tree
    Reading state information... Done
    Some packages could not be installed. This may mean that you have requested an impossible situation or if you are using the unstable distribution that some required packages have not yet been created or been moved out of Incoming.
    The following information may help to resolve the situation:
    The following packages have unmet dependencies:
     libboost1.46-all-dev : Depends: libboost1.46-dev but it is not going to be installed
                            Depends: libboost-date-time1.46-dev but it is not going to be installed
                            Depends: libboost-filesystem1.46-dev but it is not going to be installed
                            Depends: libboost-graph1.46-dev but it is not going to be installed
                            Depends: libboost-iostreams1.46-dev but it is not going to be installed
                            Depends: libboost-math1.46-dev but it is not going to be installed
                            Depends: libboost-program-options1.46-dev but it is not going to be installed
                            Depends: libboost-python1.46-dev but it is not going to be installed
                            Depends: libboost-regex1.46-dev but it is not going to be installed
                            Depends: libboost-serialization1.46-dev but it is not going to be installed
                            Depends: libboost-signals1.46-dev but it is not going to be installed
                            Depends: libboost-system1.46-dev but it is not going to be installed
                            Depends: libboost-test1.46-dev but it is not going to be installed
                            Depends: libboost-thread1.46-dev but it is not going to be installed
                            Depends: libboost-wave1.46-dev but it is not going to be installed
    E: Broken packages
    sam@sam:~/code/ros/pcl$

    What do these errors mean? And how can I solve them? Thank you~

    Read the article

  • Automatic Standby Recreation for Data Guard

    - by pablo.boixeda(at)oracle.com
    Hi,Unfortunately sometimes a Standby Instance needs to be recreated. This can happen for many reasons such as lost archive logs, standby data files, failover, among others.This is why we wanted to have one script to recreate standby instances in an easy way.This script recreates the standby considering some prereqs:-Database Version should be at least 11gR1-Dummy instance started on the standby node (Seeking to improve this so it won't be needed)-Broker configuration hasn't been removed-In our case we have two TNSNAMES files, one for the Standby creation (using SID) and the other one for production using service names (including broker service name)-Some environment variables set up by the environment db script (like ORACLE_HOME, PATH...)-The directory tree should not have been modified in the stanby hostWe are currently using it on our 11gR2 Data Guard tests.Any improvements will be welcome! Normal 0 21 false false false ES X-NONE X-NONE MicrosoftInternetExplorer4 /* Style Definitions */ table.MsoNormalTable {mso-style-name:"Table Normal"; mso-tstyle-rowband-size:0; mso-tstyle-colband-size:0; mso-style-noshow:yes; mso-style-priority:99; mso-style-qformat:yes; mso-style-parent:""; mso-padding-alt:0cm 5.4pt 0cm 5.4pt; mso-para-margin-top:0cm; mso-para-margin-right:0cm; mso-para-margin-bottom:10.0pt; mso-para-margin-left:0cm; line-height:115%; mso-pagination:widow-orphan; font-size:11.0pt; font-family:"Calibri","sans-serif"; mso-ascii-font-family:Calibri; mso-ascii-theme-font:minor-latin; mso-fareast-font-family:"Times New Roman"; mso-fareast-theme-font:minor-fareast; mso-hansi-font-family:Calibri; mso-hansi-theme-font:minor-latin; mso-bidi-font-family:"Times New Roman"; mso-bidi-theme-font:minor-bidi;} #!/bin/ksh ###    NOMBRE / VERSION ###       recrea_dg.sh   v.1.00 ### ###    DESCRIPCION ###       reacreacion de la Standby ### ###    DEVUELVE ###       0 Creacion de STANDBY correcta ###       1 Fallo ### ###    NOTAS ###       Este shell script NO DEBE MODIFICARSE. ###       Todas las variables y constantes necesarias se toman del entorno. ### ###    MODIFICADO POR:    FECHA:        COMENTARIOS: ###    ---------------    ----------    ------------------------------------- ###      Oracle           15/02/2011    Creacion. ### ### ### Cargar entorno ### V_ADMIN_DIR=`dirname $0` . ${V_ADMIN_DIR}/entorno_bd.sh 1>>/dev/null if [ $? -ne 0 ] then   echo "Error Loading the environment."   exit 1 fi V_RET=0 V_DATE=`/bin/date` V_DATE_F=`/bin/date +%Y%m%d_%H%M%S` V_LOGFILE=${V_TRAZAS}/recrea_dg_${V_DATE_F}.log exec 4>&1 tee ${V_FICH_LOG} >&4 |& exec 1>&p 2>&1 ### ### Variables para Recrear el Data Guard ### V_DB_BR=`echo ${V_DB_NAME}|tr '[:lower:]' '[:upper:]'` if [ "${ORACLE_SID}" = "${V_DB_NAME}01" ] then         V_LOCAL_BR=${V_DB_BR}'01'         V_REMOTE_BR=${V_DB_BR}'02' else         V_LOCAL_BR=${V_DB_BR}'02'         V_REMOTE_BR=${V_DB_BR}'01' fi echo " Getting local instance ROLE ${ORACLE_SID} ..." sqlplus -s /nolog 1>>/dev/null 2>&1 <<-! whenever sqlerror exit 1 connect / as sysdba variable salida number declare   v_database_role v\$database.database_role%type; begin   select database_role into v_database_role from v\$database;   :salida := case v_database_role        when 'PRIMARY' then 2        when 'PHYSICAL STANDBY' then 3        else 4      end; end; / exit :salida ! case $? in 1) echo " ERROR: Cannot get instance ROLE ." | tee -a ${V_LOGFILE}   2>&1    V_RET=1 ;; 2) echo " Local Instance with PRIMARY role." 
| tee -a ${V_LOGFILE}   2>&1    V_DB_ROLE_LCL=PRIMARY ;; 3) echo " Local Instance with PHYSICAL STANDBY role." | tee -a ${V_LOGFILE}   2>&1    V_DB_ROLE_LCL=STANDBY ;; *) echo " ERROR: UNKNOWN ROLE." | tee -a ${V_LOGFILE}   2>&1    V_RET=1 ;; esac if [ "${V_DB_ROLE_LCL}" = "PRIMARY" ] then         echo "####################################################################" | tee -a ${V_LOGFILE}   2>&1         echo "${V_DATE} - Reacreating  STANDBY Instance." | tee -a ${V_LOGFILE}   2>&1         echo "" | tee -a ${V_LOGFILE}   2>&1         echo "DATAFILES, CONTROL FILES, REDO LOGS and ARCHIVE LOGS in standby instance ${V_REMOTE_BR} will be removed" | tee -a ${V_LOGFILE}   2>&1         echo "" | tee -a ${V_LOGFILE}   2>&1         V_PRIMARY=${V_LOCAL_BR}         V_STANDBY=${V_REMOTE_BR} fi if [ "${V_DB_ROLE_LCL}" = "STANDBY" ] then         echo "####################################################################" | tee -a ${V_LOGFILE}   2>&1         echo "${V_DATE} - Reacreating  STANDBY Instance." | tee -a ${V_LOGFILE}   2>&1         echo "" | tee -a ${V_LOGFILE}   2>&1         echo "DATAFILES, CONTROL FILES, REDO LOGS and ARCHIVE LOGS in standby instance ${V_LOCAL_BR} will be removed" | tee -a ${V_LOGFILE}   2>&1         echo "" | tee -a ${V_LOGFILE}   2>&1         V_PRIMARY=${V_REMOTE_BR}         V_STANDBY=${V_LOCAL_BR} fi # Cargamos las variables de los hosts # Cargamos las variables de los hosts PRY_HOST=`sqlplus  /nolog << EOF | grep KEEP | sed 's/KEEP//;s/[   ]//g' connect sys/${V_DB_PWD}@${V_PRIMARY} as sysdba select 'KEEP',host_name from v\\$instance; EOF` SBY_HOST=`sqlplus  /nolog << EOF | grep KEEP | sed 's/KEEP//;s/[   ]//g' connect sys/${V_DB_PWD}@${V_STANDBY} as sysdba select 'KEEP',host_name from v\\$instance; EOF` echo "el HOST primary es: ${PRY_HOST}" | tee -a ${V_LOGFILE}   2>&1 echo "el HOST standby es: ${SBY_HOST}" | tee -a ${V_LOGFILE}   2>&1 echo "" | tee -a ${V_LOGFILE}   2>&1 ## ## Paramos la instancia STANDBY ## V_DATE=`/bin/date` echo "${V_DATE} - Shutting down Standby instance" | tee -a ${V_LOGFILE}   2>&1 echo "" | tee -a ${V_LOGFILE}   2>&1 echo "********************************************************************************" | tee -a ${V_LOGFILE}   2>&1 ## ## Paramos la instancia STANDBY ## SBY_STATUS=`sqlplus  /nolog << EOF | grep KEEP | sed 's/KEEP//;s/[   ]//g' connect sys/${V_DB_PWD}@${V_STANDBY} as sysdba select 'KEEP',status from v\\$instance; EOF` if [ ${SBY_STATUS} = 'STARTED' ] || [ ${SBY_STATUS} = 'MOUNTED' ] || [ ${SBY_STATUS} = 'OPEN' ] then         echo "${V_DATE} - Standby instance shutdown in progress..." | tee -a ${V_LOGFILE}   2>&1         echo "" | tee -a ${V_LOGFILE}   2>&1         echo "********************************************************************************" | tee -a ${V_LOGFILE}   2>&1         sqlplus -s /nolog 1>>/dev/null 2>&1 <<-!         whenever sqlerror exit 1         connect sys/${V_DB_PWD}@${V_STANDBY} as sysdba         shutdown abort         ! 
fi V_DATE=`/bin/date` echo "" echo "${V_DATE} - Standby instance stopped" | tee -a ${V_LOGFILE}   2>&1 echo "" | tee -a ${V_LOGFILE}   2>&1 echo "********************************************************************************" | tee -a ${V_LOGFILE}   2>&1 ## ## Eliminamos los ficheros de la base de datos ## V_SBY_SID=`echo ${V_STANDBY}|tr '[:upper:]' '[:lower:]'` V_PRY_SID=`echo ${V_PRIMARY}|tr '[:upper:]' '[:lower:]'` ssh ${SBY_HOST} rm /opt/oracle/db/db${V_DB_NAME}/${V_SBY_SID}/data/*.dbf ssh ${SBY_HOST} rm /opt/oracle/db/db${V_DB_NAME}/${V_SBY_SID}/arch/*.arc ssh ${SBY_HOST} rm /opt/oracle/db/db${V_DB_NAME}/${V_SBY_SID}/ctl/*.ctl ssh ${SBY_HOST} rm /opt/oracle/db/db${V_DB_NAME}/${V_SBY_SID}/redo/*.ctl ssh ${SBY_HOST} rm /opt/oracle/db/db${V_DB_NAME}/${V_SBY_SID}/redo/*.rdo ## ## Startup nomount stby instance ## V_DATE=`/bin/date` echo "" | tee -a ${V_LOGFILE}   2>&1 echo "${V_DATE} - Starting  DUMMY Standby Instance " | tee -a ${V_LOGFILE}   2>&1 echo "" | tee -a ${V_LOGFILE}   2>&1 echo "********************************************************************************" | tee -a ${V_LOGFILE}   2>&1 ssh ${SBY_HOST} touch /home/oracle/init_dg.ora ssh ${SBY_HOST} 'echo "DB_NAME='${V_DB_NAME}'">>/home/oracle/init_dg.ora' ssh ${SBY_HOST} touch /home/oracle/start_dummy.sh ssh ${SBY_HOST} 'echo "ORACLE_HOME=/opt/oracle/db/db'${V_DB_NAME}'/soft/db11.2.0.2 ">>/home/oracle/start_dummy.sh' ssh ${SBY_HOST} 'echo "export ORACLE_HOME">>/home/oracle/start_dummy.sh' ssh ${SBY_HOST} 'echo "PATH=\$ORACLE_HOME/bin:\$PATH">>/home/oracle/start_dummy.sh' ssh ${SBY_HOST} 'echo "export PATH">>/home/oracle/start_dummy.sh' ssh ${SBY_HOST} 'echo "ORACLE_SID='${V_SBY_SID}'">>/home/oracle/start_dummy.sh' ssh ${SBY_HOST} 'echo "export ORACLE_SID">>/home/oracle/start_dummy.sh' ssh ${SBY_HOST} 'echo "sqlplus -s /nolog <<-!" >>/home/oracle/start_dummy.sh' ssh ${SBY_HOST} 'echo "      whenever sqlerror exit 1 ">>/home/oracle/start_dummy.sh' ssh ${SBY_HOST} 'echo "      connect / as sysdba ">>/home/oracle/start_dummy.sh' ssh ${SBY_HOST} 'echo "      startup nomount pfile='\''/home/oracle/init_dg.ora'\''">>/home/oracle/start_dummy.sh' ssh ${SBY_HOST} 'echo "! ">>/home/oracle/start_dummy.sh' ssh ${SBY_HOST} 'chmod 744 /home/oracle/start_dummy.sh' ssh ${SBY_HOST} 'sh /home/oracle/start_dummy.sh' ssh ${SBY_HOST} 'rm /home/oracle/start_dummy.sh' ssh ${SBY_HOST} 'rm /home/oracle/init_dg.ora' ## ## TNSNAMES change, specific for RMAN duplicate ## V_DATE=`/bin/date` echo "" | tee -a ${V_LOGFILE}   2>&1 echo "${V_DATE} - Setting up TNSNAMES in PRIMARY host " | tee -a ${V_LOGFILE}   2>&1 echo "" | tee -a ${V_LOGFILE}   2>&1 echo "********************************************************************************" | tee -a ${V_LOGFILE}   2>&1 ssh ${PRY_HOST} 'cp /opt/oracle/db/db'${V_DB_NAME}'/soft/db11.2.0.2/network/admin/tnsnames.ora.inst  /opt/oracle/db/db'${V_DB_NAME}'/soft/db11.2.0.2/network/admin/tnsnames.ora' V_DATE=`/bin/date` echo "" | tee -a ${V_LOGFILE}   2>&1 echo "${V_DATE} - Starting STANDBY creation with RMAN.. " | tee -a ${V_LOGFILE}   2>&1 echo "" | tee -a ${V_LOGFILE}   2>&1 echo "********************************************************************************" | tee -a ${V_LOGFILE}   2>&1 rman<<-! 
>>${V_LOGFILE} connect target sys/${V_DB_PWD}@${V_PRIMARY} connect auxiliary sys/${V_DB_PWD}@${V_STANDBY} run { allocate channel prmy1 type disk; allocate channel prmy2 type disk; allocate channel prmy3 type disk; allocate channel prmy4 type disk; allocate auxiliary channel stby type disk; duplicate target database for standby from active database dorecover spfile parameter_value_convert '${V_PRY_SID}','${V_SBY_SID}' set control_files='/opt/oracle/db/db${V_DB_NAME}/${V_SBY_SID}/ctl/control01.ctl','/opt/oracle/db/db${V_DB_NAME}/${V_SBY_SID}/redo/control02.ctl' set db_file_name_convert='/opt/oracle/db/db${V_DB_NAME}/${V_PRY_SID}/','/opt/oracle/db/db${V_DB_NAME}/${V_SBY_SID}/' set log_file_name_convert='/opt/oracle/db/db${V_DB_NAME}/${V_PRY_SID}/','/opt/oracle/db/db${V_DB_NAME}/${V_SBY_SID}/' set 'db_unique_name'='${V_SBY_SID}' set log_archive_config='DG_CONFIG=(${V_PRIMARY},${V_STANDBY})' set fal_client='${V_STANDBY}' set fal_server='${V_PRIMARY}' set log_archive_dest_1='LOCATION=/opt/oracle/db/db${V_DB_NAME}/${V_SBY_SID}/arch DB_UNIQUE_NAME=${V_SBY_SID} MANDATORY VALID_FOR=(ALL_LOGFILES,ALL_ROLES)' set log_archive_dest_2='SERVICE="${V_PRIMARY}"','SYNC AFFIRM DB_UNIQUE_NAME=${V_PRY_SID} DELAY=0 MAX_FAILURE=0 REOPEN=300 REGISTER VALID_FOR=(ONLINE_LOGFILES,PRIMARY_ROLE)' nofilenamecheck ; } ! V_DATE=`/bin/date` if [ $? -ne 0 ] then         echo ""         echo "${V_DATE} - Error creating STANDBY instance"         echo ""         echo "********************************************************************************" else         echo ""         echo "${V_DATE} - STANDBY instance created SUCCESSFULLY "         echo ""         echo "********************************************************************************" fi sqlplus -s /nolog 1>>/dev/null 2>&1 <<-!         whenever sqlerror exit 1         connect sys/${V_DB_PWD}@${V_STANDBY} as sysdba         alter system set local_listener='(ADDRESS=(PROTOCOL=TCP)(HOST=${SBY_HOST})(PORT=1544))' scope=both;         alter system set service_names='${V_DB_NAME}.eu.roca.net,${V_SBY_SID}.eu.roca.net,${V_SBY_SID}_DGMGRL.eu.roca.net' scope=both;         alter database recover managed standby database using current logfile disconnect from session;         alter system set dg_broker_start=true scope=both; ! ## ## TNSNAMES change, back to Production Mode ## V_DATE=`/bin/date` echo " " | tee -a ${V_LOGFILE}   2>&1 echo "${V_DATE} - Restoring TNSNAMES in PRIMARY "  | tee -a ${V_LOGFILE}   2>&1 echo ""  | tee -a ${V_LOGFILE}   2>&1 echo "********************************************************************************"  | tee -a ${V_LOGFILE}   2>&1 ssh ${PRY_HOST} 'cp /opt/oracle/db/db'${V_DB_NAME}'/soft/db11.2.0.2/network/admin/tnsnames.ora.prod  /opt/oracle/db/db'${V_DB_NAME}'/soft/db11.2.0.2/network/admin/tnsnames.ora' echo ""  | tee -a ${V_LOGFILE}   2>&1 echo "${V_DATE} -  Waiting for media recovery before check the DATA GUARD Broker"  | tee -a ${V_LOGFILE}   2>&1 echo ""  | tee -a ${V_LOGFILE}   2>&1 echo "********************************************************************************"  | tee -a ${V_LOGFILE}   2>&1 sleep 200 dgmgrl <<-! | grep SUCCESS 1>/dev/null 2>&1     connect ${V_DB_USR}/${V_DB_PWD}@${V_STANDBY}     show configuration verbose; ! if [ $? 
-ne 0 ] ; then         echo "       ERROR: El status del Broker no es SUCCESS" | tee -a ${V_LOGFILE}   2>&1 ;         V_RET=1 else          echo "      DATA GUARD OK " | tee -a ${V_LOGFILE}   2>&1 ; Normal 0 21 false false false ES X-NONE X-NONE MicrosoftInternetExplorer4 /* Style Definitions */ table.MsoNormalTable {mso-style-name:"Table Normal"; mso-tstyle-rowband-size:0; mso-tstyle-colband-size:0; mso-style-noshow:yes; mso-style-priority:99; mso-style-qformat:yes; mso-style-parent:""; mso-padding-alt:0cm 5.4pt 0cm 5.4pt; mso-para-margin-top:0cm; mso-para-margin-right:0cm; mso-para-margin-bottom:10.0pt; mso-para-margin-left:0cm; line-height:115%; mso-pagination:widow-orphan; font-size:11.0pt; font-family:"Calibri","sans-serif"; mso-ascii-font-family:Calibri; mso-ascii-theme-font:minor-latin; mso-fareast-font-family:"Times New Roman"; mso-fareast-theme-font:minor-fareast; mso-hansi-font-family:Calibri; mso-hansi-theme-font:minor-latin; mso-bidi-font-family:"Times New Roman"; mso-bidi-theme-font:minor-bidi;}         V_RET=0 fi Hope it helps.

    Read the article

  • mysql completely removing

    - by Dmitry Teplyakov
    I broke my MySQL and now I want to completely reinstall it. I tried:

    $ sudo apt-get install --reinstall mysql-server
    $ sudo apt-get remove --purge mysql-client mysql-server

    But I always see a popup proposing to change the root password; I change it, and then get an error that it can't be changed.

    $ sudo apt-get remove --purge mysql-client mysql-server
    Reading package lists... Done
    Building dependency tree
    Reading state information... Done
    Package mysql-client is not installed, so not removed
    Package mysql-server is not installed, so not removed
    The following packages were automatically installed and are no longer required:
      libmygpo-qt1 libqtscript4-network libqtscript4-gui libtag-extras1 libqtscript4-sql libqtscript4-xml amarok-utils amarok-common libqtscript4-uitools liblastfm0 libloudmouth1-0 libqtscript4-core
    Use 'apt-get autoremove' to remove them.
    0 upgraded, 0 newly installed, 0 to remove and 0 not upgraded.
    1 not fully installed or removed.
    After this operation, 0 B of additional disk space will be used.
    Setting up mysql-server-5.5 (5.5.28-0ubuntu0.12.04.2) ...
    121114 19:04:03 [Note] Plugin 'FEDERATED' is disabled.
    121114 19:04:03 InnoDB: The InnoDB memory heap is disabled
    121114 19:04:03 InnoDB: Mutexes and rw_locks use GCC atomic builtins
    121114 19:04:03 InnoDB: Compressed tables use zlib 1.2.3.4
    121114 19:04:03 InnoDB: Initializing buffer pool, size = 128.0M
    121114 19:04:03 InnoDB: Completed initialization of buffer pool
    InnoDB: Error: auto-extending data file ./ibdata1 is of a different size
    InnoDB: 0 pages (rounded down to MB) than specified in the .cnf file:
    InnoDB: initial 640 pages, max 0 (relevant if non-zero) pages!
    121114 19:04:03 InnoDB: Could not open or create data files.
    121114 19:04:03 InnoDB: If you tried to add new data files, and it failed here,
    121114 19:04:03 InnoDB: you should now edit innodb_data_file_path in my.cnf back
    121114 19:04:03 InnoDB: to what it was, and remove the new ibdata files InnoDB created
    121114 19:04:03 InnoDB: in this failed attempt. InnoDB only wrote those files full of
    121114 19:04:03 InnoDB: zeros, but did not yet use them in any way. But be careful: do not
    121114 19:04:03 InnoDB: remove old data files which contain your precious data!
    121114 19:04:03 [ERROR] Plugin 'InnoDB' init function returned error.
    121114 19:04:03 [ERROR] Plugin 'InnoDB' registration as a STORAGE ENGINE failed.
    121114 19:04:03 [ERROR] Unknown/unsupported storage engine: InnoDB
    121114 19:04:03 [ERROR] Aborting
    121114 19:04:03 [Note] /usr/sbin/mysqld: Shutdown complete
    start: Job failed to start
    invoke-rc.d: initscript mysql, action "start" failed.
    dpkg: error processing mysql-server-5.5 (--configure):
     subprocess installed post-installation script returned error exit status 1
    Errors were encountered while processing:
     mysql-server-5.5
    E: Sub-process /usr/bin/dpkg returned an error code (1)

    It is good for me that I don't have any important databases, but...

    Read the article

  • Remove the Lock Icon from a Folder in Windows 7

    - by Trevor Bekolay
    If you've been playing around with folder sharing or security options, then you might have ended up with an unsightly lock icon on a folder. We'll show you how to get rid of that icon without over-sharing the folder.

    The lock icon in Windows 7 indicates that the file or folder can only be accessed by you, and not by any other user on your computer. If this is desired, then the lock icon is a good way to ensure that those settings are in place. If this isn't your intention, then it's an eyesore. To remove the lock icon, we have to change the security settings on the folder to allow the Users group to, at the very least, read from the folder.

    Right-click on the folder with the lock icon and select Properties. Switch to the Security tab, and then press the Edit… button. A list of groups and users that have access to the folder appears. Missing from the list will be the "Users" group. Click the Add… button. The next window is a bit confusing, but all you need to do is enter "Users" into the text field near the bottom of the window. Click the Check Names button. "Users" will change to the location of the Users group on your particular computer. In our case, this is PHOENIX\Users (PHOENIX is the name of our test machine). Click OK. The Users group should now appear in the list of groups and users with access to the folder. You can modify the specific permissions that the Users group has if you'd like; at the minimum, it must have Read access. Click OK. Keep clicking OK until you're back at the Explorer window. You should now see that the lock icon is gone from your folder!

    It may be a small aesthetic nuance, but having that one folder stick out in a group of other folders is needlessly distracting. Fortunately, the fix is quick and easy, and does not compromise the security of the folder! If you'd rather script the same change, see the sketch below.
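
    The following C# sketch makes the same change programmatically using the classic .NET Framework file system ACL APIs: it grants the built-in Users group read access on a folder. The folder path is a placeholder, and using the well-known SID rather than the literal name "Users" keeps it working on localized systems.

        using System.IO;
        using System.Security.AccessControl;
        using System.Security.Principal;

        class UnlockFolder
        {
            static void Main()
            {
                // Placeholder path: point this at the folder with the lock icon.
                var dir = new DirectoryInfo(@"C:\path\to\locked-folder");

                DirectorySecurity acl = dir.GetAccessControl();

                // BUILTIN\Users, resolved via its well-known SID so the code
                // also works on non-English installations of Windows.
                var users = new SecurityIdentifier(WellKnownSidType.BuiltinUsersSid, null);

                // The article's steps grant at least Read; ReadAndExecute is a
                // convenient preset that covers it for folders and their contents.
                acl.AddAccessRule(new FileSystemAccessRule(
                    users,
                    FileSystemRights.ReadAndExecute,
                    InheritanceFlags.ContainerInherit | InheritanceFlags.ObjectInherit,
                    PropagationFlags.None,
                    AccessControlType.Allow));

                dir.SetAccessControl(acl);
            }
        }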

    Read the article

  • Problem in installation in My Hp g4 1226se

    - by vivek Verma
    1vivek.100: Dual booting error on an HP Pavilion g4 1226se

    Dear Sir or Madam,
    My name is Vivek Verma. I am the user of an HP laptop, series and model HP PAVILION G4 1226SE, purchased in February 2012. Windows 7 Home Basic 64-bit is already installed on the laptop. Now I want to install Ubuntu 12.04 LTS or 13.10. I have tried many times to install it via live CD or USB installer, using many live CDs and many pen drives, but it has not worked, and now I am in a very big problem. When I boot from the CD or USB drive to install Ubuntu, the laptop screen goes almost black (the screen brightness is very low, with very low visibility) and shows nothing. When I move past that screen, there is a graphics option for installing Ubuntu, and when I press the "dual boot with settings" button and continue, the laptop shuts down after 2 to 5 minutes. The HP service centre person says the laptop hardware has no problem and told me to contact Ubuntu tech support. So please help me if possible.

    My laptop configuration:
    Product Name: g4-1226se
    Product Number: QJ551EA
    Microprocessor: 2.4 GHz Intel Core i5-2430M
    Microprocessor Cache: 3 MB L3 cache
    Memory: 4 GB DDR3
    Memory Max: upgradeable to 4 GB DDR3
    Video Graphics: Intel HD 3000 (up to 1.65 GB)
    Display: 35.5 cm (14.0") High-Definition LED-backlit BrightView Display (1366 x 768)
    Hard Drive: 500 GB SATA (5400 rpm)
    Multimedia Drive: SuperMulti DVD±R/RW with Double Layer Support
    Network Card: Integrated 10/100 BASE-T Ethernet LAN
    Wireless Connectivity: 802.11 b/g/n
    Sound: Altec Lansing speakers
    Keyboard: Full size island-style keyboard with home roll keys
    Pointing Device: TouchPad supporting Multi-Touch gestures with On/Off button
    PC Card Slots: Multi-Format Digital Media Card Reader for Secure Digital cards, Multimedia cards
    External Ports: 1 VGA, 1 headphone-out, 1 microphone-in, 3 USB 2.0, 1 RJ45
    Dimensions: 34.1 x 23.1 x 3.56 cm
    Weight: starting at 2.1 kg
    Power: 65W AC Power Adapter, 6-cell Lithium-Ion (Li-Ion)
    What's In The Box: Webcam with Integrated Digital Microphone (VGA)
    Software Operating System: Windows 7 Home Basic 64-bit (genuine)

    Name: Vivek Verma
    Contact no.: +919911146737
    Email: [email protected]

    Read the article

  • Is there a better term than "smoothness" or "granularity" to describe this language feature?

    - by Chris Stevens
    One of the best things about programming is the abundance of different languages. There are general-purpose languages like C++ and Java, as well as little languages like XSLT and AWK. When comparing languages, people often use things like speed, power, expressiveness, and portability as the important distinguishing features. There is one characteristic of languages I consider to be important that, so far, I haven't heard [or been able to come up with] a good term for: how well a language scales from writing tiny programs to writing huge programs.

    Some languages make it easy and painless to write programs that only require a few lines of code, e.g. task automation. But those languages often don't have enough power to solve large problems, e.g. GUI programming. Conversely, languages that are powerful enough for big problems often require far too much overhead for small problems (a small illustration of this overhead follows this post). This characteristic is important because problems that look small at first frequently grow in scope in unexpected ways. If a programmer chooses a language appropriate only for small tasks, scope changes can require rewriting code from scratch in a new language. And if the programmer chooses a language with lots of overhead and friction to solve a problem that stays small, it will be harder for other people to use and understand than necessary. Rewriting code that works fine is the single most wasteful thing a programmer can do with their time, but using a bazooka to kill a mosquito instead of a flyswatter isn't good either.

    Here are some of the ways this characteristic presents itself:
    Can be used interactively - there is some environment where programmers can enter commands one by one
    Requires no more than one file - neither project files nor makefiles are required for running in batch mode
    Can easily split code across multiple files - files can reference each other, or there is some support for modules
    Has good support for data structures - supports structures like arrays, lists, and especially classes
    Supports a wide variety of features - features like networking, serialization, XML, and database connectivity are supported by standard libraries

    Here's my take on how C#, Python, and shell scripting measure up. Python scores highest.

    Feature           C#          Python      shell scripting
    ---------------   ---------   ---------   ---------------
    Interactive       poor        strong      strong
    One file          poor        strong      strong
    Multiple files    strong      strong      moderate
    Data structures   strong      strong      poor
    Features          strong      strong      strong

    Is there a term that captures this idea? If not, what term should I use? Here are some candidates:
    Scalability - already used to describe language performance, so it's not a good idea to overload it in the context of language syntax
    Granularity - expresses the idea of being good just for big tasks versus being good for big and small tasks, but doesn't express anything about data structures
    Smoothness - expresses the idea of low friction, but doesn't express anything about strength of data structures or features

    Note: Some of these properties are more correctly described as belonging to a compiler or IDE than the language itself. Please consider these tools collectively as the language environment. My question is about how easy or difficult languages are to use, which depends on the environment as well as the language.
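
    To make the "overhead for small problems" point concrete, here is the ceremony a minimal C# console program requires, with the one-line Python equivalent shown as a comment; the program itself is purely illustrative.

        // Python equivalent: print("hello")
        using System;

        class Program
        {
            static void Main(string[] args)
            {
                Console.WriteLine("hello");
            }
        }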

    Read the article

  • TFS Rant *WARNING* negative opinions are being expressed.

    - by ryanabr
    It has happened several times now that I end up installing TFS "over the shoulder" of the system admin whose job it will be to "own" the server when I am gone. TFS is challenging enough to stand up when doing it myself on a completely open platform, but at these locations networks are locked down, machines are locked down, and the unexpected always seems to pop up. I personally have the tolerance for these things as a software developer, but as we are installing I have to listen to all the 'colorful' remarks being made: "why is it like this" or "this is a piece of crap". Generally the issues center around SharePoint integration. TFS on its own is straightforward, but the last flavor in everyone's mouth is the SharePoint piece.

    As a product I like SharePoint, but installation is a nightmare. In this particular case, we are going to use WSS since the customer would like this separate from their corporate SharePoint 2010 installations; their dev team is really small (1 developer) and it is being used as a VSS replacement more than a full-blown ALM tool. The server where it is being installed has a Cisco Security Agent on it that seems to block 'suspicious' activity and, as far as I can tell, is preventing WSS from installing properly. The most confounding thing is that we can find no meaningful log entries to help diagnose the issue. It didn't help matters that when we tried to contact Microsoft for support, because we mentioned TFS in the list of things we were trying to install, after waiting 2 hours we got a TFS support person, NOT the SharePoint person we really needed. And after another 2 hours, the SharePoint support person we did get managed to corrupt the registry sufficiently with his 'tools' that we ended up starting over from scratch the next day anyway, after going home at midnight.

    My point to this is: the system administrator who is going to own this now thinks it is a piece of crap because SharePoint wouldn't install properly. Perception is everything. Everyone today is conditioned that software installs and works in a very simple manner. When looking at the different options to install TFS with the different "modes", there is inconsistency in the information being presented, which leads to choices that cause headaches and this bad perception before the product is even installed. I am highlighting this because I love TFS as a product, but I HATE installing it, and would like it to install as simply and elegantly as the product operates once it is installed.

    Read the article

  • How to Create a Realistic Timeline for your Projects

    - by Aditi
    Developing a realistic project timeline is one of the biggest and most challenging tasks for any team. We at JustSkins have learned over time that developing and adhering to a timeline isn't easy, but it is not impossible either. From technical glitches to human resource issues, unexpected complications can come up at any time during the project life cycle. However, there are many things you can do to keep the project from going off-track. A specific timeline is a very important instrument for time-management planning and for keeping your client informed of progress. Rigid time tracking assures the client that you are committed to achieving specific project milestones on time. The more you work on varied IT projects, the more you learn about the aspects of a project, and the better you get at developing future estimates and timelines.

    Make a Structure
    When estimating the time required to accomplish each task, consider which team members will be involved, and assign the amount of time each individual must put into the project. Define scope and dependencies, and set deadlines for accomplishing them. Sometimes working in phases or modules helps you get more done in less time. Use a project management tool to systematize the collaboration between team members.

    Realistic Goal Setting
    One approach is to keep a buffer of a few days to deal with the delays, errors and incorrect-coding issues you are likely to have along the way. It is very realistic to keep the delivery date given to the client different from the internal delivery timeline. If a resource is having a hard time finishing a task in the time specified, leave some room to give him a day or two extra to accomplish it. This does not upset client delivery and is the safe way of doing projects.

    Keep an Insightful Approach
    Identify potential problems before they delay your project. To be a great IT manager you have to be honest and diplomatic at the same time; it is essential to give your clients early notice of potential delays or scope changes. In situations where delay is inevitable, you should be in a position to provide immediate, on-demand status reports. Learning from past experience is very important: keep track of the actual time spent on all aspects of your projects, as this will help you create better future estimates and timelines.

    Read the article

  • Eclipse has multiple issues after JRE-6 (OpenJDK) upgrade

    - by Eusebius
    I'm on 12.04 LTS, trying to use Eclipse Indigo. This morning Ubuntu made me update the following packages:

    Preparing to replace icedtea-6-jre-cacao 6b24-1.11.3-1ubuntu0.12.04.1 (using .../icedtea-6-jre-cacao_6b24-1.11.4-1ubuntu0.12.04.1_amd64.deb) ...
    Unpacking replacement icedtea-6-jre-cacao ...
    Preparing to replace openjdk-6-jre-lib 6b24-1.11.3-1ubuntu0.12.04.1 (using .../openjdk-6-jre-lib_6b24-1.11.4-1ubuntu0.12.04.1_all.deb) ...
    Unpacking replacement openjdk-6-jre-lib ...
    Preparing to replace icedtea-6-jre-jamvm 6b24-1.11.3-1ubuntu0.12.04.1 (using .../icedtea-6-jre-jamvm_6b24-1.11.4-1ubuntu0.12.04.1_amd64.deb) ...
    Unpacking replacement icedtea-6-jre-jamvm ...
    Preparing to replace openjdk-6-jre-headless 6b24-1.11.3-1ubuntu0.12.04.1 (using .../openjdk-6-jre-headless_6b24-1.11.4-1ubuntu0.12.04.1_amd64.deb) ...
    Unpacking replacement openjdk-6-jre-headless ...
    Preparing to replace openjdk-6-jre 6b24-1.11.3-1ubuntu0.12.04.1 (using .../openjdk-6-jre_6b24-1.11.4-1ubuntu0.12.04.1_amd64.deb) ...
    Unpacking replacement openjdk-6-jre ...

    After that (but I cannot swear it is the root cause), I have the following issues in Eclipse:

    When trying to launch the simplest HelloWorld program (which behaves fine with manual javac/java), I get either nothing or: An internal error occurred during: "Launching HelloWorld". org/eclipse/jdt/debug/core/JDIDebugModel

    I get an "Error log" tab in the console panel, with an error: Could not create the view: An unexpected exception was thrown. (followed by a long NullPointerException stack trace between sun.util.calendar.ZoneInfoFile.getZoneIDs(ZoneInfoFile.java:785) and org.eclipse.equinox.launcher.Main.main(Main.java:1386))

    When trying to access the Installed JREs part of the preferences, I get a popup saying: Unable to create the selected preference page. An error occurred while automatically activating bundle org.eclipse.jdt.debug.ui (162). And the preference tab says: An error has occurred when creating this preference page.

    Until today I had a manually installed Eclipse (one of the official bundles available on their site); I've tried to replace it with the repository version and I get the same errors. What should I do to make Eclipse work again?

    Another person reports: Same happened to me after updating last night. Already tried reinstalling Eclipse and Java, starting Eclipse with -clean and starting a new workspace and a new .eclipse dir, but nothing helps.

    Read the article

  • Why Does Ejabberd Start Fail?

    - by Andrew
    I am trying to install ejabberd 2.1.10-2 on my Ubuntu 12.04.1 server. This is a fresh install, and ejabberd is never successfully installed.

    The Install
    Every time, apt-get hangs on this:

    Setting up ejabberd (2.1.10-2ubuntu1) ...
    Generating SSL certificate /etc/ejabberd/ejabberd.pem...
    Creating config file /etc/ejabberd/ejabberd.cfg with new version
    Starting jabber server: ejabberd............................................................ failed.

    The dots just go on forever until it times out or I 'killall' the beam, beam.smp, epmd, and ejabberd processes. I've turned off all firewall restrictions. Here's the output of epmd -names while the install is hung:

    epmd: up and running on port 4369 with data:
    name ejabberdctl at port 42108
    name ejabberd at port 39621

    And after it fails:

    epmd: up and running on port 4369 with data:
    name ejabberd at port 39621

    At the same time (during and after), the output of both netstat -atnp | grep 5222 and netstat -atnp | grep 5280 is empty.

    The Crash File
    A crash dump file is created at /var/log/ejabberd/erl_crash.dump. The slogan (i.e. the reason for the crash) is:

    Slogan: Kernel pid terminated (application_controller) ({application_start_failure,kernel,{shutdown,{kernel,start,[normal,[]]}}})

    It's alive?
    Whenever I try to relaunch ejabberd with service ejabberd start, the same thing happens, even if I've killed all the processes before doing so. However, when I killall the processes listed above again and run su - ejabberd -c /usr/sbin/ejabberd, this is the output I get:

    Erlang R14B04 (erts-5.8.5) [source] [64-bit] [rq:1] [async-threads:0] [kernel-poll:false]
    Eshell V5.8.5  (abort with ^G)
    (ejabberd@ns1)1>
    =INFO REPORT==== 15-Oct-2012::12:26:13 ===
    I(<0.478.0>:ejabberd_listener:166) : Reusing listening port for 5222
    =INFO REPORT==== 15-Oct-2012::12:26:13 ===
    I(<0.479.0>:ejabberd_listener:166) : Reusing listening port for 5269
    =INFO REPORT==== 15-Oct-2012::12:26:13 ===
    I(<0.480.0>:ejabberd_listener:166) : Reusing listening port for 5280
    =INFO REPORT==== 15-Oct-2012::12:26:13 ===
    I(<0.40.0>:ejabberd_app:72) : ejabberd 2.1.10 is started in the node ejabberd@ns1

    Then, the server appears to be running. I get a login prompt when I access http://mydomain.com:5280/admin/. Of course I can't log in unless I create an account. At this time, the output of netstat -atnp | grep 5222 and netstat -atnp | grep 5280 is as follows:

    tcp        0      0 0.0.0.0:5222            0.0.0.0:*               LISTEN      19347/beam
    tcp        0      0 0.0.0.0:5280            0.0.0.0:*               LISTEN      19347/beam

    ejabberdctl
    Even when it appears ejabberd is running, trying to do anything with ejabberdctl fails. For example, trying to register a user:

    root@ns1:~# ejabberdctl register myusername mydomain.com mypassword
    Failed RPC connection to the node ejabberd@ns1: nodedown

    I have no idea what I'm doing wrong. This happens on two different servers I have with identical software installed (really not much of anything). Please help. Thanks.

    Read the article

  • How one decision can turn web services to hell

    - by DigiMortal
    In this posting I will show you how one stupid decision can turn a developer's life into hell. There is a project where a bunch of complex applications exchange data frequently, and it is very hard to change anything without additional expense. Well, one analyst thought that strings were the silver bullet of web services. Read what happened.

    Bad bad mistake
    In the early stages of the integration project there was an analyst who also established the architecture and technical design for the web services. There was one very bad mistake this analyst made: all data must be converted to strings before exchange! Yes, that's correct, this was the requirement. All integers, decimals and dates come in and go out as strings. There was also an explanation for this requirement: this way we can avoid data type conversion errors! Well, this guy works somewhere else already, and I hope he works in some burger restaurant, far away from computers.

    Consequences
    If you first look at this requirement it may seem like a little annoying piece of crap you can easily survive. But let's see the real consequences one stupid decision can cause:
    - a hell of a lot of data conversion is done by the receiving applications and SSIS packages,
    - the SSIS packages are not error-proof and they depend heavily on the strings they get from the different services,
    - there is more than one format per type in use across the different services,
    - for larger amounts of data all these conversion tasks slow down the work of the integration packages,
    - practically all developers have been in a hurry with some SSIS import tasks, and some fields that are not used in calculations in the SSAS cube are imported without data conversion (for example, some prices are strings in the format "1.021 $").
    A short sketch contrasting this string-only contract with a typed one follows this post.

    The most painful problem for developers is the data conversion part, because they don't expect such a stupid requirement and therefore are not able to estimate the time their tasks will take on these web services. Developers must also be prepared for cases when some service suddenly sends data that is not in an acceptable format, and they must solve the problems ASAP. This puts an unexpected load on developers, and they are not very happy with it because they can't understand why they have to live with this horror when it is possible to fix.

    What to do if you see something like this? Well, explain the problem to the customer and demand special tasks in the project schedule to get this mess solved before going on with new development. It is cheaper to solve the problems now than later.
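
    As mentioned above, here is a hedged C# sketch contrasting the string-only contract with a typed one. The type and member names are invented for illustration; they are not from the actual project.

        // What the "everything is a string" requirement forces on every consumer:
        public class PriceRowAllStrings
        {
            public string ProductId;   // "1042"
            public string Price;       // "1.021 $", format varies per service
            public string ValidFrom;   // "15.02.2011" or "2011-02-15", depending on the sender
        }
        // Every receiving application and SSIS package must repeat, and keep in
        // sync, the int.Parse / decimal.Parse / DateTime.Parse calls, each with
        // the right CultureInfo for whatever format that particular service uses.

        // A typed contract does the conversion once, at (de)serialization time:
        public class PriceRow
        {
            public int ProductId;
            public decimal Price;
            public System.DateTime ValidFrom;
        }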

    Read the article

  • What is the current state of Ubuntu's transition from init scripts to Upstart?

    - by Adam Eberlin
    What is the current state of Ubuntu's transition from init.d scripts to upstart? I was curious, so I compared the contents of /etc/init.d/ to /etc/init/ on one of our development machines, which is running Ubuntu 12.04 LTS Server. # /etc/init.d/ # /etc/init/ acpid acpid.conf apache2 --------------------------- apparmor --------------------------- apport apport.conf atd atd.conf bind9 --------------------------- bootlogd --------------------------- cgroup-lite cgroup-lite.conf --------------------------- console.conf console-setup console-setup.conf --------------------------- container-detect.conf --------------------------- control-alt-delete.conf cron cron.conf dbus dbus.conf dmesg dmesg.conf dns-clean --------------------------- friendly-recovery --------------------------- --------------------------- failsafe.conf --------------------------- flush-early-job-log.conf --------------------------- friendly-recovery.conf grub-common --------------------------- halt --------------------------- hostname hostname.conf hwclock hwclock.conf hwclock-save hwclock-save.conf irqbalance irqbalance.conf killprocs --------------------------- lxc lxc.conf lxc-net lxc-net.conf module-init-tools module-init-tools.conf --------------------------- mountall.conf --------------------------- mountall-net.conf --------------------------- mountall-reboot.conf --------------------------- mountall-shell.conf --------------------------- mounted-debugfs.conf --------------------------- mounted-dev.conf --------------------------- mounted-proc.conf --------------------------- mounted-run.conf --------------------------- mounted-tmp.conf --------------------------- mounted-var.conf networking networking.conf network-interface network-interface.conf network-interface-container network-interface-container.conf network-interface-security network-interface-security.conf newrelic-sysmond --------------------------- ondemand --------------------------- plymouth plymouth.conf plymouth-log plymouth-log.conf plymouth-splash plymouth-splash.conf plymouth-stop plymouth-stop.conf plymouth-upstart-bridge plymouth-upstart-bridge.conf postgresql --------------------------- pppd-dns --------------------------- procps procps.conf rc rc.conf rc.local --------------------------- rcS rcS.conf --------------------------- rc-sysinit.conf reboot --------------------------- resolvconf resolvconf.conf rsync --------------------------- rsyslog rsyslog.conf screen-cleanup screen-cleanup.conf sendsigs --------------------------- setvtrgb setvtrgb.conf --------------------------- shutdown.conf single --------------------------- skeleton --------------------------- ssh ssh.conf stop-bootlogd --------------------------- stop-bootlogd-single --------------------------- sudo --------------------------- --------------------------- tty1.conf --------------------------- tty2.conf --------------------------- tty3.conf --------------------------- tty4.conf --------------------------- tty5.conf --------------------------- tty6.conf udev udev.conf udev-fallback-graphics udev-fallback-graphics.conf udev-finish udev-finish.conf udevmonitor udevmonitor.conf udevtrigger udevtrigger.conf ufw ufw.conf umountfs --------------------------- umountnfs.sh --------------------------- umountroot --------------------------- --------------------------- upstart-socket-bridge.conf --------------------------- upstart-udev-bridge.conf urandom --------------------------- --------------------------- ureadahead.conf --------------------------- ureadahead-other.conf 
--------------------------- wait-for-state.conf whoopsie whoopsie.conf To be honest, I'm not entirely sure if I'm interpreting the division of responsibilities properly, as I didn't expect to see any overlap (of what framework handles which services). So I was quite surprised to learn that there was a significant amount of overlap in service references, in addition to being unable to discern which of the two was intended to be the primary service framework. Why does there seem to be a fair amount of redundancy in individual service handling between init.d and upstart? Is something else at play here that I'm missing? What is preventing upstart from completely taking over for init.d? Is there some functionality that certain daemons require which upstart does not yet have, which are preventing some services from converting? Or is it something else entirely?

    Read the article

< Previous Page | 83 84 85 86 87 88 89 90 91 92 93 94  | Next Page >