Search Results

Search found 22993 results on 920 pages for 'global load balancing'.

  • Java JCheckBox ArrayList help needed

    - by user2929626
    I'm new to Java and struggling with something which I'm sure must have a simple answer, but I can't seem to find it. I have an array of checkbox objects defined as:

        ArrayList<JCheckBox> checkBoxList;

    A JPanel is created with a grid layout, and the checkboxes are added to both the JPanel and the ArrayList:

        for (int i = 0; i < 256; i++) {
            JCheckBox c = new JCheckBox();
            c.setSelected(false);
            checkBoxList.add(c);
            mainPanel.add(c);
        }

    Yes, there are 256 checkboxes! The panel is added to a JFrame and eventually the GUI is displayed. The user can select any combination of the 256 checkboxes. My class implements Serializable, and this ArrayList of checkboxes can be saved and restored using 'Save' and 'Load' GUI buttons. My code to load the saved object is as below:

        public class LoadListener implements ActionListener {
            public void actionPerformed(ActionEvent a) {
                try {
                    // Prompt the user for a load file
                    JFileChooser fileLoad = new JFileChooser();
                    fileLoad.showOpenDialog(mainFrame);

                    // Create an object/file input stream linking to the selected file
                    ObjectInputStream is = new ObjectInputStream(
                        new FileInputStream(fileLoad.getSelectedFile()));

                    // Read the checkBox array list
                    checkBoxList = (ArrayList<JCheckBox>) is.readObject();
                    is.close();
                } catch (Exception ex) {
                    ex.printStackTrace();
                }
            }
        }

    On loading the ArrayList object, the values of the checkboxes are correctly populated; however, I want to update the checkboxes on the GUI to reflect this. Is there an easy way to do this? I assumed that since the array of checkboxes had the correct values I could just repaint the panel/frame, but this doesn't work. I'd like to understand why - does my loaded array of checkbox objects no longer reflect the checkbox objects on the GUI? Any help would be much appreciated. Thanks!
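    A likely explanation: deserialization creates brand-new JCheckBox instances, so the loaded list no longer points at the components sitting in the panel, and repainting cannot help. A minimal sketch of one fix, assuming the saved and on-screen lists line up by index, restores only the selection state into the existing components:

        // Read the saved list, then copy its selection states onto the
        // checkboxes that are actually displayed in the panel.
        @SuppressWarnings("unchecked")
        ArrayList<JCheckBox> loaded = (ArrayList<JCheckBox>) is.readObject();
        for (int i = 0; i < checkBoxList.size() && i < loaded.size(); i++) {
            checkBoxList.get(i).setSelected(loaded.get(i).isSelected());
        }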

  • Background loading javascript into iframe without using jQuery/Ajax?

    - by user210099
    I'm working on an offline-only help system which requires loading a large amount of search-related data into an iframe before the search functionality can be used. Due to the folder structure of the project, I am unable to use Ajax-related background load methods, since the files I need are loaded a few directories "up and over." I have written some code which delays the loading of the help data until the rest of the webpage is loaded. The help data consists of a bunch of javascript files which have information about the terms, etc. that exist in the help books which are installed on the system. The webpage works fine until I start to load this help data into a hidden iframe. While the javascript files are loading, I cannot use any of the webpage: links that require small files to be downloaded for hover-over effects don't show up, and javascript (switching tabs on the page) has no effect. I'm wondering if this is just a limitation of the way javascript works, or if there's something else going on here. Once all the files are loaded for the help system, the webpage works as expected.

        function test(){
            var MGCFrame = eval("parent.parent");
            if((ALLFRAMESLOADED == true)){
                t2 = MGCFrame.setTimeout("this.IHHeader.frames[0].loadData()",1);
            }
            else{
                t1 = MGCFrame.setTimeout("this.IHHeader.frames[0].test()",1000);
            }
        }

    loadData() simply starts the data loading process. Thanks for any help you can provide.
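    Classic <script> loading is synchronous and shares the browser's single UI thread, which matches the symptoms described. A hedged sketch of one workaround, assuming the help files can be injected as script elements (the file names below are hypothetical): dynamically created script tags fetch without blocking the page, and chaining them on onload loads one file at a time:

        // Load the help-data files one at a time; dynamically injected
        // <script> elements fetch asynchronously, so the page stays usable.
        var helpFiles = ["helpdata/terms1.js", "helpdata/terms2.js"]; // hypothetical list
        function loadNext(i) {
            if (i >= helpFiles.length) return;      // all data loaded
            var s = document.createElement("script");
            s.src = helpFiles[i];
            s.onload = function () { loadNext(i + 1); };
            document.getElementsByTagName("head")[0].appendChild(s);
        }
        loadNext(0);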

  • Python: How do you find the CPU consumption for a piece of code?

    - by Yugal Jindle
    Background: I have a Django application; it works and responds pretty well on low load, but on high load, like 100 users/sec, it consumes 100% CPU and then slows down due to lack of CPU.

    Problem: Profiling the application gives me the time taken by functions. This time increases under high load, and the time consumed may be due to complex calculation or to waiting for CPU. So, how do I find the CPU cycles consumed by a piece of code? Reducing the CPU consumption will improve the response time. I might have written extremely efficient code and need to add more CPU power, OR I might have some stupid code hogging the CPU and causing the slowdown. Any help is appreciated!

    Update: I am using JMeter to load-test my webapp; it gives me a throughput of 2 requests/sec with 100 users. I get an average time of 36 seconds on 100 requests vs 1.25 sec on 1 request.

    More info on the configuration:
    - Nginx + uWSGI with 4 workers
    - No database used; responses come from a REST API
    - On the 1st hit the response of the REST API gets cached, so it doesn't make a difference
    - Using ujson for JSON parsing

    Curious to know: Python/Django is used by so many orgs for so many big sites, so there must be some high-end debug / memory-CPU analysis tools. All those I found were casual snippets of code that perform profiling.
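    One way to separate "computing" from "waiting" is to compare process CPU time against wall-clock time around the suspect code. A minimal sketch using only the standard library (view() is a hypothetical stand-in for the Django view being measured):

        import cProfile
        import pstats
        import os
        import time

        def view():
            # stand-in for the Django view under investigation
            return sum(i * i for i in range(10 ** 6))

        # Wall time vs CPU time: a large gap means the code is waiting
        # (on I/O, the GIL, or a saturated CPU), not computing.
        wall0 = time.time()
        cpu0 = sum(os.times()[:2])          # user + system CPU seconds
        view()
        wall = time.time() - wall0
        cpu = sum(os.times()[:2]) - cpu0
        print("wall %.3fs, cpu %.3fs, waiting %.3fs" % (wall, cpu, wall - cpu))

        # Per-function CPU detail: the tottime column of a cProfile run.
        cProfile.run("view()", "profile.out")
        pstats.Stats("profile.out").sort_stats("time").print_stats(5)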

  • How to unzip an uploaded zip file?

    - by Jaydeepsinh Jadeja
    I am trying to upload a zipped file using the CodeIgniter framework with the following code:

        function do_upload() {
            $name = time();
            $config['upload_path'] = './uploadedModules/';
            $config['allowed_types'] = 'zip|rar';
            $this->load->library('upload', $config);

            if ( ! $this->upload->do_upload()) {
                $error = array('error' => $this->upload->display_errors());
                $this->load->view('upload_view', $error);
            }
            else {
                $data = array('upload_data' => $this->upload->data());
                $this->load->library('unzip');

                // Optional: Only take out these files, anything else is ignored
                $this->unzip->allow(array('css', 'js', 'png', 'gif', 'jpeg', 'jpg', 'tpl', 'html', 'swf'));

                $this->unzip->extract('./uploadedModules/'.$data['upload_data']['file_name'], './application/modules/');

                $pieces = explode(".", $data['upload_data']['file_name']);
                $title = $pieces[0];
                $status = 1;
                $core = 0;
                $this->addons_model->insertNewModule($title, $status, $core);
            }
        }

    The main problem is that when the extract function is called, it extracts the zip, but the result is an empty folder. Is there any way to overcome this problem?
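    As a cross-check that the archive itself is fine, PHP's stock ZipArchive extension can do the extraction without the third-party unzip library. A sketch, assuming the extension is enabled and reusing the upload paths from above:

        // Extract with the built-in ZipArchive instead of the unzip library.
        $zip = new ZipArchive();
        $path = './uploadedModules/' . $data['upload_data']['file_name'];
        if ($zip->open($path) === TRUE) {
            $zip->extractTo('./application/modules/');
            $zip->close();
        } else {
            log_message('error', 'Could not open uploaded archive: ' . $path);
        }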

  • How would I automate my array to be used with cURL?

    - by Rob
    I have an array containing the contents of a MySQL table. I need to put each of these contents into curl_multi_handles so that I can execute them all simultaneously. Here is the code for the array, in case it helps:

        $SQL = mysql_query("SELECT url FROM urls") or die(mysql_error());
        while($resultSet = mysql_fetch_array($SQL)){
            $urls[] = $resultSet['url'];
        }

    So I need to be able to send data to each url at the same time. I don't need to get any data back, and in fact I'll be having them time out after two seconds. It only needs to send the data and then close. My code prior to this was executing them one at a time. Here is that code:

        $SQL = mysql_query("SELECT url FROM shells") or die(mysql_error());
        while($resultSet = mysql_fetch_array($SQL)){
            // load the urls and send GET data
            $ch = curl_init($resultSet['url'] . $fullcurl);
            // Only load it for two seconds (long enough to send the data)
            curl_setopt($ch, CURLOPT_TIMEOUT, 2);
            curl_exec($ch);
            curl_close($ch);
        }

    So my question is: How can I load the contents of the array into curl_multi_handle, execute it, and then remove each handle and close the curl_multi_handle?
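    A sketch of the standard curl_multi pattern, reusing $urls and $fullcurl from above; curl_multi_exec drives all registered handles concurrently until none are still running:

        // Create one handle per URL and register it with the multi handle.
        $mh = curl_multi_init();
        $handles = array();
        foreach ($urls as $url) {
            $ch = curl_init($url . $fullcurl);
            curl_setopt($ch, CURLOPT_TIMEOUT, 2);           // give up after 2s
            curl_setopt($ch, CURLOPT_RETURNTRANSFER, true); // don't echo responses
            curl_multi_add_handle($mh, $ch);
            $handles[] = $ch;
        }

        // Pump the transfers until every handle has finished or timed out.
        $running = 0;
        do {
            curl_multi_exec($mh, $running);
            curl_multi_select($mh, 1.0);  // wait for activity instead of spinning
        } while ($running > 0);

        // Unregister and close everything.
        foreach ($handles as $ch) {
            curl_multi_remove_handle($mh, $ch);
            curl_close($ch);
        }
        curl_multi_close($mh);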

  • Mine phrases (up to 3 words) from a given text

    - by DS_web_developer
    I asked before for a simple solution to my problem (using the Sphinx search service) but got nowhere... someone has kindly provided me with this code:

        <?php
        /**
         * $Project: GeoGraph $
         * $Id$
         *
         * GeoGraph geographic photo archive project
         * This file copyright (C) 2005 Barry Hunter ([email protected])
         *
         * This program is free software; you can redistribute it and/or
         * modify it under the terms of the GNU General Public License
         * as published by the Free Software Foundation; either version 2
         * of the License, or (at your option) any later version.
         *
         * This program is distributed in the hope that it will be useful,
         * but WITHOUT ANY WARRANTY; without even the implied warranty of
         * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
         * GNU General Public License for more details.
         *
         * You should have received a copy of the GNU General Public License
         * along with this program; if not, write to the Free Software
         * Foundation, Inc., 59 Temple Place - Suite 330, Boston, MA 02111-1307, USA.
         */

        /**
         * Provides the methods for updating the worknet tables
         *
         * @package Geograph
         * @author Barry Hunter <[email protected]>
         * @version $Revision$
         */

        function addTwoLetterPhrase($phrase) {
            global $w2;
            $w2[$phrase] = (isset($w2[$phrase])) ? ($w2[$phrase] + 1) : 1;
        }

        function addThreeLetterPhrase($phrase) {
            global $w3;
            $w3[$phrase] = (isset($w3[$phrase])) ? ($w3[$phrase] + 1) : 1;
        }

        function updateWordnet(&$db, $text, $field, $id) {
            global $w1, $w2, $w3;
            $alltext = strtolower(preg_replace('/\W+/', ' ', str_replace("'", '', $text)));
            if (strlen($text) < 1) return;
            $words = preg_split('/ /', $alltext);
            $w1 = array();
            $w2 = array();
            $w3 = array();

            // build a list of one word phrases
            foreach ($words as $word) {
                $w1[$word] = (isset($w1[$word])) ? ($w1[$word] + 1) : 1;
            }

            // build a list of two word phrases
            $text = $alltext;
            $text = preg_replace('/(\w+) (\w+)/e', 'addTwoLetterPhrase("$1 $2")', $text);
            $text = $alltext;
            $text = preg_replace('/(\w+)/', '', $text, 1);
            $text = preg_replace('/(\w+) (\w+)/e', 'addTwoLetterPhrase("$1 $2")', $text);

            // build a list of three word phrases
            $text = $alltext;
            $text = preg_replace('/(\w+) (\w+) (\w+)/e', 'addThreeLetterPhrase("$1 $2 $3")', $text);
            $text = $alltext;
            $text = preg_replace('/(\w+)/', '', $text, 1);
            $text = preg_replace('/(\w+) (\w+) (\w+)/e', 'addThreeLetterPhrase("$1 $2 $3")', $text);
            $text = $alltext;
            $text = preg_replace('/(\w+) (\w+)/', '', $text, 1);
            $text = preg_replace('/(\w+) (\w+) (\w+)/e', 'addThreeLetterPhrase("$1 $2 $3")', $text);

            foreach ($w1 as $word => $count) {
                $db->Execute("insert into wordnet1 set gid = $id, words = '$word', $field = $count");
                // ON DUPLICATE KEY UPDATE $field=$field+$count
            }
            foreach ($w2 as $word => $count) {
                $db->Execute("insert into wordnet2 set gid = $id, words = '$word', $field = $count");
            }
            foreach ($w3 as $word => $count) {
                $db->Execute("insert into wordnet3 set gid = $id, words = '$word', $field = $count");
            }
        }
        ?>

    It works fine and does almost exactly what I need... except that it is not UTF-8 friendly. It splits whole words into parts (on special characters) where it shouldn't! So my guess is I should use multibyte functions instead of regular preg_replace. I tried to replace preg_replace with mb_ereg_replace, but it is not working as it should - at least not for the 2- and 3-word phrases. Any ideas?
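    The usual fix here is not mb_ereg_replace but PCRE's u (UTF-8) modifier together with Unicode character classes, so the \w-style matching stops breaking words on accented letters. A sketch of the word splitting and two-word counting, which as a side effect also drops the deprecated /e modifier:

        <?php
        // UTF-8-safe tokenizing: \p{L}/\p{N} match letters and digits in any
        // script, and /u makes PCRE treat the subject as UTF-8.
        $alltext = mb_strtolower(preg_replace('/[^\p{L}\p{N}]+/u', ' ', $text), 'UTF-8');
        $words = preg_split('/ /u', trim($alltext), -1, PREG_SPLIT_NO_EMPTY);

        // Counting phrases with a sliding window replaces the /e-modifier
        // regex callbacks entirely; the same loop extends to 3-word phrases.
        $w2 = array();
        for ($i = 0; $i + 1 < count($words); $i++) {
            $phrase = $words[$i] . ' ' . $words[$i + 1];
            $w2[$phrase] = isset($w2[$phrase]) ? $w2[$phrase] + 1 : 1;
        }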

  • Data Integration Solution?

    - by Shlomo
    At my company we have a number of data feeds and processing jobs that run on any given day. The number of feeds and processing steps is starting to outnumber our ability to manage them ad hoc, as they are managed currently. Is there a good solution that helps with logging and managing/scheduling dependencies? For example:

    A: When file x is FTP-dropped into directory D1, kick off processing step B
    B: Load flat file into DB1
    C: When file y is FTP-dropped into directory D2, kick off processing step D
    D: Load flat file into DB11
    E: When B and D are done, churn through the data, and load new data into DB111
    F: When step E is done, launch application process P
    G: etc...

    I want those steps to run at the appropriate times. Not to mention, if B fails there's no reason to run steps E & F, but I could still run C & D. When I re-run B successfully, it should trigger just E & F to re-run, not C & D. We're a .NET/C#/SQL Server shop, and I'm already familiar with SSIS. Is that really the best there is? It manages steps well, but not external dependencies or logging. Open source (.NET) preferred, but not required.

  • No result when Rally.data.WsapiDataStore lacks permissions

    - by user1195996
    I'm calling Ext.create('Rally.data.WsapiDataStore', params) and looking for results with the load event. I'm requesting a number of objects across programs that the user may or may not have read permission for. This works fine for queries where the user has permissions. But in the case where the user does not have permission, and presumably gets zero results back, the load event does not seem to fire at all. I would expect it to fire with the unsuccessful flag, or else to return with empty results. Since I don't know that the request has failed, my program waits and waits. How can I tell if this request fails because of security? BTW, looking at the network stats, I believe all my requests get a "200 OK" status back.

    Here is the method I use to create the various data stores:

        _createDataStore: function(params) {
            this.openRequests++;
            var createParams = {
                model: params.type,
                autoLoad: true,
                // So I can later determine which query type it is, and which program
                requestType: params.requestType == undefined ? params.type : params.requestType,
                program: this.program,
                listeners: {
                    load: this._onDataLoaded,
                    scope: this
                },
                filters: params.filters,
                fetch: params.fetch,
                context: {
                    project: this.project,
                    projectScopeUp: false,
                    projectScopeDown: true
                },
                pageSize: 1 // We only need the count
            };
            console.log('_createDataStore', this.program, createParams.requestType);
            Ext.create('Rally.data.WsapiDataStore', createParams);
        },

    And here is the _onDataLoaded method:

        _onDataLoaded: function(store, data, successB) {
            console.log('_onDataLoaded', this.program, successB);
            ...

    I only see this function called for those queries for which the account has permissions.
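    If no failure event is available in the SDK version at hand, one defensive workaround is to race each store against a timer, so a request that never reports back is eventually treated as failed. A sketch using standard Ext JS utilities and the createParams object built above (the 10-second budget is an arbitrary assumption):

        var me = this, answered = false;

        // If the load event never fires, synthesize a failure after 10s
        // so the bookkeeping in _onDataLoaded still runs.
        Ext.defer(function() {
            if (!answered) {
                answered = true;
                me._onDataLoaded(null, [], false);
            }
        }, 10000);

        createParams.listeners = {
            load: function(store, data, success) {
                if (!answered) {
                    answered = true;
                    me._onDataLoaded(store, data, success);
                }
            }
        };
        Ext.create('Rally.data.WsapiDataStore', createParams);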

  • I'm writing a diagnostic app for iOS that loads a predetermined set of webpages and records the time it takes for the page to render on the device.

    - by user1754840
    I'm writing a sort of diagnostic app for iOS that opens a predetermined list of websites and records the elapsed time it takes each to load. I have the app open a UIWebView within a ViewController. Here are the important bits of the ViewController source:

        - (void)viewDidLoad {
            [super viewDidLoad];
            DataClass *obj = [DataClass getInstance];
            obj.startOfTest = [NSDate date];

            // load the first webpage (assume websites is already populated
            // and counter is initially set to zero)
            NSString *urlString = [websites objectAtIndex:obj.counter];
            obj.counter = obj.counter + 1;
            NSURL *url = [NSURL URLWithString:urlString];
            NSURLRequest *request = [NSURLRequest requestWithURL:url];
            [obj.websiteStartTimes addObject:[NSDate date]];
            [webView loadRequest:request];
        }

        - (void)webViewDidFinishLoading:(UIWebView *)localWebView {
            DataClass *obj = [DataClass getInstance]; // gets 'global' variables
            if (!webView.loading) {
                NSString *urlString = [websites objectAtIndex:obj.counter];
                obj.counter = obj.counter + 1;
                NSURL *url = [NSURL URLWithString:urlString];
                NSURLRequest *request = [NSURLRequest requestWithURL:url];
                [obj.websiteStartTimes addObject:[NSDate date]];
                [webView loadRequest:request];
            }
        }

    The problem with this code is that it seems to load the next website before the one before it has finished. I would have thought that both the call to webViewDidFinishLoading AND the if statement within it would ensure that the website was done, but that's not the case. I've noticed that sometimes a single website will invoke the didFinishLoading method more than once, but it will only enter the if statement once. For example, if I have a list of ten websites, the webView will only really show the 3rd and the 6th website on the list and then indicate that it was "done" rendering them all. What else can I do to ensure that a website is done loading completely and rendered to the screen before the app moves on to the next one?
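    Worth noting: the UIWebViewDelegate callback is named webViewDidFinishLoad:, and it fires once per frame rather than once per page. A sketch of the delegate with both points applied (websiteEndTimes is a hypothetical addition, mirroring the start times):

        // Exact delegate selector; fires for every frame, so only advance
        // once the top-level page has genuinely finished loading.
        - (void)webViewDidFinishLoad:(UIWebView *)webView {
            if (webView.loading) {
                return; // a sub-frame finished; the main page is still going
            }
            DataClass *obj = [DataClass getInstance];
            [obj.websiteEndTimes addObject:[NSDate date]]; // hypothetical
            if (obj.counter < [websites count]) {
                NSString *urlString = [websites objectAtIndex:obj.counter];
                obj.counter = obj.counter + 1;
                [obj.websiteStartTimes addObject:[NSDate date]];
                [webView loadRequest:[NSURLRequest requestWithURL:
                    [NSURL URLWithString:urlString]]];
            }
        }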

  • Using FiddlerCore to capture HTTP Requests with .NET

    - by Rick Strahl
    Over the last few weeks I've been working on my Web load testing utility West Wind WebSurge. One of the key components of a load testing tool is the ability to capture URLs effectively so that you can play them back later under load. One of the options in WebSurge for capturing URLs is to use its built-in capture tool, which acts as an HTTP proxy to capture any HTTP and HTTPS traffic from most Windows HTTP clients, including Web browsers as well as standalone Windows applications and services.

    To make this happen, I used Eric Lawrence's awesome FiddlerCore library, which provides most of the functionality of his desktop Fiddler application, all rolled into an easy-to-use library that you can plug into your own applications. FiddlerCore makes it almost too easy to capture HTTP content!

    For WebSurge I needed to capture all HTTP traffic in order to capture the full HTTP request - URL, headers and any content posted by the client. The result of what I ended up creating is a semi-generic capture form. In this post I'm going to demonstrate how easy it is to use FiddlerCore to build this HTTP capture form. If you want to jump right in, here are the links to get Telerik's FiddlerCore and the code for the demo provided here:

    - FiddlerCore Download
    - FiddlerCore on NuGet
    - Show me the Code (WebSurge Integration code from GitHub)
    - Download the WinForms Sample Form
    - West Wind WebSurge (example implementation in live app)

    Note that FiddlerCore is bound by a license for commercial usage - see license.txt in the FiddlerCore distribution for details.

    Integrating FiddlerCore

    FiddlerCore is a library that simply plugs into your application. You can download it from the Telerik site and manually add the assemblies to your project, or you can simply install the NuGet package via:

        PM> Install-Package FiddlerCore

    The library consists of FiddlerCore.dll as well as a couple of support libraries (CertMaker.dll and BCMakeCert.dll) that are used for installing SSL certificates. I'll have more on SSL captures and certificate installation later in this post. But first let's see how easy it is to use FiddlerCore to capture HTTP content by looking at how to build the capture form described above.

    Capturing HTTP Content

    Once the library is installed it's super easy to hook up Fiddler functionality. Fiddler includes a number of static class methods on the FiddlerApplication object that can be called to hook up callback events as well as to actually start monitoring HTTP URLs. In the following code, directly lifted from WebSurge, I configure a few filter options on the form-level object from the user inputs shown on the form, by assigning them to a capture options object. In the live application these settings are persisted configuration values, but in the demo they are one-time values initialized and set on the form.
    Once these options are set, I hook up the AfterSessionComplete event to capture every URL that passes through the proxy after the request is completed, and start up the proxy service:

        void Start()
        {
            if (tbIgnoreResources.Checked)
                CaptureConfiguration.IgnoreResources = true;
            else
                CaptureConfiguration.IgnoreResources = false;

            string strProcId = txtProcessId.Text;
            if (strProcId.Contains('-'))
                strProcId = strProcId.Substring(strProcId.IndexOf('-') + 1).Trim();
            strProcId = strProcId.Trim();

            int procId = 0;
            if (!string.IsNullOrEmpty(strProcId))
            {
                if (!int.TryParse(strProcId, out procId))
                    procId = 0;
            }
            CaptureConfiguration.ProcessId = procId;
            CaptureConfiguration.CaptureDomain = txtCaptureDomain.Text;

            FiddlerApplication.AfterSessionComplete += FiddlerApplication_AfterSessionComplete;
            FiddlerApplication.Startup(8888, true, true, true);
        }

    The key lines for FiddlerCore are just the last two lines of code: the event hookup and the Startup() method call. Here I only hook up to the AfterSessionComplete event, but there are a number of other events that hook various stages of the HTTP request cycle, including BeforeRequest, BeforeResponse, RequestHeadersAvailable, ResponseHeadersAvailable and so on.

    In my case I want to capture the request data, and I actually have several options for capturing it. AfterSessionComplete is the last event that fires in the request sequence, and it's the most common choice to capture all request and response data. I could have used several other events, but AfterSessionComplete is one place where you can look at both the request and response data, so it is the most common place to hook into if you're capturing content.

    The implementation of AfterSessionComplete is responsible for capturing all HTTP request headers, and it looks something like this:

        private void FiddlerApplication_AfterSessionComplete(Session sess)
        {
            // Ignore HTTPS connect requests
            if (sess.RequestMethod == "CONNECT")
                return;

            if (CaptureConfiguration.ProcessId > 0)
            {
                if (sess.LocalProcessID != 0 && sess.LocalProcessID != CaptureConfiguration.ProcessId)
                    return;
            }

            if (!string.IsNullOrEmpty(CaptureConfiguration.CaptureDomain))
            {
                if (sess.hostname.ToLower() != CaptureConfiguration.CaptureDomain.Trim().ToLower())
                    return;
            }

            if (CaptureConfiguration.IgnoreResources)
            {
                string url = sess.fullUrl.ToLower();

                var extensions = CaptureConfiguration.ExtensionFilterExclusions;
                foreach (var ext in extensions)
                {
                    if (url.Contains(ext))
                        return;
                }

                var filters = CaptureConfiguration.UrlFilterExclusions;
                foreach (var urlFilter in filters)
                {
                    if (url.Contains(urlFilter))
                        return;
                }
            }

            if (sess == null || sess.oRequest == null || sess.oRequest.headers == null)
                return;

            string headers = sess.oRequest.headers.ToString();
            var reqBody = sess.GetRequestBodyAsString();

            // if you wanted to capture the response
            //string respHeaders = session.oResponse.headers.ToString();
            //var respBody = session.GetResponseBodyAsString();

            // replace the HTTP line to inject full URL
            string firstLine = sess.RequestMethod + " " + sess.fullUrl + " " + sess.oRequest.headers.HTTPVersion;
            int at = headers.IndexOf("\r\n");
            if (at < 0)
                return;
            headers = firstLine + "\r\n" + headers.Substring(at + 1);

            string output = headers + "\r\n" +
                            (!string.IsNullOrEmpty(reqBody) ? reqBody + "\r\n" : string.Empty) +
                            Separator + "\r\n\r\n";

            BeginInvoke(new Action<string>((text) =>
            {
                txtCapture.AppendText(text);
                UpdateButtonStatus();
            }), output);
        }

    The code starts by filtering out some requests based on the CaptureOptions I set before the capture was started. These options/filters are applied when requests actually come in. This is very useful to help narrow down the requests that are captured for playback, based on options the user picked. I find it useful to limit requests to a certain domain for captures, as well as filtering out some request types like static resources - images, css, scripts etc. This is of course optional, but I think it's a common scenario and WebSurge makes good use of this feature.

    AfterSessionComplete, like other FiddlerCore events, provides a Session object parameter which contains all the request and response details. There are oRequest and oResponse objects to hold their respective data. In my case I'm interested in the raw request headers and body only; as you can see in the commented code, you can also retrieve the response headers and body. Here the code captures the request headers and body and simply appends the output to the textbox on the screen. Note that the Fiddler events are asynchronous, so in order to display the content in the UI they have to be marshaled back to the UI thread with BeginInvoke, which here simply takes the generated headers and appends them to the existing textbox text on the form.

    As each request is processed, the headers are captured and appended to the bottom of the textbox, resulting in a session HTTP capture in the format that WebSurge internally supports: raw request headers with a customized first HTTP header line that includes the full URL rather than a server-relative URL. When the capture is done, the user can either copy the raw HTTP session to the clipboard or save it directly to file. This raw capture format is the same format WebSurge and also Fiddler use to import/export request data.

    While this code is application-specific, it demonstrates the kind of logic that you can easily apply to the request capture process, which is one of the reasons why FiddlerCore is so powerful. You get to choose what content you want to look at as part of your own application logic, and you can then decide how to capture or use that data as part of your application. The actual captured data in this case is only a string. The user can edit the data by hand or, in the case of WebSurge, save it to disk and automatically open the captured session as a new load test.

    Stopping the FiddlerCore Proxy

    Finally, to stop capturing requests, you simply disconnect the event handler and call the FiddlerApplication.Shutdown() method:

        void Stop()
        {
            FiddlerApplication.AfterSessionComplete -= FiddlerApplication_AfterSessionComplete;

            if (FiddlerApplication.IsStarted())
                FiddlerApplication.Shutdown();
        }

    As you can see, adding HTTP capture functionality to an application is very straightforward. FiddlerCore offers tons of features I'm not even touching on here - I suspect basic captures are the most common scenario, but a lot of different things can be done with FiddlerCore's simple API interface. Sky's the limit! The source code for this sample capture form (WinForms) is provided as part of this article.
    Adding Fiddler Certificates with FiddlerCore

    One of the sticking points in West Wind WebSurge has been that if you wanted to capture HTTPS/SSL traffic, you needed to have the full version of Fiddler with HTTPS decryption enabled. Essentially you had to use Fiddler to configure HTTPS decryption and the associated installation of the Fiddler local client certificate that is used for local decryption of incoming SSL traffic. While this works just fine, requiring Fiddler to be installed and then using a separate application to configure the SSL functionality isn't ideal. Fortunately, FiddlerCore actually includes the tools to register the Fiddler certificate directly.

    Why does Fiddler need a Certificate in the first Place?

    Fiddler and FiddlerCore are essentially HTTP proxies, which means they inject themselves into the HTTP conversation by re-routing HTTP traffic to a special HTTP port (8888 by default for Fiddler) and then forwarding the HTTP data onward. Fiddler injects itself as the system proxy using the WinInet Windows settings, which are the same settings that Internet Explorer uses and that are configured in the Windows and Internet Explorer Internet Settings dialog. Most HTTP clients running on Windows pick up and apply these system-level proxy settings before establishing new HTTP connections, and that's why most clients automatically work once Fiddler - or FiddlerCore/WebSurge - is running.

    For plain HTTP requests this just works: Fiddler intercepts the HTTP requests on the proxy port and then forwards them to the original port (80 for HTTP and 443 for SSL typically, but it could be any port). For SSL, however, this is not quite as simple. Fiddler can easily act as an HTTPS/SSL client to capture inbound requests from the server, but when it forwards the request to the client it has to also act as an SSL server and provide a certificate that the client trusts. This won't be the original certificate from the remote site, but rather a custom local certificate that effectively simulates an SSL connection between the proxy and the client. If there is no custom certificate configured for Fiddler, the SSL request fails with a certificate validation error. The key for this to work is that a custom certificate has to be installed that the HTTPS client trusts on the local machine. For a much more detailed description of the process you can check out Eric Lawrence's blog post on certificates.

    If you're using the desktop version of Fiddler, you can install a local certificate into the Windows certificate store; Fiddler proper does this from the Options menu. This operation does several things:

    - It installs the Fiddler Root Certificate
    - It sets trust to this Root Certificate
    - A new client certificate is generated for each HTTPS site monitored

    Certificate Installation with FiddlerCore

    You can also provide this same functionality using FiddlerCore, which includes a CertMaker class.
    CertMaker is straightforward to use and provides an easy way to create some simple helpers that can install and uninstall a Fiddler root certificate:

        public static bool InstallCertificate()
        {
            if (!CertMaker.rootCertExists())
            {
                if (!CertMaker.createRootCert())
                    return false;

                if (!CertMaker.trustRootCert())
                    return false;
            }
            return true;
        }

        public static bool UninstallCertificate()
        {
            if (CertMaker.rootCertExists())
            {
                if (!CertMaker.removeFiddlerGeneratedCerts(true))
                    return false;
            }
            return true;
        }

    InstallCertificate() works by first checking whether the root certificate is already installed and, if it isn't, goes ahead and creates a new one. Creating the certificate is a two-step process: first the actual certificate is created and then it's moved into the certificate store to become trusted. I'm not sure why you'd ever split these operations up, since a cert created without trust isn't going to be of much value, but there are two distinct steps.

    When you trigger the trustRootCert() method, a message box will pop up on the desktop that lets you know that you're about to trust a local private certificate. This is a security feature to ensure that you really want to trust the Fiddler root, since you are essentially installing a man-in-the-middle certificate. It's quite safe to use this generated root certificate, because it's been specifically generated for your machine and thus is not usable from external sources; the only way to use this certificate in a trusted way is from the local machine. IOW, unless somebody has physical access to your machine, there's no useful way to hijack this certificate and use it for nefarious purposes (see Eric's post for more details).

    Once the root certificate has been installed, FiddlerCore/Fiddler create new certificates for each site that is connected to with HTTPS, so you can end up with quite a few temporary certificates in your certificate store. To uninstall, you can either use Fiddler and simply uncheck the Decrypt HTTPS traffic option followed by the remove Fiddler certificates button, or you can use FiddlerCore's CertMaker.removeFiddlerGeneratedCerts(), which removes the root cert and any of the intermediary certificates Fiddler created. Keep in mind that when you uninstall, you uninstall the certificate for both FiddlerCore and Fiddler, so use UninstallCertificate() with care and realize that you might affect the Fiddler application's operation by doing so as well.

    When to check for an installed Certificate

    Note that the check to see if the root certificate exists is pretty fast, while the actual process of installing the certificate is relatively slow and takes a few seconds even on a fast machine. Further, the trust operation pops up a message box, so you probably don't want to install the certificate repeatedly. Since the check for the root certificate is fast, you can easily put a call to InstallCertificate() in any capture startup code, in which case the certificate installation only triggers when a certificate is in fact not installed. Personally I like to make certificate installation explicit - just like Fiddler does - so in WebSurge I use a small drop-down option on the menu to install or uninstall the SSL certificate. This code calls the InstallCertificate() and UninstallCertificate() functions respectively; the experience is similar to what you get in Fiddler, with the extra dialog box popping up to prompt confirmation for installation of the root certificate.
    Once the cert is installed you can then capture SSL requests. There's a gotcha, however...

    Gotcha: FiddlerCore Certificates don't stick by Default

    When I originally tried to use the Fiddler certificate installation I ran into an odd problem. I was able to install the certificate, and immediately after installation I was able to capture HTTPS requests. Then I would exit the application, come back in, try the same HTTPS capture again, and it would fail due to a missing certificate. CertMaker.rootCertExists() would return false after every restart, and if I re-installed the certificate, a new certificate would get added to the certificate store, resulting in a bunch of duplicated root certificates with different keys. What the heck?

    CertMaker and BCMakeCert create non-sticky Certificates

    It turns out that FiddlerCore by default uses different components from what the full version of Fiddler uses. Fiddler uses a Windows utility called MakeCert.exe to create the Fiddler root certificate. FiddlerCore, however, installs the CertMaker.dll and BCMakeCert.dll assemblies, which use a different crypto library (Bouncy Castle) for certificate creation than MakeCert.exe, which uses the Windows Crypto API. The assemblies provide support for non-Windows operation of Fiddler under Mono, as well as support for some non-Windows certificate platforms like iOS and Android for decryption.

    The bottom line is that the FiddlerCore-provided Bouncy Castle assemblies are not sticky by default, because the certificates created with them are not cached as they are in Fiddler proper. To get certificates to 'stick' you have to explicitly cache them in Fiddler's internal preferences. A cache-aware version of InstallCertificate() looks something like this:

        public static bool InstallCertificate()
        {
            if (!CertMaker.rootCertExists())
            {
                if (!CertMaker.createRootCert())
                    return false;

                if (!CertMaker.trustRootCert())
                    return false;

                App.Configuration.UrlCapture.Cert =
                    FiddlerApplication.Prefs.GetStringPref("fiddler.certmaker.bc.cert", null);
                App.Configuration.UrlCapture.Key =
                    FiddlerApplication.Prefs.GetStringPref("fiddler.certmaker.bc.key", null);
            }
            return true;
        }

        public static bool UninstallCertificate()
        {
            if (CertMaker.rootCertExists())
            {
                if (!CertMaker.removeFiddlerGeneratedCerts(true))
                    return false;
            }
            App.Configuration.UrlCapture.Cert = null;
            App.Configuration.UrlCapture.Key = null;
            return true;
        }

    In this code I store the Fiddler cert and private key in application configuration settings that are stored with the application settings (the App.Configuration.UrlCapture object). These settings automatically persist when WebSurge is shut down. The values are read out of Fiddler's internal preferences store, which is set after a new certificate has been created. Likewise, I clear out the configuration settings when the certificate is uninstalled. In order for these settings to be used, you also have to load the configuration settings into the Fiddler preferences *before* a call to rootCertExists() is made.
    I do this in the capture form's constructor:

        public FiddlerCapture(StressTestForm form)
        {
            InitializeComponent();
            CaptureConfiguration = App.Configuration.UrlCapture;
            MainForm = form;

            if (!string.IsNullOrEmpty(App.Configuration.UrlCapture.Cert))
            {
                FiddlerApplication.Prefs.SetStringPref("fiddler.certmaker.bc.key",
                    App.Configuration.UrlCapture.Key);
                FiddlerApplication.Prefs.SetStringPref("fiddler.certmaker.bc.cert",
                    App.Configuration.UrlCapture.Cert);
            }
        }

    This is kind of a drag to do, and not documented anywhere that I could find, so hopefully this will save you some grief if you want to work with the stock certificate logic that installs with FiddlerCore.

    MakeCert provides sticky Certificates and the same functionality as Fiddler

    But there's actually an easier way. If you want to skip the above Fiddler preference configuration code in your application, you can choose to distribute MakeCert.exe instead of CertMaker.dll and BCMakeCert.dll. When you use MakeCert.exe, the certificate settings are stored in Windows, so they are available without any custom configuration inside your application. It's easier to integrate, and as long as you run on Windows and don't need to support iOS or Android devices, it's simply easier to deal with.

    To integrate it into your project, remove the reference to CertMaker.dll (and the BCMakeCert.dll assembly) from your project. Instead, copy MakeCert.exe into your output folder. To make sure MakeCert.exe gets pushed out, include MakeCert.exe in your project and set the Build Action to None and Copy to Output Directory to Copy if newer. At that point the CertMaker.dll reference is gone from the project, as are the CertMaker.dll and BCMakeCert.dll files on disk. Keep in mind that these DLLs are resources of the FiddlerCore NuGet package, so updating the package may end up pushing those files back into your project. Once MakeCert.exe is distributed, FiddlerCore checks for it first before using the assemblies, so as long as MakeCert.exe exists it'll be used for certificate creation (at least on Windows).

    Summary

    FiddlerCore is a pretty sweet tool, and it's absolutely awesome that we get to plug most of the functionality of Fiddler right into our own applications. A few years back I tried to build this sort of functionality myself for an app and ended up giving up, because it's a big job to get HTTP right - especially if you need to support SSL. FiddlerCore now provides that functionality as a turnkey solution that can be plugged into your own apps easily.

    The only downside is FiddlerCore's documentation for more advanced features like certificate installation, which is pretty sketchy. While for the most part FiddlerCore's feature set is easy to work with without any documentation, advanced features are often not intuitive to glean by just using IntelliSense or the FiddlerCore help file reference (which is not terribly useful). While Eric Lawrence is very responsive on his forum and on Twitter, there simply isn't much useful documentation on Fiddler/FiddlerCore available online. If you run into trouble, the forum is probably the first place to look and then ask a question if you can't find the answer.

    The best documentation you can find is Eric's Fiddler Book, which covers a ton of the functionality of Fiddler and FiddlerCore. The book is a great reference to Fiddler's feature set as well as providing great insights into the HTTP protocol.
    The second half of the book, which gets into the innards of HTTP, is an excellent read for anybody who wants to know more about some of the more arcane aspects and special behaviors of HTTP - it's well worth the read. While the book has tons of information in a very readable format, it's unfortunately not a great reference, as it's hard to find things in the book, and because it's not available online you can't electronically search for the great content in it.

    But it's hard to complain about any of this given the obvious effort and love that's gone into this awesome product for all of these years. A mighty big thanks to Eric Lawrence for having created this useful tool that so many of us use all the time, and also to Telerik for picking up Fiddler/FiddlerCore and providing Eric the resources to support and improve this wonderful tool full time while keeping it free for all. Kudos!

    Resources

    - FiddlerCore Download
    - FiddlerCore NuGet
    - Fiddler Capture Sample Form
    - Fiddler Capture Form in West Wind WebSurge (GitHub)
    - Eric Lawrence's Fiddler Book

    © Rick Strahl, West Wind Technologies, 2005-2014. Posted in .NET, HTTP.

  • SignalR Auto Disconnect when Page Changed in AngularJS

    - by Shaun
    Originally posted on: http://geekswithblogs.net/shaunxu/archive/2014/05/30/signalr-auto-disconnect-when-page-changed-in-angularjs.aspx

    If we are using SignalR, the connection lifecycle is handled by itself very well. For example, when we connect to a SignalR service from the browser through the SignalR JavaScript client, the connection will be established; and if we refresh the page, close the tab or browser, or navigate to another URL, then the connection will be closed automatically. This behavior is well documented:

        In a browser, SignalR client code that maintains a SignalR connection runs in the JavaScript context of a web page. That's why the SignalR connection has to end when you navigate from one page to another, and that's why you have multiple connections with multiple connection IDs if you connect from multiple browser windows or tabs. When the user closes a browser window or tab, or navigates to a new page or refreshes the page, the SignalR connection immediately ends because SignalR client code handles that browser event for you and calls the "Stop" method.

    But unfortunately this behavior doesn't work if we are using SignalR with AngularJS. AngularJS is a single page application (SPA) framework created by Google. It hijacks the browser's address change event and, based on the route table the user defined, launches the proper view and controller. Hence in AngularJS the address changes but the web page stays put; all changes to the page content are triggered by Ajax, so there are no page unload and load events. This is the reason why SignalR cannot handle disconnect correctly when working with AngularJS.

    If we dig into the source code of the SignalR JavaScript client we find something like the code below. It monitors the browser's "unload" and "beforeunload" events and sends the "stop" message to the server to terminate the connection. But in AngularJS the page change events are hijacked, so SignalR will not receive them and will not stop the connection.

        // wire the stop handler for when the user leaves the page
        _pageWindow.bind("unload", function () {
            connection.log("Window unloading, stopping the connection.");

            connection.stop(asyncAbort);
        });

        if (isFirefox11OrGreater) {
            // Firefox does not fire cross-domain XHRs in the normal unload handler on tab close.
            // #2400
            _pageWindow.bind("beforeunload", function () {
                // If connection.stop() runs in beforeunload and fails, it will also fail
                // in unload unless connection.stop() runs after a timeout.
                window.setTimeout(function () {
                    connection.stop(asyncAbort);
                }, 0);
            });
        }

    Reproducing the Problem

    In the code below I created a very simple example to demonstrate this issue. Here is the SignalR server-side code:

        public class GreetingHub : Hub
        {
            public override Task OnConnected()
            {
                Debug.WriteLine(string.Format("Connected: {0}", Context.ConnectionId));
                return base.OnConnected();
            }

            public override Task OnDisconnected()
            {
                Debug.WriteLine(string.Format("Disconnected: {0}", Context.ConnectionId));
                return base.OnDisconnected();
            }

            public void Hello(string user)
            {
                Clients.All.hello(string.Format("Hello, {0}!", user));
            }
        }

    Below is the configuration code which hosts the SignalR hub in an ASP.NET WebAPI project with IIS Express.
        public class Startup
        {
            public void Configuration(IAppBuilder app)
            {
                app.Map("/signalr", map =>
                {
                    map.UseCors(CorsOptions.AllowAll);
                    map.RunSignalR(new HubConfiguration()
                    {
                        EnableJavaScriptProxies = false
                    });
                });
            }
        }

    Since we will host the AngularJS application in Node.js in another process and on another port, the SignalR connection will be cross-domain, so I need to enable CORS above.

    On the client side I have a Node.js file to host the AngularJS application as a web server. You can use any web server you like, such as IIS, Apache, etc. Below is the "index.html" page, which contains a navigation bar so that I can change the page/state. As you can see, I added jQuery, AngularJS, the SignalR JavaScript client library, as well as my AngularJS entry source file "app.js".

        <html data-ng-app="demo">
        <head>
            <script type="text/javascript" src="jquery-2.1.0.js"></script>
            <script type="text/javascript" src="angular.js"></script>
            <script type="text/javascript" src="angular-ui-router.js"></script>
            <script type="text/javascript" src="jquery.signalR-2.0.3.js"></script>
            <script type="text/javascript" src="app.js"></script>
        </head>
        <body>
            <h1>SignalR Auto Disconnect with AngularJS by Shaun</h1>
            <div>
                <a href="javascript:void(0)" data-ui-sref="view1">View 1</a> |
                <a href="javascript:void(0)" data-ui-sref="view2">View 2</a>
            </div>
            <div data-ui-view></div>
        </body>
        </html>

    Below is "app.js". My SignalR logic is in the "View1" page, and it connects to the server once the controller is executed. The user can specify a user name and send it to the server; all clients located on this page will receive the server-side greeting message through SignalR.

        'use strict';

        var app = angular.module('demo', ['ui.router']);

        app.config(['$stateProvider', '$locationProvider', function ($stateProvider, $locationProvider) {
            $stateProvider.state('view1', {
                url: '/view1',
                templateUrl: 'view1.html',
                controller: 'View1Ctrl' });

            $stateProvider.state('view2', {
                url: '/view2',
                templateUrl: 'view2.html',
                controller: 'View2Ctrl' });

            $locationProvider.html5Mode(true);
        }]);

        app.value('$', $);
        app.value('endpoint', 'http://localhost:60448');
        app.value('hub', 'GreetingHub');

        app.controller('View1Ctrl', function ($scope, $, endpoint, hub) {
            $scope.user = '';
            $scope.response = '';

            $scope.greeting = function () {
                proxy.invoke('Hello', $scope.user)
                    .done(function () {})
                    .fail(function (error) {
                        console.log(error);
                    });
            };

            var connection = $.hubConnection(endpoint);
            var proxy = connection.createHubProxy(hub);
            proxy.on('hello', function (response) {
                $scope.$apply(function () {
                    $scope.response = response;
                });
            });
            connection.start()
                .done(function () {
                    console.log('signlar connection established');
                })
                .fail(function (error) {
                    console.log(error);
                });
        });

        app.controller('View2Ctrl', function ($scope, $) {
        });

    When we go to View1, the server-side "OnConnected" method is invoked, and when we send the message to the server from any page, all clients get the response. If we close one of the clients, the server-side "OnDisconnected" method is invoked, which is correct. But if we click the "View 2" link in the page, "OnDisconnected" is not invoked, even though the content and browser address have changed.
    This can leave many SignalR connections open between the client and server. After I clicked the "View 1" and "View 2" links four times, there were 4 live connections.

    Solution

    Since the cause of this issue is that AngularJS hijacks the page event that SignalR needs in order to stop the connection, we can handle the AngularJS route or state change event and stop the SignalR connection manually. In the code below I moved the "connection" variable to global scope, added a handler to "$stateChangeStart", and invoked the "stop" method of "connection" if its state was not "disconnected".

        var connection;
        app.run(['$rootScope', function ($rootScope) {
            $rootScope.$on('$stateChangeStart', function () {
                if (connection && connection.state && connection.state !== 4 /* disconnected */) {
                    console.log('signlar connection abort');
                    connection.stop();
                }
            });
        }]);

    Now if we refresh the page and navigate to View 1, the connection will be opened. In this state, if we click the "View 2" link, the content will be changed and the SignalR connection will be closed automatically.

    Summary

    In this post I demonstrated an issue that arises when we use SignalR with AngularJS: the connection is not closed automatically when we navigate to another page/state in AngularJS. The solution mentioned above is to make the SignalR connection a global variable and close it manually when the AngularJS route/state changes. You can download the full sample code here.

    Making the SignalR connection a global variable might not be the best solution; it's just easy to demo here. In production code I suggest wrapping all SignalR operations into an AngularJS factory. Since an AngularJS factory is a singleton object, we can safely put the connection variable in the factory function scope, as sketched below.
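    A hedged sketch of such a factory, reusing the endpoint and hub name from the sample above (the service name is hypothetical):

        // Singleton factory that owns the hub connection and stops it on
        // every ui-router state change, so no view has to manage it.
        app.factory('greetingSvc', ['$rootScope', function ($rootScope) {
            var connection = $.hubConnection('http://localhost:60448');
            var proxy = connection.createHubProxy('GreetingHub');

            $rootScope.$on('$stateChangeStart', function () {
                if (connection.state !== 4 /* disconnected */) {
                    connection.stop();
                }
            });

            return {
                start: function () { return connection.start(); },
                on: function (evt, fn) { proxy.on(evt, fn); },
                invoke: function () { return proxy.invoke.apply(proxy, arguments); }
            };
        }]);

    Hope this helps,
    Shaun

    All documents and related graphics, codes are provided "AS IS" without warranty of any kind. Copyright © Shaun Ziyan Xu. This work is licensed under the Creative Commons License.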

  • Java Logger API

    - by Koppar
    This is more of a tip than a technical write-up, and serves as a quick intro for newbies. The Logger API helps to diagnose application-level or JDK-level issues at runtime. There are 7 levels which decide the detail in logging: SEVERE, WARNING, INFO, CONFIG, FINE, FINER, FINEST. SEVERE is the highest and FINEST is the lowest. It's best to start with the highest level and, as we narrow down on a specific area, use more detailed logging. This may not make sense until we understand some jargon.

    The Logger class provides the ability to stream messages to an output stream in a format that can be controlled by the user. What this translates to is that I can create a logger with this simple invocation and use it to add debug messages in my class:

        import java.util.logging.*;

        private static final Logger focusLog =
            Logger.getLogger("java.awt.focus.KeyboardFocusManager");

        if (focusLog.isLoggable(Level.FINEST)) {
            focusLog.log(Level.FINEST, "Calling peer setCurrentFocusOwner");
        }

    LogManager acts like a bookkeeper, and all getLogger calls are forwarded to it. LogManager itself is a singleton class object which gets statically initialized on JVM start-up (more on this later). If there is no existing logger with the given name, a new one is created; if there is one (and it's not yet GC'ed), the existing Logger object is returned. By default, a root logger is created on JVM start-up. All anonymous loggers are made children of the root logger, while named loggers form a hierarchy as per their name resolution, e.g. java.awt.focus is the parent logger of java.awt.focus.KeyboardFocusManager.

    Before logging any message, the logger checks the log level specified. If null is specified, the log level of the parent logger is used. However, if the log level is OFF, no log messages are written, irrespective of the parent's level. All messages posted to the Logger are handled as LogRecord objects (i.e. focusLog.log creates a new LogRecord object with the log level and message as its data members; the logging level and thread number are also tracked). The LogRecord is passed on to all the registered Handlers.

    A Handler is basically a means to output the messages. The output may be redirected to a log file, the console, or a network logging service. The Handler classes use the LogManager properties to set filters and formatters. During initialization on JVM start-up, LogManager looks for a logging.properties file in jre/lib and applies it if the file is present. An alternate location for the properties file can be specified by setting the java.util.logging.config.file system property. This can be set in Java Control Panel > Java > Runtime parameters as

        -Djava.util.logging.config.file=<mylogfile>

    or passed as a command line parameter:

        java -Djava.util.logging.config.file=C:/Sunita/myLog

    The redirection of logging depends on what is specified - rather, registered - as a handler with the JVM in the properties file: java.util.logging.ConsoleHandler sends the output to System.err, and java.util.logging.FileHandler sends the output to a file, whose name can also be specified.
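    For completeness, the same wiring can be done in code rather than through a properties file - a minimal sketch using only standard java.util.logging calls, assuming you cannot ship a configuration file:

        import java.util.logging.*;

        // Programmatic equivalent of the properties file: attach a FINEST
        // console handler directly to the focus logger.
        Logger focusLog = Logger.getLogger("java.awt.focus.KeyboardFocusManager");
        ConsoleHandler handler = new ConsoleHandler();
        handler.setLevel(Level.FINEST);
        handler.setFormatter(new SimpleFormatter());
        focusLog.addHandler(handler);
        focusLog.setLevel(Level.FINEST);
        focusLog.setUseParentHandlers(false); // avoid duplicate output via the root handler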
    If you prefer XML-format output, in the configuration file set

        java.util.logging.FileHandler.formatter = java.util.logging.XMLFormatter

    and if you prefer simple text, set

        java.util.logging.FileHandler.formatter = java.util.logging.SimpleFormatter

    Below is the default logging configuration file:

        ############################################################
        # Default Logging Configuration File
        # You can use a different file by specifying a filename
        # with the java.util.logging.config.file system property.
        # For example java -Djava.util.logging.config.file=myfile
        ############################################################

        ############################################################
        # Global properties
        ############################################################

        # "handlers" specifies a comma separated list of log Handler
        # classes. These handlers will be installed during VM startup.
        # Note that these classes must be on the system classpath.
        # By default we only configure a ConsoleHandler, which will only
        # show messages at the INFO and above levels.
        handlers= java.util.logging.ConsoleHandler

        # To also add the FileHandler, use the following line instead.
        #handlers= java.util.logging.FileHandler, java.util.logging.ConsoleHandler

        # Default global logging level.
        # This specifies which kinds of events are logged across
        # all loggers. For any given facility this global level
        # can be overriden by a facility specific level
        # Note that the ConsoleHandler also has a separate level
        # setting to limit messages printed to the console.
        .level= INFO

        ############################################################
        # Handler specific properties.
        # Describes specific configuration info for Handlers.
        ############################################################

        # default file output is in user's home directory.
        java.util.logging.FileHandler.pattern = %h/java%u.log
        java.util.logging.FileHandler.limit = 50000
        java.util.logging.FileHandler.count = 1
        java.util.logging.FileHandler.formatter = java.util.logging.XMLFormatter

        # Limit the message that are printed on the console to INFO and above.
        java.util.logging.ConsoleHandler.level = INFO
        java.util.logging.ConsoleHandler.formatter = java.util.logging.SimpleFormatter

        ############################################################
        # Facility specific properties.
        # Provides extra control for each logger.
        ############################################################

        # For example, set the com.xyz.foo logger to only log SEVERE
        # messages:
        com.xyz.foo.level = SEVERE

    Since I primarily use this method to track focus issues, here is how I get detailed AWT focus-related logging: just set java.awt.focus.level=FINEST and change the default log level to FINEST. Below is a basic sample program. The sample programs are from http://www2.cs.uic.edu/~sloan/CLASSES/java/ and have been modified to illustrate the logging API. By changing the .level property in the logging.properties file, one can control the output written to the logs. To play around with the example, try changing the levels in the logging.properties file and notice the difference in the messages going to the log file.
Below is a basic sample program. The sample programs are from http://www2.cs.uic.edu/~sloan/CLASSES/java/ and have been modified to illustrate the logging API. By changing the .level property in the logging.properties file, you can control the output written to the logs. To experiment with the example, try changing the levels in the logging.properties file and notice the difference in the messages going to the log file.

Example

--------KeyboardReader.java-------------------------------------------------------------------------------------

import java.io.*;
import java.util.*;
import java.util.logging.*;

public class KeyboardReader
{
    private static final Logger mylog = Logger.getLogger("samples.input");

    public static void main (String[] args) throws java.io.IOException
    {
        String s1;
        String s2;
        double num1, num2, product;

        // set up the buffered reader to read from the keyboard
        BufferedReader br = new BufferedReader (new InputStreamReader (System.in));

        System.out.println ("Enter a line of input");
        s1 = br.readLine();

        if (mylog.isLoggable(Level.SEVERE))
        {
            mylog.log (Level.SEVERE, "The line entered is " + s1);
        }
        if (mylog.isLoggable(Level.INFO))
        {
            mylog.log (Level.INFO, "The line has " + s1.length() + " characters");
        }
        if (mylog.isLoggable(Level.FINE))
        {
            mylog.log (Level.FINE, "Breaking the line into tokens we get:");
        }

        int numTokens = 0;
        StringTokenizer st = new StringTokenizer (s1);
        while (st.hasMoreTokens())
        {
            s2 = st.nextToken();
            numTokens++;
            if (mylog.isLoggable(Level.FINEST))
            {
                mylog.log (Level.FINEST, "    Token " + numTokens + " is: " + s2);
            }
        }
    }
}

----------MyFileReader.java----------------------------------------------------------------------------------------

import java.io.*;
import java.util.*;
import java.util.logging.*;

public class MyFileReader extends KeyboardReader
{
    private static final Logger mylog = Logger.getLogger("samples.input.file");

    public static void main (String[] args) throws java.io.IOException
    {
        String s1;
        String s2;

        // set up the buffered reader to read from the file
        BufferedReader br = new BufferedReader (new FileReader ("MyFileReader.txt"));

        s1 = br.readLine();

        if (mylog.isLoggable(Level.SEVERE))
        {
            mylog.log (Level.SEVERE, "ATTN The line is " + s1);
        }
        if (mylog.isLoggable(Level.INFO))
        {
            mylog.log (Level.INFO, "The line has " + s1.length() + " characters");
        }
        if (mylog.isLoggable(Level.FINE))
        {
            mylog.log (Level.FINE, "Breaking the line into tokens we get:");
        }

        int numTokens = 0;
        StringTokenizer st = new StringTokenizer (s1);
        while (st.hasMoreTokens())
        {
            s2 = st.nextToken();
            numTokens++;
            if (mylog.isLoggable(Level.FINEST))
            {
                mylog.log (Level.FINEST, "Breaking the line into tokens we get:");
                mylog.log (Level.FINEST, "    Token " + numTokens + " is: " + s2);
            }
        } // end of while
    } // end of main
} // end of class

----------MyFileReader.txt------------------------------------------------------------------------------------------

My first logging example

-------logging.properties-------------------------------------------------------------------------------------------

handlers= java.util.logging.ConsoleHandler, java.util.logging.FileHandler
.level= FINEST

java.util.logging.FileHandler.pattern = java%u.log
java.util.logging.FileHandler.limit = 50000
java.util.logging.FileHandler.count = 1
java.util.logging.FileHandler.formatter = java.util.logging.SimpleFormatter

java.util.logging.ConsoleHandler.level = FINEST
java.util.logging.ConsoleHandler.formatter = java.util.logging.SimpleFormatter

java.awt.focus.level=ALL

------Output log-------------------------------------------------------------------------------------------

May 21, 2012 11:44:55 AM MyFileReader main
SEVERE: ATTN The line is My first logging example
May 21, 2012 11:44:55 AM MyFileReader main
INFO: The line has 24 characters
May 21, 2012 11:44:55 AM MyFileReader main
FINE: Breaking the line into tokens we get:
May 21, 2012 11:44:55 AM MyFileReader main
FINEST: Breaking the line into tokens we get:
May 21, 2012 11:44:55 AM MyFileReader main
FINEST: Token 1 is: My
May 21, 2012 11:44:55 AM MyFileReader main
FINEST: Breaking the line into tokens we get:
May 21, 2012 11:44:55 AM MyFileReader main
FINEST: Token 2 is: first
May 21, 2012 11:44:55 AM MyFileReader main
FINEST: Breaking the line into tokens we get:
May 21, 2012 11:44:55 AM MyFileReader main
FINEST: Token 3 is: logging
May 21, 2012 11:44:55 AM MyFileReader main
FINEST: Breaking the line into tokens we get:
May 21, 2012 11:44:55 AM MyFileReader main
FINEST: Token 4 is: example

Invocation command:

"C:\Program Files (x86)\Java\jdk1.6.0_29\bin\java.exe" -Djava.util.logging.config.file=logging.properties MyFileReader

References

Further technical details are available here:
http://docs.oracle.com/javase/1.4.2/docs/guide/util/logging/overview.html#1.0
http://docs.oracle.com/javase/1.4.2/docs/api/java/util/logging/package-summary.html
http://www2.cs.uic.edu/~sloan/CLASSES/java/
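One final note on the sample programs above: instead of wrapping every call in an isLoggable() guard, the parameterized log(Level, String, Object[]) overload defers message formatting until the level is known to be loggable. A small sketch (the class name TokenLogDemo is invented; it reuses the samples.input logger from the examples):

import java.util.logging.Level;
import java.util.logging.Logger;

public class TokenLogDemo {
    private static final Logger mylog = Logger.getLogger("samples.input");

    public static void main(String[] args) {
        int numTokens = 1;
        String token = "My";

        // The {0}/{1} placeholders are only substituted if FINEST is
        // actually loggable, so no string concatenation happens otherwise
        mylog.log(Level.FINEST, "Token {0} is: {1}",
                new Object[] { numTokens, token });
    }
}

The explicit isLoggable() guard is still worthwhile when computing the log arguments themselves is expensive.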

    Read the article

  • Advanced TSQL Tuning: Why Internals Knowledge Matters

    - by Paul White
There is much more to query tuning than reducing logical reads and adding covering nonclustered indexes. Query tuning is not complete as soon as the query returns results quickly in the development or test environments. In production, your query will compete for memory, CPU, locks, I/O and other resources on the server. Today’s entry looks at some tuning considerations that are often overlooked, and shows how deep internals knowledge can help you write better TSQL.

As always, we’ll need some example data. In fact, we are going to use three tables today, each of which is structured like this:

Each table has 50,000 rows made up of an INTEGER id column and a padding column containing 3,999 characters in every row. The only difference between the three tables is in the type of the padding column: the first table uses CHAR(3999), the second uses VARCHAR(MAX), and the third uses the deprecated TEXT type. A script to create a database with the three tables and load the sample data follows:

USE master;
GO
IF DB_ID('SortTest') IS NOT NULL
DROP DATABASE SortTest;
GO
CREATE DATABASE SortTest COLLATE LATIN1_GENERAL_BIN;
GO
ALTER DATABASE SortTest
MODIFY FILE
(
    NAME = 'SortTest',
    SIZE = 3GB,
    MAXSIZE = 3GB
);
GO
ALTER DATABASE SortTest
MODIFY FILE
(
    NAME = 'SortTest_log',
    SIZE = 256MB,
    MAXSIZE = 1GB,
    FILEGROWTH = 128MB
);
GO
ALTER DATABASE SortTest SET ALLOW_SNAPSHOT_ISOLATION OFF;
ALTER DATABASE SortTest SET AUTO_CLOSE OFF;
ALTER DATABASE SortTest SET AUTO_CREATE_STATISTICS ON;
ALTER DATABASE SortTest SET AUTO_SHRINK OFF;
ALTER DATABASE SortTest SET AUTO_UPDATE_STATISTICS ON;
ALTER DATABASE SortTest SET AUTO_UPDATE_STATISTICS_ASYNC ON;
ALTER DATABASE SortTest SET PARAMETERIZATION SIMPLE;
ALTER DATABASE SortTest SET READ_COMMITTED_SNAPSHOT OFF;
ALTER DATABASE SortTest SET MULTI_USER;
ALTER DATABASE SortTest SET RECOVERY SIMPLE;

USE SortTest;
GO
CREATE TABLE dbo.TestCHAR
(
    id INTEGER IDENTITY (1,1) NOT NULL,
    padding CHAR(3999) NOT NULL,
    CONSTRAINT [PK dbo.TestCHAR (id)] PRIMARY KEY CLUSTERED (id)
);

CREATE TABLE dbo.TestMAX
(
    id INTEGER IDENTITY (1,1) NOT NULL,
    padding VARCHAR(MAX) NOT NULL,
    CONSTRAINT [PK dbo.TestMAX (id)] PRIMARY KEY CLUSTERED (id)
);

CREATE TABLE dbo.TestTEXT
(
    id INTEGER IDENTITY (1,1) NOT NULL,
    padding TEXT NOT NULL,
    CONSTRAINT [PK dbo.TestTEXT (id)] PRIMARY KEY CLUSTERED (id)
);

-- =============
-- Load TestCHAR (about 3s)
-- =============
INSERT INTO dbo.TestCHAR WITH (TABLOCKX)
(
    padding
)
SELECT padding = REPLICATE(CHAR(65 + (Data.n % 26)), 3999)
FROM
(
    SELECT TOP (50000)
        n = ROW_NUMBER() OVER (ORDER BY (SELECT 0)) - 1
    FROM master.sys.columns C1,
         master.sys.columns C2,
         master.sys.columns C3
    ORDER BY n ASC
) AS Data
ORDER BY Data.n ASC;

-- ============
-- Load TestMAX (about 3s)
-- ============
INSERT INTO dbo.TestMAX WITH (TABLOCKX)
(
    padding
)
SELECT CONVERT(VARCHAR(MAX), padding)
FROM dbo.TestCHAR
ORDER BY id;

-- =============
-- Load TestTEXT (about 5s)
-- =============
INSERT INTO dbo.TestTEXT WITH (TABLOCKX)
(
    padding
)
SELECT CONVERT(TEXT, padding)
FROM dbo.TestCHAR
ORDER BY id;

-- ==========
-- Space used
-- ==========
EXECUTE sys.sp_spaceused @objname = 'dbo.TestCHAR';
EXECUTE sys.sp_spaceused @objname = 'dbo.TestMAX';
EXECUTE sys.sp_spaceused @objname = 'dbo.TestTEXT';

CHECKPOINT;

That takes around 15 seconds to run, and shows the space allocated to each table in its output.

To illustrate the points I want to make today, the example task we are going to set ourselves is to return a random set of 150 rows from each table.
The basic shape of the test query is the same for each of the three test tables:

SELECT TOP (150)
    T.id,
    T.padding
FROM dbo.Test AS T
ORDER BY NEWID()
OPTION (MAXDOP 1);

Test 1 – CHAR(3999)

Running the template query shown above using the TestCHAR table as the target, we find that the query takes around 5 seconds to return its results. This seems slow, considering that the table only has 50,000 rows. Working on the assumption that generating a GUID for each row is a CPU-intensive operation, we might try enabling parallelism to see if that speeds up the response time. Running the query again (but without the MAXDOP 1 hint) on a machine with eight logical processors, the query now takes 10 seconds to execute – twice as long as when run serially.

Rather than attempting further guesses at the cause of the slowness, let’s go back to serial execution and add some monitoring. The script below monitors STATISTICS IO output and the amount of tempdb used by the test query. We will also run a Profiler trace to capture any warnings generated during query execution.

DECLARE @read BIGINT,
        @write BIGINT;

SELECT @read = SUM(num_of_bytes_read),
       @write = SUM(num_of_bytes_written)
FROM tempdb.sys.database_files AS DBF
JOIN sys.dm_io_virtual_file_stats(2, NULL) AS FS
    ON FS.file_id = DBF.file_id
WHERE DBF.type_desc = 'ROWS';

SET STATISTICS IO ON;

SELECT TOP (150)
    TC.id,
    TC.padding
FROM dbo.TestCHAR AS TC
ORDER BY NEWID()
OPTION (MAXDOP 1);

SET STATISTICS IO OFF;

SELECT tempdb_read_MB = (SUM(num_of_bytes_read) - @read) / 1024. / 1024.,
       tempdb_write_MB = (SUM(num_of_bytes_written) - @write) / 1024. / 1024.,
       internal_use_MB =
       (
           SELECT internal_objects_alloc_page_count / 128.0
           FROM sys.dm_db_task_space_usage
           WHERE session_id = @@SPID
       )
FROM tempdb.sys.database_files AS DBF
JOIN sys.dm_io_virtual_file_stats(2, NULL) AS FS
    ON FS.file_id = DBF.file_id
WHERE DBF.type_desc = 'ROWS';

Let’s take a closer look at the statistics and query plan generated from this:

Following the flow of the data from right to left, we see the expected 50,000 rows emerging from the Clustered Index Scan, with a total estimated size of around 191MB. The Compute Scalar adds a column containing a random GUID (generated from the NEWID() function call) for each row. With this extra column in place, the size of the data arriving at the Sort operator is estimated to be 192MB.

Sort is a blocking operator – it has to examine all of the rows on its input before it can produce its first row of output (the last row received might sort first). This characteristic means that Sort requires a memory grant – memory allocated for the query’s use by SQL Server just before execution starts. In this case, the Sort is the only memory-consuming operator in the plan, so it has access to the full 243MB (248,696KB) of memory reserved by SQL Server for this query execution.

Notice that the memory grant is significantly larger than the expected size of the data to be sorted. SQL Server uses a number of techniques to speed up sorting, some of which sacrifice size for comparison speed. Sorts typically require a very large number of comparisons, so this is usually a very effective optimization. One of the drawbacks is that it is not possible to exactly predict the sort space needed, as it depends on the data itself. SQL Server takes an educated guess based on data types, sizes, and the number of rows expected, but the algorithm is not perfect.
In spite of the large memory grant, the Profiler trace shows a Sort Warning event (indicating that the sort ran out of memory), and the tempdb usage monitor shows that 195MB of tempdb space was used – all of it for system use. The 195MB represents physical write activity on tempdb, because SQL Server strictly enforces memory grants – a query cannot ‘cheat’ and effectively gain extra memory by spilling to tempdb pages that reside in memory. Anyway, the key point here is that it takes a while to write 195MB to disk, and this is the main reason that the query takes 5 seconds overall.

If you are wondering why using parallelism made the problem worse, consider that eight threads of execution result in eight concurrent partial sorts, each receiving one eighth of the memory grant. The eight sorts all spilled to tempdb, resulting in inefficiencies as the spilled sorts competed for disk resources. More importantly, there are specific problems at the point where the eight partial results are combined, but I’ll cover that in a future post.

CHAR(3999) Performance Summary:
5 seconds elapsed time
243MB memory grant
195MB tempdb usage
192MB estimated sort set
25,043 logical reads
Sort Warning

Test 2 – VARCHAR(MAX)

We’ll now run exactly the same test (with the additional monitoring) on the table using a VARCHAR(MAX) padding column:

DECLARE @read BIGINT,
        @write BIGINT;

SELECT @read = SUM(num_of_bytes_read),
       @write = SUM(num_of_bytes_written)
FROM tempdb.sys.database_files AS DBF
JOIN sys.dm_io_virtual_file_stats(2, NULL) AS FS
    ON FS.file_id = DBF.file_id
WHERE DBF.type_desc = 'ROWS';

SET STATISTICS IO ON;

SELECT TOP (150)
    TM.id,
    TM.padding
FROM dbo.TestMAX AS TM
ORDER BY NEWID()
OPTION (MAXDOP 1);

SET STATISTICS IO OFF;

SELECT tempdb_read_MB = (SUM(num_of_bytes_read) - @read) / 1024. / 1024.,
       tempdb_write_MB = (SUM(num_of_bytes_written) - @write) / 1024. / 1024.,
       internal_use_MB =
       (
           SELECT internal_objects_alloc_page_count / 128.0
           FROM sys.dm_db_task_space_usage
           WHERE session_id = @@SPID
       )
FROM tempdb.sys.database_files AS DBF
JOIN sys.dm_io_virtual_file_stats(2, NULL) AS FS
    ON FS.file_id = DBF.file_id
WHERE DBF.type_desc = 'ROWS';

This time the query takes around 8 seconds to complete (3 seconds longer than Test 1). Notice that the estimated row and data sizes are very slightly larger, and the overall memory grant has also increased very slightly to 245MB. The most marked difference is in the amount of tempdb space used – this query wrote almost 391MB of sort run data to the physical tempdb file. Don’t draw any general conclusions about VARCHAR(MAX) versus CHAR from this – I chose the length of the data specifically to expose this edge case. In most cases, VARCHAR(MAX) performs very similarly to CHAR – I just wanted to make test 2 a bit more exciting.

MAX Performance Summary:
8 seconds elapsed time
245MB memory grant
391MB tempdb usage
193MB estimated sort set
25,043 logical reads
Sort Warning

Test 3 – TEXT

The same test again, but using the deprecated TEXT data type for the padding column:

DECLARE @read BIGINT,
        @write BIGINT;

SELECT @read = SUM(num_of_bytes_read),
       @write = SUM(num_of_bytes_written)
FROM tempdb.sys.database_files AS DBF
JOIN sys.dm_io_virtual_file_stats(2, NULL) AS FS
    ON FS.file_id = DBF.file_id
WHERE DBF.type_desc = 'ROWS';

SET STATISTICS IO ON;

SELECT TOP (150)
    TT.id,
    TT.padding
FROM dbo.TestTEXT AS TT
ORDER BY NEWID()
OPTION (MAXDOP 1, RECOMPILE);

SET STATISTICS IO OFF;

SELECT tempdb_read_MB = (SUM(num_of_bytes_read) - @read) / 1024. / 1024.,
       tempdb_write_MB = (SUM(num_of_bytes_written) - @write) / 1024. / 1024.,
       internal_use_MB =
       (
           SELECT internal_objects_alloc_page_count / 128.0
           FROM sys.dm_db_task_space_usage
           WHERE session_id = @@SPID
       )
FROM tempdb.sys.database_files AS DBF
JOIN sys.dm_io_virtual_file_stats(2, NULL) AS FS
    ON FS.file_id = DBF.file_id
WHERE DBF.type_desc = 'ROWS';

This time the query runs in 500ms. If you look at the metrics we have been checking so far, it’s not hard to understand why:

TEXT Performance Summary:
0.5 seconds elapsed time
9MB memory grant
5MB tempdb usage
5MB estimated sort set
207 logical reads
596 LOB logical reads
Sort Warning

SQL Server’s memory grant algorithm still underestimates the memory needed to perform the sorting operation, but the size of the data to sort is so much smaller (5MB versus 193MB previously) that the spilled sort doesn’t matter very much. Why is the data size so much smaller? The query still produces the correct results – including the large amount of data held in the padding column – so what magic is being performed here?

TEXT versus MAX Storage

The answer lies in how columns of the TEXT data type are stored. By default, TEXT data is stored off-row in separate LOB pages – which explains why this is the first query we have seen that records LOB logical reads in its STATISTICS IO output. You may recall from my last post that LOB data leaves an in-row pointer to the separate storage structure holding the LOB data.

SQL Server can see that the full LOB value is not required by the query plan until results are returned, so instead of passing the full LOB value down the plan from the Clustered Index Scan, it passes the small in-row structure instead. SQL Server estimates that each row coming from the scan will be 79 bytes long – 11 bytes for row overhead, 4 bytes for the integer id column, and 64 bytes for the LOB pointer (in fact the pointer is rather smaller – usually 16 bytes – but the details of that don’t really matter right now).

OK, so this query is much more efficient because it is sorting a very much smaller data set – SQL Server delays retrieving the LOB data itself until after the Sort starts producing its 150 rows. The question that normally arises at this point is: why doesn’t SQL Server use the same trick when the padding column is defined as VARCHAR(MAX)?

The answer is connected with the fact that if the actual size of the VARCHAR(MAX) data is 8000 bytes or less, it is usually stored in-row in exactly the same way as for a VARCHAR(8000) column – MAX data only moves off-row into LOB storage when it exceeds 8000 bytes. The default behaviour of the TEXT type is to be stored off-row, unless the ‘text in row’ table option is set suitably and there is room on the page. There is an analogous (but opposite) setting to control the storage of MAX data – the ‘large value types out of row’ table option. By enabling this option for a table, MAX data will be stored off-row (in a LOB structure) instead of in-row. SQL Server Books Online has good coverage of both options in the topic In Row Data.

The MAXOOR Table

The essential difference, then, is that MAX defaults to in-row storage, and TEXT defaults to off-row (LOB) storage. You might be thinking that we could get the same benefits seen for the TEXT data type by storing the VARCHAR(MAX) values off row – so let’s look at that option now.
This script creates a fourth table, with the VARCHAR(MAX) data stored off-row in LOB pages:

CREATE TABLE dbo.TestMAXOOR
(
    id INTEGER IDENTITY (1,1) NOT NULL,
    padding VARCHAR(MAX) NOT NULL,
    CONSTRAINT [PK dbo.TestMAXOOR (id)] PRIMARY KEY CLUSTERED (id)
);

EXECUTE sys.sp_tableoption
    @TableNamePattern = N'dbo.TestMAXOOR',
    @OptionName = 'large value types out of row',
    @OptionValue = 'true';

SELECT large_value_types_out_of_row
FROM sys.tables
WHERE [schema_id] = SCHEMA_ID(N'dbo')
AND name = N'TestMAXOOR';

INSERT INTO dbo.TestMAXOOR WITH (TABLOCKX)
(
    padding
)
SELECT SPACE(0)
FROM dbo.TestCHAR
ORDER BY id;

UPDATE TM WITH (TABLOCK)
SET padding.WRITE (TC.padding, NULL, NULL)
FROM dbo.TestMAXOOR AS TM
JOIN dbo.TestCHAR AS TC
    ON TC.id = TM.id;

EXECUTE sys.sp_spaceused @objname = 'dbo.TestMAXOOR';

CHECKPOINT;

Test 4 – MAXOOR

We can now re-run our test on the MAXOOR (MAX out of row) table:

DECLARE @read BIGINT,
        @write BIGINT;

SELECT @read = SUM(num_of_bytes_read),
       @write = SUM(num_of_bytes_written)
FROM tempdb.sys.database_files AS DBF
JOIN sys.dm_io_virtual_file_stats(2, NULL) AS FS
    ON FS.file_id = DBF.file_id
WHERE DBF.type_desc = 'ROWS';

SET STATISTICS IO ON;

SELECT TOP (150)
    MO.id,
    MO.padding
FROM dbo.TestMAXOOR AS MO
ORDER BY NEWID()
OPTION (MAXDOP 1, RECOMPILE);

SET STATISTICS IO OFF;

SELECT tempdb_read_MB = (SUM(num_of_bytes_read) - @read) / 1024. / 1024.,
       tempdb_write_MB = (SUM(num_of_bytes_written) - @write) / 1024. / 1024.,
       internal_use_MB =
       (
           SELECT internal_objects_alloc_page_count / 128.0
           FROM sys.dm_db_task_space_usage
           WHERE session_id = @@SPID
       )
FROM tempdb.sys.database_files AS DBF
JOIN sys.dm_io_virtual_file_stats(2, NULL) AS FS
    ON FS.file_id = DBF.file_id
WHERE DBF.type_desc = 'ROWS';

MAXOOR Performance Summary:
0.3 seconds elapsed time
245MB memory grant
0MB tempdb usage
193MB estimated sort set
207 logical reads
446 LOB logical reads
No sort warning

The query runs very quickly – slightly faster than Test 3, and without spilling the sort to tempdb (there is no sort warning in the trace, and the monitoring query shows zero tempdb usage by this query). SQL Server is passing the in-row pointer structure down the plan and only looking up the LOB value on the output side of the sort.

The Hidden Problem

There is still a huge problem with this query though – it requires a 245MB memory grant. No wonder the sort doesn’t spill to tempdb now – 245MB is about 20 times more memory than this query actually requires to sort 50,000 records containing LOB data pointers. Notice that the estimated row and data sizes in the plan are the same as in test 2 (where the MAX data was stored in-row).

The optimizer assumes that MAX data is stored in-row, regardless of the sp_tableoption setting ‘large value types out of row’. Why? Because this option is dynamic – changing it does not immediately move existing MAX data in the table in-row or off-row; the setting takes effect only when data is added or actually changed. SQL Server does not keep statistics to show how much MAX or TEXT data is currently in-row, and how much is stored in LOB pages. This is an annoying limitation, and one which I hope will be addressed in a future version of the product.

So why should we worry about this? Excessive memory grants reduce concurrency and may result in queries waiting on the RESOURCE_SEMAPHORE wait type while they wait for memory they do not need. 245MB is an awful lot of memory, especially on 32-bit versions where memory grants cannot use AWE-mapped memory.
Even on a 64-bit server with plenty of memory, do you really want a single query to consume 0.25GB of memory unnecessarily? That’s 32,000 8KB pages that might be put to much better use.

The Solution

The answer is not to use the TEXT data type for the padding column. That solution happens to have better performance characteristics for this specific query, but it still results in a spilled sort, and it is hard to recommend the use of a data type which is scheduled for removal. I hope it is clear to you that the fundamental problem here is that SQL Server sorts the whole set arriving at a Sort operator. Clearly, it is not efficient to sort the whole table in memory just to return 150 rows in a random order.

The TEXT example was more efficient because it dramatically reduced the size of the set that needed to be sorted. We can do the same thing by selecting 150 unique keys from the table at random (sorting by NEWID() for example) and only then retrieving the large padding column values for just the 150 rows we need. The following script implements that idea for all four tables:

SET STATISTICS IO ON;

WITH TestTable AS
(
    SELECT * FROM dbo.TestCHAR
),
TopKeys AS
(
    SELECT TOP (150) id
    FROM TestTable
    ORDER BY NEWID()
)
SELECT T1.id, T1.padding
FROM TestTable AS T1
WHERE T1.id = ANY (SELECT id FROM TopKeys)
OPTION (MAXDOP 1);

WITH TestTable AS
(
    SELECT * FROM dbo.TestMAX
),
TopKeys AS
(
    SELECT TOP (150) id
    FROM TestTable
    ORDER BY NEWID()
)
SELECT T1.id, T1.padding
FROM TestTable AS T1
WHERE T1.id IN (SELECT id FROM TopKeys)
OPTION (MAXDOP 1);

WITH TestTable AS
(
    SELECT * FROM dbo.TestTEXT
),
TopKeys AS
(
    SELECT TOP (150) id
    FROM TestTable
    ORDER BY NEWID()
)
SELECT T1.id, T1.padding
FROM TestTable AS T1
WHERE T1.id IN (SELECT id FROM TopKeys)
OPTION (MAXDOP 1);

WITH TestTable AS
(
    SELECT * FROM dbo.TestMAXOOR
),
TopKeys AS
(
    SELECT TOP (150) id
    FROM TestTable
    ORDER BY NEWID()
)
SELECT T1.id, T1.padding
FROM TestTable AS T1
WHERE T1.id IN (SELECT id FROM TopKeys)
OPTION (MAXDOP 1);

SET STATISTICS IO OFF;

All four queries now return results in much less than a second, with memory grants between 6 and 12MB, and without spilling to tempdb. The small remaining inefficiency is in reading the id column values from the clustered primary key index. As a clustered index, it contains all the in-row data at its leaf. The CHAR and VARCHAR(MAX) tables store the padding column in-row, so id values are separated by a 3,999-character column, plus row overhead. The TEXT and MAXOOR tables store the padding values off-row, so id values in the clustered index leaf are separated by the much smaller off-row pointer structure. This difference is reflected in the number of logical page reads performed by the four queries:

Table 'TestCHAR'   logical reads 25511   lob logical reads   0
Table 'TestMAX'    logical reads 25511   lob logical reads   0
Table 'TestTEXT'   logical reads   412   lob logical reads 597
Table 'TestMAXOOR' logical reads   413   lob logical reads 446

We can increase the density of the id values by creating a separate nonclustered index on the id column only. This is the same key as the clustered index, of course, but the nonclustered index will not include the rest of the in-row column data.
CREATE UNIQUE NONCLUSTERED INDEX uq1 ON dbo.TestCHAR (id);
CREATE UNIQUE NONCLUSTERED INDEX uq1 ON dbo.TestMAX (id);
CREATE UNIQUE NONCLUSTERED INDEX uq1 ON dbo.TestTEXT (id);
CREATE UNIQUE NONCLUSTERED INDEX uq1 ON dbo.TestMAXOOR (id);

The four queries can now use the very dense nonclustered index to quickly scan the id values, sort them by NEWID(), select the 150 ids we want, and then look up the padding data. The logical reads with the new indexes in place are:

Table 'TestCHAR'   logical reads 835   lob logical reads   0
Table 'TestMAX'    logical reads 835   lob logical reads   0
Table 'TestTEXT'   logical reads 686   lob logical reads 597
Table 'TestMAXOOR' logical reads 686   lob logical reads 448

With the new index, all four queries use the same query plan (click to enlarge):

Performance Summary:
0.3 seconds elapsed time
6MB memory grant
0MB tempdb usage
1MB sort set
835 logical reads (CHAR, MAX)
686 logical reads (TEXT, MAXOOR)
597 LOB logical reads (TEXT)
448 LOB logical reads (MAXOOR)
No sort warning

I’ll leave it as an exercise for the reader to work out why trying to eliminate the Key Lookup by adding the padding column to the new nonclustered indexes would be a daft idea.

Conclusion

This post is not about tuning queries that access columns containing big strings. It isn’t about the internal differences between TEXT and MAX data types either. It isn’t even about the cool use of UPDATE .WRITE used in the MAXOOR table load. No, this post is about something else:

Many developers might not have tuned our starting example query at all – 5 seconds isn’t that bad, and the original query plan looks reasonable at first glance. Perhaps the NEWID() function would have been blamed for ‘just being slow’ – who knows. 5 seconds isn’t awful – unless your users expect sub-second responses – but using 250MB of memory and writing 200MB to tempdb certainly is! If ten sessions ran that query at the same time in production, that’s 2.5GB of memory usage and 2GB hitting tempdb. Of course, not all queries can be rewritten to avoid large memory grants and sort spills using the key-lookup technique in this post, but that’s not the point either.

The point of this post is that a basic understanding of execution plans is not enough. Tuning for logical reads and adding covering indexes is not enough. If you want to produce high-quality, scalable TSQL that won’t get you paged as soon as it hits production, you need a deep understanding of execution plans, and as much accurate, deep knowledge about SQL Server as you can lay your hands on. The advanced database developer has a wide range of tools to use in writing queries that perform well in a range of circumstances.

By the way, the examples in this post were written for SQL Server 2008. They will run on 2005 and demonstrate the same principles, but you won’t get the same figures I did because 2005 had a rather nasty bug in the Top N Sort operator. Fair warning: if you do decide to run the scripts on a 2005 instance (particularly the parallel query) do it before you head out for lunch…

This post is dedicated to the people of Christchurch, New Zealand.

© 2011 Paul White
email: @[email protected]
twitter: @SQL_Kiwi

    Read the article

  • Metro: Namespaces and Modules

    - by Stephen.Walther
The goal of this blog entry is to describe how you can use the Windows JavaScript (WinJS) library to create namespaces. In particular, you learn how to use the WinJS.Namespace.define() and WinJS.Namespace.defineWithParent() methods. You also learn how to hide private methods by using the module pattern.

Why Do We Need Namespaces?

Before we do anything else, we should start by answering the question: why do we need namespaces? What function do they serve? Do they just add needless complexity to our Metro applications? After all, plenty of JavaScript libraries do just fine without introducing support for namespaces. For example, jQuery has no support for namespaces and jQuery is the most popular JavaScript library in the universe. If jQuery can do without namespaces, why do we need to worry about namespaces at all?

Namespaces perform two functions in a programming language. First, namespaces prevent naming collisions. In other words, namespaces enable you to create more than one object with the same name without conflict. For example, imagine that two companies – company A and company B – both want to make a JavaScript shopping cart control and both companies want to name the control ShoppingCart. By creating a CompanyA namespace and a CompanyB namespace, both companies can create a ShoppingCart control: a CompanyA.ShoppingCart and a CompanyB.ShoppingCart control.

The second function of a namespace is organization. Namespaces are used to group related functionality even when the functionality is defined in different physical files. For example, I know that all of the methods in the WinJS library related to working with classes can be found in the WinJS.Class namespace. Namespaces make it easier to understand the functionality available in a library.

If you are building a simple JavaScript application then you won’t have much reason to care about namespaces. If you need to use multiple libraries written by different people then namespaces become very important.

Using WinJS.Namespace.define()

In the WinJS library, the most basic method of creating a namespace is to use the WinJS.Namespace.define() method. This method enables you to declare a namespace (of arbitrary depth). The WinJS.Namespace.define() method has the following parameters:

· name – A string representing the name of the new namespace. You can add a nested namespace by using dot notation.
· members – An optional collection of objects to add to the new namespace.

For example, the following code sample declares two new namespaces named CompanyA and CompanyB.Controls. Both namespaces contain a ShoppingCart object which has a checkout() method:

// Create CompanyA namespace with ShoppingCart
WinJS.Namespace.define("CompanyA");
CompanyA.ShoppingCart = {
    checkout: function () {
        return "Checking out from A";
    }
};

// Create CompanyB.Controls namespace with ShoppingCart
WinJS.Namespace.define(
    "CompanyB.Controls",
    {
        ShoppingCart: {
            checkout: function () {
                return "Checking out from B";
            }
        }
    }
);

// Call CompanyA ShoppingCart checkout method
console.log(CompanyA.ShoppingCart.checkout()); // Writes "Checking out from A"

// Call CompanyB.Controls checkout method
console.log(CompanyB.Controls.ShoppingCart.checkout()); // Writes "Checking out from B"

In the code above, the CompanyA namespace is created by calling WinJS.Namespace.define("CompanyA"). Next, the ShoppingCart is added to this namespace. The namespace is defined and an object is added to the namespace in separate lines of code.
A different approach is taken in the case of the CompanyB.Controls namespace. The namespace is created and the ShoppingCart object is added to the namespace with the following single statement:

WinJS.Namespace.define(
    "CompanyB.Controls",
    {
        ShoppingCart: {
            checkout: function () {
                return "Checking out from B";
            }
        }
    }
);

Notice that CompanyB.Controls is a nested namespace. The top-level namespace CompanyB contains the namespace Controls. You can declare a nested namespace using dot notation, and the WinJS library handles the details of creating one namespace within the other.

After the namespaces have been defined, you can use either of the two shopping cart controls. You can call CompanyA.ShoppingCart.checkout() or you can call CompanyB.Controls.ShoppingCart.checkout().

Using WinJS.Namespace.defineWithParent()

The WinJS.Namespace.defineWithParent() method is similar to the WinJS.Namespace.define() method. Both methods enable you to define a new namespace. The difference is that the defineWithParent() method enables you to add a new namespace to an existing namespace. The WinJS.Namespace.defineWithParent() method has the following parameters:

· parentNamespace – An object which represents a parent namespace.
· name – A string representing the new namespace to add to the parent namespace.
· members – An optional collection of objects to add to the new namespace.

The following code sample demonstrates how you can create a root namespace named CompanyA and add a Controls child namespace to the CompanyA parent namespace:

WinJS.Namespace.define("CompanyA");

WinJS.Namespace.defineWithParent(CompanyA, "Controls",
    {
        ShoppingCart: {
            checkout: function () {
                return "Checking out";
            }
        }
    }
);

console.log(CompanyA.Controls.ShoppingCart.checkout()); // Writes "Checking out"

One significant advantage of using the defineWithParent() method over the define() method is that the defineWithParent() method is strongly typed. In other words, you use an object to represent the base namespace instead of a string. If you misspell the name of the object (CompnyA) then you get a runtime error.

Using the Module Pattern

When you are building a JavaScript library, you want to be able to create both public and private methods. Some methods, the public methods, are intended to be used by consumers of your JavaScript library. The public methods act as your library’s public API. Other methods, the private methods, are not intended for public consumption. Instead, these methods are internal methods required for the library to function. You don’t want people calling these internal methods because you might need to change them in the future.

JavaScript does not support access modifiers. You can’t mark an object or method as public or private. Anyone gets to call any method and anyone gets to interact with any object. The only mechanism for encapsulating (hiding) methods and objects in JavaScript is to take advantage of functions. In JavaScript, a function determines variable scope. A JavaScript variable either has global scope – it is available everywhere – or it has function scope – it is available only within a function. If you want to hide an object or method then you need to place it within a function.
For example, the following code contains a function named doSomething() which contains a nested function named doSomethingElse():

function doSomething() {
    console.log("doSomething");

    function doSomethingElse() {
        console.log("doSomethingElse");
    }
}

doSomething();     // Writes "doSomething"
doSomethingElse(); // Throws ReferenceError

You can call doSomethingElse() only within the doSomething() function. The doSomethingElse() function is encapsulated in the doSomething() function.

The WinJS library takes advantage of function encapsulation to hide all of its internal methods. All of the WinJS methods are defined within self-executing anonymous functions. Everything is hidden by default. Public methods are exposed by explicitly adding the public methods to namespaces defined in the global scope.

Imagine, for example, that I want a small library of utility methods. I want to create a method for calculating sales tax and a method for calculating the expected ship date of a product. The following library encapsulates the implementation of my library in a self-executing anonymous function:

(function (global) {

    // Public method which calculates tax
    function calculateTax(price) {
        return calculateFederalTax(price) + calculateStateTax(price);
    }

    // Private method for calculating state tax
    function calculateStateTax(price) {
        return price * 0.08;
    }

    // Private method for calculating federal tax
    function calculateFederalTax(price) {
        return price * 0.02;
    }

    // Public method which returns the expected ship date
    function calculateShipDate(currentDate) {
        currentDate.setDate(currentDate.getDate() + 4);
        return currentDate;
    }

    // Export public methods
    WinJS.Namespace.define("CompanyA.Utilities",
        {
            calculateTax: calculateTax,
            calculateShipDate: calculateShipDate
        }
    );

})(this);

// Show expected ship date
var shipDate = CompanyA.Utilities.calculateShipDate(new Date());
console.log(shipDate);

// Show price + tax
var price = 12.33;
var tax = CompanyA.Utilities.calculateTax(price);
console.log(price + tax);

In the code above, the self-executing anonymous function contains four functions: calculateTax(), calculateStateTax(), calculateFederalTax(), and calculateShipDate(). The following statement is used to expose only the calculateTax() and the calculateShipDate() functions:

// Export public methods
WinJS.Namespace.define("CompanyA.Utilities",
    {
        calculateTax: calculateTax,
        calculateShipDate: calculateShipDate
    }
);

Because the calculateTax() and calculateShipDate() functions are added to the CompanyA.Utilities namespace, you can call these two methods outside of the self-executing function. These are the public methods of your library which form the public API. The calculateStateTax() and calculateFederalTax() methods, on the other hand, are forever hidden within the black hole of the self-executing function. These methods are encapsulated and can never be called outside of the scope of the self-executing function. These are the internal methods of your library.

Summary

The goal of this blog entry was to describe why and how you use namespaces with the WinJS library. You learned how to define namespaces using both the WinJS.Namespace.define() and WinJS.Namespace.defineWithParent() methods. We also discussed how to hide private members and expose public members using the module pattern.

    Read the article

  • CodePlex Daily Summary for Friday, August 01, 2014

CodePlex Daily Summary for Friday, August 01, 2014

Popular Releases

QuickMon: Version 3.20: Added a 'Directory Services Query' collector agent. This collector allows for querying Active Directory using simple DirectorySearcher queries.
Grunndatakvalitet: Initial working: Show Altinn metadata in Excel. To get a live list you need to run the SQL script on a server and update the connection string in Excel.
Multiple Threads TCP Server: Project: This project is based on VS 2013 and .NET Framework 4.0; you can open it with VS 2010 or later.
Essence#: Philemon-2 (Alpha Build 18): The Philemon-2 release fixes bugs, most of which have to do with the binding of .Net assemblies. Configuration profiles have been added in order to help solve some of the .Net assembly binding issues. A configuration profile is persistently stored as a folder in the filesystem. It must be located in the folder %EssenceSharpPath%\Config. The folder name must use ".profile" as its file extension. A configuration profile folder may (optionally) contain one or more files--each with its own form...
DataBind: DataBind 1.0: Efficiency improved.
Console Progress Bar: ConsoleProgressBar-0.3: Validate ProgressInPercentage.
Space Engineers Server Manager: SESM V1.10: New auto-update system! You won't miss any SE update now ^^ To learn more: https://sesm.codeplex.com/wikipage?title=Activating%20the%20auto%20update%20system Moved the server-wide configuration options to a file other than web.config; it should solve the "my parameters are erased by each SESM update" issue! Warning! If you upgrade from any previous version, you will need to copy-paste your parameters from your old web.config into the new SESM.config (this file is generated by acc...
Accesorios de sitios Torrent en Español para Synology Download Station: Pack de Torrents en Español 6.0.0: Added the DivXTotal modules; the search module depends on the hosting module to download series. It uses the RSS feed: http://www.divxtotal.com/rss.php
DbEntry.Net (Leafing Framework): DbEntry.Net 4.2: DbEntry.Net is a lightweight Object Relational Mapping (ORM) database access component for .Net 4.0+. It has a clear and easy programming interface for ORM and for SQL directly, and supports Access, Sql Server, MySql, SQLite, Firebird, PostgreSQL and Oracle. It also provides a Ruby on Rails style MVC framework, an Asp.Net DataSource and a simple IoC. DbEntry.Net.v4.2.Setup.zip includes the setup package. DbEntry.Net.v4.2.Src.zip includes source files and unit tests. DbEntry.Net.v4.2.Samples.zip ...
Recaptcha for .NET: Recaptcha for .NET v1.5: What's New: Minor bug fixes. Support for legacy .NET Framework 4.0 and ASP.NET MVC 4. Support for .NET Framework 4.5.1.
Azure Storage Explorer: Azure Storage Explorer 6 Preview 1: Welcome to Azure Storage Explorer 6 Preview 1. This is the first release of the latest Azure Storage Explorer, code-named Phoenix. What's New? Here are some important things to know about version 6: Open Source - now being run as a full open source project, with full source code on CodePlex; collaboration encouraged! Updated Code Base - brand-new code base (WPF/C#/.NET 4.5), Visual Studio 2013 solution (previously VS2010), uses the Task Parallel Library (TPL) for asynchronous background operat...
Wsus Package Publisher: release v1.3.1407.29: Updated WPP to recognize the very latest console version. Some files were missing from the latest release of WPP, which led to a crash when trying to make a custom update. Added a workaround to avoid clipboard modification when double-clicking on a label when creating a custom update. Added the ability to publish detectoids. (This feature is still in a BETA phase. Packages relying on these detectoids to determine which computers need to be updated may apply to all computers.)
SSIS Connector for Sharepoint Online: Sharepoint_Online_ConnectorsV2.3.5: V2.3.5 Removed validation for user name where it was accepting user names with only ".com". V2.3.1 Added support for columns with string array type (for source only). V2.3 Added support for the delete operation: just map ID columns and choose operation type "Delete"; any other operation type will be taken as Insert\Update. MultiUser type will be taken as a User field and a single user will be added; still figuring out how to add a multi-user field (whatever is available on the internet is not helping :( ) V2.2 ...
Artezio SharePoint 2013 Workflow Activities for VS & Actions for Designer: SharePoint Designer 2013 custom workflow actions 1.0: SharePoint Designer 2013 custom workflow actions that work with permissions: Add Role Assignment, Add Role Assignments, Delete Role Assignments, Get Role Definition Id, Get Role Definition Id By Role Type Id, Reset Role Inheritance. Artezio SharePoint Consulting & Development.
VG-Ripper & PG-Ripper: PG-Ripper 1.4.32: Changes: NEW: Added support for 'ImgMega.com' links. NEW: Added support for 'ImgCandy.net' links. NEW: Added support for 'ImgPit.com' links. NEW: Added support for 'Img.yt' links. FIXED: 'Radikal.ru' links. FIXED: 'ImageTeam.org' links. FIXED: 'ImgSee.com' links. FIXED: 'Img.yt' links.
Asp.Net MVC-4, Entity Framework and JQGrid Demo with Todo List WebApplication: Asp.Net MVC-4, Entity Framework and JQGrid Demo: Asp.Net MVC-4, Entity Framework and JQGrid demo with a simple Todo List web application. Overview: TodoList is a simple web application to create, store and modify Todo tasks maintained by the users, comprising the following fields (Task Name, Task Description, Severity, Target Date, Task Status). The TodoList web application is created using the MVC-4 architecture, code-first Entity Framework (ORM) and JQGrid for displaying the data.
Waterfox: Waterfox 31.0 Portable: New features in Waterfox 31.0: Added support for Unicode 7.0; experimental support for WebCL. New features in Firefox 31.0: New: added the search field to the new tab page; support for the Prefer:Safe HTTP header for parental control; mozilla::pkix as the default certificate verifier; block malware from downloaded files; audio/video .ogg and .pdf files handled by Firefox if no application is specified. Changed: removal of the CAPS infrastructure for specifying site-sp...
SuperSocket, an extensible socket server framework: SuperSocket 1.6.3: The changes below are included in this release: fixed an exception when collecting a server's status after it has already stopped; fixed a bug that could cause an exception when sending data after the connection had already dropped; fixed the missing log4net issue for a QuickStart project; fixed a warning in a QuickStart project.
Ynote Classic: Ynote Classic 2.8.5 Beta: Several changes: multiple carets and multiple selections, improved startup time, improved syntax highlighting, search improvements, shell command, improved stability.
TEBookConverter: 1.2: Fixed: Could not start conversion in some cases. Fixed: Progress shown during conversion was truncated. Fixed: Stopping a conversion didn't reset the program title.

New Projects

.NET to C/C++ Interop Sample: Reference projects that demonstrate how to invoke C/C++ code from .NET using C#.
Aircraft Guidance Systems: Software for RC model control and navigation.
DNN Upgrade Wizard: Provides a simple method for upgrading a DNN site.
Drop and Create indexes SSIS Task: This is a simple SSIS Control Flow Task which can be used to drop and re-create all non-primary key indexes associated with a specified table.
Dynamics CRM 2013 Global Advanced Find: Dynamics CRM 2013 Global Advanced Find adds a button to the global navigation allowing users to launch an advanced find window from anywhere in the system.
itsanewproject: sdlfjsldjf
JNMSJW: haha
Omni Kits: A library intended to help in projects for any purpose while staying small.
Phase1Week1: Learning how to post, publish, and patch my commits to CodePlex with homework using Bootstrap and Font Awesome.
PL_Last1: Pll1
Tabletoparmy: Tabletoparmy is a tool for managing tabletop miniatures. Currently the only option is to manage an X-Wing tournament.
TalesAndTiles: Tile based game developed with the Microsoft XNA Framework 4.0.

    Read the article

  • CodePlex Daily Summary for Monday, August 04, 2014

CodePlex Daily Summary for Monday, August 04, 2014

Popular Releases

Space Engineers Server Manager: SESM V1.11: Added the rename option in the map manager. Added the possibility to select a map without starting the server. Upgraded the diagnosis page: now more useful, more colorful, and with a big check/cross sign to tell you if your installation is healthy. Seriously, you must check it out ^^ Corrected issue #7: now the asteroid count is the real number of asteroids in the current map. Corrected issue #8: the CPU/RAM values on the stats page should be much more accurate now.
Json.NET: Json.NET 6.0 Release 4: New feature - Added Merge to LINQ to JSON. New feature - Added JValue.CreateNull and JValue.CreateUndefined. New feature - Added Windows Phone 8.1 support to the .NET 4.0 portable assembly. New feature - Added OverrideCreator to JsonObjectContract. New feature - Added support for overriding the creation of interfaces and abstract types. New feature - Added support for reading UUID BSON binary values as a Guid. New feature - Added MetadataPropertyHandling.Ignore. New feature - Improv...
VidCoder: 1.5.24 Beta: Added NL-Means denoiser. Updated HandBrake core to SVN 6254. Added extra error handling to DVD player code to avoid a crash when the player was moved.
PowerShell App Deployment Toolkit: PowerShell App Deployment Toolkit v3.1.5: Added Send-Keys function to send a sequence of keys to an application window (thanks to mmashwani). Added 3 optimization/stability improvements to Execute-Process following MS best practice (thanks to mmashwani). Fixed issue where Execute-MSI did not use the value from the XML file for uninstall but instead ran all uninstalls silently by default. Fixed error on 1641 exit code (should be a success like 3010). Fixed issue with error handling in Invoke-SCCMTask. Fixed issue with deferral dates where th...
AutoUpdater.NET : Auto update library for VB.NET and C# Developer: AutoUpdater.NET 1.3: Fixed problem in DownloadUpdateDialog where the download continued even if you closed the dialog. Added support for a new url field for 64-bit application setup; AutoUpdater.NET will decide which download url to use by looking at the value of IntPtr.Size. Added German translation provided by Rene Kannegiesser. Developers can now handle update logic themselves using the event suggested by ricorx7. Added Italian translation provided by Gianluca Mariani. Fixed bug that prevented the application from exiti...
HigLabo: HigLabo_20140801: Summary: Added feature to create a SmtpMessage object from System.Net.Mail.MailMessage or HigLabo.Mime.MailMessage. Added feature to get the mail header only by using ImapClient. Added some Portable Class Library projects. Modified async internal implementation. HigLabo.Converter: Bug fix of QuotedPrintableConverter when the last char of a line is whitespace or tab. HigLabo.Core: Modified code to add the HigLabo.Core.Pcl project. HigLabo.Mail: Added feature to create...
SEToolbox: SEToolbox 01.041.012 Release 1: Added voxel material textures to read in with mods. Fixed missing texture replacements for mods. Fixed rounding issue in raytrace code. Fixed repair issue with corrupt checkpoint file. Fixed issue with updated SE binaries 01.041.012 using the new container configuration.
Dynamics CRM 2013 Global Advanced Find: Global Advanced Find Solution v1.0.0.0: Global Advanced Find Managed Solution v1.0.0.0.
Magick.NET: Magick.NET 6.8.9.601: Magick.NET linked with ImageMagick 6.8.9.6. Breaking changes: Changed arguments for the Map method of MagickImage; QuantizeSettings uses Riemersma by default.
Multiple Threads TCP Server: Project: This project is based on VS 2013 and .NET Framework 4.0; you can open it with VS 2010 or later.
Aricie Shared: Aricie.Shared Version 1.8.00: Version 1.8.0 release notes. New: Expression Builder to design Flee expressions. New: cryptographic helpers and configuration classes. Improvement: many fixes and improvements to the property editor. Improvement: the Token Replace property explorer now has a restricted mode for additional security. Improvement: better variables, types and object manipulation. Fixed: smart file and Flee bugs. Fixed: removed exception while trying to read unsupported files. Improvement: several performance twe...
DataBind: DataBind 1.0: Efficiency improved.
Console Progress Bar: ConsoleProgressBar-0.3: Validate ProgressInPercentage.
Gum UI Tool: Gum 0.6.07: Fixed bug related to setting the parent to screen bounds.
Accesorios de sitios Torrent en Español para Synology Download Station: Pack de Torrents en Español 6.0.0: Added the DivXTotal modules; the search module depends on the hosting module to download series. It uses the RSS feed: http://www.divxtotal.com/rss.php
DbEntry.Net (Leafing Framework): DbEntry.Net 4.2: DbEntry.Net is a lightweight Object Relational Mapping (ORM) database access component for .Net 4.0+. It has a clear and easy programming interface for ORM and for SQL directly, and supports Access, Sql Server, MySql, SQLite, Firebird, PostgreSQL and Oracle. It also provides a Ruby on Rails style MVC framework, an Asp.Net DataSource and a simple IoC. DbEntry.Net.v4.2.Setup.zip includes the setup package. DbEntry.Net.v4.2.Src.zip includes source files and unit tests. DbEntry.Net.v4.2.Samples.zip ...
Azure Storage Explorer: Azure Storage Explorer 6 Preview 1: Welcome to Azure Storage Explorer 6 Preview 1. This is the first release of the latest Azure Storage Explorer, code-named Phoenix. What's New? Here are some important things to know about version 6: Open Source - now being run as a full open source project, with full source code on CodePlex; collaboration encouraged! Updated Code Base - brand-new code base (WPF/C#/.NET 4.5), Visual Studio 2013 solution (previously VS2010), uses the Task Parallel Library (TPL) for asynchronous background operat...
Wsus Package Publisher: release v1.3.1407.29: Updated WPP to recognize the very latest console version. Some files were missing from the latest release of WPP, which led to a crash when trying to make a custom update. Added a workaround to avoid clipboard modification when double-clicking on a label when creating a custom update. Added the ability to publish detectoids. (This feature is still in a BETA phase. Packages relying on these detectoids to determine which computers need to be updated may apply to all computers.)
VG-Ripper & PG-Ripper: PG-Ripper 1.4.32: Changes: NEW: Added support for 'ImgMega.com' links. NEW: Added support for 'ImgCandy.net' links. NEW: Added support for 'ImgPit.com' links. NEW: Added support for 'Img.yt' links. FIXED: 'Radikal.ru' links. FIXED: 'ImageTeam.org' links. FIXED: 'ImgSee.com' links. FIXED: 'Img.yt' links.
Asp.Net MVC-4, Entity Framework and JQGrid Demo with Todo List WebApplication: Asp.Net MVC-4, Entity Framework and JQGrid Demo: Asp.Net MVC-4, Entity Framework and JQGrid demo with a simple Todo List web application. Overview: TodoList is a simple web application to create, store and modify Todo tasks maintained by the users, comprising the following fields (Task Name, Task Description, Severity, Target Date, Task Status). The TodoList web application is created using the MVC-4 architecture, code-first Entity Framework (ORM) and JQGrid for displaying the data.

New Projects

Android_Games: Free downloads
DailyProgrammer Challenge 173: My solution to /r/dailyprogrammer Challenge #173.
EmotionPing: A little piece of software to ping a network IP range.
EntityMapper: Library for mapping (or projecting) Entity types (Domain Model) to DTO types (Contract). Made to be compatible with the Linq Provider for Entity Framework.
kevinsheldon: private collection
Keyboard Auto-Complete for PC: Did you ever use the auto-complete in mobile phones? Why not on PC?? Now you can!
LetsPlay: What will you play today? Pick one of the games installed on your system and play to your heart's content :)
Mini ORM like Enterprise library Data Block: A mini ORM like the Enterprise Library Data Block.
ProjkyAddin ssms addin script shortcut ssmsaddin: A Management Studio add-in supporting SSMS 2005, SSMS 2008, SSMS 2008 R2, SSMS 2012 and SSMS 2014.
Reddit Downvote Bot: Reddit Downvote Bot; mass-downvote a certain reddit user.
SQL Server Dialog: This SQL Server Dialog is now released in v1.0.0.
VNTalk Messenger: VNTalk Messenger
WindyPaper: Test Project

    Read the article

  • CodePlex Daily Summary for Wednesday, October 17, 2012

CodePlex Daily Summary for Wednesday, October 17, 2012

Popular Releases

D3 Loot Tracker: 1.5.5: Compatible with 1.05.
Test Management eXtensions PowerShell module: TMX 0.4.5: Bugfix in BackUp-TMXTestResults: 1. adding escape characters to string data (at least, in part); 2. ErrorRecord is now supported as a string. Known issue: only one screenshot per test result.
Write Once, Play Everywhere: MonoGame 3.0 (BETA): This is a beta release of the upcoming MonoGame 3.0. It contains an installer which will install a binary release of MonoGame on Windows boxes for the following platforms: Windows, Linux, Android and Windows 8. If you need to build for iOS or Mac you will need to get the source code at this time, as the installers for those platforms are not available yet. The installer will also install a bunch of project templates for Visual Studio 2010, 2012 and MonoDevelop. For those of you wish...
Windawesome: Windawesome v1.4.1 x64: Fixed switching of applications across monitors. Changed window-flashing API (fix your config files). Added NetworkMonitorWidget (thanks to weiwen). Any issues/recommendations/requests for future versions? This is the 64-bit version of the release; be sure to use it if you are on 64-bit Windows. Works with "Required DLLs v3".
CODE Framework: 4.0.21017.0: See change log in the Documentation section for details.
Global Stock Exchange (Hobby Project): Global Stock Exchange - Invst Banking (Hobby Proj): Initial version.
Magelia WebStore Open-source Ecommerce software: Magelia WebStore 2.1: Added support for .NET 4.0 to Magelia.Webstore.Client and StarterSite version 2.1.254.3. Scheduler import & export feature. UTC datetime and timezone support. .NET 4.5 and Visual Studio 2012 migration. Client Magelia global refactoring. NuGet package: http://nuget.org/packages/Magelia.Webstore.Client. Burst optimisation and burst time improvement (multithreading, index, ...); the current burst remains active when a new burst is generating. Bug fixes, version 2.1.254.1.
RazorSourceGenerator: RazorSourceGenerator v1.0 Installer: RazorSourceGenerator v1.0 Installer.
JayData - The cross-platform HTML5 data-management library for JavaScript: JayData 1.2.2: JayData is a unified data access library for JavaScript to CRUD + query data from different sources like OData, MongoDB, WebSQL, SqLite, HTML5 localStorage, Facebook or YQL. The library can be integrated with Knockout.js or Sencha Touch 2 and can be used on Node.js as well. See it in action in this 6-minute video. Sencha Touch 2 example app using JayData: Netflix browser. What's new in JayData 1.2.2: for detailed release notes check the release notes. Revitalized IndexedDB provider: now you c...
VFPX: FoxcodePlus: FoxcodePlus - Visual Studio like extensions to Visual FoxPro IntelliSense.
Droid Explorer: Droid Explorer 0.8.8.8 Beta: Fixed the icon for packages on the desktop. Fixed the install dialog closing right when it starts. Removed the link to "set up the sdk for me" as this is no longer supported. Fixed bug where the device selection dialog would show even if there was only one device connected. Fixed toolbar having a "gap" between other toolbars. Removed main menu items that do not have any menus.
Fiskalizacija za developere: FiskalizacijaDev 1.0: The first version of this project, still marked BETA - meaning our tests have passed successfully :) However, since we do not produce cash-register software ourselves, none of this has been tried under "real" conditions - every suggestion, remark or bug report is welcome. For all of this, please use Discussions or the Issue Tracker. At this moment the runtime binary is available as Any CPU for .NET version 2.0. Let us know if builds for 32-bit/64-bit are also needed, as well as for .N...
Squiggle - A free open source LAN Messenger: Squiggle 3.2 (Development): NOTE: This is a development release and not recommended for production use. This release is mainly for enabling extensibility and interoperability with other platforms. Support for plugins. Support for extensions. Communication layer and protocol are platform independent (ZeroMQ, ProtocolBuffers). Bug fixes. New /invite command. Edit the sent message. Disable update check.
AcDown????? - AcDown Downloader Framework: AcDown????? v4.2: Chinese release notes (garbled during extraction); the recoverable details mention download support for Acfun, Bilibili, YouTube and other sites, that AcDown is written in C# against .NET Framework 2.0, and that it runs on 32/64-bit Windows XP/Vista/7/8 and on Linux via Mono.
PHPExcel: PHPExcel 1.7.8: See the change log for details of the new features and bugfixes included in this release, and methods that are now deprecated. Note changes to the PDF writer: tcPDF is no longer bundled with PHPExcel, but should be installed separately if you wish to use that third-party library with PHPExcel. Alternatively, you can choose to use mPDF or DomPDF as PDF rendering libraries instead: PHPExcel now provides a configurable wrapper allowing you a choice of PDF renderer. See the documentation, or the PDF s...
ALM Assessment Guidance: Community Value-Adds: Important: This download has been created using ALM Ranger bits by the community, for the community. Although ALM Rangers were involved in the process, the content has not been through their quality review. Please post your candid feedback and improvement suggestions to the Community tab of this CodePlex project.
DirectX Tool Kit: October 12, 2012: Added PrimitiveBatch for drawing user primitives. Debug object names for all D3D resources (for PIX and debug layer leak reporting).
Microsoft Ajax Minifier: Microsoft Ajax Minifier 4.70: Fixed issue described in discussion #399087: variable references within case values weren't getting resolved.
GoogleMap Control: GoogleMap Control 6.1: Some important bug fixes and a couple of new features were added. There are no major changes to the sample website. Source code can be downloaded from the Source Code section by selecting branch release-6.1; thus just builds of GoogleMap Control are issued here in this release. Update 14.Oct.2012 - client-side access fixed. NuGet package: GoogleMap Control 6.1. Features: Bounds property to provide the ability to create a map by center and bounds as well; setting in markup <artem:Goog...
mojoPortal: 2.3.9.3: See release notes on mojoportal.com: http://www.mojoportal.com/mojoportal-2393-released Note that we have separate deployment packages for .NET 3.5 and .NET 4.0, but we recommend you use .NET 4; we will probably drop support for .NET 3.5 once .NET 4.5 is available. The deployment package downloads on this page are pre-compiled and ready for production deployment; they contain no C# source code and are not intended for use in Visual Studio. To download the source code see getting the lates...

New Projects

1327on17jabbr: hello
AESP: AESP project.
AutoStor: An application developed as part of a university course, whose goal is the simulation of an automatic warehousing system, with an object-oriented implementation.
BetterPlaceBooking: 3rd semester project for DM76 Group 3.
BizMate: BizMate is a web-based accounting system.
Deneme: test post
DomainSharp: Integrated development environment for design of domain-specific languages and subsequent development in such languages.
EasyTwitter: EasyTwitter is a simple .NET library so you can use Twitter in your web applications or WinForms applications. EasyTwitter is still in development.
EDM Designer Extender: Entity Framework Designer Extender that provides new design-time properties and a template item to generate DbContext classes.
Formition Password Safe (Open Source): Formition Password Safe (Open Source) for Windows 7 is a free high-functionality password tool to manage your passwords and other pieces of information.
g1p2_web: Sport club web site based on C# and MSSQL.
GibbsLDASharp: GibbsLDASharp is a C# implementation of Latent Dirichlet Allocation (LDA) using the Gibbs sampling technique for parameter estimation and inference.
Intelligent Assistant Soccer Manager: Intelligent Assistant Soccer Manager, or IASM for short, is a decision support system that fully supports condition-based deployment of Fantacalcio® soccer teams.
IshaanOnDAL: Creating a DAL using class objects.
Keks - A 2D Graphics Engine in Java (jre7): Keks is an upcoming Java 2D graphics engine with the promise to become a game engine in the future.
LDR Installer: LDR Installer to easily install security updates in hotfix mode.
Live SDK with C# + REST: Use Live SDK with the REST API in any flavor of Windows.
MacSonuclari: Project to show soccer match results on Windows Phone devices.
mozcms: mozcms for .NET 4.0 MVC 4.0!
MS_Descriptions_Changer: This utility changes the MS_Description attribute on MS SQL Server 2005-2008 for tables and columns.
Onestop.Contrib.DistributedEvict: Onestop.Contrib.DistributedEvict is an advanced module that has 2 primary features for managing cache in a web farm: Output Cache Evict & Remote Signals.
Orchard Scripting Extensions: Core module for running scripts inside Orchard.
Orchard Scripting Extensions: PHP: A child module for Orchard Scripting Extensions for running PHP code inside Orchard.
Page Generated Skin Object for DotNetNuke: The Page Generated Skin Object for DotNetNuke displays the time taken to generate the current page in your custom DNN skin.
ProductStore: This project is a demo for MVC 4, EF code-first...
Remote Controlled Switch for all RC Receivers: AVR Tiny based simple switch for any RC receiver. Allows turning lights, sound effects, or retractable chassis on and off using a free servo channel.
Report Generator in C#: This is a library used to generate reports.
Resharper text localization add-in: Text localization plugin for ReSharper 7.0 and some plugin development documentation.
Road Addicts: Road Addicts is a mix between a strategy and a traffic simulation game.
Sistema Distribuido de Pedidos de Insumos Médicos: This is a school project.
SplitOS: SplitOS - The user-friendly Text OS.
SQL Server Connection Auditor (SSCA): SSCA helps you test your database audit solution's effectiveness at auditing Microsoft SQL Server connections by automating DB connections, results, and logs.
TCP Cellular Radio Driver: Addresses TCP mode shortcomings in the CellularRadio driver provided by GHI Electronics for the Seeed module. Allows transparent data connection over TCP.
test1325on17: hello
testtom10162012git01: fds fds
UMK Game: A 3D educational game about traffic rules.
WebMatrix Extension Documentation - Staging: WebMatrix extension development service.
Whipstaff: Whipstaff is a PoC library for designing a common UI library leveraging WPF, ReactiveUI and DHGMS Data Manager. It is written in C#.
Windows 8 Store Apps - Tutoriales Paso a Paso: There is no better way to learn to write code than reading other people's code. Find complete step-by-step tutorials here for creating your first Windows 8 Store app.
Yet Another Expression Parser - Reverse Polish Notation - C#: The project contains a class library with a simple Reverse Polish Notation implementation.
Report Generator in C#: This is a library used to generate reports.Resharper text localization add-in: Text localization plugin for Resharper 7.0 and some plugins development documentationRoad Addicts: Road Addicts is a mix between a strategy and a traffic simulation game.Sistema Distribuido de Pedidos de Insumos Médicos: This is school projectSplitOS: SplitOS - The user-friendly Text OSSQL Server Connection Auditor (SSCA): SSCA helps you test your database audit solution's effectiveness at auditing Microsoft SQL Server connections by automating DB connections, results, and logs.TCP Cellular Radio Driver: Addresses TCP mode shortcomings in the CellularRadio driver provided by GHI Electronics for the Seeed module. Allows transparent data connection over TCP.test1325on17: hellotesttom10162012git01: fds fdsUMK Game: Game Edukasi 3D Tatatertib LalulintasWebMatrix Extension Documentation - Staging: WebMatrix Extensión Development ServiceWhipstaff: Whipstaff is a PoC library for designing a common UI library leveraging WPF, ReactiveUI and DHGMS Data Manager. It is written in C#Windows 8 Store Apps - Tutoriales Paso a Paso: No hay mejor forma de aprender a escribir código, que leyendo código de otros. Conocé aquí tutoriales completos para crear tu primer app para Windows 8 Store. Yet Another Expression Parser - Reverse Polish Notation - C#: Following project contains a class library with simple Reverse Polish Notation implementation.

    Read the article

  • CodePlex Daily Summary for Friday, October 19, 2012

    CodePlex Daily Summary for Friday, October 19, 2012

Popular Releases

Orchard Project: Orchard 1.6 RC: RELEASE NOTES: This is the release candidate version of Orchard 1.6. You should use this version to prepare your current developments for the upcoming final release, and to report problems. Please read our release notes for Orchard 1.6 RC: http://docs.orchardproject.net/Documentation/Orchard-1-6-Release-Notes. Please do not post questions as reviews. Questions should be posted in the Discussions tab, where they will usually get promptly responded to. If you post a question as a review, you wil...
Rawr: Rawr 5.0.1: This is the downloadable WPF version of Rawr! For the web-based version see http://elitistjerks.com/rawr.php. You can find the version notes at http://rawr.codeplex.com/wikipage?title=VersionNotes. Rawr Addon (NOT UPDATED YET FOR MOP): we now have an official Rawr addon for in-game exporting and importing of character data, hosted on Curse. The addon does not perform calculations like Rawr; it simply shows your exported Rawr data in WoW tooltips and lets you export your character to Rawr (including ba...
TFS 2012 Server/service Setup for Demo: TfsDemo_1.0.0.2: Release 1.0.0.2. New stuff: Feature 1 - add team favorite queries using the TFS demo setup application; simply specify the name of the work item query in the demoConfig.xml file and let the application work its magic. Feature 2 - exclude the sections you do not want to run as part of the demo; mark them with the attribute Run="false" in the demoConfig.xml. Bug fix 1 - if the DemoConfig.xml contains users or email addresses in work item a...
XamlImageConverter: Xaml Image Converter 3.2: The Visual Studio integration installer is now a VSIX extension.
Yahoo! UI Library: YUI Compressor for .Net: Version 2.1.1.0 - Sartha (BugFix): Reverted the embedding of the 2x assemblies.
Visual Studio Team Foundation Server Branching and Merging Guide: v2.1 - Visual Studio 2012: Welcome to the Branching and Merging Guide. What is new? The version-control-specific discussions have been moved from the Branching and Merging Guide to the new Advanced Version Control Guide, and both guides have been ported to the new document style. See http://blogs.msdn.com/b/willy-peter_schaub/archive/2012/10/17/alm-rangers-raising-the-quality-bar-for-documentation-part-2.aspx for more information. Quality-Bar Details: Documentatio...
D3 Loot Tracker: 1.5.5: Compatible with 1.05.
Write Once, Play Everywhere: MonoGame 3.0 (BETA): Unchanged from the October 17 summary above.
WPUtils: WPUtils 1.3: Blend SDK for Silverlight provides a HyperlinkAction which is missing in the Blend SDK for Windows Phone. This release adds such an action, which uses WebBrowserTask to show a web page. You can also bind the hyperlink to your view model. NOTE: Windows Phone SDK 7.1 or higher is required.
Windawesome: Windawesome v1.4.1 x64: Unchanged from the October 17 summary above.
CODE Framework: 4.0.21017.0: See the change log in the Documentation section for details.
Global Stock Exchange (Hobby Project): Global Stock Exchange - Invst Banking (Hobby Proj): Initial version.
Magelia WebStore Open-source Ecommerce software: Magelia WebStore 2.1: Adds support for .NET 4.0 to Magelia.Webstore.Client and StarterSite version 2.1.254.3; Scheduler Import & Export feature (for Professional and Enterprise editions); UTC datetime and timezone support; .NET 4.5 and Visual Studio 2012 migration; client Magelia global refactoring; release of a NuGet package to help developers speed up development: http://nuget.org/packages/Magelia.Webstore.Client; optimization of the data update mechanism (a.k.a. "burst"); performance improvement of the d...
JayData - The cross-platform HTML5 data-management library for JavaScript: JayData 1.2.2: Unchanged from the October 17 summary above.
VFPX: FoxcodePlus: FoxcodePlus - Visual Studio-like extensions to Visual FoxPro IntelliSense.
Droid Explorer: Droid Explorer 0.8.8.8 Beta: Unchanged from the October 17 summary above.
Fiskalizacija za developere: FiskalizacijaDev 1.0: Unchanged from the October 17 summary above.
Squiggle - A free open source LAN Messenger: Squiggle 3.2 (Development): Unchanged from the October 17 summary above.
NDatabase - C# Lightweight Object Database: NDatabase 2.0.1 Release: This release contains a stable version of NDatabase, a C# lightweight object database. Content: binaries (dll + pdb) and sources (sources, unit tests, samples). Changes: the namespaces and dll name have both changed to NDatabase2; the query API has changed and now uses generics in every possible place; fields from a class are now stored ordered by name, which keeps the database working even if someone changes the order of fields in the class definition (BREAKING CHANGE); A...
AcDown Downloader Framework: AcDown v4.2: Unchanged from the October 17 summary above.

New Projects

7COM0207 DIY Wedding Cake: DIY Wedding Cake site.
Ant: AntCms.
ArduUtilityLibrary for Arduino: ArduUtilityLibrary (AUL) is a library to assist the development of Arduino-based programs.
belloscoursework: Coursework to create a web 2.0 website that supports content creation.
Code Jumper: Code-Jumper allows you to navigate your code by displaying a map of your declarations on the side panel of your editor.
Dean's Web Scripting & Content Creation Project: Web Scripting & Content Creation project for the MSc Computer Science at Herts University.
Distributed System for Medical Providers: This is a draft circulated for medical supplies providers.
DockPanel Suite VS2012 Look: DockPanel Suite control for C# with the Visual Studio 2012 design look.
Entity Framework Json Serializer: Solves the circular reference problem when using Entity Framework with the ASP.NET MVC JsonResult.
ExpressGrid: A JavaScript grid.
FimClient: Our library - Predica.FimCommunication - for talking to FIM (Forefront Identity Manager) web services.
GetDev.NET - Mvc Talk: Sample code for a local user group talk about ASP.NET MVC.
Help Desk: System for testing MVC 3.
HospitalManagementSystem: Summary: this system handles channeling and lab reports.
jQuery Filedrop: In this demo I will demonstrate using HTML5-compatible browsers to drag and drop files and save the content into a SQL 2012 database.
JS DNN Task Manager: This is a DNN tutorial to create a task manager project.
Kinect - Finger and gesture recognition: Find fingertips and pointing direction, record and recognize finger gestures. All this with the depth stream from the Kinect sensor.
MarkusUtility: Utilities used in other projects.
MASSIVE DATA TRANSFER OPTIMIZATIONS: This project is used to find an efficient way to transfer massive data via TCP/IP.
Minesweeper by S. Joshi: A user-created version of Minesweeper.
NETFOX CATAN: very good
Prestazioni e affidabilità: Performance and reliability project for the computer science course at Ca' Foscari University of Venice.
Proyecto Mammut: This project aims to georeference the commercial offerings of the city of Manta, Ecuador through reference points based on T. Público.
Razor Exercise 1: Just an exercise....
Sannel Helpers: Various extension methods I have created.
Service Pipeline: A simple library for implementing a service pipeline with your existing service contracts.
Set Last Access: Simple console client which scans a folder tree and sets the LastAccess time from the times of the newest item in each folder.
SharePoint Web Part Replacement: A set of classes and a sample feature receiver used to replace web parts on a SharePoint site collection (SPSite).
Sitecore Image Placeholder: The Sitecore Image Placeholder module helps content editors by displaying placeholders with the correct image size when a field of type Image is empty.
SlidePuzzle: A slide puzzle.
StackAttack: StackAttack is a .NET client for use with the Stack Exchange API v2.1.
testASPsite: Nothing of interest.
testdd18102012git01: d
testdd18102012tfs02: s
testddgit10182012git03: t
Transform Manager Task for creating assets in Windows Azure Media Services: Task for Transform Manager that creates assets in Windows Azure Media Services and requests a stream locator. Used to push assets into the cloud for streaming.
Trie for C#: Implementing the Trie structure in C#/.NET.
wsccm11aah: My project for the Web Scripting and Content Creation module, submitted to Steve Bennet and Mariana Lilley.
wuggi: Wuggi is a simple website that focuses on its users' input to enrich its content. Discussions and feedback are always welcomed on Wuggi.
YahtzeePC: A Yahtzee emulator for the PC.

    Read the article

  • Oracle Solaris Zones Physical to virtual (P2V)

    - by user939057
    Introduction

This document describes the process of creating a Solaris 10 image from a physical system and migrating it into a virtualized operating system environment using the Oracle Solaris 10 Zones Physical-to-Virtual (P2V) capability. Using an example and various scenarios, this paper describes how to combine the P2V capability with other Oracle Solaris features: optimizing performance with Solaris 10 resource management, advanced storage management with Solaris ZFS, and improved operating system visibility with Solaris DTrace.

The most common use for this tool is consolidating existing systems onto virtualization-enabled platforms. Beyond that, the P2V capability can be used for other tasks, for example backing up a physical system and moving it into a virtualized operating system environment hosted at a Disaster Recovery (DR) site, or building an Oracle Solaris 10 image repository with various configurations and different software packages in order to reduce provisioning time.

Oracle Solaris Zones

Oracle Solaris Zones is a virtualization and partitioning technology supported on Oracle Sun servers powered by SPARC and Intel processors. This technology provides an isolated and secure environment for running applications. A zone is a virtualized operating system environment created within a single instance of the Solaris 10 Operating System. Each virtual system is called a zone and runs a unique and distinct copy of the Solaris 10 operating system.

Oracle Solaris Zones Physical-to-Virtual (P2V)

P2V is a new feature in Solaris 10 9/10. It provides the ability to build a Solaris 10 image from a physical system and migrate it into a virtualized operating system environment. There are three main steps when using this tool:

1. Image creation on the source system; this image includes the operating system and, optionally, the software we want to include within the image.
2. Preparing the target system by configuring a new zone that will host the new image.
3. Image installation on the target system using the image created in step 1.

The host where the image is built is referred to as the source system, and the host where the image is installed is referred to as the target system.

Benefits of Oracle Solaris Zones Physical-to-Virtual (P2V)

Simple - easy build process using Oracle Solaris 10 built-in commands.
Robust - based on Oracle Solaris Zones, a robust and well-known virtualization technology.
Flexible - supports migration from V-series servers to T- or M-series systems. For the latest server information, refer to the Sun Servers web page.

Prerequisites

The target Oracle Solaris system should be running the latest patch cluster, and the minimum Solaris version on the target system is Solaris 10 9/10. Refer to the latest Administration Guide for Oracle Solaris for a complete procedure on how to download and install Oracle Solaris.

NOTE: If the source system used to build the image runs an older version than the target system, the operating system will be upgraded to Solaris 10 9/10 during the process (update on attach).

Creating the image used to distribute the software

We will create an image on the source machine.
We can create the image on the local file system and then transfer it to the target machine, or build it on NFS shared storage and mount the NFS file system from the target machine. Optionally, before creating the image, complete the installation of any software we want to include with the Solaris 10 image.

An image is created by using the flarcreate command:

Source # flarcreate -S -n s10-system -L cpio /var/tmp/solaris_10_up9.flar

The command does the following:

-S specifies that we skip the disk space check and do not write archive size data to the archive (faster).
-n specifies the image name.
-L specifies the archive format (i.e. cpio).

Optionally, we can add descriptions to the archive identification section, which can help to identify the archive later:

Source # flarcreate -S -n s10-system -e "Oracle Solaris with Oracle DB 10.2.0.4" -a "oracle" -L cpio /var/tmp/solaris_10_up9.flar

You can see an example of the archive identification section in Appendix A: archive identification section.

We can compress the flar image using the gzip command, or by adding the -c option to the flarcreate command:

Source # gzip /var/tmp/solaris_10_up9.flar

An md5 checksum can be created for the image in order to ensure no data tampering:

Source # digest -v -a md5 /var/tmp/solaris_10_up9.flar

Moving the image onto the target system

If we created the image on the local file system, we need to transfer the flar archive from the source machine to the target machine:

Source # scp /var/tmp/solaris_10_up9.flar target:/var/tmp
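The source-side steps above can be collected into one small script. This is only a sketch under a few assumptions: the image name, archive path, and target hostname are placeholders, compression is done with flarcreate's -c option instead of a separate gzip pass, and flar info (used here just to sanity-check the archive metadata) is the flash archive inspection subcommand shipped with Solaris 10.

    #!/bin/sh
    # Hypothetical source-side P2V image build (run on the source system).
    NAME=s10-system                        # image name (placeholder)
    FLAR=/var/tmp/solaris_10_up9.flar      # archive path (placeholder)
    TARGET=target                          # target hostname (placeholder)

    # Create a compressed cpio flash archive, skipping the disk-space check
    flarcreate -S -c -n "$NAME" -L cpio "$FLAR" || exit 1

    # Record an md5 checksum so the copy can be verified on the target
    digest -v -a md5 "$FLAR" > "$FLAR.md5"

    # Sanity-check the archive identification section
    flar info "$FLAR"

    # Copy the archive and checksum to the target system
    scp "$FLAR" "$FLAR.md5" "$TARGET":/var/tmp/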
Configuring the zone on the target system

After copying the software to the target machine, we need to configure a new zone that will host the new image. To install the new zone on the target machine, first configure the zone (for the full zone creation options see: http://docs.oracle.com/cd/E18752_01/html/817-1592/index.html).

ZFS integration

A flash archive can be created on a system that is running a UFS or a ZFS root file system. NOTE: If you create a Solaris Flash archive of a Solaris 10 system that has a ZFS root, then by default the flar will actually be a ZFS send stream, which can be used to recreate the root pool. This image cannot be used to install a zone. You must create the flar with an explicit cpio or pax archive when the system has a ZFS root, using the flarcreate command with the -L archiver option and specifying cpio or pax as the method to archive the files (for example, see step 1 in the previous section).

Optionally, on the target system you can create the zone root folder on a ZFS file system in order to benefit from ZFS features (clones, snapshots, etc.):

Target # zpool create zones c2t2d0

Create the zone root folder and configure the zone:

Target # chmod 700 /zones
Target # zonecfg -z solaris10-up9-zone
solaris10-up9-zone: No such zone configured
Use 'create' to begin configuring a new zone.
zonecfg:solaris10-up9-zone> create
zonecfg:solaris10-up9-zone> set zonepath=/zones
zonecfg:solaris10-up9-zone> set autoboot=true
zonecfg:solaris10-up9-zone> add net
zonecfg:solaris10-up9-zone:net> set address=192.168.0.1
zonecfg:solaris10-up9-zone:net> set physical=nxge0
zonecfg:solaris10-up9-zone:net> end
zonecfg:solaris10-up9-zone> verify
zonecfg:solaris10-up9-zone> commit
zonecfg:solaris10-up9-zone> exit

Installing the zone on the target system using the image

Install the configured zone solaris10-up9-zone by using the zoneadm command with the install -a option and the path to the archive. The following example installs the image and sys-unconfigs the zone:

Target # zoneadm -z solaris10-up9-zone install -u -a /var/tmp/solaris_10_up9.flar
Log File: /var/tmp/solaris10-up9-zone.install_log.AJaGve
Installing: This may take several minutes...

The following example shows how we can preserve the system identity instead:

Target # zoneadm -z solaris10-up9-zone install -p -a /var/tmp/solaris_10_up9.flar

Resource management

Some applications are sensitive to the number of CPUs on the target zone. You can match the number of CPUs on the zone using the zonecfg command:

zonecfg:solaris10-up9-zone> add dedicated-cpu
zonecfg:solaris10-up9-zone:dedicated-cpu> set ncpus=16

DTrace integration

Some applications might need to be analyzed using DTrace in the target zone. You can add DTrace support to the zone using the zonecfg command:

zonecfg:solaris10-up9-zone> set limitpriv="default,dtrace_proc,dtrace_user"

Exclusive IP stack

An Oracle Solaris container running on Oracle Solaris 10 can have a shared IP stack with the global zone, or it can have an exclusive IP stack (which was released in Oracle Solaris 10 8/07). An exclusive IP stack provides a complete, tunable, manageable and independent networking stack to each zone. A zone with an exclusive IP stack can configure Scalable TCP (STCP), IP routing, IP multipathing, or IPsec. The following example shows how to configure an Oracle Solaris zone with an exclusive IP stack:

zonecfg:solaris10-up9-zone> set ip-type=exclusive
zonecfg:solaris10-up9-zone> add net
zonecfg:solaris10-up9-zone:net> set physical=nxge0

Verifying and booting the zone

When the installation completes, use the zoneadm list -i -v options to list the installed zones and verify the status:

Target # zoneadm list -i -v

See that the new zone's status is installed:

ID NAME               STATUS     PATH    BRAND   IP
 0 global             running    /       native  shared
 - solaris10-up9-zone installed  /zones  native  shared

Now boot the zone:

Target # zoneadm -z solaris10-up9-zone boot

We need to log in to the zone in order to complete the zone setup, or insert a sysidcfg file before booting the zone for the first time (see the example sysidcfg file in Appendix B):

Target # zlogin -C solaris10-up9-zone
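The target-side configuration can also be scripted rather than typed interactively, since zonecfg accepts a command file via -f. A minimal sketch under the same example values as above (zone name, zonepath, and network settings are illustrative, and the checksum comparison against the .md5 file copied earlier is left as a manual step):

    #!/bin/sh
    # Hypothetical target-side P2V install (run on the target system).
    ZONE=solaris10-up9-zone
    FLAR=/var/tmp/solaris_10_up9.flar

    # Verify the archive against the checksum recorded on the source
    digest -v -a md5 "$FLAR"    # compare by eye with $FLAR.md5

    # Write the zone configuration as a zonecfg command file
    cat > /var/tmp/$ZONE.cfg <<EOF
    create
    set zonepath=/zones
    set autoboot=true
    add net
    set address=192.168.0.1
    set physical=nxge0
    end
    verify
    commit
    EOF

    # Configure, install (sys-unconfig'ing the image), and boot the zone
    zonecfg -z "$ZONE" -f /var/tmp/$ZONE.cfg
    zoneadm -z "$ZONE" install -u -a "$FLAR"
    zoneadm -z "$ZONE" boot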
Troubleshooting

If an installation fails, review the log file. On success, the log file is in /var/log inside the zone. On failure, the log file is in /var/tmp in the global zone. If a zone installation is interrupted or fails, the zone is left in the incomplete state. Use uninstall -F to reset the zone to the configured state:

Target # zoneadm -z solaris10-up9-zone uninstall -F
Target # zonecfg -z solaris10-up9-zone delete -F

Conclusion

The Oracle Solaris Zones P2V tool provides the flexibility to build pre-configured images with different software configurations for faster deployment and server consolidation. In this document, I demonstrated how to build and install images and how to integrate the images with other Oracle Solaris features like ZFS and DTrace.

Appendix A: archive identification section

We can use the head -n 20 /var/tmp/solaris_10_up9.flar command in order to access the identification section that contains the detailed description:

Target # head -n 20 /var/tmp/solaris_10_up9.flar
FlAsH-aRcHiVe-2.0
section_begin=identification
archive_id=e4469ee97c3f30699d608b20a36011be
files_archived_method=cpio
creation_date=20100901160827
creation_master=mdet5140-1
content_name=s10-system
creation_node=mdet5140-1
creation_hardware_class=sun4v
creation_platform=SUNW,T5140
creation_processor=sparc
creation_release=5.10
creation_os_name=SunOS
creation_os_version=Generic_142909-16
files_compressed_method=none
content_architectures=sun4v
type=FULL
section_end=identification
section_begin=predeployment
begin 755 predeployment.cpio.Z

Appendix B: sysidcfg file section

Target # cat sysidcfg
system_locale=C
timezone=US/Pacific
terminal=xterms
security_policy=NONE
root_password=HsABA7Dt/0sXX
timeserver=localhost
name_service=NONE
network_interface=primary {
    hostname=solaris10-up9-zone
    netmask=255.255.255.0
    protocol_ipv6=no
    default_route=192.168.0.1
}
nfs4_domain=dynamic

We need to copy this file before booting the zone:

Target # cp sysidcfg /zones/solaris10-up9-zone/root/etc/

    Read the article

  • What is bondib1 used for on SPARC SuperCluster with InfiniBand, Solaris 11 networking & Oracle RAC?

    - by user12620111
    A co-worker asked the following question about a SPARC SuperCluster InfiniBand network:

> On the database nodes the RAC nodes communicate over the cluster_interconnect. This is the
> 192.168.10.0 network on bondib0 (according to the ./crs/install/crsconfig_params NETWORKS setting).
> What is bondib1 used for? Is it an HA counterpart in case bondib0 dies?

This is my response:

Summary: bondib1 is currently only being used for outbound cluster interconnect traffic.

Details:

bondib0 is the cluster_interconnect:

$ oifcfg getif
bondeth0  10.129.184.0  global  public
bondib0   192.168.10.0  global  cluster_interconnect
ipmpapp0  192.168.30.0  global  public

bondib0 and bondib1 are on 192.168.10.1 and 192.168.10.2 respectively:

# ipadm show-addr | grep bondi
bondib0/v4static  static   ok           192.168.10.1/24
bondib1/v4static  static   ok           192.168.10.2/24

Hostnames tied to the IPs are node1-priv1 and node1-priv2:

# grep 192.168.10 /etc/hosts
192.168.10.1    node1-priv1.us.oracle.com   node1-priv1
192.168.10.2    node1-priv2.us.oracle.com   node1-priv2

For the 4-node RAC interconnect: each node has 2 private IP addresses on the 192.168.10.0 network, and each IP address has an active InfiniBand link and a failover InfiniBand link. Thus, the 4-node RAC interconnect uses a total of 8 IP addresses and 16 InfiniBand links.

bondib1 isn't being used for the Virtual IP (VIP):

$ srvctl config vip -n node1
VIP exists: /node1-ib-vip/192.168.30.25/192.168.30.0/255.255.255.0/ipmpapp0, hosting node node1
VIP exists: /node1-vip/10.55.184.15/10.55.184.0/255.255.255.0/bondeth0, hosting node node1

bondib1 is on bondib1_0 and fails over to bondib1_1:

# ipmpstat -g
GROUP       GROUPNAME   STATE     FDT       INTERFACES
ipmpapp0    ipmpapp0    ok        --        ipmpapp_0 (ipmpapp_1)
bondeth0    bondeth0    degraded  --        net2 [net5]
bondib1     bondib1     ok        --        bondib1_0 (bondib1_1)
bondib0     bondib0     ok        --        bondib0_0 (bondib0_1)

bondib1_0 goes over net24:

# dladm show-link | grep bond
LINK                CLASS     MTU    STATE    OVER
bondib0_0           part      65520  up       net21
bondib0_1           part      65520  up       net22
bondib1_0           part      65520  up       net24
bondib1_1           part      65520  up       net23

net24 is IB partition FFFF:

# dladm show-ib
LINK         HCAGUID         PORTGUID        PORT STATE  PKEYS
net24        21280001A1868A  21280001A1868C  2    up     FFFF
net22        21280001CEBBDE  21280001CEBBE0  2    up     FFFF,8503
net23        21280001A1868A  21280001A1868B  1    up     FFFF,8503
net21        21280001CEBBDE  21280001CEBBDF  1    up     FFFF

net24 is on Express Module 9 port 2:

# dladm show-phys -L
LINK              DEVICE       LOC
net21             ibp4         PCI-EM1/PORT1
net22             ibp5         PCI-EM1/PORT2
net23             ibp6         PCI-EM9/PORT1
net24             ibp7         PCI-EM9/PORT2

Outbound traffic on the 192.168.10.0 network will be multiplexed between bondib0 and bondib1:

# netstat -rn
Routing Table: IPv4
  Destination           Gateway           Flags  Ref     Use     Interface
-------------------- -------------------- ----- ----- ---------- ---------
192.168.10.0         192.168.10.2         U        16    6551834 bondib1
192.168.10.0         192.168.10.1         U         9    5708924 bondib0

There is a lot more traffic on bondib0 than on bondib1:

# /bin/time snoop -I bondib0 -c 100 > /dev/null
Using device ipnet/bondib0 (promiscuous mode)
100 packets captured
real        4.3
user        0.0
sys         0.0
(100 packets in 4.3 seconds = 23.3 pkts/sec)

# /bin/time snoop -I bondib1 -c 100 > /dev/null
Using device ipnet/bondib1 (promiscuous mode)
100 packets captured
real       13.3
user        0.0
sys         0.0
(100 packets in 13.3 seconds = 7.5 pkts/sec)

Half of the packets on bondib0 are outbound (from self). The remaining packets are split fairly evenly among the other nodes in the cluster:

# snoop -I bondib0 -c 100 | awk '{print $1}' | sort | uniq -c
Using device ipnet/bondib0 (promiscuous mode)
100 packets captured
  49 node1-priv1.us.oracle.com
  24 node2-priv1.us.oracle.com
  14 node3-priv1.us.oracle.com
  13 node4-priv1.us.oracle.com

100% of the packets on bondib1 are outbound (from self), but the headers in the packets indicate that they are from the IP address associated with bondib0:

# snoop -I bondib1 -c 100 | awk '{print $1}' | sort | uniq -c
Using device ipnet/bondib1 (promiscuous mode)
100 packets captured
 100 node1-priv1.us.oracle.com

The destinations of the bondib1 outbound packets are split evenly between node3 and node4:

# snoop -I bondib1 -c 100 | awk '{print $3}' | sort | uniq -c
Using device ipnet/bondib1 (promiscuous mode)
100 packets captured
  51 node3-priv1.us.oracle.com
  49 node4-priv1.us.oracle.com

Conclusion: bondib1 is currently only being used for outbound cluster interconnect traffic.
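For convenience, the two snoop timings above can be repeated in one step with a small loop; this is just a convenience sketch using the same commands as the measurements above, with the link names from this particular system:

    # for link in bondib0 bondib1 ; do
    >     echo "--- $link ---"
    >     /bin/time snoop -I $link -c 100 > /dev/null
    > done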

    Read the article

  • PHP suddenly failed after IIS update

    - by James Hay
    All my application pools were stopped this morning after I got to work. I can restart them, but when I try to load the website the app pool crashes again.

Update: I've looked in the GAC as the error below suggests, and it seems that the file is not there. How do I get it back?

Update 2: I found a further error in the event log saying:

The Module name FastCgiModule path C:\WINDOWS\System32\inetsrv\iisfcgi.dll returned an error from registration. The data is the error.

So, following the information from http://forums.iis.net/t/1153937.aspx, I removed CGI and my sites are working again. This has fixed the initial problem, but now I don't have FastCGI, so I'm fairly sure that PHP will no longer be working (I don't have any PHP at the moment to test).

Original post: I'm getting this error in the event viewer:

IISMANAGER_ERROR_LOADING_PROVIDER_TYPE
IIS Manager could not load type 'Web.Management.PHP.PHPProvider, Web.Management.PHP, Version=1.2.0.0, Culture=neutral, PublicKeyToken=8175de49a9aec91d' for module provider 'PHP' that is declared in %windir%\system32\inetsrv\config\administration.config. Verify that the type is correct, and that the assembly that contains the module provider is in the Global Assembly Cache (GAC).

Exception: System.IO.FileNotFoundException: Could not load file or assembly 'Web.Management.PHP, Version=1.2.0.0, Culture=neutral, PublicKeyToken=8175de49a9aec91d' or one of its dependencies. The system cannot find the file specified.
File name: 'Web.Management.PHP, Version=1.2.0.0, Culture=neutral, PublicKeyToken=8175de49a9aec91d'
   at System.RuntimeTypeHandle._GetTypeByName(String name, Boolean throwOnError, Boolean ignoreCase, Boolean reflectionOnly, StackCrawlMark& stackMark, Boolean loadTypeFromPartialName)
   at System.RuntimeTypeHandle.GetTypeByName(String name, Boolean throwOnError, Boolean ignoreCase, Boolean reflectionOnly, StackCrawlMark& stackMark)
   at System.RuntimeType.PrivateGetType(String typeName, Boolean throwOnError, Boolean ignoreCase, Boolean reflectionOnly, StackCrawlMark& stackMark)
   at System.Type.GetType(String typeName, Boolean throwOnError)
   at Microsoft.Web.Management.Server.AdministrationModuleProvider.GetModuleProvider(String userName, String connectionName)

WRN: Assembly binding logging is turned OFF. To enable assembly bind failure logging, set the registry value [HKLM\Software\Microsoft\Fusion!EnableLog] (DWORD) to 1. Note: There is some performance penalty associated with assembly bind failure logging. To turn this feature off, remove the registry value [HKLM\Software\Microsoft\Fusion!EnableLog].

Process: InetMgr
Connection: CT211511\Administrator

Everything was working fine last night when I left work, and since they've done the maintenance it's all broken.
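For reference, the missing Web.Management.PHP assembly is the one shipped by PHP Manager for IIS, so a plausible repair path is to re-add the CGI/FastCGI feature and then reinstall PHP Manager. A hedged sketch from an elevated command prompt (the DISM feature name below is the standard one on Windows 7/Server 2008 R2 and may differ on other versions, and the GAC check is illustrative only):

    REM Re-enable IIS CGI/FastCGI support
    dism /online /enable-feature /featurename:IIS-CGI

    REM Check whether the PHP Manager assembly is back in the GAC
    dir %windir%\assembly /s /b | findstr /i "Web.Management.PHP"

    REM If it is still missing, reinstalling PHP Manager for IIS normally
    REM restores both the GAC assembly and the administration.config entry.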

    Read the article

  • Serial connection over a single USB cable (Windows to Linux, or Linux to Linux)

    - by andyortlieb
    I'm helping out with a project for an embedded device that only has USB and no serial port. This device runs Linux. These days, when we need to connect to a serial port on a device, we typically use a USB-to-serial adapter (on something like a phone system or a load-balancing device, etc.). I would like to know if it is possible to have the embedded device behave as though it were a serial adapter, thus removing the need for one. Given the nature of USB, is this approach even necessary? To recap, I would like to be able to connect a single A-to-A USB cable from my workstation (be it Windows or Linux) to this device, for the purpose of administration (especially initial setup), using minicom, PuTTY or HyperTerminal.
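For what it's worth, Linux's USB gadget framework can do exactly this when the device's USB controller supports device/OTG mode (a plain A-to-A cable between two host-mode ports will not work). A hedged sketch of the usual approach, assuming the device's kernel was built with the g_serial gadget module, that getty follows the common BusyBox argument order, and that the workstation enumerates the gadget as a CDC-ACM port (commonly /dev/ttyACM0 on Linux):

    # On the embedded device: load the serial gadget and run a login console on it
    modprobe g_serial
    /sbin/getty -L 115200 ttyGS0 vt100    # exact getty syntax varies by implementation

    # On the Linux workstation: the device shows up as a USB serial port
    minicom -D /dev/ttyACM0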

    Read the article

  • HTTP response time profiling

    - by Sparsh Gupta
    Hello. I have an nginx reverse proxy. The server is close to serving 600-700 requests per second. I have a Munin HTTP load time plugin which is outputting this: http://monitor.wingify.com/munin/visualwebsiteoptimizer.com/lb1.visualwebsiteoptimizer.com-http_loadtime.html Now, the problem is that I am seeing spikes in the graph. Expected response times should always be under 200 ms. I am keeping an eye on syslog and messages, but I am unable to figure out the actual cause of the spikes. I was wondering if there is any good HTTP response time profiling system which I can install or embed with this nginx server to get detailed reports and logs on the breakdown of where time is spent and what exactly causes the spikes. Such a profiling system would also help me understand bottlenecks and how to further reduce latency. Most important right now is to investigate the cause of the spikes in the HTTP load time graphs (a similar pattern is reported by external monitors - Pingdom) and to fix it to get consistent response times. Thanks
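As a starting point short of a full profiler, nginx itself can log per-request timing: the stock $request_time and $upstream_response_time variables separate time spent in nginx and the network from time spent waiting on the backend. A minimal sketch (the format name and log path are arbitrary choices, not nginx defaults):

    # In nginx.conf (http context)
    log_format timing '$remote_addr "$request" $status '
                      'req=$request_time upstream=$upstream_response_time';
    access_log /var/log/nginx/timing.log timing;

Requests slower than 200 ms can then be pulled out of the log, e.g. awk -F'req=' '$2+0 > 0.2' /var/log/nginx/timing.log; if req is high while upstream stays low, the latency is accumulating in nginx or the network rather than in the backend.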

    Read the article

  • EBS 12.1.1 Test Starter Kit now Available for Oracle Application Testing Suite

    - by Steven Chan
    We've discussed automated testing tools for the E-Business Suite several times on this blog, since testing is such a key part of everyone's implementation lifecycle. An important part of our testing arsenal in E-Business Suite Development is the Oracle Application Testing Suite. The Oracle Application Testing Suite (OATS) is built on the foundation of the e-TEST suite of products acquired from Empirix in 2008. The testing suite comprises:

1. Oracle Load Testing, for scalability, performance, and load testing
2. Oracle Functional Testing, for automated functional and regression testing
3. Oracle Test Manager, for test process management, test execution, and defect tracking

Oracle Application Testing Suite 9.0 has been supported for use with the E-Business Suite since 2009. I'm very pleased to let you know that our E-Business Suite Release 12.1.1 Test Starter Kit is now available for Oracle Application Testing Suite 9.1. You can download it here: Oracle Application Testing Suite Downloads

    Read the article
