Search Results

Search found 1863 results on 75 pages for 'populate'.

Page 17 of 75

  • UpdatePanel reloads the whole page

    - by Emil D
    Hi. I'm building an ASP.NET custom control containing two dropdownlists: companyIdSelection and productFamilySelection. I populate companyIdSelection in Page_Load, and I want to populate productFamilySelection depending on the item selected in companyIdSelection. I'm using UpdatePanels to achieve this, but for some reason every time I update companyIdSelection, Page_Load is called again (which, as far as I know, should happen only when the entire page is reloaded), the list is reloaded and the item the user selected is lost (the selected item is always the top one). Here's the code:

        <asp:UpdatePanel ID="updateFamilies" runat="server" UpdateMode="Always">
            <ContentTemplate>
                Company ID:<br> <br></br>
                <asp:DropDownList ID="companyIdSelection" runat="server" AutoPostBack="True"
                    OnSelectedIndexChanged="companyIdSelection_SelectedIndexChanged">
                </asp:DropDownList>
                <br></br> Product Family: <br></br>
                <asp:DropDownList ID="productFamilySelection" runat="server" AutoPostBack="True"
                    onselectedindexchanged="productFamilySelection_SelectedIndexChanged">
                </asp:DropDownList>
                <br>
            </ContentTemplate>
        </asp:UpdatePanel>

        protected void Page_Load(object sender, EventArgs e)
        {
            this.companyIdSelection.DataSource = companyIds(); // companyIds returns the object containing the initial data items
            this.companyIdSelection.DataBind();
        }

        protected void companyIdSelection_SelectedIndexChanged(object sender, EventArgs e)
        {
            // Page_Load is called again for some reason before this method is called,
            // so it resets the companyIdSelection
            EngDbService s = new EngDbService();
            productFamilySelection.DataSource = s.getProductFamilies(companyIdSelection.Text);
            productFamilySelection.DataBind();
        }

    Also, I tried setting the UpdateMode of the UpdatePanel to "Conditional" and adding an AsyncPostBack trigger, but the result was the same. What am I doing wrong? PS: I fixed the updating problem by using Page.IsPostBack in the Page_Load method, but I would still like to avoid a full postback if possible.
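
    For reference, a minimal sketch of the Page.IsPostBack guard mentioned in the PS (companyIds() is the question's own helper; the rest is an assumption about the page, not the asker's actual code):

        // Hedged sketch: bind the company list only on the first full load, so
        // partial postbacks from the UpdatePanel do not wipe the user's selection.
        protected void Page_Load(object sender, EventArgs e)
        {
            if (!Page.IsPostBack)
            {
                this.companyIdSelection.DataSource = companyIds();
                this.companyIdSelection.DataBind();
            }
        }

    Note that Page_Load always runs on an asynchronous postback as well: an UpdatePanel only limits which part of the page is re-rendered, not which parts of the page life cycle execute, so the IsPostBack check (or binding only when needed) is the usual way to keep the selection while still avoiding a full-page refresh.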

    Read the article

  • C#: Populating a UI using separate threads.

    - by Andrew
    I'm trying to make some sense out of an application I've been handed in order to track down the source of an error. There's a bit of code (simplified here) which creates four threads which in turn populate list views on the main form. Each method gets data from the database and retrieves graphics from a resource dll in order to directly populate an imagelist and listview. From what I've read on here (link), updating UI elements from any thread other than the UI thread should not be done, and yet this appears to work?

        Thread t0 = new Thread(new ThreadStart(PopulateListView1));
        t0.IsBackground = true;
        t0.Start();
        Thread t1 = new Thread(new ThreadStart(PopulateListView2));
        t1.Start();
        Thread t2 = new Thread(new ThreadStart(PopulateListView3));
        t2.Start();
        Thread t3 = new Thread(new ThreadStart(PopulateListView4));
        t3.Start();

    The error itself is a System.InvalidOperationException "Image cannot be added to the ImageList." which has me wondering if the above code is linked in some way. Is this method of populating the UI recommended, and if not, what are the possible complications resulting from it?
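
    For reference, a minimal sketch of the supported pattern, marshalling the control updates back to the UI thread with Control.Invoke (the form, control names and sample data below are made up for illustration, not taken from the application in question):

        using System;
        using System.Threading;
        using System.Windows.Forms;

        // Hedged sketch: do the slow work on a background thread, but hand the
        // ListView/ImageList updates to the UI thread via Invoke.
        public class SafePopulateForm : Form
        {
            private readonly ListView listView1 = new ListView { Dock = DockStyle.Fill };
            private readonly ImageList imageList1 = new ImageList();

            public SafePopulateForm()
            {
                listView1.SmallImageList = imageList1;
                Controls.Add(listView1);
            }

            protected override void OnShown(EventArgs e)
            {
                // Start the worker only once the form (and its handle) exists.
                base.OnShown(e);
                new Thread(PopulateListView1) { IsBackground = true }.Start();
            }

            private void PopulateListView1()
            {
                // Stand-in for the slow database / resource-DLL work.
                string[] rows = { "First", "Second", "Third" };

                // Only the UI thread may touch the controls.
                listView1.Invoke((MethodInvoker)delegate
                {
                    foreach (string text in rows)
                    {
                        listView1.Items.Add(new ListViewItem(text));
                    }
                });
            }
        }

    With this pattern the worker threads never touch the ImageList or ListView directly, which removes the kind of cross-thread access that tends to surface as intermittent exceptions such as the "Image cannot be added to the ImageList." InvalidOperationException.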

    Read the article

  • Named Range Breaks Code

    - by Daniel
    I have one workbook with several sheets. I populate the listboxes (pulling static data from cells) on the 2nd sheet, click a button and it runs fine. When I populate the listboxes with a named range, the listbox populates the way I want, but I get an error because the code thinks that I didn't select anything in the listbox, even though I did. So it passes through "" instead of "title". Is this a common issue? The named range isn't a problem because it passes through the data to the listbox and I know it's selecting data because as soon as the listbox loses focus, it spits out the contents of the cell into cell A1. What's even stranger is that I have the contents of the listbox set to Msg1. So A1 gets populated with Msg1 (what I actually selected in the listbox). But when I try and use Msg1 in the code, it tells me that Msg1 is "". Again, this only happens when I use the dynamic named range, not with static data in cells K1:K9. Private Function strEndSQL1 As String Dim strSQL As String strSQL = "" 'Create SQL statement strSQL = "FROM (SELECT * FROM dbo.Filter WHERE ID = " & TextBox1.Text & " And Source IN (" & Msg1 & ")) a FULL OUTER JOIN " strSQL = strSQL & "(SELECT * FROM dbo.Filters WHERE ID = " & TextBox2.Text & " And Source IN (" & Msg1 & ")) b " strSQL = strSQL & "ON a.Group = b.Group strEndSQL = strSQL End Function

    Read the article

  • Convert enumerated records to php object

    - by Matt H
    I have a table containing a bunch of records like this:

        +-----------+--------+----------+
        | extension | fwd_to | type     |
        +-----------+--------+----------+
        |       800 |  11111 | noanswer |
        |       800 |  12345 | uncond   |
        |       800 |  22222 | unavail  |
        |       800 |  54321 | busy     |
        |       801 |    123 | uncond   |
        +-----------+--------+----------+
        etc

    The query looks like this:

        select fwd_to, type from forwards where extension='800';

    Now I get back an array containing objects which look like the following when printed with Kohana::debug:

        (object) stdClass Object ( [fwd_to] => 11111 [type] => noanswer )
        (object) stdClass Object ( [fwd_to] => 12345 [type] => uncond )
        (object) stdClass Object ( [fwd_to] => 22222 [type] => unavail )
        (object) stdClass Object ( [fwd_to] => 54321 [type] => busy )

    What I'd like to do is convert this to an object of this form:

        (object) stdClass Object ( [busy] => 54321 [uncond] => 12345 [unavail] => 22222 [noanswer] => 11111 )

    The reason being I want to then call json_encode on it. This will allow me to use jquery populate to populate a form. Is there a suggested way I can do this nicely? I'm fairly new to PHP and I'm sure this is easy but it's eluding me at the moment.

    Read the article

  • Is this state-of-the-art Ajax/jQuery, or should I do it differently?

    - by AP257
    Hi everyone: I'm using Ajax and jQuery to do some cool client-side stuff. Basically, if the user chooses a subject from a from a drop-down list, I want to auto-populate the page with a set of book details. However, I've built the Ajax from Google results and bits of string. I don't know if what I've done is up-to-date, or whether there are now much neater ways of doing it! Here's what I have: can it be improved? $("#subjectlist").change(function() { if (window.XMLHttpRequest) { xmlhttp=new XMLHttpRequest(); } else { xmlhttp=new ActiveXObject("Microsoft.XMLHTTP"); } xmlhttp.onreadystatechange=function() { if (xmlhttp.readyState==4 && xmlhttp.status==200) { var book_details = eval(xmlhttp.responseText); alert(book_details[0]["url"]); // To be added: extra code to populate HTML results. document.getElementById("book_results").innerHTML=xmlhttp.responseText; } } xmlhttp.open("GET","/subject_json/?id=" + $("#subjectlist").val(),true); xmlhttp.send(); }); Thanks for any advice!

    Read the article

  • [javascript] Populating jsTree based on XML data uploaded to server folder

    - by PFM
    tl:dr How can I populate jsTree based on a folder location instead of an exact XML url? I'm looking for a little direction on this project. Currently I am trying to copy file structures of hard drives as XML files and recreate them using jsTree on the webserver for a completely independent version of the file structure. I have some python script that outputs XML files that are formed to jsTree and automatically uploads to a folder on the server. The problem is now I am a little lost because I have to manually enter each XML file into jsTree code for it to display so I have multiple entries like this: $("#tree1") .jstree({ "plugins" : [ "themes", "xml_data", "ui", "search", "types" ], "xml_data" : { "ajax" : { "url" : "./XML_DATA/DRIVE1.xml" }, "xsl" : "nest" }, I see in the documentation that instead of populated by the direct file the folders are populated by "server.php" but no where in the php code does it point to any directories or files. After considering the problem I thought of a few solutions and could use some advice on them: Should I be trying to write php code to automatically look through my XML_DATA folder to upload each XML file? Should I just upload all the XML to mySQL and populate my tree based on that? Should the javascript be the code looking through the server's folder for XML files? All the XML is formed the same way but the number of XML files on the server will increase and will have to be refreshed as well as they will be overwritten with changes. Any direction would be appreciated, thanks.

    Read the article

  • Parsing large txt files in ruby taking a lot of time?

    - by hershey92
    below is the code to download a txt file from internet approx 9000 lines and populate the database, I have tried a lot but it takes a lot of time more than 7 minutes. I am using win 7 64 bit and ruby 1.9.3. Is there a way to do it faster ?? require 'open-uri' require 'dbi' dbh = DBI.connect("DBI:Mysql:mfmodel:localhost","root","") #file = open('http://www.amfiindia.com/spages/NAV0.txt') file = File.open('test.txt','r') lines = file.lines 2.times { lines.next } curSubType = '' curType = '' curCompName = '' lines.each do |line| line.strip! if line[-1] == ')' curType,curSubType = line.split('(') curSubType.chop! elsif line[-4..-1] == 'Fund' curCompName = line.split(" Mutual Fund")[0] elsif line == '' next else sCode,isin_div,isin_re,sName,nav,rePrice,salePrice,date = line.split(';') sCode = Integer(sCode) sth = dbh.prepare "call mfmodel.populate(?,?,?,?,?,?,?)" sth.execute curCompName,curSubType,curType,sCode,isin_div,isin_re,sName end end dbh.do "commit" dbh.disconnect file.close 106799;-;-;HDFC ARBITRAGE FUND RETAIL PLAN DIVIDEND OPTION;10.352;10.3;10.352;29-Jun-2012 This is the format of data to be inserted in the table. Now there are 8000 such lines and how can I do an insert by combining all that and call the procedure just once. Also, does mysql support arrays and iteration to do such a thing inside the routine. Please give your suggestions.Thanks.

    Read the article

  • laying out images in UIScrollView automatically

    - by Steve Jabs
    I have a list of images retrieved from XML, and I want to lay them out in a UIScrollView in an order like this:

        1 2 3
        4 5 6
        7 8 9
        10

    If there are only 10 images it will just stop there. Right now my current code is this:

        for (int i = 3; i<[appDelegate.ZensaiALLitems count]-1; i++)
        {
            UIButton *zenbutton2 =[UIButton buttonWithType:UIButtonTypeCustom];
            Items *ZensaiPLUitems = [appDelegate.ZensaiALLitems objectAtIndex:i];
            NSURL *ZensaiimageSmallURL = [NSURL URLWithString:ZensaiPLUitems.ZensaiimageSmallURL];
            NSLog(@"FVGFVEFV :%@", ZensaiPLUitems.ZensaiimageSmallURL);
            NSData *simageData = [NSData dataWithContentsOfURL:ZensaiimageSmallURL];
            UIImage *itemSmallimage = [UIImage imageWithData:simageData];
            [zenbutton2 setImage:itemSmallimage forState:UIControlStateNormal];
            zenbutton2.frame=CGRectMake( (i*110+i*110)-660 , 300, 200, 250);
            [zenbutton2 addTarget:self action:@selector(ShowNextZensaiPage) forControlEvents:UIControlEventTouchUpInside];
            [scrollView addSubview:zenbutton2];
        }

    Notice the CGRectMake: I have to manually assign fixed values to position them. Is there any way to lay them out without manually assigning positions, e.g. so the images automatically go down a row once the first row has 3 images, and so on for the rest?
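
    For what it's worth, a minimal sketch of the row/column arithmetic (written in C# purely for illustration; the column count of 3 comes from the question, the cell sizes are placeholders):

        // Hedged sketch: derive each cell's position from its index instead of
        // hard-coding offsets: column = i % 3, row = i / 3 (integer division).
        static void CellOrigin(int index, out int x, out int y)
        {
            const int columns = 3;
            const int cellWidth = 220;    // placeholder spacing
            const int cellHeight = 270;   // placeholder spacing
            int column = index % columns; // 0, 1, 2, 0, 1, 2, ...
            int row = index / columns;    // increases after every 3 items
            x = column * cellWidth;
            y = row * cellHeight;
        }

    In the Objective-C loop the same idea would read roughly as zenbutton2.frame = CGRectMake(column * cellWidth, row * cellHeight, 200, 250), so new images wrap to the next row automatically once a row holds 3 of them.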

    Read the article

  • Create 2nd tables and add data

    - by Tyler Matema
    I have this task from school, and I am confused and lost about how I'm supposed to do it. Basically, I have to create 2 tables in the database, but I have to create them from PHP. I have created the first table, but for some reason not the second one. Then I have to populate the first and second tables with 10 and 20 sample records respectively. Populate, does that mean adding more fake users? If so, is it like the code shown below? (I get an error on the populating part for the second table as well.) Thank you so much for the help.

        <?php
        $host = "host";
        $user = "me";
        $pswd = "password";
        $dbnm = "db";
        $conn = @mysqli_connect($host, $user, $pswd, $dbnm);
        if (!$conn) die ("<p>Couldn't connect to the server!<p>");
        $selectData = @mysqli_select_db ($conn, $dbnm);
        if(!$selectData) { die ("<p>Database Not Selected</p>"); }
        //1st table
        $sql = "CREATE TABLE IF NOT EXISTS `friends` (
          `friend_id` INT NOT NULL auto_increment,
          `friend_email` VARCHAR(20) NOT NULL,
          `password` VARCHAR(20) NOT NULL,
          `profile_name` VARCHAR(30) NOT NULL,
          `date_started` DATE NOT NULL,
          `num_of_friends` INT unsigned,
          PRIMARY KEY (`friend_id`)
        )";
        //2nd table
        $sqlMyfriends = "CREATE TABLE `myfriends` (
          `friend_id1` INT NOT NULL,
          `friend_id2` INT NOT NULL,
        )";
        $query_result1 = @mysqli_query($conn, $sql);
        $query_result2 = @mysqli_query($conn, $sqlMyfriends);
        //populating 1st table
        $sqlSt3="INSERT INTO friends (friend_id, friend_email, password, profile_name, date_started, num_of_friends) VALUES('NULL','[email protected]','123','abc','2012-10-25', 5)";
        $queryResult3 = @mysqli_query($dbConnect,$sqlSt3)
        //populating 2nd table
        $sqlSt13="INSERT INTO myfriends VALUES(1,2)";
        $queryResult13=@mysqli_query($dbConnect,$sqlSt13);
        mysqli_close($conn);
        ?>

    Read the article

  • Objects leaking immediately from allocation using either new or [[Object alloc] init];

    - by Sam
    While running Instruments to find leaks in my code, after I've loaded a file and populate an NSMutableArray with new objects, leaks pop up! I am correctly releasing the objects. Sample code below: //NSMutableArray declared as a retained property in the parent class if(!mutableArray) mutableArray = [[NSMutableArray alloc] initWithCapacity:objectCount]; else [mutableArray removeAllObjects]; //Iterates through the read in data and populate the NSMutableArray for(int i = 0; i < objectCount; i++){ //Initializes a new object with data MyObject *object = [MyObject new]; //Adds the object to the mutableArray [mutableArray addObject:object]; //Releases the object [object release]; } I get a number of leaks from Instruments terminating at the addition of the 'object' into the 'mutableArray', but also including the allocation of the 'object' and the 'mutableArray'. I don't get it. Not to mention, this is happening on the first call of the enclosing method so the allocation of the NSMutableArray is being hit in the logic block, not the 'removeAllObjects' selector. Lastly, does Core Foundation have a major bug in it that randomly creates CFStrings and mismanages their memory? My code does not even use those, nor do the leaks where they occur have anything to do with my code. Almost all of my applications so far deal with OpenGL (in case anyone knows of a threading issue that arises from trying to synch the backend of the program with the front end of displaying the contents of an NSOpenGLView class or whatever it is).

    Read the article

  • actionscript find and convert text to url

    - by gravesit
    I have this script that grabs a twitter feed and displays in a little widget. What I want to do is look at the text for a url and convert that url to a link. public class Main extends MovieClip { private var twitterXML:XML; // This holds the xml data public function Main() { // This is Untold Entertainment's Twitter id. Did you grab yours? var myTwitterID= "username"; // Fire the loadTwitterXML method, passing it the url to your Twitter info: loadTwitterXML("http://twitter.com/statuses/user_timeline/" + myTwitterID + ".xml"); } private function loadTwitterXML(URL:String):void { var urlLoader:URLLoader = new URLLoader(); // When all the junk has been pulled in from the url, we'll fire finishedLoadingXML: urlLoader.addEventListener(Event.COMPLETE, finishLoadingXML); urlLoader.load(new URLRequest(URL)); } private function finishLoadingXML(e:Event = null):void { // All the junk has been pulled in from the xml! Hooray! // Remove the eventListener as a bit of housecleaning: e.target.removeEventListener(Event.COMPLETE, finishLoadingXML); // Populate the xml object with the xml data: twitterXML = new XML(e.target.data); showTwitterStatus(); } private function addTextToField(text:String,field:TextField):void{ /*Regular expressions for replacement, g: replace all, i: no lower/upper case difference Finds all strings starting with "http://", followed by any number of characters niether space nor new line.*/ var reg:RegExp=/(\b(https?|ftp|file):\/\/[-A-Z0-9+&@#\/%?=~_|!:,.;]*[-A-Z0-9+&@#\/%=~_|])/ig; //Replaces Note: "$&" stands for the replaced string. text.replace(reg,"<a href=\"$&\">$&</a>"); field.htmlText=text; } private function showTwitterStatus():void { // Uncomment this line if you want to see all the fun stuff Twitter sends you: //trace(twitterXML); // Prep the text field to hold our latest Twitter update: twitter_txt.wordWrap = true; twitter_txt.autoSize = TextFieldAutoSize.LEFT; // Populate the text field with the first element in the status.text nodes: addTextToField(twitterXML.status.text[0], twitter_txt); }

    Read the article

  • Manipulating data in sql / asp.net / c# - how?

    - by SLC
    Not sure how to word the question... Basically, so far all my SQL stuff has been stored procedures dumped into a GridView. The odd cases where I had to perform an action based on a value (such as highlighting a row green if a certain value was true) were handled while the GridView was rendering, in one of the overrides. Now, however, I have to do something far more complicated: pull three sets of data down, run a series of checks on all three plus some date-related checks, then populate a GridView with some of the items. In logic terms, I want to run three queries, store the lists of results (presumably in Lists?), run some logic, then populate the GridView. Specifically, what I don't know how to do is: the best way of pulling the data and putting it into a List or other data structure that lets me easily run through it and retrieve data by column (myList.age, or more likely, myList["Age"]). Once I have compared the data, I assume I create a new list that will be put into the GridView... how do I put the contents of a list INTO a GridView? How would I add other stuff such as buttons or checkboxes at the same time? Any nudge in the right direction would be appreciated! Particularly doing cool stuff with lists and SQL (if there is anything cool you can do with them).
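
    A minimal sketch of one common approach (the Person type, the filtering rules and the GridView1 name below are made up for illustration, not taken from the question): load each query into a List of typed objects, run the checks in plain C#, then bind the surviving items to the GridView.

        using System;
        using System.Collections.Generic;
        using System.Linq;
        using System.Web.UI.WebControls;

        // Hedged sketch: a typed row class instead of a DataTable, so the checks
        // read like normal C# (p.Age, p.Name) rather than row["Age"] casts.
        public class Person
        {
            public string Name { get; set; }
            public int Age { get; set; }
        }

        public partial class ReportPage : System.Web.UI.Page
        {
            // Normally declared for you via the .aspx markup / designer file.
            protected GridView GridView1;

            protected void Page_Load(object sender, EventArgs e)
            {
                if (IsPostBack) return;

                // Stand-ins for the three stored-procedure calls.
                List<Person> first = LoadFirstSet();
                List<Person> second = LoadSecondSet();
                List<Person> third = LoadThirdSet();

                // Run the comparison logic in memory on the three lists.
                List<Person> toDisplay = first
                    .Where(p => second.Any(s => s.Name == p.Name))   // present in both sets
                    .Where(p => !third.Any(t => t.Name == p.Name))   // but not in the third
                    .ToList();

                // Auto-generated columns pick up the public properties; buttons and
                // checkboxes are usually added as TemplateField/CheckBoxField columns
                // in the markup rather than from code.
                GridView1.DataSource = toDisplay;
                GridView1.DataBind();
            }

            private List<Person> LoadFirstSet()  { return new List<Person>(); } // placeholder
            private List<Person> LoadSecondSet() { return new List<Person>(); } // placeholder
            private List<Person> LoadThirdSet()  { return new List<Person>(); } // placeholder
        }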

    Read the article

  • How do you get and set a class property across multiple functions in Objective-C?

    - by editor
    Following up on this question about sharing objects between classes, I now need to figure out how to share the objects across various functions in a class. First, the setup: In my App Delegate I load menu information from JSON into a NSMutableDictionary and message that through to a view controller using a function called initWithData. I need to use this dictionary to populate a new Table View, which has methods like numberOfRowsInSection and cellForRowAtIndexPath. I'd like to use the dictionary count to return numberOfRowsInSection and info in the dictionary to populate each cell. Unfortunately, my code never gets beyond the init stage and the dictionary is empty so numberOfRowsInSection always returns zero. I thought I could create a class property, synthesize it and then set it. But it doesn't seem to want to retain the property's value. What am I doing wrong here? In the header .h: @interface FirstViewController:UIViewController <UITableViewDataSource, UITableViewDelegate, UITabBarControllerDelegate> { NSMutableDictionary *sectorDictionary; NSInteger sectorCount; } @property (nonatomic, retain) NSMutableDictionary *sectorDictionary; - (id)initWithData:(NSMutableDictionary*)data; @end in the implementation .m: - (id) testFunction:(NSMutableDictionary*)dictionary { NSLog(@"Count #1: %d", [dictionary count]); return nil; } - (id)initWithData:(NSMutableDictionary *)data { if (!(self=[super init])) { return nil; } [self testFunction:data]; // this is where I'd like to set a retained property self.sectorDictionary = data; return nil; } - (NSInteger)tableView:(UITableView *)tableView numberOfRowsInSection:(NSInteger)section { NSLog(@"Count #2: %d", [self.sectorDictionary count]); return [self.sectorDictionary count]; } Output from NSLog: 2010-05-04 23:00:06.255 JSONApp[15890:207] Count #1: 9 2010-05-04 23:00:06.259 JSONApp[15890:207] Count #2: 0

    Read the article

  • Performing a MYSQL query based off of $_GET results

    - by Michael N
    When a user clicks an item on my items page, it takes them to blank page template using $_GET to pass the item brand and model through. I'd like to perform another MYSQL query when that user clicks through to populate the blank page with the product details from my database. I'd like to retrieve the single row using the model number (unique ID) to populate the page with the information. I've tried a couple of things but am having a little difficulty. On my blank item page, I have $brand = $_GET['Brand']; $modelnumber = $_GET['ModelNumber']; $query = mysql_query("SELECT * FROM items WHERE `Model Number` = '$modelnumber'"); $results = mysql_fetch_row($query); echo $results; I think having ''s around Model Number is causing troubles, but without them, I get a Warning: mysql_fetch_row() expects parameter 1 to be resource, boolean given error. My database columns looks like Brand | Model Number | Price | Description | Image A few other things I have tried include $query = mysql_query("SELECT * FROM item WHERE Model Number = $_GET['ModelNumber']"); Which gave me a syntax error. I've also tried concatenating the $_GET which gives me a mysql_fetch_row() expects parameter 1 to be resource, boolean given error Which leads me to believe that I'm also going about displaying the results incorrectly. I'm not sure if I need to put it in a where loop like I have with my previous page which displays all items in the database because this is just displaying one.

    Read the article

  • Generic list/sublist handling

    - by user628661
    Let's say we have a class:

        class ComplexCls
        {
            public int Fld1;
            public string Fld2;
            //could be more fields
        }

        class Cls
        {
            public int SomeField;
        }

    and then some code:

        class ComplexClsList: List<ComplexCls>;
        ComplexClsList myComplexList;
        // fill myComplexList

        // same for Cls
        class ClsList : List<Cls>;
        ClsList myClsList;

    We want to populate myClsList from myComplexList, something like (pseudocode):

        foreach Complexitem in myComplexList
        {
            Cls ClsItem = new Cls();
            ClsItem.SomeField = ComplexItem.Fld1;
        }

    The code to do this is easy and will be put in some method of myClsList. However, I'd like to design this to be as generic as possible, for a generic ComplexCls. Note that the exact ComplexCls is known at the moment of using this code; only the algorithm should be generic. I know it can be done using (direct) reflection, but is there another solution? Let me know if the question is not clear enough (it probably isn't). [EDIT] Basically, what I need is this: having myClsList, I need to specify a DataSource (ComplexClsList) and a field from that DataSource (Fld1) that will be used to populate my SomeField.
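
    A minimal sketch of a reflection-free approach, assuming a delegate-based selector is acceptable (PopulateFrom is a made-up helper that mirrors the question's pseudocode, not an existing API):

        using System;
        using System.Collections.Generic;

        public class Cls
        {
            public int SomeField;
        }

        public class ClsList : List<Cls>
        {
            // Hedged sketch: the caller passes the data source plus a lambda that
            // says which field of the source item feeds SomeField, so the mapping
            // stays generic over TSource without any reflection.
            public void PopulateFrom<TSource>(IEnumerable<TSource> dataSource,
                                              Func<TSource, int> fieldSelector)
            {
                Clear();
                foreach (TSource item in dataSource)
                {
                    Add(new Cls { SomeField = fieldSelector(item) });
                }
            }
        }

    Usage with the question's types would then look like myClsList.PopulateFrom(myComplexList, c => c.Fld1), and pointing it at a different int-valued field only changes the lambda.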

    Read the article

  • How do you fix a MySQL “Incorrect key file” error when you can’t repair the table?

    - by Wayne M
    I'm trying to run a rather large query that is supposed to run nightly to populate a table. I'm getting an error saying Incorrect key file for table '/var/tmp/#sql_201e_0.MYI'; try to repair it but the storage engine I'm using (whatever the default is, I guess?) doesn't support repairing tables. how do I fix this so I can run the query? We are under pressure to get this table loaded for a client.

    Read the article

  • How to address a recurring low temperature error seen at every boot-up?

    - by GregC
    After updating to latest controller firmware, I started receiving the following error messages: LSI 2208 ROC: Temperature sensor below error threshold on enclosure 1 Sensors 5 thru 7 Is this something I should worry about, or is it a Red Herring? Details: I have a Sans Digital NexentaSTOR 24-disk JBOD enclosure connected to LSI 9286-8e RAID-on-Chip controller with two SAS cables. Seagate ES.2 3TB SAS hard drives populate every bay in the enclosure.

    Read the article

  • Project not publishing start/finish date for custom TFS work items

    - by pete the pagan-gerbil
    I've got a TFS 2012 project set up with custom work items, that include Start and Finish date read-only fields (Microsoft.VSTS.Scheduling.StartDate and FinishDate). When I publish one of those custom work items from within Office Project, it does not populate those fields the same way as when I publish a Task work item (builtin TFS work item). I've looked at the transitions in the work items, and also the TFS project field mapping XML file but can't find anything that explains the difference in behaviour. What am I missing?

    Read the article

  • Excel - Possible to create a sorted view of a column in one sheet on another sheet?

    - by Cumbayah
    Hi; I'm trying, in Excel 2007, to populate a column in one sheet with the data contained in a column on another sheet, so that I may provide another sorting on the data, related to that sheet only. I've tried to boil it down to being able to have a column on sheet2 automatically being populated with all rows from a column in sheet1, but I can't seem to do so. Any suggestions? Thanks in advance.

    Read the article

  • Invalid algorithm specified on Windows 2003 Server only

    - by JL
    I am decoding a file using the following method: string outFileName = zfoFileName.Replace(".zfo", "_tmp.zfo"); FileStream inFile = null; FileStream outFile = null; inFile = File.Open(zfoFileName, FileMode.Open); outFile = File.Create(outFileName); LargeCMS.CMS cms = new LargeCMS.CMS(); cms.Decode(inFile, outFile); This is working fine on my Win 7 dev machine, but on a Windows 2003 server production machine it fails with the following exception: Exception: System.Exception: CryptMsgUpdate error #-2146893816 --- System.ComponentModel.Win32Exception: Invalid algorithm specified --- End of inner exception stack trace --- at LargeCMS.CMS.Decode(FileStream inFile, FileStream outFile) Here are the classes below which I call to do the decoding, if needed I can upload a sample file for decoding, its just strange it works on Win 7, and not on Win2k3 server: using System; using System.Collections.Generic; using System.Linq; using System.Text; using System.IO; using System.Security.Cryptography; using System.Security.Cryptography.X509Certificates; using System.Runtime.InteropServices; using System.ComponentModel; namespace LargeCMS { class CMS { // File stream to use in callback function private FileStream m_callbackFile; // Streaming callback function for encoding private Boolean StreamOutputCallback(IntPtr pvArg, IntPtr pbData, int cbData, Boolean fFinal) { // Write all bytes to encoded file Byte[] bytes = new Byte[cbData]; Marshal.Copy(pbData, bytes, 0, cbData); m_callbackFile.Write(bytes, 0, cbData); if (fFinal) { // This is the last piece. Close the file m_callbackFile.Flush(); m_callbackFile.Close(); m_callbackFile = null; } return true; } // Encode CMS with streaming to support large data public void Encode(X509Certificate2 cert, FileStream inFile, FileStream outFile) { // Variables Win32.CMSG_SIGNER_ENCODE_INFO SignerInfo; Win32.CMSG_SIGNED_ENCODE_INFO SignedInfo; Win32.CMSG_STREAM_INFO StreamInfo; Win32.CERT_CONTEXT[] CertContexts = null; Win32.BLOB[] CertBlobs; X509Chain chain = null; X509ChainElement[] chainElements = null; X509Certificate2[] certs = null; RSACryptoServiceProvider key = null; BinaryReader stream = null; GCHandle gchandle = new GCHandle(); IntPtr hProv = IntPtr.Zero; IntPtr SignerInfoPtr = IntPtr.Zero; IntPtr CertBlobsPtr = IntPtr.Zero; IntPtr hMsg = IntPtr.Zero; IntPtr pbPtr = IntPtr.Zero; Byte[] pbData; int dwFileSize; int dwRemaining; int dwSize; Boolean bResult = false; try { // Get data to encode dwFileSize = (int)inFile.Length; stream = new BinaryReader(inFile); pbData = stream.ReadBytes(dwFileSize); // Prepare stream for encoded info m_callbackFile = outFile; // Get cert chain chain = new X509Chain(); chain.Build(cert); chainElements = new X509ChainElement[chain.ChainElements.Count]; chain.ChainElements.CopyTo(chainElements, 0); // Get certs in chain certs = new X509Certificate2[chainElements.Length]; for (int i = 0; i < chainElements.Length; i++) { certs[i] = chainElements[i].Certificate; } // Get context of all certs in chain CertContexts = new Win32.CERT_CONTEXT[certs.Length]; for (int i = 0; i < certs.Length; i++) { CertContexts[i] = (Win32.CERT_CONTEXT)Marshal.PtrToStructure(certs[i].Handle, typeof(Win32.CERT_CONTEXT)); } // Get cert blob of all certs CertBlobs = new Win32.BLOB[CertContexts.Length]; for (int i = 0; i < CertContexts.Length; i++) { CertBlobs[i].cbData = CertContexts[i].cbCertEncoded; CertBlobs[i].pbData = CertContexts[i].pbCertEncoded; } // Get CSP of client certificate key = (RSACryptoServiceProvider)certs[0].PrivateKey; bResult = 
Win32.CryptAcquireContext( ref hProv, key.CspKeyContainerInfo.KeyContainerName, key.CspKeyContainerInfo.ProviderName, key.CspKeyContainerInfo.ProviderType, 0 ); if (!bResult) { throw new Exception("CryptAcquireContext error #" + Marshal.GetLastWin32Error().ToString(), new Win32Exception(Marshal.GetLastWin32Error())); } // Populate Signer Info struct SignerInfo = new Win32.CMSG_SIGNER_ENCODE_INFO(); SignerInfo.cbSize = Marshal.SizeOf(SignerInfo); SignerInfo.pCertInfo = CertContexts[0].pCertInfo; SignerInfo.hCryptProvOrhNCryptKey = hProv; SignerInfo.dwKeySpec = (int)key.CspKeyContainerInfo.KeyNumber; SignerInfo.HashAlgorithm.pszObjId = Win32.szOID_OIWSEC_sha1; // Populate Signed Info struct SignedInfo = new Win32.CMSG_SIGNED_ENCODE_INFO(); SignedInfo.cbSize = Marshal.SizeOf(SignedInfo); SignedInfo.cSigners = 1; SignerInfoPtr = Marshal.AllocHGlobal(Marshal.SizeOf(SignerInfo)); Marshal.StructureToPtr(SignerInfo, SignerInfoPtr, false); SignedInfo.rgSigners = SignerInfoPtr; SignedInfo.cCertEncoded = CertBlobs.Length; CertBlobsPtr = Marshal.AllocHGlobal(Marshal.SizeOf(CertBlobs[0]) * CertBlobs.Length); for (int i = 0; i < CertBlobs.Length; i++) { Marshal.StructureToPtr(CertBlobs[i], new IntPtr(CertBlobsPtr.ToInt64() + (Marshal.SizeOf(CertBlobs[i]) * i)), false); } SignedInfo.rgCertEncoded = CertBlobsPtr; // Populate Stream Info struct StreamInfo = new Win32.CMSG_STREAM_INFO(); StreamInfo.cbContent = dwFileSize; StreamInfo.pfnStreamOutput = new Win32.StreamOutputCallbackDelegate(StreamOutputCallback); // TODO: CMSG_DETACHED_FLAG // Open message to encode hMsg = Win32.CryptMsgOpenToEncode( Win32.X509_ASN_ENCODING | Win32.PKCS_7_ASN_ENCODING, 0, Win32.CMSG_SIGNED, ref SignedInfo, null, ref StreamInfo ); if (hMsg.Equals(IntPtr.Zero)) { throw new Exception("CryptMsgOpenToEncode error #" + Marshal.GetLastWin32Error().ToString(), new Win32Exception(Marshal.GetLastWin32Error())); } // Process the whole message gchandle = GCHandle.Alloc(pbData, GCHandleType.Pinned); pbPtr = gchandle.AddrOfPinnedObject(); dwRemaining = dwFileSize; dwSize = (dwFileSize < 1024 * 1000 * 100) ? dwFileSize : 1024 * 1000 * 100; while (dwRemaining > 0) { // Update message piece by piece bResult = Win32.CryptMsgUpdate( hMsg, pbPtr, dwSize, (dwRemaining <= dwSize) ? 
true : false ); if (!bResult) { throw new Exception("CryptMsgUpdate error #" + Marshal.GetLastWin32Error().ToString(), new Win32Exception(Marshal.GetLastWin32Error())); } // Move to the next piece pbPtr = new IntPtr(pbPtr.ToInt64() + dwSize); dwRemaining -= dwSize; if (dwRemaining < dwSize) { dwSize = dwRemaining; } } } finally { // Clean up if (gchandle.IsAllocated) { gchandle.Free(); } if (stream != null) { stream.Close(); } if (m_callbackFile != null) { m_callbackFile.Close(); } if (!CertBlobsPtr.Equals(IntPtr.Zero)) { Marshal.FreeHGlobal(CertBlobsPtr); } if (!SignerInfoPtr.Equals(IntPtr.Zero)) { Marshal.FreeHGlobal(SignerInfoPtr); } if (!hProv.Equals(IntPtr.Zero)) { Win32.CryptReleaseContext(hProv, 0); } if (!hMsg.Equals(IntPtr.Zero)) { Win32.CryptMsgClose(hMsg); } } } // Decode CMS with streaming to support large data public void Decode(FileStream inFile, FileStream outFile) { // Variables Win32.CMSG_STREAM_INFO StreamInfo; Win32.CERT_CONTEXT SignerCertContext; BinaryReader stream = null; GCHandle gchandle = new GCHandle(); IntPtr hMsg = IntPtr.Zero; IntPtr pSignerCertInfo = IntPtr.Zero; IntPtr pSignerCertContext = IntPtr.Zero; IntPtr pbPtr = IntPtr.Zero; IntPtr hStore = IntPtr.Zero; Byte[] pbData; Boolean bResult = false; int dwFileSize; int dwRemaining; int dwSize; int cbSignerCertInfo; try { // Get data to decode dwFileSize = (int)inFile.Length; stream = new BinaryReader(inFile); pbData = stream.ReadBytes(dwFileSize); // Prepare stream for decoded info m_callbackFile = outFile; // Populate Stream Info struct StreamInfo = new Win32.CMSG_STREAM_INFO(); StreamInfo.cbContent = dwFileSize; StreamInfo.pfnStreamOutput = new Win32.StreamOutputCallbackDelegate(StreamOutputCallback); // Open message to decode hMsg = Win32.CryptMsgOpenToDecode( Win32.X509_ASN_ENCODING | Win32.PKCS_7_ASN_ENCODING, 0, 0, IntPtr.Zero, IntPtr.Zero, ref StreamInfo ); if (hMsg.Equals(IntPtr.Zero)) { throw new Exception("CryptMsgOpenToDecode error #" + Marshal.GetLastWin32Error().ToString(), new Win32Exception(Marshal.GetLastWin32Error())); } // Process the whole message gchandle = GCHandle.Alloc(pbData, GCHandleType.Pinned); pbPtr = gchandle.AddrOfPinnedObject(); dwRemaining = dwFileSize; dwSize = (dwFileSize < 1024 * 1000 * 100) ? dwFileSize : 1024 * 1000 * 100; while (dwRemaining > 0) { // Update message piece by piece bResult = Win32.CryptMsgUpdate( hMsg, pbPtr, dwSize, (dwRemaining <= dwSize) ? 
true : false ); if (!bResult) { throw new Exception("CryptMsgUpdate error #" + Marshal.GetLastWin32Error().ToString(), new Win32Exception(Marshal.GetLastWin32Error())); } // Move to the next piece pbPtr = new IntPtr(pbPtr.ToInt64() + dwSize); dwRemaining -= dwSize; if (dwRemaining < dwSize) { dwSize = dwRemaining; } } // Get signer certificate info cbSignerCertInfo = 0; bResult = Win32.CryptMsgGetParam( hMsg, Win32.CMSG_SIGNER_CERT_INFO_PARAM, 0, IntPtr.Zero, ref cbSignerCertInfo ); if (!bResult) { throw new Exception("CryptMsgGetParam error #" + Marshal.GetLastWin32Error().ToString(), new Win32Exception(Marshal.GetLastWin32Error())); } pSignerCertInfo = Marshal.AllocHGlobal(cbSignerCertInfo); bResult = Win32.CryptMsgGetParam( hMsg, Win32.CMSG_SIGNER_CERT_INFO_PARAM, 0, pSignerCertInfo, ref cbSignerCertInfo ); if (!bResult) { throw new Exception("CryptMsgGetParam error #" + Marshal.GetLastWin32Error().ToString(), new Win32Exception(Marshal.GetLastWin32Error())); } // Open a cert store in memory with the certs from the message hStore = Win32.CertOpenStore( Win32.CERT_STORE_PROV_MSG, Win32.X509_ASN_ENCODING | Win32.PKCS_7_ASN_ENCODING, IntPtr.Zero, 0, hMsg ); if (hStore.Equals(IntPtr.Zero)) { throw new Exception("CertOpenStore error #" + Marshal.GetLastWin32Error().ToString(), new Win32Exception(Marshal.GetLastWin32Error())); } // Find the signer's cert in the store pSignerCertContext = Win32.CertGetSubjectCertificateFromStore( hStore, Win32.X509_ASN_ENCODING | Win32.PKCS_7_ASN_ENCODING, pSignerCertInfo ); if (pSignerCertContext.Equals(IntPtr.Zero)) { throw new Exception("CertGetSubjectCertificateFromStore error #" + Marshal.GetLastWin32Error().ToString(), new Win32Exception(Marshal.GetLastWin32Error())); } // Set message for verifying SignerCertContext = (Win32.CERT_CONTEXT)Marshal.PtrToStructure(pSignerCertContext, typeof(Win32.CERT_CONTEXT)); bResult = Win32.CryptMsgControl( hMsg, 0, Win32.CMSG_CTRL_VERIFY_SIGNATURE, SignerCertContext.pCertInfo ); if (!bResult) { throw new Exception("CryptMsgControl error #" + Marshal.GetLastWin32Error().ToString(), new Win32Exception(Marshal.GetLastWin32Error())); } } finally { // Clean up if (gchandle.IsAllocated) { gchandle.Free(); } if (!pSignerCertContext.Equals(IntPtr.Zero)) { Win32.CertFreeCertificateContext(pSignerCertContext); } if (!pSignerCertInfo.Equals(IntPtr.Zero)) { Marshal.FreeHGlobal(pSignerCertInfo); } if (!hStore.Equals(IntPtr.Zero)) { Win32.CertCloseStore(hStore, Win32.CERT_CLOSE_STORE_FORCE_FLAG); } if (stream != null) { stream.Close(); } if (m_callbackFile != null) { m_callbackFile.Close(); } if (!hMsg.Equals(IntPtr.Zero)) { Win32.CryptMsgClose(hMsg); } } } } } and using System; using System.Collections.Generic; using System.Linq; using System.Text; using System.Runtime.InteropServices; using System.Security.Cryptography.X509Certificates; using System.ComponentModel; using System.Security.Cryptography; namespace LargeCMS { class Win32 { #region "CONSTS" public const int X509_ASN_ENCODING = 0x00000001; public const int PKCS_7_ASN_ENCODING = 0x00010000; public const int CMSG_SIGNED = 2; public const int CMSG_DETACHED_FLAG = 0x00000004; public const int AT_KEYEXCHANGE = 1; public const int AT_SIGNATURE = 2; public const String szOID_OIWSEC_sha1 = "1.3.14.3.2.26"; public const int CMSG_CTRL_VERIFY_SIGNATURE = 1; public const int CMSG_CERT_PARAM = 12; public const int CMSG_SIGNER_CERT_INFO_PARAM = 7; public const int CERT_STORE_PROV_MSG = 1; public const int CERT_CLOSE_STORE_FORCE_FLAG = 1; #endregion #region "STRUCTS" 
[StructLayout(LayoutKind.Sequential)] public struct CRYPT_ALGORITHM_IDENTIFIER { public String pszObjId; BLOB Parameters; } [StructLayout(LayoutKind.Sequential)] public struct CERT_ID { public int dwIdChoice; public BLOB IssuerSerialNumberOrKeyIdOrHashId; } [StructLayout(LayoutKind.Sequential)] public struct CMSG_SIGNER_ENCODE_INFO { public int cbSize; public IntPtr pCertInfo; public IntPtr hCryptProvOrhNCryptKey; public int dwKeySpec; public CRYPT_ALGORITHM_IDENTIFIER HashAlgorithm; public IntPtr pvHashAuxInfo; public int cAuthAttr; public IntPtr rgAuthAttr; public int cUnauthAttr; public IntPtr rgUnauthAttr; public CERT_ID SignerId; public CRYPT_ALGORITHM_IDENTIFIER HashEncryptionAlgorithm; public IntPtr pvHashEncryptionAuxInfo; } [StructLayout(LayoutKind.Sequential)] public struct CERT_CONTEXT { public int dwCertEncodingType; public IntPtr pbCertEncoded; public int cbCertEncoded; public IntPtr pCertInfo; public IntPtr hCertStore; } [StructLayout(LayoutKind.Sequential)] public struct BLOB { public int cbData; public IntPtr pbData; } [StructLayout(LayoutKind.Sequential)] public struct CMSG_SIGNED_ENCODE_INFO { public int cbSize; public int cSigners; public IntPtr rgSigners; public int cCertEncoded; public IntPtr rgCertEncoded; public int cCrlEncoded; public IntPtr rgCrlEncoded; public int cAttrCertEncoded; public IntPtr rgAttrCertEncoded; } [StructLayout(LayoutKind.Sequential)] public struct CMSG_STREAM_INFO { public int cbContent; public StreamOutputCallbackDelegate pfnStreamOutput; public IntPtr pvArg; } #endregion #region "DELEGATES" public delegate Boolean StreamOutputCallbackDelegate(IntPtr pvArg, IntPtr pbData, int cbData, Boolean fFinal); #endregion #region "API" [DllImport("advapi32.dll", CharSet = CharSet.Auto, SetLastError = true)] public static extern Boolean CryptAcquireContext( ref IntPtr hProv, String pszContainer, String pszProvider, int dwProvType, int dwFlags ); [DllImport("Crypt32.dll", SetLastError = true)] public static extern IntPtr CryptMsgOpenToEncode( int dwMsgEncodingType, int dwFlags, int dwMsgType, ref CMSG_SIGNED_ENCODE_INFO pvMsgEncodeInfo, String pszInnerContentObjID, ref CMSG_STREAM_INFO pStreamInfo ); [DllImport("Crypt32.dll", SetLastError = true)] public static extern IntPtr CryptMsgOpenToDecode( int dwMsgEncodingType, int dwFlags, int dwMsgType, IntPtr hCryptProv, IntPtr pRecipientInfo, ref CMSG_STREAM_INFO pStreamInfo ); [DllImport("Crypt32.dll", SetLastError = true)] public static extern Boolean CryptMsgClose( IntPtr hCryptMsg ); [DllImport("Crypt32.dll", SetLastError = true)] public static extern Boolean CryptMsgUpdate( IntPtr hCryptMsg, Byte[] pbData, int cbData, Boolean fFinal ); [DllImport("Crypt32.dll", SetLastError = true)] public static extern Boolean CryptMsgUpdate( IntPtr hCryptMsg, IntPtr pbData, int cbData, Boolean fFinal ); [DllImport("Crypt32.dll", SetLastError = true)] public static extern Boolean CryptMsgGetParam( IntPtr hCryptMsg, int dwParamType, int dwIndex, IntPtr pvData, ref int pcbData ); [DllImport("Crypt32.dll", SetLastError = true)] public static extern Boolean CryptMsgControl( IntPtr hCryptMsg, int dwFlags, int dwCtrlType, IntPtr pvCtrlPara ); [DllImport("advapi32.dll", SetLastError = true)] public static extern Boolean CryptReleaseContext( IntPtr hProv, int dwFlags ); [DllImport("Crypt32.dll", SetLastError = true)] public static extern IntPtr CertCreateCertificateContext( int dwCertEncodingType, IntPtr pbCertEncoded, int cbCertEncoded ); [DllImport("Crypt32.dll", SetLastError = true)] public static extern Boolean 
CertFreeCertificateContext( IntPtr pCertContext ); [DllImport("Crypt32.dll", SetLastError = true)] public static extern IntPtr CertOpenStore( int lpszStoreProvider, int dwMsgAndCertEncodingType, IntPtr hCryptProv, int dwFlags, IntPtr pvPara ); [DllImport("Crypt32.dll", SetLastError = true)] public static extern IntPtr CertGetSubjectCertificateFromStore( IntPtr hCertStore, int dwCertEncodingType, IntPtr pCertId ); [DllImport("Crypt32.dll", SetLastError = true)] public static extern IntPtr CertCloseStore( IntPtr hCertStore, int dwFlags ); #endregion } }

    Read the article

  • SQL SERVER – 2008 – Introduction to Snapshot Database – Restore From Snapshot

    - by pinaldave
    Snapshot database is one of the most interesting concepts that I have used at some places recently. Here is a quick definition of the subject from Books Online: A Database Snapshot is a read-only, static view of a database (the source database). Multiple snapshots can exist on a source database and always reside on the same server instance as the database. Each database snapshot is consistent, in terms of transactions, with the source database as of the moment of the snapshot’s creation. A snapshot persists until it is explicitly dropped by the database owner. If you do not know how a Snapshot database works, here is a quick note on the subject. However, please refer to the official description in Books Online for accuracy. A Snapshot database is a read-only database created from an original database called the “source database”. This database operates at the page level. When a Snapshot database is created, it is produced on sparse files; in fact, it does not occupy any space (or occupies very little space) in the Operating System. When any data page is modified in the source database, that data page is copied to the Snapshot database, which makes the sparse file size increase. When an unmodified data page is read in the Snapshot database, it actually reads the pages of the original database. In other words, the changes that happen in the source database are reflected in the Snapshot database. Let us see a simple example of Snapshot. In the following exercise, we will do a few operations. Please note that this script is for demo purposes only; there are a few considerations of CPU, DISK I/O and memory, which will be discussed in future posts.

        Create Snapshot
        Delete Data from Original DB
        Restore Data from Snapshot

    First, let us create the first Snapshot database and observe the sparse file details.

        USE master
        GO
        -- Create Regular Database
        CREATE DATABASE RegularDB
        GO
        USE RegularDB
        GO
        -- Populate Regular Database with Sample Table
        CREATE TABLE FirstTable (ID INT, Value VARCHAR(10))
        INSERT INTO FirstTable VALUES(1, 'First');
        INSERT INTO FirstTable VALUES(2, 'Second');
        INSERT INTO FirstTable VALUES(3, 'Third');
        GO
        -- Create Snapshot Database
        CREATE DATABASE SnapshotDB ON (Name ='RegularDB', FileName='c:\SSDB.ss1')
        AS SNAPSHOT OF RegularDB;
        GO
        -- Select from Regular and Snapshot Database
        SELECT * FROM RegularDB.dbo.FirstTable;
        SELECT * FROM SnapshotDB.dbo.FirstTable;
        GO

    Now let us see the resultset for the same. Now let us delete something from the Original DB and check the same details we checked before.

        -- Delete from Regular Database
        DELETE FROM RegularDB.dbo.FirstTable;
        GO
        -- Select from Regular and Snapshot Database
        SELECT * FROM RegularDB.dbo.FirstTable;
        SELECT * FROM SnapshotDB.dbo.FirstTable;
        GO

    When we check the details of the sparse file created by the Snapshot database, we will find some interesting details. The details of the Regular DB remain the same. It clearly shows that when we delete data from the Regular/Source DB, it copies the data pages to the Snapshot database. This is the reason why the size of the Snapshot DB increases. Now let us take this small exercise to the next level and restore our deleted data from the Snapshot DB to the Original Source DB.

        -- Restore Data from Snapshot Database
        USE master
        GO
        RESTORE DATABASE RegularDB FROM DATABASE_SNAPSHOT = 'SnapshotDB';
        GO
        -- Select from Regular and Snapshot Database
        SELECT * FROM RegularDB.dbo.FirstTable;
        SELECT * FROM SnapshotDB.dbo.FirstTable;
        GO
        -- Clean up
        DROP DATABASE [SnapshotDB];
        DROP DATABASE [RegularDB];
        GO

    Now let us check the details of the select statement, and we can see that we were successfully able to restore the database from the Snapshot database. We can clearly see that this is a very useful feature in case you encounter a business need for it. I would like to request the readers to suggest more details if they are using this feature in their business. Also, let me know if you think it can potentially be used to achieve any other tasks. The complete script of the aforementioned operation, for easy reference, is as follows:

        USE master
        GO
        -- Create Regular Database
        CREATE DATABASE RegularDB
        GO
        USE RegularDB
        GO
        -- Populate Regular Database with Sample Table
        CREATE TABLE FirstTable (ID INT, Value VARCHAR(10))
        INSERT INTO FirstTable VALUES(1, 'First');
        INSERT INTO FirstTable VALUES(2, 'Second');
        INSERT INTO FirstTable VALUES(3, 'Third');
        GO
        -- Create Snapshot Database
        CREATE DATABASE SnapshotDB ON (Name ='RegularDB', FileName='c:\SSDB.ss1')
        AS SNAPSHOT OF RegularDB;
        GO
        -- Select from Regular and Snapshot Database
        SELECT * FROM RegularDB.dbo.FirstTable;
        SELECT * FROM SnapshotDB.dbo.FirstTable;
        GO
        -- Delete from Regular Database
        DELETE FROM RegularDB.dbo.FirstTable;
        GO
        -- Select from Regular and Snapshot Database
        SELECT * FROM RegularDB.dbo.FirstTable;
        SELECT * FROM SnapshotDB.dbo.FirstTable;
        GO
        -- Restore Data from Snapshot Database
        USE master
        GO
        RESTORE DATABASE RegularDB FROM DATABASE_SNAPSHOT = 'SnapshotDB';
        GO
        -- Select from Regular and Snapshot Database
        SELECT * FROM RegularDB.dbo.FirstTable;
        SELECT * FROM SnapshotDB.dbo.FirstTable;
        GO
        -- Clean up
        DROP DATABASE [SnapshotDB];
        DROP DATABASE [RegularDB];
        GO

    Reference: Pinal Dave (http://blog.SQLAuthority.com) Filed under: SQL, SQL Authority, SQL Backup and Restore, SQL Data Storage, SQL Query, SQL Server, SQL Tips and Tricks, SQLServer, T SQL, Technology

    Read the article

  • SQL SERVER – SSMS: Backup and Restore Events Report

    - by Pinal Dave
    A DBA wears multiple hats and in fact does more than what the eye can see. One of the core tasks of a DBA is taking backups. This looks so trivial that most developers shrug it off as the only activity a DBA might be doing. I have huge respect for DBAs all around the world: they keep the scripting, automation and maintenance work running round the clock so the business keeps working almost 365 days, 24×7, and their real worth shows on the day the system or HDD crashes and there is an important delivery to make. That is when the backup tasks and maintenance jobs that have been running all along come in handy, and they are no longer as trivial as many consider them to be. Important questions like “When was the last backup taken?”, “How much time did the last backup take?” and “What type of backup was taken last?” are tricky to answer, and this report lands the answers in a jiffy. The SSMS report we are talking about can be used to find backup and restore operations done for the selected database. Whenever we perform any backup or restore operation, the information is stored in the msdb database. This report utilizes that information and provides details about the size, time taken and also the file location for those operations. Here is how this report can be launched.

    Once we launch this report, we can see 4 major sections, as listed below.

        Average Time Taken For Backup Operations
        Successful Backup Operations
        Backup Operation Errors
        Successful Restore Operations

    Let us look at each section next.

    Average Time Taken For Backup Operations

    Information shown in the “Average Time Taken For Backup Operations” section is taken from the backupset table in the msdb database. Here is the query and the expanded version of that particular section:

        USE msdb;
        SELECT (ROW_NUMBER() OVER (ORDER BY t1.TYPE))%2 AS l1,
               1 AS l2,
               1 AS l3,
               t1.TYPE AS [type],
               (AVG(DATEDIFF(ss,backup_start_date, backup_finish_date)))/60.0 AS AverageBackupDuration
        FROM backupset t1
        INNER JOIN sys.databases t3 ON ( t1.database_name = t3.name)
        WHERE t3.name = N'AdventureWorks2014'
        GROUP BY t1.TYPE
        ORDER BY t1.TYPE

    On my small database the time taken for the differential backup was less than a minute, hence the value of zero is displayed. This is an important piece of backup information which might help you in planning maintenance windows.

    Successful Backup Operations

    Here is the expanded version of this section. This information is derived from various backup tracking tables in the msdb database. Here is a simplified version of the query, which can be used separately as well:

        SELECT *
        FROM sys.databases t1
        INNER JOIN backupset t3 ON (t3.database_name = t1.name)
        LEFT OUTER JOIN backupmediaset t5 ON ( t3.media_set_id = t5.media_set_id)
        LEFT OUTER JOIN backupmediafamily t6 ON ( t6.media_set_id = t5.media_set_id)
        WHERE (t1.name = N'AdventureWorks2014')
        ORDER BY backup_start_date DESC, t3.backup_set_id, t6.physical_device_name;

    The report does some calculations to show the data in a more readable format. For example, the backup size is shown in KB, MB or GB. I have expanded the first row by clicking the (+) on the “Device type” column. That has shown me the path of the physical backup file. Personally, looking at this section, the Backup Size, Device Type and Backup Name are critical and worth a note. As mentioned in the previous section, this section also has the Duration embedded inside it.

    Backup Operation Errors

    This section of the report gets data from the default trace. You might wonder how. One of the events tracked by the default trace is “ErrorLog”. This means that whatever message is written to the errorlog gets written to the default trace file as well. Interestingly, whenever there is a backup failure, an error message is written to the ERRORLOG and hence to the default trace. This section takes advantage of that and shows the information. We can read the message below under this section, which confirms the above logic.

    No backup operations errors occurred for (AdventureWorks2014) database in the recent past or default trace is not enabled.

    Successful Restore Operations

    This section may not be very useful on a production server (do you perform restores of databases there?) but might be useful in development and log shipping secondary environments, where we might be interested in seeing restore operations for a particular database. Here is the expanded version of the section. To fill this section of the report, I have restored the same backups which were taken to populate the earlier sections. Here is a simplified version of the query used to populate this output:

        USE msdb;
        SELECT *
        FROM restorehistory t1
        LEFT OUTER JOIN restorefile t2 ON ( t1.restore_history_id = t2.restore_history_id)
        LEFT OUTER JOIN backupset t3 ON ( t1.backup_set_id = t3.backup_set_id)
        WHERE t1.destination_database_name = N'AdventureWorks2014'
        ORDER BY restore_date DESC, t1.restore_history_id, t2.destination_phys_name

    Have you ever looked at the backup strategy of your key databases? Is it in sync, and do we have scope for improvements? Then this is the report to analyze after a week or month of maintenance plans running in your database. Do chime in with the strategies you are using in your environments. Reference: Pinal Dave (http://blog.sqlauthority.com) Filed under: PostADay, SQL, SQL Authority, SQL Backup and Restore, SQL Query, SQL Server, SQL Server Management Studio, SQL Tips and Tricks, T SQL Tagged: SQL Reports

    Read the article

  • Sorting: TransientVO Vs Query/EO based VO

    - by Vijay Mohan
    In ADF, you can sort VO rows by invoking the setSortBy("VOAttrName") API. The tricky part is that this API actually appends a clause to the VO query at runtime, and the actual sorting is performed after calling VO.executeQuery(); this works fine for a Query/EO-based VO. But how about a transient VO, where the rows are populated programmatically? There is a way to do it. :) You can specify the query mode on your transient VO so that the sorting happens on the already populated VO rows. Here are the steps to go about it:

        //Populate your transient VO rows.
        //VO.setSortBy("YourVOAttrName");
        //VO.setQueryMode(ViewObject.QUERY_MODE_SCAN_VIEW_ROWS);
        //VO.executeQuery();

    So here executeQuery() is actually the trigger that performs the sorting of the VO rows. The QUERY_MODE_SCAN_VIEW_ROWS flag makes sure that the sorting is performed on the already populated VO cache.

    Read the article
