Search Results

Search found 2480 results on 100 pages for 'dave jarvis'.


  • Serializing a DataType="time" field using XmlSerializer

    - by CraftyFella
    Hi, I'm getting an odd result when serializing a DateTime field using XmlSerializer. I have the following class: public class RecordExample { [XmlElement("TheTime", DataType = "time")] public DateTime TheTime { get; set; } [XmlElement("TheDate", DataType = "date")] public DateTime TheDate { get; set; } public static bool Serialize(Stream stream, object obj, Type objType, Encoding encoding) { try { using (var writer = XmlWriter.Create(stream, new XmlWriterSettings { Encoding = encoding })) { var xmlSerializer = new XmlSerializer(objType); if (writer != null) xmlSerializer.Serialize(writer, obj); } return true; } catch (Exception) { return false; } } } When I use the XmlSerializer with the following test code: var obj = new RecordExample { TheDate = DateTime.Now.Date, TheTime = new DateTime(0001, 1, 1, 12, 00, 00) }; var ms = new MemoryStream(); RecordExample.Serialize(ms, obj, typeof(RecordExample), Encoding.UTF8); txtSource2.Text = Encoding.UTF8.GetString(ms.ToArray()); I get some strange results; here's the XML that is produced: <?xml version="1.0" encoding="utf-8"?> <RecordExample xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xmlns:xsd="http://www.w3.org/2001/XMLSchema"> <TheTime>12:00:00.0000000+00:00</TheTime> <TheDate>2010-03-08</TheDate> </RecordExample> Any ideas how I can get the "TheTime" element to contain a time which looks more like this: <TheTime>12:00:00.0Z</TheTime> ...as that's what I was expecting? Thanks Dave
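
    A workaround commonly suggested for this kind of formatting mismatch (a sketch, not necessarily the only fix): hide the DateTime from the serializer and expose a shim string property that formats the value exactly the way you want on the wire. The property name TheTimeXml below is hypothetical:

        using System;
        using System.Globalization;
        using System.Xml.Serialization;

        public class RecordExample
        {
            // Hidden from the serializer; the rest of the code keeps using this as before.
            [XmlIgnore]
            public DateTime TheTime { get; set; }

            // Serialized in TheTime's place, with an explicit UTC-style format.
            [XmlElement("TheTime")]
            public string TheTimeXml
            {
                get { return TheTime.ToString("HH:mm:ss.f", CultureInfo.InvariantCulture) + "Z"; }
                set
                {
                    TheTime = DateTime.ParseExact(value.TrimEnd('Z'), "HH:mm:ss.f",
                                                  CultureInfo.InvariantCulture);
                }
            }

            [XmlElement("TheDate", DataType = "date")]
            public DateTime TheDate { get; set; }
        }

    With the sample value above, this emits <TheTime>12:00:00.0Z</TheTime> instead of the offset-suffixed form.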


  • How do I test database-related code with NUnit?

    - by Michael Haren
    I want to write unit tests with NUnit that hit the database. I'd like to have the database in a consistent state for each test. I thought transactions would allow me to "undo" each test so I searched around and found several articles from 2004-05 on the topic: http://weblogs.asp.net/rosherove/archive/2004/07/12/180189.aspx http://weblogs.asp.net/rosherove/archive/2004/10/05/238201.aspx http://davidhayden.com/blog/dave/archive/2004/07/12/365.aspx http://haacked.com/archive/2005/12/28/11377.aspx These seem to revolve around implementing a custom attribute for NUnit which builds in the ability to roll back DB operations after each test executes. That's great but... Does this functionality exist somewhere in NUnit natively? Has this technique been improved upon in the last 4 years? Is this still the best way to test database-related code? Edit: it's not that I want to test my DAL specifically, it's more that I want to test pieces of my code that interact with the database. For these tests to be "no-touch" and repeatable, it'd be awesome if I could reset the database after each one. Further, I want to ease this into an existing project that has no testing in place at the moment. For that reason, I can't practically script up a database and data from scratch for each test.
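
    For what it's worth, NUnit itself still has no built-in database rollback, but the custom-attribute trick in those articles can be replaced by a plain System.Transactions.TransactionScope opened in SetUp and disposed without Complete() in TearDown. A minimal sketch, assuming the code under test opens its own connections (which then enlist in the ambient transaction automatically); the fixture name is hypothetical:

        using System.Transactions;
        using NUnit.Framework;

        [TestFixture]
        public class OrderRepositoryTests
        {
            private TransactionScope _scope;

            [SetUp]
            public void SetUp()
            {
                // Every connection opened while this scope is active enlists in it.
                _scope = new TransactionScope();
            }

            [TearDown]
            public void TearDown()
            {
                // Disposing without calling Complete() rolls everything back,
                // leaving the database exactly as it was before the test.
                _scope.Dispose();
            }

            [Test]
            public void SavingAnOrder_WritesARow()
            {
                // ... call the database-touching code under test here ...
            }
        }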


  • Are Fortran control characters (carriage control) still implemented in compilers?

    - by CmdrGuard
    In the book Fortran 95/2003 for Scientists and Engineers, there is much talk given to the importance of recognizing that the first column in a format statement is reserved for control characters. I've also seen control characters referred to as carriage control on the internet. To avoid confusion, by control characters, I refer to the characters "1, a blank (i.e. \s), 0, and +" as having an effect on the vertical spacing of output when placed in the first column (character) of a FORMAT statement. Also, see this text-only web page written entirely in fixed-width typeface: Fortran carriage-control (because nothing screams accuracy and antiquity better than prose in monospaced font). I found this page and others like it to be not quite clear. According to Fortran 95/2003 for Scientists and Engineers, failure to recall that the first column is reserved for carriage control can lead to horrible unintended output. Paraphrasing Dave Barry, type the wrong character, and nuclear missiles get fired at Norway. However, when I attempt to adhere to this stern warning, I find that gfortran has no idea what I'm talking about. Allow me to illustrate my point with some example code. I am trying to print out the number Pi: PROGRAM test_format IMPLICIT NONE REAL :: PI = 2 * ACOS(0.0) WRITE (*, 100) PI WRITE (*, 200) PI WRITE (*, 300) PI 100 FORMAT ('1', "New page: ", F11.9) 200 FORMAT (' ', "Single Space: ", F11.9) 300 FORMAT ('0', "Double Space: ", F11.9) END PROGRAM test_format This is the output: 1New page: 3.141592741 Single Space: 3.141592741 0Double Space: 3.141592741 The "1" and "0" are not typos. It appears that gfortran is completely ignoring the control character column. My question, then, is this: Are control characters still implemented in standards-compliant compilers, or is gfortran simply not standards-compliant? For clarity, here is the output of gfortran -v: Using built-in specs. Target: powerpc-apple-darwin9 Configured with: ../gcc-4.4.0/configure --prefix=/sw --prefix=/sw/lib/gcc4.4 --mandir=/sw/share/man --infodir=/sw/share/info --enable-languages=c,c++,fortran,objc,java --with-gmp=/sw --with-libiconv-prefix=/sw --with-ppl=/sw --with-cloog=/sw --with-system-zlib --x-includes=/usr/X11R6/include --x-libraries=/usr/X11R6/lib --disable-libjava-multilib --build=powerpc-apple-darwin9 --host=powerpc-apple-darwin9 --target=powerpc-apple-darwin9 Thread model: posix gcc version 4.4.0 (GCC)


  • C# ASP.NET Binding Controls via Generic Method

    - by OverTech
    I have a few web applications that I maintain and I find myself very often writing the same block of code over and over again to bind a GridView to a data source. I'm trying to create a generic method to handle data binding but I'm having trouble getting it to work with Repeaters and DataLists. Here is the generic method I have so far: public void BindControl<T>(T control, SqlCommand sql) where T : System.Web.UI.WebControls.BaseDataBoundControl { cmd = sql; cn.Open(); SqlDataReader dr = cmd.ExecuteReader(); if (dr.HasRows) { control.DataSource = dr; control.DataBind(); } dr.Close(); cn.Close(); } That way I can just define my CommandText and then make a call to "BindControl(myGridView, cmd)" instead of retyping this same basic block of code every time I need to bind a grid. The problem is, this doesn't work with Repeaters or DataLists. Each of these controls inherits its respective "DataSource" and "DataBind" members from a different class. Someone on another forum suggested that I implement an interface, but I'm not sure how to make that work either. The GridView, DataList and Repeater get their respective "DataBind" methods from the BaseDataBoundControl, BaseDataList, and Repeater classes. How would I go about creating a single interface to tie them all together? Or am I better off just using 3 overloads for this method? Dave
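
    One commonly suggested way around the split class hierarchies (a sketch, not the only option): DataBind() is already defined on the common System.Web.UI.Control base class, so only the DataSource property needs reflection, and a single non-generic helper can then cover GridView, Repeater and DataList alike. Unlike the original, this sketch takes the connection as a parameter rather than a field:

        using System;
        using System.Data.SqlClient;
        using System.Web.UI;

        public static class BindingHelper
        {
            // Binds any control that exposes a DataSource property
            // (GridView, Repeater, DataList, ...) to the given command.
            public static void BindControl(Control control, SqlCommand cmd, SqlConnection cn)
            {
                var dataSource = control.GetType().GetProperty("DataSource");
                if (dataSource == null)
                    throw new ArgumentException("Control has no DataSource property", "control");

                cn.Open();
                try
                {
                    using (SqlDataReader dr = cmd.ExecuteReader())
                    {
                        if (dr.HasRows)
                        {
                            dataSource.SetValue(control, dr, null);
                            control.DataBind(); // defined on Control itself
                        }
                    }
                }
                finally
                {
                    cn.Close();
                }
            }
        }

    The reflection lookup trades a little compile-time safety for one method instead of three overloads; if that bothers you, the three-overload version is the type-safe alternative.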


  • WordPress Get 1 attachment per post from X category outside the loop

    - by davebowker
    Hey, hope someone can help with this. What I want is to display 5 attachments, but only 1 attachment from each post from a specific category, in the sidebar, with each linking to the post's permalink. I'm using the following code so far, which gets all attachments from all posts, but some posts have more than 1 attachment and I just want to show the first one, and link it to the permalink of the post. So although the limit is 5 posts, if one post has 4 attachments then currently it will show 4 from one and 1 from the other, totalling 5, when what I want it to do is just show 1 from each of 5 different posts. <?php $args = array( 'post_type' => 'attachment', 'numberposts' => 5, 'post_status' => null, 'post_parent' => null, // any parent 'category_name' => 'work', ); $attachments = get_posts($args); if ($attachments) { foreach ($attachments as $post) { setup_postdata($post); the_title(); the_permalink(); the_attachment_link($post->ID, false); the_excerpt(); } } ?> Cheers. Dave


  • Multiple page technique

    - by the Hampster
    Six years ago I wrote a large web application. It was my very first PHP program, and it grew, and grew. I used a lot of forms, and some atrocious techniques to give an AJAX "look" to the site, and to handle multiple users with various permissions. I am looking to expand this, and sell it to other companies in the field, but I need to seriously modernize it. Currently, it loads index.php where you log in. The user can then select from various "students" or "participants" (We work with people with disabilities). From there, you can select different lesson plans or life skills that are being worked on. There are many other branches as well. At each stage, a different page is served: Sel_Paricipant.php, Sel_Program.php, etc. Are there any fundamental problems with doing it this way? If so, what are some good techniques for eliminating this, so that people only see the TLD in their address bar? Thanks in advance. --Dave


  • Byte-Pairing for data compression

    - by user1669533
    Question about byte-pairing for data compression. If byte pairing converts two byte values to a single byte value, splitting the file in half, then taking a gig file and recursing it 16 times shrinks it to 62,500,000. My question is, is byte-pairing really efficient? Is the creation of a 5,000,000 iteration loop, to be conservative, efficient? I would like some feedback and some incisive opinions, please. Dave, what I read was: "The US patent office no longer grants patents on perpetual motion machines, but has recently granted at least two patents on a mathematically impossible process: compression of truly random data." I was not implying the Patent Office was actually considering what I am inquiring about. I was merely commenting on the notion of a "mathematically impossible process." If someone has, in some way, created a method of having a "single" data byte as a placeholder for 8 individual bytes of data, that would be a consideration for a patent. Now, about the mathematical impossibility of an 8 to 1 compression method: it is not so much a mathematical impossibility, but a series of rules and conditions that can be created. As long as there is the rule of 8 or 16 bit representation of storing data on a medium, there are ways to manipulate data that mirror current methods, or creation by a new way of thinking.
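
    For reference, the "mathematically impossible" claim quoted above is the standard counting (pigeonhole) argument, which can be stated in two lines:

        \text{Files of exactly } n \text{ bytes: } 256^n.
        \qquad
        \text{Files shorter than } n \text{ bytes: } \sum_{k=0}^{n-1} 256^k = \frac{256^n - 1}{255} < 256^n.

    Since there are strictly fewer shorter files than n-byte files, no lossless (invertible) scheme can map every n-byte input to a shorter output; some inputs must stay the same size or grow, so no rule set can halve arbitrary data on every pass, let alone achieve 8-to-1 on truly random input.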


  • table subtraction challenge

    - by Valentin
    I have a challenge that I haven’t overcome in the last two days using stored procedures and SQL 2008. I took several approaches but most fell short. One very interesting approach was using a table subtraction. It’s really all about table subtraction. I was wondering if you could help me crack this one. Here is the challenge: Two tables, 1Testdb and 2Testdb. My first step was to select ID relationships ([2Testdb].Acc_id) on table 2Testdb for one given individual ([2Testdb].Bus_id). Then query table 1Testdb for records not matching my original selection from 2Testdb. But other approaches are welcome. Data and structures: USE [Challengedb] GO SET ANSI_NULLS ON GO SET QUOTED_IDENTIFIER ON GO CREATE TABLE [dbo].[1Testdb]( [Acc_id] [uniqueidentifier] NULL, [Name] [varchar](10) NULL ) ON [PRIMARY] GO CREATE TABLE [dbo].[2Testdb]( [Acc_id] [uniqueidentifier] NULL, [Bus_id] [uniqueidentifier] NULL ) ON [PRIMARY] GO Records in 1Testdb: 34455F60-9474-4521-804E-66DB39A579F3, John C23523F6-2309-4F58-BB3F-EF7486C7AF8B, Pete DC711615-3BE4-4B31-9EF2-B1314185CA62, Dave E3AAB073-2398-476D-828B-92829F686A4C, Adam Records in 2Testdb (relationship table, e.g. friend relationships): Record #1: DC711615-3BE4-4B31-9EF2-B1314185CA62, 34455F60-9474-4521-804E-66DB39A579F3 Record #2: E3AAB073-2398-476D-828B-92829F686A4C, 34455F60-9474-4521-804E-66DB39A579F3 Record #3: DC711615-3BE4-4B31-9EF2-B1314185CA62, E3AAB073-2398-476D-828B-92829F686A4C Record #4: E3AAB073-2398-476D-828B-92829F686A4C, DC711615-3BE4-4B31-9EF2-B1314185CA62 Challenge: Select from table 1Testdb only those distinct records that do not have a relationship with John [34455F60-9474-4521-804E-66DB39A579F3] in table 2Testdb. Expected result should be (Who does John not have a relationship with?): C23523F6-2309-4F58-BB3F-EF7486C7AF8B, Pete Thank you, Valentin


  • Event not triggering

    - by the Hampster
    I have no idea where to start on this one. I have a <div> that does not appear until a button is clicked. This function call works: onclick="highlight('mod_sup_div', true);" function highlight(aDiv,show) { if (show) { Effect.Appear('Overlay',{duration: 0.5, to: .80}); Effect.Appear(aDiv,{duration: 0.5}) } else { Effect.Fade('Overlay',{duration: 0.5, to: .80}); Effect.Fade(aDiv,{duration: 0.5}) } } In the <div> I have a button to close the window. <p class="closer"><span onclick="highlight('mod_sup_div',false)">X</span></p> This does not work. The function is not even called, as I made an alert() the first line of the function and it does nothing. What is odd is that onclick="Effect.Fade(aDiv,{duration: 0.5})" does work. Other simple JavaScript functions in the onclick="" work, except for the function call. Any help as to why this is happening would be very appreciated. Thanks, Dave


  • Presenting collection of structures of strings in grid or similar in WPF - Example? Ideas?

    - by Andrew
    Hi, I have a collection of structures. The structure is just some strings. Example: public struct ReportLine { public string Name; public string Address; public string Phone; ... // about 10 other strings } I can't change this part. What I want to do is display it in a simple grid or similar in WPF. My only requirements are: a) need column headers b) rows must alternate in color c) columns big enough to hold the largest datum (which is not known until runtime). Can someone point me to an example to get me started? Is the GridView the way to go? Or DataGrid? Or perhaps just the Grid? I have the book Pro WPF in C# 2008 and it covers binding ListBoxes to collections, but the collections always seem to be collections of one field (e.g. a collection of 40 names). Here I have a collection (an array in fact) of a structure. How do I set up the databinding? As you can see, I'm new to this and would probably most benefit from a reference to an article. I've found articles covering collections of one field, but no examples covering binding to an array of structures. Also, my initial research indicates c) can't be easily done, if you demand that the column is big enough for all the data in that column, not just the visible data. Thanks, Dave
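
    One detail worth flagging before choosing a control (a sketch under the assumption that ReportLine cannot change): WPF data binding only sees public properties, not public fields, so binding a ListView directly to a ReportLine[] will produce empty cells. A thin wrapper class with one property per field fixes that; the name ReportLineRow below is hypothetical:

        using System.Collections.Generic;
        using System.Linq;

        public class ReportLineRow
        {
            private readonly ReportLine _line;
            public ReportLineRow(ReportLine line) { _line = line; }

            // WPF binds to properties, so each struct field is re-exposed here.
            public string Name    { get { return _line.Name; } }
            public string Address { get { return _line.Address; } }
            public string Phone   { get { return _line.Phone; } }
            // ...and so on for the other ~10 strings...
        }

        // In the window's code-behind, after loading the data (LoadReport is hypothetical):
        //   ReportLine[] data = LoadReport();
        //   myListView.ItemsSource = data.Select(l => new ReportLineRow(l)).ToList();

    With that in place, a ListView using a GridView view gives you column headers and content-sized columns by default, and alternating row colors can be done with ItemsControl.AlternationCount plus a row style triggered on AlternationIndex.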


  • Changing type of object in a conditional

    - by David Doria
    I'm having a bit of trouble with dynamic_cast. I need to determine at runtime the type of an object. Here is a demo: #include <iostream> #include <string> class PersonClass { public: std::string Name; virtual void test(){}; // it is annoying that this has to be here... }; class LawyerClass : public PersonClass { public: void GoToCourt(){}; }; class DoctorClass : public PersonClass { public: void GoToSurgery(){}; }; int main(int argc, char *argv[]) { PersonClass* person = new PersonClass; if(true) { person = dynamic_cast<LawyerClass*>(person); } else { person = dynamic_cast<DoctorClass*>(person); } person->GoToCourt(); return 0; } I would like to do the above. The only legal way I found to do it is to define all of the objects beforehand: PersonClass* person = new PersonClass; LawyerClass* lawyer; DoctorClass* doctor; if(true) { lawyer = dynamic_cast<LawyerClass*>(person); } else { doctor = dynamic_cast<DoctorClass*>(person); } if(true) { lawyer->GoToCourt(); } The main problem with this (besides having to define a bunch of objects that won't be used) is that I have to change the name of the 'person' variable. Is there a better way? (I am not allowed to change any of the classes (Person, Lawyer, or Doctor) because they are part of a library that people who will use my code have and won't want to change.) Thanks, Dave


  • Create Extension Method to Produce Open & Closing Tags like Html.BeginForm()

    - by DaveDev
    Hi guys, I wonder if it's possible to create an extension method which has functionality & behaviour similar to Html.BeginForm(), in that it would generate a complete HTML tag, and I could specify its contents inside <% { and } %> tags. For example, I could have a view like: <% using(Html.BeginDiv("divId")) %> <% { %> <!-- Form content goes here --> <% } %> This capability would be very useful in the context of the functionality I'm trying to produce with the example in this question. This would give me the ability to create containers for the types that I'll be working with: <% var myType = new MyType(123, 234); %> <% var tag = new TagBuilder("div"); %> <% using(Html.BeginDiv<MyType>(myType, tag)) %> <% { %> <!-- controls used for the configuration of MyType --> <!-- represented in the context of a HTML element, e.g.: --> <div class="MyType" prop1="123" prop2="234"> <!-- add a select here --> <!-- add a radio control here --> <!-- whatever, it represents elements in the context of their type --> </div> <% } %> I realise this will produce invalid XHTML, but I think there could be other benefits that outweigh this, especially since this project doesn't require that the XHTML validate to the W3C standards. Thanks Dave
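
    It is possible, and the trick is the same one Html.BeginForm() uses: write the opening tag immediately and return an IDisposable whose Dispose() writes the closing tag when the using block ends. A minimal sketch (MVC 2-style, writing to ViewContext.Writer; the class names are mine, and the generic BeginDiv<MyType> overload from the question could build its attribute string from myType via the TagBuilder before writing):

        using System;
        using System.IO;
        using System.Web.Mvc;

        public static class HtmlDivExtensions
        {
            public static IDisposable BeginDiv(this HtmlHelper html, string id)
            {
                TextWriter writer = html.ViewContext.Writer;
                writer.Write("<div id=\"" + id + "\">");   // opening tag now
                return new EndTag(writer, "div");          // closing tag on Dispose
            }

            private sealed class EndTag : IDisposable
            {
                private readonly TextWriter _writer;
                private readonly string _tagName;

                public EndTag(TextWriter writer, string tagName)
                {
                    _writer = writer;
                    _tagName = tagName;
                }

                public void Dispose()
                {
                    _writer.Write("</" + _tagName + ">");
                }
            }
        }

    With this in place, <% using(Html.BeginDiv("divId")) { %> ... <% } %> renders the block's contents inside the div, exactly like the BeginForm() pattern.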


  • How do you send an extended-ascii AT-command (CCh) from Android bluetooth to a serial device?

    - by softex
    This one really has me banging my head. I'm sending alphanumeric data from an Android app, through the BluetoothChatService, to a serial Bluetooth adaptor connected to the serial input of a radio transceiver. Everything works fine except when I try to configure the radio on-the-fly with its AT commands. The AT+++ (enter command mode) is received OK, but the problem comes with the extended-ASCII characters in the next two commands: changing the radio destination address (which is what I'm trying to do) requires CCh 10h (plus 3 hex radio address bytes), and exiting command mode requires CCh ATO. I know the radio can be configured OK because I've done it on an earlier prototype with the serial commands from PIC Basic, and it can also be configured by entering the commands directly from HyperTerminal. Both these methods somehow convert that pesky CCh into a form the radio understands. I have tried just about everything an Android noob could possibly come up with to finagle the encoding, such as: private void command_address() { byte[] addrArray = {(byte) 0xCC, 16, 36, 65, 21, 13}; CharSequence addrvalues = EncodingUtils.getString(addrArray, "UTF-8"); sendMessage((String) addrvalues); } but no matter what, I can't seem to get that high-order byte (CCh/204/-52) to behave as it should. All other (< 127) bytes, command or data, transmit with no problem. Any help here would be greatly appreciated. -Dave
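
    The usual culprit in this situation is the byte-to-string-to-byte round trip itself: 0xCC is not a valid single-byte character in UTF-8 (it is a lead byte of a two-byte sequence), so decoding it yields the replacement character U+FFFD, which re-encodes as EF BF BD instead of CC, and the radio rejects it. The safe pattern is to keep AT commands as a raw byte[] all the way to the output stream. A small illustration of the corruption (shown in C#, since the digest leans that way, but the same thing happens with Java's String/getBytes on Android):

        using System;
        using System.Text;

        class RoundTripDemo
        {
            static void Main()
            {
                byte[] command = { 0xCC, 0x10 };  // extended-ASCII AT command bytes

                // Decoding: 0xCC is an incomplete UTF-8 sequence, so it becomes U+FFFD.
                string asText = Encoding.UTF8.GetString(command);

                // Re-encoding: U+FFFD serializes as EF BF BD -- the 0xCC is gone.
                byte[] back = Encoding.UTF8.GetBytes(asText);

                Console.WriteLine(BitConverter.ToString(back)); // prints "EF-BF-BD-10"
            }
        }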


  • Sqlite3 INSERT INTO Question

    - by user316717
    Hi, my 1st post. I am creating an exercise app that will record the weight used and the number of "reps" the user did in 4 "sets" per day over a period of 7 days, so the user may view their progress. I have built the database table named FIELDS with 2 columns, ROW and FIELD_DATA, and I can use the code below to load the data into the db. But the code has a SQL statement that says: INSERT OR REPLACE INTO FIELDS (ROW, FIELD_DATA) VALUES (%d, '%@'); When I change the statement to: INSERT INTO FIELDS (ROW, FIELD_DATA) VALUES (%d, '%@'); nothing happens. That is, no data is recorded in the db. Below is the code: #define kFilname @"StData.sqlite3" - (NSString *)dataFilePath { NSArray *paths = NSSearchPathForDirectoriesInDomains (NSDocumentDirectory, NSUserDomainMask, YES); NSString *documentsDirectory = [paths objectAtIndex:0]; return [documentsDirectory stringByAppendingPathComponent:kFilname]; } -(IBAction)saveData:(id)sender; { for (int i = 1; i <= 8; i++) { NSString *fieldName = [[NSString alloc]initWithFormat:@"field%d", i]; UITextField *field = [self valueForKey:fieldName]; [fieldName release]; NSString *insert = [[NSString alloc] initWithFormat: @"INSERT OR REPLACE INTO FIELDS (ROW, FIELD_DATA) VALUES (%d, '%@');",i, field.text]; // sqlite3_stmt *stmt; char *errorMsg; if (sqlite3_exec (database, [insert UTF8String], NULL, NULL, &errorMsg) != SQLITE_OK) { // NSAssert1(0, @"Error updating table: %s", errorMsg); sqlite3_free(errorMsg); } } sqlite3_close(database); } So how do I modify the code to do what I want? It seemed like a simple SQL statement change at first, but obviously there must be more. I am new to Objective-C and iPhone programming. I am not new to using SQL statements, as I have been creating web apps in ASP for a number of years. Any help will be greatly appreciated; this is driving me nuts! Thanks in advance Dave


  • How to deny the web access to some files?

    - by Strae
    I need to do a somewhat strange operation. First, I run on Debian with apache2 (which 'runs' as user www-data). So, I have simple text files with a .txt or .ini or whatever extension; it doesn't matter. These files are located in subfolders with a structure like this: www.example.com/folder1/car/foobar.txt www.example.com/folder1/cycle/foobar.txt www.example.com/folder1/fish/foobar.txt www.example.com/folder1/fruit/foobar.txt Therefore, the file name is always the same, ditto for the 'hierarchy'; just the name of the folder changes: /folder-name-static/folder-name-dynamic/file-name-static.txt What I need to do is (I think) relatively simple: I must be able to read that file from programs on the server (Python or PHP, for example), but if I try to retrieve the file contents by browser (entering the URL www.example.com/folder1/car/foobar.txt, or via cURL, etc.) I must get a forbidden error, or whatever, but not access the file. It would also be nice if those files were 'hidden' even when accessed via FTP, or at least couldn't be downloaded (at least with the FTP root and user data I use). How can I do this? I found this online, to be put in the .htaccess file: <Files File.txt> Order allow,deny Deny from all </Files> It seems to work, but only if the file is in the web root (www.example.com/myfile.txt), and not in subfolders. Moreover, the folders at the second level (www.example.com/folder1/fruit/foobar.txt) will be dynamically created. I would like to avoid having to change the .htaccess file from time to time. Is it possible to create a rule, something like that, that applies to all files with a given name located at www.example.com/folder-name-static/folder-name-dynamic/file-name-static.txt, where those parts are always the same and just that one changes? EDIT: As Dave Drager said, I could simplify this by keeping those files outside the web-accessible directory. But those directories will contain other files too (images and stuff used by my users), so I'm simply trying not to have a duplicate folder system, like: /var/www/vhosts/example.com/httpdocs/folder1/car/[other folders and files here] /var/www/vhosts/example.com/httpdocs/folder1/cycle/[other folders and files here] /var/www/vhosts/example.com/httpdocs/folder1/fish/[other folders and files here] //and, then for the 'secret' files: /folder1/data/car/foobar.txt /folder1/data/cycle/foobar.txt /folder1/data/fish/foobar.txt


  • Apache won't serve images larger than ~2K

    - by dtbaker
    Hello, just upgraded an old box to Ubuntu 10.04.2 LTS. Apache will not display images to a browser that are over about 2K. Small images seem to display fine. Static HTML and PHP continue to work fine as well. Installed: apache2 2.2.14-5ubuntu8.4 apache2-mpm-prefork 2.2.14-5ubuntu8.4 apache2-utils 2.2.14-5ubuntu8.4 apache2.2-bin 2.2.14-5ubuntu8.4 apache2.2-common 2.2.14-5ubuntu8.4 Here is an ngrep of an image that doesn't display correctly in the browser: T 192.168.0.4:32907 - 192.168.0.54:80 [AP] GET /path/path/logo.png HTTP/1.1..Host: 192.168.0.54..Connection: keep-alive..Accept: application/xml,application/xhtml+xml,text/html;q=0.9,text/plain;q=0.8,image/png,*/*;q=0.5..User-Agent: Mozilla/5.0 (X11; U; Linux x86_64; en-US) AppleWebKit/534.13 (KHTML, like Gecko) Chrome/9.0.597.98 Safari/534.13..Accept-Encoding: gzip,deflate,sdch..Accept-Language: en-US,en;q=0.8..Accept-Charset: ISO-8859-1,utf-8;q=0.7,*;q=0.3.... T 192.168.0.54:80 - 192.168.0.4:32907 [A] HTTP/1.1 200 OK..Date: Wed, 09 Mar 2011 05:28:38 GMT..Server: Apache/2.2.14 (Ubuntu)..Last-Modified: Tue, 05 Oct 2010 11:59:17 GMT..ETag: "17b6f4-15fe-491dd63eb2f40"..Accept-Ranges: bytes..Content-Length: 5630..Keep-Alive: timeout=15, max=100..Connection: Keep-Alive..Content-Type: image/png.....PNG........IHDR...!...v.......%.....sRGB.........bKGD..............pHYs.................tIME.. etc... This looks OK to me! I have tried Firefox and Chrome; both display small images fine, but when a large image is requested the browser prompts to download the file. When the image file is saved to the local computer it is corrupt; it also takes a long time to save, which makes me think the browser cannot see the Content-Length header sent from Apache. Also, when I look at the saved image file it includes the headers from Apache, along with a bit of garbage at the top, like so: vi logo.png: ^@^UÅd^@$^]V^S^H^@E^@^Q,n!@^@@^F^@^@À¨^@6À¨^@^D^@P^Y¬rÇŹéw^P^@Ú^@^@^A^A^H ^@^GÝ^]^@pbSHTTP/1.1 200 OK^M Date: Wed, 09 Mar 2011 04:47:04 GMT^M Server: Apache/2.2.14 (Ubuntu)^M Last-Modified: Tue, 05 Oct 2010 11:59:17 GMT^M ETag: "17b6ff-157c-491dd63eb2f40"^M Accept-Ranges: bytes^M Content-Length: 5500^M Keep-Alive: timeout=15, max=94^M Connection: Keep-Alive^M Content-Type: image/png^M ^M PNG^M etc... Any ideas? It's driving me nuts. There is nothing in the Apache error logs, and permissions are fine (because the image data is there, it's just somewhat corrupt). There's no proxy or iptables on this Ubuntu box either. Thanks heaps!! Dave PS: just tried IE from a different computer, same problem :( PPS: rebooted the server, no help.


  • SQL Transactional Replication snapshot not applying

    - by dmch2
    Hi, I'm using SQL Transactional Replication with pull subscriptions to replicate databases (hosting their own distribution database) from several servers across a VPN to a central server. I've got the first 2 databases working fine, but the 3rd one is causing me problems. My subscription server is SQL 2008; the source systems are all SQL 2005. The source databases are a few hundred MB in size and contain audit data, so they are simply growing slowly by adding new records at approx 1 KB a second. As far as the replication monitor, agent logs and event logs show, everything is working fine - except that no data appears in my subscription database. The distribution agent doesn't seem to want to read the snapshot (and hence the initial state and schema) from the publisher. New transactions aren't applied, although they do seem to be arriving OK, as the replication monitor shows things like '5 transactions with 10 commands were delivered'. I would expect (as in previous times) to see statements about data being BCPed in the replication monitor. The snapshot is on the publisher in a shared folder. The subscriber can view the snapshot OK (\\repldata) and the alt snapshot folder is pointing at it. But the distribution agent doesn't seem to be making an attempt to read it. I tried changing the snapshot path to something that's incorrect and didn't even get an error saying that it couldn't access it. After lots of googling etc. I found that sp_MSget_repl_commands is called by the subscriber on the distribution database on the publisher. Running a profiler I can see that it's only called for one agent ID. After a reinit it's called for sequence number 0x0 as expected, so I thought that would mean it would look for the snapshot. However, looking on the publisher I see that there's data for two agents - the snapshot agent and the log reader agent (which is being queried). So I guess I need to tell the distribution agent to get the data for both. But how? And more importantly - why? It worked fine on the other two servers I've replicated. I'm not an SQL novice but this is pretty much my first go at replication, so don't be afraid to accuse me of missing something obvious/stupid! I can get log files (e.g. from the distribution agent) if you want, but they don't seem to have any errors in them - it just starts up and starts applying log reader agent changes. Cheers Dave


  • SQL SERVER – 2008 – Introduction to Snapshot Database – Restore From Snapshot

    - by pinaldave
    Snapshot database is one of the most interesting concepts that I have used at some places recently. Here is a quick definition of the subject from Books Online: A Database Snapshot is a read-only, static view of a database (the source database). Multiple snapshots can exist on a source database and always reside on the same server instance as the database. Each database snapshot is consistent, in terms of transactions, with the source database as of the moment of the snapshot’s creation. A snapshot persists until it is explicitly dropped by the database owner. If you do not know how Snapshot databases work, here is a quick note on the subject. However, please refer to the official description on Books Online for accuracy. A Snapshot database is a read-only database created from an original database called the “source database”. This database operates at the page level. When a Snapshot database is created, it is produced on sparse files; in fact, it does not occupy any space (or occupies very little space) in the Operating System. When any data page is modified in the source database, that data page is copied to the Snapshot database, making the sparse file size increase. When an unmodified data page is read in the Snapshot database, it actually reads the pages of the original database. In other words, the changes that happen in the source database are reflected in the Snapshot database. Let us see a simple example of Snapshot. In the following exercise, we will do a few operations. Please note that this script is for demo purposes only - there are a few considerations of CPU, disk I/O and memory, which will be discussed in future posts. Create Snapshot. Delete Data from Original DB. Restore Data from Snapshot. First, let us create the first Snapshot database and observe the sparse file details. USE master GO -- Create Regular Database CREATE DATABASE RegularDB GO USE RegularDB GO -- Populate Regular Database with Sample Table CREATE TABLE FirstTable (ID INT, Value VARCHAR(10)) INSERT INTO FirstTable VALUES(1, 'First'); INSERT INTO FirstTable VALUES(2, 'Second'); INSERT INTO FirstTable VALUES(3, 'Third'); GO -- Create Snapshot Database CREATE DATABASE SnapshotDB ON (Name ='RegularDB', FileName='c:\SSDB.ss1') AS SNAPSHOT OF RegularDB; GO -- Select from Regular and Snapshot Database SELECT * FROM RegularDB.dbo.FirstTable; SELECT * FROM SnapshotDB.dbo.FirstTable; GO Now let us see the resultset for the same. Next, let us delete something from the original DB and check the same details we checked before. -- Delete from Regular Database DELETE FROM RegularDB.dbo.FirstTable; GO -- Select from Regular and Snapshot Database SELECT * FROM RegularDB.dbo.FirstTable; SELECT * FROM SnapshotDB.dbo.FirstTable; GO When we check the details of the sparse file created by the Snapshot database, we will find some interesting details. The details of the Regular DB remain the same. It clearly shows that when we delete data from the Regular/Source DB, it copies the data pages to the Snapshot database. This is the reason why the size of the Snapshot DB increases. Now let us take this small exercise to the next level and restore our deleted data from the Snapshot DB to the original source DB.
-- Restore Data from Snapshot Database USE master GO RESTORE DATABASE RegularDB FROM DATABASE_SNAPSHOT = 'SnapshotDB'; GO -- Select from Regular and Snapshot Database SELECT * FROM RegularDB.dbo.FirstTable; SELECT * FROM SnapshotDB.dbo.FirstTable; GO -- Clean up DROP DATABASE [SnapshotDB]; DROP DATABASE [RegularDB]; GO Now let us check the details of the select statement, and we can see that we are successfully able to restore the database from the Snapshot database. We can clearly see that this is a very useful feature in case you encounter a good business need for it. I would like to request the readers to suggest more details if they are using this feature in their business. Also, let me know if you think it can potentially be used to achieve any other tasks. The complete script of the aforementioned operation, for easy reference, is as follows: USE master GO -- Create Regular Database CREATE DATABASE RegularDB GO USE RegularDB GO -- Populate Regular Database with Sample Table CREATE TABLE FirstTable (ID INT, Value VARCHAR(10)) INSERT INTO FirstTable VALUES(1, 'First'); INSERT INTO FirstTable VALUES(2, 'Second'); INSERT INTO FirstTable VALUES(3, 'Third'); GO -- Create Snapshot Database CREATE DATABASE SnapshotDB ON (Name ='RegularDB', FileName='c:\SSDB.ss1') AS SNAPSHOT OF RegularDB; GO -- Select from Regular and Snapshot Database SELECT * FROM RegularDB.dbo.FirstTable; SELECT * FROM SnapshotDB.dbo.FirstTable; GO -- Delete from Regular Database DELETE FROM RegularDB.dbo.FirstTable; GO -- Select from Regular and Snapshot Database SELECT * FROM RegularDB.dbo.FirstTable; SELECT * FROM SnapshotDB.dbo.FirstTable; GO -- Restore Data from Snapshot Database USE master GO RESTORE DATABASE RegularDB FROM DATABASE_SNAPSHOT = 'SnapshotDB'; GO -- Select from Regular and Snapshot Database SELECT * FROM RegularDB.dbo.FirstTable; SELECT * FROM SnapshotDB.dbo.FirstTable; GO -- Clean up DROP DATABASE [SnapshotDB]; DROP DATABASE [RegularDB]; GO Reference : Pinal Dave (http://blog.SQLAuthority.com) Filed under: SQL, SQL Authority, SQL Backup and Restore, SQL Data Storage, SQL Query, SQL Server, SQL Tips and Tricks, SQLServer, T SQL, Technology


  • Dec 5th Links: ASP.NET, ASP.NET MVC, jQuery, Silverlight, Visual Studio

    - by ScottGu
    Here is the latest in my link-listing series.  Also check out my VS 2010 and .NET 4 series for another on-going blog series I’m working on. [In addition to blogging, I am also now using Twitter for quick updates and to share links. Follow me at: twitter.com/scottgu] ASP.NET ASP.NET Code Samples Collection: J.D. Meier has a great post that provides a detailed round-up of ASP.NET code samples and tutorials from a wide variety of sources.  Lots of useful pointers. Slash your ASP.NET compile/load time without any hard work: Nice article that details a bunch of optimizations you can make to speed up ASP.NET project load and compile times. You might also want to read my previous blog post on this topic here. 10 Essential Tools for Building ASP.NET Websites: Great article by Stephen Walther on 10 great (and free) tools that enable you to more easily build great ASP.NET Websites.  Highly recommended reading. Optimize Images using the ASP.NET Sprite and Image Optimization Framework: A nice article by 4GuysFromRolla that discusses how to use the open-source ASP.NET Sprite and Image Optimization Framework (one of the tools recommended by Stephen in the previous article).  You can use this to significantly improve the load-time of your pages on the client. Formatting Dates, Times and Numbers in ASP.NET: Scott Mitchell has a great article that discusses formatting dates, times and numbers in ASP.NET.  A very useful link to bookmark.  Also check out James Michael’s DateTime is Packed with Goodies blog post for other DateTime tips. Examining ASP.NET’s Membership, Roles and Profile APIs (Part 18): Everything you could possibly want to know about ASP.NET’s built-in Membership, Roles and Profile APIs must surely be in this tutorial series. Part 18 covers how to store additional user info with Membership. ASP.NET with jQuery An Introduction to jQuery Templates: Stephen Walther has written an outstanding introduction and tutorial on the new jQuery Template plugin that the ASP.NET team has contributed to the jQuery project. Composition with jQuery Templates and jQuery Templates, Composite Rendering, and Remote Loading: Dave Ward has written two nice posts that talk about composition scenarios with jQuery Templates and some cool scenarios you can enable with them. Using jQuery and ASP.NET to Build a News Ticker: Scott Mitchell has a nice tutorial that demonstrates how to build a dynamically updated “news ticker” style UI with ASP.NET and jQuery. Checking All Checkboxes in a GridView using jQuery: Scott Mitchell has a nice post that covers how to use jQuery to enable a checkbox within a GridView’s header to automatically check/uncheck all checkboxes contained within rows of it. Using jQuery to POST Form Data to an ASP.NET AJAX Web Service: Rick Strahl has a nice post that discusses how to capture form variables and post them to an ASP.NET AJAX Web Service (.asmx). ASP.NET MVC ASP.NET MVC Diagnostics Using NuGet: Phil Haack has a nice post that demonstrates how to easily install a diagnostics page (using NuGet) that can help identify and diagnose common configuration issues within your apps. ASP.NET MVC 3 JsonValueProviderFactory: James Hughes has a nice post that discusses how to take advantage of the new JsonValueProviderFactory support built into ASP.NET MVC 3.  This makes it easy to post JSON payloads to MVC action methods.
Practical jQuery Mobile with ASP.NET MVC: James Hughes has another nice post that discusses how to use the new jQuery Mobile library with ASP.NET MVC to build great mobile web applications. Credit Card Validator for ASP.NET MVC 3: Benjii Me has a nice post that demonstrates how to build a [CreditCard] validator attribute that can be used to easily validate that credit card numbers are in the correct format with ASP.NET MVC. Silverlight Silverlight FireStarter Keynote and Sessions: A great blog post from John Papa that contains pointers and descriptions of all the great Silverlight content we published last week at the Silverlight FireStarter.  You can watch all of the talks online.  More details on my keynote and Silverlight 5 announcements can be found here. 31 Days of Windows Phone 7: 31 great tutorials on how to build Windows Phone 7 applications (using Silverlight).  Silverlight for Windows Phone Toolkit Update: David Anson has a nice post that discusses some of the additional controls provided with the Silverlight for Windows Phone Toolkit. Visual Studio JavaScript Editor Extensions: A nice (and free) Visual Studio plugin built by the web tools team that significantly improves the JavaScript IntelliSense support within Visual Studio. HTML5 Intellisense for Visual Studio: Gil has a blog post that discusses a new extension my team has posted to the Visual Studio Extension Gallery that adds HTML5 schema support to Visual Studio 2008 and 2010. Team Build + Web Deployment + Web Deploy + VS 2010 = Goodness: A blog post about how to enable a continuous deployment system with VS 2010, TFS 2010 and the Microsoft Web Deploy framework.  Visual Studio 2010 Emacs Emulation Extension and VIM Emulation Extension: Check out these two extensions if you are fond of Emacs and VIM key bindings and want to enable them within Visual Studio 2010. Hope this helps, Scott


  • SQL SERVER – Enumerations in Relational Database – Best Practice

    - by pinaldave
    This article has been submitted by Marko Parkkola, data systems designer at Saarionen Oy, Finland. Marko is an excellent developer, always thinking at the next level. You can read his earlier comment, which created a very interesting discussion, here: SQL SERVER- IF EXISTS(Select null from table) vs IF EXISTS(Select 1 from table). I must express my special thanks to Marko for sending this best practice for Enumerations in Relational Database. He has really written an excellent piece here, and comments are welcome. Enumerations in Relational Database This is a very basic subject in relational databases, but it is often not very well understood and sometimes badly implemented. There are of course many ways to do this, but I concentrate on only two cases: one which is “the right way” and one which is definitely the wrong way. The concept Let’s say we have table Person in our database. Person has properties/fields like Firstname, Lastname, Birthday and so on. Then there’s a field that tells the person’s marital status, and let’s name it the same way: MaritalStatus. Now MaritalStatus is an enumeration. In C# I would definitely make it an enumeration with values like Single, InRelationship, Married, Divorced. Now here comes the problem: SQL doesn’t have enumerations. The wrong way This is, in my opinion, absolutely the wrong way to do this. It has one upside though; you’ll see the enumeration’s description instantly when you do a simple SELECT query and you don’t have to deal with mysterious values. There are plenty of downsides too, and one would be database fragmentation. Consider this (I’ve left all indexes and constraints out of the query on purpose). CREATE TABLE [dbo].[Person] ( [Firstname] NVARCHAR(100), [Lastname] NVARCHAR(100), [Birthday] datetime, [MaritalStatus] NVARCHAR(10) ) You have an nvarchar(10) field in the table that tells the marital status. The obvious problem with this is: what if you create a new value which doesn’t fit into 10 characters? You’ll have to come and alter the table. There are other problems also, but I’ll leave those for the reader to think about. The correct way Here’s how I’ve done this in many projects. This model still has one problem, but it can be alleviated in the application layer or with CHECK constraints if you like. First I will create a namespace table which tells the name of the enumeration. I will add one row to it too. I’ll write all the indexes and constraints here too. CREATE TABLE [CodeNamespace] ( [Id] INT IDENTITY(1, 1), [Name] NVARCHAR(100) NOT NULL, CONSTRAINT [PK_CodeNamespace] PRIMARY KEY ([Id]), CONSTRAINT [IXQ_CodeNamespace_Name] UNIQUE NONCLUSTERED ([Name]) ) GO INSERT INTO [CodeNamespace] SELECT 'MaritalStatus' GO Then I create a table that holds the actual values and which references the namespace table in order to group the values under different namespaces. I’ll add a couple of rows here too. CREATE TABLE [CodeValue] ( [CodeNamespaceId] INT NOT NULL, [Value] INT NOT NULL, [Description] NVARCHAR(100) NOT NULL, [OrderBy] INT, CONSTRAINT [PK_CodeValue] PRIMARY KEY CLUSTERED ([CodeNamespaceId], [Value]), CONSTRAINT [FK_CodeValue_CodeNamespace] FOREIGN KEY ([CodeNamespaceId]) REFERENCES [CodeNamespace] ([Id]) ) GO -- 1 is the 'MaritalStatus' namespace INSERT INTO [CodeValue] SELECT 1, 1, 'Single', 1 INSERT INTO [CodeValue] SELECT 1, 2, 'In relationship', 2 INSERT INTO [CodeValue] SELECT 1, 3, 'Married', 3 INSERT INTO [CodeValue] SELECT 1, 4, 'Divorced', 4 GO Now there are four columns in the CodeValue table.
CodeNamespaceId tells which namespace the value belongs to. Value is the enumeration value which is used in the Person table (I’ll show how this is done below). Description tells what the value means; you can use this column, for example, in a UI combo box. OrderBy tells whether the values need to be ordered in some way when displayed in the UI. And here’s the Person table again, now with the correct columns. I’ll add one row here to show how enumerations are to be used. CREATE TABLE [dbo].[Person] ( [Firstname] NVARCHAR(100), [Lastname] NVARCHAR(100), [Birthday] datetime, [MaritalStatus] INT ) GO INSERT INTO [Person] SELECT 'Marko', 'Parkkola', '1977-03-04', 3 GO Now I said earlier that there is one problem with this. The MaritalStatus column doesn’t have any database-enforced relationship to the CodeValue table, so you can enter any value you like into this field. I’ve solved this problem in the application layer by selecting all the values from the CodeValue table and putting them into a combobox/dropdownlist (with the Value field as value and Description as text) so the end user can’t enter any illegal values; and of course I check the entered value in the data access layer also. I said in the “The wrong way” section that there is one benefit to it. In fact, you can have the same benefit here by using a simple view, which I schema-bind so you can even index it if you like. CREATE VIEW [dbo].[Person_v] WITH SCHEMABINDING AS SELECT p.[Firstname], p.[Lastname], p.[BirthDay], c.[Description] MaritalStatus FROM [dbo].[Person] p JOIN [dbo].[CodeValue] c ON p.[MaritalStatus] = c.[Value] JOIN [dbo].[CodeNamespace] n ON n.[Id] = c.[CodeNamespaceId] AND n.[Name] = 'MaritalStatus' GO -- Select from View SELECT * FROM [dbo].[Person_v] GO This is an excellent write-up by Marko Parkkola. Do you have this kind of design set up at your organization? Let us know your opinion. Reference: Pinal Dave (http://blog.SQLAuthority.com) Filed under: Best Practices, Database, DBA, Readers Contribution, Software Development, SQL, SQL Authority, SQL Documentation, SQL Query, SQL Server, SQL Tips and Tricks, T SQL, Technology


  • Developer Training – A Conclusive Summary- Part 5

    - by pinaldave
    Developer Training - Importance and Significance - Part 1 Developer Training – Employee Morals and Ethics – Part 2 Developer Training – Difficult Questions and Alternative Perspective - Part 3 Developer Training – Various Options for Developer Training – Part 4 Developer Training – A Conclusive Summary- Part 5 We have now reached the end of our series about developer training.  I hope you have come away thinking that training is the best way to advance in your company and that you are looking for training opportunities right now.  If you’re still not convinced, here are a few things to keep in mind:  Training benefits the employer and the employee. A well trained employee is a happy employee, and a happy employee is more efficient and productive. Training an employee might be expensive, but it is less expensive than hiring a new person. Whether you are looking at it from the employee’s or the company’s point of view, there are always advantages to training. A Broader View This series is definitely written for developer training, but it is not limited to developers only. There are IT Pros, system admins, DBAs, as well as many other technology professionals; this article series is for all professionals in the world. The concepts and takeaways remain common across all platforms, regardless of technology affiliation. Pass the Knowledge If I have to pick one piece of advice which is extremely important related to training, I will pick this: pass the knowledge. Once you have decided in favor of training, there is more to it than simply showing up and staying awake.  It is always a good idea to take notes – at the very least it will help you stay awake, but they will often serve as a good way to remember your training when you go back to work.  You can also use them to pass your new knowledge on to fellow employees, which can be very fun and rewarding. Right Place, Right Time and Right Training There are so many ways to get developer training.  In-person and on the job training is easy to come by and is the most usual type of training, but don’t overlook my favorite type of training: On Demand.  Being able to learn at your own pace, own place and on your own time will make training a realistic goal for almost every employee. I can think of nothing more important in life than furthering your education.  Especially when you work in a field that is constantly changing – like technology.  Whether you like it or not, training is incredibly important.  That is why I feel it is so important to receive training.  And because there are so many different training formats – live, online, through books, through people – I am certain that we all can find a way to be trained that best suits our goals and personalities. The Teacher Within If you think of anyone who is a master of the technology field or an incredibly successful developer (the obvious examples that spring to mind are Steve Jobs or Bill Gates), you will also find a teacher.  Both these individuals spent their lives developing better technology, but also educating other developers and the public about how to use these technologies and how they can change your life for the better.  I think that we all should strive to be like these wonderful teachers.  We might not be able to change the world, but we can certainly change a few lives around us. Even if we never turn into trainers ourselves, being trained as a student can be a good exercise.
We learn a lot and become better employees – and it would not be a stretch to say that this makes us better individuals, as well. Final Say I think learning and growing in your chosen field is not only a good idea, career-wise, but can be fun, too!  I for one never feel more alive than when I am learning about something I am really passionate about.  I think my job title – technology evangelist – explains how enthusiastic I am about this subject.  But please don’t think that I am approaching this only as someone who wants to train and educate others (although this is also one of my passions).  I am also a passionate student.  I enjoy learning new things and am always on the lookout for new ways to learn and new people to learn from. Reference: Pinal Dave (http://blog.sqlauthority.com) Filed under: Developer Training, PostADay, SQL, SQL Authority, SQL Query, SQL Server, SQL Tips and Tricks, T SQL, Technology


  • Developer Training – Various Options for Maximum Benefit – Part 4

    - by pinaldave
    Developer Training - Importance and Significance - Part 1 Developer Training – Employee Morals and Ethics – Part 2 Developer Training – Difficult Questions and Alternative Perspective - Part 3 Developer Training – Various Options for Developer Training – Part 4 Developer Training – A Conclusive Summary- Part 5 If you have been reading this series, by now you are aware of all the pros and cons that can come along with training.  We’ve asked and answered hard questions, and investigated the “whys” and “hows” of training.  Now it is time to talk about all the different kinds of training that are out there! On Job Training The most common type of training is on the job training.  Everyone receives this kind of education – even experts who come in to consult have to be taught where the printer, pens, and copy machines are.  If you are thinking about more concrete topics, though, on the job training can be some of the easiest to come across.  Picture this: someone in the company whom you really admire is hard at work on a project.  You come up to them and ask to help them out – if they are a busy developer, the odds are that they will say “yes, please!”   If you phrase your question as an offer of help, you can receive training without ever putting someone in the awkward position of acting as a mentor.  However, some people may want the task of being a mentor.  It can never hurt to ask.  Most people will be more than willing to pass their knowledge along. Extreme Programming If your company and coworkers are willing, you can even investigate Extreme Programming.  This is a type of programming that allows small teams to quickly develop code and products that are released with almost immediate user feedback.  You can find more information at http://www.extremeprogramming.org/.  If this is something your company could use, suggest it to your supervisor.  Even if they say no, it will make it clear that you are a go-getter who is interested in new and exciting projects.  If the answer is yes, then you have the opportunity to get some of the best on the job training around. In Person Training When you say the word “training,” most people’s minds go back to the classroom, an image they are familiar with.  While training doesn’t always have to be in a traditional setting, because it is so familiar it can also be the most valuable type of training.  There are many ways to get training through a live instructor.  Some companies may be willing to send a representative to you, where employees will get training, sometimes food and coffee, and a live instructor who can answer questions immediately.  Sometimes these trainers are also able to do consultations at the same time, which can be invaluable to a company.  If you are the one who asks your supervisor for a training session that can also be turned into a consultation, you may stick in their minds as an incredibly dedicated employee.  If you can’t find a representative, local colleges can also be a good resource for free or cheap classes – or they may have representatives coming who are willing to take on a few more students. Benefits of On Demand Developer Training Of course, you can often get the best of all these types of training with online or On Demand training.
You can get the benefit of a live instructor who is willing to answer questions (although in this case, usually through e-mail or other online venues); there are often real-world examples to follow along with – like on the job training – and best of all you can learn whenever you have the time or need.  Did a problem with your server come up at midnight when all your supervisors are safe at home and probably in bed?  No problem!  On Demand training is especially useful if you need to slow down, pause, or rewind a training session.  Not even a real-life instructor can do that! When I was writing this blog post, I felt that each of the subjects I have covered could be a blog post of its own. However, I wanted to keep the blog post concise, and so touched base on three major training aspects: 1) on the job training, 2) in-person training and 3) online training. Here is the question for you – are there any other training methods available which are effective and one should consider? If yes, what are they? I may write a follow-up blog post on the same subject next week. Reference: Pinal Dave (http://blog.sqlauthority.com) Filed under: Developer Training, PostADay, SQL, SQL Authority, SQL Query, SQL Server, SQL Tips and Tricks, T SQL, Technology


  • SQLAuthority News – Meeting with Allen Bailochan Tuladhar – An Unlimited Experience

    - by pinaldave
    Allen Tuladhar I recently came back from my 9-day trip to Nepal and I must say that this was one of the best trips I have had in my lifetime. Allen Bailochan Tuladhar is a wonderful person and an extreme enthusiast for Microsoft technology. Allen is the Chief Executive Officer of Unlimited Technologies Pvt Ltd., Country Manager of Microsoft MDP Nepal, the Member Secretary of Nepali Language in Information Technology, and a member of the Steering Committee of the Government of Nepal. He is the person who keeps Nepal’s tech community constantly motivated and takes it to the next level. I have met Allen many times before, but this was the first time I was with him in Kathmandu, Nepal. I was very impressed with the amount of work he does in the community. During my 9 days of stay, every single day was a new lesson for me. I was amazed and overwhelmed with the many things he does every single day. Not only does he work closely with the Government of Nepal ministries, but he is also the most known person in the student community. His expertise in technical subject matter is not limited to one technology; rather, I have seen him actively engaging himself in discussions of various tech topics. Allen presenting at TechMela Kathmandu, Nepal Allen is currently active in working to localize Windows and Office into the Nepali language. I was able to witness and experience how the localization works, as well as the procedure for doing it. If you know the whole localization process, you must have realized how big and daunting a process it is. I was glad that I became a part of it. Prominent Personality of Nepal on Panel Discussion Another great opportunity I had when I was at Allen’s office is that I learned how the radio technology talk show works. The Nepali radio station has a weekly program in the local language, in which MS technology is discussed and industry leaders are invited to talk about their experience with the technology. I found the program so interesting because it has so much variety in terms of technology subjects. Well, my understanding of the Nepali language is limited, but I did understand quite a bit. Ravi, Nutan, Pinal, Gandip I got the chance to meet lots of database professionals as well. People in Nepal are very polite, and they are very strong in their technology fundamentals. I had in-depth discussions regarding High Availability scenarios, as well as query tuning. Database professionals from the leading financial sectors of Nepal wanted me to visit their data center and help them out with a few advanced issues. In no time, Allen organized a visit for me. He sent me a Nepali-speaking expert from his own organization to accompany me in overcoming any difficulties while I was on my way helping this financial district. Pinal (SQLAuthority) and Deependra (Unlimited) When I was going to Nepal, I was really not sure if I would be able to stay busy for 9 days straight in community-related activity. However, on the 9th day I realized that I could stay here for more than 9 days, because every single day I felt enthusiastic enough to do something new. Allen Bailochan Tuladhar Even though I was working very hard every day, I hardly had the chance to work with and talk to him one-on-one for the first few days. One of the evenings, Allen invited me to his home and we discussed his future ideas. I was really surprised to see how much a man can do for his technical community and for his country.
When I asked Allen’s wife and daughter if they ever think it’s getting too much with regards to Allen putting tough efforts to the community, their answer was something I did not expect. I found out that Allen’s wife manages all the back office and logistics of the community events and his daughter manages the websites. I felt that they do not have any complain,  and instead, their whole family is in this activity as deeply as it can get, which I thought is a very good thing. Pinal and Allen I want to end this post with an interesting story that happened during our lunch hour at one of the Nepali restaurants. While we were having our lunch and having some chitchat, Allen suddenly stood up and called several people walking along the pavement. He introduced them all to me as Microsoft Student Partners. He asked all of them to order their favorite dish and called the waiter to inform that he will pick up their tab. Figuring out the question written on my face, he just said one sentence: “They are all future technology professionals who are going to make all of us proud.” I guess I have a lot of things to learn. Hats off to Allen! Pinal and Allen at Microsoft MDP Unlimited Reference: Pinal Dave (http://blog.SQLAuthority.com) Filed under: MVP, SQL, SQL Authority, SQL Query, SQL Server, SQL Tips and Tricks, SQLAuthority Author Visit, T SQL, Technology

    Read the article

  • Last week I was presented with a Microsoft MVP award in Virtual Machines – time to thank all who helped

    - by Liam Westley
MVP in Virtual Machines

Last week, on 1st April, I received an e-mail from Microsoft letting me know that I had been presented with a 2010 Microsoft® MVP Award for outstanding contributions in Virtual Machine technical communities during the past year. It was an honour to be nominated, and it is a great reflection on the vibrancy of the UK user group community, which made this possible.

Virtualisation for developers, not just IT Pros

I consider it a special honour because my expertise in virtualisation is as a software developer utilising virtual machines to aid my software development, rather than as an IT Pro who manages data centre and network infrastructure. I've been on a minor mission over the past few years to enthuse developers about a topic usually seen as being only for network admins, but one which can make their lives a whole lot easier once understood properly.

Continuous learning is fun

In 1676, the scientist Isaac Newton, in a letter to Robert Hooke, used the phrase 'If I have seen a little further it is by standing on the shoulders of Giants' (http://www.phrases.org.uk/meanings/268025.html). I'm a nuclear physicist by education, so I am more than comfortable with the idea that any knowledge I have is built on the work of others. Although far from a science, software development and IT are equally built upon the work of others; it's one of the reasons I despise software patents. So in that sense this MVP award is a result of all the great minds that have provided virtualisation solutions for me to talk about. I hope that I have always acknowledged those whose work I have used when blogging or giving presentations, and that I have discharged my responsibility to share any knowledge gained as widely as possible.

Thanks to all those who helped – a big thanks to the UK user group community

I reckon this journey started in 2003, when I began attending a user group called the London .NET Users Group (http://www.dnug.org.uk), started by a nice chap called Ian Cooper. The great thing about Ian was that he always encouraged non-professional speakers to take the stage at the user group, and my first ever presentation, on 30th September 2003, was on SQL Server CE 2.0 and the .NET Compact Framework.

In 2005, Ian Cooper was on the committee for the first DeveloperDeveloperDeveloper! day, the free community conference held at Microsoft's UK HQ in Thames Valley Park in Reading. He encouraged me to take part, and so on 14th May 2005 I presented a talk, previously given to the London .NET User Group, on simplifying access to multiple DB providers in .NET. From that point on I definitely had the bug: presenting at DDD2, DDD3, grokking at DDD4 and SQLBits I, and, after a break, DDD7, DDD Scotland and DDD8. What definitely kept me keen was the encouragement and infectious enthusiasm of some of the other DDD organisers: Craig Murphy, Barry Dorrans, Phil Winstanley and Colin Mackay.

During the first few DDD events I met Dave McMahon and Richard Costall from NxtGenUG, who made it easy to start presenting at their user groups. Along the way I've met a load of great user group organisers: Guy Smith-Ferrier of the .NET Developer Network, Jimmy Skowronski of GL.Net, and the double act of Ray Booysen and Gavin Osborn behind what was Vista Squad and is now Edge UG.

Final thanks to those who suggested virtualisation as a topic ...

Final thanks have to go to the people who inspired me to create my Virtualisation for Developers talk. Toby Henderson (@holytshirt) ensured I took notice of Sun's VirtualBox, Peter Ibbotson was a fine sounding board at the Kew Railway over quite a few Adnams Broadside, and Guy Smith-Ferrier allowed his user group to be the guinea pigs for the talk before it was seen at DDD7. Thanks to all of you, I now know much more about virtualisation than I would have thought possible, and it continues to be great fun.

Conclusion

If this were an academy award acceptance speech I would have been cut off after the first few paragraphs, so well done if you made it this far. I'll be doing my best to do justice to the MVP award and the UK community. I'm fortunate in having a new employer who considers presenting at user groups a good thing, so don't expect me to stop any time soon.

If you've never seen me in action, you can view the original DDD7 Virtualisation for Developers presentation (filmed by the Microsoft Channel 9 team) as part of the full DDD7 video list here: http://www.craigmurphy.com/blog/?p=1591. Also, thanks to Craig Murphy's fine video work, you can view my latest DDD8 presentation on commercial software development here: http://vimeo.com/9216563

P.S. If I've missed anyone out, do feel free to lambast me in the comments; it's your duty.

    Read the article

  • SQL SERVER – Service Broker and CAP_CPU_PERCENT – Limiting SQL Server Instances to CPU Usage

    - by pinaldave
I have mentioned several times on this blog that the best part of blogging is the questions I receive from readers. They are often very interesting, and they give me a good idea of what other readers might be thinking as well. After reading my earlier article, Simple Example to Configure Resource Governor – Introduction to Resource Governor, I received an email from a reader and we exchanged a few messages. After exchanging emails we both figured out what was going on. It was indeed interesting, and the reader suggested that I blog about it. I asked for permission to publish his name, but he does not like the attention, so we will just call him Jeff. I have converted our emails into a chat for easy consumption.

Jeff: Your script does not work at all. I think there is a bug in SQL Server.

Pinal: Would you please explain in detail?

Jeff: Your code does not limit the CPU usage.

Pinal: How did you measure it?

Jeff: Well, we have third-party tools for it, but let us say I limited the resources for Reporting Services using the script described in your blog. After that I ran only the Reporting Services workload, and CPU usage still went above the 30% described in your script. Clearly something is wrong somewhere.

Pinal: Did you say you ran ONLY the reporting server load?

Jeff: Yeah, to validate, I ran ONLY the reporting server load, and the CPU did not throttle at 30% as per your script.

Pinal: Oh! I get it. Here is the answer: CAP_CPU_PERCENT = 30. Use it.

Jeff: What is that? I think your earlier script says it will throttle the Reporting Services workload and the application/OLTP workload and balance them.

Pinal: Exactly, that is correct.

Jeff: You need to write more in email, buddy! Just like your blogs, your answers do not make sense! No offense!

Pinal: Hmm… feedback well taken. Let me try again. In SQL Server 2012 there are a few enhancements to the SQL Server Resource Governor. One of the enhancements is how resources are allocated. Let me explain with examples.

Configuration: [Read Earlier Post]
Reporting Workload: MIN_CPU_PERCENT=0, MAX_CPU_PERCENT=30
Application/OLTP Workload: MIN_CPU_PERCENT=50, MAX_CPU_PERCENT=100

Example 1: If there is only the reporting workload on the server, SQL Server will not limit its CPU usage to 30%; the instance will use all available CPU if needed. In other words, in this scenario it will use more than 30% CPU.

Example 2: If there is a reporting workload and a heavy application/OLTP workload, SQL Server will allocate a maximum of 30% of CPU resources to the reporting workload and give the remaining resources to the heavy application/OLTP workload.

The reason for this enhancement is better utilization of resources. Consider a single workload whose maximum CPU usage is limited to 30%: the remaining CPU capacity would simply be wasted. In this situation SQL Server allows the workload to use more than 30% of the resources, leading to overall improved performance. However, when multiple workloads compete for resources, the limits specified in MAX_CPU_PERCENT are honored.

Example 3: If the maximum CPU limit for a workload must be enforced irrespective of the overall load, the keyword CAP_CPU_PERCENT is essential. It specifies a hard cap on the CPU bandwidth that all requests in the resource pool will receive. It will never let the CPU usage of the reporting workload go over 30% in our case. You can use the keyword as follows:

-- Creating a Resource Pool for the Report Server
CREATE RESOURCE POOL ReportServerPool
WITH (
    MIN_CPU_PERCENT = 0,
    MAX_CPU_PERCENT = 30,
    CAP_CPU_PERCENT = 40,
    MIN_MEMORY_PERCENT = 0,
    MAX_MEMORY_PERCENT = 30
)
GO

Notice that there is MAX_CPU_PERCENT=30 and CAP_CPU_PERCENT=40. This means that when the SQL Server instance is under heavy load from different workloads, the reporting pool will use at most 30% of the CPU. When the instance is not under contention, the pool may go over that 30% limit; but because CAP_CPU_PERCENT is set to 40, it will never exceed 40% in any case. CAP_CPU_PERCENT puts a hard limit on the resource usage of a workload.

Jeff: Nice, Pinal. You should blog about it.

[A day passes by]

Pinal: Jeff, it is done! Click here to read it.

Reference: Pinal Dave (http://blog.sqlauthority.com) Filed under: PostADay, SQL, SQL Authority, SQL Query, SQL Server, SQL Tips and Tricks, T SQL, Technology Tagged: Service Broker
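[Editor's note] A resource pool only takes effect once a workload group is bound to it and a classifier function routes sessions into that group. The sketch below shows one way to wire up the ReportServerPool above; it is a minimal sketch under stated assumptions, not part of the original script, and the function name dbo.RGClassifier and the login N'ReportUser' are hypothetical stand-ins for whatever routes your reporting sessions.

-- A minimal sketch, assuming the ReportServerPool defined above already exists.
-- dbo.RGClassifier and N'ReportUser' are hypothetical names.
USE master;
GO

-- Bind a workload group to the capped pool.
CREATE WORKLOAD GROUP ReportServerGroup
USING ReportServerPool;
GO

-- Classifier functions must live in master, be schema-bound, and return sysname.
CREATE FUNCTION dbo.RGClassifier()
RETURNS SYSNAME
WITH SCHEMABINDING
AS
BEGIN
    DECLARE @GroupName SYSNAME = N'default';
    -- Route sessions from the reporting login into the capped group.
    IF SUSER_SNAME() = N'ReportUser'
        SET @GroupName = N'ReportServerGroup';
    RETURN @GroupName;
END;
GO

-- Register the classifier and apply the configuration.
ALTER RESOURCE GOVERNOR WITH (CLASSIFIER_FUNCTION = dbo.RGClassifier);
ALTER RESOURCE GOVERNOR RECONFIGURE;
GO

To verify the cap without third-party tools, the DMV sys.dm_resource_governor_resource_pools exposes per-pool CPU statistics (for example, total_cpu_usage_ms) that you can sample while the reporting workload runs.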

    Read the article
