Search Results

Search found 60669 results on 2427 pages for 'time tracking'.

Page 480 of 2427

  • Using Radio Button in GridView with Validation

    - by Vincent Maverick Durano
    A developer is asking how to select one radio button at a time if the radio button is inside the GridView. As you may know, setting the GroupName attribute of a radio button will not work if the radio button is located within a data representation control like the GridView. This is because a radio button inside the GridView behaves differently: since a GridView is rendered as a table element, at run time a different "name" is assigned to each radio button, and hence you are able to select multiple rows. In this post I'm going to demonstrate how to select one radio button at a time in a GridView and add simple validation to it. To get started, let's go ahead and fire up Visual Studio and create a new web application / website project. Add a WebForm and then add a GridView. The markup would look something like this: <asp:GridView ID="GridView1" runat="server" AutoGenerateColumns="false" > <Columns> <asp:TemplateField> <ItemTemplate> <asp:RadioButton ID="rb" runat="server" /> </ItemTemplate> </asp:TemplateField> <asp:BoundField DataField="RowNumber" HeaderText="Row Number" /> <asp:BoundField DataField="Col1" HeaderText="First Column" /> <asp:BoundField DataField="Col2" HeaderText="Second Column" /> </Columns> </asp:GridView> Notice that I've added a TemplateField column so that we can add the radio button there. I have also set up some BoundField columns and set their DataFields to RowNumber, Col1 and Col2. These are just dummy columns that I use for the simplicity of this example. Now, where do these columns come from? They are created by hand in the code-behind file of the ASPX. Here's the code below: private DataTable FillData() { DataTable dt = new DataTable(); DataRow dr = null; //Create DataTable columns dt.Columns.Add(new DataColumn("RowNumber", typeof(string))); dt.Columns.Add(new DataColumn("Col1", typeof(string))); dt.Columns.Add(new DataColumn("Col2", typeof(string))); //Create the rows dr = dt.NewRow(); dr["RowNumber"] = 1; dr["Col1"] = "A"; dr["Col2"] = "B"; dt.Rows.Add(dr); dr = dt.NewRow(); dr["RowNumber"] = 2; dr["Col1"] = "AA"; dr["Col2"] = "BB"; dt.Rows.Add(dr); dr = dt.NewRow(); dr["RowNumber"] = 3; dr["Col1"] = "A"; dr["Col2"] = "B"; dt.Rows.Add(dr); dr = dt.NewRow(); dr["RowNumber"] = 4; dr["Col1"] = "A"; dr["Col2"] = "B"; dt.Rows.Add(dr); dr = dt.NewRow(); dr["RowNumber"] = 5; dr["Col1"] = "A"; dr["Col2"] = "B"; dt.Rows.Add(dr); return dt; } And here's the code for binding the GridView with the dummy data above: protected void Page_Load(object sender, EventArgs e) { if (!IsPostBack) { GridView1.DataSource = FillData(); GridView1.DataBind(); } } Okay, we now have a GridView with a radio button on each row. Now let's go ahead and switch back to the ASPX markup. In this example I'm going to use JavaScript to validate the radio buttons so that only one can be selected at a time.
    Here's the JavaScript code below: function CheckOtherIsCheckedByGVID(rb) { var isChecked = rb.checked; var row = rb.parentNode.parentNode; if (isChecked) { row.style.backgroundColor = '#B6C4DE'; row.style.color = 'black'; } var currentRdbID = rb.id; parent = document.getElementById("<%= GridView1.ClientID %>"); var items = parent.getElementsByTagName('input'); for (i = 0; i < items.length; i++) { if (items[i].id != currentRdbID && items[i].type == "radio") { if (items[i].checked) { items[i].checked = false; items[i].parentNode.parentNode.style.backgroundColor = 'white'; items[i].parentNode.parentNode.style.color = '#696969'; } } } } The function above styles the row of the currently selected radio button to show that it is selected, then loops through the radio buttons in the GridView, de-selects the previously selected radio button and sets that row's style back to its default. You can then call the JavaScript function above in the onclick event of the radio button like below: <asp:RadioButton ID="rb" runat="server" onclick="javascript:CheckOtherIsCheckedByGVID(this);" /> The output (shown as screenshots in the original post) is that on initial load there's no radio button selected by default in the GridView, and after selecting one, its row is highlighted. Now let's add a simple validation for that. We will basically display an error message if a user clicks a button that triggers a postback without selecting a radio button in the GridView. Here's the JavaScript for the validation: function ValidateRadioButton(sender, args) { var gv = document.getElementById("<%= GridView1.ClientID %>"); var items = gv.getElementsByTagName('input'); for (var i = 0; i < items.length ; i++) { if (items[i].type == "radio") { if (items[i].checked) { args.IsValid = true; return; } else { args.IsValid = false; } } } } The function above loops through the rows in the GridView and finds all the radio buttons within it. It then checks each radio button's checked property: if a radio button is checked it sets IsValid to true, otherwise it sets it to false. The reason I'm using IsValid is that I'm using an ASP.NET validator control for the validation. Now add the following markup below the GridView declaration: <br /> <asp:Label ID="lblMessage" runat="server" /> <br /> <asp:Button ID="btn" runat="server" Text="POST" onclick="btn_Click" ValidationGroup="GroupA" /> <asp:CustomValidator ID="CustomValidator1" runat="server" ErrorMessage="Please select a row in the grid." ClientValidationFunction="ValidateRadioButton" ValidationGroup="GroupA" style="display:none"></asp:CustomValidator> <asp:ValidationSummary ID="ValidationSummary1" runat="server" ValidationGroup="GroupA" HeaderText="Error List:" DisplayMode="BulletList" ForeColor="Red" /> Then, in the button's Click event, add this simple code below just to test that the validation works: protected void btn_Click(object sender, EventArgs e) { lblMessage.Text = "Postback at: " + DateTime.Now.ToString("hh:mm:ss tt"); } The browser output is shown in a screenshot in the original post. That's it! I hope someone finds this post useful! Technorati Tags: ASP.NET,JavaScript,GridView
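    The CustomValidator above only runs on the client. As a defensive extra (this is my own hedged sketch, not part of the original post; the handler name CustomValidator1_ServerValidate is an assumption, while the other identifiers reuse the ones declared above), you could also add OnServerValidate="CustomValidator1_ServerValidate" to the CustomValidator and check Page.IsValid on postback, so the rule still holds when JavaScript is disabled:

    // Hypothetical code-behind counterpart to the client-side check (assumes
    // OnServerValidate="CustomValidator1_ServerValidate" is set on the CustomValidator).
    protected void CustomValidator1_ServerValidate(object source, ServerValidateEventArgs args)
    {
        args.IsValid = false;
        foreach (GridViewRow row in GridView1.Rows)
        {
            // "rb" is the RadioButton declared in the TemplateField above.
            var rb = row.FindControl("rb") as RadioButton;
            if (rb != null && rb.Checked)
            {
                args.IsValid = true;
                return;
            }
        }
    }

    // Revised click handler: bail out when server-side validation fails,
    // in which case the ValidationSummary shows the error message.
    protected void btn_Click(object sender, EventArgs e)
    {
        if (!Page.IsValid) return;

        lblMessage.Text = "Postback at: " + DateTime.Now.ToString("hh:mm:ss tt");
    }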

    Read the article

  • Have you really fixed that problem?

    - by DavidWimbush
    The day before yesterday I saw our main live server's CPU go up to a constant 100% with just the occasional short drop to a lower level. The exact opposite of what you'd want to see. We're log shipping every 15 minutes and part of that involves calling WinRAR to compress the log backups before copying them over. (We're on SQL2005 so there's no native compression and we have bandwidth issues with the connection to our remote site.) I realised the log shipping jobs were taking about 10 minutes and that most of that was spent shipping a 'live' reporting database that is completely rebuilt every 20 minutes. (I'm just trying to keep this stuff alive until I can improve it.) We can rebuild this database in minutes if we have to fail over, so I disabled log shipping of that database. The log shipping went down to less than 2 minutes and I went off to the SQL Social evening in London feeling quite pleased with myself. It was a great evening - fun, educational and thought-provoking. Thanks to Simon Sabin & co for laying that on, and thanks too to the guests for making the effort when they must have been pretty worn out after doing DevWeek all day first. The next morning I came down to earth with a bump: CPU still at 100%. WTF? I looked in the activity monitor but it was confusing because some sessions had been running for a long time, so it's not a good guide to what's using the CPU now. I tried the standard reports showing queries by CPU (average and total) but they only show the top 10, so they just showed my big overnight archiving and data cleaning stuff. But the Profiler showed it was four queries used by our new website usage tracking system. Four simple indexes later the CPU was back where it should be: about 20% with occasional short spikes. So the moral is: even when you're convinced you've found the cause and fixed the problem, you HAVE to go back and confirm that the problem has gone. And, yes, I have checked the CPU again today and it's still looking sweet.

    Read the article

  • Generating EF Code First model classes from an existing database

    - by Jon Galloway
    Entity Framework Code First is a lightweight way to "turn on" data access for a simple CLR class. As the name implies, the intended use is that you're writing the code first and thinking about the database later. However, I really like the way Entity Framework Code First works, and I want to use it in existing projects and projects with pre-existing databases. For example, MVC Music Store comes with a SQL Express database that's pre-loaded with a catalog of music (including genres, artists, and songs), and while it may eventually make sense to load that seed data from a different source, for the MVC 3 release we wanted to keep using the existing database. While I'm not getting the full benefit of Code First - writing code which drives the database schema - I can still benefit from the simplicity of the lightweight code approach. Scott Guthrie blogged about how to use Entity Framework with an existing database, looking at how you can override the Entity Framework Code First conventions so that it can work with a database which was created following other conventions. That gives you the information you need to create the model classes manually. However, it turns out that with Entity Framework 4 CTP 5, there's a way to generate the model classes from the database schema. Once the grunt work is done, of course, you can go in and modify the model classes as you'd like, but you can save the time and frustration of figuring out things like mapping SQL database types to .NET types. Note that this template requires Entity Framework 4 CTP 5 or later. You can install EF 4 CTP 5 here. Step One: Generate an EF Model from your existing database The code generation system in Entity Framework works from a model. You can add a model to your existing project and delete it when you're done, but I think it's simpler to just spin up a separate project to generate the model classes. When you're done, you can delete the project without affecting your application, or you may choose to keep it around in case you have other database schema updates which require model changes. I chose to add the Model classes to the Models folder of a new MVC 3 application. Right-click the folder and select "Add / New Item..." Next, select ADO.NET Entity Data Model from the Data Templates list, and name it whatever you want (the name is unimportant). Next, select "Generate from database." This is important - it's what kicks off the next few steps, which read your database's schema. Now it's time to point the Entity Data Model Wizard at your existing database. I'll assume you know how to find your database - if not, I covered that a bit in the MVC Music Store tutorial section on Models and Data. Select your database, uncheck the "Save entity connection settings in Web.config" option (since we won't be using them within the application), and click Next. Now you can select the database objects you'd like modeled. I just selected all tables and clicked Finish. And there's your model. If you want, you can make additional changes here before going on to generate the code. Step Two: Add the DbContext Generator Like most code generation systems in Visual Studio lately, Entity Framework uses T4 templates which allow for some control over how the code is generated. K Scott Allen wrote a detailed article on T4 Templates and the Entity Framework on MSDN recently, if you'd like to know more. Fortunately for us, there's already a template that does just what we need without any customization.
    Right-click a blank space in the Entity Framework model surface and select "Add Code Generation Item..." Select the Code group in the Installed Templates section and pick the ADO.NET DbContext Generator. If you don't see this listed, make sure you've got EF 4 CTP 5 installed and that you're looking at the Code templates group. Note that the DbContext Generator template is similar to the EF POCO template which came out last year, but with "fix up" code (unnecessary in EF Code First) removed. As soon as you do this, you'll see two terrifying Security Warnings - unless you click the "Do not show this message again" checkbox the first time. It will also be displayed (twice) every time you rebuild the project, so I checked the box and no immediate harm befell my computer (fingers crossed!). Here's the payoff: two templates (filenames ending with .tt) have been added to the project, and they've generated the code I needed. The "MusicStoreEntities.Context.tt" template built a DbContext class which holds the entity collections, and the "MusicStoreEntities.tt" template built a separate class for each table I selected earlier. We'll customize them in the next step. I recommend copying all the generated .cs files into your application at this point, since accidentally rebuilding the generation project will overwrite your changes if you leave them there. Step Three: Modify and use your POCO entity classes Note: I made a bunch of tweaks to my POCO classes after they were generated. You don't have to do any of this, but I think it's important that you can - they're your classes, and EF Code First respects that. Modify them as you need for your application, or don't. The Context class derives from DbContext, which is what turns on the EF Code First features. It holds a DbSet for each entity. Think of DbSet as a simple List, but with Entity Framework features turned on.   //------------------------------------------------------------------------------ // <auto-generated> // This code was generated from a template. // // Changes to this file may cause incorrect behavior and will be lost if // the code is regenerated. // </auto-generated> //------------------------------------------------------------------------------ namespace EF_CodeFirst_From_Existing_Database.Models { using System; using System.Data.Entity; public partial class Entities : DbContext { public Entities() : base("name=Entities") { } public DbSet<Album> Albums { get; set; } public DbSet<Artist> Artists { get; set; } public DbSet<Cart> Carts { get; set; } public DbSet<Genre> Genres { get; set; } public DbSet<OrderDetail> OrderDetails { get; set; } public DbSet<Order> Orders { get; set; } } } It's a pretty lightweight class as generated, so I just took out the comments, set the namespace, removed the constructor, and formatted it a bit. Done. If I wanted, though, I could have added or removed DbSets, overridden conventions, etc. using System.Data.Entity; namespace MvcMusicStore.Models { public class MusicStoreEntities : DbContext { public DbSet<Album> Albums { get; set; } public DbSet<Genre> Genres { get; set; } public DbSet<Artist> Artists { get; set; } public DbSet<Cart> Carts { get; set; } public DbSet<Order> Orders { get; set; } public DbSet<OrderDetail> OrderDetails { get; set; } } } Next, it's time to look at the individual classes. Some of mine were pretty simple - for the Cart class, I just need to remove the header and clean up the namespace. //------------------------------------------------------------------------------ // // This code was generated from a template.
    // // Changes to this file may cause incorrect behavior and will be lost if // the code is regenerated. // //------------------------------------------------------------------------------ namespace EF_CodeFirst_From_Existing_Database.Models { using System; using System.Collections.Generic; public partial class Cart { // Primitive properties public int RecordId { get; set; } public string CartId { get; set; } public int AlbumId { get; set; } public int Count { get; set; } public System.DateTime DateCreated { get; set; } // Navigation properties public virtual Album Album { get; set; } } } I did a bit more customization on the Album class. Here's what was generated: //------------------------------------------------------------------------------ // // This code was generated from a template. // // Changes to this file may cause incorrect behavior and will be lost if // the code is regenerated. // //------------------------------------------------------------------------------ namespace EF_CodeFirst_From_Existing_Database.Models { using System; using System.Collections.Generic; public partial class Album { public Album() { this.Carts = new HashSet<Cart>(); this.OrderDetails = new HashSet<OrderDetail>(); } // Primitive properties public int AlbumId { get; set; } public int GenreId { get; set; } public int ArtistId { get; set; } public string Title { get; set; } public decimal Price { get; set; } public string AlbumArtUrl { get; set; } // Navigation properties public virtual Artist Artist { get; set; } public virtual Genre Genre { get; set; } public virtual ICollection<Cart> Carts { get; set; } public virtual ICollection<OrderDetail> OrderDetails { get; set; } } } I removed the header, changed the namespace, and removed some of the navigation properties. One nice thing about EF Code First is that you don't have to have a property for each database column or foreign key. In the Music Store sample, for instance, we build the app up using code first and start with just a few columns, adding in fields and navigation properties as the application needs them. EF Code First handles the columns we've told it about and doesn't complain about the others. Here's the basic class: using System.ComponentModel; using System.ComponentModel.DataAnnotations; using System.Web.Mvc; using System.Collections.Generic; namespace MvcMusicStore.Models { public class Album { public int AlbumId { get; set; } public int GenreId { get; set; } public int ArtistId { get; set; } public string Title { get; set; } public decimal Price { get; set; } public string AlbumArtUrl { get; set; } public virtual Genre Genre { get; set; } public virtual Artist Artist { get; set; } public virtual List<OrderDetail> OrderDetails { get; set; } } } It's my class, not Entity Framework's, so I'm free to do what I want with it.
I added a bunch of MVC 3 annotations for scaffolding and validation support, as shown below: using System.ComponentModel; using System.ComponentModel.DataAnnotations; using System.Web.Mvc; using System.Collections.Generic; namespace MvcMusicStore.Models { [Bind(Exclude = "AlbumId")] public class Album { [ScaffoldColumn(false)] public int AlbumId { get; set; } [DisplayName("Genre")] public int GenreId { get; set; } [DisplayName("Artist")] public int ArtistId { get; set; } [Required(ErrorMessage = "An Album Title is required")] [StringLength(160)] public string Title { get; set; } [Required(ErrorMessage = "Price is required")] [Range(0.01, 100.00, ErrorMessage = "Price must be between 0.01 and 100.00")] public decimal Price { get; set; } [DisplayName("Album Art URL")] [StringLength(1024)] public string AlbumArtUrl { get; set; } public virtual Genre Genre { get; set; } public virtual Artist Artist { get; set; } public virtual List<OrderDetail> OrderDetails { get; set; } } } The end result was that I had working EF Code First model code for the finished application. You can follow along through the tutorial to see how I built up to the finished model classes, starting with simple 2-3 property classes and building up to the full working schema. Thanks to Diego Vega (on the Entity Framework team) for pointing me to the DbContext template.
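    To round things off, here's a minimal usage sketch of my own (not from the original post) showing the finished context and model classes being queried; the assumption that a connection string named "MusicStoreEntities" exists, and the price filter itself, are illustrative only:

    using System;
    using System.Linq;
    using MvcMusicStore.Models;

    public class AlbumListingExample
    {
        public static void PrintCheapAlbums()
        {
            // By convention, DbContext looks for a connection string named after the
            // context class ("MusicStoreEntities" is assumed to be defined in config).
            using (var storeDb = new MusicStoreEntities())
            {
                // LINQ-to-Entities query against the Code First model built above.
                var cheapAlbums = storeDb.Albums
                                         .Where(a => a.Price < 10.00m)
                                         .OrderBy(a => a.Title)
                                         .ToList();

                foreach (var album in cheapAlbums)
                {
                    Console.WriteLine("{0} ({1:C})", album.Title, album.Price);
                }
            }
        }
    }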

    Read the article

  • Ad-hoc taxonomy: owning the chess set doesn't mean you decide how the little horsey moves

    - by Roger Hart
    There was one of those little laugh-or-cry moments recently when I heard an anecdote about content strategy failings at a major online retailer. The story goes a bit like this: successful company in a highly commoditized marketplace succeeds on price and largely ignores its content team. Being relatively entrepreneurial, the founders are still knocking around, and occasionally like to "take an interest". One day, they decree that clothing sold on the site can no longer be described as "unisex", because this sounds old fashioned. Sad now. Let me just reiterate for the folks at the back: large retailer, commoditized market place, differentiating on price. That's inherently unstable. Sooner or later, they're going to need one or both of competitive differentiation and significant optimization. I can't speak for the latter, since I'm hypothesizing off a raft of rumour, but one of the simpler paths to the former is to become - or rather acknowledge that they are - a content business. Regardless, they need highly-searchable terminology. Even in the face of tooth and claw resistance to noticing the fundamental position content occupies in driving sales (and SEO) on the web, there's a clear information problem here. Dilettante taxonomy is a disaster. Ok, so this is a small example, but that kind of makes it a good one. Unisex probably is the best way of describing clothing designed to suit either men or women interchangeably. It certainly takes less time to type (and read). It's established terminology, and as a single word, it's significantly better for web readability than a phrasal workaround. Something like "fits men or women" is short, but could fall foul of clause-level discard in web scanning. It's not an adjective, so for intuitive reading it's never going to be near the start of a title or description. It would also clutter up search results, and impose cognitive load in list scanning. Sorry kids, it's just worse. Even if "unisex" were an archaism (which it isn't), the only thing that would weigh against its being more usable and concise terminology would be evidence that this archaism were hurting conversions. Good luck with that. We once - briefly - called one of our products a "Can of worms". It was a bundle in a bug-tracking suite, and we thought it sounded terribly cool. Guess how well that sold. We have information and content professionals for a reason: to make sure that whatever we put in front of users is optimised to meet user and business goals. If that thinking doesn't inform style guides, taxonomy, messaging, title structure, and so forth, you might as well be finger painting.

    Read the article

  • Product Development Investment: A Measure of Vendor Performance

    - by Jim Mcglothlin
    The relationship between a large, complex organization and its key suppliers of information technology is normally more than just "strategic". Expectations about the duration of the relationship typically exceed 20 years. Enterprise applications and technology infrastructure are not expected to be changed out like petunias. So how would you rate the due diligence processes as performed in Higher Education when selecting critical, transformational information technology? My observation: I see a lot of effort put into elaborate demonstration of basic software functionality. I see a lot of attention paid to the cost element of technology acquisition, including the contracted cost of implementation consulting services. But the factor that receives only cursory analysis and due diligence is long-term performance--the ability of a vendor to grow, expand, and develop, and bring its customers along with it. So what should you look for in a long-term IT supplier? Oracle has a public track record for product development. The annual investment has been on a run rate of almost $3 billion in organic product development. Oracle's well-publicized acquisitions and mergers have been supplemental to its R&D. This is important for Higher Education. Another meaningful way to evaluate a company is to look at the tangible track record of enhancement. Consider the Oracle-PeopleSoft enterprise business platform since it was acquired by Oracle 6 years ago; each product or technology enhancement below is paired with its customer or user impact:
    - Service Oriented Architecture (SOA): 300+ new web services delivered in versions 9.0 & 9.1 provide flexibility, so that customers can integrate PeopleSoft with other applications. Campus Solutions has added Admissions and Constituent Web Services.
    - Constituent Relationship Management: PeopleSoft CRM 9.1 for Higher Education introduced new process flows for student recruiting and retention to support "Student Success" initiatives. A 360 view of the constituent is now delivered, and the concept of a single-stop Student Services Center is now in CRM 9.1 with tight integration to PeopleSoft Campus Solutions.
    - Human Capital Management: Contract Pay for Education, with flexibility for configuration and calculation, has been extended in HCM 9.1. New chartfield integration among Project Costing - Time & Labor - Payroll to serve the labor distribution requirements for Grants / Sponsored Research.
    - Talent Management: PeopleSoft 9.0 and 9.1 feature an integrated talent management approach centered on definitions in "Profile Manager", with all new usability improvements. Internal and external candidate pools, and the entire recruitment process, are driven by delivered configurable selection and on-boarding processes. Interview scheduling and online job offers are newly delivered processes.
    - Performance Management: PeopleSoft HCM ePerformance 9.1 will include significant new functionality designed to help organizations more effectively align business objectives with employee goals. Using an Organization Chart view, your business goals can flow down to become tangible objectives per employee.
    - Succession Planning / Workforce Development: New in HCM 9.0, enhanced in 9.1, is a planning capability for regular or unusual (major organizational change) succession of internal or external candidates. PeopleSoft supports employee-based career planning, which ultimately increases the integrity of the succession planning process (identify their career needs, plans, preferences, and interests).
    - Dashboards / Oracle Business Intelligence Application Suite: Oracle Human Resources Analytics provides the workforce information foundation that integrates data from HR functional areas and Finance. Oracle Human Resources Analytics delivers 9 dashboards and over 200 reports. Provide your HR professionals and front-line managers the tools to analyze workforce staffing, retention, and productivity, to better source high-quality applicants, and to reduce absence costs.
    - Multi-year Planning and Commitment Control: External funding sources, especially Grants, require a multi-year encumbrance business process. PeopleSoft HCM 9.1 adds multi-year funding and commitment control, including budget checking. The newly designed Real Time Budget Checking will provide the customer with an updated snapshot of their budget and encumbrances at any given time.
    - Position Budgeting with Hyperion: Hyperion Planning world-class products now include delivered integration to PeopleSoft HCM. Position Budgeting is available in the new Public Sector Planning module of Hyperion.
    - Web 2.0 features for the latest in usability: PeopleSoft 9.1 features a contemporary internet user experience: partial-page refreshing, drag-and-drop pagelets, new menu structure, navigation pagelets, modal popup message windows, favorites & recently used links, type-ahead, drag-and-drop grid columns, pop-out grids.
    - Portal Workspaces: Enterprise 2.0 for your collaborative web communities, using new content management, along with Wikis, blogs, and discussion forums in PeopleSoft Portal 9.1.
    - PeopleTools enhanced by Oracle Fusion Middleware: Standards-based tools have been added to the PeopleTools application infrastructure: BI (XML) Publisher, Java tools. Certified for use with PeopleSoft: Oracle Business Intelligence (OBIEE), Oracle Enterprise Manager, Oracle Weblogic Server, Oracle SOA Suite.
    - Hosting for PeopleSoft applications: A solid new deployment option: Oracle On Demand remote hosting center for high scalability, security, and continuity of operations.
    - Business Process Outsourcing (BPO) for HCM / Payroll functions: Partnership with AT&T provides hosting of the HR/Payroll application along with payroll business process operations, and subscription-based service fees (SaaS). AT&T BPO full service includes pay sheet processing, bank and 3rd party file transfer, payroll tax handling, etc.
    - Continuous Delivery Model: Feature Packs provide faster time-to-benefit; new features become available in PeopleSoft 9.1 (or Campus Solutions 9.0) without the need to perform an upgrade.
    - Golden person data model across all campus applications: Oracle Higher Education Constituent Hub provides synchronization and data governance of person data across any application, e.g. HR/Payroll, Student Information System, Housing, Emergency Contact, LMS, CRM.
    Oracle's aggressive enhancement plans within the "Applications Unlimited" program continue, as new functionality is under development for a new version of a PeopleSoft release planned for 2012. Meanwhile, new capabilities are planned on an annual basis in Feature Packs. PeopleSoft just delivered the HCM 2010 Feature Pack and another is planned for 2011. In February we plan to have over 100 customers from our Customer Advisory Boards at our PeopleSoft Development Center in California to review designs for all of these releases. For those of you near New York City: the investment and progressive development story described above is the subject of an Oracle road show event on February 9, 2011.
Charting Your Course with Oracle Applications is a global event series designed to help business and IT executives assess the impact of new inflection points on their business and applications roadmap: changing workforces, shifting customer and constituent bases, and increased volatility. Learn how innovations ranging from new deployment models like cloud computing to the introduction of social applications and smart devices are delivering results across all areas of business and industry. THIS DOCUMENT IS FOR INFORMATIONAL PURPOSES ONLY AND MAY NOT BE INCORPORATED INTO A CONTRACT OR AGREEMENT.

    Read the article

  • Silverlight Cream for November 08, 2011 -- #1165

    - by Dave Campbell
    In this Issue: Brian Noyes, Michael Crump, WindowsPhoneGeek, Erno de Weerd, Jesse Liberty, Derik Whittaker, Sumit Dutta, Asim Sajjad, Dhananjay Kumar, Kunal Chowdhury, and Beth Massi. Above the Fold: Silverlight: "Working with Prism 4 Part 1: Getting Started" Brian Noyes WP7: "Getting Started with the Coding4Fun toolkit Tile Control" WindowsPhoneGeek LightSwitch: "How to Connect to and Diagram your SQL Express Database in Visual Studio LightSwitch" Beth Massi Shoutouts: Michael Palermo's latest Desert Mountain Developers is up Michael Washington's latest Visual Studio #LightSwitch Daily is up From SilverlightCream.com: Working with Prism 4 Part 1: Getting Started Brian Noyes has a series starting at SilverlightShow about Prism 4 ... this is the first one, so a good time to jump in and pick up on an intro and basic info about Prism plus building your first Prism app. 10 Laps around Silverlight 5 (Part 5 of 10) Michael Crump has Part 5 of his 10-part Silverlight 5 investigation up at SilverlightShow talking about all the various text features added in Silverlight 5 Beta: Text Tracking and Leading, Linked and MultiColumn, OpenType, etc. Getting Started with the Coding4Fun toolkit Tile Control WindowsPhoneGeek takes on the Tile control from the Coding4Fun toolkit... as usual, great tutorial... diagrams, code, explanation Using AppHarbor, Bitbucket and Mercurial with ASP.NET and Silverlight – Part 2 CouchDB, Cloudant and Hammock Erno de Weerd has Part 2 of his trilogy and he's trying to beat David Anson for the long title record :) ... in this episode, he's adding in cloud storage to the mix in a 35-step tutorial. Background Audio Jesse Liberty's talking about background Audio... and no not the Muzak in the elevator (do they still have that?) ... he's tlking about the WP7.1 BackgroundAudioPlayer Using the ToggleSwitch in WinRT/Metro (for C#) Derik Whittaker shows off the ToggleSwitch for WinRT/Metro... not a lot to be said about it, but he says it all :) Part 19 - Windows Phone 7 - Access Phone Contacts Sumit Dutta has Part 19! of his WP7 series up... talking today about getting a phone number from the directory using the PhoneNumberChooserTask ContextMenu using MVVM Asim Sajjad shows how to make the Context Menu ViewModel friendly in this short tutorial. Code to make call in Windows Phone 7 Dhananjay Kumar's latest WP7 post is explaining how to make a call programmatically using the PhoneCallTask launcher. Silverlight Page Navigation Framework - Basic Concept Kunal Chowdhury has a 3-part tutorial series on Silverlight Navigation up. This is the first in the series, and he hits the basics... what constitutes a Page, and how to get started with the navigation framework. How to Connect to and Diagram your SQL Express Database in Visual Studio LightSwitch Beth Massi's latest LightSwitch post is on using the Data Designer to easily crete and model database tables... during development this is in SQL Express, but can be deployed to most SQL server db you like Stay in the 'Light! Twitter SilverlightNews | Twitter WynApse | WynApse.com | Tagged Posts | SilverlightCream Join me @ SilverlightCream | Phoenix Silverlight User Group Technorati Tags: Silverlight    Silverlight 3    Silverlight 4    Windows Phone MIX10

    Read the article

  • how to get 12 for joel test working in a small team of 3-4 on php website?

    - by keisimone
    Hi, I read this and was inspired. I am asking for specific help to achieve a 12 on the Joel Test for my current project. I am working in a team of 3-4 on a PHP project that is based on CakePHP. I only have a dedicated server running on Linux, which I intend to have the website live on, and I have a plan with Assembla where I am using its SVN repository. That's it. I'd like to hear one major, impactful step towards answering each point raised by the Joel Test. By impactful I mean that doing just this one thing would raise my project to scoring, or close to scoring, on that area of the Joel Test. Let's begin:
    1) Do you have a source control system? I am very proud to say that learning how to use SVN, even though we know nuts about branch/release policies, made the biggest impact on our programming lives. The SVN repository is on a paid Assembla plan. Feel free to add if anyone thinks we can do more in this area.
    2) Can you make a build in one step? I think the issue is how do I define a build? I think we are going to define it as: if tomorrow my dedicated server crashed, and we found another server from another normal hosting provider, and all my team's machines were destroyed, how would we get the website up again? My code is in SVN on Assembla. One step means as close to one button to push as possible.
    3) Do you make daily builds? I know nothing about this. Please help. I googled and came across phpUnderControl, but I am not sure if we can get that to work with Assembla. Are there easier ways?
    4) Do you have a bug database? We have not used the Assembla features for bug tracking, ashamed to say. I think I will sort this out myself.
    5) Do you fix bugs before writing new code? Policy issue. I will sort it out myself.
    6) Do you have an up-to-date schedule? Working on it. Same as above. Estimates have historically been overly optimistic. Having spent too much time using all sorts of funny project management tools, I think this time I am going to use just paper and pen. Please don't tell me Scrum; I need to keep things even simpler than that.
    7) Do you have a spec? We do, but it's on paper and pen. What would be a good template?
    8) Do programmers have quiet working conditions? Well, we work at home and in a distributed manner, so...
    9) Do you use the best tools money can buy? We use cheap tools. We are not big.
    10) Do you have testers? No testers. Since we have a team of 3, I think I should go and get one tester, even on a part-time basis. So how should this one part-time tester work to extract the maximum effect? Should I get him to write out the test scenarios and expected outcomes and then test them? Or should I write the test scenarios and then ask him to carry them out? We will be writing the test cases ourselves using SimpleTest. I also came across Selenium; how useful is that?
    11) Do new candidates write code during their interview? Not applicable, but I will do it next time I try to hire anyone else, hires or contractors alike.
    12) Do you do hallway usability testing? Will do so on a per-month or per-milestone basis. I will grab my friends who are not net-savvy; they will be the best testers of this type.
    Thank you.

    Read the article

  • Using Table-Valued Parameters in SQL Server

    - by Jesse
    I work with stored procedures in SQL Server pretty frequently and have often found myself with a need to pass in a list of values at run-time. Quite often this list contains a set of ids on which the stored procedure needs to operate, the size and contents of which are not known at design time. In the past I’ve taken the collection of ids (which are usually integers), converted them to a string representation where each value is separated by a comma and passed that string into a VARCHAR parameter of a stored procedure. The body of the stored procedure would then need to parse that string into a table variable which could be easily consumed with set-based logic within the rest of the stored procedure. This approach works pretty well but the VARCHAR variable has always felt like an unwanted “middle man” in this scenario. Of course, I could use a BULK INSERT operation to load the list of ids into a temporary table that the stored procedure could use, but that approach seems heavy-handed in situations where the list of values is usually going to contain only a few dozen values. Fortunately SQL Server 2008 introduced the concept of table-valued parameters which effectively eliminates the need for the clumsy middle man VARCHAR parameter. Example: Customer Transaction Summary Report Let’s say we have a report that can summarize the transactions that we’ve conducted with customers over a period of time. The report returns a pretty simple dataset containing one row per customer with some key metrics about how much business that customer has conducted over the date range for which the report is being run. Sometimes the report is run for a single customer, sometimes it’s run for all customers, and sometimes it’s run for a handful of customers (i.e. a salesman runs it for the customers that fall into his sales territory). This report can be invoked from a website on-demand, or it can be scheduled for periodic delivery to certain users via SQL Server Reporting Services. Because the report can be invoked from different places and the query to generate the report is complex, it’s been packed into a stored procedure that accepts three parameters: @startDate – The beginning of the date range for which the report should be run. @endDate – The end of the date range for which the report should be run. @customerIds – The customer Ids for which the report should be run. Obviously, the @startDate and @endDate parameters are DATETIME variables. The @customerIds parameter, however, needs to contain a list of the identity values (primary key) from the Customers table representing the customers that were selected for this particular run of the report. In prior versions of SQL Server we might have made this parameter a VARCHAR variable, but with SQL Server 2008 we can make it into a table-valued parameter. Defining And Using The Table Type In order to use a table-valued parameter, we first need to tell SQL Server what the table will look like. We do this by creating a user defined type. For the purposes of this stored procedure we need a very simple type to model a table variable with a single integer column. We can create a generic type called ‘IntegerListTableType’ like this: CREATE TYPE IntegerListTableType AS TABLE (Value INT NOT NULL) Once defined, we can use this new type to define the @customerIds parameter in the signature of our stored procedure.
    The parameter list for the stored procedure definition might look like: CREATE PROCEDURE dbo.rpt_CustomerTransactionSummary @startDate datetime, @endDate datetime, @customerIds IntegerListTableType READONLY   Note the ‘READONLY’ keyword following the declaration of the @customerIds parameter. SQL Server requires that any table-valued parameter be marked as ‘READONLY’ and no DML (INSERT/UPDATE/DELETE) statements can be performed on a table-valued parameter within the routine in which it’s used. Aside from the DML restriction, however, you can do pretty much anything with a table-valued parameter that you could with a normal TABLE variable. With the user defined type and stored procedure defined as above, we could invoke it like this: DECLARE @customerIdList IntegerListTableType INSERT @customerIdList VALUES (1) INSERT @customerIdList VALUES (2) INSERT @customerIdList VALUES (3) EXEC dbo.rpt_CustomerTransactionSummary @startDate = '2012-05-01', @endDate = '2012-06-01', @customerIds = @customerIdList   Note that we can simply declare a variable of type ‘IntegerListTableType’ just like any other normal variable and insert values into it just like a TABLE variable. We could also populate the variable with an INSERT … SELECT statement if desired. Using The Table-Valued Parameter With ADO.NET Invoking a stored procedure with a table-valued parameter from ADO.NET is as simple as building a DataTable and passing it in as the Value of a SqlParameter. Here’s some example code for how we would construct the SqlParameter for the @customerIds parameter in our stored procedure: var customerIdsParameter = new SqlParameter(); customerIdsParameter.Direction = ParameterDirection.Input; customerIdsParameter.TypeName = "IntegerListTableType"; customerIdsParameter.Value = selectedCustomerIds.ToIntegerListDataTable("Value");   All we’re doing here is new’ing up an instance of SqlParameter, setting the parameter’s direction, specifying the name of the User Defined Type that this parameter uses, and setting its value. We’re assuming here that we have an IEnumerable<int> variable called ‘selectedCustomerIds’ containing all of the customer Ids for which the report should be run. The ‘ToIntegerListDataTable’ method is an extension method of the IEnumerable<int> type that looks like this: public static DataTable ToIntegerListDataTable(this IEnumerable<int> intValues, string columnName) { var integerListDataTable = new DataTable(); integerListDataTable.Columns.Add(columnName); foreach(var intValue in intValues) { var nextRow = integerListDataTable.NewRow(); nextRow[columnName] = intValue; integerListDataTable.Rows.Add(nextRow); } return integerListDataTable; }   Since the ‘IntegerListTableType’ has a single int column called ‘Value’, we pass that in for the ‘columnName’ parameter to the extension method. The method creates a new single-columned DataTable using the provided column name, then iterates over the items in the IEnumerable<int> instance adding one row for each value. We can then use this SqlParameter instance when invoking the stored procedure just like we would use any other parameter. Advanced Functionality Passing a list of integers into a stored procedure is a very simple usage scenario for the table-valued parameters feature, but I’ve found that it covers the majority of situations where I’ve needed to pass a collection of data for use in a query at run-time.
    I should note that the BULK INSERT feature still makes sense for passing large amounts of data to SQL Server for processing. MSDN seems to suggest that 1000 rows of data is the tipping point where the overhead of a BULK INSERT operation can pay dividends. I should also note here that table-valued parameters can be used to deal with more complex data structures than single-columned tables of integers. A User Defined Type that backs a table-valued parameter can use things like identities and computed columns. That said, using some of these more advanced features might require the use of the SqlDataRecord and SqlMetaData classes instead of a simple DataTable. Erland Sommarskog has a great article on his website that describes when and how to use these classes for table-valued parameters. What About Reporting Services? Earlier in the post I referenced the fact that our example stored procedure would be called from both a web application and a SQL Server Reporting Services report. Unfortunately, using table-valued parameters from SSRS reports can be a bit tricky and warrants its own blog post which I’ll be putting together and posting sometime in the near future.
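    As a rough illustration of that more advanced route (my own sketch, not from the article; the helper class and method names are made up), a table-valued parameter can also be streamed as an IEnumerable<SqlDataRecord> instead of a DataTable, which avoids materializing the whole list in memory:

    using System.Collections.Generic;
    using System.Data;
    using System.Data.SqlClient;
    using Microsoft.SqlServer.Server;

    public static class TableValuedParameterHelper
    {
        // Streams a sequence of ids as rows shaped like the IntegerListTableType
        // user defined type above (a single INT column named "Value").
        public static IEnumerable<SqlDataRecord> ToIntegerListRecords(IEnumerable<int> values)
        {
            var metaData = new SqlMetaData("Value", SqlDbType.Int);
            foreach (var value in values)
            {
                var record = new SqlDataRecord(metaData);
                record.SetInt32(0, value);
                yield return record;
            }
        }
    }

    // Usage sketch: declare the parameter as Structured and name the user defined type.
    // Note that streaming an enumeration that yields no rows throws at execution time,
    // so the empty-list case needs separate handling.
    // var customerIdsParameter = new SqlParameter("@customerIds", SqlDbType.Structured)
    // {
    //     TypeName = "IntegerListTableType",
    //     Value = TableValuedParameterHelper.ToIntegerListRecords(selectedCustomerIds)
    // };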

    Read the article

  • CodePlex Daily Summary for Monday, February 07, 2011

    CodePlex Daily Summary for Monday, February 07, 2011Popular ReleasesRawr: Rawr 4.0.19 Beta: Rawr is now web-based. The link to use Rawr4 is: http://elitistjerks.com/rawr.phpThis is the Cataclysm Beta Release. More details can be found at the following link http://rawr.codeplex.com/Thread/View.aspx?ThreadId=237262 As of the 4.0.16 release, you can now also begin using the new Downloadable WPF version of Rawr!This is a pre-alpha release of the WPF version, there are likely to be a lot of issues. If you have a problem, please follow the Posting Guidelines and put it into the Issue Trac...EnhSim: EnhSim 2.3.5 ALPHA: 2.3.5 ALPHAThis release supports WoW patch 4.06 at level 85 To use this release, you must have the Microsoft Visual C++ 2010 Redistributable Package installed. This can be downloaded from http://www.microsoft.com/downloads/en/details.aspx?FamilyID=A7B7A05E-6DE6-4D3A-A423-37BF0912DB84 To use the GUI you must have the .NET 4.0 Framework installed. This can be downloaded from http://www.microsoft.com/downloads/en/details.aspx?FamilyID=9cfb2d51-5ff4-4491-b0e5-b386f32c0992 - Removed the chanc...Pyxis 2: Production Release: Pyxis 2.0.0.13 - Full Production Release This release of Pyxis 2 offers you a wide range of features: Launch Applications in their own threads & domains Render alpha-blended icons on the desktop Support for SD & USB drives Online App Store Dynamic & Static IP support Menus & Modals Over a dozen GUI controls File selection dialogs Folder selection dialog Application, Bootloader, and Firmware Updating Update Release Notes Much More!Microsoft All-In-One Code Framework: Sample Browser v2 (CTP Release): Sample Browser v2 (CTP Release) http://i3.codeplex.com/Project/Download/FileDownload.aspx?ProjectName=1code&DownloadId=205917MVVM Light Toolkit: MVVM Light Toolkit V3 SP1 (4): There was a small issue with the previous release that caused errors when installing the templates in VS10 Express. This release corrects the error. Only use this if you encountered issues when installing the previous release. No changes in the binaries.Barcode Rendering Framework: 2.2.0.0: Breaking ChangesThe assembly version for many files has changed (all now aligned with the core BRF assembly at version 2.1.0.0) so please update web.config files if you are using the Zen.Barcode.Web/Zen.Barcode.Web.Design assemblies. What's NewSimple barcode image HTTP handler that uses standard HTTP GET and query string parameters for rendering barcode images. Support for embedding barcodes in SSRS 2008 reports. Important SQL Server Reporting Services 2008 InformationFirstly the SSRS CRI...Finestra Virtual Desktops: 1.0: Finally the version 1.0 release! Sorry for the long delay since the last release, but I think that you'll find this release to be really smooth, really stable, and a really great enhancement to Windows. New features include: Windows 7 taskbar integration Major performance and usability improvements Redesigned look and feel New name: Finestra Better automatic updating Much faster full-screen switcher Fixes Windows 7 hotkey collisions by default Updated installerNuclex Framework: R1323: This release is a pure XNA 4.0 release that no longer includes any XNA 3.1 binaries or projects. All x86 assemblies have been compiled targeting the .NET 4.0 Client Profile. Requires either Visual C# 2010 Express or Visual Studio 2010, both with XNA Game Studio 4.0. 3rd party libraries needed to compile and run the source code are included, so everything will compile out of the box. 
Changes: - Thanks to a generous contribution by Adrian Tsai, the TrueType importer now accepts standard Windo...Community Forums NNTP bridge: Community Forums NNTP Bridge V43: Release of the Community Forums NNTP Bridge to access the social and anwsers MS forums with a single, open source NNTP bridge. This release has added some features / bugfixes: Bugfix: Now supporting multi-line headers in all headers ;) / Thanks to Kai Schätzl for reporting this! Debug output optimized / Added a "Copy to clipboard" button in the debug windowFacebook C# SDK: 5.0.2 (BETA): PLEASE TAKE A FEW MINUTES TO GIVE US SOME FEEDBACK: Facebook C# SDK Survey This is third BETA release of the version 5 branch of the Facebook C# SDK. Remember this is a BETA build. Some things may change or not work exactly as planned. We are absolutely looking for feedback on this release to help us improve the final 5.X.X release. This release contains some breaking changes. Particularly with authentication. After spending time reviewing the trouble areas that people are having using th...ASP.NET MVC SiteMap provider: MvcSiteMapProvider 3.0.0 for MVC3: Using NuGet?MvcSiteMapProvider is also listed in the NuGet feed. Learn more... Like the project? Consider a donation!Donate via PayPal via PayPal. ChangelogTargeting ASP.NET MVC 3 and .NET 4.0 Additional UpdatePriority options for generating XML sitemaps Allow to specify target on SiteMapTitleAttribute One action with multiple routes and breadcrumbs Medium Trust optimizations Create SiteMapTitleAttribute for setting parent title IntelliSense for your sitemap with MvcSiteMapSchem...patterns & practices SharePoint Guidance: SharePoint Guidance 2010 Hands On Lab: SharePoint Guidance 2010 Hands On Lab consists of six labs: one for logging, one for service location, and four for application setting manager. Each lab takes about 20 minutes to walk through. Each lab consists of a PDF document. You can go through the steps in the doc to create solution and then build/deploy the solution and run the lab. For those of you who wants to save the time, we included the final solution so you can just build/deploy the solution and run the lab.Value Injecter - object(s) to -> object mapper: 2.3: it lets you define your own convention-based matching algorithms (ValueInjections) in order to match up (inject) source values to destination values. inject from multiple sources in one InjectFrom added ConventionInjectionMobile Device Detection and Redirection: 0.1.11.11: Improvements to Beta Release The following changes have been made in version 0.1.11.11: BlackBerry Version 6 devices (such as the 9800 Torch) are now correctly identified with a dedicated handler. Android powered devices are now correctly identified. Minor change to Provider.cs to improve performance and optimise data sent to 51Degrees.mobi if the option is enabled. GC.collect is no longer called at any point. All garbage collection now happens automatically IMPORTANT CHANGES This rele...TweetSharp: TweetSharp v2.0.0.0 - Preview 10: Documentation for this release may be found at http://tweetsharp.codeplex.com/wikipage?title=UserGuide&referringTitle=Documentation. Note: This code is currently preview quality. 
Preview 9 ChangesAdded support for trends Added support for Silverlight 4 Elevated WP7 fixes Third Party Library VersionsHammock v1.1.7: http://hammock.codeplex.com Json.NET 4.0 Release 1: http://json.codeplex.comFacebook Graph Toolkit: Facebook Graph Toolkit 0.7: Version 0.7 updates (2 Feb 2011)new Facebook Graph objects: Link, Note, StatusMessage new publish features: status update, post with link attachment new Graph Api connections in User object: statuses, links, notes internal code path improvement on Api object bug fixed: extra "r" character appears for strings with "\r" symbols in Json Objects bug fixed: error when performing Postback to the same page Tutorial and documentation available at http://fbgraph.computerbeacon.netPhalanger - The PHP Language Compiler for the .NET Framework: 2.0 (February 2011): Next release of Phalanger; again faster, more stable and ready for daily use. Based on many user experiences this release is one more step closer to be perfect compiler and runtime of your old PHP applications; or perfect platform for migrating to .NET. February 2011 release of Phalanger introduces several changes, enhancements and fixes. See complete changelist for all the changes. To improve the performance of your application using MySQL, please use Managed MySQL Extension for Phalanger....Chemistry Add-in for Word: Chemistry Add-in for Word - Version 1.0: On February 1, 2011, we announced the availability of version 1 of the Chemistry Add-in for Word, as well as the assignment of the open source project to the Outercurve Foundation by Microsoft Research and the University of Cambridge. System RequirementsHardware RequirementsAny computer that can run Office 2007 or Office 2010. Software RequirementsYour computer must have the following software: Any version of Windows that can run Office 2007 or Office 2010, which includes Windows XP SP3 and...Minemapper: Minemapper v0.1.4: Updated mcmap, now supports new block types. Added a Worlds->'View Cache Folder' menu item.StyleCop for ReSharper: StyleCop for ReSharper 5.1.15005.000: Applied patch from rodpl for merging of stylecop setting files with settings in parent folder. Previous release: A considerable amount of work has gone into this release: Huge focus on performance around the violation scanning subsystem: - caching added to reduce IO operations around reading and merging of settings files - caching added to reduce creation of expensive objects Users should notice condsiderable perf boost and a decrease in memory usage. Bug Fixes: - StyleCop's new Objec...New ProjectsAtari Lynx Emulator: Atari Lynx emulator written in C#BMIcalculator: BMI Calculator for Windows Phone 7.Channel9 Plugin for PlayOn: This is a plugin for use with PlayOn Digital Media Server.cup: cosDEBUG ANALYZER.NET Plugs: Debugging memory dumps using .NET Plugs written in C#Foursquare Venue api: This project allows a user to enter venue information and search foursquare . it will show who is the mayor who is currently at the venue as well as icons for the user . it uses google maps lat and long as well as foursqaure V2 api JSON..G19 Glower: Application for the Logitech G19 keyboard to change the colour of the keyboard as you type. Basic effect is to glow the keyboard more as you type, but there are other modes available. It is also fully customizable and plugin based, so feel free to change it all you wish!GMare: GMare (Game Maker Alternative Room Editor) is a third party room editor for YoYo Game's Game Maker (Supports versions 5 to 8). 
Offering more robust tools and options to make room editing less cumbersome. GMare is developed in C#, using .NET 2.0, and OpenGL.Hime Parser Generator: Lexer and parser generator for C#. Currently parsing methods are LR(0), LR(1) and LALR(1). Partially implemented parsing methods are GLR(1), GLALR(1), RNGLR(1) and RNGLALR(1). Can be extended for using other parsing methods, including from the LL family.Irrlicht.Net: Irrlicht.Net, is practically what the name says, a wrapper for Irrlicht 1.7.2, written in C#, but should work for other .Net based programming languages. It is, right now in the very early stages, but still you can make something out of it.MapShell: MapShell makes it easy for GIS Administrators to automate repetitive tasks by providing a command-line interface to useful GIS functionality with the consistent syntax and staggering power of PowerShell.MonoStrategy: This project is intended to be an OpenSource remake of one of the best multiplayer realtime strategy games in the world, "The Settlers 3" (Die Siedler 3), with support for all major platforms and mobile devices released under Affero GPL 3. OpenKTV: open source KTV systemRemote Hardware Monitor: Remote Hardware Monitor provides JSON and XML serialised access to data from the Open Hardware Monitor (http://openhardwaremonitor.org/). It is developed in C# using Json.NET for JSON serialisation. SENG 403 Group 6: TBASharePoint Correlation ID View Webpart: Enables a webpart and a ribbon button that allows you to retrieve the information recorded in the ULS log tagged with a specific correlation ID. This makes it much easier for SharePoint developers to retrieve the log messages for a specific correlation token.SharePoint OM Explorer: Explorer SharePoint 2010 under the covers with a set of connected and connectable web parts with an underlying vision to make it easy to: - View site hiearchy - See all object model data for sites, lists, content types, event receivers, etc - Easily view customized bits vs. OOB.System Center Service Manager Facade: This project is c# facade API around SCSM objects using T4 templates and codeTemporary internal server sample for your C# application: CSISS is a sample for you as a C# developer. It explains with an example how to create a virtual, temporary "server" in a class.TestSGB2: This is a test upload2Tiff Splitter: A simple WinForms app that that opens a multi-page tiff file and saves all pages as individual tiff files. The multi-page tiff can be opened via open file dialog, or dragged in.Time Tracker for Windows Phone 7: This is a sample time tracker application created by the members of the LetsXNA !! linked in user group. See LetsXNA.Org for more details. You are welcome to contribute by joining the linked in group and the registering as a developer on codeplex. Lets build a great time tracker!TraceLight: <project name> TraceLight ray tracer </project name> <programming language> C# </programming language>

    Read the article

  • Ubuntu 14.04 Failed to load module udlfb

    - by jar276705
    DisplayLink doesn't load and run. The adapter is recognized and /dev/FB1 is created. USB bus info: Bus 001 Device 006: ID 17e9:0198 DisplayLink Xorg.0.log: X.Org X Server 1.15.1 Release Date: 2014-04-13 [ 44708.386] X Protocol Version 11, Revision 0 [ 44708.389] Build Operating System: Linux 3.2.0-37-generic i686 Ubuntu [ 44708.392] Current Operating System: Linux rrl 3.13.0-24-generic #46-Ubuntu SMP Thu Apr 10 19:08:14 UTC 2014 i686 [ 44708.392] Kernel command line: BOOT_IMAGE=/boot/vmlinuz-3.13.0-24-generic root=UUID=6b719a77-29e0-4668-8f16-57d0d3a73a3f ro quiet splash vt.handoff=7 [ 44708.399] Build Date: 16 April 2014 01:40:08PM [ 44708.402] xorg-server 2:1.15.1-0ubuntu2 (For technical support please see http://www.ubuntu.com/support) [ 44708.405] Current version of pixman: 0.30.2 [ 44708.412] Before reporting problems, check http://wiki.x.org to make sure that you have the latest version. [ 44708.412] Markers: (--) probed, (**) from config file, (==) default setting, (++) from command line, (!!) notice, (II) informational, (WW) warning, (EE) error, (NI) not implemented, (??) unknown. [ 44708.427] (==) Log file: "/var/log/Xorg.0.log", Time: Thu May 1 09:38:27 2014 [ 44708.431] (==) Using config file: "/etc/X11/xorg.conf" [ 44708.434] (==) Using system config directory "/usr/share/X11/xorg.conf.d" [ 44708.435] (==) ServerLayout "X.org Configured" [ 44708.435] (**) |-->Screen "DisplayLinkScreen" (0) [ 44708.435] (**) | |-->Monitor "DisplayLinkMonitor" [ 44708.435] (**) | |-->Device "DisplayLinkDevice" [ 44708.435] (**) |-->Screen "Screen0" (1) [ 44708.435] (**) | |-->Monitor "Monitor0" [ 44708.435] (**) | |-->Device "Card0" [ 44708.435] (**) |-->Input Device "Mouse0" [ 44708.435] (**) |-->Input Device "Keyboard0" [ 44708.435] (==) Automatically adding devices [ 44708.435] (==) Automatically enabling devices [ 44708.435] (==) Automatically adding GPU devices [ 44708.435] (WW) The directory "/usr/share/fonts/X11/cyrillic" does not exist. [ 44708.435] Entry deleted from font path. [ 44708.435] (WW) The directory "/usr/share/fonts/X11/75dpi/" does not exist. [ 44708.435] Entry deleted from font path. [ 44708.435] (WW) The directory "/usr/share/fonts/X11/75dpi" does not exist. [ 44708.435] Entry deleted from font path. [ 44708.435] (WW) The directory "/usr/share/fonts/X11/cyrillic" does not exist. [ 44708.435] Entry deleted from font path. [ 44708.435] (WW) The directory "/usr/share/fonts/X11/75dpi/" does not exist. [ 44708.435] Entry deleted from font path. [ 44708.435] (WW) The directory "/usr/share/fonts/X11/75dpi" does not exist. [ 44708.435] Entry deleted from font path. [ 44708.435] (**) FontPath set to: /usr/share/fonts/X11/misc, /usr/share/fonts/X11/100dpi/:unscaled, /usr/share/fonts/X11/Type1, /usr/share/fonts/X11/100dpi, built-ins, /usr/share/fonts/X11/misc, /usr/share/fonts/X11/100dpi/:unscaled, /usr/share/fonts/X11/Type1, /usr/share/fonts/X11/100dpi, built-ins [ 44708.435] (**) ModulePath set to "/usr/lib/xorg/modules" [ 44708.435] (WW) Hotplugging is on, devices using drivers 'kbd', 'mouse' or 'vmmouse' will be disabled. 
[ 44708.435] (WW) Disabling Mouse0 [ 44708.435] (WW) Disabling Keyboard0 [ 44708.435] (II) Loader magic: 0xb77106c0 [ 44708.435] (II) Module ABI versions: [ 44708.435] X.Org ANSI C Emulation: 0.4 [ 44708.435] X.Org Video Driver: 15.0 [ 44708.435] X.Org XInput driver : 20.0 [ 44708.435] X.Org Server Extension : 8.0 [ 44708.436] (II) xfree86: Adding drm device (/dev/dri/card0) [ 44708.436] (II) xfree86: Adding drm device (/dev/dri/card1) [ 44708.437] (--) PCI:*(0:1:5:0) 1002:9616:105b:0e26 rev 0, Mem @ 0xf0000000/134217728, 0xfeae0000/65536, 0xfe900000/1048576, I/O @ 0x0000b000/256 [ 44708.441] Initializing built-in extension Generic Event Extension [ 44708.444] Initializing built-in extension SHAPE [ 44708.448] Initializing built-in extension MIT-SHM [ 44708.452] Initializing built-in extension XInputExtension [ 44708.456] Initializing built-in extension XTEST [ 44708.460] Initializing built-in extension BIG-REQUESTS [ 44708.464] Initializing built-in extension SYNC [ 44708.468] Initializing built-in extension XKEYBOARD [ 44708.471] Initializing built-in extension XC-MISC [ 44708.475] Initializing built-in extension SECURITY [ 44708.479] Initializing built-in extension XINERAMA [ 44708.483] Initializing built-in extension XFIXES [ 44708.487] Initializing built-in extension RENDER [ 44708.491] Initializing built-in extension RANDR [ 44708.494] Initializing built-in extension COMPOSITE [ 44708.498] Initializing built-in extension DAMAGE [ 44708.502] Initializing built-in extension MIT-SCREEN-SAVER [ 44708.506] Initializing built-in extension DOUBLE-BUFFER [ 44708.510] Initializing built-in extension RECORD [ 44708.513] Initializing built-in extension DPMS [ 44708.517] Initializing built-in extension Present [ 44708.521] Initializing built-in extension DRI3 [ 44708.525] Initializing built-in extension X-Resource [ 44708.528] Initializing built-in extension XVideo [ 44708.532] Initializing built-in extension XVideo-MotionCompensation [ 44708.535] Initializing built-in extension SELinux [ 44708.539] Initializing built-in extension XFree86-VidModeExtension [ 44708.542] Initializing built-in extension XFree86-DGA [ 44708.546] Initializing built-in extension XFree86-DRI [ 44708.549] Initializing built-in extension DRI2 [ 44708.549] (II) "glx" will be loaded. This was enabled by default and also specified in the config file. [ 44708.549] (WW) "xmir" is not to be loaded by default. Skipping. 
[ 44708.549] (II) LoadModule: "glx" [ 44708.549] (II) Loading /usr/lib/xorg/modules/extensions/libglx.so [ 44708.550] (II) Module glx: vendor="X.Org Foundation" [ 44708.550] compiled for 1.15.1, module version = 1.0.0 [ 44708.550] ABI class: X.Org Server Extension, version 8.0 [ 44708.550] (==) AIGLX enabled [ 44708.553] Loading extension GLX [ 44708.553] (II) LoadModule: "udlfb" [ 44708.554] (WW) Warning, couldn't open module udlfb [ 44708.554] (II) UnloadModule: "udlfb" [ 44708.554] (II) Unloading udlfb [ 44708.554] (EE) Failed to load module "udlfb" (module does not exist, 0) [ 44708.554] (II) LoadModule: "modesetting" [ 44708.554] (II) Loading /usr/lib/xorg/modules/drivers/modesetting_drv.so [ 44708.554] (II) Module modesetting: vendor="X.Org Foundation" [ 44708.554] compiled for 1.15.0, module version = 0.8.1 [ 44708.554] Module class: X.Org Video Driver [ 44708.554] ABI class: X.Org Video Driver, version 15.0 [ 44708.554] (==) Matched fglrx as autoconfigured driver 0 [ 44708.554] (==) Matched ati as autoconfigured driver 1 [ 44708.554] (==) Matched fglrx as autoconfigured driver 2 [ 44708.554] (==) Matched ati as autoconfigured driver 3 [ 44708.554] (==) Matched modesetting as autoconfigured driver 4 [ 44708.554] (==) Matched fbdev as autoconfigured driver 5 [ 44708.554] (==) Matched vesa as autoconfigured driver 6 [ 44708.554] (==) Assigned the driver to the xf86ConfigLayout [ 44708.554] (II) LoadModule: "fglrx" [ 44708.554] (WW) Warning, couldn't open module fglrx [ 44708.554] (II) UnloadModule: "fglrx" [ 44708.554] (II) Unloading fglrx [ 44708.554] (EE) Failed to load module "fglrx" (module does not exist, 0) [ 44708.554] (II) LoadModule: "ati" [ 44708.554] (II) Loading /usr/lib/xorg/modules/drivers/ati_drv.so [ 44708.554] (II) Module ati: vendor="X.Org Foundation" [ 44708.554] compiled for 1.15.0, module version = 7.3.0 [ 44708.554] Module class: X.Org Video Driver [ 44708.554] ABI class: X.Org Video Driver, version 15.0 [ 44708.554] (II) LoadModule: "radeon" [ 44708.555] (II) Loading /usr/lib/xorg/modules/drivers/radeon_drv.so [ 44708.555] (II) Module radeon: vendor="X.Org Foundation" [ 44708.555] compiled for 1.15.0, module version = 7.3.0 [ 44708.555] Module class: X.Org Video Driver [ 44708.555] ABI class: X.Org Video Driver, version 15.0 [ 44708.555] (II) LoadModule: "modesetting" [ 44708.555] (II) Loading /usr/lib/xorg/modules/drivers/modesetting_drv.so [ 44708.555] (II) Module modesetting: vendor="X.Org Foundation" [ 44708.555] compiled for 1.15.0, module version = 0.8.1 [ 44708.555] Module class: X.Org Video Driver [ 44708.555] ABI class: X.Org Video Driver, version 15.0 [ 44708.555] (II) UnloadModule: "modesetting" [ 44708.555] (II) Unloading modesetting [ 44708.555] (II) Failed to load module "modesetting" (already loaded, 0) [ 44708.555] (II) LoadModule: "fbdev" [ 44708.555] (II) Loading /usr/lib/xorg/modules/drivers/fbdev_drv.so [ 44708.555] (II) Module fbdev: vendor="X.Org Foundation" [ 44708.555] compiled for 1.15.0, module version = 0.4.4 [ 44708.555] Module class: X.Org Video Driver [ 44708.555] ABI class: X.Org Video Driver, version 15.0 [ 44708.555] (II) LoadModule: "vesa" [ 44708.555] (II) Loading /usr/lib/xorg/modules/drivers/vesa_drv.so [ 44708.555] (II) Module vesa: vendor="X.Org Foundation" [ 44708.555] compiled for 1.15.0, module version = 2.3.3 [ 44708.555] Module class: X.Org Video Driver [ 44708.555] ABI class: X.Org Video Driver, version 15.0 [ 44708.555] (II) modesetting: Driver for Modesetting Kernel Drivers: kms [ 44708.555] (II) RADEON: Driver for 
ATI Radeon chipsets: [ 44708.560] (II) FBDEV: driver for framebuffer: fbdev [ 44708.560] (II) VESA: driver for VESA chipsets: vesa [ 44708.560] (--) using VT number 7 [ 44708.578] (II) modesetting(0): using drv /dev/dri/card0 [ 44708.578] (II) modesetting(G0): using drv /dev/dri/card1 [ 44708.578] (WW) Falling back to old probe method for fbdev [ 44708.578] (II) Loading sub module "fbdevhw" [ 44708.578] (II) LoadModule: "fbdevhw" [ 44708.578] (II) Loading /usr/lib/xorg/modules/libfbdevhw.so [ 44708.578] (II) Module fbdevhw: vendor="X.Org Foundation" [ 44708.578] compiled for 1.15.1, module version = 0.0.2 [ 44708.578] ABI class: X.Org Video Driver, version 15.0 [ 44708.578] (WW) Falling back to old probe method for vesa [ 44708.578] (**) modesetting(0): Depth 16, (--) framebuffer bpp 16 [ 44708.578] (==) modesetting(0): RGB weight 565 [ 44708.578] (==) modesetting(0): Default visual is TrueColor [ 44708.578] (II) modesetting(0): ShadowFB: preferred YES, enabled YES [ 44708.608] (II) modesetting(0): Output VGA-0 using monitor section DisplayLinkMonitor [ 44708.610] (II) modesetting(0): Output DVI-0 has no monitor section [ 44708.640] (II) modesetting(0): EDID for output VGA-0 [ 44708.640] (II) modesetting(0): Manufacturer: ACR Model: 74 Serial#: 2483090993 [ 44708.640] (II) modesetting(0): Year: 2009 Week: 40 [ 44708.640] (II) modesetting(0): EDID Version: 1.3 [ 44708.640] (II) modesetting(0): Analog Display Input, Input Voltage Level: 0.700/0.700 V [ 44708.640] (II) modesetting(0): Sync: Separate [ 44708.640] (II) modesetting(0): Max Image Size [cm]: horiz.: 53 vert.: 29 [ 44708.640] (II) modesetting(0): Gamma: 2.20 [ 44708.640] (II) modesetting(0): DPMS capabilities: StandBy Suspend Off; RGB/Color Display [ 44708.641] (II) modesetting(0): First detailed timing is preferred mode [ 44708.641] (II) modesetting(0): redX: 0.649 redY: 0.338 greenX: 0.289 greenY: 0.609 [ 44708.641] (II) modesetting(0): blueX: 0.146 blueY: 0.070 whiteX: 0.313 whiteY: 0.329 [ 44708.641] (II) modesetting(0): Supported established timings: [ 44708.641] (II) modesetting(0): 720x400@70Hz [ 44708.641] (II) modesetting(0): 640x480@60Hz [ 44708.641] (II) modesetting(0): 640x480@72Hz [ 44708.641] (II) modesetting(0): 640x480@75Hz [ 44708.641] (II) modesetting(0): 800x600@56Hz [ 44708.641] (II) modesetting(0): 800x600@60Hz [ 44708.641] (II) modesetting(0): 800x600@72Hz [ 44708.641] (II) modesetting(0): 800x600@75Hz [ 44708.641] (II) modesetting(0): 1024x768@60Hz [ 44708.641] (II) modesetting(0): 1024x768@70Hz [ 44708.641] (II) modesetting(0): 1024x768@75Hz [ 44708.641] (II) modesetting(0): 1280x1024@75Hz [ 44708.641] (II) modesetting(0): Manufacturer's mask: 0 [ 44708.641] (II) modesetting(0): Supported standard timings: [ 44708.641] (II) modesetting(0): #0: hsize: 1280 vsize 1024 refresh: 60 vid: 32897 [ 44708.641] (II) modesetting(0): #1: hsize: 1152 vsize 864 refresh: 75 vid: 20337 [ 44708.641] (II) modesetting(0): #2: hsize: 1440 vsize 900 refresh: 60 vid: 149 [ 44708.641] (II) modesetting(0): #3: hsize: 1440 vsize 900 refresh: 75 vid: 3989 [ 44708.641] (II) modesetting(0): #4: hsize: 1600 vsize 1200 refresh: 60 vid: 16553 [ 44708.641] (II) modesetting(0): #5: hsize: 1680 vsize 1050 refresh: 60 vid: 179 [ 44708.641] (II) modesetting(0): Supported detailed timing: [ 44708.641] (II) modesetting(0): clock: 138.5 MHz Image Size: 531 x 298 mm [ 44708.641] (II) modesetting(0): h_active: 1920 h_sync: 1968 h_sync_end 2000 h_blank_end 2080 h_border: 0 [ 44708.641] (II) modesetting(0): v_active: 1080 v_sync: 1083 v_sync_end 1088 
v_blanking: 1111 v_border: 0 [ 44708.641] (II) modesetting(0): Monitor name: H243H [ 44708.641] (II) modesetting(0): Ranges: V min: 56 V max: 76 Hz, H min: 31 H max: 83 kHz, PixClock max 185 MHz [ 44708.641] (II) modesetting(0): Serial No: LEW0C0044002 [ 44708.641] (II) modesetting(0): EDID (in hex): [ 44708.641] (II) modesetting(0): 00ffffffffffff000472740031f60094 [ 44708.641] (II) modesetting(0): 2813010368351d78ea6085a6564a9c25 [ 44708.641] (II) modesetting(0): 125054afcf008180714f9500950fa940 [ 44708.641] (II) modesetting(0): b300010101011a3680a070381f403020 [ 44708.641] (II) modesetting(0): 3500132a2100001a000000fc00483234 [ 44708.642] (II) modesetting(0): 33480a20202020202020000000fd0038 [ 44708.642] (II) modesetting(0): 4c1f5312000a202020202020000000ff [ 44708.642] (II) modesetting(0): 004c45573043303034343030320a003c [ 44708.642] (II) modesetting(0): Printing probed modes for output VGA-0 [ 44708.642] (II) modesetting(0): Modeline "1280x1024"x75.0 135.00 1280 1296 1440 1688 1024 1025 1028 1066 +hsync +vsync (80.0 kHz UeP) [ 44708.642] (II) modesetting(0): Modeline "1920x1080"x59.9 138.50 1920 1968 2000 2080 1080 1083 1088 1111 +hsync -vsync (66.6 kHz eP) [ 44708.642] (II) modesetting(0): Modeline "1600x1200"x60.0 162.00 1600 1664 1856 2160 1200 1201 1204 1250 +hsync +vsync (75.0 kHz e) [ 44708.642] (II) modesetting(0): Modeline "1680x1050"x60.0 146.25 1680 1784 1960 2240 1050 1053 1059 1089 -hsync +vsync (65.3 kHz e) [ 44708.642] (II) modesetting(0): Modeline "1280x1024"x60.0 108.00 1280 1328 1440 1688 1024 1025 1028 1066 +hsync +vsync (64.0 kHz e) [ 44708.642] (II) modesetting(0): Modeline "1440x900"x75.0 136.75 1440 1536 1688 1936 900 903 909 942 -hsync +vsync (70.6 kHz e) [ 44708.642] (II) modesetting(0): Modeline "1440x900"x59.9 106.50 1440 1520 1672 1904 900 903 909 934 -hsync +vsync (55.9 kHz e) [ 44708.642] (II) modesetting(0): Modeline "1152x864"x75.0 108.00 1152 1216 1344 1600 864 865 868 900 +hsync +vsync (67.5 kHz e) [ 44708.642] (II) modesetting(0): Modeline "1024x768"x75.1 78.80 1024 1040 1136 1312 768 769 772 800 +hsync +vsync (60.1 kHz e) [ 44708.642] (II) modesetting(0): Modeline "1024x768"x70.1 75.00 1024 1048 1184 1328 768 771 777 806 -hsync -vsync (56.5 kHz e) [ 44708.642] (II) modesetting(0): Modeline "1024x768"x60.0 65.00 1024 1048 1184 1344 768 771 777 806 -hsync -vsync (48.4 kHz e) [ 44708.642] (II) modesetting(0): Modeline "800x600"x72.2 50.00 800 856 976 1040 600 637 643 666 +hsync +vsync (48.1 kHz e) [ 44708.642] (II) modesetting(0): Modeline "800x600"x75.0 49.50 800 816 896 1056 600 601 604 625 +hsync +vsync (46.9 kHz e) [ 44708.642] (II) modesetting(0): Modeline "800x600"x60.3 40.00 800 840 968 1056 600 601 605 628 +hsync +vsync (37.9 kHz e) [ 44708.642] (II) modesetting(0): Modeline "800x600"x56.2 36.00 800 824 896 1024 600 601 603 625 +hsync +vsync (35.2 kHz e) [ 44708.642] (II) modesetting(0): Modeline "640x480"x75.0 31.50 640 656 720 840 480 481 484 500 -hsync -vsync (37.5 kHz e) [ 44708.642] (II) modesetting(0): Modeline "640x480"x72.8 31.50 640 664 704 832 480 489 491 520 -hsync -vsync (37.9 kHz e) [ 44708.642] (II) modesetting(0): Modeline "640x480"x60.0 25.20 640 656 752 800 480 490 492 525 -hsync -vsync (31.5 kHz e) [ 44708.642] (II) modesetting(0): Modeline "720x400"x70.1 28.32 720 738 846 900 400 412 414 449 -hsync +vsync (31.5 kHz e) [ 44708.645] (II) modesetting(0): EDID for output DVI-0 [ 44708.645] (II) modesetting(0): Output VGA-0 connected [ 44708.645] (II) modesetting(0): Output DVI-0 disconnected [ 44708.645] (II) modesetting(0): 
Using user preference for initial modes [ 44708.645] (II) modesetting(0): Output VGA-0 using initial mode 1280x1024 [ 44708.645] (II) modesetting(0): Using default gamma of (1.0, 1.0, 1.0) unless otherwise stated. [ 44708.645] (==) modesetting(0): DPI set to (96, 96) [ 44708.645] (II) Loading sub module "fb" [ 44708.645] (II) LoadModule: "fb" [ 44708.645] (II) Loading /usr/lib/xorg/modules/libfb.so [ 44708.645] (II) Module fb: vendor="X.Org Foundation" [ 44708.645] compiled for 1.15.1, module version = 1.0.0 [ 44708.645] ABI class: X.Org ANSI C Emulation, version 0.4 [ 44708.645] (II) Loading sub module "shadow" [ 44708.645] (II) LoadModule: "shadow" [ 44708.646] (II) Loading /usr/lib/xorg/modules/libshadow.so [ 44708.646] (II) Module shadow: vendor="X.Org Foundation" [ 44708.646] compiled for 1.15.1, module version = 1.1.0 [ 44708.646] ABI class: X.Org ANSI C Emulation, version 0.4 [ 44708.646] (**) modesetting(G0): Depth 16, (--) framebuffer bpp 16 [ 44708.646] (==) modesetting(G0): RGB weight 565 [ 44708.646] (==) modesetting(G0): Default visual is TrueColor [ 44708.646] (II) modesetting(G0): ShadowFB: preferred NO, enabled NO [ 44708.727] (II) modesetting(G0): Output DVI-1-0 using monitor section DisplayLinkMonitor [ 44708.808] (II) modesetting(G0): EDID for output DVI-1-0 [ 44708.808] (II) modesetting(G0): Manufacturer: WDE Model: 1702 Serial#: 0 [ 44708.808] (II) modesetting(G0): Year: 2005 Week: 14 [ 44708.808] (II) modesetting(G0): EDID Version: 1.3 [ 44708.808] (II) modesetting(G0): Analog Display Input, Input Voltage Level: 0.700/0.700 V [ 44708.808] (II) modesetting(G0): Sync: Separate [ 44708.808] (II) modesetting(G0): Max Image Size [cm]: horiz.: 34 vert.: 27 [ 44708.808] (II) modesetting(G0): Gamma: 2.20 [ 44708.808] (II) modesetting(G0): DPMS capabilities: StandBy Suspend Off; RGB/Color Display [ 44708.808] (II) modesetting(G0): Default color space is primary color space [ 44708.808] (II) modesetting(G0): First detailed timing is preferred mode [ 44708.808] (II) modesetting(G0): GTF timings supported [ 44708.808] (II) modesetting(G0): redX: 0.643 redY: 0.352 greenX: 0.283 greenY: 0.608 [ 44708.808] (II) modesetting(G0): blueX: 0.147 blueY: 0.102 whiteX: 0.313 whiteY: 0.329 [ 44708.808] (II) modesetting(G0): Supported established timings: [ 44708.808] (II) modesetting(G0): 720x400@70Hz [ 44708.808] (II) modesetting(G0): 640x480@60Hz [ 44708.808] (II) modesetting(G0): 640x480@67Hz [ 44708.808] (II) modesetting(G0): 640x480@72Hz [ 44708.808] (II) modesetting(G0): 640x480@75Hz [ 44708.808] (II) modesetting(G0): 800x600@56Hz [ 44708.808] (II) modesetting(G0): 800x600@60Hz [ 44708.808] (II) modesetting(G0): 800x600@72Hz [ 44708.808] (II) modesetting(G0): 800x600@75Hz [ 44708.808] (II) modesetting(G0): 832x624@75Hz [ 44708.808] (II) modesetting(G0): 1024x768@60Hz [ 44708.808] (II) modesetting(G0): 1024x768@70Hz [ 44708.808] (II) modesetting(G0): 1024x768@75Hz [ 44708.809] (II) modesetting(G0): 1280x1024@75Hz [ 44708.809] (II) modesetting(G0): Manufacturer's mask: 0 [ 44708.809] (II) modesetting(G0): Supported standard timings: [ 44708.809] (II) modesetting(G0): #0: hsize: 1280 vsize 1024 refresh: 60 vid: 32897 [ 44708.809] (II) modesetting(G0): #1: hsize: 1152 vsize 864 refresh: 75 vid: 20337 [ 44708.809] (II) modesetting(G0): Supported detailed timing: [ 44708.809] (II) modesetting(G0): clock: 108.0 MHz Image Size: 338 x 270 mm [ 44708.809] (II) modesetting(G0): h_active: 1280 h_sync: 1328 h_sync_end 1440 h_blank_end 1688 h_border: 0 [ 44708.809] (II) modesetting(G0): v_active: 
1024 v_sync: 1025 v_sync_end 1028 v_blanking: 1066 v_border: 0 [ 44708.809] (II) modesetting(G0): Ranges: V min: 50 V max: 75 Hz, H min: 30 H max: 82 kHz, PixClock max 145 MHz [ 44708.809] (II) modesetting(G0): Monitor name: WDE LCM-17v2 [ 44708.809] (II) modesetting(G0): Serial No: 0 [ 44708.809] (II) modesetting(G0): EDID (in hex): [ 44708.809] (II) modesetting(G0): 00ffffffffffff005c85021700000000 [ 44708.809] (II) modesetting(G0): 0e0f010368221b78ef8bc5a45a489b25 [ 44708.809] (II) modesetting(G0): 1a5054bfef008180714f010101010101 [ 44708.809] (II) modesetting(G0): 010101010101302a009851002a403070 [ 44708.809] (II) modesetting(G0): 1300520e1100001e000000fd00324b1e [ 44708.809] (II) modesetting(G0): 520e000a202020202020000000fc0057 [ 44708.809] (II) modesetting(G0): 4445204c434d2d313776320a000000ff [ 44708.809] (II) modesetting(G0): 00300a202020202020202020202000e7 [ 44708.809] (II) modesetting(G0): Printing probed modes for output DVI-1-0 [ 44708.809] (II) modesetting(G0): Modeline "1280x1024"x60.0 108.00 1280 1328 1440 1688 1024 1025 1028 1066 +hsync +vsync (64.0 kHz UeP) [ 44708.809] (II) modesetting(G0): Modeline "1280x1024"x75.0 135.00 1280 1296 1440 1688 1024 1025 1028 1066 +hsync +vsync (80.0 kHz e) [ 44708.809] (II) modesetting(G0): Modeline "1280x960"x60.0 108.00 1280 1376 1488 1800 960 961 964 1000 +hsync +vsync (60.0 kHz e) [ 44708.809] (II) modesetting(G0): Modeline "1280x800"x74.9 106.50 1280 1360 1488 1696 800 803 809 838 -hsync +vsync (62.8 kHz e) [ 44708.809] (II) modesetting(G0): Modeline "1280x800"x59.8 83.50 1280 1352 1480 1680 800 803 809 831 +hsync -vsync (49.7 kHz e) [ 44708.809] (II) modesetting(G0): Modeline "1152x864"x75.0 108.00 1152 1216 1344 1600 864 865 868 900 +hsync +vsync (67.5 kHz e) [ 44708.809] (II) modesetting(G0): Modeline "1280x768"x74.9 102.25 1280 1360 1488 1696 768 771 778 805 +hsync -vsync (60.3 kHz e) [ 44708.809] (II) modesetting(G0): Modeline "1280x768"x59.9 79.50 1280 1344 1472 1664 768 771 778 798 -hsync +vsync (47.8 kHz e) [ 44708.810] (II) modesetting(G0): Modeline "1024x768"x75.1 78.80 1024 1040 1136 1312 768 769 772 800 +hsync +vsync (60.1 kHz e) [ 44708.810] (II) modesetting(G0): Modeline "1024x768"x70.1 75.00 1024 1048 1184 1328 768 771 777 806 -hsync -vsync (56.5 kHz e) [ 44708.810] (II) modesetting(G0): Modeline "1024x768"x60.0 65.00 1024 1048 1184 1344 768 771 777 806 -hsync -vsync (48.4 kHz e) [ 44708.810] (II) modesetting(G0): Modeline "1024x576"x60.0 46.97 1024 1064 1168 1312 576 577 580 597 -hsync +vsync (35.8 kHz) [ 44708.810] (II) modesetting(G0): Modeline "832x624"x74.6 57.28 832 864 928 1152 624 625 628 667 -hsync -vsync (49.7 kHz e) [ 44708.810] (II) modesetting(G0): Modeline "800x600"x72.2 50.00 800 856 976 1040 600 637 643 666 +hsync +vsync (48.1 kHz e) [ 44708.810] (II) modesetting(G0): Modeline "800x600"x75.0 49.50 800 816 896 1056 600 601 604 625 +hsync +vsync (46.9 kHz e) [ 44708.810] (II) modesetting(G0): Modeline "800x600"x60.3 40.00 800 840 968 1056 600 601 605 628 +hsync +vsync (37.9 kHz e) [ 44708.810] (II) modesetting(G0): Modeline "800x600"x56.2 36.00 800 824 896 1024 600 601 603 625 +hsync +vsync (35.2 kHz e) [ 44708.810] (II) modesetting(G0): Modeline "848x480"x60.0 33.75 848 864 976 1088 480 486 494 517 +hsync +vsync (31.0 kHz e) [ 44708.810] (II) modesetting(G0): Modeline "640x480"x75.0 31.50 640 656 720 840 480 481 484 500 -hsync -vsync (37.5 kHz e) [ 44708.810] (II) modesetting(G0): Modeline "640x480"x72.8 31.50 640 664 704 832 480 489 491 520 -hsync -vsync (37.9 kHz e) [ 44708.810] (II) modesetting(G0): 
Modeline "640x480"x66.7 30.24 640 704 768 864 480 483 486 525 -hsync -vsync (35.0 kHz e) [ 44708.810] (II) modesetting(G0): Modeline "640x480"x60.0 25.20 640 656 752 800 480 490 492 525 -hsync -vsync (31.5 kHz e) [ 44708.810] (II) modesetting(G0): Modeline "720x400"x70.1 28.32 720 738 846 900 400 412 414 449 -hsync +vsync (31.5 kHz e) [ 44708.810] (II) modesetting(G0): Using default gamma of (1.0, 1.0, 1.0) unless otherwise stated. [ 44708.810] (==) modesetting(G0): DPI set to (96, 96) [ 44708.810] (II) Loading sub module "fb" [ 44708.810] (II) LoadModule: "fb" [ 44708.810] (II) Loading /usr/lib/xorg/modules/libfb.so [ 44708.810] (II) Module fb: vendor="X.Org Foundation" [ 44708.810] compiled for 1.15.1, module version = 1.0.0 [ 44708.811] ABI class: X.Org ANSI C Emulation, version 0.4 [ 44708.811] (II) UnloadModule: "radeon" [ 44708.811] (II) Unloading radeon [ 44708.811] (II) UnloadModule: "fbdev" [ 44708.811] (II) Unloading fbdev [ 44708.811] (II) UnloadSubModule: "fbdevhw" [ 44708.811] (II) Unloading fbdevhw [ 44708.811] (II) UnloadModule: "vesa" [ 44708.811] (II) Unloading vesa [ 44708.811] (==) modesetting(G0): Backing store enabled [ 44708.811] (==) modesetting(G0): Silken mouse enabled [ 44708.812] (II) modesetting(G0): RandR 1.2 enabled, ignore the following RandR disabled message. [ 44708.812] (==) modesetting(G0): DPMS enabled [ 44708.812] (WW) modesetting(G0): Option "fbdev" is not used [ 44708.812] (==) modesetting(0): Backing store enabled [ 44708.812] (==) modesetting(0): Silken mouse enabled [ 44708.812] (II) modesetting(0): RandR 1.2 enabled, ignore the following RandR disabled message. [ 44708.812] (==) modesetting(0): DPMS enabled [ 44708.812] (WW) modesetting(0): Option "fbdev" is not used [ 44708.856] (--) RandR disabled [ 44708.867] (II) SELinux: Disabled on system [ 44708.868] (II) AIGLX: Screen 0 is not DRI2 capable [ 44708.868] (EE) AIGLX: reverting to software rendering [ 44708.878] (II) AIGLX: Loaded and initialized swrast [ 44708.878] (II) GLX: Initialized DRISWRAST GL provider for screen 0 [ 44708.879] (II) modesetting(G0): Damage tracking initialized [ 44708.879] (II) modesetting(0): Damage tracking initialized [ 44708.879] (II) modesetting(0): Setting screen physical size to 338 x 270 [ 44708.900] (II) XKB: generating xkmfile /tmp/server-B20D7FC79C7F597315E3E501AEF10E0D866E8E92.xkm [ 44708.918] (II) config/udev: Adding input device Power Button (/dev/input/event1) [ 44708.918] (**) Power Button: Applying InputClass "evdev keyboard catchall" [ 44708.918] (II) LoadModule: "evdev" [ 44708.918] (II) Loading /usr/lib/xorg/modules/input/evdev_drv.so [ 44708.918] (II) Module evdev: vendor="X.Org Foundation" [ 44708.918] compiled for 1.15.0, module version = 2.8.2 [ 44708.918] Module class: X.Org XInput Driver [ 44708.918] ABI class: X.Org XInput driver, version 20.0 [ 44708.918] (II) Using input driver 'evdev' for 'Power Button' [ 44708.918] (**) Power Button: always reports core events [ 44708.918] (**) evdev: Power Button: Device: "/dev/input/event1" [ 44708.918] (--) evdev: Power Button: Vendor 0 Product 0x1 [ 44708.918] (--) evdev: Power Button: Found keys [ 44708.918] (II) evdev: Power Button: Configuring as keyboard [ 44708.918] (**) Option "config_info" "udev:/sys/devices/LNXSYSTM:00/LNXPWRBN:00/input/input1/event1" [ 44708.918] (II) XINPUT: Adding extended input device "Power Button" (type: KEYBOARD, id 6) [ 44708.918] (**) Option "xkb_rules" "evdev" [ 44708.918] (**) Option "xkb_model" "pc105" [ 44708.918] (**) Option "xkb_layout" "us" [ 44708.919] (II) 
config/udev: Adding input device Power Button (/dev/input/event0) [ 44708.919] (**) Power Button: Applying InputClass "evdev keyboard catchall" [ 44708.919] (II) Using input driver 'evdev' for 'Power Button' [ 44708.919] (**) Power Button: always reports core events [ 44708.919] (**) evdev: Power Button: Device: "/dev/input/event0" [ 44708.919] (--) evdev: Power Button: Vendor 0 Product 0x1 [ 44708.919] (--) evdev: Power Button: Found keys [ 44708.919] (II) evdev: Power Button: Configuring as keyboard [ 44708.919] (**) Option "config_info" "udev:/sys/devices/LNXSYSTM:00/device:00/PNP0C0C:00/input/input0/event0"
    Is there anything I can do to fix this problem?
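    One direction commonly suggested for this class of problem (this is not part of the question and is not a confirmed fix): udlfb is a kernel framebuffer driver, not an Xorg module, which is why the log shows "Failed to load module udlfb" while fbdev_drv.so loads fine. Configurations of this era usually point the fbdev X driver at the framebuffer the DisplayLink adapter creates. A minimal sketch of such a Device section, assuming the /dev/fb1 device mentioned in the question, might look like:

    Section "Device"
            Identifier  "DisplayLinkDevice"
            Driver      "fbdev"
            Option      "fbdev" "/dev/fb1"
    EndSection

    Whether this resolves this particular setup depends on the rest of xorg.conf; it is shown only to illustrate the kind of configuration the log is complaining about.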

    Read the article

  • SQL Server Split() Function

    - by HighAltitudeCoder
    Ever wanted a dbo.Split() function, but not had the time to debug it completely?  Let me guess - you are probably working on a stored procedure with 50 or more parameters; two or three of them are parameters of differing types, while the other 47 or so are all of the same type (id1, id2, id3, id4, id5...).  Worse, you've found several other similar stored procedures with the ONLY DIFFERENCE being the number of like parameters tacked onto the end of the parameter list. If this is the situation you find yourself in now, you may be wondering, "Why am I working with three different copies of what is basically the same stored procedure, and why am I having to maintain changes in three different places?  Can't I have one stored procedure that accomplishes the job of all three?" My answer to you: YES!  Here is the Split() function I've created.

    /******************************************************************************
                                          Split.sql
    ******************************************************************************/
    /******************************************************************************
      Split a delimited string into sub-components and return them as a table.

      Parameter 1: Input string which is to be split into parts.
      Parameter 2: Delimiter which determines the split points in input string.
      Works with space or spaces as delimiter. Split() is apostrophe-safe.

      SYNTAX:
      SELECT * FROM Split('Dvorak,Debussy,Chopin,Holst', ',')
      SELECT * FROM Split('Denver|Seattle|San Diego|New York', '|')
      SELECT * FROM Split('Denver is the super-awesomest city of them all.', ' ')
    ******************************************************************************/
    USE AdventureWorks
    GO

    IF EXISTS
          (SELECT *
          FROM sysobjects
          WHERE xtype = 'TF'
          AND name = 'Split'
          )
    BEGIN
          DROP FUNCTION Split
    END
    GO

    CREATE FUNCTION Split
    (
          @InputString      VARCHAR(8000),
          @Delimiter        VARCHAR(50)
    )
    RETURNS @Items TABLE
    (
          Item              VARCHAR(8000)
    )
    AS
    BEGIN
          IF @Delimiter = ' '
          BEGIN
                SET @Delimiter = ','
                SET @InputString = REPLACE(@InputString, ' ', @Delimiter)
          END

          IF (@Delimiter IS NULL OR @Delimiter = '')
                SET @Delimiter = ','

          --INSERT INTO @Items VALUES (@Delimiter)   -- Diagnostic
          --INSERT INTO @Items VALUES (@InputString) -- Diagnostic

          DECLARE @Item        VARCHAR(8000)
          DECLARE @ItemList    VARCHAR(8000)
          DECLARE @DelimIndex  INT

          SET @ItemList = @InputString
          SET @DelimIndex = CHARINDEX(@Delimiter, @ItemList, 0)

          WHILE (@DelimIndex != 0)
          BEGIN
                SET @Item = SUBSTRING(@ItemList, 0, @DelimIndex)
                INSERT INTO @Items VALUES (@Item)

                -- Set @ItemList = @ItemList minus the item just extracted
                SET @ItemList = SUBSTRING(@ItemList, @DelimIndex+1, LEN(@ItemList)-@DelimIndex)
                SET @DelimIndex = CHARINDEX(@Delimiter, @ItemList, 0)
          END -- End WHILE

          IF @Item IS NOT NULL -- At least one delimiter was encountered in @InputString
          BEGIN
                SET @Item = @ItemList
                INSERT INTO @Items VALUES (@Item)
          END

          -- No delimiters were encountered in @InputString, so just return @InputString
          ELSE INSERT INTO @Items VALUES (@InputString)

          RETURN

    END -- End Function
    GO

    ---- Set Permissions
    --GRANT SELECT ON Split TO UserRole1
    --GRANT SELECT ON Split TO UserRole2
    --GO
The syntax is basically as follows:

    SELECT <fields>
    FROM Table1
    JOIN Table2 ON ...
    JOIN Table3 ON ...
    WHERE LOGICAL CONDITION A
    AND LOGICAL CONDITION B
    AND LOGICAL CONDITION C
    AND Table2.Id IN (SELECT Item FROM Split(@IdList, ','))

@IdList is a parameter passed into the stored procedure, and the comma (',') is the delimiter you have chosen to split the parameter list on. You can also use it like this:

    SELECT <fields>
    FROM Table1
    JOIN Table2 ON ...
    JOIN Table3 ON ...
    WHERE LOGICAL CONDITION A
    AND LOGICAL CONDITION B
    AND LOGICAL CONDITION C
    HAVING COUNT(SELECT * FROM Split(@IdList, ','))

Similarly, it can be used with other aggregate functions at run time:

    SELECT MIN(SELECT * FROM Split(@IdList, ',')), <fields>
    FROM Table1
    JOIN Table2 ON ...
    JOIN Table3 ON ...
    WHERE LOGICAL CONDITION A
    AND LOGICAL CONDITION B
    AND LOGICAL CONDITION C
    GROUP BY <fields>

Now that I've (hopefully effectively) explained the benefits of using this function and implementing it in one or more of your database objects, let me warn you of a caveat that you are likely to encounter.  You may have a team member who waits until the right moment to ask you a pointed question: "Doesn't this function just do the same thing as using the IN function?  Why didn't you just use that instead?  In other words, why bother with this function?" What's happening is that one or more team members have failed to understand the reason for implementing this kind of function in the first place.  (Note: this is THE MOST IMPORTANT ASPECT OF THIS POST). Allow me to outline a few pros to implementing this function, so you may effectively parry this question.  Touche.

1) Code consolidation.  You don't have to maintain what is basically the same code and logic, but with varying numbers of the same parameter, in several SQL objects.  I'm not going to go into the cons related to using this function, because the aforementioned team member is probably more than adept at pointing these out.  Remember, the real positive contribution is that you are decreasing the likelihood that your team fails to update all (x) duplicate copies of what are basically the same stored procedure, and so on...  This is the classic downside to duplicate code.  It is a virus, and you should kill it. You might be better off rejecting your team member's question, and responding with your own: "Would you rather maintain the same logic in multiple different stored procedures, and hope that the team doesn't forget to always update all of them at the same time?".  In his head, he might be thinking "yes, I would like to maintain several different copies of the same stored procedure", although you probably will not get such a direct response.

2) Added flexibility - you can use the Split function elsewhere, and for splitting your data in different ways.  Plus, you can use any kind of delimiter you wish.  How can you know today the ways in which you might want to examine your data tomorrow?  Segue to my next point.

3) Because the function takes a delimiter parameter, you can split the data in any number of ways.  This greatly increases the utility of such a function and enables your team to work with the data in a variety of different ways in the future.  You can split on a single char, symbol, word, or group of words.  You can split on spaces.  (The list goes on... test it out).

Finally, you can dynamically define the behavior of a stored procedure (or other SQL object) at run time, through the use of this function.  Rather than have several objects that accomplish almost the same thing, why not have only one instead?
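    To make the pattern concrete, here is a minimal sketch of a stored procedure built around the function above. The procedure and table names (dbo.GetOrdersForCustomers, dbo.Orders) are hypothetical and only illustrate how a single delimited parameter can replace a long list of id1, id2, id3... parameters:

    -- Hypothetical example: the table and procedure names are illustrative only.
    CREATE PROCEDURE dbo.GetOrdersForCustomers
          @CustomerIdList VARCHAR(8000)      -- e.g. '101,205,311'
    AS
    BEGIN
          SELECT o.OrderId,
                 o.CustomerId,
                 o.OrderDate
          FROM dbo.Orders o
          -- Split() returns a single-column table (Item), so it slots straight into IN (...)
          WHERE o.CustomerId IN (SELECT Item FROM Split(@CustomerIdList, ','))
    END
    GO

    Calling it is then a matter of passing one comma-delimited string, for example EXEC dbo.GetOrdersForCustomers '101,205,311', no matter how many ids the caller needs.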

    Read the article

  • What's up with OCFS2?

    - by wcoekaer
On Linux there are many filesystem choices, and even from Oracle we provide a number of filesystems, all with their own advantages and use cases. Customers often confuse ACFS with OCFS or OCFS2, which then causes assumptions to be made such as one replacing the other etc... I thought it would be good to write up a summary of how OCFS2 got to where it is, what we're up to still, how it is different from other options, and how this really is a cool native Linux cluster filesystem that we worked on for many years and is still widely used.

Work on a cluster filesystem at Oracle started many years ago, in the early 2000's, when the Oracle Database Cluster development team wrote a cluster filesystem for Windows that was primarily focused on providing an alternative to raw disk devices and helping customers with the deployment of Oracle Real Application Clusters (RAC). Oracle RAC is a cluster technology that lets us make a cluster of Oracle Database servers look like one big database. The RDBMS runs on many nodes and they all work on the same data. It's a Shared Disk database design. There are many advantages to doing this but I will not go into detail as that is not the purpose of my write up. Suffice it to say that Oracle RAC expects all the database data to be visible in a consistent, coherent way, across all the nodes in the cluster. To do that, there were/are a few options:
1) use raw disk devices that are shared, through SCSI, FC, or iSCSI
2) use a network filesystem (NFS)
3) use a cluster filesystem (CFS), which basically gives you a filesystem that's coherent across all nodes using shared disks. It is sort of (but not quite) combining options 1 and 2, except that you don't do network access to the files; the files are effectively locally visible as if it was a local filesystem.

So OCFS (Oracle Cluster FileSystem) on Windows was born. Since Linux was becoming a very important and popular platform, we decided that we would also make this available on Linux and thus the porting of OCFS/Windows started. The first version of OCFS was really primarily focused on replacing the use of raw devices with a simple filesystem that lets you create files and provides direct IO to these files to get basically native raw disk performance. The filesystem was not designed to be fully POSIX compliant and it did not have anywhere near good/decent performance for regular file create/delete/access operations. Cache coherency was easy since it was basically always direct IO down to the disk device, and this ensured that any time one issued a write() command it would go directly down to the disk, and not return until the write() was completed. Same for read(): any sort of read from a datafile would be a read() operation that went all the way to disk and returned. We did not cache any data when it came down to Oracle data files.

So while OCFS worked well for that, since it did not have much of a normal filesystem feel, it was not something that could be submitted to the kernel mailing list for inclusion into Linux as another native Linux filesystem (setting aside the Windows porting code...). It did its job well: it was very easy to configure, node membership was simple, locking was disk based (so very slow, but it existed), and you could create regular files and do regular filesystem operations to a certain extent, but anything that was not database data file related was just not very useful in general. Logfiles, OK; standard filesystem use, not so much. Up to this point, all the work was done, at Oracle, by Oracle developers.
Once OCFS (1) was out for a while and there was a lot of use in the database RAC world, many customers wanted to do more and were asking for features that you'd expect in a normal native filesystem, a real "general purpose cluster filesystem". So the team sat down and basically started from scratch to implement what's now known as OCFS2 (Oracle Cluster FileSystem release 2). Some basic criteria were:
- Design it with a real Distributed Lock Manager and use the network for lock negotiation instead of the disk
- Make it a Linux native filesystem instead of a native shim layer and a portable core
- Support standard POSIX compliance and be fully cache coherent with all operations
- Support all the filesystem features Linux offers (ACLs, extended attributes, quotas, sparse files, ...)
- Be modern: support large files, 32/64-bit, journaling, data ordered journaling, endian neutral, so we can mount on both endians / across architectures, ...

Needless to say, this was a huge development effort that took many years to complete. A few big milestones happened along the way... OCFS2 was developed in the open; we did not have a private tree that we worked on without external code review from the Linux filesystem maintainers. Great folks like Christopher Hellwig reviewed the code regularly to make sure we were not doing anything out of line, and we submitted the code for review on lkml a number of times to see if we were getting close to having it included into the mainline kernel. Using this development model is standard practice for anyone that wants to write code that goes into the kernel and have any chance of doing so without a complete rewrite or, shall I say, a flamefest when submitted. It saved us a tremendous amount of time by not having to re-fit code for it to be in a Linus-acceptable state. Some other filesystems that were trying to get into the kernel and didn't follow an open development model had a much harder time and faced much harsher criticism.

In March 2006, when Linus released 2.6.16, OCFS2 officially became part of the mainline kernel (it was accepted a little earlier, in the release candidates, but with 2.6.16 OCFS2 officially became part of the mainline Linux kernel tree as one of the many filesystems). It was the first cluster filesystem to make it into the kernel tree. Our hope was that it would then end up getting picked up by the distribution vendors to make it easy for everyone to have access to a CFS. Today the source code for OCFS2 is approximately 85000 lines of code.

We made OCFS2 production-ready, with full support for customers that ran the Oracle database on Linux, no extra or separate support contract needed. OCFS2 1.0.0 started being built for RHEL4 for x86, x86-64, ppc, s390x and ia64; for RHEL5, builds started with OCFS2 1.2. SuSE was very interested in high availability and clustering and decided to build and include OCFS2 with SLES9 for their customers, and was, next to Oracle, the main contributor to the filesystem for both new features and bug fixes. Source code was always available even prior to inclusion into mainline, and as of 2.6.16 the source code was just part of a Linux kernel download from kernel.org, which it still is today. So the latest OCFS2 code is always the upstream mainline Linux kernel. OCFS2 is the cluster filesystem used in Oracle VM 2 and Oracle VM 3 as the virtual disk repository filesystem.
Since the filesystem is in the Linux kernel it's released under the GPL v2. The release model has always been that new feature development happened in the mainline kernel and we then built consistent, well-tested snapshots that had versions: 1.2, 1.4, 1.6, 1.8. But these releases were effectively just snapshots in time that were tested for stability and release quality.

OCFS2 is very easy to use: there's a simple text file that contains the node information (hostname, node number, cluster name) and a file that contains the cluster heartbeat timeouts (a minimal sketch appears below). It is very small, and very efficient. As Sunil Mushran wrote in the manual: OCFS2 is an efficient, easily configured, quickly installed, fully integrated and compatible, feature-rich, architecture and endian neutral, cache coherent, ordered data journaling, POSIX-compliant, shared disk cluster file system.

Here is a list of some of the important features that are included:
- Variable Block and Cluster sizes: supports block sizes ranging from 512 bytes to 4 KB and cluster sizes ranging from 4 KB to 1 MB (increments in powers of 2).
- Extent-based Allocations: tracks the allocated space in ranges of clusters, making it especially efficient for storing very large files.
- Optimized Allocations: supports sparse files, inline-data, unwritten extents, hole punching and allocation reservation for higher performance and efficient storage.
- File Cloning/snapshots: REFLINK is a feature which introduces copy-on-write clones of files in a cluster coherent way.
- Indexed Directories: allows efficient access to millions of objects in a directory.
- Metadata Checksums: detects silent corruption in inodes and directories.
- Extended Attributes: supports attaching an unlimited number of name:value pairs to filesystem objects like regular files, directories, symbolic links, etc.
- Advanced Security: supports POSIX ACLs and SELinux in addition to the traditional file access permission model.
- Quotas: supports user and group quotas.
- Journaling: supports both ordered and writeback data journaling modes to provide filesystem consistency in the event of power failure or system crash.
- Endian and Architecture neutral: supports a cluster of nodes with mixed architectures; allows concurrent mounts on nodes running 32-bit and 64-bit, little-endian (x86, x86_64, ia64) and big-endian (ppc64) architectures.
- In-built Cluster-stack with DLM: includes an easy to configure, in-kernel cluster-stack with a distributed lock manager.
- Buffered, Direct, Asynchronous, Splice and Memory Mapped I/Os: supports all modes of I/O for maximum flexibility and performance.
- Comprehensive Tools Support: provides a familiar EXT3-style tool-set that uses similar parameters for ease-of-use.

The filesystem was distributed for Linux distributions in separate RPM form and this had to be built for every single kernel errata release or every updated kernel provided by the vendor. We provided builds from Oracle for Oracle Linux and all kernels released by Oracle, and for Red Hat Enterprise Linux. SuSE provided the modules directly for every kernel they shipped. With the introduction of the Unbreakable Enterprise Kernel for Oracle Linux and our interest in reducing the overhead of building filesystem modules for every minor release, we decided to make OCFS2 available as part of UEK. There was no more need for separate kernel modules; everything was built-in and a kernel upgrade automatically updated the filesystem, as it should.
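To make the "very easy to use" claim concrete, here is a minimal sketch of the two configuration pieces mentioned above. The file paths and every value shown are illustrative assumptions, not taken from the post; check the OCFS2 documentation for your distribution before copying anything:

    # /etc/ocfs2/cluster.conf - node membership for the o2cb cluster stack (illustrative values)
    cluster:
            node_count = 2
            name = democluster

    node:
            ip_port = 7777
            ip_address = 192.168.1.101
            number = 0
            name = node1
            cluster = democluster

    node:
            ip_port = 7777
            ip_address = 192.168.1.102
            number = 1
            name = node2
            cluster = democluster

    # Heartbeat/timeout settings typically live in the o2cb configuration,
    # e.g. /etc/sysconfig/o2cb (O2CB_HEARTBEAT_THRESHOLD and friends),
    # usually populated by 'service o2cb configure'.

With the cluster stack online, formatting and mounting follow the familiar EXT3-style tooling mentioned in the feature list, along the lines of mkfs.ocfs2 -L datavol -N 2 /dev/sdb1 on one node and mount -t ocfs2 /dev/sdb1 /data on each node.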
UEK meant we no longer had to backport new upstream filesystem code into an older kernel version; backporting features into older versions introduces risk and requires extra testing because the code is basically partially rewritten. The UEK model works really well for continuing to provide OCFS2 without that extra overhead. Because the RHEL kernel did not contain OCFS2 as a kernel module (it is in the source tree but it is not built by the vendor in kernel module form), we stopped adding the extra packages to Oracle Linux and its RHEL-compatible kernel, and for RHEL. Oracle Linux customers/users obviously get OCFS2 included as part of the Unbreakable Enterprise Kernel, SuSE customers get it from SuSE, distributed with SLES, and Red Hat can decide to distribute OCFS2 to their customers if they choose to, as it's just a matter of compiling the module and making it available.

OCFS2 today, in the mainline kernel, is pretty much feature complete in terms of integration with every filesystem feature Linux offers, and it is still actively maintained, with Joel Becker being the primary maintainer. Since we use OCFS2 as part of Oracle VM, we continue to look at interesting new functionality to add (REFLINK was a good example), and as such we continue to enhance the filesystem where it makes sense. Bugfixes and any sort of code that goes into the mainline Linux kernel and affects filesystems automatically also modify OCFS2, so it is in-kernel and actively maintained, but not a lot of new development is happening at this time. We continue to fully support OCFS2 as part of Oracle Linux and the Unbreakable Enterprise Kernel, and other vendors make their own decisions on support, as it's really a Linux cluster filesystem now more than something that we provide to customers. It really is just part of Linux, like EXT3 or BTRFS etc.; the OS distribution vendors decide.

Do not confuse OCFS2 with ACFS (ASM Cluster Filesystem), also known as Oracle Cloud Filesystem. ACFS is a filesystem that's provided by Oracle on various OS platforms and really integrates into Oracle ASM (Automatic Storage Management). It's a very powerful cluster filesystem but it's not distributed as part of the operating system; it's distributed with the Oracle Database product and installs with and lives inside Oracle ASM. ACFS obviously is fully supported on Linux (Oracle Linux, Red Hat Enterprise Linux), but OCFS2, independently, as a native Linux filesystem, is also supported and continues to be. ACFS is very much tied into the Oracle RDBMS; OCFS2 is just a standard native Linux filesystem with no ties into Oracle products. Customers running the Oracle database and ASM really should consider using ACFS as it also provides storage/clustered volume management. Customers wanting to use a simple, easy to use, generic Linux cluster filesystem should consider using OCFS2.

To learn more about OCFS2 in detail, you can find good documentation on http://oss.oracle.com/projects/ocfs2 in the Documentation area, or get the latest mainline kernel from http://kernel.org and read the source.

One final, unrelated note - since I am not always able to publicly answer or respond to comments, I do not want to selectively publish comments from readers. Sometimes I forget to publish comments, sometimes I publish them, and sometimes I would publish them but if for some reason I cannot publicly comment on them, it becomes a very one-sided stream. So for now I am not going to publish comments from anyone, to be fair to all sides.
You are always welcome to email me and I will do my best to respond to technical questions, questions about strategy or direction are sometimes not possible to answer for obvious reasons.

    Read the article

  • The case of the phantom ADF developer (and other yarns)

    - by Chris Muir
    A few years of ADF experience means I see common mistakes made by different developers, some I regularly make myself.  This post is designed to help beginners to Oracle JDeveloper Application Development Framework (ADF) avoid a common ADF pitfall, the case of the phantom ADF developer [add Scooby-Doo music here]. ADF Business Components - triggers, default table values and instead of views. Oracle's JDeveloper tutorials help with the A-B-Cs of ADF development, typically built on the nice 'n safe demo schema provided with the Oracle database, such as the HR demo schema. However it's not too long until ADF beginners, having built up some confidence from learning with the tutorials and vanilla demo schemas, start building ADF Business Components based upon their own existing database schema objects.  This is where unexpected problems can sneak in.
The crime: Developers may encounter a surprising error at runtime when editing a record they just created or updated and committed to the database, based on their own existing tables, namely the error: JBO-25014: Another user has changed the row with primary key oracle.jbo.Key[x] ...where x is the primary key value of the row at hand.  In a production environment with multiple users this error may be legit - one of the other users has updated the row since you queried it.  Yet in a development environment this error is just plain confusing.  If developers are isolated in their own database, creating and editing records they know other users can't possibly be working with, or all the other developers have gone home for the day, how is this error possible? There are no other users?  It must be the phantom ADF developer! [insert dramatic music here] The following picture is what you'll see in the Business Component Browser, and you'll receive a similar error message via an ADF Faces page:
A false conclusion: What can possibly cause this issue if it isn't our phantom ADF developer?  Doesn't ADF BC implement record locking, locking database records when the row is modified in the ADF middle-tier by a user?  How can our phantom ADF developer even take out a lock if this is the case?  Maybe ADF has a bug, maybe ADF isn't implementing record locking at all?  Shouldn't we see the error "JBO-26030: Failed to lock the record, another user holds the lock" as we attempt to modify the record? Why do we see JBO-25014? Let's verify that ADF is in fact issuing the correct SQL LOCK-FOR-UPDATE statement to the database. First we need to verify ADF's locking strategy.  It is determined by the Application Module's jbo.locking.mode property.  The default (as of JDev 11.1.1.4.0, if memory serves me correctly) and recommended value is optimistic, and the other valid value is pessimistic. Next we need a mechanism to check that ADF is issuing the LOCK statements to the database.  We could ask DBAs to monitor locks with OEM, but optimally we'd rather not involve overworked DBAs in this process, so instead we can use the ADF runtime setting -Djbo.debugoutput=console.  At runtime this option turns on instrumentation within the ADF BC layer, which, among a lot of extra detail displayed in the log window, will show the actual SQL statement issued to the database, including the LOCK statement we're looking to confirm.
Setting our locking mode to pessimistic, opening the Business Components Browser or a JSF page allowing us to edit a record, say the CHARGEABLE field within a BOOKINGS record where BOOKING_NO = 1206, upon editing the record we see, among others, the following log entries:
[421] Built select: 'SELECT BOOKING_NO, EVENT_NO, RESOURCE_CODE, CHARGEABLE, MADE_BY, QUANTITY, COST, STATUS, COMMENTS FROM BOOKINGS Bookings'
[422] Executing LOCK...SELECT BOOKING_NO, EVENT_NO, RESOURCE_CODE, CHARGEABLE, MADE_BY, QUANTITY, COST, STATUS, COMMENTS FROM BOOKINGS Bookings WHERE BOOKING_NO=:1 FOR UPDATE NOWAIT
[423] Where binding param 1: 1206
As can be seen on line 422, a LOCK-FOR-UPDATE is indeed issued to the database.  Later when we commit the record we see:
[441] OracleSQLBuilder: SAVEPOINT 'BO_SP'
[442] OracleSQLBuilder Executing, Lock 1 DML on: BOOKINGS (Update)
[443] UPDATE buf Bookings>#u SQLStmtBufLen: 210, actual=62
[444] UPDATE BOOKINGS Bookings SET CHARGEABLE=:1 WHERE BOOKING_NO=:2
[445] Update binding param 1: N
[446] Where binding param 2: 1206
[447] BookingsView1 notify COMMIT ...
[448] _LOCAL_VIEW_USAGE_model_Bookings_ResourceTypesView1 notify COMMIT ...
[449] EntityCache close prepared statement
....and as a result the changes are saved to the database, and the lock is released. Let's see what happens when we use the optimistic locking mode, this time to change the same BOOKINGS record CHARGEABLE column again.  As soon as we edit the record we see little activity in the logs, nothing to indicate any SQL statement, let alone a LOCK, has been taken out on the row. However when we save our records by issuing a commit, the following is recorded in the logs:
[509] OracleSQLBuilder: SAVEPOINT 'BO_SP'
[510] OracleSQLBuilder Executing doEntitySelect on: BOOKINGS (true)
[511] Built select: 'SELECT BOOKING_NO, EVENT_NO, RESOURCE_CODE, CHARGEABLE, MADE_BY, QUANTITY, COST, STATUS, COMMENTS FROM BOOKINGS Bookings'
[512] Executing LOCK...SELECT BOOKING_NO, EVENT_NO, RESOURCE_CODE, CHARGEABLE, MADE_BY, QUANTITY, COST, STATUS, COMMENTS FROM BOOKINGS Bookings WHERE BOOKING_NO=:1 FOR UPDATE NOWAIT
[513] Where binding param 1: 1205
[514] OracleSQLBuilder Executing, Lock 2 DML on: BOOKINGS (Update)
[515] UPDATE buf Bookings>#u SQLStmtBufLen: 210, actual=62
[516] UPDATE BOOKINGS Bookings SET CHARGEABLE=:1 WHERE BOOKING_NO=:2
[517] Update binding param 1: Y
[518] Where binding param 2: 1205
[519] BookingsView1 notify COMMIT ...
[520] _LOCAL_VIEW_USAGE_model_Bookings_ResourceTypesView1 notify COMMIT ...
[521] EntityCache close prepared statement
Again, even though we're seeing the mid-tier delay the LOCK statement until commit time, it is in fact occurring on line 512, and released as part of the commit issued on line 519.  Therefore with either optimistic or pessimistic locking a lock is indeed issued. Our conclusion at this point must be that, unless there's the unlikely cause that the LOCK statement is never really hitting the database, or the even less likely cause that the database has a bug, ADF does in fact take out a lock on the record before allowing the current user to update it.  So there's no way our phantom ADF developer could even modify the record if he tried without at least someone receiving a lock error. Hmm, we can only conclude the locking mode is a red herring and not the true cause of our problem.  Who is the phantom? At this point we'll need to conclude that the error message "JBO-25014: Another user has changed" is somehow legit, even though we don't understand yet what's causing it.
This leads onto two further questions, how does ADF know another user has changed the row, and what's been changed anyway? To answer the first question, how does ADF know another user has changed the row, the Fusion Guide's section 4.10.11 How to Protect Against Losing Simultaneous Updated Data , that details the Entity Object Change-Indicator property, gives us the answer: At runtime the framework provides automatic "lost update" detection for entity objects to ensure that a user cannot unknowingly modify data that another user has updated and committed in the meantime. Typically, this check is performed by comparing the original values of each persistent entity attribute against the corresponding current column values in the database at the time the underlying row is locked. Before updating a row, the entity object verifies that the row to be updated is still consistent with the current state of the database.  The guide further suggests to make this solution more efficient: You can make the lost update detection more efficient by identifying any attributes of your entity whose values you know will be updated whenever the entity is modified. Typical candidates include a version number column or an updated date column in the row.....To detect whether the row has been modified since the user queried it in the most efficient way, select the Change Indicator option to compare only the change-indicator attribute values. We now know that ADF BC doesn't use the locking mechanism at all to protect the current user against updates, but rather it keeps a copy of the original record fetched, separate to the user changed version of the record, and it compares the original record against the one in the database when the lock is taken out.  If values don't match, be it the default compare-all-columns behaviour, or the more efficient Change Indicator mechanism, ADF BC will throw the JBO-25014 error. This leaves one last question.  Now we know the mechanism under which ADF identifies a changed row, what we don't know is what's changed and who changed it? The real culprit What's changed?  We know the record in the mid-tier has been changed by the user, however ADF doesn't use the changed record in the mid-tier to compare to the database record, but rather a copy of the original record before it was changed.  This leaves us to conclude the database record has changed, but how and by who? There are three potential causes: Database triggers The database trigger among other uses, can be configured to fire PLSQL code on a database table insert, update or delete.  In particular in an insert or update the trigger can override the value assigned to a particular column.  The trigger execution is actioned by the database on behalf of the user initiating the insert or update action. Why this causes the issue specific to our ADF use, is when we insert or update a record in the database via ADF, ADF keeps a copy of the record written to the database.  However the cached record is instantly out of date as the database triggers have modified the record that was actually written to the database.  Thus when we update the record we just inserted or updated for a second time to the database, ADF compares its original copy of the record to that in the database, and it detects the record has been changed – giving us JBO-25014. This is probably the most common cause of this problem. Default values A second reason this issue can occur is another database feature, default column values.  
When creating a database table the schema designer can define default values for specific columns.  For example a CREATED_DATE column could be set to SYSDATE, or a flag column to Y or N.  Default values are only used by the database when a user inserts a new record and the specific column is assigned NULL.  The database in this case will overwrite the column with the default value. As per the database trigger section, it then becomes apparent why ADF chokes on this feature, though it can only specifically occur in an insert-commit-update-commit scenario, not the update-commit-update-commit scenario.
Instead of trigger views: I must admit I haven't double checked this scenario but it seems plausible, that of the Oracle database's instead of trigger view (sometimes referred to as instead of views).  A view in the database is based on a query, and depending on the query's complexity, may support insert, update and delete functionality to a limited degree.  In order to support fully insertable, updateable and deletable views, Oracle introduced the instead of view, which gives the view designer the ability to not only define the view query, but also a set of programmatic PLSQL triggers where the developer can define their own logic for inserts, updates and deletes. While this provides the database programmer a very powerful feature, it can cause issues for our ADF application.  On inserting or updating a record in the instead of view, the record and its data that goes in is not necessarily the data that comes out when ADF compares the records, as the view developer has the option to do practically anything with the incoming data, including throwing it away or pushing it to tables which aren't used by the view's underlying query for fetching the data. Readers are at this point reminded that this article is specifically about how the JBO-25014 error occurs in the context of one developer on an isolated database.  The article is not considering how the error occurs in a production environment where there are multiple users who can cause this error in a legitimate fashion.  Assuming none of the above features are the cause of the problem, and optimistic locking is turned on (this error is not possible if pessimistic locking is the default mode *and* none of the previous causes are possible), JBO-25014 is quite feasible in a production ADF application if two users modify the same record. At this point, under project timeline pressure, the obvious fix for developers is to drop both database triggers and default values from the underlying tables.  However we must be careful that these legacy constructs aren't used and assumed to be in place by other legacy systems.  Dropping the database triggers or default values that the existing Oracle Forms applications assume and require to be in place could cause unexpected behaviour and bugs in the Forms application.  Proficient software engineers would recognize such a change may require a partial or full regression test of the existing legacy system, a potentially costly and time-consuming exercise, not ideal.
Simply selecting these instructs ADF BC, after inserting or updating a record to the database, to expect the database to modify the said attributes, and to read a copy of the changed attributes back into its cached mid-tier record.  Thus the next time the developer modifies the current record, the comparison between the mid-tier record and the database record matches, and "JBO-25014: Another user has changed" is no longer an issue. [Post edit - as per the comment from Oracle's Steven Davelaar below, as he correctly points out the above solution will not work for instead-of-trigger views as it relies on the SQL RETURNING clause, which is incompatible with this type of view] Alternatively you can set the Change Indicator on one of the attributes.  This will work as long as the corresponding column for the attribute in the database itself isn't inadvertently updated.  In turn you're possibly just masking the issue rather than solving it, because if another developer turns the Change Indicator back off the original issue will return.
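To make the database-trigger cause described above concrete, here is a hedged sketch of the kind of trigger that silently rewrites columns behind ADF's back; the table and audit column names are invented for illustration only:

CREATE OR REPLACE TRIGGER bookings_biu
BEFORE INSERT OR UPDATE ON bookings
FOR EACH ROW
BEGIN
  -- The database, not the ADF mid-tier, stamps these (hypothetical) audit
  -- columns, so ADF's cached copy of the row no longer matches what the
  -- database actually stored.
  :NEW.last_updated_by   := USER;
  :NEW.last_updated_date := SYSDATE;
END;
/

On the next update of the same row in the same session, ADF compares its cached original values (which lack the trigger-supplied audit values) with the row in the database and raises JBO-25014, unless Refresh After Insert/Update or a Change Indicator is configured as described above.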

    Read the article

  • July, the 31 Days of SQL Server DMO’s – Day 29 (sys.dm_os_buffer_descriptors)

    - by Tamarick Hill
    The sys.dm_os_buffer_descriptors Dynamic Management View gives you a look into the data pages that are currently in your SQL Server buffer pool. Just in case you are not familiar with some of the internals of SQL Server and how the engine works, SQL Server only works with objects that are in memory (the buffer pool). When an object such as a table needs to be read and it does not exist in the buffer pool, SQL Server will read (copy) the necessary data page(s) from disk into the buffer pool and cache them. Caching takes place so that the pages can be reused again, and it prevents the need for expensive physical reads. To better illustrate this DMV, let's query it against our AdventureWorks2012 database and view the result set.
SELECT * FROM sys.dm_os_buffer_descriptors WHERE database_id = db_id('AdventureWorks2012')
The first column returned from this result set is the database_id column, which identifies the specific database for a given row. The file_id column represents the file that a particular buffer descriptor belongs to. The page_id column represents the ID for the data page within the buffer. The page_level column represents the index level of the data page. Next we have the allocation_unit_id column, which identifies a unique allocation unit. An allocation unit is basically a set of data pages. The page_type column tells us exactly what type of page is in the buffer pool. From my screen shot above you see I have three distinct types of pages in my buffer pool: Index, Data, and IAM pages. Index pages are pages that are used to build the Root and Intermediate levels of a B-Tree. A Data page would represent the actual leaf pages of a clustered index, which contain the actual data for the table. Without getting into too much detail, an IAM page is an Index Allocation Map page, which tracks GAM (Global Allocation Map) pages, which in turn track extents on your system. The row_count column details how many data rows are present on a given page. The free_space_in_bytes column tells you how much of a given data page is still available; remember, pages are 8K in size. The is_modified column signifies whether or not a page has been changed since it has been read into memory, i.e. a dirty page. The numa_node column represents the Nonuniform memory access node for the buffer. Lastly is the read_microsec column, which tells you how many microseconds it took for a data page to be read (copied) into the buffer pool. This is a great DMV for use when you are tracking down a memory issue or if you just want to have a look at what type of pages are currently in your buffer pool. For more information about this DMV, please see the below Books Online link: http://msdn.microsoft.com/en-us/library/ms173442.aspx Follow me on Twitter @PrimeTimeDBA
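As a rough follow-on example (not from the original post), this DMV is often aggregated rather than browsed row by row; something along these lines summarizes buffer pool usage per database, relying only on the database_id column described above and the 8 KB page size:

-- Approximate buffer pool usage per database (pages are 8 KB each)
SELECT DB_NAME(bd.database_id)  AS database_name,
       COUNT(*)                 AS pages_in_buffer_pool,
       COUNT(*) * 8 / 1024      AS approximate_mb
FROM sys.dm_os_buffer_descriptors AS bd
GROUP BY DB_NAME(bd.database_id)
ORDER BY pages_in_buffer_pool DESC;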

    Read the article

  • Scrum and Team Consolidation

    - by John K. Hines
    I’m still working my way through one of the more painful team consolidations of my career.  One thing that made it hard was my assumption that the use of Agile methods and Scrum would make everything easy.  Take three teams, make all work visible, track it, and presto: an efficient, functioning software development team. What I’ve come to realize is that the primary benefit of Scrum is that Scrum brings teams closer to their customers.  Frequent meetings, short iterations, and phased deployments are all meant to keep the customer in the loop.  It’s true that as teams become proficient with Scrum they tend to become more efficient.  But I don’t think it’s true that Scrum automatically helps people work together. Instead, Scrum can point out when teams aren’t good at working together.  And it really illustrates when teams, especially teams in sustaining mode, are reacting to their customers instead of innovating with them.  At the moment we’ve inherited a huge backlog of tools, processes, and personalities.  It’s up to us to sort them all out.  Unfortunately, after 7½ months we’re still sorting. What I’d recommend for any blended team is to look at your current product lifecycles and work on a single lifecycle for all work.  If you can’t objectively come up with one process, that’s a good indication that the new team might not be a good fit for being a single unit (which happens all the time in bigger companies).  Go ahead and self-organize into sub-teams.  Then repeat the process. If you can come up with a single process, tackle each piece and standardize all of them.  Do this as soon as possible, as it can be uncomfortable.  Standardize your requirements gathering and tracking, your exploration and technical analysis, your project planning, development standards, validation and sustaining processes.  Standardize all of it.  Make this your top priority, get it out of the way, and get back to work. Lastly, managers of blended teams should realize what I’m suggesting is a disruptive process.  But you’ve just reorganized; the team is already disrupted.  Don’t pull the bandage off slowly and force the team through a prolonged transition phase, lowering their productivity over the long term.  You can role-model leadership to your team and drive a true consolidation.  Destroy roadblocks, reassure those on your team who are afraid of change, and push forward to create something efficient and beautiful.  Then use Scrum to reengage your customers in a way that they’ll love. Technorati tags: Scrum Scrum Process

    Read the article

  • Oracle Solaris: Zones on Shared Storage

    - by Jeff Victor
    Oracle Solaris 11.1 has several new features. At oracle.com you can find a detailed list. One of the significant new features, and the most significant new feature related to Oracle Solaris Zones, is casually called "Zones on Shared Storage" or simply ZOSS (rhymes with "moss"). ZOSS offers much more flexibility because you can store Solaris Zones on shared storage (surprise!) so that you can perform quick and easy migration of a zone from one system to another. This blog entry describes and demonstrates the use of ZOSS. ZOSS provides complete support for a Solaris Zone that is stored on "shared storage." In this case, "shared storage" refers to Fibre Channel (FC) or iSCSI devices, although there is one lone exception that I will demonstrate soon. The primary intent is to enable you to store a zone on FC or iSCSI storage so that it can be migrated from one host computer to another much more easily and safely than in the past. With this blog entry, I wanted to make it easy for you to try this yourself. I couldn't assume that you have a SAN available - which is a good thing, because neither do I! What could I use instead? [There he goes, foreshadowing again... -Ed.] Developing this entry reinforced the lesson that the solution to every lab problem is VirtualBox. Oracle VM VirtualBox (its formal name) helps here in a couple of important ways. It offers the ability to easily install multiple copies of Solaris as guests on top of any popular system (Microsoft Windows, MacOS, Solaris, Oracle Linux (and other Linuxes) etc.). It also offers the ability to create a separate virtual disk drive (VDI) that appears as a local hard disk to a guest. This virtual disk can be moved very easily from one guest to another. In other words, you can follow the steps below on a laptop or larger x86 system. Please note that the ability to use ZOSS to store a zone on a local disk is very useful for a lab environment, but not so useful for production. I do not suggest regularly moving disk drives among computers. In the method I describe below, that virtual hard disk will contain the zone that will be migrated among the (virtual) hosts. In production, you would use FC or iSCSI LUNs instead. The zonecfg(1M) man page details the syntax for each of the three types of devices.
Why Migrate? Why is the migration of virtual servers important? Some of the most common reasons are:
Moving a workload to a different computer so that the original computer can be turned off for extensive maintenance.
Moving a workload to a larger system because the workload has outgrown its original system.
If the workload runs in an environment (such as a Solaris Zone) that is stored on shared storage, you can restore the service of the workload on an alternate computer if the original computer has failed and will not reboot.
You can simplify lifecycle management of a workload by developing it on a laptop, migrating it to a test platform when it's ready, and finally moving it to a production system.
Concepts: For ZOSS, the important new concept is named "rootzpool". You can read about it in the zonecfg(1M) man page, but here's the short version: it's the backing store (hard disk(s), or LUN(s)) that will be used to make a ZFS zpool - the zpool that will hold the zone. This zpool: contains the zone's Solaris content, i.e.
the root file system; does not contain any content not related to the zone; can only be mounted by one Solaris instance at a time.
Method Overview: Here is a brief list of the steps to create a zone on shared storage and migrate it. The next section shows the commands and output. You will need a host system with an x86 CPU (hopefully at least a couple of CPU cores), at least 2GB of RAM, and at least 25GB of free disk space. (The steps below will not actually use 25GB of disk space, but I don't want to lead you down a path that ends in a big sign that says "Your HDD is full. Good luck!") Configure the zone on both systems, specifying the rootzpool that both will use. The best way is to configure it on one system and then copy the output of "zonecfg export" to the other system to be used as input to zonecfg. This method reduces the chances of pilot error. (It is not necessary to configure the zone on both systems before creating it. You can configure this zone in multiple places, whenever you want, and migrate it to one of those places at any time - as long as those systems all have access to the shared storage.) Install the zone on one system, onto shared storage. Boot the zone. Provide system configuration information to the zone. (In the Real World(tm) you will usually automate this step.) Shutdown the zone. Detach the zone from the original system. Attach the zone to its new "home" system. Boot the zone. The zone can be used normally, and even migrated back, or to a different system.
Details: The rest of this shows the commands and output. The two hostnames are "sysA" and "sysB". Note that each Solaris guest might use a different device name for the VDI that they share. I used the device names shown below, but you must discover the device name(s) after booting each guest. In a production environment you would also discover the device name first and then configure the zone with that name. Fortunately, you can use the command "zpool import" or "format" to discover the device on the "new" host for the zone. The first steps create the VirtualBox guests and the shared disk drive. I describe the steps here without demonstrating them. Download VirtualBox and install it using a method normal for your host OS. You can read the complete instructions. Create two VirtualBox guests, each to run Solaris 11.1. Each will use its own VDI as its root disk. Install Solaris 11.1 in each guest. To install a Solaris 11.1 guest, you can either download a pre-built VirtualBox guest, and import it, or install Solaris 11.1 from the "text install" media. If you use the latter method, after booting you will not see a windowing system. To install the GUI and other important things, login and run "pkg install solaris-desktop" and take a break while it installs those important things. Life is usually easier if you install the VirtualBox Guest Additions because then you can copy and paste between the host and guests, etc. You can find the guest additions in the folder matching the version of VirtualBox you are using. You can also read the instructions for installing the guest additions. To create the zone's shared VDI in VirtualBox, you can open the storage configuration for one of the two guests, select the SATA controller, and click on the "Add Hard Disk" icon nearby. Choose "Create New Disk" and specify an appropriate path name for the file that will contain the VDI. The shared VDI must be at least 1.5 GB. Note that the guest must be stopped to do this.
Add that VDI to the other guest - using its Storage configuration - so that each can access it while running. The steps start out the same, except that you choose "Choose Existing Disk" instead of "Create New Disk." Because the disk is configured on both of them, VirtualBox prevents you from running both guests at the same time. Identify device names of that VDI, in each of the guests. Solaris chooses the name based on existing devices. The names may be the same, or may be different from each other. This step is shown below as "Step 1." Assumptions In the example shown below, I make these assumptions. The guest that will own the zone at the beginning is named sysA. The guest that will own the zone after the first migration is named sysB. On sysA, the shared disk is named /dev/dsk/c7t2d0 On sysB, the shared disk is named /dev/dsk/c7t3d0 (Finally!) The Steps Step 1) Determine the name of the disk that will move back and forth between the systems. root@sysA:~# format Searching for disks...done AVAILABLE DISK SELECTIONS: 0. c7t0d0 /pci@0,0/pci8086,2829@d/disk@0,0 1. c7t2d0 /pci@0,0/pci8086,2829@d/disk@2,0 Specify disk (enter its number): ^D Step 2) The first thing to do is partition and label the disk. The magic needed to write an EFI label is not overly complicated. root@sysA:~# format -e c7t2d0 selecting c7t2d0 [disk formatted] FORMAT MENU: ... format fdisk No fdisk table exists. The default partition for the disk is: a 100% "SOLARIS System" partition Type "y" to accept the default partition, otherwise type "n" to edit the partition table. n SELECT ONE OF THE FOLLOWING: ... Enter Selection: 1 ... G=EFI_SYS 0=Exit? f SELECT ONE... ... 6 format label ... Specify Label type[1]: 1 Ready to label disk, continue? y format quit root@sysA:~# ls /dev/dsk/c7t2d0 /dev/dsk/c7t2d0 Step 3) Configure zone1 on sysA. root@sysA:~# zonecfg -z zone1 Use 'create' to begin configuring a new zone. zonecfg:zone1 create create: Using system default template 'SYSdefault' zonecfg:zone1 set zonename=zone1 zonecfg:zone1 set zonepath=/zones/zone1 zonecfg:zone1 add rootzpool zonecfg:zone1:rootzpool add storage dev:dsk/c7t2d0 zonecfg:zone1:rootzpool end zonecfg:zone1 exit root@sysA:~# oot@sysA:~# zonecfg -z zone1 info zonename: zone1 zonepath: /zones/zone1 brand: solaris autoboot: false bootargs: file-mac-profile: pool: limitpriv: scheduling-class: ip-type: exclusive hostid: fs-allowed: anet: ... rootzpool: storage: dev:dsk/c7t2d0 Step 4) Install the zone. This step takes the most time, but you can wander off for a snack or a few laps around the gym - or both! (Just not at the same time...) root@sysA:~# zoneadm -z zone1 install Created zone zpool: zone1_rpool Progress being logged to /var/log/zones/zoneadm.20121022T163634Z.zone1.install Image: Preparing at /zones/zone1/root. AI Manifest: /tmp/manifest.xml.RXaycg SC Profile: /usr/share/auto_install/sc_profiles/enable_sci.xml Zonename: zone1 Installation: Starting ... Creating IPS image Startup linked: 1/1 done Installing packages from: solaris origin: http://pkg.us.oracle.com/support/ DOWNLOAD PKGS FILES XFER (MB) SPEED Completed 183/183 33556/33556 222.2/222.2 2.8M/s PHASE ITEMS Installing new actions 46825/46825 Updating package state database Done Updating image state Done Creating fast lookup database Done Installation: Succeeded Note: Man pages can be obtained by installing pkg:/system/manual done. Done: Installation completed in 1696.847 seconds. Next Steps: Boot the zone, then log into the zone console (zlogin -C) to complete the configuration process. 
Log saved in non-global zone as /zones/zone1/root/var/log/zones/zoneadm.20121022T163634Z.zone1.install Step 5) Boot the Zone. root@sysA:~# zoneadm -z zone1 boot Step 6) Login to zone's console to complete the specification of system information. root@sysA:~# zlogin -C zone1 Answer the usual questions and wait for a login prompt. Then you can end the console session with the usual "~." incantation. Step 7) Shutdown the zone so it can be "moved." root@sysA:~# zoneadm -z zone1 shutdown Step 8) Detach the zone so that the original global zone can't use it. root@sysA:~# zoneadm list -cv ID NAME STATUS PATH BRAND IP 0 global running / solaris shared - zone1 installed /zones/zone1 solaris excl root@sysA:~# zpool list NAME SIZE ALLOC FREE CAP DEDUP HEALTH ALTROOT rpool 17.6G 11.2G 6.47G 63% 1.00x ONLINE - zone1_rpool 1.98G 484M 1.51G 23% 1.00x ONLINE - root@sysA:~# zoneadm -z zone1 detach Exported zone zpool: zone1_rpool Step 9) Review the result and shutdown sysA so that sysB can use the shared disk. root@sysA:~# zpool list NAME SIZE ALLOC FREE CAP DEDUP HEALTH ALTROOT rpool 17.6G 11.2G 6.47G 63% 1.00x ONLINE - root@sysA:~# zoneadm list -cv ID NAME STATUS PATH BRAND IP 0 global running / solaris shared - zone1 configured /zones/zone1 solaris excl root@sysA:~# init 0 Step 10) Now boot sysB and configure a zone with the parameters shown above in Step 1. (Again, the safest method is to use "zonecfg ... export" on sysA as described in section "Method Overview" above.) The one difference is the name of the rootzpool storage device, which was shown in the list of assumptions, and which you must determine by booting sysB and using the "format" or "zpool import" command. When that is done, you should see the output shown next. (I used the same zonename - "zone1" - in this example, but you can choose any valid zonename you want.) root@sysB:~# zoneadm list -cv ID NAME STATUS PATH BRAND IP 0 global running / solaris shared - zone1 configured /zones/zone1 solaris excl root@sysB:~# zonecfg -z zone1 info zonename: zone1 zonepath: /zones/zone1 brand: solaris autoboot: false bootargs: file-mac-profile: pool: limitpriv: scheduling-class: ip-type: exclusive hostid: fs-allowed: anet: linkname: net0 ... rootzpool: storage: dev:dsk/c7t3d0 Step 11) Attaching the zone automatically imports the zpool. root@sysB:~# zoneadm -z zone1 attach Imported zone zpool: zone1_rpool Progress being logged to /var/log/zones/zoneadm.20121022T184034Z.zone1.attach Installing: Using existing zone boot environment Zone BE root dataset: zone1_rpool/rpool/ROOT/solaris Cache: Using /var/pkg/publisher. Updating non-global zone: Linking to image /. Processing linked: 1/1 done Updating non-global zone: Auditing packages. No updates necessary for this image. Updating non-global zone: Zone updated. Result: Attach Succeeded. Log saved in non-global zone as /zones/zone1/root/var/log/zones/zoneadm.20121022T184034Z.zone1.attach root@sysB:~# zoneadm -z zone1 boot root@sysB:~# zlogin zone1 [Connected to zone 'zone1' pts/2] Oracle Corporation SunOS 5.11 11.1 September 2012 Step 12) Now let's migrate the zone back to sysA. Create a file in zone1 so we can verify it exists after we migrate the zone back, then begin migrating it back. 
root@zone1:~# ls /opt root@zone1:~# touch /opt/fileA root@zone1:~# ls -l /opt/fileA -rw-r--r-- 1 root root 0 Oct 22 14:47 /opt/fileA root@zone1:~# exit logout [Connection to zone 'zone1' pts/2 closed] root@sysB:~# zoneadm -z zone1 shutdown root@sysB:~# zoneadm -z zone1 detach Exported zone zpool: zone1_rpool root@sysB:~# init 0 Step 13) Back on sysA, check the status. Oracle Corporation SunOS 5.11 11.1 September 2012 root@sysA:~# zoneadm list -cv ID NAME STATUS PATH BRAND IP 0 global running / solaris shared - zone1 configured /zones/zone1 solaris excl root@sysA:~# zpool list NAME SIZE ALLOC FREE CAP DEDUP HEALTH ALTROOT rpool 17.6G 11.2G 6.47G 63% 1.00x ONLINE - Step 14) Re-attach the zone back to sysA. root@sysA:~# zoneadm -z zone1 attach Imported zone zpool: zone1_rpool Progress being logged to /var/log/zones/zoneadm.20121022T190441Z.zone1.attach Installing: Using existing zone boot environment Zone BE root dataset: zone1_rpool/rpool/ROOT/solaris Cache: Using /var/pkg/publisher. Updating non-global zone: Linking to image /. Processing linked: 1/1 done Updating non-global zone: Auditing packages. No updates necessary for this image. Updating non-global zone: Zone updated. Result: Attach Succeeded. Log saved in non-global zone as /zones/zone1/root/var/log/zones/zoneadm.20121022T190441Z.zone1.attach root@sysA:~# zpool list NAME SIZE ALLOC FREE CAP DEDUP HEALTH ALTROOT rpool 17.6G 11.2G 6.47G 63% 1.00x ONLINE - zone1_rpool 1.98G 491M 1.51G 24% 1.00x ONLINE - root@sysA:~# zoneadm -z zone1 boot root@sysA:~# zlogin zone1 [Connected to zone 'zone1' pts/2] Oracle Corporation SunOS 5.11 11.1 September 2012 root@zone1:~# zpool list NAME SIZE ALLOC FREE CAP DEDUP HEALTH ALTROOT rpool 1.98G 538M 1.46G 26% 1.00x ONLINE - Step 15) Check for the file created on sysB, earlier. root@zone1:~# ls -l /opt total 1 -rw-r--r-- 1 root root 0 Oct 22 14:47 fileA Next Steps Here is a brief list of some of the fun things you can try next. Add space to the zone by adding a second storage device to the rootzpool. Make sure that you add it to the configurations of both zones! Create a new zone, specifying two disks in the rootzpool when you first configure the zone. When you install that zone, or clone it from another zone, zoneadm uses those two disks to create a mirrored pool. (Three disks will result in a three-way mirror, etc.) Conclusion Hopefully you have seen the ease with which you can now move Solaris Zones from one system to another.
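As a starting point for the second of those next steps, a new zone with a mirrored root pool can be configured with the same zonecfg syntax shown in Step 3; the sketch below is only illustrative, and the zone name and device names are placeholders you would replace with devices discovered via "format" or "zpool import" on your own hosts:

root@sysA:~# zonecfg -z zone2
Use 'create' to begin configuring a new zone.
zonecfg:zone2> create
zonecfg:zone2> set zonepath=/zones/zone2
zonecfg:zone2> add rootzpool
zonecfg:zone2:rootzpool> add storage dev:dsk/c7t4d0
zonecfg:zone2:rootzpool> add storage dev:dsk/c7t5d0
zonecfg:zone2:rootzpool> end
zonecfg:zone2> exit

Per the note above, installing (or cloning) that zone should then build its zpool as a two-way mirror across the two devices.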

    Read the article

  • CodePlex Daily Summary for Saturday, November 12, 2011

    CodePlex Daily Summary for Saturday, November 12, 2011Popular ReleasesDynamic PagedCollection (Silverlight / WPF Pagination): PagedCollection: All classes which facilitate your dynamic pagination in Silverlight or WPF !Media Companion: MC 3.422b Weekly: Ensure .NET 4.0 Full Framework is installed. (Available from http://www.microsoft.com/download/en/details.aspx?id=17718) Ensure the NFO ID fix is applied when transitioning from versions prior to 3.416b. (Details here) TV Show Resolutions... Made the TV Shows folder list sorted. Re-visibled 'Manually Add Path' in Root Folders. Sorted list to process during new tv episode search Rebuild Movies now processes thru folders alphabetically Fix for issue #208 - Display Missing Episodes is not popu...DotSpatial: DotSpatial Release Candidate 1 (1.0.823): Supports loading extensions using System.ComponentModel.Composition. DemoMap compiled as x86 so that GDAL runs on x64 machines. How to: Use an Assembly from the WebBe aware that your browser may add an identifier to downloaded files which results in "blocked" dll files. You can follow the following link to learn how to "Unblock" files. Right click on the zip file before unzipping, choose properties, go to the general tab and click the unblock button. http://msdn.microsoft.com/en-us/library...XPath Visualizer: XPathVisualizer v1.3 Latest: This is v1.3.0.6 of XpathVisualizer. This is an update release for v1.3. These workitems have been fixed since v1.3.0.5: 7429 7432 7427MSBuild Extension Pack: November 2011: Release Blog Post The MSBuild Extension Pack November 2011 release provides a collection of over 415 MSBuild tasks. A high level summary of what the tasks currently cover includes the following: System Items: Active Directory, Certificates, COM+, Console, Date and Time, Drives, Environment Variables, Event Logs, Files and Folders, FTP, GAC, Network, Performance Counters, Registry, Services, Sound Code: Assemblies, AsyncExec, CAB Files, Code Signing, DynamicExecute, File Detokenisation, GU...Quick Performance Monitor: Version 1.8.0: Now you can add multiple counters/instances at once! Yes, I know its overdue...CODE Framework: 4.0.11110.0: Various minor fixes and tweaks.Extensions for Reactive Extensions (Rxx): Rxx 1.2: What's NewRelated Work Items Please read the latest release notes for details about what's new. Content SummaryRxx provides the following features. See the Documentation for details. Many IObservable<T> extension methods and IEnumerable<T> extension methods. Many useful types such as ViewModel, CommandSubject, ListSubject, DictionarySubject, ObservableDynamicObject, Either<TLeft, TRight>, Maybe<T> and others. Various interactive labs that illustrate the runtime behavior of the extensio...Player Framework by Microsoft: HTML5 Player Framework 1.0: Additional DownloadsHTML5 Player Framework Examples - This is a set of examples showing how to setup and initialize the HTML5 Player Framework. This includes examples of how to use the Player Framework with both the HTML5 video tag and Silverlight player. Note: Be sure to unblock the zip file before using. Note: In order to test Silverlight fallback in the included sample app, you need to run the html and xap files over http (e.g. over localhost). Silverlight Players - Visit the Silverlig...MapWindow 4: MapWindow GIS v4.8.6 - Final release - 64Bit: What’s New in 4.8.6 (Final release)A few minor issues have been fixed What’s New in 4.8.5 (Beta release)Assign projection tool. (Sergei Leschinsky) Projection dialects. 
(Sergei Leschinsky) Projections database converted to SQLite format. (Sergei Leschinsky) Basic code for database support - will be developed further (ShapefileDataClient class, IDataProvider interface). (Sergei Leschinsky) 'Export shapefile to database' tool. (Sergei Leschinsky) Made the GEOS library static. geos.dl...NewLife XCode ??????: XCode v8.2.2011.1107、XCoder v4.5.2011.1108: v8.2.2011.1107 ?IEntityOperate.Create?Entity.CreateInstance??????forEdit,????????(FindByKeyForEdit)???,???false ??????Entity.CreateInstance,????forEdit,???????????????????? v8.2.2011.1103 ??MS????,??MaxMin??(????????)、NotIn??(????)、?Top??(??NotIn)、RowNumber??(?????) v8.2.2011.1101 SqlServer?????????DataPath,?????????????????????? Oracle?????????DllPath,????OCI??,???????????ORACLE_HOME?? Oracle?????XCode.Oracle.IsUseOwner,???????????Ow...Facebook C# SDK: v5.3.2: This is a RTW release which adds new features and bug fixes to v5.2.1. Query/QueryAsync methods uses graph api instead of legacy rest api. removed dependency from Code Contracts enabled Task Parallel Support in .NET 4.0+ (experimental) added support for early preview for .NET 4.5 (binaries not distributed in codeplex nor nuget.org, will need to manually build from Facebook-Net45.sln) added additional method overloads for .NET 4.5 to support IProgress<T> for upload progress added ne...Delete Inactive TS Ports: List and delete the Inactive TS Ports: UPDATEAdded support for windows 2003 servers and removed some null reference errors when the registry key was not present List and delete the Inactive TS Ports - The InactiveTSPortList.EXE accepts command line arguments The InactiveTSPortList.Standalone.WithoutPrompt.exe runs as a standalone exe without the need for any command line arguments.Ribbon Editor for Microsoft Dynamics CRM 2011: Ribbon Editor (0.1.2207.267): BUG FIXES: - Cannot add multiple JavaScript and Url under Actions - Cannot add <Or> node under <OrGroup> - Adding a rule under <Or> node put the new rule node at the wrong placeWF Designer Express: WF Designer Express v0.6: ?????(???????????) Multi Language(Now, Japanese and English)ClosedXML - The easy way to OpenXML: ClosedXML 0.60.0: Added almost full support for auto filters (missing custom date filters). See examples Filter Values, Custom Filters Fixed issues 7016, 7391, 7388, 7389, 7198, 7196, 7194, 7186, 7067, 7115, 7144Microsoft Research Boogie: Nightly builds: This download category contains automatically released nightly builds, reflecting the current state of Boogie's development. We try to make sure each nightly build passes the test suite. If you suspect that was not the case, please try the previous nightly build to see if that really is the problem. Also, please see the installation instructions.GoogleMap Control: GoogleMap Control 6.0: Major design changes to the control in order to achieve better scalability and extensibility for the new features comming with GoogleMaps API. GoogleMap control switched to GoogleMaps API v3 and .NET 4.0. GoogleMap control is 100% ScriptControl now, it requires ScriptManager to be registered on the pages where and before it is used. Markers, polylines, polygons and directions were implemented as ExtenderControl, instead of being inner properties of GoogleMap control. Better perfomance. 
Better...WDTVHubGen - Adds Metadata, thumbnails and subtitles to WDTV Live Hubs: V2.1: Version 2.1 (click on the right) this uses V4.0 of .net Version 2.1 adds the following features: (apologize if I forget some, added a lot of little things) Manual Lookup with TV or Movie (finally huh!), you can look up a movie or TV episode directly, you can right click on anythign, and choose manual lookup, then will allow you to type anything you want to look up and it will assign it to the file you right clicked. No Rename: a very popular request, this is an option you can set so that t...SubExtractor: Release 1020: Feature: added "baseline double quotes" character to selector box Feature: added option to save SRT files as ANSI (instead of previous UTF-8 only) Feature: made "Save Sup files to Source directory" apply to both Sup and Idx source files. Fix: removed SDH text (...) or [...] that is split over 2 lines Fix: better decision-making in when to prefix a line with a '-' because SDH was removedNew Projects- Gemini - code branch: the branch for all Hzj_jie's code(MPLF) MediaPortal LoveFilm Plugin: MediaPortal LoveFilm (Love Film) Plugin. Browse films, add/remove from lists, streams etc._: _Batalha Naval em C#: Jogo de batalha naval para a disciplina de C#BusyHoliday - Making Outlook's Calendar Busy for Holidays: When users add holidays to their Outlook calendar these are added as an all day event. However, these are added as free time. With users in other countries, if they were to invite a user in a country with a public holiday, they won't be able to see that the user is on a public holiday or busy. BusyHoliday will update existing holidays as busy time. This may improve users ability to collaborate efficiently across geographical boundries. C# and MCU: C# and MCU and webCampo Minado C Sharp: C sharp source code of the game minefield.CP2011test: CP2011 TestDynamic PagedCollection (Silverlight / WPF Pagination): PagedCollection helps a developer to easily control the pagination of their collection. This collection doesnt need all datas in memory, it can be attached to RIA/SQL/XML data ... and load needed data one after the other. The PagedCollection can be used with Silverlight or WPF.Fast MVVM: The faster way to build your MVVM (Model, View, ViewModel) applications. It's a basic and educational way to learn and work with MVVM. We have more than just a pack of classes or templates. We have a new concept that turn it a little easier.Habitat: Core game services, frameworks and templates for desktop, mobile and web game developers.Historical Data Repository: Repository stores serializable timestamped data items in compressed files structured in a way that allows to quickly find data by time, quickly read and write data items in chronological order. Key features: - all data is stored under a single root folder; like file system, repository can contain a tree of folders each of which contains its own data stream; when reading, multiple streams can be combined into a single output stream maintaining chronological order; the tree structure can b...Immunity Buster: This game is for HIV. 
We will be making a virus which will help to spread the knowledge about the virus and have Fun among the peers.JoeCo Blog Application: JoeCo Blog Application for ECE 574jweis sso user center: jweis is a user center,distination is to use in a contry ,ssoKinect Shape Game Connect with Windows Phone: The classic Shape Game for Microsoft Kinect extended to allow players to join in the game and deploy shapes from their windows phone. Uses TCP sockets. For educational purposes. Free to download and distribute.Liekhus Behavior Driven DevExpress XAF: Using the BDD (Behavior Driven Development) techniques of SpecFlow and the EasyTest scriptability of DevExpress XAF (eXpress Application Framework), you can leverage test first style of application development in a speed never though imaginable.Linq to BBYOpen: C# Linq Provider to access the REST based BBYOpen services from BestBuy.Mi clima: This is the source code for the application "Mi clima" published in the Windows Phone Marketplace. MySqlQueryToolkit: This project bundles all of the common performance tests into a single handy user interface. Simply drop your query in and use the tools to make your query faster. Includes index suggestion, column data type and length suggestion and common profiling options.OData Validation Framework: Project to show how to validate data using client and server side OData/WCF Data ServicesOMR Table Data Syncronizer - Only Data: This tools compare and syncronize table data. No schema syncronizing included on this project. Data comparition is fastest! 1,000,000 data compare time is: ~200 ms.OrchardPo: This is the po file management module that is used on http://orchardproject.netResistance Calculator: A calculator for parallel and/or series resistanceSMS Team 5: h? th?ng g?i SMS t? d?ngSportsStore: Prueba en tfs de sports storeUDP Transport: This is an attempt at implementation of WCF UDP transport compliant with soa.udp OASIS Standard WebUtils: Biblioteca com funções genéricas úteis para uso com ASP NET MVC??a??e?a - Greek Government Open Data Program - Windows Mobile Client: ? efa?µ??? Open Government sa? d??e? ?µes? p??sßas? ap? t? ????t? sa? se ????? t??? ??µ???, ??e? t?? ap?f?se?? ?a? ??e? t?? dap??e? t?? ????????? ???t??? ?a? ???? t?? ?p??es??? t?? ????????? ??µ?s??? µ?s? t?? ??ße???t???? p?????µµat?? ??a??e?a.

    Read the article

  • C# Domain-Driven Design Sample Released

    - by Artur Trosin
    In this post I want to announce that the NDDD Sample application(s) is released and share the work with you. You can access it here: http://code.google.com/p/ndddsample. NDDDSample, from a functionality perspective, matches DDDSample 1.1.0, which is based on Java and is a joint effort by Eric Evans' company Domain Language and the Swedish software consulting company Citerus. But because NDDDSample is based on .NET technologies, those two implementations could not be matched directly. However concepts, practices, values, patterns, especially DDD, are cross-language and cross-platform :). Implementation of the .NET version of the application was an interesting journey because now, as a .NET developer, I better understand the differences, positive and negative, between these two platforms. Even though there are those differences, they can be overcome; in many cases it was not so hard to match a Java lib/framework with .NET during the implementation. Here is the technology stack:
1. .NET 3.5 - framework
2. VS.NET 2008 - IDE
3. ASP.NET MVC 2.0 - for administration and tracking UI
4. WCF - communication mechanism
5. NHibernate - ORM
6. Rhino Commons - NHibernate session management, base classes for in-memory unit tests
7. SQLite - database
8. Windsor - inversion of control container
9. Windsor WCF facility - for better integration with NHibernate
10. MvcContrib - and in particular its Castle WindsorControllerFactory in order to enable IoC for controllers
11. WPF - for the incident logging application
12. Moq - mocking lib used for unit tests
13. NUnit - unit testing framework
14. Log4net - logging framework
15. Cloud based on Azure SDK
These are not the latest technologies, tools and libs at the moment, but if anyone thinks it would be useful to migrate the sample to the latest technologies and versions, please comment. The cloud version of the application is based on the Azure emulated environment provided by the SDK, so it hasn't been tested in a 'real' Azure scenario (we just do not have access to it). Thanks to the participants: Eugen Gorgan, who was involved directly in development; Ruslan Rusu and Victor Lungu, who spent their free time to discuss .NET specific decisions; Eugen Navitaniuc, who helped with Java related questions. Also, a big thank you to Cornel Cretu, who designed a nice logo and helped with some browser incompatibility issues. Any review and feedback are welcome! Thank you, Artur Trosin

    Read the article

  • Responding to Invites

    - by Daniel Moth
    Following up from my post about Sending Outlook Invites here is a shorter one on how to respond. Whatever your choice (ACCEPT, TENTATIVE, DECLINE), if the sender has not unchecked the "Request Response" option, then send your response. Always send your response. Even if you think the sender made a mistake in keeping it on, send your response. Seriously, not responding is plain rude. If you knew about the meeting, and you are happy investing your time in it, and the time and location work for you, and there is an implicit/explicit agenda, then ACCEPT and send it. If one or more of those things don't work for you then you have a few options. Send a DECLINE explaining why. Reply with email to ask for further details or for a change to be made. If you don’t receive a response to your email, send a DECLINE when you've waited enough. Send a TENTATIVE if you haven't made up your mind yet. Hint: if they really require you there, they'll respond asking "why tentative" and you have a discussion about it. When you deem appropriate, instead of the options above, you can also use the counter propose feature of Outlook but IMO that feature has questionable interaction model and UI (on both sender and recipient) so many people get confused by it. BTW, two of my outlook rules are relevant to invites. The first one auto-marks as read the ACCEPT responses if there is no comment in the body of the accept (I check later who has accepted and who hasn't via the "Tracking" button of the invite). I don’t have a rule for the DECLINE and TENTATIVE cause typically I follow up with folks that send those.   The second rule ensures that all Invites go to a specific folder. That is the first folder I see when I triage email. It is also the only folder which I have configured to show a count of all items inside it, rather than the unread count - when sending a response to an invite the item disappears from the folder and hence it is empty and not nagging me. Comments about this post by Daniel Moth welcome at the original blog.

    Read the article

  • SQL SERVER – NTFS File System Performance for SQL Server

    - by pinaldave
    Note: Before practicing any of the suggestions in this article, consult your IT infrastructure admin; applying a suggestion without proper testing can only damage your system. Question: "Pinal, we have 80 GB of data including all the database files, and we have our data in the NTFS file system. We have proper backups set up. Any suggestions for our NTFS file system performance improvement? Our SQL Server box is running only SQL Server and nothing else. Please advise." When I receive questions like the one I have just listed above, it often sends me into deep thought. Honestly, I know a lot, but there are plenty of things I believe can be built with the community knowledge base. Today I need you to help me complete this list. I will start the list and you help me complete it.
NTFS File System Performance Best Practices for SQL Server:
Disable indexing on disk volumes
Disable generation of 8.3 names (command: FSUTIL BEHAVIOR SET DISABLE8DOT3 1)
Disable last file access time tracking (command: FSUTIL BEHAVIOR SET DISABLELASTACCESS 1)
Keep some space empty (let us say 15% for reference) on the drive if possible (only on the Filestream data storage volume)
Defragment the volume
Add your suggestions here…
The one I often get a pretty big debate about is NTFS allocation size. I have seen that on the disk volume which stores filestream data, increasing the allocation unit size to 64K from 4K reduces fragmentation. Again, I suggest you attempt this only after proper testing on your server. Every system is different and the files stored are different. This is where I would like to request that you share your experience related to NTFS allocation size. If you do not agree with any of the above suggestions, leave a comment with a reference and I will modify it. Please note that the above list was prepared assuming the SQL Server application is the only thing running on the computer system. The next question is whether all of this is still relevant for SSDs – I personally have no experience with SSDs and large databases, so I will refrain from commenting. Reference: Pinal Dave (http://blog.sqlauthority.com) Filed under: PostADay, SQL, SQL Authority, SQL Performance, SQL Query, SQL Server, SQL Tips and Tricks, T SQL, Technology
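For the allocation unit discussion above, here is a hedged example of how you would typically check and (on an empty volume) change the NTFS allocation unit size; the drive letter is a placeholder, and the FORMAT command is destructive, so test it only on a non-production volume:

REM Show the current allocation unit size ("Bytes Per Cluster") of drive F:
FSUTIL FSINFO NTFSINFO F:

REM Re-format an empty volume with a 64 KB allocation unit (this wipes the volume)
FORMAT F: /FS:NTFS /A:64K /Q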

    Read the article

  • How to Achieve OC4J RMI Load Balancing

    - by fip
    This is an old, Oracle SOA and OC4J 10G topic. In fact this is not even a SOA topic per se. Questions of RMI load balancing arise when you develop custom web applications accessing human tasks running off a remote SOA 10G cluster. Having returned from a customer who faced challenges with OC4J RMI load balancing, I felt there was still some confusion in the field about how OC4J RMI load balancing works. Hence I decided to dust off an old tech note that I wrote a few years back and share it with the general public. Here is the tech note:
Overview: A typical use case in Oracle SOA is that you are building a web-based, custom human task UI that will interact with the task services housed in a remote BPEL 10G cluster. Or, in a more generic way, you are just building a web-based application in Java that needs to interact with the EJBs in a remote OC4J cluster. In either case, you are talking to an OC4J cluster as an RMI client. Then immediately you must ask yourself the following questions: 1. How do I make sure that the web application, as an RMI client, evenly distributes its load against all the nodes in the remote OC4J cluster? 2. How do I make sure that the web application, as an RMI client, is resilient to node failures in the remote OC4J cluster, so that in the unlikely case when one of the remote OC4J nodes fails, my web application will continue to function? That is the topic of how to achieve load balancing with an OC4J RMI client.
Solutions: You need to configure and code RMI load balancing in two places: 1. The provider URL can be specified with a comma-separated list of URLs, so that the initial lookup will land on one of the available URLs. 2. Choose a proper value for the oracle.j2ee.rmi.loadBalance property, which, alongside the PROVIDER_URL property, is one of the JNDI properties passed to the JNDI lookup. (http://docs.oracle.com/cd/B31017_01/web.1013/b28958/rmi.htm#BABDGFBI) More details below:
About the PROVIDER_URL: The JNDI property java.naming.provider.url's job is, when the client looks up a new context for the very first time in the client session, to provide a list of RMI contexts. The value of the JNDI property java.naming.provider.url goes by the format of a single URL, or a comma-separated list of URLs. A single URL. For example: opmn:ormi://host1:6003:oc4j_instance1/appName1 A comma-separated list of multiple URLs. For example: opmn:ormi://host1:6003:oc4j_instance1/appName, opmn:ormi://host2:6003:oc4j_instance1/appName, opmn:ormi://host3:6003:oc4j_instance1/appName When the client looks up a new Context for the very first time in the client session, it sends a query against the OPMN referenced by the provider URL. The OPMN host and port specify the destination of such a query, and the OC4J instance name and appName are actually the "where clause" of the query. When the PROVIDER URL references a single OPMN server: Let's consider the case when the provider URL only references a single OPMN server of the destination cluster. In this case, that single OPMN server receives the query and returns a list of the qualified Contexts from all OC4Js within the cluster, even though there is a single OPMN server in the provider URL. A context represents a particular starting point at a particular server for subsequent object lookup.
    For example, if the URL is opmn:ormi://host1:6003:oc4j_instance1/appName, then OPMN will return the following contexts: appName on oc4j_instance1 on host1, appName on oc4j_instance1 on host2, and appName on oc4j_instance1 on host3 (provided that host1, host2 and host3 are all in the same cluster). Please note that one OPMN server is sufficient to find the list of all contexts from the entire cluster that satisfy the JNDI lookup query. You can do an experiment by shutting down appName on host1, and observe that OPMN on host1 will still be able to return appName on host2 and appName on host3.

    When the PROVIDER_URL references a comma separated list of multiple OPMN servers
    When the JNDI property java.naming.provider.url references a comma separated list of multiple URLs, the lookup will return exactly the same thing as with a single OPMN server: a list of qualified Contexts from the cluster. The purpose of having multiple OPMN servers is to provide high availability for the initial context creation, such that if the OPMN at host1 is unavailable, the client will try the lookup via the OPMN on host2, and so on. After the initial lookup returns and caches a list of contexts, the JNDI URL(s) are no longer used in the same client session. That explains why removing the 3rd URL from the list of JNDI URLs will not stop the client from getting the EJB on the 3rd server.

    About the oracle.j2ee.rmi.loadBalance property
    After the client acquires the list of contexts, it caches it at the client side as the “list of available RMI contexts”. This list includes all the servers in the destination cluster and stays in the cache until the client session (JVM) ends. The RMI load balancing against the destination cluster happens at the client side, as the client switches between the members of the list. Whether and how often the client will refresh the Context from the list of contexts is based on the value of oracle.j2ee.rmi.loadBalance. The documentation at http://docs.oracle.com/cd/B31017_01/web.1013/b28958/rmi.htm#BABDGFBI lists all the available values for oracle.j2ee.rmi.loadBalance:
    - client: If specified, the client interacts with the OC4J process that was initially chosen at the first lookup for the entire conversation.
    - context: Used for a web client (servlet or JSP) that will access EJBs in a clustered OC4J environment. If specified, a new Context object for a randomly-selected OC4J instance will be returned each time InitialContext() is invoked.
    - lookup: Used for a standalone client that will access EJBs in a clustered OC4J environment. If specified, a new Context object for a randomly-selected OC4J instance will be created each time the client calls Context.lookup().
    Please note that regardless of the setting of the oracle.j2ee.rmi.loadBalance property, the “refresh” only occurs at the client. The client can only choose from the “list of available contexts” that was returned and cached from the very first lookup. That is, the client will merely get a new Context object from the “list of available RMI contexts” in the cache at the client side; the client will NOT go to the OPMN server again to get the list. That also implies that if you add a node to the server cluster AFTER the client's initial lookup, the client will not know about it, because neither the server nor the client will initiate a refresh of the “list of available servers” to reflect the new node.

    About High Availability (i.e. Resilience Against Node Failure of the Remote OC4J Cluster)
    What we have discussed above is about load balancing. Let's also discuss high availability. This is how high availability works in RMI: when the client uses a Context but gets an exception, such as a closed socket, it knows that the server referenced by that Context is problematic and will try to get another unused Context from the “list of available contexts”. Again, this list is the list that was returned and cached at the very first lookup in the entire client session.
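    To make the two settings above concrete, here is a minimal sketch (my addition, not from the original tech note) of the JNDI environment an RMI client might use, expressed as a jndi.properties file. The host names, ports, instance name, application name and credentials are placeholders, and the initial context factory class shown is the one commonly documented for OC4J opmn:ormi lookups; verify it against your own OC4J version before relying on it.

        # jndi.properties (illustrative values only)
        java.naming.factory.initial=oracle.j2ee.rmi.RMIInitialContextFactory
        java.naming.provider.url=opmn:ormi://host1:6003:oc4j_instance1/appName,opmn:ormi://host2:6003:oc4j_instance1/appName,opmn:ormi://host3:6003:oc4j_instance1/appName
        java.naming.security.principal=oc4jadmin
        java.naming.security.credentials=your-password-here
        oracle.j2ee.rmi.loadBalance=context

    With loadBalance set to context, each new InitialContext() in the web application picks a random member from the cached list of contexts, which gives the even distribution discussed above.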

    Read the article

  • C# Extension Methods - To Extend or Not To Extend...

    - by James Michael Hare
    I've been thinking a lot about extension methods lately, and I must admit I both love them and hate them. They are a lot like sugar: they taste so nice and sweet, but they'll rot your teeth if you eat them too much. I can't deny that they are useful and very handy. One of the major components of the Shared Component library where I work is a set of useful extension methods. But I also can't deny that they tend to be overused and abused to willy-nilly extend every living type. So what constitutes a good extension method? Obviously, you can write an extension method for nearly anything, whether it is a good idea or not. Many times, in fact, an idea seems like a good extension method but in retrospect really doesn't fit. So what's the litmus test? To me, an extension method should be like in the movies when a person runs into their twin, separated at birth: you just know you're related. Obviously, that's hard to quantify, so let's try to put a few rules of thumb around them. A good extension method should:
    1. Apply to any possible instance of the type it extends.
    2. Simplify logic and improve readability/maintainability.
    3. Apply to the most specific type or interface applicable.
    4. Be isolated in a namespace so that it does not pollute IntelliSense.
    So let's look at a few examples in relation to these rules. The first rule, to me, is the most important of all. Once again, it bears repeating: a good extension method should apply to all possible instances of the type it extends. It should feel like the long lost relative that should have been included in the original class but somehow was missing from the family tree. Take this nifty little int extension. I saw it once in a blog and at first I really thought it was pretty cool, but then I started noticing a code smell I couldn't quite put my finger on. So let's look:

        public static class IntExtensions
        {
            public static int Seconds(this int num)
            {
                return num * 1000;
            }

            public static int Minutes(this int num)
            {
                return num * 60000;
            }
        }

    This is so you could do things like:

        ...
        Thread.Sleep(5.Seconds());
        ...
        proxy.Timeout = 1.Minutes();
        ...

    Awww, you say, that's cute! Well, that's the problem: it's kitschy and it doesn't always apply (and incidentally you could achieve the same thing with TimeSpan.FromSeconds(5)). It's syntactical candy that looks cool, but tends to rot and pollute the code. It would allow things like:

        total += numberOfTodaysOrders.Seconds();

    which makes no sense and should never be allowed. The problem is you're applying an extension method to a logical domain, not a type domain. That is, the extension method Seconds() doesn't really apply to ALL ints; it applies to ints that are representative of time that you want to convert to milliseconds. Do you see what I mean? The two problems, in a nutshell, are that a) Seconds() called off a non-time value makes no sense, and b) calling Seconds() off something to pass to something that does not take milliseconds will be off by a factor of 1000 or worse. Thus, in my mind, you should only ever have an extension method that applies to the whole domain of that type.
    For example, this is one of my personal favorites:

        public static bool IsBetween<T>(this T value, T low, T high)
            where T : IComparable<T>
        {
            return value.CompareTo(low) >= 0 && value.CompareTo(high) <= 0;
        }

    This allows you to check if any IComparable<T> is within an upper and lower bound. Think of how many times you type something like:

        if (response.Employee.Address.YearsAt >= 2
            && response.Employee.Address.YearsAt <= 10)
        {
        ...
        }

    Now, you can instead type:

        if (response.Employee.Address.YearsAt.IsBetween(2, 10))
        {
        ...
        }

    Note that this applies to all IComparable<T> -- that's ints, chars, strings, DateTime, etc -- and does not depend on any logical domain. In addition, it satisfies the second point and actually makes the code more readable and maintainable. Let's look at the third point. In it we said that an extension method should fit the most specific interface or type possible. Now, I'm not saying that if you have something that applies to enumerables, you should create an extension for List, Array, Dictionary, etc (though you may have reasons for doing so), but that you should beware of making things TOO general. For example, let's say we had an extension method like this:

        public static T ConvertTo<T>(this object value)
        {
            return (T)Convert.ChangeType(value, typeof(T));
        }

    This lets you do more fluent conversions like:

        double d = "5.0".ConvertTo<double>();

    However, if you dig into Reflector (LOVE that tool) you will see that if the type you are calling on does not implement IConvertible, what you convert to MUST be the exact type or it will throw an InvalidCastException. Now this may or may not be what you want in this situation, and I leave that up to you. Things like this would fail:

        object value = new Employee();
        ...
        // class cast exception because typeof(IEmployee) != typeof(Employee)
        IEmployee emp = value.ConvertTo<IEmployee>();

    Yes, that's a downfall of working with IConvertible in general, but if you wanted your fluent interface to be more type-safe so that ConvertTo were only callable on IConvertibles (and let casting be a manual task), you could easily make it:

        public static T ConvertTo<T>(this IConvertible value)
        {
            return (T)Convert.ChangeType(value, typeof(T));
        }

    This is what I mean by choosing the best type to extend. Consider that if we used the previous (object) version, every time we typed a dot ('.') on an instance we'd pull up ConvertTo() whether it was applicable or not. By filtering our extension method down to only valid types (those that implement IConvertible) we greatly reduce our IntelliSense pollution and apply a good level of compile-time correctness. Now my fourth rule is just my general rule of thumb. Obviously, you can make extension methods as in-your-face as you want. I included all mine in my work libraries in their own sub-namespace, something akin to:

        namespace Shared.Core.Extensions { ... }

    This is in a library called Shared.Core, so just referencing the Core library doesn't pollute your IntelliSense; you have to actually do a using on Shared.Core.Extensions to bring the methods in. This is very similar to the way Microsoft puts its extension methods in System.Linq. This way, if you want 'em, you use the appropriate namespace. If you don't want 'em, they won't pollute your namespace.
    To really make this work, however, that namespace should only include extension methods and subordinate types that those extensions themselves may use. If you plant other useful classes in those namespaces, once a user includes it, they get all the extensions too. Also, just as a personal preference, for extension methods that aren't simply syntactical shortcuts, I like to put the logic in a static utility class and then have extension methods for syntactical candy. For instance, I think it's imaginable that any object could be converted to XML:

        namespace Shared.Core
        {
            // A collection of XML Utility classes
            public static class XmlUtility
            {
                ...
                // Serialize an object into an xml string
                public static string ToXml(object input)
                {
                    var xs = new XmlSerializer(input.GetType());

                    // use new UTF8Encoding here, not Encoding.UTF8. The latter includes
                    // the BOM which screws up subsequent reads, the former does not.
                    using (var memoryStream = new MemoryStream())
                    using (var xmlTextWriter = new XmlTextWriter(memoryStream, new UTF8Encoding()))
                    {
                        xs.Serialize(xmlTextWriter, input);
                        return Encoding.UTF8.GetString(memoryStream.ToArray());
                    }
                }
                ...
            }
        }

    I also wanted to be able to call this from an object like:

        value.ToXml();

    But here's the problem: if I made this an extension method from the start with that one little keyword "this", it would pop into IntelliSense for all objects, which could be very polluting. Instead, I put the logic into a utility class so that users have the choice of whether or not they want to use it as just a class and not pollute IntelliSense; then, in my extensions namespace, I add the syntactical candy:

        namespace Shared.Core.Extensions
        {
            public static class XmlExtensions
            {
                public static string ToXml(this object value)
                {
                    return XmlUtility.ToXml(value);
                }
            }
        }

    So now it's the best of both worlds. On one hand, they can use the utility class if they don't want to pollute IntelliSense, and on the other hand they can include the Extensions namespace and use it as an extension if they want. The neat thing is it also adheres to the Single Responsibility Principle: XmlUtility is responsible for converting objects to XML, and XmlExtensions is responsible for extending an object's interface with ToXml().
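    As a small illustrative addendum (my own sketch, not from the original post, assuming the Shared.Core and Shared.Core.Extensions types above are referenced, and using a hypothetical Employee type defined only for this example), here is what the two call styles look like side by side:

        using System;
        using Shared.Core;              // utility class only; no IntelliSense pollution
        using Shared.Core.Extensions;   // opt in to the extension method

        public class Employee           // hypothetical type used only for this example
        {
            public string Name { get; set; }
            public int YearsAt { get; set; }
        }

        public class XmlCallStylesDemo
        {
            public static void Run()
            {
                var employee = new Employee { Name = "Jane", YearsAt = 4 };

                // Call through the utility class...
                string xml1 = XmlUtility.ToXml(employee);

                // ...or, with the Extensions namespace in scope, as an extension method.
                string xml2 = employee.ToXml();

                Console.WriteLine(xml1 == xml2);  // same output either way
            }
        }

    Either way the same XmlUtility logic runs; the extension method is purely the syntactical candy layered on top.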

    Read the article

  • CodePlex Daily Summary for Saturday, June 22, 2013

    CodePlex Daily Summary for Saturday, June 22, 2013Popular ReleasesGac Library -- C++ Utilities for GPU Accelerated GUI and Script: Gaclib 0.5.2.0: Gaclib.zip contains the following content GacUIDemo Demo solution and projects Public Source GacUI library Document HTML document. Please start at reference_gacui.html Content Necessary CSS/JPG files for document. Improvements to the previous release Add 4 demos Controls.DataGrid.ChemicalElements This demo shows how to display data in a GuiVirtualDataGrid control using different styles for different cells. Controls.DataGrid.FileExplorer This demo shows how to use GuiVirtualDataGrid w...Three-Dimensional Maneuver Gear for Minecraft: TDMG 1.1.0.0 for 1.5.2: CodePlex???(????????) ?????????(???1/4) ??????????? ?????????? ???????????(??????????) ??????????????????????? ↑????、?????????????????????(???????) ???、??????????、?????????????????????、????????1.5?????????? Shift+W(????)??????????????????10°、?10°(?????????)???Pizarrón Virtual: Pizarron virtual codigo fuente: Código fuenteHyper-V Management Pack Extensions 2012: HyperVMPE2012: Hyper-V Management Pack Extensions 2012 Beta ReleaseOutlook 2013 Add-In: Email appointments: This new version includes the following changes: - Ability to drag emails to the calendar to create appointments. Will gather all the recipients from all the emails and create an appointment on the day you drop the emails, with the text and subject of the last selected email (if more than one selected). - Increased maximum of numbers to display appointments to 30. You will have to uninstall the previous version (add/remove programs) if you had installed it before. Before unzipping the file...Caliburn Micro: WPF, Silverlight, WP7 and WinRT/Metro made easy.: Caliburn.Micro v1.5.2: v1.5.2 - This is a service release. We've fixed a number of issues with Tasks and IoC. We've made some consistency improvements across platforms and fixed a number of minor bugs. See changes.txt for details. Packages Available on Nuget Caliburn.Micro – The full framework compiled into an assembly. Caliburn.Micro.Start - Includes Caliburn.Micro plus a starting bootstrapper, view model and view. Caliburn.Micro.Container – The Caliburn.Micro inversion of control container (IoC); source code...CODE Framework: 4.0.30618.0: See change notes in the documentation section for details on what's new. Note: If you download the class reference help file with, you have to right-click the file, pick "Properties", and then unblock the file, as many browsers flag the file as blocked during download (for security reasons) and thus hides all content.Toolbox for Dynamics CRM 2011: XrmToolBox (v1.2013.6.18): XrmToolbox improvement Use new connection controls (use of Microsoft.Xrm.Client.dll) New display capabilities for tools (size, image and colors) Added prerequisites check Added Most Used Tools feature Tools improvementNew toolSolution Transfer Tool (v1.0.0.0) developed by DamSim Updated toolView Layout Replicator (v1.2013.6.17) Double click on source view to display its layoutXml All tools list Access Checker (v1.2013.6.17) Attribute Bulk Updater (v1.2013.6.18) FetchXml Tester (v1.2013.6.1...Media Companion: Media Companion MC3.570b: New* Movie - using XBMC TMDB - now renames movies if option selected. * Movie - using Xbmc Tmdb - Actor images saved from TMDb if option selected. 
Fixed* Movie - Checks for poster.jpg against missing poster filter * Movie - Fixed continual scraping of vob movie file (not DVD structure) * Both - Correctly display audio channels * Both - Correctly populate audio info in nfo's if multiple audio tracks. * Both - added icons and checked for DTS ES and Dolby TrueHD audio tracks. * Both - Stream d...Document.Editor: 2013.24: What's new for Document.Editor 2013.24: Improved Video Editing support Improved Link Editing support Minor Bug Fix's, improvements and speed upsExtJS based ASP.NET Controls: FineUI v3.3.0: ??FineUI ?? ExtJS ??? ASP.NET ???。 FineUI??? ?? No JavaScript,No CSS,No UpdatePanel,No ViewState,No WebServices ???????。 ?????? IE 7.0、Firefox 3.6、Chrome 3.0、Opera 10.5、Safari 3.0+ ???? Apache License v2.0 ?:ExtJS ?? GPL v3 ?????(http://www.sencha.com/license)。 ???? ??:http://fineui.com/bbs/ ??:http://fineui.com/demo/ ??:http://fineui.com/doc/ ??:http://fineui.codeplex.com/ FineUI???? ExtJS ?????????,???? ExtJS ?。 ????? FineUI ? ExtJS ?:http://fineui.com/bbs/forum.php?mod=viewthrea...BarbaTunnel: BarbaTunnel 8.0: Check Version History for more information about this release.ExpressProfiler: ExpressProfiler v1.5: [+] added Start time, End time event columns [+] added SP:StmtStarting, SP:StmtCompleted events [*] fixed bug with Audit:Logout eventpatterns & practices: Data Access Guidance: Data Access Guidance Drop4 2013.06.17: Drop 4Kooboo CMS: Kooboo CMS 4.1.1: The stable release of Kooboo CMS 4.1.0 with fixed the following issues: https://github.com/Kooboo/CMS/issues/1 https://github.com/Kooboo/CMS/issues/11 https://github.com/Kooboo/CMS/issues/13 https://github.com/Kooboo/CMS/issues/15 https://github.com/Kooboo/CMS/issues/19 https://github.com/Kooboo/CMS/issues/20 https://github.com/Kooboo/CMS/issues/24 https://github.com/Kooboo/CMS/issues/43 https://github.com/Kooboo/CMS/issues/45 https://github.com/Kooboo/CMS/issues/46 https://github....VidCoder: 1.5.0 Beta: The betas have started up again! If you were previously on the beta track you will need to install this to get back on it. That's because you can now run both the Beta and Stable version of VidCoder side-by-side! Note that the OpenCL and Intel QuickSync changes being tested by HandBrake are not in the betas yet. They will appear when HandBrake integrates them into the main branch. Updated HandBrake core to SVN 5590. This adds a new FDK AAC encoder. The FAAC encoder has been removed and now...Wsus Package Publisher: Release v1.2.1306.16: Date/Time are displayed as Local Time. (Last Contact, Last Report and DeadLine) Wpp now remember the last used path for update publishing. (See 'Settings' Form for options) Add an option to allow users to publish an update even if the Framework has judged the certificate as invalid. (Attention : Using this option will NOT allow you to publish or revise an update if your certificate is really invalid). When publishing a new update, filter update files to ensure that there is not files wi...Employee Info Starter Kit: v6.0 - ASP.NET MVC Edition: Release Home - Getting Started - Hands on Coding Walkthrough – Technology Stack - Design & Architecture EISK v6.0 – ASP.NET MVC edition bundles most of the greatest and successful platforms, frameworks and technologies together, to enable web developers to learn and build manageable and high performance web applications with rich user experience effectively and quickly. 
User End SpecificationsCreating a new employee record Read existing employee records Update an existing employee reco...OLAP PivotTable Extensions: Release 0.8.1: Use the 32-bit download for... Excel 2007 Excel 2010 32-bit (even Excel 2010 32-bit on a 64-bit operating system) Excel 2013 32-bit (even Excel 2013 32-bit on a 64-bit operating system) Use the 64-bit download for... Excel 2010 64-bit Excel 2013 64-bit Just download and run the EXE. There is no need to uninstall the previous release. If you have problems getting the add-in to work, see the Troubleshooting Installation wiki page. The new features in this release are: View #VALUE! Err...DirectXTex texture processing library: June 2013: June 15, 2013 Custom filtering implementation for Resize & GenerateMipMaps(3D) - Point, Box, Linear, Cubic, and Triangle TEX_FILTER_TRIANGLE finite low-pass triangle filter TEX_FILTER_WRAP, TEX_FILTER_MIRROR texture semantics for custom filtering TEX_FILTER_BOX alias for TEX_FILTER_FANT WIC Ordered and error diffusion dithering for non-WIC conversion sRGB gamma correct custom filtering and conversion DDS_FLAGS_EXPAND_LUMINANCE - Reader conversion option for L8, L16, and A8L8 legacy ...New Projects7tin - For Business Expansion: 7tin.net one project that every Businessman can open a shop and sale everything. It's free.AntikCompta: AntikCompta is the easiest way to share comptability beetween an antiquaire and it's account manager.Community wallpaper: share your desktop wall paper with others using People Near Me (PNM) protocolDeep.NET: This project aims at providing basic tools for developers to build learning models that can extract hierarchical representations of knowledge. DynamicAccess: DynamicAccess is a library to aid connecting DLR languages such as ironpython and ironruby to non-dynamic languages like managed C++. It also fills in some gaps in the current C# support of dynamic objects, such as member access by string and deletion of members or indexes.EntInscripcion: Este es un proyecto de inscripcionesExtensible Lightweight Framework: ELF is developed in C# .NET 4.0. It offers extensions on standard components from the .NET Framework and custom components such as MVP. Most classes implement interfaces and are resolved using IoC. Currently a single feature is available, the coming month new features will be added. WCF Service Client - Exception Handling - Retry Logic - Reusable Eye tracking with Kinect for Windows: Attempts to track eyes and calculate the pupillary distance of the user using Kinect for Windows.Hanoi: Another implementation of the popular game of Towers of Hanoi for SmartPhones with Windows Mobile 2003 or newer. But it's not only the classic Towers of Hanoi, it has some value-added features like timing, time-limited gaming, column order changing, etc. It supports from 1 disk to 8. Very geeky game.HR_project: WPF ?????????? ? ???????? ? ???? ??????. ???????? ? ?????????? ????? ??????????? ? ???? ??????.HttpUtility: HttpUtility???C#?HttpRequest(HttpWebRequest)?????。 ??HttpUtility????????? 1.??HttpRequest?Get??,?????html?? 2.??HttpRequest?Post??,?????html?? 3.??HttpRequest??IL2 Stats DB: A SQL 2008 Database Scheme to store data generated by the IL2 Log Parser.maillib: Libreria mail per l'invio di posta elettronicamgl Pluginsystem: mgl is a pluginsystem. MIBETerminal: ---Mobile FIT VUT: Školní aplikace pro zacínající studenty FIT VUT. Obsahuje potrebné informace pro studium na Vysokém ucení v Brne fakulta Informacních technologií. 
Aplikace obsahuje: -školní prírucku -aktuální jídelní lístkem menz -kontaktní informace na zamestnance FIT VUT -mapu fakulty FIT -aktuality -plánovac úloh Aplikace se nachází na Windows Marketplace: http://www.windowsphone.com/cs-CZ/apps/c2e8036d-970a-4ab1-8ca4-b97788a0dcb5 OMANE: OMANEPDF Unlocker Software for those who want to unlock PDF Restrictions: PDF Unlocker Software has the competence to unlock PDF restrictions. With this tool user can exclusive attribute to remove PDF security completelyPortalCemil: Portal CemilResonance: Resonance is a system to train neural networks, it allows to automate the train of a neural network, distributing the calculation on multiple machines.SauravUtil: These are some small utilities created specially by saurav sarkar. 1. Wcf Tutorial using Entity Framework 5 2. Rest Coming soonSelcukEticaretDenemesi: Test için bir çalisma yürütecegimSPWTF: "SPWTF or SharePoint well thats foolish" is a project that intends to bridge various gaps in the OOTB SharePoint apps story. Enjoy!SQL Server Integration Services Reporting: The canned SSMS Integration Services reports rewritten for deployment on Reporting Services.Swyish Chess: Chess Application built using C# and WPFThree-Dimensional Maneuver Gear for Minecraft: Minecraft?????????????。Urdu Translation: Urdu Translation Project Visual Studio Templates compliant with StyleCop Rules: This project contains the templates and instructions to make your Visual Studio 2008 create new files compliant to StyleCop rules.webapps-in-action.com: Here you'll find all the Code Samples & Solutions from my Blog http://webapps-in-action.com

    Read the article

  • FTP Publishing with the new Windows Azure Release

    - by Harish Ranganathan
    There is a good chance you might have stumbled upon the new Windows Azure release that we made on June 6th. Scott Guthrie's post summarizes the overall new features nicely. One of my favorite features is Windows Azure Websites and the ability to publish files to Azure using your FTP client. Windows Azure Websites offers low cost (free up to 10 websites) web hosting where you can quickly deploy any website that can run on IIS 7.0.

    The earlier releases of the Azure SDKs and the Azure platform supported .NET 3.5 and above for running your applications. This was a constraint for many, since there are a lot of ASP.NET 2.0 applications built over time, and many of you were skeptical about migrating them to .NET 4 simply to put them on Azure. Windows Azure Websites offers the flexibility of running any .NET version supported by IIS 7.0, which means you can run .NET 1.1, 2.0, 3.5 and .NET 4. Not just that! You can also run classic ASP applications. Windows Azure Websites doesn't need you to go through the complexity of adding the cloud project template and then publishing the configuration files. Let's take a step by step look at Websites and publishing using FTP.

    I downloaded the Club Website Starter Kit from http://www.asp.net/downloads/starter-kits/club. It also requires a database, so I downloaded the SQL scripts and created a SQL Server database called Club. This installs a Web Site Project template. Note that I am running Windows 8 Release Preview and Visual Studio 2012 RC. After installing the template, select File – New – Website and don't forget to choose the Framework version as .NET 2.0. You can see the “Club Website Starter Kit”. Once you select it, the Website gets created. You will encounter a warning indicating that the Club Website Starter Kit uses SQL Express and the recommended database is LocalDB Express. Click OK to continue. Once the Website is created, open up the Web.config and locate the “ClubSiteDB” connection string. By default, it points to a SQL Express database; instead, configure it to use your local SQL Server. Also, open up Global.asax and comment out the following line:

        if (!Roles.RoleExists("Administrators")) Roles.CreateRole("Administrators");

    There seems to be an issue in the code that doesn't create the role. After that, hit CTRL+F5 and you should be able to see the Website running, as below.

    So, now we have the Club Starter Kit site up and running locally. Moving to Azure: visit http://manage.windowsazure.com/ and sign up for a trial account. This allows you to host up to 10 websites for free, along with a host of other benefits. The free websites can be extended to a year without any charge. Once you have signed up, sign in to the portal using the Live ID used for sign up. After signing in, you will be presented with the “All Items” listing page, which lists Websites, Cloud Services, Databases etc. If this is the first time, you won't find anything. Click on the “Websites” link in the left menu. Click on “New” at the bottom and it should show a dialog. In the same dialog, select Website, click on “Quick Create”, specify “MyFirstDemo” in the URL textbox and click the “Create Web Site” link below. It should take a few seconds to create the Website. Once the Website is created, click on the listing and it should open up the dashboard. Since we haven't done anything yet, there shouldn't be any statistics. Click on the “Download publish profile” link at the bottom right. This file has the FTP publishing settings.
    Also, if you scroll down, you can see the FTP URL for this site. It should typically start with ftp://waws-xxxx-xxx-xxxx. You can also find the FTP URL in the downloaded publish profile file. Pick the following from this file: publishUrl (the second one, the one that appears after publishMethod="FTP"), and the userName and userPWD that follow. Note that we now have everything required to publish the files. But since the Club Starter Kit uses a database, we need to have the database running on SQL Azure.

    Go back to the main menu and click on “New” at the bottom, but this time select “SQL Database” and provide “Club” as the database name for “Quick Create”. If this is the first time, a server will be created; otherwise, it will pick up the existing server name. Once the database is created, you can use the SQL Azure Migration Wizard (http://sqlazuremw.codeplex.com/) and provide the credentials to connect to the local database and then to the SQL Azure database to migrate the “Club” database. The migration wizard UI hasn't changed much and is the same as explained in one of my earlier posts: http://geekswithblogs.net/ranganh/archive/2009/09/29/taking-your-northwind-database-to-sql-azure-and-binding-it.aspx

    Once the database is migrated, come back to the main screen and click on the database in the Azure Management Portal. It opens up the dashboard of the database. Click on “Show connection strings” and it will pop up a list of connection string formats. Choose the ADO.NET connection string, edit the password to the one you provided when creating the database server in the Azure portal, and paste it into the config file of the Club Starter Kit website. Just to reiterate, the connection string key is ClubSiteDB. Try running the website once to ensure that the application, though running locally, can connect to the SQL Database running on Azure.

    Once you are able to run the website successfully, we are all set to do the FTP publishing. Download your favorite FTP tool; I use http://filezilla-project.org/. In the Host textbox, paste the FTP URL that you picked up from the publish profile file, and also paste the username and password. Click on “Quickconnect”. If everything is fine, you should be able to connect to the remote server; once connected, you can see the wwwroot folder of the website running in Azure. Make sure that in the “Local site” pane on the left you choose the path to the folder of your website. Open up the website folder on the left so that it lists all the files and folders inside, select all of them and click “Upload”, or simply drag and drop all the files to the root folder listed above. Once the publishing is done, you should be able to hit the site URL that you can find on the dashboard page of the website. In our case, that would be http://MyFirstDemo.azurewebsites.net

    That's it, we have now done FTP publishing in Azure, and what's more, we are running a .NET 2.0 website on Azure. Cheers !!!
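    As an addendum to the walkthrough above (my addition, not part of the original post): if you prefer to script the upload instead of using an FTP client, the same publish profile values can drive a few lines of C#. The host, user name, password and file name below are placeholders that you would replace with the publishUrl, userName and userPWD from your own publish profile.

        using System;
        using System.IO;
        using System.Net;

        class FtpPublishSketch
        {
            static void Main()
            {
                // Values below come from the downloaded publish profile (placeholders here).
                string ftpRoot  = "ftp://waws-xxxx.ftp.azurewebsites.windows.net/site/wwwroot/";
                string userName = @"MyFirstDemo\$MyFirstDemo";
                string password = "your-publish-profile-password";

                // Upload a single file; loop over Directory.GetFiles(...) for a whole site.
                string localFile = "Default.aspx";

                var request = (FtpWebRequest)WebRequest.Create(ftpRoot + Path.GetFileName(localFile));
                request.Method = WebRequestMethods.Ftp.UploadFile;
                request.Credentials = new NetworkCredential(userName, password);

                byte[] contents = File.ReadAllBytes(localFile);
                using (Stream requestStream = request.GetRequestStream())
                {
                    requestStream.Write(contents, 0, contents.Length);
                }

                using (var response = (FtpWebResponse)request.GetResponse())
                {
                    Console.WriteLine("Upload status: " + response.StatusDescription);
                }
            }
        }

    For a full site you would walk the local folder tree and create the matching folders under /site/wwwroot before uploading, which is exactly what the drag-and-drop in FileZilla does for you.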

    Read the article
