Search Results

Search found 18542 results on 742 pages for 'nhibernate query'.


  • Building applications with WCF - Intro

    - by skjagini
    I am going to write a series of articles using Windows Communication Foundation (WCF) to develop client and server applications, and this is the first part of that series.

What is WCF? As Juval puts it in his Programming WCF Services book, WCF provides an SDK for developing and deploying services on Windows: it provides a runtime environment to expose CLR types as services and to consume services as CLR types. Building services with WCF is incredibly easy, and its implementation provides a set of industry standards and off-the-shelf plumbing including service hosting, instance management, reliability, transaction management, security etc., such that it greatly increases productivity.

Scenario: Let's consider a typical bank customer trying to create an account, deposit an amount and transfer funds between accounts, i.e. checking and savings. To make it interesting, we are going to divide the functionality into multiple services, each of them working with the database directly. We will run test cases with and without transactional support across the services. In this post we will build the contracts, services, data access layer and unit tests to verify end-to-end communication; nothing big here, and we will dig into other features of WCF in subsequent posts with incremental changes. In any distributed architecture we have two pieces, i.e. services and clients. Services, as the name implies, provide the functionality to execute various pieces of business logic on the server, and clients provide the interaction with the end user. Services can be built with Web Services or with WCF. Services built on WCF have the advantage of being binding independent, i.e. they can run over TCP or HTTP without any significant changes to the code.

Solution
- Profile: for creating a new bank customer and getting details about an existing customer (ProfileContract, ProfileService)
- Checking Account: to get the checking account balance, deposit or withdraw an amount (CheckingAccountContract, CheckingAccountService)
- Savings Account: to get the savings account balance, deposit or withdraw an amount (SavingsAccountContract, SavingsAccountService)
- ServiceHost: to host the services, i.e. run the services at a particular address, binding and contract where clients can connect
- Client: helps the end user use the services, such as creating an account and transferring amounts between the accounts
- BankDAL: data access layer to work with the database

BankDAL
It's a no-brainer to use an ORM, as many mature products are currently available in the market, including Linq2Sql, Entity Framework (EF), LLBLGen Pro etc. For this exercise I am going to use Entity Framework 4.0 CTP 5 with the code first approach. There are two approaches when working with data: data driven and code driven. In the data driven approach we start by designing tables and their constraints in the database and generate the entities in code, while in the code driven (code first) approach the entities are defined in code and the metadata generated from the entities is used by EF to create the tables and table constraints. In previous versions the entity classes had to derive from EF-specific base classes. In EF 4 it is not required to derive from any EF classes; the entities are not only persistence ignorant but also enable full test driven development using mock frameworks. The application consists of 3 entities: a Customer entity which contains the customer details, and CheckingAccount and SavingsAccount to hold the respective account balances.
We could have introduced an Account base class for CheckingAccount and SavingsAccount, which is certainly possible with EF mappings, but to keep it simple we are just going to follow a 1-to-1 mapping between entities and tables. Let's start out by defining a class called Customer which will be mapped to the Customer table; observe that the class is simply a plain old CLR object (POCO) and has no reference to EF at all. using System;   namespace BankDAL.Model { public class Customer { public int Id { get; set; } public string FullName { get; set; } public string Address { get; set; } public DateTime DateOfBirth { get; set; } } }   In order to inform EF about the Customer entity we have to define a database context with a property of type DbSet<> for every POCO which needs to be mapped to a table in the database. EF uses convention over configuration to generate the metadata, resulting in much less configuration. using System.Data.Entity;   namespace BankDAL.Model { public class BankDbContext: DbContext { public DbSet<Customer> Customers { get; set; } } }   Entity constraints can be defined through attributes on the Customer class or using fluent syntax (no need to wrestle with XML files) in a CustomerConfiguration class. By defining the constraints in a separate class we can maintain clean POCOs without corrupting the entity classes with database-specific information.   using System; using System.Data.Entity.ModelConfiguration;   namespace BankDAL.Model { public class CustomerConfiguration: EntityTypeConfiguration<Customer> { public CustomerConfiguration() { Initialize(); }   private void Initialize() { //Setting the Primary Key this.HasKey(e => e.Id);   //Setting required fields this.HasRequired(e => e.FullName); this.HasRequired(e => e.Address); //Todo: Can't create required constraint as DateOfBirth is not reference type, research it //this.HasRequired(e => e.DateOfBirth); } } }   Any queries executed against the Customers property in BankDbContext are executed against the Customers table. By convention, EF looks for a connection string with a key of BankDbContext when working with the context.   We are going to define a helper class to work with the Customer entity, with methods for querying, adding new entities etc.; these are known as repository classes, e.g. CustomerRepository.   using System; using System.Data.Entity; using System.Linq; using BankDAL.Model;   namespace BankDAL.Repositories { public class CustomerRepository { private readonly IDbSet<Customer> _customers;   public CustomerRepository(BankDbContext bankDbContext) { if (bankDbContext == null) throw new ArgumentNullException(); _customers = bankDbContext.Customers; }   public IQueryable<Customer> Query() { return _customers; }   public void Add(Customer customer) { _customers.Add(customer); } } }   From the above code it is observable that the Query method returns the customers as IQueryable, i.e. the customers are retrieved only when actually used, i.e. iterated. Returning IQueryable also allows us to execute filtering and joining statements from the business logic using lambda expressions without cluttering the data access layer with tens of methods.
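The CheckingAccount and SavingsAccount entities themselves are not shown in this excerpt. Below is a minimal sketch of how they might look, assuming only the members that the repository and service code later in the article actually touches (CustomerId and Balance); the Id key and anything else here is my own guess, not the author's original classes.

namespace BankDAL.Model
{
    // Hypothetical sketch: CustomerId and Balance are implied by the
    // CheckingAccountRepository and CheckingAccountService code shown later;
    // the rest is an assumption for illustration only.
    public class CheckingAccount
    {
        public int Id { get; set; }
        public int CustomerId { get; set; }
        public decimal Balance { get; set; }
    }

    public class SavingsAccount
    {
        public int Id { get; set; }
        public int CustomerId { get; set; }
        public decimal Balance { get; set; }
    }
}

// BankDbContext would then also expose the matching sets used by the repositories, e.g.:
//   public DbSet<CheckingAccount> CheckingAccounts { get; set; }
//   public DbSet<SavingsAccount> SavingsAccounts { get; set; }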
Our CheckingAccountRepository and SavingsAccountRepository look very similar to each other. using System; using System.Data.Entity; using System.Linq; using BankDAL.Model;   namespace BankDAL.Repositories { public class CheckingAccountRepository { private readonly IDbSet<CheckingAccount> _checkingAccounts;   public CheckingAccountRepository(BankDbContext bankDbContext) { if (bankDbContext == null) throw new ArgumentNullException(); _checkingAccounts = bankDbContext.CheckingAccounts; }   public IQueryable<CheckingAccount> Query() { return _checkingAccounts; }   public void Add(CheckingAccount account) { _checkingAccounts.Add(account); }   public IQueryable<CheckingAccount> GetAccount(int customerId) { return (from act in _checkingAccounts where act.CustomerId == customerId select act); }   } } The repository classes look very similar to each other for the Query and Add methods; with the help of C# generics and by implementing the repository pattern (Martin Fowler) we can reduce the repeated code. Jarod from ElegantCode has posted an article on how to use the repository pattern with EF, which we will implement in subsequent articles along with WCF Unity lifetime managers by Drew. Contracts It is very easy to follow a contract first approach with WCF: define the interface and append the ServiceContract and OperationContract attributes. The IProfile contract exposes functionality for creating a customer and getting customer details.   using System; using System.ServiceModel; using BankDAL.Model;   namespace ProfileContract { [ServiceContract] public interface IProfile { [OperationContract] Customer CreateCustomer(string customerName, string address, DateTime dateOfBirth);   [OperationContract] Customer GetCustomer(int id);   } }   The ICheckingAccount contract exposes functionality for working with the checking account, i.e. getting the balance and depositing and withdrawing an amount. The ISavingsAccount contract looks the same as the checking account contract.   using System.ServiceModel;   namespace CheckingAccountContract { [ServiceContract] public interface ICheckingAccount { [OperationContract] decimal? GetCheckingAccountBalance(int customerId);   [OperationContract] void DepositAmount(int customerId,decimal amount);   [OperationContract] void WithdrawAmount(int customerId, decimal amount);   } }   Services   Having covered the data access layer and the contracts so far, here comes the core of the business logic, i.e. the services.
ProfileService implements the IProfile contract for creating a customer and getting customer details using CustomerRepository.
using System; using System.Linq; using System.ServiceModel; using BankDAL; using BankDAL.Model; using BankDAL.Repositories; using ProfileContract;   namespace ProfileService { [ServiceBehavior(IncludeExceptionDetailInFaults = true)] public class Profile: IProfile { public Customer CreateAccount( string customerName, string address, DateTime dateOfBirth) { Customer cust = new Customer { FullName = customerName, Address = address, DateOfBirth = dateOfBirth };   using (var bankDbContext = new BankDbContext()) { new CustomerRepository(bankDbContext).Add(cust); bankDbContext.SaveChanges(); } return cust; }   public Customer CreateCustomer(string customerName, string address, DateTime dateOfBirth) { return CreateAccount(customerName, address, dateOfBirth); } public Customer GetCustomer(int id) { return new CustomerRepository(new BankDbContext()).Query() .Where(i => i.Id == id).FirstOrDefault(); }   } } From the above code you will observe that we are calling bankDbContext's SaveChanges method and that there is no save method specific to the Customer entity, because EF manages all the changes centrally at the context level and all the pending changes so far are submitted in a batch; this is represented as a Unit of Work. Similarly, the Checking service implements the ICheckingAccount contract using CheckingAccountRepository; notice that we are throwing an overdraft exception if the balance falls below zero. WCF has its own way of raising exceptions using fault contracts, which will be explained in subsequent articles. SavingsAccountService is similar to CheckingAccountService. using System; using System.Linq; using System.ServiceModel; using BankDAL.Model; using BankDAL.Repositories; using CheckingAccountContract;   namespace CheckingAccountService { [ServiceBehavior(IncludeExceptionDetailInFaults = true)] public class Checking:ICheckingAccount { public decimal? GetCheckingAccountBalance(int customerId) { using (var bankDbContext = new BankDbContext()) { CheckingAccount account = (new CheckingAccountRepository(bankDbContext) .GetAccount(customerId)).FirstOrDefault();   if (account != null) return account.Balance;   return null; } }   public void DepositAmount(int customerId, decimal amount) { using(var bankDbContext = new BankDbContext()) { var checkingAccountRepository = new CheckingAccountRepository(bankDbContext); CheckingAccount account = (checkingAccountRepository.GetAccount(customerId)) .FirstOrDefault();   if (account == null) { account = new CheckingAccount() { CustomerId = customerId }; checkingAccountRepository.Add(account); }   account.Balance = account.Balance + amount; if (account.Balance < 0) throw new ApplicationException("Overdraft not accepted");   bankDbContext.SaveChanges(); } } public void WithdrawAmount(int customerId, decimal amount) { DepositAmount(customerId, -1*amount); } } }   BankServiceHost The host acts as the glue binding the contracts with their services, exposing the endpoints. The services can be exposed either through code or through a configuration file; the configuration file is preferred as it allows runtime changes to service behavior even after deployment. We have 3 services, and for each of the services you need to define the name (the class that implements the service, with its fully qualified namespace) and the endpoint, known as the ABC, i.e. address, binding and contract.
We are using netTcpBinding and have defined the base address for each of the contracts. <system.serviceModel> <services> <service name="ProfileService.Profile"> <endpoint binding="netTcpBinding" contract="ProfileContract.IProfile"/> <host> <baseAddresses> <add baseAddress="net.tcp://localhost:1000/Profile"/> </baseAddresses> </host> </service> <service name="CheckingAccountService.Checking"> <endpoint binding="netTcpBinding" contract="CheckingAccountContract.ICheckingAccount"/> <host> <baseAddresses> <add baseAddress="net.tcp://localhost:1000/Checking"/> </baseAddresses> </host> </service> <service name="SavingsAccountService.Savings"> <endpoint binding="netTcpBinding" contract="SavingsAccountContract.ISavingsAccount"/> <host> <baseAddresses> <add baseAddress="net.tcp://localhost:1000/Savings"/> </baseAddresses> </host> </service> </services> </system.serviceModel> We have to open the services by creating service hosts which will handle the incoming requests from clients.   using System;   namespace ServiceHost { class Program { static void Main(string[] args) { CreateHosts(); Console.ReadLine(); }   private static void CreateHosts() { CreateHost(typeof(ProfileService.Profile),"Profile Service"); CreateHost(typeof(SavingsAccountService.Savings), "Savings Account Service"); CreateHost(typeof(CheckingAccountService.Checking), "Checking Account Service"); }   private static void CreateHost(Type type, string hostDescription) { System.ServiceModel.ServiceHost host = new System.ServiceModel.ServiceHost(type); host.Open();   if (host.ChannelDispatchers != null && host.ChannelDispatchers.Count != 0 && host.ChannelDispatchers[0].Listener != null) Console.WriteLine("Started: " + host.ChannelDispatchers[0].Listener.Uri); else Console.WriteLine("Failed to start:" + hostDescription); } } } BankClient    The client has no knowledge about the service business logic other than the functionality it exposes through the contract, the endpoints and a proxy to work against. The endpoint data and server proxy can be generated by right-clicking on the project reference, choosing 'Add Service Reference' and entering the service endpoint address. Or, if you have access to the source, you can manually reference the contract dlls and update the client's configuration file to point to the service endpoint, provided the server and client happen to be built using the .NET Framework. One of the pros of the manual approach is that you don't have to work against messy code-generated files.
<system.serviceModel> <client> <endpoint name="tcpProfile" address="net.tcp://localhost:1000/Profile" binding="netTcpBinding" contract="ProfileContract.IProfile"/> <endpoint name="tcpCheckingAccount" address="net.tcp://localhost:1000/Checking" binding="netTcpBinding" contract="CheckingAccountContract.ICheckingAccount"/> <endpoint name="tcpSavingsAccount" address="net.tcp://localhost:1000/Savings" binding="netTcpBinding" contract="SavingsAccountContract.ISavingsAccount"/>   </client> </system.serviceModel> The client uses a façade to connect to the services   using System.ServiceModel; using CheckingAccountContract; using ProfileContract; using SavingsAccountContract;   namespace Client { public class ProxyFacade { public static IProfile ProfileProxy() { return (new ChannelFactory<IProfile>("tcpProfile")).CreateChannel(); }   public static ICheckingAccount CheckingAccountProxy() { return (new ChannelFactory<ICheckingAccount>("tcpCheckingAccount")) .CreateChannel(); }   public static ISavingsAccount SavingsAccountProxy() { return (new ChannelFactory<ISavingsAccount>("tcpSavingsAccount")) .CreateChannel(); }   } }   With that in place, lets get our unit tests going   using System; using System.Diagnostics; using BankDAL.Model; using NUnit.Framework; using ProfileContract;   namespace Client { [TestFixture] public class Tests { private void TransferFundsFromSavingsToCheckingAccount(int customerId, decimal amount) { ProxyFacade.CheckingAccountProxy().DepositAmount(customerId, amount); ProxyFacade.SavingsAccountProxy().WithdrawAmount(customerId, amount); }   private void TransferFundsFromCheckingToSavingsAccount(int customerId, decimal amount) { ProxyFacade.SavingsAccountProxy().DepositAmount(customerId, amount); ProxyFacade.CheckingAccountProxy().WithdrawAmount(customerId, amount); }     [Test] public void CreateAndGetProfileTest() { IProfile profile = ProxyFacade.ProfileProxy(); const string customerName = "Tom"; int customerId = profile.CreateCustomer(customerName, "NJ", new DateTime(1982, 1, 1)).Id; Customer customer = profile.GetCustomer(customerId); Assert.AreEqual(customerName,customer.FullName); }   [Test] public void DepositWithDrawAndTransferAmountTest() { IProfile profile = ProxyFacade.ProfileProxy(); string customerName = "Smith" + DateTime.Now.ToString("HH:mm:ss"); var customer = profile.CreateCustomer(customerName, "NJ", new DateTime(1982, 1, 1)); // Deposit to Savings ProxyFacade.SavingsAccountProxy().DepositAmount(customer.Id, 100); ProxyFacade.SavingsAccountProxy().DepositAmount(customer.Id, 25); Assert.AreEqual(125, ProxyFacade.SavingsAccountProxy().GetSavingsAccountBalance(customer.Id)); // Withdraw ProxyFacade.SavingsAccountProxy().WithdrawAmount(customer.Id, 30); Assert.AreEqual(95, ProxyFacade.SavingsAccountProxy().GetSavingsAccountBalance(customer.Id));   // Deposit to Checking ProxyFacade.CheckingAccountProxy().DepositAmount(customer.Id, 60); ProxyFacade.CheckingAccountProxy().DepositAmount(customer.Id, 40); Assert.AreEqual(100, ProxyFacade.CheckingAccountProxy().GetCheckingAccountBalance(customer.Id)); // Withdraw ProxyFacade.CheckingAccountProxy().WithdrawAmount(customer.Id, 30); Assert.AreEqual(70, ProxyFacade.CheckingAccountProxy().GetCheckingAccountBalance(customer.Id));   // Transfer from Savings to Checking TransferFundsFromSavingsToCheckingAccount(customer.Id,10); Assert.AreEqual(85, ProxyFacade.SavingsAccountProxy().GetSavingsAccountBalance(customer.Id)); Assert.AreEqual(80, ProxyFacade.CheckingAccountProxy().GetCheckingAccountBalance(customer.Id));   // Transfer 
from Checking to Savings TransferFundsFromCheckingToSavingsAccount(customer.Id, 50); Assert.AreEqual(135, ProxyFacade.SavingsAccountProxy().GetSavingsAccountBalance(customer.Id)); Assert.AreEqual(30, ProxyFacade.CheckingAccountProxy().GetCheckingAccountBalance(customer.Id)); }   [Test] public void FundTransfersWithOverDraftTest() { IProfile profile = ProxyFacade.ProfileProxy(); string customerName = "Angelina" + DateTime.Now.ToString("HH:mm:ss");   var customerId = profile.CreateCustomer(customerName, "NJ", new DateTime(1972, 1, 1)).Id;   ProxyFacade.SavingsAccountProxy().DepositAmount(customerId, 100); TransferFundsFromSavingsToCheckingAccount(customerId,80); Assert.AreEqual(20, ProxyFacade.SavingsAccountProxy().GetSavingsAccountBalance(customerId)); Assert.AreEqual(80, ProxyFacade.CheckingAccountProxy().GetCheckingAccountBalance(customerId));   try { TransferFundsFromSavingsToCheckingAccount(customerId,30); } catch (Exception e) { Debug.WriteLine(e.Message); }   Assert.AreEqual(110, ProxyFacade.CheckingAccountProxy().GetCheckingAccountBalance(customerId)); Assert.AreEqual(20, ProxyFacade.SavingsAccountProxy().GetSavingsAccountBalance(customerId)); } } }   We are creating a new instance of the channel for every operation; we will look into instance management and how creating a new instance of the channel affects it in subsequent articles. The first two test cases deal with the creation of a Customer and the deposit, withdrawal and transfer of money between accounts. The last case, FundTransfersWithOverDraftTest(), is interesting. The customer starts by depositing $100 in the SavingsAccount, followed by a transfer of $80 into the checking account, resulting in $20 in the savings account. The customer then initiates a $30 transfer from Savings to Checking, resulting in an overdraft exception on Savings with the $30 already deposited to Checking. As we are not running both requests in a transaction, the customer ends up with more money than the $100 he started with. In subsequent posts we will look into transaction handling. Make sure the ServiceHost project is set as the startup project and start the solution. Run the test cases either from the NUnit client or from TestDriven.Net/ReSharper, whichever is your favorite tool. Make sure you have updated the database connection string in the ServiceHost config file to point to your local database.
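As a reference for that last step, the article notes that EF looks for a connection string whose key matches the context class name (BankDbContext). A minimal example of what that entry might look like in the ServiceHost's app.config is sketched below; the server and database names are placeholders of my own, not values from the original post.

<connectionStrings>
  <!-- Hypothetical values: point this at your own SQL Server instance and database -->
  <add name="BankDbContext"
       connectionString="Data Source=.\SQLEXPRESS;Initial Catalog=BankDB;Integrated Security=True"
       providerName="System.Data.SqlClient" />
</connectionStrings>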

    Read the article

  • Scripting out Contained Database Users

    - by Argenis
    Today's blog post comes from a Twitter thread on which @SQLSoldier, @sqlstudent144 and @SQLTaiob were discussing the internals of contained database users. Unless you have been living under a rock, you've heard about the concept of contained users within a SQL Server database (hit the link if you have not). In this article I'd like to show you that you can, indeed, script out contained database users and recreate them on another database, either as contained users or as good old fashioned logins/server principals. Why would this be useful? Well, because you would not need to know the password for the user in order to recreate it on another instance. I know there is a limited number of scenarios where this would be necessary, but nonetheless I figured I'd throw this blog post together to show how it can be done. A more obscure use case: with the password hash (which I'm about to show you how to obtain) you could also crack the password using a utility like hashcat, as highlighted in this SQLServerCentral article. The Investigation: SQL Server uses System Base Tables to save the password hashes of logins and contained database users. For logins it uses sys.sysxlgns, whereas for contained database users it leverages sys.sysowners. I'll show you what I do to figure this stuff out: I create a login/contained user, and then I immediately browse the transaction log with, for example, fn_dblog. It's pretty obvious that the only two base tables touched by the operation are sys.sysxlgns and sys.sysprivs - the latter is used to track permissions. If I connect to the DAC on my instance, I can query for the password hash of this login I've just created. A few interesting things about this hash. This was taken on my laptop, and I happen to be running SQL Server 2014 RTM CU2, which is the latest public build of SQL Server 2014 as of the time of writing. In 2008 R2 and prior versions (back to 2000), the password hashes would start with 0x0100. The reason why this changed is that starting with SQL Server 2012 password hashes are kept using a SHA-512 algorithm, as opposed to SHA-1 (used since 2000) or Snefru (used in 6.5 and 7.0). SHA-1 is nowadays deemed unsafe and is very easy to crack. For regular SQL logins, this information is exposed through the sys.sql_logins catalog view, so there is really no need to connect to the DAC to grab an SID/password hash pair. For contained database users, there is (currently) no method of obtaining the SID or password hash without connecting to the DAC. If we create a contained database user, this is what we get from the transaction log: note that the System Base Table used in this case is sys.sysowners. sys.sysprivs is used as well, and again this is to track permissions. To query sys.sysowners, you would have to connect to the DAC, as I mentioned previously. And this is what you would get: There are other ways to figure out what SQL Server uses under the hood to store contained database user password hashes, like looking at the execution plan for a query to sys.dm_db_uncontained_entities (thanks, Robert Davis!). SIDs, Logins, Contained Users, and Why You Care…Or Not. One of the reasons behind the existence of contained users was the concept of portability of databases: it is really painful to maintain Server Principals (Logins) synced across most shared-nothing SQL Server HA/DR technologies (Mirroring, Availability Groups, and Log Shipping).
Oftentimes you would need the Security Identifier (SID) of these logins to match across instances, and that meant that you had to fetch whatever SID was assigned to the login on the principal instance so you could recreate it on a secondary. With contained users you normally wouldn't care about SIDs, as the users are always available (and synced, as long as synchronization takes place) across instances. Now you might be presented with some particular requirement that specifies that SIDs be synced between logins on certain instances and contained database users on other databases. How would you go about creating a contained database user with a specific SID? The answer is that you can't do it directly, but there's a little trick that allows you to do it: create a login with a specified SID and password hash, create a user for that server principal on a partially contained database, then migrate that user to contained using the system stored procedure sp_migrate_user_to_contained, then drop the login. CREATE LOGIN <login_name> WITH PASSWORD = <password_hash> HASHED, SID = <sid>; GO USE <partially_contained_db>; GO CREATE USER <user_name> FROM LOGIN <login_name>; GO EXEC sp_migrate_user_to_contained @username = <user_name>, @rename = N'keep_name', @disablelogin = N'disable_login'; GO DROP LOGIN <login_name>; GO Here's what this skeleton would look like in action: And now I have a contained user with a specified SID and password hash. In my example above, I renamed the user after migrating it to contained so that it is, hopefully, easier to understand. Enjoy!

    Read the article

  • Making your WCF Web Apis to speak in multiple languages

    - by cibrax
    One of the key aspects of how the web works today is content negotiation. The idea of content negotiation is based on the fact that a single resource can have multiple representations, so user agents (or clients) and servers can work together to choose one of them. The HTTP specification defines several "Accept" headers that a client can use to negotiate content with a server, and among all those, there is one for restricting the set of natural languages that are preferred as a response to a request, "Accept-Language". For example, a client can specify "es" in this header to indicate that it prefers to receive the content in Spanish, or "en" for English. However, there are certain scenarios where the "Accept-Language" header is just not enough, and you might want to have a way to pass the "accepted" language as part of the resource url as an extension. For example, "http://localhost/ProductCatalog/Products/1.es" returns all the descriptions for the product with id "1" in Spanish. This is useful for scenarios in which you want to embed the link somewhere, such as a document, an email or a page.  Supporting both scenarios, the header and the url extension, is really simple in the new WCF programming model. You only need to provide a processor implementation for each of them. Let's say I have a resource implementation as part of a product catalog I want to expose with the WCF Web APIs. [ServiceContract][Export]public class ProductResource{ IProductRepository repository;  [ImportingConstructor] public ProductResource(IProductRepository repository) { this.repository = repository; }  [WebGet(UriTemplate = "{id}")] public Product Get(string id, HttpResponseMessage response) { var product = repository.GetById(int.Parse(id)); if (product == null) { response.StatusCode = HttpStatusCode.NotFound; response.Content = new StringContent(Messages.OrderNotFound); }  return product; }} The Get method implementation in this resource assumes the desired culture will be attached to the current thread (Thread.CurrentThread.Culture). Another option is to pass the desired culture as an additional argument in the method, so my processor implementation will handle both options. This method is also using an auto-generated class for handling string resources, Messages, which is available in the different cultures that the service implementation supports. For example, Messages.resx contains "OrderNotFound": "Order Not Found" Messages.es.resx contains "OrderNotFound": "No se encontro orden" The processor implementation below tackles the first scenario, in which the desired language is passed as part of the "Accept-Language" header.
public class CultureProcessor : Processor<HttpRequestMessage, CultureInfo>{ string defaultLanguage = null;  public CultureProcessor(string defaultLanguage = "en") { this.defaultLanguage = defaultLanguage; this.InArguments[0].Name = HttpPipelineFormatter.ArgumentHttpRequestMessage; this.OutArguments[0].Name = "culture"; }  public override ProcessorResult<CultureInfo> OnExecute(HttpRequestMessage request) { CultureInfo culture = null; if (request.Headers.AcceptLanguage.Count > 0) { var language = request.Headers.AcceptLanguage.First().Value; culture = new CultureInfo(language); } else { culture = new CultureInfo(defaultLanguage); }  Thread.CurrentThread.CurrentCulture = culture; Messages.Culture = culture;  return new ProcessorResult<CultureInfo> { Output = culture }; }}   As you can see, the processor initializes a new CultureInfo instance with the value provided in the "Accept-Language" header, and sets that instance on the current thread and on the auto-generated resource class with all the messages. In addition, the CultureInfo instance is returned as an output argument called "culture", making it possible to receive that argument in any method implementation.   The following code shows the implementation of the processor for handling languages as URL extensions.   public class CultureExtensionProcessor : Processor<HttpRequestMessage, Uri>{ public CultureExtensionProcessor() { this.OutArguments[0].Name = HttpPipelineFormatter.ArgumentUri; }  public override ProcessorResult<Uri> OnExecute(HttpRequestMessage httpRequestMessage) { var requestUri = httpRequestMessage.RequestUri.OriginalString;  var extensionPosition = requestUri.LastIndexOf(".");  if (extensionPosition > -1) { var extension = requestUri.Substring(extensionPosition + 1);  var query = httpRequestMessage.RequestUri.Query;  requestUri = string.Format("{0}?{1}", requestUri.Substring(0, extensionPosition), query);  var uri = new Uri(requestUri);  httpRequestMessage.Headers.AcceptLanguage.Clear();  httpRequestMessage.Headers.AcceptLanguage.Add(new StringWithQualityHeaderValue(extension));  var result = new ProcessorResult<Uri>();  result.Output = uri;  return result; }  return new ProcessorResult<Uri>(); }} The last step is to inject both processors as part of the service configuration, as shown below. public void RegisterRequestProcessorsForOperation(HttpOperationDescription operation, IList<Processor> processors, MediaTypeProcessorMode mode){ processors.Insert(0, new CultureExtensionProcessor()); processors.Add(new CultureProcessor());} Once you have configured the two processors in the pipeline, your service will start speaking different languages :). Note: URL extensions don't seem to work in the current bits when you use them on a base address. As far as I could see, ASP.NET intercepts the request first and tries to route the request to a registered ASP.NET Http Handler with that extension. For example, "http://localhost/ProductCatalog/products.es" does not work, but "http://localhost/ProductCatalog/products/1.es" does.
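The article mentions, as an alternative to relying on Thread.CurrentThread.CurrentCulture, that the culture produced by the processor (its output argument named "culture") could be received as an additional method argument. Below is a rough sketch of what that variant of the resource method might look like; it is my own illustration of that option (and of looking up the resource string for a specific culture), not code from the original post.

// Sketch only: assumes this lives inside the ProductResource class shown earlier
// and that the pipeline binds the "culture" output argument by name.
// Requires using System.Globalization;
[WebGet(UriTemplate = "{id}")]
public Product Get(string id, CultureInfo culture, HttpResponseMessage response)
{
    var product = repository.GetById(int.Parse(id));
    if (product == null)
    {
        response.StatusCode = HttpStatusCode.NotFound;
        // Look up the localized message explicitly instead of using the thread culture
        response.Content = new StringContent(
            Messages.ResourceManager.GetString("OrderNotFound", culture));
    }
    return product;
}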

    Read the article

  • New Replication, Optimizer and High Availability features in MySQL 5.6.5!

    - by Rob Young
    As the Product Manager for the MySQL database it is always great to announce when the MySQL Engineering team delivers another great product release. As a field DBA and developer it is even better when that release contains improvements and innovation that I know will help those currently using MySQL for apps that range from modest intranet sites to the most highly trafficked web sites on the web. That said, it is my pleasure to take my hat off to MySQL Engineering for today's release of the MySQL 5.6.5 Development Milestone Release ("DMR"). The new highlighted features in MySQL 5.6.5 are discussed here:

New Self-Healing Replication Clusters
The 5.6.5 DMR improves MySQL Replication by adding Global Transaction IDs and automated utilities for self-healing Replication clusters. Prior to 5.6.5 this has been somewhat of a pain point for MySQL users, with most developing custom solutions or looking to costly, complex third-party solutions for these capabilities. With 5.6.5 these shackles are all but removed by a solution that is included with the GPL version of the database and supporting GPL tools. You can learn all about the details of the great, problem-solving Replication features in MySQL 5.6 in Mat Keep's Developer Zone article.

New Replication Administration and Failover Utilities
As mentioned above, the new Replication features, Global Transaction IDs specifically, are now supported by a set of automated GPL utilities that leverage the new GTIDs to provide administration and manual or automatic failover to the most up-to-date slave (that is the default, but user configurable if needed) in the event of a master failure. The new utilities, along with links to Engineering-related blogs, are discussed in detail in the DevZone article noted above.

Better Query Optimization and Throughput
The MySQL Optimizer team continues to amaze with the latest round of improvements in 5.6.5. Along with much refactoring of the legacy code base, the Optimizer team has improved complex query optimization and throughput by adding these functional improvements:
- Subquery optimizations - subqueries are now included in the Optimizer path for runtime optimization. Better throughput of nested queries enables application developers to simplify and consolidate multiple queries and result sets into a single unit of work.
- The Optimizer now uses CURRENT_TIMESTAMP as the default for DATETIME columns - for simplification, this eliminates the need for application developers to assign this value when a column of this type is blank by default.
- Optimizations for range-based queries - the Optimizer now uses ready statistics vs index-based scans for queries with multiple range values.
- Optimizations for queries using filesort and ORDER BY - the optimization criteria/decision on the execution method is now made at the optimization stage vs the parsing stage.
- Print EXPLAIN in JSON format for hierarchical readability and Enterprise tool consumption.
You can learn the details about these new features, as well as all of the Optimizer-based improvements in MySQL 5.6, by following the Optimizer team blog. You can download and try the MySQL 5.6.5 DMR here (look under "Development Releases"). Please let us know what you think! The new HA utilities for Replication Administration and Failover are available as part of the MySQL Workbench Community Edition, which you can download here.

Also New in MySQL Labs
As has become our tradition when announcing DMRs, we also like to provide "Early Access" development features to the MySQL Community via the MySQL Labs.
Today is no exception, as we are also releasing the following to Labs for you to download, try and let us know your thoughts on where we need to improve:

InnoDB Online Operations
MySQL 5.6 now provides Online ADD Index, FK Drop and Online Column RENAME. These operations are non-blocking and will continue to evolve in future DMRs. You can learn the grainy details by following John Russell's blog.

InnoDB data access via Memcached API ("NotOnlySQL") - improved refresh of an earlier feature release
Similar to Cluster 7.2, MySQL 5.6 provides direct NotOnlySQL access to InnoDB data via the familiar Memcached API. This provides the ultimate in flexibility for developers who need fast, simple key/value access and complex query support commingled within their applications.

Improved Transactional Performance, Scale
The InnoDB Engineering team has once again under promised and over delivered in the area of improved performance and scale. These improvements are also included in the aggregated Spring 2012 labs release. InnoDB CPU cache performance improvements for modern, multi-core/CPU systems show great promise, with internal tests showing:
- 2x throughput improvement for read-only activity
- 6x throughput improvement for SELECT range
- Read/write benchmarks are in progress
More details on the above are available here. You can download all of the above in an aggregated "InnoDB 2012 Spring Labs Release" binary from the MySQL Labs. You can also learn more about these improvements and about related fixes to mysys mutex and hash sort by checking out the InnoDB team blog.

MySQL 5.6.5 is another installment in what we believe will be the best release of the MySQL database ever. It also serves as a shining example of how the MySQL Engineering team at Oracle leads in MySQL innovation. You can get the overall Oracle message on the MySQL 5.6.5 DMR and Early Access labs features here. As always, thanks for your continued support of MySQL, the #1 open source database on the planet!

    Read the article

  • Some notes on Reflector 7

    - by CliveT
    Both Bart and I have blogged about some of the changes that we (and other members of the team) have made to .NET Reflector for version 7, including the new tabbed browsing model, the inclusion of Jason Haley's PowerCommands add-in and some improvements to decompilation such as handling iterator blocks. The intention of this blog post is to cover all of the main new features in one place, and to describe the three new editions of .NET Reflector 7. If you'd simply like to try out the latest version of the beta for yourself you can do so here. Three new editions .NET Reflector 7 will come in three new editions: .NET Reflector .NET Reflector VS .NET Reflector VSPro The first edition is just the standalone Windows application. The latter two editions include the Windows application, but also add the power of Reflector into Visual Studio so that you can save time switching tools and quickly get to the bottom of a debugging issue that involves third-party code. Let's take a look at some of the new features in each edition. Tabbed browsing .NET Reflector now has a tabbed browsing model, in which the individual tabs have independent histories. You can open a new tab to view the selected object by using CTRL+CLICK. I've found this really useful when I'm investigating a particular piece of code but then want to focus on some other methods that I find along the way. For version 7, we wanted to implement the basic idea of tabs to see whether it is something that users will find helpful. If it is something that enhances productivity, we will add more tab-based features in a future version. PowerCommands add-in We have also included Jason Haley's PowerCommands add-in as part of version 7. This add-in provides a number of useful commands, including support for opening .xap files and extracting the constituent assemblies, and a query editor that allows C# queries to be written and executed against the Reflector object model . All of the PowerCommands features can be turned on from the options menu. We will be really interested to see what people are finding useful for further integration into the main tool in the future. My personal favourite part of the PowerCommands add-in is the query editor. You can set up as many of your own queries as you like, but we provide 25 to get you started. These do useful things like listing all extension methods in a given assembly, and displaying other lower-level information, such as the number of times that a given method uses the box IL instruction. These queries can be extracted and then executed from the 'Run Query' context menu within the assembly explorer. Moreover, the queries can be loaded, modified, and saved using the built-in editor, allowing very specific user customization and sharing of queries. The PowerCommands add-in contains many other useful utilities. For example, you can open an item using an external application, work with enumeration bit flags, or generate assembly binding redirect files. You can see Bart's earlier post for a more complete list. .NET Reflector VS .NET Reflector VS adds a brand new Reflector object browser into Visual Studio to save you time opening .NET Reflector separately and browsing for an object. A 'Decompile and Explore' option is also added to the context menu of references in the Solution Explorer, so you don't need to leave Visual Studio to look through decompiled code. We've also added some simple navigation features to allow you to move through the decompiled code as quickly and easily as you can in .NET Reflector. 
When this is selected, the add-in decompiles the given assembly. Once the decompilation has finished, a clone of the Reflector assembly explorer can be used inside Visual Studio. When Reflector generates the source code, it records the location information. You can therefore navigate from the source file to other decompiled source using the 'Go To Definition' context menu item. This then takes you to the definition in another decompiled assembly. .NET Reflector VSPro .NET Reflector VSPro builds on the features in .NET Reflector VS to add the ability to debug any source code you decompile. When you decompile with .NET Reflector VSPro, a matching .pdb is generated, so you can use Visual Studio to debug the source code as if it were part of the project. You can now use all the standard debugging techniques that you are used to in the Visual Studio debugger, and step through decompiled code as if it were your own. Again, you can select assemblies for decompilation. They are then decompiled. And then you can debug as if they were one of your own source code files. The future of .NET Reflector As I have mentioned throughout this post, most of the new features in version 7 are exploratory steps and we will be watching feedback closely. Although we don't want to speculate now about any other new features or bugs that will or won't be fixed in the next few versions of .NET Reflector, Bart has mentioned in a previous post that there are lots of improvements we intend to make. We plan to do this with great care and without taking anything away from the simplicity of the core product. User experience is something that we pride ourselves on at Red Gate, and it is clear that Reflector is still a long way off our usual standards. We plan for the next few versions of Reflector to be worked on by some of our top usability specialists who have been involved with our other market-leading products such as the ANTS Profilers and SQL Compare. I reiterate the need for the really great simple mode in .NET Reflector to remain intact regardless of any other improvements we are planning to make. I really hope that you enjoy using some of the new features in version 7 and that Reflector continues to be your favourite .NET development tool for a long time to come.

    Read the article

  • Kendo UI Mobile with Knockout for Master-Detail Views

    - by Steve Michelotti
    Lately I've been playing with Kendo UI Mobile to build iPhone apps. It's similar to jQuery Mobile in that they are both HTML5/JavaScript based frameworks for building mobile apps. The primary thing that drew me to investigate Kendo UI was its innate ability to adaptively render a native looking app based on detecting the device it's currently running on. In other words, it will render to look like a native iPhone app if it's running on an iPhone and it will render to look like a native Droid app if it's running on a Droid. This is in contrast to jQuery Mobile, which looks the same on all devices and, therefore, can never quite look native for whatever device it's running on. My first impressions of Kendo UI were great. Using HTML5 data-* attributes to define "roles" for UI elements is easy, the rendering looked great, and the basic navigation was simple and intuitive. However, I ran into major confusion when trying to figure out how to "correctly" build master-detail views. Since I was already very familiar with KnockoutJS, I set out to use that framework in conjunction with Kendo UI Mobile to build the following simple scenario: I wanted to have a simple "Task Manager" application where my first screen just showed a list of tasks like this:   Then clicking on a specific task would navigate to a detail screen that would show all details of the specific task that was selected:   Basic navigation between views in Kendo UI is simple. The href of an <a> tag just needs to specify a hash tag followed by the ID of the view to navigate to, as shown in this jsFiddle (notice the href of the <a> tag matches the id of the second view):   Direct link to jsFiddle: here. That is all well and good, but the problem I encountered was: how to pass data between the views? Specifically, I need the detail view to display all the details of whichever task was selected. If I was doing this with my typical technique with KnockoutJS, I know exactly what I would do. First I would create a view model that had my collection of tasks and a property for the currently selected task like this: function ViewModel() { var self = this; self.tasks = ko.observableArray(data); self.selectedTask = ko.observable(null); } Then I would bind my list of tasks to the unordered list - I would attach a "click" handler to each item (each <li> in the unordered list) so that it would select the "selectedTask" for the view model. The problem I found is this approach simply wouldn't work for Kendo UI Mobile. It completely ignored the click handlers that I was trying to attach to the <a> tags - it just wanted to look at the href (at least that's what I observed). But if I can't intercept this, then *how* can I pass data or any context to the next view? The only thing I was able to find in the Kendo documentation is that you can pass query string arguments on the view name you're specifying in the href. This enabled me to do the following:
- Specify the task ID in each href - something like this: <a href="#taskDetail?id=3"></a>
- Attach an "init method" (via the "data-show" attribute on the details view) that runs whenever the view is activated
- Inside this "init method", grab the task ID passed from the query string to look up the item from my view model's list of tasks in order to set the selected task
I was able to get all that working with about 20 lines of JavaScript as shown in this jsFiddle.
If you click on the Results tab, you can navigate between views and see that the detail screen is correctly binding to the selected item:   Direct link to jsFiddle: here.   With all that done, I was very happy to get it working with the behavior I wanted. However, I have no idea if that is the "correct" way to do it or if there is a "better" way to do it. I know that Kendo UI comes with its own data binding framework, but my preference is to be able to use (the well-documented) KnockoutJS, since I'm already familiar with that framework, rather than having to learn yet another new framework. While I think my solution above is probably "acceptable", there are still a couple of things that bug me about it. First, it seems odd that I have to loop through my items to *find* my selected item based on the ID that was passed on the query string - normally, with Knockout I can just refer directly to my selected item from where it was used. Second, it didn't feel exactly right that I had to rely on the "data-show" method of the details view to set my context - normally with Knockout, I could just attach a click handler to the <a> tag that was actually clicked by the user in order to set the "selected item." I'm not sure if I'm being too picky. I know there are many people that have *way* more expertise in Kendo UI compared to me - I'd be curious to know if there are better ways to achieve the same results.

    Read the article

  • Trigger Happy

    - by Tim Dexter
    It's been a while, I know; we'll say no more, OK? I'll just write … In the latest BIP 11.1.1.6 release, and if I'm really honest, the release before this (we'll call it dot 5 for brevity), the boys and gals in the engine room have been real busy enhancing BIP with some new functionality. Those of you that use the scheduling engine in OBIEE may already know and use the 'conditional scheduling' feature. This allows you to be more intelligent about what reports get run and sent to folks on a scheduled basis. You create a 'trigger' analysis (answer) that is executed at schedule time prior to the main report. When the schedule rolls around, the trigger is run; if it returns rows, then the main report is run and delivered. If there are no rows returned, then the main report is not run. Useful, right? Your users are not bombarded with 20 reports in their inbox every week that they need to wade through. They get a handful that they know they need to look at. If you ensure you use conditional formatting in the report, then they can find the anomalous data in the reports very quickly and move on to the rest of their day more quickly. You could even think of OBIEE as a virtual team member, scouring the data on your behalf 24/7 and letting you know when it's found an issue. BI Publisher, wanting the team t-shirt and the khaki pants, has followed suit. You can now set up 'triggers' for it to execute before it runs the main report. Just like its big brother, if the scheduled report trigger returns rows of data, it then executes the main report. Otherwise, the report is skipped until the next schedule time rolls around. Sound familiar? BIP differs a little, in that you only need to construct a query to act as the trigger rather than a complete report. Let's assume we have a monthly wage by department report on a schedule. We only want to send the report to managers if their departmental wages reach and/or exceed a certain amount. The toughest part about this is coming up with the SQL to test the business rule you want to implement. For my example, it's not that tough: select d.department_name, sum(e.salary) as wage_total from employees e, departments d where d.department_id = e.department_id group by d.department_name having sum(e.salary) > 230000 We're looking for departments where the wage cost is greater than 230,000 Dexter Dollars! With a bit of messing I found out you can parametrize the query. Users can then set a value at schedule time if they need to. Creating the trigger is straightforward enough. You can create multiple triggers for users to select at schedule time. Notice I also used a parameter in the query, :wamount. Note the matching parameter in the tree on the left. You also don't need to return multiple columns; one is fine, the key is whether there are rows returned. You can build the rest of your report as usual. At scheduling time the Schedule tab has a bit more on it. If your users want to set the trigger, they check the Use Trigger box. The page will then pop fields to pick the appropriate trigger they want to use, even a trigger on another data model if needed. Note it will also ask for the parameter value associated with the trigger. At this point you should note that the data model does not make a distinction between trigger and data model (extract) parameters, so users will see the parameters on both the General and Schedule tabs. If perchance you do need to have trigger-only parameters,
you can just hide them from the report using the Parameters popup in the report designer - just un-check the 'Show' box. I have tested the opposite case, where you do not want main report parameters seen in the trigger section; BIP handles that for you! Once the report hits its allotted schedule time, the trigger is executed. Based on the results, the report will either run or be 'skipped.' Now you have a smarter scheduler that will only deliver reports when folks need to see them and take action on the contents. More official info here for developers and here for users.

    Read the article

  • Closing the gap between strategy and execution with Oracle Business Intelligence 11g

    - by manan.goel(at)oracle.com
    Wikipedia defines strategy as a plan of action designed to achieve a particular goal. An example of this is General Electric's acquisitions and divestiture strategy (plan) designed to propel GE to the number 1 or 2 place (goal) in every business segment that it operated in. Execution, on the other hand, can be defined as the actions taken to get things done. In GE's case execution would be the steps followed for mergers/acquisitions or divestiture. The business press has written extensively about the importance of both strategy and execution in achieving desired business objectives. Perhaps the quote from Thomas Edison says it best - "vision without execution is hallucination". Conversely, it can be said that "execution without vision" is, well, maybe "wishful thinking". Research overwhelmingly points towards the wide gap between strategy and execution. According to a published study, 49% of surveyed executives perceive a gap between their organizations' ability to develop and communicate sound strategies and their ability to implement those strategies. Further, of these respondents, 64% don't have full confidence that their companies will be able to close the gap. Having established the severity and importance of the problem, let's talk about the reasons for the strategy-execution gap. The common reasons include:
- Lack of clearly defined goals
- Lack of a consistent measure of success
- Lack of ownership
- Lack of alignment
- Lack of communication
- Lack of proper execution
- Lack of monitoring
There are multiple approaches to solving the problem, including organizational development practices, technology enablement etc. In most cases a combination of approaches is required to achieve the desired result. For the purposes of this discussion, I'll focus on technology. Imagine an integrated closed loop technology platform that automates the entire management cycle from defining strategy to assigning ownership to communicating goals to achieving alignment to collaboration to taking actions to monitoring progress and achieving mid-course corrections. Besides, for best ROI and lowest TCO such a system should also have characteristics like:
- Complete: full functionality, rich end user access
- Open: any data source, any business application, any technology stack
- Integrated: common metadata, common security, common system management
From a capabilities perspective the system should provide the following:
- Define: strategy, objectives, ownership, KPIs
- Communicate: pervasive, collaborative, role based, secure
- Execute: integrated, intuitive, secure, ubiquitous
- Monitor: multiple styles and formats, exception based, push and pull
Having talked about the business problem and outlined the blueprint for a technology solution, let's talk about how Oracle Business Intelligence 11g can help. Oracle Business Intelligence is a comprehensive business intelligence solution for reporting, ad hoc query and analysis, OLAP, dashboards and scorecards. Oracle's best-in-class BI platform is based on an architecturally integrated technology foundation that provides a unified end user experience and features a Common Enterprise Information Model, with common security, query request generation and optimization, and system management.
The BI platform is:
    - Complete - it delivers all modes and styles of BI, including reporting, ad hoc query and analysis, OLAP, dashboards and scorecards, with a rich end user experience that includes visualization, collaboration, alerts and notifications, search and mobile access.
    - Open - it integrates with any data source, ETL tool, business application, application server, security infrastructure or portal technology, as well as any ODBC-compliant third party analytical tool. The suite accesses data from multiple heterogeneous sources, including popular relational and multidimensional data sources and major ERP and CRM applications from Oracle and SAP.
    - Integrated - it is based on an architecturally integrated technology foundation built on an open, standards-based service oriented architecture. The platform features a common enterprise information model, a common security model and a common configuration, deployment and systems management framework.
    To summarize, Oracle Business Intelligence is a comprehensive, integrated BI platform that lets you define strategy, identify objectives, assign ownership, define KPIs, collaborate, take action, monitor, report and make course corrections, all from a single interface and a single system. The platform's integrated metadata model and task-based design ensure that the entire workflow from defining strategy to execution to monitoring is completely integrated, delivering end-to-end visibility, transparency and agility. Click here to learn more about Oracle BI 11g. 

    Read the article

  • Database Owner Conundrum

    - by Johnm
    Have you ever restored a database from a production environment on Server A into a development environment on Server B and had some items, such as Service Broker, mysteriously cease functioning? You might want to consider reviewing the database owner property of the database.
    The Scenario
    Recently, I was developing some messaging functionality that utilized the Service Broker feature of SQL Server in a development environment. Within the instance of the development environment resided two databases: one was a restored version of a production database that we will call "RestoreDB"; the second database was a brand new database that has yet to exist in the production environment that we will call "DevDB". The goal is to set up a communication path between RestoreDB and DevDB that will later be implemented into the production database. After implementing all of the Service Broker objects that are required to communicate within a database as well as between two databases on the same instance, I found myself a bit confounded. My testing was showing that the communication was successful when it occurred internally within DevDB, but the communication between RestoreDB and DevDB did not appear to be working.
    Profiler to the rescue
    After carefully reviewing my code for any misspellings, missing commas or any other minor items that might be a syntactical cause of failure, I decided to launch Profiler to aid in the troubleshooting. After simulating the cross-database messaging, I noticed the following error appearing in Profiler:
    An exception occurred while enqueueing a message in the target queue. Error: 33009, State: 2. The database owner SID recorded in the master database differs from the database owner SID recorded in database '[Database Name Here]'. You should correct this situation by resetting the owner of database '[Database Name Here]' using the ALTER AUTHORIZATION statement.
    Now, this error message is a helpful one. Not only does it identify the issue in plain language, it also provides a potential solution. An execution of the following query, which utilizes the catalog view sys.transmission_queue, revealed the same error message for each communication attempt:
    SELECT * FROM sys.transmission_queue;
    Seeing the situation as a learning opportunity, I dove a bit deeper.
    Reviewing the database properties
    The owner of a specific database can be easily viewed by right-clicking the database in SQL Server Management Studio and selecting the "Properties" option. The owner is listed on the "General" page of the properties screen. In my scenario, the database in the production server was created by Frank the DBA; therefore his server login appeared as the owner: "ServerName\Frank". While this is interesting information, it certainly doesn't tell me much in regard to the SID (security identifier) and its existence, or lack thereof, in the master database as the error suggested. I pulled together the following query to gather more interesting information:
    SELECT
        a.name
        , a.owner_sid
        , b.sid
        , b.name
        , b.type_desc
    FROM
        master.sys.databases a
        LEFT OUTER JOIN master.sys.server_principals b
            ON a.owner_sid = b.sid
    WHERE
        a.name not in ('master','tempdb','model','msdb');
    This query also helped identify how many other user databases in the instance were experiencing the same issue. In this scenario, I saw that there were no SIDs in server_principals matching the owner SID for my database.
    What login should be used as the database owner instead of Frank's? The system stored procedure sp_helplogins will provide a list of the valid logins that can be used. Here is an example of its use, revealing all available logins:
    EXEC sp_helplogins;
    Fixing a hole
    The error message stated that the recommended solution was to execute the ALTER AUTHORIZATION statement. The full statement for this scenario would appear as follows:
    ALTER AUTHORIZATION ON DATABASE::[Database Name Here] TO [Login Name];
    Another option is to execute the following statement using the sp_changedbowner system stored procedure; but please keep in mind that this stored procedure has been deprecated and will likely disappear in future versions of SQL Server:
    EXEC dbo.sp_changedbowner @loginname = [Login Name];
    And They Lived Happily Ever After
    Upon changing the database owner to an existing login and simulating the inner and cross-database messaging, the errors ceased. More importantly, all messages sent through this feature now successfully complete their journey. I have added the ownership change to my restoration script for the development environment.
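    Building on the diagnostic query above, the following T-SQL is an illustrative sketch (not from the original post) that lists every user database whose owner SID has no matching server principal and generates the corresponding ALTER AUTHORIZATION statement. The target login [sa] is only a placeholder; substitute the login you actually want as owner and review the generated statements before running them.
    -- Sketch: find databases with orphaned owner SIDs and build a fix statement for each.
    SELECT d.name AS database_name,
           SUSER_SNAME(d.owner_sid) AS recorded_owner,
           'ALTER AUTHORIZATION ON DATABASE::' + QUOTENAME(d.name) + ' TO [sa];' AS fix_statement
    FROM   master.sys.databases d
           LEFT OUTER JOIN master.sys.server_principals p
                  ON d.owner_sid = p.sid
    WHERE  d.name NOT IN ('master', 'tempdb', 'model', 'msdb')
           AND p.sid IS NULL;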

    Read the article

  • ROracle support for TimesTen In-Memory Database

    - by Sam Drake
    Today's guest post comes from Jason Feldhaus, a Consulting Member of Technical Staff in the TimesTen Database organization at Oracle. He shares with us a sample session using ROracle with the TimesTen In-Memory Database. Beginning in version 1.1-4, ROracle includes support for the Oracle TimesTen In-Memory Database, version 11.2.2. TimesTen is a relational database that provides very fast response times and high throughput through its memory-centric architecture. TimesTen is designed for low latency, high-volume data, and event and transaction management. A TimesTen database resides entirely in memory, so no disk I/O is required for transactions and query operations. TimesTen is used in applications requiring very fast and predictable response time, such as real-time financial services trading applications and large web applications. TimesTen can be used as the database of record or as a relational cache database to Oracle Database. ROracle provides an interface between R and the database, providing the rich functionality of the R statistical programming environment using the SQL query language. ROracle uses the OCI libraries to handle database connections, providing much better performance than standard ODBC. The latest ROracle enhancements include:
    - Support for Oracle TimesTen In-Memory Database
    - Support for Date-Time using R's POSIXct/POSIXlt data types
    - RAW, BLOB and BFILE data type support
    - Option to specify number of rows per fetch operation
    - Option to prefetch LOB data
    - Break support using Ctrl-C
    - Statement caching support
    TimesTen 11.2.2 contains enhanced support for analytics workloads and complex queries:
    - Analytic functions: AVG, SUM, COUNT, MAX, MIN, DENSE_RANK, RANK, ROW_NUMBER, FIRST_VALUE and LAST_VALUE
    - Analytic clauses: OVER PARTITION BY and OVER ORDER BY
    - Multidimensional grouping operators: grouping clauses (GROUP BY CUBE, GROUP BY ROLLUP, GROUP BY GROUPING SETS) and grouping functions (GROUP, GROUPING_ID, GROUP_ID)
    - WITH clause, which allows repeated references to a named subquery block
    - Aggregate expressions over DISTINCT expressions
    - General expressions that return a character string in the source or a pattern within the LIKE predicate
    - Ability to order nulls first or last in a sort result (NULLS FIRST or NULLS LAST in the ORDER BY clause)
    Note: Some functionality is only available with Oracle Exalytics; refer to the TimesTen product licensing document for details.
    Connecting to TimesTen is easy with ROracle. Simply install and load the ROracle package and load the driver.
    > install.packages("ROracle")
    > library(ROracle)
    Loading required package: DBI
    > drv <- dbDriver("Oracle")
    Once the ROracle package is installed, create a database connection object and connect to a TimesTen direct driver DSN as the OS user.
    > conn <- dbConnect(drv, username ="", password="", dbname = "localhost/SampleDb_1122:timesten_direct")
    You have the option to report the server type - Oracle or TimesTen?
    > print (paste ("Server type =", dbGetInfo (conn)$serverType))
    [1] "Server type = TimesTen IMDB"
    To create tables in the database using R data frame objects, use the function dbWriteTable. In the following example we write the built-in iris data frame to TimesTen. The iris data set is a small example data set containing 150 rows and 5 columns. We include it here not to highlight performance, but so users can easily run this example in their R session.
    > dbWriteTable (conn, "IRIS", iris, overwrite=TRUE, ora.number=FALSE)
    [1] TRUE
    Verify that the newly created IRIS table is available in the database. 
To list the available tables and table columns in the database, use dbListTables and dbListFields, respectively.
    > dbListTables (conn)
    [1] "IRIS"
    > dbListFields (conn, "IRIS")
    [1] "SEPAL.LENGTH" "SEPAL.WIDTH" "PETAL.LENGTH" "PETAL.WIDTH" "SPECIES"
    To retrieve a summary of the data from the database we need to save the results to a local object. The following call saves the results of the query as a local R object, iris.summary. The ROracle function dbGetQuery is used to execute an arbitrary SQL statement against the database. When connected to TimesTen, the SQL statement is processed completely within main memory for the fastest response time.
    > iris.summary <- dbGetQuery(conn, 'SELECT SPECIES, AVG ("SEPAL.LENGTH") AS AVG_SLENGTH, AVG ("SEPAL.WIDTH") AS AVG_SWIDTH, AVG ("PETAL.LENGTH") AS AVG_PLENGTH, AVG ("PETAL.WIDTH") AS AVG_PWIDTH FROM IRIS GROUP BY ROLLUP (SPECIES)')
    > iris.summary
      SPECIES AVG_SLENGTH AVG_SWIDTH AVG_PLENGTH AVG_PWIDTH
    1 setosa 5.006000 3.428000 1.462 0.246000
    2 versicolor 5.936000 2.770000 4.260 1.326000
    3 virginica 6.588000 2.974000 5.552 2.026000
    4 <NA> 5.843333 3.057333 3.758 1.199333
    Finally, disconnect from the TimesTen Database.
    > dbCommit (conn)
    [1] TRUE
    > dbDisconnect (conn)
    [1] TRUE
    We encourage you to download Oracle software for evaluation from the Oracle Technology Network. See these links for our software: TimesTen In-Memory Database, ROracle. As always, we welcome comments and questions on the TimesTen and Oracle R technical forums.
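    As a small follow-on sketch (my own addition, not part of the guest post), the same IRIS table can also be pulled back into a local R data frame with the standard DBI functions and summarized on the client side; this assumes the conn object from the session above, run before the dbDisconnect call:
    # Sketch: read the whole table into R and summarize locally (fine for small tables like IRIS).
    iris.local <- dbReadTable(conn, "IRIS")
    str(iris.local)      # 150 observations of 5 variables
    summary(iris.local)  # per-column summaries computed in the R client
    # For larger tables, push the work to the database instead, as in the dbGetQuery example above.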

    Read the article

  • Exadata?????????INSERT?UPDATE

    - by Liu Maclean(???)
    Hybrid Columnar Compression (HCC) is a feature available only with Exadata storage and is distinct from the Advanced Compression option. With HCC, data is organized into compression units (CUs); each CU spans several database blocks, and within a CU the values of each column are stored together, which is what makes the high compression ratios possible. So what happens when HCC-compressed data is modified by conventional INSERT or UPDATE statements? HCC compression is only applied during bulk initial loads, i.e. direct-path operations such as ALTER TABLE ... MOVE, IMPDP or append (direct-path) INSERT; rows touched by conventional DML do not remain in hybrid columnar format. Updated rows are migrated out of their CU and end up, at best, OLTP-compressed or simply uncompressed. The following test illustrates this behaviour.
    SQL*Plus: Release 11.2.0.2.0 Production on Wed Sep 12 06:14:53 2012
    Copyright (c) 1982, 2010, Oracle. All rights reserved.
    Connected to: Oracle Database 11g Enterprise Edition Release 11.2.0.2.0 - Production
    With the Partitioning, Automatic Storage Management, OLAP, Data Mining and Real Application Testing options
    SQL> grant dba to scott;
    Grant succeeded.
    SQL> conn scott/oracle
    Connected.
    SQL> create table hcc_maclean tablespace users compress for query high as select * from dba_objects;
    Table created.
    SQL> select rowid,owner,object_name,dbms_rowid.rowid_block_number(rowid) from hcc_maclean where owner='MACLEAN';
    ROWID OWNER OBJECT_NAME DBMS_ROWID.ROWID_BLOCK_NUMBER(ROWID)
    ------------------------------ ------------------------------ -------------------- ------------------------------------
    AAAThuAAEAAAHTJAOI MACLEAN SALES 29897
    AAAThuAAEAAAHTJAOJ MACLEAN MYCUSTOMERS 29897
    AAAThuAAEAAAHTJAOK MACLEAN MYCUST_ARCHIVE 29897
    AAAThuAAEAAAHTJAOL MACLEAN MYCUST_QUERY 29897
    AAAThuAAEAAAHTJAOh MACLEAN COMPRESS_QUERY 29897
    AAAThuAAEAAAHTJAOi MACLEAN UNCOMPRESS 29897
    AAAThuAAEAAAHTJAOj MACLEAN CHAINED_ROWS 29897
    AAAThuAAEAAAHTJAOk MACLEAN COMPRESS_QUERY1 29897
    8 rows selected.
    select dbms_rowid.rowid_block_number(rowid),dbms_rowid.rowid_relative_fno(rowid) from hcc_maclean where owner='MACLEAN';
    All eight rows live in the same block (29897), i.e. in the same compression unit. Now update two different rows from two separate sessions:
    session A: update hcc_maclean set OBJECT_NAME=OBJECT_NAME||'DBM' where rowid='AAAThuAAEAAAHTJAOI';
    session B: update hcc_maclean set OBJECT_NAME=OBJECT_NAME||'DBM' where rowid='AAAThuAAEAAAHTJAOJ';
    SQL> select sid,wait_event_text,BLOCKER_SID from v$wait_chains;
    SID WAIT_EVENT_TEXT BLOCKER_SID
    ---------- ---------------------------------------------------------------- -----------
    13 enq: TX - row lock contention 136
    136 SQL*Net message from client
    Session A blocks session B even though they update different rows: when a row inside an HCC compression unit is updated, the lock is effectively taken on the whole CU rather than on the individual row.
    SQL> alter system checkpoint;
    System altered.
    SQL> /
    System altered.
    SQL> alter system dump datafile 4 block 29897;
    Block header dump: 0x010074c9
    Object id on Block? Y
    seg/obj: 0x1386e csc: 0x00.1cad7e itc: 3 flg: E typ: 1 - DATA
    brn: 0 bdba: 0x10074c8 ver: 0x01 opc: 0
    inc: 0 exflg: 0
    Itl Xid Uba Flag Lck Scn/Fsc
    0x01 0xffff.000.00000000 0x00000000.0000.00 C--- 0 scn 0x0000.001cabfa
    0x02 0x000a.00a.00000430 0x00c051a7.0169.17 ---- 1 fsc 0x0000.00000000
    0x03 0x0000.000.00000000 0x00000000.0000.00 ---- 0 fsc 0x0000.00000000
    avsp=0x14 tosp=0x14
    r0_9ir2=0x0 mec_kdbh9ir2=0x0 76543210 shcf_kdbh9ir2=----------
    76543210 flag_9ir2=--R----- Archive compression: Y
    fcls_9ir2[0]={ }
    0x16:pti[0] nrow=1 offs=0 0x1a:pri[0] offs=0x30
    block_row_dump:
    tab 0, row 0, @0x30
    tl: 8016 fb: --H-F--N lb: 0x2 cc: 1 ==> the CU row piece is locked by ITL slot 0x02
    nrid: 0x010074ca.0
    col 0: [8004]
    Compression level: 02 (Query High)
    Length of CU row: 8004
    kdzhrh: ------PC CBLK: 1 Start Slot: 00
    NUMP: 01
    PNUM: 00 POFF: 7984 PRID: 0x010074ca.0
    CU header: CU version: 0 CU magic number: 0x4b445a30
    CU checksum: 0xf8faf86e
    CU total length: 8694
    CU flags: NC-U-CRD-OP
    ncols: 15
    nrows: 995
    algo: 0
    CU decomp length: 8487 len/value length: 100111
    row pieces per row: 1
    num deleted rows: 1
    deleted rows: 904,
    START_CU:
    We can use DBMS_COMPRESSION.GET_COMPRESSION_TYPE to check the compression type of an individual row:
    SQL> select DBMS_COMPRESSION.GET_COMPRESSION_TYPE('SCOTT','HCC_MACLEAN','AAAThuAAEAAAHTJAOk') from dual;
    DBMS_COMPRESSION.GET_COMPRESSION_TYPE('SCOTT','HCC_MACLEAN','AAATHUAAEAAAHTJAOK'
    --------------------------------------------------------------------------------
    4
    The constants defined by DBMS_COMPRESSION are:
    COMP_NOCOMPRESS CONSTANT NUMBER := 1;
    COMP_FOR_OLTP CONSTANT NUMBER := 2;
    COMP_FOR_QUERY_HIGH CONSTANT NUMBER := 4;
    COMP_FOR_QUERY_LOW CONSTANT NUMBER := 8;
    COMP_FOR_ARCHIVE_HIGH CONSTANT NUMBER := 16;
    COMP_FOR_ARCHIVE_LOW CONSTANT NUMBER := 32;
    COMP_RATIO_MINROWS CONSTANT NUMBER := 1000000;
    COMP_RATIO_ALLROWS CONSTANT NUMBER := -1;
    COMP_FOR_QUERY_HIGH is 4 and COMP_FOR_QUERY_LOW is 8, so GET_COMPRESSION_TYPE returning 4 for this rowid confirms that the row is currently compressed with QUERY HIGH. Now update the rows with conventional DML and check again:
    SQL> update hcc_maclean set OBJECT_NAME=OBJECT_NAME||'DBM' where owner='MACLEAN';
    8 rows updated.
    SQL> commit;
    Commit complete.
    SQL> select DBMS_COMPRESSION.GET_COMPRESSION_TYPE('SCOTT','HCC_MACLEAN',rowid) from HCC_MACLEAN where owner='MACLEAN';
    DBMS_COMPRESSION.GET_COMPRESSION_TYPE('SCOTT','HCC_MACLEAN',ROWID)
    ------------------------------------------------------------------
    1
    1
    1
    1
    1
    1
    1
    1
    8 rows selected.
    After the conventional update, the compression type of these rows has changed from COMP_FOR_QUERY_HIGH to COMP_NOCOMPRESS: rows updated by ordinary DML in a table created COMPRESS FOR QUERY HIGH no longer retain hybrid columnar compression, and in 11g there is no mechanism that automatically re-applies HCC to such rows. An ALTER TABLE ... MOVE with a compression clause can be used to recompress the data with HCC:
    SQL> ALTER TABLE hcc_MACLEAN move COMPRESS FOR ARCHIVE HIGH;
    Table altered.
    SQL> select DBMS_COMPRESSION.GET_COMPRESSION_TYPE('SCOTT','HCC_MACLEAN',rowid) from HCC_MACLEAN where owner='MACLEAN';
    DBMS_COMPRESSION.GET_COMPRESSION_TYPE('SCOTT','HCC_MACLEAN',ROWID)
    ------------------------------------------------------------------
    16
    16
    16
    16
    16
    16
    16
    16
    8 rows selected.
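    To gauge how much of a table has drifted out of HCC after a period of conventional DML, a query along the following lines can help. This is an illustrative sketch only (it reuses the SCOTT.HCC_MACLEAN demo table from above; substitute your own owner and table name), and calling DBMS_COMPRESSION.GET_COMPRESSION_TYPE once per row is expensive on large tables:
    -- Sketch: count rows per compression type; 1=NOCOMPRESS, 2=OLTP, 4=QUERY HIGH, 8=QUERY LOW, 16=ARCHIVE HIGH, 32=ARCHIVE LOW
    SELECT DECODE(DBMS_COMPRESSION.GET_COMPRESSION_TYPE('SCOTT', 'HCC_MACLEAN', rowid),
                  1, 'NOCOMPRESS', 2, 'OLTP', 4, 'QUERY HIGH', 8, 'QUERY LOW',
                  16, 'ARCHIVE HIGH', 32, 'ARCHIVE LOW', 'OTHER') AS compression_type,
           COUNT(*) AS row_count
    FROM   scott.hcc_maclean
    GROUP  BY DECODE(DBMS_COMPRESSION.GET_COMPRESSION_TYPE('SCOTT', 'HCC_MACLEAN', rowid),
                  1, 'NOCOMPRESS', 2, 'OLTP', 4, 'QUERY HIGH', 8, 'QUERY LOW',
                  16, 'ARCHIVE HIGH', 32, 'ARCHIVE LOW', 'OTHER');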

    Read the article

  • Migrating R Scripts from Development to Production

    - by Mark Hornick
    "How do I move my R scripts stored in one database instance to another? I have my development/test system and want to migrate to production." Users of Oracle R Enterprise Embedded R Execution will often store their R scripts in the R Script Repository in Oracle Database, especially when using the ORE SQL API. From previous blog posts, you may recall that Embedded R Execution enables running R scripts managed by Oracle Database using both R and SQL interfaces. In ORE 1.3.1, the SQL API requires scripts to be stored in the database and referenced by name in SQL queries. The SQL API enables seamless integration with database-based applications and ease of production deployment.
    Loading R scripts in the repository
    Before talking about migration, we'll first introduce how users store R scripts in Oracle Database. Users can add R scripts to the repository in R using the function ore.scriptCreate, or in SQL using the function sys.rqScriptCreate. For the sample R script
    id <- 1:10
    plot(1:100,rnorm(100),pch=21,bg="red",cex =2)
    data.frame(id=id, val=id / 100)
    users wrap this in a function and store it in the R Script Repository with a name. In R, this looks like
    ore.scriptCreate("RandomRedDots", function () {
      id <- 1:10
      plot(1:100,rnorm(100),pch=21,bg="red",cex =2)
      data.frame(id=id, val=id / 100)
    })
    In SQL, this looks like
    begin
      sys.rqScriptCreate('RandomRedDots',
        'function(){
           id <- 1:10
           plot(1:100,rnorm(100),pch=21,bg="red",cex =2)
           data.frame(id=id, val=id / 100)
         }');
    end;
    /
    The R function ore.scriptDrop and SQL function sys.rqScriptDrop can be used to drop these scripts as well. Note that the system will give an error if the script name already exists.
    Accessing R scripts once they've been loaded
    If you're not using a source code control system, it is possible that your R scripts can be misplaced or files modified, making what is stored in Oracle Database the only or best copy of your R code. If you've loaded your R scripts to the database, it is straightforward to access these scripts from the database table SYS.RQ_SCRIPTS. For example,
    select * from sys.rq_scripts where name='myScriptName';
    From R, scripts in the repository can be loaded into the R client engine using a function similar to the following:
    ore.scriptLoad <- function(name) {
      query <- paste("select script from sys.rq_scripts where name='",name,"'",sep="")
      str.f <- OREbase:::.ore.dbGetQuery(query)
      assign(name,eval(parse(text = str.f)),pos=1)
    }
    ore.scriptLoad("myFunctionName")
    This function is also useful if you want to load an existing R script from the repository into another R script in the repository - think modular coding style. Just include this function in the body of the other function and load the named script.
    Migrating R scripts from one database instance to another
    To move a set of functions from one system to another, the following script loads the functions from one R script repository into the client R engine, then connects to the target database and creates the scripts there with the same names.
    scriptNames <- OREbase:::.ore.dbGetQuery("select name from sys.rq_scripts where name not like 'RQG$%' and name not like 'RQ$%'")$NAME
    for(s in scriptNames) {
      cat(s,"\n")
      ore.scriptLoad(s)
    }
    ore.disconnect()
    ore.connect("rquser","orcl","localhost","rquser")
    for(s in scriptNames) {
      cat(s,"\n")
      ore.scriptDrop(s)
      ore.scriptCreate(s,get(s))
    }
    Best Practice
    When naming R scripts, keep in mind that the name can be up to 128 characters. As such, consider organizing scripts in a directory-structure manner. For example, if an organization has multiple groups or applications sharing the same database and there are multiple components, use "/" to facilitate the function organization:
    ore.scriptCreate("/org1/app1/component1/myFunction1", myFunction1)
    ore.scriptCreate("/org1/app1/component1/myFunction2", myFunction2)
    ore.scriptCreate("/org1/app2/component2/myFunction2", myFunction2)
    ore.scriptCreate("/org2/app2/component1/myFunction3", myFunction3)
    ore.scriptCreate("/org3/app2/component1/myFunction4", myFunction4)
    Users can then query for all functions using the path prefix when looking up functions.
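    As a small illustration of that last point, here is a sketch (my own addition, not from the original post) of a helper that lists every script stored under a given path prefix, using the same internal OREbase query mechanism shown above; the helper name ore.scriptList is invented for this example and an active ORE connection is assumed:
    # Sketch: list repository scripts whose names start with a given prefix.
    # ore.scriptList is a hypothetical helper, not part of the ORE API.
    ore.scriptList <- function(prefix) {
      query <- paste("select name from sys.rq_scripts where name like '", prefix, "%'", sep = "")
      OREbase:::.ore.dbGetQuery(query)$NAME
    }
    # Example: all scripts belonging to org1/app1/component1
    ore.scriptList("/org1/app1/component1/")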

    Read the article

  • Building a Repository Pattern against an EF 5 EDMX Model - Part 1

    - by Juan
    I am part of a year-long-plus project that is re-writing an existing application for a client. We have decided to develop the project using Visual Studio 2012 and .NET 4.5. The project will be using a number of technologies and patterns, including Entity Framework 5, WCF Services, and WPF for the client UI. This is my attempt at documenting some of the successes and failures that I will be coming across in the development of the application.
    In building the data access layer we have to access a database that has already been designed by a dedicated DBA. The DBA insists on using stored procedures, which has made the use of EF a little more difficult. He will not allow direct table access, but we did manage to get him to allow us to use views. Since EF 5 does not have good support for Code First with stored procedures, my option was to create a model (EDMX) against the existing database views. I then had to select each entity and map the Insert/Update/Delete functions to their respective stored procedures.
    The next step, after I had completed mapping the stored procedures to the entities in the EDMX model, was to figure out how to build a generic repository that would work well with Entity Framework 5. After reading the blog posts below, I adopted much of their code with some changes to allow for the use of Ninject for dependency injection.
    http://www.tcscblog.com/2012/06/22/entity-framework-generic-repository/
    http://www.tugberkugurlu.com/archive/generic-repository-pattern-entity-framework-asp-net-mvc-and-unit-testing-triangle
    IRepository.cs
    public interface IRepository<T> : IDisposable where T : class
    {
        void Add(T entity);
        void Update(T entity, int id);
        T GetById(object key);
        IQueryable<T> Query(Expression<Func<T, bool>> predicate);
        IQueryable<T> GetAll();
        int SaveChanges();
        int SaveChanges(bool validateEntities);
    }
    GenericRepository.cs
    public abstract class GenericRepository<T> : IRepository<T> where T : class
    {
        public abstract void Add(T entity);
        public abstract void Update(T entity, int id);
        public abstract T GetById(object key);
        public abstract IQueryable<T> Query(Expression<Func<T, bool>> predicate);
        public abstract IQueryable<T> GetAll();
        public int SaveChanges()
        {
            return SaveChanges(true);
        }
        public abstract int SaveChanges(bool validateEntities);
        public abstract void Dispose();
    }
    One of the issues I ran into was trying to do an update. I kept receiving errors, so I posted a question on Stack Overflow http://stackoverflow.com/questions/12585664/an-object-with-the-same-key-already-exists-in-the-objectstatemanager-the-object and came up with the following hack. If someone has a better way, please let me know.
    DbContextRepository.cs
    public class DbContextRepository<T> : GenericRepository<T> where T : class
    {
        protected DbContext Context;
        protected DbSet<T> DbSet;
        public DbContextRepository(DbContext context)
        {
            if (context == null) throw new ArgumentException("context");
            Context = context;
            DbSet = Context.Set<T>();
        }
        public override void Add(T entity)
        {
            if (entity == null) throw new ArgumentException("Cannot add a null entity.");
            DbSet.Add(entity);
        }
        public override void Update(T entity, int id)
        {
            if (entity == null) throw new ArgumentException("Cannot update a null entity.");
            var entry = Context.Entry(entity);
            if (entry.State == EntityState.Detached)
            {
                var attachedEntity = DbSet.Find(id); // Need to have access to key
                if (attachedEntity != null)
                {
                    var attachedEntry = Context.Entry(attachedEntity);
                    attachedEntry.CurrentValues.SetValues(entity);
                }
                else
                {
                    entry.State = EntityState.Modified; // This should attach entity
                }
            }
        }
        public override T GetById(object key)
        {
            return DbSet.Find(key);
        }
        public override IQueryable<T> Query(Expression<Func<T, bool>> predicate)
        {
            return DbSet.Where(predicate);
        }
        public override IQueryable<T> GetAll()
        {
            return Context.Set<T>();
        }
        public override int SaveChanges(bool validateEntities)
        {
            Context.Configuration.ValidateOnSaveEnabled = validateEntities;
            return Context.SaveChanges();
        }
        #region IDisposable implementation
        public override void Dispose()
        {
            if (Context != null)
            {
                Context.Dispose();
                GC.SuppressFinalize(this);
            }
        }
        #endregion IDisposable implementation
    }
    At this point I am able to start creating the individual repositories that are needed and add a Unit of Work. Stay tuned for the next installment in my path to creating a Repository Pattern against EF5.
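    To make the pattern concrete, here is a minimal sketch of how a repository for a hypothetical Customer entity might be derived and consumed; the entity, context and binding names are illustrative and are not taken from the original post:
    // Sketch: a concrete repository for a hypothetical Customer entity mapped in the EDMX model.
    public class CustomerRepository : DbContextRepository<Customer>
    {
        public CustomerRepository(DbContext context) : base(context) { }
    }
    // Possible Ninject binding (open generics), so consumers can depend on IRepository<T>:
    // kernel.Bind(typeof(IRepository<>)).To(typeof(DbContextRepository<>));
    // Example usage:
    // using (var customers = new CustomerRepository(new MyEdmxContext()))
    // {
    //     var active = customers.Query(c => c.IsActive).ToList();
    //     var customer = customers.GetById(42);
    //     customer.Name = "Updated name";
    //     customers.Update(customer, 42);
    //     customers.SaveChanges();
    // }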

    Read the article

  • ComboBox Data Binding

    - by Geertjan
    Let's create a databound combobox, leveraging MVC in a desktop application. The result will be a combobox, provided by the NetBeans ChoiceView, that displays data retrieved from a database. What follows is not much different from the NetBeans Platform CRUD Application Tutorial and you're advised to consult that document if anything that follows isn't clear enough. One kind of interesting thing about the instructions that follow is that they show that you're able to create an application where each element of the MVC architecture can be located within a separate module. Start by creating a new NetBeans Platform application named "MyApplication".
    Model
    We're going to start by generating JPA entity classes from a database connection. In the New Project wizard, choose "Java Class Library". Click Next. Name the Java Class Library "MyEntities". Click Finish. Right-click the MyEntities project, choose New, and then select "Entity Classes from Database". Work through the wizard, selecting the tables of interest from your database, and naming the package "entities". Click Finish. Now a JPA entity is created for each of the selected tables. In the Project Properties dialog of the project, choose "Copy Dependent Libraries" in the Packaging panel. Build the project. In your project's "dist" folder (visible in the Files window), you'll now see a JAR, together with a "lib" folder that contains the JARs you'll need. In your NetBeans Platform application, create a module named "MyModel", with code name base "org.my.model". Right-click the project, choose Properties, and in the "Libraries" panel, click the Add Dependency button in the Wrapped JARs subtab to add all the JARs from the previous step to the module. Also include "derby-client.jar" or the equivalent driver for your database connection to the module.
    Controler
    In your NetBeans Platform application, create a module named "MyControler", with code name base "org.my.controler". Right-click the module's Libraries node, in the Projects window, and add a dependency on "Explorer & Property Sheet API". In the MyControler module, create a class with this content:
    package org.my.controler;
    import org.openide.explorer.ExplorerManager;
    public class MyUtils {
        static ExplorerManager controler;
        public static ExplorerManager getControler() {
            if (controler == null) {
                controler = new ExplorerManager();
            }
            return controler;
        }
    }
    View
    In your NetBeans Platform application, create a module named "MyView", with code name base "org.my.view". Create a new Window Component, in "explorer" view, for example, let it open on startup, with class name prefix "MyView". Add dependencies on the Nodes API and on the Explorer & Property Sheet API. Also add dependencies on the "MyModel" module and the "MyControler" module. Before doing so, in the "MyModel" module, make the "entities" package and the "javax.persistence" packages public (in the Libraries panel of the Project Properties dialog) and make the one package that you have in the "MyControler" module public too.
    Define the top part of the MyViewTopComponent as follows:
    public final class MyViewTopComponent extends TopComponent implements ExplorerManager.Provider {
        ExplorerManager controler = MyUtils.getControler();
        public MyViewTopComponent() {
            initComponents();
            setName(Bundle.CTL_MyViewTopComponent());
            setToolTipText(Bundle.HINT_MyViewTopComponent());
            setLayout(new BoxLayout(this, BoxLayout.PAGE_AXIS));
            controler.setRootContext(new AbstractNode(Children.create(new ChildFactory<Customer>() {
                @Override
                protected boolean createKeys(List<Customer> list) {
                    EntityManager entityManager = Persistence.createEntityManagerFactory("MyEntitiesPU").createEntityManager();
                    Query query = entityManager.createNamedQuery("Customer.findAll");
                    list.addAll(query.getResultList());
                    return true;
                }
                @Override
                protected Node createNodeForKey(Customer key) {
                    Node customerNode = new AbstractNode(Children.LEAF, Lookups.singleton(key));
                    customerNode.setDisplayName(key.getName());
                    return customerNode;
                }
            }, true)));
            controler.addPropertyChangeListener(new PropertyChangeListener() {
                @Override
                public void propertyChange(PropertyChangeEvent evt) {
                    Customer selectedCustomer = controler.getSelectedNodes()[0].getLookup().lookup(Customer.class);
                    StatusDisplayer.getDefault().setStatusText(selectedCustomer.getName());
                }
            });
            JPanel row1 = new JPanel(new FlowLayout(FlowLayout.LEADING));
            row1.add(new JLabel("Customers: "));
            row1.add(new ChoiceView());
            add(row1);
        }
        @Override
        public ExplorerManager getExplorerManager() {
            return controler;
        }
        ...
        ...
        ...
    Now run the application and you'll see the same as the image with which this blog entry started.
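    Because the ExplorerManager lives in the MyControler module, other modules can observe the same selection without depending on MyView. Here is a minimal sketch (my own illustration, not part of the tutorial) of a listener that another module with a dependency on MyControler could register:
    // Sketch: react to the currently selected Customer from any module that can see MyUtils.
    MyUtils.getControler().addPropertyChangeListener(new PropertyChangeListener() {
        @Override
        public void propertyChange(PropertyChangeEvent evt) {
            if (ExplorerManager.PROP_SELECTED_NODES.equals(evt.getPropertyName())) {
                Node[] nodes = MyUtils.getControler().getSelectedNodes();
                if (nodes.length > 0) {
                    Customer customer = nodes[0].getLookup().lookup(Customer.class);
                    // react to the selection here, e.g. update an editor or detail component
                }
            }
        }
    });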

    Read the article

  • SQL Monitor’s data repository: Alerts

    - by Chris Lambrou
    In my previous post, I introduced the SQL Monitor data repository, and described how the monitored objects are stored in a hierarchy in the data schema, in a series of tables with a _Keys suffix. In this post I had planned to describe how the actual data for the monitored objects is stored in corresponding tables with _StableSamples and _UnstableSamples suffixes. However, I’m going to postpone that until my next post, as I’ve had a request from a SQL Monitor user to explain how alerts are stored. In the SQL Monitor data repository, alerts are stored in tables belonging to the alert schema, which contains the following five tables: alert.Alert alert.Alert_Cleared alert.Alert_Comment alert.Alert_Severity alert.Alert_Type In this post, I’m only going to cover the alert.Alert and alert.Alert_Type tables. I may cover the other three tables in a later post. The most important table in this schema is alert.Alert, as each row in this table corresponds to a single alert. So let’s have a look at it. SELECT TOP 100 AlertId, AlertType, TargetObject, [Read], SubType FROM alert.Alert ORDER BY AlertId DESC;  AlertIdAlertTypeTargetObjectReadSubType 165550397:Cluster,1,4:Name,s29:srp-mr03.testnet.red-gate.com,9:SqlServer,1,4:Name,s0:,10 265549387:Cluster,1,4:Name,s29:srp-mr03.testnet.red-gate.com,7:Machine,1,4:Name,s0:,10 365548187:Cluster,1,4:Name,s7:granger,9:SqlServer,1,4:Name,s0:,8:Database,1,4:Name,s15:FavouriteThings,00 465547157:Cluster,1,4:Name,s7:granger,9:SqlServer,1,4:Name,s0:,8:Database,1,4:Name,s15:FavouriteThings,00 565546147:Cluster,1,4:Name,s7:granger,9:SqlServer,1,4:Name,s0:,8:Database,1,4:Name,s15:FavouriteThings,00 665545187:Cluster,1,4:Name,s7:granger,9:SqlServer,1,4:Name,s0:,8:Database,1,4:Name,s14:SqlMonitorData,00 765544157:Cluster,1,4:Name,s7:granger,9:SqlServer,1,4:Name,s0:,8:Database,1,4:Name,s14:SqlMonitorData,00 865543147:Cluster,1,4:Name,s7:granger,9:SqlServer,1,4:Name,s0:,8:Database,1,4:Name,s14:SqlMonitorData,00 965542187:Cluster,1,4:Name,s7:granger,9:SqlServer,1,4:Name,s0:,8:Database,1,4:Name,s4:msdb,00 1065541147:Cluster,1,4:Name,s7:granger,9:SqlServer,1,4:Name,s0:,8:Database,1,4:Name,s4:msdb,00 11…     So what are we seeing here, then? Well, AlertId is an auto-incrementing identity column, so ORDER BY AlertId DESC ensures that we see the most recent alerts first. AlertType indicates the type of each alert, such as Job failed (6), Backup overdue (14) or Long-running query (12). The TargetObject column indicates which monitored object the alert is associated with. The Read column acts as a flag to indicate whether or not the alert has been read. And finally the SubType column is used in the case of a Custom metric (40) alert, to indicate which custom metric the alert pertains to. Okay, now lets look at some of those columns in more detail. The AlertType column is an easy one to start with, and it brings use nicely to the next table, data.Alert_Type. Let’s have a look at what’s in this table: SELECT AlertType, Event, Monitoring, Name, Description FROM alert.Alert_Type ORDER BY AlertType;  AlertTypeEventMonitoringNameDescription 1100Processor utilizationProcessor utilization (CPU) on a host machine stays above a threshold percentage for longer than a specified duration 2210SQL Server error log entryAn error is written to the SQL Server error log with a severity level above a specified value. 3310Cluster failoverThe active cluster node fails, causing the SQL Server instance to switch nodes. 4410DeadlockSQL deadlock occurs. 
5500Processor under-utilizationProcessor utilization (CPU) on a host machine remains below a threshold percentage for longer than a specified duration 6610Job failedA job does not complete successfully (the job returns an error code). 7700Machine unreachableHost machine (Windows server) cannot be contacted on the network. 8800SQL Server instance unreachableThe SQL Server instance is not running or cannot be contacted on the network. 9900Disk spaceDisk space used on a logical disk drive is above a defined threshold for longer than a specified duration. 101000Physical memoryPhysical memory (RAM) used on the host machine stays above a threshold percentage for longer than a specified duration. 111100Blocked processSQL process is blocked for longer than a specified duration. 121200Long-running queryA SQL query runs for longer than a specified duration. 131400Backup overdueNo full backup exists, or the last full backup is older than a specified time. 141500Log backup overdueNo log backup exists, or the last log backup is older than a specified time. 151600Database unavailableDatabase changes from Online to any other state. 161700Page verificationTorn Page Detection or Page Checksum is not enabled for a database. 171800Integrity check overdueNo entry for an integrity check (DBCC DBINFO returns no date for dbi_dbccLastKnownGood field), or the last check is older than a specified time. 181900Fragmented indexesFragmentation level of one or more indexes is above a threshold percentage. 192400Job duration unusualThe duration of a SQL job duration deviates from its baseline duration by more than a threshold percentage. 202501Clock skewSystem clock time on the Base Monitor computer differs from the system clock time on a monitored SQL Server host machine by a specified number of seconds. 212700SQL Server Agent Service statusThe SQL Server Agent Service status matches the status specified. 222800SQL Server Reporting Service statusThe SQL Server Reporting Service status matches the status specified. 232900SQL Server Full Text Search Service statusThe SQL Server Full Text Search Service status matches the status specified. 243000SQL Server Analysis Service statusThe SQL Server Analysis Service status matches the status specified. 253100SQL Server Integration Service statusThe SQL Server Integration Service status matches the status specified. 263300SQL Server Browser Service statusThe SQL Server Browser Service status matches the status specified. 273400SQL Server VSS Writer Service statusThe SQL Server VSS Writer status matches the status specified. 283501Deadlock trace flag disabledThe monitored SQL Server’s trace flag cannot be enabled. 293600Monitoring stopped (host machine credentials)SQL Monitor cannot contact the host machine because authentication failed. 303700Monitoring stopped (SQL Server credentials)SQL Monitor cannot contact the SQL Server instance because authentication failed. 313800Monitoring error (host machine data collection)SQL Monitor cannot collect data from the host machine. 323900Monitoring error (SQL Server data collection)SQL Monitor cannot collect data from the SQL Server instance. 334000Custom metricThe custom metric value has passed an alert threshold. 344100Custom metric collection errorSQL Monitor cannot collect custom metric data from the target object. 
Basically, alert.Alert_Type is just a big reference table containing information about the 34 different alert types supported by SQL Monitor (note that the largest id is 41, not 34 – some alert types have been retired since SQL Monitor was first developed). The Name and Description columns are self evident, and I’m going to skip over the Event and Monitoring columns as they’re not very interesting. The AlertId column is the primary key, and is referenced by AlertId in the alert.Alert table. As such, we can rewrite our earlier query to join these two tables, in order to provide a more readable view of the alerts: SELECT TOP 100 AlertId, Name, TargetObject, [Read], SubType FROM alert.Alert a JOIN alert.Alert_Type at ON a.AlertType = at.AlertType ORDER BY AlertId DESC;  AlertIdNameTargetObjectReadSubType 165550Monitoring error (SQL Server data collection)7:Cluster,1,4:Name,s29:srp-mr03.testnet.red-gate.com,9:SqlServer,1,4:Name,s0:,00 265549Monitoring error (host machine data collection)7:Cluster,1,4:Name,s29:srp-mr03.testnet.red-gate.com,7:Machine,1,4:Name,s0:,00 365548Integrity check overdue7:Cluster,1,4:Name,s7:granger,9:SqlServer,1,4:Name,s0:,8:Database,1,4:Name,s15:FavouriteThings,00 465547Log backup overdue7:Cluster,1,4:Name,s7:granger,9:SqlServer,1,4:Name,s0:,8:Database,1,4:Name,s15:FavouriteThings,00 565546Backup overdue7:Cluster,1,4:Name,s7:granger,9:SqlServer,1,4:Name,s0:,8:Database,1,4:Name,s15:FavouriteThings,00 665545Integrity check overdue7:Cluster,1,4:Name,s7:granger,9:SqlServer,1,4:Name,s0:,8:Database,1,4:Name,s14:SqlMonitorData,00 765544Log backup overdue7:Cluster,1,4:Name,s7:granger,9:SqlServer,1,4:Name,s0:,8:Database,1,4:Name,s14:SqlMonitorData,00 865543Backup overdue7:Cluster,1,4:Name,s7:granger,9:SqlServer,1,4:Name,s0:,8:Database,1,4:Name,s14:SqlMonitorData,00 965542Integrity check overdue7:Cluster,1,4:Name,s7:granger,9:SqlServer,1,4:Name,s0:,8:Database,1,4:Name,s4:msdb,00 1065541Backup overdue7:Cluster,1,4:Name,s7:granger,9:SqlServer,1,4:Name,s0:,8:Database,1,4:Name,s4:msdb,00 Okay, the next column to discuss in the alert.Alert table is TargetObject. Oh boy, this one’s a bit tricky! The TargetObject of an alert is a serialized string representation of the position in the monitored object hierarchy of the object to which the alert pertains. The serialization format is somewhat convenient for parsing in the C# source code of SQL Monitor, and has some helpful characteristics, but it’s probably very awkward to manipulate in T-SQL. I could document the serialization format here, but it would be very dry reading, so perhaps it’s best to consider an example from the table above. Have a look at the alert with an AlertID of 65543. It’s a Backup overdue alert for the SqlMonitorData database running on the default instance of granger, my laptop. Each different alert type is associated with a specific type of monitored object in the object hierarchy (I described the hierarchy in my previous post). The Backup overdue alert is associated with databases, whose position in the object hierarchy is root → Cluster → SqlServer → Database. The TargetObject value identifies the target object by specifying the key properties at each level in the hierarchy, thus: Cluster: Name = "granger" SqlServer: Name = "" (an empty string, denoting the default instance) Database: Name = "SqlMonitorData" Well, look at the actual TargetObject value for this alert: "7:Cluster,1,4:Name,s7:granger,9:SqlServer,1,4:Name,s0:,8:Database,1,4:Name,s14:SqlMonitorData,". 
It is indeed composed of three parts, one for each level in the hierarchy: Cluster: "7:Cluster,1,4:Name,s7:granger," SqlServer: "9:SqlServer,1,4:Name,s0:," Database: "8:Database,1,4:Name,s14:SqlMonitorData," Each part is handled in exactly the same way, so let’s concentrate on the first part, "7:Cluster,1,4:Name,s7:granger,". It comprises the following: "7:Cluster," – This identifies the level in the hierarchy. "1," – This indicates how many different key properties there are to uniquely identify a cluster (we saw in my last post that each cluster is identified by a single property, its Name). "4:Name,s14:SqlMonitorData," – This represents the Name property, and its corresponding value, SqlMonitorData. It’s split up like this: "4:Name," – Indicates the name of the key property. "s" – Indicates the type of the key property, in this case, it’s a string. "14:SqlMonitorData," – Indicates the value of the property. At this point, you might be wondering about the format of some of these strings. Why is the string "Cluster" stored as "7:Cluster,"? Well an encoding scheme is used, which consists of the following: "7" – This is the length of the string "Cluster" ":" – This is a delimiter between the length of the string and the actual string’s contents. "Cluster" – This is the string itself. 7 characters. "," – This is a final terminating character that indicates the end of the encoded string. You can see that "4:Name,", "8:Database," and "14:SqlMonitorData," also conform to the same encoding scheme. In the example above, the "s" character is used to indicate that the value of the Name property is a string. If you explore the TargetObject property of alerts in your own SQL Monitor data repository, you might find other characters used for other non-string key property values. The different value types you might possibly encounter are as follows: "I" – Denotes a bigint value. For example, "I65432,". "g" – Denotes a GUID value. For example, "g32116732-63ae-4ab5-bd34-7dfdfb084c18,". "d" – Denotes a datetime value. For example, "d634815384796832438,". The value is stored as a bigint, rather than a native SQL datetime value. I’ll describe how datetime values are handled in the SQL Monitor data repostory in a future post. I suggest you have a look at the alerts in your own SQL Monitor data repository for further examples, so you can see how the TargetObject values are composed for each of the different types of alert. Let me give one further example, though, that represents a Custom metric alert, as this will help in describing the final column of interest in the alert.Alert table, SubType. Let me show you the alert I’m interested in: SELECT AlertId, a.AlertType, Name, TargetObject, [Read], SubType FROM alert.Alert a JOIN alert.Alert_Type at ON a.AlertType = at.AlertType WHERE AlertId = 65769;  AlertIdAlertTypeNameTargetObjectReadSubType 16576940Custom metric7:Cluster,1,4:Name,s7:granger,9:SqlServer,1,4:Name,s0:,8:Database,1,4:Name,s6:master,12:CustomMetric,1,8:MetricId,I2,02 An AlertType value of 40 corresponds to the Custom metric alert type. The Name taken from the alert.Alert_Type table is simply Custom metric, but this doesn’t tell us anything about the specific custom metric that this alert pertains to. That’s where the SubType value comes in. For custom metric alerts, this provides us with the Id of the specific custom alert definition that can be found in the settings.CustomAlertDefinitions table. 
I don’t really want to delve into custom alert definitions yet (maybe in a later post), but an extra join in the previous query shows us that this alert pertains to the CPU pressure (avg runnable task count) custom metric alert. SELECT AlertId, a.AlertType, at.Name, cad.Name AS CustomAlertName, TargetObject, [Read], SubType FROM alert.Alert a JOIN alert.Alert_Type at ON a.AlertType = at.AlertType JOIN settings.CustomAlertDefinitions cad ON a.SubType = cad.Id WHERE AlertId = 65769;  AlertIdAlertTypeNameCustomAlertNameTargetObjectReadSubType 16576940Custom metricCPU pressure (avg runnable task count)7:Cluster,1,4:Name,s7:granger,9:SqlServer,1,4:Name,s0:,8:Database,1,4:Name,s6:master,12:CustomMetric,1,8:MetricId,I2,02 The TargetObject value in this case breaks down like this: "7:Cluster,1,4:Name,s7:granger," – Cluster named "granger". "9:SqlServer,1,4:Name,s0:," – SqlServer named "" (the default instance). "8:Database,1,4:Name,s6:master," – Database named "master". "12:CustomMetric,1,8:MetricId,I2," – Custom metric with an Id of 2. Note that the hierarchy for a custom metric is slightly different compared to the earlier Backup overdue alert. It’s root → Cluster → SqlServer → Database → CustomMetric. Also notice that, unlike Cluster, SqlServer and Database, the key property for CustomMetric is called MetricId (not Name), and the value is a bigint (not a string). Finally, delving into the custom metric tables is beyond the scope of this post, but for the sake of avoiding any future confusion, I’d like to point out that whilst the SubType references a custom alert definition, the MetricID value embedded in the TargetObject value references a custom metric definition. Although in this case both the custom metric definition and custom alert definition share the same Id value of 2, this is not generally the case. Okay, that’s enough for now, not least because as I’m typing this, it’s almost 2am, I have to go to work tomorrow, and my alarm is set for 6am – eek! In my next post, I’ll either cover the remaining three tables in the alert schema, or I’ll delve into the way SQL Monitor stores its monitoring data, as I’d originally planned to cover in this post.
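    As a practical footnote to the tables described above, here is an illustrative query against the repository schema as documented in this post (my own sketch, so treat it as unsupported) that summarizes the unread alerts by alert type name, most frequent first:
    SELECT at.Name AS AlertTypeName,
           COUNT(*) AS UnreadAlerts
    FROM   alert.Alert a
           JOIN alert.Alert_Type at
             ON a.AlertType = at.AlertType
    WHERE  a.[Read] = 0
    GROUP  BY at.Name
    ORDER  BY COUNT(*) DESC;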

    Read the article

  • ASP.NET and HTML5 Local Storage

    - by Stephen Walther
    My favorite feature of HTML5, hands-down, is HTML5 local storage (aka DOM storage). By taking advantage of HTML5 local storage, you can dramatically improve the performance of your data-driven ASP.NET applications by caching data in the browser persistently. Think of HTML5 local storage like browser cookies, but much better. Like cookies, local storage is persistent. When you add something to browser local storage, it remains there when the user returns to the website (possibly days or months later). Importantly, unlike the cookie storage limitation of 4KB, you can store up to 10 megabytes in HTML5 local storage. Because HTML5 local storage works with the latest versions of all modern browsers (IE, Firefox, Chrome, Safari), you can start taking advantage of this HTML5 feature in your applications right now. Why use HTML5 Local Storage? I use HTML5 Local Storage in the JavaScript Reference application: http://Superexpert.com/JavaScriptReference The JavaScript Reference application is an HTML5 app that provides an interactive reference for all of the syntax elements of JavaScript (You can read more about the application and download the source code for the application here). When you open the application for the first time, all of the entries are transferred from the server to the browser (all 300+ entries). All of the entries are stored in local storage. When you open the application in the future, only changes are transferred from the server to the browser. The benefit of this approach is that the application performs extremely fast. When you click the details link to view details on a particular entry, the entry details appear instantly because all of the entries are stored on the client machine. When you perform key-up searches, by typing in the filter textbox, matching entries are displayed very quickly because the entries are being filtered on the local machine. This approach can have a dramatic effect on the performance of any interactive data-driven web application. Interacting with data on the client is almost always faster than interacting with the same data on the server. Retrieving Data from the Server In the JavaScript Reference application, I use Microsoft WCF Data Services to expose data to the browser. WCF Data Services generates a REST interface for your data automatically. Here are the steps: Create your database tables in Microsoft SQL Server. For example, I created a database named ReferenceDB and a database table named Entities. Use the Entity Framework to generate your data model. For example, I used the Entity Framework to generate a class named ReferenceDBEntities and a class named Entities. Expose your data through WCF Data Services. I added a WCF Data Service to my project and modified the data service class to look like this:   using System.Data.Services; using System.Data.Services.Common; using System.Web; using JavaScriptReference.Models; namespace JavaScriptReference.Services { [System.ServiceModel.ServiceBehavior(IncludeExceptionDetailInFaults = true)] public class EntryService : DataService<ReferenceDBEntities> { // This method is called only once to initialize service-wide policies. public static void InitializeService(DataServiceConfiguration config) { config.UseVerboseErrors = true; config.SetEntitySetAccessRule("*", EntitySetRights.All); config.DataServiceBehavior.MaxProtocolVersion = DataServiceProtocolVersion.V2; } // Define a change interceptor for the Products entity set. 
[ChangeInterceptor("Entries")] public void OnChangeEntries(Entry entry, UpdateOperations operations) { if (!HttpContext.Current.Request.IsAuthenticated) { throw new DataServiceException("Cannot update reference unless authenticated."); } } } }     The WCF data service is named EntryService. Notice that it derives from DataService<ReferenceEntitites>. Because it derives from DataService<ReferenceEntities>, the data service exposes the contents of the ReferenceEntitiesDB database. In the code above, I defined a ChangeInterceptor to prevent un-authenticated users from making changes to the database. Anyone can retrieve data through the service, but only authenticated users are allowed to make changes. After you expose data through a WCF Data Service, you can use jQuery to retrieve the data by performing an Ajax call. For example, I am using an Ajax call that looks something like this to retrieve the JavaScript entries from the EntryService.svc data service: $.ajax({ dataType: "json", url: “/Services/EntryService.svc/Entries”, success: function (result) { var data = callback(result["d"]); } });     Notice that you must unwrap the data using result[“d”]. After you unwrap the data, you have a JavaScript array of the entries. I’m transferring all 300+ entries from the server to the client when the application is opened for the first time. In other words, I transfer the entire database from the server to the client, once and only once, when the application is opened for the first time. The data is transferred using JSON. Here is a fragment: { "d" : [ { "__metadata": { "uri": "http://superexpert.com/javascriptreference/Services/EntryService.svc/Entries(1)", "type": "ReferenceDBModel.Entry" }, "Id": 1, "Name": "Global", "Browsers": "ff3_6,ie8,ie9,c8,sf5,es3,es5", "Syntax": "object", "ShortDescription": "Contains global variables and functions", "FullDescription": "<p>\nThe Global object is determined by the host environment. In web browsers, the Global object is the same as the windows object.\n</p>\n<p>\nYou can use the keyword <code>this</code> to refer to the Global object when in the global context (outside of any function).\n</p>\n<p>\nThe Global object holds all global variables and functions. For example, the following code demonstrates that the global <code>movieTitle</code> variable refers to the same thing as <code>window.movieTitle</code> and <code>this.movieTitle</code>.\n</p>\n<pre>\nvar movieTitle = \"Star Wars\";\nconsole.log(movieTitle === this.movieTitle); // true\nconsole.log(movieTitle === window.movieTitle); // true\n</pre>\n", "LastUpdated": "634298578273756641", "IsDeleted": false, "OwnerId": null }, { "__metadata": { "uri": "http://superexpert.com/javascriptreference/Services/EntryService.svc/Entries(2)", "type": "ReferenceDBModel.Entry" }, "Id": 2, "Name": "eval(string)", "Browsers": "ff3_6,ie8,ie9,c8,sf5,es3,es5", "Syntax": "function", "ShortDescription": "Evaluates and executes JavaScript code dynamically", "FullDescription": "<p>\nThe following code evaluates and executes the string \"3+5\" at runtime.\n</p>\n<pre>\nvar result = eval(\"3+5\");\nconsole.log(result); // returns 8\n</pre>\n<p>\nYou can rewrite the code above like this:\n</p>\n<pre>\nvar result;\neval(\"result = 3+5\");\nconsole.log(result);\n</pre>", "LastUpdated": "634298580913817644", "IsDeleted": false, "OwnerId": 1 } … ]} I worried about the amount of time that it would take to transfer the records. 
According to Google Chrome, it takes about 5 seconds to retrieve all 300+ records on a broadband connection over the Internet. 5 seconds is a small price to pay to avoid performing any server fetches of the data in the future. And here are the estimated times using different types of connections using Fiddler: Notice that using a modem, it takes 33 seconds to download the database. 33 seconds is a significant chunk of time. So, I would not use the approach of transferring the entire database up front if you expect a significant portion of your website audience to connect to your website with a modem. Adding Data to HTML5 Local Storage After the JavaScript entries are retrieved from the server, the entries are stored in HTML5 local storage. Here’s the reference documentation for HTML5 storage for Internet Explorer: http://msdn.microsoft.com/en-us/library/cc197062(VS.85).aspx You access local storage by accessing the window.localStorage object in JavaScript. This object contains key/value pairs. For example, you can use the following JavaScript code to add a new item to local storage: <script type="text/javascript"> window.localStorage.setItem("message", "Hello World!"); </script>   You can use the Google Chrome Storage tab in the Developer Tools (hit CTRL-SHIFT I in Chrome) to view items added to local storage: After you add an item to local storage, you can read it at any time in the future by using the window.localStorage.getItem() method: <script type="text/javascript"> var message = window.localStorage.getItem("message"); </script>   You can only add strings to local storage and not JavaScript objects such as arrays. Therefore, before adding a JavaScript object to local storage, you need to convert it into a JSON string. In the JavaScript Reference application, I use a wrapper around local storage that looks something like this: function Storage() { this.get = function (name) { return JSON.parse(window.localStorage.getItem(name)); }; this.set = function (name, value) { window.localStorage.setItem(name, JSON.stringify(value)); }; this.clear = function () { window.localStorage.clear(); }; }   If you use the wrapper above, then you can add arbitrary JavaScript objects to local storage like this: var store = new Storage(); // Add array to storage var products = [ {name:"Fish", price:2.33}, {name:"Bacon", price:1.33} ]; store.set("products", products); // Retrieve items from storage var products = store.get("products");   Modern browsers support the JSON object natively. If you need the script above to work with older browsers, then you should download the JSON2.js library from: https://github.com/douglascrockford/JSON-js The JSON2 library will use the native JSON object if a browser already supports JSON. Merging Server Changes with Browser Local Storage When you first open the JavaScript Reference application, the entire database of JavaScript entries is transferred from the server to the browser. Two items are added to local storage: entries and entriesLastUpdated. The first item contains the entire entries database (a big JSON string of entries). The second item, a timestamp, represents the version of the entries. Whenever you open the JavaScript Reference in the future, the entriesLastUpdated timestamp is passed to the server. Only records that have been deleted, updated, or added since entriesLastUpdated are transferred to the browser.
The OData query to get the latest updates looks like this: http://superexpert.com/javascriptreference/Services/EntryService.svc/Entries?$filter=(LastUpdated%20gt%20634301199890494792L) If you remove URL encoding, the query looks like this: http://superexpert.com/javascriptreference/Services/EntryService.svc/Entries?$filter=(LastUpdated gt 634301199890494792L) This query returns only those entries where the value of LastUpdated > 634301199890494792 (the version timestamp). The changes – new JavaScript entries, deleted entries, and updated entries – are merged with the existing entries in local storage. The JavaScript code for performing the merge is contained in the EntriesHelper.js file. The merge() method looks like this:   merge: function (oldEntries, newEntries) { // concat (this performs the add) oldEntries = oldEntries || []; var mergedEntries = oldEntries.concat(newEntries); // sort this.sortByIdThenLastUpdated(mergedEntries); // prune duplicates (this performs the update) mergedEntries = this.pruneDuplicates(mergedEntries); // delete mergedEntries = this.removeIsDeleted(mergedEntries); // Sort this.sortByName(mergedEntries); return mergedEntries; },   The contents of local storage are then updated with the merged entries. I spent several hours writing the merge() method (much longer than I expected). I found two resources to be extremely useful. First, I wrote extensive unit tests for the merge() method. I wrote the unit tests using server-side JavaScript. I describe this approach to writing unit tests in this blog entry. The unit tests are included in the JavaScript Reference source code. Second, I found the following blog entry to be super useful (thanks Nick!): http://nicksnettravels.builttoroam.com/post/2010/08/03/OData-Synchronization-with-WCF-Data-Services.aspx One big challenge that I encountered involved timestamps. I originally tried to store an actual UTC time as the value of the entriesLastUpdated item. I quickly discovered that trying to work with dates in JSON turned out to be a big can of worms that I did not want to open. Next, I tried to use a SQL timestamp column. However, I learned that OData cannot handle the timestamp data type when doing a filter query. Therefore, I ended up using a bigint column in SQL and manually creating the value when a record is updated. I overrode the SaveChanges() method to look something like this: public override int SaveChanges(SaveOptions options) { var changes = this.ObjectStateManager.GetObjectStateEntries( EntityState.Modified | EntityState.Added | EntityState.Deleted); foreach (var change in changes) { var entity = change.Entity as IEntityTracking; if (entity != null) { entity.LastUpdated = DateTime.Now.Ticks; } } return base.SaveChanges(options); }   Notice that I assign Date.Now.Ticks to the entity.LastUpdated property whenever an entry is modified, added, or deleted. Summary After building the JavaScript Reference application, I am convinced that HTML5 local storage can have a dramatic impact on the performance of any data-driven web application. If you are building a web application that involves extensive interaction with data then I recommend that you take advantage of this new feature included in the HTML5 standard.
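A side note on the SaveChanges() override above: it casts each changed entity to an IEntityTracking interface that is referenced but never shown in the post. Below is only a minimal sketch of what that interface could look like, assuming the generated Entry entity exposes LastUpdated as a 64-bit value; it is not the actual JavaScript Reference source.

public interface IEntityTracking
{
    // Set to DateTime.Now.Ticks by the SaveChanges() override; stored in a bigint column.
    long LastUpdated { get; set; }
}

// Entity Framework generates Entry as a partial class, so the interface can be
// attached in a separate file without touching the generated code.
public partial class Entry : IEntityTracking
{
    // The generated LastUpdated property satisfies the interface, assuming the
    // column is mapped as a 64-bit integer.
}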


  • Using the Data Form Web Part (SharePoint 2010) Site Agnostically!

    - by David Jacobus
Originally posted on: http://geekswithblogs.net/djacobus/archive/2013/10/24/154465.aspx
As a Developer who has worked closely with web designers (Power users) in a SharePoint environment, I have come across the issue of making the Data Form Web Part reusable across the site collection! In SharePoint 2007 it was very easy and this blog pointed the way to make it happen: Josh Gaffey's Blog. In SharePoint 2010 something changed! This method failed except for using a Data Form Web Part that pointed to a list in the Site Collection Root! I am making this discussion relative to a developer who creates a solution (WSP) with all the artifacts embedded, where the user shouldn't have any involvement in the process except to activate features.
The Scenario:
1. A Power User creates a Data Form Web Part using SharePoint Designer 2010! It is a great web part that uses all the power of SharePoint Designer and XSLT (conditional formatting, etc.).
2. Other users in the site collection want to use that specific web part in sub sites in the site collection, pointing to a list with the same name, not at the site collection root!
The Issues:
1. The Data Form Web Part data source uses a List ID (GUID) to point to the specific list, which means the same list in a sub site will have a new GUID, different from the one against which the web part was created in SharePoint Designer! Obviously, the list needs to be the same list (fields, content types, etc.) with different data.
2. How can we make this web part site agnostic, and dependent only on the list's name?
I had this problem come up over and over and decided to put my solution forward!
The Solution:
1. Use the XSL of the Data Form Web Part created by the Power User in SharePoint Designer!
2. Extend the OOTB Data Form Web Part to use this XSL and point to a list by name.
This is a hybrid solution that requires some coding (Developer) and the XSL (Power User) artifacts put together in a Visual Studio SharePoint solution. Here are the solution steps in summary:
1. Create an empty SharePoint project in Visual Studio.
2. Create a Module and Feature and put the XSL file created by the Power User into it.
   a. Scope the feature to web.
3. Create a Feature Receiver to create the list: the same list from which the Data Form Web Part was created by the Power User.
   a. Scope the feature to web.
4. Create a Web Part extending the Data Form Web Part.
   a. Point the Data Form Web Part to the list by name.
   b. Point the Data Form Web Part XSL link to the XSL added using the Module feature.
   c. Scope the feature to Site.
      i. This is because all web parts are in the site collection web part gallery.
So, in a narrative summary: We are creating a list in code which has the same name and site columns as the list from which the Power User created the Data Form Web Part using SharePoint Designer. We are creating a Web Part in code which extends the OOTB Data Form Web Part to point to a list by name and use the XSL created by the Power User. Okay! Here are the steps with images and code! At the end of this post I will provide a link to the code for a solution which works in any site! I want to TOOT the HORN for the power of this solution! It is the mantra I use with all my clients! What is the basic skill of a SharePoint Developer? Create an application that uses the data from a SharePoint list and make that data visible to the user in a manner which meets requirements!
Create an Empty SharePoint 2010 Project Here I am naming my Project DJ.DataFormWebPart Create a Code Folder Copy and paste the Extension and Utilities classes (Found in the solution provided at the end of this post) Change the Namespace to match this project The List to which the Data Form Web Part which was used to make the XSL by the Power User in SharePoint Designer is now going to be created in code! If already in code, then all the better! Here I am going to create a list in the site collection root and add some data to it! For the purpose of this discussion I will actually create this list in code before using SharePoint Designer for simplicity! So here I create the List and deploy it within this solution before I do anything else. I will use a List I created before for demo purposes. Footer List is used within the footer of my master page. Add a new Feature: Here I name the Feature FooterList and add a Feature Event Receiver: Here is the code for the Event Receiver: I have a previous blog post about adding lists in code so I will not take time to narrate this code: using System; using System.Runtime.InteropServices; using System.Security.Permissions; using Microsoft.SharePoint; using DJ.DataFormWebPart.Code; namespace DJ.DataFormWebPart.Features.FooterList { /// <summary> /// This class handles events raised during feature activation, deactivation, installation, uninstallation, and upgrade. /// </summary> /// <remarks> /// The GUID attached to this class may be used during packaging and should not be modified. /// </remarks> [Guid("a58644fd-9209-41f4-aa16-67a53af7a9bf")] public class FooterListEventReceiver : SPFeatureReceiver { SPWeb currentWeb = null; SPSite currentSite = null; const string columnGroup = "DJ"; const string ctName = "FooterContentType"; // Uncomment the method below to handle the event raised after a feature has been activated. 
public override void FeatureActivated(SPFeatureReceiverProperties properties) { using (SPWeb spWeb = properties.GetWeb() as SPWeb) { using (SPSite site = new SPSite(spWeb.Site.ID)) { using (SPWeb rootWeb = site.OpenWeb(site.RootWeb.ID)) { //add the fields addFields(rootWeb); //add content type SPContentType testCT = rootWeb.ContentTypes[ctName]; // we will not create the content type if it exists if (testCT == null) { //the content type does not exist add it addContentType(rootWeb, ctName); } if ((spWeb.Lists.TryGetList("FooterList") == null)) { //create the list if it dosen't to exist CreateFooterList(spWeb, site); } } } } } #region ContentType public void addFields(SPWeb spWeb) { Utilities.addField(spWeb, "Link", SPFieldType.URL, false, columnGroup); Utilities.addField(spWeb, "Information", SPFieldType.Text, false, columnGroup); } private static void addContentType(SPWeb spWeb, string name) { SPContentType myContentType = new SPContentType(spWeb.ContentTypes["Item"], spWeb.ContentTypes, name) { Group = columnGroup }; spWeb.ContentTypes.Add(myContentType); addContentTypeLinkages(spWeb, myContentType); myContentType.Update(); } public static void addContentTypeLinkages(SPWeb spWeb, SPContentType ct) { Utilities.addContentTypeLink(spWeb, "Link", ct); Utilities.addContentTypeLink(spWeb, "Information", ct); } private void CreateFooterList(SPWeb web, SPSite site) { Guid newListGuid = web.Lists.Add("FooterList", "Footer List", SPListTemplateType.GenericList); SPList newList = web.Lists[newListGuid]; newList.ContentTypesEnabled = true; var footer = site.RootWeb.ContentTypes[ctName]; newList.ContentTypes.Add(footer); newList.ContentTypes.Delete(newList.ContentTypes["Item"].Id); newList.Update(); var view = newList.DefaultView; //add all view fields here //view.ViewFields.Add("NewsTitle"); view.ViewFields.Add("Link"); view.ViewFields.Add("Information"); view.Update(); } } } Basically created a content type with two site columns Link and Information. I had to change some code as we are working at the SPWeb level and need Content Types at the SPSite level! I’ll use a new Site Collection for this demo (Best Practice) keep old artifacts from impinging on development: Next we will add this list to the root of the site collection by deploying this solution, add some data and then use SharePoint Designer to create a Data Form Web Part. The list has been added, now let’s add some data: Okay let’s add a Data Form Web Part in SharePoint Designer. Create a new web part page in the site pages library: I will name it TestWP.aspx and edit it in advanced mode: Let’s add an empty Data Form Web Part to the web part zone: Click on the web part to add a data source: Choose FooterList in the Data Source menu: Choose appropriate fields and select insert as multiple item view: Here is what it look like after insertion: Let’s add some conditional formatting if the information filed is not blank: Choose Create (right side) apply formatting: Choose the Information Field and set the condition not null: Click Set Style: Here is the result: Okay! Not flashy but simple enough for this demo. Remember this is the job of the Power user! All we want from this web part is the XLS-Style Sheet out of SharePoint Designer. We are going to use it as the XSL for our web part which we will be creating next. Let’s add a web part to our project extending the OOTB Data Form Web Part. 
Add new item from the Visual Studio add menu: Choose Web Part: Change WebPart to DataFormWebPart (Oh well my namespace needs some improvement, but it will sure make it readily identifiable as an extended web part!) Below is the code for this web part: using System; using System.ComponentModel; using System.Web; using System.Web.UI; using System.Web.UI.WebControls; using System.Web.UI.WebControls.WebParts; using Microsoft.SharePoint; using Microsoft.SharePoint.WebControls; using System.Text; namespace DJ.DataFormWebPart.DataFormWebPart { [ToolboxItemAttribute(false)] public class DataFormWebPart : Microsoft.SharePoint.WebPartPages.DataFormWebPart { protected override void OnInit(EventArgs e) { base.OnInit(e); this.ChromeType = PartChromeType.None; this.Title = "FooterListDF"; try { //SPSite site = SPContext.Current.Site; SPWeb web = SPContext.Current.Web; SPList list = web.Lists.TryGetList("FooterList"); if (list != null) { string queryList1 = "<Query><Where><IsNotNull><FieldRef Name='Title' /></IsNotNull></Where><OrderBy><FieldRef Name='Title' Ascending='True' /></OrderBy></Query>"; uint maximumRowList1 = 10; SPDataSource dataSourceList1 = GetDataSource(list.Title, web.Url, list, queryList1, maximumRowList1); this.DataSources.Add(dataSourceList1); this.XslLink = web.Url + "/Assests/Footer.xsl"; this.ParameterBindings = BuildDataFormParameters(); this.DataBind(); } } catch (Exception ex) { this.Controls.Add(new LiteralControl("ERROR: " + ex.Message)); } } private SPDataSource GetDataSource(string dataSourceId, string webUrl, SPList list, string query, uint maximumRow) { SPDataSource dataSource = new SPDataSource(); dataSource.UseInternalName = true; dataSource.ID = dataSourceId; dataSource.DataSourceMode = SPDataSourceMode.List; dataSource.List = list; dataSource.SelectCommand = "" + query + ""; Parameter listIdParam = new Parameter("ListID"); listIdParam.DefaultValue = list.ID.ToString( "B").ToUpper(); Parameter maximumRowsParam = new Parameter("MaximumRows"); maximumRowsParam.DefaultValue = maximumRow.ToString(); QueryStringParameter rootFolderParam = new QueryStringParameter("RootFolder", "RootFolder"); dataSource.SelectParameters.Add(listIdParam); dataSource.SelectParameters.Add(maximumRowsParam); dataSource.SelectParameters.Add(rootFolderParam); dataSource.UpdateParameters.Add(listIdParam); dataSource.DeleteParameters.Add(listIdParam); dataSource.InsertParameters.Add(listIdParam); return dataSource; } private string BuildDataFormParameters() { StringBuilder parameters = new StringBuilder("<ParameterBindings><ParameterBinding Name=\"dvt_apos\" Location=\"Postback;Connection\"/><ParameterBinding Name=\"UserID\" Location=\"CAMLVariable\" DefaultValue=\"CurrentUserName\"/><ParameterBinding Name=\"Today\" Location=\"CAMLVariable\" DefaultValue=\"CurrentDate\"/>"); parameters.Append("<ParameterBinding Name=\"dvt_firstrow\" Location=\"Postback;Connection\"/>"); parameters.Append("<ParameterBinding Name=\"dvt_nextpagedata\" Location=\"Postback;Connection\"/>"); parameters.Append("<ParameterBinding Name=\"dvt_adhocmode\" Location=\"Postback;Connection\"/>"); parameters.Append("<ParameterBinding Name=\"dvt_adhocfiltermode\" Location=\"Postback;Connection\"/>"); parameters.Append("</ParameterBindings>"); return parameters.ToString(); } } } The OnInit method we use to set the list name and the XSL Link property of the Data Form Web Part. 
We do not have the link to the XSL in our solution, so we will add the XSL now: Add a Module in the Visual Studio add menu: Rename Sample.txt in the module to footer.xsl and then copy in the XSL from SharePoint Designer. Look at elements.xml to see where the footer.xsl is being provisioned to, which is Assets/footer.xsl, and make sure the web part's XSL link is pointing to this URL: Okay, we are good to go! Let's check our features and package: DataFormWebPart should be scoped to site and have the web part: The Footer List feature should be scoped to web and have the Assets module (Okay, I see, a spelling issue but it won't affect this demo). If everything is correct we should be able to click a couple of sub site feature activations and have our list and web part in a sub site. (In fact this solution can be activated anywhere.) Here is the list created at SubSite1 with new data in it. Next let's add the web part on a test page and see if it works as expected: It does! So we now have a repeatable way to use a WSP to move a Data Form Web Part around our sites! Here is a link to the code: DataFormWebPart Solution


  • CodePlex Daily Summary for Friday, April 06, 2012

    CodePlex Daily Summary for Friday, April 06, 2012Popular ReleasesBetter Explorer: Better Explorer 2.0.0.861 Alpha: - fixed new folder button operation not work well in some situations - removed some unnecessary code like subclassing that is not needed anymore - Added option to make Better Exlorer default (at least for WIN+E operations) - Added option to enable file operation replacements (like Terracopy) to work with Better Explorer - Added some basic usability to "Share" button - Other fixesLightFarsiDictionary - ??????? ??? ?????/???????: LightFarsiDictionary - v1: LightFarsiDictionary - v1WPF Application Framework (WAF): WPF Application Framework (WAF) 2.5.0.3: Version: 2.5.0.3 (Milestone 3): This release contains the source code of the WPF Application Framework (WAF) and the sample applications. Requirements .NET Framework 4.0 (The package contains a solution file for Visual Studio 2010) The unit test projects require Visual Studio 2010 Professional Changelog Legend: [B] Breaking change; [O] Marked member as obsolete [O] WAF: Mark the StringBuilderExtensions class as obsolete because the AppendInNewLine method can be replaced with string.Jo...Community TFS Build Extensions: April 2012: Release notes to follow...ClosedXML - The easy way to OpenXML: ClosedXML 0.65.2: Aside from many bug fixes we now have Conditional Formatting The conditional formatting was sponsored by http://www.bewing.nl (big thanks) New on v0.65.1 Fixed issue when loading conditional formatting with default values for icon sets New on v0.65.2 Fixed issue loading conditional formatting Improved inserts performanceLiberty: v3.2.0.0 Release 4th April 2012: Change Log-Added -Halo 3 support (invincibility, ammo editing) -Halo 3: ODST support (invincibility, ammo editing) -The file transfer page now shows its progress in the Windows 7 taskbar -"About this build" settings page -Reach Change what an object is carrying -Reach Change which node a carried object is attached to -Reach Object node viewer and exporter -Reach Change which weapons you are carrying from the object editor -Reach Edit the weapon controller of vehicles and turrets -An error dia...MSBuild Extension Pack: April 2012: Release Blog Post The MSBuild Extension Pack April 2012 release provides a collection of over 435 MSBuild tasks. A high level summary of what the tasks currently cover includes the following: System Items: Active Directory, Certificates, COM+, Console, Date and Time, Drives, Environment Variables, Event Logs, Files and Folders, FTP, GAC, Network, Performance Counters, Registry, Services, Sound Code: Assemblies, AsyncExec, CAB Files, Code Signing, DynamicExecute, File Detokenisation, GUID’...DotNetNuke® Community Edition CMS: 06.01.05: Major Highlights Fixed issue that stopped users from creating vocabularies when the portal ID was not zero Fixed issue that caused modules configured to be displayed on all pages to be added to the wrong container in new pages Fixed page quota restriction issue in the Ribbon Bar Removed restriction that would not allow users to use a dash in page names. Now users can create pages with names like "site-map" Fixed issue that was causing the wrong container to be loaded in modules wh...51Degrees.mobi - Mobile Device Detection and Redirection: 2.1.3.1: One Click Install from NuGet Changes to Version 2.1.3.11. [assembly: AllowPartiallyTrustedCallers] has been added back into the AssemblyInfo.cs file to prevent failures with other assemblies in Medium trust environments. 2. 
The Lite data embedded into the assembly has been updated to include devices from December 2011. The 42 new RingMark properties will return Unknown if RingMark data is not available. Changes to Version 2.1.2.11Code Changes 1. The project is now licenced under the Mozilla...MVC Controls Toolkit: Mvc Controls Toolkit 2.0.0: Added Support for Mvc4 beta and WebApi The SafeqQuery and HttpSafeQuery IQueryable implementations that works as wrappers aroung any IQueryable to protect it from unwished queries. "Client Side" pager specialized in paging javascript data coming either from a remote data source, or from local data. LinQ like fluent javascript api to build queries either against remote data sources, or against local javascript data, with exactly the same interface. There are 3 different query objects exp...ExtAspNet: ExtAspNet v3.1.2: ExtAspNet - ?? ExtJS ??? ASP.NET 2.0 ???,????? AJAX ?????????? ExtAspNet ????? ExtJS ??? ASP.NET 2.0 ???,????? AJAX ??????????。 ExtAspNet ??????? JavaScript,?? CSS,?? UpdatePanel,?? ViewState,?? WebServices ???????。 ??????: IE 7.0, Firefox 3.6, Chrome 3.0, Opera 10.5, Safari 3.0+ ????:Apache License 2.0 (Apache) ??:http://extasp.net/ ??:http://bbs.extasp.net/ ??:http://extaspnet.codeplex.com/ ??:http://sanshi.cnblogs.com/ ????: +2012-04-04 v3.1.2 -??IE?????????????BUG(??"about:blank"?...nopCommerce. Open source shopping cart (ASP.NET MVC): nopcommerce 2.50: Highlight features & improvements: • Significant performance optimization. • Allow store owners to create several shipments per order. Added a new shipping status: “Partially shipped”. • Pre-order support added. Enables your customers to place a Pre-Order and pay for the item in advance. Displays “Pre-order” button instead of “Buy Now” on the appropriate pages. Makes it possible for customer to buy available goods and Pre-Order items during one session. It can be managed on a product variant ...WiX Toolset: WiX v3.6 RC0: WiX v3.6 RC0 (3.6.2803.0) provides support for VS11 and a more stable Burn engine. For more information see Rob's blog post about the release: http://robmensching.com/blog/posts/2012/4/3/WiX-v3.6-Release-Candidate-Zero-availableSageFrame: SageFrame 2.0: Sageframe is an open source ASP.NET web development framework developed using ASP.NET 3.5 with service pack 1 (sp1) technology. It is designed specifically to help developers build dynamic website by providing core functionality common to most web applications.iTuner - The iTunes Companion: iTuner 1.5.4475: Fix to parse empty playlists in iTunes LibraryDocument.Editor: 2012.2: Whats New for Document.Editor 2012.2: New Save Copy support New Page Setup support Minor Bug Fix's, improvements and speed upsDotNet.Highcharts: DotNet.Highcharts 1.2 with Examples: Tested and adapted to the latest version of Highcharts 2.2.1 Fixed Issue 359: Not implemented serialization array of type: System.Drawing.Color[] Fixed Issue 364: Crosshairs defined as array of CrosshairsForamt generates bad Highchart code For the example project: Added newest version of Highcharts 2.2.1 Added new demos to How To's section: Bind Data From Dictionary Bind Data From Object List Custom Theme Tooltip Crosshairs Theming The Reset Button Plot Band EventsVidCoder: 1.3.2: Added option for the minimum title length to scan. Added support to enable or disable LibDVDNav. Added option to prompt to delete source files after clearing successful completed items. Added option to disable remembering recent files and folders. 
Tweaked number box to only select all on a quick click.MJP's DirectX 11 Samples: Light Indexed Deferred Rendering: Implements light indexed deferred using per-tile light lists calculated in a compute shader, as well as a traditional deferred renderer that uses a compute shader for per-tile light culling and per-pixel shading.Pcap.Net: Pcap.Net 0.9.0 (66492): Pcap.Net - March 2012 Release Pcap.Net is a .NET wrapper for WinPcap written in C++/CLI and C#. It Features almost all WinPcap features and includes a packet interpretation framework. Version 0.9.0 (Change Set 66492)March 31, 2012 release of the Pcap.Net framework. Follow Pcap.Net on Google+Follow Pcap.Net on Google+ Files Pcap.Net.DevelopersPack.0.9.0.66492.zip - Includes all the tutorial example projects source files, the binaries in a 3rdParty directory and the documentation. It include...New Projects.Net Mutation Testing Tool: This tool will help for making and unit testing mutation changes. Mutation tests improve unit tests and applications.African Honey Bee Application Suite: This is the home of the software behind the African Honey Bee community enrichment project.Create Schema: Object extension method that generates schema creation scripts in SQL (MS SQL SERVER) from c# classes to rapidly get database tables from an existing class structure. Just edit the scripts (they are just guesses of how the database should look) to make them fit your database needs and then run them as a query in SQL Server. Check out vecklare.blogspot.se for more examples. CrtTfsDemo: demo project to test code review tool integration with tfsDolphins Salaam: This Library is a Cross Platform, UDP Broadcast solution for Peer to Peer network client detection. It is written in pure C# and is Mono and .Net compatible. It has been tested on Mac OS X, Windows, Linux, iOS, Android, Xbox and it is expected to be compatible with every other Mono compatible platform. The library is ready to use out of box and there is almost zero configurations needed for it to start working, regarding that, it is also flexible and configurable for your needs.Enhanced Content Query Webpart: This Enhanced Content Query Webpart for SharePoint 2010 is meant for the advanced application developer. You will be able to create any rollup of pages of any Content Type(s), Site Column(s) from any subsite(s) with (any) style you want. Xslt skills are required.fangxue: huijiaFrameworkComponent: frameworkcomponent 4 companyFsXna3DGame: 3D game written in XNA and F# (still very much a work in progress)GeoLock: GeoLock is a proof of concept (PoC). GeoLock demonstrates how to retrieve geolocation data from surrounding Wi-Fi networks triangulation without Global Positioning System (GPS) hardware. grab-libs: A quick hack to bundle up the shared libraries needed to run an i386 executable on an x86_64 machine.Logical Game Of Life: Make Logic Component with a game of life algoritmMagic Box: Magic Box is the application for "hiding" files in NTFS file system. NTFS Alternate Data Streams are used for this.Notes: Application to create fast notes on your Windows Phone.Orchard Google Infographics: **NOTE: Only works with Orchard >= v1.4** This module leverages Google's Infographics api for generating QR codes for each content item that uses the Orchard.Autoroute part. 
The api documentation may be found at https://developers.google.com/chart/infographics/docs/qr_codesOutlook Calendar Cleaner: If you port your email from a Lotus Notes server to Exchange Server or to Office 365, your Mac Outlook calendar items may be missing. This includes calendar appointments and meeting requests. This may include items that were ported over and also new items created after the port.Polarity: Polarity is a plugin-oriented bot written in C# for deviantArt chat written with simplicity and ease of use for plugin developers as the number one goal.Pool Game Paradise: Pool Game Club of Augmentum Inc. Wuhan Site.RCT2 DatChecker: A utility for Rollercoaster Tycoon 2 that catalogs, displays, and helps manage object files (*.DAT). Examine objects frame by frame or copy object images to the clipboard. Sort objects by size, type, or content. Use DatChecker to find and remove unwanted or duplicate objects, or RecipeML Manager: This project creates C# classes that support the RecipeML standard. It also enhances that standard to support full daily and weekly menus.s0t0o0c0k3246352543: 345234324324324324ScoutGames: scout games portalSharePoint All Page Scripting: This solution is the skeleton of the JavaScript loading in all pages on the SharePoint site collection. This solution can be deployed as a "sandbox solution". Supports the Office365 of course. ??????????SharePoint ?????????????????????????Java?????????????。 ????????????????????????。 ???、????Office365?????????。testtom04052012git02: testtom04052012git02testtom04052012tfs01: testtom04052012tfs01testtom04052012tfs03: testtom04052012tfs03Visual Studio Pattern Automation Toolkit (VSPAT): VSPAT is a set of development and deployment tools from Microsoft that generate and execute Visual Studio extensions called 'Pattern Toolkits' that redeliver best practices, expertise and proven solutions faster, more reliably and more consistently. If you are an *IT professional* are you are looking to build and deploy custom solutions that include proven best practices to improve quality, consistency and time to market. And if you wish to spend significantly less time having to learn all...Wet: Wet, the game.WPF Data Editors: WPF Data Editor controls contains most of editor controls in WPF.xSolon Instructions: Framework to run scripts


  • obiee memory usage

    - by user554629
    Heap memory is a frequent customer topic. Here's the quick refresher, oriented towards AIX, but the principles apply to other unix implementations. 1. 32-bit processes have a maximum addressability of 4GB; usable application heap size of 2-3 GB.  On AIX it is controlled by an environment variable: export LDR_CNTRL=....=MAXDATA=0x080000000   # 2GB ( The leading zero is deliberate, not required )   1a. It is  possible to get 3.25GB  heap size for a 32-bit process using @DSA (Discontiguous Segment Allocation)     export LDR_CNTRL=MAXDATA=0xd0000000@DSA  # 3.25 GB 32-bit only        One side-effect of using AIX segments "c" and "d" is that shared libraries will be loaded privately, and not shared.        If you need the additional heap space, this is worth the trade-off.  This option is frequently used for 32-bit java.   1b. 64-bit processes have no need for the @DSA option. 2. 64-bit processes can double the 32-bit heap size to 4GB using: export LDR_CNTRL=....=MAXDATA=0x100000000  # 1 with 8-zeros    2a. But this setting would place the same memory limitations on obiee as a 32-bit process    2b. The major benefit of 64-bit is to break the binds of 32-bit addressing.  At a minimum, use 8GB export LDR_CNTRL=....=MAXDATA=0x200000000  # 2 with 8-zeros    2c.  Many large customers are providing extra safety to their servers by using 16GB: export LDR_CNTRL=....=MAXDATA=0x400000000  # 4 with 8-zeros There is no performance penalty for providing virtual memory allocations larger than required by the application.  - If the server only uses 2GB of space in 64-bit ... specifying 16GB just provides an upper bound cushion.    When an unexpected user query causes a sudden memory surge, the extra memory keeps the server running. 3.  The next benefit to 64-bit is that you can provide huge thread stack sizes for      strange queries that might otherwise crash the server.      nqsserver uses fast recursive algorithms to traverse complicated control structures.    This means lots of thread space to hold the stack frames.    3a. Stack frames mostly contain register values;  64-bit registers are twice as large as 32-bit          At a minimum you should  quadruple the size of the server stack threads in NQSConfig.INI          when migrating from 32- to 64-bit, to prevent a rogue query from crashing the server.           Allocate more than is normally necessary for safety.    3b. There is no penalty for allocating more stack size than you need ...           it is just virtual memory;   no real resources  are consumed until the extra space is needed.    3c. Increasing thread stack sizes may require the process heap size (MAXDATA) to be increased.          Heap space is used for dynamic memory requests, and for thread stacks.          No performance penalty to run with large heap and thread stack sizes.           In a 32-bit world, this safety would require careful planning to avoid exceeding 2GM usable storage.     3d. Increasing the number of threads also may require additional heap storage.          Most thread stack frames on obiee are allocated when the server is started,          and the real memory usage increases as threads run work. Does 2.8GB sound like a lot of memory for an AIX application server? - I guess it is what you are accustomed to seeing from "grandpa's applications". 
- One of the primary design goals of obiee is to trade memory for services ( db, query caches, etc )
- 2.8GB is still well under the 4GB heap size allocated with MAXDATA=0x100000000
- 2.8GB process size is also possible even on 32-bit Windows applications
- It is not unusual to receive a sudden request for 30MB of contiguous storage on obiee.
- This is not a memory leak;  eventually the nqsserver storage will stabilize, but it may take days to do so.
vmstat is the tool of choice to observe memory usage.  On AIX vmstat will show something that may be startling to some people ... that available free memory ( the 2nd column of the memory group ) is always trending toward zero ... no available free memory.  Some customers have concluded that "nearly zero memory free" means it is time to upgrade the server with more real memory.  After the upgrade, the server again shows very little free memory available. Should you be concerned about this?  Many customers are!!  Here is what is happening:
- AIX filesystems are built on a paging model.  If you read/write a filesystem block it is paged into memory ( no read/write system calls )
- This filesystem "page" has its own "backing store" on disk, the original filesystem block.  When the system needs the real memory page holding the file block, there is no need to "page out".  The page can be stolen immediately, because the original is still on disk in the filesystem.
- The filesystem pages tend to collect ... every filesystem block that was ever seen since system boot is available in memory.  If another application needs the file block, it is retrieved with no physical I/O.
What happens if the system does need the memory ... to satisfy a 30MB heap request by nqsserver, for example?
- Since the filesystem blocks have their own backing store ( not on a paging device ) the kernel can just steal any filesystem block ... on a least-recently-used basis ... to satisfy a new real memory request for "computation pages".
No cause for alarm.  vmstat is accurately displaying whether all filesystem blocks have been touched, and now reside in memory.  Back to nqsserver: when should you be worried about its memory footprint? Answer: Almost never.  Stop monitoring it ... stop fussing over it ... stop trying to optimize it. This is a production application, and nqsserver uses the memory it requires to accomplish the job, based on demand. C'mon ... never worry?  I'm from New York ... worry is what we do best. Ok, here is the metric you should be watching, using vmstat:
- Are you paging? ... there are several columns of vmstat output:
bash-2.04$ vmstat 3 3
System configuration: lcpu=4 mem=4096MB
kthr    memory              page              faults        cpu
----- ------------ ------------------------ ------------ -----------
 r  b    avm   fre  re  pi  po  fr   sr  cy  in   sy  cs us sy id wa
 0  0 208492  2600   0   0   0   0    0   0  13   45  73  0  0 99  0
 0  0 208492  2600   0   0   0   0    0   0   9   12  77  0  0 99  0
 0  0 208492  2600   0   0   0   0    0   0   9   40  86  0  0 99  0
fre is the "available free memory" indicator that trends toward zero
re is "re-page". The kernel steals a real memory page from one process; it is immediately re-paged back to the original process
pi is "page in". A process memory page previously paged out is now paged back in because the process needs it
po is "page out". A process memory block was paged out, because it was needed by some other process
Light paging activity ( re, pi, po ) is not a concern for worry.
Processes get started, need some memory, go away. Sustained paging activity  is cause for concern.   obiee users are having a terrible day if these counters are always changing. Hang on ... if nqsserver needs that memory and I reduce MAXDATA to keep the process under control, won't the nqsserver process crash when the memory is needed? Yes it will.   It means that nqsserver is configured to require too much memory and there are  lots of options to reduce the real memory requirement.  - number of threads  - size of query cache  - size of sort But I need nqsserver to keep running. Real memory is over-committed.    Many things can cause this:- running all application processes on a single server    ... DB server, web servers, WebLogic/WebSphere, sawserver, nqsserver, etc.   You could move some of those to another host machine and communicate over the network  The need for real memory doesn't go away, it's just distributed to other host machines. - AIX LPAR is configured with too little memory.     The AIX admin needs to provide more real memory to the LPAR running obiee. - More memory to this LPAR affects other partitions. Then it's time to visit your friendly IBM rep and buy more memory.
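As a quick sanity check of the MAXDATA arithmetic quoted above, the hex values convert to the stated sizes. The throwaway snippet below is purely illustrative (it is not part of obiee or AIX tooling) and simply prints the conversion:

using System;

class MaxDataCheck
{
    static void Main()
    {
        // LDR_CNTRL MAXDATA values quoted above, in bytes.
        long[] maxData = { 0x080000000, 0x0d0000000, 0x100000000, 0x200000000, 0x400000000 };
        foreach (long bytes in maxData)
        {
            // 1 GB here means 2^30 bytes, matching the 2 / 3.25 / 4 / 8 / 16 GB figures in the text.
            Console.WriteLine("0x{0:x9} = {1} GB", bytes, bytes / (1024.0 * 1024.0 * 1024.0));
        }
    }
}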


  • Convert from Procedural to Object Oriented Code

    - by Anthony
    I have been reading Working Effectively with Legacy Code and Clean Code with the goal of learning strategies on how to begin cleaning up the existing code-base of a large ASP.NET webforms application. This system has been around since 2005 and since then has undergone a number of enhancements. Originally the code was structured as follows (and is still largely structured this way): ASP.NET (aspx/ascx) Code-behind (c#) Business Logic Layer (c#) Data Access Layer (c#) Database (Oracle) The main issue is that the code is procedural masquerading as object-oriented. It virtually violates all of the guidelines described in both books. This is an example of a typical class in the Business Logic Layer: public class AddressBO { public TransferObject GetAddress(string addressID) { if (StringUtils.IsNull(addressID)) { throw new ValidationException("Address ID must be entered"); } AddressDAO addressDAO = new AddressDAO(); return addressDAO.GetAddress(addressID); } public TransferObject Insert(TransferObject addressDetails) { if (StringUtils.IsNull(addressDetails.GetString("EVENT_ID")) || StringUtils.IsNull(addressDetails.GetString("LOCALITY")) || StringUtils.IsNull(addressDetails.GetString("ADDRESS_TARGET")) || StringUtils.IsNull(addressDetails.GetString("ADDRESS_TYPE_CODE")) || StringUtils.IsNull(addressDetails.GetString("CREATED_BY"))) { throw new ValidationException( "You must enter an Event ID, Locality, Address Target, Address Type Code and Created By."); } string addressID = Sequence.GetNextValue("ADDRESS_ID_SEQ"); addressDetails.SetValue("ADDRESS_ID", addressID); string syncID = Sequence.GetNextValue("SYNC_ID_SEQ"); addressDetails.SetValue("SYNC_ADDRESS_ID", syncID); TransferObject syncDetails = new TransferObject(); Transaction transaction = new Transaction(); try { AddressDAO addressDAO = new AddressDAO(); addressDAO.Insert(addressDetails, transaction); // insert the record for the target TransferObject addressTargetDetails = new TransferObject(); switch (addressDetails.GetString("ADDRESS_TARGET")) { case "PARTY_ADDRESSES": { addressTargetDetails.SetValue("ADDRESS_ID", addressID); addressTargetDetails.SetValue("ADDRESS_TYPE_CODE", addressDetails.GetString("ADDRESS_TYPE_CODE")); addressTargetDetails.SetValue("PARTY_ID", addressDetails.GetString("PARTY_ID")); addressTargetDetails.SetValue("EVENT_ID", addressDetails.GetString("EVENT_ID")); addressTargetDetails.SetValue("CREATED_BY", addressDetails.GetString("CREATED_BY")); addressDAO.InsertPartyAddress(addressTargetDetails, transaction); break; } case "PARTY_CONTACT_ADDRESSES": { addressTargetDetails.SetValue("ADDRESS_ID", addressID); addressTargetDetails.SetValue("ADDRESS_TYPE_CODE", addressDetails.GetString("ADDRESS_TYPE_CODE")); addressTargetDetails.SetValue("PUBLIC_RELEASE_FLAG", addressDetails.GetString("PUBLIC_RELEASE_FLAG")); addressTargetDetails.SetValue("CONTACT_ID", addressDetails.GetString("CONTACT_ID")); addressTargetDetails.SetValue("EVENT_ID", addressDetails.GetString("EVENT_ID")); addressTargetDetails.SetValue("CREATED_BY", addressDetails.GetString("CREATED_BY")); addressDAO.InsertContactAddress(addressTargetDetails, transaction); break; } << many more cases here >> default: { break; } } // synchronise SynchronisationBO synchronisationBO = new SynchronisationBO(); syncDetails = synchronisationBO.Synchronise("I", transaction, "ADDRESSES", addressDetails.GetString("ADDRESS_TARGET"), addressDetails, addressTargetDetails); // commit transaction.Commit(); } catch (Exception) { transaction.Rollback(); throw; } return new 
TransferObject("ADDRESS_ID", addressID, "SYNC_DETAILS", syncDetails); } << many more methods are here >> } It has a lot of duplication, the class has a number of responsibilities, etc, etc - it is just generally 'un-clean' code. All of the code throughout the system is dependent on concrete implementations. This is an example of a typical class in the Data Access Layer: public class AddressDAO : GenericDAO { public static readonly string BASE_SQL_ADDRESSES = "SELECT " + " a.address_id, " + " a.event_id, " + " a.flat_unit_type_code, " + " fut.description as flat_unit_description, " + " a.flat_unit_num, " + " a.floor_level_code, " + " fl.description as floor_level_description, " + " a.floor_level_num, " + " a.building_name, " + " a.lot_number, " + " a.street_number, " + " a.street_name, " + " a.street_type_code, " + " st.description as street_type_description, " + " a.street_suffix_code, " + " ss.description as street_suffix_description, " + " a.postal_delivery_type_code, " + " pdt.description as postal_delivery_description, " + " a.postal_delivery_num, " + " a.locality, " + " a.state_code, " + " s.description as state_description, " + " a.postcode, " + " a.country, " + " a.lock_num, " + " a.created_by, " + " TO_CHAR(a.created_datetime, '" + SQL_DATETIME_FORMAT + "') as created_datetime, " + " a.last_updated_by, " + " TO_CHAR(a.last_updated_datetime, '" + SQL_DATETIME_FORMAT + "') as last_updated_datetime, " + " a.sync_address_id, " + " a.lat," + " a.lon, " + " a.validation_confidence, " + " a.validation_quality, " + " a.validation_status " + "FROM ADDRESSES a, FLAT_UNIT_TYPES fut, FLOOR_LEVELS fl, STREET_TYPES st, " + " STREET_SUFFIXES ss, POSTAL_DELIVERY_TYPES pdt, STATES s " + "WHERE a.flat_unit_type_code = fut.flat_unit_type_code(+) " + "AND a.floor_level_code = fl.floor_level_code(+) " + "AND a.street_type_code = st.street_type_code(+) " + "AND a.street_suffix_code = ss.street_suffix_code(+) " + "AND a.postal_delivery_type_code = pdt.postal_delivery_type_code(+) " + "AND a.state_code = s.state_code(+) "; public TransferObject GetAddress(string addressID) { //Build the SELECT Statement StringBuilder selectStatement = new StringBuilder(BASE_SQL_ADDRESSES); //Add WHERE condition selectStatement.Append(" AND a.address_id = :addressID"); ArrayList parameters = new ArrayList{DBUtils.CreateOracleParameter("addressID", OracleDbType.Decimal, addressID)}; // Execute the SELECT statement Query query = new Query(); DataSet results = query.Execute(selectStatement.ToString(), parameters); // Check if 0 or more than one rows returned if (results.Tables[0].Rows.Count == 0) { throw new NoDataFoundException(); } if (results.Tables[0].Rows.Count > 1) { throw new TooManyRowsException(); } // Return a TransferObject containing the values return new TransferObject(results); } public void Insert(TransferObject insertValues, Transaction transaction) { // Store Values string addressID = insertValues.GetString("ADDRESS_ID"); string syncAddressID = insertValues.GetString("SYNC_ADDRESS_ID"); string eventID = insertValues.GetString("EVENT_ID"); string createdBy = insertValues.GetString("CREATED_BY"); // postal delivery string postalDeliveryTypeCode = insertValues.GetString("POSTAL_DELIVERY_TYPE_CODE"); string postalDeliveryNum = insertValues.GetString("POSTAL_DELIVERY_NUM"); // unit/building string flatUnitTypeCode = insertValues.GetString("FLAT_UNIT_TYPE_CODE"); string flatUnitNum = insertValues.GetString("FLAT_UNIT_NUM"); string floorLevelCode = insertValues.GetString("FLOOR_LEVEL_CODE"); string floorLevelNum = 
insertValues.GetString("FLOOR_LEVEL_NUM"); string buildingName = insertValues.GetString("BUILDING_NAME"); // street string lotNumber = insertValues.GetString("LOT_NUMBER"); string streetNumber = insertValues.GetString("STREET_NUMBER"); string streetName = insertValues.GetString("STREET_NAME"); string streetTypeCode = insertValues.GetString("STREET_TYPE_CODE"); string streetSuffixCode = insertValues.GetString("STREET_SUFFIX_CODE"); // locality/state/postcode/country string locality = insertValues.GetString("LOCALITY"); string stateCode = insertValues.GetString("STATE_CODE"); string postcode = insertValues.GetString("POSTCODE"); string country = insertValues.GetString("COUNTRY"); // esms address string esmsAddress = insertValues.GetString("ESMS_ADDRESS"); //address/GPS string lat = insertValues.GetString("LAT"); string lon = insertValues.GetString("LON"); string zoom = insertValues.GetString("ZOOM"); //string validateDate = insertValues.GetString("VALIDATED_DATE"); string validatedBy = insertValues.GetString("VALIDATED_BY"); string confidence = insertValues.GetString("VALIDATION_CONFIDENCE"); string status = insertValues.GetString("VALIDATION_STATUS"); string quality = insertValues.GetString("VALIDATION_QUALITY"); // the insert statement StringBuilder insertStatement = new StringBuilder("INSERT INTO ADDRESSES ("); StringBuilder valuesStatement = new StringBuilder("VALUES ("); ArrayList parameters = new ArrayList(); // build the insert statement insertStatement.Append("ADDRESS_ID, EVENT_ID, CREATED_BY, CREATED_DATETIME, LOCK_NUM "); valuesStatement.Append(":addressID, :eventID, :createdBy, SYSDATE, 1 "); parameters.Add(DBUtils.CreateOracleParameter("addressID", OracleDbType.Decimal, addressID)); parameters.Add(DBUtils.CreateOracleParameter("eventID", OracleDbType.Decimal, eventID)); parameters.Add(DBUtils.CreateOracleParameter("createdBy", OracleDbType.Varchar2, createdBy)); // build the insert statement if (!StringUtils.IsNull(syncAddressID)) { insertStatement.Append(", SYNC_ADDRESS_ID"); valuesStatement.Append(", :syncAddressID"); parameters.Add(DBUtils.CreateOracleParameter("syncAddressID", OracleDbType.Decimal, syncAddressID)); } if (!StringUtils.IsNull(postalDeliveryTypeCode)) { insertStatement.Append(", POSTAL_DELIVERY_TYPE_CODE"); valuesStatement.Append(", :postalDeliveryTypeCode "); parameters.Add(DBUtils.CreateOracleParameter("postalDeliveryTypeCode", OracleDbType.Varchar2, postalDeliveryTypeCode)); } if (!StringUtils.IsNull(postalDeliveryNum)) { insertStatement.Append(", POSTAL_DELIVERY_NUM"); valuesStatement.Append(", :postalDeliveryNum "); parameters.Add(DBUtils.CreateOracleParameter("postalDeliveryNum", OracleDbType.Varchar2, postalDeliveryNum)); } if (!StringUtils.IsNull(flatUnitTypeCode)) { insertStatement.Append(", FLAT_UNIT_TYPE_CODE"); valuesStatement.Append(", :flatUnitTypeCode "); parameters.Add(DBUtils.CreateOracleParameter("flatUnitTypeCode", OracleDbType.Varchar2, flatUnitTypeCode)); } if (!StringUtils.IsNull(lat)) { insertStatement.Append(", LAT"); valuesStatement.Append(", :lat "); parameters.Add(DBUtils.CreateOracleParameter("lat", OracleDbType.Decimal, lat)); } if (!StringUtils.IsNull(lon)) { insertStatement.Append(", LON"); valuesStatement.Append(", :lon "); parameters.Add(DBUtils.CreateOracleParameter("lon", OracleDbType.Decimal, lon)); } if (!StringUtils.IsNull(zoom)) { insertStatement.Append(", ZOOM"); valuesStatement.Append(", :zoom "); parameters.Add(DBUtils.CreateOracleParameter("zoom", OracleDbType.Decimal, zoom)); } if (!StringUtils.IsNull(flatUnitNum)) { 
insertStatement.Append(", FLAT_UNIT_NUM"); valuesStatement.Append(", :flatUnitNum "); parameters.Add(DBUtils.CreateOracleParameter("flatUnitNum", OracleDbType.Varchar2, flatUnitNum)); } if (!StringUtils.IsNull(floorLevelCode)) { insertStatement.Append(", FLOOR_LEVEL_CODE"); valuesStatement.Append(", :floorLevelCode "); parameters.Add(DBUtils.CreateOracleParameter("floorLevelCode", OracleDbType.Varchar2, floorLevelCode)); } if (!StringUtils.IsNull(floorLevelNum)) { insertStatement.Append(", FLOOR_LEVEL_NUM"); valuesStatement.Append(", :floorLevelNum "); parameters.Add(DBUtils.CreateOracleParameter("floorLevelNum", OracleDbType.Varchar2, floorLevelNum)); } if (!StringUtils.IsNull(buildingName)) { insertStatement.Append(", BUILDING_NAME"); valuesStatement.Append(", :buildingName "); parameters.Add(DBUtils.CreateOracleParameter("buildingName", OracleDbType.Varchar2, buildingName)); } if (!StringUtils.IsNull(lotNumber)) { insertStatement.Append(", LOT_NUMBER"); valuesStatement.Append(", :lotNumber "); parameters.Add(DBUtils.CreateOracleParameter("lotNumber", OracleDbType.Varchar2, lotNumber)); } if (!StringUtils.IsNull(streetNumber)) { insertStatement.Append(", STREET_NUMBER"); valuesStatement.Append(", :streetNumber "); parameters.Add(DBUtils.CreateOracleParameter("streetNumber", OracleDbType.Varchar2, streetNumber)); } if (!StringUtils.IsNull(streetName)) { insertStatement.Append(", STREET_NAME"); valuesStatement.Append(", :streetName "); parameters.Add(DBUtils.CreateOracleParameter("streetName", OracleDbType.Varchar2, streetName)); } if (!StringUtils.IsNull(streetTypeCode)) { insertStatement.Append(", STREET_TYPE_CODE"); valuesStatement.Append(", :streetTypeCode "); parameters.Add(DBUtils.CreateOracleParameter("streetTypeCode", OracleDbType.Varchar2, streetTypeCode)); } if (!StringUtils.IsNull(streetSuffixCode)) { insertStatement.Append(", STREET_SUFFIX_CODE"); valuesStatement.Append(", :streetSuffixCode "); parameters.Add(DBUtils.CreateOracleParameter("streetSuffixCode", OracleDbType.Varchar2, streetSuffixCode)); } if (!StringUtils.IsNull(locality)) { insertStatement.Append(", LOCALITY"); valuesStatement.Append(", :locality"); parameters.Add(DBUtils.CreateOracleParameter("locality", OracleDbType.Varchar2, locality)); } if (!StringUtils.IsNull(stateCode)) { insertStatement.Append(", STATE_CODE"); valuesStatement.Append(", :stateCode"); parameters.Add(DBUtils.CreateOracleParameter("stateCode", OracleDbType.Varchar2, stateCode)); } if (!StringUtils.IsNull(postcode)) { insertStatement.Append(", POSTCODE"); valuesStatement.Append(", :postcode "); parameters.Add(DBUtils.CreateOracleParameter("postcode", OracleDbType.Varchar2, postcode)); } if (!StringUtils.IsNull(country)) { insertStatement.Append(", COUNTRY"); valuesStatement.Append(", :country "); parameters.Add(DBUtils.CreateOracleParameter("country", OracleDbType.Varchar2, country)); } if (!StringUtils.IsNull(esmsAddress)) { insertStatement.Append(", ESMS_ADDRESS"); valuesStatement.Append(", :esmsAddress "); parameters.Add(DBUtils.CreateOracleParameter("esmsAddress", OracleDbType.Varchar2, esmsAddress)); } if (!StringUtils.IsNull(validatedBy)) { insertStatement.Append(", VALIDATED_DATE"); valuesStatement.Append(", SYSDATE "); insertStatement.Append(", VALIDATED_BY"); valuesStatement.Append(", :validatedBy "); parameters.Add(DBUtils.CreateOracleParameter("validatedBy", OracleDbType.Varchar2, validatedBy)); } if (!StringUtils.IsNull(confidence)) { insertStatement.Append(", VALIDATION_CONFIDENCE"); valuesStatement.Append(", :confidence "); 
parameters.Add(DBUtils.CreateOracleParameter("confidence", OracleDbType.Decimal, confidence)); } if (!StringUtils.IsNull(status)) { insertStatement.Append(", VALIDATION_STATUS"); valuesStatement.Append(", :status "); parameters.Add(DBUtils.CreateOracleParameter("status", OracleDbType.Varchar2, status)); } if (!StringUtils.IsNull(quality)) { insertStatement.Append(", VALIDATION_QUALITY"); valuesStatement.Append(", :quality "); parameters.Add(DBUtils.CreateOracleParameter("quality", OracleDbType.Decimal, quality)); } // finish off the statement insertStatement.Append(") "); valuesStatement.Append(")"); // build the insert statement string sql = insertStatement + valuesStatement.ToString(); // Execute the INSERT Statement Dml dmlDAO = new Dml(); int rowsAffected = dmlDAO.Execute(sql, transaction, parameters); if (rowsAffected == 0) { throw new NoRowsAffectedException(); } } << many more methods go here >> } This system was developed by me and a small team back in 2005 after a 1 week .NET course. Before than my experience was in client-server applications. Over the past 5 years I've come to recognise the benefits of automated unit testing, automated integration testing and automated acceptance testing (using Selenium or equivalent) but the current code-base seems impossible to introduce these concepts. We are now starting to work on a major enhancement project with tight time-frames. The team consists of 5 .NET developers - 2 developers with a few years of .NET experience and 3 others with little or no .NET experience. None of the team (including myself) has experience in using .NET unit testing or mocking frameworks. What strategy would you use to make this code cleaner, more object-oriented, testable and maintainable?
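One concrete illustration of the kind of first step the cited books suggest for code like this: extracting an interface over the concrete AddressDAO and injecting it into AddressBO creates a seam, so the business logic can be unit tested with a fake or mock DAO. A minimal sketch, reusing names from the code above (IAddressDao itself is an assumed name) and covering only the GetAddress path:

// Seam extracted from the concrete AddressDAO; only the member needed here is shown.
public interface IAddressDao
{
    TransferObject GetAddress(string addressID);
}

public class AddressBO
{
    private readonly IAddressDao _addressDao;

    // The DAO is injected, so unit tests can pass in a fake or mock implementation
    // instead of hitting Oracle.
    public AddressBO(IAddressDao addressDao)
    {
        _addressDao = addressDao;
    }

    public TransferObject GetAddress(string addressID)
    {
        if (StringUtils.IsNull(addressID))
        {
            throw new ValidationException("Address ID must be entered");
        }
        return _addressDao.GetAddress(addressID);
    }
}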


  • Building a &ldquo;real&rdquo; extension for Expression Blend

    - by Timmy Kokke
    Last time I showed you how to get started building extensions for Expression Blend. Let's build a useful extension this time and go a bit deeper into Blend. Source of project  => here Compiled dll => here (extract into /extensions folder of Expression Blend)   The Extension When working on large Xaml files in Blend it's often hard to find a specific control in the "Objects and Timeline Pane". An extension that searches the active document and presents all elements that satisfy the query would be helpful. When the user starts typing a search query a search will be performed and the results are shown in the list. After the user selects an item in the results list, the control in the "Objects and Timeline Pane" will be selected. Below is a sketch of what it is going to look like. The Solution Create a new WPF User Control project as shown in the earlier tutorial in the Configuring the extension project section, but name it AdvancedSearch this time. Delete the default UserControl1.Xaml to clear the solution (a new user control will be added later though, but adding a user control is easier than renaming one). Create the main entry point of the add-in by adding a new class to the solution and naming it AdvancedSearchPackage. Add a reference to Microsoft.Expression.Extensibility and to System.ComponentModel.Composition. Implement the IPackage interface and add the Export attribute from MEF to the definition. While you're at it, add references to Microsoft.Expression.DesignSurface, Microsoft.Expression.FrameWork and Microsoft.Expression.Markup. These will be used later. The Load method from the IPackage interface is going to create a ViewModel to bind to from the UI. Add another class to the solution and name this AdvancedSearchViewModel. This class needs to implement the INotifyPropertyChanged interface to enable notifications to the view. Add a constructor to the class that takes an IServices interface as a parameter. Create a new instance of the AdvancedSearchViewModel in the Load method in the AdvancedSearchPackage class. The AdvancedSearchPackage class should look like this now:   using System.ComponentModel.Composition; using Microsoft.Expression.Extensibility;   namespace AdvancedSearch { [Export(typeof(IPackage))] public class AdvancedSearchPackage:IPackage {   public void Load(IServices services) { new AdvancedSearchViewModel(services); }   public void Unload() { } } }   Add a new UserControl to the project and name this AdvancedSearchView. The View will be created by the ViewModel, which will pass itself to the constructor of the view. Change the constructor of the View to take an AdvancedSearchViewModel object as a parameter. Add a private field to store the ViewModel and set this field in the constructor. Point the DataContext of the view to the ViewModel. The View will look something like this now:   namespace AdvancedSearch { public partial class AdvancedSearchView:UserControl { private readonly AdvancedSearchViewModel _advancedSearchViewModel;   public AdvancedSearchView(AdvancedSearchViewModel advancedSearchViewModel) { _advancedSearchViewModel = advancedSearchViewModel; InitializeComponent(); this.DataContext = _advancedSearchViewModel; } } }   The View is going to be created in the constructor of the ViewModel and stored in a read-only property.   
public FrameworkElement View { get; private set; }   public AdvancedSearchViewModel(IServices services) { _services = services; View = new AdvancedSearchView(this); } The last thing the solution needs before we'll wire things up is a new class, PossibleNode. This class will be used later to store the search results. The solution should look like this now:   Adding UI to the UI The extension should build and run now, although nothing is showing up in Blend yet. To enable the user to perform a search query, add a TextBox and a ListBox to the AdvancedSearchView.xaml file. I've set the rows of the grid, too, to make them look a little better. Add the TextChanged event to the TextBox and the SelectionChanged event to the ListBox; we'll need those later on. <Grid> <Grid.RowDefinitions> <RowDefinition Height="32" /> <RowDefinition Height="*" /> </Grid.RowDefinitions> <TextBox TextChanged="SearchQueryTextChanged" HorizontalAlignment="Stretch" Margin="4" Name="SearchQuery" VerticalAlignment="Stretch" /> <ListBox SelectionChanged="SearchResultSelectionChanged" HorizontalAlignment="Stretch" Margin="4" Name="SearchResult" VerticalAlignment="Stretch" Grid.Row="1" /> </Grid>   This will create a user interface like: To make the View show up in Blend it has to be registered with the WindowService. The GetService<T> method is used to get services from Blend, which are your entry points into Blend. When writing extensions you will encounter this method very often. In this case we're asking for an IWindowService interface. The IWindowService interface serves events for changing windows and themes, is used for adding or removing resources and is used for registering and unregistering palettes. All panes in Blend are palettes and are registered through the RegisterPalette method. The first parameter passed to this method is a string containing a unique ID for the palette. This ID can be used to get access to the palette later. The second parameter is the View. The third parameter is a title for the pane. This title is shown when the pane is visible. It is also shown in the window menu of Blend. The last parameter is a KeyBinding. I have chosen Ctrl+Shift+F to call the Advanced Search pane. This value is also shown in the window menu of Blend.   services.GetService<IWindowService>().RegisterPalette( "AdvancedSearch", viewModel.View, "Advanced Search", new KeyBinding { Key = Key.F, Modifiers = ModifierKeys.Control | ModifierKeys.Shift } );   You can compile and run now. After Blend starts you can hit Ctrl+Shift+F or go to the window menu to call the advanced search extension. Searching for controls The search has to be cleared on every change of the active document. The document service fires an event every time a new document is opened, a document is closed or another document view is selected. Add the following line to the constructor of the ViewModel to handle the ActiveDocumentChanged event:   _services.GetService<IDocumentService>().ActiveDocumentChanged += ActiveDocumentChanged;   And implement the ActiveDocumentChanged method:   private void ActiveDocumentChanged(object sender, DocumentChangedEventArgs e) { }   To get to the contents of the document we first need to get access to the "Objects and Timeline" pane. This pane is registered in the PaletteRegistry in the same way as this extension has registered itself. The palettes are accessible through an associative array. All you need to provide is the identifier of the palette you want. The Id of the "Objects and Timeline" pane is "Designer_TimelinePane". 
I’ve included a list of the other default panes at the bottom of this article. Each palette has a Content property which can be cast to the type of the pane.   var timelinePane = (TimelinePane)_services.GetService<IWindowService>() .PaletteRegistry["Designer_TimelinePane"] .Content;   Add a private field to the top of the AdvancedSearchViewModel class to store the active SceneViewModel. The SceneViewModel is needed to set the current selection and to get the little icons for the type of control.   private SceneViewModel _activeSceneViewModel;   When the active SceneViewModel changes, the ActiveSceneViewModel is stored in this field. The list of possible nodes is cleared and an PropertyChanged event is fired for this list to notify the UI to clear the list. This will make the eventhandler look like this: private void ActiveDocumentChanged(object sender, DocumentChangedEventArgs e) { var timelinePane = (TimelinePane)_services.GetService<IWindowService>() .PaletteRegistry["Designer_TimelinePane"].Content;   _activeSceneViewModel = timelinePane.ActiveSceneViewModel; PossibleNodes = new List<PossibleNode>(); InvokePropertyChanged("PossibleNodes"); } The PossibleNode class used to store information about the controls found by the search. It’s a dumb data class with only 3 properties, the name of the control, the SceneNode and a brush used for the little icon. The SceneNode is the base class for every possible object you can create in Blend, like Brushes, Controls, Annotations, ResourceDictionaries and VisualStates. The entire PossibleNode class looks like this:   using System.Windows.Media; using Microsoft.Expression.DesignSurface.ViewModel;   namespace AdvancedSearch { public class PossibleNode { public string Name { get; set; } public SceneNode SceneNode { get; set; } public DrawingBrush IconBrush { get; set; } } }   Add these two methods to the AdvancedSearchViewModel class:   public void Search(string searchText) { } public void SelectElement(PossibleNode node){ }   Both these methods are going to be called from the view. The Search method performs the search and updates the PossibleNodes list.  The controls in the active document can be accessed thru TimeLineItemsManager class. This class contains a read only collection of TimeLineItems. By using a Linq query the possible nodes are selected and placed in the PossibleNodes list.   var timelineItemManager = new TimelineItemManager(_activeSceneViewModel); PossibleNodes = new List<PossibleNode>( (from d in timelineItemManager.ItemList where d.DisplayName.ToLowerInvariant().StartsWith( searchText.ToLowerInvariant()) select new PossibleNode() { IconBrush = d.IconBrush, SceneNode = d.SceneNode, Name = d.DisplayName }).ToList() ); InvokePropertyChanged(InternalConst.PossibleNodes);   The Select method is pretty straight forward. It contains two lines.The first to clear the selection. Otherwise the selected element would be added to the current selection. The second line selects the nodes. It is given a new array with the node to be selected.   _activeSceneViewModel.ClearSelections(); _activeSceneViewModel.SelectNodes(new[] { node.SceneNode });   The last thing that needs to be done is to wire the whole thing to the View. The two event handlers just call the Search and SelectElement methods on the ViewModel.   
private void SearchQueryTextChanged(object sender, TextChangedEventArgs e) { _advancedSearchViewModel.Search(SearchQuery.Text); }   private void SearchResultSelectionChanged(object sender, SelectionChangedEventArgs e) { if(e.AddedItems.Count>0) { _advancedSearchViewModel.SelectElement(e.AddedItems[0] as PossibleNode); } }   The ListBox has to be bound to the PossibleNodes list, and a simple DataTemplate is added to show the selection. The IconWithOverlay control can be found in the Microsoft.Expression.DesignSurface.UserInterface.Timeline.UI namespace in the Microsoft.Expression.DesignSurface assembly. The ListBox should look something like:   <ListBox SelectionChanged="SearchResultSelectionChanged" HorizontalAlignment="Stretch" Margin="4" Name="SearchResult" VerticalAlignment="Stretch" Grid.Row="1" ItemsSource="{Binding PossibleNodes}"> <ListBox.ItemTemplate> <DataTemplate> <StackPanel Orientation="Horizontal"> <tlui:IconWithOverlay Margin="2,0,10,0" Width="12" Height="12" SourceBrush="{Binding Path=IconBrush, Mode=OneWay}" /> <TextBlock Text="{Binding Name}"/> </StackPanel> </DataTemplate> </ListBox.ItemTemplate> </ListBox>   Compile and run. Inside Blend the extension could look something like below. What's Next When you've got the extension running, try placing breakpoints in the code and see what else is in there. There's a lot to explore and build extensions on. I personally would love an extension to search for resources. Last but not least, you can download the source of the project here. If you have any questions, let me know. If you just want to use this extension, you can download the compiled dll here. Just extract the .zip into the /extensions folder of Expression Blend. Notes Target framework I ran into some issues when using the .NET Framework 4 Client Profile as a target framework. I got a strange error saying certain obvious namespaces could not be found, Microsoft.Expression in my case. If you run into something like this, try setting the target framework to .NET Framework 4 instead of the client version.

Identifiers of default panes:
Identifier                 | Type              | Title
Designer_TimelinePane      | TimelinePane      | Objects and Timeline
Designer_ToolPane          | ToolPane          | Tools
Designer_ProjectPane       | ProjectPane       | Projects
Designer_DataPane          | DataPane          | Data
Designer_ResourcePane      | ResourcePane      | Resources
Designer_PropertyInspector | PropertyInspector | Properties
Designer_TriggersPane      | TriggersPane      | Triggers
Interaction_Skin           | SkinView          | States
Designer_AssetPane         | AssetPane         | Assets
Interaction_Parts          | PartsPane         | Parts
Designer_ResultsPane       | ResultsPane       | Results
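One small piece the walkthrough calls but never shows is the InvokePropertyChanged helper on the ViewModel. Assuming a plain INotifyPropertyChanged implementation (the exact code in the original project may differ, and InternalConst.PossibleNodes is presumably just a string constant holding "PossibleNodes"), it would look roughly like this:

    // inside AdvancedSearchViewModel (requires using System.ComponentModel;)
    public event PropertyChangedEventHandler PropertyChanged;

    private void InvokePropertyChanged(string propertyName)
    {
        // Raise PropertyChanged so bindings such as ItemsSource="{Binding PossibleNodes}" pick up the new list.
        PropertyChangedEventHandler handler = PropertyChanged;
        if (handler != null)
        {
            handler(this, new PropertyChangedEventArgs(propertyName));
        }
    }

The value passed in must match the name of the bound property exactly, which is why the article can use either the literal "PossibleNodes" or a constant that holds the same string.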

    Read the article

  • CodePlex Daily Summary for Tuesday, March 08, 2011

    CodePlex Daily Summary for Tuesday, March 08, 2011Popular ReleasesAjax Minifier: AjaxMin 4.14: Fixed issue with CSS3 @media and @page parsing. Added support for more properties in the MSBuild task.DotNetAge -a lightweight Mvc jQuery CMS: DotNetAge 2: What is new in DotNetAge 2.0 ? Completely update DJME to DJME2, enhance user experience ,more beautiful and more interactively visit DJME project home to lean more about DJME http://www.dotnetage.com/sites/home/djme.html A new widget engine has came! Faster and easiler. Runtime performance enhanced. SEO enhanced. UI Designer enhanced. A new web resources explorer. Page manager enhanced. BlogML supports added that allows you import/export your blog data to/from dotnetage publishi...Master Data Services Manager: stable 1.0.3: Update 2011-03-07 : bug fixes added external configuration File : configuration.config added TreeView Display of model (still in dev) http://img96.imageshack.us/img96/5067/screenshot073l.jpg added Connection Parameters (username, domain, password, stored encrypted in configuration file) http://img402.imageshack.us/img402/5350/screenshot072qc.jpgSharePoint Content Inventory: Release 1.1: Release 1.1Menu and Context Menu for Silverlight 4.0: Silverlight Menu and Context Menu v2.4 Beta: - Moved the core of the PopupMenu class to the new PopupMenuBase class. - Renamed the MenuTriggerElement class to MenuTriggerRelationship. - Renamed the ApplicationMenus property to MenuTriggers. - Renamed the _allowPinnedState property to AllowPinnedState. - Renamed the _restoreFocusOnClose property to RestoreFocusOnClose. - Renamed the SubmenuLaunchKey property to FlyoutKey. - Renamed the AutoMapTriggerElementToSelectableItem property to UseSelectedItemAsTriggerElement. - Renamed the AutoM...Kooboo CMS: Kooboo CMS 3.0 Beta: Files in this downloadkooboo_CMS.zip: The kooboo application files Content_DBProvider.zip: Additional content database implementation of MSSQL,SQLCE, RavenDB and MongoDB. Default is XML based database. To use them, copy the related dlls into web root bin folder and remove old content provider dlls. Content provider has the name like "Kooboo.CMS.Content.Persistence.SQLServer.dll" View_Engines.zip: Supports of Razor, webform and NVelocity view engine. Copy the dlls into web root bin folder t...ASP.NET MVC Project Awesome, jQuery Ajax helpers (controls): 1.7.2: A rich set of helpers (controls) that you can use to build highly responsive and interactive Ajax-enabled Web applications. These helpers include Autocomplete, AjaxDropdown, Lookup, Confirm Dialog, Popup Form, Popup and Pager added fullscreen for the popup and popupformIronPython: 2.7 Release Candidate 2: On behalf of the IronPython team, I am pleased to announce IronPython 2.7 Release Candidate 2. The releases contains a few minor bug fixes, including a working webbrowser module. Please see the release notes for 61395 for what was fixed in previous releases.LINQ to Twitter: LINQ to Twitter Beta v2.0.20: Mono 2.8, Silverlight, OAuth, 100% Twitter API coverage, streaming, extensibility via Raw Queries, and added documentation.IIS Tuner: IIS Tuner 1.0: IIS and ASP.NET performance optimization toolMinemapper: Minemapper v0.1.6: Once again supports biomes, thanks to an updated Minecraft Biome Extractor, which added support for the new Minecraft beta v1.3 map format. 
Updated mcmap to support new biome format.CRM 2011 OData Query Designer: CRM 2011 OData Query Designer: The CRM 2011 OData Query Designer is a Silverlight 4 application that is packaged as a Managed CRM 2011 Solution. This tool allows you to build OData queries by selecting filter criteria, select attributes and order by attributes. The tool also allows you to Execute the query and view the ATOM and JSON data returned. The look and feel of this component will improve and new functionality will be added in the near future so please provide feedback on your experience. Latest Update 8th March ...Sandcastle Help File Builder: SHFB v1.9.3.0 Release: This release supports the Sandcastle June 2010 Release (v2.6.10621.1). It includes full support for generating, installing, and removing MS Help Viewer files. This new release is compiled under .NET 4.0, supports Visual Studio 2010 solutions and projects as documentation sources, and adds support for projects targeting the Silverlight Framework. This release uses the Sandcastle Guided Installation package used by Sandcastle Styles. Download and extract to a folder and then run SandcastleI...AutoLoL: AutoLoL v1.6.4: It is now possible to run the clicker anyway when it can't detect the Masteries Window Fixed a critical bug in the open file dialog Removed the resize button Some UI changes 3D camera movement is now more intuitive (Trackball rotation) When an error occurs on the clicker it will attempt to focus AutoLoLYAF.NET (aka Yet Another Forum.NET): v1.9.5.5 RTW: YAF v1.9.5.5 RTM (Date: 3/4/2011 Rev: 4742) Official Discussion Thread here: http://forum.yetanotherforum.net/yaf_postsm47149_v1-9-5-5-RTW--Date-3-4-2011-Rev-4742.aspx Changes in v1.9.5.5 Rev. #4661 - Added "Copy" function to forum administration -- Now instead of having to manually re-enter all the access masks, etc, you can just duplicate an existing forum and modify after the fact. Rev. #4642 - New Setting to Enable/Disable Last Unread posts links Rev. #4641 - Added Arabic Language t...Snippet Designer: Snippet Designer 1.3.1: Snippet Designer 1.3.1 for Visual Studio 2010This is a bug fix release. Change logFixed bug where Snippet Designer would fail if you had the most recent Productivity Power Tools installed Fixed bug where "Export as Snippet" was failing in non-english locales Fixed bug where opening a new .snippet file would fail in non-english localesChiave File Encryption: Chiave 1.0: Final Relase for Chave 1.0 Stable: Application for file encryption and decryption using 512 Bit rijndael encyrption algorithm with simple to use UI. Its written in C# and compiled in .Net version 3.5. It incorporates features of Windows 7 like Jumplists, Taskbar progress and Aero Glass. Now with added support to Windows XP! Change Log from 0.9.2 to 1.0: ==================== Added: > Added Icon Overlay for Windows 7 Taskbar Icon. >Added Thumbnail Toolbar buttons to make the navigation easier...DirectQ: Release 1.8.7 (RC1): Release candidate 1 of 1.8.7GoogleTrail: TrailMap Beta 1: Trailmap beta 1 release Now we have updated custom map builder. Now we have complete gpx file editor. Now we have elevation data update service for any gpx file. (currently supports only google only).ASP.NET: Sprite and Image Optimization Preview 3: The ASP.NET Sprite and Image Optimization framework is designed to decrease the amount of time required to request and display a page from a web server by performing a variety of optimizations on the page’s images. 
This is the third preview of the feature and works with ASP.NET Web Forms 4, ASP.NET MVC 3, and ASP.NET Web Pages (Razor) projects. The binaries are also available via NuGet: AspNetSprites-Core AspNetSprites-WebFormsControl AspNetSprites-MvcAndRazorHelper It includes the foll...New ProjectsCaxangáV2: Ainda jogando Caxangá.. ou nãoCollection Membership Wizard for SCCM: This application assists SMS 2003 and SCCM 2007 administrators with the everyday task of adding lists of computers or users by name to collections in their hierarchy. The goal of this application is to be very easy to use and to have good performance.CoseaDirectos: Aplicación asp.net, silverlight, sql server 2008 para reclutamiento de personal DoctrineIgniter: Usage of Doctrine 2 ORM and Codeigniter 2 PHP platform.Field Finder: DB Administration tool. Easy search in the SQL server database structure. Provides predefined templates for SQL Scripts and VB.NET source code. Search in database Structure and database Data. First Project Of Skyline's Member: multipoint mouse on C#FloodWarn: A series of server and client apps for monitoring flood levels on the Snoqualmie River in King County, Washington.Fluent Json: Json generator and parser written in C#. Besides basic json support, this library enables you to fluently map your custom types to the json data format.Google Handy Translator: Handy dictionary which use Google Translator and also local database . It will activate by SHIFT+F10 and translate what we have in the clipboard.Grid Model: Extensions to the Task Parallel Library to support distributing tasks across multiple computers participating in a heterogeneous computing grid.HiShow: An ASP.NET website allows everybody too have an overview of many hitech-products: Images, price, and reviewIdeaBlade DevForce/Caliburn Application Framework: The IdeaBlade DevForce/Caliburn Application Framework makes it easy to get started with developing data driven Rich Internet Applications in Silverlight and WPF desktop applications with Caliburn Micro as the MVVM framework and DevForce 2010 as the data access layer. Implement DAO By IBatis and NHibernate: ????????,??IBatis?NHibernate???????,?????????,???????????。iTunes.Scrubber: iTunes.Scrubber is an Open Source library that allows people to easily update metadata for their iTunes libraries. It's developed in C#, and interfaces with various web services, including imdb, and thetvdbLaptop Rental System: This project is for Laptop Rental Software project. It will lasts for 2 weeks. Xuan ChienMathLib.NET: Aims to provide a fully managed implementation of core MATLAB(R) functions, designed to be used from dynamic languages such as IronPython and providing an API matching the MATLAB(R) API, to ease the transition from analysis to implementation.Mobile Application Development Framework: A general purpose Windows CE/Mobile Application Development FrameworkMSMSpec: MSMSpec autogenerates MSTest tests corresponding to MSpec tests. Useful where MSTest integration is desired / required / forced. Also enables using VS test tooling for MSpec tests. Requires VS2010 and .NET 4.0.Multi-touch GIS API for TableTops: This API facilitates the creation of multi-touch GIS applications for digital Tabletops. It is built on top of ESRI API for WPF 4.0 and it uses Windows 7 touch events. 
It also uses some gestures from the Gesture Toolkit.NewLineReplacer: Replace letter fast and easy in great textfilesOpenGLMaciejLis: Project is a game prototype created in C++ and OpenGLPlanetQuest: Application to poll the current extra-solar planet count at http://planetquest.jpl.nasa.gov/ Prime number exporter: Prime number exporter calculates prime numbers using the "Sieve of Eratosthenes" and exports a Textfile. QPAPrintLib: Print every document by its recommended programmRegistry Editor for Windows Mobile: A registry editor for Windows Mobile 6.x and 5.x based devices.Remote desktop on mobile phones: Mobile Remote Desktop enables you to connect to your computer from your mobile devices using bluetooth connectivity. Once connected, Mobile Remote Desktop gives you mouse and keyboard control of your computer from mobile.RestUpProxy: RestUpProxy is a .Net REST client designed to make using RESTful APIs a snap.SharePoint 2010 List Based 404 Handler: A SharePoint WSP that customises the 404 handler for a web application, allowing you to define how to handle missing page requests via a SharePoint list. This is the SharePoint 2010 version.SharePoint Content Inventory: SPContentInventory generates a complete content invetory for SharePoint 2007/2010 sites. The content inventory is exported as an Excel file providing information about all sites, lists and libraries.Shop: open source ecommerce solution for umbraco.SilverVision: Computer vision algorithms implementation in SilverlightSpCop: The aim of this project is to offer a utility similar to fxcop but for wsp packages. At the end it should contain enough rules to ensure good practices and allow automated audits or checks at build time for example.Syscable: Sistema para control de mensualidades para una empresa que proporcione servicios de television por cableSyscart: Aplicacion web para manejo de inventario en bodegas u otros establecimientos. Tranquility.Net (Wcf App Server): Allows developers to host multiple isolated Wcf services within a single Windows service. You'll no longer have to use IIS to host all your services. It's developed in C# .NET. Your services with a smile.USMC Knowledge for WP7: USMC Knowledge is an information application to provide active duty Marines, as well as those with an interest in the USMC with basic knowledge. It's developed in Silverlight for Windows Phone 7.Utility4Net: some base class such as xml,string,data,secerity,web,office... etc.. under Microsoft.NET Framework 4.0 by c# Part of the code r collected from the Internet WPF ImageUitls: WPF Image Utils is a set of image related applications that use WPF. Currently the project focuses on a picture viewerZen4Sync, Orchestration and Test Load platform for SQL Server Merge Replication: This project is all about providing a orchestration and test load platform able to validate any SQL Server Merge Replication based Architecture.Zimms: Collaboration Site for friends, a code depot, and scratch pad??tbl??????: ????tbl?????????。 ??tbl??????,??????????,?????、excel???。 ?????????????,????,????????!

    Read the article

  • 501 Error during Libjingle PCP on Amazon EC2 running Openfire

    - by AeroBuffalo
    I am trying to implement Google's Libjingle (version: 0.6.14) PCP example and I am getting a 501: feature not implemented error during execution. Specifically, the error occurs after each "account" has connected, been authenticated and began communicating with the other. An abbreviated log of the interaction is provided at the end. I have set up my own jabber server (using OpenFire on an Amazon EC2 server), have opened all of the necessary ports and have added each "account" to the other's roster. The server has been set to allow for file transfers. My being new to working with servers, I am not sure why this error is occur and how to go about fixing it. Thanks in advance, AeroBuffalo P.S. Let me know if there is any additional information needed (i.e. the full program log for either/both ends). Receiving End: [018:217] SEND >>>>>>>>>>>>>>>>>>>>>>>>> : Thu Jul 5 14:17:15 2012 [018:217] <iq to="[email protected]/pcp" type="set" id="5"> [018:217] <jingle xmlns="urn:xmpp:jingle:1" action="session-initiate" sid="402024303" initiator="[email protected]/pcp"> [018:217] <content name="securetunnel" creator="initiator"> [018:217] <description xmlns="http://www.google.com/talk/securetunnel"> [018:217] <type>send:winein.jpeg</type> [018:217] <client-cert>--BEGIN CERTIFICATE--END CERTIFICATE--</client-cert> [018:217] </description> [018:217] <transport xmlns="http://www.google.com/transport/p2p"/> [018:217] </content> [018:217] </jingle> [018:217] <session xmlns="http://www.google.com/session" type="initiate" id="402024303" initiator="[email protected]/pcp"> [018:217] <description xmlns="http://www.google.com/talk/securetunnel"> [018:217] <type>send:winein.jpeg</type> [018:217] <client-cert>--BEGIN CERTIFICATE--END CERTIFICATE--</client-cert> [018:217] </description></session> [018:217] </iq> [018:217] RECV <<<<<<<<<<<<<<<<<<<<<<<<< : Thu Jul 5 14:17:15 2012 [018:217] <presence to="[email protected]/pcp" from="forgesend" type="error"> [018:217] <error code="404" type="cancel"> [018:217] <remote-server-not-found xmlns="urn:ietf:params:xml:ns:xmpp-stanzas"/> [018:217] </error></presence> [018:218] RECV <<<<<<<<<<<<<<<<<<<<<<<<< : Thu Jul 5 14:17:15 2012 [018:218] <presence to="[email protected]/pcp" from="forgesend" type="error"> [018:218] <error code="404" type="cancel"> [018:218] <remote-server-not-found xmlns="urn:ietf:params:xml:ns:xmpp-stanzas"/> [018:218] </error></presence> [018:264] RECV <<<<<<<<<<<<<<<<<<<<<<<<< : Thu Jul 5 14:17:15 2012 [018:264] <iq type="result" id="3" to="[email protected]/pcp"> [018:264] <query xmlns="google:jingleinfo"> [018:264] <stun> [018:264] <server host="stun.xten.net" udp="3478"/> [018:264] <server host="jivesoftware.com" udp="3478"/> [018:264] <server host="igniterealtime.org" udp="3478"/> [018:264] <server host="stun.fwdnet.net" udp="3478"/> [018:264] </stun> [018:264] <publicip ip="65.101.207.121"/> [018:264] </query></iq> [018:420] RECV <<<<<<<<<<<<<<<<<<<<<<<<< : Thu Jul 5 14:17:15 2012 [018:420] <iq to="[email protected]/pcp" type="set" id="5" from="[email protected]/pcp"> [018:420] <jingle xmlns="urn:xmpp:jingle:1" action="session-initiate" sid="3548650675" initiator="[email protected]/pcp"> [018:420] <content name="securetunnel" creator="initiator"> [018:420] <description xmlns="http://www.google.com/talk/securetunnel"> [018:420] <type>recv:wineout.jpeg</type> [018:420] <client-cert>--BEGIN CERTIFICATE--END CERTIFICATE--</client-cert> [018:420] </description> [018:420] <transport xmlns="http://www.google.com/transport/p2p"/> [018:420] 
</content></jingle> [018:420] <session xmlns="http://www.google.com/session" type="initiate" id="3548650675" initiator="[email protected]/pcp"> [018:420] <description xmlns="http://www.google.com/talk/securetunnel"> [018:420] <type>recv:wineout.jpeg</type> [018:420] <client-cert>--BEGIN CERTIFICATE--END CERTIFICATE--</client-cert> [018:420] </description></session></iq> [018:421] TunnelSessionClientBase::OnSessionCreate: received=1 [018:421] Session:3548650675 Old state:STATE_INIT New state:STATE_RECEIVEDINITIATE Type:http://www.google.com/talk/securetunnel Transport:http://www.google.com/transport/p2p [018:421] TunnelSession::OnSessionState(Session::STATE_RECEIVEDINITIATE) [018:421] SEND >>>>>>>>>>>>>>>>>>>>>>>>> : Thu Jul 5 14:17:15 2012 [018:421] <iq to="[email protected]/pcp" id="5" type="result"/> [018:465] RECV <<<<<<<<<<<<<<<<<<<<<<<<< : Thu Jul 5 14:17:15 2012 [018:465] <iq to="[email protected]/pcp" id="5" type="result" from="[email protected]/pcp"/> [198:665] RECV <<<<<<<<<<<<<<<<<<<<<<<<< : Thu Jul 5 14:20:15 2012 [198:665] <iq type="get" id="162-10" from="forgejabber.com" to="[email protected]/pcp"> [198:665] <ping xmlns="urn:xmpp:ping"/> [198:665] /iq> [198:665] SEND >>>>>>>>>>>>>>>>>>>>>>>>> : Thu Jul 5 14:20:15 2012 [198:665] <iq type="error" id="162-10" to="forgejabber.com"> [198:665] <ping xmlns="urn:xmpp:ping"/> [198:665] <error code="501" type="cancel"> [198:665] <feature-not-implemented xmlns="urn:ietf:params:xml:ns:xmpp-stanzas"/> [198:665] </error> [198:665] </iq> Sender: [019:043] SEND >>>>>>>>>>>>>>>>>>>>>>>>> : Thu Jul 5 14:17:15 2012 [019:043] <iq type="get" id="3"> [019:043] <query xmlns="google:jingleinfo"/> [019:043] </iq> [019:043] SEND >>>>>>>>>>>>>>>>>>>>>>>>> : Thu Jul 5 14:17:15 2012 [019:043] <iq to="[email protected]/pcp" type="set" id="5"> [019:043] <jingle xmlns="urn:xmpp:jingle:1" action="session-initiate" sid="3548650675" initiator="[email protected]/pcp"> [019:043] <content name="securetunnel" creator="initiator"> [019:043] <description xmlns="http://www.google.com/talk/securetunnel"> [019:043] <type>recv:wineout.jpeg</type> [019:043] <client-cert>--BEGIN CERTIFICATE----END CERTIFICATE--</client-cert> [019:043] </description> [019:043] <transport xmlns="http://www.google.com/transport/p2p"/> [019:043] </content> [019:043] </jingle> [019:043] <session xmlns="http://www.google.com/session" type="initiate" id="3548650675" initiator="[email protected]/pcp"> [019:043] <description xmlns="http://www.google.com/talk/securetunnel"> [019:043] <type>recv:wineout.jpeg</type> [019:043] <client-cert>--BEGIN CERTIFICATE--END CERTIFICATE--</client-cert> [019:043] </description></session></iq> [019:043] RECV <<<<<<<<<<<<<<<<<<<<<<<<< : Thu Jul 5 14:17:15 2012 [019:043] <presence to="[email protected]/pcp" from="forgereceive" type="error"> [019:043] <error code="404" type="cancel"> [019:043] <remote-server-not-found xmlns="urn:ietf:params:xml:ns:xmpp-stanzas"/> [019:043] </error></presence> [019:044] RECV <<<<<<<<<<<<<<<<<<<<<<<<< : Thu Jul 5 14:17:15 2012 [019:044] <presence to="[email protected]/pcp" from="forgereceive" type="error"> [019:044] <error code="404" type="cancel"> [019:044] <remote-server-not-found xmlns="urn:ietf:params:xml:ns:xmpp-stanzas"/> [019:044] </error></presence> [019:044] RECV <<<<<<<<<<<<<<<<<<<<<<<<< : Thu Jul 5 14:17:15 2012 [019:044] <iq to="[email protected]/pcp" type="set" id="5" from="[email protected]/pcp"> [019:044] <jingle xmlns="urn:xmpp:jingle:1" action="session-initiate" sid="402024303" initiator="[email protected]/pcp"> 
[019:044] <content name="securetunnel" creator="initiator"> [019:044] <description xmlns="http://www.google.com/talk/securetunnel"> [019:044] <type>send:winein.jpeg</type> [019:044] <client-cert>--BEGIN CERTIFICATE--END CERTIFICATE--</client-cert> [019:044] </description> [019:044] <transport xmlns="http://www.google.com/transport/p2p"/> [019:044] </content></jingle> [019:044] <session xmlns="http://www.google.com/session" type="initiate" id="402024303" initiator="[email protected]/pcp"> [019:044] <description xmlns="http://www.google.com/talk/securetunnel"> [019:044] <type>send:winein.jpeg</type> [019:044] <client-cert>--BEGIN CERTIFICATE--END CERTIFICATE--</client-cert> [019:044] </description></session></iq> [019:044] TunnelSessionClientBase::OnSessionCreate: received=1 [019:044] Session:402024303 Old state:STATE_INIT New state:STATE_RECEIVEDINITIATE Type:http://www.google.com/talk/securetunnel Transport:http://www.google.com/transport/p2p [019:044] TunnelSession::OnSessionState(Session::STATE_RECEIVEDINITIATE) [019:044] SEND >>>>>>>>>>>>>>>>>>>>>>>>> : Thu Jul 5 14:17:15 2012 [019:044] <iq to="[email protected]/pcp" id="5" type="result"/> [019:088] RECV <<<<<<<<<<<<<<<<<<<<<<<<< : Thu Jul 5 14:17:15 2012 [019:088] <iq type="result" id="3" to="[email protected]/pcp"> [019:088] <query xmlns="google:jingleinfo"> [019:088] <stun> [019:088] <server host="stun.xten.net" udp="3478"/> [019:088] <server host="jivesoftware.com" udp="3478"/> [019:088] <server host="igniterealtime.org" udp="3478"/> [019:088] <server host="stun.fwdnet.net" udp="3478"/> [019:088] </stun> [019:088] <publicip ip="65.101.207.121"/> [019:088] </query> [019:088] </iq> [019:183] RECV <<<<<<<<<<<<<<<<<<<<<<<<< : Thu Jul 5 14:17:15 2012 [019:183] <iq to="[email protected]/pcp" id="5" type="result" from="[email protected]/pcp"/> [199:381] RECV <<<<<<<<<<<<<<<<<<<<<<<<< : Thu Jul 5 14:20:15 2012 [199:381] <iq type="get" id="474-11" from="forgejabber.com" to="[email protected]/pcp"> [199:381] <ping xmlns="urn:xmpp:ping"/> [199:381] </iq> [199:381] SEND >>>>>>>>>>>>>>>>>>>>>>>>> : Thu Jul 5 14:20:15 2012 [199:381] <iq type="error" id="474-11" to="forgejabber.com"> [199:381] <ping xmlns="urn:xmpp:ping"/> [199:381] <error code="501" type="cancel"> [199:381] <feature-not-implemented xmlns="urn:ietf:params:xml:ns:xmpp-stanzas"/> [199:382] </error></iq>

    Read the article

  • Application crashes when installing on Windows 7 but not on Windows XP

    - by JiBéDoublevé
    At my company, we're migrating from Windows XP to Windows 7. We've got 2 home made applications written in C# with the .NET framework 3.5. They use ClickOnce to be installed. We're in the test phase and the installation of these soft crashes on some Windows 7 machines and doesn't on others. The difference between these machines should be the configuration of the policies. The only error message we've got is this one: I tried to find some logs somewhere but there's nothing neither in the Event Viewer nor in the applications log (wich are poorly logged, then I'm not expecting miracle from this side :( ) These applications: work with FTP servers use WCF use old deprecated libraries (as I'm not at work, I'll edit this post when I'll have the info) use nHibernate 2 use LLBLGen use a deprecated Infragistics library export data into Excel files Did you encounter such an issue while migrating? Or do you have an idea where I should investigate on?

    Read the article
