Search Results

Search found 16321 results on 653 pages for 'configuration wizard'.


  • MySQL server stopped working after upgrade

    - by umpirsky
    I upgraded to 12.04 and my MySQL server just stopped working. It throws:

    ERROR 2002 (HY000): Can't connect to local MySQL server through socket '/var/run/mysqld/mysqld.sock' (2)

    I tried to reinstall it from software center, but it fails with:

    Package operation failed
    The installation or removal of a software package failed.
    installArchives() failed: Selecting previously unselected package mysql-server.
    (Reading database ... (Reading database ... 5% (Reading database ... 10% (Reading database ... 15% (Reading database ... 20% (Reading database ... 25% (Reading database ... 30% (Reading database ... 35% (Reading database ... 40% (Reading database ... 45% (Reading database ... 50% (Reading database ... 55% (Reading database ... 60% (Reading database ... 65% (Reading database ... 70% (Reading database ... 75% (Reading database ... 80% (Reading database ... 85% (Reading database ... 90% (Reading database ... 95% (Reading database ... 100% (Reading database ... 243412 files and directories currently installed.)
    Unpacking mysql-server (from .../mysql-server_5.5.22-0ubuntu1_all.deb) ...
    Setting up mysql-server-5.5 (5.5.22-0ubuntu1) ...
    start: Job failed to start
    invoke-rc.d: initscript mysql, action "start" failed.
    dpkg: error processing mysql-server-5.5 (--configure):
    subprocess installed post-installation script returned error exit status 1
    No apport report written because MaxReports is reached already
    dpkg: dependency problems prevent configuration of mysql-server:
    mysql-server depends on mysql-server-5.5; however:
    Package mysql-server-5.5 is not configured yet.
    dpkg: error processing mysql-server (--configure):
    dependency problems - leaving unconfigured
    No apport report written because MaxReports is reached already
    Errors were encountered while processing:
    mysql-server-5.5
    mysql-server
    Error in function:
    Setting up mysql-server-5.5 (5.5.22-0ubuntu1) ...
    start: Job failed to start
    invoke-rc.d: initscript mysql, action "start" failed.
    dpkg: error processing mysql-server-5.5 (--configure):
    subprocess installed post-installation script returned error exit status 1
    dpkg: dependency problems prevent configuration of mysql-server:
    mysql-server depends on mysql-server-5.5; however:
    Package mysql-server-5.5 is not configured yet.
    dpkg: error processing mysql-server (--configure):
    dependency problems - leaving unconfigured

    I also tried:

    $ sudo apt-get install -f
    Reading package lists... Done
    Building dependency tree
    Reading state information... Done
    The following packages were automatically installed and are no longer required:
    mysql-server-5.5 mysql-server-core-5.5
    Use 'apt-get autoremove' to remove them.
    0 upgraded, 0 newly installed, 0 to remove and 0 not upgraded.
    1 not fully installed or removed.
    After this operation, 0 B of additional disk space will be used.
    Setting up mysql-server-5.5 (5.5.22-0ubuntu1) ...
    start: Job failed to start
    invoke-rc.d: initscript mysql, action "start" failed.
    dpkg: error processing mysql-server-5.5 (--configure):
    subprocess installed post-installation script returned error exit status 1
    Errors were encountered while processing:
    mysql-server-5.5
    E: Sub-process /usr/bin/dpkg returned an error code (1)

    Any idea?

    EDIT: Crash report is being auto generated.

    EDIT: After trying and trying I got suggestion to do:

    #apt-get --purge remove mysql-server-5.1 mysql-server-5.5
    Reading package lists... Done
    Building dependency tree
    Reading state information... Done
    Virtual packages like 'mysql-server-5.1' can't be removed
    The following packages will be REMOVED:
    mysql-server-5.5*
    0 upgraded, 0 newly installed, 1 to remove and 0 not upgraded.
    1 not fully installed or removed.
    After this operation, 31.3 MB disk space will be freed.
    Do you want to continue [Y/n]? Y
    (Reading database ... 243407 files and directories currently installed.)
    Removing mysql-server-5.5 ...
    Purging configuration files for mysql-server-5.5 ...
    Processing triggers for ureadahead ...
    Processing triggers for man-db ...

    The most important part is: Virtual packages like 'mysql-server-5.1' can't be removed

    Any idea?

    Read the article

  • Building applications with WCF - Intro

    - by skjagini
    I am going to write a series of articles using Windows Communication Foundation (WCF) to develop client and server applications, and this is the first part of that series.

    What is WCF

    As Juval Löwy puts it in his Programming WCF Services book, WCF provides an SDK for developing and deploying services on Windows: it provides the runtime environment to expose CLR types as services and to consume services as CLR types. Building services with WCF is incredibly easy, and its implementation provides a set of industry standards and off-the-shelf plumbing including service hosting, instance management, reliability, transaction management, security etc., which greatly increases productivity.

    Scenario: Let's consider a typical bank customer trying to create an account, deposit an amount and transfer funds between accounts, i.e. checking and savings. To make it interesting, we are going to divide the functionality into multiple services, each of them working with the database directly. We will run test cases with and without transactional support across services. In this post we will build the contracts, services, data access layer and unit tests to verify end-to-end communication; nothing big here, and we will dig into other features of WCF in subsequent posts with incremental changes.

    In any distributed architecture we have two pieces, i.e. services and clients. Services, as the name implies, provide the functionality to execute various pieces of business logic on the server, while clients provide the interaction to the end user. Services can be built with Web Services or with WCF. Services built on WCF have the advantage of being binding independent, i.e. they can run against the TCP or HTTP protocol without any significant changes to the code.

    Solution

    Services:
    Profile: for creating a new bank customer and getting details about an existing customer (ProfileContract, ProfileService)
    Checking Account: to get the checking account balance, deposit or withdraw an amount (CheckingAccountContract, CheckingAccountService)
    Savings Account: to get the savings account balance, deposit or withdraw an amount (SavingsAccountContract, SavingsAccountService)
    ServiceHost: to host the services, i.e. run the services at a particular address, binding and contract where clients can connect
    Client: helps the end user use the services, like creating an account and transferring amounts between the accounts
    BankDAL: data access layer to work with the database

    BankDAL

    It's a no-brainer to use an ORM, as many mature products are currently available in the market, including Linq2Sql, Entity Framework (EF) and LLBLGen Pro. For this exercise I am going to use Entity Framework 4.0, CTP 5, with the code first approach. There are two approaches when working with data: data driven and code driven. In the data driven approach we start by designing tables and their constraints in the database and generate entities in code, while in the code driven (code first) approach entities are defined in code and the metadata generated from the entities is used by EF to create the tables and table constraints. In previous versions the entity classes had to derive from EF-specific base classes. In EF 4 it is not required to derive from any EF classes; the entities are not only persistence ignorant but also enable full test driven development using mock frameworks. The application consists of 3 entities: a Customer entity which contains the customer details, and CheckingAccount and SavingsAccount to hold the respective account balances.
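    As a rough sketch (the original post does not show these classes; the property names below are assumptions inferred from how the repositories and services later use them), the two account entities could look like this, with BankDbContext also exposing matching DbSet<CheckingAccount> and DbSet<SavingsAccount> properties:

    namespace BankDAL.Model
    {
        // Hypothetical sketch: the post only says these entities hold the respective
        // account balance. Id follows the EF code first key convention; CustomerId and
        // Balance are the members used by GetAccount() and the deposit/withdraw logic below.
        public class CheckingAccount
        {
            public int Id { get; set; }
            public int CustomerId { get; set; }
            public decimal Balance { get; set; }
        }

        public class SavingsAccount
        {
            public int Id { get; set; }
            public int CustomerId { get; set; }
            public decimal Balance { get; set; }
        }
    }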
We could have introduced an Account base class for CheckingAccount and SavingsAccount which is certainly possible with EF mappings but to keep it simple we are just going to follow 1 –1 mapping between entity and table mappings. Lets start out by defining a class called Customer which will be mapped to Customer table, observe that the class is simply a plain old clr object (POCO) and has no reference to EF at all. using System;   namespace BankDAL.Model { public class Customer { public int Id { get; set; } public string FullName { get; set; } public string Address { get; set; } public DateTime DateOfBirth { get; set; } } }   In order to inform EF about the Customer entity we have to define a database context with properties of type DbSet<> for every POCO which needs to be mapped to a table in database. EF uses convention over configuration to generate the metadata resulting in much less configuration. using System.Data.Entity;   namespace BankDAL.Model { public class BankDbContext: DbContext { public DbSet<Customer> Customers { get; set; } } }   Entity constrains can be defined through attributes on Customer class or using fluent syntax (no need to muscle with xml files), CustomerConfiguration class. By defining constrains in a separate class we can maintain clean POCOs without corrupting entity classes with database specific information.   using System; using System.Data.Entity.ModelConfiguration;   namespace BankDAL.Model { public class CustomerConfiguration: EntityTypeConfiguration<Customer> { public CustomerConfiguration() { Initialize(); }   private void Initialize() { //Setting the Primary Key this.HasKey(e => e.Id);   //Setting required fields this.HasRequired(e => e.FullName); this.HasRequired(e => e.Address); //Todo: Can't create required constraint as DateOfBirth is not reference type, research it //this.HasRequired(e => e.DateOfBirth); } } }   Any queries executed against Customers property in BankDbContext are executed against Cusomers table. By convention EF looks for connection string with key of BankDbContext when working with the context.   We are going to define a helper class to work with Customer entity with methods for querying, adding new entity etc and these are known as repository classes, i.e., CustomerRepository   using System; using System.Data.Entity; using System.Linq; using BankDAL.Model;   namespace BankDAL.Repositories { public class CustomerRepository { private readonly IDbSet<Customer> _customers;   public CustomerRepository(BankDbContext bankDbContext) { if (bankDbContext == null) throw new ArgumentNullException(); _customers = bankDbContext.Customers; }   public IQueryable<Customer> Query() { return _customers; }   public void Add(Customer customer) { _customers.Add(customer); } } }   From the above code it is observable that the Query methods returns customers as IQueryable i.e. customers are retrieved only when actually used i.e. iterated. Returning as IQueryable also allows to execute filtering and joining statements from business logic using lamba expressions without cluttering the data access layer with tens of methods.   
Our CheckingAccountRepository and SavingsAccountRepository look very similar to each other using System; using System.Data.Entity; using System.Linq; using BankDAL.Model;   namespace BankDAL.Repositories { public class CheckingAccountRepository { private readonly IDbSet<CheckingAccount> _checkingAccounts;   public CheckingAccountRepository(BankDbContext bankDbContext) { if (bankDbContext == null) throw new ArgumentNullException(); _checkingAccounts = bankDbContext.CheckingAccounts; }   public IQueryable<CheckingAccount> Query() { return _checkingAccounts; }   public void Add(CheckingAccount account) { _checkingAccounts.Add(account); }   public IQueryable<CheckingAccount> GetAccount(int customerId) { return (from act in _checkingAccounts where act.CustomerId == customerId select act); }   } } The repository classes look very similar to each other for Query and Add methods, with the help of C# generics and implementing repository pattern (Martin Fowler) we can reduce the repeated code. Jarod from ElegantCode has posted an article on how to use repository pattern with EF which we will implement in the subsequent articles along with WCF Unity life time managers by Drew Contracts It is very easy to follow contract first approach with WCF, define the interface and append ServiceContract, OperationContract attributes. IProfile contract exposes functionality for creating customer and getting customer details.   using System; using System.ServiceModel; using BankDAL.Model;   namespace ProfileContract { [ServiceContract] public interface IProfile { [OperationContract] Customer CreateCustomer(string customerName, string address, DateTime dateOfBirth);   [OperationContract] Customer GetCustomer(int id);   } }   ICheckingAccount contract exposes functionality for working with checking account, i.e., getting balance, deposit and withdraw of amount. ISavingsAccount contract looks the same as checking account.   using System.ServiceModel;   namespace CheckingAccountContract { [ServiceContract] public interface ICheckingAccount { [OperationContract] decimal? GetCheckingAccountBalance(int customerId);   [OperationContract] void DepositAmount(int customerId,decimal amount);   [OperationContract] void WithdrawAmount(int customerId, decimal amount);   } }   Services   Having covered the data access layer and contracts so far and here comes the core of the business logic, i.e. services.   
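    Before moving on to the services: the ISavingsAccount contract mentioned above is not listed in the post; based on the method names the unit tests call later (GetSavingsAccountBalance, DepositAmount, WithdrawAmount), it would presumably be a sketch like this:

    using System.ServiceModel;

    namespace SavingsAccountContract
    {
        [ServiceContract]
        public interface ISavingsAccount
        {
            [OperationContract]
            decimal? GetSavingsAccountBalance(int customerId);

            [OperationContract]
            void DepositAmount(int customerId, decimal amount);

            [OperationContract]
            void WithdrawAmount(int customerId, decimal amount);
        }
    }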
    ProfileService implements the IProfile contract for creating customer and getting customer detail using CustomerRepository.
using System; using System.Linq; using System.ServiceModel; using BankDAL; using BankDAL.Model; using BankDAL.Repositories; using ProfileContract;   namespace ProfileService { [ServiceBehavior(IncludeExceptionDetailInFaults = true)] public class Profile: IProfile { public Customer CreateAccount( string customerName, string address, DateTime dateOfBirth) { Customer cust = new Customer { FullName = customerName, Address = address, DateOfBirth = dateOfBirth };   using (var bankDbContext = new BankDbContext()) { new CustomerRepository(bankDbContext).Add(cust); bankDbContext.SaveChanges(); } return cust; }   public Customer CreateCustomer(string customerName, string address, DateTime dateOfBirth) { return CreateAccount(customerName, address, dateOfBirth); } public Customer GetCustomer(int id) { return new CustomerRepository(new BankDbContext()).Query() .Where(i => i.Id == id).FirstOrDefault(); }   } } From the above code you shall observe that we are calling bankDBContext’s SaveChanges method and there is no save method specific to customer entity because EF manages all the changes centralized at the context level and all the pending changes so far are submitted in a batch and it is represented as Unit of Work. Similarly Checking service implements ICheckingAccount contract using CheckingAccountRepository, notice that we are throwing overdraft exception if the balance falls by zero. WCF has it’s own way of raising exceptions using fault contracts which will be explained in the subsequent articles. SavingsAccountService is similar to CheckingAccountService. using System; using System.Linq; using System.ServiceModel; using BankDAL.Model; using BankDAL.Repositories; using CheckingAccountContract;   namespace CheckingAccountService { [ServiceBehavior(IncludeExceptionDetailInFaults = true)] public class Checking:ICheckingAccount { public decimal? GetCheckingAccountBalance(int customerId) { using (var bankDbContext = new BankDbContext()) { CheckingAccount account = (new CheckingAccountRepository(bankDbContext) .GetAccount(customerId)).FirstOrDefault();   if (account != null) return account.Balance;   return null; } }   public void DepositAmount(int customerId, decimal amount) { using(var bankDbContext = new BankDbContext()) { var checkingAccountRepository = new CheckingAccountRepository(bankDbContext); CheckingAccount account = (checkingAccountRepository.GetAccount(customerId)) .FirstOrDefault();   if (account == null) { account = new CheckingAccount() { CustomerId = customerId }; checkingAccountRepository.Add(account); }   account.Balance = account.Balance + amount; if (account.Balance < 0) throw new ApplicationException("Overdraft not accepted");   bankDbContext.SaveChanges(); } } public void WithdrawAmount(int customerId, decimal amount) { DepositAmount(customerId, -1*amount); } } }   BankServiceHost The host acts as a glue binding contracts with it’s services, exposing the endpoints. The services can be exposed either through the code or configuration file, configuration file is preferred as it allows run time changes to service behavior even after deployment. We have 3 services and for each of the service you need to define name (the class that implements the service with fully qualified namespace) and endpoint known as ABC, i.e. address, binding and contract. 
    We are using netTcpBinding and have defined the base address for each of the contracts.
    <system.serviceModel> <services> <service name="ProfileService.Profile"> <endpoint binding="netTcpBinding" contract="ProfileContract.IProfile"/> <host> <baseAddresses> <add baseAddress="net.tcp://localhost:1000/Profile"/> </baseAddresses> </host> </service> <service name="CheckingAccountService.Checking"> <endpoint binding="netTcpBinding" contract="CheckingAccountContract.ICheckingAccount"/> <host> <baseAddresses> <add baseAddress="net.tcp://localhost:1000/Checking"/> </baseAddresses> </host> </service> <service name="SavingsAccountService.Savings"> <endpoint binding="netTcpBinding" contract="SavingsAccountContract.ISavingsAccount"/> <host> <baseAddresses> <add baseAddress="net.tcp://localhost:1000/Savings"/> </baseAddresses> </host> </service> </services> </system.serviceModel>
    We have to open the services by creating a service host which will handle the incoming requests from clients.
    using System;   namespace ServiceHost { class Program { static void Main(string[] args) { CreateHosts(); Console.ReadLine(); }   private static void CreateHosts() { CreateHost(typeof(ProfileService.Profile),"Profile Service"); CreateHost(typeof(SavingsAccountService.Savings), "Savings Account Service"); CreateHost(typeof(CheckingAccountService.Checking), "Checking Account Service"); }   private static void CreateHost(Type type, string hostDescription) { System.ServiceModel.ServiceHost host = new System.ServiceModel.ServiceHost(type); host.Open();   if (host.ChannelDispatchers != null && host.ChannelDispatchers.Count != 0 && host.ChannelDispatchers[0].Listener != null) Console.WriteLine("Started: " + host.ChannelDispatchers[0].Listener.Uri); else Console.WriteLine("Failed to start:" + hostDescription); } } }
    BankClient    The client has no knowledge about the service business logic other than the functionality it exposes through the contract, the endpoints and a proxy to work against. The endpoint data and service proxy can be generated by right clicking on the project reference, choosing ‘Add Service Reference’ and entering the service endpoint address. Or, if you have access to the source, you can manually reference the contract dlls and update the client's configuration file to point to the service endpoint, provided the server and client both happen to be built on the .NET Framework. One of the pros of the manual approach is that you don't have to work with messy code-generated files.
<system.serviceModel> <client> <endpoint name="tcpProfile" address="net.tcp://localhost:1000/Profile" binding="netTcpBinding" contract="ProfileContract.IProfile"/> <endpoint name="tcpCheckingAccount" address="net.tcp://localhost:1000/Checking" binding="netTcpBinding" contract="CheckingAccountContract.ICheckingAccount"/> <endpoint name="tcpSavingsAccount" address="net.tcp://localhost:1000/Savings" binding="netTcpBinding" contract="SavingsAccountContract.ISavingsAccount"/>   </client> </system.serviceModel> The client uses a façade to connect to the services   using System.ServiceModel; using CheckingAccountContract; using ProfileContract; using SavingsAccountContract;   namespace Client { public class ProxyFacade { public static IProfile ProfileProxy() { return (new ChannelFactory<IProfile>("tcpProfile")).CreateChannel(); }   public static ICheckingAccount CheckingAccountProxy() { return (new ChannelFactory<ICheckingAccount>("tcpCheckingAccount")) .CreateChannel(); }   public static ISavingsAccount SavingsAccountProxy() { return (new ChannelFactory<ISavingsAccount>("tcpSavingsAccount")) .CreateChannel(); }   } }   With that in place, lets get our unit tests going   using System; using System.Diagnostics; using BankDAL.Model; using NUnit.Framework; using ProfileContract;   namespace Client { [TestFixture] public class Tests { private void TransferFundsFromSavingsToCheckingAccount(int customerId, decimal amount) { ProxyFacade.CheckingAccountProxy().DepositAmount(customerId, amount); ProxyFacade.SavingsAccountProxy().WithdrawAmount(customerId, amount); }   private void TransferFundsFromCheckingToSavingsAccount(int customerId, decimal amount) { ProxyFacade.SavingsAccountProxy().DepositAmount(customerId, amount); ProxyFacade.CheckingAccountProxy().WithdrawAmount(customerId, amount); }     [Test] public void CreateAndGetProfileTest() { IProfile profile = ProxyFacade.ProfileProxy(); const string customerName = "Tom"; int customerId = profile.CreateCustomer(customerName, "NJ", new DateTime(1982, 1, 1)).Id; Customer customer = profile.GetCustomer(customerId); Assert.AreEqual(customerName,customer.FullName); }   [Test] public void DepositWithDrawAndTransferAmountTest() { IProfile profile = ProxyFacade.ProfileProxy(); string customerName = "Smith" + DateTime.Now.ToString("HH:mm:ss"); var customer = profile.CreateCustomer(customerName, "NJ", new DateTime(1982, 1, 1)); // Deposit to Savings ProxyFacade.SavingsAccountProxy().DepositAmount(customer.Id, 100); ProxyFacade.SavingsAccountProxy().DepositAmount(customer.Id, 25); Assert.AreEqual(125, ProxyFacade.SavingsAccountProxy().GetSavingsAccountBalance(customer.Id)); // Withdraw ProxyFacade.SavingsAccountProxy().WithdrawAmount(customer.Id, 30); Assert.AreEqual(95, ProxyFacade.SavingsAccountProxy().GetSavingsAccountBalance(customer.Id));   // Deposit to Checking ProxyFacade.CheckingAccountProxy().DepositAmount(customer.Id, 60); ProxyFacade.CheckingAccountProxy().DepositAmount(customer.Id, 40); Assert.AreEqual(100, ProxyFacade.CheckingAccountProxy().GetCheckingAccountBalance(customer.Id)); // Withdraw ProxyFacade.CheckingAccountProxy().WithdrawAmount(customer.Id, 30); Assert.AreEqual(70, ProxyFacade.CheckingAccountProxy().GetCheckingAccountBalance(customer.Id));   // Transfer from Savings to Checking TransferFundsFromSavingsToCheckingAccount(customer.Id,10); Assert.AreEqual(85, ProxyFacade.SavingsAccountProxy().GetSavingsAccountBalance(customer.Id)); Assert.AreEqual(80, ProxyFacade.CheckingAccountProxy().GetCheckingAccountBalance(customer.Id));   // Transfer 
from Checking to Savings TransferFundsFromCheckingToSavingsAccount(customer.Id, 50); Assert.AreEqual(135, ProxyFacade.SavingsAccountProxy().GetSavingsAccountBalance(customer.Id)); Assert.AreEqual(30, ProxyFacade.CheckingAccountProxy().GetCheckingAccountBalance(customer.Id)); }   [Test] public void FundTransfersWithOverDraftTest() { IProfile profile = ProxyFacade.ProfileProxy(); string customerName = "Angelina" + DateTime.Now.ToString("HH:mm:ss");   var customerId = profile.CreateCustomer(customerName, "NJ", new DateTime(1972, 1, 1)).Id;   ProxyFacade.SavingsAccountProxy().DepositAmount(customerId, 100); TransferFundsFromSavingsToCheckingAccount(customerId,80); Assert.AreEqual(20, ProxyFacade.SavingsAccountProxy().GetSavingsAccountBalance(customerId)); Assert.AreEqual(80, ProxyFacade.CheckingAccountProxy().GetCheckingAccountBalance(customerId));   try { TransferFundsFromSavingsToCheckingAccount(customerId,30); } catch (Exception e) { Debug.WriteLine(e.Message); }   Assert.AreEqual(110, ProxyFacade.CheckingAccountProxy().GetCheckingAccountBalance(customerId)); Assert.AreEqual(20, ProxyFacade.SavingsAccountProxy().GetSavingsAccountBalance(customerId)); } } }   We are creating a new instance of the channel for every operation, we will look into instance management and how creating a new instance of channel affects it in subsequent articles. The first two test cases deals with creation of Customer, deposit and withdraw of month between accounts. The last case, FundTransferWithOverDraftTest() is interesting. Customer starts with depositing $100 in SavingsAccount followed by transfer of $80 in to checking account resulting in $20 in savings account.  Customer then initiates $30 transfer from Savings to Checking resulting in overdraft exception on Savings with $30 being deposited to Checking. As we are not running both the requests in transactions the customer ends up with more amount than what he started with $100. In subsequent posts we will look into transactions handling.  Make sure the ServiceHost project is set as start up project and start the solution. Run the test cases either from NUnit client or TestDriven.Net/Resharper which ever is your favorite tool. Make sure you have updated the data base connection string in the ServiceHost config file to point to your local database
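    The post does not show that connection string entry. Since EF code first looks for a connection string named after the context class (BankDbContext), a minimal sketch of what the ServiceHost app.config entry might look like is below (server and database names are placeholders, not taken from the article):

    <connectionStrings>
      <!-- the name matches the DbContext class so EF picks it up by convention -->
      <add name="BankDbContext"
           connectionString="Data Source=.\SQLEXPRESS;Initial Catalog=BankDb;Integrated Security=True"
           providerName="System.Data.SqlClient" />
    </connectionStrings>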

    Read the article

  • Timeout Considerations for Solicit Response – Part 2

    - by Michael Stephenson
    To follow up a previous article about timeouts and how they can affect your application I have extended the sample we were using to include WCF. I will execute some test scenarios and discuss the results. The sample We begin by consuming exactly the same web service which is sitting on a remote server. This time I have created a .net 3.5 application which will consume the web service using the basichttp binding. To show you the configuration for the consumption of this web service please refer to the below diagram. You can see like before we also have the connectionManagement element in the configuration file. I have added a WCF service reference (also using the asynchronous proxy methods) and have the below code sample in the application which will asynchronously make the web service calls and handle the responses on a call back method invoked by a delegate. If you have read the previous article you will notice that the code is almost the same.   Sample 1 – WCF with Default Timeouts In this test I set about recreating the same scenario as previous where we would run the test but this time using WCF as the messaging component. For the first test I would use the default configuration settings which WCF had setup when we added a reference to the web service. The timeout values for this test are: closeTimeout="00:01:00" openTimeout="00:01:00" receiveTimeout="00:10:00" sendTimeout="00:01:00"   The Test We simulated 21 calls to the web service Test Results The client-side trace is as follows:   The server-side trace is as follows: Some observations on the results are as follows: The timeouts happened quicker than in the previous tests because some calls were timing out before they attempted to connect to the server The first few calls that timed out did actually connect to the server and did execute successfully on the server   Test 2 – Increase Open Connection Timeout & Send Timeout In this test I wanted to increase both the send and open timeout values to try and give everything a chance to go through. The timeout values for this test are: closeTimeout="00:01:00" openTimeout="00:10:00" receiveTimeout="00:10:00" sendTimeout="00:10:00"   The Test We simulated 21 calls to the web service   Test Results The client side trace for this test was   The server-side trace for this test was: Some observations on this test are: This test proved if the timeouts are high enough everything will just go through   Test 3 – Increase just the Send Timeout In this test we wanted to increase just the send timeout. The timeout values for this test are: closeTimeout="00:01:00" openTimeout="00:01:00" receiveTimeout="00:10:00" sendTimeout="00:10:00"   The Test We simulated 21 calls to the web service   Test Results The below is the client side trace The below is the server side trace Some observations on this test are: In this test from both the client and server perspective everything ran through fine The open connection timeout did not seem to have any effect   Test 4 – Increase Just the Open Connection Timeout In this test I wanted to validate the change to the open connection setting by increasing just this on its own. 
    The timeout values for this test are: closeTimeout="00:01:00" openTimeout="00:10:00" receiveTimeout="00:10:00" sendTimeout="00:01:00"   The Test We simulated 21 calls to the web service Test Results The client side trace was The server side trace was Some observations on this test are: In this test you can see that the increase to the open connection timeout (which relates to opening the channel) was not the thing which stopped the calls timing out It's the send of data which is timing out On the server you can see that the few successful calls were fine but there were also a few calls which hit the server but timed out on the client You can see that not all calls hit the server, which was one of the problems with the WSE and ASMX options   Test 5 – Smaller Increase in Send Timeout In this test I wanted to make a smaller increase to the send timeout than previously, just to prove that it was the key setting which was controlling what was timing out. The timeout values for this test are: openTimeout="00:01:00" receiveTimeout="00:10:00" sendTimeout="00:02:30"   The Test We simulated 21 calls to the web service Test Results The client side trace was   The server side trace was Some observations on this test are: You can see that most of the calls got through fine On the client you can see that call 20 timed out but still hit the server and executed fine.   Summary At this point, between the two articles, we have quite a lot of scenarios showing the different ways the timeout settings have played into our original performance issue, and now we can see how WCF could offer an improved way to handle the problem. To summarise the differences in the timeout properties for the three technology stacks: ASMX The timeout value only applies to the execution time of your request on the server. The timeout does not consider how long your code might be waiting client side to get a connection. WSE The timeout value includes both the time to obtain a connection and also the time to execute the request. A timeout will not be thrown as an error until an attempt to connect to the server is made. This means a 40 second timeout setting may not throw the error until 60 seconds have passed, when the connection to the server is made. If the connection to the server is made you should be aware that your message will be processed, and you should design for this. WCF The WCF send timeout is the setting most equivalent to the settings we were looking at previously. Like WSE, the counter for this setting includes the time to get a connection as well as the time to execute on the server. Unlike WSE and ASMX, an error will be thrown as soon as the send timeout has elapsed from the point your user code makes the call, regardless of whether we are waiting for a connection or have an open connection to the server. To a user this may appear to give better latency in getting an error response compared to WSE or ASMX.
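    Since the configuration screenshots and trace images from the original post are not reproduced in this excerpt, here is a representative sketch of how the four timeout values discussed above sit on the client-side binding (binding name, address and contract are placeholders):

    <system.serviceModel>
      <bindings>
        <basicHttpBinding>
          <!-- values from Sample 1; raise sendTimeout/openTimeout for the later tests -->
          <binding name="remoteServiceBinding"
                   closeTimeout="00:01:00"
                   openTimeout="00:01:00"
                   receiveTimeout="00:10:00"
                   sendTimeout="00:01:00" />
        </basicHttpBinding>
      </bindings>
      <client>
        <endpoint name="remoteService"
                  address="http://remoteserver/Service.asmx"
                  binding="basicHttpBinding"
                  bindingConfiguration="remoteServiceBinding"
                  contract="RemoteService.ServiceSoap" />
      </client>
    </system.serviceModel>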

    Read the article

  • WIF, ADFS 2 and WCF – Part 6: Chaining multiple Token Services

    - by Your DisplayName here!
    See the previous posts first. So far we looked at the (simpler) scenario where a client acquires a token from an identity provider and uses that for authentication against a relying party WCF service. Another common scenario is that the client first requests a token from an identity provider, and then uses this token to request a new token from a Resource STS or a partner’s federation gateway. This sounds complicated, but is actually very easy to achieve using WIF’s WS-Trust client support. The sequence is like this: Request a token from an identity provider. You use some “bootstrap” credential for that like Windows integrated, UserName or a client certificate. The realm used for this request is the identifier of the Resource STS/federation gateway. Use the resulting token to request a new token from the Resource STS/federation gateway. The realm for this request would be the ultimate service you want to talk to. Use this resulting token to authenticate against the ultimate service. Step 1 is very much the same as the code I have shown in the last post. In the following snippet, I use a client certificate to get a token from my STS: private static SecurityToken GetIdPToken() {     var factory = new WSTrustChannelFactory(         new CertificateWSTrustBinding(SecurityMode.TransportWithMessageCredential),         idpEndpoint);     factory.TrustVersion = TrustVersion.WSTrust13;       factory.Credentials.ClientCertificate.SetCertificate(         StoreLocation.CurrentUser,         StoreName.My,         X509FindType.FindBySubjectDistinguishedName,         "CN=Client");       var rst = new RequestSecurityToken     {         RequestType = RequestTypes.Issue,         AppliesTo = new EndpointAddress(rstsRealm),         KeyType = KeyTypes.Symmetric     };       var channel = factory.CreateChannel();     return channel.Issue(rst); } Using a token to request another token is slightly different. First the IssuedTokenWSTrustBinding is used, and second the channel factory extension methods are used to send the identity provider token to the Resource STS: private static SecurityToken GetRSTSToken(SecurityToken idpToken) {     var binding = new IssuedTokenWSTrustBinding();     binding.SecurityMode = SecurityMode.TransportWithMessageCredential;       var factory = new WSTrustChannelFactory(         binding,         rstsEndpoint);     factory.TrustVersion = TrustVersion.WSTrust13;     factory.Credentials.SupportInteractive = false;       var rst = new RequestSecurityToken     {         RequestType = RequestTypes.Issue,         AppliesTo = new EndpointAddress(svcRealm),         KeyType = KeyTypes.Symmetric     };       factory.ConfigureChannelFactory();     var channel = factory.CreateChannelWithIssuedToken(idpToken);     return channel.Issue(rst); } For this particular case I chose an ADFS endpoint for issued token authentication (see part 1 for more background). Calling the service now works exactly like I described in my last post. You may now wonder if the same thing can also be achieved using configuration only – absolutely. But there are some gotchas. First of all, the configuration file becomes quite complex. As we discussed in part 4, the bindings must be nested for WCF to unwind the token call-stack. But in this case svcutil cannot resolve the first hop since it cannot use metadata to inspect the identity provider. This binding must be supplied manually. The other issue is around the value for the realm/appliesTo when requesting a token for the R-STS.
    Using the manual approach you have full control over that parameter, and you can simply use the R-STS issuer URI. Using the configuration approach, the exact address of the R-STS endpoint will be used. This means that you may have to register multiple R-STS endpoints in the identity provider. Another issue you will run into is that, by default, ADFS only accepts its configured issuer URI as a known realm. You’d have to manually add more audience URIs for the specific endpoints using the ADFS PowerShell cmdlets. I prefer the “manual” approach. That’s it. Hope this is useful information.
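    As a rough illustration of the PowerShell step mentioned above (the relying party name and URIs are made up, and this assumes the ADFS 2.0 snap-in is available), registering additional identifiers for a relying party trust looks roughly like this:

    Add-PSSnapin Microsoft.Adfs.PowerShell
    # add extra audience/realm URIs to an existing relying party trust
    Set-ADFSRelyingPartyTrust -TargetName "Resource STS" `
        -Identifier @("https://rsts.example.org/issue", "https://rsts.example.org/issue/windowstransport")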

    Read the article

  • Some mail details about Orange Mauritius

    Being an internet service provider is not easy after all for a lot of companies. Luckily, there are quite a few good international operators in this world. For example Orange Mauritius, aka Mauritius Telecom, aka Wanadoo(?), aka MyT here in Mauritius. The local circumstances give them a quasi-monopoly position on fixed lines for telephony and therefore on cable-based DSL internet connectivity. So far, not bad, but as usual... the details. Just for the record, I am only using the services of Orange for mobile, but friends and customers are bound, eh, stuck, with other services of Orange Mauritius. And usually, being the IT guy, they get in touch with me to complain about problems or to ask questions on their ADSL / MyT connection, mail services or whatever. Most of those issues are user-related and easy to solve by tweaking the configuration of their computer a little bit, but sometimes it gets weird.

    Using Orange ADSL... somewhere else

    Now, let's imagine we have been an Orange ADSL customer for ages and we are using their mail services with our very own mail address like "[email protected]". We configured our mail client, like Thunderbird, Outlook Express, Outlook or Windows Mail, as publicly described, and we are able to receive and send emails like a champion. No problems at all, the world is green. Did I mention that we have a laptop? Ok, let's take our movable piece of information technology and visit a friend here on the island. Not surprisingly, he is also a customer of Orange, so we can read and answer emails. But Orange is not the only internet service provider, and one day we happen to hang out with someone that uses Emtel via WiMAX or UMTS... And the fun starts... We can still receive and read emails from our Orange mail account and the IT world is still bright, but try to send mails to someone outside the domains "@intnet.mu" or "@orange.mu". Your mail client will deny sending mail with SMTP message 5.1.0 "blah not allowed". First guess: there is a problem with the mail client, maybe the configuration magically changed overnight. But no, it is still working at home... So, there is for sure a problem with the guy's internet connection. At least, it is his fault not to have Orange internet services, so it cannot work properly...

    The Orange Mail FAQ

    After some more frustration we finally check out the Orange Mail FAQ to see whether this (obviously?) common problem has been described already. Sorry, but those FAQ entries are even more confusing, as it is not really clear how to handle this scenario. Best of all is that most of the entries still refer to servers in the domain "intnet.mu". I mean, Orange will disable those systems in favour of the domain "orange.mu" in the near future, yet does not amend its FAQs. Come on, guys! Ok, settings for POP3 are there. Hm, what about the secure version, POP3S? No sign of it at all... Even changing your mail client to use password encryption with STARTTLS is not allowed at all. Use "bow.intnet.mu" for incoming mail... Ahhh, pretty obvious host name. I mean, at least something like pop.intnet.mu or pop3.intnet.mu would have been more accurate. Funniest of all, the hostname "pop.orange.mu" is also reachable for receiving your mail. Alright, let's check whether SMTP options for authentication, or other well-known and established mechanisms for sending email like POP-before-SMTP, are described. I guess that spotting a whale or shark in Mauritian waters would be easier.
    Trial and error on the SMTP settings reveals that neither STARTTLS nor any other connection / password encryption is available. Using SSL/TLS on SMTP only reveals that there is no service answering your request.

    Calling customer service

    So, we have to bite into the bitter apple and get in touch with Orange customer service, complain, explain our case and ask for advice. After some hiccups, we finally manage to get hold of someone competent in mail services and we receive the golden spoon of mail configuration made by Orange Mauritius: SMTP hostname: smtpauth.intnet.mu And the world of IT is surprisingly green again.

    Customer satisfaction?

    Dear Orange Mauritius, what's the problem with this information? Are you scared of mail spammers? Why isn't there any mention of this case in your FAQs? Ok, talking about your FAQs - simply said: they are badly outdated! Configure your mail client to use server names based in the domain intnet.mu, but specify your account username with orange.mu as the domain part? Even though there are servers available on the domain orange.mu after all. So, why don't you provide current information like this:

    POP3 server name: pop.orange.mu
    SMTP server name: smtp.orange.mu
    SMTP authenticated: smtpauth.orange.mu

    It's not difficult, is it? In my humble opinion not really, and you would provide clean, consistent and up-to-date information for your customers. This would produce less frustration and so less traffic on your customer service lines. Which, after all, would improve the total user experience and satisfaction level on both sides. Without knowing these facts, now imagine you take your laptop abroad and have to use other internet service providers to be online... Calling your customer service would be unnecessarily expensive!
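    For what it's worth, a quick way to check for yourself what a given Orange SMTP host actually advertises (the hostnames are taken from the text above; the output will obviously vary) is a plain SMTP session:

    # probe the submission host announced by customer service
    telnet smtpauth.intnet.mu 25
    EHLO laptop.example.org
    # look for an AUTH line (and the absence of STARTTLS) in the 250 response, then QUIT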

    Read the article

  • Availability Best Practices on Oracle VM Server for SPARC

    - by jsavit
    This is the first of a series of blog posts on configuring Oracle VM Server for SPARC (also called Logical Domains) for availability. This series will show how to how to plan for availability, improve serviceability, avoid single points of failure, and provide resiliency against hardware and software failures. Availability is a broad topic that has filled entire books, so these posts will focus on aspects specifically related to Oracle VM Server for SPARC. The goal is to improve Reliability, Availability and Serviceability (RAS): An article defining RAS can be found here. Oracle VM Server for SPARC Principles for Availability Let's state some guiding principles for availability that apply to Oracle VM Server for SPARC: Avoid Single Points Of Failure (SPOFs). Systems should be configured so a component failure does not result in a loss of application service. The general method to avoid SPOFs is to provide redundancy so service can continue without interruption if a component fails. For a critical application there may be multiple levels of redundancy so multiple failures can be tolerated. Oracle VM Server for SPARC makes it possible to configure systems that avoid SPOFs. Configure for availability at a level of resource and effort consistent with business needs. Effort and resource should be consistent with business requirements. Production has different availability requirements than test/development, so it's worth expending resources to provide higher availability. Even within the category of production there may be different levels of criticality, outage tolerances, recovery and repair time requirements. Keep in mind that a simple design may be more understandable and effective than a complex design that attempts to "do everything". Design for availability at the appropriate tier or level of the platform stack. Availability can be provided in the application, in the database, or in the virtualization, hardware and network layers they depend on - or using a combination of all of them. It may not be necessary to engineer resilient virtualization for stateless web applications applications where availability is provided by a network load balancer, or for enterprise applications like Oracle Real Application Clusters (RAC) and WebLogic that provide their own resiliency. It's (often) the same architecture whether virtual or not: For example, providing resiliency against a lost device path or failing disk media is done for the same reasons and may use the same design whether in a domain or not. It's (often) the same technique whether using domains or not: Many configuration steps are the same. For example, configuring IPMP or creating a redundant ZFS pool is pretty much the same within the guest whether you're in a guest domain or not. There are configuration steps and choices for provisioning the guest with the virtual network and disk devices, which we will discuss. Sometimes it is different using domains: There are new resources to configure. Most notable is the use of alternate service domains, which provides resiliency in case of a domain failure, and also permits improved serviceability via "rolling upgrades". This is an important differentiator between Oracle VM Server for SPARC and traditional virtual machine environments where all virtual I/O is provided by a monolithic infrastructure that itself is a SPOF. Alternate service domains are widely used to provide resiliency in production logical domains environments. 
Some things are done via logical domains commands, and some are done in the guest: For example, with Oracle VM Server for SPARC we provide multiple network connections to the guest, and then configure network resiliency in the guest via IP Multi Pathing (IPMP) - essentially the same as for non-virtual systems. On the other hand, we configure virtual disk availability in the virtualization layer, and the guest sees an already-resilient disk without being aware of the details. These blogs will discuss configuration details like this. Live migration is not "high availability" in the sense of "continuous availability": If the server is down, then you don't live migrate from it! (A cluster or VM restart elsewhere would be used). However, live migration can be part of the RAS (Reliability, Availability, Serviceability) picture by improving Serviceability - you can move running domains off of a box before planned service or maintenance. The blog Best Practices - Live Migration on Oracle VM Server for SPARC discusses this. Topics Here are some of the topics that will be covered: Network availability using IP Multipathing and aggregates Disk path availability using virtual disks defined with multipath groups ("mpgroup") Disk media resiliency configuring for redundant disks that can tolerate media loss Multiple service domains - this is probably the most significant item and the one most specific to Oracle VM Server for SPARC. It is very widely deployed in production environments as the means to provide network and disk availability, but it can be confusing. Subsequent articles will describe why and how to configure multiple service domains. Note, for the sake of precision: an I/O domain is any domain that has a physical I/O resource (such as a PCIe bus root complex). A service domain is a domain providing virtual device services to other domains; it is almost always an I/O domain too (so it can have something to serve). Resources Here are some important links; we'll be drawing on their content in the next several articles: Oracle VM Server for SPARC Documentation Maximizing Application Reliability and Availability with SPARC T5 Servers whitepaper by Gary Combs Maximizing Application Reliability and Availability with the SPARC M5-32 Server whitepaper by Gary Combs Summary Oracle VM Server for SPARC offers features that can be used to provide highly-available environments. This and the following blog entries will describe how to plan and deploy them.
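    As a small taste of the mpgroup topic listed above (the service names, backend device and domain name below are purely illustrative), exporting the same disk backend through two service domains so the guest sees one resilient virtual disk looks roughly like this:

    # same backend exported by a virtual disk service in each service domain, tied together by mpgroup
    ldm add-vdsdev mpgroup=data1 /dev/dsk/c0t5000CCA0150FEA10d0s2 data1@primary-vds0
    ldm add-vdsdev mpgroup=data1 /dev/dsk/c0t5000CCA0150FEA10d0s2 data1@alternate-vds0
    # the guest gets a single vdisk; path failover is handled in the virtualization layer
    ldm add-vdisk data1 data1@primary-vds0 mydomain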

    Read the article

  • Oracle VM RAC template - what it took

    - by wcoekaer
    In my previous posting I introduced the latest Oracle Real Application Cluster / Oracle VM template. I mentioned how easy it is to deploy a complete Oracle RAC cluster with Oracle VM. In fact, you don't need any prior knowledge at all to get a complete production-ready setup going. Here is an example... I built a 4 node RAC cluster, completely configured in just over 40 minutes - starting from import template into Oracle VM, create VMs to fully up and running Oracle RAC. And what was needed? 1 textfile with some hostnames and ip addresses and deploycluster.py. The setup is a 4 node cluster where each VM has 8GB of RAM and 4 vCPUs. The shared ASM storage in this case is 100GB, 5 x 20GB volumes. The VM names are racovm.0-racovm.3. The deploycluster script starts the VMs, verifies the configuration and sends the database cluster configuration info through Oracle VM Manager to the 4 node VMs. Once the VMs are up and running, the first VM starts the actual Oracle RAC setup inside and talks to the 3 other VMs. I did not log into any VM until after everything was completed. In fact, I connected to the database remotely before logging in at all. # ./deploycluster.py -u admin -H localhost --vms racovm.0,racovm.1,racovm.2,racovm.3 --netconfig ./netconfig.ini Oracle RAC OneCommand (v1.1.0) for Oracle VM - deploy cluster - (c) 2011-2012 Oracle Corporation (com: 26700:v1.1.0, lib: 126247:v1.1.0, var: 1100:v1.1.0) - v2.4.3 - wopr8.wimmekes.net (x86_64) Invoked as root at Sat Jun 2 17:31:29 2012 (size: 37500, mtime: Wed May 16 00:13:19 2012) Using: ./deploycluster.py -u admin -H localhost --vms racovm.0,racovm.1,racovm.2,racovm.3 --netconfig ./netconfig.ini INFO: Login password to Oracle VM Manager not supplied on command line or environment (DEPLOYCLUSTER_MGR_PASSWORD), prompting... Password: INFO: Attempting to connect to Oracle VM Manager... INFO: Oracle VM Client (3.1.1.305) protocol (1.8) CONNECTED (tcp) to Oracle VM Manager (3.1.1.336) protocol (1.8) IP (192.168.1.40) UUID (0004fb0000010000cbce8a3181569a3e) INFO: Inspecting /root/rac/deploycluster/netconfig.ini for number of nodes defined... INFO: Detected 4 nodes in: /root/rac/deploycluster/netconfig.ini INFO: Located a total of (4) VMs; 4 VMs with a simple name of: ['racovm.0', 'racovm.1', 'racovm.2', 'racovm.3'] INFO: Verifying all (4) VMs are in Running state INFO: VM with a simple name of "racovm.0" is in a Stopped state, attempting to start it...OK. INFO: VM with a simple name of "racovm.1" is in a Stopped state, attempting to start it...OK. INFO: VM with a simple name of "racovm.2" is in a Stopped state, attempting to start it...OK. INFO: VM with a simple name of "racovm.3" is in a Stopped state, attempting to start it...OK. INFO: Detected that all (4) VMs specified on command have (5) common shared disks between them (ASM_MIN_DISKS=5) INFO: The (4) VMs passed basic sanity checks and in Running state, sending cluster details as follows: netconfig.ini (Network setup): /root/rac/deploycluster/netconfig.ini buildcluster: yes INFO: Starting to send cluster details to all (4) VM(s)....... INFO: Sending to VM with a simple name of "racovm.0".... INFO: Sending to VM with a simple name of "racovm.1"..... INFO: Sending to VM with a simple name of "racovm.2"..... INFO: Sending to VM with a simple name of "racovm.3"...... INFO: Cluster details sent to (4) VMs... Check log (default location /u01/racovm/buildcluster.log) on build VM (racovm.0)... 
INFO: deploycluster.py completed successfully at 17:32:02 in 33.2 seconds (00m:33s) Logfile at: /root/rac/deploycluster/deploycluster2.log my netconfig.ini # Node specific information NODE1=db11rac1 NODE1VIP=db11rac1-vip NODE1PRIV=db11rac1-priv NODE1IP=192.168.1.56 NODE1VIPIP=192.168.1.65 NODE1PRIVIP=192.168.2.2 NODE2=db11rac2 NODE2VIP=db11rac2-vip NODE2PRIV=db11rac2-priv NODE2IP=192.168.1.58 NODE2VIPIP=192.168.1.66 NODE2PRIVIP=192.168.2.3 NODE3=db11rac3 NODE3VIP=db11rac3-vip NODE3PRIV=db11rac3-priv NODE3IP=192.168.1.173 NODE3VIPIP=192.168.1.174 NODE3PRIVIP=192.168.2.4 NODE4=db11rac4 NODE4VIP=db11rac4-vip NODE4PRIV=db11rac4-priv NODE4IP=192.168.1.175 NODE4VIPIP=192.168.1.176 NODE4PRIVIP=192.168.2.5 # Common data PUBADAP=eth0 PUBMASK=255.255.255.0 PUBGW=192.168.1.1 PRIVADAP=eth1 PRIVMASK=255.255.255.0 RACCLUSTERNAME=raccluster DOMAINNAME=wimmekes.net DNSIP= # Device used to transfer network information to second node # in interview mode NETCONFIG_DEV=/dev/xvdc # 11gR2 specific data SCANNAME=db11vip SCANIP=192.168.1.57 last few lines of the in-VM log file : 2012-06-02 14:01:40:[clusterstate:Time :db11rac1] Completed successfully in 2 seconds (0h:00m:02s) 2012-06-02 14:01:40:[buildcluster:Done :db11rac1] Build 11gR2 RAC Cluster 2012-06-02 14:01:40:[buildcluster:Time :db11rac1] Completed successfully in 1779 seconds (0h:29m:39s) From start_vm to completely configured : 29m:39s. The other 10m was the import template and create 4 VMs from template along with the shared storage configuration. This consists of a complete Oracle 11gR2 RAC database with ASM, CRS and the RDBMS up and running on all 4 nodes. Simply connect and use. Production ready. Oracle on Oracle.
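    As a trivial illustration of the "simply connect and use" point (the service name here is a placeholder; the SCAN name db11vip comes from netconfig.ini above), a remote client could connect through the SCAN listener with something like:

    sqlplus system@//db11vip.wimmekes.net:1521/orcl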

    Read the article

  • Extending Database-as-a-Service to Provision Databases with Application Data

    - by Nilesh A
    Oracle Enterprise Manager 12c Database as a Service (DBaaS) empowers Self Service/SSA Users to rapidly spawn databases on demand in cloud. The configuration and structure of provisioned databases depends on respective service template selected by Self Service user while requesting for database. In EM12c, the DBaaS Self Service/SSA Administrator has the option of hosting various service templates in service catalog and based on underlying DBCA templates.Many times provisioned databases require production scale data either for UAT, testing or development purpose and managing DBCA templates with data can be unwieldy. So, we need to populate the database using post deployment script option and without any additional work for the SSA Users. The SSA Administrator can automate this task in few easy steps. For details on how to setup DBaaS Self Service Portal refer to the DBaaS CookbookIn this article, I will list steps required to enable EM 12c DBaaS to provision databases with application data in two distinct ways using: 1) Data pump 2) Transportable tablespaces (TTS). The steps listed below are just examples of how to extend EM 12c DBaaS and you can even have your own method plugged in part of post deployment script option. Using Data Pump to populate databases These are the steps to be followed to implement extending DBaaS using Data Pump methodolgy: Production DBA should run data pump export on the production database and make the dump file available to all the servers participating in the database zone [sample shown in Fig.1] -- Full exportexpdp FULL=y DUMPFILE=data_pump_dir:dpfull1%U.dmp, data_pump_dir:dpfull2%U.dmp PARALLEL=4 LOGFILE=data_pump_dir:dpexpfull.log JOB_NAME=dpexpfull Figure-1:  Full export of database using data pump Create a post deployment SQL script [sample shown in Fig. 2] and this script can either be uploaded into the software library by SSA Administrator or made available on a shared location accessible from servers where databases are likely to be provisioned Normal 0 -- Full importdeclare    h1   NUMBER;begin-- Creating the directory object where source database dump is backed up.    execute immediate 'create directory DEST_LOC as''/scratch/nagrawal/OracleHomes/oradata/INITCHNG/datafile''';-- Running import    h1 := dbms_datapump.open (operation => 'IMPORT', job_mode => 'FULL', job_name => 'DB_IMPORT10');    dbms_datapump.set_parallel(handle => h1, degree => 1);    dbms_datapump.add_file(handle => h1, filename => 'IMP_GRIDDB_FULL.LOG', directory => 'DATA_PUMP_DIR', filetype => 3);    dbms_datapump.add_file(handle => h1, filename => 'EXP_GRIDDB_FULL_%U.DMP', directory => 'DEST_LOC', filetype => 1);    dbms_datapump.start_job(handle => h1);    dbms_datapump.detach(handle => h1);end;/ Figure-2: Importing using data pump pl/sql procedures Using DBCA, create a template for the production database – include all the init.ora parameters, tablespaces, datafiles & their sizes SSA Administrator should customize “Create Database Deployment Procedure” and provide DBCA template created in the previous step. In “Additional Configuration Options” step of Customize “Create Database Deployment Procedure” flow, provide the name of the SQL script in the Custom Script section and lock the input (shown in Fig. 3). Continue saving the deployment procedure. Figure-3: Using Custom script option for calling Import SQL Now, an SSA user can login to Self Service Portal and use the flow to provision a database that will also  populate the data using the post deployment step. 
Using Transportable Tablespaces to populate databases

A copy of all user/application tablespaces enables this method of populating databases. These are the required steps to extend DBaaS using transportable tablespaces:

The production DBA needs to create a backup of the tablespaces. Datafiles may need conversion [such as from big endian to little endian or vice versa] depending on the platforms of the production database and of the destination where DBaaS created the test database. A sample backup script shows how to find out whether any conversion is required and describes the steps needed to convert the datafiles and back up the tablespaces (a rough sketch appears at the end of this entry). The SSA Administrator should copy the database (tablespace) backup datafiles and export dumps to a backup location accessible from the hosts participating in the database zone(s). Create a post deployment SQL script; this script can either be uploaded into the software library by the SSA Administrator or made available on a shared location accessible from the servers where databases are likely to be provisioned. A sample post deployment SQL script would use transportable tablespaces to plug the data into the provisioned database. Using DBCA, create a template for the production database – all the init.ora parameters should be included. NOTE: DO NOT choose to bring tablespace data into this template, as the tablespaces will be created by the post deployment step. The SSA Administrator should customize the “Create Database Deployment Procedure” and provide the DBCA template created in the previous step. In the “Additional Configuration Options” step of the flow, provide the name of the SQL script in the Custom Script section and lock the input. Continue saving the deployment procedure. Now, an SSA user can log in to the Self Service Portal and use the flow to provision a database that will also populate the data using the post deployment step.

More Information: Database-as-a-Service on Exadata Cloud | Podcast on Database as a Service using Oracle Enterprise Manager 12c | Oracle Enterprise Manager 12c Installation and Administration guide, Cloud Administration guide | DBaaS Cookbook | Screenwatch: Private Database Cloud: Set Up the Cloud Self-Service Portal | Screenwatch: Private Database Cloud: Use the Cloud Self-Service Portal

Stay Connected: Twitter | Facebook | YouTube | LinkedIn | Newsletter
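As referenced in the steps above, here is a rough, hypothetical sketch of the production DBA's endian check and datafile conversion. The tablespace name (APPDATA), platform name, and file paths are invented for illustration and are not taken from the original scripts:

-- Check the endian format of the source platform (compare with the target platform's)
SELECT d.platform_name, tp.endian_format
  FROM v$transportable_platform tp, v$database d
 WHERE tp.platform_name = d.platform_name;

-- Make the application tablespace read only and export its metadata with Data Pump
ALTER TABLESPACE appdata READ ONLY;
-- expdp system DIRECTORY=data_pump_dir TRANSPORT_TABLESPACES=appdata DUMPFILE=tts_appdata.dmp LOGFILE=tts_appdata.log

-- If the endian formats differ, convert the datafiles with RMAN, for example:
-- RMAN> CONVERT TABLESPACE appdata TO PLATFORM 'Linux x86 64-bit' FORMAT '/backup/tts/%U';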

    Read the article

  • WIF, ADFS 2 and WCF – Part 5: Service Client (more Flexibility with WSTrustChannelFactory)

    - by Your DisplayName here!
See the previous posts first. WIF includes an API to manually request tokens from a token service. This gives you more control over the request and more flexibility, since you can use your own token caching scheme instead of being bound to the channel object lifetime. The API is straightforward. You first request a token from the STS and then use that token to create a channel to the relying party service. I’d recommend using the WS-Trust bindings that ship with WIF to talk to ADFS 2 – they are pre-configured to match the binding configuration of the ADFS 2 endpoints. The following code requests a token for a WCF service from ADFS 2:

private static SecurityToken GetToken()
{
    // Windows authentication over transport security
    var factory = new WSTrustChannelFactory(
        new WindowsWSTrustBinding(SecurityMode.Transport),
        stsEndpoint);
    factory.TrustVersion = TrustVersion.WSTrust13;

    var rst = new RequestSecurityToken
    {
        RequestType = RequestTypes.Issue,
        AppliesTo = new EndpointAddress(svcEndpoint),
        KeyType = KeyTypes.Symmetric
    };

    var channel = factory.CreateChannel();
    return channel.Issue(rst);
}

Afterwards, the returned token can be used to create a channel to the service. Again WIF has some helper methods here that make this very easy:

private static void CallService(SecurityToken token)
{
    // create binding and turn off sessions
    var binding = new WS2007FederationHttpBinding(
        WSFederationHttpSecurityMode.TransportWithMessageCredential);
    binding.Security.Message.EstablishSecurityContext = false;

    // create factory and enable WIF plumbing
    var factory = new ChannelFactory<IService>(binding, new EndpointAddress(svcEndpoint));
    factory.ConfigureChannelFactory<IService>();

    // turn off CardSpace - we already have the token
    factory.Credentials.SupportInteractive = false;

    var channel = factory.CreateChannelWithIssuedToken<IService>(token);

    channel.GetClaims().ForEach(c =>
        Console.WriteLine("{0}\n {1}\n  {2} ({3})\n",
            c.ClaimType,
            c.Value,
            c.Issuer,
            c.OriginalIssuer));
}

Why is this approach more flexible? Well – some don’t like the configuration voodoo. That’s a valid reason for using the manual approach. You also get more control over the token request itself, since you have full control over the RST message that gets sent to the STS. One common parameter that you may want to set yourself is the appliesTo value. When you use the automatic token support in the WCF federation binding, the appliesTo is always the physical service address. This means in turn that this address will be used as the audience URI value in the SAML token, which in turn means that when you have an application that consists of multiple services, you always have to configure all physical endpoint URLs in ADFS 2 and in the WIF configuration of the service(s). Having control over the appliesTo allows you to use more symbolic realm names, e.g. the base address or a completely logical name. Since the URL is never de-referenced you have some degree of freedom here. In the next post we will look at the necessary code to request multiple tokens in a call chain. This is a common scenario when you first have to acquire a token from an identity provider and then have to send that on to a federation gateway or Resource STS. Stay tuned.
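As a concrete illustration of the symbolic appliesTo idea discussed above (this variation of the RST is a sketch of mine, not from the original post; the urn is made up and would have to match the relying party identifier configured in ADFS 2 and the audience URI in the service's WIF configuration):

var rst = new RequestSecurityToken
{
    RequestType = RequestTypes.Issue,

    // hypothetical logical realm instead of the physical endpoint URL
    AppliesTo = new EndpointAddress("urn:myapplication"),
    KeyType = KeyTypes.Symmetric
};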

    Read the article

  • Oracle GoldenGate 12c - Leading Enterprise Replication

    - by Doug Reid
Oracle GoldenGate 12c was released on October 17th and includes several new cutting-edge features that firmly establish GoldenGate's leadership position in the data replication space. In fact, this release more than doubles the performance of data delivery, supports Oracle's new multitenant database feature, is more secure, offers more options for high availability, and makes great strides in simplifying the configuration and deployment of the product. Read through the press release if you haven't already, and do not miss the quote from CERN's Eva Dafonte Perez regarding Oracle GoldenGate 12c: "…performs five times faster compared to previous GoldenGate versions and simplifies the management of a multi-tier environment". There are a variety of new and improved features in Oracle GoldenGate 12c. Here are the highlights:

Optimized for Oracle Database 12c - GoldenGate 12c is custom tailored to the unique capabilities of Oracle Database 12c, and out of the box GoldenGate 12c supports multitenant (pluggable database (PDB)) and non-consolidated deployments of Oracle Database 12c. The naming convention used by Database 12c is now in three parts (PDB name, schema name, and object name). We have made changes to the GoldenGate capture process to support the new naming convention and streamlined the whole process so that a single GoldenGate capture process is used at the container level rather than at each individual PDB. By having the capture process at the container level, resource usage and the number of processes are reduced. To view a conceptual architecture diagram click here.

Integrated Delivery for the Oracle Database - Leveraging a lightweight streaming API built exclusively for Oracle GoldenGate 12c, this process distributes load, auto-tunes the degree of parallelism, scales better, and delivers blinding rates of changed data to the Oracle database. One of the goals for Oracle GoldenGate 12c was to reduce IT costs by simplifying the configuration and reducing the time needed to manage complex infrastructures. In previous versions of Oracle GoldenGate, customers would split transaction loads by grouping tables into multiple different delivery processes (click here to view the previous method). Each delivery process executed independently and without any interaction with or knowledge of other delivery processes. This setup was complicated to configure and time consuming, as the developer needed in-depth knowledge of the source and target schemas and the transaction profile. With GoldenGate 12c and Integrated Delivery we have made it easier to configure and faster to deploy. To view a conceptual architecture diagram of Integrated Delivery click here.

Coordinated Delivery for Non-Oracle Databases - Coordinated Delivery orchestrates high-speed apply processes and simplifies the configuration of GoldenGate for non-Oracle targets. In Oracle GoldenGate 12c a single delivery process is used with multiple threads (click here), and key events, such as primary key updates, event markers, DDL, etc., are coordinated between the various threads to ensure that the transactions are applied in the same sequence as they were captured, all while delivering improved performance.

Replication Between On-Premises and Cloud-Based Systems - The trend for businesses to utilize both on-premises and cloud-based systems is rising, and businesses need to replicate data back and forth. GoldenGate 12c can be configured in a variety of ways to provide real-time replication when unrestricted or restricted (limited ports or HTTP tunneling) networks sit between on-premises and cloud-based systems.

Expanded Heterogeneity - It wouldn't be a GoldenGate release without new and improved platform support. Release 1 includes support for MySQL 5.6 and Sybase 15.7. In the next GoldenGate release, support will be expanded to MS SQL Server, DB2, and Teradata.

Tighter Security - Oracle GoldenGate 12c is integrated with the Oracle Wallet to shield usernames and passwords using strong encryption and aliases. Customers accustomed to using the Oracle Wallet with other Oracle products will instantly be familiar with how to use this great new feature.

Expanded Oracle Application and Technology Support - GoldenGate can be used along with Oracle Coherence to enable real-time changed data feeds to the Coherence cache using TopLink and the Oracle GoldenGate JMS adapter. Plus, Oracle Advanced Customer Services (ACS) now offers low-downtime E-Business Suite platform and database migrations using GoldenGate as the enabling technology.

Stay tuned for more blogs on the new features and the upcoming launch webcast where we will go into these new features in more detail. In the meantime make sure to read through our white paper "Oracle GoldenGate 12c Release 1 New Features Overview".

    Read the article

  • SPARC T4-4 Delivers World Record Performance on Oracle OLAP Perf Version 2 Benchmark

    - by Brian
Oracle's SPARC T4-4 server delivered world record performance with subsecond response time on the Oracle OLAP Perf Version 2 benchmark using Oracle Database 11g Release 2 running on Oracle Solaris 11. The SPARC T4-4 server achieved throughput of 430,000 cube-queries/hour with an average response time of 0.85 seconds and a median response time of 0.43 seconds. This was achieved using only 60% of the available CPU resources, leaving plenty of headroom for future growth. The SPARC T4-4 server operated on an Oracle OLAP cube with a 4 billion row fact table of sales data containing 4 dimensions. This represents as many as 90 quintillion aggregate rows (90 followed by 18 zeros).

Performance Landscape - Oracle OLAP Perf Version 2 Benchmark, 4 Billion Fact Table Rows

  System       Queries/hour   Users*   Avg Response (sec)   Median Response (sec)
  SPARC T4-4   430,000        7,300    0.85                 0.43

  * Users - the supported number of users with a given think time of 60 seconds

Configuration Summary and Results

Hardware Configuration: SPARC T4-4 server with 4 x SPARC T4 processors, 3.0 GHz, 1 TB memory. Data Storage: 1 x Sun Fire X4275 (using COMSTAR), 2 x Sun Storage F5100 Flash Array (each with 80 FMODs). Redo Storage: 1 x Sun Fire X4275 (using COMSTAR with 8 HDD).

Software Configuration: Oracle Solaris 11 11/11, Oracle Database 11g Release 2 (11.2.0.3) with Oracle OLAP option.

Benchmark Description

The Oracle OLAP Perf Version 2 benchmark is a workload designed to demonstrate and stress the Oracle OLAP product's core features of fast query, fast update, and rich calculations on a multi-dimensional model to support enhanced data warehousing. The bulk of the benchmark entails running a number of concurrent users, each issuing typical multidimensional queries against an Oracle OLAP cube consisting of a number of years of sales data with fully pre-computed aggregations. The cube has four dimensions: time, product, customer, and channel. Each query user issues approximately 150 different queries. One query chain may ask for total sales in a particular region (e.g. South America) for a particular time period (e.g. Q4 of 2010), followed by additional queries which drill down into sales for individual countries (e.g. Chile, Peru, etc.), with further queries drilling down into individual stores, etc. Another query chain may ask for yearly comparisons of total sales for some product category (e.g. major household appliances) and then issue further queries drilling down into particular products (e.g. refrigerators, stoves, etc.), particular regions, particular customers, etc. Results from version 2 of the benchmark are not comparable with version 1. The primary difference is the type of queries along with the query mix.

Key Points and Best Practices

Since typical BI users are often likely to issue similar queries with different constants in the where clauses, setting the init.ora parameter "cursor_sharing" to "force" will provide additional query throughput and a larger number of potential users. Except for this setting, together with making full use of available memory, out-of-the-box performance for the OLAP Perf workload should provide results similar to what is reported here. For a given number of query users with zero think time, the main measured metrics are the average query response time, the median query response time, and the query throughput. A derived metric is the maximum number of users the system can support achieving the measured response time, assuming some non-zero think time.
The calculation of the maximum number of users follows from the well-known response-time law N = (rt + tt) * tp, where rt is the average response time, tt is the think time and tp is the measured throughput. Setting tt to 60 seconds, rt to 0.85 seconds and tp to 119.44 queries/sec (430,000 queries/hour), the above formula shows that the T4-4 server will support 7,300 concurrent users with a think time of 60 seconds and an average response time of 0.85 seconds. For more information see chapter 3 from the book "Quantitative System Performance" cited below.

See Also: Quantitative System Performance: Computer System Analysis Using Queueing Network Models - Edward D. Lazowska, John Zahorjan, G. Scott Graham, Kenneth C. Sevcik | Oracle Database 11g – Oracle OLAP (oracle.com, OTN) | SPARC T4-4 Server (oracle.com, OTN) | Oracle Solaris (oracle.com, OTN) | Oracle Database 11g Release 2 (oracle.com, OTN)

Disclosure Statement: Copyright 2012, Oracle and/or its affiliates. All rights reserved. Oracle and Java are registered trademarks of Oracle and/or its affiliates. Other names may be trademarks of their respective owners. Results as of 11/2/2012.

    Read the article

  • Rkhunter 122 suspect files; do I have a problem?

    - by user276166
    I am new to ubuntu. I am using Xfce Ubuntu 14.04 LTS. I have ran rkhunter a few weeks age and only got a few warnings. The forum said that they were normal. But, this time rkhunter reported 122 warnings. Please advise. casey@Shaman:~$ sudo rkhunter -c [ Rootkit Hunter version 1.4.0 ] Checking system commands... Performing 'strings' command checks Checking 'strings' command [ OK ] Performing 'shared libraries' checks Checking for preloading variables [ None found ] Checking for preloaded libraries [ None found ] Checking LD_LIBRARY_PATH variable [ Not found ] Performing file properties checks Checking for prerequisites [ Warning ] /usr/sbin/adduser [ Warning ] /usr/sbin/chroot [ Warning ] /usr/sbin/cron [ OK ] /usr/sbin/groupadd [ Warning ] /usr/sbin/groupdel [ Warning ] /usr/sbin/groupmod [ Warning ] /usr/sbin/grpck [ Warning ] /usr/sbin/nologin [ Warning ] /usr/sbin/pwck [ Warning ] /usr/sbin/rsyslogd [ Warning ] /usr/sbin/useradd [ Warning ] /usr/sbin/userdel [ Warning ] /usr/sbin/usermod [ Warning ] /usr/sbin/vipw [ Warning ] /usr/bin/awk [ Warning ] /usr/bin/basename [ Warning ] /usr/bin/chattr [ Warning ] /usr/bin/cut [ Warning ] /usr/bin/diff [ Warning ] /usr/bin/dirname [ Warning ] /usr/bin/dpkg [ Warning ] /usr/bin/dpkg-query [ Warning ] /usr/bin/du [ Warning ] /usr/bin/env [ Warning ] /usr/bin/file [ Warning ] /usr/bin/find [ Warning ] /usr/bin/GET [ Warning ] /usr/bin/groups [ Warning ] /usr/bin/head [ Warning ] /usr/bin/id [ Warning ] /usr/bin/killall [ OK ] /usr/bin/last [ Warning ] /usr/bin/lastlog [ Warning ] /usr/bin/ldd [ Warning ] /usr/bin/less [ OK ] /usr/bin/locate [ OK ] /usr/bin/logger [ Warning ] /usr/bin/lsattr [ Warning ] /usr/bin/lsof [ OK ] /usr/bin/mail [ OK ] /usr/bin/md5sum [ Warning ] /usr/bin/mlocate [ OK ] /usr/bin/newgrp [ Warning ] /usr/bin/passwd [ Warning ] /usr/bin/perl [ Warning ] /usr/bin/pgrep [ Warning ] /usr/bin/pkill [ Warning ] /usr/bin/pstree [ OK ] /usr/bin/rkhunter [ OK ] /usr/bin/rpm [ Warning ] /usr/bin/runcon [ Warning ] /usr/bin/sha1sum [ Warning ] /usr/bin/sha224sum [ Warning ] /usr/bin/sha256sum [ Warning ] /usr/bin/sha384sum [ Warning ] /usr/bin/sha512sum [ Warning ] /usr/bin/size [ Warning ] /usr/bin/sort [ Warning ] /usr/bin/stat [ Warning ] /usr/bin/strace [ Warning ] /usr/bin/strings [ Warning ] /usr/bin/sudo [ Warning ] /usr/bin/tail [ Warning ] /usr/bin/test [ Warning ] /usr/bin/top [ Warning ] /usr/bin/touch [ Warning ] /usr/bin/tr [ Warning ] /usr/bin/uniq [ Warning ] /usr/bin/users [ Warning ] /usr/bin/vmstat [ Warning ] /usr/bin/w [ Warning ] /usr/bin/watch [ Warning ] /usr/bin/wc [ Warning ] /usr/bin/wget [ Warning ] /usr/bin/whatis [ Warning ] /usr/bin/whereis [ Warning ] /usr/bin/which [ OK ] /usr/bin/who [ Warning ] /usr/bin/whoami [ Warning ] /usr/bin/unhide.rb [ Warning ] /usr/bin/mawk [ Warning ] /usr/bin/lwp-request [ Warning ] /usr/bin/heirloom-mailx [ OK ] /usr/bin/w.procps [ Warning ] /sbin/depmod [ Warning ] /sbin/fsck [ Warning ] /sbin/ifconfig [ Warning ] /sbin/ifdown [ Warning ] /sbin/ifup [ Warning ] /sbin/init [ Warning ] /sbin/insmod [ Warning ] /sbin/ip [ Warning ] /sbin/lsmod [ Warning ] /sbin/modinfo [ Warning ] /sbin/modprobe [ Warning ] /sbin/rmmod [ Warning ] /sbin/route [ Warning ] /sbin/runlevel [ Warning ] /sbin/sulogin [ Warning ] /sbin/sysctl [ Warning ] /bin/bash [ Warning ] /bin/cat [ Warning ] /bin/chmod [ Warning ] /bin/chown [ Warning ] /bin/cp [ Warning ] /bin/date [ Warning ] /bin/df [ Warning ] /bin/dmesg [ Warning ] /bin/echo [ Warning ] /bin/ed [ OK ] /bin/egrep [ Warning ] /bin/fgrep 
[ Warning ] /bin/fuser [ OK ] /bin/grep [ Warning ] /bin/ip [ Warning ] /bin/kill [ Warning ] /bin/less [ OK ] /bin/login [ Warning ] /bin/ls [ Warning ] /bin/lsmod [ Warning ] /bin/mktemp [ Warning ] /bin/more [ Warning ] /bin/mount [ Warning ] /bin/mv [ Warning ] /bin/netstat [ Warning ] /bin/ping [ Warning ] /bin/ps [ Warning ] /bin/pwd [ Warning ] /bin/readlink [ Warning ] /bin/sed [ Warning ] /bin/sh [ Warning ] /bin/su [ Warning ] /bin/touch [ Warning ] /bin/uname [ Warning ] /bin/which [ OK ] /bin/kmod [ Warning ] /bin/dash [ Warning ] [Press <ENTER> to continue] Checking for rootkits... Performing check of known rootkit files and directories 55808 Trojan - Variant A [ Not found ] ADM Worm [ Not found ] AjaKit Rootkit [ Not found ] Adore Rootkit [ Not found ] aPa Kit [ Not found ] Apache Worm [ Not found ] Ambient (ark) Rootkit [ Not found ] Balaur Rootkit [ Not found ] BeastKit Rootkit [ Not found ] beX2 Rootkit [ Not found ] BOBKit Rootkit [ Not found ] cb Rootkit [ Not found ] CiNIK Worm (Slapper.B variant) [ Not found ] Danny-Boy's Abuse Kit [ Not found ] Devil RootKit [ Not found ] Dica-Kit Rootkit [ Not found ] Dreams Rootkit [ Not found ] Duarawkz Rootkit [ Not found ] Enye LKM [ Not found ] Flea Linux Rootkit [ Not found ] Fu Rootkit [ Not found ] Fuck`it Rootkit [ Not found ] GasKit Rootkit [ Not found ] Heroin LKM [ Not found ] HjC Kit [ Not found ] ignoKit Rootkit [ Not found ] IntoXonia-NG Rootkit [ Not found ] Irix Rootkit [ Not found ] Jynx Rootkit [ Not found ] KBeast Rootkit [ Not found ] Kitko Rootkit [ Not found ] Knark Rootkit [ Not found ] ld-linuxv.so Rootkit [ Not found ] Li0n Worm [ Not found ] Lockit / LJK2 Rootkit [ Not found ] Mood-NT Rootkit [ Not found ] MRK Rootkit [ Not found ] Ni0 Rootkit [ Not found ] Ohhara Rootkit [ Not found ] Optic Kit (Tux) Worm [ Not found ] Oz Rootkit [ Not found ] Phalanx Rootkit [ Not found ] Phalanx2 Rootkit [ Not found ] Phalanx2 Rootkit (extended tests) [ Not found ] Portacelo Rootkit [ Not found ] R3dstorm Toolkit [ Not found ] RH-Sharpe's Rootkit [ Not found ] RSHA's Rootkit [ Not found ] Scalper Worm [ Not found ] Sebek LKM [ Not found ] Shutdown Rootkit [ Not found ] SHV4 Rootkit [ Not found ] SHV5 Rootkit [ Not found ] Sin Rootkit [ Not found ] Slapper Worm [ Not found ] Sneakin Rootkit [ Not found ] 'Spanish' Rootkit [ Not found ] Suckit Rootkit [ Not found ] Superkit Rootkit [ Not found ] TBD (Telnet BackDoor) [ Not found ] TeLeKiT Rootkit [ Not found ] T0rn Rootkit [ Not found ] trNkit Rootkit [ Not found ] Trojanit Kit [ Not found ] Tuxtendo Rootkit [ Not found ] URK Rootkit [ Not found ] Vampire Rootkit [ Not found ] VcKit Rootkit [ Not found ] Volc Rootkit [ Not found ] Xzibit Rootkit [ Not found ] zaRwT.KiT Rootkit [ Not found ] ZK Rootkit [ Not found ] [Press <ENTER> to continue] Performing additional rootkit checks Suckit Rookit additional checks [ OK ] Checking for possible rootkit files and directories [ None found ] Checking for possible rootkit strings [ None found ] Performing malware checks Checking running processes for suspicious files [ None found ] Checking for login backdoors [ None found ] Checking for suspicious directories [ None found ] Checking for sniffer log files [ None found ] Performing Linux specific checks Checking loaded kernel modules [ OK ] Checking kernel module names [ OK ] [Press <ENTER> to continue] Checking the network... 
Performing checks on the network ports Checking for backdoor ports [ None found ] Checking for hidden ports [ Skipped ] Performing checks on the network interfaces Checking for promiscuous interfaces [ None found ] Checking the local host... Performing system boot checks Checking for local host name [ Found ] Checking for system startup files [ Found ] Checking system startup files for malware [ None found ] Performing group and account checks Checking for passwd file [ Found ] Checking for root equivalent (UID 0) accounts [ None found ] Checking for passwordless accounts [ None found ] Checking for passwd file changes [ Warning ] Checking for group file changes [ Warning ] Checking root account shell history files [ None found ] Performing system configuration file checks Checking for SSH configuration file [ Not found ] Checking for running syslog daemon [ Found ] Checking for syslog configuration file [ Found ] Checking if syslog remote logging is allowed [ Not allowed ] Performing filesystem checks Checking /dev for suspicious file types [ Warning ] Checking for hidden files and directories [ Warning ] [Press <ENTER> to continue] System checks summary ===================== File properties checks... Required commands check failed Files checked: 137 Suspect files: 122 Rootkit checks... Rootkits checked : 291 Possible rootkits: 0 Applications checks... All checks skipped The system checks took: 5 minutes and 11 seconds All results have been written to the log file (/var/log/rkhunter.log)

    Read the article

  • Unexpected advantage of Engineered Systems

    - by user12244672
It's not surprising that Engineered Systems accelerate the debugging and resolution of customer issues. But what has surprised me is just how much faster issue resolution is with Engineered Systems such as SPARC SuperCluster. These are powerful, complex systems used by customers wanting extreme database performance, app performance, and cost-saving server consolidation. A SPARC SuperCluster consists of 2 or 4 powerful T4-4 compute nodes, 3 or 6 extreme performance Exadata Storage Cells, a ZFS Storage Appliance 7320 for general purpose storage, and ultra fast Infiniband switches.  Each with its own firmware. It runs Solaris 11, Solaris 10, 11gR2, LDoms virtualization, and Zones virtualization on the T4-4 compute nodes, a modified version of Solaris 11 in the ZFS Storage Appliance, a modified and highly tuned version of Oracle Linux running Exadata software on the Storage Cells, another Linux derivative in the Infiniband switches, etc. It has an Infiniband data network between the components, a 10Gb data network to the outside world, and a 1Gb management network. And customers can run whatever middleware and apps they want on it, clustered in whatever way they want. In one word, powerful.  In another, complex. The system is highly Engineered.  But it's designed to run general purpose applications. That is, the physical components, configuration, cabling, virtualization technologies, switches, firmware, Operating System versions, network protocols, tunables, etc. are all preset for optimum performance and robustness. That improves the customer experience as what the customer runs leverages our technical know-how and best practices and is what we've tested intensely within Oracle. It should also make debugging easier by fixing a large number of variables which would otherwise be in play if a customer or Systems Integrator had assembled such a complex system themselves from the constituent components.  For example, there are myriad network protocols which could be used with Infiniband.  Myriad ways the components could be interconnected, myriad tunable settings, etc. But what has really surprised me - and I've been working in this area for 15 years now - is just how much easier and faster Engineered Systems have made debugging and issue resolution. All those error opportunities for sub-optimal cabling, unusual network protocols, sub-optimal deployment of virtualization technologies, issues with 3rd party storage, issues with 3rd party multi-pathing products, etc., are simply taken out of the equation. All those error opportunities for making an issue unique to a particular set-up, the "why aren't we seeing this on any other system?" type questions, the doubts, just go away when we or a customer discover an issue on an Engineered System. It enables a really honed response, getting to the root cause much, much faster than would otherwise be the case. Here are a couple of examples from the last month, one found in-house by my team, one found by a customer: Example 1: We found a node eviction issue running 11gR2 with Solaris 11 SRU 12 under extreme load on what we call our ExaLego test system (mimics an Exadata / SuperCluster 11gR2 Exadata Storage Cell set-up).  We quickly established that an enhancement in SRU12 enabled an 11gR2 process to query Infiniband's Subnet Manager, replacing a fallback mechanism it had used previously.  Under abnormally heavy load, the query could return results which were misinterpreted, resulting in node eviction.
In several daily joint debugging sessions between the Solaris, Infiniband, and 11gR2 teams, the issue was fully root caused, evaluated, and a fix agreed upon.  That fix went back into all Solaris releases the following Monday.  From initial issue discovery to the fix being put back into all Solaris releases was just 10 days. Example 2: A customer reported sporadic performance degradation.  The reasons were unclear and the information sparse.  The SPARC SuperCluster Engineered Systems support teams which comprises both SPARC/Solaris and Database/Exadata experts worked to root cause the issue.  A number of contributing factors were discovered, including tunable parameters.  An intense collaborative investigation between the engineering teams identified the root cause to a CPU bound networking thread which was being starved of CPU cycles under extreme load.  Workarounds were identified.  Modifications have been put back into 11gR2 to alleviate the issue and a development project already underway within Solaris has been sped up to provide the final resolution on the Solaris side.  The fixed SPARC SuperCluster configuration greatly aided issue reproduction and dramatically sped up root cause analysis, allowing the correct workarounds and fixes to be identified, prioritized, and implemented.  The customer is now extremely happy with performance and robustness.  Since the configuration is common to other customers, the lessons learned are being proactively rolled out to other customers and incorporated into the installation procedures for future customers.  This effectively acts as a turbo-boost to performance and reliability for all SPARC SuperCluster customers.  If this had occurred in a "home grown" system of this complexity, I expect it would have taken at least 6 months to get to the bottom of the issue.  But because it was an Engineered System, known, understood, and qualified by both the Solaris and Database teams, we were able to collaborate closely to identify cause and effect and expedite a solution for the customer.  That is a key advantage of Engineered Systems which should not be underestimated.  Indeed, the initial issue mitigation on the Database side followed by final fix on the Solaris side, highlights the high degree of collaboration and excellent teamwork between the Oracle engineering teams.  It's a compelling advantage of the integrated Oracle Red Stack in general and Engineered Systems in particular.

    Read the article

  • Atheros 922 PCI WIFI is disabled in Unity but enabled in terminal - How to get it to work?

    - by zewone
    I am trying to get my PCI Wireless Atheros 922 card to work. It is disabled in Unity: both the network utility and the desktop (see screenshot http://www.amisdurailhalanzy.be/Screenshot%20from%202012-10-25%2013:19:54.png) I tried many different advises on many different forums. Installed 12.10 instead of 12.04, enabled all interfaces... etc. I have read about the aht9 driver... The terminal shows no hw or sw lock for the Atheros card, nevertheless, it is still disabled. Nothing worked so far, the card is still disabled. Any help is much appreciated. Here are more tech details: myuser@adri1:~$ sudo lshw -C network *-network:0 DISABLED description: Wireless interface product: AR922X Wireless Network Adapter vendor: Atheros Communications Inc. physical id: 2 bus info: pci@0000:03:02.0 logical name: wlan1 version: 01 serial: 00:18:e7:cd:68:b1 width: 32 bits clock: 66MHz capabilities: pm bus_master cap_list ethernet physical wireless configuration: broadcast=yes driver=ath9k driverversion=3.5.0-17-generic firmware=N/A latency=168 link=no multicast=yes wireless=IEEE 802.11bgn resources: irq:18 memory:d8000000-d800ffff *-network:1 description: Ethernet interface product: VT6105/VT6106S [Rhine-III] vendor: VIA Technologies, Inc. physical id: 6 bus info: pci@0000:03:06.0 logical name: eth0 version: 8b serial: 00:11:09:a3:76:4a size: 10Mbit/s capacity: 100Mbit/s width: 32 bits clock: 33MHz capabilities: pm bus_master cap_list ethernet physical tp mii 10bt 10bt-fd 100bt 100bt-fd autonegotiation configuration: autonegotiation=on broadcast=yes driver=via-rhine driverversion=1.5.0 duplex=half latency=32 link=no maxlatency=8 mingnt=3 multicast=yes port=MII speed=10Mbit/s resources: irq:18 ioport:d300(size=256) memory:d8013000-d80130ff *-network DISABLED description: Wireless interface physical id: 1 bus info: usb@1:8.1 logical name: wlan0 serial: 00:11:09:51:75:36 capabilities: ethernet physical wireless configuration: broadcast=yes driver=rt2500usb driverversion=3.5.0-17-generic firmware=N/A link=no multicast=yes wireless=IEEE 802.11bg myuser@adri1:~$ sudo rfkill list all 0: hci0: Bluetooth Soft blocked: no Hard blocked: no 1: phy1: Wireless LAN Soft blocked: no Hard blocked: yes 2: phy0: Wireless LAN Soft blocked: no Hard blocked: no myuser@adri1:~$ dmesg | grep wlan0 [ 15.114235] IPv6: ADDRCONF(NETDEV_UP): wlan0: link is not ready myuser@adri1:~$ dmesg | egrep 'ath|firm' [ 14.617562] ath: EEPROM regdomain: 0x30 [ 14.617568] ath: EEPROM indicates we should expect a direct regpair map [ 14.617572] ath: Country alpha2 being used: AM [ 14.617575] ath: Regpair used: 0x30 [ 14.637778] ieee80211 phy0: >Selected rate control algorithm 'ath9k_rate_control' [ 14.639410] Registered led device: ath9k-phy0 myuser@adri1:~$ dmesg | grep wlan1 [ 15.119922] IPv6: ADDRCONF(NETDEV_UP): wlan1: link is not ready myuser@adri1:~$ lspci -nn | grep 'Atheros' 03:02.0 Network controller [0280]: Atheros Communications Inc. 
AR922X Wireless Network Adapter [168c:0029] (rev 01) myuser@adri1:~$ sudo ifconfig eth0 Link encap:Ethernet HWaddr 00:11:09:a3:76:4a inet addr:192.168.2.2 Bcast:192.168.2.255 Mask:255.255.255.0 inet6 addr: fe80::211:9ff:fea3:764a/64 Scope:Link UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1 RX packets:5457 errors:0 dropped:0 overruns:0 frame:0 TX packets:2548 errors:0 dropped:0 overruns:0 carrier:0 collisions:0 txqueuelen:1000 RX bytes:3425684 (3.4 MB) TX bytes:282192 (282.1 KB) lo Link encap:Local Loopback inet addr:127.0.0.1 Mask:255.0.0.0 inet6 addr: ::1/128 Scope:Host UP LOOPBACK RUNNING MTU:16436 Metric:1 RX packets:590 errors:0 dropped:0 overruns:0 frame:0 TX packets:590 errors:0 dropped:0 overruns:0 carrier:0 collisions:0 txqueuelen:0 RX bytes:53729 (53.7 KB) TX bytes:53729 (53.7 KB) myuser@adri1:~$ sudo iwconfig wlan0 IEEE 802.11bg ESSID:off/any Mode:Managed Access Point: Not-Associated Tx-Power=off Retry long limit:7 RTS thr:off Fragment thr:off Encryption key:off Power Management:on lo no wireless extensions. eth0 no wireless extensions. wlan1 IEEE 802.11bgn ESSID:off/any Mode:Managed Access Point: Not-Associated Tx-Power=0 dBm Retry long limit:7 RTS thr:off Fragment thr:off Encryption key:off Power Management:off myuser@adri1:~$ lsmod | grep "ath9k" ath9k 116549 0 mac80211 461161 3 rt2x00usb,rt2x00lib,ath9k ath9k_common 13783 1 ath9k ath9k_hw 376155 2 ath9k,ath9k_common ath 19187 3 ath9k,ath9k_common,ath9k_hw cfg80211 175375 4 rt2x00lib,ath9k,mac80211,ath myuser@adri1:~$ iwlist scan wlan0 Failed to read scan data : Network is down lo Interface doesn't support scanning. eth0 Interface doesn't support scanning. wlan1 Failed to read scan data : Network is down myuser@adri1:~$ lsb_release -d Description: Ubuntu 12.10 myuser@adri1:~$ uname -mr 3.5.0-17-generic i686 ![Schizophrenic Ubuntu](http://www.amisdurailhalanzy.be/Screenshot%20from%202012-10-25%2013:19:54.png) Any help much appreciated... Thanks, Philippe 31-10-2012 ... I have some more updates. When I do the following command it does see my Wifi router... So even if it is still disabled... the card seems to work and see the router (ESSID:"5791BC26-CE9C-11D1-97BF-0000F81E") See below: sudo iwlist wlan1 scanning wlan1 Scan completed : Cell 01 - Address: 00:19:70:8F:B0:EA Channel:10 Frequency:2.457 GHz (Channel 10) Quality=51/70 Signal level=-59 dBm Encryption key:on ESSID:"5791BC26-CE9C-11D1-97BF-0000F81E" Bit Rates:1 Mb/s; 2 Mb/s; 5.5 Mb/s; 11 Mb/s; 6 Mb/s 9 Mb/s; 12 Mb/s; 18 Mb/s Bit Rates:24 Mb/s; 36 Mb/s; 48 Mb/s; 54 Mb/s Mode:Master Extra:tsf=000000025dbf2188 Extra: Last beacon: 108ms ago IE: Unknown: 002035373931424332362D434539432D313144312D393742462D3030303046383145 IE: Unknown: 010882848B960C121824 IE: Unknown: 03010A IE: Unknown: 0706424520010D14 IE: IEEE 802.11i/WPA2 Version 1 Group Cipher : TKIP Pairwise Ciphers (2) : CCMP TKIP Authentication Suites (1) : PSK IE: Unknown: 2A0100 IE: Unknown: 32043048606C IE: Unknown: DD180050F2020101030003A4000027A4000042435E0062322F00 IE: Unknown: DD0900037F01010000FF7F IE: Unknown: DD0A00037F04010000000000

    Read the article

  • Intel Centrino Wireless-N 1000 Again ! Ubuntu 13.04 x64

    - by vafa
    First I have to say that I tried everything written about this concept. The problem is that it stops working randomly in 3 main forms : 1 - sometimes it disconnect from wireless network and reconnect automatically 2 - sometimes it disconnect and wont connect no matter what (needs reboot) 3 - some times it's still connected but cannot ping or surf or whatever. I already tried disabling N mod using these commands : sudo modprobe -r iwlwifi modprobe iwlwifi 11n_disable=1 (or 0, whatever) it didn't help . these are the results of lspci, sudo lshw -C network, ifconfig, iwconfig, rfkill list when it disconnected and didn't connect till reboot : ifconfig : eth0 Link encap:Ethernet HWaddr c8:0a:a9:34:65:77 UP BROADCAST MULTICAST MTU:1500 Metric:1 RX packets:1563213476557380 errors:9379306629148050 dropped:3126435543049350 overruns:1563217771524675 frame:7816088857623375 TX packets:1563217771524675 errors:6252871086098700 dropped:0 overruns:1563217771524675 carrier:3126435543049350 collisions:7816088857623375 txqueuelen:1000 RX bytes:1563217771524675 (1.5 PB) TX bytes:1563217771524675 (1.5 PB) ham0 Link encap:Ethernet HWaddr 7a:79:19:a5:e4:93 inet addr:25.165.228.147 Bcast:25.255.255.255 Mask:255.0.0.0 inet6 addr: fe80::7879:19ff:fea5:e493/64 Scope:Link inet6 addr: 2620:9b::19a5:e493/96 Scope:Global UP BROADCAST RUNNING MULTICAST MTU:1404 Metric:1 RX packets:7743 errors:0 dropped:0 overruns:0 frame:0 TX packets:1250 errors:0 dropped:0 overruns:0 carrier:0 collisions:0 txqueuelen:500 RX bytes:665642 (665.6 KB) TX bytes:204056 (204.0 KB) lo Link encap:Local Loopback inet addr:127.0.0.1 Mask:255.0.0.0 inet6 addr: ::1/128 Scope:Host UP LOOPBACK RUNNING MTU:65536 Metric:1 RX packets:41138 errors:0 dropped:0 overruns:0 frame:0 TX packets:41138 errors:0 dropped:0 overruns:0 carrier:0 collisions:0 txqueuelen:0 RX bytes:6420962 (6.4 MB) TX bytes:6420962 (6.4 MB) wlan0 Link encap:Ethernet HWaddr 00:1e:64:45:fb:70 inet6 addr: fe80::21e:64ff:fe45:fb70/64 Scope:Link UP BROADCAST MULTICAST MTU:1500 Metric:1 RX packets:286999 errors:0 dropped:0 overruns:0 frame:0 TX packets:226966 errors:0 dropped:0 overruns:0 carrier:0 collisions:0 txqueuelen:1000 RX bytes:324386887 (324.3 MB) TX bytes:30674804 (30.6 MB) iwconfig : ham0 no wireless extensions. eth0 no wireless extensions. lo no wireless extensions. 
wlan0 IEEE 802.11bg ESSID:off/any Mode:Managed Access Point: Not-Associated Tx-Power=14 dBm Retry long limit:7 RTS thr:off Fragment thr:off Power Management:off sudo lshw -C network: *-network description: Wireless interface product: Centrino Wireless-N 1000 [Condor Peak] vendor: Intel Corporation physical id: 0 bus info: pci@0000:07:00.0 logical name: wlan0 version: 00 serial: 00:1e:64:45:fb:70 width: 64 bits clock: 33MHz capabilities: pm msi pciexpress bus_master cap_list ethernet physical wireless configuration: broadcast=yes driver=iwlwifi driverversion=3.8.0-30-generic firmware=39.31.5.1 build 35138 latency=0 link=no multicast=yes wireless=IEEE 802.11bg resources: irq:46 memory:c0400000-c0401fff *-network description: Ethernet interface product: AR8131 Gigabit Ethernet vendor: Qualcomm Atheros physical id: 0 bus info: pci@0000:09:00.0 logical name: eth0 version: c0 serial: c8:0a:a9:34:65:77 capacity: 1Gbit/s width: 64 bits clock: 33MHz capabilities: pm msi pciexpress vpd cap_list ethernet physical tp 10bt 10bt-fd 100bt 100bt-fd 1000bt-fd autonegotiation configuration: autonegotiation=on broadcast=yes driver=atl1c driverversion=1.0.1.1-NAPI latency=0 link=no multicast=yes port=twisted pair resources: irq:47 memory:c0900000-c093ffff ioport:5000(size=128) *-network description: Ethernet interface physical id: 2 logical name: ham0 serial: 7a:79:19:a5:e4:93 size: 10Mbit/s capabilities: ethernet physical configuration: autonegotiation=off broadcast=yes driver=tun driverversion=1.6 duplex=full ip=25.165.228.147 link=yes multicast=yes port=twisted pair speed=10Mbit/s lspci: 00:00.0 Host bridge: Intel Corporation Mobile 4 Series Chipset Memory Controller Hub (rev 07) 00:01.0 PCI bridge: Intel Corporation Mobile 4 Series Chipset PCI Express Graphics Port (rev 07) 00:1a.0 USB controller: Intel Corporation 82801I (ICH9 Family) USB UHCI Controller #4 (rev 03) 00:1a.1 USB controller: Intel Corporation 82801I (ICH9 Family) USB UHCI Controller #5 (rev 03) 00:1a.7 USB controller: Intel Corporation 82801I (ICH9 Family) USB2 EHCI Controller #2 (rev 03) 00:1b.0 Audio device: Intel Corporation 82801I (ICH9 Family) HD Audio Controller (rev 03) 00:1c.0 PCI bridge: Intel Corporation 82801I (ICH9 Family) PCI Express Port 1 (rev 03) 00:1c.3 PCI bridge: Intel Corporation 82801I (ICH9 Family) PCI Express Port 4 (rev 03) 00:1c.5 PCI bridge: Intel Corporation 82801I (ICH9 Family) PCI Express Port 6 (rev 03) 00:1d.0 USB controller: Intel Corporation 82801I (ICH9 Family) USB UHCI Controller #1 (rev 03) 00:1d.1 USB controller: Intel Corporation 82801I (ICH9 Family) USB UHCI Controller #2 (rev 03) 00:1d.2 USB controller: Intel Corporation 82801I (ICH9 Family) USB UHCI Controller #3 (rev 03) 00:1d.3 USB controller: Intel Corporation 82801I (ICH9 Family) USB UHCI Controller #6 (rev 03) 00:1d.7 USB controller: Intel Corporation 82801I (ICH9 Family) USB2 EHCI Controller #1 (rev 03) 00:1e.0 PCI bridge: Intel Corporation 82801 Mobile PCI Bridge (rev 93) 00:1f.0 ISA bridge: Intel Corporation ICH9M LPC Interface Controller (rev 03) 00:1f.2 SATA controller: Intel Corporation 82801IBM/IEM (ICH9M/ICH9M-E) 4 port SATA Controller [AHCI mode] (rev 03) 00:1f.3 SMBus: Intel Corporation 82801I (ICH9 Family) SMBus Controller (rev 03) 01:00.0 VGA compatible controller: NVIDIA Corporation G98M [GeForce G 105M] (rev a1) 07:00.0 Network controller: Intel Corporation Centrino Wireless-N 1000 [Condor Peak] 09:00.0 Ethernet controller: Qualcomm Atheros AR8131 Gigabit Ethernet (rev c0) rfkill list : 1: acer-wireless: Wireless LAN Soft blocked: 
no Hard blocked: no 2: acer-bluetooth: Bluetooth Soft blocked: yes Hard blocked: no 9: phy0: Wireless LAN Soft blocked: no Hard blocked: no any help will be REALLLYYYY appreciated

    Read the article

  • My error with upgrading 4.0 to 4.2 - What NOT to do...

    - by Steve Tunstall
Last week, I was helping a client upgrade from the 2011.1.4.0 code to the newest 2011.1.4.2 code. We downloaded the 4.2 update from MOS, uploaded and unpacked it on both controllers, and upgraded one of the controllers in the cluster with no issues at all. As this was a brand-new system with no networking or pools made on it yet, there were not any resources to fail back and forth between the controllers. Each controller had its own private management interface (igb0 and igb1) and that's it. So we took controller 1 as the passive controller and upgraded it first. The first controller came back up with no issues and was now on the 4.2 code. Great. We then did a takeover on controller 1, making it the active head (although there were no resources for it to take), and then proceeded to upgrade controller 2. Upon upgrading the second controller, we ran the health check with no issues. We then ran the update and it ran and rebooted normally. However, something strange then happened. It took longer than normal to come back up, and when it did, we got the "cluster controllers on different code" error message that one gets when the two controllers of a cluster are running different code. But we just upgraded the second controller to 4.2, so they should have been the same, right??? Going into the Maintenance-->System screen of controller 2, we saw something very strange. The "current version" was still on 4.0, and the 4.2 code was there but was in the "previous" state with the rollback icon, as if it was the OLDER code and not the newer code. I have never seen this happen before. I would have thought it was a bad 4.2 code file, but it worked just fine with controller 1, so I don't think that was it. Other than the fact the code did not update, there was nothing else going on with this system. It had no yellow lights, no errors in the Problems section, and no errors in any of the logs. It was just out of the box a few hours ago, and didn't even have a storage pool yet. So.... We deleted the 4.2 code, uploaded it from scratch, ran the health check, and ran the upgrade again. Once again, it seemed to go great, rebooted, and came back up to the same issue, where it came to 4.0 instead of 4.2. See the picture below.... HERE IS WHERE I MADE A BIG MISTAKE.... I SHOULD have instantly called support and opened a Sev 2 ticket. They could have done a shared shell and gotten the correct Fishworks engineer to look at the files and the code, determine what file was messed up, and fix it. The system was up and working just fine, it was just on an older code version, not really a huge problem at all. Instead, I went ahead and clicked the "Rollback" icon, thinking that the system would roll back to the 4.2 code.   Ouch... What happened was that the system said, "Fine, I will delete the 4.0 code and boot to your 4.2 code"... Which was stupid on my part because something was wrong with the 4.2 code file here and the 4.0 was just fine.  So now the system could not boot at all, and the 4.0 code was completely missing from the system, and even a high-level Fishworks engineer could not help us. I had messed it up good. We could only get to the ILOM, and I had to re-image the system from scratch using a hard-to-get-and-use FishStick USB drive. These are tightly controlled and difficult to get, almost always handcuffed to an engineer who will drive out to re-image a system. This took another day of my client's time.  So....
If you see a "previous version" of your system code which is actually a version higher than the current version... DO NOT ROLL IT BACK.... It did not upgrade for a very good reason. In my case, after the system was re-imaged to a code level just 3 back, we once again tried the same 4.2 code update and it worked perfectly the first time and is now great and stable.  Lesson learned.  By the way, our buddy Ryan Matthews wanted to point out the best practice and supported way of performing an upgrade of an active/active ZFSSA, where both controllers are doing some of the work. These steps would not have helped me for the above issue, but it's important to follow the correct procedure when doing an upgrade.

1) Upload software to both controllers and wait for it to unpack
2) On controller "A" navigate to configuration/cluster and click "takeover"
3) Wait for controller "B" to finish restarting, then log in to it, navigate to maintenance/system, and roll forward to the new software.
4) Wait for controller "B" to apply the update and finish rebooting
5) Log in to controller "B", navigate to configuration/cluster and click "takeover"
6) Wait for controller "A" to finish restarting, then log in to it, navigate to maintenance/system, and roll forward to the new software.
7) Wait for controller "A" to apply the update and finish rebooting
8) Log in to controller "B", navigate to configuration/cluster and click "failback"

    Read the article

  • Time to Get Started with Oracle Data Integrator 12c!

    - by Sandrine Riley
It is time to get started with Oracle Data Integrator 12c! We would like to highlight for you a great place to begin your journey with ODI. Here you will find the Getting Started section for ODI, which provides a few options.

Step 1 – Start by downloading the Getting Started (PDF). The document provides general background information and detailed examples to help you learn how to use Oracle Data Integrator. This document guides you through a first project with Oracle Data Integrator 12c, and the instructions in this document are required for using the Getting Started Demonstration Environment. If you would like to run the Getting Started Demonstration Environment on your system, go to Step 2 A. If you would like to run the Getting Started Demonstration Environment installed and pre-configured in a Virtual Image, go to Step 2 B.

Step 2 – A - If you would like to run the Getting Started Demo Environment on your system, download the Demo Environment for Getting Started (ZIP) archive containing ODI metadata, configuration scripts and sample data. B - Alternatively, a pre-configured virtual machine is available with the Getting Started installation and configuration. The Virtual Machine platform uses Oracle VirtualBox technology, and use of this configuration would necessitate the download/installation of VirtualBox and the Getting Started virtual machine. If you prefer this route and would rather run the Getting Started Demonstration Environment installed and preconfigured in a Virtual Machine, please download the ODI 12c Getting Started Virtual Machine.

Step 3 – Continuing with the Getting Started Demonstration Environment and the Getting Started Guide, you will now be able to run through a first project with Oracle Data Integrator.

Now that you have successfully done the exercises above, you may want to play on your own and develop with Oracle Data Integrator in your own environment... Here you can download a full version of Oracle Data Integrator 12c. Enjoy, and happy developing! Learn more about Oracle Data Integrator, including FAQ, the Oracle by Example Series, Sample Code, and White Papers.

    Read the article

  • Database Partitioning and Multiple Data Source Considerations

    - by Jeffrey McDaniel
With the release of P6 Reporting Database 3.0, partitioning was added as a feature to help with performance and data management. Careful investigation of requirements should be conducted prior to installation to help improve overall performance throughout the lifecycle of the data warehouse and to prevent future maintenance that would result in data loss. Before installation, try to determine how many data sources and partitions will be required, along with the ranges. In P6 Reporting Database 3.0 any adjustments outside of the defaults must be made in the scripts, and changes will require new ETL runs for each data source.

Considerations:

1. Standard Edition or Enterprise Edition of Oracle Database. If you aren't using Oracle Database Enterprise Edition, the partitioning feature is not available. Multiple data sources are only supported on Enterprise Edition of Oracle Database.

2. Number of data source IDs for partitioning during configuration. This setting specifies how many partitions will be allocated for tables containing data source information. It requires some evaluation prior to installation, as there are repercussions if you don't estimate correctly. For example, suppose you configured the software for only 2 data sources and the partition setting was set to 2, but along came a 3rd data source. The necessary steps to accommodate this change are as follows:

a) By default, 3 partitions are configured in the Reporting Database scripts. Edit the create_star_tables_part.sql script located in <installation directory>\star\scripts and search for partition. You’ll see P1, P2, P3. Add additional partitions and sub-partitions for P4 and so on. These will appear in several areas. (See the P6 Reporting Database 3.0 Installation and Configuration guide for more information on this and how to adjust partition ranges.)

b) Run starETL -r. This will recreate each table with the new partition key. The effect of this step is that all table data will be lost except for history related tables.

c) Run starETL for each of the 3 data sources (with the data source #, e.g. starETL.bat "-s2", as defined in the P6 Reporting Database 3.0 Installation and Configuration guide).

The best strategy for this setting is to overestimate based on possible growth. If during implementation it is deemed that there are at least 2 data sources with possibility for growth, it is a better idea to set this setting to 4 or 5, allowing room for the future and preventing a ‘start over’ scenario.

3. The Number of Partitions and the Number of Months per Partition are not specific to multi-data source. These settings control the sub-partitioning of larger tables with regard to time-related data, and they are dataset specific for optimization. The number of months per partition is self explanatory: the smaller the partition, the better the query performance, so if the dataset has an extremely large number of spread/history records, a lower number of months is optimal. Working in accordance with this setting is the number of partitions, which determines how many "buckets" will be created for the number-of-months setting.

For example, if you kept the default of 3 for the number of partitions and selected 2 months for each partition, you would end up with:

- 1st partition, 2 months
- 2nd partition, 2 months
- 3rd partition, all the remaining records

Therefore, with regard to this setting, it is important to analyze your source database spread ranges and history settings when determining the proper number of months per partition and number of partitions to optimize performance. Also be aware that the DBA will need to monitor when these partition ranges fill up and when additional partitions need to be added. If you get to the final range partition and there are no additional range partitions, all data will be included in the last partition.
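To make step (a) above more concrete, here is a rough, hypothetical illustration of the shape of a star table partitioned by data source with a 4th partition added. The table name, columns, partitioning scheme, and partition names are invented for illustration and do not come from the actual create_star_tables_part.sql shipped with the product:

-- Illustrative only: general shape of adding a P4 data source partition.
CREATE TABLE w_project_d (
  datasource_id  NUMBER        NOT NULL,
  project_id     NUMBER        NOT NULL,
  project_name   VARCHAR2(255)
)
PARTITION BY LIST (datasource_id) (
  PARTITION p1 VALUES (1),
  PARTITION p2 VALUES (2),
  PARTITION p3 VALUES (3),
  PARTITION p4 VALUES (4)  -- additional data source added here
);

After adjusting the script along these lines, the tables still have to be recreated and reloaded, which is why step (b) (starETL -r) and step (c) (one starETL run per data source) follow.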

    Read the article

  • Using an alternate JSON Serializer in ASP.NET Web API

    - by Rick Strahl
The new ASP.NET Web API that Microsoft released alongside MVC 4.0 Beta last week is a great framework for building REST and AJAX APIs. I've been working with it for quite a while now and I really like the way it works and the complete set of features it provides 'in the box'. It's about time that Microsoft gets a decent API for building generic HTTP endpoints into the framework.

DataContractJsonSerializer sucks

As nice as Web API's overall design is, one thing still sucks: the built-in JSON serialization uses the DataContractJsonSerializer, which is just too limiting for many scenarios. The biggest issues I have with it are:

- No support for untyped values (object, dynamic, anonymous types)
- MS AJAX style date formatting
- Ugly serialization formats for types like dictionaries

To me the most serious issue is dealing with serialization of untyped objects. I have a number of applications with AJAX front ends that dynamically reformat data from business objects to fit a specific message format that certain UI components require. The most common scenario I have there are IEnumerable query results from a database with fields from the result set rearranged to fit the sometimes unconventional formats required for the UI components (like jqGrid for example). Creating custom types to fit these messages seems like overkill, and projections using LINQ make this much easier to code up. Alas, DataContractJsonSerializer doesn't support it. Neither does DataContractSerializer for XML output, for that matter. What this means is that you can't do stuff like this in Web API out of the box:

public object GetAnonymousType()
{
    return new { name = "Rick", company = "West Wind", entered = DateTime.Now };
}

Basically, anything that doesn't have an explicit type DataContractJsonSerializer will not let you return. FWIW, the same is true for XmlSerializer, which also doesn't work with non-typed values for serialization. The example above is obviously contrived with a hardcoded object graph, but it's not uncommon to get dynamic values returned from queries that have anonymous types for their result projections. Apparently there's a good possibility that Microsoft will ship Json.NET as part of the Web API RTM release. Scott Hanselman confirmed this as a footnote in his JSON Dates post a few days ago. I've heard several other people from Microsoft confirm that Json.NET will be included and be the default JSON serializer, but no details yet on what capacity it will show up in. Let's hope it ends up as the default in the box. Meanwhile this post will show you how you can use it today with the beta and get JSON that matches what you should see in the RTM version.

What about JsonValue?

To be fair, Web API DOES include the new JsonValue/JsonObject/JsonArray types that allow you to address some of these scenarios. JsonValue is a new type in the System.Json assembly that can be used to build up an object graph based on a dictionary. It's actually a really cool implementation of a dynamic type that allows you to create an object graph and spit it out to JSON without having to create a .NET type first. JsonValue can also receive a JSON string and parse it without having to actually load it into a .NET type (which is something that's been missing in the core framework). This is really useful if you get a JSON result from an arbitrary service and you don't want to explicitly create a mapping type for the data returned.
For serialization you can create an object structure on the fly and pass it back as part of an Web API action method like this:public JsonValue GetJsonValue() { dynamic json = new JsonObject(); json.name = "Rick"; json.company = "West Wind"; json.entered = DateTime.Now; dynamic address = new JsonObject(); address.street = "32 Kaiea"; address.zip = "96779"; json.address = address; dynamic phones = new JsonArray(); json.phoneNumbers = phones; dynamic phone = new JsonObject(); phone.type = "Home"; phone.number = "808 123-1233"; phones.Add(phone); phone = new JsonObject(); phone.type = "Home"; phone.number = "808 123-1233"; phones.Add(phone); //var jsonString = json.ToString(); return json; } which produces the following output (formatted here for easier reading):{ name: "rick", company: "West Wind", entered: "2012-03-08T15:33:19.673-10:00", address: { street: "32 Kaiea", zip: "96779" }, phoneNumbers: [ { type: "Home", number: "808 123-1233" }, { type: "Mobile", number: "808 123-1234" }] } If you need to build a simple JSON type on the fly these types work great. But if you have an existing type - or worse a query result/list that's already formatted JsonValue et al. become a pain to work with. As far as I can see there's no way to just throw an object instance at JsonValue and have it convert into JsonValue dictionary. It's a manual process. Using alternate Serializers in Web API So, currently the default serializer in WebAPI is DataContractJsonSeriaizer and I don't like it. You may not either, but luckily you can swap the serializer fairly easily. If you'd rather use the JavaScriptSerializer built into System.Web.Extensions or Json.NET today, it's not too difficult to create a custom MediaTypeFormatter that uses these serializers and can replace or partially replace the native serializer. 
Here's a MediaTypeFormatter implementation using the ASP.NET JavaScriptSerializer:using System; using System.Net.Http.Formatting; using System.Threading.Tasks; using System.Web.Script.Serialization; using System.Json; using System.IO; namespace Westwind.Web.WebApi { public class JavaScriptSerializerFormatter : MediaTypeFormatter { public JavaScriptSerializerFormatter() { SupportedMediaTypes.Add(new System.Net.Http.Headers.MediaTypeHeaderValue("application/json")); } protected override bool CanWriteType(Type type) { // don't serialize JsonValue structure use default for that if (type == typeof(JsonValue) || type == typeof(JsonObject) || type == typeof(JsonArray)) return false; return true; } protected override bool CanReadType(Type type) { if (type == typeof(IKeyValueModel)) return false; return true; } protected override System.Threading.Tasks.Task<object> OnReadFromStreamAsync(Type type, System.IO.Stream stream, System.Net.Http.Headers.HttpContentHeaders contentHeaders, FormatterContext formatterContext) { var task = Task<object>.Factory.StartNew(() => { var ser = new JavaScriptSerializer(); string json; using (var sr = new StreamReader(stream)) { json = sr.ReadToEnd(); sr.Close(); } object val = ser.Deserialize(json, type); return val; }); return task; } protected override System.Threading.Tasks.Task OnWriteToStreamAsync(Type type, object value, System.IO.Stream stream, System.Net.Http.Headers.HttpContentHeaders contentHeaders, FormatterContext formatterContext, System.Net.TransportContext transportContext) { var task = Task.Factory.StartNew(() => { var ser = new JavaScriptSerializer(); var json = ser.Serialize(value); byte[] buf = System.Text.Encoding.Default.GetBytes(json); stream.Write(buf, 0, buf.Length); stream.Flush(); }); return task; } } } Formatter implementation is pretty simple: You override 4 methods to tell which types you can handle and then handle the input or output streams to create/parse the JSON data. Note that when creating output you want to take care to still allow JsonValue/JsonObject/JsonArray types to be handled by the default serializer so those objects serialize properly - if you let either JavaScriptSerializer or JSON.NET handle them they'd try to render the dictionaries which is very undesirable. 
If you'd rather use Json.NET here's the JSON.NET version of the formatter:// this code requires a reference to JSON.NET in your project #if true using System; using System.Net.Http.Formatting; using System.Threading.Tasks; using System.Web.Script.Serialization; using System.Json; using Newtonsoft.Json; using System.IO; using Newtonsoft.Json.Converters; namespace Westwind.Web.WebApi { public class JsonNetFormatter : MediaTypeFormatter { public JsonNetFormatter() { SupportedMediaTypes.Add(new System.Net.Http.Headers.MediaTypeHeaderValue("application/json")); } protected override bool CanWriteType(Type type) { // don't serialize JsonValue structure use default for that if (type == typeof(JsonValue) || type == typeof(JsonObject) || type == typeof(JsonArray)) return false; return true; } protected override bool CanReadType(Type type) { if (type == typeof(IKeyValueModel)) return false; return true; } protected override System.Threading.Tasks.Task<object> OnReadFromStreamAsync(Type type, System.IO.Stream stream, System.Net.Http.Headers.HttpContentHeaders contentHeaders, FormatterContext formatterContext) { var task = Task<object>.Factory.StartNew(() => { var settings = new JsonSerializerSettings() { NullValueHandling = NullValueHandling.Ignore, }; var sr = new StreamReader(stream); var jreader = new JsonTextReader(sr); var ser = new JsonSerializer(); ser.Converters.Add(new IsoDateTimeConverter()); object val = ser.Deserialize(jreader, type); return val; }); return task; } protected override System.Threading.Tasks.Task OnWriteToStreamAsync(Type type, object value, System.IO.Stream stream, System.Net.Http.Headers.HttpContentHeaders contentHeaders, FormatterContext formatterContext, System.Net.TransportContext transportContext) { var task = Task.Factory.StartNew(() => { var settings = new JsonSerializerSettings() { NullValueHandling = NullValueHandling.Ignore, }; string json = JsonConvert.SerializeObject(value, Formatting.Indented, new JsonConverter[1] { new IsoDateTimeConverter() }); byte[] buf = System.Text.Encoding.Default.GetBytes(json); stream.Write(buf, 0, buf.Length); stream.Flush(); }); return task; } } } #endif   One advantage of the Json.NET serializer is that you can specify a few options on how things are formatted and handled. You get null value handling and you can plug in the IsoDateTimeConverter, which is nice to produce proper ISO dates that I would expect any Json serializer to output these days. Hooking up the Formatters Once you've created the custom formatters you need to enable them for your Web API application. To do this use the GlobalConfiguration.Configuration object and add the formatter to the Formatters collection. Here's what this looks like hooked up from Application_Start in a Web project:protected void Application_Start(object sender, EventArgs e) { // Action based routing (used for RPC calls) RouteTable.Routes.MapHttpRoute( name: "StockApi", routeTemplate: "stocks/{action}/{symbol}", defaults: new { symbol = RouteParameter.Optional, controller = "StockApi" } ); // WebApi Configuration to hook up formatters and message handlers // optional RegisterApis(GlobalConfiguration.Configuration); } public static void RegisterApis(HttpConfiguration config) { // Add JavaScriptSerializer formatter instead - add at top to make default //config.Formatters.Insert(0, new JavaScriptSerializerFormatter()); // Add Json.net formatter - add at the top so it fires first! 
// This leaves the old one in place so JsonValue/JsonObject/JsonArray still are handled config.Formatters.Insert(0, new JsonNetFormatter()); } One thing to remember here is the GlobalConfiguration object, which is Web API's static configuration instance. I think this thing is seriously misnamed given that GlobalConfiguration could stand for anything and so is hard to discover if you don't know what you're looking for. How about WebApiConfiguration or something more descriptive? Anyway, once you know what it is you can use the Formatters collection to insert your custom formatter. Note that I insert my formatter at the top of the list so it takes precedence over the default formatter. I also am not removing the old formatter because I still want JsonValue/JsonObject/JsonArray to be handled by the default serialization mechanism. Since they process in sequence and I exclude processing for these types, JsonValue et al. still get properly serialized/deserialized. Summary Currently DataContractJsonSerializer in Web API is a pain, but at least we have the ability with relatively limited effort to replace the MediaTypeFormatter and plug in our own JSON serializer. This is useful for many scenarios - if you have existing client applications that used MVC JsonResult or ASP.NET AJAX results from ASMX AJAX services you can plug in the JavaScript serializer and get exactly the same serializer you used in the past, so your results will be the same and won't potentially break clients. JSON serializers do vary a bit in how they serialize some of the more complex types (like Dictionaries and dates for example) and so if you're migrating it might be helpful to ensure your client code doesn't break when you switch to ASP.NET Web API. Going forward it looks like Microsoft is planning on plugging Json.Net into Web API and making that the default. I think that's an awesome choice since Json.net has been around forever, is fast and easy to use and provides a ton of functionality as part of this great library. I just wish Microsoft would have figured this out sooner instead of integrating with it now at the last minute, especially given that Json.Net has a similar set of lower level JSON objects (JsonValue/JsonObject etc.) which now will end up being duplicated by the native System.Json stuff. It's not like we don't already have enough confusion regarding which JSON serializer to use (JavaScriptSerializer, DataContractJsonSerializer, JsonValue/JsonObject/JsonArray and now Json.net). For years I've been using my own JSON serializer because the built in choices are both limited. However, with an official endorsement of Json.Net I'm happily moving on to use that in my applications. Let's see and hope Microsoft gets this right before ASP.NET Web API goes gold. © Rick Strahl, West Wind Technologies, 2005-2012. Posted in Web Api, AJAX, ASP.NET
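To round off the scenario the post opens with, here is a short sketch (an illustrative controller, not taken from the article; names and data are invented) of the kind of untyped LINQ projection that DataContractJsonSerializer rejects but that serializes fine once a formatter such as the JsonNetFormatter above is registered:

    using System;
    using System.Collections.Generic;
    using System.Linq;
    using System.Web.Http;

    public class Customer
    {
        public string Name { get; set; }
        public string Company { get; set; }
        public DateTime Entered { get; set; }
    }

    public class CustomersController : ApiController
    {
        // Stand-in for a database query result
        private static readonly List<Customer> Customers = new List<Customer>
        {
            new Customer { Name = "Rick", Company = "West Wind", Entered = DateTime.Now }
        };

        // Anonymous-type projection: fails with the stock DataContractJsonSerializer,
        // works once a Json.NET (or JavaScriptSerializer) based formatter handles the response.
        public IEnumerable<object> GetCustomerSummaries()
        {
            return Customers.Select(c => new
            {
                name = c.Name,
                company = c.Company,
                entered = c.Entered
            });
        }
    }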

    Read the article

  • Using an alternate search platform in Commerce Server 2009

    - by Lewis Benge
Although Microsoft Commerce Server 2009's architecture is built upon Microsoft SQL Server, and has the full power of the SQL Full Text Indexing Search Platform, there are times however when you may require a richer or alternate search platform. One of these scenarios is when you want to implement a faceted (refinement) search into your site, which provides dynamic refinements based on the search results dataset. Faceted search is becoming popular in most online retail environments as a way of providing an enhanced user experience when browsing a larger catalogue. This is powerful for two reasons. Firstly, with a traditional search it is down to the user to think of a search term suitable for the product they are trying to find. This typically will not return similar products or help in any way to refine a larger dataset. Faceted searches on the other hand provide a comprehensive list of product properties, grouped together by similarity, to help the user narrow down the results returned; as the user progressively restricts the search criteria by selecting additional criteria to search against, these facets need to continually refresh. The whole experience allows users to explore alternate brands, price-ranges, or find products they hadn't initially thought of or weren't looking for, in a bid to enhance cross-sell in the retail environment. The second advantage of this type of search from a business perspective is to harvest the search results to start to profile your user. Even though anonymous users may routinely visit your site, and will not necessarily register or complete a transaction to build up marketing data-profiling, you can still achieve the same result by recording the search facets used within the search sequence. Below is a faceted search scenario generated from eBay using the search term "server". By creating a search profile of clicking through Computer & Networking -> Servers -> Dell -> New and recording this information against my user profile you can start to predict with a lot more certainty what types of products I am interested in. This will allow you to apply shopping-cart analysis against your search data and provide great cross-sale or advertising opportunities, or personalise the user experience based on your prediction of what the user may be interested in. This type of search is extremely beneficial in e-Commerce environments but achieving it out of the box with Commerce Server and SQL Full Text indexing can be challenging. In many deployments it is often easier to use an alternate search platform such as Microsoft's FAST, Apache SOLR, or Endeca; however you still want these products to integrate natively into Commerce Server to ensure that up-to-date inventory information is presented, profile information is generated, and you provide a consistent API. To do so we make the most of the Commerce Server extensibility points called operation sequence components. In this example I will be talking about Apache Solr hosted on Apache Tomcat; in this specific example I have used the SolrNet C# library to interface to the Java platform. Also I am not going to talk about Solr configuration or indexing – but in a production environment this would typically happen by using PowerShell to call the Commerce Server management webservice to export your catalog as XML, apply an XSLT transform to the file to make it conform to SOLR, and use a simple HTTP POST to send it to the search engine for indexing. 
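The indexing push described at the end of the paragraph above can be as simple as an HTTP POST of the transformed catalogue XML to Solr's update handler. Here is a rough sketch of that step (the Solr URL, core layout and file path are illustrative assumptions, not values from the article):

    using System;
    using System.IO;
    using System.Net;

    class SolrCatalogIndexer
    {
        static void Main()
        {
            // Hypothetical path to the catalog export after the XSLT transform into Solr's <add><doc> format
            string transformedCatalogPath = @"C:\exports\catalog-solr.xml";

            // Hypothetical Solr instance; commit=true makes the documents searchable immediately
            string solrUpdateUrl = "http://localhost:8983/solr/update?commit=true";

            string payload = File.ReadAllText(transformedCatalogPath);

            using (var client = new WebClient())
            {
                client.Headers[HttpRequestHeader.ContentType] = "text/xml; charset=utf-8";
                string response = client.UploadString(solrUpdateUrl, "POST", payload);
                Console.WriteLine(response);
            }
        }
    }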
Essentially a sequence component is a step in a serial workflow used to call a data repository (which in most cases is usually the Commerce Server pipelines or databases) and map to and from a Commerce Entity object whilst enforcing any business rules. So the first step in the process is to add a new class library to your existing Commerce Server site. You will need to use a new library as Sequence Components will need to be strongly named to be deployed. Once you are inside of your new project, add a new class file and add a reference to the Microsoft.Commerce.Providers, Microsoft.Commerce.Contracts and the Microsoft.Commerce.Broker assemblies. Now make your new class derive from the base object Microsoft.Commerce.Providers.Components.OperationSequenceComponent and override the ExecuteQuery method. Your screen will then look something similar to this: As all we are doing in this component is conducting a search we are only interested in the ExecuteQuery method. This method accepts three arguments, queryOperation, operationCache, and response. The queryOperation will be the object in which we receive our search parameters, the cache allows access to the Commerce Server cache allowing us to store regularly accessed information, and the response object is the object upon which we will return the result of our search. Inside this method is simply where we are going to inject our logic for our third party search platform. As I am not going to explain the inner-workings of actually making a SOLR call, I'll simply provide the sample code here. I would highly recommend however looking at the SolrNet wiki as they have some great explanations of how the API works. What you will find however is that there are some further extensions required when attempting to integrate a custom search provider. Firstly, out of the box the CommerceQueryOperation you will receive into the method when conducting a search against a catalog is specifically geared towards a SQL Full Text Search, with properties such as a Where clause. To make the operation you receive more relevant you will need to create another class, this time derived from Microsoft.Commerce.Contract.Messages.CommerceSearchCriteria, and within this you need to detail the properties you will require to submit as parameters to the SOLR search API. My example looks like this: [DataContract(Namespace = "http://schemas.microsoft.com/microsoft-multi-channel-commerce-foundation/types/2008/03")] public class CommerceCatalogSolrSearch : CommerceSearchCriteria { private Dictionary<string, string> _facetQueries;   public CommerceCatalogSolrSearch() { _facetQueries = new Dictionary<String, String>(); }   public Dictionary<String, String> FacetQueries { get { return _facetQueries; } set { _facetQueries = value; } }   public String SearchPhrase { get; set; } public int PageIndex { get; set; } public int PageSize { get; set; } public IEnumerable<String> Facets { get; set; }   public string Sort { get; set; }   public new int FirstItemIndex { get { return (PageIndex-1)*PageSize; } }   public int LastItemIndex { get { return FirstItemIndex + PageSize; } } }  To allow you to construct a CommerceQueryOperation call within the API you will also need to construct another class, this time derived from Microsoft.Commerce.Common.MessageBuilders.CommerceSearchCriteriaBuilder, which is simply used to construct an instance of the CommerceSearchCriteria you have just created and expose the properties you want set. 
My message builder looks like this: public class CommerceCatalogSolrSearchBuilder : CommerceSearchCriteriaBuilder { private CommerceCatalogSolrSearch _solrSearch;   public CommerceCatalogSolrSearchBuilder() { _solrSearch = new CommerceCatalogSolrSearch(); }   public String SearchPhrase { get { return _solrSearch.SearchPhrase; } set { _solrSearch.SearchPhrase = value; } }   public int PageIndex { get { return _solrSearch.PageIndex; } set { _solrSearch.PageIndex = value; } }   public int PageSize { get { return _solrSearch.PageSize; } set { _solrSearch.PageSize = value; } }   public Dictionary<String,String> FacetQueries { get { return _solrSearch.FacetQueries; } set { _solrSearch.FacetQueries = value; } }   public String[] Facets { get { return _solrSearch.Facets.ToArray(); } set { _solrSearch.Facets = value; } } public override CommerceSearchCriteria ToSearchCriteria() { return _solrSearch; } }  Once you have these two classes in place you can safely cast the CommerceOperation you receive as an argument of the overridden ExecuteQuery method in the SequenceComponent to the CommerceCatalogSolrSearch operation you have just created, e.g. public CommerceCatalogSolrSearch TryGetSearchCriteria(CommerceOperation operation) { var searchCriteria = operation as CommerceQueryOperation; if (searchCriteria == null) throw new Exception("No search criteria present");   var local = (CommerceCatalogSolrSearch) searchCriteria.SearchCriteria; if (local == null) throw new Exception("Unexpected Search Criteria in Operation");   return local; }  Now that you have all of your search parameters present, you can go off and call the external search platform API. You will of course get proprietary objects returned, so the next step in the process is to convert the results being returned back into CommerceEntities. You do this via another extensibility point within the Commerce Server API called translators. A translator is another separate class, this time implementing the interface Microsoft.Commerce.Providers.Translators.IToCommerceEntityTranslator. As you can imagine, this interface is specific to the conversion of an object TO a CommerceEntity; you will need to implement a separate interface if you also need to go in the opposite direction. If you implement the required method for the interface you will get a single Translate method which has a source object, destination CommerceEntity, and a collection of properties as arguments. For simplicity's sake in this example I have hard-coded the mappings, however best practice would dictate you map the objects using your metadatadefintions.xml file. 
Once complete your translator would look something like the following: public class SolrEntityTranslator : IToCommerceEntityTranslator { #region IToCommerceEntityTranslator Members   public void Translate(object source, CommerceEntity destinationCommerceEntity, CommercePropertyCollection propertiesToReturn) { if (source.GetType().Equals(typeof (SearchProduct))) { var searchResult = (SearchProduct) source;   destinationCommerceEntity.Id = searchResult.ProductId; destinationCommerceEntity.SetPropertyValue("DisplayName", searchResult.Title); destinationCommerceEntity.ModelName = "Product";   } }  Once you have a translator in place you can then safely map the results of your search platform into Commerce Entities and attach them on to the CommerceResponse object in a fashion similar to this: foreach (SearchProduct result in matchingProducts) { var destinationEntity = new CommerceEntity(_returnModelName);   Translator.ToCommerceEntity(result, destinationEntity, _queryOperation.Model.Properties); response.CommerceEntities.Add(destinationEntity); }  In SOLR I actually have two objects being returned – a product, and a collection of facets so I have an additional translator for facet (which maps to a custom facet CommerceEntity) and my facet response from SOLR is passed into the Translator helper class seperatley. When all of this is pieced together you have sucessfully completed the extensiblity point coding. You would have created a new OperationSequanceComponent, a custom SearchCritiera object and message builder class, and translators to convert the objects into Commerce Entities. Now you simply need to configure them, and can start calling them in your code. Make sure you sign you assembly, compile it and identiy its signature. Next you need to put this a reference of your new assembly into the Channel.Config configuration file replacing that of the existing SQL Full Text component: You will also need to add your translators to the Translators node of your Channel.Config too: Lastly add any custom CommerceEntities you have developed to your MetaDataDefintions.xml file. Your configuration is now complete, and you should now be able to happily make a call to the Commerce Foundation API, which will act as a proxy to your third party search platform and return back CommerceEntities of your search results. If you require data to be enriched, or logged, or any other logic applied then simply add further sequence components into the OperationSequence (obviously keeping the search response first) to the node of your Channel.Config file. Now to call your code you simply request it as per any other CommerceQuery operation, but taking into account you may be receiving multiple types of CommerceEntity returned: public KeyValuePair<FacetCollection ,List<Product>> DoFacetedProductQuerySearch(string searchPhrase, string orderKey, string sortOrder, int recordIndex, int recordsPerPage, Dictionary<string, string> facetQueries, out int totalItemCount) { var products = new List<Product>(); var query = new CommerceQuery<CatalogEntity, CommerceCatalogSolrSearchBuilder>();   query.SearchCriteria.PageIndex = recordIndex; query.SearchCriteria.PageSize = recordsPerPage; query.SearchCriteria.SearchPhrase = searchPhrase; query.SearchCriteria.FacetQueries = facetQueries;     totalItemCount = 0; CommerceResponse response = SiteContext.ProcessRequest(query.ToRequest()); var queryResponse = response.OperationResponses[0] as CommerceQueryOperationResponse;   // No results. 
Return the empty list if (queryResponse != null && queryResponse.CommerceEntities.Count == 0) return new KeyValuePair<FacetCollection, List<Product>>();   totalItemCount = (int)queryResponse.TotalItemCount;   // Prepare a multi-operation to retrieve the product variants var multiOperation = new CommerceMultiOperation();     //Add products to results foreach (Product product in queryResponse.CommerceEntities.Where(x => x.ModelName == "Product")) { var productQuery = new CommerceQuery<Product>(Product.ModelNameDefinition); productQuery.SearchCriteria.Model.Id = product.Id; productQuery.SearchCriteria.Model.CatalogId = product.CatalogId;   var variantQuery = new CommerceQueryRelatedItem<Variant>(Product.RelationshipName.Variants);   productQuery.RelatedOperations.Add(variantQuery);   multiOperation.Add(productQuery); }   CommerceResponse variantsResponse = SiteContext.ProcessRequest(multiOperation.ToRequest()); foreach (CommerceQueryOperationResponse queryOpResponse in variantsResponse.OperationResponses) { if (queryOpResponse.CommerceEntities.Count() > 0) products.Add(queryOpResponse.CommerceEntities[0]); }   //Get facet collection FacetCollection facetCollection = queryResponse.CommerceEntities.Where(x => x.ModelName == "FacetCollection").FirstOrDefault();     return new KeyValuePair<FacetCollection, List<Product>>(facetCollection, products); }    ..And that is it – simply a few classes and some configuration will allow you to extend the Commerce Server query operations to call a third party search platform, whilst still maintaing a unifed API in the remainder of your code. This logic stands for any extensibility within CommerceServer, which requires excution in a serial fashioon such as call to LOB systems or web service to validate or enrich data. Feel free to use this example on other applications, and if you have any questions please feel free to e-mail and I'll help out where I can!
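One footnote to the walkthrough above: the actual SOLR call inside the sequence component is deliberately not shown in the article. Assuming SolrNet is used as described, the query could look roughly like the sketch below (field names, the Solr URL resolution and the exact QueryOptions shape are assumptions and vary between SolrNet versions):

    using System.Collections.Generic;
    using Microsoft.Practices.ServiceLocation;
    using SolrNet;
    using SolrNet.Commands.Parameters;

    public class SolrQueryRunner
    {
        // Runs a paged, faceted query using the criteria gathered from the
        // CommerceCatalogSolrSearch object shown earlier in the article.
        public SolrQueryResults<SearchProduct> Run(CommerceCatalogSolrSearch criteria)
        {
            var solr = ServiceLocator.Current.GetInstance<ISolrOperations<SearchProduct>>();

            var facetQueries = new List<ISolrFacetQuery>();
            foreach (var facetField in criteria.Facets)
            {
                facetQueries.Add(new SolrFacetFieldQuery(facetField));   // e.g. "brand" or "price_range"
            }

            return solr.Query(new SolrQuery(criteria.SearchPhrase), new QueryOptions
            {
                Start = criteria.FirstItemIndex,
                Rows = criteria.PageSize,
                Facet = new FacetParameters { Queries = facetQueries }
            });
        }
    }

The SolrQueryResults object returned by Query exposes the matching documents plus the facet counts, which can then be fed through the translators shown above to produce CommerceEntities.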

    Read the article

  • Unable to connect to Samba printer

    - by user127236
    I have a headless Ubuntu 12.04 server for files and printers. It shares files via Samba just fine. However, the HP PSC-750xi connected to the server via USB is not accessible from my Ubuntu 12.04 laptop. I can browse for it in the Printing control panel, but any attempt to authenticate my ID to the printer with my user credentials results in the error "This print share is not accessible". I have included the Samba smb.conf file below. Any help appreciated. Thanks... JGB # # Sample configuration file for the Samba suite for Debian GNU/Linux. # # # This is the main Samba configuration file. You should read the # smb.conf(5) manual page in order to understand the options listed # here. Samba has a huge number of configurable options most of which # are not shown in this example # # Some options that are often worth tuning have been included as # commented-out examples in this file. # - When such options are commented with ";", the proposed setting # differs from the default Samba behaviour # - When commented with "#", the proposed setting is the default # behaviour of Samba but the option is considered important # enough to be mentioned here # # NOTE: Whenever you modify this file you should run the command # "testparm" to check that you have not made any basic syntactic # errors. # A well-established practice is to name the original file # "smb.conf.master" and create the "real" config file with # testparm -s smb.conf.master >smb.conf # This minimizes the size of the really used smb.conf file # which, according to the Samba Team, impacts performance # However, use this with caution if your smb.conf file contains nested # "include" statements. See Debian bug #483187 for a case # where using a master file is not a good idea. # #======================= Global Settings ======================= [global] log file = /var/log/samba/log.%m passwd chat = *Enter\snew\s*\spassword:* %n\n *Retype\snew\s*\spassword:* %n\n *password\supdated\ssuccessfully* . obey pam restrictions = yes map to guest = bad user encrypt passwords = true passwd program = /usr/bin/passwd %u passdb backend = tdbsam dns proxy = no writeable = yes server string = %h server (Samba, Ubuntu) unix password sync = yes workgroup = WORKGROUP syslog = 0 panic action = /usr/share/samba/panic-action %d usershare allow guests = yes max log size = 1000 pam password change = yes ## Browsing/Identification ### # Change this to the workgroup/NT-domain name your Samba server will part of # server string is the equivalent of the NT Description field # Windows Internet Name Serving Support Section: # WINS Support - Tells the NMBD component of Samba to enable its WINS Server # wins support = no # WINS Server - Tells the NMBD components of Samba to be a WINS Client # Note: Samba can be either a WINS Server, or a WINS Client, but NOT both ; wins server = w.x.y.z # This will prevent nmbd to search for NetBIOS names through DNS. # What naming service and in what order should we use to resolve host names # to IP addresses ; name resolve order = lmhosts host wins bcast #### Networking #### # The specific set of interfaces / networks to bind to # This can be either the interface name or an IP address/netmask; # interface names are normally preferred ; interfaces = 127.0.0.0/8 eth0 # Only bind to the named interfaces and/or networks; you must use the # 'interfaces' option above to use this. # It is recommended that you enable this feature if your Samba machine is # not protected by a firewall or is a firewall itself. 
However, this # option cannot handle dynamic or non-broadcast interfaces correctly. ; bind interfaces only = yes #### Debugging/Accounting #### # This tells Samba to use a separate log file for each machine # that connects # Cap the size of the individual log files (in KiB). # If you want Samba to only log through syslog then set the following # parameter to 'yes'. # syslog only = no # We want Samba to log a minimum amount of information to syslog. Everything # should go to /var/log/samba/log.{smbd,nmbd} instead. If you want to log # through syslog you should set the following parameter to something higher. # Do something sensible when Samba crashes: mail the admin a backtrace ####### Authentication ####### # "security = user" is always a good idea. This will require a Unix account # in this server for every user accessing the server. See # /usr/share/doc/samba-doc/htmldocs/Samba3-HOWTO/ServerType.html # in the samba-doc package for details. # security = user # You may wish to use password encryption. See the section on # 'encrypt passwords' in the smb.conf(5) manpage before enabling. # If you are using encrypted passwords, Samba will need to know what # password database type you are using. # This boolean parameter controls whether Samba attempts to sync the Unix # password with the SMB password when the encrypted SMB password in the # passdb is changed. # For Unix password sync to work on a Debian GNU/Linux system, the following # parameters must be set (thanks to Ian Kahan <<[email protected]> for # sending the correct chat script for the passwd program in Debian Sarge). # This boolean controls whether PAM will be used for password changes # when requested by an SMB client instead of the program listed in # 'passwd program'. The default is 'no'. # This option controls how unsuccessful authentication attempts are mapped # to anonymous connections ########## Domains ########### # Is this machine able to authenticate users. Both PDC and BDC # must have this setting enabled. If you are the BDC you must # change the 'domain master' setting to no # ; domain logons = yes # # The following setting only takes effect if 'domain logons' is set # It specifies the location of the user's profile directory # from the client point of view) # The following required a [profiles] share to be setup on the # samba server (see below) ; logon path = \\%N\profiles\%U # Another common choice is storing the profile in the user's home directory # (this is Samba's default) # logon path = \\%N\%U\profile # The following setting only takes effect if 'domain logons' is set # It specifies the location of a user's home directory (from the client # point of view) ; logon drive = H: # logon home = \\%N\%U # The following setting only takes effect if 'domain logons' is set # It specifies the script to run during logon. The script must be stored # in the [netlogon] share # NOTE: Must be store in 'DOS' file format convention ; logon script = logon.cmd # This allows Unix users to be created on the domain controller via the SAMR # RPC pipe. The example command creates a user account with a disabled Unix # password; please adapt to your needs ; add user script = /usr/sbin/adduser --quiet --disabled-password --gecos "" %u # This allows machine accounts to be created on the domain controller via the # SAMR RPC pipe. 
# The following assumes a "machines" group exists on the system ; add machine script = /usr/sbin/useradd -g machines -c "%u machine account" -d /var/lib/samba -s /bin/false %u # This allows Unix groups to be created on the domain controller via the SAMR # RPC pipe. ; add group script = /usr/sbin/addgroup --force-badname %g ########## Printing ########## # If you want to automatically load your printer list rather # than setting them up individually then you'll need this # load printers = yes # lpr(ng) printing. You may wish to override the location of the # printcap file ; printing = bsd ; printcap name = /etc/printcap # CUPS printing. See also the cupsaddsmb(8) manpage in the # cupsys-client package. ; printing = cups ; printcap name = cups ############ Misc ############ # Using the following line enables you to customise your configuration # on a per machine basis. The %m gets replaced with the netbios name # of the machine that is connecting ; include = /home/samba/etc/smb.conf.%m # Most people will find that this option gives better performance. # See smb.conf(5) and /usr/share/doc/samba-doc/htmldocs/Samba3-HOWTO/speed.html # for details # You may want to add the following on a Linux system: # SO_RCVBUF=8192 SO_SNDBUF=8192 # socket options = TCP_NODELAY # The following parameter is useful only if you have the linpopup package # installed. The samba maintainer and the linpopup maintainer are # working to ease installation and configuration of linpopup and samba. ; message command = /bin/sh -c '/usr/bin/linpopup "%f" "%m" %s; rm %s' & # Domain Master specifies Samba to be the Domain Master Browser. If this # machine will be configured as a BDC (a secondary logon server), you # must set this to 'no'; otherwise, the default behavior is recommended. # domain master = auto # Some defaults for winbind (make sure you're not using the ranges # for something else.) ; idmap uid = 10000-20000 ; idmap gid = 10000-20000 ; template shell = /bin/bash # The following was the default behaviour in sarge, # but samba upstream reverted the default because it might induce # performance issues in large organizations. # See Debian bug #368251 for some of the consequences of *not* # having this setting and smb.conf(5) for details. ; winbind enum groups = yes ; winbind enum users = yes # Setup usershare options to enable non-root users to share folders # with the net usershare command. # Maximum number of usershare. 0 (default) means that usershare is disabled. ; usershare max shares = 100 # Allow users who've been granted usershare privileges to create # public shares, not just authenticated ones #======================= Share Definitions ======================= # Un-comment the following (and tweak the other settings below to suit) # to enable the default home directory shares. This will share each # user's home director as \\server\username ;[homes] ; comment = Home Directories ; browseable = no # By default, the home directories are exported read-only. Change the # next parameter to 'no' if you want to be able to write to them. ; read only = yes # File creation mask is set to 0700 for security reasons. If you want to # create files with group=rw permissions, set next parameter to 0775. ; create mask = 0700 # Directory creation mask is set to 0700 for security reasons. If you want to # create dirs. with group=rw permissions, set next parameter to 0775. ; directory mask = 0700 # By default, \\server\username shares can be connected to by anyone # with access to the samba server. 
Un-comment the following parameter # to make sure that only "username" can connect to \\server\username # The following parameter makes sure that only "username" can connect # # This might need tweaking when using external authentication schemes ; valid users = %S # Un-comment the following and create the netlogon directory for Domain Logons # (you need to configure Samba to act as a domain controller too.) ;[netlogon] ; comment = Network Logon Service ; path = /home/samba/netlogon ; guest ok = yes ; read only = yes # Un-comment the following and create the profiles directory to store # users profiles (see the "logon path" option above) # (you need to configure Samba to act as a domain controller too.) # The path below should be writable by all users so that their # profile directory may be created the first time they log on ;[profiles] ; comment = Users profiles ; path = /home/samba/profiles ; guest ok = no ; browseable = no ; create mask = 0600 ; directory mask = 0700 [printers] comment = All Printers browseable = no path = /var/spool/samba printable = yes guest ok = no read only = yes create mask = 0700 # Windows clients look for this share name as a source of downloadable # printer drivers [print$] comment = Printer Drivers browseable = yes writeable = no path = /var/lib/samba/printers # Uncomment to allow remote administration of Windows print drivers. # You may need to replace 'lpadmin' with the name of the group your # admin users are members of. # Please note that you also need to set appropriate Unix permissions # to the drivers directory for these users to have write rights in it ; write list = root, @lpadmin # A sample share for sharing your CD-ROM with others. ;[cdrom] ; comment = Samba server's CD-ROM ; read only = yes ; locking = no ; path = /cdrom ; guest ok = yes # The next two parameters show how to auto-mount a CD-ROM when the # cdrom share is accesed. For this to work /etc/fstab must contain # an entry like this: # # /dev/scd0 /cdrom iso9660 defaults,noauto,ro,user 0 0 # # The CD-ROM gets unmounted automatically after the connection to the # # If you don't want to use auto-mounting/unmounting make sure the CD # is mounted on /cdrom # ; preexec = /bin/mount /cdrom ; postexec = /bin/umount /cdrom [mediafiles] path = /media/multimedia/

    Read the article

  • CodePlex Daily Summary for Friday, June 28, 2013

    CodePlex Daily Summary for Friday, June 28, 2013Popular ReleasesEsoteric language interpreters collection: WARP, FALSE, Befunge-93, BrainFuck version 1.9: WARP Add factorial source Add brainfuck interpreter source Correct flex number system parse test by using regexes Implement the { operator Implement the } operator Support standard input redirection (so that the , operator reads one line only) The following executes the WARP brainfuck interpreter with a hello world brainfuck program: echo "+++++ +++++[> +++++ ++ > +++++ +++++> +++> + <<<< - ]> ++ .> + .+++++ ++ ..+++ .> ++ .<< +++++ +++++ +++++ .> .+++ .----- -.----- --- .> +.> ."...SharePoint Calendar Helper: 1.0.0.0: The first release, enjoy!WebsiteFilter: WebsiteFilter 1.0: WebsiteFilter Need .net framework4.0ax 2012 Security Privilege generator: xpo privilige generator: While upgrading AX 2009 solutions to AX 2012. I became tired of creating all those privileges. It for every menu item every time the same case; create a view and a maintain privilege. So for all those lazy developer out there, her it is, a privilege Builder. How it works, easy select right mouse on the menu item and you get your privileges.Outlook 2013 Add-In: Configuration Form: This new version includes the following changes: - Refactored code a bit. - Removing configuration from main form to gain more space to display items. - Moved configuration to separate form. You can click the little "gear" icon to access the configuration form (still very simple). - Added option to show past day appointments from the selected day (previous in time, that is). - Added some tooltips. You will have to uninstall the previous version (add/remove programs) if you had installed it ...Stored Procedure Pager: LYB.NET.SPPager 1.10: check bugs: 1 the total page count of default stored procedure ".LYBPager" always takes error as this: page size: 10 total item count: 100 then total page count should be 10, but last version is 11. 2 update some comments with English forbidding messy code.Terminals: Version 3.0 - Release: Changes since version 2.0:Choose 100% portable or installed version Removed connection warning when running RDP 8 (Windows 8) client Fixed Active directory search Extended Active directory search by LDAP filters Fixed single instance mode when running on Windows Terminal server Merged usage of Tags and Groups Added columns sorting option in tables No UAC prompts on Windows 7 Completely new file persistence data layer New MS SQL persistence layer (Store data in SQL database)...NuGet: NuGet 2.6: Released June 26, 2013. Release notes: http://docs.nuget.org/docs/release-notes/nuget-2.6Python Tools for Visual Studio: 2.0 Beta: We’re pleased to announce the release of Python Tools for Visual Studio 2.0 Beta. Python Tools for Visual Studio (PTVS) is an open-source plug-in for Visual Studio which supports programming with the Python language. PTVS supports a broad range of features including CPython/IronPython, Edit/Intellisense/Debug/Profile, Cloud, HPC, IPython, and cross platform debugging support. For a quick overview of the general IDE experience, please watch this video: http://www.youtube.com/watch?v=TuewiStN...Player Framework by Microsoft: Player Framework for Windows 8 and WP8 (v1.3 beta): Preview: New MPEG DASH adaptive streaming plugin for Windows Azure Media Services Preview: New Ultraviolet CFF plugin. Preview: New WP7 version with WP8 compatibility. 
(source code only) Source code is now available via CodePlex Git Misc bug fixes and improvements: WP8 only: Added optional fullscreen and mute buttons to default xaml JS only: protecting currentTime from returning infinity. Some videos would cause currentTime to be infinity which could cause errors in plugins expectin...AssaultCube Reloaded: 2.5.8: SERVER OWNERS: note that the default maprot has changed once again. Linux has Ubuntu 11.10 32-bit precompiled binaries and Ubuntu 10.10 64-bit precompiled binaries, but you can compile your own as it also contains the source. If you are using Mac or other operating systems, please wait while we continue to try to package for those OSes. Or better yet, try to compile it. If it fails, download a virtual machine. The server pack is ready for both Windows and Linux, but you might need to compi...Compare .NET Objects: Version 1.7.2.0: If you like it, please rate it. :) Performance Improvements Fix for deleted row in a data table Added ability to ignore the collection order Fix for Ignoring by AttributesMicrosoft Ajax Minifier: Microsoft Ajax Minifier 4.95: update parser to allow for CSS3 calc( function to nest. add recognition of -pponly (Preprocess-Only) switch in AjaxMinManifestTask build task. Fix crashing bug in EXE when processing a manifest file using the -xml switch and an error message needs to be displayed (like a missing input file). Create separate Clean and Bundle build tasks for working with manifest files (AjaxMinManifestCleanTask and AjaxMinBundleTask). Removed the IsCleanOperation from AjaxMinManifestTask -- use AjaxMinMan...VG-Ripper & PG-Ripper: VG-Ripper 2.9.44: changes NEW: Added Support for "ImgChili.net" links FIXED: Auto UpdaterDocument.Editor: 2013.25: What's new for Document.Editor 2013.25: Improved Spell Check support Improved User Interface Minor Bug Fix's, improvements and speed upsWPF Composites: Version 4.3.0: In this Beta release, I broke my code out into two separate projects. There is a core FasterWPF.dll with the minimal required functionality. This can run with only the Aero.dll and the Rx .dll's. Then, I have a FasterWPFExtras .dll that requires and supports the Extended WPF Toolkit™ Community Edition V 1.9.0 (including Xceed DataGrid) and the Thriple .dll. This is for developers who want more . . . Finally, you may notice the other OPTIONAL .dll's available in the download such as the Dyn...Channel9's Absolute Beginner Series: Windows Phone 8: Entire source code for the Channel 9 series, Windows Phone 8 Development for Absolute Beginners.Indent Guides for Visual Studio: Indent Guides v13: ImportantThis release does not support Visual Studio 2010. The latest stable release for VS 2010 is v12.1. 
Version History Changed in v13 Added page width guide lines Added guide highlighting options Fixed guides appearing over collapsed blocks Fixed guides not appearing in newly opened files Fixed some potential crashes Fixed lines going through pragma statements Various updates for VS 2012 and VS 2013 Removed VS 2010 support Changed in v12.1: Fixed crash when unable to start...Fluent Ribbon Control Suite: Fluent Ribbon Control Suite 2.1.0 - Prerelease d: Fluent Ribbon Control Suite 2.1.0 - Prerelease d(supports .NET 3.5, 4.0 and 4.5) Includes: Fluent.dll (with .pdb and .xml) Showcase Application Samples (not for .NET 3.5) Foundation (Tabs, Groups, Contextual Tabs, Quick Access Toolbar, Backstage) Resizing (ribbon reducing & enlarging principles) Galleries (Gallery in ContextMenu, InRibbonGallery) MVVM (shows how to use this library with Model-View-ViewModel pattern) KeyTips ScreenTips Toolbars ColorGallery *Walkthrough (do...Magick.NET: Magick.NET 6.8.5.1001: Magick.NET compiled against ImageMagick 6.8.5.10. Breaking changes: - MagickNET.Initialize has been made obsolete because the ImageMagick files in the directory are no longer necessary. - MagickGeometry is no longer IDisposable. - Renamed dll's so they include the platform name. - Image profiles can now only be accessed and modified with ImageProfile classes. - Renamed DrawableBase to Drawable. - Removed Args part of PathArc/PathCurvetoArgs/PathQuadraticCurvetoArgs classes. The...New ProjectsAdjusting SharePoint Site Quota PowerShell: PowerShell Script that displays the current space statistics thresholds for the database, site and quota. With this script you change the quota of a site.AndroidOMS: android omsASP.NET????????????: ASP.NET???????????? GlobalProfile ?????ASP.NET???????????。 ?????????????XML??????,?????ASP.NET??IHttpModule???????????,?????HttpRuntime.Cache???????????。 ???ax 2012 Security Privilege generator: While upgrading AX 2009 solutions to AX 2012. I became tired of creating all those privileges.CometDocs.Net: The .Net binding of the CometDocs web API (https://www.cometdocs.com/developer/apiDocumentation)Compositional Signals: This project aims to develop a simple compositional signal processing pipeline for education purposesCRM 2011 Field Data Type Converter: Field data type converter for Microsoft Dynamics CRM 2011CRM Solution CommandLine Helper: CRM Solution Command Line helper provide command line interface to handle CRM solution deployment and automate basic tasks.cursosabado: curso sábado c#Custom Html Helper for Google Maps: This Custom HTML helper let you integrate google maps into your MVC app easiler than EVER!DemoGRAPHICS: This project was demo'ed at the Microsoft Build 2013 Conference as part of "What's new for HTML Developers in Blend for Visual Studio 2013 Preview".DotNetNuke Demo SkinObjects: A collection of skin objects for DotNetNuke designed to make demonstrations easier for people.Excel to Dynamics CRM: Excel to Dynamics CRM for Microsoft Dynamics CRM 2011 makes it easier for the CRM users to upsert (update and/or insert) data from an Excel file.Newbie Open Source Project: math developOpen Parsing Example: here is an example to parse my own fileformatPeople Finder: This is a web proyect aimed to help people to get contact with whom are lost. 
Initial purpose: Help those who had their kids kidnapped in hospitals.Recovery Solutions at Your Service: Find our some of the most incredible solutions for protecting your backup and data file from corruption, errors and damages.San Francisco Transportation: San Francisco Transportation appSecurity Analysing: Just beginning.Show My Apps: This project aims to build a WPF Application that monitors resources and display them in a beautiful screen, usually seen on TV.TSAsyncModulesExampleApp: TSAsyncModulesExampleApp - ??????????-?????? ????????? ?? TypeScriptUltimate Music Tagger: Ultimate Music Tagger is a powerful, easy and extreme fast tool to reorganize your music libraryUniversal Visualnovel Engine Tools: Universal Visualnovel Engine is a cross-platform AVG Game Engine which supports Windows Phone 8.0 OS.This project hosts the documents and samples of uvengine.WCF Data Service Example: The purpose of this application is to demonstrate the usage of WCF Data Service with Repository patternWebsiteSkip: websiteskip is an open source Web page jump httpmodule component.windowsrtdev: The purpose of this project is to provide a central location for "native" (ie: Win32 - not WinRT) Windows RT development libraries / applications.

    Read the article

  • Oracle Solaris: Zones on Shared Storage

    - by Jeff Victor
    Oracle Solaris 11.1 has several new features. At oracle.com you can find a detailed list. One of the significant new features, and the most significant new feature releated to Oracle Solaris Zones, is casually called "Zones on Shared Storage" or simply ZOSS (rhymes with "moss"). ZOSS offers much more flexibility because you can store Solaris Zones on shared storage (surprise!) so that you can perform quick and easy migration of a zone from one system to another. This blog entry describes and demonstrates the use of ZOSS. ZOSS provides complete support for a Solaris Zone that is stored on "shared storage." In this case, "shared storage" refers to fiber channel (FC) or iSCSI devices, although there is one lone exception that I will demonstrate soon. The primary intent is to enable you to store a zone on FC or iSCSI storage so that it can be migrated from one host computer to another much more easily and safely than in the past. With this blog entry, I wanted to make it easy for you to try this yourself. I couldn't assume that you have a SAN available - which is a good thing, because neither do I! What could I use, instead? [There he goes, foreshadowing again... -Ed.] Developing this entry reinforced the lesson that the solution to every lab problem is VirtualBox. Oracle VM VirtualBox (its formal name) helps here in a couple of important ways. It offers the ability to easily install multiple copies of Solaris as guests on top of any popular system (Microsoft Windows, MacOS, Solaris, Oracle Linux (and other Linuxes) etc.). It also offers the ability to create a separate virtual disk drive (VDI) that appears as a local hard disk to a guest. This virtual disk can be moved very easily from one guest to another. In other words, you can follow the steps below on a laptop or larger x86 system. Please note that the ability to use ZOSS to store a zone on a local disk is very useful for a lab environment, but not so useful for production. I do not suggest regularly moving disk drives among computers. In the method I describe below, that virtual hard disk will contain the zone that will be migrated among the (virtual) hosts. In production, you would use FC or iSCSI LUNs instead. The zonecfg(1M) man page details the syntax for each of the three types of devices. Why Migrate? Why is the migration of virtual servers important? Some of the most common reasons are: Moving a workload to a different computer so that the original computer can be turned off for extensive maintenance. Moving a workload to a larger system because the workload has outgrown its original system. If the workload runs in an environment (such as a Solaris Zone) that is stored on shared storage, you can restore the service of the workload on an alternate computer if the original computer has failed and will not reboot. You can simplify lifecycle management of a workload by developing it on a laptop, migrating it to a test platform when it's ready, and finally moving it to a production system. Concepts For ZOSS, the important new concept is named "rootzpool". You can read about it in the zonecfg(1M) man page, but here's the short version: it's the backing store (hard disk(s), or LUN(s)) that will be used to make a ZFS zpool - the zpool that will hold the zone. This zpool: contains the zone's Solaris content, i.e. 
the root file system does not contain any content not related to the zone can only be mounted by one Solaris instance at a time Method Overview Here is a brief list of the steps to create a zone on shared storage and migrate it. The next section shows the commands and output. You will need a host system with an x86 CPU (hopefully at least a couple of CPU cores), at least 2GB of RAM, and at least 25GB of free disk space. (The steps below will not actually use 25GB of disk space, but I don't want to lead you down a path that ends in a big sign that says "Your HDD is full. Good luck!") Configure the zone on both systems, specifying the rootzpool that both will use. The best way is to configure it on one system and then copy the output of "zonecfg export" to the other system to be used as input to zonecfg. This method reduces the chances of pilot error. (It is not necessary to configure the zone on both systems before creating it. You can configure this zone in multiple places, whenever you want, and migrate it to one of those places at any time - as long as those systems all have access to the shared storage.) Install the zone on one system, onto shared storage. Boot the zone. Provide system configuration information to the zone. (In the Real World(tm) you will usually automate this step.) Shutdown the zone. Detach the zone from the original system. Attach the zone to its new "home" system. Boot the zone. The zone can be used normally, and even migrated back, or to a different system. Details The rest of this shows the commands and output. The two hostnames are "sysA" and "sysB". Note that each Solaris guest might use a different device name for the VDI that they share. I used the device names shown below, but you must discover the device name(s) after booting each guest. In a production environment you would also discover the device name first and then configure the zone with that name. Fortunately, you can use the command "zpool import" or "format" to discover the device on the "new" host for the zone. The first steps create the VirtualBox guests and the shared disk drive. I describe the steps here without demonstrating them. Download VirtualBox and install it using a method normal for your host OS. You can read the complete instructions. Create two VirtualBox guests, each to run Solaris 11.1. Each will use its own VDI as its root disk. Install Solaris 11.1 in each guest.Install Solaris 11.1 in each guest. To install a Solaris 11.1 guest, you can either download a pre-built VirtualBox guest, and import it, or install Solaris 11.1 from the "text install" media. If you use the latter method, after booting you will not see a windowing system. To install the GUI and other important things, login and run "pkg install solaris-desktop" and take a break while it installs those important things. Life is usually easier if you install the VirtualBox Guest Additions because then you can copy and paste between the host and guests, etc. You can find the guest additions in the folder matching the version of VirtualBox you are using. You can also read the instructions for installing the guest additions. To create the zone's shared VDI in VirtualBox, you can open the storage configuration for one of the two guests, select the SATA controller, and click on the "Add Hard Disk" icon nearby. Choose "Create New Disk" and specify an appropriate path name for the file that will contain the VDI. The shared VDI must be at least 1.5 GB. Note that the guest must be stopped to do this. 
Add that VDI to the other guest - using its Storage configuration - so that each can access it while running. The steps start out the same, except that you choose "Choose Existing Disk" instead of "Create New Disk." Because the disk is configured on both of them, VirtualBox prevents you from running both guests at the same time. Identify device names of that VDI, in each of the guests. Solaris chooses the name based on existing devices. The names may be the same, or may be different from each other. This step is shown below as "Step 1." Assumptions In the example shown below, I make these assumptions. The guest that will own the zone at the beginning is named sysA. The guest that will own the zone after the first migration is named sysB. On sysA, the shared disk is named /dev/dsk/c7t2d0 On sysB, the shared disk is named /dev/dsk/c7t3d0 (Finally!) The Steps Step 1) Determine the name of the disk that will move back and forth between the systems. root@sysA:~# format Searching for disks...done AVAILABLE DISK SELECTIONS: 0. c7t0d0 /pci@0,0/pci8086,2829@d/disk@0,0 1. c7t2d0 /pci@0,0/pci8086,2829@d/disk@2,0 Specify disk (enter its number): ^D Step 2) The first thing to do is partition and label the disk. The magic needed to write an EFI label is not overly complicated. root@sysA:~# format -e c7t2d0 selecting c7t2d0 [disk formatted] FORMAT MENU: ... format fdisk No fdisk table exists. The default partition for the disk is: a 100% "SOLARIS System" partition Type "y" to accept the default partition, otherwise type "n" to edit the partition table. n SELECT ONE OF THE FOLLOWING: ... Enter Selection: 1 ... G=EFI_SYS 0=Exit? f SELECT ONE... ... 6 format label ... Specify Label type[1]: 1 Ready to label disk, continue? y format quit root@sysA:~# ls /dev/dsk/c7t2d0 /dev/dsk/c7t2d0 Step 3) Configure zone1 on sysA. root@sysA:~# zonecfg -z zone1 Use 'create' to begin configuring a new zone. zonecfg:zone1 create create: Using system default template 'SYSdefault' zonecfg:zone1 set zonename=zone1 zonecfg:zone1 set zonepath=/zones/zone1 zonecfg:zone1 add rootzpool zonecfg:zone1:rootzpool add storage dev:dsk/c7t2d0 zonecfg:zone1:rootzpool end zonecfg:zone1 exit root@sysA:~# oot@sysA:~# zonecfg -z zone1 info zonename: zone1 zonepath: /zones/zone1 brand: solaris autoboot: false bootargs: file-mac-profile: pool: limitpriv: scheduling-class: ip-type: exclusive hostid: fs-allowed: anet: ... rootzpool: storage: dev:dsk/c7t2d0 Step 4) Install the zone. This step takes the most time, but you can wander off for a snack or a few laps around the gym - or both! (Just not at the same time...) root@sysA:~# zoneadm -z zone1 install Created zone zpool: zone1_rpool Progress being logged to /var/log/zones/zoneadm.20121022T163634Z.zone1.install Image: Preparing at /zones/zone1/root. AI Manifest: /tmp/manifest.xml.RXaycg SC Profile: /usr/share/auto_install/sc_profiles/enable_sci.xml Zonename: zone1 Installation: Starting ... Creating IPS image Startup linked: 1/1 done Installing packages from: solaris origin: http://pkg.us.oracle.com/support/ DOWNLOAD PKGS FILES XFER (MB) SPEED Completed 183/183 33556/33556 222.2/222.2 2.8M/s PHASE ITEMS Installing new actions 46825/46825 Updating package state database Done Updating image state Done Creating fast lookup database Done Installation: Succeeded Note: Man pages can be obtained by installing pkg:/system/manual done. Done: Installation completed in 1696.847 seconds. Next Steps: Boot the zone, then log into the zone console (zlogin -C) to complete the configuration process. 
Step 5) Boot the zone.

root@sysA:~# zoneadm -z zone1 boot

Step 6) Log in to the zone's console to complete the specification of system information.

root@sysA:~# zlogin -C zone1

Answer the usual questions and wait for a login prompt. Then you can end the console session with the usual "~." incantation.

Step 7) Shut down the zone so it can be "moved."

root@sysA:~# zoneadm -z zone1 shutdown

Step 8) Detach the zone so that the original global zone can't use it.

root@sysA:~# zoneadm list -cv
  ID NAME        STATUS     PATH            BRAND    IP
   0 global      running    /               solaris  shared
   - zone1       installed  /zones/zone1    solaris  excl
root@sysA:~# zpool list
NAME          SIZE  ALLOC   FREE  CAP  DEDUP  HEALTH  ALTROOT
rpool        17.6G  11.2G  6.47G  63%  1.00x  ONLINE  -
zone1_rpool  1.98G   484M  1.51G  23%  1.00x  ONLINE  -
root@sysA:~# zoneadm -z zone1 detach
Exported zone zpool: zone1_rpool

Step 9) Review the result and shut down sysA so that sysB can use the shared disk.

root@sysA:~# zpool list
NAME    SIZE  ALLOC   FREE  CAP  DEDUP  HEALTH  ALTROOT
rpool  17.6G  11.2G  6.47G  63%  1.00x  ONLINE  -
root@sysA:~# zoneadm list -cv
  ID NAME        STATUS      PATH            BRAND    IP
   0 global      running     /               solaris  shared
   - zone1       configured  /zones/zone1    solaris  excl
root@sysA:~# init 0

Step 10) Now boot sysB and configure a zone with the parameters shown above in Step 3. (Again, the safest method is to use "zonecfg ... export" on sysA as described in the section "Method Overview" above; a sketch of that export/import flow follows Step 11.) The one difference is the name of the rootzpool storage device, which was shown in the list of assumptions, and which you must determine by booting sysB and using the "format" or "zpool import" command. When that is done, you should see the output shown next. (I used the same zonename - "zone1" - in this example, but you can choose any valid zonename you want.)

root@sysB:~# zoneadm list -cv
  ID NAME        STATUS      PATH            BRAND    IP
   0 global      running     /               solaris  shared
   - zone1       configured  /zones/zone1    solaris  excl
root@sysB:~# zonecfg -z zone1 info
zonename: zone1
zonepath: /zones/zone1
brand: solaris
autoboot: false
bootargs:
file-mac-profile:
pool:
limitpriv:
scheduling-class:
ip-type: exclusive
hostid:
fs-allowed:
anet:
    linkname: net0
...
rootzpool:
    storage: dev:dsk/c7t3d0

Step 11) Attaching the zone automatically imports the zpool.

root@sysB:~# zoneadm -z zone1 attach
Imported zone zpool: zone1_rpool
Progress being logged to /var/log/zones/zoneadm.20121022T184034Z.zone1.attach
    Installing: Using existing zone boot environment
      Zone BE root dataset: zone1_rpool/rpool/ROOT/solaris
                     Cache: Using /var/pkg/publisher.
  Updating non-global zone: Linking to image /.
Processing linked: 1/1 done
  Updating non-global zone: Auditing packages.
No updates necessary for this image.
  Updating non-global zone: Zone updated.
                    Result: Attach Succeeded.
Log saved in non-global zone as /zones/zone1/root/var/log/zones/zoneadm.20121022T184034Z.zone1.attach
root@sysB:~# zoneadm -z zone1 boot
root@sysB:~# zlogin zone1
[Connected to zone 'zone1' pts/2]
Oracle Corporation      SunOS 5.11      11.1    September 2012
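As an aside, here is a minimal sketch of the "zonecfg export" method mentioned in Step 10 and in the Method Overview. The temporary file name and the use of scp are my own assumptions; the only line in the exported file that has to change is the "add storage" line, which must name the device as sysB sees it (dev:dsk/c7t3d0 in this example).

# On sysA: export the zone's configuration to a command file and copy it over.
root@sysA:~# zonecfg -z zone1 export -f /tmp/zone1.cfg
root@sysA:~# scp /tmp/zone1.cfg sysB:/tmp/zone1.cfg

# On sysB: edit the "add storage" line to use sysB's device name, then replay the file.
root@sysB:~# zonecfg -z zone1 -f /tmp/zone1.cfg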
Step 12) Now let's migrate the zone back to sysA. Create a file in zone1 so we can verify it exists after we migrate the zone back, then begin migrating it back.

root@zone1:~# ls /opt
root@zone1:~# touch /opt/fileA
root@zone1:~# ls -l /opt/fileA
-rw-r--r--   1 root     root           0 Oct 22 14:47 /opt/fileA
root@zone1:~# exit
logout

[Connection to zone 'zone1' pts/2 closed]
root@sysB:~# zoneadm -z zone1 shutdown
root@sysB:~# zoneadm -z zone1 detach
Exported zone zpool: zone1_rpool
root@sysB:~# init 0

Step 13) Back on sysA, check the status.

Oracle Corporation      SunOS 5.11      11.1    September 2012
root@sysA:~# zoneadm list -cv
  ID NAME        STATUS      PATH            BRAND    IP
   0 global      running     /               solaris  shared
   - zone1       configured  /zones/zone1    solaris  excl
root@sysA:~# zpool list
NAME    SIZE  ALLOC   FREE  CAP  DEDUP  HEALTH  ALTROOT
rpool  17.6G  11.2G  6.47G  63%  1.00x  ONLINE  -

Step 14) Re-attach the zone to sysA.

root@sysA:~# zoneadm -z zone1 attach
Imported zone zpool: zone1_rpool
Progress being logged to /var/log/zones/zoneadm.20121022T190441Z.zone1.attach
    Installing: Using existing zone boot environment
      Zone BE root dataset: zone1_rpool/rpool/ROOT/solaris
                     Cache: Using /var/pkg/publisher.
  Updating non-global zone: Linking to image /.
Processing linked: 1/1 done
  Updating non-global zone: Auditing packages.
No updates necessary for this image.
  Updating non-global zone: Zone updated.
                    Result: Attach Succeeded.
Log saved in non-global zone as /zones/zone1/root/var/log/zones/zoneadm.20121022T190441Z.zone1.attach
root@sysA:~# zpool list
NAME          SIZE  ALLOC   FREE  CAP  DEDUP  HEALTH  ALTROOT
rpool        17.6G  11.2G  6.47G  63%  1.00x  ONLINE  -
zone1_rpool  1.98G   491M  1.51G  24%  1.00x  ONLINE  -
root@sysA:~# zoneadm -z zone1 boot
root@sysA:~# zlogin zone1
[Connected to zone 'zone1' pts/2]
Oracle Corporation      SunOS 5.11      11.1    September 2012
root@zone1:~# zpool list
NAME    SIZE  ALLOC   FREE  CAP  DEDUP  HEALTH  ALTROOT
rpool  1.98G   538M  1.46G  26%  1.00x  ONLINE  -

Step 15) Check for the file created on sysB, earlier.

root@zone1:~# ls -l /opt
total 1
-rw-r--r--   1 root     root           0 Oct 22 14:47 fileA

Next Steps

Here is a brief list of some of the fun things you can try next.

- Add space to the zone by adding a second storage device to the rootzpool. Make sure that you add it to the zone's configuration on both systems!
- Create a new zone, specifying two disks in the rootzpool when you first configure the zone. When you install that zone, or clone it from another zone, zoneadm uses those two disks to create a mirrored pool. (Three disks will result in a three-way mirror, etc.) A sketch of such a configuration appears after the conclusion below.

Conclusion

Hopefully you have seen the ease with which you can now move Solaris Zones from one system to another.
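To close, here is a hypothetical zonecfg session for the mirrored-rootzpool idea from the Next Steps list. The zone name (zone2) and the two device names are placeholders, not part of the walkthrough above; the point is simply that listing two storage resources in the rootzpool causes zoneadm to build a mirrored pool when the zone is installed or cloned.

root@sysA:~# zonecfg -z zone2
Use 'create' to begin configuring a new zone.
zonecfg:zone2> create
zonecfg:zone2> set zonepath=/zones/zone2
zonecfg:zone2> add rootzpool
zonecfg:zone2:rootzpool> add storage dev:dsk/c7t4d0
zonecfg:zone2:rootzpool> add storage dev:dsk/c7t5d0
zonecfg:zone2:rootzpool> end
zonecfg:zone2> exit

As with zone1, the same configuration (using whatever device names each host sees) would need to exist on every system that might host the zone.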


  • CodePlex Daily Summary for Thursday, February 10, 2011

    CodePlex Daily Summary for Thursday, February 10, 2011Popular ReleasesSnoop, the WPF Spy Utility: Snoop 2.6.1: This release is a bug fixing release. Most importantly, issues have been seen around WPF 4.0 applications not always showing up in the app chooser. Hopefully, they are fixed now. I thought this issue warranted a minor release since more and more people are going WPF 4.0 and I don't want anyone to have any problems. Dan Hanan also contributes again with several usability features. Thanks Dan! Happy Snooping! Work Item Description 5149 Dan Hanan: You can now use the mouse wheel t...RIBA - Rich Internet Business Application for Silverlight: Preview of MVVM Framework Source + Tutorials: This is a first public release of the MVVM Framework which is part of the final RIBA application. The complete RIBA example LOB application has yet to be published. Further Documentation on the MVVM part can be found on the Blog, http://www.SilverlightBlog.Net and in the downloadable source ( mvvm/doc/ ). Please post all issues and suggestions in the issue tracker.SharePoint Learning Kit: 1.5: SharePoint Learning Kit 1.5 has the following new functionality: *Support for SharePoint 2010 *E-Learning Actions can be localised *Two New Document Library Edit Options *Automatically add the Assignment List Web Part to the Web Part Gallery *Various Bug Fixes for the Drop Box There are 2 downloads for this release SLK-1.5-2010.zip for SharePoint 2010 SLK-1.5-2007.zip for SharePoint 2007 (WSS3 & MOSS 2007)Facebook C# SDK: 5.0.3 (BETA): This is fourth BETA release of the version 5 branch of the Facebook C# SDK. Remember this is a BETA build. Some things may change or not work exactly as planned. We are absolutely looking for feedback on this release to help us improve the final 5.X.X release. For more information about this release see the following blog posts: Facebook C# SDK - Writing your first Facebook Application Facebook C# SDK v5 Beta Internals Facebook C# SDK V5.0.0 (BETA) Released We have spend time trying ...NodeXL: Network Overview, Discovery and Exploration for Excel: NodeXL Excel Template, version 1.0.1.161: The NodeXL Excel template displays a network graph using edge and vertex lists stored in an Excel 2007 or Excel 2010 workbook. What's NewThis release adds a new Twitter List network importer, makes some minor feature improvements, and fixes a few bugs. See the Complete NodeXL Release History for details. Installation StepsFollow these steps to install and use the template: Download the Zip file. Unzip it into any folder. Use WinZip or a similar program, or just right-click the Zip file...WatchersNET.TagCloud: WatchersNET.TagCloud 01.09.03: Whats NewAdded New Skin TagTastic http://www.watchersnet.de/Portals/0/screenshots/dnn/TagCloud-TagTastic-Skin.jpg Added New Skin RoundedButton http://www.watchersnet.de/Portals/0/screenshots/dnn/TagCloud-RoundedButton-Skin.jpg changes Tag Count fixed on Tag Source Referrals Fixed Tag Count when multiple Tag Sources are usedFinestra Virtual Desktops: 1.1: This release adds a few more performance and graphical enhancements to 1.0. Switching desktops is now about as fast as you can blink. Desktop switching optimizations New welcome wizard for Vista/7 Fixed a few minor bugs Added a few more options to the options dialog (including ability to disable the taskbar switching)WCF Data Services Toolkit: WCF Data Services Toolkit: The source code and binary releases of the WCF Data Services Toolkit. 
For simplicity, the source code download doesn't include any of the MSTest files. If you want those, you can pull the code down via MercurialyoutubeFisher: youtubeFisher 3.0 [beta]: What's new: Video capturing improved Supports YouTube's new layout (january 2011) Internal refactoringNearforums - ASP.NET MVC forum engine: Nearforums v5.0: Version 5.0 of the ASP.NET MVC Forum Engine, containing the following improvements: .NET 4.0 as target framework using ASP.NET MVC 3. All views migrated to Razor for cleaner markup. Alternate template (Layout file) for mobile devices 4 Bug Fixes since Version 4.1 Visit the project Roadmap for more details.fuv: 1.0 release, codename Chopper Joe: features: search/replace :o to open file :s to save file :q to quitASP.NET MVC Project Awesome, jQuery Ajax helpers (controls): 1.7: A rich set of helpers (controls) that you can use to build highly responsive and interactive Ajax-enabled Web applications. These helpers include Autocomplete, AjaxDropdown, Lookup, Confirm Dialog, Popup Form, Popup and Pager html generation optimized new features for the lookup (add additional search data ) live demo went aeroEnhSim: EnhSim 2.3.6 BETA: 2.3.6 BETAThis release supports WoW patch 4.06 at level 85 To use this release, you must have the Microsoft Visual C++ 2010 Redistributable Package installed. This can be downloaded from http://www.microsoft.com/downloads/en/details.aspx?FamilyID=A7B7A05E-6DE6-4D3A-A423-37BF0912DB84 To use the GUI you must have the .NET 4.0 Framework installed. This can be downloaded from http://www.microsoft.com/downloads/en/details.aspx?FamilyID=9cfb2d51-5ff4-4491-b0e5-b386f32c0992 Changes since 2.3.0 ...TestApi - a library of Test APIs: TestApi v0.6: TestApi v0.6 comes with the following changes: TestApi code development has been moved to Codeplex: Moved TestApi soluton to VS 2010; Moved all source code to Codeplex. All development work is done there now. Fault Injection API: Integrated the unmanaged FaultInjectionEngine.dll COM component in the build; Cleaned up FaultInjectionEngine.dll to build at warning level 4; Implemented “FaultScope” which allows for in-process fault injection; Added automation scripts & sample program; ...AutoLoL: AutoLoL v1.5.5: AutoChat now allows up to 6 items. Items with nr. 7-0 will be removed! News page url's are now opened in the default browser Added a context menu to the system tray icon (thanks to Alex Banagos) AutoChat now allows configuring the Chat Keys and the Modifier Key The recent files list now supports compact and full mode Fix: Swapped mouse buttons are now properly detected Fix: Sometimes the Play button was pressed while still greyed out Champion: Karma Note: You can also run the u...mojoPortal: 2.3.6.2: see release notes on mojoportal.com http://www.mojoportal.com/mojoportal-2362-released.aspx Note that we have separate deployment packages for .NET 3.5 and .NET 4.0 The deployment package downloads on this page are pre-compiled and ready for production deployment, they contain no C# source code. To download the source code see the Source Code Tab I recommend getting the latest source code using TortoiseHG, you can get the source code corresponding to this release here.ReSharper Settings Manager: RSM v4.0 (For ReSharper 5.1): Donate ChangesChanged the mechanism of locating shared settings files (discussion, issue): You can now create a set of "global" settings files and configure how R# settings will get loaded (settings inheritance and overrides). 
All solutions located at the same folder and\or subfolders where the global settings file is located will load that file automatically. Improved "Manage Settings" dialog to support new settings sharing features. Bug fixes: 228048 16590 16584Rawr: Rawr 4.0.19 Beta: Rawr is now web-based. The link to use Rawr4 is: http://elitistjerks.com/rawr.phpThis is the Cataclysm Beta Release. More details can be found at the following link http://rawr.codeplex.com/Thread/View.aspx?ThreadId=237262 As of the 4.0.16 release, you can now also begin using the new Downloadable WPF version of Rawr!This is a pre-alpha release of the WPF version, there are likely to be a lot of issues. If you have a problem, please follow the Posting Guidelines and put it into the Issue Trac...IronRuby: 1.1.2: IronRuby 1.1.2 is a servicing release that keeps on improving compatibility with Ruby 1.9.2 and includes IronRuby integration to Visual Studio 2010. We decided to drop 1.8.6 compatibility mode in all post-1.0 releases. We recommend using IronRuby 1.0 if you need 1.8.6 compatibility. In this release we fixed several major issues: - problems that blocked Gem installation in certain cases - regex syntax: the parser was replaced with a new one that is much more compatible with Ruby 1.9.2 - cras...MVVM Light Toolkit: MVVM Light Toolkit V3 SP1 (4): There was a small issue with the previous release that caused errors when installing the templates in VS10 Express. This release corrects the error. Only use this if you encountered issues when installing the previous release. No changes in the binaries.New ProjectsAJAX Map DataConnector: AJAX Map DataConnector is an Open Source + Open Data project focused on connecting the power of Bing Maps AJAX Control, Version 7.0 to the spatial query capabilities of SQL Server 2008. The examples provided here represent a starting point, showing some ways to harness SQL ServerASP.NET Mvc Cdn Management: Cdn Management loads different resources (like styles, scripts etc) from configuration file. It reads resource location (url) from a configuration file and renders it into a page.Azure Content Provider for Telerik File Browser: Windows Azure Storage FileBrowserContentProvider makes it easier for web developers to Connect the Telerik RadFileExplorer control to the Windows Azure Storage. It's developed in VB.NET. BeerForge: BeerForge is an open source application for the home brewer to ease the creation of beer recipes and provide handy calculation tools.budget-manage: ???????????,??????????。Command Line Parsing, C++ and C#: This is a simple project that does command line arguments parsing. C# and C++ projects are supplied, together with unit tests.DeleteAfterRunning: Application allowing you to convert any file to self-deleting executable. You have a picture, audio file or a document you want to share, but don’t want anyone to keep a copy? Simply create a self-deleting package using DeleteAfterRunning that can be opened only once.DotNetNuke Contest: A module for contest voting in DotNetNuke.Foundry: Experiments in language design and data modeling.GenAttributeLib: Libreria per la gestione di attributi generici di un oggetto.Graduation Project Management System: This is our graduation project. It uses Asp.net Mvc 3,Jquery,Ado.net Entity Framework,and so on. by Veiller hu,ZSPiKnow - A tiny wiki: iKnow is a (really) tiny wiki. 
Up and running in 5 minutes, easy to customize and of course open source and free to use.internationalOffice: This will contain all the code and bugs for a student project.Iroo Package Manager: Orchard package management UI (auto-update)Iroo Version Manager: Version management for Content ItemsJavApi: JavApi provides a collection of .NET classes in the form of the Java API. It thus allows you to use an identical API to develop for both platforms.jsfcore @ Personal Repository: This project contains s wmultiple sampleith various snippets and projects from blog posts, user group talks, and conference sessions. M3 CMS: M3 Cms is a lightweight content management system built on the ASP.NET MVC 4.0 framework. Uses SQLCE database and nHibernate+ActiveRecord framework.Market-Basket Synthetic Data Generator: An open-source C# market-basket synthetic data generator, capable of creating transactions, sequences and taxonomies, based on the IBM Quest version. Written to address the maintainability and portability problems of the original, feedback, fixes and extensions are encouraged!MicroLinq : Libraries for .NET Micro Framework: MicroLinq is a project to bring a small subset of the power of Linq to the .NET Micro Framework. Over time (hours) the project has expanded to include other helpful libraries and proof of concept code others might find useful.MJPEG Decoder for WPF, WinForms, WP7 and XNA: Library to decode MJPEG streams for Silverlight, Windows Phone 7, XNA 4.0, WinForms, and WPF. Sample code showing usage is included with the distribution. For more information, see the full article at Coding4Fun.modSIC: Modulo's Open Distributed SCAP Infrastructure Collector, or modSIC, makes it easier for security analysts to scan an environment vulnerabilities/compliance based on OVAL-Definitions file. It's an open source service specializing in distributed assessment on a network.MoneyTracker: MoneyTracker is used to keep track of transactions, create your own categories and view reports.PixelsCMS: ASP.NET CMS, PixelsCMS, MVC 2rainTwitter: A twitter module/skin for the rainmeter Windows desktop customization platform. (IN DEVELOPMENT)Reactor.ServiceBus: Reactor Service Bus is a light weight .Net service bus built upon the Apache NMS abstraction library. It provides a slim and easy to use interface that supports all the underlying brokers NMS supports.RsMenu: RsMenu es un MenuStrip que permite tener los Informes de Reporting Services en un elegante menu en nuestra aplicacion winform.Top Protocols Expert for Network Monitor: A Network Monitor Expert which shows you the usage frequency of protocols in a trace. This expert plugs into the Network Monitor UI so you run it directly from the Expert menu. trollr.net - remotely configure & control your .Net applications: trollr.net provides a remote configuration and control network to the .Net apps in your enterprise. App configuration becomes centralised and live/runtime changes are pushed to each application - no restart required to pick up the change. Cache control commands can also be sent!Tweet 4 ME: Tweet 4 ME is a Java Micro Edition (MIDP 2.0 CLDC 1.0) based Twitter client built with a custom GUI framework. The application is part of a college project.UntitledGameProject: Untitled Game Project, Logic and Design for eventual port to Android, IOS and WP7web-framework: web??????????。C#??,framework2.0,???????。WebMatrix helpers from old school teachings: Remember when the web was fun? Well it's back with WebMatrix. 
We are providing just a few minor helper extensions with a sample app that should help Neo on his quest to be one with the Matrix.


  • CodePlex Daily Summary for Monday, November 28, 2011

    CodePlex Daily Summary for Monday, November 28, 2011Popular ReleasesCommonLibrary.NET: CommonLibrary.NET 0.9.8 - Alpha: A collection of very reusable code and components in C# 4.0 ranging from ActiveRecord, Csv, Command Line Parsing, Configuration, Holiday Calendars, Logging, Authentication, and much more. Samples in <root>\src\Lib\CommonLibrary.NET\Samples CommonLibrary.NET 0.9.8 AlphaNew Dynamic Scripting Language : workitem : 7493 Fixes 1622 6803Widget Suite for DotNetNuke: 01.04.00: The following features/enhancements are associated with this release: Bug: Removed the empty box/white space created by some widgets New Widget: FlexSlider New Widget: Google+ Button New Widget: Klout Badge Sample Widget Script FileTools for SharePoint: Reset SharePoint Configuration Cache: This tool is used to detect the existing location of the SharePoint configuration cache files then remove them to trigger the timer service to rebuild a fresh new cache. This tool runs on any SharePoint box version 2003 and above supporting x64 bit & x32 bit OS assuming .NET framework 3.5 is installed. You must run the tool with elevated privileges if running on Win 2008 server to ensure that the tool has enough rights to restart the timer service. The tool auto-detects whether its running i...WinRT File Based Database: 0.9.1.5: Implement IsBusy property to support Save button state. See Quick Start project that is distributed as part of the download for details on how to implement Save button, use IsBusy property and how to implement SimpleCommand to use behind the Save button.Multiwfn: Multiwfn2.2_source: Multiwfn2.2_sourceBatchus-GUI: Batchus-GUI-vb 0.1.3.3: Here is v0.1.3.3. It is relatively stable. Just need some more designer layout, and tutorials, and templates.Groovy IM: Groovy IM Version 0.3: Groovy IM Version 0.3 for Windows Phone 7Internet Cache Examiner: Internet Cache Examiner 0.9.2: This is the release binary for the 0.9.2 version of Internet Cache Examiner.Composite Data Service Framework: Composite Data Service Framework 1.0: This solution contains the Composite Data Service framework solution along with a Sample Project.FxCop Integrator for Visual Studio 2010: FxCop Integrator 2.0.0 RC: Replaced the MSBuild Tasks installer to fix the bug of the targets file. FxCop Integrator is not affected by this bug. (Nov 28 2011) New FeatureSupported calculating code metrics with Code Metrics PowerTool. (Work Item #6568: 6568). Provided MSBuild tasks. #7454: 7454 Supported to filter out auto-generated code from code analysis result. #7485: 7485 Supported exporting report of code analysis result. Supported multi-project analysis. Supported file level analysis. Added the featu...Terminals: Version 2 - Beta 4 Release: Beta 4 Refresh Build Dont forget to backup your config files BEFORE upgrading! As usual, please take time to use and abuse this release. We left logging in place, and this is a debug build so be sure to submit your logs on each bug reported, and please do report all bugs! Updated the About form to include the date and time of the build. Useful for CI builds to ensure we have the correct version "Favourites" and "History" save their expanded states after app restarts Code cleanup, secu...MiniTwitter: 1.76: MiniTwitter 1.76 ???? ?? ?????????? User Streams ???????????? User Streams ???????????、??????????????? REST ?????????? ?????????????????????????????? ??????????????????????????????Media Companion: MC 3.424b Weekly: Ensure .NET 4.0 Full Framework is installed. 
(Available from http://www.microsoft.com/download/en/details.aspx?id=17718) Ensure the NFO ID fix is applied when transitioning from versions prior to 3.416b. (Details here) Movie Show Resolutions... Resolved issue when reverting multiselection of movies to "-none-" Added movie rename support for subtitle files '.srt' & '.sub' Finalised code for '-1' fix - radiobutton to choose either filename or title Fixed issue with Movie Batch Wizard Fanart - ...Advanced Windows Phone Enginering Tool: WPE Downloads: This version of WPE gives you basic updating, restoring, and, erasing for your Windows Phone device.Anno 2070 Assistant: Beta v1.0 (STABLE): Anno 2070 Assistant Beta v1.0 Released! Features Included: Complete Building Layouts for Ecos, Tycoons & Techs Complete Production Chains for Ecos, Tycoons & Techs Completed Credits Screen Known Issues: Not all production chains and building layouts may be on the lists because they have not yet been discovered. However, data is still 99.9% complete. Currently the Supply & Demand, including Calculator screen are disabled until version 1.1.Minemapper: Minemapper v0.1.7: Including updated Minecraft Biome Extractor and mcmap to support the new Minecraft 1.0.0 release (new block types, etc).Visual Leak Detector for Visual C++ 2008/2010: v2.2.1: Enhancements: * strdup and _wcsdup functions support added. * Preliminary support for VS 11 added. Bugs Fixed: * Low performance after upgrading from VLD v2.1. * Memory leaks with static linking fixed (disabled calloc support). * Runtime error R6002 fixed because of wrong memory dump format. * version.h fixed in installer. * Some PVS studio warning fixed.NetSqlAzMan - .NET SQL Authorization Manager: 3.6.0.10: 3.6.0.10 22-Nov-2011 Update: Removed PreEmptive Platform integration (PreEmptive analytics) Removed all PreEmptive attributes Removed PreEmptive.dll assembly references from all projects Added first support to ADAM/AD LDS Thanks to PatBea. Work Item 9775: http://netsqlazman.codeplex.com/workitem/9775VideoLan DotNet for WinForm, WPF & Silverlight 5: VideoLan DotNet for WinForm, WPF, SL5 - 2011.11.22: The new version contains Silverlight 5 library: Vlc.DotNet.Silverlight. A sample could be tested here The new version add and correct many features : Correction : Reinitialize some variables Deprecate : Logging API, since VLC 1.2 (08/20/2011) Add subitem in LocationMedia (for Youtube videos, ...) Update Wpf sample to use Youtube videos Many others correctionsSharePoint 2010 FBA Pack: SharePoint 2010 FBA Pack 1.2.0: Web parts are now fully customizable via html templates (Issue #323) FBA Pack is now completely localizable using resource files. Thank you David Chen for submitting the code as well as Chinese translations of the FBA Pack! 
The membership request web part now gives the option of having the user enter the password and removing the captcha (Issue # 447) The FBA Pack will now work in a zone that does not have FBA enabled (Another zone must have FBA enabled, and the zone must contain the me...New ProjectsA lightweight database access component: DataAccessor?????????????,??DataAccessor????????????: 1.??????????SQL??,?????SQL???????????; 2.?????????; 3.????????????,??MSAccess??????????????????????????????????????,?????????; 4.?????????????DBMS,??:SqlServer、Oracle??MSAccess,??????DBMS???,??????N?????????; 5.????,?????????DataAccess??????DBMS; 6.???Sql????xml??????????????dbms?SQL?????,??sql??????????????; 7.???,???DLL??????????……AdHoc.Wavesque: Wavesque provides a simple infrastructure to generate waveforms.AIRO - Interoperable Experiment Automation Package: The main goal of this project is to provide engineers and scientists flexible and extendable framework for building test, measurement and control applications. This framework is compatible with IVI-COM drivers and extends IVI Instrument Classes with custom .NET (and COM) interfaces for such devices as: step motors, different positioning devices, magnet power supplies, lock-in amplifiers etc. We maintain IVI Foundation's aim: "simplify upgrading or replacing components in complex test systems...AKBK-Schulprojekt - USB-Guard: Das Projekt USB-Überwachung wird im Rahmen des Anwendungsentwicklungs-Unterrichts des Adolf-Kolping-Berufskollegs geplant und durchgeführt. Die Software wird zur Prävention von Manipulationsversuchen während einer Informatikklausur entwickelt. Sie erkennt manipulierte und nicht erlaubte USB-Datenträger, protokolliert deren Inhalt und gibt ggf. eine Warnung aus. Sie hilft dem Lehrer dabei, Manipulationsversuche schneller und effizienter zu erkennen.Android Vision: Project to learn all things Android and some image processingAuto Fill Title of Document in Document Library in SharePoint 2010: Automatically fill title of any document in any document library in SharePoint 2010.Batchus-GUI: A graphical user interface, used to create batch files.Bazeli: Windows Phone 7 application that supports tracking of expenses.Clannad: This is a family of many things.Csv2Entity: CSV 2 Entity is a serials of tools that deal with CSV files as well as Excel files and Access files. This framework include: A VSIX file which contains VB and CS source code generate wizard for CSV Objects. Read/Write CSV files facilities. Documentation Help facilities.Cypher Bot: Encrypt secrets, messages, documents, files and more. Then decrypt them. Then repeat with the US government's encryption standard: AES 256-bit (Accepted by NIST and NSA).Cypher Bot makes it fun and easy for anyone to secure files. This is the best security solution available on the web. You are now able to encrypt/decrypt files (avi, mov, mpeg, mp3, wav, png, jpg, txt, html, vb, js) and text ALL IN ONE beautiful slick interface. Cypher Bot is developed in visual basic.EmailWebLinker: A very simple text to html converter designed to deal with those email messages that contain a list of links to images. Any http links it finds to pictures are converted inline. eps files are downloaded and rendered. Can be easily extended.FileHasher: This project provides a simple tool for generating and verifying file hashes. I created this to help the QA team I work with. 
The project is all C# using .NET 3.5 SP1.Financial Controls: WPF/Silverlight Controls for Financial ApplicationsGroovy IM: Groovy IM makes it easy for Windows Phone 7 consumers to chat while on there Windows Phone 7 device(s). Groovy IM is developed in C# under the GPL V2 license.IBlog: Project created to learn things ASP.NETInjectivity (Dependency Injection): Injectivity is a dependency injection framework (written in C#) with a strong focus on the ease of configuration and performance. Having been written over 5 years and at version 2.8 with unit tests & intellisense comments it is a mature framework.Lizard Chess: Chess openings preparation tool using F#. WinForms C# used for UI.MCPD: I am doing a self study course for MCPD in .NET 4 (web track), so I am committing any custom source code as a result of my study in this open source location which I can later show the work for. * MCTS Exam 70-515 Web Applications Development with Microsoft .NET Framework 4MVC TreeView Helper: This fluent MVC TreeView helper makes it easy to build and customize an HTML unordered-list tree from a recursive model.Onion Architecture with ASP.NET MVC: Onion Architecture with ASP.NET MVCOpenBank: OpenBank est une application client/serveur destinée à la gestion de compte banquaire.Philosophy Widget: This Widget for the Mac OS X Dashboard aids in memorizing the association between known works of philosophy and their authors.Physic Engine: Physic EnginePUL Programming Utility Language: PUL is a programming utility language that allows people to do tasks automatically without having to manually do them, which that process would take longer. Using PUL, you can make programs that automatically do the work for you.QuakeMeApp: QuakeMeApp is a Windows Phone 7 Earthquake Alert AppSense/Net SourceCode Field Control: This is a Field Control for Sense/Net ECMS, it provides syntax highlighting.Silverlight Video CoverFlow: This a Silverlight Sample Application including a Coverflow of Video (streaming)SpaceConquest: Incorporated standard design patterns to build a peer to peer game in Java. The game rules were similar in complexity to games like Civilization and StarcraftSPGE - An XNA 2D graphics engine for Windows and Windows Phone 7.: SPGE is an open-source graphics engine build over XNA that allows the creation of simple 2D games that target Windows and Windows Phone 7. The aim of this graphic engine is to allow for an easy creation of simle 2D games, game prototyping, and teaching of game development.SQL CRUD Expression Builder: A library to build sql crud commands that is based on expressions so every part of the sql statement could be an expression like ColumnSetExpression, FilterExpression, JoinExpression, etc, is intended to be agnostic but right now is being tested only with SqlServer and MySql.TDD-Katas: *TDD-Katas* simply defines the Test Driven Development Katas. In this, I tried to create most famous katas to understand what is exactly Kata. So, get into the code and let us know for any improvementTFS Team Project Manager: TFS Team Project Manager automates various tasks across Team Projects in Team Foundation Server. If you find yourself managing multiple Team Projects for an organization and have recurring tasks and questions that repeat themselves over and over again, Team Project Manager probably has some answers for you. Team Project Manager can help you... 
* Manage build process templates (understanding which build templates are used by which build definitions, uploading new build process templates,...USB Camera Driver for Windows Embedded Compact 7: This project helps people to get the USB camera working on the Windows Embedded Compact 7.This is modified source of the WinCE 6.0 USB Camera Shared source available from the following link http://www.microsoft.com/download/en/details.aspx?id=19512 User Profile with RecordID replicator: <User Profile with RecordID replicator> will let SharePoint 2010 administrators create a new Profile database maintaining the RecordId of the profile to make sure the social features (I like it, tags, notes) are not gone.VS Tool for WSS 3.0: Visual Studio (2005 and 2008) add-ons for WSS. Included: - schema.xml explorerWhs2Smugmug: Windows Home Server Add-In for uploading files to Smugmug. Written in C# using .Net 3.5 and WCF. Also using the Smugmug API Project. ??UBBCODE: PHP????UBBCODE????,??????: 1.??????(10px ? 24px); 2.????; 3.?????; 4.??????; 5.??????; 6.??????(????????????); 7.??????; 8.?????; 9.?????; 10.???QQ??,??????; 11.???????(?????,??????????); 12.?????????; 13.????????; 14.?????????; 15.?????????; 16.?????????。

