Search Results

Search found 908 results on 37 pages for 'cascading deletes'.


  • Azure Grid Computing - Worker Roles as HPC Compute Nodes

    - by JoshReuben
    Overview

    · With HPC 2008 R2 SP1 you can add Azure worker roles as compute nodes in a local Windows HPC Server cluster.
    · The subscription for Windows Azure is charged like any other Azure service: for the time that the role instances are available, as well as for the compute and storage services that are used on the nodes.
    · Win-win? Azure charges the compute-hour cost (according to VM size) amortized over a month, so you save on purchasing compute node hardware. Microsoft wins because you need to purchase HPC to have a local head node for managing this compute cluster grid distributed in the cloud.
    · Blob storage is used to hold the input & output files of each job. I can see how parametric sweep HPC jobs can be supported (where the same job is run multiple times on each node against different input units), but not MPI.NET (where different HPC job instances function as coordinated agents and conduct master-slave inter-process communication), unless Azure is somehow tunneling MPI communication through inter-WorkerRole Azure Queues.
    · This is not the end of the story for Azure grid computing. If MS requires you to purchase a local HPC license (and administer it), what's to stop a 3rd party from doing this and encapsulating and exposing the HPC WCF Broker Service to you for managing compute nodes? If MS doesn't provide the head node as a service, someone else will!

    Process

    · Requires creation of a worker node template that specifies a connection to an existing subscription for Windows Azure, plus an availability policy for the worker nodes.
    · After worker nodes are added to the cluster, you can start them, which provisions the Windows Azure role instances, and then bring them online to run HPC cluster jobs.
    · A Windows Azure worker role instance runs an HPC-compatible Azure guest operating system, which runs on the VMs that host your service. The guest operating system is updated monthly. You can choose to upgrade the guest OS for your service automatically each time an update is released; all role instances defined by your service will run on the guest operating system version that you specify. See Windows Azure Guest OS Releases and SDK Compatibility Matrix (http://go.microsoft.com/fwlink/?LinkId=190549).
    · Use the hpcpack command to upload file packages and install files to run on the worker nodes. See hpcpack (http://go.microsoft.com/fwlink/?LinkID=205514).

    Requirements

    · Assumes you have an Azure subscription account and the HPC head node installed and configured.
    · Install HPC Pack 2008 R2 SP1. See Microsoft HPC Pack 2008 R2 Service Pack 1 Release Notes (http://go.microsoft.com/fwlink/?LinkID=202812).
    · Configure the head node to connect to the Internet. Connectivity is provided by the connection of the head node to the enterprise network; you may need to configure a proxy client on the head node. Any cluster network topology (1-5) is supported.
    · Configure the firewall to allow outbound TCP traffic on the following ports: 80, 443, 5901, 5902, 7998, 7999 (see the example commands at the end of this post).
    · Note: HPC Server uses Admin Mode (elevated privileges) in Windows Azure to give the service administrator of the subscription the necessary privileges to initialize HPC cluster services on the worker nodes.
    · Obtain a Windows Azure subscription certificate. The Windows Azure subscription must be configured with a public subscription (API) certificate: a valid X.509 certificate with a key size of at least 2048 bits. Generate a self-signed certificate & upload a .cer file via the Windows Azure Portal Account page > Manage my API Certificates link. See Using the Windows Azure Service Management API (http://go.microsoft.com/fwlink/?LinkId=205526).
    · Import the certificate with an associated private key on the HPC cluster head node, into the trusted root store of the local computer account.

    Obtain Windows Azure Connection Information for HPC Server

    · Required for each worker node template.
    · Copy from the Azure portal: navigation pane > Hosted Services > Storage Accounts & CDN.
    · Subscription ID: a 32-char hex string in the form xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx. In the Properties pane.
    · Subscription certificate thumbprint: a 40-char hex string (you need to remove spaces). In Management Certificates > Properties pane.
    · Service name: the value of <ServiceName> configured in the public URL of the service (http://<ServiceName>.cloudapp.net). In Hosted Services > Properties pane.
    · Blob storage account name: the value of <StorageAccountName> configured in the public URL of the account (http://<StorageAccountName>.blob.core.windows.net). In Storage Accounts > Properties pane.

    Import the Azure Subscription Certificate on the HPC Head Node

    · This enables the services for Windows HPC Server to authenticate properly with the Windows Azure subscription.
    · Use the Certificates MMC snap-in to import the certificate to the Trusted Root Certification Authorities store of the local computer account. The certificate must be in PFX format (.pfx or .p12 file) with a private key that is protected by a password. See Certificates (http://go.microsoft.com/fwlink/?LinkId=163918).
    · To open the Certificates snap-in: Run > mmc, then File > Add/Remove Snap-in > Certificates > Computer account > Local Computer.
    · To import the certificate via the wizard: Certificates > Trusted Root Certification Authorities > Certificates > All Tasks > Import.
    · After the certificate is imported, it appears in the details pane in the Certificates snap-in. You can open the certificate to check its status.

    Configure a Proxy Client on the HPC Head Node

    · The following Windows HPC Server services must be able to communicate over the Internet (through the firewall) with the services for Windows Azure: HPCManagement, HPCScheduler, HPCBrokerWorker.

    Create a Windows Azure Worker Node Template

    · Edit HPC node templates in HPC Node Template Editor.
    · Specify: 1) the Windows Azure subscription connection info (unique service name) for adding a set of worker nodes to the cluster, and 2) the worker node availability policy: rules for deploying/removing worker role instances in Windows Azure.
      o HPC Cluster Manager > Configuration > Navigation Pane > Node Templates > Actions pane > New > Create Node Template Wizard, or Edit > Node Template Editor.
      o Choose Node Template Type page: Windows Azure worker node template.
      o Specify Template Name page: template name & description.
      o Provide Connection Information page: Azure Subscription ID (text) & subscription certificate (browse).
      o Provide Service Information page: Azure service name + blob storage account name (optionally click Retrieve Connection Information to get the list of those available from Azure; possible LRT).
      o Configure Azure Availability Policy page: how Windows Azure worker nodes start/stop (online/offline the worker role instances, i.e. add/remove them): manual or automatic.
      o For automatic: in the Configure Windows Azure Worker Availability Policy dialog, select the days and hours for worker nodes to start/stop.
    · To validate the Windows Azure connection information, on the template's Connection Information tab, click Validate connection information.
    · You can upload a file package to the storage account that is specified in the template, e.g. upload application or service files that will run on the worker nodes. See hpcpack (http://go.microsoft.com/fwlink/?LinkID=205514).

    Add Azure Worker Nodes to the HPC Cluster

    · Use the Add Node Wizard. Specify: 1) the worker node template, 2) the number of worker nodes (within the quota of role instances in the Azure subscription), and 3) the VM size of the worker nodes: ExtraSmall, Small, Medium, Large, or ExtraLarge.
    · To add worker nodes of different sizes, you must run the Add Node Wizard separately for each size.
    · All worker nodes that are added to the cluster by using a specific worker node template define a set of worker nodes that will be deployed and managed together in Windows Azure when you start the nodes. This includes worker nodes that you add later by using the worker node template and, if you choose, worker nodes of different sizes. You cannot start, stop, or delete individual worker nodes.
    · To add Windows Azure worker nodes:
      o In HPC Cluster Manager: Node Management > Actions pane > Add Node > Add Node Wizard.
      o Select Deployment Method page: Add Azure Worker nodes.
      o Specify New Nodes page: select a worker node template, then specify the number and size of the worker nodes.
    · After you add worker nodes to the cluster, they are in the Not-Deployed state, and they have a health state of Unapproved. Before you can use the worker nodes to run jobs, you must start them and then bring them online.
    · Worker nodes are numbered consecutively in a naming series that begins with the root name AzureCN; this is non-configurable.

    Deploying Windows Azure Worker Nodes

    · To deploy the role instances in Windows Azure, start the worker nodes added to the HPC cluster and bring the nodes online so that they are available to run cluster jobs. This can be configured in the HPC Azure worker node template (Azure Availability Policy) to be automatic or manual.
    · The Start, Stop, and Delete actions take place on the set of worker nodes that are configured by a specific worker node template. You cannot perform one of these actions on a single worker node in a set. You also cannot perform a single action on two sets of worker nodes (specified by two different worker node templates).
    · Starting a set of worker nodes deploys a set of worker role instances in Windows Azure, which can take some time to complete, depending on the number of worker nodes and the performance of Windows Azure.
    · To start worker nodes manually and bring them online:
      o In HPC Node Management > Navigation Pane > Nodes > List / Heat Map view, select one or more worker nodes.
      o Actions pane > Start; in the Start Azure Worker Nodes dialog, select a node template.
      o The state of the worker nodes changes from Not Deployed; to track the provisioning progress, see the worker node Details Pane > Provisioning Log tab.
      o If there were errors during the provisioning of one or more worker nodes, the state of those nodes is set to Unknown and the node health is set to Unapproved. To determine the reason for the failure, review the provisioning logs for the nodes.
      o After a worker node starts successfully, the node state changes to Offline. To bring the nodes online, select the nodes that are in the Offline state > Bring Online.
    · Troubleshooting:
      o Check the node template.
      o Use telnet to test connectivity: telnet <ServiceName>.cloudapp.net 7999
      o Check node status: deployment status information appears in the service account information in the Windows Azure Portal (HPC queries this); see the node status information for any failed nodes in HPC Node Management.
    · When role instances are deployed, file packages that were previously uploaded to the storage account using the hpcpack command are automatically installed. You can also upload file packages to storage after the worker nodes are started, and then manually install them on the worker nodes. See hpcpack (http://go.microsoft.com/fwlink/?LinkID=205514).
    · To remove a set of role instances in Windows Azure, stop the nodes by using HPC Cluster Manager (apply the Stop action). This deletes the role instances from the service and changes the state of the worker nodes in the HPC cluster to Not Deployed.
    · Each time that you start a set of worker nodes, two proxy role instances (size Small) are configured in Windows Azure to facilitate communication between HPC Cluster Manager and the worker nodes. The proxy role instances are not listed in HPC Cluster Manager after the worker nodes are added; however, the instances appear in the Windows Azure Portal. The proxy role instances incur charges in Windows Azure along with the worker node instances, and they count toward the quota of role instances in the subscription.
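    For illustration only, the firewall and certificate prerequisites above map onto standard Windows command-line tools. The rule name and certificate subject below are arbitrary placeholders rather than values mandated by HPC Server, so treat this as a sketch of the idea, not the official procedure:

        REM Allow the outbound TCP ports that HPC Server needs to reach Windows Azure
        netsh advfirewall firewall add rule name="HPC Azure outbound" dir=out action=allow protocol=TCP remoteport=80,443,5901,5902,7998,7999

        REM Generate a self-signed 2048-bit certificate (exportable private key) for the subscription
        makecert -r -pe -len 2048 -ss My -sr LocalMachine -n "CN=HpcAzureSubscription" hpcazure.cer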

    Read the article

  • Entity Framework version 1 – Brief Synopsis and Tips – Part 1

    - by Rohit Gupta
    To do eager loading, use projections (e.g. from c in context.Contacts select new { c, c.Addresses }) or use the Include query builder method (Include("Addresses")); see the first sketch at the end of these notes.
    If there is multi-level hierarchical data, then to eager load all the relationships use Include paths: customers.Include("Order.OrderDetail") includes the Order and OrderDetail collections, and customers.Include("Order.OrderDetail.Location") includes the Order, OrderDetail and Location collections with a single Include statement.
    ===========================================================================
    If the query uses joins, the Include() query builder method will be ignored; use nested queries instead.
    If the query does projections, the Include() query builder method will be ignored.
    Use Address.ContactReference.Load() or Contact.Addresses.Load() if you need to deferred-load a specific entity; this will result in extra round trips to the database.
    ObjectQuery<> cannot return anonymous types; it will return an ObjectQuery<DbDataRecord>.
    Only the Include method can be appended to LINQ query methods, but any LINQ query method can be appended to query builder methods. If you need to append a query builder method (other than Include) after a LINQ method, cast the IQueryable<Contact> to ObjectQuery<Contact> and then append the query builder method to it.
    ===========================================================================
    Query builder methods are the Select, Where and Include methods, which take Entity SQL as parameters, e.g. "it.StartDate, it.EndDate".
    When query builder methods do projection, they return ObjectQuery<DbDataRecord>; to iterate over this collection use contact.Item["Name"].ToString().
    When LINQ to Entities methods do projection, they return a collection of anonymous types; the collection is therefore strongly typed and supports IntelliSense.
    The EF ObjectContext can track changes only on entities, not on anonymous types.
    If you use a DefiningQuery for an EntitySet, the EntitySet becomes read-only, since a DefiningQuery is the same as a view (which is treated as read-only by default). However, if you want to use this EntitySet for inserts/updates/deletes, you need to map stored procs (as created in the DB) to the insert/update/delete functions of the entity in the designer.
    You can use either the Execute method or the ToList() method to bind data to data sources/binding sources. If you use the Execute method, remember that you can traverse the ObjectResult<> collection (returned by Execute) only ONCE.
    In WPF use ObservableCollection to bind to data sources, for keeping track of changes and letting EF send updates to the DB automatically.
    Use extension methods to add logic to entities, e.g. create extension methods for the EntityObject class, or create a method in the ObjectContext partial class and pass the entity as a parameter, then call this method as desired from within each entity.
    ================================================================
    DefiningQueries and stored procedures: for custom entities, one can use a DefiningQuery or stored procedures, so the custom entity collection will be populated using the DefiningQuery (of the EntitySet) or the sproc. If you use a sproc to populate the EntityCollection, the query executes immediately on the server side, and any filters will be applied in the client app. If we use a DefiningQuery, these queries are composable, meaning the filters (if applied to the EntitySet) will all be sent together as a single query to the DB, returning only filtered results.
    If the sproc returns results that cannot be mapped to an existing entity, we first create the Entity/EntitySet in the CSDL using the designer, then create a dummy Entity/EntitySet using XML in the SSDL. When creating an EntitySet in the SSDL for this dummy entity, use TSQL that does not return any results but does return the relevant columns, e.g. select ContactID, FirstName, LastName from dbo.Contact where 1=2. Also ensure that the entity created in the SSDL uses the SQL data types and not .NET data types. If you are unable to open the EDMX file in the designer, note the errors; they will give precise info on what is wrong.
    The third option is to simply create a native query in the SSDL using <Function Name="PaymentsforContact" IsComposable="false"><CommandText>SELECT ActivityId, Activity AS ActivityName, ImagePath, Category FROM dbo.Activities</CommandText></Function>, and then map this Function to an existing entity. This is a quick way to get a custom entity which is a regular entity with renamed columns or additional (computed) columns. The disadvantage is that it will return all the rows from the defining query, and any filter (if defined) will be applied only at the client side (after getting all the rows from the DB). If you use a DefiningQuery instead, we can use that as a composable query.
    The fourth option (for mapping READ stored proc results to a non-existent entity) is to create a view in the database which returns all the fields that the sproc also returns, then update the model so that it contains this view as an entity, and map the read sproc to this view entity. The other option would be to simply create the view and remove the sproc altogether.
    ================================================================
    To execute a sproc that does not return an entity, use an EntityCommand to execute that proc. You cannot call a sproc FunctionImport that does not return entities from code; the only way is to use SSDL function calls using EntityCommand (see the second sketch at the end of these notes). This changes with Entity Framework version 4, where you can return scalar types, complex types, entities or NonQuery.
    ================================================================
    When a UDF is created as a Function in the SSDL, we need to set the Name & IsComposable properties on the Function element. IsComposable is always false for sprocs; for UDFs set it to true. You cannot call a UDF "Function" from within code, since you cannot import a UDF Function into the CSDL model (with version 1 of EF); only stored procedures can be imported and then mapped to an entity.
    ================================================================
    Entity Framework requires properties that are involved in association mappings to be mapped in all of the function mappings for the entity (insert, update and delete). Because Payment has an association to Reservation, we need to pass both the PaymentId and the ReservationId to the delete sproc, even though just the PaymentId is the PK on the Payment table.
    ================================================================
    When mapping insert, update and delete procs to an entity, ensure that either all three or none are mapped. Further, if you have a base class and derived class in the CSDL, you must map (insert, update, delete) sprocs to all parent and child entities in the inheritance relationship. Note that this limitation (that base and derived entity methods must all be mapped) does not apply when you are mapping read stored procedures.
    ================================================================
    You can write stored procedure SQL directly into the SSDL by creating a Function element in the SSDL; once created, you can map this Function to a CSDL entity directly in the designer during Function Import.
    ================================================================
    You can do entity splitting, such that one entity maps to multiple tables in the DB. For example, the Customer entity currently derives from the Contact entity; in addition it also references the ContactPersonalInfo entity. One can copy all properties from the ContactPersonalInfo entity into the Customer entity and then delete the ContactPersonalInfo entity; finally one needs to map the copied properties to the ContactPersonalInfo table in Table Mapping (by adding another table, ContactPersonalInfo, to the Table Mapping). This is called entity splitting. Now when you insert a Customer record, it will automatically create SQL to insert records into the Contact, Customers and ContactPersonalInfo tables, even though you have a single entity called Customer in the CSDL.
    ===================================================================
    There is Table per Type inheritance, where one EDM entity can derive from another EDM entity and absorb the inherited entity's properties. For example, in the Break Away Geek Adventures EDM, the Customer entity derives (inherits) from the Contact entity and absorbs all the properties of the Contact entity. Thus when you create a Customer entity in code and then call context.SaveChanges, the ObjectContext will first create the TSQL to insert into the Contact table, followed by TSQL to insert into the Customer table.
    ===================================================================
    Then there is Table per Hierarchy inheritance, where different types are created based on a condition (similar to applying a condition to filter an entity to contain filtered records), the difference being that the filter condition populates a new entity type (derived from the base entity). In the BreakAway sample the example is the Lodging entity, which is an abstract entity, and the Resort and NonResort entities, which derive from the Lodging entity; records are filtered based on the value of the Resort boolean field.
    ===================================================================
    Then there is Table per Concrete Type inheritance, where we create a concrete entity for each table in the database. In the BreakAway sample there is an entity for the Reservation table and another entity for the OldReservation table, even though both tables contain the same number of fields. The OldReservation entity can then inherit from the Reservation entity; configure the OldReservation entity to remove all scalar properties from the entity (since it inherits the properties from Reservation and filters based on the ReservationDate field).
    ===================================================================
    Complex types (complex properties): entities in EF can also contain complex properties (in addition to scalar properties), and these complex properties reference a ComplexType (not an EntityType).
    DropDownList, ListBox, RadioButtonList, CheckBoxList and BulletedList are examples of list server controls (not data-bound controls); these controls cannot use complex properties during databinding, they need scalar properties. So if an entity contains complex properties and you need to bind those to list server controls, use projections to return scalar properties and bind them to the control (the disadvantage is that projected collections are not tracked by the ObjectContext and hence cannot persist changes to the projected collections bound to controls).
    ObjectDataSource and EntityDataSource do account for complex properties, and one can bind entities with complex properties to data source controls; they will be tracked for changes, with no additional plumbing needed to persist changes to these collections bound to controls.
    So data-bound controls like GridView and FormView need to use EntityDataSource or ObjectDataSource as a data source for entities that contain complex properties, so that changes made to the data source through the GridView can be persisted to the DB (enabling the controls for updates). If you cannot use the EntityDataSource, you need to flatten the ComplexType properties using projections.
    With EF version 4, ComplexTypes are supported by the designer and you can add/remove/compose complex types directly using the designer.
    ===================================================================
    Conditional mapping is like Table per Hierarchy inheritance, where entities inherit from a base class and conditions are then used to populate the EntitySet. Conditional mapping has limitations, since you can only use the =, IS NULL and IS NOT NULL conditions. If you need more operators for filtering/mapping conditionally, use a QueryView (or possibly a DefiningQuery) to create a read-only entity. QueryViews are read-only by default; the EntitySet created by the QueryView is enabled for change tracking by the ObjectContext, however the ObjectContext cannot create insert/update/delete TSQL statements for these entities when SaveChanges is called, since it is a QueryView. One way to get around this limitation is to map stored procedures for the insert/update/delete operations in the designer.
    ===================================================================
    Difference between QueryView and DefiningQuery: a QueryView is defined in the mapping (MSL) file/section of the EDM XML, whereas a DefiningQuery is defined in the store schema (SSDL). A QueryView is written using Entity SQL and is thus database agnostic and can be used against any database/data layer. A DefiningQuery is written in the database's own language, i.e. TSQL or PSQL, so you have more control.
    ===================================================================
    Performance: lazy loading is deferred loading done automatically. Lazy loading is supported with EF version 4 and is on by default. If you need to turn it off, use context.ContextOptions.LazyLoadingEnabled = false.
    To improve performance, consider precompiling the ObjectQuery using the CompiledQuery.Compile method.
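    To make the eager loading and precompilation tips above concrete, here is a minimal sketch. The BAEntities context and the Contact/Customer/Reservation names are hypothetical stand-ins for a generated EF model, and the ContextOptions line applies to EF version 4 only, where lazy loading exists:

        using System;
        using System.Data.Objects;
        using System.Linq;

        class EagerLoadingSample
        {
            static void Run()
            {
                using (var context = new BAEntities()) // hypothetical generated ObjectContext
                {
                    // EF v4 only: lazy loading is on by default, so turn it off explicitly.
                    context.ContextOptions.LazyLoadingEnabled = false;

                    // Eager load a multi-level hierarchy with a single Include path.
                    var customers = context.Customers
                        .Include("Reservations.Payments")
                        .ToList();

                    // Include is not a LINQ operator, so appending it after LINQ methods
                    // requires casting the IQueryable<T> back to ObjectQuery<T>.
                    var filtered = context.Contacts.Where(c => c.LastName.StartsWith("G"));
                    var withAddresses = ((ObjectQuery<Contact>)filtered)
                        .Include("Addresses")
                        .ToList();

                    // Precompile a frequently executed query to skip repeated
                    // LINQ-to-Entity-SQL translation on every call.
                    var byName = CompiledQuery.Compile<BAEntities, string, IQueryable<Contact>>(
                        (ctx, name) => ctx.Contacts.Where(c => c.LastName == name));
                    var matches = byName(context, "Gupta").ToList();
                }
            }
        }

    Likewise, the EntityCommand route for a sproc that returns no entities might look like the following sketch; the store container and function names here are made-up examples of an SSDL Function, not code from the original post:

        using System;
        using System.Data;
        using System.Data.EntityClient;

        class EntityCommandSample
        {
            static void Run()
            {
                using (var context = new BAEntities()) // hypothetical generated ObjectContext
                {
                    var connection = (EntityConnection)context.Connection;
                    connection.Open();

                    // Address the SSDL function as <ContainerName>.<FunctionName>.
                    using (var command = connection.CreateCommand())
                    {
                        command.CommandText = "BAEntitiesStoreContainer.ClearExpiredReservations";
                        command.CommandType = CommandType.StoredProcedure;
                        command.Parameters.AddWithValue("cutoffDate", DateTime.Today);
                        command.ExecuteNonQuery();
                    }
                }
            }
        }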

    Read the article

  • Using RIA DomainServices with ASP.NET and MVC 2

    - by Bobby Diaz
    Recently, I started working on a new ASP.NET MVC 2 project and I wanted to reuse the data access (LINQ to SQL) and business logic methods (WCF RIA Services) that had been developed for a previous project that used Silverlight for the front-end.  I figured that I would be able to instantiate the various DomainService classes from within my controller's action methods, because after all, the code for those services didn't look very complicated.  WRONG!  I didn't realize at first that some of the functionality is handled automatically by the framework when the domain services are hosted as WCF services.  After some initial searching, I came across an invaluable post by Joe McBride, which described how to get RIA Service .svc files to work in an MVC 2 Web Application, and another by Brad Abrams.  Unfortunately, Brad's solution was for an earlier preview release of RIA Services and no longer works with the version that I am running (PDC Preview). I have not tried the RC version of WCF RIA Services, so I am not sure if any of the issues I am having have been resolved, but I wanted to come up with a way to reuse the shared libraries so I wouldn't have to write a non-RIA version that basically did the same thing.  The classes I came up with work with the scenarios I have encountered so far, but I wanted to go ahead and post the code in case someone else is having the same trouble I had.  Hopefully this will save you a few headaches!

    1. Querying

    When I first tried to use a DomainService class to perform a query inside one of my controller's action methods, I got an error stating that "This DomainService has not been initialized."  To solve this issue, I created an extension method for all DomainServices that creates the required DomainServiceContext and passes it to the service's Initialize() method.  Here is the code for the extension method; notice that I am creating a sort of mock HttpContext for those cases when the service is running outside of IIS, such as during unit testing!

    public static class ServiceExtensions
    {
        /// <summary>
        /// Initializes the domain service by creating a new <see cref="DomainServiceContext"/>
        /// and calling the base DomainService.Initialize(DomainServiceContext) method.
        /// </summary>
        /// <typeparam name="TService">The type of the service.</typeparam>
        /// <param name="service">The service.</param>
        /// <returns></returns>
        public static TService Initialize<TService>(this TService service)
            where TService : DomainService
        {
            var context = CreateDomainServiceContext();
            service.Initialize(context);
            return service;
        }

        private static DomainServiceContext CreateDomainServiceContext()
        {
            var provider = new ServiceProvider(new HttpContextWrapper(GetHttpContext()));
            return new DomainServiceContext(provider, DomainOperationType.Query);
        }

        private static HttpContext GetHttpContext()
        {
            var context = HttpContext.Current;

    #if DEBUG
            // create a mock HttpContext to use during unit testing...
            if ( context == null )
            {
                var writer = new StringWriter();
                var request = new SimpleWorkerRequest("/", "/",
                    String.Empty, String.Empty, writer);

                context = new HttpContext(request)
                {
                    User = new GenericPrincipal(new GenericIdentity("debug"), null)
                };
            }
    #endif

            return context;
        }
    }

    With that in place, I can use it almost as normally as my first attempt, except with a call to Initialize():

    public ActionResult Index()
    {
        var service = new NorthwindService().Initialize();
        var customers = service.GetCustomers();

        return View(customers);
    }

    2. Insert / Update / Delete

    Once I got the records showing up, I was trying to insert new records or update existing data when I ran into the next issue.  I say issue because I wasn't getting any kind of error, which made it a little difficult to track down.  But once I realized that the DataContext.SubmitChanges() method gets called automatically at the end of each domain service submit operation, I could start working on a way to mimic the behavior of a hosted domain service.  What I came up with was a base class called LinqToSqlRepository<T> that basically sits between your implementation and the default LinqToSqlDomainService<T> class.

    [EnableClientAccess()]
    public class NorthwindService : LinqToSqlRepository<NorthwindDataContext>
    {
        public IQueryable<Customer> GetCustomers()
        {
            return this.DataContext.Customers;
        }

        public void InsertCustomer(Customer customer)
        {
            this.DataContext.Customers.InsertOnSubmit(customer);
        }

        public void UpdateCustomer(Customer currentCustomer)
        {
            this.DataContext.Customers.TryAttach(currentCustomer,
                this.ChangeSet.GetOriginal(currentCustomer));
        }

        public void DeleteCustomer(Customer customer)
        {
            this.DataContext.Customers.TryAttach(customer);
            this.DataContext.Customers.DeleteOnSubmit(customer);
        }
    }

    Notice the new base class name (just change LinqToSqlDomainService to LinqToSqlRepository).  I also added a couple of DataContext (for Table<T>) extension methods called TryAttach that will check to see if the supplied entity is already attached before attempting to attach it, which would cause an error!  (A possible implementation of these helpers is sketched at the end of this post.)

    3. LinqToSqlRepository<T>

    Below is the code for the LinqToSqlRepository class.  The comments are pretty self-explanatory, but be aware of the [IgnoreOperation] attributes on the generic repository methods, which ensure that they will be ignored by the code generator and not available in the Silverlight client application.

    /// <summary>
    /// Provides generic repository methods on top of the standard
    /// <see cref="LinqToSqlDomainService&lt;TContext&gt;"/> functionality.
    /// </summary>
    /// <typeparam name="TContext">The type of the context.</typeparam>
    public abstract class LinqToSqlRepository<TContext> : LinqToSqlDomainService<TContext>
        where TContext : System.Data.Linq.DataContext, new()
    {
        /// <summary>
        /// Retrieves an instance of an entity using its unique identifier.
        /// </summary>
        /// <typeparam name="TEntity">The type of the entity.</typeparam>
        /// <param name="keyValues">The key values.</param>
        /// <returns></returns>
        [IgnoreOperation]
        public virtual TEntity GetById<TEntity>(params object[] keyValues) where TEntity : class
        {
            var table = this.DataContext.GetTable<TEntity>();
            var mapping = this.DataContext.Mapping.GetTable(typeof(TEntity));

            var keys = mapping.RowType.IdentityMembers
                .Select((m, i) => m.Name + " = @" + i)
                .ToArray();

            return table.Where(String.Join(" && ", keys), keyValues).FirstOrDefault();
        }

        /// <summary>
        /// Creates a new query that can be executed to retrieve a collection
        /// of entities from the <see cref="DataContext"/>.
        /// </summary>
        /// <typeparam name="TEntity">The type of the entity.</typeparam>
        /// <returns></returns>
        [IgnoreOperation]
        public virtual IQueryable<TEntity> GetEntityQuery<TEntity>() where TEntity : class
        {
            return this.DataContext.GetTable<TEntity>();
        }

        /// <summary>
        /// Inserts the specified entity.
        /// </summary>
        /// <typeparam name="TEntity">The type of the entity.</typeparam>
        /// <param name="entity">The entity.</param>
        /// <returns></returns>
        [IgnoreOperation]
        public virtual bool Insert<TEntity>(TEntity entity) where TEntity : class
        {
            //var table = this.DataContext.GetTable<TEntity>();
            //table.InsertOnSubmit(entity);

            return this.Submit(entity, null, DomainOperation.Insert);
        }

        /// <summary>
        /// Updates the specified entity.
        /// </summary>
        /// <typeparam name="TEntity">The type of the entity.</typeparam>
        /// <param name="entity">The entity.</param>
        /// <returns></returns>
        [IgnoreOperation]
        public virtual bool Update<TEntity>(TEntity entity) where TEntity : class
        {
            return this.Update(entity, null);
        }

        /// <summary>
        /// Updates the specified entity.
        /// </summary>
        /// <typeparam name="TEntity">The type of the entity.</typeparam>
        /// <param name="entity">The entity.</param>
        /// <param name="original">The original.</param>
        /// <returns></returns>
        [IgnoreOperation]
        public virtual bool Update<TEntity>(TEntity entity, TEntity original)
            where TEntity : class
        {
            if ( original == null )
            {
                original = GetOriginal(entity);
            }

            var table = this.DataContext.GetTable<TEntity>();
            table.TryAttach(entity, original);

            return this.Submit(entity, original, DomainOperation.Update);
        }

        /// <summary>
        /// Deletes the specified entity.
        /// </summary>
        /// <typeparam name="TEntity">The type of the entity.</typeparam>
        /// <param name="entity">The entity.</param>
        /// <returns></returns>
        [IgnoreOperation]
        public virtual bool Delete<TEntity>(TEntity entity) where TEntity : class
        {
            //var table = this.DataContext.GetTable<TEntity>();
            //table.TryAttach(entity);
            //table.DeleteOnSubmit(entity);

            return this.Submit(entity, null, DomainOperation.Delete);
        }

        protected virtual bool Submit(Object entity, Object original, DomainOperation operation)
        {
            var entry = new ChangeSetEntry(0, entity, original, operation);
            var changes = new ChangeSet(new ChangeSetEntry[] { entry });
            return base.Submit(changes);
        }

        private TEntity GetOriginal<TEntity>(TEntity entity) where TEntity : class
        {
            var context = CreateDataContext();
            var table = context.GetTable<TEntity>();
            return table.FirstOrDefault(e => e == entity);
        }
    }

    4. Conclusion

    So there you have it, a fully functional repository implementation for your RIA domain services that can be consumed by your ASP.NET and MVC applications.  I have uploaded the source code along with unit tests and a sample web application that queries the Customers table from inside a controller, as well as a Silverlight usage example. As always, I welcome any comments or suggestions on the approach I have taken.  If there is enough interest, I plan on contacting Colin Blair or maybe even the man himself, Brad Abrams, to see if this is something worthy of inclusion in the WCF RIA Services Contrib project.  What do you think? Enjoy!
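    One note on the TryAttach extension methods referenced above: their source is not shown in the post. Here is a minimal sketch of what they might look like, built on LINQ to SQL's standard Table<TEntity>.Attach overloads and using GetOriginalEntityState as the "already tracked" test; this is a hypothetical reconstruction, not the author's actual code:

    using System.Data.Linq;

    public static class TableExtensions
    {
        /// <summary>
        /// Attaches the entity only if the context is not already tracking it.
        /// </summary>
        public static void TryAttach<TEntity>(this Table<TEntity> table, TEntity entity)
            where TEntity : class
        {
            // GetOriginalEntityState returns null for entities the context is not tracking.
            if (table.GetOriginalEntityState(entity) == null)
            {
                table.Attach(entity);
            }
        }

        /// <summary>
        /// Attaches the entity with its original values (for optimistic concurrency)
        /// when an original is available, or as modified otherwise.
        /// </summary>
        public static void TryAttach<TEntity>(this Table<TEntity> table,
            TEntity entity, TEntity original) where TEntity : class
        {
            if (table.GetOriginalEntityState(entity) != null)
            {
                return; // already attached
            }

            if (original != null)
            {
                table.Attach(entity, original);
            }
            else
            {
                // Attach as modified; requires a version/timestamp member on the entity.
                table.Attach(entity, true);
            }
        }
    }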

    Read the article

  • Summary of Oracle E-Business Suite Technology Webcasts and Training

    - by BillSawyer
    Last Updated: November 16, 2011

    We're glad to hear that you've been finding our ATG Live Webcast series to be useful.  If you missed a webcast, you can download the presentation materials and listen to the recordings below. We're collecting other learning-related materials right now, and we'll update this summary with pointers to new training resources on an ongoing basis.

    ATG Live Webcast Replays

    All of the ATG Live Webcasts are hosted by the Oracle University Knowledge Center.  In order to access the replays, you will need a free Oracle.com account. You can register for an Oracle.com account here. If you are a first-time OUKC user, you will have to accept the Terms of Use. Sign in with your Oracle.com account, or if you don't already have one, use the link provided on the sign-in screen to create an account. After signing in, accept the Terms of Use. Upon completion of these steps, you will be directed to the replay. You only need to accept the Terms of Use once; your acceptance will be noted on your account for all future OUKC replays and event registrations.

    1. E-Business Suite R12 Oracle Application Framework (OAF) Rich User Interface Enhancements (Presentation)
    Prabodh Ambale (Senior Manager, ATG Development) and Gustavo Jiminez (Development Manager, ATG Development) offer a comprehensive review of the latest user interface enhancements and updates to OA Framework in EBS 12.  The webcast provides a detailed look at new features designed to enhance usability, including new capabilities for personalization and extensions, and features that support the use of dashboards and web services. (January 2011)

    2. E-Business Suite R12 Service Oriented Architectures (SOA) Using the E-Business Suite Adapter (Presentation, Viewlet)
    Neeraj Chauhan (Product Manager, ATG Development) reviews the Service Oriented Architecture (SOA) capabilities within E-Business Suite 12, focusing on using the E-Business Suite Adapter to integrate EBS with third-party applications via web services, and orchestrate services and distributed transactions across disparate applications. (February 2011)

    3. Deploying Oracle VM Templates for Oracle E-Business Suite and Oracle PeopleSoft Enterprise Applications
    Ivo Dujmovic (Director, ATG Development) reviews the latest capabilities for using Oracle VM to deploy virtualized EBS database and application tier instances using prebuilt EBS templates, wire those virtualized instances together using the EBS virtualization kit, and take advantage of live migration of user sessions between failing application tier nodes.  (February 2011)

    4. How to Reduce Total Cost of Ownership (TCO) Using Oracle E-Business Suite Management Packs (Presentation)
    Angelo Rosado (Product Manager, ATG Development) provides an overview of how EBS sysadmins can make their lives easier with the Management Packs for Oracle E-Business Suite Release 12.  This session highlights key features in Application Management Pack (AMP) and Application Change Management Pack that can automate or streamline system configurations, monitor EBS performance and uptime, keep multiple EBS environments in sync with patches and configurations, and create patches for your own EBS customizations and apply them with Oracle's own patching tools.  (June 2011)

    5. Upgrading E-Business Suite 11i Customizations to R12 (Presentation)
    Sara Woodhull (Principal Product Manager, ATG Development) provides an overview of how E-Business Suite developers can manage and upgrade existing EBS 11i customizations to R12.  Sara covers methods for comparing customizations between Release 11i and 12, managing common customization types, managing deprecated technologies, and more. (July 2011)

    6. Tuning All Layers of E-Business Suite (Part 1 of 3) (Presentation)
    Lester Gutierrez, Senior Architect, and Deepak Bhatnagar, Senior Manager, from the E-Business Suite Application Performance team, lead Tuning All Layers of E-Business Suite (Part 1 of 3). This webcast provides an overview of how Oracle E-Business Suite system administrators, DBAs, developers, and implementers can improve E-Business Suite performance by following a performance tuning framework. Part 1 focuses on the performance triage approach, tuning applications modules, upgrade performance best practices, and tuning the database tier. This ATG Live Webcast is an expansion of the performance sessions at conferences that are perennial favourites with hardcore Apps DBAs. (August 2011)

    7. Oracle E-Business Suite Directions: Deployment and System Administration (Presentation)
    Max Arderius (Manager, Applications Technology Group) and Ivo Dujmovic (Director, Applications Technology Group) lead Oracle E-Business Suite Directions: Deployment and System Administration, covering important changes in E-Business Suite R12.2. The changes discussed in this presentation include Oracle E-Business Suite architecture, installation, upgrade, WebLogic Server integration, online patching, and cloning. This webcast provides an overview of how Oracle E-Business Suite system administrators, DBAs, developers, and implementers can prepare themselves for these changes in R12.2 of Oracle E-Business Suite. (October 2011)

    Oracle University Courses

    For a general listing of all Oracle University courses related to E-Business Suite Technology, use the Oracle University E-Business Suite Technology course catalog link: Oracle University E-Business Suite Technology Course Catalog.

    1. R12 Oracle Applications System Administrator Fundamentals
    In this course students learn concepts and functions that are critical to the System Administrator role in implementing and managing the Oracle E-Business Suite. Topics covered include configuring security and user management, configuring flexfields, managing concurrent processing, and setting up other essential features such as profile options and printing. In addition, configuration and maintenance of an Oracle E-Business Suite through Oracle Applications Manager is discussed. Students also learn the fundamentals of Oracle Workflow, including its setup. The System Administrator Fundamentals course provides the foundation needed to effectively control security and ensure smooth operations for an E-Business Suite installation. Demonstrations and hands-on practice reinforce the fundamental concepts of configuring an Oracle E-Business Suite, as well as handling day-to-day system administrator tasks.

    2. R12.x Install/Patch/Maintain Oracle E-Business Suite
    This course is applicable for customers who have implemented Oracle E-Business Suite Release 12 or Oracle E-Business Suite 12.1. It explains how to go about installing and maintaining an Oracle E-Business Suite Release 12.x system. Both Standard and Express installation types are covered in detail. Maintenance topics include a detailed examination of the standard tools and utilities, and an in-depth look at patching an Oracle E-Business Suite system. After this course, students will be able to make informed decisions about how to install an Oracle E-Business Suite system that meets their specific requirements, and how to maintain the system afterwards. The extensive hands-on practices include performing an installation on a Linux system, navigating the file system to locate key files, running the standard maintenance tools and utilities, applying patches, and carrying out cloning operations.

    3. R12.x Extend Oracle Applications: Building OA Framework Applications
    This class is a hands-on, lab-intensive course that will keep the student busy and active for its duration. While the course covers the fundamentals that support OA Framework-based applications, it is really an exercise in J2EE programming. Over the duration of the course, the student will create an OA Framework-based application that selects, inserts, updates, and deletes data from an R12 Oracle Applications instance.

    4. R12.x Extend Oracle Applications: Customizing OA Framework Applications
    This course has been significantly changed from the prior version to include additional deployments. The course doesn't teach the specifics of configuring each product; that is left to the product-specific courses. What the course does cover is the general methods of building, personalizing, and extending OA Framework-based pages within the E-Business Suite, as well as the methods to deploy those types of customizations. The course doesn't include discussion of the Oracle Forms-based pages within the E-Business Suite.

    5. R12.x Extend Oracle Applications: OA Framework Personalizations
    Personalization is the ability within an E-Business Suite instance to change the look and behavior of OA Framework-based pages without programming. And personalizations are likely to survive patches and upgrades, increasing their utility. This course will systematically walk you through the myriad personalization options, starting with simple examples and increasing in complexity from there.

    6. E-Business Suite: BI Publisher 5.6.3 for Developers
    Starting with the basic concepts, architecture, and underlying standards of Oracle XML Publisher, this course leads students through a progression of exercises that build their expertise. By the end of the course, students should be able to create Oracle XML Publisher RTF templates and data templates. They should also be able to deploy and maintain a BI Publisher report in an E-Business Suite instance. Students will also be introduced to Oracle BI Publisher Enterprise.

    7. R12.x Implement Oracle Workflow
    This course provides an overview of the architecture and features of Oracle Workflow and the benefits of using Oracle Workflow in an e-business environment. You can learn how to design workflow processes to automate and streamline business processes, and how to define event subscriptions to perform processing triggered by business events. Students also learn how to respond to workflow notifications, how to administer and monitor workflow processes, and what setup steps are required for Oracle Workflow. Demonstrations and hands-on practice reinforce the fundamental concepts.

    8. R12.x Oracle E-Business Suite Essentials for Implementers
    Oracle R12.1 E-Business Essentials for Implementers is a course that provides a functional foundation for any E-Business Suite Fundamentals course.

    Read the article

  • CodePlex Daily Summary for Thursday, May 27, 2010

    New Projects

    Binding Navigator: Clone of WinForms BindingNavigator that is able to work with any type of DataSource. For full functionality it requires the DataSource to implement...
    DEWD: DEWD is an Umbraco 4.0 extension used to edit sequential data such as rows in a SQL server database table. It's meant to allow developers to quickl...
    Eletronic Invoice Extensions: A simple DLL to use against XSD and validate a XML.
    EssionCalendar: EssionCal EssionCal
    Expression Encoder Batch Processor: Importing your videotapes into Windows is easy with the built-in utilities, but if your importer does not encode to your desired format, you need t...
    Feedback Form: Feedback application makes it easier for attendees who attend a seminar/event and event organizers. Organizers of the event will no longer need to...
    Find in Start Menu: Find in Start Menu
    IE8 Web Slices Pack: IE8 Web Slices Pack is a package of 5 types of web slices ready to be customized via a mini-CMS for your web site or for your custom IE8 installer....
    KafTK: Iskakov Azamat
    Microsoft Dynamics NAV text export splitter: Utility for splitting Microsoft Dynamics NAV object exports (in text format) into files that each contain a single object.
    MultiPoint Vote: Voting is more innovative and catchy through this MultiPoint application. Using MultiPoint SDK 1.5 and Visual C#, this prototype emphasizes the cap...
    NebDotNet: NebDotNet
    nsim: Some simulation issues.
    OpenLight Group Common Lib: This project is a set of classes commonly used across OpenLight Group projects.
    pstsdk.net: .NET port of PST File Format SDK: pstsdk.net makes it easier for .NET developers to access the PST file format. This is a direct C# port of the PST File Format SDK project which is ...
    RavenMVC: A NoSQL Demo App using RavenDB and ASP.NET MVC 2.
    Remics: Remics is a toolkit for reverse engineering tools. Open source (MIT license). The goal of Remics is enabling developers (or researchers) quickly...
    RIA Services to Legacy DAL Integration Library: RIA Services to Legacy DAL Integration Library
    SharePoint List Adapters for SSIS: SSIS source and destination components to access SharePoint lists using basic authentication
    Sql Query Modelling Language: This project library creates simple Sql queries.
    Thumb nail creator and image resizer: a user control to create thumbnails and resize images for display
    TK: study project
    ViperWorks Ignition: Ignition is an application framework for WinForms and WPF business applications. Built-in webservice generation, reporting and rapid application devel...
    XNA Image Reflector: XNA Image Reflector allows you to add Web 2.0-like reflections to images in a few clicks
    XNArkanoid: XNArkanoid is a Windows Phone 7 remake of the classic Taito's Arkanoid. It's developed in C#, using XNA Framework v4.0.

    New Releases

    Acies: Acies - Alpha Build 0.0.5: First alpha release. Requires Microsoft XNA Framework Redistributable 3.1 (http://www.microsoft.com/downloads/details.aspx?FamilyID=53867a2a-e249...
    Ajax Toolkit for ASP.NET MVC: Ajax Toolkit for ASP.NET MVC v20100527: change array datasource to json datasource; allow lowercase
    Attribute Builder: Attribute Builder 1.0: Support for parameter-less constructors, constructors with parameters and object initialization with field and property assign...
    Binding Navigator: TTBindingNavigator preview: First binary release. Only the control, without samples.
    Bojinx: Bojinx Core V4.5.15: Bugs fixed: clean-up in the event handler. Added more disposal features to better clean up contexts on unload. Optimized command processing fur...
    CBM-Command: Version 1.0 - 2010-05-26: Release Notes - 2010-05-26 - Version 1.0. New features: none. Changes: fixed bug where you could move to an unopened panel; fixed bug where you could ...
    Community Forums NNTP bridge: Community Forums NNTP Bridge V03: This is the second release of the Community Forums NNTP Bridge to access the social and answers MS forums with a single, open source NNTP Bridge. ...
    Community Forums NNTP bridge: Community Forums NNTP Bridge V04: This is the second release of the Community Forums NNTP Bridge to access the social and answers MS forums with a single, open source NNTP Bridge. ...
    Community Forums NNTP bridge: Community Forums NNTP Bridge V05: This is the second release of the Community Forums NNTP Bridge to access the social and answers MS forums with a single, open source NNTP Bridge. ...
    Community Forums NNTP bridge: Community Forums NNTP Bridge V06: This is the second release of the Community Forums NNTP Bridge to access the social and answers MS forums with a single, open source NNTP Bridge. ...
    Community Forums NNTP bridge: Community Forums NNTP Bridge V07: Release of the Community Forums NNTP Bridge to access the social and answers MS forums with a single, open source NNTP Bridge. This release solves...
    Community Forums NNTP bridge: Community Forums NNTP Bridge V08: Release of the Community Forums NNTP Bridge to access the social and answers MS forums with a single, open source NNTP bridge. This release solves ...
    Dot Game: Dot Game Network version: Now, you can play over the network.
    EpsiRisk: EPSI RISK V1: First stable version of EpsiRISK! Enjoy!
    Event Scavenger: Viewer 3.3.0: Added grouping functionality to Viewer. Group expanding/collapsing only supported under Windows Vista/7 and onwards. Viewer version set to 3.3.0.
    FAST for Sharepoint MOSS 2010 Query Tool: Version 1.0: Full release. Added sorting. Added custom trim duplicates. Added UI improvements. Added ignore certificate errors.
    Foursquare for Windows Phone 7: Foursquare 2010.05.26.01: Foursquare 2010.05.26.01 updates: corrected issue with isPrivate, sendToTwitter, and sendToFacebook.
    HobbyBrew Mobile: Beta 3: ATTENTION: email notification of new versions is now available (read at the bottom)! Support for screen rotation: Malti and Luppoli (Malts and Hops) now appear side by...
    IE8 Web Slices Pack: IE8 Web Slices Pack v2.0: This release of the Web Slices Pack supports 5 different Web Slices and a mini CMS to administer the info in the Web Slices.
    manx: manx data 1.0: Initial dump of data. This is a raw SQL dump script that deletes all existing data and inserts the initial dataset as received from Paul Williams....
    Microsoft Web Protection Library: WPL May CTP: This preview of the Web Protection Library includes AntiXSS, an updated Anti-Cross Site Scripting Library; it removes some bugs and is now usable in ...
    MultiPoint Vote: MultiPoint Vote v.1: MultiPoint Vote v1 features: accepts user inputs: number of participants, poll/survey title and the list of options. A text file containing th...
    NLog - Advanced .NET Logging: Nightly Build 2010.05.26.001: Changes since the last build: 2010-05-25 19:21:49 Jarek Kowalski Reordered parameters of AsynchronousAction<T>; 2010-05-25 19:16:02 Jarek Kowalski N...
    NodeXL: Network Overview, Discovery and Exploration for Excel: NodeXL Class Libraries, version 1.0.1.124: The NodeXL class libraries can be used to display network graphs in .NET applications. To include a NodeXL network graph in a WPF desktop or Windo...
    NodeXL: Network Overview, Discovery and Exploration for Excel: NodeXL Excel 2007 Template, version 1.0.1.124: The NodeXL Excel 2007 template displays a network graph using edge and vertex lists stored in an Excel 2007 workbook. What's new: this is a minor re...
    OpenExpressApp let business people build application: OpenExpressApp for .Net4: Upgraded from .NET 3.5 SP1 to .NET 4. Main changes in this upgrade: 1. Fixed some memory leak problems. 2. Fixed some bugs. 3. Refactored part of the code. 4. Replaced Prism with MEF.
    RavenMVC: RavenMVC 0.1: A NoSQL demo app in ASP.NET MVC using RavenDB.
    SharePoint List Adapters for SSIS: Initial Release: Contains the raw assemblies necessary to use the SharePoint list adapters with basic authentication. See the read me file for details on using in ...
    Sql Query Modelling Language: Sql Qml V1: This is the first version.
    Thumb nail creator and image resizer: ThumbnailCreator 1.0: This is the very first release. I built it for myself, so it's a bit rough, but I thought others might find it useful.
    uptime.exe: uptime.exe v1.1: Changed the calculation of the uptime. It's now based on the LastBootUpTime value obtained from WMI.
    VCC: Latest build, v2.1.30526.0: Automatic drop of latest build
    ViperWorks Ignition: Test: Test
    XmlCodeEditor: Release 0.91 Alpha: Release 0.91 Alpha
    XNA Image Reflector: XNA Image Reflector v 1.1: This release has been compiled with XNA Game Studio 3.1, so you will need to download the XNA Runtime 3.1 in order to run it.
    Xna.Extend: Xna Extend (Ver 0.0.1 beta): This is the beta test version (Version 0.0.1 beta). It includes only the input and audio components. If you experience any errors, o...
    XNArkanoid: XNArkanoid v 0.2b: XNArkanoid v 0.2. Initial beta release.

    Most Popular Projects: Rawr; WBFS Manager; AJAX Control Toolkit; Microsoft SQL Server Product Samples: Database; Silverlight Toolkit; Windows Presentation Foundation (WPF); patterns & practices – Enterprise Library; Microsoft SQL Server Community & Samples; PHPExcel; ASP.NET

    Most Active Projects: AStar.net; patterns & practices – Enterprise Library; patterns & practices: Windows Azure Security Guidance; SqlServerExtensions; Mono.Addins; BlogEngine.NET; Customer Portal Accelerator for Microsoft Dynamics CRM; Rawr; CodeReview; GMap.NET - Great Maps for Windows Forms & Presentation

    Read the article

  • CodePlex Daily Summary for Wednesday, February 23, 2011

CodePlex Daily Summary for Wednesday, February 23, 2011

Popular Releases:
SQL Server CLR Function for Address Correction and Geocoding: Release 2.1: Adds support for the User Key argument in Process function calls.
ClosedXML - The easy way to OpenXML: ClosedXML 0.45.2: New in this release: 1) Added data validation. See Data Validation. 2) Deleting or clearing cells deletes the hyperlinks too. New in v0.45.1: 1) Fixed issues 6237, 6240. New in v0.45.2: 1) Fixed issues 6257, 6266. New examples: Data Validation.
Crystalbyte Equinox (LINQ to IMAP): Equinox Alpha Release v0.3.0.0: Fixed bugs: fixed an issue introduced in the last release where no connection could be established, since the client did not read the welcome message during the initial connection attempt; contact names are now being properly parsed into the envelope. Introduced features: implemented SMTP support; the client is contained inside the Crystalbyte.Equinox.Smtp.dll, which was added to the current release.
OMEGA CMS: OMEGA CMA - Alpha 0.2: A few fixes for the OMEGA Framework (DLL). A few tweaks for OMEGA CMS.
JSLint for Visual Studio 2010: 1.2.4: Bug fix release.
Coding4Fun Tools: Coding4Fun.Phone.Toolkit v1.2: New control, Toast Prompt! Removed progress bar since Silverlight Toolkit Feb 2010 has it.
Umbraco CMS: Umbraco 4.7: Service release fixing 31 issues. A full changelog will be available with the final stable release of 4.7. Important when upgrading: upgrade as if it were a patch release (update /bin, /umbraco and /umbraco_client). For general upgrade information follow the guide found at http://our.umbraco.org/wiki/install-and-setup/upgrading-an-umbraco-installation. 4.7 requires the .NET 4.0 framework. Web.config changes: update the web.config to include the 4 changes found in (they're clearly marked in...
HubbleDotNet - Open source full-text search engine: V1.1.0.0: Added Sqlite3 DBAdapter. Added App Report when Query Cache is collecting. Improved the performance of indexing through Synchronize. Added a 'top 0' feature so that we can get only the count of the result. Improved the score-calculating algorithm of match: the score of records that match all items is now larger than others. Added MySql DBAdapter. Improved performance of multi-field sort. Using a hash table to access the Payload data (the version before used binary search). Using heap sort instead of qui...
Silverlight????[???]: silverlight????[???]2.0: Chinese release notes (text lost in encoding).
DBSourceTools: DBSourceTools_1.3.0.0: Release 1.3.0.0. Changed editors from FireEdit to ICSharpCode.TextEditor. Complete revamp of Intellisense (further testing needed). Highlight field and table names in SQL scripts. Added field dropdown on all tables and views in DBExplorer. Added data option for viewing data in tables. Fixed comment/uncomment bug as reported by tareq. Included synonyms in scripting engine (nickt_ch).
IronPython: 2.7 Release Candidate 1: We are pleased to announce the first Release Candidate for IronPython 2.7. This release contains over two dozen bugs fixed in preparation for 2.7 Final. See the release notes for 60193 for details and what has already been fixed in the earlier 2.7 prereleases. - IronPython Team
Caliburn Micro: A Micro-Framework for WPF, Silverlight and WP7: Caliburn.Micro 1.0 RC: This is the official Release Candidate for Caliburn.Micro 1.0. The download contains the binaries, samples and VS templates. VS templates: the templates included are designed for situations where the Caliburn.Micro source needs to be embedded within a single project solution. This was targeted at government and other organizations that expressed specific requirements around using an open source project like this. NuGet: this release does not have a corresponding NuGet package. The NuGet pack...
Caliburn: A Client Framework for WPF and Silverlight: Caliburn 2.0 RC: This is the official Release Candidate for Caliburn 2.0. It contains all binaries, samples and generated code docs.
Chiave File Encryption: Chiave 0.9: Application for file encryption and decryption using the 512-bit Rijndael encryption algorithm, with a simple-to-use UI. It is written in C# and compiled against .NET 3.5. It incorporates Windows 7 features like jump lists, taskbar progress and Aero Glass. Feedback is welcome!....
Rawr: Rawr 4.0.20 Beta: Rawr is now web-based. The link to use Rawr4 is: http://elitistjerks.com/rawr.php This is the Cataclysm Beta Release. More details can be found at the following link http://rawr.codeplex.com/Thread/View.aspx?ThreadId=237262 As of the 4.0.16 release, you can now also begin using the new downloadable WPF version of Rawr! This is a pre-alpha release of the WPF version; there are likely to be a lot of issues. If you have a problem, please follow the posting guidelines and put it into the Issue Trac...
PowerGUI Visual Studio Extension: PowerGUI VSX 1.3.2: New features: PowerGUI Console Tool Window; PowerShell Project Type; PowerGUI 2.4 support.
MiniTwitter: 1.66: MiniTwitter 1.66 - Japanese release notes (text lost in encoding); mentions User Streams.
Windows Phone 7 Isolated Storage Explorer: WP7 Isolated Storage Explorer v1.0 Beta: Current release features: WPF desktop explorer client; Visual Studio integrated tool window explorer client (Visual Studio 2010 Professional and above). Supported operations: Refresh (isolated storage information), Add Folder, Add Existing Item, Download File, Delete Folder, Delete File. Explorer supports operations running on multiple remote applications at the same time. Explorer detects application disconnect (1-2 second delay). Explorer confirms operation completed status. Explorer d...
Image.Viewer: 2011: First version of 2011.
Silverlight Toolkit: Silverlight for Windows Phone Toolkit - Feb 2011: Silverlight for Windows Phone Toolkit overview: Silverlight for Windows Phone Toolkit offers developers additional controls for Windows Phone application development, designed to match the rich user experience of the Windows Phone 7. Suggestions? Features? Questions? Ask questions in the Create.msdn.com forum. Add bugs or feature requests to the Issue Tracker. Help us shape the Silverlight Toolkit with your feedback! Please clearly indicate that the work items and issues are for the phone t...

New Projects:
.Net Dating & Social Software Suite: A free, open source dating and social software suite. Dot Net Dating Suite uses the latest features in .NET combined with experienced developers' efforts. This allows us to deliver professional public SEO websites with the support of an enterprise-enabled administration suite.
AllegroSharp: A library providing access to the Allegro.pl auction service, based on the publicly available Allegro Web API.
asdfasdfasdf: asdfasdfasdf
auto: auto site
AWS Monitor: A web app utilizing the Amazon Web Services API with a focus on browsing through and analyzing data graphically, mostly from CloudWatch - the API providing metrics about the usage of all the Amazon Web Services such as EC2 (Cloud Computing) and ELB (Elastic Load Balancing).
Calculation of FEM using DirectX Libraries: Calculation of FEM using DirectX Libraries
CART: The CART system (Creative Application to Remedy Traffics) will improve the throughput of road infrastructure and the flow of vehicle traffic in urban agglomerations.
Chaining Assertion for MSTest: Chaining Assertion for MSTest. Simple Assert extension methods on Object. This provides only one .cs file.
CodeFirst Membership Provider: Custom Membership Provider based on SimpleMembershipProvider using Entity Framework Code-First, intended for use in ASP.NET MVC 3. Thanks to ASP.NET Web Pages team for building a simple membership provider :P
FourSquareSharp: Simple OAuth2 implementation of the FourSquare API in .NET
Helix Engine: The Helix Engine is an isometric rendering engine for Silverlight
Lighthouse - Versatile Unit Test Runner for Silverlight: Lighthouse provides you with a set of simple but powerful tools to run Silverlight unit tests from the command line, a Windows test runner application, and a Resharper plugin for Visual Studio.
MISCE: Designs for a Minimal Instruction Set Computer for Education, along with a simulator/compiler written in Excel. A computer "from first principles" that utilises a minimal instruction set and can be built for demonstration purposes as part of a technology or computing course.
Oh My Log: Oh My Log inventorises event log files from Windows Servers in a network. A clean and simple sysadmin tool.
OSIS Interop Tests: These are the tests created for use by the OSIS working group (http://osis.idcommons.net) to test interoperability features of user-centric solutions during interop events.
Program Options: Parse command line options
Searchable Property Updater for Microsoft Dynamics CRM 2011: Searchable Property Updater makes it easier for Microsoft Dynamics CRM 2011 customizers to bulk change the "Active for advanced find" property of entity attributes
Silverlight Bolo: A Silverlight Bolo clone
Simulacia a vizualizacia vlasov: Hair simulation and visualization on the GPU
Smartkernel: Smartkernel: framework, example, platform, tool, app
SusuCMS: SusuCMS
Swimlanes for Scrum: Swimlanes taskboard for Scrum team projects with TFS, set up using the SfTS (Scrum for Team System) V3 template
teamdoer: These are the actual source codes that power teamdoer.com - a simple collaborative task/project management software app
tiencd: Tiencd project
VG Current Item Display Web Part: VG Current Item Display Web Part allows displaying current SharePoint item metadata. For example, if you put it on a Web Part page or list item form, it will display a custom view of the current item's properties. Output is easily customized with an XSLT file.
Virtual Interactive Shopper: Add-in for the SeeMe Rehabilitation System. More information about the SeeMe system can be found at http://www.brontesprocessing.com/health/SeeMe
Visual Studio Strategy Manager: Provides a way to create code generation strategies and to manage them in Visual Studio 2010. Strategies can interact with Visual Studio events like document events (saved, closed, opened), build events or DSL Tools events (model element created, deleted, modified...).
VMarket: VMarket is a virtual market system (ASP.NET) which allows users to act as seller or buyer and manage their property online.
webclerk: webclerk's project based on Microsoft tech
WebTest: Just a test!
Work efficiency (WE) ????????: description lost in encoding.
zrift: nothing here, move along

    Read the article

  • Understanding LINQ to SQL (11) Performance

    - by Dixin
[LINQ via C# series] LINQ to SQL has a lot of great features, like strong typing, query compilation, deferred execution, a declarative paradigm, etc., which are very productive. Of course, these do not come for free, and one price is performance.

O/R mapping overhead
Because LINQ to SQL is based on O/R mapping, one obvious overhead is that changing data usually requires retrieving data first:

private static void UpdateProductUnitPrice(int id, decimal unitPrice)
{
    using (NorthwindDataContext database = new NorthwindDataContext())
    {
        Product product = database.Products.Single(item => item.ProductID == id); // SELECT...
        product.UnitPrice = unitPrice;
        database.SubmitChanges(); // UPDATE...
    }
}

Before updating an entity, that entity has to be retrieved by an extra SELECT query. This is slower than a direct data update via ADO.NET:

private static void UpdateProductUnitPrice(int id, decimal unitPrice)
{
    using (SqlConnection connection = new SqlConnection(
        "Data Source=localhost;Initial Catalog=Northwind;Integrated Security=True"))
    using (SqlCommand command = new SqlCommand(
        @"UPDATE [dbo].[Products] SET [UnitPrice] = @UnitPrice WHERE [ProductID] = @ProductID",
        connection))
    {
        command.Parameters.Add("@ProductID", SqlDbType.Int).Value = id;
        command.Parameters.Add("@UnitPrice", SqlDbType.Money).Value = unitPrice;
        connection.Open();
        command.Transaction = connection.BeginTransaction();
        command.ExecuteNonQuery(); // UPDATE...
        command.Transaction.Commit();
    }
}

The above imperative code specifies the "how to do" details for better performance. For the same reason, some articles on the Internet insist that, when updating data via LINQ to SQL, the above declarative code should be replaced by:

private static void UpdateProductUnitPrice(int id, decimal unitPrice)
{
    using (NorthwindDataContext database = new NorthwindDataContext())
    {
        database.ExecuteCommand(
            "UPDATE [dbo].[Products] SET [UnitPrice] = {0} WHERE [ProductID] = {1}",
            unitPrice, id); // Arguments in placeholder order: {0} = unitPrice, {1} = id.
    }
}

Or just create a stored procedure:

CREATE PROCEDURE [dbo].[UpdateProductUnitPrice]
(
    @ProductID INT,
    @UnitPrice MONEY
)
AS
BEGIN
    BEGIN TRANSACTION
    UPDATE [dbo].[Products]
    SET [UnitPrice] = @UnitPrice
    WHERE [ProductID] = @ProductID
    COMMIT TRANSACTION
END

and map it as a method of NorthwindDataContext (explained in this post):

private static void UpdateProductUnitPrice(int id, decimal unitPrice)
{
    using (NorthwindDataContext database = new NorthwindDataContext())
    {
        database.UpdateProductUnitPrice(id, unitPrice);
    }
}

As a normal trade-off for O/R mapping, a decision has to be made between performance overhead and programming productivity according to the case. From a developer's perspective, if O/R mapping is chosen, I consistently choose the declarative LINQ code, unless this kind of overhead is unacceptable.

Data retrieving overhead
After the O/R mapping specific issue, now look into the LINQ to SQL specific issues, for example, performance of the data retrieving process. The previous post has explained that the SQL translation and execution is complex. Actually, the LINQ to SQL pipeline is similar to a compiler pipeline.
It consists of about 15 steps to translate a C# expression tree to a SQL statement, which can be categorized as:
- Convert: invoke SqlProvider.BuildQuery() to convert the tree of Expression nodes into a tree of SqlNode nodes;
- Bind: use the visitor pattern to figure out the meanings of names according to the mapping info, like a property for a column, etc.;
- Flatten: figure out the hierarchy of the query;
- Rewrite: for SQL Server 2000, if needed;
- Reduce: remove the unnecessary information from the tree;
- Format: generate the SQL statement string;
- Parameterize: figure out the parameters; for example, a reference to a local variable should be a parameter in SQL;
- Materialize: execute the reader and convert the result back into typed objects.

So for each data retrieval, even one which looks simple:

private static Product[] RetrieveProducts(int productId)
{
    using (NorthwindDataContext database = new NorthwindDataContext())
    {
        return database.Products.Where(product => product.ProductID == productId)
            .ToArray();
    }
}

LINQ to SQL goes through the above steps to translate and execute the query. Fortunately, there is a built-in way to cache the translated query.

Compiled query
When such a LINQ to SQL query is executed repeatedly, CompiledQuery can be used to translate the query once and execute it multiple times:

internal static class CompiledQueries
{
    private static readonly Func<NorthwindDataContext, int, Product[]> _retrieveProducts =
        CompiledQuery.Compile((NorthwindDataContext database, int productId) =>
            database.Products.Where(product => product.ProductID == productId).ToArray());

    internal static Product[] RetrieveProducts(
        this NorthwindDataContext database, int productId)
    {
        return _retrieveProducts(database, productId);
    }
}

The new version of RetrieveProducts() gets better performance, because only when _retrieveProducts is invoked for the first time does it internally invoke SqlProvider.Compile() to translate the query expression. It also uses a lock to make sure the query is translated only once in multi-threading scenarios. (A short usage sketch follows at the end of this section.)

Static SQL / stored procedures without translating
Another way to avoid the translation overhead is to use static SQL or stored procedures, just as in the above examples. Because this is a functional programming series, this article does not dive into them. For the details, Scott Guthrie already has some excellent articles: LINQ to SQL (Part 6: Retrieving Data Using Stored Procedures), LINQ to SQL (Part 7: Updating our Database using Stored Procedures), LINQ to SQL (Part 8: Executing Custom SQL Expressions).

Data changing overhead
Looking into the data updating process, it also needs a lot of work:
- Begins a transaction;
- Processes the changes (ChangeProcessor): walks through the objects to identify the changes, and determines the order of the changes;
- Executes the changes: LINQ queries may be needed to execute the changes - like the first example in this article, where an object needs to be retrieved before being changed, so the whole data retrieving process above is gone through again;
- If there is user customization, it is executed - for example, a table's INSERT / UPDATE / DELETE can be customized in the O/R designer.
It is important to keep these overheads in mind.
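As the usage sketch promised above (a minimal sketch reusing only the NorthwindDataContext and the CompiledQueries class defined in this article, nothing new assumed), note that the extension-method form keeps call sites identical to the uncompiled version:

// Minimal usage sketch for the CompiledQueries class defined above.
using (NorthwindDataContext database = new NorthwindDataContext())
{
    // The first call invokes SqlProvider.Compile() and caches the delegate.
    Product[] first = database.RetrieveProducts(1);
    // Subsequent calls skip the expression-tree-to-SQL translation entirely.
    Product[] second = database.RetrieveProducts(2);
}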
Bulk deleting / updating
Another thing to be aware of is bulk deleting:

private static void DeleteProducts(int categoryId)
{
    using (NorthwindDataContext database = new NorthwindDataContext())
    {
        database.Products.DeleteAllOnSubmit(
            database.Products.Where(product => product.CategoryID == categoryId));
        database.SubmitChanges();
    }
}

The expected SQL would be something like:

BEGIN TRANSACTION
exec sp_executesql N'DELETE FROM [dbo].[Products] AS [t0] WHERE [t0].[CategoryID] = @p0',N'@p0 int',@p0=9
COMMIT TRANSACTION

However, as mentioned before, the actual SQL retrieves the entities, and then deletes them one by one:

-- Retrieves the entities to be deleted:
exec sp_executesql N'SELECT [t0].[ProductID], [t0].[ProductName], [t0].[SupplierID], [t0].[CategoryID], [t0].[QuantityPerUnit], [t0].[UnitPrice], [t0].[UnitsInStock], [t0].[UnitsOnOrder], [t0].[ReorderLevel], [t0].[Discontinued] FROM [dbo].[Products] AS [t0] WHERE [t0].[CategoryID] = @p0',N'@p0 int',@p0=9

-- Deletes the retrieved entities one by one:
BEGIN TRANSACTION
exec sp_executesql N'DELETE FROM [dbo].[Products] WHERE ([ProductID] = @p0) AND ([ProductName] = @p1) AND ([SupplierID] IS NULL) AND ([CategoryID] = @p2) AND ([QuantityPerUnit] IS NULL) AND ([UnitPrice] = @p3) AND ([UnitsInStock] = @p4) AND ([UnitsOnOrder] = @p5) AND ([ReorderLevel] = @p6) AND (NOT ([Discontinued] = 1))',N'@p0 int,@p1 nvarchar(4000),@p2 int,@p3 money,@p4 smallint,@p5 smallint,@p6 smallint',@p0=78,@p1=N'Optimus Prime',@p2=9,@p3=$0.0000,@p4=0,@p5=0,@p6=0
exec sp_executesql N'DELETE FROM [dbo].[Products] WHERE ([ProductID] = @p0) AND ([ProductName] = @p1) AND ([SupplierID] IS NULL) AND ([CategoryID] = @p2) AND ([QuantityPerUnit] IS NULL) AND ([UnitPrice] = @p3) AND ([UnitsInStock] = @p4) AND ([UnitsOnOrder] = @p5) AND ([ReorderLevel] = @p6) AND (NOT ([Discontinued] = 1))',N'@p0 int,@p1 nvarchar(4000),@p2 int,@p3 money,@p4 smallint,@p5 smallint,@p6 smallint',@p0=79,@p1=N'Bumble Bee',@p2=9,@p3=$0.0000,@p4=0,@p5=0,@p6=0
-- ...
COMMIT TRANSACTION

The same applies to bulk updating. This is really not efficient and needs to be kept in mind. There are already some solutions on the Internet, like this one. The idea is to wrap the above SELECT statement into an INNER JOIN:

exec sp_executesql N'DELETE [dbo].[Products] FROM [dbo].[Products] AS [j0]
INNER JOIN (
    SELECT [t0].[ProductID], [t0].[ProductName], [t0].[SupplierID], [t0].[CategoryID], [t0].[QuantityPerUnit], [t0].[UnitPrice], [t0].[UnitsInStock], [t0].[UnitsOnOrder], [t0].[ReorderLevel], [t0].[Discontinued]
    FROM [dbo].[Products] AS [t0]
    WHERE [t0].[CategoryID] = @p0) AS [j1]
ON ([j0].[ProductID] = [j1].[ProductID])', -- The primary key
N'@p0 int',@p0=9

Query plan overhead
The last thing is about the SQL Server query plan. Before .NET 4.0, LINQ to SQL had an issue (not sure if it is a bug): LINQ to SQL internally uses ADO.NET, but it does not set SqlParameter.Size for a variable-length argument, like an argument of NVARCHAR type, etc. So consider two queries with the same SQL but different argument lengths:

using (NorthwindDataContext database = new NorthwindDataContext())
{
    database.Products.Where(product => product.ProductName == "A")
        .Select(product => product.ProductID).ToArray();
    // The same SQL and argument type, different argument length.
    database.Products.Where(product => product.ProductName == "AA")
        .Select(product => product.ProductID).ToArray();
}

Pay attention to the argument length in the translated SQL:

exec sp_executesql N'SELECT [t0].[ProductID] FROM [dbo].[Products] AS [t0] WHERE [t0].[ProductName] = @p0',N'@p0 nvarchar(1)',@p0=N'A'
exec sp_executesql N'SELECT [t0].[ProductID] FROM [dbo].[Products] AS [t0] WHERE [t0].[ProductName] = @p0',N'@p0 nvarchar(2)',@p0=N'AA'

Here is the overhead: the first query's plan cache entry is not reused by the second one:

SELECT sys.syscacheobjects.cacheobjtype, sys.dm_exec_cached_plans.usecounts, sys.syscacheobjects.[sql]
FROM sys.syscacheobjects
INNER JOIN sys.dm_exec_cached_plans
ON sys.syscacheobjects.bucketid = sys.dm_exec_cached_plans.bucketid;

They actually use different query plans. Again, pay attention to the argument length in the [sql] column (@p0 nvarchar(2) / @p0 nvarchar(1)). Fortunately, in .NET 4.0 this is fixed:

internal static class SqlTypeSystem
{
    private abstract class ProviderBase : TypeSystemProvider
    {
        protected int? GetLargestDeclarableSize(SqlType declaredType)
        {
            SqlDbType sqlDbType = declaredType.SqlDbType;
            if (sqlDbType <= SqlDbType.Image)
            {
                switch (sqlDbType)
                {
                    case SqlDbType.Binary:
                    case SqlDbType.Image:
                        return 8000;
                }
                return null;
            }
            if (sqlDbType == SqlDbType.NVarChar)
            {
                return 4000; // Max length for NVARCHAR.
            }
            if (sqlDbType != SqlDbType.VarChar)
            {
                return null;
            }
            return 8000;
        }
    }
}

In the above example, the translated SQL becomes:

exec sp_executesql N'SELECT [t0].[ProductID] FROM [dbo].[Products] AS [t0] WHERE [t0].[ProductName] = @p0',N'@p0 nvarchar(4000)',@p0=N'A'
exec sp_executesql N'SELECT [t0].[ProductID] FROM [dbo].[Products] AS [t0] WHERE [t0].[ProductName] = @p0',N'@p0 nvarchar(4000)',@p0=N'AA'

so that they reuse the same query plan cache entry: now the [usecounts] column is 2.
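To make the plan-reuse point concrete, the same idea can be exercised in plain ADO.NET by declaring the parameter size explicitly. A minimal sketch, reusing the article's Northwind connection string and table (the query text itself is illustrative):

// using System.Data; using System.Data.SqlClient;
// Declaring the parameter as NVARCHAR(4000) - the type's maximum - makes queries
// with different argument lengths ("A", "AA", ...) share one cached plan,
// which is what the .NET 4.0 fix shown above achieves inside LINQ to SQL.
using (SqlConnection connection = new SqlConnection(
    "Data Source=localhost;Initial Catalog=Northwind;Integrated Security=True"))
using (SqlCommand command = new SqlCommand(
    "SELECT [ProductID] FROM [dbo].[Products] WHERE [ProductName] = @ProductName",
    connection))
{
    command.Parameters.Add("@ProductName", SqlDbType.NVarChar, 4000).Value = "A";
    connection.Open();
    command.ExecuteScalar(); // Later executions with "AA" etc. reuse the same plan.
}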

    Read the article

  • Java 7 update 6 installation fails on Windows 7 when Chrome is default browser

    - by ali1234
I am configuring a brand new Lenovo U410 system with Windows 7 Home Premium for a user. I received the system direct from the shop. As part of the configuration I installed Java using the online installer. This worked correctly. Later, due to a mistake I made, I needed to restore the system to factory default. The factory restore FORMATS C:\ and puts back (supposedly) the exact factory configuration. However, after doing this, I was no longer able to install Java successfully using the same method I used before. Now, whenever I attempt to use the online Java installer, the following happens. First of all, a window always appears: "Welcome to Java", "Downloading Java Installer...". After a short time this window disappears, and then one of three things happens:
1. The very first time I do this after doing a factory reset, I get a Windows error report, which contains this information: Application Name: JavaSetup7u5.exe; Application Version: 7.0.50.6; Application Timestamp: 4feacd84; Fault Module Name: JavaIC.dll; Fault Module Version: 9.9.9.9; Fault Module Timestamp: 4f2343d6; Exception Offset: 000052cb; Exception Code: c0000417; Exception Data: 00000000; OS Version: 6.1.7600.2.0.0.768.3; Locale ID: 1033; Additional Information 1: 773c; Additional Information 2: 773cd78cf06816f8246f359fa270f3bb; Additional Information 3: f51a; Additional Information 4: f51aaea7d22f36fa9e3a626b5a5cd1c3
2. Subsequent runs produce this error message: "Error: Java(TM) installer - Downloaded file C:\Users\\AppData\Local\Temp\fx-runtime.exe is corrupt."
3. Nothing happens at all.
I believe the corruption message is a red herring. Running the installer again causes a different error because the files were downloaded and the installer crashed before it could clean up. This isn't the actual problem: when this happens, the installer deletes the downloaded files, and then when you run it for the third time, it downloads everything again and crashes in javaic.dll. I suspect the downloader is appending to the existing files or something, causing the corruption. I have tried all of the above as Administrator and as a normal user. I have tried resetting the system to factory defaults several times. I have tried downloading with Chrome and Internet Explorer 9. I have tried uninstalling all anti-virus software and disabling the Windows firewall entirely. The only thing which makes a difference is running the installer in Windows XP compatibility mode, which allows the installation to complete. I know I can work around this error by using the offline installer, so please don't post that as an answer. I am looking for an explanation of the root cause. Additionally, if I use the offline installer, the updater does not work. The updater also does not work if I install in XP mode. The updater fails because it works by just downloading the newest online setup and running it. Also remember that the installers are digitally signed. The signatures verify correctly, so there is no way in hell that this is caused by corrupted downloads. Some theories I have:
- The Java setup files on java.com actually changed in between the first successful install and my later attempts. Seems unlikely, as none of the version numbers have changed. However, I have seen a couple of reports of this error which showed up in the past 24 hours. This looks like the most likely explanation right now: http://www.oracle.com/us/corporate/press/1735645 - Oracle released 7 update 6 two days ago. Careful inspection of the installers reveals that they are in fact attempting to download .6, not .5 as the download page claims. Not actually correct: only the update tool tries to install 7u6; the online installer still tries 7u5. However, 7u6 being released two days ago is too much of a coincidence to ignore. Update: the 7u6 online installer is available from the Oracle Technology Network. It crashes in exactly the same way.
- The factory reset software uses GMT-8 and I am on GMT-1. As a result, after a factory reset, any software which cares to check would think that the system was restored 7 hours in the future, due to Windows' awful policy of storing local time in the system clock. This could be confusing a certificate check or similar. Update: I discovered that this does cause Windows Update to fail. The workaround, setting the clock back before starting the factory reset, does not enable Java to install correctly.
- The factory reset image isn't really the same as what is installed in the main partition when you buy the system. Naughty Lenovo.
- The installer appears to crash while installing or displaying something to do with the Ask.com toolbar. That seems to be what javaic.dll does.
- Microsoft Tuesday was the 14th. Some update in that could be causing this. However, I'm factory resetting the machine every time, so unless the patches get slipstreamed into the recovery image, or there is some mechanism by which they get silently installed even if updates are disabled, I don't see how this can be the cause.
Major breakthrough: the default browser on Lenovo systems is Google Chrome. I noticed that the JavaIC.dll "sponsor check" actually does a check on your default browser in order to decide which sponsor ad to display. Normally that would get you the Ask toolbar on IE9. But that toolbar doesn't work on Chrome, and so the installer tries to display a different ad. The different ad is what causes the crash. Changing the default browser to IE9 allows the installer to run correctly. So this looks like a genuine bug in the sponsor ad code in the installer, caused by a combination of Google Chrome as the default browser and not being in the US. (The installer also checks your location using an IP geolocation service and displays different ads based on that.)

    Read the article

  • Introduction to Extended Events

    - by extended_events
For those fighting with all the Extended Events terminology, let's step back and have a small overall introduction to Extended Events. This post will give you a simplified end-to-end view through some of the elements in Extended Events. Before we start, let's review the first Extended Event objects that we are going to use:
- Events: The SQL Server code is populated with event calls that, by default, are disabled. Adding events to a session enables those event calls. Once enabled, they will execute the set of functionality defined by the session.
- Target: This is an Extended Events object that can be used to log event information.
Also, it is important to understand the following Extended Events concept:
- Session: A server object created by the user that defines the functionality to be executed every time a set of events happens.

It's time to write a small "Hello World" using Extended Events. This will help understand the above terms. We will use:
- Event sqlserver.error_reported: This event gets fired every time an error happens in the server.
- Target package0.asynchronous_file_target: This target stores the event data on disk.
- Session: We will create a session that sends all the error_reported events to the file target.

Before we get started, a quick note: don't run this script in a production environment. Even though we are just going to raise very low severity user errors, we don't want to introduce noise in our servers.

-- TRIES TO ELIMINATE PREVIOUS SESSIONS
BEGIN TRY
    DROP EVENT SESSION test_session ON SERVER
END TRY
BEGIN CATCH
END CATCH
GO

-- CREATES THE SESSION
CREATE EVENT SESSION test_session ON SERVER
ADD EVENT sqlserver.error_reported
ADD TARGET package0.asynchronous_file_target
-- CONFIGURES THE FILE TARGET
(set filename = 'c:\temp\data1.xel', metadatafile = 'c:\temp\data1.xem')
GO

-- STARTS THE SESSION
ALTER EVENT SESSION test_session ON SERVER STATE = START
GO

-- GENERATES AN ERROR
RAISERROR (N'HELLO WORLD', -- Message text.
           1,              -- Severity.
           1, 7, 3, N'abcde'); -- Other parameters.
GO

-- STOPS LISTENING FOR THE EVENT
ALTER EVENT SESSION test_session ON SERVER STATE = STOP
GO

-- REMOVES THE EVENT SESSION FROM THE SERVER
DROP EVENT SESSION test_session ON SERVER
GO

-- READS THE EVENT DATA FROM THE FILE TARGET
select CAST(event_data as XML) as event_data
from sys.fn_xe_file_target_read_file('c:\temp\data1*.xel','c:\temp\data1*.xem', null, null)

This query will output the event data with our first hello world in the Extended Events format:

<event name="error_reported" package="sqlserver" id="100" version="1" timestamp="2010-02-27T03:08:04.210Z"><data name="error"><value>50000</value><text /></data><data name="severity"><value>1</value><text /></data><data name="state"><value>1</value><text /></data><data name="user_defined"><value>true</value><text /></data><data name="message"><value>HELLO WORLD</value><text /></data></event>

More on parsing event data in this post: Reading event data 101.

Now let's move on to the other three Extended Event objects:
- Actions: These get executed before events are published (stored in buffers to be transferred to the targets). Currently they are used to collect additional data (like the TSQL statement related to an event, the session, or the user) or to generate a mini dump.
- Predicates: Predicates are logical expressions that specify whether an event fires (e.g. only listen to errors with a severity greater than 16). These are composed of two Extended Event objects:
  - Predicate comparators: Define an operator for a pair of values. Examples:
    - Severity > 16
    - error_message = 'Hello World!!'
  - Predicate sources: These are values that can also be used by the predicates. They are generic data that isn't usually provided in the event (similar to the actions). Example:
    - Sqlserver.username = 'Tintin'
As logical expressions they can be combined using the logical operators (and, or, not). Note: each pair always has to be an event field or predicate source first, and then a value.

Let's do another small example. We will trigger errors, but the session will keep only the ones that have severity = 2 and an error message != 'filter' (see the WHERE clause in the session definition below). To verify this we will use the action sql_text, which attaches the SQL statement to the event data:

-- TRIES TO ELIMINATE PREVIOUS SESSIONS
BEGIN TRY
    DROP EVENT SESSION test_session ON SERVER
END TRY
BEGIN CATCH
END CATCH
GO

-- CREATES THE SESSION
CREATE EVENT SESSION test_session ON SERVER
ADD EVENT sqlserver.error_reported
    (ACTION (sqlserver.sql_text) WHERE severity = 2 and (not (message = 'filter')))
ADD TARGET package0.asynchronous_file_target
-- CONFIGURES THE FILE TARGET
(set filename = 'c:\temp\data2.xel', metadatafile = 'c:\temp\data2.xem')
GO

-- STARTS THE SESSION
ALTER EVENT SESSION test_session ON SERVER STATE = START
GO

-- THIS EVENT WILL BE FILTERED BECAUSE SEVERITY != 2
RAISERROR (N'PUBLISH', 1, 1, 7, 3, N'abcde');
GO
-- THIS EVENT WILL BE FILTERED BECAUSE MESSAGE = 'FILTER'
RAISERROR (N'FILTER', 2, 1, 7, 3, N'abcde');
GO
-- THIS ERROR WILL BE PUBLISHED
RAISERROR (N'PUBLISH', 2, 1, 7, 3, N'abcde');
GO

-- STOPS LISTENING FOR THE EVENT
ALTER EVENT SESSION test_session ON SERVER STATE = STOP
GO

-- REMOVES THE EVENT SESSION FROM THE SERVER
DROP EVENT SESSION test_session ON SERVER
GO

-- READS THE EVENT DATA FROM THE FILE TARGET
select CAST(event_data as XML) as event_data
from sys.fn_xe_file_target_read_file('c:\temp\data2*.xel','c:\temp\data2*.xem', null, null)

This last statement will output one event with the following data:

<event name="error_reported" package="sqlserver" id="100" version="1" timestamp="2010-03-05T23:15:05.481Z">
  <data name="error">
    <value>50000</value>
    <text />
  </data>
  <data name="severity">
    <value>2</value>
    <text />
  </data>
  <data name="state">
    <value>1</value>
    <text />
  </data>
  <data name="user_defined">
    <value>true</value>
    <text />
  </data>
  <data name="message">
    <value>PUBLISH</value>
    <text />
  </data>
  <action name="sql_text" package="sqlserver">
    <value>-- THIS ERROR WILL BE PUBLISHED
RAISERROR (N'PUBLISH', 2, 1, 7, 3, N'abcde');
</value>
    <text />
  </action>
</event>

If you see more events, check whether you have deleted previous event files. If so, please run

-- Deletes previous event files
EXEC SP_CONFIGURE
GO
EXEC SP_CONFIGURE 'xp_cmdshell', 1
GO
RECONFIGURE
GO
XP_CMDSHELL 'del c:\temp\data*.xe*'
GO

or delete them manually.

More info on Events: Extended Event Events
More info on Targets: Extended Event Targets
More info on Sessions: Extended Event Sessions
More info on Actions: Extended Event Actions
More info on Predicates: Extended Event Predicates
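One more side note: the event data returned by sys.fn_xe_file_target_read_file can also be consumed from client code. Here is a minimal C# sketch of that idea; the connection string and file paths are assumptions matching the examples above, and the namespaces are the standard System.Data.SqlClient and System.Xml.Linq:

// using System; using System.Data.SqlClient; using System.Xml.Linq;
// Reads the extended event file written by the session above and prints
// the message captured by each error_reported event.
using (SqlConnection connection = new SqlConnection(
    "Data Source=localhost;Initial Catalog=master;Integrated Security=True"))
using (SqlCommand command = new SqlCommand(
    @"select CAST(event_data as NVARCHAR(MAX))
      from sys.fn_xe_file_target_read_file('c:\temp\data2*.xel', 'c:\temp\data2*.xem', null, null)",
    connection))
{
    connection.Open();
    using (SqlDataReader reader = command.ExecuteReader())
    {
        while (reader.Read())
        {
            XElement eventData = XElement.Parse(reader.GetString(0));
            foreach (XElement data in eventData.Elements("data"))
            {
                // Each <data name="message"><value>...</value></data> node holds the text.
                if ((string)data.Attribute("name") == "message")
                {
                    Console.WriteLine((string)data.Element("value"));
                }
            }
        }
    }
}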

    Read the article

  • CodePlex Daily Summary for Tuesday, August 28, 2012

CodePlex Daily Summary for Tuesday, August 28, 2012

Popular Releases:
ImageServer: v1.1: This is the first version released.
Christoc's DotNetNuke Module Development Template: DotNetNuke Project Templates V1.1 for VS2012: This release is specifically for Visual Studio 2012 support, distributed through the Visual Studio Extensions gallery at http://visualstudiogallery.msdn.microsoft.com/ Check out the blog post for all of the details about this release: http://www.dotnetnuke.com/Resources/Blogs/EntryId/3471/New-Visual-Studio-2012-Project-Templates-for-DotNetNuke.aspx If you need a project template for older versions of Visual Studio, check out our previous releases.
Home Access Plus+: v8.0: v8.0828.1800 RELEASE CHANGED TO BETA. Any issues, please log them on http://www.edugeek.net/forums/home-access-plus/ This is a full release; NO upgrade ZIP will be provided, as most files require replacing. To upgrade from a previous version, delete everything but your AppData folder, extract all but the AppData folder and run your HAP+ install. Documentation is supplied in the web ZIP. The quota services require executing a script to register the service; this can be found in their install di...
Math.NET Numerics: Math.NET Numerics v2.2.0: Major linear algebra rework since v2.1, now available on CodePlex as well (previous versions were only available via NuGet). Also available as NuGet packages: PM> Install-Package MathNet.Numerics and PM> Install-Package MathNet.Numerics.FSharp. New: instead of the special Silverlight build we now provide a portable version supporting .NET 4, Silverlight 5 and .NET Core (WinRT) 4.5: PM> Install-Package MathNet.Numerics.Portable
Phalanger - The PHP Language Compiler for the .NET Framework: 3.0.0.3391 (September 2012): New features: extended ReflectionClass; libxml error handling, constants; TreatWarningsAsErrors MSBuild option; OnlyPrecompiledCode configuration option (allows using only compiled code). Fixes: ArgsAware exception fix; accessing .NET properties bug fix; ASP.NET session handler fix for OutOfProc mode. Phalanger Tools for Visual Studio: Visual Studio 2010 & 2012; new debugger engine, PHP-like debugging; lots of fixes of project files, formatting, smart indent, colorization etc. Improved ...
WatchersNET CKEditor™ Provider for DotNetNuke®: CKEditor Provider 1.14.06: What's new: added CKEditor 3.6.4; the oEmbed plugin can now handle short URLs. Changes: the template file can now be parsed from an XML file instead of JS (More Info...); style sets can now be parsed from an XML file instead of JS (More Info...). Fixed: showing wrong pages in child portal in the link dialog; URLs in the dnnpages plugin; issue #6969 (WordCount plugin); issue #6973; file browser: fixed deleting of files; file browser: improved loading time; file browser: improved the loa...
MabiCommerce: MabiCommerce 1.0.1: What's new: setup now creates shortcuts; fixed spelling errors; minor enhancement to the Map window.
ScintillaNET: ScintillaNET 2.5.2: This release has been built from the 2.5 branch. Version 2.5.2 is functionally identical to the 2.5.1 release but also includes the XML documentation comments file generated by Visual Studio. It is not 100% comprehensive but it will give you Visual Studio IntelliSense for a large part of the API. Just make sure the ScintillaNET.xml file is in the same folder as the ScintillaNET.dll reference you're using in your projects. (The XML file does not need to be distributed with your application)....
WinRT XAML Toolkit: WinRT XAML Toolkit - 1.2.0: WinRT XAML Toolkit based on the Windows 8 RTM SDK. Download the latest source from the SOURCE CODE page. For the compiled version use NuGet. You can add it to your project in Visual Studio by going to View/Other Windows/Package Manager Console and entering: PM> Install-Package winrtxamltoolkit. Features: AsyncUI extensions; controls and control extensions; converters; debugging helpers; imaging; IO helpers; VisualTree helpers; samples. Recent changes: NOTE: namespace changes. DebugConsol...
BlackJumboDog: Ver5.7.1: 2012.08.25 Ver5.7.1 - Japanese release notes (text lost in encoding); item (2) concerns SMTP.
Visual Studio Team Foundation Server Branching and Merging Guide: v2 - Visual Studio 2012: Welcome to the Branching and Merging Guide. Quality-bar details: documentation has been reviewed by Visual Studio ALM Rangers; documentation has been through an independent technical review; documentation has been reviewed by the quality and recording team; all critical bugs have been resolved. Known issues / bugs: spelling, grammar and content revisions are in progress; a hotfix will be published.
Microsoft Ajax Minifier: Microsoft Ajax Minifier 4.62: Fix for issue #18525 - escaped characters in CSS identifiers get double-escaped if the character immediately after the backslash is not normally allowed in an identifier. Fixed symbol problem with the NuGet package; 4.62 should have NuGet symbols available again. Also want to highlight again the breaking change introduced in 4.61 regarding the renaming of the DLL from AjaxMin.dll to AjaxMinLibrary.dll to fix strong-name collisions between the DLL and the EXE. Please be aware of this change and...
nopCommerce. Open source shopping cart (ASP.NET MVC): nopcommerce 2.65: As some of you may know, we were planning to release version 2.70 much later (the end of September). But today we have to release this intermediate version (2.65). It fixes a critical issue caused by a third-party assembly when running nopCommerce on a server with .NET 4.5 installed. No major features have been introduced with this release, as our development efforts were focused on further enhancements and fixing bugs. To see the full list of fixes and changes please visit the release notes p...
MyRouter (Virtual WiFi Router): MyRouter 1.2.9: Fix: some missing changes for fixing the window subclassing crash. Fix: fixed bug when running MyRouter for the first time. Fix: log file. Fix: improved application performance speed. Fix: solved some exceptions.
Private cloud DMS: Essential server-client full package: Requirements: SQL Server >= 2008 (minimum Express - recommended for Essential); .NET 4.0 (server); .NET 4.0 Client Profile (client). This version allows full file system functionality. Restrictions: maximum 2 parallel users; no shared spaces; no hosted business groups; no digital sign functionality; no ActiveDirectory connector; no performance cache; no workflow; no messaging.
JavaScript Prototype Extensions: Release 1.1.0.0: Release 1.1.0.0. Added prototype extension for object. Added prototype extension for array.
Glyphx: Version 1.2: This release includes the SdlDotNet.dll dependency in the setup, which you will need.
TFS Project Test Migrator: TestPlanMigration v1.0.0: Release 1.0.0. This first version does not create the test cases in the target project, because the goal was to restore a Test Plan + Test Suite hierarchy after a manual user deletion without restoring the whole Project Collection database. As I discovered, deleting a Test Plan will do the following:
- Delete all TestSuiteEntry (the link between a Test Suite node and a Test Case)
- Delete all TestSuite (the nodes in the test hierarchy), including the root TestSuite
- Delete the TestPlan
Test c...
ERPStore eCommerce FrontOffice: ERPStore.Core V4.0.0.2 MVC4 RTM: ERPStore.Core V4.0.0.2 MVC4 RTM (code source)
ZXing.Net: ZXing.Net 0.8.0.0: Synced with rev. 2393 of the Java version. Improved API; direct support for multiple barcode decoding; wrapper for barcode generating; many other improvements and fixes; encoder and decoder command line clients; demo client for EmguCV; dev documentation started.

New Projects:
A Constraint Propogation Solver in F#: An experimental implementation of the variable consistency "bucket elimination" algorithm for constraint propagation
Airline Pilot Academy: Information Systems Workshop - UCB
ArchiveManageSys: Something about archive management
Criteria Workflow Engine: A C++ Workflow Engine: design and execute business processes.
DnnDash Service: The DnnDash project enables administrators of DotNetNuke websites to view any installed DotNetNuke Dashboard components via other devices.
Dynamics NAV UniWPF Addin: This project is an add-in control for Microsoft Dynamics NAV 2009, allowing developers to put WPF controls directly on a NAV page.
Essai Salon de Chat: A small communication project based on a (multi-)client/server structure
High performance C# byte array to hex string to byte array: The key performance point for each to/from conversion is the (perpetual) repetition of the same if blocks and calculations...
ImageCloudLock Backup Solution: ImageCloudLock 2012 is designed to back up your important items, like photos, financial documents, PDFs, Excel documents and personal items, to the cloud service.
Login with Facebook in ASP.Net MVC3 & Get data from Facebook User: This project is useful for ASP.NET MVC developers: log in with Facebook in ASP.NET MVC3 and get data from the user's Facebook account.
Lucy2D: Project for developing 2D games based on WPF and XNA. The point is to minimize effort for developers and allow fast prototyping.
Main Street: Human resources software for small business, using C# and SQL Server Express.
ManagedZFS: A managed implementation of the ZFS filesystem. Reliability, performance, and manageability. Also, *block-pointer rewrite* !!!
Metro English Persian Dictionary: This is an English - Persian (Farsi) dictionary designed and developed for the Windows 8 Metro user interface which contains over 50000 words.
My Personal Site: My Personal Site
Neverball Framework: Neverball Framework
Sample code: This project is simply a common location for me to store project skeletons and snippets to help bootstrap other projects.
Smart Card Fuzzer: SCFuzz is a smart card middleware fuzz testing tool (fuzzer). Using API hooking, SCFuzz modifies data returned by the card in order to find bugs in the host.
Test Activation: Start looking into it.
Vector, a .net generic collection to use instead of a List - in niche cases.: The Vector can be used as a replacement for a List if a large number of elements must be stored, and element inserts and deletes are frequent.
WCF duplex Message: This is a test project that uses WCF to implement message broadcast.
WCF! WTF?: Getting to know WCF
Zune to Lync Now Playing: Zune to Lync Now Playing is a simple application that lets you display the current song playing in Zune Player on your Lync personal note.

    Read the article

  • Using the OAM Mobile & Social SDK to secure native mobile apps - Part 2 : OAM Mobile & Social Server configuration

    - by kanishkmahajan
Objective
In the second part of this blog post I'll now cover the configuration of OAM to secure our sample native apps developed using the iOS SDK. First, here are some key server-side concepts:
Application Profiles: An application profile is a logical representation of your application within the OAM server. It could be a web (HTML/JavaScript) or native (iOS or Android) application. Applications may have different requirements for AuthN/AuthZ, and therefore each application that interacts with OAM Mobile & Social REST services must be uniquely defined.
Service Providers: Service providers represent the back-end services that are accessed by applications. With OAM Mobile & Social these services are in the areas of authentication, authorization and user profile access. A service provider then defines a type or class of service for authentication, authorization or user profiles. For example, the JWTAuthentication provider performs authentication and returns JWT (JSON Web Tokens) to the application. In contrast, the OAMAuthentication provider also performs authentication but uses OAM SSO tokens.
Service Profiles: A service profile is a logical envelope that defines a service endpoint URL for a service provider for the OAM Mobile & Social Service. You can create multiple service profiles for a service provider to define token capabilities and service endpoints. Each service provider instance requires at least one corresponding service profile. The OAM Mobile & Social Service includes a pre-configured service profile for each pre-configured service provider.
Service Domains: Service domains bind together application profiles and service profiles with an optional security handler.
So now let's configure the OAM server. Additional details are in the OAM documentation; this post simply provides an outline of the configuration tasks required to configure OAM for securing native apps.

Configuration

Create the Application Profile
Log on to the Oracle Access Management console and, from System Configuration -> Mobile and Social -> Mobile Services, select "Create" under Application Profiles. You would do this step twice - once for each of the native apps, AvitekInventory and AvitekScheduler. Enter the parameters for the new application profile:
- Name: The application name. In this example we use 'InventoryApp' for the AvitekInventory app and 'SchedulerApp' for the AvitekScheduler app. The application name configured here must match the application name in the settings of the deployed iOS application.
- BaseSecret: Enter a password here. This does not need to match any existing password. It is used as an encryption key between the client and the OAM server.
- Mobile Configuration: Enable this checkbox for any mobile applications. This enables the SDK to collect and send mobile-specific attributes to the OAM server.
- Webview: Controls the type of browser that the iOS application will use. The embedded browser (default) renders the browser within the application; External uses the system's standalone browser. External can sometimes be preferable for debugging.
- URLScheme: The URL scheme associated with the iOS apps, also used as a custom URL scheme to register OS handlers that will take control when OAM transfers control to the device. For the AvitekInventory and AvitekScheduler apps I used osa:// and client:// respectively. You set this scheme in Xcode while developing your iOS apps, under Info -> URL Types.
- Bundle Identifier: The fully qualified name of your iOS application. You typically set this when you create a new Xcode project, or under General -> Identity in Xcode. For the AvitekInventory and AvitekScheduler apps these were com.us.oracle.AvitekInventory and com.us.oracle.AvitekScheduler respectively.

Create the Service Domain
Select "Create" under Service Domains. Create a name for your domain (AvitekDomain is what I've used). The name configured must match the service domain set in the iOS application settings. Under "Application Profile Selection" click the browse button and choose the application profiles that you created in the previous step, one by one. Set the InventoryApp as the SSO agent (with an automatic priority of 1) and the SchedulerApp as the SSO client. This associates these applications with this service domain and configures them in a 'circle of trust'.
Advance to the next page of the wizard to configure the services for this domain. For this example we will use the following services:
- Authentication: This will use the JWT (JSON Web Token) format authentication provider. Upon successful authentication, the iOS application will receive a signed JWT token from the OAM Mobile & Social service. This token will be used in subsequent calls to OAM. Use 'MobileOAMAuthentication' here.
- Authorization: The authorization provider. The SDK makes calls to this provider endpoint to obtain authorization decisions on resource requests. Use 'OAMAuthorization' here.
- User Profile Service: This is the service that provides user profile services (attribute lookup, attribute modification). It can be any directory configured as a data source in OAM.
And that's it! We're done configuring our native apps. In the next section, let's look at some additional features that were mentioned in the earlier post and that are automated by the SDK for the app developer, i.e. areas that require no additional coding by the app developer when developing with the SDK, as they only require server-side configuration:

Additional Configuration

Offline Authentication
Select this option in the service domain configuration to allow users to log in and authenticate to the application locally. Clear the box to block users from authenticating locally.

Strong Authentication
By simply selecting the OAAMSecurityHandlerPlugin while configuring mobile-related service domains, the OAM Mobile & Social service allows sophisticated device and client application registration logic, as well as the advanced risk and fraud analysis logic found in OAAM, to be applied to mobile authentication. Let's look at some scenarios where the OAAMSecurityHandlerPlugin gets used. First, when we configure OAM and OAAM to integrate together using the TAP scheme, that integration kicks off by selecting the OAAMSecurityHandlerPlugin in the mobile service domain. This is how the mobile device is prompted for KBA, OTP etc., depending on the TAP scheme integration and the OAM users registered in the OAAM database. Second, when we configured the service domain, there were claim attributes there that are already pre-configured in the OAM Mobile & Social service, and we simply accepted the default values - these are the set of attributes that will be fetched from the device and passed to the server during registration/authentication as device profile attributes. When a mobile application requests a token through the Mobile Client SDK, the SDK logic sends the device profile attributes as part of an HTTP request. This set of device profile attributes enhances security by creating an audit trail for devices that assists device identification. When the OAAM security plug-in is used, a particular combination of device profile attribute values is treated as a device fingerprint, known as the Digital Fingerprint in the OAAM Administration Console. Each fingerprint is assigned a unique fingerprint number. Each OAAM session is associated with a fingerprint, and the fingerprint makes it possible to log (and audit) the devices that are performing authentication and token acquisition. Finally, if the jailbroken option is selected while configuring an application profile, the SDK detects whether a device is jailbroken based on the configured policy; if the OAAM handler is configured, the plug-in can allow or block access to the client device depending on the OAAM policy, as well as detect blacklisted, lost or stolen devices and send a wipe-out command that deletes all the mobile & social relevant data and blocks the device from future access.

Social Logins
Finally, let's complete this post by adding configuration for social logins for mobile applications. Although the Avitek sample apps do not demonstrate social logins, this would be an ideal exercise for you based on the sample code provided in the earlier post. I'll cover the server-side configuration here (with Facebook as an example), and you can retrofit the code to accommodate social logins by following the steps outlined in "Invoking Authentication Services", adding code in LoginViewController and maybe creating a new delegate - AvitekRPDelegate - based on the description in the previous post. So, here all you will need to do is configure an application profile for social login, configure a new service domain that uses the social login application profile, register the app on Facebook and finally configure the Facebook OAuth provider in OAM with those settings. Navigate to Mobile and Social, click on "Internet Identity Services" and create a new application profile. Here are the relevant parameters for the new application profile (note also that we're not registering the social user in OAM with the configuration below; however, that is a key feature as well):
- Name: The application name. This must match the name of the mobile application profile created for your application under Mobile Services. We used InventoryApp for this example.
- SharedSecret: Enter a password here. This does not need to match any existing password. It is used as an encryption key between the client and the OAM Mobile and Social service.
- Mobile Application Return URL: After the relying party (social) login, the OAM Mobile & Social service will redirect to the iOS application using this URI. This is defined under Info -> URL Types; we used 'osa', so we define this here as 'osa://'.
- Login Type: Choose to allow only internet identity authentication for this exercise.
- Authentication Service Endpoint: Make sure that /internetidentityauthentication is selected.
Log in to http://developers.facebook.com using your Facebook account, click on Apps and register the app as InventoryApp. Note that the consumer key and API secret get generated automatically by the Facebook OAuth server. Navigate back to OAM and, under Mobile and Social, click on "Internet Identity Services" and edit the Facebook OAuth provider. Add the consumer key and API secret from the Facebook developers site to the Facebook OAuth provider. Navigate to Mobile Services. Click on New to create a new service domain. In this example we call the domain "AvitekDomainRP". The type should be 'Mobile Application' and the application credential type 'User Token'. Add the application "InventoryApp" to the domain. Advance to the next page of the wizard. Select the default service profiles, but ensure that the Authentication Service is set to 'InternetIdentityAuthentication'. Finish the creation of the service domain.
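One note on the JWT mentioned above, purely as an illustration and not the OAM SDK API (the endpoint URL, header and method names here are all assumptions; the real OAM Mobile & Social REST contract defines its own URLs and token transport): once a client holds the signed token, subsequent REST calls generally just carry it on each request, along these lines in any HTTP client:

// using System; using System.Net.Http; using System.Net.Http.Headers; using System.Threading.Tasks;
// Hypothetical endpoint; treat this only as the general shape of a token-bearing call.
static async Task CallProtectedServiceAsync(string jwtToken)
{
    using (HttpClient client = new HttpClient())
    {
        // Attach the previously issued token so the server can authorize the call.
        client.DefaultRequestHeaders.Authorization =
            new AuthenticationHeaderValue("Bearer", jwtToken);
        HttpResponseMessage response =
            await client.GetAsync("https://oam.example.com/protected/resource");
        Console.WriteLine(response.StatusCode);
    }
}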

    Read the article

  • How to shoot yourself in the foot (DO NOT Read in the office)

    - by TATWORTH
Originally posted on: http://geekswithblogs.net/TATWORTH/archive/2013/06/21/how-to-shoot-yourself-in-the-foot-do-not-read.aspx

Let me make it absolutely clear - the following is:
- merely collated by your Geek from http://www.codeproject.com/Lounge.aspx?msg=3917012#xx3917012xx
- very, very, very funny, so you read it in the presence of others at your own risk

So here is the list - you have been warned!

C
You shoot yourself in the foot.

C++
You accidentally create a dozen instances of yourself and shoot them all in the foot. Providing emergency medical assistance is impossible since you can't tell which are bitwise copies and which are just pointing at others and saying "That's me, over there."

FORTRAN
You shoot yourself in each toe, iteratively, until you run out of toes, then you read in the next foot and repeat. If you run out of bullets, you continue anyway because you have no exception-handling facility.

Modula-2
After realizing that you can't actually accomplish anything in this language, you shoot yourself in the head.

COBOL
USEing a COLT 45 HANDGUN, AIM gun at LEG.FOOT, THEN place ARM.HAND.FINGER on HANDGUN.TRIGGER and SQUEEZE. THEN return HANDGUN to HOLSTER. CHECK whether shoelace needs to be retied.

Lisp
You shoot yourself in the appendage which holds the gun with which you shoot yourself in the appendage which holds the gun with which you shoot yourself in the appendage which holds...

BASIC
Shoot yourself in the foot with a water pistol. On big systems, continue until entire lower body is waterlogged.

Forth
Foot yourself in the shoot.

APL
You shoot yourself in the foot; then spend all day figuring out how to do it in fewer characters.

Pascal
The compiler won't let you shoot yourself in the foot.

Snobol
If you succeed, shoot yourself in the left foot. If you fail, shoot yourself in the right foot.

HyperTalk
Put the first bullet of the gun into foot left of leg of you. Answer the result.

Prolog
You tell your program you want to be shot in the foot. The program figures out how to do it, but the syntax doesn't allow it to explain.

370 JCL
You send your foot down to MIS with a 4000-page document explaining how you want it to be shot. Three years later, your foot comes back deep-fried.

FORTRAN-77
You shoot yourself in each toe, iteratively, until you run out of toes, then you read in the next foot and repeat. If you run out of bullets, you continue anyway because you still can't do exception-processing.

Modula-2 (alternative)
You perform a shooting on what might be currently a foot with what might be currently a bullet shot by what might currently be a gun.

BASIC (compiled)
You shoot yourself in the foot with a BB using a SCUD missile launcher.

Visual Basic
You'll really only appear to have shot yourself in the foot, but you'll have so much fun doing it that you won't care.

Forth (alternative)
BULLET DUP3 * GUN LOAD FOOT AIM TRIGGER PULL BANG! EMIT DEAD IF DROP ROT THEN
(This takes about five bytes of memory, executes in two to ten clock cycles on any processor and can be used to replace any existing function of the language as well as in any future words.) (Welcome to bottom-up programming - where you, too, can perform compiler pre-processing instead of writing code.)

APL (alternative)
You hear a gunshot and there's a hole in your foot, but you don't remember enough linear algebra to understand what happened. Or: @#&^$%&%^ foot

Pascal (alternative)
Same as Modula-2 except that the bullet is not the right type for the gun and your hand is blown off.

Snobol (alternative)
You grab your foot with your hand, then rewrite your hand to be a bullet. The act of shooting the original foot then changes your hand/bullet into yet another foot (a left foot).

Prolog (alternative)
You attempt to shoot yourself in the foot, but the bullet, failing to find its mark, backtracks to the gun, which then explodes in your face.

COMAL
You attempt to shoot yourself in the foot with a water pistol, but the bore is clogged, and the pressure build-up blows apart both the pistol and your hand. Or: draw_pistol aim_at_foot(left) pull_trigger hop(swearing)

Scheme
As Lisp, but none of the other appendages are aware of this happening.

Algol
You shoot yourself in the foot with a musket. The musket is aesthetically fascinating and the wound baffles the adolescent medic in the emergency room.

Ada
If you are dumb enough to actually use this language, the United States Department of Defense will kidnap you, stand you up in front of a firing squad and tell the soldiers, "Shoot at the feet." Or: The Department of Defense shoots you in the foot after offering you a blindfold and a last cigarette. Or: After correctly packaging your foot, you attempt to concurrently load the gun, pull the trigger, scream and shoot yourself in the foot. When you try, however, you discover that your foot is of the wrong type. Or: After correctly packing your foot, you attempt to concurrently load the gun, pull the trigger, scream, and confidently aim at your foot knowing it is safe. However the cordite in the round does an Unchecked Conversion, fires and shoots you in the foot anyway.

Eiffel
You create a GUN object, two FOOT objects and a BULLET object. The GUN passes both the FOOT objects a reference to the BULLET. The FOOT objects increment their hole counts and forget about the BULLET. A little demon then drives a garbage truck over your feet and grabs the bullet (both of it) on the way.

Smalltalk
You spend so much time playing with the graphics and windowing system that your boss shoots you in the foot, takes away your workstation and makes you develop in COBOL on a character terminal. Or: You send the message shoot to gun, with selectors bullet and myFoot. A window pops up saying Gunpowder doesNotUnderstand: spark. After several fruitless hours spent browsing the methods for Trigger, FiringPin and IdealGas, you take the easy way out and create ShotFoot, a subclass of Foot with an additional instance variable bulletHole.

Object Oriented Pascal
You perform a shooting on what might currently be a foot with what might currently be a bullet fired from what might currently be a gun.

PL/I
You consume all available system resources, including all the offline bullets. The Data Processing & Payroll Department doubles its size, triples its budget, acquires four new mainframes and drops the original one on your foot.

Postscript
foot bullets 6 locate loadgun aim gun shoot showpage
Or: It takes the bullet ten minutes to travel from the gun to your foot, by which time you're long since gone out to lunch. The text comes out great, though.

PERL
You stab yourself in the foot repeatedly with an incredibly large and very heavy Swiss Army knife. Or: You pick up the gun and begin to load it. The gun and your foot begin to grow to huge proportions and the world around you slows down, until the gun fires. It makes a tiny hole, which you don't feel.

Assembly Language
You crash the OS and overwrite the root disk. The system administrator arrives and shoots you in the foot. After a moment of contemplation, the administrator shoots himself in the foot and then hops around the room rabidly shooting at everyone in sight. Or: You try to shoot yourself in the foot only to discover you must first reinvent the gun, the bullet, and your foot. Or: The bullet travels to your foot instantly, but it took you three weeks to load the round and aim the gun.

BCPL
You shoot yourself somewhere in the leg -- you can't get any finer resolution than that.

Concurrent Euclid
You shoot yourself in somebody else's foot.

Motif
You spend days writing a UIL description of your foot, the trajectory, the bullet and the intricate scrollwork on the ivory handles of the gun. When you finally get around to pulling the trigger, the gun jams.

Powerbuilder
While attempting to load the gun you discover that the LoadGun system function is buggy; as a workaround you tape the bullet to the outside of the gun and unsuccessfully attempt to fire it with a nail. In frustration you club your foot with the butt of the gun and explain to your client that this approximates the functionality of shooting yourself in the foot and that the next version of Powerbuilder will fix it.

Standard ML
By the time you get your code to typecheck, you're using a shoot to foot yourself in the gun.

MUMPS
You shoot 583149 AK-47 teflon-tipped, hollow-point, armour-piercing bullets into even-numbered toes on odd-numbered feet of everyone in the building -- with one line of code. Three weeks later you shoot yourself in the head rather than try to modify that line.

Java
You locate the Gun class, but discover that the Bullet class is abstract, so you extend it and write the missing part of the implementation. Then you implement the ShootAble interface for your foot, and recompile the Foot class. The interface lets the bullet call the doDamage method on the Foot, so the Foot can damage itself in the most effective way. Now you run the program, and call the doShoot method on the instance of the Gun class. First the Gun creates an instance of Bullet, which calls the doFire method on the Gun. The Gun calls the hit(Bullet) method on the Foot, and the instance of Bullet is passed to the Foot. But this causes an IllegalHitByBullet exception to be thrown, and you die.

Unix
You shoot yourself in the foot.
Or:
% ls
foot.c foot.h foot.o toe.c toe.o
% rm * .o
rm: .o: No such file or directory
% ls
%

370 JCL (alternative)
You shoot yourself in the head just thinking about it.

DOS JCL
You first find the building you're in in the phone book, then find your office number in the corporate phone book. Then you have to write this down, then describe, in cubits, your exact location, in relation to the door (right hand side thereof). Then you need to write down the location of the gun (loading it is a proprietary utility), then you load it, and the COBOL program, and run them, and, with luck, it may be run tonight.

VMS
$ MOUNT/DENSITY=.45/LABEL=BULLET/MESSAGE="BYE" BULLET::BULLET$GUN SYS$BULLET
$ SET GUN/LOAD/SAFETY=OFF/SIGHT=NONE/HAND=LEFT/CHAMBER=1/ACTION=AUTOMATIC/LOG/ALL/FULL SYS$GUN_3$DUA3:[000000]GUN.GNU
$ SHOOT/LOG/AUTO SYS$GUN SYS$SYSTEM:[FOOT]FOOT.FOOT
%DCL-W-ACTIMAGE, error activating image GUN
-CLI-E-IMGNAME, image file $3$DUA240:[GUN]GUN.EXE;1
-IMGACT-F-NOTNATIVE, image is not an OpenVMS Alpha AXP image
Or:
%SYS-F-FTSHT, foot shot (fifty lines of traceback omitted)

sh, csh, etc.
You can't remember the syntax for anything, so you spend five hours reading manual pages, then your foot falls asleep. You shoot the computer and switch to C.

Apple System 7
Double click the gun icon and a window appears giving a selection for guns, target areas, plus balloon help with medical remedies, and assorted sound effects. Click "shoot" button and a small bomb appears with note "Error of Type 1 has occurred."

Windows 3.1
Double click the gun icon and wait. Eventually a window opens giving a selection for guns, target areas, plus balloon help with medical remedies, and assorted sound effects. Click "shoot" button and a small box appears with note "Unable to open Shoot.dll, check that path is correct."

Windows 95
Your gun is not compatible with this OS and you must buy an upgrade and install it before you can continue. Then you will be informed that you don't have enough memory.

CP/M
I remember when shooting yourself in the foot with a BB gun was a big deal.

DOS
You finally found the gun, but can't locate the file with the foot for the life of you.

MSDOS
You shoot yourself in the foot, but can unshoot yourself with add-on software.

Access
You try to point the gun at your foot, but it shoots holes in all your Borland distribution diskettes instead.

Paradox
Not only can you shoot yourself in the foot, your users can too.

dBase
You squeeze the trigger, but the bullet moves so slowly that by the time your foot feels the pain, you've forgotten why you shot yourself anyway. Or: You buy a gun. Bullets are only available from another company and are promised to work so you buy them. Then you find out that the next version of the gun is the one scheduled to actually shoot bullets.

DBase IV, V1.0
You pull the trigger, but it turns out that the gun was a poorly designed hand grenade and the whole building blows up.

SQL
You cut your foot off, send it out to a service bureau and when it returns, it has a hole in it but will no longer fit the attachment at the end of your leg. Or:
Insert into Foot
Select Bullet
From Gun.Hand
Where Chamber = 'LOADED'
And Trigger = 'PULLED'

Clipper
You grab a bullet, get ready to insert it in the gun so that you can shoot yourself in the foot, and discover that the gun that the bullet fits has not yet been built, but should be arriving in the mail _REAL_SOON_NOW_.

Oracle
The menus for coding foot_shooting have not been implemented yet and you can't do foot shooting in SQL.

English
You put your foot in your mouth, then bite it off. (For those who don't know, English is a McDonnell Douglas/PICK query language which allegedly requires 110% of system resources to run happily.)

Revelation [an implementation of the PICK Operating System]
You'll be able to shoot yourself in the foot just as soon as you figure out what all these bullets are for.

FlagShip
Starting at the top of your head, you aim the gun at yourself repeatedly until, half an hour later, the gun is finally pointing at your foot and you pull the trigger. A new foot with a hole in it appears but you can't work out how to get rid of the old one and your gun doesn't work anymore.

FidoNet
You put your foot in your mouth, then echo it internationally.

PicoSpan [a UNIX-based computer conferencing system]
You can't shoot yourself in the foot because you're not a host. Or (host variation): Whenever you shoot yourself in the foot, someone opens a topic in policy about it.

Internet
You put your foot in your mouth, shoot it, then spam the bullet so that everybody gets shot in the foot.

troff
troff -ms -Hdrwp | lpr -Pwp2 &
.*place bullet in footer
.B
.NR FT +3i
.in 4
.bu
Shoot!
.br
.sp
.in -4
.br
.bp
NR HD -2i
.*

Genetic Algorithms
You create 10,000 strings describing the best way to shoot yourself in the foot. By the time the program produces the optimal solution, humans have evolved wings and the problem is moot.

CSP (Communicating Sequential Processes)
You only fail to shoot everything that isn't your foot.

MS-SQL Server
MS-SQL Server's gun comes pre-loaded with an unlimited supply of Teflon coated bullets, and it only has two discernible features: the muzzle and the trigger. If that wasn't enough, MS-SQL Server also puts the gun in your hand, applies local anesthetic to the skin of your forefinger and stitches it to the gun's trigger. Meanwhile, another process has set up a spinal block to numb your lower body. It will then proceed to surgically remove your foot, cryogenically freeze it for preservation, and attach it to the muzzle of the gun so that no matter where you aim, you will shoot your foot. In order to avoid shooting yourself in the foot, you need to unstitch your trigger finger, remove your foot from the muzzle of the gun, and have it surgically reattached. Then you probably want to get some crutches and go out to buy a book on SQL Server Performance Tuning.

Sybase
Sybase's gun requires assembly, and you need to go out and purchase your own clip and bullets to load the gun. Assembly is complicated by the fact that Sybase has hidden the gun behind a big stack of reference manuals, but it hasn't told you where that stack is. While you were off finding the gun, assembling it, buying bullets, etc., Sybase was also busy surgically removing your foot and cryogenically freezing it for preservation. Instead of attaching it to the muzzle of the gun, though, it packed your foot on dry ice and sent it UPS-Ground to an unnamed hookah bar somewhere in the middle east. In order to shoot your foot, you must modify your gun with a GPS system for targeting and hire some guy named "Indy" to find the hookah bar and wire the coordinates back to you. By this time, you've probably become so daunted at the tasks that stand between you and shooting your foot that you hire a guy who's read all the books on Sybase to help you shoot your foot. If you're lucky, he'll be smart enough both to find your foot and to stop you from shooting it.

Magic software
You spend 1 week looking up the correct syntax for GUN. When you find it, you realise that GUN will not let you shoot in your own foot. It will allow you to shoot almost anything but your foot. You then decide to build your own gun. You can't use the standard barrel since this will only allow for standard bullets, which will not fire if the barrel is pointed at your foot. After four weeks, you have created your own custom gun. It blows up in your hand without warning, because you failed to initialise the safety catch and it doesn't know whether the initial state is "0", 0, NULL, "ZERO", 0.0, 0,0, "0.0", or "0,00". You fix the problem with your remaining hand by nesting 12 safety catches, and then decide to build the gun without a safety catch. You then shoot the management and retire to a happy life where you code in languages that will allow you to shoot your foot in under 10 days.

Firefox
Lets you shoot yourself in as many feet as you'd like, while using multiple great addons!

IE
A moving target in terms of standard ammunition size, and doesn't always work properly with non-Microsoft ammunition, so sometimes you shoot something other than your foot. However, it's the corporate world's standard foot-shooting apparatus. Hackers seem to enjoy rigging websites up to trigger cascading foot-shooting failures.

Windows 98
About the same as Windows 95 in terms of overall bullet capacity and triggering mechanisms. Includes updated DirectShot API. A new version was released later on to support USB guns, Windows 98 SE.

WPF
You get your baseball glove and a ball and you head out to your backyard, where you throw balls to your pitchback. Then your unkempt-haired-cargo-shorts-and-sandals-with-white-socks-wearing neighbor uses XAML to sculpt your arm into a gun, the ball into a bullet and the pitchback into your foot. By now, however, only the neighbor can get it to work and he's only around from 6:30 PM - 3:30 AM.

LOGO
You very carefully lay out the trajectory of the bullet. Then you start the gun, which fires very slowly. You walk precisely to the point where the bullet will travel and wait, but just before it gets to you, your class time is up and one of the other kids has already used the system to hack into Sony's PS3 network.

Flash
Someone has designed a beautiful-looking gun that anyone can shoot their feet with for free. It weighs six hundred pounds. All kinds of people are shooting themselves in the feet, and sending the link to everyone else so that they can too. That is, except for the criminals, who are all stealing iOS devices that the gun won't work with.

APL (alternative)
It's (mostly) all Greek to me.

Lisp (alternative)
Place ((gun in ((hand sight (foot then shoot))))) (Lots of Insipid Stupid Parentheses)

Apple OS/X and iOS
Once a year, Steve Jobs returns from sick leave to tell millions of unwavering fans how they will be able to shoot themselves in the foot differently this year. They retweet and blog about it ad nauseam, and wait in line to be the first to experience "shoot different".

Windows ME
Usually fails, even at shooting you in the foot. Yo dawg, I heard you like shooting yourself in the foot. So I put a gun in your gun, so you can shoot yourself in the foot while you shoot yourself in the foot. (Okay, I'm not especially proud of this joke.)

Windows 2000
Now you really do have to log in, before you are allowed to shoot yourself in the foot.

Windows XP
You thought you learned your lesson: Don't use Windows ME. Then, along came this new creature, built on top of Windows NT! So you spend the next couple days installing antivirus software, patches and service packs, just so you can get that driver to install, and then proceed to shoot yourself in the foot.

Windows Vista
Newer! Glossier! Shootier!

Windows 7
The bullets come out a lot smoother.

Active Directory
Each bullet now has an attached Bullet Identifier, and can be uniquely identified. Policies can be applied to dictate fragmentation, and the gun will occasionally have a confusing delay after the trigger has been pulled.

Python
You try to use import foot; foot.shoot() only to realize that's only available in 3.0, to which you can't yet upgrade from 2.7 because of all those extension libs lacking support.

Solaris
Shoots best when used on SPARC hardware, but still runs the trigger GUI under Java. After weeks of learning the appropriate STOP command to prevent the trigger from automatically being pressed on boot, you think you've got it under control. Then the one time you ever use dtrace, it hits a bug that fires the gun.

MySQL
The feature that allows you to shoot yourself in the foot has been in development for about 6 years, and they are adding it into the next version, which is coming out REAL SOON NOW, promise! But you can always check it out of source control and try it yourself (just not in any environment where data integrity is important, because it will probably explode).

PostgreSQL
Allows you to have a smug look on your face while you shoot yourself in the foot, because those MySQL guys STILL don't have that feature.

NoSQL
Barrel? Who needs a barrel? Just put the bullet on your foot, and strike it with a hammer. See? It's so much simpler and more efficient that way. You can even strike multiple bullets in one swing if you swing with a good enough arc, because hammers are easy to use. Getting them to synchronize is a little difficult, though.

Eclipse
There are about a dozen different packages for shooting yourself in the foot, with weird interdependencies on outdated components. Once you finally navigate the morass and get one installed, you then have something to look at while you shoot yourself in the foot with that package: You can watch the screen redraw.

Outlook
Makes it really easy to let everyone know you shot yourself in the foot!

Shooting yourself in the foot using delegates
You really need to shoot yourself in the foot but you hate firearms (you don't want any dependency on the specifics of shooting), so you delegate it to somebody else. You don't care how it is done as long as it is shooting your foot. You can do it asynchronously in case you know you may faint, so you are called back/slapped in the face by your shooter/friend (or background worker) when everything is done.

C#
You prepare the gun and the bullet, carefully modeling all of the physics of a bullet traveling through a foot. Just before you're about to pull the trigger, you stumble on System.Windows.BodyParts.Foot.ShootAt(System.Windows.Firearms.IGun gun) in the extended framework, realize you just wasted the entire afternoon, and shoot yourself in the head.

PHP
<?php require("foot_safety_check.php"); ?>
<!DOCTYPE HTML>
<html>
<head> <!--Lower!-->
<title>Shooting me in the foot</title>
</head>
<body> <!--LOWER!!!-->
<leg> <!--OK, I made this one up...-->
<footer>
<?php echo (dungSift($_SERVER['HTTP_USER_AGENT'], "ie"))?("Your foot is safe, but you might want to wear a hard hat!"):("<div class=\"shot\">BANG!</div>"); ?>
</footer>
</leg>
</body>
</html>

    Read the article

  • Gathering statistics for an Oracle WebCenter Content Database

    - by Nicolas Montoya
Have you ever heard: "My Oracle WebCenter Content instance is running slow. I checked the memory and CPU usage of the application server and it has plenty of resources. What could be going wrong?"

An Oracle WebCenter Content instance runs on an application server and relies on a database server on the back end. If your application server tier is running fine, chances are that your database server tier may host the root of the problem. While many things could cause performance problems, on active Enterprise Content Management systems, keeping database statistics updated is extremely important.

The Oracle Database has a set of built-in optimizer utilities that can help make database queries more efficient. It is strongly recommended to update or re-create the statistics about the physical characteristics of a table and the associated indexes in order to maximize the efficiency of the optimizer. These physical characteristics include:
- Number of records
- Number of pages
- Average record length

The frequency with which you need to update statistics depends on how quickly the data is changing. Typically, statistics should be updated when the number of new items since the last update is greater than ten percent of the number of items when the statistics were last updated. If a large amount of documents are being added or removed from the system, a post step should be added to gather statistics upon completion of this massive data change. In some cases, you may need to collect statistics in the middle of the data processing to expedite its execution. These processes include but are not limited to: data migration, bootstrapping of a new system, records management disposition processing (typically at the end of the calendar year), etc. A DOCUMENTS table with ten million rows will often generate a very different plan than a table with just a thousand.

A quick check of the statistics for the WebCenter Content (WCC) Database could be performed via the below query:

SELECT OWNER, TABLE_NAME, NUM_ROWS, BLOCKS, AVG_ROW_LEN,
       TO_CHAR(LAST_ANALYZED, 'MM/DD/YYYY HH24:MI:SS')
FROM DBA_TABLES
WHERE TABLE_NAME='DOCUMENTS';

OWNER       TABLE_NAME    NUM_ROWS   BLOCKS   AVG_ROW_LEN   TO_CHAR(LAST_ANALYZ
----------  ------------  ---------  -------  ------------  -------------------
ATEAM_OCS   DOCUMENTS     4172       46       61            04/06/2012 11:17:51

This output will return not only the date when the WCC table DOCUMENTS was last analyzed, but also the <DATABASE SCHEMA OWNER> for this table, in the form <PREFIX>_OCS.

This database username could later on be used to check on other objects owned by the WCC <DATABASE SCHEMA OWNER>, as shown below:

SELECT OWNER, TABLE_NAME, NUM_ROWS, BLOCKS, AVG_ROW_LEN,
       TO_CHAR(LAST_ANALYZED, 'MM/DD/YYYY HH24:MI:SS')
FROM DBA_TABLES
WHERE OWNER='ATEAM_OCS'
ORDER BY NUM_ROWS ASC;
...
OWNER       TABLE_NAME            NUM_ROWS   BLOCKS   AVG_ROW_LEN   TO_CHAR(LAST_ANALYZ
----------  --------------------  ---------  -------  ------------  -------------------
ATEAM_OCS   REVISIONS             2051       46       141           04/09/2012 22:00:22
ATEAM_OCS   DOCUMENTS             4172       46       61            04/06/2012 11:17:51
ATEAM_OCS   ARCHIVEHISTORY        4908       244      218           04/06/2012 11:17:49
ATEAM_OCS   DOCUMENTHISTORY       5865       110      72            04/06/2012 11:17:50
ATEAM_OCS   SCHEDULEDJOBSHISTORY  10131      244      131           04/06/2012 11:17:54
ATEAM_OCS   SCTACCESSLOG          10204      496      268           04/06/2012 11:17:54
...

The Oracle Database allows you to collect statistics of many different kinds as an aid to improving performance. The DBMS_STATS package is concerned with optimizer statistics only. The database sets automatic statistics collection of this kind on by default; the DBMS_STATS package is intended only for specialized cases.

The following subprograms gather certain classes of optimizer statistics:
- GATHER_DATABASE_STATS Procedures
- GATHER_DICTIONARY_STATS Procedure
- GATHER_FIXED_OBJECTS_STATS Procedure
- GATHER_INDEX_STATS Procedure
- GATHER_SCHEMA_STATS Procedures
- GATHER_SYSTEM_STATS Procedure
- GATHER_TABLE_STATS Procedure

The DBMS_STATS.GATHER_SCHEMA_STATS PL/SQL procedure gathers statistics for all objects in a schema:

DBMS_STATS.GATHER_SCHEMA_STATS (
    ownname          VARCHAR2,
    estimate_percent NUMBER   DEFAULT to_estimate_percent_type(get_param('ESTIMATE_PERCENT')),
    block_sample     BOOLEAN  DEFAULT FALSE,
    method_opt       VARCHAR2 DEFAULT get_param('METHOD_OPT'),
    degree           NUMBER   DEFAULT to_degree_type(get_param('DEGREE')),
    granularity      VARCHAR2 DEFAULT GET_PARAM('GRANULARITY'),
    cascade          BOOLEAN  DEFAULT to_cascade_type(get_param('CASCADE')),
    stattab          VARCHAR2 DEFAULT NULL,
    statid           VARCHAR2 DEFAULT NULL,
    options          VARCHAR2 DEFAULT 'GATHER',
    objlist          OUT      ObjectTab,
    statown          VARCHAR2 DEFAULT NULL,
    no_invalidate    BOOLEAN  DEFAULT to_no_invalidate_type(get_param('NO_INVALIDATE')),
    force            BOOLEAN  DEFAULT FALSE);

There are several values for the OPTIONS parameter that we need to know about:
- GATHER reanalyzes the whole schema
- GATHER EMPTY only analyzes tables that have no existing statistics
- GATHER STALE only reanalyzes tables with more than 10 percent modifications (inserts, updates, deletes)
- GATHER AUTO will reanalyze objects that currently have no statistics and objects with stale statistics. Using GATHER AUTO is like combining GATHER STALE and GATHER EMPTY.

Example:

exec dbms_stats.gather_schema_stats( -
  ownname => '<PREFIX>_OCS', -
  options => 'GATHER AUTO' -
);
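If you want to trigger this same gather step from application code (for example, as the post step after a mass ingest mentioned above), a minimal sketch along these lines can work. It assumes the ODP.NET managed driver; the connection string and schema name are placeholders, not values from the original post.

// Sketch: invoke DBMS_STATS.GATHER_SCHEMA_STATS from .NET after a bulk load.
// Assumes the Oracle.ManagedDataAccess (ODP.NET) driver is available.
using Oracle.ManagedDataAccess.Client;

class GatherStats
{
    static void Main()
    {
        // Placeholder connection string; use your own credentials and TNS alias.
        const string connStr = "User Id=system;Password=secret;Data Source=orcl";
        using (var conn = new OracleConnection(connStr))
        using (var cmd = conn.CreateCommand())
        {
            conn.Open();
            // Same call as the SQL*Plus example: GATHER AUTO for the WCC schema.
            cmd.CommandText =
                "BEGIN DBMS_STATS.GATHER_SCHEMA_STATS(" +
                "ownname => :owner, options => 'GATHER AUTO'); END;";
            cmd.Parameters.Add("owner", "ATEAM_OCS"); // replace with your <PREFIX>_OCS
            cmd.ExecuteNonQuery();
        }
    }
}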

    Read the article

  • Faster, Simpler access to Azure Tables with Enzo Azure API

    - by Herve Roggero
After developing the latest version of Enzo Cloud Backup I took the time to create an API that would simplify access to Azure Tables (the Enzo Azure API). At first, my goal was to make the code simpler compared to the Microsoft Azure SDK. But as it turns out it is also a little faster; and when using the specialized methods (the fetch strategies) it is much faster out of the box than the Microsoft SDK, unless you start creating complex parallel and resilient routines yourself. Last but not least, I decided to add a few extension methods that I think you will find attractive, such as the ability to transform a list of entities into a DataTable. So let's review each area in more detail.

Simpler Code

My first objective was to make the API much easier to use than the Azure SDK. I wanted to reduce the amount of code necessary to fetch entities, remove the code needed to add automatic retries and handle transient conditions, and give additional control, such as a way to cancel operations, obtain basic statistics on the calls, and control the maximum number of REST calls the API generates in an attempt to avoid throttling conditions in the first place (something you cannot do with the Azure SDK at this time).

Strongly Typed

Before diving into the code, note that the following examples rely on a strongly typed class called MyData. The way MyData is defined for the Azure SDK is similar to the Enzo Azure API, with the exception that they inherit from different classes. With the Azure SDK, classes that represent entities must inherit from TableServiceEntity, while classes with the Enzo Azure API must inherit from BaseAzureTable or implement a specific interface.

// With the SDK
public class MyData1 : TableServiceEntity
{
    public string Message { get; set; }
    public string Level { get; set; }
    public string Severity { get; set; }
}

// With the Enzo Azure API
public class MyData2 : BaseAzureTable
{
    public string Message { get; set; }
    public string Level { get; set; }
    public string Severity { get; set; }
}

Simpler Code

Now that the classes representing an Azure Table entity are defined, let's review what the Azure SDK code looks like when fetching all the entities from an Azure Table (note the use of a few variables: the _tableName variable stores the name of the Azure Table, and the ConnectionString property returns the connection string for the Storage Account containing the table):

// With the Azure SDK
public List<MyData1> FetchAllEntities()
{
    CloudStorageAccount storageAccount = CloudStorageAccount.Parse(ConnectionString);
    CloudTableClient tableClient = storageAccount.CreateCloudTableClient();
    TableServiceContext serviceContext = tableClient.GetDataServiceContext();
    CloudTableQuery<MyData1> partitionQuery =
        (from e in serviceContext.CreateQuery<MyData1>(_tableName)
         select new MyData1()
         {
             PartitionKey = e.PartitionKey,
             RowKey = e.RowKey,
             Timestamp = e.Timestamp,
             Message = e.Message,
             Level = e.Level,
             Severity = e.Severity
         }).AsTableServiceQuery<MyData1>();
    return partitionQuery.ToList();
}

This code gives you automatic retries because the AsTableServiceQuery does that for you. Also, note that this method is strongly typed because it is using LINQ. Although this doesn't look like too much code at first glance, you are actually mapping the strongly-typed object manually. So for larger entities, with dozens of properties, your code will grow.
And from a maintenance standpoint, when a new property is added, you may need to change the mapping code. You will also note that the mapping being performed is optional; it is desired when you want to retrieve specific properties of the entities (not all) to reduce the network traffic. If you do not specify the properties you want, all the properties will be returned; in this example we are returning the Message, Level and Severity properties (in addition to the required PartitionKey, RowKey and Timestamp).

The Enzo Azure API does the mapping automatically and also handles automatic retries when fetching entities. The equivalent code to fetch all the entities (with the same three properties) from the same Azure Table looks like this:

// With the Enzo Azure API
public List<MyData2> FetchAllEntities()
{
    AzureTable at = new AzureTable(_accountName, _accountKey, _ssl, _tableName);
    List<MyData2> res = at.Fetch<MyData2>("", "Message,Level,Severity");
    return res;
}

As you can see, the Enzo Azure API returns the entities already strongly typed, so there is no need to map the output. Also, the Enzo Azure API makes it easy to specify the list of properties to return, and to specify a filter as well (no filter was provided in this example; the filter is passed as the first parameter).

Fetch Strategies

Both approaches discussed above fetch the data sequentially. In addition to the linear/sequential fetch methods, the Enzo Azure API provides specific fetch strategies. Fetch strategies are designed to prepare a set of REST calls, executed in parallel, in a way that performs faster than if you were to fetch the data sequentially. For example, if the PartitionKey is a GUID string, you could prepare multiple calls, providing appropriate filters ([‘a’, ‘b’[, [‘b’, ‘c’[, [‘c’, ‘d’[, …), and send those calls in parallel. As you can imagine, the code necessary to create these requests would be fairly large. With the Enzo Azure API, two strategies are provided out of the box: the GUID and List strategies. If you are interested in how these strategies work, see the Enzo Azure API Online Help. Here is example code that performs parallel requests using the GUID strategy (which executes more than two to three times faster than the sequential methods discussed previously):

public List<MyData2> FetchAllEntitiesGUID()
{
    AzureTable at = new AzureTable(_accountName, _accountKey, _ssl, _tableName);
    List<MyData2> res = at.FetchWithGuid<MyData2>("", "Message,Level,Severity");
    return res;
}
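To make the "fairly large" hand-rolled alternative concrete, here is a minimal sketch of the fan-out idea behind a GUID fetch strategy. FetchRange below is a hypothetical delegate standing in for whatever range query your client exposes; this illustrates the pattern only and is not the Enzo Azure API's implementation.

// Sketch: split the GUID key space into 16 disjoint PartitionKey ranges
// and query them in parallel. fetchRange(from, to) is hypothetical and
// should return entities whose PartitionKey falls in [from, to).
using System;
using System.Collections.Generic;
using System.Linq;
using System.Threading.Tasks;

static class GuidFanOut
{
    public static List<T> FetchAll<T>(Func<string, string, List<T>> fetchRange)
    {
        // GUID strings start with a hex digit, so these bounds cover the table.
        var bounds = "0123456789abcdef".Select(c => c.ToString()).ToList();
        bounds.Add("g"); // sentinel just past 'f' to close the last range

        var tasks = new List<Task<List<T>>>();
        for (int i = 0; i < bounds.Count - 1; i++)
        {
            string from = bounds[i], to = bounds[i + 1];
            tasks.Add(Task.Run(() => fetchRange(from, to))); // one request per range
        }
        Task.WaitAll(tasks.ToArray());
        return tasks.SelectMany(t => t.Result).ToList();
    }
}

A real implementation would also need the retry, cancellation, and throttling-avoidance logic discussed earlier, which is exactly the code the fetch strategies save you from writing.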
Faster Results With Sequential Fetch Methods

Developing a faster API wasn't a primary objective; but it appears that the performance tests performed with the Enzo Azure API deliver the data a little faster out of the box (5%-10% on average, and sometimes up to 50% faster) with the sequential fetch methods. Although the amount of data is the same regardless of the approach (and the REST calls are almost exactly identical), the object mapping approach is different. So it is likely that the slight performance increase is due to a lighter API. Using LINQ offers many advantages and tremendous flexibility; nevertheless when fetching data it seems that the Enzo Azure API delivers it faster. For example, the same code previously discussed delivered the following results when fetching 3,000 entities (about 1KB each). The average elapsed time shows that the Azure SDK returned the 3,000 entities in about 5.9 seconds on average, while the Enzo Azure API took 4.2 seconds on average (39% improvement).

With Fetch Strategies

When using the fetch strategies we are no longer comparing apples to apples; the Azure SDK is not designed to implement fetch strategies out of the box, so you would need to code the strategies yourself. Nevertheless I wanted to provide out-of-the-box capabilities, and as a result you see a test that returned about 10,000 entities (1KB each entity), with an average execution time over 5 runs. The Azure SDK implemented a sequential fetch while the Enzo Azure API implemented the List fetch strategy. The fetch strategy was 2.3 times faster. Note that the following test quickly hit a limit on my network bandwidth (3.56Mbps), so the results of the fetch strategy are significantly below what they could be with higher bandwidth.

Additional Methods

The API wouldn't be complete without support for a few important methods other than the fetch methods discussed previously. The Enzo Azure API offers these additional capabilities:
- Support for batch updates, deletes and inserts
- Conversion of entities to DataRow, and List<> to a DataTable
- Extension methods for Delete, Merge, Update, Insert
- Support for asynchronous calls and cancellation
- Support for fetch statistics (total bytes, total REST calls, retries…)

For more information, visit http://www.bluesyntax.net or go directly to the Enzo Azure API page (http://www.bluesyntax.net/EnzoAzureAPI.aspx).

About Herve Roggero

Herve Roggero, Windows Azure MVP, is the founder of Blue Syntax Consulting, a company specialized in cloud computing products and services. Herve's experience includes software development, architecture, database administration and senior management with both global corporations and startup companies. Herve holds multiple certifications, including an MCDBA, MCSE, MCSD. He also holds a Master's degree in Business Administration from Indiana University. Herve is the co-author of "PRO SQL Azure" from Apress and runs the Azure Florida Association (on LinkedIn: http://www.linkedin.com/groups?gid=4177626). For more information on Blue Syntax Consulting, visit www.bluesyntax.net.

    Read the article

  • Stumbling Through: Visual Studio 2010 (Part IV)

So finally we get to the fun part: the fruits of all of our middle-tier/back-end labors of generating classes to interface with an XML data source (that the previous posts were about) can now be presented quickly and easily to an end user. I think. We'll see. We'll be using a WPF window to display all of our various MFL information that we've collected in the two XML files, and we'll provide a means of adding, updating and deleting each of these entities using as little code as possible. Additionally, I would like to dig into the performance of this solution as well as the flexibility of it if we were to modify the underlying XML schema. So first things first, let's create a WPF project and include our XML data in a data folder within. On the main window, we'll drag out the following controls:

- A combo box to contain all of the teams
- A list box to show the players of the selected team, along with add/delete player buttons
- A text box tied to the selected player's name, with a save button to save any changes made to the player name
- A combo box of all the available positions, tied to the currently selected player's position
- A data grid tied to the statistics of the currently selected player, with add/delete statistic buttons

This monstrosity of a form and its associated project will look like this (don't forget to reference the DataFoundation project from the Presentation project): To get to the visual data binding, as we learned in a previous post, you have to first make sure the project containing your bindable classes is compiled. Do so, and then open the Data Sources pane to add a reference to the Teams and Positions classes in the DataFoundation project: Why only Team and Position? Well, we will get to Players from Teams, and Statistics from Players, so no need to make an interface for them, as we'll see in a second. As for Positions, we'll need a way to bind the dropdown to ALL positions; they don't appear underneath any of the other classes, so we need to reference it directly. After adding these guys, expand every node in your Data Sources pane and see how the Team node allows you to drill into Players and then Statistics. This is why there was no need to bring in a reference to those classes for the UI we are designing: Now for the seriously hard work of binding all of our controls to the correct data sources. Drag the following items from the Data Sources pane to the specified control on the window design canvas:

- Team.Name > Teams combo box
- Team.Players.Name > Players list box
- Team.Players.Name > Player name text box
- Team.Players.Statistics > Statistics data grid
- Position.Name > Positions combo box

That is it! Really? Well, no, not really; there is one caveat here in that the Positions combo box is not bound to the selected player's position. To do so, we will apply a binding to the position combo box's SelectedValue to point to the current player's PositionId value: That should do the trick. Now all we need to worry about is loading the actual data. Sadly, it appears as if we will need to drop to code in order to invoke our IO methods to load all teams and positions. At least Visual Studio kindly created the stubs for us to do so; ultimately the code should look like this: Note the weirdness with the InitializeDataFiles call; that is my current means of telling an IO where to load the data for each of the entities. I haven't thought of a more intuitive way than that yet, but do note that all data is loaded from Teams.xml besides positions, which are loaded from Lookups.xml.
I think that may be all we need to do to at least load all of the data, so let's run it and see: Yay! All of our glorious data is being displayed! Er, wait, what's up with the position dropdown? Why is it red? Let's select the RB and see if everything updates: Crap, the position didn't update to reflect the selected player, but everything else did. Where did we go wrong in binding the position to the selected player? Thinking about it a bit and comparing it to how traditional data binding works, I realize that we never set the value member (or some similar property) to tell the control to join the Id of the source (positions) to the position Id of the player. I don't see a similar property to that on the combo box control, but I do see a property named SelectedValuePath that might be it, so I set it to Id and run the app again: Hey, all right! No red box around the positions combo box. Unfortunately, selecting the RB does not update the dropdown to point to Runningback. Hmmm. Now what could it be? Maybe the problem is that we are loading teams before we are loading positions, so when it binds position Id, all of the positions aren't loaded yet. I went to the code behind and switched things so position loads first, and no dice. Same result when I run. Why? WHY? Ok, ok, calm down, take a deep breath. Get something with caffeine or sugar (preferably both) and think rationally. Ok, a gigantic chocolate chip cookie and a mountain dew chaser have never let me down in the past, so don't fail me now! Ah ha! Of course! I didn't even have to finish the mountain dew and I think I've got it: Data Context. By default, when setting up the selected value binding for the dropdown, the data context was list_team. I don't even know what the heck list_team is; we want it to be bound to our team players view source resource instead, like this: Running it now and selecting the various players: Done and done. Everything read and bound, thank you caffeine and sugar! Oh, and thank you Visual Studio 2010. Let's wire up some of those buttons now. There has got to be a better way to do this, but it works for now. What the add player button does is add a new player object to the currently selected team. Unfortunately, I couldn't get the new object to automatically show up in the players list (something about not using an observable collection; gotta look into this), so I just save the change immediately and reload the screen. Terrible, but it works: Let's go after something easier: the save button. By default, as we type in new text for the player's name, it is showing up in the list box as updated. Cool! Why couldn't my add new player logic do that? Anyway, the save button should be as simple as invoking MFL.IO.Save for the selected player, like this: MFL.IO.Save((MFL.Player)lbTeamPlayers.SelectedItem, true); Surprisingly, that worked on the first try. Let's see if we get as lucky with the Delete player button: MFL.IO.Delete((MFL.Player)lbTeamPlayers.SelectedItem); Refresh(); Note the use of the Refresh method again; I can't seem to figure out why updates to the underlying data source are immediately reflected, but adds and deletes are not. That is a problem for another day, and again my hunch is that I should be binding to something more complex than IEnumerable (like an observable collection). Now that an example of the basic CRUD methods is wired up, I want to quickly investigate the performance of this beast.
I'm going to make a special button to add 30 teams, each with 50 players and 10 seasons' worth of stats. If my math is right, that will end up with 15,000 rows of data, a pretty hefty amount for an XML file. The save of all this new data took a little over a minute, but that is acceptable because we wouldn't typically be saving batches of 15k records, and the resulting XML file size is a little over a megabyte. Not huge, but big enough to see some read performance numbers - or so I thought. It reads this file and renders the first team in under a second. That is unbelievable, but we are lazy loading and the file really wasn't that big. I will increase it to 50 teams with 100 players and 20 seasons each - 100,000 rows. It took a year and a half to save all of that data, and resulted in an 8 megabyte file. Seriously, if you are loading XML files this large, get a freaking database! Despite this, it STILL takes under a second to load and render the first team, which is interesting mostly because I thought that it was loading that entire 8 MB XML file behind the scenes. I have to say that I am quite impressed with the performance of the LINQ to XML approach, particularly since I took no efforts to optimize any of this code and was fairly new to the concept from the start. There might be some merit to this little project after all. Look out SQL Server and Oracle, use XML files instead! Next up, I am going to completely pull the rug out from under the UI and change a number of entities in our model. How well will the code be regenerated? How much effort will be required to tie things back together in the UI?
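On the add/delete refresh problem the post keeps running into, the standard WPF answer is the hunch mentioned above: bind to a collection that raises change notifications. A minimal sketch follows; the MFL.Player and MFL.IO names are taken from the post, while the view-model wrapper itself is illustrative, assuming the generated classes can be adapted this way.

// Sketch: bind the players ListBox to an ObservableCollection<T> instead of
// a plain IEnumerable. ObservableCollection raises CollectionChanged, which
// is what ListBox/DataGrid listen for, so adds and deletes appear in the UI
// without the Refresh() workaround (in-place edits already worked via bindings).
using System.Collections.ObjectModel;

public class TeamViewModel
{
    public ObservableCollection<MFL.Player> Players { get; private set; }

    public TeamViewModel()
    {
        Players = new ObservableCollection<MFL.Player>();
    }

    public void AddPlayer(MFL.Player p)
    {
        Players.Add(p);        // UI updates immediately
        MFL.IO.Save(p, true);  // persist via the post's IO helper
    }

    public void DeletePlayer(MFL.Player p)
    {
        Players.Remove(p);
        MFL.IO.Delete(p);
    }
}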

    Read the article

  • Understanding the GoldenGate Replicat HANDLECOLLISIONS parameter

    - by Liu Maclean
    HANDLECOLLISIONS is a GoldenGate Replicat parameter that tells the Replicat how to resolve collisions between incoming trail records and the data already present in the target tables (it is typically used while an initial load runs against a still-changing source). Without it, a collision raises an error that is handled by the REPERROR rules; by default the Replicat abends, and the offending record can at best be routed to the discard file. With HANDLECOLLISIONS enabled, the Replicat resolves collisions as follows:

- A delete arrives for a row that does not exist on the target (missing delete): the delete is ignored and logged to the discard file.
- An update arrives for a row that does not exist on the target (missing update): if the trail record carries the full row image, the update is converted to an INSERT; otherwise only the available columns are inserted, or the record goes to the discard file.
- An insert arrives for a row that already exists on the target (duplicate insert): the Replicat applies it as an UPDATE of the existing row.

Case 1: a delete arrives for a row that is missing on the target.

C:\Users\ML>sqlplus / as sysdba
SQL*Plus: Release 11.2.0.3.0 Production on Tue Sep 18 13:38:03 2012
Copyright (c) 1982, 2011, Oracle. All rights reserved.
Connected to:
Oracle Database 11g Enterprise Edition Release 11.2.0.3.0 - 64bit Production
With the Partitioning, OLAP, Data Mining and Real Application Testing options

SQL> conn sender/oracle
Connected.
SQL> create table handlec(t1 int primary key,t2 int);
Table created.
SQL> insert into handlec values(1,2);
1 row created.
SQL> insert into handlec values(3,2);
1 row created.
SQL> insert into handlec values(4,2);
1 row created.
SQL> commit;
Commit complete.
SQL> select * from handlec;
T1         T2
---------- ----------
1          2
3          2
4          2

On the target:

SQL> conn receiver/oracle
Connected.
SQL> create table handlec(t1 int primary key,t2 int);
Table created.
SQL> insert into handlec values(1,2);
1 row created.
SQL> commit;
SQL> select * from handlec;
T1         T2
---------- ----------
1          2

GGSCI (XIANGBLI-CN) 1> alter extract load2 , begin now
EXTRACT altered.
GGSCI (XIANGBLI-CN) 4> alter replicat rep2, begin now
REPLICAT altered.
GGSCI (XIANGBLI-CN) 13> add trandata sender.*
Logging of supplemental redo data enabled for table SENDER.HANDLEC.
Logging of supplemental redo log data is already enabled for table SENDER.TV.
GGSCI (XIANGBLI-CN) 14> start mgr
MGR is already running.
GGSCI (XIANGBLI-CN) 15> start er *
Sending START request to MANAGER ...
EXTRACT LOAD2 starting
Sending START request to MANAGER ...
REPLICAT REP2 starting
GGSCI (XIANGBLI-CN) 16> info all
Program   Status   Group   Lag at Chkpt   Time Since Chkpt
MANAGER   RUNNING
EXTRACT   RUNNING  LOAD2   00:00:00       00:00:01
REPLICAT  RUNNING  REP2    00:00:00       00:00:08

Now delete, on the source, the row that is missing on the target:

SQL> delete handlec where t1=3;
1 row deleted.
SQL> commit;
Commit complete.

Without HANDLECOLLISIONS, the Replicat hits SQL error 1403 (no data found) and abends:

2012-09-18 13:45:48 WARNING OGG-01004 Aborted grouped transaction on 'RECEIVER.HANDLEC', Database error 1403 (OCI Error ORA-01403: no data found, SQL ).
2012-09-18 13:45:48 WARNING OGG-01003 Repositioning to rba 1091 in seqno 3.
2012-09-18 13:45:48 WARNING OGG-01154 SQL error 1403 mapping SENDER.HANDLEC to RECEIVER.HANDLEC OCI Error ORA-01403: no data found, SQL .
2012-09-18 13:45:48 WARNING OGG-01003 Repositioning to rba 1091 in seqno 3.
Source Context:
SourceModule : [er.errors]
SourceID : [er/errors.cpp]
SourceFunction : [take_rep_err_action]
SourceLine : [623]
ThreadBacktrace : [8] elements
: [D:\ogg\V34342-01\gglog.dll(??1CContextItem@@UEAA@XZ+0x3272) [0x000000018010BDD2]]
: [D:\ogg\V34342-01\gglog.dll(?_MSG_ERR_MAP_TO_TANDEM_FAILED@@YAPEAVCMessage@@PEAVCSourceContext@@AEBV?$CQualDBObjName@$00@ggapp@gglib@ggs@@1W4MessageDisposition@CMessageFactory@@@Z+0x138) [0x00000001800AD508]]
: [D:\ogg\V34342-01\replicat.exe(ERCALLBACK+0x6e1e) [0x0000000140099D5E]]
: [D:\ogg\V34342-01\replicat.exe(shutdownMonitoring+0x4411) [0x00000001400C9BE1]]
: [D:\ogg\V34342-01\replicat.exe(shutdownMonitoring+0x289cd) [0x00000001400EE19D]]
: [D:\ogg\V34342-01\replicat.exe(CommonLexerNewSSD+0x9440) [0x00000001402AE980]]
: [C:\windows\system32\kernel32.dll(BaseThreadInitThunk+0xd) [0x000000007733652D]]
: [C:\windows\SYSTEM32\ntdll.dll(RtlUserThreadStart+0x21) [0x000000007746C521]]
2012-09-18 13:45:48 ERROR OGG-01296 Error mapping from SENDER.HANDLEC to RECEIVER.HANDLEC.

***********************************************************************
*                     ** Run Time Statistics **                      *
***********************************************************************
Last record for the last committed transaction is the following:
___________________________________________________________________
Trail name : D:\ogg\V34342-01\ex\ze000003
Hdr-Ind : E (x45)   Partition : . (x04)
UndoFlag : . (x00)  BeforeAfter: B (x42)
RecLength : 9 (x0009)  IO Time : 2012-09-18 13:45:38.000000
IOType : 3 (x03)    OrigNode : 255 (xff)
TransInd : . (x03)  FormatType : R (x52)
SyskeyLen : 0 (x00) Incomplete : . (x00)
AuditRBA : 44       AuditPos : 3337232
Continued : N (x00) RecCount : 1 (x01)
2012-09-18 13:45:38.000000 Delete Len 9 RBA 1091
Name: SENDER.HANDLEC
___________________________________________________________________
Reading D:\ogg\V34342-01\ex\ze000003, current RBA 1091, 0 records
Report at 2012-09-18 13:45:48 (activity since 2012-09-18 13:45:48)
From Table SENDER.HANDLEC to RECEIVER.HANDLEC:
# inserts: 0
# updates: 0
# deletes: 0
# discards: 1
Last log location read:
FILE: D:\ogg\V34342-01\ex\ze000003
SEQNO: 3
RBA: 1091
TIMESTAMP: 2012-09-18 13:45:38.000000
EOF: NO
READERR: 0
2012-09-18 13:45:48 ERROR OGG-01668 PROCESS ABENDING.
2012-09-18 13:45:48 INFO OGG-01237 Trace file D:\ogg\V34342-01\REP_TRACE1.TRC closed.
2012-09-18 13:45:48 INFO OGG-01237 Trace file D:\ogg\V34342-01\REP_TRACE2.TRC closed.
CACHE OBJECT MANAGER statistics
CACHE MANAGER VM USAGE
vm current = 0  vm anon queues = 0  vm anon in use = 0  vm file = 0  vm used max = 0  ==> CACHE BALANCED
CACHE CONFIGURATION
cache size = 2G  cache force paging = 3.41G  buffer min = 64K  buffer highwater = 8M  pageout eligible size = 8M
================================================================================

Restart the Replicat with skiptransaction to skip past the failed transaction:

GGSCI (XIANGBLI-CN) 18> start rep2 skiptransaction
Sending START request to MANAGER ...
REPLICAT REP2 starting

Case 2: an update arrives for a row that is missing on the target. Continuing the same setup, update on the source the row whose counterpart no longer exists on the target:

SQL> update handlec set t1=5 where t1=4;
1 row updated.
SQL> commit;
Commit complete.

The missing update again produces database error 1403 and the Replicat abends:

2012-09-18 13:49:30 WARNING OGG-01004 Aborted grouped transaction on 'RECEIVER.HANDLEC', Database error 1403 (OCI Error ORA-01403: no data found, SQL <UPDATE "RECEIVER"."HANDLEC" SET "T1" = :a1 WHERE "T1" = :b0>).
2012-09-18 13:49:30 WARNING OGG-01003 Repositioning to rba 1218 in seqno 3.
2012-09-18 13:49:30 WARNING OGG-01003 Repositioning to rba 1218 in seqno 3.
Source Context:
SourceModule : [er.errors]
SourceID : [er/errors.cpp]
SourceFunction : [take_rep_err_action]
SourceLine : [623]
ThreadBacktrace : [8] elements
: [D:\ogg\V34342-01\gglog.dll(??1CContextItem@@UEAA@XZ+0x3272) [0x000000018010BDD2]]
: [D:\ogg\V34342-01\gglog.dll(?_MSG_ERR_MAP_TO_TANDEM_FAILED@@YAPEAVCMessage@@PEAVCSourceContext@@AEBV?$CQualDBObjName@$00@ggapp@gglib@ggs@@1W4MessageDisposition@CMessageFactory@@@Z+0x138) [0x00000001800AD508]]
: [D:\ogg\V34342-01\replicat.exe(ERCALLBACK+0x6e1e) [0x0000000140099D5E]]
: [D:\ogg\V34342-01\replicat.exe(shutdownMonitoring+0x4411) [0x00000001400C9BE1]]
: [D:\ogg\V34342-01\replicat.exe(shutdownMonitoring+0x289cd) [0x00000001400EE19D]]
: [D:\ogg\V34342-01\replicat.exe(CommonLexerNewSSD+0x9440) [0x00000001402AE980]]
: [C:\windows\system32\kernel32.dll(BaseThreadInitThunk+0xd) [0x000000007733652D]]
: [C:\windows\SYSTEM32\ntdll.dll(RtlUserThreadStart+0x21) [0x000000007746C521]]
2012-09-18 13:49:30 ERROR OGG-01296 Error mapping from SENDER.HANDLEC to RECEIVER.HANDLEC.

Now add HANDLECOLLISIONS to the Replicat parameter file; with it in place, the Replicat resolves the collision instead of abending:

GGSCI (XIANGBLI-CN) 23> view params rep2
replicat rep2
userid receiver , password oracle
trace ./rep_trace1.trc
trace2 ./rep_trace2.trc
ASSUMETARGETDEFS
HANDLECOLLISIONS
map sender.*, target receiver.*;

GGSCI (XIANGBLI-CN) 18> start rep2

SQL> select * from handlec;
T1         T2
---------- ----------
1          2
5

Note the new row with T1=5 and T2 NULL: the trail record for the key update did not carry the non-key column, so when the Replicat converted the missing update into an insert, T2 came through as NULL. To get the complete row, add FETCHOPTIONS FETCHPKUPDATECOLS to the Extract parameter file and restart the Extract, then repeat the test:

SQL> conn receiver/oracle
Connected.
SQL> select * from handlec;
T1         T2
---------- ----------
1          2
10         100
5
20         200

SQL> delete handlec where t1=5;
1 row deleted.
SQL> commit;
Commit complete.
SQL> select * from handlec;
T1         T2
---------- ----------
1          2
10         100
20         200

SQL> conn sender/oracle
Connected.
SQL> update handlec set t1=t1+1000 where t1=5;
1 row updated.
SQL> commit;
Commit complete.

SQL> conn receiver/oracle
Connected.
SQL> select * from handlec;
T1         T2
---------- ----------
1          2
10         100
20         200
1005       2

With FETCHOPTIONS FETCHPKUPDATECOLS, the Extract writes the full row image for primary key updates to the trail, so HANDLECOLLISIONS can reconstruct the complete row on the target.

Case 3: an insert arrives for a row that already exists on the target; the Replicat converts it into an UPDATE of the existing row.

*** TARGET
SQL> conn receiver/oracle
Connected.
SQL> select * from handlec;
T1         T2
---------- ----------
1          2
10         9
5

The target currently has the row (t1=10, t2=9); now insert (10,100) on the source:

>> SOURCE
SQL> insert into handlec values(10,100);
1 row created.
SQL> commit;

>> TARGET
SQL> select * from handlec;
T1         T2
---------- ----------
1          2
10         100
5

Although the source issued an insert, the row already existed on the target, so with HANDLECOLLISIONS the Replicat applied it as an UPDATE of the existing row's columns.

To summarize: HANDLECOLLISIONS makes the Replicat resolve collisions between the trail and the target instead of abending under the default REPERROR handling. Missing deletes are written to the discard file; missing updates are converted to INSERTs (complete only if the full image is in the trail, which requires FETCHOPTIONS FETCHPKUPDATECOLS in the Extract for primary key updates); duplicate inserts are applied as UPDATEs.
    You can also toggle the behavior at runtime with the send command:

GGSCI (XIANGBLI-CN) 29> send rep2, NOHANDLECOLLISIONS
Sending NOHANDLECOLLISIONS request to REPLICAT REP2 ...
REP2 NOHANDLECOLLISIONS set for 1 tables and 0 wildcard entries

    Read the article

  • Getting a JAX-WS client to work on Weblogic 9.2 with ant

    - by michuk
    I've recently had lots of issues trying to deploy a JAX-WS web service client on Weblogic 9.2. It turns out there is no straightforward guide on how to achieve this, so I decided to put together this short wiki entry hoping it might be useful for others.

Firstly, Weblogic 9.2 does not support web services using JAX-WS in general. It comes with old versions of XML-related Java libraries that are incompatible with the latest JAX-WS (similar issues occur with Axis2; only Axis1 seems to be working flawlessly with Weblogic 9.x, but that's a very old and unsupported library). So, in order to get it working, some hacking is required. This is how I did it (note that we're using ant in our legacy corporate project; you probably should be using maven, which should eliminate 50% of the steps below):

1. Download the most recent JAX-WS distribution from https://jax-ws.dev.java.net/ (the exact version I got was JAXWS2.2-20091203.zip).

2. Place the JAX-WS jars with the dependencies in a separate folder like lib/webservices.

3. Create a patternset in ant to reference those jars, along the lines of <patternset id="jaxws.jars"><include name="webservices/*.jar"/></patternset> (the id and include pattern here are only an example; match them to your folder layout).

4. Include the patternset in your WAR-related goal, copying the matched jars into WEB-INF/lib with flatten="true" (note the flatten="true" parameter - it's important, as Weblogic 9.x is by default not smart enough to access jars located in a different location than WEB-INF/lib inside your WAR file).

5. In case of clashes, Weblogic uses its own jars by default. We want it to use the JAX-WS jars from our application instead. This is achieved by preparing a weblogic-application.xml file and placing it in the META-INF folder of the deployed EAR file. It should declare the following as preferred application packages: javax.jws.*, javax.xml.bind.*, javax.xml.crypto.*, javax.xml.registry.*, javax.xml.rpc.*, javax.xml.soap.*, javax.xml.stream.*, javax.xml.ws.*, com.sun.xml.api.streaming.*

6. Remember to place that weblogic-application.xml file in your EAR! The ant goal for that may look similar to:
   <jar destfile="${warfile}" basedir="${wardir}"/>
   <ear destfile="${earfile}" appxml="resources/${app.name}/application.xml">
     <fileset dir="${dist}" includes="${app.name}.war"/>
     <metainf dir="resources/META-INF"/>
   </ear>

7. Also you need to tell Weblogic to prefer your WEB-INF classes to those in the distribution. You do that by placing the following lines in your WEB-INF/weblogic.xml file:
   <container-descriptor>
     <prefer-web-inf-classes>true</prefer-web-inf-classes>
   </container-descriptor>

8. And that's it for the weblogic-related configuration. Now only set up your JAX-WS goal. The wsimport goal is going to simply generate the web service stubs and classes based on a locally deployed WSDL file and place them in a folder in your app. Remember about the keep="true" parameter. Without it, wsimport generates the classes and... deletes them, believe it or not!

9. For mocking a web service I suggest using SOAPUI, an open source project. Very easy to deploy, crucial for web services integration testing.

10. We're almost there. The final thing is to write a Java class for testing the web service. Try to run it as a standalone app first (or as part of your unit tests), and then try to run the same code from within Weblogic. It should work. It worked for me. After some 3 days of frustration.

And yes, I know I should've put 9 and 10 under a single bullet-point, but the title "10 steps to deploy a JAX-WS web service under Web logic 9.2 using ant" sounds just so much better. Please, edit this post and improve it if you find something missing!

    Read the article

  • Mobile App Data Synchronization

    - by Matt Rogish
    Let's say I have a mobile app that uses an HTML5 SQLite DB (and/or the HTML5 key-value store). Assets (media files, PDFs, etc.) are stored locally on the mobile device. Luckily enough, the mobile device is a read-only copy of the "centralized" storage, so the mobile device won't have to propagate changes upstream. However, as the server changes assets (creates new ones, modifies existing ones, deletes old ones) I need to propagate those changes back to the mobile app. Assume that server changes are grouped into changesets (version number n) that contain some information (added element XYZ, deleted id = 45, etc.) and that the mobile device has limited CPU/bandwidth, so most of the processing has to take place on the server. I can think of a couple of methods to do this. All have trade-offs, and at this point I'm unsure which is the right course of action...

    Method 1: For changeset n, store the "diff" between the current n and the previous n-1. When a client with version y asks if there have been any changes, send the changesets from version y up to the current version, e.g. added item 334, contents: xxx; deleted picture 44; deleted PDF 11; changed 33; added picture 99. Characteristics: Diffs take up space, although in theory they would be kept small. However, all diffs must be kept around indefinitely (should a v1 app not have been updated for a year, it must apply v2..v100). High-latency devices (mobile apps) will incur a penalty for fetching lots of small files (assume they cannot be zipped or tarred up into one file). Very few server CPU resources are required, as all the server does is send the client a list of files. "Dumb" - if I change an item in changeset 3, and change it to something else in 4, the client is going to perform both actions, even though #3 is rendered moot by #4. Or, if an asset is added in #4 and removed in #5, the client will download a file just to delete it later.

    Method 2: Very similar to method 1, except on the server, do some sort of diff between the changesets represented by the app version and the server version. Package that up and send that single changeset to the client. Characteristics: Client-efficient: the client only has to process one file, and duplicate or irrelevant changes are stripped out. Server CPU/space intensive: the changesets must be diff'd and then written out to a file that is then sent to the client, which makes diff server scalability an issue. There are possibly ways to cache the results and re-use them, but in the wild there are likely to be many different versions, so diff re-use has its limits. The diff algorithm is complicated: the changesets must be structured in such a way that an efficient and effective diff can be performed.

    Method 3: Instead of keeping diffs, write out the entire versioned asset collection to a mobile-database import file. When a client requests an update, send the entire database to the client and have it update its assets appropriately. Characteristics: Conceptually simple - easy to develop and deploy. Very inefficient, as the client database is rebuilt on every update; if only one new thing was added, the whole database is refreshed. Server space and CPU efficient: only the latest version of the DB needs to be kept around, and the server just throws the file at the client.

    Others?? Thoughts? Thanks!!
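    The subtle part of Method 2 is squashing the window of changesets so self-cancelling operations disappear. A minimal sketch (Python; every name here is illustrative, not an existing API) of collapsing changesets y+1..n into one:

        # Sketch: keep only the last operation per asset id across the window.
        # An asset added and later deleted inside the window collapses away.
        def squash(changesets):
            last = {}                    # asset id -> final (op, payload)
            created = set()              # assets first seen inside this window
            for cs in changesets:        # oldest first
                for op, asset_id, payload in cs:   # op: 'add'|'change'|'delete'
                    if op == 'add':
                        created.add(asset_id)
                    last[asset_id] = (op, payload)
            merged = []
            for asset_id, (op, payload) in last.items():
                if op == 'delete' and asset_id in created:
                    continue             # added and removed inside the window
                if asset_id in created:
                    op = 'add'           # the client never saw it: send as an add
                merged.append((op, asset_id, payload))
            return merged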

    Read the article

  • What's the right way to do mutable data structures (e.g., skip lists, splay trees) in F#?

    - by dan
    What's a good way to implement mutable data structures in F#? The reason I'm asking is that I want to go back and implement the data structures I learned about in the algorithms class I took this semester (skip lists, splay trees, fusion trees, y-fast tries, van Emde Boas trees, etc.), which was a pure theory course with no coding whatsoever, and I figure I might as well try to learn F# while I'm doing it. I know that I "should" use finger trees to get splay tree functionality in a functional language, and that I should do something with laziness to get skip-list functionality, etc., but I want to get the basics nailed down before I try playing with purely functional implementations. There are lots of examples of how to do functional data structures in F#, but there isn't much on how to do mutable data structures, so I started by fixing up the doubly linked list here into something that allows inserts and deletes anywhere. My plan is to turn this into a skip list, and then use a similar structure (discriminated union of a record) for the tree structures I want to implement. Before I start on something more substantial, is there a better way to do mutable structures like this in F#? Should I just use records and not bother with the discriminated union? Should I use a class instead? Is this question "not even wrong"? Should I be doing the mutable structures in C#, and not dip into F# until I want to compare them to their purely functional counterparts? And, if a DU of records is what I want, could I have written the code below better or more idiomatically? It seems like there's a lot of redundancy here, but I'm not sure how to get rid of it.

        module DoublyLinkedList =
            type 'a ll =
                | None
                | Node of 'a ll_node
            and 'a ll_node =
                { mutable Prev: 'a ll;
                  Element: 'a;
                  mutable Next: 'a ll }

            let insert x l =
                match l with
                | None -> Node({ Prev = None; Element = x; Next = None })
                | Node(node) ->
                    match node.Prev with
                    | None ->
                        let new_node = { Prev = None; Element = x; Next = Node(node) }
                        node.Prev <- Node(new_node)
                        Node(new_node)
                    | Node(prev_node) ->
                        let new_node = { Prev = node.Prev; Element = x; Next = Node(node) }
                        node.Prev <- Node(new_node)
                        prev_node.Next <- Node(new_node)
                        Node(prev_node)

            let rec nth n l =
                match n, l with
                | _, None -> None
                | _, Node(node) when n > 0 -> nth (n - 1) node.Next
                | _, Node(node) when n < 0 -> nth (n + 1) node.Prev
                | _, Node(node) -> Node(node) // hopefully only when n = 0 :-)

            let rec printLinkedList head =
                match head with
                | None -> ()
                | Node(x) ->
                    let prev = match x.Prev with | None -> "-" | Node(y) -> y.Element.ToString()
                    let cur = x.Element.ToString()
                    let next = match x.Next with | None -> "-" | Node(y) -> y.Element.ToString()
                    printfn "%s, <- %s -> %s" prev cur next
                    printLinkedList x.Next
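    For comparison, here is a minimal sketch (my own, not from the question) of the record-only route the poster asks about: dropping the wrapper DU and using option-typed mutable links removes most of the Node(...) plumbing:

        // Sketch: record-only node, no wrapper DU; links are plain options.
        type DNode<'a> =
            { Value : 'a
              mutable Prev : DNode<'a> option
              mutable Next : DNode<'a> option }

        // Insert x immediately before node `at` and return the new node.
        let insertBefore (at : DNode<'a>) (x : 'a) =
            let n = { Value = x; Prev = at.Prev; Next = Some at }
            match at.Prev with
            | Some p -> p.Next <- Some n
            | None -> ()
            at.Prev <- Some n
            n

    The trade-off is that pattern matching on Some/None replaces matching on the custom DU, but the shadowing of option's None (a wart in the original) goes away.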

    Read the article

  • C++ MySQL++ Delete query statement brain killer question

    - by shauny
    Hello all, I'm relatively new to the MySQL++ connector in C++, and have a really annoying issue with it already! I've managed to get stored procedures working, however I'm having issues with the delete statements. I've looked high and low and have found no documentation with examples. First I thought maybe the code needs to free the query/connection results after calling the stored procedure, but of course MySQL++ doesn't have a free_result method... or does it? Anyway, here's what I've got:

        #include <iostream>
        #include <stdio.h>
        #include <queue>
        #include <deque>
        #include <sys/stat.h>
        #include <mysql++/mysql++.h>
        #include <boost/thread/thread.hpp>
        #include "RepositoryQueue.h"

        using namespace boost;
        using namespace mysqlpp;

        class RepositoryChecker
        {
        private:
            bool _isRunning;
            Connection _con;

        public:
            RepositoryChecker()
            {
                try
                {
                    this->_con = Connection(false);
                    this->_con.set_option(new MultiStatementsOption(true));
                    this->_con.set_option(new ReconnectOption(true));
                    this->_con.connect("**", "***", "***", "***");
                    this->ChangeRunningState(true);
                }
                catch(const Exception& e)
                {
                    this->ChangeRunningState(false);
                }
            }

            /**
             * Thread method which runs and creates the repositories
             */
            void CheckRepositoryQueues()
            {
                //while(this->IsRunning())
                //{
                    std::queue<RepositoryQueue> queues = this->GetQueue();
                    if(queues.size() > 0)
                    {
                        while(!queues.empty())
                        {
                            RepositoryQueue &q = queues.front();
                            char cmd[256];
                            sprintf(cmd, "svnadmin create /home/svn/%s/%s/%s",
                                    q.GetPublicStatus().c_str(),
                                    q.GetUsername().c_str(),
                                    q.GetRepositoryName().c_str());
                            if(this->DeleteQueuedRepository(q.GetQueueId()))
                            {
                                printf("query deleted?\n");
                            }
                            printf("Repository created!\n");
                            queues.pop();
                        }
                    }
                    boost::this_thread::sleep(boost::posix_time::milliseconds(500));
                //}
            }

        protected:
            /**
             * Gets the latest queue of repositories from the database
             * and returns them inside a cool queue defined with the
             * RepositoryQueue class.
             */
            std::queue<RepositoryQueue> GetQueue()
            {
                std::queue<RepositoryQueue> queues;
                Query query = this->_con.query("CALL sp_GetRepositoryQueue();");
                StoreQueryResult result = query.store();
                RepositoryQueue rQ;
                if(result.num_rows() > 0)
                {
                    for(unsigned int i = 0; i < result.num_rows(); ++i)
                    {
                        rQ = RepositoryQueue((unsigned int)result[i][0],
                                             (unsigned int)result[i][1],
                                             (String)result[i][2],
                                             (String)result[i][3],
                                             (String)result[i][4],
                                             (bool)result[i][5]);
                        queues.push(rQ);
                    }
                }
                return queues;
            }

            /**
             * Allows the thread to be shut off.
             */
            void ChangeRunningState(bool isRunning)
            {
                this->_isRunning = isRunning;
            }

            /**
             * Returns the running value of the active thread.
             */
            bool IsRunning()
            {
                return this->_isRunning;
            }

            /**
             * Deletes the repository from the mysql queue table. This is
             * only called once it has been created.
             */
            bool DeleteQueuedRepository(unsigned int id)
            {
                char cmd[256];
                sprintf(cmd, "DELETE FROM RepositoryQueue WHERE Id = %d LIMIT 1;", id);
                Query query = this->_con.query(cmd);
                return (query.exec());
            }
        };

    I've removed all the other methods as they're not needed... Basically it's the DeleteQueuedRepository method which isn't working; GetQueue works fine. PS: This is on a Linux OS (Ubuntu Server). Many thanks, Shaun
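    One hedged guess at the cause, since the symptom fits: with MultiStatementsOption enabled, CALLing a stored procedure returns the procedure's rows plus a trailing status result set, and until every pending result set is consumed the connection refuses the next statement, so the later DELETE fails. In MySQL++ 3.x the pending sets can be drained after store(), roughly like this:

        // Sketch (MySQL++ 3.x): drain the extra result sets a stored
        // procedure leaves behind before reusing the connection.
        Query query = this->_con.query("CALL sp_GetRepositoryQueue();");
        StoreQueryResult result = query.store();
        // ... build the queue from `result` as before ...
        while (query.more_results())
            query.store_next();   // discard the trailing status set(s)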

    Read the article

  • Android SQLite crashes when trying to retrieve data from it

    - by Pavel
    Hey everyone. I'm kinda new to Android programming so please bear with me. I'm having some problems with retrieving records from the db. Basically, all I want to do is store the latitudes and longitudes that the GPS positioning function outputs and display them later in a list, using a ListActivity on a different tab. This is what the code for my DBAdapter helper class looks like:

        public class DBAdapter {
            public static final String KEY_ROWID = "_id";
            public static final String KEY_LATITUDE = "latitude";
            public static final String KEY_LONGITUDE = "longitude";
            private static final String TAG = "DBAdapter";
            private static final String DATABASE_NAME = "coords";
            private static final String DATABASE_TABLE = "coordsStorage";
            private static final int DATABASE_VERSION = 1;
            private static final String DATABASE_CREATE =
                "create table coordsStorage (_id integer primary key autoincrement, "
                + "latitude integer not null, longitude integer not null);";

            private final Context context;
            private DatabaseHelper DBHelper;
            private SQLiteDatabase db;

            public DBAdapter(Context ctx) {
                this.context = ctx;
                DBHelper = new DatabaseHelper(context);
            }

            private static class DatabaseHelper extends SQLiteOpenHelper {
                DatabaseHelper(Context context) {
                    super(context, DATABASE_NAME, null, DATABASE_VERSION);
                }

                @Override
                public void onCreate(SQLiteDatabase db) {
                    db.execSQL(DATABASE_CREATE);
                }

                @Override
                public void onUpgrade(SQLiteDatabase db, int oldVersion, int newVersion) {
                    Log.w(TAG, "Upgrading database from version " + oldVersion + " to "
                            + newVersion + ", which will destroy all old data");
                    db.execSQL("DROP TABLE IF EXISTS titles");
                    onCreate(db);
                }
            }

            //---opens the database---
            public DBAdapter open() throws SQLException {
                db = DBHelper.getWritableDatabase();
                return this;
            }

            //---closes the database---
            public void close() {
                DBHelper.close();
            }

            //---inserts a coordinate pair into the database---
            public long insertCoords(int latitude, int longitude) {
                ContentValues initialValues = new ContentValues();
                initialValues.put(KEY_LATITUDE, latitude);
                initialValues.put(KEY_LONGITUDE, longitude);
                return db.insert(DATABASE_TABLE, null, initialValues);
            }

            //---deletes a particular record---
            public boolean deleteTitle(long rowId) {
                return db.delete(DATABASE_TABLE, KEY_ROWID + "=" + rowId, null) > 0;
            }

            //---retrieves all the records---
            public Cursor getAllTitles() {
                return db.query(DATABASE_TABLE, new String[] {
                        KEY_ROWID, KEY_LATITUDE, KEY_LONGITUDE}, null, null, null, null, null);
            }

            //---retrieves a particular record---
            public Cursor getTitle(long rowId) throws SQLException {
                Cursor mCursor = db.query(true, DATABASE_TABLE, new String[] {
                        KEY_ROWID, KEY_LATITUDE, KEY_LONGITUDE}, KEY_ROWID + "=" + rowId,
                        null, null, null, null, null);
                if (mCursor != null) {
                    mCursor.moveToFirst();
                }
                return mCursor;
            }

            //---updates a record---
            /*public boolean updateTitle(long rowId, int latitude, int longitude) {
                ContentValues args = new ContentValues();
                args.put(KEY_LATITUDE, latitude);
                args.put(KEY_LONGITUDE, longitude);
                return db.update(DATABASE_TABLE, args, KEY_ROWID + "=" + rowId, null) > 0;
            }*/
        }

    Now my question is: am I doing something wrong here? If so, could someone please tell me what? And how should I retrieve the records in an organised list manner? Help would be greatly appreciated!!
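    For what it's worth, a common cause of a crash with this exact adapter is using it before open() has been called, or handing a ListActivity an unmanaged cursor. (Incidentally, onUpgrade drops a table named titles, not coordsStorage - a leftover worth fixing.) A sketch of the intended usage from a ListActivity; the layout/column wiring is illustrative, not from the question:

        // Sketch: open first, then query; let the activity manage the cursor.
        DBAdapter db = new DBAdapter(this);
        db.open();                              // NPE if skipped: the `db` field is null
        db.insertCoords(52516000, 13377000);    // e.g. a GPS fix in microdegrees
        Cursor c = db.getAllTitles();
        startManagingCursor(c);                 // the activity closes it for us
        setListAdapter(new SimpleCursorAdapter(this,
                android.R.layout.simple_list_item_2, c,
                new String[] { DBAdapter.KEY_LATITUDE, DBAdapter.KEY_LONGITUDE },
                new int[] { android.R.id.text1, android.R.id.text2 }));
        // call db.close() in onDestroy(), not while the cursor is still in use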

    Read the article

  • Deleting unreferenced child records with NHibernate

    - by Chev
    Hi there, I am working on an MVC app using NHibernate as the ORM (NCommon framework). I have parent/child entities: Product, Vendor & ProductVendors, and a one-to-many relationship between them, with Product having a ProductVendors collection (Product.ProductVendors). I am currently retrieving a Product object, eager loading the children and sending these down the wire to my ASP.NET MVC client. A user will then modify the list of Vendors and post the updated Product back. I am using a custom model binder to generate the modified Product entity. I am able to update the Product fine and insert new ProductVendors. My problem is that dereferenced ProductVendors are not cascade deleted when specifying Product.ProductVendors.Clear() and calling _productRepository.Save(product). The problem seems to be with attaching the detached instance. Here are my mapping files:

    Product:

        <?xml version="1.0" encoding="utf-8" ?>
        <id name="Id">
          <generator class="guid.comb" />
        </id>
        <version name="LastModified" unsaved-value="0" column="LastModified" />
        <property name="Name" type="String" length="250" />

    ProductVendors:

        <?xml version="1.0" encoding="utf-8" ?>
        <id name="Id">
          <generator class="guid.comb" />
        </id>
        <version name="LastModified" unsaved-value="0" column="LastModified" />
        <property name="Price" />
        <many-to-one name="Product" class="Product" column="ProductId" lazy="false" not-null="true" />
        <many-to-one name="Vendor" class="Vendor" column="VendorId" lazy="false" not-null="true" />

    Custom model binder:

        using System;
        using Test.Web.Mvc;
        using Test.Domain;

        namespace Spoked.MVC
        {
            public class ProductUpdateModelBinder : DefaultModelBinder
            {
                private readonly ProductSystem ProductSystem;

                public ProductUpdateModelBinder(ProductSystem productSystem)
                {
                    ProductSystem = productSystem;
                }

                protected override void OnModelUpdated(ControllerContext controllerContext,
                                                       ModelBindingContext bindingContext)
                {
                    var product = bindingContext.Model as Product;
                    if (product != null)
                    {
                        product.Category = ProductSystem.GetCategory(
                            new Guid(bindingContext.ValueProvider["Category"].AttemptedValue));
                        product.Brand = ProductSystem.GetBrand(
                            new Guid(bindingContext.ValueProvider["Brand"].AttemptedValue));
                        product.ProductVendors.Clear();
                        if (bindingContext.ValueProvider["ProductVendors"] != null)
                        {
                            string[] productVendorIds =
                                bindingContext.ValueProvider["ProductVendors"].AttemptedValue.Split(',');
                            foreach (string id in productVendorIds)
                            {
                                product.AddProductVendor(ProductSystem.GetVendor(new Guid(id)), 90m);
                            }
                        }
                    }
                }
            }
        }

    Controller:

        [AcceptVerbs(HttpVerbs.Post)]
        public ActionResult Update(Product product)
        {
            using (var scope = new UnitOfWorkScope())
            {
                //product.ProductVendors.Clear();
                _productRepository.Save(product);
                scope.Commit();
            }
            using (new UnitOfWorkScope())
            {
                IList<Vendor> availableVendors = _productSystem.GetAvailableVendors(product);
                productDetailEditViewModel = new ProductDetailEditViewModel(product,
                    _categoryRepository.Select(x => x).ToList(),
                    _brandRepository.Select(x => x).ToList(),
                    availableVendors);
            }
            return RedirectToAction("Edit", "Products", new { id = product.Id.ToString() });
        }

    The following test does pass, though:

        [Test]
        [NUnit.Framework.Category("ProductTests")]
        public void Can_Delete_Product_Vendors_By_Dereferencing()
        {
            Product product;
            using (UnitOfWorkScope scope = new UnitOfWorkScope())
            {
                Console.Out.WriteLine("Selecting...");
                product = _productRepository.First();
                Console.Out.WriteLine("Adding Product Vendor...");
                product.AddProductVendor(_vendorRepository.First(), 0m);
                scope.Commit();
            }
            Console.Out.WriteLine("About to delete Product Vendors...");
            using (UnitOfWorkScope scope = new UnitOfWorkScope())
            {
                Console.Out.WriteLine("Clearing Product Vendor...");
                _productRepository.Save(product); // seems to be needed to attach entity to the persistence manager
                product.ProductVendors.Clear();
                scope.Commit();
            }
        }

    Going nuts here, as I almost have a very nice solution between MVC, custom model binders and NHibernate. Just not seeing my deletes cascaded. Any help greatly appreciated. Chev
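    The collection side of the Product mapping was lost in the listing above, but the classic fix for exactly this symptom is to mark the set inverse and give it all-delete-orphan cascade, so clearing the collection orphans (and deletes) the children. A sketch of the <set> element, with names assumed from the entities shown:

        <!-- inside the Product mapping (sketch) -->
        <set name="ProductVendors" inverse="true" cascade="all-delete-orphan" lazy="false">
          <key column="ProductId" not-null="true" />
          <one-to-many class="ProductVendors" />
        </set>

    Note that orphan tracking works on the collection instance the session loaded; rebuilding the Product in a model binder and re-attaching it is exactly where that tracking gets lost, which matches the passing test (same session, same instance).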

    Read the article

  • PHP template caching design

    - by Thomas
    Hello to all, I want to include caching in my app design. Caching templates for starters. The design I have used so far is very modular. I have created an ORM implementation for all my tables and each table is represented by the corresponding class. All the requests are handled by one controller which routes them to the appropriate webmethod functions. I am using a template class for handling UI parts. What I have in mind for caching includes the implementation of a separate Cache class for handling caching, with the flexibility to store either in files, APC or memcache. Right now I am testing with file caching.

    Some thoughts: Should I include the logic of checking for cached versions in the Template class, or in the webmethods which handle the incoming requests and which eventually call the Template class? In the first case, things are pretty simple, as I will not have to change anything more than pass the template class an extra argument (whether to load from cache or not). In the second case, however, I am thinking of checking for a cached version immediately in the webmethod and, if found, returning it. This will save all the processing done until the logic reaches the template (first-case scenario). Both scenarios, however, rely on an accurate mechanism for invalidating caches, which brings us to...

    Invalidating caches: As I see it (and you can add your input freely) a cached template file becomes invalid if: a. the expiration set is reached; b. the template file itself is updated (i.e. by the developer adding a new line); c. the webmethod that handles the request changes (i.e. the developer adds/deletes something in the code); d. content coming from the db and ending up in the template file is modified. I am thinking of storing a JSON-encoded array inside the cached file. The first value will be the expiration timestamp of the cache. The second value will be the modification time of the PHP file with the code handling the request (to cope with option c above). The third will be the content itself.

    The validation process I am considering, according to the above scenarios, is: a. if the expiration of the cached file (stored in the array) is reached, delete the cache file; b. if the cached file's mod time is smaller than the template skeleton file's mod time, delete the cached file; c. if the mod time of the PHP file is greater than the one stored in the cache, delete the cached file; d. this is tricky. In the ORM implementation I have added event handlers (which fire when adding, updating or deleting objects). I could delete the cache file every time an object that provides content to the template is modified. The problem is how to keep track of which cached files correspond to each schema object. Take this example: a user has his short-profile page and a full-profile page (2 templates). These templates can be cached. Now, every time the user modifies his profile, the event handler would need to know which templates or cached files correspond to the User, so that these files can be deleted. I could store them in the db, but I am looking for a better approach.
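    A minimal sketch of the cached-file layout and checks (a), (b) and (c) described above; the function and argument names are mine, not from the post:

        <?php
        // Sketch: cache file = JSON array [expires, handler_mtime, content].
        function cache_get($cacheFile, $handlerFile, $skeletonFile) {
            if (!is_file($cacheFile)) return null;
            list($expires, $handlerMtime, $content) =
                json_decode(file_get_contents($cacheFile), true);
            if (time() > $expires                                   // (a) expired
                || filemtime($cacheFile) < filemtime($skeletonFile) // (b) template changed
                || filemtime($handlerFile) > $handlerMtime) {       // (c) webmethod changed
                unlink($cacheFile);
                return null;
            }
            return $content;
        }

        function cache_put($cacheFile, $handlerFile, $content, $ttl) {
            $payload = array(time() + $ttl, filemtime($handlerFile), $content);
            file_put_contents($cacheFile, json_encode($payload), LOCK_EX);
        }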

    Read the article

  • Decentralized synchronized secure data storage

    - by Alberich
    Introduction: Hi, I am going to ask a question which seems utopian to me, but I need to know if there is a way to achieve what I need. And if not, I need to know why not.

    The idea: Suppose I have a database structure, in MySQL. I want to create some solution to allow anyone (no matter who, no matter where) to have a synchronized copy (updated clone) of this database (with its content). Well, and it is not going to be just one synchronized copy, it could (and should) be multiple replication (supposing the basics, this means, for example, ten copies all over the world). And, the most important thing: it must be secure. By secure I mean only real-accepted transactions will be synchronized with all the other (no matter how many) database copies/clones. Note: since it would be quite difficult to make the synchronization in real time, I will design everything to make this feature dispensable. So it is not required.

    My auto-suggestion: This is how I am thinking to manage it: Time identifiers and update checking: every action (insert, update, delete...) will be stored as the action instruction itself, associated with a time identifier. [I think that rather than a DATETIME field, it'll be an INT one, with the number of milliseconds passed from 1st January 2013 on, for example.] So each copy is going to ask the "neighbour copy" for new actions done since the last update, and execute them after checking they are allowed.

    Problem 1: the "neighbour copy" could be outdated too. Solution 1: do not ask just one neighbour; create a random list with some of the copies/clones and ask them for news (I could avoid the list and ask ALL the clones for updates, but this would be inefficient if the number of clones grows too large).

    Problem 2: real-time global synchronization is not active. What if... Someone at CLONE_ENTERPRISING inserts a row into TABLE. ... this row goes to every clone ... Someone at CLONE_FIXEMALL deletes this row. ... and at the same time, somewhere in an outdated clone ... Someone at CLONE_DROPOUT edits this row (now nonexistent at the other clones). Solution 2: easy stuff, force a GLOBAL synchronization before doing any new "depending-on-third-data action" (an edit, for example). This global sync will be unnecessary when making an INSERT, for instance. Note: well, someone could have some fun and make the same insert in two clones... since they're not getting updated in real time, this row will exist twice. But it's the same as when we have one single database: in some cases we need to check whether the same row already exists before doing the final action. Not a problem.

    Problem 3: it is possible to edit the code and not filter actions, so someone could spread instructions to delete everything, or just engage in some trolling activity. This is not a problem, since good clones will always be somewhere. Clones that have gone bad won't be of interest anymore.

    I really appreciate it if you have read this far. I know this is not the perfect solution - it possibly has hundreds of holes - but it is my basic starting point. I will appreciate anything you can teach me. Thanks a lot. PS: It could be that all this I am trying to do already exists and has its own name. Sorry for asking then (I'd be thankful to learn that name, if it exists).
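    A minimal sketch of the pull loop in Solution 1 (Python; every name here is illustrative, not an existing API):

        import random, time

        EPOCH_MS = 1356998400000          # 1 Jan 2013 00:00 UTC, per the post

        def now_ms():
            return int(time.time() * 1000) - EPOCH_MS

        def pull_updates(clone, neighbours, sample=3):
            """Ask a random subset of clones for actions newer than our
            last-seen time identifier, validate each, then apply."""
            for peer in random.sample(neighbours, min(sample, len(neighbours))):
                for action in peer.actions_since(clone.last_ts):  # hypothetical API
                    if clone.is_allowed(action):    # the "real-accepted" check
                        clone.apply(action)
                        clone.last_ts = max(clone.last_ts, action.ts)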

    Read the article

  • How do I get a delete trigger working using fluent API in CTP5?

    - by user668472
    I am having trouble getting referential integrity dialled down enough to allow my delete trigger to fire. I have a dependent entity with three FKs. I want it to be deleted when any of the principal entities is deleted. For the principal entities Role and OrgUnit (see below) I can rely on conventions to create the required one-many relationship, and cascade delete does what I want, i.e. the Association is removed when either principal is deleted. For Member, however, I have multiple cascade delete paths (not shown here) which SQL Server doesn't like, so I need to use the fluent API to disable cascade deletes. Here is my (simplified) model:

        public class Association
        {
            public int id { get; set; }
            public int roleid { get; set; }
            public virtual Role role { get; set; }
            public int? memberid { get; set; }
            public virtual Member member { get; set; }
            public int orgunitid { get; set; }
            public virtual OrgUnit orgunit { get; set; }
        }

        public class Role
        {
            public int id { get; set; }
            public virtual ICollection<Association> associations { get; set; }
        }

        public class Member
        {
            public int id { get; set; }
            public virtual ICollection<Association> associations { get; set; }
        }

        public class OrgUnit
        {
            public int id { get; set; }
            public virtual ICollection<Association> associations { get; set; }
        }

    My first run at the fluent API code looks like this:

        protected override void OnModelCreating(System.Data.Entity.ModelConfiguration.ModelBuilder modelBuilder)
        {
            DbDatabase.SetInitializer<ConfDB_Model>(new ConfDBInitializer());
            modelBuilder.Entity<Member>()
                .HasMany(m => m.associations)
                .WithOptional(a => a.member)
                .HasForeignKey(a => a.memberid)
                .WillCascadeOnDelete(false);
        }

    My seed function creates the delete trigger:

        protected override void Seed(ConfDB_Model context)
        {
            context.Database.SqlCommand(
                "CREATE TRIGGER MemberAssocTrigger ON dbo.Members FOR DELETE AS " +
                "DELETE Associations FROM Associations, deleted " +
                "WHERE Associations.memberId = deleted.id");
        }

    PROBLEM: When I run this and create a Role, a Member, an OrgUnit, and an Association tying the three together, all is fine. When I delete the Role, the Association gets cascade deleted as I expect. HOWEVER, when I delete the Member I get an exception with a referential integrity error. I have tried setting ON DELETE SET NULL, because my memberid FK is nullable, but SQL complains again about multiple cascade paths, so apparently I can cascade nothing on the Member-Association relationship. To get this to work I must add the following code to Seed():

        context.Database.SqlCommand("ALTER TABLE dbo.ACLEntries DROP CONSTRAINT member_aclentries");

    As you can see, this drops the constraint created by the model builder. QUESTION: this feels like a complete hack. Is there a way, using the fluent API, for me to say that referential integrity should NOT be checked, or otherwise to get it to relax enough for the Member delete to work and allow the trigger to fire? Thanks in advance for any help you can offer. Although fluent APIs may be "fluent", I find them far from intuitive.
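    One hedged alternative to dropping the constraint in Seed() (shown in the EF 4.1 RTM spelling, since names shifted between CTP5 builds; treat this as a sketch): remove the cascade-delete convention wholesale and let the trigger own the cleanup. The separate piece of the puzzle is that SQL Server checks the FK before a FOR (i.e. AFTER) trigger runs, which is why the trigger never gets the chance to fire; an INSTEAD OF DELETE trigger runs first and can remove the Associations before the Member row goes.

        // Sketch, EF 4.1 syntax (CTP5 equivalents may differ):
        using System.Data.Entity.ModelConfiguration.Conventions;

        protected override void OnModelCreating(DbModelBuilder modelBuilder)
        {
            // stop the model builder from emitting ON DELETE CASCADE anywhere
            modelBuilder.Conventions.Remove<OneToManyCascadeDeleteConvention>();
        }

        -- Hypothetical trigger rewrite: runs before the FK check, not after it
        CREATE TRIGGER MemberAssocTrigger ON dbo.Members INSTEAD OF DELETE AS
        BEGIN
            DELETE a FROM Associations a JOIN deleted d ON a.memberId = d.id;
            DELETE m FROM Members m JOIN deleted d ON m.id = d.id;
        END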

    Read the article

< Previous Page | 30 31 32 33 34 35 36 37  | Next Page >