Search Results

Search found 8187 results on 328 pages for 'mind mapping'.


  • What is the best way to provide an AutoMappingOverride for an interface in fluentnhibernate automapping

    - by Tom
    In my quest for a version-wide database filter for an application, I have written the following code: using System; using System.Collections.Generic; using System.Linq; using System.Text; using FluentNHibernate.Automapping; using FluentNHibernate.Automapping.Alterations; using FluentNHibernate.Mapping; using MvcExtensions.Model; using NHibernate; namespace MvcExtensions.Services.Impl.FluentNHibernate { public interface IVersionAware { string Version { get; set; } } public class VersionFilter : FilterDefinition { const string FILTERNAME = "MyVersionFilter"; const string COLUMNNAME = "Version"; public VersionFilter() { this.WithName(FILTERNAME) .WithCondition("Version = :"+COLUMNNAME) .AddParameter(COLUMNNAME, NHibernateUtil.String ); } public static void EnableVersionFilter(ISession session,string version) { session.EnableFilter(FILTERNAME).SetParameter(COLUMNNAME, version); } public static void DisableVersionFilter(ISession session) { session.DisableFilter(FILTERNAME); } } public class VersionAwareOverride : IAutoMappingOverride<IVersionAware> { #region IAutoMappingOverride<IVersionAware> Members public void Override(AutoMapping<IVersionAware> mapping) { mapping.ApplyFilter<VersionFilter>(); } #endregion } } But, since overrides do not work on interfaces, I am looking for a way to implement this. Currently I'm using this (rather cumbersome) way for each class that implements the interface : public class SomeVersionedEntity : IModelId, IVersionAware { public virtual int Id { get; set; } public virtual string Version { get; set; } } public class SomeVersionedEntityOverride : IAutoMappingOverride<SomeVersionedEntity> { #region IAutoMappingOverride<SomeVersionedEntity> Members public void Override(AutoMapping<SomeVersionedEntity> mapping) { mapping.ApplyFilter<VersionFilter>(); } #endregion } I have been looking at IClassmap interfaces etc, but they do not seem to provide a way to access the ApplyFilter method, so I have not got a clue here... Since I am probably not the first one who has this problem, I am quite sure that it should be possible; I am just not quite sure how this works.. EDIT : I have gotten a bit closer to a generic solution: This is the way I tried to solve it : Using a generic class to implement alterations to classes implementing an interface : public abstract class AutomappingInterfaceAlteration<I> : IAutoMappingAlteration { public void Alter(AutoPersistenceModel model) { model.OverrideAll(map => { var recordType = map.GetType().GetGenericArguments().Single(); if (typeof(I).IsAssignableFrom(recordType)) { this.GetType().GetMethod("overrideStuff").MakeGenericMethod(recordType).Invoke(this, new object[] { model }); } }); } public void overrideStuff<T>(AutoPersistenceModel pm) where T : I { pm.Override<T>( a => Override(a)); } public abstract void Override<T>(AutoMapping<T> am) where T:I; } And a specific implementation : public class VersionAwareAlteration : AutomappingInterfaceAlteration<IVersionAware> { public override void Override<T>(AutoMapping<T> am) { am.Map(x => x.Version).Column("VersionTest"); am.ApplyFilter<VersionFilter>(); } } Unfortunately I get the following error now : [InvalidOperationException: Collection was modified; enumeration operation may not execute.] 
System.ThrowHelper.ThrowInvalidOperationException(ExceptionResource resource) +51 System.Collections.Generic.Enumerator.MoveNextRare() +7661017 System.Collections.Generic.Enumerator.MoveNext() +61 System.Linq.WhereListIterator`1.MoveNext() +156 FluentNHibernate.Utils.CollectionExtensions.Each(IEnumerable`1 enumerable, Action`1 each) +239 FluentNHibernate.Automapping.AutoMapper.ApplyOverrides(Type classType, IList`1 mappedProperties, ClassMappingBase mapping) +345 FluentNHibernate.Automapping.AutoMapper.MergeMap(Type classType, ClassMappingBase mapping, IList`1 mappedProperties) +43 FluentNHibernate.Automapping.AutoMapper.Map(Type classType, List`1 types) +566 FluentNHibernate.Automapping.AutoPersistenceModel.AddMapping(Type type) +85 FluentNHibernate.Automapping.AutoPersistenceModel.CompileMappings() +746 EDIT 2 : I managed to get a bit further; I now invoke "Override" using reflection for each class that implements the interface : public abstract class PersistenceOverride<I> { public void DoOverrides(AutoPersistenceModel model,IEnumerable<Type> Mytypes) { foreach(var t in Mytypes.Where(x=>typeof(I).IsAssignableFrom(x))) ManualOverride(t,model); } private void ManualOverride(Type recordType,AutoPersistenceModel model) { var t_amt = typeof(AutoMapping<>).MakeGenericType(recordType); var t_act = typeof(Action<>).MakeGenericType(t_amt); var m = typeof(PersistenceOverride<I>) .GetMethod("MyOverride") .MakeGenericMethod(recordType) .Invoke(this, null); model.GetType().GetMethod("Override").MakeGenericMethod(recordType).Invoke(model, new object[] { m }); } public abstract Action<AutoMapping<T>> MyOverride<T>() where T:I; } public class VersionAwareOverride : PersistenceOverride<IVersionAware> { public override Action<AutoMapping<T>> MyOverride<T>() { return am => { am.Map(x => x.Version).Column(VersionFilter.COLUMNNAME); am.ApplyFilter<VersionFilter>(); }; } } However, for one reason or another my generated hbm files do not contain any "filter" fields.... Maybe somebody could help me a bit further now ??
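    For reference, a minimal sketch of how the PersistenceOverride approach from EDIT 2 might be wired into the automapping configuration; the connection string, the Where filter and the session-factory bootstrapping are assumptions for illustration, not part of the question:

        using FluentNHibernate.Automapping;
        using FluentNHibernate.Cfg;
        using FluentNHibernate.Cfg.Db;

        // Hedged sketch: apply VersionAwareOverride (from EDIT 2) to every mapped type
        // that implements IVersionAware, then build the session factory.
        var entityAssembly = typeof(SomeVersionedEntity).Assembly;

        var model = AutoMap.Assembly(entityAssembly)
            .Where(t => typeof(IModelId).IsAssignableFrom(t));   // placeholder type filter

        new VersionAwareOverride().DoOverrides(model, entityAssembly.GetTypes());

        var sessionFactory = Fluently.Configure()
            .Database(MsSqlConfiguration.MsSql2008.ConnectionString("...connection string..."))
            .Mappings(m => m.AutoMappings.Add(model))
            .BuildSessionFactory();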

    Read the article

  • MindMap in silverlight

    - by JAllen
    I need to put together a team to build a Silverlight-based application that will read an XML file and generate a Mind Map diagram based on that file. I am new to Silverlight and need to find out what skills are required and how difficult something like this is to do. I expect the typical Mind Map features available in commercial Mind Map software, such as the ability to expand and collapse nodes and to move nodes around the screen.
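    As a rough sense of the work involved, here is a minimal sketch of the data side (all element, attribute and class names are made up for illustration): load the XML into a node tree with LINQ to XML, which a Silverlight control can then lay out, keeping expand/collapse state per node:

        using System.Collections.Generic;
        using System.Xml.Linq;

        // Hypothetical node model; the XML element and attribute names are assumptions.
        public class MapNode
        {
            public string Title { get; set; }
            public bool IsExpanded { get; set; }
            public List<MapNode> Children { get; private set; }

            public MapNode()
            {
                Children = new List<MapNode>();
                IsExpanded = true;
            }

            // Recursively builds the tree from nested <node title="..."> elements.
            public static MapNode Load(XElement element)
            {
                var node = new MapNode { Title = (string)element.Attribute("title") };
                foreach (var child in element.Elements("node"))
                    node.Children.Add(Load(child));
                return node;
            }
        }

        // Usage: var root = MapNode.Load(XDocument.Load("map.xml").Root);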

    Read the article

  • Form values appear blank when submitting to the database - Drupal FormAPI

    - by GaxZE
    Hello, I have been working on this Drupal Form API script for the past week and a half. To give some insight into my problem: the form below merely lists a host of database records, each containing 5 individual scoring ranks (mind, action, relationship, language and IT). This code is part of my own custom module, where all values are listed from the database; the idea behind the module is to be able to edit these values on a large scale. I am having trouble getting the values entered in the form passed to the variables inside the marli_admin_submit function. The second problem is assigning those values to their specific ID. For that purpose I should add that I am merely trying to get just one score updated rather than all of them. Below is my code; any advice is appreciated. function marli_scores(){ $result = pager_query(db_rewrite_sql('SELECT * FROM marli WHERE value != " "')); while ($node = db_fetch_object($result)) { $attribute = $node->attribute; $field = $node->field_name; $item = $node->value; $mind = $node->mind; $action = $node->action; $relationship = $node->relationship; $language = $node->language; $it = $node->it; $form['field'][$node->marli_id] = array('#type' => 'markup', '#value' => $field, '#prefix' => '<b>', '#suffix' => '</b>'); $form['title'][$node->marli_id] = array('#type' => 'markup', '#value' => $item, '#prefix' => '<b>', '#suffix' => '</b>'); $form['mind'][$node->marli_id] = array('#type' => 'textfield', '#maxlength' => '1', '#size' => '1', '#value' => $mind); $form['action'][$node->marli_id] = array('#type' => 'textfield', '#maxlength' => '1', '#size' => '1', '#value' => $action); $form['relationship'][$node->marli_id] = array('#type' => 'textfield', '#maxlength' => '1', '#size' => '1', '#value' => $relationship); $form['language'][$node->marli_id] = array('#type' => 'textfield', '#maxlength' => '1', '#size' => '1', '#value' => $language); $form['it'][$node->marli_id] = array('#type' => 'textfield', '#maxlength' => '1', '#size' => '1', '#value' => $it); } $form['pager'] = array('#value' => theme('pager', NULL, 50, 0)); $form['save'] = array('#type' => 'submit', '#value' => t('Save')); $form['#theme'] = 'marli_scores'; return $form; } function marli_admin_submit($form, &$form_state) { $marli_id = 4; $submit_mind = $form_state['values']['mind'][$marli_id]; $submit_action = $form_state['values']['action'][$marli_id]; $submit_relationship = $form_state['values']['relationship'][$marli_id]; $submit_language = $form_state['values']['language'][$marli_id]; $submit_it = $form_state['values']['it'][$marli_id]; $sql_query = "UPDATE {marli} SET mind = %d, action = %d, relationship = %d, language = %d, it = %d WHERE marli_id = %d"; if ($success = db_query($sql_query, $submit_mind, $submit_action, $submit_relationship, $submit_language, $submit_it)) { drupal_set_message(t(' Values have been saved.')); } else { drupal_set_message(t('There was an error saving your data. Please try again.')); } }

    Read the article

  • How do I add a shapefile in ArcGIS via python scripting?

    - by Tom W
    I am trying to automate various tasks in ArcGIS Desktop (using ArcMap generally) with Python, and I keep needing a way to add a shape file to the current map. (And then do stuff to it, but that's another story). The best I can do so far is to add a layer file to the current map, using the following ("addLayer" is a layer file object): def AddLayerFromLayerFile(addLayer): import arcpy mxd = arcpy.mapping.MapDocument("CURRENT") df = arcpy.mapping.ListDataFrames(mxd, "Layers")[0] arcpy.mapping.AddLayer(df, addLayer, "AUTO_ARRANGE") arcpy.RefreshActiveView() arcpy.RefreshTOC() del mxd, df, addLayer However, my raw data is always going to be shape files, so I need to be able to open them. (Equivalently: convert a shape file to a layer file without opening it, but I'd prefer not to do that).

    Read the article

  • Primefaces p:fileupload component problem

    - by Nitesh Panchal
    Hello, I am using Primefaces 2.0.1 but the FileUpload component is not working properly. It uses JQuery uploadify behind the scenes. This is my web.xml <?xml version="1.0" encoding="UTF-8"?> <web-app version="3.0" xmlns="http://java.sun.com/xml/ns/javaee" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xsi:schemaLocation="http://java.sun.com/xml/ns/javaee http://java.sun.com/xml/ns/javaee/web-app_3_0.xsd"> <filter> <filter-name>PrimeFaces FileUpload Filter</filter-name> <filter-class>org.primefaces.webapp.filter.FileUploadFilter</filter-class> </filter> <filter-mapping> <filter-name>PrimeFaces FileUpload Filter</filter-name> <servlet-name>Faces Servlet</servlet-name> </filter-mapping> <servlet> <servlet-name>Faces Servlet</servlet-name> <servlet-class>javax.faces.webapp.FacesServlet</servlet-class> <load-on-startup>1</load-on-startup> </servlet> <servlet-mapping> <servlet-name>Faces Servlet</servlet-name> <url-pattern>*.jsf</url-pattern> </servlet-mapping> <servlet> <servlet-name>Resource Servlet</servlet-name> <servlet-class>org.primefaces.resource.ResourceServlet</servlet-class> <load-on-startup>1</load-on-startup> </servlet> <servlet-mapping> <servlet-name>Resource Servlet</servlet-name> <url-pattern>/primefaces_resource/*</url-pattern> </servlet-mapping> <welcome-file-list> <welcome-file>index.jsf</welcome-file> </welcome-file-list> </web-app> This is my index.xhtml :- <?xml version='1.0' encoding='UTF-8' ?> <!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Transitional//EN" "http://www.w3.org/TR/xhtml1/DTD/xhtml1-transitional.dtd"> <html xmlns="http://www.w3.org/1999/xhtml" xmlns:h="http://java.sun.com/jsf/html" xmlns:p="http://primefaces.prime.com.tr/ui"> <h:head> <title>Facelet Title</title> </h:head> <h:body> <h:form prependId="false"> <h:commandButton actionListener="#{NewJSFManagedBean.add}" value="add"/> <p:fileUpload auto="false" widgetVar="fileUpl" fileUploadListener="#{NewJSFManagedBean.saveFile}"/> </h:form> </h:body> </html> I have following libraries in my classpath :- primefaces 2.0.1 commons-beanutils commons-beanutils-bean-collection commons-digestor commons-fileUpload commons-io commons-logging jhighlight The file gets correctly uploaded in /tmp but in browser it always says HTTP error. Please help me. It used to work till yesterday. But today i did a fresh installation of Glassfish and it has stopped working.

    Read the article

  • How to save a large nhibernate collection without causing OutOfMemoryException

    - by Michael Hedgpeth
    How do I save a large collection with NHibernate which has elements that surpass the amount of memory allowed for the process? I am trying to save a Video object with nhibernate which has a large number of Screenshots (see below for code). Each Screenshot contains a byte[], so after nhibernate tries to save 10,000 or so records at once, an OutOfMemoryException is thrown. Normally I would try to break up the save and flush the session after every 500 or so records, but in this case, I need to save the collection because it automatically saves the SortOrder and VideoId for me (without the Screenshot having to know that it was a part of a Video). What is the best approach given my situation? Is there a way to break up this save without forcing the Screenshot to have knowledge of its parent Video? For your reference, here is the code from the simple sample I created: public class Video { public long Id { get; set; } public string Name { get; set; } public Video() { Screenshots = new ArrayList(); } public IList Screenshots { get; set; } } public class Screenshot { public long Id { get; set; } public byte[] Data { get; set; } } And mappings: <?xml version="1.0" encoding="utf-8" ?> <hibernate-mapping xmlns="urn:nhibernate-mapping-2.2" assembly="SavingScreenshotsTrial" namespace="SavingScreenshotsTrial" default-access="property"> <class name="Screenshot" lazy="false"> <id name="Id" type="Int64"> <generator class="hilo"/> </id> <property name="Data" column="Data" type="BinaryBlob" length="2147483647" not-null="true" /> </class> </hibernate-mapping> <?xml version="1.0" encoding="utf-8" ?> <hibernate-mapping xmlns="urn:nhibernate-mapping-2.2" assembly="SavingScreenshotsTrial" namespace="SavingScreenshotsTrial" > <class name="Video" lazy="false" table="Video" discriminator-value="0" abstract="true"> <id name="Id" type="Int64" access="property"> <generator class="hilo"/> </id> <property name="Name" /> <list name="Screenshots" cascade="all-delete-orphan" lazy="false"> <key column="VideoId" /> <index column="SortOrder" /> <one-to-many class="Screenshot" /> </list> </class> </hibernate-mapping> When I try to save a Video with 10000 screenshots, it throws an OutOfMemoryException. Here is the code I'm using: using (var session = CreateSession()) { Video video = new Video(); for (int i = 0; i < 10000; i++) { video.Screenshots.Add(new Screenshot() {Data = camera.TakeScreenshot(resolution)}); } session.SaveOrUpdate(video); }
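    One direction that is often suggested for this (a sketch under assumptions, not the poster's confirmed solution): map VideoId and SortOrder onto Screenshot itself so the screenshots can be saved in batches, flushing and clearing the session periodically so the byte arrays do not pile up in the first-level cache:

        // Hedged sketch: batch-saving screenshots with a bounded session cache.
        // Assumes Screenshot has been remapped with VideoId and SortOrder properties,
        // and that adonet.batch_size is set in the NHibernate configuration.
        using (var session = CreateSession())
        using (var tx = session.BeginTransaction())
        {
            var video = new Video { Name = "Sample video" };
            session.Save(video);   // hilo generator assigns the id without a round trip per row

            for (int i = 0; i < 10000; i++)
            {
                var shot = new Screenshot
                {
                    Data = camera.TakeScreenshot(resolution),   // camera/resolution as in the question
                    VideoId = video.Id,                         // assumed new property
                    SortOrder = i                               // assumed new property
                };
                session.Save(shot);

                if (i % 500 == 0)
                {
                    session.Flush();   // push the pending inserts
                    session.Clear();   // evict saved entities so their byte[] can be collected
                }
            }

            tx.Commit();
        }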

    Read the article

  • ObjectContext.SaveChanges() fails with SQL CE

    - by David Veeneman
    I am creating a model-first Entity Framework 4 app that uses SQL CE as its data store. All is well until I call ObjectContext.SaveChanges() to save changes to the entities in the model. At that point, SaveChanges() throws a System.Data.UpdateException, with an inner exception message that reads as follows: Server-generated keys and server-generated values are not supported by SQL Server Compact. I am completely puzzled by this message. Any idea what is going on and how to fix it? Thanks. Here is the Exception dump: System.Data.UpdateException was unhandled Message=An error occurred while updating the entries. See the inner exception for details. Source=System.Data.Entity StackTrace: at System.Data.Mapping.Update.Internal.UpdateTranslator.Update(IEntityStateManager stateManager, IEntityAdapter adapter) at System.Data.EntityClient.EntityAdapter.Update(IEntityStateManager entityCache) at System.Data.Objects.ObjectContext.SaveChanges(SaveOptions options) at System.Data.Objects.ObjectContext.SaveChanges() at FsDocumentationBuilder.ViewModel.Commands.SaveFileCommand.Execute(Object parameter) in D:\Users\dcveeneman\Documents\Visual Studio 2010\Projects\FsDocumentationBuilder\FsDocumentationBuilder\ViewModel\Commands\SaveFileCommand.cs:line 68 at MS.Internal.Commands.CommandHelpers.CriticalExecuteCommandSource(ICommandSource commandSource, Boolean userInitiated) at System.Windows.Controls.Primitives.ButtonBase.OnClick() at System.Windows.Controls.Button.OnClick() at System.Windows.Controls.Primitives.ButtonBase.OnMouseLeftButtonUp(MouseButtonEventArgs e) at System.Windows.UIElement.OnMouseLeftButtonUpThunk(Object sender, MouseButtonEventArgs e) at System.Windows.Input.MouseButtonEventArgs.InvokeEventHandler(Delegate genericHandler, Object genericTarget) at System.Windows.RoutedEventArgs.InvokeHandler(Delegate handler, Object target) at System.Windows.RoutedEventHandlerInfo.InvokeHandler(Object target, RoutedEventArgs routedEventArgs) at System.Windows.EventRoute.InvokeHandlersImpl(Object source, RoutedEventArgs args, Boolean reRaised) at System.Windows.UIElement.ReRaiseEventAs(DependencyObject sender, RoutedEventArgs args, RoutedEvent newEvent) at System.Windows.UIElement.OnMouseUpThunk(Object sender, MouseButtonEventArgs e) at System.Windows.Input.MouseButtonEventArgs.InvokeEventHandler(Delegate genericHandler, Object genericTarget) at System.Windows.RoutedEventArgs.InvokeHandler(Delegate handler, Object target) at System.Windows.RoutedEventHandlerInfo.InvokeHandler(Object target, RoutedEventArgs routedEventArgs) at System.Windows.EventRoute.InvokeHandlersImpl(Object source, RoutedEventArgs args, Boolean reRaised) at System.Windows.UIElement.RaiseEventImpl(DependencyObject sender, RoutedEventArgs args) at System.Windows.UIElement.RaiseTrustedEvent(RoutedEventArgs args) at System.Windows.UIElement.RaiseEvent(RoutedEventArgs args, Boolean trusted) at System.Windows.Input.InputManager.ProcessStagingArea() at System.Windows.Input.InputManager.ProcessInput(InputEventArgs input) at System.Windows.Input.InputProviderSite.ReportInput(InputReport inputReport) at System.Windows.Interop.HwndMouseInputProvider.ReportInput(IntPtr hwnd, InputMode mode, Int32 timestamp, RawMouseActions actions, Int32 x, Int32 y, Int32 wheel) at System.Windows.Interop.HwndMouseInputProvider.FilterMessage(IntPtr hwnd, WindowMessage msg, IntPtr wParam, IntPtr lParam, Boolean& handled) at System.Windows.Interop.HwndSource.InputFilterMessage(IntPtr hwnd, Int32 msg, IntPtr wParam, IntPtr lParam, Boolean& handled) at 
MS.Win32.HwndWrapper.WndProc(IntPtr hwnd, Int32 msg, IntPtr wParam, IntPtr lParam, Boolean& handled) at MS.Win32.HwndSubclass.DispatcherCallbackOperation(Object o) at System.Windows.Threading.ExceptionWrapper.InternalRealCall(Delegate callback, Object args, Int32 numArgs) at MS.Internal.Threading.ExceptionFilterHelper.TryCatchWhen(Object source, Delegate method, Object args, Int32 numArgs, Delegate catchHandler) at System.Windows.Threading.Dispatcher.InvokeImpl(DispatcherPriority priority, TimeSpan timeout, Delegate method, Object args, Int32 numArgs) at MS.Win32.HwndSubclass.SubclassWndProc(IntPtr hwnd, Int32 msg, IntPtr wParam, IntPtr lParam) at MS.Win32.UnsafeNativeMethods.DispatchMessage(MSG& msg) at System.Windows.Threading.Dispatcher.PushFrameImpl(DispatcherFrame frame) at System.Windows.Threading.Dispatcher.PushFrame(DispatcherFrame frame) at System.Windows.Threading.Dispatcher.Run() at System.Windows.Application.RunDispatcher(Object ignore) at System.Windows.Application.RunInternal(Window window) at System.Windows.Application.Run(Window window) at System.Windows.Application.Run() at FsDocumentationBuilder.App.Main() in D:\Users\dcveeneman\Documents\Visual Studio 2010\Projects\FsDocumentationBuilder\FsDocumentationBuilder\obj\x86\Debug\App.g.cs:line 50 at System.AppDomain._nExecuteAssembly(RuntimeAssembly assembly, String[] args) at System.AppDomain.ExecuteAssembly(String assemblyFile, Evidence assemblySecurity, String[] args) at Microsoft.VisualStudio.HostingProcess.HostProc.RunUsersAssembly() at System.Threading.ThreadHelper.ThreadStart_Context(Object state) at System.Threading.ExecutionContext.Run(ExecutionContext executionContext, ContextCallback callback, Object state, Boolean ignoreSyncCtx) at System.Threading.ExecutionContext.Run(ExecutionContext executionContext, ContextCallback callback, Object state) at System.Threading.ThreadHelper.ThreadStart() InnerException: System.Data.EntityCommandCompilationException Message=An error occurred while preparing the command definition. See the inner exception for details. Source=System.Data.Entity StackTrace: at System.Data.Mapping.Update.Internal.UpdateTranslator.CreateCommand(DbModificationCommandTree commandTree) at System.Data.Mapping.Update.Internal.DynamicUpdateCommand.CreateCommand(UpdateTranslator translator, Dictionary`2 identifierValues) at System.Data.Mapping.Update.Internal.DynamicUpdateCommand.Execute(UpdateTranslator translator, EntityConnection connection, Dictionary`2 identifierValues, List`1 generatedValues) at System.Data.Mapping.Update.Internal.UpdateTranslator.Update(IEntityStateManager stateManager, IEntityAdapter adapter) InnerException: System.NotSupportedException Message=Server-generated keys and server-generated values are not supported by SQL Server Compact. 
Source=System.Data.SqlServerCe.Entity StackTrace: at System.Data.SqlServerCe.SqlGen.DmlSqlGenerator.GenerateReturningSql(StringBuilder commandText, DbModificationCommandTree tree, ExpressionTranslator translator, DbExpression returning) at System.Data.SqlServerCe.SqlGen.DmlSqlGenerator.GenerateInsertSql(DbInsertCommandTree tree, List`1& parameters, Boolean isLocalProvider) at System.Data.SqlServerCe.SqlGen.SqlGenerator.GenerateSql(DbCommandTree tree, List`1& parameters, CommandType& commandType, Boolean isLocalProvider) at System.Data.SqlServerCe.SqlCeProviderServices.CreateCommand(DbProviderManifest providerManifest, DbCommandTree commandTree) at System.Data.SqlServerCe.SqlCeProviderServices.CreateDbCommandDefinition(DbProviderManifest providerManifest, DbCommandTree commandTree) at System.Data.Common.DbProviderServices.CreateCommandDefinition(DbCommandTree commandTree) at System.Data.Common.DbProviderServices.CreateCommand(DbCommandTree commandTree) at System.Data.Mapping.Update.Internal.UpdateTranslator.CreateCommand(DbModificationCommandTree commandTree) InnerException:
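    For what it's worth, this exception usually means the model expects SQL Server Compact to hand back a server-generated key, which the SQL CE 3.5 provider cannot do. A hedged sketch of the common workaround, assigning the key on the client (the entity and context names below are hypothetical, and the key's StoreGeneratedPattern must be set to None in the model):

        // Hedged sketch: client-generated keys so SQL CE never has to return a value.
        using (var context = new FsDocEntities())          // hypothetical ObjectContext
        {
            var document = new Document                     // hypothetical entity
            {
                Id = Guid.NewGuid(),                        // generated in code, not by the server
                Title = "Getting started"
            };

            context.Documents.AddObject(document);
            context.SaveChanges();                          // no server-generated value to read back
        }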

    Read the article

  • NHibernate child deletion problem.

    - by JMSA
    Suppose, I have saved some permissions in the database by using this code: RoleRepository roleRep = new RoleRepository(); Role role = new Role(); role.PermissionItems = Permission.GetList(); roleRep .SaveOrUpdate(role); Now, I need this code to delete the PermissionItem(s) associated with a Role when role.PermissionItems == null. Here is the code: RoleRepository roleRep = new RoleRepository(); Role role = roleRep.Get(roleId); role.PermissionItems = null; roleRep .SaveOrUpdate(role); But this is not happening. What should be the correct way to cope with this situation? What/how should I change, hbm-file or persistance code? Role.cs public class Role { public virtual string RoleName { get; set; } public virtual bool IsActive { get; set; } public virtual IList<Permission> PermissionItems { get; set; } } Role.hbm.xml <?xml version="1.0" encoding="utf-8" ?> <hibernate-mapping xmlns="urn:nhibernate-mapping-2.2" assembly="POCO" namespace="POCO"> <class name="Role" table="Role"> <id name="ID" column="ID"> <generator class="native" /> </id> <property name="RoleName" column="RoleName" /> <property name="IsActive" column="IsActive" type="System.Boolean" /> <bag name="PermissionItems" table="Permission" cascade="all" inverse="true"> <key column="RoleID"/> <one-to-many class="Permission" /> </bag> </class> </hibernate-mapping> Permission.cs public class Permission { public virtual string MenuItemKey { get; set; } public virtual int RoleID { get; set; } public virtual Role Role { get; set; } } Permission.hbm.xml <?xml version="1.0" encoding="utf-8" ?> <hibernate-mapping xmlns="urn:nhibernate-mapping-2.2" assembly="POCO" namespace="POCO"> <class name="Permission" table="Permission"> <id name="ID" column="ID"> <generator class="native"/> </id> <property name="MenuItemKey" column="MenuItemKey" /> <property name="RoleID" column="RoleID" /> <many-to-one name="Role" column="RoleID" not-null="true" cascade="all"> </many-to-one> </class> </hibernate-mapping>
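    A sketch of the approach that normally handles this (standard NHibernate orphan-deletion behaviour, not the poster's confirmed fix): change the bag's cascade to all-delete-orphan, drop the cascade from the Permission-to-Role side, and clear the existing collection rather than assigning null, so NHibernate can see which children were orphaned:

        // Hedged sketch: delete a role's permissions via orphan deletion.
        // Assumes <bag name="PermissionItems" ... cascade="all-delete-orphan" inverse="true">.
        var roleRep = new RoleRepository();
        var role = roleRep.Get(roleId);

        role.PermissionItems.Clear();   // orphaned Permission rows are deleted on flush
        roleRep.SaveOrUpdate(role);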

    Read the article

  • BizTalk 2009 - The Scope of the Table Looping Functoid

    - by StuartBrierley
    When mapping in BizTalk you will find there are times when you need to map from flat and dispersed elements in your source schema to a repeated record with child elements in your destination schema. Below is an example of how you can make use of the Table Looping Functoid to bring together these flat elements and create your repeated group. Although this example is purposely simple, I have previously encountered this issue on a much more complex scale when mapping the response from a credit scoring agency, where all the applicant details were supplied in separate parts of a very flat schema. Consider the source and destination schemas as follows:   Although the Table Looping Functoid states that the first input must be a scoping element linked from a repeating group, you can actually also make use of a constant value. In this case I know that the source schema always contains two people, so I set this to two. Then you need to set the number of columns in your table, in this case 2 (name and sex), and link all the required fields from the source schema. Following this you can configure the table. You can then add the Table Extractor functoids and complete the map. If you now validate this map you will see that BizTalk warns you about the scoping link for the Table Looping Functoid, but this can be safely ignored: C:\Code\Developer Folders\Stuart Brierley\Test Mapping\TableLooping.btm: warning btm1071: A first input of the Table-Looping functoid must be a link from a Source Tree Node which acts as the scoping parameter. Testing the map will produce the following output:

    Read the article

  • Understanding LINQ to SQL (11) Performance

    - by Dixin
    [LINQ via C# series] LINQ to SQL has a lot of great features like strong typing query compilation deferred execution declarative paradigm etc., which are very productive. Of course, these cannot be free, and one price is the performance. O/R mapping overhead Because LINQ to SQL is based on O/R mapping, one obvious overhead is, data changing usually requires data retrieving:private static void UpdateProductUnitPrice(int id, decimal unitPrice) { using (NorthwindDataContext database = new NorthwindDataContext()) { Product product = database.Products.Single(item => item.ProductID == id); // SELECT... product.UnitPrice = unitPrice; // UPDATE... database.SubmitChanges(); } } Before updating an entity, that entity has to be retrieved by an extra SELECT query. This is slower than direct data update via ADO.NET:private static void UpdateProductUnitPrice(int id, decimal unitPrice) { using (SqlConnection connection = new SqlConnection( "Data Source=localhost;Initial Catalog=Northwind;Integrated Security=True")) using (SqlCommand command = new SqlCommand( @"UPDATE [dbo].[Products] SET [UnitPrice] = @UnitPrice WHERE [ProductID] = @ProductID", connection)) { command.Parameters.Add("@ProductID", SqlDbType.Int).Value = id; command.Parameters.Add("@UnitPrice", SqlDbType.Money).Value = unitPrice; connection.Open(); command.Transaction = connection.BeginTransaction(); command.ExecuteNonQuery(); // UPDATE... command.Transaction.Commit(); } } The above imperative code specifies the “how to do” details with better performance. For the same reason, some articles from Internet insist that, when updating data via LINQ to SQL, the above declarative code should be replaced by:private static void UpdateProductUnitPrice(int id, decimal unitPrice) { using (NorthwindDataContext database = new NorthwindDataContext()) { database.ExecuteCommand( "UPDATE [dbo].[Products] SET [UnitPrice] = {0} WHERE [ProductID] = {1}", id, unitPrice); } } Or just create a stored procedure:CREATE PROCEDURE [dbo].[UpdateProductUnitPrice] ( @ProductID INT, @UnitPrice MONEY ) AS BEGIN BEGIN TRANSACTION UPDATE [dbo].[Products] SET [UnitPrice] = @UnitPrice WHERE [ProductID] = @ProductID COMMIT TRANSACTION END and map it as a method of NorthwindDataContext (explained in this post):private static void UpdateProductUnitPrice(int id, decimal unitPrice) { using (NorthwindDataContext database = new NorthwindDataContext()) { database.UpdateProductUnitPrice(id, unitPrice); } } As a normal trade off for O/R mapping, a decision has to be made between performance overhead and programming productivity according to the case. In a developer’s perspective, if O/R mapping is chosen, I consistently choose the declarative LINQ code, unless this kind of overhead is unacceptable. Data retrieving overhead After talking about the O/R mapping specific issue. Now look into the LINQ to SQL specific issues, for example, performance in the data retrieving process. The previous post has explained that the SQL translating and executing is complex. Actually, the LINQ to SQL pipeline is similar to the compiler pipeline. 
It consists of about 15 steps to translate an C# expression tree to SQL statement, which can be categorized as: Convert: Invoke SqlProvider.BuildQuery() to convert the tree of Expression nodes into a tree of SqlNode nodes; Bind: Used visitor pattern to figure out the meanings of names according to the mapping info, like a property for a column, etc.; Flatten: Figure out the hierarchy of the query; Rewrite: for SQL Server 2000, if needed Reduce: Remove the unnecessary information from the tree. Parameterize Format: Generate the SQL statement string; Parameterize: Figure out the parameters, for example, a reference to a local variable should be a parameter in SQL; Materialize: Executes the reader and convert the result back into typed objects. So for each data retrieving, even for data retrieving which looks simple: private static Product[] RetrieveProducts(int productId) { using (NorthwindDataContext database = new NorthwindDataContext()) { return database.Products.Where(product => product.ProductID == productId) .ToArray(); } } LINQ to SQL goes through above steps to translate and execute the query. Fortunately, there is a built-in way to cache the translated query. Compiled query When such a LINQ to SQL query is executed repeatedly, The CompiledQuery can be used to translate query for one time, and execute for multiple times:internal static class CompiledQueries { private static readonly Func<NorthwindDataContext, int, Product[]> _retrieveProducts = CompiledQuery.Compile((NorthwindDataContext database, int productId) => database.Products.Where(product => product.ProductID == productId).ToArray()); internal static Product[] RetrieveProducts( this NorthwindDataContext database, int productId) { return _retrieveProducts(database, productId); } } The new version of RetrieveProducts() gets better performance, because only when _retrieveProducts is first time invoked, it internally invokes SqlProvider.Compile() to translate the query expression. And it also uses lock to make sure translating once in multi-threading scenarios. Static SQL / stored procedures without translating Another way to avoid the translating overhead is to use static SQL or stored procedures, just as the above examples. Because this is a functional programming series, this article not dive into. For the details, Scott Guthrie already has some excellent articles: LINQ to SQL (Part 6: Retrieving Data Using Stored Procedures) LINQ to SQL (Part 7: Updating our Database using Stored Procedures) LINQ to SQL (Part 8: Executing Custom SQL Expressions) Data changing overhead By looking into the data updating process, it also needs a lot of work: Begins transaction Processes the changes (ChangeProcessor) Walks through the objects to identify the changes Determines the order of the changes Executes the changings LINQ queries may be needed to execute the changings, like the first example in this article, an object needs to be retrieved before changed, then the above whole process of data retrieving will be went through If there is user customization, it will be executed, for example, a table’s INSERT / UPDATE / DELETE can be customized in the O/R designer It is important to keep these overhead in mind. 
Bulk deleting / updating Another thing to be aware is the bulk deleting:private static void DeleteProducts(int categoryId) { using (NorthwindDataContext database = new NorthwindDataContext()) { database.Products.DeleteAllOnSubmit( database.Products.Where(product => product.CategoryID == categoryId)); database.SubmitChanges(); } } The expected SQL should be like:BEGIN TRANSACTION exec sp_executesql N'DELETE FROM [dbo].[Products] AS [t0] WHERE [t0].[CategoryID] = @p0',N'@p0 int',@p0=9 COMMIT TRANSACTION Hoverer, as fore mentioned, the actual SQL is to retrieving the entities, and then delete them one by one:-- Retrieves the entities to be deleted: exec sp_executesql N'SELECT [t0].[ProductID], [t0].[ProductName], [t0].[SupplierID], [t0].[CategoryID], [t0].[QuantityPerUnit], [t0].[UnitPrice], [t0].[UnitsInStock], [t0].[UnitsOnOrder], [t0].[ReorderLevel], [t0].[Discontinued] FROM [dbo].[Products] AS [t0] WHERE [t0].[CategoryID] = @p0',N'@p0 int',@p0=9 -- Deletes the retrieved entities one by one: BEGIN TRANSACTION exec sp_executesql N'DELETE FROM [dbo].[Products] WHERE ([ProductID] = @p0) AND ([ProductName] = @p1) AND ([SupplierID] IS NULL) AND ([CategoryID] = @p2) AND ([QuantityPerUnit] IS NULL) AND ([UnitPrice] = @p3) AND ([UnitsInStock] = @p4) AND ([UnitsOnOrder] = @p5) AND ([ReorderLevel] = @p6) AND (NOT ([Discontinued] = 1))',N'@p0 int,@p1 nvarchar(4000),@p2 int,@p3 money,@p4 smallint,@p5 smallint,@p6 smallint',@p0=78,@p1=N'Optimus Prime',@p2=9,@p3=$0.0000,@p4=0,@p5=0,@p6=0 exec sp_executesql N'DELETE FROM [dbo].[Products] WHERE ([ProductID] = @p0) AND ([ProductName] = @p1) AND ([SupplierID] IS NULL) AND ([CategoryID] = @p2) AND ([QuantityPerUnit] IS NULL) AND ([UnitPrice] = @p3) AND ([UnitsInStock] = @p4) AND ([UnitsOnOrder] = @p5) AND ([ReorderLevel] = @p6) AND (NOT ([Discontinued] = 1))',N'@p0 int,@p1 nvarchar(4000),@p2 int,@p3 money,@p4 smallint,@p5 smallint,@p6 smallint',@p0=79,@p1=N'Bumble Bee',@p2=9,@p3=$0.0000,@p4=0,@p5=0,@p6=0 -- ... COMMIT TRANSACTION And the same to the bulk updating. This is really not effective and need to be aware. Here is already some solutions from the Internet, like this one. The idea is wrap the above SELECT statement into a INNER JOIN:exec sp_executesql N'DELETE [dbo].[Products] FROM [dbo].[Products] AS [j0] INNER JOIN ( SELECT [t0].[ProductID], [t0].[ProductName], [t0].[SupplierID], [t0].[CategoryID], [t0].[QuantityPerUnit], [t0].[UnitPrice], [t0].[UnitsInStock], [t0].[UnitsOnOrder], [t0].[ReorderLevel], [t0].[Discontinued] FROM [dbo].[Products] AS [t0] WHERE [t0].[CategoryID] = @p0) AS [j1] ON ([j0].[ProductID] = [j1].[[Products])', -- The Primary Key N'@p0 int',@p0=9 Query plan overhead The last thing is about the SQL Server query plan. Before .NET 4.0, LINQ to SQL has an issue (not sure if it is a bug). LINQ to SQL internally uses ADO.NET, but it does not set the SqlParameter.Size for a variable-length argument, like argument of NVARCHAR type, etc. So for two queries with the same SQL but different argument length:using (NorthwindDataContext database = new NorthwindDataContext()) { database.Products.Where(product => product.ProductName == "A") .Select(product => product.ProductID).ToArray(); // The same SQL and argument type, different argument length. 
database.Products.Where(product => product.ProductName == "AA") .Select(product => product.ProductID).ToArray(); } Pay attention to the argument length in the translated SQL:exec sp_executesql N'SELECT [t0].[ProductID] FROM [dbo].[Products] AS [t0] WHERE [t0].[ProductName] = @p0',N'@p0 nvarchar(1)',@p0=N'A' exec sp_executesql N'SELECT [t0].[ProductID] FROM [dbo].[Products] AS [t0] WHERE [t0].[ProductName] = @p0',N'@p0 nvarchar(2)',@p0=N'AA' Here is the overhead: The first query’s query plan cache is not reused by the second one:SELECT sys.syscacheobjects.cacheobjtype, sys.dm_exec_cached_plans.usecounts, sys.syscacheobjects.[sql] FROM sys.syscacheobjects INNER JOIN sys.dm_exec_cached_plans ON sys.syscacheobjects.bucketid = sys.dm_exec_cached_plans.bucketid; They actually use different query plans. Again, pay attention to the argument length in the [sql] column (@p0 nvarchar(2) / @p0 nvarchar(1)). Fortunately, in .NET 4.0 this is fixed:internal static class SqlTypeSystem { private abstract class ProviderBase : TypeSystemProvider { protected int? GetLargestDeclarableSize(SqlType declaredType) { SqlDbType sqlDbType = declaredType.SqlDbType; if (sqlDbType <= SqlDbType.Image) { switch (sqlDbType) { case SqlDbType.Binary: case SqlDbType.Image: return 8000; } return null; } if (sqlDbType == SqlDbType.NVarChar) { return 4000; // Max length for NVARCHAR. } if (sqlDbType != SqlDbType.VarChar) { return null; } return 8000; } } } In this above example, the translated SQL becomes:exec sp_executesql N'SELECT [t0].[ProductID] FROM [dbo].[Products] AS [t0] WHERE [t0].[ProductName] = @p0',N'@p0 nvarchar(4000)',@p0=N'A' exec sp_executesql N'SELECT [t0].[ProductID] FROM [dbo].[Products] AS [t0] WHERE [t0].[ProductName] = @p0',N'@p0 nvarchar(4000)',@p0=N'AA' So that they reuses the same query plan cache: Now the [usecounts] column is 2.
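    The same plan-reuse point applies to hand-written ADO.NET: declare the parameter size explicitly so arguments of different lengths share one cached plan. A small sketch (not from the article) of what that looks like:

        // Hedged sketch: a fixed parameter size lets "A" and "AA" reuse the same query plan.
        using (SqlConnection connection = new SqlConnection(
            "Data Source=localhost;Initial Catalog=Northwind;Integrated Security=True"))
        using (SqlCommand command = new SqlCommand(
            "SELECT [ProductID] FROM [dbo].[Products] WHERE [ProductName] = @ProductName",
            connection))
        {
            // 4000 matches the NVARCHAR size LINQ to SQL itself emits since .NET 4.0.
            command.Parameters.Add("@ProductName", SqlDbType.NVarChar, 4000).Value = "A";
            connection.Open();
            using (SqlDataReader reader = command.ExecuteReader())
            {
                while (reader.Read())
                {
                    Console.WriteLine(reader.GetInt32(0));
                }
            }
        }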

    Read the article

  • SQL SERVER – Import CSV into Database – Transferring File Content into a Database Table using CSVexpress

    - by pinaldave
    One of the most common data integration tasks I run into is a desire to move data from a file into a database table.  Generally the user is familiar with his data, the structure of the file, and the database table, but is unfamiliar with data integration tools and therefore views this task as something that is difficult.  What these users really need is a point and click approach that minimizes the learning curve for the data integration tool.  This is what CSVexpress (www.CSVexpress.com) is all about!  It is based on expressor Studio, a data integration tool I’ve been reviewing over the last several months. With CSVexpress, moving data between data sources can be as simple as providing the database connection details, describing the structure of the incoming and outgoing data and then connecting two pre-programmed operators.   There’s no need to learn the intricacies of the data integration tool or to write code.  Let’s look at an example. Suppose I have a comma separated value data file with data similar to the following, which is a listing of terminated employees that includes their hiring and termination date, department, job description, and final salary. EMP_ID,STRT_DATE,END_DATE,JOB_ID,DEPT_ID,SALARY 102,13-JAN-93,24-JUL-98 17:00,Programmer,60,"$85,000" 101,21-SEP-89,27-OCT-93 17:00,Account Representative,110,"$65,000" 103,28-OCT-93,15-MAR-97 17:00,Account Manager,110,"$75,000" 304,17-FEB-96,19-DEC-99 17:00,Marketing,20,"$45,000" 333,24-MAR-98,31-DEC-99 17:00,Data Entry Clerk,50,"$35,000" 100,17-SEP-87,17-JUN-93 17:00,Administrative Assistant,90,"$40,000" 334,24-MAR-98,31-DEC-98 17:00,Sales Representative,80,"$40,000" 400,01-JAN-99,31-DEC-99 17:00,Sales Manager,80,"$55,000" Notice the concise format used for the date values, the fact that the termination date includes both date and time information, and that the salary is clearly identified as money by the dollar sign and digit grouping.  In moving this data to a database table I want to express the dates using a format that includes the century since it’s obvious that this listing could include employees who left the company in both the 20th and 21st centuries, and I want the salary to be stored as a decimal value without the currency symbol and grouping character.  Most data integration tools would require coding within a transformation operation to effect these changes, but not expressor Studio.  Directives for these modifications are included in the description of the incoming data. Besides starting the expressor Studio tool and opening a project, the first step is to create connection artifacts, which describe to expressor where data is stored.  For this example, two connection artifacts are required: a file connection, which encapsulates the file system location of my file; and a database connection, which encapsulates the database connection information.  With expressor Studio, I use wizards to create these artifacts. First click New Connection > File Connection in the Home tab of expressor Studio’s ribbon bar, which starts the File Connection wizard.  In the first window, I enter the path to the directory that contains the input file.  Note that the file connection artifact only specifies the file system location, not the name of the file. Then I click Next and enter a meaningful name for this connection artifact; clicking Finish closes the wizard and saves the artifact. 
To create the Database Connection artifact, I must know the location of, or instance name, of the target database and have the credentials of an account with sufficient privileges to write to the target table.  To use expressor Studio’s features to the fullest, this account should also have the authority to create a table. I click the New Connection > Database Connection in the Home tab of expressor Studio’s ribbon bar, which starts the Database Connection wizard.  expressor Studio includes high-performance drivers for many relational database management systems, so I can simply make a selection from the “Supplied database drivers” drop down control.  If my desired RDBMS isn’t listed, I can optionally use an existing ODBC DSN by selecting the “Existing DSN” radio button. In the following window, I enter the connection details.  With Microsoft SQL Server, I may choose to use Windows Authentication rather than rather than account credentials.  After clicking Next, I enter a meaningful name for this connection artifact and clicking Finish closes the wizard and saves the artifact. Now I create a schema artifact, which describes the structure of the file data.  When expressor reads a file, all data fields are typed as strings.  In some use cases this may be exactly what is needed and there is no need to edit the schema artifact.  But in this example, editing the schema artifact will be used to specify how the data should be transformed; that is, reformat the dates to include century designations, change the employee and job ID’s to integers, and convert the salary to a decimal value. Again a wizard is used to create the schema artifact.  I click New Schema > Delimited Schema in the Home tab of expressor Studio’s ribbon bar, which starts the Database Connection wizard.  In the first window, I click Get Data from File, which then displays a listing of the file connections in the project.  When I click on the file connection I previously created, a browse window opens to this file system location; I then select the file and click Open, which imports 10 lines from the file into the wizard. I now view the file’s content and confirm that the appropriate delimiter characters are selected in the “Field Delimiter” and “Record Delimiter” drop down controls; then I click Next. Since the input file includes a header row, I can easily indicate that fields in the file should be identified through the corresponding header value by clicking “Set All Names from Selected Row. “ Alternatively, I could enter a different identifier into the Field Details > Name text box.  I click Next and enter a meaningful name for this schema artifact; clicking Finish closes the wizard and saves the artifact. Now I open the schema artifact in the schema editor.  When I first view the schema’s content, I note that the types of all attributes in the Semantic Type (the right-hand panel) are strings and that the attribute names are the same as the field names in the data file.  To change an attribute’s name and type, I highlight the attribute and click Edit in the Attributes grouping on the Schema > Edit tab of the editor’s ribbon bar.  This opens the Edit Attribute window; I can change the attribute name and select the desired type from the “Data type” drop down control.  In this example, I change the name of each attribute to the name of the corresponding database table column (EmployeeID, StartingDate, TerminationDate, JobDescription, DepartmentID, and FinalSalary).  
Then for the EmployeeID and DepartmentID attributes, I select Integer as the data type, for the StartingDate and TerminationDate attributes, I select Datetime as the data type, and for the FinalSalary attribute, I select the Decimal type. But I can do much more in the schema editor.  For the datetime attributes, I can set a constraint that ensures that the data adheres to some predetermined specifications; a starting date must be later than January 1, 1980 (the date on which the company began operations) and a termination date must be earlier than 11:59 PM on December 31, 1999.  I simply select the appropriate constraint and enter the value (1980-01-01 00:00 as the starting date and 1999-12-31 11:59 as the termination date). As a last step in setting up these datetime conversions, I edit the mapping, describing the format of each datetime type in the source file. I highlight the mapping line for the StartingDate attribute and click Edit Mapping in the Mappings grouping on the Schema > Edit tab of the editor’s ribbon bar.  This opens the Edit Mapping window in which I either enter, or select, a format that describes how the datetime values are represented in the file.  Note the use of Y01 as the syntax for the year.  This syntax is the indicator to expressor Studio to derive the century by setting any year later than 01 to the 20th century and any year before 01 to the 21st century.  As each datetime value is read from the file, the year values are transformed into century and year values. For the TerminationDate attribute, my format also indicates that the datetime value includes hours and minutes. And now to the Salary attribute. I open its mapping and in the Edit Mapping window select the Currency tab and the “Use currency” check box.  This indicates that the file data will include the dollar sign (or in Europe the Pound or Euro sign), which should be removed. And on the Grouping tab, I select the “Use grouping” checkbox and enter 3 into the “Group size” text box, a comma into the “Grouping character” text box, and a decimal point into the “Decimal separator” character text box. These entries allow the string to be properly converted into a decimal value. By making these entries into the schema that describes my input file, I’ve specified how I want the data transformed prior to writing to the database table and completely removed the requirement for coding within the data integration application itself. Assembling the data integration application is simple.  Onto the canvas I drag the Read File and Write Table operators, connecting the output of the Read File operator to the input of the Write Table operator. Next, I select the Read File operator and its Properties panel opens on the right-hand side of expressor Studio.  For each property, I can select an appropriate entry from the corresponding drop down control.  Clicking on the button to the right of the “File name” text box opens the file system location specified in the file connection artifact, allowing me to select the appropriate input file.  I indicate also that the first row in the file, the header row, should be skipped, and that any record that fails one of the datetime constraints should be skipped. I then select the Write Table operator and in its Properties panel specify the database connection, normal for the “Mode,” and the “Truncate” and “Create Missing Table” options.  
If my target table does not yet exist, expressor will create the table using the information encapsulated in the schema artifact assigned to the operator. The last task needed to complete the application is to create the schema artifact used by the Write Table operator.  This is extremely easy as another wizard is capable of using the schema artifact assigned to the Read Table operator to create a schema artifact for the Write Table operator.  In the Write Table Properties panel, I click the drop down control to the right of the “Schema” property and select “New Table Schema from Upstream Output…” from the drop down menu. The wizard first displays the table description and in its second screen asks me to select the database connection artifact that specifies the RDBMS in which the target table will exist.  The wizard then connects to the RDBMS and retrieves a list of database schemas from which I make a selection.  The fourth screen gives me the opportunity to fine tune the table’s description.  In this example, I set the width of the JobDescription column to a maximum of 40 characters and select money as the type of the LastSalary column.  I also provide the name for the table. This completes development of the application.  The entire application was created through the use of wizards and the required data transformations specified through simple constraints and specifications rather than through coding.  To develop this application, I only needed a basic understanding of expressor Studio, a level of expertise that can be gained by working through a few introductory tutorials.  expressor Studio is as close to a point and click data integration tool as one could want and I urge you to try this product if you have a need to move data between files or from files to database tables. Check out CSVexpress in more detail.  It offers a few basic video tutorials and a preview of expressor Studio 3.5, which will support the reading and writing of data into Salesforce.com. Reference: Pinal Dave (http://blog.SQLAuthority.com) Filed under: Pinal Dave, PostADay, SQL, SQL Authority, SQL Documentation, SQL Download, SQL Query, SQL Server, SQL Tips and Tricks, SQLServer, T SQL, Technology

    Read the article

  • Auto-mount in fstab no longer working until manually running 'sudo mount -a'

    - by Brett Alton
    I have 3 SMB shared drives I need to connect to for work purposes. I had Ubuntu 10.10 Maverick and had all my drives loaded into fstab to be auto-mounted. Everything worked fine for a while but just before I upgraded to 11.04 Natty, the fstab auto-mount stopped working. Unfortunately I don't know what changed I made to my machine or what update installed that made this occur. /etc/fstab {snip} //192.168.7.3/apache_proj/ /home/brett/Desktop/apache smbfs guest,rw,iocharset=utf8,uid=1000,gid=1000 0 0 //192.168.7.3/apache_54321/ /home/brett/Desktop/54321 smbfs guest,rw,iocharset=utf8,uid=1000,gid=1000 0 0 //freenas.local/shared/ /home/brett/Desktop/shared smbfs guest,rw,iocharset=utf8,uid=1000,gid=1000 0 0 //lamp/www/ /home/brett/Desktop/lamp smbfs username={snip},password={snip},rw,iocharset=utf8,uid=1000,gid=1000 0 0 When the machine boots, I run this command to get them to mount: $ sudo umount /home/brett/Desktop/54321 /home/brett/Desktop/shared /home/brett/Desktop/apache; sudo mount -a [sudo] password for brett: umount: /home/brett/Desktop/54321: not mounted umount: /home/brett/Desktop/shared: not mounted umount: /home/brett/Desktop/apache: not mounted Warning: mapping 'guest' to 'guest,sec=none' Warning: mapping 'guest' to 'guest,sec=none' Warning: mapping 'guest' to 'guest,sec=none' mount error: could not resolve address for lamp: No address associated with hostname (I run that umount as a just-in-case). I looked through dmesg and some error logs and couldn't see why fstab was failing on my mounts. I see that my 'lamp' directive is failing, but that's because the machine is currently down.

    Read the article

  • Oracle Warehouse Builder and Enterprise ETL

    - by Fekete Zoltán
    The data sheet is fresh off the press!!! Enjoy: ODI Enterprise Edition: Warehouse Builder Enterprise ETL white paper. The good news: the core functionality of Oracle Warehouse Builder can be used free of charge with every purchased Oracle Database. So what exactly is the OWB core functionality, and what do the options add? The Enterprise ETL functionality is available for OWB as part of the Oracle Data Integrator Enterprise Edition license. The features that are only available with the ODI EE license (the former OWB Enterprise ETL option is part of this as well) can also be seen there at the bottom of the text. They are: - Transportable ETL modules, multiple configurations, and pluggable mappings - Operators for pluggable mapping, pluggable mapping input signature, pluggable mapping output signature - Design Environment Support for RAC - Metadata change propagation - Schedulable Mappings and Process Flows - Slowly Changing Dimensions (SCD) Type 2 and 3 - XML Files as a target - Target load ordering - Seeded spatial and streams transformations - Process Flow Activity templates - Process Flow variables support - Process Flow looping activities such as For Loop and While Loop - Process Flow Route and Notification activities - Metadata lineage and impact analysis - Metadata Extensibility - Deployment to Discoverer EUL - Deployment to Oracle BI Beans catalog So if you want to use OWB in a more serious environment, deploying to multiple environments and so on, you also need the ODI EE license. ODI Enterprise Edition: Warehouse Builder Enterprise ETL white paper.

    Read the article

  • How to highlight non-rectangular hotspots?

    - by HuseyinUslu
    So my question is highly related to Creating non-rectangular hotspots and detecting clicks. Yet again, I have irregular hot-spots (think the game Risk). Basically, we can detect clicks on these hot-spots easily using color key mapping as discussed in the above question, which I have no problems implementing (it is also covered here in detail). The problem is highlighting these irregular hotspots. Let me explain the question a bit more: the color key mapping guide uses a plain world map image, the author then color-maps the imaginary countries, and from that we can detect the country the pointer is over. In the same article the author mentions outlining countries on mouse-over; to get that effect, though, he creates a unique border asset for each country. For the game I'm working on I'm using the same color-key mapping idea to detect hot-spots, but I don't like that way of highlighting them. Coloring all the hot-spots is already a time-consuming job for me, as I have 25+ hot-spots for each map. Further, needing 25 unique border/highlight assets (one per hot-spot) doesn't sound right. Does anyone have a better idea or suggestion for highlighting these hot-spots?
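    One possible direction (my suggestion, not something from the linked article): since the color-key map already encodes each hotspot's region, a highlight overlay can be generated from that same texture at runtime, so no per-hotspot border asset is needed. A hedged XNA-style sketch:

        // Hedged sketch: build a one-off highlight texture from the color-key map.
        // 'colorKeyMap' is the color-coded lookup texture and 'hoveredKey' the key color
        // under the pointer, both already available from the click-detection code.
        static Texture2D BuildHighlight(GraphicsDevice device, Texture2D colorKeyMap, Color hoveredKey)
        {
            var keyPixels = new Color[colorKeyMap.Width * colorKeyMap.Height];
            colorKeyMap.GetData(keyPixels);

            var overlayPixels = new Color[keyPixels.Length];
            for (int i = 0; i < keyPixels.Length; i++)
            {
                // Tint only the pixels belonging to the hovered hotspot; leave the rest transparent.
                overlayPixels[i] = keyPixels[i] == hoveredKey
                    ? new Color(255, 255, 0, 128)   // semi-transparent yellow
                    : Color.Transparent;
            }

            var highlight = new Texture2D(device, colorKeyMap.Width, colorKeyMap.Height);
            highlight.SetData(overlayPixels);
            return highlight;   // cache per hotspot and draw it over the world map on hover
        }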

    Read the article

  • The Talent Behind Customer Experience

    - by Christina McKeon
    Earlier, I wrote about Powerful Data Lessons from the Presidential Election. A key component of the Obama team’s data analysis deserves its own discussion—the people. Recruiters are probably scrambling to find out who those Obama data crunchers are and lure them into corporations. For the Obama team, these data scientists became a secret ingredient that the competition didn’t have. This team of analysts knew how to hear the signal and ignore the noise, how to segment and target its base, and how to model scenarios and revise plans based on what the data told them. The talent was the difference. As you work to transform your organization to be more customer-centric, don’t forget that talent is a critical element. Journey mapping is a good start to understanding how your talent impacts your customer experiences. Part of journey mapping includes documenting the “on-stage” and “back-stage” systems and touchpoints. When mapping this part of your customers’ journey, include the roles and talent behind the employee actions—both customer facing and further upstream from that customer touchpoint. Know what each of these roles does, how well you are retaining people in these areas, and your plans to fill these open positions in the future. To use data scientists as an example, this job will be in high demand over the next 10 years. The workforce is shrinking, and higher education institutions may not be able to turn out trained data scientists as fast as you need them. You don’t want to be caught with a skills deficit, so consider how you can best plan for the future talent you will need. Have your existing employees make their career aspirations known to you now. You may find you already have employees willing to take on roles that drive better customer experiences. Then develop customer experience talent from within your organization through targeted learning programs. If you know that you will need to go outside the organization, build those candidate relationships now. Nurture the candidates you want to hire and partner with universities, colleges, and trade associations so you can increase the number of qualified candidates in your talent pool.

    Read the article

  • NHibernate: No persister error

    - by Mike
    Hello, In my quest to further my knowledge, I'm trying to get NHibernate running. I have the following structure in my solution: a Core class library project, an Infrastructure class library project, an MVC application project, and a Test project. In my Core project I have created the following entity: using System; namespace Core.Domain.Model { public class Category { public virtual Guid Id { get; set; } public virtual string Name { get; set; } } } In my Infrastructure project I have the following mapping: <?xml version="1.0" encoding="utf-8" ?> <hibernate-mapping xmlns="urn:nhibernate-mapping-2.2" namespace="Core.Domain.Model" assembly="Core"> <class name="Category" table="Categories" dynamic-update="true"> <cache usage="read-write"/> <id name="Id" column="Id" type="Guid"> <generator class="guid"/> </id> <property name="Name" length="100"/> </class> </hibernate-mapping> With the following config file: <hibernate-configuration xmlns="urn:nhibernate-configuration-2.2"> <session-factory> <property name="connection.driver_class">NHibernate.Driver.SqlClientDriver</property> <property name="connection.connection_string">server=xxxx;database=xxxx;Integrated Security=true;</property> <property name="show_sql">true</property> <property name="dialect">NHibernate.Dialect.MsSql2008Dialect</property> <property name="cache.use_query_cache">false</property> <property name="adonet.batch_size">100</property> <property name="proxyfactory.factory_class">NHibernate.ByteCode.Castle.ProxyFactoryFactory, NHibernate.ByteCode.Castle</property> <mapping assembly="Infrastructure" /> </session-factory> </hibernate-configuration> In my test project, I have the following test: [TestMethod] [DeploymentItem("hibernate.cfg.xml")] public void CanCreateCategory() { IRepository<Category> repo = new CategoryRepository(); Category category = new Category(); category.Name = "ASP.NET"; repo.Save(category); } I get the following error when I try to run the test: Test method Volunteer.Tests.CategoryTests.CanCreateCategory threw exception: NHibernate.MappingException: No persister for: Core.Domain.Model.Category. Any help would be greatly appreciated. I do have the cfg build action set to embedded resource. Thanks!
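    A note on the usual causes (my own suggestion, not part of the question): "No persister for" with this setup almost always means the Category.hbm.xml file never made it into the configuration - either it is not actually compiled as an embedded resource of the Infrastructure assembly, or its name does not end in .hbm.xml, which is the suffix <mapping assembly="..." /> scans for. A quick diagnostic sketch:

      // Hedged diagnostic sketch: confirm the mapping really is embedded in
      // Infrastructure and that a factory built from this configuration can save.
      var cfg = new NHibernate.Cfg.Configuration().Configure(); // reads hibernate.cfg.xml

      var infrastructure = System.Reflection.Assembly.Load("Infrastructure");
      foreach (var resource in infrastructure.GetManifestResourceNames())
          System.Console.WriteLine(resource); // expect something like Infrastructure.Category.hbm.xml

      using (var factory = cfg.BuildSessionFactory())
      using (var session = factory.OpenSession())
      using (var tx = session.BeginTransaction())
      {
          session.Save(new Core.Domain.Model.Category { Name = "ASP.NET" });
          tx.Commit();
      }

    If the resource list does not include the mapping file, the Build Action is not taking effect in that project; if it is listed under an unexpected name, check that the file really ends with .hbm.xml. It is also worth confirming that CategoryRepository builds its session from this same configuration rather than from a separate, unconfigured one.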

    Read the article

  • nhibernate subclass in code

    - by Antonio Nakic Alfirevic
    I would like to set up table-per-class-hierarchy inheritance in NHibernate through code. Everything else is set in XML mapping files except the subclasses. If I set the subclasses up in XML all is well, but not from code. This is the code I use - my concrete subclass never gets created :( //the call NHibernate.Cfg.Configuration config = new NHibernate.Cfg.Configuration(); SetSubclass(config, typeof(TAction), typeof(tActionSub1), "Procedure"); //the method public static void SetSubclass(Configuration configuration, Type baseClass, Type subClass, string discriminatorValue) { PersistentClass persBaseClass = configuration.ClassMappings.Where(cm => cm.MappedClass == baseClass).Single(); SingleTableSubclass persSubClass = new SingleTableSubclass(persBaseClass); persSubClass.ClassName = subClass.AssemblyQualifiedName; persSubClass.DiscriminatorValue = discriminatorValue; persSubClass.EntityPersisterClass = typeof(SingleTableEntityPersister); persSubClass.ProxyInterfaceName = (subClass).AssemblyQualifiedName; persSubClass.NodeName = subClass.Name; persSubClass.EntityName = subClass.FullName; persBaseClass.AddSubclass(persSubClass); } The XML mapping looks like this: <?xml version="1.0" encoding="utf-8" ?> <hibernate-mapping xmlns="urn:nhibernate-mapping-2.2" namespace="Riz.Pcm.Domain.BusinessObjects" assembly="Riz.Pcm.Domain"> <class name="Riz.Pcm.Domain.BusinessObjects.TAction, Riz.Pcm.Domain" table="dbo.tAction" lazy="true"> <id name="Id" column="ID"> <generator class="guid" /> </id> <discriminator type="String" formula="(select jt.Name from TJobType jt where jt.Id=JobTypeId)" insert="true" force="false"/> <many-to-one name="Session" column="SessionID" class="TSession" /> <property name="Order" column="Order1" /> <property name="ProcessStart" column="ProcessStart" /> <property name="ProcessEnd" column="ProcessEnd" /> <property name="Status" column="Status" /> <many-to-one name="JobType" column="JobTypeID" class="TJobType" /> <many-to-one name="Unit" column="UnitID" class="TUnit" /> <bag name="TActionProperties" lazy="true" cascade="all-delete-orphan" inverse="true" > <key column="ActionID"></key> <one-to-many class="TActionProperty"></one-to-many> </bag> <!--<subclass name="Riz.Pcm.Domain.tActionSub" discriminator-value="ZPower"></subclass>--> </class> </hibernate-mapping> What am I doing wrong? I can't find any examples on Google.
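    One workaround to consider (an assumption on my part, not a verified fix for this exact code): instead of mutating the PersistentClass model directly, hand NHibernate the subclass as a small XML fragment via Configuration.AddXmlString before the session factory is built. A rough sketch, reusing the names from the question:

      var config = new NHibernate.Cfg.Configuration().Configure();

      // The subclass is declared at the top level with 'extends', the same mapping
      // NHibernate would read from a separate hbm.xml file.
      string subclassXml = @"
      <hibernate-mapping xmlns='urn:nhibernate-mapping-2.2'
                         namespace='Riz.Pcm.Domain.BusinessObjects'
                         assembly='Riz.Pcm.Domain'>
        <subclass name='tActionSub1' extends='TAction' discriminator-value='Procedure' />
      </hibernate-mapping>";

      config.AddXmlString(subclassXml);
      var sessionFactory = config.BuildSessionFactory();

    If the programmatic route is kept instead, the subclass has to be registered before BuildSessionFactory is called, since the mappings are compiled at that point; adding it to the PersistentClass afterwards has no effect on an already-built factory.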

    Read the article

  • MultiActionController no longer receiving requests?

    - by Stefan Kendall
    I was attempting to make changes to my controller, and all of a sudden, I no longer seem to receive any requests (404 when attempting to hit the servlet mapped URLs). I'm sure I've broken my web.xml or app-servlet.xml, but I just don't see where. I can access index.jsp from tomcat (http://IP/app/index.jsp), but I can't get my servlet mapping to work correctly. Help? web.xml: <!DOCTYPE web-app PUBLIC "-//Sun Microsystems, Inc.//DTD Web Application 2.3//EN" "http://java.sun.com/dtd/web-app_2_3.dtd"> <web-app version = "2.4" xmlns="http://java.sun.com/xml/ns/j2ee" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xsi:schemaLocation="http://java.sun.com/xml/ns/j2ee http://java.sun.com/xml/ns/j2ee/web-app_2_4.xsd"> <listener> <listener-class> org.springframework.web.context.ContextLoaderListener </listener-class> </listener> <servlet> <servlet-name>app</servlet-name> <servlet-class>org.springframework.web.servlet.DispatcherServlet</servlet-class> </servlet> <servet-mapping> <servlet-name>app</servlet-name> <url-pattern>/myRequest</url-pattern> </servet-mapping> app-servlet.xml: <!DOCTYPE web-app PUBLIC "-//Sun Microsystems, Inc.//DTD Web Application 2.3//EN" "http://java.sun.com/dtd/web-app_2_3.dtd"> <web-app version = "2.4" xmlns="http://java.sun.com/xml/ns/j2ee" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xsi:schemaLocation="http://java.sun.com/xml/ns/j2ee http://java.sun.com/xml/ns/j2ee/web-app_2_4.xsd"> <listener> <listener-class> org.springframework.web.context.ContextLoaderListener </listener-class> </listener> <servlet> <servlet-name>app</servlet-name> <servlet-class>org.springframework.web.servlet.DispatcherServlet</servlet-class> </servlet> <servet-mapping> <servlet-name>app</servlet-name> <url-pattern>/myRequest</url-pattern> </servet-mapping> </web-app>
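    Two things stand out in the posted configuration (pointing them out as likely culprits, not a verified fix): the mapping element is misspelled as <servet-mapping>, so the container never maps /myRequest to the servlet, and app-servlet.xml contains a copy of web.xml instead of a Spring bean-definition file - the DispatcherServlet named "app" will look for /WEB-INF/app-servlet.xml with a <beans> root element. A sketch of what the corrected pair might look like (the controller class name is a placeholder):

      <!-- web.xml: note the correct spelling of servlet-mapping -->
      <servlet-mapping>
          <servlet-name>app</servlet-name>
          <url-pattern>/myRequest</url-pattern>
      </servlet-mapping>

      <!-- app-servlet.xml: a Spring bean-definition file, not a second web.xml -->
      <beans xmlns="http://www.springframework.org/schema/beans"
             xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
             xsi:schemaLocation="http://www.springframework.org/schema/beans
                                 http://www.springframework.org/schema/beans/spring-beans-2.5.xsd">

          <bean class="org.springframework.web.servlet.handler.BeanNameUrlHandlerMapping" />
          <bean name="/myRequest" class="com.example.MyMultiActionController" />
      </beans>

    With BeanNameUrlHandlerMapping, a controller bean whose name is the URL path handles the request; com.example.MyMultiActionController is hypothetical and stands in for the poster's MultiActionController.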

    Read the article

  • NHibernate : delete error

    - by MadSeb
    Hi, Model: I have a model in which one Installation can contain multiple "Computer Systems". Database: The table Installations has two columns, Name and Description. The table ComputerSystems has three columns: Name, Description and InstallationId. Mappings: I have the following mapping for Installation: <?xml version="1.0" encoding="utf-8"?> <hibernate-mapping xmlns="urn:nhibernate-mapping-2.2" assembly="myProgram.Core" namespace="myProgram"> <class name="Installation" table="Installations" lazy="true"> <id name="Id" column="Id" type="int"> <generator class="native" /> </id> <property name="Name" column="Name" type="string" not-null="true" /> <property name="Description" column="Description" type="string" /> <bag name="ComputerSystems" inverse="true" lazy="true" cascade="all-delete-orphan"> <key column="InstallationId" /> <one-to-many class="ComputerSystem" /> </bag> </class> </hibernate-mapping> I have the following mapping for ComputerSystem: <?xml version="1.0" encoding="utf-8"?> <id name="Id" column="ID" type="int"> <generator class="native" /> </id> <property name="Name" column="Name" type="string" not-null="true" /> <property name="Description" column="Description" type="string" /> <many-to-one name="Installation" column="InstallationID" cascade="save-update" not-null="true" /> Classes: The Installation class is: public class Installation { public virtual String Description { get; set; } public virtual String Name { get; set; } public virtual IList<ComputerSystem> ComputerSystems { get { if (_computerSystemItems== null) { lock (this) { if (_computerSystemItems== null) _computerSystemItems= new List<ComputerSystem>(); } } return _computerSystemItems; } set { _computerSystemItems= value; } } protected IList<ComputerSystem> _computerSystemItems; public Installation() { Description = ""; Name= ""; } } The ComputerSystem class is: public class ComputerSystem { public virtual String Name { get; set; } public virtual String Description { get; set; } public virtual Installation Installation { get; set; } } The issue is that I get an error when trying to delete an installation that contains a ComputerSystem. The error is: "deleted object would be re-saved by cascade (remove deleted object from associations)". Can anyone help? Regards, Seb
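    For what it's worth (a general note, not a verified fix for this exact model): NHibernate raises "deleted object would be re-saved by cascade" when an entity scheduled for deletion is still referenced, at flush time, by an association whose cascade includes save-update - the live side re-saves the object that was just deleted. The usual remedies are exactly what the message suggests (remove the deleted object from every collection or reference that still points at it before committing) and/or to trim the cascades so only one side propagates operations; with cascade="save-update" on ComputerSystem.Installation and all-delete-orphan on the bag, both ends currently cascade. A sketch of a delete that goes through the parent only:

      // Sketch: let the parent's all-delete-orphan cascade remove the children.
      // Session/repository plumbing (sessionFactory, installationId) is assumed,
      // not taken from the question.
      using (var session = sessionFactory.OpenSession())
      using (var tx = session.BeginTransaction())
      {
          var installation = session.Get<Installation>(installationId);

          // Do not delete ComputerSystem rows individually while the parent still
          // references them; deleting the parent is enough.
          session.Delete(installation);
          tx.Commit();
      }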

    Read the article

  • NHibernate Many to Many delete all my data in the table

    - by Daoming Yang
    I would love to thank @Stefan Steinegger and @David, who helped me out yesterday with many-to-many mapping. I have 3 tables, "News", "Tags" and "News_Tags", with a many-to-many relationship; "News_Tags" is the link table. If I delete one of the news records, the following mappings delete all my news records that share the same tags. One thing to note: only unique tags are stored in the "Tags" table. This mapping makes sense to me - it deletes the tag and the related News records - but how can I implement a tagging system with NHibernate? Can anyone give me some suggestions? Many thanks. Daoming. News mapping: <class name="New" table="News" lazy="false"> <id name="NewID"> <generator class="identity" /> </id> <property name="Title" type="String"></property> <property name="Description" type="String"></property> <set name="TagsList" table="New_Tags" lazy="false" inverse="true" cascade="all"> <key column="NewID" /> <many-to-many class="Tag" column="TagID" /> </set> </class> Tag mapping: <class name="Tag" table="Tags" lazy="false"> <id name="TagID"> <generator class="identity" /> </id> <property name="TagName" type="String"></property> <property name="DateCreated" type="DateTime"></property> <!--inverse="true" has been defined in the "News mapping"--> <set name="NewsList" table="New_Tags" lazy="false" cascade="all"> <key column="TagID" /> <many-to-many class="New" column="NewID" /> </set> </class>
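    A possible direction (my suggestion, not a confirmed answer): with cascade="all" on both many-to-many sets, deleting a News cascades the delete to each of its Tags, and from each Tag back out to every other News that shares it - which matches the mass deletes being seen. For a tagging system the link usually cascades only save-update, so a delete never propagates across the shared association. A sketch of the adjusted sets (note it also moves inverse="true" to the Tag side, the more common arrangement for tagging, so the News side owns the link rows):

      <!-- News side: owns the link-table rows, no delete cascade -->
      <set name="TagsList" table="New_Tags" lazy="false" cascade="save-update">
        <key column="NewID" />
        <many-to-many class="Tag" column="TagID" />
      </set>

      <!-- Tag side: inverse, no delete cascade -->
      <set name="NewsList" table="New_Tags" lazy="false" inverse="true" cascade="save-update">
        <key column="TagID" />
        <many-to-many class="New" column="NewID" />
      </set>

    With this shape, deleting a News removes only its rows in New_Tags; the tags themselves and the other news items are untouched, and tags left with no news can be cleaned up separately if desired.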

    Read the article

  • Hibernate updating records and implementing listeners : getting only required attribute values for event.getOldState()

    - by Narendra
    Hi All, I am using Hibernate 3 as my persistence framework. Below is the sample hbm file I am using: <?xml version="1.0"?> <!DOCTYPE hibernate-mapping PUBLIC "-//Hibernate/Hibernate Mapping DTD 3.0//EN" "http://hibernate.sourceforge.net/hibernate-mapping-3.0.dtd"> <hibernate-mapping> <class name="com.test.User" table="user"> <meta attribute="implements">com.test.dao.interfaces.IEntity</meta> <id name="key" type="long" column="user_key"> <generator class="increment" /> </id> <property name="userName" column="user_name" not-null="true" type="string" /> <property name="password" column="password" not-null="true" type="string" /> <property name="firstName" column="first_name" not-null="true" type="string" /> <property name="lastName" column="last_name" not-null="true" type="string" /> <property name="createdDate" column="created_date" not-null="true" type="timestamp" insert="false" update="false" /> <property name="createdBy" column="created_by" not-null="true" type="string" update="false" /> </class> </hibernate-mapping> I have added a post-update listener. If any updates are performed on User, the listener is invoked and the changes are inserted into an audit table. Below is the sample implementation for the post-update event: public void onPostUpdate(PostUpdateEvent event) { LogHelper.info(logger, "Begin - onPostUpdate " + event.getEntity().getClass().getSimpleName()); if (!this.checkForAudit(event.getEntity().getClass().getSimpleName())) { // check do we need to audit it. } // Get Attribute Names String[] attrNames = event.getPersister().getEntityMetamodel() .getPropertyNames(); Object[] oldobjectValue = event.getOldState(); Object[] newObjectValue = event.getState(); this.auditDetailsEvent(attrNames, oldobjectValue, newObjectValue); LogHelper.info(logger, "End - onPostUpdate"); // return false; } Here is my requirement: event.getPersister().getEntityMetamodel().getPropertyNames(), event.getOldState() and event.getState() should return only the attribute names and values I need to insert into the audit table. Is there any way to control the return values of these calls? Please help me with this. Thanks, Narendra
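    As far as I know the listener cannot change what getState()/getOldState() return - they always carry the full property arrays - but it can restrict the audit to the properties that actually changed by asking the persister which indexes are dirty. A hedged Java sketch (auditChange stands in for the poster's auditDetailsEvent, and getOldState() can be null when no previous snapshot was loaded):

      public void onPostUpdate(PostUpdateEvent event) {
          String[] attrNames = event.getPersister().getEntityMetamodel().getPropertyNames();
          Object[] oldState = event.getOldState();
          Object[] newState = event.getState();
          if (oldState == null) {
              return; // no previous snapshot available (e.g. detached update)
          }

          // findDirty returns the indexes of the properties whose values changed.
          int[] dirty = event.getPersister().findDirty(newState, oldState, event.getEntity(), event.getSession());
          if (dirty == null) {
              return;
          }

          for (int i : dirty) {
              auditChange(attrNames[i], oldState[i], newState[i]); // audit only the changed attributes
          }
      }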

    Read the article

  • Oracle Technology Forum event, Wednesday, May 5, 2010

    - by Fekete Zoltán
    Next Wednesday we are holding an Oracle Technology Forum day, where you can attend presentations in the database and developer tracks and get answers to your questions. Register for the event. In the database track I will talk about the technical gems of the Sun Oracle Database Machine / Exadata solutions, from both the transactional (OLTP) side and the data warehouse (DW) and database consolidation side. I will also highlight the new and interesting features of Oracle Data Mining and OLAP, and mention where Oracle's Data Warehouse Reference Architecture can be applied.

    Read the article
