Search Results

Search found 4565 results on 183 pages for 'nhibernate mapping'.


  • ObjectContext.SaveChanges() fails with SQL CE

    - by David Veeneman
    I am creating a model-first Entity Framework 4 app that uses SQL CE as its data store. All is well until I call ObjectContext.SaveChanges() to save changes to the entities in the model. At that point, SaveChanges() throws a System.Data.UpdateException, with an inner exception message that reads as follows: Server-generated keys and server-generated values are not supported by SQL Server Compact. I am completely puzzled by this message. Any idea what is going on and how to fix it? Thanks. Here is the Exception dump: System.Data.UpdateException was unhandled Message=An error occurred while updating the entries. See the inner exception for details. Source=System.Data.Entity StackTrace: at System.Data.Mapping.Update.Internal.UpdateTranslator.Update(IEntityStateManager stateManager, IEntityAdapter adapter) at System.Data.EntityClient.EntityAdapter.Update(IEntityStateManager entityCache) at System.Data.Objects.ObjectContext.SaveChanges(SaveOptions options) at System.Data.Objects.ObjectContext.SaveChanges() at FsDocumentationBuilder.ViewModel.Commands.SaveFileCommand.Execute(Object parameter) in D:\Users\dcveeneman\Documents\Visual Studio 2010\Projects\FsDocumentationBuilder\FsDocumentationBuilder\ViewModel\Commands\SaveFileCommand.cs:line 68 at MS.Internal.Commands.CommandHelpers.CriticalExecuteCommandSource(ICommandSource commandSource, Boolean userInitiated) at System.Windows.Controls.Primitives.ButtonBase.OnClick() at System.Windows.Controls.Button.OnClick() at System.Windows.Controls.Primitives.ButtonBase.OnMouseLeftButtonUp(MouseButtonEventArgs e) at System.Windows.UIElement.OnMouseLeftButtonUpThunk(Object sender, MouseButtonEventArgs e) at System.Windows.Input.MouseButtonEventArgs.InvokeEventHandler(Delegate genericHandler, Object genericTarget) at System.Windows.RoutedEventArgs.InvokeHandler(Delegate handler, Object target) at System.Windows.RoutedEventHandlerInfo.InvokeHandler(Object target, RoutedEventArgs routedEventArgs) at System.Windows.EventRoute.InvokeHandlersImpl(Object source, RoutedEventArgs args, Boolean reRaised) at System.Windows.UIElement.ReRaiseEventAs(DependencyObject sender, RoutedEventArgs args, RoutedEvent newEvent) at System.Windows.UIElement.OnMouseUpThunk(Object sender, MouseButtonEventArgs e) at System.Windows.Input.MouseButtonEventArgs.InvokeEventHandler(Delegate genericHandler, Object genericTarget) at System.Windows.RoutedEventArgs.InvokeHandler(Delegate handler, Object target) at System.Windows.RoutedEventHandlerInfo.InvokeHandler(Object target, RoutedEventArgs routedEventArgs) at System.Windows.EventRoute.InvokeHandlersImpl(Object source, RoutedEventArgs args, Boolean reRaised) at System.Windows.UIElement.RaiseEventImpl(DependencyObject sender, RoutedEventArgs args) at System.Windows.UIElement.RaiseTrustedEvent(RoutedEventArgs args) at System.Windows.UIElement.RaiseEvent(RoutedEventArgs args, Boolean trusted) at System.Windows.Input.InputManager.ProcessStagingArea() at System.Windows.Input.InputManager.ProcessInput(InputEventArgs input) at System.Windows.Input.InputProviderSite.ReportInput(InputReport inputReport) at System.Windows.Interop.HwndMouseInputProvider.ReportInput(IntPtr hwnd, InputMode mode, Int32 timestamp, RawMouseActions actions, Int32 x, Int32 y, Int32 wheel) at System.Windows.Interop.HwndMouseInputProvider.FilterMessage(IntPtr hwnd, WindowMessage msg, IntPtr wParam, IntPtr lParam, Boolean& handled) at System.Windows.Interop.HwndSource.InputFilterMessage(IntPtr hwnd, Int32 msg, IntPtr wParam, IntPtr lParam, Boolean& handled) at 
MS.Win32.HwndWrapper.WndProc(IntPtr hwnd, Int32 msg, IntPtr wParam, IntPtr lParam, Boolean& handled) at MS.Win32.HwndSubclass.DispatcherCallbackOperation(Object o) at System.Windows.Threading.ExceptionWrapper.InternalRealCall(Delegate callback, Object args, Int32 numArgs) at MS.Internal.Threading.ExceptionFilterHelper.TryCatchWhen(Object source, Delegate method, Object args, Int32 numArgs, Delegate catchHandler) at System.Windows.Threading.Dispatcher.InvokeImpl(DispatcherPriority priority, TimeSpan timeout, Delegate method, Object args, Int32 numArgs) at MS.Win32.HwndSubclass.SubclassWndProc(IntPtr hwnd, Int32 msg, IntPtr wParam, IntPtr lParam) at MS.Win32.UnsafeNativeMethods.DispatchMessage(MSG& msg) at System.Windows.Threading.Dispatcher.PushFrameImpl(DispatcherFrame frame) at System.Windows.Threading.Dispatcher.PushFrame(DispatcherFrame frame) at System.Windows.Threading.Dispatcher.Run() at System.Windows.Application.RunDispatcher(Object ignore) at System.Windows.Application.RunInternal(Window window) at System.Windows.Application.Run(Window window) at System.Windows.Application.Run() at FsDocumentationBuilder.App.Main() in D:\Users\dcveeneman\Documents\Visual Studio 2010\Projects\FsDocumentationBuilder\FsDocumentationBuilder\obj\x86\Debug\App.g.cs:line 50 at System.AppDomain._nExecuteAssembly(RuntimeAssembly assembly, String[] args) at System.AppDomain.ExecuteAssembly(String assemblyFile, Evidence assemblySecurity, String[] args) at Microsoft.VisualStudio.HostingProcess.HostProc.RunUsersAssembly() at System.Threading.ThreadHelper.ThreadStart_Context(Object state) at System.Threading.ExecutionContext.Run(ExecutionContext executionContext, ContextCallback callback, Object state, Boolean ignoreSyncCtx) at System.Threading.ExecutionContext.Run(ExecutionContext executionContext, ContextCallback callback, Object state) at System.Threading.ThreadHelper.ThreadStart() InnerException: System.Data.EntityCommandCompilationException Message=An error occurred while preparing the command definition. See the inner exception for details. Source=System.Data.Entity StackTrace: at System.Data.Mapping.Update.Internal.UpdateTranslator.CreateCommand(DbModificationCommandTree commandTree) at System.Data.Mapping.Update.Internal.DynamicUpdateCommand.CreateCommand(UpdateTranslator translator, Dictionary`2 identifierValues) at System.Data.Mapping.Update.Internal.DynamicUpdateCommand.Execute(UpdateTranslator translator, EntityConnection connection, Dictionary`2 identifierValues, List`1 generatedValues) at System.Data.Mapping.Update.Internal.UpdateTranslator.Update(IEntityStateManager stateManager, IEntityAdapter adapter) InnerException: System.NotSupportedException Message=Server-generated keys and server-generated values are not supported by SQL Server Compact. 
Source=System.Data.SqlServerCe.Entity StackTrace: at System.Data.SqlServerCe.SqlGen.DmlSqlGenerator.GenerateReturningSql(StringBuilder commandText, DbModificationCommandTree tree, ExpressionTranslator translator, DbExpression returning) at System.Data.SqlServerCe.SqlGen.DmlSqlGenerator.GenerateInsertSql(DbInsertCommandTree tree, List`1& parameters, Boolean isLocalProvider) at System.Data.SqlServerCe.SqlGen.SqlGenerator.GenerateSql(DbCommandTree tree, List`1& parameters, CommandType& commandType, Boolean isLocalProvider) at System.Data.SqlServerCe.SqlCeProviderServices.CreateCommand(DbProviderManifest providerManifest, DbCommandTree commandTree) at System.Data.SqlServerCe.SqlCeProviderServices.CreateDbCommandDefinition(DbProviderManifest providerManifest, DbCommandTree commandTree) at System.Data.Common.DbProviderServices.CreateCommandDefinition(DbCommandTree commandTree) at System.Data.Common.DbProviderServices.CreateCommand(DbCommandTree commandTree) at System.Data.Mapping.Update.Internal.UpdateTranslator.CreateCommand(DbModificationCommandTree commandTree) InnerException:
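
    A workaround often suggested for this message (assuming the model's entity keys are currently marked as store-generated, which the SQL CE 3.5 provider cannot honor): switch the keys to client-assigned values, for example Guids, and set StoreGeneratedPattern to None on the key column in the SSDL section of the .edmx. A minimal sketch, with hypothetical entity and property names standing in for the real model-first types:

        // Sketch only: "ModelContainer", "Document" and its properties are placeholders.
        using (var context = new ModelContainer())
        {
            var doc = new Document
            {
                Id = Guid.NewGuid(),        // key assigned on the client, not by SQL CE
                Title = "New document"
            };
            context.Documents.AddObject(doc);
            context.SaveChanges();          // no server-generated value is expected now
        }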

    Read the article

  • BizTalk 2009 - The Scope of the Table Looping Functoid

    - by StuartBrierley
    When mapping in BizTalk you will find there are times when you need to map from flat and dispersed elements in your source schema to a repeated record with child elements in your destination schema.  Below is an example of how you can make use of the Table Looping Functoid to bring together these flat elements and create your repeated group.  Although this example is purposely simple, I have previously encountered this issue on a much more complex scale when mapping the response from a credit scoring agency where all the applicant details were supplied in separate parts of a very flat schema. Consider the source and destination schemas as follows:   Although the Table Looping Functoid states that the first input must be a scoping element linked from a repeating group, you can actually also make use of a constant value.  In this case I know that the source schema always contains two people, so I set this to two. Then you need to set the number of columns in your table, in this case 2 (name and sex) and link all the required fields from the source schema. Following this you can configure the table. You can then add the Table Extractor functoids and complete the map. If you now validate this map you will see that BizTalk will warn you about the scoping link for the Table Looping Functoid, but this can be safely ignored. C:\Code\Developer Folders\Stuart Brierley\Test Mapping\TableLooping.btm: warning btm1071: A first input of the Table-Looping functoid must be a link from a Source Tree Node which acts as the scoping parameter. Testing the map will produce the following output:
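
    The output screenshot is not reproduced here; as a purely hypothetical illustration (the element names below are made up, since the real schemas are only shown in the screenshots), the shape of the transformation is:

        <!-- Flat, dispersed source elements -->
        <People>
          <Person1Name>Alice</Person1Name>
          <Person1Sex>F</Person1Sex>
          <Person2Name>Bob</Person2Name>
          <Person2Sex>M</Person2Sex>
        </People>

        <!-- Destination: a repeated record built by the Table Looping Functoid -->
        <People>
          <Person>
            <Name>Alice</Name>
            <Sex>F</Sex>
          </Person>
          <Person>
            <Name>Bob</Name>
            <Sex>M</Sex>
          </Person>
        </People>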

    Read the article

  • Understanding LINQ to SQL (11) Performance

    - by Dixin
    [LINQ via C# series] LINQ to SQL has a lot of great features like strong typing, query compilation, deferred execution, declarative paradigm, etc., which are very productive. Of course, these cannot be free, and one price is performance. O/R mapping overhead Because LINQ to SQL is based on O/R mapping, one obvious overhead is that data changing usually requires data retrieving:private static void UpdateProductUnitPrice(int id, decimal unitPrice) { using (NorthwindDataContext database = new NorthwindDataContext()) { Product product = database.Products.Single(item => item.ProductID == id); // SELECT... product.UnitPrice = unitPrice; // UPDATE... database.SubmitChanges(); } } Before updating an entity, that entity has to be retrieved by an extra SELECT query. This is slower than direct data update via ADO.NET:private static void UpdateProductUnitPrice(int id, decimal unitPrice) { using (SqlConnection connection = new SqlConnection( "Data Source=localhost;Initial Catalog=Northwind;Integrated Security=True")) using (SqlCommand command = new SqlCommand( @"UPDATE [dbo].[Products] SET [UnitPrice] = @UnitPrice WHERE [ProductID] = @ProductID", connection)) { command.Parameters.Add("@ProductID", SqlDbType.Int).Value = id; command.Parameters.Add("@UnitPrice", SqlDbType.Money).Value = unitPrice; connection.Open(); command.Transaction = connection.BeginTransaction(); command.ExecuteNonQuery(); // UPDATE... command.Transaction.Commit(); } } The above imperative code specifies the “how to do” details with better performance. For the same reason, some articles from the Internet insist that, when updating data via LINQ to SQL, the above declarative code should be replaced by:private static void UpdateProductUnitPrice(int id, decimal unitPrice) { using (NorthwindDataContext database = new NorthwindDataContext()) { database.ExecuteCommand( "UPDATE [dbo].[Products] SET [UnitPrice] = {0} WHERE [ProductID] = {1}", unitPrice, id); } } Or just create a stored procedure:CREATE PROCEDURE [dbo].[UpdateProductUnitPrice] ( @ProductID INT, @UnitPrice MONEY ) AS BEGIN BEGIN TRANSACTION UPDATE [dbo].[Products] SET [UnitPrice] = @UnitPrice WHERE [ProductID] = @ProductID COMMIT TRANSACTION END and map it as a method of NorthwindDataContext (explained in this post):private static void UpdateProductUnitPrice(int id, decimal unitPrice) { using (NorthwindDataContext database = new NorthwindDataContext()) { database.UpdateProductUnitPrice(id, unitPrice); } } As a normal trade-off for O/R mapping, a decision has to be made between performance overhead and programming productivity according to the case. From a developer's perspective, if O/R mapping is chosen, I consistently choose the declarative LINQ code, unless this kind of overhead is unacceptable. Data retrieving overhead After talking about the O/R mapping specific issue, now look into the LINQ to SQL specific issues, for example, performance in the data retrieving process. The previous post has explained that the SQL translating and executing is complex. Actually, the LINQ to SQL pipeline is similar to the compiler pipeline.
    It consists of about 15 steps to translate a C# expression tree into a SQL statement, which can be categorized as: Convert: Invoke SqlProvider.BuildQuery() to convert the tree of Expression nodes into a tree of SqlNode nodes; Bind: Use the visitor pattern to figure out the meanings of names according to the mapping info, like a property for a column, etc.; Flatten: Figure out the hierarchy of the query; Rewrite: for SQL Server 2000, if needed; Reduce: Remove the unnecessary information from the tree; Format: Generate the SQL statement string; Parameterize: Figure out the parameters, for example, a reference to a local variable should be a parameter in SQL; Materialize: Execute the reader and convert the results back into typed objects. So for each data retrieval, even one which looks simple: private static Product[] RetrieveProducts(int productId) { using (NorthwindDataContext database = new NorthwindDataContext()) { return database.Products.Where(product => product.ProductID == productId) .ToArray(); } } LINQ to SQL goes through the above steps to translate and execute the query. Fortunately, there is a built-in way to cache the translated query. Compiled query When such a LINQ to SQL query is executed repeatedly, CompiledQuery can be used to translate the query once and execute it multiple times:internal static class CompiledQueries { private static readonly Func<NorthwindDataContext, int, Product[]> _retrieveProducts = CompiledQuery.Compile((NorthwindDataContext database, int productId) => database.Products.Where(product => product.ProductID == productId).ToArray()); internal static Product[] RetrieveProducts( this NorthwindDataContext database, int productId) { return _retrieveProducts(database, productId); } } The new version of RetrieveProducts() gets better performance, because only the first time _retrieveProducts is invoked does it internally invoke SqlProvider.Compile() to translate the query expression. It also uses a lock to make sure translation happens only once in multi-threading scenarios. Static SQL / stored procedures without translating Another way to avoid the translating overhead is to use static SQL or stored procedures, just as in the above examples. Because this is a functional programming series, this article does not dive into them. For the details, Scott Guthrie already has some excellent articles: LINQ to SQL (Part 6: Retrieving Data Using Stored Procedures) LINQ to SQL (Part 7: Updating our Database using Stored Procedures) LINQ to SQL (Part 8: Executing Custom SQL Expressions) Data changing overhead Looking into the data updating process, it also needs a lot of work: Begin a transaction; Process the changes (ChangeProcessor): walk through the objects to identify the changes and determine the order of the changes; Execute the changes: LINQ queries may be needed to execute the changes (like the first example in this article, where an object needs to be retrieved before being changed, so the whole data retrieving process above is gone through); If there is user customization, it will be executed (for example, a table's INSERT / UPDATE / DELETE can be customized in the O/R designer). It is important to keep these overheads in mind.
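
    As a rough sketch of the customization point mentioned above (assuming the designer-generated NorthwindDataContext exposes the usual insert/update/delete partial methods for Product), an update override could route the change through the static SQL shown earlier instead of the generated dynamic UPDATE:

        // Sketch only: implementing the designer-generated partial method replaces the
        // dynamic UPDATE that LINQ to SQL would otherwise build. Note that this bypasses
        // the generated optimistic concurrency check.
        public partial class NorthwindDataContext
        {
            partial void UpdateProduct(Product instance)
            {
                this.ExecuteCommand(
                    "UPDATE [dbo].[Products] SET [UnitPrice] = {0} WHERE [ProductID] = {1}",
                    instance.UnitPrice, instance.ProductID);
            }
        }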
    Bulk deleting / updating Another thing to be aware of is bulk deleting:private static void DeleteProducts(int categoryId) { using (NorthwindDataContext database = new NorthwindDataContext()) { database.Products.DeleteAllOnSubmit( database.Products.Where(product => product.CategoryID == categoryId)); database.SubmitChanges(); } } The expected SQL should be something like:BEGIN TRANSACTION exec sp_executesql N'DELETE FROM [dbo].[Products] AS [t0] WHERE [t0].[CategoryID] = @p0',N'@p0 int',@p0=9 COMMIT TRANSACTION However, as mentioned before, the actual SQL retrieves the entities and then deletes them one by one:-- Retrieves the entities to be deleted: exec sp_executesql N'SELECT [t0].[ProductID], [t0].[ProductName], [t0].[SupplierID], [t0].[CategoryID], [t0].[QuantityPerUnit], [t0].[UnitPrice], [t0].[UnitsInStock], [t0].[UnitsOnOrder], [t0].[ReorderLevel], [t0].[Discontinued] FROM [dbo].[Products] AS [t0] WHERE [t0].[CategoryID] = @p0',N'@p0 int',@p0=9 -- Deletes the retrieved entities one by one: BEGIN TRANSACTION exec sp_executesql N'DELETE FROM [dbo].[Products] WHERE ([ProductID] = @p0) AND ([ProductName] = @p1) AND ([SupplierID] IS NULL) AND ([CategoryID] = @p2) AND ([QuantityPerUnit] IS NULL) AND ([UnitPrice] = @p3) AND ([UnitsInStock] = @p4) AND ([UnitsOnOrder] = @p5) AND ([ReorderLevel] = @p6) AND (NOT ([Discontinued] = 1))',N'@p0 int,@p1 nvarchar(4000),@p2 int,@p3 money,@p4 smallint,@p5 smallint,@p6 smallint',@p0=78,@p1=N'Optimus Prime',@p2=9,@p3=$0.0000,@p4=0,@p5=0,@p6=0 exec sp_executesql N'DELETE FROM [dbo].[Products] WHERE ([ProductID] = @p0) AND ([ProductName] = @p1) AND ([SupplierID] IS NULL) AND ([CategoryID] = @p2) AND ([QuantityPerUnit] IS NULL) AND ([UnitPrice] = @p3) AND ([UnitsInStock] = @p4) AND ([UnitsOnOrder] = @p5) AND ([ReorderLevel] = @p6) AND (NOT ([Discontinued] = 1))',N'@p0 int,@p1 nvarchar(4000),@p2 int,@p3 money,@p4 smallint,@p5 smallint,@p6 smallint',@p0=79,@p1=N'Bumble Bee',@p2=9,@p3=$0.0000,@p4=0,@p5=0,@p6=0 -- ... COMMIT TRANSACTION And the same goes for bulk updating. This is really not efficient and needs to be kept in mind. There are already some solutions on the Internet, like this one. The idea is to wrap the above SELECT statement into an INNER JOIN:exec sp_executesql N'DELETE [dbo].[Products] FROM [dbo].[Products] AS [j0] INNER JOIN ( SELECT [t0].[ProductID], [t0].[ProductName], [t0].[SupplierID], [t0].[CategoryID], [t0].[QuantityPerUnit], [t0].[UnitPrice], [t0].[UnitsInStock], [t0].[UnitsOnOrder], [t0].[ReorderLevel], [t0].[Discontinued] FROM [dbo].[Products] AS [t0] WHERE [t0].[CategoryID] = @p0) AS [j1] ON ([j0].[ProductID] = [j1].[ProductID])', -- The Primary Key N'@p0 int',@p0=9 Query plan overhead The last thing is about the SQL Server query plan. Before .NET 4.0, LINQ to SQL has an issue (not sure if it is a bug). LINQ to SQL internally uses ADO.NET, but it does not set the SqlParameter.Size for a variable-length argument, like arguments of NVARCHAR type, etc. So for two queries with the same SQL but different argument lengths:using (NorthwindDataContext database = new NorthwindDataContext()) { database.Products.Where(product => product.ProductName == "A") .Select(product => product.ProductID).ToArray(); // The same SQL and argument type, different argument length.
    database.Products.Where(product => product.ProductName == "AA") .Select(product => product.ProductID).ToArray(); } Pay attention to the argument length in the translated SQL:exec sp_executesql N'SELECT [t0].[ProductID] FROM [dbo].[Products] AS [t0] WHERE [t0].[ProductName] = @p0',N'@p0 nvarchar(1)',@p0=N'A' exec sp_executesql N'SELECT [t0].[ProductID] FROM [dbo].[Products] AS [t0] WHERE [t0].[ProductName] = @p0',N'@p0 nvarchar(2)',@p0=N'AA' Here is the overhead: the first query's cached query plan is not reused by the second one:SELECT sys.syscacheobjects.cacheobjtype, sys.dm_exec_cached_plans.usecounts, sys.syscacheobjects.[sql] FROM sys.syscacheobjects INNER JOIN sys.dm_exec_cached_plans ON sys.syscacheobjects.bucketid = sys.dm_exec_cached_plans.bucketid; They actually use different query plans. Again, pay attention to the argument length in the [sql] column (@p0 nvarchar(2) / @p0 nvarchar(1)). Fortunately, in .NET 4.0 this is fixed:internal static class SqlTypeSystem { private abstract class ProviderBase : TypeSystemProvider { protected int? GetLargestDeclarableSize(SqlType declaredType) { SqlDbType sqlDbType = declaredType.SqlDbType; if (sqlDbType <= SqlDbType.Image) { switch (sqlDbType) { case SqlDbType.Binary: case SqlDbType.Image: return 8000; } return null; } if (sqlDbType == SqlDbType.NVarChar) { return 4000; // Max length for NVARCHAR. } if (sqlDbType != SqlDbType.VarChar) { return null; } return 8000; } } } In the above example, the translated SQL becomes:exec sp_executesql N'SELECT [t0].[ProductID] FROM [dbo].[Products] AS [t0] WHERE [t0].[ProductName] = @p0',N'@p0 nvarchar(4000)',@p0=N'A' exec sp_executesql N'SELECT [t0].[ProductID] FROM [dbo].[Products] AS [t0] WHERE [t0].[ProductName] = @p0',N'@p0 nvarchar(4000)',@p0=N'AA' So they reuse the same cached query plan: now the [usecounts] column is 2.
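
    For the bulk deleting scenario above, a commonly suggested alternative (a sketch only; it bypasses the change tracker and optimistic concurrency entirely) is to issue one set-based statement through DataContext.ExecuteCommand instead of DeleteAllOnSubmit():

        private static void DeleteProducts(int categoryId)
        {
            using (NorthwindDataContext database = new NorthwindDataContext())
            {
                // One set-based DELETE instead of one DELETE per retrieved entity.
                database.ExecuteCommand(
                    "DELETE FROM [dbo].[Products] WHERE [CategoryID] = {0}", categoryId);
            }
        }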

    Read the article

  • SQL SERVER – Import CSV into Database – Transferring File Content into a Database Table using CSVexpress

    - by pinaldave
    One of the most common data integration tasks I run into is a desire to move data from a file into a database table.  Generally the user is familiar with his data, the structure of the file, and the database table, but is unfamiliar with data integration tools and therefore views this task as something that is difficult.  What these users really need is a point and click approach that minimizes the learning curve for the data integration tool.  This is what CSVexpress (www.CSVexpress.com) is all about!  It is based on expressor Studio, a data integration tool I’ve been reviewing over the last several months. With CSVexpress, moving data between data sources can be as simple as providing the database connection details, describing the structure of the incoming and outgoing data and then connecting two pre-programmed operators.   There’s no need to learn the intricacies of the data integration tool or to write code.  Let’s look at an example. Suppose I have a comma separated value data file with data similar to the following, which is a listing of terminated employees that includes their hiring and termination date, department, job description, and final salary. EMP_ID,STRT_DATE,END_DATE,JOB_ID,DEPT_ID,SALARY 102,13-JAN-93,24-JUL-98 17:00,Programmer,60,"$85,000" 101,21-SEP-89,27-OCT-93 17:00,Account Representative,110,"$65,000" 103,28-OCT-93,15-MAR-97 17:00,Account Manager,110,"$75,000" 304,17-FEB-96,19-DEC-99 17:00,Marketing,20,"$45,000" 333,24-MAR-98,31-DEC-99 17:00,Data Entry Clerk,50,"$35,000" 100,17-SEP-87,17-JUN-93 17:00,Administrative Assistant,90,"$40,000" 334,24-MAR-98,31-DEC-98 17:00,Sales Representative,80,"$40,000" 400,01-JAN-99,31-DEC-99 17:00,Sales Manager,80,"$55,000" Notice the concise format used for the date values, the fact that the termination date includes both date and time information, and that the salary is clearly identified as money by the dollar sign and digit grouping.  In moving this data to a database table I want to express the dates using a format that includes the century since it’s obvious that this listing could include employees who left the company in both the 20th and 21st centuries, and I want the salary to be stored as a decimal value without the currency symbol and grouping character.  Most data integration tools would require coding within a transformation operation to effect these changes, but not expressor Studio.  Directives for these modifications are included in the description of the incoming data. Besides starting the expressor Studio tool and opening a project, the first step is to create connection artifacts, which describe to expressor where data is stored.  For this example, two connection artifacts are required: a file connection, which encapsulates the file system location of my file; and a database connection, which encapsulates the database connection information.  With expressor Studio, I use wizards to create these artifacts. First click New Connection > File Connection in the Home tab of expressor Studio’s ribbon bar, which starts the File Connection wizard.  In the first window, I enter the path to the directory that contains the input file.  Note that the file connection artifact only specifies the file system location, not the name of the file. Then I click Next and enter a meaningful name for this connection artifact; clicking Finish closes the wizard and saves the artifact. 
    To create the Database Connection artifact, I must know the location, or instance name, of the target database and have the credentials of an account with sufficient privileges to write to the target table.  To use expressor Studio’s features to the fullest, this account should also have the authority to create a table. I click New Connection > Database Connection in the Home tab of expressor Studio’s ribbon bar, which starts the Database Connection wizard.  expressor Studio includes high-performance drivers for many relational database management systems, so I can simply make a selection from the “Supplied database drivers” drop down control.  If my desired RDBMS isn’t listed, I can optionally use an existing ODBC DSN by selecting the “Existing DSN” radio button. In the following window, I enter the connection details.  With Microsoft SQL Server, I may choose to use Windows Authentication rather than account credentials.  After clicking Next, I enter a meaningful name for this connection artifact and clicking Finish closes the wizard and saves the artifact. Now I create a schema artifact, which describes the structure of the file data.  When expressor reads a file, all data fields are typed as strings.  In some use cases this may be exactly what is needed and there is no need to edit the schema artifact.  But in this example, editing the schema artifact will be used to specify how the data should be transformed; that is, reformat the dates to include century designations, change the employee and job IDs to integers, and convert the salary to a decimal value. Again a wizard is used to create the schema artifact.  I click New Schema > Delimited Schema in the Home tab of expressor Studio’s ribbon bar, which starts the Delimited Schema wizard.  In the first window, I click Get Data from File, which then displays a listing of the file connections in the project.  When I click on the file connection I previously created, a browse window opens to this file system location; I then select the file and click Open, which imports 10 lines from the file into the wizard. I now view the file’s content and confirm that the appropriate delimiter characters are selected in the “Field Delimiter” and “Record Delimiter” drop down controls; then I click Next. Since the input file includes a header row, I can easily indicate that fields in the file should be identified through the corresponding header value by clicking “Set All Names from Selected Row.”  Alternatively, I could enter a different identifier into the Field Details > Name text box.  I click Next and enter a meaningful name for this schema artifact; clicking Finish closes the wizard and saves the artifact. Now I open the schema artifact in the schema editor.  When I first view the schema’s content, I note that the types of all attributes in the Semantic Type (the right-hand panel) are strings and that the attribute names are the same as the field names in the data file.  To change an attribute’s name and type, I highlight the attribute and click Edit in the Attributes grouping on the Schema > Edit tab of the editor’s ribbon bar.  This opens the Edit Attribute window; I can change the attribute name and select the desired type from the “Data type” drop down control.  In this example, I change the name of each attribute to the name of the corresponding database table column (EmployeeID, StartingDate, TerminationDate, JobDescription, DepartmentID, and FinalSalary).  
Then for the EmployeeID and DepartmentID attributes, I select Integer as the data type, for the StartingDate and TerminationDate attributes, I select Datetime as the data type, and for the FinalSalary attribute, I select the Decimal type. But I can do much more in the schema editor.  For the datetime attributes, I can set a constraint that ensures that the data adheres to some predetermined specifications; a starting date must be later than January 1, 1980 (the date on which the company began operations) and a termination date must be earlier than 11:59 PM on December 31, 1999.  I simply select the appropriate constraint and enter the value (1980-01-01 00:00 as the starting date and 1999-12-31 11:59 as the termination date). As a last step in setting up these datetime conversions, I edit the mapping, describing the format of each datetime type in the source file. I highlight the mapping line for the StartingDate attribute and click Edit Mapping in the Mappings grouping on the Schema > Edit tab of the editor’s ribbon bar.  This opens the Edit Mapping window in which I either enter, or select, a format that describes how the datetime values are represented in the file.  Note the use of Y01 as the syntax for the year.  This syntax is the indicator to expressor Studio to derive the century by setting any year later than 01 to the 20th century and any year before 01 to the 21st century.  As each datetime value is read from the file, the year values are transformed into century and year values. For the TerminationDate attribute, my format also indicates that the datetime value includes hours and minutes. And now to the Salary attribute. I open its mapping and in the Edit Mapping window select the Currency tab and the “Use currency” check box.  This indicates that the file data will include the dollar sign (or in Europe the Pound or Euro sign), which should be removed. And on the Grouping tab, I select the “Use grouping” checkbox and enter 3 into the “Group size” text box, a comma into the “Grouping character” text box, and a decimal point into the “Decimal separator” character text box. These entries allow the string to be properly converted into a decimal value. By making these entries into the schema that describes my input file, I’ve specified how I want the data transformed prior to writing to the database table and completely removed the requirement for coding within the data integration application itself. Assembling the data integration application is simple.  Onto the canvas I drag the Read File and Write Table operators, connecting the output of the Read File operator to the input of the Write Table operator. Next, I select the Read File operator and its Properties panel opens on the right-hand side of expressor Studio.  For each property, I can select an appropriate entry from the corresponding drop down control.  Clicking on the button to the right of the “File name” text box opens the file system location specified in the file connection artifact, allowing me to select the appropriate input file.  I indicate also that the first row in the file, the header row, should be skipped, and that any record that fails one of the datetime constraints should be skipped. I then select the Write Table operator and in its Properties panel specify the database connection, normal for the “Mode,” and the “Truncate” and “Create Missing Table” options.  
If my target table does not yet exist, expressor will create the table using the information encapsulated in the schema artifact assigned to the operator. The last task needed to complete the application is to create the schema artifact used by the Write Table operator.  This is extremely easy as another wizard is capable of using the schema artifact assigned to the Read Table operator to create a schema artifact for the Write Table operator.  In the Write Table Properties panel, I click the drop down control to the right of the “Schema” property and select “New Table Schema from Upstream Output…” from the drop down menu. The wizard first displays the table description and in its second screen asks me to select the database connection artifact that specifies the RDBMS in which the target table will exist.  The wizard then connects to the RDBMS and retrieves a list of database schemas from which I make a selection.  The fourth screen gives me the opportunity to fine tune the table’s description.  In this example, I set the width of the JobDescription column to a maximum of 40 characters and select money as the type of the LastSalary column.  I also provide the name for the table. This completes development of the application.  The entire application was created through the use of wizards and the required data transformations specified through simple constraints and specifications rather than through coding.  To develop this application, I only needed a basic understanding of expressor Studio, a level of expertise that can be gained by working through a few introductory tutorials.  expressor Studio is as close to a point and click data integration tool as one could want and I urge you to try this product if you have a need to move data between files or from files to database tables. Check out CSVexpress in more detail.  It offers a few basic video tutorials and a preview of expressor Studio 3.5, which will support the reading and writing of data into Salesforce.com. Reference: Pinal Dave (http://blog.SQLAuthority.com) Filed under: Pinal Dave, PostADay, SQL, SQL Authority, SQL Documentation, SQL Download, SQL Query, SQL Server, SQL Tips and Tricks, SQLServer, T SQL, Technology

    Read the article

  • Auto-mount in fstab no longer working until manually running 'sudo mount -a'

    - by Brett Alton
    I have 3 SMB shared drives I need to connect to for work purposes. I had Ubuntu 10.10 Maverick and had all my drives loaded into fstab to be auto-mounted. Everything worked fine for a while but just before I upgraded to 11.04 Natty, the fstab auto-mount stopped working. Unfortunately I don't know what change I made to my machine or what update was installed that made this occur. /etc/fstab {snip} //192.168.7.3/apache_proj/ /home/brett/Desktop/apache smbfs guest,rw,iocharset=utf8,uid=1000,gid=1000 0 0 //192.168.7.3/apache_54321/ /home/brett/Desktop/54321 smbfs guest,rw,iocharset=utf8,uid=1000,gid=1000 0 0 //freenas.local/shared/ /home/brett/Desktop/shared smbfs guest,rw,iocharset=utf8,uid=1000,gid=1000 0 0 //lamp/www/ /home/brett/Desktop/lamp smbfs username={snip},password={snip},rw,iocharset=utf8,uid=1000,gid=1000 0 0 When the machine boots, I run this command to get them to mount: $ sudo umount /home/brett/Desktop/54321 /home/brett/Desktop/shared /home/brett/Desktop/apache; sudo mount -a [sudo] password for brett: umount: /home/brett/Desktop/54321: not mounted umount: /home/brett/Desktop/shared: not mounted umount: /home/brett/Desktop/apache: not mounted Warning: mapping 'guest' to 'guest,sec=none' Warning: mapping 'guest' to 'guest,sec=none' Warning: mapping 'guest' to 'guest,sec=none' mount error: could not resolve address for lamp: No address associated with hostname (I run that umount as a just-in-case). I looked through dmesg and some error logs and couldn't see why fstab was failing on my mounts. I see that my 'lamp' directive is failing, but that's because the machine is currently down.
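
    For reference, one direction commonly suggested for this kind of problem (not a confirmed fix for this particular machine): mount the shares with the newer cifs type rather than smbfs, and mark them as network filesystems with _netdev so mounting is deferred until the network is up, e.g.:

        //192.168.7.3/apache_proj/ /home/brett/Desktop/apache cifs guest,rw,iocharset=utf8,uid=1000,gid=1000,_netdev 0 0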

    Read the article

  • Oracle Warehouse Builder and Enterprise ETL

    - by Fekete Zoltán
    The datasheet is fresh and crispy!!! Enjoy: ODI Enterprise Edition: Warehouse Builder Enterprise ETL white paper. The good news: the core functionality of Oracle Warehouse Builder can be used free of charge with every purchased Oracle Database. So what exactly is the OWB core functionality, and what do the options give us? The Enterprise ETL functionality is available for OWB as part of the Oracle Data Integrator Enterprise Edition license. The features that are only available with the ODI EE license (the former OWB Enterprise ETL option is also part of it) can also be seen there, at the bottom of the text. They are: - Transportable ETL modules, multiple configurations, and pluggable mappings - Operators for pluggable mapping, pluggable mapping input signature, pluggable mapping output signature - Design Environment Support for RAC - Metadata change propagation - Schedulable Mappings and Process Flows - Slowly Changing Dimensions (SCD) Type 2 and 3 - XML Files as a target - Target load ordering - Seeded spatial and streams transformations - Process Flow Activity templates - Process Flow variables support - Process Flow looping activities such as For Loop and While Loop - Process Flow Route and Notification activities - Metadata lineage and impact analysis - Metadata Extensibility - Deployment to Discoverer EUL - Deployment to Oracle BI Beans catalog So if I want to use OWB in a more serious environment, deploy to multiple environments, and so on, then the ODI EE license is also needed. ODI Enterprise Edition: Warehouse Builder Enterprise ETL white paper.

    Read the article

  • How to highlight non-rectangular hotspots?

    - by HuseyinUslu
    So my question is highly related to Creating non-rectangular hotspots and detecting clicks. Yet again, I have irregular hot-spots (think the game Risk). So basically, we can detect clicks on these hot-spots easily using color key mapping as discussed in the above question, which I don't have any problems implementing (it is also covered here in detail). The problem is about highlighting these irregular hotspots. So let me explain the question a bit more - the above color key mapping guide uses this as a world map: Then the author color-maps the imaginary countries: Now we can detect the country the pointer is over. In the same article the author mentions outlining countries on mouse-over. Though to get the effect, he creates unique border assets for each country - like: For the game I'm working on I'm using the same color-key mapping idea to detect hot-spots, but I didn't like the way of highlighting hot-spots. Coloring all the hot-spots is already a time-consuming job for me - as I have 25+ hot-spots for each map. Further, the need to have 25+ unique border/highlight assets, one per hot-spot, doesn't sound right. Anyone have a better idea/suggestion on highlighting these hot-spots?
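
    One direction worth sketching (purely illustrative, assuming an XNA-style Texture2D holds the color-key map; every name below is made up): build the highlight overlay at runtime from the same color-key data, so no per-hot-spot border asset is needed.

        // Fetch the color-key pixels once (not every frame).
        Color[] keyPixels = new Color[mapWidth * mapHeight];
        colorKeyTexture.GetData(keyPixels);

        // Key color of the hot-spot currently under the cursor.
        Color hovered = keyPixels[mouseY * mapWidth + mouseX];

        // Tint only the pixels belonging to that hot-spot; leave the rest transparent.
        Color[] overlayPixels = new Color[keyPixels.Length];
        for (int i = 0; i < keyPixels.Length; i++)
        {
            overlayPixels[i] = (keyPixels[i] == hovered)
                ? Color.Yellow * 0.5f          // semi-transparent highlight
                : Color.Transparent;
        }

        Texture2D overlay = new Texture2D(graphicsDevice, mapWidth, mapHeight);
        overlay.SetData(overlayPixels);
        // Cache one overlay per hot-spot and draw it over the map in the normal SpriteBatch pass.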

    Read the article

  • The Talent Behind Customer Experience

    - by Christina McKeon
    Earlier, I wrote about Powerful Data Lessons from the Presidential Election. A key component of the Obama team’s data analysis deserves its own discussion—the people. Recruiters are probably scrambling to find out who those Obama data crunchers are and lure them into corporations. For the Obama team, these data scientists became a secret ingredient that the competition didn’t have. This team of analysts knew how to hear the signal and ignore the noise, how to segment and target its base, and how to model scenarios and revise plans based on what the data told them. The talent was the difference. As you work to transform your organization to be more customer-centric, don’t forget that talent is a critical element. Journey mapping is a good start to understanding how your talent impacts your customer experiences. Part of journey mapping includes documenting the “on-stage” and “back-stage” systems and touchpoints. When mapping this part of your customers’ journey, include the roles and talent behind the employee actions—both customer facing and further upstream from that customer touchpoint. Know what each of these roles does, how well you are retaining people in these areas, and your plans to fill these open positions in the future. To use data scientists as an example, this job will be in high demand over the next 10 years. The workforce is shrinking, and higher education institutions may not be able to turn out trained data scientists as fast as you need them. You don’t want to be caught with a skills deficit, so consider how you can best plan for the future talent you will need. Have your existing employees make their career aspirations known to you now. You may find you already have employees willing to take on roles that drive better customer experiences. Then develop customer experience talent from within your organization through targeted learning programs. If you know that you will need to go outside the organization, build those candidate relationships now. Nurture the candidates you want to hire and partner with universities, colleges, and trade associations so you can increase the number of qualified candidates in your talent pool.

    Read the article

  • AutoMapper and SecurityException in IIS

    - by Felipe
    Hi everybody... I'm developing an ASP.NET MVC application with NHibernate and I would not like to expose my NHibernate object mappings, so I created a DTO for each entity and I'm trying to convert my Domain objects to DTOs and send them to the View. So I have in my solution: a class library with my Domain (for NHibernate) and DTO objects; a class library to build a SessionFactory and Factories; my ASP.NET MVC 2 application. So, I downloaded AutoMapper to transform Domain objects into DTOs and added the code to do this in Application_Start of global.asax. When I run it in Visual Studio (by pressing F5) it works fine and my DTOs reach the view, but when I publish this to IIS, I get a security exception =( on the first line of the conversion: Mapper.CreateMap(); <--- this line throws the exception System.Security.SecurityException: Failed request for the permission of type 'System.Security.Permissions.ReflectionPermission, mscorlib, Version=2.0.0.0, Culture=neutral, PublicKeyToken=b77a5c561934e089'. What can I do to resolve this so it works in IIS? When I publish it on the web server, I get the error too :( Thanks Cheers
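
    This ReflectionPermission failure is typical of a site running under partial (medium) trust on the server while running under full trust inside Visual Studio. A first thing worth checking, if the hosting environment allows it (a suggestion, not a guaranteed fix), is whether raising the trust level in web.config makes the error go away:

        <system.web>
          <trust level="Full" />
        </system.web>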

    Read the article

  • MultiActionController no longer receiving requests?

    - by Stefan Kendall
    I was attempting to make changes to my controller, and all of a sudden, I no longer seem to receive any requests (404 when attempting to hit the servlet mapped URLs). I'm sure I've broken my web.xml or app-servlet.xml, but I just don't see where. I can access index.jsp from tomcat (http://IP/app/index.jsp), but I can't get my servlet mapping to work correctly. Help? web.xml: <!DOCTYPE web-app PUBLIC "-//Sun Microsystems, Inc.//DTD Web Application 2.3//EN" "http://java.sun.com/dtd/web-app_2_3.dtd"> <web-app version = "2.4" xmlns="http://java.sun.com/xml/ns/j2ee" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xsi:schemaLocation="http://java.sun.com/xml/ns/j2ee http://java.sun.com/xml/ns/j2ee/web-app_2_4.xsd"> <listener> <listener-class> org.springframework.web.context.ContextLoaderListener </listener-class> </listener> <servlet> <servlet-name>app</servlet-name> <servlet-class>org.springframework.web.servlet.DispatcherServlet</servlet-class> </servlet> <servet-mapping> <servlet-name>app</servlet-name> <url-pattern>/myRequest</url-pattern> </servet-mapping> app-servlet.xml: <!DOCTYPE web-app PUBLIC "-//Sun Microsystems, Inc.//DTD Web Application 2.3//EN" "http://java.sun.com/dtd/web-app_2_3.dtd"> <web-app version = "2.4" xmlns="http://java.sun.com/xml/ns/j2ee" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xsi:schemaLocation="http://java.sun.com/xml/ns/j2ee http://java.sun.com/xml/ns/j2ee/web-app_2_4.xsd"> <listener> <listener-class> org.springframework.web.context.ContextLoaderListener </listener-class> </listener> <servlet> <servlet-name>app</servlet-name> <servlet-class>org.springframework.web.servlet.DispatcherServlet</servlet-class> </servlet> <servet-mapping> <servlet-name>app</servlet-name> <url-pattern>/myRequest</url-pattern> </servet-mapping> </web-app>
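
    Worth noting from the files as posted: the mapping element is spelled <servet-mapping> rather than <servlet-mapping>, and the content shown under app-servlet.xml appears to be another copy of web.xml rather than a Spring bean-definition file (possibly just a paste error in the question). A corrected web.xml fragment would look something like this (the Spring controller/handler configuration still has to live in app-servlet.xml):

        <servlet>
            <servlet-name>app</servlet-name>
            <servlet-class>org.springframework.web.servlet.DispatcherServlet</servlet-class>
            <load-on-startup>1</load-on-startup>
        </servlet>

        <servlet-mapping>
            <servlet-name>app</servlet-name>
            <url-pattern>/myRequest</url-pattern>
        </servlet-mapping>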

    Read the article

  • Hibernate updating records and implementing listeners : getting only required attribute values for event.getOldState()

    - by Narendra
    Hi All, I am using Hibernate 3 as my persistence framework. Below is the sample hbm file I am using. <?xml version="1.0"?> <!DOCTYPE hibernate-mapping PUBLIC "-//Hibernate/Hibernate Mapping DTD 3.0//EN" "http://hibernate.sourceforge.net/hibernate-mapping-3.0.dtd"> <hibernate-mapping> <class name="com.test.User" table="user"> <meta attribute="implements">com.test.dao.interfaces.IEntity</meta> <id name="key" type="long" column="user_key"> <generator class="increment" /> </id> <property name="userName" column="user_name" not-null="true" type="string" /> <property name="password" column="password" not-null="true" type="string" /> <property name="firstName" column="first_name" not-null="true" type="string" /> <property name="lastName" column="last_name" not-null="true" type="string" /> <property name="createdDate" column="created_date" not-null="true" type="timestamp" insert="false" update="false" /> <property name="createdBy" column="created_by" not-null="true" type="string" update="false" /> </class> </hibernate-mapping> I have added a post-update listener. What it does is: if any updates are performed on User, it will be invoked and the changes will be inserted into an audit table. Below is the sample implementation for the post-update event. public void onPostUpdate(PostUpdateEvent event) { LogHelper.info(logger, "Begin - onPostUpdate " + event.getEntity().getClass().getSimpleName()); if (!this.checkForAudit(event.getEntity().getClass().getSimpleName())) { // check do we need to audit it. } // Get Attribute Names String[] attrNames = event.getPersister().getEntityMetamodel() .getPropertyNames(); Object[] oldobjectValue = event.getOldState(); Object[] newObjectValue = event.getState(); this.auditDetailsEvent(attrNames, oldobjectValue, newObjectValue); LogHelper.info(logger, "End - onPostUpdate"); // return false; } Here is my requirement: event.getPersister().getEntityMetamodel() .getPropertyNames(); or event.getOldState(); or event.getState(); must return only the attribute names or values which I actually update or insert. Is there any way to control the return values of the above? Please help me in this regard. Thanks, Narendra
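
    One way to narrow the listener down to only the attributes that actually changed (a sketch against the Hibernate 3 event API, using the same PostUpdateEvent as above) is to ask the persister which properties are dirty and index the old/new state arrays with the result:

        // Sketch: keep only the properties whose values actually changed in this update.
        int[] dirtyIndexes = event.getPersister().findDirty(
                event.getState(),      // new values
                event.getOldState(),   // old values
                event.getEntity(),
                event.getSession());

        if (dirtyIndexes != null) {
            String[] allNames = event.getPersister().getPropertyNames();
            for (int index : dirtyIndexes) {
                String name = allNames[index];
                Object oldValue = event.getOldState()[index];
                Object newValue = event.getState()[index];
                // write name / oldValue / newValue to the audit table here
            }
        }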

    Read the article

  • Migrating VB6 to HTML5 is not a fiction - Customer success story

    - by Webgui
    All of you VB developers in the present or past would probably find it hard to believe that old VB code can be migrated and modernized into the latest .NET-based HTML5 without having to rewrite the application. But we have been working on such tools for the past couple of years and already have several real world applications that were fully 'transposed' from VB6. The solution is called Instant CloudMove and its main tool is called the TranspositionStudio. It is a unique solution that relies on the concept of transposition. Transposition comes from mathematics and music and refers to exchanging elements while everything else remains the same, or moving an element as is from one environment to another. This means that we take the source code and put it into a modern technological environment with relatively few adjustments. The concept is based on a set of Mapping Expressions, which are basically links between an element in the source environment and one in the target environment that has the same functionality. About 95% of the code is usually mapped out-of-the-box and the rest is handled with easy-to-use mapping tools designed for Visual Studio developers, providing them with a familiar environment and concepts for completing the mapping and allowing them to extend and customize existing mapping expressions. The solution is also based on a circular workflow that enables developers to make any changes as required until the result is satisfying. As opposed to existing migration solutions that offer automation but are usually a “black box” to the user, the transposition concept enables full visibility, flexibility and control over the code and process at all times, allowing you to also add or change functionality or upgrade the UI within the process and tools. This is exactly the case with our customer’s aging VB6 PMS (Property Management System), which needed a technological update as well as a design refresh. The decision was to move the VB6 application, which had about 1 million lines of code, into the latest web technology. Since the application was initially written 13 years ago and has had many upgrades since, the code must be very patchy and include unused sections. As a result, the company Mihshuv Group considered rewriting the entire application in Java since it already had the knowledge. A rewrite would allow starting with a clean slate and designing the functionality, database architecture and UI without any constraints. On the other hand, a rewrite entails long and detailed specification work as well as thorough QA, and this translates into a long project with high risk and costs. So the company looked for a migration solution as an alternative; the research led to Gizmox, and after examining the technology it was decided to perform a hybrid project which would include an automatic transposition of the core of the VB6 application (200,000 lines of code), while the UI redesign, new functionality, removal of unused code and rewriting of about 140 reports with Crystal Reports would be done manually using Visual WebGui development tools. The migration part of the project was completed in 65 days by 3 developers from Mihshuv Group guided by Gizmox migration experts, while the rewrite and UI upgrade tasks took about the same. So in a period of only a few months Mihshuv Group generated an up-to-date product, written in the latest web technology with a modern, friendly UI and improved functionality. 
    Guest selection screen of the original VB6 PMS; guest selection screen on the new web-based PMS. Compared to the initial plan to rewrite the entire application in Java, the hybrid migration/rewrite approach taken by Mihshuv Group using Gizmox technology proved to be a great decision. In terms of time and cost there were substantial savings; from a project that was priced at least a year (without taking into account the huge risk and uncertainty) it became a project of only a few months. More about this and other customer stories can be found here

    Read the article

  • C# Domain-Driven Design Sample Released

    - by Artur Trosin
    In this post I want to announce that the NDDD Sample application(s) is released and share the work with you. You can access it here: http://code.google.com/p/ndddsample. From a functionality perspective, NDDDSample matches DDDSample 1.1.0, which is based on Java and on a joint effort by Eric Evans' company Domain Language and the Swedish software consulting company Citerus. But because NDDDSample is based on .NET technologies, those two implementations could not be matched directly. However, concepts, practices, values, patterns, especially DDD, are cross-language and cross-platform :). Implementing the .NET version of the application was an interesting journey because now, as a .NET developer, I better understand the differences, positive and negative, between these two platforms. Even though there are differences, they can be overcome; in many cases it was not so hard to match Java libs/frameworks with .NET during the implementation. Here is the technology stack: 1. .NET 3.5 - framework 2. VS.NET 2008 - IDE 3. ASP.NET MVC 2.0 - for administration and tracking UI 4. WCF - communication mechanism 5. NHibernate - ORM 6. Rhino Commons - NHibernate session management, base classes for in-memory unit tests 7. SQLite - database 8. Windsor - inversion of control container 9. Windsor WCF facility - for better integration with NHibernate 10. MvcContrib - and in particular its Castle WindsorControllerFactory in order to enable IoC for controllers 11. WPF - for incident logging application 12. Moq - mocking lib used for unit tests 13. NUnit - unit testing framework 14. Log4net - logging framework 15. Cloud based on Azure SDK These are not the latest technologies, tools and libs at the moment, but if anyone thinks it would be useful to migrate the sample to the latest technologies and versions, please comment. The cloud version of the application is based on the Azure emulated environment provided by the SDK, so it hasn't been tested in a ‘real' Azure scenario (we just do not have access to it). Thanks to the participants: Eugen Gorgan, who was involved directly in development; Ruslan Rusu and Victor Lungu, who spent their free time discussing .NET-specific decisions; and Eugen Navitaniuc, who helped with Java-related questions. Also, a big thank you to Cornel Cretu, who designed a nice logo and helped with some browser incompatibility issues. Any review and feedback are welcome! Thank you, Artur Trosin

    Read the article

  • First Post

    - by Allan Ritchie
    It has been a while since I've had a blog, but I'm back into open source dev and decided to get back into things.  I had a blog a few years back when NHibernate was an infant (0.8 or something) and I was working with the Wilson ORMapper (www.ormapper.net) at the time.  Anyhow, I'm still working with NHibernate (particularly the exciting v3 alpha 1) and the Castle framework. I've also written a .NET ExtDirect stack, around which I'll be writing a few articles due to its flexibility.  I decided to write yet another communication stack because all the implementations I found on the Ext forums were lacking any sort of flexibility.  So stay tuned... I'll be presenting a bunch of the extension points.

    Read the article

  • Castle Windsor Dependency Injection with MVC4

    - by Renso
    Problem: Installed MVC4 on my local machine and ran an MVC3 app, and got an error where Castle Windsor was unable to resolve any controllers' constructor injections. It failed with "No component for supporting the service....". As soon as I uninstall the MVC4 beta, the problem vanishes like magic?! I also tried to upgrade to NHibernate 3 and Castle/Castle Windsor version 3 (from version 2), but since I use Rhino Commons, that is not possible, as the Rhino Commons project looks like it is no longer supported and requests from two years ago to upgrade it to work with NHibernate version 3 have gone unanswered. The problem is that Rhino Commons (the older version) references a method in Castle version 2 that has been deprecated in version 3: "CreateContainer("windsor.boo")' threw an exception of type 'System.MissingMethodException." Hope this helps anyone else who runs into this issue. Btw, I used the NuGet package manager to install the correct packages, so I know that is not the issue.

    Read the article

  • Delphi Prism and LINQ to SQL / Entity Framework

    - by Vegar
    I have found many posts and examples of using LINQ syntax in Delphi Prism (Oxygene), but I have never found anything on LINQ to SQL or Entity Framework. Is it possible to use LINQ to SQL or Entity Framework together with Prism? Where can I find such an example? Update: Olaf gives an answer on his blog. The question now is whether any visual tools and code generation are provided, or if everything must be done by hand... Second update: Olaf has answered the tool/code generation question in a comment on his site: The class designer is there, but there is no Pascal code gen. According to marc hoffman that is currently not on their list. For now you have to live with manual mapping. I guess, if you had Visual Studio (not just the VS shell), that you could add a C# library project to your solution, reference that from your Prism project. Then create the Table-Class mapping in the C# project using the visual designer. Maybe somewhat ugly, but possibly the key to get the Designer + CodeGen integrated into Prism. Who cares what language is used for the mapping. I will say this is 1-0 to C# vs Prism. If I did not care which language is used for the mapping - why should I care about which language is used for the rest?

    Read the article

  • UrlRewriter.net Expression Examples

    - by Tarik
    Hello, I need some web.config examples for each of the expression types below:
    $number - the last substring matched by group number number.
    $<name> - the last substring matched by the group named name, declared as (?<name>).
    ${property} - the value of the property when the expression is evaluated.
    ${transform(value)} - the result of calling the transform on the specified value.
    ${map:value} - the result of mapping the specified value using the map; replaced with an empty string if no mapping exists.
    ${map:value|default} - the result of mapping the specified value using the map; replaced with the default if no mapping exists.
    Sample:
    <rewriter>
      <if url="/tags/(.+)" rewrite="/tagcloud.aspx?tag=$1" />
      <!-- same thing as <rewrite url="/tags/(.+)" to="/tagcloud.aspx?tag=$1" /> -->
    </rewriter>
    Thank you very much!
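    As a point of reference, the $number and $<name> placeholders described above follow the same semantics as .NET Regex group substitutions ($1 for numbered groups, ${name} for named groups in Regex.Replace). This standalone sketch, with hypothetical URLs and group names, shows that underlying behaviour; it is not UrlRewriter.net configuration itself:

        using System;
        using System.Text.RegularExpressions;

        class GroupSubstitutionDemo
        {
            static void Main()
            {
                string url = "/tags/nhibernate";

                // $1 - the last substring matched by group number 1.
                string byNumber = Regex.Replace(url, "/tags/(.+)", "/tagcloud.aspx?tag=$1");

                // ${tag} - the last substring matched by the group named "tag",
                // declared in the pattern as (?<tag>...).
                string byName = Regex.Replace(url, "/tags/(?<tag>.+)", "/tagcloud.aspx?tag=${tag}");

                Console.WriteLine(byNumber); // /tagcloud.aspx?tag=nhibernate
                Console.WriteLine(byName);   // /tagcloud.aspx?tag=nhibernate
            }
        }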

    Read the article

  • web.xml - Java Servlet Filters and WebSphere - URL Pattern issues

    - by Ed
    Hi, so we are running a web application that has been tested on Tomcat, Glassfish, WebLogic and WebSphere. All run correctly except WebSphere. The issue is that filters are not processed for files under a certain directory. For example, I have a filter that checks the user's language from browser cookies and another that gets the user's username; in the web.xml they are configured like so:
    <!-- ****************************** -->
    <!-- * Security context filtering * -->
    <!-- ****************************** -->
    <filter>
      <filter-name>SetSecurityContextFilter</filter-name>
      <filter-class>com.test.security.SecurityContextServletFilter</filter-class>
    </filter>
    <!-- ****************************** -->
    <!-- ** Locale context filtering ** -->
    <!-- ****************************** -->
    <filter>
      <filter-name>SetLocaleFilter</filter-name>
      <filter-class>com.test.locale.LocaleServletFilter</filter-class>
    </filter>
    <filter-mapping>
      <filter-name>SetSecurityContextFilter</filter-name>
      <url-pattern>/*</url-pattern>
    </filter-mapping>
    <filter-mapping>
      <filter-name>SetLocaleFilter</filter-name>
      <url-pattern>/*</url-pattern>
    </filter-mapping>
    Both filters set a static ThreadLocal variable which can be accessed from a static getter, but when the same file 'test.jsp' invokes the getters, under 'contextroot/js' they return the default values (as if unset), while under 'contextroot/pages' they are correct. Any ideas? Thanks in advance.

    Read the article

  • EF4 and multiple abstract levels

    - by Cedric
    I need to use inheritance with EF4 and the TPH model created from the DB. I created a new project to test some simple classes. Here is my class model: Here is my table in SQL SERVER 2008:
    VEHICLE
      ID: int PK
      Owner: varchar(50)
      Consumption: float
      FirstCirculationDate: date
      Type: varchar(50)
      Discriminator: varchar(10)
    I added a condition in my EDMX on the Discriminator field to differentiate the Scooter, Car, Motorbike and Bike entities. MotorizedVehicle and Vehicle are abstract. But when I compile, this error appears: Error 3032: Problem in mapping fragments starting at lines 78, 85: EntityTypes EF4InheritanceModel.Scooter, EF4InheritanceModel.Motorbike, EF4InheritanceModel.Car, EF4InheritanceModel.Bike are being mapped to the same rows in table Vehicle. Mapping conditions can be used to distinguish the rows that these types are mapped to. Edit, to Ladislav: I tried it and the error changed to the following, for all of my entities: Error 3034: Problem in mapping fragments starting at lines 72, 86: An entity is mapped to different rows within the same table. Ensure these two mapping fragments do not map two groups of entities with overlapping keys to two distinct groups of rows. To Henk (with Ladislav's suggestion): here are all the mapping details: What's wrong? Thanks
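    For comparison only (the question above uses the EDMX designer, not code), here is a hypothetical code-first sketch using the EF 4.1-style fluent API that illustrates the idea behind Error 3032: in a TPH mapping, every concrete type sharing the Vehicle table needs its own discriminator condition. The class and column names mirror the question; none of this code comes from the original project:

        using System;
        using System.Data.Entity;

        public abstract class Vehicle
        {
            public int Id { get; set; }
            public string Owner { get; set; }
            public DateTime FirstCirculationDate { get; set; }
        }

        public abstract class MotorizedVehicle : Vehicle
        {
            public double Consumption { get; set; }
        }

        public class Car : MotorizedVehicle { }
        public class Motorbike : MotorizedVehicle { }
        public class Scooter : MotorizedVehicle { }
        public class Bike : Vehicle { }

        public class VehicleContext : DbContext
        {
            public DbSet<Vehicle> Vehicles { get; set; }

            protected override void OnModelCreating(DbModelBuilder modelBuilder)
            {
                // All concrete types share the Vehicle table; the Discriminator column
                // decides which type a given row materializes as, so each type gets
                // its own mapping condition.
                modelBuilder.Entity<Vehicle>()
                    .Map<Car>(m => m.Requires("Discriminator").HasValue("Car"))
                    .Map<Motorbike>(m => m.Requires("Discriminator").HasValue("Motorbike"))
                    .Map<Scooter>(m => m.Requires("Discriminator").HasValue("Scooter"))
                    .Map<Bike>(m => m.Requires("Discriminator").HasValue("Bike"));
            }
        }

    In the EDMX designer the equivalent step is adding a distinct condition on the Discriminator column to the mapping fragment of each concrete entity, which is what the error message is asking for.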

    Read the article

  • Factory Method Pattern clarification

    - by nettguy
    My understanding of the Factory Method pattern is (correct me if I am wrong): "Factory Method allows the client to delegate the product creation (instance creation) to a subclass". There are two situations in which we can use the Factory Method pattern: (i) when the client is restricted in product (instance) creation; (ii) when there are multiple products available, but a decision must be made as to which product instance needs to be returned. If you want to create the Factory Method pattern you need to have: an abstract product, concrete products, and a factory method that returns the appropriate product. Example:

    public enum ORMChoice
    {
        L2SQL,
        EFM,
        LS,
        Sonic
    }

    // Abstract product
    public interface IProduct
    {
        void ProductTaken();
    }

    // Concrete product
    public class LinqtoSql : IProduct
    {
        public void ProductTaken()
        {
            Console.WriteLine("OR Mapping Taken:LinqtoSql");
        }
    }

    // Concrete product
    public class Subsonic : IProduct
    {
        public void ProductTaken()
        {
            Console.WriteLine("OR Mapping Taken:Subsonic");
        }
    }

    // Concrete product
    public class EntityFramework : IProduct
    {
        public void ProductTaken()
        {
            Console.WriteLine("OR Mapping Taken:EntityFramework");
        }
    }

    // Concrete product
    public class LightSpeed : IProduct
    {
        public void ProductTaken()
        {
            Console.WriteLine("OR Mapping Taken :LightSpeed");
        }
    }

    public class Creator
    {
        // Factory Method
        public IProduct ReturnORTool(ORMChoice choice)
        {
            switch (choice)
            {
                case ORMChoice.EFM: return new EntityFramework();
                case ORMChoice.L2SQL: return new LinqtoSql();
                case ORMChoice.LS: return new LightSpeed();
                case ORMChoice.Sonic: return new Subsonic();
                default: return null;
            }
        }
    }

    // Client
    void Button_Click()
    {
        Creator c = new Creator();
        IProduct p = c.ReturnORTool(ORMChoice.L2SQL);
        p.ProductTaken();
    }

    Is my understanding of Factory Method correct?

    Read the article

  • What's the best way to read a UDT from a database with Java?

    - by Lukas Eder
    I thought I knew everything about UDTs and JDBC until someone on SO pointed out some details of the Javadoc of java.sql.SQLInput and java.sql.SQLData to me. The essence of that hint was (from SQLInput): "An input stream that contains a stream of values representing an instance of an SQL structured type or an SQL distinct type. This interface, used only for custom mapping, is used by the driver behind the scenes, and a programmer never directly invokes SQLInput methods." This is quite the opposite of what I am used to doing (and which is also used and stable in production systems with the Oracle JDBC driver): implement SQLData and provide this implementation in a custom mapping to ResultSet.getObject(int index, Map mapping). The JDBC driver will then call back on my custom type using the SQLData.readSQL(SQLInput stream, String typeName) method. I implement this method and read each field from the SQLInput stream. In the end, getObject() will return a correctly initialised instance of my SQLData implementation holding all data from the UDT. To me, this seems like the perfect way to implement such a custom mapping. Good reasons for going this way: I can use the standard API instead of vendor-specific classes such as oracle.sql.STRUCT, etc., and I can generate source code from my UDTs, with appropriate getters/setters and other properties. My questions: What do you think about my approach of implementing SQLData? Is it viable, even if the Javadoc states otherwise? What other ways of reading UDTs in Java do you know of? E.g. what does Spring do? What does Hibernate do? What does JPA do? What do you do? Addendum: UDT support and integration with stored procedures is one of the major features of jOOQ. jOOQ aims at hiding the more complex "JDBC facts" from client code, without hiding the underlying database architecture. If you have similar questions to the above, jOOQ might provide an answer to you.

    Read the article

  • How to check for mip-map availability in OpenGL?

    - by Xavier Ho
    Recently I bumped into a problem where my OpenGL program would not render textures correctly on a 2-year-old Lenovo laptop with an nVidia Quadro 140 card. It runs OpenGL 2.1.2 and GLSL 1.20, but when I turned on mip-mapping, the whole screen was black, with no warnings or errors. This is my texture filter code:
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR_MIPMAP_LINEAR);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
    glTexParameteri(GL_TEXTURE_2D, GL_GENERATE_MIPMAP, GL_TRUE);
    After 40 minutes of fiddling around, I found out mip-mapping was the problem. Turning it off fixed it:
    // glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR_MIPMAP_LINEAR);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
    // glTexParameteri(GL_TEXTURE_2D, GL_GENERATE_MIPMAP, GL_TRUE);
    I get a lot of aliasing, but at least the program is visible and runs fine. Finally, two questions: What's the best or standard way to check if mip-mapping is available on a machine, aside from checking OpenGL versions? If mip-mapping is not available, what's the best workaround to avoid aliasing?

    Read the article

  • Images not shown when publishing MVC application to virtual directory inside default web-site

    - by Michael Sagalovich
    Hi! I am developing an application using ASP.NET MVC 1 and VS2008. When I deploy it to the default web site in my IIS6 on WinXP, all images are shown correctly; the path to any given image is localhost/Content/ImagesUI/[image].[ext]. When I deploy it to a virtual directory created inside the same site, any image request returns the standard IIS 404 error page, while the path is localhost/[DirectoryName]/Content/ImagesUI/[image].[ext] - that seems to be correct, true? I am mapping .* to c:\windows\microsoft.net\framework\v2.0.50727\aspnet_isapi.dll in both the site and directory configurations. When this mapping is removed, images are shown correctly; however, all other URLs then stop working, of course. When I try to open an image in the browser using its URL, the aspnet_wp.exe process is not even started (I restarted IIS to test it) - I merely get a 404 or the image, depending on the presence of the .* mapping. Thus, I suppose it has nothing to do with either the routes registered for MVC or with ASP.NET. The solution I found is to make the Content folder a virtual directory and remove the .* mapping from its configuration. While that's OK to some extent, I want a better solution, one that explains and eliminates the cause of the problem instead of just working around it. Thanks for your help!

    Read the article

  • HIbernate 3.5.1 - can I just drop in EHCache 2.0.1?

    - by caerphilly
    I'm using Hibernate 3.5.1, which comes with EHCache 1.5 bundled. If I want to use the latest EHCache release (2.0.1), is it just a matter of removing ehcache-1.5.jar from my project and replacing it with ehcache-core-2.0.1.jar? Any issues to be aware of? Also - is a cache "region" in the Hibernate mapping file the same as a cache "name" in the ehcache configuration XML? What I want to do is define two named cache regions - one for read-only reference entities that won't change (lookup lists etc.), and one for all other entities. So in ehcache I want to define two elements:
    <cache name="readonly"> ... </cache>
    <cache name="mutable"> ... </cache>
    And then in my Hibernate mapping files I will specify the cache to be used for each entity:
    <hibernate-mapping>
      <class name="lookuplist">
        <cache region="readonly" usage="read-only"/>
        <property> ... </property>
      </class>
    </hibernate-mapping>
    Will that work? Some of the documentation seems to imply that a separate region/cache gets created for each mapped class... Thanks.

    Read the article
