Search Results

Search found 4565 results on 183 pages for 'nhibernate mapping'.


  • Why 'timeout expired' exception thrown with StructureMap?

    - by Martin
    I'm getting a "timeout expired" exception thrown from a relatively heavily trafficked ASP.NET MVC 2 site I developed using StructureMap and Fluent NHibernate. I think that perhaps the connections aren't being disposed properly. What do you think may be causing this? Could it be my use of InstanceScope.Hybrid? Here's my NHibernateRegistry class; thanks in advance for your help:

        using MyProject.Core.Persistence.Impl;
        using FluentNHibernate.Cfg;
        using FluentNHibernate.Cfg.Db;
        using NHibernate;
        using NHibernate.ByteCode.LinFu;
        using NHibernate.Cfg;
        using MyProject.Core.FluentMapping;
        using StructureMap.Attributes;
        using StructureMap.Configuration.DSL;

        namespace MyProject.Core.Persistence
        {
            public class NHibernateRegistry : Registry
            {
                public NHibernateRegistry()
                {
                    FluentConfiguration cfg = Fluently.Configure()
                        .Database(MsSqlConfiguration.MsSql2005.ConnectionString(
                                x => x.FromConnectionStringWithKey("MyConnectionString"))
                            .ProxyFactoryFactory(typeof(ProxyFactoryFactory).AssemblyQualifiedName))
                        .Mappings(m => m.FluentMappings.AddFromAssemblyOf<EntryMap>());

                    Configuration configuration = cfg.BuildConfiguration();
                    ISessionFactory sessionFactory = cfg.BuildSessionFactory();

                    ForRequestedType<Configuration>().AsSingletons()
                        .TheDefault.IsThis(configuration);
                    ForRequestedType<ISessionFactory>().AsSingletons()
                        .TheDefault.IsThis(sessionFactory);
                    ForRequestedType<ISession>().CacheBy(InstanceScope.Hybrid)
                        .TheDefault.Is.ConstructedBy(ctx => ctx.GetInstance<ISessionFactory>().OpenSession());
                    ForRequestedType<IUnitOfWork>().CacheBy(InstanceScope.Hybrid)
                        .TheDefaultIsConcreteType<UnitOfWork>();
                    ForRequestedType<IDatabaseBuilder>().TheDefaultIsConcreteType<DatabaseBuilder>();
                }
            }
        }
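
    One likely culprit with this registry is connection-pool exhaustion: hybrid-scoped ISession instances are created per web request but nothing ever disposes them, so each one keeps its ADO.NET connection until the pool runs dry and the next OpenSession() times out. A minimal sketch of one mitigation, assuming the standard Global.asax and StructureMap's ObjectFactory API (not the poster's confirmed fix):

        // Global.asax.cs - sketch only.
        using System;
        using StructureMap;

        public class Global : System.Web.HttpApplication
        {
            public Global()
            {
                // Dispose all HTTP-scoped objects (including the hybrid-scoped ISession)
                // at the end of each request so their connections return to the pool.
                EndRequest += (sender, e) =>
                    ObjectFactory.ReleaseAndDisposeAllHttpScopedObjects();
            }
        }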

    Read the article

  • what's the alternative to readonlycollection when using lazy="extra"?

    - by Kunjan
    I am trying to use lazy="extra" for the child collection of Trades on my Client object. The trades are an ISet<Trade> Trades, exposed as ReadOnlyCollection<Trade> because I do not want anyone to modify the collection directly. As a result, I have added AddTrade and RemoveTrade methods. Now I have a Client search page where I need to show the trade count, and on the Client details page I have a tab where I need to show all the trades for the client in a paged grid view.

    What I want to achieve is that, for the search, when I call client.Trades.Count on the client object, NHibernate should only fire a select count(*) query. Hence I am using lazy="extra". But because I am using a ReadOnlyCollection, NHibernate fires a count query and a separate query that loads the child collection of trades completely. Also, I cannot include the trades in my initial search request, as this would disturb the paging: a counterparty can have n trades, which would result in n rows when I am searching clients only. So the child collections have to be loaded lazily.

    The second problem is the Trades grid view on the client details page, where I have enabled paging for performance reasons. By nature, NHibernate loads the entire collection of trades as the user goes from one page to another. Ideally I want to control this by fetching only the trades specific to the page the user is on. How can I achieve this? I came across this very good article: http://stackoverflow.com/questions/876976/implementing-ipagedlistt-on-my-models-using-nhibernate But I am not sure if this will work for me, as lazy="extra" currently doesn't work as expected with the ReadOnlyCollection. So, if I went ahead and implemented the solution this way and further enhanced it by making the List/Set immutable, would lazy="extra" give me the same problem as with ReadOnlyCollections?
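
    One pattern that keeps lazy="extra" effective (a sketch, not something the original poster confirmed; it assumes the Iesi ISet<T>/HashedSet<T> types of that NHibernate era and a field-access mapping) is to let NHibernate map the backing set directly and expose only read-only views over it, so Count still goes through the mapped, extra-lazy collection:

        using System.Collections.Generic;
        using Iesi.Collections.Generic;   // NHibernate 2.x-era ISet<T>/HashedSet<T>

        public class Client
        {
            // Mapped by NHibernate with access="field.camelcase-underscore" and lazy="extra".
            private ISet<Trade> _trades = new HashedSet<Trade>();

            // Count on the mapped collection lets lazy="extra" issue SELECT COUNT(*) only.
            public virtual int TradeCount
            {
                get { return _trades.Count; }
            }

            // Read-only view for callers; enumerating it lazily initializes the set.
            public virtual IEnumerable<Trade> Trades
            {
                get { return _trades; }
            }

            public virtual void AddTrade(Trade trade) { _trades.Add(trade); }
            public virtual void RemoveTrade(Trade trade) { _trades.Remove(trade); }
        }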

    Read the article

  • NullReferenceException when initializing NServiceBus within web application Application_Start method

    - by SteveBering
    I am running the 2.0 RTM of NServiceBus and am getting a NullReferenceException when my MessageModule binds the CurrentSessionContext to my NHibernate sessionfactory. From within my Application_Start, I call the following method:

        public static void WithWeb(IUnityContainer container)
        {
            log4net.Config.XmlConfigurator.Configure();

            var childContainer = container.CreateChildContainer();
            childContainer.RegisterInstance<ISessionFactory>(NHibernateSession.SessionFactory);

            var bus = NServiceBus.Configure.WithWeb()
                .UnityBuilder(childContainer)
                .Log4Net()
                .XmlSerializer()
                .MsmqTransport()
                    .IsTransactional(true)
                    .PurgeOnStartup(false)
                .UnicastBus()
                    .ImpersonateSender(false)
                    .LoadMessageHandlers()
                .CreateBus();

            var activeBus = bus.Start();
            container.RegisterInstance(typeof(IBus), activeBus);
        }

    When the bus is started, my message module starts with the following:

        public void HandleBeginMessage()
        {
            try
            {
                CurrentSessionContext.Bind(_sessionFactory.OpenSession());
            }
            catch (Exception e)
            {
                _log.Error("Error occurred in HandleBeginMessage of NHibernateMessageModule", e);
                throw;
            }
        }

    In looking at my log, we are logging the following error when the Bind method is called:

        System.NullReferenceException: Object reference not set to an instance of an object.
           at NHibernate.Context.WebSessionContext.GetMap()
           at NHibernate.Context.MapBasedSessionContext.set_Session(ISession value)
           at NHibernate.Context.CurrentSessionContext.Bind(ISession session)

    Apparently, there is some issue in getting access to the HttpContext. Should this call to configure NServiceBus occur later in the lifecycle than Application_Start? Or is there another workaround that others have used to get handlers working within an ASP.NET web application? Thanks, Steve
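
    The stack trace points at NHibernate's WebSessionContext, which stores the bound session in HttpContext.Current; NServiceBus handlers run on MSMQ worker threads where that context is null, regardless of when the bus is configured. A hedged sketch of one workaround (not from the original thread): use the thread-static session context for code that runs outside the ASP.NET pipeline, so Bind stores the session per worker thread instead.

        // Sketch only - illustrative bootstrap code, not the poster's NHibernateSession helper.
        var cfg = new NHibernate.Cfg.Configuration();
        cfg.SetProperty(
            NHibernate.Cfg.Environment.CurrentSessionContextClass,
            "thread_static");   // ThreadStaticSessionContext instead of the web context
        // ... connection string, mappings, etc. as before ...
        var sessionFactory = cfg.BuildSessionFactory();

        // HandleBeginMessage/HandleEndMessage can then Bind/Unbind per worker thread
        // without touching HttpContext.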

    Read the article

  • Action Filter Dependency Injection in ASP.NET MVC 3 RC2 with StructureMap

    - by Ben
    Hi, I've been playing with the DI support in ASP.NET MVC RC2. I have implemented session per request for NHibernate and need to inject ISession into my "Unit of work" action filter. If I reference the StructureMap container directly (ObjectFactory.GetInstance) or use DependencyResolver to get my session instance, everything works fine:

        ISession Session
        {
            get { return DependencyResolver.Current.GetService<ISession>(); }
        }

    However if I attempt to use my StructureMap filter provider (inherits FilterAttributeFilterProvider) I have problems with committing the NHibernate transaction at the end of the request. It is as if ISession objects are being shared between requests. I am seeing this frequently as all my images are loaded via an MVC controller so I get 20 or so NHibernate sessions created on a normal page load. I added the following to my action filter:

        ISession Session
        {
            get { return DependencyResolver.Current.GetService<ISession>(); }
        }

        public ISession SessionTest { get; set; }

        public override void OnResultExecuted(System.Web.Mvc.ResultExecutedContext filterContext)
        {
            bool sessionsMatch = (this.Session == this.SessionTest);

    SessionTest is injected using the StructureMap filter provider. I found that on a page with 20 images, "sessionsMatch" was false for 2-3 of the requests. My StructureMap configuration for session management is as follows:

        For<ISessionFactory>().Singleton().Use(new NHibernateSessionFactory().GetSessionFactory());
        For<ISession>().HttpContextScoped().Use(ctx => ctx.GetInstance<ISessionFactory>().OpenSession());

    In global.asax I call the following at the end of each request:

        public Global()
        {
            EndRequest += (sender, e) =>
            {
                ObjectFactory.ReleaseAndDisposeAllHttpScopedObjects();
            };
        }

    Is this configuration thread safe? Previously I was injecting dependencies into the same filter using a custom IActionInvoker. This worked fine until MVC 3 RC2 when I started experiencing the problem above, which is why I thought I would try using a filter provider instead. Any help would be appreciated.

    Ben

    P.S. I'm using NHibernate 3 RC and the latest version of StructureMap
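
    Worth noting: MVC 3 caches and reuses filter attribute instances across requests, so an ISession injected into a filter property can outlive the request that created it, which can produce exactly this kind of occasional mismatch. A hedged sketch of the usual workaround (not confirmed as the poster's fix): keep the filter stateless and resolve the request-scoped session at call time.

        using System.Web.Mvc;
        using NHibernate;

        public class UnitOfWorkAttribute : ActionFilterAttribute
        {
            // Safe: resolved per call, so each request gets its own HTTP-scoped session.
            private static ISession CurrentSession
            {
                get { return DependencyResolver.Current.GetService<ISession>(); }
            }

            // Risky: because filter instances are cached in MVC 3, an injected property
            // like the poster's SessionTest can hold a session from a previous request.
            // public ISession SessionTest { get; set; }

            public override void OnResultExecuted(ResultExecutedContext filterContext)
            {
                var tx = CurrentSession.Transaction;
                if (tx != null && tx.IsActive)
                    tx.Commit();   // commits the session that belongs to this request
            }
        }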

    Read the article

  • What is the basic pattern for using (N)Hibernate?

    - by Vilx-
    I'm creating a simple Windows Forms application with NHibernate and I'm a bit confused about how I'm supposed to use it. To quote the manual:

        ISession (NHibernate.ISession)
        A single-threaded, short-lived object representing a conversation between the application and the persistent store. Wraps an ADO.NET connection. Factory for ITransaction. Holds a mandatory (first-level) cache of persistent objects, used when navigating the object graph or looking up objects by identifier.

    Now, suppose I have the following scenario: I have a simple classifier, which is an MSSQL table with two columns - ID (auto_increment) and Name (nvarchar). To edit this classifier I create a form which contains a single grid view and two buttons - OK and Cancel. The user can nearly directly edit the table in the grid view, and when he hits OK the changes he made are persisted to the DB (or if he hits Cancel, nothing happens). Now, I have several questions about how to organize this:

    1. What should the lifetime of my ISession be? Should I create a single ISession for my whole application; an ISession for each of my forms (the application is single-threaded MDI); or an ISession for every DB operation/transaction?
    2. Does NHibernate offer some kind of built-in dirty tracking or must I do this myself? The manual mentions something like it here and there but does not go into details. How is this done? Is there not a huge overhead? Is it somehow tied to the cache(s) that NHibernate has?
    3. What are these caches for? Are they not specific to a single ISession? That is, if I use a separate ISession for every transaction, won't that break the dirty tracking?
    4. How does the built-in dirty tracking detect deleted objects?
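
    For the OK/Cancel dialog described above, the commonly recommended shape (a sketch of the unit-of-work-per-dialog approach, with hypothetical names such as Classifier, editedItems and deletedItems; not the only valid answer) is to keep the ISession short-lived: open it together with a transaction when the user commits, and let the first-level cache and dirty tracking work within that one session:

        // Hypothetical OK-button handler.
        private void okButton_Click(object sender, EventArgs e)
        {
            using (ISession session = sessionFactory.OpenSession())
            using (ITransaction tx = session.BeginTransaction())
            {
                foreach (Classifier item in editedItems)
                    session.SaveOrUpdate(item);   // inserts new rows, updates changed ones

                foreach (Classifier item in deletedItems)
                    session.Delete(item);

                tx.Commit();   // changes are flushed here; dirty checking is per session
            }
        }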

    Read the article

  • nservicebus deleting subscription record after inserting it?

    - by Justin Holbrook
    I have been playing with NServiceBus for a few weeks now, and since everything was going well on my local machine I decided to try to set up a test environment and work on deployment. I am using the generic host that comes with NServiceBus and was using the NServiceBus.Integration profile when running locally, but would like to use NServiceBus.Production in the test environment. I set up a SQL Server 2008 database, made changes to my app.config, and everything seemed to work fine. But after a few attempts, I noticed messages were not being picked up by my subscriber. I checked the subscription table and it was empty. Upon examination of the logs I noticed the following:

        2010-05-06 15:07:57,416 [1] DEBUG NHibernate.Persister.Entity.AbstractEntityPersister [(null)] <(null)> - Insert 0: INSERT INTO [Subscription] (SubscriberEndpoint, MessageType) VALUES (?, ?)
        2010-05-06 15:07:57,416 [1] DEBUG NHibernate.Persister.Entity.AbstractEntityPersister [(null)] <(null)> - Update 0:
        2010-05-06 15:07:57,416 [1] DEBUG NHibernate.Persister.Entity.AbstractEntityPersister [(null)] <(null)> - Delete 0: DELETE FROM [Subscription] WHERE SubscriberEndpoint = ? AND MessageType = ?

    Why would it insert and then delete my subscription right afterwards? To try to rule out an NHibernate dialect issue I tried switching my subscription storage to an Oracle 10g database. It behaved exactly the same: it worked the first two times, then I started seeing my subscriptions being deleted right after they were inserted. Any ideas what is causing this behavior?

    Read the article

  • Shadows shimmer when camera moves

    - by Chad Layton
    I've implemented shadow maps in my simple block engine as an exercise. I'm using one directional light and using the view volume to create the shadow matrices. I'm experiencing some problems with the shadows shimmering when the camera moves and I'd like to know if it's an issue with my implementation or just an issue with basic/naive shadow mapping itself. Here's a video: http://www.youtube.com/watch?v=vyprATt5BBg&feature=youtu.be

    Here's the code I use to create the shadow matrices. The commented out code is my original attempt to perfectly fit the view frustum. You can also see my attempt to try clamping movement to texels in the shadow map which didn't seem to make any difference. Then I tried using a bounding sphere instead, also to no apparent effect.

        public void CreateViewProjectionTransformsToFit(Camera camera, out Matrix viewTransform, out Matrix projectionTransform, out Vector3 position)
        {
            BoundingSphere cameraViewFrustumBoundingSphere = BoundingSphere.CreateFromFrustum(camera.ViewFrustum);

            float lightNearPlaneDistance = 1.0f;
            Vector3 lookAt = cameraViewFrustumBoundingSphere.Center;
            float distanceFromLookAt = cameraViewFrustumBoundingSphere.Radius + lightNearPlaneDistance;

            Vector3 directionFromLookAt = -Direction * distanceFromLookAt;
            position = lookAt + directionFromLookAt;

            viewTransform = Matrix.CreateLookAt(position, lookAt, Vector3.Up);

            float lightFarPlaneDistance = distanceFromLookAt + cameraViewFrustumBoundingSphere.Radius;
            float diameter = cameraViewFrustumBoundingSphere.Radius * 2.0f;
            Matrix.CreateOrthographic(diameter, diameter, lightNearPlaneDistance, lightFarPlaneDistance, out projectionTransform);

            //Vector3 cameraViewFrustumCentroid = camera.ViewFrustum.GetCentroid();
            //position = cameraViewFrustumCentroid - (Direction * (camera.FarPlaneDistance - camera.NearPlaneDistance));
            //viewTransform = Matrix.CreateLookAt(position, cameraViewFrustumCentroid, Up);

            //Vector3[] cameraViewFrustumCornersWS = camera.ViewFrustum.GetCorners();
            //Vector3[] cameraViewFrustumCornersLS = new Vector3[8];
            //Vector3.Transform(cameraViewFrustumCornersWS, ref viewTransform, cameraViewFrustumCornersLS);

            //Vector3 min = cameraViewFrustumCornersLS[0];
            //Vector3 max = cameraViewFrustumCornersLS[0];
            //for (int i = 1; i < 8; i++)
            //{
            //    min = Vector3.Min(min, cameraViewFrustumCornersLS[i]);
            //    max = Vector3.Max(max, cameraViewFrustumCornersLS[i]);
            //}

            //// Clamp to nearest texel
            //float texelSize = 1.0f / Renderer.ShadowMapSize;
            //min.X -= min.X % texelSize;
            //min.Y -= min.Y % texelSize;
            //min.Z -= min.Z % texelSize;
            //max.X -= max.X % texelSize;
            //max.Y -= max.Y % texelSize;
            //max.Z -= max.Z % texelSize;

            //// We just use an orthographic projection matrix. The sun is so far away that it's rays are essentially parallel.
            //Matrix.CreateOrthographicOffCenter(min.X, max.X, min.Y, max.Y, -max.Z, -min.Z, out projectionTransform);
        }

    And here's the relevant part of the shader:

        if (CastShadows)
        {
            float4 positionLightCS = mul(float4(position, 1.0f), LightViewProj);
            float2 texCoord = clipSpaceToScreen(positionLightCS) + 0.5f / ShadowMapSize;
            float shadowMapDepth = tex2D(ShadowMapSampler, texCoord).r;
            float distanceToLight = length(LightPosition - position);
            float bias = 0.2f;
            if (shadowMapDepth < (distanceToLight - bias))
            {
                return float4(0.0f, 0.0f, 0.0f, 0.0f);
            }
        }

    The shimmer is slightly better if I drastically reduce the view volume but I think that's mostly just because the texels become smaller and it's harder to notice them flickering back and forth.

    I'd appreciate any insight, I'd very much like to understand what's going on before I try other techniques.
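
    The standard fix for this kind of shimmer (a sketch of the usual stabilization technique, not code from the question) has two parts: keep the orthographic volume a constant size, which the bounding-sphere version above already does, and then snap the light-space origin to whole shadow-map texels so the projection cannot slide by sub-texel amounts as the camera moves. Something along these lines could go at the end of CreateViewProjectionTransformsToFit (shadowMapSize is assumed to be the shadow-map resolution):

        // Snap the shadow projection to texel-sized increments.
        Matrix lightViewProj = viewTransform * projectionTransform;

        // Where does the world origin land, measured in shadow-map texels?
        Vector3 shadowOrigin = Vector3.Transform(Vector3.Zero, lightViewProj);
        shadowOrigin *= shadowMapSize / 2.0f;

        // Round to the nearest texel and compute the sub-texel error.
        Vector3 rounded = new Vector3(
            (float)Math.Round(shadowOrigin.X),
            (float)Math.Round(shadowOrigin.Y),
            shadowOrigin.Z);
        Vector3 roundOffset = (rounded - shadowOrigin) * (2.0f / shadowMapSize);

        // Nudge the projection so texels stay fixed in world space as the camera moves.
        projectionTransform.M41 += roundOffset.X;
        projectionTransform.M42 += roundOffset.Y;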

    Read the article

  • Assigning static IP and domain name mapping to local server in LAN

    - by yashbinani
    I have developed a web application which will be deployed in a LAN environment. Clients will be computers, Android tablets, and iPads. For communication between the clients and the local server:

    1. I need to assign a static IP to the local server.
    2. I need a domain name mapping for that IP address in the local environment.
    3. The router should assign the same static IP if it gets restarted, etc.

    I am using a Windows XP machine as the local server OS. Do I need to take care of router configuration before buying one, or will all routers have the capability to perform this task? I am not a network specialist, so sorry if this question sounds stupid. Thanks

    Read the article

  • Mapping of memory addresses to physical modules in Windows XP

    - by Josef Grahn
    I plan to run 32-bit Windows XP on a workstation with dual processors, based on Intel's Nehalem microarchitecture, and triple channel RAM. Even though XP is limited to 4 GB of RAM, my understanding is that it will function with more than 4 GB installed, but will only expose 4 GB (or slightly less). My question is: assuming that 6 GB of RAM is installed in six 1 GB modules, which physical 4 GB will Windows actually map into its address space? In particular:

    1. Will it use all six 1 GB modules, taking advantage of all memory channels? (My guess is yes, and that the mapping to individual modules within a group happens in hardware.)
    2. Will it map 2 GB of address space to each of the two NUMA nodes (as each processor has its own memory interface), or will one processor get fast access to 3 GB of RAM while the other only has 1 GB?

    Thanks!

    Read the article

  • SQL Server User Mapping - Limit view of databases for a user

    - by Jaime
    Hi there, I am adding a new Login with SQL Server Authentication. I set its Server Role as public and then went into User Mapping, selecting the only database this user should have access to. I then change the Default Schema to dbo and made this user the db_owner. I then connect to the instance using the new user's credentials and I can see not only the database he should have access to but all the other attached databases. How can I limit this user to just see the database he has access to? Thanks in advance!

    Read the article

  • Dovecot Virtual Users and Users Domain Mapping

    - by Stojko
    I have successfully compiled, configured and run Dovecot with the virtual users feature. Here's part of my /etc/dovecot.conf configuration file:

        mail_location = maildir:/home/%d/%n/Maildir
        auth default {
          mechanisms = plain login
          userdb passwd-file {
            args = /home/%d/etc/passwd
          }
          passdb passwd-file {
            args = /home/%d/etc/shadow
          }
          socket listen {
            master {
              path = /var/run/dovecot/auth-worker
              mode = 0600
            }
          }
        }

    I faced one issue I can't resolve myself. Is there any way to create a users-to-domains mapping and provide the username in mail_location? Examples:

    1. Currently I have /home/domain.com/user/Maildir
    2. I'd like to have /home/USER/domain.com/user/Maildir

    Can I achieve this somehow? Greets, Stojko

    Read the article

  • Mapping a Piped Shell Command in Vim

    - by michaelmichael
    In a previous question I asked about mapping evaluated code to a new window in MacVim. I got a great solution, but it presented another question: How can I map a key command in my .vimrc that involves piping output in the shell? As a simple example, let's say I wanted to pipe the results of ls -a to a new MacVim window. From the Vim command line I can enter !ls -a | mvim -, and the results will appear in a new window. Great! Now, I add that to my .vimrc:

        nmap <Leader>r :w !ls | mvim<CR>

    Vim now throws an error every time I try to source my .vimrc, which reads as follows:

        E492: Not an editor command: mvim<CR>

    Any ideas on how to overcome this?

    Read the article

  • XEN disk mapping problem under opensolaris

    - by Louis
    I have a system with two hard disks. I wanted to use the simplicity of ZFS for my file server, and I also need to run Linux. I chose Xen virtualization for that, since it is supported on both systems. My GRUB is well configured and I can boot both systems. What I would like is to run both systems, with Solaris as dom0 and the Debian installed on the second hard disk as a virtual machine. My problem is that I want to use the partitions of my first hard disk (sda1 under Linux) and it does not work; I didn't find my use case on the web. Here is the OpenSolaris device name of this partition: /dev/rdsk/c7d0p1. But when I use

        disk = [ 'phy:rdsk/c7d0p1,sda1,w' ]

    as a disk mapping in my Xen configuration file, I get the error:

        Error: Device 2049 (vbd) could not be connected.
        error: "rdsk/c7d0p1" is not a valid block device.

    I am "lost".

    Read the article

  • updating drive mapping GPO programmatically using powershell

    - by Kristoffer
    I have a Group Policy in a domain that has lots of drive mapping settings. I would like to change the path for a lot of these servers in this GPO with PowerShell, if possible. I know I could do this via the GPMC, but I would prefer to do it programmatically. I have looked at the GroupPolicy PowerShell module from Microsoft (Get-GPO and friends), but I only seem to be able to change registry entries and permissions on the policies, not the actual path for the drive mapping. Any ideas? Thanks!

    Read the article

  • How to locate the line where a vim plugin error occurs: "Mapping not found"

    - by Mert Nuhoglu
    When I open gvim, it gives a "Mapping not found" warning. I found out that the problem is related to the snipmates.vim plugin, but I can't locate where exactly in the snipmates.vim file this error is produced. Is there a way to find out the exact cause? Note: gvim 7.2 running on Windows XP. I found a suggested solution on the Vim mailing list:

        gvim -V20 2>&1 | tee logfile

    I run this command from the command prompt, but it doesn't output anything into the logfile. All the logs are written on the entry screen of gvim.

    Read the article

  • apache server mapping to tomcat issues

    - by Karthick
    I am working on a Flex application; the back end is Spring/Hibernate. The application works fine on my local XP system, but I'm facing an issue when trying to deploy it to the server. The issue is how to map the Java application in Apache: when I bypass Apache and go straight to Tomcat, my application works fine, but it does not work through Apache. This should be fixable by mapping the Java application in Apache, but I don't know how to set up that mapping. Can you help me out? My server properties: Linux lumiin.ch 2.6.18-028stab095.1 i686. Regards, Karthick

    Read the article

  • Iptables port mapping from two PCs to one

    - by Anton
    We have 3 PCs; two of them are connected to the internet (both of those have 2 NICs):

        PC1: eth0 - 1.0.0.1 (external IP), eth1 - 172.16.0.1 (internal IP)
        PC2: eth0 - 1.0.0.2 (external IP), eth1 - 172.16.0.2 (internal IP)
        PC3: eth0 - 172.16.0.3 (internal IP)

    Now we want to map port 80 from PC1 and PC2 to PC3. But there is a problem: iptables port forwarding works well from PC1 or PC2, but only if PC3 has PC1 or PC2 as its gateway. So the question is: can we have port mapping from both PC1 and PC2 regardless of the gateway settings on PC3? Thank you in advance.

    Read the article

  • User mapping lost after manual failover

    - by fordan
    I have two Microsoft SQL Server instances set up for mirroring, each with a number of databases. There are a number of logins, and for each database one or more user/login mappings. When I restore a backup of a database I always have to redo the login/user mappings. I understand this, because the logins are per database server. So after restoring the databases on the principal I redid the login/user mappings. This was not possible for the mirror because its databases were in the 'restoring' state. After a manual failover I could not use the databases because user credentials were missing. This was not unexpected, so I did the login/user mapping again. I then did another manual failover to make the initial principal, which was now the mirror, principal again. To my surprise I could not use the databases because the login/user mappings were gone. Is this the expected behaviour?

    Read the article

  • What is the best way to provide an AutoMappingOverride for an interface in fluentnhibernate automapp

    - by Tom
    In my quest for a version-wide database filter for an application, I have written the following code:

        using System;
        using System.Collections.Generic;
        using System.Linq;
        using System.Text;
        using FluentNHibernate.Automapping;
        using FluentNHibernate.Automapping.Alterations;
        using FluentNHibernate.Mapping;
        using MvcExtensions.Model;
        using NHibernate;

        namespace MvcExtensions.Services.Impl.FluentNHibernate
        {
            public interface IVersionAware
            {
                string Version { get; set; }
            }

            public class VersionFilter : FilterDefinition
            {
                const string FILTERNAME = "MyVersionFilter";
                const string COLUMNNAME = "Version";

                public VersionFilter()
                {
                    this.WithName(FILTERNAME)
                        .WithCondition("Version = :" + COLUMNNAME)
                        .AddParameter(COLUMNNAME, NHibernateUtil.String);
                }

                public static void EnableVersionFilter(ISession session, string version)
                {
                    session.EnableFilter(FILTERNAME).SetParameter(COLUMNNAME, version);
                }

                public static void DisableVersionFilter(ISession session)
                {
                    session.DisableFilter(FILTERNAME);
                }
            }

            public class VersionAwareOverride : IAutoMappingOverride<IVersionAware>
            {
                #region IAutoMappingOverride<IVersionAware> Members

                public void Override(AutoMapping<IVersionAware> mapping)
                {
                    mapping.ApplyFilter<VersionFilter>();
                }

                #endregion
            }
        }

    But, since overrides do not work on interfaces, I am looking for a way to implement this. Currently I'm using this (rather cumbersome) way for each class that implements the interface:

        public class SomeVersionedEntity : IModelId, IVersionAware
        {
            public virtual int Id { get; set; }
            public virtual string Version { get; set; }
        }

        public class SomeVersionedEntityOverride : IAutoMappingOverride<SomeVersionedEntity>
        {
            #region IAutoMappingOverride<SomeVersionedEntity> Members

            public void Override(AutoMapping<SomeVersionedEntity> mapping)
            {
                mapping.ApplyFilter<VersionFilter>();
            }

            #endregion
        }

    This works, but anyone would have access to properties they shouldn't if developers forget the convention. I have been looking at IClassmap interfaces etc, but they do not seem to provide a way to access the ApplyFilter method, so I have not got a clue here... Since I am probably not the first one who has this problem, I am quite sure that it should be possible; I am just not quite sure how this works..

    EDIT: I have gotten a bit closer to a generic solution. This is the way I tried to solve it, using a generic class to implement alterations to classes implementing an interface:

        public abstract class AutomappingInterfaceAlteration<I> : IAutoMappingAlteration
        {
            public void Alter(AutoPersistenceModel model)
            {
                model.OverrideAll(map =>
                {
                    var recordType = map.GetType().GetGenericArguments().Single();
                    if (typeof(I).IsAssignableFrom(recordType))
                    {
                        this.GetType().GetMethod("overrideStuff").MakeGenericMethod(recordType).Invoke(this, new object[] { model });
                    }
                });
            }

            public void overrideStuff<T>(AutoPersistenceModel pm) where T : I
            {
                pm.Override<T>(a => Override(a));
            }

            public abstract void Override<T>(AutoMapping<T> am) where T : I;
        }

        public class VersionAwareAlteration : AutomappingInterfaceAlteration<IVersionAware>
        {
            public override void Override<T>(AutoMapping<T> am)
            {
                am.Map(x => x.Version).Column("VersionTest");
                am.ApplyFilter<VersionFilter>();
            }
        }

    Unfortunately I get the following error now:

        [InvalidOperationException: Collection was modified; enumeration operation may not execute.]
           System.ThrowHelper.ThrowInvalidOperationException(ExceptionResource resource) +51
           System.Collections.Generic.Enumerator.MoveNextRare() +7661017
           System.Collections.Generic.Enumerator.MoveNext() +61
           System.Linq.WhereListIterator`1.MoveNext() +156
           FluentNHibernate.Utils.CollectionExtensions.Each(IEnumerable`1 enumerable, Action`1 each) +239
           FluentNHibernate.Automapping.AutoMapper.ApplyOverrides(Type classType, IList`1 mappedProperties, ClassMappingBase mapping) +345
           FluentNHibernate.Automapping.AutoMapper.MergeMap(Type classType, ClassMappingBase mapping, IList`1 mappedProperties) +43
           FluentNHibernate.Automapping.AutoMapper.Map(Type classType, List`1 types) +566
           FluentNHibernate.Automapping.AutoPersistenceModel.AddMapping(Type type) +85
           FluentNHibernate.Automapping.AutoPersistenceModel.CompileMappings() +746

    EDIT 2: I managed to get a bit further; I now invoke "Override" using reflection for each class that implements the interface:

        public abstract class PersistenceOverride<I>
        {
            public void DoOverrides(AutoPersistenceModel model, IEnumerable<Type> Mytypes)
            {
                foreach (var t in Mytypes.Where(x => typeof(I).IsAssignableFrom(x)))
                    ManualOverride(t, model);
            }

            private void ManualOverride(Type recordType, AutoPersistenceModel model)
            {
                var t_amt = typeof(AutoMapping<>).MakeGenericType(recordType);
                var t_act = typeof(Action<>).MakeGenericType(t_amt);
                var m = typeof(PersistenceOverride<I>)
                    .GetMethod("MyOverride")
                    .MakeGenericMethod(recordType)
                    .Invoke(this, null);
                model.GetType().GetMethod("Override").MakeGenericMethod(recordType).Invoke(model, new object[] { m });
            }

            public abstract Action<AutoMapping<T>> MyOverride<T>() where T : I;
        }

        public class VersionAwareOverride : PersistenceOverride<IVersionAware>
        {
            public override Action<AutoMapping<T>> MyOverride<T>()
            {
                return am =>
                {
                    am.Map(x => x.Version).Column(VersionFilter.COLUMNNAME);
                    am.ApplyFilter<VersionFilter>();
                };
            }
        }

    However, for one reason or another my generated hbm files do not contain any "filter" fields.... Maybe somebody could help me a bit further now??
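
    For anyone wiring up the EDIT 2 approach, DoOverrides has to run before the automapping model is compiled. A hedged sketch of how that could be hooked into the fluent configuration (assembly and connection-string names are placeholders, and this on its own is not guaranteed to restore the missing <filter> elements):

        var model = AutoMap.AssemblyOf<SomeVersionedEntity>();   // placeholder assembly anchor

        // Apply the interface-driven overrides before the mappings are compiled.
        new VersionAwareOverride().DoOverrides(
            model,
            typeof(SomeVersionedEntity).Assembly.GetTypes());

        var sessionFactory = Fluently.Configure()
            .Database(MsSqlConfiguration.MsSql2005
                .ConnectionString(c => c.FromConnectionStringWithKey("MyConnectionString")))
            .Mappings(m => m.AutoMappings.Add(model))
            .BuildSessionFactory();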

    Read the article

  • how to define a field of view for the entire map for shadow?

    - by Mehdi Bugnard
    I recently added "Shadow Mapping" to my XNA game to include shadows. I followed the nice and famous tutorial from Riemers: http://www.riemers.net/eng/Tutorials/XNA/Csharp/Series3/Shadow_map.php. The code works nicely and I can see my light source and shadows. But the problem is that my light source does not match the field of view that I created. I want the light to cover the entire map of my game. I don't know why, but the light only affects 2-3 cubes of my map.

    [Screenshot: the emitted light illuminates only 2-3 blocks and not the full map.]

    Here is the code where I create the field of view for the light's view-projection matrix:

        Vector3 lightDir = new Vector3(10, 52, 10);
        lightPos = new Vector3(10, 52, 10);
        Matrix lightsView = Matrix.CreateLookAt(lightPos, new Vector3(105, 50, 105), new Vector3(0, 1, 0));
        Matrix lightsProjection = Matrix.CreatePerspectiveFieldOfView(MathHelper.PiOver2, 1f, 20f, 1000f);
        lightsViewProjectionMatrix = lightsView * lightsProjection;

    As you can see, my nearPlane and FarPlane are set to 20f and 100f, so I don't know why the light stops after two cubes; it should reach farther. Here is where I set the values on my custom HLSL effect in the shader file:

        /* SHADOW VALUE */
        effectWorld.Parameters["LightDirection"].SetValue(lightDir);
        effectWorld.Parameters["xLightsWorldViewProjection"].SetValue(Matrix.Identity * lightsViewProjectionMatrix);
        effectWorld.Parameters["xWorldViewProjection"].SetValue(Matrix.Identity * arcadia.camera.View * arcadia.camera.Projection);
        effectWorld.Parameters["xLightPower"].SetValue(1f);
        effectWorld.Parameters["xAmbient"].SetValue(0.3f);

    Here is my custom HLSL shader effect file (*.fx):

        // This sample uses a simple Lambert lighting model.
        float3 LightDirection = normalize(float3(-1, -1, -1));
        float3 DiffuseLight = 1.25;
        float3 AmbientLight = 0.25;

        uniform const float3 DiffuseColor = 1;
        uniform const float Alpha = 1;
        uniform const float3 EmissiveColor = 0;
        uniform const float3 SpecularColor = 1;
        uniform const float SpecularPower = 16;
        uniform const float3 EyePosition;

        // FOG attribut
        uniform const float FogEnabled;
        uniform const float FogStart;
        uniform const float FogEnd;
        uniform const float3 FogColor;

        float3 cameraPos : CAMERAPOS;

        texture Texture;
        sampler Sampler = sampler_state
        {
            Texture = (Texture);
            magfilter = LINEAR;
            minfilter = LINEAR;
            mipfilter = LINEAR;
            AddressU = mirror;
            AddressV = mirror;
        };

        texture xShadowMap;
        sampler ShadowMapSampler = sampler_state
        {
            Texture = <xShadowMap>;
            magfilter = LINEAR;
            minfilter = LINEAR;
            mipfilter = LINEAR;
            AddressU = clamp;
            AddressV = clamp;
        };

        /* *************** */
        /* SHADOW MAP CODE */
        /* *************** */

        struct SMapVertexToPixel
        {
            float4 Position : POSITION;
            float4 Position2D : TEXCOORD0;
        };

        struct SMapPixelToFrame
        {
            float4 Color : COLOR0;
        };

        struct SSceneVertexToPixel
        {
            float4 Position : POSITION;
            float4 Pos2DAsSeenByLight : TEXCOORD0;
            float2 TexCoords : TEXCOORD1;
            float3 Normal : TEXCOORD2;
            float4 Position3D : TEXCOORD3;
        };

        struct SScenePixelToFrame
        {
            float4 Color : COLOR0;
        };

        float DotProduct(float3 lightPos, float3 pos3D, float3 normal)
        {
            float3 lightDir = normalize(pos3D - lightPos);
            return dot(-lightDir, normal);
        }

        SSceneVertexToPixel ShadowedSceneVertexShader(float4 inPos : POSITION, float2 inTexCoords : TEXCOORD0, float3 inNormal : NORMAL)
        {
            SSceneVertexToPixel Output = (SSceneVertexToPixel)0;
            Output.Position = mul(inPos, xWorldViewProjection);
            Output.Pos2DAsSeenByLight = mul(inPos, xLightsWorldViewProjection);
            Output.Normal = normalize(mul(inNormal, (float3x3)World));
            Output.Position3D = mul(inPos, World);
            Output.TexCoords = inTexCoords;
            return Output;
        }

        SScenePixelToFrame ShadowedScenePixelShader(SSceneVertexToPixel PSIn)
        {
            SScenePixelToFrame Output = (SScenePixelToFrame)0;

            float2 ProjectedTexCoords;
            ProjectedTexCoords[0] = PSIn.Pos2DAsSeenByLight.x / PSIn.Pos2DAsSeenByLight.w / 2.0f + 0.5f;
            ProjectedTexCoords[1] = -PSIn.Pos2DAsSeenByLight.y / PSIn.Pos2DAsSeenByLight.w / 2.0f + 0.5f;

            float diffuseLightingFactor = 0;
            if ((saturate(ProjectedTexCoords).x == ProjectedTexCoords.x) && (saturate(ProjectedTexCoords).y == ProjectedTexCoords.y))
            {
                float depthStoredInShadowMap = tex2D(ShadowMapSampler, ProjectedTexCoords).r;
                float realDistance = PSIn.Pos2DAsSeenByLight.z / PSIn.Pos2DAsSeenByLight.w;
                if ((realDistance - 1.0f / 100.0f) <= depthStoredInShadowMap)
                {
                    diffuseLightingFactor = DotProduct(xLightPos, PSIn.Position3D, PSIn.Normal);
                    diffuseLightingFactor = saturate(diffuseLightingFactor);
                    diffuseLightingFactor *= xLightPower;
                }
            }

            float4 baseColor = tex2D(Sampler, PSIn.TexCoords);
            Output.Color = baseColor * (diffuseLightingFactor + xAmbient);
            return Output;
        }

        SMapVertexToPixel ShadowMapVertexShader(float4 inPos : POSITION)
        {
            SMapVertexToPixel Output = (SMapVertexToPixel)0;
            Output.Position = mul(inPos, xLightsWorldViewProjection);
            Output.Position2D = Output.Position;
            return Output;
        }

        SMapPixelToFrame ShadowMapPixelShader(SMapVertexToPixel PSIn)
        {
            SMapPixelToFrame Output = (SMapPixelToFrame)0;
            Output.Color = PSIn.Position2D.z / PSIn.Position2D.w;
            return Output;
        }

        /* ******************* */
        /* END SHADOW MAP CODE */
        /* ******************* */

        // For rendering without instancing.
        technique ShadowMap
        {
            pass Pass0
            {
                VertexShader = compile vs_2_0 ShadowMapVertexShader();
                PixelShader = compile ps_2_0 ShadowMapPixelShader();
            }
        }

        technique ShadowedScene
        {
            /*
            pass Pass0
            {
                VertexShader = compile vs_2_0 VSBasicTx();
                PixelShader = compile ps_2_0 PSBasicTx();
            }
            */
            pass Pass1
            {
                VertexShader = compile vs_2_0 ShadowedSceneVertexShader();
                PixelShader = compile ps_2_0 ShadowedScenePixelShader();
            }
        }

        technique SimpleFog
        {
            pass Pass0
            {
                VertexShader = compile vs_2_0 VSBasicTx();
                PixelShader = compile ps_2_0 PSBasicTx();
            }
        }

    I edited my .fx file to show you only the information and functions related to the shadow ;-)
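
    A directional "sun" light is normally given an orthographic projection sized to the area it has to cover; with CreatePerspectiveFieldOfView the lit region is only the narrow frustum in front of lightPos, which would explain why just a few cubes receive light. A hedged sketch of building the light matrices so they span the whole map (the map extents here are assumptions, not values from the question):

        // Assumed map extents in world units - replace with the real ones.
        const float mapWidth = 128f;
        const float mapDepth = 128f;

        Vector3 lightTarget = new Vector3(mapWidth / 2f, 0f, mapDepth / 2f);   // centre of the map
        Vector3 lightDirection = Vector3.Normalize(new Vector3(-1f, -2f, -1f));
        Vector3 lightPosition = lightTarget - lightDirection * 200f;           // pulled back along the light direction

        Matrix lightsView = Matrix.CreateLookAt(lightPosition, lightTarget, Vector3.Up);

        // Orthographic volume wide and deep enough to contain the whole map.
        Matrix lightsProjection = Matrix.CreateOrthographic(
            mapWidth * 1.5f,    // width of the light volume
            mapDepth * 1.5f,    // height of the light volume
            1f,                 // near plane
            500f);              // far plane, beyond the far corner of the map

        Matrix lightsViewProjectionMatrix = lightsView * lightsProjection;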

    Read the article

  • UV Atlas Generation and Seam Removal

    - by P. Avery
    I'm generating light maps for scene mesh objects using DirectX's UV Atlas Tool (D3DXUVAtlasCreate()). I've succeeded in generating an atlas; however, when I try to render the mesh object using the atlas, the seams are visible on the mesh. Below are images of a lightmap generated for a cube. Here is the code I use to generate a UV atlas for a cube:

        struct sVertexPosNormTex
        {
            D3DXVECTOR3 vPos, vNorm;
            D3DXVECTOR2 vUV;

            sVertexPosNormTex(){}
            sVertexPosNormTex( D3DXVECTOR3 v, D3DXVECTOR3 n, D3DXVECTOR2 uv )
            {
                vPos = v;
                vNorm = n;
                vUV = uv;
            }
            ~sVertexPosNormTex()
            {
            }
        };

        // create a light map texture to fill programatically
        hr = D3DXCreateTexture( pd3dDevice, 128, 128, 1, 0, D3DFMT_A8R8G8B8, D3DPOOL_MANAGED, &pLightmap );
        if( FAILED( hr ) )
        {
            DebugStringDX( "Main", "Failed to D3DXCreateTexture( lightmap )", __LINE__, hr );
            return hr;
        }

        // get the zero level surface from the texture
        IDirect3DSurface9 *pS = NULL;
        pLightmap->GetSurfaceLevel( 0, &pS );

        // clear surface
        pd3dDevice->ColorFill( pS, NULL, D3DCOLOR_XRGB( 0, 0, 0 ) );

        // load a sample mesh
        DWORD dwcMaterials = 0;
        LPD3DXBUFFER pMaterialBuffer = NULL;
        V_RETURN( D3DXLoadMeshFromX( L"cube3.x", D3DXMESH_MANAGED, pd3dDevice, &pAdjacency, &pMaterialBuffer, NULL, &dwcMaterials, &g_pMesh ) );

        // generate adjacency
        DWORD *pdwAdjacency = new DWORD[ 3 * g_pMesh->GetNumFaces() ];
        g_pMesh->GenerateAdjacency( 1e-6f, pdwAdjacency );

        // create light map coordinates
        LPD3DXMESH pMesh = NULL;
        LPD3DXBUFFER pFacePartitioning = NULL, pVertexRemapArray = NULL;
        FLOAT resultStretch = 0;
        UINT numCharts = 0;
        hr = D3DXUVAtlasCreate( g_pMesh, 0, 0, 128, 128, 3.5f, 0, pdwAdjacency, NULL, NULL, NULL, NULL, NULL, 0,
                                &pMesh, &pFacePartitioning, &pVertexRemapArray, &resultStretch, &numCharts );
        if( SUCCEEDED( hr ) )
        {
            // release and set mesh
            SAFE_RELEASE( g_pMesh );
            g_pMesh = pMesh;

            // write mesh to file
            hr = D3DXSaveMeshToX( L"cube4.x", g_pMesh, 0, ( const D3DXMATERIAL* )pMaterialBuffer->GetBufferPointer(), NULL, dwcMaterials, D3DXF_FILEFORMAT_TEXT );
            if( FAILED( hr ) )
            {
                DebugStringDX( "Main", "Failed to D3DXSaveMeshToX() at OnD3D9CreateDevice()", __LINE__, hr );
            }

            // fill the light map
            hr = BuildLightmap( pS, g_pMesh );
            if( FAILED( hr ) )
            {
                DebugStringDX( "Main", "Failed to BuildLightmap()", __LINE__, hr );
            }
        }
        else
        {
            DebugStringDX( "Main", "Failed to D3DXUVAtlasCreate() at OnD3D9CreateDevice()", __LINE__, hr );
        }

        SAFE_RELEASE( pS );
        SAFE_DELETE_ARRAY( pdwAdjacency );
        SAFE_RELEASE( pFacePartitioning );
        SAFE_RELEASE( pVertexRemapArray );
        SAFE_RELEASE( pMaterialBuffer );

    Here is the code to fill the lightmap texture:

        HRESULT BuildLightmap( IDirect3DSurface9 *pS, LPD3DXMESH pMesh )
        {
            HRESULT hr = S_OK;

            // validate lightmap texture surface and mesh
            if( !pS || !pMesh )
                return E_POINTER;

            // lock the mesh vertex buffer
            sVertexPosNormTex *pV = NULL;
            pMesh->LockVertexBuffer( D3DLOCK_READONLY, ( void** )&pV );

            // lock the mesh index buffer
            WORD *pI = NULL;
            pMesh->LockIndexBuffer( D3DLOCK_READONLY, ( void** )&pI );

            // get the lightmap texture surface description
            D3DSURFACE_DESC desc;
            pS->GetDesc( &desc );

            // lock the surface rect to fill with color data
            D3DLOCKED_RECT rct;
            hr = pS->LockRect( &rct, NULL, 0 );
            if( FAILED( hr ) )
            {
                DebugStringDX( "main.cpp:", "Failed to IDirect3DTexture9::LockRect()", __LINE__, hr );
                return hr;
            }

            // iterate the pixels of the lightmap texture
            // check each pixel to see if it lies between the uv coordinates of a cube face
            BYTE *pBuffer = ( BYTE* )rct.pBits;
            for( UINT y = 0; y < desc.Height; ++y )
            {
                BYTE* pBufferRow = ( BYTE* )pBuffer;
                for( UINT x = 0; x < desc.Width * 4; x += 4 )
                {
                    // determine the pixel's uv coordinate
                    D3DXVECTOR2 p( ( ( float )x / 4.0f ) / ( float )desc.Width + 0.5f / 128.0f, y / ( float )desc.Height + 0.5f / 128.0f );

                    // for each face of the mesh
                    // check to see if the pixel lies within the face's uv coordinates
                    for( UINT i = 0; i < 3 * pMesh->GetNumFaces(); i += 3 )
                    {
                        sVertexPosNormTex v[ 3 ];
                        v[ 0 ] = pV[ pI[ i + 0 ] ];
                        v[ 1 ] = pV[ pI[ i + 1 ] ];
                        v[ 2 ] = pV[ pI[ i + 2 ] ];

                        if( TexcoordIsWithinBounds( v[ 0 ].vUV, v[ 1 ].vUV, v[ 2 ].vUV, p ) )
                        {
                            // the pixel lies b/t the uv coordinates of a cube face
                            // light contribution functions aren't needed yet
                            //D3DXVECTOR3 vPos = TexcoordToPos( v[ 0 ].vPos, v[ 1 ].vPos, v[ 2 ].vPos, v[ 0 ].vUV, v[ 1 ].vUV, v[ 2 ].vUV, p );
                            //D3DXVECTOR3 vNormal = v[ 0 ].vNorm;

                            // set the color of this pixel red( for demo )
                            BYTE ba[] = { 0, 0, 255, 255, };
                            //ComputeContribution( vPos, vNormal, g_sLight, ba );

                            // copy the byte array into the light map texture
                            memcpy( ( void* )&pBufferRow[ x ], ( void* )ba, 4 * sizeof( BYTE ) );
                        }
                    }
                }

                // go to next line of the texture
                pBuffer += rct.Pitch;
            }

            // unlock the surface rect
            pS->UnlockRect();

            // unlock mesh vertex and index buffers
            pMesh->UnlockIndexBuffer();
            pMesh->UnlockVertexBuffer();

            // write the surface to file
            hr = D3DXSaveSurfaceToFile( L"LightMap.jpg", D3DXIFF_JPG, pS, NULL, NULL );
            if( FAILED( hr ) )
                DebugStringDX( "Main.cpp", "Failed to D3DXSaveSurfaceToFile()", __LINE__, hr );

            return hr;
        }

        bool TexcoordIsWithinBounds( const D3DXVECTOR2 &t0, const D3DXVECTOR2 &t1, const D3DXVECTOR2 &t2, const D3DXVECTOR2 &p )
        {
            // compute vectors
            D3DXVECTOR2 v0 = t1 - t0, v1 = t2 - t0, v2 = p - t0;

            float f00 = D3DXVec2Dot( &v0, &v0 );
            float f01 = D3DXVec2Dot( &v0, &v1 );
            float f02 = D3DXVec2Dot( &v0, &v2 );
            float f11 = D3DXVec2Dot( &v1, &v1 );
            float f12 = D3DXVec2Dot( &v1, &v2 );

            // Compute barycentric coordinates
            float invDenom = 1 / ( f00 * f11 - f01 * f01 );
            float fU = ( f11 * f02 - f01 * f12 ) * invDenom;
            float fV = ( f00 * f12 - f01 * f02 ) * invDenom;

            // Check if point is in triangle
            if( ( fU >= 0 ) && ( fV >= 0 ) && ( fU + fV < 1 ) )
                return true;

            return false;
        }

    [Screenshot and generated lightmap images]

    I believe the problem comes from the difference between the lightmap UV coordinates and the pixel center coordinates. For example, here are the lightmap UV coordinates (generated by D3DXUVAtlasCreate()) for a specific face (tri) within the mesh; keep in mind that I'm using the mesh UV coordinates to write the pixels for the texture:

        v[ 0 ].uv = D3DXVECTOR2( 0.003581, 0.295631 );
        v[ 1 ].uv = D3DXVECTOR2( 0.003581, 0.003581 );
        v[ 2 ].uv = D3DXVECTOR2( 0.295631, 0.003581 );

    The lightmap texture size is 128 x 128 pixels. The upper-left pixel center coordinates are:

        float halfPixel = 0.5 / 128 = 0.00390625;
        D3DXVECTOR2 pixelCenter = D3DXVECTOR2( halfPixel, halfPixel );

    Will the mapping and sampling of the lightmap texture require that an offset be taken into account, or that the UV coordinates be snapped to the pixel centers? ...Any ideas on the best way to approach this situation would be appreciated... What are the common practices?

    Read the article

  • Best practices for class-mapping with SoapClient

    - by Foofy
    Using SoapClient's class mapping feature and it's pretty sweet. Unfortunately the SOAP service we're using has a bunch of read-only properties on some of the objects and will throw faults if the properties are passed back as anything but null. Need to filter out the properties before they're used in the SOAP call and am looking for advice on the best way to do it. So far the options are:

    1. Stick to a convention where I use getter and setter functions to manipulate the properties, and use property overloading to filter method access since only SoapClient would be doing that. E.g. developers would access properties like this: $obj->getAccountNumber() while SoapClient would access properties like this: $obj->accountNumber. I don't like this because the properties are still exposed and things could go wrong if developers don't stick to convention.

    2. Have a wrapper for SoapClient that sets a public property the mapped objects can check to see if the property is being accessed by SoapClient. I already have a wrapper that assigns a reference to itself to all the mapped objects.

        class SoapClientWrapper
        {
            public function __soapCall($method, $args)
            {
                $this->setSoapMode(true);
                $this->_soapClient->__soapCall($method, $args);
                $this->setSoapMode(false);
            }
        }

        class Invoice
        {
            function __get($val)
            {
                if($this->_soapClient->getSoapMode()) {
                    return null;
                } else {
                    return $this->$val;
                }
            }
        }

    This works but it doesn't feel right and seems a bit clunky.

    3. Do the mapping manually, and don't use SoapClient's mapping features. I'd just have a function on all the mapped objects that returns the safe-to-send properties. Also, nobody would have access to properties they shouldn't since I could enforce getters and setters. A lot more work, though.

    Read the article

  • ORM For .Net ON Oracle

    - by moi_meme
    I'm going to start a new project soon, using .NET 3.5 and WinForms on an Oracle database. We were planning on using an ORM; NHibernate was suggested by our architect. Since I'm personally more familiar with Entity Framework, I thought it would be easier to use than NHibernate, but since there isn't an official provider from Oracle, we are hesitant to use it. So my question: I looked at the different providers available and found a few:

    1. DevArt
    2. DataDirect
    3. EFOracleProvider

    I'd like to have some feedback on each of them (pros and cons, missing features, things like that) from those who are using them, and to know whether we're better off with NHibernate. Thanks for the help.

    Read the article

  • How to do Grouping using JPA annotation with mapping given field

    - by hemal
    I am using JPA annotation mapping with the tables given below, but I have the problem that I am mapping the same join table for different fields, as shown in ProductImpl.java:

        @Entity
        @Table(name = "Product")
        public class ProductImpl extends SimpleTagGroup implements Product {
            @Id
            @GeneratedValue(strategy = GenerationType.IDENTITY)
            private long id = -1;

            @OneToMany(fetch = FetchType.EAGER, cascade = CascadeType.ALL)
            @JoinTable(name = "ProductTagMapping",
                       joinColumns = @JoinColumn(name = "productId"),
                       inverseJoinColumns = @JoinColumn(name = "tagId"))
            private List<SimpleTag> tags;

            @OneToMany(fetch = FetchType.EAGER, cascade = CascadeType.ALL)
            @JoinTable(name = "ProductTagMapping",
                       joinColumns = @JoinColumn(name = "productId"),
                       inverseJoinColumns = @JoinColumn(name = "tagId"))
            private List<SimpleTag> licenses;

            @OneToMany(fetch = FetchType.EAGER, cascade = CascadeType.ALL)
            @JoinTable(name = "ProductTagMapping",
                       joinColumns = @JoinColumn(name = "productId"),
                       inverseJoinColumns = @JoinColumn(name = "tagId"))
            private List<SimpleTag> os;

    I want to get values like Windows and Linux in os, and GPLv2 and GPLv3 in licenses; that is why we are using the TagGroup table. But here I get all tag values in each of the os, licenses and tags fields, so how could I do a group-by or something similar with JPA? ProductTagMapping is the mapping table between Tag and TagGroup.

    TagGroup table:

        ID  TAGGROUPNAME
        1   PRODUCTTYPE
        2   LICENSE
        3   TAGS
        4   OS

    SimpleTag:

        ID  TAGVALUE
        1   Application
        2   Framework
        3   Apache2
        4   GPLv2
        5   GPLv3
        6   learning
        7   Linux
        8   Windows
        9   mature

    Read the article

  • ASP.NET MVC: run code after view has rendered (close db transaction)

    - by thermal7
    Hi, I am using ASP.NET MVC 2 with NHibernate, but am facing an issue. All calls to the database via NHibernate should be inside a transaction; however, code inside the view kicks off database calls in some instances. Thus there is a need to be able to commit the transaction after the view has rendered. For example, when displaying a list of users and their user roles, you might show the user role using this code:

        <%: Model.UserRole.Name %>

    This will cause a hit on the database, as the UserRole is loaded using an NHibernate proxy. You can fetch the UserRole eagerly, which circumvents the issue in this case, but there are cases where it is much faster to use lazy loading. Anyway, is there a way to run code after a view has rendered?
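
    In MVC the ResultExecuted stage runs after the view has finished rendering, so a transaction committed there also covers lazy loads triggered from the view. A hedged sketch (NHibernateBootstrap is a placeholder for however the application exposes its ISessionFactory, and a current-session context is assumed to be configured):

        using System.Web.Mvc;
        using NHibernate;

        public static class NHibernateBootstrap   // placeholder holder for the session factory
        {
            public static ISessionFactory SessionFactory { get; set; }
        }

        public class UnitOfWorkAttribute : ActionFilterAttribute
        {
            public override void OnActionExecuting(ActionExecutingContext filterContext)
            {
                NHibernateBootstrap.SessionFactory.GetCurrentSession().BeginTransaction();
            }

            // Fires after the view has rendered, so <%: Model.UserRole.Name %> is still
            // inside the transaction when the proxy is initialized.
            public override void OnResultExecuted(ResultExecutedContext filterContext)
            {
                ITransaction tx = NHibernateBootstrap.SessionFactory.GetCurrentSession().Transaction;
                if (tx == null || !tx.IsActive)
                    return;

                if (filterContext.Exception == null)
                    tx.Commit();
                else
                    tx.Rollback();
            }
        }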

    Read the article
