Search Results

Search found 1103 results on 45 pages for 'eager evaluation'.

Page 15/45 | < Previous Page | 11 12 13 14 15 16 17 18 19 20 21 22  | Next Page >

  • A little on speaking and evaluations...

    - by AaronBertrand
    Buck Woody (blog | twitter) just published a great post on session evaluations, and a lot of his points hit home for me. The premise is that the evaluations are not really meant for the attendee or the event organizers, but so that the speaker can get better and make the next session better. In light of this, at least in my opinion, the existing evaluation forms (and the way attendees tend to fill them out) do not achieve this at all. It may be a little more work for events to generate a more...(read more)

    Read the article

  • Oracle Customer Success Forum - Batesville - Oracle Sales Cloud - June 24th, 5pm CET

    - by Richard Lefebvre
    Batesville uses Oracle Sales Cloud to create a common platform and standardize processes for business transformation across field sales and telesales. Using real-time KPI dashboards, they are measuring their business success with consistency across their sales reps.

    We are pleased to invite you to a discussion with Batesville on industry trends, why sales automation is important, reasons for choosing Oracle Sales Cloud, and the vendor evaluation process. Please click on the register button to confirm your attendance by 5:00 p.m. Pacific Time on June 23, 2014.

    Speakers: Diane Kinker, Director CRM Program; Chris Haven, Senior Director Product Management, Oracle (Moderator)

    Organization Profile: Batesville (www.Batesville.com), a wholly owned subsidiary of Hillenbrand, Inc. (NYSE:HI), is the leader in the North American death care industry. For more than 125 years, Batesville has been dedicated to helping families honor the lives of those they love®. Batesville's innovation has changed the face of funeral service, from advancements in manufacturing and quality to patented features and memorialization offerings, technology and web-based solutions, and profit-enhancing merchandising systems and room displays. Our history of manufacturing excellence, product innovation, superior customer service and reliable delivery has helped Batesville become – and remain – a market leader.

    Event Description: In this informal reference call, you will have the opportunity to hear Batesville discuss industry trends, why sales automation is important, the decision-making process for choosing Oracle Sales Cloud, and the vendor evaluation process. The call will open with a brief overview, followed by discussion and an open question and answer session. Please allow one hour for the call.

    Why Oracle: Batesville looked to transform its sales automation processes. Oracle Sales Cloud met these needs and Batesville's requirements for:

      • Standardized end-to-end sales processes, including Sales Performance Management (territory management, quota management and incentive compensation)
      • Mobile capabilities with integration to Microsoft Outlook and smartphones
      • Creation of the WIG (Wildly Important Goal) Dashboard using reporting and analytics

    Click the Register Now button to confirm your attendance for this informative event. Registration will close at 5:00 p.m. Pacific Time on June 23, 2014. After you register, your information will be forwarded through an approval process. Once your registration request has been validated against the invitation database, you will receive an email confirmation with your registration details, as long as there is availability. Please be advised that Batesville will review the registrants list and may dismiss registrations as they see fit. Register Now!

    Read the article

  • Essential SEO Advice For 2010 After the Google Mayday Update and Caffeine Roll Out

    So Google have made big changes recently with the Mayday update and the Caffeine rollout. Many webmasters on the various SEO forums, such as Webmasterworld and SEOchat, have been bemoaning these changes and how they have affected their websites' search engine ranking positions, with many of their long-tail rankings taking a major negative hit. This is not the time for whingeing; instead it should be a time for re-evaluation.

    Read the article

  • Evaluate Oracle Solaris 11

    - by Terri Wischmann
    Evaluate Oracle Solaris 11 and make the move! We have provided some useful next steps for increasing your Oracle Solaris 11 knowledge so you can take advantage of some of the latest innovations in Oracle Solaris. Check out the Evaluation page, which has a host of content to help you move to Oracle Solaris 11 from Oracle Solaris 10 or any other OS. Check out the NEW content in Evaluating Oracle Solaris 11: podcasts, enterprise OS demos, cheat sheets, and competitive info.

    Read the article

  • The Best Articles for Using and Customizing Windows 8

    - by Lori Kaufman
    Now that Windows 8 Enterprise is available to the public as a 90-day evaluation and Windows 8 Pro is available for Microsoft TechNet subscribers, we decided to collect links to the Windows 8 articles we've published since the release of the Developer Preview: How To Switch Webmail Providers Without Losing All Your Email; How To Force Windows Applications to Use a Specific CPU; HTG Explains: Is UPnP a Security Risk?

    Read the article

  • Microsoft ends the TechNet subscription: was the program killed by piracy and subscriber abuse?

    Microsoft announces the end of the TechNet subscription. Was the program killed by piracy and subscriber abuse? In a post published on TechNet, Microsoft announced the end of the TechNet subscription for IT professionals. Launched in 1998, the TechNet program helps IT professionals prepare for critical issues and plan deployments by giving them quick, convenient access to the latest software for evaluation purposes, as well as to beta versions, professional technical support calls, technical information, and tools designed to make their work easier. Starting August 31, Microsoft will no longer accept new ...

    Read the article

  • ASP.NET MVC: optimizing page load times using bundling and minification, an article by Hinault Romaric

    Hi, this discussion is open to announce the publication of my new article on improving web page load times using on-the-fly bundling and minification of CSS and JavaScript. Quote: Page load time is an important factor in evaluating the performance of a website. It has a significant impact on the user experience and even on organic search rankings. The faster your site's pages load, the smoother the browsing experience...
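    For illustration, this is the kind of setup the article is about: ASP.NET MVC 4's System.Web.Optimization bundles concatenate and minify scripts and stylesheets at runtime. A minimal sketch (bundle names and file paths are illustrative assumptions, not taken from the article):

        using System.Web.Optimization;

        public class BundleConfig
        {
            // Register bundles once at startup, e.g. from Application_Start:
            //   BundleConfig.RegisterBundles(BundleTable.Bundles);
            public static void RegisterBundles(BundleCollection bundles)
            {
                // One HTTP request per bundle instead of one per file;
                // contents are minified on the fly and cached by the framework.
                bundles.Add(new ScriptBundle("~/bundles/site").Include(
                    "~/Scripts/jquery-{version}.js",
                    "~/Scripts/site.js"));

                bundles.Add(new StyleBundle("~/Content/css").Include(
                    "~/Content/site.css"));

                // Bundling normally activates only when <compilation debug="false">;
                // this flag forces it on in every configuration.
                BundleTable.EnableOptimizations = true;
            }
        }

    Views then emit either the single combined, minified URL or the individual file references via @Scripts.Render("~/bundles/site") and @Styles.Render("~/Content/css"), depending on that flag.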

    Read the article

  • Linux: Microsoft and SUSE strengthen their partnership around interoperability to optimize heterogeneous environments

    Linux: Microsoft and SUSE strengthen their partnership for datacenter optimization and interoperability. On the occasion of the 5th edition of the Open World Forum, Microsoft and SUSE announced the strengthening of their partnership with the launch of a joint evaluation platform. "For more than 10 years, Microsoft has been working on the openness and interoperability of its technologies. Through this long-term commitment, our goal is to offer real solutions for optimizing investments and skills in heterogeneous environments, in both private and public cloud," explains Frédéric Aatz, Director of Interoperability Strategy at Mic...

    Read the article

  • What common programming problems are best solved by using prototypes and closures?

    - by vemv
    As much as I understand both concepts, I can't see how I can take advantage of JavaScript's closures and prototypes aside from using them to create instantiable and/or encapsulated class-like blocks (which seems more of a workaround than an asset to me). Other JS features such as functions-as-values or logical evaluation of non-booleans are much easier to fall in love with... What common programming problems are best solved by using prototypal inheritance and closures?

    Read the article

  • SEO and JavaScript since Google admits JS parsing

    - by schlingel
    We're planning on building an HTML snapshot creation service to provide the Google crawlers with static HTML of our JS-driven single-page application. Is this still necessary and/or encouraged now that Google openly admits it parses JS? How should I tackle this evaluation? Are there tools that provide data on when snapshots are needed and when Google's parsing is sufficient? Is it better because it would be much faster in comparison to the JS incremental rendering?

    Read the article

  • Visual sitemap generator

    - by rugbert
    I'm looking for something to visually create a sitemap for one of my websites. I'd like something in a tree structure, so I have a hierarchical view of my site. I have a couple of requirements though: the ability to map password-protected pages, and (not REALLY a requirement) the ability to integrate Google Analytics data. I'm trying an evaluation version of PowerMapper, but the version that includes analytics integration is around $300, so I'm looking for something cheaper.

    Read the article

  • Optimising For Google and Bing - How Distinct Are They?

    With Bing and Yahoo set to combine soon enough, the Bing search engine will probably account for close to 30% of the search engine business. This means it could be somewhat beneficial to optimise for both Bing and Google in the not-too-remote future. That does not entail making tremendous modifications to the optimisation approaches you presently use, as an evaluation by SEOmoz reveals that the two engines appear to be growing increasingly similar.

    Read the article

  • Heaps of Trouble?

    - by Paul White NZ
    If you’re not already a regular reader of Brad Schulz’s blog, you’re missing out on some great material.  In his latest entry, he is tasked with optimizing a query run against tables that have no indexes at all.  The problem is, predictably, that performance is not very good.  The catch is that we are not allowed to create any indexes (or even new statistics) as part of our optimization efforts.  In this post, I’m going to look at the problem from a slightly different angle, and present an alternative solution to the one Brad found.  Inevitably, there’s going to be some overlap between our entries, and while you don’t necessarily need to read Brad’s post before this one, I do strongly recommend that you read it at some stage; he covers some important points that I won’t cover again here.

    The Example

    We’ll use data from the AdventureWorks database, copied to temporary unindexed tables.  A script to create these structures is shown below:

        CREATE TABLE #Custs
        (
            CustomerID INTEGER NOT NULL,
            TerritoryID INTEGER NULL,
            CustomerType NCHAR(1) COLLATE SQL_Latin1_General_CP1_CI_AI NOT NULL
        );
        GO
        CREATE TABLE #Prods
        (
            ProductMainID INTEGER NOT NULL,
            ProductSubID INTEGER NOT NULL,
            ProductSubSubID INTEGER NOT NULL,
            Name NVARCHAR(50) COLLATE SQL_Latin1_General_CP1_CI_AI NOT NULL
        );
        GO
        CREATE TABLE #OrdHeader
        (
            SalesOrderID INTEGER NOT NULL,
            OrderDate DATETIME NOT NULL,
            SalesOrderNumber NVARCHAR(25) COLLATE SQL_Latin1_General_CP1_CI_AI NOT NULL,
            CustomerID INTEGER NOT NULL
        );
        GO
        CREATE TABLE #OrdDetail
        (
            SalesOrderID INTEGER NOT NULL,
            OrderQty SMALLINT NOT NULL,
            LineTotal NUMERIC(38,6) NOT NULL,
            ProductMainID INTEGER NOT NULL,
            ProductSubID INTEGER NOT NULL,
            ProductSubSubID INTEGER NOT NULL
        );
        GO
        INSERT #Custs (CustomerID, TerritoryID, CustomerType)
        SELECT C.CustomerID, C.TerritoryID, C.CustomerType
        FROM AdventureWorks.Sales.Customer C WITH (TABLOCK);
        GO
        INSERT #Prods (ProductMainID, ProductSubID, ProductSubSubID, Name)
        SELECT P.ProductID, P.ProductID, P.ProductID, P.Name
        FROM AdventureWorks.Production.Product P WITH (TABLOCK);
        GO
        INSERT #OrdHeader (SalesOrderID, OrderDate, SalesOrderNumber, CustomerID)
        SELECT H.SalesOrderID, H.OrderDate, H.SalesOrderNumber, H.CustomerID
        FROM AdventureWorks.Sales.SalesOrderHeader H WITH (TABLOCK);
        GO
        INSERT #OrdDetail (SalesOrderID, OrderQty, LineTotal, ProductMainID, ProductSubID, ProductSubSubID)
        SELECT D.SalesOrderID, D.OrderQty, D.LineTotal, D.ProductID, D.ProductID, D.ProductID
        FROM AdventureWorks.Sales.SalesOrderDetail D WITH (TABLOCK);

    The query itself is a simple join of the four tables:

        SELECT P.ProductMainID AS PID, P.Name, D.OrderQty,
               H.SalesOrderNumber, H.OrderDate, C.TerritoryID
        FROM #Prods P
        JOIN #OrdDetail D
            ON P.ProductMainID = D.ProductMainID
            AND P.ProductSubID = D.ProductSubID
            AND P.ProductSubSubID = D.ProductSubSubID
        JOIN #OrdHeader H
            ON D.SalesOrderID = H.SalesOrderID
        JOIN #Custs C
            ON H.CustomerID = C.CustomerID
        ORDER BY P.ProductMainID ASC
        OPTION (RECOMPILE, MAXDOP 1);

    Remember that these tables have no indexes at all, and only the single-column sampled statistics SQL Server automatically creates (assuming default settings).

    The Problem

    The problem here is one of cardinality estimation – the number of rows SQL Server expects to find at each step of the plan.  The lack of indexes and useful statistical information means that SQL Server does not have the information it needs to make a good estimate.  Every join in the estimated plan produced for this query expects to produce just a single row as output.  Brad covers the factors that lead to the low estimates in his post.

    In reality, the join between the #Prods and #OrdDetail tables will produce 121,317 rows.  It should not surprise you that this has rather dire consequences for the remainder of the query plan.  In particular, it makes a nonsense of the optimizer’s decision to use Nested Loops to join to the two remaining tables.  Instead of scanning the #OrdHeader and #Custs tables once (as it expected), it has to perform 121,317 full scans of each.  The query takes somewhere in the region of twenty minutes to run to completion on my development machine.

    A Solution

    At this point, you may be thinking the same thing I was: if we really are stuck with no indexes, the best we can do is to use hash joins everywhere.  We can force the exclusive use of hash joins in several ways, the two most common being join and query hints.  A join hint means writing the query using the INNER HASH JOIN syntax; using a query hint involves adding OPTION (HASH JOIN) at the bottom of the query.  The difference is that using join hints also forces the order of the joins, whereas the query hint gives the optimizer freedom to reorder the joins at its discretion.

    Adding the OPTION (HASH JOIN) hint produces the correct output in around seven seconds, which is quite an improvement!  As a purely practical matter, and given the rigid rules of the environment we find ourselves in, we might leave things there.  (We can improve the hashing solution a bit – I’ll come back to that later on).

    Faster Nested Loops

    It might surprise you to hear that we can beat the performance of the hash join solution shown above using nested loops joins exclusively, and without breaking the rules we have been set.  The key to this part is to realize that a condition like (A = B) can be expressed as (A <= B) AND (A >= B).  Armed with this tremendous new insight, we can rewrite the join predicates like so:

        SELECT P.ProductMainID AS PID, P.Name, D.OrderQty,
               H.SalesOrderNumber, H.OrderDate, C.TerritoryID
        FROM #OrdDetail D
        JOIN #OrdHeader H
            ON D.SalesOrderID >= H.SalesOrderID
            AND D.SalesOrderID <= H.SalesOrderID
        JOIN #Custs C
            ON H.CustomerID >= C.CustomerID
            AND H.CustomerID <= C.CustomerID
        JOIN #Prods P
            ON P.ProductMainID >= D.ProductMainID
            AND P.ProductMainID <= D.ProductMainID
            AND P.ProductSubID = D.ProductSubID
            AND P.ProductSubSubID = D.ProductSubSubID
        ORDER BY D.ProductMainID
        OPTION (RECOMPILE, LOOP JOIN, MAXDOP 1, FORCE ORDER);

    I’ve also added LOOP JOIN and FORCE ORDER query hints to ensure that only nested loops joins are used, and that the tables are joined in the order they appear.  This new query runs in under 2 seconds.

    Why Is It Faster?

    The main reason for the improvement is the appearance of the eager Index Spools, which are also known as index-on-the-fly spools.  If you read my Inside The Optimiser series you might be interested to know that the rule responsible is called JoinToIndexOnTheFly.

    An eager index spool consumes all rows from the table it sits above, and builds an index suitable for the join to seek on.  Taking the index spool above the #Custs table as an example, it reads all the CustomerID and TerritoryID values with a single scan of the table, and builds an index keyed on CustomerID.  The term ‘eager’ means that the spool consumes all of its input rows when it starts up.  The index is built in a work table in tempdb, has no associated statistics, and only exists until the query finishes executing.

    The result is that each unindexed table is only scanned once, and just for the columns necessary to build the temporary index.  From that point on, every execution of the inner side of the join is answered by a seek on the temporary index – not the base table.

    A second optimization is that the sort on ProductMainID (required by the ORDER BY clause) is performed early, on just the rows coming from the #OrdDetail table.  The optimizer has a good estimate for the number of rows it needs to sort at that stage – it is just the cardinality of the table itself.  The accuracy of the estimate there is important because it helps determine the memory grant given to the sort operation.  Nested loops join preserves the order of rows on its outer input, so sorting early is safe.  (Hash joins do not preserve order in this way, of course).

    The extra lazy spool on the #Prods branch is a further optimization that avoids executing the seek on the temporary index if the value being joined (the ‘outer reference’) hasn’t changed from the last row received on the outer input.  It takes advantage of the fact that rows are still sorted on ProductMainID, so if duplicates exist, they will arrive at the join operator one after the other.

    The optimizer is quite conservative about introducing index spools into a plan, because creating and dropping a temporary index is a relatively expensive operation.  Its presence in a plan is often an indication that a useful index is missing.

    I want to stress that I rewrote the query in this way primarily as an educational exercise – I can’t imagine having to do something so horrible to a production system.

    Improving the Hash Join

    I promised I would return to the solution that uses hash joins.  You might be puzzled that SQL Server can create three new indexes (and perform all those nested loops iterations) faster than it can perform three hash joins.  The answer, again, is down to the poor information available to the optimizer.  In the hash join plan, two of the hash joins have single-row estimates on their build inputs.  SQL Server fixes the amount of memory available for the hash table based on this cardinality estimate, so at run time the hash join very quickly runs out of memory.

    This results in the join spilling hash buckets to disk, and any rows from the probe input that hash to the spilled buckets also get written to disk.  The join process then continues, and may again run out of memory.  This is a recursive process, which may eventually result in SQL Server resorting to a bailout join algorithm, which is guaranteed to complete eventually, but may be very slow.  The data sizes in the example tables are not large enough to force a hash bailout, but they do result in multiple levels of hash recursion.  You can see this for yourself by tracing the Hash Warning event using the Profiler tool.

    The final sort in the plan also suffers from a similar problem: it receives very little memory and has to perform multiple sort passes, saving intermediate runs to disk (the Sort Warnings Profiler event can be used to confirm this).  Notice also that because hash joins don’t preserve sort order, the sort cannot be pushed down the plan toward the #OrdDetail table, as in the nested loops plan.

    OK, so now we understand the problems, what can we do to fix them?  We can address the hash spilling by forcing a different order for the joins:

        SELECT P.ProductMainID AS PID, P.Name, D.OrderQty,
               H.SalesOrderNumber, H.OrderDate, C.TerritoryID
        FROM #Prods P
        JOIN #Custs C
            JOIN #OrdHeader H
                ON H.CustomerID = C.CustomerID
            JOIN #OrdDetail D
                ON D.SalesOrderID = H.SalesOrderID
            ON P.ProductMainID = D.ProductMainID
            AND P.ProductSubID = D.ProductSubID
            AND P.ProductSubSubID = D.ProductSubSubID
        ORDER BY D.ProductMainID
        OPTION (MAXDOP 1, HASH JOIN, FORCE ORDER);

    With this plan, each of the inputs to the hash joins has a good estimate, and no hash recursion occurs.  The final sort still suffers from the one-row estimate problem, and we get a single-pass sort warning as it writes rows to disk.  Even so, the query runs to completion in three or four seconds.  That’s around half the time of the previous hashing solution, but still not as fast as the nested loops trickery.

    Final Thoughts

    SQL Server’s optimizer makes cost-based decisions, so it is vital to provide it with accurate information.  We can’t really blame the performance problems highlighted here on anything other than the decision to use completely unindexed tables, and not to allow the creation of additional statistics.

    I should probably stress that the nested loops solution shown above is not one I would normally contemplate in the real world.  It’s there primarily for its educational and entertainment value.  I might perhaps use it to demonstrate to the sceptical that SQL Server itself is crying out for an index.

    Be sure to read Brad’s original post for more details.  My grateful thanks to him for granting permission to reuse some of his material.

    Paul White
    Email: [email protected]
    Twitter: @PaulWhiteNZ

    Read the article

  • DDD/NHibernate Use of Aggregate root and impact on web design - ex. Editing children of aggregate ro

    - by pbrophy
    Hopefully, this fictitious example will illustrate my problem: Suppose you are writing a system which tracks complaints for a software product, as well as many other attributes about the product. In this case the SoftwareProduct is our aggregate root and Complaints are entities that can only exist as a child of the product. In other words, if the software product is removed from the system, so shall the complaints be. In the system, there is a dashboard-like web page which displays many different aspects of a single SoftwareProduct. One section in the dashboard displays a list of Complaints in a grid-like fashion, showing only some very high-level information for each complaint. When an admin-type user chooses one of these complaints, they are directed to an edit screen which allows them to edit the detail of a single Complaint. The question is: what is the best way for the edit screen to retrieve the single Complaint, so that it can be displayed for editing purposes? Keep in mind we have already established the SoftwareProduct as an aggregate root, therefore direct access to a Complaint should not be allowed. Also, the system is using NHibernate, so eager loading is an option, but my understanding is that even if a single Complaint is eager loaded via the SoftwareProduct, as soon as the Complaints collection is accessed the rest of the collection is loaded. So, how do you get the single Complaint through the SoftwareProduct without incurring the overhead of loading the entire Complaints collection?
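    For illustration, one direction sometimes suggested for this dilemma is NHibernate's collection filters, which query a mapped collection in the database without initializing it in memory. A hedged sketch only; the repository and entity names are assumptions taken from the question, not an established API:

        using NHibernate;

        // Hypothetical repository: fetch one Complaint through its aggregate
        // root without loading the whole Complaints collection.
        public class SoftwareProductRepository
        {
            private readonly ISession _session;

            public SoftwareProductRepository(ISession session)
            {
                _session = session;
            }

            public Complaint GetComplaint(int productId, int complaintId)
            {
                var product = _session.Get<SoftwareProduct>(productId);

                // CreateFilter issues a query scoped to the persistent
                // collection; the collection itself stays uninitialized.
                return _session.CreateFilter(product.Complaints, "where this.Id = :id")
                               .SetParameter("id", complaintId)
                               .UniqueResult<Complaint>();
            }
        }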

    Read the article

  • Load collections eagerly in NHibernate using Criteria API

    - by Zuber
    I have an entity A which HasMany entities B and entities C. All entities A, B and C have some references x, y and z which should be loaded eagerly. I want to read all entities A from the database, and load the collections of B and C eagerly using the Criteria API. So far, I am able to fetch the references in A eagerly, but when the collections are loaded, the references within them are lazily loaded. Here is how I do it:

        AllEntities_A = _session.CreateCriteria(typeof(A))
            .SetFetchMode("x", FetchMode.Eager)
            .SetFetchMode("y", FetchMode.Eager)
            .List<A>().AsQueryable();

    The mapping of entity A using Fluent is as shown below. _B and _C are private ILists for B and C respectively in A.

        Id(c => c.SystemId);
        Version(c => c.Version);
        References(c => c.x).Cascade.All();
        References(c => c.y).Cascade.All();
        HasMany<B>(Reveal.Property<A>("_B"))
            .AsBag()
            .Cascade.AllDeleteOrphan()
            .Not.LazyLoad()
            .Inverse()
            .Cache.ReadWrite().IncludeAll();
        HasMany<C>(Reveal.Property<A>("_C"))
            .AsBag()
            .Cascade.AllDeleteOrphan()
            .LazyLoad()
            .Inverse()
            .Cache.ReadWrite().IncludeAll();

    I don't want to make changes to the mapping file, and would like to load the entire entity A eagerly, i.e. I should get a list of A's containing lists of B's and C's whose reference properties are also loaded eagerly.
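    One untested direction worth noting (an assumption, not a verified fix): the Criteria API accepts dotted association paths in SetFetchMode, so the references inside each collection's elements can be named explicitly. The path strings here ("_B", "_B.x", etc.) are guesses based on the mapping above:

        using NHibernate;
        using NHibernate.Transform;

        var allA = _session.CreateCriteria(typeof(A))
            .SetFetchMode("x", FetchMode.Eager)
            .SetFetchMode("y", FetchMode.Eager)
            // fetch the collections themselves...
            .SetFetchMode("_B", FetchMode.Eager)
            .SetFetchMode("_C", FetchMode.Eager)
            // ...and the references inside their elements (dotted paths)
            .SetFetchMode("_B.x", FetchMode.Eager)
            .SetFetchMode("_C.x", FetchMode.Eager)
            // joining two collections multiplies rows; collapse duplicates
            .SetResultTransformer(Transformers.DistinctRootEntity)
            .List<A>();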

    Read the article

  • JPA join column allows every value...

    - by Fabio Beoni
    I'm testing JPA in a simple case of File/FileVersions tables (master/details) with a OneToMany relation, and I have this problem: in the FileVersions table, the field "file_id" (responsible for the relation with the File table) accepts every value, not only values from the File table. How can I use the JPA mapping to limit the input in FileVersion.file_id to values existing in File.id? My classes are File and FileVersion:

    FILE CLASS

        @Id
        @GeneratedValue(strategy = GenerationType.IDENTITY)
        @Column(name="FILE_ID")
        private Long id;

        @Column(name="NAME", nullable = false, length = 30)
        private String name;

        //RELATIONS -------------------------------------------
        @OneToMany(mappedBy="file", fetch=FetchType.EAGER)
        private Collection<FileVersion> fileVersionsList;
        //-----------------------------------------------------

    FILEVERSION CLASS

        @Id
        @GeneratedValue(strategy = GenerationType.IDENTITY)
        @Column(name="VERSION_ID")
        private Long id;

        @Column(name="FILENAME", nullable = false, length = 255)
        private String fileName;

        @Column(name="NOTES", nullable = false, length = 200)
        private String notes;

        //RELATIONS -------------------------------------------
        @ManyToOne(fetch=FetchType.EAGER)
        @JoinColumn(name="FILE_ID", referencedColumnName="FILE_ID", nullable=false)
        private File file;
        //-----------------------------------------------------

    and this is the FILEVERSION TABLE:

        CREATE TABLE `JPA-Support`.`FILEVERSION` (
            `VERSION_ID` bigint(20) NOT NULL AUTO_INCREMENT,
            `FILENAME` varchar(255) NOT NULL,
            `NOTES` varchar(200) NOT NULL,
            `FILE_ID` bigint(20) NOT NULL,
            PRIMARY KEY (`VERSION_ID`),
            KEY `FK_FILEVERSION_FILE_ID` (`FILE_ID`)
        ) ENGINE=MyISAM AUTO_INCREMENT=4 DEFAULT CHARSET=latin1

    Read the article

  • NHibernate, can I improve my mapping or query?

    - by dbones
    Take this simple example: a Staff class which references other instances of the Staff class.

        public class Staff
        {
            public Staff()
            {
                Team = new List<Staff>();
            }
            public virtual int Id { get; set; }
            public virtual string Name { get; set; }
            public virtual IList<Staff> Team { get; set; }
            public virtual Staff Manager { get; set; }
        }

    The Fluent mapping:

        public class StaffMap : ClassMap<Staff>
        {
            public StaffMap()
            {
                Id(x => x.Id);
                Map(x => x.Name);
                References(x => x.Manager).Column("ManagerId");
                HasMany(x => x.Team).KeyColumn("ManagerId").Inverse();
            }
        }

    Now I want to run a query which will load all the Staff and eager load the Manager and Team members. This is what I came up with:

        IList<Staff> results = session.CreateCriteria<Staff>()
            .SetFetchMode("Team", FetchMode.Eager)
            .SetResultTransformer(Transformers.DistinctRootEntity)
            .List<Staff>();

    However, the SQL (which does what I want) has duplicate columns: two team2_.ManagerId and two team2_.Id.

        SELECT this_.Id as Id0_1_,
               this_.Name as Name0_1_,
               this_.ManagerId as ManagerId0_1_,
               team2_.ManagerId as ManagerId3_,
               team2_.Id as Id3_,
               team2_.Id as Id0_0_,
               team2_.Name as Name0_0_,
               team2_.ManagerId as ManagerId0_0_
        FROM [SelfRef].[dbo].[Staff] this_
        left outer join [SelfRef].[dbo].[Staff] team2_
            on this_.Id = team2_.ManagerId

    The question is: should this be happening? Did I do something wrong in the query or the map? Or is it a feature of the NHibernate version I'm using (2.1.0.4000)? Many thanks in advance.

    Read the article

  • @ManyToMany Duplicate Entry Exception

    - by zp26
    I have mapped a bidirectional many-to-many association between the entities Course and Trainee in the following manner:

        Course {
            ...
            private Collection<Trainee> students;
            ...
            @ManyToMany(targetEntity = lesson.domain.Trainee.class,
                        cascade = {CascadeType.ALL},
                        fetch = FetchType.EAGER)
            @JoinTable(name = "COURSE_TRAINEE",
                       joinColumns = @JoinColumn(name = "COURSE_ID"),
                       inverseJoinColumns = @JoinColumn(name = "TRAINEE_ID"))
            @CollectionOfElements
            public Collection<Trainee> getStudents() {
                return students;
            }
            ...
        }

        Trainee {
            ...
            private Collection<Course> authCourses;
            ...
            @ManyToMany(cascade = {CascadeType.ALL},
                        fetch = FetchType.EAGER,
                        mappedBy = "students",
                        targetEntity = lesson.domain.Course.class)
            @CollectionOfElements
            public Collection<Course> getAuthCourses() {
                return authCourses;
            }
            ...
        }

    Instead of creating a table where the primary key is made of the two foreign keys (imported from the tables of the two related entities), the system generates the table "COURSE_TRAINEE" with the following schema: [schema screenshot not shown]. I am working on MySQL 5.1 and my app server is JBoss 5.1. Does anyone guess why?

    Read the article

  • Accessing Application Scoped Bean Causes NullPointerException

    - by user2946861
    What is an application scoped bean? I understand it to be a bean which will exist for the life of the application, but that doesn't appear to be the correct interpretation. I have an application which creates an application scoped bean on startup (eager=true) and then a session bean that tries to access the application scoped bean's objects (which are also application scoped). But when I try to access the application scoped bean from the session scoped bean, I get a NullPointerException. Here are excerpts from my code:

    Application scoped bean:

        @ManagedBean(eager=true)
        @ApplicationScoped
        public class Bean1 implements Serializable {
            private static final long serialVersionUID = 12345L;
            protected ArrayList<App> apps;
            // construct apps so they are available for the session scoped bean
            // do time consuming stuff...
            // getters + setters
        }

    Session scoped bean:

        @ManagedBean
        @SessionScoped
        public class Bean2 implements Serializable {
            private static final long serialVersionUID = 123L;
            @Inject
            private Bean1 bean1;
            private ArrayList<App> apps = bean1.getApps(); // null pointer exception
        }

    What appears to be happening is: Bean1 is created, does its stuff, then is destroyed before Bean2 can access it. I was hoping using application scope would keep Bean1 around until the container was shut down, or the application was killed, but this doesn't appear to be the case.

    Read the article

  • Hibernate many-to-many relationship

    - by Capitan
    I have two mapped types, related many-to-many.

        @Entity
        @Table(name = "students")
        public class Student {
            ...
            @ManyToMany(fetch = FetchType.EAGER)
            @JoinTable(
                name = "students2courses",
                joinColumns = { @JoinColumn(name = "student_id", referencedColumnName = "_id") },
                inverseJoinColumns = { @JoinColumn(name = "course_id", referencedColumnName = "_id") })
            public Set<Course> getCourses() {
                return courses;
            }
            public void setCourses(Set<Course> courses) {
                this.courses = courses;
            }
            ...
        }

        @Entity
        @Table(name = "courses")
        public class Course {
            ...
            @ManyToMany(fetch = FetchType.EAGER, mappedBy = "courses")
            public Set<Student> getStudents() {
                return students;
            }
            public void setStudents(Set<Student> students) {
                this.students = students;
            }
            ...
        }

    But if I update/delete a Course entity, records are not created/deleted in the table students2courses (with a Student entity, updating/deleting goes as expected). I wrote an abstract class HibObject:

        public abstract class HibObject {
            public String getRemoveMTMQuery() {
                return null;
            }
        }

    which is inherited by Student and Course. In the DAO I added this code (for the delete() method):

        String query = obj.getRemoveMTMQuery();
        if (query != null) {
            session.createSQLQuery(query).executeUpdate();
        }

    and I overrode the method getRemoveMTMQuery() for Course:

        @Override
        @Transient
        public String getRemoveMTMQuery() {
            return "delete from students2courses where course_id = " + id + ";";
        }

    Now it works, but I think it's bad code. Is there a better way to solve this problem?

    Read the article

  • Quick guide to Oracle IRM 11g: Server installation

    - by Simon Thorpe
    Quick guide to Oracle IRM 11g index

    This is the first of a set of articles designed to assist with the successful installation, configuration and deployment of a document security solution using Oracle IRM. This article goes through a set of simple instructions which detail how to download, install and configure the IRM server, the starting point for building a document security solution. This article contains a subset of information from the official documentation and is focused on installing the server on Oracle Enterprise Linux. If you are planning to deploy on a non-Linux platform, you will need to reference the documentation for platform-specific information.

    Contents: Introduction | Downloading the software | Preparing the database | Creating the IRM database schema | WebLogic Server installation | Installing Oracle IRM

    Introduction

    Because we are using Oracle Enterprise Linux in this guide, and before we get into the detail of IRM, I'd like to share some tips with Linux to make life a bit easier.

      • Use a 64-bit platform. IRM 11g runs just fine on a 32-bit server, but with 64-bit you will build a more future-proof service.
      • Download and install the latest Java JDK package. Make sure you get the 64-bit version if you are on a 64-bit server.
      • Configure Linux to use a good Yum server to simplify installing packages. For Oracle Enterprise Linux we maintain a great public Yum server.
      • Have at least 20GB of free disk space on the partition where you intend to install the IRM server. The downloads are big; you extract them and then install. This quickly consumes disk space, which you can easily recover by deleting the downloaded and extracted files afterwards. But it's nice to have the disk space spare to keep these around in case you need to restart any part of the installation process.

    Downloading the software

    OK, so before you can do anything, you need the software install kits. Luckily Oracle allows you to freely download every technology we create. You'll need to get the following:

      • Oracle WebLogic Server
      • Oracle Database
      • Oracle Repository Creation Utility (RCU)
      • Oracle IRM server

    You can use Microsoft SQL Server 2005 or 2008; in this guide I've used Oracle RDBMS 11gR2 for Linux.

    Preparing the database

    I'm not going to go through the finer points of installing the database. There are many very good guides on installing the Oracle Database. However, one thing I would suggest you think about is enabling TDE, network encryption and using Database Vault. These Oracle database security technologies are excellent for creating a complete end-to-end security solution. No point in going to all the effort to secure document access with IRM when someone can go directly to the database and assign themselves rights to documents. To understand this further, you can see a video of the IRM service using these database security technologies.

    With a database up and running, we need to create a schema to hold the IRM data. This schema contains the rights model, cryptographic keys, user account IDs and associated rights, etc.

    Creating the IRM database schema

    Oracle uses the Repository Creation Utility, which builds your schema. Extract the files from the RCU zip, then in a terminal window:

        cd /oracle/install/rcu/bin
        ./rcu

    This will launch the Repository Creation Utility. Hit next and continue on to the next dialog. You are asked if you are going to be creating a new schema or wish to drop an existing one; you just need to click next at this point to create a new schema.

    The RCU next needs to know where your database is, so you'll need the following details of your database instance. Below, for reference, is the information for my installation:

      • Hostname: irm.oracle.demo
      • Port: 1521 (this is the default TCP port for the Oracle Database)
      • Service Name: irm.oracle.demo (note this is not the SID, but the service name)
      • Username: sys
      • Password: ********
      • Role: SYSDBA

    Then select next. Because the RCU contains schemas for many of the Oracle technologies, you now need to select to deploy just the Oracle IRM schema. Open the section under "Enterprise Content Management" and tick the "Oracle Information Rights Management" component. Note that you also get the chance to select a prefix, which defaults to "DEV" (for development). I usually change this to something that reflects my own install: PROD for a production system, INT for internal only, etc.

    The next step asks for the passwords for the schema users. We are only creating one schema here, so you just enter one password. Some brave souls store this password in an Excel spreadsheet, which is then secured against the IRM server you're about to install in this guide.

    Nearing the end of the schema creation is the mapping of the tablespaces to the schema. Note I had already set up a tablespace that was encrypted using TDE, and at this point I was able to select that tablespace by clicking in the "Default Tablespace" column. The next dialog confirms your actions, and clicking on next causes it to create the schema and default data. After this you are presented with the completion summary.

    WebLogic Server installation

    The database is now ready, and the next step is to install the application server. Oracle IRM 11g is a JEE application and is currently only supported in Oracle WebLogic Server. So the next step is to get WebLogic Server installed, which is pretty easy. Depending on the version you download, you either run the binary or, for a 64-bit platform (like mine), run the following command:

        java -d64 -jar wls1033_generic.jar

    In the resulting dialog, hit next to start walking through the install. Next choose a directory into which you will install WebLogic Server. I like to change from the default and install into /oracle/. Then all my software goes into this one folder, all owned by the "oracle" user.

    The next dialog asks for your Oracle support information to ensure you are kept up to date. If you have an Oracle support account, enter your details, but for most evaluation systems I leave these fields blank. Again, for evaluation or development systems, I usually stick with the "Typical" install type which you are next asked for. Next you are asked for the JDK which will be used for the server. When installing from the generic jar on a 64-bit platform like in this guide, no JDK is bundled with the installer, but it does a good job of detecting the one you've got installed. Defaults for the install directories are usually taken; no changes here, just click next. And finally we are ready to install: hit next, sit back and relax. Typically this takes about 10 minutes.

    After the install, do not run the quick start; we need to deploy the IRM install itself, from which we will create a new WebLogic domain. For now, just hit done and let's move to the final step of the installation process.

    Installing Oracle IRM

    The last piece of the puzzle to getting your environment ready is to deploy the IRM files themselves. Unzip the Oracle Enterprise Content Management 11g zip file and it will create a Disk1 directory. Switch to this folder and in the console run:

        ./runInstaller

    This will launch the installer, which will also ask for the location of the JDK. You should now see the first stage of the IRM installation. The dialog warns that you need to have a WebLogic server installed and have created the schemas, but you've just done all that above (I hope), so we are ready to go.

    The installer now checks that you have all the required libraries installed and that other system parameters are correct. Because nearly all of my development and evaluation installations have the database server on the same system, the installer passes these checks without issue... Next...

    Now choose where to install the IRM files. You must install into the same Middleware Home as the WebLogic Server installation you just performed; usually the installer already defaults to this location anyway. I also tend to change the Oracle Home Directory to Oracle_IRM so it's clear this is just an IRM install.

    The summary page tells you about the space needed to deploy the files. Unfortunately the IRM install comes with all of the other Oracle ECM software; you can't just select the IRM files, everything gets deployed to disk and uses 1.6GB of space! Not fun, but Oracle has to package up similar technologies, otherwise we would have a very large number of installers to QA and manage. Again, not fun. Hit Install; time for another drink, maybe a piece of cake or a donut... on a half-decent system this part of the install took under 10 minutes.

    Finally the installation of your IRM server is complete. Click on finish; the next phase is to create the WebLogic domain and start configuring your server. Now move on to the next article in this guide... configuring your IRM server ready to seal your first document.

    Read the article

  • Linux Mint vs Kubuntu

    - by Hannes de Jager
    I'm currently running Kubuntu Karmic Koala and am eager to upgrade to 10.04 at the end of the month. But I've also spotted Linux Mint and heard a couple of good things about it. It looks snazzy, but I was wondering how it compares to Ubuntu/Kubuntu. For those who have run both, can you provide some pros and cons?

    Read the article
