Search Results

Search found 61071 results on 2443 pages for 'spring data jpa'.

Page 29/2443 | < Previous Page | 25 26 27 28 29 30 31 32 33 34 35 36  | Next Page >

  • Spring Hibernate Connection through AOP standalone application

    - by Kiran
    I am trying to develop an annotation-based Spring Hibernate standalone application to connect to a database. I've gone through some blogs and learned that we should not use HibernateTemplate, because it couples your application tightly to the Spring framework; for this reason, Spring recommends that HibernateTemplate no longer be used. Furthermore, my requirement has changed to Spring Hibernate with AOP using declarative transaction management. I am new to AOP concepts. Can anyone please give an example of a Spring Hibernate connection through AOP? That would be a great help to me. Thanks in advance.
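
    A minimal sketch of what the declarative, proxy-based setup usually boils down to: plain annotation configuration, no HibernateTemplate, and @Transactional doing the AOP part. The package names, the DataSource settings and the Account/AccountDao classes are illustrative assumptions, and the exact package of LocalSessionFactoryBean/HibernateTransactionManager depends on the Hibernate version.

        import javax.sql.DataSource;
        import org.hibernate.SessionFactory;
        import org.springframework.beans.factory.annotation.Autowired;
        import org.springframework.context.annotation.*;
        import org.springframework.jdbc.datasource.DriverManagerDataSource;
        import org.springframework.orm.hibernate4.HibernateTransactionManager;   // or ...hibernate5
        import org.springframework.orm.hibernate4.LocalSessionFactoryBean;        // or ...hibernate5
        import org.springframework.stereotype.Repository;
        import org.springframework.transaction.PlatformTransactionManager;
        import org.springframework.transaction.annotation.EnableTransactionManagement;
        import org.springframework.transaction.annotation.Transactional;

        @Configuration
        @EnableTransactionManagement          // turns @Transactional into AOP proxies
        @ComponentScan("com.example.dao")
        public class AppConfig {

            @Bean
            public DataSource dataSource() {
                DriverManagerDataSource ds = new DriverManagerDataSource();
                ds.setDriverClassName("com.mysql.jdbc.Driver");   // assumed MySQL setup
                ds.setUrl("jdbc:mysql://localhost:3306/mydb");
                ds.setUsername("user");
                ds.setPassword("secret");
                return ds;
            }

            @Bean
            public LocalSessionFactoryBean sessionFactory() {
                LocalSessionFactoryBean sf = new LocalSessionFactoryBean();
                sf.setDataSource(dataSource());
                sf.setPackagesToScan("com.example.model");        // annotated entities live here
                return sf;
            }

            @Bean
            public PlatformTransactionManager transactionManager(SessionFactory sessionFactory) {
                return new HibernateTransactionManager(sessionFactory);
            }
        }

        // --- separate file, picked up by the component scan ---
        @Repository
        class AccountDao {

            @Autowired
            private SessionFactory sessionFactory;

            @Transactional   // transaction opened/committed/rolled back around this method by Spring AOP
            public void save(Object account) {
                sessionFactory.getCurrentSession().save(account);
            }
        }

        // standalone usage:
        // new AnnotationConfigApplicationContext(AppConfig.class)
        //         .getBean(AccountDao.class).save(new Account(...));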

    Read the article

  • How significant are JPA lazy loading performance benefits?

    - by Robert
    I understand that this is highly specific to the concrete application, but I'm just wondering what's the general opinion, or at least some personal experiences on the issue. I have an aversion towards the 'open session in view' pattern, so to avoid it, I'm thinking about simply fetching everything small eagerly, and using queries in the service layer to fetch larger stuff. Has anyone used this and regretted it? And is there maybe some elegant solution to lazy loading in the view layer that I'm not aware of?
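
    One middle ground that avoids open-session-in-view without eager-loading everything is to fetch exactly what a given view needs with a fetch join in the service layer; the Order and items names below are hypothetical stand-ins.

        import javax.persistence.EntityManager;
        import javax.persistence.PersistenceContext;
        import java.util.List;

        public class OrderService {

            @PersistenceContext
            private EntityManager em;

            // The collection is mapped lazy, but this one use case loads it in a single query,
            // so the view never touches an uninitialized proxy.
            public List<Order> findOrdersWithItems(long customerId) {
                return em.createQuery(
                        "select distinct o from Order o join fetch o.items where o.customer.id = :cid",
                        Order.class)
                    .setParameter("cid", customerId)
                    .getResultList();
            }
        }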

    Read the article

  • How to specify a different column for an @Inheritance JPA annotation

    - by Cue
    @Entity @Inheritance(strategy = InheritanceType.JOINED) public class Foo
    @Entity @Inheritance(strategy = InheritanceType.JOINED) public class BarFoo extends Foo

        mysql> desc foo;
        +--------+------+
        | Field  | Type |
        +--------+------+
        | id     | int  |
        +--------+------+

        mysql> desc barfoo;
        +--------+------+
        | Field  | Type |
        +--------+------+
        | id     | int  |
        | foo_id | int  |
        | bar_id | int  |
        +--------+------+

        mysql> desc bar;
        +--------+------+
        | Field  | Type |
        +--------+------+
        | id     | int  |
        +--------+------+

    Is it possible to specify column barfoo.foo_id as the joined column? Are you allowed to specify barfoo.id as BarFoo's @Id, since you are overriding the getters/setters of class Foo? I understand the schematics behind this relationship (or at least I think I do) and I'm OK with them. The reason I want an explicit id field for BarFoo is exactly that I want to avoid using a joined key (foo_id, bar_id) when querying for BarFoo(s) or when it is used in a "stronger" constraint (as Ruben put it).
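
    For the first part of the question, the standard JPA way to point the JOINED subclass at a differently named join column is @PrimaryKeyJoinColumn on the subclass; a sketch assuming barfoo.foo_id is meant to reference foo.id:

        import javax.persistence.*;

        // --- Foo.java ---
        @Entity
        @Inheritance(strategy = InheritanceType.JOINED)
        public class Foo {
            @Id
            @GeneratedValue
            private int id;
        }

        // --- BarFoo.java ---
        @Entity
        @PrimaryKeyJoinColumn(name = "foo_id", referencedColumnName = "id")  // barfoo.foo_id -> foo.id
        public class BarFoo extends Foo {
            // With JOINED inheritance the subclass shares the parent's @Id; plain JPA does not
            // allow BarFoo to declare a second, independent @Id of its own.
        }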

    Read the article

  • JPA 2.0 Eclipse Link

    - by Parhs
    Hello... I have this code: @Column(updatable=false) @Enumerated(EnumType.STRING) private ExamType examType; However, I can still change the value when I update the entity via merge(). Why?

    Read the article

  • Java JPA: does a @OneToMany need to reciprocate @ManyToOne?

    - by bguiz
    Create Table A ( ID varchar(8), Primary Key(ID) ); Create Table B ( ID varchar(8), A_ID varchar(8), Primary Key(ID), Foreign Key(A_ID) References A(ID) ); Given that I have created two tables using the SQL statements above, and I want to create Entity classes for them, for the class B I have these member attributes: @Id @Column(name = "ID", nullable = false, length = 8) private String id; @JoinColumn(name = "A_ID", referencedColumnName = "ID", nullable = false) @ManyToOne(optional = false) private A AId; In class A, do I need to reciprocate the many-to-one relationship? @Id @Column(name = "ID", nullable = false, length = 8) private String id; @OneToMany(cascade = CascadeType.ALL, mappedBy = "AId") private List<B> BList; //<-- Is this attribute necessary? Is it necessary, or a good idea, to have a reciprocal @OneToMany for the @ManyToOne? If I make the design decision to leave out the @OneToMany annotated attribute now, will it come back to bite me further down the track?
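
    The reciprocal @OneToMany is optional: the @ManyToOne side alone fully maps the foreign key, and the B rows for a given A can still be reached with a query when the collection is omitted. A sketch of the unidirectional variant; the JPQL in the comment is the piece that replaces a.getBList():

        import javax.persistence.*;

        // --- A.java --- (no BList at all)
        @Entity
        public class A {
            @Id
            @Column(name = "ID", nullable = false, length = 8)
            private String id;
        }

        // --- B.java --- (owning side, unchanged)
        @Entity
        public class B {
            @Id
            @Column(name = "ID", nullable = false, length = 8)
            private String id;

            @ManyToOne(optional = false)
            @JoinColumn(name = "A_ID", referencedColumnName = "ID", nullable = false)
            private A aId;
        }

        // Fetching the children through the owning side instead of a mapped collection:
        // List<B> bs = em.createQuery("select b from B b where b.aId.id = :aid", B.class)
        //                .setParameter("aid", someAId)
        //                .getResultList();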

    Read the article

  • Unique constraint not created in JPA

    - by homaxto
    I have created the following entity bean, and specified two columns as being unique. Now my problem is that the table is created without the unique constraint, and no errors in the log. Does anyone have an idea? @Entity @Table(name = "cm_blockList", uniqueConstraints = @UniqueConstraint(columnNames = {"terminal", "blockType"})) public class BlockList { @Id @GeneratedValue(strategy = GenerationType.AUTO) private int id; @ManyToOne(cascade = CascadeType.PERSIST) @JoinColumn(name="terminal") private Terminal terminal; @Enumerated(EnumType.STRING) private BlockType blockType; private String regEx; }
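
    One thing worth checking is whether the names in columnNames match the physical column names the provider will generate, and whether the schema is created from scratch rather than updated in place (some schema-update modes skip adding constraints to existing tables). A sketch with both columns named explicitly so the constraint can resolve them; Terminal and BlockType are the types from the question:

        import javax.persistence.*;

        @Entity
        @Table(name = "cm_blockList",
               uniqueConstraints = @UniqueConstraint(columnNames = {"terminal", "blockType"}))
        public class BlockList {

            @Id
            @GeneratedValue(strategy = GenerationType.AUTO)
            private int id;

            @ManyToOne(cascade = CascadeType.PERSIST)
            @JoinColumn(name = "terminal")       // matches "terminal" in the constraint
            private Terminal terminal;

            @Enumerated(EnumType.STRING)
            @Column(name = "blockType")          // named explicitly so the constraint can find it
            private BlockType blockType;

            private String regEx;
        }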

    Read the article

  • Problem updating collection using JPA

    - by FarmBoy
    I have an entity class Foo foo that contains Collection<Bar> bars. I've tried a variety of ways, but I'm unable to successfully update my collection. One attempt: foo = em.find(Foo.class, key); foo.getBars().clear(); foo.setBars(bars); em.flush(); // commit, etc. This appends the new collection to the old one. Another attempt: foo = em.find(Foo.class, key); bars = foo.getBars(); for (Bar bar : bars) { em.remove(bar); } em.flush(); At this point, I thought I could add the new collection, but I find that the entity foo has been wiped out. Here are some annotations. In Foo: @OneToMany(cascade = { CascadeType.ALL }, mappedBy = "foo") private List<Bar> bars; In Bar: @ManyToOne(optional = false, cascade = { CascadeType.ALL }) @JoinColumn(name = "FOO_ID") private Foo foo; Has anyone else had trouble with this? Any ideas?
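
    One pattern that tends to make "replace the whole collection" behave is to keep the same managed List instance, mutate it in place, and let orphanRemoval (JPA 2.0) delete whatever falls out of it; a sketch, with the setter swap replaced by an in-place method:

        import javax.persistence.*;
        import java.util.ArrayList;
        import java.util.Collection;
        import java.util.List;

        // --- Foo.java ---
        @Entity
        public class Foo {

            @Id @GeneratedValue
            private Long id;

            // orphanRemoval deletes Bars that are removed from the list; no manual em.remove needed
            @OneToMany(mappedBy = "foo", cascade = CascadeType.ALL, orphanRemoval = true)
            private List<Bar> bars = new ArrayList<Bar>();

            public void replaceBars(Collection<Bar> newBars) {
                bars.clear();                  // mutate the managed collection,
                for (Bar bar : newBars) {      // do not swap it for a brand-new list
                    bar.setFoo(this);          // keep the owning side in sync
                    bars.add(bar);
                }
            }
        }

        // --- Bar.java ---
        @Entity
        public class Bar {

            @Id @GeneratedValue
            private Long id;

            @ManyToOne(optional = false)
            @JoinColumn(name = "FOO_ID")
            private Foo foo;

            public void setFoo(Foo foo) { this.foo = foo; }
        }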

    Read the article

  • JPA 2.0 How to persist in order

    - by parhs
    Hello, I have two entities: Exam and Exam_Normal. EXAM: @Id @GeneratedValue(strategy=GenerationType.IDENTITY) private Long id; private String name; private String codeName; @Enumerated(EnumType.STRING) private ExamType examType; @ManyToOne private Category category; @OneToMany(mappedBy="id",cascade=CascadeType.PERSIST) @OrderBy("id") private List<Exam_Normal> exam_Normal; EXAM_NORMAL: @Id private Long item; @Id @ManyToOne private Exam id; @Enumerated(EnumType.STRING) private Gender gender; private Integer age_month_from; private Integer age_month_to; The problem is that if I put a list of Exam_Normal on the Exam class and try to persist the Exam, I get an error because the provider tries to persist the Exam_Normal rows first, but it can't, because the primary key of Exam is missing (it isn't persisted yet). Is there any way to define the order? Or should I set the list to null, persist, and then set the list again? Thanks :)
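
    If the key of Exam_Normal has to contain the generated Exam id, one workaround is to persist and flush the parent first, so the IDENTITY value exists before any child row is written. The entity names follow the question; the wrapper class and the setId(Exam) setter are assumptions:

        import javax.persistence.EntityManager;
        import javax.persistence.PersistenceContext;
        import java.util.List;

        public class ExamService {

            @PersistenceContext
            private EntityManager em;

            // call inside an active transaction
            public void saveExam(Exam exam, List<Exam_Normal> normals) {
                em.persist(exam);
                em.flush();                 // force the INSERT so exam's id is assigned
                for (Exam_Normal normal : normals) {
                    normal.setId(exam);     // 'id' is the @ManyToOne back-reference in the question
                    em.persist(normal);
                }
            }
        }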

    Read the article

  • How to change password hashing algorithm when using spring security?

    - by harry
    I'm working on a legacy Spring MVC based web application which uses a hashing algorithm that is, by current standards, inappropriate. Now I want to gradually migrate all hashes to bcrypt. My high-level strategy is: new hashes are generated with bcrypt by default; when a user successfully logs in and still has a legacy hash, the app replaces the old hash with a new bcrypt hash. What is the most idiomatic way of implementing this strategy with Spring Security? Should I use a custom Filter, or my own AccessDecisionManager, or …?
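
    For what it's worth, newer Spring Security versions (5.1+) have this upgrade-on-login flow built in: a DelegatingPasswordEncoder matches legacy hashes (stored with an {id} prefix) while encoding new ones with bcrypt, and DaoAuthenticationProvider calls UserDetailsPasswordService after a successful login when the stored hash uses an outdated encoder. A hedged sketch; UserRepository is a hypothetical persistence interface, and on older versions the same idea can be wired manually, for example from an AuthenticationSuccessEvent listener while the raw password is still available.

        import org.springframework.context.annotation.Bean;
        import org.springframework.context.annotation.Configuration;
        import org.springframework.security.core.userdetails.UserDetails;
        import org.springframework.security.core.userdetails.UserDetailsPasswordService;
        import org.springframework.security.core.userdetails.UserDetailsService;
        import org.springframework.security.crypto.factory.PasswordEncoderFactories;
        import org.springframework.security.crypto.password.PasswordEncoder;

        interface UserRepository {   // hypothetical storage abstraction
            UserDetails findByUsername(String username);
            void savePassword(String username, String encodedPassword);
        }

        @Configuration
        class PasswordUpgradeConfig {

            // bcrypt for new hashes, but still able to *match* prefixed legacy hashes
            @Bean
            public PasswordEncoder passwordEncoder() {
                return PasswordEncoderFactories.createDelegatingPasswordEncoder();
            }
        }

        class UpgradingUserDetailsService implements UserDetailsService, UserDetailsPasswordService {

            private final UserRepository users;

            UpgradingUserDetailsService(UserRepository users) { this.users = users; }

            @Override
            public UserDetails loadUserByUsername(String username) {
                return users.findByUsername(username);
            }

            // Called by DaoAuthenticationProvider after a successful login when the stored
            // hash should be upgraded; newPassword already arrives bcrypt-encoded.
            @Override
            public UserDetails updatePassword(UserDetails user, String newPassword) {
                users.savePassword(user.getUsername(), newPassword);
                return users.findByUsername(user.getUsername());
            }
        }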

    Read the article

  • Doesn't Spring really support Interface injection at all?

    - by mrCoder
    Hi, I know that Spring doesn't support interface injection, and I've read that many times. But today, as I came across an article about IoC by Martin Fowler (link), it seemed that using ApplicationContextAware in Spring is somewhat similar to interface injection. Whenever Spring's context reference is required in our Spring bean, we implement ApplicationContextAware, implement the setApplicationContext(ApplicationContext context) method, and include the bean in the config file. Isn't this the same as interface injection, where we're telling Spring to inject (or, say, pass) the reference of the context into this bean? Or am I missing something here? Thanks for any information! ManiKanta
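
    For reference, this is the shape being discussed: the bean implements a framework-owned interface and Spring calls the setter itself, which is why it reads so much like Fowler's interface injection even though Spring's documentation groups it under the Aware callback interfaces.

        import org.springframework.beans.BeansException;
        import org.springframework.context.ApplicationContext;
        import org.springframework.context.ApplicationContextAware;
        import org.springframework.stereotype.Component;

        @Component
        public class ContextHolder implements ApplicationContextAware {

            private ApplicationContext context;

            // Spring detects the interface on the bean and injects the context through it;
            // the dependency is expressed by the interface itself, not by config metadata.
            @Override
            public void setApplicationContext(ApplicationContext context) throws BeansException {
                this.context = context;
            }

            public <T> T getBean(Class<T> type) {
                return context.getBean(type);
            }
        }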

    Read the article

  • Where is the best place to count the lazy-loaded property using JPA

    - by Ke
    Let's say we have a "Question" and an "Answer" entity: @Entity public class Question extends IdEntity { @Lob private String content; @Transient private int answerTotal; @OneToMany(fetch = FetchType.LAZY) private List<Answer> answers = new ArrayList<Answer>(); ...... I need to report how many answers a question has every time Question is queried, so I need to do a count: String count = "select count(o) from Answer o WHERE o.question=:q"; My question is, where is the best place to do the count? (I run a lot of queries against the Question entity: by date, by tag, by category, by asker, etc., so it is obviously not a good solution to add a count operation to each query.) My first attempt was to implement a @PostLoad listener, so that every time a Question entity is loaded I do the count. However, an EntityManager cannot be injected into a listener, so this approach does not work. Any hints?
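
    If Hibernate is the provider, one option that avoids both the listener and touching every query is a read-only @Formula attribute, which the database computes in the same SELECT that loads the Question. IdEntity and Answer are as in the question; the answer table and question_id column names are assumptions about the physical schema.

        import javax.persistence.*;
        import java.util.ArrayList;
        import java.util.List;
        import org.hibernate.annotations.Formula;

        @Entity
        public class Question extends IdEntity {

            @Lob
            private String content;

            // Hibernate-specific: evaluated as a correlated subquery on every load of Question.
            // Replaces the @Transient field, so no listener and no extra service-layer count query.
            @Formula("(select count(*) from answer a where a.question_id = id)")
            private int answerTotal;

            @OneToMany(fetch = FetchType.LAZY)
            private List<Answer> answers = new ArrayList<Answer>();
        }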

    Read the article

  • JPA 2.0 Eclipse Link ... Composite primary keys

    - by Parhs
    I have two entities (actually more, but it doesn't matter): Exam and Exam_Normals. It's a one-to-many relationship... The problem is that I need a composite primary key for Exam_Normals, PK(Exam_ID, Item), where Item should be 1, 2, 3, 4, 5, etc., but I can't achieve it; I keep getting errors. An alternative would be to use an IDENTITY column and a @ManyToOne relationship on Exam_Normals, but that would look like PK(Exam_Normals_ID), plus a reference to Exam and an extra column Item to keep an order, so three columns. To avoid that alternative I tried @IdClass and got errors; I also tried @EmbeddedId, and nothing works. Any ideas?
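
    JPA 2.0's derived identity is usually the way out of this: give Exam_Normal an @EmbeddedId composed of (examId, item) and tie the examId part to the @ManyToOne with @MapsId. Field and column names in this sketch are assumptions:

        import javax.persistence.*;
        import java.io.Serializable;

        @Embeddable
        class ExamNormalId implements Serializable {
            private Long examId;   // filled in from the Exam relationship via @MapsId
            private Long item;     // 1, 2, 3, ... within one exam
            // an id class should also implement equals() and hashCode() over both fields
        }

        @Entity
        public class Exam_Normal {

            @EmbeddedId
            private ExamNormalId id = new ExamNormalId();

            @MapsId("examId")              // maps the examId part of the embedded id to this relation
            @ManyToOne
            @JoinColumn(name = "EXAM_ID")
            private Exam exam;
        }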

    Read the article

  • Question on jpa joined table inheritance

    - by soontobeared
    Hi, the @DiscriminatorColumn annotation isn't creating any column in my parent entity. Where am I going wrong? Here's my code: @Entity @Inheritance(strategy=InheritanceType.JOINED) @DiscriminatorColumn(name="TYPE", discriminatorType=DiscriminatorType.STRING, length=20) public class WorkUnit extends BaseEntityClass implements Serializable { ... } @Entity @DiscriminatorValue(value="G") @Table(name="Group_") @PrimaryKeyJoinColumn public class Group extends WorkUnit implements Serializable { ... }

    Read the article

  • JPA @Version - can it be used to calculate the version of a table entry?

    - by OpenSource
    Hi, please consider the following table (created using a corresponding entity):

        request
        -------
        id  requestor  type  version  items
        1   a          t1    1        5
        2   a          t1    2        3
        3   b          t1    1        2
        4   a          t2    1        4
        5   a          t1    3        9

    The above is what I want to achieve. The version field is a calculated field; the others are user provided. Basically, the request's version needs to be calculated based on the combination of requestor and type: the first occurrence of a given combination will have version 1, the next version 2, and so on. I tried various things using @Version on a different entity with just the three columns and joining the two entities using @ManyToOne, etc., but I'm not able to get the desired outcome. I don't want to confuse you with the things I tried. Since the objective is simple, there should be an easier way, I suppose? Can you please help? Any help greatly appreciated! Thanks in advance.
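
    Since this version is business data rather than optimistic-locking metadata, @Version is probably the wrong tool. A simpler route is to compute max(version) + 1 for the (requestor, type) pair in the same transaction that persists the row; a sketch assuming a Request entity with requestor, type and an Integer version field:

        import javax.persistence.EntityManager;
        import javax.persistence.PersistenceContext;

        public class RequestService {

            @PersistenceContext
            private EntityManager em;

            // Run inside a transaction. Under heavy concurrency, add a unique constraint on
            // (requestor, type, version) and retry when an insert collides.
            public void create(Request request) {
                Integer max = em.createQuery(
                        "select coalesce(max(r.version), 0) from Request r "
                      + "where r.requestor = :req and r.type = :type", Integer.class)
                    .setParameter("req", request.getRequestor())
                    .setParameter("type", request.getType())
                    .getSingleResult();
                request.setVersion(max + 1);
                em.persist(request);
            }
        }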

    Read the article

  • JPA GeneratedValue with GenerationType.TABLE does a big jump after jvm restart

    - by joeduardo
    When I start my server and add an entry, the generated id will start with 1, 2, so on and so forth. After a restart, adding an entry would generate an id like 32,xxx. Another restart and adding of entry would generate an id like 65,xxx. I don't know why this is happening. Here's a snippet of the annotation I'm using for my id. I'm using Hibernate. @Id @GeneratedValue(strategy = GenerationType.TABLE) private Long id;
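
    The jumps are almost certainly the id blocks the TABLE generator pre-allocates in memory and then discards on shutdown; the block size is controllable through allocationSize. A sketch (smaller blocks mean more round trips to the generator table), with an assumed generator table name:

        import javax.persistence.*;

        @Entity
        public class Entry {

            @Id
            @TableGenerator(name = "entry_gen",
                            table = "id_generator",       // assumed generator table name
                            pkColumnName = "gen_name",
                            valueColumnName = "gen_value",
                            allocationSize = 1)           // no pre-allocated block to lose on restart
            @GeneratedValue(strategy = GenerationType.TABLE, generator = "entry_gen")
            private Long id;
        }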

    Read the article

  • JPA Problems mapping relationships

    - by Rosen Martev
    Hello. I have a problem when I try to persist my model. An exception is thrown when creating the EntityManagerFactory:

        javax.persistence.PersistenceException: [PersistenceUnit: NIF] Unable to build EntityManagerFactory
            at org.hibernate.ejb.Ejb3Configuration.buildEntityManagerFactory(Ejb3Configuration.java:677)
            at org.hibernate.ejb.HibernatePersistence.createEntityManagerFactory(HibernatePersistence.java:126)
            at javax.persistence.Persistence.createEntityManagerFactory(Persistence.java:52)
            at javax.persistence.Persistence.createEntityManagerFactory(Persistence.java:34)
            at project.serealization.util.PersistentManager.createSession(PersistentManager.java:24)
            at project.serealization.SerializationTest.testProject(SerializationTest.java:25)
            at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
            at sun.reflect.NativeMethodAccessorImpl.invoke(Unknown Source)
            at sun.reflect.DelegatingMethodAccessorImpl.invoke(Unknown Source)
            at java.lang.reflect.Method.invoke(Unknown Source)
            at junit.framework.TestCase.runTest(TestCase.java:168)
            at junit.framework.TestCase.runBare(TestCase.java:134)
            at junit.framework.TestResult$1.protect(TestResult.java:110)
            at junit.framework.TestResult.runProtected(TestResult.java:128)
            at junit.framework.TestResult.run(TestResult.java:113)
            at junit.framework.TestCase.run(TestCase.java:124)
            at junit.framework.TestSuite.runTest(TestSuite.java:232)
            at junit.framework.TestSuite.run(TestSuite.java:227)
            at org.junit.internal.runners.JUnit38ClassRunner.run(JUnit38ClassRunner.java:79)
            at org.eclipse.jdt.internal.junit4.runner.JUnit4TestReference.run(JUnit4TestReference.java:46)
            at org.eclipse.jdt.internal.junit.runner.TestExecution.run(TestExecution.java:38)
            at org.eclipse.jdt.internal.junit.runner.RemoteTestRunner.runTests(RemoteTestRunner.java:467)
            at org.eclipse.jdt.internal.junit.runner.RemoteTestRunner.runTests(RemoteTestRunner.java:683)
            at org.eclipse.jdt.internal.junit.runner.RemoteTestRunner.run(RemoteTestRunner.java:390)
            at org.eclipse.jdt.internal.junit.runner.RemoteTestRunner.main(RemoteTestRunner.java:197)
        Caused by: org.hibernate.HibernateException: Wrong column type in nif.action_element for column FLOW_ID. Found: double, expected: bigint
            at org.hibernate.mapping.Table.validateColumns(Table.java:284)
            at org.hibernate.cfg.Configuration.validateSchema(Configuration.java:1116)
            at org.hibernate.tool.hbm2ddl.SchemaValidator.validate(SchemaValidator.java:139)
            at org.hibernate.impl.SessionFactoryImpl.<init>(SessionFactoryImpl.java:349)
            at org.hibernate.cfg.Configuration.buildSessionFactory(Configuration.java:1327)
            at org.hibernate.cfg.AnnotationConfiguration.buildSessionFactory(AnnotationConfiguration.java:867)
            at org.hibernate.ejb.Ejb3Configuration.buildEntityManagerFactory(Ejb3Configuration.java:669)
            ... 24 more

    The code for SimpleActionElement and SimpleFlow is as follows:

        @Entity
        public class SimpleActionElement {
            @OneToOne(cascade = CascadeType.ALL, targetEntity = SimpleFlow.class)
            @JoinColumn(name = "FLOW_ID")
            private SimpleFlow<T> flow;
            ...
        }

        @Entity
        public class SimpleFlow<T> {
            @Id
            @GeneratedValue(strategy = GenerationType.IDENTITY)
            @Column(name = "ELEMENT_ID")
            private Long element_id;
            ...
        }

    Read the article

  • How to declare different non-JPA annotations on embedded classes

    - by e99y
    @Embeddable public class EmbedMe { private String prop1; private String prop2; } @Entity public class EncryptedEmbedded { @Embedded private EmbedMe enc; } I am currently using Jasypt for encryption. Is there a way to indicate that the @Embeddable used in EncryptedEmbedded will use @Type(value = "newDeclaredTypeHere") per attribute (prop1, prop2)? Thanks in advance... ;)
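
    With Jasypt's Hibernate integration, the usual shape is a @TypeDef for the encrypted string type plus @Type on each attribute inside the embeddable itself. The snippet below is a sketch only: the EncryptedStringType package differs between jasypt-hibernate versions, and "myHibernateEncryptor" is a name you would register on Jasypt's HibernatePBEEncryptorRegistry.

        import javax.persistence.Embeddable;
        import org.hibernate.annotations.Parameter;
        import org.hibernate.annotations.Type;
        import org.hibernate.annotations.TypeDef;
        import org.jasypt.hibernate4.type.EncryptedStringType;  // or org.jasypt.hibernate.type for Hibernate 3

        @TypeDef(name = "encryptedString",
                 typeClass = EncryptedStringType.class,
                 parameters = @Parameter(name = "encryptorRegisteredName",
                                         value = "myHibernateEncryptor"))
        @Embeddable
        public class EmbedMe {

            @Type(type = "encryptedString")
            private String prop1;

            @Type(type = "encryptedString")
            private String prop2;
        }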

    Read the article

  • JPA Native Query (SQL View)

    - by Uchenna
    I have two entities, Customer and Account. @Entity @Table(name="customer") public class Customer { private Long id; private String name; private String accountType; private String accountName; ... } @Entity @Table(name="account") public class Account { private Long id; private String accountName; private String accountType; ... } I have an SQL query: select a.id as account_id, a.account_name, a.account_type, d.id, d.name from account a, customer d Assumption: the account and customer tables are created during application startup, and the accountType and accountName fields of the Customer entity should not be created as columns; that is, only the id and name columns will be created. Question: how do I run the above SQL query and return a Customer entity object with the accountType and accountName properties populated with the query's account_name and account_type values? Thanks.
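
    @FieldResult can only target columns that are actually mapped on the entity, so with accountType/accountName unmapped a common alternative (JPA 2.1+) is an @SqlResultSetMapping with @ConstructorResult into a small read-only view class. A sketch; CustomerAccountView is an invented DTO name, and the column aliases in the SQL must match the @ColumnResult names:

        import javax.persistence.*;

        // DTO holding the joined row; not an entity, so no extra columns are created anywhere.
        class CustomerAccountView {
            final Long customerId;
            final String name;
            final String accountType;
            final String accountName;

            CustomerAccountView(Long customerId, String name, String accountType, String accountName) {
                this.customerId = customerId;
                this.name = name;
                this.accountType = accountType;
                this.accountName = accountName;
            }
        }

        // The mapping has to live on some entity (or in orm.xml); Customer is a natural place.
        @Entity
        @Table(name = "customer")
        @SqlResultSetMapping(
            name = "CustomerAccountView",
            classes = @ConstructorResult(
                targetClass = CustomerAccountView.class,
                columns = {
                    @ColumnResult(name = "id", type = Long.class),
                    @ColumnResult(name = "name"),
                    @ColumnResult(name = "account_type"),
                    @ColumnResult(name = "account_name")
                }))
        public class Customer {
            @Id private Long id;
            private String name;
        }

        // usage:
        // List<CustomerAccountView> rows = em.createNativeQuery(
        //         "select d.id as id, d.name as name, a.account_type, a.account_name "
        //       + "from account a, customer d", "CustomerAccountView")
        //     .getResultList();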

    Read the article

  • hibernate jpa criteriabuilder ignore case queries

    - by user373201
    How do I do a case-insensitive LIKE query using CriteriaBuilder? For the description property I want something like upper(description) like '%XYZ%'. I have the following query: CriteriaBuilder criteriaBuilder = entityManager.getCriteriaBuilder(); CriteriaQuery<Person> personCriteriaQuery = criteriaBuilder.createQuery(Person.class); Root<Person> personRoot = personCriteriaQuery.from(Person.class); personCriteriaQuery.select(personRoot); personCriteriaQuery.where(criteriaBuilder.like(personRoot.get(Person_.description), "%"+filter.getDescription().toUpperCase()+"%")); List<Person> pageResults = entityManager.createQuery(personCriteriaQuery).getResultList();
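
    CriteriaBuilder can wrap the path in upper() so both sides of the LIKE are compared in the same case; only the predicate changes. A sketch wrapped in a small helper, using the Person/Person_ types from the question:

        import javax.persistence.EntityManager;
        import javax.persistence.criteria.CriteriaBuilder;
        import javax.persistence.criteria.CriteriaQuery;
        import javax.persistence.criteria.Root;
        import java.util.List;

        public class PersonQueries {

            // upper(description) like '%TEXT%'
            public List<Person> findByDescriptionIgnoreCase(EntityManager em, String text) {
                CriteriaBuilder cb = em.getCriteriaBuilder();
                CriteriaQuery<Person> query = cb.createQuery(Person.class);
                Root<Person> root = query.from(Person.class);
                query.select(root).where(
                        cb.like(cb.upper(root.get(Person_.description)),
                                "%" + text.toUpperCase() + "%"));
                return em.createQuery(query).getResultList();
            }
        }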

    Read the article

  • JPA entity relations are not populated after .persist()

    - by Tomik
    Hello, this is a sample of my two entities: @Entity public class Post implements Serializable { @OneToMany(mappedBy = "post", fetch = javax.persistence.FetchType.EAGER) @OrderBy("revision DESC") public List<PostRevision> revisions; @Entity(name="post_revision") public class PostRevision implements Serializable { @ManyToOne public Post post; private Integer revision; @PrePersist private void prePersist() { List<PostRevision> list = post.revisions; if(list.size() >= 1) revision = list.get(list.size() - 1).revision + 1; else revision = 1; } So, there's a "post" and it can have several revisions. During persisting of the revision, entity takes a look at the list of the existing revisions and finds the next revision number. Problem is that Post.revisions is NULL but I think it should be automatically populated. I guess there's some kind of problem in my source code but I don't know where. Here's my "persistence" code: Post post = new Post(); PostRevision revision = new PostRevision(); revision.post = post; em.persist(post); em.persist(revision); em.flush(); I think that after persisting "post", it becomes "managed" and all the relations should be populated from now on. Thanks for help! (Note: public attributes are just for demonstration)
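
    persist() does not wire up the inverse side for you: the foreign key will be written correctly, but post.revisions stays empty in memory until the Post is reloaded. The usual fix is a small helper that sets both ends before persisting; note the @PrePersist logic would then also need to avoid counting the revision that is currently being persisted. A sketch:

        import javax.persistence.*;
        import java.io.Serializable;
        import java.util.ArrayList;
        import java.util.List;

        @Entity
        public class Post implements Serializable {

            @Id @GeneratedValue
            Long id;

            @OneToMany(mappedBy = "post", fetch = FetchType.EAGER)
            @OrderBy("revision DESC")
            List<PostRevision> revisions = new ArrayList<PostRevision>();

            public void addRevision(PostRevision revision) {
                revision.post = this;       // owning side: this is what ends up in the FK column
                revisions.add(revision);    // inverse side: what in-memory readers see right away
            }
        }

        @Entity(name = "post_revision")
        class PostRevision implements Serializable {

            @Id @GeneratedValue
            Long id;

            @ManyToOne
            Post post;

            Integer revision;
        }

        // usage:
        // Post post = new Post();
        // PostRevision revision = new PostRevision();
        // post.addRevision(revision);   // both sides wired before persisting
        // em.persist(post);
        // em.persist(revision);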

    Read the article

  • Data Governance 2010 Conference in San Diego

    - by Tony Ouk
    The Data Governance Annual Conference is one of the world's most authoritative and vendor-neutral events on Data Governance and Data Quality. The conference will focus on the "how-tos", from starting a data governance and stewardship program to attaining data governance maturity, with specific topics on MDM. This year's event will be hosted June 7 through June 10 in San Diego, California. For more information, including registration details, visit the Data Governance 2010 Conference website.

    Read the article

  • How to search for newline or linebreak characters in Excel?

    - by Highly Irregular
    I've imported some data into Excel (from a text file) and it contains some sort of newline characters. It looks like this initially: If I hit F2 (to edit) then Enter (to save changes) on each of the cells with a newline (without actually editing anything), Excel automatically changes the layout to look like this: I don't want these newline characters here, as they mess up data processing further down the track. How can I do a search for these to detect more of them? The usual search function doesn't accept an enter character as a search character.

    Read the article

  • Oracle Data Integration 12c: Simplified, Future-Ready, High-Performance Solutions

    - by Thanos Terentes Printzios
    In today’s data-driven business environment, organizations need to cost-effectively manage the ever-growing streams of information originating both inside and outside the firewall and address emerging deployment styles like cloud, big data analytics, and real-time replication. Oracle Data Integration delivers pervasive and continuous access to timely and trusted data across heterogeneous systems. Oracle is enhancing its data integration offering announcing the general availability of 12c release for the key data integration products: Oracle Data Integrator 12c and Oracle GoldenGate 12c, delivering Simplified and High-Performance Solutions for Cloud, Big Data Analytics, and Real-Time Replication. The new release delivers extreme performance, increase IT productivity, and simplify deployment, while helping IT organizations to keep pace with new data-oriented technology trends including cloud computing, big data analytics, real-time business intelligence. With the 12c release Oracle becomes the new leader in the data integration and replication technologies as no other vendor offers such a complete set of data integration capabilities for pervasive, continuous access to trusted data across Oracle platforms as well as third-party systems and applications. Oracle Data Integration 12c release addresses data-driven organizations’ critical and evolving data integration requirements under 3 key themes: Future-Ready Solutions : Supporting Current and Emerging Initiatives Extreme Performance : Even higher performance than ever before Fast Time-to-Value : Higher IT Productivity and Simplified Solutions  With the new capabilities in Oracle Data Integrator 12c, customers can benefit from: Superior developer productivity, ease of use, and rapid time-to-market with the new flow-based mapping model, reusable mappings, and step-by-step debugger. Increased performance when executing data integration processes due to improved parallelism. Improved productivity and monitoring via tighter integration with Oracle GoldenGate 12c and Oracle Enterprise Manager 12c. Improved interoperability with Oracle Warehouse Builder which enables faster and easier migration to Oracle Data Integrator’s strategic data integration offering. Faster implementation of business analytics through Oracle Data Integrator pre-integrated with Oracle BI Applications’ latest release. Oracle Data Integrator also integrates simply and easily with Oracle Business Analytics tools, including OBI-EE and Oracle Hyperion. Support for loading and transforming big and fast data, enabled by integration with big data technologies: Hadoop, Hive, HDFS, and Oracle Big Data Appliance. Only Oracle GoldenGate provides the best-of-breed real-time replication of data in heterogeneous data environments. With the new capabilities in Oracle GoldenGate 12c, customers can benefit from: Simplified setup and management of Oracle GoldenGate 12c when using multiple database delivery processes via a new Coordinated Delivery feature for non-Oracle databases. Expanded heterogeneity through added support for the latest versions of major databases such as Sybase ASE v 15.7, MySQL NDB Clusters 7.2, and MySQL 5.6., as well as integration with Oracle Coherence. Enhanced high availability and data protection via integration with Oracle Data Guard and Fast-Start Failover integration. Enhanced security for credentials and encryption keys using Oracle Wallet. Real-time replication for databases hosted on public cloud environments supported by third-party clouds. 
Tight integration between Oracle Data Integrator 12c and Oracle GoldenGate 12c and other Oracle technologies, such as Oracle Database 12c and Oracle Applications, provides a number of benefits for organizations: Tight integration between Oracle Data Integrator 12c and Oracle GoldenGate 12c enables developers to leverage Oracle GoldenGate’s low overhead, real-time change data capture completely within the Oracle Data Integrator Studio without additional training. Integration with Oracle Database 12c provides a strong foundation for seamless private cloud deployments. Delivers real-time data for reporting, zero downtime migration, and improved performance and availability for Oracle Applications, such as Oracle E-Business Suite and ATG Web Commerce . Oracle’s data integration offering is optimized for Oracle Engineered Systems and is an integral part of Oracle’s fast data, real-time analytics strategy on Oracle Exadata Database Machine and Oracle Exalytics In-Memory Machine. Oracle Data Integrator 12c and Oracle GoldenGate 12c differentiate the new offering on data integration with these many new features. This is just a quick glimpse into Oracle Data Integrator 12c and Oracle GoldenGate 12c. Find out much more about the new release in the video webcast "Introducing 12c for Oracle Data Integration", where customer and partner speakers, including SolarWorld, BT, Rittman Mead will join us in launching the new release. Resource Kits Meet Oracle Data Integration 12c  Discover what's new with Oracle Goldengate 12c  Oracle EMEA DIS (Data Integration Solutions) Partner Community is available for all your questions, while additional partner focused webcasts will be made available through our blog here, so stay connected. For any questions please contact us at partner.imc-AT-beehiveonline.oracle-DOT-com Stay Connected Oracle Newsletters

    Read the article

  • Building a Data Mart with Pentaho Data Integration Video Review by Diethard Steiner, Packt Publishing

    - by Compudicted
    Originally posted on: http://geekswithblogs.net/Compudicted/archive/2014/06/01/building-a-data-mart-with-pentaho-data-integration-video-review.aspx The Building a Data Mart with Pentaho Data Integration Video by Diethard Steiner from Packt Publishing is more than just a course on how to use Pentaho Data Integration; it also implements and uses the principles of Data Warehousing (and I even heard the name of Ralph Kimball in the video). Indeed, a video watcher should be familiar with its concepts, such as the Star Schema, Slowly Changing Dimension types, etc., so prior to watching this course I suggest skimming through the Data Warehouse concepts (if unfamiliar) or, even better, reading Ralph's excellent The Data Warehouse Toolkit. By the way, the author expands beyond using Pentaho alone to MySQL and MonetDB, which is a real icing on the cake! Indeed, I even suggest the name of the course should be ‘Building a Data Warehouse with Pentaho'. To successfully complete the course one needs to know some Linux (Ubuntu is used in the course), the VI editor and the Bash command shell, but it seems that similar requirements would also apply to the Windows OS. Additionally, knowing some basic SQL would not hurt. As I said, MonetDB is used in this course several times; it seems to be no more complex than, say, MySQL, but based on what I read it is very well suited for fast querying of big volumes of data thanks to having a columnstore (vertical data storage). I don't see what else could be a barrier; the material is very digestible. On this note, I must add that the author does not cover how to acquire the software, so here is what I found may help: Pentaho: the free Community Edition must be more than anyone needs to learn it, or even go into a POC. MonetDB can be downloaded (exists for both Linux and Windows) from http://goo.gl/FYxMy0 (just see the appropriate link on the left). The author seems to be using Eclipse to run SQL code; one can get it from http://goo.gl/5CcuN. To create or edit database entities and/or schemas one can use a universal tool called SQuirreL; get it from http://squirrel-sql.sourceforge.net.   Next, I must confess Diethard is very knowledgeable in what he does and beyond. However, some accent will be heard by the user of the course, especially if one's mother tongue is English, but I got over it in a few chapters. I liked the rate at which the material is presented; it makes me feel I paid for every second. Eventually, my impressions are: Pentaho is an awesome ETL offering, and it is worth learning very much (I am an ETL fan and a heavy user of SSIS). MonetDB is nice; it tickles my fancy to know it more. Data Warehousing, despite all the BigData tool offerings (Hive, Sqoop, Pig on Hadoop), using the traditional tools still rocks. Chapters 2 to 6 were the most fun to me, with chapter 8 being the most difficult.   In terms of closing, I highly recommend this video to anyone who needs to grasp Pentaho concepts quickly; likewise, the course is very well suited for any developer on a "supposed to be done yesterday" type of project. It is for a beginner to intermediate level ETL/DW developer, but one would need to learn more on Data Warehousing and Pentaho; for that I recommend the 5-star Pentaho Data Integration 4 Cookbook. Enjoy it! Disclaimer: I received this video from the publisher for the purpose of a public review.

    Read the article
