Search Results

Search found 31421 results on 1257 pages for 'entity sql'.


  • Interview with Geoff Bones, developer on SQL Storage Compress

    - by red(at)work
    How did you come to be working at Red Gate? I've been working at Red Gate for nine months; before that I had been at a multinational engineering company. A number of my colleagues had left to work at Red Gate and spoke very highly of it, but I was happy in my role and thought, 'It can't be that great there, surely? They'll be back!' Then one day I visited to catch up with them over lunch in the Red Gate canteen. I was so impressed with what I found there that, three days later, I'd applied for a role as a developer.

    And how did you get into software development? My first job out of university was working as a systems programmer on IBM mainframes. This was quite a while ago: there was a lot of assembler and loading programs from tape drives and that kind of stuff. I learned a lot about how computers work, and this stood me in good stead when I moved over to development in the 90s.

    What's the best thing about working as a developer at Red Gate? Where should I start? One of the great things as a developer at Red Gate is the useful feedback and close contact we have with the people who use our products, either directly at trade shows and other events or through information coming through the product managers. The company's whole ethos is built around assisting the user, and this is in stark contrast to my previous development roles. We aim to produce tools that people really want to use, that they enjoy using, and, as a developer, this is a great thing to aim for and a great feeling when we get it right. At Red Gate we also try to cut out the things that distract and stop us doing our jobs. As a developer, this means that I can focus on the code and the product I'm working on, knowing that others are doing a first-class job of making sure that the builds are running smoothly and that I'm getting great feedback from the testers. We keep our process light and effective, as we want to produce great software more than we want to produce great audit trails.

    Tell us a bit about the products you are currently working on. You mean HyperBac? First let me explain a bit about what HyperBac is. At heart it's a compression and encryption technology, but with a few added features that open up a wealth of really exciting possibilities. Right now we have the HyperBac technology in just three products: SQL HyperBac, SQL Virtual Restore and SQL Storage Compress, but we're only starting to develop what it can do. My personal favourite is SQL Virtual Restore; for example, I love the way you can use it to run independent test databases that are all backed by a single compressed backup. I don't think the market yet realises the kind of things you can do once you are using these products. On the other hand, the benefits of SQL Storage Compress are straightforward: run your databases but use only 20% of the disk space. Databases are getting larger and larger, and, as they do, so does your ROI.

    What's a typical day for you? My days are pretty varied. We have our daily team stand-up meeting and then sometimes I will work alone on a current issue, or I'll be pair programming with one of my colleagues. From time to time we give half a day up to future planning with the team, when we look at the long- and short-term aims for the product and work out the development priorities. I also get to go to conferences and events, which is unusual for a development role and gives me the chance to meet and talk to our customers directly.
    Have you noticed anything different about developing tools for DBAs rather than other kinds of IT user? It seems to me that DBAs are quite independent-minded; they know exactly what problem they are facing, and often have a solution in mind before they begin to look at what's on the market. This means that they're likely to cherry-pick tools from a range of vendors, picking the ones that are the best fit for them and that disrupt their environments the least. When I've met DBAs, I've often been very impressed by their ability to summarise their setup, the issues, the obstacles they face when implementing a tool and their plans for their environment. It's easier to develop products for this audience as they give such a detailed overview of their needs, and I feel I understand their problems.

    Read the article

  • Building dynamic OLAP data marts on-the-fly

    - by DrJohn
    At the forthcoming SQLBits conference, I will be presenting a session on how to dynamically build an OLAP data mart on-the-fly. This blog entry is intended to clarify exactly what I mean by an OLAP data mart, why you may need to build them on-the-fly, and finally to outline the steps needed to build them dynamically. In subsequent blog entries, I will present exactly how to implement some of the techniques involved.

    What is an OLAP data mart? In data warehousing parlance, a data mart is a subset of the overall corporate data provided to business users to meet specific business needs. Of course, the term does not specify the technology involved, so I coined the term "OLAP data mart" to identify a subset of data which is delivered in the form of an OLAP cube, which may be accompanied by the relational database upon which it was built. To clarify, the relational database is specifically created and loaded with the subset of data, and then the OLAP cube is built and processed to make the data available to the end-users via standard OLAP client tools.

    Why build OLAP data marts? Market research companies sell data to their clients to make money. To gain competitive advantage, market research providers like to "add value" to their data by providing systems that enhance analytics, thereby allowing clients to make best use of the data. As such, OLAP cubes have become a standard way of delivering added value to clients. They can be built on-the-fly to hold specific data sets and meet particular needs, and then hosted on a secure intranet site for remote access, or shipped to clients' own infrastructure for hosting. Even better, they support a wide range of different tools for analytical purposes, including the ever-popular Microsoft Excel.

    Extension Attributes: The Challenge. One of the key challenges in building multiple OLAP data marts based on the same 'template' is handling extension attributes. These are attributes that meet the client's specific reporting needs but do not form part of the standard template. Now clearly, these extension attributes have to come into the system via additional files and ultimately be added to relational tables so they can end up in the OLAP cube. However, processing these files and filling dynamically altered tables with SSIS is a challenge, as SSIS packages tend to break as soon as the database schema changes. There are two approaches to this: (1) dynamically build an SSIS package in memory to match the new database schema using C#, or (2) have the extension attributes provided as name/value pairs so the file's schema does not change and can easily be loaded using SSIS. The problem with the first approach is the complexity of writing an awful lot of complex C# code. The problem with the second approach is that name/value pairs are useless to an OLAP cube, so they have to be pivoted back into a proper relational table somewhere in the data load process WITHOUT breaking SSIS. How this can be done will be the subject of a future blog entry.

    What is involved in building an OLAP data mart? There are a great many steps involved in building OLAP data marts on-the-fly. The key point is that all the steps must be automated to allow for the production of multiple OLAP data marts per day (i.e. many thousands, each with its own specific data set and attributes). Now most of these steps have a great deal in common with standard data warehouse practices. The key difference is that the databases are all built to order.
    The only permanent database is the metadata database, which holds all the metadata needed to build everything else (i.e. client orders, configuration information, connection strings, client-specific requirements and attributes, etc.). The staging database has a short life: it is built, populated and then ripped down as soon as the OLAP data mart has been populated. The OLAP data mart itself comprises two components: the data mart, which is a relational database, and the OLAP cube, which is an OLAP database implemented using Microsoft Analysis Services (SSAS). The client may receive just the OLAP cube or both components together, depending on their reporting requirements.

    So, in broad terms, the steps required to fulfil a client order are as follows:

    Step 1: Prepare metadata
    - Create a set of database names unique to the client's order
    - Modify all package connection strings used by SSIS to point to the new databases and file locations

    Step 2: Create relational databases
    - Create the staging and data mart relational databases using dynamic SQL, and set the database recovery mode to SIMPLE, as we do not need the overhead of logging anything
    - Execute SQL scripts to build all database objects (tables, views, functions and stored procedures) in the two databases

    Step 3: Load staging database
    - Use SSIS to load all data files into the staging database in a parallel operation
    - Load extension files containing name/value pairs; these will provide client-specific attributes in the OLAP cube

    Step 4: Load data mart relational database
    - Load the data from staging into the data mart relational database, again in parallel where possible
    - Allocate surrogate keys and use SSIS to perform surrogate key lookups during the load of fact tables

    Step 5: Load extension tables & attributes
    - Pivot the extension attributes from their native name/value pairs into proper relational tables (a sketch of this pivot appears at the end of this entry)
    - Add the extension attributes to the views used by the OLAP cube

    Step 6: Deploy & process OLAP cube
    - Deploy the OLAP database directly to the server using a C# script task in SSIS
    - Modify the connection string used by the OLAP cube to point to the data mart relational database
    - Modify the cube structure to add the extension attributes to both the data source view and the relevant dimensions
    - Remove any standard attributes that are not required
    - Process the OLAP cube

    Step 7: Backup and drop databases
    - Drop the staging database, as it is no longer required
    - Back up the data mart relational and OLAP databases and ship these to the client's infrastructure
    - Drop the data mart relational and OLAP databases from the build server
    - Mark the order complete and start processing the next order, ad infinitum

    So my future blog posts and my forthcoming session at the SQLBits conference will all focus on some of the more interesting aspects of building OLAP data marts on-the-fly, such as handling the load of extension attributes and how to dynamically alter the structure of an OLAP cube using C#.
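
    A sketch of the Step 5 pivot using dynamic SQL, which keeps the SSIS package itself unchanged (it just runs an Execute SQL Task). The table and column names (ExtensionStage, EntityKey, AttributeName, AttributeValue) are hypothetical; the post does not show its schema:

        DECLARE @cols nvarchar(max), @sql nvarchar(max);

        -- build the column list from the distinct attribute names in the staging table
        SELECT @cols = STUFF((SELECT ',' + QUOTENAME(AttributeName)
                              FROM (SELECT DISTINCT AttributeName FROM dbo.ExtensionStage) a
                              FOR XML PATH('')), 1, 1, '');

        -- rotate the name/value pairs into one column per extension attribute
        SET @sql = N'SELECT EntityKey, ' + @cols + N'
                     FROM (SELECT EntityKey, AttributeName, AttributeValue
                           FROM dbo.ExtensionStage) src
                     PIVOT (MAX(AttributeValue) FOR AttributeName IN (' + @cols + N')) p;';

        EXEC sp_executesql @sql;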

    Read the article

  • SQL University: What and why of database refactoring

    - by Mladen Prajdic
    This is a post for a great idea called SQL University, started by Jorge Segarra, also famously known as SqlChicken on Twitter. It's a collection of blog posts on different database-related topics contributed by several smart people all over the world. This week is mine, and we'll be talking about database testing and refactoring. In 3 posts we'll cover: SQLU part 1 - What and why of database testing; SQLU part 2 - What and why of database refactoring; SQLU part 3 - Tools of the trade. This is the second part of the series, and in it we'll take a look at what database refactoring is and why we do it.

    Why refactor a database? To know why we refactor, we first have to know what refactoring actually is. Code refactoring is a process where we change module internals in a way that does not change that module's input/output behavior. For successful refactoring there is one crucial thing we absolutely must have: tests. Automated unit tests are the only guarantee we have that we haven't broken the input/output behavior before refactoring. If you haven't, go back and read my post on the matter. Then start writing them. The next thing you need is a code module. Those are views, UDFs and stored procedures. With direct table access we can kiss fast and sweet refactoring goodbye. One more point in favor of a database abstraction layer. And no, ORMs don't fall into that category. But also know that refactoring is NOT adding new functionality to your code. Many have fallen into this trap. Don't be one of them and resist the lure of the dark side. And it's a strong lure. We developers in general love to add new stuff to our code, but hate fixing our own mistakes or changing existing code for no apparent reason. To be a good refactorer one needs discipline and focus. Now we know that refactoring is all about changing the inner workings of existing code. This can be due to performance optimizations, changing internal code workflows or some other reason. This is a typical black-box scenario to the outside world. If we upgrade the car engine it still has to drive on the road (preferably faster) and not fly (no matter how cool that would be). Also be aware that white-box tests will break when we refactor.

    What to refactor in a database? Refactoring databases doesn't happen that often, but when it does it can include a lot of stuff. Let us look at a few common cases.

    Adding or removing database schema objects. Adding, removing or changing table columns in any way, adding constraints, keys, etc. All of these can be counted as internal changes not visible to the data consumer. But each of these carries a potential input/output behavior change. Dropping a column can result in views not working anymore or stored procedure logic crashing. Adding a unique constraint shows duplicated data that shouldn't exist. Foreign keys break a TRUNCATE TABLE command executed from an application that runs once a month. All these scenarios are very real and can happen. With a proper database abstraction layer fully covered with black-box tests, we can make sure something like that does not happen (hopefully at all).

    Changing physical structures. Physical structures include heaps, indexes and partitions. We can pretty much add or remove those without changing the data returned by the database. But the performance can be affected. So here we use our performance tests. We do have them, right? Just by adding a single index we can achieve orders-of-magnitude performance improvement. Won't that make users happy? But what if that index causes our write operations to crawl to a stop? Again, we have to test this. There are a lot of things to think about and have tests for. Without tests we can't do successful refactoring!

    Fixing bad code. We all have some bad code in our systems. We usually refer to such code as a code smell, as it violates good coding practices. Examples of such code smells are SQL injection, use of SELECT *, scalar UDFs or cursors, etc. Each of those is a huge code smell and can result in major code changes. Take SELECT * for example. If we remove a column from a table, the client using that SELECT * statement won't have a clue about it until it runs. Then it will gracefully crash and burn. Not to mention the widely unknown SELECT * view refresh problem that Thomas LaRock (@SQLRockstar on Twitter) and Colin Stasiuk (@BenchmarkIT on Twitter) talk about in detail. Go read about it, it's informative. (A minimal illustration of the view refresh problem appears at the end of this entry.) Refactoring this includes replacing the * with column names and most likely a change to the application using the database.

    Breaking apart huge stored procedures. Have you ever seen a stored procedure that was 2000 lines long? I have. It's not pretty. It hurts the eyes and sucks away the will to live for the next 10 minutes. They are a maintenance nightmare and turn into things no one dares to touch. I'm willing to bet that 100% of the time they don't have a single test on them. Large stored procedures (and functions) are a clear sign that they contain business logic. General opinion on good database coding practices says that business logic has no business in the database. That's the application's part. Refactoring such behemoths requires writing lots of edge-case tests for the stored procedure's input/output behavior and then starting to refactor it. First we split the logic inside into smaller parts, like new stored procedures and UDFs. Those then get called from the master stored procedure. Once we've successfully modularized the database code, it's best to transfer that logic into the applications consuming it. This leaves the stored procedure with only common data manipulation logic. Of course this isn't always possible, so having a plethora of performance and behavior unit tests is absolutely necessary to confirm we've actually improved the codebase in some way.

    Refactoring is not a popular chore amongst developers or managers. The former don't like fixing old code; the latter can't see the financial benefit. Remember how we talked about being lousy at estimating future costs in the previous post? But there comes a time when it must be done. Hopefully I've given you some ideas on how to get started. In the last post of the series we'll take a look at the tools to use and an example of testing and refactoring.
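
    A minimal T-SQL illustration of the SELECT * view refresh problem mentioned above (object names made up): a view's column metadata is captured when the view is created, so new table columns stay invisible until the view is refreshed.

        CREATE TABLE dbo.Customer (Id int, Name varchar(50));
        GO
        CREATE VIEW dbo.vCustomer AS SELECT * FROM dbo.Customer;
        GO
        ALTER TABLE dbo.Customer ADD Email varchar(255);
        GO
        -- dbo.vCustomer still returns only Id and Name at this point
        EXEC sp_refreshview 'dbo.vCustomer';
        -- now the view picks up the Email column as well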

    Read the article

  • Design question for WinForms (C#) app, using Entity Framework

    - by cdotlister
    I am planning on writing a small home budget application for myself, as a learning exercise. I have built my database (SQL Server) and written a small console application to interact with it and test out scenarios on my database. Once I am happy, my next step would be to start building the application - but I am already wondering what the best/standard design would be. I am planning on using Entity Framework for handling my database entities, then LINQ to SQL/objects for getting the data, all running under a WinForms (for now) application. My plan (I've never used EF, and most of my development background is web apps) is to have my database... with Entity Framework in its own project, which has the connection to the database. This project would expose methods such as 'GetAccount()', 'GetAccount(int accountId)' etc. I'd then have a service project that references my EF project, and on top of that, my GUI project, which makes the calls to my service project. But I am stuck. Let's say I have a screen that displays a list of account types (Debit, Credit, Loan...). Once I have selected one, the next drop-down shows a list of accounts I have that suit that account type. So the OnChange event on my drop-down on the account type control will make a call to the service layer project, 'GetAccountTypes()', and I would expect back a List<AccountType>. However, the AccountType object... what is that? It can't be the AccountType object from my EF project, as my GUI project doesn't have a reference to it. Would I have to have some sort of shared library, shared between my GUI and my service project, with a custom-built AccountType object? The GUI can then expect back a list of these. So my service layer would have a method: public List<AccountType> GetAccountTypes(). That would then make a call to a custom method in my EF project, which would probably be the same as the above method, except it returns a list of EF.Data.AccountType (the Entity Framework generated AccountType object). The method would then have the LINQ code to get the data as I want it. Then my service layer will get that object, transform it into my custom AccountType object, and return it to the GUI. Does that sound at all like a good plan?
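
    A hedged sketch of that shared-library idea, which is essentially the classic DTO pattern: put plain data-transfer types in a contracts assembly referenced by both the GUI and service projects, and let the service project map EF entities onto them. Every name below (AccountTypeDto, AccountService, BudgetEntities) is hypothetical.

        using System.Collections.Generic;
        using System.Linq;

        // Shared contracts assembly - referenced by both the GUI and service projects
        public class AccountTypeDto
        {
            public int AccountTypeId { get; set; }
            public string Name { get; set; }
        }

        // Service project - references the EF project and the contracts assembly
        public class AccountService
        {
            public List<AccountTypeDto> GetAccountTypes()
            {
                using (var context = new BudgetEntities()) // hypothetical EF context
                {
                    // project the EF entities onto the shared DTO type
                    return context.AccountTypes
                                  .Select(t => new AccountTypeDto
                                  {
                                      AccountTypeId = t.AccountTypeId,
                                      Name = t.Name
                                  })
                                  .ToList();
                }
            }
        }

    The GUI then works only against AccountTypeDto, so the EF types never leak past the service layer.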

    Read the article

  • ASP.NET MVC & Detached Entity won't save

    - by Justin
    Hey all, I have an ASP.NET MVC POST action for saving an entity on submit of a form. It works fine for insert but doesn't work for update: the database doesn't get called, so it's clearly not tracking the changes, as it's "detached". I'm using Entity Framework w/ .NET 4:

        //POST: /Developers/Save/
        [AcceptVerbs(HttpVerbs.Post)]
        public ActionResult Save(Developer developer)
        {
            developer.UpdateDate = DateTime.Now;
            if (developer.DeveloperID == 0)
            {   //inserting new developer.
                DataContext.DeveloperData.Insert(developer);
            }
            //save changes - TODO: doesn't update...
            DataContext.SaveChanges();
            //redirect to developer list.
            return RedirectToAction("Index");
        }

    Any ideas would be greatly appreciated, thanks, Justin
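
    A sketch of the usual fix, hedged because it depends on how the DataContext wrapper exposes Entity Framework: a detached entity must be re-attached and explicitly marked Modified, otherwise SaveChanges sees nothing to do. With an EF 4 ObjectContext (assumed reachable here as DataContext.ObjectContext, with an entity set assumed to be called Developers):

        if (developer.DeveloperID == 0)
        {
            // insert path - already works
            DataContext.DeveloperData.Insert(developer);
        }
        else
        {
            // re-attach the detached entity, then tell the state manager it is dirty
            var ctx = DataContext.ObjectContext;
            ctx.Developers.Attach(developer);
            ctx.ObjectStateManager.ChangeObjectState(developer, System.Data.EntityState.Modified);
        }
        DataContext.SaveChanges();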

    Read the article

  • Sorting NSSets of a Core Data entity - Objective-C

    - by ncohen
    Hi everyone, I would like to sort the data of a Core Data NSSet (I know we can do it only with arrays, but let me explain...). I have an entity User that has a to-many relationship with the entity Recipe. A recipe has the attributes name and id. I would like to get the data such that:

        NSArray *id = [[user.recipes valueForKey:@"identity"] allObjects];
        NSArray *name = [[user.recipes valueForKey:@"name"] allObjects];

    and, if I take the object at index 1 in both arrays, they correspond to the same recipe... Thanks
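
    A minimal sketch of the usual approach: sort the set once with an NSSortDescriptor and read both attributes from the same sorted array, so the indexes are guaranteed to line up.

        NSSortDescriptor *byId = [NSSortDescriptor sortDescriptorWithKey:@"identity" ascending:YES];
        NSArray *sortedRecipes = [[user.recipes allObjects]
            sortedArrayUsingDescriptors:[NSArray arrayWithObject:byId]];

        // both arrays now follow the same ordering
        NSArray *ids = [sortedRecipes valueForKey:@"identity"];
        NSArray *names = [sortedRecipes valueForKey:@"name"];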

    Read the article

  • Core Data and multiple entities

    - by mr.octobor
    I'm a newbie and I must save "Ranking" and "Level" data for the user. I created the file Ranking.xcdatamodel to save "Ranking", with an entity named Ranking (properties: Rank, Name). I can save and show it, but when I create the entity Level (property: CurrentLevel) my program crashes and shows this message:

        Unresolved error Error Domain=NSCocoaErrorDomain Code=134100 UserInfo=0x60044b0
        "Operation could not be completed. (Cocoa error 134100.)", {
            metadata = {
                NSPersistenceFrameworkVersion = 248;
                NSStoreModelVersionHashes = { Users = ; };
                NSStoreModelVersionHashesVersion = 3;
                NSStoreModelVersionIdentifiers = ( );
                NSStoreType = SQLite;
                NSStoreUUID = "41225AD0-B508-4AA7-A5E2-15D6990FF5E7";
                "_NSAutoVacuumLevel" = 2;
            };
            reason = "The model used to open the store is incompatible with the one used to create the store";
        }

    I don't know how to save "Level"; please suggest something.
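
    The reason is in the error itself: the store file on disk was created with the old model, and adding the Level entity changed the model. During development, deleting the app (and therefore the store) from the simulator/device is often enough. Beyond that, both entities normally belong in the same .xcdatamodel, and lightweight migration can be switched on when the store is added - a sketch, assuming the usual persistent store coordinator setup:

        NSError *error = nil;
        NSDictionary *options = [NSDictionary dictionaryWithObjectsAndKeys:
            [NSNumber numberWithBool:YES], NSMigratePersistentStoresAutomaticallyOption,
            [NSNumber numberWithBool:YES], NSInferMappingModelAutomaticallyOption,
            nil];
        if (![persistentStoreCoordinator addPersistentStoreWithType:NSSQLiteStoreType
                                                      configuration:nil
                                                                URL:storeURL
                                                            options:options
                                                              error:&error]) {
            NSLog(@"Unresolved error %@, %@", error, [error userInfo]);
        }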

    Read the article

  • GAE Entity Groups/Transaction

    - by bach
    Hi, say you have a client buying-card object and a product object. When the client chooses the buy option, you create the card object and then add a product. It should be transactional, but the card isn't in the same entity group as the product, and both have already been persisted - isn't that so? Is there any way to overcome this simple scenario safely and easily? Here's a code sample:

        Transaction tx = pm.currentTransaction();
        tx.begin();
        Product prod = pm.getObjectById(Product.class, "TV");
        prod.setReserved(true);
        pm.makePersistent(prod);
        // will throw an exception, as card and prod aren't in the same entity group
        Card card = pm.getObjectById(Card.class, "user123");
        card.setProd(prod);
        pm.makePersistent(card);
        try {
            tx.commit();
            break;
        }

    Read the article

  • Doctrine2 use of criteria inside the entity class

    - by Piotr Kowalczuk
    I'm trying to write a method whose task is to return only selected elements of the collection of items associated with a particular entity:

        /**
         * @ORM\OneToMany(targetEntity="PlayerStats", mappedBy="summoner")
         * @ORM\OrderBy({"player_stat_summary_type" = "ASC"})
         */
        protected $player_stats;

        public function getPlayerStatsBySummaryType($summary_type)
        {
            if ($this->player_stats->count() != 0) {
                $criteria = Criteria::create()
                    ->where(Criteria::expr()->eq("player_stat_summary_type", $summary_type));
                return $this->player_stats->matching($criteria)->first();
            }
            return null;
        }

    But I get this error:

        PHP Fatal error: Cannot access protected property Ranking\CoreBundle\Entity\PlayerStats::$player_stat_summary_type in /Users/piotrkowalczuk/Sites/lolranking/vendor/doctrine/common/lib/Doctrine/Common/Collections/Expr/ClosureExpressionVisitor.php on line 53

    Any idea how to fix this?

    Read the article

  • org.hibernate.annotations.Entity not being picked up by Hibernate 3.6

    - by user1317764
    I am using Hibernate 3.6.7 with annotated classes. My classes were annotated with org.hibernate.annotations.Entity, and I added them to the configuration using the configuration.addAnnotatedClass() method, but Hibernate does not seem to pick them up. Things work fine if I use the standard JPA @Entity annotation. What am I missing? I know that the Hibernate annotation has been deprecated in the 4.x releases with the advent of newer annotations to configure things like dynamic-insert and dynamic-update. I am not using any XML configuration file; I am setting up the configuration with a properties file and the Java APIs.
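
    That behavior is by design: org.hibernate.annotations.Entity never marks a class as an entity; it only layers Hibernate-specific tuning (dynamic insert/update and so on) on top of the JPA annotation. A minimal sketch of the combination:

        import javax.persistence.Entity;
        import javax.persistence.Id;

        @Entity  // required: only the JPA annotation makes the class an entity
        @org.hibernate.annotations.Entity(dynamicUpdate = true)  // optional Hibernate extras
        public class Customer {
            @Id
            private Long id;
        }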

    Read the article

  • iPhone - finding items in current entity that share related item

    - by Pedro
    G'day folks, my Core Data driven app has an Events entity, essentially a list of times & venues, with a related Acts entity, the names & bios of the acts appearing at those events. I have a table view that shows the event time & venue (as a table section with 1 row) and the act name & bio, which works nicely. If that act is appearing at more than one event, I'd like to include another table section that lists those events. I think I could get that with event.act.events, except that it would include the event I'm currently displaying. Can anyone suggest how to get the data I want & exclude the current record? Cheers & TIA, Pedro :) PS... I have not quite 18 hours until the promised time for a prototype of my app to be available for some testers to download.
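
    One hedged way to do it: take event.act.events and filter the current event out with a predicate before handing the result to the extra table section.

        NSPredicate *notCurrent = [NSPredicate predicateWithFormat:@"SELF != %@", event];
        NSSet *otherEvents = [event.act.events filteredSetUsingPredicate:notCurrent];

        // only show the extra section when the act has other appearances
        BOOL showOtherEventsSection = ([otherEvents count] > 0);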

    Read the article

  • Entity data field validation and partial data submission

    - by pradeeptp
    I have an entity class that has 10 fields, and I am using the MS Validation Application Block to mark all fields as mandatory (IsRequired). I am implementing a security feature whereby, during updates, not all the fields in the entity class will have data. For example, some users can only view 5 fields, while others can view all 10 during updates in the GUI. I have the following options: (1) Bring back the data for all the fields from the DB table and hide the ones not accessible to the user in the GUI. I am concerned about performance here, because the GUI will pull unnecessary data every time. (2) Bring back only the data (e.g. only 5 fields) that the user is permitted to view in the GUI. On submit, the validation block will throw an exception, because all fields are marked IsRequired and only data for 5 fields is sent back to the server. I want to know if there are any other good approaches to solving problems like this. I am using .NET 3.5. Thanks.
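
    A third option the Validation Application Block itself offers: rulesets. Validators can be assigned to named rulesets, so each role validates only against the fields it can actually edit, which makes option 2 workable. A sketch, with hypothetical field names:

        using Microsoft.Practices.EnterpriseLibrary.Validation;
        using Microsoft.Practices.EnterpriseLibrary.Validation.Validators;

        public class CustomerEntity
        {
            [NotNullValidator(Ruleset = "Basic")]
            [NotNullValidator(Ruleset = "Full")]
            public string Name { get; set; }

            [NotNullValidator(Ruleset = "Full")]  // required only for users who can see it
            public string CreditLimit { get; set; }
        }

        // validate against the ruleset matching the user's permissions
        ValidationResults results = Validation.Validate(entity, "Basic");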

    Read the article

  • Hibernate MappingException Unknown entity: $Proxy2

    - by slynn1324
    I'm using Hibernate annotations and have a VERY basic data object:

        import java.io.Serializable;
        import javax.persistence.Entity;
        import javax.persistence.Id;

        @Entity
        public class State implements Serializable {

            private static final long serialVersionUID = 1L;

            @Id
            private String stateCode;
            private String stateFullName;

            public String getStateCode() { return stateCode; }
            public void setStateCode(String stateCode) { this.stateCode = stateCode; }
            public String getStateFullName() { return stateFullName; }
            public void setStateFullName(String stateFullName) { this.stateFullName = stateFullName; }
        }

    and am trying to run the following test case:

        public void testCreateState() {
            Session s = HibernateUtil.getSessionFactory().getCurrentSession();
            Transaction t = s.beginTransaction();
            State state = new State();
            state.setStateCode("NE");
            state.setStateFullName("Nebraska");
            s.save(s);
            t.commit();
        }

    and get an org.hibernate.MappingException: Unknown entity: $Proxy2

        at org.hibernate.impl.SessionFactoryImpl.getEntityPersister(SessionFactoryImpl.java:628)
        at org.hibernate.impl.SessionImpl.getEntityPersister(SessionImpl.java:1366)
        at org.hibernate.event.def.AbstractSaveEventListener.saveWithGeneratedId(AbstractSaveEventListener.java:121)
        ....

    I haven't been able to find anything referencing the $Proxy part of the error and am at a loss. Any pointers to what I'm missing would be greatly appreciated.

    hibernate.cfg.xml:

        <property name="hibernate.connection.driver_class">org.hsqldb.jdbcDriver</property>
        <property name="connection.url">jdbc:hsqldb:hsql://localhost/xdb</property>
        <property name="connection.username">sa</property>
        <property name="connection.password"></property>
        <property name="current_session_context_class">thread</property>
        <property name="dialect">org.hibernate.dialect.HSQLDialect</property>
        <property name="show_sql">true</property>
        <property name="hbm2ddl.auto">update</property>
        <property name="hibernate.transaction.factory_class">org.hibernate.transaction.JDBCTransactionFactory</property>
        <mapping class="com.test.domain.State"/>

    in HibernateUtil.java:

        public static SessionFactory getSessionFactory(boolean testing) {
            if (sessionFactory == null) {
                try {
                    String configPath = HIBERNATE_CFG;
                    AnnotationConfiguration config = new AnnotationConfiguration();
                    config.configure(configPath);
                    sessionFactory = config.buildSessionFactory();
                } catch (Exception e) {
                    e.printStackTrace();
                    throw new ExceptionInInitializerError(e);
                }
            }
            return sessionFactory;
        }
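
    Reading the test closely, the likely culprit is s.save(s): it saves the Session itself rather than the State object, and with current_session_context_class set to thread, the Session returned by getCurrentSession() is a java.lang.reflect dynamic proxy - hence "Unknown entity: $Proxy2". A sketch of the corrected test:

        public void testCreateState() {
            Session s = HibernateUtil.getSessionFactory().getCurrentSession();
            Transaction t = s.beginTransaction();

            State state = new State();
            state.setStateCode("NE");
            state.setStateFullName("Nebraska");

            s.save(state);  // was s.save(s): the session proxy was being saved as an entity
            t.commit();
        }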

    Read the article

  • asp.net mvc custom model binding in an update entity scenario

    - by mctayl
    Hi, I have a question about model binding. Imagine you have an existing database entity displayed in a form and you'd like to edit some details. Some properties, e.g. createddate, are not bound to the form, so during model binding these properties are not assigned to the model, as they are not in the HTTP POST data or query string; hence they are null. In my controller method for update, I'd just like to do:

        public ActionResult Update(Entity ent)
        {
            //Save changes to db
        }

    but as some properties are null in ent, they override the existing database fields which are not part of the form post data. What is the correct way to handle this? I've tried hidden fields to hold the data, but model binding does not seem to assign hidden fields to the model. Any suggestions would be appreciated.
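
    A common pattern for this (a sketch - the repository/context names are hypothetical): load the current entity from the database first, then let the model binder overwrite only the posted fields via TryUpdateModel, so columns absent from the form keep their stored values.

        [AcceptVerbs(HttpVerbs.Post)]
        public ActionResult Update(int id, FormCollection form)
        {
            // start from the current database values, not a blank instance
            var entity = db.Entities.Single(e => e.Id == id);

            // overwrite only the fields actually present in the POST data
            TryUpdateModel(entity, form.ToValueProvider());

            db.SaveChanges();
            return RedirectToAction("Index");
        }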

    Read the article

  • @OneToOne and @JoinColumn, auto-delete null entity, doable?

    - by smallufo
    I have two entities, with the following JPA annotations:

        @Entity
        @Table(name = "Owner")
        public class Owner implements Serializable {
            @Id
            @GeneratedValue(strategy = GenerationType.AUTO)
            @Column(name = "id")
            private long id;

            @OneToOne(fetch = FetchType.EAGER, cascade = CascadeType.ALL)
            @JoinColumn(name = "Data_id")
            private Data data;
        }

        @Entity
        @Table(name = "Data")
        public class Data implements Serializable {
            @Id
            private long id;
        }

    Owner and Data have a one-to-one mapping; the owning side is Owner. The problem occurs when I execute:

        owner.setData(null);
        ownerDao.update(owner);

    The Owner table's Data_id becomes null - that's correct. But the Data row is not deleted automatically. I have to write another DataDao and another service layer to wrap the two actions (ownerDao.update(owner); dataDao.delete(data);). Is it possible to have the Data row automatically deleted when the owning Owner sets it to null?
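
    JPA 2 has this exact feature: orphanRemoval. Enabling it on the owning side tells the provider to delete a Data row once no Owner references it. A sketch of the changed mapping:

        @OneToOne(fetch = FetchType.EAGER, cascade = CascadeType.ALL, orphanRemoval = true)
        @JoinColumn(name = "Data_id")
        private Data data;

    After owner.setData(null) and an update, the orphaned Data row is then removed without a separate DataDao call.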

    Read the article

  • Customize MS Dynamics CRM entity in Visual Studio 2008

    - by Lalit
    Hi, I have installed MS Dynamics CRM on my Windows Server 2003 machine. I want to add JavaScript to one of the entities that has a drop-down control - let's say the Opportunity entity - but I don't know how to open the CRM solution in Visual Studio so that I can make changes. I have installed the CRM Explorer and the CRM Solution Framework (under folder\CRMSolutionFrameworkTemplate\Setup.cmd), using the command prompt to install: Setup.cmd {InstallDir} {ProjectName} {Project Long Name} {Organization Name}. How do I make the changes - how do I get the CRM solution into VS for editing? When opening the solution from "C:\Projects\MyCrmSolution\SourceCode\MyCrmSolution", it gives an error ("Microsoft.sourceanalysis.target not found"), so it cannot open the solution. Please guide me; I am new to this stuff.

    Read the article

  • Effective entity update in Hibernate?

    - by Tony
    I've always wanted to know how to update an entity in Hibernate as efficiently as I could with plain SQL. For example: I have a product entity that has a field named createTime. When I use session.saveOrUpdate(product), I have to fetch this field from the database, set it on the product, and then update; in fact, whenever I use session.saveOrUpdate(), ALL fields get updated, even if I only need to update one. But most of the time the value object we pass to the DAO layer can't contain all the field information - like the createTime in Product, which we seldom need to update. How can I update only selected fields? Of course I can use HQL, but that would separate the save and update logic. It would be better if Hibernate had a method like session.updateOnlyNotNullFields(product). How can I do this in Hibernate?
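
    Hibernate's closest built-in answer is dynamic-update: the generated UPDATE then contains only the columns whose values actually changed. It requires the entity's current state to be in the session (i.e. load before modifying), but it stops every save from rewriting every column. A sketch:

        @Entity
        @org.hibernate.annotations.Entity(dynamicUpdate = true)  // @DynamicUpdate in Hibernate 4.x+
        public class Product {
            @Id
            private Long id;

            private Date createTime;  // untouched fields stay out of the UPDATE statement
            // ...
        }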

    Read the article

  • how to configure hibernate not to update @Version on each access to entity

    - by radai
    I have a simple query that returns an entity, and when I look at Hibernate's SQL output I see that when I execute this query, Hibernate updates the @Version field (on each consecutive read the @Version field is updated). I don't modify anything in the entity I fetch, and I don't pass it as an argument to either persist or merge. This effectively means every read I make turns into a read+write. I've tried setting the lock mode to both NONE (JPA 2) and READ (JPA 1), to no avail. Is there any way to achieve this? If so, is there any way to set this as the default behavior in persistence.xml? I'm using JPA 2 over Hibernate 3.6.
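
    If the intent is reads that can never dirty the entity, Hibernate can be told so per query or per instance, which suppresses dirty checking and therefore the version bump. A hedged sketch (entity name hypothetical) - note this treats the symptom, since something in the mapping is evidently making Hibernate consider the fetched entity dirty:

        // per query: fetched instances are read-only, so flush never updates them
        List<Order> orders = session.createQuery("from Order o where o.status = :s")
                .setParameter("s", status)
                .setReadOnly(true)
                .list();

        // or per instance, after loading
        session.setReadOnly(order, true);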

    Read the article

  • Have an external Java application be notified of changes to Entity EJBs in JBoss AS

    - by John
    I'm trying to connect an external application to a JBoss AS container. The external application is a Java application that is currently being notified of changes to database entities through a JMS topic. I've added an EntityLifecycleListener class to all my entities that publishes a serialized (and unwrapped) copy of the entity to the JMS topic. The problem is that this implementation ignores the transaction boundaries of the JBoss container. For example, the @PostUpdate event can fire, generating the JMS message for that entity, but the transaction could be rolled back, causing the external application to be notified of an invalid change and fall out of sync. I need my external application to be notified only of successful commits to the database, but I need to be able to publish the entire Java POJO to the external application. Is there an official way of doing this?
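
    One approach that is official enough in a JTA container: register a javax.transaction.Synchronization from the lifecycle listener and publish to JMS only in afterCompletion, once the status confirms the commit. A sketch, assuming JBoss's conventional JNDI name java:/TransactionManager and a hypothetical publishToJmsTopic helper:

        import javax.naming.InitialContext;
        import javax.persistence.PostUpdate;
        import javax.transaction.Status;
        import javax.transaction.Synchronization;
        import javax.transaction.TransactionManager;

        public class EntityLifecycleListener {

            @PostUpdate
            public void postUpdate(final Object entity) {
                try {
                    TransactionManager tm = (TransactionManager)
                            new InitialContext().lookup("java:/TransactionManager");

                    tm.getTransaction().registerSynchronization(new Synchronization() {
                        public void beforeCompletion() { /* nothing to do */ }

                        public void afterCompletion(int status) {
                            if (status == Status.STATUS_COMMITTED) {
                                publishToJmsTopic(entity);  // existing JMS publish code
                            }
                        }
                    });
                } catch (Exception e) {
                    throw new RuntimeException("Could not defer JMS publish", e);
                }
            }

            private void publishToJmsTopic(Object entity) { /* serialize and send */ }
        }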

    Read the article

  • Can you use the same Enum in multiple entities in Linq-to-SQL?

    - by Mark
    In my persistence layer, I've declared a load of enums to represent tables containing reference data (i.e. the data never changes). In Linq2SQL, I am able to set the type of an entity property to an enum type and all is well, but as soon as I set a second entity's property to use the same enum type, the code generator (MSLinqToSQLGenerator) starts generating an empty code file. I assume that MSLinqToSQLGenerator is quietly crashing. The question is why, and are there any workarounds? Has anyone else experienced this problem?

    Read the article

  • In JPA, a Map of embeddable values that have an entity used as the key

    - by Schmuli
    I'm still new to JPA (and Hibernate, which I'm using as my provider), so maybe this just can't be done, but anyway... consider the following code:

        @Entity
        class Root {
            @Id
            private long id;
            private String name;

            @ElementCollection
            private Map<ResourceType, Resource> resources;
            ...
        }

        @Entity
        class ResourceType {
            @Id
            private long id;
            private String name;
        }

        @Embeddable
        class Resource {
            private ResourceType resourceType;
            private long value;
        }

    In the database, there is a collection table, 'Root_resources', that stores the values of the map, but the resource type (actually, the resource type ID) appears twice: once as the KEY of the map, and once as part of the value. Is there a way, similar to, say, the @MapKey annotation, to indicate that the key is one of the columns of the value (i.e. embedded)?
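
    If Resource is promoted from @Embeddable to @Entity, JPA's @MapKey does exactly this: it nominates a persistent field of the map value as the key, so the key is stored only once. A sketch of the reshaped mapping (for an @ElementCollection of embeddables, JPA 2 has no equivalent, so the duplication is expected there):

        @Entity
        class Root {
            @Id
            private long id;
            private String name;

            // the key is derived from the value's own resourceType field
            @OneToMany(cascade = CascadeType.ALL)
            @MapKey(name = "resourceType")
            private Map<ResourceType, Resource> resources;
        }

        @Entity
        class Resource {
            @Id
            private long id;

            @ManyToOne
            private ResourceType resourceType;

            private long value;
        }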

    Read the article
