Search Results

Search found 1367 results on 55 pages for 'matthew pk'.


  • How to import and export only the data of a whole database in Access 2007

    - by DiegoMaK
    Hi, I have two identical databases with the same structure: database A (a.accdb) on computer A and database B (b.accdb) on computer B. Their data differ; for example, database A has IDs 1, 2, 3 and database B has IDs 4, 5, 6. I need to merge the data of both into a single database (A or B, it doesn't matter) so that the final database contains IDs 1, 2, 3, 4, 5, 6. I'm looking for an easy way to do this, because I have many tables and doing it with union queries is tedious. I'm looking for something like a data-only backup (without the schema), as in PostgreSQL and many other RDBMSs, but I don't see that option in Access 2007. PS: only some tables could contain duplicate values (I guess the PK won't allow a duplicate value to be copied, and all other rows will be copied fine). If I'm wrong please correct me. Thanks for your help.
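
    A hedged sketch of one way to do the merge from inside a.accdb, using an Access append query per table; the table name, column names and file path below are hypothetical, and the NOT IN guard skips rows whose PK already exists:

        INSERT INTO Customers (ID, FirstName, LastName)
        SELECT ID, FirstName, LastName
        FROM Customers IN 'C:\data\b.accdb'
        WHERE ID NOT IN (SELECT ID FROM Customers);

    Each table still needs its own append query, but the queries are mechanical and can be generated from the table list.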

    Read the article

  • How to dynamically modify NHibernate load queries at runtime? EventListeners? Interceptors?

    - by snicker
    I need to modify the query used to load many-to-one references in my model. Specifically, I need to be able to further filter this data. Unfortunately, NH will not allow me to filter many-to-one relationships using the built-in filtering system (or I could just be doing something incorrect). Is there a hook where I can manually and dynamically modify the query used to load the data? Or an alternative to filters that will allow me to specify parameters? Background: I am working with a database that uses a form of revision control, with each entity having a natural ID PK, an EntityId, a RevisionValidTo and a RevisionValidFrom field. There may be many rows using the same EntityId, which is the reference that other tables join on, but the revision ranges are mutually exclusive. Thus, the relationship is many-to-one only if the filter is applied. However, NH offers no way to specify a filter on many-to-one references (it does for collections...).

    Read the article

  • Uploading a CSV file to SQL Server - identity problem

    - by Doozer1979
    Given a column structure in a CSV file of First_Name, Last_Name, Date_Of_Birth, and a SQL Server table with a structure of ID (PK) | First_Name | Last_Name | Date_Of_Birth (the ID field is an identity with an auto-increment of 1), how do I arrange it so that SQL Server does not attempt to insert the First_Name column from the CSV file into the ID field? For info, the CSV is loaded into a DataTable and then copied to SQL Server using SqlBulkCopy. Should I be modifying the CSV file before the import to add the ID column? (The destination table is truncated prior to import, so there is no need to worry about duplicate key values.) Or perhaps adding an ID column to the DataTable? Or is there a setting in SQL Server that I may have missed?

    Read the article

  • Handling auto-incrementing IDENTITY SQL Server fields with LINQ to SQL in C#

    - by Maxim Z.
    I'm building an ASP.NET MVC site that uses LINQ to SQL to connect to SQL Server, where I have a table with an IDENTITY bigint primary key column that represents an ID. In one of my methods, I need to create an object for that table and get its ID, which I will place into another object based on another table (FK-to-PK relationship). At what point is the IDENTITY column value generated, and how can I obtain it from my code? Is the right approach to (1) create the object that has the IDENTITY column, (2) call InsertOnSubmit() and SubmitChanges() to submit the object to the database table, and then (3) read the ID property of the object?

    Read the article

  • Is it a good idea to use a computed column as part of a primary key ?

    - by Brann
    I've got a table defined as:

        OrderID bigint NOT NULL,
        IDA varchar(50) NULL,
        IDB bigint NULL,
        [ ... 50 other non-relevant columns ... ]

    The natural primary key for this table would be (OrderID, IDA, IDB), but this is not possible because IDA and IDB can be null (they can both be null, but they are never both defined at the same time). Right now I've got a unique constraint on those three columns. Now, the thing is I need a primary key to enable transactional replication, and I'm faced with two choices: (1) create an identity column and use it as the primary key, or (2) create a non-null computed column C containing either IDA or IDB, or '' if both columns are null, and use (OrderID, C) as my primary key. The second alternative seems cleaner, as my PK would be meaningful, and it is feasible (see msdn link), but since I've never seen this done anywhere, I was wondering if there are some cons to this approach.
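
    A hedged sketch of option (2) with a hypothetical table name; the computed column must be deterministic and PERSISTED before it can take part in a primary key:

        ALTER TABLE Orders
            ADD C AS ISNULL(IDA, ISNULL(CONVERT(varchar(50), IDB), '')) PERSISTED NOT NULL;

        ALTER TABLE Orders
            ADD CONSTRAINT PK_Orders PRIMARY KEY (OrderID, C);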

    Read the article

  • MSSQL: Primary Key Schema Largely Guid but Sometimes Integer Types...

    - by Code Sherpa
    OK, this may be a silly question, but... I have inherited a project and am tasked with going over the primary key relationships. The project largely uses GUIDs. I say "largely" because there are cases where tables use integral types to reflect enumerations. For example, dbo.MessageFolder has a MessageFolderId of type int to reflect public enum MessageFolderTypes { inbox = 1, sent = 2, trash = 3, etc. }. This happens a lot. There are tables with primary keys of type int, which is unavoidable because of their reliance on enumerations, and tables with primary keys of type Guid, which reflect the primary key choice of the previous programmer. Should I care that the PK schema is spotty like this? It doesn't feel right, but does it really matter? If this could create a problem, how do I get around it? (I really can't move all PKs to type int without serious legwork, and I have never heard of enumerations that have GUID values.) Thanks.

    Read the article

  • Copying data between tables without an identity column

    - by user668479
    I have two tables and I need to copy the data across from SRCServiceUsers to Clients. Every time I run it I get the following: "Violation of PRIMARY KEY constraint 'PK_Clients'. Cannot insert duplicate key in object 'dbo.Clients'. The statement has been terminated." The primary key ClientID field is not an identity column and therefore requires filling. To date I have the following:

        INSERT INTO Clients (ClientID, Title, Forenames, FamilyName, [Address], Town,
                             County, PostCode, PhoneNumber, StartDate)
        SELECT
            (SELECT MAX(Clients.ClientID) + 1),
            SRCServiceUsers.Title,
            SRCServiceUsers.[First Names],
            SRCServiceUsers.Surname,
            -- build up multiple columns
            SRCServiceUsers.[Property Name] + ', ' + SRCServiceUsers.Street + ', ' + SRCServiceUsers.Suburb AS [Address],
            SRCServiceUsers.Town,
            SRCServiceUsers.County,
            SRCServiceUsers.Postcode,
            SRCServiceUsers.Telephone,
            SRCServiceUsers.[Start Date]
        FROM SRCServiceUsers

    How can I auto-increment the PK field ClientID when inserting the data? Many thanks, Andrew
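
    A hedged sketch of one way to generate sequential ClientIDs in the INSERT itself, assuming the same table and column names as above: offset ROW_NUMBER() by the current maximum so every source row gets its own new key.

        INSERT INTO Clients (ClientID, Title, Forenames, FamilyName, [Address], Town,
                             County, PostCode, PhoneNumber, StartDate)
        SELECT
            ISNULL((SELECT MAX(ClientID) FROM Clients), 0)
                + ROW_NUMBER() OVER (ORDER BY s.Surname),   -- one new ID per source row
            s.Title,
            s.[First Names],
            s.Surname,
            s.[Property Name] + ', ' + s.Street + ', ' + s.Suburb,
            s.Town,
            s.County,
            s.Postcode,
            s.Telephone,
            s.[Start Date]
        FROM SRCServiceUsers AS s;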

    Read the article

  • table lock while creating table using select

    - by shal
    Using MySQL version 5.0.18, I am creating a table TT.

    Client 1:

        SET autocommit = 0;
        START TRANSACTION;
        CREATE TABLE TT SELECT * FROM PT;

    PT has two columns: pk bigint not null, name varchar(20).

    Client 2:

        SET autocommit = 0;
        START TRANSACTION;
        INSERT INTO PT VALUES (123, 'text');

    While inserting the row into PT, client 2 waits for client 1 to commit, and I am unable to insert the row. Why? Is it possible to insert the row without waiting for client 1 to commit?
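
    As far as I understand, the blocking happens because CREATE TABLE ... SELECT takes shared locks on the source rows (so the statement can be replicated safely), and those locks are held until client 1 commits. A hedged sketch of one common workaround is to split the copy so the lock-holding step is committed promptly:

        -- Client 1
        CREATE TABLE TT LIKE PT;            -- copies the table definition, no data
        START TRANSACTION;
        INSERT INTO TT SELECT * FROM PT;    -- still locks PT's rows, but only briefly...
        COMMIT;                             -- ...until this commit releases them

    This does not remove the locking; it just shortens the window during which client 2 is blocked.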

    Read the article

  • How can I map "insert='false' update='false'" on a composite-id key-property which is also used in a one-to-many FK?

    - by Gweebz
    I am working on a legacy code base with an existing DB schema. The existing code uses SQL and PL/SQL to execute queries on the DB. We have been tasked with making a small part of the project database-engine agnostic (at first; eventually, change everything). We have chosen to use Hibernate 3.3.2.GA and "*.hbm.xml" mapping files (as opposed to annotations). Unfortunately, it is not feasible to change the existing schema because we cannot regress any legacy features. The problem I am encountering is when I am trying to map a uni-directional, one-to-many relationship where the FK is also part of a composite PK. Here are the classes and the mapping file...

    CompanyEntity.java

        public class CompanyEntity {
            private Integer id;
            private Set<CompanyNameEntity> names;
            ...
        }

    CompanyNameEntity.java

        public class CompanyNameEntity implements Serializable {
            private Integer id;
            private String languageId;
            private String name;
            ...
        }

    CompanyNameEntity.hbm.xml

        <?xml version="1.0"?>
        <!DOCTYPE hibernate-mapping PUBLIC
            "-//Hibernate/Hibernate Mapping DTD 3.0//EN"
            "http://www.jboss.org/dtd/hibernate/hibernate-mapping-3.0.dtd">
        <hibernate-mapping package="com.example">
            <class name="com.example.CompanyEntity" table="COMPANY">
                <id name="id" column="COMPANY_ID"/>
                <set name="names" table="COMPANY_NAME" cascade="all-delete-orphan" fetch="join" batch-size="1" lazy="false">
                    <key column="COMPANY_ID"/>
                    <one-to-many entity-name="vendorName"/>
                </set>
            </class>
            <class entity-name="companyName" name="com.example.CompanyNameEntity" table="COMPANY_NAME">
                <composite-id>
                    <key-property name="id" column="COMPANY_ID"/>
                    <key-property name="languageId" column="LANGUAGE_ID"/>
                </composite-id>
                <property name="name" column="NAME" length="255"/>
            </class>
        </hibernate-mapping>

    This code works just fine for SELECT and INSERT of a Company with names. I encountered a problem when I tried to update an existing record: I received a BatchUpdateException, and after looking through the SQL logs I saw Hibernate was trying to do something stupid...

        update COMPANY_NAME set COMPANY_ID=null where COMPANY_ID=?

    Hibernate was trying to dis-associate the child records before updating them. The problem is that this field is part of the PK and not nullable. I found that the quick way to make Hibernate not do this is to add not-null="true" to the "key" element in the parent mapping. So now my mapping looks like this...

    CompanyNameEntity.hbm.xml

        <?xml version="1.0"?>
        <!DOCTYPE hibernate-mapping PUBLIC
            "-//Hibernate/Hibernate Mapping DTD 3.0//EN"
            "http://www.jboss.org/dtd/hibernate/hibernate-mapping-3.0.dtd">
        <hibernate-mapping package="com.example">
            <class name="com.example.CompanyEntity" table="COMPANY">
                <id name="id" column="COMPANY_ID"/>
                <set name="names" table="COMPANY_NAME" cascade="all-delete-orphan" fetch="join" batch-size="1" lazy="false">
                    <key column="COMPANY_ID" not-null="true"/>
                    <one-to-many entity-name="vendorName"/>
                </set>
            </class>
            <class entity-name="companyName" name="com.example.CompanyNameEntity" table="COMPANY_NAME">
                <composite-id>
                    <key-property name="id" column="COMPANY_ID"/>
                    <key-property name="languageId" column="LANGUAGE_ID"/>
                </composite-id>
                <property name="name" column="NAME" length="255"/>
            </class>
        </hibernate-mapping>

    This mapping gives the exception:

        org.hibernate.MappingException: Repeated column in mapping for entity: companyName column: COMPANY_ID (should be mapped with insert="false" update="false")

    My problem now is that I have tried to add these attributes to the key-property element, but that is not supported by the DTD. I have also tried changing it to a key-many-to-one element, but that didn't work either. So... how can I map "insert='false' update='false'" on a composite-id key-property which is also used in a one-to-many FK?

    Read the article

  • Composite Primary and Cardinality

    - by srini.venigalla
    I have some questions on composite primary keys and the cardinality of the columns. I searched the web but did not find any definitive answer, so I am trying again. Context: large (50M - 500M rows) OLAP prep tables, not NoSQL, not columnar; MySQL and DB2. The questions are: 1) Does the order of the keys in a PK matter? 2) If the cardinality of the columns varies heavily, which should come first? For example, if I have CLIENT/CAMPAIGN/PROGRAM where CLIENT is highly cardinal, CAMPAIGN is moderate, and PROGRAM is almost like a bitmap index, what order is best? 3) What order is best for joins, when there is a WHERE clause and when there is no WHERE clause (for views)? Thanks in advance.
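
    As a hedged illustration with a hypothetical table: in both MySQL/InnoDB and DB2 the primary key is backed by an index, so the column order matters for how queries can use it. The usual guidance is to lead with the column your WHERE clauses and joins supply most often and filter on most selectively, rather than simply the most cardinal one.

        -- Leading with client_id suits workloads that always filter by client first.
        CREATE TABLE client_campaign_program (
            client_id   BIGINT        NOT NULL,
            campaign_id BIGINT        NOT NULL,
            program_id  INT           NOT NULL,
            metric      DECIMAL(12,2),
            PRIMARY KEY (client_id, campaign_id, program_id)
        );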

    Read the article

  • How to add default value on django save form?

    - by Ignacio
    I have an object Task and a form that saves it. I want to automatically assign the created_by field to the currently logged-in user. So, my view is this:

        def new_task(request, task_id=None):
            message = None
            if task_id is not None:
                task = Task.objects.get(pk=task_id)
                message = 'TaskOK'
                submit = 'Update'
            else:
                task = Task(created_by=GPUser(user=request.user))
                submit = 'Create'
            if request.method == 'POST':  # If the form has been submitted...
                form = TaskForm(request.POST, instance=task)
                if form.is_valid():
                    task = form.save(commit=False)
                    task.created_by = GPUser(user=request.user)
                    task.save()
                    if message == None:
                        message = 'taskOK'
                    return tasks(request, message)
            else:
                form = TaskForm(instance=task)
            return custom_render('user/new_task.html', {'form': form, 'submit': submit, 'task_id': task.id}, request)

    The problem is, you guessed it, the created_by field doesn't get saved. Any ideas? Thanks
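
    A hedged guess at the cause, with a sketch of the usual pattern: GPUser(user=request.user) constructs a new, unsaved profile object rather than fetching the existing one, so created_by never points at a saved row. Assuming GPUser is a profile model with a relation to the auth user, the common approach is to exclude created_by from the form and set it from the existing profile just before saving:

        from django import forms

        class TaskForm(forms.ModelForm):
            class Meta:
                model = Task               # Task/GPUser as in the question
                exclude = ('created_by',)  # not user-editable

        # in the view, on POST:
        if form.is_valid():
            task = form.save(commit=False)
            task.created_by = GPUser.objects.get(user=request.user)  # existing profile
            task.save()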

    Read the article

  • How to convert full outer join query to O-R query?

    - by Kugel
    I'm converting a relational database into an object-relational one in Oracle. I have a query that uses a full outer join in the old one. Is it possible to write the same query for the O-R database without explicitly using a full outer join? For a normal inner join it's simple: I just use dot notation together with ref/deref. I'm interested in this in general, so let's say the relational query is:

        select a.attr, b.attr
        from a full outer join b on (a.fk = b.pk);

    I want to know if it's a good idea to do it this way:

        select a.attr, b.attr
        from a_obj a full outer join b_obj b on (a.b_ref = ref(b));

    Read the article

  • different markers on google maps v3

    - by user1447313
    I need help again :( How can I add different markers to Google Maps v3? Here is an example for my marker:

        var latlng = new google.maps.LatLng(lat, lng);
        var image = "../img/hotel_icon.png";
        var locationDescription = this.locationDescription;
        var marker = new google.maps.Marker({
            map: map,
            position: latlng,
            title: 'pk:' + name,
            icon: image
        });
        bounds.extend(latlng);
        setMarkes(map, marker, name, locationDescription);
        }); // close each
        map.fitBounds(bounds);
        }); // close getjson
        } // close initialize

        function setMarkes(map, marker, name, locationDescription) {
            google.maps.event.addListener(marker, 'mouseover', function() {
                infowindow.close();
                infowindow.setContent('<h3>' + name + '</h3><em>' + locationDescription + '</em>');
                infowindow.open(map, marker);
            });
        }

    Any help?

    Read the article

  • Store data in file system rather than SQL or Oracle database.

    - by nunu
    Hi all, I am working on an employee management system and I have two tables (for example) in the database, as given below.

    EmployeeMaster (DB table structure):

        EmployeeID (PK) | EmployeeName | City

    MonthMaster (DB table structure):

        Month | Year | EmployeeID (FK) | PresentDays | BasicSalary

    Now my question is: I want to store the data in the file system rather than in SQL Server or Oracle. I want file-system storage for insert, edit and delete operations, while keeping the relations between objects too. I am a C# developer; does anybody have thoughts or ideas on how to store data in the file system while keeping the relations between tables? Thanks in advance. Any ideas on it?

    Read the article

  • How to point to another table's ID with Hibernate?

    - by Wilhelm
    The problem: let's say I have two tables, Client and Product, in which Client has its primary key and a column called products (that points to PKs in the Product table)... OK, if I need products to point to only one row, that's fine, but if I need it to point to, say, 1000 rows in the Product table, the products column would have to be large enough... and I can't predict this situation. So, how could I design my table, and how would I use Hibernate with it, to achieve that "pointing" in an optimized and maybe "easy" way? NOTE: I excluded some columns from the design presented here, just to keep it simple.
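
    A hedged sketch of the usual relational answer, with hypothetical table and column names: rather than storing a list of product IDs in a column on Client, add a join table so each client/product pair is its own row; Hibernate can then map it as a many-to-many (or as a one-to-many to the join entity).

        CREATE TABLE client_product (
            client_id  BIGINT NOT NULL REFERENCES client(id),
            product_id BIGINT NOT NULL REFERENCES product(id),
            PRIMARY KEY (client_id, product_id)  -- one row per client/product pair
        );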

    Read the article

  • SQL Server uncorrelated subquery very slow

    - by brianberns
    I have a simple, uncorrelated subquery that performs very poorly on SQL Server. I'm not very experienced at reading execution plans, but it looks like the inner query is being executed once for every row in the outer query, even though the results are the same each time. What can I do to tell SQL Server to execute the inner query only once? The query looks like this:

        select *
        from Record record0_
        where record0_.RecordTypeFK = 'c2a0ffa5-d23b-11db-9ea3-000e7f30d6a2'
          and record0_.EntityFK in (
                select record1_.EntityFK
                from Record record1_
                join RecordTextValue textvalues2_
                  on record1_.PK = textvalues2_.RecordFK
                 and textvalues2_.FieldFK = '0d323c22-0ec2-11e0-a148-0018f3dde540'
                 and (textvalues2_.Value like 'O%' escape '~')
          )
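
    A hedged sketch of one way to force the inner result to be built exactly once: materialise it into a temp table, then join to it. Semantics match the IN version as long as duplicate EntityFK values are collapsed, which the DISTINCT handles.

        SELECT DISTINCT r1.EntityFK
        INTO #matching_entities
        FROM Record r1
        JOIN RecordTextValue tv ON r1.PK = tv.RecordFK
        WHERE tv.FieldFK = '0d323c22-0ec2-11e0-a148-0018f3dde540'
          AND tv.Value LIKE 'O%' ESCAPE '~';

        SELECT r.*
        FROM Record r
        JOIN #matching_entities m ON m.EntityFK = r.EntityFK
        WHERE r.RecordTypeFK = 'c2a0ffa5-d23b-11db-9ea3-000e7f30d6a2';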

    Read the article

  • relating data stored in NoSQL DB to data stored in SQL DB

    - by seanbrant
    What's the best way to use a SQL DB alongside a NoSQL DB? I want to keep my users and other data in Postgres, but have some data that would be better suited to a NoSQL DB like Redis. I see a lot of talk about switching to NoSQL but little talk about integrating it with existing systems. I think it would be foolish to throw the baby out with the bath water and ditch SQL altogether, unless it makes things easier to maintain and develop. I'm wondering what the best approach is for relating data stored in SQL to my data in Redis. I was thinking of something along the lines of this: the User object stored in SQL; the Book object in Redis, with the SHA-1 hash of the value as the key and a JSON string as the value; and the relations stored in Redis, with key User.pk:books and a Redis set of SHA-1s as the value. Does anyone have experience, tips, or better ways?
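
    A hedged Python sketch of the layout described above (redis-py assumed; User.pk comes from the SQL side, book payloads are JSON keyed by their SHA-1):

        import hashlib
        import json
        import redis

        r = redis.Redis()

        def store_book(user_pk, book):
            """Store a book as JSON keyed by its SHA-1 and index it under the user."""
            payload = json.dumps(book, sort_keys=True)
            sha1 = hashlib.sha1(payload.encode("utf-8")).hexdigest()
            r.set("book:%s" % sha1, payload)          # Book object in Redis
            r.sadd("user:%s:books" % user_pk, sha1)   # relation: User.pk -> set of SHA-1s
            return sha1

        def books_for(user_pk):
            """Resolve a user's book set back into dicts."""
            keys = ["book:%s" % s.decode() for s in r.smembers("user:%s:books" % user_pk)]
            return [json.loads(v) for v in r.mget(keys) if v] if keys else []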

    Read the article

  • How to properly reserve identity values for usage in a database?

    - by esac
    We have some code in which we need to maintain our own identity (PK) column in SQL. We have a table into which we bulk insert data, but we add data to related tables before the bulk insert is done, so we cannot use an IDENTITY column and find out the value up front. The current code selects the MAX value of the field and increments it by 1. Although there is a highly unlikely chance that two instances of our application will be running at the same time, it is still not thread-safe (not to mention that it goes to the database every time). I am using the ADO.NET entity model. How would I go about reserving a range of IDs to use, grabbing a new block when that range runs out, and guaranteeing that the same range will not be used by anyone else?
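
    A hedged sketch of one option (SQL Server 2012 or later; the sequence name is hypothetical): a SEQUENCE hands out values atomically across connections, and sp_sequence_get_range reserves a whole block in one round trip, which matches the "grab a range, use it locally" pattern described above.

        CREATE SEQUENCE dbo.MyTableIdSeq AS bigint START WITH 1 INCREMENT BY 1;

        -- Reserve a block of 1000 IDs for this application instance.
        DECLARE @first sql_variant;
        EXEC sys.sp_sequence_get_range
             @sequence_name     = N'dbo.MyTableIdSeq',
             @range_size        = 1000,
             @range_first_value = @first OUTPUT;

        SELECT CONVERT(bigint, @first) AS FirstReservedId;  -- use FirstReservedId .. FirstReservedId + 999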

    Read the article

  • django form creation on init

    - by John
    Hi, how can I add a field in the form's __init__ method? E.g. in the code below I want to add a profile field.

        class StaffForm(forms.ModelForm):
            def __init__(self, user, *args, **kwargs):
                if user.pk == 1:
                    self.fields['profile'] = forms.CharField(max_length=200)
                super(StaffForm, self).__init__(*args, **kwargs)

            class Meta:
                model = Staff

    I know I can add it just below the class StaffForm... line, but I want this to be dynamic depending on which user is passed in, so I can't do it that way. Thanks
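
    A hedged sketch of the usual fix: self.fields only exists after the parent __init__ has run, so call super() first and then add the dynamic field (the import path is a hypothetical stand-in for the question's Staff model):

        from django import forms
        from myapp.models import Staff  # hypothetical import path

        class StaffForm(forms.ModelForm):
            class Meta:
                model = Staff

            def __init__(self, user, *args, **kwargs):
                super(StaffForm, self).__init__(*args, **kwargs)  # builds self.fields
                if user.pk == 1:
                    self.fields['profile'] = forms.CharField(max_length=200)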

    Read the article

  • Override Django inlineformset_factory has_changed() to always return True

    - by John
    Hi, I am using the Django inlineformset_factory function:

        a = get_object_or_404(ModelA, pk=id)
        FormSet = inlineformset_factory(ModelA, ModelB)
        if request.method == 'POST':
            metaform = FormSet(instance=a, data=request.POST)
            if metaform.is_valid():
                f = metaform.save(commit=False)
                for instance in f:
                    instance.updated_by = request.user
                    instance.save()
        else:
            metaform = FormSet(instance=a)
        return render_to_response('nodes/form.html', {'form': metaform})

    What is happening is that if I change any of the data, everything works OK and all the data gets updated. However, if I don't change any of the data, it is not updated, i.e. only entries which have changed go through the for loop to be saved. I guess this makes sense, as there is no point saving data that has not changed. However, I need to go through and save every object in the form regardless of whether it has any changes or not. So my question is: how do I override this so that it goes through and saves every record, whether it has any changes or not? Hope this makes sense. Thanks
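
    A hedged sketch of one way around it without overriding has_changed(): formset.save(commit=False) only returns new or changed instances, but every bound form's instance is reachable through the formset's forms, so you can stamp and save them all yourself.

        if metaform.is_valid():
            for form in metaform.forms:
                # skip the blank "extra" forms that were never filled in
                if not form.instance.pk and not form.has_changed():
                    continue
                instance = form.save(commit=False)
                instance.updated_by = request.user
                instance.save()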

    Read the article

  • hierarchical data from self-referencing table in tree form

    - by Beta033
    It looks like this has been asked and answered in all the simple cases, excluding the one that I'm having trouble with. I've tried using a recursive CTE to generate this; however, maybe a cursor would be better? Or maybe a set of recursive functions will do the trick? Can this be done in a CTE? Consider the following table:

        PrimaryKey  ParentKey
        1           NULL
        2           1
        3           6
        4           7
        5           2
        6           1
        7           NULL

    which should yield:

        PK
        1
        -2
        --5
        -6
        --3
        7
        -4

    where the number of - marks equals the depth. My primary difficulty is the ordering.
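
    A hedged T-SQL sketch with a hypothetical table name: carry a depth and a zero-padded sort path down the recursion, then order by the path so each child sorts directly under its parent (this produces the 1, 2, 5, 6, 3, 7, 4 order shown above).

        WITH tree AS (
            SELECT PrimaryKey,
                   ParentKey,
                   0 AS depth,
                   CAST(RIGHT('000000' + CAST(PrimaryKey AS varchar(6)), 6) AS varchar(900)) AS sort_path
            FROM Nodes
            WHERE ParentKey IS NULL

            UNION ALL

            SELECT c.PrimaryKey,
                   c.ParentKey,
                   t.depth + 1,
                   CAST(t.sort_path + '.' + RIGHT('000000' + CAST(c.PrimaryKey AS varchar(6)), 6) AS varchar(900))
            FROM Nodes c
            JOIN tree t ON c.ParentKey = t.PrimaryKey
        )
        SELECT REPLICATE('-', depth) + CAST(PrimaryKey AS varchar(10)) AS PK
        FROM tree
        ORDER BY sort_path;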

    Read the article

  • Serialize Dictionary with a string key and List[] value to JSON

    - by Patrick
    How can I serialize a Python dictionary to JSON and pass it back to JavaScript, when the key is a string and the value is a list?

        if request.is_ajax() and request.method == 'GET':
            groupSet = GroupSet.objects.get(id=int(request.GET["groupSetId"]))
            groups = groupSet.groups.all()
            group_items = []        # list
            groups_and_items = {}   # dictionary
            for group in groups:
                group_items.extend([group_item for group_item in group.group_items.all()])
                # use group as the key and group_items (list) as the value
                groups_and_items[group] = group_items
            data = serializers.serialize("json", groups_and_items)
            return HttpResponse(data, mimetype="application/json")

    The result:

        [{"pk": 5, "model": "myApp.group", "fields": {"name": "\u6fb4\u9584", "group_items": [13]}}]

    while group_items should contain many group_item objects, and each group_item should have a "name" rather than only the id (in this case the id is 13). I need to serialize the group name, as well as each group_item's id and name, as JSON and pass it back to JavaScript. I am new to Python and Django; please advise me if you have a better way to do this. Thank you so much. :)
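
    A hedged sketch of one common fix (field names assumed from the question): django.core.serializers only understands querysets of model instances, so build plain dicts and lists yourself and dump them with the json module.

        import json
        from django.http import HttpResponse

        def group_payload(groups):
            """Build a JSON-serialisable structure: group name -> list of item dicts."""
            payload = {}
            for group in groups:
                payload[group.name] = [
                    {"id": item.pk, "name": item.name}
                    for item in group.group_items.all()
                ]
            return payload

        # in the view:
        # data = json.dumps(group_payload(groups))
        # return HttpResponse(data, mimetype="application/json")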

    Read the article

  • MySQL & relational databases: How to handle sharding/splitting at the application level?

    - by Industrial
    Hi everybody, I have thought a bit about sharding tables, since partitioning cannot be combined with foreign keys in a MySQL table. Maybe there's an option to switch to a different relational database that features both, but I don't see that as an option right now. So the sharding idea seems like a pretty decent thing. But what's a good approach to doing this at the application level? I am guessing that a starting point would be to suffix the tables with a maximum value for the primary key in each table, something like products_4000000, products_8000000 and products_12000000. The application would then have to check, with a simple if-statement, whether the id (PK) being requested is smaller than four, eight or twelve million before doing any actual database calls. So, is this a step in the right direction or are we doing something really stupid?
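
    A hedged Python sketch of the routing idea described above (the shard size and naming come from the question's own example): deriving the table suffix from the id keeps the lookup a constant-time calculation instead of a growing chain of if-statements.

        SHARD_SIZE = 4000000  # rows per shard: products_4000000, products_8000000, ...

        def shard_table(product_id, base="products"):
            """Return the table holding this id, e.g. 17 -> 'products_4000000'."""
            upper_bound = ((product_id - 1) // SHARD_SIZE + 1) * SHARD_SIZE
            return "%s_%d" % (base, upper_bound)

        # usage, with a hypothetical DB-API cursor:
        # table = shard_table(7345210)   # -> 'products_8000000'
        # cursor.execute("SELECT * FROM %s WHERE id = %%s" % table, (7345210,))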

    Read the article

  • SQL Querying for Threaded Messages

    - by Harper
    My site has a messaging feature where one user may message another. The messages support threading: a parent message may have any number of children, but only one level deep. The Messages table looks like this:

        Messages
        - Id (PK, auto-increment int)
        - UserId (FK to Users.Id)
        - FromUserId (FK to Users.Id)
        - ParentMessageId (FK to Messages.Id)
        - MessageText (varchar 200)

    I'd like to show messages on a page with each parent message followed by a collapsed view of its child messages. Can I use the GROUP BY clause or a similar construct to retrieve parent messages and child messages all in one query? Right now I am retrieving parent messages only, then looping through them and performing another query for each to get the related child messages. I'd like to get messages like this:

        Parent1
            Child1
            Child2
            Child3
        Parent2
            Child1
        Parent3
            Child1
            Child2
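
    A hedged sketch of a single-query approach (no GROUP BY needed; filtering by recipient via @UserId is an assumption): select both levels and sort by the thread root, so every parent is immediately followed by its children.

        SELECT m.Id,
               m.ParentMessageId,
               m.MessageText
        FROM Messages m
        WHERE m.UserId = @UserId
        ORDER BY COALESCE(m.ParentMessageId, m.Id),   -- keep each thread together
                 CASE WHEN m.ParentMessageId IS NULL  -- parent first within its thread
                      THEN 0 ELSE 1 END,
                 m.Id;                                -- children in posting order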

    Read the article
