Search Results

Search found 5380 results on 216 pages for 'primary'.

Page 179/216

  • Using an ORM with a database that has no defined relationships?

    - by Ahmad
    Consider a database (MSSQL 2005) that consists of 100+ tables which have primary keys defined to a certain degree. There are 'relationships' between tables, however these are not enforced with foreign key constraints. Consider the following simplified example of typical types of tables I am dealing with. There are clear relations between the User, City and Province tables. However, the key issues are the inconsistent data types in the tables and the naming conventions. User: UserRowId [int] PK Name [varchar(50)] CityId [smallint] ProvinceRowId [bigint] City: CityRowId [bigint] PK CityDescription [varchar(100)] Province: ProvinceId [int] PK ProvinceDesc [varchar(50)] I am considering a rewrite of the application (in ASP.net MVC) that uses this data source as is, and is similar in design to the MVC Storefront. However I am going through a proof of concept phase and this is one of the stumbling blocks I have come across. What are my options in terms of ORM choice that can be easily used, and why? Should I even be considering an ORM? (The reason I ask this is that most explanations and tutorials all work with relatively cleanly designed existing databases, or newly created ones, when compared to mine. I am thus having a very hard time trying to find a way forward with this problem.) There is a huge amount of existing SQL queries; would a data mapper (e.g. IBatis.net) be more suitable, since we could easily modify them to work and reuse the investment already made? I have found this question on SO which indicates to me that an ORM can be used - however I get the impression that this is a question of mapping? Note: at the moment, the object model is not clearly defined, as it was non-existent. The existing system pretty much did almost everything in SQL or consisted of overly complicated and numerous queries to complete functionality. I am pretty much a noob and have zero experience around ORMs and MVC - so this is an awesome learning curve I am on.

    Read the article

  • Classic ASP application-wide initializations and object caching

    - by slack3r
    In classic ASP (which I am forced to use), I have a few factory functions, that is, functions that return classes. I use JScript. In one include file I use these factory functions to create some classes that are used throughout the application. This include file is included with the #include directive in all pages. These factory functions do some "heavy lifting" and I don't want them to be executed on every page load. So, to make this clear I have something like this: // factory.inc function make_class(arg1, arg2) { function klass() { //... } // ... Some heavy stuff return klass; } // init.inc, included everywhere <!-- #include FILE="factory.inc" --> // ... MyClass1 = make_class(myarg01, myarg02); MyClass2 = make_class(myarg11, myarg12); //... How can I achieve the same effect without calling make_class on every page load? I know that I can't cache the classes in the Application object I can't use the Application_OnStart hook in Global.asa I could probably create a scripting component, but I really don't want to do that So, is there something else I can do? Maybe some way to achieve caching of these classes, which are really objects in JScript. PS: [further clarification] In the above code "heavy stuff" is not so heavy, but I just want to know if there's a way to avoid it being executed all the time. It reads database meta information, builds a table of the primary keys in the database and another table that resolves strings to classes, etc.

    Read the article

  • JoinColumn name not used in sql

    - by Vladimir
    Hi! I have a problem with mapping a many-to-one relationship without an exact foreign key constraint set in the database. I use the OpenJPA implementation with a MySQL database, but the problem is with the generated SQL scripts for insert and select statements. I have a LegalEntity table which contains a RootId column (among others). I also have an Address table which has a LegalEntityId column which is not nullable, and which should contain values referencing LegalEntity's "RootId" column, but without any database constraint (foreign key) set. The Address entity is mapped: @Entity @Table(name="address") public class Address implements Serializable { ... @ManyToOne(fetch=FetchType.LAZY, optional=false) @JoinColumn(referencedColumnName="RootId", name="LegalEntityId", nullable=false, insertable=true, updatable=true, table="LegalEntity") public LegalEntity getLegalEntity() { return this.legalEntity; } } The SELECT statement (when fetching LegalEntity's addresses) and the INSERT statement are generated as: SELECT t0.Id, .., t0.LEGALENTITY_ID FROM address t0 WHERE t0.LEGALENTITY_ID = ? ORDER BY t0.Id DESC [params=(int) 2] INSERT INTO address (..., LEGALENTITY_ID) VALUES (..., ?) [params=..., (int) 2] If I omit the table attribute from the mapping, the following statements are generated: SELECT t0.Id, ... FROM address t0 INNER JOIN legalentity t1 ON t0.LegalEntityId = t1.RootId WHERE t1.Id = ? ORDER BY t0.Id DESC [params=(int) 2] INSERT INTO address (...) VALUES (...) [params=...] So, LegalEntityId is not included in any of the statements. Is it possible to have a relationship based on such referencing (to a column other than the primary key, without a foreign key in the database)? Is there something else missing? Thanks in advance.

    Read the article

  • Using NULLs in matchup table

    - by TomWilsonFL
    I am working on the accounting portion of a reservation system (think limo company). In the system there are multiple objects that can either be paid or submit a payment. I am tracking all of these "transactions" in three tables called: tx, tx_cc, and tx_ch. tx generates a new tx_id (for transaction ID) and keeps the information about amount, validity, etc. Tx_cc and tx_ch keep the information about the credit card or check used, respectively, which link to other tables (credit_card and bank_account among others). This seems fairly normalized to me, no? Now here is my problem: The payment transaction can take place for a myriad of reasons. Either a reservation is being paid for, a travel agent that booked a reservation is being paid, a driver is being paid, etc. This results in multiple tables, one for each of the entities: agent_tx, driver_tx, reservation_tx, etc. They look like this: CREATE TABLE IF NOT EXISTS `driver_tx` ( `tx_id` int(10) unsigned zerofill NOT NULL, `driver_id` int(11) NOT NULL, `reservation_id` int(11) default NULL, `reservation_item_id` int(11) default NULL, PRIMARY KEY (`tx_id`) ) ENGINE=InnoDB DEFAULT CHARSET=utf8; Now this transaction is for a driver, but could be applied to an individual item on the reservation or the entire reservation overall. Therefore I demand either reservation_id OR reservation_item_id to be null. In the future there may be other things which a driver is paid for, which I would also add to this table, defaulting to null. What is the rule on this? Opinion? Obviously I could break this out into MANY three column tables, but the amount of OUTER JOINing needed seems outrageous. Your input is appreciated. Peace, Tom
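
    A minimal sketch of enforcing that either/or rule at the database level, assuming the driver_tx definition above (note that MySQL versions before 8.0.16 parse but do not enforce CHECK constraints, so on an older server this would have to be a trigger instead):

        ALTER TABLE `driver_tx`
          ADD CONSTRAINT `chk_driver_tx_target`
          CHECK (`reservation_id` IS NULL OR `reservation_item_id` IS NULL);

    The constraint would need extending if further nullable target columns are added to the table later.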

    Read the article

  • 'LINQ query plan' horribly inefficient but 'Query Analyser query plan' is perfect for same SQL!

    - by Simon_Weaver
    I have a LINQ to SQL query that generates the following SQL: exec sp_executesql N'SELECT COUNT(*) AS [value] FROM [dbo].[SessionVisit] AS [t0] WHERE ([t0].[VisitedStore] = @p0) AND (NOT ([t0].[Bot] = 1)) AND ([t0].[SessionDate] > @p1)',N'@p0 int,@p1 datetime', @p0=1,@p1='2010-02-15 01:24:00' (This is the actual SQL taken from SQL Profiler on SQL Server 2008.) The query plan generated when I run this SQL from within Query Analyser is perfect. It uses an index containing VisitedStore, Bot, SessionDate. The query returns instantly. However, when I run this from C# (with LINQ), a different query plan is used that is so inefficient it doesn't even return in 60 seconds. This query plan is trying to do a key lookup on the clustered primary key which contains a couple million rows. It has no chance of returning. What I just can't understand though is that the EXACT same SQL is being run - either from within LINQ or from within Query Analyser - yet the query plan is different. I've run the two queries many, many times and they're now running in isolation from any other queries. The date is DateTime.Now.AddDays(-7), but I've even hardcoded that date to eliminate caching problems. Is there anything I can change in LINQ to SQL to affect the query plan or try to debug this further? I'm very, very confused!
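
    One possible culprit, offered as a guess rather than something established above: Query Analyser/SSMS runs with SET ARITHABORT ON by default while ADO.NET connections run with it OFF, so the two clients get separate plan-cache entries for the same SQL, and parameter sniffing can leave the ADO.NET entry with the bad plan. Re-running the captured statement under the ADO.NET setting is a quick way to confirm or eliminate this:

        SET ARITHABORT OFF;
        exec sp_executesql N'SELECT COUNT(*) AS [value] FROM [dbo].[SessionVisit] AS [t0] WHERE ([t0].[VisitedStore] = @p0) AND (NOT ([t0].[Bot] = 1)) AND ([t0].[SessionDate] > @p1)',N'@p0 int,@p1 datetime', @p0=1,@p1='2010-02-15 01:24:00'

    If that reproduces the slow plan, the difference comes from the session setting and plan cache, not from LINQ itself.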

    Read the article

  • How to find an embedded platform?

    - by gmagana
    I am new to the hardware-selection side of embedded programming, and so after being completely overwhelmed with all the choices out there (pc104, custom boards, a zillion options for each board, volume discounts, devel kits, ahhh!!) I am asking here for some direction. Basically, I must find a new motherboard and (most likely) re-implement the program logic. Rewriting this in C/C++/Java/C#/Pascal/BASIC is not a problem for me, so my real problem is finding the hardware. This motherboard will have several other devices attached to it. Here is a summary of what I need to do: Required: 2 RS232 serial ports (one used all the time for primary UI, the second one not continuous) 1 modem (9600+ baud ok) [Modem will be in simultaneous use with only one of the serial port devices, so interrupt sharing with one serial port is OK, but not both] Minimum permanent/long term storage: Whatever the O/S requires + 1 MB (executable) + 512 KB (Data files) RAM: Minimal, whatever the O/S requires plus maybe 1MB for the executable. Nice to have: USB port(s) Ethernet network port Wireless network Implementation languages (any O/S I will adapt to): First choice Java/C# (Mono ok) Second choice is C/Pascal Third is BASIC Ok, given all this, I am having a lot of trouble finding hardware that will support this and that is low in cost. Every manufacturer site I visit has a lot of options, and it's difficult to see if their offering will even satisfy my must-have requirements (for example, they sometimes list 3 "serial ports", but it appears that only one of the three is RS232, and they don't mention what the other two are). The #1 constraint is cost, #2 is size. Can anyone help me with this? This little task has left me thinking I should have gone for EE and not CS :-). EDIT: A bit of background: This is a system currently in production, but the original programmer passed away, and the current hardware manufacturer cannot find hardware to run the (currently) DOS system, so I need to reimplement this on a modern platform. I can only change the programming and the motherboard hardware.

    Read the article

  • MySQL Need some help with a query

    - by Jules
    I'm trying to fix some data by adding a new field. I have a backup from a few months ago and I have restored this database to my server. I'm looking at a table called pads; its primary key is PadID and the field of importance is called RemoveMeDate. In my restored (older) database there are fewer records with an actual date set in RemoveMeDate. My control date is 2001-01-01 00:00:00, meaning that the record is not hidden (i.e. visible). What I need to do is select all the records from the older database/table with the control date and join with those from the newer db/table where the control date is not set. I hope I've explained that correctly. I'll try again, with numbers. I have 80,000 visible records in the older table (with the control date set) and 30,000 in the newer db/table. I need to select the 50,000 from the old database, to perform an update query. Here's my query, which I can't get to work as I'd like. jules-fix-reasons is the old database, jules is the newer one. select p.padid from `jules-fix-reasons`.`pads` p JOIN `jules`.`pads` ON p.padid = `jules`.`pads`.`PadID` where p.RemoveMeDate <> '2001-01-01 00:00:00' AND `jules`.`pads`.RemoveMeDate = '2001-01-01 00:00:00'
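
    Assuming the schema described above, the follow-up update can also be done in one cross-database statement instead of a separate select. This is only a sketch, and the two date conditions may need swapping depending on which direction the visibility flag is meant to flow:

        UPDATE `jules`.`pads` AS new_p
        JOIN `jules-fix-reasons`.`pads` AS old_p
          ON new_p.`PadID` = old_p.`PadID`
        SET new_p.`RemoveMeDate` = old_p.`RemoveMeDate`
        WHERE old_p.`RemoveMeDate` <> '2001-01-01 00:00:00'
          AND new_p.`RemoveMeDate` = '2001-01-01 00:00:00';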

    Read the article

  • Create new or update existing entity at one go with JPA

    - by Alex R
    I have a JPA entity that has a timestamp field and is distinguished by a complex identifier field. What I need is to update the timestamp in an entity that has already been stored, otherwise create and store a new entity with the current timestamp. As it turns out, the task is not as simple as it seems at first sight. The problem is that in a concurrent environment I get a nasty "Unique index or primary key violation" exception. Here's my code: // Load existing entity, if any. Entity e = entityManager.find(Entity.class, id); if (e == null) { // Could not find entity with the specified id in the database, so create a new one. e = entityManager.merge(new Entity(id)); } // Set current time... e.setTimestamp(new Date()); // ...and finally save entity. entityManager.flush(); Please note that in this example the entity identifier is not generated on insert, it is known in advance. When two or more threads run this block of code in parallel, they may simultaneously get null from the entityManager.find(Entity.class, id) method call, so they will attempt to save two or more entities at the same time, with the same identifier, resulting in an error. I think that there are a few solutions to the problem. Sure, I could synchronize this code block with a global lock to prevent concurrent access to the database, but would it be the most efficient way? Some databases support a very handy MERGE statement that updates an existing row or creates a new one if none exists. But I doubt that OpenJPA (the JPA implementation of my choice) supports it. Even if JPA does not support SQL MERGE, I can always fall back to plain old JDBC and do whatever I want with the database. But I don't want to leave the comfortable API and mess with a hairy JDBC+SQL combination. Perhaps there is a magic trick to fix it using the standard JPA API only, but I don't know it yet. Please help.
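
    For reference, the SQL-level upsert mentioned above looks roughly like this; SQL Server-style MERGE syntax is shown purely as an illustration, and the table name, column names and literal id are assumptions, since the question does not name the underlying database or schema:

        DECLARE @id int = 42;  -- identifier known in advance, as in the JPA example
        MERGE INTO entity AS t
        USING (SELECT @id AS id) AS s
          ON t.id = s.id
        WHEN MATCHED THEN
          UPDATE SET t.last_seen = GETDATE()
        WHEN NOT MATCHED THEN
          INSERT (id, last_seen) VALUES (s.id, GETDATE());

    The insert-or-update decision then happens in a single statement rather than in the separate find-then-persist steps above.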

    Read the article

  • Django unit testing: South-migrated DB works in MySQL, throws duplicate PK error in PostGreSQL. Am I

    - by unclaimedbaggage
    Hi folks, (Worth starting off with a disclaimer: I'm very new to PostGreSQL) I have a django site which involves a standard app/tests.py testing file. If I migrate the DB to MySQL (through South),, the tests all pass. However in PostGresQL, I'm getting the following error: IntegrityError: duplicate key value violates unique constraint "business_contact_pkey" Note this happens while unit testing only - the actual page runs fine in both MySQL & PostGresql. Really having a heckuva time figuring this one out. Anyone have ideas? Below are the Postgresql "\d business_contact" & offending tests.py method if they help. No changes made to either DB except the (same) South migrations Thanks first_name | character varying(200) | not null mobile_phone | character varying(100) | surname | character varying(200) | not null business_id | integer | not null created | timestamp with time zone | not null deleted | boolean | not null default false updated | timestamp with time zone | not null slug | character varying(150) | not null phone | character varying(100) | email | character varying(75) | id | integer | not null default nextval('business_contact_id_seq'::regclass) Indexes: "business_contact_pkey" PRIMARY KEY, btree (id) "business_contact_slug_key" UNIQUE, btree (slug) "business_contact_business_id" btree (business_id) Foreign-key constraints: "business_id_refs_id_772cc1b7b40f4b36" FOREIGN KEY (business_id) REFERENCES business(id) DEFERRABLE INITIALLY DEFERRED Referenced by: TABLE "business" CONSTRAINT "primary_contact_id_refs_id_dfaf59c4041c850" FOREIGN KEY (primary_contact_id) REFERENCES business_contact(id) DEFERRABLE INITIALLY DEFERRED TEST DEF: def test_add_business_contact(self): """ Add a business contact """ contact_slug = 'test-new-contact-added-new-adf' business_id = 1 business = Business.objects.get(id=business_id) postdata = { 'first_name': 'Test', 'surname': 'User', 'business': '1', 'slug': contact_slug, 'email': '[email protected]', 'phone': '12345678', 'mobile_phone': '9823452', 'business': 1, 'business_id': 1, } #Test to ensure contacts that should not exist are not returned contact_not_exists = Contact.objects.filter(slug=contact_slug) self.assertFalse(contact_not_exists) #Add the contact and ensure it is present in the DB afterwards """ contact_add_url = '%s%s/contact/add/' % (settings.BUSINESS_URL, business.slug) self.client.post(contact_add_url, postdata) added_contact = Contact.objects.filter(slug=contact_slug) print added_contact try: self.assertTrue(added_contact) except: formset = ContactForm(postdata) print formset.errors self.assertFalse(True, "Contact not found in the database - most likely, the post values in the test didn't validate against the form")
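
    One common cause of exactly this symptom - offered as a guess, not something the question establishes - is that the test fixtures insert business_contact rows with explicit ids, so business_contact_id_seq is left pointing behind the existing data and the next default-valued insert collides. If that is what is happening, resynchronising the sequence clears it:

        SELECT setval('business_contact_id_seq',
                      (SELECT COALESCE(MAX(id), 1) FROM business_contact));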

    Read the article

  • Credit Card storage solution

    - by jtnire
    Hi Everyone, I'm developing a solution that is designed to store membership details, as well as credit card details. I'm trying to comply with PCI DSS as much as I can. Here is my design so far: PAN = Primary account number == long number on credit card Server A is a remote server. It stores all membership details (Names, Address etc..) and provides individual Key A's for each PAN stored. Server B is a local server, and actually holds the encrypted PANs, as well as Key B, and does the decryption. To get a PAN, the client has to authenticate with BOTH servers, ask Server A for the respective Key A, then give Key A to Server B, which will return the PAN to the client (provided authentication was successful). Server A will only ever encrypt Key A with Server B's public key, as it will have it beforehand. Server B will probably have to send a salt first, though; however, I don't think that has to be encrypted. I haven't really thought about any implementation (i.e. coding) specifics yet regarding the above, however the solution is using Java's Cajo framework (a wrapper for RMI), so that is how the servers will communicate with each other (currently, membership details are transferred in this way). The reason why I want Server B to do the decryption, and not the client, is that I am afraid of decryption keys going into the client's RAM, even though it's probably just as bad on the server... Can anyone see anything wrong with the above design? It doesn't matter if the above has to be changed. Thanks jtnire

    Read the article

  • edmx - The operation could not be completed - When adding Inheritance

    - by vdh_ant
    Hey guys, I have an edmx model which I have dragged 2 tables onto - one called 'File' and the other 'ApplicaitonFile'. These two tables have a 1 to 1 relationship in the database. If I stop here everything works fine. But in my model, I want 'ApplicaitonFile' to inherit from 'File'. So I delete the 1 to 1 relationship, then configure 'ApplicaitonFile' to inherit from 'File', and then remove the FileId from 'ApplicaitonFile', which was the primary key. (Note I am following the instructions from here). If I leave the model open at this point everything is fine, but as soon as I close it, if I try and reopen it again I get the following error: "The operation could not be completed". I have been searching for a solution and found this - http://stackoverflow.com/questions/944050/entity-model-does-not-load - but as far as I can tell I don't have duplicate InheritanceConnectors (although I don't know exactly what I'm looking for, I can't see anything out of the ordinary - like 2 connectors with the same name), and the relationship I originally have is a 1 to 1, not a 1 to 0..1. Any ideas??? This is driving me crazy...

    Read the article

  • What is the correct way to increment a field making up part of a composite key

    - by Tr1stan
    I have a bunch of tables whose primary key is made up of the foreign keys of other tables (composite key). Therefore, for example, the attributes (as a very cut down version) might look like this: A[aPK, SomeFields] 1:M B[bPK, aFK, SomeFields] 1:M C[cPK, bFK, aFK, SomeFields] As data, this could look like: A[aPK, SomeFields]: 1, Foo 2, Bar B[bPK, aFK, SomeFields]: 1, 1, FooData1 2, 1, FooData2 1, 2, BarData1 2, 2, BarData2 C[cPK, bFK, aFK, SomeFields]: 1, 1, 1, FooData1More 2, 1, 1, FooData1More 1, 2, 1, FooData2More 2, 2, 1, FooData2More 1, 1, 2, BarData1More 2, 1, 2, BarData1More 1, 2, 2, BarData2More 2, 2, 2, BarData2More I've got this running in an MSSQL DBMS and I'm looking for the best way to increment the leftmost column in each table when a new tuple is added to it. I can't use the Auto Increment Identity Specification option, as that has no idea that it is part of a composite key. I also don't want to use any aggregate function such as MAX(field)+1, as this will have adverse effects with multiple users inputting data, rolling back, etc. There might, however, be a nice trigger-based option here, but I'm not sure. This must be a common issue, so I'm hoping that someone has a lovely solution. As an aside, which may or may not affect the answer, I'm using Entity Framework 1.0 as my ORM, within a C# MVC application.
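
    A trigger is one route; another sketch, shown here against table B from the example (T-SQL, with placeholder values for the new row), avoids the plain MAX(field)+1 race by taking the per-parent maximum under a key-range lock inside the same transaction as the insert, so concurrent inserts for the same aFK serialise instead of reading the same maximum:

        DECLARE @aFK int, @someFields varchar(50), @next int;
        SET @aFK = 1;                 -- placeholder parent key
        SET @someFields = 'example';  -- placeholder data
        BEGIN TRAN;
          SELECT @next = ISNULL(MAX(bPK), 0) + 1
          FROM B WITH (UPDLOCK, HOLDLOCK)  -- holds the key range until commit
          WHERE aFK = @aFK;
          INSERT INTO B (bPK, aFK, SomeFields) VALUES (@next, @aFK, @someFields);
        COMMIT;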

    Read the article

  • Creating an appropriate index for a frequently used query in SQL Server

    - by Slauma
    In my application I have two queries which will be quite frequently used. The Where clauses of these queries are the following: WHERE FieldA = @P1 AND (FieldB = @P2 OR FieldC = @P2) and WHERE FieldA = @P1 AND FieldB = @P2 P1 and P2 are parameters entered in the UI or coming from external datasources. FieldA is an int and highly non-unique, meaning only two, three, or four different values in a table with say 20000 rows. FieldB is a varchar(20) and is "almost" unique; there will be only very few rows where FieldB might have the same value. FieldC is a varchar(15) and also highly distinct, but not as much as FieldB. FieldA and FieldB together are unique (but do not form my primary key, which is a simple auto-incrementing identity column with a clustered index). I'm wondering now what's the best way to define an index to speed up specifically these two queries. Shall I define one index with... FieldB (or better FieldC here?) FieldC (or better FieldB here?) FieldA ... or better two indices: FieldB FieldA and FieldC FieldA Or are there even other and better options? What's the best way and why? Thank you for suggestions in advance!
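
    For what it is worth, the two-index option sketched as DDL (the table name is a placeholder; whether it actually beats a single three-column index is the question being asked and depends on the data):

        CREATE NONCLUSTERED INDEX IX_MyTable_FieldB_FieldA ON dbo.MyTable (FieldB, FieldA);
        CREATE NONCLUSTERED INDEX IX_MyTable_FieldC_FieldA ON dbo.MyTable (FieldC, FieldA);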

    Read the article

  • Is there any benefit to my rather quirky character sizing convention?

    - by Paul Alan Taylor
    I love things that are a power of 2. I celebrated my 32nd birthday knowing it was the last time in 32 years I'd be able to claim that my age was a power of 2. I'm obsessed. It's like being some Z-list Batman villain, except without the colourful adventures and a face full of batarangs. I ensure that all my enum values are powers of 2, if only for future bitwise operations, and I'm reasonably assured that there is some purpose (even if latent) for doing it. Where I'm less sure, is in how I define the lengths of database fields. Again, I can't help it. Everything ends up being a power of 2. CREATE TABLE Person ( PersonID int IDENTITY PRIMARY KEY ,Firstname varchar(64) ,Surname varchar(128) ) Can any SQL super-boffins who know about the internals of how stuff is stored and retrieved tell me whether there is any benefit to my inexplicable obsession? Is it more efficient to size character fields this way? Can anyone pop in with an "actually, what you're doing works because ....."? I suspect I'm just getting crazier in my older age, but it'd be nice to know that there is some method to my madness.

    Read the article

  • Gdata JavaScript Authsub continues redirect

    - by Krustal
    I am using the JavaScript Google Data API and having issues getting the AuthSub script to work correctly. This is my script currently: google.load('gdata', '1'); function getCookie(c_name){ if(document.cookie.length>0){ c_start=document.cookie.indexOf(c_name + "="); if(c_start!=-1){ c_start=c_start + c_name.length+1; c_end=document.cookie.indexOf(";",c_start); if(c_end==-1) c_end=document.cookie.length; return unescape(document.cookie.substring(c_start, c_end)); } } return ""; } function main(){ var scope = 'http://www.google.com/calendar/feeds/'; if(!google.accounts.user.checkLogin(scope)){ google.accounts.user.login(); } else { /* * Retrieve all calendars */ // Create the calendar service object var calendarService = new google.gdata.calendar.CalendarService('GoogleInc-jsguide-1.0'); // The default "allcalendars" feed is used to retrieve a list of all // calendars (primary, secondary and subscribed) of the logged-in user var feedUri = 'http://www.google.com/calendar/feeds/default/allcalendars/full'; // The callback method that will be called when getAllCalendarsFeed() returns feed data var callback = function(result) { // Obtain the array of CalendarEntry var entries = result.feed.entry; //for (var i = 0; i < entries.length; i++) { var calendarEntry = entries[0]; var calendarTitle = calendarEntry.getTitle().getText(); alert('Calendar title = ' + calendarTitle); //} } // Error handler to be invoked when getAllCalendarsFeed() produces an error var handleError = function(error) { alert(error); } // Submit the request using the calendar service object calendarService.getAllCalendarsFeed(feedUri, callback, handleError); } } google.setOnLoadCallback(main); However, when I run this, the page redirects me to the authentication page. After I authenticate, it sends me back to my page and then quickly sends me back to the authentication page again. I've included alerts to check if the token is being set and it doesn't seem to be working. Has anyone had this problem?

    Read the article

  • NHibernate One to One Foreign Key ON DELETE CASCADE

    - by xll
    I need to implement a one-to-one association between Project and ProjectSettings using Fluent NHibernate: public class ProjectMap : ClassMap<Project> { public ProjectMap() { Id(x => x.Id) .UniqueKey(MapUtils.Col<Project>(x => x.Id)) .GeneratedBy.HiLo("NHHiLoIdentity", "NextHiValue", "1000", string.Format("[EntityName] = '[{0}]'", MapUtils.Table<Project>())) .Not.Nullable(); HasOne(x => x.ProjectSettings) .PropertyRef(x => x.Project); } } public class ProjectSettingsMap : ClassMap<ProjectSettings> { public ProjectSettingsMap() { Id(x => x.Id) .UniqueKey(MapUtils.Col<ProjectSettings>(x => x.Id)) .GeneratedBy.HiLo("NHHiLoIdentity", "NextHiValue", "1000", string.Format("[EntityName] = '[{0}]'", MapUtils.Table<ProjectSettings>())); References(x => x.Project) .Column(MapUtils.Ref<ProjectSettings, Project>(p => p.Project, p => p.Id)) .Unique() .Not.Nullable(); } } This results in the following SQL for ProjectSettings: CREATE TABLE ProjectSettings ( Id bigint PRIMARY KEY NOT NULL, Project_Project_Id bigint NOT NULL UNIQUE, /* Foreign keys */ FOREIGN KEY (Project_Project_Id) REFERENCES Project() ON DELETE NO ACTION ON UPDATE NO ACTION ); What I am trying to achieve is to have ON DELETE CASCADE for the FOREIGN KEY (Project_Project_Id), so that when the project is deleted through a SQL query, its settings are deleted too. How can I achieve this? EDIT: I know about the Cascade.Delete() option, but it's not what I need. Is there any way to intercept the FK statement generation?
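
    For reference, the DDL being aimed at puts the cascade on the foreign key itself; a sketch against the generated script above, in which the constraint name and the referenced Id column are assumptions:

        ALTER TABLE ProjectSettings
          ADD CONSTRAINT FK_ProjectSettings_Project
          FOREIGN KEY (Project_Project_Id) REFERENCES Project (Id)
          ON DELETE CASCADE;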

    Read the article

  • SQL Server stored procedure return code oddity

    - by gbn
    Hello The client that calls this code is restricted and can only deal with return codes from stored procs. So, we modified our usual contract to RETURN -1 on error and default to RETURN 0 if no error If the code hits the inner catch block, then the RETURN code default to -4. Where does this come from, does anyone know...? IF OBJECT_ID('dbo.foo') IS NOT NULL DROP TABLE dbo.foo GO CREATE TABLE dbo.foo ( KeyCol char(12) NOT NULL, ValueCol xml NOT NULL, Comment varchar(1000) NULL, CONSTRAINT PK_foo PRIMARY KEY CLUSTERED (KeyCol) ) GO IF OBJECT_ID('dbo.bar') IS NOT NULL DROP PROCEDURE dbo.bar GO CREATE PROCEDURE dbo.bar @Key char(12), @Value xml, @Comment varchar(1000) AS SET NOCOUNT ON DECLARE @StartTranCount tinyint; BEGIN TRY SELECT @StartTranCount = @@TRANCOUNT; IF @StartTranCount = 0 BEGIN TRAN; BEGIN TRY --SELECT @StartTranCount = 'fish' INSERT dbo.foo (KeyCol, ValueCol, Comment) VALUES (@Key, @Value, @Comment); END TRY BEGIN CATCH IF ERROR_NUMBER() = 2627 --PK violation UPDATE dbo.foo SET ValueCol = @Value, Comment = @Comment WHERE KeyCol = @Key; ELSE RAISERROR ('Tits up', 16, 1); END CATCH IF @StartTranCount = 0 COMMIT TRAN; END TRY BEGIN CATCH IF @StartTranCount = 0 AND XACT_STATE() <> 0 ROLLBACK TRAN; RETURN -1 END CATCH --Without this, we'll send -4 if we hit the UPDATE CATCH block above --RETURN 0 GO --Run with RETURN 0 and fish line commented out DECLARE @rtn int EXEC @rtn = dbo.bar 'abcdefghijkl', '<foobar />', 'testing' SELECT @rtn; SELECT * FROM dbo.foo DECLARE @rtn int EXEC @rtn = dbo.bar 'abcdefghijkl', '<foobar2 />', 'testing2' --updated OK but we get @rtn = -4 SELECT @rtn; SELECT * FROM dbo.foo --uncomment fish line DECLARE @rtn int EXEC @rtn = dbo.bar 'abcdefghijkl', '<foobar />', 'testing' --Hit outer CATCH, @rtn = -1 as expected SELECT @rtn; SELECT * FROM dbo.foo

    Read the article

  • query structure - ignoring entries for the same event from multiple users?

    - by Andrew Heath
    One table in my MySQL database tracks game plays. It has the following structure: SCENARIO_VICTORIES [ID] [scenario_id] [game] [timestamp] [user_id] [winning_side] [play_date] ID is the autoincremented primary key. timestamp records the moment of submission for the record. winning_side has one of three possible values: 1, 2, or 0 (meaning a draw) One of the queries done on this table calculates the victory percentage for each scenario, when that scenario's page is viewed. The output is expressed as: Side 1 win % Side 2 win % Draw % and queried with: SELECT winning_side, COUNT(scenario_id) FROM scenario_victories WHERE scenario_id='$scenID' GROUP BY winning_side ORDER BY winning_side ASC and then processed into the percentages and such. Sorry for the long setup. My problem is this: several of my users play each other, and record their mutual results. So these battles are being doubly represented in the victory percentages and result counts. Though this happens infrequently, the userbase isn't large and the double entries do have a noticeable effect on the data. Given the table and query above - does anyone have any suggestions for how I can "collapse" records that have the same play_date & game & scenario_id & winning_side so that they're only counted once?
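
    One possible shape for the collapsed count, assuming a "battle" really is pinned down by scenario_id, game, play_date and winning_side as described: remove the duplicates in a derived table first, then group as before.

        SELECT winning_side, COUNT(*) AS victories
        FROM (
            SELECT DISTINCT scenario_id, game, play_date, winning_side
            FROM scenario_victories
            WHERE scenario_id = '$scenID'
        ) AS collapsed
        GROUP BY winning_side
        ORDER BY winning_side ASC;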

    Read the article

  • Right way to return proxy model instance from a base model instance in Django ?

    - by sotangochips
    Say I have models: class Animal(models.Model): type = models.CharField(max_length=255) class Dog(Animal): def make_sound(self): print "Woof!" class Meta: proxy = True class Cat(Animal): def make_sound(self): print "Meow!" class Meta: proxy = True Let's say I want to do: animals = Animal.objects.all() for animal in animals: animal.make_sound() I want to get back a series of Woofs and Meows. Clearly, I could just define a make_sound in the original model that forks based on animal_type, but then every time I add a new animal type (imagine they're in different apps), I'd have to go in and edit that make_sound function. I'd rather just define proxy models and have them define the behavior themselves. From what I can tell, there's no way of returning mixed Cat or Dog instances, but I figured maybe I could define a "get_proxy_model" method on the main class that returns a cat or a dog model. Surely you could do this, and pass something like the primary key and then just do Cat.objects.get(pk = passed_in_primary_key). But that'd mean doing an extra query for data you already have which seems redundant. Is there any way to turn an animal into a cat or a dog instance in an efficient way? What's the right way to do what I want to achieve?

    Read the article

  • How to return a record from function executed by INSERT/UPDATE rule?

    - by seas
    I have the following schema for my database: create sequence data_sequence; create table data_table ( id integer primary key, field varchar(100) ); create view data_view as select id, field from data_table; create function data_insert(_new data_view) returns data_view as $$declare _id integer; _result data_view%rowtype; begin _id := nextval('data_sequence'); insert into data_table(id, field) values(_id, _new.field); select * into _result from data_view where id = _id; return _result; end; $$ language plpgsql; create rule insert as on insert to data_view do instead select data_insert(new); Then I type in psql: insert into data_view(field) values('abc'); I would like to see something like: id | field ----+--------- 1 | abc Instead I see: data_insert ------------- (1, "abc") Is it possible to fix this somehow? Thanks for any ideas. The ultimate idea is to use this in other functions, so that I could obtain the id of the just-inserted record without selecting for it from scratch. Something like: insert into data_view(field) values('abc') returning id into my_variable would be nice, but doesn't work, with the error: ERROR: cannot perform INSERT RETURNING on relation "data_view" HINT: You need an unconditional ON INSERT DO INSTEAD rule with a RETURNING clause. I don't really understand that HINT. I use PostgreSQL 8.4.
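
    Regarding that HINT, a minimal sketch of what it asks for is a single unconditional DO INSTEAD INSERT rule whose RETURNING list matches the view's columns; note this replaces routing the insert through data_insert(), so it is a different approach rather than a tweak to the function above:

        CREATE RULE data_view_insert AS ON INSERT TO data_view
        DO INSTEAD
          INSERT INTO data_table (id, field)
          VALUES (nextval('data_sequence'), NEW.field)
          RETURNING data_table.id, data_table.field;

    With such a rule in place, insert into data_view(field) values('abc') returning id behaves the way described as the goal.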

    Read the article

  • Combinationally unique MySQL tables

    - by Jack Webb-Heller
    So, here's the problem (it's probably an easy one :P) This is my table structure: CREATE TABLE `users_awards` ( `user_id` int(11) NOT NULL, `award_id` int(11) NOT NULL, `duplicate` int(11) NOT NULL DEFAULT '0', UNIQUE KEY `award_id` (`award_id`) ) ENGINE=MyISAM DEFAULT CHARSET=latin1 So it's for a user awards system. I don't want my users to be granted the same award multiple times, which is why I have a 'duplicate' field. The query I'm trying is this (with sample data of 3 and 2) : INSERT INTO users_awards (user_id, award_id) VALUES ('3','2') ON DUPLICATE KEY UPDATE duplicate=duplicate+1 So my MySQL is a little rusty, but I set user_id to be a primary key, and award_id to be a UNIQUE key. This (kind of) created the desired effect. When user 1 was given award 2, it entered. If he/she got this twice, only one row would be in the table, and duplicate would be set to 1. And again, 2, etc. When user 2 was given award 1, it entered. If he/she got this twice, duplicate updated, etc. etc. But when user 1 is given award 1 (after user 2 has already been awarded it), user 2 (with award 1)'s duplicate field increases and nothing is added to user 1. Sorry if that's a little n00bish. Really appreciate the help! Jack
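
    The behaviour described is consistent with award_id alone being unique across all users; if the intent is for the (user_id, award_id) pair to be unique instead, a sketch of that change (assuming the table definition above) is:

        ALTER TABLE `users_awards`
          DROP INDEX `award_id`,
          ADD UNIQUE KEY `user_award` (`user_id`, `award_id`);

    The existing INSERT ... ON DUPLICATE KEY UPDATE then bumps duplicate only when the same user receives the same award again.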

    Read the article

  • Problem with circular definition in Scheme

    - by user8472
    I am currently working through SICP using Guile as my primary language for the exercises. I have found a strange behavior while implementing the exercises in chapter 3.5. I have reproduced this behavior using Guile 1.4, Guile 1.8.6 and Guile 1.8.7 on a variety of platforms and am certain it is not specific to my setup. This code works fine (and computes e): (define y (integral (delay dy) 1 0.001)) (define dy (stream-map (lambda (x) x) y)) (stream-ref y 1000) The following code should give an identical result: (define (solve f y0 dt) (define y (integral (delay dy) y0 dt)) (define dy (stream-map f y)) y) (solve (lambda (x) x) 1 0.001) But it yields the error message: standard input:7:14: While evaluating arguments to stream-map in expression (stream-map f y): standard input:7:14: Unbound variable: y ABORT: (unbound-variable) So when embedded in a procedure definition, the (define y ...) does not work, whereas outside the procedure in the global environment at the REPL it works fine. What am I doing wrong here? I can post the auxiliary code (i.e., the definitions of integral, stream-map etc.) if necessary, too. With the exception of the system-dependent code for cons-stream, they are all in the book. My own implementation of cons-stream for Guile is as follows: (define-macro (cons-stream a b) `(cons ,a (delay ,b)))

    Read the article

  • Trusted Folder/Drive Picker in the Browser

    - by kylepfritz
    I'd like to write a Folder/Drive picker the runs in the browser and allows a user to select files to upload to a webservice. The primary usage would be selecting folders or a whole CD and uploading them to the web with their directory structure in tact. I'm imagining something akin to Jumploader but which automatically enumerates external drives and CDs. I remember a version of Facebook's picture uploader that could do this sort of enumeration and was java-based but it has since been replaced by a much slicker plugin-based architecture. Because the application needs to run at very high trust, I think I'm limited to old-school java applets. Is there another alternative? I'm hesitant to start down the plugin route because of the necessity of writing one for both IE and Mozilla at a minimum. Are there good places to get started there? On the applet front, I built a clunky prototype to demonstrate that I can enumerate devices and list files. It runs fine in the applet viewer but I don't think I have the security settings configured correctly for it to run in the browser at full trust. Currently I don't get any drives back when I run it in the browser. Applet Prototype: public class Loader extends javax.swing.JApplet { ... private void EnumerateDrives(java.awt.event.ActionEvent evt) { File[] roots = File.listRoots(); StringBuilder b = new StringBuilder(); for (File root : roots) { b.append(root.getAbsolutePath() + ", "); } jLabel.setText(b.toString()); } } Embed Html: <p>Loader:</p> <script src="http://www.java.com/js/deployJava.js" type="text/javascript" ></script> <script> var attributes = {code:'org.exampl.Loader.Loader.class', archive:'Loader/dist/Loader.jar', width:600, height:400} ; var parameters = {}; deployJava.runApplet(attributes, parameters, '1.6');

    Read the article

  • GetAccessControl error with NTAccount

    - by Adam Witko
    private bool HasRights(FileSystemRights fileSystemRights_, string fileName_, bool isFile_) { bool hasRights = false; WindowsIdentity WinIdentity = System.Security.Principal.WindowsIdentity.GetCurrent(); WindowsPrincipal WinPrincipal = new WindowsPrincipal(WinIdentity); AuthorizationRuleCollection arc = null; if (isFile_) { FileInfo fi = new FileInfo(@fileName_); arc = fi.GetAccessControl().GetAccessRules(true, true, typeof(NTAccount)); } else { DirectoryInfo di = new DirectoryInfo(@fileName_); arc = di.GetAccessControl().GetAccessRules(true, true, typeof(NTAccount)); } foreach (FileSystemAccessRule rule in arc) { if (WinPrincipal.IsInRole(rule.IdentityReference.Value)) { if (((int)rule.FileSystemRights & (int)fileSystemRights_) > 0) { if (rule.AccessControlType == AccessControlType.Allow) hasRights = true; else if (rule.AccessControlType == AccessControlType.Deny) { hasRights = false; break; } } } } return hasRights; } The above code block is causing me problems. When the WinPrincipal.IsInRole(rule.IdentityReference.Value) is executed the following exception occurs: "The trust relationship between the primary domain and the trusted domain failed.". I'm very new to using identities, principles and such so I don't know what's the problem. I'm assuming it's with the use of NTAccount? Thanks

    Read the article
