Search Results

Search found 1220 results on 49 pages for 'nathan pk'.

Page 36/49 | < Previous Page | 32 33 34 35 36 37 38 39 40 41 42 43  | Next Page >

  • SQL Server combining 2 rows into 1 from the same table

    - by Maton
    Hi, I have a table with a JobID (PK), EmployeeID (FK), StartDate and EndDate, containing data such as:
    1, 10, '01-Jan-2010 08:00:00', '01-Jan-2010 08:30:00'
    2, 10, '01-Jan-2010 08:50:00', '01-Jan-2010 09:05:00'
    3, 10, '02-Feb-2010 10:00:00', '02-Feb-2010 10:30:00'
    I want to return a record for each job's EndDate paired with the same employee's StartDate for his next immediate job (by date/time). So from the data above the result would be:
    Result 1: 10, 01-Jan-2010 08:30:00, 01-Jan-2010 08:50:00
    Result 2: 10, 01-Jan-2010 09:05:00, 02-Feb-2010 10:00:00
    Greatly appreciate any help!
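    A minimal sketch of one way to do this, assuming the table is called Jobs (the table name is an assumption); CROSS APPLY is available from SQL Server 2005 onwards, and on SQL Server 2012+ the LEAD() window function is an alternative:

        -- For each job, find the same employee's next start time; jobs with no
        -- later job (like job 3 above) simply produce no row.
        SELECT j.EmployeeID,
               j.EndDate     AS JobEnded,
               nxt.StartDate AS NextJobStarted
        FROM Jobs AS j
        CROSS APPLY (
            SELECT TOP 1 n.StartDate
            FROM Jobs AS n
            WHERE n.EmployeeID = j.EmployeeID
              AND n.StartDate > j.EndDate
            ORDER BY n.StartDate
        ) AS nxt;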

    Read the article

  • Flex builder error

    - by Anoop
    My Flex Builder suddenly stopped highlighting compile-time errors. It's also not giving any code-completion suggestions, even after pressing Ctrl+Space. What could be the possible reasons? Regards, PK

    Read the article

  • how to save django object using dictionary?

    - by shahjapan
    Is there a way I can save the model using a dictionary? For example, this works fine:
    p1 = Poll.objects.get(pk=1)
    p1.name = 'poll2'
    p1.description = 'poll2 description'
    p1.save()
    But what if I have a dictionary like { 'name': 'poll2', 'description': 'poll2 description' }? Is there a simple way to save such a dictionary directly to Poll?

    Read the article

  • What is causing this SQL 2005 Primary Key Deadlock between two real-time bulk upserts?

    - by skimania
    Here's the scenario: I've got a table called MarketDataCurrent (MDC) that has live-updating stock prices. I've got one process called 'LiveFeed' which reads prices streaming from the wire, queues up inserts, and uses a 'bulk upload to temp table then insert/update to MDC table' (BulkUpsert). I've got another process which then reads this data, computes other data, and saves the results back into the same table, using a similar BulkUpsert stored proc. Thirdly, there are a multitude of users running a C# GUI polling the MDC table and reading updates from it.
    Now, during the day when the data is changing rapidly, things run pretty smoothly, but after market hours we've recently started seeing an increasing number of deadlock exceptions coming out of the database; nowadays we see 10-20 a day. The important thing to note here is that these happen when the values are NOT changing.
    Here's all the relevant info. Table definition:
    CREATE TABLE [dbo].[MarketDataCurrent](
        [MDID] [int] NOT NULL,
        [LastUpdate] [datetime] NOT NULL,
        [Value] [float] NOT NULL,
        [Source] [varchar](20) NULL,
        CONSTRAINT [PK_MarketDataCurrent] PRIMARY KEY CLUSTERED ([MDID] ASC)
        WITH (PAD_INDEX = OFF, STATISTICS_NORECOMPUTE = OFF, IGNORE_DUP_KEY = OFF, ALLOW_ROW_LOCKS = ON, ALLOW_PAGE_LOCKS = ON) ON [PRIMARY]
    ) ON [PRIMARY]
    (Image: http://farm5.static.flickr.com/4049/4690759452_6b94ff7b34.jpg)
    I've got a SQL Profiler trace running, catching the deadlocks, and here's what all the graphs look like: http://farm5.static.flickr.com/4035/4690125231_78d84c9e15_b.jpg
    Process 258 is calling the following 'BulkUpsert' stored proc repeatedly, while 73 is calling the next one:
    ALTER proc [dbo].[MarketDataCurrent_BulkUpload]
        @updateTime datetime,
        @source varchar(10)
    as
    begin transaction
    update c with (rowlock)
    set LastUpdate = getdate(), Value = t.Value, Source = @source
    from MarketDataCurrent c
    INNER JOIN #MDTUP t ON c.MDID = t.mdid
    where c.lastUpdate < @updateTime
      and c.mdid not in (select mdid from MarketData where LiveFeedTicker is not null and PriceSource like 'LiveFeed.%')
      and c.value <> t.value
    insert into MarketDataCurrent with (rowlock)
    select MDID, getdate(), Value, @source
    from #MDTUP
    where mdid not in (select mdid from MarketDataCurrent with (nolock))
      and mdid not in (select mdid from MarketData where LiveFeedTicker is not null and PriceSource like 'LiveFeed.%')
    commit
    And the other one:
    ALTER PROCEDURE [dbo].[MarketDataCurrent_LiveFeedUpload]
    AS
    begin transaction
    -- Update existing mdid
    UPDATE c WITH (ROWLOCK)
    SET LastUpdate = t.LastUpdate, Value = t.Value, Source = t.Source
    FROM MarketDataCurrent c
    INNER JOIN #TEMPTABLE2 t ON c.MDID = t.mdid;
    -- Insert new MDID
    INSERT INTO MarketDataCurrent with (ROWLOCK)
    SELECT * FROM #TEMPTABLE2
    WHERE MDID NOT IN (SELECT MDID FROM MarketDataCurrent with (NOLOCK))
    -- Clean up the temp table
    DELETE #TEMPTABLE2
    commit
    To clarify, those temp tables are being created by the C# code on the same connection and are populated using the C# SqlBulkCopy class. To me it looks like it's deadlocking on the PK of the table, so I tried removing that PK and switching to a unique constraint instead, but that increased the number of deadlocks 10-fold.
    I'm totally lost as to what to do about this situation and am open to just about any suggestion. HELP!!
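    One direction that's often suggested for deadlocks between two upsert procs like these (a sketch only, not a verified fix for this schema): take UPDLOCK/HOLDLOCK on the target reads inside the transaction instead of reading with NOLOCK, so both procs serialize on the PK range they are about to touch; on SQL Server 2008+ the update/insert pair can also be collapsed into a single MERGE. The temp-table and column names below are taken from the question:

        BEGIN TRANSACTION;

        -- Request an update lock and hold it, so a concurrent upsert on the same
        -- MDID range blocks here instead of deadlocking later.
        UPDATE c WITH (UPDLOCK, HOLDLOCK)
        SET    LastUpdate = GETDATE(),
               Value      = t.Value
        FROM   MarketDataCurrent AS c
        INNER JOIN #MDTUP AS t ON c.MDID = t.mdid
        WHERE  c.Value <> t.Value;

        INSERT INTO MarketDataCurrent (MDID, LastUpdate, Value, Source)
        SELECT t.mdid, GETDATE(), t.Value, NULL
        FROM   #MDTUP AS t
        WHERE  NOT EXISTS (SELECT 1
                           FROM MarketDataCurrent AS c WITH (UPDLOCK, HOLDLOCK)
                           WHERE c.MDID = t.mdid);

        COMMIT;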

    Read the article

  • Why are joins bad when considering scalability?

    - by acidzombie24
    Why are joins bad or 'slow'? I know I've heard this more than once. I found this quote: "The problem is joins are relatively slow, especially over very large data sets, and if they are slow your website is slow. It takes a long time to get all those separate bits of information off disk and put them all together again." (source) I always thought they were fast, especially when looking up a PK. Why are they 'slow'?
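    For what it's worth, whether a join is slow usually comes down to whether the join column is indexed; a quick way to see this (a generic sketch with made-up table names) is to compare the query plan before and after adding an index on the foreign-key side:

        -- Hypothetical tables: customers (PK id) and orders (FK customer_id).
        CREATE INDEX ix_orders_customer_id ON orders (customer_id);

        EXPLAIN
        SELECT c.name, o.total
        FROM   customers AS c
        JOIN   orders    AS o ON o.customer_id = c.id
        WHERE  c.id = 42;
        -- With the index, the join side becomes an index lookup per matching row
        -- rather than a scan of the whole orders table.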

    Read the article

  • Hibernate custom join clause on association

    - by myso
    I would like to associate 2 entities using Hibernate annotations with a custom join clause. The clause is the usual FK/PK equality, but it should also match where the FK is null. In SQL this would be something like: join b on a.id = b.a_id or b.a_id is null. From what I have read I should use the @WhereJoinTable annotation on the owner entity, but I'm puzzled about how to specify this condition, especially the first part of it, which refers to the joining entity's id. Does anyone have an example?

    Read the article

  • Per instance dynamic fields django model

    - by Roberto Rosario
    I have a model with a JSON field or a link to a CouchDB document. I can currently access the dynamic information in a way such as: genericdocument.objects.get(pk=1) == genericdocument.json_field['sample subfield']; instead, I would like genericdocument.sample_subfield, to maintain compatibility with all the apps the project currently shares.

    Read the article

  • get JSON object attribute name

    - by Laurent Luce
    I know that I can retrieve "session" by using item.fields.name, but what if I don't know in advance that the attribute is called "name"? How can I retrieve the list of attribute names in fields first?
    [ { "pk": 2, "model": "auth.group", "fields": { "name": "session" } } ]

    Read the article

  • Optimize slow ranking query

    - by Juan Pablo Califano
    I need to optimize a query for a ranking that is taking forever (the query itself works, but I know it's awful, and having tried it with a good number of records it gives a timeout). I'll briefly explain the model. I have 3 tables: player, team and player_team. I have players that can belong to a team. Obvious as it sounds, players are stored in the player table and teams in team. In my app, each player can switch teams at any time, and a log has to be maintained. However, a player is considered to belong to only one team at a given time. The current team of a player is the last one he's joined. The structure of player and team is not relevant, I think; I have an id column PK in each. In player_team I have:
    id (PK)
    player_id (FK -> player.id)
    team_id (FK -> team.id)
    Now, each team is assigned a point for each player that has joined. So I want to get a ranking of the first N teams with the biggest number of players. My first idea was to get the current players from player_team (that is, at most one record per player; this record must be the player's current team). I failed to find a simple way to do it (I tried GROUP BY player_team.player_id HAVING player_team.id = MAX(player_team.id), but that didn't cut it). I tried a number of queries that didn't work, but managed to get this working:
    SELECT COUNT(*) AS total, pt.team_id, p.facebook_uid AS owner_uid, t.color
    FROM player_team pt
    JOIN player p ON (p.id = pt.player_id)
    JOIN team t ON (t.id = pt.team_id)
    WHERE pt.id IN (
        SELECT max(J.id)
        FROM player_team J
        GROUP BY J.player_id
    )
    GROUP BY pt.team_id
    ORDER BY total DESC
    LIMIT 50
    As I said, it works but looks very bad and performs worse, so I'm sure there must be a better way to go. Does anyone have any ideas for optimizing this? I'm using MySQL, by the way. Thanks in advance.
    Adding the EXPLAIN output:
    id  select_type         table  type    possible_keys                                 key               key_len  ref           rows    Extra
    1   PRIMARY             t      ALL     PRIMARY                                       NULL              NULL     NULL          5000    Using temporary; Using filesort
    1   PRIMARY             pt     ref     FKplayer_pt77082,FKplayer_pt265938,new_index  FKplayer_pt77082  4        t.id          30      Using where
    1   PRIMARY             p      eq_ref  PRIMARY                                       PRIMARY           4        pt.player_id  1
    2   DEPENDENT SUBQUERY  J      index   NULL                                          new_index         8        NULL          150000  Using index
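    A sketch of one common rewrite (untested against this schema): materialize the "latest player_team row per player" once as a derived table and join to it, rather than evaluating the IN subquery per row; an index on player_team (player_id, id) and one on team_id help both steps. Like the original, it relies on MySQL's lenient GROUP BY for owner_uid and color:

        SELECT COUNT(*) AS total,
               pt.team_id,
               p.facebook_uid AS owner_uid,
               t.color
        FROM (SELECT player_id, MAX(id) AS id
              FROM player_team
              GROUP BY player_id) AS latest      -- one row per player: his current team
        JOIN player_team pt ON pt.id = latest.id
        JOIN player p       ON p.id = pt.player_id
        JOIN team t         ON t.id = pt.team_id
        GROUP BY pt.team_id
        ORDER BY total DESC
        LIMIT 50;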

    Read the article

  • ADO Exception in HQL query

    - by Yoav
    I have 2 classes: Project and DataStructure. Class Project contains a member List<DataStructure>. My goal is to load a Project and all its DataStructures in one call.
    public class Project {
        public virtual string Id { get { } set { } }
        public virtual string Name { get { } set { } }
        public virtual ISet<DataStructure> DataStructures { get { } set { } }
    }
    public class DataStructure {
        public virtual string Id { get { } set { } }
        public virtual string Name { get { } set { } }
        public virtual string Description { get { } set { } }
        public virtual Project Project { get { } set { } }
        public virtual IList<DataField> Fields { get { } set { } }
    }
    Note that DataStructure also contains a list of class DataField, but I don't want to load those right now.
    Mapping in Fluent NHibernate:
    public class ProjectMap : ClassMap<Project> {
        public ProjectMap() {
            Table("PROJECTS");
            Id(x => x.Pk, "PK");
            Map(x => x.Id, "ID");
            Map(x => x.Name, "NAME");
            HasMany<DataStructure>(x => x.DataStructures).KeyColumn("FK_PROJECT");
        }
    }
    public class DataStructureMap : ClassMap<DataStructure> {
        public DataStructureMap() {
            Table("DATA_STRUCTURES");
            Map(x => x.Id, "ID");
            Map(x => x.Name, "NAME");
            Map(x => x.Description, "DESCRIPTION");
            References<Project>(x => x.Project, "FK_PROJECT");
            HasMany<DataField>(x => x.Fields).KeyColumn("FK_DATA_STRUCTURE");
        }
    }
    This is my query:
    using (ISession session = SessionFactory.OpenSession()) {
        IQuery query = session.CreateQuery("from Project pr left join pr.DataStructure");
        project = query.List<Project>();
    }
    query.List() returns this exception:
    NHibernate.Exceptions.GenericADOException: Could not execute query[SQL: SQL not available] ---> System.ArgumentException: The value "System.Object[]" is not of type "Project" and cannot be used in this generic collection.

    Read the article

  • Database schema for simple stats project

    - by Bubnoff
    Backdrop: I have a file hierarchy of CSV files for multiple locations, named by the dates they cover, by month specifically. Each CSV file in the folder is named after the location. E.g., the folder 2010-feb contains: location1.csv, location2.csv. Each CSV file holds records like this:
    2010-06-28, 20:30:00 , 0
    2010-06-29, 08:30:00 , 0
    2010-06-29, 09:30:00 , 0
    2010-06-29, 10:30:00 , 0
    2010-06-29, 11:30:00 , 0
    Meaning of the record columns (column names): Date, time, # of sessions.
    I have a perl script that pulls the data from this mess, and originally I was going to store it as JSON files, but I am thinking a database might be more appropriate long term: comparing year-to-year trends, fun stuff like that.
    Pt 2 - My question/problem: So I now have a REST service that coughs up JSON with a test database. My question is [I suck at db design]: how best to design a database backend for this? I am thinking the following tables would suffice and keep it simple:
    Location: (PK) location_code, name
    session: (PK) id, (FK) location_code, month, hour, num_sessions
    I need to be able to average sessions (plus min and max) for each hour across days of the week, in addition to days of the week in a given month or months. I've been using perl hashes to do this and am trying to decide how best to implement it with a database. Do you think stored procedures should be used? As for the database, depending on the info gathered here, it will be PostgreSQL or SQLite; if there is no compelling reason for PostgreSQL I'll stick with SQLite.
    How and where should I compare the data to hours of operation? I am storing the hours of operation in a YAML file, and I currently 'match' the hour in the data against a hash built from the YAML. Would a database open up simpler methods? I am thinking I would do this comparison as I do now, then insert the data. It can be recalled with:
    SELECT hour, num_sessions FROM session WHERE location_code=LOC1
    Since only hours of operation are present, I do not need to worry about it. Should I calculate all results as I do now and then store them in a stats table for different 'reports', rather than processing on demand? How would this look? Anyway, I ramble. Thanks for reading! Bubnoff
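    A sketch of the two tables plus one of the aggregates described above, in SQLite (names follow the question; keeping a full timestamp rather than separate month/hour columns is an assumption that makes the day-of-week averages straightforward):

        CREATE TABLE location (
            location_code TEXT PRIMARY KEY,
            name          TEXT NOT NULL
        );

        CREATE TABLE session (
            id            INTEGER PRIMARY KEY,
            location_code TEXT NOT NULL REFERENCES location(location_code),
            stamp         TEXT NOT NULL,      -- 'YYYY-MM-DD HH:MM:SS'
            num_sessions  INTEGER NOT NULL
        );

        -- Average / min / max sessions per hour, per day of week, for one location.
        SELECT strftime('%w', stamp) AS day_of_week,   -- 0 = Sunday ... 6 = Saturday
               strftime('%H', stamp) AS hour,
               AVG(num_sessions)     AS avg_sessions,
               MIN(num_sessions)     AS min_sessions,
               MAX(num_sessions)     AS max_sessions
        FROM   session
        WHERE  location_code = 'location1'
        GROUP  BY day_of_week, hour;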

    Read the article

  • SQLite Query to Insert a record If not exists

    - by Tharindu Madushanka
    I want to insert a record into an SQLite table only if it has not actually been inserted already. Let's say it has three fields: pk, name, address. I want to INSERT a new record with a name if that name has not been added previously. Can we do this in a single query? SQLite queries sometimes seem slightly different from other SQL dialects. Thank you, Tharindu Madushanka
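    Two common single-statement patterns in SQLite, sketched with an assumed table name and values:

        -- 1. With a UNIQUE constraint on name, duplicates are silently skipped:
        --    CREATE TABLE people (pk INTEGER PRIMARY KEY, name TEXT UNIQUE, address TEXT);
        INSERT OR IGNORE INTO people (name, address) VALUES ('Tharindu', 'Colombo');

        -- 2. Without a UNIQUE constraint, insert only when no such name exists yet:
        INSERT INTO people (name, address)
        SELECT 'Tharindu', 'Colombo'
        WHERE NOT EXISTS (SELECT 1 FROM people WHERE name = 'Tharindu');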

    Read the article

  • jQuery Validation Plugin: Packer undefined error?

    - by Rosarch
    I'm using the jQuery validation plugin from bassistance.de. It works fine. From <head>:
    <script type="text/javascript" src="/static/JQuery.js"></script>
    <script type="text/javascript" src="/static/js-lib/jquery.validate.pack.js"></script>
    <script type="text/javascript" src="/static/js-lib/jquery.validate.additional-methods.js"></script>
    At first, this was the only validation code I had, and it worked:
    $("form").validate();
    $("#form-username").rules("add", {
        required: true,
        email: true,
    });
    It was validating this HTML:
    <form id="form-username-form" action="api/user_of_email" method="get">
      <p>
        <label for="form-username">Email:</label>
        <input type="text" name="email" id="form-username" />
        <input type="submit" value="Submit" id="form-submit" />
      </p>
    </form>
    Great, everything works. But then I add this JS:
    $("#form-choose-options input[type='text']").rules("add", {
        number: true,
    });
    to validate this markup:
    <form id="form-choose-options" action="api/set_options" method="get">
      <p>
        <label for="form-min-credits">Min credits per term:</label><input type="text" name="min_credits" id="form-min-credits" /> <br />
        <label for="form-optimal-credits">Optimal credits per term:</label><input type="text" name="optimal_credits" id="form-optimal-credits" /> <br />
        <label for="form-max-credits">Max credits per term:</label><input type="text" name="max_credits" id="form-max-credits" /> <br />
        <label for="form-low-GPA">Lowest acceptable GPA:</label><input type="text" name="low_GPA" id="form-low-GPA" /> <br />
        <label for="form-high-GPA">Highest realistic GPA:</label><input type="text" name="high_GPA" id="form-high-GPA" /> <br />
        <input type="hidden" class="user-pk" name="pk"/>
        <input type="submit" value="Submit" />
      </p>
    </form>
    This causes a javascript error on document load:
    $.data(f.form, "validator") is undefined
    The error is from the packer function. What am I doing wrong?

    Read the article

  • Is it possible to use SQL XML to insert, and get output from each record?

    - by nbolton
    I would like to perform a SQL XML insert (on MSSQL), and in this case I need to insert a list of files into the DB (this is simple enough). However, there's an auto-generated PK column (ID), and I need the ID for each newly created filename without performing a second query. Is this possible? I guess it doesn't matter if the result is/isn't XML, but the input certainly has to be.
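    A sketch of one way this is commonly done on SQL Server 2005+, shredding the XML with nodes()/value() and capturing the identity values with an OUTPUT clause (the table, column and element names here are assumptions):

        DECLARE @xml xml;
        SET @xml = N'<files><file name="a.txt"/><file name="b.txt"/></files>';

        INSERT INTO dbo.Files (FileName)
        OUTPUT inserted.ID, inserted.FileName    -- returned as a result set, no second query
        SELECT f.x.value('@name', 'nvarchar(260)')
        FROM   @xml.nodes('/files/file') AS f(x);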

    Read the article

  • hibernate modeling relationships managed through an intermediate table

    - by shikarishambu
    I have a data model that has an intermediate table to manage relationships between entities. For example, the Person and Organization tables are related through the Relationship table:
    Party (table): ID
    Person (table): ID (references Party.ID), name
    Organization (table): ID (references Party.ID), name
    Relationship (table): ID (PK), type (references a relationshiptype lookup), fromID (references Party.ID), ToID (references Party.ID), fromDate, ToDate
    Type+fromID+ToID+fromDate+ToDate is guaranteed to be unique. How do I manage this using Hibernate? TIA

    Read the article

  • How can I get DocId when adding a document in Lucene index?

    - by Rohit
    I am indexing a row of data from a database in Lucene.Net. A row is the equivalent of a Document. I want to update my database with the DocId, so that I can use the DocId in the results to retrieve rows quickly. I currently first retrieve the PK from the result docs, which I think should be slower than retrieving directly from the database using the DocId. How can I find the DocId when adding a document to Lucene?

    Read the article

  • Database related Questions

    - by alokpatil
    I am planning to build a railway reservation project. I am maintaining the following tables:
    trainTable (trainId, trainName, trainFrom, trainTo, trainDate, trainNoOfBoogies) - PK (trainId)
    Boogie (trainId, boogieId, boogieName, boogieNoOfseats) - composite key (trainId, boogieId)
    Seats (trainId, boogieId, seatId, seatStatus, seatType) - composite key (trainId, boogieId, seatId)
    user (userId, name, ...personal details)
    userBooking (userId, trainId, boogieId, seatId)
    Is this a good design? Please reply.
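    A sketch of how the composite keys and their foreign keys could be declared (the column types are assumptions; table and column names follow the question). userBooking would reference Seats the same way, with (trainId, boogieId, seatId):

        CREATE TABLE trainTable (
            trainId          INT PRIMARY KEY,
            trainName        VARCHAR(100),
            trainFrom        VARCHAR(100),
            trainTo          VARCHAR(100),
            trainDate        DATE,
            trainNoOfBoogies INT
        );

        CREATE TABLE Boogie (
            trainId         INT NOT NULL REFERENCES trainTable (trainId),
            boogieId        INT NOT NULL,
            boogieName      VARCHAR(50),
            boogieNoOfseats INT,
            PRIMARY KEY (trainId, boogieId)
        );

        CREATE TABLE Seats (
            trainId    INT NOT NULL,
            boogieId   INT NOT NULL,
            seatId     INT NOT NULL,
            seatStatus VARCHAR(20),
            seatType   VARCHAR(20),
            PRIMARY KEY (trainId, boogieId, seatId),
            FOREIGN KEY (trainId, boogieId) REFERENCES Boogie (trainId, boogieId)
        );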

    Read the article

  • MVC | Linq Update Query | Help!

    - by 109221793
    Hi guys, I'm making modifications to a C# MVC application that I've inherited. I have a database, and for simplicity I'll just focus on the two tables I'm working with for this LINQ query.
    Item: ItemID Int PK, ItemName, RepairSelection (Yes or No), RepairID Int FK
    Repair: RepairID Int PK, RepairCategory, SubmissionDate, DateSentForRepair
    Ok, so ItemID is pretty much the identifier, and the View to display the Repair details goes like this (snippet):
    <%= Html.LabelFor(x => x.ItemID)%> <%= Html.DisplayFor(x => x.ItemID)%><br />
    <%= Html.LabelFor(x => x.Repair.RepairCategory)%> <%= Html.DisplayFor(x => x.Repair.RepairCategory, "FormTextShort")%><br />
    <%= Html.LabelFor(x => x.Repair.SubmissionDate)%> <%= Html.DisplayFor(x => x.Repair.SubmissionDate)%><br />
    <%= Html.LabelFor(x => x.Repair.DateSentForRepair)%> <%= Html.DisplayFor(x => x.Repair.DateSentForRepair)%><br />
    <%= Html.ActionLink("Edit Repair Details", "Edit", new { ItemID = Model.ItemID })%>
    Here is the GET Edit action:
    public ActionResult Edit(Int64? itemId)
    {
        ModelContainer ctn = new ModelContainer();
        var item = from i in ctn.Items
                   where i.ItemID == itemId
                   select i;
        return View(item.First());
    }
    This is also fine; the GET Edit view displays the right details. Where I'm stuck is the LINQ query to update the Repair table. I have tried it so many ways today that my head is just fried (new to LINQ as you may have guessed). My latest try is here (which I know is way off, so go easy ;-) ):
    [HttpPost]
    public ActionResult Edit(Int64 itemId, Repair repair, Item item, FormCollection formValues)
    {
        if (formValues["cancelButton"] != null)
        {
            return RedirectToAction("View", new { ItemID = itemId });
        }
        ModelContainer ctn = new ModelContainer();
        Repair existingData = ctn.Repair.First(a => a.RepairId == item.RepairID && item.ItemID == itemId);
        existingData.SentForConversion = DateTime.Parse(formValues["SentForConversion"]);
        ctn.SaveChanges();
        return RedirectToAction("View", new { ItemID = itemId });
    }
    For the above attempt I get a "Sequence contains no elements" error. Any help or pointers would be appreciated. Thanks guys.

    Read the article

  • Group by query design help

    - by Midhat
    Consider this data:
    PK  field1  field2
    1   a       b
    2   a       (null)
    3   x       y
    4   x       z
    5   q       w
    I need to select all columns from all rows where field1 has count > 1. I tried a few things and finally settled for:
    select * from mytable
    where field1 in (select field1 from mytable group by field1 having count(field1) > 1)
    order by field1
    but there has to be a better way than this.
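    A sketch of one common alternative: join once against the grouped counts as a derived table instead of filtering with an IN subquery.

        SELECT m.*
        FROM   mytable AS m
        JOIN  (SELECT field1
               FROM   mytable
               GROUP  BY field1
               HAVING COUNT(*) > 1) AS dup
              ON dup.field1 = m.field1
        ORDER  BY m.field1;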

    Read the article

  • how to do multi insert and obtain ids

    - by liysd
    Hi, I want to insert some data into a table (id PK autoincrement, val) using a multi-row insert: INSERT INTO tab (val) VALUES (1), (2), (3). Is it possible to obtain a table of the last inserted ids? I'm asking because I'm not sure they will all be of the form (n, n+1, n+2). I use MySQL InnoDB.
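    A sketch of the usual MySQL/InnoDB answer: with innodb_autoinc_lock_mode set to 0 or 1 (the traditional and default "consecutive" modes), a single multi-row INSERT is assigned consecutive ids, so the whole range can be derived from LAST_INSERT_ID(), which returns the first generated id, and ROW_COUNT():

        INSERT INTO tab (val) VALUES (1), (2), (3);

        SELECT LAST_INSERT_ID()                   AS first_id,
               LAST_INSERT_ID() + ROW_COUNT() - 1 AS last_id;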

    Read the article

  • How to generate the EC2 cerificate

    - by user192048
    While setting up EC2 access, it seems I need two files, the private key and the EC2 certificate:
    $ export EC2_PRIVATE_KEY=~/.ec2/pk-HKZYKTAIG2ECMXYIBH3HXV4ZBZQ55CLO.pem
    $ export EC2_CERT=~/.ec2/cert-HKZYKTAIG2ECMXYIBH3HXV4ZBZQ55CLO.pem
    However, I did not find anywhere to download or create the key. From the documentation: "The command line tools need access to the private key and X.509 certificate you generated after signing up for the Amazon EC2 service." I probably missed that step. Is it possible to generate it again?

    Read the article
