Search Results

Search found 31931 results on 1278 pages for 'sql statement'.


  • Asynchronous callback - gwt

    - by sprasad12
    Hi, I am using gwt and postgres for my project. On the front end i have few widgets whose data i am trying to save on to tables at the back-end when i click on "save project" button(this also takes the name for the created project). In the asynchronous callback part i am setting more than one table. But it is not sending the data properly. I am getting the following error: org.postgresql.util.PSQLException: ERROR: insert or update on table "entitytype" violates foreign key constraint "entitytype_pname_fkey" Detail: Key (pname)=(Project Name) is not present in table "project". But when i do the select statement on project table i can see that the project name is present. Here is how the callback part looks like: oksave.addClickHandler(new ClickHandler(){ @Override public void onClick(ClickEvent event) { if(erasync == null) erasync = GWT.create(EntityRelationService.class); AsyncCallback<Void> callback = new AsyncCallback<Void>(){ @Override public void onFailure(Throwable caught) { } @Override public void onSuccess(Void result){ } }; erasync.setProjects(projectname, callback); for(int i = 0; i < boundaryPanel.getWidgetCount(); i++){ top = new Integer(boundaryPanel.getWidget(i).getAbsoluteTop()).toString(); left = new Integer(boundaryPanel.getWidget(i).getAbsoluteLeft()).toString(); if(widgetTitle.startsWith("ATTR")){ type = "regular"; erasync.setEntityAttribute(name1, name, type, top, left, projectname, callback); } else{ erasync.setEntityType(name, top, left, projectname, callback); } } } Question: Is it wrong to set more than one in the asynchronous callback where all the other tables are dependent on a particular table? when i say setProjects in the above code isn't it first completed and then moved on to the next one? Please any input will be greatly appreciated. Thank you.

    Read the article

  • Integer Surrogate Key?

    - by CitadelCSAlum
    I need something really simple that, for some reason, I have been unable to accomplish so far, and surprisingly I have been unable to Google the answer. I need to number the entries in different tables uniquely. I am aware of AUTO_INCREMENT in MySQL and that it will be involved. My situation is as follows. Say I had a table like Table EXAMPLE: ID - INTEGER, FIRST_NAME - VARCHAR(45), LAST_NAME - VARCHAR(45), PHONE - VARCHAR(45), CITY - VARCHAR(45), STATE - VARCHAR(45), ZIP - VARCHAR(45). This would be the setup for the table, where ID is an integer that is auto-incremented every time an entry is inserted into the table. The thing is, I do not want to have to account for this field when inserting data into the database. From my understanding this would be a surrogate key that I can tell the database to increment automatically, so that I do not have to include it in the INSERT statement: instead of INSERT INTO EXAMPLE VALUES (2,'JOHN','SMITH',333-333-3333,'NORTH POLE'.... I can leave out the ID column and just write something like INSERT INTO EXAMPLE VALUES ('JOHN','SMITH'.....etc). Notice I wouldn't have to define the ID column. I know this is a very common task, but for some reason I can't get to the bottom of it. I am using MySQL, just to clarify. Thanks a lot.
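    A minimal MySQL sketch of the setup described above, reusing the question's EXAMPLE table; the sample values in the INSERT are placeholders:

      CREATE TABLE EXAMPLE (
          ID         INT NOT NULL AUTO_INCREMENT,  -- surrogate key, filled in by MySQL
          FIRST_NAME VARCHAR(45),
          LAST_NAME  VARCHAR(45),
          PHONE      VARCHAR(45),
          CITY       VARCHAR(45),
          STATE      VARCHAR(45),
          ZIP        VARCHAR(45),
          PRIMARY KEY (ID)
      );

      -- Name the columns explicitly and leave ID out; MySQL assigns the next value.
      INSERT INTO EXAMPLE (FIRST_NAME, LAST_NAME, PHONE, CITY, STATE, ZIP)
      VALUES ('JOHN', 'SMITH', '333-333-3333', 'NORTH POLE', 'AK', '99705');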

    Read the article

  • Which non-clustered index should I use?

    - by Junior Mayhé
    Here I am studying nonclustered indexes in SQL Server Management Studio. I've created a table with more than 1 million records. This table has a primary key. CREATE TABLE [dbo].[Customers]( [CustomerId] [int] IDENTITY(1,1) NOT NULL, [CustomerName] [varchar](100) NOT NULL, [Deleted] [bit] NOT NULL, [Active] [bit] NOT NULL, CONSTRAINT [PK_Customers] PRIMARY KEY CLUSTERED ( [CustomerId] ASC )WITH (PAD_INDEX = OFF, STATISTICS_NORECOMPUTE = OFF, IGNORE_DUP_KEY = OFF, ALLOW_ROW_LOCKS = ON, ALLOW_PAGE_LOCKS = ON) ON [PRIMARY] ) ON [PRIMARY] This is the query I'll be using to see what the execution plan shows: SELECT CustomerName FROM Customers Executing this command with no additional non-clustered index, the execution plan shows me: I/O cost = 3.45646, Operator cost = 4.57715. Now I'm trying to see if it's possible to improve performance, so I've created a non-clustered index for this table: 1) First non-clustered index CREATE NONCLUSTERED INDEX [IX_CustomerID_CustomerName] ON [dbo].[Customers] ( [CustomerId] ASC, [CustomerName] ASC )WITH (PAD_INDEX = OFF, STATISTICS_NORECOMPUTE = OFF, SORT_IN_TEMPDB = OFF, IGNORE_DUP_KEY = OFF, DROP_EXISTING = OFF, ONLINE = OFF, ALLOW_ROW_LOCKS = ON, ALLOW_PAGE_LOCKS = ON) ON [PRIMARY] GO Executing the select against the Customers table again, the execution plan shows me: I/O cost = 2.79942, Operator cost = 3.92001. That seems better. Now I've deleted this just-created non-clustered index in order to create a new one: 2) Second non-clustered index CREATE NONCLUSTERED INDEX [IX_CustomerIDIncludeCustomerName] ON [dbo].[Customers] ( [CustomerId] ASC ) INCLUDE ( [CustomerName]) WITH (PAD_INDEX = OFF, STATISTICS_NORECOMPUTE = OFF, SORT_IN_TEMPDB = OFF, IGNORE_DUP_KEY = OFF, DROP_EXISTING = OFF, ONLINE = OFF, ALLOW_ROW_LOCKS = ON, ALLOW_PAGE_LOCKS = ON) ON [PRIMARY] GO With this new non-clustered index, I've executed the select statement again and the execution plan shows me the same result: I/O cost = 2.79942, Operator cost = 3.92001. So, which non-clustered index should I use? Why are the costs the same in the execution plan for I/O and Operator? Am I doing something wrong, or is this expected? Thank you.
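    For this particular query there is a third variant worth sketching. Every nonclustered index on this table implicitly carries the clustering key (CustomerId), so the two indexes in the question store exactly the same columns, which is why their costs match. Since the SELECT has no WHERE clause, the narrowest index containing CustomerName is enough to cover it:

      CREATE NONCLUSTERED INDEX IX_Customers_CustomerName
          ON dbo.Customers (CustomerName);

      -- The plan for: SELECT CustomerName FROM dbo.Customers
      -- should now scan IX_Customers_CustomerName, the smallest structure that covers it.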

    Read the article

  • MySQL table data transformation -- how can I dis-aggregate MySQL time data?

    - by lighthouse65
    We are coding for a MySQL data warehousing application that stores descriptive data (User ID, Work ID, Machine ID, Start and End Time columns in the first table below) associated with time and production quantity data (Output and Time columns in the first table below) upon which aggregate (SUM, COUNT, AVG) functions are applied. We now wish to dis-aggregate time data for another type of analysis. Our current data table design: +---------+---------+------------+---------------------+---------------------+--------+------+ | User ID | Work ID | Machine ID | Event Start Time | Event End Time | Output | Time | +---------+---------+------------+---------------------+---------------------+--------+------+ | 080025 | ABC123 | M01 | 2008-01-24 16:19:15 | 2008-01-24 16:34:45 | 2120 | 930 | +---------+---------+------------+---------------------+---------------------+--------+------+ Reprocessing dis-aggregation that we would like to do would be to transform table content based on a granularity of minutes, rather than the current production event ("Event Start Time" and "Event End Time") granularity. The resulting reprocessing of existing table rows would look like: +---------+---------+------------+---------------------+--------+ | User ID | Work ID | Machine ID | Production Minute | Output | +---------+---------+------------+---------------------+--------+ | 080025 | ABC123 | M01 | 2010-01-24 16:19 | 133 | | 080025 | ABC123 | M01 | 2010-01-24 16:20 | 133 | | 080025 | ABC123 | M01 | 2010-01-24 16:21 | 133 | | 080025 | ABC123 | M01 | 2010-01-24 16:22 | 133 | | 080025 | ABC123 | M01 | 2010-01-24 16:23 | 133 | | 080025 | ABC123 | M01 | 2010-01-24 16:24 | 133 | | 080025 | ABC123 | M01 | 2010-01-24 16:25 | 133 | | 080025 | ABC123 | M01 | 2010-01-24 16:26 | 133 | | 080025 | ABC123 | M01 | 2010-01-24 16:27 | 133 | | 080025 | ABC123 | M01 | 2010-01-24 16:28 | 133 | | 080025 | ABC123 | M01 | 2010-01-24 16:29 | 133 | | 080025 | ABC123 | M01 | 2010-01-24 16:30 | 133 | | 080025 | ABC123 | M01 | 2010-01-24 16:31 | 133 | | 080025 | ABC123 | M01 | 2010-01-24 16:22 | 133 | | 080025 | ABC123 | M01 | 2010-01-24 16:33 | 133 | | 080025 | ABC123 | M01 | 2010-01-24 16:34 | 133 | +---------+---------+------------+---------------------+--------+ So the reprocessing would take an existing row of data created at the granularity of production event and modify the granularity to minutes, eliminating redundant (Event End Time, Time) columns while doing so. It assumes a constant rate of production and divides output by the difference in minutes plus one to populate the new table's Output column. I know this can be done in code...but can it be done entirely in a MySQL insert statement (or otherwise entirely in MySQL)? I am thinking of a INSERT ... INTO construction but keep getting stuck. An additional complexity is that there are hundreds of machines to include in the operation so there will be multiple rows (one for each machine) for each minute of the day. Any ideas would be much appreciated. Thanks.
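    A hedged sketch of the INSERT ... SELECT route, assuming a helper table minutes_seq(n) pre-filled with integers 0, 1, 2, ... at least as long as the longest event, and guessing snake_case names for the source and target tables and columns:

      INSERT INTO production_by_minute (user_id, work_id, machine_id, production_minute, output)
      SELECT e.user_id,
             e.work_id,
             e.machine_id,
             DATE_FORMAT(e.event_start_time + INTERVAL s.n MINUTE, '%Y-%m-%d %H:%i'),
             ROUND(e.output / (TIMESTAMPDIFF(MINUTE, e.event_start_time, e.event_end_time) + 1))
      FROM   production_event e
      JOIN   minutes_seq s
             ON s.n <= TIMESTAMPDIFF(MINUTE, e.event_start_time, e.event_end_time);

    With the sample row above this yields 16 minute rows of 2120 / 16 ≈ 133 output each, matching the desired result, and it covers every machine in one statement because the join fans out per source row.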

    Read the article

  • How to insert data if it contains an apostrophe?

    - by angel ansari
    Actally my task is load csv file into sql server using c# so i have split it by comma my problem is that some field's data contain apostrop and i m firing insert query to load data into sql so its give error my coding like that using System; using System.Collections.Generic; using System.ComponentModel; using System.Data; using System.Drawing; using System.Linq; using System.Text; using System.Windows.Forms; using System.IO; using System.Data.SqlClient; namespace tool { public partial class Form1 : Form { StreamReader reader; SqlConnection con; SqlCommand cmd; int count = 0; //int id=0; FileStream fs; string file = null; string file_path = null; SqlCommand sql_del = null; public Form1() { InitializeComponent(); } private void button1_Click(object sender, EventArgs e) { OpenFileDialog file1 = new OpenFileDialog(); file1.ShowDialog(); textBox1.Text = file1.FileName.ToString(); file = Path.GetFileName(textBox1.Text); file_path = textBox1.Text; fs = new FileStream(file_path, FileMode.Open, FileAccess.Read); } private void button2_Click(object sender, EventArgs e) { if (file != null ) { sql_del = new SqlCommand("Delete From credit_debit1", con); sql_del.ExecuteNonQuery(); reader = new StreamReader(file_path); string line_content = null; string[] items = new string[] { }; while ((line_content = reader.ReadLine()) != null) { if (count >=4680) { items = line_content.Split(','); string region = items[0].Trim('"'); string station = items[1].Trim('"'); string ponumber = items[2].Trim('"'); string invoicenumber = items[3].Trim('"'); string invoicetype = items[4].Trim('"'); string filern = items[5].Trim('"'); string client = items[6].Trim('"'); string origin = items[7].Trim('"'); string destination = items[8].Trim('"'); string agingdate = items[9].Trim('"'); string activitydate = items[10].Trim('"'); if ((invoicenumber == "-") || (string.IsNullOrEmpty(invoicenumber))) { invoicenumber = "null"; } else { invoicenumber = "'" + invoicenumber + "'"; } if ((destination == "-") || (string.IsNullOrEmpty(destination))) { destination = "null"; } else { destination = "'" + destination + "'"; } string vendornumber = items[11].Trim('"'); string vendorname = items[12].Trim('"'); string vendorsite = items[13].Trim('"'); string vendorref = items[14].Trim('"'); string subaccount = items[15].Trim('"'); string osdaye = items[16].Trim('"'); string osaa = items[17].Trim('"'); string osda = items[18].Trim('"'); string our = items[19].Trim('"'); string squery = "INSERT INTO credit_debit1" + "([id],[Region],[Station],[PONumber],[InvoiceNumber],[InvoiceType],[FileRefNumber],[Client],[Origin],[Destination], " + "[AgingDate],[ActivityDate],[VendorNumber],[VendorName],[VendorSite],[VendorRef],[SubAccount],[OSDay],[OSAdvAmt],[OSDisbAmt], " + "[OverUnderRecovery] ) " + "VALUES " + "('" + count + "','" + region + "','" + station + "','" + ponumber + "'," + invoicenumber + ",'" + invoicetype + "','" + filern + "','" + client + "','" + origin + "'," + destination + "," + "'" + (string)agingdate.ToString() + "','" + (string)activitydate.ToString() + "','" + vendornumber + "',' " + vendorname + "',' " + vendorsite + "',' " + vendorref + "'," + "'" + subaccount + "','" + osdaye + "','" + osaa + "','" + osda + "','" + our + "') "; cmd = new SqlCommand(squery, con); cmd.CommandTimeout = 1500; cmd.ExecuteNonQuery(); } label2.Text = count.ToString(); Application.DoEvents(); count++; } MessageBox.Show("Process completed"); } else { MessageBox.Show("path select"); } } private void button3_Click(object sender, EventArgs e) { this.Close(); } 
private void Form1_Load(object sender, EventArgs e) { con = new SqlConnection("Data Source=192.168.50.200;User ID=EGL_TEST;Password=TEST;Initial Catalog=EGL_TEST;"); con.Open(); } } } The VendorName field contains data such as MCCOLLISTER'S TRANSPORTATION, so how do I pass this data?
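    At the SQL level a single quote inside a string literal is escaped by doubling it, so the failing value would have to reach the server looking like this (VendorName is one of the columns from the INSERT above):

      INSERT INTO credit_debit1 (VendorName)
      VALUES ('MCCOLLISTER''S TRANSPORTATION');

    In the C# code that means either replacing ' with '' in each field before concatenating, or, more robustly, passing every value through SqlParameter objects so no escaping is needed at all.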

    Read the article

  • Why does a conditional not affect query speed?

    - by Telos
    I have a stored procedure that was taking a "long" period of time to execute. The query only needs to return data in one case, so I figured I could check for that case and just return before hitting the actual query. The only problem is that it still takes the same amount of time to execute with an if statement. I have verified that the code inside the if is not executing, and that if I replace the complex query with a simple select the speed is fine... so now I'm confused. Why is the query being slowed down by code that doesn't get executed when the conditional is false? Here's the query itself: ALTER PROCEDURE [dbo].[pr_cbc_GetCokeInfo] @pa_record int, @pb_record int AS BEGIN SET NOCOUNT ON; declare @ticketRec int SELECT @ticketRec = TicketRecord FROM eservice_live..v_sdticket where TicketRecord=@pa_record AND serviceCompanyID = 1139 AND @pb_record IS NULL if @ticketRec IS NULL return select record = null, doc_ref = @pa_record, memo_type = 'I', memo = 'Bottler: ' + isnull(Bottler, '') + ' ' + 'Sales Loc: ' + isnull(SalesLocation, '') + ' ' + 'Outlet Desc: ' + isnull(OutletDesc, '') + ' ' + 'City: ' + isnull(OutletCity, '') + ' ' + 'EquipNo: ' + isnull(EquipNo, '') + ' ' + 'SerialNo: ' + isnull(SerialNo, '') + ' ' + 'PhaseNo: ' + isnull(cast(PhaseNo as varchar(255)), '') + ' ' + 'StaticIP: ' + isnull(StaticIP, '') + ' ' + 'Air Card: ' + isnull(AirCard, '') FROM eservice_live..v_SDExtendedInfoField ef JOIN eservice_live..CokeSNList csl ON ef.valueText=csl.SerialNo where ef.docType='CLH' AND ef.docref = @ticketRec AND ef.ExtendedDocNumber=5 SET NOCOUNT OFF; END
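    One commonly suggested restructuring, sketched here under the assumption that the time really goes into compiling the big SELECT: keep the cheap lookup and early exit in the outer procedure, and move the complex SELECT unchanged into a second procedure (pr_cbc_GetCokeInfo_Inner below is a hypothetical name) so it is only compiled and run when the branch is actually taken:

      ALTER PROCEDURE [dbo].[pr_cbc_GetCokeInfo]
          @pa_record int,
          @pb_record int
      AS
      BEGIN
          SET NOCOUNT ON;
          DECLARE @ticketRec int;

          SELECT @ticketRec = TicketRecord
          FROM   eservice_live..v_sdticket
          WHERE  TicketRecord = @pa_record
            AND  serviceCompanyID = 1139
            AND  @pb_record IS NULL;

          IF @ticketRec IS NOT NULL
              EXEC dbo.pr_cbc_GetCokeInfo_Inner @pa_record, @ticketRec;  -- hypothetical helper holding the original SELECT
          SET NOCOUNT OFF;
      END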

    Read the article

  • UPDATE query that fixes orphaned records

    - by Jed
    I have an Access database that has two tables that are related by PK/FK. Unfortunately, the database tables have allowed for duplicate/redundant records and has made the database a bit screwy. I am trying to figure out a SQL statement that will fix the problem. To better explain the problem and goal, I have created example tables to use as reference: You'll notice there are two tables, a Student table and a TestScore table where StudentID is the PK/FK. The Student table contains duplicate records for students John, Sally, Tommy, and Suzy. In other words the John's with StudentID's 1 and 5 are the same person, Sally 2 and 6 are the same person, and so on. The TestScore table relates test scores with a student. Ignoring how/why the Student table allowed duplicates, etc - The goal I'm trying to accomplish is to update the TestScore table so that it replaces the StudentID's that have been disabled with the corresponding enabled StudentID. So, all StudentID's = 1 (John) will be updated to 5; all StudentID's = 2 (Sally) will be updated to 6, and so on. Here's the resultant TestScore table that I'm shooting for (Notice there is no longer any reference to the disabled StudentID's 1-4): Can you think of a query (compatible with MS Access's JET Engine) that can accomplish this goal? Or, maybe, you can offer some tips/perspectives that will point me in the right direction. Thanks.
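    A hedged sketch of a Jet-compatible UPDATE with joins, assuming the Student table has a name column (StudentName here) on which the old and new rows can be matched, and a flag (Disabled here) marking the retired rows; both column names are guesses and may need tweaking for Access's dialect:

      UPDATE (TestScore AS ts
              INNER JOIN Student AS oldS ON ts.StudentID = oldS.StudentID)
              INNER JOIN Student AS newS ON oldS.StudentName = newS.StudentName
      SET    ts.StudentID = newS.StudentID
      WHERE  oldS.Disabled = True
        AND  newS.Disabled = False;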

    Read the article

  • Can I serialize an object if I didn't write the class used to instantiate that object?

    - by Richard77
    Hello, I have a simple class: [Serializable] public class MyClass { public String FirstName { get; set; } public String LastName { get; set; } //Below is what I would like to do //But it's not working //I get an exception ContactDataContext db = new ContactDataContext(); public void Save() { Contact contact = new Contact(); contact.FirstName = FirstName; contact.LastName = LastName; db.Contacts.InsertOnSubmit(contact); db.SubmitChanges(); } } I wanted to attach a Save method to the class so that I could call it on each object. When I introduced the field holding the ContactDataContext, I got the following error: "In assembly ... PublicKeyToken=null' is not marked as serializable". It's clear that the DataContext class is generated by the framework. I checked and did not see where that class was marked serializable. What can I do to overcome that? What's the rule when I'm not the author of a class? Just go ahead and mark the DataContext class as serializable and pretend that everything will work? Thanks for helping.

    Read the article

  • TSQL to insert an ascending value

    - by David Neale
    I am running some SQL that identifies records which need to be marked for deletion and to insert a value into those records. This value must be changed to render the record useless and each record must be changed to a unique value because of a database constraint. UPDATE Users SET Username = 'Deleted' + (ISNULL( Cast(SELECT RIGHT(MAX(Username),1) FROM Users WHERE Username LIKE 'Deleted%') AS INT) ,0) + 1 FROM Users a LEFT OUTER JOIN #ADUSERS b ON a.Username = 'AVSOMPOL\' + b.sAMAccountName WHERE (b.sAMAccountName is NULL AND a.Username LIKE 'AVSOMPOL%') OR b.userAccountControl = 514 This is the important bit: SET Username = 'Deleted' + (ISNULL( Cast(SELECT RIGHT(MAX(Username),1) FROM Users WHERE Username LIKE 'Deleted%') AS INT) ,0) + 1 What I've tried to do is have deleted records have their Username field set to 'Deletedxxx'. The ISNULL is needed because there may be no records matching the SELECT RIGHT(MAX(Username),1) FROM Users WHERE Username LIKE 'Deleted%' statement and this will return NULL. I get a syntax error when trying to parse this (Msg 156, Level 15, State 1, Line 2 Incorrect syntax near the keyword 'SELECT'. Msg 102, Level 15, State 1, Line 2 Incorrect syntax near ')'. I'm sure there must be a better way to go about this, any ideas?
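    A sketch of an alternative that sidesteps the nested-SELECT syntax problem entirely: number the affected rows with ROW_NUMBER() and update through the CTE, reusing the question's join and filter. If the job can run more than once, an offset for usernames already starting with 'Deleted' would still need to be added.

      ;WITH ToDisable AS
      (
          SELECT a.Username,
                 ROW_NUMBER() OVER (ORDER BY a.Username) AS rn
          FROM   Users a
          LEFT OUTER JOIN #ADUSERS b
                 ON a.Username = 'AVSOMPOL\' + b.sAMAccountName
          WHERE  (b.sAMAccountName IS NULL AND a.Username LIKE 'AVSOMPOL%')
             OR  b.userAccountControl = 514
      )
      UPDATE ToDisable
      SET    Username = 'Deleted' + CAST(rn AS varchar(10));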

    Read the article

  • syntax error in sql query?

    - by user1550588
    I want to create some tables in database so for that i wright some queries to create table. but i got an error "there was a syntax error in your sql statement". I check all my queries but i am not able to find what syntax i wright wrong. here i attached my sql queries please told me what is wrong with queries. Thanks in advance. CREATE TABLE "WATER&SEWAGE" { "CUSTOMER_ID" VARCHAR, "POTABLE_WATER" VARCHAR, "HOT_WATER" VARCHAR, "SAFETY" VARCHAR, "T/P_VALVE+EXT" VARCHAR, "HEATER_TYPE" VARCHAR, "GAS_VENT_CONDITION" VARCHAR, "FIRE_BOX_CONDITION" VARCHAR, "PRESSURE" VARCHAR, "BACK_FLOW" VARCHAR, "PLUMBING_CONDITION" VARCHAR, "CABINET_CONDITION" VARCHAR, "3_COMP_SINK" VARCHAR, "WATER_HEATER" VARCHAR, "CLEAN_OUT_CONDITION" VARCHAR, "FLOOR_DRAIN" VARCHAR, "FLOOR_SINK" VARCHAR, "MAINTANANCE" VARCHAR, "COMMENTS" VARCHAR, "EVIDENCE_PHOTOS" VARCHAR) CREATE TABLE "TIME_AND_TETMPERATURE_REL" ("CUSTOMER_ID" INTEGER, "THERMOMTE_AVAILABLE" VARCHAR, "THERMO_TYPE" VARCHAR, "COLD_TEMP" VARCHAR, "FOOD_TYPE1" VARCHAR, "PROPER_COOLING" VARCHAR, "FOOD_TYPE2" VARCHAR, "COMPLIANCE" VARCHAR, "FOOD_TYPE3" VARCHAR, "TPHC" VARCHAR, "HACCP" VARCHAR, "HOT_HOLDING_TEMP" VARCHAR, "FOOD_TYPE4" VARCHAR, "HOT_TEMP" VARCHAR, "FOOD_TYPE5" VARCHAR, "IMPROPER_REHEATING" VARCHAR, "FOOD_TYPE6" VARCHAR, "FOOD_OUT_OF_TEMP" VARCHAR, "WRITTEN_PLAN_FILES" VARCHAR, "EMPLOYEE_KNOWLEDGE" VARCHAR, "TIME_KEEPING_METHODS" VARCHAR, "HACCP_METHODS" VARCHAR, "CALIBRATION" VARCHAR, "COOKING_TEMP" VARCHAR, "EQUIP_CONDITION" VARCHAR, "COMMENTS" VARCHAR, "EVIDENCE_PHOTOS" VARCHAR) CREATE TABLE "PROTECTION_FROM_CONTAMINATION" ("FOOD_ADULTERATION" VARCHAR, "FOOD_SPOILAGE" VARCHAR, "CONTAMINATION" VARCHAR, "FOOD_STORAGE" VARCHAR, "REFRIGRATORS" VARCHAR, "FREEZER" VARCHAR, "FIFO" VARCHAR, "DRY" VARCHAR, "DISHWASHING_CHEMICALS" VARCHAR, "CLEANING" VARCHAR, "SANITIZER" VARCHAR, "SUPPLY_EQUIPMENT" VARCHAR, "PESTICIDES" VARCHAR, "COMMENTS" VARCHAR, "EVIDENCE_PHOTOS" VARCHAR, "CUSTOMER_ID" INTEGER) CREATE TABLE "PHYSICAL_FACILITIES_2" ("CUSTOMER_ID" VARCHAR, "CEILING" VARCHAR, "WINDOWS" VARCHAR, "SPOILS_STORAGE" VARCHAR, "DOORS" VARCHAR, "PERSONAL_LOCKERS" VARCHAR, "FLOORS" VARCHAR, "CONDITION" VARCHAR, "HANDICAP" VARCHAR, "SELF_CLOSING_ROOM" VARCHAR, "TOILETS" VARCHAR, "TOILET_PAPER" VARCHAR, "TOILET_CONDITION" VARCHAR, "VENTILATION_CONDITION" VARCHAR, "MENS" VARCHAR, "MEN'S_VENTILATION" VARCHAR, "MEN'S_FLOOR_DRAWN" VARCHAR, "MEN'S_TOILET_STALLS" VARCHAR, "MEN'S_HAND_SINK" VARCHAR, "MEN'S_TOWELS" VARCHAR, "WOMEN'S" VARCHAR, "WOMEN'S_VENTILATION" VARCHAR, "WOMEN'S_FLOOR_DRAWN" VARCHAR, "WOMEN'S_TOILET_STALLS" VARCHAR, "WOMEN'S_HAND_SINK" VARCHAR, "WOMEN'S_TOWELS" VARCHAR, "COMMENTS" VARCHAR, "EVIDENCE_PHOTOS" VARCHAR) CREATE TABLE "PHYSICAL_FACILITIES_1" ("CUSTOMER_ID" VARCHAR, "LIGHTNING" VARCHAR, "KITCHEN" VARCHAR, "STORAGE" VARCHAR, "REFRIGRATION" VARCHAR, "JANITORIAL" VARCHAR, "INTERIOR" VARCHAR, "FIXTURE" VARCHAR, "FOOTCANDLES_1" VARCHAR, "FOOTCANDLES_2" VARCHAR, "FOOTCANDLES_3" VARCHAR, "FOOTCANDLES_4" VARCHAR, "FOOTCANDLES_5" VARCHAR, "EXTERIOR" VARCHAR, "WINDOWS" VARCHAR, "GLASS" VARCHAR, "SCREEN" VARCHAR, "COMMENTS" VARCHAR, "EVIDENCE_PHOTOS" VARCHAR) CREATE TABLE "PEST_CONTROL" ("CUSTOMER_ID" VARCHAR, "WALL/CEILING_PIPES" VARCHAR, "SANITATION" VARCHAR, "DOORS" VARCHAR, "SELF_CLOSING" VARCHAR, "RODENT_PROOF" VARCHAR, "VERMIN_PROOFING" VARCHAR, "FOUNDATION" VARCHAR, "ATTIC_VENTS" VARCHAR, "WINDOWS" VARCHAR, "TYPE_SCREENS" VARCHAR, "COMMENTS" VARCHAR, "EVIDENCE_PHOTOS" VARCHAR) CREATE TABLE "GENERAL_FOOD_SAFETY_REQ" ("CUSTOMER_ID" VARCHAR, 
"APPROVED_METHODS" VARCHAR, "DEFROST" VARCHAR, "FROZEN_FOOD" VARCHAR, "FOOD_WASHING" VARCHAR, "PRODUCE_1" VARCHAR, "FRUITS" VARCHAR, "GRAINS" VARCHAR, "VEGETABLES" VARCHAR, "SEPERATE_FROM_CONT" VARCHAR, "RAW" VARCHAR, "PRODUCE_2" VARCHAR, "STORED_BULK" VARCHAR, "STORAGE" VARCHAR, "FIFO" VARCHAR, "REFRIGRATAION" VARCHAR, "STORAGE_TEMP" VARCHAR, "HUMIDITY" VARCHAR, "COMMENTS" VARCHAR, "EVIDENCE_PHOTOS" VARCHAR) CREATE TABLE "FOOD_SAFETY_CERTIFICATION" ("CUSTOMER_ID" INTEGER,"OWNER" VARCHAR,"MANAGER" VARCHAR,"EMPLOYEE" VARCHAR,"NAME1" VARCHAR,"NAT'L_REGISTRY1" VARCHAR,"PROMETRIC1" VARCHAR,"SERVESAFE1" VARCHAR,"EXPIRATION1" VARCHAR,"NAME2" VARCHAR,"NAT'L_REGISTRY2" VARCHAR,"PROMETRIC2" VARCHAR,"SERVESAFE2" VARCHAR,"EXPIRATION2" VARCHAR,"COMMENTS" VARCHAR,"FOOD_TEMP" VARCHAR,"DISHWASHER" VARCHAR,"DANGER_ZONE" VARCHAR,"COOLING_FOODS" VARCHAR,"THAWING" VARCHAR,"REHEATING" VARCHAR,"THERMOMETERS" VARCHAR,"FIFO" VARCHAR,"HANDWASH" VARCHAR,"COOKING_TEMP" VARCHAR,"FOOD_POISONING_TYPES" VARCHAR,"EVEDENCE_PHOTOS" VARCHAR) CREATE TABLE "FOOD_FROM_APPROVED_SOURCES" ("CUSTOMER_ID" VARCHAR, "RECEIPTS" VARCHAR, "DELIVERY_DOOR" VARCHAR, "AIR_CURTAIN" VARCHAR, "VELOCITY" VARCHAR, "OFF_LOAD" VARCHAR, "INSPECTS" VARCHAR, "RODENTS" VARCHAR, "WARNING" VARCHAR, "SHELL_FISH" VARCHAR, "GULF_OYSTER" VARCHAR, "FOOD_OUT_OF_TEMP" VARCHAR, "COMMENTS" VARCHAR, "EVIDENCE_PHOTO" VARCHAR) CREATE TABLE "FOOD_FACILITY_SITE_FACTS" ("CUSTOMER_ID" INTEGER PRIMARY KEY AUTOINCREMENT NOT NULL UNIQUE,"DBA" VARCHAR,"DISTRICT" VARCHAR,"CITY" VARCHAR,"ZIPCODE" VARCHAR,"ADDRESS" VARCHAR,"SEAT_NO" VARCHAR,"PHONE_NO" VARCHAR,"OWNER" VARCHAR,"PERSON_IN_CHARGE" VARCHAR,"WEBSITE" VARCHAR,"EMAIL" VARCHAR,"FACILITY_TYPE" VARCHAR,"SQ_FOOTAGE" VARCHAR,"NO_OF_SEATS" VARCHAR,"ALCOHOL_SALES" VARCHAR,"LTD_PREP" VARCHAR,"BUSINESS_LICENCE_NO" VARCHAR,"EXPIRATION" VARCHAR,"COMMENTS" VARCHAR,"EVIDENCE_PHOTOS" VARCHAR) CREATE TABLE "FOOD_DISPLAY_SELF_SERVICE" ("CUSTOMER_ID" VARCHAR, "SELF_SERVICE" VARCHAR, "LIDS" VARCHAR, "UTENSILS" VARCHAR, "HOW_OFTEN_CHANGED" VARCHAR, "SCOOP" VARCHAR, "SNEEZ_GUARD" VARCHAR, "DISHES" VARCHAR, "CROSS" VARCHAR, "SENITATION" VARCHAR, "MAINTANANCE" VARCHAR, "MECHANICAL_CONDITION" VARCHAR, "SERVICE_OF_UTENSIL" VARCHAR, "TIME" VARCHAR, "PRPER_FOOD_LABELING" VARCHAR, "COMMENTS" VARCHAR, "EVIDENCE_PHOTOS" VARCHAR) CREATE TABLE "EQUIPMENT/UTENSILS/LINES" ("CUSTOMER_ID" VARCHAR, "STORED_DISHES" VARCHAR, "CUPS" VARCHAR, "DAMAGED_DISHES" VARCHAR, "UTENSILS" VARCHAR, "LINES" VARCHAR, "COOLING_EQUIPMENT" VARCHAR, "COOK_WARE_STORAGE" VARCHAR, "CONDITION" VARCHAR, "STORAGE_LOCATION" VARCHAR, "NAPKINS" VARCHAR, "SANITATION" VARCHAR, "ADEQUATE_CAPACITY" VARCHAR, "HAZARDS" VARCHAR, "COMMENTS" VARCHAR, "EVIDENCE_PHOTOS" VARCHAR) CREATE TABLE "EMPLOYEE_HEALTH_AND_HYGENE" ("CSTOMER_ID" INTEGER, "COMUNICABLE_DIESEASE" VARCHAR, "WIPING_BAGS" VARCHAR, "DISCHARGE" VARCHAR, "LOCKERS" VARCHAR, "PROPER_HAND_WASH" VARCHAR, "RESTROOM_CONDITION" VARCHAR, "HANDWASH_STATION" VARCHAR, "HAIR_RESTRAINT" VARCHAR, "GLOVES_USED" VARCHAR, "WOUND_CARE" VARCHAR, "FOOD_SAMPLE_TESTING" VARCHAR, "GARMENT_CONDITION" VARCHAR, "HEALTH" VARCHAR, "COMMENTS" VARCHAR, "EVIDENCE_PHOTOS" VARCHAR) CREATE TABLE "DISH&WARE_WASHING" ("CUSTOMER_ID" INTEGER, "TYPE_OF_COMPARTMENT_SINK" VARCHAR, "PLUMBING_ISSUE1" VARCHAR, "QUATERNARY_AMMONIA" VARCHAR, "IODINE" VARCHAR, "HOT_WTR_SANIT_TEMP" VARCHAR, "PLUMBING_ISSUE2" VARCHAR, "HOT_WATER" VARCHAR, "TEMP" VARCHAR, "DISHWASHER_TYPE" VARCHAR, "WAQLL_CONDITION" VARCHAR, "SANITIZER_LEVELS" VARCHAR, "CHLORINE" VARCHAR, "SINK_OR_DISHWASHER" 
VARCHAR, "EVIDENCE_PHOTOS" VARCHAR) CREATE TABLE "CORRECTIVE_ACTIONS" ("CUSTOMER_ID" VARCHAR, "CORRECTIVE_ACTIONS" VARCHAR) CREATE TABLE "CONSUMER_ADVISORY" ("CUSTOMER_ID" VARCHAR, "TRUTH_IN_MENU" VARCHAR, "MAJOR_CHAIN" VARCHAR, "TRANS_FAT" VARCHAR, "SCHOOL" VARCHAR, "PROHIBITED" VARCHAR, "CALORIES" VARCHAR, "COMMENTS" VARCHAR, "EVEDINCE_PHOTOS" VARCHAR) CREATE TABLE "COMPLAINS&ENFORCEMENTS" ("CUSTOMER_ID" VARCHAR, "PLAN_SUBMITTED" VARCHAR, "APPROVEL" VARCHAR, "CERTIFICATE_OF_OCCU" VARCHAR, "PRIOR_INSPEK_REPORT" VARCHAR, "LAST_INSPECTION_DATE" VARCHAR, "PERMIT_STATUS" VARCHAR, "POSTED" VARCHAR, "EXPIRATION" VARCHAR, "PERMIT_DISPLAYED" VARCHAR, "INSPECTION_REPORT_POSTING" VARCHAR, "COMMENTS" VARCHAR, "EVIDENCE_PHOTOS" VARCHAR)

    Read the article

  • jdbc query - date ranges as parameters

    - by pstanton
    Hi all, I'd like to write a single JDBC statement that can handle the equivalent of any number of NOT BETWEEN date1 AND date2 where clauses. By single query, i mean that the same SQL string will be used to create the JDBC statements and then have different parameters supplied. This is so that the underlying frameworks can efficiently cache the query (i've been stung by that before). Essentially, I'd like to find a query that is the equivalent of SELECT * FROM table WHERE mydate NOT BETWEEN ? AND ? AND mydate NOT BETWEEN ? AND ? AND mydate NOT BETWEEN ? AND ? AND mydate NOT BETWEEN ? AND ? and at the same time could be used with fewer parameters: SELECT * FROM table WHERE mydate NOT BETWEEN ? AND ? or more parameters SELECT * FROM table WHERE mydate NOT BETWEEN ? AND ? AND mydate NOT BETWEEN ? AND ? AND mydate NOT BETWEEN ? AND ? AND mydate NOT BETWEEN ? AND ? AND mydate NOT BETWEEN ? AND ? AND mydate NOT BETWEEN ? AND ? AND mydate NOT BETWEEN ? AND ? AND mydate NOT BETWEEN ? AND ? AND mydate NOT BETWEEN ? AND ? AND mydate NOT BETWEEN ? AND ? AND mydate NOT BETWEEN ? AND ? I will consider using a temporary table if that will be simpler and more efficient. thanks for the help!
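    A hedged sketch of the temporary-table route: the text of the SELECT never changes, only the rows loaded into the exclusion table do, so the driver can cache a single prepared statement regardless of how many ranges there are. The table and column names are made up, and 'mytable' stands in for the question's table:

      CREATE TEMPORARY TABLE excluded_range (
          range_start DATE NOT NULL,
          range_end   DATE NOT NULL
      );

      -- executed once per range from JDBC, ideally as a batch
      INSERT INTO excluded_range (range_start, range_end) VALUES (?, ?);

      SELECT t.*
      FROM   mytable t
      WHERE  NOT EXISTS (SELECT 1
                         FROM   excluded_range r
                         WHERE  t.mydate BETWEEN r.range_start AND r.range_end);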

    Read the article

  • Update table variable with function

    - by Joris
    I got a table variable @RQ, I want it updated using a table-valued function. Now, I think I do the update wrong, because my function works... The function: ALTER FUNCTION [dbo].[usf_GetRecursiveFoobar] ( @para int, @para datetime, @para varchar(30) ) RETURNS @ReQ TABLE ( Onekey int, Studnr nvarchar(10), Stud int, Description nvarchar(32), ECTSGot decimal(5,2), SBUGot decimal(5,0), ECTSmax decimal(5,2), SBUmax decimal(5,0), IsFree bit, IsGot int, DateGot nvarchar(10), lvl int, path varchar(max) ) AS BEGIN; WITH RQ AS ( --RECURSIVE QUERY ) INSERT @ReQ SELECT RQ.Onekey, RQ.Studnr, RQ.Stud, RQ.Description, RQ.ECTSGot, RQ.SBUGot, RQ.ECTSmax, RQ.SBUmax, RQ.IsFree, RQ.IsGot, RQ.DatumGot, RQ.lvl, RQ.path FROM RQ RETURN END Now, when I run a simple query: DECLARE @ReQ TABLE ( OnderwijsEenheid_key int, StudentnummerHSA nvarchar(10), Student_key int, Omschrijving nvarchar(32), ECTSbehaald decimal(5,2), SBUbehaald decimal(5,0), ECTSmax decimal(5,2), SBUmax decimal(5,0), IsVrijstelling bit, IsBehaald int, DatumBehaald nvarchar(10), lvl int, path varchar(max) ) INSERT INTO @ReQ SELECT * FROM usf_GetRecursiveFoobar(@para1, @para2, @para3) I got error: Msg 8152, Level 16, State 13, Line 20 String or binary data would be truncated. The statement has been terminated. Why? What to do about it?
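    Since the error only appears when the function's result set is copied into the caller's table variable, one hedged way to locate the offending column is to run the function on its own (the question says it works standalone) and compare the longest returned values against the declared widths; the parameters below are placeholders for the same values used in the question:

      SELECT MAX(LEN(Studnr))      AS max_Studnr,       -- declared nvarchar(10)
             MAX(LEN(Description)) AS max_Description,  -- declared nvarchar(32)
             MAX(LEN(DateGot))     AS max_DateGot       -- declared nvarchar(10)
      FROM   dbo.usf_GetRecursiveFoobar(@para1, @para2, @para3);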

    Read the article

  • mysql subselect alternative

    - by Arnold
    Hi, let's say I am analyzing how high school sports records affect school attendance. I have a table in which each row corresponds to a high school basketball game. Each game has an away team id and a home team id (FKs to another "team" table), a home score, an away score, and a date. I am writing a query that matches attendance with this season's basketball games. My sample output will be (#_students_missed_class, day_of_game, home_team, away_team, home_team_wins_this_season, away_team_wins_this_season). I now want to add how each team did the previous season to my analysis. I have their previous season stored in the game table, so I should be able to accomplish that with a subselect. So in my main select statement I add the subselect: SELECT COUNT(*) FROM game_table WHERE game_table.date BETWEEN 'start of previous season' AND 'end of previous season' AND ( (game_table.home_team = team_table.id AND game_table.home_score > game_table.away_score) OR (game_table.away_team = team_table.id AND game_table.away_score > game_table.home_score)) In this case team_table.id refers to the id of the home_team, so I now have all their wins calculated from the previous year. This method of calculation should be neither time nor resource intensive, yet the EXPLAIN shows ALL in the Type field, no Key being used, and the query times out. I'm not sure how I can accomplish a more efficient query with a subselect. It seems preposterously inefficient to have to write 4 of these queries (for home wins, home losses, away wins, away losses). I am sure this could be more lucid; I'll absolutely add color tomorrow if anyone has questions.
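    A sketch of one way to avoid the per-row correlated subselects: compute each team's previous-season win count once into a scratch table and join it twice (a plain table rather than a TEMPORARY one, because MySQL will not open the same temporary table twice in one query). The season bounds are placeholders, as in the question, and a losses table would follow the same pattern:

      CREATE TABLE prev_season_wins AS
      SELECT CASE WHEN home_score > away_score THEN home_team ELSE away_team END AS team_id,
             COUNT(*) AS wins
      FROM   game_table
      WHERE  date BETWEEN 'start of previous season' AND 'end of previous season'
      GROUP  BY team_id;

      SELECT g.*,
             hw.wins AS home_team_prev_wins,
             aw.wins AS away_team_prev_wins
      FROM   game_table g
      LEFT JOIN prev_season_wins hw ON hw.team_id = g.home_team
      LEFT JOIN prev_season_wins aw ON aw.team_id = g.away_team
      WHERE  g.date BETWEEN 'start of this season' AND 'end of this season';

      DROP TABLE prev_season_wins;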

    Read the article

  • Where Not In OR Except simulation of SQL in LINQ to Objects

    - by Thinking
    Suppose I have two lists that hold the source file names and destination file names respectively. The source list has the files 1.txt, 2.txt, 3.txt, 4.txt, while the destination list has 1.txt, 2.txt. I need to write a LINQ query to find out which files in the source list are absent from the destination list; e.g. here the output will be 3.txt and 4.txt. I have done this with a foreach statement, but now I want to do the same using LINQ (C#). Edit: my code is List<FileList> sourceFileNames = new List<FileList>(); sourceFileNames.Add(new FileList { fileNames = "1.txt" }); sourceFileNames.Add(new FileList { fileNames = "2.txt" }); sourceFileNames.Add(new FileList { fileNames = "3.txt" }); sourceFileNames.Add(new FileList { fileNames = "4.txt" }); List<FileList> destinationFileNames = new List<FileList>(); destinationFileNames.Add(new FileList { fileNames = "1.txt" }); destinationFileNames.Add(new FileList { fileNames = "2.txt" }); IEnumerable<FileList> except = sourceFileNames.Except(destinationFileNames); And FileList is a simple class with only one property, fileNames, of type string.

    Read the article

  • Jboss 5.1.0: java.lang.LinkageError: loader constraint violation for class javax.sql.DataSource

    - by lawrencexu
    have read quite a lot about this error, the reason is 1) more than one jar containing this class have been include into the classpath, 2)include the jar twice or more, but in my case, this class is jdk class, and I have search no this class have been found under my application. any clue would be very helpful. exception stack: 2012-10-31 14:09:58,319 WARN [org.jboss.detailed.classloader.ClassLoaderManager] (http-0.0.0.0-8080-1:) Unexpected error during load of:javax.sql.DataSource java.lang.LinkageError: loader constraint violation: loader (instance of org/jboss/classloader/spi/base/BaseClassLoader) previously initiated loading for a different type with name "javax/sql/DataSource" at java.lang.ClassLoader.defineClass1(Native Method) at java.lang.ClassLoader.defineClassCond(ClassLoader.java:631) at java.lang.ClassLoader.defineClass(ClassLoader.java:615) at org.jboss.classloader.spi.base.BaseClassLoader.access$200(BaseClassLoader.java:67) at org.jboss.classloader.spi.base.BaseClassLoader$2.run(BaseClassLoader.java:633) at org.jboss.classloader.spi.base.BaseClassLoader$2.run(BaseClassLoader.java:592) at java.security.AccessController.doPrivileged(Native Method) at org.jboss.classloader.spi.base.BaseClassLoader.loadClassLocally(BaseClassLoader.java:591) at org.jboss.classloader.spi.base.BaseClassLoader.loadClassLocally(BaseClassLoader.java:568) at org.jboss.classloader.spi.base.BaseDelegateLoader.loadClass(BaseDelegateLoader.java:135) at org.jboss.classloader.spi.filter.FilteredDelegateLoader.loadClass(FilteredDelegateLoader.java:131) at org.jboss.classloader.spi.base.ClassLoadingTask$ThreadTask.run(ClassLoadingTask.java:455) at org.jboss.classloader.spi.base.ClassLoaderManager.nextTask(ClassLoaderManager.java:267) at org.jboss.classloader.spi.base.ClassLoaderManager.process(ClassLoaderManager.java:166) at org.jboss.classloader.spi.base.BaseClassLoaderDomain.loadClass(BaseClassLoaderDomain.java:287) at org.jboss.classloader.spi.base.BaseClassLoaderDomain.loadClass(BaseClassLoaderDomain.java:1163) at org.jboss.classloader.spi.base.BaseClassLoader.loadClassFromDomain(BaseClassLoader.java:862) at org.jboss.classloader.spi.base.BaseClassLoader.doLoadClass(BaseClassLoader.java:502) at org.jboss.classloader.spi.base.BaseClassLoader.loadClass(BaseClassLoader.java:447) at java.lang.ClassLoader.loadClass(ClassLoader.java:247) at com.lombardrisk.webgui.dwr.ajax_search.UmbrellaNettingFundsSearch.getBranchesByAgreementId(UmbrellaNettingFundsSearch.java:199) at com.lombardrisk.webgui.dwr.ajax_search.UmbrellaNettingFundsSearch.buildConditionOfPrinAndCpty(UmbrellaNettingFundsSearch.java:177) at com.lombardrisk.webgui.dwr.ajax_search.UmbrellaNettingFundsSearch.getFunds(UmbrellaNettingFundsSearch.java:58) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25) at java.lang.reflect.Method.invoke(Method.java:597) at org.directwebremoting.impl.ExecuteAjaxFilter.doFilter(ExecuteAjaxFilter.java:34) at org.directwebremoting.impl.DefaultRemoter$1.doFilter(DefaultRemoter.java:428) at org.directwebremoting.impl.DefaultRemoter.execute(DefaultRemoter.java:431) at org.directwebremoting.impl.DefaultRemoter.execute(DefaultRemoter.java:283) at org.directwebremoting.servlet.PlainCallHandler.handle(PlainCallHandler.java:52) at org.directwebremoting.servlet.UrlProcessor.handle(UrlProcessor.java:101) at 
org.directwebremoting.servlet.DwrServlet.doPost(DwrServlet.java:146) at javax.servlet.http.HttpServlet.service(HttpServlet.java:637) at javax.servlet.http.HttpServlet.service(HttpServlet.java:717) at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:290) at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:206) at com.lombardrisk.webgui.filter.ValidRequestFilter.doFilter(ValidRequestFilter.java:41) at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:235) at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:206) at org.jboss.web.tomcat.filters.ReplyHeaderFilter.doFilter(ReplyHeaderFilter.java:96) at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:235) at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:206) at org.apache.catalina.core.StandardWrapperValve.invoke(StandardWrapperValve.java:235) at org.apache.catalina.core.StandardContextValve.invoke(StandardContextValve.java:191) at org.jboss.web.tomcat.security.SecurityAssociationValve.invoke(SecurityAssociationValve.java:183) at org.apache.catalina.authenticator.AuthenticatorBase.invoke(AuthenticatorBase.java:525) at org.jboss.web.tomcat.security.JaccContextValve.invoke(JaccContextValve.java:95) at org.jboss.web.tomcat.security.SecurityContextEstablishmentValve.process(SecurityContextEstablishmentValve.java:126) at org.jboss.web.tomcat.security.SecurityContextEstablishmentValve.invoke(SecurityContextEstablishmentValve.java:70) at org.apache.catalina.core.StandardHostValve.invoke(StandardHostValve.java:127) at org.apache.catalina.valves.ErrorReportValve.invoke(ErrorReportValve.java:102) at org.jboss.web.tomcat.service.jca.CachedConnectionValve.invoke(CachedConnectionValve.java:158) at org.apache.catalina.core.StandardEngineValve.invoke(StandardEngineValve.java:109) at org.jboss.web.tomcat.service.request.ActiveRequestResponseCacheValve.internalProcess(ActiveRequestResponseCacheValve.java:74) at org.jboss.web.tomcat.service.request.ActiveRequestResponseCacheValve.invoke(ActiveRequestResponseCacheValve.java:47) at org.apache.catalina.connector.CoyoteAdapter.service(CoyoteAdapter.java:330) at org.apache.coyote.http11.Http11Processor.process(Http11Processor.java:829) at org.apache.coyote.http11.Http11Protocol$Http11ConnectionHandler.process(Http11Protocol.java:599) at org.apache.tomcat.util.net.JIoEndpoint$Worker.run(JIoEndpoint.java:451) at java.lang.Thread.run(Thread.java:662)

    Read the article

  • Help with SQL Query

    - by djfrear
    With regards to the following statement: Select * From explorer.booking_record booking_record_ Inner Join explorer.client client_ On booking_record_.labelno = client_.labelno Inner Join explorer.tour_hotel tour_hotel_ On tour_hotel_.tourcode = booking_record_.tourrefcode Inner Join explorer.hotelrecord hotelrecord_ On tour_hotel_.hotelcode = hotelrecord_.hotelref Where booking_record_.bookingdate Not Like '0000-00-00' And booking_record_.tourdeparturedate Not Like '0000-00-00' And hotelrecord_.hotelgroup = "LPL" And Year(booking_record_.tourdeparturedate) Between Year(AddDate(Now(), Interval -5 Year)) And Year(Now()) My MySQL skills are certainly not up to scratch, the actual result set I wish to find is "a customer who has been to 5 or more LPL hotels in the past 5 years". So far I havent got as far as dealing with the count as I'm getting a huge number of results with some 250+ per customer. I assume this is to do with the way I'm joining tables. Schema wise the booking_record table contains a tour reference code, which links to tour_hotel which then contains a hotelcode which links to hotelrecord. This hotelrecord table contains the hotelgroup. The client table is joined to the booking_record via a booking reference and a client may have many bookings. If anyone could suggest a way for me to do this I'd be very grateful and hopefully learn enough to do it myself next time! I've been scratching my head over this one for a few hours now! Customers may have many bookings within booking_record Daniel.
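    A hedged sketch of the aggregation step, reusing the joins above: one row per customer with the count of distinct LPL hotels over the last five years, filtered with HAVING. It assumes the client table has a per-customer key (called clientid here), since labelno looks like the per-booking reference; the '0000-00-00' guards are omitted for brevity.

      SELECT client_.clientid,
             COUNT(DISTINCT hotelrecord_.hotelref) AS lpl_hotels_visited
      FROM explorer.booking_record booking_record_
      INNER JOIN explorer.client client_
              ON booking_record_.labelno = client_.labelno
      INNER JOIN explorer.tour_hotel tour_hotel_
              ON tour_hotel_.tourcode = booking_record_.tourrefcode
      INNER JOIN explorer.hotelrecord hotelrecord_
              ON tour_hotel_.hotelcode = hotelrecord_.hotelref
      WHERE hotelrecord_.hotelgroup = 'LPL'
        AND booking_record_.tourdeparturedate >= DATE_SUB(CURDATE(), INTERVAL 5 YEAR)
      GROUP BY client_.clientid
      HAVING COUNT(DISTINCT hotelrecord_.hotelref) >= 5;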

    Read the article

  • Same query has nested loops when used with INSERT, but Hash Match without.

    - by AaronLS
    I have two tables, one has about 1500 records and the other has about 300000 child records. About a 1:200 ratio. I stage the parent table to a staging table, SomeParentTable_Staging, and then I stage all of it's child records, but I only want the ones that are related to the records I staged in the parent table. So I use the below query to perform this staging by joining with the parent tables staged data. --Stage child records INSERT INTO [dbo].[SomeChildTable_Staging] ([SomeChildTableId] ,[SomeParentTableId] ,SomeData1 ,SomeData2 ,SomeData3 ,SomeData4 ) SELECT [SomeChildTableId] ,D.[SomeParentTableId] ,SomeData1 ,SomeData2 ,SomeData3 ,SomeData4 FROM [dbo].[SomeChildTable] D INNER JOIN dbo.SomeParentTable_Staging I ON D.SomeParentTableID = I.SomeParentTableID; The execution plan indicates that the tables are being joined with a Nested Loop. When I run just the select portion of the query without the insert, the join is performed with Hash Match. So the select statement is the same, but in the context of an insert it uses the slower nested loop. I have added non-clustered index on the D.SomeParentTableID so that there is an index on both sides of the join. I.SomeParentTableID is a primary key with clustered index. Why does it use a nested loop for inserts that use a join? Is there a way to improve the performance of the join for the insert?
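    One way to confirm whether the join choice itself is the problem is to force the hash join on the INSERT with a query hint and compare the two plans; this is a diagnostic sketch rather than a recommended permanent hint:

      INSERT INTO [dbo].[SomeChildTable_Staging]
          ([SomeChildTableId], [SomeParentTableId], SomeData1, SomeData2, SomeData3, SomeData4)
      SELECT D.[SomeChildTableId], D.[SomeParentTableId],
             SomeData1, SomeData2, SomeData3, SomeData4
      FROM   [dbo].[SomeChildTable] D
      INNER JOIN dbo.SomeParentTable_Staging I
              ON D.SomeParentTableID = I.SomeParentTableID
      OPTION (HASH JOIN);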

    Read the article

  • Ordering by a max or a min from another table

    - by Paul Tomblin
    I have a table that consists of a unique id, and a few other attributes. It holds "schedules". Then I have another table that holds a list of all the times each schedule has or will "fire". This isn't the exact schema, but it's close: create table schedule ( id varchar(40) primary key, attr1 int, attr2 varchar(20) ); create table schedule_times ( id varchar(40) foreign key schedule(id), fire_date date ); I want to query the schedule table, getting the attributes and the next and previous fire_dates, in Java, sometimes ordering on one of the attributes, but sometimes ordering on either previous fire_date or the next fire_date. Ordering by the attributes is easy, I just stick an "order by" into the string while I'm building my prepared statement. I'm not even sure how to go about selecting the last fire_date and the next one in a single query - I know that I can find the next fire_date for a given id by doing a SELECT min(fire_date) FROM schedule_times WHERE id = ? AND fire_date > sysdate; and the similar thing for previous fire_date using max() and fire_date < sysdate. I'm just drawing a blank on how to incorporate that into a single select from the schedule so I can get both next and previous fire_date in one shot, and also how to order by either of those attributes.
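    A sketch of the single-query shape using scalar subqueries, so both dates come back in one pass and either one can drive the ORDER BY; sysdate is kept from the question's own examples:

      SELECT s.id,
             s.attr1,
             s.attr2,
             (SELECT MIN(t.fire_date) FROM schedule_times t
               WHERE t.id = s.id AND t.fire_date > sysdate) AS next_fire,
             (SELECT MAX(t.fire_date) FROM schedule_times t
               WHERE t.id = s.id AND t.fire_date < sysdate) AS prev_fire
      FROM   schedule s
      ORDER  BY next_fire;   -- or prev_fire, or attr1/attr2, chosen when the SQL is built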

    Read the article

  • javax.naming.NameNotFoundException: Name [comp/env] is not bound in this Context. Unable to find [comp] error with java scheduler

    - by Morgan Azhari
    What I'm trying to do is to update my database after a period of time. So I'm using java scheduler and connection pooling. I don't know why but my code only working once. It will print: init success success javax.naming.NameNotFoundException: Name [comp/env] is not bound in this Context. Unable to find [comp]. at org.apache.naming.NamingContext.lookup(NamingContext.java:820) at org.apache.naming.NamingContext.lookup(NamingContext.java:168) at org.apache.naming.SelectorContext.lookup(SelectorContext.java:158) at javax.naming.InitialContext.lookup(InitialContext.java:411) at test.Pool.main(Pool.java:25) ---> line 25 is Context envContext = (Context)initialContext.lookup("java:/comp/env"); I don't know why it only works once. I already test it if I didn't running it without java scheduler and it works fine. No error whatsoerver. Don't know why i get this error if I running it using scheduler. Hope someone can help me. My connection pooling code: public class Pool { public DataSource main() { try { InitialContext initialContext = new InitialContext(); Context envContext = (Context)initialContext.lookup("java:/comp/env"); DataSource datasource = new DataSource(); datasource = (DataSource)envContext.lookup("jdbc/test"); return datasource; } catch (Exception ex) { ex.printStackTrace(); } return null; } } my web.xml: <web-app version="3.0" xmlns="http://java.sun.com/xml/ns/javaee" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xsi:schemaLocation="http://java.sun.com/xml/ns/javaee http://java.sun.com/xml/ns/javaee/web-app_3_0.xsd"> <listener> <listener-class> package.test.Pool</listener-class> </listener> <resource-ref> <description>DB Connection Pooling</description> <res-ref-name>jdbc/test</res-ref-name> <res-type>javax.sql.DataSource</res-type> <res-auth>Container</res-auth> </resource-ref> Context.xml: <?xml version="1.0" encoding="UTF-8"?> <Context path="/project" reloadable="true"> <Resource auth="Container" defaultReadOnly="false" driverClassName="com.mysql.jdbc.Driver" factory="org.apache.tomcat.jdbc.pool.DataSourceFactory" initialSize="0" jdbcInterceptors="org.apache.tomcat.jdbc.pool.interceptor.ConnectionState;org.apache.tomcat.jdbc.pool.interceptor.StatementFinalizer" jmxEnabled="true" logAbandoned="true" maxActive="300" maxIdle="50" maxWait="10000" minEvictableIdleTimeMillis="300000" minIdle="30" name="jdbc/test" password="test" removeAbandoned="true" removeAbandonedTimeout="60" testOnBorrow="true" testOnReturn="false" testWhileIdle="true" timeBetweenEvictionRunsMillis="30000" type="javax.sql.DataSource" url="jdbc:mysql://localhost:3306/database?noAccessToProcedureBodies=true" username="root" validationInterval="30000" validationQuery="SELECT 1"/> </Context> my java scheduler public class Scheduler extends HttpServlet{ public void init() throws ServletException { System.out.println("init success"); try{ Scheduling_test test = new Scheduling_test(); ScheduledExecutorService executor = Executors.newScheduledThreadPool(100); ScheduledFuture future = executor.scheduleWithFixedDelay(test, 1, 60 ,TimeUnit.SECONDS); }catch(Exception e){ e.printStackTrace(); } } } Schedule_test public class Scheduling_test extends Thread implements Runnable{ public void run(){ Updating updating = new Updating(); updating.run(); } } updating public class Updating{ public void run(){ ResultSet rs = null; PreparedStatement p = null; StringBuilder sb = new StringBuilder(); Pool pool = new Pool(); Connection con = null; DataSource datasource = null; try{ datasource = pool.main(); 
con=datasource.getConnection(); sb.append("SELECT * FROM database"); p = con.prepareStatement(sb.toString()); rs = p.executeQuery(); rs.close(); con.close(); p.close(); datasource.close(); System.out.println("success"); }catch (Exception e){ e.printStackTrace(); } }

    Read the article

  • Postgres database ignoring created index?!

    - by drasto
    I have a Postgres database and a table called my_table. There are 4 columns in that table (id, column1, column2, column3). The id column is the primary key; there are no other constraints or indexes on the columns. The table has about 200000 rows. I want to print out all rows whose column2 value equals 'value12' (case insensitive). I use this: SELECT * FROM my_table WHERE column2 = lower('value12') Here is the execution plan for this statement (result of set enable_seqscan=on; EXPLAIN SELECT * FROM my_table WHERE column2 = lower('value12')): Seq Scan on my_table (cost=0.00..4676.00 rows=10000 width=55) Filter: ((column2)::text = 'value12'::text) I consider this too slow, so I create an index on column2 for better search performance: CREATE INDEX my_index ON my_table (lower(column2)) Now I run the same select: SELECT * FROM my_table WHERE column2 = lower('value12') and I expect it to be much faster because it can use the index. However, it is not faster; it is as slow as before. I check the execution plan and it is the same as before (see above). So it still uses a sequential scan and ignores the index! Where is the problem?
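    For reference, the planner can only use an expression index when the query repeats the indexed expression, so the predicate has to be written against lower(column2) rather than the raw column:

      CREATE INDEX my_index ON my_table (lower(column2));

      -- matches the expression index:
      SELECT * FROM my_table WHERE lower(column2) = 'value12';

      -- the original form compares the raw column, which the expression index cannot satisfy:
      -- SELECT * FROM my_table WHERE column2 = lower('value12');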

    Read the article

  • MySQL query - if not exists - insert into - else - update

    - by user3180931
    I made a simple document generator by the form, this form saves everything to mysql database, It works great, but when someone type a the same 'nrumowy' it creates a new row in mysql, 'nrumowy' is unique, so when someone adds a form with the same 'nrumowy' I want to just update existing data in mysql, I have that code: $con=mysqli_connect("localhost","login","pass","database"); // Check connection if (mysqli_connect_errno()) { echo "Failed to connect to MySQL: " . mysqli_connect_error(); } // escape variables for security $numerklienta = mysqli_real_escape_string($con, $_POST['numerklienta']); $name = mysqli_real_escape_string($con, $_POST['name']); $hours = mysqli_real_escape_string($con, $_POST['hours']); $date = mysqli_real_escape_string($con, $_POST['date']); $beginDate = mysqli_real_escape_string($con, $_POST['beginDate']); $nrdomu = mysqli_real_escape_string($con, $_POST['nrdomu']); $telefon = mysqli_real_escape_string($con, $_POST['telefon']); $fax = mysqli_real_escape_string($con, $_POST['fax']); $nip = mysqli_real_escape_string($con, $_POST['nip']); $email = mysqli_real_escape_string($con, $_POST['email']); $stronawww = mysqli_real_escape_string($con, $_POST['stronawww']); $branza = mysqli_real_escape_string($con, $_POST['branza']); $vatkodpocztowy = mysqli_real_escape_string($con, $_POST['vatkodpocztowy']); $vatmiejscowosc = mysqli_real_escape_string($con, $_POST['vatmiejscowosc']); $vatulica = mysqli_real_escape_string($con, $_POST['vatulica']); $vatnrdomu = mysqli_real_escape_string($con, $_POST['vatnrdomu']); $vatemail = mysqli_real_escape_string($con, $_POST['vatemail']); $vatosoba = mysqli_real_escape_string($con, $_POST['vatosoba']); $datapublikacji = mysqli_real_escape_string($con, $_POST['datapublikacji']); $rabat = mysqli_real_escape_string($con, $_POST['rabat']); $wartoscnetto = mysqli_real_escape_string($con, $_POST['wartoscnetto']); $typreklamy = mysqli_real_escape_string($con, $_POST['typreklamy']); $inne = mysqli_real_escape_string($con, $_POST['inne']); $inne2 = mysqli_real_escape_string($con, $_POST['inne2']); $inne3 = mysqli_real_escape_string($con, $_POST['inne3']); $zaliczka = mysqli_real_escape_string($con, $_POST['zaliczka']); $liczbarat1 = mysqli_real_escape_string($con, $_POST['liczbarat1']); $zaakceptowaneprzez = mysqli_real_escape_string($con, $_POST['zaakceptowaneprzez']); $telzam = mysqli_real_escape_string($con, $_POST['telzam']); $datapodpis = mysqli_real_escape_string($con, $_POST['datapodpis']); $nrumowy = mysqli_real_escape_string($con, $_POST['nrumowy']); $sql="IF NOT EXISTS ( SELECT * FROM zam WHERE nrumowy = '$nrumowy' ) THEN INSERT INTO zam (numerklienta, name, hours, date, beginDate, nrdomu, telefon, fax, nip, email, stronawww, branza, vatkodpocztowy, vatmiejscowosc, vatulica, vatnrdomu, vatemail, vatosoba, datapublikacji, rabat, wartoscnetto, typreklamy, inne, inne2, inne3, zaliczka, liczbarat1, zaakceptowaneprzez, telzam, datapodpis, nrumowy) VALUES ('$numerklienta', '$name', '$hours', '$date', '$beginDate', '$nrdomu', '$telefon', '$fax', '$nip', '$email', '$stronawww', '$branza', '$vatkodpocztowy', '$vatmiejscowosc', '$vatulica', '$vatnrdomu', '$vatemail', '$vatosoba', '$datapublikacji', '$rabat', '$wartoscnetto', '$typreklamy', '$inne', '$inne2', '$inne3', '$zaliczka', '$liczbarat1', '$zaakceptowaneprzez', '$telzam', '$datapodpis', '$nrumowy' ) ELSE UPDATE zam SET name = '$name', numerklienta = '$numerklienta', hours = '$hours', date = '$date', beginDate = '$beginDate', nrdomu = '$nrdomu', telefon = '$telefon', fax = '$fax', nip = 
'$nip', email = '$email', stronawww = '$stronawww', branza = '$branza', vatkodpocztowy = '$vatkodpocztowy', vatmiejscowosc = '$vatmiejscowosc', vatulica = '$vatulica', vatnrdomu = '$vatnrdomu', vatemail = '$vatemail', vatosoba = '$vatosoba', datapublikacji = '$datapublikacji', rabat = '$rabat', wartoscnetto = '$wartoscnetto', typreklamy = '$typreklamy', inne = '$inne', inne2 = '$inne2', inne3 = '$inne3', zaliczka = '$zaliczka', liczbarat1 = '$liczbarat1', zaakceptowaneprzez = '$zaakceptowaneprzez', telzam = '$telzam', datapodpis = '$datapodpis' WHERE nrumowy ='$nrumowy' END IF"; if (!mysqli_query($con,$sql)) { die('Error: ' . mysqli_error($con)); } mysqli_close($con); This query without " select..... " and "else update" just a 'insert into' works great, also when I change this 'insert into' to 'update' but I don't know how to make this variable if not exists - insert into - else update
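    A sketch of the usual MySQL shape for this, relying on the UNIQUE key on nrumowy that the question describes; only a few columns are shown, and the PHP variables are kept interpolated exactly as in the question (prepared statements with bound parameters would be safer):

      ALTER TABLE zam ADD UNIQUE (nrumowy);   -- once, if the unique key is not already in place

      INSERT INTO zam (nrumowy, name, numerklienta, hours /* , ... remaining columns ... */)
      VALUES ('$nrumowy', '$name', '$numerklienta', '$hours' /* , ... */)
      ON DUPLICATE KEY UPDATE
          name         = VALUES(name),
          numerklienta = VALUES(numerklienta),
          hours        = VALUES(hours)
          /* , ... one assignment per remaining column ... */;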

    Read the article

  • JoinColumn name not used in sql

    - by Vladimir
    Hi! I have a problem with mapping many-to-one relationship without exact foreign key constraint set in database. I use OpenJPA implementation with MySql database, but the problem is with generated sql scripts for insert and select statements. I have LegalEntity table which contains RootId column (among others). I also have Address table which has LegalEntityId column which is not nullable, and which should contain values referencing LegalEntity's "RootId" column, but without any database constraint (foreign key) set. Address entity is mapped: @Entity @Table(name="address") public class Address implements Serializable { ... @ManyToOne(fetch=FetchType.LAZY, optional=false) @JoinColumn(referencedColumnName="RootId", name="LegalEntityId", nullable=false, insertable=true, updatable=true, table="LegalEntity") public LegalEntity getLegalEntity() { return this.legalEntity; } } SELECT statement (when fetching LegalEntity's addresses) and INSERT statment are generated: SELECT t0.Id, .., t0.LEGALENTITY_ID FROM address t0 WHERE t0.LEGALENTITY_ID = ? ORDER BY t0.Id DESC [params=(int) 2] INSERT INTO address (..., LEGALENTITY_ID) VALUES (..., ?) [params=..., (int) 2] If I omit table attribute from mentioned statements are generated: SELECT t0.Id, ... FROM address t0 INNER JOIN legalentity t1 ON t0.LegalEntityId = t1.RootId WHERE t1.Id = ? ORDER BY t0.Id DESC [params=(int) 2] INSERT INTO address (...) VALUES (...) [params=...] So, LegalEntityId is not included in any of the statements. Is it possible to have relationship based on such referencing (to column other than primary key, without foreign key in database)? Is there something else missing? Thanks in advance.

    Read the article

  • Name for method that takes a string value and returns DBNull.Value || string

    - by David Murdoch
    I got tired of writing the following code: /* Commenting out irrelevant parts public string MiddleName; public void Save(){ SqlCommand = new SqlCommand(); // blah blah...boring INSERT statement with params etc go here. */ if(MiddleName==null){ myCmd.Parameters.Add("@MiddleName", DBNull.Value); } else{ myCmd.Parameters.Add("@MiddleName", MiddleName); } /* // more boring code to save to DB. }*/ So, I wrote this: public static object DBNullValueorStringIfNotNull(string value) { object o; if (value == null) { o = DBNull.Value; } else { o = value; } return o; } // which would be called like: myCmd.Parameters.Add("@MiddleName", DBNullValueorStringIfNotNull(MiddleName)); If this is a good way to go about doing this then what would you suggest as the method name? DBNullValueorStringIfNotNull is a bit verbose and confusing. I'm also open to ways to alleviate this problem entirely. I'd LOVE to do this: myCmd.Parameters.Add("@MiddleName", MiddleName==null ? DBNull.Value : MiddleName); but that won't work. I've got C# 3.5 and SQL Server 2005 at my disposal if it matters.

    Read the article

  • mysql_close doesn't kill locked sql requests

    - by Nikita
    I use mysqld Ver 5.1.37-2-log for debian-linux-gnu I perform mysql calls from c++ code with functions mysql_query. The problem occurs when mysql_query execute procedure, procedure locked on locked table, so mysql_query hangs. If send kill signal to application then we can see lock until table is locked. Create the following SQL table and procedure CREATE TABLE IF NOT EXISTS `tabletolock` ( `id` INT NOT NULL AUTO_INCREMENT, PRIMARY KEY (`id`) )ENGINE = InnoDB; DELIMITER $$ DROP PROCEDURE IF EXISTS `LOCK_PROCEDURE` $$ CREATE PROCEDURE `LOCK_PROCEDURE`() BEGIN SELECT id INTO @id FROM tabletolock; END $$ DELOMITER; There are sql commands to reproduce the problem: 1. in one terminal execute lock tables tabletolock write; 2. in another terminal execute call LOCK_PROCEDURE(); 3. In first terminal exeute show processlist and see | 2492 | root | localhost | syn_db | Query | 12 | Locked | SELECT id INTO @id FROM tabletolock | Then perfrom Ctrl-C in second terminal to interrupt our procudere and see processlist again. It is not changed, we already see locked select request and can teminate it by unlock tables or kill commands. Problem described is occured with mysql command line client. Also such problem exists when we use functions mysql_query and mysql_close. Example of c code: #include <iostream> #include <mysql/mysql.h> #include <mysql/errmsg.h> #include <signal.h> // g++ -Wall -g -fPIC -lmysqlclient dbtest.cpp using namespace std; MYSQL * connection = NULL; void closeconnection() { if(connection != NULL) { cout << "close connection !\n"; mysql_close(connection); mysql_thread_end(); delete connection; mysql_library_end(); } } void sigkill(int s) { closeconnection(); signal(SIGINT, NULL); raise(s); } int main(int argc, char ** argv) { signal(SIGINT, sigkill); connection = new MYSQL; mysql_init(connection); mysql_options(connection, MYSQL_READ_DEFAULT_GROUP, "nnfc"); if (!mysql_real_connect(connection, "127.0.0.1", "user", "password", "db", 3306, NULL, CLIENT_MULTI_RESULTS)) { delete connection; cout << "cannot connect\n"; return -1; } cout << "before procedure call\n"; mysql_query(connection, "CALL LOCK_PROCEDURE();"); cout << "after procedure call\n"; closeconnection(); return 0; } Compile it, and perform the folloing actions: 1. in first terminal local tables tabletolock write; 2. run program ./a.out 3. interrupt program Ctrl-C. on the screen we see that closeconnection function is called, so connection is closed. 4. in first terminal execute show processlist and see that procedure was not intrrupted. My question is how to terminate such locked calls from c code? Thank you in advance!
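    For reference, closing the client does not abort a statement that is waiting on a table lock; the server usually only notices the dropped connection when it next tries to send data. From a second connection the stuck statement can be ended explicitly (the id below comes from the processlist output in the question):

      SHOW PROCESSLIST;    -- find the connection whose State is 'Locked'
      KILL QUERY 2492;     -- terminate just that statement
      -- or: KILL 2492;    -- drop the whole connection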

    Read the article
