Search Results

Search found 29193 results on 1168 pages for 'sql merge'.

Page 679/1168

  • Beginner Ques::How to delete records permanently in case of linked tables?

    - by Serenity
    Let's say I have these two tables, QuesType and Ques:

        QuesType
        QuesTypeID | QuesType     | Active
        -----------------------------------
        101        | QuesType1    | True
        102        | QuesType2    | True
        103        | XXInActiveXX | False

        Ques
        QuesID | Ques  | Answer | QuesTypeID | Active
        ----------------------------------------------
        1      | Ques1 | Ans1   | 101        | True
        2      | Ques2 | Ans2   | 102        | True
        3      | Ques3 | Ans3   | 101        | True

    In the QuesType table, QuesTypeID is the primary key. In the Ques table, QuesID is the primary key and QuesTypeID is a foreign key that references QuesTypeID in the QuesType table.
    Right now I am unable to delete records from the QuesType table; I can only make a QuesType inactive by setting Active = False. I cannot delete QuesTypes permanently because of the foreign key relationship with the Ques table, so I just set Active = false and those QuesTypes then don't show on my grid when it is bound.
    What I want is to be able to delete any QuesType permanently. It can only be deleted if it is not being used anywhere in the Ques table, right? So to delete any QuesType permanently, this is what I thought I could do: the grid that displays QuesTypes has a check box for Active and a button for Delete. When a user makes a QuesType inactive, the OnCheckChanged() event runs and deletes all the questions in the Ques table that use that QuesTypeID. On the QuesType grid, that QuesType then shows as deactivated, and only then can a user delete it permanently. Am I thinking correctly?
    Currently, my DeleteQuesType stored procedure sets Active = false and sets QuesType to some string like XXInactiveXX. Is there any other way?
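    A minimal T-SQL sketch of the "delete the dependent rows first" approach described above (table and column names follow the question; wrapping both deletes in one transaction is an assumption):

        BEGIN TRANSACTION;

        -- remove the questions that reference the type, then the type itself
        DELETE FROM Ques     WHERE QuesTypeID = @QuesTypeID;
        DELETE FROM QuesType WHERE QuesTypeID = @QuesTypeID;

        COMMIT TRANSACTION;

    Declaring the foreign key with ON DELETE CASCADE would achieve the same effect without the first DELETE, at the cost of making the cascade implicit.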

    Read the article

  • drop-down combo box-like functionality for queries.

    - by Frank Developer
    Is it possible to provide the following type of functionality with the Informix client tools? As the user types the first two characters of a name, the drop-down list stays empty. At the third character, the list fills with just the names beginning with those three characters. At the fourth character, MS-Access completes the first matching name (assuming the combo's AutoExpand is on). Once enough characters have been typed to identify the customer, the user tabs to the next field. The time taken to load the combo between keystrokes is minimal, and this load occurs only once for each entry, unless the user backspaces through the first three characters again. If the list still contains too many records, you can reduce them by another order of magnitude by changing the value of the constant conSuburbMin from 3 to 4.
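    One way to back such a combo is to re-run a prefix query once the third character has been typed, limiting the number of rows returned. A sketch in Informix SQL (the table, column, and the typed prefix 'Smi' are assumptions):

        SELECT FIRST 100 customer_name
        FROM   customer
        WHERE  customer_name LIKE 'Smi%'   -- the first three typed characters plus a wildcard
        ORDER  BY customer_name;

    A prefix LIKE of this form can use an index on customer_name, which keeps the per-keystroke load small.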

    Read the article

  • PreparedStatement.setString() method without quotes

    - by Slavko
    I'm trying to use a PreparedStatement with code similar to this:

        SELECT * FROM ? WHERE name = ?

    Obviously, what happens when I use setString() to set the table and name field is this:

        SELECT * FROM 'my_table' WHERE name = 'whatever'

    and the query doesn't work. Is there a way to set the String without quotes so the line looks like this:

        SELECT * FROM my_table WHERE name = 'whatever'

    or should I just give it up and use the regular Statement instead (the arguments come from another part of the system, neither of those is entered by a user)?

    Read the article

  • Having to insert a record, then update the same record warrants 1:1 relationship design?

    - by dianovich
    Let's say an Order has many Line items, and we're storing the total cost of an order (the sum of the prices on its order lines) in the orders table:

        orders
        ------------
        id
        ref
        total_cost

        lines
        ------------
        id
        order_id
        price

    In a simple application, the order and its lines are created during the same step of the checkout process. So this means:

        INSERT INTO orders ....
        -- Get ID of inserted order record
        INSERT INTO lines VALUES (null, order_id, ...), ...

    where we get the order ID after creating the order record. The problem I'm having is figuring out the best way to store the total cost of an order. I don't want to have to:

        1. create an order
        2. create the lines on the order
        3. calculate the cost of the order based on its lines
        4. then update the record created in step 1 in the orders table

    This would mean a nullable total_cost field on orders, for starters... My solution thus far is to have an order_totals table with a 1:1 relationship to the orders table, but I think it's redundant. Ideally, since everything required to calculate the total cost (the lines on an order) is in the database, I would work out the value every time I need it, but that is very expensive. What are your thoughts?
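    A sketch of folding the sum back into the order inside the same checkout transaction, which avoids the separate totals table (generic SQL; the ref value, prices, and the :order_id placeholder are illustrative):

        BEGIN;

        INSERT INTO orders (ref, total_cost) VALUES ('ABC123', 0);
        -- capture the new order id using whatever mechanism the database offers

        INSERT INTO lines (order_id, price)
        VALUES (:order_id, 10.00),
               (:order_id, 4.50);

        UPDATE orders
        SET    total_cost = (SELECT SUM(price) FROM lines WHERE order_id = :order_id)
        WHERE  id = :order_id;

        COMMIT;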

    Read the article

  • counting employee attendance

    - by jjj
    I am trying to write a statement that counts employee attendance and returns each employee's id, name, and the number of days they worked over the last 3 months, by counting the duplicate ids in NewTimeAttendance for months 1, 2 and 3. First I tried a plain count:

        SELECT COUNT(employeeid)
        FROM   NewTimeAttendance
        WHERE  employeeid = 1 AND (month = 1 OR month = 2 OR month = 3)

    This works, but only for one employee at a time. The second try:

        SELECT COUNT(NewEmployee.EmployeeID)
        FROM   NewEmployee
               INNER JOIN NewTimeAttendance
                       ON NewEmployee.EmployeeID = NewTimeAttendance.EmployeeID
                      AND (month = 1 OR month = 2 OR month = 3)

    This also works, but it counts all employees together, and I want it to return each EmployeeID, EmployeeName and number of days as a separate record. Last try (before you look at the code: I know it is wrong, but I am trying):

        for i in 0..27 loop
            SELECT COUNT(NewEmployee.EmployeeID), NewEmployee.EmployeeId, EmployeeName
            FROM   NewEmployee
                   INNER JOIN NewTimeAttendance
                           ON NewEmployee.EmployeeID(i) = NewTimeAttendance.EmployeeID
                          AND (month = 1 OR month = 2 OR month = 3)
        end loop

    I really need help... thanks in advance.
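    A GROUP BY sketch that returns one row per employee without any looping (column names follow the question; the assumption is that month is a column of NewTimeAttendance and that one attendance row corresponds to one day):

        SELECT e.EmployeeID,
               e.EmployeeName,
               COUNT(a.EmployeeID) AS DaysWorked
        FROM   NewEmployee e
               INNER JOIN NewTimeAttendance a
                       ON a.EmployeeID = e.EmployeeID
                      AND a.month IN (1, 2, 3)
        GROUP  BY e.EmployeeID, e.EmployeeName;

    Switching the INNER JOIN to a LEFT JOIN would also list employees with no attendance rows, if that is wanted.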

    Read the article

  • Dynamic where clause in LINQ - with column names available at runtime

    - by sandesh247
    Disclaimer: I've solved the problem using Expressions from System.Linq.Expressions, but I'm still looking for a better/easier way. Consider the following situation:

        var query = from c in db.Customers
                    where c.ContactFirstName.Contains("BlackListed")
                       || c.ContactLastName.Contains("BlackListed")
                       || c.Address.Contains("BlackListed")
                    select c;

    The columns/attributes that need to be checked against the blacklisted term are only available to me at runtime. How do I generate this dynamic where clause? An additional complication is that the Queryable collection (db.Customers above) is typed to a Queryable of the base class of 'Customer' (say 'Person'), and therefore writing c.Address as above is not an option.

    Read the article

  • Looking for MSSQL Table Design Sanity Check for Profile Tables with Dynamic Columns.

    - by Code Sherpa
    I just want a general sanity check regarding database design. We are building a web system that has both Teachers and Students. Both have accounts in the system, and both have profiles. My question is about the table design of those Profile tables.
    The Teacher profile is pretty static regarding the metadata associated with it. Each teacher has a set number of fields that exposes information about that individual (schools, degrees, etc). The students, however, are a different case. We are using a Windows service to pull varying data about the students from an endless stream of Excel spreadsheets. The data gets moved into our database and then the fields appear in association with the student's profile. Accordingly, each and every student may have very different fields in their profile. I originally started with the concept of three tables:

        Accounts
        ----------
        AccountID

        TeacherProfiles
        ----------
        TeacherProfileID
        AccountID
        SecondarySchool
        University
        YearsTeaching
        Etc...

        StudentProfiles
        ----------
        StudentProfileID
        AccountID
        Header
        Value

    The StudentProfiles table would hold the names of the column headers from the Excel spreadsheets and the associated values. I have since evolved the design a little to treat profiles more generically, per the attached ERD image. The Teacher and Student "headers" are stored in a table called "ProfileAttributeTypes" and responses (either from the Excel document or via input fields on the web form) are put in a ProfileAttributes table. This way both Student and Teacher profiles can be associated with a dynamic flow of profile fields. The "Permissions" table tells us whether we are dealing with a Student or a Teacher. Since this system is likely to grow quickly, I want to make sure the foundation is solid. Can you please provide feedback about this design and let me know if it seems sound, or if you can see problems it might create and, if so, what might be a better approach? Thanks in advance.
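    My reading of the generic attribute design described above, as a T-SQL sketch (column names and types are assumptions; the actual ERD may differ):

        CREATE TABLE ProfileAttributeTypes (
            ProfileAttributeTypeID int IDENTITY PRIMARY KEY,
            Name                   nvarchar(200) NOT NULL  -- the "header" from the spreadsheet or web form
        );

        CREATE TABLE ProfileAttributes (
            ProfileAttributeID     int IDENTITY PRIMARY KEY,
            AccountID              int NOT NULL REFERENCES Accounts (AccountID),
            ProfileAttributeTypeID int NOT NULL REFERENCES ProfileAttributeTypes (ProfileAttributeTypeID),
            Value                  nvarchar(max) NULL
        );

    This is the classic entity-attribute-value shape: flexible for the spreadsheet-driven student fields, but queries that pivot many attributes back into columns get progressively more awkward, which is the usual trade-off to weigh here.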

    Read the article

  • History tables pros, cons and gotchas - using triggers, sproc or at application level.

    - by Nathan W
    I am currently playing around with the idea of having history tables for some of the tables in my database. Basically I have the main table and a copy of that table with a modified date and an action column to store what action was performed, e.g. Update, Delete or Insert. So far I can think of three different places where you can do the history-table work:

        1. Triggers on the main table for update, insert and delete. (Database)
        2. Stored procedures. (Database)
        3. Application layer. (Application)

    My main question is: what are the pros, cons and gotchas of doing the work in each of these layers? One advantage I can think of for the trigger approach is that integrity is always maintained no matter what program is implemented on top of the database.
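    For reference, a minimal sketch of the trigger variant in T-SQL (the table and column names are invented for illustration):

        CREATE TRIGGER trg_Orders_History
        ON Orders
        AFTER INSERT, UPDATE, DELETE
        AS
        BEGIN
            -- rows in "inserted" cover INSERTs and the new side of UPDATEs
            INSERT INTO Orders_History (OrderId, Status, ModifiedDate, Action)
            SELECT OrderId, Status, GETDATE(),
                   CASE WHEN EXISTS (SELECT 1 FROM deleted) THEN 'Update' ELSE 'Insert' END
            FROM   inserted;

            -- rows only in "deleted" (nothing in "inserted") are DELETEs
            IF NOT EXISTS (SELECT 1 FROM inserted)
                INSERT INTO Orders_History (OrderId, Status, ModifiedDate, Action)
                SELECT OrderId, Status, GETDATE(), 'Delete'
                FROM   deleted;
        END;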

    Read the article

  • Extract time part from TimeStamp column in ORACLE

    - by RRUZ
    Currently I am using MyTimeStampField - TRUNC(MyTimeStampField) to extract the time part from a timestamp column in Oracle:

        SELECT CURRENT_TIMESTAMP - TRUNC(CURRENT_TIMESTAMP) FROM DUAL

    This returns +00 13:12:07.100729, which works fine for extracting the time part from a timestamp field, but I'm wondering whether there is a better way (maybe a built-in Oracle function) to do this? Thanks.
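    If a text representation of the time part is acceptable, a sketch using Oracle's built-in TO_CHAR:

        -- format only the time portion, including fractional seconds
        SELECT TO_CHAR(CURRENT_TIMESTAMP, 'HH24:MI:SS.FF6') FROM DUAL;

    The subtraction shown in the question keeps the result as an INTERVAL DAY TO SECOND, whereas TO_CHAR yields a string, so which is "better" depends on what the value is used for afterwards.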

    Read the article

  • Calendar using Javascript/ PHP/ mySQL

    - by Gushiken
    For a current web app I need an "Outlook-like" calendar. Here are some requirements:

        - week view for the appointments
        - different appointment types
        - direct display of the length and time of the appointment (like in Google Calendar)
        - multiple appointments at the same time
        - uses only JavaScript, PHP and any DB

    We need the calendar for the Zend Framework, so if the calendar doesn't already support ZF, the source needs to be editable. Do you know of any calendar which fits my needs? Or do you have any tips for developing one myself?

    Read the article

  • Entity Framework EntityKey / Foreign Key problem.

    - by Ronny176
    Hi, I keep getting the same error:

        Entities in 'VlaamseOverheidMeterEntities.ObjectMeter' participate in the 'FK_ObjectMeter_Meter' relationship.
        0 related 'Meter' were found. 1 'Meter' is expected.

    I have the following table structure:

        Meter 1 <- * ObjectMeter * - 1 VO_Object

    It is always the same scenario: the first meter is added to the database, the second meter gives the error above. I have the following code in my manager:

        public List<string> addTemporary(string username, string meterNaam, string readingType, string parentID)
        {
            Meter meter = new Meter();
            VO_Object voObject = objectManager.getObjectByID(parentID);
            ObjectMeter objMeter = new ObjectMeter();

            meter.readingType = (int)Enum.Parse(typeof(ReadingType), readingType);
            meter.isActive = true;
            meter.name = meterNaam;
            meter.startDate = DateTime.Now;
            meter.endDate = DateTime.Now.AddYears(6000);
            meter.uniqueIdentifier = "N/A";
            meter.meterType = (int)Enum.Parse(typeof(MeterType), "NA");
            meter.meterCategory = (int)Enum.Parse(typeof(MeterCategory), "NA");
            meter.energyType = (int)Enum.Parse(typeof(EnergyType), "NA");
            meter.utilityType = (int)Enum.Parse(typeof(UtilityType), "NA");
            meter.unitOfMeasure = (int)Enum.Parse(typeof(UnitOfMeasure), "NA");

            objMeter.valid_from = meter.startDate;
            objMeter.valid_until = meter.endDate;
            objMeter.Meter = meter;
            objMeter.VO_Object = voObject;

            createMeter(meter);

            List<String> str = new List<string>();
            str.Add("" + meter.meterID);
            str.Add(meter.name);
            return str;
        }

    and this in my DAO class which talks to the database:

        internal void CreateMeter(Meter _meter)
        {
            _entities.AddToMeter(_meter);
            _entities.SaveChanges();
        }

    Can someone please explain this error? Ronald

    Read the article

  • Selecting More Than 1 Table in A Single Query

    - by Kamran
    I have 5 tables in MS Access:

        Logs [Title, ID, Date, Author]
        Tape [Title, ID, Date, Author]
        Maps [Title, ID, Date, Author]
        VCDs [Title, ID, Date, Author]
        Book [Title, ID, Date, Author]

    I tried my level best with this code:

        SELECT Logs.[Author], Tape.[Author], Maps.[Author], VCDs.[Author], Book.[Author]
        FROM Logs, Tape, Maps, VCDs, Book
        WHERE ((([Author] & " " & [Author] & " " & [Author] & " " & [Author] & " " & [Author])
               Like "*" & [Type the Title or Any Part of the Title and Press Ok] & "*"));

    I want to select all of these fields in a single query. Suppose Adam is the author of works in all tables; when I put Adam in the search box it should return results from all tables. I know this could be done by having a single table or by renaming the fields, but that's not what is required. Please help.
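    A sketch of the usual approach: stack the five tables with UNION ALL and filter the stacked result (Access SQL; the Source column and the prompt text are illustrative, and depending on the Access version the union part may need to be saved as a separate query first):

        SELECT t.Title, t.ID, t.[Date], t.Author, t.Source
        FROM (
            SELECT Title, ID, [Date], Author, "Logs" AS Source FROM Logs
            UNION ALL
            SELECT Title, ID, [Date], Author, "Tape" FROM Tape
            UNION ALL
            SELECT Title, ID, [Date], Author, "Maps" FROM Maps
            UNION ALL
            SELECT Title, ID, [Date], Author, "VCDs" FROM VCDs
            UNION ALL
            SELECT Title, ID, [Date], Author, "Book" FROM Book
        ) AS t
        WHERE t.Author Like "*" & [Type the Author or Any Part of the Name and Press Ok] & "*";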

    Read the article

  • Aggregate Functions in Index with IBMDB2

    - by Erkan
    Is there any way to pre-aggregate the results of aggregate functions (for instance COUNT()) and store them in an index? The background is that I want to speed up COUNT() queries, so that:

        SELECT COUNT(users) FROM TE123 WHERE region = 'A';

    would be supported by an index like:

        Region | Count(Users)
        A      | 548
        E      | 458

    I know that MQTs would also help with this problem. However, in this case it is not possible to use an MQT, as we use a kind of ORM and we don't want to define entities on MQTs. I only vaguely remember one DBA telling me that such a feature was planned for DB2 V10.
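    For reference, the MQT route mentioned above would look roughly like this in DB2 (a sketch only, since the ORM constraint rules it out here; the summary-table name is invented, and COUNT(*) stands in for COUNT(users)):

        CREATE TABLE te123_region_counts AS
            (SELECT region, COUNT(*) AS user_count
             FROM   TE123
             GROUP  BY region)
            DATA INITIALLY DEFERRED REFRESH IMMEDIATE;

        SET INTEGRITY FOR te123_region_counts IMMEDIATE CHECKED;

    With REFRESH IMMEDIATE the optimizer can route the COUNT query to the small summary table automatically; as far as I know a plain index cannot hold pre-aggregated counts, so some form of summary table (MQT or trigger-maintained) is the usual answer.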

    Read the article

  • While Loop in TSQL with Sum totals

    - by RPS
    I have the following TSQL statement. I am trying to figure out how I can keep fetching the results 100 rows at a time, store them in a variable (since I will have to add the totals after each select), and continue selecting in a while loop until no more records are found, then return the accumulated totals to the calling function.

        SELECT [OrderUser].OrderUserId,
               ISNULL(SUM(total.FileSize), 0),
               ISNULL(SUM(total.CompressedFileSize), 0)
        FROM (
            SELECT DISTINCT TOP(100)
                   ProductSize.OrderUserId,
                   ProductSize.FileInfoId,
                   CAST(ProductSize.FileSize AS BIGINT) AS FileSize,
                   CAST(ProductSize.CompressedFileSize AS BIGINT) AS CompressedFileSize
            FROM ProductSize WITH (NOLOCK)
            INNER JOIN [Version] ON ProductSize.VersionId = [Version].VersionId
        ) AS total
        RIGHT OUTER JOIN [OrderUser] WITH (NOLOCK) ON total.OrderUserId = [OrderUser].OrderUserId
        WHERE NOT ([OrderUser].isCustomer = 1 AND [OrderUser].isEndOrderUser = 0 OR [OrderUser].isLocation = 1)
          AND [OrderUser].OrderUserId = 1
        GROUP BY [OrderUser].OrderUserId
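    A sketch of the accumulate-per-batch pattern, stripped down to the ProductSize columns (it assumes FileInfoId is an increasing integer key to page on; the real query would keep the Version join and OrderUser filters):

        DECLARE @totalFileSize   bigint = 0,
                @totalCompressed bigint = 0,
                @lastId          int    = 0,
                @batchFileSize   bigint,
                @batchCompressed bigint,
                @maxId           int;

        WHILE 1 = 1
        BEGIN
            SELECT @batchFileSize   = SUM(b.FileSize),
                   @batchCompressed = SUM(b.CompressedFileSize),
                   @maxId           = MAX(b.FileInfoId)
            FROM (
                SELECT TOP (100) FileInfoId,
                       CAST(FileSize AS bigint)           AS FileSize,
                       CAST(CompressedFileSize AS bigint) AS CompressedFileSize
                FROM   ProductSize
                WHERE  FileInfoId > @lastId
                ORDER  BY FileInfoId
            ) AS b;

            IF @maxId IS NULL BREAK;   -- no rows left

            SET @totalFileSize   = @totalFileSize   + ISNULL(@batchFileSize, 0);
            SET @totalCompressed = @totalCompressed + ISNULL(@batchCompressed, 0);
            SET @lastId          = @maxId;
        END;

        SELECT @totalFileSize AS TotalFileSize, @totalCompressed AS TotalCompressedFileSize;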

    Read the article

  • Is it possible with a dynamic TSQL query?

    - by eugeneK
    I have a very long select query that I need to filter based on some parameters. I'm trying to avoid having different stored procedures, or if statements inside a single stored procedure, by making part of the TSQL dynamic. I'll spare you the long select and use a short one for example's sake:

        select a from b where c = @c or d = @d

    @c and @d are filter params; only one can filter at a time, but both filters could also be disabled (a value of 0 for either means that param is disabled), so I can build an nvarchar containing the WHERE clause... How do I combine a dynamic fragment with an otherwise normal query, so that the WHERE can be appended to the static part? I cannot put the whole query into a big nvarchar, because there are too many things in it that would require changes (i.e. WHENs, subqueries, joins).
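    A minimal sketch of the usual pattern with sp_executesql: keep the long SELECT as a fixed string, append only the WHERE fragment, and still pass the filter values as real parameters (the int types are assumptions; the short SELECT stands in for the long one):

        DECLARE @sql   nvarchar(max),
                @where nvarchar(200) = N'';

        IF @c <> 0
            SET @where = N' WHERE c = @c';
        ELSE IF @d <> 0
            SET @where = N' WHERE d = @d';

        SET @sql = N'SELECT a FROM b' + @where;

        EXEC sp_executesql @sql, N'@c int, @d int', @c = @c, @d = @d;

    sp_executesql ignores parameters that the final text does not reference, so both can always be passed and the values never get concatenated into the SQL string.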

    Read the article

  • MySQL: what's wrong with my foreign keys?

    - by Skiy
    Hello, what is wrong with the two foreign keys that I have marked with comments?

        create database db;
        use db;

        create table Flug(
            Flugbez varchar(20),
            FDatum Date,
            Ziel varchar(20),
            Flugzeit int,
            Entfernung int,
            Primary Key (Flugbez, FDatum));

        create table Flugzeugtyp(
            Typ varchar(20),
            Hersteller varchar(20),
            SitzAnzahl int,
            Reisegeschw int,
            primary key (Typ) );

        create table flugzeug(
            Typ varchar(20),
            SerienNr int,
            AnschDatum Date,
            FlugStd int,
            primary key(Typ, SerienNr),
            foreign key(Typ) references Flugzeugtyp(Typ));

        create table Abflug(
            Flugbez varchar(20),
            FDatum Date,
            Typ varchar(20),
            Seriennr int,
            Kaptaen varchar(20),
            Primary key(Flugbez, FDatum, Typ, SerienNr),
            Foreign key(Flugbez) references Flug(Flugbez),
            -- Foreign key(FDatum) references Flug(FDatum),
            Foreign key(Typ) references Flugzeugtyp(Typ)
            -- ,Foreign key(SerienNr) references Flugzeug(SerienNr)
        );

    When I uncomment these, I get:

        ERROR 1005 (HY000): Can't create table 'db.abflug' (errno: 150)
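    One likely cause, shown as a sketch: MySQL requires a foreign key to reference an indexed left prefix of the parent table's key, and FDatum and SerienNr are only the second columns of their composite primary keys, so they cannot be referenced on their own. Referencing both key columns together avoids errno 150:

        create table Abflug(
            Flugbez varchar(20),
            FDatum Date,
            Typ varchar(20),
            SerienNr int,
            Kaptaen varchar(20),
            Primary key(Flugbez, FDatum, Typ, SerienNr),
            Foreign key(Flugbez, FDatum) references Flug(Flugbez, FDatum),
            Foreign key(Typ, SerienNr)   references flugzeug(Typ, SerienNr)
        );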

    Read the article

  • Insert not working

    - by user1642318
    I've searched everywhere and tried all the suggestions, but still no luck when running the following code. Note that some code is commented out; that's just me trying different things.

        SqlConnection connection = new SqlConnection("Data Source=URB900-PC\SQLEXPRESS;Initial Catalog=usersSQL;Integrated Security=True");

        string password = PasswordTextBox.Text;
        string email = EmailTextBox.Text;
        string firstname = FirstNameTextBox.Text;
        string lastname = SurnameTextBox.Text;

        //command.Parameters.AddWithValue("@UserName", username);
        //command.Parameters.AddWithValue("@Password", password);
        //command.Parameters.AddWithValue("@Email", email);
        //command.Parameters.AddWithValue("@FirstName", firstname);
        //command.Parameters.AddWithValue("@LastName", lastname);

        command.Parameters.Add("@UserName", SqlDbType.VarChar);
        command.Parameters["@UserName"].Value = username;
        command.Parameters.Add("@Password", SqlDbType.VarChar);
        command.Parameters["@Password"].Value = password;
        command.Parameters.Add("@Email", SqlDbType.VarChar);
        command.Parameters["@Email"].Value = email;
        command.Parameters.Add("@FirstName", SqlDbType.VarChar);
        command.Parameters["@FirstName"].Value = firstname;
        command.Parameters.Add("@LasttName", SqlDbType.VarChar);
        command.Parameters["@LasttName"].Value = lastname;

        SqlCommand command2 = new SqlCommand("INSERT INTO users (UserName, Password, UserEmail, FirstName, LastName)" +
            "values (@UserName, @Password, @Email, @FirstName, @LastName)", connection);

        connection.Open();
        command2.ExecuteNonQuery();
        //command2.ExecuteScalar();
        connection.Close();

    When I run this, fill in the textboxes and hit the button, I get:

        Must declare the scalar variable "@UserName".

    Any help would be greatly appreciated. Thanks.

    Read the article

  • change postgres date format

    - by Jay
    Is there a way to change the default format of a date in Postgres? Normally when I query a Postgres database, dates come out as yyyy-mm-dd hh:mm:ss+tz, like 2011-02-21 11:30:00-05. But in one particular program the dates come out as yyyy-mm-dd hh:mm:ss.s, that is, there is no time zone and it shows tenths of a second. Apparently something is changing the default date format, but I don't know what or where. I don't think it's a server-side configuration parameter, because I can access the same database with a different program and I get the format with the timezone. I care because it appears to be ignoring my "set timezone" calls in addition to changing the format; all times come out EST.
    Additional info: if I write "select somedate from sometable" I get the "no timezone" format. But if I write "select to_char(somedate::timestamptz, 'yyyy-mm-dd hh24:mi:ss-tz')" then timezones work as I would expect. This really sounds to me like something is implicitly applying "to_char(date::timestamp, 'yyyy-mm-dd hh24:mi:ss.m')" to all timestamps. But I can't find anything in the documentation about how I would do that if I wanted to, nor can I find anything in the code that appears to do it. Though as I don't know what to look for, that doesn't prove much.
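    A few session-level checks that can show where the formatting and timezone come from, ideally run from inside the problem program (a sketch; 'someuser' and 'somedb' are placeholders):

        SHOW DateStyle;                      -- e.g. ISO, MDY
        SHOW timezone;
        SELECT current_setting('timezone');

        -- session defaults can also be pinned per role or per database:
        ALTER ROLE someuser SET timezone TO 'UTC';
        ALTER DATABASE somedb SET datestyle TO 'ISO, MDY';

    If the column itself is "timestamp without time zone", clients will render it with no offset regardless of these settings, which would match the symptom described above.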

    Read the article

  • Alternatives to sign() in sqlite for custom order by

    - by Pentium10
    I have a string column which contains some numeric values, but a lot of them are 0, empty string or null; the rest are numbers of varying ranges, all positive. I am trying to create a custom ORDER BY over two fields: first separate the rows whose value is 0 from the rest, then sort by name. So something like this would work:

        select * from table order by sign(referenceid) desc, name asc;

    But SQLite lacks the sign() (-1/0/1) function, and I am on Android, so I can't create user-defined functions. What other options do I have to get this sort done?
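    A sketch using CASE in place of sign(), which works in stock SQLite on Android; since the values are never negative, a two-valued flag is enough ('mytable' is a placeholder for the real table name):

        SELECT *
        FROM   mytable
        ORDER  BY CASE WHEN CAST(referenceid AS INTEGER) > 0 THEN 1 ELSE 0 END DESC,
                  name ASC;

    NULL, empty strings and '0' all fail the > 0 test, so they sort together after the positive values, matching sign() DESC.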

    Read the article

  • Access to Oracle Database with sqlapi C++

    - by Meloun
    Hi, I need to write some data into several databases, and I chose sqlapi.com. I have it working for MySQL and MSSQL; now I have a problem with an Oracle database. I have installed the server and client on Ubuntu. In the browser it works, but SQLAPI says:

        libnnz10.so: cannot open shared object file: No such file or directory
        DBMS API Library 'libclntsh.so' loading fails
        This library is a part of DBMS client installation, not SQLAPI++
        Make sure DBMS client is installed and this required library is available for dynamic loading
        Linux/Unix:
        1) The directories in the user's LD_LIBRARY_PATH environment variable
        2) The list of libraries cached in /etc/ld.so.cache
        3) /usr/lib, followed by /lib

    Both of these files exist deep inside /usr/lib. I have tried a lot of ways to tell Eclipse the path to this folder, but nothing works. Thanks for any help.

    Read the article

  • sequence of events in ACCESS

    - by I__
    What is the proper way of doing the following:

        1. getting a DATE as user input
        2. running a query
        3. generating a report that uses the query

    This is the solution I was thinking of:

        1. have a form that takes the user input
        2. run the query
        3. open the report

    What is the correct way of doing this?
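    In Access, a saved query can read the date directly from the open form, so the flow becomes: open the form, let the user enter the date, then a button opens the report whose record source is that query. A sketch of the parameterised query (form, control, table and column names are invented for illustration):

        SELECT *
        FROM   Orders
        WHERE  OrderDate = [Forms]![frmReportCriteria]![txtDate];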

    Read the article

  • How to speed up a slow UPDATE query

    - by Mike Christensen
    I have the following UPDATE query:

        UPDATE Indexer.Pages SET LastError=NULL where LastError is not null;

    Right now, this query takes about 93 minutes to complete. I'd like to find ways to make this a bit faster. The Indexer.Pages table has around 506,000 rows, and about 490,000 of them contain a value for LastError, so I doubt I can take advantage of any indexes here. The table (when uncompressed) has about 46 gigs of data in it, however the majority of that data is in a text field called html. I believe simply loading and unloading that many pages is causing the slowdown. One idea would be to make a new table with just the Id and the html field, and keep Indexer.Pages as small as possible. However, testing this theory would be a decent amount of work since I actually don't have the hard disk space to create a copy of the table. I'd have to copy it over to another machine, drop the table, then copy the data back, which would probably take all evening. Ideas? I'm using Postgres 9.0.0.
    UPDATE: Here's the schema:

        CREATE TABLE indexer.pages
        (
            id uuid NOT NULL,
            url character varying(1024) NOT NULL,
            firstcrawled timestamp with time zone NOT NULL,
            lastcrawled timestamp with time zone NOT NULL,
            recipeid uuid,
            html text NOT NULL,
            lasterror character varying(1024),
            missingings smallint,
            CONSTRAINT pages_pkey PRIMARY KEY (id),
            CONSTRAINT indexer_pages_uniqueurl UNIQUE (url)
        );

    I also have two indexes:

        CREATE INDEX idx_indexer_pages_missingings
            ON indexer.pages
            USING btree (missingings)
            WHERE missingings > 0;

        CREATE INDEX idx_indexer_pages_null
            ON indexer.pages
            USING btree (recipeid)
            WHERE NULL::boolean;

    There are no triggers on this table, and there is one other table that has a FK constraint on Pages.PageId.
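    One common workaround for this kind of whole-table rewrite is to update in limited batches, so each transaction touches fewer of the very wide rows and dead tuples can be vacuumed between passes. A sketch (the batch size of 10,000 is arbitrary):

        UPDATE indexer.pages
        SET    lasterror = NULL
        WHERE  id IN (
            SELECT id
            FROM   indexer.pages
            WHERE  lasterror IS NOT NULL
            LIMIT  10000
        );
        -- repeat (and VACUUM periodically) until the statement reports 0 rows updated

    Every UPDATE in PostgreSQL writes a new version of the row, so splitting the large html column into its own table, as suggested above, is another avenue worth testing.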

    Read the article
