Search Results

Search found 40744 results on 1630 pages for 'sql interview questions a'.

  • Multiple user database design

    - by dieguitoweb
    I have to develop a basic social network for an academic project, and I need some advice on user management. Users fall into three groups with different privileges: admins, analysts and standard users. For every user, the database should store the following information: name, last name, e-mail, age, password. I'm not quite sure which of these two designs to choose:
    1) one table called 'users' with a 'role' attribute that says what a user can and cannot do, with permissions enforced in PHP
    2) every application user is a database user created with CREATE ROLE (it's a Postgres database), with permissions on some tables granted via GRANT statements
    You should take into account that the project is for a database exam. Thanks
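
    For what it's worth, a minimal sketch of option 1 (the column types and sizes are assumptions, not from the question):

      -- one users table; PHP checks the role column before each privileged action
      CREATE TABLE users (
          user_id   SERIAL PRIMARY KEY,
          name      VARCHAR(50)  NOT NULL,
          last_name VARCHAR(50)  NOT NULL,
          email     VARCHAR(255) NOT NULL UNIQUE,
          age       INTEGER,
          password  VARCHAR(255) NOT NULL,  -- store a hash, never plain text
          role      VARCHAR(10)  NOT NULL
                    CHECK (role IN ('admin', 'analyst', 'standard'))
      );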

  • change postgres date format

    - by Jay
    Is there a way to change the default format of a date in Postgres? Normally when I query a Postgres database, dates come out as yyyy-mm-dd hh:mm:ss+tz, like 2011-02-21 11:30:00-05. But in one particular program the dates come out as yyyy-mm-dd hh:mm:ss.s, that is, there is no time zone and it shows tenths of a second. Apparently something is changing the default date format, but I don't know what or where. I don't think it's a server-side configuration parameter, because I can access the same database with a different program and I get the format with the time zone. I care because it appears to be ignoring my "set timezone" calls in addition to changing the format; all times come out EST.
    Additional info: if I write "select somedate from sometable" I get the "no timezone" format. But if I write "select to_char(somedate::timestamptz, 'yyyy-mm-dd hh24:mi:ss-tz')" then time zones work as I would expect. This really sounds to me like something is implicitly turning every timestamp into "to_char(date::timestamp, 'yyyy-mm-dd hh24:mi:ss.m')". But I can't find anything in the documentation about how I would do this if I wanted to, nor can I find anything in the code that appears to do it. Though as I don't know what to look for, that doesn't prove much.
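
    For what it's worth, the output format and the session time zone are per-session settings that a client library can override, which would explain two programs seeing different formats from the same database. A sketch of how to inspect and override them:

      -- show the session's current settings
      SHOW DateStyle;
      SHOW TimeZone;

      -- either can be overridden per session by a driver or client library
      SET DateStyle TO 'ISO';
      SET TimeZone TO 'EST5EDT';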

  • Subquery with multiple results combined into a single field?

    - by Todd
    Assume I have these tables, from which I need to display search results in a browser:

      Table: Containers
      id | name
      1  | Big Box
      2  | Grocery Bag
      3  | Envelope
      4  | Zip Lock

      Table: Sale
      id | date     | containerid
      1  | 20100101 | 1
      2  | 20100102 | 2
      3  | 20091201 | 3
      4  | 20091115 | 4

      Table: Items
      id | name        | saleid
      1  | Barbie Doll | 1
      2  | Coin        | 3
      3  | Pop-Top     | 4
      4  | Barbie Doll | 2
      5  | Coin        | 4

    I need output that looks like this:

      itemid | itemname    | saleids | saledates         | containerids | containertypes
      1      | Barbie Doll | 1,2     | 20100101,20100102 | 1,2          | Big Box, Grocery Bag
      2      | Coin        | 3,4     | 20091201,20091115 | 3,4          | Envelope, Zip Lock
      3      | Pop-Top     | 4       | 20091115          | 4            | Zip Lock

    The important part is that each item type only gets one record/row in the output. I accomplished this in the past by returning multiple rows of the same item and using a scripting language to limit the output. However, this makes the UI overly complicated and loopy, so I'm hoping I can get the database to spit out only as many records as there are rows to display. This example may be a bit extreme because of the two joins needed to get from the item to the container (through the sale table). I'd be happy with just an example query that outputs this:

      itemid | itemname    | saleids | saledates
      1      | Barbie Doll | 1,2     | 20100101,20100102
      2      | Coin        | 3,4     | 20091201,20091115
      3      | Pop-Top     | 4       | 20091115

    I can only return a single result in a subquery, so I'm not sure how to do this.
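
    The usual tool for this is a string-aggregate function. A sketch assuming MySQL's GROUP_CONCAT (PostgreSQL spells it string_agg), grouping on the item name so each item type gets one row:

      SELECT MIN(i.id) AS itemid,
             i.name AS itemname,
             GROUP_CONCAT(s.id ORDER BY s.id) AS saleids,
             GROUP_CONCAT(s.date ORDER BY s.id) AS saledates,
             GROUP_CONCAT(c.id ORDER BY s.id) AS containerids,
             GROUP_CONCAT(c.name ORDER BY s.id SEPARATOR ', ') AS containertypes
      FROM Items i
      JOIN Sale s ON s.id = i.saleid
      JOIN Containers c ON c.id = s.containerid
      GROUP BY i.name;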

  • Can't delete record via the datacontext it was retrieved from

    - by Antilogic
    I just upgraded one of my application's methods to use compiled queries (not sure if this is relevant). Now I'm getting contradicting error messages when I run the code. This is my method:

      MyClass existing = Queries.MyStaticCompiledQuery(MyRequestScopedDataContext, param1, param2).SingleOrDefault();
      if (existing != null)
      {
          MyRequestScopedDataContext.MyClasses.DeleteOnSubmit(existing);
      }

    When I run it I get this message:

      Cannot remove an entity that has not been attached.

    Note that the compiled query and the DeleteOnSubmit reference the same DataContext. Still, I figured I'd humor the application and add an Attach command before the DeleteOnSubmit, like so:

      MyClass existing = Queries.MyStaticCompiledQuery(MyRequestScopedDataContext, param1, param2).SingleOrDefault();
      if (existing != null)
      {
          MyRequestScopedDataContext.MyClasses.Attach(existing);
          MyRequestScopedDataContext.MyClasses.DeleteOnSubmit(existing);
      }

    BUT... when I run this code, I get a completely different, contradictory error message:

      An attempt has been made to Attach or Add an entity that is not new, perhaps having been loaded from another DataContext. This is not supported.

    I'm at a complete loss... Does anyone else have some insight as to why I can't delete a record via the same DataContext I retrieved it from?

  • best database design for city zip & state tables

    - by ryan a
    My application will need to reference addresses. Street info will be stored with my main objects, but the rest needs to be stored separately to reduce redundancy. How should I store/retrieve ZIPs, cities and states? Here are some of my ideas.

    Single-table solution (can't do relationships):

      [locations]
      locationID
      locationParent (FK for locationID - 0 for state entries)
      locationName (city, state)
      locationZIP

    Two tables (with relationships, FK constraints, referential integrity):

      [state]
      stateID
      stateName

      [city]
      cityID
      stateID (FK for state.stateID)
      cityName
      zipCode

    Three tables:

      [state]
      stateID
      stateName

      [city]
      cityID
      stateID (FK for state.stateID)
      cityName

      [zip]
      zipID
      cityID (FK for city.cityID)
      zipName

    Then I read into ZIP codes and how they are assigned. They aren't specifically related to cities. Some cities have more than one ZIP (OK, that still works), but some ZIPs are in more than one city (oh snap), and some other ZIPs (very few) are in more than one state! Also, some ZIPs are not even in the same state as the addresses they belong to. It seems ZIPs are made for carrier-route identification, and some remote places are best served by post offices in neighboring cities or states. Does anybody know of a good (not perfect) solution that takes this into consideration to minimize discrepancies as the database grows?
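
    Since a ZIP can span cities (and occasionally states) while a city has many ZIPs, the relationship is many-to-many, which suggests a junction table between the city and zip tables from the three-table idea. A sketch, with names assumed for illustration:

      CREATE TABLE zip (
          zipID   INT PRIMARY KEY,
          zipCode CHAR(5) NOT NULL UNIQUE
      );

      -- junction table: a city can have many ZIPs and a ZIP can belong to many cities
      CREATE TABLE city_zip (
          cityID INT NOT NULL REFERENCES city (cityID),
          zipID  INT NOT NULL REFERENCES zip (zipID),
          PRIMARY KEY (cityID, zipID)
      );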

  • Is count(*) really expensive?

    - by Anil Namde
    I have a page with 4 tabs displaying 4 different reports based on different tables. I obtain the row count of each table using a select count(*) from <table> query and display the number of rows available in each table on the tabs. As a result, each page postback executes 5 count(*) queries (4 to get the counts and 1 for pagination) and 1 query for the report content. Now my question is: are count(*) queries really expensive -- should I keep the row counts (at least those that are displayed on the tabs) in the page's view state instead of querying multiple times? How expensive are COUNT(*) queries?
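
    As a sketch of one way to trim the round trips (the table names are placeholders for the four report tables): collapse the four counts into a single query, and cache the result between postbacks.

      SELECT (SELECT COUNT(*) FROM report1) AS report1_rows,
             (SELECT COUNT(*) FROM report2) AS report2_rows,
             (SELECT COUNT(*) FROM report3) AS report3_rows,
             (SELECT COUNT(*) FROM report4) AS report4_rows;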

  • Query to find duplicate items in 2 columns

    - by Rico
    I have this table:

      Antecedent | Consequent
      I1         | I2
      I1         | I1,I2,I3
      I1         | I4,I1,I3,I4
      I1,I2      | I1
      I1,I2      | I1,I4
      I1,I2      | I1,I3
      I1,I4      | I3,I2
      I1,I2,I3   | I1,I4
      I1,I3,I4   | I4

    As you can see it's pretty messed up. Is there any way I can remove rows where an item in the consequent exists in the antecedent (in the same row)? For example:

    INPUT:

      Antecedent | Consequent
      I1         | I2
      I1         | I1,I2,I3    <---- DELETE since I1 exists in antecedent
      I1         | I4,I1,I3,I4 <---- DELETE since I1 exists in antecedent
      I1,I2      | I1          <---- DELETE since I1 exists in antecedent
      I1,I2      | I1,I4       <---- DELETE since I1 exists in antecedent
      I1,I2      | I1,I3       <---- DELETE since I1 exists in antecedent
      I1,I4      | I3,I2
      I1,I2,I3   | I1,I4       <---- DELETE since I1 exists in antecedent
      I1,I3,I4   | I4          <---- DELETE since I4 exists in antecedent

    OUTPUT:

      Antecedent | Consequent
      I1         | I2
      I1,I4      | I3,I2

    Is there any way I can do that with a query?
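
    If arrays are available this collapses to one statement. A sketch assuming PostgreSQL, with the table name rules assumed for illustration:

      -- delete rows where the two comma-separated lists share any item;
      -- && is the array-overlap operator
      DELETE FROM rules
      WHERE string_to_array(Antecedent, ',') && string_to_array(Consequent, ',');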

  • How can I bind an Enum to a DbType of bit or int?

    - by uriDium
    Hi, I am using Linq2Sql and want to bind an object's field (which is an enum) to either a bit or an int type in the database. For example, I have a gender field in my model. I have already edited the DBML and changed the Type to point to my enum. I want to create radio buttons (which I think I have figured out) for gender, and dropdown lists for other areas using the same idea. My enum looks like this:

      public enum Gender { Male, Female }

    But I get this error:

      Mapping between DbType 'int' and Type 'Project.Models.Gender' in Column 'Gender' of Type 'Candidate' is not supported.

    Any ideas on how to do this mapping? Am I missing something on the enums?

  • Using a trigger to record audit information vs. stored procedure

    - by Germ
    Suppose you have the following: an ASP.NET web application that calls a stored procedure to delete a record. The table has a trigger on it that inserts an audit entry each time a record is deleted. I want to be able to record in the audit entry the username of whoever deleted the record. What would be the best way to go about achieving this? I know I could remove the trigger and have the delete stored procedure insert the audit entry prior to deleting, but are there any other recommended alternatives? If a username was passed as a parameter to the delete stored procedure, is there any way to get this value in the trigger that's executed when the record is deleted? I'm just throwing this out there...
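
    One way to get a parameter into the trigger, as a sketch assuming SQL Server 2016+ (the table, audit table and column names are made up for illustration): stash the username in the session context inside the procedure, and read it back in the trigger.

      CREATE PROCEDURE dbo.MyTable_Delete @Id int, @UserName sysname
      AS
      BEGIN
          -- make the caller's name visible to any trigger firing in this session
          EXEC sp_set_session_context @key = N'username', @value = @UserName;
          DELETE FROM dbo.MyTable WHERE Id = @Id;
      END;
      GO

      CREATE TRIGGER dbo.trg_MyTable_Delete ON dbo.MyTable AFTER DELETE
      AS
      BEGIN
          INSERT INTO dbo.AuditLog (RecordId, DeletedBy, DeletedAt)
          SELECT d.Id,
                 CAST(SESSION_CONTEXT(N'username') AS sysname),
                 SYSUTCDATETIME()
          FROM deleted d;
      END;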

  • Eclipselink and update trigger on multiple access to the database

    - by Raven
    Hi, in my project I have a database which many clients connect to. Concurrent access and writing work well. The problem is that I don't want to reload the data from the database every second just to always have its current state. Does Eclipselink provide a trigger mechanism to (automatically?) reload the data when the database changes? How would one use such a trigger? Thanks!

  • mysql reference result from subquery

    - by iamrohitbanga
    This is what I am doing:

      update t1 set x=a, y=b

    where a and b are obtained from a select query that I already know. The select query returns multiple rows whose values are all the same, but when I use GROUP BY or DISTINCT to collapse them, query execution slows down considerably. Also, a and b are forward references, so mysql reports an error. I want to set x to the value of a from the first row, and y to the value of b from the first row, to avoid the GROUP BY. I don't know how to refer to the first result row of the select query. How can I achieve all this?
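
    One pattern that sidesteps both the forward reference and the GROUP BY, as a sketch (source_table stands in for the real select query): materialize just the first row in a derived table, then join it into the update.

      -- s holds exactly one row, so every row of t1 gets the same a and b
      UPDATE t1
      JOIN (SELECT a, b FROM source_table LIMIT 1) AS s
      SET t1.x = s.a,
          t1.y = s.b;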

  • MySQL left outer join is slow

    - by Ryan Doherty
    Hi, hoping to get some help with this query; I've worked at it for a while now and can't get it any faster:

      SELECT date, count(id) as 'visits'
      FROM dates
      LEFT OUTER JOIN visits ON (dates.date = DATE(visits.start) and account_id = 40)
      WHERE date >= '2010-12-13' AND date <= '2011-1-13'
      GROUP BY date
      ORDER BY date ASC

    That query takes about 8 seconds to run. I've added indexes on dates.date, visits.start, visits.account_id and visits.start+visits.account_id and can't get it to run any faster. Table structure (only showing relevant columns in the visits table):

      create table visits (
        `id` int(11) NOT NULL AUTO_INCREMENT,
        `account_id` int(11) NOT NULL,
        `start` DATETIME NOT NULL,
        `end` DATETIME NULL,
        PRIMARY KEY (`id`)
      ) ENGINE=MyISAM DEFAULT CHARSET=utf8;

      CREATE TABLE `dates` (
        `date` date NOT NULL,
        PRIMARY KEY (`date`)
      ) ENGINE=MyISAM DEFAULT CHARSET=latin1;

    The dates table contains all days from 2010-1-1 to 2020-1-1 (~3k rows). The visits table contains about 400k rows dating from 2010-6-1 to yesterday. I'm using the dates table so the join will return 0 visits for days there were no visits. Results I want, for reference:

      +------------+--------+
      | date       | visits |
      +------------+--------+
      | 2010-12-13 |    301 |
      | 2010-12-14 |    356 |
      | 2010-12-15 |    423 |
      | 2010-12-16 |    332 |
      | 2010-12-17 |    346 |
      | 2010-12-18 |    226 |
      | 2010-12-19 |    213 |
      | 2010-12-20 |    311 |
      | 2010-12-21 |    273 |
      | 2010-12-22 |    286 |
      | 2010-12-23 |    241 |
      | 2010-12-24 |    149 |
      | 2010-12-25 |    102 |
      | 2010-12-26 |    174 |
      | 2010-12-27 |    258 |
      | 2010-12-28 |    348 |
      | 2010-12-29 |    392 |
      | 2010-12-30 |    395 |
      | 2010-12-31 |    278 |
      | 2011-01-01 |    241 |
      | 2011-01-02 |    295 |
      | 2011-01-03 |    369 |
      | 2011-01-04 |    438 |
      | 2011-01-05 |    393 |
      | 2011-01-06 |    368 |
      | 2011-01-07 |    435 |
      | 2011-01-08 |    313 |
      | 2011-01-09 |    250 |
      | 2011-01-10 |    345 |
      | 2011-01-11 |    387 |
      | 2011-01-12 |      0 |
      | 2011-01-13 |      0 |
      +------------+--------+

    Thanks in advance for any help!
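
    For what it's worth, wrapping visits.start in DATE() on the join condition keeps MySQL from using any index on that column. A sketch of a sargable rewrite against the same schema, replacing the date-equality with a half-open range plus a matching composite index:

      SELECT d.date, COUNT(v.id) AS visits
      FROM dates d
      LEFT OUTER JOIN visits v
             ON v.account_id = 40
            AND v.start >= d.date
            AND v.start <  d.date + INTERVAL 1 DAY
      WHERE d.date >= '2010-12-13' AND d.date <= '2011-01-13'
      GROUP BY d.date
      ORDER BY d.date ASC;

      -- pairs naturally with a composite index on the join's two columns
      ALTER TABLE visits ADD INDEX idx_account_start (account_id, start);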

  • How to SUM columns on multiple conditions in a GROUP BY

    - by David Liddle
    I am trying to return a list of accounts with their Balances, Outcome and Income.

      Account         Transaction
      -------         -----------
      AccountID       TransactionID
      BankName        AccountID
      Locale          Amount
      Status

    Here is what I currently have. Could someone explain where I am going wrong?

      select a.ACCOUNT_ID, a.BANK_NAME, a.LOCALE, a.STATUS,
             sum(t1.AMOUNT) as BALANCE,
             sum(t2.AMOUNT) as OUTCOME,
             sum(t3.AMOUNT) as INCOME
      from ACCOUNT a
      left join TRANSACTION t1 on t1.ACCOUNT_ID = a.ACCOUNT_ID
      left join TRANSACTION t2 on t1.ACCOUNT_ID = a.ACCOUNT_ID and t2.AMOUNT < 0
      left join TRANSACTION t3 on t3.ACCOUNT_ID = a.ACCOUNT_ID and t3.AMOUNT > 0
      group by a.ACCOUNT_ID, a.BANK_NAME, a.LOCALE, a.[STATUS]
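
    Note that the t2 join above references t1.ACCOUNT_ID, and that three independent joins multiply rows before the SUMs run, inflating every total. A sketch of the usual fix, conditional aggregation over a single join:

      SELECT a.ACCOUNT_ID, a.BANK_NAME, a.LOCALE, a.[STATUS],
             SUM(t.AMOUNT) AS BALANCE,
             SUM(CASE WHEN t.AMOUNT < 0 THEN t.AMOUNT ELSE 0 END) AS OUTCOME,
             SUM(CASE WHEN t.AMOUNT > 0 THEN t.AMOUNT ELSE 0 END) AS INCOME
      FROM ACCOUNT a
      LEFT JOIN [TRANSACTION] t ON t.ACCOUNT_ID = a.ACCOUNT_ID  -- brackets: TRANSACTION is a T-SQL keyword
      GROUP BY a.ACCOUNT_ID, a.BANK_NAME, a.LOCALE, a.[STATUS];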

  • Access: Expression too complex to be evaluated

    - by user2502964
    I'm trying to sort out values from a database by the week-ending date. The script I'm using works on 6 of my 7 databases (they are all constructed identically). The 7th database doesn't work; I get the "expression too complex" error. Any help figuring out why? Here is my code:

      SELECT UPC_Test.Type, UPC_Test.[Model No], UPC_Test.[Model Desc], UPC_Test.[Serial No],
             Format(DateValue([UPC_Test].[Test Date]+7-Weekday([UPC_Test].[Test Date],0)),"m/d/yyyy") AS [Test Date],
             UPC_Test.Parameter, UPC_Test.[Failure Symptom], UPC_Test.[Repair Action],
             UPC_Test.[Factory Select], UPC_Test.[Test Station]
      FROM UPC_Test
      GROUP BY UPC_Test.Type, UPC_Test.[Model No], UPC_Test.[Model Desc], UPC_Test.[Serial No],
             Format(DateValue([UPC_Test].[Test Date]+7-Weekday([UPC_Test].[Test Date],0)),"m/d/yyyy"),
             UPC_Test.Parameter, UPC_Test.[Failure Symptom], UPC_Test.[Repair Action],
             UPC_Test.[Factory Select], UPC_Test.[Test Station]
      HAVING (((UPC_Test.Type)="Production")
          AND ((Format(DateValue([UPC_Test].[Test Date]+7-Weekday([UPC_Test].[Test Date],0)),"m/d/yyyy"))=[Enter])
          AND ((UPC_Test.[Failure Symptom])<>"")
          AND ((UPC_Test.[Repair Action])<>"")
          AND ((UPC_Test.[Test Station])="UPC RF Test"))
      ORDER BY Format(DateValue([UPC_Test].[Test Date]+7-Weekday([UPC_Test].[Test Date],0)),"m/d/yyyy");
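
    One thing worth trying, as a sketch: compute the week-ending expression once in a derived table so the outer query compares a plain column, with SELECT DISTINCT standing in for the GROUP BY (which was only deduplicating rows):

      SELECT DISTINCT q.Type, q.[Model No], q.[Model Desc], q.[Serial No], q.WeekEnding,
             q.Parameter, q.[Failure Symptom], q.[Repair Action], q.[Factory Select], q.[Test Station]
      FROM (
          SELECT UPC_Test.*,
                 Format(DateValue([Test Date]+7-Weekday([Test Date],0)),"m/d/yyyy") AS WeekEnding
          FROM UPC_Test
      ) AS q
      WHERE q.Type="Production"
        AND q.WeekEnding=[Enter]
        AND q.[Failure Symptom]<>""
        AND q.[Repair Action]<>""
        AND q.[Test Station]="UPC RF Test"
      ORDER BY q.WeekEnding;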

  • Change update value of property (LINQTOSQL)

    - by Dynde
    Hi... I've got an entity object, Customer, with a property called VATRate. This VATRate is in the form of a decimal (0.25). I wanted to be able to enter a percentage value and save it as the correct decimal value in the database, so I made this custom property:

      partial class Customer
      {
          public decimal VatPercent
          {
              get { ... }  // Get code works fine
              set { this.VATRate = (value / 100); }
          }
      }

    And then I just bind this property instead of VATRate in my ASPX edit template (FormView). This actually works -- at least one time. When I debug an update, the value is set correctly once, and then right after it gets set to the old value. I can't really see why it sets the value twice (and with the old value the second time). Can anyone shed some light on this?

  • How to store data with N columns

    - by iconiK
    I need a way to store an int for N columns. Basically what I have is this:

      Armies:
        ArmyID     - UINT
        UnitCount1 - UINT
        UnitCount2 - UINT
        UnitCount3 - UINT
        UnitCount4 - UINT
        ...

    I can't possibly add a column for each and every unit, so I need a fast way to store the number of each unit in an army (you might have guessed it's for a game by now). Using XML is not an option, as it would be dead slow.
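
    The usual normalization for "N copies of the same kind of column" is one row per army/unit pair. A sketch, with table and column names assumed:

      CREATE TABLE ArmyUnits (
          ArmyID    INT UNSIGNED NOT NULL,
          UnitID    INT UNSIGNED NOT NULL,   -- which unit type
          UnitCount INT UNSIGNED NOT NULL,
          PRIMARY KEY (ArmyID, UnitID)
      );

      -- one army's composition is then just a handful of rows:
      -- (1, 1, 500), (1, 2, 120), (1, 7, 3), ...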

  • count(*) vs count(column-name) - which is more correct?

    - by bread
    Does it make a difference if you do count(*) vs count(column-name), as in these two examples? I have a tendency to always write count(*) because it seems to fit better in my mind with the notion of it being an aggregate function, if that makes sense. But I'm not sure if it's technically best, as I tend to see example code written without the * more often than not.

    count(*):

      select customerid, count(*), sum(price)
      from items_ordered
      group by customerid
      having count(*) > 1;

    vs. count(column-name):

      SELECT customerid, count(customerid), sum(price)
      FROM items_ordered
      GROUP BY customerid
      HAVING count(customerid) > 1;
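
    The two forms only diverge when the counted column can be NULL: COUNT(*) counts rows, while COUNT(customerid) skips rows where customerid is NULL. A sketch that makes the difference visible:

      -- if any customerid were NULL, these two counts would disagree
      SELECT COUNT(*) AS all_rows,
             COUNT(customerid) AS rows_with_customerid
      FROM items_ordered;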

  • Classic ASP shopping cart discounts

    - by Neil Bradley
    Hi there, I have a Classic ASP shopping cart, but I need to add an option to enter a promo code and (if valid) apply a discount to the total. There is a promo-codes table that stores the promotional codes and the value of the discount to apply. So I was wondering if someone might be able to help me integrate this? Happy to pay for the time. I think it may only take an hour or so at most. :S

  • How to speed up a slow UPDATE query

    - by Mike Christensen
    I have the following UPDATE query:

      UPDATE Indexer.Pages SET LastError=NULL where LastError is not null;

    Right now, this query takes about 93 minutes to complete. I'd like to find ways to make this a bit faster. The Indexer.Pages table has around 506,000 rows, and about 490,000 of them contain a value for LastError, so I doubt I can take advantage of any indexes here.

    The table (when uncompressed) has about 46 gigs of data in it; however, the majority of that data is in a text field called html. I believe simply loading and unloading that many pages is causing the slowdown. One idea would be to make a new table with just the Id and the html field, and keep Indexer.Pages as small as possible. However, testing this theory would be a decent amount of work, since I actually don't have the hard disk space to create a copy of the table. I'd have to copy it over to another machine, drop the table, then copy the data back, which would probably take all evening. Ideas? I'm using Postgres 9.0.0.

    UPDATE: Here's the schema:

      CREATE TABLE indexer.pages
      (
        id uuid NOT NULL,
        url character varying(1024) NOT NULL,
        firstcrawled timestamp with time zone NOT NULL,
        lastcrawled timestamp with time zone NOT NULL,
        recipeid uuid,
        html text NOT NULL,
        lasterror character varying(1024),
        missingings smallint,
        CONSTRAINT pages_pkey PRIMARY KEY (id),
        CONSTRAINT indexer_pages_uniqueurl UNIQUE (url)
      );

    I also have two indexes:

      CREATE INDEX idx_indexer_pages_missingings
        ON indexer.pages
        USING btree (missingings)
        WHERE missingings > 0;

    and

      CREATE INDEX idx_indexer_pages_null
        ON indexer.pages
        USING btree (recipeid)
        WHERE NULL::boolean;

    There are no triggers on this table, and there is one other table that has a FK constraint on Pages.PageId.
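
    One technique worth trying, as a sketch: split the work into bounded batches so that each transaction rewrites a limited number of rows (10000 is an arbitrary batch size), then repeat until nothing is left.

      -- run repeatedly until it reports 0 rows updated
      UPDATE indexer.pages
      SET lasterror = NULL
      WHERE id IN (
          SELECT id FROM indexer.pages
          WHERE lasterror IS NOT NULL
          LIMIT 10000
      );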

  • Indexing affects only the WHERE clause?

    - by andre matos
    If I have something like:

      CREATE INDEX idx_myTable_field_x ON myTable USING btree (field_x);

      SELECT COUNT(field_x), field_x
      FROM myTable
      GROUP BY field_x
      ORDER BY field_x;

    Imagine myTable with around 500,000 rows and most field_x values being unique. Since I don't use any WHERE clause, will the created index have any effect at all on my query?

    Edit: I'm asking this question because I don't get any relevant difference between query times before and after creating the index; they always take about 8 seconds (which, of course, is too much time!). Is this behaviour expected?
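
    A quick way to see whether the planner touches the index at all, as a sketch (PostgreSQL, given the USING btree syntax above):

      -- the plan will show either an index scan over idx_myTable_field_x
      -- or a sequential scan followed by a sort/aggregate
      EXPLAIN ANALYZE
      SELECT COUNT(field_x), field_x
      FROM myTable
      GROUP BY field_x
      ORDER BY field_x;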

  • How to avoid geometric slowdown with large Linq transactions?

    - by Shaul
    I've written some really nice, funky libraries for use in LinqToSql. (Some day when I have time to think about it I might make it open source... :) ) Anyway, I'm not sure if this is related to my libraries or not, but I've discovered that when I have a large number of changed objects in one transaction, and then call DataContext.GetChangeSet(), things start getting reaalllly slooowwwww. When I break into the code, I find that my program is spinning its wheels doing an awful lot of Equals() comparisons between the objects in the change set.

    I can't guarantee this is true, but I suspect that if there are n objects in the change set, then the call to GetChangeSet() is causing every object to be compared to every other object for equivalence, i.e. at best (n^2-n)/2 calls to Equals()...

    Yes, of course I could commit each object separately, but that kinda defeats the purpose of transactions. And in the program I'm writing, I could have a batch job containing 100,000 separate items that all need to be committed together. Around 5 billion comparisons there.

    So the question is: (1) is my assessment of the situation correct? Do you get this behavior in pure, textbook LinqToSql, or is this something my libraries are doing? And (2) is there a standard/reasonable workaround so that I can create my batch without making the program geometrically slower with every extra object in the change set?
