Search Results

Search found 27368 results on 1095 pages for 'msaccess to sql'.


  • Replace duplicate spaces with single space in TSQL

    - by Chris
    I need to ensure that a given field does not have more than one space (not concerned about all whitespace, just the space character) between words. So 'single    spaces   only' needs to turn into 'single spaces only'. The following will not work: select replace('single    spaces   only', '  ', ' ') as it would result in 'single  spaces  only', still leaving doubled spaces wherever a run of three or more spaces appeared. I would really prefer to stick with native T-SQL rather than a CLR-based solution. Thoughts?
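
    A minimal sketch of a commonly used pure-T-SQL approach, assuming the two-character marker below never occurs in the data: three nested REPLACE calls collapse any run of spaces down to one.

        -- Sketch only: collapse runs of spaces to a single space.
        -- Assumes the character pair '<>' does not appear in the column.
        SELECT REPLACE(REPLACE(REPLACE(col, ' ', '<>'), '><', ''), '<>', ' ')
        FROM dbo.MyTable;   -- col and dbo.MyTable are placeholders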

    Read the article

  • same table, 1 field to 2 field query

    - by edib
    I have 2 tables: the 1st holds employees (in any position) and the 2nd holds manager-employee relations by id numbers. I want to write a query where the 1st field is name (employee) and the 2nd field is name (manager). How can I do that?
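
    A minimal sketch, with hypothetical table and column names (employees(id, name) and manager_relations(employee_id, manager_id)): join the employees table twice, once for each role.

        -- Sketch only: the employees table appears twice under different aliases.
        SELECT e.name AS employee_name,
               m.name AS manager_name
        FROM manager_relations r
        JOIN employees e ON e.id = r.employee_id   -- employee side
        JOIN employees m ON m.id = r.manager_id;   -- manager side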

    Read the article

  • Alternative to sql NOT IN?

    - by Alex
    Hi, I am trying to make a materialized view in Oracle (I am a newbie, btw). For some reason, it doesn't like the presence of a sub-query in it. I've been trying to use a LEFT OUTER JOIN instead, but it's returning a different data set now. Put simply, here's the code I'm trying to modify:

        SELECT *
        FROM table1 ros, table2 bal, table3 flx
        WHERE flx.name = 'XXX'
        AND flx.value = bal.value
        AND NVL(ros.ret, 'D') = NVL(flx.attr16, 'D')
        AND ros.value = bal.segment3
        AND ros.type IN ('AL', 'AS', 'PL')
        AND bal.period = 13
        AND bal.code NOT IN (SELECT bal1.code
                             FROM table2 bal1
                             WHERE bal1.value = flx.value
                             AND bal1.segment3 = ros.value
                             AND bal1.flag = bal.flag
                             AND bal1.period = 12
                             AND bal1.year = bal.year)

    And here's one of my attempts:

        SELECT *
        FROM table1 ros, table2 bal, table3 flx
        LEFT OUTER JOIN table2 bal1 ON bal.code = bal1.code
        WHERE bal1.code IS NULL
        AND bal1.segment3 = ros.value
        AND bal.segment3 = ros.value
        AND bal1.flag = bal.flag
        AND bal1.year = bal.year
        AND flx.name = 'XXX'
        AND flx.value = bal.value
        AND bal1.value = flx.value
        AND bal1.period_num = 12
        AND NVL(ros.type, 'D') = NVL(flx.attr16, 'D')
        AND ros.value = bal.segment3
        AND ros.type IN ('AL', 'AS', 'PL')
        AND bal.period = 13;

    This drives me nuts! Thanks in advance for the help :)
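
    A hedged sketch of the usual anti-join rewrite (not verified against the poster's schema): all of the correlated conditions from the NOT IN sub-query have to move into the ON clause of the outer join; leaving them in the WHERE clause turns the outer join back into an inner join, which is one reason the attempt above returns a different data set. One caveat: NOT IN and an anti-join behave differently when the sub-query can return NULL codes.

        -- Sketch only: anti-join replacement for the NOT IN sub-query.
        SELECT ros.*, bal.*, flx.*               -- bal1 columns omitted; they are NULL by construction
        FROM table1 ros
        JOIN table2 bal ON ros.value = bal.segment3
        JOIN table3 flx ON flx.value = bal.value
        LEFT OUTER JOIN table2 bal1
               ON  bal1.code     = bal.code      -- the NOT IN column
               AND bal1.value    = flx.value     -- correlated conditions moved
               AND bal1.segment3 = ros.value     -- from the sub-query's WHERE
               AND bal1.flag     = bal.flag      -- into the ON clause
               AND bal1.period   = 12
               AND bal1.year     = bal.year
        WHERE flx.name = 'XXX'
          AND NVL(ros.ret, 'D') = NVL(flx.attr16, 'D')
          AND ros.type IN ('AL', 'AS', 'PL')
          AND bal.period = 13
          AND bal1.code IS NULL;                 -- keep only rows with no match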

    Read the article

  • What is the Null Character literal in TSQL?

    - by David in Dakota
    I am wondering what the literal for a null character (e.g. '\0') is in T-SQL. Note: not a NULL field value, but the null character (see link). I have a column with a mix of typical characters and a null character. I'm trying to replace the null character with a different value. I would have thought that the following would work, but it is unsuccessful:

        select REPLACE(field_with_nullchar, char(0), ',') from FOO where BAR = 20
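
    A hedged sketch of a workaround, assuming the problem is the collation rather than the literal itself (CHAR(0) is the null-character literal, but REPLACE is often reported to ignore it under Windows collations): force a binary collation for the comparison.

        -- Sketch only: compare under a binary collation so CHAR(0) is matched.
        SELECT REPLACE(field_with_nullchar COLLATE Latin1_General_BIN, CHAR(0), ',')
        FROM FOO
        WHERE BAR = 20;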

    Read the article

  • Union on two tables with a where clause in the one

    - by Lostdrifter
    Currently I have 2 tables; both have the same structure and are going to be used in a web application. The two tables are production and temp. The temp table contains one additional column called [signed up]. Currently I generate a single list using two columns that are found in each table (recno and name). Using these two fields I'm able to support my web application's search function. Now what I need to do is limit which items from the second table can be used in the search. The reason for this is because once a person is "signed up", a similar record is created in the production table and will have its own recno. Doing:

        Select recno, name from production
        UNION ALL
        Select recno, name from temp

    ...will show me everyone. I have tried:

        Select recno, name from production
        UNION ALL
        Select recno, name from temp WHERE signup <> 'Y'

    But this returns nothing? Can anyone help?
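
    A hedged guess at the cause, assuming rows that are not yet signed up hold NULL in that column: signup <> 'Y' evaluates to UNKNOWN for NULL values, so those rows get filtered out. A sketch of the NULL-safe form:

        -- Sketch only: treat NULL as "not signed up".
        SELECT recno, name FROM production
        UNION ALL
        SELECT recno, name FROM temp
        WHERE signup <> 'Y' OR signup IS NULL;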

    Read the article

  • MySQL slow query

    - by andrhamm
        SELECT items.item_id, items.category_id, items.title, items.description,
               items.quality, items.type, items.status, items.price, items.posted,
               items.modified, zip_code.state_prefix, zip_code.city, books.isbn13,
               books.isbn10, books.authors, books.publisher
        FROM ( ( items
        LEFT JOIN bookitems ON items.item_id = bookitems.item_id )
        LEFT JOIN books ON books.isbn13 = bookitems.isbn13 )
        LEFT JOIN zip_code ON zip_code.zip_code = items.item_zip
        WHERE items.rid = $rid

    I am running this query to get the list of a user's items and their location. The zip_code table has over 40k records and this might be the issue. It currently takes up to 15 seconds to return a list of about 20 items! What can I do to make this query more efficient?
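
    A hedged first step (assuming these indexes don't already exist; the index names below are made up): run EXPLAIN on the query to see which joins scan whole tables, then index the filter column and each joined column.

        -- Sketch only: likely-helpful indexes for the filter and joins above.
        CREATE INDEX idx_items_rid      ON items (rid);
        CREATE INDEX idx_bookitems_item ON bookitems (item_id);
        CREATE INDEX idx_books_isbn13   ON books (isbn13);
        CREATE INDEX idx_zip_code_zip   ON zip_code (zip_code);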

    Read the article

  • many-to-many query

    - by kofto4ka
    Hello, guys! I have a problem and I don't know what the better solution is. I have 2 tables: posts(id, title) and posts_tags(post_id, tag_id). The task: select posts that have, for example, the tag ids 4, 10 and 11 - not exclusively, a post could have any other tags at the same time. So, how could I do this in an optimized way? By creating a temporary table in each query? Or maybe some kind of stored procedure? In the future, the user could ask the script to select posts with any number of tags (it could be 1 tag only or 10 at the same time), and I must be sure that the method I choose will be the best one for this problem. Sorry for my English, thanks for your attention.
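
    A minimal sketch of the usual relational-division approach (the tag ids and their count are hard-coded here for illustration): keep only the wanted tags, group by post, and require one match per requested tag.

        -- Sketch only: posts carrying ALL of the tags 4, 10 and 11
        -- (additional tags on the same post are allowed).
        SELECT p.id, p.title
        FROM posts p
        JOIN posts_tags pt ON pt.post_id = p.id
        WHERE pt.tag_id IN (4, 10, 11)
        GROUP BY p.id, p.title
        HAVING COUNT(DISTINCT pt.tag_id) = 3;   -- 3 = number of requested tags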

    Read the article

  • How to design a generic database whose layout may change over time?

    - by mawg
    Here's a tricky one - how do I programmatically create and interrogate a database whose contents I can't really foresee? I am implementing a generic input form system. The user can create PHP forms with a WYSIWYG layout and use them for any purpose he wishes. He can also query the input. So, we have three stages:

    - a form is designed and generated. This is a one-off procedure, although the form can be edited later. This designs the database.
    - someone or several people make use of the form - say for daily sales reports, stock keeping, payroll, etc. Their input to the forms is written to the database.
    - others, maybe management, can query the database and generate reports.

    Since these forms are generic, I can't predict the database structure - other than to say that it will reflect HTML form fields and consist of the data input from a collection of edit boxes, memos, radio buttons and the like. Questions and remarks:

    A) How can I best structure the database, in terms of tables and columns? What about primary keys? My first thought was to use the control name to identify each column, then I realized that the user can edit the form and rename controls, so that maybe "name" becomes "employee" or "wages" becomes "salary". I am leaning towards a unique number for each.
    B) How best to key the rows? I was thinking of a timestamp to allow me to query, and a column for the row id from A).
    C) I have to handle column rename/insert/delete. For deletion, I am unsure whether to delete the data from the database. Even if the user is not inputting it from the form any more, he may wish to query what was previously entered. Or there may be some legal requirements to retain the data. Any gotchas in column rename/insert/delete?
    D) For the querying, I can have my PHP interrogate the database to get column names and generate a form with a list where each entry has a database column name, a checkbox to say if it should be used in the query and, based on column type, some selection criteria. That ought to be enough to build searches like "position = 'senior salesman' and salary > 50k".
    E) I probably have to generate some fancy charts - graphs, histograms, pie charts, etc. for query results of numerical data over time. I need to find some good FOSS PHP for this.
    F) What else have I forgotten? This all seems very tricky to me, but I am a database n00b - maybe it is simple to you gurus?
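
    One common answer to this kind of "unknown columns" problem is an entity-attribute-value (EAV) layout, so that editing a form never requires an ALTER TABLE. A rough sketch under that assumption, with hypothetical table and column names:

        -- Sketch only: one row per field definition, one row per submitted value.
        CREATE TABLE form_fields (
            field_id   INT PRIMARY KEY,            -- stable key, survives renames (A)
            form_id    INT NOT NULL,
            label      VARCHAR(100) NOT NULL,      -- display name, free to change
            field_type VARCHAR(20)  NOT NULL,      -- 'text', 'memo', 'radio', ...
            is_deleted TINYINT NOT NULL DEFAULT 0  -- soft delete keeps old data (C)
        );

        CREATE TABLE form_values (
            submission_id INT NOT NULL,            -- one filled-in form (B)
            field_id      INT NOT NULL,
            value_text    TEXT,
            entered_at    TIMESTAMP NOT NULL,
            PRIMARY KEY (submission_id, field_id),
            FOREIGN KEY (field_id) REFERENCES form_fields (field_id)
        );

    The report builder in D) would then read form_fields for its column list and filter form_values on field_id.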

    Read the article

  • How to create custom get and set methods for Linq2SQL object

    - by optician
    I have some objects which have been created automatically by Linq2SQL. I would like to write some code which should be run whenever the properties on these objects are read or changed. Can I use typical get { //code } and set { //code } in my partial class file to add this functionality? Currently I get an error about this member already being defined, which all makes sense. Is it correct that I will have to create a method to act as the entry point for this functionality, since I cannot redefine the get and set methods for this property? I was hoping to just update the get and set, as this would mean I wouldn't have to change all the reference points in my app. But I think I may just have to update it everywhere.

    Read the article

  • ASP.NET GridView Sort?

    - by Curtis White
    I'm performing a query with a sort in the Selecting event of the LinqDataSource. I'm then casting my query to a list and assigning it to the result. I'm using this data source in an ASP.NET GridView. I can see that the list is sorted, but the GridView does not seem to be respecting the sort order. How can I get the GridView to respect my default sort order?

    Read the article

  • What is the proper way to create a recursive entity in the Entity Framework?

    - by Orion Adrian
    I'm currently using VS 2010 RC, and I'm trying to create a model that contains a recursive self-referencing entity. Currently when I import the entity from the model I get an error indicating that the parent property cannot be part of the association because it's set to 'Computed' or 'Identity', though I'm not sure why it does it that way. I've been hand-editing the file to get around that error, but then the model simply doesn't work. What is the proper way to get recursive entities to work in the Entity Framework?

        CREATE TABLE [dbo].[Appointments](
            [AppointmentId] [int] IDENTITY(1,1) NOT NULL,
            [Description] [nvarchar](1024) NULL,
            [Start] [datetime] NOT NULL,
            [End] [datetime] NOT NULL,
            [Username] [varchar](50) NOT NULL,
            [RecurrenceRule] [nvarchar](1024) NULL,
            [RecurrenceState] [varchar](20) NULL,
            [RecurrenceParentId] [int] NULL,
            [Annotations] [nvarchar](50) NULL,
            [Application] [nvarchar](100) NOT NULL,
            CONSTRAINT [PK_Appointments] PRIMARY KEY CLUSTERED ( [AppointmentId] ASC )
        )
        GO

        ALTER TABLE [dbo].[Appointments] WITH CHECK
            ADD CONSTRAINT [FK_Appointments_ParentAppointments]
            FOREIGN KEY([RecurrenceParentId])
            REFERENCES [dbo].[Appointments] ([AppointmentId])
        GO

        ALTER TABLE [dbo].[Appointments] CHECK CONSTRAINT [FK_Appointments_ParentAppointments]
        GO

    Read the article

  • Strange: planner takes the decision with lower cost, but the query has a (very) long runtime

    - by S38
    Facts:
    - PGSQL 8.4.2, Linux
    - I make use of table inheritance
    - Each table contains 3 million rows
    - Indexes on joining columns are set
    - Table statistics (analyze, vacuum analyze) are up-to-date
    - Only used table is "node" with various partitioned sub-tables
    - Recursive query (pg >= 8.4)

    Now here is the explained query:

        WITH RECURSIVE rows AS (
            SELECT * FROM (
                SELECT r.id, r.set, r.parent, r.masterid
                FROM d_storage.node_dataset r
                WHERE masterid = 3533933
            ) q
            UNION ALL
            SELECT * FROM (
                SELECT c.id, c.set, c.parent, r.masterid
                FROM rows r
                JOIN a_storage.node c ON c.parent = r.id
            ) q
        )
        SELECT r.masterid, r.id AS nodeid FROM rows r

        QUERY PLAN
        -----------------------------------------------------------------------------------------------------------
        CTE Scan on rows r  (cost=2742105.92..2862119.94 rows=6000701 width=16) (actual time=0.033..172111.204 rows=4 loops=1)
          CTE rows
            ->  Recursive Union  (cost=0.00..2742105.92 rows=6000701 width=28) (actual time=0.029..172111.183 rows=4 loops=1)
                  ->  Index Scan using node_dataset_masterid on node_dataset r  (cost=0.00..8.60 rows=1 width=28) (actual time=0.025..0.027 rows=1 loops=1)
                        Index Cond: (masterid = 3533933)
                  ->  Hash Join  (cost=0.33..262208.33 rows=600070 width=28) (actual time=40628.371..57370.361 rows=1 loops=3)
                        Hash Cond: (c.parent = r.id)
                        ->  Append  (cost=0.00..211202.04 rows=12001404 width=20) (actual time=0.011..46365.669 rows=12000004 loops=3)
                              ->  Seq Scan on node c  (cost=0.00..24.00 rows=1400 width=20) (actual time=0.002..0.002 rows=0 loops=3)
                              ->  Seq Scan on node_dataset c  (cost=0.00..55001.01 rows=3000001 width=20) (actual time=0.007..3426.593 rows=3000001 loops=3)
                              ->  Seq Scan on node_stammdaten c  (cost=0.00..52059.01 rows=3000001 width=20) (actual time=0.008..9049.189 rows=3000001 loops=3)
                              ->  Seq Scan on node_stammdaten_adresse c  (cost=0.00..52059.01 rows=3000001 width=20) (actual time=3.455..8381.725 rows=3000001 loops=3)
                              ->  Seq Scan on node_testdaten c  (cost=0.00..52059.01 rows=3000001 width=20) (actual time=1.810..5259.178 rows=3000001 loops=3)
                        ->  Hash  (cost=0.20..0.20 rows=10 width=16) (actual time=0.010..0.010 rows=1 loops=3)
                              ->  WorkTable Scan on rows r  (cost=0.00..0.20 rows=10 width=16) (actual time=0.002..0.004 rows=1 loops=3)
        Total runtime: 172111.371 ms
        (16 rows)

    So far so bad: the planner decides to choose hash joins (good) but no indexes (bad). Now after doing the following:

        SET enable_hashjoins TO false;

    the explained query looks like this:

        QUERY PLAN
        -----------------------------------------------------------------------------------------------------------
        CTE Scan on rows r  (cost=15198247.00..15318261.02 rows=6000701 width=16) (actual time=0.038..49.221 rows=4 loops=1)
          CTE rows
            ->  Recursive Union  (cost=0.00..15198247.00 rows=6000701 width=28) (actual time=0.032..49.201 rows=4 loops=1)
                  ->  Index Scan using node_dataset_masterid on node_dataset r  (cost=0.00..8.60 rows=1 width=28) (actual time=0.028..0.031 rows=1 loops=1)
                        Index Cond: (masterid = 3533933)
                  ->  Nested Loop  (cost=0.00..1507822.44 rows=600070 width=28) (actual time=10.384..16.382 rows=1 loops=3)
                        Join Filter: (r.id = c.parent)
                        ->  WorkTable Scan on rows r  (cost=0.00..0.20 rows=10 width=16) (actual time=0.001..0.003 rows=1 loops=3)
                        ->  Append  (cost=0.00..113264.67 rows=3001404 width=20) (actual time=8.546..12.268 rows=1 loops=4)
                              ->  Seq Scan on node c  (cost=0.00..24.00 rows=1400 width=20) (actual time=0.001..0.001 rows=0 loops=4)
                              ->  Bitmap Heap Scan on node_dataset c  (cost=58213.87..113214.88 rows=3000001 width=20) (actual time=1.906..1.906 rows=0 loops=4)
                                    Recheck Cond: (c.parent = r.id)
                                    ->  Bitmap Index Scan on node_dataset_parent  (cost=0.00..57463.87 rows=3000001 width=0) (actual time=1.903..1.903 rows=0 loops=4)
                                          Index Cond: (c.parent = r.id)
                              ->  Index Scan using node_stammdaten_parent on node_stammdaten c  (cost=0.00..8.60 rows=1 width=20) (actual time=3.272..3.273 rows=0 loops=4)
                                    Index Cond: (c.parent = r.id)
                              ->  Index Scan using node_stammdaten_adresse_parent on node_stammdaten_adresse c  (cost=0.00..8.60 rows=1 width=20) (actual time=4.333..4.333 rows=0 loops=4)
                                    Index Cond: (c.parent = r.id)
                              ->  Index Scan using node_testdaten_parent on node_testdaten c  (cost=0.00..8.60 rows=1 width=20) (actual time=2.745..2.746 rows=0 loops=4)
                                    Index Cond: (c.parent = r.id)
        Total runtime: 49.349 ms
        (21 rows)

    - incredibly faster, because indexes were used. Notice: the cost of the second query is somewhat higher than for the first query. So the main question is: why does the planner make the first decision instead of the second? Also interesting: via SET enable_seqscan TO false; I temporarily disabled seq scans. Then the planner used indexes and hash joins, and the query was still slow. So the problem seems to be the hash join. Maybe someone can help in this confusing situation? thx, R.

    Read the article

  • Select from multiple tables, remove duplicates

    - by staze
    I have two tables in a SQLite DB, and both have the following fields: idnumber, firstname, middlename, lastname, email, login. One table has all of these populated; the other doesn't have the idnumber or middlename populated. I'd LIKE to be able to do something like:

        select idnumber, firstname, middlename, lastname, email, login from users1, users2 group by login;

    But I get an "ambiguous" error. Doing something like:

        select idnumber, firstname, middlename, lastname, email, login from users1
        union
        select idnumber, firstname, middlename, lastname, email, login from users2;

    LOOKS like it works, but I see duplicates. My understanding is that union shouldn't allow duplicates, but maybe they're not real duplicates since the second user table doesn't have all the fields populated (e.g. "20, bob, alan, smith, [email protected], bob" is not the same as "NULL, bob, NULL, smith, [email protected], bob"). Any ideas? What am I missing? All I want to do is dedupe based on "login". Thanks!
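
    A hedged sketch of one way to dedupe on login (assuming the fully populated table should win): take everything from users1, then only those users2 rows whose login does not already appear in users1.

        -- Sketch only: users2 rows are added only when the login is new.
        SELECT idnumber, firstname, middlename, lastname, email, login FROM users1
        UNION ALL
        SELECT idnumber, firstname, middlename, lastname, email, login FROM users2
        WHERE login NOT IN (SELECT login FROM users1);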

    Read the article

  • Insert row number repeatedly in records in T-SQL

    - by jeff
    Hi, I want to insert a row number into records, cycling the count within a fixed-size group (here, groups of three). Example output:

        RowNumber  ID  Name
        1          20  a
        2          21  b
        3          22  c
        1          23  d
        2          24  e
        3          25  f
        1          26  g
        2          27  h
        3          28  i
        1          29  j
        2          30  k

    I would rather use ROW_NUMBER() OVER (PARTITION BY ... ORDER BY column name), but my real records do not contain a column I could partition on to restart the count at 1-3. I already tried looping over each record to insert a row count of 1-3, but the loop hurts the performance of the query. The query will be used for an RDL report, which is why the performance of the query must be as good as possible. Any suggestions are welcome. Thanks
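
    A minimal sketch of a set-based approach (assuming ID gives the desired order; table name is a placeholder): number all rows once, then wrap the number into the 1-3 cycle with modulo arithmetic.

        -- Sketch only: cycle a 1-3 counter over the rows without any loop.
        SELECT (ROW_NUMBER() OVER (ORDER BY ID) - 1) % 3 + 1 AS RowNumber,
               ID,
               Name
        FROM MyTable;   -- MyTable is a placeholder for the real table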

    Read the article

  • Calling sp and Performance strategy.

    - by Costa
    Hi, I find myself in a situation where I have to choose between two approaches. The first is to create a new stored procedure in the database and write the middle-layer code for it, losing some precious development time; the procedure is also likely to contain some joins. The second is to use two existing stored procedures; the problem with this approach is that I am making two round trips to the database, which can mean poor performance, especially if the database is on another server. Which approach would you go with, and why? Thanks

    Read the article

  • MySQL query advice

    - by vasion
    I am lost in the MySQL documentation. I have a table with votes - it has these columns: id, song_id, user_id, created. I cannot find the query which will process this information and output the 10 most-voted songs in a given time period. What is it?
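
    A minimal sketch (the table name votes and the date range are assumptions for illustration): count votes per song inside the time window, then keep the top 10.

        -- Sketch only: 10 most-voted songs between two example dates.
        SELECT song_id, COUNT(*) AS vote_count
        FROM votes
        WHERE created BETWEEN '2010-01-01' AND '2010-03-31'
        GROUP BY song_id
        ORDER BY vote_count DESC
        LIMIT 10;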

    Read the article

  • Why would I do an inner join on a non-distinct field?

    - by froadie
    I just came across a query that does an inner join on a non-distinct field. I've never seen this before and I'm a little confused about this usage. Something like:

        SELECT DISTINCT all, my, stuff
        FROM myTable
        INNER JOIN myOtherTable ON myTable.nonDistinctField = myOtherTable.nonDistinctField
        (WHERE some filters here...)

    I'm not quite sure what my question is or how to phrase it, or why exactly this confuses me, but I was wondering if anyone could explain why someone would need to do an inner join on a non-distinct field and then select only distinct values. Is there ever a legitimate use of an inner join on a non-distinct field? What would be the purpose? And if there is a legitimate reason for such a query, can you give examples of where it would be used?
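
    As a hedged, made-up illustration of a legitimate case: joining two tables on a shared attribute that is not a key in either table produces every pairing that shares that value, which is sometimes exactly what is wanted, and DISTINCT then trims the repeats in the selected columns.

        -- Hypothetical example: which cities have both a customer and a supplier?
        -- city is not unique in either table, so the join is many-to-many.
        SELECT DISTINCT c.city
        FROM customers c
        INNER JOIN suppliers s ON s.city = c.city;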

    Read the article

  • select rows with column that is not null?

    - by fayer
    By default, one column in my MySQL table is NULL. I want to select some rows, but only if the field value in that column is not NULL. What is the correct way of writing it?

        $query = "SELECT * FROM names WHERE id = '$id' AND name != NULL";

    Is this correct?
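
    A hedged sketch of the usual fix: NULL never compares as equal or unequal with = or !=, so the condition has to use IS NOT NULL instead.

        -- Sketch only: use IS NOT NULL rather than != NULL.
        SELECT * FROM names WHERE id = '$id' AND name IS NOT NULL;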

    Read the article

  • Do a query only if there are no results on previous query

    - by yes123
    Hi guys, I run this query (1):

        (1) SELECT * FROM t1 WHERE title LIKE 'key%' LIMIT 1

    I need to run a second query (2) only if this previous query has no results:

        (2) SELECT * FROM t1 WHERE title LIKE '%key%' LIMIT 1

    Basically I need only 1 row, the one whose title is closest to my key. At the moment I am using a UNION query with a custom field to order by and a LIMIT 1. The problem is that I don't want to run the second query if the first one already produced a result. Thanks
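
    A hedged sketch of a single-query alternative: every row matching 'key%' also matches '%key%', so one query can prefer prefix matches in the ORDER BY and still fall back to substring matches.

        -- Sketch only: prefix matches sort first (0 before 1); LIMIT 1 keeps the best.
        SELECT *
        FROM t1
        WHERE title LIKE '%key%'
        ORDER BY CASE WHEN title LIKE 'key%' THEN 0 ELSE 1 END
        LIMIT 1;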

    Read the article
