Search Results

Search found 31328 results on 1254 pages for 'sql join'.

Page 584 / 1254

  • Oracle - correlated subquery problems

    - by FrustratedWithFormsDesigner
    I have this query:

        select acc_num
        from (select distinct ac_outer.acc_num, ac_outer.owner
              from ac_tab ac_outer
              where (ac_outer.owner = '1234567')
                and ac_outer.owner = (select sq.owner
                                      from (select a1.owner
                                            from ac_tab a1
                                            where a1.acc_num = ac_outer.acc_num
                                            order by a1.a_date desc, a1.b_date desc, a1.c_date desc) sq
                                      where rownum = 1)
              order by dbms_random.value()) subq
        order by acc_num;

    The idea is to get all acc_nums (not a primary key) from ac_tab that have an owner of 1234567. Since an acc_num in ac_tab could have changed owners over time, I am trying to use the inner correlated subqueries to ensure that an acc_num is returned ONLY if its most recent owner is 1234567. Naturally, it doesn't work (or I wouldn't be posting here ;) ). Oracle gives me an error: ORA-00904 ac_outer.acc_num is an invalid identifier. I thought that ac_outer should be visible to the correlated subqueries, but for some reason it's not. Is there a way to fix the query, or do I have to resort to PL/SQL to solve this? (Oracle version is 10g)
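
    A possible rewrite, offered only as a sketch: Oracle resolves correlated column references just one subquery level deep, so pushing the "latest owner per acc_num" logic into an analytic function avoids the nested correlation entirely (column names taken from the question):

        select acc_num
        from (select acc_num,
                     owner,
                     row_number() over (partition by acc_num
                                        order by a_date desc, b_date desc, c_date desc) rn
              from ac_tab) t
        where t.rn = 1              -- keep only the most recent row per acc_num
          and t.owner = '1234567';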

    Read the article

  • querying a large text file containing JSON objects

    - by Maciek Sawicki
    Hi, I have a text file of a few gigabytes in this format: {"user_ip":"x.x.x.x", "action_type":"xxx", "action_data":{"some_key":"some_value"...},...} with one entry per line. First I would like to easily find entries for a given IP. This part is easy because I can use grep, for example, but even here I would like a better solution because I want responses as fast as possible. The next part is more complicated: I would like to find entries from a selected IP, of a selected type, and with a particular value of some_key in action_data. I would probably have to convert this file to a SQL database (probably SQLite, because it will be a desktop app), but I am asking whether better solutions exist.
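
    One plausible route, sketched here under the assumption that the JSON lines are loaded into an SQLite table (table and column names are invented for illustration):

        -- one row per log line, with the frequently-filtered fields pulled out into columns
        CREATE TABLE actions (
            user_ip     TEXT,
            action_type TEXT,
            some_key    TEXT,
            raw_json    TEXT    -- keep the original line for anything not extracted
        );

        CREATE INDEX idx_actions_ip          ON actions (user_ip);
        CREATE INDEX idx_actions_ip_type_key ON actions (user_ip, action_type, some_key);

        -- the fast lookups the question describes
        SELECT raw_json FROM actions WHERE user_ip = '1.2.3.4';
        SELECT raw_json FROM actions
        WHERE user_ip = '1.2.3.4' AND action_type = 'click' AND some_key = 'some_value';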

    Read the article

  • Get only latest row, grouped by a column

    - by Cylindric
    I have a large data-set of emails sent and status-codes:

        ID  Recipient            Date        Status
        1   [email protected]    01/01/2010  1
        2   [email protected]    02/01/2010  1
        3   [email protected]    01/01/2010  1
        4   [email protected]    02/01/2010  2
        5   [email protected]    03/01/2010  1
        6   [email protected]    01/01/2010  1
        7   [email protected]    02/01/2010  2

    In this example: all emails sent to someone have a status of 1; the middle email (by date) sent to them has a status of 2, but the latest is 1; the last email sent to others has a status of 2. What I need to retrieve is a count of all emails sent to each person, and what the latest status code was. The first part is fairly simple:

        SELECT Recipient, Count(*) EmailCount
        FROM Messages
        GROUP BY Recipient
        ORDER BY Recipient

    Which gives me:

        Recipient            EmailCount
        [email protected]    2
        [email protected]    3
        [email protected]    2

    How can I get the most recent status code too? The end result should be:

        Recipient            EmailCount  LastStatus
        [email protected]    2           1
        [email protected]    3           1
        [email protected]    2           2

    Thanks. (Server is Microsoft SQL Server 2008, query is being run through an OleDbConnection in .NET)
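
    A sketch of one common SQL Server 2008 approach, assuming the table and columns shown in the question: rank each recipient's messages by date with ROW_NUMBER(), then join the per-recipient counts to the top-ranked row.

        SELECT c.Recipient, c.EmailCount, m.Status AS LastStatus
        FROM (SELECT Recipient, COUNT(*) AS EmailCount
              FROM Messages
              GROUP BY Recipient) c
        JOIN (SELECT Recipient, Status,
                     ROW_NUMBER() OVER (PARTITION BY Recipient
                                        ORDER BY [Date] DESC, ID DESC) AS rn
              FROM Messages) m
          ON m.Recipient = c.Recipient
         AND m.rn = 1                       -- rn = 1 is the latest message per recipient
        ORDER BY c.Recipient;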

    Read the article

  • pipelined function

    - by user289429
    Can someone provide an example of how to use a parallel table function in Oracle PL/SQL? We need to run massive queries for 15 years and combine the results.

        SELECT * FROM TABLE(TableFunction(CURSOR(SELECT * FROM year_table)))

    ...is what we want, effectively. The innermost select will give all the years, and the table function will take each year, run a massive query, and return a collection. The problem we have is that all years are being fed to one invocation of the table function; we would rather have the table function called in parallel for each year. We tried all sorts of partitioning by hash and range and it didn't help. Also, can we drop the keyword PIPELINED from the function declaration, since we are not performing any transformation and just need the aggregate of the result set?
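
    A hedged sketch of the usual shape (names below are placeholders, not from the question): parallel execution of a table function needs PARALLEL_ENABLE with a PARTITION BY clause on a strongly typed REF CURSOR parameter, and the calling query itself must run in parallel (e.g. via a PARALLEL hint or a parallel degree on the source table). PIPELINED can normally stay; it is what lets rows stream back while the function is still running.

        -- schema-level collection type for the rows the function emits
        CREATE TYPE year_result_t AS OBJECT (yr NUMBER, total NUMBER);
        /
        CREATE TYPE year_result_tab AS TABLE OF year_result_t;
        /

        CREATE OR REPLACE PACKAGE year_pkg AS
          TYPE year_cur IS REF CURSOR RETURN year_table%ROWTYPE;  -- strongly typed cursor
        END;
        /

        CREATE OR REPLACE FUNCTION table_function(p_years year_pkg.year_cur)
          RETURN year_result_tab PIPELINED
          PARALLEL_ENABLE (PARTITION p_years BY HASH (yr))  -- yr assumed to be a column of year_table
        AS
          l_row year_table%ROWTYPE;
        BEGIN
          LOOP
            FETCH p_years INTO l_row;
            EXIT WHEN p_years%NOTFOUND;
            -- run the per-year work here and pipe out its rows
            PIPE ROW (year_result_t(l_row.yr, 0));
          END LOOP;
          RETURN;
        END;
        /

        -- the call; the PARALLEL hint (or a parallel degree on year_table) is what lets
        -- Oracle start several slaves, each receiving its own subset of years
        SELECT *
        FROM TABLE(table_function(CURSOR(SELECT /*+ PARALLEL(y, 4) */ * FROM year_table y)));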

    Read the article

  • n-grams from text in PostgreSQL

    - by harshsinghal
    I am looking to create n-grams from a text column in PostgreSQL. I currently split (on whitespace) the data (sentences) in a text column into an array:

        select regexp_split_to_array(sentenceData,E'\s+') from tableName

    Once I have this array, how do I go about creating a loop to find n-grams and write each to a row in another table? Using unnest I can obtain all the elements of all the arrays on separate rows, and maybe I can then think of a way to get n-grams from a single column, but I'd lose the sentence boundaries, which I wish to preserve. Sample SQL code for PostgreSQL to emulate the above scenario:

        create table tableName(sentenceData text);
        INSERT INTO tableName(sentenceData) VALUES('This is a long sentence');
        INSERT INTO tableName(sentenceData) VALUES('I am currently doing grammar, hitting this monster book btw!');
        INSERT INTO tableName(sentenceData) VALUES('Just tonnes of grammar, problem is I bought it in TAIWAN, and so there aint any englihs, just chinese and japanese');
        select regexp_split_to_array(sentenceData,E'\s+') from tableName;
        select unnest(regexp_split_to_array(sentenceData,E'\s+')) from tableName;
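
    One possible shape for the bigram case, sketched against the sample table above and assuming PostgreSQL 9.3+ for LATERAL; sentence boundaries are preserved because the index series is generated per row:

        SELECT t.a[i] || ' ' || t.a[i + 1] AS bigram
        FROM (SELECT regexp_split_to_array(sentenceData, E'\\s+') AS a
              FROM tableName) t,
             LATERAL generate_series(1, array_length(t.a, 1) - 1) AS i;

    The same idea extends to n-grams by concatenating a[i] through a[i + n - 1] and stopping the series at array_length(a, 1) - n + 1; an INSERT ... SELECT around it writes the results to another table.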

    Read the article

  • Value cannot be null. Parameter name: key when databinding in ASP.NET

    - by Yongwei Xing
    Hi all, I am trying to bind data from SQL Server to a list box and I get the error "Value cannot be null. Parameter name: key".

        sqlCommand = "SELECT [Country] FROM [tbl_LookupCountry] where [Country] IS NOT NULL";
        SqlConnection sqlConCountry = new SqlConnection(connectString);
        SqlCommand sqlCommCountry = new SqlCommand();
        sqlCommCountry.Connection = sqlConCountry;
        sqlCommCountry.CommandType = System.Data.CommandType.Text;
        sqlCommCountry.CommandText = sqlCommand;
        sqlCommCountry.CommandTimeout = 300;
        sqlConCountry.Open();
        reader = sqlCommCountry.ExecuteReader();
        ddlCountry.DataSource = reader;
        ddlCountry.DataBind();
        sqlConCountry.Close();

    Has anyone met this problem before?

    Read the article

  • unique constraint (w/o Trigger) on "one-to-many" relation

    - by elgcom
    To illustrate the problem, I'll make an example: a tag_bundle consists of one or more tags. A unique tag combination maps to a unique tag_bundle, and vice versa.

        tag_bundle                tag            tag_bundle_relation
        +---------------+         +--------+     +---------------+--------+
        | tag_bundle_id |         | tag_id |     | tag_bundle_id | tag_id |
        +---------------+         +--------+     +---------------+--------+
        | 1             |         | 100    |     | 1             | 100    |
        +---------------+         | 101    |     | 1             | 101    |
                                  +--------+     +---------------+--------+

    There can't be another tag_bundle having the combination of tag 100 and tag 101. How can I ensure such a unique constraint when executing SQL concurrently, i.e. prevent two bundles with the same tag combination from being added at the same time? Adding a simple unique constraint on any table does not work. Is there any solution other than a trigger or an explicit lock? The only simple way I have come up with is to turn the tag combination into a string and make that unique:

        tag_bundle (unique on tags)         tag            tag_bundle_relation
        +---------------+---------+         +--------+     +---------------+--------+
        | tag_bundle_id | tags    |         | tag_id |     | tag_bundle_id | tag_id |
        +---------------+---------+         +--------+     +---------------+--------+
        | 1             | 100,101 |         | 100    |     | 1             | 100    |
        +---------------+---------+         | 101    |     | 1             | 101    |
                                            +--------+     +---------------+--------+

    but it seems not a good way :(

    Read the article

  • Exclusive filtering by tag

    - by KaptajnKold
    I'm using Rails 3.0 and MySQL 5.1. I have these three models: Question, Tag and QuestionTag. Tag has a column called name. Question has many Tags through QuestionTags and vice versa. Suppose I have n tag names. How do I find only the questions that have all n tags, identified by tag name? And how do I do it in a single query? (If you can convince me that doing it in more than one query is optimal, I'll be open to that.) A pure Rails 3 solution would be preferred, but I am not averse to a pure SQL solution either.
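
    For the SQL side, a common pattern is "relational division by count"; a sketch assuming conventional Rails table and column names (questions, tags with a name column, question_tags join table), shown here for n = 3 tag names:

        SELECT q.*
        FROM questions q
        JOIN question_tags qt ON qt.question_id = q.id
        JOIN tags t           ON t.id = qt.tag_id
        WHERE t.name IN ('ruby', 'rails', 'mysql')   -- the n tag names
        GROUP BY q.id
        HAVING COUNT(DISTINCT t.name) = 3;           -- must match all n of them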

    Read the article

  • Excel Spreadsheet - Best way to perform an Oracle Query on a cell

    - by Jamie
    Hi there, I have an Excel spreadsheet. There is a cell containing a concatenated name and surname (don't ask why), for example:

        Cell A2: BLOGGSJOE

    On this cell, I would like to run the following SQL and output the result to cells A3, A4 and A5:

        SELECT i.id, i.forename, i.surname
        FROM individual i
        WHERE UPPER(REPLACE('" & A2 & "', ' ', '')) = UPPER(REPLACE(i.surname || i.forename, ' ', ''))
          AND NVL(i.ind_efface, 'N') = 'N'

    Any idea how I could perform an Oracle query on each cell and return the result? I have enabled an Oracle data source connection in Excel, just not sure what to do now. Is this a stupid approach, and can you recommend a better, more proficient way? Thanks muchly! I lack the necessary experience in this type of thing! :-) EDIT: I am aware that I could just write a simple ruby/php/python/whatever script to loop through the Excel spreadsheet (or CSV file) and then perform the query, etc., but I thought there might be a quick way in Excel itself.

    Read the article

  • Losing DateTimeOffset precision when using C#

    - by Darvis Lombardo
    I have a SQL Server table with a CreatedDate field of type DateTimeOffset(2). A sample value in the table is 2010-03-01 15:18:58.57 -05:00. As an example, from within C# I retrieve this value like so:

        var cmd = new SqlCommand("SELECT CreatedDate FROM Entities WHERE EntityID = 2", cn);
        var da = new SqlDataAdapter(cmd);
        DataTable dt = new DataTable();
        da.Fill(dt);

    And I look at the value:

        MessageBox.Show(dt.Rows[0][0].ToString());

    The result is 2010-03-01 15:18:58 -05:00, which is missing the .57 that is stored in the database. If I look at dt.Rows[0][0] in the Watch window, I also do not see the .57, so it appears it has been truncated. Can someone shed some light on this? I need to use the date to match up with other records in the database and the .57 is needed. Thanks! Darvis

    Read the article

  • Auto-generated values for columns in database

    - by Jamal
    Is it good practice to let the database initialize columns whose values it can determine itself? For example, identity columns of type uniqueidentifier can have a default value (NEWID()), and columns that show the record creation date can have a default value (GETDATE()). Should I go through all my tables and do this wherever I am sure that I won't need to assign the value manually and the auto-generated value is correct? I am also thinking about using LINQ to SQL classes and setting the "Auto Generated Value" property of these columns to true. Maybe this is what everybody already knows, or maybe I am asking a question about a fundamental issue; if so, please tell me.
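
    A minimal sketch of what such defaults look like in SQL Server (table and column names are invented for illustration):

        CREATE TABLE dbo.Orders (
            OrderID   uniqueidentifier NOT NULL
                      CONSTRAINT DF_Orders_OrderID   DEFAULT (NEWID()),
            CreatedOn datetime         NOT NULL
                      CONSTRAINT DF_Orders_CreatedOn DEFAULT (GETDATE()),
            Amount    decimal(10, 2)   NOT NULL
        );

        -- rows inserted without those columns pick up the generated values
        INSERT INTO dbo.Orders (Amount) VALUES (19.99);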

    Read the article

  • Linq ChangeConflictException occurs when submitting DataContext changes

    - by Alex
        System.Data.Linq.ChangeConflictException: 2 of X updates failed.
          at System.Data.Linq.ChangeProcessor.SubmitChanges(ConflictMode failureMode)
          at System.Data.Linq.DataContext.SubmitChanges(ConflictMode failureMode)
          at PROJECT.Controllers.HomeController.ClickProc(Int32 id, String code, String n)

    This is what I get very often. This action is performed thousands of times a day, and I get this exception about once every 5 seconds. From what I understand, it happens when something changes in the database in the period between creating the DataContext and updating it. Am I right? How can I fix it? Update: I just debugged the error and found the following:

        Table name: dbo.Stats
        current value:  9852039
        original value: 9852038
        database value: 9852039

    The Stats table is updated constantly. So how can I still make LINQ save the changes? With "classical" SQL Server access through SqlCommand I never had problems like that.

    Read the article

  • How to approach this SQL query

    - by Kim
    I have data related as follows:

        A table of Houses
        A table of Boxes (with an FK back into Houses)
        A table of Things_in_boxes (with an FK back to Boxes)
        A table of Owners (with an FK back into Houses)

    In a nutshell, a House has many Boxes, and each Box has many Things in it. In addition, each House has many Owners. If I know two Owners (say Peter and Paul), how can I list all the Things that are in the Boxes that are in the Houses owned by these guys? Also, I'd like to master this SQL stuff. Can anyone recommend a good book/resource? (I'm using MySQL.) Thanks!
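
    A sketch of the join chain the question describes, with guessed table and column names (houses, boxes, things_in_boxes, owners, and the usual id/foreign-key columns); DISTINCT covers the case where a house belongs to both owners:

        SELECT DISTINCT t.*
        FROM things_in_boxes t
        JOIN boxes  b ON b.id       = t.box_id
        JOIN houses h ON h.id       = b.house_id
        JOIN owners o ON o.house_id = h.id
        WHERE o.name IN ('Peter', 'Paul');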

    Read the article

  • How to set a filter for an estimated maximum price

    - by David
    I cannot figure out how to set an estimated maximum price for a collection of records. What I want to avoid is simply using SQL MAX, because there may be records with exorbitant prices. For example, in the "computers-hardware" category of OLX (http://www.olx.com/computers-hardware-cat-240) the filter for maximum price is apparently set to $1400, but sorting by price, the first items are above $10000. Maybe they calculated the average and then estimated some maximum price... what do you think? And what about the stepping? How would you calculate it?
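
    One rough, outlier-resistant approach, sketched as MySQL-flavored SQL against an assumed listings table (not from the question): take the mean plus a couple of standard deviations, then round up to a friendly step.

        -- items above the cap are still returned when sorting; the cap only drives the filter UI
        SELECT CEILING((AVG(price) + 2 * STDDEV(price)) / 100) * 100 AS estimated_max_price
        FROM listings
        WHERE category_id = 240;

    The stepping could then be derived from the same number, e.g. estimated_max_price divided into 10 or 20 equal slider steps.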

    Read the article

  • About SQL Server security

    - by Felipe Fiali
    I have an ASP.NET application which runs under the Classic .NET AppPool in IIS. I have a report to render from my website. The problem is that SQL Server keeps telling me it failed to create a connection to the data source, because login failed for user IUSR. After adding that user directly to the database I could get the report to work, but I'm concerned about security. By doing that, am I opening my specified databases to all websites hosted on IIS? Or is that account identity-specific?

    Read the article

  • Adding a new column to Table which contains live data

    - by Ardman
    I have a large table consisting of over 60 million records and I would like to add 2 new columns for data migration purposes. There are indexes on the table and some of them are large. So, by adding the 2 new columns to the table, will I run the risk of slowing down the database while it attempts to add them, and maybe time out? Or will it just work? I know that if I try to rearrange the columns SQL Server will ask me to drop and re-create the table, so I definitely don't want this. Is this something everyone is challenged with?
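
    For what it is worth, a sketch of the usual low-risk form of the change: in SQL Server, adding nullable columns with no default is a metadata-only operation and does not rewrite the 60 million rows (table and column names below are placeholders).

        -- fast: only the table metadata changes, existing rows are untouched
        ALTER TABLE dbo.BigTable
            ADD MigrationBatchID int      NULL,
                MigratedOn       datetime NULL;

        -- by contrast, NOT NULL with a default forces every row to be updated on
        -- SQL Server 2008 and earlier, which can lock the table for a long time:
        -- ALTER TABLE dbo.BigTable ADD MigratedFlag bit NOT NULL DEFAULT (0);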

    Read the article

  • How to retrieve all errors and messages from a query using ADO

    - by Johan Levin
    When a SQL batch returns more than one message, e.g. from print statements, I can only retrieve the first one using the ADO connection's Errors collection. How do I get the rest of the messages? If I run this script:

        Option Explicit
        Dim conn
        Set conn = CreateObject("ADODB.Connection")
        conn.Provider = "SQLOLEDB"
        conn.ConnectionString = "Data Source=(local);Integrated Security=SSPI;Initial Catalog=Master"
        conn.Open
        conn.Execute("print 'Foo'" & vbCrLf & "print 'Bar'" & vbCrLf & "raiserror ('xyz', 10, 127)")
        Dim error
        For Each error in conn.Errors
            MsgBox error.Description
        Next

    then I only get "Foo" back, never "Bar" or "xyz". Is there a way to get the remaining messages?

    Read the article

  • Combining DROP USER and DROP DATABASE with SELECT .. WHERE query?

    - by zsero
    I'd like to do a very simple thing: replicate the functionality of MySQL's interactive mysql_secure_installation script. My question is: is there a simple, built-in way in MySQL to combine the output of a SELECT query with the input of a DROP USER or DROP DATABASE statement? For example, suppose I'd like to drop all users with empty passwords; how could I do that with the DROP USER statement? I know an obvious solution would be to run everything from, say, a Python script: run a query with mysql -Bse "select...", parse the output with some program, construct the drop query and run it. Is there an easy way to do it in a simple SQL query? I've seen an example here, but I wouldn't call it simple: http://stackoverflow.com/a/12097567/518169 Would you recommend making a combined query, or just parsing the output using, for example, Python or bash scripts/sed?
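
    MySQL has no SELECT-driven DROP, but a common halfway house is to let SQL generate the statements and execute them as a second step; a sketch, assuming a pre-5.7 server where the hashes live in mysql.user.password:

        -- emits one DROP USER statement per account with an empty password
        SELECT CONCAT('DROP USER ''', user, '''@''', host, ''';') AS drop_stmt
        FROM mysql.user
        WHERE password = '';

        -- the generated lines can be reviewed and then piped back in, e.g.:
        --   mysql -Bse "SELECT ..." | mysql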

    Read the article

  • Intime and OutTime for the Modified date

    - by Jash
    This question was already posted on June 4, but I still have not got a proper answer, so here it is again. Table structure:

        T_Person (Table 1)
        CARDNO
        168
        471
        488
        247
        519
        518
        331
        240
        518
        386
        441
        331

        T_Cardevent (Table 2)
        CARDEVENTDATE  CARDEVENTTIME
        20090225       163932
        20090225       164630
        20090225       165027
        20090225       165137
        20090225       165147
        20090225       165715
        20090225       165749
        20090303       162059
        20090303       162723
        20090303       155029
        20090303       155707
        20090303       162824

    Query:

        SELECT CARDNO, CARDEVENTDATE,
               (1000000 * CAST(CARDEVENTDATE AS BIGINT) + CAST(CARDEVENTTIME AS BIGINT) - 30001) / 1000000 AS CardEventDateAdjusted,
               CARDEVENTTIME
        FROM T_CARDEVENT
        WHERE CARDEVENTDATE > 20090601
        GROUP BY CARDNO, CARDEVENTDATE, CARDEVENTTIME,
                 (1000000 * CAST(CARDEVENTDATE AS BIGINT) + CAST(CARDEVENTTIME AS BIGINT) - 30001) / 1000000
        ORDER BY CARDNO, CARDEVENTDATEADJUSTED

    With this query, the date is displayed correctly, with a day running from 03:00:01 to 03:00:00. How can I get the Min(time) and Max(time) for each adjusted date? I need the SQL query for this condition. Help me, please; it is urgent.
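
    A sketch of one way to get the in and out times: group on the question's own adjusted-date expression, take Min/Max of the combined date-time number (so late-night events still count as part of the previous adjusted day), and strip the date part back off. Column and table names are as in the question; the behaviour exactly at the 03:00:00 boundary may need checking.

        SELECT CARDNO,
               (1000000 * CAST(CARDEVENTDATE AS BIGINT) + CAST(CARDEVENTTIME AS BIGINT) - 30001) / 1000000 AS AdjustedDate,
               MIN(1000000 * CAST(CARDEVENTDATE AS BIGINT) + CAST(CARDEVENTTIME AS BIGINT)) % 1000000 AS InTime,
               MAX(1000000 * CAST(CARDEVENTDATE AS BIGINT) + CAST(CARDEVENTTIME AS BIGINT)) % 1000000 AS OutTime
        FROM T_CARDEVENT
        WHERE CARDEVENTDATE > 20090601
        GROUP BY CARDNO,
                 (1000000 * CAST(CARDEVENTDATE AS BIGINT) + CAST(CARDEVENTTIME AS BIGINT) - 30001) / 1000000
        ORDER BY CARDNO, AdjustedDate;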

    Read the article

  • Inline Conditional Statement in Stored Procedure

    - by Jason
    Here is the pseudo-code for my inline query in my code:

        select columnOne
        from myTable
        where columnOne = '#variableOne#'
        if len(variableTwo) gt 0
            and columnTwo = '#variableTwo#'
        end

    I would like to move this into a stored procedure but am having trouble building the query correctly. I assume it would be something like:

        select columnOne
        from myTable
        where columnOne = @variableOne
        CASE WHEN len(@variableTwo) <> 0 THEN and columnTwo = @variableTwo END

    This is giving me a syntax error. Could someone tell me what I've got wrong? Also, I would like to keep it to only one query rather than using an if statement, and I do not want to build the SQL in the stored procedure and run Exec() on it.
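
    A sketch of the usual workaround: SQL has no statement-level CASE inside a WHERE clause, but the optional filter can be folded into boolean logic so the second condition only applies when @variableTwo is non-empty (names as in the question):

        SELECT columnOne
        FROM myTable
        WHERE columnOne = @variableOne
          AND (LEN(@variableTwo) = 0 OR columnTwo = @variableTwo);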

    Read the article

  • LINQ to Entity, joining on NOT IN tables

    - by SlackerCoder
    My brain seems to be mush right now! I am using LINQ to Entity, and I need to get some data from one table that does NOT exist in another table. For example: I need the groupID, groupname and groupnumber from TABLE A where they do not exist in TABLE B. The groupID will exist in TABLE B, along with other relevant information. The tables do not have any relationship. In SQL it would be quite simple (there is a more elegant and efficient solution, but I want to paint a picture of what I need):

        SELECT GroupID, GroupName, GroupNumber
        FROM TableA
        WHERE GroupID NOT IN (SELECT GroupID FROM TableB)

    Is there an easy/elegant way to do this? Right now I have a bunch of queries hitting the db, then comparing, etc. It's pretty messy. Thanks.

    Read the article

  • Get the highest odds from the last update

    - by Frankie Yale
    I have these tables in a PostgreSQL database:

        bookmakers
        ---------------
        | id | name   |
        ---------------
        | 1  | Unibet |
        | 2  | 888    |
        ---------------

        odds
        -----------------------------------------------------------------
        | id | odds_type | odds_index | bookmaker_id | created_at       |
        -----------------------------------------------------------------
        | 1  | 1         | 1.55       | 1            | 2012-06-02 10:30 |
        | 2  | 2         | 3.22       | 2            | 2012-06-02 10:30 |
        | 3  | X         | 3.00       | 1            | 2012-06-02 10:30 |
        | 4  | 2         | 1.25       | 1            | 2012-05-27 09:30 |
        | 5  | 1         | 2.30       | 2            | 2012-05-27 09:30 |
        | 6  | X         | 2.00       | 2            | 2012-05-27 09:30 |
        -----------------------------------------------------------------

    What I am trying to query is the following: give me the 1/X/2 odds from the latest update (created_at) across ALL bookmakers, and from that last update give me the highest odds for each odds_type ('1', '2', 'X'). On my website I display them as:

        Best odds right now:
          1    |   X   |   2
        --------------------
        2.30   | 3.00  | 3.22

    I have to get the latest first, because the odds from yesterday's update are no longer valid. Then from that last update I have, in this case, 2 odds from 2 different bookmakers, so I need the best one for each of type '1', '2', 'X'. Pseudo-SQL would be something like:

        SELECT MAX(odds_index) WHERE odds_type = '1'
        ORDER BY created_at DESC, odds_index DESC

    But that doesn't work, because I would always get the latest odds (and not the highest/best from those latest). I hope I'm making sense.
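
    A sketch of one way to express it, assuming the latest update is simply the maximum created_at in the odds table (if each bookmaker can have its own update time, the subquery would need refining):

        SELECT odds_type, MAX(odds_index) AS best_odds
        FROM odds
        WHERE created_at = (SELECT MAX(created_at) FROM odds)   -- rows from the latest update only
        GROUP BY odds_type
        ORDER BY odds_type;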

    Read the article

  • Storing rich text documents

    - by David Veeneman
    This is a follow-up to another question I asked earlier today. I am creating a desktop app that stores rich text documents created in WPF (in a RichTextBox control). The app uses SQL Compact, and up until now, I had planned to store each document in a binary column in the database. I am rethinking that approach. Would it be better practice to store each rich text document in the file system, rather than saving it to the database? I figure I could put the documents in the same folder with the database, then store a relative path to each document in its database record, along with other information about the document (tags and so on). I'd like to know some pros and cons of that approach, along with ideas of what is generally considered best practice for this sort of thing. Thanks for your help.

    Read the article

  • populate object graph from database

    - by Rama
    Hi, I would like to know the best way to populate an object that has a collection of child objects, where each child object may in turn have its own collection of objects, from the database without making multiple calls to the database to get the child objects for each object. Basically the data is hierarchical; for example, a customer has orders and each order has order items. Is it best to retrieve the data in XML format (SQL Server 2005), or to retrieve a dataset by joining the related tables together and then map the data to the objects? Thanks in advance for your help.
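
    For the joined-dataset route, a sketch of the single round trip (table and column names are invented for illustration): fetch everything in one result set, ordered so that the object graph can be assembled in a single pass over the rows.

        SELECT c.CustomerID, c.Name,
               o.OrderID, o.OrderDate,
               i.OrderItemID, i.ProductName, i.Quantity
        FROM Customers c
        LEFT JOIN Orders     o ON o.CustomerID = c.CustomerID
        LEFT JOIN OrderItems i ON i.OrderID    = o.OrderID
        WHERE c.CustomerID = 42
        ORDER BY c.CustomerID, o.OrderID, i.OrderItemID;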

    Read the article
