Search Results

Search found 27530 results on 1102 pages for 'sql truncate'.


  • JPA native query join returns object but dereference throws class cast exception

    - by masato-san
    I'm using a JPQL native query to join tables, and the query result is stored in a List<Object[]>.

        public String getJoinJpqlNativeQuery() {
            final String SQL_JOIN = "SELECT v1.bitbit, v1.numnum, v1.someTime, t1.username, t1.anotherNum "
                    + "FROM MasatosanTest t1 JOIN MasatoView v1 ON v1.username = t1.username;";
            System.out.println("get join jpql native query is being called ============================");
            EntityManager em = null;
            List<Object[]> out = null;
            try {
                em = EmProvider.getDefaultManager();
                Query query = em.createNativeQuery(SQL_JOIN);
                out = query.getResultList();
                System.out.println("return object ==========>" + out);
                System.out.println(out.get(0));
                String one = out.get(0).toString(); // LINE 77, where the ClassCastException occurs
                System.out.println(one);
            } catch (Exception e) {
            } finally {
                if (em != null) {
                    em.close();
                }
            }
        }

    The problem: System.out.println("return object ==========>" + out) outputs:

        return object ==========> [[true, 0, 2010-12-21 15:32:53.0, masatosan, 0.020], [false, 0, 2010-12-21 15:32:53.0, koga, 0.213]]

    and System.out.println(out.get(0)) outputs:

        [true, 0, 2010-12-21 15:32:53.0, masatosan, 0.020]

    So I assumed I could take the return value of out.get(0) and call toString() on it: String one = out.get(0).toString(); But I get a weird ClassCastException:

        java.lang.ClassCastException: java.util.Vector cannot be cast to [Ljava.lang.Object;
        at local.test.jaxrs.MasatosanTestResource.getJoinJpqlNativeQuery(MasatosanTestResource.java:77)

    So what's really going on? Even Object[] foo = out.get(0); would throw a ClassCastException :(

    Read the article

  • Receiving a NullPointerException when calling a cursor in Android

    - by LordSnoutimus
    I am creating an application which tracks the user's location using GPS, stores the longitude and latitude in a database using a content provider, then outputs the first longitude and latitude to a MapView. I am able to create the cursor using this code:

        Cursor c = getContentResolver().query(GPSContentProvider.CONTENT_URI, null, null, null, null);
        startManagingCursor(c);

    However, when I make a call to move to the first row in the database, or even try to close the cursor using c.close(), I receive a NullPointerException.

    Read the article

  • Why am I getting "Enter Parameter Value" when running my MS Access query?

    - by DanM
    In my query, I use the IIF function to assign either "Before" or "After" to a field named BeforeOrAfter using AS. When I run this query, however, the "Enter Parameter Value" dialog appears, requesting a value for BeforeOrAfter. If I remove BeforeOrAfter DESC from the ORDER BY clause, I don't get the dialog. Here is the offending query:

        SELECT d.Scenario, e.Event,
               IIF(d.LogTime < e.Time, 'Before', 'After') AS BeforeOrAfter,
               d.HeartRate
        FROM Data d INNER JOIN Events e ON d.Scenario = e.Scenario
        WHERE e.Include = Yes
        ORDER BY d.Scenario, e.Id, BeforeOrAfter DESC

    Question: Why is my AS BeforeOrAfter not being recognized by the ORDER BY clause? Why does it ask me to enter a parameter value for "BeforeOrAfter" when I run this query? Note: I tried using brackets, single quotes, double quotes, etc., but none of that made any difference.
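
    A minimal sketch of one common workaround (not from the question itself): Access sometimes fails to bind a computed alias in ORDER BY, so repeating the full expression in place of the alias avoids the parameter prompt. Names are taken from the query above.

        SELECT d.Scenario, e.Event,
               IIF(d.LogTime < e.Time, 'Before', 'After') AS BeforeOrAfter,
               d.HeartRate
        FROM Data d INNER JOIN Events e ON d.Scenario = e.Scenario
        WHERE e.Include = Yes
        ORDER BY d.Scenario, e.Id, IIF(d.LogTime < e.Time, 'Before', 'After') DESC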

    Read the article

  • Speeding up inner-joins and subqueries while restricting row size and table membership

    - by hiffy
    I'm developing an rss feed reader that uses a bayesian filter to filter out boring blog posts. The Stream table is meant to act as a FIFO buffer from which the webapp will consume 'entries'. I use it to store the temporary relationship between entries, users and bayesian filter classifications. After a user marks an entry as read, it will be added to the metadata table (so that a user isn't presented with material they have already read), and deleted from the stream table. Every three minutes, a background process will repopulate the Stream table with new entries (i.e. whenever the daemon adds new entries after it checks the rss feeds for updates). Problem: the query I came up with is hella slow. More importantly, the Stream table only needs to hold one hundred unread entries at a time; keeping it that small will reduce duplication, make processing faster and give me some flexibility with how I display the entries. The query (takes about 9 seconds on 3600 items with no indexes):

        insert into stream(entry_id, user_id)
        select entries.id, subscriptions_users.user_id
        from entries
        inner join subscriptions_users on subscriptions_users.subscription_id = entries.subscription_id
        where subscriptions_users.user_id = 1
          and entries.id not in (select entry_id from metadata where metadata.user_id = 1)
          and entries.id not in (select entry_id from stream where user_id = 1);

    The query explained: insert into stream all of the entries from a user's subscription list (subscriptions_users) that the user has not read (i.e. do not exist in metadata) and which do not already exist in the stream. Attempted solution: adding limit 100 to the end speeds up the query considerably, but upon repeated executions it will keep adding a different set of 100 entries that do not already exist in the table (with each successive query taking longer and longer). This is close, but not quite what I wanted to do. Does anyone have any advice (nosql?) or know a more efficient way of composing the query?
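
    One standard rewrite, sketched here without testing against this schema: replace the NOT IN subqueries with LEFT JOIN ... IS NULL anti-joins, which older MySQL versions tend to plan much better; it assumes indexes exist on the joined id/user_id columns.

        insert into stream (entry_id, user_id)
        select e.id, su.user_id
        from entries e
        inner join subscriptions_users su on su.subscription_id = e.subscription_id
        left join metadata m on m.entry_id = e.id and m.user_id = 1
        left join stream s   on s.entry_id = e.id and s.user_id = 1
        where su.user_id = 1
          and m.entry_id is null   -- not yet read
          and s.entry_id is null   -- not already queued
        limit 100;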

    Read the article

  • Oracle Database Enforce CHECK on multiple tables

    - by GigaPr
    I am trying to enforce a CHECK constraint on multiple tables in an Oracle database:

        CREATE TABLE RollingStocks (
            Id NUMBER,
            Name Varchar2(80) NOT NULL,
            RollingStockCategoryId NUMBER NOT NULL,
            CONSTRAINT Pk_RollingStocks PRIMARY KEY (Id),
            CONSTRAINT Check_RollingStocks_CategoryId
                CHECK ((RollingStockCategoryId IN (SELECT Id FROM FreightWagonTypes))
                    OR (RollingStockCategoryId IN (SELECT Id FROM LocomotiveClasses)))
        );

    ...but I get the following error:

        *Cause: Subquery is not allowed here in the statement.
        *Action: Remove the subquery from the statement.

    Can you help me understand what the problem is, or how to achieve the same result?
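
    Since Oracle CHECK constraints cannot contain subqueries, a common substitute (a sketch using the tables above, ignoring concurrency concerns) is to enforce the rule in a trigger:

        CREATE OR REPLACE TRIGGER Trg_Check_RollingStocks
        BEFORE INSERT OR UPDATE ON RollingStocks
        FOR EACH ROW
        DECLARE
            v_count NUMBER;
        BEGIN
            SELECT COUNT(*) INTO v_count
            FROM (SELECT Id FROM FreightWagonTypes
                  UNION ALL
                  SELECT Id FROM LocomotiveClasses)
            WHERE Id = :NEW.RollingStockCategoryId;
            IF v_count = 0 THEN
                RAISE_APPLICATION_ERROR(-20001, 'Invalid RollingStockCategoryId');
            END IF;
        END;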

    Read the article

  • Stored Procedure for Multi-Table Insert Error: Cannot Insert the Value Null into Column

    - by SidC
    Good Evening All, I've created the following stored procedure:

        CREATE PROCEDURE AddQuote
        -- Add the parameters for the stored procedure here
        AS
        BEGIN
            -- SET NOCOUNT ON added to prevent extra result sets from
            -- interfering with SELECT statements.
            SET NOCOUNT ON;

            Declare @CompanyName nvarchar(50),
                    @Addr nvarchar(50),
                    @City nvarchar(50),
                    @State nvarchar(2),
                    @Zip nvarchar(5),
                    @NeedDate datetime,
                    @PartNumber float,
                    @Qty int

            -- Insert statements for procedure here
            Insert into dbo.Customers (CompanyName, Address, City, State, ZipCode)
            Values (@CompanyName, @Addr, @City, @State, @Zip)

            Insert into dbo.Orders (NeedbyDate)
            Values (@NeedDate)

            Insert into dbo.OrderDetail (fkPartNumber, Qty)
            Values (@PartNumber, @Qty)
        END
        GO

    When I execute AddQuote, I receive an error stating:

        Msg 515, Level 16, State 2, Procedure AddQuote, Line 31
        Cannot insert the value NULL into column 'ID', table 'Diel_inventory.dbo.OrderDetail';
        column does not allow nulls. INSERT fails. The statement has been terminated.

    I understand that I've set the Qty field to not allow nulls and want to continue doing so. However, are there other syntax changes I should make to ensure that this sproc works correctly? Thanks, Sid
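
    One hedged guess at a fix, since the error is about OrderDetail.ID rather than Qty: if ID is meant to be system-generated, declaring it as an identity column means the INSERT no longer has to supply it. A sketch (assumes the table can be recreated; SQL Server cannot add IDENTITY to an existing column in place):

        CREATE TABLE dbo.OrderDetail (
            ID int IDENTITY(1,1) NOT NULL PRIMARY KEY,
            fkPartNumber float,
            Qty int NOT NULL
        );

    Note also that @CompanyName and friends are declared as local variables after AS, so they are always NULL when the inserts run; to receive caller values they would need to be parameters declared before AS.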

    Read the article

  • Which Oracle table uses a sequence?

    - by Jaú
    Having a sequence, I need to find out which table.column gets its values. As far as I know, Oracle doesn't keep track of this relationship, so searching for the sequence in the source code would be the only way. Is that right? Does anyone know of some way to find out this sequence-table relationship?
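
    For stored code at least, the data dictionary can answer this; a sketch (MY_SEQ is a placeholder, and plain application-side INSERTs will not show up here):

        SELECT owner, name, type
        FROM   all_dependencies
        WHERE  referenced_type = 'SEQUENCE'
        AND    referenced_name = 'MY_SEQ';

        -- or grep the stored source directly
        SELECT owner, name, type, line, text
        FROM   all_source
        WHERE  UPPER(text) LIKE '%MY_SEQ.NEXTVAL%';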

    Read the article

  • Optimum size of transaction in Postgres?

    - by Joe
    I'm running a process that does a lot of updates (over 100,000) to a table. I have the choice between putting all the updates in a single transaction or committing transactions every 1000 or so. Ignore for the moment the case where a transaction fails and is aborted. I'm interested in the best size of transaction for memory and speed efficiency.
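
    For reference, the batched variant looks like this sketch (placeholder table and ranges); each COMMIT bounds the work held open in the transaction, at the cost of per-commit overhead:

        BEGIN;
        UPDATE items SET price = price * 1.1 WHERE id BETWEEN 1 AND 1000;
        COMMIT;

        BEGIN;
        UPDATE items SET price = price * 1.1 WHERE id BETWEEN 1001 AND 2000;
        COMMIT;
        -- ...one batch of ~1000 rows per transaction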

    Read the article

  • Updating Excel Cell with Non-Numeric Data in C#

    - by kbo206
    I have a query that is:

        ExcelQuery = "Update [Sheet1$] set CITIZEN_ID = #" + value + " where CITIZEN_ID = " + value;

    As you can see, I'm essentially just appending a "#" onto the CITIZEN_ID field. value is an int/numeric value. So if I had "256" in the CITIZEN_ID column, it would be converted to "#256". When I execute this I get an OleDbException: Syntax error in date in query expression. So I surrounded part of the query in single quotes, like this:

        ExcelQuery = "Update [Sheet1$] set CITIZEN_ID = '#" + value + "' where CITIZEN_ID = " + value;

    With that I get yet another OleDbException, this time with: Data type mismatch in criteria expression. I'm guessing for some reason the CITIZEN_ID fields don't want to take anything besides a plain number. Is there any way I can remedy this to get that pound symbol in? Thanks!

    Read the article

  • Should I commit or rollback a transaction that creates a temp table, reads, then deletes it?

    - by Triynko
    To select information related to a list of hundreds of IDs, rather than make a huge select statement, I create a temp table, insert the ids into it, join it with a table to select the rows matching the IDs, then delete the temp table. So this is essentially a read operation, with no permanent changes made to any persistent database tables. I do this in a transaction, to ensure the temp table is deleted when I'm finished. My question is... what happens when I commit such a transaction vs. let it roll back? Performance-wise, does the DB engine have to do more work to roll back the transaction vs. committing it? Is there even a difference, since the only modifications are done to a temp table? Related question here, but it doesn't answer my specific case involving temp tables: http://stackoverflow.com/questions/309834/should-i-commit-or-rollback-a-read-transaction
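
    For context, the pattern described reads roughly like this sketch (SQL Server 2008+ syntax, placeholder names):

        BEGIN TRANSACTION;

        CREATE TABLE #ids (id int PRIMARY KEY);
        INSERT INTO #ids (id) VALUES (1), (2), (3);  -- ...hundreds of ids

        SELECT t.*
        FROM   SomeTable t
        JOIN   #ids i ON i.id = t.id;

        DROP TABLE #ids;

        COMMIT;  -- or ROLLBACK; no persistent table was changed either way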

    Read the article

  • PostgreSQL, Foreign Keys, Insert speed & Django

    - by Miles
    A few days ago, I ran into an unexpected performance problem with a pretty standard Django setup. For an upcoming feature, we have to regenerate a table hourly, containing about 100k rows of data, 9M on the disk, 10M indexes according to pgAdmin. The problem is that inserting them by whatever method literally takes ages, up to 3 minutes of 100% disk busy time. That's not something you want on a production site. It doesn't matter if the inserts were in a transaction, issued via plain insert, multi-row insert, COPY FROM or even INSERT INTO t1 SELECT * FROM t2. After noticing this isn't Django's fault, I followed a trial and error route, and hey, the problem disappeared after dropping all foreign keys! Instead of 3 minutes, the INSERT INTO SELECT FROM took less than a second to execute, which isn't too surprising for a table <= 20M on the disk. What is weird is that PostgreSQL manages to slow down inserts by 180x just by using 3 foreign keys. Oh, disk activity was pure writing, as everything is cached in RAM; only writes go to the disks. It looks like PostgreSQL is working very hard to touch every row in the referred tables, as 3MB/sec * 180s is way more data than the 20MB this new table takes on disk. No WAL for the 180s case; I was testing in psql directly. In Django, add ~50% overhead for WAL logging. Tried @commit_on_success, same slowness; I had even implemented multi-row insert and COPY FROM with psycopg2. That's another weird thing: how can 10M worth of inserts generate 10x 16M log segments? Table layout: id serial primary key, a bunch of int32, and 3 foreign keys to:

        small table, 198 rows, 16k on disk
        large table, 1.2M rows, 59 data + 89 index MB on disk
        large table, 2.2M rows, 198 + 210 MB

    So, am I doomed to either drop the foreign keys manually or use the table in a very un-Django way by defining and saving bla_id x3 and skipping models.ForeignKey? I'd love to hear about some magical antidote / pg setting to fix this.
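
    One commonly cited antidote, sketched with made-up constraint and table names rather than this exact schema: drop the foreign keys before the bulk load and re-add them afterwards, so the references are validated in one pass instead of row by row.

        ALTER TABLE hourly_table DROP CONSTRAINT hourly_table_big1_fk;
        -- ...same for the other two FKs, then bulk load, then:
        ALTER TABLE hourly_table
            ADD CONSTRAINT hourly_table_big1_fk
            FOREIGN KEY (big1_id) REFERENCES big_table_1 (id);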

    Read the article

  • Transactional isolation level needed for safely incrementing ids

    - by Knut Arne Vedaa
    I'm writing a small piece of software that is to insert records into a database used by a commercial application. The unique primary keys (ids) in the relevant table(s) are sequential, but do not seem to be set to "auto increment". Thus, I assume, I will have to find the largest id, increment it and use that value for the record I'm inserting. In pseudo-code for brevity:

        id = select max(id) from some_table
        id++
        insert into some_table values(id, othervalues...)

    Now, if another thread started the same transaction before the first one finished its insert, you would get two identical ids and a failure when trying to insert the last one. You could check for that failure and retry, but a simpler solution might be setting an isolation level on the transaction. For this, would I need SERIALIZABLE or a lower level? Additionally, is this, generally, a sound way of solving the problem? Are there any other ways of doing it?
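
    A hedged alternative that avoids reasoning about isolation levels: compute and insert in a single statement (placeholder columns below), and pair it with a unique constraint on id plus a retry, so a concurrent duplicate fails cleanly instead of silently colliding.

        INSERT INTO some_table (id, othervalue)
        SELECT COALESCE(MAX(id), 0) + 1, 'some value'
        FROM some_table;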

    Read the article

  • Subset sum problem

    - by MadBoy
    I'm having a problem with counting which is a continuation of this question. I am not really a math person, so it's really hard for me to figure out the subset sum problem which was suggested as a resolution. I have four ArrayLists in which I hold data: alID, alTransaction, alNumber, alPrice.

        Type | Transaction  | Number       | Price
        8    | Buy          | 95.00000000  | 305.00000000
        8    | Buy          | 126.00000000 | 305.00000000
        8    | Buy          | 93.00000000  | 306.00000000
        8    | Transfer out | 221.00000000 | 305.00000000
        8    | Transfer in  | 221.00000000 | 305.00000000
        8    | Sell         | 93.00000000  | 360.00000000
        8    | Sell         | 95.00000000  | 360.00000000
        8    | Sell         | 126.00000000 | 360.00000000
        8    | Buy          | 276.00000000 | 380.00000000

    In the end I'm trying to get what's left for the customer, and what's left I put into 3 array lists: alNew (corresponds to alNumber), alNewPoIle (corresponds to alPrice), and alNewCo (corresponds to alID).

        ArrayList alNew = new ArrayList();
        ArrayList alNewPoIle = new ArrayList();
        ArrayList alNewCo = new ArrayList();
        for (int i = 0; i < alTransaction.Count; i++) {
            string tempAkcjeCzynnosc = (string) alTransaction[i];
            string tempAkcjeInId = (string) alID[i];
            decimal varAkcjeCena = (decimal) alPrice[i];
            decimal varAkcjeIlosc = (decimal) alNumber[i];
            int index;
            switch (tempAkcjeCzynnosc) {
                case "Transfer out":
                case "Sell":
                    index = alNew.IndexOf(varAkcjeIlosc);
                    if (index != -1) {
                        alNew.RemoveAt(index);
                        alNewPoIle.RemoveAt(index);
                        alNewCo.RemoveAt(index);
                    } else {
                        ArrayList alTemp = new ArrayList();
                        decimal varAkcjeSuma = 0;
                        for (int j = 0; j < alNew.Count; j++) {
                            string akcjeInId = (string) alNewCo[j];
                            decimal akcjeCena = (decimal) alNewPoIle[j];
                            decimal akcjeIlosc = (decimal) alNew[j];
                            if (tempAkcjeInId == akcjeInId && akcjeCena == varAkcjeCena) {
                                alTemp.Add(j);
                                varAkcjeSuma = varAkcjeSuma + akcjeIlosc;
                            }
                        }
                        if (varAkcjeSuma == varAkcjeIlosc) {
                            for (int j = alTemp.Count - 1; j >= 0; j--) {
                                int tempIndex = (int) alTemp[j];
                                alNew.RemoveAt(tempIndex);
                                alNewPoIle.RemoveAt(tempIndex);
                                alNewCo.RemoveAt(tempIndex);
                            }
                        }
                    }
                    break;
                case "Transfer In":
                case "Buy":
                    alNew.Add(varAkcjeIlosc);
                    alNewPoIle.Add(varAkcjeCena);
                    alNewCo.Add(tempAkcjeInId);
                    break;
            }
        }

    Basically I'm adding and removing things from the array lists depending on Transaction Type, Transaction ID and Numbers. I'm adding numbers to the ArrayList like 156, 340 (when it is Transfer in or Buy) etc., and then I remove them the same way, 156, 340 (when it's Transfer out or Sell). My solution works for that without a problem. The problem I have is that for some old data employees were entering sums like 1500 instead of 500+400+100+500. How would I change it so that when there's a Sell/Transfer out or Buy/Transfer in and there's no exact match inside the ArrayList, it would try to combine multiple items from that ArrayList and find the elements that add up to the aggregate? Inside my code I tried to resolve that problem by simply summing everything when there's no match (index == -1), in the else branch of the IndexOf check above, but it only works if certain conditions are met and fails for the rest.

    Read the article

  • Select Statements in Jobs

    - by Andrew Vogel
    I have inherited a few jobs and I am trying to understand why select statements would be in their steps. I would think that select statements would be pointless in an automated job that displays nothing for an end user.

    Read the article

  • Question about Cost in Oracle Explain Plan

    - by Will
    When Oracle is estimating the 'Cost' for certain queries, does it actually look at the amount of data (rows) in a table? For example: if I'm doing a full table scan of employees for name='Bob', does it estimate the cost by counting the number of existing rows, or is it always a set cost?
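
    Related background, sketched for Oracle's cost-based optimizer: it prices a full scan from gathered statistics (row and block counts) rather than by counting live rows at parse time, so the cost tracks table size only as closely as the stats do. EMPLOYEES is a placeholder; EXEC is SQL*Plus syntax.

        SELECT table_name, num_rows, blocks, last_analyzed
        FROM   user_tables
        WHERE  table_name = 'EMPLOYEES';

        -- refresh the statistics the optimizer sees
        EXEC DBMS_STATS.GATHER_TABLE_STATS(USER, 'EMPLOYEES');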

    Read the article

  • Returning IEnumerable<T> vs IQueryable<T>

    - by stackoverflowuser
    What is the difference between returning IQueryable vs IEnumerable?

        IQueryable<Customer> custs = from c in db.Customers
                                     where c.City == "<City>"
                                     select c;

        IEnumerable<Customer> custs = from c in db.Customers
                                      where c.City == "<City>"
                                      select c;

    Will both be deferred execution? When should one be preferred over the other?

    Read the article

  • Increase increment size to match GUID advantage

    - by TenaciousImpy
    Hi, I've been thinking of implementing this system, but can't help but feel there's a catch somewhere. One of the points of using GUID over incrementing int is that, in the future, if you were to merge databases together, you wouldn't have any clashes over the primary key/identifier. However, my approach is to set the increment size to X where X is the number of servers I'll most likely have in the future. Then, on each server, have the seed be an increment over the seed number on the previous server. That way, during merging, there would be no clashes with the primary key. Is this a safe, normal method or have I gone mental :)? Thanks
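
    For what it's worth, the scheme maps directly onto seeded identities; a sketch for SQL Server with X = 10 servers (table and column names are illustrative):

        -- Server 1: seed 1, step 10 -> ids 1, 11, 21, ...
        CREATE TABLE Orders (
            Id int IDENTITY(1, 10) PRIMARY KEY,
            CustomerName nvarchar(50)
        );

        -- Server 2 would use IDENTITY(2, 10) -> ids 2, 12, 22, ...
        -- and so on up to IDENTITY(10, 10); the key ranges never collide.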

    Read the article

  • Storing i18n data in a database using XML

    - by TigrouMeow
    Hello, I may have to store some i18n-ed data in my database using XML if I don't fight back. That's not my choice, but it's in the specifications I have to follow. We would have, for example, something like the following in a 'Country' column:

        <lang='fr'>Etats-Unis</lang>
        <lang='en'>United States</lang>

    This would apply to many columns in the database. I don't think it's a good idea at all. I tend to think that a cell in a database should represent a single piece of data (better for look-up), and that the database should have two dimensions maximum and not 3 or more (one more request would be required per dimension / a dimension here would be equal to the number of XML attributes). My idea was to have a separate table for all the translations, with columns such as: ID / Language / Translation. However, I should admit that I'm really not sure what is the best way to store data in various languages in a DB... Thanks for your advice :)
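
    The separate-table idea, sketched concretely with illustrative names:

        CREATE TABLE Country (
            Id   integer PRIMARY KEY,
            Code varchar(2) NOT NULL        -- language-neutral data stays here
        );

        CREATE TABLE CountryTranslation (
            CountryId integer REFERENCES Country (Id),
            Lang      varchar(5)  NOT NULL, -- 'en', 'fr', ...
            Name      varchar(80) NOT NULL, -- 'United States', 'Etats-Unis', ...
            PRIMARY KEY (CountryId, Lang)
        );

        SELECT c.Code, t.Name
        FROM   Country c
        JOIN   CountryTranslation t ON t.CountryId = c.Id AND t.Lang = 'fr';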

    Read the article

  • ProviderException: InvalidCastException

    - by JS
    A few of our clients are regularly getting an invalid cast exception, with variations (InvalidCastException / ProviderException), but both originating from the method call:

        System.Web.Security.SqlRoleProvider.GetRolesForUser(String username)

    The other variation is:

        Exception type: InvalidCastException
        Exception message: Unable to cast object of type System.Int32 to type System.String.

    I had a look at the application event log, which shows:

        Stack trace: at System.Web.Security.SqlRoleProvider.GetRolesForUser(String username)
        at System.Web.Security.RolePrincipal.IsInRole(String role)
        at System.Web.Configuration.AuthorizationRule.IsTheUserInAnyRole(StringCollection roles, IPrincipal principal)
        at System.Web.Configuration.AuthorizationRule.IsUserAllowed(IPrincipal user, String verb)
        at System.Web.Configuration.AuthorizationRuleCollection.IsUserAllowed(IPrincipal user, String verb)
        at System.Web.Security.UrlAuthorizationModule.OnEnter(Object source, EventArgs eventArgs)
        at System.Web.HttpApplication.SyncEventExecutionStep.System.Web.HttpApplication.IExecutionStep.Execute()
        at System.Web.HttpApplication.ExecuteStep(IExecutionStep step, Boolean& completedSynchronously)

    Has anyone come across this issue, and if so what is the fix? Thanks, JS

    Read the article

  • C# Getting Just Date From Timestamp

    - by Soo
    If I have a timestamp in the form yyyy-mm-dd hh:mm:ss:mmm, how can I extract just the date from it? For instance, if a timestamp reads "2010-05-18 08:36:52:236", what is the best way to get 2010-05-18 from it? What I'm trying to do is isolate the date portion of the timestamp and define a custom time for it to create a new timestamp. Is there a more efficient way to define the time of the timestamp without first taking out the date and then adding a new time?

    Read the article

  • MYSQL Convert rows to columns performance problem

    - by Tarski
    I am doing a query that converts rows to columns, similar to this post, but have encountered a performance problem. Here is the query:

        SELECT Info.Customer, Answers.Answer, Answers.AnswerDescription,
               Details.Code1, Details.Code2, Details.Code3
        FROM Info
        LEFT OUTER JOIN Answers ON Info.AnswerID = Answers.AnswerID
        LEFT OUTER JOIN (SELECT ReferenceNo,
                                MAX(CASE DetailsIndicator WHEN 'cde1' THEN DetailsCode ELSE NULL END) Code1,
                                MAX(CASE DetailsIndicator WHEN 'cde2' THEN DetailsCode ELSE NULL END) Code2,
                                MAX(CASE DetailsIndicator WHEN 'cde3' THEN DetailsCode ELSE NULL END) Code3
                         FROM DetailsData
                         GROUP BY ReferenceNo) Details
        ON Info.ReferenceNo = Details.ReferenceNo

    There are fewer than 300 rows returned, but the DetailsData table is about 180 thousand rows. The query takes 45 seconds to run and needs to take only a few seconds. When I type show processlist; into MySQL, it hangs on "Sending Data". Any thoughts as to what the performance problem might be?
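
    A first thing to try, hedged since it depends on the real schema: give DetailsData a covering index so the derived table's GROUP BY can walk the index instead of scanning and sorting all 180k rows.

        CREATE INDEX idx_detailsdata_pivot
            ON DetailsData (ReferenceNo, DetailsIndicator, DetailsCode);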

    Read the article

  • PostgreSQL - best way to return an array of key-value pairs

    - by Matt W
    I'm trying to select a number of fields, one of which needs to be an array with each element of the array containing two values. Each array item needs to contain a name (character varying) and an ID (numeric). I know how to return an array of single values (using the ARRAY keyword) but I'm unsure of how to return an array of an object which in itself contains two values. The query is something like:

        SELECT t.field1, t.field2,
               ARRAY( -- with each element containing two values, i.e. {'TheName', 1}
               )
        FROM MyTable t

    I read that one way to do this is by selecting the values into a type and then creating an array of that type. The problem is, the rest of the function is already returning a type (which means I would then have nested types - is that OK? If so, how would you read this data back in application code - i.e. with a .Net data provider like NPGSQL?) Any help is much appreciated.
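
    One version of the type-based approach, sketched with made-up names (name_and_id, OtherTable, ref_id); PostgreSQL lets you cast a ROW constructor to a composite type and collect those rows into an array:

        CREATE TYPE name_and_id AS (name character varying, id numeric);

        SELECT t.field1, t.field2,
               ARRAY(SELECT ROW(o.name, o.id)::name_and_id
                     FROM   OtherTable o
                     WHERE  o.ref_id = t.id) AS pairs
        FROM MyTable t;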

    Read the article

  • How can I alter a temp table?

    - by William
    I need to create a temp table, then add a new int NOT NULL AUTO_INCREMENT field to it so I can use the new field as a row number. What's wrong with my query?

        SELECT post, newid
        FROM ((SELECT post FROM `test_posts`) temp
        ALTER TABLE temp ADD COLUMN newid int NOT NULL AUTO_INCREMENT)
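
    For comparison, the working form of the pattern is usually separate statements rather than one (a sketch, assuming a test_posts source table):

        CREATE TEMPORARY TABLE temp
            SELECT post FROM `test_posts`;

        ALTER TABLE temp
            ADD COLUMN newid int NOT NULL AUTO_INCREMENT,
            ADD PRIMARY KEY (newid);

        SELECT post, newid FROM temp;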

    Read the article

  • TSQL: Global Script Variable?

    - by Abs
    Hello all, I am making use of variables in my TSQL queries. As my script has grown, I have separated each part by GO; now the problem is that I need access to the variables declared at the top of my script. How can I still access these variables? Hopefully this is something simple and straightforward. Thanks all
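
    A common workaround, sketched here: park the values somewhere that survives batch boundaries, since GO ends the scope of ordinary variables. A temp table works (SQLCMD's :setvar is another route):

        CREATE TABLE #vars (name sysname PRIMARY KEY, value nvarchar(100));
        INSERT INTO #vars VALUES ('customer_id', '42');
        GO

        DECLARE @customer_id nvarchar(100);
        SELECT @customer_id = value FROM #vars WHERE name = 'customer_id';
        -- use @customer_id anywhere in this batch
        GO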

    Read the article
