Search Results

Search found 7311 results on 293 pages for 'rows'.


  • non-scalar type requested

    - by lego69
    Can somebody please help me with the error "conversion from `A' to non-scalar type `B' requested"? I have a class A and a class B derived from it, but I have problems with these lines:

        A a(1);
        A *pb = new B(a);
        B b = *pb; // here I get the error

    Thanks in advance for any help. The classes are declared as follows:

        class A {
        protected:
            int player;
        public:
            A(int initPlayer = 0);
            A(const A&);
            A& operator=(const A&);
            virtual ~A(){};
            virtual void foo();
            void foo() const;
            operator int();
        };

        class B: public A {
        public:
            B(int initPlayer): A(initPlayer){};
            ~B(){};
            virtual void foo();
        };

    Read the article

  • Does Oracle 11g automatically index fields frequently used for full table scans?

    - by gustafc
    I have an app using an Oracle 11g database. I have a fairly large table (~50k rows) which I query thus: SELECT omg, ponies FROM table WHERE x = 4. Field x was not indexed, I discovered. This query happens a lot, but the thing is that the performance wasn't too bad. Adding an index on x did make the queries approximately twice as fast, which is far less than I expected. On, say, MySQL, it would have made the query at least ten times faster. I suspect Oracle adds some kind of automatic index when it detects that I query a non-indexed field often. Am I correct? I can find nothing even implying this in the docs.
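
    For reference, a quick sketch of how to check which indexes actually exist on that column, using Oracle's standard USER_IND_COLUMNS data-dictionary view (the table name below is a placeholder):

        SELECT index_name, column_name, column_position
        FROM   user_ind_columns
        WHERE  table_name = 'YOUR_TABLE'
        ORDER  BY index_name, column_position;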

    Read the article

  • Oracle SQL: Query results from the previous X ISO weeks (where X might be > 52)

    - by tommy-o-dell
    How could I adapt this query to show the previous 61 weeks (still excluding the current week)? My query currently shows me the total weekly sales for 2010, grouped by ISO week and ISO year (excluding the current week):

        select to_char(order_date,'IYYY') as iso_year,
               to_char(order_date,'IW') as iso_week,
               sum(sale_amount)
        from orders
        where to_char(order_date,'IW') <> to_char(SYSDATE,'IW')
          and to_char(order_date,'IYYY') = 2010
        group by to_char(order_date,'IYYY'), to_char(order_date,'IW')

    I realize I could probably just omit the "2010" requirement, order by descending date and limit the results to a certain number of rows, but that just doesn't seem right! I'd much appreciate any help pointing me in the right direction.
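
    A sketch of one way this could be adapted, assuming order_date is a DATE column. It filters on the date itself rather than on formatted strings, so it covers exactly the 61 complete ISO weeks before the current one:

        select to_char(order_date,'IYYY') as iso_year,
               to_char(order_date,'IW') as iso_week,
               sum(sale_amount) as weekly_sales
        from orders
        where order_date >= trunc(sysdate,'IW') - (61 * 7)  -- start of the ISO week 61 weeks ago
          and order_date <  trunc(sysdate,'IW')             -- excludes the current ISO week
        group by to_char(order_date,'IYYY'), to_char(order_date,'IW')
        order by iso_year, iso_week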

    Read the article

  • Put specific tds from a table row into edit using jQuery (then update w/ ajax)

    - by bbqsauced
    I'm somewhat new to jQuery, so I could use some help here. This is my issue: I have a PHP script outputting a dynamic table. Each row has an "edit" button, plus some other fields. Only 3 of those need to be turned into an input box. The edit button should only put that specific row into "edit mode." I got as far as assigning each row a unique class by adding a number to the end of it. I have been able to use jQuery to change all of the rows into edit mode, but I need it to be specific to a row. An example row would have classes like name0, price0, and desc0. The next row would go on to classes name1, price1, and desc1 (for the fields that need to be changed). How can I reference these values and pass them to jQuery so it processes an event on just those elements?

    Read the article

  • Hibernate inserting into join table

    - by Karl
    I have several entities. Two of them have a many-to-many relation. When I do a bigger operation on these entities it fails with this exception: org.hibernate.exception.ConstraintViolationException: could not insert collection rows. I execute the operation in a @Transactional context. I don't do any explicit flushing in my DAOs; the flush is triggered by a query. In the queue are 15 elements (all of the same structure). One of them always fails, but it's always a different one (I checked) and always at a different position. Does anybody have a hint as to what I might be doing wrong? My mapping:

        @ManyToMany(targetEntity = CategoryImpl.class)
        protected Set<Category> categories = new HashSet<Category>();

    Read the article

  • Oracle ORA-01401 error

    - by Sachin Chourasiya
    I have a stored procedure in Oracle 9i which inserts records into a table. The table has a primary key built to ensure duplicate rows do not exist. I insert a record by calling this stored procedure and it works properly the first time. I then try to insert a duplicate record, expecting a unique constraint violation error, but instead I get ORA-01401: inserted value too large for column. I know what the error means, but my question is: if the inserted value really is too large for the column, how did it succeed on the first attempt?
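
    One way to narrow this down (a sketch using Oracle's standard USER_TAB_COLUMNS data-dictionary view; the table name is a placeholder) is to compare the declared column sizes against the values the procedure is actually passing in:

        SELECT column_name, data_type, data_length
        FROM   user_tab_columns
        WHERE  table_name = 'YOUR_TABLE';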

    Read the article

  • Beginner LINQ syntax and EF4 question

    - by user564577
    With the following LINQ snippet I get a list of clients with addresses filtered by the specifications, but the shape of the entities returned is not what I had expected. The data is one client with two addresses and one client with one address. The query returns 3 rows of clients, each with one address (Client 1 = Address1, Client 1 = Address2, Client 2 = Address3):

        var query = from t1 in context.Clients.Where(specification.SatisfiedBy()).Include("ClientAddresses")
                    join t2 in context.ClientAddresses.Where(spec.SatisfiedBy()) on t1.ClientKey equals t2.ClientKey
                    select t1;

    My expectation was a list with only two clients in it: one client with a collection of two addresses and one client with a collection of one address (Client 1 = Address1 / Address2, Client 2 = Address3). What am I missing? Thanks!

    Read the article

  • Query to find duplicate items across 2 columns

    - by Rico
    I have this table:

        Antecedent  Consequent
        I1          I2
        I1          I1,I2,I3
        I1          I4,I1,I3,I4
        I1,I2       I1
        I1,I2       I1,I4
        I1,I2       I1,I3
        I1,I4       I3,I2
        I1,I2,I3    I1,I4
        I1,I3,I4    I4

    As you can see it's pretty messed up. Is there any way I can remove rows where an item in the consequent also exists in the antecedent (within the same row)? For example:

        INPUT:
        Antecedent  Consequent
        I1          I2
        I1          I1,I2,I3     <---- DELETE since I1 exists in antecedent
        I1          I4,I1,I3,I4  <---- DELETE since I1 exists in antecedent
        I1,I2       I1           <---- DELETE since I1 exists in antecedent
        I1,I2       I1,I4        <---- DELETE since I1 exists in antecedent
        I1,I2       I1,I3        <---- DELETE since I1 exists in antecedent
        I1,I4       I3,I2
        I1,I2,I3    I1,I4        <---- DELETE since I1 exists in antecedent
        I1,I3,I4    I4           <---- DELETE since I4 exists in antecedent

        OUTPUT:
        Antecedent  Consequent
        I1          I2
        I1,I4       I3,I2

    Is there any way I can do that with a query?
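
    One possible approach, sketched for MySQL. It assumes the comma-separated lists contain no spaces, that the table is called rules, and that there is a small helper table called items listing every possible item token (I1, I2, ...); the table names and the helper table are assumptions, not taken from the question:

        -- Deletes every rule row whose antecedent and consequent share at least one item.
        DELETE r
        FROM   rules AS r
        WHERE  EXISTS (
                 SELECT 1
                 FROM   items AS i
                 WHERE  FIND_IN_SET(i.item, r.antecedent) > 0
                   AND  FIND_IN_SET(i.item, r.consequent) > 0
               );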

    Read the article

  • Updating Database From Dataset?

    - by Ases
    I want to update my database from my dataset:

        mydataadapter = new MySqlDataAdapter("SELECT * FROM table0; SELECT * FROM table1; SELECT * FROM table2;", con);
        mydataadapter.Fill(dataset);
        // ......
        // for example, I'm making a change like this
        dataset.Tables[2].Rows[1][3] = "S";
        // then updating the database
        MySqlCommandBuilder com = new MySqlCommandBuilder(mydataadapter);
        mydataadapter.Update(dataset, "table2");

    It then returns this error: "Update unable to find TableMapping['table2'] or DataTable 'table2'." Do you have any advice?

    Read the article

  • Store data in an inconvenient table or create a derived table?

    - by user1705685
    I have a certain predefined database structure that I am stuck with. The question is whether this structure is OK for ORM, or whether I should add a processing layer that would create a more convenient structure every time something is inserted into the original DB. To simplify, here's roughly what it looks like. I have a person table:

        PersonId
        Name

    And I have a properties table:

        PersonId
        PropertyType
        PropertyValue

    So, for person John Doe... (1, 'John Doe') ...I could have three properties: (1, 'phone', '555-55-55'), (1, 'email', '[email protected]'), (1, 'type', 'employee'). By using ORM I would like to get a "person" object that would have the properties "name", "phone", "email", "type". Can Propel do that? How efficient is it? Is it a better idea to create a table with columns "phone", "email", "type" and fill it automatically as new rows are inserted into the properties table?
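
    Regarding the last part, one alternative to maintaining a second physical table is to flatten the property rows on the fly with conditional aggregation (a sketch; the property-type names come from the example above, and the query could just as well be wrapped in a view instead of a table):

        SELECT p.PersonId,
               p.Name,
               MAX(CASE WHEN pr.PropertyType = 'phone' THEN pr.PropertyValue END) AS phone,
               MAX(CASE WHEN pr.PropertyType = 'email' THEN pr.PropertyValue END) AS email,
               MAX(CASE WHEN pr.PropertyType = 'type'  THEN pr.PropertyValue END) AS person_type
        FROM   person p
        LEFT JOIN properties pr ON pr.PersonId = p.PersonId
        GROUP  BY p.PersonId, p.Name;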

    Read the article

  • Query for multiple joins

    - by Shailaja
    I have 3 tables named dataset, dataelem and transformdataelem, with column names as below:

        main.Dataset
        ------------
        datasetID (PK)
        applicationID

        main.Dataelem
        -------------
        dataelemID (PK)
        datasetID (FK)
        dataelemname
        biztermID

        main.Transformdataelem
        ----------------------
        OutputdataelemID
        InputdataelemID

    My requirement is: all tables are related by these references. Extract all the dataelemID rows from the dataelem table where the applicationID of the dataset table is equal to 1044 and biztermID is null. Then the resulting dataelemIDs should be matched against the OutputdataelemID of the Transformdataelem table to get the respective InputdataelemIDs. Finally, with these matched InputdataelemIDs we should get the dataelemnames from the dataelem table.
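
    A sketch of one way to express that as a single query with joins, assuming the keys line up exactly as described above:

        SELECT de_in.dataelemID   AS input_dataelem_id,
               de_in.dataelemname AS input_dataelem_name
        FROM   main.Dataelem de
        JOIN   main.Dataset ds
               ON ds.datasetID = de.datasetID
        JOIN   main.Transformdataelem t
               ON t.OutputdataelemID = de.dataelemID
        JOIN   main.Dataelem de_in
               ON de_in.dataelemID = t.InputdataelemID
        WHERE  ds.applicationID = 1044
        AND    de.biztermID IS NULL;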

    Read the article

  • Query to return substring from string in SQL Server

    - by Jowie
    I have a user-defined function called Sync_CheckData under scalar-valued functions in Microsoft SQL Server. What it actually does is check whether the quantity of issued product and the balance quantity are the same. If something is wrong, it returns an ErrorStr nvarchar(255). Example output: "Balance Stock Error for Product ID : 4". From the above string, I want to get the 4 so that later on I can SELECT the rows which are giving errors by using a WHERE clause (WHERE Product_ID = 4). Which SQL function can I use to get the substring?
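
    A minimal sketch of one way to pull out the part after the colon with CHARINDEX and SUBSTRING (it assumes the error string always contains a single ':' followed by the ID):

        DECLARE @ErrorStr nvarchar(255);
        SET @ErrorStr = N'Balance Stock Error for Product ID : 4';

        SELECT CAST(LTRIM(SUBSTRING(@ErrorStr,
                                    CHARINDEX(':', @ErrorStr) + 1,
                                    LEN(@ErrorStr))) AS int) AS Product_ID;

    The same expression could then be used in the WHERE clause to compare against Product_ID.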

    Read the article

  • Retrieving data from a grid view cell to a text box

    - by Bader
    Hello, I am trying to retrieve a cell's data into a textbox. When I select any row in the grid view, the textbox should take the new value. I have already enabled AutoPostBack on the textbox. Here is my code:

        protected void GridView2_SelectedIndexChanged(object sender, EventArgs e)
        {
            TextBox3.Text = GridView2.Rows[GridView2.SelectedIndex].Cells[2].Text;
        }

    However, there is no syntax error; it just doesn't put anything into the textbox. Any suggestions? I am using:

        using System;
        using System.Collections.Generic;
        using System.Linq;
        using System.Web;
        using System.Web.UI;
        using System.Web.UI.WebControls;
        using System.Data.SqlClient;
        using System.Data.Sql;

    I work in C#, Visual Studio 2010 Express (Web Developer).

    Read the article

  • 100+ tables to be joined

    - by deian
    Hi guys, I was wondering if anyone ever had a chance to measure how 100 joined tables would perform? Each table would have an ID column with a primary index, and all tables are related 1:1. It is a common problem within many data-entry applications where we need to collect 1000+ data points. One solution would be to have one big table with 1000+ columns; the alternative would be to split them into multiple tables and join them when necessary. So perhaps the more realistic question is how 30 tables (30 columns each) would behave with a multi-table join. 500K-1M rows should be the expected size of the tables. Cheers

    Read the article

  • DB management for Heroku apps

    - by zetarun
    Hi all, I'm fairly new to both Rails and Heroku, but I'm seriously thinking of using it as a platform to deploy my Ruby/Rails applications. I want to use all the power of Heroku, so I prefer the "embedded" PostgreSQL managed by Heroku over the add-on for Amazon RDS for MySQL, but I'm not so confident without being able to access my data from a SQL client... I know that in a well-made app you have no need to access the DB directly, but there are some situations (adding rows to a config table, seeing data not mapped in a view, updating some columns for debugging, performance monitoring, running queries for reporting, etc.) when this can be useful... How do you solve this problem? What's your experience in a real-life app powered by Heroku? Thanks!

    Read the article

  • Help with SQL query involving two tables and a max date

    - by wes
    Hello all... firstly, any help is greatly appreciated! I have been searching for a solution to my problem for a while and haven't found exactly what I am looking for. I have two tables, notifications and mailmessages. notifications has fields (notifytime, notifynumber, accountnumber). mailmessages has fields (id, messagesubject, messagenumber, accountnumber). My goal is to create a single SQL query that retrieves distinct rows from mailmessages WHERE the accountnumber is a specific number AND notifynumber = messagenumber AND only the most recent notifytime from the notifications table, where the accountnumbers match in both tables. I am using SQL Server Express 2008 as a backend to an ASP.NET page... this query should return distinct messages for an account with only the most recent date from the notifications table. Please help! I'll buy you a beer!
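
    A sketch of one interpretation in T-SQL; it assumes "most recent" means the latest notifytime per account, and @AccountNumber is a placeholder for the specific account number:

        SELECT DISTINCT m.id,
               m.messagesubject,
               m.messagenumber,
               m.accountnumber,
               n.notifytime
        FROM   mailmessages m
        JOIN   notifications n
               ON  n.notifynumber  = m.messagenumber
               AND n.accountnumber = m.accountnumber
        WHERE  m.accountnumber = @AccountNumber
        AND    n.notifytime = (SELECT MAX(n2.notifytime)
                               FROM   notifications n2
                               WHERE  n2.accountnumber = n.accountnumber);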

    Read the article

  • Is count(*) really expensive?

    - by Anil Namde
    I have a page with 4 tabs displaying 4 different reports based on different tables. I obtain the row count of each table using a select count(*) from <table> query and display the number of rows available in each table on the tabs. As a result, each page postback causes 5 count(*) queries to be executed (4 to get the counts and 1 for pagination) plus 1 query to get the report content. Now my question is: are count(*) queries really expensive? Should I keep the row counts (at least those displayed on the tabs) in the page's view state instead of querying multiple times? How expensive are COUNT(*) queries?
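
    As a side note, one way to cut the number of round trips per postback, regardless of how COUNT(*) itself performs, is to fetch all the counts in a single statement (a sketch with placeholder table names):

        SELECT (SELECT COUNT(*) FROM report_table1) AS table1_rows,
               (SELECT COUNT(*) FROM report_table2) AS table2_rows,
               (SELECT COUNT(*) FROM report_table3) AS table3_rows,
               (SELECT COUNT(*) FROM report_table4) AS table4_rows;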

    Read the article

  • dirtyFields with itemview

    - by user1449437
    I have a master/detail application that I'm building with Backbone Marionette. As the user clicks the master rows, the detail view shows the row details. Users need to be able to edit the row, and I'd like to notify them if they try to leave the row before saving. I was thinking I'd use the dirtyFields plugin for this functionality. Has anyone else used these together? When I swap out my ItemView, how do I initialize the plugin? When I close the view, is there anything I should think about to clean it up? Any thoughts would be appreciated. Thanks.

    Read the article

  • Conditional copy between sheets in Google Docs spreadsheets

    - by user1891545
    I have this situation: one spreadsheet with another 8 sheets. Sheet1 has 16 columns filled from a web form, so when people submit it, a new row is created. I want to create a script which reads the rows and copies particular columns from that row into specific sheets:

        A B C D E F G H I J K L M N O P
        1 X X X X X X X X X X X X X X X X

        SHEET1  SHEET(F)  SHEET(G)  SHEET(H)  SHEET4(I)...

    If there is some data in column E, copy column A, column B, column C and column E from Sheet1 to the last row of sheet E; if there is no data in column E, do nothing and continue. If there is some data in column F, copy column A, column B, column C and column F from Sheet1 to the last row of sheet F; if there is no data in column F, do nothing and continue. ... Also, I want to know whether it is possible to launch this script from an onSubmitForm() function, so that as soon as the row is inserted the script runs automatically and classifies the data between the sheets.

    Read the article

  • High CPU - What to do.

    - by Udi Kantzuker
    I have a high CPU problem with MySQL; using "top" (Linux) shows CPU peaks of 90%. I was trying to find the source of the problem, so I turned on the general log and the slow query log. The slow query log did not find anything. The DB contains a few small tables and one large table with almost 100k rows; the storage engine is MyISAM. A strange thing I have noticed is that on the large table SELECT and INSERT are very fast, but UPDATE takes 0.2-0.5 seconds. I have already used OPTIMIZE and REPAIR with no improvement. The table is updated frequently; could this be the source of the high CPU%? What can I do to improve this?
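
    A small diagnostic sketch that might help narrow it down, using standard MySQL commands run during a CPU spike: SHOW FULL PROCESSLIST lists the statements currently executing, and the Table_locks_waited counter hints at whether the frequent updates are queueing behind MyISAM's table-level locks.

        SHOW FULL PROCESSLIST;
        SHOW STATUS LIKE 'Table_locks_waited';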

    Read the article

  • Is there a way to hide a row or column in excel without using VBA?

    - by AJ
    I know of several approaches using a macro (VBA) to show/hide columns and rows in Excel, but I cannot figure out or find a way to do this using either a formula or conditional formatting. Of particular interest is Excel 2007 - but I'd be curious to know if someone has managed to do it in any version of Excel. For those who want background, I have a spread of data with dates across the top and labels down the first column. I would like to specify a date window (on another sheet) as two cells with drop down dates (months) which would then show/hide the appropriate columns on the data sheet.

    Read the article

  • Random select is not always returning a single row.

    - by Lieven
    The intention of the following (simplified) code fragment is to return one random row. Unfortunately, when we run this fragment in the query analyzer, it returns between zero and three results. As our input table consists of exactly 5 rows with unique IDs, and as we perform a select on this table where ID equals a random number, we are stumped that more than one row could ever be returned. Note: among other things, we already tried casting the checksum result to an integer, to no avail.

        DECLARE @Table TABLE
        (
            ID  INTEGER IDENTITY (1, 1),
            FK1 INTEGER
        )

        INSERT INTO @Table
        SELECT 1 UNION ALL SELECT 2 UNION ALL SELECT 3 UNION ALL SELECT 4 UNION ALL SELECT 5

        SELECT *
        FROM @Table
        WHERE ID = ABS(CHECKSUM(NEWID())) % 5 + 1
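
    For what it's worth, a commonly suggested variant is to evaluate the random value once, before the SELECT, rather than in the WHERE clause, where NEWID() can be re-evaluated for each row so the predicate may match zero or several rows. A sketch continuing the fragment above:

        DECLARE @RandomID INTEGER
        SET @RandomID = ABS(CHECKSUM(NEWID())) % 5 + 1   -- evaluated exactly once

        SELECT *
        FROM @Table
        WHERE ID = @RandomID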

    Read the article

  • What's the simplest way to let the user download a file in ASP.NET MVC?

    - by drasto
    In an ASP.NET MVC application I have a database table. I want to have a button on some view page; if a user clicks that button, my application will generate an XML file containing all rows in the database. The file containing the XML should then be sent to the client so that the user sees a download pop-up window. Similarly, I want to allow the user to upload an XML file whose content will be added to the database. What's the simplest way to let the user upload and download the file? Thanks for all the answers.

    Read the article

  • MySQL: Records inserted by hour, for the last 24 hours

    - by Andrew M
    I'm trying to list the number of records per hour inserted into a database over the last 24 hours. Each row displays the records inserted in that hour, as well as how many hours ago it was. Here's my query now:

        SELECT COUNT(*), FLOOR( TIME_TO_SEC( TIMEDIFF( NOW(), time)) / 3600 )
        FROM `records`
        WHERE time > DATE_SUB(NOW(), INTERVAL 24 HOUR)
        GROUP BY HOUR(time)
        ORDER BY time ASC

    Right now it returns:

        28  23
        62  23
        14  20
         1   4
        28   3
        19   1

    That shows two rows from 23 hours ago, when it should only show one per hour. I think it has something to do with using NOW() instead of getting the time at the start of the hour, which I'm unsure how to get. There must be a simpler way of doing this.
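
    One way this is often handled (a sketch; MySQL allows grouping and ordering by a select alias) is to group by the same hours-ago bucket being displayed, rather than by HOUR(time). HOUR(time) groups by hour of day, which does not line up exactly with the hours-ago value shown, so two groups can report the same number:

        SELECT COUNT(*) AS inserted,
               FLOOR(TIME_TO_SEC(TIMEDIFF(NOW(), time)) / 3600) AS hours_ago
        FROM   `records`
        WHERE  time > DATE_SUB(NOW(), INTERVAL 24 HOUR)
        GROUP  BY hours_ago
        ORDER  BY hours_ago;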

    Read the article

  • read text files containing binary data as a single matrix in matlab

    - by user1716595
    I have a text file which contains binary data in the following manner: 00000000000000000000000000000000001011111111111111111111111111111111111111111111111111111111110000000000000000000000000000000 00000000000000000000000000000000000000011111111111111111111111111111111111111111111111000111100000000000000000000000000000000 00000000000000000000000000000000000011111111111111111111111111111111111111111111111111111111100000000000000000000000000000000 00000000000000000000000000000000000111111111111111111111111111111111111111111111111111111111100000000000000000000000000000000 00000000000000000000000000000000000011111111111111111111111111111111111111111111111111111111100000000000000000000000000000000 00000000000000000000000000000000000000011111111111111111111111111111111111111111111111111111100000000000000000000000000000000 00000000000000000000000000000000000000011111111111111111111111111111111111111111111111000111110000000000000000000000000000000 00000000000000000000000000000000000000111111111111111111111111111111111111111111111111111111110000000000000000000000000000000 00000000000000000000000000000000000000000000111111111111111111111111111111111111110000000011100000000000000000000000000000000 00000000000000000000000000000000000000011111111111111111111111111111111111111111111111100111110000000000000000000000000000000 00000000000000000000000000000000000111111111111111111111111111111111111111111111111111110111110000000000000000000000000000000 00000000000000000000000000000000001111111111111111111111111111111111111111111111111111111111100000000000000000000000000000000 00000000000000000000000000000000000000001111111111111111111111111111111111111111111111000011100000000000000000000000000000000 00000000000000000000000000000000000000001111111111111111111111111111111111111111111111000011100000000000000000000000000000000 00000000000000000000000000000000000001111111111111111111111111111111111111111111111111111111000000000000000000000000000000000 00000000000000000000000000000000000000011111111111111111111111111111111111111111111110000011100000000000000000000000000000000 00000000000000000000000000000000000000000000011111111111111111111111111111111111100000000011100000000000000000000000000000000 00000000000000000000000000000000000000111111111111111111111111111111111111111111111111110111100000000000000000000000000000000 Plz note that each 1 or 0 is independent i.e the values are not decimal.I need to find the column wise sum of the file.There are 125 columns in all (here it is jumping onto the next line) and there are 840946 rows. I have tried textread,fscanf and a few other matlab commands but the result is that they all read each row in decimal format and create a 840946*1 array.I want to create a 840946*125 array to compute a column wise sum. Kindly help, Thanks!

    Read the article
