Search Results

Search found 26283 results on 1052 pages for 'temporary table'.


  • javascript table sorting/paging (client-side). How big is too big?

    - by Aheho
    I'm using a jQuery plugin called Tablesorter to do client-side sorting of a log table in one of my applications. I am also making use of the tablepager add-in. I really like the responsiveness that client-side sorting and paging brings to the party. I also like how you don't have to hit the web server or database repeatedly. However, I can see that, in time, the log I'm displaying could grow quite large. I'm sure there comes a point where client-side paging and sorting is going to be impractical. At what point will this technique begin to collapse under its own weight? 500 records? 2000 records? 10,000 records? EDIT: In a nutshell, what criteria would you use to determine whether to use client-side sorting/paging as opposed to server-side paging? Does the size of the expected result set factor into your decision? Where is the tipping point?
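
    For comparison, if the log does outgrow what the browser can comfortably hold, the usual alternative is server-side paging. A minimal T-SQL sketch of that side of the trade-off (table and column names here are hypothetical, using SQL Server 2005-era ROW_NUMBER() paging):

        -- Return one page of log rows, newest first.
        DECLARE @PageNumber int, @PageSize int;
        SET @PageNumber = 3;
        SET @PageSize = 50;

        SELECT LogID, LoggedAt, Message
        FROM (
            SELECT LogID, LoggedAt, Message,
                   ROW_NUMBER() OVER (ORDER BY LoggedAt DESC) AS RowNum
            FROM LogEntries
        ) AS Paged
        WHERE RowNum BETWEEN (@PageNumber - 1) * @PageSize + 1
                         AND @PageNumber * @PageSize
        ORDER BY RowNum;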

    Read the article

  • NSFetchedResultsController: changing predicate not working?

    - by icerelic
    Hi, I'm writing an app with two tables on one screen. The left table is a list of folders and the right table shows a list of files. When a row is tapped on the left, the right table will display the files belonging to that folder. I'm using Core Data for storage. When the selection of folder changes, the fetch predicate of the right table's NSFetchedResultsController will change and perform a new fetch, then reload the table data. I used the following code snippet: NSPredicate *predicate = [NSPredicate predicateWithFormat:@"list = %@",self.list]; [fetchedResultsController.fetchRequest setPredicate:predicate]; NSError *error = nil; if (![[self fetchedResultsController] performFetch:&error]) { NSLog(@"Unresolved error %@, %@", error, [error userInfo]); abort(); } [table reloadData]; However, the fetch results are still the same. I've NSLog'ed "predicate" before and after the fetch, and it was correct, with the updated information. The fetch results stay the same as the initial fetch (when the view is loaded). I'm not very familiar with the way Core Data fetches objects (is there a caching system?), but I've done similar things before (changing predicates, re-fetching data, and refreshing the table) with single table views and everything went well. If someone could give me a hint I would really appreciate it. Thanks in advance.

    Read the article

  • How to make a Table of Content auto-update?

    - by Dan
    I am using Word 2007, but saving my documents in .doc (as opposed to .docx) formats because that's company policy. I have the ToC set up fine, but is there a way to have it update automatically (at document open, save or otherwise)? Word help suggests that it should update upon opening the document, but that doesn't seem to happen. Any ideas?

    Read the article

  • Simulating O_NOFOLLOW (2): Is this other approach safe?

    - by Daniel Trebbien
    As a follow-up question to this one, I thought of another approach which builds off of @caf's answer for the case where I want to append to file name and create it if it does not exist. Here is what I came up with: Create a temporary directory with mode 0700 in a system temporary directory on the same filesystem as file name. Create an empty, temporary, regular file (temp_name) in the temporary directory (only serves as placeholder). Open file name for reading only, just to create it if it does not exist. The OS may follow name if it is a symbolic link; I don't care at this point. Make a hard link to name at temp_name (overwriting the placeholder file). If the link call fails, then exit. (Maybe someone has come along and removed the file at name, who knows?) Use lstat on temp_name (now a hard link). If S_ISLNK(lst.st_mode), then exit. open temp_name for writing, append (O_WRONLY | O_APPEND). Write everything out. Close the file descriptor. unlink the hard link. Remove the temporary directory. (All of this, by the way, is for an open source project that I am working on. You can view the source of my implementation of this approach here.) Is this procedure safe against symbolic link attacks? For example, is it possible for a malicious process to ensure that the inode for name represents a regular file for the duration of the lstat check, then make the inode a symbolic link with the temp_name hard link now pointing to the new, symbolic link? I am assuming that a malicious process cannot affect temp_name.

    Read the article

  • How do I sort an internationalized i18n table with symfony and doctrine?

    - by Maurizio
    I would like to display a list of records from an internationalized table using sfDoctrinePager. Not all the records have been translated into all the languages supported by the application, so I had to implement a fallback mechanism for some fields (by overriding the getFoo() function in Bar.class.php, as explained in another post here). I have a different fallback list for each culture. Everything works fine until it comes to sorting the records in alphabetical order. I'm sorting the records at the SQL (Dql) level, by adding an ->orderBy('t.name') to the query: $q = Doctrine::getTable('Foo') ->createQuery('f') ->leftJoin('f.Translation t') ->orderBy('t.name') But here comes the trouble: the list does not get sorted correctly, regardless of the active culture. I get rather better results when I limit the translations to the active culture, like this: ->leftJoin('f.Translation t WITH lang = ?', $request->getParameter('sf_culture')); Then the sorting is correct, as long as all the translations exist for the active culture. If a translation does not exist and I have to take the name from the fallback language, the record will be displayed at the very beginning of the list (I understand this happens because the value for the current culture is null). My question is: is there a best practice for getting internationalized fields (needing fallbacks) sorted correctly with doctrine and sfDoctrinePager? Thank you in advance.

    Read the article

  • Avoiding Duplicate Data in DB (for use with Rails)

    - by ants
    I have five tables that I am trying to get to work nicely together but may need some help. I have three main tables: accounts, members and roles, with two join tables, account_members and account_member_roles. The accounts and members tables are joined by the account_members (fk account_id and member_id) table. The other two tables are the problem (roles and account_member_roles). A member of an account can have more than one role, and I have the account_member_roles (fk account_member_id and role_id) table joining the account_members join table and the roles table. That seems logical, but can you have a relationship with a join table? What I'd like to be able to do is, when creating an account for instance, I would like @account.save to include the roles and update the account_member_roles table neatly ..... but through the account_members join table. I've tried ..... accepts_nested_attributes_for :members, :account_member_roles in the account.rb but I get ..... ActiveRecord::HasManyThroughCantAssociateThroughHasManyReflection (Cannot modify association 'Account#account_member_roles' because the source reflection class 'AccountMemberRole' is associated to 'AccountMember' via :has_many.) upon trying to save a record. Any advice on how I should approach this? CIA -ants

    Read the article

  • What is the best way to auto-generate INSERT statements for a SQL Server table?

    - by JosephStyons
    We are writing a new application, and while testing, we will need a bunch of dummy data. I've added that data by using MS Access to dump excel files into the relevant tables. Every so often, we want to "refresh" the relevant tables, which means dropping them all, re-creating them, and running a saved MS Access append query. The first part (dropping & re-creating) is an easy sql script, but the last part makes me cringe. I want a single setup script that has a bunch of INSERTs to regenerate the dummy data. I have the data in the tables now. What is the best way to automatically generate a big list of INSERT statements from that dataset? I'm thinking of something like in TOAD (for Oracle) where you can right-click on a grid and click Save As-Insert Statements, and it will just dump a big sql script wherever you want. The only way I can think of doing it is to save the table to an excel sheet and then write an excel formula to create an INSERT for every row, which is surely not the best way. I'm using the 2008 Management Studio to connect to a SQL Server 2005 database.
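
    One approach that stays entirely inside T-SQL is to have a SELECT build the INSERT text by string concatenation and run it with results-to-text. A minimal sketch, assuming a hypothetical dbo.Customers(CustomerID int, Name varchar(50)) table; each column needs its own CAST/quoting rule, so this is more of a one-off generator than a general tool:

        -- Emit one INSERT statement per existing row; copy the output into the setup script.
        SELECT 'INSERT INTO dbo.Customers (CustomerID, Name) VALUES ('
               + CAST(CustomerID AS varchar(12)) + ', '
               + ISNULL('''' + REPLACE(Name, '''', '''''') + '''', 'NULL')
               + ');'
        FROM dbo.Customers;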

    Read the article

  • beautifulsoup: find the n-th element's sibling

    - by deostroll
    I have a complex html DOM tree of the following nature: <table> ... <tr> <td> ... </td> <td> <table> <tr> <td> <!-- inner most table --> <table> ... </table> <h2>This is hell!</h2> </td> </tr> </table> </td> </tr> </table> I have some logic to find out the inner most table. But after having found it, I need to get the next sibling element (h2). Is there any way you can do this?

    Read the article

  • SQL Full-Text indexing not populating

    - by Sam
    We installed a clustered SQL 2005 installation on Windows 2008 and reattached our SAN drives from another machine and restored to do a migration to new hardware. There have been a few minor issues, but this one has me stuck. Trying to populate Full-Text indexes is not working. I create a basic table with some simple text in a new database and get the same results as the old indexes. 2010-09-27 10:30:46.85 spid19s Informational: Full-text Full population initialized for table or indexed view '[SQL_DBA].[dbo].[CIS_Report_Executions]' (table or indexed view ID '1767677345', database ID '5'). Population sub-tasks: 1. 2010-09-27 10:31:15.36 spid19s Error '0x80070003' occurred during full-text index population for table or indexed view '[SQL_DBA].[dbo].[CIS_Report_Executions]' (table or indexed view ID '1767677345', database ID '5'), full-text key value 0x000001DF. Attempt will be made to reindex it. 2010-09-27 10:31:15.37 spid19s The component 'MSFTE.DLL' reported error while indexing. Component path 'D:\Program Files\Microsoft SQL Server\MSSQL.1\MSSQL\Binn\MSFTE.DLL'. 2010-09-27 10:31:15.37 spid19s Error '0x80070003' occurred during full-text index population for table or indexed view '[SQL_DBA].[dbo].[CIS_Report_Executions]' (table or indexed view ID '1767677345', database ID '5'), full-text key value 0x000001E0. Attempt will be made to reindex it. The rebuild/repopulate procedure finishes, but I get zero rows in the index. The .dll in the message is present and the service accounts have access to it. My FTData also has data in it, so it seems there wouldn't be a permission issue on this folder. The application throws this error: “PHP Warning: mssql_query() [function.mssql-query]: message: Full-text catalog 'ikm_PageIndex_FText' is in an unusable state. Drop and re-create this full-text catalog. (severity 16) in E:\Inetpub\knowledgebase_insidemesa\lib\database\mssql.php on line 154” A Microsoft discussion is the only post I found which claimed to fix this - it said it was registry-related, but then didn't post the fix.
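
    For what it's worth, 0x80070003 is the Win32 "path not found" code, which after a move to new hardware often points at a full-text path that no longer exists, and the application-side message already suggests dropping and re-creating the catalog. A sketch of that, with placeholder catalog, table, column and key-index names:

        -- Drop the index and catalog, re-create them, then kick off a full population.
        DROP FULLTEXT INDEX ON dbo.MyTable;
        DROP FULLTEXT CATALOG MyCatalog;

        CREATE FULLTEXT CATALOG MyCatalog;
        CREATE FULLTEXT INDEX ON dbo.MyTable (MyTextColumn)
            KEY INDEX PK_MyTable
            ON MyCatalog;

        ALTER FULLTEXT INDEX ON dbo.MyTable START FULL POPULATION;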

    Read the article

  • SQL Server: Clustering by timestamp; pros/cons

    - by Ian Boyd
    I have a table in SQL Server where I want inserts to be added to the end of the table (as opposed to a clustering key that would cause them to be inserted in the middle). This means I want the table clustered by some column that will constantly increase. This could be achieved by clustering on a datetime column: CREATE TABLE Things ( ... CreatedDate datetime DEFAULT getdate(), [timestamp] timestamp, CONSTRAINT [IX_Things] UNIQUE CLUSTERED (CreatedDate) ) But I can't guarantee that two Things won't have the same time. So my requirements can't really be achieved by a datetime column. I could add a dummy identity int column, and cluster on that: CREATE TABLE Things ( ... RowID int IDENTITY(1,1), [timestamp] timestamp, CONSTRAINT [IX_Things] UNIQUE CLUSTERED (RowID) ) But you'll notice that my table already contains a timestamp column; a column which is guaranteed to be monotonically increasing. This is exactly the characteristic I want for a candidate cluster key. So I cluster the table on the rowversion (aka timestamp) column: CREATE TABLE Things ( ... [timestamp] timestamp, CONSTRAINT [IX_Things] UNIQUE CLUSTERED (timestamp) ) Rather than adding a dummy identity int column (RowID) to ensure an order, I use what I already have. What I'm looking for are thoughts on why this is a bad idea, and what other ideas are better. Note: Community wiki, since the answers are subjective.

    Read the article

  • PostgreSQL 9: Does Vacuuming a table on the primary replicate on the mirror?

    - by Scott Herbert
    Running PostgreSQL 9.0.1, with streaming replication keeping one read-only mirror instance up to date. Auto-vacuum is on on the primary, except for a few tables which are not vacuumed by the auto-vacuum daemon, in an effort to reduce business-hour IO. These tables are "materialised views". Each night at midnight, we run a vacuum across the database in order to clean up those tables that are excluded from the auto-vacuum. I'm wondering if that process replicates across to the mirror, or if I need to set up vacuum on the mirror as well?
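
    For reference, a sketch of how that split is usually expressed on the primary (the table name is a placeholder). On 9.0 the standby is read-only and replays the primary's vacuum work from the WAL stream, so the nightly job normally only needs to run on the primary:

        -- Keep the autovacuum daemon away from one "materialised view" table.
        ALTER TABLE reporting_summary SET (autovacuum_enabled = false);

        -- Nightly maintenance run against the primary only.
        VACUUM ANALYZE reporting_summary;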

    Read the article

  • How can I save an NSDocument concurrently?

    - by Paperflyer
    I have a document based application. Saving the document can take a few seconds, so I want to enable the user to continue using the program while it saves the document in the background. Due to the document architecture, my application is asked to save to a temporary location and that temporary file is then copied over the old file. However, this means that I can not just run my file saving code in the background and return way before it is done, since the temporary file has to be written completely before it can be copied. Is there a way to disable this temporary-file-behavior or otherwise enable file saving in the background?

    Read the article

  • Oracle: why does creating a trigger fail when there is a field called timestamp?

    - by Omar Kooheji
    I've just wasted the past two hours of my life trying to create a table with an auto-incrementing primary key based on this tutorial. The tutorial is great; the issue I've been encountering is that the CREATE TRIGGER fails if I have a column of type TIMESTAMP and a column called timeStamp in the same table... Why doesn't Oracle flag this as being an issue when I create the table? Here is the sequence of commands I enter: Creating the table: CREATE TABLE myTable (id NUMBER PRIMARY KEY, field1 TIMESTAMP(6), timeStamp NUMBER); Creating the sequence: CREATE SEQUENCE test_sequence START WITH 1 INCREMENT BY 1; Creating the trigger: CREATE OR REPLACE TRIGGER test_trigger BEFORE INSERT ON myTable REFERENCING NEW AS NEW FOR EACH ROW BEGIN SELECT test_sequence.nextval INTO :NEW.ID FROM dual; END; / Here is the error message I get: ORA-06552: PL/SQL: Compilation unit analysis terminated ORA-06553: PLS-320: the declaration of the type of this expression is incomplete or malformed Any combination that does not have the two lines with the word "timestamp" in them works fine. I would have thought the syntax would be enough to differentiate between the keyword and a column name. As I've said, I don't understand why the table is created fine but Oracle falls over when I try to create the trigger... CLARIFICATION: I know that the issue is that there is a column called timestamp, which may or may not be a keyword. My issue is why it barfed when I tried to create the trigger and not when I created the table; I would have at least expected a warning. That said, having used Oracle for a few hours, it seems a lot less verbose in its error reporting - maybe just because I'm using the Express version, though. If this is a bug in Oracle, how would someone who doesn't have a support contract go about reporting it? I'm just playing around with the Express version because I have to migrate some code from MySQL to Oracle.
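
    For comparison, a sketch of the workaround that sidesteps the clash by renaming the NUMBER column so it no longer collides with the TIMESTAMP keyword (everything else is as in the question; ts_value is just an arbitrary replacement name). Quoting the column as "TIMESTAMP" everywhere it is referenced is the other usual escape hatch, at the cost of case-sensitive quoting from then on:

        CREATE TABLE myTable (
          id       NUMBER PRIMARY KEY,
          field1   TIMESTAMP(6),
          ts_value NUMBER
        );

        CREATE SEQUENCE test_sequence START WITH 1 INCREMENT BY 1;

        CREATE OR REPLACE TRIGGER test_trigger
        BEFORE INSERT ON myTable
        REFERENCING NEW AS NEW
        FOR EACH ROW
        BEGIN
          SELECT test_sequence.nextval INTO :NEW.id FROM dual;
        END;
        /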

    Read the article

  • Record Disappeared from Mysql Table, How Can I Find Out What Happened?

    - by Jascha
    I got the fire alarm phone call, AIM messages and email today from a client stating "The site is down!, WTF happened?!" Well, after a little digging, it turns out one of the records in a table had been wiped clean, but without removing the row itself. So, I had the representation of data, but a bunch of empty fields. (Needless to say, I need to write a catch for this into my code.) My real question is: where can I figure out what happened? I've got access to phpMyAdmin and that's about it. I found some access logs in the root directory of my server, but that just tells me the client was in the admin area I built, editing that record. I'd like to know specifically what they did that made all of the data go away (what query was run, etc...). Is it possible without real server admin access? Is there a neat little PHP-to-MySQL class that returns data like this? Thanks in advance. -Jascha
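
    After the fact there is usually nothing to dig through unless some form of logging was already enabled, but for next time, a sketch of what can be switched on from a SQL prompt (MySQL 5.1+ syntax; the SET GLOBAL statements need the SUPER privilege, so they may be out of reach on shared hosting, and the binlog file name below is hypothetical):

        -- Route the general query log to a table and turn it on (verbose; for temporary use).
        SET GLOBAL log_output = 'TABLE';
        SET GLOBAL general_log = 'ON';

        -- Later, see exactly which statements hit the server.
        SELECT event_time, user_host, argument
        FROM mysql.general_log
        ORDER BY event_time DESC
        LIMIT 200;

        -- If binary logging is already on, past data changes can also be reviewed.
        SHOW BINARY LOGS;
        SHOW BINLOG EVENTS IN 'mysql-bin.000042' LIMIT 100;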

    Read the article

  • Insert a datetime value with GetDate() function to a SQL server (2005) table?

    - by David.Chu.ca
    I am working (or fixing bugs) on an application which was developed in VS 2005 C#. The application saves data to a SQL Server 2005 database. One of the insert SQL statements tries to insert a timestamp value into a field using the GetDate() T-SQL function as the datetime value. Insert into table1 (field1, ... fieldDt) values ('value1', ... GetDate()); The reason for using the GetDate() function is that the SQL server may be at a remote site, and the date time may be in a different time zone. Therefore, GetDate() will always get a date from the server. The function can be verified in SQL Management Studio; this is what I get: SELECT GetDate(), LEN(GetDate()); -- 2010-06-10 14:04:48.293 19 One thing I realize is that the length is not up to the milliseconds, i.e., 19 is actually for '2010-06-10 14:04:48'. Anyway, the issue I have right now is that after the insert, the fieldDt actually has a datetime value only up to the minute, for example, '2010-06-10 14:04:00'. I am not sure why. I don't have permission to update or change the table with a trigger to update the field. My question is: how can I use an INSERT T-SQL statement to add a new row with a datetime value (the SQL server's local date time) with a precision up to milliseconds?
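
    Two things worth noting, as a guess from the symptoms: LEN(GetDate()) forces an implicit conversion to varchar using the default style, which drops the milliseconds, so the 19 says nothing about the stored precision; and a value that comes back rounded to the minute is what you would see if fieldDt were declared as smalldatetime rather than datetime. A sketch of how to check and, if that is the case, widen the column (table and column names as in the question):

        -- See how the column is actually declared.
        SELECT COLUMN_NAME, DATA_TYPE
        FROM INFORMATION_SCHEMA.COLUMNS
        WHERE TABLE_NAME = 'table1' AND COLUMN_NAME = 'fieldDt';

        -- If it turns out to be smalldatetime (accurate only to the minute), widen it:
        ALTER TABLE table1 ALTER COLUMN fieldDt datetime;

        -- GetDate() then inserts with datetime's ~3 ms resolution.
        INSERT INTO table1 (field1, fieldDt) VALUES ('value1', GETDATE());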

    Read the article

  • Debugging apache seg fault with gdb

    - by Joyce Babu
    Apache on a production server of mine is seg faulting intermittently. I have enabled core dump option in apache configuration and have several dumped core files. Unfortunately, since it is a production server, apache or the loaded modules are not compiled with debug symbols. From what I understand, gdb cannot do much without debug symbols. Can I at least find out which module is causing the seg fault, without debug symbols? If so, how? Following is the output from a gdb backtrace (gdb) bt full #0 0xb7f1f832 in _dl_sysinfo_int80 () from /lib/ld-linux.so.2 No symbol table info available. #1 0xb7be82bc in pthread_cond_wait@@GLIBC_2.3.2 () from /lib/libpthread.so.0 No symbol table info available. #2 0xb771652a in ?? () from /usr/local/apache/modules/mod_pagespeed.so No symbol table info available. #3 0xb75df576 in ?? () from /usr/local/apache/modules/mod_pagespeed.so No symbol table info available. #4 0xb7715c20 in ?? () from /usr/local/apache/modules/mod_pagespeed.so No symbol table info available. #5 0xb7be4a49 in start_thread () from /lib/libpthread.so.0 No symbol table info available. #6 0xb7b2a63e in clone () from /lib/libc.so.6 No symbol table info available. Does this mean that /lib/ld-linux.so.2 is causing the seg fault?

    Read the article

  • Long running transactions with Spring and Hibernate?

    - by jimbokun
    The underlying problem I want to solve is running a task that generates several temporary tables in MySQL, which need to stay around long enough to fetch results from Java after they are created. Because of the size of the data involved, the task must be completed in batches. Each batch is a call to a stored procedure called through JDBC. The entire process can take half an hour or more for a large data set. To ensure access to the temporary tables, I run the entire task, start to finish, in a single Spring transaction with a TransactionCallbackWithoutResult. Otherwise, I could get a different connection that does not have access to the temporary tables (this would happen occasionally before I wrapped everything in a transaction). This worked fine in my development environment. However, in production I got the following exception: java.sql.SQLException: Lock wait timeout exceeded; try restarting transaction This happened when a different task tried to access some of the same tables during the execution of my long running transaction. What confuses me is that the long running transaction only inserts or updates into temporary tables. All access to non-temporary tables are selects only. From what documentation I can find, the default Spring transaction isolation level should not cause MySQL to block in this case. So my first question, is this the right approach? Can I ensure that I repeatedly get the same connection through a Hibernate template without a long running transaction? If the long running transaction approach is the correct one, what should I check in terms of isolation levels? Is my understanding correct that the default isolation level in Spring/MySQL transactions should not lock tables that are only accessed through selects? What can I do to debug which tables are causing the conflict, and prevent those tables from being locked by the transaction?
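
    On the debugging side of the question, a few server-level commands show who is blocking whom without touching Spring at all. One detail that often surprises people here: under the default statement-based binary logging, INSERT ... SELECT takes shared locks on the rows it reads from the source tables, so a batch that only "selects" from the shared tables while filling temporary ones can still block other writers. A sketch of the diagnostics (plain MySQL, no framework assumptions):

        -- Current transactions, their queries, and the locks being waited on.
        SHOW ENGINE INNODB STATUS;

        -- Every open connection and what it is running right now.
        SHOW FULL PROCESSLIST;

        -- The timeout that produced the "Lock wait timeout exceeded" error (default 50 s).
        SHOW VARIABLES LIKE 'innodb_lock_wait_timeout';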

    Read the article

  • PostgreSQL, Foreign Keys, Insert speed & Django

    - by Miles
    A few days ago, I ran into an unexpected performance problem with a pretty standard Django setup. For an upcoming feature, we have to regenerate a table hourly, containing about 100k rows of data, 9M on the disk, 10M indexes according to pgAdmin. The problem is that inserting them by whatever method literally takes ages, up to 3 minutes of 100% disk busy time. That's not something you want on a production site. It doesn't matter if the inserts were in a transaction, issued via plain insert, multi-row insert, COPY FROM or even INSERT INTO t1 SELECT * FROM t2. After noticing this isn't Django's fault, I followed a trial and error route, and hey, the problem disappeared after dropping all foreign keys! Instead of 3 minutes, the INSERT INTO SELECT FROM took less than a second to execute, which isn't too surprising for a table <= 20M on the disk. What is weird is that PostgreSQL manages to slow down inserts by 180x just by using 3 foreign keys. Oh, disk activity was pure writing, as everything is cached in RAM; only writes go to the disks. It looks like PostgreSQL is working very hard to touch every row in the referred tables, as 3MB/sec * 180s is way more data than the 20MB this new table takes on disk. No WAL for the 180s case, I was testing in psql directly, in Django, add ~50% overhead for WAL logging. Tried @commit_on_success, same slowness, I had even implemented multi row insert and COPY FROM with psycopg2. That's another weird thing, how can 10M worth of inserts generate 10x 16M log segments? Table layout: id serial primary, a bunch of int32, 3 foreign keys to small table, 198 rows, 16k on disk large table, 1.2M rows, 59 data + 89 index MB on disk large table, 2.2M rows, 198 + 210MB So, am I doomed to either drop the foreign keys manually or use the table in a very un-Django way by defining saving bla_id x3 and skip using models.ForeignKey? I'd love to hear about some magical antidote / pg setting to fix this.
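
    Given that dropping the keys is what removed the slowdown, one compromise that keeps the schema declared honestly is to drop and re-add the constraints inside the hourly regeneration job itself, so the check runs once over the finished data instead of once per inserted row. A sketch with placeholder table, column and constraint names:

        ALTER TABLE hourly_stats DROP CONSTRAINT hourly_stats_category_id_fkey;

        -- ... bulk INSERT / COPY of the ~100k rows here ...

        ALTER TABLE hourly_stats
            ADD CONSTRAINT hourly_stats_category_id_fkey
            FOREIGN KEY (category_id) REFERENCES category (id);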

    Read the article

  • PHP / Zend Framework: Force prepend table name to column name in result array?

    - by Brian Lacy
    I am using Zend_Db_Select currently to retrieve hierarchical data from several joined tables. I need to be able to convert this easily into an array. Short of using a switch statement and listing out all the columns individually in order to sort the data, my thought was that if I could get the table names auto-prepended to the keys in the result array, that would solve my problem. So considering the following (assembled) SQL: SELECT user.*, contact.* FROM user INNER JOIN contact ON contact.user_id = user.user_id I would normally get a result array like this: [username] => 'bob', [contact_id] => 5, [user_id] => 2, [firstname] => 'bob', [lastname] => 'larsen' But instead I want this: [user.user_id] => 2, [user.username] => 'bob', [contact.contact_id] => 5, [contact.firstname] => 'bob', [contact.lastname] => 'larsen' Does anyone have an idea how to achieve this? Thanks!
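
    One way to get keys in exactly that shape without any post-processing in PHP is to alias the columns in the SQL itself, since an associative fetch uses the alias verbatim as the array key; the columns do have to be listed (or generated from the table metadata), but the aliasing is plain SQL. A sketch of the assembled query, using MySQL backticks because the aliases contain dots:

        SELECT u.user_id    AS `user.user_id`,
               u.username   AS `user.username`,
               c.contact_id AS `contact.contact_id`,
               c.firstname  AS `contact.firstname`,
               c.lastname   AS `contact.lastname`
        FROM user AS u
        INNER JOIN contact AS c ON c.user_id = u.user_id;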

    Read the article

  • Clustered index - multi-part vs single-part index and effects of inserts/deletes

    - by Anssssss
    This question is about what happens with the reorganizing of data in a clustered index when an insert is done. I assume that it should be more expensive to do inserts on a table which has a clustered index than one that does not because reorganizing the data in a clustered index involves changing the physical layout of the data on the disk. I'm not sure how to phrase my question except through an example I came across at work. Assume there is a table (Junk) and there are two queries that are done on the table, the first query searches by Name and the second query searches by Name and Something. As I'm working on the database I discovered that the table has been created with two indexes, one to support each query, like so: --drop table Junk1 CREATE TABLE Junk1 ( Name char(5), Something char(5), WhoCares int ) CREATE CLUSTERED INDEX IX_Name ON Junk1 ( Name ) CREATE NONCLUSTERED INDEX IX_Name_Something ON Junk1 ( Name, Something ) Now when I looked at the two indexes, it seems that IX_Name is redundant since IX_Name_Something can be used by any query that desires to search by Name. So I would eliminate IX_Name and make IX_Name_Something the clustered index instead: --drop table Junk2 CREATE TABLE Junk2 ( Name char(5), Something char(5), WhoCares int ) CREATE CLUSTERED INDEX IX_Name_Something ON Junk2 ( Name, Something ) Someone suggested that the first indexing scheme should be kept since it would result in more efficient inserts/deletes (assume that there is no need to worry about updates for Name and Something). Would that make sense? I think the second indexing method would be better since it means one less index needs to be maintained. I would appreciate any insight into this specific example or directing me to more info on maintenance of clustered indexes.

    Read the article

  • Mysql return value as 0 in the fetch result.

    - by Karthik
    I have these two tables: -- -- Table structure for table `t1` -- CREATE TABLE `t1` ( `pid` varchar(20) collate latin1_general_ci NOT NULL, `pname` varchar(20) collate latin1_general_ci NOT NULL ) ENGINE=MyISAM DEFAULT CHARSET=latin1 COLLATE=latin1_general_ci; -- -- Dumping data for table `t1` -- INSERT INTO `t1` VALUES ('p1', 'pro1'); INSERT INTO `t1` VALUES ('p2', 'pro2'); -- -------------------------------------------------------- -- -- Table structure for table `t2` -- CREATE TABLE `t2` ( `pid` varchar(20) collate latin1_general_ci NOT NULL, `year` int(6) NOT NULL, `price` int(3) NOT NULL ) ENGINE=MyISAM DEFAULT CHARSET=latin1 COLLATE=latin1_general_ci; -- -- Dumping data for table `t2` -- INSERT INTO `t2` VALUES ('p1', 2009, 50); INSERT INTO `t2` VALUES ('p1', 2010, 60); INSERT INTO `t2` VALUES ('p3', 2007, 200); INSERT INTO `t2` VALUES ('p4', 2008, 501); My query is: SELECT * FROM `t1` LEFT JOIN `t2` ON t1.pid = t2.pid I am getting this result: pid pname pid year price p1 pro1 p1 2009 50 p1 pro1 p1 2010 60 p2 pro2 NULL NULL NULL My question is, I want to get a price value of 0 instead of NULL. How can I write the query so that the price value comes back as 0? Thanks in advance for the help.
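
    A minimal sketch of one common way to do that, wrapping the columns that can come back NULL from the outer join in IFNULL (COALESCE behaves the same and is more portable):

        SELECT t1.pid,
               t1.pname,
               IFNULL(t2.year, 0)  AS year,
               IFNULL(t2.price, 0) AS price
        FROM t1
        LEFT JOIN t2 ON t1.pid = t2.pid;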

    Read the article

  • How do I return a nested table from an oracle function using Java?

    - by Benny
    I have the following type declaration and Oracle function: CREATE OR REPLACE TYPE var_outcomes_results IS TABLE OF VARCHAR2(80); CREATE OR REPLACE FUNCTION getValuesAbove(in_nodeID IN table1.KEY_SL%TYPE, in_variable IN VARCHAR2) RETURN var_outcomes_results IS currentID table1.KEY_SL%TYPE; results var_outcomes_results; currentIndex integer := 0; BEGIN currentID := in_nodeID; WHILE currentID != null LOOP FOR outcomeRecord IN (select distinct a.PARENT, b.NAME, c.OUTCOME from table1 a left outer join table2 b on a.KEY_SL = b.KEY_SL left outer join table3 c on b.VAR_ID = c.VAR_ID where a.KEY_SL = currentID) LOOP currentID := outcomeRecord.PARENT; IF lower(outcomeRecord.NAME) = lower(in_variable) AND outcomeRecord.OUTCOME != null THEN currentIndex := currentIndex + 1; results(currentIndex) := outcomeRecord.OUTCOME; END IF; END LOOP; END LOOP; RETURN results; END; I have the following Java function: public List<Object> getAboveValues(String variable, Integer nodeID) { Connection connection = null; CallableStatement callableStatement = null; try { connection = dataSource.getConnection(); callableStatement = connection.prepareCall("begin ? := getValuesAbove(?,?); end;"); callableStatement.registerOutParameter(1, OracleTypes.ARRAY); callableStatement.setInt(2, nodeID); callableStatement.setString(3, variable); callableStatement.execute(); System.out.println(callableStatement.getObject(1)); } catch( SQLException e ) { logger.error("An Exception was thrown in getAboveValues: " + e); } finally { closeDataResources(callableStatement, connection); } } However, when I execute the function, I get the following error message: "ORA-03115: unsupported network datatype or representation" What am I doing wrong? Any ideas/suggestions would be appreciated. Thanks, B.J.

    Read the article

  • How to avoid multiple, unused has_many associations when using multiple models for the same entity (

    - by mikep
    Hello, I'm looking for a nice, Ruby/Rails-esque solution for something. I'm trying to split up some data using multiple tables, rather than just using one gigantic table. My reasoning is pretty much to try and avoid the performance drop that would come with having a big table. So, rather than have one table called books, I have multiple tables: books1, books2, books3, etc. (I know that I could use a partition, but, for now, I've decided to go the 'multiple tables' route.) Each user has their books placed into a specific table. The actual book table is chosen when the user is created, and all of their books go into the same table. The goal is to try and keep each table pretty much even -- but that's a different issue. One thing I don't particularly want to have is a bunch of unused associations in the User class. Right now, it looks like I'd have to do the following: class User < ActiveRecord::Base has_many :books1, :books2, :books3, :books4, :books5 end class Books1 < ActiveRecord::Base belongs_to :user end class Books2 < ActiveRecord::Base belongs_to :user end First off, for each specific user, only one of the book tables would be usable/applicable, since all of a user's books are stored in the same table. So, only one of the associations would be in use at any time and any other has_many :bookX association that was loaded would be a waste. I don't really know what Ruby/Rails does internally with all of those has_many associations though, so maybe it's not so bad. But right now I'm thinking that it's really wasteful, and that there may just be a better, more efficient way of doing this. Is there some sort of special Ruby/Rails methodology that could be applied here to avoid having to have all of those has_many associations? Also, does anyone have any advice on how to abstract the fact that there are multiple book tables behind a single books model/class?

    Read the article

  • How can you trigger the viewWillAppear of a UITableView AFTER its UINavigationController?

    - by Troy Sartain
    I have a situation where I use a tab bar setup but with nav bar controllers on a couple of tabs. Those tabs have table views on them. Everything works great: I can pick a tab and get a different table in a nav bar structure. The other tabs are non-nav controllers. Fine. I want to use the same table view controller and even the same detail screen, since they are essentially the same format. I have two-dimensional arrays and a couple of vars tracking which tab and which table row, so when I get to the detail it's all good. Now to the problem. It all seems to work just fine until I return to a tab that has already been visited. At that point, I do indeed get a viewWillAppear for both the view controller of that specific tab and the table view controller. However, I get the table view one first! It doesn't know which tab was tapped on; the other one does, but that's too late to dynamically change the table! Any suggestions? Am I being too greedy about code duplication? I mean, I could just make separate controllers for each table view and then separate detail view controllers, but I thought I had a good solution.

    Read the article
