Search Results

Search found 24675 results on 987 pages for 'table'.


  • How to make columns as wide as the widest entry?

    - by Helper Method
    For a gcc cheatsheet I'm writing, I want to create a table that describes how gcc interprets different file endings. The table I created so far is defined as follows:

        |======================================================================
        |.c    |C source code which must be preprocessed.
        |.i    |C source code which should not be preprocessed.
        |.h    |C header file to be turned into a precompiled header.
        |.s    |Assembler code.
        |other |An object file to be fed straight into linking. Any file name with no recognized suffix is treated this way.
        |======================================================================

    The problem I have is that the table spans the total page width, but what I want is for each column to be only as wide as its widest entry, so that the table spans only as much width as it needs.

    Read the article

  • Adding a Line Break in MySQL INSERT INTO text

    - by james
    Hi, could someone tell me how to add a new line in text that I enter into a MySQL table? I tried using '\n' in the line I entered with an INSERT INTO statement, but '\n' is shown as-is. Actually, I created a table in MS Access with some data, and MS Access adds a new line with '\n'. I am converting the MS Access table data into MySQL, but when I convert, the '\n' is ignored and all the text is shown on one single line when I display it from the MySQL table on a PHP form. Can anyone tell me how MySQL can add a new line in text? Awaiting response. Thanks!!
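
    A minimal sketch of the statement being described, assuming a hypothetical table notes with a text column body. In a MySQL string literal, \n is an escape sequence for a real newline character, so the line break is stored; whether it is visible later depends on how the text is rendered (an HTML page collapses newlines unless they are converted to <br /> tags, e.g. with PHP's nl2br()):

        INSERT INTO notes (body)
        VALUES ('first line\nsecond line');

        -- The stored value contains an actual newline character:
        SELECT body FROM notes;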

    Read the article

  • Accessing Element Count and Values Within a Hidden Div

    - by Tegan Snyder
    I'm having trouble grabbing the TR count in jQuery of a table that is inside a DIV set to be hidden. Here is an example:

        <div id="questions" style="display: none;">
          <table id="tbl_questions">
            <thead>
              <tr>
                <th>Question</th>
                <th>Weight</th>
              </tr>
            </thead>
            <tbody>
              <tr id="q0">
                <td id="td_question0">Some Question 0</td>
                <td id="td_wieght0">Some Weight 0</td>
              </tr>
              <tr id="q1">
                <td id="td_question1">Some Question 1</td>
                <td id="td_wieght1">Some Weight 1</td>
              </tr>
            </tbody>
          </table>
        </div>

    Note the table is within the containing div, which is set to display: none. When I try to run this jQuery code it returns 0:

        var question_count = $("#tbl_questions> tr").size();
        alert(question_count);

    Any ideas? I'm trying to locate the count so I can then build an array of each question and weight in the table. Since the IDs provide me with an index this should be simple, but is the fact that the table is contained in a hidden DIV going to be a problem, as in the question above? Thanks, Tegan Snyder

    Read the article

  • Calculating and saving space in Postgresql

    - by punkish
    I have a table in Pg like so:

        CREATE TABLE t (
            a BIGSERIAL NOT NULL,  -- 8 b
            b SMALLINT,            -- 2 b
            c SMALLINT,            -- 2 b
            d REAL,                -- 4 b
            e REAL,                -- 4 b
            f REAL,                -- 4 b
            g INTEGER,             -- 4 b
            h REAL,                -- 4 b
            i REAL,                -- 4 b
            j SMALLINT,            -- 2 b
            k INTEGER,             -- 4 b
            l INTEGER,             -- 4 b
            m REAL,                -- 4 b
            CONSTRAINT a_pkey PRIMARY KEY (a)
        )

    The above adds up to 50 bytes per row. My experience is that I need another 40% to 50% for system overhead, without even any user-created indexes on the above. So, about 75 bytes per row. I will have many, many rows in the table, potentially upward of 145 billion rows, so the table is going to be pushing 13-14 terabytes. What tricks, if any, could I use to compact this table? My possible ideas are below.

    Convert the REAL values to INTEGERs. If they can be stored as SMALLINT, that is a saving of 2 bytes per field.

    Convert the columns b .. m into an array. I don't need to search on those columns, but I do need to be able to return one column's value at a time. So, if I need column g, I could do something like

        SELECT a, arr[5] FROM t;

    Would I save space with the array option? Would there be a speed penalty? Any other ideas?
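
    For what it's worth, a minimal sketch of the array layout being described, with hypothetical names and assuming the REAL values can be scaled to fit in SMALLINT. A PostgreSQL array value carries its own header, so any real saving should be checked with pg_column_size() on representative rows rather than assumed:

        -- Hypothetical packed layout: columns b..m collapsed into one array column
        CREATE TABLE t_packed (
            a    BIGSERIAL PRIMARY KEY,
            vals SMALLINT[]        -- one element per original column, in a fixed order
        );

        -- Arrays are 1-based, so with b..m stored in order, column g is element 6:
        SELECT a, vals[6] FROM t_packed;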

    Read the article

  • temporary tables within stored procedures on slave servers with readonly set

    - by lau
    Hi, we have set up a master/slave replication scheme and we've had problems lately because some users wrote directly to the slave instead of the master, making the whole setup inconsistent. To prevent these problems from happening again, we've decided to remove the insert, delete, update, etc. rights from the users accessing the slave. The problem is that some stored procedures (for reading) require temporary tables. I read that changing the global variable read_only to true would do what I want and still allow the stored procedures to work correctly ( http://dev.mysql.com/doc/refman/5.0/en/server-system-variables.html#sysvar_read_only ), but I keep getting the error:

        The MySQL server is running with the --read-only option so it cannot execute this statement (1290)

    The stored procedure that I used (for testing purposes) is this one:

        DELIMITER $$
        DROP PROCEDURE IF EXISTS test_readonly $$
        CREATE DEFINER=dbuser@% PROCEDURE test_readonly()
        BEGIN
            CREATE TEMPORARY TABLE IF NOT EXISTS temp (
                BT_INDEX int(11),
                BT_DESC VARCHAR(10)
            );
            INSERT INTO temp (BT_INDEX, BT_DESC) VALUES (222,'walou'), (111,'bidouille');
            DROP TABLE temp;
        END $$
        DELIMITER ;

    The CREATE TEMPORARY TABLE and the DROP TABLE work fine with the read-only flag - if I comment out the INSERT line, it runs fine - but whenever I want to insert into or delete from that temporary table, I get the error message. I use MySQL 5.1.29-rc. My default storage engine is InnoDB. Thanks in advance, this problem is really driving me crazy.

    Read the article

  • access values of controls dynamically created on postback

    - by userk
    Hi, my problem is: I've got a table, dynamically created, filled with a lot of DropDownLists whose IDs are dynamically created. When a button is pressed, I need to scan all controls in the table and save their values. But after the postback I can no longer access the table, and I've no idea how I can get those values... Thanks!

    Read the article

  • How do I check for the existence of an external file with XSL?

    - by LOlliffe
    I've found a lot of examples that reference Java and C for this, but how do I, or can I, check for the existence of an external file with XSL? First, I realize that this is only a snippet, but it's part of a huge stylesheet, so I'm hoping it's enough to show my issue.

        <!-- Use this template for Received SMSs -->
        <xsl:template name="ReceivedSMS">
          <!-- Set/Declare "SMSname" variable (local, evaluates per instance) -->
          <xsl:variable name="SMSname">
            <xsl:value-of select="following-sibling::Name"/>
          </xsl:variable>
          <fo:table font-family="Arial Unicode MS" font-size="8pt" text-align="start">
            <fo:table-column column-width=".75in"/>
            <fo:table-column column-width="6.75in"/>
            <fo:table-body>
              <fo:table-row>
                <!-- Cell contains "speakers" icon -->
                <fo:table-cell display-align="after">
                  <fo:block text-align="start">
                    <fo:external-graphic src="../images/{$SMSname}.jpg" content-height="0.6in"/>

    What I'd like to do is put in an "if" statement surrounding the {$SMSname}.jpg line. That is:

        <fo:block text-align="start">
          <xsl:if test="exists( the external file {$SMSname}.jpg)">
            <fo:external-graphic src="../images/{$SMSname}.jpg" content-height="0.6in"/>
          </xsl:if>
          <xsl:if test="not(exists( the external file {$SMSname}.jpg))">
            <fo:external-graphic src="../images/unknown.jpg" content-height="0.6in"/>
          </xsl:if>
        </fo:block>

    Because of "grouping", etc., I'm using XSLT 2.0. I hope that this is something that can be done. I hope even more that it's something simple. As always, thanks in advance for any help. LO

    Read the article

  • Handling auto-incrementing IDENTITY SQL Server fields with LINQ to SQL in C#

    - by Maxim Z.
    I'm building an ASP.NET MVC site that uses LINQ to SQL to connect to SQL Server, where I have a table that has an IDENTITY bigint primary key column that represents an ID. In one of my code methods, I need to create an object of that table to get its ID, which I will place into another object based on another table (FK-to-PK relationship). At what point is the IDENTITY column value generated and how can I obtain it from my code? Is the right approach to:

    1. Create the object that has the IDENTITY column
    2. Do an InsertOnSubmit() and SubmitChanges() to submit the object to the database table
    3. Get the value of the ID property of the object

    Read the article

  • Data retrieval and Join operations with cluster db server

    - by Goerge
    If a database is spread across multiple servers (e.g., Microsoft SQL Server), how can we do join or filter operations? In my scenario, suppose: if a single table is spread across multiple servers, how can we filter rows based on user input? If the master table is on one DB server and the transaction table is on another DB server, how can we do join operations? Please let me know how we can achieve this and where I can get more details about this.

    Read the article

  • Exporting de-aggregated data

    - by Ben
    I'm currently working on a data export feature for a survey application. We are using SQL2k8. We store data in a normalized format: QuestionId, RespondentId, Answer. We have a couple of other tables that define what the question text is for the QuestionId and demographics for the RespondentId... Currently I'm using some dynamic SQL to generate a pivot that joins the question table to the answer table and creates an export; it's working... The problem is that it seems slow and we don't have that much data (less than 50k respondents). Right now I'm thinking "why am I 'paying' to de-aggregate the data for each query? Why don't I cache that?" The data being exported is based on dynamic criteria. It could be "give me respondents that completed on x date (or range)" or "people that like blue", etc. Because of that, I think I have to cache at the respondent level, find out which respondents are being exported and then select their combined cached de-aggregated data. To me the quick and dirty fix is a totally flat table: RespondentId, Question1, Question2, etc. The problem is, we have multiple clients and that doesn't scale, AND I don't want to have to maintain the flattened table as the survey changes. So I'm thinking about putting an XML column on the respondent table and caching the results of a SELECT * FROM Data FOR XML AUTO WHERE RespondentId = x. With that in place, I would then be able to get my export with filtering and XML calls into the XML column. What are you doing to export aggregated data in a flattened format (CSV, Excel, etc.)? Does this approach seem OK? I worry about the cost of XML functions on larger result sets (think SELECT RespondentId, XmlCol.value('//data/question_1', 'nvarchar(50)') AS [Why is there air?], XmlCol.RinseAndRepeat)... Is there a better technology/approach for this? Thanks!
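
    As a rough T-SQL sketch of the XML-column idea (table and column names here are hypothetical, and the XPath depends on the element shape FOR XML AUTO actually produces for the real query):

        -- Cache one respondent's answers into an XML column
        UPDATE r
        SET AnswerXml = (SELECT d.QuestionId, d.Answer
                         FROM Data AS d
                         WHERE d.RespondentId = r.RespondentId
                         FOR XML AUTO, TYPE)
        FROM Respondent AS r
        WHERE r.RespondentId = @RespondentId;

        -- Later, pull individual questions out of the cached XML during export
        SELECT RespondentId,
               AnswerXml.value('(/d[@QuestionId="1"]/@Answer)[1]', 'nvarchar(50)') AS Question1
        FROM Respondent;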

    Read the article

  • Implementation of MVC with SQLite and NSURLConnection, use cases?

    - by user324723
    I'm interested in knowing how others have implemented/designed database & web services in their iPhone app and how they simplified it for the entire application. My application is dependent on these services and I can't figure out an efficient way to use them together due to the (semi)complexity of my requirements. My past attempts at combining them haven't been completely successful, or at least optimal in my mind. I'm building a database-driven iPhone app that uses a relational database in SQLite and consumes web services based on missing content or user interaction. Like this hasn't been done before...right? Since I am using a relational database, any web service consumed requires normalization, parsing the result and persisting it to the database before it can be displayed in a table view controller. The application's UI consists of nested (nav controller) table views where a user can select a cell and be taken to the next table view, where it attempts to populate the table view's data source from the database. If nothing exists in the database then it will send a request via web services to download its content, thus download - parse - persist - query - display. Since the user has the ability to request a refresh of this data, it still requires the same process. Quickly describing what I've implemented and tried to run with:

    1st attempt - Used a singleton web service class that handled sending web service requests, parsing the result and returning it to the table view controller via delegate protocols. Once the controller received that data it would then be responsible for persisting it to the database and re-returning the result. I didn't like the idea of only preventing the case where the app delegate selector doesn't exist (released), causing the app to crash.

    2nd attempt - Used NSNotificationCenter for easy access to both database and web services, but later realized it was more complex due to adding and removing observers per view (which isn't advised anyway).

    Read the article

  • Merge computed data from two tables back into one of them

    - by Tyler McHenry
    I have the following situation (as a reduced example). Two tables, Measures1 and Measures2, each of which stores an ID, a Weight in grams, and optionally a Volume in fluid ounces. (In reality, Measures1 has a good deal of other data that is irrelevant here.)

    Contents of Measures1:

        +----+----------+--------+
        | ID | Weight   | Volume |
        +----+----------+--------+
        |  1 | 100.0000 | NULL   |
        |  2 | 200.0000 | NULL   |
        |  3 | 150.0000 | NULL   |
        |  4 | 325.0000 | NULL   |
        +----+----------+--------+

    Contents of Measures2:

        +----+----------+----------+
        | ID | Weight   | Volume   |
        +----+----------+----------+
        |  1 |  75.0000 |  10.0000 |
        |  2 | 400.0000 |  64.0000 |
        |  3 | 100.0000 |  22.0000 |
        |  4 | 500.0000 | 100.0000 |
        +----+----------+----------+

    These tables describe equivalent weights and volumes of a substance. E.g. 10 fluid ounces of substance 1 weighs 75 grams. The IDs are related: ID 1 in Measures1 is the same substance as ID 1 in Measures2. What I want to do is fill in the NULL volumes in Measures1 using the information in Measures2, but keeping the weights from Measures1 (then, ultimately, I can drop the Measures2 table, as it will be redundant). For the sake of simplicity, assume that all volumes in Measures1 are NULL and all volumes in Measures2 are not. I can compute the volumes I want to fill in with the following query:

        SELECT Measures1.ID, Measures1.Weight,
               (Measures2.Volume * (Measures1.Weight / Measures2.Weight)) AS DesiredVolume
        FROM Measures1
        JOIN Measures2 ON Measures1.ID = Measures2.ID;

    Producing:

        +----+----------+-----------------+
        | ID | Weight   | DesiredVolume   |
        +----+----------+-----------------+
        |  4 | 325.0000 | 65.000000000000 |
        |  3 | 150.0000 | 33.000000000000 |
        |  2 | 200.0000 | 32.000000000000 |
        |  1 | 100.0000 | 13.333333333333 |
        +----+----------+-----------------+

    But I am at a loss for how to actually insert these computed values into the Measures1 table. Preferably, I would like to be able to do it with a single query, rather than writing a script or stored procedure that iterates through every ID in Measures1. But even then I am worried that this might not be possible, because the MySQL documentation says that you can't use a table in an UPDATE query and a SELECT subquery at the same time, and I think any solution would need to do that. I know that one workaround might be to create a new table with the results of the above query (also selecting all of the other non-Volume fields in Measures1) and then drop both tables and replace Measures1 with the newly-created table, but I was wondering if there was any better way to do it that I am missing.
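
    For reference, a sketch of the multi-table UPDATE form MySQL supports. The documented restriction is on selecting from the very table being updated inside a subquery, whereas joining the target table to a different table (Measures2) in the UPDATE itself is allowed:

        UPDATE Measures1
        JOIN Measures2 ON Measures1.ID = Measures2.ID
        SET Measures1.Volume = Measures2.Volume * (Measures1.Weight / Measures2.Weight)
        WHERE Measures1.Volume IS NULL;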

    Read the article

  • Non standard interaction among two tables to avoid very large merge

    - by riko
    Suppose I have two tables A and B. Table A has a multi-level index (a, b) and one column (ts). b determines ts univocally.

        A = pd.DataFrame(
            [('a', 'x', 4), ('a', 'y', 6), ('a', 'z', 5),
             ('b', 'x', 4), ('b', 'z', 5), ('c', 'y', 6)],
            columns=['a', 'b', 'ts']).set_index(['a', 'b'])
        AA = A.reset_index()

    Table B is another one-column (ts) table with a non-unique index (a). The ts's are sorted "inside" each group, i.e., B.ix[x] is sorted for each x. Moreover, there is always a value in B.ix[x] that is greater than or equal to the values in A.

        B = pd.DataFrame(
            dict(a=list('aaaaabbcccccc'),
                 ts=[1, 2, 4, 5, 7, 7, 8, 1, 2, 4, 5, 8, 9])).set_index('a')

    The semantics of this is that B contains observations of occurrences of an event of the type indicated by the index. I would like to find from B the timestamp of the first occurrence of each event type after the timestamp specified in A for each value of b. In other words, I would like to get a table with the same shape as A that, instead of ts, contains the "minimum value occurring after ts" as specified by table B. So, my goal would be:

        C:
        ('a', 'x') 4
        ('a', 'y') 7
        ('a', 'z') 5
        ('b', 'x') 7
        ('b', 'z') 7
        ('c', 'y') 8

    I have some working code, but it is terribly slow.

        C = AA.apply(lambda row: (
            row[0], row[1],
            B.ix[row[0]].irow(np.searchsorted(B.ts[row[0]], row[2]))),
            axis=1).set_index(['a', 'b'])

    Profiling shows the culprit is obviously B.ix[row[0]].irow(np.searchsorted(B.ts[row[0]], row[2])). However, standard solutions using merge/join would take too much RAM in the long run. Consider that now I have 1000 a's, assume the average number of b's per a to be constant (probably 100-200), and consider that the number of observations per a is probably on the order of 300. In production I will have 1000 times more a's. 1,000,000 x 200 x 300 = 60,000,000,000 rows may be a bit too much to keep in RAM, especially considering that the data I need is perfectly described by a C like the one I discussed above. How would I improve the performance?

    Read the article

  • sql insert query needed

    - by masfenix
    Hey guys, so I have two tables. They are pictured below. I have a master table, "all_reports", and a user table, "user list". The master table may have users that do not exist in the user list. I need to add them to the user list. The master table may have duplicates in it (check the picture). The master list does not contain all the information that the user list requires (no manager, no HR status, no department... again, check the picture).
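
    The table layouts are only shown in the screenshots, so the column names below are hypothetical, but the usual shape for this is an INSERT ... SELECT that de-duplicates the master table and anti-joins against the user list so existing users are skipped (the extra columns such as manager, HR status and department are simply left NULL to be filled in later):

        INSERT INTO user_list (user_name)
        SELECT DISTINCT ar.user_name
        FROM all_reports AS ar
        LEFT JOIN user_list AS ul ON ul.user_name = ar.user_name
        WHERE ul.user_name IS NULL;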

    Read the article

  • SQL Server many-to-many design recommendation

    - by Jean-Philippe Brabant
    I have a SQL Server database with two tables: Users and Achievements. My users can have multiple achievements, so it is a many-to-many relationship. At school we learned to create an associative table for that sort of relationship. That means creating a table with a UserID and an AchievementID. But if I have 500 users and 50 achievements, that could lead to 25,000 rows. As an alternative, I could add a binary field to my Users table. For example, if that field contained 10010, that would mean that this user unlocked the first and the fourth achievements. Is there another way? And which one should I use?
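
    For reference, a minimal sketch of the associative-table approach (constraint and key column names are assumptions, since only the table names are given). 25,000 narrow rows is a very small table for SQL Server, and the composite primary key keeps each user/achievement pair unique:

        CREATE TABLE UserAchievements (
            UserID        INT NOT NULL,
            AchievementID INT NOT NULL,
            CONSTRAINT PK_UserAchievements PRIMARY KEY (UserID, AchievementID),
            CONSTRAINT FK_UserAchievements_Users
                FOREIGN KEY (UserID) REFERENCES Users (UserID),
            CONSTRAINT FK_UserAchievements_Achievements
                FOREIGN KEY (AchievementID) REFERENCES Achievements (AchievementID)
        );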

    Read the article

  • Multiply 2 values in form using php.

    - by DAFFODIL
    You can find the code at the URL below. I used a function to do the work, but I am getting this error:

        Notice: Use of undefined constant Quantity - assumed 'Quantity' in C:\wamp\www\table.php on line 48
        Notice: Use of undefined constant Rate - assumed 'Rate' in C:\wamp\www\table.php on line 49
        Notice: Use of undefined constant Amount - assumed 'Amount' in C:\wamp\www\table.php on line 50

    http://dpaste.com/hold/189360/

    Read the article

  • habtm multiple times with the same model

    - by Ermin
    I am trying to model publications. A publication can have multiple authors and editors. Since it is possible that one person is an author of one publication and an editor of another, there are no separate models for Authors and Editors:

        class Publication < ActiveRecord::Base
          has_and_belongs_to_many :authors, :class_name=>'Person'
          has_and_belongs_to_many :editors, :class_name=>'Person'
        end

    The above code doesn't work, because it uses the same join table for both associations. Now I know that I can specify the name of the join table, but there is a warning about that in the API documentation which I don't understand:

        :join_table: Specify the name of the join table if the default based on lexical order isn't what you want.
        WARNING: If you're overwriting the table name of either class, the table_name method MUST be declared
        underneath any has_and_belongs_to_many declaration in order to work.

    Read the article

  • .before method adds unexpected close tags

    - by timkl
    I have a table in my markup on which I want to add some divs before and after, like this:

        <div class="widebox">
          <div class="widebox-header">Opret/rediger bruger</div>
          <div class="widebox-middle">
            <table id="Table3"></table>
          </div>
          <div class="widebox-bottom"></div>
        </div>

    I'm trying to do this with jQuery, like this:

        $('#Table3').before('<div class="widebox"><div class="widebox-header">Opret/rediger bruger</div><div class="widebox-middle">');
        $('#Table3').after('</div><div class="widebox-bottom"></div></div>');

    However, this is what renders out; the method seems to close my opening divs:

        <div class="widebox">
          <div class="widebox-header">Opret/rediger bruger</div>
          <div class="widebox-middle"></div></div><!-- unexpected close divs -->
          <table id="Table3"></table>
          <div class="widebox-bottom"></div>

    Anyone know what could be causing this?

    Read the article

  • Check value at insert

    - by ThreeFingerMark
    Hello, I have these three tables:

        Table: Item
        Columns: ItemID, Title, Content, NoChange (Date)

        Table: Tag
        Columns: TagID, Title

        Table: ItemTag
        Columns: ItemID, TagID

    The Item table has a NoChange field; if this field is set, no ItemTag row with this ItemID is allowed to be inserted. How can I check this in the insert? For updates I have this statement:

        UPDATE ItemTag
        SET TagID = ?
        WHERE ItemID = ? AND TagID = ?
          AND exists (
            select ItemID from Item
            where ItemID = ? AND NoChange is null
          );

    Thank you.
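
    A sketch of the analogous guarded insert, following the same pattern as the UPDATE above: the SELECT only produces a row when the Item has no NoChange date, so nothing is inserted otherwise. (A BEFORE INSERT trigger on ItemTag would enforce the same rule no matter which statement does the inserting.)

        INSERT INTO ItemTag (ItemID, TagID)
        SELECT i.ItemID, ?
        FROM Item i
        WHERE i.ItemID = ?
          AND i.NoChange IS NULL;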

    Read the article

  • One to One relationship in MySQL

    - by Botto
    I'm trying to make a one-to-one relationship in a MySQL DB. I'm using the InnoDB engine and the basic tables look like this:

        CREATE TABLE `foo` (
            `fooID` INT(11) NOT NULL PRIMARY KEY AUTO_INCREMENT,
            `name` TEXT NOT NULL
        )

        CREATE TABLE `bar` (
            `barName` VARCHAR(100) NOT NULL,
            `fooID` INT(11) NOT NULL PRIMARY KEY,
            CONSTRAINT `contact` FOREIGN KEY (`fooID`) REFERENCES `foo`(`fooID`)
        )

    Now, once I have set these up, I alter the foo table so that fooID also becomes a foreign key to the fooID in bar. The only issue I am facing with this is that there will be an integrity issue when I try to insert into either. I would like some help, thanks.
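
    One observation worth noting: because bar.fooID is both the primary key of bar and a foreign key to foo, each foo row can already have at most one bar row, so the reverse constraint on foo isn't required for the one-to-one shape; and since InnoDB checks foreign keys immediately, a circular pair of constraints leaves no valid insert order. A sketch of the usual flow with just the single constraint:

        -- Insert the parent first, then its (at most one) dependent row
        INSERT INTO foo (name) VALUES ('example');
        INSERT INTO bar (barName, fooID) VALUES ('example bar', LAST_INSERT_ID());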

    Read the article

  • Need help tuning a SQL statement

    - by jeffself
    I've got a table that has two fields (custno and custno2) that need to be searched from a query. I didn't design this table, so don't scream at me. :-) I need to find all records where either custno or custno2 matches the value returned from a query on the same table based on a titleno. In other words, the user types in 1234 for the titleno. My query searches the table to find the custno associated with the titleno. It also looks for the custno2 for that titleno. Then it needs to do a search on the same table for all other records that have either the custno or custno2 returned in the previous search in the custno or custno2 fields of those other records. Here is what I've come up with:

        SELECT BILLYR, BILLNO, TITLENO, VINID, TAXPAID, DUEDATE, DATEPIF, PROPDESC
        FROM TRCDBA.BILLSPAID
        WHERE CUSTNO IN (select custno from trcdba.billspaid where titleno = '1234'
                         union
                         select custno2 from trcdba.billspaid where titleno = '1234' and custno2 != '')
           OR CUSTNO2 IN (select custno from trcdba.billspaid where titleno = '1234'
                          union
                          select custno2 from trcdba.billspaid where titleno = '1234' and custno2 != '')

    The query takes about 5-10 seconds to return data. Can it be rewritten to work faster?
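
    One direction worth trying (a sketch, assuming the database supports common table expressions, and keeping the original semantics) is to build the list of customer numbers for the title once and reuse it for both sides, rather than repeating the correlated IN lists; supporting indexes on TITLENO, CUSTNO and CUSTNO2 will usually matter more than the exact rewrite:

        WITH title_cust (custno) AS (
            SELECT custno  FROM trcdba.billspaid WHERE titleno = '1234'
            UNION
            SELECT custno2 FROM trcdba.billspaid WHERE titleno = '1234' AND custno2 != ''
        )
        SELECT b.BILLYR, b.BILLNO, b.TITLENO, b.VINID, b.TAXPAID, b.DUEDATE, b.DATEPIF, b.PROPDESC
        FROM trcdba.billspaid b
        WHERE b.CUSTNO  IN (SELECT custno FROM title_cust)
           OR b.CUSTNO2 IN (SELECT custno FROM title_cust);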

    Read the article

  • Mass update of data in sql from int to varchar

    - by Christopher Kelly
    We have a large table (5,608,782 rows and growing) that has 3 columns: Zip1, Zip2, distance. All columns are currently int. We would like to convert this table to use varchars for international usage, but we need to do a mass import into the new table, converting zips shorter than 5 digits to 0-padded varchars (123 becomes 00123, etc.). Is there a way to do this short of looping over each row and doing the translation programmatically?
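
    A set-based INSERT ... SELECT avoids the row-by-row loop. A sketch assuming SQL Server and hypothetical table names (old_zips and new_zips); on MySQL or PostgreSQL, LPAD(Zip1, 5, '0') does the same padding:

        INSERT INTO new_zips (Zip1, Zip2, Distance)
        SELECT RIGHT('00000' + CAST(Zip1 AS VARCHAR(5)), 5),
               RIGHT('00000' + CAST(Zip2 AS VARCHAR(5)), 5),
               Distance
        FROM old_zips;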

    Read the article

  • How can I fill a variable of my own created data type within Oracle PL/SQL?

    - by Frankie Simon
    In Oracle I've created a data type: TABLE OF VARCHAR2(200). I want to have a variable of this type within a stored procedure (defined locally, not as an actual table in the DB) and fill it with data. Some online samples show how I'd use my type if it were filled and passed as a parameter to the stored procedure:

        SELECT column_value currVal
        FROM table(pMyPassedParameter)

    However, what I want is to fill it during the PL/SQL code itself, with INSERT statements. Does anyone know the syntax for this?
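
    A minimal sketch, assuming the type was created at schema level as something like CREATE TYPE my_list AS TABLE OF VARCHAR2(200) (the names my_list, some_table and some_column are placeholders). A collection variable isn't filled with INSERT statements; it is filled with the type's constructor, with EXTEND plus assignments, or in bulk from a query:

        DECLARE
          v_values my_list := my_list('first', 'second');  -- constructor with initial elements
        BEGIN
          v_values.EXTEND;                                  -- add one empty slot
          v_values(v_values.COUNT) := 'third';              -- assign the new slot

          -- or load the whole collection straight from a query
          SELECT some_column BULK COLLECT INTO v_values
          FROM some_table;
        END;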

    Read the article
