Search Results

Search found 7127 results on 286 pages for 'calculated columns'.

  • Loop through Array with conditional output based on key/value pair

    - by Daniel C
    I have an array with the following columns: Task, Status. I would like to print out a table that shows a list of the Tasks, but not the Status column. Instead, for Tasks where the Status = 0, I want to add a tag <del> to make the completed task be crossed out. Here's my current code:

        foreach ($row as $key => $val) {
            if ($key != 'Status')
                print "<td>$val</td>";
            else if ($val == '0')
                print "<td><del>$val</del></td>";
        }

    This seems to be correct, but when I print it out, it prints all the tasks with the <del> tag. So basically the "else" clause is being run every time. Here is the var_dump($row):

        array
          'Task' => string 'Task A' (length=6)
          'Status' => string '3' (length=1)
        array
          'Task' => string 'Task B' (length=6)
          'Status' => string '0' (length=1)
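
    One way to restructure the loop, as a minimal sketch: it assumes each $row is an associative array like the var_dump output above ($rows, the outer list, is a hypothetical name), and that the goal is to strike out the Task cell rather than print the Status value.

        // Hypothetical corrected loop: print only the Task column,
        // wrapped in <del> when the row's Status is '0'.
        foreach ($rows as $row) {
            if ($row['Status'] === '0') {
                print "<td><del>{$row['Task']}</del></td>";
            } else {
                print "<td>{$row['Task']}</td>";
            }
        }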

  • entering data from a PHP page into a database

    - by Rahima farzana.S
    I have a PHP form with two text boxes, and I want to enter the text box values into the database. I have created the table (with two columns, named webmeasurementsuite id and webmeasurements id) using the following statement:

        CREATE TABLE `radio` (
            `webmeasurementsuite id` INT NOT NULL,
            `webmeasurements id` INT NOT NULL
        );

    Following the tutorial at http://www.webune.com/forums/php-how-to-enter-data-into-database-with-php-scripts.html, I wrote the PHP code, but unfortunately the data is not getting entered into the database. I am getting an error in the INSERT SQL syntax. I checked it but am not able to trace the error. Can anyone correct me?

        $sql = "INSERT INTO $db_table(webmeasurementsuite id,webmeasurements id) values
            ('".mysql_real_escape_string(stripslashes($_REQUEST['webmeasurementsuite id']))."',
             '".mysql_real_escape_string(stripslashes($_REQUEST['webmeasurements id']))."')";
        echo $sql;

    The error is as follows:

        INSERT INTO radio(webmeasurementsuite id,webmeasurements id) values ('','')
        ERROR: You have an error in your SQL syntax; check the manual that corresponds to
        your MySQL server version for the right syntax to use near
        'id,webmeasurements id) values ('','')' at line 1
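
    A minimal sketch of the likely fix, assuming the space-containing column names are kept: unquoted identifiers cannot contain spaces in MySQL, so they must be wrapped in backticks (renaming the columns to webmeasurementsuite_id and webmeasurements_id would avoid the quoting entirely).

        // $suite_id and $measurement_id are hypothetical variables
        // holding the submitted form values.
        $sql = "INSERT INTO `$db_table` (`webmeasurementsuite id`, `webmeasurements id`)
                VALUES ('" . mysql_real_escape_string($suite_id) . "',
                        '" . mysql_real_escape_string($measurement_id) . "')";

    Note also that the values in the echoed query are empty ('',''), which suggests the $_REQUEST lookups are failing too: PHP replaces spaces in incoming parameter names with underscores, so a field named "webmeasurementsuite id" arrives as $_REQUEST['webmeasurementsuite_id'].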

  • Speed up CSV export from a MySQL database query when using PHP

    - by John
    OK, so I've got a web system (built on CodeIgniter and running on MySQL) that allows people to query a database of postal address data by making selections in a series of forms until they arrive at the selection they want, pretty standard stuff. They can then buy that information and download it via the system. The queries run very fast, but when it comes to applying a query to the database and exporting it to CSV, once the datasets get to around the 30,000-record mark (each row has around 40 columns, of which about 20 are populated with on average 20 characters of data per cell) it can take 5 or so minutes to export.

    So, my question is: what is the main cause of the slowness? Is the resultset of data from the query so large that it is running into memory issues, and should I therefore allow much more memory to the process? Or is there a much more efficient way of exporting to CSV from a MySQL query that I'm not using? Should I save the contents of the query to a temp table and simply export the temp table to CSV? Or am I going about this all wrong? Also, is the fact that I'm using CodeIgniter's Active Record for this prohibitive, due to the way that it stores the resultset? Any advice is welcome! Thank you for reading!
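
    One option worth measuring, as a sketch (it assumes the MySQL server can write to its local filesystem, and the table and column names are placeholders): MySQL can write the CSV itself, which avoids pulling the whole resultset into PHP memory at all.

        SELECT col1, col2, col3
        INTO OUTFILE '/tmp/export.csv'
        FIELDS TERMINATED BY ',' OPTIONALLY ENCLOSED BY '"'
        LINES TERMINATED BY '\n'
        FROM address_data
        WHERE ...;

    If the export has to go through PHP, fetching row by row and writing each with fputcsv() keeps memory flat, whereas building the full result array first (as CodeIgniter's result() does) holds every row in memory at once.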

  • How to emulate a BEFORE DELETE trigger in SQL Server 2005

    - by Mark
    Let's say I have three tables, [ONE], [ONE_TWO], and [TWO]. [ONE_TWO] is a many-to-many join table with only [ONE_ID] and [TWO_ID] columns. There are foreign keys set up to link [ONE] to [ONE_TWO] and [TWO] to [ONE_TWO]. The FKs use the ON DELETE CASCADE option so that if either a [ONE] or [TWO] record is deleted, the associated [ONE_TWO] records will be automatically deleted as well.

    I want to have a trigger on the [TWO] table such that when a [TWO] record is deleted, it executes a stored procedure that takes a [ONE_ID] as a parameter, passing the [ONE_ID] values that were linked to the [TWO_ID] before the delete occurred:

        DECLARE @Statement NVARCHAR(max)
        SET @Statement = ''
        SELECT @Statement = @Statement + N'EXEC [MyProc] '''
            + CAST([one_two].[one_id] AS VARCHAR(36)) + '''; '
        FROM deleted
        JOIN [one_two] ON deleted.[two_id] = [one_two].[two_id]
        EXEC (@Statement)

    Clearly, I need a BEFORE DELETE trigger, but there is no such thing in SQL Server 2005. I can't use an INSTEAD OF trigger because of the cascading FK. I get the impression that if I use a FOR DELETE trigger, when I join [deleted] to [ONE_TWO] to find the list of [ONE_ID] values, the FK cascade will have already deleted the associated [ONE_TWO] records, so I will never find any [ONE_ID] values. Is this true? If so, how can I achieve my objective? I'm thinking that I'd need to change the FK joining [TWO] to [ONE_TWO] to not use cascades and to do the delete from [ONE_TWO] manually in the trigger, just before I manually delete the [TWO] records. But I'd rather not go through all that if there is a simpler way.
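
    For reference, a sketch of the manual-cascade approach the question ends with, assuming the ON DELETE CASCADE between [TWO] and [ONE_TWO] is dropped first (which also sidesteps the INSTEAD OF restriction mentioned above); the trigger name is illustrative:

        CREATE TRIGGER [trg_two_instead_of_delete] ON [TWO]
        INSTEAD OF DELETE
        AS
        BEGIN
            -- Capture the ONE_IDs while the join rows still exist
            DECLARE @Statement NVARCHAR(MAX)
            SET @Statement = ''
            SELECT @Statement = @Statement + N'EXEC [MyProc] '''
                + CAST(ot.[one_id] AS VARCHAR(36)) + '''; '
            FROM deleted d
            JOIN [one_two] ot ON d.[two_id] = ot.[two_id]

            -- Manual cascade, then the real delete
            DELETE ot FROM [one_two] ot JOIN deleted d ON ot.[two_id] = d.[two_id]
            DELETE t FROM [TWO] t JOIN deleted d ON t.[two_id] = d.[two_id]

            EXEC (@Statement)
        END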

  • Rails active record association problem

    - by Harm de Wit
    Hello, I'm new at Active Record association in Rails, so I don't know how to solve the following problem. I have tables called 'meetings' and 'users'. I have correctly associated these two together by making a table 'participants' and setting the following association statements:

        class Meeting < ActiveRecord::Base
          has_many :participants, :dependent => :destroy
          has_many :users, :through => :participants

    and

        class Participant < ActiveRecord::Base
          belongs_to :meeting
          belongs_to :user

    and the last model

        class User < ActiveRecord::Base
          has_many :participants, :dependent => :destroy

    At this point all is going well, and I can access the user values of attending participants of a specific meeting by calling @meeting.users in the normal meetingshow.html.erb view. Now I want to make connections between these participants. Therefore I made a model called 'connections' and created the columns 'meeting_id', 'user_id' and 'connected_user_id'. These connections are kind of like friendships within a certain meeting. My question is: how can I set the model associations so I can easily control these connections? I would like to see a solution where I could use:

        @meeting.users.each do |user|
          user.connections.each do |c|
            <do something>
          end
        end

    I tried this by changing the meetings model to this:

        class Meeting < ActiveRecord::Base
          has_many :participants, :dependent => :destroy
          has_many :users, :through => :participants
          has_many :connections, :dependent => :destroy
          has_many :participating_user_connections, :through => :connections, :source => :user

    Please, does anyone have a solution/tip how to solve this the Rails way?
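
    A sketch of one way to model it, assuming the 'connections' table keeps its meeting_id, user_id and connected_user_id columns (the association names here are illustrative):

        class Connection < ActiveRecord::Base
          belongs_to :meeting
          belongs_to :user
          belongs_to :connected_user, :class_name => 'User'
        end

        class User < ActiveRecord::Base
          has_many :participants, :dependent => :destroy
          has_many :connections, :dependent => :destroy
          has_many :connected_users, :through => :connections, :source => :connected_user
        end

    With that in place, user.connections works inside the @meeting.users.each loop shown above; scoping to the current meeting could be done with a condition on connection.meeting_id.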

  • SQL queries to determine all values that would satisfy an arbitrary query

    - by jasterm007
    I'm trying to figure out how to efficiently run a set of queries that will provide a new table of all values that would return results for an arbitrary query. Say my table has a schema like:

        id name age city

    What is an efficient way to list all values that would return results for an arbitrary query, say "NOT city=X AND age BETWEEN Y AND Z"? My naive approach for this would be to use a script and recurse through all possible combinations of {city, age, age} and see which SELECTs return more than 0 results, but that seems incredibly inefficient. I've also tried building large joins on {city, age, age} as well and basically using that table as an argument list to the query, but that quickly becomes an impossibility for queries on many columns. For simple conjunctive equality queries, i.e. "name=X and age=Y", this is much simpler, as I can do something like:

        SELECT name, age, count(*) AS count
        FROM main
        GROUP BY name, age
        HAVING count > 0

    But I'm having difficulty coming up with a general approach for anything more complicated than that. Any pointers in the right direction would be most helpful, thanks.

  • Unnecessary Redundancy with Tables.

    - by Stacey
    My items are listed as follows (this is just a summary, of course). I'm using the method shown for the "Details" table to represent a type of 'inheritance', so to speak, since "Item" and "Downloadable" are going to be identical except that each will have a few additional fields relevant only to itself. My question is about this design pattern. This sort of thing appears many, many times in our projects: is there a more intelligent way to handle it? I basically need to normalize the tables as much as possible. I'm extremely new to databases, so this is all very confusing to me.

    There are 5 items: Awards, Items, Purchases, Tokens, and Downloads. They are all very, very similar, except each has a few pieces of data relevant only to itself. I've tried to use a declaration field (like an enumerator 'Type' field) in conjunction with nullable columns, but I was told that is a bad approach. What I have done is take everything similar and place it in a single table, and then each type has its own table that references a column in the 'base' table. The problem occurs with relationships, or junctions, linking all of these back to a customer. Each type takes around 2 additional tables to properly junction all of the data together, and as such my database is growing very, very large. Is there a smarter practice for this kind of behavior?

        Item
            ID      | GUID
            Name    | varchar(64)

        Product
            ID      | GUID
            Name    | varchar(64)
            Store   | GUID [FK]
            Details | GUID [FK]

        Downloadable
            ID      | GUID
            Name    | varchar(64)
            Url     | nvarchar(2048)
            Details | GUID [FK]

        Details
            ID          | GUID
            Price       | decimal
            Description | text

        Peripherals [JUNCTION]
            ID     | GUID
            Detail | GUID [FK]

        Store
            ID        | GUID
            Addresses | GUID

        Addresses
            ID      | GUID
            Name    | nvarchar(64)
            State   | int [FK]
            ZipCode | int
            Address | nvarchar(64)

        State
            ID   | int
            Name | varchar(32)

  • reformatting a matrix in matlab with nan values

    - by Kate
    This post follows a previous question regarding the restructuring of a matrix: re-formatting a matrix in matlab. An additional problem I face is demonstrated by the following example:

        depth = [0:1:20]';
        data = rand(1,length(depth))';
        d = [depth,data];
        d = [d;d(1:20,:);d];

    Here I would like to alter this matrix so that each column represents a specific depth and each row represents time, so eventually I will have 3 rows (i.e. days) and 21 columns (i.e. a measurement at each depth). However, we cannot reshape this, because the number of measurements for a given day is not the same, i.e. some are missing. This is known by:

        dd = sortrows(d,1);
        for i = 1:length(depth);
            e(i) = length(dd(dd(:,1)==depth(i),:));
        end

    From 'e' we find that the number of depths is different for different days. How could I insert a NaN into the matrix so that each day has the same depth values? I could find the unique depths first by:

        unique(d(:,1))

    From this, if a depth (from unique) is missing for a given day, I would like to insert the depth into the correct position and insert a NaN into the respective location in the column of data. How can this be achieved?
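
    A sketch of one approach, assuming a new day starts whenever the depth value drops relative to the previous row (variable names are illustrative):

        % Assign a day number to each row, then scatter the data into a
        % day-by-depth grid that is pre-filled with NaN.
        dayIdx = cumsum([1; diff(d(:,1)) < 0]);
        depths = unique(d(:,1));
        out    = nan(max(dayIdx), numel(depths));
        for k = 1:size(d,1)
            out(dayIdx(k), depths == d(k,1)) = d(k,2);
        end

    Rows of 'out' are days, columns are depths, and any depth missing on a given day stays NaN.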

  • Need help INSERT record(s) MySQL DB

    - by JM4
    I have an online form which collects member(s) information and stores it in a very long MySQL database row. We allow up to 16 members to enroll at a single time and originally structured the DB to allow such. For example: if 1 member enrolls, his personal information (first name, last name, address, phone, email) is stored in a single row. If 15 members enroll (all at once), their personal information is stored in the same single row. The row has columns for all 'possible' inputs. I am trying to consolidate this code and have every nth member that enrolls put into a new record within the database. I have seen suggestions before for inserting multiple records, such as:

        INSERT INTO tablename VALUES
        (('$f1name', '$f1address', '$f1phone'),
         ('$f2name', '$f2address', '$f2phone'))...

    The issue with this is twofold:

    1. I do not know how many records are being enrolled from person to person, so the only way to build the statement above is to use a loop.
    2. The information collected from the forms is NOT a single array, so I can't loop through one array and have it parse out. My information is collected as individual input fields like such: Member1FirstName, Member1LastName, Member1Phone, Member2Firstname, Member2LastName, Member2Phone... and so on.

    Is it possible to store the information in separate rows WITHOUT using a loop (and therefore having to go back and completely restructure my form field names and such, which can't happen due to the way the validation rules are built)?
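
    For what it's worth, a loop over the field names does not require renaming any form fields. A sketch, assuming the numbered Member{n}... naming shown above and a hypothetical 'members' table:

        $rows = array();
        for ($i = 1; $i <= 16; $i++) {
            if (empty($_POST["Member{$i}FirstName"])) {
                continue;   // skip unused member slots
            }
            $rows[] = sprintf("('%s', '%s', '%s')",
                mysql_real_escape_string($_POST["Member{$i}FirstName"]),
                mysql_real_escape_string($_POST["Member{$i}LastName"]),
                mysql_real_escape_string($_POST["Member{$i}Phone"]));
        }
        if ($rows) {
            mysql_query("INSERT INTO members (first_name, last_name, phone) VALUES "
                . implode(', ', $rows));
        }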

  • SQL -- How is DISTINCT so fast without an index?

    - by Jonathan
    Hi, I have an SQLite database with a table called 'links' with 600 million rows in it. There are 2 columns in the table: a "src" column and a "dest" column. At present there are no indices. There are a fair number of common values between src and dest, but also a fair number of duplicated rows. The first thing I'm trying to do is remove all the duplicate rows and then perform some additional processing on the results; however, I've been encountering some weird issues. Firstly:

        SELECT * FROM links WHERE src=434923 AND dest=5010182

    This returns one result fairly quickly and then takes quite a long time to run, as I assume it's performing a table scan on the rest of the 600m rows. However, if I do

        SELECT DISTINCT * FROM links

    then it immediately starts returning rows really quickly. The question is: how is this possible? Surely for each row, the row must be compared against all of the other rows in the table, but this would require a table scan of the remaining rows in the table, which SHOULD take ages! Any ideas why SELECT DISTINCT is so much quicker than a standard SELECT?

  • Best way to get distinct values from large table

    - by derivation
    I have a db table with about 10 or so columns, two of which are month and year. The table has about 250k rows now, and we expect it to grow by about 100-150k records a month. A lot of queries involve the month and year columns (e.g., all records from March 2010), so we frequently need to get the available month and year combinations (i.e., do we have records for April 2010?). A coworker thinks that we should have a separate table from our main one that only contains the months and years we have data for. We only add records to our main table once a month, so it would just be a small update at the end of our scripts to add the new entry to this second table. This second table would be queried whenever we need to find the available month/year entries in the first table. This solution feels kludgy to me and a violation of DRY. What do you think is the correct way of solving this problem? Is there a better way than having two tables?
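
    For comparison, a sketch of the single-table alternative (table and column names are hypothetical): with a composite index, the distinct month/year pairs can be answered from the index alone rather than by scanning the rows.

        CREATE INDEX ix_records_year_month ON records (year, month);

        SELECT DISTINCT year, month FROM records;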

  • Sqlite3 INSERT INTO Question

    - by user316717
    Hi, my first post. I am creating an exercise app that will record the weight used and the number of "reps" the user did in 4 "sets" per day over a period of 7 days, so the user may view their progress. I have built the database table named FIELDS with 2 columns, ROW and FIELD_DATA, and I can use the code below to load the data into the db. But the code has a SQL statement that says:

        INSERT OR REPLACE INTO FIELDS (ROW, FIELD_DATA) VALUES (%d, '%@');

    When I change the statement to:

        INSERT INTO FIELDS (ROW, FIELD_DATA) VALUES (%d, '%@');

    nothing happens. That is, no data is recorded in the db. Below is the code:

        #define kFilname @"StData.sqlite3"

        - (NSString *)dataFilePath
        {
            NSArray *paths = NSSearchPathForDirectoriesInDomains(NSDocumentDirectory, NSUserDomainMask, YES);
            NSString *documentsDirectory = [paths objectAtIndex:0];
            return [documentsDirectory stringByAppendingPathComponent:kFilname];
        }

        - (IBAction)saveData:(id)sender
        {
            for (int i = 1; i <= 8; i++) {
                NSString *fieldName = [[NSString alloc] initWithFormat:@"field%d", i];
                UITextField *field = [self valueForKey:fieldName];
                [fieldName release];
                NSString *insert = [[NSString alloc] initWithFormat:
                    @"INSERT OR REPLACE INTO FIELDS (ROW, FIELD_DATA) VALUES (%d, '%@');", i, field.text];
                // sqlite3_stmt *stmt;
                char *errorMsg;
                if (sqlite3_exec(database, [insert UTF8String], NULL, NULL, &errorMsg) != SQLITE_OK) {
                    // NSAssert1(0, @"Error updating table: %s", errorMsg);
                    sqlite3_free(errorMsg);
                }
            }
            sqlite3_close(database);
        }

    So how do I modify the code to do what I want? It seemed like a simple SQL statement change at first, but obviously there must be more. I am new to Objective-C and iPhone programming. I am not new to using SQL statements, as I have been creating web apps in ASP for a number of years. Any help will be greatly appreciated; this is driving me nuts! Thanks in advance, Dave
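
    One diagnostic worth trying first, as a sketch: the error branch above discards the failure message (the NSAssert1 is commented out), so a plain INSERT that fails, for example with a constraint violation because a row with that ROW value already exists, fails silently. Logging the message would show why nothing is recorded:

        if (sqlite3_exec(database, [insert UTF8String], NULL, NULL, &errorMsg) != SQLITE_OK) {
            NSLog(@"Error updating table: %s", errorMsg);  // surface the failure
            sqlite3_free(errorMsg);
        }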

  • Abstract away a compound identity value for use in business logic?

    - by John K
    While separating business logic and data access logic into two different assemblies, I want to abstract away the concept of identity so that the business logic deals with one consistent identity without having to understand its actual representation in the data source. I've been calling this a compound identity abstraction. Data sources in this project are swappable and various, and the business logic shouldn't care which data source is currently in use. The identity is the toughest part because its implementation can change per kind of data source, whereas other fields like name, address, etc. are consistently scalar values.

    What I'm searching for is a good way to abstract the concept of identity, whether it be an existing library, a software pattern, or just a solid good idea of some kind. The proposed compound identity value would have to be comparable and usable in the business logic and passed back to the data source to specify records, entities and/or documents to affect, so the data source must be able to parse back out the details of its own compound ids.

    Data source examples (this serves to give an idea of what I mean by various data sources having different identity implementations): a relational data source might express a piece of content with an integer identifier plus a language-specific code. For example:

        content_id | language | (other columns expressing details of content)
        1          | en_us    |
        1          | fr_ca    |

    The identity of the first record in the above example is: 1 + en_us. However, when a NoSQL data source is substituted, it might somehow represent each piece of content with a GUID string 936DA01F-9ABD-4d9d-80C7-02AF85C822A8 plus a language code of a different standardization, and a third kind of data source might use just a simple scalar value. So on and so forth; you get the idea.
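
    A minimal sketch of one pattern that fits this description, written in C# only because the question mentions assemblies (the type and all names are illustrative, not an established library): the business logic treats the identity as an opaque, comparable value, and only the data source that minted it knows how to take it apart.

        using System;
        using System.Linq;

        public sealed class CompoundId : IEquatable<CompoundId>
        {
            private readonly string[] parts;

            public CompoundId(params string[] parts)
            {
                this.parts = parts;
            }

            public bool Equals(CompoundId other)
            {
                return other != null && parts.SequenceEqual(other.parts);
            }

            public override bool Equals(object obj)
            {
                return Equals(obj as CompoundId);
            }

            public override int GetHashCode()
            {
                // Combine the part hashes so equal ids hash alike.
                return parts.Aggregate(17, (h, p) => h * 31 + p.GetHashCode());
            }
        }

    The relational source would construct new CompoundId("1", "en_us") and parse the same two parts back out; the NoSQL source would use its GUID plus language code; the business logic only ever compares the value and passes it around.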

  • Left Join only returning one row

    - by Adam
    I am trying to join two tables. I would like all the columns from the product_category table (there are a total of 6 now) and to count the number of products, CatCount, that are in each category from the products_has_product_category table. My query result is 1 row with the first category and a total count of 68, when I am looking for 6 rows with each individual category's count.

        <?php
        $result = mysql_query("
            SELECT a.*, COUNT(b.category_id) AS CatCount
            FROM `product_category` a
            LEFT JOIN `products_has_product_category` b
                ON a.product_category_id = b.category_id
        ");

        while ($row = mysql_fetch_array($result)) {
            echo '
                <li class="ui-shadow" data-count-theme="d">
                    <a href="' . $row['product_category_ref_page'] . '.php" data-icon="arrow-r" data-iconpos="right">' . $row['product_category_name'] . '</a><span class="ui-li-count">' . $row['CatCount'] . '</span></li>';
        }
        ?>

    I have been working on this for a couple of hours and would really appreciate any help on what I am doing wrong.
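
    A sketch of the likely fix: without a GROUP BY, the aggregate COUNT() collapses the whole join into a single row, so grouping on the category key yields one row (and one count) per category.

        SELECT a.*, COUNT(b.category_id) AS CatCount
        FROM `product_category` a
        LEFT JOIN `products_has_product_category` b
            ON a.product_category_id = b.category_id
        GROUP BY a.product_category_id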

  • Using Gtk.TreeDragSource.drag_data_get()

    - by knutsondc
    I have a simple Gtk.TreeView with a Gtk.ListStore with four columns as the model. I want to be able to drag and drop rows within the TreeView and track row insertions, changes and deletions as they happen, so I can implement undo/redo for drag and drop operations. I'm using PyGObject 3 and Python 3.2. I thought that using methods under the Gtk.TreeDragSource and Gtk.TreeDragDest interfaces would suit my needs perfectly, with Gtk.TreeDragSource.drag_data_get() in my drag_data_get handler and Gtk.TreeDragDest.drag_data_received() or Gtk.tree_get_row_drag_data() in my drag_data_received handler. Basically, what I've tried looks something like this:

        def drag_data_received(self, treeview, context, x, y, selection, info, time):
            treeselection = tv.get_selection()
            model, my_iter = treeselection.get_selected()
            path = model.get_path(my_iter)
            result = Gtk.TreeDragSource.drag_data_get(path, selection)

        def drag_data_received(self, tv, context, x, y, selection, info, time):
            result, model, row = Gtk.tree_get_row_drag_data(selection)
            my_iter = model.get_iter(row)
            data = [model.get_value(my_iter, i) for i in range(model.get_n_columns())]
            drop_info = tv.get_dest_row_at_pos(x, y)
            if drop_info:
                path, position = drop_info
                my_iter = model.get_iter(path)
                if (position == Gtk.TreeViewDropPosition.BEFORE or
                        position == Gtk.TreeViewDropPosition.INTO_OR_BEFORE):
                    model.insert_before(my_iter, [data])
                else:
                    model.insert_after(my_iter, [data])
            else:
                model.append([data])
            if context.get_actions() == Gdk.DragAction.MOVE | Gdk.DragAction.DEFAULT:
                context.finish(True, True, time)
            return

    This fails spectacularly: when Python hits the call to Gtk.TreeDragSource.drag_data_get(), Python crashes and my program window swiftly disappears. I don't even get to the drag_data_received handler. Can anyone point me to some example code showing how these methods using the TreeDragSource and TreeDragDest interfaces work? Any help much appreciated!
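
    A sketch of what the source-side handler could look like instead, assuming the first function above was meant to be the drag-data-get callback: Gtk.tree_set_row_drag_data() is the setter counterpart of the Gtk.tree_get_row_drag_data() used above, and fills the SelectionData with the standard row payload rather than calling the TreeDragSource interface method unbound.

        def drag_data_get(self, treeview, context, selection, info, time):
            model, my_iter = treeview.get_selection().get_selected()
            path = model.get_path(my_iter)
            # Write the row reference into the SelectionData.
            Gtk.tree_set_row_drag_data(selection, model, path)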

  • Add an ellipsis after a number of characters in PhoneGap

    - by user1542984
    I want to add an ellipsis after a certain number of characters in a table view; I am creating the table at run time. Can you suggest how? In one column I want no more than 10 characters, so I need an ellipsis there. I share my code:

        var tbl = document.getElementById('mainBodyDivContent');
        //tbl.innerHTML = "";
        for (var i = 0; i < r.length - 1; i++) {
            var tr = document.createElement('tr');
            tr.setAttribute('id', i);
            (function(id) {
                tr.onclick = function() {
                    if (myScroll.isScrolling) { return; }
                    //window.clearTimeout(myTimedCall);
                    //window.clearInterval(myTimedCall);
                    window.location.href = "route.html?RID=" + r[id].RID + "&StationCode=" + stationCode;
                };
            }(i));
            var td1 = document.createElement('td');
            td1.setAttribute('width', '10%');
            td1.setAttribute("align", "center");
            td1.innerHTML = r[i].platformNo;
            var td2 = document.createElement('td');
            td2.setAttribute('width', '40%');
            td2.setAttribute("align", "center");
            td2.innerHTML = r[i].schDepart + " - " + r[i].expDepart;
            var td3 = document.createElement('td');
            td3.setAttribute('width', '40%');
            td3.setAttribute("align", "center");
            // I want to add an ellipsis here if stationName is more than 10 characters
            td3.innerHTML = r[i].stationName + " (" + r[i].crsCode + ")";
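
    A sketch of a truncation helper for that last cell (the 10-character limit comes from the question; the helper name is illustrative, the rest matches the code above):

        // Truncate to max characters and append an ellipsis when the
        // text is longer than that.
        function truncate(text, max) {
            return text.length > max ? text.substring(0, max) + '...' : text;
        }

        td3.innerHTML = truncate(r[i].stationName, 10) + " (" + r[i].crsCode + ")";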

  • Advanced Query with Join

    - by user1462589
    I'm trying to convert a product table that contains all the details of each product into separate tables in SQL. I've got everything done except for the duplicated descriptor details. The problem I am having is that all the products have size/color/style/other values that many other products share. I want to have only one size or color descriptor row for all the items and reuse its "ID" across all the products, making it a parent key that the products reference via a foreign key. The only problem is that every descriptor would have multiple foreign keys assigned to it. So I was thinking of skipping a parent key per descriptor and instead just checking whether that descriptor already exists; if it does, reuse its key for the descriptor.

        Data table
        PI | Colo | Sz | OTHER
        1  | Blue | 5  | Vintage
        2  | Blue | 6  | Vintage
        3  | Blac | 5  | Simple
        4  | Blac | 6  | Simple

    Its destination table is this:

        DI | Description
        1  | Blue
        2  | Blac
        3  | 5
        4  | 6
        6  | Vintage
        7  | Simple

        Select Data.Table
            Unique.Data.Table.Colo
            Unique.Data.Table.Sz
            Unique.Data.Table.Other

    Then the second part of the question: after we create all the descriptors, how do we run a new query and assign the product IDs to the descriptors?

        PI | DI
        1  | 1
        1  | 3
        1  | 4
        2  | 1
        2  | 3
        2  | 4

    By figuring out how to do this, I should be able to duplicate this pattern for all 300+ columns in the product. Some of these fields are 60+ characters large, so it's going to save a ton of space. Do I use an array?
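
    A sketch of both steps in plain SQL, using hypothetical table names data_table, descriptors and product_descriptors: UNION removes duplicates while gathering every distinct descriptor value, and joining back on the value recovers each product-to-descriptor pair.

        INSERT INTO descriptors (Description)
        SELECT Colo FROM data_table
        UNION
        SELECT Sz FROM data_table
        UNION
        SELECT Other FROM data_table;

        INSERT INTO product_descriptors (PI, DI)
        SELECT d.PI, x.DI
        FROM data_table d
        JOIN descriptors x
          ON x.Description IN (d.Colo, d.Sz, d.Other);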

  • SQL Server: Why use shorter VARCHAR(n) fields?

    - by chryss
    It is frequently advised to choose database field sizes to be as narrow as possible. I am wondering to what degree this applies to SQL Server 2005 VARCHAR columns: storing 10-letter English words in a VARCHAR(255) field will not take up more storage than in a VARCHAR(10) field. Are there other reasons to restrict the size of VARCHAR fields to stick as closely as possible to the size of the data? I'm thinking of:

    1. Performance: is there an advantage to using a smaller n when selecting, filtering and sorting on the data?
    2. Memory, including on the application side (C++)?
    3. Style/validation: how important do you consider restricting column size to force nonsensical data imports to fail (such as 200-character surnames)?
    4. Anything else?

    Background: I help data integrators with the design of data flows into a database-backed system. They have to use an API that restricts their choice of data types. For character data, only VARCHAR(n) with n <= 255 is available; CHAR, NCHAR, NVARCHAR and TEXT are not. We're trying to lay down some "good practice" rules, and the question has come up whether there is a real detriment to using VARCHAR(255) even for data where real maximum sizes will never exceed 30 bytes or so. Typical data volumes for one table are 1-10 million records with up to 150 attributes. Query performance (SELECT, with frequently extensive WHERE clauses) and application-side retrieval performance are paramount.

  • Fast and efficient way to read a space separated file of numbers into an array?

    - by John_Sheares
    I need a fast and efficient method to read a space-separated file with numbers into an array. The files are formatted this way:

        4 6
        1 2 3 4 5 6
        2 5 4 3 21111 101
        3 5 6234 1 2 3
        4 2 33434 4 5 6

    The first row is the dimension of the array [rows columns]. The lines following contain the array data. The data may also be formatted without any newlines, like this:

        4 6
        1 2 3 4 5 6 2 5 4 3 21111 101 3 5 6234 1 2 3 4 2 33434 4 5 6

    I can read the first line and initialize an array with the row and column values. Then I need to fill the array with the data values. My first idea was to read the file line by line and use the split function, but the second format listed gives me pause, because the entire array data would be loaded into memory all at once. Some of these files are in the hundreds of MBs. The second method would be to read the file in chunks and then parse them piece by piece. Maybe somebody else has a better way of doing this?
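
    Since the question doesn't name a language, here is a sketch of the chunk-and-carry approach in Python (function names and buffer size are illustrative): fixed-size chunks are split into tokens, and a number that happens to straddle a chunk boundary is carried over to the next round, so both file layouts parse with flat memory use.

        def tokens(f, bufsize=1 << 16):
            """Yield whitespace-separated tokens from a file, chunk by chunk."""
            leftover = ''
            while True:
                chunk = f.read(bufsize)
                if not chunk:
                    if leftover:
                        yield leftover
                    return
                chunk = leftover + chunk
                parts = chunk.split()
                # A chunk may end mid-number; keep the tail for the next round.
                if not chunk[-1].isspace():
                    leftover = parts.pop()
                else:
                    leftover = ''
                for p in parts:
                    yield p

        def read_matrix(path):
            with open(path) as f:
                it = tokens(f)
                rows, cols = int(next(it)), int(next(it))
                data = [int(next(it)) for _ in range(rows * cols)]
            return rows, cols, data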

  • find a duplicate series in SQL

    - by SomeMiscGuy
    I have a table with 3 columns containing a variable number of records based off of the first column, which is a foreign key. I am trying to determine if I can detect when there is a duplicate across multiple rows for an entire series:

        declare @finddupseries table
        (
            portid     int,
            asset_id   int,
            allocation float
        );

        INSERT INTO @finddupseries
        SELECT 250, 6,  0.05   UNION ALL
        SELECT 250, 66, 0.8    UNION ALL
        SELECT 250, 2,  0.105  UNION ALL
        SELECT 250, 4,  0.0225 UNION ALL
        SELECT 250, 5,  0.0225 UNION ALL
        SELECT 251, 13, 0.6    UNION ALL
        SELECT 251, 2,  0.3    UNION ALL
        SELECT 251, 5,  0.1    UNION ALL
        SELECT 252, 13, 0.8    UNION ALL
        SELECT 252, 2,  0.15   UNION ALL
        SELECT 252, 5,  0.05   UNION ALL
        SELECT 253, 13, 0.4    UNION ALL
        SELECT 253, 2,  0.45   UNION ALL
        SELECT 253, 5,  0.15   UNION ALL
        SELECT 254, 6,  0.05   UNION ALL
        SELECT 254, 66, 0.8    UNION ALL
        SELECT 254, 2,  0.105  UNION ALL
        SELECT 254, 4,  0.0225 UNION ALL
        SELECT 254, 5,  0.0225

        select * from @finddupseries

    The records for portid 250 and 254 match. Is there any way I can write a query to detect this? Edit: yes, the entire series must match. Also, a way to determine which one it DID match would be helpful, as the actual table has around 10k records. Thanks!
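
    A sketch of one approach (relational division): two portids match when every (asset_id, allocation) pair agrees and both series have the same number of rows.

        SELECT a.portid, b.portid AS matching_portid
        FROM @finddupseries a
        JOIN @finddupseries b
          ON  a.asset_id   = b.asset_id
          AND a.allocation = b.allocation
          AND a.portid     < b.portid
        GROUP BY a.portid, b.portid
        HAVING COUNT(*) = (SELECT COUNT(*) FROM @finddupseries x WHERE x.portid = a.portid)
           AND COUNT(*) = (SELECT COUNT(*) FROM @finddupseries y WHERE y.portid = b.portid)

    For 250 and 254 above, the five joined rows equal each side's row count, so the pair survives the HAVING clause and is reported with both ids.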

  • getting duplicated values from GROUP_CONCAT

    - by Sackling
    I am having a problem where I am getting duplicated values where I think I shouldn't be. Here is my SQL:

        SELECT DISTINCT p.products_image, pd.products_name, p.products_id, p.products_model,
               p.manufacturers_id, m.manufacturers_name, p.products_price, p.products_sort_order,
               p.products_tax_class_id, pd.products_viewed,
               group_concat(p2i.icons_id separator ",") AS icon_ids,
               group_concat(pi.icon_class separator ",") AS icon_class,
               IF(s.status, s.specials_new_products_price, NULL) AS specials_new_products_price,
               IF(s.status, s.specials_new_products_price, p.products_price) AS final_price
        FROM products p
        LEFT JOIN specials s ON p.products_id = s.products_id
        LEFT JOIN manufacturers m ON p.manufacturers_id = m.manufacturers_id
        JOIN products_description pd ON p.products_id = pd.products_id
        JOIN products_to_categories p2c ON p.products_id = p2c.products_id
        INNER JOIN products_specifications ps7 ON p.products_id = ps7.products_id
        LEFT JOIN products_to_icon p2i ON p.products_id = p2i.products_id
        LEFT JOIN products_icons pi ON p2i.icons_id = pi.icons_id
        WHERE p.products_status = '1'
          AND pd.language_id = '1'
          AND ps7.specification IN ('Polycotton', 'Reflective')
          AND ps7.specifications_id = '7'
          AND ps7.language_id = '1'
          AND p2c.categories_id = '21'
        GROUP BY p.products_id
        ORDER BY p.products_sort_order

    The column that is getting double values is icon_ids, from the GROUP_CONCAT. This seems to happen only if both 'Polycotton' and 'Reflective' are in ps7.specification; if it is only one or the other, it works fine. The products_to_icon table contains 2 columns, products_id and icons_id. If a product has 2 icons, there are 2 rows, so I'm pretty sure it is this fact that is causing the duplicate icon ids. When I run this, the icon_ids column for a product with 2 icons is "4,4,6,6", for example, when what I need is "4,6".
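
    A sketch of the usual fix: when extra joins multiply the rows feeding an aggregate, DISTINCT inside GROUP_CONCAT collapses the repeats.

        group_concat(DISTINCT p2i.icons_id ORDER BY p2i.icons_id SEPARATOR ',') AS icon_ids,
        group_concat(DISTINCT pi.icon_class ORDER BY pi.icon_class SEPARATOR ',') AS icon_class,

    Here matching two specification rows ('Polycotton' and 'Reflective') doubles every product row before grouping, which is why each icon id appears twice.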

  • Elegant ways to print out a bunch of instance attributes in python 2.6?

    - by wds
    First some background: I'm parsing a simple file format, and wish to re-use the results in Python code later, so I made a very simple class hierarchy and wrote the parser to construct objects from the original records in the text files I'm working from. At the same time, I'd like to load the data into a legacy database, the loader files for which take a simple tab-separated format. The most straightforward way would be to just do something like:

        print "%s\t%s\t...." % (record.id, record.attr1, len(record.attr1), ...)

    Because there are so many columns to print out, though, I thought I'd use the Template class to make it a bit easier to see what's what, i.e.:

        templ = Template("$id\t$attr1\t$attr1_len\t...")

    And I figured I could just use the record in place of the map used by a substitute call, with some additional keywords for derived values:

        print templ.substitute(record, attr1_len=len(record.attr1), ...)

    Unfortunately this fails, complaining that the record instance does not have an attribute __getitem__. So my question is twofold:

    1. Do I need to implement __getitem__, and if so, how?
    2. Is there a more elegant way for something like this, where you just need to output a bunch of attributes you already know the name of?
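
    A sketch of the short answer to the first half, assuming the attributes live in the instance __dict__: vars(record) hands substitute() the mapping it wants, so no __getitem__ is needed.

        print templ.substitute(vars(record), attr1_len=len(record.attr1))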

  • Multisite Enabling a Table

    - by Joe Fitzgibbons
    I am creating a table (table A) that will have a number of columns (of course), and there will be another table (table B) that holds metadata associated with rows in table A. I am working with a multi-site implementation that has one database for the whole shebang. Rows in table A could belong to any number of sites but must belong to at least one. The problem I have is that I am not sure what the best practice is for defining which sites each row in table A belongs to. I want performance and scalability. There is no finite number of sites going forward; rows in table A could belong to any number of sites in the future. Right now there are only 3.

    My initial thought is to have a primary site ID in table A, with rows in table B holding metadata that defines additional sites as needed. Another thought is to have a column in table A for each site, holding a boolean for whether the row belongs to that site. Lastly, I have thought about having another table to map rows in table A to each site. What is the best way to associate rows in a table with any number of sites, with performance and scalability in mind?
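
    For reference, a sketch of the third option, a mapping table (names are hypothetical): an unbounded number of sites per row costs one narrow row per membership, and the composite key keeps lookups in either direction indexed.

        CREATE TABLE table_a_sites (
            row_id  INT NOT NULL,   -- FK to table A
            site_id INT NOT NULL,   -- FK to the sites table
            PRIMARY KEY (row_id, site_id)
        );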

  • Performing a MySQL query based on $_GET results

    - by Michael N
    When a user clicks an item on my items page, it takes them to a blank page template, using $_GET to pass the item brand and model through. I'd like to perform another MySQL query when that user clicks through, to populate the blank page with the product details from my database. I'd like to retrieve the single row using the model number (unique ID) to populate the page with the information. I've tried a couple of things but am having a little difficulty. On my blank item page, I have:

        $brand = $_GET['Brand'];
        $modelnumber = $_GET['ModelNumber'];
        $query = mysql_query("SELECT * FROM items WHERE `Model Number` = '$modelnumber'");
        $results = mysql_fetch_row($query);
        echo $results;

    I think having ''s around Model Number is causing troubles, but without them I get a "Warning: mysql_fetch_row() expects parameter 1 to be resource, boolean given" error. My database columns look like:

        Brand | Model Number | Price | Description | Image

    A few other things I have tried include:

        $query = mysql_query("SELECT * FROM item WHERE Model Number = $_GET['ModelNumber']");

    which gave me a syntax error. I've also tried concatenating the $_GET, which gives me a "mysql_fetch_row() expects parameter 1 to be resource, boolean given" error, which leads me to believe that I'm also going about displaying the results incorrectly. I'm not sure if I need to put it in a while loop like my previous page, which displays all items in the database, because this is just displaying one.
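
    A sketch of a version that escapes the input, keeps the backticks that a space-containing column name requires, and surfaces the query error instead of handing a boolean to the fetch call:

        $modelnumber = mysql_real_escape_string($_GET['ModelNumber']);
        $query = mysql_query("SELECT * FROM items WHERE `Model Number` = '$modelnumber'");
        if (!$query) {
            die(mysql_error());   // shows why the query returned false
        }
        $row = mysql_fetch_assoc($query);   // one row is enough; no loop needed
        echo $row['Description'];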

  • Strange PHP array behavior: rows overwritten with all the same values

    - by dasdas
    I'm doing a simple mysqli query with code I've used many times before, but I have never had this problem happen to me. I am grabbing an entire table with an unknown number of columns (it changes often, so I don't know the exact value, nor the column names). I have some code that uses metadata to grab everything and stick it in an array. This all works fine, but the output is messed up:

        $stmt->execute();   // the query is legit, no problems there
        $meta = $stmt->result_metadata();
        while ($field = $meta->fetch_field()) {
            $params[] = &$row[$field->name];
        }
        call_user_func_array(array($stmt, 'bind_result'), $params);
        while ($stmt->fetch()) {
            $pvalues[++$i] = $row;   // $pvalues is an array of arrays; $row is an array
            //print_r($row);
            print_r($pvalues[$i-1]);
        }
        $stmt->close();

    I would assume that $pvalues has the results that I am looking for. My table currently has 2 rows; $pvalues has array length 2, and both rows in $pvalues are exactly the same. If I use print_r($row), it prints the correct values for both rows, but if I check later what is in $pvalues, it is incorrect (one table row is assigned to both indices of $pvalues). If I use print_r($pvalues[$i-1]), it prints exactly as I expect: the same table row twice. Why isn't the data getting assigned to $pvalues? I know $row holds the right information at one point, but it is getting overwritten or lost.
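
    A sketch of the usual explanation and fix: bind_result() ties the elements of $row to the statement by reference, and copying the array copies those references, so every stored "row" ends up pointing at the last fetched values. Copying element by element breaks the references:

        while ($stmt->fetch()) {
            $copy = array();
            foreach ($row as $key => $value) {
                $copy[$key] = $value;   // plain assignment dereferences
            }
            $pvalues[] = $copy;
        }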
