Search Results

Search found 7311 results on 293 pages for 'rows'.


  • Find Lines with N occurrences of a char

    - by Martín Marconcini
    I have a txt file that I'm trying to import as a flat file into SQL2008 that looks like this:

        “123456”,”some text”
        “543210”,”some more text”
        “111223”,”other text”
        etc…

    The file has more than 300,000 rows and the text is large (usually 200-500 chars), so scanning the file by hand is very time consuming and prone to error. Other similar (and even more complex) files were successfully imported. The problem with this one is that “some lines” contain quotes in the text (this came from an export from an old SuperBase DB that didn't let you specify a text qualifier; there's nothing I can do with the file other than clean it and try to import it). So the “offending” lines look like this:

        “123456”,”this text “contains” a quote”
        “543210”,”And the “above” text is bad”
        etc…

    You can see the problem here. Now, 300,000 lines is not too much if I could perform a search using a text editor that can use regex; I'd manually remove the quotes from each line. The problem is not the number of offending lines, but the impossibility of finding them with a simple search. I'm sure there are fewer than 500, but spread those across a 300,000-line txt file and you know what I mean. Based on that, what would be the best regex I could use to identify these lines? My first thought is: tell me which lines contain more than 4 quotes ("). But I couldn't come up with anything (I'm not good at regex beyond the basics).
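    A hedged suggestion, not from the original post: since a clean line has exactly four quote characters, a pattern that requires five or more should flag only the offending lines in most regex-capable editors, for example:

        ^([^"]*"){5,}

    If the file really contains the curly “ ” characters shown above rather than plain ASCII quotes, substitute that character (or a character class covering both) for the " in the pattern.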

    Read the article

  • Displaying Query Results Horizontally

    - by AndyD273
    I am wondering if it is possible to take the results of a query and return them as a CSV string instead of as a column of cells. Basically, we have a table called Customers, and we have a table called CustomerTypeLines, and each Customer can have multiple CustomerTypeLines. When I run a query against it, I run into problems when I want to check multiple types, for instance:

        Select * from Customers a
        Inner Join CustomerTypeLines b on a.CustomerID = b.CustomerID
        where b.CustomerTypeID = 14 and b.CustomerTypeID = 66

    ...returns nothing, because a customer can't have both on the same line, obviously. In order to make it work, I had to add a field to Customers called CustomerTypes that looks like ,14,66,67, so I can do a

        Where a.CustomerTypes like '%,14,%' and a.CustomerTypes like '%,66,%'

    which returns 85 rows. Of course this is a pain because I have to make my program rebuild this field for that Customer each time the CustomerTypeLines table is changed. It would be nice if I could do a sub query in my where that would do the work for me, so instead of returning the results like:

        14
        66
        67

    it would return them like ,14,66,67, instead. Is this possible?
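    Not from the original post, but if this is MySQL, GROUP_CONCAT can build that kind of CSV on the fly instead of maintaining a denormalized CustomerTypes column (SQL Server would need FOR XML PATH or, in newer versions, STRING_AGG). A sketch:

        -- One row per customer with a ,14,66,67, style list
        SELECT a.CustomerID,
               CONCAT(',', GROUP_CONCAT(b.CustomerTypeID ORDER BY b.CustomerTypeID SEPARATOR ','), ',') AS CustomerTypes
        FROM Customers a
        INNER JOIN CustomerTypeLines b ON a.CustomerID = b.CustomerID
        GROUP BY a.CustomerID;

        -- For the original "has type 14 AND type 66" filter, no CSV is needed at all
        SELECT b.CustomerID
        FROM CustomerTypeLines b
        WHERE b.CustomerTypeID IN (14, 66)
        GROUP BY b.CustomerID
        HAVING COUNT(DISTINCT b.CustomerTypeID) = 2;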

    Read the article

  • Best way to get distinct values from large table

    - by derivation
    I have a db table with about 10 or so columns, two of which are month and year. The table has about 250k rows now, and we expect it to grow by about 100-150k records a month. A lot of queries involve the month and year columns (e.g., all records from March 2010), and so we frequently need to get the available month and year combinations (i.e., do we have records for April 2010?).

    A coworker thinks that we should have a separate table from our main one that only contains the months and years we have data for. We only add records to our main table once a month, so it would just be a small update at the end of our scripts to add the new entry to this second table. This second table would be queried whenever we need to find the available month/year entries in the first table. This solution feels kludgy to me and a violation of DRY.

    What do you think is the correct way of solving this problem? Is there a better way than having two tables?
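    As a hedged aside (the table name below is made up, not from the post): the available combinations can be pulled straight from the main table, and with a composite index on (year, month) this is usually a small index scan, which may make the second table unnecessary.

        SELECT DISTINCT year, month
        FROM records            -- hypothetical name for the main table
        ORDER BY year, month;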

    Read the article

  • Add inputs to more than one row in a cell array in matlab

    - by ZaZu
    Hi there, I would like to know how I can get certain inputs and put them in more than one row in the cell array... I basically want an array that updates one input per row in every loop. The loop is run 30 times, so I want to have 30 rows and 2 columns (x and y columns). I have this code:

        for N = 1:30
            ...
            Binary = bwlabel(blacknwhite);
            s = regionprops(Binary, 'centroid');
            centroids = cat(1, s.Centroid);
            hold(imgca, 'on')
            plot(imgca, centroids(1,1), centroids(1,2), 'r*')
            ...
        end

    I don't think this does what I want... only the first row is updated in my loop. So how can I create this cell array? If you want more info please tell me and I will update it right away. Thanks!

    Read the article

  • reformatting a matrix in matlab with nan values

    - by Kate
    This post follows a previous question regarding the restructuring of a matrix: re-formatting a matrix in matlab

    An additional problem I face is demonstrated by the following example:

        depth = [0:1:20]';
        data = rand(1,length(depth))';
        d = [depth,data];
        d = [d;d(1:20,:);d];

    Here I would like to alter this matrix so that each column represents a specific depth and each row represents time, so eventually I will have 3 rows (i.e. days) and 21 columns (i.e. measurement at each depth). However, we cannot reshape this because the number of measurements for a given day are not the same, i.e. some are missing. This is known by:

        dd = sortrows(d,1);
        for i = 1:length(depth);
            e(i) = length(dd(dd(:,1)==depth(i),:));
        end

    From 'e' we find that the number of depths is different for different days. How could I insert a nan into the matrix so that each day has the same depth values? I could find the unique depths first by:

        unique(d(:,1))

    From this, if a depth (from unique) is missing for a given day, I would like to insert the depth at the correct position and insert a nan into the respective location in the column of data. How can this be achieved?

    Read the article

  • Is this a bad indexing strategy for a table?

    - by llamaoo7
    The table in question is part of a database that a vendor's software uses on our network. The table contains metadata about files. The schema of the table is as follows:

        Metadata
            ResultID        (PK, int, not null)
            MappedFieldname (char(50), not null)
            Fieldname       (PK, char(50), not null)
            Fieldvalue      (text, null)

    There is a clustered index on ResultID and Fieldname. This table typically contains millions of rows (in one case, it contains 500 million). The table is populated by 24 workers running 4 threads each when data is being "processed". This results in many non-sequential inserts. Later, after processing, more data is inserted into this table by some of our in-house software. The fragmentation for a given table is at least 50%. In the case of the largest table, it is at 90%. We do not have a DBA. I am aware we desperately need a DB maintenance strategy. As for my background, I'm a college student working part time at this company.

    My question is this: is a clustered index the best way to go about this? Should another index be considered? Are there any good references for this type of task and similar ad-hoc DBA tasks?
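    A hedged sketch of the maintenance side (SQL Server syntax, dbo schema assumed; the thresholds are judgment calls, not vendor guidance): check fragmentation with the physical-stats DMV and rebuild or reorganize on a schedule.

        -- How fragmented are the indexes on this table?
        SELECT index_id, avg_fragmentation_in_percent
        FROM sys.dm_db_index_physical_stats(DB_ID(), OBJECT_ID('dbo.Metadata'), NULL, NULL, 'LIMITED');

        -- Rebuild when fragmentation is high; a lower fill factor leaves room
        -- for the non-sequential inserts described above
        ALTER INDEX ALL ON dbo.Metadata REBUILD WITH (FILLFACTOR = 80);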

    Read the article

  • Need help INSERT record(s) MySQL DB

    - by JM4
    I have an online form which collects member(s) information and stores it in a single, very long MySQL row. We allow up to 16 members to enroll at a single time and originally structured the DB to allow that. For example: if 1 member enrolls, his personal information (first name, last name, address, phone, email) is stored in a single row. If 15 members enroll (all at once), their personal information is stored in the same single row. The row has columns for all 'possible' inputs. I am trying to consolidate this code and have every nth member that enrolls put onto a new record within the database. I have seen suggestions before for inserting multiple records, such as:

        INSERT INTO tablename VALUES (('$f1name', '$f1address', '$f1phone'), ('$f2name', '$f2address', '$f2phone')...

    The issue with this is twofold:

    1) I do not know how many records are being enrolled from person to person, so the only way to build the statement above is to use a loop.
    2) The information collected from the forms is NOT a single array, so I can't loop through one array and have it parse out. My information is collected as individual input fields like this: Member1FirstName, Member1LastName, Member1Phone, Member2FirstName, Member2LastName, Member2Phone... and so on.

    Is it possible to store the information in separate rows WITHOUT using a loop (and therefore having to go back and completely restructure my form field names and such, which can't happen due to the way the validation rules are built)?

    Read the article

  • What constitutes explicit creation of entities in LINQ to SQL? What elegant "solutions" are there to

    - by Marcelo Zabani
    Hi SO, I've been having problems with the rather famous "Explicit construction of entity type '##' in query is not allowed." error. Now, from what I understand, this exists because if explicit construction of these objects were allowed, tracking changes to the database would be very complicated. So I ask: what constitutes the explicit creation of these objects? In other terms: why can I do this:

        Product foo = new Product();
        foo.productName = "Something";

    but can't do this:

        var bar = (from item in myDataContext.Products
                   select new Product { productName = item.productName }).ToList();

    I think that when running the LINQ query, some kind of association is made between the objects selected and the table rows retrieved (and this is why newing a Product in the first snippet of code is no problem at all, because no associations were made). I, however, would like to understand this a little more in depth (and this is my first question to you: what is the difference between one snippet of code and the other?). Now, I've heard of a few ways to attack this problem:

    1) The creation of a class that inherits from the LINQ class (or one that has the same properties)
    2) Selecting anonymous objects

    And this leads me to my second question: if you chose one of the two approaches above, which one did you choose and why? What other problems did your approach introduce? Are there any other approaches?

    Read the article

  • dm_exec_query_stats returning stale data?

    - by VoiceOfUnreason
    I've been testing my app on a SQL Server 2005 database, and am trying to establish a preliminary picture of the query performance using sys.dm_exec_query_stats.

    Problem: there's a particular query that I'm interested in, because total_elapsed_time and last_elapsed_time are both large numbers. When I tickle my app to invoke that query (this runs successfully), then refresh my view of the stats, I find that:

    1) execution_count has incremented (expected)
    2) last_execution_time has updated to now (expected)
    3) last_elapsed_time is still a large value (not expected - I anticipated a new value)
    4) total_elapsed_time is unchanged (contradiction?)

    If last_elapsed_time refers to the execution that happened @ last_execution_time, then total_elapsed_time should have increased. This documentation: http://msdn.microsoft.com/en-us/library/ms189741(SQL.90).aspx tells me that last_execution_time is the last time the plan was executed, and last_elapsed_time comes from the "most recently executed plan", but doesn't tell me why those might be different. The query itself is uncomplicated (SELECT/WHERE/ORDER BY - parameters appearing in the where clause, but no clever operations), and the table has maybe 25 rows in it right now.

    Questions:

    1) What's the real relationship between execution_count, last_execution_time, and last_elapsed_time?
    2) Where is the documentation of this relationship (manual, third party book, blog, bug ticket, stone tablets...)?
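    One thing worth checking, offered as a guess rather than an answer from the thread: sys.dm_exec_query_stats keeps one row per statement per cached plan, so if the query of interest has more than one plan or statement in cache (different SET options, a recompile, an ad-hoc copy alongside a parameterized one), the row being refreshed may not be the plan that just ran. Comparing creation_time and the handles across refreshes can confirm that, for example:

        SELECT qs.creation_time, qs.execution_count, qs.last_execution_time,
               qs.last_elapsed_time, qs.total_elapsed_time, st.text
        FROM sys.dm_exec_query_stats AS qs
        CROSS APPLY sys.dm_exec_sql_text(qs.sql_handle) AS st
        WHERE st.text LIKE '%my_table%'      -- hypothetical filter for the query of interest
        ORDER BY qs.last_execution_time DESC;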

    Read the article

  • Why doesn't my pagination link appear?

    - by udaya
    This is the script I have used to paginate. The data is restricted to 4 rows per page, but the pagination links don't appear:

        <?
        require_once ('Pager/Pager.php');
        $connection = mysql_connect("localhost", "root", "");
        mysql_select_db("ssit", $connection);

        $result = mysql_query("SELECT dFrindName FROM tbl_friendslist", $connection);
        $row = mysql_fetch_array($result);
        $totalItems = $row['total'];

        $pager_options = array(
            'mode'       => 'Sliding',   // Sliding or Jumping mode. See below.
            'perPage'    => 4,           // Total rows to show per page
            'delta'      => 4,           // See below
            'totalItems' => $totalItems,
        );
        $pager = Pager::factory($pager_options);
        echo $pager->links;

        list($from, $to) = $pager->getOffsetByPageId();
        $from = $from - 1;
        $perPage = $pager_options['perPage'];
        $result = mysql_query("SELECT * FROM tbl_friendslist LIMIT 5 , $perPage", $connection);
        while($row = mysql_fetch_array($result))
        {
            echo $row['dFrindName'].'</br>';
        }
        ?>
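    A likely cause, offered as a hedged guess: $totalItems is read from $row['total'], but the first query never selects a column named total, so the Pager is built with an empty item count and renders no links. A count query along these lines (and using $from rather than the hard-coded 5 in the final LIMIT) should fix it:

        SELECT COUNT(*) AS total FROM tbl_friendslist;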

    Read the article

  • Joining tables, if percentage is above certain value

    - by CluelessGerman
    My question is similar to this one: Compare rows and get percentage. However, it's a little different. I adapted my question to the other post. I've got 2 tables.

    First table:

        user_id | post_id
        1       | 1
        1       | 2
        1       | 3
        2       | 12
        2       | 15

    And second table:

        post_id | rating
        1       | 1
        1       | 2
        1       | 3
        2       | 1
        2       | 5
        3       | null
        3       | 1
        3       | 4
        12      | 4
        15      | 1

    So now I would like to count the ratings for each post in the second table. If a post has more than, let's say, 50% positive ratings, then I want to get its post_id, join it to the post_id from table one, and add 1 to that user_id. At the end it would return each user_id with its number of positive posts. The result for the table above would be:

        user_id | helpfulPosts
        1       | 2
        2       | 1

    The posts with post_id 1 and 3 have a positive rating, because more than 50% of their ratings are 1-3. The post with id = 2 is not positive, because the rating is exactly 50%. How would I achieve this?

    For clarification: it's a MySQL RDBMS, and a positive post is one where the number of rating ids of 1, 2 and 3 is more than half of the overall ratings. Basically the same thing as in the other thread I posted above.
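    A hedged sketch of one way to do this in MySQL (the table names user_posts and post_ratings are made up, and the handling of NULL ratings and of the exact-50% boundary may need adjusting to taste):

        SELECT up.user_id, COUNT(*) AS helpfulPosts
        FROM user_posts up
        JOIN (
            SELECT post_id
            FROM post_ratings
            GROUP BY post_id
            HAVING SUM(rating BETWEEN 1 AND 3) > COUNT(rating) / 2   -- strictly more than half positive
        ) positive ON positive.post_id = up.post_id
        GROUP BY up.user_id;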

    Read the article

  • highlighting data values in a sql result set

    - by potatoe
    I can think of a number of ways to do this in PHP or even JavaScript, but I'm wondering if there's a SQL-based technique I'm overlooking. I have a database table, let's say 20 fields x 10 rows. I want to display the entire table on a web page, so I'd do something like SELECT * FROM data_table; and then format the result set using HTML table tags. However, I'd also like to highlight values in the table based on whether they are the maximum or minimum value in their column. For example, I'd add bold tags around the max in each column. A resulting table might look something like this, with bold tags shown:

        id | field1   | field2   | field3   | ...
        0  | 5        | 2        | <b>7</b> | ...
        1  | 3        | <b>8</b> | 6        | ...
        2  | <b>9</b> | 5        | 1        | ...
        ...

    I could do a separate SELECT with an ORDER BY for each field and then interpret the results, but that seems like a lot of extra DB access. My alternative right now is to just fetch the whole table and then sort/search for the highlight values using PHP. Is there a better way?
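    A hedged middle ground between 20 ORDER BY queries and doing it all in PHP: one extra aggregate query returns every column's max and min in a single row, and the display loop just compares each value against it while printing (the field names below are from the example above):

        SELECT MAX(field1) AS max_field1, MIN(field1) AS min_field1,
               MAX(field2) AS max_field2, MIN(field2) AS min_field2,
               MAX(field3) AS max_field3, MIN(field3) AS min_field3
        FROM data_table;   -- repeat the MAX/MIN pair for each remaining column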

    Read the article

  • OSX: Programmatically added subviews not responding to mouse down events

    - by BigCola
    I have 3 subclasses: a Block class, a Row class and a Table class. All are subclasses of NSView. I have a Table added with IB which programmatically displays 8 rows, each of which displays 8 blocks. I overrode the mouseDown: method in Block to change the background color to red, but it doesn't work. Still, if I add a block directly on top of the Table with IB it does work, so I can't understand why it won't work in the first case. Here's the implementation code for Block and Row (Table's implementation works the same way as Row's):

        //block.m
        - (void)drawRect:(NSRect)dirtyRect {
            [color set];
            [NSBezierPath fillRect:dirtyRect];
        }

        - (void)mouseDown:(NSEvent *)theEvent {
            color = [NSColor redColor];
            checked = YES;
            [self setNeedsDisplay:YES];
        }

        //row.m
        - (void)drawRect:(NSRect)dirtyRect {
            [[NSColor blueColor] set];
            [NSBezierPath fillRect:dirtyRect];
            int x;
            for(x=0; x<8; x++){
                int margin = x*2;
                NSRect rect = NSMakeRect(0, 50*x+margin, 50, 50);
                Block *block = [[Block alloc] initWithFrame:rect];
                [self addSubview:block];
            }
        }

    Read the article

  • A self-creator: What pattern is this? php

    - by user151841
    I have several classes that are basically interfaces to database rows. Since the class assumes that a row already exists (__construct expects a field value), there is a public static function that allows creation of the row and returns an instance of the class. Here's a pseudo-code example:

        class fruit {
            public $id;

            public function __construct( $id ) {
                $this->id = $id;
                $sql = "SELECT * FROM Fruits WHERE id = $id";
                ...
                $this->arrFieldValues[$field] = $row[$value];
            }

            public function __get( $var ) {
                return $this->arrFieldValues[$var];
            }

            public function __set( $var, $val ) {
                $sql = "UPDATE fruits SET $var = $val WHERE id = $this->id";
            }

            public static function create( $fruit ) {
                $sql = "INSERT INTO Fruits ( fruit_name ) VALUE ( '$fruit' )";
                $id = mysql_insert_id();
                $fruit = & new fruit($id);
                return $fruit;
            }
        }

        $obj1 = fruit::create( "apple" );
        $obj2 = & new fruit( 12 );

    What is this pattern called?

    Edit: I changed the example to one that has more database-interface functionality. Most of the time, this kind of class would be instantiated normally, through __construct(). But sometimes, when you need to create a new row first, you would call create().

    Read the article

  • Is there a way to restart a cursor? Oracle

    - by Solid1Snake1
    I am trying to do something such as:

        for(int i = 0; i<10; i++) {
            for(int j = 0; j<10; j++) {
                Blah;
            }
        }
        // As you can see, each time that there is a different i, j starts at 0 again.

    using cursors in Oracle. But if I'm correct, after I fetch all rows from a cursor, it will not restart. Is there a way to do this? Here is my sql:

        CREATE OR REPLACE PROCEDURE SSACHDEV.SyncTeleappWithClientinfo as
            teleCase NUMBER;
            CURSOR TeleAppCursor is Select distinct(casenbr) from TeleApp;
            CURSOR ClientInfoCursor is Select casenbr from clientinfo where trim(cashwithappyn) is null;
        BEGIN
            open TeleAppCursor;
            open ClientInfoCursor;
            LOOP
                fetch TeleAppCursor into teleCase;
                EXIT when TeleAppCursor%NOTFOUND;
                LOOP
                    fetch ClientInfoCursor into clientCase;
                    EXIT when ClientInfoCursor%NOTFOUND;
                    if clientCase = teleCase then
                        update ClientInfo
                        set cashwithappyn = (select cashwithappyn from teleapp where casenbr = clientCase)
                        where casenbr = clientCase;
                        break;
                    end if;
                END LOOP;
            END LOOP;
        END;

    I did check online and was unable to find anything on this.
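    Two hedged notes, not from the original thread: a cursor can be re-read by CLOSEing it and OPENing it again inside the outer loop (or by using a nested cursor FOR loop, which opens a fresh cursor on every outer iteration). In this particular case, though, the nested loops can probably be replaced by a single correlated UPDATE; MAX() below is just a guard in case teleapp has several rows per casenbr:

        UPDATE clientinfo c
        SET cashwithappyn = (SELECT MAX(t.cashwithappyn)
                             FROM teleapp t
                             WHERE t.casenbr = c.casenbr)
        WHERE TRIM(c.cashwithappyn) IS NULL
          AND EXISTS (SELECT 1 FROM teleapp t WHERE t.casenbr = c.casenbr);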

    Read the article

  • Assigning values from list to list excluding nulls

    - by GutierrezDev
    Hi. Sorry for the title, but I don't know another way of asking. I've got 2 Dictionary<string, string> lists. One of them has a length of 606 items including null values. The other one has a length of 285 items. I determined the length by doing this:

        int count = 0;
        for (int i = 0; i < temp.Length; i++)
        {
            if (temp[i] != null)
                count++;
        }

    Now I want to assign every value in the temp variable to another variable, excluding the null values. Any idea? Remember that I have different-size variables.

    Edited:

        Dictionary<string, string>[] info;
        Dictionary<string, string>[] temp = new Dictionary<string, string>[ds.Tables[0].Rows.Count];

        .... Here I added some data to the Temp variable .....

    Then this:

        int count = 0;
        for (int i = 0; i < temp.Length; i++)
        {
            if (temp[i] != null)
                count++;
        }
        info = new Dictionary<string, string>[count];

    Hope you understand now.

    Read the article

  • How to convert object to string list?

    - by user1381501
    I want to get two values by using a LINQ select query and am trying to convert the result, a List<User>, to a List<string>. The code is below. I get the error when I convert the object list to a string list at: return returnvalue = (List<string>)userlist;

        public List<string> GetUserList(string username)
        {
            List<User> UserList = new List<User>();
            List<string> returnvalue = new List<string>();
            try
            {
                string returnstring = string.Empty;
                DataTable dt = null;
                dt = Library.Helper.FindUser(username, 200);
                foreach (DataRow dr in dt.Rows)
                {
                    User user = new User();
                    spuser.id = dr["ID"].ToString();
                    spuser.name = dr["Name"].ToString();
                    UserList.Adduser
                }
            }
            catch (Exception ex)
            {
            }

            List<SharePointMentoinUser> userlist = UserList.Select(a => new User { name = (string)a.name, id = (string)a.id }).ToList();

            **return returnvalue = (List<string>)userlist;**
        }

    Read the article

  • CoffeeScript 2 Dimensional Array Usage

    - by Chris
    I feel like I'm missing something with CoffeeScript and 2-dimensional arrays. I'm simply attempting to make a grid of spaces (think checkers). After some searching and a discovery with the arrays.map function, I came up with this:

        @spaces = [0...20].map (x) -> [0...20].map (y) -> new Elements.Space()

    And this seems to work great; I have a nice 2-dimensional array with my Space object created in each. But now I want to send the created Space constructor the x, y location. Because I'm two layers deep, I lost the x variable when I entered the map function for y. Ideally I would want to do something like:

        @spaces = [0...20].map (x) -> [0...20].map (y) -> new Elements.Space(x, y)

    or something that feels more natural to me, like:

        for row in rows
          for column in row
            @spaces[row][column] = new Elements.Space(row, column)

    I'm really open to any better way of doing this. I know how I would do it in standard JavaScript, but I'd really like to learn how to do it in CoffeeScript.

    Read the article

  • Sinatra: rendering snippets (partials)

    - by Michael
    I'm following along with an O'Reilly book that's building a Twitter clone with Sinatra. As Sinatra doesn't have 'partials' (in the way that Rails does), the author creates his own 'snippets' that work like partials. I understand that this is fairly common in Sinatra. Anyways, inside one of his snippets (see the first one below) he calls another snippet, text_limiter_js (which is copied below). Text_limiter_js is basically a JavaScript function. If you look at the JavaScript function in text_limiter_js, you'll notice that it takes two parameters. I don't understand where these parameters are coming from, because they're not getting passed in when text_limiter_js is rendered inside the other snippet. I'm not sure if I've given enough information/code for someone to help me understand this, but if you can, please explain.

        =snippet :'/snippets/text_limiter_js'
        %h2.comic What are you doing?
        %form{:method => 'post', :action => '/update'}
          %textarea.update.span-15#update{:name => 'status', :rows => 2, :onKeyDown => "text_limiter($('#update'), $('#counter'))"}
          .span-6
            %span#counter 140 characters left
          .prepend-12
            %input#button{:type => 'submit', :value => 'update'}

    text_limiter_js.haml:

        :javascript
          function text_limiter(field, counter_field) {
            limit = 139;
            if (field.val().length > limit)
              field.val(field.val().substring(0, limit));
            else
              counter_field.text(limit - field.val().length);
          }

    Read the article

  • Left Join only returning one row

    - by Adam
    I am trying to join two tables. I would like all the columns from the product_category table (there are a total of 6 now) and a count of the products, CatCount, that are in each category from the products_has_product_category table. My query result is 1 row with the first category and a total count of 68, when I am looking for 6 rows with each individual category's count.

        <?php
        $result = mysql_query("
            SELECT a.*, COUNT(b.category_id) AS CatCount
            FROM `product_category` a
            LEFT JOIN `products_has_product_category` b
                ON a.product_category_id = b.category_id
        ");

        while($row = mysql_fetch_array($result)) {
            echo '
            <li class="ui-shadow" data-count-theme="d">
            <a href="' . $row['product_category_ref_page'] . '.php" data-icon="arrow-r" data-iconpos="right">' . $row['product_category_name'] . '</a><span class="ui-li-count">' . $row['CatCount'] . '</span></li>';
        }
        ?>

    I have been working on this for a couple of hours and would really appreciate any help on what I am doing wrong.
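    The likely culprit, hedged: without a GROUP BY, MySQL treats COUNT() as a single aggregate over the whole join, so everything collapses into one row. Grouping by the category should give one count per category, and categories with no products still show 0 because COUNT(b.category_id) ignores the NULLs from the LEFT JOIN:

        SELECT a.*, COUNT(b.category_id) AS CatCount
        FROM `product_category` a
        LEFT JOIN `products_has_product_category` b
               ON a.product_category_id = b.category_id
        GROUP BY a.product_category_id;

    (Under strict ONLY_FULL_GROUP_BY settings you may need to list the selected columns explicitly instead of relying on a.*.)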

    Read the article

  • Bind event to AJAX populated list items

    - by AnPel
    I have an unordered list with no list items. I get some information from the user, I run it through my database using AJAX, and the server sends back a JSON object response. Then I append each list item I want to display to the list using something like $('blabla').append('<li>information</li>'). My question is: since the li elements were not there at the time the DOM was ready, how can I bind a click event to them? Here is my full code:

        $(function(){
            var timer;
            $('#f').keyup(function(){
                clearTimeout(timer);
                timer = setTimeout(getSuggestions, 400);
            });
        })

        function getSuggestions(){
            var a = $('#f')[0].value.length;
            if( a < 3){
                if(a == 0){
                    $('#ajaxerror').hide(300,function(){
                        $('#loading').hide(300,function(){
                            $('#suggest').hide(300)
                        })
                    })
                }
                return;
            }
            $('#loading').slideDown(200,function(){
                $.ajax({
                    url: '/models/AJAX/suggest.php',
                    dataType: 'json',
                    data: {'data' : $('#f')[0].value },
                    success: function(response){
                        $('#ajaxerror').hide(0,function(){
                            $('#loading').hide(100,function(){
                                $('#suggest ul li').remove();
                                for(var b = 0; b < ( response.rows * 3); b = b + 3){
                                    $('#suggest ul').append('<li>'+response[b]+' '+response[b+1]+' '+response[b+2]+'</li>')
                                    // MISSING SOME CODE HERE TO BIND CLICK EVENT TO NEWLY CREATED LI
                                }
                                $('#suggest').show(300)
                            })
                        })
                    },
                    error: function(){
                        $('#suggest').hide(0,function(){
                            $('#loading').slideUp(100,function(){
                                $('#ajaxerror').show(300)
                            })
                        })
                    }
                })
            })
        }

    Read the article

  • Query to bring count from comma separated value

    - by Mugil
    I have two tables, one for storing products and the other for storing the orders list.

        CREATE TABLE ProductsList(ProductId INT NOT NULL PRIMARY KEY,
                                  ProductName VARCHAR(50));

        INSERT INTO ProductsList(ProductId, ProductName)
        VALUES(1,'Product A'), (2,'Product B'), (3,'Product C'), (4,'Product D'),
              (5,'Product E'), (6,'Product F'), (7,'Product G'), (8,'Product H'),
              (9,'Product I'), (10,'Product J');

        CREATE TABLE OrderList(OrderId INT NOT NULL PRIMARY KEY AUTO_INCREMENT,
                               EmailId VARCHAR(50),
                               CSVProductIds VARCHAR(50));

        SELECT * FROM OrderList;

        INSERT INTO OrderList(EmailId, CSVProductIds)
        VALUES('[email protected]', '2,4,1,5,7'),
              ('[email protected]', '5,7,4'),
              ('[email protected]', '2'),
              ('[email protected]', '8,9'),
              ('[email protected]', '4,5,9'),
              ('[email protected]', '1,2,3'),
              ('[email protected]', '9,10'),
              ('[email protected]', '1,5');

    The OrderList table stores the item ids as a comma separated value for every customer who places an order. I have more than 40k records like this in my DB table. Now I have been assigned the task of creating a report in which I should display each item and the number of people who ordered it, as shown below:

        Output

        ItemName     NoOfOrders
        Product A    4
        Product B    3
        Product C    1
        Product D    3
        Product E    4
        Product F    0
        Product G    2
        Product H    1
        Product I    2
        Product J    1

    I used a query like the one below in my PHP to bring back the orders one by one and store them in an array:

        SELECT COUNT(PL.EmailId)
        FROM OrderList PL
        WHERE CSVProductIds LIKE '2'
           OR CSVProductIds LIKE '%,2,%'
           OR CSVProductIds LIKE '%,2'
           OR CSVProductIds LIKE '2,%';

    1. Is it possible to get the same output by using a single query?
    2. Does using a LIKE in a MySQL query slow down the DB when the table has more records, i.e. 40k rows?
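    A hedged single-query sketch using MySQL's FIND_IN_SET, which replaces the four LIKE patterns per product (note that neither FIND_IN_SET nor LIKE can use an index on CSVProductIds; normalizing the order items into their own table is the usual long-term fix for 40k+ rows):

        SELECT p.ProductName AS ItemName,
               COUNT(o.OrderId) AS NoOfOrders
        FROM ProductsList p
        LEFT JOIN OrderList o
               ON FIND_IN_SET(p.ProductId, o.CSVProductIds) > 0
        GROUP BY p.ProductId, p.ProductName
        ORDER BY p.ProductId;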

    Read the article

  • mysql report sql help

    - by sfgroups
    I have a MySQL table with data like this. Each record is either a server with its total CPU count or a virtual server with the CPUs assigned to it:

        type, cpu
        srv1, 10
        vsrv11, 2
        vsrv12, 3
        srv2, 15
        vsrv21, 6
        vsrv22, 7
        vsrv23, 1

    From the above data, I want to create output like this:

        server, total cpu, assigned cpu, free cpu
        srv1, 10, 5, 5
        srv2, 15, 14, 1

    Can you help me with creating a SQL query for this report? I have changed my table and data to this:

        CREATE TABLE `cpuallocation` (
          `servertype` varchar(10) DEFAULT NULL,
          `servername` varchar(20) DEFAULT NULL,
          `hostname` varchar(20) DEFAULT NULL,
          `cpu_count` float DEFAULT NULL,
          UNIQUE KEY `server_uniq_idx` (`servertype`,`servername`,`hostname`)
        );

        insert into cpuallocation values('srv', 'server1', '',16);
        insert into cpuallocation values('vir', 'server1', 'host1',5);
        insert into cpuallocation values('vir', 'server1', 'host2',2.5);
        insert into cpuallocation values('vir', 'server1', 'host3',4.5);
        insert into cpuallocation values('srv', 'server2', '',8);
        insert into cpuallocation values('vir', 'server2', 'host1',5);
        insert into cpuallocation values('vir', 'server2', 'host2',2.5);
        insert into cpuallocation values('srv', 'server3', '',24);
        insert into cpuallocation values('vir', 'server3', 'host1',12);
        insert into cpuallocation values('vir', 'server3', 'host2',2);
        insert into cpuallocation values('srv', 'server4', '',12);

    Update: I created two views, and now I'm getting the result I want.

        create view v1 as
            select servername, sum(cpu_count) as cpu_allocated
            from cpuallocation
            where servertype='vir'
            group by servername;

        create view v2 as
            select servername, cpu_count as total_cpu
            from cpuallocation
            where servertype='srv';

        select a.servername, a.total_cpu, b.cpu_allocated
        from v2 as a
        left join v1 as b on a.servername=b.servername;

        +------------+-----------+---------------+
        | servername | total_cpu | cpu_allocated |
        +------------+-----------+---------------+
        | server1    |        16 |            12 |
        | server2    |         8 |           7.5 |
        | server3    |        24 |            14 |
        | server4    |        12 |          NULL |
        +------------+-----------+---------------+
        4 rows in set (0.00 sec)

    Is it possible to create a query without creating views?
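    A hedged way to get the same report without the views, using a single self-join and GROUP BY:

        SELECT s.servername,
               s.cpu_count AS total_cpu,
               SUM(v.cpu_count) AS cpu_allocated,
               s.cpu_count - COALESCE(SUM(v.cpu_count), 0) AS free_cpu
        FROM cpuallocation s
        LEFT JOIN cpuallocation v
               ON v.servername = s.servername AND v.servertype = 'vir'
        WHERE s.servertype = 'srv'
        GROUP BY s.servername, s.cpu_count;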

    Read the article

  • update entire table with pdo

    - by MephDaddy
    I am working on a simple gaming ladder script. I am having little to no luck trying to find an effective way to reset my ladder information while leaving my table's id and name fields intact. I am trying to create a loop to update my entire table, similar to the way I draw my table, shown below.

        ......
        //Start displaying ladder with the team with most wins at the top
        echo "<TABLE border=1 width=500 align=center><TR>";
        foreach($db->query('SELECT * FROM test ORDER BY win DESC , name ASC') as $row)
        {
            echo "<TR><TD>" . $row['name'] . "</TD><TD>" . $row['win'] . "</TD><TD>";
            echo $row['loss'] . "</TD><TD>" . $row['battles'] . "</TD><TD>";
            echo $row['score'] . "</TD></TR>";
        }
        ......

    I currently have a table with 6 fields (id, name, win, loss, battles, score). I want to reset the values of win, loss, battles, and score back to 0 while leaving id and name alone, effectively resetting my ladder so a new season can begin. The only way I have been able to accomplish this is to find out how many rows there are and run a for loop. It seems very inefficient. I was hoping I could get some better insight as to how to go about this.
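    Offered as a sketch rather than the only way: because the reset touches every row the same way, a single UPDATE with no WHERE clause does it, so no SELECT or loop is needed; PDO's exec() will run it and return the number of rows affected.

        UPDATE test
        SET win = 0, loss = 0, battles = 0, score = 0;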

    Read the article

  • Fast and efficient way to read a space separated file of numbers into an array?

    - by John_Sheares
    I need a fast and efficient method to read a space separated file of numbers into an array. The files are formatted this way:

        4 6
        1 2 3 4 5 6
        2 5 4 3 21111 101
        3 5 6234 1 2 3
        4 2 33434 4 5 6

    The first row is the dimension of the array [rows columns]. The lines following contain the array data. The data may also be formatted without any newlines, like this:

        4 6 1 2 3 4 5 6 2 5 4 3 21111 101 3 5 6234 1 2 3 4 2 33434 4 5 6

    I can read the first line and initialize an array with the row and column values. Then I need to fill the array with the data values. My first idea was to read the file line by line and use the split function. But the second format listed gives me pause, because the entire array data would be loaded into memory all at once. Some of these files are in the hundreds of MBs. The second method would be to read the file in chunks and then parse them piece by piece. Maybe somebody else has a better way of doing this?

    Read the article
