Search Results

Search found 5233 results on 210 pages for 'records'.

  • Core Data fetch request with array

    - by JK
    I am trying to set up a fetch request with a predicate that obtains the records in the store whose identifier attribute matches an array of identifiers specified in the predicate, e.g.:

        NSString *predicateString = [NSString stringWithFormat:@"identifier IN %@", employeeIDsArray];

    The employeeIDsArray contains a number of NSNumber objects that match IDs in the store. However, I get the error "Unable to parse the format string". This type of predicate works if it is used for filtering an array but, as mentioned, fails for a Core Data fetch. How should I set up the predicate, please?

  • Efficient way to delete a line from a text file (C#)

    - by Valentin Vasilyev
    Hello. I need to delete a certain line from a text file. What is the most efficient way of doing this? The file can potentially be large (over a million records). Thank you. UPDATE: below is the code I'm currently using, but I'm not sure if it is good:

        internal void DeleteMarkedEntries()
        {
            string tempPath = Path.GetTempFileName();
            using (var reader = new StreamReader(logPath))
            {
                using (var writer = new StreamWriter(File.OpenWrite(tempPath)))
                {
                    int counter = 0;
                    while (!reader.EndOfStream)
                    {
                        // always consume the line, so the reader stays in step with the counter
                        string line = reader.ReadLine();
                        if (!_deletedLines.Contains(counter))
                        {
                            writer.WriteLine(line);
                        }
                        ++counter;
                    }
                }
            }
            if (File.Exists(tempPath))
            {
                File.Delete(logPath);
                File.Move(tempPath, logPath);
            }
        }

  • Exploring search options for PHP

    - by Joshua
    I have an InnoDB table using numerous foreign keys, but we just want to look up some basic info out of it. I've done some research but I'm still lost.

    1) How can I tell if my host has Sphinx installed already? I don't see it as an option for the table storage engine (i.e. InnoDB, MyISAM).

    2) Is Zend_Search_Lucene responsive enough for AJAX functionality over millions of records?

    3) Could I mirror my InnoDB table with a MyISAM copy? Make every InnoDB transaction end with a write to the MyISAM table, then use 1:1 lookups? How would I do this automagically? This should make the MyISAM copy ACID-compliant and free(er) from corruption, no?

    4) PostgreSQL full-text queries don't even look like SQL to me; I don't have time to learn a new SQL syntax, I need noob options.

    5) ????????????????????

    This is a high-volume site on a decently-equipped VPS. Thanks very much for any ideas.

  • PHP Export to CSV

    - by Ali Hamra
    I'm not really familiar with PHP exporting to Excel or CSV, but I'm using PHP and MySQL for a local point of sale. The code below actually works, but not the way it should: all records are placed in a single row of the CSV file. How can I fix that? Also, how would I stop overwriting the same file? I mean, when I click a button to export the CSV, it should check whether there is an existing CSV file, and if there is, create a new one. Thank you.

        require_once('connect_db.php');

        $items_array = array();
        $result = mysql_query("SELECT * FROM sold_items");
        while ($row = mysql_fetch_array($result)) {
            $items_array[] = $row['item_no'];
            $items_array[] = $row['qty'];
        }

        $f = fopen('C:/mycsv.csv', 'w');
        fputcsv($f, $items_array);
        fclose($f);
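
    For comparison, a hedged SQL-only sketch: MySQL can write the CSV itself, one line per record, which sidesteps the single-row problem. It requires the FILE privilege, and the target file must not already exist, so a date-stamped name also avoids the overwrite issue:

        SELECT item_no, qty
        INTO OUTFILE 'C:/mycsv_20100501.csv'  -- unique name per export; INTO OUTFILE refuses to overwrite
        FIELDS TERMINATED BY ',' ENCLOSED BY '"'
        LINES TERMINATED BY '\n'
        FROM sold_items;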

  • Ruby: would using Fibers increase my DB insert throughput?

    - by Zombies
    Currently I am using Ruby 1.9.1 and the 'ruby-mysql' gem, which unlike the 'mysql' gem is written in Ruby only. It is actually pretty slow, as it seems to insert at a rate of almost 1 record per second (SLOOOOOWWWWWW). And I have a lot of inserts to make too; that is pretty much what this script ultimately does. I am using just 1 connection (since I am using just one thread). I am hoping to speed things up by creating fibers that will each:

        create a new DB connection
        insert 1-3 records
        close the DB connection

    I would imagine launching 20-50 of these would greatly increase DB throughput. Am I correct to go along this route? I feel that this is the best option, as opposed to refactoring all of my DB code :(

  • mysqli returns only one row instead of multiple rows

    - by Tristan
    Hello, I'm totally new to mysqli; I took some generated code and adapted it for my needs:

        public function getServeurByName($string) {
            $stmt = mysqli_prepare($this->connection, "SELECT DISTINCT * FROM $this->tablename WHERE GSP_nom=?");
            $this->throwExceptionOnError();

            mysqli_stmt_bind_param($stmt, 's', $string);
            $this->throwExceptionOnError();

            mysqli_stmt_execute($stmt);
            $this->throwExceptionOnError();

            mysqli_stmt_bind_result($stmt, $row->idServ, $row->timestamp,
                ...........
                ...........
                $row->email);

            if (mysqli_stmt_fetch($stmt)) {
                $row->timestamp = new DateTime($row->timestamp);
                return $row;
            } else {
                return null;
            }
        }

    The problem: the example I took the template from returns only one row instead of all the records. How do I fix that, please?

  • How to use GROUP BY for grouping varchar data

    - by Shantanu Gupta
    I have a table that contains the data given below:

        pk_map_id   preferences   ImmediateParent   Department_Id
        ---------   -----------   ---------------   -------------
        20          14            5                 1
        21          15            5                 1
        22          16            6                 1
        23          9             4                 2
        24          4             3                 2
        25          24            20                2
        26          25            20                2
        27          23            13                2

    I want to group my records by department, then by immediate parent, then by preferences, each list separated by ',', i.e.:

        department   Immediate Parent   preferences
        ----------   ----------------   -------------
        1            5,6                14,15,16
        2            4,3,20,13          9,4,24,25,23

    and also this table:

        Immediate parent   preferences
        ----------------   -----------
        5                  14,15
        6                  16
        4                  9
        3                  4
        20                 24,25
        13                 13

    In the actual scenario all of these are IDs, which are to be replaced by their string fields. I am using SQL Server 2005.
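
    A hedged sketch of one common approach: SQL Server 2005 has no GROUP_CONCAT, but the FOR XML PATH('') trick builds comma-separated lists. The table name dbo.PreferenceMap is hypothetical, and the ordering inside each list is not guaranteed:

        SELECT d.Department_Id,
               -- DISTINCT dedupes parents (5,6); STUFF strips the leading comma
               STUFF((SELECT DISTINCT ',' + CAST(m.ImmediateParent AS varchar(10))
                      FROM dbo.PreferenceMap m
                      WHERE m.Department_Id = d.Department_Id
                      FOR XML PATH('')), 1, 1, '') AS ImmediateParents,
               STUFF((SELECT ',' + CAST(m.preferences AS varchar(10))
                      FROM dbo.PreferenceMap m
                      WHERE m.Department_Id = d.Department_Id
                      FOR XML PATH('')), 1, 1, '') AS Preferences
        FROM (SELECT DISTINCT Department_Id FROM dbo.PreferenceMap) AS d;

    The second desired table is the same pattern with ImmediateParent in place of Department_Id.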

  • Access VBA: How to test if a Recordset is empty? IsNull?

    - by Shubham
    How can you test if a recordset is empty?

        Dim temp_rst1 As Recordset
        Dim temp_rst2 As Recordset

        Set temp_rst1 = db.OpenRecordset("SELECT * FROM ORDER_DATA WHERE SKUS_ORDERED = '" & curSKU1 & "' AND [ORDER] = " & curOrder)
        Set temp_rst2 = db.OpenRecordset("SELECT * FROM ORDER_DATA WHERE SKUS_ORDERED = '" & curSKU2 & "' AND [ORDER] = " & curOrder)

        If IsNull(temp_rst1) Or IsNull(temp_rst2) Then MsgBox "null"

    I'm opening up a couple of recordsets based on a SELECT statement. If there are no records, will IsNull return true?

  • What's the proper way to use SQLite on the iPhone?

    - by Elliot Chen
    Hi, experts: can you please give some suggestions on using SQLite on the iPhone? Within my application, I use a SQLite DB to store all local data. Two methods can be used to retrieve that data at run time:

    1. Load all the data into memory at the initialization stage (more memory used, fewer DB open/close operations needed).
    2. Read the corresponding records when necessary, and free the occupied memory after use (good memory habits, but many DB open/close operations).

    I prefer method 2, but I'm not sure whether too many DB opening/closing operations could affect the app's efficiency. Or do you think I can 'upgrade' method 2 by opening the DB when the app launches and closing it when the app quits? Thanks for your suggestions very much!

  • LINQ-to-SQL: Searching against a CSV

    - by Peter Bridger
    I'm using LINQ to SQL and I want to return a list of matching records for a CSV that contains a list of IDs to match. The following code is my starting point, having turned the CSV string into a string array, then into a generic list (which I thought LINQ would like), but it doesn't work.

    Error:

        Error 22: Operator '==' cannot be applied to operands of type 'int' and 'System.Collections.Generic.List<int>'  C:\Documents and Settings\....\Search.cs  41  42

    Code:

        DataContext db = new DataContext();

        List<int> geographyList = new List<int>( Convert.ToInt32(geography.Split(',')) );

        var geographyMatches = from cg in db.ContactGeographies
                               where cg.GeographyId == geographyList
                               select new { cg.ContactId };

    Where do I go from here?

  • Overriding unique indexed values

    - by Yeti
    This is what I'm doing right now (name is UNIQUE):

        SELECT * FROM fruits WHERE name='apple';

    Check if the query returned any result. If yes, don't do anything. If no, a new value has to be inserted:

        INSERT INTO fruits (name) VALUES ('apple');

    Instead of the above, is it OK to insert the value into the table without checking if it already exists? If the name already exists in the table, an error will be thrown, and if it doesn't, a new record will be inserted. Right now I have to insert 500 records in a for loop, which results in 1000 queries. Will it be OK to skip the "already-exists" check?
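
    A hedged sketch, assuming MySQL (which the syntax suggests): the unique index can do the duplicate check on its own, and batching rows collapses the 1000-query loop into a single statement:

        -- Rows whose name already exists are skipped instead of raising an error
        INSERT IGNORE INTO fruits (name) VALUES ('apple');

        -- Batching all 500 names into one statement avoids the per-row round trips
        INSERT IGNORE INTO fruits (name)
        VALUES ('apple'), ('banana'), ('cherry');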

  • ASP.NET MVC: View does not get rendered with updated model value

    - by Newton
    I'm experiencing a challenge in populating the child records. My previous code was like:

        <%= Html.TextBox("DyeOrder.Summary[" + i + "].Ratio",
                Model.DyeOrder.Summary[i].Ratio.ToString("#0.00"),
                ratioProperties) %>

    This code does not render the updated values after a postback. To resolve this issue, my workaround was like:

        <%= "<input id='DyeOrder_Summary_" + i + "__Ratio' name='DyeOrder.Summary[" + i + "].Ratio' value='" + Model.DyeOrder.Summary[i].Ratio.ToString("#0.00") + "' " + ratioCss + " type='text' />" %>

    This is very clumsy to me. Are there any better ideas?

  • Left Join works with table but fails with query

    - by Frank Martin
    The following LEFT JOIN query in MS Access 2007:

        SELECT Table1.Field_A, Table1.Field_B,
               qry_Table2_Combined.Field_A, qry_Table2_Combined.Field_B,
               qry_Table2_Combined.Combined_Field
        FROM Table1
        LEFT JOIN qry_Table2_Combined
               ON (Table1.Field_A = qry_Table2_Combined.Field_A)
              AND (Table1.Field_B = qry_Table2_Combined.Field_B);

    is expected (by me) to return this result:

        +--------+---------+---------+---------+----------------+
        |Field_A | Field_B | Field_A | Field_B | Combined_Field |
        +--------+---------+---------+---------+----------------+
        |1       |         |         |         |                |
        |1       |         |         |         |                |
        |2       |1        |2        |1        |John, Doe       |
        |2       |2        |         |         |                |
        +--------+---------+---------+---------+----------------+

    [Table1] has 4 records, [qry_Table2_Combined] has 1 record. But it gives me this:

        +--------+---------+---------+---------+----------------+
        |Field_A | Field_B | Field_A | Field_B | Combined_Field |
        +--------+---------+---------+---------+----------------+
        |2       |1        |2        |1        |John, Doe       |
        |2       |2        |2        |         |,               |
        +--------+---------+---------+---------+----------------+

    Really weird is that [Combined_Field] has a comma in the second row; I use a comma to concatenate two fields in [qry_Table2_Combined]. If the LEFT JOIN uses a table created from the query [qry_Table2_Combined], it works as expected. Why does this LEFT JOIN not give the same result for a query as for a table? And how can I get the right results using a query in the LEFT JOIN?

  • WebClient.DownloadString() Not Producing Exact HTML

    - by Ryan Fuentes
    So here's the deal. I'm creating a spider bot for a website that scans all the product pages and records the product data. I'm using C# and the WebClient class to download the HTML string. The site I'm crawling must be specially made, because the HTML received from WebClient.DownloadString() is different from the HTML I get when I view the page source in a browser. This seems intentional, because the only info I can't get is the price. Does anyone know a workaround for this problem, or can anyone explain what is happening? Thanks.

  • SQL Server / T-SQL: How to update equal percentages of a result set?

    - by Kent Comeaux
    I need a way to take a result set of KeyIDs, divide it up as equally as possible, and update the records differently for each division based on the KeyIDs. In other words, there is:

        SELECT KeyID FROM TableA WHERE (some criteria exists)

    I want to update TableA 3 different ways, by 3 equal portions of KeyIDs:

        UPDATE TableA SET FieldA = Value1
        WHERE KeyID IN (the first 1/3 of the SELECT result set above)

        UPDATE TableA SET FieldA = Value2
        WHERE KeyID IN (the second 1/3 of the SELECT result set above)

        UPDATE TableA SET FieldA = Value3
        WHERE KeyID IN (the third 1/3 of the SELECT result set above)

    or something to that effect. Thanks for any and all of your responses.
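
    A hedged sketch using NTILE, available from SQL Server 2005 onward: it deals the matching keys into three near-equal buckets, and a single UPDATE can then key off the bucket number. The criteria and Value1/Value2/Value3 are placeholders carried over from the question:

        WITH Buckets AS (
            SELECT KeyID, NTILE(3) OVER (ORDER BY KeyID) AS Bucket
            FROM TableA
            WHERE SomeCriteria = 1  -- stand-in for the question's criteria
        )
        UPDATE a
        SET FieldA = CASE b.Bucket WHEN 1 THEN Value1
                                   WHEN 2 THEN Value2
                                   ELSE Value3 END  -- Value1..Value3 are placeholders
        FROM TableA AS a
        JOIN Buckets AS b ON b.KeyID = a.KeyID;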

  • How to optimize an Oracle query that has TO_CHAR in the WHERE clause for dates

    - by panorama12
    I have a table that contains about 49,403,459 records. I want to query the table on a date range, say 04/10/2010 to 04/10/2010. However, the dates are stored in the table in a format like 10-APR-10 10.15.06.000000 AM (a timestamp). As a result, when I do:

        SELECT bunch, of, stuff, create_date
        FROM myTable
        WHERE TO_CHAR(create_date, 'MM/DD/YYYY') >= '04/10/2010'
          AND TO_CHAR(create_date, 'MM/DD/YYYY') <= '04/10/2010'

    I get 529 rows, but in 255.59 seconds! Which is, I guess, because I am doing TO_CHAR on EACH record. However, when I do:

        SELECT bunch, of, stuff, create_date
        FROM myTable
        WHERE create_date >= TO_DATE('04/10/2010', 'MM/DD/YYYY')
          AND create_date <= TO_DATE('04/10/2010', 'MM/DD/YYYY')

    I get 0 results in 0.14 seconds. How can I make this query fast and still get the valid (529) results? At this point I cannot change indexes. Right now I think the index is created on the create_date column.
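
    A hedged sketch of the usual fix: leave create_date bare so an index on it stays usable, and bound the day with a half-open range. (Both TO_DATE bounds in the second query equal midnight on 04/10/2010, which is why it returns 0 rows.)

        SELECT bunch, of, stuff, create_date
        FROM myTable
        WHERE create_date >= TO_DATE('04/10/2010', 'MM/DD/YYYY')
          AND create_date <  TO_DATE('04/10/2010', 'MM/DD/YYYY') + 1;  -- strictly before the next midnight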

  • C#: can you think of any more optimizations in this code?

    - by Sha Le
    Hi All: Look at the following code:

        StringBuilder row = TemplateManager.GetRow("xyz"); // no control over this method
        StringBuilder rows = new StringBuilder();

        foreach (Record r in records)
        {
            StringBuilder thisRow = new StringBuilder(row.ToString());
            thisRow.Replace("%firstName%", r.FirstName)
                   .Replace("%lastName%", r.LastName)
                   // all other replacements go here
                   .Replace("%LastModifiedDate%", r.LastModifiedDate);

            // finally, append this row to rows
            rows.Append(thisRow);
        }

    Currently 3 StringBuilders are in play and row.ToString() is inside the loop. Is there any room for further optimization here? Thanks a bunch. :-)

  • SQL query to get field value distribution

    - by Bryan Lewis
    I have a table of over 1 million test score records that basically have a unique score_ID, a subject_ID, and a score given by a person. The score range for most subjects is 0-3, but some have a range of 0-4. There are about 25 possible subjects. I need to produce a score distribution report which looks like:

        subject_ID     0     1     2     3     4
        ----------   ---   ---   ---   ---   ---
        1            967   576   856   234
        2            576   947   847   987   324
        .
        .

    So it groups the data by subject_ID, then shows how many times a specific score value was given within that subject. Any SQL pointers to generate this would be greatly appreciated.
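
    A hedged sketch of the standard pivot-by-CASE approach; the table and score column names (scores, score) are assumptions, since the question doesn't give them:

        -- One conditional count per score value, grouped by subject
        SELECT subject_ID,
               SUM(CASE WHEN score = 0 THEN 1 ELSE 0 END) AS score_0,
               SUM(CASE WHEN score = 1 THEN 1 ELSE 0 END) AS score_1,
               SUM(CASE WHEN score = 2 THEN 1 ELSE 0 END) AS score_2,
               SUM(CASE WHEN score = 3 THEN 1 ELSE 0 END) AS score_3,
               SUM(CASE WHEN score = 4 THEN 1 ELSE 0 END) AS score_4
        FROM scores
        GROUP BY subject_ID
        ORDER BY subject_ID;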

  • Build SUM-based daily records

    - by ximarin
    I have a problem building an aggregate. Here's my situation: I have a table like this:

        id   action   day          isSum   difference
        --   ------   ----------   -----   ----------
        1    ping     2012-01-01   1       500   (this is the sum of the differences from last year)
        2    ping     2012-01-01   0       -2
        3    ping     2012-01-02   0       1
        4    ping     2012-01-03   0       -4
        5    ping     2012-01-04   0       -2
        6    ping     2012-01-05   0       3
        7    ping     2012-01-06   0       2
        8    pong     2012-01-01   1       0     (this is the sum of the differences from last year, now for pong)
        9    pong     2012-01-01   0       -5
        10   pong     2012-01-02   0       2
        11   pong     2012-01-03   0       -2
        12   pong     2012-01-04   0       -8
        13   pong     2012-01-05   0       3
        14   pong     2012-01-06   0       4

    I now need to select the action, the day, and the summarized difference since 01-01 for every day, so that my result looks like this:

        action   day          total
        ------   ----------   -----
        ping     2012-01-01   498
        ping     2012-01-02   499
        ping     2012-01-03   495
        ping     2012-01-04   493
        ping     2012-01-05   496
        ping     2012-01-06   498
        pong     2012-01-01    -5
        pong     2012-01-02    -3
        pong     2012-01-03    -5
        pong     2012-01-04   -13
        pong     2012-01-05   -10
        pong     2012-01-06    -6

    How can I do this? There are a lot of datasets (~1 million), so the query needs to be pretty cheap. I don't know how to use SUM to get these running daily totals per action column.
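
    A hedged sketch using window functions (PostgreSQL, Oracle, or SQL Server 2012+; the table name t is hypothetical): sum each day first, then accumulate those daily sums per action. The isSum carry-over row simply folds into its day's total:

        -- Inner SUM collapses each (action, day) to one row via GROUP BY;
        -- the outer windowed SUM turns those daily sums into a running total.
        SELECT action,
               day,
               SUM(SUM(difference)) OVER (PARTITION BY action ORDER BY day) AS total
        FROM t
        GROUP BY action, day
        ORDER BY action, day;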

  • Appropriate high level language to deal with binary data

    - by fortran
    Hi, I need to write a small tool that parses a textual input and generates some binary encoded data. I would prefer to stay away from C and the like, in favour of a higher-level, (optionally) safer, more expressive, and faster-to-develop language. My language of choice for this kind of task is usually Python, but in this case dealing with raw binary data can be problematic if one isn't very careful with numbers being promoted to bignums, sign extensions, and such. Ideally I would like to have records with named bitfields that can be serialised portably and consistently. (I know there's a strong point in doing it in a language I already master, although it isn't optimal, but I think this could be a good opportunity to learn something new.) Thanks.

  • Embedded Record is not getting loaded in Ember.js

    - by Venky
    Following is the JSON data I am trying to load using ember-data:

        {
          "product": [
            {
              "id": 1,
              "name": "product1",
              "master": {
                "id": 1,
                "name": "product1",
                "images": [
                  { "id": 1, "productUrl": "/images/product1_1.jpg" },
                  { "id": 2, "productUrl": "/images/product1_2.jpg" }
                ]
              }
            },
            {
              "id": 2,
              "name": "product2",
              "master": {
                "id": 2,
                "name": "product2",
                "images": [
                  { "id": 3, "productUrl": "/images/product2_1.jpg" },
                  { "id": 4, "productUrl": "/images/product2_2.jpg" }
                ]
              }
            }
          ]
        }

    The models are as follows:

        App.Product = DS.Model.extend
          name: DS.attr('string')
          description: DS.attr('string')
          master: DS.belongsTo('master')

        App.Master = DS.Model.extend
          images: DS.hasMany('image')

        App.Image = DS.Model.extend
          productUrl: DS.attr('string')

    The application serializer code is as follows:

        App.ApplicationSerializer = DS.ActiveModelSerializer.extend(DS.EmbeddedRecordsMixin,
          attrs: {
            images: { embedded: 'always' }
            master: { embedded: 'always' }
          }
        )

    The problem is that the "master" model records are being returned empty. I am not sure where I am going wrong. I am using the following platform configuration:

        ember-source (1.4.0)
        ember-data-source (1.0.0.beta.7)
        ember-rails (0.15.0)
        Rails (4.1.0)

    Thanks

  • Make a usable Join relationship with LINQ on top of a database CSV design error

    - by jdk
    I'm looking for a way to fix and/or abstract away a comma-separated values (CSV) list stored in a database field, in order to reconstruct a usable relationship, such that I can properly join the two tables below and query them using LINQ and its Join method. Following is a sample showing the Person table, whose CsvArticleIds field holds a CSV value representing a one-to-many association with Article records.

        TABLE [dbo].[Person]

        Id   Name       CsvArticleIds
        --   --------   -------------
        1    Joe        "15,22"
        5    Ed         "22"
        10   Arnie      "8,15,22"

    (Of course, a link table should have been created; nonetheless the relationship with articles is trapped inside that list of CSV values.)

        TABLE [dbo].[Article]

        Id   Title
        --   -----------------------------------------
        8    Beginning C#
        15   A Historic look at Programming in the 90s
        22   Gardening in January

    Additional info:

    - The fix can be at any level: C#/.NET or SQL Server.
    - Something easy, because I will be repeating the solution for many other CSV values in other tables. Elegant is nice too.
    - Not looking for efficiency, because this is part of a one-time data migration task and can take as long as it wants to run.
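
    On the SQL Server side, a hedged sketch of a set-based repair: materialize the implied link table by joining Article ids against the delimiter-padded CSV column, so that ',15,22,' matches article 15 but never article 150. The link-table name is hypothetical:

        SELECT p.Id AS PersonId, a.Id AS ArticleId
        INTO dbo.PersonArticle  -- hypothetical link table
        FROM dbo.Person AS p
        JOIN dbo.Article AS a
          ON ',' + p.CsvArticleIds + ',' LIKE '%,' + CAST(a.Id AS varchar(10)) + ',%';

    Once the link table exists, the LINQ Join works against a real relationship instead of the CSV field.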

  • SQL Server Delete - Foreign Key

    - by Ahmet Altun
    I have got three tables in SQL Server 2005:

        USER table: information about users, and so on.
        COUNTRY table: holds a list of all the countries in the world.
        USER_COUNTRY table: matches which user has visited which country. It holds UserID and CountryID.

    For example, the USER_COUNTRY table looks like this:

        ID   UserID   CountryID
        --   ------   ---------
        1    1        34
        2    1        5
        3    2        17
        4    2        12
        5    2        21
        6    3        19

    My question is: when a user is deleted from the USER table, how can I make the associated records in the USER_COUNTRY table be deleted automatically? Maybe by using a foreign key constraint?
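
    A hedged sketch of the foreign-key route: ON DELETE CASCADE makes SQL Server delete the child rows automatically. The constraint name is hypothetical, USER is bracketed because it is a reserved word, and any existing constraint on UserID would need to be dropped first:

        ALTER TABLE USER_COUNTRY
            ADD CONSTRAINT FK_UserCountry_User
            FOREIGN KEY (UserID) REFERENCES [USER] (ID)
            ON DELETE CASCADE;

        -- Deleting a user now removes that user's USER_COUNTRY rows as well
        DELETE FROM [USER] WHERE ID = 1;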

  • WHERE IN Query with two recordsets in Access VBA

    - by Henry Owens
    Hi All, my first post here, so I hope this is the right area. I am currently trying to compare two recordsets, one of which comes from an Excel named range and the other from a table in the Access database. The code for each is:

        Set existingUserIDs = db.OpenRecordset("SELECT Username FROM UserData")
        Set IDsToImport = exceldb.OpenRecordset("SELECT Username FROM Named_Range")

    The problem is that I would like to compare these two recordsets without looping (there is a very large number of records). Is there any way to do a join or similar on these recordsets? I cannot do a join before creating the recordsets, because one is coming from Excel and the other from Access, so they are two different DAO databases. The end goal is that I will choose only the usernames that do not already exist in the Access table to be imported (so in an SQL query, it would be a NOT IN(table)). Thanks for any assistance you can lend! Regards, Bricky.
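
    A hedged sketch of one way around the two-database problem: Access SQL can address an Excel range in place, so the comparison collapses into a single query run on the Access side. The workbook path is hypothetical, and the LEFT JOIN / IS NULL shape plays the role of NOT IN:

        SELECT x.Username
        FROM [Excel 8.0;HDR=Yes;Database=C:\import\users.xls].[Named_Range] AS x
        LEFT JOIN UserData AS u ON x.Username = u.Username
        WHERE u.Username IS NULL;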

  • Need help on a nested loop of queries in PHP and MySQL?

    - by mysqllearner
    Hi, I am trying to do this:

        <?php
        $good_customer = 0;

        $q = mysql_query("SELECT user FROM users WHERE activated = '1'"); // this gives me about 40k users

        while ($r = mysql_fetch_assoc($q)) {
            $money_spent = 0;
            $user = $r['user'];

            // do queries on another 20 tables
            for ($i = 1; $i <= 20; $i++) {
                $tbl_name = 'data' . $i;

                $q2 = mysql_query("SELECT money_spent FROM $tbl_name WHERE user = '{$user}'");

                while ($r2 = mysql_fetch_assoc($q2)) {
                    $money_spent += $r2['money_spent'];
                }
            }

            // count the user once their total across all 20 tables is known
            if ($money_spent > 1000000) {
                $good_customer += 1;
            }
        }

    This is just an example. I am testing on localhost; for a single user it returns very fast, but when I try 1000 it takes forever, not to mention 40k users. Is there any way to optimise/improve this code? EDIT: By the way, each of the other 20 tables has ~20-40k records.
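
    A hedged sketch of pushing the whole computation into MySQL instead: UNION ALL the twenty data tables, sum per user, and count the users over the threshold, so PHP issues one query rather than one per user per table:

        SELECT COUNT(*) AS good_customers
        FROM (
            SELECT u.user
            FROM users AS u
            JOIN (
                SELECT user, money_spent FROM data1
                UNION ALL
                SELECT user, money_spent FROM data2
                -- ... and so on through data20
                UNION ALL
                SELECT user, money_spent FROM data20
            ) AS d ON d.user = u.user
            WHERE u.activated = '1'
            GROUP BY u.user
            HAVING SUM(d.money_spent) > 1000000
        ) AS t;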
