Search Results

Search found 6630 results on 266 pages for 'cname record'.

Page 56 of 266.

  • Database nesting layout confusion

    - by arzon
    I'm no expert in databases and a beginner in Rails, so here goes something which confuses me a little... Assume I have three classes as a sample (note that no effort has been made to address any possible Rails reserved-word issues in the sample):

        class File < ActiveRecord::Base
          has_many :records, :dependent => :destroy
          accepts_nested_attributes_for :records, :allow_destroy => true
        end

        class Record < ActiveRecord::Base
          belongs_to :file
          has_many :users, :dependent => :destroy
          accepts_nested_attributes_for :users, :allow_destroy => true
        end

        class User < ActiveRecord::Base
          belongs_to :record
        end

    Upon entering records, the database contents will appear as below. My issue is that if there are a lot of Files for the same Record, there will be duplicate record names. This will also be true in the Users table if there are multiple Records for the same user. I was wondering if there is a better way than this, so that one or more Files point to a single Record entry and one or more Records point to a single User. By the way, the File names are unique.

        Files table:
        id  name
        1   name1
        2   name2
        3   name3
        4   name4

        Records table:
        id  file_id  record_name  record_type
        1   1        ForDaisy1    ...
        2   2        ForDonald1   ...
        3   3        ForDonald2   ...
        4   4        ForDaisy1    ...

        Users table:
        id  record_id  username
        1   1          Daisy
        2   2          Donald
        3   3          Donald
        4   4          Daisy

    Is there any way to optimize the database to prevent duplication of entries, or is this really the correct and proper behavior? I spread the data across different tables to be able to easily add new columns in the future.
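
    Since the stated goal is "many files point to one record, many records point to one user", one way is simply to turn the foreign keys around. A minimal sketch, assuming the files table gains a record_id column and the records table a user_id column (both hypothetical):

        class File < ActiveRecord::Base
          belongs_to :record        # files.record_id -> records.id
        end

        class Record < ActiveRecord::Base
          has_many :files
          belongs_to :user          # records.user_id -> users.id
        end

        class User < ActiveRecord::Base
          has_many :records
        end

    With this layout "ForDaisy1" is stored once in the records table no matter how many files reference it, and "Daisy" is stored once in the users table.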

  • OO Objective-C design with XML parsing

    - by brainfsck
    Hi, I need to parse an XML record that represents a QuizQuestion. The "type" attribute tells the type of question. I then need to create an appropriate subclass of QuizQuestion based on the question type. The following code works ([auto]release statements omitted for clarity):

        QuizQuestion *question = [[QuizQuestion alloc] initWithXMLString:xml];
        if( [[question type] isEqualToString:@"multipleChoiceQuestion"] ) {
            [myQuestions addObject:[[MultipleChoiceQuizQuestion alloc] initWithXMLString:xml]];
        }

        //QuizQuestion.m
        -(id)initWithXMLString:(NSString*)xml {
            self.type = ...; // parse "type" attribute from xml
            // parse the rest of the xml
        }

        //MultipleChoiceQuizQuestion.m
        -(id)initWithXMLString:(NSString*)xml {
            if( self = [super initWithXMLString:xml] ) {
                // multiple-choice stuff
            }
            return self;
        }

    Of course, this means that the XML is parsed twice: once to find out the type of QuizQuestion, and once when the appropriate QuizQuestion is initialized. To prevent parsing the XML twice, I tried the following approach:

        // MultipleChoiceQuizQuestion.m
        -(id)initWithQuizRecord:(QuizQuestion*)record {
            self = record; // record has already parsed the "type" and other parameters
            // multiple-choice stuff
        }

    However, this fails due to the "self = record" assignment; whenever the MultipleChoiceQuizQuestion tries to call an instance method, it calls the method on the QuizQuestion class instead. Can someone tell me the correct approach for parsing XML into the appropriate subclass when the parent class needs to be initialized to know which subclass is appropriate?
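
    The usual pattern here is a factory class method on the base class: parse the XML once, inspect the type, then allocate the right subclass and hand it the already-parsed data. A sketch, where parseXMLString: and initWithAttributes: are hypothetical helpers, not from the question:

        // QuizQuestion.m
        + (QuizQuestion *)questionWithXMLString:(NSString *)xml {
            NSDictionary *attributes = [self parseXMLString:xml]; // parse exactly once
            NSString *type = [attributes objectForKey:@"type"];
            Class cls = [QuizQuestion class];
            if ([type isEqualToString:@"multipleChoiceQuestion"]) {
                cls = [MultipleChoiceQuizQuestion class];
            }
            return [[[cls alloc] initWithAttributes:attributes] autorelease];
        }

    Each subclass overrides initWithAttributes: and calls [super initWithAttributes:attributes], so no subclass ever re-parses the XML and the broken self = record assignment becomes unnecessary.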

  • Delphi ADO update/insert between 2 records

    - by user315957
    I need to update/insert records from one table into another, both for the master table and for the detail table. First I position on the record(s) of the table I want to copy/update from: Tabelamestre (Local_deste_cliente) (1 record):

        NInterv.Text := DBEdit1.Text;
        begin
          with ADOTable_casa do
          begin
            Close;
            SQL.Clear;
            SQL.Add('SELECT * from Vibrometria_');
            SQL.Add('Where numeracao LIKE ''%' + NInterv.Text);
            Open;
          end;
        end;

    Now I need to update/insert the records: Vibrometria := Local_deste_cliente (a TADOTable). Next I need to take the record above and do the same for the two detail tables: Vibrometria_Sub (J) := Tabeladetail (Variaveis_neste_local) (J records). And I still have another table that gets a master record from Tabeladetail: Vibrometria_Sub1 (K) := Tabeladetail1 (Variaveis_neste_local1) (K records). Let's say I need to update 1 to N, starting with the first table. Is there a fast solution for this? Thanks.
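
    If both tables live in the same database, a set-based INSERT ... SELECT usually beats looping over records in Delphi code. A sketch with a TADOQuery, copying the Vibrometria_ selection into the master table; the copy direction and the column names (campo1..campo3) are assumptions, placeholders for the real field lists:

        with ADOQuery1 do
        begin
          SQL.Text :=
            'INSERT INTO Local_deste_cliente (campo1, campo2, campo3) ' +
            'SELECT campo1, campo2, campo3 FROM Vibrometria_ ' +
            'WHERE numeracao LIKE :padrao';
          Parameters.ParamByName('padrao').Value := '%' + NInterv.Text;
          ExecSQL;
        end;

    The same statement, repeated with each detail table's field list and key columns, copies the J and K detail records in one pass per table.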

  • How to make a report with counts of types of cases in each month in Access 2010

    - by amir shadaab
    I have a database in Access, with each record having a date and yes/no columns which show which category the record comes under. I want to create a report which shows the types of cases in each month, taking a date range as a parameter through prompts. I have done the prompt part, but I'm not sure how the query should look to show values for each month in that date range. Can someone please help me with this?
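
    A sketch of the kind of totals query that can drive such a report, assuming a table named Cases with a date column CaseDate and yes/no columns TypeA and TypeB (all names hypothetical): grouping on Format(...) buckets the records by month, and summing an IIf over each yes/no flag turns it into a count.

        SELECT Format(CaseDate, "yyyy-mm") AS CaseMonth,
               Sum(IIf(TypeA, 1, 0)) AS TypeACases,
               Sum(IIf(TypeB, 1, 0)) AS TypeBCases
        FROM Cases
        WHERE CaseDate BETWEEN [Enter start date] AND [Enter end date]
        GROUP BY Format(CaseDate, "yyyy-mm")
        ORDER BY Format(CaseDate, "yyyy-mm");

    The bracketed prompts can be replaced by the report's existing parameters.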

  • How can I modify my classes to use their collections in a WPF TreeView

    - by Victor
    Hello, I'm trying to modify my objects to make a hierarchical collection model. I need help. My objects are Good and GoodCategory:

        public class Good
        {
            int _ID;
            int _GoodCategory;
            string _GoodName;

            public int ID
            {
                get { return _ID; }
            }

            public int GoodCategory
            {
                get { return _GoodCategory; }
                set { _GoodCategory = value; }
            }

            public string GoodName
            {
                get { return _GoodName; }
                set { _GoodName = value; }
            }

            public Good(IDataRecord record)
            {
                _ID = (int)record["ID"];
                _GoodCategory = (int)record["GoodCategory"];
            }
        }

        public class GoodCategory
        {
            int _ID;
            string _CategoryName;

            public int ID
            {
                get { return _ID; }
            }

            public string CategoryName
            {
                get { return _CategoryName; }
                set { _CategoryName = value; }
            }

            public GoodCategory(IDataRecord record)
            {
                _ID = (int)record["ID"];
                _CategoryName = (string)record["CategoryName"];
            }
        }

    And I have two collections of these objects:

        public class GoodsList : ObservableCollection<Good>
        {
            public GoodsList()
            {
                string goodQuery = @"SELECT `ID`, `ProductCategory`, `ProductName`, `ProductFullName` FROM `products`;";
                using (MySqlConnection conn = ConnectToDatabase.OpenDatabase())
                {
                    if (conn != null)
                    {
                        MySqlCommand cmd = conn.CreateCommand();
                        cmd.CommandText = goodQuery;
                        MySqlDataReader rdr = cmd.ExecuteReader();
                        while (rdr.Read())
                        {
                            Add(new Good(rdr));
                        }
                    }
                }
            }
        }

        public class GoodCategoryList : ObservableCollection<GoodCategory>
        {
            public GoodCategoryList()
            {
                string goodQuery = @"SELECT `ID`, `CategoryName` FROM `product_categoryes`;";
                using (MySqlConnection conn = ConnectToDatabase.OpenDatabase())
                {
                    if (conn != null)
                    {
                        MySqlCommand cmd = conn.CreateCommand();
                        cmd.CommandText = goodQuery;
                        MySqlDataReader rdr = cmd.ExecuteReader();
                        while (rdr.Read())
                        {
                            Add(new GoodCategory(rdr));
                        }
                    }
                }
            }
        }

    So I have two collections which take data from the database. But I want to use those collections in a WPF TreeView with a HierarchicalDataTemplate. I have seen many posts with examples of hierarchical objects, but I still don't know how to make my objects hierarchical. Please help.
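
    The TreeView wants each category to expose its own children, so one approach is to give GoodCategory a child collection and fill it by matching category IDs after both lists are loaded. A sketch under that assumption; the Goods property and the grouping loop are additions, not from the question:

        public class GoodCategory
        {
            // ... existing members ...
            ObservableCollection<Good> _Goods = new ObservableCollection<Good>();
            public ObservableCollection<Good> Goods
            {
                get { return _Goods; }
            }
        }

        // after loading both lists, group the goods under their categories
        foreach (GoodCategory category in goodCategoryList)
        {
            foreach (Good good in goodsList)
            {
                if (good.GoodCategory == category.ID)
                    category.Goods.Add(good);
            }
        }

    The matching XAML then binds the categories as the top level and each category's Goods as its children (assuming the DataContext exposes the category collection):

        <TreeView ItemsSource="{Binding GoodCategoryList}">
            <TreeView.ItemTemplate>
                <HierarchicalDataTemplate ItemsSource="{Binding Goods}">
                    <TextBlock Text="{Binding CategoryName}" />
                    <HierarchicalDataTemplate.ItemTemplate>
                        <DataTemplate>
                            <TextBlock Text="{Binding GoodName}" />
                        </DataTemplate>
                    </HierarchicalDataTemplate.ItemTemplate>
                </HierarchicalDataTemplate>
            </TreeView.ItemTemplate>
        </TreeView>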

  • [Django] How to find out whether a model's column is a foreign key?

    - by codethief
    I'm dynamically storing information in the database depending on the request:

        # table, id and column are provided by the request
        table_obj = getattr(models, table)
        record = table_obj.objects.get(pk=id)
        setattr(record, column, request.POST['value'])

    The problem is that request.POST['value'] sometimes contains a foreign record's primary key (i.e. an integer), whereas Django expects the column's value to be an object of type ForeignModel:

        Cannot assign "u'122'": "ModelA.b" must be a "ModelB" instance.

    Now, is there an elegant way to dynamically check whether b is a column containing foreign keys, and what model these keys are linked to? (So that I can load the foreign record by its primary key and assign it to ModelA.) Or doesn't Django provide information like this to the programmer, so I really have to get my hands dirty and use isinstance() on the foreign-key column?
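
    Django does expose this through the model's _meta API. A sketch in the old-style API current when this question was asked (field.rel.to was later renamed field.related_model in modern Django):

        from django.db import models as dj_models

        field = table_obj._meta.get_field(column)
        if isinstance(field, dj_models.ForeignKey):
            related_model = field.rel.to  # the model the key points to
            value = related_model.objects.get(pk=request.POST['value'])
        else:
            value = request.POST['value']
        setattr(record, column, value)

    This still uses isinstance(), but on the field object rather than on the stored value, which is the introspection hook Django intends.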

  • adding custom fields dynamically to a model

    - by pankajbhageria
    I have a model called List which has many records:

        class List < ActiveRecord::Base
          has_many :records
        end

        class Record < ActiveRecord::Base
        end

    The records table has 2 permanent fields: name and email. Besides these 2 fields, for each List a Record can have 'n' custom fields. For example: for list1 I add address (text) and dob (date) as custom fields. Then, while adding records to list1, each record can have values for address and dob. Is there any ActiveRecord plugin which provides this type of functionality? Or else, could you share your thoughts on how to model this? Thanks in advance, Pankaj
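
    Without a plugin, this is usually modelled as an entity-attribute-value layout: one table holding the field definitions per list, another holding the values per record. A sketch with hypothetical model names:

        class List < ActiveRecord::Base
          has_many :records
          has_many :custom_fields
        end

        class CustomField < ActiveRecord::Base
          belongs_to :list   # e.g. name: "address", field_type: "text"
          has_many :custom_values, :dependent => :destroy
        end

        class Record < ActiveRecord::Base
          belongs_to :list
          has_many :custom_values, :dependent => :destroy
        end

        class CustomValue < ActiveRecord::Base
          belongs_to :record
          belongs_to :custom_field
        end

    Each Record keeps its fixed name and email columns, and picks up one CustomValue row per custom field defined on its List.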

  • Create Directory for records in MS Access 2007

    - by glinch
    Hi there, is it possible to create a directory folder for individual records in Access 2007? For example:

        tblUser
        ID
        firstName
        surName

    Adding a record would create a folder:

        C:\userdatabase\surName,firstName,ID

    I could see this being useful in situations where, for example, a large number of images/files needs to be associated with a record. Access would create, and link to, a directory for each record. Thanks in advance for any advice. Noel
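
    A sketch of one way to do it in VBA, e.g. from the data-entry form's After Insert event (the control names and base path are assumptions):

        Private Sub Form_AfterInsert()
            Dim folder As String
            folder = "C:\userdatabase\" & Me!surName & "," & Me!firstName & "," & Me!ID
            If Dir(folder, vbDirectory) = "" Then
                MkDir folder   ' create the record's directory if it doesn't exist
            End If
        End Sub

    The folder path can then be stored in a text column, or simply rebuilt from the same three fields whenever the record's files are needed.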

  • Does the PHP MVC framework Agavi use CRUD compliant with REST?

    - by txwikinger
    The Agavi framework uses the PUT request for create and POST for updating information. Usually in REST this is used the other way around (often referring to POST adding information while PUT replaces the whole data record). If I understand it correctly, the important issue is that PUT must be idempotent, while POST does not have this requirement. Therefore, I wonder how creating a new record can be idempotent (i.e. multiple requests do not lead to multiple creations of a record), in particular when the ORM uses an id as a primary key and the id of a new record would not be known to the client (since it is auto-created in the database), and hence cannot be part of the request. How does Agavi maintain the requirement of idempotence in light of this for the PUT request? Thanks.
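
    For illustration, the standard way PUT-to-create stays idempotent is that the client chooses the resource identifier (for instance a UUID it generates itself), so repeating the request overwrites the same record instead of minting new ones. A hypothetical exchange, not specific to Agavi:

        PUT /records/9f3a7c21 HTTP/1.1     -> first call creates the record (201)
        PUT /records/9f3a7c21 HTTP/1.1     -> same body again: replaces the same
                                              record, no duplicate (200/204)

        POST /records HTTP/1.1             -> server assigns the id; repeating this
                                              request creates a new record each time

    When the database must assign an auto-increment id, creation is usually mapped to POST precisely because it cannot be made idempotent this way.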

  • Find records IN BETWEEN Date Range

    - by Muhammad Kashif Nadeem
    Please see the attached image. I have a table which has FromDate and ToDate. FromDate is the start of some event and ToDate is the end of that event. I need to find a record if the search criteria fall within its range of dates. E.g. if a record has FromDate 2010/5/15 and ToDate 2010/5/25, and my criteria are FromDate 2010/5/18 and ToDate 2010/5/21, then this record should be in the search results because it is in the range of 15 to 25. The following is (a chunk of) my search query:

        SELECT m.EventId
        FROM MajorEvents m
        WHERE ((m.LocationID = @locationID OR @locationID IS NULL) OR m.LocationID IS NULL)
        AND (
            CONVERT(VARCHAR(10), m.EventDateFrom, 23)
                BETWEEN CONVERT(VARCHAR(10), @DateTimeFrom, 23) AND CONVERT(VARCHAR(10), @DateTimeTo, 23)
            OR CONVERT(VARCHAR(10), m.EventDateTo, 23)
                BETWEEN CONVERT(VARCHAR(10), @DateTimeFrom, 23) AND CONVERT(VARCHAR(10), @DateTimeTo, 23)
        )

    If the search criteria are equal to FromDate or ToDate, then the results are OK. E.g. if the search criteria are DateFrom = 2010/5/15 AND DateTo = 2010/5/18, then the record is returned because DateFrom is exactly the DateFrom in the db. Or if the search criteria are DateFrom = 2010/5/22 AND DateTo = 2010/5/25, then the record is returned because DateTo is exactly the DateTo in the db. But if the criteria fall anywhere strictly inside the range, it does not work. Thanks for the help.
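
    The query only matches when one of the stored endpoints falls inside the search window, which misses the case where the stored range contains the window. Two ranges overlap exactly when each one starts before the other ends, so a sketch of the date condition covering every case (and skipping the string conversions entirely):

        AND m.EventDateFrom <= @DateTimeTo
        AND m.EventDateTo   >= @DateTimeFrom

    With 2010/5/15 to 2010/5/25 in the table and criteria 2010/5/18 to 2010/5/21, both comparisons hold, so the record is returned.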

  • Should I be using callbacks or should I override attributes?

    - by ryeguy
    Which is the more "Rails-like"? If I want to modify a model's property when it's set, should I do this:

        def url=(url)
          # remove session id
          self[:url] = url.split('?s=')[0]
        end

    or this?

        before_save do |record|
          # remove session id
          record.url = record.url.split('?s=')[0]
        end

    Is there any benefit to doing it one way or the other? If so, why? If not, which one is generally more common?

  • Capture video from WPF app

    - by Julien Couvreur
    I want to write a C# application which can record a video capture of one of its WPF controls. Is there a solution in .NET to record video from a control, or is there some library I could use? My goal is to write a SketchCast application. The use case is the following: launch the SketchCast app and press the record button, write ink into a WPF ink area and talk, press stop, and the recorded voice and ink animation get saved into a video file in some encoding.
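
    WPF itself has no one-call video recorder, but individual frames of a control can be grabbed with RenderTargetBitmap and handed to a separate encoder library afterwards. A sketch of the frame-grabbing half, assuming inkCanvas is the control being recorded and n is a frame counter (the PNG-per-frame output is a stand-in for a real video encoder):

        var rtb = new RenderTargetBitmap((int)inkCanvas.ActualWidth,
                                         (int)inkCanvas.ActualHeight,
                                         96, 96, PixelFormats.Pbgra32);
        rtb.Render(inkCanvas);                       // snapshot of the control

        var encoder = new PngBitmapEncoder();        // one PNG per captured frame
        encoder.Frames.Add(BitmapFrame.Create(rtb));
        using (var fs = File.Create(string.Format("frame{0:D5}.png", n)))
            encoder.Save(fs);

    Running this on a DispatcherTimer at the desired frame rate yields an image sequence that a video encoder, together with the recorded audio track, can mux into the final file.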

  • The conceptual process of populating related tables in a database (MySql) from a CSV file

    - by user322772
    I'm new to relational databases, and all the material I've read covered primary and foreign keys, normal forms, and joins, but left out how to populate the database once it's created. How do you import a CSV file so the fields match their related tables? Say you were trying to build a beer database and had a CSV file with each line as a record:

        Header:   brewer, beer_name, country, city, state, beer_category, beer_type, alcohol_content
        Record 1: Anheuser-Busch, Budweiser, United States, St. Louis, MO, Pale lager, Regular, 5.0%
        Record 2: Anheuser-Busch, Bud Light, United States, St. Louis, MO, Pale lager, Light, 4.2%
        Record 3: Miller Brewing Company, Miller Lite, United States, Milwaukee, WI, Pale lager, Light, 4.2%

    You can create a "Brewer" table and a "Beer" table. When importing, how do you connect the primary keys between the tables?
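
    The usual recipe is: insert (or look up) the parent row first, then use its key as the foreign key of the child row. A sketch in MySQL, assuming brewer.name has a UNIQUE index (all schema names hypothetical):

        -- parent: inserted only once thanks to the unique name
        INSERT IGNORE INTO brewer (name, country, city, state)
        VALUES ('Anheuser-Busch', 'United States', 'St. Louis', 'MO');

        -- child: fetch the parent's key by its natural key (the name)
        INSERT INTO beer (brewer_id, name, category, type, alcohol_content)
        SELECT b.id, 'Budweiser', 'Pale lager', 'Regular', 5.0
        FROM brewer b
        WHERE b.name = 'Anheuser-Busch';

    An import script loops over the CSV lines doing this pair of statements, or bulk-loads the file into a staging table with LOAD DATA INFILE and runs the same INSERT ... SELECT logic in one pass.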

  • array of structures, or structure of arrays?

    - by Jason S
    Hmmm. I have a table which is an array of structures I need to store in Java. The naive don't-worry-about-memory approach says do this:

        public class Record {
            final private int field1;
            final private int field2;
            final private long field3;
            /* constructor & accessors here */
        }

        List<Record> records = new ArrayList<Record>();

    If I end up using a large number (10^6) of records, where individual records are accessed occasionally, one at a time, how would I figure out how the preceding approach (an ArrayList) would compare with an optimized approach for storage costs:

        public class OptimizedRecordStore {
            final private int[] field1;
            final private int[] field2;
            final private long[] field3;

            Record getRecord(int i) {
                return new Record(field1[i], field2[i], field3[i]);
            }
            /* constructor and other accessors & methods */
        }

    Edit: assume the number of records is something that is changed infrequently or never. I'm probably not going to use the OptimizedRecordStore approach, but I want to understand the storage cost issue so I can make that decision with confidence. Obviously, if I add/change the number of records in the OptimizedRecordStore approach above, I either have to replace the whole object with a new one or remove the "final" keyword. kd304 brings up a good point that was in the back of my mind. In other situations similar to this I need column access on the records, e.g. if field1 and field2 are "time" and "position", and it's important for me to get those values as an array for use with MATLAB, so I can graph/analyze them efficiently.
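
    A rough back-of-envelope comparison for 10^6 records; the per-object numbers below are typical 32-bit HotSpot assumptions, not guarantees:

        // List<Record>: each Record is its own heap object
        //   ~8 B object header + 4 B + 4 B + 8 B fields, padded to ~24 B,
        //   plus a ~4 B reference held by the ArrayList's backing array
        //   => ~28 MB spread across a million separate objects
        //
        // OptimizedRecordStore: three primitive arrays
        //   10^6 * (4 + 4 + 8) B = ~16 MB in just three contiguous objects,
        //   with no per-record header or reference overhead

    The column-oriented layout also gives MATLAB-style column access (all of field1 as one int[]) for free, which matches the use case mentioned at the end of the question.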

  • Tail recursion and memoization with C#

    - by Jay
    I'm writing a function that finds the full path of a directory based on a database table of entries. Each record contains a key, the directory's name, and the key of the parent directory (it's the Directory table in an MSI, if you're familiar). I had an iterative solution, but it started looking a little nasty. I thought I could write an elegant tail-recursive solution, but I'm not sure anymore. I'll show you my code and then explain the issues I'm facing.

        Dictionary<string, string> m_directoryKeyToFullPathDictionary = new Dictionary<string, string>();
        ...
        private string ExpandDirectoryKey(Database database, string directoryKey)
        {
            // check for terminating condition
            string fullPath;
            if (m_directoryKeyToFullPathDictionary.TryGetValue(directoryKey, out fullPath))
            {
                return fullPath;
            }
            // inductive step
            Record record = ExecuteQuery(database,
                "SELECT DefaultDir, Directory_Parent FROM Directory where Directory.Directory='{0}'",
                directoryKey);
            // null check
            string directoryName = record.GetString("DefaultDir");
            string parentDirectoryKey = record.GetString("Directory_Parent");
            return Path.Combine(ExpandDirectoryKey(database, parentDirectoryKey), directoryName);
        }

    This is how the code looked when I realized I had a problem (with some minor validation/massaging removed). I want to use memoization to short-circuit whenever possible, but that requires me to make a function call to the dictionary to store the output of the recursive ExpandDirectoryKey call. I realize that I also have a Path.Combine call there, but I think that can be circumvented with a ... + Path.DirectorySeparatorChar + .... I thought about using a helper method that would memoize the directory and return the value, so that I could call it like this at the end of the function above:

        return MemoizeHelper(
            m_directoryKeyToFullPathDictionary,
            Path.Combine(ExpandDirectoryKey(database, parentDirectoryKey)),
            directoryName);

    But I feel like that's cheating and not going to be optimized as tail recursion. Any ideas? Should I be using a completely different strategy? This doesn't need to be a super efficient algorithm at all, I'm just really curious. I'm using .NET 4.0, btw. Thanks!
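
    Since the C# compiler doesn't guarantee tail-call elimination anyway, one option is to drop the recursion and walk up the parent chain iteratively, memoizing every directory visited on the way back down. A sketch reusing the names from the question, and assuming the dictionary is pre-seeded with the root directory (which the existing terminating condition already implies):

        private string ExpandDirectoryKey(Database database, string directoryKey)
        {
            // walk up until we hit a memoized ancestor
            var pending = new Stack<KeyValuePair<string, string>>(); // key -> name
            string fullPath;
            string key = directoryKey;
            while (!m_directoryKeyToFullPathDictionary.TryGetValue(key, out fullPath))
            {
                Record record = ExecuteQuery(database,
                    "SELECT DefaultDir, Directory_Parent FROM Directory where Directory.Directory='{0}'",
                    key);
                pending.Push(new KeyValuePair<string, string>(key, record.GetString("DefaultDir")));
                key = record.GetString("Directory_Parent");
            }
            // walk back down, building and caching each full path exactly once
            while (pending.Count > 0)
            {
                var entry = pending.Pop();
                fullPath = Path.Combine(fullPath, entry.Value);
                m_directoryKeyToFullPathDictionary[entry.Key] = fullPath;
            }
            return fullPath;
        }

    Every directory is queried at most once across all calls, which is what the memoization was after.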

  • JSF, RichFaces, popup window

    - by Hubidubi
    Hi, I would like to make a list-detail view with RichFaces. There will be a link for every record in the list that should open a new window containing the record details. I tried to implement the link this way:

        <a4j:commandLink oncomplete="window.open('/pages/serviceDetail.jsf', 'popupWindow',
                'dependent=yes, menubar=no, toolbar=no, height=500, width=400')"
            actionListener="#{monitoringBean.recordDetail}"
            value="details" />

    I use the following for both the list and the detail page:

        <a4j:keepAlive beanName="monitoringBean" ajaxOnly="false" />

    The recordDetail method fills the data of the selected record into a variable of the bean, which I would like to display on the detail page. The problem is that keepAlive doesn't work, so I get a new bean instance on the detail page every time, and the previously selected record from the other bean is not accessible there. Is there a way to pass a parameter (id) to the detail page to handle record selection? Or is there any way to make keepAlive work? (I think this would be the easiest.) Thanks
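
    A sketch of the parameter-passing route, which sidesteps keepAlive entirely: put the record id into the popup URL and let the detail page's bean load the record again (the recordId parameter and the record loop variable are assumptions):

        <a4j:commandLink value="details"
            oncomplete="window.open('/pages/serviceDetail.jsf?recordId=#{record.id}',
                'popupWindow', 'dependent=yes, menubar=no, toolbar=no, height=500, width=400')" />

    and in the detail page's managed bean:

        String id = FacesContext.getCurrentInstance().getExternalContext()
                                .getRequestParameterMap().get("recordId");
        // load the record by id from the service/DAO layer

    Since the popup is a fresh GET request in a new window, a request parameter is the most reliable way to tell it which record it is about.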

  • jQuery: how to store record ids for data from AJAX queries in dynamically created HTML elements

    - by grapkulec
    Probably the question title is rather cryptic, but I will try to explain myself here, so please bear with me. :) Let's assume this configuration: the server side is a PHP application responding to requests with data (list of items, single item details, etc.) in JSON format, and the client side is a jQuery application sending AJAX requests to that PHP app and creating HTML content corresponding to the received data. So, for example: the client requests "list of all animals with names starting with 'A'", gets the JSON response from the server, and for every "animal" creates some HTML gizmo, like a div with the animal description or something like that. It doesn't really matter what HTML element it will be, but it has to point exactly to a specific record by "containing" the id of that record. And here is my dilemma: is it a good solution to use the "id" property for that? So it would be like:

        <div id="10" class="animal">
            <p> This is an animal of a very mysterious kind... </p>
        </div>
        <div id="11" class="animal">
            <p> And this one is very common in our country... </p>
        </div>

    where id="10" is of course an indication that this is the representation of the record with id = 10. Or maybe I should store this record id in some custom-made tag like

        <record_id>10</record_id>

    and leave "id" strictly for what it was meant to be (a CSS selector)? I need that record id for further stuff like updating the database with some user input, deleting some of the "animals", or creating new ones, or anything else that will be needed. All manipulations will be done with jQuery and AJAX, and responses will be visualized by dynamically creating an HTML interface. I'm sure that somebody has had to deal with this kind of stuff before, so I would be grateful for some tips on that topic.
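
    A sketch of the usual middle ground: HTML5 data-* attributes, which keep the id attribute free for its normal role and avoid invalid custom tags (the delete URL below is hypothetical):

        <div class="animal" data-record-id="10">
            <p>This is an animal of a very mysterious kind...</p>
        </div>

        // jQuery: read the record id back when the element is acted on
        $('.animal').click(function () {
            var recordId = $(this).attr('data-record-id'); // "10"
            $.post('/animals/delete.php', { id: recordId }, function (resp) {
                // update the UI from resp
            });
        });

    As a side note, a purely numeric id like id="10" is technically invalid in HTML 4 (an ID must start with a letter), which is one more reason to keep record keys out of the id attribute.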

  • System.Threading.Timer Doesn't Trigger my TimerCallBack Delegate

    - by Tom Kong
    Hi, I am writing my first Windows Service using C# and I am having some trouble with my Timer class. When the service is started, it runs as expected, but the code will not execute again (I want it to run every minute). Please take a quick look at the attached source and let me know if you see any obvious mistakes! TIA

        using System;
        using System.Collections.Generic;
        using System.ComponentModel;
        using System.Data;
        using System.Diagnostics;
        using System.Linq;
        using System.ServiceProcess;
        using System.Text;
        using System.Threading;
        using System.IO;

        namespace CXO001
        {
            public partial class Service1 : ServiceBase
            {
                public Service1()
                {
                    InitializeComponent();
                }

                /*
                 * Aim: To calculate and update the Occupancy values for the different Sites
                 *
                 * Method: Retrieve data every minute, updating a public value which can be polled
                 */
                protected override void OnStart(string[] args)
                {
                    Daemon();
                }

                public void Daemon()
                {
                    TimerCallback tcb = new TimerCallback(On_Tick);
                    TimeSpan duetime = new TimeSpan(0, 0, 1);
                    TimeSpan interval = new TimeSpan(0, 1, 0);
                    Timer querytimer = new Timer(tcb, null, duetime, interval);
                }

                protected override void OnStop()
                {
                }

                static int[] floorplanids = new int[] { 115, 114, 107, 108 };
                public static List<Record> Records = new List<Record>();
                static bool firstrun = true;

                public static void On_Tick(object timercallback)
                {
                    // Update occupancy data for the last minute
                    // Save a copy of the public values to HDD with a timestamp
                    string starttime;
                    if (Records.Count > 0)
                    {
                        starttime = Records.Last().TS;
                        firstrun = false;
                    }
                    else
                    {
                        starttime = DateTime.Today.AddHours(7).ToString();
                        firstrun = true;
                    }
                    DateTime endtime = DateTime.Now;
                    GetData(starttime, endtime);
                }

                public static void GetData(string starttime, DateTime endtime)
                {
                    string connstr = "Data Source = 192.168.1.123; Initial Catalog = Brickstream_OPS; User Id = Brickstream; Password = bstas;";
                    DataSet resultds = new DataSet();

                    // Get the occupancy for each Zone
                    foreach (int zone in floorplanids)
                    {
                        SQL s = new SQL();
                        string querystr = "SELECT SUM(DIRECTIONAL_METRIC.NUM_TO_ENTER - DIRECTIONAL_METRIC.NUM_TO_EXIT) AS 'Occupancy' " +
                            "FROM REPORT_OBJECT " +
                            "INNER JOIN REPORT_OBJ_METRIC ON REPORT_OBJECT.REPORT_OBJ_ID = REPORT_OBJ_METRIC.REPORT_OBJECT_ID " +
                            "INNER JOIN DIRECTIONAL_METRIC ON REPORT_OBJ_METRIC.REP_OBJ_METRIC_ID = DIRECTIONAL_METRIC.REP_OBJ_METRIC_ID " +
                            "WHERE (REPORT_OBJ_METRIC.M_START_TIME BETWEEN '" + starttime + "' AND '" + endtime.ToString() + "') " +
                            "AND (REPORT_OBJECT.FLOORPLAN_ID = '" + zone + "');";
                        resultds = s.Go(querystr, connstr, zone.ToString(), resultds);
                    }

                    List<Record> result = new List<Record>();
                    int c = 0;
                    foreach (DataTable dt in resultds.Tables)
                    {
                        Record r = new Record();
                        r.TS = DateTime.Now.ToString();
                        r.Zone = dt.TableName;
                        if (!firstrun)
                        {
                            r.Occupancy = (dt.Rows[0].Field<int>("Occupancy")) + (Records[c].Occupancy);
                        }
                        else
                        {
                            r.Occupancy = dt.Rows[0].Field<int>("Occupancy");
                        }
                        result.Add(r);
                        c++;
                    }
                    Records = result;
                    MrWriter();
                }

                public static void MrWriter()
                {
                    StringBuilder output = new StringBuilder("Time,Zone,Occupancy\n");
                    foreach (Record r in Records)
                    {
                        output.Append(r.TS);
                        output.Append(",");
                        output.Append(r.Zone);
                        output.Append(",");
                        output.Append(r.Occupancy.ToString());
                        output.Append("\n");
                    }
                    output.Append(firstrun.ToString());
                    output.Append(DateTime.Now.ToFileTime());
                    string filePath = @"C:\temp\CXO.csv";
                    File.WriteAllText(filePath, output.ToString());
                }
            }
        }
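
    One classic gotcha fits these symptoms exactly: querytimer is a local variable, so once Daemon() returns there are no live references to the System.Threading.Timer and the garbage collector is free to collect it, after which the callback stops firing. A sketch of the likely fix, keeping the reference alive for the service's lifetime:

        private static Timer querytimer;   // field, not a local

        public void Daemon()
        {
            TimerCallback tcb = new TimerCallback(On_Tick);
            TimeSpan duetime = new TimeSpan(0, 0, 1);
            TimeSpan interval = new TimeSpan(0, 1, 0);
            querytimer = new Timer(tcb, null, duetime, interval);
        }

        protected override void OnStop()
        {
            if (querytimer != null)
                querytimer.Dispose();      // clean up when the service stops
        }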

  • Optimizing python code performance when importing zipped csv to a mongo collection

    - by mark
    I need to import a zipped csv into a mongo collection, but there is a catch - every record contains a timestamp in Pacific Time, which must be converted to the local time corresponding to the (longitude, latitude) pair found in the same record. The code looks like so:

        def read_csv_zip(path, timezones):
            with ZipFile(path) as z, z.open(z.namelist()[0]) as input:
                csv_rows = csv.reader(input)
                header = csv_rows.next()
                check, converters = get_aux_stuff(header)
                for csv_row in csv_rows:
                    if check(csv_row):
                        row = {
                            converter[0]: converter[1](value)
                            for converter, value in zip(converters, csv_row)
                            if allow_field(converter)
                        }
                        ts = row['ts']
                        lng, lat = row['loc']
                        found_tz_entry = timezones.find_one(SON({'loc': {'$within': {'$box': [
                            [lng - tz_lookup_radius, lat - tz_lookup_radius],
                            [lng + tz_lookup_radius, lat + tz_lookup_radius]]}}}))
                        if found_tz_entry:
                            tz_name = found_tz_entry['tz']
                            local_ts = ts.astimezone(timezone(tz_name)).replace(tzinfo=None)
                            row['tz'] = tz_name
                        else:
                            local_ts = (ts.astimezone(utc) + timedelta(hours=int(lng / 15))).replace(tzinfo=None)
                        row['local_ts'] = local_ts
                        yield row

        def insert_documents(collection, source, batch_size):
            while True:
                items = list(itertools.islice(source, batch_size))
                if len(items) == 0:
                    break
                try:
                    collection.insert(items)
                except:
                    for item in items:
                        try:
                            collection.insert(item)
                        except Exception as exc:
                            print("Failed to insert record {0} - {1}".format(item['_id'], exc))

        def main(zip_path):
            with Connection() as connection:
                data = connection.mydb.data
                timezones = connection.timezones.data
                insert_documents(data, read_csv_zip(zip_path, timezones), 1000)

    The code proceeds as follows: every record read from the csv is checked and converted to a dictionary, where some fields may be skipped, some titles renamed (from those appearing in the csv header), and some values converted (to datetime, to integers, to floats, etc.). For each record read from the csv, a lookup is made into the timezones collection to map the record location to the respective time zone. If the mapping is successful, that timezone is used to convert the record timestamp (Pacific Time) to the respective local timestamp. If no mapping is found, a rough approximation is calculated. The timezones collection is appropriately indexed, of course - calling explain() confirms it.

    The process is slow. Naturally, having to query the timezones collection for every record kills the performance. I am looking for advice on how to improve it. Thanks.

    EDIT

    The timezones collection contains 8176040 records, each containing four values:

        > db.data.findOne()
        { "_id" : 3038814, "loc" : [ 1.48333, 42.5 ], "tz" : "Europe/Andorra" }

    EDIT2

    OK, I have compiled a release build of http://toblerity.github.com/rtree/ and configured the rtree package. Then I created an rtree dat/idx pair of files corresponding to my timezones collection. So, instead of calling collection.find_one I call index.intersection. Surprisingly, not only is there no improvement, but it works even more slowly now! Maybe rtree could be fine-tuned to load the entire dat/idx pair into RAM (704M), but I do not know how to do it. Until then, it is not an alternative. In general, I think the solution should involve parallelization of the task.

    EDIT3

    Profile output when using collection.find_one:

        >>> p.sort_stats('cumulative').print_stats(10)
        Tue Apr 10 14:28:39 2012    ImportDataIntoMongo.profile

        64549590 function calls (64549180 primitive calls) in 1231.257 seconds

        Ordered by: cumulative time
        List reduced from 730 to 10 due to restriction <10>

        ncalls  tottime  percall  cumtime  percall filename:lineno(function)
             1    0.012    0.012 1231.257 1231.257 ImportDataIntoMongo.py:1(<module>)
             1    0.001    0.001 1230.959 1230.959 ImportDataIntoMongo.py:187(main)
             1  853.558  853.558  853.558  853.558 {raw_input}
             1    0.598    0.598  370.510  370.510 ImportDataIntoMongo.py:165(insert_documents)
        343407    9.965    0.000  359.034    0.001 ImportDataIntoMongo.py:137(read_csv_zip)
        343408    2.927    0.000  287.035    0.001 c:\python27\lib\site-packages\pymongo\collection.py:489(find_one)
        343408    1.842    0.000  274.803    0.001 c:\python27\lib\site-packages\pymongo\cursor.py:699(next)
        343408    2.542    0.000  271.212    0.001 c:\python27\lib\site-packages\pymongo\cursor.py:644(_refresh)
        343408    4.512    0.000  253.673    0.001 c:\python27\lib\site-packages\pymongo\cursor.py:605(__send_message)
        343408    0.971    0.000  242.078    0.001 c:\python27\lib\site-packages\pymongo\connection.py:871(_send_message_with_response)

    Profile output when using index.intersection:

        >>> p.sort_stats('cumulative').print_stats(10)
        Wed Apr 11 16:21:31 2012    ImportDataIntoMongo.profile

        41542960 function calls (41542536 primitive calls) in 2889.164 seconds

        Ordered by: cumulative time
        List reduced from 778 to 10 due to restriction <10>

        ncalls  tottime  percall  cumtime  percall filename:lineno(function)
             1    0.028    0.028 2889.164 2889.164 ImportDataIntoMongo.py:1(<module>)
             1    0.017    0.017 2888.679 2888.679 ImportDataIntoMongo.py:202(main)
             1 2365.526 2365.526 2365.526 2365.526 {raw_input}
             1    0.766    0.766  502.817  502.817 ImportDataIntoMongo.py:180(insert_documents)
        343407    9.147    0.000  491.433    0.001 ImportDataIntoMongo.py:152(read_csv_zip)
        343406    0.571    0.000  391.394    0.001 c:\python27\lib\site-packages\rtree-0.7.0-py2.7.egg\rtree\index.py:384(intersection)
        343406  379.957    0.001  390.824    0.001 c:\python27\lib\site-packages\rtree-0.7.0-py2.7.egg\rtree\index.py:435(_intersection_obj)
        686513   22.616    0.000   38.705    0.000 c:\python27\lib\site-packages\rtree-0.7.0-py2.7.egg\rtree\index.py:451(_get_objects)
        343406    6.134    0.000   33.326    0.000 ImportDataIntoMongo.py:162(<dictcomp>)
           346    0.396    0.001   30.665    0.089 c:\python27\lib\site-packages\pymongo\collection.py:240(insert)

    EDIT4

    I have parallelized the code, but the results are still not very encouraging. I am convinced it could be done better. See my own answer to this question for details.
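
    Given how heavily consecutive csv records tend to repeat nearby locations, one cheap win is to memoize the timezone lookup on rounded coordinates, so the collection is queried only once per coordinate bucket. A sketch (the bucket size is an assumption, and coarse buckets trade a little accuracy near timezone borders):

        tz_cache = {}

        def lookup_tz(timezones, lng, lat):
            key = (round(lng, 1), round(lat, 1))  # ~0.1 degree buckets
            if key not in tz_cache:
                tz_cache[key] = timezones.find_one(SON({'loc': {'$within': {'$box': [
                    [lng - tz_lookup_radius, lat - tz_lookup_radius],
                    [lng + tz_lookup_radius, lat + tz_lookup_radius]]}}}))
            return tz_cache[key]

    Replacing the inline find_one with lookup_tz(timezones, lng, lat) keeps the rest of read_csv_zip unchanged, and it combines naturally with the parallelization mentioned in EDIT4.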

  • MySQL FOR UPDATE

    - by shantanuo
    MySQL supports the "FOR UPDATE" keyword. Here is how I tested that it works as expected. I opened 2 browser tabs and executed the following commands in one window:

        mysql> start transaction;
        Query OK, 0 rows affected (0.00 sec)

        mysql> select * from myxml where id = 2 for update;
        ....

        mysql> update myxml set id = 3 where id = 2 limit 1;
        Query OK, 1 row affected, 1 warning (0.00 sec)
        Rows matched: 1  Changed: 1  Warnings: 0

        mysql> commit;
        Query OK, 0 rows affected (0.08 sec)

    In another window, I started a transaction and tried to take an update lock on the same record:

        mysql> start transaction;
        Query OK, 0 rows affected (0.00 sec)

        mysql> select * from myxml where id = 2 for update;
        Empty set (43.81 sec)

    As you can see from the above example, I could not select the record for 43 seconds while the transaction was being processed by the other application in window 1. Once the transaction was over, I got to select the record, but since id 2 had been changed to id 3 by the transaction that executed first, no record was returned. My question is: what are the disadvantages of using the "FOR UPDATE" syntax? If I do not commit the transaction that is running in window 1, will the record be locked forever?
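
    For what it's worth, InnoDB does not hold the row lock forever: it is released when the first session commits, rolls back, or disconnects, and a blocked waiter gives up on its own after innodb_lock_wait_timeout seconds (50 by default) with a "Lock wait timeout exceeded" error. A quick way to check the ceiling:

        mysql> SHOW VARIABLES LIKE 'innodb_lock_wait_timeout';
        -- the blocked SELECT ... FOR UPDATE fails after this many seconds
        -- if the locking transaction never commits

    That is also why the second window's SELECT returned after 43.81 seconds here: window 1 committed before the timeout was reached.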

  • Regarding the Google App Engine datastore - primary key

    - by megala
    Hi, I created a table in the Google Bigtable-backed datastore, and I set the primary key using annotations as follows:

        @Id
        @Column(name = "groupname")
        private String groupname;

        @Basic
        private String groupdesc;

    It works correctly, but it overrides the previous record. How do I solve this? For example, if I enter groupname=group1, groupdesc=groupdesc, it is accepted. After that, if I enter the same group name again, it overrides the previous record: for example, groupname=group1, groupdesc=groups overwrites the earlier one.
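
    That behavior follows from using groupname as the @Id: the datastore treats a write with an existing key as an update of that entity, not as a new row. A sketch of the alternative, letting the datastore generate the key so that equal group names become distinct records (if groupname must still be unique, that then has to be checked in application code before saving):

        @Id
        @GeneratedValue(strategy = GenerationType.IDENTITY)
        private Long id;

        @Basic
        private String groupname;

        @Basic
        private String groupdesc;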

  • Input/output (read) errors in Bacula while setting up a Tape Drive + Autochanger

    - by Kyle Brandt
    When running the label barcode command in Bacula I am getting input/output errors. I am just getting started in trying to set this up:

        Connecting to Storage daemon TapeDevice at ny-back01.ny.stackoverflow.com:9103 ...
        Sending label command for Volume "ACJ332" Slot 1 ...
        3307 Issuing autochanger "unload slot 8, drive 0" command.
        3304 Issuing autochanger "load slot 1, drive 0" command.
        3305 Autochanger "load slot 1, drive 0", status is OK.
        block.c:1010 Read error on fd=5 at file:blk 0:0 on device "ULTRIUM-HH4" (/dev/st0). ERR=Input/output error.
        3000 OK label. VolBytes=64512 DVD=0 Volume="ACJ332" Device="ULTRIUM-HH4" (/dev/st0)
        Catalog record for Volume "ACJ332", Slot 1 successfully created.
        [the same load / read error / "3000 OK label" / catalog-record sequence repeats
         for Volumes "ACJ331" (Slot 2), "ACJ328" (Slot 3), "ACJ329" (Slot 4),
         "ACJ335" (Slot 5), "ACJ334" (Slot 6) and "ACJ333" (Slot 7)]
        Sending label command for Volume "ACJ330" Slot 8 ...
        3307 Issuing autochanger "unload slot 7, drive 0" command.

    bacula-dir.conf:

        # Definition of file storage device
        Storage {
          Name = TapeDevice
          # Do not use "localhost" here
          Address = ny-back01...       # N.B. Use a fully qualified name here
          SDPort = 9103
          Password = "..."
          Device = ULTRIUM-HH4
          Media Type = LTO-4
          Media Type = File
          Autochanger = Yes
        }

    bacula-sd.conf:

        Autochanger {
          Name = StorageLoader1U
          Device = ULTRIUM-HH4
          Changer Command = "/etc/bacula/scripts/mtx-changer %c %o %S %a %d"
          Changer Device = /dev/sg5
        }

        Device {
          Name = ULTRIUM-HH4
          Media Type = LTO-4
          Archive Device = /dev/st0
          AutomaticMount = yes;
          AlwaysOpen = yes;
          RemovableMedia = yes;
          RandomAccess = no;
          AutoChanger = yes;
          RandomAccess = no;
        }

    Does anyone know what this means / why I am getting this?
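
    Two observations, offered as hypotheses rather than a definitive diagnosis. First, a read error at file:blk 0:0 during labeling is common on brand-new media, because the drive is asked to read a label from a tape that has never been written; each label here still ends in "3000 OK label", so the volumes are being created. Second, the Director's Storage resource defines Media Type twice (LTO-4 and then File), while the Director and the SD's Device must agree on a single media type. A sketch of a cleaned-up resource:

        Storage {
          Name = TapeDevice
          Address = ny-back01.ny.stackoverflow.com   # fully qualified name
          SDPort = 9103
          Password = "..."
          Device = StorageLoader1U    # reference the Autochanger resource
          Media Type = LTO-4          # one media type, matching the SD
          Autochanger = Yes
        }

    The duplicated "RandomAccess = no;" line in the SD's Device resource is harmless, but it can also be dropped.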

  • .NET 4.0 Generic Invariant, Covariant, Contravariant

    - by Sameer Shariff
    Here's the scenario I am faced with:

        public abstract class Record { }
        public abstract class TableRecord : Record { }
        public abstract class LookupTableRecord : TableRecord { }
        public sealed class UserRecord : LookupTableRecord { }

        public interface IDataAccessLayer<TRecord>
            where TRecord : Record { }

        public interface ITableDataAccessLayer<TTableRecord> : IDataAccessLayer<TTableRecord>
            where TTableRecord : TableRecord { }

        public interface ILookupTableDataAccessLayer<TLookupTableRecord> : ITableDataAccessLayer<TLookupTableRecord>
            where TLookupTableRecord : LookupTableRecord { }

        public abstract class DataAccessLayer<TRecord> : IDataAccessLayer<TRecord>
            where TRecord : Record, new() { }

        public abstract class TableDataAccessLayer<TTableRecord> : DataAccessLayer<TTableRecord>, ITableDataAccessLayer<TTableRecord>
            where TTableRecord : TableRecord, new() { }

        public abstract class LookupTableDataAccessLayer<TLookupTableRecord> : TableDataAccessLayer<TLookupTableRecord>, ILookupTableDataAccessLayer<TLookupTableRecord>
            where TLookupTableRecord : LookupTableRecord, new() { }

        public sealed class UserDataAccessLayer : LookupTableDataAccessLayer<UserRecord> { }

    Now when I try to cast UserDataAccessLayer to its generic base type ITableDataAccessLayer<TableRecord>, the compiler complains that it cannot implicitly convert the type.
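
    The conversion only becomes legal if the interfaces are covariant: UserDataAccessLayer implements ITableDataAccessLayer<UserRecord>, and that is an ITableDataAccessLayer<TableRecord> only when the type parameter is declared out. C# 4.0 allows this, with the restriction that the parameter may then appear only in output positions. A sketch:

        public interface IDataAccessLayer<out TRecord>
            where TRecord : Record
        {
            TRecord GetById(int id);      // legal: TRecord only flows out
            // void Save(TRecord record); // illegal with 'out': TRecord flows in
        }

        public interface ITableDataAccessLayer<out TTableRecord> : IDataAccessLayer<TTableRecord>
            where TTableRecord : TableRecord { }

    If the layers also need methods that accept records, a common workaround is to split each interface into a covariant read side and an invariant write side. Note that variance can only be declared on interfaces and delegates, never on the abstract base classes.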

  • activerecord search conditions - looking for null or false

    - by Daniel
    When doing a search in ActiveRecord, I'm looking for records that do not have an archived bit set to true. Some of the archived bits are null (which are not archived); others have archived set to false. Obviously,

        Project.all(:conditions => {:archived => false})

    misses the projects whose archived bits have null values. How can all non-archived projects be selected with ActiveRecord?
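
    A sketch using a SQL-fragment condition, in the same Rails 2-era syntax as the question, matching both NULL and false in one query:

        Project.all(:conditions => ["archived IS NULL OR archived = ?", false])

    The hash form can't express an OR across NULL and false, which is why the fragment form is needed here.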
