Search Results

Search found 20359 results on 815 pages for 'fixed length record'.


  • adding custom fields dynamically to a model

    - by pankajbhageria
    I have a model called List which has many records:

        class List
          has_many :records
        end

        class Record
        end

    The table Record has 2 permanent fields: name, email. Besides these 2 fields, for each List a Record can have 'n' custom fields. For example: for list1 I add address (text) and dob (date) as custom fields. Then while adding records to list one, each record can have values for address and dob. Is there any ActiveRecord plugin which provides this type of functionality? Or else could you share your thoughts on how to model this? Thanks in advance, Pankaj
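
    A common plugin-free way to model "n custom fields per list" is an entity-attribute-value (EAV) layout. The sketch below is one possible shape, not a specific plugin's API; all model and column names are illustrative:

        # EAV sketch: each List defines CustomFields; each Record stores one
        # CustomValue per custom field it fills in.
        class List < ActiveRecord::Base
          has_many :records
          has_many :custom_fields          # columns: name, field_type ("text", "date", ...)
        end

        class Record < ActiveRecord::Base
          belongs_to :list
          has_many :custom_values
        end

        class CustomValue < ActiveRecord::Base
          belongs_to :record
          belongs_to :custom_field         # column: value (stored as a string, cast on read)
        end

    The trade-off of EAV is that custom values live in rows rather than columns, so querying them is clumsier than real columns; for heavier use, a serialized hash column is a common alternative.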


  • Create Directory for records in MS Access 2007

    - by glinch
    Hi there. Is it possible to create a directory folder for individual records in Access 2007? For example:

        tblUser
            ID
            firstName
            surName

    Adding a record would create a folder such as C:\userdatabase\surName,firstName,ID. I could see this being useful in situations where a large number of images/files needs to be associated with a record: Access would create, and link to, a directory for each record. Thanks in advance for any advice. Noel
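
    There is no built-in folder-per-record feature, but a few lines of VBA in the form's After Insert event can do it. A sketch, assuming a form bound to tblUser (the event and field references are standard Access VBA; the path scheme is the one from the question):

        Private Sub Form_AfterInsert()
            Dim path As String
            path = "C:\userdatabase\" & Me!surName & "," & Me!firstName & "," & Me!ID
            ' MkDir raises an error if the folder already exists, so test first.
            If Dir(path, vbDirectory) = "" Then
                MkDir path
            End If
        End Sub

    A hyperlink field (or just the stored path) can then point each record at its folder.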


  • Does the PHP MVC framework agavi use CRUD compliant with REST?

    - by txwikinger
    The agavi framework uses the PUT request for create and POST for updating information. Usually in REST this is the other way around (POST is often described as adding information while PUT replaces the whole data record). If I understand it correctly, the important point is that PUT must be idempotent, while POST has no such requirement. Therefore I wonder how creating a new record can be idempotent (i.e. multiple requests do not lead to multiple created records), in particular when the ORM uses an id as the primary key and the id of a new record is not known to the client (since it is auto-created in the database), and hence cannot be part of the request. How does agavi maintain the requirement of idempotence for the PUT request in light of this? Thanks.
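
    For context (this is the general REST reasoning, not a statement about agavi's internals): PUT-based creation stays idempotent when the client chooses the resource identifier itself, for example a UUID, so that repeating the request targets the same URI and overwrites rather than duplicates:

        PUT /records/1b4e28ba-2fa1-11d2-883f-b9a761bde3fb HTTP/1.1
        Content-Type: application/json

        {"name": "example"}

    Sending this twice leaves one record. When the server must assign the id, creation is normally done with POST precisely because it cannot be made idempotent this way.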


  • Find records IN BETWEEN Date Range

    - by Muhammad Kashif Nadeem
    Please see the attached image. I have a table with FromDate and ToDate columns: FromDate is the start of some event and ToDate is the end of that event. I need to find a record when the search criteria fall inside its range of dates. E.g. if a record has FromDate 2010/5/15 and ToDate 2010/5/25, and my criteria are FromDate 2010/5/18 and ToDate 2010/5/21, then this record should be in the search results because it lies in the range of 15 to 25. Following is (a chunk of) my search query:

        SELECT m.EventId
        FROM MajorEvents m
        WHERE ((m.LocationID = @locationID OR @locationID IS NULL) OR m.LocationID IS NULL)
        AND (
            CONVERT(VARCHAR(10), m.EventDateFrom, 23)
                BETWEEN CONVERT(VARCHAR(10), @DateTimeFrom, 23) AND CONVERT(VARCHAR(10), @DateTimeTo, 23)
            OR CONVERT(VARCHAR(10), m.EventDateTo, 23)
                BETWEEN CONVERT(VARCHAR(10), @DateTimeFrom, 23) AND CONVERT(VARCHAR(10), @DateTimeTo, 23)
        )

    If the search criteria equal FromDate or ToDate, the results are OK. E.g. if the criteria are DateFrom = 2010/5/15 and DateTo = 2010/5/18, the record is returned because DateFrom is exactly the FromDate in the db; if the criteria are DateFrom = 2010/5/22 and DateTo = 2010/5/25, the record is returned because DateTo is exactly the ToDate in the db. But anything strictly inside the range does not work. Thanks for the help.
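
    The query only matches when one of the record's endpoints falls inside the search window, so a window that sits strictly inside the record's range matches neither BETWEEN test. The standard interval-overlap condition checks both directions at once; a sketch against the same columns (the CONVERTs are also unnecessary if the columns are datetimes):

        -- Two ranges overlap exactly when each starts before the other ends.
        SELECT m.EventId
        FROM MajorEvents m
        WHERE (m.LocationID = @locationID OR @locationID IS NULL OR m.LocationID IS NULL)
          AND m.EventDateFrom <= @DateTimeTo
          AND m.EventDateTo   >= @DateTimeFrom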


  • Trying to fade in divs in a sequence, over time, using jQuery

    - by user346602
    Hi, I'm trying to figure out how to make 4 images fade in sequentially when the page loads. The following is my (amateurish) code. Here is the HTML:

        <div id="outercorners">
          <img id="corner1" src="images/corner1.gif" width="6" height="6" alt=""/>
          <img id="corner2" src="images/corner2.gif" width="6" height="6" alt=""/>
          <img id="corner3" src="images/corner3.gif" width="6" height="6" alt=""/>
          <img id="corner4" src="images/corner4.gif" width="6" height="6" alt=""/>
        </div><!-- end #outercorners-->

    Here is the jQuery:

        $(document).ready(function() {
          $("#corner1").fadeIn("2000", function(){
            $("#corner3").fadeIn("4000", function(){
              $("#corner2").fadeIn("6000", function(){
                $("#corner4").fadeIn("8000", function(){
                });
              });
            });
          });
        });

    Here is the CSS:

        #outercorners { position: fixed; top:186px; left:186px; width:558px; height:372px; }
        #corner1 { position: fixed; top:186px; left:186px; display: none; }
        #corner2 { position: fixed; top:186px; left:744px; display: none; }
        #corner3 { position: fixed; top:558px; left:744px; display: none; }
        #corner4 { position: fixed; top:558px; left:186px; display: none; }

    They seem to just wink at me, rather than fade in in the order I've ascribed to them. Should I be using the queue() function? And, if so, how would I implement it in this case? Thank you for any assistance.
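
    The likely culprit is the quoted durations: jQuery only understands the named speeds "slow" and "fast"; any other string (such as "2000") falls back to the default duration of 400 ms, which produces the quick "wink". Passing numbers should give the intended sequence without queue(); a sketch:

        // Durations must be numbers (milliseconds) or "slow"/"fast".
        $(document).ready(function () {
          $("#corner1").fadeIn(2000, function () {
            $("#corner3").fadeIn(2000, function () {
              $("#corner2").fadeIn(2000, function () {
                $("#corner4").fadeIn(2000);
              });
            });
          });
        });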


  • Decode base64 data as array in Python

    - by skerit
    I'm using this handy JavaScript function to decode a base64 string and get an array in return. This is the call:

        base64_decode_array('6gAAAOsAAADsAAAACAEAAAkBAAAKAQAAJgEAACcBAAAoAQAA')

    This is what's returned:

        234,0,0,0,235,0,0,0,236,0,0,0,8,1,0,0,9,1,0,0,10,1,0,0,38,1,0,0,39,1,0,0,40,1,0,0

    The problem is I don't really understand the JavaScript function:

        var base64chars = 'ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz0123456789+/'.split("");
        var base64inv = {};
        for (var i = 0; i < base64chars.length; i++) {
          base64inv[base64chars[i]] = i;
        }

        function base64_decode_array (s) {
          // remove/ignore any characters not in the base64 characters list
          // or the pad character -- particularly newlines
          s = s.replace(new RegExp('[^'+base64chars.join("")+'=]', 'g'), "");
          // replace any incoming padding with a zero pad (the 'A' character is zero)
          var p = (s.charAt(s.length-1) == '=' ?
                  (s.charAt(s.length-2) == '=' ? 'AA' : 'A') : "");
          var r = [];
          s = s.substr(0, s.length - p.length) + p;
          // increment over the length of this encrypted string, four characters at a time
          for (var c = 0; c < s.length; c += 4) {
            // each of these four characters represents a 6-bit index in the base64 characters list
            // which, when concatenated, will give the 24-bit number for the original 3 characters
            var n = (base64inv[s.charAt(c)] << 18) + (base64inv[s.charAt(c+1)] << 12) +
                    (base64inv[s.charAt(c+2)] << 6) + base64inv[s.charAt(c+3)];
            // split the 24-bit number into the original three 8-bit (ASCII) characters
            r.push((n >>> 16) & 255);
            r.push((n >>> 8) & 255);
            r.push(n & 255);
          }
          // remove any zero pad that was added to make this a multiple of 24 bits
          return r;
        }

    What is the function of those "<<" and ">>>" operators? And is there a function like this for Python?
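
    In that JavaScript, << is a left shift and >>> is an unsigned right shift: the loop packs four 6-bit values into one 24-bit number, then splits it back into three bytes. Python's standard library already does all of this; a minimal Python 3 sketch that returns the same list of byte values:

        import base64

        def base64_decode_array(s):
            # b64decode returns bytes; iterating bytes yields integers 0-255.
            return list(base64.b64decode(s))

        print(base64_decode_array('6gAAAOsAAADsAAAACAEAAAkBAAAKAQAAJgEAACcBAAAoAQAA'))
        # [234, 0, 0, 0, 235, 0, 0, 0, 236, 0, 0, 0, 8, 1, 0, 0, ...]

    (On Python 2, use [ord(c) for c in base64.b64decode(s)] instead, since iterating a str yields characters.)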


  • Should I be using callbacks or should I override attributes?

    - by ryeguy
    Which is more "Rails-like"? If I want to modify a model's property when it's set, should I do this:

        def url=(url)
          # remove session id
          self[:url] = url.split('?s=')[0]
        end

    or this?

        before_save do |record|
          # remove session id
          record.url = record.url.split('?s=')[0]
        end

    Is there any benefit to doing it one way or the other? If so, why? If not, which one is generally more common?


  • The conceptual process of populating related tables in a database (MySql) from a CSV file

    - by user322772
    I'm new to relational databases, and all the material I've read covers primary and foreign keys, normal forms, and joins, but leaves out how to populate the database once it's created. How do you import a CSV file so the fields match their related tables? Say you were trying to build a beer database and had a CSV file with each line as a record:

        Header:   brewer, beer_name, country, city, state, beer_category, beer_type, alcohol_content
        Record 1: Anheuser-Busch, Budweiser, United States, St. Louis, MO, Pale lager, Regular, 5.0%
        Record 2: Anheuser-Busch, Bud Light, United States, St. Louis, MO, Pale lager, Light, 4.2%
        Record 3: Miller Brewing Company, Miller Lite, United States, Milwaukee, WI, Pale lager, Light, 4.2%

    You can create a "Brewer" table and a "Beer" table. When importing, how do you connect the primary keys between the tables?
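
    Conceptually: insert each distinct parent row first, capture the key the database generated for it, and use that key as the foreign key in the child rows. A MySQL sketch with illustrative table and column names:

        -- Parent row; AUTO_INCREMENT assigns the primary key.
        INSERT INTO brewer (name, country, city, state)
        VALUES ('Anheuser-Busch', 'United States', 'St. Louis', 'MO');

        -- LAST_INSERT_ID() is the key generated just above on this connection;
        -- it becomes the foreign key of each of that brewer's beers.
        INSERT INTO beer (brewer_id, name, category, type, alcohol_content)
        VALUES (LAST_INSERT_ID(), 'Budweiser', 'Pale lager', 'Regular', '5.0%');

    In practice an import script loops over the CSV, inserting a brewer only the first time it is seen (or looking its id up with a SELECT), then inserting the beer row with that id.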


  • Capture video from WPF app

    - by Julien Couvreur
    I want to write a C# application which can record a video capture of one of its WPF controls. Is there a solution in .NET for recording video of a control, or is there some library I could use? My goal is to write a SketchCast application. The use case is the following: launch the SketchCast app and press the record button, write ink into a WPF ink area while talking, press stop, and the recorded voice and ink animation get saved into a video file in some encoding.


  • MATLAB image corner coordinates & referencing to cell arrays

    - by James
    Hi, I am having some problems comparing the elements in different cell arrays. The context of this problem is that I am using the bwboundaries function in MATLAB to trace the outline of an image. The image is of a structural cross section, and I am trying to find if there is continuity throughout the section (i.e. there is only one outline produced by the bwboundaries command). Having done this and found where there is more than one section traced (i.e. it is not continuous), I have used the cornermetric command to find the corners of each section. The code I have is:

        %% Define the structural section as a binary matrix
        %% (image is an I-section with the web broken)
        bw(20:40,50:150) = 1;
        bw(160:180,50:150) = 1;
        bw(20:60,95:105) = 1;
        bw(140:180,95:105) = 1;
        Trace = bw;
        [B] = bwboundaries(Trace,'noholes'); % Traces the outer boundary of each section
        L = length(B);                       % Finds number of boundaries
        if L > 1
            disp('Multiple boundaries')      % States whether more than one boundary found
        end

        %% Obtain perimeter coordinates
        for k=1:length(B)  % For all the boundaries
            perim = B{k};  % Obtains perimeter coordinates (as a 2D matrix) from the cell array
        end

        %% Find the corner positions
        C = cornermetric(bw);
        Areacorners = find(C == max(max(C)))  % Finds the corner coordinates of each boundary
        [rowindexcorners,colindexcorners] = ind2sub(size(Newgeometry),Areacorners)
        % Convert corner coordinate indexes into subscripts, to give x & y
        % coordinates (i.e. the same format as B gives)

        %% Put these corner coordinates into a cell array
        Cornerscellarray = cell(length(rowindexcorners),1); % Initialises cell array of zeros
        for i =1:numel(rowindexcorners)
            % Assigns the corner indices into the cell array
            % (this is done so the cell arrays can be compared)
            Cornerscellarray(i) = {[rowindexcorners(i) colindexcorners(i)]};
        end

        for k=1:length(B)  % For all the boundaries found
            perim = B{k};  % Obtains coordinates for each perimeter
            Z = perim;     % Initialise the matrix containing the perimeter corners
            Sectioncellmatrix = cell(length(rowindexcorners),1);
            for i =1:length(perim)
                Sectioncellmatrix(i) = {[perim(i,1) perim(i,2)]};
            end
            for i = 1:length(perim)
                if Sectioncellmatrix(i) ~= Cornerscellarray
                    % Gets rid of the elements that are not corners, but keeps
                    % them associated with the relevant section
                    Sectioncellmatrix(i) = [];
                end
            end
        end

    This creates an error in the last for loop. Is there a way I can check whether each cell of the array (containing an x and y coordinate) is equal to any pair of coordinates in Cornerscellarray? I know it is possible with matrices to compare whether a certain element matches any of the elements in another matrix. I want to be able to do the same here, but for the pair of coordinates within the cell array. The reason I don't just use the Cornerscellarray cell array itself is because it lists all the corner coordinates and does not associate them with a specific traced boundary.
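
    For row-wise coordinate matching you can skip the cell arrays entirely: ismember with the 'rows' option compares whole rows of one matrix against another. A sketch using the variables above:

        % Logical index: true where a perimeter point is also a corner point.
        corners  = [rowindexcorners colindexcorners];  % one (row,col) pair per row
        iscorner = ismember(perim, corners, 'rows');
        perimcorners = perim(iscorner,:);              % this boundary's own corners

    Because perim is taken from B{k}, the matched corners stay associated with boundary k.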


  • array of structures, or structure of arrays?

    - by Jason S
    Hmmm. I have a table which is an array of structures that I need to store in Java. The naive don't-worry-about-memory approach says do this:

        public class Record {
            final private int field1;
            final private int field2;
            final private long field3;
            /* constructor & accessors here */
        }

        List<Record> records = new ArrayList<Record>();

    If I end up using a large number (10^6) of records, where individual records are accessed occasionally, one at a time, how would I figure out how the preceding approach (an ArrayList) compares for storage costs with an optimized approach:

        public class OptimizedRecordStore {
            final private int[] field1;
            final private int[] field2;
            final private long[] field3;

            Record getRecord(int i) {
                return new Record(field1[i], field2[i], field3[i]);
            }
            /* constructor and other accessors & methods */
        }

    edit: assume the # of records is something that is changed infrequently or never. I'm probably not going to use the OptimizedRecordStore approach, but I want to understand the storage cost issue so I can make that decision with confidence. Obviously if I add/change the # of records in the OptimizedRecordStore approach above, I either have to replace the whole object with a new one, or remove the "final" keyword. kd304 brings up a good point that was in the back of my mind. In other situations similar to this, I need column access on the records, e.g. if field1 and field2 are "time" and "position", and it's important for me to get those values as an array for use with MATLAB, so I can graph/analyze them efficiently.
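
    A rough estimate (assuming a typical 64-bit HotSpot JVM; exact figures vary by VM and settings): each Record carries the 16 bytes of fields plus an object header and alignment padding (commonly 12-16 bytes), and the ArrayList adds a 4-8 byte reference per element, so the first approach costs on the order of 36-48 bytes per record. The three parallel arrays store the same 16 bytes per record with only three fixed array headers in total, roughly halving the footprint at 10^6 records, and giving the column access mentioned at the end for free.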


  • Tail recursion and memoization with C#

    - by Jay
    I'm writing a function that finds the full path of a directory based on a database table of entries. Each record contains a key, the directory's name, and the key of the parent directory (it's the Directory table in an MSI, if you're familiar). I had an iterative solution, but it started looking a little nasty. I thought I could write an elegant tail-recursive solution, but I'm not sure anymore. I'll show you my code and then explain the issues I'm facing.

        Dictionary<string, string> m_directoryKeyToFullPathDictionary = new Dictionary<string, string>();
        ...
        private string ExpandDirectoryKey(Database database, string directoryKey)
        {
            // check for terminating condition
            string fullPath;
            if (m_directoryKeyToFullPathDictionary.TryGetValue(directoryKey, out fullPath))
            {
                return fullPath;
            }
            // inductive step
            Record record = ExecuteQuery(database, "SELECT DefaultDir, Directory_Parent FROM Directory where Directory.Directory='{0}'", directoryKey);
            // null check
            string directoryName = record.GetString("DefaultDir");
            string parentDirectoryKey = record.GetString("Directory_Parent");
            return Path.Combine(ExpandDirectoryKey(database, parentDirectoryKey), directoryName);
        }

    This is how the code looked when I realized I had a problem (with some minor validation/massaging removed). I want to use memoization to short-circuit whenever possible, but that requires me to make a call to the dictionary to store the output of the recursive ExpandDirectoryKey call. I realize that I also have a Path.Combine call there, but I think that can be circumvented with a ... + Path.DirectorySeparatorChar + .... I thought about using a helper method that would memoize the directory and return the value so that I could call it like this at the end of the function above:

        return MemoizeHelper(
            m_directoryKeyToFullPathDictionary,
            Path.Combine(ExpandDirectoryKey(database, parentDirectoryKey)),
            directoryName);

    But I feel like that's cheating and not going to be optimized as tail recursion. Any ideas? Should I be using a completely different strategy? This doesn't need to be a super efficient algorithm at all, I'm just really curious. I'm using .NET 4.0, btw. Thanks!
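
    Worth noting before optimizing: the C# compiler does not guarantee tail-call elimination anyway, so there is little to lose by letting the call be non-tail and simply caching before returning. A sketch of the end of the method under that assumption:

        // Recurse first, then record the combined path for this key.
        // Every key along the chain gets cached, so later lookups of any
        // ancestor short-circuit at the TryGetValue check at the top.
        fullPath = Path.Combine(ExpandDirectoryKey(database, parentDirectoryKey), directoryName);
        m_directoryKeyToFullPathDictionary[directoryKey] = fullPath;
        return fullPath;

    Since directory trees in an MSI are shallow, the recursion depth is tiny and tail recursion buys nothing measurable here.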


  • Stuck at being unable to print a substring of more than 4679 characters

    - by Newcoder
    I have a program that does string manipulation on very large strings (around 100K characters). The first step in my program is to clean up the input string so that it only contains certain characters. Here is my method for this cleanup:

        public static String analyzeString(String input) {
            String output = null;
            output = input.replaceAll("[-+.^:,]", "");
            output = output.replaceAll("(\\r|\\n)", "");
            output = output.toUpperCase();
            output = output.replaceAll("[^XYZ]", "");
            return output;
        }

    When I print my input string of length 97498, it prints successfully. My output string after cleanup is of length 94788. I can print its size using output.length(), but when I try to print the string itself in Eclipse, the output is empty and I can only see the Eclipse console header. Since this is not my final program, I ignored this and proceeded to the next method, which does pattern matching on the cleaned-up string. Here is the code for the pattern matching:

        public static List<Integer> getIntervals(String input, String regex) {
            List<Integer> output = new ArrayList<Integer>();
            // Do pattern matching
            Pattern p1 = Pattern.compile(regex);
            Matcher m1 = p1.matcher(input);
            // If match found
            while (m1.find()) {
                output.add(m1.start());
                output.add(m1.end());
            }
            return output;
        }

    Based on this program, I identify the start and end of my pattern match as 12351 and 87314. I tried to print this match as output.substring(12351, 87314) and only get blank output. Numerous hit-and-trial runs led to the conclusion that the biggest substring I can print is of length 4679; if I try 4680, I again get blank output. My confusion is that if I was able to print the original string (length 97498), why could I not print the cleaned-up string (length 94788) or the substring (length 4680)? Is it due to the regular expression implementation, which may be causing some memory issue that my system cannot handle? I have 4 GB of installed memory.
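
    One quick way to separate "the string is wrong" from "the Eclipse console won't display it" (very long single lines are a known sore point for console views) is to write the result to a file and inspect it there. A minimal sketch:

        import java.io.PrintWriter;

        public class DumpMatch {
            public static void dump(String s) throws Exception {
                // If match.txt contains the expected text, the string itself is
                // fine and only the console display is at fault.
                try (PrintWriter out = new PrintWriter("match.txt", "UTF-8")) {
                    out.println(s);
                }
            }
        }

    Note that the cleaned string has no \r or \n left, so it is one 94788-character line; raising the console buffer width or inserting line breaks when printing may also make it visible.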


  • Help translating Reflector deconstruction into compilable code

    - by code poet
    So I am Reflector-ing some framework 2.0 code and end up with the following deconstruction:

        fixed (void* voidRef3 = ((void*) &_someMember))
        {
            ...
        }

    This won't compile, due to 'The right hand side of a fixed statement assignment may not be a cast expression'. I understand that Reflector can only approximate, and generally I can see a clear path, but this is a bit outside my experience. Question: what is Reflector trying to describe to me?

    Update: I am also seeing the following:

        fixed (IntPtr* ptrRef3 = ((IntPtr*) &this._someMember))

    Update: So, as Mitch says, it is not a bitwise operator, but an address-of operator. The question is now that

        fixed (IntPtr* ptrRef3 = &_someMember)

    fails with 'Cannot implicitly convert type 'xxx*' to 'System.IntPtr*'. An explicit conversion exists (are you missing a cast?)'. So I seem to be damned if I do and damned if I don't. Any ideas?
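
    Reflector is rendering the IL pinning pattern literally, and that pattern is not legal C#. The usual hand translation is to pin with the member's declared type and cast inside the block; a sketch, assuming _someMember is declared as some struct type Xxx:

        unsafe
        {
            fixed (Xxx* p = &_someMember)      // pin using the member's real type
            {
                IntPtr* ptrRef3 = (IntPtr*)p;  // then cast the already-pinned pointer
                // ... body that used ptrRef3 ...
            }
        }

    The compiler only forbids the cast on the right-hand side of the fixed statement itself; once the variable is pinned, casting the pointer is fine.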


  • jsf, richfaces, popup window

    - by Hubidubi
    Hi. I would like to make a list-detail view with RichFaces. There will be a link for every record in the list that should open a new window containing the record's details. I tried to implement the link this way:

        <a4j:commandLink oncomplete="window.open('/pages/serviceDetail.jsf','popupWindow',
                'dependent=yes, menubar=no, toolbar=no, height=500, width=400')"
            actionListener="#{monitoringBean.recordDetail}" value="details" />

    I use

        <a4j:keepAlive beanName="monitoringBean" ajaxOnly="false" />

    for both the list and the detail page. The recordDetail method copies the data of the selected record into a bean variable that I would like to display on the detail page. The problem is that keepAlive doesn't work, so I get a new bean instance on the detail page every time, and the previously selected record from the other bean is not accessible there. Is there a way to pass a parameter (id) to the detail page to handle record selection? Or is there any way to make keepAlive work? (I think this would be the easiest.) Thanks
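
    A keepAlive-free fallback is to put the record id in the popup URL and let the detail page read it back as a plain request parameter; a sketch (the recordId name and the #{record.id} expression are illustrative):

        <a4j:commandLink value="details"
            oncomplete="window.open('/pages/serviceDetail.jsf?recordId=#{record.id}',
                    'popupWindow', 'dependent=yes, menubar=no, toolbar=no, height=500, width=400')" />

    On serviceDetail.jsf the bean can read #{param.recordId} (or a managed property bound to it) and load the record itself, so the detail page no longer depends on state surviving from the list page.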


  • jQuery: how to store record ids for data from an ajax query in dynamically created html elements

    - by grapkulec
    Probably the question title is rather cryptic, but I will try to explain myself here, so please bear with me. :) Let's assume this configuration:

        - the server side is a PHP application responding to requests with data (list of items, single item details, etc.) in JSON format
        - the client side is a jQuery application sending ajax requests to that PHP app and creating HTML content corresponding to the received data

    So, for example: the client requests "list of all animals with names starting with 'A'", gets the JSON response from the server, and for every "animal" creates some HTML gizmo, like a div with the animal's description or something like that. It doesn't really matter what HTML element it will be, but it has to point exactly to a specific record by "containing" the id of that record. And here is my dilemma: is it a good solution to use the id attribute for that? So it would be like:

        <div id="10" class="animal">
          <p>This is an animal of a very mysterious kind...</p>
        </div>
        <div id="11" class="animal">
          <p>And this one is very common in our country...</p>
        </div>

    where id="10" of course indicates that this is the representation of the record with id = 10. Or maybe I should store this record id in some custom-made tag like

        <record_id>10</record_id>

    and leave id strictly for what it was meant to be (a CSS selector)? I need the record id for further stuff, like updating the database with user input, deleting some of the "animals", or creating new ones - anything that will be needed. All manipulations will be done with jQuery and ajax requests, and responses will be visualized through dynamic creation of an HTML interface. I'm sure somebody has had to deal with this kind of stuff before, so I would be grateful for some tips on that topic.
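
    The usual fit for this is HTML5 data-* attributes: a bare numeric id like id="10" is technically invalid in HTML4 (ids may not start with a digit) and collides easily, while an invented element like <record_id> won't validate at all. A sketch:

        <div class="animal" data-record-id="10">
          <p>This is an animal of a very mysterious kind...</p>
        </div>

    and on the jQuery side, for example in a click handler:

        $(".animal").click(function () {
          var recordId = $(this).attr("data-record-id"); // or $(this).data("recordId") in jQuery 1.4.3+
          // ...include recordId in the ajax update/delete request
        });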


  • How can I center something if I don't know ahead of time what the width is?

    - by zeckdude
    I am trying to center a paragraph tag with some text in it within a div, but I can't seem to center it using margin: 0 auto without having to specify a fixed width for the paragraph. I don't want to specify a fixed width, because dynamic text will be coming into the paragraph tag and it will always be a different width depending on how much text there is. Does anyone know how I can center the paragraph tag within the div without specifying a fixed width for the paragraph and without using tables?
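
    margin: 0 auto needs a width because a block-level <p> otherwise fills its parent. Making the paragraph shrink-wrap instead sidesteps that; a sketch (class names illustrative):

        /* The parent centers inline content; inline-block makes the <p>
           only as wide as its text, so no fixed width is required. */
        div.container    { text-align: center; }
        div.container p  { display: inline-block; text-align: left; }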


  • System.Threading.Timer Doesn't Trigger my TimerCallBack Delegate

    - by Tom Kong
    Hi, I am writing my first Windows service using C# and I am having some trouble with my Timer class. When the service is started, it runs as expected, but the code will not execute again (I want it to run every minute). Please take a quick look at the attached source and let me know if you see any obvious mistakes! TIA

        using System;
        using System.Collections.Generic;
        using System.ComponentModel;
        using System.Data;
        using System.Diagnostics;
        using System.Linq;
        using System.ServiceProcess;
        using System.Text;
        using System.Threading;
        using System.IO;

        namespace CXO001
        {
            public partial class Service1 : ServiceBase
            {
                public Service1()
                {
                    InitializeComponent();
                }

                /*
                 * Aim: To calculate and update the Occupancy values for the different Sites
                 *
                 * Method: Retrieve data every minute, updating a public value which can be polled
                 */
                protected override void OnStart(string[] args)
                {
                    Daemon();
                }

                public void Daemon()
                {
                    TimerCallback tcb = new TimerCallback(On_Tick);
                    TimeSpan duetime = new TimeSpan(0, 0, 1);
                    TimeSpan interval = new TimeSpan(0, 1, 0);
                    Timer querytimer = new Timer(tcb, null, duetime, interval);
                }

                protected override void OnStop()
                {
                }

                static int[] floorplanids = new int[] { 115, 114, 107, 108 };
                public static List<Record> Records = new List<Record>();
                static bool firstrun = true;

                public static void On_Tick(object timercallback)
                {
                    // Update occupancy data for the last minute
                    // Save a copy of the public values to HDD with a timestamp
                    string starttime;
                    if (Records.Count > 0)
                    {
                        starttime = Records.Last().TS;
                        firstrun = false;
                    }
                    else
                    {
                        starttime = DateTime.Today.AddHours(7).ToString();
                        firstrun = true;
                    }
                    DateTime endtime = DateTime.Now;
                    GetData(starttime, endtime);
                }

                public static void GetData(string starttime, DateTime endtime)
                {
                    string connstr = "Data Source = 192.168.1.123; Initial Catalog = Brickstream_OPS; User Id = Brickstream; Password = bstas;";
                    DataSet resultds = new DataSet();
                    // Get the occupancy for each Zone
                    foreach (int zone in floorplanids)
                    {
                        SQL s = new SQL();
                        string querystr = "SELECT SUM(DIRECTIONAL_METRIC.NUM_TO_ENTER - DIRECTIONAL_METRIC.NUM_TO_EXIT) AS 'Occupancy' FROM REPORT_OBJECT INNER JOIN REPORT_OBJ_METRIC ON REPORT_OBJECT.REPORT_OBJ_ID = REPORT_OBJ_METRIC.REPORT_OBJECT_ID INNER JOIN DIRECTIONAL_METRIC ON REPORT_OBJ_METRIC.REP_OBJ_METRIC_ID = DIRECTIONAL_METRIC.REP_OBJ_METRIC_ID WHERE (REPORT_OBJ_METRIC.M_START_TIME BETWEEN '" + starttime + "' AND '" + endtime.ToString() + "') AND (REPORT_OBJECT.FLOORPLAN_ID = '" + zone + "');";
                        resultds = s.Go(querystr, connstr, zone.ToString(), resultds);
                    }

                    List<Record> result = new List<Record>();
                    int c = 0;
                    foreach (DataTable dt in resultds.Tables)
                    {
                        Record r = new Record();
                        r.TS = DateTime.Now.ToString();
                        r.Zone = dt.TableName;
                        if (!firstrun)
                        {
                            r.Occupancy = (dt.Rows[0].Field<int>("Occupancy")) + (Records[c].Occupancy);
                        }
                        else
                        {
                            r.Occupancy = dt.Rows[0].Field<int>("Occupancy");
                        }
                        result.Add(r);
                        c++;
                    }
                    Records = result;
                    MrWriter();
                }

                public static void MrWriter()
                {
                    StringBuilder output = new StringBuilder("Time,Zone,Occupancy\n");
                    foreach (Record r in Records)
                    {
                        output.Append(r.TS);
                        output.Append(",");
                        output.Append(r.Zone);
                        output.Append(",");
                        output.Append(r.Occupancy.ToString());
                        output.Append("\n");
                    }
                    output.Append(firstrun.ToString());
                    output.Append(DateTime.Now.ToFileTime());
                    string filePath = @"C:\temp\CXO.csv";
                    File.WriteAllText(filePath, output.ToString());
                }
            }
        }
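
    The most likely culprit is garbage collection of the timer: querytimer is a local variable, so once Daemon() returns, nothing references the Timer object and the GC is free to collect it, after which no further callbacks fire (often right after the first tick). Keeping the timer in a field tied to the service's lifetime is the usual fix; a sketch:

        private static Timer querytimer;   // field reference keeps the Timer alive

        public void Daemon()
        {
            TimerCallback tcb = new TimerCallback(On_Tick);
            TimeSpan duetime = new TimeSpan(0, 0, 1);
            TimeSpan interval = new TimeSpan(0, 1, 0);
            querytimer = new Timer(tcb, null, duetime, interval);
        }

        protected override void OnStop()
        {
            if (querytimer != null)
            {
                querytimer.Dispose();  // stop callbacks when the service stops
            }
        }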


  • MySQL for update

    - by shantanuo
    MySQL supports the "for update" keyword. Here is how I tested that it works as expected. I opened 2 browser tabs and executed the following commands in one window:

        mysql> start transaction;
        Query OK, 0 rows affected (0.00 sec)

        mysql> select * from myxml where id = 2 for update;
        ....

        mysql> update myxml set id = 3 where id = 2 limit 1;
        Query OK, 1 row affected, 1 warning (0.00 sec)
        Rows matched: 1  Changed: 1  Warnings: 0

        mysql> commit;
        Query OK, 0 rows affected (0.08 sec)

    In another window, I started a transaction and tried to take an update lock on the same record:

        mysql> start transaction;
        Query OK, 0 rows affected (0.00 sec)

        mysql> select * from myxml where id = 2 for update;
        Empty set (43.81 sec)

    As you can see from the above example, I could not select the record for 43 seconds while the transaction was being processed by the other session in window 1. Once that transaction was over I got to select the record, but since id 2 had been changed to id 3 by the transaction that was executed first, no record was returned. My question is: what are the disadvantages of using the "for update" syntax? And if I do not commit the transaction that is running in window 1, will the record be locked forever?
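
    On the last point: with InnoDB the waiter does not block forever. If the first transaction never commits, the second session's SELECT ... FOR UPDATE gives up after innodb_lock_wait_timeout seconds (50 by default) with a "Lock wait timeout exceeded" error; the lock itself is held until the first session commits, rolls back, or its connection is closed.

        -- Check the current wait limit:
        SHOW VARIABLES LIKE 'innodb_lock_wait_timeout';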


  • Optimizing python code performance when importing zipped csv to a mongo collection

    - by mark
    I need to import a zipped csv into a mongo collection, but there is a catch - every record contains a timestamp in Pacific Time, which must be converted to the local time corresponding to the (longitude, latitude) pair found in the same record. The code looks like so:

        def read_csv_zip(path, timezones):
            with ZipFile(path) as z, z.open(z.namelist()[0]) as input:
                csv_rows = csv.reader(input)
                header = csv_rows.next()
                check, converters = get_aux_stuff(header)
                for csv_row in csv_rows:
                    if check(csv_row):
                        row = {
                            converter[0]: converter[1](value)
                            for converter, value in zip(converters, csv_row)
                            if allow_field(converter)
                        }
                        ts = row['ts']
                        lng, lat = row['loc']
                        found_tz_entry = timezones.find_one(SON({'loc': {'$within': {'$box': [
                            [lng - tz_lookup_radius, lat - tz_lookup_radius],
                            [lng + tz_lookup_radius, lat + tz_lookup_radius]]}}}))
                        if found_tz_entry:
                            tz_name = found_tz_entry['tz']
                            local_ts = ts.astimezone(timezone(tz_name)).replace(tzinfo=None)
                            row['tz'] = tz_name
                        else:
                            local_ts = (ts.astimezone(utc) + timedelta(hours=int(lng / 15))).replace(tzinfo=None)
                        row['local_ts'] = local_ts
                        yield row

        def insert_documents(collection, source, batch_size):
            while True:
                items = list(itertools.islice(source, batch_size))
                if len(items) == 0:
                    break
                try:
                    collection.insert(items)
                except:
                    for item in items:
                        try:
                            collection.insert(item)
                        except Exception as exc:
                            print("Failed to insert record {0} - {1}".format(item['_id'], exc))

        def main(zip_path):
            with Connection() as connection:
                data = connection.mydb.data
                timezones = connection.timezones.data
                insert_documents(data, read_csv_zip(zip_path, timezones), 1000)

    The code proceeds as follows:

        1. Every record read from the csv is checked and converted to a dictionary, where some fields may be skipped, some titles renamed (from those appearing in the csv header), and some values converted (to datetime, to integers, to floats, etc.).
        2. For each record read from the csv, a lookup is made into the timezones collection to map the record location to the respective time zone.
        3. If the mapping is successful, that timezone is used to convert the record timestamp (Pacific Time) to the respective local timestamp. If no mapping is found, a rough approximation is calculated.

    The timezones collection is appropriately indexed, of course - calling explain() confirms it. The process is slow. Naturally, having to query the timezones collection for every record kills the performance. I am looking for advice on how to improve it. Thanks.

    EDIT

    The timezones collection contains 8176040 records, each containing four values:

        > db.data.findOne()
        { "_id" : 3038814, "loc" : [ 1.48333, 42.5 ], "tz" : "Europe/Andorra" }

    EDIT2

    OK, I have compiled a release build of http://toblerity.github.com/rtree/ and configured the rtree package. Then I created an rtree dat/idx pair of files corresponding to my timezones collection. So, instead of calling collection.find_one I call index.intersection. Surprisingly, not only is there no improvement, it works even more slowly now! Maybe rtree could be fine-tuned to load the entire dat/idx pair into RAM (704M), but I do not know how to do it. Until then, it is not an alternative. In general, I think the solution should involve parallelization of the task.

    EDIT3

    Profile output when using collection.find_one:

        >>> p.sort_stats('cumulative').print_stats(10)
        Tue Apr 10 14:28:39 2012    ImportDataIntoMongo.profile

                 64549590 function calls (64549180 primitive calls) in 1231.257 seconds

           Ordered by: cumulative time
           List reduced from 730 to 10 due to restriction <10>

           ncalls  tottime  percall  cumtime  percall filename:lineno(function)
                1    0.012    0.012 1231.257 1231.257 ImportDataIntoMongo.py:1(<module>)
                1    0.001    0.001 1230.959 1230.959 ImportDataIntoMongo.py:187(main)
                1  853.558  853.558  853.558  853.558 {raw_input}
                1    0.598    0.598  370.510  370.510 ImportDataIntoMongo.py:165(insert_documents)
           343407    9.965    0.000  359.034    0.001 ImportDataIntoMongo.py:137(read_csv_zip)
           343408    2.927    0.000  287.035    0.001 c:\python27\lib\site-packages\pymongo\collection.py:489(find_one)
           343408    1.842    0.000  274.803    0.001 c:\python27\lib\site-packages\pymongo\cursor.py:699(next)
           343408    2.542    0.000  271.212    0.001 c:\python27\lib\site-packages\pymongo\cursor.py:644(_refresh)
           343408    4.512    0.000  253.673    0.001 c:\python27\lib\site-packages\pymongo\cursor.py:605(__send_message)
           343408    0.971    0.000  242.078    0.001 c:\python27\lib\site-packages\pymongo\connection.py:871(_send_message_with_response)

    Profile output when using index.intersection:

        >>> p.sort_stats('cumulative').print_stats(10)
        Wed Apr 11 16:21:31 2012    ImportDataIntoMongo.profile

                 41542960 function calls (41542536 primitive calls) in 2889.164 seconds

           Ordered by: cumulative time
           List reduced from 778 to 10 due to restriction <10>

           ncalls  tottime  percall  cumtime  percall filename:lineno(function)
                1    0.028    0.028 2889.164 2889.164 ImportDataIntoMongo.py:1(<module>)
                1    0.017    0.017 2888.679 2888.679 ImportDataIntoMongo.py:202(main)
                1 2365.526 2365.526 2365.526 2365.526 {raw_input}
                1    0.766    0.766  502.817  502.817 ImportDataIntoMongo.py:180(insert_documents)
           343407    9.147    0.000  491.433    0.001 ImportDataIntoMongo.py:152(read_csv_zip)
           343406    0.571    0.000  391.394    0.001 c:\python27\lib\site-packages\rtree-0.7.0-py2.7.egg\rtree\index.py:384(intersection)
           343406  379.957    0.001  390.824    0.001 c:\python27\lib\site-packages\rtree-0.7.0-py2.7.egg\rtree\index.py:435(_intersection_obj)
           686513   22.616    0.000   38.705    0.000 c:\python27\lib\site-packages\rtree-0.7.0-py2.7.egg\rtree\index.py:451(_get_objects)
           343406    6.134    0.000   33.326    0.000 ImportDataIntoMongo.py:162(<dictcomp>)
              346    0.396    0.001   30.665    0.089 c:\python27\lib\site-packages\pymongo\collection.py:240(insert)

    EDIT4

    I have parallelized the code, but the results are still not very encouraging. I am convinced it could be done better. See my own answer to this question for details.
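
    Given that the profile is dominated by the per-record find_one, one low-risk improvement (a sketch; it assumes many records share nearby coordinates, and it reuses the SON/tz_lookup_radius names from above) is to memoize lookups on a rounded coordinate grid so a repeated location never hits MongoDB twice:

        _tz_cache = {}

        def lookup_tz(timezones, lng, lat):
            # Round to a ~0.01 degree grid; nearby points share one cache entry.
            key = (round(lng, 2), round(lat, 2))
            if key not in _tz_cache:
                _tz_cache[key] = timezones.find_one(SON({'loc': {'$within': {'$box': [
                    [lng - tz_lookup_radius, lat - tz_lookup_radius],
                    [lng + tz_lookup_radius, lat + tz_lookup_radius]]}}}))
            return _tz_cache[key]

    Whether this helps depends entirely on how clustered the input locations are; if every record has a distinct location, batching lookups or sharding the file across worker processes (the parallelization mentioned in EDIT4) is the more promising route.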

    Read the article

  • Input/output (read) errors in Bacula while setting up a Tape Drive + Autochanger

    - by Kyle Brandt
    When running the label barcode command in bacula I am getting Input/output errors. I am just getting started with trying to set this up:

        Connecting to Storage daemon TapeDevice at ny-back01.ny.stackoverflow.com:9103 ...
        Sending label command for Volume "ACJ332" Slot 1 ...
        3307 Issuing autochanger "unload slot 8, drive 0" command.
        3304 Issuing autochanger "load slot 1, drive 0" command.
        3305 Autochanger "load slot 1, drive 0", status is OK.
        block.c:1010 Read error on fd=5 at file:blk 0:0 on device "ULTRIUM-HH4" (/dev/st0). ERR=Input/output error.
        3000 OK label. VolBytes=64512 DVD=0 Volume="ACJ332" Device="ULTRIUM-HH4" (/dev/st0)
        Catalog record for Volume "ACJ332", Slot 1 successfully created.
        Sending label command for Volume "ACJ331" Slot 2 ...
        3307 Issuing autochanger "unload slot 1, drive 0" command.
        3304 Issuing autochanger "load slot 2, drive 0" command.
        3305 Autochanger "load slot 2, drive 0", status is OK.
        block.c:1010 Read error on fd=5 at file:blk 0:0 on device "ULTRIUM-HH4" (/dev/st0). ERR=Input/output error.
        3000 OK label. VolBytes=64512 DVD=0 Volume="ACJ331" Device="ULTRIUM-HH4" (/dev/st0)
        Catalog record for Volume "ACJ331", Slot 2 successfully created.
        Sending label command for Volume "ACJ328" Slot 3 ...
        3307 Issuing autochanger "unload slot 2, drive 0" command.
        3304 Issuing autochanger "load slot 3, drive 0" command.
        3305 Autochanger "load slot 3, drive 0", status is OK.
        block.c:1010 Read error on fd=5 at file:blk 0:0 on device "ULTRIUM-HH4" (/dev/st0). ERR=Input/output error.
        3000 OK label. VolBytes=64512 DVD=0 Volume="ACJ328" Device="ULTRIUM-HH4" (/dev/st0)
        Catalog record for Volume "ACJ328", Slot 3 successfully created.
        Sending label command for Volume "ACJ329" Slot 4 ...
        3307 Issuing autochanger "unload slot 3, drive 0" command.
        3304 Issuing autochanger "load slot 4, drive 0" command.
        3305 Autochanger "load slot 4, drive 0", status is OK.
        block.c:1010 Read error on fd=5 at file:blk 0:0 on device "ULTRIUM-HH4" (/dev/st0). ERR=Input/output error.
        3000 OK label. VolBytes=64512 DVD=0 Volume="ACJ329" Device="ULTRIUM-HH4" (/dev/st0)
        Catalog record for Volume "ACJ329", Slot 4 successfully created.
        Sending label command for Volume "ACJ335" Slot 5 ...
        3307 Issuing autochanger "unload slot 4, drive 0" command.
        3304 Issuing autochanger "load slot 5, drive 0" command.
        3305 Autochanger "load slot 5, drive 0", status is OK.
        block.c:1010 Read error on fd=5 at file:blk 0:0 on device "ULTRIUM-HH4" (/dev/st0). ERR=Input/output error.
        3000 OK label. VolBytes=64512 DVD=0 Volume="ACJ335" Device="ULTRIUM-HH4" (/dev/st0)
        Catalog record for Volume "ACJ335", Slot 5 successfully created.
        Sending label command for Volume "ACJ334" Slot 6 ...
        3307 Issuing autochanger "unload slot 5, drive 0" command.
        3304 Issuing autochanger "load slot 6, drive 0" command.
        3305 Autochanger "load slot 6, drive 0", status is OK.
        block.c:1010 Read error on fd=5 at file:blk 0:0 on device "ULTRIUM-HH4" (/dev/st0). ERR=Input/output error.
        3000 OK label. VolBytes=64512 DVD=0 Volume="ACJ334" Device="ULTRIUM-HH4" (/dev/st0)
        Catalog record for Volume "ACJ334", Slot 6 successfully created.
        Sending label command for Volume "ACJ333" Slot 7 ...
        3307 Issuing autochanger "unload slot 6, drive 0" command.
        3304 Issuing autochanger "load slot 7, drive 0" command.
        3305 Autochanger "load slot 7, drive 0", status is OK.
        block.c:1010 Read error on fd=5 at file:blk 0:0 on device "ULTRIUM-HH4" (/dev/st0). ERR=Input/output error.
        3000 OK label. VolBytes=64512 DVD=0 Volume="ACJ333" Device="ULTRIUM-HH4" (/dev/st0)
        Catalog record for Volume "ACJ333", Slot 7 successfully created.
        Sending label command for Volume "ACJ330" Slot 8 ...
        3307 Issuing autochanger "unload slot 7, drive 0" command.

    Bacula-dir:

        # Definition of file storage device
        Storage {
          Name = TapeDevice
          # Do not use "localhost" here
          Address = ny-back01....      # N.B. Use a fully qualified name here
          SDPort = 9103
          Password = "..."
          Device = ULTRIUM-HH4
          Media Type = LTO-4
          Media Type = File
          Autochanger = Yes
        }

    Bacula-sd:

        Autochanger {
          Name = StorageLoader1U
          Device = ULTRIUM-HH4
          Changer Command = "/etc/bacula/scripts/mtx-changer %c %o %S %a %d"
          Changer Device = /dev/sg5
        }

        Device {
          Name = ULTRIUM-HH4
          Media Type = LTO-4
          Archive Device = /dev/st0
          AutomaticMount = yes;
          AlwaysOpen = yes;
          RemovableMedia = yes;
          RandomAccess = no;
          AutoChanger = yes;
          RandomAccess = no;
        }

    Anyone know what this means / why I am getting this?
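
    One observation grounded in the log itself: every "Read error ... at file:blk 0:0" is immediately followed by "3000 OK label." and a successful catalog record, which is consistent with Bacula attempting to read a blank, never-labeled tape before writing the label - blank media commonly returns an I/O error on that first read. If all volumes label and mount correctly afterwards, these messages may be harmless. Separately, it may be worth checking the Storage resource above: it declares two Media Type lines (LTO-4 and File), and the director and storage daemon must agree on a single media type for the device.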


  • Regarding the Google App Engine datastore - primary key

    - by megala
    Hi, I created a table in the Google App Engine (Bigtable) datastore. I set the primary key using annotations as follows:

        @Id
        @Column(name = "groupname")
        private String groupname;

        @Basic
        private String groupdesc;

    This works correctly, but saving a record whose groupname already exists overrides the previous record. How do I solve this? For example, if I enter groupname=group1, groupdesc=groupdesc, it is accepted; if I then enter the same groupname with groupdesc=groups, the new record overrides the previous one.
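
    In the datastore the @Id field is the entity's key, and writing an entity with an existing key is an update by definition, not an error. If duplicate names must become separate records, let the key be generated and demote groupname to an ordinary field; a sketch:

        @Id
        @GeneratedValue(strategy = GenerationType.IDENTITY)
        private Long id;          // the datastore assigns a unique key

        private String groupname; // no longer the key; duplicates are now possible
        private String groupdesc;

    If instead duplicates should be rejected, check for an existing entity with that key inside a transaction before persisting.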


  • .NET 4.0 Generic Invariant, Covariant, Contravariant

    - by Sameer Shariff
    Here's the scenario I am faced with:

        public abstract class Record { }
        public abstract class TableRecord : Record { }
        public abstract class LookupTableRecord : TableRecord { }
        public sealed class UserRecord : LookupTableRecord { }

        public interface IDataAccessLayer<TRecord>
            where TRecord : Record { }

        public interface ITableDataAccessLayer<TTableRecord> : IDataAccessLayer<TTableRecord>
            where TTableRecord : TableRecord { }

        public interface ILookupTableDataAccessLayer<TLookupTableRecord> : ITableDataAccessLayer<TLookupTableRecord>
            where TLookupTableRecord : LookupTableRecord { }

        public abstract class DataAccessLayer<TRecord> : IDataAccessLayer<TRecord>
            where TRecord : Record, new() { }

        public abstract class TableDataAccessLayer<TTableRecord> : DataAccessLayer<TTableRecord>, ITableDataAccessLayer<TTableRecord>
            where TTableRecord : TableRecord, new() { }

        public abstract class LookupTableDataAccessLayer<TLookupTableRecord> : TableDataAccessLayer<TLookupTableRecord>, ILookupTableDataAccessLayer<TLookupTableRecord>
            where TLookupTableRecord : LookupTableRecord, new() { }

        public sealed class UserDataAccessLayer : LookupTableDataAccessLayer<UserRecord> { }

    Now when I try to cast UserDataAccessLayer to its generic base interface ITableDataAccessLayer<TableRecord>, the compiler complains that it cannot implicitly convert the type.
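
    That behavior is by design: generic interfaces are invariant unless declared otherwise, so ITableDataAccessLayer<UserRecord> has no conversion to ITableDataAccessLayer<TableRecord> even though UserRecord derives from TableRecord. C# 4.0 allows covariance only when the type parameter appears purely in output positions, declared with the out modifier; a sketch:

        // Legal only if TRecord is used solely as a return type in the interface.
        public interface IDataAccessLayer<out TRecord> where TRecord : Record
        {
            TRecord GetById(int id);      // output position: fine
            // void Save(TRecord record); // input position: would make 'out' illegal
        }

    If the interfaces also take TRecord as a method parameter (as data access layers usually do), covariance cannot apply, and a non-generic or less-derived base interface is the usual workaround.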


  • Is there a way to dynamically define the height and width that are going to appear in a page?

    - by Starx
    Many times I have encountered problems managing images with abnormally large height or width. If I fix both their height and width, they appear stretched. If I fix only the width, a very tall image will still mess up the overall website; and if I fix only the height, a very wide image will do the same. What is the best way to handle this?
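
    A common CSS approach (a sketch; the class name is illustrative) is to cap the image's size while letting the browser preserve the aspect ratio, rather than forcing both dimensions:

        /* Scale oversized images down to fit, never up, keeping proportions. */
        img.user-content {
            max-width: 100%;    /* never wider than the containing box */
            max-height: 400px;  /* illustrative cap for very tall images */
            width: auto;
            height: auto;
        }

    Images smaller than the caps render at their natural size; oversized ones shrink proportionally instead of stretching or overflowing.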

