Search Results

Search found 4955 results on 199 pages for 'range'.

Page 164/199 | < Previous Page | 160 161 162 163 164 165 166 167 168 169 170 171  | Next Page >

  • Error while uploading file method in Client Object Model Sharepoint 2010

    - by user1481570
    Error while uploading a file using the Client Object Model in SharePoint 2010. The code compiles with no errors, but once the file has been uploaded I get the following error at execution time: "Value does not fall within the expected range." (System.Collections.Generic.SynchronizedReadOnlyCollection). I have a method which takes care of uploading files:

        public void Upload_Click(string documentPath, byte[] documentStream)
        {
            String sharePointSite = "http://cvgwinbasd003:28838/sites/test04";
            String documentLibraryUrl = sharePointSite + "/" + documentPath.Replace('\\', '/');

            // Get document list
            List documentsList = clientContext.Web.Lists.GetByTitle("Doc1");
            var fileCreationInformation = new FileCreationInformation();
            // Assign the content, i.e. documentStream
            fileCreationInformation.Content = documentStream;
            // Allow overwrite of the document
            fileCreationInformation.Overwrite = true;
            // Upload URL
            fileCreationInformation.Url = documentLibraryUrl;

            Microsoft.SharePoint.Client.File uploadFile =
                documentsList.RootFolder.Files.Add(fileCreationInformation);
            //uploadFile.ListItemAllFields.Update();
            clientContext.ExecuteQuery();
        }

    In the MVC 3.0 application I have defined the following controller method to invoke the upload method:

        public ActionResult ProcessSubmit(IEnumerable<HttpPostedFileBase> attachments)
        {
            System.IO.Stream uploadFileStream = null;
            byte[] uploadFileBytes;
            int fileLength = 0;

            foreach (HttpPostedFileBase fileUpload in attachments)
            {
                uploadFileStream = fileUpload.InputStream;
                fileLength = fileUpload.ContentLength;
            }

            uploadFileBytes = new byte[fileLength];
            uploadFileStream.Read(uploadFileBytes, 0, fileLength);

            using (DocManagementService.DocMgmtClient doc = new DocMgmtClient())
            {
                doc.Upload_Click("Doc1/Doc2/Doc2.1/", uploadFileBytes);
            }
            return RedirectToAction("SyncUploadResult");
        }

    Please help me locate the error.

    Read the article

  • Catching 'Last Record' in Coldfusion for IE javascript bug

    - by Simon Hume
    I'm using ColdFusion to pull UK postcodes into an array for display on a Google Map. This happens dynamically from a SQL database, so the numbers can range from 1 to 100+. The script works great; however, in IE (groan) it decides to display one point way off line, over in California somewhere. I fixed this issue in a previous webapp, where it was due to the comma between each array item still being present at the end. It works fine in Firefox, Safari etc., but not IE. But that one was using a fixed set of 10 records, so it was easy to fix. I just need a little if statement to wrap around my comma to hide it when it hits the last record. I can't seem to get it right. Any tips/suggestions? Here is the line of code in question:

        var address = [<cfloop query="getApplicant"><cfif getApplicant.dbHomePostCode GT ""><cfoutput>'#getApplicant.dbHomePostCode#',</cfoutput></cfif></cfloop>];

    Hopefully someone can help with this rather simple request. I'm just having a bad day at the office!

    Read the article

  • Threading is slow and unpredictable?

    - by Jake
    I've created the basis of a ray tracer; here's my testing function for drawing the scene:

        public void Trace(int start, int jump, Sphere testSphere)
        {
            for (int x = start; x < scene.SceneWidth; x += jump)
            {
                for (int y = 0; y < scene.SceneHeight; y++)
                {
                    Ray fired = Ray.FireThroughPixel(scene, x, y);
                    if (testSphere.Intersects(fired))
                        sceneRenderer.SetPixel(x, y, Color.Red);
                    else
                        sceneRenderer.SetPixel(x, y, Color.Black);
                }
            }
        }

    SetPixel simply sets a value in a single-dimensional array of colours. If I call the function directly it runs at a constant 55 fps. If I do:

        Thread t1 = new Thread(() => Trace(0, 1, testSphere));
        t1.Start();
        t1.Join();

    it runs at a constant 50 fps, which is fine and understandable, but when I do:

        Thread t1 = new Thread(() => Trace(0, 2, testSphere));
        Thread t2 = new Thread(() => Trace(1, 2, testSphere));
        t1.Start();
        t2.Start();
        t1.Join();
        t2.Join();

    it runs all over the place, rapidly moving between 30-40 fps and sometimes going out of that range up to 50 or down to 20; it's not constant at all. Why is it running slower than it would if I ran the whole thing on a single thread? I'm running on a quad-core i5 2500K.

    Read the article

  • Fixing color in scatter plots in matplotlib

    - by ajhall
    I'm going to have to come back and add some examples if you need them, which you might. But here's the skinny: I'm plotting scatter plots of lab data for my research. I need to be able to visually compare the scatter plots from one plot to the next, so I want to fix the color range on the scatter plots and add a colorbar to each plot (which will be the same in each figure). Essentially, I'm fixing all aspects of the axes and colorspace etc. so that the plots are directly comparable by eye.

    For the life of me, I can't seem to get my scatter() command to properly set the color limits in the (default) colorspace. I figure out my total data's min and max, then apply them to vmin and vmax for the subset of data, and the color still does not come out properly in both plots. This must come up here and there; I can't be the only one who wants to compare various subsets of data amongst plots. So, how do you fix the colors so that each datum keeps its color between plots and doesn't get remapped to a different color due to the change in max/min of the subset versus the whole set? I greatly appreciate all your thoughts! A Mountain Dew and fiery-hot Cheetos to all! -Allen

    Read the article

  • Javascript: Error 'Object Required.'

    - by javascripthelp
    The following is the error popup message I get when I click the "Finalize" button on my website:

        Line: 298
        Char: 5
        Error: Object required: 'lobi_c_selected(...)'
        Code: 0
        URL: http://10.128.23.50/i-prostage/AP/w_ap_check_reconciliation.asp?

    Normally, when I click "Finalize", it'll generate and show a report in a popup window. However, I get this error message instead. Can any of you help me locate the error in the following source code for the page, which I'm running in IE 6?

        sub cb_finalize
            dim ll_loop, ll_found, lobj_c_selected
            of_SetHourGlass(True)
            rpt_link.innerHTML = ""
            rpt_link.href = ""
            'Process only if at least one record was selected
            if rds1.Recordset.Recordcount > 0 then
                lb_found = false
                if rds1.Recordset.Recordcount = 1 then
                    if c_selected.checked then lb_found = true
                else
                    Set lobj_c_selected = document.all.item("c_selected")
                    for ll_loop = 1 to rds1.Recordset.Recordcount
                        if lobj_c_selected(ll_loop - 1).checked then
                            lb_found = true
                            exit for
                        end if
                    next
                end if
                if not lb_found then
                    msgbox "Please select a record to be posted.", vbInformation, "ProStage Accounting"
                    of_SetHourGlass(False)
                else
                    window.setTimeout "ue_process()", 100, vbscript 'Post Event
                end if
            else
                msgbox "There's no record to be posted." + vbcrlf + "Please select a record to be posted.", vbInformation, "ProStage Accounting"
                of_SetHourGlass(False)
            end if
        end sub

        Sub ue_process
            dim ll_loop, ll_count, ls_ret, ls_trxid, ls_r1
            'Get only selected records
            redim ls_trxid(rds1.Recordset.Recordcount)
            for ll_loop = 1 to rds1.Recordset.Recordcount
                rds1.Recordset.AbsolutePosition = ll_loop
                if not isnull(rds1.Recordset("clrdt")) then
                    'Add to TRXID array if selected
                    ll_count = ll_count + 1
                    ls_trxid(ll_count) = rds1.Recordset("trxid")
                end if
            next
            'Process reconciliation
            rds1.Recordset.MarshalOptions = 1
            ls_ret = iBO_Update.of_update_1(is_dbsrc, rds1.Recordset, "GLTRX", is_sql)
            if ls_ret < "1" then
                msgbox "Update Failed ! " + ls_ret, vbExclamation + vbOKonly, document.title
                of_SetHourGlass(False)
            else
                'Display Posting Journal & clear screen
                ue_posting_journal ls_trxid, ll_count
                Set rds1.SourceRecordset = iBO_Company.of_validate(is_dbsrc, "SELECT 1 FROM DUAL WHERE 1 = 2")
                ib_query = false 'Not to process RetrieveEnd
            end if
        End Sub

        Sub ue_posting_journal(as_trxid, al_count)
            dim ll_argseq, ls_argtyp, ls_argmnt, ll_sargseq
            of_setreport() 'Start service
            'Prepare arguments for report in RPTMSTR table
            for ll_loop = 1 to al_count + 1
                select case ll_loop
                    case 1 'Range displayed as report title
                        ll_argseq = 800
                        ls_argtyp = null
                        ls_argmnt = "st_title.text = 'Bank: " + bnkid_name.value + ", As of Date: " + _
                                    of_date_stringtodate(id_trxdt) + "'"
                        ll_sargseq = 0
                    case else 'TRXID array
                        ll_argseq = 1
                        ll_sargseq = ll_loop - 1
                        ls_argtyp = "S"
                        ls_argmnt = as_trxid(ll_loop - 1)
                end select
                of_report_register_array "d_rpt_ap_check_reconciliation_register", ll_argseq, ls_argtyp, ls_argmnt, ll_sargseq
            next
            of_report_process "d_rpt_ap_check_reconciliation_register", true, true 'Display report
            of_sethourglass(False)
        End Sub

    Read the article

  • deepcopy and python - tips to avoid using it?

    - by blackkettle
    I have a very simple Python routine that involves cycling through a list of roughly 20,000 latitude/longitude coordinates and calculating the distance of each point to a reference point.

        def compute_nearest_points(lat, lon, nPoints=5):
            """Find the nearest N points, given the input coordinates."""
            points = session.query(PointIndex).all()
            oldNearest = []
            newNearest = []
            for n in xrange(nPoints):
                oldNearest.append(PointDistance(None, None, None, 99999.0, 99999.0))
                # This is almost certainly an inappropriate use of deepcopy,
                # but how SHOULD I be doing this?!?!
                newNearest.append(obj2)

            for point in points:
                distance = compute_spherical_law_of_cosines(lat, lon, point.avg_lat, point.avg_lon)
                k = 0
                for p in oldNearest:
                    if distance < p.distance:
                        newNearest[k] = PointDistance(point.point, point.kana, point.english,
                                                      point.avg_lat, point.avg_lon, distance=distance)
                        break
                    else:
                        newNearest[k] = deepcopy(oldNearest[k])
                    k += 1
                for j in range(k, nPoints - 1):
                    newNearest[j + 1] = deepcopy(oldNearest[j])
                oldNearest = deepcopy(newNearest)

            # We're done, now print the result
            for point in oldNearest:
                print point.station, point.english, point.distance
            return

    I initially wrote this in C, using the exact same approach, and it works fine there, and is basically instantaneous for nPoints <= 100. So I decided to port it to Python because I wanted to use SQLAlchemy to do some other stuff. I first ported it without the deepcopy statements that now pepper the method, and this caused the results to be 'odd', or partially incorrect, because some of the points were just getting copied as references (I guess? I think?) -- but it was still pretty nearly as fast as the C version. Now, with the deepcopy calls added, the routine does its job correctly, but it has incurred an extreme performance penalty and now takes several seconds to do the same job. This seems like a pretty common job, but I'm clearly not doing it the Pythonic way. How should I be doing this so that I still get the correct results but don't have to include deepcopy everywhere?
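    For what it's worth, selecting the N nearest points is usually done with a heap rather than by copying result lists, which avoids deepcopy entirely. The sketch below is not from the original question: the spherical-law-of-cosines helper and the plain (lat, lon) tuples are stand-ins for the question's PointDistance objects and session query.

        import heapq
        from math import radians, sin, cos, acos

        def spherical_law_of_cosines(lat1, lon1, lat2, lon2, radius_km=6371.0):
            # Great-circle distance between two (lat, lon) pairs given in degrees.
            lat1, lon1, lat2, lon2 = map(radians, (lat1, lon1, lat2, lon2))
            c = sin(lat1) * sin(lat2) + cos(lat1) * cos(lat2) * cos(lon2 - lon1)
            return radius_km * acos(max(-1.0, min(1.0, c)))

        def nearest_points(lat, lon, points, n_points=5):
            """Return (distance, point) pairs for the n_points nearest points."""
            # heapq.nsmallest keeps only the best n_points candidates while scanning
            # the generator once, so no list copying (and no deepcopy) is needed.
            return heapq.nsmallest(
                n_points,
                ((spherical_law_of_cosines(lat, lon, p[0], p[1]), p) for p in points),
                key=lambda pair: pair[0],
            )

        # Example with plain (lat, lon) tuples:
        pts = [(35.68, 139.77), (34.69, 135.50), (43.06, 141.35)]
        print(nearest_points(35.0, 135.0, pts, n_points=2))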

    Read the article

  • DSP - Filtering frequencies using DFT

    - by Trap
    I'm trying to implement a DFT-based 8-band equalizer for the sole purpose of learning. To prove that my DFT implementation works, I fed it an audio signal, analyzed it and then resynthesized it again with no modifications made to the frequency spectrum. So far so good. I'm using the so-called 'standard way of calculating the DFT', which is by correlation. This method calculates the real and imaginary parts, both N/2 + 1 samples in length. To attenuate a frequency I'm just doing:

        float atnFactor = 0.6;
        Re[k] *= atnFactor;
        Im[k] *= atnFactor;

    where 'k' is an index in the range 0 to N/2, but what I get after resynthesis is a slightly distorted signal, especially at low frequencies. The input signal sample rate is 44.1 kHz, and since I just want an 8-band equalizer I'm feeding the DFT 16 samples at a time, so I have 8 frequency bins to play with. Can someone show me what I'm doing wrong? I tried to find info on this subject on the internet but couldn't find any. Thanks in advance.
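    For reference, here is a minimal NumPy sketch (not from the original post) of the same operation: attenuate one bin of a block's spectrum and resynthesize the block. Like the question, it uses a 16-sample block with no windowing or overlap between blocks, which is one likely source of audible distortion when per-bin gains change abruptly.

        import numpy as np

        N = 16                                     # block size, as in the question
        fs = 44100.0
        t = np.arange(N) / fs
        block = np.sin(2 * np.pi * 5512.5 * t)     # test tone centred exactly on bin 2

        spectrum = np.fft.rfft(block)              # N/2 + 1 complex bins (real + imaginary parts)
        atn_factor = 0.6
        k = 2
        spectrum[k] *= atn_factor                  # same effect as Re[k] *= a; Im[k] *= a

        resynth = np.fft.irfft(spectrum, n=N)      # back to the time domain
        print(np.max(np.abs(resynth - block)))     # shows how much the block changed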

    Read the article

  • MVC Localization of Default Model Binder

    - by Dai Bok
    I am currently trying to figure out how to localize the error messages generated by MVC. Let me use the default model binder as an example, so I can explain the problem. Assume I have a form where a user enters their age. The user then enters "ten" into the form, but instead of getting the expected error "Age must be between 18 and 25.", the message "The value 'ten' is not valid for Age." is displayed. The entity's Age property is defined below:

        [Range(18, 25,
            ErrorMessageResourceType = typeof(Errors),
            ErrorMessageResourceName = "Age",
            ErrorMessage = "Range_ErrorMessage")]
        public int Age { get; set; }

    After some digging, I noticed that this error text comes from System.Web.Mvc.Resources.DefaultModelBinder_ValueInvalid in the MvcResources.resx file. Now, how can I create localized versions of this file? As a solution, should I download the MVC source, add MvcResources.en_GB.resx, MvcResources.fr_FR.resx, MvcResources.es_ES.resx and MvcResources.de_DE.resx, and then compile my own version of MVC.dll? I don't like this idea. Does anyone know a better way?

    Read the article

  • What is usefulness of W3C's "Semantic Data Extractor" in semantically correct XHTML CSS Development?

    - by metal-gear-solid
    What is the usefulness of W3C's Semantic Data Extractor? http://www.w3.org/2003/12/semantic-extractor.html

    This tool, geared by an XSLT stylesheet, tries to extract some information from a HTML semantic rich document. It only uses information available through a good usage of the semantics defined in HTML. The aim is to show that providing a semantically rich HTML gives much more value to your code: using a semantically rich HTML code allows a better use of CSS, makes your HTML intelligible to a wider range of user agents (especially search engines bots). As an aside, it can give clues to user agents developers on some hooks that could be interesting to add in their product.

    After checking validation for CSS and HTML, should I go for the Semantic Data Extractor tool? What does it do, and how can it improve our coding? Is anyone using it? I checked some sites randomly with it, but with most sites it gives an error:

        Using org.apache.xerces.parsers.SAXParser
        Exception net.sf.saxon.trans.XPathException: org.xml.sax.SAXParseException:
            The element type "input" must be terminated by the matching end-tag "</input>".
        org.xml.sax.SAXParseException: The element type "input" must be terminated by
            the matching end-tag "</input>".

    Is it possible for every site to pass with this tool? On one site I got this error:

        No top-level heading (h1) found, no outline extracted.

    Is it necessary to have at least an h1 on every webpage?

    Read the article

  • Magento Set Grid to Filter Automatically by Current Day using Existing Datetime Column in Grid

    - by Tegan Snyder
    In Magento I'm creating a custom module and would love to be able to filter automatically on the datetime column, so that the initial grid listing shows only entities related to "today's" date. Here is my datetime column:

        $this->addColumn('ts', array(
            'header' => $hlp->__('Activated'),
            'align'  => 'left',
            'index'  => 'ts',
            'type'   => 'datetime',
            'width'  => '160px',
        ));

    I'm thinking there should be a way for me to just add a filter to the collection, like so:

        $now = Mage::getModel('core/date')->timestamp(time());
        $dateTime = date('m/d/y h:i:s', $now);
        $collection = Mage::getModel('mymodule/items')->getCollection()
            ->addFieldToFilter('ts', $dateTime);

    But this doesn't work. Am I using the wrong filter? My "ts" field in the database is a "datetime" field, but the default Magento "From:" - "To:" date range selectors don't use hours, minutes or seconds. Any ideas? Thanks, Tegan

    Read the article

  • SQL Server Index cost

    - by yellowstar
    I have read that one of the tradeoffs for adding table indexes in SQL Server is the increased cost of insert/update/delete queries, to benefit the performance of select queries. I can conceptually understand what happens in the case of an insert because SQL Server has to write entries into each index matching the new rows, but update and delete are a little more murky to me because I can't quite wrap my head around what the database engine has to do.

    Let's take DELETE as an example and assume I have the following schema (pardon the pseudo-SQL):

        TABLE Foo
            col1 int
           ,col2 int
           ,col3 int
           ,col4 int
        PRIMARY KEY (col1, col2)

        INDEX IX_1
            col3
            INCLUDE col4

    Now, if I issue the statement

        DELETE FROM Foo WHERE col1 = 12 AND col2 > 34

    I understand what the engine must do to update the table (or clustered index if you prefer). The index is set up to make it easy to find the range of rows to be removed and do so. However, at this point it also needs to update IX_1, and the query that I gave it gives no obvious efficient way for the database engine to find the rows to update. Is it forced to do a full index scan at this point? Does the engine read the rows from the clustered index first and generate a smarter internal delete against the index?

    It might help me to wrap my head around this if I understood better what is going on under the hood, but I guess my real question is this: I have a database that is spending a significant amount of time in delete and I'm trying to figure out what I can do about it. When I display the execution plan for the deletion, it just shows an entry for "Clustered Index Delete" on table Foo, which lists in the details section the other indices that need to be updated, but I don't get any indication of the relative cost of these other indices. Are they all equal in this case? Is there some way that I can estimate the impact of removing one or more of these indices without having to actually try it?

    Read the article

  • Is a red-black tree my ideal data structure?

    - by Hugo van der Sanden
    I have a collection of items (big rationals) that I'll be processing. In each case, processing will consist of removing the smallest item in the collection, doing some work, and then adding 0-2 new items (which will always be larger than the removed item). The collection will be initialised with one item, and work will continue until it is empty. I'm not sure what size the collection is likely to reach, but I'd expect in the range 1M-100M items. I will not need to locate any item other than the smallest. I'm currently planning to use a red-black tree, possibly tweaked to keep a pointer to the smallest item. However I've never used one before, and I'm unsure whether my pattern of use fits its characteristics well. 1) Is there a danger the pattern of deletion from the left + random insertion will affect performance, eg by requiring a significantly higher number of rotations than random deletion would? Or will delete and insert operations still be O(log n) with this pattern of use? 2) Would some other data structure give me better performance, either because of the deletion pattern or taking advantage of the fact I only ever need to find the smallest item? Update: glad I asked, the binary heap is clearly a better solution for this case, and as promised turned out to be very easy to implement. Hugo
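    To make the update's conclusion concrete (this sketch is not part of the original question): Python's heapq module gives O(log n) pop-smallest and insert with very little code. The process() function below is a placeholder standing in for the real per-item work on the big rationals.

        import heapq

        def run(initial_item, process):
            """Repeatedly pop the smallest item, process it, and push 0-2 larger items."""
            heap = [initial_item]
            while heap:
                smallest = heapq.heappop(heap)       # O(log n) removal of the minimum
                for new_item in process(smallest):   # yields 0-2 items, each larger than smallest
                    heapq.heappush(heap, new_item)   # O(log n) insertion

        # Toy example: grow each item into up to two larger ones until a cutoff.
        def process(x):
            return [x * 2, x * 3] if x < 100 else []

        run(1, process)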

    Read the article

  • Grails UrlMappings with .html

    - by Glennn
    I'm developing a Grails web application (mainly as a learning exercise). I have previously written some standard Grails apps, but in this case I wanted to try creating a controller that would intercept all requests (including static HTML) of the form:

        <a href="/testApp/testJsp.jsp">test 1</a>
        <a href="/testApp/testGsp.gsp">test 2</a>
        <a href="/testApp/testHtm.htm">test 3</a>
        <a href="/testApp/testHtml.html">test 4</a>

    The intent is to do some simple business logic (auditing) each time a user clicks a link. I know I could do this using a Filter (or a range of other methods), but I thought this should work too and wanted to do it within the Grails framework. I set up the Grails UrlMappings.groovy file to map all URLs of that form (/$myPathParam?) to a single controller:

        class UrlMappings {
            static mappings = {
                "/$controller/$action?/$id?" {
                    constraints {
                    }
                }
                "/$path?" (controller: 'auditRecord', action: 'showPage')
                "500" (view: '/error')
            }
        }

    In that controller (in the appropriate "showPage" action) I've been printing out the path information, for example:

        def showPage = {
            println "params.path = " + params.path
            ...
            render(view: resultingView)
        }

    The results of the println in the showPage action for each of my four links are:

        testJsp.jsp
        testGsp.gsp
        testHtm.htm
        testHtml

    Why is the last one "testHtml", not "testHtml.html"? In a previous Stack Overflow question, Olexandr encountered this issue and was advised to simply concatenate the value of request.format, which, indeed, does return "html". However, request.format also returns "html" for all four links. I'm interested in gaining an understanding of what Grails is doing and why. Is there some way to configure Grails so the params.path variable in the controller shows "testHtml.html" rather than stripping off the "html" extension? It doesn't seem to remove the extension for any other file type (including .htm). Is there a good reason it's doing this? I know it is a bit unusual to use a controller for static HTML, but I would still like to understand what's going on.

    Read the article

  • Basic syntax for an animation loop?

    - by Moshe
    I know that jQuery, for example, can do animation of sorts. I also know that at the very core of the animation, there must me some sort of loop doing the animation. What is an example of such a loop? A complete answer should ideally answer the following questions: What is a basic syntax for an effective animation recursion that can animate a single property of a particular object at a time? The function should be able to vary its target object and property of the object. What arguments/parameters should it take? What is a good range of reiterating the loop? In milliseconds? (Should this be a parameter/argument to the function?) REMEMBER: The answer is NOT necessarily language specific, but if you are writing in a specific language, please specify which one. Error handling is a plus. {Nothing is more irritating (for our purposes) than an animation that does something strange, like stopping halfway through.} Thanks!

    Read the article

  • Fetching real time data from excel

    - by Umesh Sharma
    I am seriously looking for your valuable help here for the first time; if possible, please help me. I am developing a VB.NET app in which I read "real time data" from an Excel sheet using "Microsoft.Office.Interop.Excel", i.e. Excel automation. All cells in the Excel sheet fetch stock data from some local DDE server, like "=XYZ|Bid!GOLD", "=XYZ|Bid!SILVER", "=XYZ|Ask!SILVER" and so on. Some cells also hold fixed values like "Symbol", "Bid Rate", "32.90" etc. The values of the DDE-mapped cells (i.e. =XYZ|xxxx!yyy) are continuously changing.

    THE PROBLEM is here: "fixed values" from Excel cells come through to my app quite OK, but all DDE-mapped cell values come through as "-2146826246" (when the local DDE server data source is ON) or "-2146826265" (OFF). If I use C#.NET it's all OK, but not with VB.NET. I want to display a range of Excel cells (A1 to J50) in a VB.NET ListView, and they change every 200 ms (5 times every second).

    Important: is it possible to BIND "ListView items/column values" to "Excel cells" or some local memory variables? Currently I am reading Excel "cell by cell" and trying to put the values into the .NET ListView, but CPU usage is very high and it is a very slow process. If yes, then how, please? I am a VFP developer but new to .NET. It's very easy in VFP, so why not in .NET? Please guide me if someone has the solution.

    Read the article

  • reading csv files in scipy/numpy in Python

    - by user248237
    I am having trouble reading a CSV file, delimited by tabs, in Python. I use the following function:

        def csv2array(filename, skiprows=0, delimiter='\t', raw_header=False,
                      missing=None, with_header=True):
            """
            Parse a file name into an array. Return the array and additional header lines.
            By default, parse the header lines into dictionaries, assuming the
            parameters are numeric, using 'parse_header'.
            """
            f = open(filename, 'r')
            skipped_rows = []
            for n in range(skiprows):
                header_line = f.readline().strip()
                if raw_header:
                    skipped_rows.append(header_line)
                else:
                    skipped_rows.append(parse_header(header_line))
            f.close()
            if missing:
                data = genfromtxt(filename, dtype=None, names=with_header, deletechars='',
                                  skiprows=skiprows, missing=missing)
            else:
                if delimiter != '\t':
                    data = genfromtxt(filename, dtype=None, names=with_header,
                                      delimiter=delimiter, deletechars='', skiprows=skiprows)
                else:
                    data = genfromtxt(filename, dtype=None, names=with_header, deletechars='',
                                      skiprows=skiprows)
            if data.ndim == 0:
                data = array([data.item()])
            return (data, skipped_rows)

    The problem is that genfromtxt complains about my files, e.g. with the error:

        Line #27100 (got 12 columns instead of 16)

    I am not sure where these errors come from. Any ideas? Here's an example file that causes the problem:

        #Gene   120-1   120-3   120-4   30-1    30-3    30-4    C-1     C-2     C-5     genesymbol      genedesc
        ENSMUSG00000000001      7.32    9.5     7.76    7.24    11.35   8.83    6.67    11.35   7.12    Gnai3   guanine nucleotide binding protein alpha
        ENSMUSG00000000003      0.0     0.0     0.0     0.0     0.0     0.0     0.0     0.0     0.0     Pbsn    probasin

    Is there a better way to write a generic csv2array function? Thanks.
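    One way to chase down errors like "got 12 columns instead of 16" (a sketch, not from the original post) is to scan the file with the standard csv module and report any line whose field count differs from the header's, before handing the file to genfromtxt. The filename below is a placeholder.

        import csv

        def find_ragged_rows(filename, delimiter='\t'):
            """Print line numbers whose column count differs from the first row's."""
            with open(filename, newline='') as f:
                reader = csv.reader(f, delimiter=delimiter)
                expected = len(next(reader))
                for lineno, row in enumerate(reader, start=2):
                    if row and len(row) != expected:
                        print("line %d: %d columns, expected %d" % (lineno, len(row), expected))

        # find_ragged_rows("expression_data.tsv")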

    Read the article

  • Fastest way to generate delimited string from 1d numpy array

    - by Abiel
    I have a program which needs to turn many large one-dimensional NumPy arrays of floats into delimited strings. I am finding this operation quite slow relative to the mathematical operations in my program and am wondering if there is a way to speed it up. For example, consider the following loop, which takes 100,000 random numbers in a NumPy array and joins each array into a comma-delimited string.

        import numpy as np
        x = np.random.randn(100000)
        for i in range(100):
            ",".join(map(str, x))

    This loop takes about 20 seconds to complete (total, not per cycle). In contrast, consider that 100 cycles of something like elementwise multiplication (x*x) would take less than 1/10 of a second to complete. Clearly the string join operation creates a large performance bottleneck; in my actual application it will dominate total runtime. This makes me wonder: is there a faster way than ",".join(map(str, x))? Since map() is where almost all the processing time occurs, this comes down to the question of whether there is a faster way to convert a very large number of numbers to strings.
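    Two alternatives worth timing (a sketch, not a benchmark claim): hand the whole formatting job to np.savetxt with an explicit fmt, or convert the array to strings in one vectorised step and do a single join. Both avoid calling Python-level str() once per element through map(); the exact output format may differ slightly from str(x[i]), so the fmt/precision would need checking against the application's needs.

        import io
        import numpy as np

        x = np.random.randn(100000)

        # Option 1: write one comma-delimited row via np.savetxt.
        buf = io.StringIO()
        np.savetxt(buf, x.reshape(1, -1), fmt="%.17g", delimiter=",")
        s1 = buf.getvalue().rstrip("\n")

        # Option 2: vectorised conversion to strings, then a single join.
        s2 = ",".join(x.astype(str))

        print(s1[:60])
        print(s2[:60])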

    Read the article

  • Where is my python script spending time? Is there "missing time" in my cprofile / pstats trace?

    - by fmark
    I am attempting to profile a long-running Python script. The script does some spatial analysis on a raster GIS data set using the gdal module. The script currently uses three files: the main script, which loops over the raster pixels, called find_pixel_pairs.py; a simple cache in lrucache.py; and some misc classes in utils.py. I have profiled the code on a moderate-sized dataset. pstats returns:

        p.sort_stats('cumulative').print_stats(20)

        Thu May  6 19:16:50 2010    phes.profile

        355483738 function calls in 11644.421 CPU seconds

        Ordered by: cumulative time
        List reduced from 86 to 20 due to restriction <20>

        ncalls      tottime    percall    cumtime    percall    filename:lineno(function)
        1           0.008      0.008      11644.421  11644.421  <string>:1(<module>)
        1           11064.926  11064.926  11644.413  11644.413  find_pixel_pairs.py:49(phes)
        340135349   544.143    0.000      572.481    0.000      utils.py:173(extent_iterator)
        8831020     18.492     0.000      18.492     0.000      {range}
        231922      3.414      0.000      8.128      0.000      utils.py:152(get_block_in_bands)
        142739      1.303      0.000      4.173      0.000      utils.py:97(search_extent_rect)
        745181      1.936      0.000      2.500      0.000      find_pixel_pairs.py:40(is_no_data)
        285478      1.801      0.000      2.271      0.000      utils.py:98(intify)
        231922      1.198      0.000      2.013      0.000      utils.py:116(block_to_pixel_extent)
        695766      1.990      0.000      1.990      0.000      lrucache.py:42(get)
        1213166     1.265      0.000      1.265      0.000      {min}
        1031737     1.034      0.000      1.034      0.000      {isinstance}
        142740      0.563      0.000      0.909      0.000      utils.py:122(find_block_extent)
        463844      0.611      0.000      0.611      0.000      utils.py:112(block_to_pixel_coord)
        745274      0.565      0.000      0.565      0.000      {method 'append' of 'list' objects}
        285478      0.346      0.000      0.346      0.000      {max}
        285480      0.346      0.000      0.346      0.000      utils.py:109(pixel_coord_to_block_coord)
        324         0.002      0.000      0.188      0.001      utils.py:27(__init__)
        324         0.016      0.000      0.186      0.001      gdal.py:848(ReadAsArray)
        1           0.000      0.000      0.160      0.160      utils.py:50(__init__)

    The top two calls contain the main loop - the entire analysis. The remaining calls sum to less than 625 of the 11644 seconds. Where are the remaining 11,000 seconds spent? Is it all within the main loop of find_pixel_pairs.py? If so, can I find out which lines of code are taking most of the time?
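    A note on reading the table (not from the original post): the tottime column already points at the answer. find_pixel_pairs.py:49(phes) has a tottime of roughly 11,065 s, i.e. the time is spent in that function's own body rather than in anything it calls, so a function-level profiler cannot break it down any further. Sorting by tottime makes this jump out, and a line-level profiler can then attribute the time to individual lines. The snippet below is a sketch: it assumes phes() is importable in the profiling session, and the line_profiler usage shown is the standard kernprof workflow rather than anything from the original script.

        import cProfile
        import pstats

        # Function-level: sort by time spent inside each function's own body.
        cProfile.run("phes()", "phes.profile")          # assumes phes() is in scope here
        pstats.Stats("phes.profile").sort_stats("tottime").print_stats(20)

        # Line-level (third-party line_profiler): decorate the hot function with
        # @profile and run the script via "kernprof -l -v find_pixel_pairs.py".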

    Read the article

  • MySQL GIS and Spatial Extensions - how to map regions and query against them

    - by chibineku
    I am trying to make a smartphone app which will return a list of users within a certain proximity, say 100m. It's easy to get the coordinates of my BlackBerry and write them to a database, but in order to return a list of other users within 100m, I need to pull every other record from the database and compare the distance between the two points, checking to see if it's within range, before outputting that user's information. This is going to be time consuming if there are many users involved. So I would like to map areas (countries, cities, I'm not yet sure of the resolution I'll need) so that I can first target a smaller subset of all users. This will save on processing time. I have read the basics of GIS and spatial querying on the mysql website but to be honest the query is over my head and I hate copying and pasting code without understanding it. Plus it only checks for proximity - I want to first check if a coordinate falls within a certain area. Does anyone have any experience of such matters and feel like giving me some pointers? Resources such as any preexisting databases of points describing countries as polygons would be really helpful too. Many thanks to anyone who takes the time :)
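    One common way to get the "smaller subset first" behaviour described above (a sketch of the general idea in Python, not MySQL spatial SQL) is a bounding-box prefilter: compute a latitude/longitude box guaranteed to contain the search radius, select only rows inside the box (a job a plain index on lat/lon columns can do), and only then compute exact distances for that small subset. The 6371 km Earth radius and the SQL in the comments are illustrative assumptions.

        import math

        def bounding_box(lat, lon, radius_m, earth_radius_m=6371000.0):
            """Return (min_lat, max_lat, min_lon, max_lon) enclosing a radius around a point."""
            dlat = math.degrees(radius_m / earth_radius_m)
            # Longitude degrees shrink with latitude, so widen the box accordingly.
            dlon = math.degrees(radius_m / (earth_radius_m * math.cos(math.radians(lat))))
            return (lat - dlat, lat + dlat, lon - dlon, lon + dlon)

        min_lat, max_lat, min_lon, max_lon = bounding_box(51.5074, -0.1278, 100)
        # Prefilter (roughly): SELECT ... WHERE lat BETWEEN min_lat AND max_lat
        #                                   AND lon BETWEEN min_lon AND max_lon
        # then apply an exact great-circle distance check to the few rows returned.
        print(min_lat, max_lat, min_lon, max_lon)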

    Read the article

  • PHP IP Validation Help

    - by Zubair1
    I am using this IP validation function that I came across while browsing. It has been working well until today, when I ran into a problem: for some reason the function won't validate this IP as valid: 203.81.192.26. I'm not too great with regular expressions, so I would appreciate any help on what could be wrong. If you have another function, I would appreciate it if you could post that for me. The code for the function is below:

        public static function validateIpAddress($ip_addr)
        {
            global $errors;
            $preg = '#^(?:(?:25[0-5]|2[0-4][0-9]|[01]?[0-9][0-9]?)\.){3}'
                  . '(?:25[0-5]|2[0-4][0-9]|[01]?[0-9][0-9]?)$#';
            if (preg_match($preg, $ip_addr)) {
                // now all the integer values are separated
                $parts = explode(".", $ip_addr);
                // now we need to check each part can range from 0-255
                foreach ($parts as $ip_parts) {
                    if (intval($ip_parts) > 255 || intval($ip_parts) < 0) {
                        $errors[] = "ip address is not valid.";
                        return false;
                    }
                    return true;
                }
                return true;
            } else {
                $errors[] = "please double check the ip address.";
                return false;
            }
        }

    Read the article

  • Is there a website to look up common, already written functions?

    - by pinnacler
    I'm sitting here writing a function that I'm positive has been written before, somewhere on earth. It's just too common to have not been attempted, and I'm wondering why I can't just go to a website and search for a function that I can then copy and paste into my project in 2 seconds, instead of wasting my day reinventing the wheel. Sure there are certain libraries you can use, but where do you find these libraries and when they are absent, is there a site like I'm describing? Possibly a wiki of some type that contains free code that anybody can edit and improve? Edit: I can code things fine, I just don't know HOW to do them. So for example, right now, I'm trying to localize a robot/car/point in space. I KNOW there is a way to do it, just based off of range and distance. Triangulation and Trilateration. How to code that is a different story. A site that could have psuedo code, step by step how to do that would be ridiculously helpful. It would also ensure the optimal solution since everybody can edit it. I'm also writing in Matlab, which I hate because it's quirky, adding to my desire for creating a website like I describe.

    Read the article

  • How can I filter a Perl DBIx recordset with 2 conditions on the same column?

    - by BrianH
    I'm getting my feet wet in DBIx::Class - loving it so far. One problem I am running into is that I want to query records, filtering out records that aren't in a certain date range. It took me a while to find out how to do a "<=" type of match instead of an equality match:

        my $start_criteria = ">= $start_date";
        my $end_criteria   = "<= $end_date";

        my $result = $schema->resultset('MyTable')->search(
            {
                'status_date' => \$start_criteria,
                'status_date' => \$end_criteria,
            });

    The obvious problem with this is that since the filters are in a hash, I am overwriting the value for "status_date" and am only searching where status_date <= $end_date. The SQL that gets executed is:

        SELECT me.* FROM MyTable me WHERE status_date <= '9999-12-31'

    I've searched CPAN, Google and SO and haven't been able to figure out how to apply 2 conditions to the same column. All documentation I've been able to find shows how to filter on more than 1 column, but not 2 conditions on the same column. I'm sure I'm missing something obvious - hoping someone here can point it out to me? Thanks in advance! Brian

    Read the article

  • MySQL: order by and limit gives wrong result

    - by Larry K
    MySQL ver 5.1.26. I'm getting the wrong result with a select that has where, order by and limit clauses. It's only a problem when the order by uses the id column. I saw the MySQL manual for LIMIT Optimization. My guess from reading the manual is that there is some problem with the index on the primary key, id. But I don't know where I should go from here... Question: what should I do to best solve the problem?

    Works correctly:

        mysql> SELECT id, created_at FROM billing_invoices
               WHERE (billing_invoices.account_id = 5) ORDER BY id DESC;
        +------+---------------------+
        | id   | created_at          |
        +------+---------------------+
        | 1336 | 2010-05-14 08:05:25 |
        | 1334 | 2010-05-06 08:05:25 |
        | 1331 | 2010-05-05 23:18:11 |
        +------+---------------------+
        3 rows in set (0.00 sec)

    WRONG result when limit added! Should be the first row, id 1336:

        mysql> SELECT id, created_at FROM billing_invoices
               WHERE (billing_invoices.account_id = 5) ORDER BY id DESC LIMIT 1;
        +------+---------------------+
        | id   | created_at          |
        +------+---------------------+
        | 1331 | 2010-05-05 23:18:11 |
        +------+---------------------+
        1 row in set (0.00 sec)

    Works correctly:

        mysql> SELECT id, created_at FROM billing_invoices
               WHERE (billing_invoices.account_id = 5) ORDER BY created_at DESC;
        +------+---------------------+
        | id   | created_at          |
        +------+---------------------+
        | 1336 | 2010-05-14 08:05:25 |
        | 1334 | 2010-05-06 08:05:25 |
        | 1331 | 2010-05-05 23:18:11 |
        +------+---------------------+
        3 rows in set (0.01 sec)

    Works correctly with limit:

        mysql> SELECT id, created_at FROM billing_invoices
               WHERE (billing_invoices.account_id = 5) ORDER BY created_at DESC LIMIT 1;
        +------+---------------------+
        | id   | created_at          |
        +------+---------------------+
        | 1336 | 2010-05-14 08:05:25 |
        +------+---------------------+
        1 row in set (0.01 sec)

    Additional info:

        explain SELECT id, created_at FROM billing_invoices
                WHERE (billing_invoices.account_id = 5) ORDER BY id DESC LIMIT 1;
        +----+-------------+------------------+-------+--------------------------------------+--------------------------------------+---------+------+------+-------------+
        | id | select_type | table            | type  | possible_keys                        | key                                  | key_len | ref  | rows | Extra       |
        +----+-------------+------------------+-------+--------------------------------------+--------------------------------------+---------+------+------+-------------+
        |  1 | SIMPLE      | billing_invoices | range | index_billing_invoices_on_account_id | index_billing_invoices_on_account_id | 4       | NULL |    3 | Using where |
        +----+-------------+------------------+-------+--------------------------------------+--------------------------------------+---------+------+------+-------------+

    Read the article

  • Efficient method to calculate the rank vector of a list in Python

    - by Tamás
    I'm looking for an efficient way to calculate the rank vector of a list in Python, similar to R's rank function. In a simple list with no ties between the elements, element i of the rank vector of a list l should be x if and only if l[i] is the x-th element in the sorted list. This is simple so far; the following code snippet does the trick:

        def rank_simple(vector):
            return [rank for rank in sorted(range(len(vector)), key=vector.__getitem__)]

    Things get complicated, however, if the original list has ties (i.e. multiple elements with the same value). In that case, all the elements having the same value should have the same rank, which is the average of their ranks obtained using the naive method above. So, for instance, if I have [1, 2, 3, 3, 3, 4, 5], the naive ranking gives me [0, 1, 2, 3, 4, 5, 6], but what I would like to have is [0, 1, 3, 3, 3, 5, 6]. Which one would be the most efficient way to do this in Python? Footnote: I don't know if NumPy already has a method to achieve this or not; if it does, please let me know, but I would be interested in a pure Python solution anyway, as I'm developing a tool which should work without NumPy as well.
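    For what it's worth, here is a pure-Python sketch of the averaged-tie ranking described above (0-based, matching the example); on the NumPy/SciPy side, scipy.stats.rankdata computes the same averaged ranks, though 1-based.

        def rank_with_ties(vector):
            """Average ranks (0-based): tied values share the mean of their sorted positions."""
            order = sorted(range(len(vector)), key=vector.__getitem__)
            ranks = [0.0] * len(vector)
            i = 0
            while i < len(order):
                j = i
                # Extend j to cover the whole run of equal values in sorted order.
                while j + 1 < len(order) and vector[order[j + 1]] == vector[order[i]]:
                    j += 1
                avg = (i + j) / 2.0
                for k in range(i, j + 1):
                    ranks[order[k]] = avg
                i = j + 1
            return ranks

        print(rank_with_ties([1, 2, 3, 3, 3, 4, 5]))   # [0.0, 1.0, 3.0, 3.0, 3.0, 5.0, 6.0]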

    Read the article
