Search Results

Search found 1402 results on 57 pages for 'dataset'.


  • Where is my python script spending time? Is there "missing time" in my cprofile / pstats trace?

    - by fmark
    I am attempting to profile a long-running Python script. The script does some spatial analysis on a raster GIS data set using the gdal module. The script currently uses three files: the main script which loops over the raster pixels, called find_pixel_pairs.py, a simple cache in lrucache.py, and some misc classes in utils.py. I have profiled the code on a moderate sized dataset. pstats returns:

      p.sort_stats('cumulative').print_stats(20)
      Thu May 6 19:16:50 2010    phes.profile

      355483738 function calls in 11644.421 CPU seconds

      Ordered by: cumulative time
      List reduced from 86 to 20 due to restriction <20>

      ncalls     tottime    percall    cumtime    percall    filename:lineno(function)
      1          0.008      0.008      11644.421  11644.421  <string>:1(<module>)
      1          11064.926  11064.926  11644.413  11644.413  find_pixel_pairs.py:49(phes)
      340135349  544.143    0.000      572.481    0.000      utils.py:173(extent_iterator)
      8831020    18.492     0.000      18.492     0.000      {range}
      231922     3.414      0.000      8.128      0.000      utils.py:152(get_block_in_bands)
      142739     1.303      0.000      4.173      0.000      utils.py:97(search_extent_rect)
      745181     1.936      0.000      2.500      0.000      find_pixel_pairs.py:40(is_no_data)
      285478     1.801      0.000      2.271      0.000      utils.py:98(intify)
      231922     1.198      0.000      2.013      0.000      utils.py:116(block_to_pixel_extent)
      695766     1.990      0.000      1.990      0.000      lrucache.py:42(get)
      1213166    1.265      0.000      1.265      0.000      {min}
      1031737    1.034      0.000      1.034      0.000      {isinstance}
      142740     0.563      0.000      0.909      0.000      utils.py:122(find_block_extent)
      463844     0.611      0.000      0.611      0.000      utils.py:112(block_to_pixel_coord)
      745274     0.565      0.000      0.565      0.000      {method 'append' of 'list' objects}
      285478     0.346      0.000      0.346      0.000      {max}
      285480     0.346      0.000      0.346      0.000      utils.py:109(pixel_coord_to_block_coord)
      324        0.002      0.000      0.188      0.001      utils.py:27(__init__)
      324        0.016      0.000      0.186      0.001      gdal.py:848(ReadAsArray)
      1          0.000      0.000      0.160      0.160      utils.py:50(__init__)

    The top two calls contain the main loop - the entire analysis. The remaining calls sum to less than 625 of the 11644 seconds. Where are the remaining 11,000 seconds spent? Is it all within the main loop of find_pixel_pairs.py? If so, can I find out which lines of code are taking most of the time?
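
    As an illustration of one way to answer the last question, here is a minimal sketch in Python. It assumes the third-party line_profiler package is available and that phes() can be called directly with suitable arguments; the import and the call below are placeholders, not code from the question.

      # Whole-program view with the standard library: tottime (unlike cumtime)
      # excludes time spent in sub-calls, so it points at a function's own lines.
      import cProfile
      import pstats

      import find_pixel_pairs  # hypothetical import of the script under test

      cProfile.run("find_pixel_pairs.phes()", "phes.profile")
      pstats.Stats("phes.profile").sort_stats("tottime").print_stats(20)

      # Line-by-line view of the dominant function, using the third-party
      # line_profiler package (pip install line_profiler).
      from line_profiler import LineProfiler

      lp = LineProfiler()
      profiled_phes = lp(find_pixel_pairs.phes)  # wrap the hot function
      profiled_phes()                            # pass whatever arguments phes() really needs
      lp.print_stats()                           # per-line hit counts and timings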

    Read the article

  • Examples of monoids/semigroups in programming

    - by jkff
    It is well-known that monoids are stunningly ubiquitous in programming. They are so ubiquitous and so useful that I, as a 'hobby project', am working on a system that is completely based on their properties (distributed data aggregation). To make the system useful I need useful monoids :) I already know of these:

    - Numeric or matrix sum
    - Numeric or matrix product
    - Minimum or maximum under a total order with a top or bottom element (more generally, join or meet in a bounded lattice, or even more generally, product or coproduct in a category)
    - Set union
    - Map union where conflicting values are joined using a monoid
    - Intersection of subsets of a finite set (or just set intersection if we speak about semigroups)
    - Intersection of maps with a bounded key domain (same here)
    - Merge of sorted sequences, perhaps with joining key-equal values in a different monoid/semigroup
    - Bounded merge of sorted lists (same as above, but we take the top N of the result)
    - Cartesian product of two monoids or semigroups
    - List concatenation
    - Endomorphism composition

    Now, let us define a quasi-property of an operation as a property that holds up to an equivalence relation. For example, list concatenation is quasi-commutative if we consider lists of equal length or with identical contents up to permutation to be equivalent. Here are some quasi-monoids and quasi-commutative monoids and semigroups:

    - Any (a+b = a or b, if we consider all elements of the carrier set to be equivalent)
    - Any satisfying predicate (a+b = the one of a and b that is non-null and satisfies some predicate P, if none does then null; if we consider all elements satisfying P equivalent)
    - Bounded mixture of random samples (xs+ys = a random sample of size N from the concatenation of xs and ys; if we consider any two samples with the same distribution as the whole dataset to be equivalent)
    - Bounded mixture of weighted random samples

    Which others exist?
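
    To make the connection to distributed aggregation concrete, here is a small self-contained Python sketch; the Monoid class and the example instances are illustrative only and not taken from any library.

      from functools import reduce

      class Monoid:
          def __init__(self, identity, op):
              self.identity = identity
              self.op = op  # must be associative so the grouping of the reduction does not matter

          def concat(self, values):
              return reduce(self.op, values, self.identity)

      numeric_sum = Monoid(0, lambda a, b: a + b)
      list_concat = Monoid([], lambda a, b: a + b)
      map_union   = Monoid({}, lambda a, b: {**a, **b})  # right-biased on conflicting keys

      # Associativity is what lets the work be split across nodes:
      chunks = [[1, 2, 3], [4, 5], [6]]
      partials = [numeric_sum.concat(c) for c in chunks]  # computed independently per node
      total = numeric_sum.concat(partials)                # combined afterwards
      print(total)  # 21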

    Read the article

  • Passing control references as ref parameters

    - by Enmanuel
    Hi everyone. Please help me out here because I'm getting kind of confused. I have a form in a C# WinForms project and a couple of methods that are supposed to perform some tasks for this particular form and all derived ones, so one of those helper methods can serve as the example: this one should fill comboboxes with a dataset. It's working properly now, but when I coded the method with this signature

      protected void FillComboBox(kComboBox target, IEntClass_DA entity)
      {
          target.DataSource = entity.GetList().Tables[0];
          target.DisplayMember = "name";
          target.ValueMember = "id";
      }

    I saw that the DisplayMember and ValueMember in the comboboxes were not holding the values after the method call. I just thought I should use ref parameters so the assignments are not wasted on read-only reference variables. It was OK by then, but later, making an exercise of passing the whole form as a parameter, I was warned by the compiler that "this" could not be passed as a ref parameter because it is read-only. Fine then; I keep working and see that even without the ref keyword I can use the reference variable from the form, update some properties and see the changes. So what's happening here: passing a reference of the control to the helper method gives me the ability to change its members even when not using the ref parameter?? Thanks.

    Read the article

  • What is the best way to auto-generate INSERT statements for a SQL Server table?

    - by JosephStyons
    We are writing a new application, and while testing, we will need a bunch of dummy data. I've added that data by using MS Access to dump excel files into the relevant tables. Every so often, we want to "refresh" the relevant tables, which means dropping them all, re-creating them, and running a saved MS Access append query. The first part (dropping & re-creating) is an easy sql script, but the last part makes me cringe. I want a single setup script that has a bunch of INSERTs to regenerate the dummy data. I have the data in the tables now. What is the best way to automatically generate a big list of INSERT statements from that dataset? I'm thinking of something like in TOAD (for Oracle) where you can right-click on a grid and click Save As-Insert Statements, and it will just dump a big sql script wherever you want. The only way I can think of doing it is to save the table to an excel sheet and then write an excel formula to create an INSERT for every row, which is surely not the best way. I'm using the 2008 Management Studio to connect to a SQL Server 2005 database.
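
    As a stopgap, one scripted route is sketched below in Python with the pyodbc package; the connection string, table name and value quoting are simplistic placeholders, so treat it as an outline rather than a robust generator.

      import pyodbc

      def dump_inserts(conn_str, table):
          cn = pyodbc.connect(conn_str)
          cur = cn.cursor()
          cur.execute(f"SELECT * FROM {table}")
          cols = [d[0] for d in cur.description]          # column names from the result set
          col_list = ", ".join(f"[{c}]" for c in cols)
          for row in cur.fetchall():
              vals = []
              for v in row:
                  if v is None:
                      vals.append("NULL")
                  elif isinstance(v, (int, float)):
                      vals.append(str(v))
                  else:
                      # naive quoting: everything else is emitted as a string literal
                      vals.append("'" + str(v).replace("'", "''") + "'")
              print(f"INSERT INTO [{table}] ({col_list}) VALUES ({', '.join(vals)});")
          cur.close()
          cn.close()

      # dump_inserts("DRIVER={SQL Server};SERVER=myserver;DATABASE=mydb;Trusted_Connection=yes", "Customers")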

    Read the article

  • Code bacteria: evolving mathematical behavior

    - by Stefano Borini
    It would not be my intention to put a link to my blog, but I don't have any other way to clarify what I really mean. The article is quite long, and it's in three parts (1,2,3), but if you are curious, it's worth reading. A long time ago (5 years, at least) I programmed a Python program which generated "mathematical bacteria". These bacteria are Python objects with a simple opcode-based genetic code. You can feed them a number and they return a number, according to the execution of their code. I generate their genetic codes at random, and apply an environmental selection to those objects producing a result similar to a predefined expected value. Then I let them duplicate, introduce mutations, and evolve them. The result is quite interesting, as their genetic code basically learns how to solve simple equations, even for values different from the training dataset. Now, this thing is just a toy. I had time to waste and I wanted to satisfy my curiosity. However, I assume that something, in terms of research, has already been done... I hope I am just reinventing the wheel here. Are you aware of more serious attempts at creating in-silico bacteria like the ones I programmed? Please note that this is not really "genetic algorithms". A genetic algorithm is when you use evolution/selection to improve a vector of parameters against a given scoring function. This is kind of different: I optimize the code, not the parameters, against a given scoring function.
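
    A toy sketch in the same spirit, purely illustrative (the opcode set, fitness function and parameters are made up, not taken from the article): random opcode "genomes" are scored on how well they reproduce a target function, the best survive, and mutated copies refill the population.

      import random

      OPS = {
          "add": lambda x, k: x + k,
          "sub": lambda x, k: x - k,
          "mul": lambda x, k: x * k,
      }

      def random_genome(length=4):
          return [(random.choice(list(OPS)), random.randint(-5, 5)) for _ in range(length)]

      def run(genome, x):
          for op, k in genome:          # execute the opcodes in sequence
              x = OPS[op](x, k)
          return x

      def fitness(genome, samples, target):
          return -sum(abs(run(genome, x) - target(x)) for x in samples)

      def mutate(genome):
          g = list(genome)
          i = random.randrange(len(g))
          g[i] = (random.choice(list(OPS)), random.randint(-5, 5))
          return g

      def evolve(target, generations=200, pop_size=50):
          samples = range(-10, 11)
          pop = [random_genome() for _ in range(pop_size)]
          for _ in range(generations):
              pop.sort(key=lambda g: fitness(g, samples, target), reverse=True)
              survivors = pop[: pop_size // 5]                       # environmental selection
              pop = survivors + [mutate(random.choice(survivors))    # duplication with mutation
                                 for _ in range(pop_size - len(survivors))]
          pop.sort(key=lambda g: fitness(g, samples, target), reverse=True)
          return pop[0]

      best = evolve(lambda x: 3 * x + 7)
      print(best, [run(best, x) for x in (0, 1, 2)])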

    Read the article

  • Merging datasets with 2 different time variables in SAS

    - by John
    Hey guys, for those regularly browsing this site, sorry for yet another question (however, I did solve my last question myself!). I have another problem with merging datasets; it seems that accounting for time in datasets is a real pain in the ass. I successfully managed to merge on months in my previous datasets, however it seems I have a final dataset which only has quarter as a time count variable. So where all my normal databases have month 1-xxx as an indicator of time, this database has quarter as an indicator of time. I still want to add the variables of this last database, let's call it TVOL, into my WORK database.

    Quick summary:
    QUARTER: Quarter 0 = JAN1996-MAR1996
    MONTH:   Month 0 = JAN1996

    Example: TVOL
      TVOL    Ticker   Quarter
      1500    AA       -1
      52546   BB       15

    Example: WORK
      BETA    Ticker   Month
      1.52    AA       2
      1.54    BB       3

    Example: Merged
      BETA    TVOL    Ticker   Month
      1.52    500     AA       2

    I now want to merge these 2 tables using the following relationship: if the month is in quarter 1, the data of quarter 0 has to be used. So if I have an observation in WORK with date 2FEB1996, the TVOL of quarter -1 has to be put behind this observation. Something like: IF month is in quarter i, use data from quarter i-1. Also, as TVOL is measured quarterly and I have to put it in monthly, I have to take the average, so (TVOL/3) should be added as a variable. Thanks!
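
    The matching rule sketched in Python rather than SAS, purely to make it concrete (the literal values come from the examples above; month 0 = JAN1996 falls in quarter 0, and each WORK month is paired with TVOL from the previous quarter, averaged over its three months):

      tvol = {-1: 1500, 15: 52546}               # quarter -> TVOL, from the TVOL dataset
      work = [("AA", 2, 1.52), ("BB", 3, 1.54)]  # (ticker, month, beta), from the WORK dataset

      merged = []
      for ticker, month, beta in work:
          quarter = month // 3 - 1               # month 2 (MAR1996) -> quarter 0, minus 1 -> -1
          if quarter in tvol:
              merged.append((ticker, month, beta, tvol[quarter] / 3.0))

      print(merged)  # [('AA', 2, 1.52, 500.0)] - matches the Merged example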

    Read the article

  • dotNet Templated, Repeating, Databound ServerControl: Modifying underlying ServerControl data per te

    - by Campbeln
    I have a server control that wraps an underlying class which manages a number of indexes to track where it is in a dataset (i.e. RenderedRecordCount, ErroredRecordCount, NewRecordCount, etc.). I've got the server control rendering great, but OnDataBinding I'm having an issue, as it seems to happen after CreateChildControls and before Render (both of which properly manage the iteration of the underlying indexes). While I'm somewhat familiar with the ASP.NET page lifecycle, this one seems to be beyond me at the moment. So... how do I hook into the iterative process OnDataBinding uses so I can manage the underlying indexes? Will I have to iterate over the ITemplates myself, managing the indexes as I go, or is there an easier solution? [edit: Agh... writing the problem out is very cathartic... I'm thinking this is exactly what I will need to do...] Also... I implemented the iteration of the underlying indexes during CreateChildControls originally in the belief that that was the proper place to hook in for events like OnDataBinding (thinking it was done as the controls were being .Add'd). Now it seems that this may actually be unnecessary. So I guess the secondary question is: what happens during CreateChildControls? Are the unadulterated (read: with various <%-tags in place) controls added to the .Controls collection without any other processing?

    Read the article

  • Rails : fighting long http response times with ajax. Is it a good idea? Please, help with implementa

    - by baranov
    Hi, everybody! I've googled some tutorials, browsed some SO answers, and was unable to find a recipe for my problem. I'm writing a web site which is supposed to display almost realtime stock chart. Data is stored in constantly updating MySQL database, I wrote a find_by_sql query code which fetches all the data I need to get my chart drawn. Everything is ok, except performance - it takes from one second to one minute for different queries to fetch all the data from the database, this time includes necessary (My)SQL-server side calculations. This is simply unacceptable. I got the following idea: if the data is queried from the MySQL server one point a time instead of entire dataset, it takes only about 1-100ms to get an individual point. I imagine the data fetch process might be browser-driven. After the user presses the button in order to get a chart drawn, controller makes one request to the database and renders, say, a progress bar, say 1% ready. When the browser gets the response, it immediately makes an (ajax) request, and the server fetches the next piece of data and renders "2%". And so on, until all the data is ready and the server displays the requested chart. Could this be implemented in rails+js, is there a tutorial for solving a similar problem on the Web? I suppose if the thing is feasible at all, somebody should have already done this before. I have read several articles about ajax, I believe I do understand general principles, but never did nontrivial ajax programming myself. Thanks for your time!

    Read the article

  • Core Data Inferred Migration – Automatic "lightweight" vs Manual

    - by ohhorob
    I've updated the model of an existing iPhone app in some simple ways (remove attribute, add attribute, remove index), and can use automatic lightweight migration to migrate the persistent store. Due to the typical size of the data set, the processing time is not insignificant and warrants feedback for the user. NSMigrationManager provides a simple but useful migrationProgress value that sends KVO notifications as the migration is performed. That forms the basis of providing feedback; however, attempting to use an inferred model ([NSMappingModel inferredMappingModelForSourceModel:destinationModel:error:]) results in drastically different timing for the exact same dataset.

    Profile results on an original iPhone (2G):

    Automatic inferred lightweight migration
      PROFILE: CacheManager -migrateStore
      PROFILE: 0.6130 (+0.6130) models loaded
      PROFILE: 1.1759 (+0.5629) delegate -CacheManagerWillMigrate:
      PROFILE: 1.2516 (+0.0757) persistent store coordinator loaded
      PROFILE: 5.1436 (+3.8920) automatic lightweight migration completed
      PROFILE: 5.5435 (+0.3999) delegate -CacheManagerDidFinishMigration:withError:

    Manual inferred migration
      PROFILE: CacheManager -migrateStore
      PROFILE: 0.6660 (+0.6660) models loaded
      PROFILE: 1.1471 (+0.4811) inferred mapping model generated
      PROFILE: 1.4046 (+0.2574) delegate -CacheManagerWillMigrate:
      PROFILE: 1.5058 (+0.1013) persistent store coordinator loaded
      PROFILE: 22.6952 (+21.1894) manual migration completed
      PROFILE: 23.1478 (+0.4525) delegate -CacheManagerDidFinishMigration:withError:

    So, with an inferred model, the manual migration takes over 5 times longer than automatic! It's a big inconsistency, and the lightweight option provided by NSPersistentStoreCoordinator -addPersistentStoreWithType:configuration:URL:options:error: gives absolutely no indication of progress while processing. Can anybody provide a supported way to get the migrationProgress values during automatic migration, OR a way to configure an inferred mapping model to be as fast during manual processing as automatic?

    Read the article

  • How do I fetch a set of rows from data table

    - by cmrhema
    Hi, I have a dataset that has two datatables. In the first datatable I have EmpNo, EmpName and EmpAddress. In the second datatable I have EmpNo, EmpJoinDate, EmpSalary. I want a result where I show EmpName as the label and his/her details in the gridview. I populate a datalist with the first table, and have EmpNo as the datakeys. Then I populate the gridview with the table which has EmpNo, EmpJoinDate and EmpAddress. My code is somewhat as below:

      Datalist1.DataSource = dt;
      Datalist1.DataBind();
      for (int i = 0; i < Datalist1.Items.Count; i++)
      {
          int EmpNo = Convert.ToInt32(Datalist1.DataKeys[i]);
          GridView gv = (GridView)Datalist1.FindControl("gv");
          gv.DataSource = dt2;
          gv.DataBind();
      }

    Now I have a problem: I have to bind the details of the corresponding employee to the gridview, whereas the above code displays the details of all employees in the gridview. If we use IEnumerable we can give a condition such as Where(a => a.eno == EmpNo) and bind that list to the gridview. How do I do this with a DataTable? Kindly do not give me suggestions to alter the stored procedure which returns the values in two tables, because that cannot be altered. I have to find a solution within the existing objects I have. Regards, Hema

    Read the article

  • Assigning a RecID field to Gridview TemplateField (Checkbox column)

    - by user279521
    I want to assign a RecID to the checkbox column "cbPOID". The RecID field is being returned in my dataset, but should not be displayed in the gridview.

      <asp:GridView ID="gvOrders" runat="server" AutoGenerateColumns="False" CellPadding="4"
          GridLines="None" Width="100%" AllowPaging="True" PageSize="20"
          onpageindexchanging="gvOrders_PageIndexChanging" ForeColor="#333333">
          <Columns>
              <asp:TemplateField HeaderText="VerifiedComplete">
                  <ItemTemplate>
                      <asp:CheckBox ID="cbPOID" runat="server"/>
                  </ItemTemplate>
              </asp:TemplateField>
              <asp:BoundField DataField="PurchaseOrderID" HeaderText="PurchaseOrderID" HtmlEncode="False"></asp:BoundField>
              <asp:BoundField DataField="VENDOR_ID" HeaderText="Vendor ID"></asp:BoundField>
              <asp:BoundField DataField="VENDOR_NAME" HeaderText="Vendor Name"></asp:BoundField>
              <asp:BoundField DataField="ITEM_DESC" HeaderText="Item Desc"></asp:BoundField>
              <asp:BoundField DataField="SYS_DATE" HeaderText="System Date"></asp:BoundField>
          </Columns>
          <FooterStyle CssClass="GridFooter" BackColor="#990000" Font-Bold="True" ForeColor="White" />
          <PagerStyle CssClass="GridPager" ForeColor="#333333" BackColor="#FFCC66" HorizontalAlign="Center" />
          <SelectedRowStyle BackColor="#FFCC66" Font-Bold="True" ForeColor="Navy" />
          <HeaderStyle CssClass="GridHeader" BackColor="#990000" Font-Bold="True" ForeColor="White" />
          <RowStyle CssClass="GridItem" BackColor="#FFFBD6" ForeColor="#333333" />
          <AlternatingRowStyle CssClass="GridAltItem" BackColor="White" />
      </asp:GridView>

    Read the article

  • 3 tier application pattern suggestion

    - by Maxim Gershkovich
    I have attempted to make my first 3 tier application. In the process I have run into one problem I am yet to find an optimal solution for. Basically all my objects use an IFillable interface which forces the implementation of a sub as follows:

      Public Sub Fill(ByVal Datareader As Data.IDataReader) Implements IFillable.Fill

    This sub then expects the IDs from the datareader to be identical to the properties of the object, as such:

      Me.m_StockID = Datareader.GetGuid(Datareader.GetOrdinal("StockID"))

    In the end I end up with a datalayer that looks something like this:

      Public Shared Function GetStockByID(ByVal ConnectionString As String, ByVal StockID As Guid) As Stock
          Dim res As New Stock
          Using sqlConn As New SqlConnection(ConnectionString)
              sqlConn.Open()
              res.Fill(StockDataLayer.GetStockByIDQuery(sqlConn, StockID))
          End Using
          Return res
      End Function

    Mostly this pattern seems to make sense. However, my problem is: let's say I want to implement a property for Stock called StockBarcodeList. Under the above mentioned pattern, any way I implement this property I will need to pass a connection string to it, which obviously breaks my attempt at layer separation. Does anyone have any suggestions on how I might be able to solve this problem, or am I going about this the completely wrong way? Does anyone have any suggestions on how I might improve my implementation? Please note however that I am deliberately trying to avoid using the dataset in any form.

    Read the article

  • Finding Common Phrases in MS SQL TEXT Column

    - by regex
    Hello All,

    Short Desc: I'm curious to see if I can use SQL Analysis Services or some other MS SQL service to mine some data for me that will show commonalities between SQL TEXT fields in a dataset.

    Long Desc: I am looking at a subset of data that consists of about 10,000 rows of TEXT blobs which are used as a notes column in an issue tracking (ticketing) system. I would like to use something out of the box (without having to build something) that might be able to parse through all of the rows and find commonly used byte sequences in the "Notes" column. In other words, I want to find commonly used phrases (two to three word phrases, so 9-20 character sections of the TEXT blob). This will help me better determine whether associates' notes contain similar phrases (troubleshooting techniques) that we could standardize in our troubleshooting process flow.

    Closing Note: I'd really rather not build an application to do this, as my method would probably not be the most efficient way to do it. Hopefully all this makes sense. Please let me know in the comments if anything needs clarification. Thanks in advance for your help.
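
    For comparison, here is a minimal sketch of the same idea done outside the database in Python: pull the notes column out however is convenient, then count two- and three-word phrases. The sample data is made up; nothing here uses Analysis Services.

      import re
      from collections import Counter

      def common_phrases(notes, n_values=(2, 3), top=25):
          counts = Counter()
          for text in notes:
              words = re.findall(r"[a-z']+", text.lower())
              for n in n_values:
                  for i in range(len(words) - n + 1):
                      counts[" ".join(words[i:i + n])] += 1   # count each n-word window
          return counts.most_common(top)

      sample_notes = [
          "rebooted the router and cleared the cache",
          "cleared the cache then rebooted the router",
      ]
      for phrase, freq in common_phrases(sample_notes):
          print(freq, phrase)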

    Read the article

  • SSRS2005 timeout error

    - by jaspernygaard
    Hi, I've been running around in circles the last 2 days, trying to figure out a problem in our customer's live environment. I figured I might as well post it here, since Google gave me very limited information on the error message (5 results to be exact). The error boils down to a timeout when requesting a certain report in SSRS2005, when a certain parameter is used.

    The deployment scenario is:
    - Machine #1: running Reporting Services (SQL2005, W2K3, IIS6)
    - Machine #2: running the data warehouse database (SQL2005, W2K3), which is the data source for #1

    Both machines are running on the same VM cluster and LAN. The report requests a fairly simple SP - let's call it sp(param $a, param $b). When requested with param $a filled, it executes correctly. When using param $b, it times out after the global timeout period has passed. If I run the stored procedure with param $b directly from SQL Management Studio on #2, it returns the results perfectly fine (within 3-4s). I've profiled the data warehouse database on #2, and when param $b is used, the query from the reporting service to the database never reaches #2. The error message that I get upon timeout, when using param $b and invoking the report directly from the SSRS web interface, is: "An error has occurred during report processing. Cannot read the next data row for the data set DataSet. A severe error occurred on the current command. The results, if any, should be discarded. Operation cancelled by user." The ExecutionLog for SSRS doesn't give me much information besides the error message rsProcessingAborted. I'm running out of ideas of how to nail this problem, so I would greatly appreciate any comments, suggestions or ideas. Thanks in advance!

    Read the article

  • why DbCommandBuilder (Oracle) produces weird WHERE-clause to UpdateCommand?

    - by matti
    I have a table HolidayHome in an Oracle DB which has a unique DB index on Id (I haven't specified this in the code in any way for adapter/table/dataset; I don't know if I should/can). DbDataAdapter.SelectCommand is like this:

      SELECT Id, ExtId, Label, Location1, Location2, Location3, Location4, ClassId, X, Y, UseType
      FROM HolidayHome

    but the UpdateCommand generated by DbCommandBuilder has a very weird WHERE clause:

      UPDATE HOLIDAYHOME
      SET ID = :p1, EXTID = :p2, LABEL = :p3, LOCATION1 = :p4, LOCATION2 = :p5, LOCATION3 = :p6,
          LOCATION4 = :p7, CLASSID = :p8, X = :p9, Y = :p10, USETYPE = :p11
      WHERE ((ID = :p12)
          AND ((:p13 = 1 AND EXTID IS NULL) OR (EXTID = :p14))
          AND ((:p15 = 1 AND LABEL IS NULL) OR (LABEL = :p16))
          AND ((:p17 = 1 AND LOCATION1 IS NULL) OR (LOCATION1 = :p18))
          AND ((:p19 = 1 AND LOCATION2 IS NULL) OR (LOCATION2 = :p20))
          AND ((:p21 = 1 AND LOCATION3 IS NULL) OR (LOCATION3 = :p22))
          AND ((:p23 = 1 AND LOCATION4 IS NULL) OR (LOCATION4 = :p24))
          AND (CLASSID = :p25) AND (X = :p26) AND (Y = :p27) AND (USETYPE = :p28))

    All these fields that have clauses like

      ((:p17 = 1 AND LOCATION1 IS NULL) OR (LOCATION1 = :p18))

    are defined in the Oracle DB like this:

      LOCATION1 VARCHAR2(30)

    so they allow null values. The code looks like this:

      static bool CreateInsertUpdateDeleteCmds(DbDataAdapter dataAdapter)
      {
          DbCommandBuilder builder = _trgtProvFactory.CreateCommandBuilder();
          builder.DataAdapter = dataAdapter;
          // Get the insert, update and delete commands.
          dataAdapter.InsertCommand = builder.GetInsertCommand();
          dataAdapter.UpdateCommand = builder.GetUpdateCommand();
          dataAdapter.DeleteCommand = builder.GetDeleteCommand();
      }

    What to do? The UpdateCommand is utter madness. Thanks & Best Regards: Matti

    Read the article

  • How do I make the following interaction with mySQL more efficient?

    - by Travis
    I've got an array that contains combinations of unique MySql IDs: For example: [ [1,10,11], [2,10], [3,10,12], [3,12,13,20], [4,12] ] In total there are a couple hundred different combinations of IDs. Some of these combinations are "valid" and some are not. For example, [1,10,11] may be a valid combination, whereas [3,10,12] may be invalid. Combinations are valid or invalid depending on how the data is arranged in the database. Currently I am using a SELECT statement to determine whether or not a specific combination of IDs is valid. It looks something like this: SELECT id1 FROM table WHERE id2 IN ($combination) GROUP BY id1 HAVING COUNT(distinct id2) = $number ...where $combination is one possible combination of IDs (eg 1,10,11) and $number is the number of IDs in that combination (in this case, 3). An invalid combination will return 0 rows. A valid combination will return 1 or more rows. However, to solve the entire set of possible combinations means looping a couple hundred SELECT statements, which I would rather not be doing. I am wondering: Are there any tricks for making this more efficient? Is it possible to submit the entire dataset to mySQL in one go, and have mySQL iterate through it? Any suggestions would be much appreciated. Thanks in advance!
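
    One way to cut this down to a single round trip, sketched with MySQLdb-style calls (the table and column names are placeholders, and this assumes the full pair table fits comfortably in memory): pull all (id1, id2) pairs once, build a dict of id1 -> set of id2 values, and test each combination in memory. A combination is valid exactly when some id1 covers all of its IDs, which mirrors the HAVING COUNT(distinct id2) = $number check.

      from collections import defaultdict

      import MySQLdb  # assumes the MySQLdb / mysqlclient package

      combinations = [[1, 10, 11], [2, 10], [3, 10, 12], [3, 12, 13, 20], [4, 12]]

      conn = MySQLdb.connect(host="localhost", user="user", passwd="secret", db="mydb")
      cur = conn.cursor()
      cur.execute("SELECT id1, id2 FROM my_table")   # one query for the whole mapping

      id2s_by_id1 = defaultdict(set)
      for id1, id2 in cur.fetchall():
          id2s_by_id1[id1].add(id2)

      valid = [c for c in combinations
               if any(set(c) <= id2_set for id2_set in id2s_by_id1.values())]
      print(valid)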

    Read the article

  • Save PyML.classifiers.multi.OneAgainstRest(SVM()) object?

    - by Michael Aaron Safyan
    I'm using PyML to construct a multiclass linear support vector machine (SVM). After training the SVM, I would like to be able to save the classifier, so that on subsequent runs I can use the classifier right away without retraining. Unfortunately, the .save() function is not implemented for that classifier, and attempting to pickle it (both with standard pickle and cPickle) yields the following error message:

      pickle.PicklingError: Can't pickle <type 'PySwigObject'>: it's not found as __builtin__.PySwigObject

    Does anyone know of a way around this, or of an alternative library without this problem? Thanks.

    Edit/Update: I am now training and attempting to save the classifier with the following code:

      mc = multi.OneAgainstRest(SVM());
      mc.train(dataset_pyml, saveSpace=False);
      for i, classifier in enumerate(mc.classifiers):
          filename = os.path.join(prefix, labels[i] + ".svm");
          classifier.save(filename);

    Notice that I am now saving with the PyML save mechanism rather than with pickling, and that I have passed "saveSpace=False" to the training function. However, I am still getting an error:

      ValueError: in order to save a dataset you need to train as: s.train(data, saveSpace = False)

    However, I am passing saveSpace=False... so, how do I save the classifier(s)?

    P.S. The project I am using this in is pyimgattr, in case you would like a complete testable example... the program is run with "./pyimgattr.py train"... that will get you this error. Also, a note on version information:

      [michaelsafyan@codemage /Volumes/Storage/classes/cse559/pyimgattr]$ python
      Python 2.6.1 (r261:67515, Feb 11 2010, 00:51:29)
      [GCC 4.2.1 (Apple Inc. build 5646)] on darwin
      Type "help", "copyright", "credits" or "license" for more information.
      >>> import PyML
      >>> print PyML.__version__
      0.7.0

    Read the article

  • .NET How would I build a DAL to meet my requirements?

    - by Jonno
    Assuming that I must deploy an ASP.NET app over the following 3 servers:

    1) DB - not public
    2) 'Middle' - not public
    3) Web server - public

    I am not allowed to connect from the web server to the DB directly. I must pass through 'middle' - this is purely to slow down an attacker if they breached the web server. All DB access is via stored procedures. No table access. I simply want to provide the web server with an ADO dataset (I know many will dislike this, but this is the requirement). Using asmx web services it works, but XML serialisation is slow and it's an extra set of code to maintain and deploy. Using an ssh/vpn tunnel, so that one connects to the DB 'via' the middle server, seems to remove any possible benefit of maintaining 'middle'. Using WCF binary/tcp removes the XML problem, but there is still extra code. Is there an approach that provides the ease of ssh/vpn, but the potential benefit of having the DAL on the middle server? Many thanks.

    Read the article

  • Accessing XML file using JavaScript And ASP.net |VB code

    - by Bubba
    Am trying to read in data from an xml file but using javascript which is embedded into my asp.net|vb code. I am new to asp.net but coming from a programming background. so I declared the xml objects for the appropriate browsers, as well as the name of the local xml to read data from, I then start by appending the create the table tag and then append it to the div tag in hack5.aspx I declare the variable that will represent/ hold the xml returned data object. I then run a for loop , before creating a row tag and then appending it to the div tag in hack5.aspx I then create the a row tag and then appending it to the div tag in hack5.aspx | then create a TextNode which is passed to variable, then create a td and append to div . then lastly append the textnode to td this format is the same for creating another 13 td tags that are to hold the data. The main problem is when I run the script - I see nothing display on my screen . no errors are shown, but with your sample code runs smoothly. So the first file hack5.aspx is as follows: <%@ Page Language="VB" AutoEventWireup="false" CodeFile="hack5.aspx.vb" Inherits="_Default" %> <!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Transitional//EN" "http://www.w3.org/TR/xhtml1/DTD/xhtml1-transitional.dtd"> <html xmlns="http://www.w3.org/1999/xhtml" > <head runat="server"> <title>Diplaying MessageBox from ASP.NET</title> </head> <body> <form id="form1" runat="server"> <div id="showtime" > </div> </form> </body> </html> The next file hack5.aspx.vb is as follows: Partial Class _Default Inherits System.Web.UI.Page Protected Sub Page_Load(ByVal sender As Object, ByVal e As System.EventArgs) Handles Me.Load Dim scriptString as String = "<script language=JavaScript> if (window.XMLHttpRequest) " scriptString += " { " scriptString += " xhttp=new XMLHttpRequest(); " scriptString += " } " scriptString += " else " scriptString += " { " scriptString += " xhttp=new ActiveXObject('Microsoft.XMLHTTP'); " scriptString += " } " scriptString += " xhttp.open('GET','yes.xml',false); " scriptString += " xhttp.send(null);" scriptString += " xmlDoc= xhttp.responseXML; " scriptString += " var table1 = document.createElement('table'); " scriptString += " document.getElementById('showtime').appendChild(table1); " scriptString += " var x=xmlDoc.getElementsByTagName('Table'); " scriptString += " for (i=0;i<x.length;i++) " scriptString += " { " scriptString += " var assessment = document.createTextNode(x[i].getElementsByTagName('Assessment')[0].childNodes[0].nodeValue);" scriptString += " var row1 = document.createElement('tr'); " scriptString += " document.getElementById('showtime').appendChild(row1); " scriptString += " var column1 = document.createElement('td'); " scriptString += " document.getElementById('showtime').appendChild(column1); " scriptString += " column1.appendChild(assessment); " scriptString += " var Issue_Date = document.createTextNode(x[i].getElementsByTagName('Issue_Date')[0].childNodes[0].nodeValue);" scriptString += " var column2 = document.createElement('td'); " scriptString += " document.getElementById('showtime').appendChild(column2); " scriptString += " column2.appendChild(Issue_Date); " scriptString += " var Due_Date = document.createTextNode(x[i].getElementsByTagName('Due_Date')[0].childNodes[0].nodeValue);" scriptString += " var column3 = document.createElement('td'); " scriptString += " document.getElementById('showtime').appendChild(column3); " scriptString += " column3.appendChild(Due_Date); " scriptString += " var Interest = 
document.createTextNode(x[i].getElementsByTagName('Interest')[0].childNodes[0].nodeValue);" scriptString += " var column4 = document.createElement('td'); " scriptString += " document.getElementById('showtime').appendChild(column4); " scriptString += " column4.appendChild(Interest); " scriptString += " var Summary = document.createTextNode(x[i].getElementsByTagName('Summary')[0].childNodes[0].nodeValue);" scriptString += " var column5 = document.createElement('td'); " scriptString += " document.getElementById('showtime').appendChild(column5); " scriptString += " column5.appendChild(Summary);" scriptString += " var Amount_Due= document.createTextNode(x[i].getElementsByTagName('Amount_Due')[0].childNodes[0].nodeValue);" scriptString += " var column6 = document.createElement('td'); " scriptString += " document.getElementById('showtime').appendChild(column6); " scriptString += " column6.appendChild(Amount_Due);" scriptString += " var IEduty = document.createTextNode(x[i].getElementsByTagName('IEduty')[0].childNodes[0].nodeValue);" scriptString += " var column7 = document.createElement('td'); " scriptString += " document.getElementById('showtime').appendChild(column7); " scriptString += " column7.appendChild(IEduty);" scriptString += " var LEsurtax = document.createTextNode(x[i].getElementsByTagName('LEsurtax')[0].childNodes[0].nodeValue);" scriptString += " var column8 = document.createElement('td'); " scriptString += " document.getElementById('showtime').appendChild(column8); " scriptString += " column8.appendChild(LEsurtax);" scriptString += " var CEsurtax = document.createTextNode(x[i].getElementsByTagName('CEsurtax')[0].childNodes[0].nodeValue);" scriptString += " var column9 = document.createElement('td'); " scriptString += " document.getElementById('showtime').appendChild(column9); " scriptString += " column9.appendChild(CEsurtax);" scriptString += " var EXduty = document.createTextNode(x[i].getElementsByTagName('EXduty')[0].childNodes[0].nodeValue);" scriptString += " var column10 = document.createElement('td'); " scriptString += " document.getElementById('showtime').appendChild(column10); " scriptString += " column10.appendChild(EXduty);" scriptString += " var IMvat = document.createTextNode(x[i].getElementsByTagName('IMvat')[0].childNodes[0].nodeValue);" scriptString += " var column11 = document.createElement('td'); " scriptString += " document.getElementById('showtime').appendChild(column11); " scriptString += " column11.appendChild(IMvat);" scriptString += " var SYSfee = document.createTextNode(x[i].getElementsByTagName('SYSfee')[0].childNodes[0].nodeValue);" scriptString += " var column12 = document.createElement('td'); " scriptString += " document.getElementById('showtime').appendChild(column12); " scriptString += " column12.appendChild(SYSfee);" scriptString += " var AItax = document.createTextNode(x[i].getElementsByTagName('AItax')[0].childNodes[0].nodeValue);" scriptString += " var column13 = document.createElement('td'); " scriptString += " document.getElementById('showtime').appendChild(column13); " scriptString += " column13.appendChild(AItax);" scriptString += " var Cduty = document.createTextNode(x[i].getElementsByTagName('Cduty')[0].childNodes[0].nodeValue);" scriptString += " var column14 = document.createElement('td'); " scriptString += " document.getElementById('showtime').appendChild(column14); " scriptString += " column14.appendChild(Cduty);" scriptString += " } " scriptString += " <" scriptString += "/" scriptString += "script>" If(Not 
ClientScript.IsStartupScriptRegistered("clientScript")) ClientScript.RegisterClientScriptBlock(Me.GetType(),"clientScript", scriptString) End If End Sub End Class And finally the xml file is as follows: <?xml version="1.0" encoding="utf-8" ?> <DataSet xmlns="http://tempuri.org/"> <xs:schema id="NewDataSet" xmlns="" xmlns:xs="http://www.w3.org/2001/XMLSchema" xmlns:msdata="urn:schemas-microsoft-com:xml-msdata"> <xs:element name="NewDataSet" msdata:IsDataSet="true" msdata:UseCurrentLocale="true"> <xs:complexType> <xs:choice minOccurs="0" maxOccurs="unbounded"> <xs:element name="Table"> <xs:complexType> <xs:sequence> <xs:element name="UserName" type="xs:string" minOccurs="0" /> <xs:element name="Password" type="xs:string" minOccurs="0" /> <xs:element name="UserLevel" type="xs:string" minOccurs="0" /> <xs:element name="FName" type="xs:string" minOccurs="0" /> <xs:element name="LName" type="xs:string" minOccurs="0" /> <xs:element name="Branch" type="xs:string" minOccurs="0" /> <xs:element name="Department" type="xs:string" minOccurs="0" /> </xs:sequence> </xs:complexType> </xs:element> </xs:choice> </xs:complexType> </xs:element> </xs:schema> <diffgr:diffgram xmlns:msdata="urn:schemas-microsoft-com:xml-msdata" xmlns:diffgr="urn:schemas-microsoft-com:xml-diffgram-v1"> <NewDataSet xmlns=""> <Table diffgr:id="Table1" msdata:rowOrder="0"> <Assessment>CHR/A157/2009</Assessment> <Issue_Date>20/10/2009</Issue_Date> <Due_Date>01/11/2009</Due_Date> <Interest>2.00</Interest> <Summary>BENTLEY 2009</Summary> <Amount_Due>28000000.00</Amount_Due> <IEduty>3000000.00</IEduty> <LEsurtax>4000000.00</LEsurtax> <CEsurtax>5000000.00</CEsurtax> <EXduty>0.00</EXduty> <IMvat>5000000.00</IMvat> <SYSfee>8000000.00</SYSfee> <AItax>2000000.00</AItax> <Cduty>1000000.00</Cduty> </Table> <Table diffgr:id="Table1" msdata:rowOrder="1"> <Assessment>CHR/A167/2009</Assessment> <Issue_Date>20/10/2009</Issue_Date> <Due_Date>01/11/2009</Due_Date> <Interest>2.00</Interest> <Summary>BENTLEY 2009</Summary> <Amount_Due>24000000.00</Amount_Due> <IEduty>3000000.00</IEduty> <LEsurtax>4000000.00</LEsurtax> <CEsurtax>5000000.00</CEsurtax> <EXduty>0.00</EXduty> <IMvat>1000000.00</IMvat> <SYSfee>8000000.00</SYSfee> <AItax>2000000.00</AItax> <Cduty>1000000.00</Cduty> </Table> <Table diffgr:id="Table1" msdata:rowOrder="2"> <Assessment>CHR/A196/2009</Assessment> <Issue_Date>11/11/2009</Issue_Date> <Due_Date>21/11/2009</Due_Date> <Interest>2.00</Interest> <Summary>BENTLEY 2009</Summary> <Amount_Due>20000000.00</Amount_Due> <IEduty>3000000.00</IEduty> <LEsurtax>4000000.00</LEsurtax> <CEsurtax>5000000.00</CEsurtax> <EXduty>0.00</EXduty> <IMvat>1000000.00</IMvat> <SYSfee>4000000.00</SYSfee> <AItax>2000000.00</AItax> <Cduty>1000000.00</Cduty> </Table> </NewDataSet> </diffgr:diffgram> </DataSet>

    Read the article

  • Uiimport does not save variable to base workspace

    - by Tim
    I tried using uiimport to load a file to the base workspace. It worked the first time, but after trying again after a while I wasn't seeing the variable in the base workspace. I used the default variable name which is given by 'uiimport'. This was the command I used:

      uiimport(filename)

    And two variables were created by default, "data" and "textdata" (which is the header), but now when I run it, the result is no longer saved in the base workspace. I do not want to assign a variable to the uiimport like so:

      K = uiimport(filename)
      assignin('base', 'green', K)

    I do not want to do that because my dataset has a text header and the data itself, and doing this would assign both "textdata" and "data" to the "green" variable. How would I be able to get the dimensions of ONLY the "data" in green, and how would I pass only "data" (which is in the green variable in the workspace... remember, the green variable holds both "data" and "textdata") to another function? I was able to do all this when uiimport automatically saved the variables in the base workspace, but somehow now it doesn't. I would appreciate any help or tips on this matter.

    Read the article

  • How do I setup Linq to SQL and WCF

    - by Jisaak
    So I'm venturing out into the world of LINQ and WCF web services and I can't seem to make the magic happen. I have a VERY basic WCF web service going and I can get my old SqlConnection calls to work and return a DataSet. But I can't/don't know how to get the LINQ to SQL queries to work. I'm guessing it might be a permissions problem, since I need to connect to the SQL database with a specific set of credentials, but I don't know how I can test if that is the issue. I've tried using both of these connection strings and neither seems to give me a different result.

      <add name="GeoDataConnectionString"
           connectionString="Data Source=SQLSERVER;Initial Catalog=GeoData;Integrated Security=True"
           providerName="System.Data.SqlClient" />
      <add name="GeoDataConnectionString"
           connectionString="Data Source=SQLSERVER;Initial Catalog=GeoData;User ID=domain\userName; Password=blahblah; Trusted_Connection=true"
           providerName="System.Data.SqlClient" />

    Here is the function in my service that does the query, and I have the interface add the [OperationContract]:

      public string GetCity(int cityId)
      {
          GeoDataContext db = new GeoDataContext();
          var city = from c in db.Cities
                     where c.CITY_ID == 30429
                     select c.DESCRIPTION;
          return city.ToString();
      }

    The GeoData.dbml only has one simple table in it with a list of city IDs and city names. I have also changed the "Serialization Mode" on the DataContext to "Unidirectional", which from what I've read needs to be done for WCF. When I run the service I get this as the return:

      SELECT [t0].[DESCRIPTION] FROM [dbo].[Cities] AS [t0] WHERE [t0].[CITY_ID] = @p0

    Read the article

  • Temporary storage for keeping data between program iterations?

    - by mr.b
    I am working on an application that works like this: It fetches data from many sources, resulting in pool of about 500,000-1,500,000 records (depends on time/day) Data is parsed Part of data is processed in a way to compare it to pre-existing data (read from database), calculations are made, and stored in database. Resulting dataset that has to be stored in database is, however, much smaller in size (compared to original data set), and ranges from 5,000-50,000 records. This process almost always updates existing data, perhaps adds few more records. Then, data from step 2 should be kept somehow, somewhere, so that next time data is fetched, there is a data set which can be used to perform calculations, without touching pre-existing data in database. I should point out that this data can be lost, it's not irreplaceable (key information can be read from database if needed), but it would speed up the process next time. Application components can (and will be) run off different computers (in the same network), so storage has to be reachable from multiple hosts. I have considered using memcached, but I'm not quite sure should I do so, because one record is usually no smaller than 200 bytes, and if I have 1,500,000 records, I guess that it would amount to over 300 MB of memcached cache... But that doesn't seem scalable to me - what if data was 5x that amount? If it were to consume 1-2 GB of cache only to keep data in between iterations (which could easily happen)? So, the question is: which temporary storage mechanism would be most suitable for this kind of processing? I haven't considered using mysql temporary tables, as I'm not sure if they can persist between sessions, and be used by other hosts in network... Any other suggestion? Something I should consider?
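
    For what it's worth, the per-record memcached pattern being weighed above looks roughly like this with the python-memcached package; the host, key scheme and expiry are placeholders.

      import memcache

      mc = memcache.Client(["middle-host:11211"])

      def save_iteration(records):
          """records: dict of record_id -> small (~200 byte) payload from step 2."""
          for record_id, payload in records.items():
              mc.set("iter:%s" % record_id, payload, time=6 * 3600)  # expire after 6 hours

      def load_previous(record_ids):
          keys = ["iter:%s" % rid for rid in record_ids]
          found = mc.get_multi(keys)                                 # one round trip for many keys
          return {k.split(":", 1)[1]: v for k, v in found.items()}   # strip the key prefix

    A cache miss here would simply mean falling back to reading the key information from the database, which matches the "this data can be lost" constraint described above.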

    Read the article

  • How to create refresh statements for TableAdapter objects in Visual Studio?

    - by Mark Wilkins
    I am working on developing an ADO.NET data provider and an associated DDEX provider. I am unable to convince the Visual Studio TableAdapater Configuration Wizard to generate SQL statements to refresh the data table after inserts and updates. It generates the insert and delete statements but will not produce the select statements to do the refresh. The functionality referred to can be accessed by dropping a table from the Server Explorer (inside Visual Studio) onto a DataSet (e.g., DataSet1.xsd). It creates a TableAdapter object and configures SELECT, UPDATE, DELETE, and INSERT statements. If you right click on the TableAdapter object, the context menu has a “Configure” option that starts the “TableAdapter Configuration Wizard”. The first dialog of that wizard has an Advanced Options button, which leads to an option titled “Refresh the data table”. When used with SQL Server tables, that option causes a statement of the form “select field1, field2, …” to be added on to the end of the commands for the TableAdapter’s InsertCommand and UpdateCommand. Do you have any idea what type property or interface might need to be exposed from the DDEX provider (or maybe the ADO.NET data provider) in order to make Visual Studio add those refresh statements to the update/insert commands? The MSDN documentation for the Advanced SQL Generation Options Dialog Box has a note stating, “Refreshing the data table is only supported on databases that support batching of SQL statements.” This seems to imply that a .NET data provider might need to expose some property indicating such behavior is supported. But I cannot find it. Any ideas?

    Read the article

  • Specify which xml file to load when link is clicked

    - by Jason
    Good morning, I would like it so that when a link is clicked on the homepage it loads a particular XML file into the next page (the page is called category-list.aspx). This category list page uses the Repeater control method to display the XML details on the page. I used the example shown here: http://www.w3schools.com/aspnet/aspnet_repeater.asp

    So at the moment the repeater script looks like:

      <script runat="server">
      sub Page_Load
          if Not Page.IsPostBack then
              dim mycategories=New DataSet
              mycategories.ReadXml(MapPath("categories.xml"))
              categories.DataSource=mycategories
              categories.DataBind()
          end if
      end sub
      </script>

    After doing some research I did find someone with the same problem, and the solution was to insert #tags as part of the link on the homepage (i.e. category-list.aspx#company1results) and then some script on the list page to pick up the correct XML file:

      <script type="text/javascript">
      var old_onload = window.onload;
      // Play it safe by respecting onload handlers set by other scripts.
      window.onload = function() {
          var categories = document.location.href.substring(document.location.href.indexOf("#")+1);
          loadXMLDoc('XML/'+categories+'.xml');
          old_onload();
      }
      </script>

    This was from the following link: http://www.hotscripts.com/forums/javascript/45641-solved-specify-xml-file-load-when-click-link.html

    How can I get these two scripts to connect with each other? Thank you for your time.

    Read the article

  • Finding Common Byte Sequences in MS SQL TEXT Column

    - by regex
    Hello All,

    Short Desc: I'm curious to see if I can use SQL Analysis Services or some other MS SQL service to mine some data for me that will show commonalities between SQL TEXT fields in a dataset.

    Long Desc: I am looking at a subset of data that consists of about 10,000 rows of TEXT blobs which are used as a notes column in an issue tracking (ticketing) system. I would like to use something out of the box (without having to build something) that might be able to parse through all of the rows and find commonly used byte sequences in the "Notes" column. In other words, I want to find commonly used phrases (two to three word phrases, so 9-20 character sections of the TEXT blob). This will help me better determine whether associates' notes contain similar phrases (troubleshooting techniques) that we could standardize in our troubleshooting process flow.

    Closing Note: I'd really rather not build an application to do this, as my method would probably not be the most efficient way to do it. Hopefully all this makes sense. Please let me know in the comments if anything needs clarification. Thanks in advance for your help.

    Read the article
