Search Results

Search found 3135 results on 126 pages for 'typed dataset'.

Page 22 of 126

  • Error Handling Examples(C#)

    “The purpose of reviewing the Error Handling code is to assure that the application fails safely under all possible error conditions, expected and unexpected. No sensitive information is presented to the user when an error occurs.” (OWASP, 2011) No Error Handling The absence of error handling is still a form of error handling. Based on the code in Figure 1, if an error occurred and was not handled within either the ReadXml or BuildRequest methods, the error would bubble up to the Search method. Since this method does not handle any exceptions, the error will then bubble up the call stack. If this continues and the error is never handled within the application, then the environment in which the application is running will notify the user that an error occurred, in a way that depends on the type of application. Figure 1: No Error Handling public DataSet Search(string searchTerm, int resultCount) { DataSet dt = new DataSet(); dt.ReadXml(BuildRequest(searchTerm, resultCount)); return dt; } Generic Error Handling One simple way to add error handling is to catch all errors by default. If you examine the code in Figure 2, you will see a try-catch block. On April 6th, 2010, Louis Lazaris clearly described a try-catch statement by defining both the try and catch aspects of the statement: “The try portion is where you would put any code that might throw an error. In other words, all significant code should go in the try section. The catch section will also hold code, but that section is not vital to the running of the application. So, if you removed the try-catch statement altogether, the section of code inside the try part would still be the same, but all the code inside the catch would be removed.” (Lazaris, 2010) He also states that any error that occurs in the try section stops the execution of the try section and redirects execution to the catch section. The catch section receives an object containing information about the error that occurred so that the system can handle the error gracefully. When errors occur they are commonly logged in some form. This form could be an email, a database entry, a web service call, a log file, or just an error message displayed to the user. Depending on the error, sometimes an application can recover, while other errors force the application to close. Figure 2: Generic Error Handling public DataSet Search(string searchTerm, int resultCount) { DataSet dt = new DataSet(); try { dt.ReadXml(BuildRequest(searchTerm, resultCount)); } catch (Exception ex) { // Handle all Exceptions } return dt; } Error Specific Error Handling Like generic error handling, error-specific error handling allows for the catching of errors, but only specific known errors. For example, wrapping a try-catch statement around a SOAP web service call would allow the application to handle any error generated by the SOAP web service. Now, suppose the system wanted to send a message to the web service provider every time a SOAP error occurred, but did not want to notify them when any other type of error occurred, like a network timeout. This would be very tedious to accomplish using the generic error handling methodology, which brings us to the use case for error-specific error handling: it allows the try-catch statement to catch, and handle differently, various types of errors depending on the type of error that occurred.
In Figure 3, the code attempts to handle TimeoutException differently compared to how it potentially handles all other errors. This allows for specific error handling for each type of known error, while still allowing for error handling of any unknown error that may occur during the execution of the try-catch statement. Figure 3: Error Specific Error Handling public DataSet Search(string searchTerm, int resultCount) { DataSet dt = new DataSet(); try { dt.ReadXml(BuildRequest(searchTerm, resultCount)); } catch (TimeoutException ex) { // Handle TimeoutException only } catch (Exception) { // Handle all other Exceptions } return dt; }
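    A minimal sketch of the error-specific pattern with the logging the article mentions wired in; the notification helpers and the request URL format here are hypothetical placeholders, not part of the original figures:

        using System;
        using System.Data;

        public class SearchClient
        {
            // Hypothetical stand-ins for a real notification/logging layer (email, database entry, log file, ...).
            private static void NotifyServiceProvider(Exception ex) { Console.Error.WriteLine("Timeout fault: " + ex.Message); }
            private static void LogError(Exception ex) { Console.Error.WriteLine("Unexpected error: " + ex.Message); }

            // Assumed to build a URL or file path that DataSet.ReadXml can consume.
            private static string BuildRequest(string searchTerm, int resultCount)
            {
                return string.Format("http://example.com/search?q={0}&count={1}",
                                     Uri.EscapeDataString(searchTerm), resultCount);
            }

            public DataSet Search(string searchTerm, int resultCount)
            {
                DataSet dt = new DataSet();
                try
                {
                    dt.ReadXml(BuildRequest(searchTerm, resultCount));
                }
                catch (TimeoutException ex)
                {
                    // Known error type: notify the provider, as described above.
                    NotifyServiceProvider(ex);
                }
                catch (Exception ex)
                {
                    // Fallback for any other, unknown error.
                    LogError(ex);
                }
                return dt;
            }
        }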

    Read the article

  • iPhone SDK: How to store the time each word was typed?

    - by Harkonian
    My problem is twofold: 1) I'm trying to determine an elegant way to allow the user to type into a UITextView and store the time each word was typed into an array. The time will be a float which starts at 0 when the user begins to type. 2) Conversely, I'd like the user to be able to tap on a word in the UITextView and display the time that word was typed (displaying it in an NSLog() is fine). Considerations that may throw a wrench into a possible approach: what if the user goes back to the top of the text, or to the middle, and starts typing? Even a suggested approach without code would be appreciated, because right now I'm drawing a blank.

    Read the article

  • DB2 load partitioned data in parallel

    - by Erik Paulson
    I have a 10-node DB2 9.5 database, with raw data on each machine (i.e. node1:/scratch/data/dataset.1, node2:/scratch/data/dataset.2, ... node10:/scratch/data/dataset.10). There is no shared NFS mount - none of my machines could handle all of the datasets combined. Each line of a dataset file is a long string of column-delimited text. The first column is the key. I don't know the hash function that DB2 will use, so the dataset is not pre-partitioned. Short of renaming all of my files, is there any way to get DB2 to load them all in parallel? I'm trying to do something like load from '/scratch/data/dataset' of del modified by coldel| fastparse messages /dev/null replace into TESTDB.data_table part_file_location '/scratch/data'; but I have no idea how to suggest to db2 that it should look for dataset.1 on the first node, etc.

    Read the article

  • How/where to run the algorithm on large dataset?

    - by niko
    I would like to run the PageRank algorithm on a graph with 4,000,000 nodes and around 45,000,000 edges. Currently I use the neo4j graph database and a classic relational database (Postgres), and for software projects I mostly use C# and Java. Does anyone know what would be the best way to perform a PageRank computation on such a graph? Is there any way to modify the PageRank algorithm in order to run it on a home computer or a server (48GB RAM), or is there any useful cloud service to push the data along with the algorithm and retrieve the results? At this stage the project is at the research stage, so if a cloud service is used, I would prefer a provider that doesn't require much administration and service setup, letting me focus on just running the algorithm once and getting the results without much administrative overhead.
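    For scale, 4,000,000 nodes and 45,000,000 edges fit comfortably in 48GB of RAM; as a rough illustration (not a recommendation of any particular service), a minimal single-machine power-iteration sketch in C# over an in-memory adjacency list might look like this:

        using System;
        using System.Collections.Generic;

        public static class PageRank
        {
            // outLinks[i] holds the node ids that node i links to.
            public static double[] Compute(List<int>[] outLinks, double damping = 0.85, int iterations = 50)
            {
                int n = outLinks.Length;
                double[] rank = new double[n];
                double[] next = new double[n];
                for (int i = 0; i < n; i++) rank[i] = 1.0 / n;

                for (int iter = 0; iter < iterations; iter++)
                {
                    for (int i = 0; i < n; i++) next[i] = (1.0 - damping) / n;

                    for (int i = 0; i < n; i++)
                    {
                        List<int> targets = outLinks[i];
                        if (targets == null || targets.Count == 0) continue; // dangling nodes skipped for brevity
                        double share = damping * rank[i] / targets.Count;
                        foreach (int t in targets) next[t] += share;
                    }

                    double[] tmp = rank; rank = next; next = tmp; // reuse buffers between iterations
                }
                return rank;
            }
        }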

    Read the article

  • Cleaning a dataset of song data - what sort of problem is this?

    - by Rob Lourens
    I have a set of data about songs. Each entry is a line of text which includes the artist name, song title, and some extra text. Some entries are only "extra text". My goal is to resolve as many of these as possible to songs on Spotify using their web API. My strategy so far has been to search for the entry via the API - if there are no results, apply a transformation such as "remove all text between ( )" and search again. I have a list of heuristics and I've had reasonable success with this but as the code gets more and more convoluted I keep thinking there must be a more generic and consistent way. I don't know where to look - any suggestions for what to try, topics to study, buzzwords to google?
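    One way to keep the heuristics manageable is to treat them as an ordered pipeline of normalization functions and retry the search after each one; a minimal C# sketch of that idea (the Spotify lookup is a hypothetical delegate, and the regexes are only examples of the transformations described above):

        using System;
        using System.Collections.Generic;
        using System.Text.RegularExpressions;

        public static class SongResolver
        {
            // Ordered list of normalizations, cheapest and safest first.
            private static readonly List<Func<string, string>> Transforms = new List<Func<string, string>>
            {
                s => s,                                        // try the raw entry first
                s => Regex.Replace(s, @"\([^)]*\)", ""),       // remove all text between ( )
                s => Regex.Replace(s, @"\[[^\]]*\]", ""),      // remove all text between [ ]
                s => Regex.Replace(s, @"\s+", " ").Trim()      // collapse whitespace
            };

            // searchSpotify stands in for the real web API call; it returns null when there is no match.
            public static string Resolve(string entry, Func<string, string> searchSpotify)
            {
                foreach (Func<string, string> transform in Transforms)
                {
                    string match = searchSpotify(transform(entry));
                    if (match != null) return match;
                }
                return null; // unresolved
            }
        }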

    Read the article

  • Static DataTable or DataSet in a class - bad idea?

    - by Superbest
    I have several instances of a class. Each instance stores data in a common database. So, I thought "I'll make the DataTable table field static; that way every instance can just add/modify rows through its own table field, but all the data will actually be in one place!" However, apparently it's a bad idea to use static fields, especially for database objects: Don't Use "Static" in C#? Is this a bad idea? Will I run into problems later on if I use it? This is a small project, so I can accept no testing as a compromise if that is the only drawback. The benefit of using a static database is that there can be many objects of type MyClass, but only one table they all talk to, so a static field seems to be an implementation of exactly this, while keeping the syntax concise. I don't see why I shouldn't use a static field (although I wouldn't really know), but if I had to, the best alternative I can think of is creating one DataTable and passing a reference to it when creating each instance of MyClass, perhaps as a constructor parameter. But is this really an improvement? It seems less intuitive than a static field.
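    A minimal sketch of that constructor-parameter alternative (the column names are made up for the example): every instance holds a reference to the same DataTable, so the data still lives in one place, just without static state:

        using System.Data;

        public class MyClass
        {
            private readonly DataTable _table; // shared reference, not a copy

            public MyClass(DataTable table)
            {
                _table = table;
            }

            public void AddRow(string name, int value)
            {
                DataRow row = _table.NewRow();
                row["Name"] = name;
                row["Value"] = value;
                _table.Rows.Add(row);
            }
        }

        // Usage: one table, many instances.
        // DataTable shared = new DataTable("Common");
        // shared.Columns.Add("Name", typeof(string));
        // shared.Columns.Add("Value", typeof(int));
        // MyClass a = new MyClass(shared);
        // MyClass b = new MyClass(shared);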

    Read the article

  • What is the best way to INSERT a large dataset into a MySQL database (or any database in general)

    - by Joe
    As part of a PHP project, I have to insert a row into a MySQL database. I'm obviously used to doing this, but this required inserting into 90 columns in one query. The resulting query looks horrible and monolithic (especially with my PHP variables inserted as the values): INSERT INTO mytable (column1, column2, ..., column90) VALUES ('value1', 'value2', ..., 'value90') and I'm concerned that I'm not going about this in the right way. It also took me a long (boring) time just to type everything in, and I fear writing the test code will be equally tedious. How do professionals go about quickly writing and testing these queries? Is there a way I can speed up the process?
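    The question is about PHP, but the general technique, shown here as a C# / ADO.NET sketch to match the other examples on this page, is to generate the column list and the parameter placeholders from one map so that neither has to be typed by hand (table and column names are placeholders):

        using System.Collections.Generic;
        using System.Data;
        using System.Linq;

        public static class InsertBuilder
        {
            // Builds "INSERT INTO <table> (c1, c2, ...) VALUES (@c1, @c2, ...)" from a dictionary,
            // adding one parameter per column so values are never concatenated into the SQL string.
            public static IDbCommand Build(IDbConnection connection, string table, IDictionary<string, object> values)
            {
                IDbCommand cmd = connection.CreateCommand();
                string columns = string.Join(", ", values.Keys.ToArray());
                string placeholders = string.Join(", ", values.Keys.Select(k => "@" + k).ToArray());
                cmd.CommandText = string.Format("INSERT INTO {0} ({1}) VALUES ({2})", table, columns, placeholders);

                foreach (KeyValuePair<string, object> pair in values)
                {
                    IDbDataParameter p = cmd.CreateParameter();
                    p.ParameterName = "@" + pair.Key;
                    p.Value = pair.Value;
                    cmd.Parameters.Add(p);
                }
                return cmd;
            }
        }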

    Read the article

  • Why would a TableAdapter populate a DataSet with "1/1/2000" for an entire timestamp column?

    - by Rob
    I have a TableAdapter filling a DataSet, and for some reason every select query populates my timestamp column with the value 1/1/2000 for every selected row. I first verified that the original values are intact on the DB side; for the most part they are, although it seems a few rows lost their original timestamp because of update queries performed programmatically before the issue was discovered. The DataColumn type is DateTime, while the database (Postgres) column type is timestamp. Up until recently, this was all playing very nicely. I noticed the issue in a bound DataGridView control, and verified that this is not related to data binding by utilizing the 'Preview Data' option in the VS DataSet Editor. Usually when I notice unexpected values popping up in my application it's related to a mis-configured property, a type conflict, or another silly mistake I've made. So after checking properties and types, and even recreating the TableAdapter from scratch, to say I'm a little baffled is an understatement. Does anyone have any ideas of what I could do to fix the issue and/or diagnose the cause?

    Read the article

  • Saving a dataset to Excel and allowing the user to download it on the client machine

    - by Jebli
    Hi, I am developing an application in which I am displaying a dataset in a data grid view for the user. Now the user wants to download the data in the data grid view in Excel format. How can I do it? 1) Should I write the dataset to Excel and save it on the server before the user downloads the file? 2) Can I use a hyperlink and set the path of the file saved on the server as the hyperlink's HRef property, so that the user can click and download the file? I am using C# ASP.NET 2.0. Please help!
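    One common approach for this kind of download, without saving a file on the server at all, is to stream the grid's contents straight to the browser with an Excel content type; this is only a sketch, and GridToExport stands in for whatever grid control is bound to the dataset:

        using System;
        using System.IO;
        using System.Web.UI;
        using System.Web.UI.WebControls;

        public partial class ExportPage : Page
        {
            // Normally declared in the designer file; shown here so the sketch is self-contained.
            protected GridView GridToExport;

            protected void btnExport_Click(object sender, EventArgs e)
            {
                Response.Clear();
                Response.AddHeader("content-disposition", "attachment; filename=export.xls");
                Response.ContentType = "application/vnd.ms-excel";

                StringWriter sw = new StringWriter();
                HtmlTextWriter htw = new HtmlTextWriter(sw);
                GridToExport.RenderControl(htw); // render the grid's HTML, which Excel will open

                Response.Write(sw.ToString());
                Response.End();
            }

            // Required so RenderControl can be called outside normal page rendering.
            public override void VerifyRenderingInServerForm(Control control)
            {
            }
        }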

    Read the article

  • How to import datarow from one dataset to another?

    - by Disciple
    Hi, I want to copy a row to the new dataset only if the value in its second column hasn't already been copied (i.e. this dataset should have rows with unique values in that column): DataTable tbl = new DataTable(); DataTable tmpTable = ds.Tables[0]; for( var rowIndex = 0; rowIndex < ds.Tables[0].Rows.Count; rowIndex++ ) { object value = null; foreach (DataRow x in tbl.Rows) { if (ds.Tables[0].Rows[rowIndex][1] == x[1]) { value = ds.Tables[0].Rows[rowIndex][1]; break; } } // import only if the value was not already found if (value == null) { tbl.ImportRow(ds.Tables[0].Rows[rowIndex]); } } How do I do this correctly? Maybe one loop instead of two?
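    A single-pass version of the loop above keeps the already-seen values in a set instead of re-scanning the target table; note also that comparing two boxed object values with == in the original is a reference comparison, which is usually not what is wanted here. This sketch assumes .NET 3.5+ for HashSet and that the second column's values compare correctly with Equals:

        using System.Collections.Generic;
        using System.Data;

        public static class UniqueImport
        {
            // Copies rows from source into a new table, keeping only the first row
            // seen for each distinct value in column index 1.
            public static DataTable ImportUniqueRows(DataTable source)
            {
                DataTable result = source.Clone();            // same schema, no rows
                HashSet<object> seen = new HashSet<object>(); // column-1 values already imported

                foreach (DataRow row in source.Rows)
                {
                    if (seen.Add(row[1]))                     // Add returns false if the value was already present
                    {
                        result.ImportRow(row);
                    }
                }
                return result;
            }
        }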

    Read the article

  • ASP.NET MVC 2 - How do I use an Interface as the Type for a Strongly Typed View

    - by Rake36
    I'd like to keep my concrete classes separate from my views. Without using strongly typed views, I'm fine. I just use a big parameter list in the controller method signatures and then use my service layer factory methods to create my concrete objects. This is actually just fine with me, but it got me thinking and after a little playing, I realized it was literally impossible for a controller method to accept an interface as a method parameter - because it has no way of instantiating it. Can't create a strongly-typed view using an interface through the IDE either (which makes sense actually). So my question. Is there some way to tell the controller how to instantiate the interface parameter using my service layer factory methods? I'd like to convert from: [Authorize] [AcceptVerbs(HttpVerbs.Post)] [UrlRoute(Path = "Application/Edit/{id}")] public ActionResult Edit(String id, String TypeCode, String TimeCode, String[] SelectedSchoolSystems, String PositionChoice1, String PositionChoice2, String PositionChoice3, String Reason, String LocationPreference, String AvailableDate, String RecipientsNotSelected, String RecipientsSelected) { //New blank app IApplication _application = ApplicationService.GetById(id); to something like [Authorize] [AcceptVerbs(HttpVerbs.Post)] [UrlRoute(Path = "Application/Edit/{id}")] public ActionResult Edit(String id, IApplication app) { //Don't need to do this anymore //IApplication _application = ApplicationService.GetById(id);
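    One way to get there in ASP.NET MVC 2 is a custom model binder that asks the service layer for the instance instead of trying to construct the interface; a sketch reusing the IApplication and ApplicationService names from the question (pulling the id out of the route data is an assumption about how the lookup should work):

        using System;
        using System.Web.Mvc;

        public class ApplicationModelBinder : DefaultModelBinder
        {
            // Called when MVC needs an instance of the model type; instead of
            // Activator.CreateInstance (which cannot create an interface), use the factory.
            protected override object CreateModel(ControllerContext controllerContext,
                                                  ModelBindingContext bindingContext,
                                                  Type modelType)
            {
                string id = (string)controllerContext.RouteData.Values["id"];
                return ApplicationService.GetById(id); // existing factory from the question
            }
        }

        // Registered once, e.g. in Global.asax Application_Start:
        // ModelBinders.Binders.Add(typeof(IApplication), new ApplicationModelBinder());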

    Read the article

  • Infinite loop when adding a row to a list in a class in python3

    - by Margaret
    I have a script which contains two classes. (I'm obviously deleting a lot of stuff that I don't believe is relevant to the error I'm dealing with.) The eventual task is to create a decision tree, as I mentioned in this question. Unfortunately, I'm getting an infinite loop, and I'm having difficulty identifying why. I've identified the line of code that's going haywire, but I would have thought the iterator and the list I'm adding to would be different objects. Is there some side effect of list's .append functionality that I'm not aware of? Or am I making some other blindingly obvious mistake? class Dataset: individuals = [] #Becomes a list of dictionaries, in which each dictionary is a row from the CSV with the headers as keys def field_set(self): #Returns a list of the fields in individuals[] that can be used to split the data (i.e. have more than one value amongst the individuals) def classified(self, predicted_value): #Returns True if all the individuals have the same value for predicted_value def fields_exhausted(self, predicted_value): #Returns True if all the individuals are identical except for predicted_value def lowest_entropy_value(self, predicted_value): #Returns the field that will reduce entropy (http://en.wikipedia.org/wiki/Entropy_%28information_theory%29) the most def __init__(self, individuals=[]): and class Node: ds = Dataset() #The data that is associated with this Node links = [] #List of Nodes, the offspring Nodes of this node level = 0 #Tree depth of this Node split_value = '' #Field used to split out this Node from the parent node node_value = '' #Value used to split out this Node from the parent Node def split_dataset(self, split_value): fields = [] #List of options for split_value amongst the individuals datasets = {} #Dictionary of Datasets, each one with a value from fields[] as its key for field in self.ds.field_set()[split_value]: #Populates the keys of fields[] fields.append(field) datasets[field] = Dataset() for i in self.ds.individuals: #Adds individuals to the datasets.dataset that matches their result for split_value datasets[i[split_value]].individuals.append(i) #<---Causes an infinite loop on the second hit for field in fields: #Creates subnodes from each of the datasets.Dataset options self.add_subnode(datasets[field],split_value,field) def add_subnode(self, dataset, split_value='', node_value=''): def __init__(self, level, dataset=Dataset()): My initialisation code is currently: if __name__ == '__main__': filename = (sys.argv[1]) #Takes in a CSV file predicted_value = "# class" #Identifies the field from the CSV file that should be predicted base_dataset = parse_csv(filename) #Turns the CSV file into a list of lists parsed_dataset = individual_list(base_dataset) #Turns the list of lists into a list of dictionaries root = Node(0, Dataset(parsed_dataset)) #Creates a root node, passing it the full dataset root.split_dataset(root.ds.lowest_entropy_value(predicted_value)) #Performs the first split, creating multiple subnodes n = root.links[0] n.split_dataset(n.ds.lowest_entropy_value(predicted_value)) #Attempts to split the first subnode.

    Read the article

  • Is this way of storing typed objects in memory good?

    - by Pindatjuh
    This is an "is this okay, or can it be done better" question. Topic: Storing typed objects in memory. Background information: I'm building a compiler for the x86-32 platform for my language. My goal includes typed objects. Idea: Every primitive is a semi-class (it can be used as if it was a normal class, but it's stored more compact). Every class is represented by primitives and some meta-data (containing class-properties, inheritance stuff, etc.). The meta-data is complex: it doesn't use fields but instead context-switches. For primitives, the meta-data is very small, compared to a "real" class, which is alot bigger. This enables another idea that "primitives are objects", in my language, which I found nessecairy. Example: If I have an array of 32 booleans, then the pure content of this array is exactly 4 byte (32 bits of booleans). The meta-data will contain flags that the type is an array of booleans, which contains 32 entries. The meta-data is very compacted, on bit-level: using a sort of "packing" mechanism, which is read by a FSM at runtime, when doing inspection of the type (like when passing the object to methods for checking, etc.) For instance (read from left to right, top to bottom, remember vertical possition when going to the right, and check nearest column header for meaning of switch): Primitive? Array? Type-Meta 1 Byte? || Size (1 byte) 1 1 [...] 1 [...] done 0 2 Bytes? || Size (2 bytes) 1 [...] done || Size (4 bytes) 0 [...] done Integer? 1 Byte? 2 Bytes? 0 1 0 1 done 1 done 0 done Boolean? Byte? 0 1 0 done 1 done More-Primitives 0 .... Class-Stuff (Huge) 0 ... (After reaching done the data is inserted. || = byte alignement. [...] is variable sized. ... is not described here, for simplicity. And let's call them cost-based-data-structures.) For an array of 32 booleans containing all true values, the memory for this type would be (read top-down): 1 Primitive 1 Array 1 ArrayType: Primitive 0 Not-Array 0 Not-Integer 1 Boolean 0 Not-Byte (thus bit) 1 Integer Size: 1 Byte 00100000 Array size 11111111 11111111 11111111 11111111 Data Thus, 8 bytes represent 32 booleans in an array: 11100101 00100000 11111111 11111111 11111111 11111111 Is this okay, or can it be done better?

    Read the article

  • Vim: how to make the text I've just typed uppercase?

    - by Pavel Shved
    Use case: I've just entered insert mode and typed some text. Now I want to make it uppercase. It can be done via gUmotion. However, I can't find a motion over the text entered in the most recent input session. It's somewhat strange, and the concept of such a motion is a bit ill-defined (where should it move if you've deleted text, for example?), but it may solve my problem. Or are there other ways of making the text you've recently typed uppercase?

    Read the article

  • Can this way of storing typed objects be improved?

    - by Pindatjuh
    This is an "can it be improved"-question. Topic: Storing typed objects in memory. Background information: I'm building a compiler for the x86-32 Windows platform for my language. My goal includes typed objects. Idea: Every primitive is a semi-class (it can be used as if it was a normal class, but it's stored more compact). Every class is represented by primitives and some meta-data (containing class-properties, inheritance stuff, etc.). The meta-data is complex: it doesn't use fields but instead context-switches. For primitives, the meta-data is very small, compared to a "real" class, which is alot bigger. This enables another idea that "primitives are objects", in my language, which I found nessecairy. Example: If I have an array of 32 booleans, then the pure content of this array is exactly 4 byte (32 bits of booleans). The meta-data will contain flags that the type is an array of booleans, which contains 32 entries. The meta-data is very compacted, on bit-level: using a sort of "packing" mechanism, which is read by a FSM at runtime, when doing inspection of the type (like when passing the object to methods for checking, etc.) For instance (read from left to right, top to bottom, remember vertical position when going to the right, and check nearest column header for meaning of switch): Primitive? Array? Type-Meta 1 Byte? || Size (1 byte) 1 1 [...] 1 [...] done 0 2 Bytes? || Size (2 bytes) 1 [...] done || Size (4 bytes) 0 [...] done Integer? 1 Byte? 2 Bytes? 0 1 0 1 done 1 done 0 done Boolean? Byte? 0 1 0 done 1 done More-Primitives 0 .... Class-Stuff (Huge) 0 ... (After reaching done the data is inserted. || = byte alignment. [...] is variable sized. ... is not described here, for simplicity. And let's call them cost-based-data-structures.) For an array of 32 booleans containing all true values, the memory for this type would be (read top-down): 1 Primitive 1 Array 1 ArrayType: Primitive 0 Not-Array 0 Not-Integer 1 Boolean 0 Not-Byte (thus bit) 1 Integer Size: 1 Byte 00100000 Array size 01010101 01010101 01010101 01010101 Data (user defined) Thus, 8 bytes represent 32 booleans in an array: 11100101 00100000 01010101 01010101 01010101 01010101 How can I improve this? (Both performance- and memory-consumption wise)

    Read the article

  • wpf treeview ObjectDataProvider update - how?

    - by wonea
    I've been having trouble updating the data bindings that relate to a TreeView. The dataset is mapped onto the ObjectDataProvider and passed through two data templates. I've tried updating the ObjectDataProvider with two calls, hoping that they would trigger the CreateTheDataSet() method and rebuild the tree. I tried calling: thetreeView.ItemTemplate.LoadContent(); thetreeView.Items.Refresh(); Program code: <Window.Resources> <ObjectDataProvider x:Key="dataSetProvider" MethodName="CreateDataSet" ObjectType="{x:Type local:DataSetCreator}" /> <DataTemplate x:Key="DetailTemplate"> <TextBlock Text="{Binding Path=personname}" /> </DataTemplate> <HierarchicalDataTemplate x:Key="MasterTemplate" ItemsSource="{Binding Master2Detail}" ItemTemplate="{StaticResource DetailTemplate}"> <TextBlock Text="{Binding Path=jobname}" /> </HierarchicalDataTemplate> </Window.Resources> <TreeView x:Name="thetreeView" DataContext="{StaticResource dataSetProvider}" ItemsSource="{Binding compMaster}" ItemTemplate="{StaticResource MasterTemplate}" /> My dataset creation class: public static class DataSetCreator { public static DataSet CreateTheDataSet() { DataSet dataset = new DataSet(); // Create dataset ... blah blah return dataset; } }
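    Rather than going through the TreeView's ItemTemplate, the ObjectDataProvider itself can be asked to re-run its method; a sketch of the code-behind (the resource key matches the XAML above, the window class name is a placeholder):

        using System.Windows;
        using System.Windows.Data;

        public partial class MainWindow : Window
        {
            // Re-invokes the provider's method so the bound TreeView rebuilds
            // its items from the freshly created DataSet.
            private void RefreshTree()
            {
                ObjectDataProvider provider = (ObjectDataProvider)FindResource("dataSetProvider");
                provider.Refresh();
            }
        }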

    Read the article

  • How can I get a dataset to populate from an RDL file?

    - by NotDan
    I have an RDL report file and I would like to somehow "run" the report and get the dataset that would be used to fill the report. What I'm trying to do is get a raw data extract from the data that would be used to fill the report, without actually showing the report to the user. Is this possible?

    Read the article
