Search Results

Search found 61631 results on 2466 pages for 'duplicate data'.

  • Export data from WCF Service to Excel

    - by Dave
    I need to provide an export-to-Excel feature for a large amount of data returned from a WCF web service. The code to load the DataList is as below:

        List<resultSet> r = myObject.ReturnResultSet(myWebRequestUrl); // call to WCF service
        myDataList.DataSource = r;
        myDataList.DataBind();

    I am using the Response object to do the job:

        Response.Clear();
        Response.Buffer = true;
        Response.ContentType = "application/vnd.ms-excel";
        Response.AddHeader("Content-Disposition", "attachment; filename=MyExcel.xls");
        StringBuilder sb = new StringBuilder();
        StringWriter sw = new StringWriter(sb);
        HtmlTextWriter tw = new HtmlTextWriter(sw);
        myDataList.RenderControl(tw);
        Response.Write(sb.ToString());
        Response.End();

    The problem is that the WCF service times out for a large amount of data (about 5000 rows) and the result set is null. When I debug the service, I can see the window for saving/opening the Excel sheet appear before the service returns the result, so the Excel sheet is always empty. Please help me figure this out.
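
    A minimal sketch of one way to attack the timeout itself, assuming the proxy is created over a programmatic BasicHttpBinding; the binding type, the IMyService contract and the timeout values below are illustrative assumptions, not taken from the question:

        // Lengthen the client-side WCF timeouts and the message size quota
        // before making the call, so a ~5000-row result has time to arrive.
        var binding = new BasicHttpBinding
        {
            SendTimeout = TimeSpan.FromMinutes(5),     // how long to wait for the reply
            ReceiveTimeout = TimeSpan.FromMinutes(5),
            MaxReceivedMessageSize = 10 * 1024 * 1024  // allow a larger response payload
        };
        var factory = new ChannelFactory<IMyService>(  // IMyService is a placeholder contract
            binding, new EndpointAddress(myWebRequestUrl));
        IMyService client = factory.CreateChannel();
        List<resultSet> r = client.ReturnResultSet();  // hypothetical service operation

    The same values can equally be set declaratively via the sendTimeout/receiveTimeout attributes on the binding element in the client's config file.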

  • How to bind a datatable to a wpf editable combobox: selectedItem showing System.Data.DataRowView

    - by black sensei
    Hello good people! I bound a DataTable to a ComboBox and defined a DataTemplate in the ItemTemplate. I can see the desired values in the ComboBox drop-down list, but what I see as the selected item is System.Data.DataRowView. Here is my code:

        <ComboBox Margin="128,139,123,0" Name="cmbEmail" Height="23" VerticalAlignment="Top"
                  TabIndex="1" ToolTip="enter the email you signed up with here"
                  IsEditable="True" IsSynchronizedWithCurrentItem="True" ItemsSource="{Binding}">
            <ComboBox.ItemTemplate>
                <DataTemplate>
                    <StackPanel>
                        <TextBlock Text="{Binding Path=username}"/>
                    </StackPanel>
                </DataTemplate>
            </ComboBox.ItemTemplate>
        </ComboBox>

    The code-behind is:

        if (con != null)
        {
            con.Open();
            // users table has columns id | username | pass
            SQLiteCommand cmd = new SQLiteCommand("select * from users", con);
            SQLiteDataAdapter da = new SQLiteDataAdapter(cmd);
            userdt = new DataTable("users");
            da.Fill(userdt);
            cmbEmail.DataContext = userdt;
        }

    I've been looking for something like a SelectedValueTemplate or SelectedItemTemplate to do the same kind of data templating for the selected item, but I found none. Am I doing something wrong, or is this a known issue with ComboBox binding? If something is wrong in my code, please point me in the right direction. Thanks for reading this.

  • Add new item to UITableView and Core Data as data source?

    - by David.Chu.ca
    I am having trouble adding a new item to my table view backed by Core Data. Here is the brief logic in my code. In my view controller class, I have a button to toggle edit mode:

        - (void)toggleEditing {
            UITableView *tv = (UITableView *)self.view;
            if (isEdit) {  // class-level flag for editing
                self.newEntity = [NSEntityDescription insertNewObjectForEntityForName:@"entity1"
                                                      inManagedObjectContext:managedObjectContext];
                NSArray *insertIndexPaths = [NSArray arrayWithObjects:
                    [NSIndexPath indexPathForRow:0 inSection:0], nil]; // empty at the beginning, so the numbers are hard-coded
                [tv insertRowsAtIndexPaths:insertIndexPaths withRowAnimation:UITableViewRowAnimationFade];
                [self.tableView setEditing:YES animated:YES]; // enable editing mode
            } else { ... }
        }

    In this block of code, I first add a new item to my current managed object context, then add a new row to the table view. I would expect both the number of objects in my data source/context and the number of rows in my table view to be 1. However, I get an exception from tableView:numberOfRowsInSection::

        Invalid update: invalid number of rows in section 0. The number of rows contained in an
        existing section after the update (0) must be equal to the number of rows contained in
        that section before the update (0), plus or minus the number of rows inserted or deleted
        from that section (1 inserted, 0 deleted).

    The exception is raised right after the delegate method:

        - (NSInteger)tableView:(UITableView *)tableView numberOfRowsInSection:(NSInteger)section {
            // fetchedResultsController is a class member of type NSFetchedResultsController
            id <NSFetchedResultsSectionInfo> sectionInfo =
                [[fetchedResultsController sections] objectAtIndex:section];
            NSInteger rows = [sectionInfo numberOfObjects];
            return rows;
        }

    In debug mode, I found that rows was still 0 and that this method was invoked after toggleEditing. It looks like the section info obtained from fetchedResultsController does not include the newly inserted entity object. Am I missing any steps? I am not sure how this works: how does the fetchedResultsController get notified, or reflect the change, when a new entity is inserted into the current managed object context?

  • jqGrid with JSON data renders table as empty

    - by jgreep
    I'm trying to create a jqGrid, but the table is empty. The table renders, but the data doesn't show. The data I'm getting back from the PHP call is:

        {
          "page":"1", "total":1, "records":"10",
          "rows":[
            {"id":"2:1","cell":["1","image","Chief Scout","Highest Award test","0"]},
            {"id":"2:2","cell":["2","image","Link Badge","When you are invested as a Scout, you may be eligible to receive a Link Badge. (See page 45)","0"]},
            {"id":"2:3","cell":["3","image","Pioneer Scout","Upon completion of requirements, the youth is invested as a Pioneer Scout","0"]},
            {"id":"2:4","cell":["4","image","Voyageur Scout Award","Voyageur Scout Award is the right after Pioneer Scout.","0"]},
            {"id":"2:5","cell":["5","image","Voyageur Citizenship","Learning about and caring for your community.","0"]},
            {"id":"2:6","cell":["6","image","Fish and Wildlife","Demonstrate your knowledge and involvement in fish and wildlife management.","0"]},
            {"id":"2:7","cell":["7","image","Photography","To recognize photography knowledge and skills","0"]},
            {"id":"2:8","cell":["8","image","Recycling","Demonstrate your knowledge and involvement in Recycling","0"]},
            {"id":"2:10","cell":["10","image","Voyageur Leadership ","Show leadership ability","0"]},
            {"id":"2:11","cell":["11","image","World Conservation","World Conservation Badge","0"]}
          ]
        }

    The JavaScript configuration looks like so:

        $("#"+tableId).jqGrid({
            url: 'getAwards.php?id='+classId,
            dataType: 'json',
            mtype: 'POST',
            colNames: ['Id','Badge','Name','Description',''],
            colModel: [
                {name:'awardId', width:30, sortable:true, align:'center'},
                {name:'badge', width:40, sortable:false, align:'center'},
                {name:'name', width:180, sortable:true, align:'left'},
                {name:'description', width:380, sortable:true, align:'left'},
                {name:'selected', width:0, sortable:false, align:'center'}
            ],
            sortname: "awardId",
            sortorder: "asc",
            pager: $('#'+tableId+'_pager'),
            rowNum: 15,
            rowList: [15,30,50],
            caption: 'Awards',
            viewrecords: true,
            imgpath: 'scripts/jqGrid/themes/green/images',
            jsonReader: {
                root: "rows",
                page: "page",
                total: "total",
                records: "records",
                repeatitems: true,
                cell: "cell",
                id: "id",
                userdata: "userdata",
                subgrid: {root:"rows", repeatitems: true, cell:"cell"}
            },
            width: 700,
            height: 200
        });

    The HTML looks like:

        <table class="awardsList" id="awardsList2" class="scroll" name="awardsList" />
        <div id="awardsList2_pager" class="scroll"></div>

    I'm not sure that I needed to define jsonReader, since I've tried to keep to the defaults. If the PHP code will help, I can post it too.

  • How to import CSV data into Django models

    - by little_fish
    I have some CSV data and I want to import it into Django models. Here is an example of the CSV data:

        1;"02-01-101101";"Worm Gear HRF 50";"Ratio 1 : 10";"input shaft, output shaft, direction A, color dark green";
        2;"02-01-101102";"Worm Gear HRF 50";"Ratio 1 : 20";"input shaft, output shaft, direction A, color dark green";
        3;"02-01-101103";"Worm Gear HRF 50";"Ratio 1 : 30";"input shaft, output shaft, direction A, color dark green";
        4;"02-01-101104";"Worm Gear HRF 50";"Ratio 1 : 40";"input shaft, output shaft, direction A, color dark green";
        5;"02-01-101105";"Worm Gear HRF 50";"Ratio 1 : 50";"input shaft, output shaft, direction A, color dark green";

    I have a Django model named Product. Product has fields like name, description and price, and I want to do something like this:

        product = Product()
        product.name = "Worm Gear HRF 70(02-01-101116)"
        product.description = "input shaft, output shaft, direction A, color dark green"
        product.price = 100

  • Thread-safe data structures

    - by Inso Reiges
    Hello, I have to design a data structure that is to be used in a multi-threaded environment. The basic API is simple: insert element, remove element, retrieve element, check that an element exists. The structure's implementation uses implicit locking to guarantee the atomicity of a single API call.

    After I implemented this, it became apparent that what I really need is atomicity across several API calls. For example, if a caller needs to check for the existence of an element before trying to insert it, he can't do that atomically even if each single API call is atomic:

        if (!data_structure.exists(element)) {
            data_structure.insert(element);
        }

    The example is somewhat awkward, but the basic point is that we can't trust the result of the exists call anymore after we return from the atomic context (the generated assembly clearly shows a minor chance of a context switch between the two calls).

    What I currently have in mind to solve this is exposing the lock through the data structure's public API. This way clients will have to lock things explicitly, but at least they won't have to create their own locks. Is there a better commonly known solution to these kinds of problems? And as long as we're at it, can you recommend some good literature on thread-safe design? Thank you.
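
    A minimal C# sketch of the "expose the lock" idea described above; the wrapper type and method names are illustrative:

        using System.Collections.Generic;

        public sealed class ThreadSafeSet<T>
        {
            private readonly object _sync = new object();
            private readonly HashSet<T> _items = new HashSet<T>();

            // The lock object is part of the public API, so callers can make
            // a sequence of calls atomic without inventing their own locks.
            public object SyncRoot { get { return _sync; } }

            public bool Exists(T item) { lock (_sync) return _items.Contains(item); }
            public void Insert(T item) { lock (_sync) _items.Add(item); }
        }

    On the caller side, the check-then-insert becomes atomic by holding the exposed lock across both calls (Monitor locks are reentrant, so the inner locks are harmless):

        lock (set.SyncRoot)
        {
            if (!set.Exists(x))
                set.Insert(x);
        }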

  • OCR with Neural network: data extraction

    - by Sebastian Hoitz
    I'm using the AForge library framework and its neural network. At the moment, when I train my network I create lots of images (one image per letter per font) at a big size (30 pt), cut out the actual letter, scale it down to a smaller size (10x10 px) and then save it to my hard disk. I can then go and read all those images, creating my double[] arrays with data. At the moment I do this on a pixel basis.

    Once I have successfully trained my network, I test it by letting it run on a sample image with the alphabet at different sizes (uppercase and lowercase). But the result is not really promising. I trained the network so that RunEpoch had an error of about 1.5 (so almost no error), but there are still some letters that do not get identified correctly in my test image.

    Now my question is: is this caused by a faulty learning method (pixel-based vs. the suggested use of receptors in this article: http://www.codeproject.com/KB/cs/neural_network_ocr.aspx; are there other methods I can use to extract the data for the network?), or can this happen because my segmentation algorithm for extracting the letters from the image is bad? Does anyone have ideas on how to improve it?

  • Adding new record to a VFP data table in VB.NET with ADO recordsets

    - by Gerry
    I am trying to add a new record to a Visual FoxPro data table using an ADO recordset, with no luck. The code runs fine with no exceptions, but when I check the DBF after the fact there is no new record. The mDataPath variable shown in the code snippet is the path to the .dbc file for the entire database. A note about the For loop at the bottom: I am adding the body of incoming emails to this MEMO field, so I thought I needed to break the addition of this string into chunks of a few hundred characters. Any guidance would be greatly appreciated.

        cnn1.Open("Driver={Microsoft Visual FoxPro Driver};" & _
                  "SourceType=DBC;" & _
                  "SourceDB=" & mDataPath & ";Exclusive=No")

        Dim RS As ADODB.Recordset = New ADODB.Recordset
        RS.Open("select * from gennote", cnn1, 1, 3, 1)
        RS.AddNew()

        'Assign values to the first three fields
        RS.Fields("ignnoteid").Value = NextIDI
        RS.Fields("cnotetitle").Value = "'" & mail.Subject & "'"
        RS.Fields("cfilename").Value = "''"

        'Loop through 254 characters at a time and add the data
        'to the ADO field buffer
        For i As Integer = 1 To Len(memo) Step liChunkSize
            liStartAt = i
            liWorkString = Mid(mail.Body, liStartAt, liChunkSize)
            RS.Fields("mnote").AppendChunk(liWorkString)
        Next

        'Update the recordset
        RS.Update()
        RS.Requery()
        RS.Close()

  • Posting Nutch data into a BASIC auth secured Solr instance

    - by mlathe
    Hi. I've secured a Solr instance using BASIC auth, roughly as shown here: http://blog.comtaste.com/2009/02/securing_your_solr_server_on_t.html

    Now I'm trying to update my batch processes to push data into the authenticated instance. The ones using curl are easy, but I also have a Nutch crawl that uses the "solrindex" command to push data into Solr. When I do that, I get this error:

        2010-02-22 12:09:28,226 INFO  auth.AuthChallengeProcessor - basic authentication scheme selected
        2010-02-22 12:09:28,229 INFO  httpclient.HttpMethodDirector - No credentials available for BASIC 'Tomcat Manager Application'@ninja:5500
        2010-02-22 12:09:28,236 WARN  mapred.LocalJobRunner - job_local_0001
        org.apache.solr.common.SolrException: Unauthorized
        Unauthorized
        request: http://ninja:5500/solr/foo/update?wt=javabin&version=2.2
            at org.apache.solr.client.solrj.impl.CommonsHttpSolrServer.request(CommonsHttpSolrServer.java:343)
            at org.apache.solr.client.solrj.impl.CommonsHttpSolrServer.request(CommonsHttpSolrServer.java:183)
            at org.apache.solr.client.solrj.request.UpdateRequest.process(UpdateRequest.java:217)
            at org.apache.solr.client.solrj.SolrServer.add(SolrServer.java:48)
            at org.apache.nutch.indexer.solr.SolrWriter.close(SolrWriter.java:69)
            at org.apache.nutch.indexer.IndexerOutputFormat$1.close(IndexerOutputFormat.java:48)
            at org.apache.hadoop.mapred.ReduceTask.run(ReduceTask.java:447)
            at org.apache.hadoop.mapred.LocalJobRunner$Job.run(LocalJobRunner.java:170)
        2010-02-22 12:09:29,134 FATAL solr.SolrIndexer - SolrIndexer: java.io.IOException: Job failed!
            at org.apache.hadoop.mapred.JobClient.runJob(JobClient.java:1232)
            at org.apache.nutch.indexer.solr.SolrIndexer.indexSolr(SolrIndexer.java:73)
            at org.apache.nutch.indexer.solr.SolrIndexer.run(SolrIndexer.java:95)
            at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:65)
            at org.apache.nutch.indexer.solr.SolrIndexer.main(SolrIndexer.java:104)

    Apparently Nutch uses SolrJ to push the content, and after going through the SolrJ code, it's clear that it uses commons-httpclient without providing a way to set the credentials. Here are my questions:

    1. Is this possible to do at all, i.e. push from Nutch into a BASIC auth secured Solr instance?
    2. Is it possible to tell commons-httpclient about a credential without explicitly doing an _httpclient.getState().setCredentials(...)?
    3. Any other ideas?

    One idea I had was to use an IP-filtering valve for just the "update" Solr web services. That would mean you could only make an update call from certain nodes. Thanks

  • Ext.data.JsonStore + Ext.DataView = not loading records

    - by Mulone
    Hi guys, I'm trying to make a DataView work (on Ext JS 2.3). Here is the JsonStore, which seems to be working (it calls the server and gets a valid response):

        Ext.onReady(function(){
            var prefStore = new Ext.data.JsonStore({
                autoLoad: true, // autoload the data
                url: 'getHighestUserPreferences',
                baseParams: {
                    userId: 'andreab',
                    max: '50'
                },
                root: 'preferences',
                fields: [
                    {name:'prefId', type:'int'},
                    {name:'absInteractionScore', type:'float'}
                ]
            });

    Then the XTemplate:

            var tpl = new Ext.XTemplate(
                '<tpl for=".">',
                '<div class="thumb-wrap" id="{name}">',
                '<div class="thumb"><img src="{url}" title="{name}"></div>',
                '<span class="x-editable">{shortName}</span></div>',
                '</tpl>',
                '<div class="x-clear"></div>'
            );

    The panel:

            var panel = new Ext.Panel({
                id:'geoPreferencesView',
                frame:true,
                width:600,
                autoHeight:true,
                collapsible:false,
                layout:'fit',
                title:'Geo Preferences',

    And the DataView items:

                new Ext.DataView({
                    store: prefStore,
                    tpl: tpl,
                    autoHeight:true,
                    multiSelect: true,
                    overClass:'x-view-over',
                    itemSelector:'div.thumb-wrap',
                    emptyText: 'No images to display'
                })
            });

            panel.render('extOutput');
        });

    What I get in the page is a blue frame with the title, but nothing in it. How can I debug this and see why it is not working? Cheers, Mulone

  • Using text and ntext SQL datatypes in RPG

    - by David Stratton
    I'll preface this by saying that I'm a .NET developer, and NOT an RPG developer. I'm working with one of our RPG developers to come up with a solution, so any suggestions you provide will get passed to him.

    We have a scenario where we want our iSeries to read from a SQL Server database. One of the columns is a TEXT column. In RPG, there is no equivalent data type to use for this. We've gone back and forth on this, and our current plan is to change course and have our SQL Server write out a text file, which the iSeries can pick up and parse. This is, however, a last-resort option, as the data in the file is sensitive, and we'd like to avoid the additional security overhead. We've already got the SQL Server locked down as tight as possible (one user only has read access to this, and that user is an iSeries user), and we don't want to have to worry about transferring files back and forth. However, at this point, we see no other option. We have no in-house Java developers, and need to do this in RPG.

    So I'm wondering if there are any RPG developers out there who have faced this situation and have any advice.

  • Data Driven MSTest: DataRow is always null

    - by David Back
    I am having a problem using Visual Studio data-driven testing. I have tried to deconstruct this to the simplest example. I am using Visual Studio 2012. I create a new unit test project and reference System.Data. My code looks like this:

        using System;
        using System.Diagnostics;
        using Microsoft.VisualStudio.TestTools.UnitTesting;

        namespace UnitTestProject1
        {
            [TestClass]
            public class UnitTest1
            {
                [DeploymentItem(@"OrderService.csv")]
                [DataSource("Microsoft.VisualStudio.TestTools.DataSource.CSV",
                    "OrderService.csv", "OrderService#csv", DataAccessMethod.Sequential)]
                [TestMethod]
                public void TestMethod1()
                {
                    try
                    {
                        Debug.WriteLine(TestContext.DataRow["ID"]);
                    }
                    catch (Exception ex)
                    {
                        Assert.Fail();
                    }
                }

                public TestContext TestContext { get; set; }
            }
        }

    I have a very small CSV file whose Build Options are set to 'Content' and 'Copy Always'. I have added a .testsettings file to the solution, enabled deployment, and added the CSV file. I have tried this with and without |DataDirectory|, and with/without a full path specified (the same path that I get with Environment.CurrentDirectory). I've tried variations of "../" and "../../" just in case. Right now the CSV is at the project root level, same as the .cs test code file. I have tried variations with XML as well as CSV. TestContext is not null, but DataRow always is.

    I have not gotten this to work despite a lot of fiddling with it, and I'm not sure what I'm doing wrong. Does MSTest create a log anywhere that would tell me if it is failing to find the CSV file, or what specific error might be causing DataRow to fail to populate? I have tried the following CSV files:

        ID
        1
        2
        3
        4

    and

        ID, Whatever
        1,0
        2,1
        3,2
        4,3

    So far, no dice.

  • SVM Classification - minimum number of input sets for each class

    - by Amol Joshi
    I'm trying to build an app to detect images which are advertisements on webpages. Once I detect those, I'll not allow them to be displayed on the client side. From the help that I got here on Stack Overflow, I concluded that SVM is the best approach for my aim, so I have coded an SVM and an SMO myself.

    The dataset which I got from the UCI data repository has 3280 instances (link to dataset: http://archive.ics.uci.edu/ml/datasets/Internet+Advertisements), where around 400 of them are from the class representing advertisement images and the rest represent non-advertisement images.

    Right now I'm taking the first 2800 input sets and training the SVM. But after looking at the accuracy rate, I realised that most of those 2800 input sets are from the non-advertisement image class, so I'm getting very good accuracy for that class. What can I do here? About how many input sets should I give the SVM to train on, and how many of them for each class? Thanks. Cheers.

    (Basically I made a new question because the context was different from my previous question: http://stackoverflow.com/questions/1991113/optimization-of-neural-network-input-data)
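
    A minimal sketch of stratified sampling, which keeps the ad/non-ad proportions in the training set instead of just taking the first 2800 rows; the Sample type and the 85% split fraction are illustrative assumptions:

        using System.Collections.Generic;
        using System.Linq;

        class Sample
        {
            public double[] Features;
            public bool IsAd;
        }

        static class Sampling
        {
            // Take the same fraction from each class, so roughly 85% of the
            // ~400 ad rows and 85% of the ~2880 non-ad rows land in training.
            public static List<Sample> StratifiedTrainingSet(
                IEnumerable<Sample> all, double fraction)
            {
                var train = new List<Sample>();
                foreach (var byClass in all.GroupBy(s => s.IsAd))
                    train.AddRange(byClass.Take((int)(byClass.Count() * fraction)));
                return train;
            }
        }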

  • Working with a large data object between ruby processes

    - by Gdeglin
    I have a Ruby hash that reaches approximately 10 megabytes if written to a file using Marshal.dump. After gzip compression it is approximately 500 kilobytes.

    Iterating through and altering this hash is very fast in Ruby (fractions of a millisecond). Even copying it is extremely fast. The problem is that I need to share the data in this hash between Ruby on Rails processes. In order to do this using the Rails cache (file_store or memcached) I need to Marshal.dump the hash first; however, this incurs a 1000 millisecond delay when serializing it and a 400 millisecond delay when deserializing it. Ideally I would want to be able to save and load this hash from each process in under 100 milliseconds.

    One idea is to spawn a new Ruby process to hold this hash and provide an API for the other processes to modify or process the data within it, but I want to avoid doing this unless I'm certain that there are no other ways to share this object quickly. Is there a way I can more directly share this hash between processes without needing to serialize or deserialize it?

    Here is the code I'm using to generate a hash similar to the one I'm working with:

        @a = []
        0.upto(500) do |r|
          @a[r] = []
          0.upto(10_000) do |c|
            if rand(10) == 0
              @a[r][c] = 1 # 10% chance of being 1
            else
              @a[r][c] = 0
            end
          end
        end
        @c = Marshal.dump(@a) # 1000 milliseconds
        Marshal.load(@c)      # 400 milliseconds

    Update: Since my original question did not receive many responses, I'm assuming there's no solution as easy as I would have hoped. Presently I'm considering two options:

    1. Create a Sinatra application to store this hash, with an API to modify/access it.
    2. Create a C application to do the same as #1, but a lot faster.

    The scope of my problem has increased such that the hash may be larger than my original example, so #2 may be necessary. But I have no idea where to start in terms of writing a C application that exposes an appropriate API. A good walkthrough of how best to implement #1 or #2 may receive best-answer credit.

  • Generating %pc relative address of constant data

    - by Hudson
    Is there a way to have gcc generate %pc-relative addresses of constants? Even when the string appears in the text segment, arm-elf-gcc will generate a constant pointer to the data, load the address of the pointer via a %pc-relative address and then dereference it. For a variety of reasons, I need to skip the middle step. As an example, this simple function:

        const char * filename(void)
        {
            static const char _filename[]
                __attribute__((section(".text"))) = "logfile";
            return _filename;
        }

    generates (when compiled with arm-elf-gcc-4.3.2 -nostdlib -c -O3 -W -Wall logfile.c):

        00000000 <filename>:
           0:   e59f0000    ldr   r0, [pc, #0]   ; 8 <filename+0x8>
           4:   e12fff1e    bx    lr
           8:   0000000c    .word 0x0000000c

        0000000c <_filename.1175>:
           c:   66676f6c    .word 0x66676f6c
          10:   00656c69    .word 0x00656c69

    I would have expected it to generate something more like:

        filename:
            add r0, pc, #0
            bx  lr
        _filename.1175:
            .ascii "logfile\000"

    The code in question needs to be partially position-independent, since it will be relocated in memory at load time, but it must also integrate with code that was not compiled -fPIC, so there is no global offset table.

    My current workaround is to call a non-inline function (which will be done via a %pc-relative address) to find the offset from the compiled location, in a technique similar to how -fPIC code works:

        static intptr_t __attribute__((noinline)) find_offset(void)
        {
            uintptr_t pc;
            asm __volatile__ ("mov %0, %%pc" : "=&r"(pc));
            return pc - 8 - (uintptr_t)find_offset;
        }

    But this technique requires that all data references be fixed up manually, so the filename() function in the above example would become:

        const char * filename(void)
        {
            static const char _filename[]
                __attribute__((section(".text"))) = "logfile";
            return _filename + find_offset();
        }

  • MATLAB Builder NE (.NET Assembly) Data type question

    - by Brett
    Hi coders, I am using MATLAB Builder NE (MATLAB's integrated .NET assembly builder), but I am having an issue with data types. I have compiled a small, very simple function in MATLAB and built it for .NET. I am able to call the namespace and even the function just fine. However, my function returns a value, and MATLAB defaults to returning it as an "object[]" data type. I know that the value is an integer, but I can't figure out how to cast it. My MATLAB function looks like this:

        function addValue = Myfunction(value1, value2)
            addValue = value1 + value2;
        end

    Pretty simple, right? And then in .NET I can call it as:

        xClass.addValue(1, 3, 4);

    where xClass is the name of the MATLAB-built class. But when I try:

        int x = xClass.addValue(1, 3, 4);

    C# errors out. Typical .NET casting with (int) doesn't work; the compiler states it cannot convert object[] to int. Does anyone have experience with the .NET builder in MATLAB who can help me with this? It is really throwing me for a loop. I have scanned through most of the MATLAB Builder doc (484 pages!) with zero help. Thank you, Brett
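
    A hedged sketch of how results from a Builder NE component are usually unwrapped, assuming the generated method returns an object[] whose elements are MWNumericArray instances from the MWArray assembly (MathWorks.MATLAB.NET.Arrays); treat the exact types here as assumptions to verify against the generated class:

        using MathWorks.MATLAB.NET.Arrays;

        // The first argument of a generated method is typically the number of
        // requested outputs, so this asks for one output of Myfunction(3, 4).
        object[] result = xClass.addValue(1, 3, 4);

        // Each element is an MWArray subclass, not a boxed int, which is why
        // a direct (int) cast fails. Convert through MWNumericArray instead.
        var numeric = (MWNumericArray)result[0];
        int x = (int)numeric.ToScalarDouble();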

  • Copy Selective Data from Database to Invoice, Based on Certain Criteria

    - by Scott
    For starters, here is an example of a Microsoft Excel database I am working with:

        Month/Address/Name/Description/Amount
        January/123 Street/Fred/Painting/100
        January/456 Avenue/Scott/Flooring/400
        January/789 Road/Scott/Plumbing/100
        February/123 Street/Fred/Flooring/600
        February/246 Lane/Fred/Electrical/300
        March/789 Road/Scott/Drywall/150

    What I want to be able to do is selectively copy info from this database to invoices (also Excel). The invoice has three columns: Address/Description/Amount. I want the invoices to fill in automatically as the database is filled in (either automatically, or if I have to actually run the macro manually to do it, that might be fine).

    Each name (Scott, Fred, etc.) will have their own set of 12 invoices for the year. So, e.g., I want to be able to produce a January invoice for all work done for Scott in January, showing the address, the description and the amount, line by line. Every time work on Scott's address(es) is done, the database is filled in, and I want it to "send" that information to the invoice on the next available line, filling in only the Address/Description/Amount columns from the database. Fred's invoice should fill in as any work is done on Fred's addresses. And once the month changes, the next invoice should start filling in.

    So first I need to filter the data by the month and the name (and there is actually one more column to filter by, but let's keep this example simpler). Then I need to list the remaining data on the invoice, but only certain cells from the rows that are now left. Help, anyone?

  • Load testing multipart form

    - by JacobM
    I'm trying to load-test a Rails application using JMeter. A critical part of the application involves a form that includes both text inputs and file uploads. It works fine in a browser, but when I try to post that page in JMeter, Rails saves all of the parts of the multipart form as temp files, which causes things to break when it's looking for a string and gets a tempfile instead.

    It appears that the difference is that, from a browser, the piece of the multipart request that contains a text input looks like this:

        -----------------------------7d93b4186074c
        Content-Disposition: form-data; name="field_name"

        test
        -----------------------------7d93b4186074c

    while from JMeter it looks like this:

        -----------------------------7d159c1302d0y0
        Content-Disposition: form-data; name="field_name"
        Content-Type: text/plain; charset=utf-8
        Content-Transfer-Encoding: 8bit

        test
        -----------------------------7d159c1302d0y0

    So apparently Rails sees the former and interprets it as a plain text value and treats it as a string, but sees the latter and saves it to a temp file. I have not been able to find a setting to convince JMeter not to send the additional headers in the multipart form for non-file fields.

    Is there a way to convince Rails to ignore those headers and treat the text/plain text as strings instead of text files? Or a quick way to put a filter in front of my controller that will strip the extra headers? Alternately, is there a better tool to load-test a Rails application that includes file upload?

  • Relational data from XML

    - by Beta033
    Our problem is this: we have a relational database that stores objects in tables. As any relational database might, several of these tables have multiple foreign keys pointing to other tables (all pretty normal stuff). We've been trying to identify a solution that allows exporting this relational data, ideally only one of the objects in the model, to some sort of file (XML, text, ...). So it wouldn't be enough to simply export one table, as data stored in other tables contributes to the complete model of the object. (The original question included a diagram of the table relationships here.)

    Toward this, I've written a routine to export the structure by following the foreign-key paths, which exports something similar to the following:

        <Tables>
            <TableA PK="1", val1, val2, val3>
            <TableC PK="1", FK_A="1", val1, val2, val3>
            <TableC PK="2", FK_A="1", val1, val2, val3>
            <TableB PK="1", FK_A="1", FK_C="1", val1, val2, val3>
            <TableB PK="2", FK_A="1", FK_C="2", val1, val2, val3>
            <TableD PK="1", FK_B="1", FK_C="1", val1>
            <TableD PK="2", FK_B="2", FK_C="1", val1>
        </Tables>

    However, given this structure, it cannot be placed into a hierarchical format (i.e. D2 is a child of C1 and B2, and B2 is a child of C2), which in turn makes my life very difficult when trying to identify a methodology to reimport (and re-key) these objects.

    Has anybody done anything like this? How do you do it? Are there tools or documentation on how this is best accomplished? Thanks for your help.
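
    On the re-keying side, a minimal sketch of one common approach: even though the individual rows don't form a tree, the tables themselves form an acyclic dependency order in this example (A before C before B before D), so rows can be inserted table by table while a map translates exported keys into freshly assigned ones. All types and helper calls here are illustrative assumptions, not from the question:

        using System.Collections.Generic;

        class ExportedRow
        {
            public string Table;                           // e.g. "TableB"
            public int OldPk;                              // PK as it appeared in the export
            public List<ExportedFk> ForeignKeys = new List<ExportedFk>();
        }

        class ExportedFk
        {
            public string TargetTable;                     // e.g. "TableC" for FK_C
            public int Value;                              // old key, rewritten below
        }

        // Inside some hypothetical importer class:
        void Reimport(IEnumerable<ExportedRow> rowsInDependencyOrder)
        {
            var newPk = new Dictionary<string, int>();     // "Table:oldPk" -> new PK
            foreach (var row in rowsInDependencyOrder)     // A, then C, then B, then D
            {
                // Translate each FK through the map before inserting; the
                // referenced row was already inserted, so its new key is known.
                foreach (var fk in row.ForeignKeys)
                    fk.Value = newPk[fk.TargetTable + ":" + fk.Value];

                int assigned = InsertAndReturnKey(row);    // hypothetical DB insert
                newPk[row.Table + ":" + row.OldPk] = assigned;
            }
        }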

  • How can I pass FormsAuthentication.SetAuthCookie from Data Access Layer Class to WebService to Javas

    - by Reaction21
    I am using DotNetOpenAuth in my ASP.NET website. I have modified it to work with Facebook Connect as well, using the same methods and database structures. Now I have come across a problem.

    I have added a Facebook Connect button to a login page. From that HTML button, I have to somehow pull information from the Facebook Connect connection and pass it into a method to authenticate the user. The way I am currently doing this is:

    1. Calling a JavaScript function from the onlogin event of the FBML/HTML Facebook Connect button.
    2. The JavaScript function calls a web service to log in, which it does correctly.
    3. The web service calls my data access layer to log in. And here is the problem: FormsAuthentication.SetAuthCookie is called in the data access layer. The cookie is beyond the scope of the user's page and is therefore never set in the browser. This means the user is authenticated, but the user's browser is never notified.

    So, I need to figure out whether this is a bad way of doing what I need, or whether there is a better way to accomplish it. I am just not sure, and have been trying to find answers for hours. Any help you have would be great.
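
    A minimal sketch of the usual split, assuming an ASMX-style web service: the data access layer only verifies the login and reports success, and the layer that runs inside the HTTP request issues the cookie. The method and repository names are illustrative assumptions:

        using System.Web.Security;
        using System.Web.Services;

        [WebMethod]
        public bool FacebookLogin(string facebookUserId)
        {
            // Hypothetical DAL call: verify only, no cookie work down there.
            bool valid = UserRepository.ValidateFacebookUser(facebookUserId);

            if (valid)
            {
                // Issued here, inside the request that owns the response, so
                // the Set-Cookie header actually reaches the user's browser.
                FormsAuthentication.SetAuthCookie(facebookUserId, false);
            }
            return valid;
        }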

  • Why isn't my WPF Datagrid showing data?

    - by Edward Tanguay
    This walkthrough says you can create a WPF datagrid in one line but doesn't give a full example. So I created an example using a generic list and connected it to the WPF datagrid, but it doesn't show any data. What do I need to change in the code below to get it to show data in the datagrid?

    ANSWER: This code works now.

    XAML:

        <Window x:Class="TestDatagrid345.Window1"
            xmlns="http://schemas.microsoft.com/winfx/2006/xaml/presentation"
            xmlns:x="http://schemas.microsoft.com/winfx/2006/xaml"
            xmlns:toolkit="http://schemas.microsoft.com/wpf/2008/toolkit"
            xmlns:local="clr-namespace:TestDatagrid345"
            Title="Window1" Height="300" Width="300" Loaded="Window_Loaded">
            <StackPanel>
                <toolkit:DataGrid ItemsSource="{Binding}"/>
            </StackPanel>
        </Window>

    Code-behind:

        using System.Collections.Generic;
        using System.Windows;

        namespace TestDatagrid345
        {
            public partial class Window1 : Window
            {
                private List<Customer> _customers = new List<Customer>();
                public List<Customer> Customers { get { return _customers; } }

                public Window1()
                {
                    InitializeComponent();
                }

                private void Window_Loaded(object sender, RoutedEventArgs e)
                {
                    DataContext = Customers;
                    Customers.Add(new Customer { FirstName = "Tom", LastName = "Jones" });
                    Customers.Add(new Customer { FirstName = "Joe", LastName = "Thompson" });
                    Customers.Add(new Customer { FirstName = "Jill", LastName = "Smith" });
                }
            }
        }

  • What is the best Binary Decision Diagram library for Java?

    - by reprogrammer
    A Binary Decision Diagram (BDD) is a data structure to represent boolean functions. I'd like to use this data structure in a Java program. My search for Java-based BDD libraries turned up the following packages:

    - JavaBDD
    - JDD
    - JBDD
    - bddbddb

    If you know of any other BDD libraries available for Java programs, please let me know so that I can add them to the list above. If you have used any of these libraries, please tell me about your experience with them. In particular, I'd like you to compare the available libraries along the following dimensions:

    - Quality. Is the library mature and reasonably bug-free?
    - Performance. How do you evaluate the performance of the library?
    - Support. Could you easily get support whenever you encountered a problem with the library? Was the library well documented?
    - Ease of use. Was the API well designed? Could you install and use the library quickly and easily?

    Please mention the version of the library that you are evaluating.

  • One Model to Rule Them All - VS2010 UML, ADO.NET Entity Data Model, and T4

    - by Eric J.
    I worked on a fairly large project a while back where we modeled the classes in Enterprise Architect and generated the (partial) POCO classes (complete with model-driven business rule validations), persistence (NHibernate mapping file) and DDL. Based on certain model attributes we could flag alternate generation strategies or indicate that a particular portion would be entirely hand-coded. There was a good deal of initial investment, but it paid large dividends over the lifetime of a 15-developer, 3-year project.

    I'm investigating doing something similar with the current Microsoft technology stack. The place I'm stuck is that class modeling is done with the VS 2010 UML tools, but logical data modeling is done with Entity Data Modeler. Is it a reasonable path to use VS 2010 UML as the "single source of truth" and code-generate the edmx files based on the class model? That's the inverse of the common path, which is to create the entity model and use a POCO generator to generate classes. However, a good class model can be used to generate much more than just the properties, so I tend to view it as a better choice than the entity model.

  • Will JSON replace XML as a data format?

    - by 13ren
    When I first saw XML, I thought it was basically a representation of trees. Then I thought: the important thing isn't that it's a particularly good representation of trees, but that it is one that everyone agrees on, just like ASCII. And once established, it's hard to displace due to network effects. Any new alternative would have to be much better (maybe 10 times better) to displace it. Of course, ASCII has been (mostly) replaced by Unicode, for internationalization.

    According to Google Trends, XML has a 43x lead, but is declining, while JSON grows. Will JSON replace XML as a data format? (Edited to add: for which tasks? For which programmers/industries?)

    NOTES:

    - S-expressions (from Lisp) are another representation of trees, but one which has not gained mainstream adoption.
    - There are many, many other proposals, such as YAML and Protocol Buffers (for binary formats).
    - I can see JSON dominating the space of communicating with client-side AJAX (AJAJ?), and this could possibly back-spread into other systems transitively.
    - XML, being based on SGML, is better than JSON as a document format. I'm interested in XML as a data format.
    - XML has an established ecosystem that JSON lacks, especially ways of defining formats (XML Schema) and transforming them (XSLT). XML also has many other standards, especially for web services, but their weight and complexity can arguably count against XML and make people want a fresh start (similar to "web services" beginning as a fresh start over CORBA).
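
    As a minimal illustration of the "both are trees" point, here is the same record in each format (the record and field names are made up for the example):

        <person>
            <name>Ada</name>
            <languages>
                <language>English</language>
                <language>French</language>
            </languages>
        </person>

        { "person": { "name": "Ada", "languages": ["English", "French"] } }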

  • Pattern for sharing data between views (MVP or MVVM)

    - by Dovix
    What is a good pattern for sharing data between related views? I have an application where one form contains many small views; each view behaves more or less independently of the others (they communicate/interact via an event bus). Every so often I need to pass the same objects to the child views. Sometimes I need an object to be passed to a child view, and then the child passes it on to another child it itself contains. What is a good approach to sharing this data between all the views contained within the parent form (view)?

    I have looked into CAB and its approach: every "view" has a "root work item", and this work item has a dictionary that contains a shared "state" between the contained views. Is this the best approach, just a shared dictionary that all the views under a root view can access?

    My current approach is to have a function on the view that allows one to set the object for that view, something like view.SetCustomer(Customer c); then if the view contains a child view, it knows to set it on the child view too: this.childview1.SetCustomer(c);

    The application is written in C# 3.5 for WinForms, using MVP with StructureMap as an IoC/DI provider.
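
    Since the views already share an event bus, one alternative worth sketching is to publish the shared object on that bus and let each interested child subscribe, instead of cascading SetCustomer calls down the view tree. The IEventBus, CustomerChanged and IChildView names below are illustrative, not from any particular framework:

        using System;

        public class Customer { public string FirstName; public string LastName; }

        public class CustomerChanged { public Customer Customer; }

        public interface IEventBus
        {
            void Publish<T>(T message);
            void Subscribe<T>(Action<T> handler);
        }

        public interface IChildView
        {
            void SetCustomer(Customer c);
        }

        public class ChildPresenter
        {
            // Each interested child opts in; the parent form no longer needs
            // to know which of its descendants want the Customer.
            public ChildPresenter(IEventBus bus, IChildView view)
            {
                bus.Subscribe<CustomerChanged>(e => view.SetCustomer(e.Customer));
            }
        }

        // Somewhere in the parent presenter:
        // bus.Publish(new CustomerChanged { Customer = c });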
