Search Results

Search found 62853 results on 2515 pages for 'data success'.


  • ASP.NET and HTML5 Local Storage

    - by Stephen Walther
    My favorite feature of HTML5, hands-down, is HTML5 local storage (aka DOM storage). By taking advantage of HTML5 local storage, you can dramatically improve the performance of your data-driven ASP.NET applications by caching data in the browser persistently. Think of HTML5 local storage like browser cookies, but much better. Like cookies, local storage is persistent. When you add something to browser local storage, it remains there when the user returns to the website (possibly days or months later). Importantly, unlike the cookie storage limitation of 4KB, you can store up to 10 megabytes in HTML5 local storage. Because HTML5 local storage works with the latest versions of all modern browsers (IE, Firefox, Chrome, Safari), you can start taking advantage of this HTML5 feature in your applications right now. Why use HTML5 Local Storage? I use HTML5 Local Storage in the JavaScript Reference application: http://Superexpert.com/JavaScriptReference The JavaScript Reference application is an HTML5 app that provides an interactive reference for all of the syntax elements of JavaScript (You can read more about the application and download the source code for the application here). When you open the application for the first time, all of the entries are transferred from the server to the browser (all 300+ entries). All of the entries are stored in local storage. When you open the application in the future, only changes are transferred from the server to the browser. The benefit of this approach is that the application performs extremely fast. When you click the details link to view details on a particular entry, the entry details appear instantly because all of the entries are stored on the client machine. When you perform key-up searches, by typing in the filter textbox, matching entries are displayed very quickly because the entries are being filtered on the local machine. This approach can have a dramatic effect on the performance of any interactive data-driven web application. Interacting with data on the client is almost always faster than interacting with the same data on the server. Retrieving Data from the Server In the JavaScript Reference application, I use Microsoft WCF Data Services to expose data to the browser. WCF Data Services generates a REST interface for your data automatically. Here are the steps: Create your database tables in Microsoft SQL Server. For example, I created a database named ReferenceDB and a database table named Entities. Use the Entity Framework to generate your data model. For example, I used the Entity Framework to generate a class named ReferenceDBEntities and a class named Entities. Expose your data through WCF Data Services. I added a WCF Data Service to my project and modified the data service class to look like this:   using System.Data.Services; using System.Data.Services.Common; using System.Web; using JavaScriptReference.Models; namespace JavaScriptReference.Services { [System.ServiceModel.ServiceBehavior(IncludeExceptionDetailInFaults = true)] public class EntryService : DataService<ReferenceDBEntities> { // This method is called only once to initialize service-wide policies. public static void InitializeService(DataServiceConfiguration config) { config.UseVerboseErrors = true; config.SetEntitySetAccessRule("*", EntitySetRights.All); config.DataServiceBehavior.MaxProtocolVersion = DataServiceProtocolVersion.V2; } // Define a change interceptor for the Products entity set. 
[ChangeInterceptor("Entries")] public void OnChangeEntries(Entry entry, UpdateOperations operations) { if (!HttpContext.Current.Request.IsAuthenticated) { throw new DataServiceException("Cannot update reference unless authenticated."); } } } } The WCF data service is named EntryService. Notice that it derives from DataService<ReferenceDBEntities>. Because it derives from DataService<ReferenceDBEntities>, the data service exposes the contents of the ReferenceDB database. In the code above, I defined a ChangeInterceptor to prevent unauthenticated users from making changes to the database. Anyone can retrieve data through the service, but only authenticated users are allowed to make changes. After you expose data through a WCF Data Service, you can use jQuery to retrieve the data by performing an Ajax call. For example, I am using an Ajax call that looks something like this to retrieve the JavaScript entries from the EntryService.svc data service: $.ajax({ dataType: "json", url: "/Services/EntryService.svc/Entries", success: function (result) { var data = callback(result["d"]); } }); Notice that you must unwrap the data using result["d"]. After you unwrap the data, you have a JavaScript array of the entries. I'm transferring all 300+ entries from the server to the client when the application is opened for the first time. In other words, I transfer the entire database from the server to the client, once and only once, when the application is opened for the first time. The data is transferred using JSON. Here is a fragment: { "d" : [ { "__metadata": { "uri": "http://superexpert.com/javascriptreference/Services/EntryService.svc/Entries(1)", "type": "ReferenceDBModel.Entry" }, "Id": 1, "Name": "Global", "Browsers": "ff3_6,ie8,ie9,c8,sf5,es3,es5", "Syntax": "object", "ShortDescription": "Contains global variables and functions", "FullDescription": "<p>\nThe Global object is determined by the host environment. In web browsers, the Global object is the same as the windows object.\n</p>\n<p>\nYou can use the keyword <code>this</code> to refer to the Global object when in the global context (outside of any function).\n</p>\n<p>\nThe Global object holds all global variables and functions. For example, the following code demonstrates that the global <code>movieTitle</code> variable refers to the same thing as <code>window.movieTitle</code> and <code>this.movieTitle</code>.\n</p>\n<pre>\nvar movieTitle = \"Star Wars\";\nconsole.log(movieTitle === this.movieTitle); // true\nconsole.log(movieTitle === window.movieTitle); // true\n</pre>\n", "LastUpdated": "634298578273756641", "IsDeleted": false, "OwnerId": null }, { "__metadata": { "uri": "http://superexpert.com/javascriptreference/Services/EntryService.svc/Entries(2)", "type": "ReferenceDBModel.Entry" }, "Id": 2, "Name": "eval(string)", "Browsers": "ff3_6,ie8,ie9,c8,sf5,es3,es5", "Syntax": "function", "ShortDescription": "Evaluates and executes JavaScript code dynamically", "FullDescription": "<p>\nThe following code evaluates and executes the string \"3+5\" at runtime.\n</p>\n<pre>\nvar result = eval(\"3+5\");\nconsole.log(result); // returns 8\n</pre>\n<p>\nYou can rewrite the code above like this:\n</p>\n<pre>\nvar result;\neval(\"result = 3+5\");\nconsole.log(result);\n</pre>", "LastUpdated": "634298580913817644", "IsDeleted": false, "OwnerId": 1 } … ]} I worried about the amount of time that it would take to transfer the records.
According to Google Chrome, it takes about 5 seconds to retrieve all 300+ records on a broadband connection over the Internet. 5 seconds is a small price to pay to avoid performing any server fetches of the data in the future. And here are the estimated times for different types of connections, as measured with Fiddler: Notice that using a modem, it takes 33 seconds to download the database. 33 seconds is a significant chunk of time. So, I would not use the approach of transferring the entire database up front if you expect a significant portion of your website audience to connect to your website with a modem. Adding Data to HTML5 Local Storage After the JavaScript entries are retrieved from the server, the entries are stored in HTML5 local storage. Here's the reference documentation for HTML5 storage for Internet Explorer: http://msdn.microsoft.com/en-us/library/cc197062(VS.85).aspx You access local storage through the window.localStorage object in JavaScript. This object contains key/value pairs. For example, you can use the following JavaScript code to add a new item to local storage: <script type="text/javascript"> window.localStorage.setItem("message", "Hello World!"); </script> You can use the Google Chrome Storage tab in the Developer Tools (hit CTRL-SHIFT I in Chrome) to view items added to local storage. After you add an item to local storage, you can read it at any time in the future by using the window.localStorage.getItem() method: <script type="text/javascript"> var message = window.localStorage.getItem("message"); </script> You can only add strings to local storage and not JavaScript objects such as arrays. Therefore, before adding a JavaScript object to local storage, you need to convert it into a JSON string. In the JavaScript Reference application, I use a wrapper around local storage that looks something like this: function Storage() { this.get = function (name) { return JSON.parse(window.localStorage.getItem(name)); }; this.set = function (name, value) { window.localStorage.setItem(name, JSON.stringify(value)); }; this.clear = function () { window.localStorage.clear(); }; } If you use the wrapper above, then you can add arbitrary JavaScript objects to local storage like this: var store = new Storage(); // Add array to storage var products = [ {name:"Fish", price:2.33}, {name:"Bacon", price:1.33} ]; store.set("products", products); // Retrieve items from storage var products = store.get("products"); Modern browsers support the JSON object natively. If you need the script above to work with older browsers then you should download the JSON2.js library from: https://github.com/douglascrockford/JSON-js The JSON2 library will use the native JSON object if a browser already supports JSON. Merging Server Changes with Browser Local Storage When you first open the JavaScript Reference application, the entire database of JavaScript entries is transferred from the server to the browser. Two items are added to local storage: entries and entriesLastUpdated. The first item contains the entire entries database (a big JSON string of entries). The second item, a timestamp, represents the version of the entries. Whenever you open the JavaScript Reference in the future, the entriesLastUpdated timestamp is passed to the server. Only records that have been deleted, updated, or added since entriesLastUpdated are transferred to the browser.
The OData query to get the latest updates looks like this: http://superexpert.com/javascriptreference/Services/EntryService.svc/Entries?$filter=(LastUpdated%20gt%20634301199890494792L) If you remove URL encoding, the query looks like this: http://superexpert.com/javascriptreference/Services/EntryService.svc/Entries?$filter=(LastUpdated gt 634301199890494792L) This query returns only those entries where the value of LastUpdated > 634301199890494792 (the version timestamp). The changes – new JavaScript entries, deleted entries, and updated entries – are merged with the existing entries in local storage. The JavaScript code for performing the merge is contained in the EntriesHelper.js file. The merge() method looks like this: merge: function (oldEntries, newEntries) { // concat (this performs the add) oldEntries = oldEntries || []; var mergedEntries = oldEntries.concat(newEntries); // sort this.sortByIdThenLastUpdated(mergedEntries); // prune duplicates (this performs the update) mergedEntries = this.pruneDuplicates(mergedEntries); // delete mergedEntries = this.removeIsDeleted(mergedEntries); // Sort this.sortByName(mergedEntries); return mergedEntries; }, The contents of local storage are then updated with the merged entries. I spent several hours writing the merge() method (much longer than I expected). I found two resources to be extremely useful. First, I wrote extensive unit tests for the merge() method. I wrote the unit tests using server-side JavaScript. I describe this approach to writing unit tests in this blog entry. The unit tests are included in the JavaScript Reference source code. Second, I found the following blog entry to be super useful (thanks Nick!): http://nicksnettravels.builttoroam.com/post/2010/08/03/OData-Synchronization-with-WCF-Data-Services.aspx One big challenge that I encountered involved timestamps. I originally tried to store an actual UTC time as the value of the entriesLastUpdated item. I quickly discovered that trying to work with dates in JSON turned out to be a big can of worms that I did not want to open. Next, I tried to use a SQL timestamp column. However, I learned that OData cannot handle the timestamp data type when doing a filter query. Therefore, I ended up using a bigint column in SQL and manually creating the value when a record is updated. I overrode the SaveChanges() method to look something like this: public override int SaveChanges(SaveOptions options) { var changes = this.ObjectStateManager.GetObjectStateEntries( EntityState.Modified | EntityState.Added | EntityState.Deleted); foreach (var change in changes) { var entity = change.Entity as IEntityTracking; if (entity != null) { entity.LastUpdated = DateTime.Now.Ticks; } } return base.SaveChanges(options); } Notice that I assign DateTime.Now.Ticks to the entity.LastUpdated property whenever an entry is modified, added, or deleted. Summary After building the JavaScript Reference application, I am convinced that HTML5 local storage can have a dramatic impact on the performance of any data-driven web application. If you are building a web application that involves extensive interaction with data then I recommend that you take advantage of this new feature included in the HTML5 standard.
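    Putting the pieces above together, a minimal sketch of the start-up logic might look like the following. The Storage wrapper and the merge() method are the ones shown above; the EntriesHelper object name and the way the new version stamp is chosen are assumptions for illustration, not the article's exact code:

        // Load whatever is already cached (empty on the very first visit).
        var store = new Storage();
        var entries = store.get("entries") || [];
        var lastUpdated = store.get("entriesLastUpdated") || 0;

        // Ask the OData service only for records changed since the cached timestamp.
        $.ajax({
            dataType: "json",
            url: "/Services/EntryService.svc/Entries?$filter=(LastUpdated gt " + lastUpdated + "L)",
            success: function (result) {
                var changes = result["d"];
                // Fold adds, updates and deletes into the cached copy.
                entries = EntriesHelper.merge(entries, changes);
                store.set("entries", entries);
                // Remember the newest version stamp for the next visit.
                for (var i = 0; i < changes.length; i++) {
                    if (changes[i].LastUpdated > lastUpdated) {
                        lastUpdated = changes[i].LastUpdated;
                    }
                }
                store.set("entriesLastUpdated", lastUpdated);
            }
        });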

    Read the article

  • Play audio file data - Spring MVC

    - by Vijay Veeraraghavan
    In my web application, I have various audio clips uploaded by users, stored in a BLOB column in the database. The audio files are low-bit-rate WAV files. The clips are secured: a user can see only the clips he has uploaded. Instead of the user downloading a clip and playing it in his own player, I need it to be streamed online in the web page itself. In the JSP I use the <audio> tag with the source mapped to the controller mapping URL. <td> <audio controls><source src="recfile/${au.id}" type="audio/mpeg" /></audio> </td> Here, recfile is the request mapping and au.id is the audio id. In the controller I process the request like this: @RequestMapping(value = "/recfile/{id}", method = RequestMethod.GET, produces = { MediaType.APPLICATION_OCTET_STREAM_VALUE }) public HttpEntity<byte[]> downloadRecipientFile(@PathVariable("id") int id, ModelMap model, HttpServletResponse response) throws IOException, ServletException { LOGGER.debug("[GroupListController downloadRecipientFile]"); VoiceAudioLibrary dGroup = audioClipService.findAudioClip(id); if (dGroup == null || dGroup.getAudioData() == null || dGroup.getAudioData().length <= 0) { throw new ServletException("No clip found/clip has no data, id=" + id); } HttpHeaders header = new HttpHeaders(); // I tried this too: //header.setContentType(new MediaType("audio", "mp3")); header.setContentType(new MediaType("audio", "vnd.wave")); header.setContentLength(dGroup.getAudioData().length); return new HttpEntity<byte[]>(dGroup.getAudioData(), header); } When the JSP loads, the controller gets the request and serves back the audio data fetched from the database, and the JSP shows the player with its controls. But when I play it, nothing happens. Why is that? Am I missing anything in the configuration? Am I doing it right?
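    One mismatch that stands out: the <source> tag advertises audio/mpeg, the mapping produces application/octet-stream, and the stored data is WAV. A sketch of a controller method that declares the WAV type consistently (service and entity names are the ones from the question; ResponseEntity is used here only to return a status code, and this sketch does not implement HTTP range requests, which some browsers want for seeking):

        @RequestMapping(value = "/recfile/{id}", method = RequestMethod.GET)
        public ResponseEntity<byte[]> streamClip(@PathVariable("id") int id) {
            VoiceAudioLibrary clip = audioClipService.findAudioClip(id);
            if (clip == null || clip.getAudioData() == null) {
                return new ResponseEntity<byte[]>(HttpStatus.NOT_FOUND);
            }
            HttpHeaders headers = new HttpHeaders();
            // Match the JSP: <source src="recfile/${au.id}" type="audio/wav" />
            headers.setContentType(MediaType.parseMediaType("audio/wav"));
            headers.setContentLength(clip.getAudioData().length);
            return new ResponseEntity<byte[]>(clip.getAudioData(), headers, HttpStatus.OK);
        }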

    Read the article

  • export data from WCF Service to excel

    - by Dave
    I need to provide an export-to-Excel feature for a large amount of data returned from a WCF web service. The code to load the DataList is below: List<resultSet> r = myObject.ReturnResultSet(myWebRequestUrl); //call to WCF service myDataList.DataSource = r; myDataList.DataBind(); I am using the Response object to do the job: Response.Clear(); Response.Buffer = true; Response.ContentType = "application/vnd.ms-excel"; Response.AddHeader("Content-Disposition", "attachment; filename=MyExcel.xls"); StringBuilder sb = new StringBuilder(); StringWriter sw = new StringWriter(sb); HtmlTextWriter tw = new HtmlTextWriter(sw); myDataList.RenderControl(tw); Response.Write(sb.ToString()); Response.End(); The problem is that the WCF service times out for large amounts of data (about 5000 rows) and the result set is null. When I debug the service, I can see the window for saving/opening the Excel sheet appear before the service returns the result, and hence the Excel sheet is always empty. Please help me figure this out.
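    The post does not show the client-side binding configuration, but the timeout is usually the first thing to look at with a 5000-row result. A hedged sketch of the client config values that commonly matter for large responses (the binding name is made up; apply it to the endpoint that calls the service):

        <system.serviceModel>
          <bindings>
            <basicHttpBinding>
              <binding name="LargeResultBinding"
                       sendTimeout="00:10:00"
                       receiveTimeout="00:10:00"
                       maxReceivedMessageSize="10485760" />
            </basicHttpBinding>
          </bindings>
        </system.serviceModel>

    If raising the limits is not enough, paging the service call (several smaller requests assembled on the page before rendering to Excel) avoids one long-running call altogether.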

    Read the article

  • How to bind a datatable to a wpf editable combobox: selectedItem showing System.Data.DataRowView

    - by black sensei
    Hello Good People!! I bound a DataTable to a ComboBox and defined a DataTemplate in the ItemTemplate. I can see the desired values in the ComboBox dropdown list, but what I see in the SelectedItem is System.Data.DataRowView. Here is my code: <ComboBox Margin="128,139,123,0" Name="cmbEmail" Height="23" VerticalAlignment="Top" TabIndex="1" ToolTip="enter the email you signed up with here" IsEditable="True" IsSynchronizedWithCurrentItem="True" ItemsSource="{Binding}"> <ComboBox.ItemTemplate> <DataTemplate> <StackPanel> <TextBlock Text="{Binding Path=username}"/> </StackPanel> </DataTemplate> </ComboBox.ItemTemplate> </ComboBox> The code-behind is: if (con != null) { con.Open(); //users table has columns id | username | pass SQLiteCommand cmd = new SQLiteCommand("select * from users", con); SQLiteDataAdapter da = new SQLiteDataAdapter(cmd); userdt = new DataTable("users"); da.Fill(userdt); cmbEmail.DataContext = userdt; } I've been looking for something like SelectedValueTemplate or SelectedItemTemplate to do the same kind of data templating, but I found none. I'd like to ask if I'm doing something wrong or if this is a known issue with ComboBox binding. If something is wrong in my code, please point me in the right direction. Thanks for reading this.
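    Worth noting here: for an editable ComboBox the text box portion is filled from ToString() of the selected item, which for a DataRowView is the type name, and an ItemTemplate only affects the dropdown. A sketch of the usual workaround, pointing the text at the column instead of using a template (attribute values assume the same username column; DisplayMemberPath cannot be combined with an ItemTemplate, so the DataTemplate is dropped in this variant):

        <ComboBox Name="cmbEmail"
                  IsEditable="True"
                  IsSynchronizedWithCurrentItem="True"
                  ItemsSource="{Binding}"
                  DisplayMemberPath="username"
                  TextSearch.TextPath="username" />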

    Read the article

  • Add new item to UITableView and Core Data as data source?

    - by David.Chu.ca
    I am having trouble adding a new item to my table view with Core Data. Here is the brief logic in my code. In my ViewController class, I have a button to toggle the edit mode: - (void) toggleEditing { UITableView *tv = (UITableView *)self.view; if (isEdit) // class level flag for editing { self.newEntity = [NSEntityDescription insertNewObjectForEntityForName:@"entity1" inManagedObjectContext:managedObjectContext]; NSArray *insertIndexPaths = [NSArray arrayWithObjects: [NSIndexPath indexPathForRow:0 inSection:0], nil]; // empty at beginning so hard code numbers here. [tv insertRowsAtIndexPaths:insertIndexPaths withRowAnimation:UITableViewRowAnimationFade]; [self.tableView setEditing:YES animated:YES]; // enable editing mode } else { ...} } In this block of code, I added a new item to my current managed object context first, and then I added a new row to my tv. I think that both the number of objects in my data source or context and the number of rows in my table view should be 1. However, I got an exception in the event tableView:numberOfRowsInSection: Invalid update: invalid number of rows in section 0. The number of rows contained in an existing section after the update (0) must be equal to the number of rows contained in that section before the update (0), plus or minus the number of rows inserted or deleted from that section (1 inserted, 0 deleted). The exception was raised right after the delegate event: - (NSInteger) tableView:(UITableView *) tableView numberOfRowsInSection:(NSInteger) section { // fetchedResultsController is class member var NSFetchedResultsController id <NSFetchedResultsSectionInfo> sectionInfo = [[fetchedResultsController sections] objectAtIndex: section]; NSInteger rows = [sectionInfo numberOfObjects]; return rows; } In debug mode, I found that rows was still 0 and that the event was invoked after the toggleEditing event. It looks like the sectionInfo obtained from fetchedResultsController did not include the newly inserted entity object. Not sure if I missed anything or any steps? I am also not sure how this works: how does the fetchedResultsController get notified of, or reflect, the change when a new entity is inserted into the current managed object context?
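    A common way to keep the fetched results controller and the table in agreement is to let the controller drive the row insertion instead of calling insertRowsAtIndexPaths: by hand: in toggleEditing only insert the object into the context, set the view controller as the NSFetchedResultsController's delegate, and do the table updates in the delegate callbacks. A rough sketch of those callbacks (standard delegate method names; only the insert case shown, error handling omitted):

        - (void)controllerWillChangeContent:(NSFetchedResultsController *)controller {
            [self.tableView beginUpdates];
        }

        - (void)controller:(NSFetchedResultsController *)controller
           didChangeObject:(id)anObject
               atIndexPath:(NSIndexPath *)indexPath
             forChangeType:(NSFetchedResultsChangeType)type
              newIndexPath:(NSIndexPath *)newIndexPath {
            if (type == NSFetchedResultsChangeInsert) {
                // The controller supplies the index path, so it always matches
                // what numberOfRowsInSection will report.
                [self.tableView insertRowsAtIndexPaths:[NSArray arrayWithObject:newIndexPath]
                                      withRowAnimation:UITableViewRowAnimationFade];
            }
        }

        - (void)controllerDidChangeContent:(NSFetchedResultsController *)controller {
            [self.tableView endUpdates];
        }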

    Read the article

  • jqGrid with JSON data renders table as empty

    - by jgreep
    I'm trying to create a jqgrid, but the table is empty. The table renders, but the data doesn't show. The data I'm getting back from the php call is: { "page":"1", "total":1, "records":"10", "rows":[ {"id":"2:1","cell":["1","image","Chief Scout","Highest Award test","0"]}, {"id":"2:2","cell":["2","image","Link Badge","When you are invested as a Scout, you may be eligible to receive a Link Badge. (See page 45)","0"]}, {"id":"2:3","cell":["3","image","Pioneer Scout","Upon completion of requirements, the youth is invested as a Pioneer Scout","0"]}, {"id":"2:4","cell":["4","image","Voyageur Scout Award","Voyageur Scout Award is the right after Pioneer Scout.","0"]}, {"id":"2:5","cell":["5","image","Voyageur Citizenship","Learning about and caring for your community.","0"]}, {"id":"2:6","cell":["6","image","Fish and Wildlife","Demonstrate your knowledge and involvement in fish and wildlife management.","0"]}, {"id":"2:7","cell":["7","image","Photography","To recognize photography knowledge and skills","0"]}, {"id":"2:8","cell":["8","image","Recycling","Demonstrate your knowledge and involvement in Recycling","0"]}, {"id":"2:10","cell":["10","image","Voyageur Leadership ","Show leadership ability","0"]}, {"id":"2:11","cell":["11","image","World Conservation","World Conservation Badge","0"]} ]} The javascript configuration looks like so: $("#"+tableId).jqGrid ({ url:'getAwards.php?id='+classId, dataType : 'json', mtype:'POST', colNames:['Id','Badge','Name','Description',''], colModel : [ {name:'awardId', width:30, sortable:true, align:'center'}, {name:'badge', width:40, sortable:false, align:'center'}, {name:'name', width:180, sortable:true, align:'left'}, {name:'description', width:380, sortable:true, align:'left'}, {name:'selected', width:0, sortable:false, align:'center'} ], sortname: "awardId", sortorder: "asc", pager: $('#'+tableId+'_pager'), rowNum:15, rowList:[15,30,50], caption: 'Awards', viewrecords:true, imgpath: 'scripts/jqGrid/themes/green/images', jsonReader : { root: "rows", page: "page", total: "total", records: "records", repeatitems: true, cell: "cell", id: "id", userdata: "userdata", subgrid: {root:"rows", repeatitems: true, cell:"cell" } }, width: 700, height: 200 }); The HTML looks like: <table class="awardsList" id="awardsList2" class="scroll" name="awardsList" /> <div id="awardsList2_pager" class="scroll"></div> I'm not sure that I needed to define jsonReader, since I've tried to keep to the default. If the php code will help, I can post it too.
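    One small thing worth double-checking before anything else: jqGrid's option is spelled datatype (all lowercase), not dataType. With the option misspelled, the grid may fall back to its default and never parse the JSON response, which matches the "table renders but stays empty" symptom. A minimal corrected fragment (everything else left as in the question):

        $("#" + tableId).jqGrid({
            url: 'getAwards.php?id=' + classId,
            datatype: 'json',   // lowercase 't': jqGrid ignores "dataType"
            mtype: 'POST'
            // ...remaining options exactly as in the question
        });

    Since the response already follows jqGrid's default JSON layout, the explicit jsonReader block could likely be removed once the data loads.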

    Read the article

  • how to import csv data into django models

    - by little_fish
    I have some CSV data and I want to import it into Django models. Here is an example of the CSV data: 1;"02-01-101101";"Worm Gear HRF 50";"Ratio 1 : 10";"input shaft, output shaft, direction A, color dark green"; 2;"02-01-101102";"Worm Gear HRF 50";"Ratio 1 : 20";"input shaft, output shaft, direction A, color dark green"; 3;"02-01-101103";"Worm Gear HRF 50";"Ratio 1 : 30";"input shaft, output shaft, direction A, color dark green"; 4;"02-01-101104";"Worm Gear HRF 50";"Ratio 1 : 40";"input shaft, output shaft, direction A, color dark green"; 5;"02-01-101105";"Worm Gear HRF 50";"Ratio 1 : 50";"input shaft, output shaft, direction A, color dark green"; And I have a Django model named Product. In Product there are some fields like name, description and price, and I want to do something like this: product=Product() product.name = "Worm Gear HRF 70(02-01-101116)" product.description = "input shaft, output shaft, direction A, color dark green" product.price = 100
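    A minimal sketch of a loader, assuming the file is semicolon-delimited as shown, that the model is importable as myapp.models.Product (an assumed path), and that price is not in the file so a placeholder is used. Run it from ./manage.py shell or wrap it in a management command:

        import csv

        from myapp.models import Product  # assumed import path

        with open('products.csv') as f:
            reader = csv.reader(f, delimiter=';')
            for row in reader:
                if not row or not row[0].strip():
                    continue
                # columns: id, part number, name, ratio, description
                _, part_no, name, ratio, description = row[:5]
                Product.objects.create(
                    name='%s (%s)' % (name, part_no),
                    description=description,
                    price=0,  # price is not in the CSV; set a real value or map it later
                )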

    Read the article

  • Thread-safe data structures

    - by Inso Reiges
    Hello, I have to design a data structure that is to be used in a multi-threaded environment. The basic API is simple: insert element, remove element, retrieve element, check that element exists. The structure's implementation uses implicit locking to guarantee the atomicity of a single API call. After I implemented this, it became apparent that what I really need is atomicity across several API calls. For example, if a caller needs to check the existence of an element before trying to insert it, he can't do that atomically even if each single API call is atomic: if(!data_structure.exists(element)) { data_structure.insert(element); } The example is somewhat awkward, but the basic point is that we can't trust the result of the exists call anymore after we return from the atomic context (the generated assembly clearly shows a minor chance of a context switch between the two calls). What I currently have in mind to solve this is exposing the lock through the data structure's public API. This way clients will have to explicitly lock things, but at least they won't have to create their own locks. Is there a better commonly-known solution to these kinds of problems? And as long as we're at it, can you advise some good literature on thread-safe design? Thank you.
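    Besides exposing the lock, the other commonly recommended option is to expose the compound operation itself, so the check and the insert happen under a single acquisition of the internal lock. A sketch in Java (names are illustrative, not from the question):

        public final class ConcurrentSet<E> {
            private final Object lock = new Object();
            private final java.util.HashSet<E> items = new java.util.HashSet<E>();

            // Atomic "check then insert": returns true only if the element was added.
            public boolean insertIfAbsent(E element) {
                synchronized (lock) {
                    if (items.contains(element)) {
                        return false;
                    }
                    return items.add(element);
                }
            }
        }

    java.util.concurrent takes the same approach (for example ConcurrentHashMap.putIfAbsent), and Java Concurrency in Practice is the usual literature recommendation for this style of design, whatever the implementation language.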

    Read the article

  • OCR with Neural network: data extraction

    - by Sebastian Hoitz
    I'm using the AForge library framework and its neural network. At the moment when I train my network I create lots of images (one image per letter per font) at a big size (30 pt), cut out the actual letter, scale this down to a smaller size (10x10 px) and then save it to my harddisk. I can then go and read all those images, creating my double[] arrays with data. At the moment I do this on a pixel basis. So once I have successfully trained my network I test the network and let it run on a sample image with the alphabet at different sizes (uppercase and lowercase). But the result is not really promising. I trained the network so that RunEpoch had an error of about 1.5 (so almost no error), but there are still some letters left that do not get identified correctly in my test image. Now my question is: Is this caused because I have a faulty learning method (pixelbased vs. the suggested use of receptors in this article: http://www.codeproject.com/KB/cs/neural_network_ocr.aspx - are there other methods I can use to extract the data for the network?) or can this happen because my segmentation-algorithm to extract the letters from the image to look at is bad? Does anyone have ideas on how to improve it?

    Read the article

  • Adding new record to a VFP data table in VB.NET with ADO recordsets

    - by Gerry
    I am trying to add a new record to a Visual FoxPro data table using an ADO recordset, with no luck. The code runs fine with no exceptions, but when I check the dbf after the fact there is no new record. The mDataPath variable shown in the code snippet is the path to the .dbc file for the entire database. A note about the For loop at the bottom: I am adding the body of incoming emails to this MEMO field, so I thought I needed to break the addition of this string into 256-character chunks. Any guidance would be greatly appreciated. cnn1.Open("Driver={Microsoft Visual FoxPro Driver};" & _ "SourceType=DBC;" & _ "SourceDB=" & mDataPath & ";Exclusive=No") Dim RS As ADODB.Recordset = New ADODB.Recordset RS.Open("select * from gennote", cnn1, 1, 3, 1) RS.AddNew() 'Assign values to the first three fields RS.Fields("ignnoteid").Value = NextIDI RS.Fields("cnotetitle").Value = "'" & mail.Subject & "'" RS.Fields("cfilename").Value = "''" 'Looping through 254 characters at a time and adding the data 'to the ADO Field buffer For i As Integer = 1 To Len(memo) Step liChunkSize liStartAt = i liWorkString = Mid(mail.Body, liStartAt, liChunkSize) RS.Fields("mnote").AppendChunk(liWorkString) Next 'Update the recordset RS.Update() RS.Requery() RS.Close()

    Read the article

  • posting nutch data into a BASIC auth secured Solr instance

    - by mlathe
    Hi. I've secured a solr instance using BASIC auth, kind of how it is shown here: http://blog.comtaste.com/2009/02/securing_your_solr_server_on_t.html Now i'm trying to update my batch processes to push data into the authenticated instance. The ones using "curl" are easy, but i also have a Nutch crawl that uses the "solrindex" command to push data into Solr. When i do that i get this error: 2010-02-22 12:09:28,226 INFO auth.AuthChallengeProcessor - basic authentication scheme selected 2010-02-22 12:09:28,229 INFO httpclient.HttpMethodDirector - No credentials available for BASIC 'Tomcat Manager Application'@ninja:5500 2010-02-22 12:09:28,236 WARN mapred.LocalJobRunner - job_local_0001 org.apache.solr.common.SolrException: Unauthorized Unauthorized request: http://ninja:5500/solr/foo/update?wt=javabin&version=2.2 at org.apache.solr.client.solrj.impl.CommonsHttpSolrServer.request(CommonsHttpSolrServer.java:343) at org.apache.solr.client.solrj.impl.CommonsHttpSolrServer.request(CommonsHttpSolrServer.java:183) at org.apache.solr.client.solrj.request.UpdateRequest.process(UpdateRequest.java:217) at org.apache.solr.client.solrj.SolrServer.add(SolrServer.java:48) at org.apache.nutch.indexer.solr.SolrWriter.close(SolrWriter.java:69) at org.apache.nutch.indexer.IndexerOutputFormat$1.close(IndexerOutputFormat.java:48) at org.apache.hadoop.mapred.ReduceTask.run(ReduceTask.java:447) at org.apache.hadoop.mapred.LocalJobRunner$Job.run(LocalJobRunner.java:170) 2010-02-22 12:09:29,134 FATAL solr.SolrIndexer - SolrIndexer: java.io.IOException: Job failed! at org.apache.hadoop.mapred.JobClient.runJob(JobClient.java:1232) at org.apache.nutch.indexer.solr.SolrIndexer.indexSolr(SolrIndexer.java:73) at org.apache.nutch.indexer.solr.SolrIndexer.run(SolrIndexer.java:95) at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:65) at org.apache.nutch.indexer.solr.SolrIndexer.main(SolrIndexer.java:104) Apparently nutch uses SolrJ to push the content, and after going through the solrj code, it's clear that it uses commons-httpclient without providing a way to set the credentials. Here are my question(s) Is this possible to do? ie push from nutch into a BASIC auth secured Solr instance? Is it possible to tell commons-httpclient about a credential without explicitly doing an _httpclient.getState().setCredentials(...)? Anyother ideas? One idea i had was to use an IPfiltering Valve for just the "update" Solr webservices. That would mean you could only make an update call from certain nodes. Thanks
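    The IP-filtering idea can be done without touching SolrJ or Nutch at all: map a servlet filter onto the update handler inside the Solr webapp and reject anything that is not the crawler host. A rough sketch (the allowed address is an assumption; map it in web.xml with a url-pattern such as /foo/update):

        import java.io.IOException;
        import javax.servlet.*;
        import javax.servlet.http.HttpServletResponse;

        public class UpdateIpFilter implements Filter {
            private static final String ALLOWED = "10.0.0.5"; // crawler host, assumed

            public void doFilter(ServletRequest req, ServletResponse res, FilterChain chain)
                    throws IOException, ServletException {
                if (ALLOWED.equals(req.getRemoteAddr())) {
                    chain.doFilter(req, res);   // crawler: let the update through
                } else {
                    ((HttpServletResponse) res).sendError(HttpServletResponse.SC_FORBIDDEN);
                }
            }

            public void init(FilterConfig cfg) {}
            public void destroy() {}
        }

    Read-only query traffic stays open, so Nutch's solrindex step no longer needs credentials at all.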

    Read the article

  • Ext.data.JsonStore + Ext.DataView = not loading records

    - by Mulone
    Hi guys, I'm trying to make a DataView work (on Ext JS 2.3). Here is the jsonStore, which seems to be working (it calls the server and gets a valid response). Ext.onReady(function(){ var prefStore = new Ext.data.JsonStore({ autoLoad: true, //autoload the data url: 'getHighestUserPreferences', baseParams:{ userId: 'andreab', max: '50' }, root: 'preferences', fields: [ {name:'prefId', type: 'int'}, {name:'absInteractionScore', type:'float'} ] }); Then the xtemplate: var tpl = new Ext.XTemplate( '<tpl for=".">', '<div class="thumb-wrap" id="{name}">', '<div class="thumb"><img src="{url}" title="{name}"></div>', '<span class="x-editable">{shortName}</span></div>', '</tpl>', '<div class="x-clear"></div>' ); The panel: var panel = new Ext.Panel({ id:'geoPreferencesView', frame:true, width:600, autoHeight:true, collapsible:false, layout:'fit', title:'Geo Preferences', And the DataView items: new Ext.DataView({ store: prefStore, tpl: tpl, autoHeight:true, multiSelect: true, overClass:'x-view-over', itemSelector:'div.thumb-wrap', emptyText: 'No images to display' }) }); panel.render('extOutput'); }); What I get in the page is a blue frame with the title, but nothing in it. How can I debug this and see why it is not working? Cheers, Mulone
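    One likely culprit: the XTemplate renders {name}, {url} and {shortName}, but the store only defines prefId and absInteractionScore, so every record renders as empty markup inside the blue frame. A sketch with the template limited to fields the store actually declares (swap in real field names if the JSON carries more):

        // Template fields must match the store's configured fields.
        var tpl = new Ext.XTemplate(
            '<tpl for=".">',
                '<div class="thumb-wrap" id="pref-{prefId}">',
                    '<span class="x-editable">{prefId}: {absInteractionScore}</span>',
                '</div>',
            '</tpl>',
            '<div class="x-clear"></div>'
        );

        // Quick check that records actually arrive and the problem is the template,
        // not the root/fields mapping:
        prefStore.on('load', function (store, records) {
            console.log('loaded', records.length, 'records');
        });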

    Read the article

  • using text and ntext SQL Datatypes in RPG

    - by David Stratton
    I'll preface this by saying that I'm a .NET developer, and am NOT an RPG developer. I'm working with one of our RPG developers to come up with a solution, so any suggestions you provide will get passed to him. We have a scenario where we want our iSeries to read from a SQL Server database. One of the columns is a TEXT column. In RPG, there is no equivalent data type to use for this. We've gone back and forth on this, and our current plan is to change course and have our SQL Server write out a text file, which the iSeries can pick up and parse. This is, however, a last-resort option, as the data in the file is sensitive, and we'd like to avoid the additional security overhead. We've already got the SQL Server locked down as tight as possible (one user only has read access to this, and that user is an iSeries user). We don't want to have to worry about transferring files back and forth. However, at this point, we see no other option. We have no in-house Java developers, and need to do this in RPG. So I'm wondering if there are any RPG developers out there who have faced this situation and have any advice.

    Read the article

  • Data Driven MSTest: DataRow is always null

    - by David Back
    I am having a problem using Visual Studio data driven testing. I have tried to deconstruct this to the simplest example. I am using Visual Studio 2012. I create a new unit test project. I am referencing system data. My code looks like this: namespace UnitTestProject1 { [TestClass] public class UnitTest1 { [DeploymentItem(@"OrderService.csv")] [DataSource("Microsoft.VisualStudio.TestTools.DataSource.CSV", "OrderService.csv", "OrderService#csv", DataAccessMethod.Sequential)] [TestMethod] public void TestMethod1() { try { Debug.WriteLine(TestContext.DataRow["ID"]); } catch (Exception ex) { Assert.Fail(); } } public TestContext TestContext { get; set; } } } I have a very small csv file that I have set the Build Options to to 'Content' and 'Copy Always'. I have added a .testsettings file to the solution, and set enable deployment, and added the csv file. I have tried this with and without |DataDirectory|, and with/without a full path specified (the same path that I get with Environment.CurrentDirectory). I've tried variations of "../" and "../../" just in case. Right now the csv is at the project root level, same as the .cs test code file. I have tried variations with xml as well as csv. TestContext is not null, but DataRow always is. I have not gotten this to work despite a lot of fiddling with it. I'm not sure what I'm doing wrong. Does mstest create a log anywhere that would tell me if it is failing to find the csv file, or what specific error might be causing DataRow to fail to populate? I have tried the following csv files: ID 1 2 3 4 and ID, Whatever 1,0 2,1 3,2 4,3 So far, no dice.

    Read the article

  • SVM Classification - minimum number of input sets for each class

    - by Amol Joshi
    I'm trying to build an app to detect images which are advertisements on webpages. Once I detect those, I'll not be allowing them to be displayed on the client side. From the help that I got here on Stack Overflow, I thought SVM was the best approach for my aim. So, I have coded an SVM and an SMO myself. The dataset which I got from the UCI data repository has 3280 instances (link to dataset: http://archive.ics.uci.edu/ml/datasets/Internet+Advertisements), where around 400 of them are from the class representing advertisement images and the rest represent non-advertisement images. Right now I'm taking the first 2800 input sets and training the SVM. But after looking at the accuracy rate I realised that most of those 2800 input sets are from the non-advertisement image class, so I'm getting very good accuracy for that class. So what can I do here? About how many input sets should I give the SVM for training, and how many of them for each class? Thanks. Cheers. (Basically made a new question because the context was different from my previous question: http://stackoverflow.com/questions/1991113/optimization-of-neural-network-input-data)

    Read the article

  • Working with a large data object between ruby processes

    - by Gdeglin
    I have a Ruby hash that reaches approximately 10 megabytes if written to a file using Marshal.dump. After gzip compression it is approximately 500 kilobytes. Iterating through and altering this hash is very fast in ruby (fractions of a millisecond). Even copying it is extremely fast. The problem is that I need to share the data in this hash between Ruby on Rails processes. In order to do this using the Rails cache (file_store or memcached) I need to Marshal.dump the file first, however this incurs a 1000 millisecond delay when serializing the file and a 400 millisecond delay when serializing it. Ideally I would want to be able to save and load this hash from each process in under 100 milliseconds. One idea is to spawn a new Ruby process to hold this hash that provides an API to the other processes to modify or process the data within it, but I want to avoid doing this unless I'm certain that there are no other ways to share this object quickly. Is there a way I can more directly share this hash between processes without needing to serialize or deserialize it? Here is the code I'm using to generate a hash similar to the one I'm working with: @a = [] 0.upto(500) do |r| @a[r] = [] 0.upto(10_000) do |c| if rand(10) == 0 @a[r][c] = 1 # 10% chance of being 1 else @a[r][c] = 0 end end end @c = Marshal.dump(@a) # 1000 milliseconds Marshal.load(@c) # 400 milliseconds Update: Since my original question did not receive many responses, I'm assuming there's no solution as easy as I would have hoped. Presently I'm considering two options: Create a Sinatra application to store this hash with an API to modify/access it. Create a C application to do the same as #1, but a lot faster. The scope of my problem has increased such that the hash may be larger than my original example. So #2 may be necessary. But I have no idea where to start in terms of writing a C application that exposes an appropriate API. A good walkthrough through how best to implement #1 or #2 may receive best answer credit.
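    Option #1 does not need a full Sinatra app; Ruby's standard-library DRb is enough to keep the structure in one long-lived process and let each Rails process call into it without a Marshal round-trip of the whole thing. A sketch (URI, class and method names are illustrative):

        # holder.rb -- run once, keeps the data in memory
        require 'drb/drb'

        data = []   # build the structure once, e.g. with the loop from the question
        0.upto(500) { |r| data[r] = Array.new(10_000, 0) }

        class MatrixHolder
          def initialize(data)
            @data = data
          end

          def cell(row, col)
            @data[row][col]
          end

          def set_cell(row, col, value)
            @data[row][col] = value
          end
        end

        DRb.start_service('druby://localhost:8787', MatrixHolder.new(data))
        DRb.thread.join

        # in each Rails process:
        #   require 'drb/drb'
        #   holder = DRbObject.new_with_uri('druby://localhost:8787')
        #   holder.cell(10, 20)

    Each lookup is then a small message instead of a 10 MB dump, though per-call latency means it pays to batch requests into coarser methods on the holder.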

    Read the article

  • Generating %pc relative address of constant data

    - by Hudson
    Is there a way to have gcc generate %pc relative addresses of constants? Even when the string appears in the text segment, arm-elf-gcc will generate a constant pointer to the data, load the address of the pointer via a %pc relative address and then dereference it. For a variety of reasons, I need to skip the middle step. As an example, this simple function: const char * filename(void) { static const char _filename[] __attribute__((section(".text"))) = "logfile"; return _filename; } generates (when compiled with arm-elf-gcc-4.3.2 -nostdlib -c -O3 -W -Wall logfile.c): 00000000 <filename>: 0: e59f0000 ldr r0, [pc, #0] ; 8 <filename+0x8> 4: e12fff1e bx lr 8: 0000000c .word 0x0000000c 0000000c <_filename.1175>: c: 66676f6c .word 0x66676f6c 10: 00656c69 .word 0x00656c69 I would have expected it to generate something more like: filename: add r0, pc, #0 bx lr _filename.1175: .ascii "logfile\000" The code in question needs to be partially position independent since it will be relocated in memory at load time, but also integrate with code that was not compiled -fPIC, so there is no global offset table. My current work around is to call a non-inline function (which will be done via a %pc relative address) to find the offset from the compiled location in a technique similar to how -fPIC code works: static intptr_t __attribute__((noinline)) find_offset( void ) { uintptr_t pc; asm __volatile__ ( "mov %0, %%pc" : "=&r"(pc) ); return pc - 8 - (uintptr_t) find_offset; } But this technique requires that all data references be fixed up manually, so the filename() function in the above example would become: const char * filename(void) { static const char _filename[] __attribute__((section(".text"))) = "logfile"; return _filename + find_offset(); }

    Read the article

  • MATLAB Builder NE (.NET Assembly) Data type question

    - by Brett
    Hi coders, I am using MATLAB Builder NE (MATLAB's integrated .NET assembly builder), but I am having an issue with data types. I have compiled a small, very simple function in MATLAB and built it for .NET. I am able to call the namespace and even the function just fine. However, my function returns a value, and MATLAB defaults to returning it as an "object[]" data type. However, I know that the value is an integer, but I can't figure out how to cast it. My MATLAB function looks like this: function addValue = Myfunction(value1, value2) addValue=value1+value2; end Pretty simple, right? And then in .NET I can call it as: xClass.addValue (1, 3, 4); where xClass is the name of the MATLAB-built class, but when I try: int x = xClass.addValue (1, 3, 4); C# errors out. Typical .NET casting (int) doesn't work. The compiler states it cannot convert object[] to int. Does anyone have experience with the .NET builder in MATLAB that can help me with this? It is really throwing me for a loop. I have scanned through most of the MATLAB BUILDER doc (484 pages!) with zero help. Thank you, Brett
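    With MATLAB Builder NE the generated method typically returns object[] whose elements are MWArray subclasses, so the value has to be unwrapped rather than cast straight to int. A hedged sketch of what that usually looks like (class and conversion method names are from the MWArray assembly as I recall them and should be checked against the shipped documentation; the first argument is the number of outputs requested):

        using MathWorks.MATLAB.NET.Arrays;

        // Request 1 output; pass 3 and 4 as the two inputs.
        object[] result = xClass.addValue(1, 3, 4);

        // Unwrap the first output as a numeric array and convert to a scalar.
        MWNumericArray numeric = (MWNumericArray)result[0];
        int x = numeric.ToScalarInteger();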

    Read the article

  • Copy Selective Data from Database to Invoice, Based on Certain Criteria

    - by Scott
    For starters, here is an example of a microsoft excel database I am working with: Month/Address/Name/Description/Amount January/123 Street/Fred/Painting/100 January/456 Avenue/Scott/Flooring/400 January/789 Road/Scott/Plumbing/100 February/123 Street/Fred/Flooring/600 February/246 Lane/Fred/Electrical/300 March/789 Road/Scott/Drywall/150 What I want to be able to do is selectively copy info from this databse to invoices (also excel). The invoice has three columns: Address/Description/Amount. I want to be able to automatically fill the invoices in as the database is filled in (either automatically, or if I have to actually manually run the macro to do it, that might be fine). Each name (Scott, Fred, etc.) will have their own set of 12 invoices for the year. So, e.g., I want to be able to produce a January invoice for all work done for Scott in January, showing the address, the description and the amount, line by line. So every time work on Scott's address(es) is done, the database is filled in, and i want it to "send" that information to the invoice on the next available line, filling in only the Address/Description/Amount columns from the database. Fred's invoice should fill in as any work is done on Fred's addresses. And once the month changes, the next invoice should start filling in. So first I need to filter the data by the month and the name (and there is actually one more column to filter by, but let's keep this example simpler). Then I need to list the remaining data on the invoice, but only certain cells from the rows that are now left. Help anyone?
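    A macro-based sketch of the copy step, assuming the database sits on a sheet named Database with the columns in the order shown, and that each invoice sheet is named like Invoice_<Name>_<Month> with Address/Description/Amount starting in row 2 (all of those names are assumptions to adapt):

        Sub FillInvoice(ByVal invMonth As String, ByVal invName As String)
            Dim src As Worksheet, dst As Worksheet
            Dim r As Long, nextRow As Long

            Set src = ThisWorkbook.Worksheets("Database")
            Set dst = ThisWorkbook.Worksheets("Invoice_" & invName & "_" & invMonth)

            nextRow = 2
            For r = 2 To src.Cells(src.Rows.Count, 1).End(xlUp).Row
                If src.Cells(r, 1).Value = invMonth And src.Cells(r, 3).Value = invName Then
                    dst.Cells(nextRow, 1).Value = src.Cells(r, 2).Value ' Address
                    dst.Cells(nextRow, 2).Value = src.Cells(r, 4).Value ' Description
                    dst.Cells(nextRow, 3).Value = src.Cells(r, 5).Value ' Amount
                    nextRow = nextRow + 1
                End If
            Next r
        End Sub

    Calling FillInvoice "January", "Scott" rebuilds that one invoice; running it from a Worksheet_Change handler on the database sheet gets close to the "fills in as the database is filled in" behaviour without fully automatic updates.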

    Read the article

  • Load testing multipart form

    - by JacobM
    I'm trying to load-test a Rails application using JMeter. A critical part of the application involves a form that includes both text inputs and file uploads. It works fine in a browser, but when I try to post that page in JMeter, Rails is saving all of the parts of the multipart form as temp files, which causes things to break when it's looking for a string and gets a tempfile instead. It appears that the difference is that, from a browser, the piece of the multipart request that contains a text input looks like this: -----------------------------7d93b4186074c Content-Disposition: form-data; name="field_name" test -----------------------------7d93b4186074c while from JMeter it looks like this: -----------------------------7d159c1302d0y0 Content-Disposition: form-data; name="field_name" Content-Type: text/plain; charset=utf-8 Content-Transfer-Encoding: 8bit test -----------------------------7d159c1302d0y0 So apparently Rails sees the former and interprets it as a plain text value and treats it as a string, but sees the latter and saves it to a temp file. I have not been able to find a setting to convince JMeter not to send the additional headers in the multipart form for non-file fields. Is there a way to convince Rails to ignore those headers and treat the text/plain text as strings instead of text files? Or a quick way to put a filter in front of my controller that will strip the extra headers? Alternately, is there a better tool to load-test a Rails application that includes file upload?
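    Depending on the JMeter version, the HTTP Request sampler has a browser-compatible multipart option that suppresses those per-part headers, which is worth checking first. If JMeter cannot be convinced, the Rails side can normalize the parameters in a before_filter: anything that arrived as a tempfile but belongs to a plain text field gets read back into a string. A sketch in Rails 2.x style (the controller name and the list of real file fields are assumptions to fill in):

        class OrdersController < ApplicationController
          before_filter :flatten_multipart_params, :only => :create

          FILE_FIELDS = ['attachment'].freeze  # params that really are file uploads

          private

          def flatten_multipart_params
            params.each do |key, value|
              next if FILE_FIELDS.include?(key.to_s)
              # Tempfile-like objects respond to #read; plain strings do not.
              params[key] = value.read if value.respond_to?(:read)
            end
          end
        end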

    Read the article

  • relational data from xml

    - by Beta033
    Our problem is this: we have a relational database that stores objects in tables. As in any relational database, several of these tables have multiple foreign keys pointing to other tables (all pretty normal stuff). We've been trying to identify a solution to allow export of this relational data (ideally, only one of the objects in the model) to some sort of file (xml, text, ??). So it wouldn't be enough to just export one table, as data stored in other tables contributes to the complete model of the object. Something like the following picture: Toward this, I've written a routine to export the structure by following the foreign key paths, which produces something similar to the following. <Tables> <TableA PK="1", val1, val2, val3> <TableC PK="1", FK_A="1", Val1, val2, val3> <TableC PK="2", FK_A="1", val1, val2, val3> <TableB PK="1", FK_A="1", FK_C="1", val1, val2, val3> <TableB PK="2", FK_A="1", FK_C="2", val1, val2, val3> <TableD PK="1", FK_B="1", FK_C="1", val1> <TableD PK="2", FK_B="2", FK_C="1", val1> </Tables> However, given this structure, it cannot be placed into a hierarchical format (i.e., D2 is a child of C1 and B2, and B2 is a child of C2), which in turn makes my life very difficult when trying to identify a methodology to reimport (and re-key) these objects. Has anybody done anything like this? How do you do it? Are there tools or documentation on how this is best accomplished? Thanks for your help.

    Read the article

  • How can I pass FormsAuthentication.SetAuthCookie from Data Access Layer Class to WebService to Javas

    - by Reaction21
    I am using DotNetOpenAuth in my ASP.Net Website. I have modified it to work with Facebook Connect as well, using the same methods and database structures. Now I have come across a problem. I have added a Facebook Connect button to a login page. From that HTML button, I have to somehow pull information from the Facebook Connect connection and pass it into a method to authenticate the user. The way I am currently doing this is by: Calling a Javascript Function on the onlogin function of the FBML/HTML Facebook Connect button. The javascript function calls a Web service to login, which it does correctly. The web service calls my data access layer to login. And here is the problem: FormsAuthentication.SetAuthCookie is set at the data access layer. The Cookie is beyond the scope of the user's page and therefore is not set in the browser. This means that the user is authenticated, but the user's browser is never notified. So, I need to figure out if this is a bad way of doing what I need or if there is a better way to accomplish what I need. I am just not sure and have been trying to find answers for hours. Any help you have would be great.
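    One way out that keeps the data access layer free of HTTP concerns: have the DAL only verify the Facebook Connect user, and let the web service method, which still runs inside the caller's request, issue the cookie. A sketch (the validation call is a stand-in for whatever the DAL actually exposes):

        [WebMethod]
        public bool FacebookLogin(string facebookId)
        {
            // The data access layer only answers the question; it does not touch cookies.
            bool valid = UserRepository.ValidateFacebookUser(facebookId);  // assumed DAL call
            if (!valid)
            {
                return false;
            }

            // Issue the forms ticket here, where the response still reaches the browser.
            HttpCookie authCookie = FormsAuthentication.GetAuthCookie(facebookId, false);
            HttpContext.Current.Response.Cookies.Add(authCookie);
            return true;
        }

    The Javascript success callback then only needs to redirect or refresh; the browser already holds the authentication cookie.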

    Read the article

  • Why isn't my WPF Datagrid showing data?

    - by Edward Tanguay
    This walkthrough says you can create a WPF datagrid in one line but doesn't give a full example. So I created an example using a generic list and connected it to the WPF datagrid, but it doesn't show any data. What do I need to change on the code below to get it to show data in the datagrid? ANSWER: This code works now: XAML: <Window x:Class="TestDatagrid345.Window1" xmlns="http://schemas.microsoft.com/winfx/2006/xaml/presentation" xmlns:x="http://schemas.microsoft.com/winfx/2006/xaml" xmlns:toolkit="http://schemas.microsoft.com/wpf/2008/toolkit" xmlns:local="clr-namespace:TestDatagrid345" Title="Window1" Height="300" Width="300" Loaded="Window_Loaded"> <StackPanel> <toolkit:DataGrid ItemsSource="{Binding}"/> </StackPanel> </Window> Code Behind: using System.Collections.Generic; using System.Windows; namespace TestDatagrid345 { public partial class Window1 : Window { private List<Customer> _customers = new List<Customer>(); public List<Customer> Customers { get { return _customers; }} public Window1() { InitializeComponent(); } private void Window_Loaded(object sender, RoutedEventArgs e) { DataContext = Customers; Customers.Add(new Customer { FirstName = "Tom", LastName = "Jones" }); Customers.Add(new Customer { FirstName = "Joe", LastName = "Thompson" }); Customers.Add(new Customer { FirstName = "Jill", LastName = "Smith" }); } } }

    Read the article

  • One Model to Rule Them All - VS2010 UML, ADO.NET Entity Data Model, and T4

    - by Eric J.
    I worked on a fairly large project a while back where we modeled the classes in Enterprise Architect and generated the (partial) POCO classes (complete with model-driven business rule validations), persistence (NHibernate mapping file) and DDL. Based on certain model attributes we could flag alternate generation strategies or indicate that a particular portion would be entirely hand-coded. There was a good deal of initial investment, but it paid large dividends over the lifetime of a 15 developer, 3 year project. I'm investigating doing something similar with the current Microsoft technology stack. The place I'm stuck is that class modeling is done with the VS 2010 UML tools, but logical data modeling is done with Entity Data Modeler. Is it a reasonable path to use VS 2010 UML as the "single source of truth" and code generate the edmx files based on the class model? That's the inverse of the common path to create the entity model and use a POCO generator to generate classes. However, a good class model can be used to generate much more than just the properties so I tend to view it as a better choice than the entity model.

    Read the article

  • What is the best Binary Decision Diagram library for Java?

    - by reprogrammer
    A Binary Decision Diagram (BDD) is a data structure to represent boolean functions. I'd like use this data structure in a Java program. My search for Java based BDD libraries resulted into the following packages. Java Decision Diagram Libraries JavaBDD JDD JBDD bddbddb If you know of any other BDD libraries available for Java programs, please let me know so that I add it to the list above. If you have used any of these libraries, please tell me about your experience with the library. In particular, I'd like you to compare the available libraries along the following dimensions. Quality. Is the library mature and reasonably bug free? Performance. How do you evaluate the performance of the library? Support. Could you easily get support whenever you encountered a problem with the library? Was the library well documented? Ease of use. Was the API well designed? Could you install and use the library quickly and easily? Please mention the version of the library that you are evaluating.
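    For a feel of what these libraries look like in use, here is a small JavaBDD-flavoured sketch (written from memory of its API, so treat the exact class and method names as approximate; JDD and JBDD differ mainly in factory setup and naming):

        import net.sf.javabdd.BDD;
        import net.sf.javabdd.BDDFactory;
        import net.sf.javabdd.JFactory;

        public class BddDemo {
            public static void main(String[] args) {
                BDDFactory factory = JFactory.init(10000, 1000); // node table and cache sizes
                factory.setVarNum(3);

                BDD a = factory.ithVar(0);
                BDD b = factory.ithVar(1);
                BDD c = factory.ithVar(2);

                // f = (a AND b) OR (NOT c)
                BDD f = a.and(b).or(c.not());

                System.out.println("satisfying assignments: " + f.satCount());
            }
        }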

    Read the article
