Search Results

Search found 105845 results on 4234 pages for 'asp net dynamic data'.

Page 265 of 4234

  • Big Data for Retail

    - by David Dorf
    Right up there with mobile, social, and cloud is the term "big data," which seems to be popping up lots in the press these days. Companies like Google, Yahoo, and Facebook have popularized a new class of data technologies meant to solve the problem of processing large amounts of data quickly. I first mentioned this in a posting back in March 2009. Put simply, big data implies datasets so large they can't normally be processed using a standard transactional database. The term "noSQL" is often used in this context as well. Actually, using parallel processing within the Oracle database combined with Exadata can achieve impressive results. Look for more from Oracle at OpenWorld, as hinted by Jean-Pierre Dijcks.

    McKinsey recently released a report on big data in which retail was specifically mentioned as an industry that can benefit from the new technologies. I won't rehash that report because my friend Rama already did such a good job in his posting, Impact of "Big Data" on Retail. The presentation below does a pretty good job of framing the problem, although it doesn't really get into the available technologies (e.g. Exadata, Hadoop, Cassandra, etc.) and isn't retail specific: Determine the Right Analytic Database: A Survey of New Data Technologies.

    So when a retailer asks me about big data, here's what I say: big data refers to a set of technologies for processing large volumes of structured and unstructured data. Imagine collecting everything uttered by your customers on Facebook and Twitter and combining it with all the data you can find about the products you sell (e.g. reviews, images, demonstration videos), including competitive data. Assuming you could process all that data, you could then personalize offers to specific customers based on their tastes, ensure prices are competitive, and implement better local assortments. It's really not that far off.

    Read the article

  • Data and Secularism

    - by kaleidoscope
    Ever since we've been using Data we've been religious. Religious about the way we represent it and equally religious about the way we access it. Be it plain old SQL, DAO, ADO or ADO.NET, and I am just referring to religions in the MSFT world; a peek outside and I'd need a separate book to list out the Data faiths. Various application areas in networked computing are converging under the HTTP umbrella, with a plausible transition to purist HTTP and in turn REST, fuelled by the Web 2.0 storm. It was time the Data access faiths also gave up the religious silos wrapped around our long-worshipped data publishing and access methods.

    OData is the secular solution we have at hand today. It is an open protocol for sharing data. It can be exposed via REST. It is Open as in the Microsoft Open Specification Promise. This allows virtually everyone to build Data Services for any runtime. OData is one of the key standards for Data publishing/subscribing on Microsoft Codename Dallas. For us .Netters, OData data sources can be exposed/consumed via WCF Data Services, and the process is very simple, elegant and intuitive.

    Applications exposing OData Services:
    - SharePoint 2010
    - IBM WebSphere
    - Microsoft SQL Azure
    - Windows Azure Table Storage
    - SQL Server Reporting Services

    Live OData Services:
    - Netflix
    - Open Science Data Initiative
    - Open Government Data Initiatives
    - Northwind database exposed as an OData Service
    - and many others

    Some may prefer to call it commoditization of data, unification of data access strategies or any other sweet name. I for one will stick to my secular definition. :) Technorati Tags: Sarang,OData,MOSP
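    To make the WCF Data Services route concrete, here is a minimal sketch of exposing an entity model as an OData feed. It assumes a hypothetical NorthwindEntities Entity Framework context; the DataService<T> plumbing is the standard System.Data.Services API.

        using System.Data.Services;
        using System.Data.Services.Common;

        // Minimal sketch: expose a hypothetical NorthwindEntities context as OData.
        public class NorthwindService : DataService<NorthwindEntities>
        {
            public static void InitializeService(DataServiceConfiguration config)
            {
                // Publish the Customers entity set read-only.
                config.SetEntitySetAccessRule("Customers", EntitySetRights.AllRead);
                config.DataServiceBehavior.MaxProtocolVersion = DataServiceProtocolVersion.V2;
            }
        }

    Once the .svc endpoint is up, consumers address the feed with plain REST URLs, e.g. /Northwind.svc/Customers?$top=10, which is exactly the secularism argued for above: any runtime that speaks HTTP can subscribe.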

    Read the article

  • Using Apache FOP from .NET level

    - by Lukasz Kurylo
    In one of my previous posts I was talking about FO.NET, which I was using to generate pdf documents from XSL-FO. FO.NET is one of the .NET ports of Apache FOP. Unfortunately, it is no longer maintained. I knew that when I decided to use it, because there is a lack of available (free) choices for .NET to render a pdf from XSL-FO. I hoped that in this implementation I would find all I need to create a pdf file for my really simple requirements. FO.NET is a port of some old version of Apache FOP, and I found really quickly that it lacks some features I needed, like dotted borders, double borders or support for margins. So I started looking for some alternatives. I didn't try NFOP, another port of Apache FOP, because I found something I think is much better: the IKVM.NET project.

    IKVM.NET is not a pdf renderer. So what is it? From the project site:

    IKVM.NET is an implementation of Java for Mono and the Microsoft .NET Framework. It includes the following components:
    - a Java Virtual Machine implemented in .NET
    - a .NET implementation of the Java class libraries
    - tools that enable Java and .NET interoperability

    In its simplest form, IKVM.NET allows you to use a Java code library in C# code and vice versa.

    I tried to use Apache FOP, the best (I think) open source XSL-FO -> pdf renderer written in Java, from my project written in C# using IKVM.NET, and it worked like a charm. In the rest of the post I want to show how to prepare a .NET *.dll class library from the Apache FOP *.jar's with IKVM.NET and generate a simple Hello World pdf document.

    To start playing with IKVM.NET and Apache FOP we need to download their packages (IKVM.NET and Apache FOP) and then unpack them.

    From the FOP directory copy all the *.jar files from the lib and build catalogs to some location, e.g. d:\fop. The second step is to build the *.dll library from these files. On the console execute the following command:

        ikvmc -target:library -out:d:\fop\fop.dll -recurse:d:\fop

    The ikvmc tool is located in the bin subdirectory where you unpacked IKVM.NET. You must execute this command from that catalog, add the path to the global PATH variable, or specify the full path to the bin subdirectory.

    If no error occurred during this process, the fop.dll library should be created. Right now we can create a simple project to test if we can create a pdf file.

    So let's create a simple console application project and add references to fop.dll and the IKVM dll's: IKVM.OpenJDK.Core and IKVM.OpenJDK.XML.API.
    Full code to generate a pdf file from an XSL-FO template:

        static void Main(string[] args)
        {
            // initialize the Apache FOP
            FopFactory fopFactory = FopFactory.newInstance();

            // in this stream we will get the generated pdf file
            OutputStream o = new DotNetOutputMemoryStream();
            try
            {
                Fop fop = fopFactory.newFop("application/pdf", o);
                TransformerFactory factory = TransformerFactory.newInstance();
                Transformer transformer = factory.newTransformer();

                // read the template from disc
                Source src = new StreamSource(new File("HelloWorld.fo"));
                Result res = new SAXResult(fop.getDefaultHandler());
                transformer.transform(src, res);
            }
            finally
            {
                o.close();
            }
            using (System.IO.FileStream fs = System.IO.File.Create("HelloWorld.pdf"))
            {
                // write the generated pdf from the .NET MemoryStream to disc
                var data = ((DotNetOutputMemoryStream)o).Stream.GetBuffer();
                fs.Write(data, 0, data.Length);
            }
            Process.Start("HelloWorld.pdf");
            System.Console.ReadLine();
        }

    Apache FOP by default uses Java's Xalan to work with XML files. I didn't find a way to replace this piece of code with an equivalent from the .NET standard library. If any error or warning occurs while generating the pdf file, it will be shown on the console; that's why I inserted the last line in the sample above. The DotNetOutputMemoryStream is my wrapper for the Java OutputStream. I created it to be able to exchange data between the .NET and Java objects. Its implementation:

        class DotNetOutputMemoryStream : OutputStream
        {
            private System.IO.MemoryStream ms = new System.IO.MemoryStream();

            public System.IO.MemoryStream Stream
            {
                get { return ms; }
            }
            public override void write(int i)
            {
                ms.WriteByte((byte)i);
            }
            public override void write(byte[] b, int off, int len)
            {
                ms.Write(b, off, len);
            }
            public override void write(byte[] b)
            {
                ms.Write(b, 0, b.Length);
            }
            public override void close()
            {
                ms.Close();
            }
            public override void flush()
            {
                ms.Flush();
            }
        }

    The last thing we need is the HelloWorld.fo template:

        <?xml version="1.0" encoding="utf-8"?>
        <fo:root xmlns:fo="http://www.w3.org/1999/XSL/Format"
                 xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance">
          <fo:layout-master-set>
            <fo:simple-page-master master-name="simple"
                                   page-height="29.7cm"
                                   page-width="21cm"
                                   margin-top="1.8cm"
                                   margin-bottom="0.8cm"
                                   margin-left="1.6cm"
                                   margin-right="1.2cm">
              <fo:region-body margin-top="3cm"/>
              <fo:region-before extent="3cm"/>
              <fo:region-after extent="1.5cm"/>
            </fo:simple-page-master>
          </fo:layout-master-set>
          <fo:page-sequence master-reference="simple">
            <fo:flow flow-name="xsl-region-body">
              <fo:block font-size="18pt" color="black" text-align="center">
                Hello, World!
              </fo:block>
            </fo:flow>
          </fo:page-sequence>
        </fo:root>

    I'm not going to explain how this template is created, because that will be covered in near-future posts.

    The generated pdf file should look like this:

    Read the article

  • Can web apps allow fast data-typists to "type-ahead"?

    - by user61852
    In some data entry contexts I've seen data typists type really fast and know the app they use so well, with such a mechanical quality to their work, that they can "type ahead", i.e. continue typing and tabbing and entering faster than the display updates, so that on many occasions they are typing in the data for the next form before it draws itself. Then when this next entry form appears, their keystrokes fill the text boxes and they continue typing, selecting, etc. In contexts like this, this speed is desirable, since these persons are really productive. I think this "typing ahead of time" is only possible in desktop apps, but I may be wrong. My question is whether this way of handling the keyboard buffer (which in desktop apps requires no extra programming) is achievable in web apps, or is this impossible because of the way web apps work, handle sessions, etc. (network latency and the overhead of generating new web pages)? Edit: By "type ahead" I mean "keyboard type ahead" (typing faster than the next entry form can load), not suggest-as-you-type-like-Google type ahead. Typeahead is a feature of computers and software (and some typewriters) that enables users to continue typing regardless of program or computer operation; the user may type at whatever speed he or she desires, and if the receiving software is busy at the time, it will be called to handle the input later. Often this means that keystrokes entered will not be displayed on the screen immediately. This programming technique for handling user input uses what is known as a keyboard buffer.

    Read the article

  • sorting dynamic table created by form inputs [migrated]

    - by mille
    I am having problems with sorting. Can someone help me sort this table, not just by its form entry id but, on click, by some of the other columns? I tried a lot of plugins but can't get anything to work, and I don't know what to do; I am new at this. Sorry for my English, thanks. Here is the js:

        var Animals = {
            index: window.localStorage.getItem("Animals:index"),
            $table: document.getElementById("animals-table"),
            $form: document.getElementById("animals-form"),
            $button_save: document.getElementById("animals-save"),
            $button_discard: document.getElementById("animals-discard"),

            init: function() {
                if (!Animals.index) {
                    window.localStorage.setItem("Animals:index", Animals.index = 1);
                }
                Animals.$form.reset();
                Animals.$button_discard.addEventListener("click", function(event) {
                    Animals.$form.reset();
                    Animals.$form.id_entry.value = 0;
                }, true);
                Animals.$form.addEventListener("submit", function(event) {
                    var entry = {
                        id: parseInt(this.id_entry.value),
                        animal_id: this.animal_id.value,
                        animal_name: this.animal_name.value,
                        animal_type: this.animal_type.value,
                        bday: this.bday.value,
                        animal_sex: this.animal_sex.value,
                        mother_name: this.mother_name.value,
                        farm_name: this.farm_name.value,
                        money: this.money.value,
                        weight: this.weight.value,
                        purchase_partner: this.purchase_partner.value
                    };
                    if (entry.id === 0) { // add
                        Animals.storeAdd(entry);
                        Animals.tableAdd(entry);
                    } else { // edit
                        Animals.storeEdit(entry);
                        Animals.tableEdit(entry);
                    }
                    this.reset();
                    this.id_entry.value = 0;
                    event.preventDefault();
                }, true);
                if (window.localStorage.length - 1) {
                    var animals_list = [], i, key;
                    for (i = 0; i < window.localStorage.length; i++) {
                        key = window.localStorage.key(i);
                        if (/Animals:\d+/.test(key)) {
                            animals_list.push(JSON.parse(window.localStorage.getItem(key)));
                        }
                    }
                    if (animals_list.length) {
                        animals_list.sort(function(a, b) {
                            return a.id < b.id ? -1 : (a.id > b.id ? 1 : 0);
                        }).forEach(Animals.tableAdd);
                    }
                }
                Animals.$table.addEventListener("click", function(event) {
                    var op = event.target.getAttribute("data-op");
                    if (/edit|remove/.test(op)) {
                        var entry = JSON.parse(window.localStorage.getItem("Animals:" + event.target.getAttribute("data-id")));
                        if (op == "edit") {
                            Animals.$form.id_entry.value = entry.id;
                            Animals.$form.animal_id.value = entry.animal_id;
                            Animals.$form.animal_name.value = entry.animal_name;
                            Animals.$form.animal_type.value = entry.animal_type;
                            Animals.$form.bday.value = entry.bday;
                            Animals.$form.animal_sex.value = entry.animal_sex;
                            Animals.$form.mother_name.value = entry.mother_name;
                            Animals.$form.farm_name.value = entry.farm_name;
                            Animals.$form.money.value = entry.money;
                            Animals.$form.weight.value = entry.weight;
                            Animals.$form.purchase_partner.value = entry.purchase_partner;
                        } else if (op == "remove") {
                            if (confirm('Are you sure you want to remove this animal from your list?')) {
                                Animals.storeRemove(entry);
                                Animals.tableRemove(entry);
                            }
                        }
                        event.preventDefault();
                    }
                }, true);
            },

            storeAdd: function(entry) {
                entry.id = Animals.index;
                window.localStorage.setItem("Animals:index", ++Animals.index);
                window.localStorage.setItem("Animals:" + entry.id, JSON.stringify(entry));
            },
            storeEdit: function(entry) {
                window.localStorage.setItem("Animals:" + entry.id, JSON.stringify(entry));
            },
            storeRemove: function(entry) {
                window.localStorage.removeItem("Animals:" + entry.id);
            },
            tableAdd: function(entry) {
                var $tr = document.createElement("tr"), $td, key;
                for (key in entry) {
                    if (entry.hasOwnProperty(key)) {
                        $td = document.createElement("td");
                        $td.appendChild(document.createTextNode(entry[key]));
                        $tr.appendChild($td);
                    }
                }
                $td = document.createElement("td");
                $td.innerHTML = '<a data-op="edit" data-id="' + entry.id + '">Edit</a> | <a data-op="remove" data-id="' + entry.id + '">Remove</a>';
                $tr.appendChild($td);
                $tr.setAttribute("id", "entry-" + entry.id);
                Animals.$table.appendChild($tr);
            },
            tableEdit: function(entry) {
                var $tr = document.getElementById("entry-" + entry.id), $td, key;
                $tr.innerHTML = "";
                for (key in entry) {
                    if (entry.hasOwnProperty(key)) {
                        $td = document.createElement("td");
                        $td.appendChild(document.createTextNode(entry[key]));
                        $tr.appendChild($td);
                    }
                }
                $td = document.createElement("td");
                $td.innerHTML = '<a data-op="edit" data-id="' + entry.id + '">Edit</a> | <a data-op="remove" data-id="' + entry.id + '">Remove</a>';
                $tr.appendChild($td);
            },
            tableRemove: function(entry) {
                Animals.$table.removeChild(document.getElementById("entry-" + entry.id));
            }
        };
        Animals.init();

    Read the article

  • How to migrate user settings and data to new machine?

    - by torbengb
    I'm new to Ubuntu and recently started using it on my PC. I'm going to replace that PC with a new machine, a nettop. I want to transfer my data and settings to it. What aspects should I consider? Obviously I want to move my data over. What things am I missing if I only copy the entire home folder? This is a home pc (not corporate), so user rights and other security issues are not a concern, except that the files should be accessible on the new machine! Please take into account that the new machine doesn't have an optical drive and doesn't allow me to hook the old SATA disk into it, so any data transfer must be handled via the home network (I can have both the old and the new machine turned on and connected to the home LAN), and I have a USB thumbdrive with limited capacity (2GB). This sounds like it might limit the general applicability, but it would in fact make it more general. I'll make this a wiki topic because there could be several "right" answers. Update: Or so I thought. I don't see a choice for that.

    Read the article

  • Is it conceivable to have millions of lists of data in memory in Python?

    - by Codemonkey
    I have over the last 30 days been developing a Python application that utilizes a MySQL database of information (specifically about Norwegian addresses) to perform address validation and correction. The database contains approximately 2.1 million rows (43 columns) of data and occupies 640MB of disk space. I'm thinking about speed optimizations, and I've got to assume that when validating 10,000+ addresses, each validation running up to 20 queries to the database, networking is a speed bottleneck. I haven't done any measuring or timing yet, and I'm sure there are simpler ways of speed optimizing the application at the moment, but I just want to get the experts' opinions on how realistic it is to load this amount of data into a row-of-rows structure in Python. Also, would it even be any faster? Surely MySQL is optimized for looking up records among vast amounts of data, so how much help would it even be to remove the networking step? Can you imagine any other viable methods of removing the networking step? The location of the MySQL server will vary, as the application might well be run from a laptop at home or at the office, where the server would be local.

    Read the article

  • Data structures for storing finger/stylus movements in drawing application?

    - by mattja?øb
    I have a general question about creating a drawing application; the language could be C++ or Objective-C with OpenGL. I would like to hear what the best methods and practices are for storing stroke data. Think of the many iPad apps that allow you to draw with your finger (or a stylus), or any similar function in a desktop app. To summarize, the data structure must:
    - be highly responsive to the movement
    - store precise values (close in space / time)
    - be usable for rendering the strokes with complex textures (textures based on the dynamics of the stroke, etc.)
    - be exportable to a text file for saving/loading

    Read the article

  • MATLAB: What is an appropriate Data Structure for a Matrix with Random Variable Entries?

    - by user12707
    I'm working in an area that is related to simulation and trying to design a data structure that can include random variables within matrices. I am currently coding in MATLAB. To motivate this, let me say I have the following matrix: [a b; c d]. I want to find a data structure that will allow a, b, c, d to be either real numbers or random variables. As an example, let's say that a = 1, b = -1, c = 2, but let d be a normally distributed random variable with mean 20 and SD 40. The data structure that I have in mind will give no value to d. However, I also want to be able to design a function that can take in the structure, simulate a uniform(0,1), obtain a value for d using an inverse CDF and then spit out an actual matrix. I have several ideas for doing this (all related to the MATLAB icdf function) but would like to know how more experienced programmers would do it. In this application, it's important that the structure is as "lean" as possible, since I will be working with very, very large matrices and memory will be an issue.

    Read the article

  • C# Can I return HttpWebResponse result to iframe - Uses Digest authentication

    - by chadsxe
    I am trying to figure out a way to display a cross-domain web page that uses Digest authentication. My initial thought was to make a web request and return the entire page source. I currently have no issues with authenticating and getting a response, but I am not sure how to properly return the needed data.

        // Create a request for the URL.
        WebRequest request = WebRequest.Create("http://some-url/cgi/image.php?type=live");
        // Set the credentials.
        request.Credentials = new NetworkCredential(username, password);
        // Get the response.
        HttpWebResponse response = (HttpWebResponse)request.GetResponse();
        // Get the stream containing content returned by the server.
        Stream dataStream = response.GetResponseStream();
        // Open the stream using a StreamReader for easy access.
        StreamReader reader = new StreamReader(dataStream);
        // Read the content.
        string responseFromServer = reader.ReadToEnd();
        // Clean up the streams and the response.
        reader.Close();
        dataStream.Close();
        response.Close();
        return responseFromServer;

    My problems currently are:
    - responseFromServer is not returning the entire source of the page, i.e. it's missing the body and head tags.
    - The data is encoded improperly in responseFromServer. I believe this has something to do with the transfer encoding being of type chunked.

    Furthermore, I am not entirely sure if this is even possible. If it matters, this is being done in ASP.NET MVC 4 with C#. Thanks, Chad
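    For what it's worth, chunked transfer is de-chunked by HttpWebResponse automatically, so garbled text is more often a gzip body or a wrong charset. Below is a hedged sketch (System.Net, System.IO and System.Text assumed) of reading the body with the server-declared encoding and automatic decompression; the UTF-8 fallback is an assumption, and the URL and credentials are taken from the question:

        HttpWebRequest request = (HttpWebRequest)WebRequest.Create("http://some-url/cgi/image.php?type=live");
        request.Credentials = new NetworkCredential(username, password);
        // Let the framework inflate gzip/deflate bodies transparently.
        request.AutomaticDecompression = DecompressionMethods.GZip | DecompressionMethods.Deflate;

        using (HttpWebResponse response = (HttpWebResponse)request.GetResponse())
        using (Stream dataStream = response.GetResponseStream())
        {
            // Use the charset the server declared, falling back to UTF-8 (assumption).
            Encoding encoding = string.IsNullOrEmpty(response.CharacterSet)
                ? Encoding.UTF8
                : Encoding.GetEncoding(response.CharacterSet);
            using (var reader = new StreamReader(dataStream, encoding))
            {
                return reader.ReadToEnd();
            }
        }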

    Read the article

  • Why is the UpdatePanel Response size changing on alternate requests?

    - by Decker
    We are using an UpdatePanel in a small portion of a large page and have noticed a performance problem where IE7 becomes CPU bound and the control within the UpdatePanel takes a long time (upwards of 30 seconds) to render. We also noticed that Firefox does not seem to suffer from these delays. We ran both Fiddler (for IE) and Firebug (for Firefox) and noticed that the real problem lay with the amount of data being returned in UpdatePanel responses. Within the UpdatePanel control there is a table that contains a number of ListBox controls. The real problem is that EVERY OTHER TIME the response (from making ListBox selections) alternates from 30K to 430K. Firefox handles the 400+K response in a reasonable amount of time. For whatever reason, IE7 goes CPU bound while it is presumably processing this data. So, irrespective of whether or not we should be using an UpdatePanel, we'd like to figure out why every other async postback response is larger by a factor of more than 10 than the previous one. When the response is in the 30K range, IE updates the display within a second. On the alternate times, the response time is well over 10 times longer. Any idea why this alternating behavior should be happening with an UpdatePanel?

    Read the article

  • How do I add a confirmation popup on a button (GET POST action in MVC)?

    - by user54197
    I have a get/post/JSON function on an aspx page. This page adds data entered in a textbox to a table populated by javascript. When the user selects the submit button and the textbox is not empty, I want a popup telling the user that the data in the textbox is not saved in the table. How do I get a confirm "ok/cancel" popup to display on the post action in the Controller? I made a quick summary of what my code looks like.

        ...
        <% using (Html.BeginForm("AddName", "Name", FormMethod.Post, new { id = "AddNameForm" })) { %>
            ...
            <table id="displayNameTable" width="100%">
                <tr>
                    <th colspan="3">Names Already Added</th>
                </tr>
                <tr>
                    <td style="font-size:smaller;" class="name"></td>
                </tr>
            </table>
            ...
            <input name="Name" id="txtInjuryName" type="text" value="<%=test.Name %>" />
            ...
            <input type="submit" name="add" value="Add"/>
        <% } %>

        <form id="form1" runat="server">
            <% string confirmNext = "";
               if (test.Name == "")
               {
                   confirmNext = "return confirm('It seems you have a name not added.\n\nContinue?')";
               } %>
            <input type="submit" name="getNext" value="Next" onclick="<%=confirmNext%>" />
        </form>

    Read the article

  • An unhandled exception of type 'System.StackOverflowException' occurred in mscorlib.dll

    - by mazhar
    Ok, the thing is that I am using asp.net mvc with linq to sql. I have a scenario where there are two tables, Group and Features, with a many-to-many relationship through a third table, GroupFeatures. I have dragged and dropped all three tables into the dbml file. The thing is that it runs fine with Group, but when I call the Feature things, the above error occurs and the compiler stops at the constructor line marked below.

        public partial class EgovtDataContext : System.Data.Linq.DataContext
        {
            private static System.Data.Linq.Mapping.MappingSource mappingSource = new AttributeMappingSource();

            #region Extensibility Method Definitions
            partial void OnCreated();
            partial void InsertGroup(Group instance);
            partial void UpdateGroup(Group instance);
            partial void DeleteGroup(Group instance);
            partial void InsertFeature(Feature instance);
            partial void UpdateFeature(Feature instance);
            partial void DeleteFeature(Feature instance);
            partial void InsertGroupFeature(GroupFeature instance);
            partial void UpdateGroupFeature(GroupFeature instance);
            partial void DeleteGroupFeature(GroupFeature instance);
            #endregion

            // the compiler stops here:
            public EgovtDataContext() :
                base(global::System.Configuration.ConfigurationManager.ConnectionStrings["egovtsConnectionString"].ConnectionString, mappingSource)

    The code in both the Group and Features controller files is a ditto copy. Note that the Group things are working; the Features things are not.

    FeaturesRepository.cs:

        public IQueryable<Feature> FindAllFeatures()
        {
            return db.Features;
        }

    FeaturesController.cs:

        FeaturesRepository FeatureRepository = new FeaturesRepository();

        public ActionResult Index()
        {
            var Feature = FeatureRepository.FindAllFeatures();
            return View(Feature);
        }

    So what could be wrong?

    Read the article

  • Migrating data from Plone to Liferay, or how could I retrieve information from Plone's Data.fs

    - by brandizzi
    Hello, all. I need to migrate data from a Plone-based portal to Liferay. Has anyone some idea on how to do it? Anyway, I am trying to retrieve data from Data.fs and store it in a representation that is easier to work with, such as JSON. To do that, I need to know which objects I should get from Plone's Data.fs. I already got the Products.CMFPlone.Portal.PloneSite instance from the Data.fs, but I cannot get anything from it. I would like to get the PloneSite instance and do something like this:

        >>> import ZODB
        >>> from ZODB import FileStorage, DB
        >>> path = r"C:\Arquivos de programas\Plone\var\filestorage\Data.fs"
        >>> storage = FileStorage.FileStorage(path)
        >>> db = DB(storage)
        >>> conn = db.open()
        >>> root = conn.root()
        >>> app = root['Application']
        >>> plone_site = app.getChildNodes()[13]  # 13 would be index of PloneSite object
        >>> a = plone_site.get_articles()
        >>> for article in a:
        ...     print "Title:", article.title
        ...     print "Content:", article.content
        Title: <some title>
        Content: <some content>
        Title: <some title>
        Content: <some content>

    Of course, it does not need to be so straightforward. I just want some information about the structure of PloneSite and how to recover its data. Has anyone some idea? Thank you in advance!

    Read the article

  • Using Mapping Models to migrate between Core Data Object Models

    - by westsider
    I have a fairly simple scheme. Essentially, Run <-- Data (where a Run holds data, e.g. Temperature, sampled from some sort of sensor). Now, it seems that sensors can have more than one measurement (e.g. Temperature and Humidity). So, a single Run could have multiple data samples. Hence, Run <-- Sample and Sample <-- Data. (And for simplicity I am leaving Run <-- Data in place, for now.) If I create a new mapping model, then things generally work, except that no new Samples are created and no relationships are established between Runs and Samples nor between Samples and Datas. I am trying to get the mapping model to migrate my model, but even the slightest change to the generated mapping model results in Cocoa error 134110. For example, if I take the "Sample" mapping (which has no Source) and set its Source to 'Run' (so that I can set Sample's inverse relationship 'run' appropriately), then the mapping changes its name to "RunToSample". There are two relationships handled in this mapping: data and run. The data property gets set automatically to

        FUNCTION($manager, "destinationInstancesForEntityMappingNamed:sourceInstances:", "DataToData", $source.dataSet)

    Following this example, I set the run property to

        FUNCTION($manager, "destinationInstancesForEntityMappingNamed:sourceInstances:", "RunToRun", $source)

    Similarly, I set the 'sample' property mapping in RunToRun to

        FUNCTION($manager, "destinationInstancesForEntityMappingNamed:sourceInstances:", "RunToSample", $source)

    and the 'sample' property in DataToData to

        FUNCTION($manager, "destinationInstancesForEntityMappingNamed:sourceInstances:", "RunToSample", $source.run)

    So, what, I wonder, is going wrong? I have tried various permutations, such as leaving the 'inverse' relationships unspecified, but I continue to get the same error (134110) regardless. I imagine that this is a lot easier than it seems and that I am missing some fundamental but minor piece. I have also tried subclassing NSEntityMigrationPolicy and overriding -createDestinationInstancesForSourceInstance:, but these efforts have met with much the same results. Thanks in advance for any pointers or (relevant :-) advice.

    Read the article

  • Static Data Structures on Embedded Devices (Android in particular)

    - by Mark
    I've started working on some Android applications and have a question regarding how people normally deal with situations where you have a static data set and an application where that data is needed in memory as one of the standard java collections or as an array. In my current specific case I have a spreadsheet with some pre-calculated data. It consists of ~100 rows and 3 columns: one column is a string, one column is a float, one column is an integer. I need access to this data as an array in java. It seems like I could:

    1) Encode it in XML - this would be cpu intensive to decode, in my experience.
    2) Build it into a SQLite database - seems like a lot of overhead for static data I only need array-style access to in ram.
    3) Build it into a binary blob and read it in (never done this in java, I miss void *).
    4) Build a python script to take the CSV version of my data and spit out a java function that adds the values to my desired structure with hard-coded values.
    5) Store a string array via Android's resource mechanism and compute the other 2 columns on application load. In my case the computation would require a lot of calls to Math.log, Math.pow and Math.floor, which I'd rather not do for load-time and battery-usage reasons.

    I mostly work on low-power embedded applications in C, and as such 4) is what I'm used to doing in these situations. It just seems like it should be far easier to gain access to static data structures in java/android. Perhaps I'm just being too battery-usage conscious, and in my single case I imagine the answer is that it doesn't matter much, but if every application took that stance it could begin to matter. What approaches do people usually take in this situation? Anything I missed?

    Read the article

  • MVC 2.0 - JqGrid Sorting with Mulitple Tables

    - by Billy Logan
    I am in the process of implementing the jqGrid and would like to be able to use the sorting functionality. I have run into some issues with sorting columns that are related to the base table. Here is the script to load the grid:

        public JsonResult GetData(GridSettings grid)
        {
            try
            {
                using (IWE dataContext = new IWE())
                {
                    var query = dataContext.LKTYPE.Include("VWEPICORCATEGORY").AsQueryable();

                    // sorting
                    query = query.OrderBy<LKTYPE>(grid.SortColumn, grid.SortOrder);

                    // count
                    var count = query.Count();

                    // paging
                    var data = query.Skip((grid.PageIndex - 1) * grid.PageSize).Take(grid.PageSize).ToArray();

                    // converting to grid format
                    var result = new
                    {
                        total = (int)Math.Ceiling((double)count / grid.PageSize),
                        page = grid.PageIndex,
                        records = count,
                        rows = (from host in data
                                select new
                                {
                                    TYPE_ID = host.TYPE_ID,
                                    TYPE = host.TYPE,
                                    CR_ACTIVE = host.CR_ACTIVE,
                                    description = host.VWEPICORCATEGORY.description
                                }).ToArray()
                    };
                    return Json(result, JsonRequestBehavior.AllowGet);
                }
            }
            catch (Exception ex)
            {
                // send the error email
                ExceptionPolicy.HandleException(ex, "Exception Policy");
            }
            // have to return something if there is an issue
            return Json("");
        }

    As you can see, the description field is part of the related table ("VWEPICORCATEGORY") and the OrderBy is targeted at LKTYPE. I am trying to figure out how exactly one goes about sorting that particular field, or maybe even a better way to implement this grid using multiple tables and its sorting functionality. Thanks in advance, Billy
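    One way to read the sorting problem: a dynamic OrderBy<LKTYPE>(string, string) extension typically cannot traverse the VWEPICORCATEGORY navigation property. A hedged sketch that special-cases the related column before falling back to the dynamic sort (the "asc"/"desc" values for grid.SortOrder are an assumption; the other names follow the question):

        // Sketch: handle the related-table column explicitly, everything else dynamically.
        if (grid.SortColumn == "description")
        {
            query = grid.SortOrder == "desc"
                ? query.OrderByDescending(t => t.VWEPICORCATEGORY.description)
                : query.OrderBy(t => t.VWEPICORCATEGORY.description);
        }
        else
        {
            query = query.OrderBy<LKTYPE>(grid.SortColumn, grid.SortOrder);
        }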

    Read the article

  • CSRF Protection in AJAX Requests using MVC2

    - by mnemosyn
    The page I'm building depends heavily on AJAX. Basically, there is just one "page" and every data transfer is handled via AJAX. Since overoptimistic caching on the browser side leads to strange problems (data not reloaded), I have to perform all requests (also reads) using POST - that forces a reload. Now I want to protect the page against CSRF. With form submission, using Html.AntiForgeryToken() works neatly, but in AJAX requests I guess I will have to append the token manually. Is there anything out-of-the-box available? My current attempt looks like this: I'd love to reuse the existing magic. However, HtmlHelper.GetAntiForgeryTokenAndSetCookie is private and I don't want to hack around in MVC. The other option is to write an extension like

        public static string PlainAntiForgeryToken(this HtmlHelper helper)
        {
            // extract the actual field value from the hidden input
            return helper.AntiForgeryToken().DoSomeHackyStringActions();
        }

    which is somewhat hacky and leaves the bigger problem unsolved: how to verify that token? The default verification implementation is internal and hard-coded against using form fields. I tried to write a slightly modified ValidateAntiForgeryTokenAttribute, but it uses an AntiForgeryDataSerializer which is private, and I really didn't want to copy that, too. At this point it seems to be easier to come up with a homegrown solution, but that is really duplicate code. Any suggestions how to do this the smart way? Am I missing something completely obvious?
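    For the "extract the actual field value" step, a hedged sketch of what DoSomeHackyStringActions could look like; pulling the value attribute out of the rendered hidden input with a regex is an assumption, not a documented API:

        public static string PlainAntiForgeryToken(this HtmlHelper helper)
        {
            // Render the standard hidden input, then scrape its value attribute.
            string field = helper.AntiForgeryToken().ToHtmlString();
            var match = System.Text.RegularExpressions.Regex.Match(field, "value=\"([^\"]+)\"");
            return match.Success ? match.Groups[1].Value : null;
        }

    The client can then send that string with every AJAX POST as the __RequestVerificationToken form field, which keeps the default form-field-based verification path usable.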

    Read the article

  • Efficient data importing?

    - by Kevin
    We work with a lot of real estate, and while rearchitecting how the data is imported, I came across an interesting issue. Firstly, the way our system works (loosely speaking) is we run a Coldfusion process once a day that retrieves data provided from an IDX vendor via FTP. They push the data to us. Whatever they send us is what we get. Over the years, this has proven to be rather unstable. I am rearchitecting it with PHP on the RETS standard, which uses SOAP methods of retrieving data, which is already proven to be much better than what we had. When it comes to 'updating' existing data, my initial thought was to query only for data that was updated. There is a field for 'Modified' that tells you when a listing was last updated, and the code I have will grab any listing updated within the last 6 hours (give myself a window in case something goes wrong). However, I see a lot of real estate developers suggest creating 'batch' processes that run through all listings regardless of updated status that is constantly running. Is this the better way to do it? Or am I fine with just grabbing the data I know I need? It doesn't make a lot of sense to me to do more processing than necessary. Thoughts?

    Read the article

  • Putting CAPTCHAs on their own page?

    - by mnemosyn
    We need to put a captcha image on our ASP.NET MVC 2 based website. We chose reCaptcha and built it in using the way described by Derik Whittaker. The idea there is basically to build some abstractions, and all you need to do is decorate your Controller with a [ValidateCaptcha] attribute. This all works fine. However, we have a lot of form widgets on different pages and I don't want the captcha floating around everywhere. So I'd like to implement it the way StackOverflow does: submit a form -> challenge captcha -> submit captcha -> perform action on original form data. Now, how do I redirect the user to the captcha page while keeping the originally submitted information? I thought of some very ugly hacks (hidden fields w/ base64-encoded form data, etc.) but I think I'm missing something obvious. On the other hand, this sounds as if I wanted to do something in a stateful manner, and I shouldn't?
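    One way to avoid round-tripping the form data through hidden fields is to park it server-side across the redirect. A hedged sketch, assuming a hypothetical CommentForm model, the [ValidateCaptcha] attribute from the post mentioned above, and session state (TempData alone would need a Keep(), since two requests elapse between submit and verify):

        [HttpPost]
        public ActionResult Submit(CommentForm form)
        {
            // Park the pending submission, then challenge the user.
            Session["pendingForm"] = form;
            return RedirectToAction("Challenge", "Captcha");
        }

        // GET: renders the captcha page.
        public ActionResult Challenge()
        {
            return Session["pendingForm"] == null
                ? (ActionResult)RedirectToAction("Index")
                : View();
        }

        [HttpPost, ValidateCaptcha]
        public ActionResult Verify()
        {
            var form = Session["pendingForm"] as CommentForm;
            Session.Remove("pendingForm");
            if (form == null) return RedirectToAction("Index");
            return ProcessSubmission(form); // hypothetical: the original action's work
        }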

    Read the article

  • Architecture with NHibernate and Repositories

    - by Matthew
    I've been reading up on MVC 2 and the recommended patterns; so far I've come to the conclusion (amongst much hair pulling and total confusion) that:

    - Model - is just a basic data container
    - Repository - provides data access
    - Service - provides business logic and acts as an API to the Controller

    The Controller talks to the Service; the Service talks to the Repository and Model. So for example, if I wanted to display a blog post page with its comments, I might do:

        post = PostService.Get(id);
        comments = PostService.GetComments(post);

    Or would I do:

        post = PostService.Get(id);
        comments = post.Comments;

    If so, where is this being set - from the repository? The problem there being it's not lazy loaded. That's not a huge problem, but then say I wanted to list 10 posts with the first 2 comments for each; I'd have to load the posts, then loop and load the comments, which becomes messy. All of the examples use "InMemory" repositories for testing and say that including db stuff would be out of scope. But this leaves me with many blanks, so for a start can anyone comment on the above?
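    On the posts-with-comments case: one common answer is to let the repository own the fetch strategy, so the service asks one question and gets one round trip. A hedged sketch assuming NHibernate 3's LINQ provider; the entity and property names are illustrative:

        using System.Linq;
        using NHibernate;
        using NHibernate.Linq;

        public class PostRepository
        {
            private readonly ISession _session;
            public PostRepository(ISession session) { _session = session; }

            public Post Get(int id)
            {
                return _session.Get<Post>(id);
            }

            // Eagerly fetch the comments so post.Comments is populated
            // without a lazy-load round trip per post.
            public Post GetWithComments(int id)
            {
                return _session.Query<Post>()
                               .Where(p => p.Id == id)
                               .FetchMany(p => p.Comments)
                               .SingleOrDefault();
            }
        }

    The service then backs PostService.Get(id) with GetWithComments, post.Comments arrives already populated, and the controller never learns how.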

    Read the article

  • How to html encode the output of an NHaml view (or any MVC view)?

    - by jessegavin
    I have several views written in NHaml that I would like to render as encoded html. Here's one:

        %table.data
          %thead
            %tr
              %th Country Name
              %th ISO 2
              %th ISO 3
              %th ISO #
          %tbody
            - foreach(var c in ViewData.Model.Countries)
              %tr
                %td= c.Name
                %td= c.Alpha2
                %td= c.Alpha3
                %td= c.Number

    I know that NHaml provides syntax to HTML-encode the output for given lines using &=. However, in order to encode the entire view, I would essentially lose the benefit of writing my view in NHaml, since it would have to look like this:

        &= "<table class='data'>"
        &= "  <thead>"

    So I was wondering if there was any cool way to capture the rendered view as a string, then html encode that string. Maybe something like the following?

        public ContentResult HtmlTable(string format)
        {
            var m = new CountryViewModel();
            m.Countries = _countryService.GetAll();

            // Somehow render the view and store it as a string?
            // Not sure how to achieve this.
            var viewHtml = View("HtmlTable", m); // ???

            return Content(viewHtml);
        }

    This question may actually have no particular relevance to the view engine that I am using, I guess. Any help or thoughts would be appreciated.
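    The capture-the-view part is achievable with the stock MVC view engine API, independent of NHaml. A hedged sketch of the action (the view model and service are from the question; rendering into a StringWriter and encoding at the end is the added idea):

        public ContentResult HtmlTable()
        {
            var m = new CountryViewModel();
            m.Countries = _countryService.GetAll();
            ViewData.Model = m;

            using (var sw = new System.IO.StringWriter())
            {
                // Locate the view and render it into the StringWriter.
                ViewEngineResult result = ViewEngines.Engines.FindPartialView(ControllerContext, "HtmlTable");
                var viewContext = new ViewContext(ControllerContext, result.View, ViewData, TempData, sw);
                result.View.Render(viewContext, sw);
                result.ViewEngine.ReleaseView(ControllerContext, result.View);

                // Now encode the captured markup in one go.
                return Content(System.Web.HttpUtility.HtmlEncode(sw.ToString()));
            }
        }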

    Read the article

  • source of historical stock data

    - by rmeador
    I'm trying to make a stock market simulator (perhaps eventually growing into a predicting AI), but I'm having trouble finding data to use. I'm looking for a (hopefully free) source of historical stock market data. Ideally, it would be a very fine-grained (second or minute interval) data set with price and volume of every symbol on NASDAQ and NYSE (and perhaps others if I get adventurous). Does anyone know of a source for such info? I found this question which indicates Yahoo offers historical data in CSV format, but I've been unable to find out how to get it in a cursory examination of the site linked. I also don't like the idea of downloading the data piecemeal in CSV files... I imagine Yahoo would get upset and shut me off after the first few thousand requests. I also discovered another question that made me think I'd hit the jackpot, but unfortunately that OpenTick site seems to have closed its doors... too bad, since I think they were exactly what I wanted. I'd also be able to use data that's just open/close price and volume of every symbol every day, but I'd prefer all the data if I can get it. Any other suggestions?

    Read the article

  • what are the possibilities of displaying items in a listbox.

    - by Selase
    I've been trying to figure out for a long time now how to create an interface that allows users to input several rows of data and pass all those entries into an SQL Server database in one shot. I could not come up with any better ideas, so I came up with this (see picture below). What I envisioned is that the user enters values in the textboxes and hits the "add to list" button. The values are then populated in the list box below with the heading "exhibits lists", and when the "add exhibit" button is pressed, all values from the list box are passed into the database. Well, I am left wondering again whether it would be possible to tie the values from the textboxes to the list box, and whether I'd be able to pass them into the database. If it is possible, then I'd love to know how to go about it; otherwise I'd be glad if you could recommend a better way for me to handle the situation, or else I'd have to resort to data entry one row at a time. :( Counting on your sublime advice, thanks. I believe there is some useful information on this website that can help solve my problem, but I just can't make head or tail of the article... it seems like I am almost there and it skids off. Can everyone please read it and help me adapt it to my situation? Thanks. Post below: http://www.codeproject.com/KB/aspnet/ExtendedGridView.aspx
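    For the "all values in one shot" part, a hedged sketch of the server-side insert: it assumes a server-side ListBox named lstExhibits that actually contains the accumulated rows (items added purely by client-side javascript would need to be posted back, e.g. via a hidden field), a hypothetical Exhibits table, and a single transaction so the batch goes in as one unit:

        using (var conn = new SqlConnection(connectionString))
        {
            conn.Open();
            using (var tx = conn.BeginTransaction())
            {
                foreach (ListItem item in lstExhibits.Items)
                {
                    using (var cmd = new SqlCommand(
                        "INSERT INTO Exhibits (Description) VALUES (@desc)", conn, tx))
                    {
                        cmd.Parameters.AddWithValue("@desc", item.Text);
                        cmd.ExecuteNonQuery();  // one row per list entry
                    }
                }
                tx.Commit();  // all rows committed together
            }
        }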

    Read the article

  • Architecture for data layer that uses both localStorage and a REST remote server

    - by Zack
    Anybody have any ideas or references on how to implement a data persistence layer that uses both localStorage and a REST remote storage?

    - The data of a certain client is stored with localStorage (using an ember-data indexedDB adapter).
    - The locally stored data is synced with the remote server (using the ember-data RESTAdapter).
    - The server gathers all data from clients. Using mathematical set notation: Server = Client1 ∪ Client2 ∪ ... ∪ ClientN, where, in general, a record may not be unique to a certain client.

    Here are some scenarios:

    - A client creates a record. The id of the record cannot be set on the client, since it may conflict with a record stored on the server. Therefore a newly created record needs to be committed to the server, receive its id, and then be created in localStorage.
    - A record is updated on the server, and as a consequence the data in localStorage and on the server go out of sync. Only the server knows that, so the architecture needs to implement a push architecture (?).

    Would you use 2 stores (one for localStorage, one for REST) and sync between them, or use a hybrid indexedDB/REST adapter and write the sync code within the adapter? Can you see any way to avoid implementing push (Web Sockets, ...)?

    Read the article
