Search Results

Search found 109760 results on 4391 pages for 'ado net entity data model'.

  • linq-to-sql "an attempt has been made to attach or add an entity that is not new"?

    - by Curtis White
    I've been getting several errors:

    - "Cannot add an entity with a key that is already in use"
    - "An attempt has been made to attach or add an entity that is not new, perhaps having been loaded from another DataContext"

    In case 1, this stems from trying to set the key for an entity rather than the entity itself. In case 2, I'm not attaching an entity, but I am doing this:

        MyParent.Child = EntityFromOtherDataContext;

    I've been using the pattern of wrapping everything in a using (DataContext) block. In my case, I am using this in a web forms scenario, and obviously moving the DataContext to a class-wide member variable solves this. My questions are therefore two-fold:

    1. How can I get rid of these errors without structuring my program in an odd way or passing the DataContext around, while keeping the local-wrap pattern? I assume I could make another hit to the database, but that seems very inefficient.
    2. Would most people recommend moving the DataContext to class-wide scope for web pages?
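
    For what it's worth, the usual cure for case 2 is to re-load the related entity inside the current DataContext rather than assigning an object tracked by another context; this is the "another hit to the database" the question mentions. A minimal sketch (type and member names such as MyDataContext, Parents, and Children are hypothetical):

        // Sketch only: re-fetch the child by key in *this* context so
        // LINQ to SQL never sees an entity loaded by another DataContext.
        using (var db = new MyDataContext())
        {
            var parent = db.Parents.Single(p => p.Id == parentId);

            // Instead of: parent.Child = entityFromOtherDataContext;
            parent.Child = db.Children.Single(c => c.Id == entityFromOtherDataContext.Id);

            db.SubmitChanges();
        }

    It trades one extra query for keeping the local-wrap pattern intact.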

  • In this example, would Customer or AccountInfo properly be the entity group parent?

    - by Badhu Seral
    In this example, the Google App Engine documentation makes the Customer the entity group parent of the AccountInfo entity. Wouldn't AccountInfo encapsulate Customer rather than the other way around? Normally I would think of an AccountInfo class as including all of the information about the Customer.

        import javax.jdo.annotations.IdGeneratorStrategy;
        import javax.jdo.annotations.PersistenceCapable;
        import javax.jdo.annotations.Persistent;
        import javax.jdo.annotations.PrimaryKey;
        import com.google.appengine.api.datastore.Key;
        import com.google.appengine.api.datastore.KeyFactory;

        @PersistenceCapable
        public class AccountInfo {
            @PrimaryKey
            @Persistent(valueStrategy = IdGeneratorStrategy.IDENTITY)
            private Key key;

            public void setKey(Key key) {
                this.key = key;
            }
        }

        // ...
        KeyFactory.Builder keyBuilder =
            new KeyFactory.Builder(Customer.class.getSimpleName(), "custid985135");
        keyBuilder.addChild(AccountInfo.class.getSimpleName(), "acctidX142516");
        Key key = keyBuilder.getKey();

        AccountInfo acct = new AccountInfo();
        acct.setKey(key);
        pm.makePersistent(acct);

  • Entity Framework + MySQL - Why is the performance so terrible?

    - by Cyril Gupta
    When I decided to use an OR/M (Entity Framework for MySQL this time) for my new project, I was hoping it would save me time, but I seem to have failed at it (for the second time now). Take this simple SQL query:

        SELECT * FROM POST ORDER BY addedOn DESC LIMIT 0, 50

    It executes and gives me results in less than a second, as it should (the table has about 60,000 rows). Here's the equivalent LINQ to Entities query that I wrote for it:

        var q = (from p in db.post
                 orderby p.addedOn descending
                 select p).Take(50);
        var q1 = q.ToList(); // this is where the query is fetched, and where it times out

    But this query never even executes; it ALWAYS times out (without the orderby it takes 5 seconds to run). My timeout is set to 12 seconds, so you can imagine it is taking much more than that. Why is this happening? Is there a way I can see the actual SQL query that Entity Framework is sending to the db? Should I give up on EF+MySQL and move to standard SQL before I lose all eternity trying to make it work? I've recalibrated my indexes and tried eager loading (which actually makes it fail even without the orderby clause). Please help; I am about to give up on OR/M for MySQL as a lost cause.
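
    On the "how do I see the SQL" part: in the ObjectContext-era Entity Framework, a LINQ to Entities query can be cast to ObjectQuery and asked for its SQL with ToTraceString(). A minimal sketch, assuming that flavor of EF and the entity type name post from the question:

        // Sketch: dump the SQL that Entity Framework will send to MySQL,
        // so it can be EXPLAINed in the MySQL client directly.
        using System;
        using System.Data.Objects;

        var query = (from p in db.post
                     orderby p.addedOn descending
                     select p).Take(50);

        var objectQuery = query as ObjectQuery<post>;
        if (objectQuery != null)
        {
            Console.WriteLine(objectQuery.ToTraceString());
        }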

  • How do I serialize/deserialize a NHibernate entity that has references to other objects?

    - by Daniel T.
    I have two NHibernate-managed entities that have a bi-directional one-to-many relationship:

        public class Storage
        {
            public virtual string Name { get; set; }
            public virtual IList<Box> Boxes { get; set; }
        }

        public class Box
        {
            public virtual string Name { get; set; }

            [DoNotSerialize]
            public virtual Storage ParentStorage { get; set; }
        }

    A Storage can contain many Boxes, and a Box always belongs in a Storage. I want to edit a Box's name, so I send it to the client using JSON. Note that I don't serialize ParentStorage, because I'm not changing which storage the box is in. The client edits the name and sends the Box back as JSON, and the server deserializes it back into a Box entity. The problem is that the ParentStorage property is now null. When I try to save the Box to the database, it updates the name but also removes the relationship to the Storage. How do I properly serialize and deserialize an entity like a Box while keeping the JSON data size to a minimum?
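
    A common pattern here is to never save the deserialized object directly: load the persistent instance by id and copy over only the fields the client is allowed to change, so ParentStorage is never touched. A minimal sketch (the Id property and the method name are hypothetical):

        // Sketch: apply the client's edit to the entity NHibernate already
        // tracks, instead of attaching the deserialized, half-empty Box.
        public void UpdateBoxName(ISession session, Box incoming)
        {
            using (var tx = session.BeginTransaction())
            {
                var persistent = session.Get<Box>(incoming.Id); // ParentStorage intact
                persistent.Name = incoming.Name;                // apply the edit
                tx.Commit();                                    // only Name is updated
            }
        }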

  • How to delete-protect a file or folder on Windows Server 2003 and onwards using C#/VB.Net?

    - by Steve Johnson
    Hi all,

    Is it possible to delete-protect a file/folder using the registry or a custom-written Windows Service in C#? Using folder permissions it is possible, but I am looking for a solution that restricts even the admin from deleting specific folders. The requirement is that the administrator must not be able to easily track the nature of the protection and/or must not be able to avert it easily. Obviously any administrator will be able to revert the procedure if the technique is clearly understood; for example, folder permission/ownership settings can easily be reset by an administrator, so that is not an option. Folder-protection software can easily be uninstalled, and it shows a clear indication that a particular folder is protected by some special kind of software, so that too is not an option. Most antivirus programs protect the folders and files in their program dir, and Windows itself doesn't allow certain files, such as the registry files in c:\windows\system32\config, to even be copied. Such protection is desired for folders: they should still allow reading and writing of files, but not deletion. The protection has to be seamless and invisible. I do not want to use any protection features like FolderLock, Invisible Secrets/PC Security, desktop passwords, etc. Moreover, the solution has to be something other than folder encryption. The solution has to be OS-native so that it may be implemented programmatically using C#/VB.Net.

    Please help. Thanks
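
    For reference, the closest OS-native building block is an NTFS deny ACE applied from code; it is invisible in day-to-day use, though (as the question concedes) an administrator who understands it can take ownership and remove it. A minimal sketch using the standard System.Security.AccessControl API (the folder path is hypothetical):

        // Sketch: add a "deny delete" rule for Everyone on a folder.
        // Reading and writing files remains allowed; deletion is refused.
        using System.IO;
        using System.Security.AccessControl;
        using System.Security.Principal;

        var dir = new DirectoryInfo(@"C:\ProtectedFolder");
        DirectorySecurity security = dir.GetAccessControl();

        var everyone = new SecurityIdentifier(WellKnownSidType.WorldSid, null);
        security.AddAccessRule(new FileSystemAccessRule(
            everyone,
            FileSystemRights.Delete | FileSystemRights.DeleteSubdirectoriesAndFiles,
            InheritanceFlags.ContainerInherit | InheritanceFlags.ObjectInherit,
            PropagationFlags.None,
            AccessControlType.Deny)); // a deny ACE wins over any allow

        dir.SetAccessControl(security);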

  • SQLAuthority News – Download SQL Azure Labs Codename “Data Explorer” Client

    - by pinaldave
    Microsoft SQL Azure Labs has recently released the Data Explorer client. I had been looking forward to a visualization tool for quite a while, and I am delighted to see this tool. I will be trying it out in the coming week and will post my experience here. I have listed a few of the resources related to Data Explorer at the end; please let me know if I have missed any and I will add them.

    With "Data Explorer" you can:

    - Identify the data you care about from the sources you work with (e.g. Excel spreadsheets, files, SQL Server databases).
    - Discover relevant data and services via automatic recommendations from the Windows Azure Marketplace.
    - Enrich your data by combining it and visualizing the results.
    - Collaborate with your colleagues to refine the data.
    - Publish the results to share them with others or power solutions.

    The Data Explorer client package contains the Data Explorer workspace as well as an Office plugin that integrates Data Explorer into Excel.

    Resources:

    - Download Data Explorer
    - Data Explorer Blog
    - Desktop Client Video of Contoso Bikes and Frozen Yogurt (Data Explorer)

    Please note that this is not the final release of the product. Please do not attempt this on a production server.

    Reference: Pinal Dave (http://blog.sqlauthority.com)

    Filed under: PostADay, SQL, SQL Authority, SQL Azure, SQL Documentation, SQL Download, SQL Query, SQL Server, SQL Tips and Tricks, SQLAuthority News, T SQL, Technology

  • Big Data for Retail

    - by David Dorf
    Right up there with mobile, social, and cloud is the term "big data," which seems to be popping up a lot in the press these days. Companies like Google, Yahoo, and Facebook have popularized a new class of data technologies meant to solve the problem of processing large amounts of data quickly. I first mentioned this in a posting back in March 2009. Put simply, big data implies datasets so large they can't normally be processed using a standard transactional database. The term "NoSQL" is often used in this context as well. Actually, using parallel processing within the Oracle database combined with Exadata can achieve impressive results. Look for more from Oracle at OpenWorld, as hinted by Jean-Pierre Dijcks.

    McKinsey recently released a report on big data in which retail was specifically mentioned as an industry that can benefit from the new technologies. I won't rehash that report because my friend Rama already did such a good job in his posting, Impact of "Big Data" on Retail.

    The presentation below does a pretty good job of framing the problem, although it doesn't really get into the available technologies (e.g. Exadata, Hadoop, Cassandra, etc.) and isn't retail-specific:

        Determine the Right Analytic Database: A Survey of New Data Technologies

    So when a retailer asks me about big data, here's what I say: big data refers to a set of technologies for processing large volumes of structured and unstructured data. Imagine collecting everything uttered by your customers on Facebook and Twitter and combining it with all the data you can find about the products you sell (e.g. reviews, images, demonstration videos), including competitive data. Assuming you could process all that data, you could then personalize offers to specific customers based on their tastes, ensure prices are competitive, and implement better local assortments. It's really not that far off.

  • Data and Secularism

    - by kaleidoscope
    Ever since we've been using data, we've been religious: religious about the way we represent it, and equally religious about the way we access it. Be it plain old SQL, DAO, ADO, or ADO.NET, and I am just referring to the religions in the MSFT world; a peek outside and I'd need a separate book to list out the data faiths. Various application areas in networked computing are converging under the HTTP umbrella, with a plausible transition to purist HTTP, and in turn REST, fuelled by the Web 2.0 storm. It was time the data access faiths also gave up the religious silos wrapped around our long-worshipped data publishing and access methods.

    OData is the secular solution we have at hand today. It is an open protocol for sharing data. It can be exposed via REST. It is Open as in the Microsoft Open Specification Promise, which allows virtually everyone to build data services for any runtime. OData is one of the key standards for data publishing/subscribing on Microsoft Codename "Dallas". For us .NETters, OData data sources can be exposed and consumed via WCF Data Services, and the process is very simple, elegant and intuitive.

    Applications exposing OData services:

    - SharePoint 2010
    - IBM WebSphere
    - Microsoft SQL Azure
    - Windows Azure Table Storage
    - SQL Server Reporting Services

    Live OData services:

    - Netflix
    - Open Science Data Initiative
    - Open Government Data Initiatives
    - Northwind database exposed as an OData service
    - and many others

    Some may prefer to call it commoditization of data, unification of data access strategies, or any other sweet name. I for one will stick to my secular definition. :)

    Technorati Tags: Sarang,OData,MOSP
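
    To illustrate the "simple, elegant and intuitive" claim, here is a minimal sketch of a WCF Data Services endpoint over an Entity Framework model (NorthwindEntities stands in for any generated context; the access rule and protocol version are illustrative):

        // Sketch: expose every entity set of an EF model as a read-only
        // OData feed using the System.Data.Services API.
        using System.Data.Services;
        using System.Data.Services.Common;

        public class NorthwindService : DataService<NorthwindEntities>
        {
            public static void InitializeService(DataServiceConfiguration config)
            {
                // Read access to all entity sets; tighten per set in production.
                config.SetEntitySetAccessRule("*", EntitySetRights.AllRead);
                config.DataServiceBehavior.MaxProtocolVersion =
                    DataServiceProtocolVersion.V2;
            }
        }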

  • Using Apache FOP from .NET level

    - by Lukasz Kurylo
    In one of my previous posts I was talking about FO.NET, which I was using to generate pdf documents from XSL-FO. FO.NET is one of the .NET ports of Apache FOP. Unfortunately, it is no longer maintained. I knew that when I decided to use it, because there is a lack of available (free) choices for .NET to render a pdf from XSL-FO. I hoped that in this implementation I would find everything I needed to create a pdf file for my really simple requirements. FO.NET is a port of some old version of Apache FOP, and I found really quickly that it lacks some features I needed, like dotted borders, double borders, or support for margins. So I started looking for alternatives. I didn't try NFOP, another port of Apache FOP, because I found something I think is much better: the IKVM.NET project.

    IKVM.NET is not a pdf renderer. So what is it? From the project site:

        IKVM.NET is an implementation of Java for Mono and the Microsoft .NET Framework. It includes the following components:

        - a Java Virtual Machine implemented in .NET
        - a .NET implementation of the Java class libraries
        - tools that enable Java and .NET interoperability

    In its simplest form, IKVM.NET allows you to use a Java code library in C# code and vice versa.

    I tried to use Apache FOP, the best open source pdf/XSL-FO renderer I know of, written in Java, from my C# project through IKVM.NET, and it worked like a charm. In the rest of the post I want to show how to prepare a .NET *.dll class library from the Apache FOP *.jar's with IKVM.NET and generate a simple Hello World pdf document.

    To start playing with IKVM.NET and Apache FOP we need to download their packages:

    - IKVM.NET
    - Apache FOP

    and then unpack them.

    From the FOP directory, copy all the *.jar files from the lib and build directories to some location, e.g. d:\fop. The second step is to build the *.dll library from these files. On the console, execute the following command:

        ikvmc -target:library -out:d:\fop\fop.dll -recurse:d:\fop

    The ikvmc tool is located in the bin subdirectory of the unpacked IKVM.NET. You must execute this command from that directory, add the path to the global PATH variable, or specify the full path to the bin subdirectory.

    If no error occurred during this process, the fop.dll library should be created. Now we can create a simple project to test whether we can create a pdf file. So let's create a simple console application project and add references to fop.dll and the IKVM dll's: IKVM.OpenJDK.Core and IKVM.OpenJDK.XML.API.

    Full code to generate a pdf file from an XSL-FO template:

        static void Main(string[] args)
        {
            // initialize Apache FOP
            FopFactory fopFactory = FopFactory.newInstance();

            // the generated pdf file ends up in this stream
            OutputStream o = new DotNetOutputMemoryStream();
            try
            {
                Fop fop = fopFactory.newFop("application/pdf", o);
                TransformerFactory factory = TransformerFactory.newInstance();
                Transformer transformer = factory.newTransformer();

                // read the template from disc
                Source src = new StreamSource(new File("HelloWorld.fo"));
                Result res = new SAXResult(fop.getDefaultHandler());
                transformer.transform(src, res);
            }
            finally
            {
                o.close();
            }

            using (System.IO.FileStream fs = System.IO.File.Create("HelloWorld.pdf"))
            {
                // write the generated pdf from the .NET MemoryStream to disc
                var data = ((DotNetOutputMemoryStream)o).Stream.GetBuffer();
                fs.Write(data, 0, data.Length);
            }

            Process.Start("HelloWorld.pdf");
            System.Console.ReadLine();
        }

    Apache FOP by default uses Java's Xalan to work with XML files. I didn't find a way to replace this piece of code with an equivalent from the .NET standard library. If any error or warning occurs while generating the pdf file, it will be shown on the console; that's why I inserted the last line in the sample above. DotNetOutputMemoryStream is my wrapper for the Java OutputStream. I created it to be able to exchange data between the .NET and Java objects. Its implementation:

        class DotNetOutputMemoryStream : OutputStream
        {
            private System.IO.MemoryStream ms = new System.IO.MemoryStream();

            public System.IO.MemoryStream Stream
            {
                get { return ms; }
            }

            public override void write(int i)
            {
                ms.WriteByte((byte)i);
            }

            public override void write(byte[] b, int off, int len)
            {
                ms.Write(b, off, len);
            }

            public override void write(byte[] b)
            {
                ms.Write(b, 0, b.Length);
            }

            public override void close()
            {
                ms.Close();
            }

            public override void flush()
            {
                ms.Flush();
            }
        }

    The last thing we need is the HelloWorld.fo template:

        <?xml version="1.0" encoding="utf-8"?>
        <fo:root xmlns:fo="http://www.w3.org/1999/XSL/Format"
                 xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance">
          <fo:layout-master-set>
            <fo:simple-page-master master-name="simple"
                                   page-height="29.7cm"
                                   page-width="21cm"
                                   margin-top="1.8cm"
                                   margin-bottom="0.8cm"
                                   margin-left="1.6cm"
                                   margin-right="1.2cm">
              <fo:region-body margin-top="3cm"/>
              <fo:region-before extent="3cm"/>
              <fo:region-after extent="1.5cm"/>
            </fo:simple-page-master>
          </fo:layout-master-set>
          <fo:page-sequence master-reference="simple">
            <fo:flow flow-name="xsl-region-body">
              <fo:block font-size="18pt" color="black" text-align="center">
                Hello, World!
              </fo:block>
            </fo:flow>
          </fo:page-sequence>
        </fo:root>

    I'm not going to explain how this template is built, because that will be covered in near-future posts. The generated pdf contains a single centered "Hello, World!" line.

  • Can web apps allow fast data-typists to "type-ahead"?

    - by user61852
    In some data entry contexts, I've seen data typists type really fast, know the app they use so well, and have such a mechanical quality to their work that they can "type ahead": they continue typing, tabbing, and entering faster than the display updates, so that on many occasions they are typing in the data for the next form before it has drawn itself. Then, when that next entry form appears, their buffered keystrokes fill its text boxes and they continue typing, selecting, etc. In contexts like this, such speed is desirable, since these people are really productive.

    I think this "type ahead of time" is only possible in desktop apps, but I may be wrong. My question is whether this way of handling the keyboard buffer (which in desktop apps requires no extra programming) is achievable in web apps, or whether it is impossible because of the way web apps work, handle sessions, etc. (network latency and the overhead of generating new web pages)?

    Edit: By "type ahead" I mean "keyboard type ahead" (typing faster than the next entry form can load), not Google-style suggest-as-you-type:

        Typeahead is a feature of computers and software (and some typewriters) that enables users to continue typing regardless of program or computer operation: the user may type at whatever speed he or she desires, and if the receiving software is busy at the time, it will be called to handle this later. Often this means that keystrokes entered will not be displayed on the screen immediately. This programming technique for handling user input is known as a keyboard buffer.

  • How to migrate user settings and data to new machine?

    - by torbengb
    I'm new to Ubuntu and recently started using it on my PC. I'm going to replace that PC with a new machine, and I want to transfer my data and settings to it. What aspects should I consider? Obviously I want to move my data over, but what things am I missing if I only copy the entire home folder? This is a home PC (not corporate), so user rights and other security issues are not a concern, except that the files should be accessible on the new machine!

    Please take into account that the new machine is a nettop that doesn't have an optical drive and doesn't allow me to hook the old SATA disk into it, so any data transfer must be handled via the home network (I can have both the old and the new machine turned on and connected to the home LAN), and I have a USB thumbdrive with limited capacity (2GB). This sounds like it might limit the general applicability, but it would in fact make the question more general.

    I'll make this a wiki topic because there could be several "right" answers. Update: Or so I thought. I don't see a choice for that.

  • Is it conceivable to have millions of lists of data in memory in Python?

    - by Codemonkey
    I have over the last 30 days been developing a Python application that utilizes a MySQL database of information (specifically about Norwegian addresses) to perform address validation and correction. The database contains approximately 2.1 million rows (43 columns) of data and occupies 640MB of disk space. I'm thinking about speed optimizations, and I've got to assume that when validating 10,000+ addresses, each validation running up to 20 queries to the database, networking is a speed bottleneck. I haven't done any measuring or timing yet, and I'm sure there are simpler ways of speed optimizing the application at the moment, but I just want to get the experts' opinions on how realistic it is to load this amount of data into a row-of-rows structure in Python. Also, would it even be any faster? Surely MySQL is optimized for looking up records among vast amounts of data, so how much help would it even be to remove the networking step? Can you imagine any other viable methods of removing the networking step? The location of the MySQL server will vary, as the application might well be run from a laptop at home or at the office, where the server would be local.

  • Using VBA to model data in Autodesk Inventor?

    - by user108478
    I have a close friend who is using a specific device that records the dimensions of an object as it is eroded and outputs the dimensional data to an Excel sheet. The object is spherical in nature but is eroded from the top and bottom, so the shape is constantly changing and a single formula for surface area and volume would not work. This is where Inventor comes in. My friend can plug the dimensional data into Inventor and it immediately returns the surface area and volume. The erosion process takes several minutes to complete and records data at very short intervals, so it would be very arduous to plug in the data thousands of times. Since Inventor supports macros and VBA, is there a way to feed the data into Inventor and output the results to another spreadsheet? Any suggestions would be appreciated.

  • melonJS: Entity and solid block on collision layer

    - by Arthur Halma
    I have a player entity with a 64x64 sprite animation and an 18x60 hitbox, and the map is made of 16x16 tiles. When my player moves in certain ways he can pass through blocks (but not all of them). For example, there are four situations (screenshots omitted):

    1. Good (player can't pass the tile with the isSolid property on the collision layer)
    2. Good (player can't pass the tile with the isSolid property on the collision layer)
    3. Bad (player passes the tile with the isSolid property on the collision layer)
    4. Bad (player passes the tile with the isSolid property on the collision layer)

    It looks like melonJS checks only the corners of the hitbox instead of the whole rectangle. Can anyone help me with this situation?

  • MATLAB: What is an appropriate Data Structure for a Matrix with Random Variable Entries?

    - by user12707
    I'm working in an area that is related to simulation, and I'm trying to design a data structure that can include random variables within matrices. I am currently coding in MATLAB. To motivate this, let me say I have the following matrix:

        [a b; c d]

    I want a data structure that allows a, b, c, d to be either real numbers or random variables. As an example, let's say that a = 1, b = -1, c = 2, but let d be a normally distributed random variable with mean 20 and SD 40. The data structure that I have in mind would give no value to d. However, I also want to be able to design a function that can take in the structure, simulate a uniform(0,1), obtain a value for d using an inverse CDF, and then spit out an actual matrix. I have several ideas for doing this (all related to the MATLAB icdf function) but would like to know how more experienced programmers would do it. In this application, it's important that the structure is as "lean" as possible, since I will be working with very, very large matrices and memory will be an issue.

  • How do I achieve a 'select or insert' task using LINQ to EF?

    - by ProfK
    I have an import process with regions, locations, and shifts, where a Shift object has a Location property and a Location object has a Region property. If a region name does not exist, I create the region, and likewise a location. I thought I could neatly encapsulate the 'select if exists, or create' logic into helper classes for Region and Location, but if I use local data contexts in these classes I run into attach and detach overheads that become unpleasant. If I include a data context dependency in these classes, my encapsulation feels broken. What is the ideal method for achieving this, or where is the ideal place for this functionality? In my example I have leaned heavily on the foreign-key crutch provided with .NET 4.0 and simply avoided using entities in favour of direct foreign-key values, but this is starting to smell. Example:

        public partial class ActivationLocation
        {
            public static int GetOrCreate(int regionId, string name)
            {
                using (var ents = new PvmmsEntities())
                {
                    var loc = ents.ActivationLocations
                        .FirstOrDefault(x => x.RegionId == regionId && x.Name == name);
                    if (loc == null)
                    {
                        loc = new ActivationLocation { RegionId = regionId, Name = name };
                        ents.AddToActivationLocations(loc);
                        ents.SaveChanges(SaveOptions.AcceptAllChangesAfterSave);
                    }
                    return loc.LocationId;
                }
            }
        }
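
    One way to keep the encapsulation while avoiding both the hidden context and the foreign-key crutch is to pass the caller's context into the helper, so the returned entity is already attached to it. A sketch of that variant (same hypothetical PvmmsEntities model as above):

        // Sketch: the helper borrows the caller's ObjectContext instead of
        // owning one, so no attach/detach is needed and the caller can
        // assign the returned entity directly to a navigation property.
        public static ActivationLocation GetOrCreate(PvmmsEntities ents, int regionId, string name)
        {
            var loc = ents.ActivationLocations
                          .FirstOrDefault(x => x.RegionId == regionId && x.Name == name);
            if (loc == null)
            {
                loc = new ActivationLocation { RegionId = regionId, Name = name };
                ents.AddToActivationLocations(loc);
                ents.SaveChanges();
            }
            return loc;
        }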

  • Migrating data from Plone to Liferay, or how could I retrieve information from Plone's Data.fs

    - by brandizzi
    Hello, all. I need to migrate data from a Plone-based portal to Liferay. Does anyone have an idea of how to do it?

    In any case, I am trying to retrieve data from Data.fs and store it in a representation that is easier to work with, such as JSON. To do that, I need to know which objects I should get from Plone's Data.fs. I already got the Products.CMFPlone.Portal.PloneSite instance from the Data.fs, but I cannot get anything out of it. I would like to get the PloneSite instance and do something like this:

        >>> import ZODB
        >>> from ZODB import FileStorage, DB
        >>> path = r"C:\Arquivos de programas\Plone\var\filestorage\Data.fs"
        >>> storage = FileStorage.FileStorage(path)
        >>> db = DB(storage)
        >>> conn = db.open()
        >>> root = conn.root()
        >>> app = root['Application']
        >>> plone_site = app.getChildNodes()[13] # 13 would be the index of the PloneSite object
        >>> a = plone_site.get_articles()
        >>> for article in a:
        ...     print "Title:", article.title
        ...     print "Content:", article.content
        Title: <some title>
        Content: <some content>
        Title: <some title>
        Content: <some content>

    Of course, it does not need to be this straightforward; I just want some information about the structure of PloneSite and how to recover its data. Does anyone have an idea? Thank you in advance!

  • Static Data Structures on Embedded Devices (Android in particular)

    - by Mark
    I've started working on some Android applications and have a question regarding how people normally deal with situations where you have a static data set and an application where that data is needed in memory as one of the standard Java collections or as an array.

    In my current case I have a spreadsheet with some pre-calculated data. It consists of ~100 rows and 3 columns: one column is a string, one column is a float, one column is an integer. I need access to this data as an array in Java. It seems like I could:

    1. Encode it in XML - decoding this is CPU-intensive in my experience.
    2. Build it into an SQLite database - seems like a lot of overhead for static, array-style access to data I only need in RAM.
    3. Build it into a binary blob and read it in (I've never done this in Java; I miss void *).
    4. Build a Python script that takes the CSV version of my data and spits out a Java function that adds the values to my desired structure with hard-coded values.
    5. Store a string array via Android's resource mechanism and compute the other two columns on application load. In my case the computation would require a lot of calls to Math.log, Math.pow and Math.floor, which I'd rather not do, for load-time and battery-usage reasons.

    I mostly work on low-power embedded applications in C, and as such #4 is what I'm used to doing in these situations. It just seems like it should be far easier to gain access to static data structures in Java/Android. Perhaps I'm just being too battery-conscious, and in my single case I imagine the answer is that it doesn't matter much, but if every application took that stance it could begin to matter. What approaches do people usually take in this situation? Anything I missed?

  • Core data relationship memory leak

    - by cfihelp
    I have a strange (to me) memory leak when accessing an entity in a relationship. Series and Tiles have an inverse relationship to each other.

        // set up the fetch request
        NSFetchRequest *fetchRequest = [[NSFetchRequest alloc] init];
        NSEntityDescription *entity = [NSEntityDescription entityForName:@"Series"
                                                  inManagedObjectContext:managedObjectContext];
        [fetchRequest setEntity:entity];

        // grab all of the series in the core data store
        NSError *error = nil;
        availableSeries = [[NSArray alloc] initWithArray:[managedObjectContext executeFetchRequest:fetchRequest error:&error]];
        [fetchRequest release];

        // grab one of the series
        Series *currentSeries = [availableSeries objectAtIndex:1];

        // load all of the tiles attached to the series through the relationship
        NSArray *myTiles = [currentSeries.tile allObjects]; // 16 byte leak here!

    Instruments reports that the final line has a 16-byte leak caused by NSPlaceholderString. Stack trace:

        2  UIKit            UIApplicationMain
        3  UIKit            -[UIApplication _run]
        4  CoreFoundation   CFRunLoopRunInMode
        5  CoreFoundation   CFRunLoopRunSpecific
        6  GraphicsServices PurpleEventCallback
        7  UIKit            _UIApplicationHandleEvent
        8  UIKit            -[UIApplication sendEvent:]
        9  UIKit            -[UIApplication handleEvent:withNewEvent:]
        10 UIKit            -[UIApplication _runWithURL:sourceBundleID:]
        11 UIKit            -[UIApplication _performInitializationWithURL:sourceBundleID:]
        12 Memory           -[AppDelegate_Phone application:didFinishLaunchingWithOptions:] /Users/cfish/svnrepo/Memory/src/Memory/iPhone/AppDelegate_Phone.m:49
        13 UIKit            -[UIViewController view]
        14 Memory           -[HomeScreenController_Phone viewDidLoad] /Users/cfish/svnrepo/Memory/src/Memory/iPhone/HomeScreenController_Phone.m:58
        15 CoreData         -[_NSFaultingMutableSet allObjects]
        16 CoreData         -[_NSFaultingMutableSet willRead]
        17 CoreData         -[NSFaultHandler retainedFulfillAggregateFaultForObject:andRelationship:withContext:]
        18 CoreData         -[NSSQLCore retainedRelationshipDataWithSourceID:forRelationship:withContext:]
        19 CoreData         -[NSSQLCore newFetchedPKsForSourceID:andRelationship:]
        20 CoreData         -[NSSQLCore rawSQLTextForToManyFaultStatement:stripBindVariables:swapEKPK:]
        21 Foundation       +[NSString stringWithFormat:]
        22 Foundation       -[NSPlaceholderString initWithFormat:locale:arguments:]
        23 CoreFoundation   _CFStringCreateWithFormatAndArgumentsAux
        24 CoreFoundation   _CFStringAppendFormatAndArgumentsAux
        25 Foundation       _NSDescriptionWithLocaleFunc
        26 CoreFoundation   -[NSObject respondsToSelector:]
        27 libobjc.A.dylib  class_respondsToSelector
        28 libobjc.A.dylib  lookUpMethod
        29 libobjc.A.dylib  _cache_addForwardEntry
        30 libobjc.A.dylib  _malloc_internal

    I think I'm missing something obvious, but I can't quite figure out what. Thanks for your help!

    Update: I've copied the offending chunk of code to the first part of applicationDidFinishLaunching and it still leaks. Could there be something wrong with my model?

  • Efficient data importing?

    - by Kevin
    We work with a lot of real estate, and while rearchitecting how the data is imported, I came across an interesting issue.

    Firstly, the way our system works (loosely speaking) is that we run a ColdFusion process once a day that retrieves data provided by an IDX vendor via FTP. They push the data to us; whatever they send us is what we get. Over the years, this has proven to be rather unstable. I am rearchitecting it with PHP on the RETS standard, which uses SOAP methods of retrieving data and is already proving to be much better than what we had.

    When it comes to updating existing data, my initial thought was to query only for data that was updated. There is a 'Modified' field that tells you when a listing was last updated, and the code I have will grab any listing updated within the last 6 hours (giving myself a window in case something goes wrong). However, I see a lot of real estate developers suggest creating 'batch' processes that constantly run through all listings regardless of updated status.

    Is that the better way to do it? Or am I fine with just grabbing the data I know I need? It doesn't make a lot of sense to me to do more processing than necessary. Thoughts?

  • Core Data setReturnsDistinctResults not working

    - by Moze
    So I'm building a small application. It uses a Core Data database of ~25MB with 4 entities; it's for bus timetables. One entity, named "Stop", has ~1300 entries of bus stops with the attributes "name", "id", "longitude", "latitude", and a couple of relationships. There are many stops with the same name but different coordinates and id. Search is implemented using NSPredicate.

    I want to show all distinct stop names in a table view, so I'm using setReturnsDistinctResults: with NSDictionaryResultType and setPropertiesToFetch:. But setReturnsDistinctResults: is not working and I'm still getting all entries. Here's the code:

        - (NSFetchRequest *)fetchRequest {
            if (fetchRequest == nil) {
                fetchRequest = [[NSFetchRequest alloc] init];
                NSEntityDescription *entity = [NSEntityDescription entityForName:@"Stop"
                                                          inManagedObjectContext:managedObjectContext];
                [fetchRequest setEntity:entity];
                NSArray *sortDescriptors = [NSArray arrayWithObject:
                    [[[NSSortDescriptor alloc] initWithKey:@"name" ascending:YES] autorelease]];
                [fetchRequest setSortDescriptors:sortDescriptors];
                [fetchRequest setResultType:NSDictionaryResultType];
                [fetchRequest setPropertiesToFetch:
                    [NSArray arrayWithObject:[[entity propertiesByName] objectForKey:@"name"]]];
                [fetchRequest setReturnsDistinctResults:YES];
                DebugLog(@"fetchRequest initialized");
            }
            return fetchRequest;
        }

        - (NSFetchedResultsController *)fetchedResultsController {
            if (self.predicateString != nil) {
                self.predicate = [NSPredicate predicateWithFormat:@"name CONTAINS[cd] %@", self.predicateString];
                [self.fetchRequest setPredicate:predicate];
            } else {
                self.predicate = nil;
                [self.fetchRequest setPredicate:predicate];
            }
            fetchedResultsController = [[NSFetchedResultsController alloc]
                initWithFetchRequest:self.fetchRequest
                managedObjectContext:managedObjectContext
                  sectionNameKeyPath:sectionNameKeyPath
                           cacheName:nil];
            return fetchedResultsController;
        }

        - (UITableViewCell *)tableView:(UITableView *)table cellForRowAtIndexPath:(NSIndexPath *)indexPath {
            static NSString *CellIdentifier = @"Cell";
            UITableViewCell *cell = [tableView dequeueReusableCellWithIdentifier:CellIdentifier];
            if (cell == nil) {
                cell = [[[UITableViewCell alloc] initWithStyle:UITableViewCellStyleDefault
                                               reuseIdentifier:CellIdentifier] autorelease];
            }
            cell.textLabel.text = [[fetchedResultsController objectAtIndexPath:indexPath] valueForKey:@"name"];
            return cell;
        }

  • source of historical stock data

    - by rmeador
    I'm trying to make a stock market simulator (perhaps eventually growing into a predicting AI), but I'm having trouble finding data to use. I'm looking for a (hopefully free) source of historical stock market data. Ideally, it would be a very fine-grained (second or minute interval) data set with price and volume of every symbol on NASDAQ and NYSE (and perhaps others if I get adventurous). Does anyone know of a source for such info? I found this question which indicates Yahoo offers historical data in CSV format, but I've been unable to find out how to get it in a cursory examination of the site linked. I also don't like the idea of downloading the data piecemeal in CSV files... I imagine Yahoo would get upset and shut me off after the first few thousand requests. I also discovered another question that made me think I'd hit the jackpot, but unfortunately that OpenTick site seems to have closed its doors... too bad, since I think they were exactly what I wanted. I'd also be able to use data that's just open/close price and volume of every symbol every day, but I'd prefer all the data if I can get it. Any other suggestions?

  • Architecture for data layer that uses both localStorage and a REST remote server

    - by Zack
    Does anybody have ideas or references on how to implement a data persistence layer that uses both localStorage and a REST remote storage?

    - The data of a certain client is stored with localStorage (using an ember-data indexedDB adapter).
    - The locally stored data is synced with the remote server (using the ember-data RESTAdapter).
    - The server gathers all data from clients. Using mathematical set notation: Server = Client1 ∪ Client2 ∪ ... ∪ ClientN, where, in general, a record may not be unique to a certain client.

    Here are some scenarios:

    - A client creates a record. The id of the record cannot be set on the client, since it may conflict with a record stored on the server. Therefore a newly created record needs to be committed to the server, receive its id, and then be created in localStorage.
    - A record is updated on the server, and as a consequence the data in localStorage and on the server go out of sync. Only the server knows that, so the architecture needs to implement a push mechanism (?).

    Would you use 2 stores (one for localStorage, one for REST) and sync between them, or use a hybrid indexedDB/REST adapter and write the sync code within the adapter? Can you see any way to avoid implementing push (WebSockets, ...)?

  • $.ajax not loading data from the server every time

    - by Ted
    I have written a simple jQuery.ajax function which loads a user control from the server on click of a button. The first time I click the button, it goes to the server and gets me the user control. But each subsequent click of the same button does not goes to the server to fetch me the user control. Since my user control fetches data from db, I need to reload the user control everytime i hit the button. But if anyhow I get my user control to unload from the page, and re-click the button, it goes to the server and fetches me the user control. Here's the code: $("#btnLoad").click(function() { if ($(this).attr("value") == "Load Control") { $.ajax({ url: "AJAXHandler.ashx", data: { "lt": "loadcontrol" }, dataType: "html", success: function(data) { content.html(data); } }); $(this).attr("value", "Unload Control"); } else { $.ajax({ url: "AJAXHandler.ashx", data: { "lt": "unloadcontrol" }, dataType: "html", success: function(data) { content.html(data); } }); $(this).attr("value", "Load Control"); } }); Please let me know if there is any other way I can get my user control loaded from server everytime I click the button.

  • Rails nested models and data separation by scope

    - by jobrahms
    I have Teacher, Student, and Parent models that all belong to User. This is so that a Teacher can create Students and Parents that can or cannot log into the app depending on the teacher's preference. Student and Parent both accept nested attributes for User so a Student and User object can be created in the same form. All four models also belong to Studio so I can do data separation by scope. The current studio is set in application_controller.rb by looking up the current subdomain. In my students controller (all of my controllers, actually) I'm using @studio.students.new instead of Student.new, etc, to scope the new student to the correct studio, and therefore the correct subdomain. However, the nested User does not pick up the studio from its parent - it gets set to nil. I was thinking that I could do something like params[:student][:user_attributes][:studio_id] = @student.studio.id in the controller, but that would require doing attr_accessible :studio_id in User, which would be bad. How can I make sure that the nested User picks up the same scope that the Student model gets when it's created? student.rb class Student < ActiveRecord::Base belongs_to :studio belongs_to :user, :dependent => :destroy attr_accessible :user_attributes accepts_nested_attributes_for :user, :reject_if => :all_blank end students_controller.rb def create @student = @studio.students.new @student.attributes = params[:student] if @student.save redirect_to @student, :notice => "Successfully created student." else render :action => 'new' end end user.rb class User < ActiveRecord::Base belongs_to :studio accepts_nested_attributes_for :studio attr_accessible :email, :password, :password_confirmation, :remember_me, :studio_attributes devise :invitable, :database_authenticatable, :recoverable, :rememberable, :trackable end
