Search Results

Search found 59554 results on 2383 pages for 'distributed data mining'.

Page 45/2383

  • Which data structure would you use for a witness list?

    - by mateen
    I'm making a game whose plot is a bank robbery. Lots of people witness the robbery. The game will load a list of suspects, and the players (witnesses) will have to identify the suspects of this robbery. The game should load the list of suspects and let the player pick out a suspect as quickly as possible. An admin can add/remove suspects in the lists, and two or more lists of suspects can also be merged into one (to show to the player). The question is: which data structure would be suitable for these lists?
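    One simple option, sketched below in Java (the class and method names are invented for illustration, not taken from the question): back each suspect list with a LinkedHashSet, which gives constant-time add/remove/lookup, keeps a stable display order, and makes merging two lists a one-line addAll.

    ```java
    import java.util.LinkedHashSet;
    import java.util.Set;

    // Minimal sketch: a suspect list backed by a LinkedHashSet.
    // LinkedHashSet gives O(1) add/remove/contains, keeps insertion
    // order stable for display, and addAll() merges two lists cheaply.
    public class SuspectList {
        private final Set<String> suspects = new LinkedHashSet<>();

        public void add(String suspectId)         { suspects.add(suspectId); }
        public void remove(String suspectId)      { suspects.remove(suspectId); }
        public boolean contains(String suspectId) { return suspects.contains(suspectId); }

        // Merge another list into this one; duplicates collapse automatically.
        public void merge(SuspectList other) { suspects.addAll(other.suspects); }

        public static void main(String[] args) {
            SuspectList a = new SuspectList();
            SuspectList b = new SuspectList();
            a.add("suspect-17");
            b.add("suspect-42");
            b.add("suspect-17");
            a.merge(b);
            System.out.println(a.contains("suspect-42")); // true
        }
    }
    ```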

    Read the article

  • SQL Peer-to-Peer Dynamic Structured Data Processing Collaboration

    Unstructured and semi-structured XML data is now used more than structured data, but fixed structured data still keeps businesses running day in and day out, and it requires consistent, predictable, principled processing to produce correct results. For this reason, it would be very useful to have a general-purpose SQL peer-to-peer collaboration capability that applies principled hierarchical data processing, and its flexible and advanced structured processing, to dynamically structured data. This dynamic structured processing can change the structure of the data as required by the processing while preserving relational and hierarchical data principles. Such processing would operate freely across remote, unrelated peer locations at any time, and would transparently handle unpredictable or unknown structure and data-type changes automatically, for immediate processing, using automatic metadata maintenance.

    Read the article

  • Is there a data structure for this type of list/map?

    - by Nick
    Perhaps there's a name for what I want, but I'm not aware of it. I need something similar to a LinkedHashMap in Java, but one that returns the 'previous' value if there's no value at the specified key. That is, I have a list of objects stored by an integer key (which is in units of time in my case): 10 -> A, 15 -> B, 20 -> C. So, if I were to query for any key from 0 to 9, it would return null. The special part is that if I queried for some 10 <= i <= 14, it would return A. Or, for i = 20, it would return C. Is there a data structure for this?
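    In Java this is exactly what the NavigableMap API calls a floor lookup: TreeMap.floorEntry(key) returns the entry with the greatest key less than or equal to the query, or null if there is none. A minimal sketch using the values from the question:

    ```java
    import java.util.TreeMap;

    // Minimal sketch: TreeMap.floorEntry() returns the entry with the
    // greatest key <= the query key, or null if there is none.
    public class FloorLookupDemo {
        public static void main(String[] args) {
            TreeMap<Integer, String> byTime = new TreeMap<>();
            byTime.put(10, "A");
            byTime.put(15, "B");
            byTime.put(20, "C");

            System.out.println(byTime.floorEntry(5));              // null (no key <= 5)
            System.out.println(byTime.floorEntry(12).getValue());  // A
            System.out.println(byTime.floorEntry(20).getValue());  // C
        }
    }
    ```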

    Read the article

  • On a failing hard drive, I am able to view data but unable to copy it - why?

    - by Tom
    I have a 2.5" external hard drive that is failing. It's not making the expected 'clicking' noise that most hard drives and I am able to view the data, but I am unable to actually retrieve the data. I attempted to use SpinRite in order to access the data on the drive, but it didn't like the external drive. When I view the drive's property page, the drive shows that it's used space is at 100% and that it has 0 bytes available; however, the progress indicator under the drive icon in Windows Explorer shows that it's roughly 50% full (which is correct). When I attempt to run Windows' "Error Checking" tool and attempt to "scan for an attempt recovery of bad sectors," the tool begins to run then immediately closes with no error message. I am able to browse the contents of the drive using Windows Explorer. When I begin to try copying any given single file, the copy process begins, an indicator starts, and then the copy fails with no real error message. The Disk Management page in Computer Management under Control Panel also shows this drive has being 'Healthy.' I dropped the drive off at a data recovery store and they said that "The data seems to be intact, but an internal failure is preventing any information from being retrieved." They offered to provide me references to a data recovery specialist. I've also attempted to run CHKDSK on the drive (with and without arguments) but it returns the following error: The type of the filesystem is RAW. CHKDSK is not available for RAW drives. Before going the route of more expensive data recovery, I'm wondering if these symptoms sound familiar to anyone? Other questions... I'm willing to continue trying tools such as TestDisk and/or PhotoRec (as the majority of the data that I'd like to salvage are photos) but how long I should expect either tool to run given approximately 400GB of data? I'm also comfortable using Linux so I welcome any suggestions for utilities or tools and strategies with which you've had success.

    Read the article

  • Is this distributed database server idea feasible?

    - by David
    I often use SQLite for creating simple programs in companies. The database is placed on a file server. This works fine as long as there are not more than about 50 users working against the database concurrently (though it depends on the mix of reads and writes). Beyond that, users will notice a slowdown if there is a lot of concurrent writing, as lots of time is spent on locks, and there is nothing like a cache because there is no database server. The advantage of not needing a database server is that the time to set up something like a company Wiki or similar can be reduced from several months to just days. It often takes several months because the IT department needs to order the server, it needs to conform to company policies and security rules, and it needs to be placed at the outsourced server hosting facility, which screws up and places it in the wrong location, etc. Therefore, I thought of an idea to create a distributed database server. The process would be as follows: A user on a company computer edits something on a Wiki page (which uses this database as its backend); to do this he reads a file on the local hard disk stating the IP address of the last desktop computer to act as the database server. He then tries to contact this computer directly via TCP/IP. If it does not answer, he reads a file on the file server stating the IP address of the last desktop computer to act as the database server. If this server does not answer either, his own desktop computer becomes the database server and registers its IP address in the same file. The SQL update statement can then be executed, and other desktop computers can connect to it directly. The point of this architecture is that the higher the load, the better it will function, as each desktop computer will always know the IP address of the database server. Also, using this setup, I believe that a database placed on a file server could serve hundreds of desktop computers instead of the current 50 or so. I also do not believe that the load on the single desktop computer that has become the database server will ever be noticeable, as there will be no hard disk operations on that desktop, only on the file server. Is this idea feasible? Does it already exist? What kind of database could support such an architecture?
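    To make the proposed lookup concrete, here is a rough Java sketch of just the discovery step described above. The file names, port and timeout are invented for illustration, and it deliberately ignores the hard parts (locking the shared file, races when two clients elect themselves at once):

    ```java
    import java.io.IOException;
    import java.net.InetAddress;
    import java.net.InetSocketAddress;
    import java.net.Socket;
    import java.nio.charset.StandardCharsets;
    import java.nio.file.Files;
    import java.nio.file.Path;

    // Rough sketch of the discovery step: local cache file first, then the
    // shared file on the file server, otherwise elect ourselves.
    public class DbServerLocator {
        static final int DB_PORT = 9944;     // hypothetical port
        static final int TIMEOUT_MS = 500;   // hypothetical timeout

        static boolean isAlive(String host) {
            try (Socket s = new Socket()) {
                s.connect(new InetSocketAddress(host, DB_PORT), TIMEOUT_MS);
                return true;
            } catch (IOException e) {
                return false;
            }
        }

        static String readHost(Path file) {
            try {
                return Files.readString(file, StandardCharsets.UTF_8).trim();
            } catch (IOException e) {
                return null;
            }
        }

        /** Returns the host to talk to, possibly electing this machine. */
        static String locate(Path localCache, Path sharedFile) throws IOException {
            String host = readHost(localCache);
            if (host != null && isAlive(host)) return host;

            host = readHost(sharedFile);
            if (host != null && isAlive(host)) {
                Files.writeString(localCache, host);   // remember for next time
                return host;
            }

            // Nobody answered: this machine becomes the database server.
            String self = InetAddress.getLocalHost().getHostAddress();
            Files.writeString(sharedFile, self);
            Files.writeString(localCache, self);
            return self;
        }
    }
    ```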

    Read the article

  • Dynamic data-entry value store

    - by simendsjo
    I'm creating a data-entry application where users are allowed to create the entry schema. My first version of this just created a single table per entry schema, with each entry spanning a single column or multiple columns (for complex types) with the appropriate data type. This allowed for "fast" querying (on small datasets, as I didn't index all columns) and simple synchronization where the data entry was distributed across several databases. I'm not quite happy with this solution though; the only positive thing is the simplicity... I can only store a fixed number of columns. I need to create indexes on all columns. I need to recreate the table on schema changes. Some of my key design criteria are: Very fast querying (using a simple domain-specific query language). Writes don't have to be fast. Many concurrent users. Schemas will change often. Schemas might contain many thousands of columns. The data entries might be distributed and need synchronization. Preferably MySQL and SQLite; databases like DB2 and Oracle are out of the question. Using .Net/Mono. I've been thinking of a couple of possible designs, but none of them seems like a good choice. Solution 1: A union-like table containing a type column and one nullable column per type. This avoids joins, but will definitely use a lot of space. Solution 2: Key/value store. All values are stored as strings and converted when needed. This also uses a lot of space, and of course, I hate having to convert everything to strings. Solution 3: Use an XML database or store values as XML. Without any experience I would think this is quite slow (at least for the relational model, unless there is some very good XPath support). I would also like to avoid an XML database, as other parts of the application fit better with a relational model, and being able to join the data is helpful. I cannot help thinking that someone has solved (some of) this already, but I'm unable to find anything. Not quite sure what to search for either... I know market research does something like this for their questionnaires, but there are few open source implementations, and the ones I've found don't quite fit the bill. PSPP has much of the logic I'm thinking of: primitive column types, many columns, many rows, fast querying and merging. Too bad it doesn't work against a database. And of course, I don't need 99% of the provided functionality, but I do need a lot of things it doesn't include. I'm not sure this is the right place to ask such a design-related question, but I hope someone here has some tips, knows of any existing work, or can point me to a better place to ask such a question. Thanks in advance!
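    For reference, a minimal sketch of the row model behind Solution 1 (the union-like table): one row per entry/field pair, a type tag, and one nullable column per primitive type, so values keep their native type and remain indexable per column. All names here are invented for illustration; this is a sketch of the idea, not a recommendation among the three options.

    ```java
    // Minimal sketch of the "union-like" row from Solution 1: exactly one
    // of the typed fields is non-null, matching the 'kind' tag.
    public final class EntryValue {
        public enum Kind { INT, REAL, TEXT, DATE }

        public final long entryId;
        public final long fieldId;      // points at the user-defined schema field
        public final Kind kind;
        public final Long intValue;     // exactly one of these four is non-null,
        public final Double realValue;  // matching 'kind'
        public final String textValue;
        public final java.time.LocalDate dateValue;

        public EntryValue(long entryId, long fieldId, Kind kind,
                          Long i, Double r, String t, java.time.LocalDate d) {
            this.entryId = entryId; this.fieldId = fieldId; this.kind = kind;
            this.intValue = i; this.realValue = r; this.textValue = t; this.dateValue = d;
        }

        // Returns whichever typed column is populated for this row.
        public Object value() {
            switch (kind) {
                case INT:  return intValue;
                case REAL: return realValue;
                case DATE: return dateValue;
                default:   return textValue;
            }
        }
    }
    ```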

    Read the article

  • Investment advice data dump analysis

    - by portoalet
    For my year-end pet project, I'd like to analyze investment advice and its correlation with stock market performance. The problem is, where do I get a (free) dump of investment advice data? Something like the stackoverflow.com data dump would be nice. Or maybe it's easier to do distributed crawling of public finance web pages for investment advice? By investment advice I mean buy/sell recommendations for stocks/forex issued by an institution or investment advisor.

    Read the article

  • Is there any chance that my data will get silently corrupted with a robocopy SMB network transfer?

    - by Archagon
    I'm setting up a NAS box for the first time. At the moment, I have most of my data backed up to a few local hard drives, and I intend to transfer all the data to my NAS over Ethernet once the RAID array is set up. Since this is all happening over the network, I'm a bit worried about my data getting corrupted silently during the transfer. From what I understand, data generally doesn't get corrupted without notice on local transfers because a checksum is performed at some point by the drive or the OS. (This could be totally wrong.) Does the same thing happen with SMB, or is it up to the sender to check the integrity of their data? And if it doesn't happen with SMB, is there a protocol that does ensure data integrity? I know that rsync can checksum a transfer, but I'm on Windows and I already have a robocopy configuration that I like. Will my data be safe, or do I have to use an external checksum tool to make sure?
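    Whatever the answer for SMB itself, an external end-to-end check is cheap to do: hash the source file and the copy on the NAS and compare. A minimal Java sketch (the paths are placeholders; for a whole tree you would walk the directories and compare file by file):

    ```java
    import java.io.IOException;
    import java.io.InputStream;
    import java.nio.file.Files;
    import java.nio.file.Path;
    import java.security.MessageDigest;
    import java.security.NoSuchAlgorithmException;
    import java.util.HexFormat;

    // Minimal sketch: compute SHA-256 of the local file and the NAS copy,
    // then compare the two digests.
    public class VerifyCopy {
        static String sha256(Path file) throws IOException, NoSuchAlgorithmException {
            MessageDigest md = MessageDigest.getInstance("SHA-256");
            try (InputStream in = Files.newInputStream(file)) {
                byte[] buf = new byte[1 << 16];
                int n;
                while ((n = in.read(buf)) > 0) {
                    md.update(buf, 0, n);
                }
            }
            return HexFormat.of().formatHex(md.digest());
        }

        public static void main(String[] args) throws Exception {
            Path local = Path.of("D:/photos/IMG_0001.jpg");           // placeholder
            Path onNas = Path.of("//nas/backup/photos/IMG_0001.jpg"); // placeholder
            System.out.println(sha256(local).equals(sha256(onNas))
                    ? "match" : "MISMATCH - recopy this file");
        }
    }
    ```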

    Read the article

  • Freezes (not crashes) with GCD, blocks and Core Data

    - by Lukasz
    I have recently rewritten my Core Data driven database controller to use Grand Central Dispatch to manage fetching and importing in the background. The controller can operate on 2 NSManagedObjectContexts: NSManagedObjectContext *mainMoc, an instance variable for the main thread; this context is used only for quick access to data for the UI, by the main thread or via the dispatch_get_main_queue() global queue. NSManagedObjectContext *bgMoc, for background tasks (importing and fetching data for the NSFetchedResultsController for tables). These background tasks are fired ONLY on a user-defined queue: dispatch_queue_t bgQueue (an instance variable in the database controller object). Fetching data for tables is done in the background so as not to block the UI when bigger or more complicated predicates are evaluated. Example fetching code for the NSFetchedResultsController in my table view controllers: -(void)fetchData{ dispatch_async([CDdb db].bgQueue, ^{ NSError *error = nil; [[self.fetchedResultsController fetchRequest] setPredicate:self.predicate]; if (self.fetchedResultsController && ![self.fetchedResultsController performFetch:&error]) { NSSLog(@"Unresolved error in fetchData %@", error); } if (!initial_fetch_attampted)initial_fetch_attampted = YES; fetching = NO; dispatch_async(dispatch_get_main_queue(), ^{ [self.table reloadData]; [self.table scrollRectToVisible:CGRectMake(0, 0, 100, 20) animated:YES]; }); }); } // end of fetchData function bgMoc merges with mainMoc on save using NSManagedObjectContextDidSaveNotification: - (void)bgMocDidSave:(NSNotification *)saveNotification { // CDdb - bgMoc didsave - merging changes with main mainMoc dispatch_async(dispatch_get_main_queue(), ^{ [self.mainMoc mergeChangesFromContextDidSaveNotification:saveNotification]; // Extra notification for some other, potentially interested clients [[NSNotificationCenter defaultCenter] postNotificationName:DATABASE_SAVED_WITH_CHANGES object:saveNotification]; }); } - (void)mainMocDidSave:(NSNotification *)saveNotification { // CDdb - main mainMoc didSave - merging changes with bgMoc dispatch_async(self.bgQueue, ^{ [self.bgMoc mergeChangesFromContextDidSaveNotification:saveNotification]; }); } The NSFetchedResultsController delegate has only one method implemented (for simplicity): - (void)controllerDidChangeContent:(NSFetchedResultsController *)controller { dispatch_async(dispatch_get_main_queue(), ^{ [self fetchData]; }); } This way I am trying to follow the Apple recommendation for Core Data: 1 NSManagedObjectContext per thread. I know this pattern is not completely clean for at least 2 reasons: bgQueue does not necessarily fire the same thread after suspension, but since it is serial it should not matter much (there are never 2 threads trying to access the bgMoc NSManagedObjectContext dedicated to it). Sometimes table view data source methods will ask the NSFetchedResultsController for info from bgMoc (since the fetch is done on bgQueue), like section counts, fetched object counts per section, etc. Even with these flaws this approach works pretty well for about 95% of the application's running time, until... AND HERE GOES MY QUESTION: Sometimes, very randomly, the application freezes but does not crash. It does not respond to any touch, and the only way to get it back to life is to restart it completely (switching to the background and back does not help). No exception is thrown and nothing is printed to the console (I have breakpoints set for all exceptions in Xcode).
I have tried to debug it using Instruments (the Time Profiler especially) to see if there is something heavy going on on the main thread, but nothing shows up. I am aware that GCD and Core Data are the main suspects here, but I have no idea how to track down / debug this. Let me point out that this also happens when I dispatch all the tasks to the queues asynchronously (using dispatch_async everywhere), which makes me think it is not just a standard deadlock. Are there any hints on how I could get more information about what is going on? Some extra debug flags, Instruments tricks, build settings, etc.? Any suggestions on what the cause could be are very much appreciated, as well as (or) pointers on how to implement background fetching for NSFetchedResultsController and background importing in a better way.

    Read the article

  • Tile Engine - Procedural generation, Data structures, Rendering methods - A lot of effort question!

    - by Trixmix
    Isometric tile and GameObject rendering. To achieve the desired look I need to take into consideration which tiles need to be drawn first and which last. What I used is an object, TileRenderQueue, that you give a tile list and that gives you back a queue saying which ones to draw, based on their Z coordinate, so that if the Z is higher it needs to be drawn last. Now, as mentioned above, instead of the location data being stored in the tile instance, I want the index in the array to be the location, and then maybe I could draw the tiles based on the array instead of spending a long time looping over them and ordering them by Z. This is the hardest part for me; it's hard to find a simple solution to the which-one-to-draw-when problem. There is also the fact that the game object with the larger X needs to be drawn over the rest of the tiles, and so on. Here is an example: All the parts work together to create an efficient engine, so it's important to me that you answer all of the parts. I hope you will work as hard on the answers as I worked on this question! If any part is unclear, tell me so in the comments! Thanks for the help!
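    For the draw-order part alone, a common starting point is a painter's-algorithm sort: order by Z layer first, then by x + y within a layer so objects nearer the viewer are painted last. A minimal Java sketch, with an invented Renderable type standing in for tiles and game objects:

    ```java
    import java.util.ArrayList;
    import java.util.Comparator;
    import java.util.List;

    // Minimal sketch of a painter's-algorithm sort for isometric rendering:
    // lower Z layers draw first; within a layer, smaller (x + y) draws first
    // so objects "in front" are painted over those behind them.
    public class RenderQueue {
        public static class Renderable {
            final int x, y, z;
            Renderable(int x, int y, int z) { this.x = x; this.y = y; this.z = z; }
        }

        public static List<Renderable> sortForDrawing(List<Renderable> items) {
            List<Renderable> queue = new ArrayList<>(items);
            queue.sort(Comparator
                    .comparingInt((Renderable r) -> r.z)
                    .thenComparingInt(r -> r.x + r.y));
            return queue;
        }
    }
    ```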

    Read the article

  • How do I express subtle relationships in my data?

    - by Chuck H
    "A" is related to "B" and "C". How do I show that "B" and "C" might, by this context, be related as well? Example: Here are a few headlines about a recent Broadway play: 1 - David Mamet's Glengarry Glen Ross, Starring Al Pacino, Opens on Broadway 2 - Al Pacino in 'Glengarry Glen Ross': What did the critics think? 3 - Al Pacino earns lackluster reviews for Broadway turn 4 - Theater Review: Glengarry Glen Ross Is Selling Its Stars Hard 5 - Glengarry Glen Ross; Hey, Who Killed the Klieg Lights? Problem: Running a fuzzy-string match over these records will establish some relationships, but not others, even though a human reader could pick them out from context in much larger datasets. How do I find the relationship that suggests #3 is related to #4? Both of them can be easily connected to #1, but not to each other. Is there a (Googlable) name for this kind of data or structure? What kind of algorithm am I looking for? Goal: Given 1,000 headlines, a system that automatically suggests that these 5 items are all probably about the same thing. To be honest, it's been so long since I've programmed I'm at a loss how to properly articulate this problem. (I don't know what I don't know, if that makes sense). This is a personal project and I'm writing it in Python. Thanks in advance for any help, advice, and pointers!

    Read the article

  • Hidden Gems: Accelerating Oracle Data Integrator with SOA, Groovy, SDK, and XML

    - by Alex Kotopoulis
    On the last day of Oracle OpenWorld, we had a final advanced session on getting the most out of Oracle Data Integrator through the use of various advanced techniques. The primary way to improve your ODI processes is to choose the optimal knowledge modules for your load and take advantage of the optimized tools of your database, such as Oracle Data Pump and similar mechanisms in other databases. Knowledge modules also allow you to customize tasks, letting you codify best practices that are consistently applied by all integration developers. The ODI SDK is another very powerful means to automate and speed up your integration development process. It allows you to automate Life Cycle Management, code comparison, repetitive code generation, and changes to your integration projects. The SDK is easily accessible through Java or scripting languages such as Groovy and Jython. Finally, all Oracle Data Integration products provide services that can be integrated into a larger Service Oriented Architecture. This moves data integration from an isolated environment to being an agile part of a larger business process environment. All Oracle data integration products can play a part in this: Oracle GoldenGate can integrate into business event streams by processing JMS queues or publishing new events based on database transactions. Oracle Data Integrator allows full control of its runtime sessions through web services, so that integration jobs can become part of business processes. Oracle Data Service Integrator provides a data virtualization layer over your distributed sources, allowing unified reading and updating of heterogeneous data without replicating and moving data. Oracle Enterprise Data Quality provides data quality services to cleanse and deduplicate your records through web services.

    Read the article

  • What's the proper term for a function inverse to a constructor - to unwrap a value from a data type?

    - by Petr Pudlák
    Edit: I'm rephrasing the question a bit. Apparently I caused some confusion because I didn't realize that the term destructor is used in OOP for something quite different - it's a function invoked when an object is being destroyed. In functional programming we (try to) avoid mutable state, so there is no equivalent to it. (I added the proper tag to the question.) Instead, I've seen that the record field for unwrapping a value (especially for single-valued data types such as newtypes) is sometimes called a destructor or perhaps a deconstructor. For example, let's have (in Haskell): newtype Wrap = Wrap { unwrap :: Int } Here Wrap is the constructor and unwrap is what? The questions are: What do we call unwrap in functional programming? Deconstructor? Destructor? Or some other term? And to clarify, is this or other terminology applicable to other functional languages, or is it used only in Haskell? Perhaps also, is there any terminology for this in general, in non-functional languages? I've seen both terms, for example: ... Most often, one supplies smart constructors and destructors for these to ease working with them. ... at the Haskell wiki, or ... The general theme here is to fuse constructor - deconstructor pairs like ... in the Haskell wikibook (here it's probably meant in a bit more general sense), or newtype DList a = DL { unDL :: [a] -> [a] } The unDL function is our deconstructor, which removes the DL constructor. ... in Real World Haskell.

    Read the article

  • How to optimize Core Data query for full text search

    - by dk
    Can I optimize a Core Data query when searching for matching words in a text? (This question also pertains to the wisdom of custom SQL versus Core Data on an iPhone.) I'm working on a new (iPhone) app that is a handheld reference tool for a scientific database. The main interface is a standard searchable table view and I want as-you-type response as the user types new words. Words matches must be prefixes of words in the text. The text is composed of 100,000s of words. In my prototype I coded SQL directly. I created a separate "words" table containing every word in the text fields of the main entity. I indexed words and performed searches along the lines of SELECT id, * FROM textTable JOIN (SELECT DISTINCT textTableId FROM words WHERE word BETWEEN 'foo' AND 'fooz' ) ON id=textTableId LIMIT 50 This runs very fast. Using an IN would probably work just as well, i.e. SELECT * FROM textTable WHERE id IN (SELECT textTableId FROM words WHERE word BETWEEN 'foo' AND 'fooz' ) LIMIT 50 The LIMIT is crucial and allows me to display results quickly. I notify the user that there are too many to display if the limit is reached. This is kludgy. I've spent the last several days pondering the advantages of moving to Core Data, but I worry about the lack of control in the schema, indexing, and querying for an important query. Theoretically an NSPredicate of textField MATCHES '.*\bfoo.*' would just work, but I'm sure it will be slow. This sort of text search seems so common that I wonder what is the usual attack? Would you create a words entity as I did above and use a predicate of "word BEGINSWITH 'foo'"? Will that work as fast as my prototype? Will Core Data automatically create the right indexes? I can't find any explicit means of advising the persistent store about indexes. I see some nice advantages of Core Data in my iPhone app. The faulting and other memory considerations allow for efficient database retrievals for tableview queries without setting arbitrary limits. The object graph management allows me to easily traverse entities without writing lots of SQL. Migration features will be nice in the future. On the other hand, in a limited resource environment (iPhone) I worry that an automatically generated database will be bloated with metadata, unnecessary inverse relationships, inefficient attribute datatypes, etc. Should I dive in or proceed with caution?
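    One small note on the prefix range used above: BETWEEN 'foo' AND 'fooz' excludes any word that sorts after 'fooz' (such as 'foozz'). A common alternative is the half-open range word >= 'foo' AND word < 'fop', where the upper bound is the prefix with its last character incremented. A tiny sketch of that bound calculation (shown in Java only to illustrate the string logic, independent of the SQL-versus-Core-Data choice):

    ```java
    // Minimal sketch: build the exclusive upper bound for a prefix search,
    // i.e. all words w with  prefix <= w < upperBound(prefix)  start with prefix.
    // Assumes a non-empty prefix whose last character is not the maximum char.
    public class PrefixRange {
        public static String upperBound(String prefix) {
            int last = prefix.length() - 1;
            char bumped = (char) (prefix.charAt(last) + 1);
            return prefix.substring(0, last) + bumped;
        }

        public static void main(String[] args) {
            String prefix = "foo";
            // e.g. SELECT ... WHERE word >= ? AND word < ?   bound to:
            System.out.println(prefix);              // foo
            System.out.println(upperBound(prefix));  // fop
        }
    }
    ```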

    Read the article

  • Excel tables creation upon MySQL data import (new feature in MySQL for Excel 1.2.x)

    - by Javier Treviño
    In this blog post we are going to talk about one of the features included since MySQL for Excel 1.2.0. You can install the latest GA or maintenance version using the MySQL Installer, or optionally you can download any GA or non-GA version directly from the MySQL Developer Zone. Remember how easy it is to dump data from a MySQL table, view or stored procedure to an Excel worksheet? (If you don't, you can check out this other post: How To - Guide to Importing Data from a MySQL Database to Excel using MySQL for Excel.) In version 1.2.0 we introduced some advanced options for the Import MySQL Data operation regarding Excel tables. The Advanced Options dialog shown above is accessible from any Import Data dialog. When the Create an Excel table for the imported MySQL table data option is checked (which it is by default), MySQL for Excel will create an Excel table (also known in Excel jargon as a ListObject) from the Excel range containing the imported MySQL data. This "little feature" enables right-away use of the Excel table in data analysis, such as including it for summarization in a PivotTable, adding a summarization row at the end of the table's data, or sorting and filtering the table's data by clicking the drop-down button next to each column's header, among other actions. The Excel tables that are created automatically from imported MySQL data will have a name like [UserPrefix].<SchemaName>.<DbObjectName> for tables and views, and [UserPrefix].<SchemaName>.<ProcedureName>.<ResultSetName> for stored procedures. Notice that the first piece of the name is an optional [UserPrefix]; the prefix is only used if the Prefix Excel tables with the following text option is checked. The suggested prefix is "MySQL" but it can be changed to whatever text suits you. Excel tables must have a table style so they are easily identified. There are a lot of predefined Excel table styles; by default the MySqlDefault style is applied, which is the style you have seen applied to imported data for Edit Sessions, and which adds simple and elegant formatting to the table. If you wish to change it to any of the predefined Excel table styles, you can do so through the drop-down list in the Use style [[styles drop-down]] for the new Excel table option. Excel tables are the basic building blocks for data analysis or self-service Business Intelligence using other, more advanced Excel tools like Power Pivot, Power View or Power Map. This feature lets you put imported MySQL data to use in those more advanced ways. We hope you give this and the other new features in the 1.2.x version family a try! Remember that your feedback is very important to us, so drop us a message and follow us: MySQL on Windows (this) Blog: https://blogs.oracle.com/MySqlOnWindows/ MySQL for Excel forum: http://forums.mysql.com/list.php?172 Facebook: http://www.facebook.com/mysql YouTube channel: https://www.youtube.com/user/MySQLChannel Cheers!

    Read the article

  • How to recover data files from xampp-windows to xampp-linux after crash?

    - by David Buehler
    My Windows box died after I developed a database in xampp on it; fortunately I have a backup of the entire F:/TestWeb/Xampp partition. Unfortunately, I did not do an Export (nor a dump) of the "Lws2" database before the crash. I have replaced the defunct machine with one running Mint 7 (based on Ubuntu 9.04 "Jaunty Jackalope") and installed xampp-linux into the /opt partition, so the new xampp now runs fine in /opt/lampp and says all the elements are secured by passwords (which I just assigned during this installation). I assumed that Xampp-Windows installed in November would migrate easily to xampp-linux installed in February -- a bad assumption. It apparently would have been simple if I had known enough to do an Export or a Dump before the crash, but... The backup was done to a Network Attached Storage drive, which is formatted as "vfat", so the backup does not carry with it any valid ownership permissions from MySql on NTFS. I now see from my backup that the old data resided in \TestWeb\Xampp\Mysql\Data\Lws2\ and consists of 7 ".frm" files which define my tables. The actual data -- I suppose a ".sql" file or files -- has disappeared, and I am resigning myself to two days of retyping it. But I do not wish to do the table layouts all over again. So I copied the Data tree to /opt/lampp/Data -- PhpMyAdmin does not see it. So I copied the Lws2 tree to /opt/lampp/Lws2 -- PhpMyAdmin does not see it. So I copied the Data tree to /opt/lampp/var/mysql/Data -- PhpMyAdmin does not see it. So I copied the Lws2 tree to /opt/lampp/var/mysql/Lws2 -- PhpMyAdmin does not see it. So I adjusted all the permissions to stop saying owner "nobody" and to say owner "root", and gave full permissions to all groups and to all others, with permissions percolating down, in all 4 trees. You guessed it -- PhpMyAdmin does not see any database named Lws2, only its 4 default ones. I double-checked the permissions, rebooted Linux, and repeated the tests. At some point in that process I did see PhpMyAdmin showing "lws2(7)", but when I clicked on it I saw a "no table found" message. I have not been able to recreate that experience. Apparently there are some setup files for MySql and for PhpMyAdmin which need to be set up by running a wizard or two or by editing the files directly. I grepped the TestWeb tree and found an old "ldir = "C:TestWeb\Xampp\MySql\" and a "DataDir = C:TestWeb\Xampp\MySql\" in a .php file and in a .bat file, but I cannot find the corresponding config file names on the /opt partition -- so it looks as if these wizards have not been run to create them. What config files does Linux use to set up MySql and PhpMyAdmin? What wizards do I need to run to point the MySql engine and PhpMyAdmin at the folder /opt/lampp/data/ with its lws2 folder inside it? Or which files do I need to edit, with a sample of what they normally say under Linux? Incidentally, I remember I converted from MyISAM with its .MYD and .MYI files to InnoDB after entering only a small amount of the data -- and I do not know what file types to look for -- perhaps my data is still there but under another guise or in another place? Is it something as simple as Linux needing to see "/data/" instead of "/Data"? I will check that out while waiting for a response. If anyone can point me to documentation that discusses this level of detail -- I will read it avidly! In any case, thanks for any clarification you can give on this thorny problem. wizdum

    Read the article

  • Why should I write a commit message?

    - by acidzombie24
    Why should I write a commit message? I don't want to, and I think it's stupid every single time. A GUI frontend I use, which will go unnamed, forces you to do it. I hear others doing it every time even if they are using the VCS on the command line. If I commit several times a day and haven't finished a feature, what am I writing about? I ONLY ever write a message after many commits, when I feel it's time for a mini tag, or when I do an actual tag. Am I right or am I missing something? Also, I am using a distributed system.

    Read the article

  • Storing game objects with generic object information

    - by Mick
    In a simple game object class, you might have something like this: public abstract class GameObject { protected String name; // other properties protected double x, y; public GameObject(String name, double x, double y) { // etc } // setters, getters } I was thinking, since a lot of game objects (ex. generic monsters) will share the same name, movement speed, attack power, etc, it would be better to have all that information shared between all monsters of the same type. So I decided to have an abstract class "ObjectData" to hold all this shared information. So whenever I create a generic monster, I would use the same pre-created "ObjectData" for it. Now the above class becomes more like this: public abstract class GameObject { protected ObjectData data; protected double x, y; public GameObject(ObjectData data, double x, double y) { // etc } // setters, getters public String getName() { return data.getName(); } } So to tailor this specifically for a Monster (could be done in a very similar way for Npcs, etc), I would add 2 classes. Monster which extends GameObject, and MonsterData which extends ObjectData. Now I'll have something like this: public class Monster extends GameObject { public Monster(MonsterData data, double x, double y) { super(data, x, y); } } This is where my design question comes in. Since MonsterData would hold data specific to a generic monster (and would vary with what say NpcData holds), what would be the best way to access this extra information in a system like this? At the moment, since the data variable is of type ObjectData, I'll have to cast data to MonsterData whenever I use it inside the Monster class. One solution I thought of is this, but this might be bad practice: public class Monster extends GameObject { private MonsterData data; // <- this part here public Monster(MonsterData data, double x, double y) { super(data, x, y); this.data = data; // <- this part here } } I've read that for one I should generically avoid overwriting the underlying classes variables. What do you guys think of this solution? Is it bad practice? Do you have any better solutions? Is the design in general bad? How should I redesign this if it is? Thanks in advanced for any replies, and sorry about the long question. Hopefully it all makes sense!

    Read the article

  • Core Data migration problem: "Persistent store migration failed, missing source managed object model"

    - by John Gallagher
    The Background: A Cocoa non-document Core Data project with two Managed Object Models. Model 1 stays the same. Model 2 has changed, so I want to migrate the store. I've created a new version via Design > Data Model > Add Model Version in Xcode. The difference between versions is a single relationship that's been changed from a to-one to a to-many. I've made my changes to the model, then saved. I've made a new Mapping Model that has the old model as its source and the new model as its destination. I've ensured all Mapping Models and Data Models are being compiled and all are copied to the Resources folder of my app bundle. I've switched on migrations by passing in a dictionary with the NSMigratePersistentStoresAutomaticallyOption key set to [NSNumber numberWithBool:YES] when adding the Persistent Store. Rather than merging all models in the bundle, I've specified the two models I want to use (model 1 and the new version of model 2) and merged them using modelByMergingModels:. The Problem: No matter what I do to migrate, I get the error message: "Persistent store migration failed, missing source managed object model." What I've Tried: I clean after every single build. I've tried various combinations of having only the model I'm migrating to in Resources, being compiled, or both. Since the error message implies it can't find the source model for my migration, I've tried having every version of the model both in the Resources folder and being compiled. I've made sure I'm not making a really basic error by switching back to the original version of my data model; the app runs fine. I've deleted the Mapping Model and the new version of the model, cleaned, then recreated both. I've tried making a different change in the new model - deleting an entity instead. I'm at my wits' end. I can't help but think I've made a huge mistake somewhere that I'm not seeing. Any ideas?

    Read the article

  • How to best transfer large payloads of data using wsHttp with WCF with message security

    - by jpierson
    I have a case where I need to transfer large amounts of serialized object graphs (via NetDataContractSerializer) using WCF with wsHttp. I'm using message security and would like to continue to do so. With this setup I would like to transfer a serialized object graph which can sometimes approach around 300MB or so, but when I try to do so I've started seeing an exception of type System.InsufficientMemoryException appear. After a little research, it appears that by default in WCF the result of a service call is contained within a single message containing the serialized data, and this data is buffered on the server until the whole message is completely written. Thus the memory exception is caused by the fact that the server is running out of the memory resources it is allowed to allocate, because that buffer is full. The two main recommendations that I've come across are to use streaming or chunking to solve this problem; however, it is not clear to me what that involves or whether either solution is possible with my current setup (wsHttp/NetDataContractSerializer/Message Security). So far I understand that with streaming, message security would not work because message encryption and decryption need to operate on the whole set of data and not a partial message. Chunking, however, sounds like it might be possible, but it is not clear to me how it would be done with the other constraints that I've listed. If anybody could offer some guidance on what solutions are available and how to go about implementing them, I would greatly appreciate it. Related resources: Chunking Channel How to: Enable Streaming Large attachments over WCF Custom Message Encoder Another spotting of InsufficientMemoryException I'm also interested in any type of compression that could be done on this data, but it looks like I would probably be best off doing this at the transport level once I can transition to .NET 4.0, so that the client will automatically support the gzip headers, if I understand this properly.

    Read the article

  • How to import data to SAP

    - by Mehmet AVSAR
    Hi, as a complete stranger in the town of SAP, I want to transfer my own application's (mobile salesforce automation) data to SAP. My application has records of customers, stocks, inventory, invoices (and waybills), cheques, payments, collections, stock transfer data, etc. I have an additional database which holds matchings of records, i.e. a customer with ID 345 in my application has key 120-035-0223 in SAP. Every record, for sure, has to know its counterpart, including parameters. After searching Google and the SAP help site for a day, I discovered that it's going to be a bit more painful than I expected. Especially the SAP site does not give even a clue about it; let's say I couldn't find anything. We have transferred our data to some other ERP systems, some of which wanted XML files and some of which exposed their APIs. My point is: is SQL Server's SSIS an option for me? I hope it is, so I can fight on my own territory. Since client requests vary a lot, I count flexibility as the most important criterion. Also, I want to transfer as much data as I can. Any help is appreciated. Regards,

    Read the article

  • Image Wells, Core Data, and Sqlite files.

    - by Sway
    I've got a Mac application that I've developed. I use it to create sqlite files that are bundled with my iPhone app. The Mac app uses Core Data and bindings and is working fine except for one "weird" issue. I use an NSImageView (or Image Well) to allow me to drag and drop jpg files. This is bound through to an optional binary attribute in my model class. For some reason, when I drag and drop a 4k jpg file onto the image well and save the sqlite file, the data saved to the binary column is over 15 times larger than it should be, whereas if I use an application like SQLiteManager and add the image into the row in the database, the binary data is the correct (expected) size. File: 4k jpg. Actual size: 2371. Persisted via Core Data size: 35810. Can anyone give me a suggestion as to why this might be happening? Do I need to set some setting in Interface Builder or write some custom code?

    Read the article

  • Accessing data stored in another unit Delphi

    - by Hendriksen123
    In Unit2 of my program I have the following code: TValue = Record NewValue, OldValue, SavedValue : Double; end; TData = Class(TObject) Public EconomicGrowth : TValue; Inflation : TValue; Unemployment : TValue; CurrentAccountPosition : TValue; AggregateSupply : TValue; AggregateDemand : TValue; ADGovernmentSpending : TValue; ADConsumption : TValue; ADInvestment : TValue; ADNetExports : TValue; OverallTaxation : TValue; GovernmentSpending : TValue; InterestRates : TValue; IncomeTax : TValue; Benefits : TValue; TrainingEducationSpending : TValue; End; I then declare Data : TData in the var section. However, when I try to do the following in Unit1: ShowMessage(FloatToStr(Unit2.Data.Inflation.SavedValue)); I get an EAccessViolation message. Is there any way to access the data stored in 'Data' from Unit1 without getting errors?

    Read the article

  • How to show data in a dataTable as label: data in JSF

    - by palakolanusrinu
    Hi, how do I display "label: data" (for example, "name: srinu") in multiple rows using a dataTable in JSF? Right now I'm getting it like | label | data |, i.e. "data srinu", but I want it in this format: "label: data", e.g. "name: srinu". The code I used is <h:dataTable id="fundInfo" value="#{clientFundInfo}" border="1" var="client" first="0" rows="5" rules="all"> <h:column> <h:outputText value="CLIENT:"/> <h:outputText value="#{client.clientName}"></h:outputText> </h:column> <h:column> <h:outputText value="FUND:"/> <h:outputText value="#{client.fundName}"></h:outputText> </h:column> <h:column> <h:outputText value="Employer Identification Number:"/> <h:outputText value="#{client.empIdentificationNum}"></h:outputText> </h:column> <h:column><h:outputText value="FISCAL YEAR ENDED:"/> <h:outputText value="#{client.fye}"></h:outputText> </h:column> <h:column><h:outputText value="Shares Outstanding"/> <h:outputText value="#{client.sharesOutstanding}"></h:outputText> </h:column> </h:dataTable>

    Read the article

  • Syncing two separate structures to the same master data

    - by Mike Burton
    I've got multiple structures to maintain in my application. All link to the same records, and one of them could be considered the "master" in that it reflects actual relationships held in files on disk. The other structures are used to "call out" elements of the main design for purchase and work orders. I'm struggling to come up with a pattern that deals appropriately with changes to the master data. As an example, the following trees might refer to the same data: A |_ B |_ C |_ D |_ E |_ B |_ C |_ D A |_ B E C |_ D A |_ B C D E These secondary structures follow internal rules, but their overall structure is usually user-determined. In all cases (including the master), any element can be used in multiple locations and in multiple trees. When I add a child to any element in the tree, I want to either automatically build the secondary structure for each instance of the "master" element or at least advertise the situation to the user and allow them to manually generate the data required for the secondary trees. Is there any pattern which might apply to this situation? I've been treating it as a view problem, but it turns out to be more complicated than that when you look at the initial generation of the data.

    Read the article
