Search Results

Search found 59643 results on 2386 pages for 'data migration'.


  • What's the proper term for a function inverse to a constructor - to unwrap a value from a data type?

    - by Petr Pudlák
    Edit: I'm rephrasing the question a bit. Apparently I caused some confusion because I didn't realize that the term destructor is used in OOP for something quite different - it's a function invoked when an object is being destroyed. In functional programming we (try to) avoid mutable state, so there is no equivalent to it. (I added the proper tag to the question.) Instead, I've seen that the record field for unwrapping a value (especially for single-valued data types such as newtypes) is sometimes called a destructor or perhaps a deconstructor. For example, let's have (in Haskell):

        newtype Wrap = Wrap { unwrap :: Int }

    Here Wrap is the constructor, and unwrap is what? The questions are: What do we call unwrap in functional programming? Deconstructor? Destructor? Or some other term? And to clarify, is this (or other) terminology applicable to other functional languages, or is it used just in Haskell? Perhaps also, is there any terminology for this in general, in non-functional languages? I've seen both terms, for example:

        ... Most often, one supplies smart constructors and destructors for these to ease working with them. ...

    at the Haskell wiki, or

        ... The general theme here is to fuse constructor - deconstructor pairs like ...

    at the Haskell wikibook (here it's probably meant in a bit more general sense), or

        newtype DList a = DL { unDL :: [a] -> [a] }

        ... The unDL function is our deconstructor, which removes the DL constructor. ...

    in Real World Haskell.
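    For concreteness, a minimal runnable Haskell sketch of the two ways to get the Int back out; the names are the ones from the post, and the pattern-match version is the equivalent that needs no accessor at all:

        newtype Wrap = Wrap { unwrap :: Int }

        -- The record field doubles as an ordinary function Wrap -> Int:
        n1 :: Int
        n1 = unwrap (Wrap 42)

        -- Pattern matching deconstructs the value without naming an accessor:
        n2 :: Int
        n2 = case Wrap 42 of Wrap n -> n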


  • How to optimize Core Data query for full text search

    - by dk
    Can I optimize a Core Data query when searching for matching words in a text? (This question also pertains to the wisdom of custom SQL versus Core Data on an iPhone.) I'm working on a new (iPhone) app that is a handheld reference tool for a scientific database. The main interface is a standard searchable table view and I want as-you-type response as the user types new words. Word matches must be prefixes of words in the text. The text is composed of 100,000s of words. In my prototype I coded SQL directly. I created a separate "words" table containing every word in the text fields of the main entity. I indexed words and performed searches along the lines of

        SELECT id, * FROM textTable
        JOIN (SELECT DISTINCT textTableId FROM words
              WHERE word BETWEEN 'foo' AND 'fooz') ON id=textTableId
        LIMIT 50

    This runs very fast. Using an IN would probably work just as well, i.e.

        SELECT * FROM textTable
        WHERE id IN (SELECT textTableId FROM words
                     WHERE word BETWEEN 'foo' AND 'fooz')
        LIMIT 50

    The LIMIT is crucial and allows me to display results quickly. I notify the user that there are too many to display if the limit is reached. This is kludgy. I've spent the last several days pondering the advantages of moving to Core Data, but I worry about the lack of control over the schema, indexing, and querying for an important query. Theoretically an NSPredicate of textField MATCHES '.*\bfoo.*' would just work, but I'm sure it will be slow. This sort of text search seems so common that I wonder what the usual attack is. Would you create a words entity as I did above and use a predicate of "word BEGINSWITH 'foo'"? Will that work as fast as my prototype? Will Core Data automatically create the right indexes? I can't find any explicit means of advising the persistent store about indexes. I see some nice advantages of Core Data in my iPhone app. The faulting and other memory considerations allow for efficient database retrievals for tableview queries without setting arbitrary limits. The object graph management allows me to easily traverse entities without writing lots of SQL. Migration features will be nice in the future. On the other hand, in a limited-resource environment (iPhone) I worry that an automatically generated database will be bloated with metadata, unnecessary inverse relationships, inefficient attribute datatypes, etc. Should I dive in or proceed with caution?
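    For reference, a sketch of the prototype's supporting schema implied by the queries above; the table and column names are taken from the post, and the index on word is what makes the BETWEEN prefix trick fast:

        CREATE TABLE words (
            textTableId INTEGER NOT NULL,  -- FK back to the main entity
            word        TEXT    NOT NULL   -- one row per word occurrence
        );
        CREATE INDEX idx_words_word ON words (word);

        -- Prefix search: every word w with 'foo' <= w < 'fooz' starts with
        -- 'foo' (assuming no words sort past 'z' in the relevant position).
        SELECT * FROM textTable
        WHERE id IN (SELECT textTableId FROM words
                     WHERE word BETWEEN 'foo' AND 'fooz')
        LIMIT 50;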


  • Dynamic data-entry value store

    - by simendsjo
    I'm creating a data-entry application where users are allowed to create the entry schema. My first version just created a single table per entry schema, with each entry spanning one or more columns (for complex types) with the appropriate data type. This allowed for "fast" querying (on small datasets, as I didn't index all columns) and simple synchronization where the data entry was distributed across several databases. I'm not quite happy with this solution though; the only positive thing is the simplicity... I can only store a fixed number of columns. I need to create indexes on all columns. I need to recreate the table on schema changes. Some of my key design criteria are:

        - Very fast querying (using a simple domain-specific query language)
        - Writes don't have to be fast
        - Many concurrent users
        - Schemas will change often
        - Schemas might contain many thousand columns
        - The data entries might be distributed and need synchronization
        - Preferably MySQL and SQLite - databases like DB2 and Oracle are out of the question
        - Using .Net/Mono

    I've been thinking of a couple of possible designs, but none of them seems like a good choice. Solution 1: A union-like table containing a Type column and one nullable column per type. This avoids joins, but will definitely use a lot of space. Solution 2: A key/value store. All values are stored as strings and converted when needed. This also uses a lot of space, and of course, I hate having to convert everything to strings. Solution 3: Use an XML database or store values as XML. Without any experience I would think this is quite slow (at least for the relational model, unless there is some very good XPath support). I would also like to avoid an XML database, as other parts of the application fit better as a relational model, and being able to join the data is helpful. I cannot help thinking that someone has solved (some of) this already, but I'm unable to find anything. Not quite sure what to search for either... I know market research does something like this for their questionnaires, but there are few open source implementations, and the ones I've found don't quite fit the bill. PSPP has much of the logic I'm thinking of: primitive column types, many columns, many rows, fast querying and merging. Too bad it doesn't work against a database... And of course, I don't need 99% of the provided functionality, but a lot of stuff not included. I'm not sure this is the right place to ask such a design-related question, but I hope someone here has some tips, knows of existing work, or can point me to a better place to ask. Thanks in advance!
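    As a concrete rendering of Solution 1, a SQLite-flavoured sketch with hypothetical table and column names; each row fills exactly one of the typed columns, selected by the type discriminator:

        CREATE TABLE entry_value (
            entry_id   INTEGER NOT NULL,   -- which data entry
            column_id  INTEGER NOT NULL,   -- which user-defined column
            type       TEXT    NOT NULL,   -- 'int', 'real', 'text', ...
            int_value  INTEGER NULL,
            real_value REAL    NULL,
            text_value TEXT    NULL,
            PRIMARY KEY (entry_id, column_id)
        );

        -- One index per typed column keeps range queries fast without
        -- indexing a physical column per schema column:
        CREATE INDEX idx_value_int  ON entry_value (column_id, int_value);
        CREATE INDEX idx_value_real ON entry_value (column_id, real_value);
        CREATE INDEX idx_value_text ON entry_value (column_id, text_value);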


  • Internet Explorer to Firefox javascript migration library - does one exist?

    - by Brad
    I am working on a legacy ASP.NET web site that is highly dependent on Internet Explorer. I would like to migrate it to non-IE browsers. I know there are a large number of differences (as detailed at quirksmode.org, etc.), so I'm searching for a javascript library that can help minimize the amount of source I'd have to change. I'm hoping that my lack of success in finding such a beast so far means that I'm just a bad google-er, and not that I'm just going to have to slog through coming up with replacements/workarounds for all of IE's proprietary functionality that this site currently uses (it uses quite a bit). Any help you can provide will be greatly appreciated. Thanks!
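    For a flavour of what such a library has to paper over, a small sketch shimming one well-known IE-only API (event attachment); this is illustrative only, not a pointer to a specific library:

        // Cross-browser event attach: IE (<9) only has attachEvent,
        // standards browsers have addEventListener.
        function addListener(el, type, handler) {
            if (el.addEventListener) {
                el.addEventListener(type, handler, false);
            } else if (el.attachEvent) {
                el.attachEvent("on" + type, handler);
            }
        }

        // Usage: addListener(document.getElementById("btn"), "click", doWork);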


  • Migrating SQL Server Databases – The DBA’s Checklist (Part 1)

    - by Sadequl Hussain
    It is a fact of life: SQL Server databases change homes. They move from one instance to another, from one version to the next, from old servers to new ones. They move around as an organisation's data grows, applications are enhanced or new versions of the database software are released. If nothing else, servers become old and unreliable and databases eventually need to find a new home. Consider the following scenarios:

    1. A new database application is rolled out in a production server from the development or test environment
    2. A copy of the production database needs to be installed in a test server for troubleshooting purposes
    3. A copy of the development database is regularly refreshed in a test server during the system development life cycle
    4. A SQL Server is upgraded to a newer version. This can be an in-place upgrade or a side-by-side migration
    5. One or more databases need to be moved between different instances as part of a consolidation strategy. The instances can be running the same or different versions of SQL Server
    6. A database has to be restored from a backup file provided by a third party application vendor
    7. A backup of the database is restored in the same or a different instance for disaster recovery
    8. A database needs to be migrated within the same instance:
       a. Files are moved from direct attached storage to a storage area network
       b. The same database is copied under a different name for another application

    Migrating SQL Server database applications is a complex topic in itself. There are a number of components that can be involved: jobs, DTS or SSIS packages, logins or linked servers are only a few pieces of the puzzle. However, in this article we will focus only on the central part of migration: the installation of the database itself. Unless it is an in-place upgrade, typically the database is taken from a source server and installed in a destination instance. Most of the time, a full backup file is used for the rollout. The backup file is either provided to the DBA, or the DBA takes the backup and restores it in the target server. Sometimes the database is detached from the source and the files are copied to and attached in the destination. Regardless of the method of copying, moving, refreshing, restoring or upgrading the physical database, there are a number of steps the DBA should follow before and after it has been installed in the destination. It is these post-installation steps we are going to discuss below. Some of these steps apply in almost every scenario described above, while some will depend on the type of objects contained within the database. Also, the principles hold regardless of the number of databases involved.

    Step 1: Make a copy of data and log files when attaching and detaching

    When detaching and attaching databases, ensure you have made copies of the data and log files if the destination is running a newer version of SQL Server. This is because once attached to a newer version, the database cannot be detached and attached back to an older version. Trying to do so will give you a message like the following:

        Server: Msg 602, Level 21, State 50, Line 1
        Could not find row in sysindexes for database ID 6, object ID 1, index ID 1. Run DBCC CHECKTABLE on sysindexes.
        Connection Broken

    If you try to back up the attached database and restore it in the source, it will still fail. Similarly, if you are restoring the database in a newer version, it cannot be backed up or detached and put back in an older version of SQL Server. Unlike the detach and attach method, though, you do not lose the backup file or the original database here. When detaching and attaching a database, it is important you keep all the log files available along with the data files. It is possible to attach a database without a log file, and SQL Server can be instructed to create a new log file; however, this does not work if the database was detached while the primary filegroup was read-only. You will need all the log files in such cases.

    Step 2: Change database compatibility level

    Once the database has been restored or attached to a newer version of SQL Server, change the database compatibility level to reflect the newer version unless there is a compelling reason not to do so. When attaching or restoring from a previous version of SQL Server, the database retains the older version's compatibility level. The only time you would want to keep a database at an older compatibility level is when the code within your database is no longer supported by SQL Server. For example, outer joins with the *= or =* operators were still possible in SQL 2000 (with a warning message), but not in SQL 2005 anymore. If your stored procedures or triggers are using this form of join, you would want to keep the database at an older compatibility level. For a list of compatibility issues between older and newer versions of SQL Server databases, refer to Books Online under the sp_dbcmptlevel topic. Application developers and architects can help you decide whether you should change the compatibility level or not. You can always change the compatibility mode from the newest to an older version if necessary. To change the compatibility level, you can either use the database's properties in SQL Server Management Studio or use the sp_dbcmptlevel stored procedure. Bear in mind that you cannot run the built-in reports for databases from SQL Server Management Studio if you keep the database at an older compatibility level. The following figure shows the error message I received when trying to run the "Disk Usage by Top Tables" report against a database. This database was hosted on a SQL Server 2005 system and still had compatibility mode 80 (SQL 2000).

    Continues…
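    A short T-SQL sketch of the change described in Step 2, against a hypothetical database name; sp_dbcmptlevel is the procedure named above (on SQL Server 2008 and later, ALTER DATABASE ... SET COMPATIBILITY_LEVEL replaces it):

        -- Check the current level (80 = SQL 2000, 90 = SQL 2005):
        EXEC sp_dbcmptlevel 'MyMigratedDb';

        -- Raise it to the SQL Server 2005 level:
        EXEC sp_dbcmptlevel 'MyMigratedDb', 90;

        -- Equivalent on newer versions:
        -- ALTER DATABASE MyMigratedDb SET COMPATIBILITY_LEVEL = 90;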


  • Excel tables creation upon MySQL data import (new feature in MySQL for Excel 1.2.x)

    - by Javier Treviño
    In this blog post we are going to talk about one of the features included since MySQL for Excel 1.2.0. You can install the latest GA or maintenance version using the MySQL Installer, or optionally you can download any GA or non-GA version directly from the MySQL Developer Zone. Remember how easy it is to dump data from a MySQL table, view or stored procedure to an Excel worksheet? (If you don't, you can check out this other post: How To - Guide to Importing Data from a MySQL Database to Excel using MySQL for Excel.) In version 1.2.0 we introduced some advanced options for the Import MySQL Data operation regarding Excel tables. The Advanced Options dialog shown above is accessible from any Import Data dialog. When the Create an Excel table for the imported MySQL table data option is checked (which it is by default), MySQL for Excel will create an Excel table (also known in Excel jargon as a ListObject) from the Excel range containing the imported MySQL data. This "little feature" enables right-away usage of the Excel table in data analysis: including it for summarization on a PivotTable, adding a summarization row at the end of the table's data, or sorting and filtering the table's data by clicking the drop-down button next to each column's header, among other actions. The Excel tables that are created automatically from imported MySQL data will have a name like [UserPrefix].&lt;SchemaName&gt;.&lt;DbObjectName&gt; for tables and views, and [UserPrefix].&lt;SchemaName&gt;.&lt;ProcedureName&gt;.&lt;ResultSetName&gt; for stored procedures. Notice the first piece of the name is an optional [UserPrefix]; the prefix is only used if the Prefix Excel tables with the following text option is checked. The suggested prefix is "MySQL", but it can be changed to whatever text is suitable for you. Excel tables must have a table style so they are easily identified. There are a lot of predefined Excel table styles; by default the MySqlDefault style is applied, which is the style you have seen applied to imported data for Edit Sessions, and which adds simple and elegant formatting to the table. If you wish to change it to any of the predefined Excel table styles, you can do so through the drop-down list on the Use style [[styles drop-down]] for the new Excel table option. Excel tables are the basic building blocks for data analysis or self-service Business Intelligence using other more advanced Excel tools like Power Pivot, Power View or Power Map. This feature empowers you to use imported MySQL data in more advanced ways. We hope you give this and the other new features in the 1.2.x version family a try! Remember that your feedback is very important for us, so drop us a message and follow us:

        MySQL on Windows (this) Blog: https://blogs.oracle.com/MySqlOnWindows/
        MySQL for Excel forum: http://forums.mysql.com/list.php?172
        Facebook: http://www.facebook.com/mysql
        YouTube channel: https://www.youtube.com/user/MySQLChannel

    Cheers!


  • How to recover data files from xampp-windows to xampp-linux after crash?

    - by David Buehler
    My Windows box died after I developed a database in xampp on it; fortunately I have a backup of the entire F:/TestWeb/Xampp partition. Unfortunately, I did not do an Export (nor a dump) of the "Lws2" database before the crash. I have replaced the defunct machine with one running Mint7 (based on Ubuntu 9.04 "Jaunty Jackalope") and installed xampp-linux into the /opt partition, so the new xampp now runs fine in /opt/lampp, and says all the elements are secured by passwords (which I just assigned during this installation). I assumed that xampp-windows installed in November would migrate easily to xampp-linux installed in February -- a bad assumption. It apparently would have been simple if I had known enough to do an Export or a Dump before the crash, but.... The backup was done to a Network Attached Storage drive, which is formatted as "vfat", so the backup does not carry with it any valid ownership permissions from MySql on NTFS. I now see from my backup that the old data resided in \TestWeb\Xampp\Mysql\Data\Lws2\ and consists of 7 ".frm" files which define my tables. The actual data -- I suppose a ".sql" file or files -- has disappeared, and I am resigning myself to two days of retyping it. But I do not wish to do the table layouts all over again. So:

        I copied the Data tree to /opt/lampp/Data -- PhpMyAdmin does not see it.
        I copied the Lws2 tree to /opt/lampp/Lws2 -- PhpMyAdmin does not see it.
        I copied the Data tree to /opt/lampp/var/mysql/Data -- PhpMyAdmin does not see it.
        I copied the Lws2 tree to /opt/lampp/var/mysql/Lws2 -- PhpMyAdmin does not see it.

    So I adjusted all the permissions from owner "nobody" to owner "root" and gave full permissions to all groups and to all others, with permissions percolating down, in all 4 trees. You guessed it -- PhpMyAdmin does not see any database named Lws2, only its 4 default ones. I double-checked the permissions, rebooted Linux and repeated the tests. At some point in that process I did see PhpMyAdmin showing "lws2(7)", but when I clicked on it I saw a "no table found" message. I have not been able to recreate that experience. Apparently there are some setup files for MySql and for PhpMyAdmin which need to be set up by running a wizard or two or by editing the files directly. I grepped the TestWeb tree and found an old "ldir = "C:TestWeb\Xampp\MySql\"" and a "DataDir = C:TestWeb\Xampp\MySql\" in a .php file and in a .bat file, but I cannot find the corresponding config file names on the /opt partition -- so it looks as if these wizards have not been run to create them. Which config files does Linux use to set up MySql and PhpMyAdmin? Which wizards do I need to run to point the MySql engine and PhpMyAdmin at the folder /opt/lampp/data/ with its lws2 folder inside it? Or which files do I need to edit, with a sample of what they normally say under Linux? Incidentally, I remember I converted from MyISAM with its .MYD and .MYI files to InnoDB after entering only a small amount of the data -- and I do not know what file types to look for -- perhaps my data is still there but under another guise or in another place? Is it something as simple as linux needing to see "/data/" instead of "/Data"? I will check that out while waiting for a response. If anyone can point me to documentation that discusses this level of detail -- I will read it avidly! In any case, thanks for any clarification you can give on this thorny problem. wizdum
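    A hedged sketch of where a stock xampp-linux install keeps these pieces; the paths are the usual lampp defaults and may differ by version, and the backup mount point is made up:

        # MySQL's data directory in a default xampp-linux install:
        ls /opt/lampp/var/mysql/

        # Copy the backed-up database folder in next to the default ones,
        # then give it to the user the bundled mysqld runs as (the post's
        # listings suggest 'nobody' here):
        cp -r /mnt/backup/TestWeb/Xampp/Mysql/Data/Lws2 /opt/lampp/var/mysql/lws2
        chown -R nobody /opt/lampp/var/mysql/lws2

        # The server-side config, including datadir, lives in my.cnf:
        grep -i datadir /opt/lampp/etc/my.cnf

        /opt/lampp/lampp restart

    Note that .frm files alone only describe table layouts; for InnoDB the row data lives in the shared ibdata files (or per-table .ibd files), so without those from the backup only the structure is recoverable.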


  • Storing game objects with generic object information

    - by Mick
    In a simple game object class, you might have something like this:

        public abstract class GameObject {
            protected String name;
            // other properties
            protected double x, y;

            public GameObject(String name, double x, double y) {
                // etc
            }

            // setters, getters
        }

    I was thinking, since a lot of game objects (e.g. generic monsters) will share the same name, movement speed, attack power, etc., it would be better to have all that information shared between all monsters of the same type. So I decided to have an abstract class "ObjectData" to hold all this shared information, and whenever I create a generic monster, I would use the same pre-created "ObjectData" for it. Now the above class becomes more like this:

        public abstract class GameObject {
            protected ObjectData data;
            protected double x, y;

            public GameObject(ObjectData data, double x, double y) {
                // etc
            }

            // setters, getters

            public String getName() {
                return data.getName();
            }
        }

    To tailor this specifically for a Monster (it could be done in a very similar way for Npcs, etc.), I would add 2 classes: Monster, which extends GameObject, and MonsterData, which extends ObjectData. Now I'll have something like this:

        public class Monster extends GameObject {
            public Monster(MonsterData data, double x, double y) {
                super(data, x, y);
            }
        }

    This is where my design question comes in. Since MonsterData would hold data specific to a generic monster (and would differ from what, say, NpcData holds), what would be the best way to access this extra information in a system like this? At the moment, since the data variable is of type ObjectData, I'll have to cast data to MonsterData whenever I use it inside the Monster class. One solution I thought of is this, but it might be bad practice:

        public class Monster extends GameObject {
            private MonsterData data; // <- this part here

            public Monster(MonsterData data, double x, double y) {
                super(data, x, y);
                this.data = data; // <- this part here
            }
        }

    I've read that I should generally avoid shadowing the superclass's fields. What do you guys think of this solution? Is it bad practice? Do you have any better solutions? Is the design in general bad? How should I redesign this if it is? Thanks in advance for any replies, and sorry about the long question. Hopefully it all makes sense!
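    One common way around the cast, sketched with the post's class names: make the shared-data type a generic parameter so each subclass sees its own data type. This assumes ObjectData declares getName() and MonsterData adds, say, getAttackPower(); it is a suggestion, not something from the original post:

        public abstract class GameObject<D extends ObjectData> {
            protected final D data;   // typed per subclass, no casts needed
            protected double x, y;

            protected GameObject(D data, double x, double y) {
                this.data = data;
                this.x = x;
                this.y = y;
            }

            public String getName() {
                return data.getName();
            }
        }

        public class Monster extends GameObject<MonsterData> {
            public Monster(MonsterData data, double x, double y) {
                super(data, x, y);
            }

            public int getAttackPower() {
                return data.getAttackPower(); // data is already MonsterData
            }
        }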


  • Fraud and Anomaly Detection using Oracle Data Mining YouTube-like Video

    - by chberger
    I've created and recorded another YouTube-like presentation with "live" demos of the Oracle Advanced Analytics Option, this time focusing on Fraud and Anomaly Detection using Oracle Data Mining. [Note: It is a large MP4 file that will open and play in place. The sound quality is weak, so you may need to turn up the volume.] Data is your most valuable asset. It represents the entire history of your organization and its interactions with your customers. Predictive analytics leverages data to discover patterns and relationships and to help you make informed predictions. Oracle Data Mining (ODM) automatically discovers relationships hidden in data. Predictive models and insights discovered with ODM address business problems such as predicting customer behavior, detecting fraud, analyzing market baskets, profiling and loyalty. Oracle Data Mining, part of the Oracle Advanced Analytics (OAA) Option to the Oracle Database EE, embeds 12 high performance data mining algorithms in the SQL kernel of the Oracle Database. This eliminates data movement, delivers scalability and maintains security. But how do you find these very important needles, or possibly fraudulent transactions, in huge haystacks of data? Oracle Data Mining's 1-Class Support Vector Machine algorithm is specifically designed to identify rare or anomalous records. The 1-Class SVM anomaly detection algorithm trains on what it believes to be "normal" records and builds a descriptive and predictive model which can then be used to flag records that, on a multi-dimensional basis, appear not to fit in -- or to be different. Combined with clustering techniques, which sort transactions into more homogeneous sub-populations for more focused anomaly detection analysis, and with Oracle Business Intelligence, Enterprise Applications and/or real-time environments to "deploy" fraud detection, Oracle Data Mining delivers a powerful advanced analytical platform for solving important problems. With OAA/ODM you can find suspicious expense report submissions, flag non-compliant tax submissions, fight fraud in healthcare claims and save huge amounts of money in fraudulent claims and abuse. This presentation and several brief demos will show Oracle Data Mining's fraud and anomaly detection capabilities.
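    As a rough illustration (not taken from the presentation itself), a PL/SQL sketch of training a one-class SVM with the DBMS_DATA_MINING package; the table, column and model names are invented, and the key point is that a NULL target turns the SVM classifier into the anomaly detector described above:

        -- Settings table: pick the SVM algorithm.
        CREATE TABLE svm_settings (
            setting_name  VARCHAR2(30),
            setting_value VARCHAR2(4000)
        );

        INSERT INTO svm_settings VALUES
          ('ALGO_NAME', 'ALGO_SUPPORT_VECTOR_MACHINES');
        COMMIT;

        -- A NULL target column makes this a one-class (anomaly) model.
        BEGIN
          DBMS_DATA_MINING.CREATE_MODEL(
              model_name          => 'CLAIMS_ANOMALY_SVM',
              mining_function     => DBMS_DATA_MINING.CLASSIFICATION,
              data_table_name     => 'CLAIMS',
              case_id_column_name => 'CLAIM_ID',
              target_column_name  => NULL,
              settings_table_name => 'SVM_SETTINGS');
        END;
        /

        -- Score rows: prediction 0 = flagged as anomalous, 1 = typical.
        SELECT claim_id,
               PREDICTION_PROBABILITY(claims_anomaly_svm, 0 USING *) AS p_anomaly
        FROM   claims
        ORDER  BY p_anomaly DESC;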


  • How to best transfer large payloads of data using wsHttp with WCF with message security

    - by jpierson
    I have a case where I need to transfer large amounts of serialized object graphs (via NetDataContractSerializer) using WCF with wsHttp. I'm using message security and would like to continue to do so. With this setup I would like to transfer a serialized object graph which can sometimes approach around 300MB or so, but when I try to do so I've started seeing an exception of type System.InsufficientMemoryException appear. After a little research, it appears that by default in WCF a result to a service call is contained within a single message which holds the serialized data, and this data is buffered on the server until the whole message is completely written. Thus the memory exception is being caused by the fact that the server is running out of the memory resources it is allowed to allocate, because that buffer is full. The two main recommendations that I've come across are to use streaming or chunking to solve this problem; however, it is not clear to me what that involves and whether either solution is possible with my current setup (wsHttp/NetDataContractSerializer/message security). So far I understand that with streaming, message security would not work, because message encryption and decryption need to work on the whole set of data and not a partial message. Chunking, however, sounds like it might be possible, but it is not clear to me how it would be done with the other constraints that I've listed. If anybody could offer some guidance on what solutions are available and how to go about implementing them, I would greatly appreciate it. Related resources:

        Chunking Channel
        How to: Enable Streaming
        Large attachments over WCF
        Custom Message Encoder
        Another spotting of InsufficientMemoryException

    I'm also interested in any type of compression that could be done on this data, but it looks like I would probably be best off doing this at the transport level once I can transition to .NET 4.0, so that the client will automatically support the gzip headers, if I understand this properly.
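    For scale, a sketch of the buffered-mode quotas that usually have to be raised together before a wsHttp endpoint will accept messages anywhere near this size; the binding name and values are illustrative, and raising limits does not by itself fix the buffering problem the post describes:

        <bindings>
          <wsHttpBinding>
            <!-- Hypothetical binding; values sized for very large messages. -->
            <binding name="largeGraphBinding"
                     maxReceivedMessageSize="419430400"
                     receiveTimeout="00:10:00"
                     sendTimeout="00:10:00">
              <readerQuotas maxArrayLength="419430400"
                            maxStringContentLength="419430400" />
              <security mode="Message" />
            </binding>
          </wsHttpBinding>
        </bindings>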


  • Accessing data stored in another unit Delphi

    - by Hendriksen123
    In Unit2 of my program I have the following code:

        TValue = Record
          NewValue, OldValue, SavedValue : Double;
        end;

        TData = Class(TObject)
        Public
          EconomicGrowth : TValue;
          Inflation : TValue;
          Unemployment : TValue;
          CurrentAccountPosition : TValue;
          AggregateSupply : TValue;
          AggregateDemand : TValue;
          ADGovernmentSpending : TValue;
          ADConsumption : TValue;
          ADInvestment : TValue;
          ADNetExports : TValue;
          OverallTaxation : TValue;
          GovernmentSpending : TValue;
          InterestRates : TValue;
          IncomeTax : TValue;
          Benefits : TValue;
          TrainingEducationSpending : TValue;
        End;

    I then declare Data : TData in the var section. When I try the following in Unit1:

        ShowMessage(FloatToStr(Unit2.Data.Inflation.SavedValue));

    I get an EAccessViolation message. Is there any way to access the data stored in 'Data' from Unit1 without getting errors?
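    The usual cause of an EAccessViolation here is that Data was never constructed, so the global variable is nil; a minimal sketch, assuming the declaration stays in Unit2, is to create the instance in the unit's initialization section:

        // In Unit2:
        var
          Data: TData;

        initialization
          Data := TData.Create;   // without this, Data is nil and any
                                  // Unit2.Data... access violates
        finalization
          Data.Free;

        end.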


  • Image Wells, Core Data, and Sqlite files.

    - by Sway
    I've got a Mac application that I've developed. I use it to create sqlite files that are bundled with my iPhone app. The Mac app uses Core Data and bindings and is working fine, except for one "weird" issue. I use an NSImageView (or Image Well) to allow me to drag and drop jpg files. This is bound through to an optional binary attribute in my model class. For some reason, when I drag and drop a 4k jpg file onto the image well and save the sqlite file, the data saved to the binary column is over 15 times larger than it should be. Whereas if I use an application like SQLiteManager and add the image into the row in the database, the binary data is the correct (expected) size. A 4k jpg file -- actual size: 2371; persisted via Core Data: 35810. Can anyone give me a suggestion as to why this might be happening? Do I need to set some setting in Interface Builder or write some custom code?


  • How to show data in dataTable label:data in JSF

    - by palakolanusrinu
    Hi. How do I display "label: data" pairs in multiple rows using a dataTable in JSF? Right now I'm getting the data laid out in columns, like

        | label | data |

    but I want it in this format:

        label: data
        name: srinu

    The code I used is:

        <h:dataTable id="fundInfo" value="#{clientFundInfo}" border="1"
                     var="client" first="0" rows="5" rules="all">
            <h:column>
                <h:outputText value="CLIENT:"/>
                <h:outputText value="#{client.clientName}"/>
            </h:column>
            <h:column>
                <h:outputText value="FUND:"/>
                <h:outputText value="#{client.fundName}"/>
            </h:column>
            <h:column>
                <h:outputText value="Employer Identification Number:"/>
                <h:outputText value="#{client.empIdentificationNum}"/>
            </h:column>
            <h:column>
                <h:outputText value="FISCAL YEAR ENDED:"/>
                <h:outputText value="#{client.fye}"/>
            </h:column>
            <h:column>
                <h:outputText value="Shares Outstanding"/>
                <h:outputText value="#{client.sharesOutstanding}"/>
            </h:column>
        </h:dataTable>
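    One common way to get the label: value stacking (a suggested layout, not from the original post) is to render each row's fields in a two-column h:panelGrid inside a single dataTable column, so every label sits beside its value on its own line; sketched here with two of the post's bean properties:

        <h:dataTable id="fundInfo" value="#{clientFundInfo}" var="client">
            <h:column>
                <h:panelGrid columns="2">
                    <h:outputText value="CLIENT:"/>
                    <h:outputText value="#{client.clientName}"/>
                    <h:outputText value="FUND:"/>
                    <h:outputText value="#{client.fundName}"/>
                </h:panelGrid>
            </h:column>
        </h:dataTable>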


  • jquery slide to attribute with specific data attribute value

    - by Alex M
    Ok, so I have the following nav with absolute urls:

        <ul class="nav navbar-nav" id="main-menu">
            <li class="first active"><a href="http://example.com/" title="Home">Home</a></li>
            <li><a href="http://example.com/about.html" title="About">About</a></li>
            <li><a href="http://example.com/portfolio.html" title="Portfolio">Portfolio</a></li>
            <li class="last"><a href="example.com/contact.html" title="Contact">Contact</a></li>
        </ul>

    and then articles with the following data attributes:

        <article class="row page" id="about" data-url="http://example.com/about.html">
            <div class="one"> </div>
            <div class="two"> </div>
        </article>

    When you click a link in the menu, I would like to ignore the fact that it is a hyperlink and slide to the corresponding article based on its data-url attribute. I started with what I think is the obvious:

        $('#main-menu li a').on('click', function(event) {
            event.preventDefault();
            var pageUrl = $(this).attr('href');
        });

    and have tried find and animate, but then I don't know how to reference the article with data-url="http://example.com/about.html". Any help would be most appreciated. Thanks
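    A sketch completing the handler against the markup above: an attribute selector finds the article whose data-url matches the clicked href, then the page scroll is animated to it (the 600 ms duration is arbitrary):

        $('#main-menu li a').on('click', function (event) {
            event.preventDefault();
            var pageUrl = $(this).attr('href');

            // Attribute selector matches the article carrying that data-url:
            var $target = $('article[data-url="' + pageUrl + '"]');

            if ($target.length) {
                $('html, body').animate({ scrollTop: $target.offset().top }, 600);
            }
        });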


  • Syncing two separate structures to the same master data

    - by Mike Burton
    I've got multiple structures to maintain in my application. All link to the same records, and one of them could be considered the "master" in that it reflects actual relationships held in files on disk. The other structures are used to "call out" elements of the main design for purchase and work orders. I'm struggling to come up with a pattern that deals appropriately with changes to the master data. As an example, the following trees might refer to the same data:

        A |_ B |_ C |_ D |_ E |_ B |_ C |_ D

        A |_ B E C |_ D

        A |_ B C D E

    These secondary structures follow internal rules, but their overall structure is usually user-determined. In all cases (including the master), any element can be used in multiple locations and in multiple trees. When I add a child to any element in the tree, I want to either automatically build the secondary structure for each instance of the "master" element, or at least advertise the situation to the user and allow them to manually generate the data required for the secondary trees. Is there any pattern which might apply to this situation? I've been treating it as a view problem, but it turns out to be more complicated than that when you look at the initial generation of the data.


  • iPhone Core Data problem

    - by Junior B.
    This is my first project with Core Data. I followed the Event tutorial provided by Apple, which helped me understand the basics of Core Data on iPhone. But now, working on my project, I have a problem adding data to my database. When I create an object and set its data, if I try to get it back, the system returns a strange sequence of characters. This is what I see if I log it:

        2010-05-11 00:16:43.523 FG[2665:207] Package: ‡}00å
        2010-05-11 00:16:43.525 FG[2665:207] Package: ‡}00å
        2010-05-11 00:16:43.526 FG[2665:207] Package: ‡}00å
        2010-05-11 00:16:43.527 FG[2665:207] Package: ‡}00å
        2010-05-11 00:16:43.527 FG[2665:207] Package: ‡}00å
        2010-05-11 00:16:43.527 FG[2665:207] Items: 5

    What kind of problem could this be? Edit: This is the part of the code that generates the error:

        package = (Package *)[NSEntityDescription insertNewObjectForEntityForName:@"Package"
                                                          inManagedObjectContext:moc];
        theNodes = [doc nodesForXPath:@"//pack" error:&error];
        for (CXMLElement *theElement in theNodes) {
            // Create a counter variable as type "int"
            int counter;
            // Loop through the children of the current node
            for (counter = 0; counter < [theElement childCount]; counter++) {
                if ([[[theElement childAtIndex:counter] name] isEqualToString:@"id"])
                    [package setIdPackage:[[theElement childAtIndex:counter] stringValue]];
                if ([[[theElement childAtIndex:counter] name] isEqualToString:@"title"])
                    [package setPackageTitle:[[theElement childAtIndex:counter] stringValue]];
                if ([[[theElement childAtIndex:counter] name] isEqualToString:@"category"])
                    [package setCategory:[[theElement childAtIndex:counter] stringValue]];
                if ([[[theElement childAtIndex:counter] name] isEqualToString:@"lang"])
                    [package setLang:[[theElement childAtIndex:counter] stringValue]];
                if ([[[theElement childAtIndex:counter] name] isEqualToString:@"number"]) {
                    NSNumberFormatter *f = [[NSNumberFormatter alloc] init];
                    [f setNumberStyle:NSNumberFormatterDecimalStyle];
                    NSNumber *myNumber = [f numberFromString:[[theElement childAtIndex:counter] stringValue]];
                    [f release];
                    [package setNumber:myNumber];
                }
            }
        }
        NSLog([NSString stringWithFormat:@"=== %s ===\nID: %s\nCategory: %s\nLanguage: %s",
              [package packageTitle], [package idPackage], [package category], [package lang]]);
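    The garbage strings are consistent with the final NSLog: %s expects a C string, but packageTitle and friends are NSString objects, so their bytes get read as raw memory. A sketch of the same log using the object specifier:

        // %@ sends -description to each object; %s would read an NSString
        // pointer as a char*, producing the garbled output shown above.
        NSLog(@"=== %@ ===\nID: %@\nCategory: %@\nLanguage: %@",
              [package packageTitle], [package idPackage],
              [package category], [package lang]);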


  • Core Data, Bindings, value transformers : crash when saving

    - by Gael
    Hi, I am trying to store a PNG image in a Core Data store backed by an sqlite database. Since I intend to use this database on an iPhone, I can't store NSImage objects directly. I wanted to use bindings and an NSValueTransformer subclass to handle the transcoding from the NSImage (obtained from an image well on my GUI) to an NSData containing the PNG binary representation of the image. I wrote the following code for the value transformer:

        + (Class)transformedValueClass {
            return [NSImage class];
        }

        + (BOOL)allowsReverseTransformation {
            return YES;
        }

        - (id)transformedValue:(id)value {
            if (value == nil) return nil;
            return [[[NSImage alloc] initWithData:value] autorelease];
        }

        - (id)reverseTransformedValue:(id)value {
            if (value == nil) return nil;
            if (![value isKindOfClass:[NSImage class]]) {
                NSLog(@"Type mismatch. Expecting NSImage");
            }
            NSBitmapImageRep *bits = [[value representations] objectAtIndex:0];
            NSData *data = [bits representationUsingType:NSPNGFileType properties:nil];
            return data;
        }

    The model has a transformable property configured with this NSValueTransformer. In Interface Builder a table column and an image well are both bound to this property, and both have the proper value transformer name (an image dropped in the image well shows up in the table column). The transformer is registered and called every time an image is added or a row is reloaded (checked with NSLog() calls). The problem arises when I am trying to save the managed objects. The console output shows the error message:

        [NSImage length]: unrecognized selector sent to instance 0x1004933a0

    It seems like Core Data is using the value transformer to obtain the NSImage back from the NSData and then tries to save the NSImage instead of the NSData. There are probably workarounds, such as the one presented in this post, but I would really like to understand why my approach is flawed. Thanks in advance for your ideas and explanations.
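    A plausible explanation, consistent with the crash: for transformable attributes, Core Data expects the forward transformation to turn the attribute value into the NSData it stores, the opposite of what the code above does. A sketch with the two directions swapped (the image-well binding would then need no transformer, since the attribute value is already an NSImage):

        + (Class)transformedValueClass {
            return [NSData class];       // the persisted representation
        }

        + (BOOL)allowsReverseTransformation {
            return YES;
        }

        // Model value -> what gets written to the store.
        - (id)transformedValue:(id)value {
            if (value == nil) return nil;
            NSBitmapImageRep *bits = [[value representations] objectAtIndex:0];
            return [bits representationUsingType:NSPNGFileType properties:nil];
        }

        // Stored NSData -> model value handed back on fetch.
        - (id)reverseTransformedValue:(id)value {
            if (value == nil) return nil;
            return [[[NSImage alloc] initWithData:value] autorelease];
        }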


  • Generic dataset handling library

    - by Pep.
    Hello, I want to build a generic Perl module for handling and analysing biomedical character-separated datasets, one which can, most certainly, be used on any kind of dataset containing a mixture of categorical (A,B,C,...), continuous (1.2,3,881...) and identifier (XXX1,XXX2...) variables. The plan is to have people initialize the module and then use some arguments to point to the data file(s), the place where the analysis reports should be placed, and the structure of the data. By structure of the data I mean which variable is in which place, and its name/type. And this is where I need some enlightenment. I am baffled as to how to do this in a clean way. Obviously, having people create a simple schema file, be it XML or some other format, would be the cleanest, but maybe not all people enjoy doing something like this. The solutions I can think of are:

        1. Create a configuration file in XML or similar with a prespecified format.
        2. Pass the information during initialization of the module.
        3. Use the first row of the data as headers and try to guess types (ouch).

    Surely there must be a "canonical" way of doing this that is also usable and efficient. Thanks, p.
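    A minimal Perl sketch of option 2, passing the column structure as a plain data structure at construction time; the module name and field keys are invented for illustration:

        package BioDataset;
        use strict;
        use warnings;

        sub new {
            my ($class, %args) = @_;
            my $self = {
                file       => $args{file},        # path to the delimited data
                report_dir => $args{report_dir},  # where analysis reports go
                sep        => $args{sep} // "\t",
                # One spec per column, in file order: name + type.
                columns    => $args{columns},
            };
            return bless $self, $class;
        }

        1;

        # Caller:
        # my $ds = BioDataset->new(
        #     file       => 'samples.tsv',
        #     report_dir => 'reports/',
        #     columns    => [
        #         { name => 'patient', type => 'identifier'  },
        #         { name => 'group',   type => 'categorical' },
        #         { name => 'dose',    type => 'continuous'  },
        #     ],
        # );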


  • Ember Data Sync - LocalStorage+REST+RealTime+Online/Offline

    - by Miguel Madero
    We have a combination of requirements in terms of data access. We pre-load some reference data, and we need that reference data to survive browser restarts instead of just living in memory, to avoid loading it all the time. I'm currently using the LocalStorageAdapter for that. Once we have it, we would like to sync changes (polling, or using Socket.IO in the background and updating the LocalStorage, could do the trick). There are other models that are more transactional, where we would need to go directly to the server to get/save them; it would be nice to use something like the RESTAdapter for that. Lastly, there are some operations that should work offline, with changes synced later. To make it more concrete:

        - We pre-load vendor and "favorite products" into Local Storage. We work offline with those.
        - We need to sync server changes to vendor and product information.
        - If they search the full catalog, that requires them to be online.
        - When offline, we need to allow users to add something to their cart or even submit an order. We would like to queue this action and submit it when they have an Internet connection.

    So a few questions are derived from this:

        - Is there a way to use the RESTAdapter in combination with LocalStorage?
        - Is there some Socket.IO support? (Happy to do this part manually.)
        - Is there queueing support? Ideally at the Ember-Data level.

    I know we will have to do a lot of this manually and pull together the different lego pieces, but I wanted to ask for some perspective from experienced Ember devs.
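    Framework questions aside, the queueing piece can be sketched in plain JavaScript: persist pending actions to localStorage and drain them when connectivity returns. The endpoint and action shape are made up; an Ember-Data adapter could wrap the same idea:

        var QUEUE_KEY = 'pendingActions';

        function enqueue(action) {
            var q = JSON.parse(localStorage.getItem(QUEUE_KEY) || '[]');
            q.push(action);
            localStorage.setItem(QUEUE_KEY, JSON.stringify(q));
        }

        function drainQueue() {
            var q = JSON.parse(localStorage.getItem(QUEUE_KEY) || '[]');
            if (!q.length) return;
            fetch('/api/actions', {          // hypothetical endpoint
                method: 'POST',
                headers: { 'Content-Type': 'application/json' },
                body: JSON.stringify(q[0])
            }).then(function () {
                // Remove the action only after the server accepted it.
                localStorage.setItem(QUEUE_KEY, JSON.stringify(q.slice(1)));
                drainQueue();
            });
        }

        // Submit orders offline, sync when the browser comes back online:
        window.addEventListener('online', drainQueue);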


  • Sending data in a GTK Callback

    - by snostorm
    How can I send data through a GTK callback? I've Googled, and with the information I found created this:

        #include <gtk/gtk.h>
        #include <stdio.h>

        void button_clicked(GtkWidget *widget, GdkEvent *event, gchar *data);

        int main(int argc, char *argv[]) {
            GtkWidget *window;
            GtkWidget *button;

            gtk_init(&argc, &argv);

            window = gtk_window_new(GTK_WINDOW_TOPLEVEL);
            button = gtk_button_new_with_label("Go!");
            gtk_container_add(GTK_CONTAINER(window), button);

            g_signal_connect_swapped(G_OBJECT(window), "destroy",
                                     G_CALLBACK(gtk_main_quit), NULL);
            g_signal_connect(G_OBJECT(button), "clicked",
                             G_CALLBACK(button_clicked), "test");

            gtk_widget_show(window);
            gtk_widget_show(button);
            gtk_main();
            return 0;
        }

        void button_clicked(GtkWidget *widget, GdkEvent *event, gchar *data) {
            printf("%s \n", (gchar *) data);
            return;
        }

    But it just segfaults when I press the button. What is the right way to do this?
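    The likely culprit is the handler signature: GTK's "clicked" signal passes only the widget and the user-data pointer - there is no GdkEvent argument - so "test" arrives in the event slot and the data parameter holds garbage. A sketch with the matching signature:

        #include <gtk/gtk.h>
        #include <stdio.h>

        /* "clicked" handlers receive (button, user_data); no event argument. */
        static void button_clicked(GtkWidget *widget, gpointer data)
        {
            printf("%s\n", (gchar *) data);
        }

        /* Connected exactly as in the question:
           g_signal_connect(G_OBJECT(button), "clicked",
                            G_CALLBACK(button_clicked), "test");      */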


  • Counting in R data.table

    - by Simon Z.
    I have the following data.table:

        set.seed(1)
        DT <- data.table(VAL = sample(c(1, 2, 3), 10, replace = TRUE))

            VAL
         1:   1
         2:   2
         3:   2
         4:   3
         5:   1
         6:   3
         7:   3
         8:   2
         9:   2
        10:   1

    Now I want to perform two tasks:

        1. Count the occurrences of each number in VAL.
        2. Count within all rows with the same value of VAL (first, second, third occurrence).

    In the end I want this result:

            VAL COUNT IDX
         1:   1     3   1
         2:   2     4   1
         3:   2     4   2
         4:   3     3   1
         5:   1     3   2
         6:   3     3   2
         7:   3     3   3
         8:   2     4   3
         9:   2     4   4
        10:   1     3   3

    where COUNT comes from task 1 and IDX from task 2. I tried to work with which and length using .I:

        DT[, list(COUNT = length(VAL == VAL[.I]),
                  IDX   = which(which(VAL == VAL[.I]) == .I))]

    but this does not work, as .I refers to a vector with the index, so I guess one must use .I[]. Though inside .I[] I again face the problem that I do not have the row index, and I know (from reading the data.table FAQ and following the posts here) that looping through rows should be avoided if possible. So, what's the data.table way?
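    A sketch of the usual data.table idiom: group by VAL and let .N (the group size) and seq_len(.N) (the within-group position) fill both columns by reference:

        library(data.table)
        set.seed(1)
        DT <- data.table(VAL = sample(c(1, 2, 3), 10, replace = TRUE))

        # COUNT = size of each VAL group; IDX = running occurrence number.
        DT[, `:=`(COUNT = .N, IDX = seq_len(.N)), by = VAL]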


  • Core Data object into an NSDictionary with possible nil objects

    - by Chuck
    I have a Core Data object that has a bunch of optional values. I'm pushing a table view controller and passing it a reference to the object so I can display its contents in a table view. Because I want the table view displayed a specific way, I am storing the values from the Core Data object in an array of dictionaries, then using the array to populate the table view. This works great, and I got editing and saving working properly. (I'm not using a fetched results controller because I don't have anything to sort on.) The issue with my current code is that if one of the items in the object is missing, then I end up trying to put nil into the dictionary, which won't work. I'm looking for a clean way to handle this. I could do the following, but I can't help feeling like there's a better way. (passedEntry is the Core Data object handed to the view controller when it is pushed; let's say it contains firstName, lastName, and age, all optional.)

        if ([passedEntry firstName] != nil) {
            [dictionary setObject:[passedEntry firstName] forKey:@"firstName"];
        } else {
            [dictionary setObject:@"" forKey:@"firstName"];
        }

    And so on. This works, but it feels kludgy, especially if I end up adding more items to the Core Data object down the road.
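    One way to flatten the repetition, as a sketch: a small helper that substitutes the empty string for nil, so each attribute becomes a single line (value ?: @"" is the GCC/clang shorthand for "value if non-nil, else @""):

        // Stores value under key, or @"" when value is nil.
        static void setOrEmpty(NSMutableDictionary *dict, NSString *key, id value) {
            [dict setObject:(value ?: @"") forKey:key];
        }

        // Usage:
        setOrEmpty(dictionary, @"firstName", [passedEntry firstName]);
        setOrEmpty(dictionary, @"lastName",  [passedEntry lastName]);
        setOrEmpty(dictionary, @"age",       [passedEntry age]);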


  • using jquery $.ajax to retrieve data and push data into javascript array through the success function

    - by teamdane
    Hello All, I have a function where I'm trying to retrieve values using $.ajax and push the returned data into an array called output. The problem is that the success function will not let me push the results into the array. See below for the code:

        var getValues = function(el) {
            var value = ($(el).val());
            if (value == '' || $(el).attr('disabled')) {
                return [];
            }
            var string = 'value=' + value;
            var output = [];
            $.ajax({
                type: "GET",
                url: 'shocklookup_callback.php',
                dataType: "string",
                data: string,
                success: function(data) {
                }
            });
            //output.push({ text : value + 'test', value : value + 'test1'} );
            return output;
        };

    Any ideas on what I can put in the success function to be able to push the data into the output array? Also, I need to be able to return the output array to the original getValues function. If this doesn't make a whole lot of sense, please let me know and I'll try to explain better. Thanks, Dane
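    The snag is that $.ajax is asynchronous: getValues returns before success ever runs, so the filled array cannot be returned directly. A sketch that hands the results to a callback instead; note also that jQuery's dataType takes values like "text" or "json", not "string":

        var getValues = function (el, done) {
            var value = $(el).val();
            if (value === '' || $(el).attr('disabled')) {
                done([]);
                return;
            }
            $.ajax({
                type: "GET",
                url: 'shocklookup_callback.php',
                dataType: "text",
                data: { value: value },
                success: function (data) {
                    // Build the array once the response is actually here.
                    done([{ text: data, value: value }]);
                }
            });
        };

        // Usage: getValues('#shock', function (output) { /* use output */ });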


  • How to reduce a data frame keeping the order for other columns

    - by betabandido
    I am trying to reduce a data frame using the max function on a given column. I would like to preserve the other columns, but keep the values from the same rows where each maximum value was selected. An example will make this explanation easier. Let us assume we have the following data frame:

        dframe <- data.frame(list(BENCH=sort(rep(letters[1:4], 4)),
                                  CFG=rep(1:4, 4),
                                  VALUE=runif(4 * 4)))

    This gives me:

           BENCH CFG      VALUE
        1      a   1 0.98828096
        2      a   2 0.19630597
        3      a   3 0.83539540
        4      a   4 0.90988296
        5      b   1 0.01191147
        6      b   2 0.35164194
        7      b   3 0.55094787
        8      b   4 0.20744004
        9      c   1 0.49864470
        10     c   2 0.77845408
        11     c   3 0.25278871
        12     c   4 0.23440847
        13     d   1 0.29795494
        14     d   2 0.91766057
        15     d   3 0.68044728
        16     d   4 0.18448748

    Now, I want to reduce the data in order to select the maximum VALUE for each different BENCH:

        aggregate(VALUE ~ BENCH, dframe, FUN=max)

    This gives me the expected result:

          BENCH     VALUE
        1     a 0.9882810
        2     b 0.5509479
        3     c 0.7784541
        4     d 0.9176606

    Next, I tried to preserve other columns:

        aggregate(cbind(VALUE, CFG) ~ BENCH, dframe, FUN=max)

    This reduction returns:

          BENCH     VALUE CFG
        1     a 0.9882810   4
        2     b 0.5509479   4
        3     c 0.7784541   4
        4     d 0.9176606   4

    Both VALUE and CFG are reduced using the max function, but this is not what I want. In this example I would like to obtain:

          BENCH     VALUE CFG
        1     a 0.9882810   1
        2     b 0.5509479   3
        3     c 0.7784541   2
        4     d 0.9176606   2

    where CFG is not reduced, but simply keeps the value associated with the maximum VALUE for each different BENCH. How could I change my reduction in order to obtain the last result shown?
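    Two standard base-R ways to keep whole rows, as a sketch: filter with ave, or merge the aggregate back onto the original frame:

        # 1. Keep every row whose VALUE equals its group's maximum:
        dframe[dframe$VALUE == ave(dframe$VALUE, dframe$BENCH, FUN = max), ]

        # 2. Aggregate, then join back to recover the other columns:
        merge(aggregate(VALUE ~ BENCH, dframe, max), dframe)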


  • Core Data Errors vs Exceptions Part 3

    - by John Gallagher
    My question is similar to this one.

    Background: I'm creating a large number of objects in a Core Data store using NSOperations to speed things up. I've followed all the Core Data multithreading rules - I've got a single persistent store coordinator and a managed object context per thread that, on save, merges back to the main managed object context.

    The Problem: When the number of threads running at once is more than 1, I get the following exception logged on save of my Core Data store:

        NSExceptionHandler has recorded the following exception:
        NSInternalInconsistencyException -- optimistic locking failure

    What I've Tried: My code that creates new entities is quite complex - it makes entities that have relationships with other entities that could be being created in a separate thread. If I replace my object creation routine with some very simple code just making non-related entries, everything works perfectly. Initially, as well as the exceptions, I was getting a save error saying Core Data couldn't save due to the merge failing. I read the docs and realised I needed a merge policy on the managed object context I was saving to. I set this up and, as this question states, the save error goes away, but the exception remains.

    My Question: Do I need to worry about these exceptions? If I do need to get rid of them, any ideas on how I do it?
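    For reference, a sketch of the merge-policy setup the post describes, using one of the stock Core Data policies; which policy is appropriate depends on whose changes should win, and the coordinator variable is assumed from the surrounding setup:

        // Per-thread context whose saves are merged into the main context.
        NSManagedObjectContext *context = [[NSManagedObjectContext alloc] init];
        [context setPersistentStoreCoordinator:coordinator]; // the shared PSC

        // Resolve optimistic-locking conflicts by letting the in-memory
        // object's properties win over the persisted snapshot:
        [context setMergePolicy:NSMergeByPropertiesObjectTrumpMergePolicy];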

