Search Results

Search found 60107 results on 2405 pages for 'data oriented'.

  • Letting a WCF Data Service implement another contract

    - by Wasim
    Hi all, I have a WCF Data Service with the standard configuration. I want to add more functionality to it, so I built a contract interface and had my WCF Data Service implement it. The service now exposes both the InitializeService method and the contract interface methods. But when I try to connect to the service, I get an error that there is no endpoint declared for the contract I added. How can I declare one? Examples? Links? I chose to add the contract interface to the WCF Data Service rather than adding another service because the client application uses the objects generated from the data service, and I want to use those same objects for operations not related to data, for more complex processing. If I put the methods in another service, I get type incompatibilities. Thanks in advance...
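
    In plain WCF, a second contract on the same service class generally needs its own endpoint declared in web.config; with a WCF Data Service (typically hosted through DataServiceHostFactory) the data endpoint is wired up for you, so only the extra contract's endpoint would be declared explicitly. Below is a minimal sketch of what that declaration might look like - the service and contract names (MyApp.MyDataService, MyApp.IExtraOperations) are placeholders, and depending on the hosting factory the extra contract may still need separate hosting:

        <system.serviceModel>
          <services>
            <service name="MyApp.MyDataService">
              <!-- hypothetical endpoint for the added contract interface -->
              <endpoint address="ops"
                        binding="basicHttpBinding"
                        contract="MyApp.IExtraOperations" />
            </service>
          </services>
        </system.serviceModel>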

  • JavaScript - question regarding data structure

    - by orokusaki
    I'm trying to calculate somebody's bowel health based on a points system. I can change the data structure or logic in any way; I'm just trying to write a function and data structure to handle this. Pseudo calculator function: // Bowel health calculator var points = 0; If age is between 30 and 34: points += 1 If age is between 35 and 40: points += 2 If daily BMs is between 1 and 3: points -= 1 If daily BMs is between 4 and 6: points -= 2 return points; Pseudo data structure: var points_map = { age: { '30-34': 1, '35-40': 2 }, dbm: { '1-3': -1, '4-6': -2 } }; I have a full spreadsheet of data like this, and I am trying to write a DRY version of the code and a DRY version of this data (i.e., probably not strings like '30-34' as keys) in order to handle this sort of thing without a humongous number of switch statements.
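
    One possible shape for this, sketched below: store each category as an array of numeric ranges, so lookups need no string parsing and new rows from the spreadsheet are mechanical to add. The names (pointsMap, calculatePoints, the dbm key) are illustrative, not a prescribed API:

        // A minimal sketch: numeric ranges instead of string keys
        var pointsMap = {
          age: [ { min: 30, max: 34, points: 1 },  { min: 35, max: 40, points: 2 } ],
          dbm: [ { min: 1,  max: 3,  points: -1 }, { min: 4,  max: 6,  points: -2 } ]
        };

        function calculatePoints(values) {          // values: e.g. { age: 32, dbm: 5 }
          var total = 0;
          for (var category in values) {
            var ranges = pointsMap[category] || [];
            for (var i = 0; i < ranges.length; i++) {
              var r = ranges[i];
              if (values[category] >= r.min && values[category] <= r.max) {
                total += r.points;
                break;                              // a value falls in at most one range
              }
            }
          }
          return total;
        }

        calculatePoints({ age: 32, dbm: 5 });       // 1 + (-2) = -1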

  • Cloud HUGE data storage options?

    - by ToughPal
    Hi, Does anyone have a good suggestion on how to do video recording? We have cameras that record and then stream live video to a server, which means we can have thousands of cameras sending data 24x7 for recording. We will store data for 7, 14, or 30 days depending on the package. A camera sending data to the server stores about 1.5 GB per day. So that means roughly 1.5 GB/day/camera, or 45 GB/month/camera (data plus bandwidth for one camera). Please let me know the most cost-effective way to get this data stored. Thanks!

  • How to Embed/Link binary data into a C++ DLL

    - by CrimsonX
    So I have a Visual Studio 2008 project which references a large amount of binary data. I would like to package the binary data much like you can do with C#, by adding it as a "resource" and compiling it into a DLL. Let's say all my data has an extension of ".data" and is currently being read from the Visual Studio project. Is there a way to compile or link the data into the DLL that is using it? I've looked at some of the Google results for this and so far I haven't come up with anything - the only possible solution I've found is to use something like ResGen to create a .resources file and then link it using the Assembly Linker with the /Embed or /Link flags. I don't think that would work properly, though, because I don't have text files to create the .resources files from, but rather the binary files themselves. Any advice?
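
    For a native (non-CLR) DLL, the usual route is the Win32 resource system rather than ResGen: list each file as an RCDATA resource in a .rc script and read it back with the resource APIs. A minimal sketch, with hypothetical resource IDs:

        // resources.rc (compiled into the DLL by the resource compiler):
        //   IDR_DATA1 RCDATA "file1.data"

        #include <windows.h>

        // Looks up an RCDATA resource and returns a pointer into the module's
        // mapped image (no copy is made, so treat the bytes as read-only).
        bool GetEmbeddedData(HMODULE module, int resourceId,
                             const void** data, DWORD* size)
        {
            HRSRC res = FindResource(module, MAKEINTRESOURCE(resourceId), RT_RCDATA);
            if (res == NULL) return false;
            HGLOBAL mem = LoadResource(module, res);
            if (mem == NULL) return false;
            *data = LockResource(mem);
            *size = SizeofResource(module, res);
            return *data != NULL;
        }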

  • Managing test data for JUnit tests

    - by nobody
    Hi, We are facing a problem managing test data (XMLs which are used to create mock objects). The data we currently have has evolved over a long period of time. Each time we add new functionality or a new test case, we add new data to test that functionality. Now the problem is that when a business requirement changes the format (like the length or format of a variable), or makes any change the test data doesn't support, we need to change the entire test data set, which is hundreds of MBs in size. Could anyone suggest a better method or process to overcome this problem? Any suggestion would be appreciated.

  • How to extract data from Google Analytics and build a data warehouse (webhouse) from it?

    - by nkaur301
    I have clickstream data such as referring URL, top landing pages, and top exit pages, and metrics such as page views, number of visits, and bounces, all in Google Analytics. I am required to build a data warehouse from scratch (which I believe is known as a web-house) from this data. My questions are: 1) Is it possible? The data increases every day (some of it in terms of metrics or measures such as visits, and some in terms of new referring sites); how would the process of loading the warehouse work? 2) What ETL tool would help me achieve this? Pentaho, I believe, has a way to pull data out of Google Analytics - has anyone used it? How does that process go? Any references or links would be appreciated besides answers.

  • SQL SERVER – 5 Tips for Improving Your Data with expressor Studio

    - by pinaldave
    It’s no secret that bad data leads to bad decisions and poor results. However, how do you prevent dirty data from taking up residency in your data store? Some might argue that it’s the responsibility of the person sending you the data. While that may be true, in practice that will rarely hold up. It doesn’t matter how many times you ask, you will get the data however they decide to provide it. So now you have bad data. What constitutes bad data? There are quite a few valid answers - for example: invalid date values, inappropriate characters, wrong data, or values that exceed a pre-set threshold. While it is certainly possible to write your own scripts and custom SQL to identify and deal with these data anomalies, that effort often takes too long and becomes difficult to maintain. Instead, leveraging an ETL tool like expressor Studio makes the data cleansing process much easier and faster. Below are some tips for leveraging expressor to get your data into tip-top shape.

    Tip 1: Build reusable data objects with embedded cleansing rules. One of the new features in expressor Studio 3.2 is the ability to define constraints at the metadata level. Using expressor’s concept of Semantic Types, you can define reusable data objects that have embedded logic such as constraints for dealing with dirty data. Once defined, they can be saved as a shared atomic type and then re-applied to other data attributes in other schemas. As you can see in the figure above, I’ve defined a constraint on zip code. I can then save the constraint rules I defined for zip code as a shared atomic type called zip_type, for example. The next time I get a different data source with a schema that also contains a zip code field, I can simply apply the shared atomic type (shown below) and the previously defined constraints will be automatically applied.

    Tip 2: Unlock the power of regular expressions in Semantic Types. Another powerful feature introduced in expressor Studio 3.2 is the option to use regular expressions as constraints. A regular expression is used to identify patterns within data. The patterns could be something as simple as a date format or something much more complex such as a street address. For example, I could define that a valid IP address should be made up of 4 numbers, each 0 to 255, separated by periods. So 192.168.23.123 might be a valid IP address whereas 888.777.0.123 would not be. How can I account for this using regular expressions? A very simple regular expression that would look for any four groups of one to three digits separated by periods would be: ^[0-9]{1,3}\.[0-9]{1,3}\.[0-9]{1,3}\.[0-9]{1,3}$ Alternatively, the following would be the exact check for truly valid IP addresses as we defined above: ^(25[0-5]|2[0-4][0-9]|1[0-9]{2}|[1-9]?[0-9])\.(25[0-5]|2[0-4][0-9]|1[0-9]{2}|[1-9]?[0-9])\.(25[0-5]|2[0-4][0-9]|1[0-9]{2}|[1-9]?[0-9])\.(25[0-5]|2[0-4][0-9]|1[0-9]{2}|[1-9]?[0-9])$ In expressor, we would enter this regular expression as a constraint like this: Here we select the corrective action to be ‘Escalate’, meaning that the expressor Dataflow operator will decide what to do. Some of the options include rejecting the offending record, skipping it, or aborting the dataflow.

    Tip 3: Email pattern expressions that might come in handy. In the example schema that I am using, there’s a field for email. Email addresses are often entered incorrectly because people are trying to avoid spam. While there are a lot of different ways to define what constitutes a valid email address, a quick search online yields a couple of really useful regular expressions for validating them. This one is short and sweet: \b[A-Z0-9._%+-]+@[A-Z0-9.-]+\.[A-Z]{2,4}\b (Source: http://www.regular-expressions.info/) This one is more specific about which characters are allowed: ^([a-zA-Z0-9_\-\.]+)@((\[[0-9]{1,3}\.[0-9]{1,3}\.[0-9]{1,3}\.)|(([a-zA-Z0-9\-]+\.)+))([a-zA-Z]{2,4}|[0-9]{1,3})(\]?)$ (Source: http://regexlib.com/REDetails.aspx?regexp_id=26)

    Tip 4: Reject “dirty data” for analysis or further processing. Yet another feature introduced in expressor Studio 3.2 is the ability to reject records based on constraint violations. To capture rejected records on input, simply specify Reject Record in the Error Handling setting for the Read File operator, then attach a Write File operator to the reject port of the Read File operator. Next, in the Write File operator, you can configure the expressor operator in a similar way to the Read File; the key difference is that the schema needs to be derived from the upstream operator, as shown below. Once configured, expressor will output rejected records to the file you specified. In addition to the rejected records, expressor also captures some diagnostic information that is helpful in identifying why a record was rejected. This makes diagnosing errors much easier!

    Tip 5: Use a Filter or Transform after the initial cleansing to finish the job. Sometimes you may want to predicate the data cleansing on a more complex set of conditions. For example, I may only be interested in processing data for males over the age of 25 in certain zip codes. Using an expressor Filter operator, you can define the conditional logic which isolates the records of importance from the others. Alternatively, the expressor Transform operator can be used to alter the input value via a user-defined algorithm or transformation. It also supports the use of conditional logic, and data can be rejected based on constraint violations. However, the best tip I can leave you with is to not constrain your solution design approach - expressor operators can be combined in many different ways to achieve the desired results. For example, in the expressor Dataflow below, I can post-process the reject data from the Filter which did not meet my pre-defined criteria and, if successful, Funnel it back into the flow so that it gets written to the target table. I continue to be impressed that expressor offers all this functionality as part of their FREE expressor Studio desktop ETL tool, which you can download from here. Their Studio ETL tool is absolutely free, and they are very open about saying that if you want to deploy their software on a dedicated Windows Server, you need to purchase their server software, whose pricing is posted on their website.

    Reference: Pinal Dave (http://blog.SQLAuthority.com) Filed under: Pinal Dave, PostADay, SQL, SQL Authority, SQL Query, SQL Scripts, SQL Server, SQL Tips and Tricks, T SQL, Technology

  • Problem when trying to configure Enterprise Library 5.0 (Data Access Application Block)

    - by Phil
    Hi there Stack Overflow, I am running into some problems while trying to get the DAAB from Enterprise Library 5.0 running. I have followed the steps as per the tutorial, but am getting errors... 1) Download / install Enterprise Library 2) Add references to the blocks I need (Common / Data) 3) Add the imports: Imports Microsoft.Practices.EnterpriseLibrary.Common Imports Microsoft.Practices.EnterpriseLibrary.Data 4) Through the Enterprise Library configuration tool, I open up the web.config from my site. I then click Blocks, then Add Data Settings..., fill in my details and save / close 5) I then (thinking setup is complete) try to get an instance of the database via Dim db As Database = DatabaseFactory.CreateDatabase() 6) I compile and receive the following error: Could not load file or assembly 'Microsoft.Practices.EnterpriseLibrary.Data, Version=5.0.414.0, Culture=neutral, PublicKeyToken=31bf3856ad364e35' or one of its dependencies. The located assembly's manifest definition does not match the assembly reference. (Exception from HRESULT: 0x80131040) (C:\site\web.config line 4) Line 4 of my web.config was generated by the config tool and is: <section name="dataConfiguration" type="Microsoft.Practices.EnterpriseLibrary.Data.Configuration.DatabaseSettings, Microsoft.Practices.EnterpriseLibrary.Data, Version=5.0.414.0, Culture=neutral, PublicKeyToken=31bf3856ad364e35" requirePermission="true" /> Am I missing a required step? Have I done the steps in the wrong order? Have I made a mistake? Thanks a lot for the assistance.
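
    That particular exception usually means the assembly actually found at runtime carries a different version than the 5.0.414.0 the config section references - for example, an older copy sitting in the bin folder or the GAC. One thing worth checking (sketched below as an assumption, not a confirmed fix for this case) is whether a binding redirect in web.config resolves the mismatch, alongside verifying which DLL version is really deployed:

        <runtime>
          <assemblyBinding xmlns="urn:schemas-microsoft-com:asm.v1">
            <dependentAssembly>
              <assemblyIdentity name="Microsoft.Practices.EnterpriseLibrary.Data"
                                publicKeyToken="31bf3856ad364e35" culture="neutral" />
              <!-- hypothetical redirect: map any older referenced version to the
                   5.0.414.0 build the dataConfiguration section handler expects -->
              <bindingRedirect oldVersion="0.0.0.0-5.0.414.0" newVersion="5.0.414.0" />
            </dependentAssembly>
          </assemblyBinding>
        </runtime>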

  • Removing and adding persistent stores to a core data application

    - by mkko
    I'm using Core Data in an iPhone application. I have multiple persistent stores that I'm switching between so that only one of the stores is active at a time. I have one managed object context, and the different persistent stores are similar in data format (SQLite) and share the same managed object model. I'm importing the data into each persistent store from a respective XML file. The first import works fine, but after I remove the imported data (the persistent store and the physical file) and then re-import, Core Data gives me an error: *** Terminating app due to uncaught exception 'NSObjectInaccessibleException', reason: 'The NSManagedObject with ID:0x3c14e00 <x-coredata://6D14F11E-2EA7-4141-9BE8-53747DE6FCC6/Book/p2> has been invalidated.' This error comes from the save: of NSManagedObjectContext. Before re-importing, I'm removing the persistent store from the persistent store coordinator and deleting the physical file, so everything should be as if importing for the first time. Also, the objects in the managed object context are removed and the context is sent the reset message (I don't know if this is actually needed). Could someone help me out here? How should the persistent store be switched? I'm basically using the same logic as tutored here: http://blog.sallarp.com/iphone-core-data-uitableview-drill-down/ Thanks in advance.
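
    The exception suggests something (a fetched results controller, a view, an array) is still holding a managed object from the removed store when save: runs. A minimal sketch of one swap sequence - currentStore and storeURL are assumed to be tracked by the app, and error handling is trimmed:

        NSPersistentStoreCoordinator *psc = [context persistentStoreCoordinator];
        NSError *error = nil;

        // Drop every in-memory object tied to the old store *before* removing it;
        // any NSManagedObject retained elsewhere will be invalidated at this point.
        [context reset];

        if (![psc removePersistentStore:currentStore error:&error]) { /* handle */ }
        [[NSFileManager defaultManager] removeItemAtPath:[storeURL path] error:&error];

        // Re-add a fresh store at the same URL, then re-run the XML import.
        currentStore = [psc addPersistentStoreWithType:NSSQLiteStoreType
                                         configuration:nil
                                                   URL:storeURL
                                               options:nil
                                                 error:&error];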

  • Core data and @unionOfSets

    - by KevinD
    I'm having trouble using @unionOfSets on my Core Data objects. NSLog(@"%@", [NSApp valueForKeyPath:@"delegate.mainWindowController.sidebarViewController.arrayController.selection.list.listElement"]); prints the set of listElements as expected: 2010-03-24 18:11:15.844 Pirouette[7459:80f] Relationship objects for {( (entity: PRPlaylistElement; id: 0x10a71b0 ; data: ), (entity: PRPlaylistElement; id: 0x10ac7d0 ; data: ), (entity: PRPlaylistElement; id: 0x10acf60 ; data: ), (entity: PRPlaylistElement; id: 0x10a6850 ; data: ) However, when I try to get the set of file objects for each of the list elements: NSLog(@"%@", [NSApp valueForKeyPath:@"delegate.mainWindowController.sidebarViewController.arrayController.selection.list.listElement.@unionOfArrays.file"]); I get the following error: 2010-03-24 18:16:45.843 Pirouette[7505:80f] An uncaught exception was raised 2010-03-24 18:16:45.844 Pirouette[7505:80f] [<NSCFSet 0x10415e0> valueForKeyPath:]: this class does not implement the unionOfArrays operation. 2010-03-24 18:16:45.847 Pirouette[7505:80f] *** Terminating app due to uncaught exception 'NSInvalidArgumentException', reason: '[<NSCFSet 0x10415e0> valueForKeyPath:]: this class does not implement the unionOfArrays operation.' Confused, because I thought to-many relationships in Core Data were NSSets.
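
    For what it's worth, the error is consistent with that last observation: @unionOfArrays and @unionOfSets only operate on collections of collections, while listElement resolves to a single NSSet. A hedged guess at the intended key path would use @distinctUnionOfObjects, which collects the file value of each element of one collection:

        // sketch: the union of each listElement's 'file', assuming 'file' is to-one
        id files = [NSApp valueForKeyPath:
            @"delegate.mainWindowController.sidebarViewController."
            @"arrayController.selection.list.listElement."
            @"@distinctUnionOfObjects.file"];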

  • Archiver Securing SQLite Data without using Encryption on iPhone

    - by Redrocks
    I'm developing an iPhone app that uses Core Data with a SQLite data store and lots of images in the resource bundle. I want a "simple" way to obfuscate the file structure of the SQLite database and the image files, to prevent the casual hacker/unscrupulous developer from gaining access to them. When the app is deployed, the database file and image files would be obfuscated. Upon launching, the app would read in and un-obfuscate the database file, write the un-obfuscated version to the user's "tmp" directory for use by Core Data, and read/un-obfuscate image files as needed. I'd like to apply a simple algorithm to the files that would somehow scramble/manipulate the file data so that the SQLite database data isn't discernible when the db is opened in a text editor, and so that neither file type is recognized by other applications (SQLite Manager, Photoshop, etc.). It seems, from the information I've read, that I could use NSFileManager, NSKeyedArchiver, and NSData to accomplish this, but I'm not sure how to proceed. I've been developing software for many years, but I'm new to everything Cocoa Touch, Mac, and iPhone. Also, I've never had to secure/encrypt my data, so this is new. Any thoughts, suggestions, or links to solutions are appreciated.
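
    As one illustration of the "simple algorithm" idea - a deterrent only, trivially reversible and in no way encryption - a single-byte XOR pass over an NSData is symmetric, so the same function both obfuscates and restores a file. The function name and key below are made up for the sketch:

        // Applying this twice with the same key returns the original bytes.
        NSData *XORTransform(NSData *input, uint8_t key) {
            NSMutableData *output = [[input mutableCopy] autorelease];
            uint8_t *bytes = (uint8_t *)[output mutableBytes];
            for (NSUInteger i = 0; i < [output length]; i++) {
                bytes[i] ^= key;
            }
            return output;
        }

        // e.g. un-obfuscate the shipped store into tmp for Core Data to open:
        // NSData *raw = [NSData dataWithContentsOfFile:bundlePath];
        // [XORTransform(raw, 0x5A) writeToFile:tmpPath atomically:YES];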

  • How to sort data in a table data structure in Java?

    - by rgksugan
    I need to sort data based on the third column of the table data structure. I tried the approach from the answers to a similar question, but my sort does not work. Please help me with this. Here goes my code:

        Object[] data = new Object[y];
        rst.beforeFirst();
        while (rst.next()) {
            int p_id = Integer.parseInt(rst.getString(1));
            String sw2 = "select sum(quantity) from tbl_order_detail where product_id=" + p_id;
            rst1 = stmt1.executeQuery(sw2);
            rst1.next();
            String sw3 = "select max(order_date) from tbl_order where tbl_order.`Order_ID` in (select tbl_order_detail.`Order_ID` from tbl_order_detail where product_id=" + p_id + ")";
            rst2 = stmt2.executeQuery(sw3);
            rst2.next();
            data[i] = new Object[]{new String(rst.getString(2)), new String(rst.getString(3)), new Integer(rst1.getString(1)), new String(rst2.getString(1))};
            i++;
        }
        ColumnComparator cc = new ColumnComparator(2);
        Arrays.sort(data, cc);
        if (i == 0) {
            table.addCell("");
            table.addCell("");
            table.addCell("");
            table.addCell("");
        } else {
            for (int j = 0; j < y; j++) {
                Object[] theRow = (Object[]) data[j];
                table.addCell((String) theRow[0]);
                table.addCell((String) theRow[1]);
                table.addCell(theRow[2].toString()); // theRow[2] is an Integer; a String cast would throw
                table.addCell((String) theRow[3]);
            }
        }
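
    The snippet never shows ColumnComparator itself, and one likely failure mode is that Arrays.sort hits null slots whenever fewer than y rows were filled, so sizing the array to the actual row count (or trimming it before sorting) matters. A minimal comparator consistent with the call above, assuming every row is a fully populated Object[] whose sort column holds Comparable values, might look like:

        import java.util.Comparator;

        public class ColumnComparator implements Comparator<Object> {
            private final int column;

            public ColumnComparator(int column) {
                this.column = column;
            }

            @SuppressWarnings("unchecked")
            public int compare(Object a, Object b) {
                // each element of the outer array is itself an Object[] row
                Comparable<Object> left = (Comparable<Object>) ((Object[]) a)[column];
                Object right = ((Object[]) b)[column];
                return left.compareTo(right);
            }
        }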

  • Moving UITableView cells and maintaining consistent data

    - by Mark F
    I've enabled editing mode and cell reordering to let users position table view content in the order they please. I'm using Core Data as the data source, which sorts the content by the attribute "userOrder". When content is first inserted, userOrder is set to a random value. The idea is that when the user moves a cell, the userOrder of that cell changes to accommodate its new position. The following are problems I am running into while trying to accomplish this: Successfully saving the new location of the cell and adjusting the changed locations of all influenced cells. Keeping the data consistent. For example, the table view handles the movement fine, but when I click on the new location of the cell, it displays data for the old cell that used to be at that location. The data of all influenced cells gets messed up as well. I know I have to implement this in: - (void)tableView:(UITableView *)tableView moveRowAtIndexPath:(NSIndexPath *)sourceIndexPath toIndexPath:(NSIndexPath *)destinationIndexPath {} I just don't know how. The Apple docs are not particularly helpful if you are using Core Data, as in my situation. Any guidance greatly appreciated!
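
    One common pattern, sketched under the assumption that a fetched results controller (or an equivalent fetch sorted by userOrder) backs the table: reorder an in-memory copy, then renumber userOrder sequentially so the stored order always matches the visual order instead of relying on random values:

        - (void)tableView:(UITableView *)tableView
            moveRowAtIndexPath:(NSIndexPath *)sourceIndexPath
                   toIndexPath:(NSIndexPath *)destinationIndexPath
        {
            NSMutableArray *items = [[[self.fetchedResultsController fetchedObjects]
                                         mutableCopy] autorelease];
            NSManagedObject *moved = [items objectAtIndex:sourceIndexPath.row];
            [items removeObjectAtIndex:sourceIndexPath.row];
            [items insertObject:moved atIndex:destinationIndexPath.row];

            // renumber userOrder so it matches the new on-screen order
            for (NSUInteger i = 0; i < [items count]; i++) {
                [[items objectAtIndex:i] setValue:[NSNumber numberWithInteger:(NSInteger)i]
                                           forKey:@"userOrder"];
            }
            NSError *error = nil;
            [self.managedObjectContext save:&error];
        }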

  • Ongoing confusion about ivars and properties in Objective-C

    - by Earl Grey
    After almost 8 months of iOS programming, I am again confused about the right approach. Maybe it is not the language but some OOP principle I am confused about; I don't know. I tried C# a few years back. There were fields (private variables, private data in an object), there were getters and setters (methods which exposed something to the world), and there were properties, which were THE exposed thing. I liked the elegance of the solution. For example, there could be a class with a property called DailyRevenue - a float - but no private variable called dailyRevenue; there was only a field, an array of single-transaction revenues, and the getter for the DailyRevenue property calculated the revenue transparently. If the internals of the daily revenue calculation somehow changed, it would not affect anyone who consumed my DailyRevenue property, since they would be shielded from the getter implementation. I understood that sometimes there was, and sometimes there wasn't, a 1-1 relationship between fields and properties, depending on the requirements. That seemed OK to me, and properties were THE way to access the data in an object. I know the difference between the private, protected, and public keywords. Now let's get to Objective-C. On what factors should I base my decision about making something only an ivar versus making it a property? Is the mental model the same as I describe above? I know that ivars are "protected" by default, not "private" as in C#. But that's OK, I think; no big deal for my present level of understanding of iOS development. The point is that ivars are not accessible from outside (given that I don't make them public, but I won't). The thing that clouds my understanding is that I can have IBOutlets on ivars. Why am I seeing internal object data in the UI? Why is that OK? On the other hand, if I make an IBOutlet a property and do not make it readonly, anybody can change it. Is this OK too? Let's say I have a ParseManager object. This object would use a built-in Foundation framework class called NSXMLParser. Obviously my ParseManager will utilize this NSXMLParser's capabilities, but will also do some additional work. Now my question is: who should initialize this NSXMLParser object, and in which way should I keep a reference to it from the ParseManager object, when there is a need to parse something? A) The ParseManager: 1) in its default init method (possible here: ivar, or ivar+property), or 2) with lazy loading in a getter (a property is required here). B) Some other object, which will pass a reference to the NSXMLParser object to the ParseManager object: 1) in some custom initializer (initWithParser:(NSXMLParser *)parser) when creating the ParseManager object. A1 - the problem is, we create a parser and waste memory while it is not yet needed; however, we can be sure that all methods of the ParseManager object can use the ivar safely, since it exists. A2 - the problem is, the NSXMLParser is exposed to the outside world, although it could be read-only; would we even want a parser to be exposed in some scenario? B1 - this could maybe be useful if we wanted to use more types of parsers; I don't know... I understand that architectural requirements and language are not the same thing, but clearly the two are related. How do I get out of this mess of mine? Please bear with me; I wasn't able to come up with a single ultimate question.
    And secondly, please don't scare me with super-advanced newspeak about crazy internals (what the compiler does) and edge cases.
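
    On A2 specifically, the exposure is only as wide as the declaration: a readonly property in the header (redeclared readwrite in a class extension, or never redeclared at all) keeps the laziness a private detail. A small sketch, assuming a property named parser backed by an auto-synthesized _parser ivar and a hypothetical sourceURL property:

        // lazy-loading accessor, matching option A2
        - (NSXMLParser *)parser {
            if (_parser == nil) {
                _parser = [[NSXMLParser alloc] initWithContentsOfURL:self.sourceURL];
            }
            return _parser;
        }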

  • Project structure: where to put business logic

    - by Mister Smith
    First of all, I'm not asking where business logic belongs. This has been asked before, and most answers I've read agree that it belongs in the model: Where to put business logic in MVC design? How much business logic should be allowed to exist in the controller layer? How accurate is "Business logic should be in a service, not in a model"? Why put the business logic in the model? What happens when I have multiple types of storage? However, people disagree about the way this logic should be distributed across classes. There seem to be three major currents of thought: Fat model, with business logic inside entity classes. Anemic model, with business logic in "Service" classes. It depends. I find all of them problematic. The first option is what most Fowlerites stick to. The problem with a fat model is that sometimes a business logic function is not related to only one class, but instead uses a bunch of other classes. If, for example, we are developing a web store, there should be a function that calculates an order's total. We could think of putting this function inside the Order class, but what actually happens is that the logic needs to use different classes: not only data contained in the Order class, but also in the User class, the Session class, and maybe the Tax class, Country class, or Giftcard, Payment, etc. Some of these classes could be composed inside the Order class, but some others not. Sorry if the example is not very good, but I hope you understand what I mean. Putting such a function inside the Order class would break the single responsibility principle, adding unnecessary dependencies. The business logic would be scattered across entity classes, making it hard to find. The second option is the one I usually follow, but after many projects I'm still in doubt about how to name the class or classes holding the business logic. In my company we usually develop apps with offline capabilities. The user is able to perform entire transactions offline, so all validation and business rules should be implemented in the client, and then there's usually a background thread that syncs with the server. So we usually have the following classes/packages in every project: data model (DTOs), data access layer (persistence), and web services layer (usually one class per WS, and one method per WS method). Now for the business logic, what is the standard approach? A single class holding all the logic? Multiple classes? (If so, what criteria are used to distribute the logic across them?) And how should we name them? FooManager? FooService? (I know the last one is common, but in our case it is bad naming because the WS layer usually has classes named FooWebService.) The third option is probably the right one, but it is also devoid of any useful info. To sum up: I don't like the first approach, but I accept that I might have been unable to fully understand the Zen of it, so if you advocate fat models as the only and universal solution, you are welcome to post links explaining how to do it the right way. I'd like to know the standard design and naming conventions for the second approach in OO languages: class names and package structure in particular. It would also be helpful if you could include links to open source projects showing how it is done. Thanks in advance.
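
    To make the second approach concrete, here is a bare sketch of the kind of service class in question - all names are invented, and this shows one convention rather than the standard: the calculation lives in a class named after the task, receiving its collaborators explicitly instead of hiding them inside Order.

        import java.math.BigDecimal;

        interface TaxPolicy { BigDecimal taxFor(BigDecimal amount, String country); }
        interface Order { BigDecimal subtotal(); }

        // option 2: logic pulled out of the entity into a task-named service
        public class OrderPricingService {
            private final TaxPolicy taxPolicy;   // injected collaborator

            public OrderPricingService(TaxPolicy taxPolicy) {
                this.taxPolicy = taxPolicy;
            }

            public BigDecimal total(Order order, String country) {
                BigDecimal subtotal = order.subtotal();
                return subtotal.add(taxPolicy.taxFor(subtotal, country));
            }
        }

    Naming it after the domain task (OrderPricingService, or even OrderPricer) rather than a generic FooManager also sidesteps the collision with the FooWebService classes.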

  • iPhone SDK / Core Data usage scenario, similar to GAE data store?

    - by boliva
    Hi all, I am currently rewriting a map-based app which I wrote in the past, specifically for 2.2.1 devices. Originally I wrote it to use SQLite databases, but I would like to try to migrate it over to Core Data, now that it's available in 3.x (which I am rewriting for). I am fairly experienced in iPhone/Obj-C development, SQL, and server backend technologies, but I have never had the chance to work with Core Data, so I really don't know if it's the appropriate tool for what I am trying to accomplish. The app works on a limited area of a map over which there are about 4000 placemarks, with different kinds of icons and sizes. Of course not all 4000 placemarks are shown at once, but only those currently visible in the map viewport, depending on the zoom level. What I am doing right now is, after the user moves the map in any way (panning or zooming), requesting from the backend server the information for the placemarks that would be visible given the viewport coordinate boundaries and zoom level. However, the process isn't as smooth as I'd like (the backend sends its response as XML, which I compress with gzip); it takes anywhere from 1 to 3 seconds to update the display of the placemarks after the user stops moving the map. What I would like to do is prefetch all the placemark data at app launch and use it through the app's entire lifetime - I don't mind storing it for later use, because the data isn't meant to be dynamic. The way I would do it right now is, after retrieving all the data, to store it in an SQLite db which I would query later, whenever the user moves the map, to return only the placemarks inside the viewport coordinate boundaries and specific to a given zoom level. Now, the question itself is whether there is a more "native", object-driven way to carry out these queries, which got me thinking about Core Data and whether it is in any way similar to what Google App Engine offers through its datastore, where you can fetch a number of objects from the backend given a certain query or criteria, without resorting to an SQL query itself. Like I said before, I don't have any experience with Core Data, but I have a pretty deep understanding of Obj-C and iPhone development, as well as SQL databases. Any guidance on how to achieve what I'm trying (if possible at all) would be greatly appreciated.
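
    Core Data does work that way: criteria are expressed as an NSPredicate on a fetch request rather than SQL. A sketch of the viewport query, assuming a Placemark entity with latitude, longitude, and minZoom attributes, and south/north/west/east/zoomLevel variables from the map view (all names invented):

        NSFetchRequest *request = [[[NSFetchRequest alloc] init] autorelease];
        request.entity = [NSEntityDescription entityForName:@"Placemark"
                                     inManagedObjectContext:context];
        request.predicate = [NSPredicate predicateWithFormat:
            @"latitude >= %f AND latitude <= %f AND "
            @"longitude >= %f AND longitude <= %f AND minZoom <= %d",
            south, north, west, east, zoomLevel];

        NSError *error = nil;
        NSArray *visible = [context executeFetchRequest:request error:&error];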

  • Classifying captured data in unknown format?

    - by monch1962
    I've got a large set of captured data (potentially hundreds of thousands of records), and I need to be able to break it down so I can both classify it and also produce "typical" data myself. Let me explain further... If I have the following strings of data:

        132T339G1P112S
        164T897F5A498S
        144T989B9B223T
        155T928X9Z554T

    ... you might start to infer the following:

        - possibly all strings are 14 characters long
        - the 4th, 8th, 10th and 14th characters may always be alphas, while the rest are numeric
        - the first character may always be a '1'
        - the 4th character may always be the letter 'T'
        - the 14th character may be limited to only being 'S' or 'T'

    and so on... As you get more and more samples of real data, some of these "rules" might disappear; if you see a 15-character-long string, then you have evidence that the 1st "rule" is incorrect. However, given a sufficiently large sample of strings that are exactly 14 characters long, you can start to assume that "all strings are 14 characters long" and assign a numeric figure to your degree of confidence (with an appropriate set of assumptions around the fact that you're seeing a suitably random set of all possible captured data). As you can probably tell, a human can do a lot of this classification by eye, but I'm not aware of libraries or algorithms that would allow a computer to do it. Given a set of captured data (significantly more complex than the above...), are there libraries that I can apply in my code to do this sort of classification for me, that will identify "rules" with a given degree of confidence? As a next step, I need to be able to take those rules and use them to create my own data that conforms to them. I assume this is a significantly easier step than the classification, but I've never had to perform a task like this before, so I'm really not sure how complex it is. At a guess, Python or Java (or possibly Perl or R) are the "common" languages most likely to have these sorts of libraries, and maybe some of the bioinformatics libraries do this sort of thing. I really don't care which language I have to use; I need to solve the problem in whatever way I can. Any sort of pointer to information would be very useful. As you can probably tell, I'm struggling to describe this problem clearly, and there may be a set of appropriate keywords I can plug into Google that will point me towards the solution. Thanks in advance.
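
    There may well be ready-made libraries for this, but the positional part of the analysis is simple enough to sketch directly. The fragment below (plain Python, all names invented) groups samples by length, tallies the character set seen at each position, and reports a "rule" whenever that set is small; length rules fall out of the grouping itself, and the confidence figure here is just the crude share of samples supporting the group:

        from collections import defaultdict

        def infer_position_rules(samples, max_alternatives=5):
            """Group samples by length, then report positions whose observed
            character set is small enough to look like a rule."""
            by_length = defaultdict(list)
            for s in samples:
                by_length[len(s)].append(s)

            rules = {}
            for length, group in by_length.items():
                confidence = len(group) / len(samples)
                for pos in range(length):
                    seen = {s[pos] for s in group}
                    if len(seen) <= max_alternatives:
                        rules[(length, pos)] = (sorted(seen), confidence)
            return rules

        samples = ["132T339G1P112S", "164T897F5A498S", "144T989B9B223T", "155T928X9Z554T"]
        for (length, pos), (chars, conf) in sorted(infer_position_rules(samples).items()):
            print(f"len {length}, pos {pos + 1}: one of {chars} (support {conf:.0%})")

    Generating "typical" data is then the inverse: for each position, pick uniformly from the observed character set (or from all digits/alphas where no narrow rule was found).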

  • Populate a Core Data model from JSON files prior to app start

    - by johannes_d
    I am creating an iPad app that displays data I got from an API in JSON format. My Core Data model has several entities (Countries, Events, Talks, ...). For each entity I have one .json file that contains all instances of the entity and its attributes, as well as its relationships. I would like to populate my Core Data model with these entities before the first start of the app (otherwise it takes about 15 minutes for the iPad to create all the instances of the entities from the several JSON files using factory methods). I am currently importing the data into Core Data like this:

        - (void)fetchDataIntoDocument:(UIManagedDocument *)document {
            dispatch_queue_t dataQ = dispatch_queue_create("Data import", NULL);
            dispatch_async(dataQ, ^{
                // Fetching data from the application bundle
                NSURL *countriesURL = [[NSBundle mainBundle] URLForResource:@"countries" withExtension:@"json"];
                NSURL *eventsURL = [[NSBundle mainBundle] URLForResource:@"events" withExtension:@"json"];

                // Converting the JSON files to NSDictionaries
                NSError *error = nil;
                NSDictionary *countries = [NSJSONSerialization JSONObjectWithData:[NSData dataWithContentsOfURL:countriesURL] options:kNilOptions error:&error];
                countries = [countries objectForKey:@"countries"];
                NSDictionary *events = [NSJSONSerialization JSONObjectWithData:[NSData dataWithContentsOfURL:eventsURL] options:kNilOptions error:&error];
                events = [events objectForKey:@"events"];

                // Creating entities using factory methods in NSManagedObject subclasses (Country / Event)
                [document.managedObjectContext performBlock:^{
                    NSLog(@"creating countries");
                    for (NSDictionary *country in countries) {
                        [Country countryWithCountryInfo:country inManagedObjectContext:document.managedObjectContext];
                    }
                    NSLog(@"creating events");
                    for (NSDictionary *event in events) {
                        [Event eventWithEventInfo:event inManagedObjectContext:document.managedObjectContext];
                    }
                    NSLog(@"done creating, saving document");
                    [document saveToURL:document.fileURL forSaveOperation:UIDocumentSaveForOverwriting completionHandler:NULL];
                }];
            });
            dispatch_release(dataQ);
        }

    This combines the different JSON files into one UIManagedDocument, on which I can then perform fetch requests to populate table views, a map view, etc. I'm looking for a way to create this document outside my application and add it to the main bundle. Then I could copy it once to the app's Documents directory and use it there, instead of creating the document within the app from the original JSON files. Any help is appreciated!
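
    For the shipping side of that idea, the first-launch copy is typically just an NSFileManager call; UIManagedDocument is a file package (a directory), which copyItemAtURL:toURL:error: handles. A sketch, assuming the pre-built document is bundled under the hypothetical name "Seed.data":

        // copy the pre-populated document once, before opening it as usual
        NSFileManager *fm = [NSFileManager defaultManager];
        NSURL *docsURL = [[fm URLsForDirectory:NSDocumentDirectory
                                     inDomains:NSUserDomainMask] lastObject];
        NSURL *targetURL = [docsURL URLByAppendingPathComponent:@"Seed.data"];

        if (![fm fileExistsAtPath:[targetURL path]]) {
            NSURL *seedURL = [[NSBundle mainBundle] URLForResource:@"Seed"
                                                     withExtension:@"data"];
            NSError *error = nil;
            [fm copyItemAtURL:seedURL toURL:targetURL error:&error];
        }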

  • Best way to load application settings

    - by enzom83
    A simple way to keep the settings of a Java application is a text file with a ".properties" extension, containing the identifier of each setting associated with a specific value (this value may be a number, string, date, etc.). C# uses a similar approach, but the text file must be named "App.config". In both cases, in source code you must initialize a specific class for reading settings: this class has a method that returns the value (as a string) associated with the specified setting identifier.

        // Java example
        Properties config = new Properties();
        config.load(...);
        String valueStr = config.getProperty("listening-port");
        // ...

        // C# example
        NameValueCollection setting = ConfigurationManager.AppSettings;
        string valueStr = setting["listening-port"];
        // ...

    In both cases we should parse the strings loaded from the configuration file and assign the converted values to the related typed objects (parsing errors could occur during this phase). After the parsing step, we must check that the setting values belong to a specific domain of validity: for example, the maximum size of a queue should be a positive value, some values may be related (example: min < max), and so on. Suppose that the application should load the settings as soon as it starts: in other words, the first operation performed by the application is to load the settings. Any invalid values for the settings must be replaced automatically with default values; if this happens to a group of related settings, those settings are all set to default values. The easiest way to perform these operations is to create a method that first parses all the settings, then checks the loaded values, and finally sets any default values. However, maintenance is difficult with this approach: as the number of settings increases while developing the application, it becomes increasingly difficult to update the code. In order to solve this problem, I had thought of using the Template Method pattern, as follows:

        public abstract class Setting
        {
            protected abstract bool TryParseValues();
            protected abstract bool CheckValues();
            public abstract void SetDefaultValues();

            /// <summary>
            /// Template Method
            /// </summary>
            public bool TrySetValuesOrDefault()
            {
                if (!TryParseValues() || !CheckValues())
                {
                    // parsing error or domain error
                    SetDefaultValues();
                    return false;
                }
                return true;
            }
        }

        public class RangeSetting : Setting
        {
            private string minStr, maxStr;
            private byte min, max;

            public RangeSetting(string minStr, string maxStr)
            {
                this.minStr = minStr;
                this.maxStr = maxStr;
            }

            protected override bool TryParseValues()
            {
                return (byte.TryParse(minStr, out min) && byte.TryParse(maxStr, out max));
            }

            protected override bool CheckValues()
            {
                return (0 < min && min < max);
            }

            public override void SetDefaultValues()
            {
                min = 5;
                max = 10;
            }
        }

    The problem is that this way we need to create a new class for each setting, even for a single value. Are there other solutions to this kind of problem? In summary: Easy maintenance: for example, the addition of one or more parameters. Extensibility: a first version of the application could read a single configuration file, but later versions may offer a multi-user setup (admin sets up a basic configuration, users can set only certain settings, etc.). Object-oriented design.
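
    One direction to explore - a sketch, not a drop-in design: keep the Template Method shape but let a single generic class take the parse/check/default pieces as delegates, so most settings become one-liners instead of new subclasses. All names below are invented:

        // generic setting: parsing, validation and defaults supplied as delegates
        public class Setting<T>
        {
            public delegate bool TryParser(string s, out T value);

            private readonly TryParser parse;
            private readonly Func<T, bool> isValid;
            private readonly T defaultValue;

            public T Value { get; private set; }

            public Setting(TryParser parse, Func<T, bool> isValid, T defaultValue)
            {
                this.parse = parse;
                this.isValid = isValid;
                this.defaultValue = defaultValue;
            }

            public bool TrySetValueOrDefault(string raw)
            {
                T value;
                if (parse(raw, out value) && isValid(value))
                {
                    Value = value;
                    return true;
                }
                Value = defaultValue;   // parsing error or domain error
                return false;
            }
        }

        // usage: var port = new Setting<int>(int.TryParse, p => p > 0 && p < 65536, 8080);
        //        port.TrySetValueOrDefault(config["listening-port"]);

    Related settings (the min < max case) would still need a small composite that validates the pair, but only genuinely coupled values pay that cost.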

  • Lost all data on Windows XP after blue screen

    - by Barb
    I got a blue screen and was trying to boot with my OS disk. Frankly, I was unsure exactly how to do this. I was trying everything, and booted in partition mode. Finally, I booted with the disk, ran chkdsk /r, and was able to log into Windows. But all of my files and pictures are gone. I have no backup, and I'm sick to think that I lost the last seven years of pictures of my kids. What can I do?

  • Data Protection Manager System Protection Backups Failing

    - by TrueDuality
    I'm just starting to set up DPM 2010 in a test environment with a domain controller and a file server. Everything seems to be working fairly well, and I can get all of my backup jobs to succeed except for the "Computer\System Protection" backups. Both servers are running fully up-to-date 64-bit Windows Server 2008 R2 Enterprise with Service Pack 1. The error being reported is: DPM cannot create a backup because Windows Server Backup (WSB) on the protected computer encountered an error (WSB Event ID: 517, WSB Error Code: 0x8078001D). (ID 30229 Details: Internal error code: 0x809909FB) This Microsoft Knowledge Base article describes the issue perfectly and provides a hotfix. I downloaded the hotfix, moved it onto the affected server, attempted to run it, and received the following error: The update is not applicable to your computer. I've verified that I have indeed downloaded the 64-bit version. According to this thread, the hotfix got rolled into Service Pack 1, yet I'm still experiencing the issue. Both machines do have the Windows Server Backup feature installed. Can anybody point me in the right direction? What am I missing?

  • Server hang - data loss on reboot, post mortem analysis

    - by rovangju
    A development server I'm responsible for (ext3 on RAID 5 with Debian Squeeze) froze up over the weekend and I was forced to reset it - it was unresponsive from KVM/physical keyboard access, no eth devices responding, etc. Not even the backup process ran (figures, the one time I don't check for confirmation). After the reset, it turns out that every trace of disk I/O activity that should have happened over a period of ~24h is completely gone. The log files have a big gap in the dates and times, as if the writes were never committed to disk and no processes ever ran. Luckily it was a weekend, nothing of value would have been lost, and I don't suspect a hack. What can I do in a post-mortem of this event to prevent it from ever happening again? I've seen this happen before on a completely different machine running FreeBSD. I am rounding up the disk-checking tools right now, but there must be more going on! Mount options: /dev/sda1 on / type ext3 (rw,errors=remount-ro) Kernel: Linux dev 2.6.32-5-686-bigmem Disk/Inodes: 13%/3%

  • How to avoid damage to ISO archives?

    - by TMRW
    So I had a problem where a 16 GB ISO was damaged (likely my own fault, using the standard Windows copy dialog instead of a proper copy tool like robocopy with verification turned on). It took several hours, but I managed to restore the ISO (basically I rebuilt the damaged parts and recompiled). Namely, some .rar archives inside it were unreadable, but the ISO itself was readable. So I'm wondering how I can further protect something like this from happening again. Obviously a proper copy tool, but maybe something else? Perhaps setting the files as "read only" could help? I generally don't move these files a lot, and if I need to access them it's only for opening/extracting.

  • Recover data from a corrupted virtualbox vmdk file?

    - by Neth
    The power went out while I was doing a build on a VirtualBox machine, and when I restarted, the vmdk for the disk the VM was using was corrupted, apparently irrecoverably. I have been able to grep the 66 GB vmdk file, and it finds strings from the code I was working on that hadn't gotten into Subversion yet (yeah, yeah, I know). But the strings are either in the shell history or what look to be strings inside object files. Any ideas for finding/recovering the source code? If it helps, the VM was Linux, Fedora Core 10, on an ext3 filesystem. The host is Ubuntu 10.04 amd64 with an ext4 filesystem.

  • Recover data from physically damaged harddrive. What are my options?

    - by Michael Kniskern
    I was trying to replace the power supply in my desktop PC and ended up physically damaging the data connection from the hard drive to the motherboard. The plastic shelf for the copper prongs on the hard drive broke into the cable. Here is a picture of my handiwork: I went to the Best Buy Geek Squad to discuss my options, and they said they would need to send it to the recovery center, and that it could cost anywhere between $250 and $1600 USD to recover the data from the hard drive. Is this reasonable for data recovery from a physically damaged hard drive? Are there any other options I can explore? I am going to talk to Data Doctors to see what my options are. Update: I took the HD to Data Doctors, and they told me that the SATA connector was broken, so they would need to replace the data connector and then copy the data to a brand new hard drive. So, with the initial analysis, the cost of replacement parts, and the data recovery fee, it came out to $865.00 USD. The technician specifically stated that if this were an older hard drive, they would just need to replace the data connector; but because there is information specific to the individual hard drive in the flash ROM, they need to transfer the data to a brand new hard drive.
