Search Results

Search found 60939 results on 2438 pages for 'data quality'.


  • A training world nugget for being taught by the best

    - by Testas
    June represents an exciting time for the SQL Server community, with events all over the country in the next few months and plenty of knowledge to be gained from willing speakers enthusiastically sharing what they know. Furthermore, Paul Randal and Kimberly Tripp will be conducting their highly recommended Immersion Events at London Heathrow in June. There are other big names within SQL Server who will be teaching this year. The company I used to work for, QA, has excellent trainers teaching SQL Server whom I would always recommend. Occasionally a big-name speaker will take a course, unknown to the wider community; Solid Quality Mentors is such a company, and their staff teach at QA offices from time to time. I know from a conversation with Itzik Ben-Gan that he will be teaching Advanced T-SQL at QA's offices in London during the week of Oct 3-7. A link to the course details can be found here:
    http://www.qa.com/training-courses/technical-it-training/microsoft/microsoft-sql-server/microsoft-sql-server-2008-and-r2/advanced-t-sql-querying,-programming-and-tuning-for-sql-server-2005--2008
    So if you want to be taught by the best in the business, consider checking www.QA.com for their advanced SQL courses.
    Chris

    Read the article

  • How to ignore certain coding standard errors in PHP CodeSniffer

    - by Tom
    We have a PHP 5 web application and we're currently evaluating PHP CodeSniffer in order to decide whether enforcing coding standards improves code quality without causing too much of a headache. If it seems good we will add an SVN pre-commit hook to ensure all new files committed on the dev branch are free from coding standard smells. Is there a way to configure PHP CodeSniffer to ignore a particular type of error, or to get it to treat a certain error as a warning instead? Here's an example to demonstrate the issue:

    <!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Transitional//EN" "http://www.w3.org/TR/xhtml1/DTD/xhtml1-transitional.dtd">
    <html>
        <head>
            <meta http-equiv="Content-Type" content="text/html; charset=utf-8" />
        </head>
        <body>
            <div>
                <?php
                echo getTabContent('Programming', 1, $numX, $numY);
                if (isset($msg)) {
                    echo $msg;
                }
                ?>
            </div>
        </body>
    </html>

    And this is the output of PHP_CodeSniffer:

    > phpcs test.php
    --------------------------------------------------------------------------------
    FOUND 2 ERROR(S) AND 1 WARNING(S) AFFECTING 3 LINE(S)
    --------------------------------------------------------------------------------
      1 | WARNING | Line exceeds 85 characters; contains 121 characters
      9 | ERROR   | Missing file doc comment
     11 | ERROR   | Line indented incorrectly; expected 0 spaces, found 4
    --------------------------------------------------------------------------------

    I have an issue with the "Line indented incorrectly" error. I guess it happens because I am mixing the PHP indentation with the HTML indentation, but this makes it more readable, doesn't it? (Taking into account that I don't have the resources to move to an MVC framework right now.) So I'd like to ignore it, please.

    Read the article

  • Unit and Integration testing: How can it become a reflex

    - by LordOfThePigs
    All the programmers in my team are familiar with unit testing and integration testing. We have all worked with it. We have all written tests with it. Some of us have even felt an improved sense of trust in our own code. However, for some reason, writing unit/integration tests has not become a reflex for any of the members of the team. None of us actually feels bad when not writing unit tests at the same time as the actual code. As a result, our codebase is mostly uncovered by unit tests, and projects enter production untested. The problem with that, of course, is that once your projects are in production and are already working well, it is virtually impossible to obtain time and/or budget to add unit/integration testing. The members of my team and I are already familiar with the value of unit testing (1, 2) but it doesn't seem to help bring unit testing into our natural workflow. In my experience, making unit tests and/or a target coverage mandatory just results in poor-quality tests and slows down team members, simply because there is no self-generated motivation to produce these tests. Also, as soon as pressure eases, unit tests are not written any more. My question is the following: Are there any methods that you have experimented with that help build a dynamic/momentum inside the team, leading to people naturally wanting to create and maintain those tests?

    Read the article

  • Code Measuring and Metrics Tools?

    - by David
    I'm in the process of setting up a build server for personal projects. This server will handle all the normal CI stuff, including running large suites of tests (unit, integration, automated UI). While I'm working out the kinks for including code coverage output with MSTest, it occurs to me that there may be lots of tools out there which give me additional metrics other than just code coverage. FxCop comes to mind as an example, though I'm sure there are others. Anything that can generate useful reportable data and metrics would be good, whether it's class dependency charts (looking for Law of Demeter violations, for example), analyses of the uses of classes/functions (looking for a function that isn't used in the system other than in the tests, for example), and so on. I'm not sure of the right way to formulate the question, since polling questions or "What's your favorite code analysis tool" aren't very good. But I'm essentially just looking for recommendations on what metrics to gather and the tools that can gather them. The eventual vision for something like this is to have the CI server run a bunch of automated tests and analysis tools and track performance metrics over time. Imagine a dashboard full of graphs plotting these metrics over time. The lines should all stay relatively at equilibrium, and if one starts to stray toward the negative then it's an early indication of problems with the code. In the age-old struggle to quantify code quality with management, this sounds like a potentially helpful means of doing just that.

    Read the article

  • Which things instantly ring alarm bells when looking at code? [closed]

    - by FinnNk
    I attended a software craftsmanship event a couple of weeks ago and one of the comments made was "I'm sure we all recognize bad code when we see it" and everyone nodded sagely without further discussion. This sort of thing always worries me, as there's that truism that everyone thinks they're an above-average driver. Although I think I can recognize bad code, I'd love to learn more about what other people consider to be code smells, as it's rarely discussed in detail on people's blogs and only in a handful of books. In particular I think it'd be interesting to hear about anything that's a code smell in one language but not another. I'll start off with an easy one: code in source control that has a high proportion of commented-out code - why is it there? Was it meant to be deleted? Is it a half-finished piece of work? Maybe it shouldn't have been commented out and was only done when someone was testing something out? Personally I find this sort of thing really annoying even if it's just the odd line here and there, but when you see large blocks interspersed with the rest of the code it's totally unacceptable. It's also usually an indication that the rest of the code is likely to be of dubious quality as well.

    Read the article

  • SQL SERVER – 5 Tips for Improving Your Data with expressor Studio

    - by pinaldave
    It's no secret that bad data leads to bad decisions and poor results. However, how do you prevent dirty data from taking up residency in your data store? Some might argue that it's the responsibility of the person sending you the data. While that may be true, in practice that will rarely hold up. It doesn't matter how many times you ask, you will get the data however they decide to provide it. So now you have bad data. What constitutes bad data? There are quite a few valid answers, for example:

    - Invalid date values
    - Inappropriate characters
    - Wrong data
    - Values that exceed a pre-set threshold

    While it is certainly possible to write your own scripts and custom SQL to identify and deal with these data anomalies, that effort often takes too long and becomes difficult to maintain. Instead, leveraging an ETL tool like expressor Studio makes the data cleansing process much easier and faster. Below are some tips for leveraging expressor to get your data into tip-top shape.

    Tip 1: Build reusable data objects with embedded cleansing rules

    One of the new features in expressor Studio 3.2 is the ability to define constraints at the metadata level. Using expressor's concept of Semantic Types, you can define reusable data objects that have embedded logic such as constraints for dealing with dirty data. Once defined, they can be saved as a shared atomic type and then re-applied to other data attributes in other schemas. As you can see in the figure above, I've defined a constraint on zip code. I can then save the constraint rules I defined for zip code as a shared atomic type called zip_type, for example. The next time I get a different data source with a schema that also contains a zip code field, I can simply apply the shared atomic type (shown below) and the previously defined constraints will be automatically applied.

    Tip 2: Unlock the power of regular expressions in Semantic Types

    Another powerful feature introduced in expressor Studio 3.2 is the option to use regular expressions as a constraint. A regular expression is used to identify patterns within data. The patterns could be something as simple as a date format or something much more complex such as a street address. For example, I could define that a valid IP address should be made up of 4 numbers, each 0 to 255, and separated by a period. So 192.168.23.123 might be a valid IP address whereas 888.777.0.123 would not be. How can I account for this using regular expressions?

    A very simple regular expression that would look for any four groups of one to three digits separated by periods would be:

    ^[0-9]{1,3}\.[0-9]{1,3}\.[0-9]{1,3}\.[0-9]{1,3}$

    Alternatively, the following would be the exact check for truly valid IP addresses as we defined above:

    ^(25[0-5]|2[0-4][0-9]|1[0-9]{2}|[1-9]?[0-9])\.(25[0-5]|2[0-4][0-9]|1[0-9]{2}|[1-9]?[0-9])\.(25[0-5]|2[0-4][0-9]|1[0-9]{2}|[1-9]?[0-9])\.(25[0-5]|2[0-4][0-9]|1[0-9]{2}|[1-9]?[0-9])$

    In expressor, we would enter this regular expression as a constraint like this: here we select the corrective action to be 'Escalate', meaning that the expressor Dataflow operator will decide what to do. Some of the options include rejecting the offending record, skipping it, or aborting the dataflow.

    Tip 3: Email pattern expressions that might come in handy

    In the example schema that I am using, there's a field for email. Email addresses are often entered incorrectly because people are trying to avoid spam. While there are a lot of different ways to define what constitutes a valid email address, a quick search online yields a couple of really useful regular expressions for validating email addresses.

    This one is short and sweet:

    \b[A-Z0-9._%+-]+@[A-Z0-9.-]+\.[A-Z]{2,4}\b
    (Source: http://www.regular-expressions.info/)

    This one is more specific about which characters are allowed:

    ^([a-zA-Z0-9_\-\.]+)@((\[[0-9]{1,3}\.[0-9]{1,3}\.[0-9]{1,3}\.)|(([a-zA-Z0-9\-]+\.)+))([a-zA-Z]{2,4}|[0-9]{1,3})(\]?)$
    (Source: http://regexlib.com/REDetails.aspx?regexp_id=26)

    Tip 4: Reject "dirty data" for analysis or further processing

    Yet another feature introduced in expressor Studio 3.2 is the ability to reject records based on constraint violations. To capture reject records on input, simply specify Reject Record in the Error Handling setting for the Read File operator, then attach a Write File operator to the reject port of the Read File operator. Next, in the Write File operator, you can configure the expressor operator in a similar way to the Read File; the key difference is that the schema needs to be derived from the upstream operator. Once configured, expressor will output rejected records to the file you specified. In addition to the rejected records, expressor also captures some diagnostic information that will be helpful towards identifying why the record was rejected. This makes diagnosing errors much easier!

    Tip 5: Use a Filter or Transform after the initial cleansing to finish the job

    Sometimes you may want to predicate the data cleansing on a more complex set of conditions. For example, I may only be interested in processing data containing males over the age of 25 in certain zip codes. Using an expressor Filter operator, you can define the conditional logic which isolates the records of importance away from the others. Alternatively, the expressor Transform operator can be used to alter the input value via a user-defined algorithm or transformation. It also supports the use of conditional logic, and data can be rejected based on constraint violations.

    However, the best tip I can leave you with is to not constrain your solution design approach - expressor operators can be combined in many different ways to achieve the desired results. For example, in the expressor Dataflow below, I can post-process the reject data from the Filter which did not meet my pre-defined criteria and, if successful, Funnel it back into the flow so that it gets written to the target table.

    I continue to be impressed that expressor offers all this functionality as part of their FREE expressor Studio desktop ETL tool, which you can download from here. Their Studio ETL tool is absolutely free, and they are very open about saying that if you want to deploy their software on a dedicated Windows Server, you need to purchase their server software, whose pricing is posted on their website.

    Reference: Pinal Dave (http://blog.SQLAuthority.com)

    Filed under: Pinal Dave, PostADay, SQL, SQL Authority, SQL Query, SQL Scripts, SQL Server, SQL Tips and Tricks, T SQL, Technology
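
    The regular expressions above are not specific to expressor, so here is a minimal, self-contained Java sketch (the class and variable names are purely illustrative, not from the article) showing how the strict IP-address pattern from Tip 2 and the short email pattern from Tip 3 behave on the sample values used in the post:

    import java.util.regex.Pattern;

    public class RegexConstraintDemo {
        // Exact IPv4 check from Tip 2: four octets, each 0-255.
        private static final Pattern IP = Pattern.compile(
                "^(25[0-5]|2[0-4][0-9]|1[0-9]{2}|[1-9]?[0-9])\\."
                + "(25[0-5]|2[0-4][0-9]|1[0-9]{2}|[1-9]?[0-9])\\."
                + "(25[0-5]|2[0-4][0-9]|1[0-9]{2}|[1-9]?[0-9])\\."
                + "(25[0-5]|2[0-4][0-9]|1[0-9]{2}|[1-9]?[0-9])$");

        // "Short and sweet" email pattern from Tip 3; compiled case-insensitively
        // here because the original character classes are upper-case only.
        private static final Pattern EMAIL = Pattern.compile(
                "\\b[A-Z0-9._%+-]+@[A-Z0-9.-]+\\.[A-Z]{2,4}\\b",
                Pattern.CASE_INSENSITIVE);

        public static void main(String[] args) {
            System.out.println(IP.matcher("192.168.23.123").matches());      // true
            System.out.println(IP.matcher("888.777.0.123").matches());       // false
            System.out.println(EMAIL.matcher("someone@example.com").find()); // true
            System.out.println(EMAIL.matcher("not-an-email").find());        // false
        }
    }

    Inside expressor itself you would paste the pattern into the constraint dialog rather than write code; the sketch is only meant to make it easy to test a pattern before wiring it into a Semantic Type.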

    Read the article

  • Problem when trying to configure enterprise library 5.0 (Data Access Application Block)

    - by Phil
    Hi there Stack Overflow, I am running into some problems while trying to get the DAAB from Enterprise Library 5.0 running. I have followed the steps as per the tutorial, but am getting errors...

    1) Download / install Enterprise Library
    2) Add references to the blocks I need (Common / Data)
    3) Imports:
       Imports Microsoft.Practices.EnterpriseLibrary.Common
       Imports Microsoft.Practices.EnterpriseLibrary.Data
    4) Through the Enterprise Library config tool, I open up the web.config from my site. I then click Blocks, then Add data settings..., fill in my details and save / close.
    5) I then (thinking setup is complete) try to get an instance of the database via:
       Dim db As Database = DatabaseFactory.CreateDatabase()
    6) I compile and receive the following error:

    Could not load file or assembly 'Microsoft.Practices.EnterpriseLibrary.Data, Version=5.0.414.0, Culture=neutral, PublicKeyToken=31bf3856ad364e35' or one of its dependencies. The located assembly's manifest definition does not match the assembly reference. (Exception from HRESULT: 0x80131040) (C:\site\web.config line 4)

    Line 4 of my web.config was generated by the config tool and is:

    <section name="dataConfiguration" type="Microsoft.Practices.EnterpriseLibrary.Data.Configuration.DatabaseSettings, Microsoft.Practices.EnterpriseLibrary.Data, Version=5.0.414.0, Culture=neutral, PublicKeyToken=31bf3856ad364e35" requirePermission="true" />

    Am I missing a required step? Have I done the steps in the wrong order? Have I made a mistake? Thanks a lot for the assistance.

    Read the article

  • Removing and adding persistent stores to a core data application

    - by mkko
    I'm using Core Data in an iPhone application. I have multiple persistent stores and I'm switching from one to another so that only one of the stores can be active at a time. I have one managed object context, and the different persistent stores are similar in data format (SQLite) and share the same managed object model. I'm importing the data into each persistent store from a respective XML file. For the first import everything works fine, but after I remove the imported data (the persistent store and the physical file) and then re-import, Core Data gives me an error:

    *** Terminating app due to uncaught exception 'NSObjectInaccessibleException', reason: 'The NSManagedObject with ID:0x3c14e00 <x-coredata://6D14F11E-2EA7-4141-9BE8-53747DE6FCC6/Book/p2> has been invalidated.'

    This error comes from the save: of NSManagedObjectContext. Before re-importing, I'm removing the persistent store from the persistent store coordinator and removing the physical file, so everything should be as if re-importing was done for the first time. Also, the objects in the managed object context are removed and the context is sent the reset: message (I don't know if this is actually needed). Could someone help me out here? How should the persistent store be switched? I'm basically using the same logic as tutored here: http://blog.sallarp.com/iphone-core-data-uitableview-drill-down/ Thanks in advance.

    Read the article

  • Windows-Mobile Directshow: Specifying bitrate/quality of a WMV video capture

    - by Landstander
    Hi - I'm stumped on this, and I'm really hoping someone could point me in the right direction. I'm currently capturing video in Windows Mobile and encoding it using the WMV 9 DMO (CLSID_CWMV9EncMediaObject). That all works well enough, but the output video's bitrate is too high, resulting in a video file that's much too large for my needs. Ultimately, my goal is to mimic the video settings that Microsoft's Camera Capture dialog outputs in the "messaging" quality mode (64kbps) from my C++ code. Currently, my code outputs a WMV file with a bitrate of 352kbps. The only example I could find of specifying the capture bitrate with a WMV9 DMO was this. The idea in that code was basically to use a property bag to write a bitrate to a property of the DMO.

    Update: In Windows Mobile, the closest codec property I can find that seems to equate to the bitrate is "g_wszWMVCVBRQuality". Microsoft's documentation of this property is extremely confusing to me: it basically seems to say that a higher number equates to a higher quality, but it gives absolutely no explanation of the specifics for each number. When I attempt to set this property to a value like "1" via a property bag for the WMV9 DMO, I run into a -2147467259 (unknown) error.

    To summarize: what is the basic strategy to specify the bitrate/quality of a video being captured via DirectShow (WMV9) on a Windows Mobile platform? I've heard of (or wondered about) the following methods:

    - Use the property bag to change the encoder DMO's property that corresponds to bitrate/quality (currently failing).
    - Create your own custom transcoder/encoder to specify it. This seems unnecessary since the WMV encoder works well enough - it's just at too high a bitrate.
    - The VIDEOINFOHEADER has a bitrate property, but I suspect that specifying new settings here will do nothing to alter the actual encoding process, since I wouldn't think file attributes would come into play until after the encoding.

    Any suggestions?

    PS: I would post specific source code, but at this point it may confuse more than it helps since I'm floundering so much on how to do this. At this point, I'm just trying to validate the general strategy. THANKS!

    Read the article

  • Core data and @unionOfSets

    - by KevinD
    I'm having trouble using the @unionOfSets operator on my Core Data objects.

    NSLog(@"%@", [NSApp valueForKeyPath:@"delegate.mainWindowController.sidebarViewController.arrayController.selection.list.listElement"]);

    This prints the set of listElements as expected:

    2010-03-24 18:11:15.844 Pirouette[7459:80f] Relationship objects for {(
        (entity: PRPlaylistElement; id: 0x10a71b0 ; data: ),
        (entity: PRPlaylistElement; id: 0x10ac7d0 ; data: ),
        (entity: PRPlaylistElement; id: 0x10acf60 ; data: ),
        (entity: PRPlaylistElement; id: 0x10a6850 ; data: )

    However, when I try to get the set of file objects for each of the list elements:

    NSLog(@"%@", [NSApp valueForKeyPath:@"delegate.mainWindowController.sidebarViewController.arrayController.selection.list.listElement.@unionOfArrays.file"]);

    I get the following error:

    2010-03-24 18:16:45.843 Pirouette[7505:80f] An uncaught exception was raised
    2010-03-24 18:16:45.844 Pirouette[7505:80f] [<NSCFSet 0x10415e0> valueForKeyPath:]: this class does not implement the unionOfArrays operation.
    2010-03-24 18:16:45.847 Pirouette[7505:80f] *** Terminating app due to uncaught exception 'NSInvalidArgumentException', reason: '[<NSCFSet 0x10415e0> valueForKeyPath:]: this class does not implement the unionOfArrays operation.'

    I'm confused, because I thought to-many relationships in Core Data were NSSets.

    Read the article

  • Handling incremental Data Modeling Changes in Functional Programming

    - by Adam Gent
    Most of the problems I have to solve in my job as a developer have to do with data modeling. For example, in an OOP web application world I often have to change the data properties that are in an object to meet new requirements. If I'm lucky I don't even need to programmatically add new "behavior" code (functions/methods). Instead I can declaratively add validation and even UI options by annotating the property (Java). In functional programming it seems that adding new data properties requires lots of code changes because of pattern matching and data constructors (Haskell, ML). How do I minimize this problem? This seems to be a recognized problem, as Xavier Leroy states nicely on page 24 of "Objects and Classes vs. Modules" - to summarize for those that don't have a PostScript viewer, it basically says FP languages are better than OOP languages for adding new behavior over data objects, but OOP languages are better for adding new data objects/properties. Are there any design patterns used in FP languages to help mitigate this problem? I have read Philip Wadler's recommendation of using monads to help with this modularity problem, but I'm not sure I understand how.

    Read the article

  • Archiver Securing SQLite Data without using Encryption on iPhone

    - by Redrocks
    I'm developing an iPhone app that uses Core Data with a SQLite data store and lots of images in the resource bundle. I want a "simple" way to obfuscate the file structure of the SQLite database and the image files to prevent the casual hacker/unscrupulous developer from gaining access to them. When the app is deployed, the database file and image files would be obfuscated. Upon launching, the app would read in and un-obfuscate the database file, write the un-obfuscated version to the user's "tmp" directory for use by Core Data, and read/un-obfuscate image files as needed. I'd like to apply a simple algorithm to the files that would somehow scramble/manipulate the file data so that the SQLite database data isn't discernible when the db is opened in a text editor, and so that neither is recognized by other applications (SQLite Manager, Photoshop, etc.). It seems, from the information I've read, that I could use NSFileManager, NSKeyedArchiver, and NSData to accomplish this, but I'm not sure how to proceed. I've been developing software for many years, but I'm new to everything Cocoa Touch, Mac and iPhone. Also, I've never had to secure/encrypt my data, so this is new. Any thoughts, suggestions, or links to solutions are appreciated.

    Read the article

  • How to sort data in a table data structure in Java?

    - by rgksugan
    I need to sort data based on the third column of the table data structure. I tried it based on the answers to the following question, but my sorting does not work. Please help me with this. Here goes my code:

    Object[] data = new Object[y];
    rst.beforeFirst();
    while (rst.next()) {
        int p_id = Integer.parseInt(rst.getString(1));
        String sw2 = "select sum(quantity) from tbl_order_detail where product_id=" + p_id;
        rst1 = stmt1.executeQuery(sw2);
        rst1.next();
        String sw3 = "select max(order_date) from tbl_order where tbl_order.`Order_ID` in (select tbl_order_detail.`Order_ID` from tbl_order_detail where product_id=" + p_id + ")";
        rst2 = stmt2.executeQuery(sw3);
        rst2.next();
        data[i] = new Object[]{new String(rst.getString(2)), new String(rst.getString(3)), new Integer(rst1.getString(1)), new String(rst2.getString(1))};
        i++;
    }
    ColumnComparator cc = new ColumnComparator(2);
    Arrays.sort(data, cc);
    if (i == 0) {
        table.addCell("");
        table.addCell("");
        table.addCell("");
        table.addCell("");
    } else {
        for (int j = 0; j < y; j++) {
            Object[] theRow = (Object[]) data[j];
            table.addCell((String) theRow[0]);
            table.addCell((String) theRow[1]);
            table.addCell((String) theRow[2]);
            table.addCell((String) theRow[3]);
        }
    }
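
    The ColumnComparator class itself isn't shown in the question, so for reference, here is a minimal sketch of what a column comparator for Object[] rows typically looks like (the class shape and the sample rows are assumptions for illustration, not the asker's actual code):

    import java.util.Arrays;
    import java.util.Comparator;

    // Sketch: compare two Object[] rows by one column, assuming that column holds Comparable values.
    class ColumnComparator implements Comparator<Object> {
        private final int column;

        ColumnComparator(int column) {
            this.column = column;
        }

        @SuppressWarnings({"unchecked", "rawtypes"})
        public int compare(Object a, Object b) {
            Comparable left = (Comparable) ((Object[]) a)[column];
            Comparable right = (Comparable) ((Object[]) b)[column];
            return left.compareTo(right);
        }
    }

    public class ColumnSortDemo {
        public static void main(String[] args) {
            Object[] data = {
                new Object[]{"Widget", "Blue", Integer.valueOf(7), "2010-01-01"},
                new Object[]{"Gadget", "Red", Integer.valueOf(2), "2010-02-01"},
                new Object[]{"Gizmo", "Green", Integer.valueOf(5), "2010-03-01"},
            };
            Arrays.sort(data, new ColumnComparator(2)); // sort by the third column
            for (Object row : data) {
                System.out.println(Arrays.toString((Object[]) row));
            }
        }
    }

    Two things worth checking in the question's code: the data array is sized with y but only i slots are filled, so any trailing null entries will break the sort; and theRow[2] holds an Integer, so casting it to String for addCell will fail at runtime.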

    Read the article

  • Moving UITableView cells and maintaining consistent data

    - by Mark F
    I've enabled editing mode and moving cells around to allow users to position table view content in the order they please. I'm using Core Data as the data source, which sorts the content by the attribute "userOrder". When content is first inserted, userOrder is set to a random value. The idea is that when the user moves a cell around, the userOrder of that cell changes to accommodate its new position. The following are the problems I am running into while trying to accomplish this:

    - Successfully saving the new location of the cell and adjusting all changed locations of influenced cells.
    - Getting the data to be consistent. For example, the table view handles the movement fine, but when I click on the new location of the cell, it displays data for the old cell that used to be at that location. Data of all influenced cells gets messed up as well.

    I know I have to implement this in:

    - (void)tableView:(UITableView *)tableView moveRowAtIndexPath:(NSIndexPath *)sourceIndexPath toIndexPath:(NSIndexPath *)destinationIndexPath {}

    I just don't know how. The Apple docs are not particularly helpful if you are using Core Data, as in my situation. Any guidance greatly appreciated!

    Read the article

  • iPhone SDK / Core Data usage scenario, similar to GAE data store?

    - by boliva
    Hi all, I am currently rewriting a map-based app which I wrote in the past, specifically for 2.2.1 devices. Originally I wrote it to make use of SQLite databases, but I would like to try and migrate it over to Core Data, now that it's available on 3.X (which I am rewriting for). I am fairly experienced in iPhone/Obj-C development, SQL and server backend technologies, but I have never had the chance to work with Core Data, so I don't really know if it's the appropriate tool for what I am trying to accomplish. The app works on a limited area in a map over which there are about 4000 placemarks, with different kinds of icons and sizes. Of course not all 4000 placemarks are shown at once but only those currently visible in the map viewport, and depending on the zoom level. What I am doing right now is, after the user moves the map in any way (panning or zooming), requesting from the backend server the required information for the placemarks that would be visible given the viewport coordinate boundaries and zoom level. However, the process isn't as smooth as I'd like (the backend is sending its response in XML and I am compressing it using gzip); it takes anywhere from 1 to 3 seconds to update the display of the placemarks after the user stops moving the map. What I would like to do is to prefetch all the placemarks data at app launch and use it all through the app's lifetime - I don't mind storing it for later use because the data should be dynamic. The way I would do it right now is, after retrieving all the data, to store it in an SQLite db which I would query later, whenever the user moves the map, to return only the placemarks inside the viewport coordinate boundaries and specific to a given zoom level. Now, the question itself is whether it is possible to use some more 'native', object-driven way to carry out this query process, which got me thinking about Core Data and whether it is in any way similar to what Google App Engine offers through its datastore, where you can fetch a number of objects from the backend given a certain query or criteria, without resorting to an SQL query itself. Like I said before, I don't have any experience with Core Data, but I have a pretty deep understanding of Obj-C and iPhone development, as well as SQL databases. Any guides on how to achieve what I'm trying (if possible at all) would be greatly appreciated.

    Read the article

  • Classifying captured data in unknown format?

    - by monch1962
    I've got a large set of captured data (potentially hundreds of thousands of records), and I need to be able to break it down so I can both classify it and also produce "typical" data myself. Let me explain further... If I have the following strings of data:

    132T339G1P112S
    164T897F5A498S
    144T989B9B223T
    155T928X9Z554T

    ... you might start to infer the following:

    - possibly all strings are 14 characters long
    - the 4th, 8th, 10th and 14th characters may always be alphas, while the rest are numeric
    - the first character may always be a '1'
    - the 4th character may always be the letter 'T'
    - the 14th character may be limited to only being 'S' or 'T'
    - and so on...

    As you get more and more samples of real data, some of these "rules" might disappear; if you see a 15 character long string, then you have evidence that the 1st "rule" is incorrect. However, given a sufficiently large sample of strings that are exactly 14 characters long, you can start to assume that "all strings are 14 characters long" and assign a numeric figure to your degree of confidence (with an appropriate set of assumptions around the fact that you're seeing a suitably random set of all possible captured data). As you can probably tell, a human can do a lot of this classification by eye, but I'm not aware of libraries or algorithms that would allow a computer to do it. Given a set of captured data (significantly more complex than the above...), are there libraries that I can apply in my code to do this sort of classification for me, that will identify "rules" with a given degree of confidence? As a next step, I need to be able to take those rules, and use them to create my own data that conforms to these rules. I assume this is a significantly easier step than the classification, but I've never had to perform a task like this before so I'm really not sure how complex it is. At a guess, Python or Java (or possibly Perl or R) are possibly the "common" languages most likely to have these sorts of libraries, and maybe some of the bioinformatic libraries do this sort of thing. I really don't care which language I have to use; I need to solve the problem in whatever way I can. Any sort of pointer to information would be very useful. As you can probably tell, I'm struggling to describe this problem clearly, and there may be a set of appropriate keywords I can plug into Google that will point me towards the solution. Thanks in advance.
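
    For what it's worth, the simplest version of the rule inference described above can be hand-rolled without a library; here is a rough Java sketch (the class name and output format are just illustrative) that derives per-position observations from the sample strings in the question:

    import java.util.Arrays;
    import java.util.LinkedHashSet;
    import java.util.List;
    import java.util.Set;

    // Sketch: infer simple per-position "rules" from fixed-length sample strings.
    public class RuleInferenceSketch {
        public static void main(String[] args) {
            List<String> samples = Arrays.asList(
                    "132T339G1P112S", "164T897F5A498S", "144T989B9B223T", "155T928X9Z554T");

            // Rule candidate 1: do all samples share the same length?
            Set<Integer> lengths = new LinkedHashSet<>();
            for (String s : samples) {
                lengths.add(s.length());
            }
            System.out.println("Observed lengths: " + lengths);

            // Rule candidates per position: character class and the set of values seen.
            int len = samples.get(0).length();
            for (int pos = 0; pos < len; pos++) {
                boolean allDigits = true;
                boolean allAlpha = true;
                Set<Character> seen = new LinkedHashSet<>();
                for (String s : samples) {
                    char c = s.charAt(pos);
                    seen.add(c);
                    allDigits = allDigits && Character.isDigit(c);
                    allAlpha = allAlpha && Character.isLetter(c);
                }
                String kind = allDigits ? "digit" : allAlpha ? "alpha" : "mixed";
                System.out.printf("position %2d: %s, values seen %s%n", pos + 1, kind, seen);
            }
        }
    }

    This only surfaces candidate rules; attaching a confidence figure, as the question asks, would mean counting how many records support each candidate against the total sample size, and generating "typical" data would then amount to picking a random value allowed by each surviving rule.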

    Read the article

  • populate CoreData data model from JSON files prior to app start

    - by johannes_d
    I am creating an iPad app that displays data I got from an API in JSON format. My Core Data model has several entities (Countries, Events, Talks, ...). For each entity I have one .json file that contains all instances of the entity and its attributes as well as its relationships. I would like to populate my Core Data model with these entities before the start of the app (otherwise it takes about 15 minutes for the iPad to create all the instances of the entities from the several JSON files using factory methods). I am currently importing the data into Core Data like this:

    -(void)fetchDataIntoDocument:(UIManagedDocument *)document {
        dispatch_queue_t dataQ = dispatch_queue_create("Data import", NULL);
        dispatch_async(dataQ, ^{
            // Fetching data from the application bundle
            NSURL *countriesurl = [[NSBundle mainBundle] URLForResource:@"contries" withExtension:@"json"];
            NSURL *eventsurl = [[NSBundle mainBundle] URLForResource:@"events" withExtension:@"json"];

            // Converting the JSON files to NSDictionaries
            NSError *error = nil;
            NSDictionary *countries = [NSJSONSerialization JSONObjectWithData:[NSData dataWithContentsOfURL:countriesurl] options:kNilOptions error:&error];
            countries = [countries objectForKey:@"countries"];
            NSDictionary *events = [NSJSONSerialization JSONObjectWithData:[NSData dataWithContentsOfURL:eventsurl] options:kNilOptions error:&error];
            events = [events objectForKey:@"events"];

            // Creating entities using factory methods in NSManagedObject subclasses (Country / Event)
            [document.managedObjectContext performBlock:^{
                NSLog(@"creating countries");
                for (NSDictionary *country in countries) {
                    [Country countryWithCountryInfo:country inManagedObjectContext:document.managedObjectContext]; // creating Country entities
                }
                NSLog(@"creating events");
                for (NSDictionary *event in events) {
                    [Event eventWithEventInfo:event inManagedObjectContext:document.managedObjectContext]; // creating Event entities
                }
                NSLog(@"done creating, saving document");
                [document saveToURL:document.fileURL forSaveOperation:UIDocumentSaveForOverwriting completionHandler:NULL];
            }];
        });
        dispatch_release(dataQ);
    }

    This combines the different JSON files into one UIManagedDocument which I can then perform fetch requests on to populate tableViews, mapView, etc. I'm looking for a way to create this document outside my application and add it to the mainBundle. Then I could copy it once to the app's Documents directory and be able to use it (instead of creating the document within the app from the original JSON files). Any help is appreciated!

    Read the article

  • Data Protection Manager System Protection Backups Failing

    - by TrueDuality
    I'm just starting to set up DPM 2010 in a test environment with a Domain Controller and a File Server. Everything seems to be working fairly well and I can get all of my backup jobs to succeed except for the "Computer\System Protection" backups. Both servers are running fully up-to-date 64-bit Windows Server 2008 R2 Enterprise with Service Pack 1. The error that is being provided is:

    DPM cannot create a backup because Windows Server Backup (WSB) on the protected computer encountered an error (WSB Event ID: 517, WSB Error Code: 0x8078001D). (ID 30229 Details: Internal error code: 0x809909FB)

    This Microsoft Knowledge Base article describes the issue perfectly and provides a hotfix. I downloaded the hotfix, moved it onto the affected server, attempted to run it, and received the following error:

    The update is not applicable to your computer.

    I've verified that I have indeed downloaded the 64-bit version. According to this thread the hotfix got rolled into Service Pack 1, yet I'm still experiencing the issue. Both machines do have the Windows Server Backup feature installed. Can anybody point me in the right direction? What am I missing?

    Read the article

  • Lost all data on Windows XP after blue screen

    - by Barb
    I got a blue screen and was trying to boot with my OS disk. Frankly, I was unsure exactly how to do this. I was trying everything and booted in partition mode. Finally, I booted with the disk, ran chkdsk /r and was able to log into Windows. But all of my files and pictures are gone. I have no backup at all and I'm sick to think that I lost the last seven years of pictures of my kids. What can I do?

    Read the article

  • Server hang - data loss on reboot, post mortem analysis

    - by rovangju
    A development server I'm responsible for (ext3 on RAID 5 w/ Debian Squeeze) froze up over the weekend and I was forced to reset it - it was unresponsive from KVM/physical keyboard access, no eth devices responding, etc. Not even the backup process ran (figures, the one time I don't check for confirmation). So after the reset, it turns out that every trace of disk IO activity that should have happened for a period of ~24h is completely gone. The log files have a big gap in the dates and times, as if the writes were never committed to disk and no processes had run. Luckily it was a weekend, nothing of value would have been lost, and I don't suspect a hack. What can I do in a post mortem of this event to prevent it from ever happening again? I've seen this happen before on a completely different machine running FreeBSD. I am rounding up the disk checking tools right now - but there must be more going on!

    Mount options: /dev/sda1 on / type ext3 (rw,errors=remount-ro)
    Kernel: Linux dev 2.6.32-5-686-bigmem
    Disk/Inodes: 13%/3%

    Read the article

  • How to avoid damage to ISO archives?

    - by TMRW
    So I had a problem where a 16GB ISO was damaged (likely my own fault, using the standard Windows copy dialog instead of a proper copy tool like robocopy with verification turned on). It took several hours, but I managed to restore the ISO (basically I rebuilt the damaged parts and recompiled). Namely, some .rar archives inside it were unreadable but the ISO itself was readable. So I'm wondering how I can further protect something like this from happening again. Obviously a proper copy tool, but maybe something else? Perhaps setting it as "read only" could help? I generally don't move these files a lot, and if I need to access them then it's only for opening/extracting.

    Read the article

  • Recover data from a corrupted virtualbox vmdk file?

    - by Neth
    The power went out while I was doing a build on a VirtualBox machine, and when I restarted, the vmdk for the disk the VM was using was corrupted, apparently irrecoverably. I have been able to grep the 66GB vmdk file and it finds strings from the code I was working on that hadn't gotten into subversion yet (yeah, yeah, I know). But the strings are either in the shell history or what look to be strings inside object files. Any ideas for finding/recovering the source code? If it helps, the VM was Linux, Fedora Core 10 on an ext3 filesystem. The host is Ubuntu 10.04 amd64 and has an ext4 filesystem.

    Read the article

  • Recover data from physically damaged harddrive. What are my options?

    - by Michael Kniskern
    I was trying to replace the power supply in my desktop PC and ended up physically damaging the data connection from the hard drive to the motherboard. The plastic shelf for the copper prongs on the hard drive broke into the cable. Here is a picture of my handiwork:

    I went to the Best Buy Geek Squad to discuss my options and they said they would need to send it to the recovery center, and that it could cost anywhere between $250 and $1600 USD to recover the data from the hard drive. Is this reasonable for data recovery from a physically damaged hard drive? Are there any other options I can explore? I am going to talk to the Data Doctors to see what my options are.

    Update: I took the HD to Data Doctors, and they told me that the SATA connection was broken, so they would need to replace the data connector and then copy the data to a brand new hard drive. So, with the initial analysis, cost of replacement parts, and data recovery fee, it came out to $865.00 USD. The technician specifically stated that if this were an older hard drive, they would just need to replace the data connector. But because there is specific information related to the individual hard drive in the flash ROM, they need to transfer the data to a brand new hard drive.

    Read the article

  • Merely chainloading an Acer Recovery Partition deleted all data

    - by WindowsEscapist
    I was starting a backup of Acer's factory restore partition located inside of an extended partition to determine whether or not it still worked. I clicked "take no action" once I saw that it had, in fact, successfully started up. However, when I rebooted, I got an "error: no such partition" and was dropped to a GRUB recovery prompt. Upon further investigation, I discovered that all partitions inside the extended partition were gone except for the recovery partition! What happened? How can I fix this? testdisk doesn't find the deleted partitions!

    Read the article

  • Ways to recover data from external hard drive

    - by Howard Benson
    I use an external hard disk for backup of my Mac with Time Machine (OS 10.5.8). I did something wrong and I have found important folders in the Trash. These folders come from the external HD. They are backup folders (Backups.backupdb) and others. I have tried to restore them by dragging and dropping. Some of them came back to the external HD in a while. For the others it takes hours of "preparing to copy" and then it says "there's no space to copy" on the external HD. It's strange. The files are now in the Trash (180GB), and the external HD should have a lot of free space, but it isn't really so: the external HD is not showing free space even though these files are in the Trash. I'm also not able to use Time Machine now (and I have "lost" old backups) for the same reason, as the external HD says that it has no free space. I'm asking for advice. Thanks.

    Read the article
