Search Results

Search found 59118 results on 2365 pages for 'data persistence'.


  • Classifying captured data in unknown format?

    - by monch1962
    I've got a large set of captured data (potentially hundreds of thousands of records), and I need to be able to break it down so I can both classify it and also produce "typical" data myself. Let me explain further... If I have the following strings of data:

        132T339G1P112S
        164T897F5A498S
        144T989B9B223T
        155T928X9Z554T
        ...

    you might start to infer the following:

        - possibly all strings are 14 characters long
        - the 4th, 8th, 10th and 14th characters may always be alphas, while the rest are numeric
        - the first character may always be a '1'
        - the 4th character may always be the letter 'T'
        - the 14th character may be limited to only being 'S' or 'T'

    and so on... As you get more and more samples of real data, some of these "rules" might disappear; if you see a 15 character long string, then you have evidence that the 1st "rule" is incorrect. However, given a sufficiently large sample of strings that are exactly 14 characters long, you can start to assume that "all strings are 14 characters long" and assign a numeric figure to your degree of confidence (with an appropriate set of assumptions around the fact that you're seeing a suitably random set of all possible captured data).

    As you can probably tell, a human can do a lot of this classification by eye, but I'm not aware of libraries or algorithms that would allow a computer to do it. Given a set of captured data (significantly more complex than the above...), are there libraries that I can apply in my code to do this sort of classification for me, that will identify "rules" with a given degree of confidence?

    As a next step, I need to be able to take those rules, and use them to create my own data that conforms to these rules. I assume this is a significantly easier step than the classification, but I've never had to perform a task like this before so I'm really not sure how complex it is.

    At a guess, Python or Java (or possibly Perl or R) are possibly the "common" languages most likely to have these sorts of libraries, and maybe some of the bioinformatic libraries do this sort of thing. I really don't care which language I have to use; I need to solve the problem in whatever way I can. Any sort of pointer to information would be very useful. As you can probably tell, I'm struggling to describe this problem clearly, and there may be a set of appropriate keywords I can plug into Google that will point me towards the solution. Thanks in advance
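
    One way to approach the first part without a dedicated library is a simple positional profiler: for each position, record which characters occur and report candidate rules together with the fraction of samples that satisfy them. The sketch below is only a minimal illustration of that idea in Python; the rule families and the confidence measure are assumptions, not an established algorithm.

        from collections import Counter

        def infer_rules(samples):
            """Infer simple positional rules from a list of strings.

            Returns (rule_description, confidence) pairs, where confidence is
            the fraction of samples that satisfy the rule.
            """
            rules = []
            n = len(samples)

            # Rule family 1: "all strings are L characters long"
            lengths = Counter(len(s) for s in samples)
            most_common_len, count = lengths.most_common(1)[0]
            rules.append((f"all strings are {most_common_len} characters long", count / n))

            # Rule family 2: per-position character classes and small alphabets,
            # evaluated only over samples of the dominant length
            fixed_len = [s for s in samples if len(s) == most_common_len]
            for pos in range(most_common_len):
                chars = Counter(s[pos] for s in fixed_len)
                alpha = sum(c for ch, c in chars.items() if ch.isalpha())
                digit = sum(c for ch, c in chars.items() if ch.isdigit())
                if alpha:
                    rules.append((f"position {pos + 1} is alphabetic", alpha / len(fixed_len)))
                if digit:
                    rules.append((f"position {pos + 1} is numeric", digit / len(fixed_len)))
                if len(chars) <= 2:  # only one or two distinct characters seen here
                    allowed = "/".join(sorted(chars))
                    rules.append((f"position {pos + 1} is one of {allowed}", 1.0))
            return rules

        if __name__ == "__main__":
            data = ["132T339G1P112S", "164T897F5A498S", "144T989B9B223T", "155T928X9Z554T"]
            for rule, conf in infer_rules(data):
                print(f"{conf:5.2f}  {rule}")

    Generating synthetic data that conforms to whatever rules survive can then be as simple as sampling one character per position from the observed sets.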

    Read the article

  • populate CoreData data model from JSON files prior to app start

    - by johannes_d
    I am creating an iPad App that displays data I got from an API in JSON format. My Core Data model has several entities (Countries, Events, Talks, ...). For each entity I have one .json file that contains all instances of the entity and its attributes as well as its relationships. I would like to populate my Core Data model with these entities before the start of the App (otherwise it takes about 15 minutes for the iPad to create all the instances of the entities from the several JSON files using factory methods). I am currently importing the data into Core Data like this:

        -(void)fetchDataIntoDocument:(UIManagedDocument *)document {
            dispatch_queue_t dataQ = dispatch_queue_create("Data import", NULL);
            dispatch_async(dataQ, ^{
                // Fetching data from the application bundle
                NSURL *countriesurl = [[NSBundle mainBundle] URLForResource:@"countries" withExtension:@"json"];
                NSURL *eventsurl = [[NSBundle mainBundle] URLForResource:@"events" withExtension:@"json"];

                // Converting the JSON files to NSDictionaries
                NSError *error = nil;
                NSDictionary *countries = [NSJSONSerialization JSONObjectWithData:[NSData dataWithContentsOfURL:countriesurl] options:kNilOptions error:&error];
                countries = [countries objectForKey:@"countries"];
                NSDictionary *events = [NSJSONSerialization JSONObjectWithData:[NSData dataWithContentsOfURL:eventsurl] options:kNilOptions error:&error];
                events = [events objectForKey:@"events"];

                // Creating entities using factory methods in NSManagedObject subclasses (Country / Event)
                [document.managedObjectContext performBlock:^{
                    NSLog(@"creating countries");
                    for (NSDictionary *country in countries) {
                        [Country countryWithCountryInfo:country inManagedObjectContext:document.managedObjectContext]; // creating Country entities
                    }
                    NSLog(@"creating events");
                    for (NSDictionary *event in events) {
                        [Event eventWithEventInfo:event inManagedObjectContext:document.managedObjectContext]; // creating Event entities
                    }
                    NSLog(@"done creating, saving document");
                    [document saveToURL:document.fileURL forSaveOperation:UIDocumentSaveForOverwriting completionHandler:NULL];
                }];
            });
            dispatch_release(dataQ);
        }

    This combines the different JSON files into one UIManagedDocument which I can then perform fetchRequests on to populate tableViews, the mapView, etc. I'm looking for a way to create this document outside my application & add it to the mainBundle. Then I could copy it once to the app's Documents directory and be able to use it (instead of creating the document within the app from the original JSON files). Any help is appreciated!

    Read the article

  • Persistence scheme & state data for low memory situations (iphone)

    - by Robin Jamieson
    What happens to state information held by a class's variables after coming back from a low-memory situation? I know that views will get unloaded and then reloaded later, but what about some ancillary classes & data held in them that's used by the controller that launched the view? Sample scenario in question:

        @interface MyCustomController: UIViewController {
            ServiceAuthenticator *authenticator;
        }
        -(id)initWithAuthenticator:(ServiceAuthenticator *)auth;
        // the user may press a button that will cause the authenticator
        // to post some data to the service.
        -(IBAction)doStuffButtonPressed:(id)sender;
        @end

        @interface ServiceAuthenticator {
            BOOL hasValidCredentials;  // YES if user's credentials have been validated
            NSString *username;
            NSString *password;        // password is not stored in plain text
        }
        -(id)initWithUserCredentials:(NSString *)username password:(NSString *)aPassword;
        -(void)postData:(NSString *)data;
        @end

    The app delegate creates the ServiceAuthenticator class with some user data (read from a plist file) and the class logs the user in with the remote service. Inside MyAppDelegate's applicationDidFinishLaunching:

        - (void)applicationDidFinishLaunching:(UIApplication *)application {
            ServiceAuthenticator *auth = [[ServiceAuthenticator alloc] initWithUserCredentials:username password:userPassword];
            MyCustomController *controller = [[MyCustomController alloc] initWithNibName:...];
            controller.authenticator = auth;

            // Configure and show the window
            [window addSubview:..];  // make everything visible
            [window makeKeyAndVisible];
        }

    Then whenever the user presses a certain button, MyCustomController's doStuffButtonPressed: is invoked.

        -(IBAction)doStuffButtonPressed:(id)sender {
            [authenticator postData:someDataFromSender];
        }

    The authenticator in turn checks if the user is logged in (a BOOL variable indicates the login state) and, if so, exchanges data with the remote service. The ServiceAuthenticator is the kind of class that validates the user's credentials only once; all subsequent calls to the object will be to postData. Once a low-memory scenario occurs and the associated nib & MyCustomController get unloaded -- when it's reloaded, what's the process for setting the 'ServiceAuthenticator' class & its former state back up? I'm periodically persisting all of the data in my actual model classes. Should I consider also persisting the state data in these utility-style classes? Is that the pattern to follow?

    Read the article

  • Lost all data on Windows XP after blue screen

    - by Barb
    I got a blue screen and was trying to boot with my OS disk. Frankly, I was unsure exactly how to do this. I was trying everything and booted in partition mode. Finally, I booted with the disk, ran chkdsk /r, and was able to log into Windows. But all of my files and pictures are gone. I have no backup at all, and I'm sick to think that I lost the last seven years of pictures of my kids. What can I do?

    Read the article

  • Data Protection Manager System Protection Backups Failing

    - by TrueDuality
    I'm just starting to set up DPM 2010 in a test environment with a Domain Controller and a File Server. Everything seems to be working fairly well and I can get all of my backup jobs to succeed except for the "Computer\System Protection" backups. Both servers are running fully up-to-date 64-bit Windows Server 2008 R2 Enterprise with Service Pack 1. The error that is being provided is:

        DPM cannot create a backup because Windows Server Backup (WSB) on the protected computer encountered an error (WSB Event ID: 517, WSB Error Code: 0x8078001D). (ID 30229 Details: Internal error code: 0x809909FB)

    This Microsoft Knowledge Base article describes the issue perfectly and provides a hotfix. I downloaded the hotfix, moved it onto the affected server, attempted to run it and received the following error:

        The update is not applicable to your computer.

    I've verified that I have indeed downloaded the 64-bit version. According to this thread the hotfix got rolled into Service Pack 1, yet I'm still experiencing the issue. Both machines do have the Windows Server Backup feature installed. Can anybody point me in the right direction? What am I missing?

    Read the article

  • Server hang - data loss on reboot, post mortem analysis

    - by rovangju
    A development server I'm responsible for (ext3 on RAID 5 w/ Debian Squeeze) froze up over the weekend and I was forced to reset it: it was unresponsive to KVM/physical keyboard access, no eth devices responding, etc. Not even the backup process ran (figures, the one time I don't check for confirmation). So after the reset, it turns out that every trace of disk IO activity that should have happened for a period of ~24h is completely gone. The log files have a big gap in the dates and times. It's as if the writes were never committed to disk and no processes had run. Luckily it was a weekend and nothing of value would have been lost, and I don't suspect a hack. What can I do in a post mortem of this event to prevent it from ever happening again? I've seen this happen before on a completely different machine running FreeBSD. I am rounding up the disk checking tools right now - but there must be more going on!

        Mount options: /dev/sda1 on / type ext3 (rw,errors=remount-ro)
        Kernel: Linux dev 2.6.32-5-686-bigmem
        Disk/Inodes: 13%/3%

    Read the article

  • RabbitMQ and persistence (blocking writes?)

    - by daharon
    I want to create a RabbitMQ server on a virtual machine (VMware) to be used in production. It will contain persistent queues. I'm wondering if it is a bad idea to store the server on a NAS that's accessed over NFS. Basically my questions are: Will RabbitMQ's writes be blocking? Will the entire queue's operation halt on a write? How much performance degradation should I expect when persisting over NFS?
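
    For context on what those writes actually are: in RabbitMQ, persistence means durable queues plus messages published with delivery_mode=2, and it is those message writes that hit the disk (here, the NFS-backed store). A minimal Python/pika sketch, just to make the terms concrete; the queue name and host below are placeholders.

        import pika

        # Connect to the broker; "localhost" is a placeholder for the VM's address.
        connection = pika.BlockingConnection(pika.ConnectionParameters(host="localhost"))
        channel = connection.channel()

        # A durable queue survives broker restarts...
        channel.queue_declare(queue="task_queue", durable=True)

        # ...and a persistent message (delivery_mode=2) is written to disk,
        # which is exactly the I/O that would land on the NFS share.
        channel.basic_publish(
            exchange="",
            routing_key="task_queue",
            body=b"some job payload",
            properties=pika.BasicProperties(delivery_mode=2),
        )

        connection.close()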

    Read the article

  • EC2 persistence of machine

    - by Seagull
    I notice that EBS-backed AMIs are much like VMware instances -- I can stop them and also persist them to disk, and all this is done relatively quickly. However, I believe that S3-backed machines are different. They cannot be 'stopped', but rather can only be shut down, written to S3, and started up again, with at least a 15-minute delay in doing so. Why the difference? How do AMI providers decide whether to use EBS or S3? If I need to stop/persist/restart machines relatively frequently, then am I implicitly limited to just the EBS-backed machines?

    Read the article

  • How to avoid damage to ISO archives?

    - by TMRW
    So I had a problem where a 16GB ISO was damaged (likely my own fault, using the standard Windows copy dialog instead of a proper copy tool like robocopy with verification turned on). It took several hours but I managed to restore the ISO (basically I rebuilt the damaged parts and recompiled). Namely, some .rar archives inside it were unreadable but the ISO itself was readable. So I'm wondering how I can further protect something like this from happening again. Obviously a proper copy tool, but maybe something else? Perhaps setting it as "read only" could help? I generally don't move these files a lot, and if I need to access them then it's only for opening/extracting.
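
    Beyond a copy tool that verifies as it goes, one simple belt-and-braces option (an assumption on my part, not something from the original post) is to record a checksum of the archive once and re-check it after every copy or move. A minimal Python sketch, with hypothetical paths:

        import hashlib

        def sha256_of(path, chunk_size=1024 * 1024):
            """Return the SHA-256 hex digest of a file, reading it in chunks."""
            h = hashlib.sha256()
            with open(path, "rb") as f:
                for chunk in iter(lambda: f.read(chunk_size), b""):
                    h.update(chunk)
            return h.hexdigest()

        if __name__ == "__main__":
            original = sha256_of("archive.iso")           # hypothetical source path
            copied = sha256_of("E:/backup/archive.iso")   # hypothetical destination path
            print("copy OK" if original == copied else "copy CORRUPTED")

    Storing the digest alongside the ISO means a bad copy (or later bit rot) shows up the next time the check is run rather than months later.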

    Read the article

  • Recover data from a corrupted virtualbox vmdk file?

    - by Neth
    The power went out while I was doing a build on a VirtualBox machine. When I restarted, the vmdk for the disk the VM was using was corrupted, apparently irrecoverably. I have been able to grep the 66GB vmdk file and it finds strings from the code I was working on that hadn't gotten into Subversion yet (yeah, yeah, I know). But the strings are either in the shell history or what look to be strings inside object files. Any ideas for finding/recovering the source code? If it helps, the VM was Linux (Fedora Core 10) on an ext3 filesystem. The host is Ubuntu 10.04 amd64 with an ext4 filesystem.

    Read the article

  • Recover data from physically damaged harddrive. What are my options?

    - by Michael Kniskern
    I was trying to replace the power supply in my desktop PC and ended up physically damaging the data connection from the hard drive to the motherboard. The plastic shelf for the copper prongs on the hard drive broke into the cable. Here is a picture of my handiwork: I went to the Best Buy Geek Squad to discuss my options, and they said they would need to send it to the recovery center and it could cost anywhere between $250 and $1600 USD to recover the data from the hard drive. Is this reasonable for data recovery from a physically damaged hard drive? Are there any other options I can explore? I am going to talk to the Data Doctors to see what my options are.

    Update: I took the HD to Data Doctors, and they told me that the SATA connection was broken, so they would need to replace the data connector and then copy the data to a brand new hard drive. So, with the initial analysis, cost of replacement parts, and data recovery fee, it came out to $865.00 USD. The technician specifically stated that if this were an older hard drive they would just need to replace the data connector. But because there is information specific to the individual hard drive in the flash ROM, they need to transfer the data to a brand new hard drive.

    Read the article

  • Merely chainloading an Acer Recovery Partition deleted all data

    - by WindowsEscapist
    I was starting a backup of Acer's factory restore partition located inside of an extended partition to determine whether or not it still worked. I clicked "take no action" once I saw that it had, in fact, successfully started up. However, when I rebooted, I got an "error: no such partition" and was dropped to a GRUB recovery prompt. Upon further investigation, I discovered that all partitions inside the extended partition were gone except for the recovery partition! What happened? How can I fix this? testdisk doesn't find the deleted partitions!

    Read the article

  • Ways to recover data from external hard drive

    - by Howard Benson
    I use an external hard disk for backing up my Mac with Time Machine (OS 10.5.8). I did something wrong and have found important folders in the recycle bin. These folders come from the external HD. They are backup folders (backups.backupdb) and others. I have tried to restore them by dragging and dropping. Some of them came back to the external HD after a while. For the others it takes hours to "prepare to copy" and then it says "there's no space to copy" on the external HD. It's strange. The files are now in the recycle bin (180GB), and the external HD should have lots of free space. But it isn't really so: the external HD is not free of space even though these files are in the bin. I ask for advice. I'm also not able to use Time Machine now (and I have "lost" old backups) for the same reason; the external HD says that it has no free space. Thanks

    Read the article

  • Migrating Linux user data to Windows profiles automatically

    - by scott ryan
    I have what seems to be an incredibly simple problem with a very simple solution, but I'm having some trouble connecting the dots. I have an aging server running Ubuntu Server which hosts roaming profiles. I am switching to a Windows Server 2012 DS shortly. Users used to be named firstinitial.lastname and we are switching to firstname.lastname. I need to transfer things like favorites, documents, etc. from the roaming Linux profile to the user's local Windows profile. So, the way I think it'd work is by using a login script. I think I'd use a script to mount the Linux server's /home for each user, then copy to various paths (documents, pictures, etc.). But how do I automate this for each user that logs in? I'm working with a nonprofit, so doing this by hand would probably be out of their budget. I'm open to suggestions, though. What I want is basically Windows Easy Migration, but I'm fairly certain that won't work under Wine... (Kidding, I promise). Thanks!
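
    For what it's worth, one way to sketch the per-user copy step is below. This is purely illustrative: the share path, profile root, folder list and the old-to-new username mapping are all assumptions, since full first names cannot be derived from an initial automatically.

        import os
        import shutil

        # Hypothetical mapping from old Linux usernames to new Windows usernames.
        # It has to be maintained by hand (or exported from the directory),
        # because "j.smith" cannot be expanded to "john.smith" automatically.
        USER_MAP = {
            "j.smith": "john.smith",
            "a.jones": "alice.jones",
        }

        # Hypothetical locations: the old /home exported as a share, and the
        # local profile root on the Windows side.
        OLD_HOME_SHARE = r"\\oldserver\home"
        NEW_PROFILE_ROOT = r"C:\Users"

        FOLDERS_TO_COPY = ["Documents", "Pictures", "Favorites"]

        def migrate_user(old_name, new_name):
            """Copy selected folders from the old roaming profile to the new local profile."""
            for folder in FOLDERS_TO_COPY:
                src = os.path.join(OLD_HOME_SHARE, old_name, folder)
                dst = os.path.join(NEW_PROFILE_ROOT, new_name, folder)
                if os.path.isdir(src):
                    # dirs_exist_ok requires Python 3.8+
                    shutil.copytree(src, dst, dirs_exist_ok=True)

        if __name__ == "__main__":
            for old, new in USER_MAP.items():
                migrate_user(old, new)

    The same logic could just as easily be expressed as a batch/PowerShell login script around robocopy; the point is simply the explicit username mapping plus a fixed list of folders.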

    Read the article

  • Mount --bind persistence over reboot

    - by vnuk
    I'm having trouble finding an answer to this question: does mount --bind persist over a reboot? On my CentOS box it looks like it doesn't, so I've placed the appropriate mount --bind calls in rc.local. How can I do the mount --bind so as to avoid the rc.local scenario?

    Read the article

  • Data recovery from corrupt Ubuntu partition/directory (question about a previous answer)

    - by JoshMaurice
    I have an Ubuntu installation that won't boot anymore. I asked my previous question about it here: http://superuser.com/questions/15916/ubuntu-chkdsk-equivalent Bolotov replied: As I see from your previous question you can boot Windows so you could use dskprobe from Windows XP Service Pack 2 Support Tools to make sure that fs type is correct ... but it's already correct fs type 7 is NTFS. Message "The type of the filesystem is RAW. CHKDSK is not available for RAW drives." means that windows can't determine fs type for some reason. As we see fs type is correct. To run Chkdsk on your Windows partition you can install Windows Recovery Console, boot in recovery console and check your disk. After checking the disk you will gain access to you c:\ubuntu\disks. I think you can mount your linux partition (which is in file) as usual loop-back device: mount -o loop [path to your linux-loopback-partition] But you should mount windows patrition first. So now I'd like to know: Within the recovery console I will be issuing the commands "chkdsk -r" and then "mount -o loop [path to windows partition]" and then "mount -o loop c:\ubuntu\disks", correct? I do have a ("corrupt and unreadable") c:\ubuntu\disks directory so that appears to be the correct path to the linux partition; do you know the path to the windows partition? would that be just "c:\"?

    Read the article

  • Session persistence between multiple Rails / Unicorn servers with Redis as session_store on AWS

    - by d_ethier
    I've got 2 nginx EC2 instances pointing to 2 Unicorn EC2 instances in a round-robin load-balanced configuration. The two nginx instances are behind the Elastic Load Balancer. Both Unicorn instances have a Redis session_store configured, which is in a master/slave configuration with an Elastic IP attached to the master. I've tried configuring the session stickiness on the load balancer, but sessions are lost on each page refresh. I'm using the redis-store gem for the session_store configuration and Redis support. Anyone have any ideas as to why this is not working?

    Read the article

  • Recover deleted data

    - by atapimp24
    Hi, a user somehow deleted his documents from his laptop and has no backup available. How would one go about recovering these deleted files? I have zero experience with this issue. Are there any open source or freeware tools that I can use to attempt a recovery of these files? Thanks

    Read the article

  • Rescue data from damaged hard disk

    - by Lexsys
    Hello. I have a 500 GB hard drive with one NTFS partition on it. I can mount it with Ubuntu and view the contents. But when I try to copy something, I get an I/O error. OK, so I tried to make an image of it with dd: I/O error as soon as it starts. I have installed ddrescue, but its manual page says not to use it with drives failing on I/O. Can I manage to get some information off this drive, and how do I do this?

    Read the article

  • SQL SERVER – Configure Management Data Collection in Quick Steps – T-SQL Tuesday #005

    - by pinaldave
    This article was written as a response to T-SQL Tuesday #005 – Reporting. The three most important components of any computer and server are the CPU, memory, and hard disk. This post talks about how to get more details about these three components using Management Data Collection, which generates reports for them by default. Configuring Data Collection is a very easy task and can be done very quickly. Please note: there are many different ways to get reports generated for CPU, memory and IO. You can use DMVs, Extended Events as well as Perfmon to trace the data. Keeping with the T-SQL Tuesday subject of reporting, this post gives a visual tutorial on how to quickly configure Data Collection and generate reports. From Books Online: The data collector is a core component of the Data Collection platform for SQL Server 2008 and the tools that are provided by SQL Server. The data collector provides one central point for data collection across your database servers and applications. This collection point can obtain data from a variety of sources and is not limited to performance data, unlike SQL Trace. Let us go over the visual tutorial on how quickly Data Collection can be configured. Expand the Management node under the main server node and follow the directions in the pictures. These reports can be exported to PDF as well as Excel by right-clicking on the report. Now let us see some additional screenshots of the reports. The reports are very self-explanatory but can be drilled down to get further details. Click on the image to make it larger. Well, as we can see, it is very easy to configure and utilize this tool. Do you use this tool in your organization? Reference: Pinal Dave (http://blog.SQLAuthority.com) Filed under: Pinal Dave, SQL, SQL Authority, SQL Performance, SQL Query, SQL Server, SQL Tips and Tricks, T SQL, Technology Tagged: SQL Reporting, SQL Reports
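
    Once Data Collection has been configured through the wizard, the defined collection sets can also be inspected programmatically rather than through the tree. The snippet below is only an illustrative sketch (the connection string is an assumption; it queries the msdb.dbo.syscollector_collection_sets catalog view that backs the Data Collector):

        import pyodbc

        # Connection details are assumptions; adjust driver/server/auth for your environment.
        conn = pyodbc.connect(
            "DRIVER={SQL Server};SERVER=localhost;DATABASE=msdb;Trusted_Connection=yes;"
        )
        cursor = conn.cursor()

        # List the collection sets known to the Data Collector and whether they are running.
        cursor.execute(
            "SELECT collection_set_id, name, is_running "
            "FROM msdb.dbo.syscollector_collection_sets ORDER BY collection_set_id;"
        )
        for set_id, name, is_running in cursor.fetchall():
            status = "running" if is_running else "stopped"
            print(set_id, status, name)

        conn.close()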

    Read the article

  • Allen for Umbraco with location EXIF meta data

    - by Vizioz Limited
    The latest version of Allen for Umbraco has now hit the Apple App Store. We have managed to add some nice improvements to this version, including:

        - Storing location and direction information when photos are taken within the App
        - Embedding EXIF data into the images when they are uploaded
        - Background uploading
        - Pull to refresh the media tree

    Location and Direction
    By default, when the camera is used within an application, the location and direction that the camera is pointing are not stored within the image meta data. We have now added full support so that this data is now added. We have also added a setting which allows you to prevent this data from being uploaded to your website: if you do not want the location data to be sent, you can turn it off within Allen. Note: please don't forget that location services do need to be turned on to allow the app to access the images in the phone's asset library. We have had quite a few ideas from users already for using this location data, from logging free parking in Denmark to geo-tagging holiday photos and linking the photos to Google Street View.

    Embedding EXIF data
    We now embed all the meta data available on the iPhone into the image when it is uploaded to your server; this allows you to pull the data out and use it within your site. Have a look at Cultiv's Photo Meta Data package for great example code that allows you to automatically pull this data out and populate properties on your Umbraco media item. We slightly modified the source code of this package so that it always extracts the image data, as the default package requires a property to allow the data to be extracted. It's an easy change; if you get stuck, add a comment to this post.

    Background Uploading
    If you try to upload multiple images and need to start doing something else on your phone, you can now click the home button and the application will continue to upload your images in the background. As soon as it has finished you will receive a standard Apple notification.

    Pull to Refresh
    Our final enhancement has been to add "pull to refresh" to the media trees: just pull the tree downwards with your finger and it will refresh. This is useful if you are adding items to your media tree while testing your site with Allen for Umbraco.

    Future enhancements.. your ideas?
    If you have any ideas for future enhancements, feel free to add a comment below!
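
    On the server side, the embedded EXIF block is just standard image metadata, so it can be read with any EXIF-aware library. As a rough illustration (this sketch uses Python with Pillow, which is an assumption on our side; the Cultiv package mentioned above does the equivalent in .NET for Umbraco):

        from PIL import Image
        from PIL.ExifTags import TAGS, GPSTAGS

        def read_exif(path):
            """Return the EXIF tags of an image (including GPS sub-tags) as a plain dict."""
            exif = Image.open(path)._getexif() or {}
            result = {}
            for tag_id, value in exif.items():
                name = TAGS.get(tag_id, tag_id)
                if name == "GPSInfo":
                    # GPS data is a nested directory with its own tag names
                    value = {GPSTAGS.get(k, k): v for k, v in value.items()}
                result[name] = value
            return result

        if __name__ == "__main__":
            meta = read_exif("uploaded_photo.jpg")  # hypothetical uploaded file
            print(meta.get("DateTimeOriginal"))
            print(meta.get("GPSInfo", {}).get("GPSLatitude"))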

    Read the article

  • Which data structure should I use for dynamically generated platforms?

    - by Joey Green
    I'm creating a platform type of game with various types of platforms: platforms that move, shake, rotate, etc. Multiple types, and multiple of each type, can be on the screen at once. The platforms will be procedurally generated. I'm trying to figure out which of the following would be a better platform system:

        1. Pre-allocate all platforms when the scene loads, storing each platform type in a different array (i.e. regPlatformArray), and just getting one when I need one.
        2. Allocate and load what I need when my code needs it.

    The problem with 1 is keeping up with which indices are in use on screen and which aren't. The problem with 2 is that I'm having a hard time wrapping my head around how I would store these platforms so that I can call the update/draw methods on them, and managing the data structure that holds them. The data structure would constantly be growing and shrinking; it seems there could be too much complexity. I'm using the cocos2d iPhone game engine. Anyway, which option would be best, or is there a better option?
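
    The bookkeeping that option 1 needs is essentially an object pool: a free list plus an active list per platform type, so there is no index juggling at all. A minimal sketch of that pattern (written in Python for brevity and as an assumption about the design; in cocos2d-iphone the same idea would hold pre-allocated sprites in arrays):

        class PlatformPool:
            """Pre-allocates platforms of one type and hands them out on demand."""

            def __init__(self, factory, size):
                # factory is any zero-argument callable that builds one platform
                self.free = [factory() for _ in range(size)]
                self.active = []

            def acquire(self):
                """Take a platform from the free list (or None if the pool is exhausted)."""
                if not self.free:
                    return None
                platform = self.free.pop()
                self.active.append(platform)
                return platform

            def release(self, platform):
                """Return a platform that scrolled off screen back to the free list."""
                self.active.remove(platform)
                self.free.append(platform)

            def update(self, dt):
                # Only active platforms get updated/drawn each frame.
                for platform in self.active:
                    platform.update(dt)

        # Example usage with a trivial stand-in platform class:
        class MovingPlatform:
            def update(self, dt):
                pass  # move/shake/rotate logic would go here

        pool = PlatformPool(MovingPlatform, size=32)
        p = pool.acquire()   # when the generator needs a platform on screen
        pool.release(p)      # when it leaves the screen

    With one pool per platform type, the "growing and shrinking" structure of option 2 disappears: memory stays flat and only the active list changes size.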

    Read the article

  • When someone deletes a shared data source in SSRS

    - by Rob Farley
    SQL Server Reporting Services plays nicely. You can have things in the catalogue that get shared. You can have Reports that have Links, Datasets that can be used across different reports, and Data Sources that can be used in a variety of ways too. So if you find that someone has deleted a shared data source, you potentially have a bit of a horror story going on. And this works for this month’s T-SQL Tuesday theme, hosted by Nick Haslam, who wants to hear about horror stories. I don’t write about LobsterPot client horror stories, so I’m writing about a situation that a fellow MVP friend asked me about recently instead. The best thing to do is to grab a recent backup of the ReportServer database, restore it somewhere, and figure out what’s changed. But of course, this isn’t always possible. And it’s much nicer to help someone with this kind of thing, rather than to be trying to fix it yourself when you’ve just deleted the wrong data source. Unfortunately, it lets you delete data sources, without trying to scream that the data source is shared across over 400 reports in over 100 folders, as was the case for my friend’s colleague. So, suddenly there’s a big problem – lots of reports are failing, and the time to turn it around is small. You probably know which data source has been deleted, but getting the shared data source back isn’t the hard part (that’s just a connection string really). The nasty bit is all the re-mapping, to get those 400 reports working again. I know from exploring this kind of stuff in the past that the ReportServer database (using its default name) has a table called dbo.Catalog to represent the catalogue, and that Reports are stored here. However, the information about what data sources these deployed reports are configured to use is stored in a different table, dbo.DataSource. You could be forgiven for thinking that shared data sources would live in this table, but they don’t – they’re catalogue items just like the reports. Let’s have a look at the structure of these two tables (although if you’re reading this because you have a disaster, feel free to skim past). Frustratingly, there doesn’t seem to be a Books Online page for this information, sorry about that. I’m also not going to look at all the columns, just ones that I find interesting enough to mention, and that are related to the problem at hand. These fields are consistent all the way through to SQL Server 2012 – there doesn’t seem to have been any changes here for quite a while. dbo.Catalog The Primary Key is ItemID. It’s a uniqueidentifier. I’m not going to comment any more on that. A minor nice point about using GUIDs in unfamiliar databases is that you can more easily figure out what’s what. But foreign keys are for that too… Path, Name and ParentID tell you where in the folder structure the item lives. Path isn’t actually required – you could’ve done recursive queries to get there. But as that would be quite painful, I’m more than happy for the Path column to be there. Path contains the Name as well, incidentally. Type tells you what kind of item it is. Some examples are 1 for a folder and 2 a report. 4 is linked reports, 5 is a data source, 6 is a report model. I forget the others for now (but feel free to put a comment giving the full list if you know it). Content is an image field, remembering that image doesn’t necessarily store images – these days we’d rather use varbinary(max), but even in SQL Server 2012, this field is still image. 
It stores the actual item definition in binary form, whether it’s actually an image, a report, whatever. LinkSourceID is used for Linked Reports, and has a self-referencing foreign key (allowing NULL, of course) back to ItemID. Parameter is an ntext field containing XML for the parameters of the report. Not sure why this couldn’t be a separate table, but I guess that’s just the way it goes. This field gets changed when the default parameters get changed in Report Manager. There is nothing in dbo.Catalog that describes the actual data sources that the report uses. The default data sources would be part of the Content field, as they are defined in the RDL, but when you deploy reports, you typically choose to NOT replace the data sources. Anyway, they’re not in this table. Maybe it was already considered a bit wide to throw in another ntext field, I’m not sure. They’re in dbo.DataSource instead. dbo.DataSource The Primary key is DSID. Yes it’s a uniqueidentifier... ItemID is a foreign key reference back to dbo.Catalog Fields such as ConnectionString, Prompt, UserName and Password do what they say on the tin, storing information about how to connect to the particular source in question. Link is a uniqueidentifier, which refers back to dbo.Catalog. This is used when a data source within a report refers back to a shared data source, rather than embedding the connection information itself. You’d think this should be enforced by foreign key, but it’s not. It does allow NULLs though. Flags this is an int, and I’ll come back to this. When a Data Source gets deleted out of dbo.Catalog, you might assume that it would be disallowed if there are references to it from dbo.DataSource. Well, you’d be wrong. And not because of the lack of a foreign key either. Deleting anything from the catalogue is done by calling a stored procedure called dbo.DeleteObject. You can look at the definition in there – it feels very much like the kind of Delete stored procedures that many people write, the kind of thing that means they don’t need to worry about allowing cascading deletes with foreign keys – because the stored procedure does the lot. Except that it doesn’t quite do that. If it deleted everything on a cascading delete, we’d’ve lost all the data sources as configured in dbo.DataSource, and that would be bad. This is fine if the ItemID from dbo.DataSource hooks in – if the report is being deleted. But if a shared data source is being deleted, you don’t want to lose the existence of the data source from the report. So it sets it to NULL, and it marks it as invalid. We see this code in that stored procedure. UPDATE [DataSource]    SET       [Flags] = [Flags] & 0x7FFFFFFD, -- broken link       [Link] = NULL FROM    [Catalog] AS C    INNER JOIN [DataSource] AS DS ON C.[ItemID] = DS.[Link] WHERE    (C.Path = @Path OR C.Path LIKE @Prefix ESCAPE '*') Unfortunately there’s no semi-colon on the end (but I’d rather they fix the ntext and image types first), and don’t get me started about using the table name in the UPDATE clause (it should use the alias DS). But there is a nice comment about what’s going on with the Flags field. What I’d LIKE it to do would be to set the connection information to a report-embedded copy of the connection information that’s in the shared data source, the one that’s about to be deleted. I understand that this would cause someone to lose the benefit of having the data sources configured in a central point, but I’d say that’s probably still slightly better than LOSING THE INFORMATION COMPLETELY. 
Sorry, rant over. I should log a Connect item – I’ll put that on my todo list. So it sets the Link field to NULL, and marks the Flags to tell you they’re broken. So this is your clue to fixing it. A bitwise AND with 0x7FFFFFFD is basically stripping out the ‘2’ bit from a number. So numbers like 2, 3, 6, 7, 10, 11, etc, whose binary representation ends in either 11 or 10 get turned into 0, 1, 4, 5, 8, 9, etc. We can test for it using a WHERE clause that matches the SET clause we’ve just used. I’d also recommend checking for Link being NULL and also having no ConnectionString. And join back to dbo.Catalog to get the path (including the name) of broken reports are – in case you get a surprise from a different data source being broken in the past. SELECT c.Path, ds.Name FROM dbo.[DataSource] AS ds JOIN dbo.[Catalog] AS c ON c.ItemID = ds.ItemID WHERE ds.[Flags] = ds.[Flags] & 0x7FFFFFFD AND ds.[Link] IS NULL AND ds.[ConnectionString] IS NULL; When I just ran this on my own machine, having deleted a data source to check my code, I noticed a Report Model in the list as well – so if you had thought it was just going to be reports that were broken, you’d be forgetting something. So to fix those reports, get your new data source created in the catalogue, and then find its ItemID by querying Catalog, using Path and Name to find it. And then use this value to fix them up. To fix the Flags field, just add 2. I prefer to use bitwise OR which should do the same. Use the OUTPUT clause to get a copy of the DSIDs of the ones you’re changing, just in case you need to revert something later after testing (doing it all in a transaction won’t help, because you’ll just lock out the table, stopping you from testing anything). UPDATE ds SET [Flags] = [Flags] | 2, [Link] = '3AE31CBA-BDB4-4FD1-94F4-580B7FAB939D' /*Insert your own GUID*/ OUTPUT deleted.Name, deleted.DSID, deleted.ItemID, deleted.Flags FROM dbo.[DataSource] AS ds JOIN dbo.[Catalog] AS c ON c.ItemID = ds.ItemID WHERE ds.[Flags] = ds.[Flags] & 0x7FFFFFFD AND ds.[Link] IS NULL AND ds.[ConnectionString] IS NULL; But please be careful. Your mileage may vary. And there’s no reason why 400-odd broken reports needs to be quite the nightmare that it could be. Really, it should be less than five minutes. @rob_farley

    Read the article

  • Reference Data Management

    - by rahulkamath
    Oracle Data Relationship Management (DRM) has always been extremely powerful as an Enterprise MDM solution that can help manage changes to master data in a way that influences enterprise structure, whether it be mastering chart of accounts to
    enable financial transformation, or revamping organization structures to drive business transformation and operational efficiencies, or mastering sales territories in light of rapid-fire acquisitions that require frequent sales territory refinement, equitable distribution of leads and accounts to salespersons, and alignment of budget/forecast with results to optimize sales coverage. Increasingly, DRM is also being utilized by Oracle customers for reference data management, an emerging solution space that deserves some explanation.

    What is reference data?
    Reference data is a close cousin of master data. While master data may be more rapidly changing, requires consensus building across stakeholders and lends structure to business transactions, reference data is simpler, more slowly changing, but has semantic content that is used to categorize or group other information assets – including master data – and give them contextual value. The following list contains illustrative examples of reference data by type. Reference data types may include types and codes, business taxonomies, complex relationships & cross-domain mappings, or standards.

        Types & Codes:
        - Transaction Codes
        - Lookup Tables (e.g., Gender, Marital Status, etc.)
        - Status Codes
        - Role Codes
        - Domain Values

        Taxonomies:
        - Industry Classification Categories and Codes, e.g., North America Industry Classification System (NAICS)
        - Product Categories
        - Sales Territories (e.g., Geo, Industry Verticals, Named Accounts, Federal/State/Local/Defense)
        - Market Segments
        - Universal Standard Products and Services Classification (UNSPSC), eCl@ss

        Relationships / Mappings:
        - Product / Segment; Product / Geo
        - City -> State -> Postal Codes
        - Customer / Market Segment; Business Unit / Channel
        - Country Codes / Currency Codes / Financial Accounts
        - International Classification of Diseases (ICD), e.g., ICD-9 -> ICD-10 mappings

        Standards:
        - Calendars (e.g., Gregorian, Fiscal, Manufacturing, Retail, ISO8601)
        - Currency Codes (e.g., ISO)
        - Country Codes (e.g., ISO 3166, UN)
        - Date/Time, Time Zones (e.g., ISO 8601)
        - Tax Rates

    Why manage reference data?
    Reference data carries contextual value and meaning and therefore its use can drive business logic that helps execute a business process, create a desired application behavior or provide meaningful segmentation to analyze transaction data. Further, mapping reference data often requires human judgment.

    Sample Use Cases of Reference Data Management

    Healthcare: Diagnostic Codes
    The reference data challenges in the healthcare industry offer a case in point. Part of being HIPAA compliant requires medical practitioners to transition diagnosis codes from ICD-9 to ICD-10, a medical coding scheme used to classify diseases, signs and symptoms, causes, etc. The transition to ICD-10 has a significant impact on business processes, procedures, contracts, and IT systems. Since both code sets ICD-9 and ICD-10 offer diagnosis codes of very different levels of granularity, human judgment is required to map ICD-9 codes to ICD-10. The process requires collaboration and consensus building among stakeholders much in the same way as does master data management. Moreover, to build reports to understand utilization, frequency and quality of diagnoses, medical practitioners may need to "cross-walk" mappings -- either forward to ICD-10 or backwards to ICD-9 depending upon the reporting time horizon.
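
    To make the "cross-walk" idea concrete: the forward map from the old code set to the new one is typically one-to-many, and the backward map is derived by inverting it. A tiny illustrative sketch (the codes used here are placeholders, not a real ICD crosswalk):

        from collections import defaultdict

        # Hypothetical forward crosswalk: one ICD-9 code may map to several ICD-10 codes.
        FORWARD = {
            "250.00": ["E11.9"],
            "493.90": ["J45.909", "J45.998"],
        }

        # The backward crosswalk is derived by inverting the forward one.
        BACKWARD = defaultdict(list)
        for icd9, icd10_codes in FORWARD.items():
            for icd10 in icd10_codes:
                BACKWARD[icd10].append(icd9)

        print(FORWARD["493.90"])   # forward: report against ICD-10
        print(BACKWARD["E11.9"])   # backward: report against ICD-9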
    Spend Management: Product, Service & Supplier Codes
    Similarly, as an enterprise looks to rationalize suppliers and leverage their spend, conforming supplier codes, as well as product and service codes, requires supporting multiple classification schemes that may include industry standards (e.g., UNSPSC, eCl@ss) or enterprise taxonomies. Aberdeen Group estimates that 90% of companies rely on spreadsheets and manual reviews to aggregate, classify and analyze spend data, and that data management activities account for 12-15% of the sourcing cycle and consume 30-50% of a commodity manager's time. Creating a common map across the extended enterprise to rationalize codes across procurement, accounts payable, general ledger, credit card, procurement card (P-card) as well as ACH and bank systems can cut sourcing costs, improve compliance, lower inventory stock, and free up talent to focus on value-added tasks.

    Specialty Finance: Point of Sale Transaction Codes and Product Codes
    In the specialty finance industry, enterprises are confronted with usury laws – governed at the state and local level – that regulate financial product innovation as it relates to consumer loans, check cashing and pawn lending. To comply, it is important to demonstrate that transactions booked at the point of sale are posted against valid product codes that were on offer at the time of booking the sale. Since new products are being released in a steady stream, it is important to ensure timely and accurate mapping of point-of-sale transaction codes to the appropriate product and GL codes to comply with the changing regulations.

    Multi-National Companies: Industry Classification Schemes
    As companies grow and expand across geographies, a typical challenge they encounter with reference data is reconciling the various versions of industry classification schemes in use across nations. While the United States, Mexico and Canada conform to the North American Industry Classification System (NAICS) standard, European Union countries use different variants of the NACE industry classification scheme. Multi-national companies must manage the individual national NACE schemes and reconcile the differences across countries. Enterprises must invest in a reference data change management application to address the challenge of distributing reference data changes to downstream applications and assessing which applications were impacted by a given change.

    Read the article

  • Partner Webcast - Oracle Data Integration Competency Center (DICC): A Niche Market for services

    - by Thanos Terentes Printzios
    Market success now depends on data integration speed. This is why we collected best practices from the most advanced IT leaders, simply to prove that a Data Integration competency center should be the primary new IT team you establish. This is a niche market with unlimited potential for partners to become the much-needed data integration services provider trusted by customers. We would like to elaborate with OPN Partners on the Business Value Assessment and Total Economic Impact of the Data Integration Platform for End Users, while justifying re-organizing your IT services teams. We are happy to share our research on:

        - The economic impact of a data integration platform/competency center
        - Justifying the strongest reasons and differentiators, using numeric analysis and best practice from customer case studies in specific industries
        - Utilizing diagnostics and health-check analysis in building a business case for your customers
        - What exactly is so special in the technology of Oracle Data Integration
        - The impact of growing data volume and the number of data sources
        - Analysis of the usual solutions implemented so far, addressing key challenges and mistakes

    During this partner webcast we will balance business-case-centric content with extensive numerical ROI analysis. Join us to find out how to build a unified approach to moving/sharing/integrating data across the enterprise and why this is an important new services opportunity for partners.

    Agenda:
        - Data Integration Competency Center
        - Oracle Data Integration Solution Overview
        - Services Niche Market for OPN
        - Summary Q&A

    Delivery Format: This FREE online LIVE eSeminar will be delivered over the Web. Registrations received less than 24 hours prior to start time may not receive confirmation to attend.

    Presenter: Milomir Vojvodic, EMEA Senior Business Development Manager for Oracle Data Integration Product Group
    Date: Thursday, September 4th, 10am CEST (8am UTC / 11am EEST)
    Duration: 1 hour

    Register today. For any questions please contact us at [email protected]

    Read the article
