Search Results

Search found 64506 results on 2581 pages for 'live data'.


  • Ideal data structure/techniques for storing generic scheduler data in C#

    - by GraemeMiller
    I am trying to implement a generic scheduler object in C# 4 which will output a table in HTML. The basic aim is to show some object along with various attributes, and whether it was doing something in a given time period. The scheduler will output a table with the headers: Detail Field 1…N | Date 1…N. I want to initialise the table with a start date and an end date to create the date range (ideally it could also handle other time periods, e.g. hours, but that isn't vital). I then want to provide a generic object which will have associated events. Where an object has events within the period, I want the corresponding table cell to be marked, e.g.:

    Name  Height  Weight  1/1/2011  2/1/2011  3/1/2011 ... 31/1/2011
    Ben   5.11    75      X         X         X
    Bill  5.7     83      X         X

    So I created the scheduler with start date 1/1/2011 and end date 31/1/2011. I'd like to give it my person object (already sorted) and tell it which fields I want displayed (Name, Height, Weight). Each person has events, which have a start date and an end date. Some events will start and end outside the range, but they should still be shown on the relevant dates. Ideally I'd also like to be able to provide it with, say, a class-booking object, so I'm trying to keep it generic. I have seen JavaScript implementations of similar things. What would a good data structure be for this? Any thoughts on techniques I could use to make it generic? I am not great with generics, so any tips are appreciated.
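
    Not a definitive design, just one minimal sketch of a shape that could work here (ScheduledEvent, Scheduler<T> and the delegate parameters are invented names, not from any library): each row is built from a generic item, a list of field selectors for the detail columns, and a function that yields the item's events; marking a cell is then an interval-overlap test per day.

    using System;
    using System.Collections.Generic;
    using System.Linq;

    public class ScheduledEvent
    {
        public DateTime Start { get; set; }
        public DateTime End { get; set; }
    }

    public class Scheduler<T>
    {
        private readonly DateTime _start, _end;
        private readonly IList<Func<T, object>> _detailFields;
        private readonly Func<T, IEnumerable<ScheduledEvent>> _eventsOf;

        public Scheduler(DateTime start, DateTime end,
                         IList<Func<T, object>> detailFields,
                         Func<T, IEnumerable<ScheduledEvent>> eventsOf)
        {
            _start = start.Date;
            _end = end.Date;
            _detailFields = detailFields;
            _eventsOf = eventsOf;
        }

        // One output row: the detail values followed by one X/blank cell per day.
        public IEnumerable<string> BuildRow(T item)
        {
            foreach (var field in _detailFields)
                yield return Convert.ToString(field(item));

            var events = _eventsOf(item).ToList();
            for (var day = _start; day <= _end; day = day.AddDays(1))
            {
                // An event marks this day if it overlaps [day, day + 1); events that
                // start or end outside the range still mark the days they cover.
                bool busy = events.Any(e => e.Start < day.AddDays(1) && e.End >= day);
                yield return busy ? "X" : "";
            }
        }
    }

    Wiring it to the person example would then look something like new Scheduler<Person>(start, end, new List<Func<Person, object>> { p => p.Name, p => p.Height, p => p.Weight }, p => p.Events); a class-booking object would only need its own selectors and events function.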


  • Ubuntu live cd : black screen and blinking cursor

    - by IFasel
    I am trying to install Ubuntu 12.04 on my computer. I can get to the purple screen on the live CD, but then, if I choose "Install Ubuntu", I get a black screen with a blinking cursor (and nothing else happens). My PC: Acer Aspire M3920, CPU i5-2300, 8 GB RAM, NVIDIA GT 405. What I already tried: 12.04 and the 13.04 daily build; a live USB and a live DVD; the boot options nomodeset and acpi=off. I googled a lot and it seems it could be a graphics card problem. Do you know any other boot options I could try?

    UPDATE: This is not a duplicate: I've tried all the common boot options (nomodeset, noacpi...) and they don't change anything. With the option "no splash" (instead of "quiet splash"), I can see what happens before the forever-blinking cursor:

        [sdg] no caching mode present
        [sdg] assuming drive cache: write through
        ata8.00: exception Emask 0x52 ... frozen
        ata8: SError: { RecovData RecovComm UnrecovData ... }
        ata8.00: failed command: IDENTIFY PACKET DEVICE ...
        ata8.00: status: { DRDY }
        ata8: hard resetting link

    Does somebody know what this means? N.B. astonishingly, Puppy Linux boots fine (but Debian, Fedora and Ubuntu do not).

    SOLUTION: In fact, it was not a graphics card problem. I had to disconnect the DVD drive and connect it to another free SATA connector (I don't really understand why Ubuntu had trouble with this connector and Windows 7 did not). After that, everything worked fine.


  • HTML5 video - Safari (Mac) - live HLS stream - seek in live buffer

    - by bsnote
    I have the following HTML:

    <head>
    </head>
    <body>
      <button type="button" onclick="onSeek();">Seek</button>
      <video id="video" src="http://<host>/somevideo.m3u8" autoplay controls>
      </video>
      <script src="http://code.jquery.com/jquery-1.7.2.min.js"></script>
      <script>
        function onSeek() {
          var video = $('#video')[0];
          video.currentTime = -300;
        }
      </script>
    </body>

    When I open the page, the m3u8 playlist already has about 100 media segments (5 min). Since it's a live video, it starts playing from one of the last media segments and video.currentTime is set to 0. video.seekable.start(i) and video.seekable.end(i) show the seekable range, which is 0..currentTime. How can I seek to the previous media segments, say 5 min back? video.currentTime = -300 doesn't work; it resets currentTime to 0.
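
    A hedged sketch of the usual workaround (assuming Safari actually exposes the earlier segments in the seekable range; if seekable.start(0) equals the live edge, the player simply isn't offering them and no assignment will reach them): currentTime takes an absolute position on the stream timeline, never a negative offset, so compute the target from the seekable range and clamp it.

    function onSeek() {
      var video = document.getElementById('video');
      if (video.seekable.length === 0) return; // nothing buffered yet

      var start = video.seekable.start(0);
      var end = video.seekable.end(video.seekable.length - 1);

      // Jump 5 minutes back on the stream timeline, but never
      // before the earliest seekable position.
      video.currentTime = Math.max(start, end - 300);
    }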


  • Accessing SQL Data Services via ADO.NET Data Service Client Library

    - by Mehmet Aras
    Is this possible? Basically, I would like to use the SQL Data Services REST interface and let the ADO.NET Data Service client library handle the communication details and generate the entities that I can use. I looked at the samples in the February release of the Azure services kit, but the samples in there use HttpWebRequest and HttpWebResponse to consume SQL Data Services RESTfully. I was hoping to use the ADO.NET Data Service client library to abstract the low-level details away.
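
    For what it's worth, the pattern being hoped for would look roughly like the sketch below (the service URI and the Customer entity are hypothetical, and whether SDS's REST dialect accepted the generic client's requests was exactly the open question at the time):

    using System;
    using System.Data.Services.Client;
    using System.Linq;

    // Hypothetical entity shape standing in for an SDS entity.
    public class Customer
    {
        public string Id { get; set; }
        public string Name { get; set; }
    }

    class Program
    {
        static void Main()
        {
            // Point the generic ADO.NET Data Services client at the endpoint
            // (URI is illustrative only).
            var ctx = new DataServiceContext(
                new Uri("https://myauthority.data.example.net/v1/"));

            // CreateQuery builds the HTTP GET and deserializes the response;
            // the client library handles the wire-level details.
            var customers = ctx.CreateQuery<Customer>("Customers").ToList();

            foreach (var c in customers)
                Console.WriteLine(c.Name);
        }
    }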


  • Bridging Two Worlds: Big Data and Enterprise Data

    - by Dain C. Hansen
    The big data world is all the vogue in today's IT conversations. It's a world of volume, velocity, variety – tantalizing us with its untapped potential. It's a world of transformational, game-changing technologies that have already begun to alter the information management landscape. One of the reasons that big data is so compelling is that it's a universal challenge that impacts every one of us. Whether it is healthcare, financial, manufacturing, government, or retail, big data presents a pressing problem for many industries: how can so much information be processed so quickly to deliver the 'bigger' picture? With big data we're tapping into new information that didn't exist before: social data, weblogs, sensor data, complex content, and more. What also makes big data revolutionary is that it turns traditional information architecture on its head, putting into question commonly accepted notions of where and how data should be aggregated, processed, analyzed, and stored. This is where Hadoop and NoSQL come in – new technologies which solve new problems for managing unstructured data.

    And now for some worst practices that I'd recommend you please not follow:

    Worst Practice Lesson 1: Throw away everything that you already know about data management and data integration tools, and start completely over. One shouldn't forget what's already running in today's IT. Today's business analytics, data warehouses, business applications (ERP, CRM, SCM, HCM), and even many social, mobile, and cloud applications still rely almost exclusively on structured data – or what we'd like to call enterprise data. This dilemma is what today's IT leaders are up against: what are the best ways to bridge enterprise data with big data? And what are the best strategies for dealing with the complexities of these two unique worlds?

    Worst Practice Lesson 2: Throw away all of your existing business applications … because they don't run on big data yet. Bridging the two worlds of big data and enterprise data means considering solutions that are complete, based on emerging Hadoop technologies (as well as traditional ones), and poised for success through integrated design tools and integrated platforms that connect to your existing business applications and support real-time analytics. Leveraging these types of best practices translates to improved productivity, lower TCO, IT optimization, and better business insights.

    Worst Practice Lesson 3: Separate out [and keep separate] your big data sandboxes from all the current enterprise IT systems. Don't mix sand among playgrounds. We didn't tell you that you wouldn't get dirty doing this. Correlation between the two worlds is key. The real advantage to analyzing big data comes when you can correlate it with the existing data in your data warehouse or your current applications to make sense of the larger patterns.

    If you have not followed these worst practices 1-3, then you qualify for the first step of our journey: bridging the two worlds of enterprise data and big data. Over the next several weeks we'll be discussing this topic along with several others around big data as it relates to data integration. We welcome you to join the conversation by following us on Twitter at #BridgingBigData or downloading our latest white paper and resource kit: Big Data and Enterprise Data: Bridging Two Worlds.


  • SQL SERVER – Data Sources and Data Sets in Reporting Services SSRS

    - by Pinal Dave
    This example is from Beginning SSRS by Kathi Kellenberger. Supporting files are available with a free download from the www.Joes2Pros.com web site.

    Connecting to Your Data? When I was a child, the telephone book was an important part of my life. Maybe I was just a nerd, but I enjoyed getting a new book every year to page through to learn about the businesses in my small town or to discover where some of my school acquaintances lived. It was also the source of maps to my town's neighborhoods and the towns that surrounded me. To make a phone call, I would need a telephone number. In order to find a telephone number, I had to know how to use the telephone book. That seems pretty simple, but it resembles connecting to any data: you have to know where the data is and how to interact with it. A data source is the connection information that the report uses to connect to the database. You have two choices when creating a data source: whether to embed it in the report or to make it a shared resource usable by many reports.

    Data Sources and Data Sets. A few basic terms will make the upcoming choices make more sense. What database on what server do you want to connect to? It would be better to just ask, "what is your data source?" The connection you need to make to get your report's data is called a data source. If you connected to a data source (like the JProCo database) there may be hundreds of tables; you probably only want data from just a few of them. This means you want to write a specific query against this data source. A query on a data source to get just the records you need for an SSRS report is called a data set.

    Creating a Local Data Source. You can embed a connection from your report directly to your JProCo database, which (let's say) is installed on a server named Reno. If you move JProCo to a new server named Tampa, then you need to update the data set. If you have 10 reports in one project that were all pointing to the JProCo database on the Reno server, then they would all need to be updated at once. It's possible to make a project-level data source and have each report use that. This means one change can fix all 10 reports at once. This is called a shared data source.

    Creating a Shared Data Source. The best advice I can give you is to create shared data sources. The reason I recommend this is that if a database moves to a new server you will have just one place in Report Manager to make the server name change. That one change will update the connection information in all the reports that use that data source. To get started, you will start with a fresh project. Go to Start > All Programs > SQL Server 2012 > Microsoft SQL Server Data Tools to launch SSDT. Once SSDT is running, click New Project to create a new project. Once the New Project dialog box appears, fill in the form as shown. Be sure to select Report Server Project this time – not the wizard. Click OK to dismiss the New Project dialog box. You should now have an empty project, as shown in the Solution Explorer.

    A report is meant to show you data. Where is the data? The first task is to create a shared data source. Right-click on the Shared Data Sources folder and choose Add New Data Source. The Shared Data Source Properties dialog box will launch, where you can fill in a name for the data source. By default, it is named DataSource1. The best practice is to give the data source a more meaningful name: it is possible that you will have projects with more than one data source and, by naming them, you can tell one from another. Type the name JProCo for the data source name and click the Edit button to configure the database connection properties. If you take a look at the types of data sources you can choose, you will see that SSRS works with many data platforms, including Oracle, XML, and Teradata. Make sure SQL Server is selected before continuing.

    For this post, I am assuming that you are using a local SQL Server and that you can use your Windows account to log in to the SQL Server. If, for some reason, you must use SQL Server authentication, choose that option and fill in your SQL Server account credentials; otherwise, just accept Windows authentication. If your database server was installed locally and with the default instance, just type in Localhost for the server name. Select the JProCo database from the database list. If you have installed a named instance of SQL Server, you will have to specify the server name like this: Localhost\InstanceName, replacing InstanceName with whatever your instance name is. If you are not sure about the named instance, launch the SQL Server Configuration Manager found at Start > All Programs > Microsoft SQL Server 2012 > Configuration Tools. If you have a named instance, the name will be shown in parentheses: a default instance of SQL Server will display MSSQLSERVER; a named instance will display the name chosen during installation. Once you get the connection properties filled in, click OK to dismiss the Connection Properties dialog box and OK again to dismiss the Shared Data Source Properties. You now have a data source in the Solution Explorer.

    What's next? I really need to thank Kathi Kellenberger and Rick Morelan for sharing this material for this 5-day series of posts on SSRS. To get really comfortable with SSRS you will get to know the different SSDT windows, build reports on your own (without the wizards), add report headers and footers, accept user input, and create levels, charts, or even maps for visual appeal. You might be surprised to know that a small 230-page book starts from the very beginning and covers the steps to do all these items. Beginning SSRS 2012 is a small, easy-to-follow book, so you can learn SSRS for less than $20. See Joes2Pros.com for more on this and other books. If you want to learn SSRS in easy, simple words, I strongly recommend you get the Beginning SSRS book from Joes 2 Pros.

    Reference: Pinal Dave (http://blog.sqlauthority.com) Filed under: PostADay, SQL, SQL Authority, SQL Query, SQL Server, SQL Tips and Tricks, T SQL Tagged: Reporting Services, SSRS


  • Windows 8 Live Accounts and the actual Windows Account

    - by Rick Strahl
    As if Windows security wasn't confusing enough, in Windows 8 we get thrown yet another curve ball with Windows Live accounts to log on. When I set up my Windows 8 machine I originally set it up with a 'real', non-Live account that I always use on my Windows machines. I did this mainly so I have a matching account for resources around my home and intranet network, so I could log on to network resources properly. At some point later I decided to set up Windows Live security just to see how it changes things.

    Windows wants you to use Windows Live. Windows Live logins are required in order for the Windows RT account info to work. Not that I care – since installing Windows 8 I've maybe spent 10 minutes with Windows RT because – well, it's pretty freaking sucky on the desktop. From shitty apps to mis-managed screen real estate I can't say that there's anything compelling there to date, but then I haven't looked that hard either. Anyway… I set up the Windows Live account to see if that changes things. It does – I do get all my Live logins to work from the Live account, so Twitter and Facebook posts and pictures and calendars all show up on live tiles on the start screen and in the actual apps. That's nice-ish, but hardly that exciting given that all of the apps tied to those live tiles are average at best. And it would have been nice if all of this could be done without being forced into running with a Windows Live user account – this all feels like strong-arming you into moving into Microsoft's walled garden… and that's probably what it's meant to do.

    Who am I? The real problem to me, though, is that these Windows Live and raw Windows user accounts are a bit unpredictable, especially when it comes to developer information about the account and which credentials to use. For example, Windows reports folder security showing my Windows Live account. Now if I go to Edit and try to add my Windows user account (rstrahl), it'll just automatically show up as the Live account. On the other hand, the underlying system sees everything as my real Windows account. After I switched to a Windows Live login account and have to log in to Windows with my Live account, what do you suppose this returns?

    Console.WriteLine(Environment.UserName);

    It returns my raw Windows user account (rstrahl). All my permissions, all my actual settings and the desktop console altogether run under that account. If I look in Task Manager (or Process Explorer for me), I see everything running on the desktop shell under my Windows user account. I suppose it makes sense, but where is that association happening? When I switched to a Windows Live account, nowhere did I associate my real account with the Live account – it just happened. And looking through the account configuration dialogs I can't find any reference to the raw Windows account. Other than switching back, I see no mention anywhere of the raw Windows account – everything refers to the Live account. Right then, clear as potato soup!

    So this is who you really are! The problem is that in some situations this schizophrenic account behavior gets a bit weird. Today I was running a local web application in IIS that uses Windows Authentication – I tried to log in with my real Windows account login because that's what I'm used to using with WINDOWS freaking Authentication through IIS. But… it failed. I checked my IIS settings and my app's login settings, and I just could not for the life of me get into the site with my Windows username. That is, until I finally realized that I should try using my Windows Live credentials instead. And that worked. So now in this Windows Authentication dialog I had to type in my Live ID and password, which is – just weird. Then in IIS, if I look at a trace page (or in my case my app's status page), I see that the logged-on account is – my Windows user account.

    What's really annoying about this is that in some places it uses the Live account and in other places it uses my Windows account. If I remote-desktop into my web server online, I have to use the local authentication dialog, but I have to put in my real Windows credentials, not the Live account. Oh yes, it's all so terribly intuitive and logical…

    So in summary: when you log on with a Live account, you are actually mapped to an underlying Windows user. In any application, if you check the user name it'll be the underlying user account (not sure what happens in a Windows RT app, or even what mechanism is used there to get the user name info). When logging on to local machine resources with user name and password, you have to use your Live IDs even if the permissions on the resources are mapped to your underlying Windows account. Easy enough I suppose, but still not exactly intuitive behavior…

    © Rick Strahl, West Wind Technologies, 2005-2012. Posted in Windows


  • Screen split oddly when OS is installed from a Live CD

    - by Brian
    Never used Linux before, but I decided I wanted to start somewhere and Ubuntu seemed like the right place to start. I burned the 64-bit version ISO onto a CD and installed it onto a fresh new hard drive I have. The live CD works and the live distro session works great. However, when I attempt to install it, the screen splits oddly, in a way where part of the right side appears on the left. Furthermore, after attempting to log in, the GUI doesn't show up and the computer freezes and stays that way, although you can still move the cursor. I can't really get in or tinker around (possibly because I don't know how), since I am not too tech savvy, but I can follow instructions. Graphics card: Radeon HD 6670. I also just tried the install-from-within-Windows option; there was the exact same problem.


  • Ubuntu Live USB: best practices for secure net traffic

    - by Och
    I want to set up a live USB with Ubuntu in the most secure way. I want to have the persistent data on a second USB, which isn't much of a problem. How do I configure very safe Internet surfing (through a VPN?)? Which best practices could be implemented to have the Ubuntu live system on one USB, the persistent data on another, and Internet access through a VPN? (Ubuntu Privacy Remix gives most of this, except the VPN configuration.) Any ideas on how to combine the best of Ubuntu Privacy Remix with Internet access through a VPN?


  • Booting from Live USB leads to command line instead of menu

    - by Alek
    I'm trying to install 12.04 x64 from a USB stick. I created the live USB as per the instructions from ubuntu.com; however, after choosing to boot from USB I get stuck at a command line with a GRUB prompt. When I type "install", it replies that "no kernel is loaded". On my other PC everything is fine and the live USB boots just fine. The one I'm trying to install to has a UEFI motherboard, so maybe that's the problem? However, fiddling with the UEFI settings in the board's setup (UEFI/Legacy) has no visible effect.
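
    In case it helps anyone hitting the same prompt, here is a hedged sketch of booting the installer by hand from that GRUB prompt (the device name and the /casper paths are assumptions that vary by image; use ls to see what GRUB actually finds):

    grub> ls
    (pick the USB partition from the output, e.g. (hd1,msdos1))
    grub> set root=(hd1,msdos1)
    grub> linux /casper/vmlinuz boot=casper quiet splash
    grub> initrd /casper/initrd.lz
    grub> boot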


  • Live CD won't start with UEFI

    - by skytreader
    I'm trying to dual-boot my Windows 8 machine with Ubuntu 12.04, but I cannot even get to the live CD under UEFI. I've already set my UEFI boot loader to load from the DVD drive before anything else, but it keeps loading Windows 8 first thing. I've checked that the Ubuntu installer I am using works by setting the BIOS to legacy boot; under this setting, I can get to the live CD, but it cannot detect Windows 8 – something I do not want to happen. Just for the record, my boot order is as follows:

    ATAPI CDROM: HL-DL-ST DVDRAM GT51N
    USB CDROM
    USB FDD
    USB HDD
    HDD: TOSHIBA [SERIAL NUMBER]
    Network Boot-IPV4
    Network Boot-IPV6
    Windows Boot Manager

    Has anyone run into the same problems with UEFI? What am I missing here?


  • Make a live USB of your installed ubuntu

    - by Eray Tuncer
    Okay, let me clarify my point. Let's assume I want to make an internet-cafe-like system. Every user will be able to log in to Linux via only one user account. However, I do not want them to leave anything behind: whenever this Ubuntu boots, I want the system to remove their leftovers. The reason I mentioned the live USB is that I have seen that a live USB or CD does what I want, but not exactly, because I also want to install some apps on it and make some configuration changes, like the firewall and permissions.


  • DualBoot 32Win & 64Mac from Live USB +Persistance

    - by josephsmendoza
    So I have a 32-bit live USB with persistence that I use to write code, cause that's just how I roll. I can boot it on my school computers (32-bit Windows) no problem, but obviously not on my Mac (2007 iMac, 64-bit). Currently I use VMware Fusion 6 Pro and a Plop Linux image to use the USB at home, but I can only get 900 MB of RAM for my VM. I was hoping to make a live USB that can boot both 32-bit machines and the 64-bit Mac, with a shared persistence file; this way I can use my computer's full 2 GB. Also, I'm not allowed to edit any part of this Mac. Do not reply telling me it's impossible; please, only solutions. Thank you, and have a unicorntastic day!


  • Live USB does not work

    - by MARUF SARKER
    I've made a live USB using Universal-USB-Installer-1.9.1.9, unetbootin-windows-581, YUMI-0.0.8.0, and LinuxLive USB Creator 2.8.18, in FAT32 format, but unfortunately I could not boot from the pen drive. My BIOS supports booting from a USB drive. When I boot from the live USB, it just shows a black screen with a blinking "_", and nothing happens after that (I have waited more than 10 minutes). So, can anyone please help me with this? [Additional info: I made a bootable USB using PowerISO and that booted, but it was not possible to access the USB normally afterwards, because PowerISO formatted the USB drive as RAW (or something, I don't know) and it shrank from 8 GB to 700 MB; afterwards I had to reformat it using MiniTool Partition Manager.]


  • PostgreSQL to Data-Warehouse: Best approach for near-real-time ETL / extraction of data

    - by belvoir
    Background: I have a PostgreSQL (v8.3) database that is heavily optimized for OLTP. I need to extract data from it on a semi-real-time basis (someone is bound to ask what semi-real-time means; the answer is as frequently as I reasonably can, but I will be pragmatic – as a benchmark, let's say we are hoping for every 15 minutes) and feed it into a data warehouse.

    How much data? At peak times we are talking approx 80-100k rows per minute hitting the OLTP side; off-peak this drops significantly to 15-20k. The most frequently updated rows are ~64 bytes each, but there are various tables etc., so the data is quite diverse and can range up to 4000 bytes per row. The OLTP is active 24x5.5.

    Best solution? From what I can piece together, the most practical solution is as follows (a trigger sketch is shown after this list):

    - Create a TRIGGER to write all DML activity to a rotating CSV log file
    - Perform whatever transformations are required
    - Use the native DW data-pump tool to efficiently pump the transformed CSV into the DW

    Why this approach? TRIGGERs allow selective tables to be targeted rather than being system-wide, their output is configurable (i.e. into a CSV), and they are relatively easy to write and deploy. SLONY uses a similar approach and the overhead is acceptable. CSV is easy and fast to transform, and easy to pump into the DW.

    Alternatives considered:

    - Using native logging (http://www.postgresql.org/docs/8.3/static/runtime-config-logging.html). The problem with this is that it looked very verbose relative to what I needed and was a little trickier to parse and transform. However, it could be faster, as I presume there is less overhead compared to a TRIGGER. It would certainly make the admin easier, as it is system-wide, but again, I don't need some of the tables (some are used for persistent storage of JMS messages which I do not want to log).
    - Querying the data directly via an ETL tool such as Talend and pumping it into the DW. The problem is that the OLTP schema would need tweaking to support this, and that has many negative side effects.
    - Using a tweaked/hacked SLONY. SLONY does a good job of logging and migrating changes to a slave, so the conceptual framework is there, but the proposed solution just seems easier and cleaner.
    - Using the WAL.

    Has anyone done this before? Want to share your thoughts?
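
    A minimal sketch of the trigger side of that pipeline (the orders table and its id/amount columns are made up, and a real version would also handle UPDATE/DELETE payloads and cover every audited table): each DML statement appends to a change-log table, which a cron job can dump to CSV with COPY and then truncate.

    -- Change-log table the triggers append to.
    CREATE TABLE change_log (
        logged_at   timestamptz NOT NULL DEFAULT now(),
        table_name  text        NOT NULL,
        operation   text        NOT NULL,
        row_data    text        NOT NULL
    );

    -- Trigger function: record the interesting columns as CSV-ish text.
    CREATE OR REPLACE FUNCTION log_dml() RETURNS trigger AS $$
    BEGIN
        INSERT INTO change_log (table_name, operation, row_data)
        VALUES (TG_TABLE_NAME, TG_OP, NEW.id || ',' || NEW.amount);
        RETURN NEW;
    END;
    $$ LANGUAGE plpgsql;

    -- Attach it to each table to be extracted.
    CREATE TRIGGER orders_dml_log
        AFTER INSERT OR UPDATE ON orders
        FOR EACH ROW EXECUTE PROCEDURE log_dml();

    -- Periodic extraction, e.g. every 15 minutes from cron, then clear the log:
    --   COPY change_log TO '/var/tmp/change_log.csv' WITH CSV;
    --   TRUNCATE change_log;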


  • Reference Data Management and Master Data: Are They Related?

    - by Mala Narasimharajan
    Submitted by: Rahul Kamath

    Oracle Data Relationship Management (DRM) has always been extremely powerful as an Enterprise Master Data Management (MDM) solution that can help manage changes to master data in a way that influences enterprise structure, whether it be mastering the chart of accounts to enable financial transformation, revamping organization structures to drive business transformation and operational efficiencies, restructuring sales territories to enable equitable distribution of leads to sales teams following the acquisition of new products, or adding additional cost centers to enable fine-grained control over expenses. Increasingly, DRM is also being utilized by Oracle customers for reference data management, an emerging solution space that deserves some explanation.

    What is reference data? How does it relate to master data? Reference data is a close cousin of master data. While master data is challenged with problems of unique identification, may be more rapidly changing, requires consensus building across stakeholders, and lends structure to business transactions, reference data is simpler and more slowly changing, but has semantic content that is used to categorize or group other information assets – including master data – and gives them contextual value. In fact, the creation of a new master data element may require new reference data to be created. For example, when a European company acquires a US business, chances are that they will now need to adapt their product line taxonomy to include a new category to describe the newly acquired US product line. Further, the cross-border transaction will also result in a revised geo hierarchy. The addition of new products represents changes to master data, while changes to product categories and geo hierarchy are examples of reference data changes.[1]

    The following list contains illustrative examples of reference data by type. Reference data types may include types and codes, business taxonomies, complex relationships and cross-domain mappings, or standards.

    - Types & Codes: transaction codes; lookup tables (e.g., gender, marital status); status codes; role codes; domain values
    - Taxonomies: industry classification categories and codes (e.g., the North American Industry Classification System, NAICS); product categories; sales territories (e.g., geo, industry verticals, named accounts, federal/state/local/defense); market segments; Universal Standard Products and Services Classification (UNSPSC), eCl@ss
    - Relationships / Mappings: product/segment and product/geo; city → state → postal codes; customer/market segment and business unit/channel; country codes / currency codes / financial accounts; International Classification of Diseases (ICD) mappings, e.g., ICD-9 → ICD-10
    - Standards: calendars (e.g., Gregorian, fiscal, manufacturing, retail, ISO 8601); currency codes (e.g., ISO); country codes (e.g., ISO 3166, UN); date/time and time zones (e.g., ISO 8601); tax rates

    Why manage reference data? Reference data carries contextual value and meaning, and therefore its use can drive business logic that helps execute a business process, create a desired application behavior, or provide meaningful segmentation to analyze transaction data. Further, mapping reference data often requires human judgment.

    Sample use cases of reference data management:

    Healthcare: Diagnostic Codes. The reference data challenges in the healthcare industry offer a case in point. Part of being HIPAA compliant requires medical practitioners to transition diagnosis codes from ICD-9 to ICD-10, a medical coding scheme used to classify diseases, signs and symptoms, causes, etc. The transition to ICD-10 has a significant impact on business processes, procedures, contracts, and IT systems. Since the code sets ICD-9 and ICD-10 offer diagnosis codes of very different levels of granularity, human judgment is required to map ICD-9 codes to ICD-10. The process requires collaboration and consensus building among stakeholders, much in the same way as master data management. Moreover, to build reports to understand utilization, frequency and quality of diagnoses, medical practitioners may need to "cross-walk" mappings – either forward to ICD-10 or backwards to ICD-9, depending upon the reporting time horizon.

    Spend Management: Product, Service & Supplier Codes. Similarly, as an enterprise looks to rationalize suppliers and leverage their spend, conforming supplier codes, as well as product and service codes, requires supporting multiple classification schemes that may include industry standards (e.g., UNSPSC, eCl@ss) or enterprise taxonomies. Aberdeen Group estimates that 90% of companies rely on spreadsheets and manual reviews to aggregate, classify and analyze spend data, and that data management activities account for 12-15% of the sourcing cycle and consume 30-50% of a commodity manager's time. Creating a common map across the extended enterprise to rationalize codes across procurement, accounts payable, general ledger, credit card, procurement card (P-card), as well as ACH and bank systems, can cut sourcing costs, improve compliance, lower inventory stock, and free up talent to focus on value-added tasks.

    Change Management: Point-of-Sale Transaction Codes and Product Codes. In the specialty finance industry, enterprises are confronted with usury laws – governed at the state and local level – that regulate financial product innovation as it relates to consumer loans, check cashing, and pawn lending. To comply, it is important to demonstrate that transactions booked at the point of sale are posted against valid product codes that were on offer at the time of booking the sale. Since new products are being released in a steady stream, it is important to ensure timely and accurate mapping of point-of-sale transaction codes to the appropriate product and GL codes to comply with the changing regulations.

    Multi-National Companies: Industry Classification Schemes. As companies grow and expand across geographies, a typical challenge they encounter with reference data is reconciling the various versions of industry classification schemes in use across nations. While the United States, Mexico, and Canada conform to the North American Industry Classification System (NAICS) standard, European Union countries choose different variants of the NACE industry classification scheme. Multi-national companies must manage the individual national NACE schemes and reconcile the differences across countries. Enterprises must invest in a reference data change management application to address the challenge of distributing reference data changes to downstream applications and assessing which applications were impacted by a given change.

    References: [1] Master Data versus Reference Data, Malcolm Chisholm, April 1, 2006.


  • How to boot live iso images?

    - by virpara
    I found that it can be done with loopback as follows:

    menuentry "Lucid ISO" {
      loopback loop (hd0,1)/boot/iso/ubuntu-10.04-desktop-i386.iso
      linux (loop)/casper/vmlinuz boot=casper iso-scan/filename=/boot/iso/ubuntu-10.04-desktop-i386.iso noprompt noeject
      initrd (loop)/casper/initrd.lz
    }

    But this works only with Ubuntu or its derivatives. How should the entry be written if I want to boot other live images like Fedora, CentOS, openSUSE, etc.?

    Edit: I found some other entries, but all of them are probably Debian-based.

    menuentry "Linux Mint 10 Gnome ISO" {
      loopback loop /linuxmint10.iso
      linux (loop)/casper/vmlinuz file=/cdrom/preseed/mint.seed boot=casper initrd=/casper/initrd.lz iso-scan/filename=/linuxmint10.iso noeject noprompt splash --
      initrd (loop)/casper/initrd.lz
    }

    menuentry "DBAN ISO" {
      loopback loop /dban.iso
      linux (loop)/DBAN.BZI nuke="dwipe" iso-scan/filename=/dban.iso silent --
    }

    menuentry "Tinycore ISO" {
      loopback loop /tinycore.iso
      linux (loop)/boot/bzImage --
      initrd (loop)/boot/tinycore.gz
    }

    menuentry "SystemRescueCd" {
      loopback loop /systemrescuecd.iso
      linux (loop)/isolinux/rescuecd isoloop=/systemrescuecd.iso setkmap=us docache dostartx
      initrd (loop)/isolinux/initram.igz
    }

    Edit 2: How do I chainload GRUB and syslinux from GRUB 2?

    Edit 3: I want to boot other live images without any removable devices; I use GRUB 2, so I need menu entries specific to GRUB 2.
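
    Non-Debian images don't use casper, so the kernel arguments have to come from each distro's own initramfs conventions. As a heavily hedged illustration only (the kernel/initrd paths inside the ISO, the volume label, and whether the release's dracut understands iso-scan/filename= all vary by version, so check the ISO's isolinux/ directory and its own boot stanza first), a Fedora-style entry might look like:

    menuentry "Fedora Live ISO" {
      loopback loop /boot/iso/fedora-live.iso
      linux (loop)/isolinux/vmlinuz0 root=live:CDLABEL=Fedora-Live iso-scan/filename=/boot/iso/fedora-live.iso ro liveimg quiet
      initrd (loop)/isolinux/initrd0.img
    }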


  • Focus on Oracle Data Profiling and Data Quality 11g - 24/Feb/11

    - by Claudia Costa
    Thursday 24th February, 11am GMT

    Oracle offers an integrated suite of data quality software architected to discover and correct today's data quality problems and establish a platform prepared for tomorrow's yet unknown data challenges. Oracle Data Profiling provides data investigation, discovery, and profiling in support of quality, migration, integration, stewardship, and governance initiatives. It includes a broad range of features that expand upon basic profiling, including automated monitoring, business-rule validation, and trend analysis. Oracle Data Quality for Data Integrator provides cleansing, standardization, matching, address validation, location enrichment, and linking functions for global customer data and operational business data. It ensures that data adheres to established standards that are adaptable to fit each organization's specific needs. Both single-byte and double-byte data are processed in local languages to provide a unique and centralized view of customers, products, and services. During this briefing, Data Integration Solution Specialists will provide a technical overview and a walkthrough.

    Agenda:
    - Oracle Data Integration strategy overview
    - A focus on Oracle Data Profiling and Oracle Data Quality for Data Integrator
    - Live demo
    - Q&A

    This FREE online LIVE eSeminar will be delivered over the Web and conference call. Registrations received less than 24 hours prior to start time may not receive confirmation to attend. To register, click here. For any questions please contact [email protected]


  • No WIFI or LAN on Ubuntu 12.04 or 12.10 Live CD/USB using Toshiba qosmio x870

    - by Mighty
    I recently had issues with secure boot and couldn't boot the live CD/USB, but after disabling secure boot I was able to 'Try Ubuntu'. My current problem is that I can't access WiFi or LAN from either the Ubuntu 12.04 or 12.10 live CD/USB, which I can do from Windows 8. Also, the wireless button is able to turn the wireless LED on and off, but no available WiFi is found. Please, what should I do to get both WiFi and LAN working on Ubuntu on a Toshiba Qosmio X870? UPDATED: Here's the output of lspci:

    ubuntu@ubuntu:~$ lspci
    00:00.0 Host bridge: Intel Corporation 3rd Gen Core processor DRAM Controller (rev 09)
    00:01.0 PCI bridge: Intel Corporation Xeon E3-1200 v2/3rd Gen Core processor PCI Express Root Port (rev 09)
    00:02.0 VGA compatible controller: Intel Corporation 3rd Gen Core processor Graphics Controller (rev 09)
    00:14.0 USB controller: Intel Corporation 7 Series/C210 Series Chipset Family USB xHCI Host Controller (rev 04)
    00:16.0 Communication controller: Intel Corporation 7 Series/C210 Series Chipset Family MEI Controller #1 (rev 04)
    00:1a.0 USB controller: Intel Corporation 7 Series/C210 Series Chipset Family USB Enhanced Host Controller #2 (rev 04)
    00:1b.0 Audio device: Intel Corporation 7 Series/C210 Series Chipset Family High Definition Audio Controller (rev 04)
    00:1c.0 PCI bridge: Intel Corporation 7 Series/C210 Series Chipset Family PCI Express Root Port 1 (rev c4)
    00:1c.1 PCI bridge: Intel Corporation 7 Series/C210 Series Chipset Family PCI Express Root Port 2 (rev c4)
    00:1c.4 PCI bridge: Intel Corporation 7 Series/C210 Series Chipset Family PCI Express Root Port 5 (rev c4)
    00:1d.0 USB controller: Intel Corporation 7 Series/C210 Series Chipset Family USB Enhanced Host Controller #1 (rev 04)
    00:1f.0 ISA bridge: Intel Corporation HM76 Express Chipset LPC Controller (rev 04)
    00:1f.2 SATA controller: Intel Corporation 7 Series Chipset Family 6-port SATA Controller [AHCI mode] (rev 04)
    00:1f.3 SMBus: Intel Corporation 7 Series/C210 Series Chipset Family SMBus Controller (rev 04)
    01:00.0 VGA compatible controller: NVIDIA Corporation Device 1213 (rev a1)
    07:00.0 Ethernet controller: Atheros Communications Inc. AR8161 Gigabit Ethernet (rev 10)
    08:00.0 Network controller: Realtek Semiconductor Co., Ltd. Device 8723
    09:00.0 Unassigned class [ff00]: Realtek Semiconductor Co., Ltd. RTS5229 PCI Express Card Reader (rev 01)


  • ubuntu live cd start up error

    - by Emiel
    First off, I'm new to the Linux scene. This is my first attempt at a single-boot installation of Ubuntu. I tried it for a few days in dual boot with Win7 and I was sold, so I removed the tumor my PC had to endure for so long (sorry, laptop) and installed Ubuntu from a USB boot device. My dual boot was as follows: Windows 7 was installed on partition C of hdd1; the Windows installer for Ubuntu installed Ubuntu on partition I of that same hdd1. In the live CD installation I chose the normal option for removing Windows, and it said that after the installation my partition would be 320 GB, which is the total size of my hdd, so I automatically assumed it would format my whole hdd. Now the installation has completed and it tells me to restart my system, and here comes the problem: after the BIOS loads, I get a dashing white cursor on my screen and it won't budge... it just sits there and doesn't move on or load Ubuntu, and the system gets very hot at this point... Then I tried to reinstall using the same live CD (it is still on my USB drive), but when I boot from the USB I get the error "no such file" with some address, and then a grub rescue prompt. What to do? I can get hold of a Win7 copy, but I don't really want to use that crap again... Thanks for helping me out. Kind regards, Emiel


  • Extending WCF Data Service to synthesize missing data on request

    - by Schneider
    I have got a WCF Data Service based on a LINQ to SQL data provider. I am making the query "get me all the records between two dates". The problem is that I want to synthesize two extra records, so that I always get records that fall exactly on the start and end dates, plus all the ones in between, which come from the database. Is there a way to "intercept" the request so I can synthesize these records and return them to the client? Thanks
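
    One hedged possibility (a sketch, not a confirmed answer; Record, its Date column, and MyDataContext stand in for the real LINQ to SQL designer classes): expose the query as a service operation instead of a plain entity-set URI, since a [WebGet] method on the DataService can run the database query and then append the synthesized boundary records before returning.

    using System;
    using System.Collections.Generic;
    using System.Data.Services;
    using System.Linq;
    using System.ServiceModel.Web;

    public class RecordService : DataService<MyDataContext> // MyDataContext: the LINQ to SQL context
    {
        public static void InitializeService(IDataServiceConfiguration config)
        {
            config.SetEntitySetAccessRule("Records", EntitySetRights.AllRead);
            config.SetServiceOperationAccessRule("GetRecordsBetween", ServiceOperationRights.AllRead);
        }

        // Called as /GetRecordsBetween?start=datetime'2010-01-01'&end=datetime'2010-02-01'
        [WebGet]
        public IEnumerable<Record> GetRecordsBetween(DateTime start, DateTime end)
        {
            var records = CurrentDataSource.Records
                .Where(r => r.Date >= start && r.Date <= end)
                .ToList();

            // Synthesize the boundary records the database doesn't have.
            if (!records.Any(r => r.Date == start))
                records.Insert(0, new Record { Date = start });
            if (!records.Any(r => r.Date == end))
                records.Add(new Record { Date = end });

            return records;
        }
    }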


  • Live Messenger Programming - video

    - by NicoJuicy
    I have checked the possibilities of the MSN Live SDK, and I haven't come across a way to add video options to a "bot". How would I implement this? I want to stream videos directly to a user's MSN (multiple videos)... Would this be possible?


  • Tackling Big Data Analytics with Oracle Data Integrator

    - by Irem Radzik
    By Mike Eisterer

    The term big data draws a lot of attention, but behind the hype there's a simple story. For decades, companies have been making business decisions based on transactional data stored in relational databases. Beyond that critical data, however, is a potential treasure trove of less structured data: weblogs, social media, email, sensors, and documents that can be mined for useful information. Companies are facing emerging technologies, increasing data volumes, numerous data varieties, and the processing power needed to efficiently analyze data which changes with high velocity. Oracle offers the broadest and most integrated portfolio of products to help you acquire and organize these diverse data sources and analyze them alongside your existing data to find new insights and capitalize on hidden relationships.

    Oracle Data Integrator Enterprise Edition (ODI) is critical to any enterprise big data strategy. ODI and the Oracle Data Connectors provide native access to Hadoop, leveraging such technologies as MapReduce, HDFS and Hive. Along with ODI's metadata-driven approach to extracting, loading, and transforming data, companies may now integrate their existing data with big data technologies and deliver timely and trusted data to their analytic and decision-support platforms. In this session, you'll learn about ODI and Oracle Big Data Connectors and how, coupled together, they provide the critical integration with multiple big data platforms.

    Tackling Big Data Analytics with Oracle Data Integrator
    October 1, 2012, 12:15 PM at Moscone West – 3005

    For other data integration sessions at OpenWorld, please check our Focus-On document. If you are not able to attend OpenWorld, please check out our latest resources for Data Integration.


  • Creating Ubuntu Live CD with customized Unity launcher

    - by toros
    I would like to create a custom Ubuntu image based on Natty using Ubuntu Customization Kit. I also want to customize the icons appearing on the Unity launcher. I can change the icons on my desktop system with the following command:

    gsettings set com.canonical.Unity.Launcher favorites "['firefox.desktop', 'nautilus-home.desktop', 'libreoffice-writer.desktop']"

    I tried to run this command from the UCK console while creating the live CD, but it doesn't seem to work. Do you have any ideas how I could solve this?
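
    One approach worth trying (a sketch resting on the assumption that gsettings fails in the chroot because there is no D-Bus session there): instead of calling gsettings at build time, bake the value in as a GSettings schema override inside the chroot, so it becomes the default for every new user. The 90_custom file name is arbitrary.

    # Run inside the UCK chroot:
    cat > /usr/share/glib-2.0/schemas/90_custom.gschema.override <<'EOF'
    [com.canonical.Unity.Launcher]
    favorites=['firefox.desktop', 'nautilus-home.desktop', 'libreoffice-writer.desktop']
    EOF

    # Recompile the schemas so the override takes effect.
    glib-compile-schemas /usr/share/glib-2.0/schemas/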


  • How to customize live Ubuntu CD?

    - by karthick87
    I would like to customize an Ubuntu live CD by installing some additional packages. I have followed this link but it doesn't seem to work. Can anyone provide clear instructions? Thanks in advance!

    Packages that I want to install:
    - Thunderbird
    - Samba
    - SSH

    Changes that I need:
    - Remove the Games menu from the Applications menu
    - A Firefox shortcut on the desktop
    - Radiance as the default theme
    - A different default Ubuntu wallpaper

    Note: I do not prefer Remastersys; a manual way will be appreciated.

