Search Results

Search found 65872 results on 2635 pages for 'core data migration'.


  • Creating a Corporate Data Hub

    - by BuckWoody
    The Windows Azure Marketplace has a rich assortment of data and software offerings for you to use – a type of Software as a Service (SaaS) for IT workers, not necessarily for end-users. Among those offerings is the “Data Hub” – a codename for a project that, ironically, actually does what the codename says.

    In many of our organizations, we have multiple data quality issues. Finding data is one problem, but finding it just once is often a bigger problem. Lots of departments and even individuals have stored the same data more than once, and in some cases, made changes to one of the copies. It’s difficult to know which location or version of the data is authoritative. Then there’s the problem of accessing the data. It’s fairly straightforward to publish a database, share, or other location internally to store the data. But then you have to figure out who owns it, how it is controlled, and pass out the various connection strings to those who want to use it. And then you need to figure out how to let folks access the internal data externally – bringing up all kinds of security issues. Finally, in many cases our user community wants us to combine data from the internal sources with external data, raising the security, connection-string, and discovery questions all over again.

    Enter the Data Hub. This is an online offering, where you assign an administrator and data stewards. You import the data into the service, and it’s available to you – and only you and your organization, if you wish. The basic steps for this service are to set up the portal for your company, assign administrators and permissions, and then create data areas and import data into them. From there you make them discoverable, and then you have multiple options for how you or your users can access that data. You’re then able, if you wish, to combine that data with other data in one location.

    So how does all that work? What about security? Is it really that easy? And can you really move the data definition off to the Subject Matter Experts (SMEs) who know the particular data stack better than the IT team does? Well, nothing good is easy – but using the Data Hub is actually pretty simple. I’ll give you a link in a moment where you can sign up and try this yourself. Once you sign up, you assign an administrator. From there you’ll create data areas, and then use a simple interface to bring the data in. All of this is done in a portal interface – nothing to install, configure, update or manage.

    After the data is entered and you’ve assigned metadata to describe it, your users have multiple options to access it. They can simply use the portal – which has powerful visualizations you can use on any platform, even mobile phones or tablets. Your users can also hit the data with Excel – which gives them ultimate flexibility for display, all while using an authoritative, single reference for the data. Since the service is online, they can do this wherever they are – given the proper authentication and permissions. You can also hit the service with simple API calls, like this one from C#: http://msdn.microsoft.com/en-us/library/hh921924 You can make HTTP calls instead of code, and the data can even be exposed as an OData feed. As you can see, there are a lot of options. You can check out the offering here: http://www.microsoft.com/en-us/sqlazurelabs/labs/data-hub.aspx and you can read the documentation here: http://msdn.microsoft.com/en-us/library/hh921938
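    Since the service can expose an OData feed, any HTTP client will do. Below is a hedged sketch in Swift; the feed URL, entity set name, and query option are hypothetical placeholders, not documented Data Hub endpoints:

        import Foundation

        // Hypothetical OData feed URL; a real one comes from the Data Hub portal.
        let feedURL = URL(string: "https://example-datahub.cloudapp.net/odata/Customers?$top=10")!

        // Plain HTTP GET; authentication headers would be required in practice.
        let task = URLSession.shared.dataTask(with: feedURL) { data, _, error in
            if let data = data, let body = String(data: data, encoding: .utf8) {
                print(body)              // Atom or JSON payload listing the entities
            } else if let error = error {
                print("Request failed: \(error)")
            }
        }
        task.resume()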

    Read the article

  • New article available in "SOA Suite Essentials for WLI Users" series: Dynamic Data Lookup in a Business Process

    - by simone.geib
    It is my pleasure to announce the publishing of another article in our "SOA Suite Essentials for WLI Users" series: "Dynamic Data Lookup in a Business Process: Meta Data Cache Control in Oracle WebLogic Integration and Domain Value Maps in SOA Suite". This article explains how dynamic data can be retrieved in a business process using Domain Value Maps in SOA Suite and shows the similarities to the WLI XML MetaData Cache Control. Lots of customers have asked about this comparison and I hope they will find it useful. The article follows "Setting Web Service and JCA Adapter Endpoints Dynamically in Oracle SOA Suite" which describes how web services and JCA adapter endpoints in SOA Suite can be changed at run-time, and so completes the use case where a BPEL process writes to a file (via file adapter) and the output directory and the file name are set dynamically. Please let me know what you think about the series and this specific article.

    Read the article

  • Core Data multi-threading

    - by JK
    My app starts by presenting a table view whose datasource is a Core Data SQLite store. When the app starts, a secondary thread with its own store controller and context is created to obtain updates from the web for data in the store. However, any resulting changes to the store are not propagated to the fetched results controller (I presume because it has its own coordinator), and consequently the table is not updated with store changes. What would be the most efficient way to refresh the context on the main thread? I am considering tracking the objectIDs of any objects changed on the secondary thread, sending those to the main thread when the secondary thread completes, and invoking [context refreshObject:...] on each. Any help would be greatly appreciated.
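    One common alternative to hand-tracking objectIDs is to merge the background context's save notification into the main context, which mirrors the merge pattern from Apple's Core Data concurrency guidance. A hedged sketch in Swift (class and property names are hypothetical; the merge must run on the main context's thread):

        import CoreData

        final class StoreSyncObserver: NSObject {
            private let mainContext: NSManagedObjectContext

            init(mainContext: NSManagedObjectContext) {
                self.mainContext = mainContext
                super.init()
                NotificationCenter.default.addObserver(
                    self,
                    selector: #selector(contextDidSave(_:)),
                    name: .NSManagedObjectContextDidSave,
                    object: nil)   // or pass the background context to filter saves
            }

            @objc private func contextDidSave(_ notification: Notification) {
                guard let saved = notification.object as? NSManagedObjectContext,
                      saved !== mainContext else { return }  // ignore our own saves
                mainContext.perform {
                    // Folds the background save's inserts/updates/deletes into the
                    // main context, which the fetched results controller observes.
                    self.mainContext.mergeChanges(fromContextDidSave: notification)
                }
            }
        }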

    Read the article

  • How to add database layer in core data application

    - by aditya
    Hi all, I am fairly new to Core Data technology and I have searched a lot on how to add a database to a Core Data application. Can anybody guide me on how to integrate the database layer? I have seen the iPhone tutorial on Core Data (i.e., the Books example) but I am not able to understand how the .sqlite file has been included in that application.
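    For context, the .sqlite file in the Apple samples is not added by hand: Core Data creates it the first time a SQLite persistent store is added to the coordinator. A hedged, minimal sketch in Swift (the model name "Model" and file name "App.sqlite" are placeholders):

        import CoreData

        func makeMainContext() -> NSManagedObjectContext {
            // Load the compiled .xcdatamodeld from the app bundle.
            guard let modelURL = Bundle.main.url(forResource: "Model", withExtension: "momd"),
                  let model = NSManagedObjectModel(contentsOf: modelURL) else {
                fatalError("Managed object model not found in bundle")
            }
            let coordinator = NSPersistentStoreCoordinator(managedObjectModel: model)

            // The store file lives in Documents; Core Data creates it on first run.
            let storeURL = FileManager.default
                .urls(for: .documentDirectory, in: .userDomainMask)[0]
                .appendingPathComponent("App.sqlite")
            do {
                try coordinator.addPersistentStore(ofType: NSSQLiteStoreType,
                                                   configurationName: nil,
                                                   at: storeURL,
                                                   options: nil)
            } catch {
                fatalError("Could not open store: \(error)")
            }

            let context = NSManagedObjectContext(concurrencyType: .mainQueueConcurrencyType)
            context.persistentStoreCoordinator = coordinator
            return context
        }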

    Read the article

  • Ideal data structure/techniques for storing generic scheduler data in C#

    - by GraemeMiller
    I am trying to implement a generic scheduler object in C# 4 which will output a table in HTML. The basic aim is to show some object along with various attributes, and whether it was doing something in a given time period. The scheduler will output a table with the headers: Detail Field 1…N | Date 1…N. I want to initialise the table with a start date and an end date to create the date range (ideally it could also handle other time periods, e.g. hours, but that isn't vital). I then want to provide a generic object which will have associated events. Where an object has events within the period, I want the corresponding table cell to be marked. E.g.:

        Name  Height  Weight  1/1/2011  2/1/2011  3/1/2011 ... 31/1/2011
        Ben   5.11    75      X         X         X
        Bill  5.7     83      X         X

    So I created a scheduler with start date 1/1/2011 and end date 31/1/2011. I'd like to give it my person object (already sorted) and tell it which fields I want displayed (Name, Height, Weight). Each person has events which have a start date and end date. Some events will start and end outside the period but should still be shown on the relevant dates. Ideally I'd like to be able to provide it with, say, a class-booking object as well, so I'm trying to keep it generic. I have seen JavaScript implementations of similar things. What would a good data structure be for this? Any thoughts on techniques I could use to make it generic? I am not great with generics, so any tips are appreciated. A sketch of one possible shape follows.
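    This is a hedged sketch of one way to shape it, written in Swift for illustration (the question targets C#, but the protocol/generic structure translates directly, and all names are hypothetical). An event is just a date interval; a cell is marked when any of the item's events overlaps that day:

        import Foundation

        protocol Schedulable {
            var detailFields: [String] { get }   // e.g. ["Ben", "5.11", "75"]
            var events: [DateInterval] { get }   // each event's start..end span
        }

        struct ScheduleTable<Item: Schedulable> {
            let days: [Date]      // one per column, each the start of a day
            let items: [Item]     // the rows

            // True when any event overlaps the 24 hours beginning at `day`,
            // so events that start or end outside the range still mark cells.
            func isBusy(_ item: Item, on day: Date) -> Bool {
                let cell = DateInterval(start: day, duration: 24 * 60 * 60)
                return item.events.contains { $0.intersects(cell) }
            }

            // One row of the HTML table: detail fields, then X/blank per day.
            func row(for item: Item) -> [String] {
                item.detailFields + days.map { isBusy(item, on: $0) ? "X" : "" }
            }
        }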

    Read the article

  • Core Data: migrating entities with self-referential properties

    - by Dan
    My Core Data model contains an entity, Shape, that has two self-referential relationships, which means four properties. One pair is a one-to-many relationship (Shape.containedBy <- Shape.contains) and the other is a many-to-many relationship (Shape.nextShapes <<- Shape.previousShapes). It all works perfectly in the application, so I don't think self-referential relationships are a problem in general. However, when it comes to migrating the model to a new version, Xcode fails to compile the automatically generated mapping model, with this error message:

        2009-10-30 17:10:09.387 mapc[18619:607] *** Terminating app due to uncaught exception 'NSInvalidArgumentException', reason: 'Unable to parse the format string "FUNCTION($manager ,'destinationInstancesForSourceRelationshipNamed:sourceInstances:' , 'contains' , $source.contains) == 1"'
        *** Call stack at first throw:
        (
          0  CoreFoundation   0x00007fff80d735a4 __exceptionPreprocess + 180
          1  libobjc.A.dylib  0x00007fff83f0a313 objc_exception_throw + 45
          2  Foundation       0x00007fff819bc8d4 _qfqp2_performParsing + 8412
          3  Foundation       0x00007fff819ba79d +[NSPredicate predicateWithFormat:arguments:] + 59
          4  Foundation       0x00007fff81a482ef +[NSExpression expressionWithFormat:arguments:] + 68
          5  Foundation       0x00007fff81a48843 +[NSExpression expressionWithFormat:] + 155
          6  XDBase           0x0000000100038e94 -[XDDevRelationshipMapping valueExpressionAsString] + 260
          7  XDBase           0x000000010003ae5c -[XDMappingCompilerSupport generateCompileResultForMappingModel:] + 2828
          8  XDBase           0x000000010003b135 -[XDMappingCompilerSupport compileSourcePath:options:] + 309
          9  mapc             0x0000000100001a1c 0x0 + 4294973980
          10 mapc             0x0000000100001794 0x0 + 4294973332
        )
        terminate called after throwing an instance of 'NSException'
        Command /Developer/usr/bin/mapc failed with exit code 6

    'contains' is the name of one of the self-referential properties. Anyway, the really big problem is that I can't even look at this mapping property, as Xcode crashes as soon as I select the entity mapping when viewing the mapping model. So I'm a bit lost where to go from here. I really can't remove the self-referential properties, so I'm thinking I've got to manually create a mapping model that compiles? Any ideas? Cheers
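    One workaround people use in this situation (hedged; the names below are illustrative, and you would still delete the unparseable value expression from the mapping model): move the self-referential relationship over in a custom NSEntityMigrationPolicy assigned to the Shape entity mapping, for example:

        import CoreData

        final class ShapeMigrationPolicy: NSEntityMigrationPolicy {
            override func createRelationships(forDestination dInstance: NSManagedObject,
                                              in mapping: NSEntityMapping,
                                              manager: NSMigrationManager) throws {
                try super.createRelationships(forDestination: dInstance, in: mapping, manager: manager)

                // Find the source Shape behind this destination instance...
                let sources = manager.sourceInstances(forEntityMappingName: mapping.name,
                                                      destinationInstances: [dInstance])
                guard let source = sources.first,
                      let contained = source.value(forKey: "contains") as? Set<NSManagedObject>
                else { return }

                // ...and map its "contains" set through the migration manager,
                // which resolves each source Shape to its migrated counterpart.
                let migrated = manager.destinationInstances(forEntityMappingName: mapping.name,
                                                            sourceInstances: Array(contained))
                dInstance.setValue(Set(migrated), forKey: "contains")
            }
        }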

    Read the article

  • Ranking hit after site migration

    - by Ben
    I migrated my site from its old domain over a month ago. I followed Google Webmaster Tools completely, including 301 redirects from every existing URL to the new domain, and then submitting a change of address. Traffic continued as normal, but a few days after submitting the change of address it plummeted to about 20-30% of what it was previously. Most of my traffic comes from organic search, and I can see that the keywords I had targeted and performed well with before are now ranking much, much lower. In some cases, for low-competition keywords I've only lost a few places; for higher-competition terms I have really suffered. This has started to pick up a bit (for one of my keywords I have risen from 195 to 100 in the last week), but it seems to be a very slow process. How seamless is this process normally? I was under the impression that it would not affect my rankings too severely, but it has now been a month since the move and recovery seems to be very slow, if happening at all. Is it likely that I've missed something? The only change is that I have moved what was the home page to be more of a sub-page, and in its place is now a magazine-style home page. I understand that links to the old site will now be pointing to the latter, which means that rankings for some keywords attributed to the old home page will take a hit, but even on other pages that seem to fit exactly the same page structure as the previous site I have seen a drop in rankings.

    Read the article

  • Server Migration Checklist II

    - by merrillaldrich
    Easy Breezy Login Audit for your Ol’ 2000 Server In the last post on this topic I put up the preparatory steps I’ve been using for server migrations. Here I am posting some code that has worked well for us to trace who/what is connecting to our older SQL Server 2000 machines. It’s a simple audit of login events, tracing the login name, host name, database, and last login time for connections to the server, and gave us valuable insight into who was really using the machines and which databases might...(read more)

    Read the article

  • Sempron 145 core unlocking

    - by Grant
    Call me a cheapskate looking for performance, but I have a Sempron 145 processor that has been unlocked to an Athlon II X2 4450e on a GIGABYTE GA-990FXA-UD3 motherboard, yet only one of the two processor cores can be used at a time. If both cores are enabled in the BIOS, the kernel hangs during boot. If I reboot and disable the second core (but don't re-lock the processor), it takes me through GRUB and then starts normally. Is there a way to enable both cores and boot without kernel hangs?

    Read the article

  • Accessing SQL Data Services via ADO.NET Data Service Client Library

    - by Mehmet Aras
    Is this possible? Basically, I would like to use the SQL Data Services REST interface and let the ADO.NET Data Services client library handle the communication details and generate the entities that I can use. I looked at the samples in the February release of the Azure services kit, but the samples in there use HttpWebRequest and HttpWebResponse to consume SQL Data Services RESTfully. I was hoping to use the ADO.NET Data Services client library to abstract the low-level details away.

    Read the article

  • Scanstate.exe Migration Error

    - by user1438147
    I'm trying to create a migration store on a server using ScanState, and this is the error I receive:

        C:\USMT\scanstate.exe \(Server_Name)\migration\mystore /f:migdocs.xml /f: migapp.xml /v:13 /f:scan.log
        Failed.
        An error occurred processing the command line.
        scanstate.exe \svdataitfl1\migration\mystore ##ERROR## -- /f:migdocs.xml /f migapp.xml /v:13 /f:scan.log
        Undefined or incomplete command line option
        ScanState return code: 11

    I can't seem to find the answer to this... need some help.
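    A hedged observation, not a confirmed fix: USMT's ScanState takes its config XML files with the /i: switch and its log with /l: (there is no /f: option, which matches the "undefined option" complaint), and the store path normally needs the full UNC form. Reusing the names from the question, the command would look roughly like:

        scanstate.exe \\svdataitfl1\migration\mystore /i:migdocs.xml /i:migapp.xml /v:13 /l:scan.log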

    Read the article

  • Bridging Two Worlds: Big Data and Enterprise Data

    - by Dain C. Hansen
    The big data world is all the vogue in today’s IT conversations. It’s a world of volume, velocity, variety – tantalizing us with its untapped potential. It’s a world of transformational game-changing technologies that have already begun to alter the information management landscape. One of the reasons that big data is so compelling is that it’s a universal challenge that impacts every one of us. Whether it is healthcare, financial, manufacturing, government, or retail – big data presents a pressing problem for many industries: how can so much information be processed so quickly to deliver the ‘bigger’ picture? With big data we’re tapping into new information that didn’t exist before: social data, weblogs, sensor data, complex content, and more. What also makes big data revolutionary is that it turns traditional information architecture on its head, putting into question commonly accepted notions of where and how data should be aggregated, processed, analyzed, and stored. This is where Hadoop and NoSQL come in – new technologies which solve new problems for managing unstructured data.

    And now for some worst practices that I'd recommend you please not follow:

    Worst Practice Lesson 1: Throw away everything that you already know about data management and data integration tools, and start completely over. One shouldn’t forget what’s already running in today’s IT. Today’s business analytics, data warehouses, business applications (ERP, CRM, SCM, HCM), and even many social, mobile, and cloud applications still rely almost exclusively on structured data – or what we’d like to call enterprise data. This dilemma is what today’s IT leaders are up against: what are the best ways to bridge enterprise data with big data? And what are the best strategies for dealing with the complexities of these two unique worlds?

    Worst Practice Lesson 2: Throw away all of your existing business applications … because they don’t run on big data yet. Bridging the two worlds of big data and enterprise data means considering solutions that are complete, based on emerging Hadoop technologies (as well as traditional ones), and are poised for success through integrated design tools and integrated platforms that connect to your existing business applications and support real-time analytics. Leveraging these types of best practices translates to improved productivity, lowered TCO, IT optimization, and better business insights.

    Worst Practice Lesson 3: Separate out [and keep separate] your big data sandboxes from all the current enterprise IT systems. Don’t mix sand among playgrounds. We didn't tell you that you wouldn't get dirty doing this. Correlation between the two worlds is key. The real advantage to analyzing big data comes when you can correlate it with the existing data in your data warehouse or your current applications to make sense of the larger patterns.

    If you have not followed these worst practices 1-3, then you qualify for the first step of our journey: bridging the two worlds of enterprise data and big data. Over the next several weeks we’ll be discussing this topic along with several others around big data as it relates to data integration. We welcome you to join us in the conversation by following us on Twitter at #BridgingBigData or download our latest white paper and resource kit: Big Data and Enterprise Data: Bridging Two Worlds.

    Read the article

  • SQL SERVER – Data Sources and Data Sets in Reporting Services SSRS

    - by Pinal Dave
    This example is from Beginning SSRS by Kathi Kellenberger. Supporting files are available with a free download from the www.Joes2Pros.com web site.

    Connecting to Your Data?

    When I was a child, the telephone book was an important part of my life. Maybe I was just a nerd, but I enjoyed getting a new book every year to page through to learn about the businesses in my small town or to discover where some of my school acquaintances lived. It was also the source of maps to my town’s neighborhoods and the towns that surrounded me. To make a phone call, I would need a telephone number. In order to find a telephone number, I had to know how to use the telephone book. That seems pretty simple, but it resembles connecting to any data. You have to know where the data is and how to interact with it. A data source is the connection information that the report uses to connect to the database. You have two choices when creating a data source: whether to embed it in the report or to make it a shared resource usable by many reports.

    Data Sources and Data Sets

    A few basic terms will make the upcoming choices make more sense. What database on what server do you want to connect to? It would be better to just ask… “what is your data source?” The connection you need to make to get your report’s data is called a data source. If you connected to a data source (like the JProCo database) there may be hundreds of tables. You probably only want data from just a few tables. This means you want to write a specific query against this data source. A query on a data source to get just the records you need for an SSRS report is called a data set.

    Creating a Local Data Source

    You can embed a connection from your report directly to your JProCo database, which (let’s say) is installed on a server named Reno. If you move JProCo to a new server named Tampa, then you need to update the data source. If you have 10 reports in one project that were all pointing to the JProCo database on the Reno server, then they would all need to be updated at once. It’s possible to make a project-level data source and have each report use that. This means one change can fix all 10 reports at once. This would be called a shared data source.

    Creating a Shared Data Source

    The best advice I can give you is to create shared data sources. The reason I recommend this is that if a database moves to a new server you will have just one place in Report Manager to make the server name change. That one change will update the connection information in all the reports that use that data source. To get started, you will start with a fresh project. Go to Start > All Programs > SQL Server 2012 > Microsoft SQL Server Data Tools to launch SSDT. Once SSDT is running, click New Project to create a new project. Once the New Project dialog box appears, fill in the form, as shown in. Be sure to select Report Server Project this time – not the wizard. Click OK to dismiss the New Project dialog box. You should now have an empty project, as shown in the Solution Explorer.

    A report is meant to show you data. Where is the data? The first task is to create a shared data source. Right-click on the Shared Data Sources folder and choose Add New Data Source. The Shared Data Source Properties dialog box will launch, where you can fill in a name for the data source. By default, it is named DataSource1. The best practice is to give the data source a more meaningful name. It is possible that you will have projects with more than one data source and, by naming them, you can tell one from another. Type the name JProCo for the data source name and click the Edit button to configure the database connection properties. If you take a look at the types of data sources you can choose, you will see that SSRS works with many data platforms including Oracle, XML, and Teradata. Make sure SQL Server is selected before continuing.

    For this post, I am assuming that you are using a local SQL Server and that you can use your Windows account to log in to the SQL Server. If, for some reason, you must use SQL Server Authentication, choose that option and fill in your SQL Server account credentials. Otherwise, just accept Windows Authentication. If your database server was installed locally and with the default instance, just type in Localhost for the server name. Select the JProCo database from the database list. At this point, the connection properties should look like. If you have installed a named instance of SQL Server, you will have to specify the server name like this: Localhost\InstanceName, replacing InstanceName with whatever your instance name is. If you are not sure about the named instance, launch the SQL Server Configuration Manager found at Start > All Programs > Microsoft SQL Server 2012 > Configuration Tools. If you have a named instance, the name will be shown in parentheses. A default instance of SQL Server will display MSSQLSERVER; a named instance will display the name chosen during installation. Once you get the connection properties filled in, click OK to dismiss the Connection Properties dialog box and OK again to dismiss the Shared Data Source properties. You now have a data source in the Solution Explorer.

    What’s Next

    I really need to thank Kathi Kellenberger and Rick Morelan for sharing this material for this 5-day series of posts on SSRS. To get really comfortable with SSRS you will get to know the different SSDT windows, build reports on your own (without the wizards), add report headers and footers, accept user input, and create levels, charts, or even maps for visual appeal. You might be surprised to know that a small 230-page book starts from the very beginning and covers the steps to do all these items. Beginning SSRS 2012 is a small, easy-to-follow book, so you can learn SSRS for less than $20. See Joes2Pros.com for more on this and other books. If you want to learn SSRS in easy, simple words – I strongly recommend you get the Beginning SSRS book from Joes 2 Pros.

    Reference: Pinal Dave (http://blog.sqlauthority.com) Filed under: PostADay, SQL, SQL Authority, SQL Query, SQL Server, SQL Tips and Tricks, T SQL Tagged: Reporting Services, SSRS
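    As an aside, the dialog in this walkthrough is just assembling a SQL Server connection string behind the scenes; a hedged, hand-written equivalent for the setup described above (the instance name is hypothetical) would be:

        Data Source=Localhost\MyInstance;Initial Catalog=JProCo;Integrated Security=True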

    Read the article

  • A Model for Planning Your Oracle BPM 10g Migration by Kris Nelson

    - by JuergenKress
    As the Oracle SOA Suite and BPM Suite 12c products enter beta, many of our clients are starting to discuss migrating from the Oracle 10g or prior platforms. With BPM Suite 11g, Oracle introduced a major change in architecture, with a strong focus on integration with SOA and an entirely new technology stack. In addition, there were fresh new UIs and a renewed business focus, with an improved Process Composer and features like Adaptive Case Management. While very beneficial to both technology and the business, the fundamental change in architecture does pose clear migration challenges for clients who have made investments in the 10g platform. Some of the key challenges facing 10g customers include:

    - Managing in-process instance migration and running multiple process engines
    - Migration of user interfaces and other code within the environment that may not be automated
    - Growing or finding technical staff with both 10g and 12c experience
    - Managing migration projects while continuing to move the business forward and meet day-to-day responsibilities

    As a former practitioner in a mixed 10g/11g shop, I wrestled with many of these challenges as we tried to plan ahead for the migration. Luckily, there is migration tooling on the way from Oracle and several approaches you can use in planning your migration efforts. In addition, you already have a defined and visible process on the current platform, which will be invaluable as you migrate.

    A Migration Model
    This model presents several options across a value and investment spectrum. The goal of the AVIO Migration Model is to kick-start discussions within your company and assist in creating a plan of action to take advantage of the new platform. As with all models, this is a framework for discussion, and certain processes or situations may not fit. Please contact us if you have specific questions or want to discuss migration efforts in your situation. Read the complete article here.

    SOA & BPM Partner Community For regular information on Oracle SOA Suite become a member in the SOA & BPM Partner Community. For registration please visit www.oracle.com/goto/emea/soa (OPN account required). If you need support with your account please contact the Oracle Partner Business Center. Blog Twitter LinkedIn Facebook Wiki Technorati Tags: Kris Nelson,ACM,Adaptive Case Management,Community,Oracle SOA,Oracle BPM,OPN,Jürgen Kress

    Read the article

  • Oracle Demantra BAL Migration

    - by Annemarie Provisero
    ADVISOR WEBCAST: Oracle Demantra BAL Migration
    PRODUCT FAMILY: Manufacturing
    December 7, 2011 at 8 am PT, 9 am MT, 11 am ET

    This one-hour live advisor webcast provides Application Consultants, Administrative and Technical Users, and Customers with an introduction to the BAL Migration utility and details the steps involved in the migration process. The information provided trains you on leveraging the BAL Migration utility to migrate Demantra application objects from one environment to another. Topics will include:

    - Introduction to BAL Migration
    - BAL Migration Steps
    - Schema Setups
    - BAL Repository Object Migration

    A short, live demonstration (only if applicable) and a question and answer period will be included. Oracle Advisor Webcasts are dedicated to building your awareness around our products and services. This session does not replace offerings from Oracle Global Support Services. Click here to register for this session.

    The above webcast is a service of the E-Business Suite Communities in My Oracle Support. For more information on other webcasts, please reference the Oracle Advisor Webcast Schedule. Click here to visit the E-Business Communities in My Oracle Support. Note that all links require access to My Oracle Support.

    Read the article

  • PostgreSQL to Data-Warehouse: Best approach for near-real-time ETL / extraction of data

    - by belvoir
    Background: I have a PostgreSQL (v8.3) database that is heavily optimized for OLTP. I need to extract data from it on a semi-real-time basis (someone is bound to ask what semi-real-time means; the answer is as frequently as I reasonably can, but I will be pragmatic – as a benchmark let's say we are hoping for every 15 min) and feed it into a data warehouse.

    How much data? At peak times we are talking approx 80-100k rows per minute hitting the OLTP side; off-peak this will drop significantly to 15-20k. The most frequently updated rows are ~64 bytes each, but there are various tables etc., so the data is quite diverse and can range up to 4000 bytes per row. The OLTP is active 24x5.5.

    Best solution? From what I can piece together, the most practical solution is as follows:

    - Create a TRIGGER to write all DML activity to a rotating CSV log file
    - Perform whatever transformations are required
    - Use the native DW data pump tool to efficiently pump the transformed CSV into the DW

    Why this approach? TRIGGERs allow selective tables to be targeted rather than being system-wide, their output is configurable (i.e. into a CSV), and they are relatively easy to write and deploy. SLONY uses a similar approach and the overhead is acceptable. CSV is easy and fast to transform, and easy to pump into the DW.

    Alternatives considered:

    - Using native logging (http://www.postgresql.org/docs/8.3/static/runtime-config-logging.html). The problem with this is that it looked very verbose relative to what I needed and was a little trickier to parse and transform. However, it could be faster, as I presume there is less overhead compared to a TRIGGER. Certainly it would make the admin easier, as it is system-wide, but again, I don't need some of the tables (some are used for persistent storage of JMS messages which I do not want to log).
    - Querying the data directly via an ETL tool such as Talend and pumping it into the DW... the problem is the OLTP schema would need to be tweaked to support this, and that has many negative side-effects.
    - Using a tweaked/hacked SLONY – SLONY does a good job of logging and migrating changes to a slave, so the conceptual framework is there, but the proposed solution just seems easier and cleaner.
    - Using the WAL.

    Has anyone done this before? Want to share your thoughts?

    Read the article

  • Using core plot for iPhone, drawing date on x axis

    - by xmax
    I have an array of dictionaries available that contains NSDate and NSNumber values. I want to plot the dates on the X axis. For plotting I need to supply an xRange to the plot with some decimal values, and I don't understand how I can supply NSDate values to the xRange (location and length). Also, what should be returned from this method:

        -(NSNumber *)numberForPlot:(CPPlot *)plot field:(NSUInteger)fieldEnum recordIndex:(NSUInteger)index

    I mean, how will my date value be returned as an NSNumber? I think I should use some time interval, but what should the exact conversion be? Can anyone explain the exact requirements for plotting dates on the X axis? I am plotting in half of the view.
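    The usual Core Plot pattern is roughly this (a hedged, Foundation-only sketch in Swift with hypothetical names): pick a reference date, turn each date into a seconds-offset NSNumber for the data source, and derive the xRange's location and length from the minimum and maximum offsets; the axis label formatter then converts offsets back into date strings:

        import Foundation

        let iso = ISO8601DateFormatter()
        let dates: [Date] = ["2010-05-01T00:00:00Z", "2010-05-02T00:00:00Z", "2010-05-04T00:00:00Z"]
            .compactMap { iso.date(from: $0) }

        // Anchor on the earliest date; every point becomes seconds-from-reference.
        let reference = dates.min()!
        let xValues: [NSNumber] = dates.map { NSNumber(value: $0.timeIntervalSince(reference)) }

        // numberForPlot:field:recordIndex: would return xValues[index] for the X field.
        // The xRange is then location 0 with a length spanning the data:
        let xLocation = 0.0
        let xLength = dates.max()!.timeIntervalSince(reference)
        print(xValues, xLocation, xLength)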

    Read the article

  • How to resolve CGDirectDisplayID changing issues on newer multi-GPU Apple laptops in Core Foundation

    - by Dave Gallagher
    In Mac OS X, every display gets a unique CGDirectDisplayID number assigned to it. You can use CGGetActiveDisplayList() or [NSScreen screens] to access them, among others. Per Apple's docs: "A display ID can persist across processes and system reboot, and typically remains constant as long as certain display parameters do not change."

    On newer mid-2010 MacBook Pros, Apple started using auto-switching Intel/nVidia graphics. Laptops have two GPUs: a low-powered Intel and a high-powered nVidia. Previous dual-GPU laptops (2009 models) didn't have auto-GPU switching, and required the user to make a settings change, log off, and then log on again to make a GPU switch occur. Even older systems only had one GPU. There's an issue with the mid-2010 models where CGDirectDisplayIDs don't remain the same when a display switches from one GPU to the next. For example:

    1. Laptop powers on. The built-in LCD screen is driven by the Intel chipset. Display ID: 30002
    2. An external display is plugged in. The built-in LCD screen switches to the nVidia chipset. Its display ID changes: 30004
    3. The external display is driven by the nVidia chipset. (At this point, the Intel chipset is dormant.)
    4. The user unplugs the external display. The built-in LCD screen switches back to the Intel chipset. Its display ID changes back to the original: 30002

    My question is: how can I match an old display ID to a new display ID when they alter due to a GPU change?

    Thought about: I've noticed that the display ID only changes by 2, but I don't have enough test Macs available to determine if this is common to all new MacBook Pros, or just mine. Kind of a kludge if "just check for display IDs which are +/-2 from one another" works, anyway.

    Tried: CGDisplayRegisterReconfigurationCallback(), which notifies before and after displays change, has no matching logic. Putting something like this inside a method registered with it doesn't work:

        // Run before display settings change:
        CGDirectDisplayID directDisplayID = ...;
        io_service_t servicePort = CGDisplayIOServicePort(directDisplayID);
        CFDictionaryRef oldInfoDict = IODisplayCreateInfoDictionary(servicePort, kIODisplayMatchingInfo);

        // ...display settings change...

        // Run after display settings change:
        CGDirectDisplayID directDisplayID = ...;
        io_service_t servicePort = CGDisplayIOServicePort(directDisplayID);
        CFDictionaryRef newInfoDict = IODisplayCreateInfoDictionary(servicePort, kIODisplayMatchingInfo);

        BOOL match = IODisplayMatchDictionaries(oldInfoDict, newInfoDict, 0);
        if (match) NSLog(@"Displays are a match");
        else NSLog(@"Displays are not a match");

    What's happening above is I'm caching oldInfoDict before display settings change, letting them change, and then comparing it to newInfoDict by using IODisplayMatchDictionaries(), which will say either "yes, both displays are the same!" or "no, both displays are not the same." Unfortunately, it does not return YES if the GPUs have changed for a monitor. Example of the dictionaries it's comparing:

        // oldInfoDict (Display ID: 30002)
        oldInfoDict: {
            DisplayProductID = 40144;
            DisplayVendorID = 1552;
            IODisplayLocation = "IOService:/AppleACPIPlatformExpert/PCI0@0/AppleACPIPCI/IGPU@2/AppleIntelFramebuffer/display0/AppleBacklightDisplay";
        }

        // newInfoDict (Display ID: 30004)
        newInfoDict: {
            DisplayProductID = 40144;
            DisplayVendorID = 1552;
            IODisplayLocation = "IOService:/AppleACPIPlatformExpert/PCI0@0/AppleACPIPCI/P0P2@1/IOPCI2PCIBridge/GFX0@0/NVDA,Display-A@0/NVDA/display0/AppleBacklightDisplay";
        }

    As you can see, the IODisplayLocation key changes when GPUs are switched, hence IODisplayMatchDictionaries() doesn't work. I can, theoretically, compare just the DisplayProductID and DisplayVendorID keys, but I'm writing end-user software, and am worried about a situation where users have two or more identical monitors plugged in. Any help is greatly appreciated! :)
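    For what it's worth, a hedged sketch of the vendor/model/serial approach (three CoreGraphics calls that exist in both C and Swift; shown in Swift here). The serial number is the piece that can disambiguate identical monitors, though some panels report 0 for it, so this is imperfect:

        import CoreGraphics

        // A GPU-independent key for a display: unlike IODisplayLocation, none of
        // these values mention the framebuffer/GPU path driving the panel.
        struct DisplayKey: Hashable {
            let vendor: UInt32
            let model: UInt32
            let serial: UInt32   // may be 0 on displays without a serial

            init(_ displayID: CGDirectDisplayID) {
                vendor = CGDisplayVendorNumber(displayID)
                model  = CGDisplayModelNumber(displayID)
                serial = CGDisplaySerialNumber(displayID)
            }
        }

        // Usage: record DisplayKey(oldID) before the reconfiguration callback
        // fires, then look for the same key among the new display IDs.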

    Read the article

  • Ubuntu server 11.04 recognize only 1 core instead of 4

    - by Kreker
    I searched for other questions and googled a lot, but I can't find a solution to this problem. Ubuntu Server 11.04 64-bit is installed on a Dell PowerEdge with an Intel Xeon X5450, and it only recognizes 1 of the 4 cores. I tried modifying the GRUB config, but it didn't work. In the machine's BIOS I didn't find anything useful.

    CPU:

        root@darwin:~# cat /proc/cpuinfo
        processor       : 0
        vendor_id       : GenuineIntel
        cpu family      : 6
        model           : 23
        model name      : Intel(R) Xeon(R) CPU X5450 @ 3.00GHz
        stepping        : 10
        cpu MHz         : 2992.180
        cache size      : 6144 KB
        physical id     : 0
        siblings        : 1
        core id         : 0
        cpu cores       : 1
        apicid          : 0
        initial apicid  : 0
        fpu             : yes
        fpu_exception   : yes
        cpuid level     : 13
        wp              : yes
        flags           : fpu vme de pse tsc msr pae mce cx8 sep mtrr pge mca cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm pbe syscall nx lm constant_tsc arch_perfmon pebs bts rep_good nopl aperfmperf pni dtes64 monitor ds_cpl vmx est tm2 ssse3 cx16 xtpr pdcm dca sse4_1 xsave lahf_lm dts tpr_shadow vnmi flexpriority
        bogomips        : 5984.36
        clflush size    : 64
        cache_alignment : 64
        address sizes   : 38 bits physical, 48 bits virtual
        power management:

    GRUB:

        root@darwin:~# cat /etc/default/grub
        # If you change this file, run 'update-grub' afterwards to update
        # /boot/grub/grub.cfg.
        # For full documentation of the options in this file, see:
        #   info -f grub -n 'Simple configuration'
        GRUB_DEFAULT=0
        #GRUB_HIDDEN_TIMEOUT=0
        GRUB_HIDDEN_TIMEOUT_QUIET=true
        GRUB_TIMEOUT=2
        GRUB_DISTRIBUTOR=`lsb_release -i -s 2> /dev/null || echo Debian`
        GRUB_CMDLINE_LINUX_DEFAULT=""
        GRUB_CMDLINE_LINUX="noapic nolapic" #was with acpi=off
        # Uncomment to enable BadRAM filtering, modify to suit your needs
        # This works with Linux (no patch required) and with any kernel that obtains
        # the memory map information from GRUB (GNU Mach, kernel of FreeBSD ...)
        #GRUB_BADRAM="0x01234567,0xfefefefe,0x89abcdef,0xefefefef"
        # Uncomment to disable graphical terminal (grub-pc only)
        #GRUB_TERMINAL=console
        # The resolution used on graphical terminal
        # note that you can use only modes which your graphic card supports via VBE
        # you can see them in real GRUB with the command `vbeinfo'
        #GRUB_GFXMODE=640x480
        # Uncomment if you don't want GRUB to pass "root=UUID=xxx" parameter to Linux
        #GRUB_DISABLE_LINUX_UUID=true
        # Uncomment to disable generation of recovery mode menu entries
        #GRUB_DISABLE_RECOVERY="true"
        # Uncomment to get a beep at grub start
        #GRUB_INIT_TUNE="480 440 1"

    Complete dmesg: too long, posted on pastebin: http://pastebin.com/bsKPBhzu
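    A hedged observation based on the posted config: the nolapic kernel option disables the local APIC, which normally limits the kernel to a single CPU (and noapic can also cause trouble on SMP systems). If the hardware tolerates it, the usual fix is to drop both options and regenerate GRUB's config:

        # /etc/default/grub -- assumption: these flags were a workaround for an
        # earlier boot problem and are no longer needed
        GRUB_CMDLINE_LINUX=""

        # then apply and reboot:
        sudo update-grub
        sudo reboot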

    Read the article

  • Reference Data Management and Master Data: Are They Related?

    - by Mala Narasimharajan
    Submitted By: Rahul Kamath

    Oracle Data Relationship Management (DRM) has always been extremely powerful as an Enterprise Master Data Management (MDM) solution that can help manage changes to master data in a way that influences enterprise structure, whether it be mastering the chart of accounts to enable financial transformation, or revamping organization structures to drive business transformation and operational efficiencies, or restructuring sales territories to enable equitable distribution of leads to sales teams following the acquisition of new products, or adding additional cost centers to enable fine-grained control over expenses. Increasingly, DRM is also being utilized by Oracle customers for reference data management, an emerging solution space that deserves some explanation.

    What is reference data? How does it relate to master data?

    Reference data is a close cousin of master data. While master data is challenged with problems of unique identification, may be more rapidly changing, requires consensus building across stakeholders and lends structure to business transactions, reference data is simpler and more slowly changing, but has semantic content that is used to categorize or group other information assets – including master data – and gives them contextual value. In fact, the creation of a new master data element may require new reference data to be created. For example, when a European company acquires a US business, chances are that they will now need to adapt their product line taxonomy to include a new category to describe the newly acquired US product line. Further, the cross-border transaction will also result in a revised geo hierarchy. The addition of new products represents changes to master data, while changes to product categories and the geo hierarchy are examples of reference data changes.1

    The following is an illustrative list of examples of reference data by type. Reference data types may include types and codes, business taxonomies, complex relationships and cross-domain mappings, or standards.

    - Types & Codes: Transaction Codes; Lookup Tables (e.g., Gender, Marital Status, etc.); Status Codes; Role Codes; Domain Values
    - Taxonomies: Industry Classification Categories and Codes, e.g., North America Industry Classification System (NAICS); Product Categories; Sales Territories (e.g., Geo, Industry Verticals, Named Accounts, Federal/State/Local/Defense); Market Segments; Universal Standard Products and Services Classification (UNSPSC), eCl@ss
    - Relationships / Mappings: Product / Segment; Product / Geo; City → State → Postal Codes; Customer / Market Segment; Business Unit / Channel; Country Codes / Currency Codes / Financial Accounts; International Classification of Diseases (ICD), e.g., ICD-9 → ICD-10 mappings
    - Standards: Calendars (e.g., Gregorian, Fiscal, Manufacturing, Retail, ISO 8601); Currency Codes (e.g., ISO); Country Codes (e.g., ISO 3166, UN); Date/Time, Time Zones (e.g., ISO 8601); Tax Rates

    Why manage reference data?

    Reference data carries contextual value and meaning, and therefore its use can drive business logic that helps execute a business process, create a desired application behavior, or provide meaningful segmentation to analyze transaction data. Further, mapping reference data often requires human judgment.

    Sample Use Cases of Reference Data Management

    Healthcare: Diagnostic Codes. The reference data challenges in the healthcare industry offer a case in point. Part of being HIPAA compliant requires medical practitioners to transition diagnosis codes from ICD-9 to ICD-10, a medical coding scheme used to classify diseases, signs and symptoms, causes, etc. The transition to ICD-10 has a significant impact on business processes, procedures, contracts, and IT systems. Since the two code sets, ICD-9 and ICD-10, offer diagnosis codes at very different levels of granularity, human judgment is required to map ICD-9 codes to ICD-10. The process requires collaboration and consensus building among stakeholders, much in the same way as master data management does. Moreover, to build reports to understand utilization, frequency and quality of diagnoses, medical practitioners may need to "cross-walk" mappings – either forward to ICD-10 or backwards to ICD-9, depending upon the reporting time horizon.

    Spend Management: Product, Service & Supplier Codes. Similarly, as an enterprise looks to rationalize suppliers and leverage their spend, conforming supplier codes, as well as product and service codes, requires supporting multiple classification schemes that may include industry standards (e.g., UNSPSC, eCl@ss) or enterprise taxonomies. Aberdeen Group estimates that 90% of companies rely on spreadsheets and manual reviews to aggregate, classify and analyze spend data, and that data management activities account for 12-15% of the sourcing cycle and consume 30-50% of a commodity manager's time. Creating a common map across the extended enterprise to rationalize codes across procurement, accounts payable, general ledger, credit card, procurement card (P-card) as well as ACH and bank systems can cut sourcing costs, improve compliance, lower inventory stock, and free up talent to focus on value-added tasks.

    Change Management: Point-of-Sale Transaction Codes and Product Codes. In the specialty finance industry, enterprises are confronted with usury laws – governed at the state and local level – that regulate financial product innovation as it relates to consumer loans, check cashing and pawn lending. To comply, it is important to demonstrate that transactions booked at the point of sale are posted against valid product codes that were on offer at the time of booking the sale. Since new products are being released in a steady stream, it is important to ensure timely and accurate mapping of point-of-sale transaction codes with the appropriate product and GL codes to comply with the changing regulations.

    Multi-National Companies: Industry Classification Schemes. As companies grow and expand across geographies, a typical challenge they encounter with reference data is reconciling various versions of industry classification schemes in use across nations. While the United States, Mexico and Canada conform to the North American Industry Classification System (NAICS) standard, European Union countries choose different variants of the NACE industry classification scheme. Multi-national companies must manage the individual national NACE schemes and reconcile the differences across countries. Enterprises must invest in a reference data change management application to address the challenge of distributing reference data changes to downstream applications and assessing which applications were impacted by a given change.

    References
    1. Master Data versus Reference Data, Malcolm Chisholm, April 1, 2006.

    Read the article

  • eclipse error - org.osgi.framework.BundleException: Exception in org.eclipse.core.internal.net.Activator.start()

    - by chaostimmy
    I have the following error message written to the workspace log file. I tried several different Eclipse versions and fresh workspaces:

        !SESSION 2011-01-11 16:56:49.375 -----------------------------------------------
        eclipse.buildId=M20100909-0800
        java.version=1.6.0_20
        java.vendor=Sun Microsystems Inc.
        BootLoader constants: OS=linux, ARCH=x86_64, WS=gtk, NL=en_US
        Command-line arguments: -os linux -ws gtk -arch x86_64

        !ENTRY org.eclipse.osgi 4 0 2011-01-11 16:57:03.820
        !MESSAGE An error occurred while automatically activating bundle org.eclipse.core.net (46).
        !STACK 0
        org.osgi.framework.BundleException: Exception in org.eclipse.core.internal.net.Activator.start() of bundle org.eclipse.core.net.
          at org.eclipse.osgi.framework.internal.core.BundleContextImpl.startActivator(BundleContextImpl.java:806)
          at org.eclipse.osgi.framework.internal.core.BundleContextImpl.start(BundleContextImpl.java:755)
          at org.eclipse.osgi.framework.internal.core.BundleHost.startWorker(BundleHost.java:370)
          at org.eclipse.osgi.framework.internal.core.AbstractBundle.start(AbstractBundle.java:284)
          at org.eclipse.osgi.framework.util.SecureAction.start(SecureAction.java:417)
          at org.eclipse.osgi.internal.loader.BundleLoader.setLazyTrigger(BundleLoader.java:265)
          at org.eclipse.core.runtime.internal.adaptor.EclipseLazyStarter.postFindLocalClass(EclipseLazyStarter.java:106)
          at org.eclipse.osgi.baseadaptor.loader.ClasspathManager.findLocalClass(ClasspathManager.java:453)
          at org.eclipse.osgi.internal.baseadaptor.DefaultClassLoader.findLocalClass(DefaultClassLoader.java:216)
          at org.eclipse.osgi.internal.loader.BundleLoader.findLocalClass(BundleLoader.java:393)
          at org.eclipse.osgi.internal.loader.SingleSourcePackage.loadClass(SingleSourcePackage.java:33)
          at org.eclipse.osgi.internal.loader.BundleLoader.findClassInternal(BundleLoader.java:466)
          at org.eclipse.osgi.internal.loader.BundleLoader.findClass(BundleLoader.java:422)
          at org.eclipse.osgi.internal.loader.BundleLoader.findClass(BundleLoader.java:410)
          at org.eclipse.osgi.internal.baseadaptor.DefaultClassLoader.loadClass(DefaultClassLoader.java:107)
          at java.lang.ClassLoader.loadClass(ClassLoader.java:248)
          at java.lang.Class.forName0(Native Method)
          at java.lang.Class.forName(Class.java:169)
          at org.eclipse.ui.internal.ide.application.IDEWorkbenchAdvisor.activateProxyService(IDEWorkbenchAdvisor.java:284)
          at org.eclipse.ui.internal.ide.application.IDEWorkbenchAdvisor.postStartup(IDEWorkbenchAdvisor.java:264)
          at org.eclipse.ui.internal.Workbench.runUI(Workbench.java:2575)
          at org.eclipse.ui.internal.Workbench.access$4(Workbench.java:2438)
          at org.eclipse.ui.internal.Workbench$7.run(Workbench.java:671)
          at org.eclipse.core.databinding.observable.Realm.runWithDefault(Realm.java:332)
          at org.eclipse.ui.internal.Workbench.createAndRunWorkbench(Workbench.java:664)
          at org.eclipse.ui.PlatformUI.createAndRunWorkbench(PlatformUI.java:149)
          at org.eclipse.ui.internal.ide.application.IDEApplication.start(IDEApplication.java:115)
          at org.eclipse.equinox.internal.app.EclipseAppHandle.run(EclipseAppHandle.java:196)
          at org.eclipse.core.runtime.internal.adaptor.EclipseAppLauncher.runApplication(EclipseAppLauncher.java:110)
          at org.eclipse.core.runtime.internal.adaptor.EclipseAppLauncher.start(EclipseAppLauncher.java:79)
          at org.eclipse.core.runtime.adaptor.EclipseStarter.run(EclipseStarter.java:369)
          at org.eclipse.core.runtime.adaptor.EclipseStarter.run(EclipseStarter.java:179)
          at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
          at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
          at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
          at java.lang.reflect.Method.invoke(Method.java:597)
          at org.eclipse.equinox.launcher.Main.invokeFramework(Main.java:619)
          at org.eclipse.equinox.launcher.Main.basicRun(Main.java:574)
          at org.eclipse.equinox.launcher.Main.run(Main.java:1407)
          at org.eclipse.equinox.launcher.Main.main(Main.java:1383)
        Caused by: java.lang.NoClassDefFoundError: javax/crypto/BadPaddingException
          at org.eclipse.equinox.internal.security.storage.SecurePreferencesMapper.open(SecurePreferencesMapper.java:99)
          at org.eclipse.equinox.internal.security.storage.SecurePreferencesMapper.getDefault(SecurePreferencesMapper.java:44)
          at org.eclipse.equinox.security.storage.SecurePreferencesFactory.getDefault(SecurePreferencesFactory.java:50)
          at org.eclipse.core.internal.net.ProxyType.getNode(ProxyType.java:515)
          at org.eclipse.core.internal.net.ProxyType.loadProxyAuth(ProxyType.java:525)
          at org.eclipse.core.internal.net.ProxyType.createProxyData(ProxyType.java:148)
          at org.eclipse.core.internal.net.ProxyType.getProxyData(ProxyType.java:137)
          at org.eclipse.core.internal.net.ProxyManager.migrateInstanceScopePreferences(ProxyManager.java:453)
          at org.eclipse.core.internal.net.ProxyManager.checkMigrated(ProxyManager.java:418)
          at org.eclipse.core.internal.net.ProxyManager.initialize(ProxyManager.java:277)
          at org.eclipse.core.internal.net.Activator.start(Activator.java:179)
          at org.eclipse.osgi.framework.internal.core.BundleContextImpl$1.run(BundleContextImpl.java:783)
          at java.security.AccessController.doPrivileged(Native Method)
          at org.eclipse.osgi.framework.internal.core.BundleContextImpl.startActivator(BundleContextImpl.java:774)
          ... 39 more
        Caused by: java.lang.ClassNotFoundException: javax.crypto.BadPaddingException
          at org.eclipse.osgi.internal.loader.BundleLoader.findClassInternal(BundleLoader.java:460)
          at org.eclipse.osgi.internal.loader.BundleLoader.findClass(BundleLoader.java:422)
          at org.eclipse.osgi.internal.loader.BundleLoader.findClass(BundleLoader.java:410)
          at org.eclipse.osgi.internal.baseadaptor.DefaultClassLoader.loadClass(DefaultClassLoader.java:107)
          at java.lang.ClassLoader.loadClass(ClassLoader.java:248)
          ... 53 more
        Root exception:
        java.lang.NoClassDefFoundError: javax/crypto/BadPaddingException
          at org.eclipse.equinox.internal.security.storage.SecurePreferencesMapper.open(SecurePreferencesMapper.java:99)
          at org.eclipse.equinox.internal.security.storage.SecurePreferencesMapper.getDefault(SecurePreferencesMapper.java:44)
          at org.eclipse.equinox.security.storage.SecurePreferencesFactory.getDefault(SecurePreferencesFactory.java:50)
          at org.eclipse.core.internal.net.ProxyType.getNode(ProxyType.java:515)
          at org.eclipse.core.internal.net.ProxyType.loadProxyAuth(ProxyType.java:525)
          at org.eclipse.core.internal.net.ProxyType.createProxyData(ProxyType.java:148)
          at org.eclipse.core.internal.net.ProxyType.getProxyData(ProxyType.java:137)
          at org.eclipse.core.internal.net.ProxyManager.migrateInstanceScopePreferences(ProxyManager.java:453)
          at org.eclipse.core.internal.net.ProxyManager.checkMigrated(ProxyManager.java:418)
          at org.eclipse.core.internal.net.ProxyManager.initialize(ProxyManager.java:277)
          at org.eclipse.core.internal.net.Activator.start(Activator.java:179)
          at org.eclipse.osgi.framework.internal.core.BundleContextImpl$1.run(BundleContextImpl.java:783)
          at java.security.AccessController.doPrivileged(Native Method)
          at org.eclipse.osgi.framework.internal.core.BundleContextImpl.startActivator(BundleContextImpl.java:774)
          at org.eclipse.osgi.framework.internal.core.BundleContextImpl.start(BundleContextImpl.java:755)
          at org.eclipse.osgi.framework.internal.core.BundleHost.startWorker(BundleHost.java:370)
          at org.eclipse.osgi.framework.internal.core.AbstractBundle.start(AbstractBundle.java:284)
          at org.eclipse.osgi.framework.util.SecureAction.start(SecureAction.java:417)
          at org.eclipse.osgi.internal.loader.BundleLoader.setLazyTrigger(BundleLoader.java:265)
          at org.eclipse.core.runtime.internal.adaptor.EclipseLazyStarter.postFindLocalClass(EclipseLazyStarter.java:106)
          at org.eclipse.osgi.baseadaptor.loader.ClasspathManager.findLocalClass(ClasspathManager.java:453)
          at org.eclipse.osgi.internal.baseadaptor.DefaultClassLoader.findLocalClass(DefaultClassLoader.java:216)
          at org.eclipse.osgi.internal.loader.BundleLoader.findLocalClass(BundleLoader.java:393)
          at org.eclipse.osgi.internal.loader.SingleSourcePackage.loadClass(SingleSourcePackage.java:33)
          at org.eclipse.osgi.internal.loader.BundleLoader.findClassInternal(BundleLoader.java:466)
          at org.eclipse.osgi.internal.loader.BundleLoader.findClass(BundleLoader.java:422)
          at org.eclipse.osgi.internal.loader.BundleLoader.findClass(BundleLoader.java:410)
          at org.eclipse.osgi.internal.baseadaptor.DefaultClassLoader.loadClass(DefaultClassLoader.java:107)
          at java.lang.ClassLoader.loadClass(ClassLoader.java:248)
          at java.lang.Class.forName0(Native Method)
          at java.lang.Class.forName(Class.java:169)
          at org.eclipse.ui.internal.ide.application.IDEWorkbenchAdvisor.activateProxyService(IDEWorkbenchAdvisor.java:284)
          at org.eclipse.ui.internal.ide.application.IDEWorkbenchAdvisor.postStartup(IDEWorkbenchAdvisor.java:264)
          at org.eclipse.ui.internal.Workbench.runUI(Workbench.java:2575)
          at org.eclipse.ui.internal.Workbench.access$4(Workbench.java:2438)
          at org.eclipse.ui.internal.Workbench$7.run(Workbench.java:671)
          at org.eclipse.core.databinding.observable.Realm.runWithDefault(Realm.java:332)
          at org.eclipse.ui.internal.Workbench.createAndRunWorkbench(Workbench.java:664)
          at org.eclipse.ui.PlatformUI.createAndRunWorkbench(PlatformUI.java:149)
          at org.eclipse.ui.internal.ide.application.IDEApplication.start(IDEApplication.java:115)
          at org.eclipse.equinox.internal.app.EclipseAppHandle.run(EclipseAppHandle.java:196)
          at org.eclipse.core.runtime.internal.adaptor.EclipseAppLauncher.runApplication(EclipseAppLauncher.java:110)
          at org.eclipse.core.runtime.internal.adaptor.EclipseAppLauncher.start(EclipseAppLauncher.java:79)
          at org.eclipse.core.runtime.adaptor.EclipseStarter.run(EclipseStarter.java:369)
          at org.eclipse.core.runtime.adaptor.EclipseStarter.run(EclipseStarter.java:179)
          at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
          at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
          at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
          at java.lang.reflect.Method.invoke(Method.java:597)
          at org.eclipse.equinox.launcher.Main.invokeFramework(Main.java:619)
          at org.eclipse.equinox.launcher.Main.basicRun(Main.java:574)
          at org.eclipse.equinox.launcher.Main.run(Main.java:1407)
          at org.eclipse.equinox.launcher.Main.main(Main.java:1383)
        Caused by: java.lang.ClassNotFoundException: javax.crypto.BadPaddingException
          at org.eclipse.osgi.internal.loader.BundleLoader.findClassInternal(BundleLoader.java:460)
          at org.eclipse.osgi.internal.loader.BundleLoader.findClass(BundleLoader.java:422)
          at org.eclipse.osgi.internal.loader.BundleLoader.findClass(BundleLoader.java:410)
          at org.eclipse.osgi.internal.baseadaptor.DefaultClassLoader.loadClass(DefaultClassLoader.java:107)
          at java.lang.ClassLoader.loadClass(ClassLoader.java:248)
          ... 53 more

        !ENTRY org.eclipse.ui.workbench 4 0 2011-01-11 16:57:03.862
        !MESSAGE Widget disposed too early!
        !STACK 0
        java.lang.RuntimeException: Widget disposed too early!
          at org.eclipse.ui.internal.WorkbenchPartReference$1.widgetDisposed(WorkbenchPartReference.java:172)
          at org.eclipse.swt.widgets.TypedListener.handleEvent(TypedListener.java:123)
          at org.eclipse.swt.widgets.EventTable.sendEvent(EventTable.java:84)
          at org.eclipse.swt.widgets.Widget.sendEvent(Widget.java:1258)
          at org.eclipse.swt.widgets.Widget.sendEvent(Widget.java:1282)
          at org.eclipse.swt.widgets.Widget.sendEvent(Widget.java:1263)
          at org.eclipse.swt.widgets.Widget.release(Widget.java:1080)
          at org.eclipse.swt.widgets.Control.release(Control.java:3304)
          at org.eclipse.swt.widgets.Composite.releaseChildren(Composite.java:1293)
          at org.eclipse.swt.widgets.Widget.release(Widget.java:1083)
          at org.eclipse.swt.widgets.Control.release(Control.java:3304)
          at org.eclipse.swt.widgets.Composite.releaseChildren(Composite.java:1293)
          at org.eclipse.swt.widgets.Widget.release(Widget.java:1083)
          at org.eclipse.swt.widgets.Control.release(Control.java:3304)
          at org.eclipse.swt.widgets.Composite.releaseChildren(Composite.java:1293)
          at org.eclipse.swt.widgets.Canvas.releaseChildren(Canvas.java:208)
          at org.eclipse.swt.widgets.Decorations.releaseChildren(Decorations.java:469)
          at org.eclipse.swt.widgets.Shell.releaseChildren(Shell.java:2305)
          at org.eclipse.swt.widgets.Widget.release(Widget.java:1083)
          at org.eclipse.swt.widgets.Control.release(Control.java:3304)
          at org.eclipse.swt.widgets.Widget.dispose(Widget.java:462)
          at org.eclipse.swt.widgets.Shell.dispose(Shell.java:2241)
          at org.eclipse.swt.widgets.Display.release(Display.java:3211)
          at org.eclipse.swt.graphics.Device.dispose(Device.java:237)
          at org.eclipse.ui.internal.ide.application.IDEApplication.start(IDEApplication.java:131)
          at org.eclipse.equinox.internal.app.EclipseAppHandle.run(EclipseAppHandle.java:196)
          at org.eclipse.core.runtime.internal.adaptor.EclipseAppLauncher.runApplication(EclipseAppLauncher.java:110)
          at org.eclipse.core.runtime.internal.adaptor.EclipseAppLauncher.start(EclipseAppLauncher.java:79)
          at org.eclipse.core.runtime.adaptor.EclipseStarter.run(EclipseStarter.java:369)
          at org.eclipse.core.runtime.adaptor.EclipseStarter.run(EclipseStarter.java:179)
          at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
          at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
          at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
          at java.lang.reflect.Method.invoke(Method.java:597)
          at org.eclipse.equinox.launcher.Main.invokeFramework(Main.java:619)
          at org.eclipse.equinox.launcher.Main.basicRun(Main.java:574)
          at org.eclipse.equinox.launcher.Main.run(Main.java:1407)
          at org.eclipse.equinox.launcher.Main.main(Main.java:1383)

        !ENTRY org.eclipse.ui.workbench 4 0 2011-01-11 16:57:03.868
        !MESSAGE Widget disposed too early!
        !STACK 0
        java.lang.RuntimeException: Widget disposed too early!
          at org.eclipse.ui.internal.WorkbenchPartReference$1.widgetDisposed(WorkbenchPartReference.java:172)
          at org.eclipse.swt.widgets.TypedListener.handleEvent(TypedListener.java:123)
          at org.eclipse.swt.widgets.EventTable.sendEvent(EventTable.java:84)
          at org.eclipse.swt.widgets.Widget.sendEvent(Widget.java:1258)
          at org.eclipse.swt.widgets.Widget.sendEvent(Widget.java:1282)
          at org.eclipse.swt.widgets.Widget.sendEvent(Widget.java:1263)
          at org.eclipse.swt.widgets.Widget.release(Widget.java:1080)
          at org.eclipse.swt.widgets.Control.release(Control.java:3304)
          at org.eclipse.swt.widgets.Composite.releaseChildren(Composite.java:1293)
          at org.eclipse.swt.widgets.Widget.release(Widget.java:1083)
          at org.eclipse.swt.widgets.Control.release(Control.java:3304)
          at org.eclipse.swt.widgets.Composite.releaseChildren(Composite.java:1293)
          at org.eclipse.swt.widgets.Widget.release(Widget.java:1083)
          at org.eclipse.swt.widgets.Control.release(Control.java:3304)
          at org.eclipse.swt.widgets.Composite.releaseChildren(Composite.java:1293)
          at org.eclipse.swt.widgets.Canvas.releaseChildren(Canvas.java:208)
          at org.eclipse.swt.widgets.Decorations.releaseChildren(Decorations.java:469)
          at org.eclipse.swt.widgets.Shell.releaseChildren(Shell.java:2305)
          at org.eclipse.swt.widgets.Widget.release(Widget.java:1083)
          at org.eclipse.swt.widgets.Control.release(Control.java:3304)
          at org.eclipse.swt.widgets.Widget.dispose(Widget.java:462)
          at org.eclipse.swt.widgets.Shell.dispose(Shell.java:2241)
          at org.eclipse.swt.widgets.Display.release(Display.java:3211)
          at org.eclipse.swt.graphics.Device.dispose(Device.java:237)
          at org.eclipse.ui.internal.ide.application.IDEApplication.start(IDEApplication.java:131)
          at org.eclipse.equinox.internal.app.EclipseAppHandle.run(EclipseAppHandle.java:196)
          at org.eclipse.core.runtime.internal.adaptor.EclipseAppLauncher.runApplication(EclipseAppLauncher.java:110)
          at org.eclipse.core.runtime.internal.adaptor.EclipseAppLauncher.start(EclipseAppLauncher.java:79)
          at org.eclipse.core.runtime.adaptor.EclipseStarter.run(EclipseStarter.java:369)
          at org.eclipse.core.runtime.adaptor.EclipseStarter.run(EclipseStarter.java:179)
          at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
          at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
          at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
          at java.lang.reflect.Method.invoke(Method.java:597)
          at org.eclipse.equinox.launcher.Main.invokeFramework(Main.java:619)
          at org.eclipse.equinox.launcher.Main.basicRun(Main.java:574)
          at org.eclipse.equinox.launcher.Main.run(Main.java:1407)
          at org.eclipse.equinox.launcher.Main.main(Main.java:1383)

        !ENTRY org.eclipse.ui.workbench 4 0 2011-01-11 16:57:03.872
        !MESSAGE Widget disposed too early!
        !STACK 0
        java.lang.RuntimeException: Widget disposed too early!
          at org.eclipse.ui.internal.WorkbenchPartReference$1.widgetDisposed(WorkbenchPartReference.java:172)
          at org.eclipse.swt.widgets.TypedListener.handleEvent(TypedListener.java:123)
          at org.eclipse.swt.widgets.EventTable.sendEvent(EventTable.java:84)
          at org.eclipse.swt.widgets.Widget.sendEvent(Widget.java:1258)
          at org.eclipse.swt.widgets.Widget.sendEvent(Widget.java:1282)
          at org.eclipse.swt.widgets.Widget.sendEvent(Widget.java:1263)
          at org.eclipse.swt.widgets.Widget.release(Widget.java:1080)
          at org.eclipse.swt.widgets.Control.release(Control.java:3304)
          at org.eclipse.swt.widgets.Composite.releaseChildren(Composite.java:1293)
          at org.eclipse.swt.widgets.Widget.release(Widget.java:1083)
          at org.eclipse.swt.widgets.Control.release(Control.java:3304)
          at org.eclipse.swt.widgets.Composite.releaseChildren(Composite.java:1293)
          at org.eclipse.swt.widgets.Widget.release(Widget.java:1083)
          at org.eclipse.swt.widgets.Control.release(Control.java:3304)
          at org.eclipse.swt.widgets.Composite.releaseChildren(Composite.java:1293)
          at org.eclipse.swt.widgets.Canvas.releaseChildren(Canvas.java:208)
          at org.eclipse.swt.widgets.Decorations.releaseChildren(Decorations.java:469)
          at org.eclipse.swt.widgets.Shell.releaseChildren(Shell.java:2305)
          at org.eclipse.swt.widgets.Widget.release(Widget.java:1083)
          at org.eclipse.swt.widgets.Control.release(Control.java:3304)
          at org.eclipse.swt.widgets.Widget.dispose(Widget.java:462)
          at org.eclipse.swt.widgets.Shell.dispose(Shell.java:2241)
          at org.eclipse.swt.widgets.Display.release(Display.java:3211)
          at org.eclipse.swt.graphics.Device.dispose(Device.java:237)
          at org.eclipse.ui.internal.ide.application.IDEApplication.start(IDEApplication.java:131)
          at org.eclipse.equinox.internal.app.EclipseAppHandle.run(EclipseAppHandle.java:196)
          at org.eclipse.core.runtime.internal.adaptor.EclipseAppLauncher.runApplication(EclipseAppLauncher.java:110)
          at org.eclipse.core.runtime.internal.adaptor.EclipseAppLauncher.start(EclipseAppLauncher.java:79)
          at org.eclipse.core.runtime.adaptor.EclipseStarter.run(EclipseStarter.java:369)
          at org.eclipse.core.runtime.adaptor.EclipseStarter.run(EclipseStarter.java:179)
          at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
          at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
          at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
          at java.lang.reflect.Method.invoke(Method.java:597)
          at org.eclipse.equinox.launcher.Main.invokeFramework(Main.java:619)
          at org.eclipse.equinox.launcher.Main.basicRun(Main.java:574)
          at org.eclipse.equinox.launcher.Main.run(Main.java:1407)
          at org.eclipse.equinox.launcher.Main.main(Main.java:1383)

        !ENTRY org.eclipse.osgi 4 0 2011-01-11 16:57:03.925
        !MESSAGE Application error
        !STACK 1
        java.lang.NoClassDefFoundError: An error occurred while automatically activating bundle org.eclipse.core.net (46).
at org.eclipse.ui.internal.ide.application.IDEWorkbenchAdvisor.activateProxyService(IDEWorkbenchAdvisor.java:284) at org.eclipse.ui.internal.ide.application.IDEWorkbenchAdvisor.postStartup(IDEWorkbenchAdvisor.java:264) at org.eclipse.ui.internal.Workbench.runUI(Workbench.java:2575) at org.eclipse.ui.internal.Workbench.access$4(Workbench.java:2438) at org.eclipse.ui.internal.Workbench$7.run(Workbench.java:671) at org.eclipse.core.databinding.observable.Realm.runWithDefault(Realm.java:332) at org.eclipse.ui.internal.Workbench.createAndRunWorkbench(Workbench.java:664) at org.eclipse.ui.PlatformUI.createAndRunWorkbench(PlatformUI.java:149) at org.eclipse.ui.internal.ide.application.IDEApplication.start(IDEApplication.java:115) at org.eclipse.equinox.internal.app.EclipseAppHandle.run(EclipseAppHandle.java:196) at org.eclipse.core.runtime.internal.adaptor.EclipseAppLauncher.runApplication(EclipseAppLauncher.java:110) at org.eclipse.core.runtime.internal.adaptor.EclipseAppLauncher.start(EclipseAppLauncher.java:79) at org.eclipse.core.runtime.adaptor.EclipseStarter.run(EclipseStarter.java:369) at org.eclipse.core.runtime.adaptor.EclipseStarter.run(EclipseStarter.java:179) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25) at java.lang.reflect.Method.invoke(Method.java:597) at org.eclipse.equinox.launcher.Main.invokeFramework(Main.java:619) at org.eclipse.equinox.launcher.Main.basicRun(Main.java:574) at org.eclipse.equinox.launcher.Main.run(Main.java:1407) at org.eclipse.equinox.launcher.Main.main(Main.java:1383) i dont know what to do =(

    Read the article

  • Focus on Oracle Data Profiling and Data Quality 11g - 24/Feb/11

    - by Claudia Costa
    Thursday 24th February, 11am GMT

    Oracle offers an integrated suite of data quality software architected to discover and correct today's data quality problems and to establish a platform prepared for tomorrow's as-yet-unknown data challenges.

    Oracle Data Profiling provides data investigation, discovery, and profiling in support of quality, migration, integration, stewardship, and governance initiatives. It includes a broad range of features that expand upon basic profiling, including automated monitoring, business-rule validation, and trend analysis.

    Oracle Data Quality for Data Integrator provides cleansing, standardization, matching, address validation, location enrichment, and linking functions for global customer data and operational business data. It ensures that data adheres to established standards that are adaptable to fit each organization's specific needs. Both single- and double-byte data are processed in local languages to provide a unique and centralized view of customers, products, and services.

    During this briefing, Data Integration Solution Specialists will provide a technical overview and a walkthrough.

    Agenda:
    - Oracle Data Integration strategy overview
    - A focus on Oracle Data Profiling and Oracle Data Quality for Data Integrator:
      - Oracle Data Profiling
      - Oracle Data Quality for Data Integrator
    - Live demo
    - Q&A

    This free online live eSeminar will be delivered over the Web and conference call. Registrations received less than 24 hours prior to the start time may not receive confirmation to attend. To register, click here. For any questions, please contact [email protected].

    Read the article

  • Windows Server 2012 Migration (DNS/AD DS Standard Eval to Essentials OEM) P2V -> Do I need a Secondary Domain Controller during migration?

    - by Aubrey Robertson
    This is my first post on this exchange (although not my first on Stack Exchange), so please have patience. I am a third-year student intern, and I have been tasked with virtualizing the server systems at the company I work for. I have come a long way, and I am almost ready to install the VM server in migration mode. Here is some information:

    Source server: Windows Server 2012 Standard Evaluation
    - DNS server (local only)
    - Active Directory Domain Services
    - File and storage services
    - A few other server roles

    Destination server: Windows Server 2012 Essentials OEM (Hyper-V client)
    - Running under a temporary Hyper-V host (I will migrate the Hyper-V host back to the old machine after the original server is virtualized as a client)
    - Currently sitting at the "Select Installation Mode" screen

    I have been following the guides on Microsoft TechNet, and today I spent most of the day clearing issues in the Best Practices Analyzer on the source machine. I have three remaining issues, which are all related:

    - ERROR: DNS: DNS servers on Ethernet (adapter name) should include the loopback address, but not as the first entry. (The flavour text indicates that, during migration, the DNS server may not be found.)
    - WARNING: All domains should have at least two domain controllers for redundancy.
    - WARNING: DNS: Ethernet should be configured to use both a preferred and an alternate DNS server.

    All of these issues can be resolved by deploying a secondary domain controller, but I have never done that before (see my concerns below). The issue I am most concerned about for installing in migration mode is the first one (the error). If I set up the new server deployment and the adapter's DNS entry is listed as localhost, the installation may fail (at least, this is what the Microsoft documentation suggests). But I have no other IP address to enter there, because I have no other local domain controllers. So I did the first obvious thing that came to mind and tried to use Google's DNS servers as my alternates. That did not work, because they cannot resolve the other computers in the "forest". I'm no expert when it comes to DNS, so please forgive my ignorance: this DNS server handles only Active Directory lookups for the local network. If I go ahead with the migration and it fails, I suppose I will just have to install a secondary DNS server after all.

    The problem is that I am limited in the number of Windows Server keys I have available (I have two); however, I do have access to a Linux box running Debian Wheezy that I set up two weeks ago as a Mantis server. I could (I think) install Windows Server 2012 as a secondary DNS server in a VM and use that, but it seems like a waste of time, and probably of the Windows key too; if there is a way to do it with Linux, that would be much better. Better still: do I even need a secondary DNS server for the migration at all? The hints said only that during migration the original machine "might" not be found.

    Thank you for your time and consideration.

    Read the article

  • How does Core Data work

    - by Jason
    I've tried to gather information on how Core Data works, but can someone give me a clear explanation of all the pieces required? For instance NSManagedObjectContext, NSFetchedResultsController, NSManagedObjectModel, the persistent store coordinator... Perhaps all the steps involved in getting data out? I'm also unclear about the SQLite file: how do we load data into Core Data once we have created our entities, etc.? Thanks
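
    To make those pieces concrete, here is a minimal, hand-wired Core Data stack in Swift. This is an illustrative sketch under assumptions, not a definitive recipe: the model name ("Model"), the "Note" entity, and its "title" attribute are hypothetical placeholders, and error handling is collapsed to try! for brevity.

        import CoreData

        // Hand-wired Core Data stack: model -> coordinator -> store -> context.
        // "Model" and the "Note" entity are hypothetical placeholders.
        func makeContext() -> NSManagedObjectContext {
            // 1. The managed object model, compiled from your .xcdatamodeld file.
            //    It describes your entities, attributes, and relationships.
            guard let modelURL = Bundle.main.url(forResource: "Model", withExtension: "momd"),
                  let model = NSManagedObjectModel(contentsOf: modelURL) else {
                fatalError("Could not load the managed object model")
            }

            // 2. The persistent store coordinator mediates between the model
            //    and the on-disk store.
            let coordinator = NSPersistentStoreCoordinator(managedObjectModel: model)

            // 3. The store itself: a SQLite file that Core Data creates and
            //    manages. You never open or populate the SQLite file yourself.
            let storeURL = FileManager.default
                .urls(for: .documentDirectory, in: .userDomainMask)[0]
                .appendingPathComponent("Model.sqlite")
            _ = try! coordinator.addPersistentStore(ofType: NSSQLiteStoreType,
                                                    configurationName: nil,
                                                    at: storeURL,
                                                    options: nil)

            // 4. The managed object context: the in-memory scratchpad you
            //    insert into, fetch from, and save.
            let context = NSManagedObjectContext(concurrencyType: .mainQueueConcurrencyType)
            context.persistentStoreCoordinator = coordinator
            return context
        }

        // Usage: create an object, save it, then fetch it back.
        let context = makeContext()
        let note = NSEntityDescription.insertNewObject(forEntityName: "Note", into: context)
        note.setValue("Hello", forKey: "title")
        try! context.save() // writes to the SQLite store

        let request = NSFetchRequest<NSManagedObject>(entityName: "Note")
        let notes = try! context.fetch(request)

    In short: the managed object model describes your entities; the persistent store coordinator attaches that model to a store file (here SQLite, which Core Data creates and manages itself); and the managed object context is the workspace where you insert, fetch, and save objects. An NSFetchedResultsController then sits on top of a fetch request to feed a table view, and on newer SDKs a single NSPersistentContainer can replace the first three steps.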

    Read the article

< Previous Page | 2 3 4 5 6 7 8 9 10 11 12 13  | Next Page >