Search Results

Search found 2258 results on 91 pages for 'insurance distribution'.

Page 17 of 91

  • How do I create a user that is allowed only to add/remove users to distribution lists in Active Directory?

    - by Sorin Sbarnea
    I have a third-party product (Jira) that integrates with Active Directory via LDAP. I want to let Jira administrators edit group memberships and have those changes synchronized back into Active Directory. This currently works, but only because I used a Domain Administrator service account to do it. The question is: how can I do this without granting the service account full Domain Administrator permissions?
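
    For what it's worth, the usual answer is delegation: grant the service account write access to the member attribute of group objects, scoped to the OU that holds the Jira-managed groups, instead of Domain Admins membership. A hedged sketch (the OU, domain, and account names are made up):

        dsacls "OU=JiraGroups,DC=corp,DC=example" /I:S /G "CORP\svc-jira:WP;member;group"

    /I:S propagates the grant to child objects, WP is write-property, and the trailing ;member;group limits the grant to the member attribute of group objects.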

    Read the article

  • How to create a Universal Binary for iTunes Connect Distribution?

    - by balexandre
    I created an app that was rejected because Apple said it was not showing the correct iPad window: it showed the same iPhone screen, top-left aligned. Running in the simulator, I get my app to show exactly what it should, a big iPad view. (Screenshot: what the app shows on the device, as Apple's reviewers saw it. Screenshot: the app running in the simulator, at 50% zoom.) The code in my application delegate is the one I published before:

        - (BOOL)application:(UIApplication *)application didFinishLaunchingWithOptions:(NSDictionary *)launchOptions {
            // The default template has the line below; let us comment it out
            //MainViewController *aController = [[MainViewController alloc] initWithNibName:@"MainView" bundle:nil];

            // Our main controller
            MainViewController *aController = nil;

            // Is this OS 3.2.0+?
        #if __IPHONE_OS_VERSION_MAX_ALLOWED >= 30200
            if (UI_USER_INTERFACE_IDIOM() == UIUserInterfaceIdiomPad)
                // It's an iPad; set the main view to our MainView-iPad
                aController = [[MainViewController alloc] initWithNibName:@"MainView-iPad" bundle:nil];
            else
                // 3.2.0+ but not an iPad (for the future, when the iPhone/iPod touch runs the same OS as the iPad)
                aController = [[MainViewController alloc] initWithNibName:@"MainView" bundle:nil];
        #else
            // It's an iPhone/iPod touch (OS < 3.2.0)
            aController = [[MainViewController alloc] initWithNibName:@"MainView" bundle:nil];
        #endif

            // Let's continue with the default code
            self.mainViewController = aController;
            [aController release];
            mainViewController.view.frame = [UIScreen mainScreen].applicationFrame;
            [window addSubview:[mainViewController view]];
            [window makeKeyAndVisible];
            return YES;
        }

    In my target info I have iPhone/iPad. My question is: how should I build the app? Which Base SDK should I use, iPhone Simulator 3.1.3 or iPhone Simulator 3.2? My Active Configuration is Distribution and my Active Architecture is armv6. Can anyone who has already published an app to iTunes Connect explain these settings to me? P.S. I followed the developer guideline on building and installing your development application, found under Creating and Downloading Development Provisioning Profiles, but it does not say anything about this; I did exactly as described and the app was rejected.

    Read the article

  • WebCenter Customer Spotlight: spectrumK Holding GmbH

    - by me
    Solution Summary: spectrumK Holding GmbH was founded in 2007 by various German health insurance funds and national insurance associations and is a service provider for the healthcare market, covering patient care management, financial management, and information management, as well as payment services and legal counseling. spectrumK Holding GmbH's business objective was to implement innovative new Web-based services and solution systems for health insurance funds by integrating a multitude of isolated solutions from different organizations. Using Oracle WebCenter Portal, Oracle WebCenter Content, and Site Studio, the customer created a multiple-portal environment and deployed the first three applications: patient receipts, a medication navigator, and disability information. spectrumK Holding GmbH accelerated time-to-market for new features by reducing development time, achieved 40% development and cost savings by using standard modules, and realized 80% overall savings using the Oracle multiple-portal environment, as compared to individual installations. >> Read the full story

    Read the article

  • Do certain corporations hold more weight on a resume?

    - by Ryan
    Would a developer/tester position at Google, Apple, Microsoft, etc. (any large tech company of which most people have heard) be more valuable on a resume than working as a developer/tester somewhere where tech isn't the main objective (a shipping company, restaurant chain, insurance company, etc.)? Let's say you have two offers, and you only plan to stay with whichever company you choose for 5 years before trying to get a better position at a different company: one at Google with a starting salary of $60,000, and one at some insurance company with a starting salary of $80,000. I guess what I'm trying to say is: with universities, if someone graduates from MIT or Carnegie Mellon, they can pretty much get a job anywhere. Does someone seem more valuable after having worked at a company like Google, Apple, Microsoft, etc.? In other words, would taking the lower-paying job be better in the long run since it's at Google, or would it be better to take the higher-paying job at the insurance company?

    Read the article

  • Lighting-Reflectance Models & Licensing Issues

    - by codey
    Generally, or specifically, are there any licensing issues with using any of the well-known lighting/reflectance models (i.e. the BRDFs or other distribution or approximation functions), such as Phong, Blinn–Phong, Cook–Torrance, Blinn–Torrance–Sparrow, Lambert, Minnaert, Oren–Nayar, Ward, Strauss, and Ashikhmin–Shirley, plus common modifications where applicable (the Beckmann distribution, the Blinn distribution, Schlick's approximation, etc.), in shader code utilised in a commercial product? Or is it a non-issue?

    Read the article

  • Netty options for real-time distribution of small messages to a large number of clients?

    - by user439407
    I am designing a (near) real-time Netty server to distribute a large number of very small messages to a large number of clients across the internet. In internal, go-as-fast-as-you-can testing, I found that I could do 10k clients no sweat, but now that we are trying to go across the internet, where the latency, bandwidth, etc. vary pretty wildly, we are running into the dreaded OutOfMemory issues, even with 2 GB of RAM. I have tried various workarounds (setting the socket stack sizes smaller, setting high and low water marks, cancelling things that are too old), and they help a little, but only a little bit. What would be some good ways to optimize Netty for sending large numbers of small messages without significant delays? Also, the bulk of the traffic consists of one kind of message that I don't particularly care about if it doesn't arrive. I would use UDP, but because we don't control the clients, that's not really a possibility. Is it possible to set a separate timeout solely for this kind of message without affecting the other messages? Any insight you could offer would be greatly appreciated.
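
    For what it's worth, a hedged sketch of one common mitigation, written against the Netty 4 API (the question likely predates it; in Netty 3 the equivalent knobs are setWriteBufferHighWaterMark/setWriteBufferLowWaterMark on the channel config, and the droppable flag below is an assumption about the poster's message types). The idea: bound how much may queue per connection, then consult isWritable() and skip the low-value message class for clients whose buffer is saturated, instead of queueing it unbounded.

        import io.netty.bootstrap.ServerBootstrap;
        import io.netty.channel.Channel;
        import io.netty.channel.ChannelOption;

        public final class BroadcastHints {

            // Cap per-connection write buffering so slow clients trip
            // isWritable() early rather than exhausting the heap.
            static void configure(ServerBootstrap b) {
                b.childOption(ChannelOption.WRITE_BUFFER_HIGH_WATER_MARK, 32 * 1024)
                 .childOption(ChannelOption.WRITE_BUFFER_LOW_WATER_MARK, 8 * 1024);
            }

            // Droppable messages are skipped for saturated channels;
            // everything else is written normally.
            static void send(Channel ch, Object msg, boolean droppable) {
                if (droppable && !ch.isWritable()) {
                    return; // slow client: drop the message nobody will miss
                }
                ch.writeAndFlush(msg);
            }
        }

    This effectively gives the droppable message class a zero-length queue, which is usually what "I don't care if it doesn't arrive" means in practice; the in-memory backlog then grows only with messages that actually matter.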

    Read the article

  • Where can I get the Spring Framework 3.0 distribution?

    - by mlo55
    Has anyone been able to download the Spring Framework 3.0.0.M4 release from the SpringSource site (or can you provide an alternative download page)? http://www.springsource.org/download Am I missing something? The site is giving me the runaround: when I get to the "Spring Community Downloads" page and choose Spring from the left-hand menu, I get no download link. Thanks in advance.

    Read the article

  • Web UI for inputting a function from the reals to the reals, such as a probability distribution.

    - by dreeves
    I would like a web interface for a user to describe a one-dimensional real-valued function. I'm imagining the user being presented with a blank pair of axes on which they can click anywhere to create points that are thick and draggable. Double-clicking a point, let's say, makes it disappear. The actual function should be shown in real time as an interpolation of the user-supplied points. I have this working in Mathematica, though of course I'm looking for something in JavaScript. (Screenshot omitted.)

    Read the article

  • How to adjust the distribution of values in a random data stream?

    - by BCS
    Given an infinite stream of random 0's and 1's that comes from a biased (e.g. 1's are more common than 0's by a known factor) but otherwise ideal random number generator, I want to convert it into a (shorter) infinite stream that is just as ideal but also unbiased. Looking up the definition of entropy turns up a graph showing how many bits of output I should, in theory, be able to get from each bit of input. The question: is there any practical way to actually implement a converter that is nearly ideally efficient?
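
    The graph referred to is presumably the binary entropy function H(p) = -p·log2(p) - (1-p)·log2(1-p), the theoretical number of output bits per input bit. The classic practical converter is von Neumann's: read the input in pairs, emit the first bit of an unequal pair, and discard equal pairs. Since P(01) = P(10) = p(1-p), the output is exactly unbiased, although the expected yield is only p(1-p) output bits per input bit, short of the entropy bound (iterated, Elias-style extractors get arbitrarily close to it). A minimal sketch in Java:

        import java.util.Iterator;

        // Von Neumann extractor: turns a biased i.i.d. bit stream into an
        // exactly unbiased (but shorter) one by pairing input bits.
        final class VonNeumannExtractor implements Iterator<Integer> {
            private final Iterator<Integer> biased;

            VonNeumannExtractor(Iterator<Integer> biased) { this.biased = biased; }

            @Override public boolean hasNext() { return true; } // input stream is infinite

            @Override public Integer next() {
                while (true) {
                    int a = biased.next();
                    int b = biased.next();
                    if (a != b) return a; // 01 and 10 are equally likely, so this bit is fair
                    // a == b: discard the pair and keep reading
                }
            }
        }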

    Read the article

  • Choose build configuration for referenced subproject

    - by Jaka Jancar
    I have an iPhone project which references a framework as a subproject. The framework has the following configurations:

    Debug
    Release

    My app has the following configurations:

    Debug
    Release
    Distribution-AdHoc
    Distribution-AppStore

    I would like the framework to be built with a configuration that depends on the app configuration:

    Debug -> Debug
    Release -> Release
    Distribution-AdHoc -> Release
    Distribution-AppStore -> Release

    How can I achieve this?

    Read the article

  • Replication Services in a BI environment

    - by jorg
    In this blog post I will explain the principles of SQL Server Replication Services without too much detail, and I will take a look at the BI capabilities that Replication Services could offer, in my opinion.

    SQL Server Replication Services provides tools to copy and distribute database objects from one database system to another and maintain consistency afterwards. These tools basically copy or synchronize data with little or no transformation; they do not offer capabilities to transform data or apply business rules, like ETL tools do. The only “transformations” Replication Services offers is to filter records or columns out of your data set. You can achieve this by selecting the desired columns of a table and/or by using WHERE statements like this:

        SELECT <published_columns> FROM [Table] WHERE [DateTime] >= getdate() - 60

    There are three types of replication:

    Transactional Replication
    This type replicates data on a transactional level. The Log Reader Agent reads directly from the transaction log of the source database (Publisher) and clones the transactions to the Distribution Database (Distributor); this database acts as a queue for the destination database (Subscriber). Next, the Distribution Agent moves the cloned transactions that are stored in the Distribution Database to the Subscriber. The Distribution Agent can either run at scheduled intervals or continuously, which offers near real-time replication of data! So, for example, when a user executes an UPDATE statement on one or multiple records in the publisher database, this transaction (not the data itself) is copied to the distribution database and is then also executed on the subscriber. When the Distribution Agent is set to run continuously this process runs all the time and transactions on the publisher are replicated in small batches (near real-time); when it runs on scheduled intervals it executes larger batches of transactions, but the idea is the same.

    Snapshot Replication
    This type of replication makes an initial copy of the database objects that need to be replicated; this includes the schemas and the data itself. All types of replication must start with a snapshot of the database objects from the Publisher to initialize the Subscriber. Transactional replication needs an initial snapshot of the replicated publisher tables/objects to run its cloned transactions on and maintain consistency. The Snapshot Agent copies the schemas of the tables that will be replicated to files that are stored in the Snapshot Folder, which is a normal folder on the file system. When all the schemas are ready, the data itself is copied from the Publisher to the snapshot folder. The snapshot is generated as a set of bulk copy program (BCP) files. Next, the Distribution Agent moves the snapshot to the Subscriber; if necessary it applies schema changes first and copies the data itself afterwards. The application of schema changes to the Subscriber is a nice feature: when you change the schema of the Publisher with, for example, an ALTER TABLE statement, that change is propagated by default to the Subscriber(s).

    Merge Replication
    Merge replication is typically used in server-to-client environments, for example when subscribers need to receive data, make changes offline, and later synchronize changes with the Publisher and other Subscribers, as with mobile devices that need to synchronize once in a while. Because I don’t really see BI capabilities here, I will not explain this type of replication any further.

    Replication Services in a BI environment
    Transactional Replication can be very useful in BI environments. In my opinion you never want to see users run custom (SSRS) reports or PowerPivot solutions directly on your production database; it can slow down the system and can cause deadlocks in the database, which can cause errors. Transactional Replication can offer a read-only, near real-time database for reporting purposes with minimal overhead on the source system. Snapshot Replication can also be useful in BI environments: if you don’t need a near real-time copy of the database, you can choose this form of replication. Next to being an alternative to Transactional Replication, it can be used to stage data so it can be transformed and moved into the data warehousing environment afterwards. In many solutions I have seen developers create multiple SSIS packages that simply copy data from one or more source systems to a staging database that serves as the source for the ETL process. The creation of these packages takes a lot of (boring) time, while Replication Services can do the same in minutes. It is possible to filter out columns and/or records and it can even apply schema changes automatically, so I think it offers enough features here. I don’t know how the performance will be and if it really works as well for this purpose as I expect, but I want to try this out soon!

    Read the article

  • C# Monte Carlo Simulation Package Needed

    - by Yunzhou
    I'm relatively new to C# and doing a project using Monte Carlo simulation. Basically my question is the following. I have two uncertain input variables, A and B, and they go through a model and give an output C, so C = f(A,B). I know A's probability distribution (triangular) and B's probability distribution (discrete). How can I get the probability distribution of C? What I have done so far is generate random numbers based on A's triangular distribution as well as B's discrete distribution. Each pair of randomly generated A and B gives a resulting C. I've run this model 1000 times, so I get 1000 possible values of C. The difficulty is getting the corresponding probability of each value of C. Obviously it's not 1/1000 unless C is uniformly distributed. Is there any Monte Carlo simulation package/library I can use?
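
    For what it's worth, the probabilities fall out of the samples themselves: each of the 1000 runs is an equally weighted draw from C's distribution, and binning the results into a histogram turns those equal weights into per-bin probabilities (count/N), an empirical estimate of the distribution, so no special package is strictly required (in .NET, Math.NET Numerics covers the distribution sampling). A hedged sketch, written here in Java (the shape ports directly to C#); f and both input distributions below are placeholders for the poster's model:

        import java.util.Arrays;
        import java.util.Random;

        public final class McHistogram {
            public static void main(String[] args) {
                Random rng = new Random(42);
                int n = 1000;
                double[] c = new double[n];
                for (int i = 0; i < n; i++) {
                    double a = triangular(rng, 0.0, 0.5, 1.0);     // A ~ Triangular(min, mode, max)
                    double b = rng.nextDouble() < 0.3 ? 1.0 : 2.0; // B ~ Discrete{1.0: 0.3, 2.0: 0.7}
                    c[i] = a * b;                                  // stand-in for C = f(A, B)
                }
                // Histogram: P(C in bin k) is estimated by count_k / n.
                int bins = 20;
                double min = Arrays.stream(c).min().getAsDouble();
                double max = Arrays.stream(c).max().getAsDouble();
                int[] count = new int[bins];
                for (double v : c) {
                    int k = (int) ((v - min) / (max - min) * bins);
                    count[Math.min(k, bins - 1)]++;
                }
                double width = (max - min) / bins;
                for (int k = 0; k < bins; k++) {
                    System.out.printf("[%.3f, %.3f): %.3f%n",
                            min + k * width, min + (k + 1) * width, count[k] / (double) n);
                }
            }

            // Inverse-CDF sampling for a Triangular(lo, mode, hi) distribution.
            static double triangular(Random rng, double lo, double mode, double hi) {
                double u = rng.nextDouble();
                double cut = (mode - lo) / (hi - lo);
                return u < cut
                        ? lo + Math.sqrt(u * (hi - lo) * (mode - lo))
                        : hi - Math.sqrt((1 - u) * (hi - lo) * (hi - mode));
            }
        }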

    Read the article

  • Monitor SQL Server Replication Jobs

    - by Yaniv Etrogi
    The replication infrastructure in SQL Server is implemented using SQL Server Agent to execute the various components involved in the form of a job (e.g. LogReader agent job, Distribution agent job, Merge agent job). SQL Server jobs execute a binary executable file which is basically C++ code. You can download all the scripts for this article here.

    SQL Server Job Schedules
    By default each job has only one schedule, set to start automatically when SQL Server Agent starts. This schedule ensures that whenever the SQL Server Agent service is started all the replication components are also put into action. This is OK and makes sense, but there is one problem with this default configuration that needs improvement: if for any reason one of the components fails, it remains down in a stopped state. Unless you monitor the status of each component you will typically get to know about such a failure from a customer complaint, as a result of missing data or data that is not up to date at the subscriber level. Furthermore, having any of these components in a stopped state can lead to more severe problems if not corrected within a short time. The action required to improve on these default settings is in fact very simple: adding a second schedule, set as a daily recurring schedule that runs every 1 minute, does the trick. SQL Server Agent’s scheduler module knows how to handle overlapping schedules, so if the job is already being executed by another schedule it will not get executed again at the same time. So, in the event of a failure the failed job remains down for at most 60 seconds. Many DBAs are not aware of this capability and so search for more complex solutions, such as having an additional dedicated job running external code in VBS or another scripting language that detects replication jobs in a stopped state and starts them, but there is no need to seek such external solutions when what is needed can be accomplished by T-SQL code.

    SQL Server Jobs Status
    In addition to the 1-minute schedule we also want to ensure that key components of the replication are enabled, so I search for those components by their category and set their status to enabled in case they are disabled, by executing the stored procedure MonitorEnableReplicationAgents. The jobs that I typically handle are listed below, but you may want to extend this; here is the query to return all jobs along with their category:

        SELECT category_id, name FROM msdb.dbo.syscategories ORDER BY category_id;

    Distribution Cleanup
    LogReader Agent
    Distribution Agent

    Snapshot Agent Jobs
    By default when a publication is created, a snapshot agent job also gets created with a daily schedule. I see more organizations where the snapshot agent job does not need to be executed automatically by the SQL Server Agent scheduler than organizations who need a new snapshot generated automatically. To assure this setting is in place I created the stored procedure MonitorSnapshotAgentsSchedules, which disables snapshot agent jobs and also deletes the job schedule. It is worth mentioning that when the publication property immediate_sync is turned off, the snapshot files are not created when the Snapshot agent is executed by the job. You control this property when the publication is created, with a parameter called @immediate_sync passed to sp_addpublication; for an existing publication you can use sp_changepublication.

    Implementation
    The scripts assume the existence of a database named PerfDB.

    Steps:
    1. Run the scripts to create the stored procedures in the PerfDB database.
    2. Create a job that executes the stored procedures every hour.

        -- Verify that the 1_Minute schedule exists.
        EXEC PerfDB.dbo.MonitorReplicationAgentsSchedules @CategoryId = 10; /* Distribution */
        EXEC PerfDB.dbo.MonitorReplicationAgentsSchedules @CategoryId = 13; /* LogReader */

        -- Verify all replication agents are enabled.
        EXEC PerfDB.dbo.MonitorEnableReplicationAgents @CategoryId = 10; /* Distribution */
        EXEC PerfDB.dbo.MonitorEnableReplicationAgents @CategoryId = 13; /* LogReader */
        EXEC PerfDB.dbo.MonitorEnableReplicationAgents @CategoryId = 11; /* Distribution clean up */

        -- Verify that Snapshot agents are disabled and have no schedule.
        EXEC PerfDB.dbo.MonitorSnapshotAgentsSchedules;

    Want to read more about replication? Check out the replication posts on my blog.

    Read the article

  • Introducing the Oracle Linux Playground yum repo

    - by wcoekaer
    We just introduced a new yum repository/channel on http://public-yum.oracle.com called the playground channel. What we started doing is the following: when a new stable mainline kernel is released by Linus or GregKH, we internally build RPMs to test it and do some QA work around it to keep track of what's going on with the latest development kernels. It helps us understand how performance moves up or down and, if there are issues, we try to help look into them and of course send that stuff back upstream.

    Many Linux users out there are interested in trying out the latest features but there are some potential barriers to doing this. (1) In general, you are looking at an upstream development distribution, which means that everything changes, both in userspace (random applications) and kernel. Projects like Fedora are very useful, and for someone who wants to just see how the entire distribution evolves with all the changes, this is a great way to be current. A drawback here, though, is that if you have applications that are not part of the distribution, there's a lot of manual work involved, or they might just not work because the changes are too drastic. The introduction of systemd is a good example. (2) When you look at many of our customers that are interested in our database products or applications, the starting point of having a supported/certified userspace/distribution, like Oracle Linux, is a much easier way to get your feet wet in seeing what new/future Linux kernel enhancements could do. This is where the playground channel comes into play.

    When you install Oracle Linux 6 (which anyone can download and use from http://edelivery.oracle.com/linux), grab the latest public yum repository file http://public-yum.oracle.com/public-yum-ol6.repo, put it in /etc/yum.repos.d and enable the playground repo:

        [ol6_playground_latest]
        name=Latest mainline stable kernel for Oracle Linux 6 ($basearch) - Unsupported
        baseurl=http://public-yum.oracle.com/repo/OracleLinux/OL6/playground/latest/$basearch/
        gpgkey=http://public-yum.oracle.com/RPM-GPG-KEY-oracle-ol6
        gpgcheck=1
        enabled=1

    Now all you need to do is type yum update and you will be downloading the latest stable kernel, which will install cleanly on Oracle Linux 6. Thus you end up with a stable Linux distribution where you can install all your software, and then download the latest stable kernel (at time of writing this is 3.6.7) without having to recompile a kernel, without having to jump through hoops.

    There is of course a big, very important disclaimer: this is NOT for PRODUCTION use. We want to try and help make it easy for people that are interested, from a user perspective, in where the Linux kernel is going, and make it easy to install and play around with new features, without having to learn how to compile a kernel and without necessarily having to install a complete new distribution with all the changes top to bottom. So we don't and won't introduce any new userspace changes; this project really is about making it easy to try out the latest upstream Linux kernels in a very easy way, on an environment that's stable and that you can keep current, since all the latest errata for Oracle Linux 6 are published on the public yum repo as well. So: one repository location for all your current changes and the upstream kernels. We hope that this will get more users to try out the latest kernel and report their findings. We are always interested in understanding stability and performance characteristics.

    As new features go into the mainline kernel that could potentially be interesting or useful for various products, we will try to point them out on our blogs and give an example of how something can be used, so you can try it out for yourselves. Anyway, I hope people will find this useful and that it will help increase interest in upstream development, beyond reading lkml, by some of the more non-kernel-developer types.

    Read the article

  • ADNOC talks about 50x increase in performance

    - by KLaker
    If you are still wondering how Exadata can revolutionise your business then I would recommend watching this great video, which was recorded at this year's OpenWorld. First a little background... The Abu Dhabi National Oil Company for Distribution (ADNOC) is an integrated energy company that was founded in 1973. ADNOC Distribution markets and distributes petroleum products and services within the United Arab Emirates and internationally. As one of the largest and most innovative government-owned petroleum companies in the Arab Gulf, ADNOC Distribution is renowned and respected for the exceptional quality and reliability of its products and services. Its five corporate divisions include more than 200 filling stations (a number that is growing at 8% annually), more than 150 convenience stores, 10 vehicle inspection stations, as well as wholesale and retail sales of bulk fuel, gas, oil, diesel, and lubricants. ADNOC selected Oracle Exadata Database Machine after extensive research because it provided them with a single platform that can run mixed workloads in a single unified machine: "We chose Oracle Exadata Database Machine because it offered a fully integrated and highly engineered system that was ready to deploy. With our infrastructure running all the same technology, we can operate any type of Oracle Database without restrictions and be prepared for business growth," said Ali Abdul Aziz Al-Ali, IT division manager, ADNOC Distribution. ".....we could consolidate our transaction processing and business intelligence onto one platform. Competing solutions are just not capable of doing that." - Awad Ahmed Ali El-Sidiq, Senior Database Administrator, ADNOC Distribution. In this new video Awad Ahmed Ali El-Sidiq, Senior DBA at ADNOC, talks about the impact that Exadata has had on his team and the whole business. ADNOC is using our engineered systems to drive and manage all their workloads: from transaction systems to payment systems to data warehouse to BI environment. A true disk-to-dashboard revolution using engineered systems. This engineered approach is delivering a 50x improvement in performance, with some queries running 100x faster! The IT team has even revolutionised some of their data warehouse related processes with the help of Exadata, and jobs that were taking over 4 hours now run in a few minutes. To watch the video click on the image below, which will take you to our Oracle YouTube page (if the link does not work, click here: http://www.youtube.com/watch?v=zcRpxc6u5Ic). Now that queries are running 100x faster and jobs are completing in minutes not hours, what is next for the IT team at ADNOC? Like many of our customers, ADNOC is now looking to take advantage of big data to help them better align their business operations with customer behaviour and customer insights. To help deliver this next level of insight the IT team is looking at the new features in Oracle Database 12c, such as the new in-memory feature, to deliver even more performance gains. The great news is that Awad Ahmed Ali El-Sidiq was awarded DBA of the Year - EMEA within our Data Warehouse Global Leaders programme, and you can see the badge for this award pop up at the start of the video. Well done to everyone at ADNOC and thanks for spending the time with us at OOW to create this great video.

    Read the article

  • How can I prepare a pie chart in Excel with a result based on 100%?

    - by Pitto
    Hello my friends... I need to present a little data correctly in an Excel chart. I have the total I earned last year, which should represent 100% of the pie. Then I have my insurance expenses, and I want to see graphically how much of my total income went to pay for insurance. I know that a basic proportion like total income : total insurance costs = 100 : x does the math correctly, but I can't find a way to display this in a pie chart. Any hints?
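
    A hedged sketch of the usual arrangement (the cell addresses and figures below are made up): a pie chart needs each slice as its own value, so chart the insurance cost and the remainder of the income rather than the total itself.

        A1: Total income       B1: 50000
        A2: Insurance          B2: 7500
        A3: Everything else    B3: =B1-B2

    Select A2:B3 and insert a pie chart; the two slices split the 100% between them. The percentage itself is =B2/B1 with the cell formatted as a percentage.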

    Read the article

  • Random Monday Thoughts

    - by Terry Goldman
    On this Monday morning my thoughts center on why it is so hard to embrace governance, any form of governance for that matter, be it software development governance, SOA governance, data governance, IT governance, and so on and so forth. Most customers that I meet tend to think that they don't need governance, as all is good within the enterprise. The point I generally pose to colleagues and customers is that you have to think of governance as an insurance policy. Take, for instance, a new car you just bought, perhaps your "dream" car: would you drive it on the open streets without having the car insured? Probably not. Governance is to SOA, IT transformations and software development what insurance is to a new car. Governance is an insurance policy against the risk of failure. Now, once I put it in this context, ask yourself: does governance have value to your organization? Most people now get it. Once the seed of governance is planted at the executive level of an organization, it becomes an exercise in planting that idea into key personnel within the organization. Then the justification for governance grows and grows across the enterprise. That's my food for thought on this Monday morning. FYI, stay tuned for an upcoming multi-part article on using Oracle Enterprise Repository to build an Enterprise Continuum as described in TOGAF v9.0.

    Read the article

  • XSLT is not the solution you're looking for

    - by Jeff
    I was very relieved to see that Umbraco is ditching XSLT as a rendering mechanism in the forthcoming v5. Thank God for that. After working in this business for a very long time, I can't think of any other technology that has been inappropriately used, time after time, and without any compelling reason.The place I remember seeing it the most was during my time at Insurance.com. We used it, mostly, for two reasons. The first and justifiable reason was that it tweaked data for messaging to the various insurance carriers. While they all shared a "standard" for insurance quoting, they all had their little nuances we had to accommodate, so XSLT made sense. The other thing we used it for was rendering in the interview app. In other words, when we showed you some fancy UI, we'd often ditch the control rendering and straight HTML and use XSLT. I hated it.There just hasn't been a technology hammer that made every problem look like a nail (or however that metaphor goes) the way XSLT has. Imagine my horror the first week at Microsoft, when my team assumed control of the MSDN/TechNet forums, and we saw a mess of XSLT for some parts of it. I don't have to tell you that we ripped that stuff out pretty quickly. I can't even tell you how many performance problems went away as we started to rip it out.XSLT is not your friend. It has a place in the world, but that place is tweaking XML, not rendering UI.

    Read the article

  • SQL Transactional Replication snapshot not applying

    - by dmch2
    Hi, I'm using SQL transactional replication with pull subscriptions to replicate databases (hosting their own distribution database) from several servers across a VPN to a central server. I've got the first two databases working fine, but the third one is causing me problems. My subscription server is SQL 2008; the source systems are all SQL 2005. The source databases are a few hundred MB in size and contain audit data, so they are simply growing slowly by adding new records at approximately 1 KB a second.

    As far as the replication monitor, agent logs and event logs show, everything is working fine - except that no data appears in my subscription database. The distribution agent doesn't seem to want to read the snapshot (and hence the initial state and schema) from the publisher. New transactions aren't applied, although they do seem to be arriving OK, as the replication monitor shows things like '5 transactions with 10 commands were delivered'. I would expect (as in previous times) to see statements about data being BCPed in the replication monitor.

    The snapshot is on the publisher in a shared folder. The subscriber can view the snapshot OK (\\repldata) and the alternate snapshot folder is pointing at it. But the distribution agent doesn't seem to be making an attempt to read it. I tried changing the snapshot path to something that's incorrect and didn't even get an error saying that it couldn't be accessed.

    After lots of googling etc. I found that sp_MSget_repl_commands is called by the subscriber on the distribution database on the publisher. Running a profiler I can see that it's only called for one agent ID. After a reinit it's called for sequence number 0x0 as expected, so I thought that would mean it would look for the snapshot. However, looking on the publisher I see that there's data for two agents - the snapshot agent and the log reader agent (which is being queried). So I guess I need to tell the distribution agent to get the data for both. But how? And more importantly - why? It worked fine on the other two servers I've replicated. I'm not an SQL novice, but this is pretty much my first go at replication, so don't be afraid to accuse me of missing something obvious/stupid! I can get log files (e.g. from the distribution agent) if you want, but they don't seem to have any errors in them - it just starts up and starts applying log reader agent changes.

    Cheers
    Dave

    Read the article

  • NVIDIA driver install

    - by Adham
    I tried blacklisting and everything I could find on the internet, running the installer as root from a text console (Ctrl+Alt+F1), with no solution.

        nvidia-installer log file '/var/log/nvidia-installer.log'
        creation time: Mon Jun 17 08:35:25 2013
        installer version: 319.23

        PATH: /usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin

        nvidia-installer command line: ./nvidia-installer

        Using: nvidia-installer ncurses user interface
        -> License accepted.
        -> Installing NVIDIA driver version 319.23.
        -> Running distribution scripts
           executing: '/usr/lib/nvidia/pre-install'...
        -> done.
        -> The distribution-provided pre-install script failed! Continue installation anyway? (Answer: Yes)
        ERROR: The Nouveau kernel driver is currently in use by your system. This driver is incompatible with the NVIDIA driver, and must be disabled before proceeding. Please consult the NVIDIA driver README and your Linux distribution's documentation for details on how to correctly disable the Nouveau kernel driver.
        WARNING: One or more modprobe configuration files to disable Nouveau are already present at: /etc/modprobe.d/nvidia-installer-disable-nouveau.conf. Please be sure you have rebooted your system since these files were written. If you have rebooted, then Nouveau may be enabled for other reasons, such as being included in the system initial ramdisk or in your X configuration file. Please consult the NVIDIA driver README and your Linux distribution's documentation for details on how to correctly disable the Nouveau kernel driver.
        ERROR: Installation has failed. Please see the file '/var/log/nvidia-installer.log' for details. You may find suggestions on fixing installation problems in the README available on the Linux driver download page at www.nvidia.com.
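
    For reference, a hedged sketch of the usual fix on Ubuntu (per the WARNING above, the step most often missed is rebuilding the initramfs so Nouveau is not loaded from the initial ramdisk), run before re-running the installer from a text console:

        echo "blacklist nouveau"         | sudo tee    /etc/modprobe.d/blacklist-nouveau.conf
        echo "options nouveau modeset=0" | sudo tee -a /etc/modprobe.d/blacklist-nouveau.conf
        sudo update-initramfs -u    # rebuild the initial ramdisk without Nouveau
        sudo reboot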

    Read the article

  • Smooth Sailing or Rough Waters: Navigating Policy Administration Modernization

    - by helen.pitts(at)oracle.com
    Life insurance and annuity carriers continue to recognize the need to modernize their aging policy administration systems, but may be hesitant to move forward because of the inherent risk involved. To help carriers better prepare for what lies ahead, LOMA's Resource Magazine asked Karen Furtado, partner at Strategy Meets Action, to help them chart a course in Navigating Policy Administration Selection, the cover story of this month's issue.

    The industry analyst and research firm recently asked insurance carriers to name the business drivers for replacing legacy policy administration systems. The top five cited, according to Furtado, centered on:

    Supporting growth in current lines
    Improving competitive position
    Containing and reducing costs
    Supporting growth in new lines
    Supporting agent demands and interaction

    It's no surprise that fueling growth, both now and in the future, continues to be a key driver for modernization. Why? Inflexible, hard-coded legacy systems require customization by IT every time a change is required. This in turn impedes a carrier's ability to be agile, constraining its ability to quickly adapt to changing regulatory requirements and evolving market demands. It also stymies its ability to quickly bring new products to market or rapidly configure changes to existing ones, and can inhibit how carriers service customers and distribution channels.

    In the article, Furtado advised carriers to ensure that the policy administration system they are considering is current and modern, with an adaptable user interface and flexible service-oriented architecture. She said carriers should ask themselves, "How much do you need flexibility and agility now and in the future? Does it support the business processes and rules that are needed for you to be able to create that adaptable environment?" Furtado went on to advise that carriers "Connect your strategy to your business and technical capabilities before you make investment choices… You want to enable your organization to transform for the future, not just automate the past."

    Unlocking High Performance with Policy Administration Transformation was also the topic of a recent LOMA webcast moderated by Ron Clark, editor of LOMA's Resource Magazine. The webcast, which featured speakers from Oracle Insurance and Capgemini, focused on how insurers can competitively drive high performance by:

    Replacing a legacy policy administration system with a modern, flexible platform
    Optimizing IT and operations costs, creating consistent processes and eliminating resource redundancies
    Selecting the right partner with the best blend of technology, operational, and consulting capabilities to achieve market leadership
    Understanding the value of outsourcing closed block operations

    Learn more by clicking here to access this free, one-hour recorded webcast. Helen Pitts is senior product marketing manager for Oracle Insurance's life and annuities solutions.

    Read the article

  • Insurers Pushed to Transform Their Business

    - by Calvin Glenn
    Everyone in the P&C industry has heard it: “We can’t do it.” “Nobody wants to do it.” “We can’t afford to do it.” Unfortunately, what they’re referencing are the reasons many insurers are still trying to maintain their business processing on legacy policy administration systems, attempting to bide time until there is no other recourse but to give in, bite the bullet, and take on the monumental task of replacing an entire policy administration system (PAS). Just the thought of that project sends IT, business users and management reeling. However, is that fear real? It is a bit daunting when one realizes that a complete policy administration system replacement will touch almost every function an insurer manages, from quoting and rating to underwriting, distribution, and even customer service. With that, everyone has heard at least one horror story about a transformation initiative that far exceeded its budget and the promised implementation/go-live timeline.

    But does it have to be that hard? Surely, in an age where a person can voice-activate their DVR to record a TV program from a cell phone, there has to be someone somewhere who has figured out how to simplify this process; someone able to help insurers of all sizes transform and grow their business while also delivering on their overall objectives of providing speed to market, straight-through processing for applications, quoting, and underwriting, and simplified product development. Maybe we’re looking too hard and the answer is simple and straightforward. Why replace the entire machine when all it really needs is a new part… a single enterprise rating system? This core, modular piece of the policy administration system is the foundation of product development and rate management that enables insurers to provide the right product at the right price to the right customer through the best channels at any given moment in time.

    The real benefit of a single enterprise rating system is the ability to deliver enhanced business capabilities, such as improved product management, streamlined underwriting, and speed to market. With these benefits, carriers have accomplished a portion of their overall transformation goal. Furthermore, lessons learned from the rating project can be applied to the bigger, down-the-road PAS project to support the successful completion of the overall transformation endeavor.

    At the recent Oracle OpenWorld Conference in San Francisco, information was shared with attendees about a recent “go-live” project from an Oracle Insurance Tier 1 insurer that did what is proposed above… replaced just the rating portion of its legacy policy administration system with Oracle Insurance Insbridge Rating and Underwriting. This change provided the insurer greater flexibility to set rates that better reflect risk while enabling the company to support its market segment strategy. Using the Oracle Insurance Insbridge enterprise rating solution, the insurer was able to reduce processing time for agents and underwriters, gained the ability to support proprietary rating models and improved pricing accuracy.

    There is mounting pressure on P&C insurers to produce growth and show net profitability in the midst of modest overall industry growth, large weather-related losses and intensifying competition for market share. Insurers are also being asked to improve customer service, offer a differentiated value proposition and simplify insurance processes. While the demands are many, there is an easy answer: invest in and update the most mission-critical application in your arsenal, the single enterprise rating system.

    Download the podcast to listen to “Stand-Alone Rating Engine - Leading Force Behind Core Transformation Projects in the P&C Market,” a podcast originally recorded in October 2013.

    Related Resources:
    White Paper: Stand-Alone Rating Engine: Leading Force Behind Core Transformation Projects in the P&C Market
    Webcast On Demand: Stand-Alone Rating Engine and Core Transformation for P&C Insurers

    Don’t forget to keep up with us year-round:
    Facebook: www.facebook.com/oracleinsurance
    Twitter: www.twitter.com/oracleinsurance
    YouTube: www.youtube.com/oracleinsurance

    Read the article
