Search Results

Search found 22000 results on 880 pages for 'worker process'.

Page 495/880

  • Latest News on Service, Field Service and Depot Repair Products

    - by LuciaC
    Service and Depot Repair Customer Advisory Boards (CAB)
    In November 2012 the Service and Depot Repair CABs joined together for a combined meeting at Oracle HQ in Redwood Shores, California to discuss all the latest news in the Oracle Service, Field Service and Depot Repair products. Over four days attendees shared their experiences with implementing and using these EBS CRM products and heard details of recent enhancements and future product plans direct from Development. You can access all the Oracle presentations via Doc ID 1511768.1. Here are just some of the highlights:
    - Field Service: Next Generation Dispatch Center; Endeca Integration; Case Study: Oracle Sun Field Service implementation
    - Mobile Field Service: New capabilities for technician-facing applications
    - Service: Integration with Oracle Projects; New Teleservice enhancements
    - Spares Management: Supplier Warranty; External Repair Execution
    - Oracle Knowledge (Inquira): Introduction for Service Organizations
    If you weren't at the CAB, take a look at these presentations for great information about what's new and what's coming up in these products.
    12.1.3++ Features for Field Service, Mobile Field Service, Spares Management, FSTP & Advanced Scheduler
    In June 2012 the R12.1.3++ patches were released for Field Service, Mobile Field Service, FSTP and Advanced Scheduler. These patches contain new and updated functionality for these CRM Service suite modules. New functionality includes:
    - Field Service/FSTP/MFS: Support for Transfer Parts across subinventories in different organizations; Validation to ensure Installed Item matches Returned Item; MFS Wireless - Support for Special Address Creation; MFS Wireless - Enhanced Debrief Flow
    - Advanced Scheduler: Scheduler UI - Display of Spares Sourcing Information; Auto Commit (Release) Tasks by Territory; Dispatch Center UI - Display Spare Parts Arrival Information
    - Spares Management: Enhancements to the Task Reassignment Process; Enhancements to the Parts Requirements UI; Supply Chain Enhancements to allow filtering of ship methods from source location by distance
    Check the following notes for more details and relevant patch numbers:
    - Doc ID 1463333.1 - Oracle Field Service Release Notes, Release 12.1.3++
    - Doc ID 1452470.1 - Field Service Technician Portal 12.1.3++ New Features
    - Doc ID 1463066.1 - Oracle Advanced Scheduler Release Notes, Release 12.1.3++
    - Doc ID 1463335.1 - Oracle Spares Management Release Notes, Release 12.1.3++
    - Doc ID 1463243.1 - Oracle Mobile Field Service Release Notes, Release 12.1.3++

    Read the article

  • Security Risks of Unsigned ClickOnce Manifests

    - by Tom Tom
    Using signed manifests in ClickOnce deployments, it is not possible to modify files after the deployment package has been published - installation will fail as hash information in the manifest won't match up with the modified files. I recently stumbled upon a situation where this was problematic - customers need to be able to set things like connection strings in app.config before deploying the software to their users. I got round the problem by un-checking the option to "Sign the ClickOnce manifests" in VS2010 and explicitly excluding the app.config file from the list of files to have hashes generated during the publish process. From a related page on MSDN "Unsigned manifests can simplify development and testing of your application. However, unsigned manifests introduce substantial security risks in a production environment. Only consider using unsigned manifests if your ClickOnce application runs on computers within an intranet that is completely isolated from the internet or other sources of malicious code." In my situation, this isn't an immediate problem - the deployment won't be internet-facing. However, I'm curious to learn what the "substantial security risks" of what I've done would be if it was internet-facing (or if things changed and it needed to be in the future). Thanks in advance!
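    To make the risk concrete: ClickOnce's protection comes from the publish process recording a cryptographic hash of each file in the (signed) manifest and re-checking it at install time, which is exactly the check the excluded app.config now bypasses. A rough illustration of that kind of integrity check follows; the hash algorithm and file name are assumptions for illustration, not ClickOnce internals.

    ```csharp
    // Illustration of a manifest-style integrity check: hash a deployed file and
    // compare it against the hash recorded at publish time. SHA-256 and the file
    // name are illustrative assumptions, not the exact scheme ClickOnce uses.
    using System;
    using System.IO;
    using System.Security.Cryptography;

    class IntegrityCheck
    {
        static void Main()
        {
            string recordedHash = "..."; // placeholder: value captured when the package was published
            string actualHash;

            using (var sha = SHA256.Create())
            using (var stream = File.OpenRead("app.config"))
            {
                actualHash = Convert.ToBase64String(sha.ComputeHash(stream));
            }

            // With an unsigned manifest (or an excluded file) there is no trusted
            // recorded hash, so a tampered file can no longer be detected.
            Console.WriteLine(actualHash == recordedHash ? "file unmodified" : "file was modified");
        }
    }
    ```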

    Read the article

  • Windows 7 backup network restore: "The network location cannot be reached, 0x800704CF"

    - by Znarkus
    When I try to restore from a backup image, I get this error. After I enter the network address (\\10.0.0.1\backup or \\z\backup), the wizard presents me with the network login dialog, which leads me to believe that it can connect to the network (yes, the share is password protected). I decided to install Windows 7, since I thought that I could restore the image from Windows. The restore process in Windows can locate the backups, but to do an image restore it needs to reboot to the wizard above. Which of course gives the very same error. This is what \\z\backup looks like. Please help, I'm getting desperate. Update: Forgot to mention that the NAS is running Ubuntu, if that's relevant.

    Read the article

  • Timeout Considerations for Solicit Response

    - by Michael Stephenson
    Background
    One of the clients I work with had been experiencing some issues for a while surrounding web service timeouts. It's been a little challenging to work through the problems due to limitations in the diagnostic information available from one of the applications, but I learned some interesting things while troubleshooting the problem which don't seem to have been discussed much in the community, so I thought I'd share my findings.
    In the scenario we have BizTalk trying to make calls to a .NET web service which was exposed as a WSE 2 endpoint. In the process BizTalk will try to make a large number of concurrent web service calls to the application, and the backend application has more than enough infrastructure and capability to handle the load. We have configured the <connectionManagement> section of the BizTalk configuration file to support up to 100 concurrent connections from each of our 2 BizTalk send servers to the web servers of the application.
    The problem we were facing was that the BizTalk side was reporting a significant number of timeouts when calling the web service. One of the biggest issues was the challenge of being able to correlate a message from BizTalk to the IIS log in the .NET application and the custom logs in the application, especially when there was a fairly large number of servers hosting the web services. However, the key moment came when we were able to identify a specific call which had taken 40 seconds to execute on the server (yes, a long time I know, but that's a different story!). Anyway, we were able to identify that this had timed out on the BizTalk side. Based on the normal 2 minute timeout we knew something unexpected was going on.
    From here I decided to do some experimentation, and I wanted to start outside of BizTalk because my hunch was this was not a BizTalk behaviour but something which was being highlighted by BizTalk because of our large load.
    Server-side - Sample Web Service
    To begin with I created a sample web service. Nothing special, just a vanilla asmx web service hosted in IIS6 on Windows 2003 Standard Edition. The web service is just a hello world style web service. The only key feature is that the server side web method has a 30 second sleep in it and will trace out some information before and after the thread is set to sleep. In the configuration for this web service there again is nothing special; it's pretty much the most plain simple web service you could build.
    Client-Side
    To begin looking at what was happening with our example I created a number of different ways to consume the web service.
    SoapHttpClientProtocol Example
    I created a small application which would use a normal generated proxy to call the web service. It would iterate around a loop and make calls using the begin/end methods so I can do this asynchronously. I would do a loop of 20 calls with the connectionManagement configuration section supporting only 5 concurrent connections to the server:
    <system.net>
      <connectionManagement>
        <remove address="*"/>
        <add address="*" maxconnection="12" />
        <add address="http://<ServerName>" maxconnection="5" />
      </connectionManagement>
    </system.net>
    The key points of the service calling code (a rough sketch of such a client follows below) are:
    - I have configured the timeout of 40 seconds for the proxy
    - I am using the asynchronous methods on the proxy to call the web service
    The Test
    I would run the client and execute 21 calls to the web service.
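    The screenshots of the calling code from the original post are not reproduced here, so below is a minimal sketch of roughly what such a client looks like; the proxy class name (HelloService) and its HelloWorld method are illustrative assumptions, not the original sample code.

    ```csharp
    // Rough sketch of the SoapHttpClientProtocol test client described above.
    // "HelloService" stands in for the Visual Studio-generated proxy class; the
    // real sample's type and method names are not shown in the post.
    using System;

    class Program
    {
        static void Main()
        {
            for (int i = 1; i <= 21; i++)
            {
                var proxy = new HelloService();
                proxy.Timeout = 40000; // 40 second timeout, as configured in the article

                int callNumber = i;
                Console.WriteLine("Call {0} started at {1:HH:mm:ss.fff}", callNumber, DateTime.Now);

                // Begin/End asynchronous pattern exposed by generated SoapHttpClientProtocol proxies
                proxy.BeginHelloWorld(ar =>
                {
                    try
                    {
                        proxy.EndHelloWorld(ar);
                        Console.WriteLine("Call {0} completed at {1:HH:mm:ss.fff}", callNumber, DateTime.Now);
                    }
                    catch (Exception ex)
                    {
                        Console.WriteLine("Call {0} failed: {1}", callNumber, ex.Message);
                    }
                }, null);
            }

            Console.ReadLine(); // keep the console app alive while the asynchronous calls complete
        }
    }
    ```
    With maxconnection set to 5, only five of these calls can hold a connection at a time; the rest queue on the client side, which is exactly the throttling behaviour the timeout observations below hinge on.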
    The Results
    The client side trace and the web service side trace (the screenshots are not reproduced here) show what's happening on each side. Some observations on the results are:
    - All of the calls were successful from the client's perspective
    - You could see the next call starting on the server as soon as the previous one had completed
    - Calls took significantly longer than 40 seconds from the start of our call to the return. In fact call 20 took 2 minutes and 30 seconds from the perspective of my code to execute, even though I had set the timeout to 40 seconds
    WSE 2 Sample
    In the second example I used the exact same code to call the web service again, with a single exception: I modified the web service proxy to derive from WebServicesClientProtocol, which is part of WSE 2 (using SP3). The key points are:
    - I have configured the timeout of 40 seconds for the proxy
    - I am using the asynchronous methods on the proxy to call the web service
    The Test
    This test would execute 21 calls from the client to the web service.
    The Results
    Some observations on the client side and server side traces for this scenario are:
    - With call 4, if you look at the server side trace, it did not start executing on the server for a number of seconds after the other 4 initial calls which were accepted by the server. I re-ran the test and this happened a couple of times and not on most others, so at this point I'm just putting this down to something unexpected happening on the development machine and we will leave this observation out of scope of this article.
    - You can see that the client side trace statement executed almost immediately in all cases
    - All calls after the initial few calls would time out
    - On the client side, the calls that did time out, timed out in a longer duration than the 40 seconds we set as the timeout
    - You can see that as calls were completing on the server the next calls were starting to come through
    - The calls that timed out on the client did actually connect to the server and their server side execution completed successfully
    Elaboration on the findings
    Based on the above observations I have drawn a sequence diagram (not reproduced here) to illustrate conceptually what is happening. Everything except the final web service object is on the client side of the call. In the diagram I've put two notes on the Web Service Proxy to show the two different places where the different base classes seem to start their timeout counters. From the earlier samples we can work out that the timeout counter for the WSE web service proxy starts before the one for the SoapHttpClientProtocol proxy, and the WSE one includes the time to get a connection from the pool, whereas the Soap proxy timeout just covers the method execution.
    One interesting observation: if we rerun the above sample and increase the number of calls from 21 to 100,000, then for the WSE sample we will see a similar pattern where everything after the first few calls will time out on the client as soon as it makes a connection to the server, whereas the Soap proxy will happily plug away and process all of the calls without a single timeout. I have actually set the sample running overnight and this did happen.
    At this point you are probably thinking the same thoughts I was at the time about the differences in behaviour, and which is right, and why are they different?
    I'm not sure there is a definitive answer to this in the documentation, or at least not one that I could find! I think you just have to consider that they are different and they could have different effects depending on your messaging solution. In lots of situations this is just not an issue, as your concurrent requests don't get to the point where you end up throttling the web service calls on the client side; however, this is definitely more common with an integration broker such as BizTalk where you often have high throughput requirements.
    Some of the considerations you should make
    Based on this behaviour you should be aware of the following:
    - In a .NET application, if you are making lots of concurrent web service calls from an application in an asynchronous manner, your users may think they are experiencing poor performance while you think your web service is working well. The problem could be that the client will have a default of 2 connections to remote servers, so you should bear this in mind.
    - When you are developing a BizTalk solution or a .NET solution with the WSE 2 stack, you may experience timeouts under load, and throttling the number of connections using the max connections element in the configuration file will not help you.
    - For an application using WSE 2 or SoapHttpClientProtocol, an expired timeout will not throw an error until after a connection to the server has been made, so you should consider this in your transaction and durability patterns.
    Our Work Around
    In the short term for our specific scenario we know that we can handle this by just increasing our timeout value. There is only a specific small window when we get lots of concurrent traffic that causes this scenario, so we should be able to increase the timeout to take into consideration the additional client side wait, and on the odd occasion where we do get a timeout the BizTalk send port retry will handle this. What was causing our original problem was that for that short window we were getting a lot of retries which significantly increased the load on our send servers and highlighted the issue.
    Longer Term Solution
    As a longer term solution this really gives us more ammunition to argue for a migration to WCF. The application we are calling has some factors which limit the protocols we can use, but with WCF we would have more control over the various timeout options, because in WCF you can configure specific parts of the timeout.
    Summary
    I've had this blog post on my to-do list for ages, but hopefully it will be useful to some people to just understand this behaviour and to possibly help you with some performance issues you may have. I do not believe there is too much in the way of documentation, particularly around WSE 2 and ASMX in this area, so again another bit of ammunition for migrating to WCF. I'll try to do a follow up post with the sample for WCF to show how this changes things.

    Read the article

  • How to create a "retro" pixel shader for transformed 2D sprites that maintains pixel fidelity?

    - by David Gouveia
    The image below shows two sprites rendered with point sampling on top of a background: the left skull has no rotation/scaling applied to it, so every pixel matches perfectly with the background; the right skull is rotated/scaled, and this results in larger pixels that are no longer axis aligned. How could I develop a pixel shader that would render the transformed sprite on the right with axis aligned pixels of the same size as the rest of the scene? This might be related to how sprite scaling was implemented in old games such as Monkey Island, because that's the effect I'm trying to achieve, but with rotation added.
    Edit: As per kaoD's suggestions, I tried to address the problem as a post-process. The easiest approach was to render to a separate render target first (downsampled to match the desired pixel size) and then upscale it when rendering a second time. It did address my requirements above. First I tried doing it Linear -> Point, and the result was this: there's no distortion, but the result looks blurred and it loses most of the highlight colors. In my opinion it breaks the retro look I needed. The second time I tried Point -> Point, and the result was this: despite the distortion, I think that might be good enough for my needs, although it does look better as a still image than in motion. To demonstrate, here's a video of the effect, although YouTube filtered the pixels out of it: http://youtu.be/hqokk58KFmI However, I'll leave the question open for a few more days in case someone comes up with a better sampling solution that maintains the crisp look while decreasing the amount of distortion when moving.
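    For anyone wanting to try the render-target approach described above, a minimal XNA 4.0 sketch of the Point -> Point variant might look like the following. The pixelScale factor, the render-target size and the field names are illustrative assumptions, not the poster's actual code.

    ```csharp
    // Inside a Game subclass: draw the scene into a small render target with point
    // sampling, then upscale it to the back buffer, again with point sampling.
    // pixelScale is an assumed "size of one retro pixel" in screen pixels.
    const int pixelScale = 4;
    RenderTarget2D lowResTarget;
    SpriteBatch spriteBatch;

    protected override void LoadContent()
    {
        spriteBatch = new SpriteBatch(GraphicsDevice);
        lowResTarget = new RenderTarget2D(
            GraphicsDevice,
            GraphicsDevice.Viewport.Width / pixelScale,
            GraphicsDevice.Viewport.Height / pixelScale);
    }

    protected override void Draw(GameTime gameTime)
    {
        // 1. Draw the background and the rotated/scaled sprites into the small target.
        GraphicsDevice.SetRenderTarget(lowResTarget);
        GraphicsDevice.Clear(Color.Black);
        spriteBatch.Begin(SpriteSortMode.Deferred, BlendState.AlphaBlend,
                          SamplerState.PointClamp, null, null);
        // ... spriteBatch.Draw(...) calls here, with positions/scales divided by pixelScale ...
        spriteBatch.End();

        // 2. Upscale the low-resolution image to the full back buffer.
        GraphicsDevice.SetRenderTarget(null);
        GraphicsDevice.Clear(Color.Black);
        spriteBatch.Begin(SpriteSortMode.Deferred, BlendState.Opaque,
                          SamplerState.PointClamp, null, null);
        spriteBatch.Draw(lowResTarget, GraphicsDevice.Viewport.Bounds, Color.White);
        spriteBatch.End();

        base.Draw(gameTime);
    }
    ```
    Swapping the first Begin call to SamplerState.LinearClamp gives the Linear -> Point variant the poster describes as blurrier.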

    Read the article

  • Google Account Changes that Messed Up My Ability to Manage Sites

    - by A Yearwood
    I hope you can help... I have a google account that I use to manage my church website. This account is set up with my work email address. My work (a school system) is in the process of changing us all over to use google accounts to replace Outlook. When I logged in to this new account (which is my same email address) it no longer lets me manage my google site. Under my dashboard I can see that my data is still there, but when I try to "manage" my site it says I don't have one and re-directs me to my school's account. Thanks in advance. A Yearwood

    Read the article

  • How to install network drivers during installation?

    - by Matt
    I have a server that I'm attempting to install Windows onto. However, the disk is an iSCSI target provided by iPXE. Everything appears to go well until about 3/4 of the way through the install process, when I get an error about a critical driver missing and the installation is cancelled. I would say the critical driver would be the network card. It's an Intel NIC and the drivers are not on the Windows installation CD. I tried slipstreaming them with RTSevenLite, but after it created the CD it seems it failed to make it bootable. I've also not been successful in making a bootable USB thumb drive or USB HDD. I suspect a buggy BIOS even though I have the latest. How do I install network drivers during installation? Windows used to provide an optional F6 feature during install, but this seems to be missing in Windows Server 2008. Perhaps there is a way to do this, or another method?

    Read the article

  • Dell R910 with Integrated PERC H700 Adapter

    - by Alex
    I am in the process of designing an architecture based around a single Dell R910 server running Windows Server 2008 Enterprise. I would like the server to have 8 RAID1 pairs of spinning disks, so I intend to implement: Dell R910 Server Integrated PERC H700 Adapter with 1 SAS expander on each SAS connector (so 8 expanders in total) 7 RAID1 pairs of 143Gb 15K HDD, each paired on one connector using an expander 1 RAID1 pair of 600Gb 10K HDD, paired on the remaining connector using an expander My main concern is not to introduce bottlenecks in this architecture, and I have the following questions. Will the PERC H700 Adapter act as a bottleneck for disk access? Will using SAS expanders for each RAID1 pair cause a bottleneck or would this be as fast as pairing disks directly attached to the SAS connectors? Can I mix the disks, as long as the disks in each RAID1 pair are the same? I assume so. Can anyone recommend any single-to-double SAS Expanders that are known to function well with the H700? Cheers Alex

    Read the article

  • The Workshop Technique Handbook

    - by llowitz
    The #OUM method pack contains a plethora of information, but if you browse through the activities and tasks contained in OUM, you will see very few references to workshops.  Consequently, I am often asked whether OUM supports a workshop-type approach.  In general, OUM does not prescribe the manner in which tasks should be conducted, as many factors, such as culture, availability of resources, and potential travel cost of attendees, can influence whether a workshop is appropriate in a given situation.  Although workshops are not typically called out in OUM, OUM encourages the project manager to group the OUM tasks in a way that makes sense for the project. OUM considers a workshop to be a technique that can be applied to any OUM task or group of tasks.  If a workshop is conducted, it is important to identify the OUM tasks that are executed during the workshop.  For example, a “Requirements Gathering Workshop” is quite likely to include Gathering Business Requirements, Gathering Solution Requirements and perhaps Specifying Key Structure Definitions. Not only is a workshop approach to conducting the OUM tasks perfectly acceptable, OUM provides in-depth guidance on how to maximize the value of your workshops.  I strongly encourage you to read the “Workshop Techniques Handbook” included in the OUM Manage Focus Area under Method Resources. The Workshop Techniques Handbook provides valuable information on a variety of workshop approaches and discusses the circumstances in which each type of workshop is most effective.  Furthermore, it provides detailed information on how to structure a workshop and tips on facilitating the workshop. You will find guidance on some popular workshop techniques such as brainstorming, setting objectives, prioritizing and other more specialized techniques such as Value Chain Analysis, SWOT analysis, the Delphi Technique and much more. Workshops can and should be applied to any type of OUM project, whether that project falls within the Envision, Manage or Implementation focus areas.  If you typically employ workshops to gather information, walk through a business process, develop a roadmap or validate your understanding with the client, by all means continue utilizing them to conduct the OUM tasks during your project, but first take the time to review the Workshop Techniques Handbook to refresh your knowledge and hone your skills.

    Read the article

  • How can a non-technical person learn to write a spec for small projects?

    - by Joseph Turian
    How can a non-technical person learn to write specs for small projects? A friend of mine is trying to outsource some development on a statistics project. In particular, he does a lot of work in Excel, and wants to outsource the creation of scripts to do what he now does by hand. However, my friend is extremely non-technical. He is poor at writing technical specs. When he does write a spec, it is written the way you would describe doing something in Excel (go to this cell and then copy the value to that cell). It is also overly verbose, and repeats examples several times. I'm not sure if he properly describes corner cases. The first project he outsourced was a failure. I think he overdescribed some details, but underdescribed corner cases. That and/or the coder he hired didn't think through the corner cases and ask appropriate questions. I'm not sure. I got on IM with him and it took me half an hour to dig out a description that should have taken five minutes or less to give. I wrote the scripts for him at the end, but didn't examine why his process with the coder failed. He has asked me for help. However, I refuse to get involved, because taking his spec and translating it into clear requirements is 10x more work than executing on a clearly written spec. What is the right way for him to learn? Are there resources he could use? Are there ways he can learn from small, low-pressure practice projects with coders? [edit: Most of his scripts are statistical and data processing oriented. e.g. take this column and run an average over it. Remove these rows under these conditions. So the challenge is different than spec'ing a web app.]

    Read the article

  • Ubuntu 9.10 x86_32 with PAE causes graphics issues, mainly slowness

    - by widgisoft
    I've tried the 64bit version and found I was constantly hitting a brick wall trying to get 32bit stuff to run; I'd previously used PAE on 9.04 without any issues so figured I'd give it a shot. However, on 9.10 it seems PAE, or the process of enabling PAE, breaks the NVIDIA drivers/module somewhat, as performance is terrible; I can't even enable desktop effects and there's lots of artifacting on random controls. Disabling the PAE image and rebooting fixes the issue; however, I'm then stuck with "only" 2.7GB of RAM and unable to use the full 8GB that's installed. Is there something special I need to do when using PAE and NVIDIA drivers, or should I be using 64bit and just figure out how to force run 32bit packages? :-p

    Read the article

  • How to automatically make a change to Outlook Microsoft Exchange Proxy Settings

    - by Richard West
    I need to make a change on all computers in our domain. Specifically I need to make a change to the Microsoft Exchange Proxy Settings. Our users have Outlook 2010 installed. These settings can be manually accessed from: Control Panel - Mail - E-mail Accounts - (Select Account) - Change Account - More Settings - Connection Tab - Exchange Proxy Settings. I need to have both the "On fast networks" and "On slow networks" check boxes selected. Obviously the idea of asking my users to go through the process above to make these changes is not ideal. Therefore I am looking for advice on how I can automatically push these settings to my user base. I have searched the registry but I have been unable to find the location where this setting is saved. Thanks for any help!

    Read the article

  • Need game development sandbox like Etoys to do 2D game prototyping

    - by Dimitry Tato
    I am new to game development, and currently working on a mobile 2D game (for Android). As part of the development process, I need to build a prototype and playtest it, to see if the game mechanics and user interaction are ok. For example, if I have a starship shooting at enemies, I need to see:
    - what's the best size for my starship
    - what trajectories the enemy ships should fly, and at what velocity
    - whether the enemy ships should come only from left to right, or also from the top
    - whether the enemy ships should form a 'flock' or just fly by themselves
    - what's the best 'powerup' pickup mechanic: to shoot it, or to pick it up with the ship, etc.
    Implementing these details directly in Java (Android) is time consuming, and as many of the 'hypotheses' will be rejected, I don't want to invest a lot of time coding things, the majority of which are going to be rejected. I found the 'tool' Etoys http://www.youtube.com/watch?v=34cWCnLC5nM&feature=related and the official website http://www.squeakland.org/ which helps to build a 'prototype' quickly, but Etoys is meant for children learning programming and is too basic. SO MY QUESTION IS: Is there any prototyping tool as simple as Etoys but with better prototype quality?

    Read the article

  • How to implement a bird's eye view of a 2D Grid Map using Android

    - by IM_Adan
    I'm a true beginner with the Android platform and I'm having difficulties implementing a 2D grid system for a tower defense type game, where I can place towers on a specific tile and enemies will be able to traverse through tiles etc. What I would like is a practical explanation of how I could tackle this - a step by step guide for dummies. These are what I believe are the necessary steps to take (I might be wrong, but I hope someone can help me out):
    - Calculate the width and height of the view I'm working with.
    - Based on that, determine the number of tiles required and their dimensions (still not sure how I would do this).
    - Create each tile as a Rectangle object and draw these rectangles on a canvas.
    I would really be grateful if someone could steer me in the right direction on how to implement a 2D grid map using Android. I hope the answer to this question helps the TRUE beginners out there like me. I have looked at the links below yet I still feel that I don't truly understand what's going on. For XNA: 2D Grid based game - how should I draw grid lines? How to Create a Grid for a 2D Game? Also a quick note: all my previous game development has been in Java, mostly using Java SE and Swing. I also have a good understanding of the game development process; it is only Android that's confusing me :S

    Read the article

  • XOLO X900–First mobile phone with Intel Power

    - by Rekha
    The XOLO X900 is the world's first smartphone with the power of Intel Inside®, the result of Intel joining hands with LAVA International Ltd., one of India's fastest growing handset brands. The R&D centres are in Shenzhen (China) and Bangalore (India). The smartphone offers fast web browsing with its 1.6 GHz Intel processor and smooth multi-tasking using Intel's patented Hyper-Threading technology. It has optimized battery usage, a 4.03" hi-resolution LCD screen of 1024x600 pixels to ensure crisp text and vibrant images, an HDMI output port for TV, full HD 1080p playback and dual speakers. It has an 8MP HD camera with certain DSLR-like features, allowing up to 10 photos to be taken in less than a second. 3D and HD gaming is immensely realistic with the 400 MHz Graphics Processing Unit. The operating system is Android 2.3 (Gingerbread), upgradable to Android 4.0. It has GPS, and rear and front cameras of 8MP and 1.3MP respectively. An accelerometer, gyroscope, magnetometer, ambient light sensor and proximity sensor are all included. Intel's smartphone venture is beginning in India first: the phone is said to go on sale in India from April 23, 2012 onwards, at a best-buy price of approximately INR 22,000, initially at the Indian retail chain Croma and in other retail and online stores from early May. The company is launching the smartphone in India first and a more powerful handset in China later this year. Depending on their success in India and China, Intel is planning to move into the European and US markets. Till then, Intel smartphones are only for Indian buyers. You can find more technical information on XOLO's site.

    Read the article

  • Installing SQL Server 2005 Express on Windows 8 [closed]

    - by Angel
    We have an application that installs a custom instance of Microsoft SQL Server 2005 Express as part of the whole installation process. Microsoft states that SQL Server 2005 Express is not compatible with Windows 8, but in reality it seems to install and work perfectly fine. The only problem is that during the installation a dialog appears saying it's not compatible, and offers options to get help online, continue with the installation anyway, or cancel. If you choose to continue anyway on all these incompatibility prompts, then the SQL Server instance is installed without any problem whatsoever. Does anyone know if there is a way to suppress these incompatibility messages during the SQL Server installation (or any installation, for that matter)?

    Read the article

  • Excel - Filling images using a reference image

    - by tjans
    I have a spreadsheet that I use to create baseball cards for a tabletop baseball game. There are about 20 cards on my sheet, and I'd like to add a spot where I can set the logo and have it reflect that logo in each card without having to update 20 different images each time I create cards for a new team (and thus, a new logo). Is there a way to automate this process similar to setting one cell equal to the value of another (=A4, for instance)? I think the images aren't part of a cell and they float on top of the sheet, but I had hoped there was a way either with a macro or other VBA function (or maybe something built-in) that would accomplish this.

    Read the article

  • Getting BootMgr not found errors repeatedly on Win7 x64

    - by abszero
    So here is the basic configuration of the box:
    - Primary RAID 1 (Mirror, Bootable): 2x 300GB WD SATA drives
    - AMD Phenom Quad Core x64 @2.2
    - ASUS M3N78 Pro Board
    - 4GB RAM
    - Win 7 Ultimate
    Additionally, this box is a host OS for several CentOS boxes via VirtualBox. The box runs like a champ but, for whatever reason, every time I restart the machine I get a BootMgr not found error when the box tries to boot. I pop in my Windows DVD, select 'Repair Windows', then 'Fix Start Up Problems', and everything works fine... once. When I restart the box again I have to go back through this process. Any ideas on what is going on?

    Read the article

  • links for 2011-01-10

    - by Bob Rhubart
    - Clusterware 11gR2: Setting up an Active/Passive failover configuration (Oracle Luxembourg XPS on Database) - Some think that expensive third-party cluster systems are necessary when it comes to protecting a system with an Active/Passive architecture with failover capabilities. Not true, according to Gilles Haro. (tags: oracle otn database)
    - Atul Kumar: Part IX : Install OAM Agent - 11g WebGate with OAM 11g - Part 9 of Atul's step by step guide to the installation of Oracle Identity Management. (tags: oracle oam identitymanagement security otn)
    - Michel Schildmeijer: Oracle Service Bus: enable / disable proxy service with WLST - Amis Technology's Michel Schildmeijer shares a process he found for enabling / disabling a proxy service within Oracle Service Bus 11g with WLST (WebLogic Scripting Tool). (tags: oracle soa servicebus weblogic)
    - @andrejusb: SOA & E2.0 Partner Community Forum XIII - in Utrecht, The Netherlands - Oracle ACE Director Andrejus Baranovskis shares a nice plug for the SOA & E2.0 Partner Community Forum XIII coming up in March in the Netherlands. (tags: oracle oracleace otn soa enterprise2.0)
    - Oracle Magazine Architect Column: Enterprise Architecture in Interesting Times - Oracle ACE Directors Lonneke Dikmans, Ronald van Luttikhuizen, Mike van Alst, and Floyd Teter and Oracle enterprise architect Mans Bhuller share their thoughts on the forces that are shaping enterprise architecture. (tags: oracle otn architect entarch oraclemag)
    - InfoQ: Deriving Agility from SOA and BPM - Ten Things that Separate the Winners from the Losers - In this presentation from SOA Symposium 2010, Manas Deb and Clemens Utschig-Utschig discuss how to derive business agility from SOA and BPM, motivations for agility, developing and nurturing agility, influencers and dependencies, how SOA and BPM enable agility, pitfalls and recommendations for organizational culture, and pitfalls and recommendations for business and technical architectures. (tags: ping.fm)

    Read the article

  • Toshiba Satellite error 10053A0000 when re-installing Windows XP Home on an existing Windows 7

    - by Jayapal Chandran
    I had installed Windows 7 for testing. Now I want to re-install the original Windows XP Home. I am using the Toshiba installation (recovery) disk. The installation process asked a few questions; I selected the option to retain other partitions and to delete only the C drive. In the next step I got this error: http://web1.toshiba.ca/support//techsupport/tsbs/all/-tsb001404.htm So, what should I do to retain my files on the D drive and only allow the installation to delete the C drive?

    Read the article

  • Developing a feature whose sole purpose is to be taken out?

    - by adib
    What is the name of the pattern in which individual contributors (programmers/designers) develop an artifact whose sole purpose is to serve as a diversion, so that management can remove that feature from the final product? This is folklore I heard from an ex-colleague who used to work at a large game development company. At that company, it is well known that middle management is pressured to "give inputs" and "make changes" to the product, otherwise they risk being seen as not contributing to the project. This situation has delayed many projects because of these superfluous "management inputs". In one project at the above company, the artists and developers created a supernumerary animated character that appeared in every cutscene and stuck out like a sore thumb. They designed it in such a way that it could be easily removed before the game was shipped (this was when games were still sold on physical media and not as downloadable products). Obviously the management then voted to remove the animation. On the positive side, management didn't introduce any unnecessary changes that would have delayed the project, because they had shown that they provided constructive inputs to the product. This process pattern has a name among game programmers who work in corporations, but I forgot what the actual name was. I believe it's duck-something. Can anybody help point out the name, and perhaps some credible reference to how the pattern develops?

    Read the article

  • Policy Administration is the Top 2011 IT Priority for Insurers

    - by helen.pitts(at)oracle.com
    The current issue of Insurance Networking News includes an interesting column by Novarica's Matt Josefowicz.  Recent research by the firm revealed that policy administration replacement or extension is the most common strategic IT project for insurers this year.  The article goes on to note that insurers are keenly focused on the business capabilities that can be delivered once the system is in production, as well as the ability to leverage agile development methodologies and true business/IT collaboration during implementation.
    The results are not too surprising given that policy administration is a mission-critical system for life and annuity insurers.  As Josefowicz notes, "Core systems are called core for a reason--they are at the heart of the insurer's ability to function.  Replacing them is not to be done lightly, but failing to replace them can mean diminishing the ability to compete or function effectively as a company."
    Insurers can no longer rely on inflexible policy administration systems that impede their ability to rapidly configure and bring to market innovative new products, add riders, support changing business processes and take advantage of market opportunities.  The ability to leverage the policy administration system to better service customers and distribution channels by providing real-time access to policy information throughout the policy lifecycle is also critical to sustain loyalty and further fuel growth.
    Insurers can benefit from a modern, adaptive policy administration system, like Oracle Insurance Policy Administration for Life and Annuity.  You can learn more about the industry's most highly advanced, rules-based system, which is unmatched for its highly flexible, rules-based configurability, performance and extensibility, as well as global market industry trends, by viewing a complimentary, on-demand Webcast, Adapt, Transform and Grow:  Accelerate Speed to Market with Adaptive Insurance Policy Administration.
    Data conversions can be a daunting process for many insurers when deciding to modernize, in particular when consolidating from multiple, disparate legacy policy administration systems to a single new platform.  Migrating from a legacy system requires a well-thought-out approach that builds on the industry's best thinking from previous modernization efforts and takes data migration off the critical path by leveraging proven methodology and tools to capitalize on the new system's capabilities.  We'll discuss more about this approach in a future Oracle Insurance blog.
    Helen Pitts is senior product marketing manager for Oracle Insurance's life and annuities solutions.

    Read the article

  • .wine-pipelight folder not present

    - by DaimyoKirby
    Following the instructions on the pipelight installation page, I installed pipelight on Ubuntu 14.04. However, upon opening Firefox the .wine-pipelight folder isn't present in my home folder, and I get the following errors:
    [PIPELIGHT:LIN:unknown] attached to process.
    [PIPELIGHT:LIN:unknown] checking environment variable PIPELIGHT_SILVERLIGHT5_1_CONFIG.
    [PIPELIGHT:LIN:unknown] searching for config file pipelight-silverlight5.1.
    [PIPELIGHT:LIN:unknown] trying to load config file from '/home/alden/.config/pipelight-silverlight5.1'.
    [PIPELIGHT:LIN:silverlight5.1] basicplugin.c:427:checkSilverlightGraphicDriver(): error in execlp command - probably silverlightGraphicDriverCheck not found or missing execute permission.
    [PIPELIGHT:LIN:silverlight5.1] basicplugin.c:441:checkSilverlightGraphicDriver(): GPU driver check - Your driver is not in the whitelist, hardware acceleration disabled.
    [PIPELIGHT:LIN:silverlight5.1] using wine prefix directory /home/alden/.wine-pipelight.
    [PIPELIGHT:LIN:silverlight5.1] checking plugin installation - this might take some time.
    [PIPELIGHT:LIN:silverlight5.1] basicplugin.c:374:checkPluginInstallation(): error in execvp command - probably dependencyInstaller/sandbox not found or missing execute permission.
    [PIPELIGHT:LIN:silverlight5.1] basicplugin.c:384:checkPluginInstallation(): Plugin installer did not run correctly (exitcode = 1).
    [PIPELIGHT:LIN:silverlight5.1] basicplugin.c:108:attach(): plugin not correctly installed - aborting.
    I've reinstalled quite a few times and run through many of the common fixes offered on the pipelight Launchpad pages and here on Ask Ubuntu, and still it fails to run. Is there a reason why this folder isn't present, or why I'm getting these errors?
    Edit: Oddly enough, the .wine-pipelight folder is created when I open Nitro, although this still doesn't fix the issue.

    Read the article

  • Merging multiple top-level domains into a single domain

    - by user23089
    My client had multiple top-level domains. Each one represented an insurance program within a specific vertical. For all the sites at these alternate domains, there was a 30/70 mix of duplicate vs. original content. Some of the alternate domains ranked very well for their target keyphrase groups, whereas others were absent from results pages. We advised the client to merge the multiple domains into their existing main domain, for usability and SEO reasons. We recently ran the merger. Here was our process:
    1. On the main domain, transfer the content such that it matches 1-for-1 the content on the various alternate domains
    2. Set up Google Webmaster Tools on the main domain
    3. Push the new content on the main domain live and submit a corresponding sitemap to Google
    4. Establish 301 redirects on the alternate domains, such that each alternate domain URL points to its respective page on the main domain
    We did this 12 days ago, and pages (previously on the alternate domains) that had ranked well on Google have now plummeted or are entirely non-existent. Did we do the right thing by merging multiple top-level domains into a single domain? Is this initial dip in rankings normal? How soon should we expect to see things return to their normal rankings?
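    As an aside on step 4, if an alternate domain happens to be hosted on ASP.NET/IIS, the per-URL 301 redirect can be done in a few lines of application code; the sketch below is purely illustrative (the host names are placeholders, and it requires using System and System.Web), not the configuration the poster actually used.

    ```csharp
    // Global.asax.cs on an alternate domain - illustrative sketch only.
    // "alternate-domain.example" and "www.main-domain.example" are placeholders.
    protected void Application_BeginRequest(object sender, EventArgs e)
    {
        var request = HttpContext.Current.Request;
        var response = HttpContext.Current.Response;

        if (request.Url.Host.Equals("alternate-domain.example", StringComparison.OrdinalIgnoreCase))
        {
            // Each old URL maps 1-for-1 to the equivalent page on the main domain (step 1).
            string target = "http://www.main-domain.example" + request.Url.PathAndQuery;

            response.StatusCode = 301;              // permanent redirect
            response.AddHeader("Location", target);
            response.End();
        }
    }
    ```
    Whatever the stack, the important part is that each old URL issues a permanent (301) redirect to its exact counterpart, so search engines associate the old page with its new location.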

    Read the article

  • App_offline.htm and SharePoint and wholly contained images…

    - by Shawn Cicoria
    The question came up today whether we could use an "app_offline.htm" file along with HTML in that file that would reference images. First, I wasn't 100% sure if the app_offline.htm would work, but it sure did. Since it's just the Asp.net hosting process that detects the file, it circumvents loading any HttpApplications (SharePoint) beyond just serving up the HTML content.
    The second question was about having something more than text, specifically <img> tags. Since the HttpHandlers are taking all requests for all resources through the Asp.net pipeline, as soon as the app_offline.htm file is there, nothing else will get served from within that web application. Sure, we could host the files (images) outside the web app, but what fun would that be? So, I found this link on using an image in app_offline.htm: http://pbodev.wordpress.com/2009/12/20/app_offline-htm-with-an-image-yes-we-can/
    Turns out, the src attribute (in fact many attributes) can take a stream of data represented by a mime type and base64 encoding inline - such as:
    <img style="height:515px;width:700px;border-width:0px;" src="data:image/jpeg;base64,/9j/4AAQSkZJRgA
    One of the problems we had was that the image was too large, so we sliced up the image, but ended up with spaces between each of the slices. Lo and behold, it works with CSS as well:
    <style type="text/css"> .Slice_1_jpg { position: absolute; left:0px; top:0px; width:1011px; height:148px; background: url("data:image/jpeg;base64,/9j/4AAQSkZ
    And the body is as follows:
    <body> <div class="Slice_1_jpg"> </div>
    For this, I wrote a little Asp.Net site that, using a file upload control, will generate the necessary contents of what needs to go in the "data" value for the stream. A link to the code is here: http://cicoria.com/downloads/CreateBase64OfImage.zip
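    The linked utility isn't reproduced here, but the core of generating that "data" value is just a couple of framework calls. Below is a minimal sketch (the file name and mime type are assumptions), not the author's actual CreateBase64OfImage code.

    ```csharp
    // Minimal sketch: turn an image file into a data URI suitable for app_offline.htm.
    // "Slice_1.jpg" and the image/jpeg mime type are illustrative assumptions.
    using System;
    using System.IO;

    class CreateDataUri
    {
        static void Main()
        {
            byte[] imageBytes = File.ReadAllBytes("Slice_1.jpg");
            string dataUri = "data:image/jpeg;base64," + Convert.ToBase64String(imageBytes);

            // Paste the result into the src attribute of an <img>, or into a CSS url(...),
            // inside app_offline.htm.
            File.WriteAllText("Slice_1_datauri.txt", dataUri);
        }
    }
    ```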

    Read the article
