Search Results

Search found 15439 results on 618 pages for 'wls configuration'.


  • stunnel: SSL-to-SSL? (for smtp/imap)

    - by nonot1
    Hello, how can I configure stunnel to accept SSL connections and connect them to an SSL port on a different server? Here is my setup: our ISP's server, "Mail Server", supports smtp/imap over SSL (not STARTTLS, just plain SSL). But I have a bunch of client machines that will only trust a specific, internal root certificate, so they cannot connect to "Mail Server". For these client machines, I'd like to make a dedicated "Mail Tunnel" host that uses stunnel to listen with an in-house signed SSL certificate and just forward the data to "Mail Server" over a second SSL connection. Can this be done? What would be the specific steps for Ubuntu Server 10.10? (I'm not too familiar with persistent service configuration.) Thank you
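
    A single stunnel service only encrypts one side of a connection (client = yes makes the outgoing side SSL, client = no the listening side), so one common approach is to chain two services through a local port. A minimal sketch, with placeholder host names, ports and certificate paths rather than anything taken from this setup:

      ; /etc/stunnel/stunnel.conf (imap shown; repeat the pair for smtp)
      cert = /etc/stunnel/inhouse.pem        ; in-house signed cert presented to the clients

      [imaps-in]                             ; terminate SSL from the clients
      accept  = 993
      connect = 127.0.0.1:10993

      [imaps-out]                            ; re-encrypt towards the ISP's server
      client  = yes
      accept  = 127.0.0.1:10993
      connect = mail.isp.example:993

    On Ubuntu the stunnel4 package is normally made persistent by setting ENABLED=1 in /etc/default/stunnel4 and restarting the service.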

    Read the article

  • How do I fix "Setup did not find any hard disk drives installed in your computer" error during Win X

    - by CT
    I just bought a nettop. It came with WinXP Home. I first installed Win 7 on it, but I wasn't that happy with the performance so I decided to go back to XP. I am using an external DVD drive and a Win XP Pro disc. I boot from the DVD drive and during the install I get this error: "Setup did not find any hard disk drives installed in your computer. Make sure any hard disk drives are powered on and properly connected to your computer, and that any disk-related hardware configuration is correct. This may involve running a manufacturer-supplied diagnostic or setup program. Setup cannot continue. To quit Setup, press F3." This is the nettop in question: http://www.newegg.com/Product/Product.aspx?Item=N82E16883103228

    Read the article

  • Oracle Linux Partner Pavilion Spotlight - Part IV

    - by Ted Davis
    Welcome to the final Oracle Linux Partner Pavilion Spotlight, Part IV. Two days left till the Big Show. You are gearing up. We are gearing up. You can feel the excitement. We can feel the excitement. This. Will. Be. The. Best. Show. EVER. See you at the Partner Pavilion (Moscone South #1033) at Oracle OpenWorld. - Oracle Linux / Oracle VM Team. HP: HP and Oracle are pleased to announce another Oracle Validated Configuration based on the ProLiant DL980 server. Many choose to deploy Oracle workloads on the ProLiant DL980 because of the cost/performance ratio they achieve running the Oracle Linux Unbreakable Enterprise Kernel. You can be confident that Oracle Validated Configurations based on ProLiant servers will help you achieve your most demanding performance goals. QLogic: The QLogic-Oracle partnership spans over 20 years, resulting in the most comprehensive line of Oracle Linux I/O adapter technology. Interface options include Ethernet, Fibre Channel, and FCoE. Host-side connectivity is offered in both low-profile PCIe and Express Module PCIe form factors. QLogic software drivers are jointly qualified and "in-box" with Oracle Linux 5.x, 6.x and Oracle VM, enabling simplified installation and management while simultaneously taking risk out of the solution. Bringing innovations such as NPIV, T10-PI, and intelligent caching adapter technology to the Oracle Linux environment further strengthens the QLogic advantage. A big thank you to all of our Oracle Linux Partner Pavilion participants. We, and they, look forward to meeting you next week at Oracle OpenWorld. If you've missed our three previous Partner Spotlights, here are the links: Part I, Part II, Part III.

    Read the article

  • How IBM Implements the WebSphere Application Server SDK for Sun Solaris OS

    - by Eng Al-Rawabdeh
    I deploy the same application on IBM WAS on different operating systems (Windows, AIX and Sun Solaris), and SDK errors appear only on Solaris. Some sites I found say that the SDK on Solaris was built on top of the Sun SDK; is that right? I need to know whether IBM built the Solaris SDK from scratch or based it on the Sun SDK. More details: I installed the same IBM WAS application server on two servers: 1) Server1, running AIX, and 2) Server2, running Solaris. The two servers are on the same network and have the same configuration. I then deployed a Java application (X) on both servers. Application X ran on Server1 (AIX) without any problem, but when I ran it on Server2 (Solaris) I hit the SDK issue. So what is the difference between the AIX WAS SDK and the Solaris WAS SDK? Note: I also tried Windows and it ran without any problem.

    Read the article

  • How to write PowerShell code part 2 (Using function)

    - by ybbest
    In the last post, I showed you how to use an external configuration file in your PowerShell script. In this post, I will show you how to create a PowerShell function and call an external PowerShell script. You can download the script here. 1. In the original script, I create the site directly using the New-SPSite command. I will refactor it so that a new function creates the site using New-SPSite. A PowerShell function is quite similar to a C# method: you put your function parameters in () separated by commas (,), and you put the function body in {}.

      function add ([int] $num1, [int] $num2) {
          $total = $num1 + $num2
          #Return $total
          $total
      }

    2. The differences are that you do not need a semicolon (;) at the end of each statement, and when calling the function you do not use commas (,) to separate the parameters.

      function add ([int] $num1, [int] $num2) {
          $total = $num1 + $num2
          #Return $total
          $total
      }
      #Calling the function
      [int] $num1 = 3
      [int] $num2 = 4
      $d = add $num1 $num2
      Write-Host $d

    3. If you want to return anything from the function, you just write the object you want to return; there is no need to type return, e.g. $ObjectToReturn rather than return $ObjectToReturn.

    Read the article

  • "Bad response to Storage command" when scheduling job with Bacula

    - by Joril
    I have a Bacula setup with 9 clients, and it's working happily. Today I had to add another client, so I went and copied+adapted the existing configuration files from another client, but when I schedule a job for the new client, I get these errors:

      20-Mar 17:50 tools-dir JobId 39: Start Backup JobId 39, Job=BackupPresenze2.2012-03-20_17.50.49_04
      20-Mar 17:50 tools-dir JobId 39: Using Device "FileStorage"
      20-Mar 17:50 presenze2-fd JobId 39: Fatal error: Failed to connect to Storage daemon: bacula.mylan.local:9103
      20-Mar 17:50 tools-dir JobId 39: Fatal error: Bad response to Storage command: wanted 2000 OK storage , got 2902 Bad storage

    From the client I can telnet to bacula.mylan.local:9103 just fine, and jobs for other clients work successfully... What could I check? (Server and client run Ubuntu 10.04, if it's relevant)
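
    Since jobs for the other nine clients work, one thing worth doing is diffing the new client's Director-side resources against a working one; a password or address mismatch between bacula-dir.conf, bacula-sd.conf and the client's bacula-fd.conf commonly produces this kind of storage error. A sketch of the two resources to compare (names, addresses and passwords below are placeholders, not values from this setup):

      # bacula-dir.conf
      Client {
        Name     = presenze2-fd
        Address  = presenze2.mylan.local
        FDPort   = 9102
        Catalog  = MyCatalog
        Password = "fd-password"          # must match the Director {} block in the client's bacula-fd.conf
      }
      Storage {
        Name       = File
        Address    = bacula.mylan.local   # must be reachable from the client, never "localhost"
        SDPort     = 9103
        Password   = "sd-password"        # must match the Director {} block in bacula-sd.conf
        Device     = FileStorage
        Media Type = File
      }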

    Read the article

  • BSOD constantly with same ntoskrnl.exe error and disk indicator is frozen

    - by Sheep
    I get BSODs constantly and the disk activity indicator freezes. The error does not happen immediately, usually about an hour after boot. Here is the minidump summary: Bug Check Code = 0x00000124, Caused By Driver = ntoskrnl.exe, Caused By Address = ntoskrnl.exe+4b094c, Crash Address = ntoskrnl.exe+4b094c. It seems to be a hardware problem, but I checked the RAM and found no errors. I have two drives installed: the system is on an SSD, data is on an HDD. I checked the SSD with Properties > Tools > Error-checking, no errors. I have re-installed several times; it still happens, even after removing the HDD. Configuration: SSD: Crucial M4-CT064M4SSD2 with firmware 0009; chipset: Intel HM65; CPU: i7-2630QM. The SSD is set up correctly, SATA III 6Gb/s is enabled, and everything worked perfectly for nearly a year.

    Read the article

  • Service haproxy error

    - by user128296
    I want to configure HAProxy for outgoing mail load balancing. My configuration file /etc/haproxy.cfg is:

      global
          maxconn 4096          # Total Max Connections. This is dependent on ulimit
          daemon
          nbproc 4              # Number of processing cores. Dual Dual-core Opteron is 4 cores for example.
      defaults
          mode tcp
      listen smtp_proxy 199.83.95.71:25
          mode tcp
          option tcplog
          balance roundrobin    # Load Balancing algorithm
          ## Define your servers to balance
          server r23.lbsmtp.org 74.117.x.x:25 weight 1 maxconn 512 check
          server r15.lbsmtp.org 199.71.x.x:25 weight 1 maxconn 512 check

    When I start the haproxy service I get this error: "Starting HAproxy: [ALERT] 244/172148 (7354) : cannot bind socket for proxy smtp_proxy. Aborting." Please tell me where I am making a mistake; any help will be appreciated.
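
    A "cannot bind socket" abort usually means the listen address/port pair cannot be claimed by HAProxy. A few hedged checks, assuming a stock Linux box rather than anything specific to this host:

      netstat -ltnp | grep ':25 '        # is a local MTA (postfix/sendmail/exim) already bound to port 25?
      ip addr show | grep 199.83.95.71   # is the bind address actually configured on this machine?
      # If 199.83.95.71 is a floating/failover IP that is not local yet, allow non-local binds:
      sysctl -w net.ipv4.ip_nonlocal_bind=1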

    Read the article

  • How to move ubuntu 12.04 on another drive

    - by Maksim
    How can I move my Ubuntu install to another drive? I know about Clonezilla, but the problem is that the destination drive is smaller than the source. GParted can't copy-paste the partition either, unless the destination is the last partition. I tried dpkg --selected-packages and apt-clone. The first did not install all my packages and removed existing ones, so now I don't have full Unity or all of my packages; the second just fails on a package configuration step. Before that I also copied my /etc over to the new system. My partition tables:

      destination (gpt):
        1  1049kB  106MB   105MB   fat32            EFI System
        2  106MB   12,1GB  12,0GB  ext4
        3  12,1GB  66,3GB  54,2GB  ext4
      source (msdos):
        1  1049kB  12,0GB  12,0GB  primary  ext4
        2  12,0GB  492GB   480GB   primary  ext4
        3  492GB   500GB   8107MB  primary  linux-swap(v1)

    GPT is not working for me with the Ubuntu that uses GRUB 1.99. I don't know why, but my laptop can't boot any device with UEFI (just a black screen), although Ubuntu detects UEFI on a fresh install.
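
    Because the destination is smaller than the source, a block-level clone will not fit, but a file-level copy does not care about partition sizes. A rough rsync-based sketch, assuming the new root partition is /dev/sdb2 (a placeholder; verify device names with lsblk before running anything destructive):

      sudo mkfs.ext4 /dev/sdb2
      sudo mkdir -p /mnt/newroot && sudo mount /dev/sdb2 /mnt/newroot
      sudo rsync -aAXH --exclude={"/dev/*","/proc/*","/sys/*","/run/*","/tmp/*","/mnt/*"} / /mnt/newroot
      # enter the copy, fix the UUIDs in /etc/fstab, then reinstall the bootloader:
      for d in dev proc sys; do sudo mount --bind /$d /mnt/newroot/$d; done
      sudo chroot /mnt/newroot
      blkid                          # shows the new UUIDs to put into /etc/fstab
      grub-install /dev/sdb && update-grub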

    Read the article

  • A Console Application or Windows Application in VS 2010 for Sharepoint 2010 : A common Error

    - by Gino Abraham
    I have seen many SharePoint newbies crack their heads trying to create a console/Windows application in VS 2010 and make it talk to a SharePoint 2010 server. I had the same problem when I started with SharePoint. It is important to know that SharePoint 2010 is based on .NET Framework version 3.5, not version 4.0. When you create a console/Windows application in VS 2010, make sure you select .NET Framework 3.5 in the New Project dialog. If you missed this while creating the project, go to the Application tab of the project properties and verify that .NET Framework 3.5 is selected as the target framework. Now that you have selected the correct framework, will it work? Not yet: if the application is configured as x86 it will not work. SharePoint is a 64-bit application, and a Windows application that talks to SharePoint must also be 64-bit. Go to Configuration Manager and select x64. If x64 is not available, select <New…> and, in the New Solution Platform dialog box, select x64 as the new platform, copying settings from x86 and checking the "Create new project platforms" check box. None of this applies if you are writing a console application that talks to SharePoint through the Client Object Model.

    Read the article

  • I used a 301 Permanent Redirect to a 3rd party site by mistake! Can I stop the redirection?

    - by Dees
    Oh Noes! I've been parking a domain name for a friend/client of mine on my hosting provider (Dreamhost, FWIW) for a while, and they eventually asked me to redirect their domain to a 3rd party website which is currently featuring some relevant promotional content. Once this period ends, we will probably go ahead and set up a proper website for the domain on my hosting account. I used Dreamhost's "redirect" hosting option in their domain configuration panel, not realizing that it would implement a 301 Permanent redirect, or what the implications were. Now it seems that for any client that has visited the site anytime recently, the 301 redirect is still cached/in effect, although I have changed the domain settings back to regular Dreamhost full site hosting. It seems that the only thing that can be done is to wait out the TTL/cache expiration for the redirect. I have no idea how long that might be, so I'm wondering if there is any good way to cache-bust the redirect or otherwise undo its long-term effects. I put a simple html meta refresh in the domain folder to replace the 301 to keep the intended functionality in place, but I'm still not able to access the domain's other content normally, even via FTP, etc. Isn't there anything I can do? Otherwise, how long does it take for a cached redirect to expire? It's gonna be a bummer if it's really permanent.

    Read the article

  • Enabling spdy in nginx fails spdycheck.org

    - by tulio84z
    I'm trying to enable SPDY with nginx 1.6.0, but spdycheck.org gives me two complaints (the screenshots of the two complaints are not included in this excerpt). My nginx configuration file is as follows:

      server {
          listen 80;
          listen 443 ssl spdy;
          server_name 54.201.32.118;

          ssl_certificate     /etc/nginx/ssl/tulio.crt;
          ssl_certificate_key /etc/nginx/ssl/tulio.key;

          if ($ssl_protocol = "") {
              rewrite ^ https://$server_name$request_uri? permanent;
          }

          root /usr/share/nginx/html;
          index index.html index.htm;

          location / {
              # First attempt to serve request as file, then
              # as directory, then fall back to displaying a 404.
              try_files $uri $uri/ =404;
              # Uncomment to enable naxsi on this location
              # include /etc/nginx/naxsi.rules
          }
      }

    The rest of the spdycheck output is at: http://spdycheck.org/#54.201.32.118
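
    Without the screenshots it is hard to say which two checks failed, but SPDY in nginx 1.6 needs the spdy module compiled in and NPN support from OpenSSL 1.0.1 or newer, both of which can be sanity-checked from the shell (generic commands, not tied to this server):

      nginx -V 2>&1 | tr ' ' '\n' | grep spdy                        # built with --with-http_spdy_module?
      openssl version                                                # NPN needs OpenSSL 1.0.1+
      openssl s_client -connect 54.201.32.118:443 -nextprotoneg ''   # does the server advertise spdy/3.1?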

    Read the article

  • How do I make YouTube videos fill up the entire screen when using dual monitors?

    - by Jephir
    I am using a dual monitor setup on Ubuntu 9.10 with the TwinView configuration in NVIDIA X Server Settings. My total resolution is 2960x1050 pixels, and my individual monitors are 1680x1050 (primary) and 1280x1024 (secondary). When going into fullscreen mode on any YouTube video, I only see a cropped version of the video on my primary display (screenshot not included in this excerpt). This does not happen on any other video sharing website; they properly fill the entire primary screen. To my knowledge this problem only happens on YouTube.

    Read the article

  • Obscure SPUtility.SendMail Behavior When Manually Passing in Mail Headers

    - by Damon
    There are two ways to send mail in SharePoint: you can either use the mail components from the System.Net namespace, or you can send email using SharePoint's SPUtility.SendMail method. One of the benefits of the SPUtility.SendMail method is that it uses the mail configuration from SharePoint, so you can manage settings in Central Administration instead of having to go through and modify your web.config file. SPUtility.SendMail can get the job done, but it's definitely not as developer friendly as the components from the System.Net namespace. If you want to CC someone on an email, for example, you do NOT have a nice CC parameter - you have to manually add the CC mail header and pass it into the SPUtility.SendMail method. I had to do this the other day, and ran into a really obscure issue. If you do NOT pass the headers into the method, SharePoint sends the email using the From address configured in the Outgoing Mail settings in Central Admin. If you pass headers into the method but do not include the from header, SharePoint sends the mail using the email address of the current user. This can be an issue if your mail server is set up to reject email from an invalid address or an address that is not on your domain. The way to fix this issue is to always pass in the from header. If you want to use the configured From address, you can do the following:

      SPWebApplication webApp = SPWebApplication.Lookup(new Uri(SPContext.Current.Site.Url));
      StringDictionary headers = new StringDictionary();
      headers.Add("from", webApp.OutboundMailSenderAddress);

    Read the article

  • Snow Leopard connecting to Ubuntu 10.04 through Samba fails -- need help fixing.

    - by Chris Altman
    I have an Ubuntu 10.04 web server. I want to connect to it from my OS X 10.6 machine using Finder. I have installed OpenSSH and Samba on the Ubuntu machine. In my smb.conf I have this share definition:

      [www]
          comment     = Development Computer WWW
          path        = /var/www
          writeable   = yes
          browseable  = yes
          allow hosts = 192.168.1.

    I can connect to the machine through Finder using a non-root user, but when I attempt to add files through Finder I get an "Insufficient Permissions" error. Please help; I am not sure whether the issue is in the Samba configuration or in OS X 10.6. Thank you.
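
    If the non-root user you connect as is not allowed to write under /var/www by the Unix permissions, "Insufficient Permissions" is exactly what Finder will report. One hedged fix on the Samba side is to force writes onto the web server account; the extra directives below are standard smb.conf options, but the values are only a suggestion:

      [www]
          comment        = Development Computer WWW
          path           = /var/www
          writeable      = yes
          browseable     = yes
          allow hosts    = 192.168.1.
          force user     = www-data
          force group    = www-data
          create mask    = 0664
          directory mask = 0775

    The underlying Unix permissions still apply, so /var/www must be writable by www-data for this to work; restart Samba after editing.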

    Read the article

  • Linux Bridge, Samba netbios name/hostname access

    - by Christopher Wilson
    I am currently running a Linux bridge in the following configuration:

      ADSL Modem:      192.168.1.1
      Linux Bridge:    eth0: 192.168.1.2, eth1: no address
      Wireless Router: 192.168.0.1

    My issue is that the client systems cannot access the "Linux Bridge" shares using the server's WINS name (yes, I understand it is a transparent bridge; I can access it via the 192.168.1.2 address, which is not on the same subnet as the clients). This is the global section of my smb.conf:

      [global]
          unix extensions = off
          os level = 20
          netbios name = server
          guest account = nobody
          server string = 447 Server
          security = share
          #unix extensions = no
          #wins support = yes
          #wins server = 192.168.0.1
          name resolve order = wins lmhosts hosts bcast
          interfaces = bridge1 eth0 eth1 lo
          bind interfaces only = yes

    Can I access a bridged server by its WINS name to reach the Samba shares? Cheers, Chris
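
    NetBIOS name resolution is broadcast-based by default, and broadcasts do not cross the boundary between the 192.168.0.x and 192.168.1.x subnets, which would explain why the IP works but the name does not. Two hedged options: run a WINS server (for example, enable the commented-out wins support = yes on the Samba box and point the clients at it), or add a static lmhosts entry on each client, e.g.:

      # C:\Windows\System32\drivers\etc\lmhosts on a Windows client (name/IP taken from this setup):
      192.168.1.2   SERVER   #PRE
      # or /etc/samba/lmhosts on a Linux client:
      192.168.1.2   SERVER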

    Read the article

  • How do I install and run Tomcat on port 80 as my only web server? (Rooted Ubuntu box)

    - by gav
    Hi All, tl;dr - I have a rooted Linux box that I want to run Tomcat on as the only web server (no Apache web server); how would you set this up, avoiding common security pitfalls? I've written a Grails app that I want to run on a VPS I rent. The VPS has very little memory and I am using it for the sole purpose of running this application, so I don't need the Apache web server. This is my first venture into server administration and I'm sure to fall into some well-known traps. Should I use iptables to redirect requests from port 80 to 8080? Should I run Tomcat as root or as its own user? What configuration settings would be good for a low-memory system expecting fewer than 10 concurrent users? Hopefully an easy one for you! Anyone who could link to a tutorial would be a personal hero destined for great things, no doubt. Gav
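
    A common pattern is indeed to keep Tomcat bound to 8080 as its own unprivileged user (never root) and let the kernel redirect port 80, so Tomcat never needs a privileged port. A hedged sketch; the interface name and persistence mechanism are typical Debian/Ubuntu values, adjust for your VPS:

      sudo iptables -t nat -A PREROUTING -i eth0 -p tcp --dport 80 -j REDIRECT --to-port 8080
      sudo iptables-save | sudo tee /etc/iptables.rules    # then restore at boot, e.g. from /etc/rc.local or an if-pre-up hook

    If you would rather have Tomcat own port 80 directly, authbind or jsvc are the usual alternatives; either way, keep Tomcat running as its own unprivileged user and trim the JVM heap (e.g. -Xmx128m in CATALINA_OPTS) for a low-memory box.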

    Read the article

  • Link bonding across multiple switches?

    - by Bryan Agee
    I've read up a little bit on bonding NICs with ifenslave; what I'm having trouble understanding is whether special configuration is needed to split the bonds across two switches. For example, if I have several servers that each have two NICs, and two separate switches, do I just configure the bonds and plug one NIC from each server into switch #1 and the other into switch #2? Or is there more to it than that? If the bonds are active-backup, will a NIC failure on a single machine mean that server becomes disconnected, since the rest of the machines are using their primary NICs while it is using its secondary? Or do you link the two switches with a cable as well?
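
    With mode active-backup (mode 1) the switches need no special configuration, so one NIC of each bond into each switch is exactly the intended layout; only LACP/802.3ad bonds require the two switches to be stacked or MLAG-capable. After a failover the server stays reachable as long as the two switches are themselves interconnected (directly or through an upstream switch/router), which they generally need to be for this design. A sketch of the Debian/Ubuntu ifenslave stanza (addresses and interface names are examples, and the exact bond-* option spelling varies a little between releases):

      # /etc/network/interfaces
      auto bond0
      iface bond0 inet static
          address      192.168.1.10
          netmask      255.255.255.0
          gateway      192.168.1.1
          bond-slaves  eth0 eth1
          bond-mode    active-backup
          bond-miimon  100
          bond-primary eth0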

    Read the article

  • nagios contact groups to check_mk

    - by Skiaddict
    I have Nagios installed with traditional configuration files. I have created some contact groups and assigned them to hosts. For the web UI I'm using check_mk. And here's the question: check_mk supports showing hosts/services based on contact group membership, but I can't get it to use the Nagios contact groups. (The result should be that if person XYZ is logged in, he sees only the hosts and services assigned to him.) My users are in LDAP (I'm using the check_mk login form, not Apache authorisation). I can't find any information about this in the documentation, so if someone has experience with this, please tell me how it works. The problem is that I cannot let everybody be admin and receive all alerts...

    Read the article

  • Ubuntu 12.04 Server: permissions on /var/www for newly copied files

    - by Abe
    I ran the following commands to set up permissions on the /var/www folder on my Ubuntu 12.04 server:

      sudo usermod -g www-data abe
      sudo chown -R www-data:www-data /var/www
      sudo chmod -R 775 /var/www

    I downloaded WordPress with wget into /var/www and unzipped it:

      cd /var/www
      wget http://wordpress.org/latest.zip
      mv latest.zip wordpress.zip
      unzip wordpress.zip

    I created a new database and user in MySQL and attempted to run the setup through the web interface. When I enter the configuration info in WordPress I run into the following error message: "Sorry, but I can't write the wp-config.php file." When I run ls -la, I see that the files are owned by my user abe but belong to the group www-data. Would I have to run "sudo chmod -R 775 /var/www" every time I copy new files to /var/www?
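
    Strictly speaking the commands above set ownership and mode bits rather than ACLs. To avoid re-running chmod after every copy, the group and its write permission can be made to inherit automatically; a hedged sketch (the acl package and a filesystem mounted with ACL support are assumed):

      sudo find /var/www -type d -exec chmod g+s {} +     # setgid: new entries inherit the www-data group
      sudo apt-get install acl
      sudo setfacl -R  -m g:www-data:rwx /var/www         # give the group rwx on what is there now
      sudo setfacl -R -d -m g:www-data:rwx /var/www       # and on anything created later (default ACL)

    For the immediate WordPress error it is the web server user (www-data) that likely lacks write access to the freshly unzipped wordpress directory, so a one-off chown or chmod on /var/www/wordpress should also get the installer past that step.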

    Read the article

  • Seeking for a better solution to restrict access in GRUB2 menu

    - by LiveWireBT
    I just read that in certain situations you should also protect access to your GRUB2 menu by setting a password, and maybe refine access by adding --unrestricted or --users as arguments to menu entries and submenus. I read the corresponding pages in the Ubuntu Community Documentation and the Arch Wiki. So I created /etc/grub.d/01_security, stored usernames and passwords in there, made the file executable and ran update-grub. This works as intended: every action in the menu prompts for a username and password. But I also want to modify the automatically generated entries to either restrict them to certain users (via --users) or make them available to everyone but not editable by everyone (via --unrestricted). I was able to find the proper lines in 10_linux and edit them accordingly, however I'd love to see an easier solution. Perhaps an option like GRUB_DISABLE_RECOVERY="true" or GRUB_DISABLE_OS_PROBER=true in /etc/default/grub for easy (re)configuration (for linux and os-prober generated entries). Here's a diff from my 13.10 installation:

      $ diff /etc/grub.d/10_linux /etc/grub.d/10_linux_bak
      123c123
      < echo "menuentry '$(echo "$title" | grub_quote)' ${CLASS} --unrestriced \$menuentry_id_option 'gnulinux-$version-$type-$boot_device_id' {" | sed "s/^$
      ---
      > echo "menuentry '$(echo "$title" | grub_quote)' ${CLASS} \$menuentry_id_option 'gnulinux-$version-$type-$boot_device_id' {" | sed "s/^/$submenu_inde$
      125c125
      < echo "menuentry '$(echo "$os" | grub_quote)' ${CLASS} --unrestricted \$menuentry_id_option 'gnulinux-simple-$boot_device_id' {" | sed "s/^/$submenu_$
      ---
      > echo "menuentry '$(echo "$os" | grub_quote)' ${CLASS} \$menuentry_id_option 'gnulinux-simple-$boot_device_id' {" | sed "s/^/$submenu_indentation/"
      323c323
      < echo "submenu --unrestricted '$(gettext_printf "Advanced options for %s" "${OS}" | grub_quote)' \$menuentry_id_option 'gnulinux-advanced-$boot_device_$
      ---
      > echo "submenu '$(gettext_printf "Advanced options for %s" "${OS}" | grub_quote)' \$menuentry_id_option 'gnulinux-advanced-$boot_device_id' {"

    tl;dr: I'd love to see a simple solution for GRUB2 entries that cannot be modified without a password or are limited to certain users. (Yes, GRUB_DISABLE_RECOVERY="true" is active.)

    Read the article

  • Alternatives to Pessimistic Locking in Cluster Applications

    - by amphibient
    I am researching alternatives to database-level pessimistic locking to achieve transaction isolation in a cluster of Java applications going against the same database. Synchronizing concurrent access in the application tier is clearly not a solution in the present configuration, because the same database transaction can be invoked from multiple JVMs concurrently. Currently, we are subject to occasional race conditions which, due to the optimistic locking we have in place via Hibernate, cause a StaleObjectStateException and data loss. I have a moderately large transaction within the scope of my refactoring project. Let's describe it as updating one top-level table row and then making various related inserts and/or updates to several of its child entities. I would like to ensure exclusive access to the top-level table row and all of the children to be affected, but I would like to stay away from pessimistic locking at the database level, mostly for performance reasons. We use Hibernate for ORM. Does it make sense to stand up a single (perhaps synchronous) message queue application into which this method could be moved to ensure synchronized access, as opposed to each cluster node using its own, which is a clear race-condition hazard? I am mentioning this approach even though I am not confident in it, because both the top-level table row and its children could also be updated by other system calls, not just the mentioned transaction. So I am seeking to design a solution where the top-level table row and its children will all somehow be pseudo-locked (exclusive transaction isolation), but at the application level and not the database level. I am open to ideas and suggestions; I understand this is not a very cut-and-dried challenge.

    Read the article

  • Serial converter - cat /dev/ttyUSB0 hangs on open

    - by Alex
    I am using Ubuntu 11.04 and attached a Garmin data cable. The device gets recognized:

      [17718.502138] USB Serial support registered for pl2303
      [17718.502181] pl2303 2-1:1.0: pl2303 converter detected
      [17718.513416] usb 2-1: pl2303 converter now attached to ttyUSB0
      [17718.513443] usbcore: registered new interface driver pl2303
      [17718.513446] pl2303: Prolific PL2303 USB to serial adaptor driver

    ... but when I run strace cat /dev/ttyUSB0 it hangs on the open call and does not continue:

      open("/dev/ttyUSB0", O_RDONLY|O_LARGEFILE

    If I do the same on Ubuntu 12.04 it stops on the read(...) call, which is okay, as there is currently no data coming in on this port. I am not sure whether it is just a different configuration of the system or a driver-related problem. How can I track this down further? Unfortunately I cannot update the old Ubuntu 11.04 system, for various reasons, at the moment.
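
    One common reason for open() on a serial device to block is that, without CLOCAL set, the open waits for the modem's carrier-detect (DCD) line, which a bare data cable may never assert. A hedged check (the flags are standard stty/termios options; the baud rate is only a guess):

      stty -F /dev/ttyUSB0 clocal cread raw 9600   # ignore modem-control lines on this port
      cat /dev/ttyUSB0                             # should now open and sit waiting for data

    If that changes nothing, comparing the default line settings on the two systems with stty -F /dev/ttyUSB0 -a is a cheap next step.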

    Read the article

  • Setting up Kerberos SSO in Windows 2008 network

    - by Arturs Licis
    We recently introduced Kerberos (SPNEGO) single sign-on in our web portal and tested it on a Windows network with a Windows 2003 domain controller. Now, trying to test it on a network controlled by Windows 2008 R2, SSO just doesn't work due to defective tokens. Up to this point I was fairly sure that something was wrong with the environment and that these were NTLM tokens. We double-checked the IE settings etc., but nothing helped. Then we enabled the following settings for both users (the one logged on to the client test machine, and the one used as the service principal): "This account supports Kerberos AES 128 bit encryption" and "This account supports Kerberos AES 256 bit encryption" ... and the error message changed to: GSSException: Failure unspecified at GSS-API level (Mechanism level: Encryption type AES256 CTS mode with HMAC SHA1-96 is not supported/enabled). It makes me think that Internet Explorer sends Kerberos tokens after all, and that there's just some configuration missing, or that ktpass.exe was executed incorrectly. Here's how ktpass.exe was invoked:

      C:\> ktpass /out portal1.keytab /mapuser USER /princ HTTP/[email protected] /pass *
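
    The "AES256 CTS mode with HMAC SHA1-96 is not supported/enabled" message comes from the Java side, and two things commonly need to happen together once AES-256 is enabled on the account: the JVM running the portal needs the JCE Unlimited Strength Jurisdiction Policy files installed (standard JREs limit AES to 128 bit), and the keytab has to be regenerated so it actually contains an AES key. A hedged re-run of the command, with the principal written as a placeholder (/ptype and /crypto are standard ktpass switches):

      ktpass /out portal1.keytab /mapuser USER /princ HTTP/<portal-host-fqdn>@<REALM> ^
             /ptype KRB5_NT_PRINCIPAL /crypto AES256-SHA1 /pass *

    Alternatively, restricting both the account and the keytab to AES128 or RC4-HMAC avoids the unlimited-strength policy requirement.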

    Read the article

  • Accenture Foundation Platform for Oracle (AFPO)

    - by Lionel Dubreuil
    The Accenture Foundation Platform for Oracle (AFPO) is a pre-built, tested reference application, common services framework and development accelerator for Oracle’s Fusion Middleware 11g product suite that can help reduce development time and cost by up to 30 percent. AFPO is a unique accelerator that includes documentation, day-one deliverables and quick-start virtual machine images, along with access to a skilled team of resources, to reduce risk and cost while improving project quality. It can be delivered all at once or in stages, on-site, hosted, or as a cloud solution. Accenture recently released AFPO v5 for use with their clients. Accenture added significant updates in v5, including Day 1 images and documentation for WebCenter and ADF Mobile that are integrated with 30 other Oracle Middleware products, which significantly reduces the services effort needed to stand these products up. AFPO v5 also features rapid configuration and implementation capabilities for SOA/BPM integrated with Oracle WebCenter Portal, Oracle WebCenter Content, Oracle Business Intelligence, Oracle Identity Management and Oracle ADF Mobile. AFPO v5 also delivers a starter kit for Oracle SOA Suite which builds upon the integration methodology, leading practices and extended tooling contained within the Oracle Foundation Pack. The combination of the AFPO starter kit and Foundation Pack jump-starts and streamlines Oracle SOA Suite implementation initiatives, helping to reduce the risk of deploying new technologies and making architectural decisions, so clients can ultimately reduce the cost, risk and time needed for an implementation. You'll find more information at Accenture's website (www.accenture.com/afpo), the YouTube AFPO telestration (http://www.youtube.com/watch?v=_x429DcHEJs), the press release and the brochure. Contacts: [email protected], Patrick J Sullivan (Accenture – Global Oracle Technology Lead), [email protected]

    Read the article
