Search Results

Search found 24382 results on 976 pages for 'tutor process procedure f'.


  • How do I read multiple lines from STDIN into a variable?

    - by The Wicked Flea
    I've been googling this question to no avail. I'm automating a build process here at work, and all I'm trying to do is get version numbers and a tiny description of the build, which may be multi-line. The system this runs on is OS X 10.6.8. I've seen everything from using cat to processing each line as necessary. I can't figure out what I should use and why. Attempts:

        read -d '' versionNotes

    Results in garbled input if the user has to use the backspace key. Also there's no good way to terminate the input, as ^D doesn't terminate and ^C just exits the process.

        read -d 'END' versionNotes

    Works... but still garbles the input if the backspace key is needed.

        while read versionNotes
        do
            echo " $versionNotes" >> "source/application.yml"
        done

    Doesn't properly end the input (because I'm too late to look up matching against an empty string).
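
    A minimal sketch of one common approach (not from the post; it assumes bash and keeps the source/application.yml target mentioned above): loop with read -r and stop at an empty line, which leaves line editing such as backspace to the terminal and gives a clean way to end the input.

        #!/bin/bash
        # Sketch: collect multi-line version notes until a blank line is entered.
        versionNotes=""
        echo "Enter version notes (finish with an empty line):"
        while IFS= read -r line; do
            [ -z "$line" ] && break        # an empty line ends the input
            versionNotes+="$line"$'\n'
        done
        printf '%s' "$versionNotes" >> source/application.yml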

    Read the article

  • How can I decrease the time spent reformatting / restoring user's workstations?

    - by CT
    I just started working for a medium-sized company (approx. 150 users). When users' workstations need to be reformatted for any of a variety of reasons, we reformat, reinstall Windows from an OEM disk, install drivers, install the shop's desired software, and restore the user's documents from the latest backup. While the process isn't very difficult, it is very time consuming. What are some options to simplify / speed up this process? We are mostly a complete Windows shop, with most servers running Win2k3 Enterprise and workstations running a variety of XP, Vista, and 7. Workstations are purchased through a variety of OEMs, mostly Dell.

    Read the article

  • Oracle E-Business Products New Search Helpers for Guided Resolution of Customer Issues

    - by user793044
    Oracle E-Business Proactive Support has created many new guided resolution documents that you may find helpful in resolving issues in your EBS applications. These new documents are called "Search Helpers" and they guide you through your issue to a solution. They are meant to be an easy and fast method of finding a relevant, complete solution. Hundreds of notes and service requests were reviewed and the best solutions to these known issues were selected. For some issues, notes were updated to better clarify the solution. In other cases, if a note with a solution did not already exist, one was created. You start the process by selecting the scenario you have encountered. You may have received an error message, or there may be a particular area of the application in which you have encountered an issue. Based on your selection of the issue, the Search Helper will present one or more additional possible symptoms. When you have selected from both of these two sections, you are then presented with one or more articles known to have fully solved this issue in the past. Several EBS products have produced Search Helper documents. Take a look at Doc ID 1501724.1 for an index of the current EBS Search Helpers. Here is an example of a Search Helper from the Receivables Transactions area: after selecting the Functional Area of "Entering / Updating Transactions", a list of Known Symptoms is presented, and when "Transaction numbers are not in sequence" is selected, a solution link is provided for Document ID 197212.1: How To Setup Gapless Document Sequencing in Receivables. The EBS applications that currently have published Search Helpers are: Advanced Pricing, Applications Technology, Configurator, General Ledger, Human Capital Management, Inventory Management, Order Management, Payables, Process Manufacturing, Purchasing, Receivables, Shipping, and Value Chain Planning.

    Read the article

  • How to run a script in Ubuntu via SSH as superuser?

    - by Irinotecan
    So I have a script that needs to be executed remotely as root. This isn't a problem with most Linux distros since they have a root account. But since Ubuntu does not, executing anything as root requires a 2-step process of entering the account password twice - once to log in and once for sudo. The SSH process to launch the script is automated, so it cannot pause for user input for the second password request. Does anyone know, short of hacking Ubuntu to re-enable root (not an option), if unattended SSH script execution with superuser privilege on the target machine is possible? Also, having no experience with Debian, does Debian behave this way too?
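
    One approach often used for this (a sketch, not from the post; the 'deploy' account name, the target host name, and the script path are hypothetical placeholders): log in over SSH with key authentication as an ordinary account and grant that account passwordless sudo for just the one script.

        # On the Ubuntu target, added with visudo (user and script path are examples):
        deploy ALL=(root) NOPASSWD: /usr/local/bin/build.sh

        # From the automating host, key-based login means no password prompts at all:
        ssh deploy@target 'sudo /usr/local/bin/build.sh'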

    Read the article

  • Is there a log showing why a Windows server did not restart SQL Server after a reboot?

    - by MerlinMags
    Our server was rebooted after a Windows Update scheduled for 1am, but after the restart SQL Server did not start up, so our websites could not be served. Usually this process happens with no manual intervention. Is there a log somewhere that might indicate why the Windows startup process did not start SQL Server again? I've looked in the Event Viewer (Application log) and in SQL's own file E:\MSSQL\MSSQL10.MSSQLSERVER\MSSQL\Log\ERRORLOG*, but these only contain records of successful startup operations... nothing mentions a failed attempt to start a service or anything like that.

    Read the article

  • Applying Service Pack 1 to Team Foundation Server 2010

    - by Enrique Lima
    Disclosure: I performed the following activities on my Windows 7 SP1 system, with Visual Studio 2010 SP1 and a local Basic installation of TFS 2010. As with any deployment of a service pack into a server environment, take your recommended precautions and be aware of the changes you are putting in. With that said, make sure you back up your databases and that you have an exit/rollback strategy in the event of an unexpected situation. Team Foundation Server 2010 Service Pack 1 corresponds to KB2182621. The KB article is http://support.microsoft.com/kb/2182621. The process is very simple to follow: you will need to execute the mu_team_foundation_server_2010_sp1_x86_x64_651711.exe file. That will extract the files needed and launch the wizard-driven installation. Once this process completes, you need to validate the changes. By looking at the Team Foundation Server 2010 Administration Console, you should see the reference to the KB number and SP1. There is also good reason to validate log locations and records, either from the Team Foundation Server 2010 Administration Console or from Windows Explorer: go to the C:\ProgramData\Microsoft\Team Foundation\Server Configuration\Logs location and review the logs referenced by the servicing references.

    Read the article

  • Why does my CPU Usage reach 100% too often?

    - by deathlock
    I'm using a dual-core processor and often see my CPU usage reach 100%. I realize this may happen if I'm running too many applications, so when I notice the computer starting to run slowly, I start to close applications. I usually run 4-5 applications simultaneously. Usually those are: a web browser (Google Chrome), Adobe Photoshop, Notepad++, XAMPP, and Windows Task Manager. Usually I close tabs in Chrome first, because I often browse the net with about 20 tabs / 4 windows open, so I presume that takes a lot of memory (bad habit, I know). But even after closing Chrome's tabs or closing other applications, my CPU usage often stays at a high percentage - 72% at best, 100% at worst. I check the Processes tab in Windows Task Manager and usually find System, System Idle Process, or services.exe taking the most CPU (it can reach 60). Why is this happening? And is there any solution? EDIT: I have a T2250 @ 1.73 GHz and 2.5 GB RAM.

    Read the article

  • How to pass parameters to a function?

    - by sbi
    I need to process an SVN working copy in a PS script, but I have trouble passing arguments to functions. Here's what I have:

        function foo($arg1, $arg2)
        {
            echo $arg1
            echo $arg2.FullName
        }

        echo "0: $($args[0])"
        echo "1: $($args[1])"

        $items = get-childitem $args[1]
        $items | foreach-object -process { foo $args[0] $_ }

    I want to pass $args[0] as $arg1 to foo, and $args[1] as $arg2. However, it doesn't work; for some reason $arg1 is always empty:

        PS C:\Users\sbi> .\test.ps1 blah .\Dropbox
        0: blah
        1: .\Dropbox
        C:\Users\sbi\Dropbox\Photos
        C:\Users\sbi\Dropbox\Public
        C:\Users\sbi\Dropbox\sbi
        PS C:\Users\sbi>

    Note: the "blah" parameter isn't passed as $arg1. I am absolutely sure this is something hilariously simple (I only just started doing PS and still feel very clumsy), but I have banged my head against this for more than an hour now, and I can't find anything.

    Read the article

  • Upgrading a hard disk – To repave or to migrate, that is the question

    - by guybarrette
    I recently changed my laptop hard disk from the stock 250GB 5400 RPM drive to a 320GB 7200 RPM drive. And no, I didn't buy an SSD drive, because the cost is way too high right now. At $70, my upgrade was a lot cheaper than an SSD drive. Maybe next year. When changing a system's main hard drive, one must ask himself: to repave or to migrate, that is the question. I chose to migrate, so I went to the Acronis website to take a look at their product line. They have a few products that could do the job, one being Acronis Migrate Easy 7.0 and the other being Acronis True Image Home 2010. Since True Image was just $10 more than Migrate Easy, I bought True Image. I inserted my new hard drive in a 2.5" USB enclosure and started the migration process. Once the data was copied, I switched the drives. The process went very smoothly and without hiccups. Highly recommended. BTW, Acronis offers free trials, so I guess that nothing can stop you from "testing" a migration ;-)

    Read the article

  • Does replacing Chrome's User Data folder with my own work without leaving any trace behind? Where else does Chrome write data outside of the User Data folder?

    - by Selin Peck
    Does replacing Chrome's User Data folder with my own work without leaving any trace behind? Where else does Chrome write data outside of the User Data folder? I used to start office work by removing Chrome's User Data folder, replacing it with my own User Data copied from my external drive, and saving the original User Data to another folder. Before leaving in the evening, I take back my own User Data and put the original User Data back where it was originally saved. Is this process advisable? Would I be safe this way, and if not, where else does Chrome save data outside of the User Data folder in AppData? Also, how does this process work in Mozilla Firefox?

    Read the article

  • Dual boot Ubuntu 12.10 and Linux Mint 13

    - by user101693
    I know this question has been asked many times, but I don't know what I should do in my case with those tutorials available everywhere. This is what my current situation looks like. Right now I'm using Linux Mint 13 Xfce installed with:

        500 MB for /boot
        2 GB for swap
        15 GB for /
        the rest of my space for /home, with no space left on my hard drive

    I just got a Ubuntu 12.10 live CD from my friend, and I intend to install it alongside my Linux Mint, choosing "Something else" in the installation process. My questions are:

    1. I want to use the same /home partition for Ubuntu and Linux Mint with the same user but different directories, because I don't want my configuration files to conflict with each other. For example, my username is Budiman and I want a directory named /home/budiman-Ubuntu for Ubuntu and /home/budiman-LinuxMint for Linux Mint. How can I do that?
    2. I read somewhere that I can share /boot and swap between multiple distros, is it true?
    3. How can I make another root (/) partition for Ubuntu since I don't have any space left on my hard drive? Can I resize the /home partition without losing my data? How can I do that if it's possible? Right now I've used 10-20% of my /home partition.

    I really hope somebody can help me with my question, if possible with a full tutorial starting from the "Something else" step until completion of the process. Thanks before :)

    Read the article

  • How to remove all associated files and configuration settings of an app installed through 'force architecture' command

    - by Mysterio
    A few weeks ago I installed a 32-bit .deb file through the 'force architecture' command (on my 64-bit notebook). However, the procedure was unsuccessful and I used the apt-get purge command to uninstall the app. It seems there are some leftovers of the app I uninstalled, which have now broken system update. Synaptic recommended a sudo apt-get install -f, which I did in the terminal with this initial response:

        Reading package lists... Done
        Building dependency tree
        Reading state information... Done
        The following package was automatically installed and is no longer required:
          libntfs10
        Use 'apt-get autoremove' to remove them.
        The following packages will be REMOVED:
          crossplatformui
        0 upgraded, 0 newly installed, 1 to remove and 0 not upgraded.
        1 not fully installed or removed.
        After this operation, 0 B of additional disk space will be used.
        Do you want to continue [Y/n]?

    I chose 'Y' and then got this response:

        (Reading database ... 187616 files and directories currently installed.)
        Removing crossplatformui ...
        ztemtvcdromd: no process found
        dpkg: error processing crossplatformui (--remove):
         subprocess installed post-removal script returned error exit status 1
        Errors were encountered while processing:
         crossplatformui
        E: Sub-process /usr/bin/dpkg returned an error code (1)

    It seems the app I installed, crossplatformui, is still on my system and has caused Update Manager to stop running with a partial upgrade warning. What do I do now?
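
    A commonly used sketch for this situation, not from the post (it assumes the standard dpkg maintainer-script location and should be run as root): the package's post-removal script is what fails, so neutralize it, finish the removal, then let apt clean up.

        # Back up the failing post-removal script, then replace it with a no-op.
        cp /var/lib/dpkg/info/crossplatformui.postrm /root/crossplatformui.postrm.bak
        printf '#!/bin/sh\nexit 0\n' > /var/lib/dpkg/info/crossplatformui.postrm
        chmod +x /var/lib/dpkg/info/crossplatformui.postrm
        dpkg --remove crossplatformui
        apt-get -f install
        apt-get autoremove        # drops the leftover libntfs10 mentioned above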

    Read the article

  • SIGINT and SIGTSTP ignored by most common applications

    - by Vašek Potocek
    After the last upgrade to my Fedora, strange behaviour started occurring in X terminal applications. I can't seem to stop any process using Ctrl+C; it just results in printing ^C to the console. Similarly, Ctrl+Z prints ^Z and the process goes on. Both work well in non-graphical virtual consoles. I checked stty -a and it seems perfectly normal:

        speed 38400 baud; rows 24; columns 80; line = 0;
        intr = ^C; quit = ^\; erase = ^?; kill = ^U; eof = ^D; eol = M-^?;
        eol2 = M-^?; swtch = M-^?; start = ^Q; stop = ^S; susp = ^Z; rprnt = ^R;
        werase = ^W; lnext = ^V; flush = ^O; min = 1; time = 0;
        -parenb -parodd cs8 hupcl -cstopb cread -clocal -crtscts
        -ignbrk brkint -ignpar -parmrk -inpck -istrip -inlcr -igncr icrnl ixon -ixoff
        -iuclc ixany imaxbel iutf8
        opost -olcuc -ocrnl onlcr -onocr -onlret -ofill -ofdel nl0 cr0 tab0 bs0 vt0 ff0
        isig icanon iexten echo echoe echok -echonl -noflsh -xcase -tostop -echoprt
        echoctl echoke

    This is independent of the terminal (gnome-terminal, XFCE4 terminal, xterm). I later noticed that it may not be caused by the terminal at all: INT or TSTP sent directly to the respective process are ignored, too. This affects various applications I used to terminate with Ctrl+C on a regular basis (and which often don't have any better means of exiting): cat, find, tail -f, java, ping, mplayer when stuck on a broken file... Even bash ignores Ctrl+C when I want to break a command line I have been entering and then changed my mind (no ^C is printed in this case). I need to delete it character by character (of which there may be hundreds if filename completion has been used) or intentionally run the unwanted command. Strangely enough, vim does recognize Ctrl+C, just to say "use :quit", of course. This is extremely annoying and prevents me from working efficiently. Everything had been working until lately, maybe a week ago or so. I cannot find any possible causes with Google; perhaps I'm trying the wrong search terms or misidentifying the main problem. What could it be and how could I revert to the standard behaviour, please?

    Update: Ctrl+Z works sometimes. It seems that in the very first terminal I launch after logging in it stops the running command, but it stops working after that.
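
    A small diagnostic sketch, an assumption of mine rather than something from the post: check whether an affected process is really ignoring or blocking the signals by reading its signal masks from /proc.

        # Each signal n corresponds to bit n-1 of the hex masks below
        # (SIGINT is signal 2, SIGTSTP is signal 20).
        pid=$(pgrep -n tail)                          # example: the newest 'tail' process
        grep -E 'Sig(Blk|Ign|Cgt)' /proc/$pid/status  # blocked / ignored / caught masks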

    Read the article

  • Creating a Scheduled Task that runs forever on Windows XP

    - by Mike Fiedler
    When I create a scheduled task, I do so via the command line:

        schtasks.exe /Create /TN "startup-script" /TR "C:\startup.bat" /RU taskuser /RP taskpasswd /SC ONLOGON

    The idea is that this task runs forever. The batch file opens a java process that is never meant to end. I've used ONLOGON, as the machine auto-logs in as taskuser. All this works fine for about 72 hours, after which the Duration flag kicks in and ends the process. Windows XP doesn't have the /DU flag on the command line - is there an alternative method of creating a task that is meant to run from system startup (doesn't even require logon) and runs forever, without touching a GUI?

    Read the article

  • Determine $DISPLAY socket name on OS X 10.6?

    - by Nate
    I'm looking to do something that's a little odd. I'm SSH'ing from a server to a Snow Leopard client to start an X11 data display process. In other words, SSH's X11 forwarding isn't what I want. I can do:

        client$ echo $DISPLAY
        /tmp/launch-SOMETHING/org.x:0
        client$ ls -l $DISPLAY
        srwx------ 1 myuser wheel 0 Dec 9 15:47 /tmp/launch-SOMETHING/org.x:0

    And, when I do:

        server$ ssh myuser@client
        client$ export DISPLAY=/tmp/launch-SOMETHING/org.x:0
        client$ xterm

    I happily get my xterm. What I need, then, is some way to find out the correct value for $DISPLAY in my ssh session. From what I've read, $DISPLAY is set by launchd, but I haven't found any way to see that value. If it matters, I know that when my process connects from $server to $client, $client will be logged in to the terminal as the same user.
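
    A small sketch based on the paths shown above (it assumes the socket always matches the /tmp/launch-*/org.x* pattern): glob for it from the ssh session and take the first one owned by the current user.

        # Locate the launchd X11 display socket and export it.
        for s in /tmp/launch-*/org.x*; do
            if [ -O "$s" ]; then        # -O: owned by the effective user
                export DISPLAY="$s"
                break
            fi
        done
        echo "DISPLAY=$DISPLAY"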

    Read the article

  • iotop for Linux kernel 2.6.18

    - by Lightsauce
    So it has come to my attention that iotop isn't available for 2.6.18, since that is less than 2.6.20 and iotop requires Python 2.6+. I've done some research and came across this article: http://lserinol.blogspot.com/2009/09/io-usage-per-process-on-linux.html According to this, if processes have io stats in /proc/pid#/io (where pid# is the process #), it's doable regardless of the kernel version. So, in reality, I could upgrade Python to 2.6 and test out iotop. However, my flavor of Linux, CentOS release 5.5 (Final), only supports Python 2.4.3-44.el5 currently. If I were to uninstall it with yum, it doesn't look so pretty: it ends up wanting to uninstall 235 packages, most of which are very important! I read in one place online (I forget the URL from yesterday) that you can install Python 2.6+ in parallel with this one and have the rpm install for iotop use that. Well, I didn't choose that route.

    I figured, what the heck, let's write iotop (not copying it, but reverse engineering it without actually looking at its code or seeing it in use) in bash. I thought it would just grab the /proc/pid#/io file and parse stats. So I wrote a script to grab the top 10 rchar, wchar, read_bytes, and write_bytes values by collecting all these stats from all the /proc/pid#/io files, sorting them by each metric, then grabbing the top 10 highest values. The conclusion: the data seems completely useless.

    Does anybody know any resources for advanced Linux where I can figure out how to take these /proc/pid#/ directories and figure out what the heck they are doing with io on the disk? My main goal is to figure out what exactly is causing high load on my disk. I just know it's on the / partition (/dev/sda2 in this case), and I'm not really sure how to narrow it down without the help of iotop. If I run iostat to grab metrics for 1 minute, every second, the first result it gives me shows a high 'kB_read/s', so that makes me think it's mostly reading. However, if I watch the update it gives me every second, it's actually just showing values for kB_wrtn/s. This makes me think the initial value iostat gives me is misleading.
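
    A rough sketch of the delta idea, not the asker's script (it assumes the standard read_bytes/write_bytes fields in /proc/<pid>/io): the counters are cumulative since each process started, so sampling twice and diffing is what makes them meaningful.

        #!/bin/bash
        # Per-process disk I/O over a sampling interval, avoiding bash 4 features.
        interval=${1:-5}
        snap() {    # print "<pid> <read_bytes + write_bytes>" for every process
            for f in /proc/[0-9]*/io; do
                pid=${f#/proc/}; pid=${pid%/io}
                awk -v p="$pid" '/^(read_bytes|write_bytes):/ {s+=$2} END {print p, s+0}' "$f" 2>/dev/null
            done
        }
        t1=$(mktemp); t2=$(mktemp)
        snap | sort > "$t1"
        sleep "$interval"
        snap | sort > "$t2"
        # join the two snapshots on pid and print the largest deltas first
        join "$t1" "$t2" | awk '{d = $3 - $2; if (d > 0) print d, "bytes, pid", $1}' | sort -rn | head
        rm -f "$t1" "$t2"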

    Read the article

  • ssh freezes when trying to connect to some hosts

    - by NS Gopikrishnan
    When I try to ssh to particular machines in a list, the SSH command freezes. I tried setting an ssh timeout, but it still freezes even after the timeout. In verbose mode:

        OpenSSH_3.9p1, OpenSSL 0.9.7a Feb 19 2003
        debug1: Reading configuration data /etc/ssh/ssh_config
        debug1: Applying options for *
        debug1: Connecting to x358.x.server.com [10.x.x.x] port 22.
        debug1: fd 3 clearing O_NONBLOCK
        debug1: Connection established.
        debug1: identity file /export/home/sqlrpt/.ssh/identity type -1
        debug1: identity file /export/home/sqlrpt/.ssh/id_rsa type -1
        debug1: identity file /export/home/sqlrpt/.ssh/id_dsa type 2

    At this point it freezes. A workaround I thought of was to create a child process for each ssh call and, if the process doesn't respond after a timeout, kill it. But are there any less complex ways, so that I can accommodate it in a shell script itself rather than going for a C/C++ program?
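
    A sketch of the watchdog idea in plain shell, in case it helps (the hosts.txt list, the uptime command and the 20-second limit are placeholders of mine):

        #!/bin/bash
        # Run one ssh per host with a hard wall-clock limit, no C/C++ helper needed.
        LIMIT=20
        while read -r host; do
            ssh -o BatchMode=yes "$host" uptime &
            sshpid=$!
            ( sleep "$LIMIT" && kill -9 "$sshpid" 2>/dev/null ) &   # watchdog
            wdpid=$!
            wait "$sshpid"                                          # returns when ssh exits or is killed
            kill "$wdpid" 2>/dev/null                               # cancel the watchdog if ssh finished in time
        done < hosts.txt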

    Read the article

  • Looking for an example of how a software project can be managed/deployed

    - by rguilbault
    My company is evaluating adopting off-the-shelf ALM products to aid in our development lifecycle; we currently use our own homegrown solutions to manage requirements gathering, specification documentation, testing, etc. One of the issues I am having is understanding how to move code between stages of development. We have what we call a pipeline, which consists of particular stops: [Source] - [QC] - [Production] At the first stop, the developer works out a solution to some requested change and performs individual testing. When that process is complete (and peer review has been performed), our ALM system physically moves the affected programs from the [Source] runtime environment to the [QC] runtime environment. This movement of code is triggered by advancing the status of the change request to match the stage of the pipeline. I have been searching the internet for a few days trying to find how the process is accomplished elsewhere -- I have read a bit about builds, automated testing, various ALM products, etc. but nowhere does any of this state how builds interact with initial change requests, what the triggers are, how dependencies are managed, how the various forms of testing are accommodated (e.g. unit testing, integration testing, regression testing), etc. Can anyone point me to any resources detailing specific workflows or attempt to explain (generically) how a change could/should be tracked and moved though the development lifecycle? I'd be very appreciative. Note: I've cleaned up the question to hopefully make it easier to understand. Also, I found another question (which I can't find now) that referenced this book, which sounds like it might be exactly what I am looking for -- not sure if I want to shell out the cash for it, though.

    Read the article

  • Tracing what program is making a network connection? (CentOS)

    - by Airjoe
    I was wondering if it is possible to find out which process is trying to make a specific network connection. On a server I support, which hosts websites for about 200 users, the iptables firewall keeps blocking, as it should, a connection to 212.117.169.139 on port 80. Firefox reports this as an attack page (and at the least it is obvious spam, if not malicious). It seems something on this server is trying to access this site for some reason, and although it's being blocked successfully, the requests keep being attempted every two to sixty seconds, and I'd like to be able to find what process or script is doing this so I can handle it appropriately. Besides doing a grep to try and find whether this IP is in some file (which probably won't even work, because it may be working by hostname or it may be encoded), is there any way to find out some more information? Thanks!
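
    One low-tech sketch, not from the post (run it as root so the owning PID/program is shown; polling only catches attempts that live long enough to appear in the connection table):

        #!/bin/bash
        # Poll for any socket to the suspicious address and report its owner.
        target=212.117.169.139
        while true; do
            netstat -tnp 2>/dev/null | grep "$target" && break
            sleep 1
        done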

    Read the article

  • How do I get rid of sockets in FIN_WAIT1 state?

    - by Gert M
    I have a port that is blocked by a process I needed to kill (a little telnet daemon that crashed). The process was killed successfully, but the port is still in a FIN_WAIT1 state. It doesn't come out of it; the timeout for that seems to be set to 'a decade'. The only way I've found to free the port is to reboot the entire machine, which is of course something I do not want to do.

        $ netstat -tulnap | grep FIN_WAIT1
        tcp        0  13937 10.0.0.153:4000        10.0.2.46:2572        FIN_WAIT1   -

    Does anyone know how I can get this port unblocked without rebooting?
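
    A hedged sketch of one commonly suggested knob, not a guaranteed fix: with data still queued (note the 13937 in the Send-Q column above) the kernel keeps retransmitting, and the net.ipv4.tcp_orphan_retries sysctl bounds how long such orphaned sockets are retried. Check your kernel's ip-sysctl documentation before changing it.

        sysctl net.ipv4.tcp_orphan_retries        # 0 means the kernel's built-in default
        sysctl -w net.ipv4.tcp_orphan_retries=2   # give up on orphaned sockets sooner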

    Read the article

  • Get Zipped Logs from a Remote Server

    - by Jonathan
    I am tasked with finding a way to download zipped logs from a remote server. There are quite a lot of these logs and they are constantly being created. I do have limited ssh access to the remote server and can scp or rsync the files. However, due to the sheer size of these log files, I do not want to rsync all of them. The logs could get to terabytes, and for rsync to compare them may take some time. I only want to get any new file that was created or last updated within the last hour. I am also worried that I will rsync logs that are in the process of being created, so I was thinking of only rsyncing files that were last modified at least 3-5 minutes ago. Would anyone be so kind as to help me with such a process? Thank you in advance.
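
    A sketch under those assumptions (the user, host, remote path, local path and the *.gz pattern are placeholders): build the file list remotely with find's -mmin tests and feed it to rsync, so only files modified within the last hour but untouched for at least five minutes get copied.

        #!/bin/bash
        REMOTE=user@loghost
        SRC=/var/log/archive
        DEST=/data/logs
        # find: modified less than 60 minutes ago, but NOT less than 5 minutes ago.
        ssh "$REMOTE" "cd $SRC && find . -type f -name '*.gz' -mmin -60 ! -mmin -5 -print0" \
            | rsync -av --from0 --files-from=- "$REMOTE:$SRC/" "$DEST/"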

    Read the article

  • Migrating Gmail to Office 365

    - by user218699
    Good Morning, I have been setting up Office 365 for my organization. We are currently using Gmail. I have synced our local Active Directory server w/ Office 365, as well as our domains. The problem I am having has to do with migrating mailboxes from Gmail to Office 365. I have been using this article to walk me through the process: http://technet.microsoft.com/en-us/library/dn568114.aspx The issue arises when I begin to sync the mailboxes. Currently I have been trying to sync my own mailbox as a test. The synchronization process has been going on for about 15 hours (for just one mailbox) with no errors or any information given by Office 365, other than the "Syncing" status on the migration page in the Exchange Admin Center. Is syncing a single mailbox supposed to take this long, or have I missed a step? Thanks!

    Read the article

  • Have I created the recovery disk from recovery partition correctly?

    - by Tim
    I was creating recovery disks from the recovery partition on my Lenovo T400 with Windows 7; 6.5 GB of the recovery partition is occupied. In the process, I created three DVDs. I might remember wrong, but the wizard called the first two DVDs disk 1, and the third one disk 2. Only 0.22 GB was written to the first one. Following is the content of that DVD (right-click the image and select to view it at a bigger size): The second one had 3.97 GB written, as follows: The third one had 2.44 GB written, as follows: I am only allowed to create the recovery disks once, so I cannot try again. I was wondering if I missed something? How is the creation process supposed to go? Thanks and regards!

    Read the article

  • Windows 7 'All Programs' folders self-close on right-click

    - by Madmanguruman
    Odd issue on my Windows 7 Professional (32-bit) system. If I click the Start 'orb', navigate to 'All Programs', then navigate to any of the folders that appear at the bottom of the list (like Accessories) and left-click, the folder contents expand with no issue. If I right-click, the context menu appears for half a second or so, then the popup goes away and the entire Start menu closes. I'm not sure how to debug this issue - I'm considering using Autoruns to try to disable things hooked into the shell one by one. Is there a way to use a tool like Process Explorer to narrow down the process that's actually dismissing the menus?

    Read the article

  • A Better Way to Plan, Execute and Manage Enterprise Architecture

    - by JuergenKress
    IT Strategies from Oracle is an authorized library of guidelines and reference architectures that will help you better plan, execute, and manage your enterprise architecture and IT initiatives. The IT Strategies from Oracle library offers two types of best practice documents: practitioner guides containing pragmatic advice and approaches, and reference architectures containing the proven technology patterns to jumpstart your initiative. The IT Strategies from Oracle library can help you establish a reliable set of principles and standards to guide your use of Oracle technology. We will expand this library over time across all of Oracle's technologies. Today, you can access:

    - Overview documents providing an introduction to all the resources available in the library and best practices maturity models
    - Oracle Reference Architectures covering the application infrastructure foundation, management and monitoring, security, software engineering, service-oriented integration, service orientation, user interaction, engineered systems, and a master glossary
    - Enterprise Technology Strategies for Service-Oriented Architecture offering practitioner guides on creating a SOA roadmap, frameworks for governance, determining ROI, identifying services, software engineering, and white papers
    - Enterprise Technology Strategies for Event-Driven Architecture offering practitioner guides on creating an EDA roadmap and reference architectures on an EDA foundation and EDA infrastructure
    - Enterprise Technology Strategies for Business Process Management including practitioner guides on creating a BPM roadmap, business process engineering, governance, and reference architectures on a BPM foundation and BPM infrastructure
    - Enterprise Technology Strategies for Cloud Computing including reference architectures on a Cloud foundation and Cloud infrastructure
    - Enterprise Technology Strategies for Business Analytics including a practitioner guide for creating a BA roadmap, and reference architectures for a BA foundation and BA infrastructure

    Get the Oracle Enterprise Architecture content here. For regular information on Oracle SOA Suite, become a member of the SOA & BPM Partner Community; for registration please visit www.oracle.com/goto/emea/soa (OPN account required). If you need support with your account please contact the Oracle Partner Business Center.

    Read the article
