Search Results

Search found 1821 results on 73 pages for 'bpm ec2'.

Page 15/73 | < Previous Page | 11 12 13 14 15 16 17 18 19 20 21 22  | Next Page >

  • Prevent Amazon EC2 Time zone from reverting back on yum update

    - by D.Tate
    I use an Amazon EC2 server instance that runs a distro called Amazon Linux AMI. (I've read that it is based on CentOS/Red Hat). My specific version is the 2012.09 release. Anyway, I was able to change the time zone about a week ago from the default UTC to America/New_York (which is EST/EDT). The command I used to change it was: ln -sf /usr/share/zoneinfo/America/New_York /etc/localtime ...thanks to this other Server Fault question. At that point, I was able to run date from the command line, and it correctly displayed the EDT time. And even after EDT "fell back" to EST this past Sunday, I was pleased to find that running date still produced the correct local time. So that was great. However, after running a yum update yesterday, it seems that my time zone got reverted back to plain ol' UTC. I even checked the last modified time of the /etc/localtime file, and indeed it confirmed that it had been modified around the same time I had updated. Is there any way to prevent this from happening again, or will I be stuck resetting the time zone every time I do a yum update?
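
    On Amazon Linux (and other RHEL-style distros) a package update can regenerate /etc/localtime from the ZONE value stored in /etc/sysconfig/clock, so one hedged way to make the change stick is to set that value as well and copy the zone file rather than rely on the symlink alone. A minimal sketch, assuming the stock file layout:

      # persist the zone so updates that rewrite /etc/localtime use the right value
      sudo sed -i 's|^ZONE=.*|ZONE="America/New_York"|' /etc/sysconfig/clock
      # copy (rather than only symlink) the zone file into place
      sudo cp /usr/share/zoneinfo/America/New_York /etc/localtime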

    Read the article

  • Map a URL bought with Dreamhost to Amazon EC2 (AWS)

    - by Edan Maor
    I have several URLs I purchased through Dreamhost. I'm starting to use Amazon's AWS, and I'd like to map the URLs to Amazon. This is something of a silly question, and I've already done the same thing several times to other services (mapping from Dreamhost to WebFaction). But for some reason when I tried to find the proper way to do the same mapping to Amazon, I found a lot of detailed writing talking about whether I should be using CNAME or A records, etc. So I wanted to ask in the simplest possible terms and hopefully get a simple, concrete answer: I bought a URL from Dreamhost, I have an EC2 server running on AWS (to which I already mapped an Elastic IP address). How do I make the URL map to AWS? And if there are several options, which one should I effectively be using? P.S. Meta-question - why are things so much more difficult with AWS? When I search Google for "Move from Dreamhost to WebFaction", I get very simple answers on how to do the mapping. In what way is AWS different?
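
    In broad strokes the usual answer is: in the Dreamhost DNS panel, point an A record for the bare domain at the Elastic IP, and a CNAME for www at the bare domain. Once the records propagate they can be checked from any machine with dig; the domain and IP below are hypothetical placeholders:

      dig +short example.com A          # should return your Elastic IP, e.g. 203.0.113.10
      dig +short www.example.com CNAME  # should return example.com.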

    Read the article

  • EC2 configuration for medium load service on Django

    - by Luberg
    I have created a very basic Django application which puts an email into the database (a coming soon page for a startup). I launched a t1.micro instance to try out how much load it can handle. Nginx + FastCGI from Django + SQLite/Postgres - tried both. A blitz.io test gave me a pretty unhappy result (just 100 users within 1 minute): This rush generated 542 successful hits in 1.0 min and we transferred 809.01 KB of data in and out of your app. The average hit rate of 8.81/second translates to about 761,612 hits/day. You got bigger problems though: 87.28% of the users during this rush experienced timeouts or errors! I tried putting Varnish in front, disabled Debug mode in Django and started FastCGI in threaded mode - nothing helps. This is not gonna be a super high-load page - just a coming soon page to save emails of subscribers, but it should carry at least 500-1000 users at the same time at peak... I believe t1.micro is super small for that, but I have also tried a small instance - no better result. Please let me know whether I should use something different from Amazon EC2, pick something bigger than t1.micro, or whether this is definitely a configuration issue?
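
    One hedged way to tell a configuration problem from an undersized instance is to load-test from a second machine while watching CPU on the instance itself; t1.micro instances are throttled after short CPU bursts, which often shows up as exactly this kind of timeout pattern. A rough sketch (the URL is a placeholder):

      ab -n 500 -c 50 http://your-ec2-host/   # ApacheBench: watch "Failed requests" and per-request times
      top                                      # on the instance during the test; a large "st" (steal) value suggests micro throttling, not misconfiguration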

    Read the article

  • Hadoop streaming job on EC2 stays in "pending" state

    - by liamf
    Trying to experiment with Hadoop and Streaming using the Cloudera distribution CDH3 on Ubuntu. Have valid data in hdfs:// ready for processing. Wrote a little streaming mapper in Python. When I launch a mapper-only job using: hadoop jar /usr/lib/hadoop/contrib/streaming/hadoop-streaming*.jar -file /usr/src/mystuff/mapper.py -mapper /usr/src/mystuff/mapper.py -input /incoming/STBFlow/* -output testOP Hadoop duly decides it will use 66 mappers on the cluster to process the data. The testOP directory is created on HDFS. A job_conf.xml file is created. But the job tracker UI at port 50030 never shows the job moving out of "pending" state and nothing else happens. CPU usage stays at zero. (the job is created though) If I give it a single file (instead of the entire directory) as input, same result (except Hadoop decides it needs 2 mappers instead of 66). I also tried using the "dumbo" Python utility and launching jobs using that: same result: permanently pending. So I am missing something basic: could someone help me out with what I should look for? The cluster is on Amazon EC2. Firewall issues maybe: ports are enabled explicitly, case by case, in the cluster security group.
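
    A job that sits in "pending" forever usually means the JobTracker sees no live TaskTrackers to schedule it on, so a reasonable first check (commands and log paths are approximate for CDH3) is to confirm the trackers have registered and to read the JobTracker log:

      hadoop job -list                                      # jobs known to the JobTracker
      hadoop job -list-active-trackers                      # should show one entry per TaskTracker node
      sudo tail -n 100 /var/log/hadoop*/hadoop-*-jobtracker-*.log   # look for trackers failing to connect

    If no trackers show up, the usual EC2 culprit is the security group: the cluster nodes need to reach each other on Hadoop's internal ports (or simply allow all traffic within the group).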

    Read the article

  • File permission woes on an Ubuntu ec2 instance

    - by Pardoner
    I've set up an Amazon EC2 instance and I'm having some file permission issues. I've created myself a new user and added myself to the following groups: adm:x:4:me,ubuntu sudo:x:27:me www-data:x:33:me,www-data ssh:x:108:me admin:x:111:me ubuntu:x:1000:www-data,me me:x:1001:me but when I cd /var/www I can't do simple commands without doing sudo. So I chown -R www-data:www-data /var/www to ensure that I'm in the owning group but I still have to type sudo for everything. If I sudo su www-data it works fine. Since I'm in the www-data group shouldn't I have the same privileges as www-data? One strange thing I'm noticing is that when I ls -l it lists the owner but not the group names. Could this possibly be part of the issue? Is it possible for a directory to not be part of a group? drwxr-xr-x 4 www-data 4.0K Oct 24 16:39 . drwxr-xr-x 14 root 4.0K Oct 10 16:58 .. drwxrwxr-x 9 www-data 4.0K Oct 23 04:03 admin.mywebsite.com drwxrwxr-x 2 www-data 4.0K Oct 4 00:29 mywebsite.com drwxrwxr-x 9 www-data 4.0K Oct 23 04:03 staging.mywebsite.com Edit: It appears I had an alias messing with my ls command. By calling \ls -l I can see that all my files are in the correct group.
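
    Two things commonly bite here, sketched below: group membership is only picked up at login, so a freshly added group needs a new session (or newgrp), and the group still needs the write bit on the directories in question:

      id                          # verify www-data actually appears in the groups of your *current* session
      newgrp www-data             # or log out and back in to pick up the new membership
      sudo chmod -R g+w /var/www  # the drwxrwxr-x entries above already have group write; the parent /var/www (drwxr-xr-x) does not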

    Read the article

  • Amazon EC2: Instances, IPs and a wordpress blog (LAMP)

    - by JustinXXVII
    I had a link to my blog posted on Reddit yesterday and MySQL crashed on my EC2 Micro instance. I know I didn't have that many visitors because I used a marketing link that tracks hits. The link got 167 hits over the course of the last 18 hours, and MySQL crashed twice. So anyway, 167 visits is not a lot, so I've done some short-term optimizations like restricting the number of Apache threads to limit the MySQL calls. I also set up WP Super Cache to serve static content. Soon I'm going to offload all of my images to S3 or CloudFront. So this leads me to my question. If this doesn't seem to help, and if I have another traffic "spike", how do AMIs work when you have a MySQL database? I think I understand that if you have more than one instance and assign the same Elastic IP to both of them, the incoming traffic gets distributed among both. But what happens when the MySQL database gets updated on one of the instances? I just need to wrap my mind around what happens when I create an AMI and then launch a new instance to help with traffic. Thanks for your suggestions.
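
    On a micro instance the usual failure mode is MySQL being killed by the kernel's OOM killer, since the instance has very little RAM and no swap by default; a commonly suggested mitigation before any scaling work (sizes and paths below are just an example) is to add a small swap file:

      sudo dd if=/dev/zero of=/var/swapfile bs=1M count=1024   # 1 GB swap file
      sudo chmod 600 /var/swapfile
      sudo mkswap /var/swapfile
      sudo swapon /var/swapfile
      # add "/var/swapfile none swap sw 0 0" to /etc/fstab so it survives reboots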

    Read the article

  • EC2 kernel decision and issues with creating a new machine with my AMI

    - by roacha
    I could really use some advice. I started a new instance on EC2 using Amazon's AMI and during the deployment process I selected a Kernel ID of "Use Default". I then configured my server the way that I wanted to and took a snapshot of it. I then created my own AMI to create new servers with. When I try to create a new server with this AMI, the server fails to start and I get the error: EXT3-fs: sda1: couldn't mount because of unsupported optional features (240), which appears to happen because I am selecting a kernel ID of "Use default" again when building my second server. I have read that in order for this to work I need to choose the same kernel ID that was used in my original server. I have deleted my original server and don't know what it was using. What is the best process to follow in order to not have these issues? Should I choose "Use Default" for my original server? How do you know which kernel it selected? Then should I just document this and always specify this during the deployment of my next servers using my custom AMI? OR should I choose a custom kernel ID during the initial build and always use this one moving ahead, hoping Amazon never retires it? Thanks for any advice!
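
    The kernel (AKI) an instance or image was registered with can be read back before anything is deleted; with the current AWS CLI the lookup looks roughly like this (the IDs are placeholders):

      aws ec2 describe-instances --instance-ids i-0123456789abcdef0 --query 'Reservations[0].Instances[0].KernelId'
      aws ec2 describe-images --image-ids ami-12345678 --query 'Images[0].KernelId'

    Registering the custom AMI with that same AKI avoids the mismatch; recording the AKI alongside the AMI ID is enough to repeat the deployment consistently.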

    Read the article

  • Replicated filesystem and EC2 MySQL

    - by El Yobo
    I'm currently investigating migrating our infrastructure over to run on Amazon's EC2 and am trying to figure out the best way to set up a MySQL service. I'm leaning towards running our own MySQL instances, rather than going with Amazon's RDS, but am still considering the best approach for performance and cost on the instance itself. In order to have persistent data, the MySQL data needs to be on an EBS volume (with some form of striped RAID, e.g. RAID0 or RAID10) to improve persistence. However, EBS IO is limited by the network interface (gigabit, so a theoretical maximum of 128 MB/s), while the ephemeral volumes have no such problem. I did see a suggestion for running two MySQL servers on an instance, with a master running on the ephemeral disk (which we would also RAID) and a slave storing changes to an EBS volume, but this has some additional overhead and complexity (two servers). What I was imagining is using some form of replicated file system, such that I could have a filesystem on top of a RAID0 of ephemeral volumes to maximise performance, with all changes from the above immediately replicated to another RAID1 volume backed by multiple EBS volumes to ensure no data loss. The advantages of this would be: the best possible IO performance for the DB server, with no network delay in IO; decreased IO on the EBS volumes (as all read IO will be done on the ephemeral volumes) and so decreased cost; and good data security, as it's backed onto redundant EBS volumes. However, I haven't seen an appropriate system to replicate all changes from one volume to the other; is there a filesystem, or any other approach, which will do this? The distributed file systems, e.g. GlusterFS, DRBD etc., seem to focus on replicating disks between servers; can they be set up to do what I'm interested in here? I also haven't seen anything about others taking this approach. Do I have a solution in need of a problem here (i.e. is performance good enough, so this whole idea is redundant)? Is there some flaw in the plan?
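
    For what it's worth, the two arrays themselves are straightforward to build with mdadm; only the replication layer between them is the open question. A rough sketch (device names vary by instance type and are purely illustrative):

      # fast array: stripe the ephemeral disks
      sudo mdadm --create /dev/md0 --level=0 --raid-devices=2 /dev/xvdb /dev/xvdc
      # durable array: mirror two EBS volumes
      sudo mdadm --create /dev/md1 --level=1 --raid-devices=2 /dev/xvdf /dev/xvdg

    Whether DRBD or a replicating filesystem can then keep /dev/md0 and /dev/md1 in lock-step on the same host is exactly the question being asked here.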

    Read the article

  • ec2 spot instance for daily processing task

    - by chaft
    I don't have much experience as a sysadmin or with Amazon AWS, so I hope someone can explain in simple terms or refer me to a good guide on how to achieve the below. I have a system running on EC2 and Amazon RDS getting data in and saving it to the db. I need to run a script once a day (at the end of the day) to process all that data and prepare a daily report. This process will take approximately an hour to run. It needs to run on a high-memory instance. From what I've read so far, I guess the best way to do it is to have a high-memory spot instance run every day, set it up to execute the script on startup and shut down when done. Is that the right way to do it? If so, how do I do it? How do I tell the spot instance to run every day? Through a cron job on the other server, or is there a better way? How do I set it up to run the script on startup? Through cloud-init? Any help would be appreciated. One last thing, the job is not very time sensitive as long as it runs every day. Thanks
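
    One plausible shape for this, sketched below with the current AWS CLI (all names, prices and paths are hypothetical), is a nightly cron job on the always-on server that requests a spot instance whose user data runs the report and then shuts the machine down:

      # /etc/cron.d/nightly-report on the always-on server:
      #   0 23 * * * reportuser /home/reportuser/bin/launch-report-instance.sh

      # launch-report-instance.sh
      aws ec2 request-spot-instances --spot-price "0.10" --instance-count 1 \
          --launch-specification file:///home/reportuser/report-spec.json
      # report-spec.json names a high-memory instance type and user data that runs
      # the processing script and ends with "shutdown -h now"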

    Read the article

  • BPM + SOA Governance Hands-On Workshops: 17 March Hannover, 22 March Hamburg, 24 March Potsdam

    - by franziska.schneider(at)oracle.com
    Oracle hands-on workshop: discover the flexibility and power of the BPM Suite and the Enterprise Repository. Thanks to powerful and easier-to-use tools, business process modeling (BPM) and execution is becoming a sensible solution for more and more companies. An important aspect of this is the smooth interplay between the business departments and software development. The coordination between the business department, which models the processes, and the developers, who provide the services, can be managed through SOA governance methodologies. It does not always have to be a comprehensive governance model, but a certain degree of coordination is sensible. This hands-on workshop presents a workable middle path. At Oracle's workshops you can exchange ideas with colleagues, see the latest technology demonstrated directly by Oracle experts, and take part in practical exercises. This event is right for you if you want to get started with modeling BPMN business processes using the Oracle BPM Suite, get to know the Oracle Enterprise Repository as a central management platform, and learn how to gain insight into the dependencies of your SOA and how to optimize the coordination between IT and the business side with tool support. Use this opportunity to make new contacts! Register here for this free event.

    Read the article

  • Simple Backup Strategy for Amazon EC2 instances / volumes?

    - by minerj
    You have entered Introductory Backups for Amazon EC2 EBS-backed Windows Images 010... I have been browsing my brains out to find a simple backup strategy for our single Windows 2008 server running SharePoint Services. This is an EBS-backed image of one server with one data volume. I don’t need anything exotic. I only need a “daily” backup (losing a day’s worth of data is not catastrophic). We have created and saved an EBS-backed AMI image (Windows 2008) we are comfortable using. We started off making backups by simply creating a new EBS AMI image. This is really simple, but the running server is put offline during the first 10 – 15 minutes of creating the image – not ideal. The standard way of creating backups would seem to be creating snapshots of volumes attached to a running instance. Again it’s pretty simple and the server remains usable during the snapshot generation. The apparent Catch-22 is that you can’t simply launch a new instance directly from a snapshot. I know how to bundle a running instance to S3 storage and then register the AMI from the S3 bucket. This does allow me to capture a backup of a running instance and, if the running instance is lost, register the AMI from the S3 bucket and launch the new AMI to recover the instance, but this seems really convoluted and it seems ridiculous to have to juggle back and forth between the AWS Console and the S3 Organizer plug-in for Firefox to get this accomplished. (Please don't mention the command line approach, this is an 010 level course). From playing around with EBS-backed images, the following approach appears to work for me (all done within the AWS Console): 1. For your backups, simply snapshot the system volume (/dev/sda1) as needed. 2. If you lose your running instance, do the following: a. Create a new volume from your last snapshot backup. b. Launch another instance of your starting AMI (must be EBS-backed). c. Stop this instance. d. Detach the existing system volume from the new stopped instance and discard. e. Attach the newly created volume as system volume (/dev/sda1) to the stopped instance. f. Re-start the new instance. I have tested this out a couple of times and it seems to work for me. Question: Is there anything wrong with this approach?
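
    For the "daily backup" part of step 1, the snapshot can be taken from the console or scripted; with the current AWS CLI a nightly snapshot of the system volume looks roughly like this (the volume ID is a placeholder):

      aws ec2 create-snapshot --volume-id vol-12345678 --description "nightly backup of /dev/sda1"
      # run from a scheduled task or cron job on any machine with credentials for the account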

    Read the article

  • Connectivity issues with dual NIC machine in EC2

    - by Matt Sieker
    I'm trying to get some servers set up in EC2 in a Virtual Private Cloud. To do this, I have two subnets: 10.0.42.0/24 (public subnet) and 10.0.83.0/24 (private subnet). To bridge these two, I have a Funtoo instance with a pair of NICs, eth0 10.0.42.10 and eth1 10.0.83.10, which has the following routing table:
      Kernel IP routing table
      Destination   Gateway    Genmask        Flags  Metric  Ref  Use  Iface
      10.0.83.0     *          255.255.255.0  U      0       0    0    eth1
      10.0.83.0     *          255.255.255.0  U      203     0    0    eth1
      10.0.42.0     *          255.255.255.0  U      202     0    0    eth0
      loopback      *          255.0.0.0      U      0       0    0    lo
      default       10.0.42.1  0.0.0.0        UG     0       0    0    eth0
      default       10.0.42.1  0.0.0.0        UG     202     0    0    eth0
    An elastic IP is attached to the eth0 interface, and I can connect to it fine remotely. However, I cannot ping anything in the 10.0.83.0 subnet. For now iptables is not set up on the box, so there are no rules that would get in the way (eventually this will be managed by Shorewall, but I should get basic connectivity done first). Subnet details from the VPC interface:
      CIDR: 10.0.83.0/24
      Route table: 10.0.0.0/16 -> local; 0.0.0.0/0 -> [ID of eth1 on NAT box]
      Network ACL: Default
        Inbound:  rule 100 ALL/ALL from 0.0.0.0/0 ALLOW; * ALL/ALL from 0.0.0.0/0 DENY
        Outbound: rule 100 ALL/ALL to 0.0.0.0/0 ALLOW; * ALL/ALL to 0.0.0.0/0 DENY
      CIDR: 10.0.83.0/24
      Route table: 10.0.0.0/16 -> local; 0.0.0.0/0 -> [Internet Gateway ID]
      Network ACL: Default (replace)
        Inbound:  rule 100 ALL/ALL from 0.0.0.0/0 ALLOW; * ALL/ALL from 0.0.0.0/0 DENY
        Outbound: rule 100 ALL/ALL to 0.0.0.0/0 ALLOW; * ALL/ALL to 0.0.0.0/0 DENY
    I've been trying to work this out most of the evening, but I'm just stuck. I'm either missing something obvious, or am doing something very wrong. I would think I'd be able to ping from either interface on this box without issue. Hopefully some more pairs of eyes on this configuration will help. EDIT: I am an idiot. After I bothered to install nmap to run some more tests, I discovered I can see the ports and connect to them; pings are just being blocked.
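
    Given the edit (ports reachable, ICMP blocked), the fix is usually at the security-group level rather than in the routing. A hedged sketch with the current AWS CLI (group and instance IDs are placeholders), plus the two settings a NAT/router instance needs anyway:

      aws ec2 authorize-security-group-ingress --group-id sg-12345678 \
          --protocol icmp --port -1 --cidr 10.0.0.0/16                    # allow ping within the VPC
      aws ec2 modify-instance-attribute --instance-id i-0123456789abcdef0 \
          --no-source-dest-check                                          # required so EC2 lets the box forward others' traffic
      sudo sysctl -w net.ipv4.ip_forward=1                                # required for the instance to route between subnets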

    Read the article

  • Focus On SOA & BPM for Oracle OpenWorld Now Available

    - by Lionel Dubreuil
    To help our valued customers & partners make the most of time spent at Oracle OpenWorld, please check out the Focus On Oracle Fusion Middleware documents (SOA and BPM, SOA for Developers, BPM). Over the years, we've learned that these provide a great roadmap to must-attend sessions, demos, partner exhibits, and networking events during Oracle OpenWorld. In addition to those "Focus On..." documents, session details (speakers, abstracts) can be found in the Content Catalog at: https://oracleus.activeevents.com/connect/search.ww?event=openworld We strongly recommend our customers attend the following sessions:
      Service Integration (SOA) & BPM: "Using the Right Tools, Techniques, and Technologies for Integration Projects" - Monday, 10/1/2012, 3:15 PM, Moscone South - 308
      BPM Suite: "Oracle Unified Business Process Management Suite 11g Overview and Roadmap" - Monday, 10/1/2012, 12:15 PM, Moscone South - 308
      SOA Suite: "Oracle SOA Suite, the Most Capable Tool for Every Possible Integration Challenge" - Monday, 10/1/2012, 10:45 AM, Moscone South - 102
      Foundation Pack: "Jump-starting Integration Projects with Oracle AIA Foundation Pack" - Tuesday, 10/2/2012, 1:15 PM, Marriott Marquis - Salon 7
      Oracle Enterprise Repository: "Gaining Victory over SOA and Application Integration Complexity" - Tuesday, 10/2/2012, 1:15 PM, Moscone South - 310
    See you in San Francisco! Not attending the show? Some of the general and key sessions will be available online - so please stay tuned for those announcements as Oracle OpenWorld gets closer.

    Read the article

  • Is it better to have AWS EC2 and RDS in the same Availability Zone?

    - by Dan
    I run a web app in an AWS EC2 instance and the database for the app in an RDS instance, both in Amazon Web Services Region East-1. However, one of them is in Availability Zone 1a and the other is in 1d. Am I getting all the speed benefits of having both instances in the same "data center" (East-1) even if they are in different Availability Zones, or can I optimize by moving them to the same Availability Zone?

    Read the article

  • Open World Session - BPM, SOA and ADF Combined:Patterns learned from Fusion Applications

    - by mesriniv
    Blog by Meera Srinivasan (Oracle Product Management). This afternoon (10/2/2012), Mohan Kamath and I (Meera Srinivasan) delivered an Open World session on how Oracle Fusion Applications (the next-generation business applications from Oracle) use the Oracle BPM, Oracle SOA and Oracle ADF products. These adoption patterns can be applied in a generic manner to produce process-centric, user-centric, highly customizable and extensible next-generation applications. The session was well attended and we had lively discussions with the attendees during Q & A. We started with why, as an application developer, you should look at BPM for creating a process-centric application, and presented the following Fusion adoption patterns: model-driven agile development; customization and extension; guided process interactions; personalization and customization of end-user interfaces; and approval flows. The Fusion HCM onboarding process (Activity Guide interface) was used as an example for the guided process interactions adoption pattern, and the Fusion CRM BPM process templates for the customization adoption pattern. In the personalization and customization of end-user interfaces section, we looked at how ADF is used within Oracle BPM and the various options available to customize end-user interfaces. We also presented how Oracle Procurement does complex approvals using Rules and Approval Management Extensions. We hope you found the session useful, and please do try to attend Heidi’s session on dynamic case management: Case Management Patterns with Oracle Unified Business Process Management Suite. Marriott Marquis - Salon 7, Thu 11:15 AM - 12:15 PM

    Read the article

  • Oracle BPM overview and roadmap session on Monday, October 1st

    - by Manoj Das
    Bhagat Nainani and I, Manoj Das, will present a session on the Oracle BPM overview and roadmap on Monday, October 1, 2012, from 12:15-1:15 PM at Moscone South - 308. Since last OpenWorld, many good things have happened. Many customers have gone live with their BPM 11g deployments, some of whom were nominated for the Innovation Awards. From a product perspective, we delivered 11.1.1.6, and 11.1.1.7 is just around the corner. We will discuss some of the highlights related to both customer successes and product features. In particular, we will present some of the exciting new capabilities that we are introducing in 11.1.1.7 around business analyst driven model-to-execution, a more comprehensive unified BPM suite, and more flexible and manageable BPM. Another significant development is the release of Process Accelerators. We have not only released accelerators, we have ourselves deployed and are using them internally. We will talk about accelerators as well as what we have learned. As the title suggests, we will also share some aspects of our roadmap - there are some very exciting things brewing that I can't wait to share with you on Monday. Hoping to see you on Monday. Again, the session is in Moscone South - 308 from 12:15-1:15. Looking forward to your tweets on the session - remember to use #oraclebpm and #oow. Finally, as always, feel free to ask Bhagat and me any questions you have, during the session as well as after the session.

    Read the article

  • Scaling a video processing application on EC2?

    - by Stpn
    I am approaching the need to scale a video-processing application that runs on EC2. So far the setup is one machine: Backbone.js frontend, Rails 3.2, PostgreSQL, Resque + S3 for storage. The flow of the app is as follows: 1) Request from frontend. Upload a video. 2) Storing the video. 3) Querying external APIs. 4) Processing / encoding videos. 5) Post to frontend. I can separate the backend and frontend without any problems, but when it comes to distributing the backend between several servers I am a bit puzzled. I can probably come up with a temporary solution (like just duplicating apps, making several instances), but since I don't really have expertise in backend system administration, there can be some fundamental mistakes. Also I would rather have something that is scalable. I wonder if anyone can give some feedback on the following plan: A) Frontend machine. Just the frontend, talks to the backend via a REST API of sorts. B) Backend server (BS), main database. Gets requests from 1), posts to 2), saves uploads to 3). C) S3 storage. D) Server for querying APIs. Basically just Resque workers that post info back to 2). E) Server for video encoding. Processes videos uploaded on 3) and uploads them back. So I will have: A)frontend \ \ B)MAIN_APP/DB ----- C)S3 Storage (Files) / \ / / \ / D)ExternalAPI_queries E)Video_Processing (redundant DB) (redundant DB) All this will supposedly talk to each other via HTTP requests. My reason for this is that the video processing part is really the most resource-intensive and I would just run a barebones application that accepts requests and starts processing them. Questions: 1) In this setup I will have the main database at B) and all other servers will communicate with it via HTTP requests (and store duplicates of the database, I guess, for safety reasons). Is that the right approach or should I have one database that everyone connects to (how then?) 2) Is it a good idea to separate API queries from video processing? Logically they are very close (processing is determined by the result of API queries), but resource-wise video processing is way more intensive. 3) What should I use to distribute calls between backend apps based on load?

    Read the article

  • Running an rsync sweep before initializing lsyncd for synchronizing instances on EC2

    - by chrisallenlane
    My company uses several EC2 servers that will scale up and down according to the load we're receiving on our sites at any given moment. For the sake of our discussion here, we're running four instances: master.ourdomain.com - the file syncing "hub" of the webservers; www1/www2/www3.ourdomain.com - three webservers which turn on or off as dictated by load. I'm using lsyncd to keep all of the webservers in sync, and for the most part, it's working quite well. We're using a two-way syncing scheme, such that each webserver syncs against master, and master syncs against each webserver. Thus, the webservers are kept in sync, even though they aren't syncing against each other directly. I'm having one problem that I'm having a hard time solving, though. It occurs under these circumstances: when changes are made on master (perhaps after we've pushed new code) while some of the redundant webservers are sleeping, and then a sleeping webserver wakes up to absorb load. Under that circumstance, I would like the following to happen: first, the newly-awoken webserver should sync its file structure - one way - against master, to bring its web application code up-to-date. Then, and only then, should it begin pushing changes in its file structure back to master. Unfortunately, currently, when a sleeping server is started, when lsyncd starts up, it pushes changes back to master before updating its own codebase, thus overwriting new code with old. Thus, before lsyncd starts, I'd like to be able to synchronize the webserver's code against master's, perhaps by running a simple one-way rsync between the two machines. We're running lsyncd v2, and I've tried to make this happen by using the "bash" configuration options documented in the lsyncd manual. My configuration file looks like this:
      settings = {
          logfile = "/home/user/log/lsyncd/log.txt",
          statusFile = "/home/user/log/lsyncd/status.txt",
          maxProcesses = 2,
          nodaemon = false,
      }
      bash = {
          onStartup = "rsync [email protected]:/home/user/www /home/user/www"
      }
      sync{
          default.rsyncssh,
          source="/home/user/www/",
          host="[email protected]",
          targetdir="/home/user/www/",
          rsyncOpts="-ltus",
          excludeFrom="/home/user/conf/lsyncd/exclude"
      }
    (I've obviously redacted that file somewhat to protect the identities of the guilty.) Simply put, though, this just isn't working. How else might I approach this problem? I was looking at the --delete-after option in man rsync, but I don't think that does what I'm looking for. Are there any suggestions about how I should approach this problem? Thanks for lending your time and expertise. Chris
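
    One hedged alternative is to not rely on lsyncd's startup hook at all and instead wrap the two steps in whatever boot/init script starts lsyncd on each webserver, doing the one-way catch-up first (the lsyncd config path below is a placeholder for whatever you already use; --delete makes master authoritative at boot):

      # pull master's tree down first (one-way), so stale local files can't win
      rsync -az --delete [email protected]:/home/user/www/ /home/user/www/
      # only then start the two-way sync
      lsyncd /etc/lsyncd/lsyncd.conf.lua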

    Read the article

  • Simple Introduction to using the Enterprise Manager SOA/BPM Facade API by Jaideep Ganguli

    - by JuergenKress
    There may be times when you need to expose just a small section of what is displayed in the Enterprise Manager console for SOA/BPM (EM console). A simple example can be where stakeholders on the systems integration or customer teams want to monitor a dashboard of statistics on how many instances of a composite have been created and how many have faulted. You can see this in the EM, as shown below. Some of these stakeholders may not have knowledge of the EM console and they just want a quick view into the statistics, without having to navigate EM. This post describes how to use the Oracle Fusion Middleware Infrastructure Management Java API for Oracle SOA Suite (also called the Facade API) to build a custom ADF page to display this information. If you want a quick introduction to using the Facade API, this post is for you. Read the complete article here. SOA & BPM Partner Community: for regular information on Oracle SOA Suite, become a member of the SOA & BPM Partner Community; for registration please visit www.oracle.com/goto/emea/soa (OPN account required). If you need support with your account please contact the Oracle Partner Business Center.

    Read the article

  • SQL Express 2008 R2 on Amazon EC2 instance: tons of free memory, poor performance

    - by gravyface
    The old SQL Express 2005 was running on a low-end single Xeon CPU Dell server, RAID 5 7200 disks, 2 GB RAM (SBS 2003). I have not done any baseline measurements on the old physical server, but the Web app is used by half a dozen people (maybe 2 concurrently), so I figured "how bad can an Amazon EC2 instance be?". It's pretty horrible: a difference of 8 seconds of load time on one screen. First of all, I'm not a SQL guru, but here's what I've tried: Had a Small Instance, now running a c1.medium (High CPU Medium) Windows 2008 32-bit R2 EBS-backed instance running IIS 7.5 and SQL Express 2008 R2. No noticeable improvement. Changed Page File from fixed 256 to Automatic. Set up a striped mirror from within Disk Management with two attached 1 GB EBS volumes. Moved the database and transaction log, left everything else on the boot EBS volume. No noticeable change. Looked at memory, ~1000 MB of physical memory free (1.7 GB total). Changed the SQL instance to use a minimum of 1024 MB of RAM; restarted the server, no change in memory usage. SQL still only using ~28MB of RAM(!). So I'm thinking: this database is tiny (28MB), why isn't the whole thing cached in RAM? Surely that would speed up performance. The transaction log is 241 MB. Seems kind of large in comparison -- has this not been committed? Is it a cause of performance degradation? I recall something about Recovery Models and log sizes somewhere in my travels, but not positive. Another thing: the old server was running SQL Express 2005. Not sure if that has any impact, but I tried changing the compatibility level from SQL 2000 to 2008, but that had no effect. Anyways, what else can I try here? Seems ridiculous to throw more virtual hardware at this thing. I know I/O is going to be rough on EBS volumes, but surely others are successfully running small .NET/SQL apps on reasonably priced instances?
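
    On the transaction-log question: a 241 MB log against a 28 MB database usually means the database is in the FULL recovery model with no log backups being taken. If point-in-time restore isn't needed, switching to SIMPLE and shrinking the log is a common cleanup; a rough sketch, with the database and logical log-file names as placeholders:

      sqlcmd -S .\SQLEXPRESS -Q "SELECT name, recovery_model_desc FROM sys.databases"
      sqlcmd -S .\SQLEXPRESS -Q "ALTER DATABASE MyAppDb SET RECOVERY SIMPLE"
      sqlcmd -S .\SQLEXPRESS -Q "DBCC SHRINKFILE (MyAppDb_log, 64)"

    That addresses the log size; the 8-second page loads are more likely down to EBS I/O latency than to the log itself.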

    Read the article

  • BPM 11g Hands on Training

    - by mseika
    BPM 11g Hands-on Training, 11-14 December 2012, Birmingham (UK). Description: This free hands-on workshop covers the life cycle of a business process from analysis, modeling, simulation, process customization and monitoring using Oracle BPM Suite 11g. The process modeled in the workshop includes integrating with web services, creating complex human workflows with user interfaces for task forms and incorporating rule engine-based decision services. After taking this course on Oracle BPM Suite 11g, you can go on to build industry-focused solutions, customer-facing demos, proof-of-concepts (POCs), pilot implementations and reference architectures. REGISTER NOW (see the Partner Registration Guide). Price: FREE. Address: Hockley Suite, Oracle Offices, Blythe Valley Business Park, Shirley, Solihull, West Midlands B90 8AD, United Kingdom. Dates: 11-14 December 2012. You will also be able to extend your current SOA implementations with BPMN-based business processes. You will have the opportunity to sit the BPM 11g Implementation Specialization exam at the end of the boot camp. The training will finish at 3pm on Friday 14th to allow time for the online exam to take place. Agenda: This workshop is 4 days long. 08:30: arrival and sign-in. 09:00: workshop begins. 17:30: workshop ends (more detail to be provided). Workstation Requirements: Attendees must use their own laptops and it is essential they have the following: minimum 8GB RAM and 40GB free disk space; VirtualBox (latest version); 7zip (required for extracting the VirtualBox image). For more information please contact [email protected].

    Read the article
