Search Results

Search found 13411 results on 537 pages for 'proxy servers'.


  • Do licenses matter if there's nobody around to enforce them?

    - by Corey
    Suppose that the original creators can't (or won't) enforce a license on their software/code, but that work is still popular. If you want to visualize it, I'll throw out a convoluted hypothetical: imagine a very small group of developers that released a code project under an open-source license, with the repository hosted on their servers. However, everybody on the immediate development team passed away in a tragic accident or something, and their servers shut down after this happened. The project had a fairly large user base, and so others began to host the last revision on their own servers for others to download. (Yes, I have an active imagination.) Does abiding by the license simply become a matter of morality for its users, or can there still be a legal penalty when there is no one person or group to enforce it? Could anything be done if an unscrupulous user decided to branch off the project and use it under a different license? I am not looking for legal advice -- I am simply curious about how software licenses work. I tend to think of strange situations and wonder what would happen in those scenarios.

    Read the article

  • Collaboration using github and testing the code

    - by wyred
    The procedure in my team is that we all commit our code to the same development branch. We have a test server that runs updated code from this branch so that we can test our code on the servers. The problem is that when we want to merge the development branch into the master branch in order to publish new features to our production servers, features that may not be ready get applied to the production servers too. So we're considering having each developer work on a feature/topic branch: each of them works on their own feature and, when it's ready, merges it into the development branch for testing, and then into the master branch. However, because our test server only pulls changes from the development branch, the developers are unable to test their features there. While this is not a huge issue, as they can test on their local machines, the one problem I foresee is testing callbacks from third-party services like SendGrid (where you specify a URL for SendGrid to update you on the status of emails sent out). How should we handle this problem? Note: we're not advanced git users. We use the GitHub app for Mac OS X and Windows to commit our work.
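
    For reference, a minimal command-line sketch of the feature/topic-branch flow being considered (branch and remote names are hypothetical):

        # each developer works on an isolated topic branch
        git checkout develop
        git pull origin develop
        git checkout -b feature/sendgrid-callbacks
        # ... commit work here ...
        # when the feature is ready, merge it into the shared testing branch
        git checkout develop
        git merge --no-ff feature/sendgrid-callbacks
        git push origin develop        # the test server pulls from develop
        # once verified on the test server, promote to production
        git checkout master
        git merge --no-ff develop
        git push origin master         # production servers deploy from master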

    Read the article

  • FMW Diagnostic Framework: Automatic Capture of Diagnostic Data Upon First Failure!

    - by Daniel Mortimer
    Introduction
    There is nothing more frustrating than a problem that "cannot be reproduced". Logs and configuration files have been analysed, but there just isn't enough information to establish the root cause. The issue may be closed, but you are left with the feeling that the problem will raise its ugly head again in the future. The trouble is, to resolve such issues you need to capture diagnostic data at the exact time the incident occurs. Step forward Fusion Middleware Diagnostic Framework! Diagnostic Framework monitors WebLogic Managed Servers and delivers "automatic capture of diagnostic data upon first failure". To quote from Oracle Fusion Middleware Administrator's Guide 11g Release 1 (11.1.1), Chapter 13 "Diagnosing Problems": "When a critical error occurs ... the Diagnostic Framework automatically collects diagnostics, such as thread dumps, DMS metric dumps, and WebLogic Diagnostics Framework (WLDF) server image dumps ... The data is stored in a file-based repository and is accessible with command-line utilities." In other words, the data collected upon first failure - especially the thread and image dumps - provides a snapshot of the system as, or immediately after, the problem occurs. The table below shows the types of WebLogic Server issues which fall into the scope of Diagnostic Framework.
    How to Configure Diagnostic Framework
    Depending on your Fusion Middleware product choice, you may not need to do anything! Diagnostic Framework is automatically installed, configured and initiated for any WebLogic Domain which has the Oracle Java Required Files (JRF) template applied. This template is applied by default whenever you configure WebLogic Managed Servers for products such as:
    - Portal / Forms / Reports / Discoverer
    - Identity Management (OID, OAM, OIM etc.)
    - WebCenter
    - SOA
    Check your WebLogic Domain directory structure. If you have an "adr" subdirectory under DOMAIN_HOME/servers/<servername>/, then the JRF template has been applied and Diagnostic Framework will be in play. Should the "adr" subdirectory not exist, review the advice given in My Oracle Support article "How to Apply FMW (EM) Control and JRF to a WebLogic Domain and Managed Servers" [ID 947043.1]. If you are working with a standalone WebLogic Server solution and applying Oracle JRF is not acceptable, consider using WLDF - WebLogic Diagnostic Framework. (Fusion Middleware Diagnostic Framework makes use of WLDF under the covers.) A couple of useful links about WLDF:
    - Configuring and Using the Diagnostics Framework for Oracle WebLogic Server 11g
    - WebLogic Diagnostics Framework - A Very Useful Tool (a nice blog which describes a WLDF use case)
    How to Get Started with Diagnostic Framework
    To be frank, the Fusion Middleware Administrator's Guide is the best place to start your learning: Oracle Fusion Middleware Administrator's Guide 11g Release 1 (11.1.1), Chapter 13 "Diagnosing Problems". A lot of reading here, but if you are in a hurry and just want to get the right information to Oracle Support to help resolve your issue, check out the next section below.
    How to Upload Diagnostic Framework Incident Data to Oracle Support
    Some background information: there are three interfaces to the repository:
    - Enterprise Manager Cloud Control (Support Workbench)
    - WLST (command line)
    - ADRCI (command line)
    Enterprise Manager Cloud Control does provide a nice GUI to search, view and package Diagnostic Framework incidents. However, this software is not to be confused with Fusion Middleware (EM) Control. Cloud Control (formerly known as Grid Control) is part of the Enterprise Manager media package and has its own install and configuration story. Therefore, for the benefit of those yet to install and play with Cloud Control, I am going to describe how to use the command line tools. Ideally, you would need only one command line interface, but currently I suggest using both - mainly because ADRCI SHOW INCIDENTS does not reveal the description behind the Diagnostic Framework error code.
    Instructions
    Note: WLST and ADRCI are case sensitive when it comes to handling parameter values. If you make a mistake, expect an unfriendly syntax error message.
    1) Find the incident
    Note: the managed server which you are troubleshooting must be up and running. If the managed server is down, ensure the domain's Admin Server is accessible. If you cannot connect to the Admin Server or the Managed Server, the example WLST commands will not work.
    a) Launch WLST. Use the WLST which resides in the "oracle_common" directory (not WL_HOME/common/bin), otherwise you will get a syntax error like the one below:
        Traceback (innermost last):
          File "<console>", line 1, in ?
        NameError: listIncidents
        MW_HOME/oracle_common/common/bin/wlst.sh
    b) Connect to the managed server or the admin server, e.g.
        wls:/offline> connect('weblogic','welcome1','t3://localhost:7020')
    c) Run the command:
        wls:/MyDomain/serverConfig> listIncidents()
    This will list the incidents for the server to which you have connected. If you have connected to the Admin Server and want to list the incidents for a managed server within the domain, use:
        wls:/MyDomain/serverConfig> listIncidents(adrHome='diag\ofm\MyDomain\MyManagedServer', server='MyManagedServer')
    Example output:
        Incident Id   Problem Key                                                                                                                           Incident Time
        1             DFW-99998 [java.lang.NullPointerException] [oracle.error.simulator.ErrorSimulator.createNullPointerException][errorWebApp_1-0-0-0]   Fri Nov 02 10:38:46 GMT 2012
    The problem key (the bracketed text) is the description you do not see when using the ADRCI SHOW INCIDENT command. Make a note of the incident id; you are ready to move to step 2.
    2) Package the incident
    a) Set up the environment - the example commands below are for Unix:
        cd <DOMAIN_HOME>/bin
        . ./setDomainEnv.sh
    If you want ADRCI to run a Remote Diagnostic Agent (RDA) collection at generate package time (recommended), point ORACLE_HOME at oracle_common:
        ORACLE_HOME=$MW_HOME/oracle_common; export ORACLE_HOME
    To prevent ADRCI from running RDA at generate package time, point ORACLE_HOME at the WL_HOME/server/adr directory:
        ORACLE_HOME=$WL_HOME/server/adr; export ORACLE_HOME
    b) Launch adrci:
        $WL_HOME/server/adr/adrci
    c) Set BASE and HOMEPATH:
        adrci> SET BASE /oracle/middleware/user_projects/domains/mydomain/servers/mymanagedserver/adr
        adrci> SET HOMEPATH diag/ofm/mydomain/mymanagedserver
    d) Optionally run SHOW INCIDENTS, e.g.
        adrci> SHOW INCIDENTS -MODE DETAIL
        ADR Home = /oracle/middleware/user_projects/domains/mydomain/servers/mymanagedserver/adr/diag/ofm/mydomain/mymanagedserver:
        INCIDENT INFO RECORD 1
        INCIDENT_ID                   1
        STATUS                        ready
        CREATE_TIME                   2012-11-02 10:38:46.468000 +00:00
        PROBLEM_ID                    1
        CLOSE_TIME                    <NULL>
        FLOOD_CONTROLLED              none
        ERROR_FACILITY                DFW
        ERROR_NUMBER                  99998
        ERROR_ARG1 ... ERROR_ARG12    <NULL>
        SIGNALLING_COMPONENT          <NULL>
        SIGNALLING_SUBCOMPONENT       <NULL>
        SUSPECT_COMPONENT             <NULL>
        SUSPECT_SUBCOMPONENT          <NULL>
        ECID                          5162744c6a2eea5e:155ff445:13ac0aae7cb:-8000-0000000000000325
        IMPACTS                       0
        1 rows fetched
    e) Create a logical package: IPS CREATE PACKAGE INCIDENT incident_number, e.g.
        adrci> IPS CREATE PACKAGE INCIDENT 1
        Created package 1 based on incident id 1, correlation level typical
    f) Generate the package: IPS GENERATE PACKAGE package_number IN path, e.g.
        adrci> IPS GENERATE PACKAGE 1 IN /tmp
        Generated package 1 in file /tmp/DFW99998j_20121102113633_COM_1.zip, mode complete
    Note: if the generate package command hangs, ADRCI may be experiencing an issue when running RDA. To avoid such trouble, exit ADRCI and point the ORACLE_HOME environment variable at WL_HOME/server/adr.
    3) Upload the package zip to Oracle Support via your Service Request
    a) Log into My Oracle Support and locate your Service Request.
    b) Click on "Add Attachments".
    c) Upload the zip file.

    Read the article

  • Partner Webcast - Oracle VM Server for SPARC

    - by dmitry.nefedkin(at)oracle.com
    March 17th, 9am CET (10am EET)
    Oracle VM Server for SPARC (previously called Sun Logical Domains) provides highly efficient, enterprise-class virtualization capabilities for Oracle's SPARC T-Series servers. Oracle VM Server for SPARC allows you to create up to 128 virtual servers on one system to take advantage of the massive thread scale offered by SPARC T-Series servers and the Oracle Solaris operating system. And all this capability is available at no additional cost.
    Agenda
    - Overview of VM technologies from Oracle
    - LDoms introduction
    - Values and benefits
    - Feature details
    - LDoms demo
    - Q&A
    Delivery Format
    This FREE online LIVE eSeminar will be delivered over the Web and Conference Call. To register, please click here. For any questions please contact [email protected].

    Read the article

  • Building a Redundant / Distributed Application

    - by MattW
    This is more of a "point me in the right direction" question. I (and my team of 3) have built a hosted web app that queues and routes customer chat requests to available customer service agents (It does other things as well, but this is enough background to illustrate the issue). The basic dev architecture today is: a single page ajax web UI (ASP.NET MVC) with floating chat windows (think Gmail) a backend Windows service to queue and route the chat requests this service also logs the chats, calculates service levels, etc a Comet server product that routes data between the web frontend and the backend Windows service this also helps us detect which Agents are still connected (online) And our hardware architecture today is: 2 servers to host the web UI portion of the application a load balancer to route requests to the 2 different web app servers a third server to host the SQL Server DB and the backend Windows service responsible for queuing / delivering chats So as it stands today, one of the web app servers could go down and we would be ok. However, if something would happen to the SQL Server / Windows Service server we would be boned. My question - how can I make this backend Windows service logic be able to be spread across multiple machines (distributed)? The Windows service is written to accept requests from the Comet server, check for available Agents, and route the chat to those agents. How can I make this more distributed? How can I make it so that I can distribute the work of the backend Windows service can be spread across multiple machines for redundancy and uptime purposes? Will I need to re-write it with distributed computing in mind? I should also note that I am hosting all of this on Rackspace Cloud instances - so maybe it is something I should be less concerned about? Thanks in advance for any help!

    Read the article

  • How should I safely send bulk mail? [closed]

    - by Jerry Dodge
    First of all, we have a large software system we've developed, and a number of clients use it in their own environments. Each of them is responsible for their own equipment and resources; we don't provide any shared services. We have introduced an automated email system which sends emails automatically via SMTP. Usually it only sends around 10-20 emails a day, but it could very well send bulk email to thousands of people in a single day. That of course requires a big haul of work, which isn't necessarily the problem. The issue arises with the SMTP server we're using. An email server is issued a number of relays a day, which is paid for. This isn't really the issue either. The risk is getting the email server blacklisted. It's inevitable, and we need to take all of this into careful consideration. As far as I can see, the ideal setup would be to have at least 50 IP addresses on multiple servers, each of which hosts its own SMTP server. When sending bulk email, we would divide the messages up across these servers, and each one would process its own queue. If one of those IPs gets blacklisted, it would be decommissioned and a new IP would replace it. Is there a better way that doesn't require us to invest in a large handful of servers? Perhaps a third-party service which is meant exactly for this?

    Read the article

  • Development teams do not scale

    - by Matt Watson
    Recently I have been thinking about how development teams don't scale very well. The bigger a team and the product get, the more time the team spends fixing software bugs. This means they spend more time on troubleshooting and debugging as they grow. The problem is that since developers don't typically have access to production servers, there is a bottleneck in the process when doing production troubleshooting. For a team that has 10 developers, I would guess that 0-2 of them have access to production servers. If that team grows to 20 people, it is probably still the same 0-2 people that have production access. This means that those 2 key people are a bottleneck, and the team does not scale correctly as you add more resources. All those new developers want to help track down and fix software bugs, but they don't have the visibility to do it. So they end up being less productive and frustrated, because they really want to fix the problems. The people who do have production access end up spending too much of their time doing troubleshooting instead of working on new projects. The solution is to remove the bottlenecks and get those people working on more important tasks. Stackify can solve this problem by giving all the developers read-only access to production servers. This allows them to access the information they need to do troubleshooting on their own.

    Read the article

  • How to spread XML Sitemaps over several web servers behind an AWS load balancer?

    - by Jurik
    We have a web portal with almost a million products and many more other URLs. I wrote a script that checks the database: if a new URL is needed, or an old one has been updated, the script updates/creates the XML sitemaps. But we have several servers behind the load balancer at our rented AWS space. Further, this script checks the database for each URL to see whether there was an update, so that it updates the appropriate XML file too. My question is how to spread those XML sitemaps over all the web servers behind this AWS load balancer. Our approaches/ideas:
    - We could just generate them on one server with a cron job and copy them to the other servers, but this could be difficult because of the automatically changing number of servers and so on.
    - We could put them on our S3, but S3 is not available through our domain, so I guess Google will have a problem with it.
    - I could let my script run on every web server, but change it so that it generates all the XML files each time if they do not exist. But then I would have conflicts with updated URLs in my database, where I save the timestamp of the last changed value of every URL.
    Is there another - better - solution that I do not know? Are there any special services by Amazon for such cases?
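
    As a point of reference, the first idea above might look like the following cron-driven sketch (hostnames and paths are hypothetical, and the host list would need to be refreshed as instances are added or removed - the scaling concern raised in the question):

        #!/bin/sh
        # Push freshly generated sitemaps from the build node to its peers.
        SITEMAP_DIR=/var/www/sitemaps/
        HOSTS="web2.example.com web3.example.com"   # would need dynamic discovery on AWS
        for h in $HOSTS; do
            rsync -az "$SITEMAP_DIR" "$h:$SITEMAP_DIR"
        done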

    Read the article

  • links for 2010-12-17

    - by Bob Rhubart
    - Overview of Oracle Enterprise Manager Management Packs: How does Oracle Enterprise Manager Grid Control do so much across so many different systems? Porus Homi Havewala has the answers. (tags: oracle otn grid soa entarch)
    - How to overcome cloud computing hurdles - Computerworld: What does it take to go from 'we should move to the cloud' to a successful cloud computing strategy? This excerpt from Silver Clouds, Dark Linings offers advice on crossing cloud chasms and developing a successful roadmap. (tags: ping.fm)
    - Security in OBIEE 11g, Part 2: Guest blogger Pravin Janardanam continues the discussion about OBIEE 11g Authorization and other security aspects. (tags: oracle otn security businessintelligence obiee)
    - Oracle Fusion Middleware Security: A Quick Note about Oracle Access Manager 11g and WebLogic: "OAM 11g integrates with WebLogic using the very same components used to integrate OAM 10.1.4.3. Under most circumstances, that means using the OAM Identity Asserter...which asserts the OAM_REMOTE_USER header as the user principal in the JAAS subject." - Brian Eidelman (tags: WebLogic oracle)
    - Comparison Between Cluster Multicast Messaging and Unicast Messaging Mode (WebLogic Wonders): "When servers are in a cluster, these member servers communicate with each other by sending heartbeats and indicating that they are alive. For this communication between the servers, either unicast or multicast messaging is used." - Divya Duryea (tags: weblogic oracle)
    - Ron Batra: Cloud Computing Series: IV: Database.com, ExaData on Demand and connecting the dots: Oracle ACE Director Ron Batra offers his assessment of recent rumblings in the cloud. (tags: oracle otn oracleace cloud database)

    Read the article

  • Oracle VM 3.1.1 build 365 released

    - by wcoekaer
    A few days ago we released a patch update for Oracle VM 3.1.1 (build 365). Oracle VM Manager 3.1.1 build 365 is now available from My Oracle Support as patch ID 14227416. Oracle VM Server 3.1.1 errata updates are, as usual, released on ULN in the ovm3_3.1.1_x86_64_patch channel. Just a reminder: when we publish errata for Oracle VM, the notifications are sent through the oraclevm-errata mailing list. You can sign up here. Some of the bugfixes in 3.1.1:
    - 14054162 - Removes unnecessary locks when creating VNICs in a multi-threaded operation.
    - 14111234 - Fixes the issue when discovering a virtual machine that has disks in an un-discovered repository or has un-discovered physical disks.
    - 14054133 - Fixes a bug of object not found where vdisks are left stale in certain multi-threaded operations.
    - 14176607 - Fixes the issue where Oracle VM Manager would hang after a restart due to various tasks running jobs in the global context.
    - 14136410 - Fixes the stale lock issue on multithreaded servers where an object not found error happens in some rare situations.
    - 14186058 - Fixes the issue where Oracle VM Manager fails to discover the server or start the server after the server hardware configuration (i.e. BIOS) was modified.
    - 14198734 - Fixes the issue where HTTP cannot be disabled.
    - 14065401 - Fixes the Oracle VM Manager UI time-out issue where the default value was not long enough for storage repository creation.
    - 14163755 - Fixes the issue where, when migrating a virtual machine, the list of target servers (and "other servers") was not ordered by name.
    - 14163762 - Fixes the size of the "Edit Vlan Group" window to display all information correctly.
    - 14197783 - Fixes the issue where the navigation tree (servers) was not ordered by name.
    I strongly suggest everyone use this latest build and also update the server to the latest version. Have at it.

    Read the article

  • Setting up Cluster Configuration using an existing web server as a Primary Node?

    - by RapidWebs
    Thanks in advance for any help! I am having a slight issue and need help with the decision-making process for setting up my cluster configuration, consisting of a line of Ubuntu (12.04) servers. We currently have a primary node, which resides in a US datacenter; we are going to be using this for all serious bandwidth- and resource-intensive websites, and through a configuration of Virtualmin + Webmin it will be set up as a sort of pseudo-cluster, using Virtualmin's cluster modules. Anyway, on to the issue: we also have a business line set up locally, with three servers. Here are their specs:
    - Intel P4 2.4 GHz, 1GB RAM, 110 GB SATA, Ubuntu 12.04 (currently IN USE, serving virtual hosts off a sub-domain)
    - AMD 1.3 GHz, 512MB RAM, 20 GB IDE
    - P3 Xeon 800 MHz (dual physical processors), 1GB RAM, 3 x 25 GB RAID configuration (one disk in use for the host operating system)
    My question is this: how can I integrate the secondary node (which will be the primary node, per se, in this smaller configuration), which is currently in use, into the cluster configuration with the other two servers for:
    - sharing resources
    - redundancy (HA?)
    - NFS with the two RAID disks
    without having to FORMAT the secondary node, start fresh moving all my services onto a DRBD network drive or something similar, and then restore all the active Virtualmin virtual hosts? The idea is that I want minimal downtime for people currently being served from server2.mywebsite.com, and from what I understand, all services need to be on an NFS share so that they can be mounted on demand and accessed from the other machine taking over (i.e. a Heartbeat + DRBD configuration). But my issue is that I already have all these services installed in their default directory structure: how can I most easily set up this NFS and HA system, move all my desired services to this new drive, and do it with minimal downtime and without breaking Virtualmin and everything else on my server? Even just some pointers, a thread I could read, or a step-by-step checklist or rundown of commands I could issue to get started would be great! Thanks!
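
    For orientation, the bare NFS wiring underneath such a setup (before any DRBD/Heartbeat layering) could look like this sketch; the export path and subnet are hypothetical:

        # on the node exporting the shared data
        sudo apt-get install nfs-kernel-server
        echo '/srv/cluster 192.168.1.0/24(rw,sync,no_subtree_check)' | sudo tee -a /etc/exports
        sudo exportfs -ra
        # on each node that mounts it
        sudo apt-get install nfs-common
        sudo mkdir -p /mnt/cluster
        sudo mount -t nfs server2.mywebsite.com:/srv/cluster /mnt/cluster
        # service data can then be moved onto the share and symlinked back, e.g.
        # rsync -a /var/www/ /mnt/cluster/www/ && mv /var/www /var/www.bak && ln -s /mnt/cluster/www /var/www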

    Read the article

  • What is Stackify?

    - by Matt Watson
    You have developers, applications, and servers. Stackify makes sure that they are all working efficiently. Our mission is to give developers the integrated tools they need to better troubleshoot and monitor the applications they create and the servers they run on. Traditional IT operations tools are designed for network and system administrators. Developers commonly spend 30% of their time working with IT operations remediating application service problems, yet they lack tools to efficiently support the applications they create. Stackify delivers the application support functionality that developers need:
    - View application deployment locations, versions, and history
    - Browse files on servers to ensure proper deployments
    - Access configuration and log files on servers
    - Remotely restart Windows services, scheduled tasks, and web applications
    - Basic server monitoring and alerts
    - Collect all application exceptions to a centralized point
    - Log and report on custom application events
    Stackify is building an integrated DevOps solution delivered from the cloud, designed to meet the needs of developers but also to help unify the working relationship with IT operations teams and existing security roles. Our goal is to help unify the interaction between developers and IT operations. Stackify allows both teams to have visibility that they never had before, to solve complex application service issues easier and faster. Stackify's CEO and CTO both have experience managing very large and high-growth software development teams. That experience is driving our design in Stackify: to deliver the integrated tools we always wished we had, the next generation of development operations tools.

    Read the article

  • Friday Tips #6, Part 2

    - by Chris Kawalek
    Here is a question about updating Oracle VM: Question: How can I perform Oracle VM 3 server updates from Oracle VM Manager? Answer by Gregory King, Principal Best Practices Consultant, Oracle VM Product Management: Server Update Manager is a built-in feature of the Oracle VM Manager. Basically, Server Update Manager automatically configures YUM updates on all the Oracle VM Servers, pointing each to our Unbreakable Linux Network (ULN) update channel for Oracle VM. The servers periodically check with our Oracle YUM repository and notify the Oracle VM Manager that an update is available for each server. Actual server updates must be triggered by the Oracle VM administrator – they are not executed automatically. At this point, you can use the Oracle VM Manager to put a server into maintenance mode which live migrates all the running Oracle VM Guests to other Oracle VM Servers in the server pool. Once all the Oracle VM Guests have been migrated, the Oracle VM administrator can trigger the update on the server. The entire process is documented in the Installation and Upgrade Guide of Oracle VM Documentation so I won’t spend time detailing the steps. However, configuring the Server Update Manager is exceedingly simple. Simply navigate to the Tools and Resources tab in the Oracle VM Manager, select the link for Server Update Manager and ensure the following values are added to the text boxes as shown in the illustration below: YUM Base URL: http://public-yum.oracle.com/repo/OracleVM/OVM3/latest/x86_64 YUM GPG Key: file:///etc/pki/rpm-gpg/RPM-GPG-KEY-oracle Every server in the pool will be automatically configured for YUM updates once you choose the Apply button. Many thanks to Greg and Rick for providing the answers to this week's questions. If you want to ask us something, hit up Twitter and use hashtag #AskOracleVirtualization. See you next week! -Chris 

    Read the article

  • Standard Network Tiers in a Distributed N-Tier System

    Distributed N-Tier client/server architecture allows for segments of an application to be broken up and distributed across multiple locations on a network. Listed below are the standard tiers in a Distributed N-Tier system.
    End-User Client Tier
    The End-User Client is responsible for sending and receiving requests from web servers and other application servers, and for translating the responses so that the end user can interpret the data effectively. The primary roles for this tier are to communicate with servers and translate server responses back for the end user to interpret.
    Business-specific functions: Validate Data, Display Data, Send Data to Web Server
    Web Server Tier
    The web server tier processes new requests for information coming in on the HTTP and HTTPS ports. It primarily handles the generation of user interfaces and calls the application server when needed to access data and business logic.
    Business-specific functions: Send Data to Application Server, Format Data for Display, Validate Data
    Application Server Tier
    The application server stores and executes predefined business logic that is applied to various pieces of data as the business determines. The processed data is then returned to the web server. Additionally, this server directly calls the database to obtain and store any data used by the system.
    Business-specific functions: Validate Data, Process Data, Send Data to Database Server
    Database Server Tier
    The database server is responsible for storing and returning all data needed by the calling applications. The primary role for this server is storage. Data is stored as needed and can be recalled at any point later in time.
    Business-specific functions: Insert Data, Delete Data, Return Data to Application Server

    Read the article

  • SOA performance on SPARC T5 benchmark results

    - by JuergenKress
    The brand NEW, super fast SPARC T5 servers are available. The platform is superb for running large SOA Suite environments or consolidating your whole middleware platform. Some performance advice, recommended for all workloads:
    - Performance profile for SOA apps on Oracle Solaris 11
    - BPEL (Fusion Order Demo) instances per second
    - OSB (messages / transformations per second)
    - Crypto acceleration study for SOA transformations
    - SPARC T4 and T5 platform testing, pre-tuning
    Highlights:
    - Performance suitable for mid-to-high range enterprises in a stand-alone SOA deployment or a virtualized consolidation environment shared with Oracle applications
    - 2.2x to 5x faster than SPARC T3 servers
    - 25% faster SOA throughput, core to core, than Intel 5600-series servers (running Exalogic software)
    - SPARC T5 has 2x the consolidation density of Intel 5600-class processors
    - 2x faster initial deployment time using Optimized Solutions pre-tested configuration steps
    - Over 200 application adapters for the easiest Oracle software integration
    Would you like the details? We can share T5 SOA Suite performance benchmarks with you on a 1:1 basis - please contact your local partner manager or myself! For regular information on Oracle SOA Suite, become a member of the SOA & BPM Partner Community; for registration please visit www.oracle.com/goto/emea/soa (OPN account required). If you need support with your account, please contact the Oracle Partner Business Center.

    Read the article

  • Tracking "To Do" Items

    - by Bill Graziano
    One of the challenges I struggle with is keeping a good "to do" list of things I need to do on the various SQL Servers I support. I have servers that I don't visit on a regular basis so my situation may be different than many of you. Though I'm sure you all have servers that you only touch every few months. (And it's usually the accounting server!) It's difficult for me to remember what changes I made and what changes I need to make. I've tried Outlook, OneNote and various other to do list managers and haven't been happy with any of them. Many are close but just don't give me what I need. As a result I've started writing my own. It's web-based so you can use it from anywhere -- including on a server. It also knows just enough about SQL Server to help structure your to do items and your notes. It isn't agent based and doesn't do any monitoring. Think OneNote or Evernote but with some "SQL Servery" stuff built in. If you'd like to try this or take a survey I'm putting together, add your email address to my mailing list.  I should be ready in a week or so.  I'm only going to use this list for notifications about this service. I'd like to find a small group of people that feel the same pain I do and maybe we can build something interesting.

    Read the article

  • Client Side Prediction for a Look Vector

    - by Mike Sawayda
    So I am making a first-person networked shooter. I am working on client-side prediction, where I predict the player position and look vectors client-side based on input messages received from the server. Right now I am only worried about the look vectors. I receive the correct look vector from the server about 20 times per second, and I check it against the look vector I have client-side. I want to interpolate the client's look vector towards the correct server-side one over a period of time, so that no matter how far you are away from the server's look vector, you will interpolate to it over the same amount of time. E.g. if you were 10 degrees off, it would take the same amount of time to line up with the server copy as if you were 2 degrees off. My code looks something like the snippet below, but the problem is that the amount by which you change the client's copy gets infinitesimally small, so you never actually reach the server's copy. This is because I am always calculating the difference and only moving by a percentage of it every frame. Does anyone have any suggestions on how to interpolate towards the server's copy correctly?
        if (rotationDiffY > ClientSideAttributes::minRotation)
        {
            if (serverRotY > clientRotY) {
                playerObjects[i]->collisionObject->rotation.y += (rotationDiffY * deltaTime);
            } else {
                playerObjects[i]->collisionObject->rotation.y -= (rotationDiffY * deltaTime);
            }
        }

    Read the article

  • Single CAS web application in a cluster

    - by Dolf Dijkstra
    Recently a customer wanted to set up a cluster of CAS nodes to be used together with WebCenter Sites. In the process of setting this up, they realized that they needed to create a web application per managed server. They did not want this management burden; they would like to have one web application deployed to multiple nodes. The reason a unique application per node is needed is that the web application contains information that must be unique per node: the postfix for the ticket id. My customer would like to externalize the node-specific configuration, either to a specific classpath per managed server or to system properties set at startup. It turns out that the postfix for ticket ids is managed through a property, host.name, and that this property can be externalized. The host.name property is used in /webapps/cas/WEB-INF/spring-configuration/uniqueIdGenerators.xml. It is set in /webapps/cas/WEB-INF/spring-configuration/propertyFileConfigurer.xml, in a PropertyPlaceholderConfigurer. The documentation for PropertyPlaceholderConfigurer (http://static.springsource.org/spring/docs/2.0.x/api/org/springframework/beans/factory/config/PropertyPlaceholderConfigurer.html) indicates that the properties defined through the PropertyPlaceholderConfigurer can be externalized. To enable this externalization, you would need to change host.properties so it is generic for all the managed servers and thus can be reused by all of them: host.name=${cluster.node.id}. The next step is to change the startup scripts for the managed servers and add a system property -Dcluster.node.id=<something unique and stable>. Voilà: the postfix is externalized and the web application can be shared among the cluster nodes.
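
    A minimal sketch of that last step, assuming a standard WebLogic domain where JAVA_OPTIONS is honored by the managed server start script (the node id value is illustrative):

        # in each managed server's startup script (or a per-server wrapper)
        JAVA_OPTIONS="${JAVA_OPTIONS} -Dcluster.node.id=cas-node-1"
        export JAVA_OPTIONS
        # host.properties is then identical on every node:
        # host.name=${cluster.node.id}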

    Read the article

  • Using Bulk Operations with Coherence Off-Heap Storage

    - by jpurdy
    Some NamedCache methods (including clear(), entrySet(Filter), aggregate(Filter, …), invoke(Filter, …)) may generate large intermediate results. The size of these intermediate results may result in out-of-memory exceptions on cache servers, and in some cases on cache clients. This may be particularly problematic if out-of-memory exceptions occur on more than one server (since these operations may be cluster-wide) or if these exceptions cause additional memory use on the surviving servers as they take over partitions from the failed servers. This may be particularly problematic with clusters that use off-heap storage (such as NIO or Elastic Data storage options), since these storage options allow greater than normal cache sizes but do nothing to address the size of intermediate results or final result sets. One workaround is to use a PartitionedFilter, which allows the application to break up a larger operation into a number of smaller operations, each targeting either a set of partitions (useful for reducing the load on each cache server) or a set of members (useful for managing client result set sizes). It is also possible to return a key set, and then pull in the full entries using that key set. This also allows the application to take advantage of near caching, though this may be of limited value if the result is large enough to result in near cache thrashing.

    Read the article

  • What is the best approach for deploying apps to companies

    - by Supercell
    What is the best approach for the following scenario?
    1) A publicly available app (available in app stores) which is used by end users to make use of services offered by multiple companies.
    2) These companies maintain their services using a mobile app as well.
    I'm not sure how to solve the second part. Having one app for both end-user and admin functionality, secured by username/password, doesn't sound like a good idea. This leaves the only option of developing a separate admin application for the companies. What is the best approach to deploy "admin"-like mobile apps to companies only, for Android, iOS and Windows Phone? Some additional information: Public App ---- Servers ---- Multiple Company Apps. The public app shows all companies offering their services. An end user uses the public app to order something from a specific company. The order is sent to our servers. Our servers send the order to the associated company. This order is displayed on the company's admin app, where the company is given the option to accept it.

    Read the article

  • WordPress not resizing images with Nginx + php-fpm and other issues

    - by Julian Fernandes
    Recently I set up an Ubuntu 12.04 VPS (512MB RAM, 1GHz CPU) with Nginx + php-fpm + Varnish + APC + Percona's MySQL server + CloudFlare Pro for our Ubuntu LoCo Team's WordPress blog. The blog gets about 3~4k daily hits and uses about 180MB of RAM and 8~20% CPU. Everything seems to be working insanely fast... page load is really good and about 16x faster than any of our competitors... but there is one problem. When we upload an image, WordPress doesn't resize it, so all we can do is insert the full image in the post. If the image is, let's say, 30KB, it resizes fine... but if the image is 100KB+, it won't... In the nginx error logs I see this:
        upstream timed out (110: Connection timed out) while reading response header from upstream, client: 150.162.216.64, server: www.ubuntubrsc.com, request: "POST /wp-admin/async-upload.php HTTP/1.1", upstream: "fastcgi://unix:/var/run/php5-fpm.sock:", host: "www.ubuntubrsc.com", referrer: "http://www.ubuntubrsc.com/wp-admin/media-upload.php?post_id=2668&"
    It seems to be related to the issue, but I don't know. Once that timeout happens, I also start getting it when trying to view a post:
        upstream timed out (110: Connection timed out) while reading response header from upstream, client: 150.162.216.64, server: www.ubuntubrsc.com, request: "GET /tutoriais-gimp-6-adicionando-aplicando-novos-pinceis.html HTTP/1.1", upstream: "fastcgi://unix:/var/run/php5-fpm.sock:", host: "www.ubuntubrsc.com", referrer: "http://www.ubuntubrsc.com/"
    And only a restart of php5-fpm fixes it. I tried increasing some timeouts and such, but it did not work, so I guess it's some kind of limitation I have not figured out yet. Could someone help me with it, please?
    /etc/nginx/nginx.conf:
        user www-data; worker_processes 1; pid /var/run/nginx.pid;
        events { worker_connections 1024; use epoll; multi_accept on; }
        http {
        ## Basic Settings ##
        sendfile on; tcp_nopush on; tcp_nodelay off; keepalive_timeout 15; keepalive_requests 2000; types_hash_max_size 2048; server_tokens off; server_name_in_redirect off;
        open_file_cache max=1000 inactive=300s; open_file_cache_valid 360s; open_file_cache_min_uses 2; open_file_cache_errors off;
        server_names_hash_bucket_size 64;
        # server_name_in_redirect off;
        client_body_buffer_size 128K; client_header_buffer_size 1k; client_max_body_size 2m; large_client_header_buffers 4 8k;
        client_body_timeout 10m; client_header_timeout 10m; send_timeout 10m;
        include /etc/nginx/mime.types; default_type application/octet-stream;
        ## Logging Settings ##
        error_log /var/log/nginx/error.log; access_log off;
        ## CloudFlare's IPs (uncomment when site goes live) ##
        set_real_ip_from 204.93.240.0/24; set_real_ip_from 204.93.177.0/24; set_real_ip_from 199.27.128.0/21; set_real_ip_from 173.245.48.0/20; set_real_ip_from 103.22.200.0/22; set_real_ip_from 141.101.64.0/18; set_real_ip_from 108.162.192.0/18; set_real_ip_from 190.93.240.0/20;
        real_ip_header CF-Connecting-IP; set_real_ip_from 127.0.0.1/32;
        ## Gzip Settings ##
        gzip on; gzip_disable "msie6"; gzip_vary on; gzip_proxied any; gzip_comp_level 9; gzip_min_length 1000; gzip_proxied expired no-cache no-store private auth; gzip_buffers 32 8k;
        # gzip_http_version 1.1;
        gzip_types text/plain text/css application/json application/x-javascript text/xml application/xml application/xml+rss text/javascript;
        ## nginx-naxsi config: uncomment it if you installed nginx-naxsi ##
        #include /etc/nginx/naxsi_core.rules;
        ## nginx-passenger config: uncomment it if you installed nginx-passenger ##
        #passenger_root /usr;
        #passenger_ruby /usr/bin/ruby;
        ## Virtual Host Configs ##
        include /etc/nginx/conf.d/*.conf; include /etc/nginx/sites-enabled/*;
        }
    /etc/nginx/fastcgi_params:
        fastcgi_param QUERY_STRING $query_string; fastcgi_param REQUEST_METHOD $request_method; fastcgi_param CONTENT_TYPE $content_type; fastcgi_param CONTENT_LENGTH $content_length; fastcgi_param SCRIPT_FILENAME $request_filename; fastcgi_param SCRIPT_NAME $fastcgi_script_name; fastcgi_param REQUEST_URI $request_uri; fastcgi_param DOCUMENT_URI $document_uri; fastcgi_param DOCUMENT_ROOT $document_root; fastcgi_param SERVER_PROTOCOL $server_protocol; fastcgi_param GATEWAY_INTERFACE CGI/1.1; fastcgi_param SERVER_SOFTWARE nginx/$nginx_version; fastcgi_param REMOTE_ADDR $remote_addr; fastcgi_param REMOTE_PORT $remote_port; fastcgi_param SERVER_ADDR $server_addr; fastcgi_param SERVER_PORT $server_port; fastcgi_param SERVER_NAME $server_name; fastcgi_param HTTPS $https;
        fastcgi_send_timeout 180; fastcgi_read_timeout 180; fastcgi_buffer_size 128k; fastcgi_buffers 256 4k;
        # PHP only, required if PHP was built with --enable-force-cgi-redirect
        fastcgi_param REDIRECT_STATUS 200;
    /etc/nginx/sites-available/default:
        ## DEFAULT HANDLER # ubuntubrsc.com ##
        server {
        listen 8080;
        # Make site available from main domain
        server_name www.ubuntubrsc.com;
        # Root directory
        root /var/www; index index.php index.html index.htm; include /var/www/nginx.conf; access_log off;
        location / { try_files $uri $uri/ /index.php?q=$uri&$args; }
        location = /favicon.ico { log_not_found off; access_log off; }
        location = /robots.txt { allow all; log_not_found off; access_log off; }
        location ~ /\. { deny all; access_log off; log_not_found off; }
        location ~* ^/wp-content/uploads/.*.php$ { deny all; access_log off; log_not_found off; }
        rewrite /wp-admin$ $scheme://$host$uri/ permanent;
        error_page 404 = @wordpress; log_not_found off;
        location @wordpress { include /etc/nginx/fastcgi_params; fastcgi_pass unix:/var/run/php5-fpm.sock; fastcgi_param SCRIPT_NAME /index.php; fastcgi_param SCRIPT_FILENAME $document_root/index.php; }
        location ~ \.php$ { try_files $uri =404; include /etc/nginx/fastcgi_params; fastcgi_index index.php; fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name; if (-f $request_filename) { fastcgi_pass unix:/var/run/php5-fpm.sock; } }
        }
        server {
        listen 8080;
        server_name ubuntubrsc.* www.ubuntubrsc.net www.ubuntubrsc.org www.ubuntubrsc.com.br www.ubuntubrsc.info www.ubuntubrsc.in;
        return 301 $scheme://www.ubuntubrsc.com$request_uri;
        }
    /var/www/nginx.conf:
        # BEGIN W3TC Minify cache
        location ~ /wp-content/w3tc/min.*\.js$ { types {} default_type application/x-javascript; expires modified 31536000s; add_header X-Powered-By "W3 Total Cache/0.9.2.5b"; add_header Vary "Accept-Encoding"; add_header Pragma "public"; add_header Cache-Control "max-age=31536000, public, must-revalidate, proxy-revalidate"; }
        location ~ /wp-content/w3tc/min.*\.css$ { types {} default_type text/css; expires modified 31536000s; add_header X-Powered-By "W3 Total Cache/0.9.2.5b"; add_header Vary "Accept-Encoding"; add_header Pragma "public"; add_header Cache-Control "max-age=31536000, public, must-revalidate, proxy-revalidate"; }
        location ~ /wp-content/w3tc/min.*js\.gzip$ { gzip off; types {} default_type application/x-javascript; expires modified 31536000s; add_header X-Powered-By "W3 Total Cache/0.9.2.5b"; add_header Vary "Accept-Encoding"; add_header Pragma "public"; add_header Cache-Control "max-age=31536000, public, must-revalidate, proxy-revalidate"; add_header Content-Encoding gzip; }
        location ~ /wp-content/w3tc/min.*css\.gzip$ { gzip off; types {} default_type text/css; expires modified 31536000s; add_header X-Powered-By "W3 Total Cache/0.9.2.5b"; add_header Vary "Accept-Encoding"; add_header Pragma "public"; add_header Cache-Control "max-age=31536000, public, must-revalidate, proxy-revalidate"; add_header Content-Encoding gzip; }
        # END W3TC Minify cache
        # BEGIN W3TC Browser Cache
        gzip on; gzip_types text/css application/x-javascript text/x-component text/richtext image/svg+xml text/plain text/xsd text/xsl text/xml image/x-icon;
        location ~ \.(css|js|htc)$ { expires 31536000s; add_header Pragma "public"; add_header Cache-Control "max-age=31536000, public, must-revalidate, proxy-revalidate"; add_header X-Powered-By "W3 Total Cache/0.9.2.5b"; }
        location ~ \.(html|htm|rtf|rtx|svg|svgz|txt|xsd|xsl|xml)$ { expires 3600s; add_header Pragma "public"; add_header Cache-Control "max-age=3600, public, must-revalidate, proxy-revalidate"; add_header X-Powered-By "W3 Total Cache/0.9.2.5b"; try_files $uri $uri/ $uri.html /index.php?$args; }
        location ~ \.(asf|asx|wax|wmv|wmx|avi|bmp|class|divx|doc|docx|eot|exe|gif|gz|gzip|ico|jpg|jpeg|jpe|mdb|mid|midi|mov|qt|mp3|m4a|mp4|m4v|mpeg|mpg|mpe|mpp|otf|odb|odc|odf|odg|odp|ods|odt|ogg|pdf|png|pot|pps|ppt|pptx|ra|ram|svg|svgz|swf|tar|tif|tiff|ttf|ttc|wav|wma|wri|xla|xls|xlsx|xlt|xlw|zip)$ { expires 31536000s; add_header Pragma "public"; add_header Cache-Control "max-age=31536000, public, must-revalidate, proxy-revalidate"; add_header X-Powered-By "W3 Total Cache/0.9.2.5b"; }
        # END W3TC Browser Cache
        # BEGIN W3TC Minify core
        rewrite ^/wp-content/w3tc/min/w3tc_rewrite_test$ /wp-content/w3tc/min/index.php?w3tc_rewrite_test=1 last;
        set $w3tc_enc ""; if ($http_accept_encoding ~ gzip) { set $w3tc_enc .gzip; } if (-f $request_filename$w3tc_enc) { rewrite (.*) $1$w3tc_enc break; }
        rewrite ^/wp-content/w3tc/min/(.+\.(css|js))$ /wp-content/w3tc/min/index.php?file=$1 last;
        # END W3TC Minify core
        # BEGIN W3TC Skip 404 error handling by WordPress for static files
        if (-f $request_filename) { break; } if (-d $request_filename) { break; }
        if ($request_uri ~ "(robots\.txt|sitemap(_index)?\.xml(\.gz)?|[a-z0-9_\-]+-sitemap([0-9]+)?\.xml(\.gz)?)") { break; }
        if ($request_uri ~* \.(css|js|htc|htm|rtf|rtx|svg|svgz|txt|xsd|xsl|xml|asf|asx|wax|wmv|wmx|avi|bmp|class|divx|doc|docx|eot|exe|gif|gz|gzip|ico|jpg|jpeg|jpe|mdb|mid|midi|mov|qt|mp3|m4a|mp4|m4v|mpeg|mpg|mpe|mpp|otf|odb|odc|odf|odg|odp|ods|odt|ogg|pdf|png|pot|pps|ppt|pptx|ra|ram|svg|svgz|swf|tar|tif|tiff|ttf|ttc|wav|wma|wri|xla|xls|xlsx|xlt|xlw|zip)$) { return 404; }
        # END W3TC Skip 404 error handling by WordPress for static files
        # BEGIN Better WP Security
        location ~ /\.ht { deny all; } location ~ wp-config.php { deny all; } location ~ readme.html { deny all; } location ~ readme.txt { deny all; } location ~ /install.php { deny all; }
        set $susquery 0; set $rule_2 0; set $rule_3 0;
        rewrite ^wp-includes/(.*).php /not_found last; rewrite ^/wp-admin/includes(.*)$ /not_found last;
        if ($request_method ~* "^(TRACE|DELETE|TRACK)"){ return 403; }
        set $rule_0 0; if ($request_method ~ "POST"){ set $rule_0 1; } if ($uri ~ "^(.*)wp-comments-post.php*"){ set $rule_0 2$rule_0; } if ($http_user_agent ~ "^$"){ set $rule_0 4$rule_0; } if ($rule_0 = "421"){ return 403; }
        if ($args ~* "\.\./") { set $susquery 1; } if ($args ~* "boot.ini") { set $susquery 1; } if ($args ~* "tag=") { set $susquery 1; } if ($args ~* "ftp:") { set $susquery 1; } if ($args ~* "http:") { set $susquery 1; } if ($args ~* "https:") { set $susquery 1; } if ($args ~* "(<|%3C).*script.*(>|%3E)") { set $susquery 1; } if ($args ~* "mosConfig_[a-zA-Z_]{1,21}(=|%3D)") { set $susquery 1; } if ($args ~* "base64_encode") { set $susquery 1; } if ($args ~* "(%24&x)") { set $susquery 1; } if ($args ~* "(\[|\]|\(|\)|<|>|ê|\"|;|\?|\*|=$)"){ set $susquery 1; } if ($args ~* "(&#x22;|&#x27;|&#x3C;|&#x3E;|&#x5C;|&#x7B;|&#x7C;|%24&x)"){ set $susquery 1; } if ($args ~* "(%0|%A|%B|%C|%D|%E|%F|127.0)") { set $susquery 1; } if ($args ~* "(globals|encode|localhost|loopback)") { set $susquery 1; } if ($args ~* "(request|select|insert|concat|union|declare)") { set $susquery 1; }
        if ($http_cookie !~* "wordpress_logged_in_" ) { set $susquery "${susquery}2"; set $rule_2 1; set $rule_3 1; }
        if ($susquery = 12) { return 403; }
        # END Better WP Security
    /etc/php5/fpm/php-fpm.conf:
        pid = /var/run/php5-fpm.pid
        error_log = /var/log/php5-fpm.log
        emergency_restart_threshold = 3
        emergency_restart_interval = 1m
        process_control_timeout = 10s
        events.mechanism = epoll
    /etc/php5/fpm/php.ini (only the options I changed):
        open_basedir = "/var/www/"
        disable_functions = pcntl_alarm,pcntl_fork,pcntl_waitpid,pcntl_wait,pcntl_wifexited,pcntl_wifstopped,pcntl_wifsignaled,pcntl_wexitstatus,pcntl_wtermsig,pcntl_wstopsig,pcntl_signal,pcntl_signal_dispatch,pcntl_get_last_error,pcntl_strerror,pcntl_sigprocmask,pcntl_sigwaitinfo,pcntl_sigtimedwait,pcntl_exec,pcntl_getpriority,pcntl_setpriority,dl,system,shell_exec,fsockopen,parse_ini_file,passthru,popen,proc_open,proc_close,shell_exec,show_source,symlink,proc_close,proc_get_status,proc_nice,proc_open,proc_terminate,shell_exec,highlight_file,escapeshellcmd,define_syslog_variables,posix_uname,posix_getpwuid,apache_child_terminate,posix_kill,posix_mkfifo,posix_setpgid,posix_setsid,posix_setuid,escapeshellarg,posix_uname,ftp_exec,ftp_connect,ftp_login,ftp_get,ftp_put,ftp_nb_fput,ftp_raw,ftp_rawlist,ini_alter,ini_restore,inject_code,syslog,openlog,define_syslog_variables,apache_setenv,mysql_pconnect,eval,phpAds_XmlRpc,phpAds_remoteInfo,phpAds_xmlrpcEncode,phpAds_xmlrpcDecode,xmlrpc_entity_decode,fp,fput,virtual,show_source,pclose,readfile,wget
        expose_php = off
        max_execution_time = 30
        max_input_time = 60
        memory_limit = 128M
        display_errors = Off
        post_max_size = 2M
        allow_url_fopen = off
        default_socket_timeout = 60
    APC settings:
        [APC]
        apc.enabled = 1
        apc.shm_segments = 1
        apc.shm_size = 64M
        apc.optimization = 0
        apc.num_files_hint = 4096
        apc.ttl = 60
        apc.user_ttl = 7200
        apc.gc_ttl = 0
        apc.cache_by_default = 1
        apc.filters = ""
        apc.mmap_file_mask = "/tmp/apc.XXXXXX"
        apc.slam_defense = 0
        apc.file_update_protection = 2
        apc.enable_cli = 0
        apc.max_file_size = 10M
        apc.stat = 1
        apc.write_lock = 1
        apc.report_autofilter = 0
        apc.include_once_override = 0
        apc.localcache = 0
        apc.localcache.size = 512
        apc.coredump_unmap = 0
        apc.stat_ctime = 0
    /etc/php5/fpm/pool.d/www.conf:
        user = www-data
        group = www-data
        listen = /var/run/php5-fpm.sock
        listen.owner = www-data
        listen.group = www-data
        listen.mode = 0666
        pm = ondemand
        pm.max_children = 5
        pm.process_idle_timeout = 3s;
        pm.max_requests = 50
    I also started to get 404 errors on the front page if I use W3 Total Cache's Page Cache (Disk Enhanced). It worked fine until some days ago, and then, out of nowhere, it started to happen. Tonight I will disable my mobile plugin and activate only W3 Total Cache to see if there is a conflict between them... And to finish all this, I have been getting this error:
        PHP Warning: apc_store(): Unable to allocate memory for pool. in /var/www/wp-content/plugins/w3-total-cache/lib/W3/Cache/Apc.php on line 41
    I already modified my APC settings, but no success. So... could someone help me with those issues, please? Oh, if it helps, I installed PHP like this:
        sudo apt-get install php5-fpm php5-suhosin php-apc php5-gd php5-imagick php5-curl
    And Nginx from the official PPA. Sorry for my bad English, and thanks for your time, people! (:

    Read the article

  • SSIS - Access Denied with UNC paths - The file name is a device or contains invalid characters

    - by simonsabin
    I spent another day tearing my hair out yesterday trying to resolve an issue with SSIS packages running in SQL Agent (not got much left at the moment; maybe I should contact the SSIS team for a wig). My situation was that I was deploying packages to a development server, and to provide isolation I was running jobs with a proxy account that only had access to the development servers. Proxies are an awesome feature and mean that you should never have to "just run the job as sysadmin". The issue I was facing was that the job step was failing. The job step was a simple execution of the package. The following errors appeared in my log file. (I always check "Log step output in history" for a job step; this ensures you get all the output from the command that you run. I'll blog about this later.) If you look at the output in sysdtslog90, you will have an entry with datacode -1073573533 and error message: File or directory "<filename>" represented by connection "<connection>" does not exist. Not exactly helpful. If you get the output from the console, you will also get these errors: 0xC0202070 "The file name property is not valid. The file name is a device or contains invalid characters." and 0xC001401E "specified in the connection was not valid." It appears this error is due to the use of a UNC path and the account running the package not having access to all the folders in the path. Solution: to solve this, you need to ensure that the proxy account has access to ALL folders in the path you are accessing. To check this works, log on as the relevant proxy user, or run a command window as the specified user. Then try to do net use \\server\share, and then do a dir for each folder in the path and check that you have access. If these work and you still have the problem, then you have some other problem, sorry. The following posts on Experts Exchange also discuss this: http://www.experts-exchange.com/Microsoft/Development/MS-SQL-Server/SSIS/Q_24056047.html and http://www.experts-exchange.com/Microsoft/Development/MS-SQL-Server/SSIS/Q_23968903.html. This blog post suggested it was a 64-bit issue; that definitely wasn't the issue for me, as I was on a 32-bit server: http://blogs.perkinsconsulting.com/post/64-bit-SQL-Server-2005-SSIS-and-UNC-paths-Part-2.aspx
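
    A quick way to run that check from a command prompt (the domain, account, share, and folder names are illustrative):

        :: open a console as the proxy account (prompts for its password)
        runas /user:MYDOMAIN\ssisproxy cmd
        :: in the new console, verify each level of the UNC path
        net use \\fileserver\share
        dir \\fileserver\share
        dir \\fileserver\share\etl
        dir \\fileserver\share\etl\input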

    Read the article

  • 12.04 LTS: unity --reset hangs

    - by Gregory R. Pace
    Nearly every time I reboot my machine, the system panel and integrated app menus fail to load. When I issue 'unity --reset' at a terminal, I get the following errors:

    ... Initializing widget options...done
    Initializing winrules options...done
    Initializing wobbly options...done
    ERROR 2012-11-05 04:36:48 unity.glib-gobject <unknown>:0 g_object_unref: assertion `G_IS_OBJECT (object)' failed
    ERROR 2012-11-05 04:36:48 unity.gtk <unknown>:0 gtk_window_resize: assertion `width > 0' failed
    WARN 2012-11-05 04:37:14 unity <unknown>:0 Unable to fetch children: No such interface `org.ayatana.bamf.view' on object at path /org/ayatana/bamf/application885622223
    ERROR 2012-11-05 04:37:21 unity.glib-gobject <unknown>:0 g_object_set_qdata: assertion `G_IS_OBJECT (object)' failed
    Setting Update "main_menu_key"
    Setting Update "run_key"
    WARN 2012-11-05 04:38:06 unity.iconloader IconLoader.cpp:438 Unable to load icon stock-person at size 24
    WARN 2012-11-05 04:38:26 unity.glib.dbusproxy GLibDBusProxy.cpp:182 Unable to connect to proxy: Error calling StartServiceByName for com.canonical.Unity.Lens.Applications: Timeout was reached
    WARN 2012-11-05 04:38:26 unity.glib.dbusproxy GLibDBusProxy.cpp:182 Unable to connect to proxy: Error calling StartServiceByName for com.canonical.Unity.Lens.Applications: Timeout was reached

    The procedure hangs at this point. Any ideas on how to solve these problems? Thanks in advance.

    Read the article

  • Azure Grid Computing - Worker Roles as HPC Compute Nodes

    - by JoshReuben
    Overview
    · With HPC 2008 R2 SP1 you can add Azure worker roles as compute nodes in a local Windows HPC Server cluster.
    · The subscription for Windows Azure is, like any other Azure service, charged for the time that the role instances are available, as well as for the compute and storage services that are used on the nodes.
    · Win-win? Azure charges by the compute hour (according to VM size), amortized over a month – so you save on purchasing compute node hardware. Microsoft wins because you need to purchase HPC to have a local head node for managing this compute cluster grid distributed in the cloud.
    · Blob storage is used to hold the input and output files of each job. I can see how Parametric Sweep HPC jobs can be supported (where the same job is run multiple times on each node against different input units), but not MPI.NET (where different HPC job instances function as coordinated agents and conduct master–slave inter-process communication), unless Azure is somehow tunneling MPI communication through inter-WorkerRole Azure Queues.
    · This is not the end of the story for Azure grid computing. If MS requires you to purchase a local HPC license (and administrate it), what's to stop a third party from doing this and encapsulating/exposing the HPC WCF Broker Service to you for managing compute nodes? If MS doesn't provide the head node as a service, someone else will!

    Process
    · Requires creation of a worker node template that specifies a connection to an existing subscription for Windows Azure, plus an availability policy for the worker nodes.
    · After worker nodes are added to the cluster, you can start them, which provisions the Windows Azure role instances, and then bring them online to run HPC cluster jobs.
    · A Windows Azure worker role instance runs an HPC-compatible Azure guest operating system, which runs on the VMs that host your service. The guest operating system is updated monthly. You can choose to upgrade the guest OS for your service automatically each time an update is released – all role instances defined by your service will run on the guest operating system version that you specify. See Windows Azure Guest OS Releases and SDK Compatibility Matrix (http://go.microsoft.com/fwlink/?LinkId=190549).
    · Use the hpcpack command to upload file packages and install files to run on the worker nodes. See hpcpack (http://go.microsoft.com/fwlink/?LinkID=205514).

    Requirements
    · This assumes you have an Azure subscription account and the HPC head node installed and configured.
    · Install HPC Pack 2008 R2 SP1 – see Microsoft HPC Pack 2008 R2 Service Pack 1 Release Notes (http://go.microsoft.com/fwlink/?LinkID=202812).
    · Configure the head node to connect to the Internet – connectivity is provided by the connection of the head node to the enterprise network. You may need to configure a proxy client on the head node. Any cluster network topology (1-5) is supported.
    · Configure the firewall – allow outbound TCP traffic on the following ports: 80, 443, 5901, 5902, 7998, 7999 (see the netsh sketch after this list).
    · Note: HPC Server uses Admin Mode (elevated privileges) in Windows Azure to give the service administrator of the subscription the necessary privileges to initialize HPC cluster services on the worker nodes.
    · Obtain a Windows Azure subscription certificate – the Windows Azure subscription must be configured with a public subscription (API) certificate: a valid X.509 certificate with a key size of at least 2048 bits. Generate a self-signed certificate and upload the .cer file via the Windows Azure Portal Account page > Manage my API Certificates link. See Using the Windows Azure Service Management API (http://go.microsoft.com/fwlink/?LinkId=205526).
    · Import the certificate, with an associated private key, on the HPC cluster head node – into the trusted root store of the local computer account.
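
    A hedged sketch of the firewall step using the built-in netsh tool (the rule name is arbitrary; run it from an elevated prompt and verify the syntax against your Windows Server version):

    netsh advfirewall firewall add rule name="HPC Azure outbound" ^
        dir=out action=allow protocol=TCP ^
        remoteport=80,443,5901,5902,7998,7999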
    Obtain Windows Azure Connection Information for HPC Server
    · Required for each worker node template.
    · Copy it from the Azure portal – get it from: navigation pane > Hosted Services > Storage Accounts & CDN.
    · Subscription ID – a 32-char hex string in the form xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx. In the Properties pane.
    · Subscription certificate thumbprint – a 40-char hex string (you need to remove spaces). In Management Certificates > Properties pane.
    · Service name – the value of <ServiceName> configured in the public URL of the service (http://<ServiceName>.cloudapp.net). In Hosted Services > Properties pane.
    · Blob Storage account name – the value of <StorageAccountName> configured in the public URL of the account (http://<StorageAccountName>.blob.core.windows.net). In Storage Accounts > Properties pane.

    Import the Azure Subscription Certificate on the HPC Head Node
    · This enables the services for Windows HPC Server to authenticate properly with the Windows Azure subscription.
    · Use the Certificates MMC snap-in to import the certificate to the Trusted Root Certification Authorities store of the local computer account. The certificate must be in PFX format (.pfx or .p12 file) with a private key that is protected by a password.
    · See Certificates (http://go.microsoft.com/fwlink/?LinkId=163918).
    · To open the Certificates snap-in: Run > mmc. File > Add/Remove Snap-in > Certificates > Computer account > Local Computer.
    · To import the certificate via the wizard: Certificates > Trusted Root Certification Authorities > Certificates > All Tasks > Import.
    · After the certificate is imported, it appears in the details pane in the Certificates snap-in. You can open the certificate to check its status. (A command-line sketch for generating the self-signed certificate in the first place follows below.)
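
    As a command-line alternative for generating the self-signed certificate required above, here is a sketch using makecert from the Windows SDK – the flags are as I recall them, so treat this as an assumption and verify against the SDK documentation; the subject and file names are placeholders:

    rem Self-signed, 2048-bit, exportable private key, placed in the
    rem local machine store; also writes the public .cer for the portal
    makecert -r -pe -n "CN=AzureHpcSubscription" -len 2048 ^
        -ss My -sr localmachine AzureHpcSubscription.cer

    Upload the .cer on the portal's Manage my API Certificates page, then export the certificate with its private key as a .pfx and import it on the head node with the Certificates MMC snap-in as described above.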
    Configure a Proxy Client on the HPC Head Node
    · The following Windows HPC Server services must be able to communicate over the Internet (through the firewall) with the services for Windows Azure: HPCManagement, HPCScheduler, HPCBrokerWorker.

    Create a Windows Azure Worker Node Template
    · Edit HPC node templates in HPC Node Template Editor.
    · Specify: 1) the Windows Azure subscription connection info (unique service name) for adding a set of worker nodes to the cluster, and 2) the worker node availability policy – rules for deploying / removing worker role instances in Windows Azure.
      o HPC Cluster Manager > Configuration > Navigation Pane > Node Templates > Actions pane > New (opens the Create Node Template Wizard) or Edit (opens the Node Template Editor)
      o Choose Node Template Type page – Windows Azure worker node template
      o Specify Template Name page – template name & description
      o Provide Connection Information page – Azure Subscription ID (text) & subscription certificate (browse)
      o Provide Service Information page – Azure service name + blob storage account name (optionally click Retrieve Connection Information to get the list available from Azure – possible LRT)
      o Configure Azure Availability Policy page – how Windows Azure worker nodes start / stop (bring the worker role instance online / offline – add / remove) – manual or automatic
      o For automatic – in the Configure Windows Azure Worker Availability Policy dialog, select the days and hours for worker nodes to start / stop.
    · To validate the Windows Azure connection information, on the template's Connection Information tab, click Validate connection information.
    · You can upload a file package to the storage account that is specified in the template – e.g. upload application or service files that will run on the worker nodes. See hpcpack (http://go.microsoft.com/fwlink/?LinkID=205514).

    Add Azure Worker Nodes to the HPC Cluster
    · Use the Add Node Wizard – specify: 1) the worker node template, 2) the number of worker nodes (within the quota of role instances in the Azure subscription), and 3) the VM size of the worker nodes: ExtraSmall, Small, Medium, Large, or ExtraLarge.
    · To add worker nodes of different sizes, you must run the Add Node Wizard separately for each size.
    · All worker nodes that are added to the cluster by using a specific worker node template define a set of worker nodes that will be deployed and managed together in Windows Azure when you start the nodes. This includes worker nodes that you add later by using the worker node template and, if you choose, worker nodes of different sizes. You cannot start, stop, or delete individual worker nodes.
    · To add Windows Azure worker nodes:
      o In HPC Cluster Manager: Node Management > Actions pane > Add Node (opens the Add Node Wizard)
      o Select Deployment Method page – Add Azure Worker nodes
      o Specify New Nodes page – select a worker node template, then specify the number and size of the worker nodes
    · After you add worker nodes to the cluster, they are in the Not-Deployed state, and they have a health state of Unapproved. Before you can use the worker nodes to run jobs, you must start them and then bring them online.
    · Worker nodes are numbered consecutively in a naming series that begins with the root name AzureCN – this is non-configurable.

    Deploying Windows Azure Worker Nodes
    · To deploy the role instances in Windows Azure, start the worker nodes added to the HPC cluster and bring the nodes online so that they are available to run cluster jobs. This can be configured in the Azure worker node template (Azure Availability Policy) to be automatic or manual.
    · The Start, Stop, and Delete actions take place on the set of worker nodes that are configured by a specific worker node template. You cannot perform one of these actions on a single worker node in a set. You also cannot perform a single action on two sets of worker nodes (specified by two different worker node templates).
    · Starting a set of worker nodes deploys a set of worker role instances in Windows Azure, which can take some time to complete, depending on the number of worker nodes and the performance of Windows Azure.
    · To start worker nodes manually and bring them online:
      o In HPC Node Management > Navigation Pane > Nodes > List / Heat Map view, select one or more worker nodes.
      o Actions pane > Start – in the Start Azure Worker Nodes dialog, select a node template.
      o The state of the worker nodes changes from Not Deployed; to track the provisioning progress, see the worker node Details Pane > Provisioning Log tab.
      o If there were errors during the provisioning of one or more worker nodes, the state of those nodes is set to Unknown and the node health is set to Unapproved. To determine the reason for the failure, review the provisioning logs for the nodes.
      o After a worker node starts successfully, the node state changes to Offline. To bring the nodes online, select the nodes that are in the Offline state > Bring Online.
    · Troubleshooting:
      o Check the node template.
      o Use telnet to test connectivity: telnet <ServiceName>.cloudapp.net 7999 (a batch sketch covering both high ports follows below).
      o Check node status – deployment status information appears in the service account information in the Windows Azure Portal, which HPC queries; see the node status information for any failed nodes in HPC Node Management.
    · When role instances are deployed, file packages that were previously uploaded to the storage account using the hpcpack command are automatically installed. You can also upload file packages to storage after the worker nodes are started, and then manually install them on the worker nodes. See hpcpack (http://go.microsoft.com/fwlink/?LinkID=205514).
    · To remove a set of role instances in Windows Azure, stop the nodes by using HPC Cluster Manager (apply the Stop action). This deletes the role instances from the service and changes the state of the worker nodes in the HPC cluster to Not Deployed.
    · Each time that you start a set of worker nodes, two proxy role instances (size Small) are configured in Windows Azure to facilitate communication between HPC Cluster Manager and the worker nodes. The proxy role instances are not listed in HPC Cluster Manager after the worker nodes are added. However, the instances appear in the Windows Azure Portal. The proxy role instances incur charges in Windows Azure along with the worker node instances, and they count toward the quota of role instances in the subscription.
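
    To extend the single telnet check above to both of the high ports listed under Requirements (7998 and 7999), here is a small batch sketch – the service name is a placeholder, and each telnet call opens an interactive session when the port is reachable, so close it to continue with the next port:

    @echo off
    rem assumption: myhpcservice is your hosted service name
    set SVC=myhpcservice.cloudapp.net
    for %%p in (7998 7999) do (
        echo Testing %SVC% on port %%p
        telnet %SVC% %%p
    )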

    Read the article

< Previous Page | 145 146 147 148 149 150 151 152 153 154 155 156  | Next Page >