Search Results

Search found 1858 results on 75 pages for 'fmw cluster'.

Page 38/75

  • ArchBeat Link-o-Rama for October 29, 2013

    - by OTN ArchBeat
    Exceptions Handling and Notifications in ODI | Christophe Dupupet
    Oracle Fusion Middleware A-Team director Christophe Dupupet reviews the techniques available in Oracle Data Integrator to guarantee that the appropriate individuals are notified in the event that ODI processes are impacted by network outages or other mishaps.
    Tech Article: SOA in Real Life: Mobile Solutions
    The latest article in the Industrial SOA series looks at mobile computing and how companies are developing SOA to go.
    Oracle Coherence, Split-Brain and Recovery Protocols In Detail | Ricardo Ferreira
    Ricardo Ferreira's article "provides a high level conceptual overview of Split-Brain scenarios in distributed systems," focusing on a "specific example of cluster communication failure and recovery in Oracle Coherence."
    WebLogic & FMW Provisioning update | Edwin Biemond
    "Provisioning was a hot topic at Oracle OpenWorld 2013," says Oracle ACE Edwin Biemond. His latest blog post discusses what is now possible with WebLogic and Fusion Middleware, and looks at what might be possible in the future.
    Reusing and Extending ADF BC Entities from Common Model | Andrejus Baranovskis
    Oracle ACE Director Andrejus Baranovskis' post is about "ADF architecture and better application structuring with EO reuse from a common model." Andrejus describes "how to implement additional requirements to common model in extended ADF BC Entities."
    Thought for the Day
    "I work hard, I work late, I have nothing on my conscience. When I go to bed, I sleep." — Ellen Johnson Sirleaf, 24th and current President of Liberia (born 29 October 1938)
    Source: brainyquote.com

    Read the article

  • FrameworkFolders support for WebCenter Portal

    - by Justin Paul-Oracle
    Oracle WebCenter Content Server includes components that provide a hierarchical folder interface, similar to a conventional file system, for organizing and managing some or all of the content in the repository. We used Contribution Folders (folders_g), which is being replaced by the new Folders (FrameworkFolders) component. The newer FrameworkFolders component fixes a number of limitations that folders_g had and adds a ton of new features. If you have played with the WebCenter Content mobile app, you will notice that it uses FrameworkFolders too. Until now, WebCenter Portal required the use of the folders_g component. Because folders_g and FrameworkFolders do not go well together and must never be enabled together on any system, you could never use the new features if you planned to integrate Content with Portal. I still remember presenting a demo with a colleague for a client: he was showing off the mobile capabilities in Portal (which are pretty impressive, by the way) and getting all the glory, while I sat back wanting to show off the Content app but could not (no FrameworkFolders). Not any more: Oracle released bundle patches for both WebCenter Content and Portal in April 2014 which bring support for FrameworkFolders. These patches bring these products to version 11.1.1.8.3. You can enable support for FrameworkFolders on new Content and Portal installations where the folders_g component has never been enabled using the following patches:
    - Download and apply the WebCenter Content MLR03 patch 18088049.
    - Download and apply the WebCenter Portal BP3 patch 18085041.
    - Download and apply the WebCenterConfigure component patch 18387955. Before applying this patch on the Content Server, ensure that the WebCenter Content MLR03 patch 18088049 has already been applied.
    You can find detailed instructions on how to enable FrameworkFolders support in the Oracle® Fusion Middleware Installation Guide for Oracle WebCenter Portal 11g Release 1 (11.1.1.8.3). There may be another bundle patch release (version 11.1.1.8.4) for many FMW products (including WebCenter Portal) in July 2014 which could further enhance the FrameworkFolders support.
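
    For reference, bundle patches like these are normally applied with OPatch. A minimal sketch follows, in which ORACLE_HOME and the staging paths are assumptions and each patch README remains the authority:

        # Sketch only: paths are placeholders; stop the affected managed servers first.
        export ORACLE_HOME=/u01/oracle/middleware/Oracle_WCC1   # WebCenter Content home (assumed)
        export PATH=$ORACLE_HOME/OPatch:$PATH

        cd /stage/patches/18088049    # unzipped patch directory (assumed)
        opatch apply

        # Confirm the patch is registered in the inventory afterwards.
        opatch lsinventory | grep 18088049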

    Read the article

  • 14540059 - UPDATE FOR BI PUBLISHER ENTERPRISE 11.1.1.6.0 AUGUST

    - by Tim Dexter
    It's been a while, I know :( I have posts in the pipe, just gotta smoke 'em out! The latest update for BIP 11.1.1.6 was released last week. A bunch of defects have been addressed, as you can see below.
    13473493 - XMLP TRANSLATION ISSUE OF MILLION (ENG) TO MILLIONES (SPANISH)
    13521951 - BIP UPGRADE FROM 10G TO 11.1.1.5.0 IS NOT SUCCESSFULL FOR TIAA-CREF
    12542914 - ACC: REPORT VIEWER STRUCTURE HAS ERRORS - NO IFRAME AND NO LANG ATTRIBUTE
    13562801 - XML TAG DISPLAY SHOULD DEFAULT TO 'FOLLOW THE DATA'
    13568043 - BIP QUERY FAILING VALIDATION DUE TO 'COALESCE' KEYWORD
    13592901 - THE REPORT IS THROWING AN SQL ERROR THAT REFERENCES CHECKING FOR NULL VALUES
    13836696 - BI PUBLISHER REPORT NOT GENERATED WHEN A TEXT FIELD START WITH "E.<SPACE>"
    13879206 - DM MIGRATION ISSUES
    13888939 - DM: LOV SEARCH CAUSING DB CONNECTION LEAK
    13904225 - XSLX ERROR DUE TO URL LINK AND USE OF LIST
    13930795 - RTF TEMPLATE GIVING DIFFERENT RESULTS IN DIFFERENT
    13942064 - XDOEXCEPTION THROWN WHEN RUNNING PEOPLESOFT TEMPLATES AND XML FILE
    13981523 - BI PUBLISHER ON 64-BIT WINDOWS CAN'T CONNECT TO MS ANALYSIS SERVICES CUBE
    14039229 - BIP 11.1.1.5.0 REPORTS ARE NOT WORKING ON BIP 11.1.1.6.0
    14055793 - BIP 11.1.1.6.0: DATE TYPE INPUT PARAMTER IS NOT DISPLAYING THE CORRECT VALUE USI
    14059851 - UNABLE TO GRANT PRIVILEGES TO ROLE: DOMAIN USERS; THE ROLE DOES NOT EXIST
    14109967 - LARGE OUTPUT CAUSES OUT OF MEMORY DUE TO LEFT OVER DEBUG CODE
    14163973 - ISSUES USING DATA MODEL EDITOR IN BIP 11.1.1.6
    14167915 - ORG.XML.SAX.SAXEXCEPTION: DATE FORMAT CANNOT BE NULL
    14240045 - EDITING SCHEDULED REPORTS DOES NOT REFLECT VALID VALUES FOR UPGRADED SCHEDULES
    14304427 - SEARCH DIALOG NOT BINDING PARAMETER VALUE - INVALID PARAMETER BINDING(S).
    14338158 - PASSWORD FIELD SHOULD NOT BE DISPLAYED FOR FMW SECURITY MODEL
    14393825 - OBIEE11G: LARGE NUMBER OF OBIPS SESSIONS CREATED WHEN USING SSO AND BI PUB
    14558377 - CONT. BUG 14240045: EDITING SCHEDULES IN BI PUBLISHER IS DEFAULTING TO 'ALL'
    This patch is just for BI Publisher standalone installs. For those of you using BIP within the wider BIEE suite, there is the 11.1.1.6.2 BP1 patchset. More details on that here.

    Read the article

  • Another big year for the ADF EMG at OOW12

    - by Chris Muir
    Oracle Open World 2012 has only just started, but in one way it's just finished! All the ADF EMG's OOW content is over for another year! The unique highlight this year for me was the first ever ADF EMG social night, held on Saturday, where I finally had the chance to meet so many ADF community members whom I've known over the internet but never met in person. What? You didn't get an invite? Oh well, better luck next year ;-) Seriously, our budget was limited, so in the happy-dictatorship sort of way I had to limit RSVPs to just 40 people. Hopefully next year we can do something bigger and better for the wider community. Following directly on from the Saturday social night, the ADF EMG ran a full day of sessions at the user group Sunday. I won't go over the content again, but thank you very much to all our presenters and helpers, including Gert Poel, Pitier Gillis, Aino Andriessen, Simon Haslam, Ken Mizuta, Lucas Jellema and the FMW roadshow team, Ronald van Luttikhuizen, Guido Schmutz, Luc Bors and Lonneke Dikmans. Also, special thanks must go to Doug Cockroft and Bambi Price for their time and effort in organizing the ADF EMG room behind the scenes via the APOUC. To be blunt, Doug and Bambi really do deserve serious thanks, because they had to wear a lot of Oracle politics behind the scenes to get the rooms organized (oh, and deal with me fretting too! ;-). Finally, thanks to all the members and OOW delegates for turning up and supporting the group on the day. In the end the ADF EMG exists for you, and I hope you found it worthwhile. Onto 2013 (oh, and the rest of OOW12 ;-)

    Read the article

  • ADF Faces Skin Editor - How to Work with It

    - by Shay Shmeltzer
    The ODTUG Kscope11 conference was a great success, with lots of sessions about FMW running in a special track. I did several sessions and labs at the conference, and I thought it might be a good idea to at least give you a taste of what you might have missed. So here is most of what I demoed in my ADF Faces Skinning session (not all, though: that session was 60 minutes long, and while everyone did end up leaving the building in the middle for about 5 minutes because of a fire drill, there were other things covered in the session as well). In the demo here you'll see how to generate new images and a default color scheme, how to identify a component class with Firebug, how to skin a component, how to identify the global selector of a property, how to change fonts, and how to change strings. By the way, for more on ADF skinning you should also listen to the ADF Insider seminar that Frank Nimphius recorded on skinning; it will give you a better understanding of the overall skinning process. P.S. In the demo I add an entry to the web.xml file which prevents ADF Faces from compressing the HTML that is generated. The entry is for org.apache.myfaces.trinidad.DISABLE_CONTENT_COMPRESSION and I set it to true. This is very useful while you work on creating the skin, but don't forget to unset it before you go to production.
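
    For reference, here is what that web.xml entry looks like; the parameter name and value are from the post, while the surrounding elements are standard servlet boilerplate:

        <!-- Disable content compression while developing a skin; remove or set to false before production. -->
        <context-param>
          <param-name>org.apache.myfaces.trinidad.DISABLE_CONTENT_COMPRESSION</param-name>
          <param-value>true</param-value>
        </context-param>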

    Read the article

  • WebCenter 11.1.1.8 Certified with E-Business Suite 12.2

    - by Steven Chan (Oracle Development)
    Oracle WebCenter Suite is an integrated suite of Fusion Middleware 11gR1 tools used to create web sites and portals using service-oriented architecture (SOA). Application adapters are also available. WebCenter Portal 11.1.1.8 is now certified with Oracle E-Business Suite Release 12.2. This complements our existing certifications of WebCenter Portal 11.1.1.8 with EBS 12.0 and 12.1. WebCenter Portal 11.1.1.8 is part of Oracle Fusion Middleware 11g Release 1 Version 11.1.1.8.0, also known as FMW 11gR1 Patchset 7.
    Certified Platforms
    Oracle WebCenter Portal is certified to run on any operating system for which Oracle WebLogic Server 11g is certified. For information on operating systems supported by Oracle WebLogic Server 11g and Oracle WebCenter Portal, refer to the 'Oracle Fusion Middleware on WebLogic Server - System Certification' in the Oracle Fusion Middleware 11g Release 1 (11.1.1.x) Certification Matrix. Integration with Oracle WebCenter Portal involves components spanning several different suites of Oracle products. There are no restrictions on which platform any particular component may be installed, so long as the platform is supported for that component.
    Migrating to Oracle WebCenter
    If you're currently using Oracle Portal, you should be aware that Portal is now in maintenance mode. Updates with bug fixes will continue to be produced, but you should consider migrating to Oracle WebCenter for ongoing new features.
    References
    Using WebCenter 11.1.1 with Oracle E-Business Suite Release 12.2 (Note 1332645.1)
    WebCenter Portal 11g Release 1 (11.1.1.8) Documentation
    Related Articles
    Oracle E-Business Suite 12.2 Now Available
    WebCenter Portal 11.1.1.8 Certified with E-Business Suite 12

    Read the article

  • How to start WebLogic Server using default scripts?

    - by Luz Mestre-Oracle
    There are a few common issues reported when starting WebLogic Server using scripts:
    1. User is not able to access the WebLogic console.
    2. After a few days/hours, WebLogic Server stops abruptly.
    3. When the user closes PuTTY, they are not able to connect to WebLogic Server anymore.
    4. When the user closes the Windows command prompt, they are not able to connect to WebLogic Server anymore.
    5. WebLogic is started using startManagedWebLogic.cmd/startManagedWebLogic.sh.
    By default, WebLogic Server does not run in background mode, so after you close the window the process finishes as well. On Linux/Unix-based platforms, you need to use: nohup ./startManagedWebLogic.sh <Server> <URL> & On Windows platforms, you need to start Managed Servers using Windows Services: How to Install MS Windows Services For FMW 11g WebLogic Domain Admin and Managed Servers (Doc ID 1060058.1) http://docs.oracle.com/cd/E23943_01/web.1111/e13708/winservice.htm There are a few more reasons that could cause similar symptoms, like a JVM crash, signals sent by the operating system, and many others. But the steps above are the place to start. Enjoy!
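
    As a concrete sketch of the Linux/Unix case (server name, admin URL, domain path, and log location are all placeholders):

        # Placeholders throughout: adjust DOMAIN_HOME, the server name, and the admin URL.
        cd $DOMAIN_HOME/bin
        nohup ./startManagedWebLogic.sh MS1 t3://adminhost:7001 > /tmp/MS1.out 2>&1 &

        # The server now survives the terminal/PuTTY session closing; watch startup with:
        tail -f /tmp/MS1.out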

    Read the article

  • Dynamic Monitoring Service (DMS) Configuration Dumping and CPU Utilization

    - by ShawnBailey
    There was recently a report of CPU spikes on a system that were occurring at precise 3-hour intervals. Research revealed that the spikes were the result of the Dynamic Monitoring Service generating a metrics dump and writing it under the server 'logs' folder for every WLS server in the domain. This blog provides some information on what this is for and how to control it. The Dynamic Monitoring Service is a facility in FMW (JRF to be more precise) that collects runtime data on the components deployed to WebLogic. Each component is responsible for how much or how little it uses the service, and SOA collects a fair amount of information. To view what is collected on any running server you can use the following URL, http://host:port/dms/Spy, and log in with admin credentials. DMS is essentially always running and collecting this information in the runtime, and to protect against loss of this data it also runs automatic backups, by default at the 3-hour interval mentioned above. Most of the management options for DMS are exposed through WLST, but these settings are not, so we must open the dms_config.xml file, which can be found in DOMAIN_HOME/config/fmwconfig/servers/<server_name>/dms_config.xml. The contents are fairly short, and at the bottom you will find the following entry:

        <dumpConfiguration>
            <dump intervalSeconds="10800" maxSizeMBytes="75" enabled="true"/>
        </dumpConfiguration>

    The interval of 10800 seconds corresponds to the 3 hours and the maximum size is 75MB. The file is written as an archive to DOMAIN_HOME/servers/<server_name>/logs/metrics. This archive contains the dump in XML format. You can disable the dumps altogether by simply setting the 'enabled' value to 'false', or of course you could modify the other parameters to suit your needs. Disabling the dumps will NOT impact DMS collection or display at runtime. It will only eliminate these periodic backups.
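
    For example, to stop the 3-hourly dumps (and the associated CPU spikes) while leaving runtime collection untouched, the stanza would read:

        <!-- Periodic metric dumps disabled; /dms/Spy and runtime DMS collection are unaffected. -->
        <dumpConfiguration>
            <dump intervalSeconds="10800" maxSizeMBytes="75" enabled="false"/>
        </dumpConfiguration>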

    Read the article

  • curl http_code of 000

    - by Mikkel Paulson
    I have a shell script that I use to monitor loading times and response codes on my live server cluster. It runs a total of 250 iterations every 5 minutes, distributed across 10 servers and 6 sites. It uses curl with the -w flag to return pertinent information, which is then parsed by my shell script:

        curl -svw 'monitor_load_times %{time_total} %{http_code}' -b 'server=$server' -m 15 -o /dev/null $url 2>&1

    This information is then parsed by a graphing script that can display a number of different responses. However, curl will occasionally return a response code of "000". When this happens, it seems to happen multiple times at once despite being distributed over many iterations. What I'm trying to work out is whether this is a client-side issue that's skewing my results or whether it's actually indicative of a server-side problem affecting my entire cluster. Does 000 mean that the connection was dropped? Database entries corresponding to curl iterations with that response code return "0.000" for the time_total value. All of the search results I've found for curl returning a code of 000 are related to HTTPS being unsupported, but all of my test URLs are HTTP. (The spike in 500 errors is a completely unrelated issue that affected my servers last night.)
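
    One way to narrow this down (a sketch, not from the original post): %{http_code} prints 000 whenever curl never received an HTTP status line, so curl's own exit code is the discriminator, e.g. 6 for a DNS failure, 7 for connection refused, 28 for a timeout hitting the -m 15 limit:

        # Sketch: record curl's exit code alongside http_code to classify the 000 cases.
        url="http://example.com/"   # assumed test URL
        out=$(curl -s -o /dev/null -w '%{time_total} %{http_code}' -m 15 "$url")
        rc=$?                       # 0 = got a response; 6 = DNS; 7 = refused; 28 = timed out
        echo "$(date '+%F %T') rc=$rc $out" >> /var/log/curl-monitor.log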

    Read the article

  • What is my miniport's service name?

    - by Ian Boyd
    I am trying to query the physical sector size of my drive using fsutil:

        C:\Windows\system32>fsutil fsinfo ntfsinfo c:
        NTFS Volume Serial Number : 0x78cc11b2cc116c1e
        Version : 3.1
        Number Sectors : 0x000000003a382fff
        Total Clusters : 0x00000000074705ff
        Free Clusters : 0x00000000022fc29b
        Total Reserved : 0x00000000000007d0
        Bytes Per Sector : 512
        Bytes Per Physical Sector : <Not Supported>
        Bytes Per Cluster : 4096
        Bytes Per FileRecord Segment : 1024
        Clusters Per FileRecord Segment : 0
        Mft Valid Data Length : 0x00000000305c0000
        Mft Start Lcn : 0x00000000000c0000
        Mft2 Start Lcn : 0x0000000003a382ff
        Mft Zone Start : 0x0000000006951940
        Mft Zone End : 0x0000000006951c80
        RM Identifier: 19B22CBE-570D-19DE-9C72-CD758F800DDC

    You can see that the Bytes Per Physical Sector value is <Not Supported>. In the KB article Microsoft support policy for 4K sector hard drives in Windows, Microsoft says: If fsutil.exe continues to display "Bytes Per Physical Sector : <Not Supported>" after you apply the latest storage driver and the required hotfixes, make sure that the following registry path exists:

        HKLM\CurrentControlSet\Services\<miniport's service name>\Parameters\Device\
        Name: EnableQueryAccessAlignment
        Type: REG_DWORD
        Value: 1: Enable

    The only thing I don't know is what my miniport's service name is. What is my miniport's service name? I know that my SATA drives are in AHCI mode, and AHCI uses the msahci driver service. Is that my miniport service? "MSAHCI"?
    See also
    Hitachi - Advanced Format Technology Brief
    RMPrepUSB - Advanced Format (4K sector) hard disks
    Microsoft support policy for 4K sector hard drives in Windows
    OSR Online - Advanced Disk Format support in Storport Virtual Miniport driver
    Default cluster size for NTFS, FAT, and exFAT
    Wikipedia - Advanced Format
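
    If msahci is indeed the miniport service (an assumption: it is the in-box AHCI miniport on Windows 7, but verify your controller's actual service under HKLM\SYSTEM\CurrentControlSet\Services first), the KB's value could be created like this; note the full path spells out SYSTEM\CurrentControlSet:

        :: Sketch: assumes the storage miniport service is msahci; reboot afterwards.
        reg add "HKLM\SYSTEM\CurrentControlSet\Services\msahci\Parameters\Device" ^
            /v EnableQueryAccessAlignment /t REG_DWORD /d 1

        :: Then re-check:
        fsutil fsinfo ntfsinfo c: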

    Read the article

  • Sticky connection and HTTPS support for HAProxy

    - by Saif
    Hi Mates, We have 2 HTTP load balancers with HAProxy and heartbeat. There are 4 Apache nodes in this cluster, doing round-robin load balancing, and the HTTP cluster is working fine. We are having a problem with our portal because it uses SSO: we need sticky connection support in our HAProxy. We also need load balancing for HTTPS traffic. Here's our HAProxy conf file:

        global
            # to have these messages end up in /var/log/haproxy.log you will need to:
            # 1) configure syslog to accept network log events ('-r' option to
            #    SYSLOGD_OPTIONS in /etc/sysconfig/syslog)
            # 2) configure local2 events to go to /var/log/haproxy.log, e.g. add
            #    to /etc/sysconfig/syslog:  local2.* /var/log/haproxy.log
            log 127.0.0.1 local0
            log 127.0.0.1 local1 notice
            chroot /var/lib/haproxy
            pidfile /var/run/haproxy.pid
            maxconn 4000
            user haproxy
            group haproxy
            daemon
            # turn on stats unix socket
            stats socket /var/lib/haproxy/stats

        # common defaults that all the 'listen' and 'backend' sections will
        # use if not designated in their block
        defaults
            mode http
            log global
            option httplog
            option dontlognull
            option http-server-close
            option forwardfor except 127.0.0.0/8
            option redispatch
            retries 3
            timeout http-request 10s
            timeout queue 1m
            timeout connect 10s
            timeout client 1m
            timeout server 1m
            timeout http-keep-alive 10s
            timeout check 10s
            maxconn 3000

        # main frontend which proxies to the backends
        frontend main *:5000
            acl url_static path_beg -i /static /images /javascript /stylesheets
            acl url_static path_end -i .jpg .gif .png .css .js
            use_backend static if url_static
            default_backend app

        # static backend for serving up images, stylesheets and such
        backend static
            balance roundrobin
            server static 127.0.0.1:4331 check

        # round robin balancing between the various backends
        backend app

        listen ha-http 10.190.1.28:80
            mode http
            stats enable
            stats auth admin:xxxxxx
            balance roundrobin
            cookie JSESSIONID prefix
            option httpclose
            option forwardfor
            option httpchk HEAD /haproxy.txt HTTP/1.0
            server apache1 portal-04:80 cookie A check
            server apache2 im-01:80 cookie B check
            server apache3 im-02:80 cookie B check
            server apache4 im-03:80 cookie B check

    Please advise. Thanks for your help in advance.
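
    Two editorial observations on the config above, offered as a sketch rather than a verified fix: first, apache2 through apache4 all carry cookie value B, so they are indistinguishable for cookie-based stickiness; each server needs a unique cookie value. Second, HAProxy of this vintage cannot terminate SSL itself, so HTTPS is usually balanced in TCP mode with source-IP stickiness, along these lines (the address is taken from the HTTP listener; everything else is an assumption):

        # Sketch: HTTPS passthrough; stickiness comes from the client source address
        # because a cookie cannot be inserted into encrypted traffic.
        listen ha-https 10.190.1.28:443
            mode tcp
            option ssl-hello-chk
            balance source
            server apache1 portal-04:443 check
            server apache2 im-01:443 check
            server apache3 im-02:443 check
            server apache4 im-03:443 check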

    Read the article

  • Dynamically add Server 2008 NLB Nodes

    - by Nick Jacques
    Hi All, I have a small NLB cluster for Terminal Servers. One of the things we're looking at doing for this particular project (this is for a college class) is dynamically creating Terminal Servers. What we've done is create policies for a certain OU that set the proper TS Farm properties and install the Terminal Server role and NLB feature. Now what we'd like to do is create a script to be run on our Domain Controller to add hosts to the preexisting NLB cluster. On our Server 2008 R2 Domain Controller, I was thinking of running the following PowerShell script I've kind of hacked together. Any thoughts on whether this will work? Is there any way I can trigger this script to run on the DC once all the scripts to install roles are done on the various Terminal Servers? Thanks very much in advance!!

        Import-Module NetworkLoadBalancingClusters

        $TermServs = @()
        $Interface = "Local Area Connection"
        $ou = [ADSI]"LDAP://OU=Term Servs,DC=example,DC=com"

        foreach ($child in $ou.psbase.Children)
        {
            if ($child.ObjectCategory -like '*computer*') { $TermServs += $child.Name }
        }

        foreach ($TS in $TermServs)
        {
            Get-NlbCluster 172.16.0.254 | Add-NlbClusterNode -NewNodeName $TS -NewNodeInterface $Interface
        }

    Read the article

  • netlogon errors

    - by rorr
    I have two instances of MSSQL 2005 and am using CA XOsoft replication. The master is a failover cluster and the replica is a standalone server. They are all running Server 2003 SP2 x64, with the same patch levels on all servers. This setup worked great for several months until we recently restricted the RPC ports on both nodes of the master (5000-6000, using rpccfg.exe). We have to implement egress filtering, thus the limiting of the ports. We began receiving login errors for SQL Windows authentication and NETLOGON Event ID 5719:

        This computer was not able to set up a secure session with a domain controller
        in domain due to the following: Not enough storage is available to process this
        command. This may lead to authentication problems. Make sure that this computer
        is connected to the network. If the problem persists, please contact your domain
        administrator.

    We also see group policies failing to update and cluster file shares going offline at the same time. The RPC ports were set back to default when we started seeing these problems and the servers were rebooted, but the problems persist. The domain controllers are not showing any errors, and running dcdiag and netdiag shows everything is fine. We have noticed that the XOsoft service ws_rep.exe is using a lot of handles (8-9k), about the same number that sqlserver is using. As soon as XOsoft replication is stopped, the login errors cease and everything functions correctly. I have opened a ticket with CA for XOsoft, but I'm not sure that the problem is actually XOsoft rather than it simply being the thing that brings the problem to light. I'm looking for tips on debugging RPC problems, specifically on limiting the ports and then reverting the changes.
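
    One thing worth double-checking when reverting (an editorial sketch based on the classic RPC internet-ports mechanism from KB 154596, not on the original post): the restriction rpccfg applies is persisted in the registry, so you can confirm it is really gone rather than merely re-running rpccfg:

        :: Sketch: inspect the persisted RPC port restriction on each node.
        reg query "HKLM\SOFTWARE\Microsoft\Rpc\Internet"

        :: The restriction lives in these values (Ports = 5000-6000 in this case);
        :: deleting the whole key restores default dynamic ports. Reboot afterwards.
        reg delete "HKLM\SOFTWARE\Microsoft\Rpc\Internet" /f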

    Read the article

  • What is optimal hardware configuration for heavy load LAMP application

    - by Piotr K.
    I need to run a Linux-Apache-PHP-MySQL application (the Moodle e-learning platform) for a large number of concurrent users: I am aiming at 5000 users. By concurrent I mean that 5000 people should be able to work with the application at the same time. "Work" means not only database reads but writes as well. The application is not very typical, since it is doing a lot of inserts/updates on the database, so caching techniques do not help much. We are using the InnoDB storage engine. In addition, the application is not written with performance in mind; for instance, one Apache thread usually occupies about 30-50 MB of RAM. I would be grateful for information on what hardware is needed to build a scalable configuration that is able to handle this kind of load. We are right now using two HP DL380s with two 4-core processors each, which are able to handle a much lower load (typically 300-500 concurrent users). Is it reasonable to invest in this kind of box and build a cluster using them, or is it better to go with some more high-end hardware? I am particularly curious:
    - how many servers are needed and how powerful they must be (number of processors/cores, size of RAM)
    - what network equipment should be used (what kind of switches, network cards)
    - what other hardware, like particular disk storage solutions, is needed
    Another thing is how to put everything together, that is, what the most optimal architecture is. Clustering with MySQL is rather hard (people are complaining about MySQL Cluster, even here on Stack Overflow).

    Read the article

  • HA Proxy won't load balance my web requests. What have I done wrong?

    - by Josh Smeaton
    I've finally got HAProxy set up and running in a way I think I want. However, it is not load balancing the web requests it receives: all requests are currently being forwarded to the first server in the cluster. I'm going to paste my configuration below; if anyone can see where I may have gone wrong, I'd appreciate it. This is my first stab at configuring web servers in a *nix environment. First up, I have HAProxy running on the same host as the first server in the Apache cluster. We are moving these servers to virtual later on, and they will have different virtual hosts, but I wanted to get this running now. Both web servers are receiving their health checks and are reporting back correctly; the haproxy?stats page correctly reports servers that are up and down. I've tested this by altering the name of the file that is checked. I haven't put any load onto these servers yet. I've just opened up the URLs in several tabs (private browsing), and had several co-workers hit the URL too. All of the traffic goes to WEB1. Am I balancing incorrectly?

        global
            maxconn 10000
            nbproc 8
            pidfile /var/run/haproxy.pid
            log 127.0.0.1 local0 debug
            daemon

        defaults
            log global
            mode http
            retries 3
            option redispatch
            maxconn 5000
            contimeout 5000
            clitimeout 50000
            srvtimeout 50000

        listen WEBHAEXT :80,:8443
            mode http
            cookie sessionbalance insert indirect nocache
            balance roundrobin
            option httpclose
            option forwardfor except 127.0.0.1
            option httpchk HEAD health_check.txt
            stats enable
            stats auth rah:rah
            server WEB1 10.90.2.131:81 cookie WEB_1 check
            server WEB2 10.90.2.130:80 cookie WEB_2 check
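
    An editorial note on testing this (a sketch, not from the original post): with cookie sessionbalance insert indirect, any client that already holds the cookie is deliberately pinned to one server, and several tabs in one browser, even private-browsing ones, share a cookie store. A cookie-less client shows whether round robin itself is working:

        # Sketch: every request below is a fresh client with no cookie jar, so the
        # Set-Cookie value should alternate WEB_1 / WEB_2 if round robin is healthy.
        for i in $(seq 1 10); do
            curl -s -D - -o /dev/null http://10.90.2.131/ | grep -i '^Set-Cookie: sessionbalance'
        done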

    Read the article

  • (solved) `ssh foo "<command/>"` not loading remote aliases?

    - by TomRoche
    summary: Why does this fail:

        $ ssh foo 'R --version | head -n 1'
        bash: R: command not found

    but this succeeds?

        $ ssh foo 'grep -nHe 'bashrc' ~/.bash_profile'
        /home/me/.bash_profile:3:# source the users .bashrc if it exists
        /home/me/.bash_profile:4:if [ -f "${HOME}/.bashrc" ] ; then
        /home/me/.bash_profile:5:  source "${HOME}/.bashrc"
        $ ssh foo 'grep -nHe "\WR\W" ~/.bashrc'
        /home/me/.bashrc:118:alias R='/share/linux86_64/bin/R'
        $ ssh foo '/share/linux86_64/bin/R --version | head -n 1'
        R version 2.14.1 (2011-12-22)

    details: I am a (rootless) user on 2 clusters. One uses environment modules, so any given server on that cluster can provide (via module add) pretty much the same resources. The other cluster, on which I must also unfortunately work, has servers managed individually, so I get in the habit of doing, e.g.,

        EXEC_NAME='whatever'
        for S in 'foo' 'bar' 'baz' ; do
          ssh ${S} "${EXEC_NAME} --version"
        done

    This works fine for packages installed normally/consistently, but often (for reasons unknown to me) packages are not, e.g. (compare the alias below to the alias above):

        $ ssh bar 'R --version | head -n 1'
        bash: R: command not found
        $ ssh bar 'grep -nHe 'bashrc' ~/.bash_profile'
        /home/me/.bash_profile:3:# source the users .bashrc if it exists
        /home/me/.bash_profile:4:if [ -f "${HOME}/.bashrc" ] ; then
        /home/me/.bash_profile:5:  source "${HOME}/.bashrc"
        $ ssh bar 'grep -nHe "\WR\W" ~/.bashrc'
        /home/me/.bashrc:118:alias R='/share/linux/bin/R'
        $ ssh bar '/share/linux86_64/bin/R --version | head -n 1'
        R version 2.14.1 (2011-12-22)

    Using aliases copes well with these install differences when I interactively shell into the server, but fails when I try to script ssh commands (as above); i.e., interactively

        $ ssh foo
        ...
        foo> R --version

    calls my alias for R on remote host=foo, but scripting

        $ ssh foo 'R --version'

    doesn't. What do I need to do to make ssh foo "<command/>" load my aliases on the remote host?
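
    The usual explanation, offered here as an editorial note: bash expands aliases only in interactive shells, and ssh host 'cmd' runs a non-interactive one, so even though ~/.bashrc is sourced, the alias is never applied to the command line. One common workaround is to force an interactive shell on the remote side:

        # Sketch: -i makes the remote bash interactive, so ~/.bashrc aliases expand.
        ssh foo 'bash -ic "R --version | head -n 1"'

    Defining R as a shell function instead of an alias also works, since functions (unlike aliases) are honored by non-interactive shells that source ~/.bashrc.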

    Read the article

  • 0 connected nodes in datastax opscenter

    - by gansbrest
    Installed opscenterd on a separate node outside of the cluster, but within the firewall (AWS security group). Tested all possible ports between the agents and the opscenter server. No errors in the log:

        2013-10-30 01:07:23+0000 [FC_Cluster] INFO: Initializing event storage.
        2013-10-30 01:07:23+0000 [FC_Cluster] INFO: Attempting to load all persisted alert rules
        2013-10-30 01:07:23+0000 [FC_Cluster] INFO: Done loading persisted alert rules
        2013-10-30 01:07:23+0000 [FC_Cluster] INFO: Done initializing event storage.
        2013-10-30 01:07:23+0000 [FC_Cluster] INFO: Done loading persisted scheduled job descriptions
        2013-10-30 01:07:23+0000 [FC_Cluster] INFO: OpsCenter starting up.
        2013-10-30 01:07:23+0000 [] INFO: Finished starting new cluster services for FC_Cluster
        2013-10-30 01:08:04+0000 [FC_Cluster] INFO: Agent for ip 10.34.10.185 is version u'3.2.2'
        2013-10-30 01:08:04+0000 [FC_Cluster] INFO: Agent for ip 10.32.37.251 is version u'3.2.2'
        2013-10-30 01:08:04+0000 [FC_Cluster] INFO: Agent for ip 10.82.226.252 is version u'3.2.2'

    The most interesting part is that I can see some data in the opscenter UI. When I stop the agents, there is no data displayed; when I start them, it shows up again, but at the same time it shows 0 connected nodes. Storage capacity is even funnier: 3 of 0 nodes. Any ideas why that could be happening?

    Read the article

  • Getting started with webserver clustering.

    - by Ernie
    I work for a small ISP, and we host about 250 domains and all the stuff that goes along with that: DNS, mail, spam filtering, and backups. Currently, we have separate DNS servers (two of them) and mail servers (outgoing mail is actually on the secondary DNS server, but was previously on its own server). In the past, this was done as an insurance measure. The last thing we need is for some doofus (usually yours truly) to hose a server, taking out DNS and mail right along with it, or for spammers to jam our incoming SMTP server, preventing outgoing mail from being sent too. In the past, this was a problem, and our servers were set up the way they are now to combat it. However, clustering solutions like Sun's Cobalt RAQ (in days of olde) and Virtualmin appear to cater to an all-in-one approach, then deal with failures through redundant servers. I have avoided this thus far, but we've been using Virtualmin on our web server for a while now, and I'd like to expand into using it for a high availability cluster. Our networking partner has recently built a datacenter that has eliminated all of our other bugaboos like network, cooling, and power issues, so now the only thing left to go wrong is me hosing a server, which happened earlier this month. One of the bigger reasons we've avoided going this route is that our hardware requirements aren't particularly high: one server easily handles all the sites we host (most of them are flat sites). Also, load-balancing routers tend to be expensive and complicated. All that I'm really expecting to do is build a two-node cluster for redundancy so that when I hose a server (however rare that might be), we're not out for 8-12 hours while I rebuild it. What I need to know is how to get started, and whether I'm really in a position to bother with this kind of thing at all.

    Read the article

  • How do you setup FTP with IIS Manager Users in an NLB environment with shared IIS configs?

    - by William Jens
    I've set up a 2-node NLB cluster and used the following to share IIS configs between the nodes: http://blogs.technet.com/b/meamcs/archive/2012/05/30/configuring-iis-7-5-shared-configuration.aspx The IIS configs and content are located on a network share via a UNC path. This works: updating IIS settings on one node is visible on the other node, and my website works on the individual nodes and on the cluster as a whole. I'm able to set up an FTP site and successfully connect with my Windows login. However, I want to use IIS Manager authentication as defined in: http://www.iis.net/learn/publish/using-the-ftp-service/configure-ftp-with-iis-manager-authentication-in-iis-7 I've tried using "Network Service" with the FTP COM object as well as a dedicated user account that exists on all three hosts, but every time I try to log in with an IIS user I get something like the following:

        IISWMSVC_AUTHENTICATION_UNABLE_TO_READ_CONFIG
        An unexpected error occurred while retrieving the authentication information.
        Exception: System.Runtime.InteropServices.COMException (0x8007052E): Filename: Error:
           at Microsoft.Web.Administration.Interop.AppHostWritableAdminManager.GetAdminSection(String bstrSectionName, String bstrSectionPath)
           at Microsoft.Web.Administration.Configuration.GetSectionInternal(ConfigurationSection section, String sectionPath, String locationPath)
           at Microsoft.Web.Management.Server.ConfigurationAuthenticationProvider.GetSection(ServerManager serverManager)
        Process: dllhost
        User: NT AUTHORITY\NETWORK SERVICE

    Can anyone point me in the right direction here?
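
    An editorial observation, not a verified fix: 0x8007052E is ERROR_LOGON_FAILURE ("unknown user name or bad password"), which suggests the identity the FTP COM object runs under cannot authenticate to the UNC share holding the shared configuration. The usual first check is to grant share and NTFS read access to the account actually presented on the wire (the machine account when running as Network Service, or the dedicated user), e.g.:

        :: Sketch only: server, share, and account names are placeholders.
        icacls \\fileserver\iis-shared-config /grant "CONTOSO\svc-iisconfig:(OI)(CI)R" /T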

    Read the article

  • SQL 2008 - db mail issue

    - by Chris
    Hello. I have two instances of SQL Server 2008. One was upgraded from SQL Server 2000 and one was a clean, new install. The instances are running on different nodes of the same cluster, although I have tried having them both on the same node with identical results. SQL Mail operates perfectly on both instances. Database Mail operates perfectly on the newly installed instance. On the upgraded instance, Database Mail does not send any mail. Of course, I am not positive that the fact this instance is upgraded has anything to do with the issue, but it might. The configuration of my Database Mail profile and account looks identical to my functioning instance. In the configuration of the 'Alerts' tab in the SQL Agent properties I have tried selecting both Database Mail and SQL Mail, to no avail. Both instances use the same SMTP server with the same authentication (a domain account for the database engine). All messages sent via sp_send_dbmail, and those sent via the 'test email' option, are visible in the sysmail_allitems queue and remain there as 'unsent'. The sent_status eventually changes to 'failed'. The only messages in the sysmail_event_log are 'mail queue stopped by login domain\myuser', 'mail queue started by login domain\myuser' and 'activation successful'. Selecting from the externalmailqueue returns the same number of rows as sysmail_allitems. I have tried bouncing the agent, the entire instance, and moving the other functioning instance to the other node in the cluster. Any thoughts? Thx.
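
    For reference, a sketch of the standard msdb queries used to watch this (only the filters are editorial; the views and procedure are the stock Database Mail objects):

        -- Items stuck in the queue and their status ('unsent' then 'failed' here).
        SELECT mailitem_id, recipients, sent_status, send_request_date
        FROM msdb.dbo.sysmail_allitems
        WHERE sent_status <> 'sent';

        -- Per-item errors and service messages, most recent first.
        SELECT log_date, event_type, description
        FROM msdb.dbo.sysmail_event_log
        ORDER BY log_date DESC;

        -- Confirm the Service Broker queue behind Database Mail is being processed.
        EXEC msdb.dbo.sysmail_help_queue_sp @queue_type = 'Mail';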

    Read the article

  • Distributed File Systems.

    - by GruffTech
    So, I've been reading several articles around Server Fault as well as Google (for example, this link). My requirements are very similar to the link above; however, I'd also like to have dynamic or at least resizeable file volumes, so if necessary I can add 4-5 servers to the pool and then expand the volume. Any distributed file systems that support that, to save me some time? Thanks! LustreFS will be my next test cluster to build.
    GlusterFS: I've built a 3-machine test GlusterFS cluster; however, I quickly became aware of several of its limitations that it doesn't seem to make clearly public. One: I can't seem to resize a volume. Once a volume is created, it's done. Which seems absurd; why have a fully scalable file system if I can't scale a volume? So maybe I'm doing something wrong. I'm not sure.
    Amazon S3, while giving the cheapest startup, adds too much cost when broken down per client per month, so it's out. Building my own system, when prorated over several years with no bandwidth costs, makes it significantly cheaper.
    MogileFS isn't an option, as we'd like this server to be a SAN replacement for storing tons of media from a multitude of systems, which for us means it needs to be POSIX-compliant so it can be remotely mounted via NFS or CIFS.
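
    An editorial aside on the GlusterFS resize point: recent GlusterFS releases can grow an existing volume in place with add-brick followed by a rebalance. This is a sketch with assumed volume and brick names, and it may not apply to the version tested above:

        # Sketch: add a brick from a new server to an existing volume, then rebalance.
        gluster volume add-brick media-vol server4:/export/brick1
        gluster volume rebalance media-vol start
        gluster volume rebalance media-vol status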

    Read the article

  • Appears to be "randomly" switching between the acl matched backend and the default backend

    - by Xoor
    I have HAProxy acting as a proxy in front of:
    - an NGinx instance
    - an in-house load balancer in front of multiple dynamic services exposed with socket.io (websockets)
    My problem is that from time to time my connections are proxied correctly to my socket.io cluster, and then randomly it falls back to routing to NGinx, which obviously is annoying and meaningless since NGinx isn't meant to handle the request. This happens when requesting URLs of the format http://mydomain.com/backends/* and there's an ACL in the HAProxy config to match the '/backends/*' path. Here's a simplified version of my HAProxy config (unrelated entries removed and names changed):

        global
            daemon
            maxconn 4096
            user haproxy
            group haproxy
            nbproc 4

        defaults
            mode http
            timeout server 86400000
            timeout connect 5000
            log global

        # this frontend interface receives the incoming http requests
        frontend http-in
            mode http
            # process all requests made on port 80
            bind *:80
            # set a large timeout for websockets
            timeout client 86400000
            # Default Backend
            default_backend www_backend
            # Loadfire (socket cluster)
            acl is_loadfire_backends path_beg /backends
            use_backend loadfire_backend if is_loadfire_backends

        # NGinx backend
        backend www_backend
            server www_nginx localhost:12346 maxconn 1024

        # Loadfire backend
        backend loadfire_backend
            option forwardfor    # This sets X-Forwarded-For
            option httpclose
            server loadfire localhost:7101 maxconn 2048

    It's really quite confusing to me why the behaviour appears to be "random"; being hard to reproduce, it's hard to debug. I appreciate any insight on this.

    Read the article

  • Replicated MongoDB server slower than simple shards

    - by displayName
    I tried to compare the performance of a sharded configuration against a sharded and replicated configuration. The sharded configuration consists of 8 shards, each running on three different machines, thereby constituting a total of 24 shards. All 8 of these shards run in the same partition on each machine. The sharded and replicated version is 8 shards again, just like plain sharding, and all 8 mongods run in the same partition on each machine. But apart from this, each of these three machines now runs 16 additional mongods on another partition which serve as secondaries for the 8 mongods running on the other machines. This is the way I prepared a sharded and replicated configuration with data chunks having a replication factor of 3. An important point to note is that once the data has been loaded, it is not modified, so after the primaries and secondaries have synchronized, it doesn't matter which one I read from. To run the queries, I use an entirely different machine (let's call it config) which runs mongos, and this machine's only purpose is to receive queries and run them on the cluster. Contrary to my expectations, plain sharding with 8 mongods on each machine (total = 3 * 8 = 24) is performing better for queries than the sharded + replicated configuration. I have a script written to perform the queries, so in order to time them I use time ./testScript and look at the result. I tried changing the read preference for the replicated cluster by logging into the mongo shell on config, running db.getMongo().setReadPref('secondary'), then exiting the shell and running the queries with time ./testScript. The questions are:
    1. Where am I going wrong in the replication?
    2. Why is it slower than the plain sharding version?
    3. Does the db.getMongo().setReadPref('secondary') persist when I leave the shell and try to perform the query?
    All four machines are running Linux and I have already increased ulimit -n from the initial value of 1024 to 2048 to allow more connections. The collections are properly distributed and all the mongods have an equal number of chunks. It goes without saying that the indices in both configurations are the same.
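
    On question 3, an editorial note: setReadPref affects only the shell session in which it is typed, so it does nothing for a separate time ./testScript run; the preference must be set inside the script's own connection. A sketch for a mongo shell script, with host and namespace names assumed:

        // Sketch: set the read preference on this connection before timing queries.
        db = connect("config-host:27017/mydb");      // mongos address is an assumption
        db.getMongo().setReadPref("secondary");
        var t0 = new Date();
        db.mycollection.find({ someField: "someValue" }).itcount();  // force full iteration
        print("elapsed ms: " + (new Date() - t0));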

    Read the article

  • Experiences in Upgrading from Exchange 2003 to Exchange 2010

    - by gWaldo
    I'm currently running an Exchange 2003 SP2 cluster on a Server 2003 AD forest (in native 2003 mode), and we are beginning to plan the upgrade to Server 2008 AD and Exchange 2010. We have two main sites, one middle-sized office, and a couple of smaller sites which have DCs (which may be RODCs after the upgrade). Currently all of our Exchange cluster is in my main site, but we are considering using the new datastore paradigm for load-balancing/failover at the other large site, though this is not set in stone. Right now we are in the information-gathering and planning phases. I am looking for input on any gotchas experienced while performing either upgrade, but especially the Exchange upgrade. Gotchas? What surprised you? What wasn't documented? What said one thing but was misleading? (Confusing either in content or severity.) What is great or horrible about the new system? What worked well? What worked poorly? If you were to do it over again...? (I know that this isn't so much a question that can be definitively answered, but I'm happy to reward insight and useful resources (not the Microsoft documentation, but blog posts are welcome) with upvotes.)
    UPDATE: A couple items of note:
    - We are not currently using OWA (currently only the admins), but it may become more of a consideration with iOS devices.
    - We do have a small number of BlackBerries in the environment (< 10%).
    - In addition to the standard Exchange connectors, we have a third-party connector for Captaris RightFax integration.

    Read the article
