Search Results

Search found 3050 results on 122 pages for 'soa cluster'.

Page 100 of 122.

  • IIS 6.0 Server Too Busy HTTP 503 Connection_Dropped DefaultAppPool

    - by Shiraz Bhaiji
    We have a site running on a Windows 2003 cluster with two 64-bit machines. The site needs to be able to cope with over 20,000 concurrent users. One of the things the site does is allow the download of a 2MB file (which is cached in memory). We have low CPU and memory usage, and we also have surplus bandwidth. It appears that we are running out of connections due to the time it takes users to download the file (some users have slow internet connections). In the IIS log we get HTTP 503 errors. In the HTTPErr log we get mainly Connection_Dropped DefaultAppPool with some Timer_EntityBody DefaultAppPool. The question is: how can we configure IIS to allow more connections? Or is there something that I am missing here? Thanks, Shiraz

    Read the article

  • infoWindow on MarkerClusterer in google maps

    - by vishwanath
    I need an infoWindow to open, instead of the map zooming in, when clicking on a ClusterMarker. I am using the Google Maps utility library MarkerClusterer to create clusters of markers. I tried changing the following line in markerclusterer.js: ClusterMarker_.prototype = new GOverlay(); to ClusterMarker_.prototype = new GMarker(); so that I could get the openInfoWindow() function on the cluster marker, but that didn't work out; I got an error. If possible, please suggest a solution so that this can be done with MarkerClusterer, or else any other library that is able to do this. Any help will be appreciated.

    Read the article

  • Ehcache - Distributed RMI not working

    - by Ted
    Hi, I have this strange problem with ehcache 2.0 that I hope someone can help me with. I have set up a cluster of two hosts, A and B. I can see that heartbeats are received at both ends, so I'm pretty sure the networking and multicast stuff is working. The problem is that if I put an element into the cache at host A, I can see in the logs of host B that it receives a remote put. But when I request the same element from host B, it runs off to the database and performs a query nonetheless. What may be the cause of this? Thankful for any pointers!

    Read the article

  • C# [Mono]: MPAPI vs MPI.NET vs ?

    - by Olexandr
    Hi. I'm working on a college project. I have to develop a distributed computing system, and I decided to do some research to make this task fun :) I've found the MPAPI and MPI.NET libraries. Yes, they are .NET libraries (Mono, in my case). Why .NET? I was choosing between Ada, C++ and C#, and I chose C# because of the lower development time. I have three goals: simplicity, performance, and cluster computing. So, what should I choose: MPAPI, MPI.NET, or something else?

    Read the article

  • Hadoop WordCount example stuck at map 100% reduce 0%

    - by Abhinav Sharma
    [hadoop-1.0.2] $ hadoop jar hadoop-examples-1.0.2.jar wordcount /user/abhinav/input /user/abhinav/output
        Warning: $HADOOP_HOME is deprecated.
        hdfs://localhost:54310/user/abhinav/input
        12/04/15 15:52:31 INFO input.FileInputFormat: Total input paths to process : 1
        12/04/15 15:52:31 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
        12/04/15 15:52:31 WARN snappy.LoadSnappy: Snappy native library not loaded
        12/04/15 15:52:31 INFO mapred.JobClient: Running job: job_201204151241_0010
        12/04/15 15:52:32 INFO mapred.JobClient: map 0% reduce 0%
        12/04/15 15:52:46 INFO mapred.JobClient: map 100% reduce 0%

    I've set up Hadoop on a single node using this guide (http://www.michael-noll.com/tutorials/running-hadoop-on-ubuntu-linux-single-node-cluster/#run-the-mapreduce-job) and I'm trying to run a provided example, but I'm getting stuck at map 100% reduce 0%. What could be causing this?

    Read the article

  • Elasticsearch Shard stuck in INITIALIZING state

    - by Peeyush
    I stopped one of my Elasticsearch nodes and, as normal behavior, Elasticsearch started relocating the shard on that node to another node. But even after 15 hours it is still stuck in the INITIALIZING state. That shard also fluctuates between two nodes: for some time it stays on one node, then automatically shifts to another node, and it keeps doing that every few hours. The main issue is that it is still in the INITIALIZING state after so many hours. I am using version 1.2.1. The shard that is stuck is a replica. I am getting this error in the logs:

        [ERROR][index.engine.internal ] [mynode] [myindex][3] failed to acquire searcher, source delete_by_query java.lang.NullPointerException
        [WARN ][index.engine.internal ] [mynode] [myindex][3] failed engine [deleteByQuery/shard failed on replica]
        [WARN ][cluster.action.shard ] [mynode] [myindex][3] sending failed shard for [myindex][3], node[Sp3URfNVQlq2i4i3EjCakw], [R], s[INITIALIZING], indexUUID [kTikCHshQMKEQ_jAuWWWnw], reason [engine failure, message [deleteByQuery/shard failed on replica][EngineException[[myindex][3] failed to acquire searcher, source delete_by_query]; nested: NullPointerException; ]]
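
    As an aside, below is a small diagnostic sketch (not a fix): it asks the cluster why the replica never leaves INITIALIZING. It assumes the HTTP API is reachable on localhost:9200 and reuses the index name from the log above purely as a placeholder; the health and _cat endpoints exist in 1.x, though the exact output columns vary by version.

        # Ask the cluster about the stuck replica. Requires the 'requests' package.
        import requests

        base = 'http://localhost:9200'

        # Overall state and the number of initializing/relocating shards.
        print(requests.get(base + '/_cluster/health?pretty').text)

        # Per-shard view: which nodes hold [myindex][3] and what state each copy is in.
        print(requests.get(base + '/_cat/shards/myindex?v').text)

        # Recovery progress for the index: how far the replica copy has got.
        print(requests.get(base + '/_cat/recovery/myindex?v').text)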

    Read the article

  • Bind9 DNS help with pseudo domains

    - by Tempname
    I have set up a DNS server on my home network to manage some apps that I have written for home. Currently I have 3 "domains" that I am using: controller, devserver and fileserver. The first issue I am having is that when I attempt to ping the parent domain of any of these 3 I am unable to; I simply get ping: unknown host controller. I can, however, ping any of the subdomains I have set up under these 3 parent domains. The second issue is that I am unable to ping any of the 3 parent domains or any child domains from my Windows machines. I have verified that these domains work on other devices in my house (iPod touch, iPad, cell phone). Any help with this is greatly appreciated. Here is the BIND data file for my parent domain controller:

        ;
        ; BIND data file for local loopback interface
        ;
        $TTL 604800
        @                 IN SOA controller. admin.controller. (
                              9 604800 86400 2419200 604800 )
        ;
        @                 IN NS controller.
        @                 IN A  192.168.1.104
        controller        IN A  192.168.1.194
        admin.controller. IN A  192.168.1.104

    Read the article

  • SSL cert install in Mongrel

    - by normalocity
    I'm running a RoR app. I've created a self-signed SSL cert for my test/dev environment, which consists of a single mongrel, and Mac OS X Leopard. Can I install the SSL cert in such a way that mongrel will use it, or do I have to build a different server stack to handle SSL (e.g. Apache in front of a mongrel cluster), and install the cert inside of Apache? For my production server, I'll eventually build a server stack like this, so it's no big deal if I have to go that direction - it's just a question of now or later. Thanks!

    Read the article

  • How to setup DNS server behind a VPN

    - by Brian
    I want to host some websites behind a VPN and I need some help with the finer points of the configuration. Thus far I've settled on OpenVPN + BIND9 and I want to configure the domains like this:

        External DNS
        mail.example.com
        www.example.com
        vpn.example.com

    I want to be able to connect to the VPN using 'vpn.example.com'. Once connected, I then want to be able to resolve anything under '*.vpn.example.com' with the DNS server sitting behind the VPN. I know that OpenVPN can push DNS servers to clients when they connect. I am having trouble, though, with the DNS config, both internal and external. I've gone through a few tutorials etc. and tried to reason about it myself, but I'm not getting anywhere. So my main question would be: does the above configuration make sense? If so, any general pointers or examples would be greatly appreciated. Here's what I've tried so far, based on this tutorial (I've redacted my domain with example.com). When I run the tests with dig at the end to check that resolution is working, it fails.

        db.vpn.example.com
        $TTL 15m
        vpn.example.com. IN SOA ns.vpn.example.com. [email protected]. (
                         2009010910 ;serial
                         900        ;refresh
                         900        ;retry
                         900        ;expire
                         900        ;minimum TTL
                         )
        vpn.example.com. IN NS ns.vpn.example.com.
        ns               IN A  192.168.0.2
        test             IN A  192.168.0.2
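
    As a side note, a small client-side check can separate "the zone is wrong" from "the client is asking the wrong server". The sketch below is not from the question; it assumes the dnspython package (2.x, for resolver.resolve) and reuses the 192.168.0.2 address from the zone file above.

        # Query the internal BIND server directly, bypassing whatever resolver the
        # OS or OpenVPN pushed, to see whether the vpn.example.com zone answers.
        import dns.resolver

        resolver = dns.resolver.Resolver(configure=False)
        resolver.nameservers = ['192.168.0.2']   # ns.vpn.example.com from the zone above

        for name in ['ns.vpn.example.com', 'test.vpn.example.com']:
            try:
                answer = resolver.resolve(name, 'A')
                print(name, '->', [rr.address for rr in answer])
            except Exception as exc:
                print(name, 'failed:', exc)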

    Read the article

  • DNS configuration issues. Clients inside network unable to resolve DNS server's name

    - by hydroparadise
    I set up the DNS service on Ubuntu 12.04 64-bit and all appears to be well, except that my DHCP clients do not recognize my DNS server's hostname. When doing an nslookup on one of my Windows clients, I get:

        C:\Users\chad>nslookup
        Default Server:  UnKnown
        Address:  192.168.1.2

    where I would expect the FQDN in the spot where UnKnown is seen. The DNS server knows itself pretty well, but I think only because I have an entry in the /etc/hosts file to resolve. There are so many places to look that I don't even know where to begin. Are there any logs I can look at? Something. Places I've looked at and configured:

        /etc/bind/zones/domain.com.db
        /etc/bind/zones/rev.1.168.192.in-addr.arpa
        /etc/bind/named.conf.local

    EDIT: '/etc/bind/zones/rev.1.168.192.in-addr.arpa'

        @ IN SOA dns-serv1.mydomain.com [email protected]. (
            2006081401; 28800; 604800; 604800; 86400 )
          IN NS dns-serv1.mydomain.com.
        2 IN PTR dns-serv1
        2 IN PTR mydomain.com

    EDIT 2: '/etc/bind/named.conf.local'

        zone "mydomain.com" {
            type master;
            file "/etc/bind/zones/mydomain.com.db";
        };
        zone "1.168.192.in-addr.arpa" {
            type master;
            file "/etc/bind/zones/rev.0.168.192.in-addr.arpa";
        };

    Read the article

  • JBossCacheService: exception occurred in cache put error occurred after changing cache mode to REPL_SYNC

    - by logoin
    Hi, we have a horizontal cluster set up on JBoss 4.2. Session replication worked fine until we changed the cache mode from REPL_ASYNC to REPL_SYNC to fix an issue. We started to see warnings for some session failovers:

        [org.jboss.web.tomcat.service.session.InstantSnapshotManager.ROOT] Failed to replicate session
        java.lang.RuntimeException bc
        [local7.warning] JBossCacheService: exception occurred in cache put
        ...
        org.jboss.web.tomcat.service.session.JBossCacheWrapper.put(JBossCacheWrapper.java:147)
        org.jboss.web.tomcat.service.session.JBossCacheService.putSession(JBossCacheService.java:315)
        org.jboss.web.tomcat.service.session.JBossCacheClusteredSession.processSessionRepl(JBossCacheClusteredSession.java:125)

    Does anyone have any idea why this happens and how to fix it if we still want to use REPL_SYNC? Any help is appreciated. Thanks!

    Read the article

  • Connecting to RDS database from EC2 instance using bind9 CNAME alias

    - by mptre
    I'm trying to get internal DNS up and running on an EC2 instance. The main goal is to be able to define CNAME aliases for other AWS services. For example: instead of using the RDS endpoint, which might change over time, an alias mysql.company.int can be used instead. I'm using BIND9 and here are my config files:

        /etc/bind/named.conf.local

        zone "company.int" {
            type master;
            file "/etc/bind/db.company.int";
        };

        /etc/bind/db.company.int

        ;
        $TTL 3600
        @ IN SOA company.int. company.localhost. (
            20120617 ; Serial
            604800   ; Refresh
            86400    ; Retry
            2419200  ; Expire
            604800 ) ; Negative Cache TTL
        ;
        @ IN NS company.int.
        @ IN A 127.0.0.1
        @ IN AAAA ::1
        ; CNAME
        mysql IN CNAME xxxx.eu-west-1.rds.amazonaws.com.

    The dig command assures me my alias is working as expected:

        $ dig mysql.company.int
        ...
        ;; ANSWER SECTION:
        mysql.company.int. 3600 IN CNAME xxxx.eu-west-1.rds.amazonaws.com.
        xxxx.eu-west-1.rds.amazonaws.com. 60 IN CNAME ec2-yyy-yy-yy-yyy.eu-west-1.compute.amazonaws.com.
        ec2-yyy-yy-yy-yyy.eu-west-1.compute.amazonaws.com. 589575 IN A zzz.zz.zz.zzz
        ...

    As far as I can understand, a reverse zone isn't needed for a simple CNAME alias. However, when I try to connect to MySQL using my newly created alias, the operation times out:

        $ mysql -uuser -ppassword -hmysql.company.int
        ERROR 2003 (HY000): Can't connect to MySQL server on 'mysql.company.int' (110)

    Any ideas? Thanks in advance!
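
    A quick way to tell whether this is a DNS problem or a reachability problem (not from the original post; the hostname and port are just the ones discussed above): if the name resolves but the TCP connect times out, the usual suspects are the RDS security group or network ACLs rather than BIND.

        # Separate name resolution from reachability for the aliased RDS endpoint.
        import socket

        host, port = 'mysql.company.int', 3306

        # Step 1: does the alias resolve at all?
        print(socket.getaddrinfo(host, port, proto=socket.IPPROTO_TCP))

        # Step 2: can we actually open a TCP connection to MySQL?
        try:
            conn = socket.create_connection((host, port), timeout=5)
            print('TCP connect OK')
            conn.close()
        except OSError as exc:
            print('TCP connect failed:', exc)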

    Read the article

  • Domain authentication over OPEN wireless pre-logon (Windows 7 Pro) - No logon servers available

    - by Shadow00Caster
    I have a plethora of laptops that are joined to an AD domain. I have an enterprise wireless system set up; the users of these laptops will be using an OPEN, unsecured SSID which will ultimately have a captive portal that uses RADIUS-AD auth, with firewall rules that allow access, prior to captive-portal auth, to the proper IPs/ports of DCs etc. for authentication. I already have other laptops/users connecting to another SSID with 802.1X and SSO, and everything works perfectly pre-logon. My problem is with this open network: for some reason I cannot get the machines to auth to AD. The laptops connect to the wireless network; I confirm this on the controller and can ping the laptop at startup. I sharked the wires on the 2 DCs that these machines auth to, and I can see a DNS SOA update from a laptop I'm testing with and can ping that test laptop from both DCs. When I try to log on: "There are currently no logon servers available to service the logon request." The capture shows no incoming connections to either DC even though the laptop is connected and pingable. Any help is greatly appreciated.

    Read the article

  • Rails message: ActionView::MissingTemplate

    - by rtfminc
    I am getting an error that I cannot figure out:

        ActionView::MissingTemplate (Missing template cluster/delete_stuff.erb in view path app/views)
        <...snip trace...>
        Rendering rescues/layout (internal_server_error)

    I am "enhancing" others' code and am following the convention they set up, where they have code like:

        <%= render :partial => "other_stuff" %>

    and a file named _other_stuff.html.erb, and it all works. But when I copy these little snippets, I get the above error. Any ideas? Something is going on here that I need to figure out.

    Read the article

  • How to use Cassandra's Map Reduce with or w/o Pig?

    - by UltimateBrent
    Can someone explain how MapReduce works with Cassandra 0.6? I've read through the word count example, but I don't quite follow what's happening on the Cassandra end vs. the "client" end. https://svn.apache.org/repos/asf/cassandra/trunk/contrib/word_count/ For instance, let's say I'm using Python and Pycassa: how would I load in a new MapReduce function, and then call it? Does my MapReduce function have to be Java that's installed on the Cassandra server? If so, how do I call it from Pycassa? There's also mention of Pig making this all easier, but I'm a complete Hadoop noob, so that didn't really help. Your answer can use Thrift or whatever, I just mentioned Pycassa to denote the client side. I'm just trying to understand the difference between what runs in the Cassandra cluster vs. the actual server making the requests.
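
    As an aside on the client-side half of the question, below is a rough Pycassa sketch. It is only illustrative: the keyspace and column family names are invented, and it assumes a Pycassa version that provides ConnectionPool. The point it encodes is that Pycassa only reads and writes data over Thrift; the map/reduce code in the word_count example is Java, runs on the Cassandra/Hadoop nodes, and is submitted with bin/hadoop rather than being loaded through the client.

        # Client side only: write input rows and read the job's output. The
        # word_count MapReduce job itself is Java and runs on the cluster, not here.
        # Keyspace/column family names below are placeholders, not from the example.
        import pycassa

        pool = pycassa.ConnectionPool('Keyspace1', server_list=['localhost:9160'])

        input_cf = pycassa.ColumnFamily(pool, 'InputText')
        input_cf.insert('doc1', {'text': 'the quick brown fox jumps over the lazy dog'})

        # ... submit the Java word_count job against the cluster here ...

        output_cf = pycassa.ColumnFamily(pool, 'WordCountOutput')
        for key, columns in output_cf.get_range():
            print(key, columns)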

    Read the article

  • MPI signal handling

    - by Seth Johnson
    When using mpirun, is it possible to catch signals (for example, the SIGINT generated by ^C) in the code being run? For example, I'm running a parallelized python code. I can except KeyboardInterrupt to catch those errors when running python blah.py by itself, but I can't when doing mpirun -np 1 python blah.py. Does anyone have a suggestion? Even finding how to catch signals in a C or C++ compiled program would be a helpful start. If I send a signal to the spawned Python processes, they can handle the signals properly; however, signals sent to the parent orterun process (i.e. from exceeding wall time on a cluster, or pressing control-C in a terminal) will kill everything immediately.
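
    Not from the original post, but here is a minimal sketch of the Python side: install handlers in the spawned workers using only the standard library. As the question notes, this only helps for signals that actually reach the Python processes; a signal delivered to the parent orterun/mpirun process is handled by the launcher itself, and whether it is forwarded to the children depends on the MPI implementation.

        # Turn SIGINT/SIGTERM delivered to this worker into a clean shutdown
        # instead of an immediate kill. Run it as e.g.: mpirun -np 1 python blah.py
        import signal
        import sys

        def handle_signal(signum, frame):
            # Flush buffers, close files, write a checkpoint, etc., then exit.
            sys.stderr.write("worker caught signal %d, shutting down\n" % signum)
            sys.exit(1)

        signal.signal(signal.SIGINT, handle_signal)
        signal.signal(signal.SIGTERM, handle_signal)

        # ... long-running computation goes here ...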

    Read the article

  • Wildcard DNS treating www as a subdomain

    - by Alaa Gamal
    I am using a wildcard subdomain with Apache. My Apache config:

        ServerAlias *.staronece1.com
        DocumentRoot /staronece1/domains

    My named file:

        $ttl 38400
        staronece1.com. IN SOA staronece1.com. email.yahoo.com. (
                        1334838782 10800 3600 604800 38400 )
        staronece1.com.        IN NS staronece1.com.
        staronece1.com.        IN A  95.19.203.21
        www.staronece1.com.    IN A  95.19.203.21
        server.staronece1.com. IN A  95.19.203.21
        mail.staronece1.com.   IN A  95.19.203.21
        ns1.staronece1.com.    IN A  95.19.203.21
        ns2.staronece1.com.    IN A  95.19.203.21
        staronece1.com.        IN NS ns1.staronece1.com.
        staronece1.com.        IN NS ns2.staronece1.com.
        staronece1.com.        IN MX 10 mail.staronece1.com.
        *                14400 IN A  95.19.203.21
        *.staronece1.com       IN A  95.19.203.21

    My PHP test file, /staronece1/domains/index.php:

        <?php
        function getBname(){
            $bname=explode(".",$_SERVER['HTTP_HOST'],2);
            return $bname[0];
        }
        echo 'SubDomain is :'.getBname();
        ?>

    If I go to something.staronece1.com I get this result: SubDomain is : something. Now, the problem is that if I go to www.staronece1.com I should get an empty result, because www is not a subdomain, but I get this result: SubDomain is : www. And if I go to www.something.staronece1.com I get a Firefox error message (site not found). How do I fix this problem? I think the solution is to add a record for www in the named file. Thanks

    Read the article

  • dnssec zonesigner ignoring out-of-zone data

    - by jordi12100
    I am trying to configure DNSSEC with BIND9 on CentOS 6.4 running the DirectAdmin control panel. I am using this tutorial to make it work: https://www.dnssec-tools.org/wiki/index.php/Zonesigner But I can't get it to work... When I run this command:

        zonesigner --genkeys jordikroon.nl.db jordikroon.nl.db.signed

    I get this error:

        jordikroon.nl.db:17: ignoring out-of-zone data (jordikroon.nl)
        jordikroon.nl.db:18: ignoring out-of-zone data (jordikroon.nl)
        jordikroon.nl.db:22: ignoring out-of-zone data (jordikroon.nl)
        jordikroon.nl.db:29: ignoring out-of-zone data (jordikroon.nl)
        jordikroon.nl.db:33: ignoring out-of-zone data (jordikroon.nl)
        zone jordikroon.nl.db/IN: has no NS records
        zone jordikroon.nl.db/IN: not loaded due to errors.

    I can't find anything on the web about this error. This is my zone db file:

        $TTL 14400
        @ IN SOA ns1.ghservers.org. hostmaster.jordikroon.nl. (
            2013090703 14400 3600 1209600 86400 )
        jordikroon.nl. 14400 IN NS ns1.ghservers.org.
        jordikroon.nl. 14400 IN NS ns2.ghservers.org.
        cp             14400 IN A 85.17.32.228
        ftp            14400 IN A 85.17.32.228
        jordikroon.nl. 14400 IN A 85.17.32.228
        localhost      14400 IN A 127.0.0.1
        mail           14400 IN A 85.17.32.228
        pop            14400 IN A 85.17.32.228
        smtp           14400 IN A 85.17.32.228
        www            14400 IN A 85.17.32.228
        jordikroon.nl. 14400 IN MX 10 mail
        jordikroon.nl. 14400 IN TXT "v=spf1 a mx ip4:85.17.32.228 ~all"
        localhost      14400 IN AAAA ::1

    How do I fix this? All IN keywords are being ignored. Any help is welcome :-)

    Read the article

  • McAfee Virus Scan and Oracle RAC

    - by Lee Gathercole
    Hi, we're experiencing a strange problem with Oracle RAC and McAfee anti-virus. As part of the installation of the Oracle RAC we disabled anti-virus as directed. We have had our RAC running fine, but when we came to re-enable the AV and reboot we got a BSOD:

        Abnormal Program Termination (BugCheck, STOP: 0x00000035 (0x8E984678, 0x00000000, 0x00000000, 0x00000000))
        NO_MORE_IRP_STACK_LOCATIONS

    Following the standard process of raising this problem with Microsoft, they identified the problem and also a fix. Microsoft talk about too many file filter drivers being present and pushing the DFS upper limit beyond the default size. Upping this value, as per MSDN, has no impact. We're able to recover from this BSOD by disabling AV. We don't have the problem if we run the AV service manually whilst the system is up. However, if we make the service automatic we fail to boot.

    Tech details:
    - 2-node Oracle 10g cluster
    - 2 x Windows 2003 SP2, 16GB RAM, quad-core 3GHz processor
    - SAN-attached storage
    - McAfee VirusScan Enterprise 8.5.0i, Scan Engine (5300.2777), DAT Version (5536.0000)

    Thanks, Lee

    Read the article

  • DNS zone file SPF configuration to support sending mail from multiple servers and gmail

    - by Tauren
    I want to configure SPF on a domain to allow mail to be sent from:
    - the x.com website server (x.com and www.x.com - both at the same IP)
    - its MX servers (smtp.x.com, mx.x.com, mail.x.com)
    - another server that isn't listed as an MX server (somehost.x.com)
    - via gmail, using an account that has authenticated use of [email protected]

    Will this zone file work? If not, what are the problems with it?

        $ttl 38400
        @ IN SOA ns1.x.com. hostmaster.x.com. (
            201003092 ; serial
            8H        ; refresh
            15M       ; retry
            1W        ; expire
            1H )      ; minimum
        @ NS ns1.x.com.
        @ NS ns2.x.com.
        @ MX 10 mx.x.com.
        @ MX 20 smtp.x.com.
        @ MX 30 mailhost.x.com.
        ; SPF records
        @        IN TXT "v=spf1 a mx a:somehost.x.com include:_spf.google.com ~all"
        mx       IN TXT "v=spf1 a -all"
        smtp     IN TXT "v=spf1 a -all"
        mailhost IN TXT "v=spf1 a -all"

    Questions:
    - Is _spf.google.com the right thing to include for gmail.com, or is it only for Google Hosted Apps? If only for Google Apps, what should I include to send from gmail.com?
    - If mail shouldn't be sent from anywhere else, is it safe to use -all instead of ~all?
    - Does it make sense to add specific SPF records for each of the mail servers?
    - Any other problems with the zone file?

    I want to confirm these things before making changes to my zone file. The file has SPF configured basically the same way now, just without google.com and somehost, but I want to make sure I won't break things when I change it.

    Read the article

  • Possible to register Selenium RC's with the Hudson Selenium Grid Hub w/o the RC's being slaves in the Hudson cluster?

    - by Rodreegez
    I am trying to get Hudson to run my Ruby-based Selenium tests. I have installed the Selenium Grid plugin, but I don't want to have the RCs running as slaves in a Hudson cluster. The reason for this is I don't want to waste the next six years of my life trying to configure each of my projects in various Windows environments. Hudson currently pulls each project from GitHub and builds it just fine. With a regular Selenium Grid setup, I am able to edit the grid_configuration.yml file to represent the various environments I wish to test against, then pass environment variables to the rake task that runs the tests, i.e. which browser/platform to run on and the URL of the application under test -- usually a port on the hub machine running in a specific environment. In this way, the machines on which the RCs run don't need to know anything about the source code of my apps; they just need to have selenium-grid installed and have registered with the hub. Is there a way of elegantly emulating this with Hudson?

    Read the article

  • plotting results of hierarchical clustering on top of a matrix of data in Python

    - by user248237
    How can I plot a dendrogram right on top of a matrix of values, reordered appropriately to reflect the clustering, in Python? An example is at the bottom of the following figure: http://www.coriell.org/images/microarray.gif I use scipy.cluster.dendrogram to make my dendrogram and perform hierarchical clustering on a matrix of data. How can I then plot the data as a matrix where the rows have been reordered to reflect a clustering induced by cutting the dendrogram at a particular threshold, and have the dendrogram plotted alongside the matrix? I know how to plot the dendrogram in scipy, but not how to plot the intensity matrix of data with the right scale bar next to it. Any help on this would be greatly appreciated.
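
    Not part of the original question, but here is a minimal sketch of that layout using scipy.cluster.hierarchy and matplotlib. The data is random placeholder data, the 'average' linkage and the axes positions are arbitrary choices, and the leaf ordering may need flipping to line up exactly with the heatmap rows.

        import numpy as np
        import matplotlib.pyplot as plt
        from scipy.cluster.hierarchy import linkage, dendrogram

        data = np.random.rand(30, 8)           # placeholder data matrix
        Z = linkage(data, method='average')    # hierarchical clustering of the rows

        fig = plt.figure(figsize=(9, 6))

        # Dendrogram of the rows in a narrow panel on the left.
        ax_dend = fig.add_axes([0.05, 0.1, 0.2, 0.8])
        dend = dendrogram(Z, orientation='left', no_labels=True)
        ax_dend.set_xticks([])

        # Heatmap with rows reordered to match the dendrogram leaves; depending on
        # versions you may need origin='upper' or a reversed leaf order to align.
        order = dend['leaves']
        ax_heat = fig.add_axes([0.3, 0.1, 0.6, 0.8])
        im = ax_heat.imshow(data[order, :], aspect='auto',
                            interpolation='nearest', origin='lower')
        ax_heat.set_yticks([])

        # Intensity scale bar next to the matrix.
        fig.colorbar(im, ax=ax_heat)
        plt.show()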

    Read the article

  • BIND9 DNS Problems - Not resolving

    - by clone1018
    I host a BIND9 DNS server for my VirtualMin users to use, and it only resolves for 75% of the people. It has been WELL over 1 week now. Here is a sample:

        $ttl 38400
        @ IN SOA axxim.net. root.axxim.net. (
            1274031391 10800 3600 604800 38400 )
        @ IN NS axxim.net.
        day7tech.com.           IN A 96.226.216.37
        www.day7tech.com.       IN A 96.226.216.37
        ftp.day7tech.com.       IN A 96.226.216.37
        m.day7tech.com.         IN A 96.226.216.37
        localhost.day7tech.com. IN A 127.0.0.1
        webmail.day7tech.com.   IN A 96.226.216.37
        admin.day7tech.com.     IN A 96.226.216.37
        mail.day7tech.com.      IN A 96.226.216.37
        day7tech.com.           IN MX 5 mail.day7tech.com.

    Read the article

  • Custom Icon for Marker Clusterer

    - by Nyxynyx
    I am using the MarkerClusterer library for Google Maps API V3. Now that I have the clusterer working, I want to change the default icon to a custom one. Problem: when I try to set the style property of the marker clusterer, the default icon still appears. Where did I go wrong? JS code:

        // Marker Clusterer
        var styles = {styles: [{
            height: 53,
            url: "http://localhost/mywebsite/images/template/markers/cluster.png",
            width: 53
        }, {
            height: 56,
            url: "http://google-maps-utility-library-v3.googlecode.com/svn/trunk/markerclusterer/images/m2.png",
            width: 56
        }, {
            height: 66,
            url: "http://google-maps-utility-library-v3.googlecode.com/svn/trunk/markerclusterer/images/m3.png",
            width: 66
        }, {
            height: 78,
            url: "http://google-maps-utility-library-v3.googlecode.com/svn/trunk/markerclusterer/images/m4.png",
            width: 78
        }, {
            height: 90,
            url: "http://google-maps-utility-library-v3.googlecode.com/svn/trunk/markerclusterer/images/m5.png",
            width: 90
        }]};

        var mcOptions = {gridSize: 50, maxZoom: 15, styles: styles[styles]};
        mc = new MarkerClusterer(map, [], mcOptions);

    Read the article

  • Apache Cassandra overwhelming bandwidth overhead

    - by tanyehzheng
    While testing Apache Cassandra, I inserted 1000 rows of data and allowed them to propagate to the other machine on the LAN. This is a 2-machine cluster. I monitored the network connection between the two machines. The total data I expected to flow between the two servers was around 25 MB (including all column names, column values and timestamps), but the actual data sent and received between them was a whopping 362 MB! Does anybody know why there is such an overwhelming overhead? Thank you

    Read the article
