Search Results

Search found 4043 results on 162 pages for 'mod cluster'.


  • Virtual Machine loses network connectivity on Hyper-V Cluster

    - by Chris W
    We're running a number of VMs on a 6-node failover cluster of blades using Hyper-V. We have an intermittent issue (every few days, at different times, so no fixed frequency) of VMs losing network connectivity. Console access to the VM suggests all is fine, and the underlying blade has normal connectivity. To resolve the problem we either have to restart the VM or, more usually, we do a live migration to another blade, which brings connectivity back, and we then migrate it back to the original blade. I've had 3 instances of this happen with a specific VM running on a particular blade, but it has also happened once with a different VM running on a different blade. All VMs and blades have the same basic setup and are running Windows 2008 R2. Any ideas where I should be looking to diagnose the possible causes of this problem, as the event logs provide no help?

    Edit: I've checked that each blade is running the latest NIC drivers and all seem to be fine. Something that is confusing me: a failover or restart of the VM resolves the issue. Whilst I need to work out the underlying issue that is causing the NICs to hang, I'm also concerned that the VM didn't fail over to another node, which would have solved the outage for me. Is there a way to configure the cluster so that it can tell that the VM guest has lost connectivity and fail it over? As things stand, the cluster assumes the VM is running happily, as I presume Hyper-V reports everything is fine even though there is a problem.
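    The 2008 R2 cluster health check watches the VM resource, not guest connectivity, so until the root cause is found, one stop-gap is an external watchdog script. The sketch below assumes the FailoverClusters PowerShell module; the group name and guest IP in $vmMap are placeholders, not values from the question:

        Import-Module FailoverClusters

        # Map each clustered VM group to its guest IP (placeholder values)
        $vmMap = @{ "Virtual Machine VM1" = "10.0.8.21" }

        foreach ($group in $vmMap.Keys) {
            # If the guest stops answering pings, live-migrate the VM group;
            # migration re-initialises the virtual NIC, as observed above
            if (-not (Test-Connection $vmMap[$group] -Count 3 -Quiet)) {
                Move-ClusterVirtualMachineRole -Name $group -MigrationType Live
            }
        }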

    Read the article

  • Configuration for a two machine ESXi cluster using VSA to present local storage to VMs

    - by MDMarra
    I'm designing a little vSphere 5 cluster for one of our remote sites. We have some IBM x3650s that have 6x 300GB 10K RPM drives in them, along with dual quad core CPUs and 24GB RAM. Because we use HP P4500 G2s at our primary site, we have licenses available for HP P4000 VSAs. I thought that this would be the perfect opportunity to use them. Below is a basic drawing of what I want to accomplish: I want to run a P4000 VSA on each server and run them in a Network RAID-10 (Lefthand speak for network mirroring, think of it as RAID 1 across nodes or as an active/active storage cluster). I will then present this storage to guests that will run on this mini-cluster. It will be managed by a vCenter Server on our main site. All connections will be GbE with two dedicated to storage. Management and Data will share a pair of connections, since I don't expect there to be high load. These servers are just there to provide directory services, dhcp, printing, etc. Does anyone see anything potentially wrong with this approach? Is this the best way to do this without adding additional dedicated storage heads? Are there any pitfalls in this design, besides the lack of dedicated Data/Mgmt interfaces?

    Read the article

  • Network Misconfiguration when adding first host to new vSphere cluster

    - by dunxd
    I am building a new vSphere cluster from scratch. I have installed ESXi on the first host and built a vCenter server on a VM residing on that host (storage is on the local hard drive, although we have iSCSI targets I can reach from the host). The cluster is configured for HA. When I try to add the host to the cluster, I get an error at the point where HA is configured - "Cannot complete the ." I have stripped the network configuration of the host down to the most basic: a single NIC attached to a single vSwitch, running the VMkernel port on VLAN 8, which is our management VLAN. The vCenter server will have a network address on this VLAN, so I also set the initial Virtual Machine Port Group to this VLAN and connected the vCenter server NIC to this port group. I understand I can't connect the vCenter server to the VMkernel port group, but shouldn't I be able to connect the vCenter server to a port group in the same VLAN? If not, do I need to create a VLAN specifically for the VMkernel port group? I plan to set up another port group for vMotion with a dedicated and isolated VLAN (i.e. the VLAN isn't routed), so that wouldn't allow vCenter to communicate. Does anyone have any suggestions, or other ideas for what might be causing the problem? I've read through the documentation, but it isn't giving me any pointers, and the error message isn't helping me beyond telling me something is wrong with my network config.
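    To rule out a VLAN mismatch between the VMkernel port and the VM port group, it can help to compare them from the host's console. A sketch, assuming the default names "VM Network" and vSwitch0 (yours may differ):

        # List vSwitches, port groups and their VLAN IDs
        esxcfg-vswitch -l

        # Put the vCenter-facing VM port group on the same VLAN 8
        # as the management VMkernel port
        esxcfg-vswitch -p "VM Network" -v 8 vSwitch0

        # Confirm the VMkernel NIC's port group and address
        esxcfg-vmknic -l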

    Read the article

  • How to design highly scalable web services in Java?

    - by Kshitiz Sharma
    I am creating some web services that would have 2000 concurrent users. The services are offered for free and are hence expected to get a large user base. In the future it may be required to scale up to 50,000 users. There are already a few other questions that address the issue, like "Building highly scalable web services", but my requirements differ from that question. For example, my application does not have a user interface, so images, CSS and JavaScript are not an issue, and it is in Java, so suggestions like using HipHop to translate PHP to native code are useless. Hence I decided to ask my question separately. This is my project setup:

    - REST-based web services using Apache CXF
    - Hibernate 3.0 (with relevant optimizations like lazy loading and custom HQL for tune-up)
    - Tomcat 6.0
    - MySQL 5.5

    My questions are:

    1. Are there alternatives to MySQL that offer better performance for what I'm trying to do?
    2. What are some general things to abide by in order to scale a Java-based web application? I am thinking of putting my application in two Tomcat instances, with httpd redirecting each request to the appropriate Tomcat on the basis of load. Is this the right approach? Separate Tomcat instances can help, but doesn't the database then become the bottleneck, since both applications access the same database?
    3. I am a programmer, not a DB admin; how difficult would it be to cluster a MySQL database (or to cluster whatever database is offered as an alternative to 1)?
    4. How effective are caching solutions like EHCache?
    5. Any other general best practices?

    Some clarifications: Could you partition the data? Yes, we could, but we're trying to avoid it. We need to run a lot of data mining algorithms, and the design will evolve over time, so we can't be sure where the lines of partition should be.
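    On question 2, a minimal mod_proxy_balancer sketch of the httpd front end is below; the ports, route names and /api path are assumptions, not anything from the question:

        # httpd.conf -- requires mod_proxy, mod_proxy_ajp and mod_proxy_balancer
        <Proxy balancer://appcluster>
            # One member per Tomcat instance; route must match each
            # Tomcat's jvmRoute for sticky sessions to work
            BalancerMember ajp://127.0.0.1:8009 route=tomcat1
            BalancerMember ajp://127.0.0.1:8010 route=tomcat2
        </Proxy>

        ProxyPass /api balancer://appcluster/api stickysession=JSESSIONID
        ProxyPassReverse /api balancer://appcluster/api

    Sticky sessions matter less for stateless REST services; without the stickysession argument the balancer simply distributes by load, which may be all that's needed here.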

    Read the article

  • Clustering Strings on the basis of Common Substrings

    - by pk188
    I have around 10,000+ strings and have to identify and group all the strings which look similar (I base the similarity on the number of common words between any two given strings). The more common words, the more similar the strings would be. For instance:

    1. How to make another layer from an existing layer
    2. Unable to edit data on the network drive
    3. Existing layers in the desktop
    4. Assistance with network drive

    In this case, strings 1 and 3 are similar, with common words Existing, Layer, and strings 2 and 4 are similar, with common words Network, Drive (eliminating stop words). The steps I'm following are:

    1. Iterate through the data set
    2. Do a row-by-row comparison
    3. Find the common words between the strings
    4. Form a cluster where the number of common words is greater than or equal to 2 (eliminating stop words)
    5. If the number of common words is less than 2, put the string in a new cluster
    6. Assign the rows either to the existing clusters or form a new one, depending on the common words
    7. Continue until all the strings are processed

    I am implementing the project in C#, and have got to step 3. However, I'm not sure how to proceed with the clustering. I have researched a lot about string clustering but could not find any solution that fits my problem. Your inputs would be highly appreciated.
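    Since the problem is language-agnostic from step 4 onward, here is a rough sketch in Java of the greedy assignment described in steps 4-6. The threshold of 2 comes from the question; the stop-word list and all names are made up, and real use would need stemming so that "layer" matches "layers":

        import java.util.*;

        public class GreedyStringClusterer {
            // Minimal stop-word list for illustration only
            static final Set<String> STOP_WORDS = new HashSet<>(Arrays.asList(
                "how", "to", "the", "an", "a", "on", "with", "from", "in"));

            static Set<String> contentWords(String s) {
                Set<String> words = new HashSet<>();
                for (String w : s.toLowerCase().split("\\W+"))
                    if (!w.isEmpty() && !STOP_WORDS.contains(w)) words.add(w);
                return words;
            }

            // Steps 4-6: join the first cluster sharing >= 2 content words,
            // otherwise start a new cluster.
            public static List<List<String>> cluster(List<String> strings) {
                List<List<String>> clusters = new ArrayList<>();
                List<Set<String>> clusterWords = new ArrayList<>();
                for (String s : strings) {
                    Set<String> words = contentWords(s);
                    boolean placed = false;
                    for (int i = 0; i < clusters.size() && !placed; i++) {
                        Set<String> common = new HashSet<>(words);
                        common.retainAll(clusterWords.get(i));
                        if (common.size() >= 2) {
                            clusters.get(i).add(s);
                            clusterWords.get(i).addAll(words);
                            placed = true;
                        }
                    }
                    if (!placed) {
                        clusters.add(new ArrayList<>(Arrays.asList(s)));
                        clusterWords.add(words);
                    }
                }
                return clusters;
            }

            public static void main(String[] args) {
                System.out.println(cluster(Arrays.asList(
                    "How to make another layer from an existing layer",
                    "Unable to edit data on the network drive",
                    "Existing layers in the desktop",
                    "Assistance with network drive")));
            }
        }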

    Read the article

  • What's up with OCFS2?

    - by wcoekaer
    On Linux there are many filesystem choices, and even from Oracle we provide a number of filesystems, all with their own advantages and use cases. Customers often confuse ACFS with OCFS or OCFS2, which then leads to assumptions such as one replacing the other. I thought it would be good to write up a summary of how OCFS2 got to where it is, what we're still up to, how it is different from other options, and how this really is a cool native Linux cluster filesystem that we worked on for many years and that is still widely used.

    Work on a cluster filesystem at Oracle started many years ago, in the early 2000s, when the Oracle Database Cluster development team wrote a cluster filesystem for Windows that was primarily focused on providing an alternative to raw disk devices and helping customers with the deployment of Oracle Real Application Clusters (RAC). Oracle RAC is a cluster technology that lets us make a cluster of Oracle Database servers look like one big database: the RDBMS runs on many nodes and they all work on the same data. It's a shared-disk database design. There are many advantages to doing this, but I will not go into detail as that is not the purpose of my write-up. Suffice it to say that Oracle RAC expects all the database data to be visible in a consistent, coherent way across all the nodes in the cluster. To do that, there were/are a few options:

    1. use raw disk devices that are shared, through SCSI, FC, or iSCSI
    2. use a network filesystem (NFS)
    3. use a cluster filesystem (CFS), which basically gives you a filesystem that's coherent across all nodes using shared disks. It is sort of (but not quite) a combination of options 1 and 2, except that you don't do network access to the files; the files are effectively locally visible as if it were a local filesystem.

    So OCFS (Oracle Cluster FileSystem) on Windows was born. Since Linux was becoming a very important and popular platform, we decided that we would also make this available on Linux, and thus the porting of OCFS/Windows started. The first version of OCFS was primarily focused on replacing the use of raw devices with a simple filesystem that lets you create files and provides direct IO to these files, to get basically native raw-disk performance. The filesystem was not designed to be fully POSIX compliant, and it did not have anywhere near decent performance for regular file create/delete/access operations. Cache coherency was easy since it was basically always direct IO down to the disk device: any time one issued a write() it would go directly down to the disk and not return until the write() was completed, and likewise any read() from a datafile went all the way to disk before returning. We did not cache any data when it came down to Oracle data files. So while OCFS worked well for that, since it did not have much of a normal filesystem feel, it was not something that could be submitted to the kernel mailing list for inclusion into Linux as another native Linux filesystem (setting aside the Windows porting code ...). It did its job well, it was very easy to configure, node membership was simple, locking was disk based (so very slow, but it existed), and you could create regular files and do regular filesystem operations to a certain extent, but anything that was not related to database data files was just not very useful in general. Log files: OK. Standard filesystem use: not so much. Up to this point, all the work was done at Oracle, by Oracle developers.
    Once OCFS (1) was out for a while and there was a lot of use in the database RAC world, many customers wanted to do more and were asking for features that you'd expect in a normal native filesystem, a real "general purpose cluster filesystem". So the team sat down and basically started from scratch to implement what's now known as OCFS2 (Oracle Cluster FileSystem release 2). Some basic criteria were:

    - Design it with a real Distributed Lock Manager and use the network for lock negotiation instead of the disk
    - Make it a Linux-native filesystem instead of a native shim layer over a portable core
    - Support standard POSIX compliance and be fully cache coherent for all operations
    - Support all the filesystem features Linux offers (ACLs, extended attributes, quotas, sparse files, ...)
    - Be modern: support large files, 32/64-bit, journaling, data-ordered journaling, and endian neutrality, so we can mount across endianness and architectures

    Needless to say, this was a huge development effort that took many years to complete, and a few big milestones happened along the way. OCFS2 was developed in the open: we did not have a private tree that we worked on without external code review from the Linux filesystem maintainers. Great folks like Christoph Hellwig reviewed the code regularly to make sure we were not doing anything out of line, and we submitted the code for review on lkml a number of times to see if we were getting close to inclusion in the mainline kernel. Using this development model is standard practice for anyone who wants to write code that goes into the kernel with any chance of getting it accepted without a complete rewrite or, shall I say, a flamefest when submitted. It saved us a tremendous amount of time by not having to re-fit code into a Linus-acceptable state. Some other filesystems that tried to get into the kernel without following an open development model had a much harder time and much harsher criticism.

    In March 2006, when Linus released 2.6.16, OCFS2 officially became part of the mainline Linux kernel tree as one of the many filesystems (it was accepted a little earlier, in the release candidates). It was the first cluster filesystem to make it into the kernel tree. Our hope was that it would then get picked up by the distribution vendors, making it easy for everyone to have access to a CFS. Today the source code for OCFS2 is approximately 85,000 lines of code. We made OCFS2 production-ready, with full support for customers running Oracle Database on Linux, with no extra or separate support contract needed. OCFS2 1.0.0 started being built for RHEL4 for x86, x86-64, ppc, s390x and ia64; RHEL5 was covered starting with OCFS2 1.2. SuSE was very interested in high availability and clustering, decided to build and include OCFS2 with SLES9 for their customers, and was, next to Oracle, the main contributor to the filesystem for both new features and bug fixes. Source code was always available, even prior to inclusion into mainline, and as of 2.6.16 the source code is just part of a Linux kernel download from kernel.org, as it still is today. So the latest OCFS2 code is always the upstream mainline Linux kernel. OCFS2 is the cluster filesystem used in Oracle VM 2 and Oracle VM 3 as the virtual disk repository filesystem.
    Since the filesystem is in the Linux kernel, it's released under the GPL v2. The release model has always been that new feature development happens in the mainline kernel, and we then build consistent, well-tested snapshots with versions: 1.2, 1.4, 1.6, 1.8. These releases were effectively just snapshots in time, tested for stability and release quality. OCFS2 is very easy to use: there's a simple text file that contains the node information (hostname, node number, cluster name) and a file that contains the cluster heartbeat timeouts (a minimal example of the node file follows after the feature list below). It is very small and very efficient. As Sunil Mushran wrote in the manual: OCFS2 is an efficient, easily configured, quickly installed, fully integrated and compatible, feature-rich, architecture and endian neutral, cache coherent, ordered data journaling, POSIX-compliant, shared disk cluster file system. Here is a list of some of the important features that are included:

    - Variable block and cluster sizes: supports block sizes ranging from 512 bytes to 4 KB and cluster sizes ranging from 4 KB to 1 MB (in increments of powers of 2).
    - Extent-based allocations: tracks the allocated space in ranges of clusters, making it especially efficient for storing very large files.
    - Optimized allocations: supports sparse files, inline data, unwritten extents, hole punching and allocation reservation for higher performance and efficient storage.
    - File cloning/snapshots: REFLINK introduces copy-on-write clones of files in a cluster-coherent way.
    - Indexed directories: allows efficient access to millions of objects in a directory.
    - Metadata checksums: detects silent corruption in inodes and directories.
    - Extended attributes: supports attaching an unlimited number of name:value pairs to filesystem objects like regular files, directories and symbolic links.
    - Advanced security: supports POSIX ACLs and SELinux in addition to the traditional file access permission model.
    - Quotas: supports user and group quotas.
    - Journaling: supports both ordered and writeback data journaling modes to provide filesystem consistency in the event of power failure or system crash.
    - Endian and architecture neutral: supports a cluster of nodes with mixed architectures; allows concurrent mounts on nodes running 32-bit and 64-bit, little-endian (x86, x86_64, ia64) and big-endian (ppc64) architectures.
    - In-built cluster stack with DLM: includes an easy-to-configure, in-kernel cluster stack with a distributed lock manager.
    - Buffered, direct, asynchronous, splice and memory-mapped I/O: supports all modes of I/O for maximum flexibility and performance.
    - Comprehensive tools support: provides a familiar EXT3-style tool-set that uses similar parameters for ease of use.

    The filesystem was distributed for Linux distributions in separate RPM form, and this had to be built for every single kernel errata release or every updated kernel provided by the vendor. We provided builds from Oracle for Oracle Linux and all kernels released by Oracle, and for Red Hat Enterprise Linux; SuSE provided the modules directly for every kernel they shipped. With the introduction of the Unbreakable Enterprise Kernel for Oracle Linux and our interest in reducing the overhead of building filesystem modules for every minor release, we decided to make OCFS2 available as part of UEK. There was no more need for separate kernel modules; everything was built in, and a kernel upgrade automatically updated the filesystem, as it should.
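
    A minimal sketch of that node-information file, /etc/ocfs2/cluster.conf, follows. The hostnames, IP addresses and cluster name here are made-up values, not anything from this post:

        cluster:
            node_count = 2
            name = democluster

        node:
            ip_port = 7777
            ip_address = 192.168.1.101
            number = 0
            name = node1
            cluster = democluster

        node:
            ip_port = 7777
            ip_address = 192.168.1.102
            number = 1
            name = node2
            cluster = democluster

    With the o2cb stack online on each node, formatting and mounting then looks like any local filesystem (device and mount point are again assumptions):

        mkfs.ocfs2 -L "demo" /dev/sdb1           # run once, from one node
        mount -t ocfs2 /dev/sdb1 /srv/shared     # run on every node
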
    UEK allowed us to avoid backporting new upstream filesystem code into an older kernel version; backporting features into older versions introduces risk and requires extra testing, because the code is basically partially rewritten. The UEK model works really well for continuing to provide OCFS2 without that extra overhead. Because the RHEL kernel does not ship OCFS2 as a kernel module (it is in the source tree, but it is not built by the vendor in kernel module form), we stopped adding the extra packages for Oracle Linux's RHEL-compatible kernel and for RHEL. Oracle Linux customers/users get OCFS2 included as part of the Unbreakable Enterprise Kernel, SuSE customers get it distributed by SuSE with SLES, and Red Hat can decide to distribute OCFS2 to their customers if they choose, as it's just a matter of compiling the module and making it available.

    OCFS2 today, in the mainline kernel, is pretty much feature complete in terms of integration with every filesystem feature Linux offers, and it is still actively maintained, with Joel Becker as the primary maintainer. Since we use OCFS2 as part of Oracle VM, we continue to look at interesting new functionality to add (REFLINK was a good example), and as such we continue to enhance the filesystem where it makes sense. Bug fixes and any code that goes into the mainline Linux kernel affecting filesystems automatically also modify OCFS2, so it is in-kernel and actively maintained, but not a lot of new development is happening at this time. We continue to fully support OCFS2 as part of Oracle Linux and the Unbreakable Enterprise Kernel, and other vendors make their own decisions on support, as it's really a Linux cluster filesystem now more than something we provide to customers. It really is just part of Linux, like EXT3 or BTRFS; the OS distribution vendors decide.

    Do not confuse OCFS2 with ACFS (ASM Cluster Filesystem), also known as Oracle Cloud Filesystem. ACFS is a filesystem provided by Oracle on various OS platforms that really integrates into Oracle ASM (Automatic Storage Management). It's a very powerful cluster filesystem, but it's not distributed as part of the operating system: it's distributed with the Oracle Database product, and installs with and lives inside Oracle ASM. ACFS obviously is fully supported on Linux (Oracle Linux, Red Hat Enterprise Linux), but OCFS2, independently, as a native Linux filesystem, is also, and continues to be, supported. ACFS is very much tied into the Oracle RDBMS; OCFS2 is just a standard native Linux filesystem with no ties into Oracle products. Customers running the Oracle database and ASM really should consider using ACFS, as it also provides storage/clustered volume management. Customers wanting a simple, easy-to-use, generic Linux cluster filesystem should consider using OCFS2. To learn more about OCFS2 in detail, you can find good documentation on http://oss.oracle.com/projects/ocfs2 in the Documentation area, or get the latest mainline kernel from http://kernel.org and read the source.

    One final, unrelated note: since I am not always able to publicly answer or respond to comments, I do not want to selectively publish comments from readers. Sometimes I forget to publish comments, sometimes I publish them, and sometimes I would publish them but for some reason cannot publicly comment on them, which makes for a very one-sided stream. So for now I am not going to publish comments from anyone, to be fair to all sides. You are always welcome to email me and I will do my best to respond to technical questions; questions about strategy or direction are sometimes not possible to answer, for obvious reasons.

    Read the article

  • RabbitMQ - How do I configure servers for zero-downtime upgrades?

    - by Terence Johnson
    Having read through the docs and RabbitMQ in Action, creating a RabbitMQ cluster seems straightforward enough, but upgrading or patching an existing RabbitMQ cluster seems to require the whole cluster to be restarted. Is there a way to combine clustering, shovel, federation, and load balancing to make a rolling upgrade possible without losing queues or messages, or have I missed something slightly more obvious?
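    For what it's worth, one commonly suggested pattern is sketched below with rabbitmqctl commands from RabbitMQ 3.x; whether mirrored queues survive a mixed-version cluster depends heavily on the versions involved, so treat this as an outline, not a recipe. The other common answer is shovel or federation into a parallel cluster, then cutting clients over:

        # Beforehand, on any node: mirror all queues so another node
        # holds the data while one node is down (RabbitMQ 3.x syntax)
        rabbitmqctl set_policy ha-all ".*" '{"ha-mode":"all"}'

        # Then, for each node in turn: drain it at the load balancer, and
        rabbitmqctl stop_app      # stop the broker, leave the Erlang VM up
        # ... upgrade the rabbitmq-server package ...
        rabbitmqctl start_app     # rejoin the cluster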

    Read the article

  • Troubleshooting failover cluster problem in W2K8 / SQL05

    - by paulland
    I have an active/passive W2K8 (64-bit) cluster pair running SQL05 Standard. Shared storage is on an HP EVA SAN (FC). I recently expanded the filesystem on the active node for a database, adding a drive designation. The shared storage drives are designated as F:, I:, J:, L: and X:, with SQL filesystems on the first four and X: used as a backup destination. Last night, as part of a validation process (the passive node had been offline for maintenance), I moved the SQL instance to the other cluster node. The database in question immediately moved to Suspect status. A review of the system logs showed that the database would not load because the file "K:\SQLDATA\whatever.ndf" could not be found. (Note that we do not have a K: drive designation.) A review of the J: storage drive showed zero contents -- nothing -- and this is where "whatever.ndf" should have been. Hmm, I thought, problem with the server; I'll just move SQL back to the other server and figure out what's wrong. Still no database. Suspect. Uh-oh. "Whatever.ndf" had gone into the bit bucket. I finally decided to just restore from the backup (which had been taken immediately before the validation test), so nothing was lost but a few hours of sleep. The questions:

    (1) Why did the passive node think the whatever.ndf files were supposed to go to drive K:, when this drive didn't exist as a resource on the active node?
    (2) How can I get the cluster nodes re-synced so that failover can be accomplished?

    I don't know that there wasn't a K: drive as a cluster resource at some time in the past, but I do know that this drive did not exist on the original cluster at the time of the resource move.
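    For question (2), it may help to compare what each node believes about the disk resources before the next failover test. The built-in cluster.exe can dump this; the resource name below is a placeholder:

        REM List every resource with its group, owning node and state
        cluster res

        REM Dump the private properties (disk signature, path/letter
        REM mapping) of one physical disk resource
        cluster res "Cluster Disk J:" /priv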

    Read the article

  • Do I need to enable DRS to use Dynacache in Websphere Application Server Cluster

    - by rabs
    We are running a WebSphere Commerce application with several WebSphere application servers configured in a cluster. We are using DynaCache, so each server in the cluster has its own cached objects in its own JVM, and we use CACHEIVL with database triggers for all cache invalidations. I was reading http://www.ibm.com/developerworks/websphere/library/techarticles/0603_crick/0603_crick.html and found an interesting sentence: "Furthermore, cache replication is necessary to ensure that invalidation messages are shared between the servers in a cluster." After thinking about this, it would make sense that for the invalidation to work it would need to be triggered on all the servers in the cluster, but I couldn't find confirmation of this in the mountains of IBM doco. Does anyone know if you can use trigger-based cache invalidation (through CACHEIVL) when you have several clustered application servers, each with its own cache, without DRS turned on? Or do I need DRS for this to work?

    Read the article

  • Can't enable apache mod on emerge

    - by ranisalt
    I want to add mod_proxy and mod_proxy_http to the Apache server on my Gentoo box, but apparently some file with higher priority on the system is disabling the mods and preventing me from installing them. I am currently editing the /usr/portage/profiles/base/make.defaults file, but it gets updated (and my changes lost) every time I update the system, so I have to edit it again on every system update or Apache reinstall. Besides that, I have already added the dependencies to the /etc/portage/package.use file:

        www-servers/apache proxy proxy_http

    What other files do I have to change, or which flags should I check, so I can enable proxy support without having to edit files again every time?
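    The supported place for these module flags is make.conf, which survives updates, rather than the profile's make.defaults. A sketch, assuming the stock Gentoo Apache ebuild:

        # /etc/portage/make.conf -- appended to the profile's defaults
        APACHE2_MODULES="${APACHE2_MODULES} proxy proxy_http"

        # rebuild Apache with the new modules
        emerge --ask www-servers/apache

        # /etc/conf.d/apache2 -- ensure the define that loads them is set
        APACHE2_OPTS="... -D PROXY"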

    Read the article

  • Buildcraft Minecraft mod causing crashes, NVIDIA-304xx Linux Drivers, KDE

    - by wolfo9999
    All is perfect, 96 fps average, until an item tries to enter a BuildCraft pipe; then Tekkit instantly crashes, with no error dialog. I have no idea what logs to look at for information on the crash, or how to fix it. OpenGL is enabled in KDE, and the driver package is nvidia-304xx. lspci output:

        VGA compatible controller: NVIDIA Corporation GT218 [Quadro FX 380M] (rev a2) (prog-if 00 [VGA controller])
            Subsystem: Hewlett-Packard Company Device 172b
            Flags: bus master, fast devsel, latency 0, IRQ 16
            Memory at d2000000 (32-bit, non-prefetchable) [size=16M]
            Memory at c0000000 (64-bit, prefetchable) [size=256M]
            Memory at d0000000 (64-bit, prefetchable) [size=32M]
            I/O ports at 5000 [size=128]
            [virtual] Expansion ROM at d3080000 [disabled] [size=512K]
            Capabilities: <access denied>
            Kernel driver in use: nvidia

    Read the article

  • after enabling mod ssl apache stops listening on port 80

    - by zensys
    I have an Ubuntu 12.04 server with Zend Server CE installed. I now wanted to enable HTTPS, but after the first steps according to the documentation, 'a2enmod ssl' and an Apache service restart, Apache does not listen on 443, but not on 80 either, according to netstat -tap | grep http(s). This is what I see in my error log, but I can't make much of it:

        [Fri May 25 19:52:39 2012] [notice] caught SIGTERM, shutting down
        [Fri May 25 19:52:41 2012] [warn] Init: Session Cache is not configured [hint: SSLSessionCache]
        [Fri May 25 19:52:41 2012] [notice] ModSecurity for Apache/2.6.3 (http://www.modsecurity.org/) configured.
        [Fri May 25 19:52:41 2012] [notice] ModSecurity: APR compiled version="1.4.5"; loaded version="1.4.6"
        [Fri May 25 19:52:41 2012] [warn] ModSecurity: Loaded APR do not match with compiled!
        [Fri May 25 19:52:41 2012] [notice] ModSecurity: PCRE compiled version="8.12"; loaded version="8.12 2011-01-15"
        [Fri May 25 19:52:41 2012] [notice] ModSecurity: LUA compiled version="Lua 5.1"
        [Fri May 25 19:52:41 2012] [notice] ModSecurity: LIBXML compiled version="2.7.8"
        [Fri May 25 19:53:11 2012] [notice] ModSecurity for Apache/2.6.3 (http://www.modsecurity.org/) configured.
        [Fri May 25 19:53:11 2012] [notice] ModSecurity: APR compiled version="1.4.5"; loaded version="1.4.6"
        [Fri May 25 19:53:11 2012] [warn] ModSecurity: Loaded APR do not match with compiled!
        [Fri May 25 19:53:11 2012] [notice] ModSecurity: PCRE compiled version="8.12"; loaded version="8.12 2011-01-15"
        [Fri May 25 19:53:11 2012] [notice] ModSecurity: LUA compiled version="Lua 5.1"
        [Fri May 25 19:53:11 2012] [notice] ModSecurity: LIBXML compiled version="2.7.8"
        [Fri May 25 19:53:12 2012] [notice] Apache/2.2.22 (Ubuntu) PHP/5.3.8-ZS5.5.0 configured -- resuming normal operations

    and here is my httpd.conf:

        # Name based virtual hosting
        <VirtualHost *:80>
            ServerName www-redirect
            KeepAlive Off
            RewriteEngine On
            RewriteCond %{HTTP_HOST} ^[^\./]+\.[^\./]+$
            RewriteRule ^/(.*)$ http://www.%{HTTP_HOST}/$1 [R=301,L]
        </VirtualHost>

        Alias /shared/js "/home/web/library/js"
        Alias /shared/image "/home/web/library/image"

        <IfModule mod_expires.c>
            <FilesMatch "\.(jpe?g|png|gif|js|css|doc|rtf|xls|pdf)$">
                ExpiresActive On
                ExpiresDefault "access plus 1 week"
            </FilesMatch>
        </IfModule>

        ErrorLog ${APACHE_LOG_DIR}/error.log
        LogLevel warn

        <Directory />
            Options FollowSymLinks
            AllowOverride None
            Order allow,deny
            allow from all
        </Directory>

        <Location />
            RewriteEngine On
            RewriteCond %{REQUEST_FILENAME} -s [OR]
            RewriteCond %{REQUEST_FILENAME} -l [OR]
            RewriteCond %{REQUEST_FILENAME} -d
            RewriteRule ^.*$ - [NC,L]
            RewriteRule ^.*$ /index.php [NC,L]
        </Location>

    netstat -tap gives:

        Proto Recv-Q Send-Q Local Address           Foreign Address         State       PID/Program name
        tcp        0      0 *:mysql                 *:*                     LISTEN      765/mysqld
        tcp        0      0 *:pop3                  *:*                     LISTEN      744/dovecot
        tcp        0      0 *:imap2                 *:*                     LISTEN      744/dovecot
        tcp        0      0 *:http                  *:*                     LISTEN      19861/apache2
        tcp        0      0 *:smtp                  *:*                     LISTEN      30365/master
        tcp        0      0 *:4444                  *:*                     LISTEN      634/sshd
        tcp        0      0 *:kamanda               *:*                     LISTEN      1167/lighttpd
        tcp        0      0 *:imaps                 *:*                     LISTEN      744/dovecot
        tcp        0      0 *:amandaidx             *:*                     LISTEN      1167/lighttpd
        tcp        0      0 localhost.loc:amidxtape *:*                     LISTEN      19861/apache2
        tcp        0      0 *:pop3s                 *:*                     LISTEN      744/dovecot
        tcp        0    384 mail.mysite.:4444       231.214.14.37.dyn:41909 ESTABLISHED 19039/sshd: web [pr
        tcp        0      0 localhost.localdo:mysql localhost.localdo:48252 ESTABLISHED 765/mysqld
        tcp        0      0 mail.mysite.:http       231.214.14.37.dyn:54686 TIME_WAIT   -
        tcp        0      0 mail.mysite.:4444       231.214.14.37.dyn:42419 ESTABLISHED 19372/sshd: web [pr
        tcp        0      0 localhost.localdo:48252 localhost.localdo:mysql ESTABLISHED 19884/auth
        tcp        0      0 mail.mysite.:http       231.214.14.37.dyn:54685 TIME_WAIT   -
        tcp6       0      0 [::]:pop3               [::]:*                  LISTEN      744/dovecot
        tcp6       0      0 [::]:imap2              [::]:*                  LISTEN      744/dovecot
        tcp6       0      0 [::]:smtp               [::]:*                  LISTEN      30365/master
        tcp6       0      0 [::]:4444               [::]:*                  LISTEN      634/sshd
        tcp6       0      0 [::]:imaps              [::]:*                  LISTEN      744/dovecot
        tcp6       0      0 [::]:pop3s              [::]:*                  LISTEN      744/dovecot

    Does anyone know what I am doing wrong? Perhaps I should take some additional steps to make Apache listen on 443, but that it stops listening on 80 altogether I can't understand.
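    On Ubuntu the Listen directives live in /etc/apache2/ports.conf, not in the vhost files, and a2enmod ssl relies on them being intact. A first thing to check, sketched from the stock 12.04 file (verify whether Zend Server relocated the Apache configuration tree before assuming these paths):

        # /etc/apache2/ports.conf
        NameVirtualHost *:80
        Listen 80

        <IfModule mod_ssl.c>
            Listen 443
        </IfModule>

    The "Session Cache is not configured [hint: SSLSessionCache]" warning is separate and usually means the stock ssl.conf (which carries the SSLSessionCache line) isn't being included either, which points the same way: part of the standard config tree isn't being read.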

    Read the article

  • Virtualhost setup for Ruby on Rails application (mod passenger)

    - by Ingo86
    Hi all, I'm trying to install Redmine under Apache. The Apache server works on a local network, and my setup consists of a single virtual host. I can get into different directories simply by using the corresponding path: http://ip_address/folder_of_the_project_1. How can I set up the virtual host to make Redmine work in this situation? Here is my current virtualhost setup:

        NameVirtualHost *
        <VirtualHost *>
            ServerAdmin webmaster@localhost
            DocumentRoot /var/www/
            RailsBaseURI /redmine

            <Directory />
                Options FollowSymLinks
                AllowOverride None
            </Directory>

            <Directory /var/www/>
                Options FollowSymLinks MultiViews
                AllowOverride None
                Order allow,deny
                allow from all
            </Directory>

            <Directory /var/www/redmine/public>
                Options FollowSymLinks
                AllowOverride None
                Order allow,deny
                Allow from all
            </Directory>

            ScriptAlias /cgi-bin/ /usr/lib/cgi-bin/
            <Directory "/usr/lib/cgi-bin">
                AllowOverride None
                Options +ExecCGI -MultiViews +SymLinksIfOwnerMatch
                Order allow,deny
                Allow from all
            </Directory>

            ErrorLog /var/log/apache2/error.log

            # Possible values include: debug, info, notice, warn, error, crit,
            # alert, emerg.
            LogLevel warn

            CustomLog /var/log/apache2/access.log combined
            ServerSignature On

            Alias /doc/ "/usr/share/doc/"
            <Directory "/usr/share/doc/">
                Options Indexes MultiViews FollowSymLinks
                AllowOverride None
                Order deny,allow
                Deny from all
                Allow from 127.0.0.0/255.0.0.0 ::1/128
            </Directory>
        </VirtualHost>

    Thank you, Ingo86
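    One thing worth checking: for a Passenger sub-URI deployment, RailsBaseURI expects DocumentRoot/redmine to be the application's public directory, not the whole application tree. A sketch of the usual layout (the /opt path is an assumption):

        # Keep the application itself outside the web root...
        mv /var/www/redmine /opt/redmine

        # ...and expose only its public/ directory under DocumentRoot
        ln -s /opt/redmine/public /var/www/redmine

        # RailsBaseURI /redmine (already in the vhost) then serves
        # http://ip_address/redmine
        apache2ctl graceful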

    Read the article

  • How can I override mod-php5's .php mapping to php4-cgi per VirtualHost or Directory?

    - by geocoo
    I am running Debian Linux with apache2 and libapache2-mod-php5 5.3.3-7. I have one VirtualHost which requires PHP 4, so I researched and compiled php4-cgi. However, I cannot seem to:

    1. Override mod-php5's mapping of .php in that vhost (or even globally, without disabling PHP completely).
    2. Even find where that mapping is made, in the hope of disabling it and enabling mod-php5 or php4-cgi per vhost.

    This is my php4-cgi mapping (inside the one PHP 4 vhost):

        ScriptAlias /php4 /usr/local/php4/bin

        <Directory /usr/local/php4/bin>
            Options +ExecCGI +FollowSymLinks
        </Directory>

        <Directory /www/test>
            AddHandler php4-cgi-script .php
            Action php4-cgi-script /php4/php
            Options +ExecCGI
        </Directory>

    This does not work; mod-php5 still runs all .php files in that vhost/directory. If I change the file extension in the AddHandler above from .php to .php4, then .php4 files do run php4-cgi as expected, but I can't change all the files in the app to .php4. I thought maybe I could disable mod-php5's mapping in my vhost or directory and then do my CGI config (as above), but many combinations of these in different contexts did not work:

        RemoveHandler .php
        RemoveType .php
        php_flag engine off   (this seems to even disable my php4-cgi, so that won't work)

    The only other place I can find any mapping is in /etc/mime.types, but commenting out the relevant lines and restarting apache2 does not affect mod-php5's .php mapping. I have searched as much as I can; it is now a mystery to me. Any help or direction would be greatly appreciated.

    Read the article

  • HPC Cluster planning workflow?

    - by Veronica
    After three days of intensive Google searching, I have not found any high-level workflow for how to build a low-profile, cheap computing cluster (we are not interested in HA yet). This is just a front-end plus a node for now. We want to start small with Rocks Cluster, provide a web-based server for offering services, and then add nodes as our budget increases. We're a small company, so we don't have enough human resources to implement it smoothly. Here are some facts about our environment:

    - Our hardware is not constant (we will add nodes).
    - Our workload will vary (on the order of 200 MB to 1 TB).
    - Our software will change (scientific applications for data mining).

    Do you know of any visual workflow, worksheet or chart describing the general steps necessary to begin our cluster planning?

    Read the article

  • How to mod an INF: replacing 32-bit DLLs with 64-bit ones

    - by Nime Cloud
    I've got a driver setup for 32-bit: an INF file and an x86 folder with two 32-bit DLLs. I need to replace these 32-bit DLL files with 64-bit ones. I tried simply overwriting the 32-bit files, but no luck. How can I make a 64-bit version of the driver?

    Update: I tried the original setup files on 32-bit Windows XP; setup asks for WdfCoinstaller01009.dll, and I simply browse to and point at the file from somewhere on XP.

        ;-------------- WDF Coinstaller installation
        [DestinationDirs]
        CoInstaller_CopyFiles = 11

        [silabser.Dev.NT.CoInstallers]
        AddReg=CoInstaller_AddReg
        CopyFiles=CoInstaller_CopyFiles

        [CoInstaller_CopyFiles]
        WdfCoinstaller01009.dll

        [SourceDisksFiles]
        WdfCoinstaller01009.dll=1

        [CoInstaller_AddReg]
        HKR,,CoInstallers32,0x00010000, "WdfCoinstaller01009.dll,WdfCoInstaller"

        [silabser.Dev.NT.Wdf]
        KmdfService = silabser, silabser_wdfsect

        [silabser_wdfsect]
        KmdfLibraryVersion = 1.9
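    For reference, 64-bit driver packages normally select files through architecture-decorated INF sections instead of overwritten binaries, and the coinstaller itself must be the amd64 build of WdfCoinstaller01009.dll from the WDF redistributables. A hedged sketch of the kind of sections involved; the hardware ID and string tokens are placeholders, not taken from the real silabser INF:

        [Manufacturer]
        %MfgName% = Models, NTx86, NTamd64

        [Models.NTx86]
        %DeviceDesc% = silabser.Dev, USB\VID_xxxx&PID_xxxx

        [Models.NTamd64]
        ; Same hardware ID, but installed from the 64-bit file set
        %DeviceDesc% = silabser.Dev, USB\VID_xxxx&PID_xxxx

        [SourceDisksNames.amd64]
        1 = %DiskName%,,,\amd64    ; folder holding the 64-bit DLLs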

    Read the article

  • Apache Mod SVN Access Forbidden

    - by Cerin
    How do you resolve the error svn: access to '/repos/!svn/vcc/default' forbidden? I recently upgraded a Fedora 13 server to 16, and now I'm trying to debug an access error with a Subversion server running on Apache with mod_dav_svn. Running:

        svn ls http://myserver/repos/myproject/trunk

    lists the correct files, but when I go to commit, I get the error:

        svn: access to '/repos/!svn/vcc/default' forbidden

    My Apache virtualhost for svn is:

        <VirtualHost *:80>
            ServerName svn.mydomain.com
            ServerAlias svn
            DocumentRoot "/var/www/html"

            <Directory />
                Options FollowSymLinks
                AllowOverride None
            </Directory>

            <Directory "/var/www/html">
                Options Indexes FollowSymLinks
                AllowOverride None
                Order allow,deny
                Allow from all
            </Directory>

            <Location /repos>
                Order allow,deny
                Allow from all

                DAV svn
                SVNPath /var/svn/repos
                SVNAutoversioning On

                # Authenticate with Kerberos
                AuthType Kerberos
                AuthName "Subversion Repository"
                KrbAuthRealms mydomain.com
                Krb5KeyTab /etc/httpd/conf/krb5.HTTP.keytab

                # Get people from LDAP
                AuthLDAPUrl ldap://ldap.mydomain.com/ou=people,dc=mydomain,dc=corp?uid

                # For any operations other than these, require an authenticated user.
                <LimitExcept GET PROPFIND OPTIONS REPORT>
                    Require valid-user
                </LimitExcept>
            </Location>
        </VirtualHost>

    What's causing this error?

    EDIT: In my /var/log/httpd/error_log I'm seeing a lot of these:

        [Fri Jun 22 13:22:51 2012] [error] [client 10.157.10.144] ModSecurity: Warning. Operator LT matched 20 at TX:inbound_anomaly_score. [file "/etc/httpd/modsecurity.d/base_rules/modsecurity_crs_60_correlation.conf"] [line "31"] [msg "Inbound Anomaly Score (Total Inbound Score: 15, SQLi=, XSS=): Method is not allowed by policy"] [hostname "svn.mydomain.com"] [uri "/repos/!svn/act/0510a2b7-9bbe-4f8c-b928-406f6ac38ff2"] [unique_id "T@Sp638DCAEBBCyGfioAAABK"]

    I'm not entirely sure how to read this, but I'm interpreting "Method is not allowed by policy" as meaning that some security Apache module might be blocking access. How do I change this?
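    The ModSecurity message fits: Subversion commits use non-standard WebDAV methods (MKACTIVITY, CHECKOUT, MERGE, ...) that the core rule set's HTTP policy rejects. One hedged workaround is switching the rule engine off for the repository location only; the finer-grained fix is adding those methods to the allowed-methods list in the CRS policy configuration:

        <Location /repos>
            <IfModule security2_module>
                # SVN's WebDAV methods trip "Method is not allowed by policy"
                SecRuleEngine Off
            </IfModule>
            ...
        </Location>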

    Read the article

  • Putting our OLTP and OLAP services on the same cluster

    - by Dynamo
    We're currently in a bit of a debate about what to do with our scattered SQL environment. We are setting up a cluster for our data warehouses for sure and are now in the process of deciding if our OLTP databases should go on the same one. The cluster will be active/active with database services running on one node and reporting and analytical services on the other node. From a technical standpoint I don't see an issue here. With the services being run on different nodes they shouldn't compete too heavily for resources. The only physical resource that may be an issue would be the shared disk space. Our environment is also quite small. Our biggest OLAP database at the moment is only about 40GB and our OLTP are all under 10GB. I see a potential political issue here as different groups are involved but I'm just strictly wondering if there would be any major technical issues that could arise from this setup.

    Read the article

  • Hyper V cluster - one VM won't migrate

    - by Chris W
    We have a failover cluster built on 6 blades, each running Hyper-V on Server 2008 R2. We've got a number of VMs running that all have the same basic config: VHD stored on a cluster shared volume, and 2 virtual NICs (1 for the LAN connection and 1 for the SAN connection). All of our VMs will happily migrate between any of the blades apart from one single VM, which runs fine on its current blade but will not migrate to any other location. What could be the cause, or where should I look to get a detailed error message, as I can't seem to find much information logged in any of the logs?

    Edit: I know the usual culprit is mismatched resource names. We've already been there, with the NICs named differently on some of the blades. As far as we can tell, everything now looks identical on each bit of metal.
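    When the event logs are quiet, the cluster debug log usually is not. On 2008 R2 it can be collected from every node in one go; the folder and group name below are placeholders:

        Import-Module FailoverClusters

        # Write each node's cluster.log into one folder, then search it for
        # the VM group's name around the time of the failed migration
        Get-ClusterLog -Destination C:\ClusterLogs

        # Compare the VM group's resources and settings across nodes
        Get-ClusterGroup "Problem VM" | Get-ClusterResource | Format-List *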

    Read the article

  • Can I use a Windows 2008 R2 cluster for file redundancy?

    - by JERiv
    I'm researching a server clustering architecture as a redundancy and backup solution for a client, and something that isn't made clear is whether or not I can use server clustering to replace a file server plus backup solution. Forgive my elementary understanding of server clustering, but suppose:

    - 2 sites (NJ, CA)
    - Identical servers at each site, set up as remote-site cluster nodes with Windows Server 2008 R2 Enterprise
    - Services: File, Terminal, AD, and maybe DNS

    Will the following be true?

    - Files (including data drives) will be synced between the two servers, eliminating the need for third-party backup/mirroring software to sync/backup files.
    - Also, supposing I use roaming profiles with folder redirection, how will client computers in the WAN access their data through the cluster (i.e. will they automatically choose the best route)?

    Read the article

  • OCFS2 Now Certified for E-Business Suite Release 12 Application Tiers

    - by sergio.leunissen
    Steven Chan writes that OCFS2 is now certified for use as a clustered filesystem for sharing files between all of your E-Business Suite application tier servers.  OCFS2 (Oracle Cluster File System 2) is a free, open source, general-purpose, extent-based clustered file system which Oracle developed and contributed to the Linux community.  It was accepted into Linux kernel 2.6.16.OCFS2 is included in Oracle Enterprise Linux (OEL) and supported under Unbreakable Linux support.

    Read the article

  • Focus On Systems Admins and Developers

    - by rickramsey
    Even if you're not going to Oracle Open World, you might find it interesting to hear what the different technology groups at Oracle are going to be talking about. And if you are going, here's your Systems schedule: Note: all links go to PDF files. Focus On: Oracle Linux Focus On: Oracle Solaris Focus On: Oracle Solaris Cluster Focus On: Oracle Solaris Studio Focus On: Desktop Virtualization Focus On: Oracle VM Server Virtualization Focus On: SPARC Servers Focus On: Storage Focus On: SPARC Supercluster - Rick Website Newsletter Facebook Twitter

    Read the article

  • The use of mod operators in Ada

    - by maddy
    Hi all, can anyone please tell me the usage of the following declarations shown below? I am a beginner in the Ada language. I have tried the internet, but what I found was not clear enough.

        type Unsigned_4 is mod 2 ** 4;
        for Unsigned_4'Size use 4;
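    Briefly: the first line declares a 4-bit modular type (an unsigned integer whose arithmetic wraps around modulo 16), and the second forces its representation to exactly 4 bits, which matters in packed records and arrays. A small sketch (the procedure name is made up):

        with Ada.Text_IO; use Ada.Text_IO;

        procedure Mod_Demo is
           type Unsigned_4 is mod 2 ** 4;   -- values 0 .. 15
           for Unsigned_4'Size use 4;       -- exactly 4 bits when packed

           X : Unsigned_4 := 15;
        begin
           X := X + 1;                      -- no overflow: wraps around to 0
           Put_Line (Unsigned_4'Image (X)); -- prints " 0"
        end Mod_Demo;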

    Read the article
