Search Results

Search found 15129 results on 606 pages for 'orientation changes'.

Page 453/606 | < Previous Page | 449 450 451 452 453 454 455 456 457 458 459 460  | Next Page >

  • from svn to git (+ LDAP + password-less updates + passworded access control)

    - by Jayen
    We have an SVN setup and there are some things we dislike about it and some things we like about it. We want to move to git, but we're not sure exactly what setup will work for us. We're currently using SVN (w/ Authz) + Apache (w/ WebDAV & LDAP).

    1. Hook to update the live site [like]
    2. Live site update requires no additional interaction [like]
    3. Live site update uses stored password [dislike]
    4. Commits require centralized-password authentication [like]
    5. Commit from live site changes stored credentials [dislike]
    6. Access control (per repository) for commits [like]

    Point 5 above is the one that keeps stuffing us up. Someone makes a commit from the live site and then the hook breaks. We're thinking to use gitosis/gitolite to get access control, but as they use ssh keys, we won't be requiring passwords. We're also thinking to use git-http-backend, and use Apache for authentication, but then do we lose access control? Can the live site be automatically updated from a hook if Apache requires authentication? Can we combine git-http-backend and gitosis/gitolite somehow? Can we store http credentials with git?
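
    On the last point, a minimal sketch, assuming a git version with credential helpers (1.7.9 or later); the URL and deploy account are illustrative. Git can store or cache HTTP credentials so a hook-triggered pull on the live site needs no prompt:

      # On the live site, as the deploy user: remember the HTTP credentials once
      git config --global credential.helper store
      git pull https://deploy@git.example.com/site.git    # prompts this one time, then stores them in ~/.git-credentials

      # Or keep them only in memory for an hour instead of on disk
      git config --global credential.helper 'cache --timeout=3600'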

    Read the article

  • Computer does not switch on after power outage

    - by cristian
    The other day my PC was switched off by a power outage. Since then the computer will not turn on again - no sign of life, it seems completely dead. I did several tests: changed the power outlet, disconnected the wires, and reseated the cards, but nothing changes. What can I do? Could the hardware have been damaged by the power outage? Note: the voltage drop was not caused by a lightning strike, so I would rule out storm damage (burnt boards, etc.). Thanks very much.

    Read the article

  • Recommended programming language for linux server management and web ui integration

    - by Brendan Martens
    I am interested in making an in-house web UI to ease some of the management tasks I face administrating many servers; think Canonical's Landscape. This means doing things like applying package updates simultaneously across servers, perhaps installing a custom .deb (I use Ubuntu/Debian), reviewing server logs, executing custom scripts, and viewing status information for all my servers. I hope to be able to reuse existing command line tools instead of rewriting the exact same operations in a different language myself. I really want to develop something that allows me to continue managing at the ssh level but offers the power of a web interface for easily applying the same infrastructure-wide changes. They should not be mutually exclusive. What are some recommended programming languages to use for doing this kind of development and tying it into a web UI? Why do you recommend the language(s) you do? I am not an experienced programmer, but view this as an opportunity to scratch some of my own itches as well as become a better programmer. I do not care specifically if one language is harder than another, but am more interested in picking the best tools for the job from the beginning. Feel free to recommend any existing projects that already integrate management of many systems into a single cohesive web UI, except Landscape (not free), Ebox (the Ebox control center is not free) and Webmin (I don't like it; it feels clunky, does not integrate well with the "debian way" of maintaining a server, imo, and only manages one system). Thanks for any ideas! Update: I am not looking to reinvent the wheel of systems management; I just want to "glue" many preexisting and excellent tools together where possible and appropriate. This is why I wonder which languages can interact well with pre-existing command line tools, while making them manageable with a web UI.

    Read the article

  • Windows Server 2008R2 Virtual Lab Activation strategies?

    - by William Hilsum
    I have an ESXi server that I use for testing; however, I often need to create additional Windows Server virtual machines. Typically, if I do not need a VM for more than 30 days, I simply do not activate. However, I have been doing a lot of HA/DRS testing recently and I have had a few servers up for more than this time. I have an MSDN account with Microsoft and have already received extra keys for Windows Server 2008 R2. I am doing nothing illegal and I am sure if I asked, they would issue more - but I do not want to tempt fate! I have got 3 different "activated" Windows snapshots I can get to at any time. If I try to clone these machines, I get the usual "did you copy or move this VM" message. If I choose copy, as far as I can see, it changes the BIOS ID and NIC MACs, which is enough to disable activation. If I choose move, it keeps the activation fine (obviously, I know to change the NIC MAC - I believe I can leave the BIOS ID without problems). However, either of these options keeps the same SID for the computer and user accounts. After the activation period has expired, as far as I can see, all that happens is that optional updates do not work - the normal updates seem to work fine. Based on this, as you can easily get into Windows when not activated without any sort of workaround, I was wondering if it is OK just to leave a machine unactivated? (However, I would obviously prefer it to be activated!) Alternatively, how dangerous is it to run multiple machines in a non-domain environment with the same SID? I am just interested to know if anyone can recommend a strategy for me. I have only found one solution that deals with bypassing activation - I am not interested in doing anything remotely dodgy... at a stretch, I am happy to rearm (I have never needed to keep a server past 100 days), but I would rather have a proper strategy in place.

    Read the article

  • Ubuntu in VirtualBox File Modified Time in Future and PHP slow file operations

    - by user1750
    For some reason, some of my files have a last modified date in the future. In addition to this, file operations in PHP are SUPER slow. For example, rebuilding the Symfony2 cache can take over 40 seconds (it takes 1-2 on my MacBook Pro). Notice the time for ListingsCRUDController.php - it just says "2012". In order to see the date more clearly I ran ls --time-style="full-iso" -l. For some reason it shows that this file's last modified date is ~5 hours in the future. To make things more confusing, the system will intermittently speed up. Suddenly, my app will start serving requests in 1-2 seconds (down from 40 seconds) for no apparent reason. I mean I don't do anything to my code/system config - it just changes. Also, during a slow PHP request, the php5-fpm process (nginx) uses 100% of the CPU for the duration of the request. This is the second VM this has happened on and I need to know why it's doing this. It has become unusable.

    Information about my setup:
    - VirtualBox 4.2.0
    - Host: MacBook Pro
    - Guest: Ubuntu Server 12.04
    - Package dkms is installed
    - Timezones match for Ubuntu and PHP

    Things I've tried:
    - Both Apache and Nginx
    - APC enabled and disabled
    - Xdebug enabled and disabled
    - 1 processor up to 4 processors
    - 1gb memory up to 4gb memory
    - Installing Ubuntu using the regular kernel and the VM kernel
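
    If clock skew between host and guest is the culprit, it's cheap to confirm before digging further into PHP; a rough sketch, assuming ntpdate is installed and the host clock itself is sane:

      date                                   # what the guest thinks the time is
      sudo ntpdate -q pool.ntp.org           # query only: shows the offset without changing anything
      find . -newermt "now" -ls              # list files whose mtime is in the future
      sudo ntpdate pool.ntp.org              # one-off correction (stop ntpd first if it is running)
      sudo apt-get install ntp               # keep it in sync afterwards (or rely on the guest additions)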

    Read the article

  • How does one change the UUID of a Volume on Mac OS X 10.6?

    - by Emmel
    Does anyone know how to change the UUID of a Volume? The background for this question is that I have a duplicate UUID issue: I have /Volumes/OldMacHD with a UUID of XYZ. I have /Volumes/Mirror1 with a UUID of XYZ (same UUID! I bet that's because OldMacHD USED to be part of this mirror). I got these UUIDs via 'diskutil info /dev/thatdisknumber | grep UUID'. I'd like to change the UUID of 'Mirror1'. I discovered by chance the 'hfs.util' utility, since these are HFS volumes after all. The man page for hfs.util says that if you issue the -s flag, this changes the UUID. However, if you type hfs.util all by itself, it doesn't show you the -s option at all, just every option besides that! Grr. I tried it anyway: sudo /System/Library/Filesystems/hfs.fs/hfs.util -s /dev/disk4 (the raid volume). Nothing happens. No error message, no success message. UUID exactly the same. I tried it while the volume was unmounted. Any ideas?
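
    For reference, this is roughly the sequence I'd expect to need - unmount first, then point hfs.util at the volume - though the exact hfs.util invocation is an assumption, since the -s flag is essentially undocumented:

      diskutil info disk4 | grep -i UUID       # confirm the current volume UUID
      diskutil unmountDisk /dev/disk4          # hfs.util probably won't touch a mounted volume
      sudo /System/Library/Filesystems/hfs.fs/hfs.util -s disk4   # assumption: it may want the BSD name rather than /dev/disk4
      diskutil mountDisk /dev/disk4
      diskutil info disk4 | grep -i UUID       # check whether anything actually changed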

    Read the article

  • Backup, Migrate or Clone Failing CentOS 4 (LVM)

    - by Hegelworm
    Hello there, I've been running a BlueQuartz CentOS 4 system (Nuonce.net distro) for a few years now, and although the hard drive (Deskstar) has always been a bit noisy, on a few recent occasions I've heard it having trouble spinning up. Basically, I want to clone this drive to a similar sized one (80 gig). I've spent many hours reading up on dd, dd_rescue, rsync, Clonezilla and LVM mirroring, yet the sheer number of options and the nightmarish accounts have left me frozen - unable to make an informed decision as to how to start. I've made a few attempts. dd failed after about 2 hours, as, although the drives appeared to be identical on the surface (ATA Seagate Barracudas, Thai not Chinese), the destination drive is slightly smaller. My most recent attempt involved using a Debian CD to format the new drive and then rsync-ing everything over and editing the new drive's grub and fstab to reflect the changes. No joy here either, as I hadn't chosen LVM when partitioning the destination drive and it wouldn't boot. As you can probably tell, I'm out of my depth here, and a panic-invoking mixture of caution and frustration has prompted me to sign up here. The server itself, although not strictly a production environment, has a very specific installation of Festival, LAME and FFmpeg and provides the back-end for a text-to-speech jQuery plugin that I've built over the last 2 years. I'm also planning to rebuild the whole TTS system on Debian, as the existing CentOS system still has PHP4 etc. For now though, I'd really like to just shift everything over to a new drive. As this is my first post, please feel free to lay any house rules on me that I might've overlooked; I've been hovering around StackOverflow for a while now but have only just signed up. Many thanks.
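
    For what it's worth, because the destination disk is slightly smaller, a whole-drive dd can never fit; rebuilding the layout and copying at the filesystem level usually can. A rough sketch from a live CD, with the old disk on /dev/sda and the new one on /dev/sdb (device names, sizes and the new VG name are illustrative, and the old VolGroup00 name is a guess at the CentOS default):

      # Recreate the layout on the new disk: a /boot partition plus an LVM PV
      parted -s /dev/sdb mklabel msdos
      parted -s /dev/sdb mkpart primary ext3 1MiB 200MiB
      parted -s /dev/sdb mkpart primary ext3 200MiB 100%
      parted -s /dev/sdb set 2 lvm on
      pvcreate /dev/sdb2
      vgcreate vg_new /dev/sdb2
      lvcreate -L 70G -n root vg_new
      mkfs.ext3 /dev/sdb1
      mkfs.ext3 /dev/vg_new/root

      # Mount old and new roots and copy, preserving ownership, permissions and hard links
      mkdir -p /mnt/old /mnt/new
      vgchange -ay
      mount -o ro /dev/VolGroup00/LogVol00 /mnt/old    # old root LV; adjust to whatever lvscan shows
      mount /dev/vg_new/root /mnt/new
      rsync -aAXH /mnt/old/ /mnt/new/

      # Still to do by hand: copy /boot, fix /etc/fstab and grub.conf on the new disk
      # for the new VG name, and reinstall grub on /dev/sdb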

    Read the article

  • Intermittent PHP error: Undefined function <core function>

    - by Daniel
    In the last week I've been coming across an incredibly annoying error on one of our Slicehost slices. It appears that every now and then PHP will fail with a fatal error, saying a certain function is undefined. The function changes, but it is always a core PHP function, e.g. defined(), version_compare(), etc. This problem has occurred while using several different PHP applications - phpMyAdmin, my own custom-built apps, etc. - leading me to believe that the problem is not specific to the running code. Here are some details:

    - Debian Lenny
    - Apache 2.2.9
    - PHP 5.2.6-1+lenny4 with Suhosin-Patch (running eAccelerator 0.9.6)

    Apache and PHP are installed from Debian packages. Error logs show nothing out of the ordinary. I thought memory might be an issue, but free -m reports upwards of 100MB free almost all the time. Another thing I'm trying to investigate is whether the problem might be related to eAccelerator, but testing this theory is incredibly hard because the issue doesn't appear very often and I've been using eAccelerator for months on this install without any problems up until now. Has anyone ever come across anything like this? Why would PHP report undefined core functions?
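
    One way to test the eAccelerator theory without waiting weeks is to switch the cache off (rather than uninstalling it) and watch for recurrence; a sketch, assuming the extension has its own ini under /etc/php5/conf.d/ - on a hand-built eAccelerator the directive may live in php.ini instead:

      grep -rn eaccelerator /etc/php5/         # find where it is loaded and configured

      # Disable caching at runtime, then restart Apache
      echo 'eaccelerator.enable = 0' | sudo tee -a /etc/php5/conf.d/eaccelerator.ini
      sudo /etc/init.d/apache2 restart

      # Confirm the setting took effect (check a phpinfo() page for the Apache SAPI as well)
      php -i | grep -i eaccelerator.enable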

    Read the article

  • Outside VPN traffic not able to ping site-to-site VPN remote site

    - by Siriss
    We have two ASA 5510s running 8.4 in a site-to-site VPN setup. All internal traffic is working smoothly.

    - Site/Subnet A: 192.100.0.0 - local
    - Site/Subnet B: 192.200.0.0 - remote
    - VPN users: 192.100.40.0 - assigned by the ASA

    When you VPN into the network, all traffic hits Site A, and everything on subnet A is accessible. Site B, however, is completely inaccessible for VPN users. All machines on subnet B, the firewall itself, etc... are not reachable by ping or otherwise. I know I am missing a NAT rule, and in 8.2 it was easy as pie to set up using ASDM, but now I can't get it for the life of me, as 8.4 apparently made a lot of changes to NAT rules. I am not too comfortable in the ASA command line, but if there is a command I need to add, or if you could direct me to where I can add this in the 8.4 ASDM, I would really appreciate it. I have tried NAT Exempt, Static NAT, Static NAT Policies, etc... I think I tried all the options. I also might have my interfaces confused with the new look and feel of ASDM. Thank you much in advance and I hope I have been thorough enough.

    Read the article

  • LDAP ACLs with ldapmodify & .ldif file: grant user access only

    - by plaetzchen
    I want to change the settings of my new LDAP server so that only authenticated users of the server, and not anonymous clients, can read entries. Currently my olcAccess looks like this:

      olcAccess: {0} to attrs=userPassword,shadowLastChange by self write by anonymous auth by dn="cn=admin,dc=example,dc=com" write by * none
      olcAccess: {1} to * by self write by dn="cn=admin,dc=example,dc=com" write by * read

    I tried to change it like so:

      olcAccess: {0}to attrs=userPassword,shadowLastChange by self write by anonymous auth by dn="cn=admin,dc=example,dc=com" write by * none
      olcAccess: {1} to * by self write by dn="cn=admin,dc=exampme,dc=com" write by users read

    But that gives me no access at all. Can someone help me with this? Thanks.

    UPDATE: This is the log, read after the changes mentioned by userxxx:

      Sep 30 10:47:21 j16354 slapd[11805]: conn=1437 fd=28 ACCEPT from IP=87.149.169.6:64121 (IP=0.0.0.0:389)
      Sep 30 10:47:21 j16354 slapd[11805]: conn=1437 op=0 do_bind: invalid dn (pbrechler)
      Sep 30 10:47:21 j16354 slapd[11805]: conn=1437 op=0 RESULT tag=97 err=34 text=invalid DN
      Sep 30 10:47:21 j16354 slapd[11805]: conn=1437 op=1 UNBIND
      Sep 30 10:47:21 j16354 slapd[11805]: conn=1437 fd=28 closed
      Sep 30 10:47:21 j16354 slapd[11805]: conn=1438 fd=28 ACCEPT from IP=87.149.169.6:64122 (IP=0.0.0.0:389)
      Sep 30 10:47:21 j16354 slapd[11805]: conn=1438 op=0 do_bind: invalid dn (pbrechler)
      Sep 30 10:47:21 j16354 slapd[11805]: conn=1438 op=0 RESULT tag=97 err=34 text=invalid DN
      Sep 30 10:47:21 j16354 slapd[11805]: conn=1438 op=1 UNBIND
      Sep 30 10:47:21 j16354 slapd[11805]: conn=1438 fd=28 closed

    pbrechler should be a valid user but has no system user (we don't need one). The admin account doesn't work either.
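
    For what it's worth, the "invalid DN (pbrechler)" lines suggest the client is binding with a bare username rather than a full DN, which fails before the ACLs are even evaluated (and the dc=exampme spelling in the second rule set is worth double-checking, if that isn't just a transcription slip). A sketch of testing the bind and applying a corrected rule set - the ou=people DN and the LDIF file name are assumptions about your tree:

      # Test an authenticated bind with a full DN instead of a bare uid
      ldapwhoami -x -H ldap://localhost -D "uid=pbrechler,ou=people,dc=example,dc=com" -W

      # See which cn=config entry carries the ACLs, then apply the fixed olcAccess values from a file
      sudo ldapsearch -Y EXTERNAL -H ldapi:/// -b cn=config "(olcAccess=*)" dn olcAccess
      sudo ldapmodify -Y EXTERNAL -H ldapi:/// -f fix-acl.ldif    # fix-acl.ldif: changetype: modify / replace: olcAccess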

    Read the article

  • How to connect the virtual networks of vmware guests running on different hosts?

    - by gyrolf
    In a test setup, we are running several virtual machines on a single VMware Workstation host. All virtual machines are connected via a "host only" network. This runs fine up to 2 or 3 virtual machines (depending on the host hardware). To allow more virtual machines, we want to use more host machines.

    Details about the environment and applications:
    - Host PCs are running Windows XP in a corporate intranet.
    - VMware used is Workstation 6.5.
    - Guests are running Windows Server 2003.
    - All guests act as web servers.
    - One of the guests additionally acts as a Windows file server, offering shared folders for the other guests to connect to.

    Restrictions:
    - VMware guests shall not be visible from the intranet.
    - Changes to the host PC are restricted by corporate policy.
    - In the virtual network, no domain controller exists. All virtual machines are members of the same workgroup.
    - Running the virtual network as NAT is possible.
    - Port forwarding might be used if it does not conflict with ports used by the host PC.

    Looking for a solution, I found hints about using router or VPN software on the hosts, but without any details on how to set it up. (I found a similar question, Sharing the network between 2 VMware hosts, but the answer was not sufficient for me.)

    Read the article

  • chrooting php-fpm with nginx

    - by dragonmantank
    I'm setting up a new server with PHP 5.3.9 and nginx, so I compiled PHP with the php-fpm SAPI options. By itself it works great using the following server entry in nginx:

      server {
          listen 80;
          server_name domain.com www.domain.com;

          root /var/www/clients/domain.com/www/public;
          index index.php;

          log_format gzip '$remote_addr - $remote_user [$time_local] "$request" $status $bytes_sent "$http_referer" "$http_user_agent" "$gzip_ratio"';
          access_log /var/www/clients/domain.com/logs/www-access.log;
          error_log /var/www/clients/domain.com/logs/www-error.log error;

          location ~\.php$ {
              fastcgi_pass 127.0.0.1:9001;
              fastcgi_index index.php;
              fastcgi_param SCRIPT_FILENAME /var/www/clients/domain.com/www/public$fastcgi_script_name;
              fastcgi_param PATH_INFO $fastcgi_script_name;
              include /etc/nginx/fastcgi_params;
          }
      }

    It serves my PHP files just fine. For added security I wanted to chroot my FPM instance, so I added the following lines to my conf file for this FPM instance:

      # FPM config
      chroot = /var/www/clients/domain.com

    and changed the nginx config:

      # nginx config for chroot
      location ~\.php$ {
          fastcgi_pass 127.0.0.1:9001;
          fastcgi_index index.php;
          fastcgi_param SCRIPT_FILENAME www/public$fastcgi_script_name;
          fastcgi_param PATH_INFO $fastcgi_script_name;
          include /etc/nginx/fastcgi_params;
      }

    With those changes, nginx gives me a "File not found" message for any PHP scripts. Looking in the error log I can see that it's prepending the root path to my DOCUMENT_ROOT variable that's passed to fastcgi, so I tried to override it in the location block like this:

      fastcgi_param DOCUMENT_ROOT /www/public/;
      fastcgi_param SCRIPT_FILENAME $fastcgi_script_name;

    but I still get the same error, and the debug log shows the full, unchrooted path being sent to PHP-FPM. What am I missing to get this to work?
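
    Two things commonly bite here, so a hedged sketch of both (paths are illustrative): with chroot = /var/www/clients/domain.com, the SCRIPT_FILENAME that nginx passes has to be the absolute path as PHP will see it inside the jail, and the jail itself needs the runtime pieces PHP expects:

      # nginx side: pass chroot-relative absolute paths, and set them after any include
      # so nothing overrides SCRIPT_FILENAME afterwards
      #   fastcgi_param SCRIPT_FILENAME /www/public$fastcgi_script_name;
      #   fastcgi_param DOCUMENT_ROOT   /www/public;

      # FPM side: give the jail what PHP needs at runtime
      cd /var/www/clients/domain.com
      mkdir -p tmp var/lib/php/session dev etc
      mknod -m 666 dev/null    c 1 3
      mknod -m 444 dev/urandom c 1 9
      cp /etc/localtime /etc/resolv.conf etc/
      # session.save_path, upload_tmp_dir etc. in the pool config must also be chroot-relative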

    Read the article

  • Repairing Damage to VMWare Virtual Disk

    - by Lachlan McDonald
    Evening all, I've got a considerable problem I'm hoping to get some resolution on. I had two VMware 6.5 virtual machines, one running Ubuntu 9.10 and the other Ubuntu 10.04. I used 9.10 as a testing server, so I could install a LAMP environment to prepare some code. Over the months I took a number of snapshots of this VM just in case something went wrong, and did a full copy of the entire VM a month ago. I created the 10.04 VM when Lucid Lynx launched so I could continue development on a fresh install. To get the files over, I simply added the 9.10 virtual disk into the 10.04 VM, grabbed some of the files I needed, and dismounted it. Unknown to me at the time, the changes to the 9.10 virtual disk meant that I could no longer boot it with the 9.10 VM. I'd always get the "The parent virtual disk has been modified since the child was created." error. I decided this was a good time to back up all the critical files, but now whenever I open the 9.10 disk to get the data it isn't in the same state as it was earlier. My question is: is it possible that when I'm mounting the virtual disk I'm not seeing the most recent snapshot, or, in my blundering, have I lost the virtual disk? Cheers
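
    The "parent virtual disk has been modified" error is usually a CID mismatch between a snapshot delta and its parent, recorded in the plain-text descriptor of each .vmdk; inspecting the descriptors is read-only and can't make things worse, so it's a reasonable first step. A sketch, run in the 9.10 VM's directory:

      # Every descriptor carries a CID and, for snapshot deltas, a parentCID plus the parent's file name
      grep -Ha -e '^CID' -e '^parentCID' -e 'parentFileNameHint' *.vmdk

      # A delta whose parentCID no longer matches its parent's CID is the broken link in the chain;
      # the usual (back-up-everything-first) fix is editing parentCID in the child to match again.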

    Read the article

  • Apache doesn't immediately notice a change in the document root

    - by Tom
    We use Capistrano for website deployments, and our Apache document root is a symlink to a particular code release. The deployment procedure switches the symlink from the old release to the new release as the final step of the deployment. We are migrating our webservers from real servers running RHEL 5.6 to Amazon EC2 virtual machines running Ubuntu 11.10, and the new servers are suffering from a problem where Apache doesn't immediately notice the change to its document root when the symlink is switched. It can take a second or so (and I think I've even seen it take a couple of minutes). It's kind of like Apache has cached the physical path of the symlink for some time. Does anyone know some Apache settings I could look at to get it to "scan" for changes to its served files more quickly?

    Thoughts:
    - I read that the disks on virtual machines are much slower (since they are network attached storage). Perhaps the filesystem cache somehow works differently too? If so, is there anything that can be done?
    - The website runs PHP code. Perhaps there is some PHP config difference between RHEL and Ubuntu? I checked realpath_cache_ttl, but both servers have it commented out, e.g.:

        ; Duration of time, in seconds for which to cache realpath information for a given
        ; file or directory. For systems with rarely changing files, consider increasing this
        ; value.
        ; http://www.php.net/manual/en/ini.core.php#ini.realpath-cache-ttl
        ;realpath_cache_ttl = 120

    - We do use the APC opcode cache but don't think it's the issue, due to experimentation. The PHP code is in different file paths for each deployment and we ensure stat=1.
    - Here is a similar question that is very interesting: 294107 - but it doesn't provide an answer for me.

    One solution would be to reload Apache every time we modify the document root symlink. I'll do this if we can't find another solution.
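
    One workaround that sidesteps the caching question entirely is to make the switch atomic: build the new symlink under a temporary name and rename it over the live one, rather than re-pointing it in place. A sketch with illustrative paths:

      ln -sfn /var/www/releases/20121024-1530 /var/www/current.tmp
      mv -Tf /var/www/current.tmp /var/www/current     # atomic rename; readers never see a half-updated path

      # If stale paths still linger, a graceful reload recycles the workers (and their
      # per-process PHP realpath caches) without dropping connections
      apachectl -k graceful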

    Read the article

  • Setting Up VirtualHosts for a local RubyOnRails Application

    - by chris Frisina
    I want to set up a VirtualHost so that when I type the project name in the address bar, it goes to the home page of the project. The httpd-vhosts.conf files in both the XAMPP configuration and the Apache configuration contain:

      <VirtualHost project>
          ServerAdmin [email protected]
          ServerName project
          DocumentRoot /Users/path/to/project/public
          <Directory "/Users/path/to/project/public">
              Options Includes FollowSymLinks
              AllowOverride All
              Order deny,allow
              Allow from all
          </Directory>
          RewriteEngine On
          RewriteOptions inherit
      </VirtualHost>

    I have also tried with the path pointing directly to the project folder, rather than the public folder of the project. The httpd.conf of XAMPP and Apache contains:

      # Virtual hosts
      Include /Applications/XAMPP/etc/extra/httpd-vhosts.conf

    The /etc/hosts file:

      127.0.0.1 other1
      127.0.0.1 other2
      127.0.0.1 project

    I have tried:

      127.0.0.1 project
      127.0.0.1:3000 project
      127.0.0.1 project:3000

    I have also restarted the Rails server and the XAMPP server many times after changes. I have it working so that project:3000/ works, but how do I get it so that I don't have to specify the port number?

    Notes:
    - All other VHosts are working well.
    - Rails 3.2.8 (willing to change)
    - Ruby 1.9.3
    - WEBrick server
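
    Worth noting that /etc/hosts can only map names to addresses, never ports, which is why the :3000 variants can't work. One way to lose the port is to keep WEBrick on 3000 and have the Apache vhost reverse-proxy port 80 to it; a sketch, assuming mod_proxy and mod_proxy_http are enabled and the usual XAMPP-for-OS X paths (adjust if yours differ):

      # Inside the <VirtualHost project> block, in place of DocumentRoot/Rewrite:
      #   ProxyPreserveHost On
      #   ProxyPass        / http://127.0.0.1:3000/
      #   ProxyPassReverse / http://127.0.0.1:3000/

      # Check the proxy modules are loaded, then restart Apache
      /Applications/XAMPP/xamppfiles/bin/apachectl -M | grep proxy
      sudo /Applications/XAMPP/xamppfiles/xampp restart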

    Read the article

  • Route forwarded traffic through eth0 but local traffic through tun0

    - by Ross Patterson
    I have an Ubuntu 12.04/Zentyal 2.3 server configured with the WAN NATed on eth0, local interfaces eth1 and wlan0 bridged on br1 (on which DHCP runs), and an OpenVPN connection on tun0. I only need the VPN for some things running on the gateway itself, and I need to make sure that everything running on the gateway goes through the VPN's tun0.

      root:~# route
      Kernel IP routing table
      Destination     Gateway         Genmask         Flags Metric Ref    Use Iface
      default         gw...           0.0.0.0         UG    100    0        0 eth0
      link-local      *               255.255.0.0     U     1000   0        0 br1
      192.168.1.0     *               255.255.255.0   U     0      0        0 br1
      A.B.C.0         *               255.255.255.0   U     0      0        0 eth0

      root:~# ip route
      169.254.0.0/16 dev br1 scope link metric 1000
      192.168.1.0/24 dev br1 proto kernel scope link src 192.168.1.1
      A.B.C.0/24 dev eth0 proto kernel scope link src A.B.C.186

      root:~# ip route show table main
      169.254.0.0/16 dev br1 scope link metric 1000
      192.168.1.0/24 dev br1 proto kernel scope link src 192.168.1.1
      A.B.C.0/24 dev eth0 proto kernel scope link src A.B.C.D

      root:~# ip route show table default
      default via A.B.C.1 dev eth0

    How can I configure routing (or otherwise) such that all forwarded traffic for other hosts on the LAN goes through eth0, but all traffic for the gateway itself goes through the VPN on tun0? Also, since the OpenVPN client changes routing on startup/shutdown, how can I make sure that everything running on the gateway itself loses all network access if the VPN goes down and never goes out eth0?
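
    Policy routing can express this split directly: keep a plain-WAN default only in a secondary table used for traffic forwarded from br1, and strip the eth0 default out of the main table so the gateway's own traffic can only ever follow the VPN's routes (fail closed when tun0 is down). A sketch - VPN_SERVER_IP is a placeholder, and it assumes redirect-gateway def1 (or equivalent) in the OpenVPN client config:

      # 1. Secondary table for plain WAN access, consulted only for packets arriving on br1
      echo "100 wanonly" >> /etc/iproute2/rt_tables
      ip route add default via A.B.C.1 dev eth0 table wanonly
      ip rule add iif br1 lookup wanonly priority 100

      # 2. Drop the eth0 default from the main table: locally generated traffic now has no
      #    route to the internet unless tun0 is up and providing one
      ip route del default via A.B.C.1 dev eth0

      # 3. But keep a host route to the VPN endpoint so OpenVPN itself can still connect
      #    (use an IP, not a hostname, in the client config so no DNS is needed beforehand)
      ip route add VPN_SERVER_IP via A.B.C.1 dev eth0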

    Read the article

  • Can't Login to phpPgAdmin

    - by Devin
    I'm trying to set up phpPgAdmin on my test machine so that I can interface with PostgreSQL without always having to use the psql CLI. I have PostgreSQL 9.1 installed via the RPM repository, while I installed phpPgAdmin 5.0.4 "manually" (by extracting the archive from the phpPgAdmin website). For the record, my host OS is CentOS 6.2. I made the following configuration changes already:

    PostgreSQL:
    - Inside pg_hba.conf, I changed all METHODs to md5.
    - I gave the postgres account a password.
    - I added a new account named webuser with a password (note that I did not do anything else to the account, so I can't exactly say that I know what permissions it has).

    phpPgAdmin (config.inc.php):
    - Changed the line $conf['servers'][0]['host'] = ''; to $conf['servers'][0]['host'] = '127.0.0.1'; (I've also tried using localhost as the value there).
    - Set $conf['extra_login_security'] to false.

    Whenever I try to log in to phpPgAdmin, I get "Login failed", even if I use credentials that work in psql. I've tried to go through some of the steps noted in Question 3 in the FAQ, but it hasn't worked out well so far. It likely does not help that this is my first day working with PostgreSQL. I'm fairly familiar with MySQL, but I have to use PostgreSQL for the project I'm working on. Could anyone offer some help for how to set up phpPgAdmin on CentOS 6.2? If I've done something terribly wrong in my configuration so far, it's no big deal to blow something/everything away, as it's not like I've stored any data there yet! I appreciate any insight you may have!
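
    For the webuser account specifically, it's worth reproducing exactly what phpPgAdmin does - a password login over TCP to 127.0.0.1, which pg_hba.conf treats separately from local socket lines - before blaming the web side; a sketch (the PGDG service name postgresql-9.1 is an assumption about the RPM install):

      # Create the role with login rights and a password (or fix an existing one with ALTER ROLE)
      sudo -u postgres psql -c "CREATE ROLE webuser LOGIN PASSWORD 'secret';"

      # pg_hba.conf needs an md5 line for TCP connections from localhost, e.g.:
      #   host    all    all    127.0.0.1/32    md5
      sudo service postgresql-9.1 reload

      # Same kind of connection phpPgAdmin will make
      psql -h 127.0.0.1 -U webuser -d postgres -W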

    Read the article

  • Magento Apache Config & Memory Issues

    - by cheshirepine
    I have a Magento installation on a VPS that is giving me a headache. This particular VPS has a reasonable spec - 2 GB memory and 50 GB storage. It runs a single domain, with a single Magento install - and nothing else. About 5 months ago we started having issues. Every so often (about once every 2 or 3 weeks) the VPS would crash - all processes stopped and the only way to restart the container is via Virtuozzo. Now, however, it's 2 or 3 times a week. My VPS hosts confirm I am breaching the 2 GB memory limit, at which point all VPS processes are killed to stop it bringing the entire node down. I have not made any config changes to it at all - I was running New Relic on it for a short while, but have removed that in case it was contributing to the issues. I can see nothing in the logs which indicates an issue, and we have no cron jobs running at the time the crashes happen. The site generates steady, but not huge, amounts of traffic (averaging usually less than 100 visits per day). Is there anything in particular I should have done to the Apache or PHP configs to help? I'm not a massively experienced Apache admin, but know more than enough to solve most problems... Failing that, any other ideas that might help? Can't afford for this site to be down this much.
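
    A common culprit on a 2 GB VPS is Apache's MaxClients being far higher than memory allows once Magento pushes each child past 100 MB; a quick way to estimate a sane ceiling (a sketch - the process may be named httpd or apache2 depending on the install):

      # Average resident size per Apache child, in MB
      ps -ylC httpd --sort=rss | awk '$8 ~ /^[0-9]/ { n++; sum += $8 } END { if (n) printf "%d children, %.0f MB average\n", n, sum/n/1024 }'

      # Rough rule of thumb: MaxClients ~ (RAM left for Apache) / (average child size).
      # With MySQL on the same 2 GB box, prefork values in this region are more realistic:
      #   MaxClients           15
      #   MaxRequestsPerChild  500   # recycle children so they don't grow without bound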

    Read the article

  • Openfire on EC2 with Jingle

    - by Bjorn Roche
    I would like to run Openfire (or another XMPP server) on EC2. At the moment this is just for testing, so easy setup and configuration are important, as is low cost. At some point, however, if things go well, it will be important to scale this. Ideally, it would be nice not to have to switch software when the scaling happens, but if a switch needs to happen later it certainly can. My requirements are:

    - Basic XMPP services, including MUC and pubsub.
    - Logins controlled from an external API. Preferably, when a user attempts to connect, the XMPP server checks with the API to see if their username and password are correct, but I can also have the API keep the XMPP server up to date on new users, deleted users, password changes and so on. I see Openfire has a "user service" API. Not ideal, but it looks workable.
    - Jingle, including relay and STUN. It's not at all clear to me if the Jingle Nodes plugin takes care of this. I'm a bit confused about what's required to set this up, and I'd rather know in advance than be confused along the way :). E.g. it seems like STUN servers require more than one IP address.

    Can Openfire do all this for me, including STUN and media relay on a single machine? Is this hard to configure on EC2 with Openfire? What are the basic steps? Would this be easier with something else like, say, Tigase? What about the database - should I use Amazon's database service, or run a DB on the same machine? Would the server be compatible with a service like http://www.siteuptime.com/? Thanks!

    Read the article

  • Server 2008 R2 DNS Lockup / Stops Resolving Internet Names

    - by Richard Maynard
    We've deployed our first 2008 R2 server on a client site, replacing their existing 2003 DC. This server provides DNS resolution services to all client machines on that site for general internet usage. Since using the 2008 R2 DNS services we have noticed that every couple of days the DNS server starts timing out when requests to certain sites are made (Google is the only example I can provide at this time, although it seems to be larger sites with problems rather than small - CDN compatibility issue?). When you restart the DNS Server service, resolution returns to normal... but only for a day or so. Is anybody aware of any significant changes to the DNS server architecture or configuration out of the box in R2 that may explain this intermittent behaviour? I have already tried the fix listed here to no avail: http://weblogs.asp.net/owscott/archive/2009/09/15/windows-server-2008-r2-dns-issues.aspx

    The following PS command prompt info illustrates the issue:

      PS C:\Users\Administrator.UK> nslookup
      Default Server:  s8209001.uk.kingdomfaith.com
      Address:  10.1.3.4

      > www.google.com
      Server:  s8209001.uk.kingdomfaith.com
      Address:  10.1.3.4

      Non-authoritative answer:
      Name:      www.l.google.com
      Addresses: 66.102.9.99
                 66.102.9.104
                 66.102.9.105
                 66.102.9.103
                 66.102.9.147
      Aliases:   www.google.com

      > www.google.co.uk
      Server:  s8209001.uk.kingdomfaith.com
      Address:  10.1.3.4

      * s8209001.uk.kingdomfaith.com can't find www.google.co.uk: Server failed

    Read the article

  • Fedora 17 - Dropping into debug shell after attempted partitioning

    - by i.h4d35
    So I tried creating a new partition on Fedora 17 using fdisk, as follows:

      Command (m for help): n
      Command action
         e   extended
         p   primary partition (1-4)
      p
      Partition number (1-4): 1
      First cylinder (2048-823215039, default 2048):
      Using default value 2048
      Last cylinder or +size or +sizeM or +sizeK (1-9039, default 9039): +15G

    Once this was done, instead of formatting the partition I created, I ran the partprobe command to write the changes to the partition table. On rebooting the computer, it drops to the debug shell and gives me the following error:

      dracut warning: unable to process initqueue
      dracut warning: /dev/disk/by-uuid/vg_mymachine does not exist
      dropping to debug shell
      dracut:/#

    While trying to run fsck on the said partition from the debug shell, it says "etc/fstab not found", and inside /etc I see a fstab.empty file. Is it now possible to retrieve what I have from the computer? Any help would be appreciated. Thanks in advance.

    Edit: I've also tried the following additional troubleshooting steps:
    - I tried to boot using the Fedora disk and tried rescue mode - it says no Linux partition detected.
    - I tried to create an fstab file by combining the entries from blkid and the /etc/mtab file, using the UUIDs from the mtab file - it didn't work. As soon as I rebooted the machine, it promptly dropped me into the debug shell and the fstab file which I created wasn't there anymore in /etc (part of this solution).
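
    From the dracut emergency shell (or the live CD), it's worth checking whether the LVM volume group simply isn't being activated before assuming the data is gone; a sketch of the usual sequence (the vg_mymachine/lv_root names are assumptions based on the error message and Fedora defaults):

      # Scan for and activate any volume groups the initramfs didn't bring up
      lvm vgscan
      lvm vgchange -ay
      lvm lvs

      # If the root LV appears, mount it and look at the real /etc/fstab - the fstab.empty
      # you saw lives in the initramfs, not on the installed system
      mkdir -p /sysroot
      mount /dev/mapper/vg_mymachine-lv_root /sysroot
      cat /sysroot/etc/fstab

      # If entries really are missing, blkid gives the UUIDs needed to rebuild them
      blkid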

    Read the article

  • AD server within another network - DNS issues

    - by Harry Muscle
    Here's a quick summary of the environment I support: we have a domain (domain A) that has about 20 client computers. The domain server for this domain and all the clients sit within the network infrastructure of a larger domain (domain B). All the computers get their network settings via DHCP from domain B's servers. I have no control and am unable to make changes to anything to do with domain B. The problem I have is that currently in order for my domain's (domain A) clients to be able to resolve the domain server and the shares on it they have their DNS server IP address set to domain A's domain server (via the default GPO). Unfortunately when a laptop (windows and mac) gets taken home, they are still looking for the domain server as their DNS server and obviously can't access the internet correctly outside of our environment. Ideally I need a solution where the machines use domain A's domain server as their DNS when inside the office and use what ever DNS server DHCP gives them when they are outside the office. However, since I have no control over the office DHCP server, I'm not sure how this can be accomplished. Any help and advice that anyone can offer is highly appreciated. Thanks, Harry P.S. The solution I'm trying to find needs to require no involvement from the user.

    Read the article

  • Apache 2.4 Prefork vs. PHP-FPM Event shows sig decrease in requests per second

    - by Mark
    On my Apache 2.4.2 server with a standard mod_php prefork setup, these are my server-status results:

      Current Time: Wednesday, 24-Oct-2012 19:36:24 CDT
      Restart Time: Wednesday, 24-Oct-2012 01:27:30 CDT
      Parent Server Config. Generation: 1
      Parent Server MPM Generation: 0
      Server uptime: 18 hours 8 minutes 54 seconds
      Total accesses: 14304233 - Total Traffic: 342.3 GB
      CPU Usage: u12584.6 s721.93 cu.66 cs3.43 - 20.4% CPU load
      219 requests/sec - 5.4 MB/second - 25.1 kB/request
      507 requests currently being processed, 355 idle workers

      ______KKKKR_K______W_KKC___CKK_K_K_W__CC_KKK_KK._K_K_KK._KKKK_K_
      K_____KK_KKKK_K_KK__K___KK_K___K_____CKKK_WK_K_____KCKK__K___K_K
      K_CK_K_K_____K__KKKK_K__K___K_KK_K_K_KKKCK____________KK_CK__KKK
      __C_KKKKKKK___CK___C_KKK_K__C__K_CK____KKK__K__K__K_K__KK_CK_K__
      _KKKKK_K_W__KK______K___K__W___C_K__K____KKKKKKKK.KKKKKKKCK_K___
      _C_KK_K_WK__K_KK__K__RK_KK___K____K_KK_K_K___RKC_KKKK___KKKC_K_W
      _C_KK_KK__W____KC__KKK__KKK___K___KKK_KK_K_KKW__K_KR_KK_KK__KKK_
      R__KKK__KKKKKK__K_KKKKK_K__K_K___KKW_________KK_K___KKK___KK.K_C
      KKKKKKW_____K__K_KKC_KCKK_K_KK_K__KK__K___K__KK_KK__________KK__
      __K___KK_K__K_C_KK_K___KK__KK__K__KCK_K__KK_________K_K_KK__.K__
      K_CKK.CCRW__KKKKKKKKKKKC__W____K___KWK_KK_KKC______.K_K_KK_KKKC_
      __KKK_W_KCKKK_K_K____CCCK__KC_KKKK_K____K_CK_K____K__K____KKK_KK
      KK___K_K_K__KW__KCKKKK____WKWK__K_KKRKK__C_K_KK_KK_K__KKCC_K__C_
      KK_K___K_KK______K_____CKK_K_______KK_CKCK__KKKKK____K__K..K____
      __KKWK_KW__KKK__K_KKK___K_KK_KKK__KK___KK___KK_KK___KK____KKWKKC
      KK_KKKK_................................

    When I switch to a PHP-FPM setup with the Event MPM, with no other variables changed, my requests/sec plummet and overall Apache response is garbage:

      Current Time: Wednesday, 24-Oct-2012 19:51:21 CDT
      Restart Time: Wednesday, 24-Oct-2012 19:48:03 CDT
      Parent Server Config. Generation: 1
      Parent Server MPM Generation: 0
      Server uptime: 3 minutes 18 seconds
      Total accesses: 18720 - Total Traffic: 307.1 MB
      CPU Usage: u16.57 s4.74 cu0 cs0 - 10.8% CPU load
      94.5 requests/sec - 1.6 MB/second - 16.8 kB/request
      15 requests currently being processed, 49 idle workers

      PID    Connections        Threads       Async connections
             total  accepting   busy   idle   writing  keep-alive  closing
      11701  114    no          10     22     0        66          38
      11702  134    no          5      27     0        81          48
      Sum    248                15     49     0        147         86

      __R_R__W___RRW________RR__R___W_W_______W_____W_____________R_R_

    Is there any obvious reason anyone can think of why this would be the case? I can provide any other additional stats or server setup info to help out. I've tried tweaking everything up and down and nothing really helps get the PHP-FPM setup anywhere near a basic prefork/mod_php setup. Thanks!
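
    Before comparing MPMs it's worth ruling out a starved FPM pool: with defaults like pm.max_children = 5, a box that was happily running hundreds of prefork workers will queue nearly everything. A sketch of pool settings in the same ballpark as the prefork figures above (starting points, not recommendations, and the config/log paths vary by distro):

      # /etc/php-fpm.d/www.conf (or wherever the pool lives)
      #   pm = dynamic
      #   pm.max_children      = 256    ; roughly match the old prefork worker count
      #   pm.start_servers     = 64
      #   pm.min_spare_servers = 32
      #   pm.max_spare_servers = 96
      #   listen.backlog       = 1024
      #   pm.status_path       = /fpm-status   ; enables a live status page for comparison

      # Watch for the tell-tale "seems busy ... spawning children" / max_children warnings
      tail -f /var/log/php-fpm.log | grep -Ei 'max_children|spawning'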

    Read the article

  • DFS replication and the SYSTEM user (NTFS permissions)

    - by HopelessN00b
    Question for which I'm having trouble finding an answer on Google or Technet... Does granting the SYSTEM user permissions to DFS-shared files and folders have any effect on DFS replication? (And while we're at it, is there any good reason not to let SYSTEM have permissions to DFS-shared files?) It comes up because I have a collection of DFS namespaces and folders that I'm not able to make someone else's problem, and while troubleshooting a problem where one DFS replica just wasn't replicating with another for no discernible reason, I observed that the SYSTEM didn't have any permissions granted to any of the files or folders in the folder in question. So I set SYSTEM to have full control and propagated it down, and our DFS health diagnostic reports went from showing a backlog of ~80 files to a backlog of ~100,000... and things started replicating, including a number of files that had been missing on one server or the other (so more than just the permissions changes started replicating). Naturally, this made me curious as to whether or not DFS needs the SYSTEM account to have permissions to do its work, or if perhaps it was just any change to folder tree in question that prompted DFS to jump into action. If it matters, our DFS namespaces were set up under 2000/2003, and I have just recently finished upgrading all the servers to 2008 R2 or 2012 (with UAC enabled, blech), but have not yet gotten around to raising the DFS namespace functional levels to Server 2008. (And bonus points if anyone has an official Microsoft article on NTFS file permissions and the SYSTEM account as it pertains to DFS or network files.)

    Read the article

  • can't access SATA card config screen on boot, nor access the disks

    - by Ronald
    We've just upgraded our file server using an ASUS P6T WS Pro board, running FreeBSD-RELEASE 8.2 and using zfs to manage 12 WD20EARS disks. Since our 3ware card has been giving us trouble we started using the six on-board SATA connectors and got a SuperMicro USAS2-L8i to provide eight more ports. Mechanically, the card is an awkward fit but electrically it all seems ok. Upon boot, the LSI controller shows up and states that pressing ctrl-c will bring up the LSI Config Utility. When doing that, the message changes to state that the utility will be started after initialization, however that never happens. There does seem to be an error message that's only displayed too briefly to read and seems to be about PCI and "not enough space". (That message is pushed off by a hardware summary and I've found no way to scroll back at this point.) The disks do not show up in any recognizable ways after booting, either. I found a hint in another discussion to check the address mapping on either the card or the motherboard BIOS, but have found no way to do that. So what I tried on a hunch is to disable everything that's on-board, including network adapters, Firewire controller and SATA. In fact, after doing that, I can successfully launch the LSI Config Utility. As far as I can tell, all looks well in there, and when booting in that configuration it also displays a list of the disks connected to it, which looks just fine as well. Only problem now is that I can't boot that way, because I need the on-board SATA controller and network adapters. As soon as I re-enable any of them I'm back to square one. That discussion I mentioned about mapping addresses said to try D000, then D7FF, then DFFF, in order. The LSI Config Utility shows the card address as D000 but offers no way of changing it. Any tips or insights would be appreciated.
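
    Separately from the boot-time config utility, it may be worth checking whether FreeBSD already sees the controller and its disks, since the mps(4) driver doesn't need the option ROM at all (assuming your 8.2 kernel includes it); a quick sketch from the running system:

      dmesg | grep -i mps        # did the kernel attach the LSI SAS2008 on the USAS2-L8i?
      camcontrol devlist         # enumerate every disk CAM can see, on-board and LSI ports alike
      zpool status               # confirm zfs can reach all twelve WD20EARS drives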

    Read the article
