Search Results

Search found 18728 results on 750 pages for 'setup deployment'.

Page 608/750

  • Sharing a folder with Nautilus and NTFS external drive gets errors

    - by TheLQ
    I am trying to share a folder over the network from Lubuntu; the folder lives on an external NTFS drive. Because of the rotating-backup-disk system I use, this is probably only the second time the drive has been mounted. It is mounted manually with a simple command, for example:

        mount /dev/sdb1 /media/BACKUP

    On an internal NTFS disk I have successfully set up a network share and can access it. On the external disk, however, I can't reach the share from any other Windows computer. When setting up the share, Nautilus said it needed to change the folder's permissions to allow other users to write, but afterwards the permission field is still blank, and changing it to Read and Write just flips back to blank. Chowning the entire /media folder recursively didn't help, running PCManFM as root and changing the permissions didn't help, and adding "public=yes" to smb.conf and restarting didn't help either. I'm out of ideas.

    What's odd is that this worked just fine on the internal NTFS disk, so why not the external one? Any solution needs to be manageable from a GUI (preferably Nautilus), as the person looking after the machine isn't very tech savvy. Thanks
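
    In case it helps, a minimal sketch of how this is often handled when the share lives on NTFS (assumptions: the drive is mounted with ntfs-3g and shared through Samba; the share name, uid/gid and service name are illustrative). NTFS does not honour chown/chmod after mounting, so ownership has to come from the mount options, and the share can be defined directly in smb.conf instead of through Nautilus:

        # ntfs-3g takes ownership from the mount options, not from chown/chmod afterwards
        sudo mount -t ntfs-3g -o uid=1000,gid=1000,umask=002 /dev/sdb1 /media/BACKUP

        # Share stanza to append to /etc/samba/smb.conf (name and options illustrative):
        #     [backup]
        #         path = /media/BACKUP
        #         read only = no
        #         guest ok = yes

        # Validate the config and restart Samba (the service may be smbd or samba)
        testparm -s
        sudo service smbd restart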

    Read the article

  • SQL Server architecture - they want to move my database to new instance...Why?

    - by O'MALLEY
    Our current production database environment contains about 10 similarly managed databases. Our agency has just purchased new blade chassis, is installing them now, and wants to move my database to a new instance (leaving the other 9 on another). This decision is being driven by one of our IT staff, not a DBA. I am a project manager, not a DBA, but I know enough not to have a good feeling about this decision, and I am urging our IT department to base it on what is best for the database. Our IT department has stated that it is not good to have all our eggs in one basket, and also that my database contains "regulatory data" so it should be on its own instance.

    A couple of truths:
    - None of the databases on the current instance are OLTP databases, nor are any of them data warehouses.
    - My database currently has joins/views against a couple of the other databases in the production environment.

    So my questions are as follows:
    1. Am I wrong to disregard the statement about eggs in baskets? (Hello, this is why we have maintenance plans and disaster recovery plans.) I'll mention that other databases contain regulatory data too.
    2. What questions do I need to ask to determine whether this is a sound decision? (A DBA friend mentioned that if the service level agreement for this database does not differ radically from the others, why do they want to do this?)
    3. I have done some research on linked servers. What arguments should I bring forward about the fact that I have views that currently rely on data from the other databases?

    Read the article

  • Google Contacts/Calendars + Address Book + iCal: built-in sync (problems) or Exchange sync?

    - by jtbandes
    (I've looked at a few other questions related to this, but I've only found old ones with people saying they're having problems, or anticipating Snow Leopard fixing them; no recent updates.) I'm looking to sync my Google Contacts & Calendars, and Gmail, with my Mac & iPhone.

    The iPhone I currently have set up thus:
    - IMAP for Mail
    - Exchange (Google Sync) for Contacts & Calendars

    The Mac:
    - Address Book: built-in sync
    - iCal: CalDAV, configured as a Google account

    I haven't been syncing Gmail to Apple Mail, because I was having weird IMAP glitches every so often that just got too annoying. Will Exchange / Google Sync work for this at all? Any suggestions there?

    Here are the other problems I'm having. Address Book only syncs certain fields (for example, Birthdays don't sync at all); I believe this is a list of the information that's synced. Address Book's "Synchronize with Google" checkbox doesn't stay checked when I quit Address Book. I think iCal is working fine, for the most part.

    Any suggestions on how to improve this setup? Why doesn't Address Book / Google Contacts sync stay enabled? Could I use Exchange for it like I am on the iPhone? Will that sync all the fields, including Birthdays, etc.? Thanks in advance!

    Read the article

  • Manipulating IIS7 Configuration with Powershell

    - by John Price
    I'm trying to script some IIS7 setup tasks in PowerShell using the WebAdministration module. I'm good with setting simple properties, but am having some trouble when it comes to dealing with collections of elements in the config files. As an immediate example, I want application pools to recycle on a given schedule, and the schedule is a collection of times. When I set this up via the IIS management console, the relevant config section looks like this:

        <add name="CVSupportAppPool">
            <recycling>
                <periodicRestart memory="1024" time="00:00:00">
                    <schedule>
                        <clear />
                        <add value="03:31:00" />
                    </schedule>
                </periodicRestart>
            </recycling>
        </add>

    In my script I would like to accomplish the same thing, and I have the following two lines:

        #clear any pre-existing schedule
        Clear-WebConfiguration "/system.applicationHost/applicationPools/add[@name='$($appPool.Name)']/recycling/periodicRestart/schedule"

        #add the new schedule
        Add-WebConfiguration "/system.applicationHost/applicationPools/add[@name='$($appPool.Name)']/recycling/periodicRestart/schedule" -value (New-TimeSpan -h 3 -m 31)

    That does almost the same thing, but the resulting XML lacks the <clear/> element that is created via the GUI, which I believe is necessary to avoid inheriting any default values. This sort of collection (with "add", "remove", and "clear" elements) seems to be fairly standard in the config files, but I can't seem to find any good documentation on how to interact with them consistently.

    Read the article

  • Setting up httpd-vhosts.conf for multiple virtual hosts

    - by Chris Sobolewski
    I have a simple test setup using XAMPP at home, and I am getting really weird behavior when I attempt to set up multiple virtual hosts on this box. Here is my vhosts file:

        NameVirtualHost *:80

        <VirtualHost *:80>
            ServerAdmin [email protected]
            ServerName foo
            DocumentRoot "D:\wamp\xampp\htdocs\foo"
            ErrorLog logs/foo-error_log
            CustomLog logs/foo-access_log common
            <Directory "D:\wamp\xampp\htdocs\foo">
                Options Indexes FollowSymLinks Includes execCGI
                AllowOverride All
                Order Allow,Deny
                Allow From All
            </Directory>
        </VirtualHost>

        <VirtualHost *:80>
            ServerAdmin [email protected]
            ServerName bar
            DocumentRoot "D:\wamp\xampp\htdocs\bar"
            ErrorLog logs/bar-error_log
            CustomLog logs/bar-access_log common
            <Directory "D:\wamp\xampp\htdocs\bar">
                Options Indexes FollowSymLinks Includes execCGI
                AllowOverride All
                Order Allow,Deny
                Allow From All
            </Directory>
        </VirtualHost>

    When I visit the first site, it works as expected. When I visit the second site, I get a weird hybrid mishmash of both sites. It's the weirdest thing.
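
    A sketch of the usual checks for this kind of vhost mix-up (assuming the commands are run from the XAMPP Apache bin directory; the hosts-file entries are illustrative). A hybrid of both sites usually means requests for the second name are still landing in the first vhost:

        # Show how Apache parsed the virtual hosts
        httpd -S
        # Both "foo" and "bar" should be listed as port 80 namevhosts; if only one shows
        # up, the second VirtualHost block is not being read at all.

        # Both ServerNames must also resolve to this machine; on a local test box that
        # usually means entries in C:\Windows\System32\drivers\etc\hosts:
        #   127.0.0.1   foo
        #   127.0.0.1   bar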

    Read the article

  • correct file permissions for trac and git user to access gitolite server repos

    - by klemens
    Hi, this sounds like a stupid question (to me), but I couldn't find any information on it. On my server I host some Git repositories via gitolite and have a Trac instance for every repository. I have a user called git for pushing/pulling from the server (git clone git@server:repo), and Trac runs in an Apache vhost with mod_wsgi, under the www-data user.

    So what puzzles me (maybe because I don't have much of a clue about file permissions in general) is: what is the best permissions setup (chown, chmod) for the git repositories (/home/git/repositories/...)? www-data (or Trac) needs at least read permissions (I think), and git (or gitolite) obviously needs read/write permissions to push changesets. I experimented a bit (e.g. adding www-data and/or git to the www-data/git group), but didn't get it right; at least one of the two (git or Trac) always stops working. Any suggestions are highly appreciated.

    Regards, klemens
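
    One common layout, as a sketch (the group names are the ones from the question; the gitolite umask setting is an assumption about the gitolite version in use): let gitolite keep ownership, and give Trac's www-data user read access through the git group:

        # Let the web server read the repositories through group membership
        sudo usermod -a -G git www-data

        # Make the existing repositories group-readable (and directories group-searchable)
        sudo chmod -R g+rX /home/git/repositories

        # Tell gitolite to create future files group-readable as well; in gitolite v2 this
        # is $REPO_UMASK in ~git/.gitolite.rc (gitolite 3 uses UMASK in the same file), e.g.
        #     $REPO_UMASK = 0027;

        # Apache only sees the new group membership after a restart
        sudo service apache2 restart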

    Read the article

  • Setting up a network where packets are traced

    - by Marcus
    My situation is the following: I have an internet connection which is shared between people, and, more or less obviously, people are using it to download illegal stuff. Since I'm the owner of the connection, I want to avoid being sued. I don't want to prevent people from doing what they want, but I want to be legally safe. I have relatively little competence in network administration, so I was wondering:

    1. Is it possible to set up a network where the source and destination of the packets are logged? I would use this to prove, in case of a lawsuit, that the traffic was coming from a given machine.
    2. If the idea is feasible, is there any wireless router on which I can install Linux, where I can install the packet sniffer?
    3. How much space could the logs take (containing only the timestamp/source/destination) per GB of traffic? A very rough estimate would be very helpful.
    4. If a machine on my network is sending BitTorrent packets to a certain IP, would this log be able to reflect the time, source IP and destination IP? I assume the torrent payload itself would be encrypted and undecryptable.

    Am I missing something? Is there a better strategy? Any pointer to documentation would be helpful as well; in that case, I would use it as a starting point.
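
    As a rough sketch of the logging part (assuming a Linux router with iptables; the interface names and addresses in the sample log line are made up): logging one line per new connection keeps the volume proportional to the number of connections, a few hundred bytes each, rather than to the gigabytes transferred:

        # Log the first packet of every new connection passing through the router,
        # then let the rest of the flow through without logging
        iptables -A FORWARD -m state --state NEW -j LOG --log-prefix "NEWCONN: "
        iptables -A FORWARD -j ACCEPT

        # Entries land in the kernel log (e.g. /var/log/kern.log or /var/log/messages) and
        # record the timestamp, source/destination addresses and ports; a line looks roughly like:
        #   NEWCONN: IN=br0 OUT=eth0 SRC=192.168.1.23 DST=203.0.113.5 PROTO=TCP SPT=51234 DPT=6881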

    Read the article

  • Windows 7 comments field missing when browsing network

    - by Toymangenie
    I have just purchased three Windows 7 Professional Dell 64-bit PCs for testing prior to upgrading our company's 120+ PCs from Windows XP Professional. The setup is a standard domain with a Windows Server 2003 32-bit server. We name each PC XP1 to XP150 so that when users join or leave, I don't have to rename the PC. We use the Description field to allocate the user's name to each PC, and we also have a share set up on each PC using the user's name.

    When I browse the network using Windows Explorer in XP, I get a useful display: the left pane shows the PC number and the right pane shows Name and Comments. So, for example, I would see "XP01  Fred Bloggs", with each PC on a new row. The right pane is my main tool for administering the network; I can easily see the PC number and the name of the user.

    However, in Windows 7 this seems to have been thrown out of the window and replaced with fields that I do not need and that, in my case, always display the same information: "Name", "Category", "Workgroup", "Network Location". The Name column gives the PC number (XP10, etc.) and the other three columns display identical, useless information, so I can't see who is using XP10. When I am in "help desk" mode, I would naturally ask the user's name and use my remote desktop client to view their screen. The user isn't aware of their PC name, so I am finding it impossible to match the user name with a PC number. Any ideas how to overcome this "by design" change in Windows 7?

    Read the article

  • Amazon AWS VPC how to open a port?

    - by Victor Piousbox
    I have a VPC with public and private subnets; I am considering only the public subnet for now. There is a node at 10.0.0.23 that I can ssh into. Now, say I want to connect to MySQL on that node using its private address:

        ubuntu@ip-10-0-0-23:/$ mysql -u root -h 10.0.0.23
        ERROR 2003 (HY000): Can't connect to MySQL server on '10.0.0.23' (111)

        ubuntu@ip-10-0-0-23:/$ mysql -u root -h localhost
        Welcome to the MySQL monitor.  Commands end with ; or \g.
        --- 8< --- snip --- 8< ---
        mysql>

    So port 3306 is not reachable if I use the private IP? My security group allows port 3306 inbound from 0.0.0.0/0 AND from 10.0.0.0/24; outbound, everything is allowed. The generic setup done by Amazon through their wizard does not work, and adding an ACL that allows everything for everybody still does not work. What am I missing?
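
    ERROR 2003 with code (111) is a refused connection even from the instance itself, which usually points at mysqld listening only on 127.0.0.1 rather than at the security group or ACL. A sketch of the usual check and fix on Ubuntu (the file path and the GRANT statement are assumptions about the setup):

        # See which address mysqld is actually listening on
        sudo netstat -lntp | grep 3306
        #   tcp  0  0 127.0.0.1:3306  0.0.0.0:*  LISTEN  1234/mysqld   <- loopback only

        # Bind MySQL to all interfaces (or to 10.0.0.23 specifically) and restart it;
        # on Ubuntu the setting lives in /etc/mysql/my.cnf
        sudo sed -i 's/^bind-address.*/bind-address = 0.0.0.0/' /etc/mysql/my.cnf
        sudo service mysql restart

        # MySQL also needs an account that is allowed to connect from that network
        mysql -u root -p -e "GRANT ALL ON *.* TO 'root'@'10.0.0.%' IDENTIFIED BY 'secret';"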

    Read the article

  • Is my OCZ SSD aligned correctly? (Linux)

    - by Barney Gumble
    I have an OCZ Agility 2 SSD with 40 GB of space. I use it as a system drive in Debian Linux (Squeeze) and in my opinion it's really fast. But I've read a lot on aligning partitions and file systems, and I'm not sure if I succeeded in aligning the partitions correctly. Maybe the SSD could be even faster?? ;-) I use ext4, and here is the output of fdisk -cul:

        Disk /dev/sda: 40.0 GB, 40018599936 bytes
        255 heads, 63 sectors/track, 4865 cylinders, total 78161328 sectors
        Units = sectors of 1 * 512 = 512 bytes
        Sector size (logical/physical): 512 bytes / 512 bytes
        I/O size (minimum/optimal): 512 bytes / 512 bytes
        Disk identifier: [...]

           Device Boot      Start         End      Blocks   Id  System
        /dev/sda1   *        2048    73242623    36620288   83  Linux
        /dev/sda2        73244670    78159871     2457601    5  Extended
        /dev/sda5        73244672    78159871     2457600   82  Linux swap / Solaris

    My partitions were created by the Debian Squeeze setup assistant, so I didn't pay attention to the details of partitioning. But now I wonder whether the installer aligned them correctly. Actually, 2048 looks good to me (better than odd values like 63 or something like that), but I have no idea... ;-) Help, please! According to an "SSD Alignment Calculator" I found on the web, OCZ SSDs have a NAND erase block size of 512 kB and a NAND page size of 4 kB. 2048 is divisible by 4 and by 512. So are the partitions aligned correctly?
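
    The check itself is just arithmetic, sketched below with the figures from the question (512 kB erase block, 512-byte sectors): a partition is aligned if its start sector times 512 is a multiple of the erase block size.

        # /dev/sda1 starts at sector 2048: 2048 * 512 B = 1 MiB, an exact multiple of 512 KiB
        echo $(( 2048 * 512 % (512 * 1024) ))        # 0 -> aligned
        # /dev/sda5 (swap) starts at sector 73244672
        echo $(( 73244672 * 512 % (512 * 1024) ))    # 0 -> aligned
        # /dev/sda2 starts at 73244670 and is not aligned, but it is only the extended
        # container and holds no filesystem itself, so that generally does not matter
        echo $(( 73244670 * 512 % (512 * 1024) ))    # 523264 -> not aligned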

    Read the article

  • Enabled Network Discovery on Server, and now VNC and Squeezebox clients don't work

    - by Mike Hanson
    I've recently set up a Windows Server 2008 machine. It's running an email server, Squeezebox server, MS SQL Server, etc., and I do remote maintenance with UltraVNC. I had everything working fine. Then the server needed to access a network share on another machine, and I was prompted to turn on network discovery, which I did, choosing the Home rather than the Public option. Since doing that, some things have stopped working while others are still fine: shared folders and the email services (ports 25 and 110) are still accessible, but VNC (port 5900) and the Squeezeboxes (port 9000) no longer work.

    Here's what I've tried so far:
    - Checked the network discovery settings to see if anything looked strange.
    - Checked the firewall settings; those ports appear to be open.
    - Also in the firewall settings, the entries for Private domain Network Discovery were all on, but the Domain/Public ones were off, so I tried turning those on.
    - In the services, turned on Function Discovery Resource Publication and SSDP Discovery.

    Any other suggestions?

    Read the article

  • Issues using gmail with google apps and external domain

    - by Jonathan Kelly
    I have recently tried to use Gmail through Google Apps as my main email client, but I'm experiencing a few different problems. I manage the domain (conjunktiondesign.co.uk) through 123reg.co.uk, but it is hosted through fasthosts.co.uk. I transferred the domain to 123reg because fasthosts did not allow me to change the MX records myself. I followed the setup instructions on Google Apps step by step and changed the MX records as they told me to. My email then worked perfectly, but my website was down and I was getting the following error: "The dnsserver returned: No DNS records".

    I have a friend who uses the same system as me (an externally hosted domain and Google Apps mail), so I changed my 123reg details to match his, as both his email and website work perfectly: I pointed my name servers at fasthosts rather than 123reg, added an A record called '@' pointing to fasthosts' IP address, and created another A record called 'www' pointing to the same IP. After I did this my website worked almost immediately, but I have since realised that my email is now down; I have not received anything since Saturday.

    I am a web designer and would consider myself fairly tech savvy, but I have no idea about A records, CNAMEs and all the other things I have been messing about with! What I ultimately need is someone to help me get my email and website working at the same time, rather than one being down while the other is OK; I seem only able to get one or the other working. I have now changed the name servers back to 123reg in an attempt to get my email back, as it is more important than my website at this stage. Any help is much appreciated. Thanks.
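
    For what it's worth, a sketch of how to see what the live DNS is actually doing while switching records around (the web server IP shown is a placeholder): MX records for Google Apps mail and A records for the fasthosts web server can sit side by side in the same zone, so neither should need to break the other.

        # Where is mail for the domain currently being delivered?
        dig +short MX conjunktiondesign.co.uk
        # Google Apps mail expects Google's servers here (aspmx.l.google.com and friends)

        # Where do the website records point?
        dig +short A conjunktiondesign.co.uk
        dig +short A www.conjunktiondesign.co.uk
        # 203.0.113.10   <- placeholder for the fasthosts web server IP

        # Which name servers are currently authoritative (123reg vs fasthosts)?
        dig +short NS conjunktiondesign.co.uk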

    Read the article

  • Postfix SMTP-relay server against Gmail on CentOS 6.4

    - by Alex
    I'm currently trying to set up an SMTP relay to Gmail with Postfix on a CentOS 6.4 machine, so I can send e-mails from my PHP scripts. I followed this tutorial, but I get this error output when trying to do a sendmail [email protected]:

        tail -f /var/log/maillog
        Apr 16 01:25:54 ext-server-dev01 postfix/cleanup[3646]: 86C2D3C05B0: message-id=<[email protected]>
        Apr 16 01:25:54 ext-server-dev01 postfix/qmgr[3643]: 86C2D3C05B0: from=<[email protected]>, size=297, nrcpt=1 (queue active)
        Apr 16 01:25:56 ext-server-dev01 postfix/smtp[3648]: 86C2D3C05B0: to=<[email protected]>, relay=smtp.gmail.com[173.194.79.108]:587, delay=4.8, delays=3.1/0.04/1.5/0.23, dsn=5.5.1, status=bounced (host smtp.gmail.com[173.194.79.108] said: 530-5.5.1 Authentication Required. Learn more at 530 5.5.1 http://support.google.com/mail/bin/answer.py?answer=14257 qh4sm3305629pac.8 - gsmtp (in reply to MAIL FROM command))

    Here is my main.cf configuration; I tried a number of different options but nothing seems to work:

        alias_database = hash:/etc/aliases
        alias_maps = hash:/etc/aliases
        command_directory = /usr/sbin
        config_directory = /etc/postfix
        daemon_directory = /usr/libexec/postfix
        data_directory = /var/lib/postfix
        debug_peer_level = 2
        html_directory = no
        inet_interfaces = localhost
        inet_protocols = ipv4
        mail_owner = postfix
        mailq_path = /usr/bin/mailq.postfix
        manpage_directory = /usr/share/man
        mydestination = $myhostname, localhost.$mydomain, localhost
        myhostname = host.local.domain
        myorigin = $myhostname
        newaliases_path = /usr/bin/newaliases.postfix
        queue_directory = /var/spool/postfix
        readme_directory = /usr/share/doc/postfix-2.6.6/README_FILES
        relayhost = [smtp.gmail.com]:587
        sample_directory = /usr/share/doc/postfix-2.6.6/samples
        sendmail_path = /usr/sbin/sendmail.postfix
        setgid_group = postdrop
        smtp_sasl_auth_enable = yes
        smtp_sasl_password_maps = hash:/etc/postfix/sasl_passwd
        smtp_sasl_security_options = noanonymous
        smtp_sasl_tls_security_options = noanonymous
        smtp_sasl_type = cyrus
        smtp_tls_CAfile = /etc/ssl/certs/ca-bundle.crt
        smtp_use_tls = yes
        smtpd_sasl_path = smtpd
        unknown_local_recipient_reject_code = 550

    In the /etc/postfix/sasl_passwd files (sasl_passwd and sasl_passwd.db) I have the following (with the real password replaced by "password"):

        [smtp.google.com]:587 [email protected]:password

    I created the sasl_passwd.db file by running:

        postmap hash:/etc/postfix/sasl_passwd

    Does anybody have an idea why I can't seem to send an e-mail from the server? Kind regards, Alex
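
    One detail that stands out in the question itself: relayhost points at [smtp.gmail.com]:587, but the sasl_passwd entry is keyed on [smtp.google.com]:587, so Postfix finds no credentials for the relay and Gmail answers with 530 Authentication Required. A sketch of the fix (the address and password are placeholders):

        # The lookup key in sasl_passwd must match relayhost exactly
        echo '[smtp.gmail.com]:587 [email protected]:password' > /etc/postfix/sasl_passwd
        chmod 600 /etc/postfix/sasl_passwd

        # Rebuild the hash map and reload Postfix
        postmap hash:/etc/postfix/sasl_passwd
        postfix reload

        # Send a test message and watch the log
        echo test | sendmail [email protected]
        tail -f /var/log/maillog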

    Read the article

  • Best practices for thin-provisioning Linux servers (on VMware)

    - by nbr
    I have a setup of about 20 Linux machines, each with about 30-150 gigabytes of customer data. The amount of data will probably grow significantly faster on some machines than on others. These are virtual machines on a VMware vSphere cluster, with the disk images stored on a SAN system. I'm trying to find a solution that uses disk space sparingly while still allowing individual machines to grow easily.

    In theory, I would just create big disks for each machine and use thin provisioning, letting each disk grow as needed. However, it seems that a 500 GB ext3 filesystem with only 50 GB of data and quite a low number of writes still easily grows the disk image to, e.g., 250 GB over time. Or maybe I'm doing something wrong here? (I was surprised how little I found on the subject with Google. BTW, there's not even a thin-provisioning tag on serverfault.com.)

    Currently I'm planning to create big, thin-provisioned disks, but with a small LVM volume on them, for example a 100 GB volume on a 500 GB disk. That way I could more easily grow the LVM volume and the filesystem as needed, even online. Now for the actual question: are there better ways to do this (that is, to grow data size as needed without downtime)? Possible solutions include:

    - Using a thin-provisioning-friendly filesystem that tries to occupy the same spots over and over again, thus not growing the image size.
    - Finding an easy method of reclaiming free space on the partition (re-thinning?).
    - Something else?

    A bonus question: if I go with my current plan, would you recommend creating partitions on the disks (pvcreate /dev/sdX1 vs pvcreate /dev/sdX)? I think it's against convention to use raw disks without partitions, but it would make it a bit easier to grow the disks if that is ever needed. This is all just a matter of taste, right?
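
    For reference, a minimal sketch of the online growth step in the plan described above (device, volume group and size are illustrative; ext3/ext4 filesystems can be grown while mounted):

        # Grow the virtual disk in vSphere first, then make the guest rescan it
        echo 1 > /sys/block/sdb/device/rescan

        # Extend the physical volume, the logical volume and the filesystem, all online
        pvresize /dev/sdb
        lvextend -L +50G /dev/datavg/datalv
        resize2fs /dev/datavg/datalv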

    Read the article

  • VPN on a ubuntu server limited to certain ips

    - by Hultner
    I have a server running Ubuntu Server 9.10, and I need access to it and to other parts of my network when I'm not at home. There are two places I need to access the VPN from: one has a static IP and the other a dynamic one, but with DynDNS set up so I can always look up its current IP.

    When it comes to servers people call me kind of paranoid, but security is always my number one priority and I never like to allow access to the server from outside the network. So there are two things this VPN has to provide: one, it should not be accessible from any IP other than those two; and two, it has to use a very strong key, so that it is virtually impossible to brute-force even from those IPs. I have no experience whatsoever in setting up VPNs; I have used SSH tunneling but never an actual VPN.

    So what would be the best, most stable, safest and most performance-efficient way to set this up on an Ubuntu server? Is it possible, or should I just set up some kind of SSH tunnel instead? Thanks in advance for any answers.
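
    A sketch of the firewall half of this (assuming OpenVPN on its default UDP port 1194; the addresses and hostname are placeholders): strong keys come from certificate-based authentication, and the source restriction can be done in iptables:

        # Accept the VPN port only from the two known locations
        STATIC_IP=198.51.100.7
        DYN_IP=$(dig +short myhome.dyndns.org | tail -n1)

        iptables -A INPUT -p udp --dport 1194 -s "$STATIC_IP" -j ACCEPT
        iptables -A INPUT -p udp --dport 1194 -s "$DYN_IP"    -j ACCEPT
        iptables -A INPUT -p udp --dport 1194 -j DROP

        # The DynDNS address changes over time, so re-resolve it from cron and swap the
        # second rule whenever the result differs from the currently allowed IP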

    Read the article

  • Need a recommendation for shared storage on auto-scaling ec2 w/ scalr

    - by john h.
    I have come across so many answers to this question that I am completely lost! I am moving our two sites to a load-balanced EC2 system with Scalr as our cloud manager, and the question has come up of persistent storage for users' uploaded content and other files. Could someone please give me a suggestion, and possibly a link to a tutorial, for the following setup and goals?

    - 2 websites (1 forum, 1 e-commerce)
    - 1 load balancer
    - 1 app server (to scale out to as many as needed)
    - 1 DB server (to scale out to as many as needed)

    Our sites will need to autoscale, and from what I am learning about Scalr, that means that as new instances come up I need to run a script to set up the basics on each server (git, PHP modules, pull the site from git, move keys, etc.). What I don't understand is how I should handle user-uploaded content like profile pictures, avatars, product images, themes, etc. Do I mount an EBS or s3fs folder to hold the websites (maybe /var/www/websitefolder), or do I do something like mount just the avatar folders (/var/www/websitefolder/images/avatars)? I am not sure where to go with this. Could someone give me some detailed help? -John
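
    A minimal sketch of the s3fs option mentioned above (bucket name, mount point and credential values are hypothetical). EBS volumes attach to a single instance at a time, which is why shared uploads usually end up on S3 (or NFS) when the app tier autoscales:

        # Install s3fs, then store the credentials it expects (ACCESSKEY:SECRETKEY)
        echo 'AKIAEXAMPLEKEY:examplesecret' > /etc/passwd-s3fs
        chmod 600 /etc/passwd-s3fs

        # Mount the bucket at the path both sites expect for uploads
        mkdir -p /var/www/websitefolder/images/avatars
        s3fs my-uploads-bucket /var/www/websitefolder/images/avatars -o allow_other,passwd_file=/etc/passwd-s3fs

        # Put the same mount command in the instance bootstrap script so every
        # new app server that Scalr launches gets the shared folder automatically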

    Read the article

  • Windows 8 Pro Homegroup Not Allowing Access to Shared Files

    - by Jack Herr
    I have two PCs and a laptop. With all three running Windows 7, HomeGroup worked perfectly, with all computers able to access the files on the others exactly as one would expect. I then installed Windows 8 Pro, which I downloaded recently from the MSDN website, on the laptop. From the info on the MSDN website, I believe the version downloaded is the official release, not a release candidate. The install preserved all settings, apps and files.

    I cannot connect to the homegroup from the laptop. Well, not exactly. I joined the homegroup during the install, and later left and rejoined the one advertised as being offered by one of the PCs (not sure why that one was picked and not the other, or both). This homegroup appears in my Computer folder in File Explorer, offering files from that PC, but it excludes documents (only the music, pictures, videos and playlists folders on the one computer appear). I can actually access those files from the laptop. But the Homegroup folder, which shows all the computers by account and computer name and includes documents from both computers, does not open those folders when clicked. The computers listed by name in the Network folder give me a login prompt that doesn't work: it asks for a username and password, and no combination works. Furthermore, the following error message appears before the login is even attempted: "The system cannot contact a domain controller to service the authentication request. Please try again later."

    The two Windows 7 PCs continue to share all files, including documents, seamlessly between themselves. I can also get to the laptop's shared files, which I shared during setup, from either PC. Any ideas?

    Read the article

  • Is this "cache administrator" error my server's problem?

    - by Eoin
    Hey, I have a CentOS VPS running Apache with a phpBB installation. One specific user has received errors when posting a message or logging in to the forum. The issue has arisen in parallel with installing nginx, which serves only the static files of my site; not sure if this is just coincidence. Furthermore, my setup uses redirects (in some cases, double redirects) to point the user to a different virtual folder: the forum is seen to be at /translation/ but the actual files are found in /phpbb/. I'm at a loss as to what the underlying issue may be. My server? The person's ISP? She has tested both at home and at work, with similar results. The error page she receives reads:

        While trying to process the request:

        GET /phpbb/index.php?sid=f62c927e7eb8f1d60a92dcc6fd918112 HTTP/1.1
        Host: www.irishgaelictranslator.com
        User-Agent: Mozilla/5.0 (Windows; U; Windows NT 5.1; en-GB; rv:1.9.2.13) Gecko/20101203 Firefox/3.6.13
        Accept: text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8
        Accept-Language: en-za
        Accept-Encoding: gzip,deflate
        Accept-Charset: ISO-8859-1,utf-8;q=0.7,*;q=0.7
        Keep-Alive: 115
        Connection: keep-alive
        Referer: http://www.irishgaelictranslator.com/phpbb/ucp.php?mode=login
        Cookie: phpbb3_cipi4_u=96645; phpbb3_cipi4_k=; phpbb3_cipi4_sid=f62c927e7eb8f1d60a92dcc6fd918112; __utma=153470688.1232378553.1294664234.1294664234.1294664234.1; __utmb=153470688.9.10.1294664234; __utmc=153470688; __utmz=153470688.1294664235.1.1.utmcsr=(direct)|utmccn=(direct)|utmcmd=(none); style_cookie=null

        The following error was encountered:

        Invalid Response

        The HTTP Response message received from the contacted server could not be understood or was otherwise malformed. Please contact the site operator. Your cache administrator may be able to provide you with more details about the exact nature of the problem if needed.

    Read the article

  • Windows 7 remote desktop encryption error every few minutes

    - by rfrankel
    "Because of an error in data encryption, this session will now end."

    This is the error I've been getting more and more frequently over the past few days, to the point that I can't ignore it because it's happening consistently within 5 minutes of connecting - sometimes within a few seconds. Both the remote and local machines are Windows 7 Pro x64. The remote machine is behind a Linksys RV082, and I'm using UPnP to forward a remote port to the correct local port. This setup had been working fine for several months, and I can't think of any recent relevant changes that might have been made.

    Things I've already tried:
    - Disabling unnecessary components of the network connection on the remote machine, until only IPv4 and Client for Microsoft Networks remain.
    - Disabling TCP large send offload on both the remote and local machines.
    - Confirming that the remote machine is not mentioned anywhere in any DMZ settings on the Linksys router.
    - Confirming that there are no x509-related registry keys screwing things up (this is the suggested fix for a slightly different error anyway).

    These are the only solutions I've been able to find after about an hour of searching, and most of them apply to XP or Server 2003 in any case. If anyone could suggest something else, it would be much appreciated.

    Read the article

  • Why do HTTP loopback connections not work on my subdomains?

    - by memeLab
    I have a shared hosting account at Jumba running Linux kernel 2.6.9-103.ELsmp (don't know if that helps) with cPanel 1.0 (RC1). I am using the WordPress plugin BackupBuddy, which requires HTTP loopback connections to monitor and complete backups. This works fine on memelab.com.au but doesn't work on any subdomain (e.g. staging.memelab.com.au). Is it possible to set up an A record or some such to remedy this? I'm aware of a workaround (setting WP_ALTERNATE_CRON), but I find it unsatisfactory due to the messy URLs. See BackupBuddy:_Frequent_Support_Issues#HTTP_Loopback_Connections_Disabled.

    Here is the reply from my host: "…as main domain have it's own separate DNS entry it have localhost entry which helps for looback connections where as subdomains don't have separate DNS zone, so it is not possible to create looback connections for it."

    I have cPanel access to the 'advanced zone editor' - is there anything tricky I can do there? Maybe 127.0.0.2? (I remember reading that there were at least 8 local IPs available on (some) Linuxes.) All the A records point to the server IP, with the exception of localhost.memelab.com.au, which points to 127.0.0.1. I've just tried entering a new A record: localhost.itours.memelab.com.au pointing to 127.0.0.2. I still get the warning in BackupBuddy that loopback is not active, and cPanel won't let me enter 127.0.0.1 (guess it doesn't work like that!)

        nslookup itours.memelab.com.au
        Server:  203.88.112.33
        Address: 203.88.112.33#53

        Non-authoritative answer:
        Name:    itours.memelab.com.au
        Address: 117.55.224.177
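
    If shell access is available, a quick sketch of how to see what the loopback request is actually doing (BackupBuddy essentially makes requests like these from the server to itself; the subdomain is the one from the question):

        # From the server itself: does a request to the subdomain come back at all?
        curl -sI http://staging.memelab.com.au/ | head -n 1

        # Force the same request to stay on the local machine, which is what a
        # hosts-file or "localhost" zone entry would achieve for the subdomain
        curl -sI -H 'Host: staging.memelab.com.au' http://127.0.0.1/ | head -n 1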

    Read the article

  • 2560 x 1600 screen resolution not available when a second monitor is attached.

    - by sgmoore
    I am running Windows 7 (64-bit edition) and have a 30" Dell 3007WFP monitor which runs at a screen resolution of 2560 x 1600. This works perfectly until I try to connect a second monitor, at which point the screen resolution on the main monitor immediately drops to 1280 x 800, and I can't change it back up to the correct resolution until I disconnect the second monitor.

    The graphics card is an Nvidia Quadro FX 370. It has a dual-link DVI connector (to which the 30" is connected) and a single-link DVI connector. The second monitor can run at 1920 x 1080 and is connected using a VGA-to-DVI connector. Note that it does not seem to matter whether the second monitor is running at 1920 x 1080 or even at 800 x 600. Windows reports:

        Total Available Graphics Memory: 3839 MB
        Dedicated Video Memory: 256 MB
        System Video Memory: 0 MB
        Shared System Memory: 3583 MB

    Does anyone know if this is a limitation of the video card, memory, drivers, connectors or something else? If it is a limitation of the video card, can anyone recommend a PCI Express x16 card that would support at least this setup, but preferably two 30" monitors both running at 2560 x 1600? (I'm not into gaming etc., so it doesn't need to be very powerful.)

    Read the article

  • Service nginx reload: unexpected error

    - by Anna
    I'm trying to install wordpress on my nginx server by following this tutorial: http://premium.wpmudev.org/blog/how-to-setup-your-own-nginx-powered-wordpress-server/

    However, the last command at step 7 gave me a strange error:

        service nginx reload

    A copy-paste from my terminal:

        root@server:~# service nginx reload
        Reloading nginx configuration: nginx: [emerg] unexpected "o" in /etc/nginx/sites-enabled/wordpress:7
        nginx: configuration file /etc/nginx/nginx.conf test failed

    When I nano into sites-enabled/wordpress, on the 7th line I can't find anything strange:

        <!DOCTYPE html>
        <html class=" ">
        <head prefix="og: http://ogp.me/ns# fb: http://ogp.me/ns/fb# object: http://ogp.me/ns/object# article: http://ogp.me/ns/article# profile: http://ogp.me/ns/profile#">
        <meta charset='utf-8'>
        <meta http-equiv="X-UA-Compatible" content="IE=edge">

    Also, I don't see any obvious errors in my nginx.conf file, but maybe I'm not checking something? The first couple of lines of the nginx config file:

        user www-data;
        worker_processes 4;
        pid /var/run/nginx.pid;

        events {
            worker_connections 768;
            # multi_accept on;
        }

    Any help is appreciated, thanks a lot in advance!
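
    The lines shown from sites-enabled/wordpress are an HTML page, not nginx directives, which is what nginx is choking on at line 7; most likely the tutorial's config snippet was saved as a rendered web page rather than as the raw text. A sketch of how to confirm and recover (file locations follow the standard Debian/Ubuntu layout):

        # Show what is actually in the vhost file that nginx is rejecting
        head -n 7 /etc/nginx/sites-enabled/wordpress
        # If it starts with <!DOCTYPE html>, recreate /etc/nginx/sites-available/wordpress
        # with the real server { ... } block copied as plain text from the tutorial,
        # then relink it and test before reloading:
        ln -sf /etc/nginx/sites-available/wordpress /etc/nginx/sites-enabled/wordpress
        nginx -t && service nginx reload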

    Read the article

  • IPv6 working fine, IPv4 throws OpenSSL error

    - by jippie
    I am building a webserver ( http://blog.linformatronics.nl/ ) which works just fine over both IPv4 and IPv6 when using a non-SSL connection. However, when I connect to it through https, IPv6 works as expected but an IPv4 connection throws a client-side error. Server-side logs are empty for the IPv4/https connection. Summarized in a table:

             | http  | https
        -----+-------+-------------------------------------------------------
        IPv4 | works | OpenSSL error, failed. No server side logging.
        -----+-------+-------------------------------------------------------
        IPv6 | works | self signed certificate warning, but works as expected

    Apparently the SSL tunnel isn't even set up, which accounts for the Apache logs being empty. But why does it work fine for IPv6 and fail for IPv4? My question is: why is this OpenSSL error being thrown, and how can I solve it? Below is some extra information about the setup.

    IPv6 https
    Command used to reproduce the IPv6/https behaviour:

        $ wget --no-check-certificate -O /dev/null -6 https://blog.linformatronics.nl
        --2012-11-03 15:46:48--  https://blog.linformatronics.nl/
        Resolving blog.linformatronics.nl (blog.linformatronics.nl)... 2001:980:1b7f:1:a00:27ff:fea6:a2e7
        Connecting to blog.linformatronics.nl (blog.linformatronics.nl)|2001:980:1b7f:1:a00:27ff:fea6:a2e7|:443... connected.
        WARNING: cannot verify blog.linformatronics.nl's certificate, issued by `/CN=localhost':
          Self-signed certificate encountered.
        WARNING: certificate common name `localhost' doesn't match requested host name `blog.linformatronics.nl'.
        HTTP request sent, awaiting response... 200 OK
        Length: 4556 (4.4K) [text/html]
        Saving to: `/dev/null'
        100%[=======================================================================>] 4,556       --.-K/s   in 0s
        2012-11-03 15:46:49 (62.5 MB/s) - `/dev/null' saved [4556/4556]

    IPv4 https
    Command used to reproduce the IPv4/https behaviour:

        $ wget --no-check-certificate -O /dev/null -4 https://blog.linformatronics.nl
        --2012-11-03 15:47:28--  https://blog.linformatronics.nl/
        Resolving blog.linformatronics.nl (blog.linformatronics.nl)... 82.95.251.247
        Connecting to blog.linformatronics.nl (blog.linformatronics.nl)|82.95.251.247|:443... connected.
        OpenSSL: error:140770FC:SSL routines:SSL23_GET_SERVER_HELLO:unknown protocol
        Unable to establish SSL connection.

    Notes: I am on Ubuntu Server 12.04.1 LTS.
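
    The "unknown protocol" error means the client got something other than a TLS handshake back on port 443 over IPv4, which typically points at a NAT or port-forward rule sending IPv4 traffic to a non-SSL backend. A sketch of how to check what actually answers on that address (the IP is the one from the question):

        # Talk TLS to the IPv4 address directly; "unknown protocol" means the reply
        # was not a TLS handshake at all
        openssl s_client -connect 82.95.251.247:443 -servername blog.linformatronics.nl

        # See what actually answers on that port without TLS; a plain HTTP response
        # here would confirm the traffic is being sent to a non-SSL vhost or port
        printf 'HEAD / HTTP/1.0\r\n\r\n' | nc 82.95.251.247 443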

    Read the article

  • Multiple munin-nodes per machine

    - by Alexander T
    I'm collecting statistics remotely through JMX. The munin JMX plugin allows you to select a URL to connect to when aggregating statistics, which lets me collect statistics from hosts that do not actually have munin-node installed. I find this a desirable property for some systems where I am hindered from installing munin-node.

    The way I work today is that if I want to collect JMX stats from machine A without munin-node, I install munin-node on machine B. Machine B then collects data from A via JMX and reports it to the munin server, which runs on machine C. This setup requires multiple B-type machines: one per A-type machine. I would like to avoid this and instead use only one B-type machine to collect the data from all A-type machines and report it to the only munin server (the C-type machine). As far as I understand, this requires running multiple munin-nodes on B, or in some other way telling the munin server that the B-type machine is reporting data from multiple sources. Is this possible? Thank you.

    Read the article

  • SFTP, Chroot problems on Redhat

    - by Curtis_w
    I'm having problems setting up SFTP with a ChrootDirectory. I've done an equivalent setup on other distros, but for some reason I cannot get it to work on a Red Hat AMI. The changes to my sshd_config file are:

        Subsystem sftp internal-sftp

        Match Group ftponly
            PasswordAuthentication yes
            X11Forwarding no
            ChrootDirectory %h
            ForceCommand internal-sftp
            AllowTcpForwarding no

    The home directories of the users concerned are at /home/user, owned by root. After connecting with a user in the ftponly group, I'm dropped into / without permissions for anything, and am unable to do anything:

        sftp bob@localhost
        Connecting to localhost...
        bob@localhost's password:
        sftp> pwd
        Remote working directory: /

    I can connect normally with users not in the ftponly group. OpenSSH version is 5.3. I've experimented with different permissions, as well as having users own their own home directory (which gives a "Write failed: Broken pipe" error), and so far nothing has seemed to work. I'm sure it's a permissions error, or something equally trivial, but at this point my eyes are beginning to glaze over, and any help would be greatly appreciated.

    EDIT: James and Madhatter, thanks for clarifying. I was confused by chroot dropping me in /... just didn't think it through properly. I've added the appropriate directories and permissions to get read access. One other key part was enabling write access to chrooted homes:

        setsebool -P ssh_chroot_rw_homedirs on

    in order to get write access. I think I'm all set now. Thanks for the help.
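
    For anyone hitting the same wall, a sketch of the permission layout sshd expects with ChrootDirectory %h (the username is illustrative): the chroot target and everything above it must be root-owned and not group- or world-writable, so writes go into a subdirectory owned by the user.

        # The chroot target must be owned by root and not writable by the group or others
        chown root:root /home/bob
        chmod 755 /home/bob

        # Give the user a writable directory inside the chroot for uploads
        mkdir -p /home/bob/upload
        chown bob:ftponly /home/bob/upload
        chmod 750 /home/bob/upload

        # On Red Hat / CentOS with SELinux enforcing, also allow writes into chrooted homes
        setsebool -P ssh_chroot_rw_homedirs on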

    Read the article
