Search Results

Search found 10622 results on 425 pages for 'shared hosting'.

Page 345/425 | < Previous Page | 341 342 343 344 345 346 347 348 349 350 351 352  | Next Page >

  • How to connect the virtual networks of vmware guests running on different hosts?

    - by gyrolf
    In a test setup, we are running several virtual machines on a single VMware Workstation host. All virtual machines are connected via a "host only" network. This runs fine for up to 2 or 3 virtual machines (depending on the host hardware). To allow more virtual machines, we want to use more host machines.

    Details about the environment and applications:
    - Host PCs are running Windows XP in a corporate intranet.
    - VMware used is Workstation 6.5.
    - Guests are running Windows Server 2003.
    - All guests act as web servers.
    - One of the guests additionally acts as a Windows file server, offering shared folders for the other guests to connect to.

    Restrictions:
    - VMware guests shall not be visible from the intranet.
    - Changes to the host PC are restricted by corporate policy.
    - In the virtual network, no domain controller exists. All virtual machines are members of the same workgroup.
    - Running the virtual network as NAT is possible. Port forwarding might be used if it does not conflict with ports used by the host PC.

    Looking for a solution, I found hints about using router or VPN software on the hosts, but without any details on how to set it up. (I found a similar question, "Sharing the network between 2 VMware hosts", but the answer was not sufficient for me.)
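
    Since NAT with port forwarding is explicitly allowed, one possible starting point (not part of the original question) is VMware Workstation's own NAT port-forwarding table, so each guest web server becomes reachable through a port on its host PC. A hedged sketch of the relevant vmnetnat.conf section follows; the file path and the guest IPs/ports are assumptions to verify locally.

        # Excerpt from vmnetnat.conf on each Windows XP host (commonly under
        # C:\Documents and Settings\All Users\Application Data\VMware\ -- verify the path).
        [incomingtcp]
        # host port = guest IP:guest port   (guest addresses below are placeholders)
        8081 = 192.168.186.128:80
        8082 = 192.168.186.129:80

    The other hosts' guests would then talk to http://<host-PC-IP>:8081 and so on; SMB file sharing is harder to remap this way, so the file-server guest may still need a different approach (for example a routed or VPN link between the host-only networks).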

    Read the article

  • Why is my cron daemon being killed every few minutes?

    - by user113215
    As of about a week ago, my cron daemon refuses to stay running. I'm using Debian 6 x64 on an OpenVZ virtual machine. Running something like pgrep cron shows that the daemon isn't running. I start the service with service cron start or /etc/init.d/cron start and it launches, but it disappears from the running process list after a few minutes (varying anywhere between 1 and 30 minutes before the process is killed again). Using strace -f service cron start, I can see that the process is being killed for some reason:

        nanosleep({60, 0}, <unfinished ...>
        +++ killed by SIGKILL +++

    There's nothing relevant in /var/log/syslog, /var/log/messages, /var/log/auth.log, or /var/log/kern.log to explain why the process is dying. The system has at least 800 MB of free memory, and cat /proc/loadavg returns 0.22 0.13 0.04, so resources shouldn't be the issue. With cron running, free -m reports:

                     total       used       free     shared    buffers     cached
        Mem:          1024        211        812          0          0          0
        -/+ buffers/cache:        211        812
        Swap:            0          0          0

    I also tried removing and reinstalling the cron package using apt-get.

    Update: I initially thought the problem was a resource issue. I erased my entire VPS and started from a fresh Debian image. There is now nothing else running on the system, but even from a clean install my cron daemon is still being killed at random. What else should I check? How do I find out what's killing my crond?
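
    One way to catch whatever is sending the SIGKILL (a sketch, not from the original question) is to audit kill(2) calls and to check the OpenVZ bean counters; note that auditd often cannot run inside an OpenVZ container, in which case the audit part has to happen on the hardware node.

        # Log every kill() syscall so the sender of the SIGKILL shows up in the audit log
        apt-get install auditd
        auditctl -a always,exit -F arch=b64 -S kill -k cron_killer
        # ...wait for cron to die, then look for a record with sig=9 / SIGKILL:
        ausearch -k cron_killer -i | less

        # OpenVZ-specific resource limits: any non-zero "failcnt" column here
        # is a likely explanation for processes being killed silently
        cat /proc/user_beancounters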

    Read the article

  • Puppet write hosts using api call

    - by Ben Smith
    I'm trying to write a puppet function that calls my hosting environment (rackspace cloud atm) to list servers, then update my hosts file. My get_hosts function is currently this:

        require 'rubygems'
        require 'cloudservers'

        module Puppet::Parser::Functions
          newfunction(:get_hosts, :type => :rvalue) do |args|
            unless args.length == 1
              raise Puppet::ParseError, "Must provide the datacenter"
            end
            DC = args[0]
            USERNAME = DC == "us" ? "..." : "..."
            API_KEY  = DC == "us" ? "..." : "..."
            AUTH_URL = DC == "us" ? CloudServers::AUTH_USA : CloudServers::AUTH_UK
            DOMAIN   = "..."
            cs = CloudServers::Connection.new(:username => USERNAME, :api_key => API_KEY, :auth_url => AUTH_URL)
            cs.list_servers_detail.map {|server|
              server.map {|s|
                { s[:name] + "." + DC + DOMAIN => { :ip => s[:addresses][:private][0], :aliases => s[:name] }}
              }
            }
          end
        end

    And I have a hosts.pp that calls this and 'should' write it to /etc/hosts:

        class hosts::us {
          $hosts = get_hosts("us")
          hostentry { $hosts: }
        }

        define hostentry() {
          host { $name:
            ip           => $name[ip],
            host_aliases => $name[aliases]
          }
        }

    As you can imagine, this isn't currently working and I'm getting a 'Symbol as array index at /etc/puppet/manifests/hosts.pp:2' error. I imagine, once I've realised what I'm currently doing wrong, there will be more errors to come. Is this a good idea? Can someone help me work out how to do this?

    Read the article

  • VPS Memory Exhausted Even With Light Settings

    - by user101570
    Linux noob here. I have a 256MB VPS on Ubuntu 11.04 server, and when I run "free -m" the result shows all memory being used (including the second line re: buffers/cache). I found this very strange, considering I only have 5 Apache processes running, each chewing up about 20MB. MySQL is taking up 30MB. To my knowledge, and according to "top", I have no other memory hogs operating.

    Settings that may be relevant:
    - PHP memory_limit = 32M
    - MySQL key_buffer = 16M
    - Prefork MPM MaxClients = 10

    So when I reviewed these settings, I naturally thought MaxClients was too high, so I tried switching it to 5. Now not only does my memory still show as being 100% used, my website loads much, much slower, despite not getting any traffic aside from mine at the moment. I don't understand this. I thought a single Apache process handles all requests from a client received within the "KeepAliveTimeout" window, which I've set to 2 seconds. With my initial config of 10 MaxClients, my page load times are around .3ms, so a single process should handle that no problem, correct? So next I went to an extreme level of 1 for MaxClients. My memory is still at 100% usage and my site loads painfully slowly. I'm a noob at a complete loss here. According to the many tutorials I've read on basic server setup, I should be good to go. Help! Please!

    Edit:

                     total       used       free     shared    buffers     cached
        Mem:           256        256          0          0          0          0
        -/+ buffers/cache:        256          0
        Swap:            0          0          0
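
    On an OpenVZ-style VPS, free often reflects the container's allocation rather than real usage, so it may be worth confirming where the memory actually goes before tuning Apache further. A rough sketch (none of these commands are from the original question; the Apache process name may be apache2 or httpd depending on the install):

        # Per-process resident memory, biggest first
        ps aux --sort=-rss | head -n 15

        # If this is an OpenVZ/Virtuozzo container, check for non-zero failcnt values
        cat /proc/user_beancounters

        # Average size of an Apache child, to sanity-check MaxClients
        ps -o rss= -C apache2 | awk '{sum+=$1; n++} END {if (n) printf "%d procs, avg %.1f MB\n", n, sum/n/1024}'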

    Read the article

  • Outlook Calendar Attachments to have limited access to just Required attendees

    - by Jason Pearce
    The management team at my company often attaches documents (Word, Excel, PDFs) to their Outlook Calendar meeting requests. The meeting requests are sent to the managers, but also to their assistants. The desire is to have everyone be able to view the full meeting request and its content, but limit the ability to open the attachments to just the managers. Is there a way in Outlook 2003 and/or 2007 to limit access to attachments that accompany meeting requests? Ideally, can access to the attachments be controlled by the "Select Attendees and Resources" window when selecting individuals from the Global Address List. Can those in the Required field have access to the attachments while those in the Optional or Resources fields not have access? My suggestion was to simply place all meeting attachments in a shared network folder that has read/write access limited to managers. They would then just place fully qualified links to those files in the body of the Meeting Request. While everyone would receive and see the links, only a few would have access. This, however, wasn't easy enough for them, so I'm looking for some other ideas.

    Read the article

  • Web based KVM management for Ubuntu

    - by Tim
    We've got a single Ubuntu 9.10 root server on which we want to run multiple KVM virtual machines. To administer these virtual machines I'd like a web-based KVM management tool, but I don't know which one to choose from the list of tools mentioned on linux-kvm.org. I've used virsh & virt-manager on my desktop, but would like a web interface for the server. I tested ConVirt on my desktop, but it failed to pick up KVM machines from virsh / virt-manager, and I could not get KVM virtual machine import to work (only Xen). oVirt looks good, but I can't find out if and how I can install it on Ubuntu 9.10. (And I'd really rather not waste another few days on testing stuff that might not work in the end.)

    Can anyone recommend any good web-based KVM management tools that are easy to install on Ubuntu 9.10? I'm looking for something that will also allow me to run other services like Apache and PostgreSQL besides hosting virtual machines, so preferably fairly lightweight & no dedicated OS installs. We don't need any professional clustering / migration or anything, just something that will let us create, start, inspect, administer & stop virtual machines from a web page.

    Best regards, Tim

    Update: Anyone have any suggestions? It's awfully quiet here...

    Read the article

  • Connecting to a subdomain severs the connection to the domain itself. What's going on?

    - by TheAgent
    Hi all. We have a website on a third-party server (a server leased and shared with other websites), and the server provides access to our SQL Server database through a subdomain in the form of mssql.DomainName.com. I was told to use SQL Management Studio Express to connect to this subdomain in order to manage the database. After a few tries and getting many "Timeout" messages, I finally managed to connect to the server; everything's fine. But now I can't connect to DomainName.com anymore. Trying to browse DomainName.com using Firefox, it tries to look up the DomainName.com address and fails, telling me "the server was not found". I have to disconnect Management Studio from the server and wait a couple of hours for DomainName.com to become available again, and after that, trying to reconnect to the SQL Server again repeats the scenario. While I can't browse DomainName.com directly, I can use a proxy to connect to it, meaning that the problem is somehow related to the DNS server my computer asks to translate the name to the corresponding IP. Anyone seen anything like this before? Any ideas? Thanks in advance.
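
    A hedged way to confirm this is name resolution rather than the site itself (the domain is the placeholder from the question; 8.8.8.8 is just an example of a public resolver):

        # Does the name resolve via the normal resolver?
        nslookup DomainName.com

        # Compare against a public resolver; if this works while the first fails,
        # the local/ISP DNS cache is the problem, not the web server
        nslookup DomainName.com 8.8.8.8

        # On the Windows machine running Management Studio, flush the local DNS cache
        ipconfig /flushdns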

    Read the article

  • GIT Website Deployment

    - by Brian
    I am attempting to set up Git to deploy my project to different locations based on the branch. (I think this is what I want to do, anyway.) My current setup is this:
    - Local dev machine running NetBeans to make changes.
    - Remote server hosting the Git projects (the same server running Apache) - 2 subsites exist, test.FQDN.com and live.FQDN.com.

    What I would like to do is have 1 Git project (MyProject) and create a new feature branch. Any commits done to the new feature branch would push to test.FQDN.com. Once the features have been tested and merged into the master branch, it would push to live.FQDN.com. I have looked at Git's post-receive hooks and was able to use the "git checkout -f" command to pull on the test.FQDN.com site; however, that only pulls the master branch and not the new feature branch. I do not have any funding to use a third party to make this work, and would prefer to stay within Git, but I have full root access to the web server if there is a package to install which would help control this. Any suggestions would be great!
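
    For reference, a minimal sketch of a post-receive hook that deploys by branch (the paths and the feature/* naming convention are assumptions; the question only mentions master and "a feature branch"):

        #!/bin/bash
        # hooks/post-receive in the bare repository: deploy each branch to its own docroot
        LIVE_DIR=/var/www/live.FQDN.com
        TEST_DIR=/var/www/test.FQDN.com

        while read oldrev newrev ref; do
            branch=${ref#refs/heads/}
            case "$branch" in
                master)
                    GIT_WORK_TREE="$LIVE_DIR" git checkout -f master ;;
                feature/*)
                    # any feature branch lands on the test site
                    GIT_WORK_TREE="$TEST_DIR" git checkout -f "$branch" ;;
            esac
        done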

    Read the article

  • Clone Microsoft VM > SharePoint issues

    - by Rob
    Hi, we have an existing (production) Hyper-V VM that I want to clone to create a staging server. This server is NOT on a domain/Active Directory - it uses all local computer accounts. It is Windows 2008 OS, SharePoint 2007, SQL Server 2008, Reporting Services, and some custom line-of-business web applications. We rename the server etc. as per the documentation we have, but are having trouble specifically with SharePoint. Some of the sites do not come up, and there are some issues adding/updating site settings such as site owners. Does anyone know of a definitive resource for this process? Thanks.

    Update: We seem to have most of it working, but the Shared Service Provider, and the SSP's admin site, are not working. In addition, the custom web parts that reference IIS Session objects are failing (which seems to be related to the Shared Service Provider). All the Windows user accounts for various services are renamed during the server cloning, but the Windows account names in SQL Server are not automatically changed. We've tried to change them in SQL Server but it still does not seem to work. Seems like a lot of work - and I am thinking that there must be some guidance out there. Also getting this:

        NullReferenceException: Object reference not set to an instance of an object.]
        Microsoft.Office.Server.Administration.SqlSessionStateResolver.System.Web.IPartitionResolver.ResolvePartition(Object key) +135
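
    If the machine was renamed after cloning, SharePoint 2007 usually has to be told about the new name explicitly; a hedged sketch of the usual step (server names are placeholders, and the follow-up steps are from memory, so verify against the SharePoint documentation before running anything on the clone):

        REM Run in an elevated command prompt on the cloned server
        cd /d "%CommonProgramFiles%\Microsoft Shared\web server extensions\12\BIN"
        stsadm -o renameserver -oldservername OLDSERVER -newservername NEWSERVER
        REM Then update the alternate access mappings for each web application in
        REM Central Administration, re-run the SharePoint Products and Technologies
        REM Configuration Wizard (psconfig), and finish with an iisreset.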

    Read the article

  • Which is the best smart automatic file replication solution for cloud-storage-based systems?

    - by TORr0t
    I am looking for a solution for a project I am working on. We are developing a web system where people can upload their files and other people can download them (similar to the rapidshare.com model). The problem is, some files can be in much higher demand than other files. The scenario is like this: I have uploaded my birthday video and shared it with all of my friends; I have uploaded it to myproject.com and it was stored on one of the cluster nodes, which has a 100mbit connection. The problem is, once all of my friends want to download the file, they can't download it, since the bottleneck here is 100mbit, which is 15MB per second, but I have 1000 friends and they can only download 15KB per second each. I am not taking into account that the HDD is serving the same files.

    My network infrastructure is as follows: a 1gbit server (client) connected to 4 storage server nodes that have 100mbit connections. The 1gbit server can handle the 1000 users' traffic if one of the storage nodes can stream more than 15MB per second to my 1gbit (client) server, and visitors will stream directly from the client server instead of the storage nodes. I can do it by replicating the file onto 2 nodes. But I don't want to replicate all files uploaded to my network, since it would cost much more.

    So I need a cloud-based system which will push the files onto replicated nodes automatically when demand for those files is high, and when the demand is low, they will be deleted from the other nodes and stay on only 1 node. I have looked at Gluster and asked in their IRC channel; Gluster can't do such a thing. It is only able to replicate all of the files or none of the files. But I need the cluster software to do it automatically. Any solutions? (Instead of recommending me Amazon S3.)
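
    There does not seem to be an off-the-shelf "replicate only the hot files" mode in GlusterFS, but the idea can be approximated with a scheduled job. The sketch below is purely illustrative and not from the original question: the log path, threshold, storage paths and node names are all made up.

        #!/bin/bash
        # Naive "hot file" replicator: count downloads in the current hour from the
        # web server access log and rsync anything above a threshold to a second node.
        LOG=/var/log/nginx/access.log
        HOT_THRESHOLD=200
        SECOND_NODE=storage2:/data/replica/

        hour=$(date +%d/%b/%Y:%H)
        awk -v since="$hour" '$4 ~ since {print $7}' "$LOG" | sort | uniq -c |
        while read count path; do
            if [ "$count" -gt "$HOT_THRESHOLD" ]; then
                rsync -a "/data/files${path}" "$SECOND_NODE"
                # the download dispatcher would then be told a second copy exists
            fi
        done

    A reverse pass (dropping replicas whose counts fall back below the threshold) would complete the picture; the hard part, as the question notes, is getting the cluster software to do this natively.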

    Read the article

  • How can I get ffmpeg to convert a .mov to a .gif?

    - by user29336
    I'm trying to convert a .mov to a .gif and I'm not having success. Here's the error:

        $ ffmpeg -pix_fmt rgb24 -i yesbuddy.mov output.gif
        ffmpeg version 0.11.1 Copyright (c) 2000-2012 the FFmpeg developers
          built on Jun 12 2012 17:47:34 with clang 2.1 (tags/Apple/clang-163.7.1)
          configuration: --prefix=/usr/local/Cellar/ffmpeg/0.11.1 --enable-shared --enable-gpl --enable-version3 --enable-nonfree --enable-hardcoded-tables --enable-libfreetype --cc=/usr/bin/clang --enable-libx264 --enable-libfaac --enable-libmp3lame --enable-librtmp --enable-libtheora --enable-libvorbis --enable-libvpx --enable-libxvid --enable-libopencore-amrnb --enable-libopencore-amrwb --enable-libass --enable-libvo-aacenc --disable-ffplay
          libavutil      51. 54.100 / 51. 54.100
          libavcodec     54. 23.100 / 54. 23.100
          libavformat    54.  6.100 / 54.  6.100
          libavdevice    54.  0.100 / 54.  0.100
          libavfilter     2. 77.100 /  2. 77.100
          libswscale      2.  1.100 /  2.  1.100
          libswresample   0. 15.100 /  0. 15.100
          libpostproc    52.  0.100 / 52.  0.100
        Option pixel_format not found.

    If I leave out the -pix_fmt rgb24 part it complains. Thoughts on how to fix?
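
    The message is ffmpeg rejecting -pix_fmt as an input option in that build; moving it after -i so it applies to the output is the usual fix. A hedged example reusing the filenames from the question (the scale/frame-rate values in the second command are just illustrative):

        # Apply the pixel format to the output instead of the input
        ffmpeg -i yesbuddy.mov -pix_fmt rgb24 output.gif

        # If the resulting GIF is huge or choppy, scaling down and capping the frame rate helps
        ffmpeg -i yesbuddy.mov -pix_fmt rgb24 -vf "scale=480:-1" -r 10 output.gif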

    Read the article

  • Uninstallation of WSUS from SBS 2008

    - by Logik
    I am not a very experienced system admin, but I came across a client running SBS 2008. The server was running out of HDD space, so to recover some, I removed its WSUS role (it was not needed). This removed WSUS 3.0 SP1 and freed a lot of space. This SBS is: domain controller, DNS, DHCP, file server. After I removed WSUS I disabled the Windows Update service, rebooted the server, and checked from one of the clients whether the shared folders were accessible. They were. The next day, all of a sudden, I got a call from them saying they can't log in to their domain. I looked into the server, and the Active Directory service was stopped. I don't remember touching any service other than Windows Update. How come the AD service stopped running all of a sudden? Does removing WSUS have such an impact? I am not aware of any such thing.

    Read the article

  • PHP on several servers with session-sharing

    - by Etu
    There are certainly other threads about this, but I have one more question. We are about to scale the website at work to have more than one server, and we need to share the sessions between the servers. We have been looking into different solutions; one is memcached, using Memcached as the session handler in PHP. That will probably work. The idea would be to run memcached on every machine and let all web servers access all the other servers' memcached instances, and then we have shared sessions between the machines, yay. (We have no resources to set up sticky sessions yet; that's a later project. We need this running, and we need this running now, and we will load-balance with DNS for a starter.)

    But then... if I want to take one server down, say, for maintenance, or a server crashes, or whatever reason, I don't want the users to just lose their sessions and have to start from the beginning... That's why we need some kind of replication, which Memcached does not support. Then I found repcached (http://repcached.lab.klab.org/), which has multi-master replication of memcached, which is great, and is what I want. But does it work with more than 2 machines? Say 3, 5, 10? For future scaling. I also looked into Redis (http://redis.io/), which also seems great, but is a bit more "shaky" with the PHP session handler support, and has no multi-master replication.

    The thing is that I'd like to use memcached, but I want to be able to power down one of two boxes without losing half of the sessions. Any suggestions?
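
    For reference, the PHP side of memcached-backed sessions typically looks like the sketch below, using the pecl/memcache extension (server names are placeholders; the redundancy setting exists in the 3.x branch of the extension, so verify against the version actually installed):

        ; php.ini sketch -- sessions stored in memcached via pecl/memcache
        ; (with pecl/memcached the handler is "memcached" and the tcp:// prefixes are dropped)
        session.save_handler = memcache
        session.save_path    = "tcp://web1:11211, tcp://web2:11211"

        ; the 3.x memcache extension can mirror session writes to several servers
        memcache.hash_strategy      = consistent
        memcache.allow_failover     = 1
        memcache.session_redundancy = 2

    As the question notes, this still is not true replication of arbitrary cache data, which is where repcached or Redis come in.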

    Read the article

  • Is it bad to redirect http to https?

    - by jasondavis
    I just installed an SSL Certificate on my server. I use a web hosting panel called ZPanel that is an open source project. It then set up a redirect for all traffic on my domain on Port 80 to redirect it to Port 443. In other words, all my http://example.com traffic is now redirected to the appropriate https://example.com version of the page. The redirect is done in my Apache Virtual Hosts file with something like this...

        RewriteEngine on
        ReWriteCond %{SERVER_PORT} !^443$
        RewriteRule ^/(.*) https://%{HTTP_HOST}/$1 [NC,R,L]

    My question is, are there any drawbacks to using SSL? Since this is not a 301 Redirect, will I lose link juice/ranking in search engines by switching to https? I appreciate the help. I have always wanted to set up SSL on a server, just for the practice of doing it, and I finally decided to do it tonight. It seems to be working well so far, but I am not sure if it's a good idea to use this on every page. My site is not eCommerce and doesn't handle sensitive data; it's mainly for looks and the thrill of installing it for learning.

    UPDATED ISSUE: Strangely, Bing creates this screenshot from my site now that it is using HTTPS everywhere...
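
    To address the SEO concern directly, the redirect can be made permanent by adding R=301 to the flags; a hedged variant of the rule from the question:

        # Same rewrite as above, but issued as a permanent (301) redirect
        RewriteEngine on
        RewriteCond %{SERVER_PORT} !^443$
        RewriteRule ^/(.*) https://%{HTTP_HOST}/$1 [NC,R=301,L]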

    Read the article

  • Fresh CentOS 6.4 64-bit install with DirectAdmin slowly consumes all memory and crashes

    - by Coen Ponsen
    Dear Server Fault community, this is my first question on Server Fault. I'm new to server (mis)configuration, so please forgive me for asking something stupid :) I'm running DirectAdmin on a CentOS 6.4 64-bit virtual machine with 4GB memory and over 10000Gh. I migrated my websites because my former VPS couldn't keep up anymore. Only half of the websites from that 1GB machine have been migrated yet, so the migration is still in progress, and already my server crashes every day. The server performance up until that moment is perfect. The DirectAdmin log files show nothing out of the ordinary. Yesterday only the MySQL server crashed, but it has also crashed the entire machine before. The memory usage in DA seems to be normal:

        directadmin  (pid 3923 22158 22159 22160 22161 22162)    8.75 MB
        dovecot      (pid 3851)                                  47.8 MB
        exim         (pid 1350)                                  1.29 MB
        httpd        (pid 21525 21528 21529 21530 21531 21532 21546 21571 21742 21743 21744)    490.4 MB
        mysqld       (pid 1299)                                  287.8 MB
        named        (pid 3807)                                  16.3 MB
        proftpd      (pid 1481)                                  1.91 MB
        sshd         (pid 1173 21494)                            5.16 MB

    Restarting services immediately frees up memory, but slowly over time the memory usage increases (about 24 hours to a crash). The commands:

        # sync
        # echo 3 > /proc/sys/vm/drop_caches

    will free all memory correctly. I could just create a cronjob, but that seems the wrong way around to me. I can't seem to pinpoint the cause. Any advice, references or tips are highly appreciated! Greetings, Coen

    Edit: free -m after drop_caches:

                     total       used       free     shared    buffers     cached
        Mem:          3830        735       3095          0          0         21
        -/+ buffers/cache:        712       3117
        Swap:          991          0        991

    I'll post another one this evening.
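
    Before scripting drop_caches, it may be worth confirming whether the OOM killer or a single service is behind the crashes; a short sketch of checks (log and config paths are CentOS defaults and assumptions, not taken from the question):

        # Did the kernel OOM killer fire before a crash?
        grep -iE 'out of memory|oom-killer' /var/log/messages | tail

        # Which processes are actually holding memory (RSS), biggest first
        ps aux --sort=-rss | head -n 15

        # MySQL is a common culprit on small DirectAdmin boxes; check its buffer sizes
        grep -iE 'innodb_buffer_pool_size|key_buffer' /etc/my.cnf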

    Read the article

  • .htaccess with godaddy not working in subdomain

    - by explorex
    Hi, I have a site uploaded to a shared subdomain (which is inside a folder), and .htaccess is not working. Please get the details from here.

    EDIT: copied from Stack Overflow - I uploaded a website to a subdomain, and every page is not working except the front page (please check it here). What could be the possible reason? I should have 8 pages at the front level and many more at the admin level, but I am getting a 404 error, as you can see. Does anyone have an idea or suggestion?

    UPDATE: .htaccess file

        RewriteEngine On
        RewriteCond %{REQUEST_FILENAME} -s [OR]
        RewriteCond %{REQUEST_FILENAME} -l [OR]
        RewriteCond %{REQUEST_FILENAME} -d
        RewriteRule ^.*$ - [NC,L]
        RewriteRule ^.*$ index.php [NC,L]

    UPDATE on URL routing: I do have a few URL routers like the one below, BUT I don't have any default router (it's just to make the URLs look cooler):

        $router->addRoute(
            'get-destination',
            new Zend_Controller_Router_Route('destination/get/:id/:dest-name', array(
                'controller' => 'destination',
                'action'     => 'get',
                'id'         => 'id',
                'dest-name'  => 'dest-name'
            ))
        );

    And in my navigation (which is loaded from XML) I have something like:

        <nav>
          <home>
            <label>HOME</label>
            <controller>index</controller>
            <action>index</action>
            <route>default</route>
          </home>

    since I was getting the URL problem from where the URL was routed. Please check phpinfo at http://websmartus.com/demo/globaltours/public_html/phpinfo.php
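
    When a Zend Framework front controller lives in a subfolder, the usual missing piece is an explicit RewriteBase; a hedged sketch (the path mirrors the URL in the question and is an assumption; if 404s persist, "Options -MultiViews" is sometimes also needed where the host allows it):

        RewriteEngine On
        RewriteBase /demo/globaltours/public_html/
        RewriteCond %{REQUEST_FILENAME} -s [OR]
        RewriteCond %{REQUEST_FILENAME} -l [OR]
        RewriteCond %{REQUEST_FILENAME} -d
        RewriteRule ^.*$ - [NC,L]
        RewriteRule ^.*$ index.php [NC,L]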

    Read the article

  • Synchronize Dreamweaver over an SSH tunnel using an SFTP connection

    - by Aeo
    Maybe... Just maybe... I'm asking too much here. Maybe I'm even barking up the wrong tree. I'm looking to essentially have Dreamweaver establish an SSH tunnel to one machine, and then use that connection to synchronize a site that is on another machine entirely.

    Now for some details: We've got two connections here at work. We've got our office connection for day-to-day business, and then we've got some fancy connection hosting our web servers upstairs. For the most part they've been mutually exclusive until recently. We had been establishing an SFTP connection to synchronize our web sites by going out over the office connection to the web and coming back in over the fancy connection to our servers upstairs. Recently-ish, we established a LAN connection to one of our servers that makes a pleasant change in VNC connection quality. Thanks to Vinagre, this makes it really easy to connect to any of our servers over this LAN connection via SSH tunnel for VNC. However, in spite of that new addition of a LAN connection, we still synchronize over the 'net: out the office connection and in on the fancy one upstairs. I'm looking to change this. I'd like to get Dreamweaver to first tunnel over our LAN connection to the servers, and then go from there to whatever connection it needs to. Am I asking too much?

    The current setup: Dreamweaver is installed on Windows XP, which is running within VirtualBox on top of Ubuntu 10.10. The network connection for VirtualBox is currently made in NAT mode, but could easily be switched to a bridged connection should it need to be. The LAN connection is to 1 of 5 servers running CentOS 5.
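
    The tunnel itself can be created outside Dreamweaver and then used as an ordinary SFTP server; a sketch with placeholder host names ("lan-server" is the LAN-reachable machine, "webserver.internal" the target):

        # From the Ubuntu host (or with PuTTY/plink inside the XP VM):
        ssh -N -L 2222:webserver.internal:22 user@lan-server

        # Dreamweaver would then use SFTP with host "localhost" and port 2222.
        # If the tunnel runs on the Ubuntu host while Dreamweaver sits in the
        # VirtualBox guest, bind the forward to an address the guest can reach,
        # e.g. -L 0.0.0.0:2222:webserver.internal:22 (under VirtualBox's default
        # NAT the guest usually sees the host as 10.0.2.2), or simply run the
        # tunnel inside XP:
        #   plink -N -L 2222:webserver.internal:22 user@lan-server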

    Read the article

  • Force delivery retry without restarting the SMTP Service on Windows Server 2008 R2

    - by Mathias R. Jessen
    I have a Windows Server 2008 R2 box hosting 3 virtual SMTP servers: vSMTP01, vSMTP02 and vSMTP03. The first two are configured to deliver all messages to dedicated smarthosts, while the last is set to just deliver the messages on its own. All other delivery settings are as default.

                                               ----(vSMTP01)-----> {SMARTHST01}
                                              /
        ----Inbound mail--->---SMTPSRV01---[----(vSMTP02)-----> {SMARTHST02}
                                              \
                                               ----(vSMTP03)-----> { Internet }

    Now I want to take SMARTHST01 out for maintenance, but I don't want to reject submissions to vSMTP01 while doing so, so I just let it continue running. When SMARTHST01 is no longer responding, vSMTP01 queues the messages and waits for the first retry interval to pass (15 minutes). So far so good. Let's say SMARTHST01 gets online again after 20 minutes. The first interval has passed, and I'll have to wait another 25 minutes for the second retry interval to pass. If I stop and start the SMTP Service (Services.msc - Simple Mail Transfer Protocol service - Stop), the server will retry all deliveries, but that would cause a service interruption for ALL virtual SMTP servers on the machine, which is highly undesirable. How can I manually force vSMTP01 to retry delivery of all queued messages without interrupting the service of vSMTP02 and vSMTP03?

    Read the article

  • Assigning cores to VM in vSphere

    - by user114933
    Complete vSphere newbie here...

    Background: So, I have a 12-core machine with 24 VMs on it. Currently, all the processing power is shared between these VMs equally.

    The question: Can I configure one VM to be given two CPUs' worth of processing, no matter what's happening on the other machines?

    My research: I tried two things in vSphere...
    - I set the reservation and limit on one VM to equal the same as two cores. To test whether my objective was being reached, I measured the time it would take to gzip a file when the other VMs were running nothing and when the other VMs were running CPU-intensive operations. I expected the time to gzip the file would be the same, because this VM gets priority for some processing. Unfortunately, the time taken to gzip the file when other VMs were running something was significantly more than when other VMs were not running anything.
    - I tried setting the Hyperthreaded Core Sharing mode to Internal, hoping that this would mean that my VM would get at least an entire core to itself. This did not work either.

    Thanks in advance!

    Read the article

  • Problems forwarding port 3306 on iptables with CentOS

    - by BoDiE2003
    I'm trying to add a forward to the MySQL server at 200.58.126.52 to allow access from 200.58.125.39, and I'm using the following rules (it's the whole iptables configuration of the VPS at my hosting provider). I can connect locally on the server that holds the MySQL service as localhost, but not from outside. Can someone check whether the following rules are fine? Thank you.

        # Firewall configuration written by system-config-securitylevel
        # Manual customization of this file is not recommended.
        *filter
        :INPUT ACCEPT [0:0]
        :FORWARD ACCEPT [0:0]
        :OUTPUT ACCEPT [0:0]
        :RH-Firewall-1-INPUT - [0:0]
        -A INPUT -j RH-Firewall-1-INPUT
        -A FORWARD -j RH-Firewall-1-INPUT
        -A RH-Firewall-1-INPUT -i lo -j ACCEPT
        -A RH-Firewall-1-INPUT -p icmp --icmp-type any -j ACCEPT
        -A RH-Firewall-1-INPUT -p 50 -j ACCEPT
        -A RH-Firewall-1-INPUT -p 51 -j ACCEPT
        -A RH-Firewall-1-INPUT -p udp --dport 5353 -d 224.0.0.251 -j ACCEPT
        -A RH-Firewall-1-INPUT -p udp -m udp --dport 631 -j ACCEPT
        -A RH-Firewall-1-INPUT -p tcp -m tcp --dport 631 -j ACCEPT
        -A RH-Firewall-1-INPUT -m state --state ESTABLISHED,RELATED -j ACCEPT
        -A RH-Firewall-1-INPUT -m state --state NEW -m tcp -p tcp --dport 22 -j ACCEPT
        -A RH-Firewall-1-INPUT -j REJECT --reject-with icmp-host-prohibited
        -A RH-Firewall-1-INPUT -m state --state NEW -m tcp -p tcp -s 200.58.125.39 --dport 3306 -j ACCEPT
        -A INPUT -p tcp -s 200.58.125.39 --sport 1024:65535 -d localhost --dport 3306 -m state --state NEW,ESTABLISHED -j ACCEPT
        -A OUTPUT -p tcp -s localhost --sport 3306 -d 200.58.125.39 --dport 1024:65535 -m state --state ESTABLISHED -j ACCEPT
        COMMIT

    And this is the output of the connection attempt:

        [root@qwhosti /home/qwhosti/public_html/admin/config]
        # mysql -u user_db -p -h 200.58.126.52
        Enter password:
        ERROR 2003 (HY000): Can't connect to MySQL server on '200.58.126.52' (113)
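
    One likely problem is rule order: in the listing above, the 3306 ACCEPT is appended after the chain's catch-all REJECT, so it can never match (error 113, "No route to host", is what the icmp-host-prohibited reject typically produces on the client). A hedged sketch of reordering at runtime; the insert position is an example, so check the real numbers first:

        # Show the current positions
        iptables -L RH-Firewall-1-INPUT -n --line-numbers

        # Insert the MySQL rule above the REJECT line (position 10 in the listing above)
        iptables -I RH-Firewall-1-INPUT 10 -m state --state NEW -m tcp -p tcp \
                 -s 200.58.125.39 --dport 3306 -j ACCEPT

        # Persist on CentOS, or move the line above the REJECT in /etc/sysconfig/iptables
        service iptables save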

    Read the article

  • Can ZFS ACLs be used over NFSv3 on a host without /etc/group?

    - by Sandra
    Question at the bottom.

    Background: My server setup is shown below, where I have an LDAP host which has a group called group1 that contains user1 and user2. The NAS is FreeBSD 8.3 with ZFS, with one zpool and a volume. serv1 gets /etc/passwd and /etc/group from the LDAP host. serv2 gets /etc/passwd from the LDAP host, and its /etc/group is local and read only. Hence it doesn't know anything about which groups the LDAP host has. Both servers connect to the NAS with NFS 3.

    What I would like to achieve: I would like to be able to create/modify groups in LDAP to allow/deny users read/write access to NFS 3 shared directories on the NAS. Example: group1 should have read/write to /zfs/vol1/project1 and nothing more.

    Question: The problem is that serv2 doesn't have an LDAP-controlled /etc/group file. So the only way I can think of to solve this is to use ZFS permissions with inheritance, but I can't figure out how, and what permissions I should set. Does someone know if this can be solved at all, and if so, any suggestions?

        +----------------------+
        |         LDAP         |
        | group1: user1, user2 |
        +----------------------+
            |        |        |
            |ldap    |ldap    |ldap
            |        v        |
            |   +-----------+ |
            |   |    NAS    | |
            |   | /zfs/vol1 | |
            |   +-----------+ |
            |     ^     ^     |
            |     |nfs3 |nfs3 |
            v     |     |     v
        +-----------------------+   +----------------------------+
        |         serv1         |   |           serv2            |
        | /etc/passwd from LDAP |   | /etc/passwd from LDAP      |
        | /etc/group from LDAP  |   | /etc/group local/read only |
        +-----------------------+   +----------------------------+
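
    One hedged idea (not from the original question) is to grant the access on the NAS itself with an NFSv4-style ACL keyed to the group's numeric GID, so the share does not depend on serv2's /etc/group; the dataset name and GID below are placeholders, and note that with NFSv3/AUTH_SYS the client still sends the user's group list, so serv2's users must carry that GID somehow (e.g. LDAP-backed nsswitch on serv2, or a matching local group on the NAS if setfacl refuses a bare number).

        # On the FreeBSD NAS: inheritable read/write ACL for GID 5000 on the project dir
        zfs set aclinherit=passthrough tank/vol1
        setfacl -m g:5000:modify_set:fd:allow /zfs/vol1/project1

        # Verify
        getfacl /zfs/vol1/project1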

    Read the article

  • What Media Extender / Centre Set up should I use?

    - by Bryn Hird
    I have installed cat6 throughout the house, which I use for telephony and network. In my cellar I have a NAS server and a gigabit switch, and I want to install a media centre to stream my videos, music, photos and live TV (coax from the aerial to the cellar) over the cat6. Yeah, I know I can get stuff on the internet, but the shared experience of watching TV as a family as it happens is a big plus for live TV. I'm aiming for 1080p. I want different users to be able to watch different channels. Max users = 4. I've played a little with Windows Media Centre; it works fine with live TV. Likewise I have XBMC up and running with live TV. The issue I have is what to put near the TV. I'd like a consistent user interface (grandma and the other technophobes in the house are continually pestering me on how to use different TVs, change channel, inputs etc.), so a key part of this for me is to make the user experience the same and simple, i.e. no keyboards / PCs hanging around the TV. I've just bought a Linksys DMA 2200 to test the Windows Media Centre, but obviously off eBay as they're a dying breed. And with Windows Media Centre removed from Microsoft's plans, such devices will get rarer. And as for 1080p, I think I can forget it with that setup. I have tested an Xbox 360, which also works, but ditto on Microsoft's plans for WMC. I was thinking of a WD TV Live to test the XBMC setup. Now to the question: any advice on Media Centre / Extender setups that will do the job as above and have some degree of futureproofing (building my own with my Raspberry Pi is a last resort)? I'd like to understand the standards involved in the futureproofing if anyone knows (DLNA, RVU etc.).

    Read the article

  • Drowning in documents - recommend doc management solutions?

    - by Martin Day
    I've been researching document management lately. I want to organise my docs at home and also at the office. Finding affordable solutions one can actually test drive is quite hard. Some that I've downloaded just don't seem to work (testing on brand new Vista PC). I've seen some software on Amazon like Paperport but not really sure what they're like. For home I'd like something to organise files, full text search, good scanner integration, nice interface etc. But for the office it seems harder. I need something that does proper workflow and keeps versions. It will have an audit trail. Documents can be approved, checked in/out etc. I know a few clients who would like something similar. It would be great just to import thousands of documents from a shared drive and get them indexed with dupes killed. I'd like to be super clear about how/where the documents are being stored so that maintenance and backups are clear. My Google/twitter searches lead back to the same tired and vague webpages pushing what look like expensive and custom made solutions. Some might be very good I suppose but it's darn hard to tell. I don't mind a hosted package but all in all I don't think something like Google Docs, as good as it is now, will work. There are too many quirks and missing features (as compared to Office). Being able to work directly with the common Office file formats is important. I've noted a similar sounding question asked here back in August but it didn't seem to turn up too many solutions that I could easily and quickly apply. Also there could have been some changes since then so I feel it's worth asking.

    Read the article

  • Servers/Websites Keep Going Down

    - by Tyler Johnson
    Okay, I'm a noobie. I know how to build and compose a website, but I have no idea what I'm doing when it comes to servers and server commands, etc. I've recently had a problem with all of my sites on our servers going down all at once, and then I have to go in and reboot the server for them to come up again. At first this was annoying, but now it is becoming agonizing, as it now takes 3-4 reboots for the websites to come back up. I contacted support for my hosting, but they are not being very helpful. They just keep telling me what the issue might be and basically telling me that I'm going to have to look into it and figure it out, which really isn't possible since I know nothing. Anyway, here are the things they said were possible reasons:
    - They said I have "strange logs" in my Apache webserver log, error: sh: fetch: command not found.
    - My php.ini memory limit is 256M, which is very high. It should be 32M or 64M.
    - The server is reaching MaxClients, meaning we have more than 150 visitors at a time. (They supposedly "fixed" this, but the sites/server are still going down.)
    - I have some WordPress sites with plugins getting errors like:
        PHP Warning: pack(): Type H: illegal hex digit G in...
        PHP Fatal error: Cannot use object of type stdClass as array in...
        PHP Fatal error: Maximum execution time of 30 seconds exceeded in...
        PHP Fatal error: Call to undefined function file_exists() in...
        PHP Parse error: syntax error, unexpected '<'

    I know that's a lot, but I really am at my wits' end and have no idea what to do now. If anyone could maybe give me some advice or point me in the right direction, I would greatly appreciate it! Thanks!

    Oh, and here are the specs for my server: RAM: 2048MB, CPU Shares: 40, Primary Disk: 50GB, Data Transfer: 75GB, Port Speed: 5Mbps, Type: Linux.
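
    A sketch of where to start looking, based only on the hints above (log paths and the example prefork numbers are assumptions, not measurements from this server):

        # Is Apache hitting its connection ceiling before the lock-ups?
        grep -i maxclients /var/log/httpd/error_log | tail    # path varies by distro/panel

        # How much does each Apache child cost, and how many fit in ~2 GB next to MySQL?
        ps -o rss= -C httpd | awk '{sum+=$1; n++} END {if (n) printf "%d children, avg %.1f MB\n", n, sum/n/1024}'

        # Illustrative prefork settings for a small box (tune to the measured child size):
        #   <IfModule prefork.c>
        #       StartServers          5
        #       MinSpareServers       5
        #       MaxSpareServers      10
        #       MaxClients           40
        #       MaxRequestsPerChild 2000
        #   </IfModule>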

    Read the article

  • Google Chrome no longer treats "Web Apps" specially

    - by Adrian Petrescu
    I'm running Google Chrome (Dev Channel), with the --enable-apps flag, on both OS X and Ubuntu. I have four or five web apps installed, and they appear in the "New Tab" page just fine. The problem is that, before, when the feature first became available in the Dev Channel, the actual tabs hosting the web apps received special treatment: they would have a 3D Dock-like look, and (more importantly) the tab bar would be hidden while using that tab. Sometime in the last few weeks, however, it seems that the special treatment just disappeared with one of the daily updates. The web apps still show up in the New Tab page, they still work in the sense that they capture all URLs going to that web app, and they use the right icons; but they've basically become indistinguishable from a regular pinned tab. The two special features mentioned above have disappeared, on both Ubuntu and OS X. My questions are simply: a) Does this happen to anyone else? When exactly did it begin? b) Why did Google regress the feature? c) Is there any flag I can enable to get it back?

    Read the article
