Search Results

Search found 28288 results on 1132 pages for 'home directory'.

  • Where to get glib-config for Kubuntu?

    - by Carl Smotricz
    I'm trying to compile Midnight Commander on a Kubuntu 9.10 (Karmic) box with no root access. I've set up a directory under $HOME, downloaded the mc source package and various stuff required for building, such as autotools, and unpacked the contents of all those packages into this working directory so that I have the usual ./usr, ./lib, ./etc hierarchy. I manage to get configure through a lot of tests, but I can't seem to fool it into finding glib:

        checking for glib-2.0... checking for glib-config... no
        checking for glib12-config... no
        checking for glib-config... no
        checking for GLIB - version >= 1.2.6... no
        *** The glib-config script installed by GLIB could not be found
        *** If GLIB was installed in PREFIX, make sure PREFIX/bin is in
        *** your path, or set the GLIB_CONFIG environment variable to the
        *** full path to glib-config.
        configure: error: Test for glib failed. GNU Midnight Commander requires glib 1.2.6 or above.

    My system has glib installed:

        /lib/libglib-2.0.so.0
        /lib/libglib-2.0.so.0.2200.3

    ... and I've also downloaded and unpacked the glib packages into my working directory:

        libglib2.0-0_2.22.2-0ubuntu1_i386.deb
        libglib2.0-dev_2.22.2-0ubuntu1_i386.deb

    ... but still the elusive glib-config is nowhere to be found. It's not in any Debian package for Karmic, either. So I'd appreciate any help getting over this hurdle. Please note, again, that I don't have root, so I can't just merrily apt-get stuff.
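
    For what it's worth, glib-config only ever shipped with GLib 1.x; GLib 2.x is discovered through pkg-config instead, so pointing configure at the unpacked -dev package is one plausible direction. A minimal sketch, assuming the .debs were unpacked under ~/mc-build (the .pc files may also need their prefix= edited to match that location):

        export PKG_CONFIG_PATH="$HOME/mc-build/usr/lib/pkgconfig:$PKG_CONFIG_PATH"
        export CFLAGS="-I$HOME/mc-build/usr/include"
        export LDFLAGS="-L$HOME/mc-build/usr/lib"
        ./configure --prefix="$HOME/mc"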

  • Windows Remote Desktop: "configuring remote session" closes without error

    - by icelava
    I have a desktop/laptop pair at home running x64 Windows 7 (the desktop was upgraded from Windows Vista and works just fine). I remote desktop to them on a daily basis when outside. In recent weeks, I have occasionally failed to connect to my desktop: it connects and authenticates fine, but the "configuring remote session" dialog simply closes without showing me the desktop window or any error message. There is no related error in the event log on the desktop computer. Some suggestions call for disabling remote audio, which mine already is, and trying different audio modes did not yield any different result. I am not too sure whether this is related to video card drivers (they do get auto-updated), since remote desktop video is supposed to go through a virtual display driver? For what it's worth, the desktop drives three monitors via an ATI Radeon HD5770 (1 DisplayPort, 2 DVI), but I do not see a real problem with that, since I can mostly connect and operate it remotely. I tried to "remote tunnel" via my home laptop, but obviously that won't work either, as the problem lies in the desktop. What other conditions can cause remote desktop to fail without error?

    UPDATE: I came home and still couldn't connect to the desktop until I restarted the entire system.

  • Apache2 shared server: default webpage

    - by Eamorr
    Greetings, I have an apache2 server with 4 domain names pointing to my server's single IP address. When I type in www.site1.com it serves pages from /home/eamorr/site1/index.php, and the same goes for www.site2.com, www.site3.com and www.site4.com. However, when I type a domain into the address bar of a browser without the www, it always redirects to site1.com:

        site1.com -> site1.com
        site2.com -> site1.com
        site3.com -> site1.com
        site4.com -> site1.com

    How do I configure apache to do the following?

        site1.com -> site1.com
        site2.com -> site2.com
        site3.com -> site3.com
        site4.com -> site4.com

    Here is my default config:

        ServerAdmin [email protected]
        ServerName www.site1.com
        DocumentRoot /home/eamorr/sites/site1.com/www
        DirectoryIndex index.php index.html
        <Directory /home/eamorr/sites/site1.com/www>
            Options Indexes FollowSymLinks MultiViews
            Options -Indexes
            AllowOverride all
            Order allow,deny
            allow from all
            php_value session.cookie_domain ".site1.com"
            # Added by EOH for redirection
            RewriteEngine on
            RewriteRule ^([^/.]+)/?$ driver.php?uname=$1 [L]
        </Directory>
        ErrorLog /var/log/apache2/error.log
        # Possible values include: debug, info, notice, warn, error, crit,
        # alert, emerg.
        LogLevel warn
        CustomLog /var/log/apache2/access.log combined

    I'd like apache to look at the domain name and then redirect to www.sitex.com. Is there an Apache rule to do this? I hope someone can help; my sysadmin/apache2 config skills aren't the best. Many thanks in advance,
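
    The usual cause of this symptom is that Apache falls back to the first defined virtual host whenever no ServerName or ServerAlias matches the bare domain. A sketch of one fix, with the vhost layout assumed rather than taken from the question:

        <VirtualHost *:80>
            ServerName www.site2.com
            # let the bare domain match this vhost instead of falling through to site1
            ServerAlias site2.com
            DocumentRoot /home/eamorr/sites/site2.com/www
        </VirtualHost>

    A Redirect or RewriteRule inside each vhost can then canonicalize site2.com to www.site2.com if desired.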

  • nginx: SSI working on Apache backend, but not on gunicorn backend

    - by j0nes
    I have nginx in front of an Apache server and a gunicorn server for different parts of my website, and I am using the SSI module in nginx to display a snippet on every page. The websites pull in the snippet with an SSI include directive. For static pages served by nginx everything is working fine, and the same goes for the Apache-generated pages - the SSI include is evaluated and the snippet is filled in. However, for requests to my gunicorn backend, which runs a Python app in Django, the SSI include does not get evaluated. Here is the relevant part of the nginx config:

        location /cgi-bin/script.pl {
            ssi on;
            proxy_pass http://default_backend/cgi-bin/script.pl;
            include sites-available/aspects/proxy-default.conf;
        }

        location /directory/ {
            ssi on;
            limit_req zone=directory nodelay burst=3;
            proxy_pass http://django_backend/directory/;
            include sites-available/aspects/proxy-default.conf;
        }

    Backends:

        upstream django_backend {
            server dynamic.mydomain.com:8000 max_fails=5 fail_timeout=10s;
        }

        upstream default_backend {
            server dynamic.mydomain.com:80;
            server dynamic2.mydomain.com:80;
        }

    proxy_default.conf:

        proxy_redirect off;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;

    What is the cause of this behaviour? How can I get SSI includes working for my pages generated on gunicorn? How can I debug this further?
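
    One thing worth ruling out: nginx can only evaluate SSI directives in responses it can read as plain text, so a backend that compresses its output (e.g. Django's GZipMiddleware) will ship the includes through unevaluated. A sketch of that check, keeping the existing includes:

        location /directory/ {
            ssi on;
            limit_req zone=directory nodelay burst=3;
            # ask the backend for uncompressed output so the SSI filter can parse it
            proxy_set_header Accept-Encoding "";
            proxy_pass http://django_backend/directory/;
            include sites-available/aspects/proxy-default.conf;
        }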

  • Ubuntu: Tar doesn't work correctly

    - by Phuong Nguyen
    It looks like tar is having some problems. I installed Ubuntu 10.04 alpha a few weeks ago. This alpha version was so terrible that I had to switch back to 9.10, so I backed up all of my profile data (/home/my_user_name) to my_user_name.tar.gz. Here is what I did (in 10.04):

        1. Open Nautilus and go to /home/my_user_name
        2. Press Ctrl+H to view all hidden files
        3. Press Ctrl+A to select all files
        4. Right click and choose [Compress...]

    Now that I have set up Ubuntu 9.10 again, I extracted the tar using the Extract Here command from Nautilus. Funny things happened:

        - Instead of extracting to the current folder, the archive manager created a folder named my_user_name and put the extracted content into it.
        - None of the files I placed directly under /home/my_user_name were extracted.
        - None of the directories starting with . (dot) were extracted.

    I wonder if there is some incompatibility between tar in 10.04 alpha and 9.10 that causes the problem? Now all of my email and Basket data is long gone. I'm freaking out right now. Is there anything I can do to get tar to give me back my data?
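
    Before assuming the data is gone, listing the archive is the safe first step; the dotfiles may well be inside even if the GUI extractor skipped them. A sketch:

        # list everything in the archive, including dotfiles, without extracting
        tar tzvf my_user_name.tar.gz | less
        # extract to a scratch directory from the command line, bypassing the GUI tool
        mkdir /tmp/restore && tar xzvf my_user_name.tar.gz -C /tmp/restore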

  • Have an unprivileged non-account user ssh into another box?

    - by Daniel Quinn
    I know how to get a user to ssh into another box with a key:

        ssh -l targetuser -i path/to/key targethost

    But what about non-account users like apache? As this user doesn't have a home directory to which it can write a .ssh directory, the whole thing keeps failing with:

        $ sudo -u apache ssh -o StrictHostKeyChecking=no -l targetuser -i path/to/key targethost
        Could not create directory '/var/www/.ssh'.
        Warning: Permanently added '<hostname>' (RSA) to the list of known hosts.
        Permission denied (publickey).

    I've tried variations using -o UserKnownHostsFile=/dev/null and setting $HOME to /dev/null, and none of these have done the trick. I understand that sudo could probably fix this for me, but I'm trying to avoid requiring manual server config, since this code will be deployed on a number of different environments. Any ideas? Here are a few examples of what I've tried that don't work:

        $ sudo -u apache export HOME=path/to/apache/writable/dir/ ssh -o StrictHostKeyChecking=no -o UserKnownHostsFile=path/to/apache/writable/dir/.ssh/known_hosts -l deploy -i path/to/key targethost
        $ sudo -u apache ssh -o StrictHostKeyChecking=no -o UserKnownHostsFile=path/to/apache/writable/dir/.ssh/known_hosts -l deploy -i path/to/key targethost
        $ sudo -u apache ssh -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null -l deploy -i path/to/key targethost

    Eventually, I'll be using this solution to run rsync as the apache user.
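
    The first variation fails for a mundane reason: export is a shell builtin, not an executable, so sudo cannot run it. env(1) achieves the same effect for the ssh process; a sketch, with the writable path an assumption:

        # give ssh a HOME it can write .ssh/known_hosts under, without any server config
        sudo -u apache env HOME=/path/to/apache/writable/dir \
            ssh -o StrictHostKeyChecking=no \
                -o UserKnownHostsFile=/path/to/apache/writable/dir/.ssh/known_hosts \
                -l deploy -i path/to/key targethost

    Note that the final "Permission denied (publickey)" is separate from the known_hosts warning: the key file itself must also be readable by the apache user (and ssh will refuse a key that is readable by anyone else).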

  • Ubuntu VM "read only file system" fix?

    - by David
    I was going to install VMware Tools on an Ubuntu Server virtual machine, but I ran into the issue of not being able to create a cdrom directory in the /mnt directory. I then tested to see if it was just a permissions issue, but I couldn't even create a folder in the home directory. It keeps stating that it is a read-only file system. I know a little about Linux, but I'm not comfortable with it yet. Any advice would be much appreciated. Requested information from a comment:

        username@servername:~$ mount
        /dev/sda1 on / type ext4 (rw,errors=remount-ro)
        proc on /proc type proc (rw)
        none on /sys type sysfs (rw,noexec,nosuid,nodev)
        none on /sys/fs/fuse/connections type fusectl (rw)
        none on /sys/kernel/debug type debugfs (rw)
        none on /sys/kernel/security type securityfs (rw)
        udev on /dev type tmpfs (rw,mode=0755)
        none on /dev/pts type devpts (rw,noexec,nosuid,gid=5,mode=0620)
        none on /dev/shm type tmpfs (rw,nosuid,nodev)
        none on /var/run type tmpfs (rw,nosuid,mode=0755)
        none on /var/lock type tmpfs (rw,noexec,nosuid,nodev)
        none on /lib/init/rw type tmpfs (rw,nosuid,mode=0755)
        binfmt_misc on /proc/sys/fs/binfmt_misc type binfmt_misc (rw,noexec,nosuid,nodev)

    Running mount as root (root@server01:~#) produces exactly the same output.
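
    With errors=remount-ro in play, the kernel may have flipped / to read-only after a disk error while /etc/mtab (which mount prints) still shows the old rw state. A sketch of the usual checks:

        dmesg | grep -iE 'remount|ext4|i/o error'   # look for the event that triggered it
        grep ' / ' /proc/mounts                     # the kernel's view; 'ro' here is authoritative
        sudo mount -o remount,rw /                  # temporary fix; fsck the disk if errors recur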

  • What is the IPv6 equivalent to IPv4 RFC1918 addresses?

    - by Kumba
    Having a hard time wrapping my head around IPv6 here. A lot of the lingo seems targeted at enterprise-level IPv6 deployments, discussing link-local, site-local, global unicast, scopes, etc., with not a lot of solid information on really small networks, like home networks. I want to check my thinking and make sure I am getting the correct translations from IPv4-speak to IPv6-speak.

    The first question is: what's the equivalent of RFC 1918 for IPv6? Initial searches suggested there was no equivalent. Then I stumbled upon Unique Local Addresses (RFC 4193), which states that all ULAs should be assigned the prefix fc00, followed by a 40-bit random number in the routing prefix. This random number is to "prevent collisions when two IPv6 networks are interconnected" -- again, another reference to an enterprise-level function. If I have a small local LAN at home, numbered using 192.168.4.0/24, what's my equivalent in IPv6's ULA scope? Assuming I will never, ever tie that IPv6 address into the real internet (a router will NAT & firewall it), can I ignore the RFC to an extent and go with fc00::4:0/120? It also seems that addresses in fc00::/7 are meant to be globally routable. Does this mean I'll need extra protections so my router does not automatically start advertising these private IPv6 addresses to the world?

    Second question: what's this link-local thing? Reading suggests a default-assigned address in the fe80::/10 range, with the last 64 bits of the address derived from the interface's MAC address. It seems to be required, too, but I'm annoyed by the constant discussion of it in relation to enterprise networks.

    Third question: what is the scope ID for? It seems to be yet another term tossed around in relation to enterprise networks, especially when interconnecting them, but with almost no explanation at the smaller home-network level. Can I see a scope ID and CIDR notation used together? I.e., fc00::4:0/120%6, or are scope IDs only supposed to be applied to a single /128 IPv6 address?
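
    As a point of orientation: RFC 4193 actually splits fc00::/7 in two, and the locally assigned half is fd00::/8, so a home ULA prefix starts with fd. A hedged sketch of the layout, with the 40-bit global ID below entirely made up:

        | 8 bits | 40 bits   | 16 bits   | 64 bits      |
        | fd     | global ID | subnet ID | interface ID |

        # e.g. with random global ID 9a:1f4c:22b7, the old 192.168.4.0/24
        # could become subnet 4:
        fd9a:1f4c:22b7:0004::/64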

  • NoMachine NX window closes after establishing connection

    - by blackicecube
    I am trying to use the NoMachine NX server and client, but somehow it doesn't work. What happens is the following:

        1. Client starts up
        2. Client authenticates with server
        3. The NoMachine window appears for 2-4 seconds
        4. The NoMachine window exits

    Somehow a "closeEvent" is sent. Here's what I see in the log file:

        [Thu Sep 24 11:20:37 2009]: Starting nxcomp with options: 'NX 299 Switch connection to: NX mode: unencrypted options: nx/nx,options=/home/foo/.nx/S-adnws029-1022-7EEF1367361DB2A7F4D9F76B06F4B434/options:1022'.
        [Thu Sep 24 11:20:38 2009]: NXFileMonitor::readData
        [Thu Sep 24 11:20:38 2009]: NXFileMonitor: opened file: [/home/foo/.nx/S-adnws029-1022-7EEF1367361DB2A7F4D9F76B06F4B434/session]
        [Thu Sep 24 11:20:38 2009]: LoginDialog::ShowConnectionStatus code=[246] str=[Initializing X protocol compression] error=[0]
        [Thu Sep 24 11:20:38 2009]: ProgressDialog::printNxStatus: [Initializing X protocol compression]
        [Thu Sep 24 11:20:38 2009]: LoginDialog::ShowConnectionStatus code=[247] str=[Established the display connection] error=[0]
        [Thu Sep 24 11:20:38 2009]: ProgressDialog::printNxStatus: [Established the display connection]
        [Thu Sep 24 11:20:38 2009]: NXFileMonitor::readData  (repeated several times)
        [Thu Sep 24 11:20:38 2009]: LoginDialog: slotAgentTimer
        [Thu Sep 24 11:20:38 2009]: NXFileMonitor::readData  (repeated several times)
        [Thu Sep 24 11:20:38 2009]: QClipboard: Unknown SelectionClear event received.
        [Thu Sep 24 11:20:38 2009]: LoginDialog: slotAgentTimer
        [Thu Sep 24 11:20:38 2009]: LoginDialog: Agent found closing windows...
        [Thu Sep 24 11:20:38 2009]: LoginDialog: setting automatic reconnection to true.
        [Thu Sep 24 11:20:38 2009]: Settings::flush
        [Thu Sep 24 11:20:38 2009]: Settings::flush
        [Thu Sep 24 11:20:38 2009]: LoginDialog: closeEvent received!
        [Thu Sep 24 11:20:38 2009]: NXFileMonitor::readData
        [Thu Sep 24 11:20:38 2009]: NXFileMonitor::readData
        [Thu Sep 24 11:20:38 2009]: LoginDialog::destructor called begin
        [Thu Sep 24 11:20:38 2009]: LoginDialog: stopAllTimers
        [Thu Sep 24 11:20:38 2009]: LoginDialog: stopProgressTimer
        [Thu Sep 24 11:20:38 2009]: Utility::getPreferencesFile: 'nxclient' - '/home/foo/.nx/config/nxclient.cfg'
        [Thu Sep 24 11:20:38 2009]: Settings::flush
        [Thu Sep 24 11:20:38 2009]: Called destructor for protocol class
        [Thu Sep 24 11:20:38 2009]: LoginDialog::destructor called end

    Anyone with a helpful idea?

  • Typing Japanese on Windows Vista with Dvorak

    - by Ken
    I'm using Windows Vista, I type English with the Dvorak keyboard layout, and I want to be able to type Japanese text that way too. I've figured out how to set it up to let me type Japanese, but it uses QWERTY. What I've got so far is:

        1. Click the "EN" in the taskbar and select "JP".
        2. If the letter that appears in the taskbar is "A", hit Alt-~ to change it to "あ".
        3. Type as if I were typing romaji on a QWERTY keyboard (e.g., left pinky home row, right ring finger top row), and hiragana appear (あお).
        4. Press the spacebar to convert to kanji (e.g., 青), and Return to accept.

    That all works great, but it assumes I'm on QWERTY, which isn't very comfortable for me. I want everything the same, but to be able to type kana with Dvorak (e.g., left pinky home row, left ring finger home row - あお). I can do this on Mac OS, so it's not an unheard-of feature. But it was kind of an obscure setting to find there, so I figure on Windows it's probably a really obscure setting. :-) I haven't been able to find it yet. Thanks!

  • Nexus One WiFi connection problems.

    - by sunocky
    I have two new Nexus Ones for a research project, and for the project I need to keep a server running on the phone. But I soon found out that both phones have inconsistent WiFi connection problems at my home: they can connect to my WiFi network, but will drop off after a random time, and in order to reconnect I may need to reboot my router, or the phone will say "obtaining IP address" and then "Unsuccessful". I also own a G1 with firmware version 1.6, and it has no such connection problems. To my surprise, the two Nexus Ones work fine with the WiFi network at my workplace, which is a WEP connection; by the way, it is a WPA connection at my home. Does anyone know what the problem with the Nexus One is? Any suggestions on what I should do to keep the WiFi connection alive all the time at home? Thanks very much!

  • Has anyone managed to build php5-xapian on Ubuntu 12.04?

    - by jetboy
    As Xapian's been dropped from the Ubuntu repositories, I'm attempting to build my own .deb from the instructions here:

        http://article.gmane.org/gmane.comp.search.xapian.general/8855
        http://beeznest.wordpress.com/2011/07/06/howto-build-your-own-binaries-of-php-xapian-bindings-for-debian/

    I can only get things to progress beyond the first few seconds by leaving out 'rm debian/control', but if I do, it looks as if the Python and Ruby bindings build and pass their versions of smoketest correctly. However, the PHP part of the build is failing with this error:

        /home/charlie/xapian-bindings-1.2.8/php/smoketest.php:38: include(xapian.php): failed to open stream: No such file or directory
        FAIL: smoketest.php

    There's a xapian.php file in /home/charlie/xapian-bindings-1.2.8/php/php5/, but if I copy it to /home/charlie/xapian-bindings-1.2.8/php/ or change the path to it in smoketest.php, the build fails right near the start with:

        dpkg-source: error: aborting due to unexpected upstream changes

    Unfortunately I'm out of my comfort zone building from source. Anyone got any ideas?

    Edit, following James' answer: it builds fine if I follow the instructions exactly. I built it on a test VM initially, but that didn't build the PHP package, as PHP itself wasn't installed - an obvious gotcha, but worth mentioning. Installing generated the following error:

        Setting up php5-xapian (1.2.8-1) ...
        Processing triggers for libapache2-mod-php5 ...
        dpkg (subprocess): unable to execute installed post-installation script (/var/lib/dpkg/info/libapache2-mod-php5.postinst): Permission denied
        dpkg: error processing libapache2-mod-php5 (--install):
        subprocess installed post-installation script returned error exit status 2
        Errors were encountered while processing:
        libapache2-mod-php5

    It's only a script for restarting Apache. Stopping Apache before running sudo dpkg -i php5-xapian_*.deb prevents the error. Xapian now shows up in phpinfo(). Job done. Thanks.

  • How/when do you study?

    - by Sergei
    Would be interesting to find out how other sysadmins are educating themselves. I find myself needing to learn new things constantly. I prefer to spend more time on a subject and know it thoroughly, knowing that not doing so will kick me back in the future. This can be frustrating sometimes, as it feels like I move too slowly. Our company has an account on safari.oreilly.com, and I am reading a book or two at any given time. I also read sysadmin-related blogs for ideas and tips, and to keep myself in tune with the trends. I cannot do any study at home, as I would rather spend my out-of-work hours with my family; plus I find it hard/impossible to study at home due to the inability to concentrate there. So I mostly study while on the train; luckily my commute takes up to 2 hours a day. I also read a lot at work and don't feel guilty about it: to fix, implement or plan, I need solid knowledge, and if that requires time then that is part of my job as a sysadmin. There is a joke that says "a sysadmin is a person that knows a lot about everything and as a result knows nothing" - I think there is a grain of truth here...

  • CDN Rerouting on 404 (file not yet in synch with original storage)

    - by Alan Ristic
    Here is the problem. I've set up my app (on EC2) to store uploaded images directly on Amazon S3. I'd like to be able to serve the static files (CDN-style) from my 'home' server, so I wrote a script that syncs from S3. But there is a window of (at least) one minute before things are in sync. I see two solutions to the problem of pics not yet being available on the 'home' server:

        1. I write a script on EC2 (where the app resides) to fetch from the DB the pics that have a status of "not-yet-synced", which is the default state when a user uploads a picture. The script then pings each picture and, if it gets an OK response, updates the DB from "not-yet-synced" to "synced".
        2. The preferred solution would be to let apache (in this case) redirect a request for an image to S3 if it sees a 404 (i.e. doesn't find the image requested). This way I wouldn't need the script from solution 1.

    So which approach do you suggest I take to solve this redundancy problem? And what is the practice in production environments? To clarify further: I'd like to serve images first from the 'home' server and, if that fails, serve them from S3. Tnx, Alan
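
    Option 2 maps naturally onto mod_rewrite's file-existence test. A sketch for an .htaccess at the 'home' server's docroot, with the bucket name and images/ path made up:

        RewriteEngine on
        # not yet synced to local disk? fall back to the S3 copy with a temporary redirect
        RewriteCond %{REQUEST_FILENAME} !-f
        RewriteRule ^images/(.*)$ http://mybucket.s3.amazonaws.com/images/$1 [R=302,L]

    A 302 rather than a 301 keeps clients and caches coming back to the 'home' server once the sync lands.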

  • Can I get a domain controller not to act as DNS for the members?

    - by rsw
    Let me try to explain my current setup. I have one Linux machine acting as DHCP and DNS server (dhcpd3 and bind) on my network; call it 10.12.0.10. This works fine: every computer I hook up to the network gets an IP address and the proper DNS servers. However, we also have a Windows Server 2003 domain controller on the network, 10.12.0.20, to which we join our Windows computers (running XP). I noticed that when I run nslookup on one of the Windows machines, it says the primary DNS is 10.12.0.20. This has not been much of a problem so far, since the Windows clients are stationary and the Windows server itself points to my real DHCP/DNS, so I can reach everything specified in it. However, it turns out to be a problem when we use laptops. They connect to the domain here and get a DNS server, but when a user travels or connects the computer from home, we hit a problem: they are connected to their internet, but their DNS is 10.12.0.20, which they can't reach because they're at home and not on the office network. I solved this by removing the registry key called "NameServer" with the value 10.12.0.20, but it gets set again whenever they log on to the domain (when they get back to the office). Can I somehow make the computers take whatever DNS server they are handed when connecting to the internet or a home network, instead of always trying to reach the domain controller?
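
    If the laptops' adapters end up with a statically configured DNS entry (which a persistent NameServer registry value suggests), switching them back to DHCP-assigned DNS is scriptable. A sketch, with the connection name an assumption that varies per machine:

        netsh interface ip set dns name="Local Area Connection" source=dhcp

    With DNS handed out by the dhcpd instead of pinned in the registry, roaming machines would simply take whatever the local network offers.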

  • apache using mod_auth_kerb always asks for the password twice

    - by DrStalker
    (Debian Squeeze) I'm trying to set Apache up to use Kerberos authentication to allow AD users to log in. It is working, but it prompts the user twice for a username and password, and the first prompt is ignored no matter what is put into it. Only the second prompt includes the AuthName string from the config (i.e. the first window is a generic username/password one; the second includes the title "Kerberos Login"). I'm not worried about integrated Windows authentication working at this stage; I just want users to be able to log in with their AD account so we don't need to set up a second repository of user accounts. How do I fix this to eliminate that first useless prompt? The directives in the apache2.conf file:

        <Directory /var/www/kerberos>
            AuthType Kerberos
            AuthName "Kerberos Login"
            KrbMethodNegotiate On
            KrbMethodK5Passwd On
            KrbAuthRealms ONEVUE.COM.AU.LOCAL
            Krb5KeyTab /etc/krb5.keytab
            KrbServiceName HTTP/[email protected]
            require valid-user
        </Directory>

    krb5.conf:

        [libdefaults]
            default_realm = ONEVUE.COM.AU.LOCAL
        [realms]
            ONEVUE.COM.AU.LOCAL = {
                kdc = SYD01PWDC01.ONEVUE.COM.AU.LOCAL
                master_kdc = SYD01PWDC01.ONEVUE.COM.AU.LOCAL
                admin_server = SYD01PWDC01.ONEVUE.COM.AU.LOCAL
                default_domain = ONEVUE.COM.AU.LOCAL
            }
        [login]
            krb4_convert = true
            krb4_get_tickets = false

    The access log when accessing the secured directory (note the two separate 401s):

        192.168.10.115 - - [24/Aug/2012:15:52:01 +1000] "GET /kerberos/ HTTP/1.1" 401 710 "-" "Mozilla/5.0 (Windows NT 6.1; WOW64) AppleWebKit/537.1 (KHTML, like Gecko) Chrome/21.0.1180.83 Safari/537.1"
        192.168.10.115 - - [24/Aug/2012:15:52:06 +1000] "GET /kerberos/ HTTP/1.1" 401 680 "-" "Mozilla/5.0 (Windows NT 6.1; WOW64) AppleWebKit/537.1 (KHTML, like Gecko) Chrome/21.0.1180.83 Safari/537.1"
        192.168.10.115 - [email protected] [24/Aug/2012:15:52:10 +1000] "GET /kerberos/ HTTP/1.1" 200 375 "-" "Mozilla/5.0 (Windows NT 6.1; WOW64) AppleWebKit/537.1 (KHTML, like Gecko) Chrome/21.0.1180.83 Safari/537.1"

    And one line in error.log:

        [Fri Aug 24 15:52:06 2012] [error] [client 192.168.0.115] gss_accept_sec_context(2) failed: An unsupported mechanism was requested (, Unknown error)
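
    The pattern of two 401s followed by a 200 is consistent with the browser first answering the Negotiate challenge (which fails, per the gss_accept_sec_context error) before falling back to Basic. If SPNEGO isn't needed, one sketch is to offer only the password method:

        <Directory /var/www/kerberos>
            AuthType Kerberos
            AuthName "Kerberos Login"
            # with Negotiate off, clients go straight to the single Basic-auth prompt
            KrbMethodNegotiate Off
            KrbMethodK5Passwd On
            ...
        </Directory>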

  • Moving a lotus database causes incoming mail failure

    - by Logman
    Hey all, I moved a couple of Lotus Notes databases to another HD (we ran out of space) by using a directory link/pointer on the Domino server: a text file with the .DIR extension containing the desired new path. That works fine: I open a Lotus Notes client, use the ID and then open the database; the directory link works, and the database at its new location opens with no problems. We then found out that we couldn't FORWARD any mail, and did the following to make that work:

        1. Go to menu File | Mobile | Edit Current Location
        2. Go to the Mail tab and enter the correct path under "Mail File:"; for example, it should read "mail\morespace\flabor" and not "mail\flabor" ("morespace" being the directory link/pointer)

    We can forward and reply to emails fine after that fix. The remaining problem is that the user gets no incoming email: Domino is still trying to deliver the user's incoming mail to the old db location "mail\flabor" instead of "mail\morespace\flabor", and returns a delivery error saying the user does not exist. Is this a cache problem? We have reset the server ("Q" at the prompt), though we have not completely shut it down. Thanks, Frank

  • Can't get rsync over sftp to work

    - by Patrik
    I'm trying to set up a backup system from an Ubuntu server to a Synology NAS (DS413j) using rsync and sftp. I have created a user for this that we can call ubuntu-backup, with a directory called www in its home directory where the backup will be saved. I have enabled Network Backup in DSM, and the user ubuntu-backup has full access to its home directory. Here is my rsync config file on the Synology NAS:

        #motd file = /etc/rsyncd.motd
        #log file = /var/log/rsyncd.log
        pid file = /var/run/rsyncd.pid
        lock file = /var/run/rsync.lock
        use chroot = no

        [NetBackup]
        path = /var/services/NetBackup
        comment = Network Backup Share
        uid = root
        gid = root
        read only = no
        list = yes
        charset = utf-8
        auth users = root
        secrets file = /etc/rsyncd.secrets

        [ubuntu-backup]
        path = /volume1/homes/ubuntu-backup/www
        comment = Ubuntu Backup
        uid = ubuntu-backup
        gid = users
        read only = false
        auth users = ubuntu-backup
        secrets file = /etc/rsyncd.secrets

    The permissions on /volume1/homes/ubuntu-backup/www are ubuntu-backup:users 777. Here is the command I'm running:

        rsync -aHvhiPb /var/www/ [email protected]:./

    The result:

        sending incremental file list
        ERROR: module is read only
        rsync error: syntax or usage error (code 1) at main.c(1034) [Receiver=3.0.9]
        rsync: connection unexpectedly closed (9 bytes received so far) [sender]
        rsync error: error in rsync protocol data stream (code 12) at io.c(605) [sender=3.0.9]

    If I run this instead:

        rsync -aHvhiPb /var/www/ [email protected]

    it looks like it's sending files, with no errors, but I can't find anything on the NAS.
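
    A syntax detail may explain both symptoms: a single colon addresses a plain path over ssh (so the [ubuntu-backup] module stanza never applies), and no colon at all makes rsync treat the destination as a local directory name, which is why the second command "works" but nothing lands on the NAS. A sketch of the daemon-module form, with the host a placeholder:

        # '::' selects the rsyncd module named ubuntu-backup from the config above
        rsync -aHvhiPb /var/www/ ubuntu-backup@<nas-ip>::ubuntu-backup/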

  • 2010 cgi script failure

    - by Barry F
    Hi. I hope you can help; I'm just a beginner! I have listed a few extra details which may not be relevant. I upload CGI scripts into a local/personal directory on an Apache/2.2.10 server, using FTP95Pro in ASCII mode. The scripts execute correctly using perl on my web server in a terminal session, so my code has no fatal syntax errors. Web pages 'action' each CGI script at /cgi-bin/. There are symbolic links which link the system directory files to my local directory files. FollowSymLinks is enabled (I'm unsure how). Permissions are correct (755). This setup hasn't changed, apparently. The scripts have executed perfectly for years, up to 2010. But now, in 2010, I have replaced working scripts with new scripts/files with exactly the same text, filename and permissions; only the date (last modified) has changed. Yet now I receive a 500 Internal Server Error, and cannot determine why. My server administrator assumes I have code errors, but the code is unchanged since last year, and it runs fine (albeit with no arguments) on the web server console using perl myscript.cgi. Is there anything you can think of which may have changed? I'm suspicious of the new decade. I think the server swapped from Linux to a Windows OS last year, but my server administrator got it all working OK. Is there something unusual he may have missed, related to 2010? Thank you in advance
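
    The server's error log usually names the real cause of a CGI 500, and line endings plus the shebang are the classic culprits after an FTP re-upload. A sketch of first checks, with the log path an assumption:

        tail -n 20 /var/log/apache2/error.log   # the actual reason for the 500 is logged here
        head -1 myscript.cgi | cat -A           # shebang line must end in '$', not '^M$' (CRLF)
        perl -c myscript.cgi                    # syntax-check with the server's own perl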

  • How can I associate html/htm files with Chrome in Windows 7 64 bit?

    - by matt
    I want Chrome to open all .html files. It is currently set as my default browser; however, html files open in IE9. When I go to Control Panel\Programs\Default Programs\Set Associations, I see that .html and .htm files are associated with IE. When I choose to change the default program, I'm presented with a list of programs, but Chrome is not one of them. I browse to and select chrome.exe (C:\Users\Matt\AppData\Local\Google\Chrome\Application\chrome.exe), but it goes right back to IE. This is the first time I've seen anything like this. I'm running Windows 7 64-bit; I never had this problem on Windows 7 32-bit. Is this because Chrome by default installs into the user directory, not the Program Files directory? How can I fix these file associations?

    EDIT: It's not that things are reverting back to IE after associating them with Chrome. When I browse to Chrome in the file association window and select it, it doesn't seem to take: Chrome doesn't show up in the list of programs despite my pointing to the chrome.exe location. I really think this has something to do with the fact that it doesn't install into the Program Files directory.
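
    If the Default Programs UI refuses to register the per-user chrome.exe, the built-in command-line association tools are one avenue to test; a sketch from an elevated command prompt (ChromeHTML is the ProgID a normal Chrome install registers, so it's worth verifying it exists first):

        assoc .html=ChromeHTML
        assoc .htm=ChromeHTML
        ftype ChromeHTML="C:\Users\Matt\AppData\Local\Google\Chrome\Application\chrome.exe" -- "%1"

    Note that on Windows 7 Explorer's own per-user choice can still override these machine-wide mappings, so this is a diagnostic step as much as a fix.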

  • File Replication Service Errors

    - by ekamtaj
    Hey guys, we have a Windows 2003 R2 server, and a couple of the users are reporting that they cannot scan files to the Windows server; they are getting out-of-space errors. I took a look at the server and we have 600GB of free disk space on that partition. But while looking at the event log I found a lot of errors like (13552, 13555):

        The File Replication Service is unable to add this computer to the following replica set:
        "DOMAIN SYSTEM VOLUME (SYSVOL SHARE)"

        This could be caused by a number of problems such as:
        -- an invalid root path,
        -- a missing directory,
        -- a missing disk volume,
        -- a file system on the volume that does not support NTFS 5.0

        The information below may help to resolve the problem:
        Computer DNS name is "server.domain.local"
        Replica set member name is "server"
        Replica set root path is "c:\windows\sysvol\domain"
        Replica staging directory path is "c:\windows\sysvol\staging\domain"
        Replica working directory path is "c:\windows\ntfrs\jet"
        Windows error status code is
        FRS error status code is FrsErrorMismatchedJournalId

        Other event log messages may also help determine the problem. Correct the problem and the service will attempt to restart replication automatically at a later time. For more information, see Help and Support Center at http://go.microsoft.com/fwlink/events.asp.

  • Host Primary Domain from a subfolder

    - by TandemAdam
    I am having a problem making a subdirectory act as the public_html for my main domain, with a solution that works for that domain's subdirectories too. My hosting allows me to host multiple sites, which are all working great. I have set up a subfolder under my ~/public_html/ directory called /domains/, where I create a folder for each separate website, e.g.:

        public_html
            domains
                websiteone
                websitetwo
                websitethree
                ...

    This keeps my sites nice and tidy. The only issue was getting my "main domain" to fit into this system. It seems my main domain is somehow tied to my account (or to Apache, or something), so I can't change the "document root" of this domain. I can define the document roots for any other domains ("Addon Domains") that I add in cPanel, no problem, but the main domain is different. I was told to edit the .htaccess file to redirect the main domain to a subdirectory. This seemed to work great, and my site works fine on its home/index page. The problem I'm having is that if I navigate my browser to, say, the images folder (just for example) of my main site, like this:

        www.yourmaindomain.com/images/

    then it seems to ignore the redirect and shows the entire server directory in the URL, like this:

        www.yourmaindomain.com/domains/yourmaindomain/images/

    It still actually shows the correct "Index of /images" page with the list of all my images. Here is an example of the .htaccess file I am using:

        RewriteEngine on
        RewriteCond %{HTTP_HOST} ^(www.)?yourmaindomain.com$
        RewriteCond %{REQUEST_URI} !^/domains/yourmaindomain/
        RewriteCond %{REQUEST_FILENAME} !-f
        RewriteCond %{REQUEST_FILENAME} !-d
        RewriteRule ^(.*)$ /domains/yourmaindomain/$1
        RewriteCond %{HTTP_HOST} ^(www.)?yourmaindomain.com$
        RewriteRule ^(/)?$ domains/yourmaindomain/index.html [L]

    Does this .htaccess file look correct? I just need to make my main domain behave like an addon domain, with its subdirectories adhering to the redirect rules.
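
    One known interaction worth checking: after the internal rewrite, mod_dir sees the physical directory /domains/yourmaindomain/images and issues its trailing-slash redirect against that real path, which is exactly the URL leak described. A hedged rework that adds the slash before rewriting, escapes the dots, and anchors the catch-all with [L]:

        RewriteEngine on
        # add the trailing slash ourselves, before mod_dir can expose the real path
        RewriteCond %{HTTP_HOST} ^(www\.)?yourmaindomain\.com$ [NC]
        RewriteCond %{DOCUMENT_ROOT}/domains/yourmaindomain%{REQUEST_URI} -d
        RewriteRule ^(.*[^/])$ /$1/ [R=301,L]
        # then rewrite everything (files and directories) into the subfolder, internally
        RewriteCond %{HTTP_HOST} ^(www\.)?yourmaindomain\.com$ [NC]
        RewriteCond %{REQUEST_URI} !^/domains/yourmaindomain/
        RewriteRule ^(.*)$ /domains/yourmaindomain/$1 [L]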

  • Windows 7 library nightmare

    - by Lobuno
    In our Active Directory we deploy a policy to our clients where the personal directory (My Documents) is redirected to a file server of ours:

        \\server\share\username\Documents

    On older systems everything worked fine. In Windows 7, some users are experiencing the following symptoms:

        - The Documents library is EMPTY.
        - Where the Documents library should be shown in Explorer, an empty white icon is displayed with no caption.
        - Right-clicking the Documents library to edit the folders that are part of the library brings the dialog up; however, that dialog is unusable: no folder is present there, and clicking Add folder does nothing.
        - Deleting the library and auto-creating it doesn't solve the problem.
        - The shared directory can be accessed via UNC paths and can be mounted as a shared drive as well; the library is still broken.
        - The shared drives are on an indexed Windows 2008 server.
        - Using the Windows Library tool utility doesn't solve the problem.

    What can the cause of this problem be, and how can it be solved?

  • Anonymous FTP upload on CentOS 5.2

    - by Craig
    I need to allow users to upload files to an FTP server anonymously. They should not be able to see any other files, or download files. It is a CentOS 5.2 server, and I have a separate partition for the upload area (mounted at /ftp). I have tried to set up vsftpd and followed all the instructions/advice I could find, but when a user logs in and tries to transfer a file, it throws a "553 Could not create file." error. If I do a 'pwd' it shows the directory as "/" rather than the anon_root of "/ftp/anonymous", and any attempt to change the remote directory ends with "550 Failed to change directory.". I have a subdirectory "/ftp/anonymous/incoming" that is writable for the uploads. SELinux is in permissive mode, and I am running version 2.0.5 release 16.el5 of vsftpd. Here is the vsftpd.conf file:

        anonymous_enable=YES
        local_enable=YES
        write_enable=YES
        local_umask=002
        anon_umask=007
        file_open_mode=0666
        anon_upload_enable=YES
        anon_mkdir_write_enable=NO
        dirmessage_enable=YES
        xferlog_enable=YES
        connect_from_port_20=YES
        chown_uploads=YES
        chown_username=inftpadm
        xferlog_std_format=YES
        nopriv_user=nobody
        listen=YES
        pam_service_name=vsftpd
        userlist_enable=YES
        tcp_wrappers=YES
        ftp_username=inftpadm
        anon_root=/ftp/anonymous
        anon_other_write_enable=NO
        anon_mkdir_write_enable=NO
        anon_world_readable_only=NO
        dirlist_enable=YES

    Can anyone help?
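
    Two vsftpd behaviors are relevant here: anonymous sessions are chrooted to anon_root, so pwd reporting "/" is expected and harmless, and uploads go relative to that chroot, so the client must cd into incoming (seen from inside as /incoming) before storing. A sketch of the commonly recommended permission layout, using the inftpadm account from the config above:

        # the anonymous root itself should not be writable by the ftp user
        chown root:root /ftp/anonymous
        chmod 555 /ftp/anonymous
        # the drop directory takes the uploads: writable and traversable, not listable
        chown inftpadm /ftp/anonymous/incoming
        chmod 733 /ftp/anonymous/incoming

    A "550 Failed to change directory" on cd incoming would then point at a missing execute bit somewhere along that path.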

  • ssh_exchange_identification: Connection closed by remote host

    - by rick
    Firstly, I know that this question has been asked a million times, and I have read everything I can find and still cannot fix the problem. I am encountering this issue when sshing from my Mac to my Ubuntu server, on a fresh install of Ubuntu (I reinstalled because of this issue). I have SSH mapped to port 7070 because my ISP is blocking 22. On the client:

        bash: ssh -p 7070 -v [email protected]
        debug1: Reading configuration data /etc/ssh_config
        debug1: Connecting to address.org port 7070.
        debug1: Connection established.
        debug1: identity file /home/me/.ssh/identity type -1
        debug1: identity file /home/me/.ssh/id_rsa type 1
        debug1: identity file /home/me/.ssh/id_dsa type -1
        ssh_exchange_identification: Connection closed by remote host

    Here's what I have done to try to resolve the issue:

        1. Made sure my MaxStartups is OK:
           bash: grep MaxStartups /etc/ssh/sshd_config
           #MaxStartups 10:30:60
        2. Made sure hosts.deny is clear of denials.
        3. Made sure hosts.allow has my client IP.
        4. Cleared out known_hosts on the client.
        5. Changed ownership of /var/run to root.
        6. Made sure etc/run/ssh is there.
        7. Made sure /var/empty exists.
        8. Reinstalled openssh-server.
        9. Reinstalled Ubuntu.

    When I run telnet localhost, I get this:

        telnet localhost
        Trying ::1...
        Trying 127.0.0.1...
        telnet: Unable to connect to remote host: Connection refused

    When I run /usr/sbin/sshd -t:

        Could not load host key: /etc/ssh/ssh_host_rsa_key
        Could not load host key: /etc/ssh/ssh_host_dsa_key

    When I regenerate the keys with

        ssh-keygen -t rsa -f /etc/ssh/ssh_host_rsa_key
        ssh-keygen -t dsa -f /etc/ssh/ssh_host_dsa_key

    I get the same error. I am pretty sure this is the issue. Can anyone help?
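
    Missing or unreadable host keys will indeed stop sshd from starting, which matches the telnet refusal. A sketch of regenerating them with the ownership and permissions sshd expects (run as root; -N '' sets an empty passphrase, and the service name is Debian/Ubuntu's):

        sudo ssh-keygen -t rsa -N '' -f /etc/ssh/ssh_host_rsa_key
        sudo ssh-keygen -t dsa -N '' -f /etc/ssh/ssh_host_dsa_key
        sudo chmod 600 /etc/ssh/ssh_host_*_key
        sudo /usr/sbin/sshd -t        # should now print nothing at all
        sudo service ssh restart

    If the original ssh-keygen runs were done without root, they would have failed to write under /etc/ssh, which would explain getting the same error afterwards.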
