Search Results

Search found 3168 results on 127 pages for 'directories'.


  • CLI package to replace Plesk

    - by dotancohen
    Another programmer and I are tasked with maintaining a few webservers. I prefer CLI tools; she prefers Plesk. However, I am adamant about not installing Plesk, for quite a few reasons. I have written a small Python script for adding new domains, and now I am about to add the ability to configure email addresses while abstracting the details of Postfix from her. Before I go that route, I have googled to see if anything already exists, and am surprised that I have come up with nothing! Are there any mature, stable "control panels" or "server admin" tools like Plesk, but which are accessed via the CLI over SSH? I am looking for the following features:

        - Add / remove / configure domains served by Apache.
        - Add / remove / configure email boxes and mail groups.
        - Add / remove MySQL databases and users, and grant users access to databases.
        - Provide basic monitoring of "server health": memory usage, disk usage, CPU usage, bandwidth usage.
        - Possibly set up SFTP accounts so that only specific FTP users can access specific /var/www/someSite/ directories.

    Note that I was unsure whether this question is off-topic for ServerFault. As per the ServerFault about page (there seems to be no FAQ any more), it meets two of the "ask about" criteria and none of the "don't ask about" ones, with the possible exception of being opinion-based. Therefore, to keep things on-topic, I would like to know about the available applications, but we should be objective and less opinionated. Thank you!
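
    In case no such tool turns up, a thin Python wrapper around each daemon's own CLI is one way to keep the whole workflow on SSH. Below is a minimal sketch of the mailbox half of such a wrapper; the script name, the virtual-mailbox map path, and its line format are assumptions about a typical Postfix virtual-hosting setup, not an existing package:

        #!/usr/bin/env python
        # admin.py - hypothetical CLI admin wrapper (a sketch, not a real tool)
        import argparse
        import subprocess

        VMAILBOX = "/etc/postfix/vmailbox"  # assumed virtual_mailbox_maps file

        def add_mailbox(address):
            """Append a mailbox to the Postfix virtual map and rebuild it."""
            user, domain = address.split("@", 1)
            with open(VMAILBOX, "a") as f:
                f.write("%s %s/%s/\n" % (address, domain, user))
            subprocess.check_call(["postmap", VMAILBOX])
            subprocess.check_call(["postfix", "reload"])

        if __name__ == "__main__":
            parser = argparse.ArgumentParser(description="Tiny CLI server admin")
            sub = parser.add_subparsers(dest="cmd")
            p = sub.add_parser("add-mailbox")
            p.add_argument("address")
            args = parser.parse_args()
            if args.cmd == "add-mailbox":
                add_mailbox(args.address)

    Usage would then be something like "admin.py add-mailbox bob@example.com"; the same pattern extends to Apache vhosts and MySQL grants.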

  • Running python script in incrontab in Debian

    - by WilliamMayor
    I have a user, dropbox, that runs the Dropbox daemon. I want to monitor the directories in the Dropbox directory for new files and run a Python script when they appear. I have the Python script, which I know works:

        $ /home/dropbox/monitor.py
        Trying to get lock
        Got lock, waiting for Dropbox to be idle
        Dropbox idle
        Finding instructions
        Done, releasing lock

    I have an incrontab entry:

        $ incrontab -l
        /home/dropbox/Dropbox IN_CREATE /home/dropbox/monitor.py | logger
        /home/dropbox/test IN_CREATE logger "$$ $@ $# $% $&"

    When I add a file to the test directory I see the output in /var/log/syslog:

        $ touch /home/dropbox/test/a
        $ tail /var/log/syslog
        ...
        Nov 9 10:18:27 vps incrond[1354]: (dropbox) CMD (logger "$ /home/dropbox/test a IN_CREATE 256")
        Nov 9 10:18:27 vps logger: "$ /home/dropbox/test a IN_CREATE 256"
        ...

    However, when I add a file to the Dropbox directory the command doesn't seem to run:

        $ touch /home/dropbox/Dropbox/a
        $ tail /var/log/syslog
        ...
        Nov 9 10:24:16 vps incrond[1354]: (dropbox) CMD (/home/dropbox/monitor.py | logger)
        ...

    So the incron daemon notices the new file, and the correct command is found, but it never actually gets executed, and there are no error messages. It seems as if incrontab can only be used to run the simplest of commands. This might be a similar question to "Incrond running but not executing commands CentOS 6.4", but I think I don't have environment problems; every path is absolute. I tried changing .../monitor.py to /usr/bin/python2.7 .../monitor.py just in case, but it didn't make any difference.
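
    One plausible explanation (an assumption, not verified against the incron source): incrond executes the command directly rather than through /bin/sh, so shell metacharacters like the pipe are passed to monitor.py as arguments instead of building a pipeline. A sketch of the usual workaround, moving the pipeline into a wrapper script:

        #!/bin/sh
        # /home/dropbox/monitor-wrapper.sh
        # Run the real script through a shell so the pipe is honoured,
        # and capture stderr as well so failures show up in syslog.
        /home/dropbox/monitor.py 2>&1 | logger -t dropbox-monitor

    Make it executable (chmod +x) and point the incrontab entry at it:

        /home/dropbox/Dropbox IN_CREATE /home/dropbox/monitor-wrapper.sh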

  • Apache directory authorization bug (clicking cancel gives access to partial content)

    - by s4uadmin
    I have a minor problem (the site is not high priority), but still a very interesting one. I have an Apache root domain in which other sites live ("/var/www/"), and foo.example.com is served from "/var/www/foo-example" (a WordPress site). The problem is that when you go to foo.example.com you are prompted to enter credentials; if you hit cancel, it gives you the access denied page, as expected. But when you go to the server's direct IP (which serves the default index page) and hit cancel when prompted for credentials, it just keeps giving you the login screen, and after pressing cancel a few more times it serves (perhaps from cache) the bare HTML part of the page. How do I prevent this from happening? Perhaps this is a bug. Even if I blocked access to the root directory, going to ip/foo-example would still do this, and I want to keep all the directories within the www directory, or at least all in the same place. Thanks. PS: here is my configuration:

        <VirtualHost *:80>
            DocumentRoot /var/www/wp-xxxxxxx/
            ServerName beta.xxxxxxxxx.nl
            <Directory "/var/www/wp-xxxxxxxxx/">
                Options +Indexes
                AuthName "xxxxxxxx Beta Site"
                AuthType Basic
                require valid-user
                Satisfy all
                AuthBasicProvider file
                AuthUserFile /var/www/wp-xxxxxxx/.htxxxxxxxxx
                order deny,allow
                allow from all
            </Directory>
            ServerAdmin [email protected]
            ServerAlias beta.xxxxxxx.nl
        </VirtualHost>

  • arrays in puppet

    - by paweloque
    I'm wondering how to solve the following Puppet problem: I want to create several files based on an array of strings. The complication is that I want to create multiple directories containing the files:

        dir1/
            fileA
            fileB
        dir2/
            fileA
            fileB
            fileC

    The problem is that file resource titles must be unique. So if I keep the file names in an array, I need to iterate over the array in a custom way to be able to suffix the file names with the directory name:

        $file_names = ['fileA', 'fileB']
        $file_names_2 = [$file_names, 'fileC']

        file { 'dir1': ensure => directory }
        file { 'dir2': ensure => directory }

        file { $file_names:
            path   => 'dir1',
            ensure => present,
        }

        file { $file_names_2:
            path   => 'dir2',
            ensure => present,
        }

    This won't work, because the file resource titles clash. So I need to append e.g. the directory name to the file title; however, this causes the array of files to be concatenated into one string rather than treated as multiple files... arghh:

        file { "${file_names}-dir1":
            path   => 'dir1',
            ensure => present,
        }

        file { "${file_names_2}-dir2":
            path   => 'dir2',
            ensure => present,
        }

    How can I solve this problem without repeating the file resource itself? Thanks.
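
    A sketch of one way out, assuming the built-in regsubst function (which, per its documentation, applies the substitution to every element when given an array): build per-directory arrays of full paths, and let those paths be the resource titles, which keeps them unique without repeating the file resource. The /srv prefix is made up for the example, and the second array is written out flat since nested arrays may not survive regsubst:

        $file_names   = ['fileA', 'fileB']
        $file_names_2 = ['fileA', 'fileB', 'fileC']

        # Prefix every element with its directory (absolute paths,
        # since the file type requires them):
        $dir1_files = regsubst($file_names,   '^', '/srv/dir1/')
        $dir2_files = regsubst($file_names_2, '^', '/srv/dir2/')

        file { ['/srv/dir1', '/srv/dir2']: ensure => directory }

        # Titles are now '/srv/dir1/fileA', '/srv/dir2/fileC', etc., so
        # they no longer clash, and each title doubles as the path:
        file { $dir1_files: ensure => present }
        file { $dir2_files: ensure => present }

    A defined type wrapping the file resource is the other common approach.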

  • tab complete not working for vim in particular directory - ubuntu 12.04

    - by user1160958
    I am working on a Ruby on Rails app. All of a sudden, command-line tab completion stopped working for vim, though only for files, and only for the vim command (it works for other commands: ls, rm, etc.). After further investigation, this only occurs in a specific directory: the home directory of my Rails app. If I go into a subdirectory of the app, or any other directory on my machine, tab completion works again, and it works in the root directory of any other Rails app. I also tried renaming the directory, and copying the contents of the directory to another directory, and that did not work either. When I type vim /path/to/file/ and then tab to see a list of files in that directory, only other directories show, not files. I am using Ubuntu 12.04. Also, I tried re-installing vim, rebooting, and removing ~/.viminfo (there was no vimrc file); that didn't work. Any help would be appreciated!
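
    One thing worth ruling out (a guess, since the breakage is per-directory and vim-specific): Ubuntu ships bash programmable completion, and its rule for vim filters candidates by extension rather than offering every file, so an odd mix of names in one directory can leave nothing but subdirectories to offer. A quick test from that directory:

        # Show the completion rule bash currently applies to vim, if any:
        complete -p vim

        # Drop the rule for this shell session; tab completion then falls
        # back to plain filename completion:
        complete -r vim

    If completion behaves after the second command, the culprit is the completion spec, not vim or the directory itself.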

  • Overriding HOMEDRIVE and HOMEPATH as a Windows 7 user

    - by MikeC
    My employer has an Active Directory group policy which sets my Windows 7 laptop's HOMEDRIVE to "M:" (a mapped network drive) and my HOMEPATH to "\". Since I have read-only permissions for the root of that shared drive, I cannot create files or directories in my Windows home directory, and my attempts to work with the IT department have been unsuccessful. Is there a way for me to globally change these environment variables at boot or login time? I need all applications to use alternate values (such as "C:" and "\Users\myname"); I have some installed utilities (like gvim and others) that store preference files in the user's home directory. IMPORTANT: changing these variables under System Properties > Environment Variables does not work. I have tried setting them as both User and System variables (including a reboot); typing SET HOME in a DOS window clearly shows that my settings are ignored. Also, using "Start in" in a Windows shortcut will not solve this, as I need things like Explorer context menu items (like "Edit with Vim") to operate correctly. I do have admin rights on this company laptop, but I am not a Win7 guru. Back in the day, a boot script would have solved this in a minute. Is it even possible today? Thanks.
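
    A partial workaround worth trying (an assumption about the tools involved, not a fix for the group policy itself): HOMEDRIVE and HOMEPATH are reasserted by the logon process, but many ported Unix utilities, Vim included, consult a HOME variable first, and group policy does not normally set one. Defining it once per user may be enough for gvim and similar programs:

        rem Persist HOME for the current user (takes effect in new processes):
        setx HOME "C:\Users\myname"

        rem Verify in a new command prompt:
        echo %HOME%

    This leaves HOMEDRIVE/HOMEPATH untouched, so anything that reads those directly will still see M:\.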

  • How to move or delete files from a folder containing 2 million files on an NTFS drive?

    - by Beau
    The issue is that any modification to the directory locks up Explorer indefinitely, though Samba access to other directories still works. I've tried moving files locally and over Samba. Even enumerating the directory to get the list of files locks up the computer indefinitely; I tried using Python's win32file.FindFilesIterator to iterate over the files, but that also hangs. My idea was to move each file to a different directory (in a directory above the one we're dealing with) based on its timestamp, so that we'd have at most a thousand or so files in each directory. But since I can't even enumerate the files, that's been a non-starter. If I have to give up and just nuke the directory, I'm willing to do that, but a standard delete also hangs indefinitely. I have set these two parameters to increase speed, and they did not help either:

        R:\>fsutil behavior query disablelastaccess
        disablelastaccess = 1
        R:\>fsutil behavior query disable8dot3
        disable8dot3 = 1

    These are all sequential images, so they would have run into the known slowdown with 8.3 filenames, whereby many similarly named files in one directory can take a long time to compute short names. From what I understand, the 8.3 data already stored in the file system remains even after disable8dot3 is enabled, so it may still be contributing to the problem. Any ideas?
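
    Since Explorer tries to build (and sort) the entire listing before showing or touching anything, the command line is usually the way out of directories this size; the commands below stream through the entries instead. A sketch, with R:\bigdir standing in for the real path, assuming everything in it can go:

        rem Delete the contents without prompting, then the directory itself:
        del /f /q R:\bigdir\*.*
        rmdir /s /q R:\bigdir

        rem Alternative that often copes better with huge directories:
        rem mirror an empty directory over it; extras are deleted as it walks.
        mkdir C:\empty
        robocopy C:\empty R:\bigdir /MIR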

  • Serve a specific set of error pages for different subdirectories

    - by navitronic
    I am currently trying to set up two different sets of error documents for separate folders within a website. I have two folders within the root of the site:

        demo/
        live/

    Any requests that return 404s or 403s within the demo folder need to load one set of pages for the Apache ErrorDocuments, e.g.

        ErrorDocument 404 /statuses/demo-404.html
        ErrorDocument 403 /statuses/demo-403.html

    and the live folder needs to go to similarly named files:

        ErrorDocument 404 /statuses/live-404.html
        ErrorDocument 403 /statuses/live-403.html

    So far I have tried placing an .htaccess file in both directories with the ErrorDocument directives pointing to the specific files. The 404 works fine and references the correct page; however, the 403s do not work, and revert to the server default when trying to access forbidden folders within the demo directory. The logs show the following:

        [Wed Jun 16 04:47:44 2010] [crit] [client 115.64.131.144] (13)Permission denied: /home/abstract/public_html/demo/xxx/.htaccess pcfg_openfile: unable to check htaccess file, ensure it is readable

    Is this correct? Would Apache revert to the default because it is trying to look for the .htaccess in a folder it doesn't have permission to read? Why wouldn't it work its way back through the folder tree? Can I make it do this?
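
    The log line suggests Apache is being refused at the filesystem level while probing for demo/xxx/.htaccess, and when it cannot even check the per-directory config it bails out with the default error page rather than walking back up the tree. A sketch of one way to keep the custom 403s, assuming the 403 is meant to be Apache's decision rather than the filesystem's: leave the directory traversable by the Apache user and express the denial in the .htaccess instead:

        # Let Apache read the directory (and therefore its .htaccess):
        chmod o+rx /home/abstract/public_html/demo/xxx

    Then, in demo/xxx/.htaccess (Apache 2.2 syntax):

        Order deny,allow
        Deny from all

    The request is then refused by Apache itself, after the ErrorDocument directives have been merged, so the demo-403.html page applies.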

  • SFTP, Chroot problems on Redhat

    - by Curtis_w
    I'm having problems setting up SFTP with a ChrootDirectory. I've done an equivalent setup on other distros, but for some reason I cannot get it to work on a Red Hat AMI. The changes to my sshd_config file are:

        Subsystem sftp internal-sftp
        Match Group ftponly
            PasswordAuthentication yes
            X11Forwarding no
            ChrootDirectory %h
            ForceCommand internal-sftp
            AllowTcpForwarding no

    The affected users' home directories are at /home/user, owned by root. After connecting with a user in the ftponly group, I'm dropped into / without permissions for anything, and am unable to do anything:

        sftp bob@localhost
        Connecting to localhost...
        bob@localhost's password:
        sftp> pwd
        Remote working directory: /

    I can connect normally with users not in the ftponly group. OpenSSH version 5.3. I've experimented with different permissions, as well as having users own their own home directories (which gives a "Write failed: Broken pipe" error), and so far nothing has worked. I'm sure it's a permissions error, or something equally trivial, but at this point my eyes are beginning to glaze over, and any help would be greatly appreciated.

    EDIT: James and Madhatter, thanks for clarifying. I was confused by chroot dropping me in /; I just didn't think it through properly. I've added the appropriate directories and permissions to get read access. One other key part was enabling write access to chrooted homes:

        setsebool -P ssh_chroot_rw_homedirs on

    I think I'm all set now. Thanks for the help.

  • xauth, ssh and missing home directory

    - by flolo
    We have several servers, and normally everything works fine, except now: we are getting new air conditioning installed. This takes 36 hours, and for that time almost all servers were shut down; only two servers remain running for the most important tasks (i.e. accepting incoming email, serving some important websites, the login server). Everybody was informed that if they needed data from their home directories, they should fetch it before the shutdown. Long story short: someone realized that he needs to run a certain program on one of the servers. No problem: he can log in remotely to our login server and run the program there without a home directory (the binaries are local, and the necessary information can be copied to /tmp). That works like a charm, until... the user needs to run a GUI program. I can find no easy way to get it running. Usually ssh -Y honk@loginserver is enough, but now the home directory is missing and ssh is not able to copy the cookies into ~/.Xauthority (as the file server with the home directories is down). Paranoid as all sysadmins are, all X servers listen only locally, not on TCP ports, so no remote X connection is possible, and the SSH configuration is watertight, i.e. there is no way to set environment variables. My problem is that the MIT proxy cookie generated by ssh gets lost because ~/.Xauthority doesn't exist. If I could retrieve it somehow, I could re-enter it into an .Xauthority file in /tmp. The only other option that came to my mind (besides changing the config) is making a tunnel (netcat, or better ssh) from the remote host to the login server and copying the cookie manually (not sure the TCP-to-Unix-domain-socket forwarding works as expected). Any good suggestions (for the future; our servers are already back up)?
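
    For next time, one low-tech workaround (assuming root on the login server and that a throwaway home is acceptable for the outage window): give the account a temporary home directory on local disk, so sshd's xauth call has somewhere to write the forwarded cookie:

        mkdir /tmp/honk-home
        chown honk /tmp/honk-home
        usermod -d /tmp/honk-home honk   # point the account at the temp home

        # The next "ssh -Y honk@loginserver" can then create
        # /tmp/honk-home/.Xauthority as usual. Once the file server is
        # back, restore the real home:
        usermod -d /home/honk honk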

  • Apache Name-based Virtual Hosts - configuring httpd.conf file

    - by Dave
    Hi there. I am running a web app on Tomcat at the following location on my server:

        /var/tomcat/webapps/SoccerApp

    I am looking to update the httpd.conf file with the following virtual host:

        <VirtualHost *:80>
            DocumentRoot /var/tomcat/webapps/SoccerApp/MyTeam
            ServerName www.mysoccerapp.com
        </VirtualHost>

    This gives me a 404 error, as the directory MyTeam does not exist. However, my application behaves in such a way that it uses this URL directory as the name of the soccer team for which to display data, so it will never be a physical folder on the server. Nonetheless, I would like www.mysoccerapp.com to resolve to webapps/SoccerApp/MyTeam, even though the directory isn't there. Does this make any sense? Any ideas on how to get this working? At the end of the day, I want to do the following:

        www.teamone.com -> runs /webapps/SoccerApp/TeamOne
        www.teamtwo.com -> runs /webapps/SoccerApp/TeamTwo

    ...where TeamOne and TeamTwo are not physical directories, but merely processed by my SoccerApp application as the current soccer team to display data for. Many many thanks! Dave.
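
    One plausible way to get this behaviour, sketched below: point each VirtualHost at the application itself and let mod_rewrite prepend the team path internally, so no physical MyTeam directory is ever needed. The [PT] flag hands the rewritten URI on to whatever connector (mod_jk, mod_proxy) forwards to Tomcat; the connector directives are omitted, and the target path assumes the app resolves /SoccerApp/TeamOne itself:

        <VirtualHost *:80>
            ServerName www.teamone.com
            RewriteEngine On
            # Map every request onto the TeamOne path of the app, internally:
            RewriteRule ^/(.*)$ /SoccerApp/TeamOne/$1 [PT]
        </VirtualHost>

    A second VirtualHost for www.teamtwo.com would do the same with TeamTwo.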

  • Mac mini simple customized, Mac mini server or other?

    - by microspino
    I'm facing a big IT choice for my little office and I need some advice. We have 5 users, 1 superuser, an HP 500 DesignJet plotter, 4 other laser printers, and 1 HP fax/print/scan/copy machine. All the clients are XP SP3 boxes. We would like to:

        - centralize and share 90 GB of files using Dropbox (this way we get LAN sync of local working directories, plus internet backup, plus access to our files wherever we are)
        - centralize our plotter, printers and fax machine
        - back up all the workstations
        - share Outlook calendars and tasks
        - run 24x7 while saving some energy

    Of course this setup is just the first step towards more serious and creative network management of our office, so we are open to new ideas. The budget varies from 400€ to 900€. We are not tech gurus, but at least one of us is a power user close to becoming a geek. I've read some articles on macminicolo about the Mac mini, either the normal one or with Snow Leopard Server, and I've heard about Windows Home Server on the Lifehacker website, but I'm in a sort of analysis paralysis. Can you help me?

  • How can I prevent Apache from asking for credentials on non SSL site

    - by Scott
    I have a web server with several virtual hosts, and some of those hosts have an associated SSL site. I have a DirectoryMatch directive in my main config file which requires basic authentication in any directory with "_secured_" as part of the directory path. On sites that have an SSL site, I have a rewrite rule (located in the non-SSL config for that site) that redirects to the SSL site, same URI. The problem is that the http (port 80) site asks for credentials first, and then the https (port 443) site asks for credentials again. I would like to prevent the http site from asking, and thus avoid the potential for someone entering credentials and having them sent in clear text. I know I could move the DirectoryMatch down to the specific site and just put the auth statement in the SSL config, but that would introduce the possibility of forgetting to protect critical directories when creating new sites. Here are the pertinent declarations. In httpd.conf (all sites):

        <DirectoryMatch "_secured_">
            AuthType Basic
            AuthName "+ + + Restrcted Area on Server + + +"
            AuthUserFile /home/websvr/.auth/std.auth
            Require valid-user
        </DirectoryMatch>

    In site.conf (specific to the individual site):

        <DirectoryMatch "_secured_">
            RewriteEngine On
            RewriteRule .*(_secured_.*) https://site.com/$1
        </DirectoryMatch>

    Is there a way to leave DirectoryMatch in the main config file and prevent the request for authorization on the http site? I'm running Apache 2 on Ubuntu 10.04 Server from the default package, with AllowOverride set to None; I prefer to handle things in the config files instead of .htaccess.
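
    One approach that may square the circle (based on mod_rewrite's processing order: rewrites defined at server or virtual-host level run during URL translation, before the authentication phase, whereas per-directory rewrites run after it): keep the DirectoryMatch auth block global, but move the redirect out of DirectoryMatch and into the port-80 VirtualHost, so the browser is bounced to https before Apache ever asks for credentials. A sketch:

        <VirtualHost *:80>
            ServerName site.com
            RewriteEngine On
            # Fires in the URL-translation phase, ahead of any auth prompt:
            RewriteRule (.*_secured_.*) https://site.com$1 [R=301,L]
        </VirtualHost>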

  • How to fix IE starting when I try to start Opera?

    - by Scott Leis
    I have a problem that started today: trying to run any of the three versions of Opera I have installed causes Internet Explorer to start instead. The Opera versions installed are 9.64, 10.10, and 10.63, and my OS is Windows Vista with the latest critical/important updates. The behaviour is the same whether I double-click a shortcut to Opera, double-click Opera.exe in one of the install directories, or run Opera.exe with the full path from Start > Run. The only workaround I've found is to right-click Opera.exe or a shortcut and click "Run as administrator"; this starts Opera, and it appears to work normally. Opera seems to be the only program affected. I've checked Firefox and a few other (non-browser) programs, and they work normally. When IE starts instead of Opera, two instances of iexplore.exe are started, and they both crash before I can do anything else. IE also crashes if I try to start it from its own shortcut, but I don't know if that only started today, since I hadn't used IE for a few weeks. Does anyone know what might cause this and how to fix it? It's possible my PC has a virus, but BitDefender (for which I have automatic updates enabled) hasn't detected anything.
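
    Two classic redirection points that malware uses are cheap to inspect from a command prompt; this is a guess at the cause, not a diagnosis:

        rem 1. A "Debugger" value for opera.exe, which makes Windows launch
        rem    the named program instead of the one requested:
        reg query "HKLM\SOFTWARE\Microsoft\Windows NT\CurrentVersion\Image File Execution Options\opera.exe" /v Debugger

        rem 2. A per-user .exe association hijack (would fit with "Run as
        rem    administrator" working, since elevation uses a different context):
        reg query "HKCU\Software\Classes\exefile\shell\open\command"

        rem A Debugger value, or an open command other than "%1" %*, is
        rem suspect; reg delete / reg add restores the defaults.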

  • Files deleted. What could have happened?

    - by jjfine
    I'm having a weird issue today. I was writing and testing some simple CGI scripts this morning when I realized that I couldn't run them from one of the other computers on the (Windows) network, so I had my network admin come in and take a look at what was going on. A few minutes later, a co-worker came in and told me that a bunch of files he was working with, as well as a bunch of others (all *.c files) on the network drive, had been deleted. He also noticed some strange apache_dump_500.log.txt files in the same directories where the files were deleted. The apache_dump_500.log.txt files all look like this:

        REDIRECT_HTTP_ACCEPT=*/*, image/gif, image/x-xbitmap, image/jpeg
        REDIRECT_HTTP_USER_AGENT=Mozilla/1.1b2 (X11; I; HP-UX A.09.05 9000/712)
        REDIRECT_PATH=.:/bin:/usr/local/bin:/etc
        REDIRECT_QUERY_STRING=
        REDIRECT_REMOTE_ADDR=<my computer's local ip>
        REDIRECT_REMOTE_HOST=
        REDIRECT_SERVER_NAME=<my computer's domain url>
        REDIRECT_SERVER_PORT=
        REDIRECT_SERVER_SOFTWARE=
        REDIRECT_URL=/cgi-bin/trojan.py

    I looked, and I don't have any trojan.py in my cgi-bin folder, all my Apache logs are clean, and the Windows event log seems to have no traces of what happened either. My httpd.conf: http://pastebin.com/Yny2Yh8v. I think we've got some kind of virus that added this trojan.py file to my cgi-bin, ran the script, and then deleted the script and any traces from the logs. Is this a thing that happens? Any ideas whatsoever would be much appreciated!
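
    Before rebuilding anything, it may be worth sweeping for other traces while they still exist. A sketch, assuming a Unix-like shell is available on the web server; the log and cgi-bin paths are placeholders to be adjusted to the ones in the pastebinned httpd.conf:

        # Any mention of the script across the usual log directories:
        grep -ri "trojan.py" /var/log/apache2/ /var/log/httpd/ 2>/dev/null

        # Files changed in the cgi-bin within the last two hours:
        find /path/to/cgi-bin -mmin -120 -ls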

  • apt-mirror does not mirror the i18n directory

    - by Fred
    I need to set up a local Ubuntu mirror so the whole network doesn't need to hit remote servers in order to update and install new packages. Following a brief tutorial found here, I managed to get a server up and running that correctly mirrors packages from the main and restricted categories. However, when I call apt-get update on a client, I get a couple of errors such as:

        Ign http://192.168.1.18 karmic/main Translation-fr
        Ign http://192.168.1.18 karmic/restricted Translation-fr

    Checking back on the server, I see that apt-mirror only took the binary-amd64 directory of the mirror, and didn't take the i18n directory that would provide Translation-fr. The manpage for apt-mirror doesn't say anything about i18n, and Google is of no help either. How do I properly mirror i18n? My current mirror.list file is as follows:

        ############# config ##################
        #
        # set base_path /var/spool/apt-mirror
        #
        # if you change the base path you must create the directories below with write privileges
        #
        # set mirror_path $base_path/mirror
        # set skel_path $base_path/skel
        # set var_path $base_path/var
        # set cleanscript $var_path/clean.sh
        # set defaultarch <running host architecture>
        # set postmirror_script $var_path/postmirror.sh
        set run_postmirror 0
        set nthreads 20
        set _tilde 0
        #
        ############# end config ##############

        deb http://mirror.cc.columbia.edu/pub/linux/ubuntu/archive karmic main restricted
        deb http://mirror.cc.columbia.edu/pub/linux/ubuntu/archive karmic-updates main restricted

        clean http://mirror.cc.columbia.edu/pub/linux/ubuntu/archive
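
    If apt-mirror itself has no i18n support, one workaround is to fetch the Translation indexes in the postmirror hook (enable it with "set run_postmirror 1"). A sketch, assuming wget and the layout produced by the mirror.list above; the --cut-dirs count matches this particular upstream URL and would need adjusting for another mirror:

        #!/bin/sh
        # /var/spool/apt-mirror/var/postmirror.sh
        # Pull the i18n/ directories apt-mirror skipped into the same tree.
        BASE=/var/spool/apt-mirror/mirror/mirror.cc.columbia.edu/pub/linux/ubuntu/archive
        for dist in karmic karmic-updates; do
            for comp in main restricted; do
                wget -q -r -np -nH --cut-dirs=4 -P "$BASE" \
                    "http://mirror.cc.columbia.edu/pub/linux/ubuntu/archive/dists/$dist/$comp/i18n/"
            done
        done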

  • Puppet - Is it possible to use a global var to pull in a template with the same name?

    - by Mike Purcell
    I'm new to Puppet. As such, I am trying to work out the best way to set up my manifests so that they make sense. Following the DRY (don't repeat yourself) principle, I am trying to load common directives in one template, then load in environment-specific directives from a file matching the environment. Basically like this:

        # nodes.pp
        node base_dev {
            $service_env = 'dev'
        }

        node 'service1.ownij.lan' inherits base_dev {
            include global_env_specific
        }

        class global_env_specific {
            include shell::bash
        }

        # modules/shell/bash.pp
        class shell::bash inherits shell {
            notify{"Service env: ${service_env}": }
            file { '/etc/profile.d/custom_test.sh':
                content => template('_global/prefix.erb', 'shell/bash/global.erb', 'shell/bash/$service_env.erb'),
                mode    => 644,
            }
        }

    But every time I run puppet agent --test, Puppet complains that it can't find the shell/bash/$service_env.erb file, even though I double-checked that it exists. I know the variable is accessible, because the notify statement outputs the expected value, so I suspect I am doing something that is not allowed. I know I could have a single template.erb and pass variables to the template, which would work in this case because the custom_test.sh file is small and doesn't change much across environments, but for more complex configs (httpd, solr, etc.) I'd prefer to access environment-specific files. I am also aware that I can specify environment-specific module paths, but I'd prefer to handle this behaviour at the template level instead of having several closely named directories. Thanks.
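
    One likely culprit, sketched below: in Puppet, single-quoted strings are literal, so 'shell/bash/$service_env.erb' asks for a template whose name contains a literal dollar sign; only double-quoted strings interpolate variables. If that is the problem, the fix is one pair of quotes:

        file { '/etc/profile.d/custom_test.sh':
            content => template('_global/prefix.erb',
                                'shell/bash/global.erb',
                                "shell/bash/${service_env}.erb"),  # double quotes
            mode    => 644,
        }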

  • How to set up a PRIVATE vimwiki on Dropbox.com

    - by Zongheng Yang
    Hi everyone. I assume those who are reading this page know what vimwiki and dropbox.com are and what they are for, so I'll go directly to my confusion. The common way of setting up a PRIVATE vimwiki on Dropbox is simply to put your vimwiki directories under the Dropbox folder (but not under Dropbox/Public/, because that would be PUBLIC). Dropbox allows directly viewing HTML with a dropbox.com/* URL: for example, an index.html can be accessed by the URL https://dl-web.dropbox.com/get/Wiki/html/index.html?w=bfead71a, with a specified string, ?w=bfead71a, appended after the file name. Hence, if inside index.html there is a reference to A.html, which is located in the same folder as index.html, A.html has to be accessed using some URL like https://dl-web.dropbox.com/get/Wiki/html/A.html?w=SPECIFIED_STRING. But it is seemingly impossible to hack vimwiki to correct the hrefs in the converted HTML files in this way. Is there some approach that can resolve this problem? I hope I've made myself clear; if you have any questions, please ask me for further explanation. Thank you!

  • Users removing Administrator from files/folders permissions

    - by user64204
    We're running Windows Server 2003 R2 with Active Directory, and are having an issue with network shares whereby users, in an attempt to secure their documents, remove everybody (including the Administrator account) from their files'/folders' permissions. Since the Administrator no longer has read permission on them, we can't even back the files up manually, as we get permission errors. One solution we've found is to change the owner of the files and directories to the Administrator account; we can then change the permissions as we wish. The problem is that this has to be done manually, so it can't really be applied to an entire share. Another solution we've tried is to use cacls, as follows:

        cacls d:\path\to\share /C /T /E /G Administrator:F

    The problem with this is that we still get an ACCESS DENIED error on files/folders from which Administrator was removed.

    Q1: Is there a way to restore at least read access to all files/folders to the Administrator account in a recursive fashion? That would be for the short term. For the long term, we're looking for a way to prevent users from removing Administrator from files/folders permissions. Since we're going to migrate to Windows Server 2008 R2 soon, we could wait until we've migrated to implement such a solution if need be.

    Q2: Is there a way to prevent users from removing Administrator from files/folders permissions on Windows Server 2003/2008?
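
    For Q1, a sketch of the usual recovery path: ownership, unlike the ACL itself, can always be seized by an administrator, and once the files are owned the ACL can be rewritten. Both commands recurse; icacls ships with Server 2003 SP2 and later (on older systems, cacls after taking ownership should behave the same way):

        takeown /f d:\path\to\share /r /d y
        icacls d:\path\to\share /grant Administrators:F /t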

  • Partitioning & Linux

    - by Zac
    Every tutorial on Linux-based partitioning schemes (or just partitioning in general) will tell you that a PC can have either 4 primary partitions, or 3 primaries and 1 extended. They will all also tell you that Linux (in my case, Ubuntu) can be installed on either. It has also come to my attention that it is not atypical for FHS directories, such as usr/, tmp/, etc/, home/ or var/, to be mounted on separate partitions. Several questions I am unable to find the answers to, purely for my own edification:

    (1) By "PC", are we really talking about common PC disk types, like IDE or SATA? I guess I'm wondering why PCs are limited to 4 primaries, or 3 primaries + 1 extended.

    (2) I'm choking on some basic OS concepts: it is said that a partition can be mounted by a file system or an OS. So I assume this means I can somehow instruct Ubuntu to mount to one partition, and then have any part of, say, ReiserFS mounted on another partition? How?

    (3)(a) What about creating swap partitions? Is there too much of a good thing with swap partitioning? If I have 4 GB RAM and a 320 GB disk, what should my swap partition size be, and why?

    (3)(b) Are swap files the only way to create swap space? Wouldn't a Linux partitioning utility allow me to define a partition as being for virtual memory only?

    (4) Why are partitions limited to being "mounted" only by OSes and file systems? Why couldn't I write a program to take up its own, say, 512 MB partition, and then have it invoked or used by an OS installed on another partition?

    Thanks for shedding any light here... it's not critical that I know this stuff, but it's got me thinking incessantly. And when I think incessantly, I...can't......sleep....
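
    On (3)(b): swap files are not the only option; a partition can be dedicated to swap directly (partitioning tools label it with type 82, "Linux swap"). A sketch, with /dev/sda5 standing in for whichever partition is set aside:

        mkswap /dev/sda5     # write the swap signature to the partition
        swapon /dev/sda5     # start paging to it immediately

        # To make it permanent, an /etc/fstab line like:
        #   /dev/sda5  none  swap  sw  0  0

    The old "swap = 1-2x RAM" rule of thumb is only a starting point; with 4 GB of RAM, a few GB is usually plenty, unless you hibernate (hibernation needs swap at least the size of RAM).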

  • Backup solution to backup terabytes and lots of static files on linux server?

    - by user28679
    Which backup tool or solution would you use to back up terabytes and lots of files on a production Linux server? Note that the files are all different and almost never modified, and usage mostly consists of adding files, so the data volume is today 3 TB, growing all the time at around +15 GB/day. Please do not reply "rsync". Basic Unix tools are not enough: rsync does not keep history, and rdiff-backup miserably fails from time to time and screws up the history. Moreover, these are all file-based backups, which cause a lot of I/O wait just browsing directories and calling stat(). But I guess, except for R1Soft CDP, there is no way around that. We tried R1Soft CDP backup, which is block-level backup, and it proved good and efficient for all our other servers, but it systematically fails on the server with 3 terabytes and gazillions of files. For more than two months now, the engineers of R1Soft and the datacenter have been playing a hot-potato game... and there is still no backup except regular rsync. We have never tried the big commercial solutions, except R1Soft CDP, since it was provided as an optional service by the datacenter hosting our servers.

  • How do I fix error 1303 during TI Connect install?

    - by smoth190
    I recently purchased a TI-84 Plus graphing calculator, and I'm trying to install the TI Connect software in order to connect the calculator to my computer via the USB cable. Unfortunately, I'm getting this error while trying to install the program:

        Error 1303. The installation has insufficient privileges to access this directory: E:\Data\Timothy\Documents\MyTIData. The installation cannot continue. Log on as administrator or contact your system administrator.

    However, my account is the only account on my PC, and it has administrative privileges. I've also tried running the installer with Run as Administrator, but with no luck. If I create the folder MyTIData manually, I receive this error:

        Error 1317. An error occurred while attempting to create the directory: E:\Data\Timothy\Documents\MyTIData

    I've reapplied the security settings on the E:\Data folder (and all its sub-directories), giving my account Full Control. I've also gone into Computer Management and given SYSTEM full privileges for the entire disk, and I've logged out, logged back in, restarted, etc., but still no luck. Now, I should mention that my Documents folder is not in the default location; I moved it because my C: drive is a 90 GB SSD, so all my personal data lives on the extra storage disk (which is ~1 TB). I don't know if that is causing the issue, but it can't hurt to throw it out there. So why can't I install this program? Google'ing the problem brings up this error for various other installers (such as Visual Studio and Microsoft Office), but nothing for TI Connect, and all the solutions are the same: give the folder full privileges... which I've already done! I've also tried running the installer both with and without the calculator plugged in, but it didn't change anything.

  • File Server Resource Manager attempting to access quota.xml on System Reserved partition?

    - by pmellett
    I've got a new install of Server 2008 R2 that is designed to be our quota server for user home directories and shared areas. I installed FSRM and set up a few quotas to try out. They worked fine, but at some point over the weekend the FSRM console quota screen stopped loading, and it now gives the following error, with Event ID 8228:

        File Server Resource Manager was unable to access the following file or volume: '\\?\Volume{73649de6-7f04-11e1-a344-005056b10310}\System Volume Information\SRM\quota.xml'. This file or volume might be locked by another application right now, or you might need to give Local System access to it.

    I have removed and reinstalled the FSRM role service, cleared the \System Volume Information\SRM folder on each volume, and am on the verge of just starting again. I'd rather not, since then I'd have to go through and set up all my NTFS permissions again. Since it looks like the service is trying to access the System Reserved partition, which I assume won't have any files it could possibly need, how do I remove the System Reserved partition as a volume to be monitored by the quota service? (I am not aware of having configured that to be the case originally, though!)
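
    As a first check (a diagnostic guess, not a fix): running mountvol with no arguments lists every volume GUID alongside its mount points, which would at least confirm whether the GUID in the event really is the System Reserved partition:

        mountvol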

  • How do I batch-downsize images on linux, while keeping small images small?

    - by Gabriel
    I have a whole lot of photos, and it's time to clean up the mess and free some disk space. I know mogrify is great for batch-resizing things down. The problem is, in some directories I have small images mixed in with the big ones. I'd like to batch-downsize all the big ones but not upsize the small ones. As an example, I have a directory with tens-of-MB pictures around 3000x2000. Some of them I have already downsized so I could email them; they may be 1024x768. I'd like to downsize the big ones to 1600x1200, a disk-space-to-quality tradeoff I like. But then, with mogrify or convert, the small ones will be upsized, which would be a waste of disk space. I found some tricky ways to use identify with cut and some scripting to filter the small pics out and mogrify the others, but man, there's got to be a way to tell mogrify not to upsize my pics... How? Is there some other tool better suited?
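
    ImageMagick's geometry syntax covers this case directly: a trailing > makes the resize apply only to images larger than the given size, so the small ones pass through untouched. The quotes keep the shell from treating > as a redirect:

        # Shrink anything above 1600x1200; leave smaller images alone:
        mogrify -resize '1600x1200>' *.jpg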

  • Setup: Eclipse in Ubuntu with Apache2 and Subversion

    - by Ricalsin
    Trying to set up Eclipse. I am running Ubuntu 10.10 (Maverick), Apache 2.2.16, and Subversion 1.6.12. The Eclipse Help > About > Installed Software dialog lists:

        Eclipse Platform 3.5.2
        Subclipse 1.0.0
        Version Control with Subversion 1.1.1

    The Subclipse wiki I followed is here. I have installed the libsvn-java package as discussed, and added the line "-Djava.library.path=/usr/lib/jni" to the eclipse.ini file. I checked the Eclipse Help > About > Configuration settings, and both of these lines are listed:

        eclipse.vmargs=-Djava.library.path=/usr/lib/jni
        java.library.path=/usr/lib/jni

    I checked that the files are in those directories. Still, when I check Preferences > Team > SVN, an error dialog shows:

        Failed to load JavaHL Library.
        These are the errors that were encountered:
        no libsvnjavahl.1 in java.library.path
        Incompatible JavaHL library loaded 1.3.x or later required

    I followed the "Testing JavaHL libraries" troubleshooting section at the bottom of the wiki: I downloaded the tarball and ran it in a folder on my desktop with no problems. Then, I followed the instructions and placed that file INSIDE the path (/usr/lib/jni/testJavaHL) and ran it from there. There are 50 tests performed, and each one of them came back with this same error (posting only one for brevity):

        50) testCommitRevprops(org.tigris.subversion.javahl.BasicTests)java.io.FileNotFoundException: /usr/lib/jni/testJavaHL/local_tmp/greek_files/iota (No such file or directory)
            at java.io.FileOutputStream.open(Native Method)
            at java.io.FileOutputStream.<init>(FileOutputStream.java:209)
            at java.io.FileOutputStream.<init>(FileOutputStream.java:160)
            at org.tigris.subversion.javahl.WC.materialize(WC.java:70)
            at org.tigris.subversion.javahl.SVNTests.buildGreekFiles(SVNTests.java:303)
            at org.tigris.subversion.javahl.SVNTests.setUp(SVNTests.java:222)
            at org.tigris.subversion.javahl.RunTests.main(RunTests.java:111)

        FAILURES!!!
        Tests run: 50, Failures: 0, Errors: 50

    Any ideas as to how/why "local_tmp/greek_files/iota" is appended to the directory? I assume that's my problem. I'm also having a problem with the new repository location, as the directory location of my svn repository is one level above the home directory, which is prepended to whatever I place in the dialog box, resulting in this error:

        svn: '/home/ricalsin/file:/home/svn' does not exist

    Thank you for any help.
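
    On that last error (a guess from the message): the current path is being prepended because Subversion did not recognise the input as a URL, so it treated it as a relative filesystem path. Repository locations need the full scheme, with three slashes for local repositories:

        svn info file:///home/svn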
