Search Results

Search found 13437 results on 538 pages for 'trusted root certificates'.

Page 359/538 | < Previous Page | 355 356 357 358 359 360 361 362 363 364 365 366  | Next Page >

  • Windows/Samba connection error

    - by Gomibushi
    I have a Linux file server serving up /home for Linux and Windows users. I was able to connect from my Windows client, but not from a DC. Then suddenly I could connect from the DC too. The Linux servers run Centrify clients and as such are part of the domain. Everything is on the same subnet. This is what log.smbd says, repeatedly: [2010/02/11 11:25:57, 0] lib/util_sock.c:read_data(534) read_data: read failure for 4 bytes to client 192.168.200.3. Error = Connection reset by peer On Windows it appeared as an "unknown error". EDIT: the error code is "0x80004005". We are developing a system that depends on the Samba share and are worried this will appear again, so it would be nice to pinpoint the root cause. Any ideas what this might be, or places to look?
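
    "Connection reset by peer" only says the Windows side dropped the TCP connection, so the usual next step is to capture more context the next time it happens. A rough sketch, assuming stock Samba log locations (the log level and paths are common defaults, not taken from this server):

        # smb.conf, [global] section: one log per client machine, more verbose
        #     log level = 3
        #     log file = /var/log/samba/log.%m
        # then watch who is connected and how the DC's session dies
        smbstatus                       # current connections and locked files
        tail -f /var/log/samba/log.*    # %m names each client's log after its NetBIOS name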

    Read the article

  • Squid randomly stops serving requests. How can I resolve this issue?

    - by Vijay
    The Squid (2.7) proxy that I have running on Ubuntu 8.10 stops accepting new requests after being online for a while, for reasons I can't discover. However, doing a squid -k reload resolves the problem immediately. At the moment I run this command manually: I monitor the log and, if I don't see any activity for 5 minutes, I reload the config. On my quest for a solution I had several ideas: 1) diagnose the root cause and eliminate it; 2) set up a script to automatically reload the config if there are no new entries in access.log for the past 3 minutes; 3) painstakingly upgrade the server to a newer Ubuntu version, keeping the network offline or working during off hours to minimize downtime. So I thought I would turn to you for solutions to option 2, as I do not understand Squid well enough for 1, and I'm avoiding 3 as long as I can. Any ideas?
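
    A rough watchdog sketch for option 2, assuming a stock Ubuntu Squid 2.7 layout (the log path and binary location are assumptions); run from cron every minute, it reloads the config whenever access.log has gone quiet for three minutes:

        #!/bin/bash
        # reload squid if access.log has not been written to for MAX_IDLE seconds
        LOG=/var/log/squid/access.log
        MAX_IDLE=180
        now=$(date +%s)
        last=$(stat -c %Y "$LOG")
        if [ $((now - last)) -gt "$MAX_IDLE" ]; then
            /usr/sbin/squid -k reload
            logger "squid watchdog: no access.log activity for ${MAX_IDLE}s, reloaded config"
        fi

    A crontab line such as * * * * * /usr/local/sbin/squid-watchdog.sh would run it each minute; the mtime check only proves the log went quiet, so it can also fire during genuinely idle periods.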

    Read the article

  • How to tell rsync not to touch the destination directory permissions?

    - by Sorin Sbarnea
    I am using rsync to sync a directory from one machine to another, but I have run into the following problem: the destination directory's permissions are altered. rsync -ahv defaults/ root@hostname:~/ The problem is that in this case the permissions and ownership of the defaults folder are assigned to the destination folder. I want to keep the permissions for the files and subdirectories, but not apply those of the source directory itself. Also, I do not want to remove any existing files from the destination (only update them if needed), but I think the current settings are already OK in that regard. How can I do this?
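
    rsync has no switch that means "preserve attributes for everything except the top-level directory", so one blunt workaround is to record the destination directory's mode and owner before the run and put them back afterwards. A minimal sketch, assuming GNU stat on the remote host and the same root@hostname destination as above:

        # capture the remote directory's current mode and ownership
        MODE=$(ssh root@hostname 'stat -c %a ~')     # e.g. 755
        OWNER=$(ssh root@hostname 'stat -c %U:%G ~')
        # sync as before, then restore the top-level directory's attributes
        rsync -ahv defaults/ root@hostname:~/
        ssh root@hostname "chmod $MODE ~ && chown $OWNER ~"

    Files and subdirectories keep the permissions rsync copied; only the destination directory itself is put back the way it was.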

    Read the article

  • How do I get these permissions working right so Apache can work with the files?

    - by cosmicbdog
    I am having a go at setting up my own Apache server and can't seem to get my head around the permissions. Let's say I grab a file from somewhere off the web and it has permissions of 600. I then upload this file via FTP to a user directory, which is also an Apache virtual site, so the file retains its permissions of 600. This means that the user can read the file, but Apache can't: it will be forbidden. What is the simplest solution so that Apache can read and write whatever files end up in the user's directory? Can Apache be granted some sort of root power over files in a directory?
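
    Giving Apache root-like power is the one thing to avoid; the usual low-tech route is to let group permissions do the work. A sketch where the user name, group name, docroot path and the apache account are all assumptions for illustration:

        # let the apache user read group-readable files owned by the site user
        usermod -aG siteuser apache
        # normalise whatever lands in the docroot: directories need execute, files need read
        find /home/siteuser/public_html -type d -exec chmod 750 {} +
        find /home/siteuser/public_html -type f -exec chmod 640 {} +
        # many FTP daemons can also be told to create uploads readable in the
        # first place, e.g. vsftpd's local_umask setting

    Running the find commands from cron (or fixing the FTP umask so 600 files never appear) covers future uploads; write access for Apache is best limited to the specific directories that actually need it.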

    Read the article

  • How to compile gcc-4.0 on Mountain Lion

    - by Frizlab
    So far I've successfully run configure, but when I type make I get the following error after some time (a lot of it compiles successfully):

        ld: unknown/unsupported architecture name for: -arch i686
        /usr/bin/libtool: internal link edit command failed
        make[2]: *** [libgcc_s.dylib] Error 1
        make[1]: *** [libgcc.a] Error 2
        make: *** [all-gcc] Error 2

    Is there a way to tell gcc not to compile itself for the i686 architecture? Here's my uname -a in case it helps: Darwin Frizlabs-Computer.local 12.2.0 Darwin Kernel Version 12.2.0: Sat Aug 25 00:48:52 PDT 2012; root:xnu-2050.18.24~1/RELEASE_X86_64 x86_64 PS: I know gcc-4.0 is ancient, but I do need it.
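
    The parts that fail are the extra multilib copies of libgcc, so one thing to try is configuring the build to produce the host architecture only. A sketch, not verified against the 4.0-era build system on Darwin 12 (the paths, version number and language list are assumptions):

        mkdir build && cd build
        ../gcc-4.0.3/configure --prefix=/usr/local/gcc-4.0 \
            --enable-languages=c,c++ --disable-multilib
        make && make install

    --disable-multilib asks the build not to create secondary-architecture libraries such as the i686 libgcc that the Mountain Lion toolchain refuses to link.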

    Read the article

  • Cannot start Apache 2.2.22 on Fedora 15

    - by Roderik
    I am trying to start Apache 2.2.22 under Fedora 15 on my local machine. After fixing some errors related to missing modules, httpd -t just gives me 'Syntax OK'. However, when I try to start Apache as the root user with service httpd start, it still returns: Starting httpd (via systemctl): Job failed. See system logs and 'systemctl status' for details. [FAILED] When running systemctl I don't see any extra information other than: httpd.service loaded failed failed LSB: start and stop Apache HTTP Server So I wonder where to look now to get this back up and running.
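
    A few places that usually hold the real startup error when the systemd wrapper only reports "Job failed" (a sketch; the unit name and log path assume a stock Fedora httpd layout):

        systemctl status httpd.service        # the last lines of output from the failed start
        tail -n 50 /var/log/httpd/error_log   # Apache's own complaint (bad paths, port in use, ...)
        httpd -X -e debug                     # run one foreground worker and print startup errors to the terminal

    Since httpd -t already passes, the failure is typically something that only shows up at runtime, such as another process holding port 80 or an SELinux denial recorded in /var/log/audit/audit.log.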

    Read the article

  • Java and Sendmail HELO requires domain address

    - by ealgestorm
    I am trying to set up emailing from a Java web application hosted in Apache on a Linux server (CentOS). Sendmail works fine from the command line as root on localhost, but when sending email from the Java web app (also on the same server, via localhost) the following Java exception is thrown: 501 5.0.0 HELO requires domain address EDIT: I have read that some people found this was due to an incorrect hosts entry. Currently the hosts file contains 127.0.0.1 Centos-VPS localhost.localdomain localhost and I'm not sure what the Centos-VPS bit at the start is for, but this is a client's hosted server so I don't really want to break things. EDIT: see, the RFC is so helpful... 501 Syntax error in parameters or arguments Now I know what the problem is! (Note the sarcasm, people.)
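
    Sendmail rejects a HELO/EHLO argument with no dot in it, and the Java mail client introduces itself with whatever the machine's hostname resolves to, which here is the bare name Centos-VPS. Giving the box a fully qualified name usually clears it; a sketch with a placeholder domain (JavaMail can alternatively be told the greeting name directly via its mail.smtp.localhost session property):

        # set a fully qualified hostname at runtime (persist it in
        # /etc/sysconfig/network on CentOS 5/6)
        hostname centos-vps.example.com
        # and list the FQDN first on the loopback line of /etc/hosts:
        #   127.0.0.1   centos-vps.example.com Centos-VPS localhost.localdomain localhost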

    Read the article

  • How to automatically copy a file uploaded by a user via FTP on Linux (CentOS)?

    - by Buttle Butkus
    An outside contractor says they need read/write/execute permissions on part of the filesystem so they can run a script. I'm OK with that, but I want to know what they're running, in case it turns out to be nefarious code. I assume they are going to upload the file, run it, and then delete it to prevent me from finding out what they've done. How can I find out exactly what they've done? My question specifically asks for a way of automatically copying the file, which would be one approach, but if you have another solution, that's fine. For example, if the file could be automatically copied to /home/root/uploaded_files/, that would be awesome.
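
    One way to get exactly that automatic copy is an inotify watcher on the contractor's area. A sketch, assuming inotify-tools is installed (on CentOS it comes from EPEL) and using placeholder paths:

        #!/bin/bash
        # copy every file written under the watched directory into an archive,
        # keeping numbered backups so re-uploads of the same name are preserved
        WATCH=/home/contractor/upload
        ARCHIVE=/home/root/uploaded_files
        mkdir -p "$ARCHIVE"
        inotifywait -m -r -e close_write --format '%w%f' "$WATCH" |
        while read -r f; do
            cp --backup=numbered "$f" "$ARCHIVE"/
        done

    Running it under a process supervisor (or starting it from /etc/rc.local) keeps the watch alive across reboots; auditd or process accounting would additionally record what the contractor executed, not just what they uploaded.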

    Read the article

  • Unable to synchronize local and remote directories ("set times: Operation not permitted")

    - by Tom Auger
    I'm running into FTP errors using software like NetBeans or WinSCP: whenever I attempt to synchronize or update files from local to server, I get errors on the client saying "set times: Operation not permitted". This is clearly an issue with the way I've configured my Fedora installation. The user I'm logging in with cannot touch -t any of these files, though he IS part of a group that has read/write access to them. I do have root/sudo access to this server. What I would like to know is: a) is it likely that this problem would be solved by allowing my FTP user to "touch -t" these files, and b) how do I enable a certain user to set timestamps on files without giving them ownership of the files (some of these files need to be owned by Apache, for instance, so I don't want to chown them)? Thanks in advance.
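
    The underlying rule is that setting a file's timestamps to an explicit value requires owning the file (or root/CAP_FOWNER), while setting them to "now" only requires write permission, so group r/w is not enough for what sync clients do when they preserve modification times. A quick demonstration sketch with placeholder names:

        sudo -u ftpuser touch somefile                  # succeeds: group write allows "set to now"
        sudo -u ftpuser touch -t 202001010101 somefile  # fails: an explicit time needs ownership or root

    That makes the realistic options either giving the FTP user ownership of the files it must sync, or telling the client not to preserve timestamps (WinSCP has such an option), rather than a permission bit that can be granted.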

    Read the article

  • Permissions issue on Fedora with separate home partition

    - by Tres
    I am running Fedora 12 and I've set up a partition, separate from my root partition, to keep shared files and home directories. I've been having permission issues where it says the user cannot chdir into their home directory (/files/home/*). I originally fixed this by chmodding / to 0755 and the home directories to 0755 as well, and yes, the user is the owner:group of their home directory. Now get this: I didn't change a thing, rebooted, and everything still worked. Great, right? Then I boot the server up a day later, and it's the same old issue again. This is a home server that wasn't on at all at any point between the working state and the non-working state, and nothing else was modified. Any ideas? Thanks!
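
    One Fedora-specific thing worth ruling out is SELinux: a home tree living outside /home often carries the wrong context, which can produce exactly this kind of intermittent "cannot chdir" failure after relabels or reboots. A sketch using the paths from the question (semanage comes from policycoreutils-python, assumed installed):

        getenforce                                   # "Enforcing" means SELinux may be the culprit
        ls -Zd /files/home /files/home/*             # compare the contexts with a stock /home
        semanage fcontext -a -e /home /files/home    # declare /files/home equivalent to /home
        restorecon -Rv /files/home                   # relabel to match

    If getenforce says Permissive, or the contexts already match /home, the SELinux theory is out and the next suspect is the mount ordering of the /files partition at boot.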

    Read the article

  • Can I install fresh Linux across partitions (LUKS & LVM) and preserve/use the existing home user?

    - by xtian
    With an existing LUKS-encrypted, LVM-partitioned hard disk that dual-boots Windows and Linux (Fedora 15), is it necessary to "start over" with the LUKS setup when upgrading the system? I recall a note saying that dividing the Linux installation over different partitions would help preserve the home data across future upgrades (I can't find it now). Before I try it: is this possible, and is it an intended use case for partitioning a Linux installation?

        # lsblk -fa
        NAME FSTYPE LABEL MOUNTPOINT
        sda [80G]
        +-sda1 [system W95 FAT 32] vfat
        +-sda2 ext4 /boot
        +-sda3 [52.4G] crypto_LUKS
        +-luks-de25ac97-6a32-4b79-a6a0-296a39376b3b (dm-0) LVM2_member
        +-cryptVG-root (dm-1) [21.5G] ext4 /
        +-cryptVG-swap (dm-2) [5.4MB] swap [SWAP]
        +-cryptVG-data (dm-3) [25.6G] ext4 /home
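
    Keeping the existing volumes is the normal case rather than a reason to start over: a new install only needs to open the same LUKS container and reuse cryptVG-data as /home without formatting it (in the Fedora installer that means assigning the existing LV to /home and leaving the format box unchecked). A sketch of the same thing from a rescue shell, with device names taken from the lsblk output above and the mount point as an assumption:

        cryptsetup luksOpen /dev/sda3 cryptoroot     # prompts for the LUKS passphrase
        vgchange -ay cryptVG                         # activate the logical volumes inside it
        mkdir -p /mnt/home
        mount /dev/mapper/cryptVG-data /mnt/home     # the old /home, intact as long as it is never mkfs'ed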

    Read the article

  • Server 2008 R2 file access permissions

    - by Napster100
    I'm finding it awkward to sort out permissions for file sharing and access on my LAN. I've created an account on the server node (as a normal user) and shared a drive that has two folders at the root: one for personal file storage and the other for shared files. If I connect to the shared area from a workstation running Windows 7 and log in using the account I created on the server, I can look through directories but can't look inside some of them (which is what I wanted, as I changed the permissions to make that happen). My problem is that although the permissions give this user account full control of the specific folder, I can't create a folder in that area or upload files to it. Could someone explain why this is? Thanks in advance.

    Read the article

  • Windows 8 PowerShell "update-help" is failing. Does anyone know how to fix it?

    - by Warren P
    Windows 8 includes PowerShell out of the box, but not the help files. To get the help you run PowerShell as administrator and type "update-help". I get this error: > update-help update-help : Failed to update Help for the module(s) 'BitLocker, NetWNV' with UI culture(s) {en-US} : The value of the HelpInfoUri key in the module manifest must resolve to a container or root URL on a website where the help files are stored. The HelpInfoUri 'http://technet.microsoft.com/library/cc732148.aspx' does not resolve to a container. At line:1 char:1 + update-help + ~~~~~~~~~~~ + CategoryInfo : InvalidOperation: (:) [Update-Help], Exception + FullyQualifiedErrorId : InvalidHelpInfoUri,Microsoft.PowerShell.Commands.UpdateHelpCommand Can anyone tell me how to fix this, or whether it even matters? I'm guessing that if I don't need help on NetWNV or BitLocker, this is the only thing wrong?

    Read the article

  • Is there a Linux equivalent of iTerm's (Mac) send-command-to-multiple-tabs functionality?

    - by jabbertalker
    In iTerm, you can send a command to execute simultaneously in a set of already-opened tabs. Is there a way to do this in Linux (preferably with gnome-terminal)? For instance, suppose I had 10 tabs already SSH'd into [email protected] and sudoed to root, and wanted to send a command to run in all 10 tabs. The goal is to be able to stay within a set of tabs and command them all, rather than having to use expect scripts to SSH in, elevate, and run commands. Basically, like you can do in iTerm.
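
    gnome-terminal does not offer this out of the box as far as I know; the usual substitutes are Terminator (which has a broadcast-to-group option) or a terminal multiplexer. A tmux sketch, assuming panes in one window are an acceptable stand-in for tabs:

        # one terminal, three panes
        tmux new-session \; split-window -h \; split-window -v
        # from inside the session, toggle sending keystrokes to every pane at once
        tmux set-window-option synchronize-panes on    # "off" to return to a single pane

    Each pane would hold one of the existing SSH-and-sudo sessions; with synchronize-panes on, anything typed lands in all of them.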

    Read the article

  • Iptables to lock down a compromised server to a single IP

    - by ollybee
    I have a Linux server which has been compromised; I can see nasty-looking Perl scripts executing with root privileges. I want to get some data off it before I wipe it. How can I block all inbound and outbound traffic except to and from my IP? It's a CentOS server, so I assume I can do this with iptables? I'm aware that since the server is rooted, there is a possibility the attackers have made changes that would prevent this from working. I'll be testing to make sure, and will only have the server online for a couple of hours before it is nuked.
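
    A minimal sketch of that lockdown, with 203.0.113.10 standing in for your own IP; run it from console/KVM access if at all possible, because the default-drop policies will cut any SSH session coming from a different address:

        iptables -F
        iptables -P INPUT DROP
        iptables -P OUTPUT DROP
        iptables -P FORWARD DROP
        iptables -A INPUT  -i lo -j ACCEPT
        iptables -A OUTPUT -o lo -j ACCEPT
        iptables -A INPUT  -s 203.0.113.10 -j ACCEPT    # traffic from your workstation
        iptables -A OUTPUT -d 203.0.113.10 -j ACCEPT    # replies back to it

    Since the box is rooted, the rules are only as trustworthy as the iptables binary and kernel on it; treating everything copied off the machine as suspect is still the safest assumption.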

    Read the article

  • Booting an EXT4 image file from GRUB2

    - by sjjg
    My friend needed a fast HDD, so I gave her my small 64GB SSD. This SSD had my Linux install on it. I used dd to make an image of the partition (boot, root and home all on one partition). This partition image is now sitting on a traditional 500GB ext4-formatted drive. Is there any way I can get GRUB to boot using this .img file? I'm not getting my SSD back, and I can't be bothered to go through the hassle of setting up my Linux install from scratch. I have come across loopback support in GRUB for ISO images. Does this support ext4 as well? I can't seem to find anything specific and don't want to trash anything. Cheers.

    Read the article

  • HTTP 301 redirection issue

    - by Guilhem Soulas
    I'm a little bit lost with a redirection. I want mysite.com, www.mysite.com and www.mysite.co.uk all to redirect to mysite.co.uk. In Apache, I wrote this for mysite.co.uk in order to redirect www to the bare domain:

        RewriteEngine on
        RewriteCond %{HTTP_HOST} ^www
        RewriteRule ^/(.*) http://mysite.co.uk/$1 [L,R=301]

    And for mysite.com, I wrote this redirect to mysite.co.uk:

        ServerName www.mysite.com
        RewriteEngine on
        RewriteRule ^/(.*) http://mysite.co.uk/$1 [L,R=301]

    This way the redirection from www.mysite.com to mysite.co.uk works properly, but it doesn't also work for mysite.com to mysite.co.uk (without the www). Could someone tell me how to make all my redirections work in all cases?
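
    One likely gap is that a vhost whose only ServerName is www.mysite.com never receives requests for the bare mysite.com, so those land in whichever vhost is the default. A sketch of one arrangement that covers every combination (assuming both domains point at this Apache and that a plain Redirect is acceptable for the .com side):

        <VirtualHost *:80>
            ServerName www.mysite.com
            ServerAlias mysite.com
            Redirect permanent / http://mysite.co.uk/
        </VirtualHost>

        <VirtualHost *:80>
            ServerName mysite.co.uk
            ServerAlias www.mysite.co.uk
            RewriteEngine on
            RewriteCond %{HTTP_HOST} !^mysite\.co\.uk$ [NC]
            RewriteRule ^/?(.*)$ http://mysite.co.uk/$1 [L,R=301]
            # normal site configuration continues here
        </VirtualHost>

    The ServerAlias lines pull the bare .com and the www .co.uk names into the right vhosts; the negated RewriteCond then canonicalises anything that is not already mysite.co.uk.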

    Read the article

  • tomcat processParameters complains about "invalid chunk ignored"

    - by cgicgi
    I am hosting a software system running under Tomcat for quite a number of customers. Some of them send invalid URLs as requests. These URLs may contain "&=" or "&&", which is not within the HTTP spec. My Tomcat now complains with the following: "08.09.2010 12:36:04 org.apache.tomcat.util.http.Parameters processParameters WARNING: Parameters: Invalid chunk '' ignored." It is not a real problem, as it doesn't affect operation in any way. The only problem is that tomcat/logs/catalina.out grows with every single request. On the net you can find suggestions like: fix your URLs (which I can't, as it is the customers who send them); raise Tomcat's log level to ERROR (which I don't want to do, as it would suppress INFO messages such as "INFO: Reloading context [/ContextName]" and other things you want to know); or redirect the log to the application log (which won't solve the problem, as the messages would just flood another log). Does anyone know how to solve the problem at its root, i.e. tell Tomcat not to complain about invalid request parameters any longer?
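
    A more targeted option than changing the global level is to raise the threshold for just the logger named in the warning. A sketch for conf/logging.properties, assuming the instance uses Tomcat's standard JULI logging rather than log4j:

        # conf/logging.properties -- silence only the parameter parser,
        # leaving all other INFO messages intact
        org.apache.tomcat.util.http.Parameters.level = SEVERE

    The logger name is taken directly from the warning line, so every "Invalid chunk" message below SEVERE is dropped while context reloads and the rest of catalina.out stay as they are.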

    Read the article

  • New 64-bit Linux system has regular processes (ps, grep, etc.) taking up way too much VIRT memory

    - by user42980
    We just moved from a 32-bit machine to a 64-bit machine. We have quickly run out of memory, even though the new boxes have twice as much RAM as the old ones. Running a simple ps command illustrates the problem. New machine:

        132 prod-Charlotte1-node1 ~/public_html/rearch/cgi-bin ps aux | grep ps
        root      293  0.0  0.0      0     0 ?      S<   May09   0:00 [kpsmoused]
        xamine   2267  1.0  0.0  63728   928 pts/3  R+   16:50   0:00 ps aux
        xamine   2268  0.0  0.0  61172   752 pts/3  S+   16:50   0:00 grep ps

    Old machine:

        132 prod-116431-node1:/home/xamine ps aux | grep ps
        xamine  23191  0.0  0.0   2332   768 pts/6  R+   15:41   0:00 ps aux
        xamine  23192  0.0  0.0   3668   692 pts/6  S+   15:41   0:00 grep ps

    Notice that the ps process is using 63M of VIRT memory versus 2M on the old machine. New machine: Enterprise Linux Enterprise Linux Server release 5.4 (Carthage) / Red Hat Enterprise Linux Server release 5.4 (Tikanga). Old machine: Red Hat Enterprise Linux ES release 4 (Nahant Update 4). Thanks for any thoughts you have!
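
    VIRT counts every mapping in a process's address space (shared libraries, arenas reserved by malloc, files mapped but never touched), and those mappings roughly double on x86_64 without consuming extra RAM, so VIRT is a poor measure of pressure; the resident set size (RSS) column is the one to compare. A quick sketch using standard procps options:

        ps -eo pid,user,rss,vsz,comm --sort=-rss | head -n 15   # biggest resident consumers first
        free -m   # the "-/+ buffers/cache" line shows memory actually unavailable to applications

    In the output above, both machines' ps and grep have an RSS under 1 MB, so whatever is exhausting memory on the new boxes is something other than these utilities.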

    Read the article

  • Mount a remote Linux hard drive as another Windows 7 partition during boot?

    - by zhuanyi
    I would like to mount a hard drive on a remote computer (running CentOS 6) as a Windows drive so that I can install programs to it. The primary hard drive on my Windows machine (at home) is pretty small, while I have a Linux server sitting in a remote data center with a much larger hard drive that would let me install more stuff. I know most of you are going to say Samba; unfortunately the biggest problem in my case is that I cannot mount a Samba network share unless I start OpenVPN or SSH tunneling first, which is no good because I will install some startup programs to the remote drive as well. Therefore, the remote drive has to be ready and working just like another drive BEFORE any of the startup programs load. Is that possible? My home PC runs Windows 7 Professional 32-bit and the remote server is a Xen virtual server running CentOS 6. I have admin/root permissions on both. Thanks a lot!

    Read the article

  • MySQL server simple insert/update/delete queries are taking a long time to execute

    - by ElGabbu
    We have a VPS hosting server with a MySQL server running on it. We host several databases for clients' websites. Recently we have noticed that insert/update and delete queries are taking a long time to execute, sometimes as long as 30 seconds. I use the following command to watch these queries being executed: watch -n1 mysqladmin proc stat We have still not been able to track down the root of this problem. I would appreciate any pointers as to what we can check or improve to resolve the issue. Thanks
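
    Writes that stall for tens of seconds while the server otherwise works usually point at lock contention (MyISAM table locks, InnoDB row locks held by long transactions) or at saturated disk I/O on the VPS. A few diagnostic starting points (a sketch; add -u/-p credentials as needed, and iostat comes from the sysstat package):

        mysql -e "SHOW FULL PROCESSLIST"          # what state the stalled writes are sitting in
        mysql -e "SHOW ENGINE INNODB STATUS\G"    # lock waits, long-running transactions, pending I/O
        mysqladmin extended-status | grep -i 'table_locks\|row_lock'
        iostat -x 2 5                             # await/%util numbers reveal a struggling VPS disk

    Enabling the slow query log with a low long_query_time for a while would also capture exactly which statements stall and when.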

    Read the article

  • Close ssh sessions

    - by egor7
    I'm using ~/.ssh/config to log in to the internal.local corporate server:

        Host internal.local
            ProxyCommand ssh -e none corporate.proxy nc %h %p

    But after closing the session (typing exit), my sshd session on the server stays active (I can see it through a different connection). How do I close the session, or change my config appropriately, to eliminate the hung sessions? First check, from the second (root) session:

        ps -fu user_name
        user_name   861   855   0 16:58:16 pts/3   0:00 -bash
        user_name   855   854   0 16:58:13 ?       0:00 /usr/lib/ssh/sshd

    After logging out:

        user_name   855   854   0 16:58:13 ?       0:00 /usr/lib/ssh/sshd

    Likewise, just after scp'ing files to/from internal.local, a new scp session stays hanging on the server.
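
    A common cause of these leftovers is the nc in the ProxyCommand never exiting once the real session ends, which leaves the far sshd waiting on a connection that is still technically open. Two client-side variants worth trying (a sketch; -W needs OpenSSH 5.4 or newer on the workstation, and the -w flag depends on which netcat the proxy host ships):

        Host internal.local
            # let ssh do the forwarding itself and tear it down on exit
            ProxyCommand ssh -e none -W %h:%p corporate.proxy
            # or keep nc but give it a timeout so it cannot linger forever:
            # ProxyCommand ssh -e none corporate.proxy nc -w 60 %h %p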

    Read the article

  • Is there a BSD equivalent to "!!"?

    - by CT
    I often find myself issuing a command that I do not have the proper elevated privileges for. On Ubuntu I could use sudo !! which would issue the same command again with sudo privileges. Is there an equivalent on OpenBSD? Edit: I should have been more specific about which version of OpenBSD: I am using OpenBSD 4.8, where sudo seems to be installed by default. I have already created a user besides root and edited my sudoers file to allow that user to use sudo. My question is: is there already a built-in shortcut like "!!" for reusing the previous command?
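
    "!!" is a feature of the shell (csh/bash history expansion) rather than of the operating system, so the answer depends on which shell is in use. A sketch of the closest equivalents in OpenBSD's stock ksh, plus the drop-in option:

        r                   # ksh built-in alias for "fc -e -": re-run the previous command
        r vi                # re-run the most recent command that started with "vi"
        # for the literal "sudo !!" habit, install bash and use it as the login shell:
        pkg_add bash

    ksh's r cannot prepend sudo for you, so the bash route is the one that reproduces the Ubuntu behaviour exactly.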

    Read the article

  • Apache routing vhosts to /var/www

    - by FHannes
    One user at my site has reported that he reaches the content at /var/www when browsing to any of the vhosts on my server. As far as I'm aware, my Apache configuration does not contain a document root that references this folder. On top of that, this user seems to be the only one experiencing the issue. According to his ISP, the issue isn't caused by them; yet on his mobile connection he can access the site fine. When browsing to my server's IP, he also receives the correct content from the default vhost. What could the possible causes of this issue be, and how can I get it to stop? I've explored pretty much every option I could think of.
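
    Two checks that separate a genuine vhost problem from something sitting between that one user and the server, such as a stale DNS answer or an interfering proxy (a sketch; the IP and hostname are placeholders):

        curl -s -H "Host: www.example-vhost.com" http://203.0.113.20/ | head -n 20
        apachectl -S    # prints the parsed vhost map: default server, name matches, overlaps

    If the curl request by IP with an explicit Host header returns the right site, and apachectl -S shows every name mapped where you expect, the server is answering correctly and the fault lies in what that user's connection resolves or proxies the vhost names to.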

    Read the article

  • Explaining svn / apache permissions error? (I know "how" but not "why")

    - by Neil
    I have the following error occurring occasionally when trying to do an svn switch (which I have set up to run via a web request): svn: Can't open file '/root/.subversion/servers': Permission denied This happens after an Apache httpd.conf change and the corresponding restart. How do I fix it? I can get it to go away by doing an Apache restart, BUT it often takes multiple tries. I'm curious whether anybody can explain this: why did the error go away on my 8th Apache restart but not on the prior ones (with no edits to the conf file)? Basically, I kind of have a "how" in terms of solving this, but I don't have a "why"... Thanks!
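
    The path in the error is the "why": the svn client launched from the web request looks for $HOME/.subversion, and when httpd is restarted from root's interactive shell it can inherit HOME=/root, whereas a restart that happens to scrub the environment presumably leaves svn with a usable home, which would explain the on-again, off-again behaviour. One way to take the environment out of the equation (a sketch; the config directory and the apache account name are assumptions):

        mkdir -p /var/www/.subversion
        chown apache:apache /var/www/.subversion
        # have the web-triggered command pass an explicit config dir to svn:
        svn switch --config-dir /var/www/.subversion http://svn.example.com/branches/foo /path/to/working-copy

    Restarting httpd through the init script (service httpd restart) rather than from an interactive root shell also avoids handing Apache a root-owned HOME in the first place.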

    Read the article
