Search Results

Search found 22383 results on 896 pages for 'package private'.


  • Debian Apache2 and SSL

    - by Topher Fangio
    Hello all, I recently took over a server that is using Apache2 with SSL. I have set up a new server to which I am migrating all of the old websites so that we can scale more easily (it's a cloud server) and so that I can set everything up correctly (or at least with some sort of convention). I have read quite a few articles on setting up Apache2 and SSL with virtual hosts, but I'm a bit confused because all of the examples show three files and I only seem to have two. To compound the problem, they are all named differently (do the file extensions actually make a difference?). The examples show something to this effect:

        <VirtualHost X.X.X.X:443>
            ServerAlias something.mydomain.com
            ServerAdmin [email protected]
            DocumentRoot /var/www/project/client/site
            SSLEngine on
            SSLCertificateFile /etc/ssl/certs/mydomain-cert.pem
            SSLCertificateKeyFile /etc/ssl/private/mydomain-key.pem
            SSLCertificateChainFile /etc/ssl/certs/mydomain-ca.crt
        </VirtualHost>

    However, the files I have are:

        _.mydomain.com.crt
        gd_bundle.crt

    It is a wildcard certificate that we purchased through GoDaddy, I believe. I think the first file is the actual certificate and gd_bundle.crt is the chain file, but that leaves me without a key file. There is also a random mydomain.csr file lying around on the old server, but it wasn't one of the files bundled with the download from GoDaddy, so I'm not really sure what it is. Any help in figuring out what I need to do would be greatly appreciated. I am a software developer, so I know my way around computers, but I have only dabbled in server setup/maintenance. Many thanks!
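    A quick way to sort this out is openssl: the missing key is the one that was generated alongside that mydomain.csr on the old server, and its modulus has to match the certificate's. A minimal sketch, assuming the key is sitting somewhere under /etc/ssl or the Apache config directory (all paths and filenames below are assumptions):

        # Look for candidate private key files on the old server (locations are assumptions).
        sudo grep -rl "PRIVATE KEY" /etc/ssl /etc/apache2 2>/dev/null

        # The certificate, key and CSR should all share the same modulus; compare the hashes.
        openssl x509 -noout -modulus -in _.mydomain.com.crt            | openssl md5
        openssl rsa  -noout -modulus -in /etc/ssl/private/mydomain.key | openssl md5
        openssl req  -noout -modulus -in mydomain.csr                  | openssl md5

    If the hashes match, that key file is the SSLCertificateKeyFile, _.mydomain.com.crt is the SSLCertificateFile, and gd_bundle.crt is the SSLCertificateChainFile; the .crt/.pem/.key extensions are only naming conventions and do not change how Apache reads the files.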


  • FTP timing out after login

    - by Imran
    For some reason I can't access any of my accounts on my dedicated server via FTP. It simply times out when it tries to display the directories. Here's a log from FileZilla:

        Status: Resolving address of testdomain.com
        Status: Connecting to 64.237.58.43:21...
        Status: Connection established, waiting for welcome message...
        Response: 220---------- Welcome to Pure-FTPd [TLS] ----------
        Response: 220-You are user number 3 of 50 allowed.
        Response: 220-Local time is now 19:39. Server port: 21.
        Response: 220-This is a private system - No anonymous login
        Response: 220-IPv6 connections are also welcome on this server.
        Response: 220 You will be disconnected after 15 minutes of inactivity.
        Command: USER testaccount
        Response: 331 User testaccount OK. Password required
        Command: PASS ********
        Response: 230-User testaccount has group access to: testaccount
        Response: 230 OK. Current restricted directory is /
        Command: SYST
        Response: 215 UNIX Type: L8
        Command: FEAT
        Response: 211-Extensions supported:
        Response:  EPRT
        Response:  IDLE
        Response:  MDTM
        Response:  SIZE
        Response:  REST STREAM
        Response:  MLST type*;size*;sizd*;modify*;UNIX.mode*;UNIX.uid*;UNIX.gid*;unique*;
        Response:  MLSD
        Response:  ESTP
        Response:  PASV
        Response:  EPSV
        Response:  SPSV
        Response:  ESTA
        Response:  AUTH TLS
        Response:  PBSZ
        Response:  PROT
        Response: 211 End.
        Status: Connected
        Status: Retrieving directory listing...
        Command: PWD
        Response: 257 "/" is your current location
        Command: TYPE I
        Response: 200 TYPE is now 8-bit binary
        Command: PASV
        Response: 227 Entering Passive Mode (64,237,58,43,145,153)
        Command: MLSD
        Response: 150 Accepted data connection
        Response: 226-ASCII
        Response: 226-Options: -a -l
        Response: 226 18 matches total
        Error: Connection timed out
        Error: Failed to retrieve directory listing

    I have restarted the FTP service several times, but it still doesn't load. I only have this problem when my server is reaching its peak usage, which is still only a load of 1.0 (4 cores) and 40% of 4 GB RAM. The FTP connections aren't maxed out, because only my colleague and I have FTP access to the server.
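    The log shows the control connection and login succeeding and the timeout happening only on the passive-mode data connection (the MLSD after PASV), which usually points at a firewall or NAT dropping the passive data ports. A hedged sketch of pinning Pure-FTPd to a fixed passive range and opening it (the config location, port range, and service name are assumptions and differ between Debian-style and cPanel installs):

        # Pin Pure-FTPd to a fixed passive port range; on some installs this is instead a
        # "PassivePortRange 30000 35000" line in /etc/pure-ftpd.conf.
        echo "30000 35000" | sudo tee /etc/pure-ftpd/conf/PassivePortRange

        # Allow that range through the firewall, then restart the FTP service.
        sudo iptables -A INPUT -p tcp --dport 30000:35000 -j ACCEPT
        sudo /etc/init.d/pure-ftpd restart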


  • VsFTPd - pam_mkhomedir

    - by Totor
    I am trying to set up an FTP server that authenticates against an LDAP server. This part is done and works. My server is VsFTPd on Ubuntu Server 11.04. But I have to create the home directories for my LDAP users. I am trying to use the pam_mkhomedir module, but it is not working: when I add its line to the /etc/pam.d/vsftpd file, my users can no longer log in to the FTP server. The problem is that I have very little information on what is wrong. VsFTPd just responds 530: login incorrect, and I could not find a way to get debug or error messages from pam_mkhomedir. Here are my different configuration files.

    The /etc/pam.d/vsftpd file:

        auth     required pam_listfile.so item=user sense=deny file=/etc/ftpusers onerr=succeed
        auth     required pam_ldap.so
        account  required pam_ldap.so
        password required pam_ldap.so
        session  optional pam_mkhomedir.so skel=/home/skel debug

    The /etc/vsftpd.conf file:

        listen=YES
        anonymous_enable=NO
        local_enable=YES
        write_enable=YES
        dirmessage_enable=YES
        use_localtime=YES
        xferlog_enable=YES
        connect_from_port_20=YES
        pam_service_name=vsftpd
        rsa_cert_file=/etc/ssl/private/vsftpd.pem
        guest_enable=YES
        session_support=YES
        log_ftp_protocol=YES
        tcp_wrappers=YES

    Permissions on /home and /home/skel:

        root@ftp:/home# ls -al
        total 16
        drwxrwxrwx  4 root root 4096 2011-10-11 21:19 .
        drwxr-xr-x 21 root root 4096 2011-09-27 13:32 ..
        drwxrwxrwx  2 root root 4096 2011-10-11 19:34 skel
        drwxrwxrwx  5 foo  foo  4096 2011-10-11 21:11 foo
        root@ftp:/home# ls -al skel/
        total 16
        drwxrwxrwx 2 root root 4096 2011-10-11 19:34 .
        drwxrwxrwx 4 root root 4096 2011-10-11 21:19 ..
        -rwxrwxrwx 1 root root 3352 2011-10-11 19:34 .bashrc
        -rwxrwxrwx 1 root root  675 2011-10-11 19:34 .profile

    Yes, I know, the permissions are not properly set, but security is not the issue here: I first need to get it to work. So, to recapitulate: without pam_mkhomedir my LDAP users can log in, but they cannot do anything because they are in an empty chrooted jail. If I add pam_mkhomedir, they cannot log in anymore. If anyone has an idea why, or knows how to get more information from the logs, I would be very grateful. Thanks.
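    Since vsftpd only reports the generic 530, the PAM side of the conversation usually lands in the system auth log, which is the first place to look. A small diagnostic sketch (log paths and the exact session line are assumptions; pam_mkhomedir generally needs root privileges during the session phase to create the directory):

        # Watch PAM's and vsftpd's view of the login attempt while connecting with a client.
        sudo tail -f /var/log/auth.log /var/log/vsftpd.log

        # A commonly used form of the session entry (skel path and umask are assumptions):
        #   session required pam_mkhomedir.so skel=/etc/skel/ umask=0022
        # After editing /etc/pam.d/vsftpd, restart the daemon and retry the login.
        sudo service vsftpd restart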


  • Win7 playback of dvr-ms files stutters

    - by Jim Lynn
    I've just had to install Windows 7 on my Media Center machine because my Vista installation had a faulty drive. I've got the latest drivers that I can find - Intel 945GM integrated graphics, Realtek audio drivers. Things are working OK with one exception. Playback of old recordings, from dvr-ms format files, is choppy. The picture freezes for a fraction of a second, then quickly catches up. The sound is uninterrupted and doesn't pause. These freezes happen once every 5 seconds or so. It's very regular. Playback of Live TV from the digital tuner is perfectly smooth. DVD playback is perfectly smooth.

    As an experiment, I used the MPEG editing package VideoReDo to create a small test file in three different formats. This program takes the raw MPEG streams and repackages them into the desired container. I took the same clip and created three files in three formats: dvr-ms (Microsoft's old recorded TV format); mpg (standard MPEG); and ts (raw MPEG transport stream of the kind often produced by PVRs). When these three files are played back under Windows 7, the mpg and ts files play smoothly, but the dvr-ms file stutters.

    The last piece of data I have is that two other Windows 7 machines can play back dvr-ms files smoothly with no stuttering. One is a netbook, with less grunt than the media centre. So there must be something specific about my Media Center machine that's causing the problem. Does anyone have any idea where I can look now? I don't know much about AV software, codecs, filter graphs etc. but I suspect that's where the problem lies. Rendering the video isn't the problem, but extracting the streams is. How would I go about diagnosing the problem?

    Edited to add: I just used the GraphStudio tool to look at the filter graph on the offending PC. The filter graph it uses by default for dvr-ms looks identical to the other machines, and, interestingly, when I play the files using GraphStudio they run smoothly. Under Windows Media Player and Windows Media Center they stutter. I'd like to see the filter graph for WMP but GraphStudio won't show it. It looks like WMP and WMC are using a different decoding path to GraphStudio.

    Edited again to add: Today I purchased a new HDTV. The same Media Center driving the TV at 1080p is now playing back the old Recorded TV files smoothly, without stuttering. So whatever the cause of the original problem, using a different resolution seems to have removed the problem. It might also explain why nobody else has had this problem. I doubt many people use Media Centre with a 14in portable TV.


  • Launching firefox on remote server causes local firefox to start instead

    - by terdon
    Right, this is strange. I am connecting from my laptop (LMDE) to a remote host (SUSE Linux Enterprise) using ssh -X. I want to launch a firefox instance running on the remote server so I can have access to webpages on a private network.

        User@RemoteMachine $ which -a firefox
        /usr/bin/firefox
        User@RemoteMachine $ /usr/bin/firefox --version
        Mozilla Firefox 2.0.0.2, Copyright (c) 1998 - 2007 mozilla.org

        User@LocalMachine $ which -a firefox
        /usr/bin/firefox
        User@LocalMachine $ /usr/bin/firefox --version
        Mozilla Firefox 14.0.1

    Now, if firefox is not running on the local machine, everything goes as expected and executing firefox on the remote machine causes a firefox (v 2.0) window running on the remote machine to show up. However, if firefox is running on the local machine, a second window of firefox 14.0.1 running on the local machine appears. I have checked top on both machines. In the second case, a firefox process briefly appears on the remote machine and then disappears when the local version of firefox is launched. My questions are the following: What gives? How/why can firefox connect to its existing instance on the local machine? The remote machine appears to have access to the local machine. It, in fact, appears to have the right to execute programs on my local machine. Am I missing something or is this just weird? Is this not a security risk?
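    What is most likely happening is Firefox's X remote mechanism: the remote copy sees an already-running Firefox on the forwarded display and hands the request to it instead of starting itself. A hedged sketch of forcing a genuinely separate instance on the remote machine (the profile name and URL are placeholders; the -no-remote flag exists in both Firefox 2 and current releases):

        # One-time: create a dedicated profile for the remote instance.
        firefox -no-remote -CreateProfile remote-profile

        # Over the ssh -X session: do not delegate to an existing instance on the
        # forwarded display; start a separate one with its own profile.
        firefox -no-remote -P remote-profile http://intranet.example.com &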


  • Getting macro keys from a razer blackwidow to work on linux

    - by Journeyman Geek
    I picked up a Razer BlackWidow Ultimate that has additional keys meant for macros, which are set using a tool that's installed on Windows. I'm assuming that these aren't some fancypants joojoo keys and should emit scancodes like any other keys. Firstly, is there a standard way to check these scancodes in Linux? Secondly, how do I set these keys to do things in command-line and X-based Linux setups? My current Linux install is Xubuntu 10.10, but I'll be switching to Kubuntu once I have a few things fixed up. Ideally the answer should be generic and system-wide.

    Things I have tried so far:

        - showkeys from the built-in kbd package (in a separate VT) - macro keys not detected
        - xev - macro keys not detected
        - lsusb and evdev output
        - this AHK script's output suggests the M keys are not outputting standard scancodes

    Things I need to try:

        - Snoopy Pro + reverse engineering (oh dear)
        - Wireshark - preliminary futzing around seems to indicate no scancodes are emitted when what I seem to think is the keyboard is monitored and keys are pressed. Might indicate the additional keys are a separate device or need to be initialised somehow. Need to cross-reference that with lsusb output from Linux, in 3 scenarios - standalone, passed through to a Windows VM without the drivers installed, and the same with. lsusb only detects one device on a standalone Linux install.
        - It might be useful to check if the mice use the same Razer Synapse driver, since that means some variation of razercfg might work (not detected; it only seems to work for mice).

    Things I have worked out:

        - In a Windows system with the driver, the keyboard is seen as a keyboard and a pointing device. And said pointing device uses, in addition to your bog-standard mouse drivers, a driver for something called a Razer Synapse.
        - The mouse driver is seen in Linux under evdev and lsusb as well.
        - It is a single device under OS X apparently, though I have yet to try the lsusb equivalent on that.
        - The keyboard goes into pulsing-backlight mode in OS X upon initialisation with the driver. This should probably indicate that there's some initialisation sequence sent to the keyboard on activation.
        - They are, in fact, fancypants joojoo keys.

    Extending this question a little: I have access to a Windows system, so if I need to use any tools on that to help answer the question, it's fine. I can also try it on systems with and without the config utility. The expected end result is still to make those keys usable on Linux, however. I also realise this is a very specific family of hardware. I would be willing to test anything that makes sense on a Linux system if I have detailed instructions - this should open up the question to people who have Linux skills, but no access to this keyboard.

    The minimum end result I require: I need these keys detected, and usable in any fashion, on any of the current mainstream graphical Ubuntu variants.
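    For the first question, one reasonable check is to watch the raw input devices rather than X, since events that never reach X will still show up at the evdev layer. A minimal sketch using evtest (the event node number is an assumption; the keyboard may expose more than one node):

        # List the input device nodes the kernel created for the keyboard.
        grep -A5 -i razer /proc/bus/input/devices

        # Watch raw events from each candidate node while pressing the macro keys
        # (install the evtest package first).
        sudo evtest /dev/input/event3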


  • Sync OneNote Notebooks to/on SkyDrive

    - by Sam
    I've got OneNote running on all the computers in our house, and several people use it all the time on several computers. The only drawback: I want to keep the copies of OneNote in sync without having to run a dedicated server myself. Right now one of my computers has a folder share that all the others sync to, but this is highly impractical since that computer is not always running. So my question is: is it possible to put the notebook files in a (private) SkyDrive folder and have all the computers sync there? This way all the computers could stay in sync whenever they have access to the web. Can this be done? And, of course, how?

    [Update] Maybe I should not have taken knowledge about OneNote for granted: OneNote uses a proprietary file format, but has very good in-file syncing that works on network shares. A generic 'just sync the complete file' approach won't be useful at all, because I'd just get 'file has changed on server and on client' conflicts all the time. The sync needs to understand OneNote files and be able to sync the content - e.g. OneNote itself needs to sync the files, not some generic sync tool.


  • Static IP for dynamic IP

    - by scape279
    I have a dynamic IP address. I would like to have a static IP, but Virgin Media don't allow static IPs for residential broadband services, even if you ask them really nicely and offer to pay for it without switching to a business tariff. I am already registered with a dynamic DNS service which is updated by my router, e.g. me.example.com will always resolve to my dynamic IP. This is fine for some circumstances, but not if you can only enter an IP address into configuration files/hardware, such as firewalls, Subversion services and so on. Is there a way I can have a static IP address 'forwarding' to my dynamic IP? Would a possible solution involve tunnelling? Setting up a private proxy? Please note the following:

        - I am able to buy an IP address from my web host.
        - I have access to a webserver and I am able to create custom DNS zones.
        - I'm happy to have a webserver running at home if necessary also.
        - I do not wish to change broadband providers.
        - I have zero control over the services that require the IP address to be entered, so I cannot tackle the problem that way round (the services I need to access are at work).

    PS: I've tried googling this issue, but it is very difficult to search for, as most results are related to dynamic DNS (which I already have set up and isn't quite what I'm after).
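    One pattern that fits these constraints is a persistent reverse SSH tunnel: a machine at home keeps a tunnel open to the static-IP web host, and the work-side services are pointed at the web host's address and a chosen port. A minimal sketch, assuming autossh is available on the home machine (hostname, ports, and the forwarded service are placeholders):

        # On the home machine: publish a local service (here HTTPS on 443) on port 8443
        # of the static-IP host; autossh keeps the tunnel alive across drops.
        autossh -M 0 -N \
            -o "ServerAliveInterval 30" -o "ServerAliveCountMax 3" \
            -R 8443:localhost:443 user@static-host.example.com

        # On the static-IP host, sshd_config needs "GatewayPorts yes" (or
        # "clientspecified") for the forwarded port to be reachable from outside.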


  • ssl_error_rx_record_too_long connection error with SSL site in Firefox

    - by Thomas
    I am trying to set up SSL in Apache, but when I go to the server in Firefox I get the following error message:

        An error occurred during a connection to sludge.home.
        SSL received a record that exceeded the maximum permissible length.
        (Error code: ssl_error_rx_record_too_long)

    My virtual host config file looks like this:

        <IfDefine SSL>
        <VirtualHost *:443>
            ServerName sludge.home
            SSLEngine on
            SSLCertificateFile /usr/local/apache/cert/server.crt
            SSLCertificateKeyFile /usr/local/apache/cert/server.pem
            SSLProtocol -SSLv2
            SSLCipherSuite HIGH:!ADH:!EXP:!MD5:!NULL
            DocumentRoot "/usr/local/apache/htdocs"
            ServerAdmin [email protected]
            <Location />
                AuthType Digest
                AuthName "private area"
                AuthDigestDomain /
                AuthDigestProvider file
                AuthUserFile /usr/local/apache/passwd/digest_pw
                Require valid-user
            </Location>
            <Directory /usr/local/apache/htdocs/bugz>
                AddHandler cgi-script .cgi
                Options +Indexes +ExecCGI
                DirectoryIndex index.cgi
                AllowOverride Limit FileInfo Indexes
            </Directory>
            <Directory /usr/local/apache/htdocs/bugzilla>
                AddHandler cgi-script .cgi
                Options +Indexes +ExecCGI
                DirectoryIndex index.cgi
                AllowOverride Limit FileInfo Indexes
            </Directory>
            <Directory />
                Options FollowSymLinks
                AllowOverride None
                Order deny,allow
                Deny from all
            </Directory>
            <Directory "/usr/local/apache/htdocs">
                Options Indexes FollowSymLinks
                AllowOverride None
                Order allow,deny
                Allow from all
            </Directory>
            <IfModule dir_module>
                DirectoryIndex index.html
            </IfModule>
            <FilesMatch "^\.ht">
                Order allow,deny
                Deny from all
                Satisfy All
            </FilesMatch>
        </VirtualHost>
        </IfDefine>

    A telnet to this server reveals that the server is sending plain HTML back. Furthermore, it seems as though mod_ssl is not loaded/working, even though httpd -l shows it as statically compiled in. I have exhausted most of the avenues I can think of.
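    That Firefox error almost always means the server answered the TLS handshake on port 443 with plain HTTP, which matches the telnet observation. A few hedged checks (the <IfDefine SSL> wrapper only takes effect if httpd is started with -DSSL, so the startup flags are worth confirming; paths below are assumptions based on the /usr/local/apache prefix):

        # Does anything on 443 actually speak TLS?
        openssl s_client -connect sludge.home:443

        # Is the SSL module compiled in/loaded, and is there a "Listen 443" in effect?
        /usr/local/apache/bin/httpd -M 2>/dev/null | grep -i ssl
        grep -ri "^Listen" /usr/local/apache/conf/

        # If the vhost is wrapped in <IfDefine SSL>, httpd must be started with that define.
        /usr/local/apache/bin/apachectl -DSSL -k restart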


  • SSH Tunnel for Remote Desktop via Intermediary Server Part II

    - by Mihai Todor
    I asked previously how to configure two SSH tunnels using an intermediary server in order to run Remote Desktop through them, and I managed to make it work. Now I'm trying to do the same, using the same machines, but in reverse order. Here's the setup:

        - A Windows 7 PC in a private network, sitting behind a firewall.
        - A public-access Linux server, which has access to the PC.
        - A Windows 7 laptop, at home, which I want to reach with Remote Desktop from the PC.

    I use PuTTY on the laptop to create a reverse tunnel from it to the Linux server: R60666 localhost:3389. I use PuTTY on the PC to create a regular tunnel from it to the Linux server: L60666 localhost:60666. I SSH to the Linux server and run telnet localhost 60666, and it seems to produce the expected output, as described in the debugging tips that I received here. I try to connect Remote Desktop from the PC to the laptop: localhost:60666. It asks for my username and password, I click OK, and it locks my current session on the laptop (so I see the welcome screen on the laptop instead of my desktop), it shows the "Welcome" message in the Remote Desktop screen and then it just goes black. It doesn't disconnect, it doesn't give any error, and I'm not able to perform any actions in the Remote Desktop screen. I tried the same setup with a Windows XP laptop and I'm experiencing the same symptoms. I also tried using different ports than 60666, but nothing changed. Does anybody have any idea what I'm doing wrong?

    Update: As pointed out by @jwinders, I'm not able to run telnet PC 3389 from the Linux server directly. Since Windows Firewall has a rule to allow all connections on port 3389, I have no idea what is blocking it. Fortunately, I'm able to create an SSH tunnel from the Linux machine to the PC: ssh 3389:localhost:3389 'domain\user'@PC.
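    For reference, the same chain expressed with command-line OpenSSH clients; the PuTTY settings in the post map directly onto the -R and -L flags shown here (hostnames and the account name are placeholders):

        # On the laptop (the RDP target): publish its local RDP port on the Linux server.
        ssh -N -R 60666:localhost:3389 user@linux-server

        # On the PC (the RDP client): pull that port back down to localhost.
        ssh -N -L 60666:localhost:60666 user@linux-server

        # Then point the Remote Desktop client on the PC at localhost:60666.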


  • How do you host multiple public facing websites on a VPS?

    - by pedroarvy
    We host about 30 websites using typical shared hosting plans using ASP.NET and SQL 2000/2005/2008. I am now wondering about hosting all of these websites using our own virtual private server. This is clearly cheaper, but comes with a lot of questions I need answers to:

        - Is the risk of having to keep this VPS server up and running worth it? Until now, the host provider has managed the server and we have not had to worry about crashes, downtime, software patches etc. We are not server administrators, we are programmers, so this is not really our expertise. On the other hand, it may not be hard to learn.
        - When we make a website live, we log in to a domain management control panel and change the primary and secondary name servers to point to our shared web host, e.g. ns1.sharedwebhost.com and ns2.sharedwebhost.com. These name servers are going to have to change when we have a VPS. I don't understand anything about how to set this up. Is there some useful info anyone could direct me to? Or is there software we need to install to make the primary and secondary name servers work on our VPS?
        - The control panel we have for shared hosting comes with DNS management like this: http://www.yart.com.au/stackoverflow/dns.png What software would I need to install to create this for each site we host at a VPS?
        - The control panel we have for shared hosting also comes with a POP email interface that allows email addresses to be added easily by our customers. Is this something that can be easily set up at a VPS so clients can manage their own email addresses? Is there software we need to install to make this work?


  • How to find out where or if MYSQL5 logs are stored on a machine WHM/Cpanel

    - by moi
    I have a WHM/cPanel reseller hosting account on a virtual private server (Linux). I have root access to the machine via SSH. I am trying to locate a file that contains information that will help me to determine which users have accessed which databases and from which hosts. I would imagine this kind of data is stored in a log file somewhere. The MySQL documentation says:

        The general query log - Established client connections and statements received from clients

    See: http://dev.mysql.com/doc/refman/5.0/en/server-logs.html

    It also says:

        By default, all log files are created in the mysqld data directory.

    So, I am NOT asking where the general query logs are stored (because I expect I will get answers saying "it depends"). Please help me work out: "How can I go about finding out where the MySQL general query logs are stored on a Linux machine?"

    A couple of things I've already tried: I looked at /etc/my.cnf; it was a tiny file that only contained the following:

        [mysqld]
        skip-bdb
        skip-innodb
        set-variable = max_connections=500
        safe-show-database

    I have also looked in /var/lib/mysql/, but I could not see any log-like file names in that directory. Any clues on this would be most welcome.
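    MySQL itself can report whether the general log is enabled and where it would be written, which narrows the search regardless of what my.cnf shows. A small sketch (the general_log and log_output variables exist on MySQL 5.1 and later; on 5.0 the log is only enabled via a "log" line in my.cnf, so treat the variable names as version-dependent):

        # Where the data directory (the default log location) lives.
        mysql -u root -p -e "SHOW VARIABLES LIKE 'datadir';"

        # On MySQL 5.1+: is the general query log on, and which file or table does it go to?
        mysql -u root -p -e "SHOW VARIABLES LIKE 'general_log%';"
        mysql -u root -p -e "SHOW VARIABLES LIKE 'log_output';"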


  • s3cmd fails too many times

    - by alfish
    It used to be my favorite backup transport agent, but now I frequently get this result from s3cmd on the very same Ubuntu server/network:

        root@server:/home/backups# s3cmd put bkup.tgz s3://mybucket/
        bkup.tgz -> s3://mybucket/bkup.tgz  [1 of 1]
         36864 of 2711541519     0% in    1s    20.95 kB/s  failed
        WARNING: Upload failed: /bkup.tgz ([Errno 32] Broken pipe)
        WARNING: Retrying on lower speed (throttle=0.00)
        WARNING: Waiting 3 sec...
        bkup.tgz -> s3://mybucket/bkup.tgz  [1 of 1]
         36864 of 2711541519     0% in    1s    23.96 kB/s  failed
        WARNING: Upload failed: /bkup.tgz ([Errno 32] Broken pipe)
        WARNING: Retrying on lower speed (throttle=0.01)
        WARNING: Waiting 6 sec...
        bkup.tgz -> s3://mybucket/bkup.tgz  [1 of 1]
         28672 of 2711541519     0% in    1s    18.71 kB/s  failed
        WARNING: Upload failed: /bkup.tgz ([Errno 32] Broken pipe)
        WARNING: Retrying on lower speed (throttle=0.05)
        WARNING: Waiting 9 sec...
        bkup.tgz -> s3://mybucket/bkup.tgz  [1 of 1]
         28672 of 2711541519     0% in    1s    18.86 kB/s  failed
        WARNING: Upload failed: /bkup.tgz ([Errno 32] Broken pipe)
        WARNING: Retrying on lower speed (throttle=0.25)
        WARNING: Waiting 12 sec...
        bkup.tgz -> s3://mybucket/bkup.tgz  [1 of 1]
         28672 of 2711541519     0% in    1s    15.79 kB/s  failed
        WARNING: Upload failed: /bkup.tgz ([Errno 32] Broken pipe)
        WARNING: Retrying on lower speed (throttle=1.25)
        WARNING: Waiting 15 sec...
        bkup.tgz -> s3://mybucket/bkup.tgz  [1 of 1]
         12288 of 2711541519     0% in    2s     4.78 kB/s  failed
        ERROR: Upload of 'bkup.tgz' failed too many times. Skipping that file.

    This happens even for files as small as 100 MB, so I suppose it's not a size issue. It also happens when I use put with the --acl-private flag (s3cmd version 1.0.1). I would appreciate it if you could suggest a solution or a lightweight alternative to s3cmd. Thanks.
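    The transfers die within the first few kilobytes, so this looks like the connection being dropped rather than a size limit; still, s3cmd 1.0.1 has no multipart support and restarts the whole 2.7 GB object on every retry, so one pragmatic workaround is to split the archive and upload smaller pieces that can be retried cheaply. A minimal sketch (the chunk size and bucket name are assumptions):

        # Split the backup into 256 MB pieces, upload each, then verify the listing.
        split -b 256m bkup.tgz bkup.tgz.part-
        for part in bkup.tgz.part-*; do
            s3cmd put "$part" "s3://mybucket/$part" || echo "FAILED: $part"
        done
        s3cmd ls s3://mybucket/ | grep bkup.tgz.part-

        # To restore later, download the parts and reassemble:
        #   cat bkup.tgz.part-* > bkup.tgz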


  • Installing drivers for switchable graphics

    - by Anonymous
    I recently bought a laptop that came with Windows 7 64-bit installed. I have some older (16-bit and 32-bit) software that doesn't work with 64-bit Windows but works just fine with 32-bit. Since I also wanted to get rid of all of the pre-installed spam, I decided to wipe the hard drive and install a fresh copy of Windows 7 32-bit. Now I can't get the graphics cards working. This laptop uses switchable graphics: an Intel card and a Radeon card.

    I first tried installing this driver from Intel, which works for the Intel card. Of course, the Radeon card doesn't work with this driver, and I need it for some of the newer games I have. I also tried this driver. Windows' Device Manager will recognize the Radeon card, but it will still use the Intel card. Also, even though that package says it contains the Intel driver, the Intel card still isn't properly recognized by Windows (leaving me with a nasty 800x600 resolution). On top of that, the Catalyst Control Center won't open (saying "The Catalyst Control Center is not supported by the driver version of your enabled graphics adapter").

    I tried installing HP's driver and then installing Intel's driver on top of it. Device Manager will then recognize both graphics cards properly. However, the laptop still uses the Intel card. The CCC still won't start (saying the same thing as before) and I can't find any way of 'switching' graphics cards. Before formatting, I could right-click the desktop and click "Configure Switchable Graphics". This option hasn't been in the context menu regardless of what driver(s) I've installed. After some research, I found out that this menu entry runs the command "cli.exe Start PowerXpressHybrid". I've tried manually running this command, but I get the same unsupported message from CCC.

    So, does anyone know how I can get this working? I would like to be able to switch between the Intel and the Radeon. But if there's some way to disable the Intel and use only the Radeon, that would be fine. I dual-boot with Linux (the framebuffer uses the Intel; I haven't even tried getting X set up yet). Here's the output of lspci:

        # lspci -v | grep VGA
        00:02.0 VGA compatible controller: Intel Corporation 2nd Generation Core Processor Family Integrated Graphics Controller (rev 09) (prog-if 00 [VGA controller])
        01:00.0 VGA compatible controller: ATI Technologies Inc NI Seymour [AMD Radeon HD 6470M] (prog-if 00 [VGA controller])

    The laptop is an HP Pavilion g6t-1d00. HP doesn't support installing anything but Windows 7 64-bit, so calling tech support isn't an option. Thanks for any help.

    UPDATE: I finally got it working. After a fresh install of Windows 7, I installed the HP driver (the one linked above). Then there's an optional Windows update I installed (I don't remember the exact name, but it'll stick out). After that, graphics switching works just like it's supposed to. Moab, thanks anyway for your help.


  • Passenger error: No such file or directory - config/environment.rb

    - by JJD
    I installed Redmine on Mac OS X Server 10.6.8 according to this installation description. So far everything works fine: when I start WEBrick, the server serves the Redmine pages. The gems and Redmine are installed under the user "redmine". After that I aimed at configuring Apache2 with Passenger as described here. As suggested by the description, I also installed the Passenger Pane, which stores its virtual host configuration files in /private/etc/apache2/passenger_pane_vhosts. This is what I came up with after a lot of manual trial and error. At least now I can reach a Passenger error page.

        // redmine.vhost.conf
        <VirtualHost *:80>
            ServerName host
            ServerAlias localhost
            DocumentRoot "/Users/redmine/Sites/redmine"
            # RackEnv production
            # RackBaseURI /
            RailsEnv production
            RailsBaseURI /
            # PassengerUser www-data
            # PassengerGroup www-data
            <Directory "/Users/redmine/Sites/redmine">
                Order allow,deny
                Allow from all
            </Directory>
        </VirtualHost>

    However, the Passenger module still runs into the following error:

        Error message: No such file or directory - config/environment.rb

    The /var/log/apache2/error_log of the web server states the following:

        [warn] NameVirtualHost *:80 has no VirtualHosts
        [notice] Apache/2.2.21 (Unix) Phusion_Passenger/3.0.12 configured -- resuming normal operations
        [ pid=21824 thr=2151905620 file=utils.rb:176 time=2012-06-01 18:22:07.126 ]: *** Exception Errno::ENOENT in PhusionPassenger::ClassicRails::ApplicationSpawner (No such file or directory - config/environment.rb) (process 21824, thread #<Thread:0x0000010086f2a8>):

    I experimented with the user-switching functionality of Passenger as described in the documentation - as you can tell from my configuration file - but I was not successful.
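    Passenger treats the directory one level above DocumentRoot as the application root; with DocumentRoot set to the Redmine directory itself, it looks for config/environment.rb under ~/Sites, which matches the ENOENT in the log. A hedged check and adjustment (paths are taken from the post; whether Redmine should be spawned as classic Rails or Rack depends on the Redmine version):

        # The Rails application root should contain these; Passenger expects DocumentRoot
        # to be the public/ directory *inside* that root.
        ls /Users/redmine/Sites/redmine/config/environment.rb
        ls -d /Users/redmine/Sites/redmine/public

        # Likely change in redmine.vhost.conf, followed by an Apache restart:
        #   DocumentRoot "/Users/redmine/Sites/redmine/public"
        #   <Directory "/Users/redmine/Sites/redmine/public">
        sudo apachectl restart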


  • How to remotely open gedit with SFTP URL in Gnome through SSH?

    - by Álvaro Justen
    My setup is weird and I can't change it now. I have two machines:

        - local-machine: my desktop, running Ubuntu with Gnome
        - remote-machine: a virtual machine, also running Ubuntu but without X

    On both machines I have my private and public SSH keys. I need to SSH from remote-machine to local-machine and run gedit (on local-machine, under the default $DISPLAY), but opening a file on remote-machine through SFTP. Something like this:

        myuser@remote-machine:~$ ssh local-machine "DISPLAY=:0.0 gedit sftp://remote-machine/some/file"

    The command above doesn't work. gedit shows this message:

        Could not open the file sftp://remote-machine/some/file.
        gedit cannot handle sftp: locations.

    Note that:

        - /some/file exists on remote-machine.
        - I can SSH normally from remote-machine to local-machine using my SSH key without any problems!
        - I can run the command DISPLAY=:0.0 gedit sftp://remote-machine/some/file in a terminal on local-machine and gedit opens the file on remote-machine without any problems - but the terminal in which I executed the command is running in DISPLAY :0 (really, it's gnome-terminal).
        - I also tried the -t option of the SSH client (to force pseudo-tty allocation), but it didn't work.
        - If I try to run DISPLAY=:0.0 gedit sftp://remote-machine/some/file on local-machine but under a tty (for example in tty1, by pressing <Ctrl>+<Alt>+<F1>), it does not work - I get the same error as when running from remote-machine.

    I found that if I pass the environment variable DBUS_SESSION_BUS_ADDRESS with a correct value, it works! So, if I do something like this:

        myuser@local-machine:~$ env | grep DBUS_SESSION_BUS_ADDRESS > env.txt
        myuser@local-machine:~$ scp env.txt remote-machine:

    and then:

        myuser@remote-machine:~$ ssh local-machine "DISPLAY=:0.0 $(cat env.txt) gedit sftp://remote-machine/some/file"

    it works! The problem is that I'm not on local-machine, so I can't get the correct value for this env variable. Is there any other way to make this work?
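    Since the missing piece is only the session bus address, one workaround is to read it out of a process that is already running inside the local Gnome session, via /proc, as part of the remote command. A hedged sketch (borrowing the environment from gnome-session is an assumption, and gedit still needs the gvfs SFTP backend available in that session):

        # Run from remote-machine: pull DBUS_SESSION_BUS_ADDRESS out of the running
        # gnome-session process on local-machine, then launch gedit with it set.
        ssh local-machine '
          pid=$(pgrep -u "$USER" gnome-session | head -n1)
          export $(tr "\0" "\n" < /proc/$pid/environ | grep ^DBUS_SESSION_BUS_ADDRESS)
          DISPLAY=:0.0 gedit sftp://remote-machine/some/file
        '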


  • vagrant up command very slow on OS X Lion

    - by Andy Hume
    When I run vagrant up to provision a new VM on Lion it takes an extremely long time, during which the entire Mac is very laggy and unresponsive. The output is as follows, the key point being the "notice: Finished catalog run in 754.28 seconds":

        > vagrant up
        [default] Importing base box 'lucid64'...
        [default] The guest additions on this VM do not match the install version of VirtualBox! This may cause things such as forwarded ports, shared folders, and more to not work properly. If any of those things fail on this machine, please update the guest additions and repackage the box.
        Guest Additions Version: 4.1.0
        VirtualBox Version: 4.1.6
        [default] Matching MAC address for NAT networking...
        [default] Clearing any previously set forwarded ports...
        [default] Forwarding ports...
        [default] -- ssh: 22 => 2222 (adapter 1)
        [default] -- web: 80 => 4567 (adapter 1)
        [default] Creating shared folders metadata...
        [default] Running any VM customizations...
        [default] Booting VM...
        [default] Waiting for VM to boot. This can take a few minutes.
        [default] VM booted and ready for use!
        [default] Mounting shared folders...
        [default] -- v-root: /vagrant
        [default] -- v-data: /var/www
        [default] -- manifests: /tmp/vagrant-puppet/manifests
        [default] Running provisioner: Vagrant::Provisioners::Puppet...
        [default] Running Puppet with lucid64.pp...
        [default] stdin: is not a tty
        [default] notice: /Stage[main]/Lucid64/Exec[apt-update]/returns: executed successfully
        [default] notice: /Stage[main]/Lucid64/Package[apache2]/ensure: ensure changed 'purged' to 'present'
        [default] notice: /Stage[main]/Lucid64/File[/etc/motd]/ensure: defined content as '{md5}a25e31ba9b8489da9cd5751c447a1741'
        [default] notice: Finished catalog run in 754.28 seconds
        [default] err: /File[/var/lib/puppet/rrd]/ensure: change from absent to directory failed: Could not find group puppet
        [default] err: Could not send report: Got 1 failure(s) while initializing: change from absent to directory failed: Could not find group puppet
        [default] Running provisioner: Vagrant::Provisioners::Puppet...
        [default] Running Puppet with lucid64.pp...
        [default] stdin: is not a tty
        [default] notice: /Stage[main]/Lucid64/Exec[apt-update]/returns: executed successfully
        [default] notice: Finished catalog run in 2.05 seconds
        [default] err: /File[/var/lib/puppet/rrd]: Could not evaluate: Could not find group puppet
        [default] err: Could not send report: Got 1 failure(s) while initializing: Could not evaluate: Could not find group puppet
        [default] Running provisioner: Vagrant::Provisioners::Puppet...
        [default] Running Puppet with lucid64.pp...
        [default] stdin: is not a tty
        [default] notice: /Stage[main]/Lucid64/Exec[apt-update]/returns: executed successfully
        [default] notice: Finished catalog run in 1.36 seconds
        [default] err: /File[/var/lib/puppet/rrd]: Could not evaluate: Could not find group puppet
        [default] err: Could not send report: Got 1 failure(s) while initializing: Could not evaluate: Could not find group puppet
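    Two different things show up in that output: the 754-second first run is mostly the apt-update Exec and the apache2 package install doing real network work, while the repeated errors are Puppet's report/rrd handling looking for a 'puppet' group that the base box never created. A hedged sketch for the latter (the group name is taken straight from the error message):

        # Create the group Puppet's report directory expects, then re-run provisioning.
        vagrant ssh -c 'sudo groupadd puppet'
        vagrant provision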


  • RDS, RDWeb, and RemoteApp: How to use public certificate for launching apps on session host?

    - by Bret Fisher
    Question: How do I tell RDWeb to launch apps from remote.domain.com rather than host.internaldomain.local?

    Environment: An existing org with an AD forest, and a new single Server 2012 box running all Remote Desktop Services roles for the session host. I used the new 2012 wizard to set up "QuickSessionCollection" with the roles:

        - RD Session Host
        - RD Connection Broker
        - RD Gateway
        - RD Web Access
        - RD Licensing

    Everything works with the self-signed cert, but we want to prevent those. The users are potentially on non-domain machines, so sticking a private root cert on their machines isn't an option. Every part of the solution needs to use the public cert. I added the public remote.domain.com cert to all roles using the Server Manager GUI:

        - RD Connection Broker - Enable Single Sign On
        - RD Connection Broker - Publishing
        - RD Web Access
        - RD Gateway

    So now everything works beautifully except the last step:

        - The user logs into https://remote.domain.com.
        - The user clicks an app icon, which in the background downloads a .rdp file that is signed by remote.domain.com.
        - The .rdp is set to use the RD Gateway, which is remote.domain.com.
        - The .rdp says the app is hosted on the internal host.internaldomain.local, which doesn't match the RDP-Tcp TLS cert of remote.domain.com, and pops a warning.

    It's this last step that I'd like to fix. Is there a config option in PowerShell, WMI, or a .config file to tell RDWeb/RemoteApp to use remote.domain.com for all published apps so the TLS cert for RDP matches what the Session Host is using?

    NOTE: This question talks about this issue, and this answer mentions how you might fix it in 2008, but that GUI doesn't exist in 2012 for RemoteApp, and I can't find a PowerShell setting for it.

    NOTE: Here's a screenshot of the setting in 2008 R2 that I need to change. It tells RemoteApp what to use for the Session Host server name. How can I set that in 2012?


  • Windows Question: RunOnce/Second Boot Issues

    - by Greg
    I am attempting to create a Windows XP SP3 image that will run my application on second boot. Here is the intended workflow:

        1) Run an image prep utility (I wrote) on Windows to add my RunOnce entries and clean a few things up.
        2) Reboot to Ghost, make the image file.
        3) Package into my ISO and distribute.
        4) The system is imaged by the user.
        5) On first boot, I have about 5 things that run, one of which is a driver updater (I wrote) for my own specific devices.
        6) One of the entries inside of HKCU/../RunOnce is a reg file, which adds another key to HKLM/../RunOnce. This is how second boot is acquired.
        7) As a result of the driver updater, the user is prompted to reboot.
        8) My application is then launched from HKLM/../RunOnce on second boot.

    This workflow works perfectly, except for a select few legacy systems that contain devices that cause the Add Hardware wizard to pop up. When the Add Hardware wizard pops up is when I begin to see problems. It's important to note that if I manually inspect the registry after the Add Hardware wizard pops up, it appears as I would expect, with all the first-boot scripts having run, and it's sitting in the state I would correctly expect it to be in for a second-boot scenario. The problem comes when I click Next on the Add Hardware wizard: it seems to re-run the single entry I've added and re-executes the RunOnce scripts (only one script now, as it's already executed and cleared out the initial entries). This causes my application to open as if it were a second boot, only when Next is clicked on the Add Hardware wizard. If I click Cancel and reboot, then it also works as expected. I don't care as much about other solutions, because I could design a system that doesn't fully rely on Microsoft's registry. I simply can't find any information as to WHY this is happening. I believe this is some type of Microsoft issue that's presenting itself as a result of an overstretched image that's expected to support too many legacy platforms, but any help that can be provided would be appreciated. Thanks,


  • Thunderbird doesn't show folders on a new Dovecot install

    - by Zoran Zaric
    Hey, I set up a new mail server with Postfix and Dovecot a few days ago. Everything is working, except that Thunderbird doesn't show any folders. Evolution shows me all folders. I migrated from a Courier install using imapsync. In the filesystem the folders don't have INBOX in their name, so the folders are called .Folder 1, not .INBOX.Folder 1. This is the output of dovecot -n:

        # 1.0.10: /etc/dovecot/dovecot.conf
        Warning: mail_extra_groups setting was often used insecurely so it is now deprecated, use mail_access_groups or mail_privileged_group instead
        base_dir: /var/run/dovecot/
        log_timestamp: "%Y-%m-%d %H:%M:%S "
        protocols: imap pop3
        listen(default): *:143
        listen(imap): *:143
        listen(pop3): *:110
        disable_plaintext_auth: no
        login_dir: /var/run/dovecot//login
        login_executable(default): /usr/lib/dovecot/imap-login
        login_executable(imap): /usr/lib/dovecot/imap-login
        login_executable(pop3): /usr/lib/dovecot/pop3-login
        first_valid_uid: 1001
        last_valid_uid: 1001
        mail_extra_groups: vmail
        mail_access_groups: vmail
        mail_location: maildir:/var/vmail/%d/%u
        maildir_copy_with_hardlinks: yes
        mail_executable(default): /usr/lib/dovecot/imap
        mail_executable(imap): /usr/lib/dovecot/imap
        mail_executable(pop3): /usr/lib/dovecot/pop3
        mail_plugin_dir(default): /usr/lib/dovecot/modules/imap
        mail_plugin_dir(imap): /usr/lib/dovecot/modules/imap
        mail_plugin_dir(pop3): /usr/lib/dovecot/modules/pop3
        pop3_uidl_format(default):
        pop3_uidl_format(imap):
        pop3_uidl_format(pop3): %08Xu%08Xv
        auth default:
          user: nobody
          passdb:
            driver: sql
            args: /etc/dovecot/dovecot-sql.conf
          userdb:
            driver: sql
            args: /etc/dovecot/dovecot-sql.conf
          socket:
            type: listen
            client:
              path: /var/spool/postfix/private/auth
              mode: 432
              user: postfix
              group: postfix
            master:
              path: /var/run/dovecot/auth-master
              mode: 432
              user: vmail
              group: vmail

    Thanks!


  • Puppet Agent fails sporadically, with either timeout or "Could not find class" error

    - by smokris
    I have a puppet master running on a Xen dom0, and 3 domUs syncing to it via an hourly crontab puppet agent --test. About 80% of the time, the puppet agent --test completes successfully:

        info: Retrieving plugin
        info: Caching catalog for test3
        info: Applying configuration version '1333319732'
        notice: Finished catalog run in 5.08 seconds

    The other 20% of the time, it fails midway, with errors such as the following:

        err: Could not retrieve catalog from remote server: Error 400 on SERVER: Could not find class iptables for test1 at /etc/puppet/manifests/site.pp:1 on node test1
        warning: Not using cache on failed catalog
        err: Could not retrieve catalog; skipping run

    or

        info: Retrieving plugin
        info: Caching catalog for test2
        info: Applying configuration version '1333319732'
        notice: Finished catalog run in 24.73 seconds
        err: Could not send report: Error 500 on SERVER: Internal Server Error
        private method `gsub' called for WEBrick::HTTPStatus::RequestTimeout:Class
        WEBrick/1.3.1 (Ruby/1.8.5/2006-08-25) OpenSSL/0.9.8e-rhel5 at puppet:8140

    or

        info: Retrieving plugin
        err: Could not retrieve catalog from remote server: execution expired
        warning: Not using cache on failed catalog
        err: Could not retrieve catalog; skipping run

    or

        info: Retrieving plugin
        info: Caching catalog for test3
        info: Applying configuration version '1333319732'
        notice: Finished catalog run in 9.47 seconds
        err: Could not send report: Error 408 on SERVER: Request Timeout

    During this time, I've not made any changes to the Puppet configuration; it just sporadically fails. I'm running puppet-2.7.12 on CentOS, and followed the setup instructions described on http://docs.puppetlabs.com/learning/agent_master_basic.html. Any ideas about how I can troubleshoot this?
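    The WEBrick 500/408 errors and the timeouts are consistent with all three agents hitting the single-threaded default master at the same minute of every hour; the usual remedies are to stagger the runs or to move the master off WEBrick onto a real app server. A minimal sketch of staggering the cron runs with a random sleep (the wrapper path and the 10-minute window are assumptions):

        #!/bin/bash
        # /usr/local/bin/puppet-splay-run -- install on each domU and mark executable.
        # Sleep a random amount (up to 10 minutes) so the agents don't all hit the
        # master at the same moment, then do the normal run.
        sleep $(( RANDOM % 600 ))
        /usr/bin/puppet agent --test --logdest syslog

        # Hourly crontab entry on each domU (replacing the bare "puppet agent --test"):
        #   0 * * * * /usr/local/bin/puppet-splay-run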


  • puppet cert mismatch in ec2

    - by Stick
    I'm setting up a puppetmaster (2.7.6) in EC2 via gems (on RHEL 6) and I'm running into problems with the cert names and getting the master able to talk to itself. My puppet.conf looks like this:

        [main]
            logdir = /var/log/puppet
            rundir = /var/run/puppet
            vardir = /var/lib/puppet
            ssldir = $vardir/ssl
            pluginsync = true
            environment = production
            report = true
            certname = master

    When I start the puppetmaster process the ssl directory looks like:

        ssl/private_keys/master.pem
        ssl/crl.pem
        ssl/public_keys/master.pem
        ssl/ca/ca_crl.pem
        ssl/ca/signed/master.pem
        ssl/ca/ca_crt.pem
        ssl/ca/ca_pub.pem
        ssl/ca/ca_key.pem
        ssl/certs/ca.pem
        ssl/certs/master.pem

    I have an /etc/hosts entry on the box to point the 'puppet' hostname to localhost so that I don't have to change the 'server' option. When I run the agent I get the following:

        # puppet agent --test
        info: Retrieving plugin
        err: /File[/var/lib/puppet/lib]: Failed to generate additional resources using 'eval_generate': Server hostname 'puppet' did not match server certificate; expected master
        err: /File[/var/lib/puppet/lib]: Could not evaluate: Server hostname 'puppet' did not match server certificate; expected master Could not retrieve file metadata for puppet://puppet/plugins: Server hostname 'puppet' did not match server certificate; expected master
        err: Could not retrieve catalog from remote server: Server hostname 'puppet' did not match server certificate; expected master
        warning: Not using cache on failed catalog
        err: Could not retrieve catalog; skipping run
        err: Could not send report: Server hostname 'puppet' did not match server certificate; expected master

    If I specify the certname as the server (with a corresponding hosts entry) I get:

        # puppet agent --test --server master
        info: Retrieving plugin
        err: /File[/var/lib/puppet/lib]: Could not evaluate: Could not retrieve information from environment production source(s) puppet://master/plugins
        info: Caching catalog for master
        info: Applying configuration version '1321805956'
        notice: Finished catalog run in 0.05 seconds

    Which is success of a sort, but that source error will bite me later when I'm applying manifests. I've tried a couple of other variations using the EC2 private hostname and gotten mixed results. I'd like to avoid setting server = 'x' and instead use DNS/hosts to control what 'puppet' resolves to in order to decide which server to use (this plays more easily with availability zones, etc.).
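    The agent refuses the connection because the master's certificate was issued only for 'master' while the agent connects to the name 'puppet'. One common approach is to regenerate the master's certificate with both names as DNS subject alternative names; recent 2.7.x releases expose this as the dns_alt_names setting, but the exact workflow is version-dependent, so treat the following as a sketch:

        # Stop the master, then discard its existing certificate.
        puppet cert clean master

        # Add the alternative names under [master] in puppet.conf, for example:
        #   dns_alt_names = master,puppet,puppet.internal.example.com
        # Restarting the master regenerates its cert; then verify the SANs:
        openssl x509 -noout -text -in /var/lib/puppet/ssl/certs/master.pem \
            | grep -A1 "Subject Alternative Name"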


  • Azure can't ping or telnet VM from client

    - by Raif
    I have a VM on Azure with an instance of SQL Server 2012 running on it. From my work computer and my home computer, I can't get SQL Server Management Studio to connect to it. I have looked at ALL the settings recommended in numerous articles, and everything is set up correctly:

        - endpoint 1433, private and public
        - SQL Server TCP enabled
        - SQL Server TCP listening on the right port
        - SQL Server using mixed authentication
        - Windows firewall: holes poked, and then disabled, on both the client and the VM
        - I can log in from the VM using the credentials that I'm trying to use remotely

    Furthermore, I can't ping or telnet to the DNS name or IP from my local machines. I can, however, hit IIS from a browser using the IP. Strange. Customer support asked me to download MS Network Monitor, which I did, and pinged and telneted. I have the results saved but can't really make heads or tails of them. CS hasn't responded yet. I can post some info here that would help.

    EDIT: Never one to shrink from a challenge, I deleted my VM and re-did everything. Now it works, although my confidence in Azure is somewhat shaken.


  • How to perform SCP as a Sudo user

    - by Ramesh.T
    What is the best way of doing SCP from one box to the other as a sudo user? There are two servers:

        Server A: 10.152.2.10, /home/oracle/export/files.txt, user: deploy
        Server B: 10.152.2.11, /home/oracle/import/, user: deploy, sudo user: /usr/local/bin/tester

    All I want is to copy files from Server A to Server B as the sudo user. To do this, I normally log in as the deploy user on the target server, switch to the sudo user without a password, and then use SCP to copy the file; this is the normal way I perform this activity. To automate it I have written this script:

        #!/bin/sh
        ssh deploy@lnx120 sudo /usr/local/bin/tester "./tester/deploy.sh"

    I have generated a private key for the deploy user, so it lets me log in as deploy without a password. After that the sudo command is executed and it switches the user to tester... and after that nothing happens. I mean the script is not getting executed. Is there any way to accomplish this differently?
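    One pattern that avoids the interactive user-switch step entirely is to stream the file over SSH and let sudo run only the receiving command on the target box. A hedged sketch (it assumes the deploy user on Server B may run tee/mv via sudo without a password, and that sudoers does not enforce requiretty for the piped variant):

        # From Server A: push the file and land it under sudo on Server B.
        cat /home/oracle/export/files.txt | \
            ssh [email protected] 'sudo tee /home/oracle/import/files.txt > /dev/null'

        # Alternatively: copy to a staging path as deploy, then move it with sudo.
        scp /home/oracle/export/files.txt [email protected]:/tmp/files.txt
        ssh -t [email protected] 'sudo mv /tmp/files.txt /home/oracle/import/'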


  • VSFTPD does not allow upload with virtual users

    - by Mr. Squig
    I am attempting to set up VSFTPD with virtual users on a server running Ubuntu 12.04. I have configured the server to allow virtual users to log in, but I am having trouble getting it to allow uploads. My vsftpd.conf is as follows:

        listen=YES
        anonymous_enable=NO
        local_enable=YES
        write_enable=YES
        local_umask=022
        anon_upload_enable=YES
        dirmessage_enable=YES
        use_localtime=YES
        xferlog_enable=YES
        connect_from_port_20=YES
        chroot_local_user=YES
        virtual_use_local_privs=YES
        guest_enable=YES
        guest_username=virtual
        user_sub_token=$USER
        local_root=/var/www/$USER
        hide_ids=YES
        secure_chroot_dir=/var/run/vsftpd/empty
        pam_service_name=vsftpd
        rsa_cert_file=/etc/ssl/private/vsftpd.pem

    /etc/pam.d/vsftpd contains:

        auth    required pam_pwdfile.so pwdfile /etc/vsftpd.passwd crypt=hash
        account required pam_permit.so crypt=hash

    I have two virtual users set up, one of which has the same name as a local user. They each have a directory in /var/www/ owned by 'virtual'. As I understand it, when a virtual user logs in this way they will appear to the system as the user virtual. Using this configuration, users can log on but cannot upload files. The error given in /var/log/vsftpd.log is:

        Tue Nov 20 19:49:00 2012 [pid 2] CONNECT: Client "96.233.116.53"
        Tue Nov 20 19:49:07 2012 [pid 1] [zac] OK LOGIN: Client "96.233.116.53"
        Tue Nov 20 19:49:11 2012 [pid 2] CONNECT: Client "96.233.116.53"
        Tue Nov 20 19:49:11 2012 [pid 1] [zac] OK LOGIN: Client "96.233.116.53"
        Tue Nov 20 19:49:11 2012 [pid 3] [zac] FAIL CHMOD: Client "96.233.116.53", "/test.ppm 644"

    I have tried changing the permissions of these directories in all sorts of ways, but nothing seems to work. I have a feeling that it is something simple related to permissions. Any ideas?
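    Note that the log records a FAIL CHMOD rather than a failed STOR: many FTP clients try to chmod a file right after uploading it, and vsftpd refuses chmod for guest logins, which can make a successful upload look like a failure. A small hedged test that separates the two operations (the lftp invocation, hostname, and credentials are placeholders):

        # Upload, list, then attempt the chmod as a separate step, and watch
        # /var/log/vsftpd.log to see which operation actually fails.
        lftp -u zac,SECRET -e "put test.ppm; ls; chmod 644 test.ppm; bye" ftp://ftp.example.com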

