Search Results

Search found 13788 results on 552 pages for 'instance'.


  • Using GitOAuthPlugin for Jenkins - not working as expected

    - by Blundell
    I need some clarity and maybe a fix. I'm using this plugin to authorise who can view our Jenkins CI server: https://wiki.jenkins-ci.org/display/JENKINS/Github+OAuth+Plugin

    As I understand it, anyone who is auth'd to view one of our GitHub projects can also log in to our Jenkins box. This works. I thought it would also allow the person logging in to view only the projects they have GitHub permission on. For instance: three projects on GitHub (A, B, C) and three builds on Jenkins. User 1 has Git access to all 3 projects (A, B, C). User 2 has Git access to only 1 project (A). When logging into Jenkins, User 1 can see all 3 projects (this works), and User 2 should only see project A. The problem is that User 2 can also see all 3 projects when they should only see 1! Have I got this correct, and if so, is this a bug?

    I have the settings set in the Jenkins configuration under Github Authorization Settings. Here we have some admin users, one organization, and none of the 4 checkboxes ticked. (User 2 is not an admin and is not part of the org.) The plugin is open sourced here: https://github.com/mocleiri/github-oauth-plugin

    I was trying to get Jenkins to print me the logs from the plugin, but I also failed at viewing these (to see if there was an issue). I followed these instructions: https://wiki.jenkins-ci.org/display/JENKINS/Logging

    It's the same concept as outlined below, but using GitHub rather than manually selecting users: https://wiki.jenkins-ci.org/display/JENKINS/2012/01/03/Allow+access+to+specific+projects+for+Users%28Assigning+security+for+projects+in+Jenkins%29

    Have I got this right or wrong? Is it possible to auth a Jenkins user to only see one project?


  • mod_wsgi, .htaccess and RewriteRule

    - by hadaraz
    I'm using several Django projects running on the same Apache instance through mod_wsgi, configured with a virtualhost for each site (see the httpd.conf here). For one of the sites I want to use a static cache (staticgenerator), so I set up a directory with an .htaccess file which contains:

        RequestHeader unset X-Forwarded-Host
        RewriteEngine on
        RewriteBase /
        RewriteCond %{REQUEST_FILENAME} !-d
        RewriteCond %{REQUEST_FILENAME}/index.html !-f
        RewriteRule ^(.*) http://127.0.0.1:3456/$1 [P]

    where 3456 is the Django port on the server. Using this rewrite rule, the request is always forwarded to the mod_wsgi handler, even if the file or directory exists, and if the file index.html exists the request shows as request-path/index.html. I tried another setup:

        RequestHeader unset X-Forwarded-Host
        RewriteEngine on
        RewriteBase /
        RewriteCond $1 !-d
        RewriteCond $1index.html !-f
        RewriteRule ^(.*) http://127.0.0.1:3456/$1 [P]

    but got almost the same results: all requests are transferred to the mod_wsgi handler, but the request path is now the original one. To sum it up: What is the correct RewriteCond to use here? How do you transfer a request to the mod_wsgi handler, and is this the right way? If that's not the way to do it, then how do you serve static files from a directory when they exist, and serve from Apache/mod_wsgi when they don't? Thanks for your help.
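
    One pattern that often works for this kind of static-cache fallthrough (a sketch, not tested against this exact setup; the /cache prefix is an assumption about where staticgenerator writes its files) is to test an explicit filesystem path built from %{DOCUMENT_ROOT}, so the conditions behave the same in per-directory context:

        RequestHeader unset X-Forwarded-Host
        RewriteEngine on
        RewriteBase /
        # Serve the pre-generated file if staticgenerator has written it...
        RewriteCond %{DOCUMENT_ROOT}/cache%{REQUEST_URI}/index.html -f
        RewriteRule ^(.*)$ /cache/$1/index.html [L]
        # ...otherwise proxy through to the Django backend
        # (the !^/cache/ guard keeps served cache files out of the proxy).
        RewriteCond %{REQUEST_URI} !^/cache/
        RewriteRule ^(.*)$ http://127.0.0.1:3456/$1 [P]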


  • GlassFish cluster-targeted JDBC resource is not enabled

    - by Jin Kwon
    I have a GlassFish cluster. When I tried to add a node and an instance, the DAS printed a bunch of error messages saying Resource [ jdbc/xxxx ] of type [ jdbc ] is not enabled:

        [#|2012-11-14T12:07:04.318+0900|SEVERE|glassfish3.1.2|javax.enterprise.system.core.com.sun.enterprise.v3.server|_ThreadID=2803;_ThreadName=Thread-2;|java.lang.StackOverflowError
        at java.io.FileOutputStream.writeBytes(Native Method)
        at java.io.FileOutputStream.write(FileOutputStream.java:318)
        at java.io.BufferedOutputStream.write(BufferedOutputStream.java:122)
        at java.io.PrintStream.write(PrintStream.java:480)
        at sun.nio.cs.StreamEncoder.writeBytes(StreamEncoder.java:221)
        at sun.nio.cs.StreamEncoder.implFlushBuffer(StreamEncoder.java:291)
        at sun.nio.cs.StreamEncoder.implFlush(StreamEncoder.java:295)
        at sun.nio.cs.StreamEncoder.flush(StreamEncoder.java:141)
        at java.io.OutputStreamWriter.flush(OutputStreamWriter.java:229)
        at java.util.logging.StreamHandler.flush(StreamHandler.java:242)
        at java.util.logging.ConsoleHandler.publish(ConsoleHandler.java:106)
        at java.util.logging.Logger.log(Logger.java:522)
        at com.sun.logging.LogDomains$1.log(LogDomains.java:372)
        at java.util.logging.Logger.doLog(Logger.java:543)
        at java.util.logging.Logger.log(Logger.java:607)
        at com.sun.enterprise.resource.deployer.JdbcResourceDeployer.deployResource(JdbcResourceDeployer.java:117)
        at org.glassfish.javaee.services.ResourceProxy.create(ResourceProxy.java:90)
        at com.sun.enterprise.naming.impl.SerialContext.lookup(SerialContext.java:507)
        at com.sun.enterprise.naming.impl.SerialContext.lookup(SerialContext.java:455)
        at javax.naming.InitialContext.lookup(InitialContext.java:411)
        at javax.naming.InitialContext.lookup(InitialContext.java:411)
        at com.sun.appserv.connectors.internal.api.ResourceNamingService.lookup(ResourceNamingService.java:221)

    The JDBC resource is OK and targeted at the cluster. I've installed the JDBC driver on the new node. Can anybody help?
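
    In case it helps anyone hitting the same message: the enabled flag lives on the resource-ref for each target, not on the JDBC resource itself, so one thing worth checking (a sketch; the cluster name is a placeholder) is that the new target actually has an enabled ref:

        # List the resource refs known to the cluster target
        asadmin list-resource-refs mycluster

        # If the ref is missing or disabled, recreate it enabled
        asadmin delete-resource-ref --target mycluster jdbc/xxxx
        asadmin create-resource-ref --target mycluster --enabled=true jdbc/xxxx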


  • Chroot jail for Nginx and PHP

    - by sqren
    I'm hosting multiple websites on one VPS and want to chroot each website, e.g.:

        /chroot/website1
        /chroot/website2

    I'm using makejail, a high-level tool, for creating the jails and copying the libraries and dependencies. Easy peasy. Each website will need nginx, php and mysql. For PHP I'm using php5-fpm, which actually supports chroot by configuration; however, I'm not using this (maybe I should?). My question is which of the following approaches is better:

    1) Every website has its own separate instance of nginx, php and mysql. The downside is that each webserver + php has to listen on a different port. I also need a "master" nginx web server in front of them, reverse proxying to the chrooted servers behind it. Probably the most secure, but also the most advanced.

    2) I don't make any chroot jails manually. I set up one nginx web server that proxies php requests to php-fpm on different ports. I can have multiple php-fpm configurations, each with its own chroot'ed folder. This is quite manageable; however, only PHP will be chrooted, not the actual webserver. Is this secure enough? Also, I tried this option out, and it seems I will need to use TCP instead of sockets for connecting to MySQL.

    3) You tell me ;)

    I'm quite new to chroot jailing, so please correct me if I'm wrong in my assumptions. I've been reading all the tutorials I could find; however, I find the market for chroot guides very scarce. Any help or inputs much appreciated!
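
    For reference on option 2, php5-fpm's per-pool chroot looks roughly like this (a sketch; the pool name, port and paths are assumptions). With a chrooted pool, connecting to MySQL over 127.0.0.1 TCP rather than a Unix socket is the usual workaround, since the socket path does not exist inside the jail:

        ; /etc/php5/fpm/pool.d/website1.conf
        [website1]
        user = website1
        group = website1
        listen = 127.0.0.1:9001
        ; Paths below this point are resolved inside the jail
        chroot = /chroot/website1
        chdir = /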


  • Backing up SQL NetApp Snapshots using TSM

    - by WerkkreW
    In our environment we have a 3-node SQL 2005 cluster which is on NetApp storage. We are currently using SMSQL (NetApp SnapManager for SQL) to take snapshot backups of the data. This works great, but due to some audit requirements we are also forced to maintain some copies on tape. We have used NDMP in other places across the enterprise, but we do not want to use it in this specific instance. Basically what I need to do is get the most recent snapshot copy of the databases onto tape via Tivoli Storage Manager (TSM). What I have done is obtain a basic Windows Server 2003 VM with SnapDrive installed, which is SAN-attached and zoned to the NetApp, and I have written a batch file to do the following:

        1. Mount the latest __RECENT snapshot LUN to the host, using a specific drive letter
        2. Perform a TSM-based incremental backup
        3. Dismount the LUN

    This seems to work fine, except sometimes the LUNs do not mount due to some sort of timeout. Also, due to my limited knowledge of Windows batch scripting, I have no way to monitor the success or failure of these backups, since I do not know how to send a valid return code back to the TSM scheduling service. Is there a more efficient/elegant way to accomplish this without NDMP?
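
    On the return-code point, a batch file can capture dsmc's exit status and hand it back to the TSM scheduler; a minimal sketch (the drive letter is an example, and the SnapDrive mount/dismount commands are left as placeholders):

        @echo off
        rem (SnapDrive command to mount the latest __RECENT snapshot LUN as S: goes here)

        rem Run the incremental backup and remember its return code
        dsmc incremental S:\
        set RC=%ERRORLEVEL%

        rem (SnapDrive command to dismount the LUN goes here)

        rem Propagate dsmc's return code to the scheduler
        exit /b %RC%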


  • iPhone Remote with iTunes Library via VPN

    - by sudo work
    Alright, so I'm currently behind a network router (not under my control). The router performs NAT and somehow prevents a computer from scanning other nodes; at least, you're unable, in this instance, to locate an iTunes library. You can, however, communicate with a node's open ports if the local IP address is known, as well as the port. I haven't actually tried port scanning a specific IP using nmap or another tool yet.

    So I've tried one solution to remove the contribution of the router entirely (to verify that it works without the router's influence): I set up an access point using my iPhone and tethered my computer (with the library) to it. From here, I was able to pair my library and the iPhone Remote application, and control of the library was normal as well. This solution is not ideal, however, because I am actively using bandwidth with my computer and cannot afford to be tethered to my 3G connection.

    A viable solution for me is to use a common VPN connection, which I have set up on a remote Ubuntu (Intrepid) server. Both my computer and iPhone are able to access the VPN via PPTP. The server is set up with pptpd as the VPN server; I'm using iptables to perform IP masquerading and traffic forwarding. However, I still cannot connect the library to the phone. I can see both devices on the VPN subnet (192.168.0.0/24), and SSH'ing and such works fine.

    What settings on the VPN server must I change to get this to work? Also, how can I assign static IP addresses to various PPTP clients based on MAC addresses?
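
    On the second question: pptpd pins addresses per login rather than per MAC, via the fourth column of /etc/ppp/chap-secrets; a sketch, with account names, secrets and addresses as placeholders:

        # client      server   secret          IP addresses
        laptop        pptpd    laptopsecret    192.168.0.10
        iphone        pptpd    iphonesecret    192.168.0.11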


  • How to specify search domain name of nginx resolver for proxy_pass

    - by myjpa
    Assuming my server is www.mydomain.com, on Nginx 1.0.6 I'm trying to proxy all requests to http://www.mydomain.com/fetch to other hosts; the destination URL is specified as a GET parameter named "url". For instance, when a user requests either one of:

        http://www.mydomain.com/fetch?url=http://another-server.mydomain.com/foo/bar
        http://www.mydomain.com/fetch?url=http://another-server/foo/bar

    it should be proxied to http://another-server.mydomain.com/foo/bar. I'm using the following nginx config, and it works fine only if the url parameter contains the domain name, like http://another-server.mydomain.com/...; but it fails on http://another-server/... with the error: another-server could not be resolved (3: Host not found). nginx.conf is:

        http {
            ...
            # the DNS server
            resolver 171.10.129.16;

            server {
                listen 80;
                server_name localhost;
                root /path/to/site/root;

                location = /fetch {
                    proxy_pass $arg_url;
                }
            }
        }

    Here, I'd like to resolve all URLs without a domain name as host names in mydomain.com. In /etc/resolv.conf it's possible to specify a default search domain for the whole Linux system, but it doesn't affect the nginx resolver:

        search mydomain.com

    Is this possible in Nginx? Or alternatively, how do I "rewrite" the url parameter so that I can add the domain name?
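
    One way to fake a search domain inside nginx itself (a sketch, untested; it assumes the url argument always starts with http:// or https://) is to qualify bare hostnames before proxying:

        location = /fetch {
            set $target $arg_url;
            # If the host part contains no dot, append the default domain.
            if ($target ~* "^(https?://)([^./]+)(/.*)?$") {
                set $target "$1$2.mydomain.com$3";
            }
            proxy_pass $target;
        }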


  • pure-ftpd not listening on specified port

    - by Jason McLaren
    I installed the pure-ftpd package (version 1.0.35-1) on an Ubuntu 12.04 box (an EC2 instance based on the standard Ubuntu 12.04 AMI). The pure-ftpd daemon is running (verified with ps), though there is no PID file (I expected one to be created by the /etc/init.d/pure-ftpd script). Here's the resulting command that gets run by the init.d script:

        /usr/sbin/pure-ftpd -l pam -O clf:/var/log/pure-ftpd/transfer.log -o -8 UTF-8 -u 1000 -E -B -g /var/run/pure-ftpd/pure-ftpd.pid

    Here's my real problem: the FTP server isn't actually listening on any port (checked with netstat and nmap), so I can't FTP to the server (either locally using localhost or remotely using the public IP address). I tried adding a Bind file to /etc/pure-ftpd/conf and restarting, but it didn't help.

    When I installed pure-ftpd, it replaced inetd with openbsd-inetd, but did not run it since there were no services enabled, so inetd is not listening on port 21 either. (Apparently Ubuntu has a no-inetd-by-default policy, according to https://lists.ubuntu.com/archives/ubuntu-users/2010-September/227905.html .) I want to run pure-ftpd by itself (not with inetd) anyway, since the /etc/init.d/pure-ftpd script requires no inetd if you use the UploadScript feature.

    I'm not familiar with how Ubuntu handles network services (and can't find any relevant docs besides generic man pages), so I'm probably missing something obvious. Nothing seems out of the ordinary with /etc/hosts.allow (empty) or hosts.deny (empty), and I didn't add any firewall rules (iptables -L shows that the firewall is in its initial state). I've checked the pure-ftpd docs; not sure what else to look at. Any help would be appreciated, thanks!
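
    For comparison, on Debian/Ubuntu the wrapper assembles that command line from one-value-per-file settings under /etc/pure-ftpd/conf, so a working Bind setting would normally be created like this (a sketch; the address and port are examples, and the resulting daemon command line should then gain a -S flag):

        echo "0.0.0.0,21" | sudo tee /etc/pure-ftpd/conf/Bind
        sudo service pure-ftpd restart
        # Verify: the running command line should now include -S 0.0.0.0,21
        ps aux | grep pure-ftpd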


  • Apache pointing to the wrong version of Python on Ubuntu: how do I change it?

    - by one
    I am setting up a Flask application on an Ubuntu 12.04.3 LTS EC2 instance, and everything seemed to be working well (i.e. I could get to the webpage via the publicly available URL) until I tried to import a module (e.g. numpy) and realised that the Apache Python differs from the one I used to compile mod_wsgi, and also from the one I am using. I am running apache2. The apache2 logs show the warnings (specifically, the last line shows the path hasn't changed):

        [warn] mod_wsgi: Compiled for Python/2.7.5.
        [warn] mod_wsgi: Runtime using Python/2.7.3.
        [warn] mod_wsgi: Python module path '/usr/lib/python2.7/:/usr/lib/python2.7/plat-linux2:/usr/lib/python2.7/lib-tk:/usr/lib$

    I have tried to set the path in my virtual host conf (my Python is located in /home/ubuntu/anaconda/bin along with all of the other libraries):

        WSGIPythonHome /home/ubuntu/anaconda
        WSGIPythonPath /home/ubuntu/anaconda

        <VirtualHost *:80>
            ServerName xx-xx-xxx-xxx-xxx.compute-1.amazonaws.com
            ServerAdmin [email protected]

            WSGIScriptAlias / /var/www/microblog/microblog.wsgi
            <Directory /var/www/microblog/app/>
                Order allow,deny
                Allow from all
            </Directory>

            Alias /static /var/www/microblog/app/static
            <Directory /var/www/FlaskApp/FlaskApp/static/>
                Order allow,deny
                Allow from all
            </Directory>

            ErrorLog ${APACHE_LOG_DIR}/error.log
            LogLevel warn
            CustomLog ${APACHE_LOG_DIR}/access.log combined
        </VirtualHost>

    But I still get the warnings and the Apache Python path hasn't changed. Where do I need to put the relevant directives to point Apache at my Python version and modules (e.g. scipy, numpy etc.)? Separately, could I have avoided this using virtual environments? Thanks in advance.
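
    For what it's worth, the "Compiled for Python/2.7.5 ... Runtime using Python/2.7.3" pair usually means Apache is loading the system libpython at runtime rather than the one mod_wsgi was built against. A sketch of the usual workaround (paths assume the Anaconda layout from the question):

        # /etc/apache2/envvars -- have Apache's loader pick up Anaconda's libpython
        export LD_LIBRARY_PATH=/home/ubuntu/anaconda/lib:$LD_LIBRARY_PATH

        sudo service apache2 restart

    And on the last question: a virtualenv plus mod_wsgi daemon mode (WSGIDaemonProcess ... python-path=...) is the common way to sidestep exactly this class of mismatch.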


  • Nexus 1000v VEM fails on 2 out of 8 hosts.

    - by cougar694u
    I have 8 ESXi hosts. I do a fresh install from the installable CD directly to 4u1. We have another 2-node cluster with a working Nexus 1000v primary & secondary; everything's up and running. I installed 6 hosts and everything worked great: I migrated them to the Nexus DVS, and VUM installed the modules. I did the 7th host, and when I tried to migrate it to the DVS, it failed with the following error:

        Cannot complete a Distributed Virtual Switch operation for one or more host memebers.
        DVS Operation failed on host , error durring the configuration of the host: create dvswitch failed with the following error message:
        SysinfoException: Node (VSI_NODE_net_create) ; Status(bad0003)= Not found ; Message = Instance(0): Inpute(3) DvsPortset-0 256 cisco_nexus_1000v
        got (vim.fault.PlatformConfigFault) exception

    Then I tried to do host 8 and got the exact same problem. It worked about 15 minutes prior when I did host 6, nothing changed, then I went to host 7 and it failed. If I try to remediate either of these two hosts, either patches or extensions, it fails. Anyone else have these problems?


  • Virtualbox Headless Server on Ubuntu missing VRDP Options

    - by The Daemons Advocate
    I'm running VirtualBox headless server on a 64-bit Ubuntu host, and I want to use it remotely. However, I'm having problems connecting via RDP. The DNS names in my network show the host to be 'server' and the guest to be 'ubuntu-vm'. From the official documentation, I gather that I am to connect to the host on the default RDP port in order to see the guest machine. I start the virtual machine like so:

        vboxheadless -startvm My_VM

    Then I connect from my laptop, and I get:

        rdesktop -a 16 server
        ERROR: server: unable to connect

    So next I consult the documentation further, and I find there are VRDP flags that can be turned on (but which should be on implicitly for a headless server). So I pull up information using 'vboxmanage showvminfo My_VM', and I find the VRDP property is off:

        VRDP Connection: not active

    To make things even weirder, the VRDP flag seems to be missing from vboxmanage. I've installed straight from the Ubuntu repos using the virtualbox-ose package, and I'm not sure how that measures up against the official docs. For instance, this command doesn't exist:

        VBoxManage modifyvm My_VM --vrdp on

    From the UI, the VM's display settings have the 'Remote Display' option greyed out. What I'm looking for is advice :). I'm open to suggestions that don't involve starting again with something like VMWare. Thanks in advance!
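
    A likely explanation, for later readers: the OSE build in the Ubuntu repos historically shipped without the closed-source VRDP server, which would account for both the greyed-out Remote Display setting and the missing switch. On builds that include it (the official Oracle packages), enabling and checking remote display looks like this (a sketch; newer VirtualBox versions renamed the option to --vrde):

        VBoxManage modifyvm My_VM --vrdp on
        VBoxManage showvminfo My_VM | grep -i vrdp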


  • Run script before shutdown/restart

    - by dtbarne
    I'd like to run a PHP script when an instance is told to shut down, but of course before it actually finishes shutting down. My particular script is just looking to push some log files from the local partition to another server. I've got the gist of how this process works, but I need some clarification. As I understand it (please correct me if I'm wrong):

        1. Create an executable script in /etc/init.d (let's call it /etc/init.d/push-logs)
        2. Create a symlink to /etc/init.d/push-logs from /etc/rc0.d (shutdown) and /etc/rc6.d (reboot). The name should be KXXpush-logs.

    Here are my questions:

    1. Of course: am I understanding correctly?
    2. For step 2 above, it sounds like the lower the XX the better. Is there a number too low to use? Does it matter if it shares a number with another script?
    3. Does the script in /etc/init.d/push-logs HAVE to follow the standard init.d template (supporting start/stop, etc. commands)? That doesn't really apply to my use case. If possible I just want the script to be the following:

        #!/bin/sh
        #
        # Run PHP file prior to shutdown
        #
        /usr/bin/php /path/to/php_file.php
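
    As a supplement (a sketch; the priority 10 is an arbitrary example): on Debian/Ubuntu the K-links are normally managed with update-rc.d rather than made by hand, and a K-link invokes the script with a "stop" argument, so the script should at least tolerate being called that way:

        sudo chmod +x /etc/init.d/push-logs
        # Create K10push-logs links in runlevels 0 (halt) and 6 (reboot) only
        sudo update-rc.d push-logs stop 10 0 6 .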


  • Cursor lag when mouse cursor changes?

    - by Mathias Lykkegaard Lorenzen
    Introduction

    I'm experiencing a problem that appeared recently and frustrates me a lot. The computer I bought is a Clevo 150ERM. Two of my friends bought the same machine and are experiencing the same issue. The computer came with Windows 7; there, I had no issues. Then, when we all switched to Windows 8, they had the mouse problem and I didn't. That is, until after 4 or 5 months I decided to install the RTM driver of my Intel graphics chip and the latest Nvidia driver. I also installed the latest version of Skype, which had just released (Skype 6 and Skype for Metro). This basically leads me to conclude that the issue is not hardware-related and is not caused by the operating system itself, but rather by the drivers or components that follow with it.

    Description of the issue

    The issue itself (lag with the mouse) happens whenever the cursor icon changes. For instance, if I keep hovering from and to a text field (and the cursor changes into a caret and then back to a pointer), it stops for 200 milliseconds while it changes the icon. An example is if I follow the mouse in the pattern shown by the arrows below. When crossing the window border, the cursor changes into a "resize window" cursor for a short while, making the cursor lag. This doesn't sound like much, but it happens every time the cursor changes (even if it's just to move the mouse somewhere else and accidentally cross a window border where the resize cursor shows, etc.). What do you suggest I try?


  • Hyper-V Guests Dying

    - by Jon Rauschenberger
    I just hit my THIRD instance of a Hyper-V guest machine dying with the exact same behavior. In all three instances we are hosting WS2008 guests on a WS2008 host. After a config change, we reboot the guest, and the guest OS comes up but in a very crippled state. Specifically, we are able to log into the guest, but we can't launch any apps, and the guest never becomes active on the network.

    I opened a support ticket with MS the second time this happened, and they focused in on the DCOM subsystem not coming up. The best explanation they could provide was that permissions on key system files got corrupted. I eventually gave up on the ticket after close to 10 hours on the phone trying different things that were going nowhere. What really concerns me is that we have now seen the exact same thing happen to a guest hosted on a completely different host machine; there is zero hardware overlap between the two.

    Has anyone seen this before? It's really odd behavior, but it also seems like there's a pattern here that's concerning me. Thanks, jon


  • How can I set up a 404 error page when people access http://ftp.mydomain.com?

    - by Tim B.
    I am a freelance videographer/developer, and part of my job involves transferring large files over FTP to production houses/television stations. While the majority of people in my industry understand the difference between FTP and HTTP, I've had several interactions in the past couple of months with people who still open Internet Explorer, try to access http://ftp.mydomain.com, receive an error page served by HostGator, and tell me that they cannot access my FTP server. Instead of spending time delivering instructions via e-mail, I'd much prefer to serve up a custom error page in this instance that instructs them how to download and use an FTP client.

    I tried setting up a sub-domain in cPanel, hoping I could simply drop in an .htaccess file with the error page, but I got this error:

        ftp.mydomain.com domainadmin-domainexistsglobal

    I also tried creating a custom error page in PHP which reads the site URL and serves up the custom content only when http://ftp.mydomain.com is accessed. Unfortunately, the error page works for every subdomain except that one. I'm not entirely sure this is even technically possible, which is why I bring it to the good people of StackOverflow for help. Thanks!
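
    On a server where one controls the Apache config directly (which shared cPanel hosting may not allow; the domainexistsglobal error suggests HostGator reserves the ftp name globally), the idea reduces to a small virtual host answering HTTP on the ftp hostname; a sketch with placeholder paths:

        <VirtualHost *:80>
            ServerName ftp.mydomain.com
            DocumentRoot /home/user/ftp-help
            # One index.html explaining how to use an FTP client;
            # route every request path to it (Apache 2.2.16+).
            FallbackResource /index.html
        </VirtualHost>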


  • Favorite tricks with Linux kernel boot parameters?

    - by ~drpaulbrewer
    Most Linux bootloaders let you edit the kernel boot command line before booting. There are often lots of parameters available -- Knoppix, for instance, has a list on their Knoppix Cheat Codes page -- but most are applicable only to compatibility and special situations. A few are hidden gems. Common uses of these codes are to boot to single-user mode, alter screen modes or drivers, or to specify an alternative root directory. Other more exotic uses are possible. Some Linux distributions let you copy the boot CD into RAM. Others (e.g., Ubuntu) let you use preseed files to clone installs when setting up multiple systems -- useful when installing a lab full of computers without having to babysit each install.

    What other tricks have you found useful in system installs, repairs, backups, restores, establishing temporary servers, or other tasks? To add your favorite trick to the list: as much of the code for these options goes on either in initrd or in a service handler that detects the kernel parameters, please list (1) the kernel boot line parameter, (2) what it does, and (3) the Linux distribution and any required packages to activate the feature. Thanks.
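
    To seed the list in the requested format (details from memory, so treat the package note as approximate): (1) toram; (2) copies the live CD entirely into RAM, so the disc can be ejected while the system keeps running; (3) Knoppix out of the box (Ubuntu's casper-based live CDs honor the same keyword). At the Knoppix boot prompt it looks like:

        boot: knoppix toram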


  • Configuring vsftpd with nginx on Ubuntu 12.04 LTS

    - by arby
    I've attempted to configure an nginx / vsftpd server on Ubuntu 12.04 LTS (via Amazon EC2) a couple of times now, but I seem to keep making a mistake along the way. Currently, when I try to connect to my FTP server, it takes a minute or so before it connects; then, when I issue a command, they all time out with an "operation failed" error. Aside from these issues, I'm not completely confident about the file ownership & permissions or the configuration / settings. So I think it's best if I just re-install and re-configure correctly.

    I believe the nginx installation comes with a default user of www-data:www-data and web root directory ownership by root:root. Vsftpd, however, needs to have a user created with the same group as the nginx user (www-data) and the same home directory as the nginx web root (/usr/share/nginx/www), with g+w chmod permissions granted on that directory. The vsftpd.conf file should disable anonymous logins and enable local logins, file writing, and chrooting of local users. In my previous config, I had /bin/false set as the FTP user's shell and pam_shells.so disabled. I also had local_umask set to 0027.

    So, starting with a fresh EC2 instance, I've got:

        sudo apt-get install vsftpd
        sudo apt-get install nginx

    For the firewall I issued the command (not sure if necessary):

        sudo ufw allow ftp

    Which commands / config are recommended from here? I only need 1 FTP user that I can use to log in with my FTP client to modify the single nginx web domain, which will need PHP & SQL for WordPress.
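
    Picking up from that description, the user and permission setup would look something like this (a sketch; the username ftpuser is an assumption, the paths are the ones from the question):

        # FTP user in nginx's group, homed at the web root, no shell login
        sudo useradd -d /usr/share/nginx/www -g www-data -s /bin/false ftpuser
        sudo passwd ftpuser

        # Group write on the web root so uploads are possible
        sudo chown root:www-data /usr/share/nginx/www
        sudo chmod g+w /usr/share/nginx/www

        # vsftpd's PAM stack checks pam_shells unless disabled, so /bin/false
        # must be listed as a valid shell for the login to succeed
        echo /bin/false | sudo tee -a /etc/shells

    Note that a slow connect followed by timed-out commands is the classic symptom of blocked passive-mode data ports, which on EC2 usually means opening a pasv_min_port..pasv_max_port range in both vsftpd.conf and the instance's security group.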


  • Editing a Windows XP installation's registry without being able to log in

    - by Alain
    I've got a Windows XP installation that has a corrupt registry. A worm (which was removed) had hijacked the HKLM\Software\Microsoft\Windows NT\CurrentVersion\Winlogon entry (which should have a value of Userinit=C:\windows\system32\userinit.exe). When the worm was removed, the corrupt entry was deleted entirely, and now the system automatically logs off immediately after attempting to log in. Regardless of the user and boot mode, no accounts can be logged in to. The only thing required to correct this behavior is to restore the registry key, but I cannot come up with any way of editing the registry without logging in to an account.

    I tried remotely connecting to the registry, but the required services aren't enabled on the machine. I tried booting on the same machine using the BartPE boot CD, but I could not find any way of editing the registry of the C:\Windows installation: running regedit only modifies the X:\I386\ registry in memory. So, what can I use to modify the registry of an un-login-able Windows XP instance so that I can log in again? Thanks guys.

    EDIT: The fix worked. The solution to the auto-logoff problem was, as hoped, to simply add the value mentioned above to the appropriate registry entry. This can be done using the BartPE boot CD, as described in the accepted answer below, but I used the Offline NT Registry Editor software mentioned in another answer. The steps were:

        1. Boot from the NT Registry Editor CD.
        2. Follow the directions until the appropriate boot sector is loaded.
        3. Instead of using one of the default options for modifying passwords or user accounts, type "software" to edit that hive.
        4. Type '9' to enter the command-line-based registry editor.
        5. Type "cd Microsoft" (enter), "cd Windows NT" (enter), "cd CurrentVersion" (enter), "cd Winlogon" (enter).
        6. Type "nv 1 Userinit" to create a new value under the Winlogon key.
        7. Type "ev Userinit" to edit the new value, and when prompted, type "C:\windows\system32\userinit.exe" (enter).
        8. Type 'q' to quit the registry editor, and as you back out of the system, follow the directions to write the hive back to disk.
        9. Restart your computer and log in. Problem solved.

    (Generic 'warning: back up your registry' disclaimer.)
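
    For the BartPE route mentioned above, the same repair can be done with reg.exe by loading the offline hive under a temporary key (a sketch, assuming the BartPE build ships reg.exe; the OFFLINE key name is arbitrary):

        rem Load the installation's SOFTWARE hive from the real system drive
        reg load HKLM\OFFLINE C:\Windows\system32\config\software

        rem Restore the missing Userinit value
        reg add "HKLM\OFFLINE\Microsoft\Windows NT\CurrentVersion\Winlogon" /v Userinit /t REG_SZ /d "C:\windows\system32\userinit.exe" /f

        rem Write the hive back out
        reg unload HKLM\OFFLINE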


  • vsftpd login errors: 530 login incorrect

    - by mcktimo
    Using Ubuntu 10.04 on an AWS EC2 instance. I was happy just using SSH, but then a WordPress plugin needs FTP access... I just need FTP access for one site, www.sitebuilt.net, which is in /home/sitebuil. I installed vsftpd and PAM and followed suggestions that got me to the following state.

    /etc/vsftpd.conf:

        listen=YES
        anonymous_enable=NO
        local_enable=YES
        write_enable=YES
        dirmessage_enable=YES
        use_localtime=YES
        xferlog_enable=YES
        connect_from_port_20=YES
        xferlog_file=/var/log/vsftpd.log
        secure_chroot_dir=/var/run/vsftpd/empty
        pam_service_name=vsftpd
        rsa_cert_file=/etc/ssl/private/vsftpd.pem
        guest_enable=YES
        user_sub_token=$USER
        local_root=/home/$USER
        chroot_local_user=YES
        hide_ids=YES
        check_shell=NO
        userlist_file=/etc/vsftpd_users

    /etc/pam.d/vsftpd:

        # Standard behaviour for ftpd(8).
        auth required pam_listfile.so item=user sense=deny file=/etc/ftpusers onerr=succeed
        # Note: vsftpd handles anonymous logins on its own. Do not enable pam_ftp.so.
        # Standard pam includes
        @include common-account
        @include common-session
        @include common-auth
        auth required pam_shells.so
        # Customized login using htpasswd file
        auth required pam_pwdfile.so pwdfile /etc/vsftpd/passwd
        account required pam_permit.so
        session optional pam_keyinit.so force revoke
        auth include system-auth
        account include system-auth
        session include system-auth
        session required pam_loginuid.so

    /etc/vsftpd_users:

        sitebuil
        tim

    /etc/passwd:

        ...
        sitebuil:x:1002:100:sitebuilt systems:/home/sitebuil:/bin/sh
        ftp:x:108:113:ftp daemon,,,:/srv/ftp:/sbin/nologin

    /etc/vsftpd/passwd:

        sitebuil:Kzencryptedpwd

    /var/log/vsftpd.log:

        Wed Feb 29 15:15:48 2012 [pid 20084] CONNECT: Client "98.217.196.12"
        Wed Feb 29 15:16:02 2012 [pid 20083] [sitebuil] FAIL LOGIN: Client "98.217.196.12"
        Wed Feb 29 16:12:33 2012 [pid 20652] CONNECT: Client "98.217.196.12"
        Wed Feb 29 16:12:45 2012 [pid 20651] [sitebuil] FAIL LOGIN: Client "98.217.196.12"
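
    Two things commonly behind a 530 with this exact stack (sketches, not a diagnosis): pam_pwdfile only accepts crypt-style hashes, so the /etc/vsftpd/passwd entry should be generated with crypt rather than htpasswd's default MD5; and stacking @include common-auth together with pam_pwdfile makes both checks required, so failing the system password alone will reject the login.

        # Regenerate the virtual-user entry with a crypt() hash (-d)
        sudo htpasswd -c -d /etc/vsftpd/passwd sitebuil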


  • DNS setup problems with Windows Azure VPS

    - by jbigelow
    What is the proper way to set up the A record (or CNAME) for a Windows Azure VPS? I can't connect to my website after setting up IIS, and I believe I don't have the correct DNS setup.

    I created a small VPS instance with the default Windows Server 2012 configuration. I RDP'd in and added the Web Server role. In my DNSMadeEasy control panel I added an A record with my public virtual IP address. In IIS I went to the default website and added bindings for the hostname of my website, so I should be able to type mywebsite.com and see the IIS 8 splash screen; instead, my browser cannot connect. I attempted to navigate to the site by typing my virtual IP address into the browser and still cannot connect. I RDP'd back into the machine and turned off Windows Firewall. No change; still cannot navigate to my website.

    From within IIS I double-checked my binding. If I click "browse *:80" I can bring up my website in IE at the http:// localhost address. If I click "browse mywebsite on *:80" IE says "This page cannot be displayed." From within the RDP session I can view the site if I navigate to http:// 127.0.0.1, but not if I navigate to my virtual IP, nor can I view the page if I try navigating to http:// mywebservername.cloudapp.net

    I'm thinking I must be fundamentally misunderstanding how to do DNS setup with an Azure VPS, but my initial Google searches aren't turning up any helpful information. (Spaces added after the http:// so serverfault doesn't try to render them as valid URLs.)
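
    For context on the DNS half (a sketch; the names and address are placeholders): Azure's guidance for cloud-service VMs of that era was to CNAME a hostname to the cloudapp.net name rather than pin the VIP with an A record, since the VIP was not guaranteed to survive deallocation. The zone entries would look like:

        www.mywebsite.com.   3600  IN  CNAME  mywebservername.cloudapp.net.
        ; the zone apex cannot be a CNAME, so the bare domain keeps an A record to the VIP
        mywebsite.com.       3600  IN  A      203.0.113.10

    That said, the symptom above (no response even on the VIP) points first at a missing endpoint: the VM's cloud service also needs a TCP port-80 endpoint opened in the Azure portal, independent of Windows Firewall.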


  • How to improve Windows Server 2008 R2 to handle many connections?

    - by invisal
    I have been trying to figure out how to solve this problem for a few days now. First of all, I am running a website with an average of 350,000 page views daily. Previously, all ads management (tracking the clicks and impressions each ad has served) and content were served by a single server with the following spec:

        Server 1
        OS: Windows 2008 R2 64-bit
        CPU: Intel Core i5 - 4 cores
        RAM: 8 GB
        Storage: 2 x 1 TB hard drives
        Bandwidth: 10 TB per month

    To improve our website speed, I decided to separate the ads management script onto another dedicated server, because we have 15 to 30 advertisers on each page:

        Server 2
        OS: Windows 2008 R2 64-bit
        CPU: Intel Core i5 - 4 cores
        RAM: 4 GB
        Storage: 2 x 300 GB hard drives
        Bandwidth: 10 TB per month

    The problem is that Server 1 could handle both the content and the ads system, yet now that I have moved the ads system onto Server 2, Server 2 can barely serve the ads system alone.

    Test: first, I moved 75% of the ads to Server 2 and ran ping -t xxxxx for 10 minutes, which followed a pattern similar to this:

        Reply from xxxxx bytes=32 time=290ms TTL=116
        Reply from xxxxx bytes=32 time=289ms TTL=116
        Reply from xxxxx bytes=32 time=320ms TTL=116
        Reply from xxxxx bytes=32 time=286ms TTL=116
        Reply from xxxxx bytes=32 time=286ms TTL=116
        Reply from xxxxx bytes=32 time=348ms TTL=116
        Reply from xxxxx bytes=32 time=284ms TTL=116

    Then I moved 100% of the ads to Server 2 and pinged the server again for 10 minutes, with results following this pattern:

        Reply from xxxxx bytes=32 time=290ms TTL=116
        Request timed out
        Reply from xxxxx bytes=32 time=320ms TTL=116
        Reply from xxxxx bytes=32 time=286ms TTL=116
        Request timed out
        Request timed out
        Reply from xxxxx bytes=32 time=284ms TTL=116

    Attempts so far:

        1. Increased MaxUserPort and TcpNumConnection
        2. Restarted the server
        3. Increased IIS Max Instances and Instance MaxRequests

    Server resources:

        Only 10%-15% of the network connection is used
        Only 10%-15% of the CPU is used
        Only 25% of the memory is used
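
    For reference, since the question mentions them: the MaxUserPort/TcpTimedWaitDelay tweaks are registry values under the Tcpip parameters key, though on Server 2008 the dynamic port range is normally managed with netsh instead. A sketch (the values are common examples, not recommendations, and the registry changes need a reboot):

        reg add HKLM\SYSTEM\CurrentControlSet\Services\Tcpip\Parameters /v MaxUserPort /t REG_DWORD /d 65534 /f
        reg add HKLM\SYSTEM\CurrentControlSet\Services\Tcpip\Parameters /v TcpTimedWaitDelay /t REG_DWORD /d 30 /f

        rem Preferred on Server 2008: widen the ephemeral port range directly
        netsh int ipv4 set dynamicport tcp start=10000 num=55000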


  • What is optimal hardware configuration for heavy load LAMP application

    - by Piotr K.
    I need to run a Linux-Apache-PHP-MySQL application (the Moodle e-learning platform) for a large number of concurrent users: I am aiming at 5000 users. By concurrent I mean that 5000 people should be able to work with the application at the same time, where "work" means not only database reads but writes as well. The application is not very typical, since it does a lot of inserts/updates on the database, so caching techniques do not help much. We are using the InnoDB storage engine. In addition, the application is not written with performance in mind; for instance, one Apache thread usually occupies about 30-50 MB of RAM.

    I would be grateful for information on what hardware is needed to build a scalable configuration that is able to handle this kind of load. We are currently using two HP DL380s with two 4-core processors each, which can handle a much lower load (typically 300-500 concurrent users). Is it reasonable to invest in this kind of box and build a cluster using them, or is it better to go with some more high-end hardware? I am particularly curious about:

        1. How many servers are needed, and how powerful (number of processors/cores, size of RAM)
        2. What network equipment should be used (what kind of switches, network cards)
        3. Any other hardware, like particular disk storage solutions, etc., that is needed

    Another question is how to put everything together, that is, what the most optimal architecture is. Clustering with MySQL is rather hard (people are complaining about MySQL Cluster, even here on Stack Overflow).


  • Can I run a mix of static addressing and NAT?

    - by aroth
    Let's say that an ISP offers a plan with 5 static IP addresses, but I have more than 5 devices, many of which (such as a networked printer, for instance) I do not want or need to have a static IP address. So the topology I'm planning goes something like:

        ISP -> Router -> Switch -> Computer (static address)
                              |--> Computer (static address)
                              |--> Printer (DHCP/NAT)
                              |--> TV (DHCP/NAT)
                              |--> (...)
                              |--> Wireless devices (DHCP/NAT)

    Generally speaking, is it possible to run a network like that? If not, then what sort of setup do I need so that I can assign static addresses just to the things that need them and use DHCP/NAT for everything else? Also, what internal networking devices will consume a static IP address? I'm pretty sure the router will, correct? Does the switch also?
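
    As a side note on the DHCP side of that mix: most DHCP servers can pin an address to a device by MAC, which gives a fixed internal address without static configuration on the device itself. In ISC dhcpd syntax the reservation looks like this (a sketch; the MAC and address are placeholders):

        host printer {
            hardware ethernet 00:11:22:33:44:55;
            fixed-address 192.168.1.50;
        }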


  • "Run As Administrator" on program right click failing and not launching program

    - by GONeale
    This problem lies within a relatively fresh (~4 weeks) x64 Windows 7 install, but it is also a problem I have seen on Windows Vista machines (x86 versions). Since the other day, any program launched via right-clicking a shortcut (.lnk)'s context menu and pressing "Run As Administrator" (for instance, in the Quick Launch/Jump List in Windows 7) has failed: the screen does not dim and there is no UAC popup. In fact the program does not even load. There is no way around this unless I use the shortcut version from "All Programs", which appears to work. Very strange. I have performed no major software installs, nothing out of the ordinary. Has anybody encountered this, or does anyone know what would be causing it?

    Here's an example of somebody else experiencing this problem in Vista with no solution: http://www.vistax64.com/vista-general/131918-strange-run-administrator-problem.html and I believe this problem is related (I also cannot right-click - "Manage" on My Computer): http://windows7forums.com/windows-7-support/5501-run-administrator-broken.html

    I am running the latest version of the Avira AntiVir virus scanner and am pretty conscious of what I download; I don't think it is a virus. Nor do I believe it is due to the RC version of Windows 7, because I have seen the problem across multiple operating system versions. Thanks guys.


  • Postfix: change sender in queued messages

    - by ring0
    Following a complete re-installation we got a problem with the configuration: the sender address was wrong and some recipients (mail servers) rejected the messages. So there is a bunch of mail stuck in the Postfix queue. Ideally, I would change the sender address directly in the queued messages and then flush the queue.

    I tried this answer that addresses this very problem. But messages don't seem to be easily modifiable in the version I have (2.11.0). For instance, there is no /var/spool/mqueue directory; instead there is /var/spool/postfix/ containing:

        active bounce corrupt defer deferred dev etc flush hold incoming lib maildrop pid private public saved trace usr

    and the directory of interest is deferred. I tried to modify a few files there, changing the wrong domain to the correct one (and was careful to ensure only those strings were changed). But those mails were then moved to corrupt, meaning that a simple text change doesn't work (done with vi). Any other, cleaner way to change the sender in queued mail?
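
    One avenue that avoids touching the queue files at all (a sketch; the addresses are placeholders): leave the deferred mail as it is, but have Postfix rewrite the bad sender on the way out with a generic map, then flush the queue:

        # /etc/postfix/generic
        #   wrong address         correct address
        user@wrong.example        user@correct.example

        postmap /etc/postfix/generic
        postconf -e "smtp_generic_maps = hash:/etc/postfix/generic"
        postfix reload
        postqueue -f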

