Search Results

Search found 22139 results on 886 pages for 'security testing'.


  • VMWare Converter - Remote Linux Install P2V

    - by Zed Said
    I am trying to get what I thought was an easy process working. Here is my situation: I have a remote Linux server (running Debian) that I would like to turn into a VM so that I can do testing on it rather than on the production server. I downloaded VMware Converter, went to the "Convert machine" wizard, chose "Powered-on machine", and entered my remote machine details. Clicking Next shows that it correctly connects to my remote Linux box. Now, on the next screen, for Destination type, it enters "VMware Infrastructure virtual machine", and I can't change this. This is where I am stuck. Why do I need another server to convert with? Is this where my VM gets sent to? And why can't I just convert the VM and save it on the computer that I am running the converter program on? If I do have to use a VMware Infrastructure destination, I am confused about what ESX is, and why and how I would use it. Any help with this process would be more than appreciated!

    Read the article

  • USB Hardware vs. Software Write Lock

    - by TreyK
    I'm in the market for a USB flash drive, and I remember a cool feature a tiny 32MB flash drive of mine had: a write-lock switch. This seemed like an amazing feature to have as a shield against anything nasty happening to the drive on an unfamiliar computer. However, very few drives on the market offer this feature. Instead, software protection seems to be the more prominent method. This software protection causes me a bit of uneasiness, as it seems like it wouldn't be nearly as bulletproof as a physical switch. Also, levels of protection seem to vary from product to product. Being able to protect certain folders from reading and/or writing would be nice, but is the security trade-off worth it? Just how effective can this software protection be? Wouldn't a simple format be able to wipe any drive with software protection? My drive must also be compatible with Windows XP, Vista, and 7, as well as Linux and Mac. What would be the best way to get a well-sized (~8GB) flash drive with a strong write-protection implementation, for little or no more than the price of a regular drive? Thanks.

    Read the article

  • MSI Installer error 2203; how to force permissions on installer directory?

    - by goober
    [Cross-posted on StackOverflow.com as well because the question relates to development. Feel free to let me know where it best belongs.] Hi all, I'll try to bullet-point to keep it short.
    Background / Issue:
    - Trying to install ASP.NET MVC 3 RC on my Windows 7 machine; I uninstalled the other versions of MVC (2 and 3 Beta 1) first.
    - Ran the installer -- got a generic error, 2203. The log files said it was a permissions error on C:\Windows\Installer.
    - Checked C:\Windows\Installer -- sure enough, it's marked as read-only. I un-checked "Read-only" in the folder properties and applied; it appears to open the dialog and apply to all files. However, when clicking Properties again, the read-only box is back to checked.
    - Checked the Security tab of the folder -- both SYSTEM and the Administrators group have full access. I checked ownership -- the Administrators group is listed as an owner.
    - Verified that I'm on the system as an Administrator (in fact, the only account in the Administrators group besides Administrator).
    So, what gives? Thanks in advance for any help you can provide!

    Read the article

  • ASP.NET Website Administration Tool: Unable to connect to SQL Server database

    - by MedicineMan
    I am trying to get authentication and authorization working with my ASP.NET MVC project. I've run the aspnet_regsql.exe tool without any problem and can see the aspnetdb database on my server (using Management Studio). My connection string in my web.config is:

        <connectionStrings>
          <add name="ApplicationServices"
               connectionString="data source=MYSERVERNAME;Integrated Security=SSPI;AttachDBFilename=|DataDirectory|aspnetdb.mdf;User Instance=true"
               providerName="System.Data.SqlClient" />

    The error I get is:

        There is a problem with your selected data store. This can be caused by an invalid server name or credentials, or by insufficient permission. It can also be caused by the role manager feature not being enabled. Click the button below to be redirected to a page where you can choose a new data store.
        The following message may help in diagnosing the problem: Unable to connect to SQL Server database.

    In the past, I have had trouble connecting to my database because I've needed to add users. Do I have to do something similar here?

    Read the article

  • Changing subnet-mask of class-c network host to 255.255.0.0

    - by Prashant Mandhare
    We have an existing class C network with the IP address range 11.22.33.44/24 (just for example). My domain controller has been configured within this subnet, so all servers within this subnet have their subnet mask set to 255.255.255.0. Now we have got a new subnet, 11.22.88.99/24 (note that only the last two octets have changed), and I want all new hosts in this new subnet to join my existing DC. We have configured the firewall to allow this, so there is no issue with the firewall. But initially I was not able to join hosts in the new subnet to the existing domain. Later I suspected the subnet mask used on the domain controller (255.255.255.0), and for testing purposes I changed it to 255.255.0.0. It worked like a charm; I was able to join subnet-2 hosts to the subnet-1 domain. Now I am wondering whether it is good practice to change the subnet mask of a class C network to 255.255.0.0. Can any issues arise from this? Experts, please provide your opinion.

    Read the article

  • How to push changes from Test server to Live server?

    - by anonymous
    As a beginner, I finally noticed the problem with making changes directly on the live server I've been working on, now that I have a couple of users on it, since I bring it down so often. I created an EC2 image of my live server and set up a separate instance on EC2, so now I have two EC2 instances, Stage and Production. I set up GitHub, push changes to Stage and test my code there, and when it's all done and working, I push it to the production branch, and everything is good.
    There is a slight wrinkle here: I name my files config_stage.js and config_production.js and set up .gitignore on each server, and in my code I have it read the ENV flags and load the appropriate config. Is this the correct approach?
    My main question is: how do you keep track of non-code changes to the server? For example, I installed HAProxy, Stunnel, Redis, MongoDB and several other things onto the Stage server for testing, and now that it's all working and good, how do I deploy them to Production? Right now, I'm just keeping track of everything I installed and copying configuration files over, which is very tedious, and I'm afraid I may have missed a step somewhere. Is there a better way to port these changes over from my test server to my live server?
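    One low-tech way to make the "what did I install?" bookkeeping reproducible is to record it as data rather than memory. A minimal sketch, assuming Ubuntu/Debian-based instances; the file names, config paths and the "production" hostname below are only examples, not anything from the question:

        # on the Stage instance: record the installed package set and the configs you touched
        dpkg --get-selections > packages.list
        tar czf etc-configs.tgz /etc/haproxy /etc/stunnel /etc/redis /etc/mongodb 2>/dev/null   # paths are examples

        # on the Production instance: replay the package list, then unpack the configs
        sudo dpkg --set-selections < packages.list
        sudo apt-get -y dselect-upgrade
        sudo tar xzf etc-configs.tgz -C /

    A configuration-management tool (Puppet, Chef, or even a plain shell provisioning script kept in the same Git repo) is the longer-term version of the same idea: the server's non-code state becomes something you push, just like the code.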

    Read the article

  • Why doesn't the system information message shown when accessing an Ubuntu server match free -m?

    - by Andres
    Each time I SSH into my AWS Ubuntu servers I see a system information message, showing load, memory usage and packages available to install, like this:

        Welcome to Ubuntu 12.04.3 LTS (GNU/Linux 3.2.0-51-virtual x86_64)

         * Documentation:  https://help.ubuntu.com/

          System information as of Sun Nov 10 18:06:43 EST 2013

          System load:  0.08              Processes:           127
          Usage of /:   4.9% of 98.43GB   Users logged in:     1
          Memory usage: 69%               IP address for eth0: 10.236.136.233
          Swap usage:   100%

          Graph this data and manage this system at https://landscape.canonical.com/

        13 packages can be updated.
        0 updates are security updates.

        Get cloud support with Ubuntu Advantage Cloud Guest
          http://www.ubuntu.com/business/services/cloud

        Use Juju to deploy your cloud instances and workloads.
          https://juju.ubuntu.com/#cloud-precise

        *** /dev/xvda1 will be checked for errors at next reboot ***
        *** System restart required ***

    My question is about the memory percentage shown. In this case it shows 69% memory usage, but since swap usage was 100% I checked it myself. When I run free -m I get this:

                     total       used       free     shared    buffers     cached
        Mem:          1652       1635         17          0          4         29
        -/+ buffers/cache:       1601         51
        Swap:          895        895          0

    And that is of course closer to 100% than to 69%.
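    For what it's worth, there are two common ways to read that free output, and (as an assumption, not something checked against the landscape-sysinfo source) the login banner is a snapshot taken at login time rather than the current value. A small sketch of the two readings:

        # total percentage of RAM in use right now, counting buffers/cache as "used"
        free -m | awk '/^Mem:/ {printf "%.0f%% of %d MB in use\n", $3/$2*100, $2}'

        # the same figure with buffers/cache excluded (the "-/+ buffers/cache" line)
        free -m | awk '/buffers\/cache/ {print $3 " MB used, " $4 " MB free (excluding buffers/cache)"}'

    Running both at the same moment the banner is generated (e.g. right after login) makes the comparison fair.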

    Read the article

  • Can't access IIS 7 server URL from the same IIS 7 server.

    - by Kevin Raffay
    We have an intranet site, i.e. xxx.yyy.com, that users access by entering http://xxx.yyy.com. Our problems started when we migrated to IIS 7 running on a new 2003 server. We got rid of our single sign-on code and implemented a security model where we capture a user's domain credentials, which we then authenticate against a DB. In order to get the domain credentials passed to our ASP.NET app, we have the following settings:

        Anonymous Authentication: Disabled
        ASP.NET Impersonation: Enabled
        Basic/Digest/Forms Authentication: Disabled
        Windows Authentication: Enabled

    We allow "*" and deny "?" in the web.config. Browsing http://xxx.yyy.com from any client PC results in a domain login prompt, and if you enter a proper user/password, you can get in. However, browsing http://xxx.yyy.com while remoted into the server results in 3 domain login prompts and eventually a 401 Unauthorized error. We have traced this behavior to problems with our web site, where we have pages doing "screen scraping" using HttpRequest to call a URL on the same server. When doing an HttpRequest from any other client, using a test harness that passes authorized credentials, all is good. So internal HttpRequest calls on the server fail, just like attempts to browse that server's URL from within a remote session. Why would a request to http://xxx.yyy.com made on server xxx.yyy.com itself fail authentication?

    Read the article

  • MaxStartups and MaxSessions configuration parameters for ssh connections?

    - by Webby
    I am copying files from machineB and machineC onto machineA, where I run the shell script below. If a file is not there on machineB, then it should definitely be there on machineC, so I try copying from machineB first, and if that fails I try the same file from machineC. I am copying the files in parallel using the GNU Parallel library and it is working fine. Currently I am copying 10 files in parallel. Below is my shell script:

        #!/bin/bash
        export PRIMARY=/test01/primary
        export SECONDARY=/test02/secondary
        readonly FILERS_LOCATION=(machineB machineC)
        export FILERS_LOCATION_1=${FILERS_LOCATION[0]}
        export FILERS_LOCATION_2=${FILERS_LOCATION[1]}
        PRIMARY_PARTITION=(550 274 2 546 278) # this will have more file numbers
        SECONDARY_PARTITION=(1643 1103 1372 1096 1369 1568) # this will have more file numbers
        export dir3=/testing/snapshot/20140103

        find "$PRIMARY" -mindepth 1 -delete
        find "$SECONDARY" -mindepth 1 -delete

        do_Copy() {
          el=$1
          PRIMSEC=$2
          scp david@$FILERS_LOCATION_1:$dir3/new_weekly_2014_"$el"_200003_5.data $PRIMSEC/. || scp david@$FILERS_LOCATION_2:$dir3/new_weekly_2014_"$el"_200003_5.data $PRIMSEC/.
        }
        export -f do_Copy

        parallel --retries 10 -j 10 do_Copy {} $PRIMARY ::: "${PRIMARY_PARTITION[@]}" &
        parallel --retries 10 -j 10 do_Copy {} $SECONDARY ::: "${SECONDARY_PARTITION[@]}" &
        wait

        echo "All files copied."

    Problem statement: with the above script, at some point I get this exception:

        ssh_exchange_identification: Connection closed by remote host
        ssh_exchange_identification: Connection closed by remote host
        ssh_exchange_identification: Connection closed by remote host

    I guess the error is typically caused by too many ssh/scp sessions starting at the same time, which leads me to believe that MaxStartups and MaxSessions in /etc/ssh/sshd_config are set too low. But my question is: on which server are they too low, machineB and machineC or machineA? And on which machines do I need to increase the numbers? On machineA this is what I can find:

        root@machineA:/home/david# grep MaxStartups /etc/ssh/sshd_config
        #MaxStartups 10:30:60
        root@machineA:/home/david# grep MaxSessions /etc/ssh/sshd_config

    And on machineB and machineC this is what I can find:

        [root@machineB ~]$ grep MaxStartups /etc/ssh/sshd_config
        #MaxStartups 10
        [root@machineB ~]$ grep MaxSessions /etc/ssh/sshd_config
        #MaxSessions 10
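    Since commented-out lines only show the compiled-in defaults, one way to see what sshd is actually enforcing, and to raise the limits if needed, is sketched below. It needs a reasonably recent OpenSSH for sshd -T, the numbers are only examples, and whether to do this on machineB/machineC (the side accepting the scp connections) or on machineA is exactly the open question here, so treat the target host as an assumption:

        # print the effective values, even when the sshd_config lines are commented out
        sudo sshd -T | grep -iE 'maxstartups|maxsessions'

        # raise the limits (example values) and reload sshd
        echo 'MaxStartups 30:30:100' | sudo tee -a /etc/ssh/sshd_config
        echo 'MaxSessions 30'        | sudo tee -a /etc/ssh/sshd_config
        sudo service sshd reload   # "service ssh reload" on Debian/Ubuntu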

    Read the article

  • Enable RemoteApp Full Desktop programmatically

    - by Scott Chamberlain
    I am writing a PowerShell script to set up some Hyper-V VMs; however, there is one step I am having trouble automating. How do I check the box to allow Remote Desktop access from the RemoteApp settings programmatically? I can set up all of the customizations I need by doing:

        #build the security descriptor so the desktop only shows up for people who should be allowed to see it
        $remoteDesktopUsersSid = New-Object System.Security.Principal.SecurityIdentifier($remoteDesktopUsersGroup.objectSid[0],0)
        $aceTemplet = 'O:WDG:WDD:ARP(A;CIOI;CCLCSWLORCGR;;;{0})'
        $securityDescriptor = $aceTemplet -f $remoteDesktopUsersSid

        #get a copy of the WMI instance
        $tsRemoteDesktop = Get-WmiObject -Namespace root\CIMV2\TerminalServices -Class Win32_TSRemoteDesktop

        #set settings
        $tsRemoteDesktop.Name = $ServerDisplayName
        $tsRemoteDesktop.SecurityDescriptor = $securityDescriptor
        $tsRemoteDesktop.IconPath = $IconPath
        $tsRemoteDesktop.IconIndex = $IconIndex

        #push settings back to server
        Set-WmiInstance -InputObject $tsRemoteDesktop -PutType UpdateOnly

    However, the instance of that WMI object does not exist until after you have the above box checked. I attempted to use Set-WmiInstance to instantiate and set the settings at the same time, but I keep getting errors like:

        Set-WmiInstance :
        At line:53 char:16
        + Set-WmiInstance <<<<  -Namespace root\CIMV2\TerminalServices -Class Win32_TSRemoteDesktop -Arguments @{Alias='TSRemoteDesktop';Name=$ServerDisplayName;ShowInPortal=$true;SecurityDescriptor=$securityDescriptor}
            + CategoryInfo          : NotSpecified: (:) [Set-WmiInstance], ArgumentException
            + FullyQualifiedErrorId : System.ArgumentException,Microsoft.PowerShell.Commands.SetWmiInstance

    (Also, after running the command and getting the error, it will delete the instance of Win32_TSRemoteDesktop if it already existed and un-check the box in the properties setting.) Is there any way to programmatically check that box, or can anyone help with why Set-WmiInstance throws that error?

    Read the article

  • Check for updates of a specific Debian package list

    - by Erwan Queffélec
    The setup: I run a Debian Squeeze host that I use to build a multi-language project (Python, Java, PHP...) and generate custom packages (Debian and RPM) automatically (through Jenkins).
    The problem: the target distributions of those Debian packages are Etch, Lenny and Squeeze, but our project has some native dependencies that are available only through the DebianRelease + 1 repository (i.e. Lenny + 1 == Squeeze, Squeeze + 1 == Wheezy). We, for example, need the jetty packages from Squeeze in Lenny, and the cyrus-imapd-2.4 packages from Wheezy in Squeeze. Some additional info: some packages we can simply 'backport by hand' by mirroring the DebianRelease + 1 packages to our own repositories. For instance, the jetty package from Squeeze will run fine on Lenny because it doesn't need an s**tload of additional dependencies. However, we do need to rebuild some packages. For instance, cyrus-imapd-2.4 from Wheezy has a lot of unsatisfied dependencies on Squeeze, so we need to rebuild it in Squeeze and then upload it to our repo.
    The question: I need a simple way of knowing if there are any updates to those extra packages (both "normal" and "security" updates). I could write a simple script that runs weekly, gets some parameters from a file, and generates an update report. Let's say the file looks like this:

        jetty:squeeze
        cyrus-imapd-2.4:wheezy

    The script should run as a normal user, so as not to mess up the system apt configuration, and issue the appropriate commands to generate that report. Does Debian have some built-in apt-* commands/options dedicated to that kind of problem that I could use to write this script? If not, can someone think of another clean solution to achieve what I need?
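    One possible starting point, sketched here under the assumption that the watch list stays in the package:suite format above (the file name watched-packages.txt is made up): rmadison, from the devscripts package, queries the Debian archive for the versions currently published in each suite without touching the local apt configuration, so it can run as a normal user.

        # report the archive version of each watched package in its watched suite
        while IFS=: read -r pkg suite; do
            echo "== $pkg ($suite) =="
            rmadison "$pkg" | grep -w "$suite"
        done < watched-packages.txt

    Comparing that output against the versions in your own repository (e.g. from apt-cache policy on a build host, or from your repo metadata) is still up to the script, and how completely rmadison covers the separate security archive is something to verify before relying on it.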

    Read the article

  • Strange performance issue with Dell R7610 and LSI 2208 RAID controller

    - by GregC
    Connecting the controller to any of the three PCIe x16 slots yields choppy read performance of around 750 MB/sec, while the lowly PCIe x4 slot yields a steady 1.2 GB/sec read. This is with the same files, the same Windows Server 2008 R2 OS, the same RAID6 24-disk Seagate ES.2 3TB array on an LSI 9286-8e, the same Dell R7610 Precision Workstation with A03 BIOS, the same W5000 graphics card (no other cards), the same settings, etc. I see super-low CPU utilization in both cases. SiSoft Sandra reports x8 at 5 GT/sec in the x16 slot, and x4 at 5 GT/sec in the x4 slot, as expected. I'd like to be able to rely on the sheer speed of the x16 slots. What gives? What can I try? Any ideas? Please assist. Cross-posted from http://en.community.dell.com/support-forums/desktop/f/3514/t/19526990.aspx
    Follow-up information: we did some more performance testing, reading from 8 SSDs connected directly (without an expander chip), which means both SAS cables were utilized. We saw nearly double the performance, but it varied from run to run: 2.0, 1.8, 1.6, and 1.4 GB/sec were observed, then performance jumped back up to 2.0. The SSD RAID0 tests were conducted in a x16 PCIe slot, with all other variables kept the same. It seems to me that we were getting double the performance of the HDD-based RAID6 array. Just for reference: the maximum possible read burst speed over a single channel of SAS 6Gb/sec is 570 MB/sec due to 8b/10b encoding and protocol limitations (a SAS cable provides four such channels).

    Read the article

  • Certain banking pages not loading

    - by Joseph Lee
    For some unknown reason, I am suddenly unable to access my accounts at several banking and credit sites. I have been a registered user at each site for several years and know I am using the correct user ID and password. Yet, after entering the data, answering security questions, and clicking the submit button, I land on a page with an error message saying there is a technical problem preventing me from accessing my account. On one site, I end up at the sign-in page repeatedly. I am never told that my ID/password are incorrect. I believe it may be firewall related. Windows Firewall was damaged after a recent malware attack, and I am now using a third-party firewall (Fort Knox). I am not seeing a pop-up indicating sites are blocked or asking me to indicate yes or no. I am using Windows 7 Home Premium. I get the same result regardless of the browser; I switched to Maxthon last night and am getting the same result. This is not happening at other sites, and I am able to access some banking sites normally. This is frustrating because I need to make payments and have gone paperless. Any feedback will be appreciated. ---- Joe ----

    Read the article

  • Attempting to update Amazon Route53 using a script, but domain is not being updated

    - by ks78
    I have several Amazon EC2 instances, running Ubuntu 10.04, with which I'd like to use Amazon's Route53. I set up a script as described in Shlomo Swidler's article, but I'm still missing something. When the script runs, it doesn't return any output, which I initially assumed meant it ran correctly. However, when I check the DNS records using MyR53DNS, there are no entries for my instances. Here's my script:

        #!/bin/tcsh -f
        set root=`dirname $0`
        setenv EC2_HOME /usr/lib/ec2-api-tools
        setenv EC2_CERT /etc/cron.route53/ec2_x509_cert.pem
        setenv EC2_PRIVATE_KEY /etc/cron.route53/ec2_x509_private.pem
        setenv AWS_ACCESS_KEY_ID myaccesskeyid
        setenv AWS_SECRET_ACCESS_KEY mysecretaccesskey

        /user/bin/ec2-describe-instances | \
          perl -ne '/^INSTANCE\s+(i-\S+).*?(\S+\.amazonaws\.com)/ \
            and do { $dns = $2; print "$1 $dns\n" }; /^TAG.+\sShortName\s+(\S+)/ \
            and print "$1 $dns\n"' | \
          perl -ane 'print "$F[0] CNAME $F[1] --replace\n"' | \
          xargs -n 4 $/etc/cron.route53/cli53/cli53.py \
            rrcreate -x 60 mydomain.com

    Does anyone see a problem with this script? If it's not the script, what else could be preventing my Route53 domain from being updated? I am using Security Groups to IP-restrict the instances. I've tried opening port 53, but that didn't seem to have an effect. Is there another port that Route53 uses? I'd appreciate any help or guidance the ServerFault community can offer. Let me know if you need any further info.
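    Since the pipeline fails silently, a quick way to see which stage loses the data is to capture the intermediate output. The sketch below uses made-up temporary file names and assumes ec2-describe-instances is on the PATH with the same credentials the cron job uses:

        # save the raw API output, then check that the two things the perl filters rely on are present
        ec2-describe-instances > /tmp/r53-instances.txt || echo "describe-instances itself failed"
        grep -c '^INSTANCE' /tmp/r53-instances.txt     # should equal the number of instances
        grep -c 'ShortName' /tmp/r53-instances.txt     # records are only emitted for instances with a ShortName tag

    If either count is zero, nothing ever reaches cli53, which would match the "no output, no records" behaviour.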

    Read the article

  • RPCSS kerberos issues on imaged Windows workstations

    - by sysadmin1138
    While doing some unrelated troubleshooting I came across a set of Event Log entries that have me concerned:

        Machine Name: labcomputer82
        Source: Security-Kerberos
        Event ID: 4
        Event Description:
        The Kerberos client received a KRB_AP_ERR_MODIFIED error from the server labcomputer143$. The target name used was RPCSS/imagemaster4.ad.domain.edu. This indicates that the target server failed to decrypt the ticket provided by the client. This can occur when the target server principal name (SPN) is registered on an account other than the account the target service is using. Please ensure that the target SPN is registered on, and only registered on, the account used by the server. This error can also happen when the target service is using a different password for the target service account than what the Kerberos Key Distribution Center (KDC) has for the target service account. Please ensure that the service on the server and the KDC are both updated to use the current password. If the server name is not fully qualified, and the target domain (AD.DOMAIN.EDU) is different from the client domain (AD.DOMAIN.EDU), check if there are identically named server accounts in these two domains, or use the fully-qualified name to identify the server.

    There are three machine names used in this message. It's generated on labcomputer82, it's attempting to talk to another lab workstation called labcomputer143, and the service in question (RPCSS) refers to the name of the machine that this machine was imaged from (and possibly also that of labcomputer143, I'm not sure). The thing that has me raising both eyebrows is that the machine named labcomputer82 is attempting to use an SPN of RPCSS/imagemaster4.ad.domain.edu. The SPN attribute on the computer object in AD looks just fine; it has all the names it should have. Of the over 3,000 computer objects in our AD domain, somewhere around 1,700 of them are computer-lab seats that are frequently imaged. If we're doing something wrong, I'd like to know in time to get our procedures modified (and people retrained) for fall quarter. But if this is normal for imaged machines, I'll just continue ignoring these.

    Read the article

  • COM+/Desktop Heap errors in IIS affecting sites at random?

    - by tresstylez
    We have a Win2K3 server that is hosting 30+ sites. Each site is configured with its own unique application pool -- so that we can manually recycle specific sites if needed without killing sessions for the others. From what I've read, the consequence of this type of setup is that each application pool worker process gets allocated a desktop heap (normally 512 KB), which limits the number of app pools we can serve: http://blogs.msdn.com/b/david.wang/archive/2006/01/25/security-considerations-of-usesharedwpdesktop-on-iis6.aspx
    Problem: what we're seeing is that occasionally COM+ errors get triggered, presumably by hitting our 512 KB desktop heap limit -- and certain sites become unresponsive (or throw errors) until we manually recycle that specific app pool. I know that I can increase the desktop heap limit to 1024 KB and make other tweaks, but I've been tasked with finding out what exactly causes one site's heap to max out as opposed to another's. It seems that when we start seeing COM+ errors, the sites affected are random -- small sites or big (more heavily used) sites. Is it based on process ID? Traffic? Any pointers on understanding this a little more would be excellent. Thanks! jg

    Read the article

  • Create SAMBA node trust relationship to Windows 2003 PDC server

    - by Rod Regier
    I am having problems creating a trust relationship between an OpenVMS/IA64 node running V/IA64 8.3-1H1, TCPIP 5.6 ECO 5, CIFS 1.1 ECO1 PS11 (Samba 3.0.28a) and a Windows 2003 server running as a PDC. I do have two other OpenVMS/Alpha nodes running V/A 8.3, TCPIP 5.6 ECO 4, CIFS 1.1 ECO1 PS10 (Samba 3.0.28a) with working trust relationships to the same Windows 2003 server. Looking for assistance in resolving the trust "handshake".
    Details from the failing node. Unless otherwise noted, the corresponding files on the working nodes are similar or identical.
    SMB.CONF extract:

        [global]
        server string = Samba %v running on %h (OpenVMS)
        workgroup = WILMA
        netbios name = %h
        security = DOMAIN
        encrypt passwords = Yes
        name resolve order = lmhosts host wins bcast
        Password server = *
        log file = /samba$log/log.%m
        printcap name = /sys$manager/ucx$printcap.dat
        guest account = DYMAX
        print command = print %f/queue=%p/delete/passall/name="""""%s"""""
        lprm command = delete/entry=%j
        map archive = No
        printing = OpenVMS

    net rpc testjoin:

        [2010/08/13 16:09:28, 0] SAMBA$SRC:[SOURCE.RPC_CLIENT]CLI_PIPE.C;1:(2443)
          get_schannel_session_key: could not fetch trust account password for domain 'WILMA'
        [2010/08/13 16:09:28, 0] SAMBA$SRC:[SOURCE.UTILS]NET_RPC_JOIN.C;1:(72)
          net_rpc_join_ok: failed to get schannel session key from server W2K3AD2 for domain WILMA. Error was NT_STATUS_CANT_ACCESS_DOMAIN_INFO
        Join to domain 'WILMA' is not valid

    net rpc join "-Uaccount%password":

        tdb_open_isam: error verifying status of file SAMBA$ROOT:[PRIVATE]secrets.tdb
        tdb_open_isam: errno value = 1
        [2010/08/13 16:21:13, 0] SAMBA$SRC:[SOURCE.PASSDB]SECRETS.C;1:(72)
          Failed to open /SAMBA$ROOT/PRIVATE/secrets.tdb
        [2010/08/13 16:21:13, 0] SAMBA$SRC:[SOURCE.UTILS]NET_RPC.C;1:(322)
          error storing domain sid for WILMA
        tdb_open_isam: error verifying status of file SAMBA$ROOT:[PRIVATE]secrets.tdb
        tdb_open_isam: errno value = 1
        [2010/08/13 16:21:13, 0] SAMBA$SRC:[SOURCE.PASSDB]SECRETS.C;1:(72)
          Failed to open /SAMBA$ROOT/PRIVATE/secrets.tdb
        [2010/08/13 16:21:13, 0] SAMBA$SRC:[SOURCE.UTILS]NET_RPC_JOIN.C;1:(409)
          error storing domain sid for WILMA
        Unable to join domain WILMA.

    Example from one of the working nodes:

        net rpc testjoin
        Join to 'WILMA' is OK

    Read the article

  • Phpmyadmin location for nginx

    - by multiformeinggno
    I installed nginx and phpMyAdmin. I set up a domain with these parameters to test phpMyAdmin:

        server {
            listen 80;
            server_name domain.com;
            root /usr/share/phpmyadmin;
            index index.php;
            fastcgi_index index.php;

            location ~ \.php$ {
                include /etc/nginx/fastcgi.conf;
                fastcgi_param SCRIPT_FILENAME /usr/share/phpmyadmin$fastcgi_script_name;
                fastcgi_pass unix:/var/run/php5-fpm.sock;
            }
        }

    And everything works properly (if I visit the domain I can log in to phpMyAdmin). The problem is that this was just for testing phpMyAdmin; now I'd like to move it to my 'default' site, but I can't figure out how to serve it at /phpmyadmin. Here's the config for the 'default' nginx site (where I'd like to put this /phpmyadmin location):

        server {
            server_name blabla;

            access_log /var/log/nginx/$host.access.log;
            error_log /var/log/nginx/error.log;

            root /var/www/default;
            index index.php index.html;

            location / {
                try_files $uri $uri/ index.php;
            }

            location ~ \.php$ {
                include /etc/nginx/fastcgi.conf;
                fastcgi_pass unix:/var/run/php5-fpm.sock;
            }

            ### NginX Status
            location /nginx_status {
                stub_status on;
                access_log off;
            }

            ### FPM Status
            location ~ ^/(status|ping)$ {
                fastcgi_pass unix:/var/run/php5-fpm.sock;
                access_log off;
            }
        }
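    A minimal sketch of one commonly used workaround, assuming the paths above (/usr/share/phpmyadmin and /var/www/default) and no restrictive open_basedir setting: symlink phpMyAdmin into the existing document root so the default site's "location ~ \.php$" block picks it up without a dedicated /phpmyadmin location.

        # expose phpMyAdmin under the default site's document root
        sudo ln -s /usr/share/phpmyadmin /var/www/default/phpmyadmin

    The alternative is a dedicated "location /phpmyadmin" block with its own alias and SCRIPT_FILENAME handling, which tends to be fiddlier to get right, so the symlink is often the first thing people try.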

    Read the article

  • can't find port 22 traffic under VirtualBox

    - by telliott99
    I'm trying to learn to use tcpdump, and thought I'd eavesdrop on my own ssh login. The setup is a bit unusual: I have OS X Lion running VirtualBox, with Ubuntu running in the VM. I have ssh enabled and can log in from OS X normally:

        > ssh -p 22 10.0.1.2 -l telliott
        Welcome to Ubuntu 11.10 (GNU/Linux 3.0.0-17-generic i686)

         * Documentation:  https://help.ubuntu.com/

        0 packages can be updated.
        0 updates are security updates.

        Last login: Sat Mar 31 19:54:36 2012 from toms-mac-mini.local
        telliott@U32:~$ logout
        Connection to 10.0.1.2 closed.
        >

    I have not obfuscated the ssh port on Ubuntu. From OS X, stroke gives what I expect:

        > ./stroke 10.0.1.2 22 22
        Port Scanning host: 10.0.1.2
             Open TCP Port:     22          ssh

    So from OS X I do:

        > sudo tcpdump -i en1 -v port 22
        Password:
        tcpdump: listening on en1, link-type EN10MB (Ethernet), capture size 65535 bytes

    Then I log in from OS X to Ubuntu using ssh, but I see nothing with tcpdump. Here is ifconfig from Ubuntu:

        telliott@U32:~$ ifconfig
        eth1      Link encap:Ethernet  HWaddr 08:00:27:d7:ba:0e
                  inet addr:10.0.1.2  Bcast:10.0.1.255  Mask:255.255.255.0
                  inet6 addr: fe80::a00:27ff:fed7:ba0e/64 Scope:Link
                  UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
                  RX packets:799 errors:0 dropped:0 overruns:0 frame:0
                  TX packets:465 errors:0 dropped:0 overruns:0 carrier:0
                  collisions:0 txqueuelen:1000
                  RX bytes:96863 (96.8 KB)  TX bytes:68638 (68.6 KB)

    Where are the packets I was hoping to see? Thanks for any help.
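    For concreteness, a hedged sketch of how one might go hunting for the traffic (interface names other than en1/lo0 are assumptions about the VirtualBox setup, not something from the question): packets between the host and a VM on the same physical box may never cross en1, depending on whether the VM's adapter is bridged, NAT, or host-only.

        # list every interface tcpdump can capture on (look for vboxnet*, bridge*, etc.)
        sudo tcpdump -D

        # try the loopback and any VirtualBox host-only interface as well as en1
        sudo tcpdump -i lo0 -nn port 22
        sudo tcpdump -i vboxnet0 -nn port 22   # only exists if a host-only network is configured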

    Read the article

  • SqlCmd : Login timeout expired from localhost

    - by mschr
    I've set up the SQLEXPRESS instance via the SQL Server 2008 R2 installation and added a security login with all server roles, one called 'sqluser'. The server authentication is SQL Server and Windows Authentication mode. However, when I specify the -S property, login fails. There is no firewall enabled and SQL Server even accepts connections from remote hosts.

        C:\Users\user>sqlcmd -U sqluser -P qwerty -Q "Select * FROM testdb.dbo.testtable"

    Output:

        integer
        -------
        1
        2
        3
        4
        (4 rows affected)

    However, when specifying 'localhost' the query fails... the question is: why?

        C:\Users\user>sqlcmd -S localhost/sqlexpress -U cpt -P 1234 -Q "Select * FROM cpt.dbo.testme"

    Output:

        HResult 0x43, Level 16, State 1
        Named Pipes Provider: Could not open a connection to SQL Server [67].
        Sqlcmd: Error: Microsoft SQL Server Native Client 10.0 : A network-related or instance-specific error .....
        Sqlcmd: Error: Microsoft SQL Server Native Client 10.0 : Login timeout expired.

    Replacing 'localhost' with '%COMPUTERNAME%' gives the same result, if anyone is wondering. The server is running as a LocalSystem instance.

    Read the article

  • Outbound HTTP performance tuning recommendations

    - by Richard Gadsden
    I'll detail my exact setup below, but general recommendations for a better web-browsing experience will be useful. A nice checklist of things to try would be great! I have 600 users on a single site with an 8MB leased line, and I get a lot of moans about the performance of "the internet" (i.e. web browsing). What recommendations do the community have for speeding things up without just throwing more bandwidth at it? I expect I will end up buying some more, but good management tips are always valuable.
    My setup is this: a Cisco PIX (515E) firewall on the edge of the network. It's just doing some basic NAT and opening up a handful of ports to various bastion hosts (aka DMZ servers). The DMZ is just a switch that the servers are plugged into. An ISA 2006 Enterprise array (two servers) connects the DMZ to the internal LAN, with WebSense Web Security filtering HTTP traffic so users can't look at porn or waste bandwidth on YouTube during working hours.
    I've done a few things already. I've just switched my internal DNS over to use root hints, which halved DNS query latency from 500ms to 250ms -- well worth doing. I'm trying to cache more aggressively, but so much more of the internet is AJAXy and doesn't cache very well compared to five years ago. Plus the 70GB of cache, which felt like a lot a few years ago, really isn't any more. I'm getting about 45% cache hits by number of requests, but only about 22% by size, i.e. larger objects are less likely to be cached.
    Latency seems to be part of the problem. Is that attributable to the bandwidth problem, or are there things I can look at to try to reduce latency even on heavily-loaded bandwidth?
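    One cheap way to put numbers on the latency-vs-bandwidth question is to time the individual phases of a fetch from a client behind the proxy, and again from the ISA host itself; the URL below is just a placeholder. If connect and time-to-first-byte stay high even when the line is quiet, latency (DNS, proxy processing, TCP setup) is the bottleneck rather than raw bandwidth.

        curl -o /dev/null -s -w 'dns:%{time_namelookup}s  connect:%{time_connect}s  ttfb:%{time_starttransfer}s  total:%{time_total}s\n' http://www.example.com/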

    Read the article

  • administrator user unable to login, suspicious user accounts "sky$", "admin$"

    - by mks
    I have Windows 2008 R2 Standard (64-bit) running in a virtual machine. Suddenly, from yesterday onwards, I am not able to log in as Administrator; nobody changed the password. I am unable to log in both at the console and using Remote Desktop. Whenever I log in as Administrator I get this error: "The user name or password is incorrect". Nothing has changed on the machine, and I have logged in successfully in the past, both through the console and via Remote Desktop, several times on the same machine.
    One strange behaviour I noticed is that I am seeing some additional user accounts if I try to log in as another user. The suspicious user accounts are:

        sky$
        admin$
        SUPPORT_388945a0

    Were these created by some malware/virus? Or are they hidden Windows accounts? Microsoft's site says that SUPPORT_388945a0 is:

        The Support_388945a0 account enables Help and Support Service interoperability with signed scripts. This account is primarily used to control access to signed scripts that are accessible from within Help and Support Services. Administrators can use this account to delegate the ability for an ordinary user, who does not have administrative access over a computer, to run signed scripts from links embedded within Help and Support Services. These scripts can be programmed to use the Support_388945a0 account credentials instead of the user's credentials to perform specific administrative operations on the local computer that otherwise would not be supported by the ordinary user's account. When the delegated user clicks on a link in Help and Support Services, the script executes under the security context of the Support_388945a0 account. This account has limited access to the computer and is disabled by default.

    However, I am not sure where "admin$" and "sky$" came from. Has anyone had a similar experience?

    Read the article

  • Validating signature trust with gpg?

    - by larsks
    We would like to use gpg signatures to verify some aspects of our system configuration management tools. Additionally, we would like to use a "trust" model where individual sysadmin keys are signed with a master signing key, and then our systems trust that master key (and use the "web of trust" to validate signatures by our sysadmins). This gives us a lot of flexibility, such as the ability to easily revoke the trust on a key when someone leaves, but we've run into a problem: while the gpg command will tell you if a key is untrusted, it doesn't appear to return an exit code indicating this fact. For example:

        # gpg -v < foo.asc
        Version: GnuPG v1.4.11 (GNU/Linux)
        gpg: armor header:
        gpg: original file name=''
        this is a test
        gpg: Signature made Fri 22 Jul 2011 11:34:02 AM EDT using RSA key ID ABCD00B0
        gpg: using PGP trust model
        gpg: Good signature from "Testing Key <[email protected]>"
        gpg: WARNING: This key is not certified with a trusted signature!
        gpg:          There is no indication that the signature belongs to the owner.
        Primary key fingerprint: ABCD 1234 0527 9D0C 3C4A  CAFE BABE DEAD BEEF 00B0
        gpg: binary signature, digest algorithm SHA1

    The part we care about is this:

        gpg: WARNING: This key is not certified with a trusted signature!
        gpg:          There is no indication that the signature belongs to the owner.

    The exit code returned by gpg in this case is 0, despite the trust failure:

        # echo $?
        0

    How do we get gpg to fail in the event that something is signed with an untrusted signature? I've seen some suggestions that the gpgv command will return a proper exit code, but unfortunately gpgv doesn't know how to fetch keys from keyservers. I guess we can parse the status output (using --status-fd) from gpg, but is there a better way?
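    Since the question already mentions --status-fd, here is a minimal sketch of that route (the file name and the choice of which TRUST_* levels to accept are assumptions/policy, not anything mandated by gpg): the machine-readable status stream includes both a GOODSIG line and a TRUST_* line, so a wrapper can require both.

        #!/bin/bash
        # verify foo.asc and fail unless the signature is good AND the key is trusted
        status=$(gpg --status-fd 1 --verify foo.asc 2>/dev/null)
        echo "$status" | grep -q  '^\[GNUPG:\] GOODSIG '               || exit 1
        echo "$status" | grep -qE '^\[GNUPG:\] TRUST_(FULLY|ULTIMATE)' || exit 1

    The status keywords are documented in GnuPG's doc/DETAILS file; TRUST_MARGINAL, TRUST_UNDEFINED and TRUST_NEVER can be handled differently if the policy calls for it.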

    Read the article

  • mdadm+zfs vs mdadm+lvm

    - by Alex
    This may be a naive question since I'm new to this and I cannot find any results about mdadm+zfs, but after some testing it seems it might work. The use case is a server with RAID6 for some data that is backed up somewhat infrequently. I think I'm well served by either ZFS or RAID6. The platform is Linux. Performance is secondary. So the two setups I am considering are:
    - A RAID6 array plus regular LVM and ext4
    - A RAID6 array plus ZFS (without redundancy)
    It is this second option that I don't see discussed at all. Why ZFS+RAID6? It's mainly because of the inability of ZFS to grow a raidz2 with new disks. You can replace disks with larger ones, I know, but not add another disk. You can accomplish 2-disk redundancy and ZFS disk growth using mdadm as the redundancy layer. Besides that main point (otherwise I could go directly to raidz2 without RAID under it), these are the pros and cons that I see for each option:
    - ZFS has snapshots without preallocated space. LVM requires preallocation (this might no longer be true).
    - ZFS has checksumming (I'm very interested in this) and compression (a nice bonus).
    - LVM has online filesystem growth (ZFS can do it offline with export/mdadm --grow/import).
    - LVM has encryption (ZFS-on-Linux does not). This is the only major con of this combo that I see. I guess I could go RAID6+LVM+ZFS... seems too heavy, or not?
    So, to close with a proper question:
    1) Is there anything that inherently discourages or precludes RAID6+ZFS? Does anyone have experience with a setup like this?
    2) Are there possibilities for checksumming and compression that would make ZFS unnecessary (maintaining the possibility of filesystem growth)? Because the RAID6+LVM combo seems the sanctioned, tested way.
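    For concreteness, a rough sketch of the layering being discussed, with placeholder device names and only illustrative options (an assumption about how one would wire it up, not a tested recipe): md provides the double-parity redundancy and ZFS sits on the single resulting block device, so ZFS still checksums and compresses but has no redundancy of its own to repair from.

        # 6-disk RAID6 under md, then a single-vdev pool on top
        sudo mdadm --create /dev/md0 --level=6 --raid-devices=6 /dev/sd[b-g]
        sudo zpool create -o ashift=12 tank /dev/md0
        sudo zfs set compression=on tank     # checksumming is on by default

        # later growth would be mdadm --grow on md0 followed by expanding the pool,
        # e.g. zpool online -e tank /dev/md0, or the export/import route mentioned above

    One caveat worth stating: with a single non-redundant vdev, ZFS can detect corruption via checksums but cannot self-heal it; repair would have to come from backups.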

    Read the article

  • How to generate customized sudoers files in puppet depending on the environment they're deployed to?

    - by gozu
    The sysadmins are present in the sudoers files of all environments, but other sudoers are not. Different environments all have slightly different sudoers: most of the time, 90% of users are the same and 10% vary, so we cannot have only one sudoers file for everything. Right now we are using Puppet with 10 different files, with names like sudoers.production1, sudoers.production2, sudoers.production3, sudoers.testing1, sudoers.staging1 and so forth. Puppet then picks the file to deploy based on the server's $domain (e.g. dbserver.staging1.acme.com) or $hardwaremodel. It works fine, but it's a nightmare to maintain so many files. I'd like to autogenerate sudoers files based on the server's domain and have only one big file with all the sudoers permissions for all users and all environments. Something that looks like:

        User_Alias ADMINS = abe, bob, carol, dave
        case $domain {
            "staging1.acme.com" {
                #add dev1,dev2,tester1,tester2 to sudoers file
            }
            "testing2.acme.com" {
                #add tester1, tester3, tester4 to sudoers file
            }

    What's the best way to go about this? Suggestions for alternatives are welcome. I'd appreciate any tips.
    Update 1: For security reasons, we'd rather not concatenate a bunch of files from a folder located on a puppet client, in case someone puts a file in there (maliciously or not) and either breaks the combined file or inserts something into it. Most importantly, for usability, we'd like to keep the number of sudoers-related files (fragment or complete) on the puppet server to either 3 (prod/stage/test) or preferably 1 file. This file would (somehow) generate sudoers files on the puppet server and send one customized file to each puppet client. The purpose of this is that searching for a username in a single file and removing it is quicker than doing it in 11 files. When adding a user to a bunch of environments it won't be as quick, but only one file would need to be opened and looked at, greatly reducing the chances of an omission. Our sudo version is 1.6.9p8, so we can't use a /sudoers.d folder, only a single sudoers file.

    Read the article
