Search Results

Search found 22139 results on 886 pages for 'security testing'.


  • Trying to understand Wireless N vs Wireless AC

    - by EGHDK
    Whenever a new wireless standard gets approved you expect faster speeds and longer range. From everything that I've read about it, it seems that AC will only transfer over the 5GHz band, at up to 3Gbps. Looking at the new AC routers on the market, though, it seems they will transfer over both 5GHz and 2.4GHz, and the 5GHz band only reaches 1.3Gbps, which isn't what AC is supposed to be. I know there is a difference between what the standard actually says and what products will actually do, but is there any reason for this? Are there any other main differences between AC and N? I've heard people discussing AC and saying that it finally "fixes" what N was supposed to fix; what do they mean by that? Any security benefits? I have also seen an image online making claims about AC. Will AC really do that? Will it require an AC network card in my laptop for that to actually happen? Lastly, will the router only be able to communicate with AC devices if I have beamforming technology on? I know it's a ton of questions, but most articles online seem to be outdated and not very reliable.

    Read the article

  • Nginx Slower than Apache??

    - by ichilton
    Hi, I've just set up two identical Rackspace Cloud instances and am doing some comparisons and benchmarks between Apache and Nginx. I'm testing with a 3.4k PNG file; I started with 512MB server instances but have now moved to 1024MB instances. I'm very surprised to see that whatever I try, Apache seems to consistently outperform Nginx... what am I doing wrong?

    Nginx:

        Server Software:        nginx/0.8.54
        Server Port:            80
        Document Length:        3400 bytes
        Concurrency Level:      100
        Time taken for tests:   2.320 seconds
        Complete requests:      1000
        Failed requests:        0
        Write errors:           0
        Total transferred:      3612000 bytes
        HTML transferred:       3400000 bytes
        Requests per second:    431.01 [#/sec] (mean)
        Time per request:       232.014 [ms] (mean)
        Time per request:       2.320 [ms] (mean, across all concurrent requests)
        Transfer rate:          1520.31 [Kbytes/sec] received

        Connection Times (ms)
                      min  mean[+/-sd] median   max
        Connect:        0   11  15.7      3     120
        Processing:     1   35  76.9     20    1674
        Waiting:        1   31  73.0     19    1674
        Total:          1   46  79.1     21    1693

        Percentage of the requests served within a certain time (ms)
          50%     21
          66%     39
          75%     40
          80%     40
          90%     98
          95%    136
          98%    269
          99%    334
         100%   1693 (longest request)

    And Apache:

        Server Software:        Apache/2.2.16
        Server Port:            80
        Document Length:        3400 bytes
        Concurrency Level:      100
        Time taken for tests:   1.346 seconds
        Complete requests:      1000
        Failed requests:        0
        Write errors:           0
        Total transferred:      3647000 bytes
        HTML transferred:       3400000 bytes
        Requests per second:    742.90 [#/sec] (mean)
        Time per request:       134.608 [ms] (mean)
        Time per request:       1.346 [ms] (mean, across all concurrent requests)
        Transfer rate:          2645.85 [Kbytes/sec] received

        Connection Times (ms)
                      min  mean[+/-sd] median   max
        Connect:        0    1   3.7      0      27
        Processing:     0    3   6.2      1      29
        Waiting:        0    2   5.0      1      29
        Total:          1    4   7.0      1      29

        Percentage of the requests served within a certain time (ms)
          50%      1
          66%      1
          75%      1
          80%      1
          90%     17
          95%     19
          98%     26
          99%     27
         100%     29 (longest request)

    I'm currently using worker_processes 4; and worker_connections 1024; but I've tried and benchmarked different values and see the same behaviour with all of them. I just can't get it to perform as well as Apache, and from what I've read previously, I'm shocked by this! Can anyone give any advice? Thanks, Ian
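
    For reference, a run like the ones above can be reproduced with ApacheBench; a minimal sketch, assuming the test image sits at /test.png on each server (hypothetical path):

        # 1000 requests, 100 concurrent, against the static PNG
        ab -n 1000 -c 100 http://<server-ip>/test.png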

    Read the article

  • apache 2.4, mod_proxy_fcgi not honouring .htaccess, workaround needed

    - by user229874
    I am using Apache 2.4.7 with mod_proxy_fcgi for the purpose of passing PHP through to php-fpm (this will be used for a shared hosting environment). The .htaccess works fine for non-PHP files, but once a request hits the rewrite rule that proxies PHP requests through, the .htaccess is ignored. I know why it is happening. The question is: how do I work around it? How do I force Apache to treat the request for a PHP file as a request for a local file, and then proxy it through? I have spent substantial time researching this problem, and the following "answers" were given as solutions:

    1) "Use Apache configuration instead of .htaccess": a valid solution, but not for a shared hosting environment (I am not going to give shared hosting customers access to the Apache configuration ;)).
    2) "Don't use .htaccess, as it has performance/security/other issues": well, how else would shared hosting customers control access and URL rewriting on their sites? Besides, if .htaccess were not a requirement, I would simply use nginx.
    3) "Put the rewrite rule for the proxy inside of ": this is incorrect, and it does not work.

    This behaviour appears to be not a bug but a "feature", as per https://issues.apache.org/bugzilla/show_bug.cgi?id=54887
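
    One approach worth sketching: keep PHP requests mapped to the filesystem, so per-directory .htaccess files are still consulted, by handing them to the proxy with SetHandler instead of a vhost-level RewriteRule. This assumes php-fpm listening on 127.0.0.1:9000 and an Apache new enough to accept a proxy URL in SetHandler (2.4.10 or later, so newer than the 2.4.7 above):

        <FilesMatch "\.php$">
            SetHandler "proxy:fcgi://127.0.0.1:9000"
        </FilesMatch>

    Because the request is still mapped to a real file before the handler runs, per-directory configuration applies normally.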

    Read the article

  • Google: "302 Moved" in Firefox

    - by Virtlink
    For some Google search queries executed from the Firefox search bar, or entered manually by typing in the URL, I get a "302 Moved" page. I did a quick virus scan and checked the hosts file and the plugins and add-ons that are installed in Firefox; nothing is out of the ordinary. What could be the problem? These URLs (and any URL with google.com, empty and firefox-a in them) show me a 302 Moved page:

        https://www.google.com/search?q=c%23+empty+array&client=firefox-a
        https://www.google.com/search?q=empty&client=firefox-a

    whereas these URLs work fine:

        https://www.google.com/search?q=c%23+empty+array (no firefox-a)
        https://www.google.com/search?q=empty (no firefox-a)
        https://www.google.com/search?q=c%23+array&client=firefox-a (no empty)
        https://www.google.nl/search?q=c%23+empty+array&client=firefox-a (no .com)

    By default, Google redirects my queries to their .nl website and uses HTTPS. I am currently running a full system virus scan using Security Essentials. My Firefox plugins were up to date, and no unfamiliar Firefox plugins or add-ons were found. Restarting Firefox did not solve the issue. The issue does not occur in Internet Explorer. The hosts file did not contain any unfamiliar entries.

    Read the article

  • DEB: "Provides:" field ignored

    - by Creshal
    I need to replace a package with a custom one, which gets its own name (foo-origpackage). To allow it to be used as a drop-in replacement, I added the Provides: origpackage line to the control file. apt-cache show foo-origpackage lists the "Provides" entry just fine. However, when I want to install a package depending on origpackage, it fails ("Package origpackage not installed"). Is there some distinction between "real" and virtual packages I'm missing?

    EDIT: To be precise, what I want to replace is xen-utils-common for Squeeze. My tao-xen-utils-common has the following control file:

        Source: tao-xen-utils-common
        Section: kernel
        Priority: optional
        Maintainer: Creshal <[email protected]>
        Build-Depends: debhelper
        Standards-Version: 3.8.0
        Homepage: http://tao.at

        Package: tao-xen-utils-common
        Architecture: all
        Depends: gawk, lsb-base, udev, xenstore-utils, tao-firewall
        Provides: xen-utils-common
        Conflicts: xen-utils-common
        Replaces: xen-utils-common
        Description: Xen administrative tools - common files (modified)
         The userspace tools to manage a system virtualized through the
         Xen virtual machine monitor. Modified for use with TAO Firewall.

    Installing xen-utils-4.0 fails, however:

        foo@bar# apt-cache showpkg tao-xen-utils-common
        Package: tao-xen-utils-common
        Versions:
        4.0.0-1tao1 (/var/lib/apt/lists/repo.tao.at_dists_stable_main_binary-amd64_Packages) (/var/lib/dpkg/status)
         Description Language:
                         File: /var/lib/apt/lists/repo.tao.at_dists_stable_main_binary-amd64_Packages
                          MD5: 7c2503f563fca13b33b4eb3cbcb3c129

        Reverse Depends:
          tao-firewall,tao-xen-utils-common
          tao-firewall,tao-xen-utils-common
        Dependencies:
        4.0.0-1tao1 - gawk (0 (null)) lsb-base (0 (null)) udev (0 (null)) xenstore-utils (0 (null)) tao-firewall (0 (null)) xen-utils-common (0 (null)) xen-utils-common (0 (null))
        Provides:
        4.0.0-1tao1 - xen-utils-common
        Reverse Provides:

        foo@bar# apt-get install xen-utils-4.0
        Reading package lists... Done
        Building dependency tree
        Reading state information... Done
        The following extra packages will be installed:
          xen-utils-common
        Suggested packages:
          xen-docs-4.0
        The following packages will be REMOVED:
          tao-xen-utils-common
        The following NEW packages will be installed:
          xen-utils-4.0 xen-utils-common

    Edit:

        foo@bar# apt-cache policy xen-utils-4.0
        xen-utils-4.0:
          Installed: (none)
          Candidate: 4.0.1-4
          Version table:
             4.0.1-4 0
                500 http://ftp.at.debian.org/debian/ stable/main amd64 Packages
             4.0.1-4 0
                500 http://security.debian.org/ stable/updates/main amd64 Packages
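
    One possibly relevant detail, with a cheap check sketched below: dpkg of the Squeeze era cannot satisfy a versioned dependency with a plain Provides (support for versioned provides arrived only much later, in dpkg 1.17.11). If xen-utils-4.0 declares something like Depends: xen-utils-common (>= 4.0.0), the virtual package will never match:

        # does xen-utils-4.0 require a versioned xen-utils-common?
        apt-cache show xen-utils-4.0 | grep -i '^Depends'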

    Read the article

  • Chrome & Internet Explorer won't connect after infection, Firefox works

    - by Zack
    Hi guys, please help, I'm pretty new here. I'm having problems: I cannot connect with Chrome or Internet Explorer, but Firefox works fine. It seems to have happened when I was infected by a "Trojan Horse Generic 17.BWIK" and a "Trojan Horse SHeur.UHL" while replying to a post in a thread I started. I have removed the threat and got Firefox working, or so I think, but Chrome and IE still cannot connect. I do not want to lose my Chrome history, so resetting it would be my last option, and uninstalling and reinstalling is out of the question. Is there a way around this? I am using XP Pro on a desktop and a DSL connection. Also be aware of "Fake_Antispyware.FAH", which I had on my computer; I just found out about it while writing this, according to my AVG anti-virus security. Please can you direct me to a cure. Thank you in advance.

    Read the article

  • SBS 2011 Essentials and too many new Mac users

    - by Harry Muscle
    We currently have about 15 users on a Windows SBS 2011 Essentials server. I've just been informed that we plan to bring aboard about 15 more users who will be using Macs. We'll be using a Mac server to manage the 15 new Macs; however, I'm looking for advice on how to best set this all up. Ideally I would just add the 15 new Mac users to Active Directory and set up the Mac server to authenticate against AD. Unfortunately, SBS 2011 Essentials has a limit of 25 users, so adding these new users to AD won't work unless we upgrade the Windows server (which I'd rather avoid, since it's a lot of work and a lot of money). That leaves the option of creating user accounts for these 15 Mac users on the Mac server only. The problem that creates, though, is how do I share files between Mac users and Windows users, since they would then be using different systems for network authentication? Any advice (short of upgrading to SBS Standard) is highly appreciated. Thanks, Harry. P.S. We don't run Exchange or anything else on our server; it's mainly used for file sharing and enforcing security via group policies.

    Read the article

  • VMware and Windows Activation

    - by Peter M
    Yesterday I installed Slysoft's Virtual CloneDrive in order to mount an ISO for a software installation on my host system (XP Pro SP3). This morning I fired up VMware and made a linked clone of an existing XP VM in order to do some software testing. This is the sort of thing that I do all the time, and the base XP VM that I clone was activated a long, long time ago. The surprise today was that the newly cloned VM was no longer activated, and XP cited major changes in hardware as the reason. I repeated the test with a full clone of the base system and got the same message. I then started up my base VM and it seemed to be activated, yet another VM (which I fully cloned from the base VM a long time ago) now started reporting that XP was not activated. At this point I guessed that Virtual CloneDrive might have been the source of my hardware differences, so I uninstalled it and rebooted. After this I made a new linked clone and a full clone of the base VM, and XP did not complain about not being activated. So I seem to be back to where I was before installing Virtual CloneDrive, with the exception that that one particular XP VM continues to complain about activation (even though 4 or 5 other XP VMs are fat and happy). Now to my questions: Why would adding Virtual CloneDrive to the host system affect XP activation in the VMs? From their point of view I would have thought that the environment had not changed, as I had not enabled any new hard drives in their systems. Or is adding a hard drive to the host system enough to upset XP activation? Since this event, one of my fully cloned VMs is still reporting that XP is not activated even though I have removed Virtual CloneDrive. Is there any way to convince XP that it is on the same system as yesterday? Or are my only options to redo the activation or restore the VM from a previous backup?

    Read the article

  • Extending home network using multiple access point originating from another access point

    - by cyberjar09
    I have a home network with the current setup (RJ45 running from the main router to an access point):

        +-------------+       +----------------+
        | 192.168.2.1 |       | 192.168.2.2    |
        |   router    +-------> access point 1 |
        +-----^-------+       +----------------+
              |
        +-----+--------+
        | 192.168.1.1  |
        |    modem     |
        +-----^--------+
              |
           +--+--+
           | ISP |
           +-----+

    However, I would like to extend the network to two more floors in the house via the existing access point (the router is too far away and not reachable using a network cable, hence I need to extend from the current access point). Please see the diagram below:

        +-------------+       +----------------+       +----------------+
        | 192.168.2.1 |       | 192.168.2.2    |       | 192.168.2.3    |
        |   router    +-------> access point 1 +-------> access point 2 |
        +-----^-------+       +--------+-------+       +----------------+
              |                        |
        +-----+--------+               |
        | 192.168.1.1  |               |
        |    modem     |               |       +----------------+
        +-----^--------+               +-------> 192.168.2.4    |
              |                                | access point 3 |
              |                                +----------------+
           +--+--+
           | ISP |
           +-----+

    Q1: is this setup possible?
    Q2: if possible, will I have to do anything different from what I did to set up access point 1?

    Edit 1: I am trying to study the dd-wrt documentation ("Linking Routers") to see which would be the correct mode of operation for me, but I'm confused because I don't see any info on how to use an existing access point to extend the signal of the SSID. If anyone could point me to the correct wiki for how I should set up AP2 and AP3 based on AP1, it would be very helpful. For AP1, I did the following:

        - Use a static IP and set up the same SSID as the primary wireless router
        - Use the same security as the primary wireless router
        - Make AP1 point to 192.168.2.1 (primary router) for DHCP

    Thanks in advance.

    Read the article

  • Gmail.com detects mail as spam, but the server is not on any blacklist

    - by Tomer W
    I have an issue with Google (Gmail, to be exact). About a month ago, we had a security breach and mail was relayed through our servers. We got listed in almost all blacklists :( We fixed the problem and requested removal from the blacklists, which was granted easily. Currently (for over 3 weeks), we are not sending any spam anymore; furthermore, we are clear on all the blacklists (MxToolBox Black-List Search Result). But Gmail still refuses to accept anything from the server, stating '550 Spam'. Following is a telnet attempt to send to Gmail:

        220 mx.google.com ESMTP g47si45436208eep.123
        helo megatec.co.il
        250 mx.google.com at your service
        mail from: <[email protected]>
        250 2.1.0 OK g47si45436208eep.123
        rcpt to: <[email protected]>
        250 2.1.5 OK g47si45436208eep.123
        Data
        354 Go ahead g47si45436208eep.123
        Test123
        .
        550-5.7.1 [62.219.123.33      11] Our system has detected that this message is
        550-5.7.1 likely unsolicited mail. To reduce the amount of spam sent to Gmail,
        550-5.7.1 this message has been blocked. Please visit
        550-5.7.1 http://support.google.com/mail/bin/answer.py?hl=en&answer=188131 for
        550 5.7.1 more information. g47si45436208eep.123
        Connection to host lost.

    I tried filling in the form at Gmail - Report Delivery Problem. I also tried reaching Google by phone, but was directed to the link mentioned above. I checked reverse DNS and it is OK. We don't have TLS, but that shouldn't be a problem, should it? Note: we are not a bulk sender. Does anyone have an idea what could be blocking our IP? Does anyone know who can be contacted to resolve this listing?
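
    One thing the checks above don't cover: Gmail weighs sender authentication (SPF/DKIM) heavily, independently of DNS blacklists. A quick way to see whether the sending domain publishes an SPF record, as a sketch (substitute the real sending domain):

        # look for a TXT record starting with "v=spf1"
        dig +short TXT megatec.co.il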

    Read the article

  • Concerns about Apache per-Vhost logging setup

    - by etienne
    I'm both senior developer and sysadmin in my company, so I'm trying to deal with the needs of both activities. I've set up our Apache box, which deals with 30-50 domains at the moment (and hopefully will grow larger) and hosts both production and development sites, with this directory structure:

        domains/
        domains/domain.ext/                                  # FTPS chroot for user domain.ext
        domains/domain.ext/public                            # the DocumentRoot of http://domain.ext
        domains/domain.ext/logs
        domains/domain.ext/subdomains/sub.domain.ext
        domains/domain.ext/subdomains/sub.domain.ext/public  # DocumentRoot of http://sub.domain.ext

    Each domain.ext vhost runs with its own dedicated user and group via mpm-itk, umask being 027, and the logs are stored via a piped sudo command, like this:

        ErrorLog "| /usr/bin/sudo -u nobody -g domain.ext tee -a domains/domain.ext/logs/sub.domain.ext_error.log"
        CustomLog "| /usr/bin/sudo -u nobody -g domain.ext tee -a domains/domain.ext/logs/sub.domain.ext_access.log" combined

    Now, I've read a lot about not letting the logs out of a very restricted directory, but the developers often need to take a quick look at a particular subdomain error log, and I don't really want to give them admin rights to look into /var/log. Having the logs available in the FTP account is really handy during development stages. Do you think this setup is viable and safe enough? To me it looks good, but I'm concerned about three security issues:

        - Is the sudo pipe enough to deal with symlink exploits? Any catches I'm missing?
        - Log DoS: the logs are on the same partition as all the domains. There are hundreds of gigs, but still, if one site gets disk-space DoS'd, everything will break. Any workaround? Will a short-cycled logrotate suffice?
        - File descriptor limits: AFAIK the default limit for Apache on Ubuntu Server is currently 8192, which should be plenty to handle 2 log files per subdomain. Is it? Am I missing something?

    I hope to read some thoughts on the matter!
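
    On the disk-space worry, a short-cycled logrotate does help; a minimal sketch, assuming the domains/ tree lives under /srv (adjust the path to the real root), dropped into /etc/logrotate.d/:

        /srv/domains/*/logs/*.log {
            daily
            rotate 7
            compress
            missingok
            notifempty
            copytruncate
        }

    copytruncate matters here: the piped tee processes keep the files open, and truncating in place avoids having to restart Apache on every rotation.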

    Read the article

  • VPN from Windows XP to OpenSwan: correct setup?

    - by Gnudiff
    The main question is: what am I doing wrong in my OpenSwan or L2TP client setup? I am trying to create a VPN connection to a Linux OpenSwan server from a Windows XP machine, using a preshared key and the built-in Windows XP L2TP/IPsec client. I have followed the instructions in the Linux Home Networking wiki for setting up OpenSwan, and a guide to making it work with the Windows XP client, but am now stuck. The network setup is as follows:

        [my windows client, private IP A]<->[f/wall B]<-internet->[g/w X]<->[Linux OpenSwan server Y]

        A - private subnet /24
        B - internet address
        X - internet address /24
        Y - internet address on same subnet as X

    What I essentially want is for computer A to work as if it were in X's subnet for purposes of outgoing and incoming TCP and UDP connections. My OpenSwan setup in /etc/ipsec.conf is as follows (AAA and YYY indicate the address parts of A and Y):

        conn net-to-net
            authby=secret
            left=B
            leftsubnet=AAA.AAA.AAA.0/24
            leftnexthop=%defaultroute
            right=Y
            rightsubnet=YYY.YYY.YYY.0/24
            rightnexthop=B
            auto=start

    The secret in /etc/ipsec.secrets is listed as:

        B Y : PSK "0xMysecretkey"

    where B and Y stand for the respective IP addresses of gateway B and Linux server Y. My Windows XP L2TP setup is:

        - IP of destination: Y
        - don't prompt for username
        - security options: typical; require secured password; don't require data encryption; IPsec PSK set to 0xMysecretkey
        - networking options: VPN type: L2TP IPsec VPN; TCP/IP protocol (with automatic IP address assignment) and QoS packet scheduler enabled

    The error I get from the Windows client is 789: "error during initial negotiation".
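
    One structural point worth sketching: the built-in Windows XP client speaks L2TP/IPsec, which expects a host-to-host, transport-mode IPsec SA protecting UDP port 1701, rather than a net-to-net tunnel, plus an L2TP daemon (e.g. xl2tpd) behind OpenSwan to hand out the inner address. The usual OpenSwan shape for that looks roughly like this (values are placeholders, not a drop-in config):

        conn l2tp-psk
            authby=secret
            type=transport
            left=Y                  # the OpenSwan server's public address
            leftprotoport=17/1701
            right=%any              # roaming Windows clients
            rightprotoport=17/%any
            pfs=no
            auto=add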

    Read the article

  • Windows 7 Folder Redirection (GPO)

    - by Kev
    I have been fighting this issue for a day or two now, so I am looking for some insight. I am taking over admin duties in a domain of 800 users, and the previous admins did not employ many GPO settings for the clients of the domain. In each site, there is a location on the file server where "home" folders were manually created, e.g. \\server\home\enduser. Whenever a user got a machine, the admin would manually right-click on the "My Documents" folder and manually enter the path to the home folder. We are planning to start putting Windows 7 machines on the network, and I want to automate as much as I can of everything that was not done in the past. Since everyone has an existing home folder, I have been trying to get Folder Redirection to work on a new Windows 7 machine (in a test OU). I am getting all kinds of errors, and I can't get the Windows 7 "Documents" folder to redirect to the users' existing home folders. As I stated earlier, all of the home folders were (and still are) manually created on the file server and are set with the following security permissions:

        - Domain Admins - Full Control
        - euser (end user) - Modify (everything but Full)

    Can someone point me in the right direction on the proper settings to put in the Folder Redirection GPO to get this to work with the existing home folders?

    Read the article

  • mod_rewrite REQUEST_FILENAME doesn't contain absolute path

    - by Paul Dixon
    I have a problem with a file test operation in a mod_rewrite RewriteCond entry which is testing whether %{REQUEST_FILENAME} exists. It seems that rather than %{REQUEST_FILENAME} being an absolute path, I'm getting a path which is rooted at the DocumentRoot instead.

    Configuration: I have this inside a <VirtualHost> block in my Apache 2.2.9 configuration:

        RewriteEngine on
        RewriteLog /tmp/rewrite.log
        RewriteLogLevel 5

        #push virtually everything through our dispatcher script
        RewriteCond %{REQUEST_FILENAME} !-f
        RewriteCond %{REQUEST_FILENAME} !-d
        RewriteRule ^/([^/]*)/?([^/]*) /dispatch.php?_c=$1&_m=$2 [qsa,L]

    Diagnostics attempted: that rule is a common enough idiom for routing requests for non-existent files or directories through a script. Trouble is, it's firing even if a file does exist. If I remove the rule, I can request normal files just fine, but with the rule in place, these requests get directed to dispatch.php.

    Rewrite log trace: here's what I see in rewrite.log:

        init rewrite engine with requested uri /test.txt
        applying pattern '^/([^/]*)/?([^/]*)' to uri '/test.txt'
        RewriteCond: input='/test.txt' pattern='!-f' => matched
        RewriteCond: input='/test.txt' pattern='!-d' => matched
        rewrite '/test.txt' -> '/dispatch.php?_c=test.txt&_m='
        split uri=/dispatch.php?_c=test.txt&_m= -> uri=/dispatch.php, args=_c=test.txt&_m=
        local path result: /dispatch.php
        prefixed with document_root to /path/to/my/public_html/dispatch.php
        go-ahead with /path/to/my/public_html/dispatch.php [OK]

    So it looks to me like REQUEST_FILENAME is being presented as a path from the document root rather than from the filesystem root, which is presumably why the file test operator fails. Any pointers for resolving this gratefully received...
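
    In <VirtualHost> context the rewrite phase runs before the URI is mapped to the filesystem, so %{REQUEST_FILENAME} still holds the URI path at that point; a common workaround is to build the path explicitly from the document root. A sketch of the same rule with that change:

        RewriteCond %{DOCUMENT_ROOT}%{REQUEST_URI} !-f
        RewriteCond %{DOCUMENT_ROOT}%{REQUEST_URI} !-d
        RewriteRule ^/([^/]*)/?([^/]*) /dispatch.php?_c=$1&_m=$2 [qsa,L]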

    Read the article

  • Where is the TFS database?

    - by Blanthor
    I've been using TFS 2010 with no problems. I tried adding a user and I got the following error message:

        TF30063: You are not authorized to access <serverName>\DefaultCollection.
        The remote server returned an error: (401) Unauthorized.

    I remoted into the server, <serverName>, and opened the TFS console. The logs mentioned a connection string:

        ConnectionString: Data Source=<serverName>\SS2008;Initial Catalog=Tfs_DefaultCollection;Integrated Security=True

    While remoted in, I opened SQL Server 2008 Management Studio, connecting to the (local) server with Windows Authentication. It shows the connection to be (local)(SQL Server 9.04.03 - <serverName>\Admin), and there is no Tfs_DefaultCollection database. Can someone tell me what is going on? Was I wrong in connecting to this instance of the database (i.e. is the log file the wrong place to find the connection string)? Is the database so corrupted that Management Studio cannot see it anymore, although TFS could? Should I be logging into Management Studio as user SS2008? By the way, I don't know of any such credentials.
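
    One detail worth chasing, sketched below: the connection string names a SQL Server "named instance", <serverName>\SS2008, while Management Studio was opened against "(local)", which is the default instance, a separate instance with its own set of databases. Connecting explicitly to the named instance should show whether Tfs_DefaultCollection lives there:

        sqlcmd -S <serverName>\SS2008 -E -Q "SELECT name FROM sys.databases"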

    Read the article

  • Samba PDC share slow with LDAP backend

    - by hmart
    The scenario: I have a SUSE SLES 11.1 SP1 machine as a Samba master PDC with an LDAP backend. One share holds the database files for a client-server application. I log XP and Windows 7 machines into the local domain (example.local); the login is a little slow but works. The client computers run an executable which opens, reads and writes the database files from the server share.

    The problem: when running Samba with the LDAP passdb backend, the client application runs VERY slowly, with a maximum transfer rate of about 2.5 Mbit per second. If I disable LDAP, the client app's speed increases 20x, with a transfer rate of 50 Mbit/sec, and it runs smoothly. I'm testing with just two users and two machines, so concurrency or LDAP size shouldn't be the problem here.

    The suspects: LDAP and the smb.conf [global] section configuration.

    The question: what can I do? I've googled a lot, but still have no answer.

    The slow smb.conf WITH LDAP:

        [global]
            workgroup = zmartsoft.local
            passdb backend = ldapsam:ldap://127.0.0.1
            printing = cups
            printcap name = cups
            printcap cache time = 750
            cups options = raw
            map to guest = Bad User
            logon path = \\%L\profiles\.msprofile
            logon home = \\%L\%U\.9xprofile
            logon drive = P:
            usershare allow guests = Yes
            add machine script = /usr/sbin/useradd -c Machine -d /var/lib/nobody -s /bin/false %m$
            domain logons = Yes
            domain master = Yes
            local master = Yes
            netbios name = server
            os level = 65
            preferred master = Yes
            security = user
            wins support = Yes
            idmap backend = ldap:ldap://127.0.0.1
            ldap admin dn = cn=Administrator,dc=zmartsoft,dc=local
            ldap group suffix = ou=Groups
            ldap idmap suffix = ou=Idmap
            ldap machine suffix = ou=Machines
            ldap passwd sync = Yes
            ldap ssl = Off
            ldap suffix = dc=zmartsoft,dc=local
            ldap user suffix = ou=Users
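
    One commonly suggested tweak for exactly this symptom, offered here only as a sketch since I can't verify it applies: with an LDAP passdb backend, Samba performs additional LDAP lookups per connection unless it is told to trust that the POSIX account data in the directory is complete:

        [global]
            # assumes all user/group POSIX attributes are fully maintained in LDAP
            ldapsam:trusted = yes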

    Read the article

  • VNC authentication failure

    - by cf16
    I am trying to connect from my home computer, which is behind a firewall, to a vncserver running on CentOS. The home machine dual-boots Windows 7 and Ubuntu. I get the error:

        VNC connection failed: vncserver too many security failures

    and even when logging in with the right credentials (I reset the password on CentOS), I get an authentication failure. I have observed that I then have to wait a whole day before I am able to log in again at all. Is this because I am trying as root? I think it is also important that I have to connect to the remote CentOS box through port 6050; no other port works for me. Do I have to do something with other ports? I can see that vncserver is listening on 5901 (and 5902 if a second one is added), and I believe the connection is established, because from time to time (after a long wait) the password prompt appears, right? I have created additional users user1 and user2, with passwords for CentOS and for VNC. I run service vncserver start and two servers start, one on :1 and a second on :2. When I try to connect to vncserverIP:1 I get what is described above, but when I try to connect to vncserverIP:2 it just says the attempt was unsuccessful. Please help, what should I do? Additionally: how can I disable this lockout for testing purposes?
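
    On the lockout question at the end: the "too many security failures" message comes from the VNC server's built-in brute-force blacklisting. On TigerVNC-style builds of Xvnc the thresholds are tunable parameters; a sketch, assuming your build supports them (check the Xvnc man page first):

        # relax the blacklist for testing (the default is roughly 5 failures,
        # with a timeout that doubles on each further failure)
        vncserver :1 -BlacklistThreshold 1000000 -BlacklistTimeout 0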

    Read the article

  • AWS Linux EC2: yum won't run with plugins

    - by Patrick
    Short version: yum commands on my Amazon Linux EC2 AMI only work with --noplugins.

    Long version: A couple of days ago, I ran yum update at the behest of the SSH login MOTD telling me I had updates to install. About midway through the update (specifically while updating the kernel), the update abruptly ended (79 of 138 items completed). The website I host on EC2 got weird for a few minutes, but eventually seemed to stabilize (maybe EC2 restarted itself?), and I didn't have further issues (other than MySQL starting to run out of memory, but I think that's probably unrelated to this). Today, I went to install gcc-c++ (with yum install gcc-c++). When I did, I got the following message:

        Loaded plugins: priorities, security, update-motd, upgrade-helper
        Config error: Command "updateinfo" already defined

    and I get that for any command I can think to run using yum. However, if I throw in the --noplugins flag, then magically it seems to work. To be clear, when I installed a different package a week ago, it worked totally correctly, so the yum update is the only thing I can think of that changed. I could find nothing on Google with regard to "updateinfo" already defined (with and without quotes). I tried running yum update --noplugins, which spat out a message telling me that I should have run yum-complete-transaction instead, but proceeded to try to update something on its own. When that completed, I tried yum-complete-transaction, but that gave me a message about the transactions not lining up correctly, so it removed the old transaction (probably since I should have completed the first transaction before trying to update again, had I known). Based on the SF question "Linux EC2 Broken Yum", I've also tried yum clean all --noplugins (it fails the same way with plugins), which just gives me:

        Cleaning repos: amzn-main amzn-updates rpmforge
        Cleaning up everything

    I also tried package-cleanup --problems:

        Loaded plugins: priorities, update-motd, upgrade-helper
        No Problems Found

    and package-cleanup --dupes gives a lot of dupes, so I pasted them here: http://pastebin.com/VVFQEkTT instead of inline. At this point, I'm not sure what else there even is to check.
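
    Since the breakage comes from a plugin rather than yum itself, narrowing down which one beats running everything with --noplugins; a sketch (the plugin names are the ones from the error message above, tried one at a time):

        # re-run with a single suspect plugin disabled
        yum --disableplugin=upgrade-helper check-update
        # or disable a plugin persistently in its config file
        sudo sed -i 's/^enabled *= *1/enabled = 0/' /etc/yum/pluginconf.d/upgrade-helper.conf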

    Read the article

  • Loop through several servers, find specific DLLs, get the DLL version, internal filename and path?

    - by Graham
    I am a newbie to PowerShell, using PS v2. I can see the massive potential it has, but I just can't get the following code to work fully. I am trying to end up with a CSV file that contains the wildcarded DLLs in the GAC_MSIL or a sub-directory, the DLL version, internal filename and path, and the server IP address. The code is below, and it is in single-line format because it appears easier to remote onto one of the servers in the server farm and run the single line from that console, due to security log-ins etc. The code has produced a set of results, but only for the last server; it probably does the first server, then overwrites it, but I am not sure about that. I have done a lot of reading about using arrays and custom objects, and had a go at doing that, but my scripting skills in PS are not yet up to it. Code:

        $out = "Ouput_dll_ver_results.csv"; foreach ($server in '11.222.33.123', '11.222.33.124') {$VersionInfo = (Get-ChildItem \\$server\C$\windows\assembly\GAC_MSIL -recurse -Include abc*.dll,def*.dll,ghi*.dll,jkl*.dll | Where-Object { $_.FullName -notmatch "\\windows\\assembly\\temp\\" })}; $VersionInfo | %{Get-Command $_.FullName} | select -expand File* | Export-Csv $out

    Can you please advise if/how the above code can be corrected, and if not, what alternatives I have to get the information I need. Many thanks in advance. Graham

    Read the article

  • Apache2 - Hosting two sites on the same domain with different ports

    - by user1026361
    I am hosting a staging site (test.mydomain.com) which currently works well on port 80 for two sites (test.mydomain.com and test.FRmydomain.com). I am working on a new backend and I would like to deploy a third site on this server for testing. My hope is that it will live at test.mydomain.com:4204. I've got some experience with Apache and quickly added the statements:

        Listen 4204
        NameVirtualHost *:4204

    and created a new config for my site. What I imagine are the relevant parts of my config:

        <VirtualHost *:4204>
            ServerAdmin [email protected]
            ServerName test.mydomain.com:4204

    However, the site is not publicly available, by name or IP. If I curl localhost:4204 from the server, I get the expected page content. At this point, I'm at a bit of a loss on how to go forward. It seems like my config is correct but not available to be served. Am I better off defining a proxy so that, for instance, test.mydomain.com/4204 proxies to my localhost server, or is there a way to make the site available via the internet?

    EDIT: I have added an iptables rule after further Googling, with the command:

        iptables -I INPUT -p tcp --dport 4204 -j ACCEPT

    I can see Apache listening on 4204 and the rule is definitely in place, but I still can't reach the site.
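
    Given that curl works locally and the INPUT rule is in place, the remaining suspects sit between the host and the internet; a couple of checks, as a sketch (the hostname is the question's own placeholder):

        # confirm Apache is bound to all interfaces on 4204, not just 127.0.0.1
        sudo netstat -tlnp | grep :4204
        # then test from a machine outside the network
        curl -v http://test.mydomain.com:4204/

    If the box sits behind a provider firewall or NAT device (e.g. cloud security groups or a router), port 4204 has to be opened or forwarded there as well.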

    Read the article

  • Pattern matching gnmap fields with SED

    - by Ovid
    I am testing the regex needed for creating field extraction with Splunk for nmap, and think I might be close...

    Example full line:

        Host: 10.0.0.1 (host)  Ports: 21/open|filtered/tcp//ftp///, 22/open/tcp//ssh//OpenSSH 5.9p1 Debian 5ubuntu1 (protocol 2.0)/, 23/closed/tcp//telnet///, 80/open/tcp//http//Apache httpd 2.2.22 ((Ubuntu))/, 10000/closed/tcp//snet-sensor-mgmt///  OS: Linux 2.6.32 - 3.2  Seq Index: 257  IP ID Seq: All zeros

    I've used underscore "_" as the sed delimiter because it makes the expression a little easier to read:

        root@host:/# sed -n -e 's_\([0-9]\{1,5\}\/[^/]*\/[^/]*\/\/[^/]*\/\/[^/]*\/.\)_\n\1_pg' filename

    The same regex with the escape characters removed:

        root@host:/# sed -n -e 's_\([0-9]\{1,5\}/[^/]*/[^/]*//[^/]*//[^/]*/.\)_\n\1_pg' filename

    Output:

        Host: 10.0.0.1 (host)  Ports:
        21/open|filtered/tcp//ftp///,
        22/open/tcp//ssh//OpenSSH 2.0p1 Debian 2ubuntu1 (protocol 2.0)/,
        23/closed/tcp//telnet///,
        80/open/tcp//http//Apache httpd 5.4.32 ((Ubuntu))/,
        10000/closed/tcp//snet-sensor-mgmt///  OS: Linux 9.8.76 - 7.3  Seq Index: 257  IPID Seq: All zeros

    As you can see, the pattern matching appears to be working, although I am unable to:

        1 - match on both possible record endings (a comma and whitespace/tab), so the last line contains unwanted text (in this case, the OS and TCP timing info); and
        2 - remove the unneeded data, i.e. print only the matching pattern. It is actually printing the whole line, and if I remove the sed -n flag, the remaining file contents are also printed.

    Being fairly new to sed and regex, any help or pointers is greatly appreciated!
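
    Since the goal is to print only the matched port records, grep -o may be a simpler tool than sed's whole-line substitution: it prints each match on its own line and nothing else, which sidesteps both problems at once. A sketch against the same file:

        # one port record per output line, nothing else
        grep -oE '[0-9]{1,5}/[^/]*/[^/]*//[^/]*//[^/]*/' filename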

    Read the article

  • sudoer scheme for another web developer that retains my future control of a virtual server?

    - by Tchalvak
    Background: I have a virtual private server that I'm looking to host multiple websites on, and to provide access to for another web developer. I don't care about putting too many constraints on him, though I wouldn't mind isolating the site that he'll be developing from the other sites on the server that I will develop.

    The problem: retaining control. Mainly what I want is to make sure that I retain control over the server in the future. I want to reserve the ability to create/promote/demote accounts and other administrative functions that don't deal with web software. If I make him an admin, he can sudo su - and become root and remove root control from me, for example.

    I need him not to be able to:

        - take away other admins' permissions
        - change the root password
        - control other security/administrative functions

    I would like him to still be able to:

        - install software (through apt-get)
        - restart apache
        - access mysql
        - configure mysql/apache
        - reboot
        - edit web-development configuration type files in /etc/

    Other standard setups would be happily considered. I've never really set up a good sudoers file, so simple example setups would be very useful, even if they're only somewhat similar to the settings that I'm hoping for above.

    Edit: I have not yet finalized permissions, so standard, useful sudo setups are certainly an option; the lists above are more what I'm hoping I can do. I don't know whether that setup can be done.
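
    As a starting point, a sketch of /etc/sudoers.d/webdev (edit with visudo -f; the username and Debian-style paths are assumptions). Note that unrestricted apt-get is itself effectively root, since a package's maintainer scripts run as root, so treat it as a trade-off made consciously:

        # allow the developer a fixed set of admin commands, nothing else
        Cmnd_Alias WEBDEV_CMDS = /usr/bin/apt-get, \
                                 /usr/sbin/service apache2 restart, \
                                 /usr/sbin/service apache2 reload, \
                                 /sbin/reboot
        devuser ALL = (root) WEBDEV_CMDS

    Editing files under /etc/ is often better handled with group ownership on the specific config files than with sudo.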

    Read the article

  • passing on command line options in bash

    - by bryan
    I have a question. I have a script, a kind of long script written in bash, approx. 370 lines. It has several functions, and in those functions the user has to enter information which is then stored in files. (This is supposed to represent a MySQL database, with the functions INSERT, UPDATE, SELECT, and SELECT where x=y.) I created this myself in bash; now the only thing that remains is that I need to be able to pass arguments on the command line to the script that will do the same things the script does. I know that bash has positional parameters such as $1 $2 $3 $* $@ $0 ($0 refers to the name of the script), and I know how I can use these parameters in a simple if statement. That is not enough for my script: I basically need to do the same thing that the script does, but from the command line. I have been struggling with this for a couple of days now and cannot think of a way to get it to work. Maybe someone here can help me with this? If you want to have the script, that can be possible, but I don't think I can paste it in here...

    EDIT: Link to script: http://pastebin.com/Hd5VsDv2 (note, I am a beginner in bash scripting).

    EDIT: This is in reply to answer 1. As I said, I hope I can just replace

        if [ "$1" = "one" ] ; then echo "found one"

    with

        if [ "$1" = "one" ] ; then SELECT

    where SELECT is the function I previously had in my script (above). Link to testing script: http://pastebin.com/VFMkBL6g
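
    A common shape for this, as a sketch: dispatch on the first argument with case and hand any remaining arguments to the script's existing functions. The function names below (insert_record, update_record, select_all) are placeholders for whatever the script actually defines:

        #!/bin/bash
        # usage: ./script.sh {insert|update|select} [args...]
        case "$1" in
            insert) shift; insert_record "$@" ;;   # INSERT, passing remaining args
            update) shift; update_record "$@" ;;   # UPDATE
            select) shift; select_all    "$@" ;;   # SELECT / SELECT where x=y
            *)      echo "usage: $0 {insert|update|select} [args...]" >&2; exit 1 ;;
        esac

    With no arguments the case falls through to the usage message, so the interactive menu could be invoked there instead to keep both modes working.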

    Read the article

  • Cannot find "IIS APPPOOL\{application pool name}" user account in Windows Server 2008

    - by MacGyver
    Normally when setting up IIS 7, I'm used to granting permissions to the user IIS APPPOOL\{application pool name} on the root folder of my web application(s). I also give permissions to IUSR (or the IIS_IUSRS user group; note that in Windows Server 2008, I found that IUSR isn't in that group by default, so I added it). In Windows Server 2008, I cannot find the user IIS APPPOOL\{application pool name} under Security in the Windows folder properties. I'm using Windows Authentication in ASP.NET, and I'm receiving a 401.1 on the page in Internet Explorer 8 after the authentication prompt. Mozilla Firefox also gave me a Windows authentication prompt and got me into the site fine; same with Google Chrome. How can I solve this one?

        HTTP Error 401.1 - Unauthorized
        You do not have permission to view this directory or page using the credentials that you supplied.

        Specific page information:
        Module: WindowsAuthenticationModule
        Notification: AuthenticateRequest
        Handler: PageHandlerFactory-ISAPI-4.0_32bit
        Error Code: 0x8009030e
        Requested URL: http://.....aspx
        Physical Path: C:\.........aspx
        Logon Method: Not yet determined
        Logon User: Not yet determined
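
    For what it's worth, application-pool identities are virtual accounts and never appear in the account picker's search results; they generally have to be typed in verbatim (with the location set to the local machine) or granted from the command line. A sketch with a hypothetical pool name and path:

        icacls "C:\inetpub\mysite" /grant "IIS APPPOOL\MyAppPool":(OI)(CI)RX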

    Read the article

  • Force local IP traffic to an external interface

    - by calandoa
    I have a machine with several interfaces that I can configure as I want, for instance:

        eth1: 192.168.1.1
        eth2: 192.168.2.2

    I would like to forward all the traffic sent to one of these local addresses through the other interface. For instance, all requests to an iperf, FTP, or HTTP server at 192.168.1.1 should be not just routed internally, but forwarded through eth2 (and the external network will take care of re-routing the packet to eth1). I have tried and looked at several commands, like iptables, ip route, etc., but nothing worked. The closest behaviour I could get was with:

        ip route change to 192.168.1.1/24 dev eth2

    which sends all 192.168.1.x traffic out eth2, except for 192.168.1.1 itself, which is still routed internally. Maybe I could then do NAT forwarding of all traffic directed to a fake 192.168.1.2 on eth1, rerouted to 192.168.1.1 internally? I am actually struggling with iptables; it is too tough for me. The goal of this setup is to do interface driver testing without using two PCs. I am using Linux, but if you know how to do this with Windows, I'll buy it!
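
    The "fake address" idea in the question is essentially the standard recipe for this: NAT a pair of phantom addresses so the kernel cannot take the internal shortcut. A sketch of that recipe, untested here; the 10.60.x.x phantom addresses are made up, and depending on the switch in between, static ARP entries for them may also be needed:

        # phantom 10.60.1.1 stands in for eth1, phantom 10.60.2.1 for eth2
        ip route add 10.60.1.1 dev eth2
        ip route add 10.60.2.1 dev eth1
        iptables -t nat -A POSTROUTING -d 10.60.1.1 -j SNAT --to-source 10.60.2.1
        iptables -t nat -A POSTROUTING -d 10.60.2.1 -j SNAT --to-source 10.60.1.1
        iptables -t nat -A PREROUTING  -d 10.60.1.1 -j DNAT --to-destination 192.168.1.1
        iptables -t nat -A PREROUTING  -d 10.60.2.1 -j DNAT --to-destination 192.168.2.2

    After this, iperf -c 10.60.1.1 run on the same box should put real traffic on the wire out of eth2 and into eth1.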

    Read the article
