Search Results

Search found 10873 results on 435 pages for 'virtual slide'.


  • tcp flags in iptables: What's the difference between RST SYN, RST, and SYN RST? When to use ALL?

    - by Kris
    I'm working on a firewall for a virtual dedicated server, and one of the things I'm looking into is port scanners. TCP flags are used for protection. I have two questions. The rule:

        -p tcp --tcp-flags SYN,ACK,FIN,RST SYN -j DROP

    The first argument says to check packets with the SYN flag; the second argument says to make sure the flags ACK, FIN, RST, SYN are set, and when that's the case (there's a match), drop the TCP packet.

    First question: I understand the meaning of RST and RST/ACK, but in the second argument RST SYN is being used. What's the difference between RST SYN, RST, and SYN RST? Is there a "SYN RST" flag in a 3-way handshake?

    Second question is about the difference between

        -p tcp --tcp-flags SYN,ACK,FIN,RST SYN -j DROP

    and

        -p tcp --tcp-flags ALL SYN,ACK,FIN,RST SYN -j DROP

    When should ALL be used? When I use ALL, does that mean that if the TCP packet with the SYN flag doesn't have the ACK "and" the FIN "and" the RST SYN flags set, there will be no match?
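
    For reference when reading rules like these: in iptables the first list after --tcp-flags is the mask (which flags to examine) and the second list says which of those must be set. A short sketch of how that plays out, with the chain name only as a placeholder:

        # mask: SYN,ACK,FIN,RST   comp: SYN
        # => look only at those four flags, match when, of them, exactly SYN is set
        iptables -A INPUT -p tcp --tcp-flags SYN,ACK,FIN,RST SYN -j DROP

        # ALL is shorthand for FIN,SYN,RST,PSH,ACK,URG; NONE means none of them may be set
        iptables -A INPUT -p tcp --tcp-flags ALL NONE -j DROP          # "null" scan
        iptables -A INPUT -p tcp --tcp-flags ALL FIN,PSH,URG -j DROP   # "xmas" scan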

  • Virtualised Sharepoint Backup Strategies

    - by dunxd
    I have a SharePoint (OSS 2007) farm running on three virtual machines in VMware ESX, plus a SQL Server backend on physical hardware. During a recent Business Continuity Planning event I tried restoring the SharePoint farm with only the config and content databases, and failed to get things working. My plan was to build a new SharePoint server, attach it to a restored config database, install the Central Management site on this server, and then reattach the content databases. This failed at the Central Management part of the plan. So I am back to the drawing board on the best strategy for backup and recovery, with reducing the time and complexity of the restore job the main objective. I haven't been able to find much in the way of discussion of backup/restore strategies for SharePoint in a VMware environment, so I figured I'd see if anyone on Server Fault has any ideas or experience.
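
    For comparison, the farm-level route usually discussed for SharePoint 2007 is the built-in catastrophic backup, which captures the configuration alongside the content and can be restored onto a rebuilt server; a rough sketch only, with a placeholder share name:

        stsadm -o backup -directory \\backuphost\spbackup -backupmethod full
        stsadm -o restore -directory \\backuphost\spbackup -restoremethod overwrite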

  • Stream Music To Ventrilo From ESXi VM

    - by omghai2u
    I would like to stream music to my Ventrilo server from a Windows XP virtual machine running on an ESXi host. I have followed the instructions outlined here to stream music from something like VLC to the Ventrilo server on another machine, and it works fine. I have also added the lines:

        sound.present = "TRUE"
        sound.virtualDev = "es1371"
        sound.fileName = "-1"
        sound.autodetect = "TRUE"

    to my .vmx file, as suggested here (http://communities.vmware.com/thread/191878), to get a sound card in my VM. The problem I am having is that it seems my VM is not outputting any sound, so there's nothing to stream through Ventrilo. The Device Manager on the VM shows that this new sound card has drivers and doesn't appear to have any concerns with it. Can someone point me in the right direction to get my desired outcome? Thanks! PS. Sorry for the long 2nd link; apparently I can only post 1 hyperlink with this low reputation.

  • Apache not using the right SSL certificate [on hold]

    - by user2420318
    In my apache2 setup I have one VirtualHost for my main site and another for a static content site (downloads, css, etc.). I have SSL certificates for both, and the static content site is under a subdomain of the main site. I have configured the four VirtualHosts altogether, as both sites need SSL ones as well. When I only had one SSL site everything was OK, but now with the second, the first site uses the second site's certificate, even though it is told specifically to use its own in its VirtualHost section. I honestly have no idea why Apache would do this. Any ideas? I have a feeling there may be some default/global setting that is set for some odd reason. I am using different IPs for the virtual hosts.
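
    When each SSL site is meant to have its own IP, one certificate "shadowing" the other is often the result of the 443 vhosts being declared with *:443 rather than their specific addresses, so the first matching block answers for both. A sketch of the intended shape only, with placeholder IPs, names and paths:

        <VirtualHost 203.0.113.10:443>
            ServerName www.example.com
            SSLEngine on
            SSLCertificateFile    /etc/ssl/certs/www.example.com.crt
            SSLCertificateKeyFile /etc/ssl/private/www.example.com.key
        </VirtualHost>

        <VirtualHost 203.0.113.11:443>
            ServerName static.example.com
            SSLEngine on
            SSLCertificateFile    /etc/ssl/certs/static.example.com.crt
            SSLCertificateKeyFile /etc/ssl/private/static.example.com.key
        </VirtualHost>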

  • Setting up routing for MS DirectAccess to a VMWare EsXi Host

    - by Paul D'Ambra
    I'm trying to set up DirectAccess on a virtual machine so I can demonstrate its value and then, if need be, add a physical machine to host it. I'm hitting a problem because the DirectAccess machine (DA01) needs to have two public addresses actually configured on the external adapter, but there is a Zyxel ZyWALL USG300 between the VMware ESXi host and the outside world. I've summarised my setup in this diagram. If I ping 212.x.y.89 from the LAN I get a response, but if I ping from the VM I get "destination host unreachable". I used "route add 212.x.y.89 192.c.d.1" and get "request timed out". At that point I see outbound traffic allowed on the Zyxel firewall but nothing coming back. I'm past my understanding of routing and VMware, so I'm not sure how to tie down where my problem lies (or even if this setup is possible). So any help massively appreciated. Paul

  • Mails bounce because of invalid character ('@') in username

    - by user1598585
    I have a working exim setup with virtual users, working all right except when I try to send email to certain servers. These servers reject my emails because of:

        #5.1.3 Invalid character ('@') in username.

    The offending header parts seem to be:

        Return-path: <"[email protected]"@smtp.example.com>

    and

        ...(envelope-from <"[email protected]"@smtp.example.com>)...

    The problem is that I cannot find where and why the usernames are being generated like this. My router for submission is:

        dnslookup:
          driver = dnslookup
          domains = ! +local_domains
          transport = remote_smtp
          ignore_target_hosts = 0.0.0.0 : 127.0.0.0/8
          no_more

    And the respective transport:

        remote_smtp:
          driver = smtp

    What can be producing this problem?
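
    One pattern that produces doubly-qualified, quoted senders like these is exim treating the submitting user as untrusted and qualifying its login name (which here already contains an '@') with the local domain. If that turns out to be the cause, the relevant knobs live in the main configuration section; a hedged sketch, with example user names:

        # main configuration section - sketch only
        trusted_users = mail : www-data     # these users may set any envelope sender
        untrusted_set_sender = *            # or: let untrusted users set their own sender
        local_from_check = false            # and don't "repair" the From:/Sender: headers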

  • Good C++ books regarding Performance?

    - by Leon
    Besides the books everyone knows about, like Meyers' three Effective C++/STL books, are there any other really good C++ books specifically aimed at performance code? Maybe this is for gaming, telecommunications, finance/high-frequency trading, etc.? When I say performance, I mean things a normal C++ book wouldn't bother advising on because the gain in performance isn't worthwhile for 95% of C++ developers. Maybe suggestions like avoiding virtual pointers, going into great depth about inlining, etc.? A book going into great depth on C++ memory allocation or multithreading performance would obviously be very useful.

  • Are Plesk server backups useful?

    - by Michael T. Smith
    I'm working for a startup now, and I'm the programmer. Because of our small team size, I'm also handling the server management for now (until we get a dedicated server administrator). I've never used Plesk before, and the server we're using (a Media Temple Dedicated Virtual server) had it installed when I got here. One of my first jobs was to set up backups: Plesk was already running its nightly server-wide backups. I created a small script to dump the web app, its DBs and any assets, tar them, store them, and then copy them to another small server we have (to back up the backups). But we're constantly running into hard drive space issues because of the Plesk backups. And I'm wondering, are they useful? If I have the web app and all of its assets, I could easily enough get another server up and running. Do we need to keep running Plesk's backups? Thoughts?
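
    The home-grown script described above would look something like the sketch below; paths, database and host names are placeholders rather than the poster's actual setup:

        #!/bin/sh
        # minimal app-level backup: dump the DB, tar code + assets, ship the archive off-box
        STAMP=$(date +%Y%m%d)
        mysqldump --single-transaction appdb > /backup/appdb-$STAMP.sql
        tar czf /backup/webapp-$STAMP.tar.gz /var/www/vhosts/example.com /backup/appdb-$STAMP.sql
        scp /backup/webapp-$STAMP.tar.gz backup@othersmallserver:/srv/backups/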

  • OS X Lion - Installing Oracle 10g Standard Edition

    - by Cellze
    I'm trying to install Oracle 10g on OS X Lion. I previously achieved this on Snow Leopard with the following: http://blog.rayapps.com/2009/09/14/how-to-install-oracle-database-10g-on-mac-os-x-snow-leopard/

    The issue I'm having is that the ulimit settings in the oracle user's .bash_profile cannot be modified. I have the following in the .bash_profile:

        export DISPLAY=:0.0
        export ORACLE_BASE=$HOME
        umask 022
        # must match `sysctl kern.maxprocperuid`
        ulimit -Hu 512
        ulimit -Su 512
        # must match `sysctl kern.maxfilesperproc`
        ulimit -Hn 10240
        ulimit -Sn 10240

    Upon applying the .bash_profile settings with ". ~/.bash_profile" I get the following error:

        -bash: ulimit: max user processes: cannot modify limit: Invalid argument

    This then results in "sqlplus / as sysdba" not functioning correctly, with a

        Segmentation fault: 11

    The output of "ulimit -a":

        core file size          (blocks, -c) 0
        data seg size           (kbytes, -d) unlimited
        file size               (blocks, -f) unlimited
        max locked memory       (kbytes, -l) unlimited
        max memory size         (kbytes, -m) unlimited
        open files                      (-n) 10240
        pipe size            (512 bytes, -p) 1
        stack size              (kbytes, -s) 8192
        cpu time               (seconds, -t) unlimited
        max user processes              (-u) 512
        virtual memory          (kbytes, -v) unlimited

    If anyone knows how I can apply these ulimit settings to the oracle user I have created, to allow me to install sqlplus and therefore create a db, that would be great.
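
    The hard limits requested in that profile can only be raised up to what the kernel itself currently allows, so a reasonable first check is whether the corresponding sysctl values are at least that high and, if not, to raise them as root before the ulimit lines run. A sketch only; the numbers are illustrative, not Lion-specific tested values:

        sysctl kern.maxproc kern.maxprocperuid kern.maxfilesperproc   # see what the kernel allows
        sudo sysctl -w kern.maxproc=1024
        sudo sysctl -w kern.maxprocperuid=512
        sudo sysctl -w kern.maxfilesperproc=10240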

  • What should we be aware of when moving windows servers to another domain

    - by Klaus Byskov Hoffmann
    Hi everyone, We have a bunch of (virtual vmware) windows servers (2003 and 2008) that our hosting provider wants to move to a new domain. They also want to rename the servers. The hosting provider is in charge of maintaining the servers, while we are in charge of making sure that all our business applications are working. Our business applications include custom developed .net applications using such things as SQLServer 2008, TFS 2010, asp.net, some legacy COM+ apps, etc. To be honest I don't feel too convinced that this migration will be as painless as the hosting provider wants to make it sound. I would greatly appreciate any input on what we should be aware of when discussing the practicalities involved in the migration with the hosting provider. Thanks in advance. Klaus

  • Trying to set up OpenVPN server on a vps

    - by Austin
    I'm trying to set up an OpenVPN server on my VPS for myself when I'm in public places, using this tutorial: http://tipupdate.com/how-to-install-openvpn-on-ubuntu-vps/

    However, whenever I try to start the server, it gives me this:

        root@vps:~# /etc/init.d/openvpn start
         * Starting virtual private network daemon(s)...
         * Autostarting VPN 'server'                        [fail]

    The log contains this:

        Tue Dec 11 10:53:32 2012 Diffie-Hellman initialized with 1024 bit key
        Tue Dec 11 10:53:32 2012 /usr/bin/openssl-vulnkey -q -b 1024 -m <modulus omitted>
        Tue Dec 11 10:53:33 2012 TLS-Auth MTU parms [ L:1542 D:138 EF:38 EB:0 ET:0 EL:0 ]
        Tue Dec 11 10:53:33 2012 ROUTE: default_gateway=UNDEF
        Tue Dec 11 10:53:33 2012 Note: Cannot open TUN/TAP dev /dev/net/tun: No such file or directory (errno=2)
        Tue Dec 11 10:53:33 2012 Note: Attempting fallback to kernel 2.2 TUN/TAP interface
        Tue Dec 11 10:53:33 2012 Cannot allocate TUN/TAP dev dynamically
        Tue Dec 11 10:53:33 2012 Exiting

    So obviously it's something to do with the tun, but I don't understand how to fix it. Thanks!
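
    On container-style VPSes (OpenVZ and friends) a missing /dev/net/tun is usually the whole story: the TUN/TAP device has to be enabled for the container on the host node, and inside the guest the device node sometimes has to be created by hand. A sketch of the usual checks, assuming an OpenVZ-like container:

        # inside the VPS
        ls -l /dev/net/tun || { mkdir -p /dev/net && mknod /dev/net/tun c 10 200 && chmod 600 /dev/net/tun; }
        cat /dev/net/tun    # "File descriptor in bad state" here means the device is actually usable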

  • dhclient configures /etc/resolv.conf with invalid entry

    - by kubal5003
    I'm trying to figure out why running dhclient on my interface sets /etc/resolv.conf to the IP number of my gateway (router). This entry is invalid and every time causes an inability to resolve any address. I would like to either stop dhclient from overwriting /etc/resolv.conf, or make dhclient write the valid DNS IP from my router there.

    More on the environment: I'm using a virtual Debian Wheezy as a client system on Windows Seven x64. It is run by VirtualBox with networking mode set to bridged (all packets from Debian are injected into my network interface on Windows). If I manually configure /etc/resolv.conf then everything works fine. Doing this on every boot is quite annoying. PS: I know I can write a script to do it for me, but this is not the solution I want.

    //edit
    Router IP: 192.168.1.100

    /etc/resolv.conf AFTER running dhclient eth0:

        nameserver 192.168.1.100

    What I would like /etc/resolv.conf to look like:

        nameserver 89.202.xxxx

    (I don't have to provide the real IP, do I?)
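
    If the aim is simply to pin the nameserver no matter what the DHCP server hands out, dhclient can do that by itself through its configuration file; a sketch, with a stand-in address where the real DNS server IP would go:

        # /etc/dhcp/dhclient.conf
        supersede domain-name-servers 8.8.8.8;   # always use this address, ignoring the lease
        # or, to merely list it first:
        # prepend domain-name-servers 8.8.8.8;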

  • apache2 mod_proxy without 301 moved permanently?

    - by Guy Sensei
    Is it possible to not send a 301 Moved Permanently response to the client when using mod_proxy? I would like the client to deal with the reverse proxy as opaquely as possible.

    My virtual host settings - relevant snippet:

        ProxyPreserveHost On
        ProxyPass /GTM http://192.168.1.27/GTM
        ProxyPassReverse /GTM http://192.168.1.27/GTM

    wget localhost/GTM:

        --2011-09-27 21:54:22--  localhost/GTM
        Resolving localhost... ::1, 127.0.0.1
        Connecting to localhost|::1|:80... failed: Connection refused.
        Connecting to localhost|127.0.0.1|:80... connected.
        HTTP request sent, awaiting response... 301 Moved Permanently
        Location: localhost/GTM/ [following]
        --2011-09-27 21:54:22--  localhost/GTM/
        Reusing existing connection to localhost:80.
        HTTP request sent, awaiting response... 200 OK
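
    The 301 in that trace is the usual directory-slash redirect for the bare /GTM path being handed straight back to the client. If the client should never see it, one arrangement sometimes suggested is to normalise the path internally and proxy only the slash-terminated form; a sketch, assuming the backend really serves /GTM/ as a directory:

        ProxyPreserveHost On
        RewriteEngine On
        # proxy the bare path internally instead of letting a redirect reach the client
        RewriteRule ^/GTM$ http://192.168.1.27/GTM/ [P]
        ProxyPass        /GTM/ http://192.168.1.27/GTM/
        ProxyPassReverse /GTM/ http://192.168.1.27/GTM/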

  • Redirect an Apache2 SSL VirtualHost with mod_alias

    - by Jeff
    I want to make sure there aren't any odd behaviors that I don't know about when redirecting an SSL VirtualHost with mod_alias Redirect, as outlined by Apache here. My code seems to work, but since SSL virtual hosts are restricted to just one IP address, I want to make sure there aren't any problems eluding me. Explicitly not using TLS. I'm stuck with Apache 2.2 for now.

        <VirtualHost *:443>
            ServerName example.com
            SSLEngine On
            Redirect 301 / https://www.example.com/
        </VirtualHost>

        <VirtualHost *:443>
            ServerName www.example.com
            SSLEngine On
            # Do stuff #
        </VirtualHost>

    So I guess my question is, should SSL VirtualHost redirection with mod_alias Redirect work the same as non-SSL redirection?

  • CISCO 2911 Router configuration

    - by bala
    Cisco 2911 router configuration support is required, please. I have Exchange Server 2010 configured and working without any errors. The problem is in the Cisco router configuration: when the Exchange server sends emails out, the receiving side sees the WAN IP, not the public IP. I have configured rDNS lookups with our MX record IP addresses that match the FQDN, but all our emails are rejected because the connecting address does not match the public IP. Receiving mail is not a problem; all mail is coming through. I am sure I am missing something in the router configuration that makes it not send from the public IP. Can anyone help me solve this issue?

    Note: I've got 1 WAN IP and 8 public IPs from the ISP. Find the running configuration below.

        Building configuration...

        Current configuration : 2734 bytes
        !
        ! Last configuration change at 06:32:13 UTC Tue Apr 3 2012
        ! NVRAM config last updated at 06:32:14 UTC Tue Apr 3 2012
        ! NVRAM config last updated at 06:32:14 UTC Tue Apr 3 2012
        version 15.1
        service timestamps debug datetime msec
        service timestamps log datetime msec
        service password-encryption
        !
        hostname BSBG-LL
        !
        boot-start-marker
        boot-end-marker
        !
        enable secret 5 $x$xHrxxxxx5ox0
        enable password 7 xx23xx5FxxE1xx044
        !
        no aaa new-model
        !
        no ipv6 cef
        ip source-route
        ip cef
        !
        ip flow-cache timeout active 1
        ip domain name yourdomain.com
        ip name-server 213.42.20.20
        ip name-server 195.229.241.222
        multilink bundle-name authenticated
        !
        crypto pki token default removal timeout 0
        !
        license udi pid CISCO2911/K9
        !
        username bsbg
        !
        interface Embedded-Service-Engine0/0
         no ip address
         shutdown
        !
        interface GigabitEthernet0/0
         ip address 192.168.0.9 255.255.255.0
         ip flow ingress
         ip nat inside
         ip virtual-reassembly in
         duplex auto
         speed 100
         no cdp enable
        !
        interface GigabitEthernet0/1
         ip address 213.42.xx.x2 255.255.255.252
         ip nat outside
         ip virtual-reassembly in
         duplex auto
         speed auto
         no cdp enable
        !
        interface GigabitEthernet0/2
         no ip address
         shutdown
         duplex auto
         speed auto
        !
        ip forward-protocol nd
        !
        no ip http server
        no ip http secure-server
        !
        ip nat inside source list 120 interface GigabitEthernet0/1 overload
        ip nat inside source static tcp 192.168.0.4 25 94.56.89.100 25 extendable
        ip nat inside source static tcp 192.168.0.4 53 94.56.89.100 53 extendable
        ip nat inside source static udp 192.168.0.4 53 94.56.89.100 53 extendable
        ip nat inside source static tcp 192.168.0.4 110 94.56.89.100 110 extendable
        ip nat inside source static tcp 192.168.0.4 443 94.56.89.100 443 extendable
        ip nat inside source static tcp 192.168.0.4 587 94.56.89.100 587 extendable
        ip nat inside source static tcp 192.168.0.4 995 94.56.89.100 995 extendable
        ip nat inside source static tcp 192.168.0.4 3389 94.56.89.100 3389 extendable
        ip nat inside source static tcp 192.168.0.4 443 94.56.89.101 443 extendable
        ip nat inside source static tcp 192.168.0.12 80 94.56.89.102 80 extendable
        ip nat inside source static tcp 192.168.0.12 443 94.56.89.102 443 extendable
        ip nat inside source static tcp 192.168.0.12 3389 94.56.89.102 3389 extendable
        ip route 0.0.0.0 0.0.0.0 213.42.69.41
        !
        access-list 120 permit ip 192.168.0.0 0.0.0.255 any
        !
        control-plane
        !
        line con 0
         exec-timeout 5 0
        line aux 0
        line 2
         no activation-character
         no exec
         transport preferred none
         transport input all
         transport output pad telnet rlogin lapb-ta mop udptn v120 ssh
         stopbits 1
        line vty 0 4
         password 7 xx64xxD530D26086Dxx
         login
         transport input all
        !
        scheduler allocate 20000 1000
        end
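
    Reading the NAT section above: the Exchange host (192.168.0.4) only has per-port static translations, so connections it opens outbound fall through to the overload rule and leave on the GigabitEthernet0/1 address instead of 94.56.89.100. If the intent is for that host to always appear as 94.56.89.100, the usual arrangement is a one-to-one static NAT for it plus excluding it from the PAT access list. A sketch only - it would replace the per-port statics for .4 and needs checking against the rest of the NAT policy:

        ! one-to-one NAT so all traffic from the Exchange server uses its public IP
        ip nat inside source static 192.168.0.4 94.56.89.100
        ! and keep that host out of the overload/PAT ACL
        access-list 120 deny   ip host 192.168.0.4 any
        access-list 120 permit ip 192.168.0.0 0.0.0.255 any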

  • Unable to connect to a remote SQL Server Instance over a VPN

    - by Jack Njiri
    I'm running SQL Server 2005 on two different servers running Windows XP. The two servers are in different physical locations and are connected via a dedicated point-to-point data link in a virtual private network (VPN). I'm only able to connect to the remote instance of SQL Server by specifying the IP address in the server name property. If I provide the actual server name, say 'ServerA', then I get an error message. Everything works fine except configuring replication at the subscriber level, which requires the actual name of the instance, not an IP address or alias. I have already configured both instances to allow remote connections and I'm running the SQL Server Browser. How do I connect to the remote instance by providing the instance name? Alternatively, how do I configure a subscription to a remote publisher without supplying the remote instance name?
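
    When there is no shared DNS across the link, one low-tech way to make the real server name resolvable (which is what replication insists on) is a hosts entry, or a client-side alias, on the machine doing the connecting; a sketch with placeholder values:

        # C:\Windows\System32\drivers\etc\hosts on the subscriber
        10.0.1.25    ServerA
        # then configure the subscription against ServerA (or ServerA\INSTANCENAME)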

  • Why can't I run virtualenv without root?

    - by James
    I'm trying to run virtualenv, and all the documentation says I don't need to run it as root (and probably shouldn't). If I run it as root, everything works. If I run it without root, I get:

        [stats@crunch ~]$ virtualenv env
        Traceback (most recent call last):
          File "/usr/bin/virtualenv", line 5, in
            from pkg_resources import load_entry_point
          File "/usr/lib/python2.6/site-packages/pkg_resources.py", line 2655, in
            working_set.require(requires)
          File "/usr/lib/python2.6/site-packages/pkg_resources.py", line 648, in require
            needed = self.resolve(parse_requirements(requirements))
          File "/usr/lib/python2.6/site-packages/pkg_resources.py", line 546, in resolve
            raise DistributionNotFound(req)
        pkg_resources.DistributionNotFound: virtualenv==1.7.1.2

    I believe I can change the ownership and it's the same difference, but I'd like to know why this is happening. It's a fresh CentOS 6.2 installation.
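
    The traceback is pkg_resources failing to see the installed virtualenv distribution when running as the unprivileged user, which usually points at permissions on the system site-packages. One way around needing root at all is to keep virtualenv in the user's own site directory; a sketch:

        # install virtualenv for this user only, no root required
        pip install --user virtualenv
        ~/.local/bin/virtualenv env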

  • WebDAV "PROPFIND" exception in IIS due to network share?

    - by jacko
    We're finding continuous exceptions in our event viewer on our live box, like the following:

        [snippet]
        Process information:
            Process ID: 3916
            Process name: w3wp.exe
            Account name: NT AUTHORITY\NETWORK SERVICE

        Exception information:
            Exception type: HttpException
            Exception message: Path 'PROPFIND' is forbidden.

        Thread information:
            Thread ID: 14
            Thread account name: OURDOMAIN\Account
            Is impersonating: True

        Stack trace:
            at System.Web.HttpMethodNotAllowedHandler.ProcessRequest(HttpContext context)
            at System.Web.HttpApplication.CallHandlerExecutionStep.System.Web.HttpApplication.IExecutionStep.Execute()
            at System.Web.HttpApplication.ExecuteStep(IExecutionStep step, Boolean& completedSynchronously)

    Other specs: Windows Server 2003 R2 and IIS 6.0.

    We've narrowed it down to occurring when people try to access shares on the box from within the network, and have discovered (we think) that it's due to the WebDAV web service extension having been disabled by past staff. The exceptions are thrown when trying to access directories that are virtual dirs in IIS, as well as plain old UNC network shares.

    What are the implications of enabling the WebDAV extension on our live web server? And will this solve our problems with the exceptions in our event log?

  • How can I compact the VHD file with Ubuntu?

    - by AmShegar
    I use Windows Server 2008 R2 with the Hyper-V role. The guest system is Ubuntu 12.04 LTS, and it sits on a dynamically expanding virtual hard disk. I want to compact this VHD (the real size is 50 GB, but it takes 360 GB on the disk), but I cannot do this because the Ubuntu file system is not NTFS. What do I need (gparted, sdelete, ...) to solve this problem? The main problem is that the filesystem is not NTFS, but ext4.
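
    Compacting a dynamic VHD only reclaims blocks that read back as zero, so the usual recipe for a non-NTFS guest is to zero out the free space inside Ubuntu first and then run the compact operation (Edit Disk... > Compact) from Hyper-V while the VM is off. A sketch of the in-guest part; running zerofree against the unmounted volume is a commonly mentioned alternative:

        # inside the Ubuntu guest: fill the free space with zeros, then delete the filler
        dd if=/dev/zero of=/zerofile bs=1M; rm -f /zerofile; sync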

  • Difference between VMWare tools?

    - by tore-
    I'm currently writing a module for Puppet which installs VMware Tools on virtual nodes. I want to do this via yum and a yum repo. VMware has its own repo (http://packages.vmware.com/tools/esx/3.5latest/rhel5/x86_64/index.html) which I thought I could use rather than creating my own. But then I noticed that the packages in their repo are a lot different from the rpm file used when installing VMware Tools on the node via "Install/Upgrade VMware Tools" in vSphere. Does anyone know what the real difference is? Does anyone have any preferences?
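
    For the Puppet module itself, pointing yum at VMware's repository is just a .repo file dropped on the node; a sketch based on the URL above (gpgcheck is left off here only because the key location isn't given - verify and enable it in practice):

        # /etc/yum.repos.d/vmware-tools.repo
        [vmware-tools]
        name=VMware Tools for RHEL5 x86_64
        baseurl=http://packages.vmware.com/tools/esx/3.5latest/rhel5/x86_64
        enabled=1
        gpgcheck=0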

  • Why won't SSI work in IIS?

    - by Josh Kodroff
    I can't get IIS to respect my SSI directives - it just outputs the #include directive as if it were regular old HTML. Here are the relevant data points:

    - My file with the include directive is called index.html
    - This is my directive: <!-- #include file = "header.shtml" --> (it doesn't work with virtual either)
    - The file being requested is in the same directory as the file being #include-ed
    - The SSI module is installed
    - The SSINC-shtml handler mapping is present and enabled

    I think it might be some sort of permissions issue (read/write/execute), but I don't know where those settings are in IIS 7.5.
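
    Given that the only SSI handler mapping mentioned is the .shtml one, the including page itself probably has to be served under an extension that mapping covers before permissions even come into it. A sketch of the quick test (file names are the ones from the question; some SSI implementations are also fussy about spaces inside the directive):

        ren index.html index.shtml
        rem and in index.shtml write the directive without the extra spaces:
        rem <!--#include file="header.shtml" -->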

  • Error starting Hyper-V VM

    - by Peter Bernier
    I'm trying to start a VM on a new Hyper-V installation and I'm receiving the following error:

        The virtual machine could not be started because the hypervisor is not running.
        The following actions may help you resolve the problem:
        1) Verify that the processor of the physical computer has a supported version of
           hardware-assisted virtualization.
        2) Verify that hardware-assisted virtualization and hardware-assisted data execution
           protection are enabled in the BIOS of the physical computer. (If you edit the BIOS
           to enable either setting, you must turn off the power to the physical computer and
           then turn it back on. Resetting the physical computer is not sufficient.)
        3) If you have made changes to the Boot Configuration Data store, review these changes
           to ensure that the hypervisor is configured to launch automatically.

    My machine supports virtualization at the hardware level and it is enabled in BIOS. Why am I receiving this error?
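
    Point 3 of that message is worth checking even on a fresh install, since the hypervisor launch setting in the BCD store can be off; a sketch of the usual check and fix, run from an elevated prompt and followed by a reboot:

        bcdedit /enum | findstr hypervisorlaunchtype
        bcdedit /set hypervisorlaunchtype auto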

  • XEN disk mapping problem under opensolaris

    - by Louis
    I have a system with two hard disks. I wanted to use the simplicity of ZFS for my file server, and I also need to run Linux. I chose Xen virtualization for that, supported on both systems. My GRUB is well configured and I can boot both systems. What I would like is to run both systems, with Solaris as dom0 and the Debian installed on the second hard disk as a virtual machine. My problem is that I want to use the partitions of my first hard disk (sda1 under Linux) and it does not work. I didn't find my use case on the web. Here is the OpenSolaris device name of this partition:

        /dev/rdsk/c7d0p1

    But when I use:

        disk = [ 'phy:rdsk/c7d0p1,sda1,w' ]

    as the disk mapping in my Xen configuration file, I get the error:

        Error: Device 2049 (vbd) could not be connected.
        error: "rdsk/c7d0p1" is not a valid block device.

    I am "lost".
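
    One detail that stands out is that the phy: mapping is given a relative path to the raw (character) device, while phy: expects a block device path. A hedged sketch of what might be meant instead - whether p1 is the right slice to export is a separate question:

        # full path to the block device under /dev/dsk, not the relative rdsk path
        disk = [ 'phy:/dev/dsk/c7d0p1,sda1,w' ]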

  • Redmine + Backlogs not working on Turnkey Linux (Ubuntu)

    - by Riddler
    I'm trying to get Redmine + Backlogs working, so for starters I took a virtual appliance with Redmine from TurnKey Linux (http://www.turnkeylinux.org/redmine) and installed Backlogs on top of it, following the installation instructions (http://www.redminebacklogs.net/en/installation/ - used method #2). It seems to have installed OK, but when I go to the "Backlogs" tab and attempt to create some stories, this is what I get: the first shows some kind of error/warning icon, and the others continue to display an "in progress" icon indefinitely (I can't post a screenshot, unfortunately, but you can take a look at it here: http://www.redmine.org/attachments/5329/Backlogs.jpg). None of the stories actually get created - leaving this tab and returning to it shows empty backlogs. So... what am I doing wrong, and how do I fix this?

  • Can KVM CPU assignment count differ from physical hosts CPU count?

    - by javano
    I have read this question. I already knew that I could, for example, have a quad-core machine with four guests each having two vCPUs. As they won't all require 100% CPU usage all the time, the scheduler would handle this for me. My question is about how this relates to a fail-over or migration situation. If host1 has two dual-core CPUs and I assign guest1 four vCPUs (so it accesses all four physical cores), what will happen if I try to migrate it to host2, which only has one dual-core CPU? Can qemu-kvm emulate more vCPUs than there are physical cores? Or would I have to shut down the virtual machine, change the CPU assignment, migrate it, and then boot it back up (so no live migration)? Many thanks.
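
    For what it's worth, KVM will let a guest's vCPU count exceed the destination host's core count (the vCPUs just get time-sliced), so the question is more about performance than possibility. For the offline path described at the end, a sketch with libvirt tooling, guest and host names being placeholders:

        virsh shutdown guest1
        virsh setvcpus guest1 2 --config      # shrink the persistent config to fit host2
        virsh dumpxml guest1 > guest1.xml     # copy the definition (and storage) over to host2
        # on host2:
        virsh define guest1.xml && virsh start guest1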
