Search Results

Search found 20045 results on 802 pages for 'reporting services 2005'.


  • memcache + FastCGI + PHP on Apache 2.2 (Windows 7) causing problems

    - by Ahmad
    Hi, I am trying to run memcache and FastCGI with Apache 2.2 + PHP on a Windows 7 machine. If I don't use memcache, everything works fine: the moment I disable extension=php_memcache.dll in php.ini, everything returns to normal. Once I start Apache, the Apache logs say:

        [Wed Jan 12 18:19:23 2011] [notice] Apache/2.2.17 (Win32) mod_fcgid/2.3.6 configured -- resuming normal operations
        [Wed Jan 12 18:19:23 2011] [notice] Server built: Oct 18 2010 01:58:12
        [Wed Jan 12 18:19:23 2011] [notice] Parent: Created child process 412
        [Wed Jan 12 18:19:23 2011] [notice] Child 412: Child process is running
        [Wed Jan 12 18:19:23 2011] [notice] Child 412: Acquired the start mutex.
        [Wed Jan 12 18:19:23 2011] [notice] Child 412: Starting 64 worker threads.
        [Wed Jan 12 18:19:23 2011] [notice] Child 412: Starting thread to listen on port 80.

    After accessing the page (it just contains echo phpinfo();), I get this error in error.log:

        [Wed Jan 12 18:20:54 2011] [warn] [client 127.0.0.1] (OS 109)The pipe has been ended. : mod_fcgid: get overlap result error
        [Wed Jan 12 18:20:54 2011] [error] [client 127.0.0.1] Premature end of script headers: index.php

    I have php_memcache.dll in my ext directory, and httpd.conf looks like this:

        LoadModule fcgid_module modules/mod_fcgid.so
        FcgidInitialEnv PHPRC "c:/php"
        FcgidInitialEnv PATH "c:/php;C:/WINDOWS/system32;C:/WINDOWS;C:/WINDOWS/System32/Wbem;"
        FcgidInitialEnv SystemRoot "C:/Windows"
        FcgidInitialEnv SystemDrive "C:"
        FcgidInitialEnv TEMP "C:/WINDOWS/Temp"
        FcgidInitialEnv TMP "C:/WINDOWS/Temp"
        FcgidInitialEnv windir "C:/WINDOWS"
        FcgidIOTimeout 64
        FcgidConnectTimeout 32
        FcgidMaxRequestsPerProcess 500
        <Files ~ "\.php$>"
        AddHandler fcgid-script .php
        FcgidWrapper "c:/php/php-cgi.exe" .php
        </Files>

    So the problem has to be related to memcache, because if I disable it, FastCGI seems to work fine. Any possible reasons for this? The memcache service is running; I can check it through Control Panel > Services.
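    Two details worth checking, offered as a sketch rather than a confirmed fix: the quoting in the config above is off (<Files ~ "\.php$>" puts the closing quote outside the bracket), and a php_memcache.dll that doesn't match the PHP build (thread-safe vs. non-thread-safe, VC6 vs. VC9 compiler) will crash php-cgi.exe on startup, which produces exactly the "Premature end of script headers" error. A corrected mapping would look like:

        # Corrected quoting; paths assume PHP in c:/php as in the post
        <Files ~ "\.php$">
            AddHandler fcgid-script .php
            FcgidWrapper "c:/php/php-cgi.exe" .php
        </Files>

    A quick way to test the extension outside Apache is to run c:/php/php-cgi.exe -v from a command prompt with the extension enabled; if it crashes or reports a DLL load error there, Apache is not the problem.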


  • Access 2007: How can I make this EXPRESSION less complex?

    - by Mike
    Access is telling me that my new expression is too complex. It used to work when we had 10 service levels, but now we have 19! Great! My expression checks the COST of our services in the [PriceCharged] field and assigns the corresponding HOURS ([ServiceLevel]), which I then use to work out how much REVENUE each colleague has made when working for a client. The [EstimatedTime] field stores the actual hours each colleague has worked:

        [EstimatedTime]/[ServiceLevel]*[PriceCharged]

    Below is the breakdown of my COST-to-HOURS expression. I've put the branches on different lines to make it easier to read - please don't be put off by the length of this post, it's all the same info in the end. Many thanks, Mike

        ServiceLevel: IIf([pricecharged]=100(COST),6(HOURS),
        IIf([pricecharged]=200 Or [pricecharged]=210,12.5,
        IIf([pricecharged]=300,19,
        IIf([pricecharged]=400 Or [pricecharged]=410,25,
        IIf([pricecharged]=500,31,
        IIf([pricecharged]=600,37.5,
        IIf([pricecharged]=700,43,
        IIf([pricecharged]=800 Or [pricecharged]=810,50,
        IIf([pricecharged]=900,56,
        IIf([pricecharged]=1000,62.5,
        IIf([pricecharged]=1100,69,
        IIf([pricecharged]=1200 Or [pricecharged]=1210,75,
        IIf([pricecharged]=1300 Or [pricecharged]=1310,100,
        IIf([pricecharged]=1400,125,
        IIf([pricecharged]=1500,150,
        IIf([pricecharged]=1600,175,
        IIf([pricecharged]=1700,200,
        IIf([pricecharged]=1800,225,
        IIf([pricecharged]=1900,250,0)))))))))))))))))))
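    A common way out of nested-IIf limits, sketched here with hypothetical names: move the price-to-hours mapping into its own table, say tblServiceLevels with columns PriceCharged and Hours (one row per price: 100 -> 6, 200 -> 12.5, 210 -> 12.5, and so on), and look the value up instead of hard-coding it:

        ServiceLevel: Nz(DLookup("Hours","tblServiceLevels","PriceCharged=" & [PriceCharged]),0)

    Adding a 20th service level then becomes a new table row rather than an expression edit. If the query handles many rows, joining tblServiceLevels into the query on PriceCharged and using its Hours column directly avoids the per-row DLookup calls, which are slow.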


  • Remote Desktop to Server 2008 fails from one particular Win7 client

    - by Jesse McGrew
    I have a VPS running Windows Web Server 2008 R2. I'm able to connect using Remote Desktop from my home PC (Windows 7), personal laptop (Windows 7), and work laptop (Windows XP). However, I cannot connect from my work PC (Windows 7). I receive the error "The logon attempt failed" in the RDP client, and the server event log shows "An account failed to log on" with this explanation:

        Subject:
            Security ID:        NULL SID
            Account Name:       -
            Account Domain:     -
            Logon ID:           0x0
        Logon Type:             3
        Account For Which Logon Failed:
            Security ID:        NULL SID
            Account Name:       username
            Account Domain:     hostname
        Failure Information:
            Failure Reason:     Unknown user name or bad password.
            Status:             0xc000006d
            Sub Status:         0xc0000064
        Process Information:
            Caller Process ID:  0x0
            Caller Process Name: -
        Network Information:
            Workstation Name:   JESSE-PC
            Source Network Address: -
            Source Port:        -
        Detailed Authentication Information:
            Logon Process:      NtLmSsp
            Authentication Package: NTLM
            Transited Services: -
            Package Name (NTLM only): -
            Key Length:         0

    I can connect from the offending work PC if I start up Windows XP Mode and use the RDP client inside that. The server is part of a domain, but my account is local, so I'm logging in with a username of the form hostname\username. None of the clients are part of a domain. The server uses a self-signed certificate; connecting from home I get a warning about that, but connecting from work I just get the logon error.
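    One hedged diagnostic, since XP Mode on the same physical machine works: the failing attempt is negotiated through NtLmSsp/NTLM, so it is worth comparing the NTLM negotiation settings between a working Windows 7 client and the failing one:

        rem Run on both the working and failing Win7 clients; if the value
        rem is absent, the OS default applies (NTLMv2 on Windows 7).
        reg query "HKLM\SYSTEM\CurrentControlSet\Control\Lsa" /v LmCompatibilityLevel

    The same setting is exposed in secpol.msc as "Network security: LAN Manager authentication level". This is a place to look, not a confirmed cause.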


  • Use Aladdin eToken with Thunderbird and other tools

    - by Yurij73
    I'm looking for an example of how to set up the eToken PRO Java device to work with Mozilla Thunderbird and with other Linux tools such as PAM logon. I installed pkiclient-5.00.28-0.i386.rpm from the official eToken PRO product page, but that tool only handles importing/exporting certificates on the device. I skimmed an old eToken-on-Linux HOWTO, but I couldn't install the pkcs11 library it recommends for letting Thunderbird use this crypto device. It seems my USB token isn't listed in the system, although lsusb shows it. So that is the matter:

        modutil -list -dbdir /etc/pki/nssdb

        Listing of PKCS #11 Modules
          NSS Internal PKCS #11 Module
            slots: 2 slots attached
            status: loaded
            slot: NSS User Private Key and Certificate Services
            token: NSS Certificate DB
          CoolKey PKCS #11 Module
            library name: libcoolkeypk11.so
            slots: 1 slot attached
            status: loaded
            slot: AKS ifdh [Main Interface] 00 00
            token:

    Is my token absent? On the other hand, I don't know which module is right for the eToken PRO Java - does CoolKey do the whole job? Is the Java token too new a piece of hardware for Linux? Here is an excerpt from /etc/pam_pkcs11.conf:

        # filename of the PKCS #11 module. The default value is "default"
        use_pkcs11_module = coolkey;
        screen_savers = gnome-screensaver,xscreensaver,kscreensaver
        pkcs11_module coolkey {
            module = libcoolkeypk11.so;
            description = "Cool Key";
        }
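    A sketch of the usual NSS registration step, assuming the vendor's PKCS #11 library is installed (the path below varies by package; SafeNet/Aladdin builds often ship it as libeToken.so):

        # Register the vendor module with the system NSS database
        modutil -dbdir /etc/pki/nssdb -add "eToken PKI" -libfile /usr/lib/libeToken.so
        modutil -dbdir /etc/pki/nssdb -list

    Thunderbird keeps its own NSS database per profile, so the token also has to be loaded there, via the Security Devices dialog (under the Advanced/Certificates preferences) using "Load" and the same .so. CoolKey targets CAC/PIV-style smart cards; an empty "token:" line under its slot usually means it sees the reader interface but cannot talk to this particular token, which is consistent with needing the vendor module instead.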


  • GNOME 3 gdm fails to start after preupgrade from Fedora 14 to 15

    - by digital illusion
    I'm not able to boot Fedora 15 into runlevel 5. After all services start, when the login screen should appear, gdm just shows a mouse waiting cursor and keeps restarting itself. From /var/log/gdm/:0-greeter.log:

        Gtk-Message: Failed to load module "pk-gtk-module"
        /usr/bin/gnome-session: symbol lookup error: /usr/lib/gtk-3.0/modules/libatk-bridge.so: undefined symbol: atk_plug_get_type
        /usr/libexec/gnome-setting-daemon: symbol lookup error: /usr/lib/gtk-3.0/modules/libatk-bridge.so: undefined symbol: atk_plug_get_type

    Where should atk_plug_get_type be defined?

    Edit: Here is a better description of the error:

        (system-config-network-gui:2643): Gnome-WARNING **: Accessibility: failed to find module 'libgail-gnome' which is needed to make this application accessible
        /usr/bin/python: symbol lookup error: /usr/lib/gtk-2.0/modules/libatk-bridge.so: undefined symbol: atk_plug_get_type

    Why are there still references to gtk2? Did preupgrade fail? Attaching the upgrade log... it seems gdm was not added, but it is present in the users and groups list.

        May 26 11:25:52 sysimage sendmail[1076]: alias database /etc/aliases rebuilt by root
        May 26 11:25:52 sysimage sendmail[1076]: /etc/aliases: 77 aliases, longest 23 bytes, 795 bytes total
        May 26 11:46:09 sysimage useradd[1793]: failed adding user 'dbus', data deleted
        May 26 11:53:37 sysimage systemd-machine-id-setup[2443]: Initializing machine ID from D-Bus machine ID.
        May 26 11:55:28 sysimage useradd[2835]: failed adding user 'apache', data deleted
        May 26 11:55:38 sysimage useradd[2842]: failed adding user 'haldaemon', data deleted
        May 26 11:55:43 sysimage useradd[2848]: failed adding user 'smolt', data deleted
        May 26 11:57:32 sysimage sendmail[3032]: alias database /etc/aliases rebuilt by root
        May 26 11:57:32 sysimage sendmail[3032]: /etc/aliases: 77 aliases, longest 23 bytes, 795 bytes total
        May 26 11:57:46 sysimage groupadd[3066]: group added to /etc/group: name=cgred, GID=482
        May 26 11:57:47 sysimage groupadd[3066]: group added to /etc/gshadow: name=cgred
        May 26 11:57:47 sysimage groupadd[3066]: new group: name=cgred, GID=482
        May 26 11:58:42 sysimage useradd[3086]: failed adding user 'ntp', data deleted
        May 26 12:00:13 sysimage dbus: avc: received policyload notice (seqno=2)
        May 26 12:15:08 sysimage useradd[4950]: failed adding user 'gdm', data deleted
        May 26 12:24:39 sysimage dbus: avc: received policyload notice (seqno=3)
        May 26 12:25:24 sysimage useradd[5522]: failed adding user 'mysql', data deleted
        May 26 12:25:37 sysimage useradd[5533]: failed adding user 'rpcuser', data deleted
        May 26 12:26:31 sysimage useradd[5592]: failed adding user 'tcpdump', data deleted

    Any suggestions before I revert the installation to F14?
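    For what it's worth, atk_plug_get_type is the type function of the AtkPlug class, which lives in libatk itself and only appeared around ATK 1.30, so a stale pre-F15 atk package left behind by the upgrade would explain the symbol lookup failure in both the gtk2 and gtk3 bridge modules. A hedged diagnostic sketch:

        # Which packages own the failing modules and the atk library?
        rpm -qf /usr/lib/gtk-3.0/modules/libatk-bridge.so
        rpm -qf /usr/lib/gtk-2.0/modules/libatk-bridge.so
        rpm -q atk at-spi at-spi2-atk
        # From yum-utils: list leftovers no current repo provides
        package-cleanup --orphans

    If rpm -q atk reports a Fedora 14 version, a plain "yum distro-sync" (or at least "yum update atk") would be worth trying before reverting.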


  • One Comcast Business Gateway, One Router, Two Web Servers

    - by Kevin Scheidt
    I have a Comcast business account with a router and a web server (info) attached. Behind the router there are multiple computers and a second web server (com), which also serves as a file server. (info) has two NICs in it: one direct to Comcast and one connected to the router. It needs to serve its websites to the world, but it also needs to be able to see all the internal computers and (com)'s shared files. With just one NIC (the one connected to the router, not Comcast), (info) works fine, but no one outside can see it. (com) serves port 80, and (info) needs to handle port 80 as well. I have two domain names registered and 5 static IPs from Comcast.

    Right now http://www.graceamazing.com, handled by (com), works fine, and http://www.graceamazing.com:1307, handled by (info), works fine. But as soon as I enable the second NIC in (info), http://www.graceamazing.info runs extremely slowly (horribly slow), while http://www.graceamazing.com:1307 and .com still work fine. (com) has an IP address via the router, 70.89.233.41; (info) has an IP address of 70.89.233.46 via Comcast (second NIC) and an internal static IP of 192.168.x.100 behind the router.

    Any suggestions or changes I can make so that http://www.graceamazing.info performs with the same speed it has when going through http://graceamazing.com:1307? Is there a setting I should check / could have missed?
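    A hedged guess at the mechanism, with an illustrative sketch: with the second NIC enabled, (info) has two paths to the internet, and if both NICs carry a default gateway, reply packets for graceamazing.info can leave through the router instead of the Comcast-facing NIC. That kind of asymmetric routing typically shows up as a very slow (rather than dead) site. The usual arrangement is a default gateway on the Comcast-facing NIC only, with the internal NIC limited to the local subnet. Assuming (info) runs Windows (the post doesn't say; adapter name and address are placeholders):

        rem Inspect: there should be exactly one 0.0.0.0 route
        route print
        rem Internal NIC: static address and mask, *no* gateway
        netsh interface ipv4 set address name="Internal" static 192.168.1.100 255.255.255.0

    If (info) runs something else, the same principle applies: one default route, out the Comcast NIC.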


  • Reduce power consumption of gaming computer while idle

    - by White Phoenix
    This is my current build:

        EVGA X58 (first generation) motherboard
        Intel i7 965 clocked @ 3.3 GHz
        3x DDR3-1600 Corsair RAM at stock timings and voltages
        Corsair AX750 80 Plus Gold PSU
        1 optical drive
        1 Seagate 7200.10 500 GB drive
        2x Western Digital Caviar Black 1 TB drives
        OCZ Vertex 1 60 GB
        EVGA GTX 460 oc'd at 800/1600/1850
        Antec 1200 case
        HT-Omega Striker 7.1 sound card
        Windows 7 32-bit Professional (PAE enabled)

    I've already seen the posts "Reduce power use on computer" and "How do I lower power consumption of my computer", and while useful, I'm looking for answers specific to my build and OS. I'm pretty sure this is an energy-intensive build by default, but I want to reduce the amount of energy it uses when I leave it idle (when I go to bed or go out, etc.).

    The first requirement is that I need to leave the machine on, so I cannot turn it off while it's unused. I run it as a file server for personal reasons, and I also leave it on in case people leave me messages on various IM services and chat clients (IRC, MSN, Steam, XFire, Pidgin, etc.). I'm also unable to replace the parts in my computer with cheaper, "greener" parts.

    What are some ways to minimize the amount of power the machine uses? I'm already using a high-efficiency power supply (80 Plus Gold), but I imagine there are other things that can be done in the BIOS and in Windows' power settings to reduce power usage while I'm not using the computer. From what I can tell, I can't use Sleep, since that would disable network access (the whole reason I leave the computer on in the first place). I already turn off my monitor when it's not in use, and I enabled Intel SpeedStep in the BIOS (I know, I have a 965, so why am I enabling SpeedStep?). Should I bring the graphics card back to stock speeds and lower the clock on the processor even more? The main reason I'm asking is that I think this computer alone is why my power bill is high, so I want to reduce its consumption as much as possible without shutting the thing down.
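    A sketch of the Windows-side knobs, which can be scripted so the machine drops into a frugal profile at night; the plan GUID below is the built-in Power saver scheme that ships with Windows 7:

        rem List available plans and their GUIDs
        powercfg -list
        rem Switch to the built-in Power saver plan
        powercfg -setactive a1841308-3541-4fab-bc81-f71556f20b4a
        rem Let disks and display sleep on AC power (minutes)
        powercfg -change -disk-timeout-ac 20
        powercfg -change -monitor-timeout-ac 10

    Spinning down the two 1 TB drives and the 500 GB drive saves several watts each, and the file-server and IM roles only need the NIC and RAM awake. Returning the GTX 460 to stock clocks is also worth testing: some overclocks prevent the card from dropping into its lowest idle power state.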


  • Dynamic ARP Entries turning into Static ARP entries

    - by Zach
    I recently acquired a client that has a strange ARP caching issue on one of their servers: the server will eventually start turning its dynamic ARP entries into static ARP entries. This causes problems, because when a machine that has a static ARP entry on this server receives a new IP via DHCP, the server can no longer communicate with it. Clearing the ARP cache resolves the issue, and the server is fine for about a week; then it slowly starts turning ARP entries into static entries again. I haven't narrowed down when it starts or how fast it progresses, but slowly you start seeing 1 static ARP entry, then 5, then 10.

    The server in question is Windows Server 2003 SP2, acting as a DC, DHCP and DNS server. I've checked the DHCP scope options, and there's nothing in there that would have anything to do with static ARP entries. The only difference between this DNS server and our other one is that "Dynamically update DNS A and PTR records for DHCP clients that do not request updates" is checked on the problematic server. From a bit of research, this can apparently happen if any PXE-type services are running, but as far as I can tell nothing is running a PXE server. I'm a bit lost, as I have never seen dynamic ARP entries turn into static ones.

    Right now my workaround is a scheduled task that runs every 24 hours and clears the ARP cache (arp -d *). I would rather not rely on that task. Has anybody seen this before, or have any suggestions on how to troubleshoot it?
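    For anyone codifying the same workaround while hunting the root cause, a sketch of the scheduled flush plus a quick way to watch the symptom develop ("arp -a" labels each entry dynamic or static on Server 2003):

        rem One-off inspection: count static entries
        arp -a | find /c "static"
        rem Nightly flush at 03:00 under the SYSTEM account
        rem (on Server 2003 the /st value takes HH:MM:SS)
        schtasks /create /tn "Flush ARP cache" /tr "arp -d *" /sc daily /st 03:00:00 /ru SYSTEM

    Logging the static count hourly to a file would at least pin down when the conversion starts, which narrows the list of suspect services.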


  • MySQL Master-Master w/ multiple read slave cost effective setup in AWS

    - by Ross
    I've been evaluating Amazon Web Services RDS for MySQL and costing out potential scenarios: a simple Multi-AZ read/write deployment versus a Multi-AZ MySQL master (hot standby) with additional read-only slaves. The thing I'm trying to cost-optimize involves their reserved instances versus on-demand instances.

    Situation 1: purchase a reserved Multi-AZ setup with an extra-large high-memory (17 GB RAM) instance for $5200/yr and have my application query the master all the time. The problem is, if I don't need all the resources of the 17 GB instance all the time - and therefore especially not a hot standby at that size - what savings could a better topology create? For example, situation 2 below.

    Situation 2: purchase a reserved Multi-AZ setup using a smaller master instance for the hot-standby pair, which receives the writes only. Then create and load-balance several read-only slaves off the master, adding/removing and/or scaling the read slaves up and down based on demand. This might only cost $1000 plus the on-demand usage of the read slaves.

    My thinking is that with a variable, read-intensive application load and a low write load, the single-level topology in situation 1 means I'm paying for a lot of resources at the write level of the topology when I don't need them there. My hope is that situation 2 yields cost savings from smaller reserved instances at the master level, while letting me scale out at the read level according to demand. Does anyone see a downside to doing this, or know of some reason it isn't possible with RDS? Any other thoughts or advice always welcome, of course. Thanks in advance, R
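    For what it's worth, this topology maps directly onto RDS's native read replicas, so no self-managed slaves are needed; a sketch with the current AWS CLI (identifiers and sizes are hypothetical):

        # Smaller Multi-AZ master that takes all writes
        aws rds create-db-instance \
            --db-instance-identifier master-db \
            --db-instance-class db.m1.large \
            --engine mysql --multi-az \
            --allocated-storage 100 \
            --master-username admin --master-user-password 'REPLACE_ME'
        # Read replicas added and removed as demand moves
        aws rds create-db-instance-read-replica \
            --db-instance-identifier read-replica-1 \
            --source-db-instance-identifier master-db

    Two caveats worth pricing in: replication to the replicas is asynchronous, so reads can be slightly stale, and the application (or a proxy) has to split reads from writes itself, since RDS does not load-balance the replicas for you.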


  • Shared firewall or multiple client specific firewalls?

    - by Tauren
    I'm trying to determine whether I can use a single firewall for my entire network, including customer servers, or whether each customer should have their own firewall. I've found that many hosting companies require each client with a cluster of servers to have their own firewall: if you need a web node and a database node, you also have to get a firewall, and pay another monthly fee for it.

    I have colo space with several KVM virtualization servers hosting VPS services for many different customers. Each KVM host runs a software iptables firewall that only allows specific ports to be accessed on each VPS. I can control which ports any given VPS has open - allowing a web VPS to be accessed from anywhere on ports 80 and 443, say, while blocking a database VPS from the outside completely and only allowing one particular other VPS to reach it. The configuration works well for my current needs. Note that there is no hardware firewall protecting the virtualization hosts at this time; however, the KVM hosts only have port 22 open, run nothing except KVM and SSH, and even port 22 cannot be reached from outside the netblock.

    I'm now rethinking the network because I have a client who needs to move from a single VPS to two dedicated servers (one web, one DB). A different customer already has a single dedicated server that is behind no firewall except the iptables running on the system itself. Should I require that each dedicated-server customer have their own dedicated firewall? Or can I use a single network-wide firewall for multiple customer clusters? I'm familiar with iptables, and am currently thinking I'll use it for any firewalls/routers I need. But I don't necessarily want to burn 1U of rack space per customer firewall, nor the power each firewall server would draw, so I'm also considering a hardware firewall. Any suggestions on a good approach?


  • Sluggish Windows SBS 2003

    - by TomWilsonFL
    One of my customers has a Windows 2003 Small Business Server which at this point is basically the DC, DNS, file server and Symantec Protection Manager. I have disabled Exchange because I moved their mail to Google Apps. The server is extremely sluggish when doing anything. It is most noticeable when a dialog box is open (say, System Properties) and you try to change tabs: usually instant, but on this machine it can take 3-5 seconds.

    What additional services/packages can I uninstall from this machine, given that it is only performing the roles above? Will removing the "Small Business Server" package in Add/Remove Programs get rid of a few unnecessary things? Any other thoughts? P.S. I know Symantec Endpoint and the Protection Manager are hogs, but I have nothing to replace them with at the moment. Thanks, Tom

    UPDATE: I looked over the various performance metrics, but nothing stood out as a problem. A friend mentioned that Symantec's log and temp files can get quite huge and slow things down, so I ran CCleaner on the machine and found close to 3 GB of Symantec "stuff". Removed that, and now the machine is MUCH better. I am still unsure why data just sitting there would cause such a slowdown - the drive is not even near full. The only thing I can imagine is that Symantec must be scanning through all of it now and then.


  • AD User Passwords expiring without any notifications?

    - by scooter133
    We set up password policies in Active Directory to expire people's passwords after so many days. It looks like the time has come for the passwords to expire, and people are getting locked out - there has been no warning that their passwords were about to expire. They just come in to work and cannot log in; the phones no longer connect; nothing. Reset the password and all is good. Some of the users are locked out, though most are not; they just cannot log in.

    When setting the password expiration, I didn't see anything about warning users of the impending expiry. It used to warn you 15 days or so before a password would expire. Clients range from WinXP, WinVista and Win7 to Server 2008 R2 Remote Desktop Services. How can I make sure my users are warned of the expiration?

    Resultant Set of Policy for a user that was not prompted (every setting below is won by the Default Domain Policy):

        Account Policies / Password Policy
            Enforce password history:                    10 passwords remembered
            Maximum password age:                        270 days
            Minimum password age:                        0 days
            Minimum password length:                     4 characters
            Password must meet complexity requirements:  Disabled
            Store passwords using reversible encryption: Disabled
        Account Policies / Account Lockout Policy
            Account lockout duration:                    20 minutes
            Account lockout threshold:                   5 invalid logon attempts
            Reset account lockout counter after:         15 minutes
        Local Policies / Audit Policy
            Audit account logon events:                  Failure
            Audit account management:                    Success, Failure
            Audit directory service access:              Success, Failure
            Audit logon events:                          Failure
            Audit policy change:                         Success, Failure
            Audit privilege use:                         Failure
        Local Policies / Security Options - Interactive Logon
            Interactive logon: Prompt user to change password before expiration: 7 days
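    Worth noting: the "Prompt user to change password before expiration" notice only appears at interactive logons, so users who merely unlock a workstation, reconnect to an existing session, or sync phones can sail straight past it. A common supplement is a script that mails users before expiry; a sketch using the RSAT ActiveDirectory PowerShell module (run on a 2008 R2 box or newer):

        # Computed expiry per enabled account whose password can expire
        Import-Module ActiveDirectory
        Get-ADUser -Filter {Enabled -eq $true -and PasswordNeverExpires -eq $false} `
            -Properties msDS-UserPasswordExpiryTimeComputed |
          Select-Object Name, @{n='Expires'; e={
            [datetime]::FromFileTime($_.'msDS-UserPasswordExpiryTimeComputed')}}

    Piping accounts that expire within, say, 14 days into Send-MailMessage turns this into the warning the logon UI isn't delivering.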


  • How do I force a server to leave a SharePoint farm

    - by Stefan
    I have two web servers in a SharePoint (WSS 3.0) farm with one database server for the config and content databases. I have already moved my content databases to a new database server successfully. But when I tried to move the SharePoint config database using the "stsadm deleteconfigdb" and "stsadm setconfigdb" commands, one of my servers got stuck in an intermediate state. I was able to join one of the web servers to the config database on the new server, but the other server is not able to join, because it believes it is already part of the farm (which it used to be, before the move). Central Administration reports the status of the services on that server as "stopping". Even after rebooting all servers involved, uninstalling SharePoint, and what not, this status does not change, and because of it I am not able to join the second server to the new config database; I get random error messages when trying to join the farm. I believe that if I can unstick this server, it will be able to join the farm again. The farm believes the second server is already part of it, but the web server itself knows it's not. Any ideas on how to forcefully kick a server out of the farm?
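    Two hedged avenues that usually cover this, sketched from the WSS 3.0 tooling (the -connect arguments are abbreviated; server and database names are placeholders):

        rem On the stuck web server: force it to forget its farm binding
        cd /d "%CommonProgramFiles%\Microsoft Shared\web server extensions\12\BIN"
        psconfig -cmd configdb -disconnect
        rem Then rejoin the new farm
        psconfig -cmd configdb -connect -server NEWSQL -database SharePoint_Config ...

    The farm keeps its own memory of the server too: in Central Administration under Operations > Servers in Farm, the stuck entry can be removed with "Remove Server", which deletes its registration from the config database even when the server itself is unreachable. Doing both - disconnect on the box, Remove Server in the farm - is typically what clears a stuck "stopping" state.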


  • Swap static public IPs without creating DNS conflicts?

    - by Jakobud
    Our ISP is Comcast, and we have 5 static public IPs from them that we use for various services, including customers connecting to our network, VPN, web, DNS, etc. We need more IP addresses. Unfortunately, Comcast tells us they can't simply give us 5 more: they only issue static IPs in blocks of 1, 5 or 13, so to get more, they have to take away our current 5 static IPs and give us 13 new ones.

    How do we make this transition without causing all sorts of DNS chaos? We run public DNS servers, so we can make the DNS changes ourselves, but those changes will obviously take time to propagate across the internet. Are there any easy ways to make this transition, like creating some kind of fallback DNS entry? Surely there must be a procedure for this kind of thing - the Comcast support guy was useless.
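    The standard playbook, sketched here in BIND zone-file syntax (addresses are documentation placeholders): days before the cutover, drop the TTLs on every record that points at the old block, so the world forgets the old addresses quickly once they change, then raise the TTLs again afterwards.

        ; before the swap: shrink TTL from e.g. 86400 to 300 seconds
        www   300  IN  A  203.0.113.10   ; old address
        ; at cutover: same record, new address
        ; www 300  IN  A  198.51.100.10

    Two things beyond your own zones: the nameservers themselves live on the old block, so the glue records at your domain registrar must be updated to the new NS addresses (that propagation is outside your control and is the usual source of "chaos"), and if Comcast will run both blocks in parallel even for a day or two, answering on old and new addresses simultaneously makes the switch nearly invisible.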


  • Error regarding DNS - "... must be able to resolve names ..." (Windows Server 2008 R2 installation)

    - by Scolytus
    I'm trying to replace our old Windows 2000 Server with a Windows Server 2008 R2 machine, following the guide on MSDN. At the step "Install Active Directory Domain Services...", the option to install the DNS server was grayed out; according to Microsoft Support, I should skip the DNS server installation at that point (because of the single-label DNS name). I then installed the DNS Server role and created a forward lookup zone for the domain.

    When running the Best Practices Analyzer for the DNS Server role, I get these two messages for both domain controllers (the old Win2k box and the new 2008 R2 one):

        The DNS server [IP address] on [adapter name] must be able to resolve names in the primary DNS domain zone
        The DNS server [IP address] on [adapter name] must be able to resolve names in the forest root domain name zone

    The TechCenter articles suggest using a proper DNS server - pointless advice when a proper DNS server is exactly what I'm trying to configure. How do I configure the DNS server so that it resolves these zones? Or are these errors irrelevant? dcdiag /v /test:DNS seems to run fine...
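    A hedged verification sketch (the zone name is a placeholder; with a single-label domain it will be something like "company" rather than "company.local"): the two BPA complaints clear only when each DC can resolve its own domain and forest zones through the DNS servers listed on its NIC, so point every DC's NIC at a DC running DNS (itself first is fine) and confirm locally:

        rem Zones actually hosted on this server
        dnscmd /enumzones
        rem Can this DC resolve its own zone against itself?
        nslookup -type=SOA company 127.0.0.1

    If dcdiag /v /test:DNS passes, replication and SRV registration are working, and with a single-label domain some BPA warnings are expected noise; Microsoft's single-label-domain guidance (KB 300684) covers the extra settings such domains need.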


  • How to authenticate once for multiple servers, using only Apache configs?

    - by Wang
    My problem: I have a number of prepackaged web apps (a print system, a wiki, a bug tracker, an email archive, etc.) running on different Mac OS X Leopard (soon to be Snow Leopard) servers, each of which needs to authenticate users from the internet at large. Right now every server presents an Apache basic authentication prompt, which takes a shared login, but it's apparently enough of an inconvenience to log in repeatedly that people are sending email without checking the wiki, bug tracker or archive. In the case of the bug tracker, a user might need to log in twice - once for Apache, if he hasn't used any other protected service on that server, and once for the bug tracker itself so it can distinguish different people.

    Since the only component common to all these apps is Apache 2 itself, does Apache have any way of authenticating a user once, in some way that will be respected by other servers and the various web apps? I looked at "How is session stickiness achieved across multiple web servers?" on Server Fault, but its answers assume I get to write my own web app. I also looked at Ian Bicking's blog, but it's four years old and recommends something available only for Apache 1.3, not Apache 2. (Sorry not to hyperlink the second site - apparently I need 10 reputation points.)

    Edit: Shibboleth does what I need, but I should have specified that I'm looking for a really dumb, really simple solution for in-house services that need to handle all of a dozen users, probably not more than three at a time.
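    Given the edit, Shibboleth is the full single-sign-on answer; at the "really dumb" end of the spectrum, the closest Apache-only approximation is pointing every server's basic auth at one shared directory, so there is one account store even though each hostname still prompts once per browser session. A sketch for Apache 2.2's bundled modules (directory host and DN are hypothetical):

        LoadModule ldap_module modules/mod_ldap.so
        LoadModule authnz_ldap_module modules/mod_authnz_ldap.so
        <Location />
            AuthType Basic
            AuthName "Intranet"
            AuthBasicProvider ldap
            AuthLDAPURL "ldap://dir.example.com/ou=people,dc=example,dc=com?uid"
            Require valid-user
        </Location>

    On OS X Server the natural directory to point at is Open Directory, which these modules can query like any other LDAP server. True prompt-free SSO across hostnames, though, needs a cookie/ticket scheme (mod_auth_tkt, CAS, or the Shibboleth already found), because browsers cache basic-auth credentials per hostname.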


  • Iptables: "-p udp --state ESTABLISHED"

    - by chris_l
    Hi, let's look at these two iptables rules, which are often used to allow outgoing DNS:

        iptables -A OUTPUT -p udp --sport 1024:65535 --dport 53 -m state --state NEW,ESTABLISHED -j ACCEPT
        iptables -A INPUT  -p udp --sport 53 --dport 1024:65535 -m state --state ESTABLISHED -j ACCEPT

    My question is: how exactly should I understand the ESTABLISHED state in UDP? UDP is stateless. Here is my intuition - I'd like to know if or where it is incorrect. The man page tells me this:

        state
            This module, when combined with connection tracking, allows access to
            the connection tracking state for this packet. --state ...

    So iptables basically remembers the port number that was used for the outgoing packet (what else could it remember for a UDP packet?), and then allows the first incoming packet that is sent back within a short timeframe? An attacker would have to guess the port number (would that really be too hard?). And about avoiding conflicts: does the kernel keep track of which ports are blocked (either by other services, or by previous outgoing UDP packets), so that these ports will not be used for new outgoing DNS packets within the timeframe? (What would happen if I accidentally tried to start a service on such a port within the timeframe - would that attempt be denied/blocked?) Please find all errors in the above text :-) Thanks, Chris
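    For what it's worth, the intuition is close, but conntrack remembers more than a port: each UDP entry stores the full source/destination address and port 4-tuple, and a reply counts as ESTABLISHED only if it inverts that exact tuple (from the same server's port 53 back to the same client port) before the UDP conntrack timeout expires (30 seconds by default for unreplied entries). So an off-path attacker must spoof the server's address as well as hit the right client port inside that window. Conntrack also does not reserve ports: a local service can still bind a port that appears in the conntrack table, and the kernel's ephemeral-port allocator avoids ports that are locally in use, not ports that merely have tracking entries. A hedged inspection sketch (conntrack-tools package; older kernels expose the same data in /proc/net/ip_conntrack):

        # Fire a query and watch the tracked "connection" appear
        dig example.com @8.8.8.8 &
        conntrack -L -p udp | grep dport=53

    Entries flip from [UNREPLIED] to plain once the answer arrives, which is exactly the NEW-to-ESTABLISHED transition the rules above rely on.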


  • Virtual WiFi issue on Windows 7

    - by Matt
    Lately I've been trying to use my laptop as a wireless router in my room. It's connected to my school's network through Ethernet, and I want to set up wireless so that I can use WiFi on my Android phone and iPod Touch. In the past I used Connectify, but I started having an issue where my phone would find the network, connect, attempt to get an IP, and then suddenly the network would disappear - then pop up again, and the same thing would happen over and over. I decided to uninstall Connectify completely, but after that, neither Virtual Router Manager nor the command prompt could create a viable network either. My phone, and now even my iPod, encounter the same problem; neither can successfully connect.

    So evidently something is wrong with the laptop's virtual WiFi feature, and I have no idea what. I've tried enabling certain services that virtual WiFi supposedly relies on, but some of them won't start, namely Remote Access Connection Manager. I have also read that these enable on their own, and that it's fine if they are not normally enabled. I even uninstalled and reinstalled the drivers for my wireless card. Any ideas as to why my virtual WiFi won't function? I really would love to get this working...
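    A sketch for testing the OS-level hosted network directly, which takes Connectify and Virtual Router Manager out of the picture (SSID and key below are examples; run from an elevated prompt):

        netsh wlan show drivers
        rem "Hosted network supported: Yes" must appear above
        netsh wlan set hostednetwork mode=allow ssid=TestNet key=MyPassphrase1
        netsh wlan start hostednetwork
        netsh wlan show hostednetwork

    If "start hostednetwork" fails, the usual suspects are the Microsoft Virtual WiFi Miniport Adapter being disabled in Device Manager (uninstallers sometimes leave it that way - a hedged guess for the post-Connectify breakage) and the WLAN AutoConfig service not running. The symptom described - clients associate but never finish getting an IP - also matches Internet Connection Sharing not being enabled on the Ethernet adapter, since ICS is what hands out addresses to hosted-network clients.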


  • Changing the default OpenVPN IP on a Linux server

    - by Lamboo
    The situation: we run a public OpenVPN service. Pay €9.95 and you get an OpenVPN account on currently half a dozen servers for a month. This means there are always, and will always be, some people who create a certain amount of abuse or trouble. In the long run, the external IP every OpenVPN user gets assigned ends up prohibited from editing Wikipedia, and it might be banned by e-gold, some popular web forums, one-click hosters, etc. Not a pleasant experience for the 97% of our customers who use our service responsibly and legitimately to regain their privacy.

    So if I could change the assigned external IP every few months - e.g. from 216.xx.xx.164 to 216.xx.xx.170 - it would help us a lot in combating this abuse, and it would give our paying clients "fresh" IP addresses that aren't yet banned or restricted on popular internet sites and services.

    Does anybody know how to change the first IP address assigned to the public interface in CentOS, so that, for example, OpenVPN in future doesn't give our OpenVPN clients the external IP 123.xx.xx.164 but rather 123.xx.xx.170?
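    A hedged alternative that avoids touching the interface's primary address at all: leave the addresses bound as they are and rewrite the source of the VPN clients' traffic with SNAT, so rotating the exit IP becomes a one-rule change. The interface and subnet below are illustrative (a common OpenVPN layout), and the public address is a documentation placeholder:

        # Clients on tun0 using 10.8.0.0/24, public NIC eth0
        iptables -t nat -A POSTROUTING -s 10.8.0.0/24 -o eth0 -j SNAT --to-source 198.51.100.170
        # The chosen address must be bound on eth0, e.g. as an alias
        # (ifcfg-eth0:0 under /etc/sysconfig/network-scripts on CentOS)

    Swapping --to-source to another bound alias months later rotates every client's exit IP without reconfiguring OpenVPN or the interface ordering.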


  • NTDS Replication Warning (Event ID 2089)

    - by Chris_K
    I have a simple little network with 3 AD servers in 2 sites. Site A has a Win2k3 SP2 and a Win2k SP4 server; site B has a single Win2k3 SP2 server. All have been in place for at least 3 years now. Just last week I started getting Event 2089 "not backed up" warnings (example below) on both of the Win2k3 servers. I understand what the message means - no need to send me links to the TechNet article explaining it; I'll improve my backups. What I'm more curious about is: why did I just start getting this message now? Why haven't I been getting it for the past 3 years?!?

    Perhaps this is related: I recently decommissioned a few other sites and AD controllers (there used to be 3 more sites, each with their own controller). Don't worry, I did proper dcpromo exercises and made sure we didn't lose anything. But could shutting those down be related to why I get this error now? This won't keep me awake at night, but I am curious as to what changed...

        Event Type:     Warning
        Event Source:   NTDS Replication
        Event Category: Backup
        Event ID:       2089
        Date:           3/28/2010
        Time:           9:25:27 AM
        User:           NT AUTHORITY\ANONYMOUS LOGON
        Computer:       RedactedName
        Description:
        This directory partition has not been backed up since at least the following
        number of days. Directory partition: DC=MyDomain,DC=com
        'Backup latency interval' (days): 30
        It is recommended that you take a backup as often as possible to recover from
        accidental loss of data. However if you haven't taken a backup since at least
        the 'backup latency interval' number of days, this message will be logged every
        day until a backup is taken. You can take a backup of any replica that holds
        this partition. By default the 'Backup latency interval' is set to half the
        'Tombstone Lifetime Interval'. If you want to change the default 'Backup
        latency interval', you could do so by adding the following registry key.
        'Backup latency interval' (days) registry key:
        System\CurrentControlSet\Services\NTDS\Parameters\Backup Latency Threshold (days)
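    A hedged thought on the "why now": event 2089 reads the timestamp that a System State backup writes into each DC's own copy of the partition, so if the decommissioned DCs were the ones the backup job actually hit, the surviving DCs' timestamps would now simply be aging past the 30-day threshold. Each DC can report when it last recorded a backup for each partition:

        rem Per-partition last-backup times as recorded on this DC
        repadmin /showbackup

    Running that on both complaining servers should produce dates that either line up with the decommissioning work or at least pin down when backups stopped registering.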


  • HAProxy appears to be "randomly" switching between the ACL-matched backend and the default backend

    - by Xoor
    I have HAProxy acting as a proxy in front of:

        - an NGinx instance
        - an in-house load balancer in front of multiple dynamic services exposed with socket.io (websockets)

    My problem is that from time to time my connections are proxied correctly to my socket.io cluster, and then randomly HAProxy falls back to routing to NGinx, which is obviously annoying and meaningless, since NGinx isn't meant to handle those requests. This happens on requests for URLs of the form http://mydomain.com/backends/*, and there is an ACL in the HAProxy config to match the /backends/* path. Here is a simplified version of my HAProxy config (extra unrelated entries removed, names changed):

        global
            daemon
            maxconn 4096
            user haproxy
            group haproxy
            nbproc 4

        defaults
            mode http
            timeout server 86400000
            timeout connect 5000
            log global

        # this frontend interface receives the incoming http requests
        frontend http-in
            mode http
            # process all requests made on port 80
            bind *:80
            # set a large timeout for websockets
            timeout client 86400000
            # Default Backend
            default_backend www_backend
            # Loadfire (socket cluster)
            acl is_loadfire_backends path_beg /backends
            use_backend loadfire_backend if is_loadfire_backends

        # NGinx backend
        backend www_backend
            server www_nginx localhost:12346 maxconn 1024

        # Loadfire backend
        backend loadfire_backend
            option forwardfor    # This sets X-Forwarded-For
            option httpclose
            server loadfire localhost:7101 maxconn 2048

    It's really quite confusing that the behaviour appears to be "random", and since it's hard to reproduce, it's hard to debug. I'd appreciate any insight on this.
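    A hedged explanation that fits the "randomness": HAProxy of this era (1.4) runs in tunnel mode by default, meaning content rules - including path_beg ACLs - are evaluated only on the first request of each client connection; every later request reuses that connection's backend choice. A browser that keep-alives a connection first used for a normal page will then send its /backends/* requests to www_backend, which looks random because it depends on connection reuse. The usual fix is to make routing per-request in the frontend (or defaults); a sketch:

        frontend http-in
            bind *:80
            # keep client-side keep-alive but re-examine every request;
            # worth testing against the websocket traffic specifically
            option http-server-close
            acl is_loadfire_backends path_beg /backends
            use_backend loadfire_backend if is_loadfire_backends
            default_backend www_backend

    WebSocket upgrades should still work in this mode, since HAProxy switches the connection to a tunnel once it sees the Upgrade handshake.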


  • Durability of Websockets Server

    - by smitchell360
    I am starting to experiment with websockets. Does anyone know of a websockets server (open source or paid) that provides a durable store of the websocket "channel"? All of the examples I have found do not address durability: if a websockets server goes down, all channel data is lost. Services such as Pusher do not really discuss whether they address the durability issue (and I have not received a response from tech support yet). I'm happy to roll my own, but would rather not reinvent the wheel.

    EDIT: I'm not looking for websockets 101 information; that is readily available and understood. I'm looking for a server (open source or paid) that supports websockets and has a durable store for the websocket data, so that if a server fails, a new server can take over where the original one left off. Two main purposes:

        1. supporting the failover scenarios contemplated by the websockets Network Working Group (http://tools.ietf.org/html/draft-ibc-websocket-dns-srv-02#section-5.1), most importantly so that missed messages are delivered when a client connects to a failover server;
        2. supporting scenarios where new subscribers must receive all past messages that were published.

    Of course this can be handled at the application layer... but that is not what I am looking for.


  • Exchange MSExchangeIS Mailbox Store Error

    - by Bart Silverstrim
    The boss asked me to check whether I could figure out why he has had to restart the services on the Exchange server three mornings in a row now. While going through the system logs, I ran across an error from the MSExchangeIS Mailbox Store, category General, Event 9690. The message said (edited to generalize):

        Exchange store 'First Storage Group\Mailbox Store (Servername)': The logical
        size of this database (the logical size equals the physical size of the .edb
        file and the .stm file minus the logical free space in each) is 22 GB. This
        database size has exceeded the size limit of 22 GB. This database will be
        dismounted immediately.

    Hmm... it happened at five in the morning, and I'm thinking this is a pretty good hint that leads to the culprit. Thing is, I'm not an Exchange expert, so I'm still googling around to figure out how to fix the problem. Any better guidance out there? Or am I barking up the wrong binary tree? Exchange System Manager reports that the server is "version 6.5 build 7638.2, SP2", Standard edition, which I believe is Exchange 2003. It's running on Windows Server 2003 R2 Standard, SP2.
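    The 5 a.m. timing fits: Exchange 2003 SP2 checks mailbox store size once a day, at 05:00 by default, and dismounts any store over its configured limit. On SP2 Standard the limit is registry-configurable up to 75 GB, so raising it is the immediate fix while mailbox data gets pruned or archived. A sketch (the server name and the store's GUID suffix are placeholders - the exact key name is visible under the MSExchangeIS branch on the server; registry value names are case-insensitive):

        reg add "HKLM\SYSTEM\CurrentControlSet\Services\MSExchangeIS\SERVERNAME\Private-GUID" /v "Database Size Limit in Gb" /t REG_DWORD /d 75
        rem Remount the store, or restart the Information Store service:
        net stop MSExchangeIS && net start MSExchangeIS

    Microsoft's KB 912375 documents this value; the current 22 GB behaviour suggests someone already raised it once from the SP2 default of 18 GB.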


  • Firewall for internal networks

    - by Cylindric
    I have a virtualised infrastructure here, with separated networks (some physically, some just by VLAN) for iSCSI traffic, VMware management traffic, production traffic, etc. The recommendations are of course to not allow access from the LAN to the iSCSI network for example, for obvious security and performance reasons, and same between DMZ/LAN, etc. The problem I have is that in reality, some services do need access across the networks from time to time: System monitoring server needs to see the ESX hosts and the SAN for SNMP VSphere guest console access needs direct access to the ESX host the VM is running on VMware Converter wants access to the ESX host the VM will be created on The SAN email notification system wants access to our mail server Rather than wildly opening up the entire network, I'd like to place a firewall spanning these networks, so I can allow just the access required For example: SAN SMTP Server for email Management SAN for monitoring via SNMP Management ESX for monitoring via SNMP Target Server ESX for VMConverter Can someone recommend a free firewall that will allow this kind of thing without too much low-level tinkering of config files? I've used products such as IPcop before, and it seems to be possible to achieve this using that product if I re-purpose their ideas of "WAN", "WLAN" (the red/green/orange/blue interfaces), but was wondering if there were any other accepted products for this sort of thing. Thanks.


  • Exchange 2010 Hub cannot deliver to Exchange 2007 Hub - "451 5.7.3 Cannot achieve Exchange Server authentication"

    - by Graeme Donaldson
    We have an existing Exchange 2007 server in Site A (exch07). I've installed an Exchange 2010 server in Site B (exch10). Both servers have the CAS, Mailbox and Hub roles. Messages sent via SMTP on exch10 which are destined for mailboxes on exch07 are queued with the "Last Error" reported in Queue Viewer as '451 4.4.0 Primary target IP address responded with: "451 5.7.3 Cannot achieve Exchange Server authentication." Attempted failover to alternate host, but that did not succeed. Either there are no alternate hosts, or delivery failed to all alternate hosts.' I've found that some people have resolved this by creating new Receive Connectors which are scoped specifically to apply to connections from the remote hub/s, but I have had no luck doing this. Specifically I created new receive connectors on both servers with the following settings: Remote IP = IP/s of remote server Authentication = "Transport Layer Security (TLS)" and "Exchange Server authentication" Permission Groups = "Exchange servers" and "Legacy Exchange Servers" This made no difference, I see the same error message. What am I missing? Update: We noticed that the Application log had this error message from MSExchangeTransportService: Microsoft Exchange could not find a certificate that contains the domain name exch07.domain.local in the personal store on the local computer. Therefore, it is unable to support the STARTTLS SMTP verb for the connector exch10 with a FQDN parameter of exch07.domain.local. If the connector's FQDN is not specified, the computer's FQDN is used. Verify the connector configuration and the installed certificates to make sure that there is a certificate with a domain name for that FQDN. If this certificate exists, run Enable-ExchangeCertificate -Services SMTP to make sure that the Microsoft Exchange Transport service has access to the certificate key. It turns out that the default self-signed certificate was no longer enabled for the SMTP service for some reason. After enabling the self-signed certificate for SMTP, we no longer get the error in the event logs, but delivery is still failing with the same error message. Update 2: I put a mailbox on exch10 and attempted to deliver a message via SMTP on exch07 and I get the same error.

