Search Results

Search found 3857 results on 155 pages for 'samuel front'.

Page 114/155

  • Nginx, HAproxy, Unicorn, Rails and Node settings

    - by Julien Genestoux
    Our application is currently only a "regular" web app, with no fancy things like streaming HTTP or websockets. It's mostly a Rails app, served by a few (20 on 2 machines) Unicorn workers, proxied by a venerable nginx server which deals with load balancing. This has been working quite well for the past year and the app now serves between 400 and 800 requests per second at any point during the day. We're soon releasing 2 new APIs, both served by a Node application: a websocket one, as well as a long-polling HTTP one (the fancy kind, like the Twitter streaming API, where HTTP connections never end). They both use the same port on Node, and since the Node app is stateless, we can certainly deploy a few of them to handle the traffic. The Node app is now deployed as 5 instances, listening on 5 different 'private' ports on the same host. We need to put something in front of them to load balance, but also something that is able to deal with sockets (either websocket or HTTP streaming) which are intended to stay 'up' for days. The question is then: what? I read somewhere that HAProxy does a better job than Nginx at this. What do you recommend?
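
    For reference, a minimal haproxy.cfg sketch of the kind of front end the question describes; the five Node ports and the timeout values are illustrative assumptions, not taken from the question, and the exact directives vary by HAProxy version. The long client/server timeouts are what keep websocket and streaming connections from being cut off:

      # sketch only - ports and timeouts are assumptions
      defaults
          mode http
          timeout connect 5s
          timeout client  1h     # long-lived streaming/websocket clients
          timeout server  1h

      frontend node_apis
          bind *:8080
          default_backend node_pool

      backend node_pool
          balance leastconn      # spread persistent connections evenly
          server node1 127.0.0.1:9001 check
          server node2 127.0.0.1:9002 check
          server node3 127.0.0.1:9003 check
          server node4 127.0.0.1:9004 check
          server node5 127.0.0.1:9005 check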

    Read the article

  • Disconnect from PHP after output generated

    - by Oli
    I have a LEMP stack. Nginx sitting in front of PHP-FPM. Because some of the sites are heavy and there's opcode caching, PHP is set up so that there are only 5 child processes running. The aim being that each child can deal with any request in less than half a second and then move on to the next request. One problem I've found is that if a big chunk of HTML is getting sent out and the user has a slow connection, that PHP thread stays occupied until they've finished downloading. Because of my current setup, I have a pretty unforgiving timeout inside PHP where the script is killed after 20 seconds. This is to make sure everybody gets a turn, but on a slow connection this can mean the user gets cut off with a 504 Gateway Timeout. I was wondering if there was some sort of buffer solution that I could implement within or just behind Nginx that sent the request through and then... well... buffered the content into its own cache and fed it on to the client as and when they could download it. The aim being that the underlying PHP thread can be freed up. What I'm asking for doesn't have to be PHP-specific. Anything that deals with FastCGI, or indeed any Nginx upstream, might have a similar issue.
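
    Nginx can already do much of this buffering itself; a sketch of the relevant fastcgi directives (the socket path and sizes are illustrative assumptions, not values from the question):

      location ~ \.php$ {
          fastcgi_pass unix:/var/run/php-fpm.sock;   # assumed socket path
          include fastcgi_params;
          # Let nginx absorb the response so the PHP worker is freed quickly;
          # slow clients then drain nginx's buffers, not a PHP child.
          fastcgi_buffer_size 64k;
          fastcgi_buffers 16 64k;
          fastcgi_max_temp_file_size 64m;
      }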

    Read the article

  • Exchange Connector Won't Send to External Domains

    - by sisdog
    I'm a developer trying to get my .NET application to send emails out through our Exchange server. I'm not an Exchange expert, so I'll qualify that up front! We've set up a Receive Connector in Exchange that has the following properties: Network: allows all IP addresses via port 25. Authentication: Transport Layer Security and Externally Secured checkboxes are checked. Permission Groups: Anonymous Users and Exchange Servers checkboxes are checked. But when I run this PowerShell statement right on our Exchange server, it works when I send to a local domain address but fails when I try to send to a remote domain. WORKS: C:\Windows\system32> Send-MailMessage -To [email protected] -From [email protected] -Subject testing -Body testing -SmtpServer OURSERVER (BTW: my value for OURSERVER=boxname.domainname.local. This is the same fully-qualified name that shows up in our Exchange Management Shell when I launch it). FAILS: C:\Windows\system32> Send-MailMessage -To [email protected] -From [email protected] -Subject testing -Body testing -SmtpServer OURSERVER Send-MailMessage : Mailbox unavailable. The server response was: 5.7.1 Unable to relay At line:1 char:17 + Send-MailMessage <<<< -To [email protected] -From [email protected] -Subject testing -Body himom -SmtpServer FTI-EX + CategoryInfo : InvalidOperation: (System.Net.Mail.SmtpClient:SmtpClient) [Send-MailMessage], SmtpFailedRecipientException + FullyQualifiedErrorId : SmtpException,Microsoft.PowerShell.Commands.SendMailMessage EDIT: Following @TheCleaner's advice, I ran Add-ADPermission to allow the relay and it didn't help: [PS] C:\Windows\system32> Get-ReceiveConnector "Allowed Relay" | Add-ADPermission -User "NT AUTHORITY\ANONYMOUS LOGON" -ExtendedRights "Ms-Exch-SMTP-Accept-Any-Recipient" Identity User Deny Inherited -------- ---- ---- --------- FTI-EX\Allowed Relay NT AUTHORITY\ANON... False False Thanks for the help. Mark
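
    A quick diagnostic sketch for the Exchange Management Shell (not a fix): it shows which connector the relaying session is actually hitting, what it allows, and whether the anonymous relay right really landed on the "Allowed Relay" connector:

      Get-ReceiveConnector | Format-List Name,Bindings,RemoteIPRanges,AuthMechanism,PermissionGroups
      Get-ReceiveConnector "Allowed Relay" | Get-ADPermission -User "NT AUTHORITY\ANONYMOUS LOGON" | Format-Table User,ExtendedRights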

    Read the article

  • Mac OS X duplex printing problem: one- vs. multi-paged documents

    - by Christian Lindig
    I like to print on pre-printed stationery using Preview.app and a duplex-capable HP Color LaserJet 4700 (PostScript) printer. The print dialog handles one- and two-page documents differently: the paper needs to be placed differently in the tray depending on whether the document contains one page or two. This is not obvious when printing on plain paper but becomes obvious when the front and reverse sides of the sheets are marked. Otherwise the first page would end up on the reverse side of the first sheet. I believe the problem is caused by the printer driver setting duplex printing to false (using the PostScript setpagedevice operator) when emitting a single-page document, versus keeping it set to true when emitting multi-page documents. All this despite duplex printing always being specified in the print dialog. When printing a single-page document, duplex=true and duplex=false seem to make a difference with respect to which side of a sheet gets printed on. It would also be helpful if others could confirm the problem actually exists. I suspect this problem is not limited to specific printers. I'm on OS X 10.6 and I checked two different HP printers.
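
    For reference, the PostScript fragment in question is tiny; a driver that forces simplex for one-page jobs is effectively emitting the false variant of this (illustrative only):

      % request duplex, long-edge binding
      << /Duplex true /Tumble false >> setpagedevice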

    Read the article

  • Web page layout becomes broken when moved to live

    - by Moses
    I want to preface my question with the fact that I'm only a front-end web developer, so please excuse my gross lack of knowledge in this area. My company has three webservers: one for development (IIS v5), one for staging (IIS v5), and one for live deployment (IIS v6). Staging is an exact mirror of live. When I compare the staging and live web pages side by side in Firefox (3.6), the pages are identical. However, when I compare the staging and live pages with Internet Explorer (8), there are major differences... In staging, the squares for bulleted lists are small. In live, the squares are big. In staging, the borders for tables are thick. In live, the borders are thin. In staging, an ASP generated image is the proper height. In live, the image is cropped at the bottom by about 10px. In the end, the layout on live became broken because of these tiny differences, but why? Does the fact that live is on IIS 6 and staging is on IIS 5 account for the small variance in display? And is there any way I can change this server side? Also, is there any reason why Firefox displays both correctly and IE displays both incorrectly?

    Read the article

  • How do I load balance between two Linux machines?

    - by William Hilsum
    Inspired by the Stack Overflow network, I am now obsessed with HAProxy and trying to use it myself. At the moment, each HAProxy box has got two network cards (well, two configured; I can have a maximum of 4 and wasn't sure if they needed their own one for management between the boxes). On both machines, the backend one (eth1) is a private IP that goes to a switch connected to the webservers, and the front-facing one (eth0) has a public internet IP that is routed straight through. In addition, I have created an additional virtual IP for eth0 called eth0:0 which has got a third public IP address. I just about get how to use it for load balancing between multiple web servers that are behind it, but I am failing to load balance between the two HAProxy boxes - they appear to fight for the virtual IP, which does not appear to be a smart solution. Now, by using the virtual shared IP address, this solution appears to work and does seem to give me maximum uptime, but is this the correct way to do it, or is there a smarter way? I have been looking at other Linux packages such as keepalived, but I have only been using Linux (server) for a week now and am at the limits of my understanding. Has anyone done this before, and can you advise anything for maximum uptime?
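
    For what the keepalived approach looks like, a minimal sketch of the MASTER side (addresses, names and priorities are placeholders; the second HAProxy box gets state BACKUP and a lower priority, and keepalived then moves the shared VIP only when the master disappears):

      vrrp_instance VI_HAPROXY {
          interface eth0
          state MASTER              # BACKUP on the second HAProxy box
          virtual_router_id 51
          priority 101              # e.g. 100 on the backup
          virtual_ipaddress {
              192.0.2.10/24 dev eth0    # the shared public VIP (placeholder address)
          }
      }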

    Read the article

  • Homebrew large data cluster access for 2 user levels?

    - by Yegor
    The title probably makes little sense, so here is an example. I have a file hosting site that serves a large amount of semi-randomly accessed files. The setup is as follows: a high-horsepower front-end + DB server that also does encoding for files that need encoding; a fresh-file server, which stores newly uploaded content that's probably (and usually) rapidly accessed, with 500GB of RAIDed SSD storage that can push over 3Gbit of traffic; and 3 cheap node servers, containing 2 x 750GB SATA drives in RAID1, where files older than 2 weeks are archived from the SSD server (mentioned above). Files on each server are accessed via subdomains (via modsec) in a straightforward fashion (server1.domain.com, server2.domain.com, etc). Where I have the problem is this. I introduced a "premium" service where people pay a small fee every month and get ad-free, quick access to stuff on the site. Once they are logged in, they access the same files via premium.server1.domain.com via a different modsec script, with a different pass phrase. That all works fine and dandy... except the cheap node servers are all IO-bound, so accessing the files on them via a different, unsaturated network makes no difference, since they cannot read off the drive fast enough. What would be a good way to make files on the site accessible via 2 different network routes, 1 of which will be saturated (the "free" network), while all other files are on an unsaturated "premium" network?

    Read the article

  • Affordable combined Ruby/Rails/Redmine + Subversion hosting?

    - by Pekka
    I'm a self-employed web developer and after nine years of hard work, I'm looking to become a bit more "vagrant" starting next year, do some much-needed traveling and work a bit off and on, making use of one of the greatest advantages of a programming job: the ability to work virtually from anywhere. For that, I am looking for a reliable hosting company I can entrust my code to in the form of a number of Subversion repositories, plus an installation of the Redmine project management tool. As my financial situation may vary while traveling, I am looking for something I can pay up front for a year or two, and that is obviously not too pricey. I don't care where the company is located, as long as it's trustworthy and solid, meaning it's not likely to go out of business next month. Does anybody have good recommendations? Preferably from your own, personal, good experience. I have looked at CVSDude / Codesion and while they are certainly great, they don't offer Redmine of course, and seem to be aiming mainly at bigger organizations. What I would need: 2-5 gigs of space minimum, freely distributable between SVN and Redmine attachments; an unlimited number of Subversion projects; access control (team members / checkout-only accounts / etc.) - I don't mind configuring the SVN settings on a per-file basis myself; the ability to map a custom domain to the package that is hosted elsewhere; frequent backups and access to those backups through FTP or other means. I have been running my own virtual server for this until now, but I don't want the hassle, especially on the security side, while I may not always have the internet connection to fix problems that may come up.

    Read the article

  • PS3 controller -> PC -> emulators -> TV

    - by abrereton
    I'm researching a media PC for the living room. Playing videos, audio and streaming Internet is straightforward enough. I would also like to run a gaming console system, and I was wondering if anyone has any thoughts on this. So far I've discovered that a PS3 controller (thankfully it uses USB and Bluetooth) can be connected to a PC. I've also found that MAME, MESS and PCSX2 are all the emulators I need (I can even emulate a TI-83 calculator with MESS). These emulators can re-map keys, so for example I can map the Nintendo A button to the PS3 X button, or the SNES keypad to the PS3 keypad or the analog stick. There are also front-ends to these emulators which can do fancy things like image scaling, anti-aliasing and double-buffering to improve the image quality of an 8-bit Mario on a 50 inch plasma. My setup would be this: PS3 controller connecting over Bluetooth to the PC; PC with Windows, PS3 controller drivers and all my emulators; network drive with all my ROMs; PC connected to the TV via HDMI; TV playing Super Mario Kart. Does this sound feasible? Does anyone have experience of doing anything like this? Is this a good idea or should I grow up and stop living in the past?

    Read the article

  • SharePoint extranet security concerns, am I right to be worried?

    - by LukeR
    We are currently running MOSS 2007 internally, and have been doing so for about 12 months with no major issues. There has now been a request from management to provide access from the internet for small groups (initially) which are composed of members from other community organisations like ours - committees and the like. My first reaction when presented with this request was not joy; however, I'd like to make sure the apprehension is warranted. I have read a few docs on TechNet about security hardening with regard to SharePoint, but I'm interested to know what others have done. I've spoken with another organisation who has already implemented something similar, and they have essentially port-forwarded from the internet to their internal production MOSS server. I don't really like the sound of this. Is it advisable/necessary to run a DMZ-type configuration, with a separate web front-end on a contained network segment? Does that even offer me any greater security than their setup? Some of the configurations from a TechNet doc aren't really feasible, given our current network budget. I've already made my concerns known to management, but it appears it will go ahead in some form or another. I'm tempted to run a completely isolated, separate install just for these types of users. Should I even be concerned about it? Any thoughts or comments would be most welcome at this point.

    Read the article

  • chrooted sftp user with write permissions to /var/www

    - by matthew
    I am getting confused about this setup that I am trying to deploy. I hope some of you folks can lend me a hand: much, much appreciated. Background info: the server is Debian 6.0, ext3, with Apache2/SSL and Nginx at the front as a reverse proxy. I need to provide sftp access to the Apache root directory (/var/www), making sure that the sftp user is chrooted to that path with RWX permissions, all without modifying any default permission in /var/www. drwxr-xr-x 9 root root 4096 Nov 4 22:46 www Inside /var/www: -rw-r----- 1 www-data www-data 177 Mar 11 2012 file1 drwxr-x--- 6 www-data www-data 4096 Sep 10 2012 dir1 drwxr-xr-x 7 www-data www-data 4096 Sep 28 2012 dir2 -rw------- 1 root root 19 Apr 6 2012 file2 -rw------- 1 root root 3548528 Sep 28 2012 file3 drwxr-x--- 6 www-data www-data 4096 Aug 22 00:11 dir3 drwxr-x--- 5 www-data www-data 4096 Jul 15 2012 dir4 drwxr-x--- 2 www-data www-data 536576 Nov 24 2012 dir5 drwxr-x--- 2 www-data www-data 4096 Nov 5 00:00 dir6 drwxr-x--- 2 www-data www-data 4096 Nov 4 13:24 dir7 What I have tried: created a new group secureftp; created a new sftp user, joined to the secureftp and www-data groups, with a nologin shell and home directory /; edited sshd_config with Subsystem sftp internal-sftp, AllowTcpForwarding no, Match Group secureftp, ChrootDirectory /var/www, ForceCommand internal-sftp (laid out below). I can log in with the sftp user and list files, but no write action is allowed. The sftp user is in the www-data group, but permissions in /var/www are read / read+x for the group bit, so... it doesn't work. I've also tried ACLs, but as I apply ACL RWX permissions for the sftp user to /var/www (dirs and files recursively), it changes the Unix permissions as well, which is what I don't want. What can I do here? I was thinking I could enable the user www-data to log in via sftp, so that it would be able to modify files/dirs that www-data owns in /var/www. But for some reason I think this would be a stupid move security-wise.
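
    Laid out as it would appear in sshd_config, the Match block from the question looks like the sketch below. Note that OpenSSH requires the ChrootDirectory itself (and every directory above it) to be owned by root and not group- or world-writable, which is why write access inside the chroot usually has to come from a writable subdirectory, a bind mount, or ACLs rather than from loosening /var/www itself:

      Subsystem sftp internal-sftp

      Match Group secureftp
          ChrootDirectory /var/www
          ForceCommand internal-sftp
          AllowTcpForwarding no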

    Read the article

  • How do I repair my Logitech Anywhere MX?

    - by Stefano Palazzo
    My Anywhere Mouse has got mushy mouse button syndrome. That is, the left mouse button feels a little bit soft, and it easily double-clicks and lets go when I drag something. Before I repair it at home, rather than bringing it to the store (I kind of need it; it's the only one I have), I'd like to know exactly what I'm doing. It'd be too bad if I tried to repair it, voided the warranty and didn't succeed. I'm guessing there are screws to open it under the rubber pads, and I suppose I can take those off without breaking them and put them back on without bending them. How is this mouse held together, and what's the safest way to open it? Once I have it open, will I be able to fix the problem? What's causing the mushy mouse button? Here's what I know so far: it might be the switch itself that's broken, in which case I shouldn't open it (I can't get a replacement, so voiding the warranty to "have a look" seems pointless). If there are screws underneath the rubber pads, they're only on the 'front'; the back two thirds of the mouse are all battery cover. There's nothing I can see under the batteries either. In the mouse I had before this one, there were sort of springy things connecting the actual button with the switch soldered to the board. They were just lying inside a bit of plastic, and I could swap the left and right ones easily. If repairing it is more difficult, transferring the problem to the right mouse button would be a very good start.

    Read the article

  • New build won't boot: some fans turning, no beep or video

    - by Dave
    When my new system is powered up, the case fan and power supply fans turn fine. The CPU fan twitches but never gets going, although I've heard that with AMDs and Gigabyte motherboards that is not necessarily a problem. The hard drive is spinning. However, there is absolutely no indication that anything else is happening. The motherboard, as far as I can tell, does not have an internal speaker, but I harvested one from another machine and plugged it in and still get no beeps at all. The monitor screen stays black, on both the integrated VGA and DVI. This is a brand new build, and has never successfully booted. My parts are: AMD Athlon II X2 245 Regor 2.9GHz Socket AM3 65W Dual-Core Processor Model ADX245OCGQBOX (includes CPU cooler); GIGABYTE GA-MA785GPMT-UD2H AM3 AMD 785G HDMI Micro ATX AMD Motherboard - Retail; G.SKILL Ripjaws Series 4GB (2 x 2GB) 240-Pin DDR3 SDRAM DDR3 1333 (PC3 10666) Desktop Memory Model F3-10666CL8D-4GBRM - Retail; CORSAIR CMPSU-400CX 400W ATX12V V2.2 80 PLUS Certified Compatible with Core i7 Power Supply - Retail; SAMSUNG Spinpoint F3 HD502HJ 500GB 7200 RPM SATA 3.0Gb/s 3.5" Internal Hard Drive - Bare Drive; COOLER MASTER Elite 341 RC-341C-KKN1-GP Black Steel MicroATX Mid Tower Computer Case - Retail. I also have a DVD burner, but it acts the same whether that is plugged in or not. I'm using the onboard video. What I've tried so far: I've switched power supplies, with no difference. I've tried different monitors (all of which work on other machines) with no difference. I have tried putting in one memory module at a time, with no difference. I have tried the absolute minimum I can think of (power supply into motherboard, power button ONLY plugged into the front panel, CPU fan plugged in), with no difference. I appreciate any ideas anyone might have. Do I need to RMA the motherboard? This is my first build, so there might be something obvious. I was very careful about static during assembly; I'm confident nothing was zapped.

    Read the article

  • How do I stop VLC from stealing my volume buttons?

    - by MGOwen
    When I press the volume buttons on my laptop, usually the system volume is changed. However, when I do this with VLC running, it "steals" the presses and adjusts its own "volume" instead. The system volume is also changed. I can't find any way to turn this off in VLC. Does anyone know? Update: Sorry, some more details I should have included originally: VLC VERSION: 1.1.4 (and a few previous releases, back to about 1.1.0 or so, I think). OS: Win Vista Pro 32. HARDWARE: Dell 1720 laptop (the volume buttons are little buttons on the front of the unit; they may work something like "media" keyboard volume buttons). Update: The buttons seem to map to Ctrl+Alt+b and Ctrl+Alt+c (according to the shortcut key box in Windows shortcut properties), but the VLC advanced preferences hotkeys screen doesn't list these as the keys it uses for volume. I changed it so there are no volume hotkeys in VLC settings - no luck, it still steals the presses and adjusts the volume. Also, pressing Ctrl+Alt+b or c doesn't change my system volume, so who knows what Windows or VLC are doing to recognise those volume buttons. :(

    Read the article

  • Why does Exchange 2003 silently reject emails with large attachments?

    - by Cypher
    Our environment: Exchange Server 2003 Standard, single instance, running on Windows Server 2003 Standard, configured not to send/receive mail with attachments larger than 10 MB. NDRs are not enabled. The issue: when an external sender sends an email with an attachment larger than 10 MB, Exchange, as configured, does not receive the message. However, the sender of that message does not receive any notification from his own mail server that the message could not be delivered due to attachment size. Yet if an external user tries to send an email to a non-existent user, they do receive a message from their mail server indicating that the user does not exist. Why is that, and is there anything I can do about it? It would be nice if the sender received notification that the attachment file size exceeds our limits and their message was never received... Update: The Exchange server has a SpamAssassin box in front of it... could that have something to do with it? Here is one of the last lines from SpamAssassin's logs when searching for my test e-mails: mail postfix/smtp[19133]: 2B80917758: to=, relay=10.0.0.8[10.0.0.8]:25, delay=4.3, delays=2.6/0/0/1.7, dsn=2.6.0, status=sent (250 2.6.0 Queued mail for delivery) My assumption is that SpamAssassin thinks the message is OK and is forwarding it on to Exchange. Update: I've verified that Exchange is receiving the message and generating an NDR. However, delivery of NDRs is disabled to prevent backscatter. Is there something I can do to get Exchange to send a bounce message to the sending mail server (or verify that the message is being sent), so the sending mail server can notify its sender of the bounce?
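
    If the goal is simply that the sender sees the failure, one hedged option is to enforce the size limit on the postfix/SpamAssassin relay in front of Exchange, so oversized mail is rejected during the SMTP conversation instead of being accepted and silently dropped downstream (the value shown is an assumed 10 MB expressed in bytes):

      # /etc/postfix/main.cf on the relay in front of Exchange (sketch)
      message_size_limit = 10485760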

    Read the article

  • Windows 7 notebook turn off by itself, how to check if it is due to CPU being too hot?

    - by Jian Lin
    I have a Dell Studio 15 notebook, and it just started turning off by itself yesterday. Could it be that the CPU is too hot? I have had several notebooks before and every one of them I can put them on the bed without any problem. This Dell Studio Notebook, however, seems like have the air / fan outlet pointed outward from the bottom back of the notebook, so I suspect that the air is partially blocked when it is on the bed. Are there Win 7 tools that can monitor the CPU temperature, or will some 3rd party tool be needed? (I try to stick to official tools nowadays). Also, it is running Win 7 Ulitmate, there is actually no utility or background service from Win 7 or from Dell that detects when the temperature is too hot (or 95% near the max), pop out a message box giving a warning and say that the computer will go into sleep mode in 1 minute, but instead just turn off the computer by brute force (cutting out the power) right then and there? Update: it turned off right in front of my eyes -- it is not doing any windows update or anything. just normal use and jooooop, it turned off.

    Read the article

  • Keepalived for more than 20 virtual addresses

    - by cvaldemar
    I have set up keepalived on two Debian machines for high availability, but I've run into the maximum number of virtual IPs I can assign to my vrrp_instance. How would I go about configuring and failing over 20+ virtual IPs? This is the (very simple) setup: LB01: 10.200.85.1 LB02: 10.200.85.2 Virtual IPs: 10.200.85.100 - 10.200.85.200 Each machine is also running Apache (later Nginx) binding on the virtual IPs for SSL client certificate termination and proxying to backend webservers. The reason I need so many VIPs is the inability to use VirtualHost on HTTPS. This is my keepalived.conf: vrrp_script chk_apache2 { script "killall -0 apache2" interval 2 weight 2 } vrrp_instance VI_1 { interface eth0 state MASTER virtual_router_id 51 priority 101 virtual_ipaddress { 10.200.85.100 . . all the way to . 10.200.85.200 } } An identical configuration is on the BACKUP machine, and it's working fine, but only up to the 20th IP. I have found a HOWTO discussing this problem. Basically, they suggest having just one VIP and routing all traffic "via" this one IP, and "all will be well". Is this a good approach? I'm running pfSense firewalls in front of the machines. Quote from the above link: ip route add $VNET/N via $VIP or route add $VNET netmask w.x.y.z gw $VIP Thanks in advance. EDIT: @David Schwartz said it would make sense to add a route, so I tried adding a static route to the pfSense firewall, but that didn't work as I expected it would. pfSense route: Interface: LAN Destination network: 10.200.85.200/32 (virtual IP) Gateway: 10.200.85.100 (floating virtual IP) Description: Route to VIP .100 I also made sure I had packet forwarding enabled on my hosts: $ cat /etc/sysctl.conf net.ipv4.ip_forward=1 net.ipv4.ip_nonlocal_bind=1 Am I doing this wrong? I also removed all VIPs from keepalived.conf so it only fails over 10.200.85.100.
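
    One way around the per-instance limit, if your keepalived build supports it, is virtual_ipaddress_excluded: addresses listed there fail over together with the instance but are not carried inside the VRRP advertisement packets, which is where the roughly 20-address ceiling comes from. A sketch (addresses abbreviated):

      vrrp_instance VI_1 {
          interface eth0
          state MASTER
          virtual_router_id 51
          priority 101
          virtual_ipaddress {
              10.200.85.100              # advertised in the VRRP packets
          }
          virtual_ipaddress_excluded {
              10.200.85.101
              10.200.85.102
              # ... and so on up to 10.200.85.200
          }
      }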

    Read the article

  • h264 inside FLV container vs. MP4 container?

    - by Gotys
    I am developing a tube site, and currently having issues with h264 format . By looking at youtube, I noticed they are putting their hi-def videos into mp4 container, so logically I did the same. Next, I installed mod_h264_streaming for lighttpd to make streaming and timeline-scrubbing work. Problem is, that large files (500mb+ at somewhat high resolution) take for EVER to even start buffering ( I read the flowplayer or other flash players need to download metadata first) . I moved the xmov atom to the front of the file with MP4Box (i tried qt-quickstart too) , and the problem didn't go away. Next I read online I need to interleave audio tracks, so I did that too. No change in slowness. So I tried putting the same exact h264 movie into an FLV container, and the playback buffering starts almost instantly - no slowness. So what am I missing here? Why would I choose MP4 container with mod_264_streaming module , which seems super-slow over a regular FLV container with lighttpd's built-in mod_flv_streaming ? Obviously many websites pick mp4 container , but I fail to understand why ? And as a side question - I tried using HTML5's VIDEO tag to try the same h264 MP4 movie, and the scrubbing is LIGHTING FAST! I looked into lighttpd's log file, and i noticed taht Flash Players append video.mp4?start=234 each time timeline is scrubbed, wheres HTML5's video tag does no such thing . Is this some sort of limitations of Flash ? Why Can't flash streaming be same fast as HTML5 streaming? Thanks to ALL who can help. I very much appreciate this community.

    Read the article

  • Exchange 2010 CAS Removal == Broken???

    - by Doug
    Hi there, I recently upgraded to exchange 2010 and have a setup with 2 of my servers running CAS roles - EXCH01, EXCH02 EXCH02 just happens to also have a mailbox role where a lot of the users sit EXCH01 is my front facing CAS server, and is facing the net with SSL etc and incoming mail moving through it as a hub transport layer server as well. As i was trying to lean things out in my VM environment i removed the CAS role from EXCH02 and all hell broke loose. All the mail users that have a mailbox on EXCH02 had their homeMTA set to a deleted items folder in AD and so did their msExchHomeServer properties. After a complete battle i manually fixed these issues to the oldvalues, and in the mean time reinstalled CAS on EXCH02 (management was going nuts with out OUTLOOK working so i just put things back the way they were in a hurry.) I must add as a strange thing on the side, that before i reset these to point at EXCH02 i tried EXCH01 and it failed. I still want to remove the CAS role from EXCH02 as it should really not have it (error on install/planning on my part) and would have thought that this would not cause the issues it did, i assumed that the fact that there was another CAS server in the admin group all would be good. Was i wrong in my assumption? and what can i do to complete this successfully the second time round? Do i need to rehome all the mailboxes to the CAS server? is this a bug in the role uninstall?

    Read the article

  • How can I proxy multiple LDAP servers, and still have grouping of users on the proxy?

    - by Chris
    I have 2 problems that I'm hoping to find a common solution to. First, I need to find a way to have multiple LDAP servers (Windows AD's across multiple domains) feed into a single source for authentication. This is also needed to get applications that can't natively talk to more than one LDAP server to work. I've read this can be done with Open LDAP. Are there other solutions? Second, I need to be able to add those users to groups without being able to make any changes to the LDAP servers I'm proxying. Lastly, this all needs to work on Windows Server 2003/2008. I work for a very large organization, and to create multiple groups and have large numbers of users added to, moved between, and removed from them is no small task. This normally requires tons of paperwork and a lot of time. Time is the one thing we don't normally have; dodging the paperwork is just a plus. I have very limited experience in all this, so I'm not even sure what I'm asking will make sense. Atlassian Crowd comes close to what we need, but falls short of having it's own LDAP front end. Can anyone provide any advice or product names? Thanks for any help you can provide.

    Read the article

  • Nginx : Proper use of limit_req_zone and limit_req

    - by xperator
    I have 2 website running on VPS. Their purpose is sharing music files and publishing news. Both of them use wordpress. What I am trying is that I want to prevent little hackers from flooding the webserver and putting stress on the server to make it crash. The problem is that after using limit_req_zone and limit_req my website became very slow. Browsing Wordpress control panel takes a long long time. I tried changing values but it didn't improve much. I guess the problem is Wordpress because it's the only script I am using on both front and back end. Here is the last setting which seems to be more responsive than others : limit_req_zone $binary_remote_addr zone=flood:5m rate=10r/m; location ~ \.php$ { limit_req zone=flood burst=100 nodelay; } What are the optimal values that should be used in my case (wp) ? I want the website have it's normal behavior, On the other hand stopping lifeless people from flooding. Another question, Is it safe and enough to use limit_req only on php files ?

    Read the article

  • nginx reverse proxy to apache mod_wsgi doesn't work

    - by user11243
    I'm trying to run a Django site with Apache mod_wsgi, with nginx as the front end reverse-proxying into Apache. In my Apache ports.conf file: NameVirtualHost 192.168.0.1:7000 Listen 192.168.0.1:7000 <VirtualHost 192.168.0.1:7000> DocumentRoot /var/apps/example/ ServerName example.com WSGIDaemonProcess example WSGIProcessGroup example Alias /m/ /var/apps/example/forum/skins/ Alias /upfiles/ /var/apps/example/forum/upfiles/ <Directory /var/apps/example/forum/skins> Order deny,allow Allow from all </Directory> WSGIScriptAlias / /var/apps/example/django.wsgi </VirtualHost> In my nginx config: server { listen 80; server_name example.com; location / { include /usr/local/nginx/conf/proxy.conf; proxy_pass http://192.168.0.1:7000; proxy_redirect default; root /var/apps/example/forum/skins/; } #error_page 404 /404.html; # redirect server error pages to the static page /50x.html error_page 500 502 503 504 /50x.html; location = /50x.html { root html; } } After restarting both Apache and nginx, nothing works: example.com simply hangs or serves the index.html in my /var/www/ folder. I'd appreciate any advice to point me in the right direction. I've tried several tutorials online to no avail.
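
    One detail worth checking in this kind of setup: Apache is doing name-based virtual hosting, so it only picks the example.com vhost if nginx forwards the original Host header (what the included proxy.conf sets is unknown here, so treating this as the fault is an assumption). A sketch of the proxy block with the usual headers:

      location / {
          proxy_pass http://192.168.0.1:7000;
          proxy_set_header Host $host;                # let Apache match its NameVirtualHost
          proxy_set_header X-Real-IP $remote_addr;
          proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
      }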

    Read the article

  • limiting connections from tomcat to IIS - proxy? iptables?

    - by Chris Phillips
    Howdy, I've got a webapp on Tomcat 6 which is connecting to an M$ PlayReady DRM instance on IIS 6.0. The performance is seen to be best when we benchmark (using ab) the DRM service with 25 concurrent connections, which gives about 250 requests per second, which is ace. Higher concurrent connections result in TCP/IP timeouts and other lower-level mess. But there is no way to control how the Tomcat app connects to the service - it's not internally managing a pool of connections etc.; they are all isolated HTTP connections to the server. Ideally I'd like a situation where we can have 25 HTTP 1.1 connections being kept alive permanently from Tomcat and requesting the licenses through this static pool of connections, which I think would give the best performance. But this is not in the code, so I was looking for a way to possibly simulate this at the Linux level. I was thinking that iptables connlimit might be able to gracefully handle these connections, but whilst it could limit, it'd probably still annoy the app. What about a proxy? nginx (or possibly squid) seems potentially appealing to run on the Tomcat server and hit on localhost, as we might want to add additional DRM servers under load balancing anyway. Could this take 100 incoming connections from Tomcat, accept them all and proxy over to the IIS server in a more respectful manner? Any other angles? EDIT - looking over mod_proxy for Apache, which we are already using for conventional purposes on an Apache instance in front of this Tomcat instance, might be ideal. I can set a max value on the ProxyPass to only allow 25 connections and keep them alive permanently. Is that my answer?
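
    The mod_proxy idea at the end maps onto the connection-pool parameters of ProxyPass; a sketch under assumed names (note the pool is maintained per Apache worker process, so the effective total depends on the MPM configuration):

      # httpd.conf sketch - hostname and path are placeholders
      ProxyPass        /playready http://drm-iis.internal/playready max=25 keepalive=On ttl=120
      ProxyPassReverse /playready http://drm-iis.internal/playready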

    Read the article

  • Asus K50I sound issues

    - by MrStatic
    I have an Asus K50IJ (Best Buy) laptop and have issues with my sound. The speakers themselves work fine, but when I plug into the headphone jack it auto-mutes the front channel and no sound comes out of either the speakers or the headphones. If I then unmute the channel I get sound from both the speakers and the headphones. alsamixer shows the Headphone channel as all grayed out. In /etc/modprobe.d/alsa-base.conf I have tried snd-hda-intel model="asus-laptop" and snd-hda-intel model="asus". In Sound Preferences I have gone to output and changed the Connector to 'Analog Headphones'; that results in no sound from either speakers or headphones. As one forum suggested, I tried commenting out blacklist snd_pcsp in the blacklist.conf, which resulted in no change. lspci -v shows: 00:1b.0 Audio device: Intel Corporation 82801I (ICH9 Family) HD Audio Controller (rev 03) Subsystem: Santa Cruz Operation Device 1043 Flags: bus master, fast devsel, latency 0, IRQ 45 Memory at fe9f4000 (64-bit, non-prefetchable) [size=16K] Capabilities: [50] Power Management version 2 Capabilities: [60] MSI: Enable+ Count=1/1 Maskable- 64bit+ Capabilities: [70] Express Root Complex Integrated Endpoint, MSI 00 Capabilities: [100] Virtual Channel Capabilities: [130] Root Complex Link Kernel driver in use: HDA Intel Kernel modules: snd-hda-intel

    Read the article

  • IIS 7.5 returning 404 for unknown host names

    - by WaldenL
    This just doesn't seem correct to me, so I'm looking for someone to tell me how I've misconfigured IIS... The configuration is IIS 7.5 (2008 R2), without SP1. I have IIS 7.5 configured w/several sites. ALL sites have hostnames defined in the bindings; there is NO site w/out a hostname. However, if I request an unknown hostname from the server, IIS (technically Microsoft-HTTPAPI/2.0) returns a 404 error, not a 400 error. I would expect a 400 (or some other major error) rather than a lowly 404. This causes a problem when I have nginx in front of multiple IISs and want to stop a site so nginx takes it out of rotation. Since IIS still returns a 404 for the request even when there is no active site for that name, nginx doesn't know the server is dead. NB: IIS returns the 404 regardless of whether the site exists but is stopped, or doesn't exist at all. Thoughts? Solutions? -- Additional info: OK, I added a site on a port other than 80 (5000) and then on a connection to that port asked for a site that doesn't exist, and I get the expected error 400 (Invalid hostname). So, while IIS isn't listening for generic (no-hostname) connections on port 80, it would seem that something is. Any ideas how to get HTTP.sys to dump the list of what it's listening for?
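
    To answer the side question about dumping what HTTP.sys has registered, the netsh http context can list it from an elevated prompt; the first command shows the URL groups and request queues currently being served, the second shows the reserved URL namespaces:

      netsh http show servicestate
      netsh http show urlacl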

    Read the article
