Search Results

Search found 23782 results on 952 pages for 'claims based authorizatio'.

Page 730/952 | < Previous Page | 726 727 728 729 730 731 732 733 734 735 736 737  | Next Page >

  • Cisco ASA user authentication options - OpenID, public RSA sig, others?

    - by Ryan
    My organization has a Cisco ASA 5510 which I have made act as a firewall/gateway for one of our offices. Most resources a remote user would come looking for exist inside. I've implemented the usual deal - basic inside networks with outbound NAT, one primary outside interface with some secondary public IPs in the PAT pool for public-facing services, a couple site-to-site IPSec links to other branches, etc. - and I'm working now on VPN. I have the WebVPN (clientless SSL VPN) working and even traversing the site-to-site links. At the moment I'm leaving a legacy OpenVPN AS in place for thick client VPN. What I would like to do is standardize on an authentication method for all VPN then switch to the Cisco's IPSec thick VPN server. I'm trying to figure out what's really possible for authentication for these VPN users (thick client and clientless). My organization uses Google Apps and we already use dotnetopenauth to authenticate users for a couple internal services. I'd like to be able to do the same thing for thin and thick VPN. Alternatively a signature-based solution using RSA public keypairs (ssh-keygen type) would be useful to identify user@hardware. I'm trying to get away from legacy username/password auth especially if it's internal to the Cisco (just another password set to manage and for users to forget). I know I can map against an existing LDAP server but we have LDAP accounts created for only about 10% of the user base (mostly developers for Linux shell access). I guess what I'm looking for is a piece of middleware which appears to the Cisco as an LDAP server but will interface with the user's existing OpenID identity. Nothing I've seen in the Cisco suggests it can do this natively. But RSA public keys would be a runner-up, and much much better than standalone or even LDAP auth. What's really practical here?

    Read the article

  • Looking for a recommendation on measuring a high availability app that is using a CDN.

    - by T Reddy
    I work for a Fortune 500 company that struggles with accurately measuring performance and availability for high-availability applications (i.e., apps that are up 99.5% with 5-second page-to-page navigation). We factor both scheduled and unscheduled downtime into that availability number. However, we recently added a CDN into the mix, which complicates our metrics a bit. The CDN now handles about 75% of our traffic, while sending the remainder to our own servers.

    We attempt to measure what we call a "true user experience" (i.e., our testing scripts emulate a typical user clicking through the application). These monitoring scripts sit outside of our network, which means we're hitting the CDN about 75% of the time. Management has decided that we take the worst-case scenario to measure availability: if our origin servers are having problems but the CDN is serving content just fine, we still take a hit on availability, and the same is true the other way around. My thought is that as long as the "user experience" is successful, we should not unnecessarily punish ourselves. After all, a CDN is there to improve performance and availability!

    I'm just wondering whether anyone knows how other Fortune 500 companies calculate their availability numbers. I look at apple.com, for instance, as an example of a storefront that uses a CDN and never seems to be down (unless there is about to be a major product announcement). It would be great to have some hard, factual data, because I don't believe we need to unnecessarily hurt ourselves on these metrics; we are making business decisions based on these numbers. I can say, however, that since these metrics are visible to management, issues get addressed and resolved pretty fast (read: we cut through the red tape pretty quickly). Unfortunately, as a developer, I don't want management to think the application is up or down because some external factor (i.e., the CDN) is influencing the numbers. Thoughts? A rough worst-case vs. traffic-weighted comparison is sketched below. (I mistakenly posted this question on StackOverflow; sorry in advance for the cross-post.)

    Read the article
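
    For what it's worth, the gap between the two policies is easy to put numbers on. A rough shell sketch (not from the question; the 99.90%/98.50% availability figures are invented placeholders, and the 75/25 split is the one described above):

        awk -v cdn=0.9990 -v origin=0.9850 -v share=0.75 'BEGIN {
            worst    = (cdn < origin) ? cdn : origin          # management's worst-case rule
            weighted = share * cdn + (1 - share) * origin     # traffic-weighted "user experience"
            printf "worst-case availability:       %.4f\n", worst
            printf "traffic-weighted availability: %.4f\n", weighted
        }'

    Feeding real monthly figures for the CDN and the origin into the two variables shows how much the worst-case rule understates what users actually experienced.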

  • App for family tech support tracking?

    - by slothbear
    I do tech support for several groups within my family. They usually have a document or notebook of questions for me. They often record my advice, but then ask me again later. Some communications are by email (nice record for me, although they never think to search). Some sessions are in person, usually with a followup email from me for the record. Which they forget about. I'm not trying to force them to be more 'professional', but I would like to streamline my support a bit, and give them a place to look for past answers. Some of them would like a standard place like that, rather than reasking me the same questions. The solution has to be free. And web-based, although email-in for questions would be great. I'll be doing most (all?) updating of the system. Mobile/iPhone access would be nice, but not required. Ideally, a system with topics and responses would be good, but I'd need a way to promote one response as 'the answer'.

    Read the article

  • How do I hook into Tar with BASH?

    - by orb
    Long Story Short: I am working with tar archives that contain PNG images in base64 encoding. I would like to use bash (or whatever else works) to hook into the extraction step of tar so that the PNG images are decoded from base64 back to ordinary binary PNGs after the files are unpacked. A simple cat $input-file | base64 -d > $output-file successfully decodes an image. Is there a way I can hook into tar -xf so that users do not have to do any (or only minimal) extra work to decode the images? In the GNU tar documentation (http://www.gnu.org/software/tar/manual/html_chapter/Backups.html#SEC97) I found that there are in fact variables reserved to hold the names of functions to be hooked into various moments of tar's execution. However, the documentation explains that these variables, along with other variables that configure tar, live in a file named backup-specs. Unfortunately, the path to this file is not given, and running sudo find / -name backup-specs tells me that the file is not present on my Ubuntu 13.04 system. (A wrapper-script sketch follows below.)

    Background information not included in the Long Story Short: I have been working on a browser-based (WebGL) particle effect creation application (http://www.particleeffect.org), (https://github.com/cgrabowski/webgl-particle-effect-editor), (https://github.com/cgrabowski/webgl-particle-effect). I have begun to write a client-side-only solution for saving and loading effect data as a tar archive. However, since client-side JavaScript has limited ability to process binary data, the images used as textures in the effect are saved with base64 encoding. I have been able to implement saving effect data as a tar archive (haven't pushed that to GitHub yet), but the images in that archive cannot be manipulated unless they are decoded from base64.

    Read the article
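
    As far as I can tell, the backup-specs variables in that chapter belong to GNU tar's backup/restore scripts rather than to plain tar -xf, which would explain why the file doesn't exist on a stock Ubuntu system. One pragmatic alternative is a small wrapper script around tar. A rough sketch (the script name is made up, and it assumes every extracted .png is base64 text that should be decoded in place):

        #!/bin/bash
        # untar-decode.sh ARCHIVE [DEST] - extract, then decode base64 PNGs in place
        set -e
        archive="$1"
        dest="${2:-.}"

        tar -xf "$archive" -C "$dest"

        # turn each extracted base64 .png back into a binary PNG
        find "$dest" -name '*.png' -print0 | while IFS= read -r -d '' f; do
            base64 -d "$f" > "$f.tmp" && mv "$f.tmp" "$f"
        done

    Users then run the wrapper instead of tar -xf directly, which keeps the extra work down to a single command.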

  • Make case-sensitive SMB share case-insensitive

    - by fungs
    I am running a legacy XP app that I would like to move onto a network share. It is very simple and works in theory, but the server providing the share is based on Linux (which I cannot configure) and the software does not work correctly, it seems because it is programmed case-insensitively. From what I've researched, network shares behave like the filesystem they use underneath; this is normal. Unfortunately I cannot fix the software myself. Is there any way to turn case-sensitivity into case-insensitivity for a Windows network drive on the client side? I found two approaches: first, something like icasefile (http://wnd.katei.fi/icasefile/) that wraps around the program and intercepts the file I/O, but this is for UNIX only. Secondly, a proxy virtual file system (e.g. something using Dokan). Unfortunately I couldn't find any suitable filesystem; the only possibility would be to put a case-insensitive filesystem in an image file and put that on the share using, for example, ImDisk (http://www.ltr-data.se/opencode.html/#ImDisk). Do you have any better ideas?

    Read the article

  • iptables, blocking large numbers of IP Addresses

    - by Twirrim
    I'm looking to block IP addresses in a relatively automated fashion if they look to be 'screen scraping' content from websites that we host. In the past this was achieved by some ingenious Perl scripts and OpenBSD's pf. pf is great in that you can feed it nice tables of IP addresses and it will efficiently handle blocking based on them. However, for various reasons (before my time) the decision was made to switch to CentOS. iptables doesn't natively provide the ability to block large numbers of addresses (I'm told it wasn't unusual to be blocking 5000+), and I'm a bit cautious about adding that many rules to iptables. ipt_recent would be awesome for doing this, and it also provides a lot of flexibility for just severely slowing down access, but there is a bug in the CentOS kernel that is stopping me from using it (reported, but awaiting a fix). Using ipset would entail compiling a more up-to-date version of iptables than ships with CentOS, which, whilst I'm perfectly capable of doing, I'd rather avoid from a patching, security and consistency perspective. Other than those two, nfblock looks like a reasonable alternative. Is anyone aware of other ways of achieving this? Are my concerns about several thousand IP addresses in iptables as individual rules unfounded? (An ipset sketch follows below for reference.)

    Read the article
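
    For reference, this is roughly what the ipset route looks like once a new enough ipset/iptables pair is available (the syntax shown is the newer ipset form; older releases use ipset -N and a --set match instead, and the example address is just a placeholder):

        # one set, one rule; membership tests are hash lookups, not a linear rule scan
        ipset create scrapers hash:ip
        iptables -I INPUT -m set --match-set scrapers src -j DROP

        # adding and removing offenders never touches the iptables ruleset itself
        ipset add scrapers 203.0.113.45
        ipset del scrapers 203.0.113.45

    That hash-based lookup is what makes several thousand entries cheap, in contrast to several thousand individual iptables rules, which are evaluated sequentially.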

  • Open-source LAMP package for simple accounting and receiving?

    - by user26664
    Hello. I am involved with a computer-based charity where we take donation of old equipment, often recycle it, mostly rehabilitate it and make it available through grants and 'adoptions', and sell some items. What we're looking for is a LAMP package that can handle our records of donations of equipment, print receipts for donators, and also track our thrift store sales with receipts. Donators will want receipts of their donations for tax deduction purposes. We'll want to print reports of incoming items from time to time, say monthly and yearly. For the thrift store, we'll also need receipts for that, reports of cash, especially for reconciling cash drops, and also reports of items sold from time to time. I'm thinking this might be a single package, but it might be two. We don't want to track our shop inventory with either of these programs -- that's another program/project. We just need to know what was donated and what was sold. It must be open source, and ideally we'd like it to run on LAMP -- Linux, Apache, MySQL, PHP, but we will consider other open-source platforms.

    Read the article

  • Probability of Blade Chassis Failure

    - by ChrisZZ
    In my organisation we are thinking about buying blade servers instead of rack servers. Of course the technology vendors also make them sound very attractive. A concern that I read very often in different forums is that there is a theoretical possibility of the blade chassis going down, which would in consequence take all the blades down, because of the shared infrastructure. My reaction to that risk would be to have redundancy and buy two chassis instead of one (very costly, of course). Some people (including e.g. HP vendors) try to convince us that the chassis is very, very unlikely to fail because of its many redundancies (redundant power supplies, etc.). Another concern on my side is that if something does go down, spare parts might be required, which is difficult in our location (Ethiopia). So I would ask experienced administrators who have managed blade servers: what is your experience? Do they go down as a whole, and which parts of the shared infrastructure are the ones that tend to fail? The question could be extended to shared storage: again I would say that we need two storage units instead of only one, and again the vendors say these things are so rock solid that no failure is expected. Well, I can hardly believe that such critical infrastructure can be very reliable without redundancy, but maybe you can tell me whether you have successful blade-based projects that work without redundancy in their core parts (chassis, storage, ...). At the moment we are looking at HP, as IBM looks much too expensive. (A small redundancy calculation is sketched below.) Thanks a lot, best regards, Christian

    Read the article
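
    As a back-of-the-envelope way to frame the two-chassis question, here is the simple independence math (the 99.9% figure is an invented placeholder, and real chassis failures are not perfectly independent, so treat this as an upper bound):

        awk -v a=0.999 'BEGIN {
            printf "single chassis availability:         %.4f\n", a
            printf "two independent chassis (either up): %.6f\n", 1 - (1 - a) * (1 - a)
        }'

    The spare-parts concern mentioned above arguably matters more than the raw probability: a second chassis also doubles as an on-site source of spare PSUs, fans and interconnect modules.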

  • A-2-Z web hosting on Amazon AWS

    - by JDelage
    All, I am studying web development, and one of my classes is project-based. We have to build a functional site that demonstrates our understanding of HTML, CSS, JavaScript, PHP, MySQL, and potentially Ajax or some other web component. For the project we can use a local server using WampServer and basically build the site entirely on our laptops. If I have time, I would like to create a real site, and I thought it would be a good way to familiarize myself with Amazon's AWS services. So if I purchase a domain name, can I rely on AWS to host the site from A to Z? I understand I can use AWS to host content and the database, and to do background computation if needed. What else do I need, and what are the parts that AWS cannot help me with? Second, is there good documentation for a beginner to navigate AWS and learn how to use it (either on Amazon, on some third-party sites, or even a good book, as long as it is up to date)? The ideal documentation would be a tutorial on creating a web site from A to Z on AWS, as detailed as possible. As you can guess, I have limited understanding of the IT issues; I have zero Linux or sysadmin experience, but this is a good opportunity to change that. I hope you can help me. Thank you, JDelage. PS: Please keep the answers AWS-specific. At this point I am only interested in alternative services to the extent that they plug a hole in Amazon's offering.

    Read the article

  • Access keystore on Sun ONE Webserver 6.1 for 2048 bit key length SSL

    - by George Bailey
    We want to generate CSRs with a 2048-bit key length. The browser-based GUI gives us a 1024-bit CSR and I don't know how to change that. It seems that 1024-bit key lengths will no longer be supported by SSL companies (the lower-cost options only support 2048-bit; Thawte, who are much more expensive, say they accept 1024-bit only for one- or two-year certificates, not three). The legacy systems in question are running Sun ONE Webserver 6.1. Upgrading would be time consuming and we would rather not do that right now; we will be phasing these out, but it will take a while, so... Got it!! http://middlewarekb.wordpress.com/2010/06/30/how-to-generate-2048-bit-keypair-using-sun-one-or-iplanet-6-1-servers/ is for the same version of the webserver I am using:

        /opt/SUNWwbsvr/bin/https/admin/bin/certutil -R -s "CN=sub.domain.ext,OU=org unit,O=company name,L=city,ST=spelled state,C=US,E=email" -a -k rsa -g 2048 -v 12 -d /opt/SUNWwbsvr/alias -P https-sub.domain.ext-hostname- -Z SHA1

    Previous efforts edited out.

    Read the article

  • Test an SSH connection from the Windows command line

    - by IguanaMinstrel
    I am looking for a way to test whether an SSH server is available from a Windows host. I found this one-liner, but it requires a Unix/Linux host: ssh -q -o "BatchMode=yes" user@host "echo 2>&1" && echo "UP" || echo "DOWN". Telnet'ing to port 22 works, but that's not really scriptable. I have also played around with Plink, but I haven't found a way to get the functionality of the one-liner above. Does anyone know Plink well enough to make this work? Are there any other Windows-based tools that would work? Please note that the SSH servers in question are behind a corporate firewall and are NOT internet accessible. Arrrg. Figured it out:

        C:\>plink -batch -v user@host
        Looking up host "host"
        Connecting to 10.10.10.10 port 22
        We claim version: SSH-2.0-PuTTY_Release_0.62
        Server version: SSH-2.0-OpenSSH_4.7p1-hpn12v17_q1.217
        Using SSH protocol version 2
        Server supports delayed compression; will try this later
        Doing Diffie-Hellman group exchange
        Doing Diffie-Hellman key exchange with hash SHA-256
        Host key fingerprint is:
        ssh-rsa 1024 aa:aa:aa:aa:aa:aa:aa:aa:aa:aa:aa:aa:aa:aa:aa:aa
        Initialised AES-256 SDCTR client->server encryption
        Initialised HMAC-SHA1 client->server MAC algorithm
        Initialised AES-256 SDCTR server->client encryption
        Initialised HMAC-SHA1 server->client MAC algorithm
        Using username "user".
        Using SSPI from SECUR32.DLL
        Attempting GSSAPI authentication
        GSSAPI authentication initialised
        GSSAPI authentication initialised
        GSSAPI authentication loop finished OK
        Attempting keyboard-interactive authentication
        Disconnected: Unable to authenticate
        C:\>

    Read the article

  • How do I setup routing for 2 companies with different Internet connections on the same LAN?

    - by Clint Miller
    Here's the setup: two companies (A & B) share office space and a LAN. A second ISP is brought in, and company A wants its own Internet connection (ISP A) while company B wants its own Internet connection (ISP B). VLANs are deployed internally to separate the two companies' networks (company A: VLAN 1, company B: VLAN 2, shared VoIP: VLAN 3). With separate VLANs it's simple enough to use separate DHCP servers (or separate scopes on the same server) to hand each company a default gateway pointing at its own Internet connection. Static routes can be created on each gateway for traffic destined for the other company's VLAN or the voice VLAN, so that all nodes are reachable as expected. However, I think this is a form of asymmetrical routing, right? (The path from node A1 to node B1 is not the same as the path back from node B1 to node A1.)

    Can I set up policy-based routing to correct this? In that case, can I assign the same default gateway to every device on all VLANs and create a routing policy on an L3 switch that looks at the source address and forwards traffic to the appropriate next hop? The routing logic I want goes like this: if the destination address is known (traffic destined for a different VLAN), forward the traffic; if the destination address is unknown, forward the traffic to ISP A's gateway if the source address is on VLAN A, or to ISP B's gateway if the source address is on VLAN B. Am I thinking about this problem in the correct way? Is there another way to solve it that I am overlooking? (A source-routing sketch follows below.)

    Read the article
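
    On a Linux router the source-based policy described above would look roughly like this with iproute2 (all subnets and gateways are hypothetical; a vendor L3 switch expresses the same idea with PBR policies or route-maps rather than ip rule):

        # inter-VLAN and VoIP destinations stay on the main table, which is checked first
        ip rule add to 10.0.0.0/8 lookup main priority 100

        # otherwise, pick the uplink by source subnet
        ip route add default via 198.51.100.1 table 101   # ISP A gateway (hypothetical)
        ip route add default via 203.0.113.1  table 102   # ISP B gateway (hypothetical)
        ip rule add from 10.0.1.0/24 lookup 101 priority 200   # company A VLAN
        ip rule add from 10.0.2.0/24 lookup 102 priority 200   # company B VLAN

    Because the "known destination" rule is evaluated before the per-source default routes, traffic between the VLANs stays symmetric while Internet-bound traffic splits by company.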

  • Cross-OS data recovery question, USB drive involved

    - by Moshe
    Here's the story: a MacBook had OS X 10.4 and Windows XP dual-booting using rEFIt. Then the Windows partition got corrupted and wouldn't boot, presumably because of a virus. There were sensitive files there, and those were successfully copied to a USB drive; then 10.5 was installed on the hard drive, formatting the drive in the process. The USB drive's contacts cracked and the data is lost from there unless it can be re-soldered. The issue is that there is too much solder there already. So, how can the data in question be recovered? The files were Microsoft Money files (not the latest version) for the Windows version of the program. Right now, only OS X is installed on the MacBook. Is there a Mac-based program that can recover the Windows data, or am I better off trying to re-solder the drive? Does anyone know how best to re-solder a USB drive more than once, where the first solder is there but detached from the silicon? Also, what format (extension) are Microsoft Money files? In need of help!

    Read the article

  • Two parts: linux startup script to connect to bluetooth and cron to keep it connected

    - by D.R.
    I have a mini Bluetooth keyboard and a Raspberry Pi running a Debian-based distro. I know the MAC address of the keyboard, but for this question let's just use AA:BB:CC:DD:EE:FF. Right now I have to keep a wired keyboard connected as well as the Bluetooth dongle for the mini keyboard, and on the wired keyboard I have to run the following when the device boots up: sudo hidd --connect AA:BB:CC:DD:EE:FF. If the device goes idle for too long, the Bluetooth disconnects and I have to pull out the wired keyboard and retype that same command. What I'm looking for is a way to have that command run at startup and a way to sense when it gets disconnected so that it will automatically reconnect. The annoying thing is that the keyboard has to be in pairing mode (even though it has already been paired) when I run that command, otherwise it tells me the host is down. So perhaps the script needs to prevent it from disconnecting due to inactivity; otherwise I'll have to put it back in pairing mode to reconnect. So to recap (see the watchdog sketch below):

      - A script to connect at startup (I can make sure the keyboard is in pairing mode before turning it on)
      - A script to prevent it from disconnecting (maybe some sort of signal to send to it every 60 seconds or something?)

    Any help with this is greatly appreciated! StackOverflow is always the best place to find answers to weird questions! I've been searching long and hard for an answer, but finally had to resort to coming here. Thanks!

    Read the article
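
    A rough watchdog along those lines, using the same BlueZ-era tools (hidd/hcitool) the question already relies on; the script name and path are made up:

        #!/bin/bash
        # bt-kbd-watchdog.sh - reconnect the keyboard if its link has dropped
        MAC="AA:BB:CC:DD:EE:FF"

        if ! hcitool con | grep -qi "$MAC"; then
            # note: the keyboard may still need to be in pairing mode for this to succeed
            hidd --connect "$MAC"
        fi

    Running it from root's cron every minute (crontab line: * * * * * /usr/local/bin/bt-kbd-watchdog.sh) covers both the at-boot connect and the reconnect case, though it cannot stop the keyboard itself from going to sleep.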

  • How to find reasons why a site goes online/offline

    - by HollerTrain
    It seems that today a website I manage has been going online and offline throughout the entire day. I have no idea what is causing the issue, so I am seeking guidance on where to start. It is a WordPress-based site. Here is what I DO know: I use a program that pings the server every minute and emails me when the server does not respond, so I know exactly when the site is online and offline. The site went up and down between 8pm and 12pm on 12.28, and again around the 1am hour early in the morning of 12.29 (New York City timezone; all times below are in the same timezone). At the times of the ups/downs I see a lot of strain on memory usage; look at the load average while the site is going online/offline (http://screencast.com/t/BRlfXkqrbJII). I then ran this command to restart httpd (http://screencast.com/t/usVtYWZ2Qi) and the memory usage went down to this (http://screencast.com/t/VdTIy3bgZiQB). An hour after I restarted httpd, the site went offline/online again, so restarting httpd didn't help much. While the site is going offline/online, running the top command gives this (http://screencast.com/t/zEwr7YQj3); here is a top command when the site is at its lowest (http://screencast.com/t/eaMfha9lbT - so this would be dubbed "normal"). Here is a bandwidth report (http://screencast.com/t/AS0h2CH1Gypq). The traffic doesn't seem to be that much (http://screencast.com/t/s7hrWNNic1K), but looking at the times the site goes up/down, could this be one of the reasons? I have the dvp Nitro package at Media Temple (http://mediatemple.net/webhosting/nitro/). So at this point I would request some help in trying to figure out what the cause of this is and how I can go about pinpointing the issue (a simple logging sketch follows below). ANY HELP is greatly appreciated.

    Read the article
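
    One low-tech way to pin the cause down is to log load, memory and the busiest processes every minute and line the timestamps up against the ping monitor's alerts. A rough sketch (the log path is arbitrary):

        #!/bin/bash
        # resource-probe.sh - append a snapshot every 60 seconds for later correlation
        LOG=/var/log/updown-probe.log
        while true; do
            {
                date
                uptime
                free -m
                ps aux --sort=-%mem | head -n 15
                echo "----"
            } >> "$LOG"
            sleep 60
        done

    If the snapshots taken just before each outage show Apache or PHP processes ballooning in memory, that points at the application (plugins, traffic spikes) rather than the host itself.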

  • Per connection bandwidth limit

    - by Kyr
    Apparently, our server box running Windows Server 2008 R2 has a per-connection bandwidth limit of 0.2 MB/s. Meaning, while one TCP connection can pull at most 0.2 MB/s, 60 parallel connections can pull 12 MB/s. We first noticed this when trying to check out a large SVN repository from this server. I used a simple Java application to test this, transferring data from the server to a workstation using a variable number of threads (one connection per thread). The server part of the application simply writes a 1 MB memory buffer to the socket 100 times, so there is no disk involvement. Each connection topped out at 0.2 MB/s; the per-connection limit was the same for a single connection as for 60 parallel connections. The problem is that I have no idea where this limit comes from. I have very little experience administering Windows Server, so I was mostly trying to find something by googling. I have checked the following (an iperf cross-check is sketched below):

      - Local Computer Policy > QoS Packet Scheduler > Limit reservable bandwidth: it's Not configured;
      - Group Policy Management Console: we have two GPOs, but neither has any Policy-based QoS defined;
      - There isn't any bandwidth limiter program installed, as far as I can tell; we're using the standard Windows Firewall.

    I can update this question with any additional information if needed.

    Read the article
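
    To rule the Java test harness itself in or out, the same single-stream vs. parallel comparison can be reproduced with iperf (assuming iperf is installed on both ends; the hostname is a placeholder):

        # on the Windows Server 2008 R2 box
        iperf -s

        # on the workstation: one stream for 30 seconds, then 60 parallel streams
        iperf -c fileserver.example.com -t 30
        iperf -c fileserver.example.com -t 30 -P 60

    If iperf shows the same ~0.2 MB/s per stream, the cap sits in the OS or network path rather than in the application; if it doesn't, the limit is specific to the protocol stack being tested.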

  • Free tiered storage automation in linux?

    - by NginUS
    I have a couple virtualized fileservers running in QEMU/KVM on ProxmoxVE. The physical host has 4 storage tiers with significant performance variances. They're attached both locally and via NFS. These will be provided to the fileserver(s) as local disks, abstracted into pools, and handling multiple streams of data for the network. My aim is for this abstraction layer to intelligently pool the tiers. There's a similar post on the site here: Home-brew automatic tiered storage solutions with Linux? (Memory - SSD - HDD - remote storage) in which the accepted answer was a suggestion to abandon a linux solution for NexentaStor. I like the idea of running NexentaStor. It almost fits the bill. NexentaStor provides Hybrid Storage Pools, and I love the idea of checksumming. 16TB without incurring licensing fees is a huge plus as well. After the expense of the hardware, free is about all my budget can handle. I don't know if zfs pools are adaptive or dynamically allocated based on load, but it becomes irrelevant since NexentaStor doesn't support virtio network or block drivers, which is a must in my environment. Then I saw a commercial solution called SmartMove: http://www.enigmadata.com/smartmove.html And it looks like a step in the right direction, but I'm so broke I'd be wasting their time to even ask for a quote, so I'm looking for another option. I'm after a linux implementation that supports virtio drivers, and I'm at a loss as to which software is up to it.

    Read the article

  • Inconsistent SMTP Access

    - by Mike Hanson
    I have a mail server set up on Windows Server 2008. All was working fine until I wanted to map a drive on the server so that I could access files on another machine. Windows prompted me to configure Network Discovery, which I did with the "Home/Office" option rather than "Public". After that, several access points that worked before stopped working, like VNC, SMTP, etc. After reinstalling those packages, things appeared to be working again. Unfortunately, the problems have returned with my SMTP server. I can use a web-based SMTP tester and it connects in 62 ms (as expected). However, if I telnet from my machine on the same LAN, it takes more than 20 seconds to connect! When I try to send messages from Outlook, it times out entirely with the message: "'Sending' reported error (0x80042109): Outlook cannot connect to your outgoing (SMTP) e-mail server. If you continue to receive this message, contact your server administrator or Internet service provider (ISP)." I've checked the firewall settings and I've tried configuring it to use port 587 instead of 25, but nothing gets around this problem. Does anyone have any useful insights? (A quick reverse-DNS check is sketched below.) Thanks in advance!

    Read the article
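
    A 20-second accept delay on the LAN but not from outside often points at the SMTP service doing a slow reverse-DNS (or ident) lookup on the connecting client. A quick, hedged check, runnable from the server or any box with the same DNS settings (the IPs are placeholders):

        # reverse lookups for a LAN workstation and for an outside tester's address;
        # a lookup that hangs for ~20 seconds here would match the delay seen in Outlook
        nslookup 192.168.1.50
        nslookup 203.0.113.10

    If the internal lookup hangs, the Network Discovery change may have switched the NIC's DNS or firewall profile so that the mail server can no longer resolve LAN client addresses promptly.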

  • Exchange 2010 DAG + VMWare HA = no support?

    - by Dan
    We currently have an Exchange 2003 clustered environment (two machine cluster) that we're looking to upgrade to 2010. We recently purchased a VMWare virtualization environment (three Dell R710's with an EMC NS-120 serving up NFS datastores - iSCSI is available) that we wish to use for this new environment. I'm seeing that Microsoft does not support Exchange 2010 DAGs with a virtualization high availability solution (see links below). I would like to utilize the DAG to ensure the data stays available if one host goes down, and HA to ensure that if the physical host goes down, the VM will come back up on the other available host. Does anybody know why MS does not support this? VMWare HA will only restart the VM if it is hung/down - I don't see any difference between this and restarting the physical box if someone pulled the power... Will we only run into issues with support if it has something to do with HA/DAG failover or will they see we have HA and tell us to put it on a physical box even if it has nothing to do with HA? If we disable HA for these VM's will that satisfy them on a support case? Has anybody set up an Exchange 2010 DAG on VMware with HA enabled? Will they have any issues with using an NFS datastore? We have much greater flexibility on the EMC with NFS vs iSCSI, so I would prefer to continue utilizing that. Thanks for any input! http://www.vmwareinfo.com/2010/01/verifying-microsoft-exchange-2010.html Take a look at the second image under "Not Supported" http://technet.microsoft.com/en-us/library/aa996719.aspx "Microsoft doesn't support combining Exchange high availability solutions (database availability groups (DAGs)) with hypervisor-based clustering, high availability, or migration solutions. DAGs are supported in hardware virtualization environments provided that the virtualization environment doesn't employ clustered root servers."

    Read the article

  • How to serve pages through multiple frameworks/template engines efficiently

    - by Leftium
    I would like to render a file that has both PHP tags and Web2Py tags mixed together. To do this, I would like the web server to pass the file through Web2Py, then PHP. I found a method to call PHP from Web2py via Python (based on this method for running PHP on top of django), but this method loses the benefits of any server optimizations from mod_php or FastCGI like caching and multi-threaded operation. A new process is created for each PHP request, which is very slow. Is there a better way to efficiently render pages with both Web2Py(Python) and PHP tags in the same file? Note I am not looking for methods of serving PHP-only and Web2Py-only files from the same server/domain. I prefer solutions for Apache2 or Cherokee. I'm open to using other web servers, though. Background info: I prefer to develop in Web2Py, but we have this pre-existing system written in PHP. I would like to augment the PHP system with some of Web2Py's features like auth authentication/user management and the T() internationalization object. Also it would make it much easier to port the PHP project to Web2Py if it could be done piecemeal. Since the PHP project consists of many files, it would greatly help if they did not need modification.

    Read the article

  • HTTP Error: 413 Request Entity Too Large

    - by Torben Gundtofte-Bruun
    What I have: an iPhone app that sends HTTP POST requests (XML format) to a web service written in PHP. This is on a hosted virtual private server, so I can edit httpd.conf and other files on the server and restart Apache.

    The problem: the web service works perfectly as long as the request is not too large, but around 1 MB is the limit. After that, the server responds with:

        <!DOCTYPE HTML PUBLIC "-//IETF//DTD HTML 2.0//EN">
        <html><head>
        <title>413 Request Entity Too Large</title>
        </head><body>
        <h1>Request Entity Too Large</h1>
        The requested resource<br />/<br />
        does not allow request data with POST requests, or the amount of data
        provided in the request exceeds the capacity limit.
        </body></html>

    The web service writes its own log file, and I can see that small messages are processed fine. Larger messages are not logged at all, so I guess something in Apache rejects them before they even reach the web service. Things I've tried without success (I've restarted Apache after every change; these steps are incremental):

      1. hosting provider's web-based configuration panel: disable mod_security
      2. httpd.conf: LimitXMLRequestBody 0 and LimitRequestBody 0
      3. httpd.conf: LimitXMLRequestBody 100000000 and LimitRequestBody 100000000
      4. httpd.conf: SecRequestBodyLimit 100000000

    At this stage, Apache's error.log contains the message: ModSecurity: Request body no files data length is larger than the configured limit (1048576). It looks like my step #4 didn't really take, which is consistent with step #1 but does not explain why mod_security appears to be active after all. What more can I try to get the web service to receive large messages? (A config sketch follows below.)

    Read the article
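
    Given that error.log line, mod_security's own body limits (separate from Apache's LimitRequestBody) still seem to be at their ~1 MB defaults. A hedged sketch of raising them in a dedicated include and reloading; the include path and the 100 MB figure are placeholders, and the directive names are from mod_security 2.x:

        conf=/etc/httpd/conf.d/zz-bodylimits.conf   # path is a guess; adjust for the distro
        printf '%s\n' \
            '<IfModule mod_security2.c>' \
            '    SecRequestBodyLimit        104857600' \
            '    SecRequestBodyNoFilesLimit 104857600' \
            '</IfModule>' \
            'LimitRequestBody 104857600' | sudo tee "$conf" > /dev/null

        sudo apachectl configtest && sudo apachectl graceful

    The "Request body no files data length" wording in the log corresponds to SecRequestBodyNoFilesLimit, whose default is exactly 1048576, which would explain why raising SecRequestBodyLimit alone (step 4) did not help. If the hosting panel injects its own mod_security config later in the load order, the same directives may need to go there instead.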

  • Testing realistic loads for new versions of existing web app

    - by David Cournapeau
    Assuming I have a relatively complex web application, I am interested in testing the performance of a new version using traffic that is as realistic as possible. The traffic is relatively complex (session-based, with lots of internal logic that depends on incoming requests), and the webapp depends on many servers (databases, frontends, etc.). I can think of two basic directions (a crude replay sketch follows below):

      - Recording every incoming request with its timestamp in production in a centralized manner, then replaying it from N clients to reproduce a load as close as possible to the original. Issue: because we have many servers, getting the centralized log is not trivial.
      - Having a system duplicate requests to a staging area so that I could "plug" a dev version of my webapp into it at any time without affecting production. Issue: I have not found much information about this except this, which suggests to me that it may not be the best solution. On the other hand, it is realistic by definition.

    What is the standard way of doing this kind of testing? I did not find much information about load testing with complex, realistic traffic.

    Read the article
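
    As a very crude first pass at the first direction, aggregated access-log URLs can be replayed with nothing more than xargs and curl (a sketch; it ignores sessions, POST bodies and the original inter-request timing, which this application clearly needs, so it is a smoke test rather than a faithful load):

        # urls.txt: one URL per line, extracted from the merged production access logs
        # replay against staging with 20 concurrent clients, logging status and latency
        xargs -a urls.txt -n 1 -P 20 \
            curl -s -o /dev/null -w "%{http_code} %{time_total} %{url_effective}\n"

    For session-aware, timing-faithful replay, purpose-built log replayers or traffic-duplication proxies are the usual next step, but the principle is the same: captured production requests driven from N clients.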

  • Loss of network connectivity when playing video on Optoma HD180 projector

    - by Jeff Fohl
    Hi Folks - New to Super User, so I hope this question fits in with the guidelines. Very strange problem I am having, and I am at a loss as to how to continue troubleshooting it. The basic problem is that when I attempt to watch streamed video on a particular display device (an Optoma HD180 projector), my network connectivity drops like a stone to barely measurable levels.

    This is my setup: I have a Dell H2C 730x running Windows 7 64-bit. This particular computer has two ATI Radeon HD 4800 video cards. I have two Samsung 22" monitors connected to one card, and an Optoma HD180 digital projector connected to the other card via an HDMI cable. My internet connection is normally a reliable 6 Mbps. The problem occurs when I stream video (or even just browse the web) on the Optoma projector: when I do this, my internet connection drops to practically zero (just a few kilobits per second). When I move the browser away from the projector and over to one of my Samsung monitors, the internet connection comes right back. Note that the Optoma projector is on and enabled as a third monitor all this time, and I can move the mouse around on the projector without triggering the problem.

    I tried pinging my router while playing a movie on one of the monitors, and I get a 1 millisecond response. However, when I have the movie playing on the Optoma projector, pinging the router gives me response times in the hundreds of milliseconds, or times out completely. So it clearly is something local to my machine and not some sort of throttling occurring down the line. I would think it is possibly something to do with the HDMI driver conflicting somehow with my network driver (which is a USB-based wireless connection). This one has me really stumped. Anyone have any ideas?

    Read the article

  • iptables forwarding to a dummy interface

    - by madinc
    Hi, I'm trying to accomplish the following: I have a box with a service listening on a dummy interface (say 172.16.0.1), UDP port 5555. What I'd like to do is take packets that arrive on interfaces eth0 (1.1.1.1:5555) and eth1 (2.2.2.2:5555), forward them to the service on the dummy interface, and have the replies go back to the clients out the same physical interface they came in on. Clients must think they're talking to 1.1.1.1:5555 or 2.2.2.2:5555. I think I need a mix of iptables rules and packet marking, plus some iproute rules (if it's possible at all). What I tried is to catch packets coming in from eth0 and eth1, UDP port 5555, mark them with 1 and 2 respectively, and --save-mark the mark into the connmark. Then I used a DNAT to 172.16.0.1. The service seems to be getting the packets. Now I'm not sure how to do the reverse: it seems that for packets originating from the box you can't do anything before the routing decision, but that would be the place to restore the marks and thus make a routing decision based on them. Here's what I have so far:

        iptables -t mangle -A PREROUTING -d 1.1.1.1 -p udp --dport 5555 -j MARK --set-mark 1
        iptables -t mangle -A PREROUTING -d 2.2.2.2 -p udp --dport 5555 -j MARK --set-mark 2
        iptables -t mangle -A PREROUTING -d 1.1.1.1 -p udp --dport 5555 -j CONNMARK --save-mark
        iptables -t mangle -A PREROUTING -d 2.2.2.2 -p udp --dport 5555 -j CONNMARK --save-mark
        iptables -t nat -A PREROUTING -m mark --mark 1 -j DNAT --to-destination 172.16.0.1
        iptables -t nat -A PREROUTING -m mark --mark 2 -j DNAT --to-destination 172.16.0.1
        # What next? (a return-path sketch follows below)

    As I said, I'm not even sure it can be done. To give a bit of background, it's an old OpenVPN installation that cannot be upgraded (otherwise I'd install a recent version that supports multihoming natively). Thanks for any help.

    Read the article
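
    For the return path, one untested sketch that builds on the rules above: restore the connection mark onto the locally generated replies, then use fwmark routing so each reply leaves via the interface it arrived on. The gateway addresses are hypothetical, and conntrack reverses the DNAT, so the replies should leave with source 1.1.1.1 or 2.2.2.2 automatically:

        # put the saved connmark back onto the service's replies
        iptables -t mangle -A OUTPUT -s 172.16.0.1 -p udp --sport 5555 -j CONNMARK --restore-mark

        # route by mark: mark 1 goes out eth0, mark 2 goes out eth1
        ip route add default via 1.1.1.254 dev eth0 table 101   # gateway on eth0 (hypothetical)
        ip route add default via 2.2.2.254 dev eth1 table 102   # gateway on eth1 (hypothetical)
        ip rule add fwmark 1 table 101
        ip rule add fwmark 2 table 102

    Depending on the kernel, rp_filter on eth0/eth1 may also need to be relaxed, since with a single default route in the main table the packets arriving on the secondary interface can look like spoofed traffic.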

  • Mac updated just now, postgres now broken

    - by Dave
    I run Postgres 9.1 / Ruby 1.9.2 / Rails 3.1.0 on a MacBook Air for local development. It has all been running smoothly for months (though this is the first time I've done development on a Mac). It's a MacBook Air from last year, and today I got the Mac OS X software update message as I have a few times before; my system downloaded approx 450 MB of updates and restarted, and it now says it's on OS X 10.7.3. The point is, Postgres has stopped working: when I start my thin server (mirroring Heroku cedar) as normal and then browse to my Rails app, I get: PG::Error: could not connect to server: Permission denied. Is the server running locally and accepting connections on Unix domain socket "/var/pgsql_socket/.s.PGSQL.5432"? What happened? After browsing around a few questions I'm still confused, but here's some extra info:

      - Running psql from the command line gives the same error
      - I can run pgAdmin 3, connect via it and run SQL with no problems
      - Running which psql shows the version as /usr/bin/psql
      - I created a PostgreSQL user back when I got the Mac (it's always been on Lion). I've no idea why; almost certainly I was following a tutorial which I neglected to store in my notes. The point is, I am aware there is a _postgres user as well. I know it's rubbish, but apart from a note on passwords I don't have any extra info on how I configured Postgres, though the obvious implication is that I did not use the _postgres user.

    Anyone have suggestions or information on what might have changed / what I can try to debug and fix? Thanks. (A few diagnostic commands are sketched below.)

    Edit: playing around based on this question and answer: http://stackoverflow.com/questions/7975414/check-status-of-postgresql-server-mac-os-x, see this string of commands:

        $ sudo su postgreSQL
        bash-3.2$ /Library/PostgreSQL/9.1/bin/pg_ctl start -D /Library/PostgreSQL/9.1/data
        pg_ctl: another server might be running; trying to start server anyway
        server starting
        bash-3.2$ 2012-04-08 19:03:39 GMT FATAL: lock file "postmaster.pid" already exists
        2012-04-08 19:03:39 GMT HINT: Is another postmaster (PID 68) running in data directory "/Library/PostgreSQL/9.1/data"?
        bash-3.2$ exit

    Read the article
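
    A few diagnostic commands that tend to narrow this kind of breakage down (the data directory path is the one from the transcript above; the -U postgres superuser name is a guess):

        # which server processes are actually running, and as which user?
        ps aux | grep postgre[s]

        # where does the 9.1 server put its socket? (unix_socket_directory setting)
        grep -i unix_socket /Library/PostgreSQL/9.1/data/postgresql.conf

        # bypass the socket path entirely by using TCP with the 9.1 client
        /Library/PostgreSQL/9.1/bin/psql -h localhost -p 5432 -U postgres

    The /var/pgsql_socket path in the error is where Apple's bundled client looks for a socket, so if which psql resolves to /usr/bin/psql after the update, the client and the EnterpriseDB 9.1 server may simply be looking in different places; pointing the client at the right socket directory, or at TCP, usually shows which side is actually broken.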
