Search Results

Search found 11665 results on 467 pages for 'ajax push'.


  • Is there an SSL equivalent to an ssh agent?

    - by Matthew J Morrison
    Here is my situation: a number of developers all need to be able to install Ruby gems and Python eggs from a remote source. Currently, we have a server inside our firewall that hosts the gems and eggs. We now want to be able to install things hosted on that server from outside our firewall. Since some of the gems and eggs we host are proprietary, I would like to lock access to that machine down somewhat, as unobtrusively for the developers as possible. My first thought was something like ssh keys, so I spent some time looking at SSL mutual authentication. I was able to get everything set up and working correctly, testing with curl, but the unfortunate part was that I had to pass extra arguments to curl so it knows about the certificate, key and certificate authority. I was wondering if there is anything like the ssh agent that I can set up to provide that information automatically, so that I can push the certificates and keys to the developers' machines and they don't have to log in or provide keys each time they try to install something. Another thing I want to avoid is having to modify the 'gem' and 'pip' commands to provide keys when they make the HTTP connection. Any other suggestions that may solve this problem (not related to SSL mutual auth) are also welcome. EDIT: I've continued to research this and came across stunnel. I think this may be what I'm looking for; any feedback regarding stunnel would also be great!
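
    If stunnel does turn out to be the route taken, a minimal client-side sketch might look like the following. Everything here is an assumption for illustration - the hostname gems.example.com, the port and the certificate paths are hypothetical - the idea is simply that a local stunnel holds the client certificate, so gem and pip talk plain HTTP to localhost and never need to be told about keys:

        # hypothetical stunnel client setup on a developer machine
        cat > /etc/stunnel/gems-client.conf <<'EOF'
        client = yes

        [gems]
        ; local port gem/pip will talk to
        accept  = 127.0.0.1:8443
        ; externally reachable gem/egg host (hypothetical name)
        connect = gems.example.com:443
        ; developer certificate + key, pushed out centrally
        cert    = /etc/stunnel/dev-client.pem
        CAfile  = /etc/stunnel/ca.pem
        verify  = 2
        EOF
        stunnel /etc/stunnel/gems-client.conf

        # gem/pip then point at the local tunnel instead of the remote host
        gem sources --add http://127.0.0.1:8443/
        pip install --index-url http://127.0.0.1:8443/simple/ some-package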


  • Split big Apache log to folder structure

    - by Dough
    I just changed my Apache log behavior because it was producing very large files... I now use cronolog to split my logs into log/httpd/2012/11/access_2012.11.30.log, for example, using the pattern %Y/%m/access_%Y.%m.%d.log. I now want to split my old 42GB file into the same structure but really don't know how to do that efficiently. I tried some simple commands with cat, egrep and awk, but really don't know how to handle all that in a more powerful script. Here is what the log looks like:

        x.x.237.134 - - [08/Apr/2011:14:43:09 +0200] "GET...
        x.x.50.15 - - [08/Apr/2011:14:43:09 +0200] "GET...
        [...]
        x.x.254.19 - - [28/Feb/2012:15:24:48 +0100] "GET...

    So for each line I need to extract the year %Y (e.g. 2012), the month %m (e.g. 11) and the day %d, and append the entire line to %Y/%m/access_%Y.%m.%d.log. Can someone give me clues to get that working? Thanks a lot for your interest.
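
    A minimal sketch of one way to do the split with awk, assuming the timestamp is always the fourth whitespace-separated field in the combined log format shown above (file and directory names follow the cronolog pattern; adjust the input path as needed):

        #!/bin/sh
        # split access.log into %Y/%m/access_%Y.%m.%d.log, one pass, append-only
        awk '{
            ts = substr($4, 2)                      # strip the leading "["
            split(ts, dt, /[\/:]/)                  # dt[1]=day  dt[2]=month-name  dt[3]=year
            m = index("JanFebMarAprMayJunJulAugSepOctNovDec", dt[2])
            month = sprintf("%02d", (m + 2) / 3)
            dir  = dt[3] "/" month
            file = dir "/access_" dt[3] "." month "." dt[1] ".log"
            if (file != last) {                     # new day: close old file, make directory
                if (last) close(last)
                system("mkdir -p " dir)
                last = file
            }
            print >> file
        }' access.log

    Since the old log is presumably in chronological order, only one output file is open at a time, so this should cope with a 42GB input without running out of file descriptors.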


  • Apache ScriptAlias and access error?

    - by Parhs
    First of all, after much pain, I figured out how to make this work with Apache 2.4 on Windows. Here is my configuration, which seems to work successfully for git clone, push and everything else.

    Problem: my configuration works, but there is a "Require all denied" on the / directory (I want only the git functionality and nothing else), and every request from a git client produces an authz error in the log. Example request from a git client:

        192.168.100.252 - - [07/Oct/2012:04:44:51 +0300] "GET /git/simple/info/refs?service=git-upload-pack HTTP/1.1" 200 264

    Error caused by that request:

        [Sun Oct 07 04:44:51.903334 2012] [authz_core:error] [pid 6988:tid 956] [client 192.168.100.252:13493] AH01630: client denied by server configuration: C:/git-server/web/simple

    There isn't any error on the git client - everything works fine - but I get this in the error log. Is there any way to stop this error from appearing? I worry about log size.

        <VirtualHost *:80>
            DocumentRoot "C:\git-server\web"
            ServerName git.****censored****
            DirectoryIndex index.php
            SetEnv GIT_PROJECT_ROOT c:/git-server/repositories
            SetEnv GIT_HTTP_EXPORT_ALL
            SetEnv REMOTE_USER=$REDIRECT_REMOTE_USER
            ScriptAlias /git/ "C:/Program Files (x86)/Git/libexec/git-core/git-http-backend.exe/"
            <LocationMatch "^/.*/git-receive-pack$">
                Options +ExecCGI
                AuthType Basic
                AuthName intranet
                AuthUserFile "C:/git-server/config/users"
                Require valid-user
            </LocationMatch>
            <Directory />
                Options All
                Require all denied
            </Directory>
            <Directory "C:\Program Files (x86)\Git\libexec\git-core">
                Options +ExecCGI
                Options All
                Require all granted
            </Directory>
        </VirtualHost>


  • How do I prevent or override a group policy on Windows 7?

    - by Kevin
    A few months ago my company was purchased by a large corporation. We recently switched our network over to the large corporate network, which has more restrictive requirements. One of these is the requirement to use a proxy server for Internet traffic. However, some of our internal servers are not recognized by the corporate DNS, so we need to provide the fully qualified domain name. For W7, we make changes to the Internet Properties for IE8 and Chrome to include our domain name as an exception to the proxy server (e.g., *.foobar.com). The problem is that a group policy that does not include our domain name is continually pushed out to my systems throughout the day. This requires me to make the appropriate changes to the Internet Properties several times a day in order to access our internal servers. Is there a way that I can prevent the group policy from being pushed to my systems, or detect when the group policy is pushed and override it? I am an administrator on all of my systems. I do have Firefox installed, which is not subject to the same group policy push, but I need to have IE8 and Chrome working.


  • gitweb - fatal: not a git repository

    - by Robert Mason
    So I have set up a simple server running debian stable (squeeze), and have configured git. Using gitolite, I have all functionality (at least the basic clone/push/pull/commit) working. Installation of gitweb went without any issues. However, when I access gitweb, I get a gitweb screen without any repos listed.

        # tail -n 1 /var/log/apache2/error.log
        [DATE] [error] [client IP_ADDRESS] fatal: Not a git repository: '/var/lib/gitolite/repositories/testrepo.git'
        # cd /var/lib/gitolite/repositories/testrepo.git
        # ls
        branches  config  HEAD  hooks  info  objects  refs

    Here is what I see in /var/lib/gitolite/projects.list:

        testrepo.git

    And in /etc/gitweb.conf:

        # path to git projects (<project>.git)
        $projectroot = "/var/lib/gitolite/repositories";
        # directory to use for temp files
        $git_temp = "/tmp";
        # target of the home link on top of all pages
        #$home_link = $my_uri || "/";
        # html text to include at home page
        $home_text = "indextext.html";
        # file with project list; by default, simply scan the projectroot dir.
        $projects_list = "/var/lib/gitolite/projects.list";
        # stylesheet to use
        $stylesheet = "gitweb.css";
        # javascript code for gitweb
        $javascript = "gitweb.js";
        # logo to use
        $logo = "git-logo.png";
        # the 'favicon'
        $favicon = "git-favicon.png";

    What is missing?
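
    One hedged guess worth checking before anything else: gitweb runs as the web-server user (www-data on Debian), and gitolite usually creates its repositories readable only by the gitolite user, in which case gitweb reports exactly this "Not a git repository" error. A quick test and one possible fix (group name and paths are the Debian defaults; adjust to your install):

        # can the web-server user actually read the repo?
        sudo -u www-data ls /var/lib/gitolite/repositories/testrepo.git

        # if that fails with "Permission denied", grant group access
        sudo adduser www-data gitolite
        sudo chmod -R g+rX /var/lib/gitolite/repositories
        # also make gitolite create future repos group-readable (the umask/UMASK
        # setting in .gitolite.rc), otherwise new pushes will be unreadable again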


  • Controlling clone access to multiple mercurial repos served via hgwebdir.cgi

    - by chrislawlor
    I'm trying to host multiple hg repositories to use for my clients. I need to control access to each repository individually - not just push access, but clone as well. I've got an .htaccess set up which requires authentication globally:

        AuthUserFile /path/to/hgweb.passwd
        AuthGroupFile /dev/null
        AuthName "Chris Lawlor Client Mercurial Repositories"
        AuthType Basic
        <Limit GET POST PUT>
            Require valid-user
        </Limit>
        <FilesMatch "\.(htaccess|passwd|config|bak)$">
            Order Allow,Deny
            Deny from all
        </FilesMatch>

    Then in each repository, I've got a .hg/hgrc file requiring a valid user:

        [web]
        allow_push = <comma separated user list>

    This almost does what I need. The problem is that I need to add ALL my clients to hgweb.passwd, which gives them clone access to ALL of the repositories. The only solution I can think of is to have another .htaccess and .passwd file in EACH repository. I don't really want to do that, though; it seems a little convoluted. I can already specify a list of authorized users for each repository in that repo's hgrc file with the allow_push setting. If only there were an allow_clone setting as well... All the documentation I've found for hgwebdir.cgi is incomplete. I've read http://mercurial.selenic.com/wiki/HgWebDirStepByStep, http://hgbook.red-bean.com/read/collaborating-with-other-people.html#sec:collab:cgi, http://hgbook.red-bean.com/read/collaborating-with-other-people.html and others. I've yet to find a comprehensive list of hgrc settings. I guess this is as much an Apache question as a Mercurial question. Unless I can find a better approach, I'll be going with a separate .htaccess and .passwd file for each repo. This is a virtual host on WebFaction if it matters - set up roughly like this: http://docs.webfaction.com/software/mercurial.html
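
    For what it's worth, Mercurial's [web] section does have a read-side counterpart to allow_push (allow_read), which - if your Mercurial version supports it - would let each repo's hgrc carry its own clone/pull list without extra .htaccess/.passwd files. A hedged sketch, with made-up repo path and usernames:

        # per-repository access lists in <repo>/.hg/hgrc (names are examples)
        cat >> /path/to/clienta-repo/.hg/hgrc <<'EOF'
        [web]
        allow_read = clienta
        allow_push = clienta
        EOF

    Apache would still authenticate everyone against the single hgweb.passwd; hgwebdir would then authorize clone/pull per repository using allow_read.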


  • Using WebDAV for automated downloads

    - by Geo Ego
    I currently manage a number of sites (at one point about a dozen, currently four, but soon growing into the dozens or hundreds) that serve a piece of software to clients at their remote locations. Our web server is Windows SBS Server 2k3, and the remote servers are Windows Server 2k3. When we have new versions of the software, I upload this new software to a specific directory and rename it; each time the clients boot, they pull their software from that specific directory. With just a few sites, it's no problem for me to RDP in and copy the files over. As the number grows, this will quickly become quite unwieldy. So I'm thinking that WebDAV would be part of a solution, so that I could simply push the newest version to our server (Windows SBS Server 2003) and make it available to the sites to grab. However, on the remote server side, what are some suggestions for automating the download? I only want the servers to download the files during downtime (between 3 AM and 9 AM), and I only want them to download if there is a new version available. I had thought of writing a program that checked the files on the WebDAV server at a regular interval, compared a hash of the current software to a hash of the software on the server, and only downloaded if they were different, but I'm wondering if there is something I am unaware of that can automate the process.


  • Homebrew large data cluster access for 2 user levels?

    - by Yegor
    The title probably makes little sense, so here is an example. I have a file hosting site that serves a large amount of semi-randomly accessed files. The setup is as follows: a high-horsepower front-end + DB server that also does encoding for files that need encoding; a fresh file server, which stores newly uploaded content that is probably (and usually) rapidly accessed, with 500GB of RAIDed SSD storage that can push over 3Gbit of traffic; and 3 cheap node servers, each containing 2 x 750GB SATA drives in RAID1, where files older than 2 weeks are archived from the SSD server mentioned above. Files on each server are accessed via subdomains (via modsec) in a straightforward fashion (server1.domain.com, server2.domain.com, etc). Here is where I have the problem. I introduced a "premium" service where people pay a small fee every month and get ad-free, quick access to stuff on the site. Once they are logged in, they access the same files via premium.server1.domain.com through a different modsec script, with a different pass phrase. That all works fine and dandy... except the cheap node servers are all IO-bound, so accessing the files on them via a different, unsaturated network makes no difference, since they cannot read off the drives fast enough. What would be a good way to make the files on the site accessible via 2 different network routes, 1 of which will be saturated (the "free" network), while all other files are on an unsaturated "premium" network?


  • GitLab post-receive hook not firing

    - by Ben Graham
    Apologies if this isn't the right Stack Exchange. I have a GitLab install. It was installed over the top of a gitolite install that was only a few days old, and I assume this non-standard setup is at the root of my problem, but I cannot pin it down. The problem is straightforward: post-receive hooks are not fired. This prevents 'project activity' appearing in GitLab. The problem looks like:

        $ git push
        #...
        error: cannot run hooks/post-receive: No such file or directory

    Hook exists - the post-receive hook/symlink exists and is executable:

        -rwxr-xr-x 1 git git 470 Oct 3 2012 .gitolite/hooks/common/post-receive
        lrwxrwxrwx 1 git git  45 Oct 3 2012 repositories/project.git/hooks/post-receive -> /home/git/.gitolite/hooks/common/post-receive

    It's executable by GitLab - the gitlab user can execute the script (I have removed the /dev/null redirect and fed in blank input to get an 'OK' as output):

        sudo su - gitlab -c /home/git/.gitolite/hooks/common/post-receive
        OK

    GitLab can find it - GitLab is looking for hooks in the correct location:

        $ grep hooks /srv/gitlab/gitlab/config/gitlab.yml
        hooks_path: /home/git/.gitolite/hooks/

    and

        $ bundle exec rake gitlab:app:status RAILS_ENV=production
        # ...
        /home/git/.gitolite/hooks/common/post-receive exists? ............YES

    Environment - the env -i line in the hook is commonly cited as an issue. I think that would occur after this problem, but for completeness, redis-cli is found OK:

        $ env -i redis-cli
        redis>

    I've run out of debugging ideas on this one. Does anybody have any suggestions?
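
    A hedged line of investigation, since the hook clearly exists and runs when invoked directly: when git itself reports "No such file or directory" for a hook, it is often the hook's interpreter (the shebang line) that cannot be found in the minimal environment the push runs under - for example a ruby provided by rvm that is not on the PATH of the user the push runs as. A few quick checks (paths and the ruby example are illustrative):

        head -1 /home/git/.gitolite/hooks/common/post-receive          # which interpreter does the shebang name?
        sudo su - git -c 'command -v ruby'                             # is that interpreter visible to the pushing user?
        ls -l /home/git/repositories/project.git/hooks/post-receive   # does the symlink resolve from the repo being pushed to?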


  • Nginx won't send POST to fastcgi backend, but GET works fine?

    - by xyld
    Not sure why, but nginx is happy sending a GET to the fastcgi backend (Mercurial hgwebdir in this case), yet simply resorts to the filesystem if the request is a POST. Relevant parts of nginx.conf:

        location / {
            root /var/www/htdocs/;
            index index.html;
            autoindex on;
        }

        location /hg {
            fastcgi_pass unix:/var/run/hg-fastcgi.socket;
            include fastcgi_params;
            if ($request_uri ~ ^/hg([^?#]*)) {
                set $rewritten_uri $1;
            }
            limit_except GET {
                allow all;
                deny all;
                auth_basic "hg secured repos";
                auth_basic_user_file /var/trac.htpasswd;
            }
            fastcgi_param SCRIPT_NAME "/hg";
            fastcgi_param PATH_INFO $rewritten_uri;
            # for authentication
            fastcgi_param AUTH_USER $remote_user;
            fastcgi_param REMOTE_USER $remote_user;
            #fastcgi_pass_header Authorization;
            #fastcgi_intercept_errors on;
        }

    GETs work fine, but a POST delivers this error to the error_log:

        2010/05/17 14:12:27 [error] 18736#0: *1601 open() "/usr/html/hg/test" failed (2: No such file or directory), client: XX.XX.XX.XX, server: domain.com, request: "POST /hg/test HTTP/1.1", host: "domain.com"

    What could possibly be the issue? I'm trying to allow read-only access via GETs to the page, but require authorization when using hg push to the same URL, which sends a POST request.


  • How to serve media across home network?

    - by TK Kocheran
    I'm looking to share my media across my home network. My router fully supports running a DLNA server, but I don't know if it would be better to run the server from my main server computer instead of from the router, as the router would have to operate off a network share while my server can operate directly off the files. Here's what I need to serve, in order of importance: ISO 1:1 DVD rips (4-8GB files), MP4/H.264 encoded videos, MKV videos, MP3 files, and JPEG/CR2 images. Maybe I'm completely ludicrous for wanting to push full DVD files across my network, but in reality I would assume that only the parts of the actual file needed (i.e. the menu, the main video payload for the main title) would be served at any one time. Plus, encoding takes time and precious disk space, so why not stream it 1:1 ;) Does anyone know of the best way to accomplish this? The main goal is to serve it to the Logitech Revue downstairs, and the secondary goal is to serve it to other computers in the house. For music, I assume I could run a DAAP server, but I don't think the Revue supports that (and I can't exactly throw together an app that does it just yet).
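
    One hedged possibility, given that the server can read the files directly: run the DLNA server on the server box rather than the router. A minimal sketch using minidlna on a Debian/Ubuntu-style server (the paths and friendly name are placeholders, and whether the Revue will actually play raw ISO rips over DLNA is something to test rather than something this config guarantees):

        sudo apt-get install minidlna
        # append media locations to /etc/minidlna.conf (paths are examples)
        sudo tee -a /etc/minidlna.conf <<'EOF'
        # V = video (ISO rips, MP4/H.264, MKV), A = audio (MP3), P = pictures (JPEG/CR2)
        media_dir=V,/srv/media/video
        media_dir=A,/srv/media/music
        media_dir=P,/srv/media/photos
        friendly_name=HomeMediaServer
        inotify=yes
        EOF
        sudo service minidlna restart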


  • Supermicro X8SIL-F with Enermax Modu82+ 625W PSU booting issue

    - by Richard Whitman
    I am assembling a custom PC. The configuration is below: Motherboard: Supermicro X8SIL-F. Processor: Intel Xeon 3430. Power supply: Enermax Modu82+ 625W. Memory: Kingston KVR1333D3LQ8R9S/8GEC 8GBx1 installed in DimmA1. This power switch: Frozen CPU switch. When I turn on the PSU, the motherboard tries to start itself before I even push the power switch. The following happens: the CPU fan rotates once or twice, then stops. After 1-2 seconds, the CPU fan tries to rotate again and stops after about one or two rotations. Finally, after another 1-2 seconds, it starts again and this time rotates for about 3-4 seconds before stopping. If I pull out the power switch and turn on the PSU, again the MB turns on by itself and the following happens: the CPU fan rotates once or twice, then stops. After 1-2 seconds, the CPU fan tries to rotate again and stops after about one or two rotations. Finally, after another 1-2 seconds, it starts again and this time the system boots properly. I am sure there is nothing wrong with any of the components, because I have two sets of identical components (2 MBs, 2 CPUs, 2 PSUs, 2 switches and so on), and both systems show the same symptoms. Why is the MB booting up by itself? Why does it fail to boot when the power switch is installed? Is something wrong with the type of power switch I am using? PS: the power switch is installed correctly; I have double-checked the MB manual to make sure it's connecting the right pins.


  • Mirroring the Global Address List on Blackberries

    - by Wyatt Barnett
    In times immemorial, back in the day when men were men and BlackBerries still took AA batteries, we rolled them out to our users in our 100-person operation. At that time there was no such thing as address list lookup, so we were forced to hack a bit. The ingenious hack we came up with was to mirror the GAL as a public folder and then sync BlackBerries to that. While there have been a few downsides here and there, they have been mere annoyances, and our users, having grown fat and prosperous in the intervening years, are used to seeing every single employee and department here listed on their hand-helds automatically. Alas, it appears that Outlook 2010 breaks this functionality, as BlackBerry Desktop Manager is completely incompatible with it. Moreover, this presents us with an opportunity to change things for the better, given that public folders are going away next time we upgrade Exchange. So, we are in search of a tool or technique that will allow us to mimic the current functionality - that is, to: push an essentially arbitrary list of ~100 contacts to BlackBerry address books; have said list be centrally updated; and do so without requiring Desktop Manager or Exchange public folders. Any suggestions, crowd?


  • Calendar booking issue - Exchange 2003 and 2010

    - by NaOH
    In our organization we are running Exchange 2003 and 2010 simultaneously, with the hopes of migrating everyone to Exchange 2010 sometime within the next few months. Everyone is using Outlook 2010. Recently, we had an issue with transaction log storage on the Exchange 2003 server. This was resolved, but for some reason no meeting rooms on the Exchange 2003 server will automatically book meetings any longer. I have played around with this for a while, changing calendar permissions, turning resource scheduling off and back on, etc. No dice. My next step was to try migrating a resource to the Exchange 2010 server. After doing so, and setting it up as a Room, enabling Auto-Accept and removing the EnableDirectBooking registry entry on my PC, I can book a meeting with this room. If EnableDirectBooking is enabled, I get an error message stating: "Meeting Room" declined your meeting because it is recurring. You must book each meeting separately with this resource. This is despite the fact that the meeting I'm attempting to create has no recurrence. Now, I have also created a new test Room from scratch on the Exchange 2010 server, and I can book a meeting with this Room regardless of whether or not I have the EnableDirectBooking reg entry in place. All users here have this registry entry, and I'd rather not have to figure out how to push something out to remove it from every PC. Rather, I'd like to figure out what's different between the configurations of these two meeting rooms so that I could book a meeting room regardless of whether EnableDirectBooking is enabled or not. Any ideas, anyone? Thanks!


  • Adding custom service to nagiosgraph

    - by ravloony
    I have successfully added nagiosgraph to our Nagios installation. I also added a memory checker plugin, from here: http://blog.vergiss-blackjack.de/2010/04/nagios-plugin-to-check-memory-consumption/. However, I can't seem to get the graph of this service to be output by nagiosgraph. The plugin returns a single line like this:

        31% (3785 of 11903 MB) used

    so I added a rule like this to the map file:

        /output:(\d+)% \((\d+) of (\d+) MB\) used/
        and push @s, ['Mem',
            ['Percentage', 'GUAGE', $1],
            ['Used',       'GUAGE', $2],
            ['Total',      'GUAGE', $3] ];

    I have also read this: http://www.mail-archive.com/[email protected]/msg36835.html and made sure that process_performance_data=1 in the Nagios conf file. So far I have no graph for the Mem service on any host, and no rrd file either. I am unsure how to proceed to get this working. The documentation is rather difficult to follow and I haven't yet managed to understand it enough to do this. Can anyone point me to a tutorial, or some documentation, which explains the steps needed to get a service noticed and graphed by nagiosgraph?
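
    Two hedged things to check before digging deeper into nagiosgraph itself. First, RRD data-source types are spelled GAUGE, so the GUAGE strings in the rule above are worth double-checking - an invalid type would stop the rrd files from ever being created. Second, the capture regex can be exercised by hand against the plugin's sample output (nagiosgraph hands the map file lines labelled output: and perfdata:, so only the part after the label is matched here):

        echo '31% (3785 of 11903 MB) used' | \
          perl -ne 'print "pct=$1 used=$2 total=$3\n" if /(\d+)% \((\d+) of (\d+) MB\) used/'
        # expected: pct=31 used=3785 total=11903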


  • How to access a port via OpenVPN only

    - by Andy M
    I've set up an OpenVPN server alongside an Apache website that can only be accessed on port 8100 on the same machine. My /etc/openvpn/server.conf file looks like this:

        port 1194
        proto tcp
        dev tun
        ca ./easy-rsa2/keys/ca.crt
        cert ./easy-rsa2/keys/server.crt
        key ./easy-rsa2/keys/server.key  # This file should be kept secret
        dh ./easy-rsa2/keys/dh1024.pem   # Diffie-Hellman parameter
        server 10.8.0.0 255.255.255.0
        ifconfig-pool-persist ipp.txt
        # make sure clients can still connect to the internet
        push "redirect-gateway def1 bypass-dhcp"
        keepalive 10 120
        comp-lzo
        persist-key
        persist-tun
        status openvpn-status.log
        verb 3

    Now I tried to let only clients connected to the VPN access the website on Apache via port 8100, so I defined a few iptables rules:

        #!/bin/sh
        # My system IP/set ip address of server
        SERVER_IP="192.168.0.2"
        # Flushing all rules
        iptables -F
        iptables -X
        # Setting default filter policy
        iptables -P INPUT DROP
        iptables -P OUTPUT DROP
        iptables -P FORWARD DROP
        # Allow incoming access to port 8100 from OpenVPN 10.8.0.1
        iptables -A INPUT -i tun0 -p tcp --dport 80 -m state --state NEW,ESTABLISHED -j ACCEPT
        iptables -A OUTPUT -o tun0 -p tcp --sport 80 -m state --state ESTABLISHED -j ACCEPT
        # outgoing http
        iptables -A OUTPUT -o tun0 -p tcp --dport 80 -m state --state NEW,ESTABLISHED -j ACCEPT
        iptables -A INPUT -i tun0 -p tcp --sport 80 -m state --state ESTABLISHED -j ACCEPT

    Now when I connect to the server from my client computer and try to access the website at 192.168.0.2:8100, my browser can't open it. Will I have to forward traffic from tun0 to eth0? Or is there anything else I'm missing?
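
    A hedged observation on the rules above: with a default DROP policy, nothing outside the listed rules is accepted, and the rules accept tun0 traffic on port 80 while Apache is listening on 8100, so VPN clients' requests to 8100 are dropped. A sketch of what a closer rule set might look like (interface names and the assumption that Apache listens locally on 8100 are taken from the question; loopback and the OpenVPN port itself also need to be allowed when the default policy is DROP):

        # loopback
        iptables -A INPUT  -i lo -j ACCEPT
        iptables -A OUTPUT -o lo -j ACCEPT
        # let clients reach the OpenVPN daemon itself (proto tcp, port 1194 per server.conf)
        iptables -A INPUT  -i eth0 -p tcp --dport 1194 -m state --state NEW,ESTABLISHED -j ACCEPT
        iptables -A OUTPUT -o eth0 -p tcp --sport 1194 -m state --state ESTABLISHED -j ACCEPT
        # the website on 8100, reachable only through the tunnel
        iptables -A INPUT  -i tun0 -p tcp --dport 8100 -m state --state NEW,ESTABLISHED -j ACCEPT
        iptables -A OUTPUT -o tun0 -p tcp --sport 8100 -m state --state ESTABLISHED -j ACCEPT

    Note that a client on the same LAN may reach 192.168.0.2:8100 without going through the tunnel at all (a directly connected LAN route usually wins over the redirected default route), so testing against the VPN-side address, e.g. 10.8.0.1:8100, is the safer check.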


  • How to manage multiple email addresses on multiple domains in Exchange

    - by CAD bloke
    Using Hosted Exchange Server, mostly because I use an iPhone, webmail, and Outlook on 2 laptops. I want to keep everything consistent and unfragmented. Also, I want push notifications. I have 2 domains, a professional one and a personal one. Each domain has about 5 (give or take) email addresses I use for various purposes. Each domain also has a few parked domains (.net, .org, .info) aliased to the .com domain. I would like to keep emails from the 2 domains separated. Do I need an extra mailbox, meaning extra expense, or can I create another Exchange user on the same mailbox and create an extra account in Outlook? In either case I will have to wait for iOS 4 on the iPhone to manage 2 Exchange accounts. Or am I better off just using a set of rules and folders? The aliased domains are another joy to behold entirely. It looks like I will have to add each email address variant individually. Alternatively, I reckon I may just leave the aliased domains at the POP3 host and let Outlook gather those as edge cases. Surely I can't be the only one making my life this difficult. Anyone out there done this? From the left field - is this (much) easier in Gmail? I'm not committed to Exchange (yet). Previously I used Outlook as a POP3 client with a set of filters to direct incoming traffic to folders. This worked with the aliased domains because my host directed all the aliased TLDs to the same mailbox.


  • Interruptionless Uploads / Rollback in IIS

    - by NickatUship
    I'm not sure if this is the right way to ask this question, but here's basically what I'd like to do:

        1. Push a changeset to a site in IIS.
        2. Don't interrupt the users.
        3. Be able to roll back effortlessly.

    There are a few things that I know have to happen:

        1. Out-of-proc session - handled
        2. Out-of-proc cache - handled

    So the questions that remain:

        1. How do I keep from interrupting the users? If I just upload the files to bin, the app recycles and takes 10+ seconds to come back online.
        2. How do I roll back effortlessly?

    I was thinking a possible solution would be to have two sites set up in IIS, one public and one private. Uploads go to private and get warmed up. After warmup, the sites are swapped. A rollback only entails swapping to private without an upload. This seems sound in theory, but I'm not sure of the mechanics. Any ideas?


  • Dell PowerEdge 860 won't boot but gets power

    - by fierflash
    I have a Dell PowerEdge 860 with 2008 R2 running on it. Here is the scenario I'm currently in: I can boot it up and work as usual, but when I restart it, it just shuts down - it doesn't reboot - and it won't boot up again. It then stays in this "mode": the on/off button blinking green (standby mode?) and the system indicator LED blinking amber. If I press the on/off button on the front panel it won't boot. You can hear the fan from the PSU running, but nothing else tries to boot. If I unplug the power cable and then put it back in again it's the same: the extremely noisy fans won't start and it just sits there. If I let it "rest" for anywhere from 1 to 24 hours while the power cable is unplugged and then plug it in again, it boots directly, without me having to push the on/off button. What I have tried: ran diagnostics (not the ones in the POST phase) and everything is fine; booting without any cables except the power cable; booting without memory installed; booting without the RAID cable plugged in; cleared NVRAM; reseated all the components and cables; Googled like a freak for any possible solution; and tried Dell Support, but I don't have any warranty left so they can't help me unless I pay some ridiculous amount of money, I suppose. Does anyone have experience with the same issue? Do I need to replace something, or is there any other way to determine the problem? Any help will be greatly appreciated.


  • How far should we take the N+N redundancy craziness?

    - by Brann
    The industry standard when it comes from redundancy is quite high, to say the least. To illustrate my point, here is my current setup (I'm running a financial service). Each server has a RAID array in case something goes wrong on one hard drive .... and in case something goes wrong on the server, it's mirrored by another spare identical server ... and both server cannot go down at the same time, because I've got redundant power, and redundant network connectivity, etc ... and my hosting center itself has dual electricity connections to two different energy providers, and redundant network connectivity, and redundant toilets in case the two security guards (sorry, four) needs to use it at the same time ... and in case something goes wrong anyway (a nuclear nuke? can't think of anything else), I've got another identical hosting facility in another country with the exact same setup. Cost of reputational damage if down = very high Probability of a hardware failure with my setup : <<1% Probability of a hardware failure with a less paranoiac setup : <<1% ASWELL Probability of a software failure in our application code : 1% (if your software is never down because of bugs, then I suggest you doublecheck your reporting/monitoring system is not down. Even SQLServer - which is arguably developed and tested by clever people with a strong methodology - is sometimes down) In other words, I feel like I could host a cheap laptop in my mother's flat, and the human/software problems would still be my higher risk. Of course, there are other things to take into consideration such as : scalability data security the clients expectations that you meet the industry standard But still, hosting two servers in two different data centers (without extra spare servers, nor doubled network equipment apart from the one provided by my hosting facility) would provide me with the scalability and the physical security I need. I feel like we're reaching a point where redundancy is just a communcation tool. Honestly, what's the difference between a 99.999% uptime and a 99.9999% uptime when you know you'll be down 1% of the time because of software bugs ? How far do you push your redundancy crazyness ?


  • Poor performance on IIS7, only on Windows Server 2008, fine on Windows 7?

    - by user32005
    Hi, I'm new to IIS7 but have experience with other versions. I've been working on an application that works great in a dev environment (as always), but when I push it to a Windows Server 2008/IIS7 box, performance takes a noticeable hit. The dev environment is Windows 7/IIS7. The configuration in IIS is the same on the dev box as on the server. I've tried all sorts of things to try to find a reason for this, but I can't come to any conclusion. I've ruled out database problems on the live box, as all data is cached after the first request; I've confirmed this to be true and made sure there is no additional database traffic. I've ruled out network issues with a combination of monitoring requests with Fiddler and local debugging on the server. Whenever the code runs on the server, there seems to be a performance issue. The server: Intel Core 2 Quad Q6600 @ 2.40GHz with 2GB RAM. I know this is not fantastic, but I was expecting it to at least perform as well as my dev environment (which is running much more on a lower spec). CPU usage peaks at under 60%, and memory usage is less than half of what is available. I've enabled failed request tracing, and most of the time is spent in a custom HttpModule; this module handles every request, and I can't get any more detail as to what within the module may be causing the problem. Any ideas? I've been pulling my hair out for days now. Thanks


  • How do I run multiple MVC apps within a subdomain on IIS7?

    - by Matthew Patrick Cashatt
    Hello and thanks for looking. Background: I am currently wrapping up a development contract and the client would like me to push a build of the application to their IIS 7-based server, on which they would like to run multiple MVC apps. One of the issues I have off the bat is that this server is already a subdomain on their larger network. So, if I enter SERVERNAME in my browser, it automatically directs to SERVERNAME.COMPANYNAME.COM. Now, this is just fine if I place my application in the default website/root. In this scenario, clicking a link that requests admin.html directs to SERVERNAME.COMPANYNAME.COM/admin.html as usual. BUT they want me to place the app in a subdomain on this server so that they can also run other apps on the same server. So I assume that I need MYAPP.SERVERNAME.COMPANYNAME.COM, but I have no idea how to do that. Complicating matters is that my app and the future ones they wish to install are all MVC-based, which intercepts and rewrites URLs. I assume that takes care of itself if I can just successfully get my app into a subdomain to begin with. What I have tried:

        - Creating a new site on the server in its own app pool
        - Setting the binding for that site to MYAPP.SERVERNAME.COMPANYNAME.COM
        - Setting the binding for that site to MYAPP
        - Setting the binding for that site to MYAPP.SERVERNAME
        - Setting the binding for that site to MYAPP.SERVERNAME.COM
        - Setting the binding for that site to MYAPP.COMPANYNAME.COM

    Nothing is working. Am I missing something simple here? Thanks, Matt


  • ssh-add insists on passphrase

    - by Sam Walton
    I have a new ssh key problem. I have successfully used keys for years with Heroku, Git and other servers so I can log in without having to issue a passphrase. A few weeks ago, I was unable to push a git repository on my machine to Heroku, and it responded with "Permission denied (publickey)". Hmm. Everything else but this Heroku function still works. So I ran:

        ssh-keygen -t rsa -C "newHeroku"

    with no passphrase (hit return so it would be empty). Then I entered:

        sudo chmod 600 ~/.ssh/newHeroku*

    Then:

        ssh-add ~/.ssh/newHeroku.pub

    Hitting return for the passphrase it asks for, it exits without error. The next step is:

        ssh-add /Users/sam/.ssh/newHeroku.pub

    To verify that it's "live" I enter:

        ssh-add -l

    to which the output is still "The agent has no identities." Okay, to eliminate variables, I repeated the key generation process but entered a passphrase for the new key. I ssh-add the new key and get the "Enter passphrase" prompt as expected. Now, this is why I'm posting here and not on a Heroku blog: ssh-add fails because the passphrase I used keeps getting rejected. It appears, even though I have no problem with my keys elsewhere, that something is wrong with the passphrase, because even though the empty-passphrase key gives no errors, I get errors on the one that expects a passphrase. One question: should I expect the passphrase request from ssh-add when I have not set a passphrase? It's been suggested that this is a clue, and I offer it. Or maybe I have a poor understanding of what ssh-add is doing. Wouldn't be the first time I asked a stupid Q. Also, I'm on Lion and have installed no system updates in the few weeks of this period except application updates.
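
    A hedged observation that may explain both symptoms: ssh-add normally expects the private key file, not the .pub - handed a public key it can neither load an identity nor verify a passphrase, which would account for the stray prompt and the empty ssh-add -l output. A quick sketch (file names follow the question; the heroku line is optional and assumes the heroku CLI is installed):

        ssh-keygen -t rsa -C "newHeroku" -f ~/.ssh/newHeroku
        chmod 600 ~/.ssh/newHeroku
        ssh-add ~/.ssh/newHeroku                 # the private key, not newHeroku.pub
        ssh-add -l                               # should now list the key's fingerprint
        # heroku keys:add ~/.ssh/newHeroku.pub   # upload the *public* half to Heroku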


  • I deployed Flash Player via a Software Installation policy. How to upgrade?

    - by eleven81
    I have a Windows Server 2008 machine as my DC. Earlier this year I created a Software Installation GPO to deploy the Adobe Flash Player plugin MSI. I assigned the policy to the computers; about half run Windows XP x86 and the other half Windows 7 x64. That all works like clockwork. When I created the Software Installation policy, I disabled the Flash Player plugin's automatic update feature by editing the MSI in Orca. I did this because I wanted all of my machines to run the exact same version of the plugin. Now some time has passed, a newer version of the Flash Player plugin has been released, and it is time for me to push out the updated version. I already have the new MSI, but I am lost on what to do next. I see the Upgrades tab in the Software Installation GPO, but everything there reads as if it were meant for add-ons to a larger master program, not for updates released over time. I have read that it is best to create a new Software Installation policy with the new MSI, revoke the old GPO, and assign the new GPO. I feel as though, over time, I will wind up with more revoked policies than active ones. I have also read that some people have had success by replacing the old MSI with the new MSI and simply telling the GPO to redeploy. This seems like a backdoor method that will only get me into trouble. In short, what is the correct, best-practice, or preferred way to roll out the new version via Group Policy?


  • How to stop NAT dropping idle connections?

    - by WGH
    I have a TCP connection that can be idle for many hours. The traffic flows from the server to the client only; one might say it's a kind of push notification. My home router, however, tends to drop the connection silently after 20 minutes (the value of /proc/sys/net/netfilter/nf_conntrack_tcp_timeout_established). The server detects the loss once it tries to send anything (I assume it receives an RST from the router itself). As the client never sends anything, it never detects the loss. RFC 5382, "NAT Behavioral Requirements for TCP", states the following: "A NAT can check if an endpoint for a session has crashed by sending a TCP keep-alive packet and receiving a TCP RST packet in response." It makes sense. It's much more effective than sending keep-alives from the host itself (as only the NAT knows its own timeout), and probably not hard to implement. Are there any NAT implementations that do this? It would be great if there was a way to enable this in iptables.
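
    Whether or not a NAT implementation offers RFC 5382-style probing, two hedged workarounds on stock Linux are to raise the conntrack timeout on the router itself (if it is accessible and Linux-based) or to have the endpoints send keepalives more often than the NAT forgets the mapping (this only affects sockets opened with SO_KEEPALIVE):

        # on the router: keep established TCP mappings for 24h (persist via /etc/sysctl.conf)
        sysctl -w net.netfilter.nf_conntrack_tcp_timeout_established=86400

        # on the client and/or server: probe idle connections every 10 minutes
        sysctl -w net.ipv4.tcp_keepalive_time=600
        sysctl -w net.ipv4.tcp_keepalive_intvl=60
        sysctl -w net.ipv4.tcp_keepalive_probes=5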

