Search Results

Search found 21004 results on 841 pages for 'assembly load'.

Page 663 of 841

  • Standards for documenting/designing infrastructure

    - by Paul
    We have a moderately complex solution for which we need to construct a production environment. There are around a dozen components (and here I'm using a definition of "component" that means "can fail independently of other components" - e.g. an Apache server, a WebLogic web app, an FTP server, an ejabberd server, etc.). There are a number of WebLogic web apps, and one thing we need to decide is how many WebLogic containers to run them in. The system needs to be highly available, and communications in and out of the system are typically secured by SSL. Our datacentre team will handle things like VLAN design, racking, server specification and build. So the kinds of decisions we still need to make are: how to map components to physical servers (and WebLogic containers); how to identify all communication paths and ensure each is either resilient itself or has a resilient "upstream" comms path whose failover covers all the single points of failure "downstream"; and where to terminate SSL (on load balancers, or on Apache servers, for instance). My question isn't really about how to make the decisions, but whether there are any standards for documenting (especially in diagrams) the design questions and the design decisions. It seems odd, for instance, that Visio doesn't have a template for something like this - it has templates for more physical layouts and for more logical/software architecture diagrams. So right now I'm using a basic Visio diagram to represent each component and the comms between them, with plans to augment it with hostnames, ports, whether each comms link is resilient, and so on. This all feels like something that must have been done many times before. Are there standards for documenting this?

    Read the article

  • Find slow network nodes between two data centers

    - by 2called-chaos
    I've got a problem syncing a large amount of data between two data centers. Both machines have a gigabit connection and are not fully utilized, but the fastest I am able to get is somewhere between 6 and 10 Mbit - not acceptable! Yesterday I ran some traceroutes, which indicated huge load on a LEVEL3 router, but the problem has existed for weeks now and the high response time is gone (20ms instead of 300ms). How can I trace this to find the actual slow node? I thought about a traceroute with bigger packets, but will this work? In addition, this problem might not be related to one of our servers, as there are much higher transmission rates to other servers or clients. Actually, office-to-server is faster than server-to-server! Any idea is appreciated ;) Update: We actually use rsync over ssh to copy the files. As encryption tends to add bottlenecks, I tried a plain HTTP request, but unfortunately it is just as slow. We have an SLA with one of the data centers. They said they already tried to change the routing, because they say this is related to a cheap network that the traffic gets routed through. It is true that it routes through a "cheapnet", but only the other way around: our direction goes through LEVEL3 and the return path goes through Lambdanet (which they said is not a good network). If I got it right (I'm intermediate at networking), they simulated a longer path to force routing through LEVEL3, and they announce LEVEL3 in the AS path. I basically want to know if they're right or if they're just trying to shirk their responsibility. The thing is that the problem exists in both directions (while taking different routes), so I think it is the responsibility of our hoster. And honestly, I don't believe that there is a DC-to-DC connection which can only handle 600 kb/s to 1.5 MB/s for weeks! The question is how to detect WHERE this bottleneck is.

    Read the article

  • configuring lighttpd for large downloads

    - by ahmedre
    I run a web site that hosts pages that are just general scripts (PHP, etc.) and MP3 downloads (some of which are fairly large - up to 200 MB). I am running lighttpd on the servers on Linux (Ubuntu 64). Everything is fine, but under high load the server is not accessible (or very slow - even SSHing in takes a while), and I am guessing this is due to a huge number of MP3 downloads at that time. Consequently, DNS sees the server as down and redirects all the traffic to the other servers, and after a while it comes back up and things work again. So what's the best way to fix this? Ideally, I want the server to continue running (and the web pages - PHP etc. - to always work, but downloads don't always have to work). Should I just have 2 web servers running (one for the downloads and one for the PHP pages), or is it perhaps something I can fix in my lighttpd configuration? Here are the snippets from my configuration:

        server.max-worker = 4
        server.max-fds = 2048
        server.max-keep-alive-requests = 4
        server.max-keep-alive-idle = 4
        server.stat-cache-engine = "fam"
        fastcgi.server = ( ".php" => ((
            "bin-path" => "/usr/bin/php-cgi",
            "socket" => "/tmp/php.socket",
            "max-procs" => 1,
            "idle-timeout" => 20,
            "bin-environment" => (
                "PHP_FCGI_CHILDREN" => "64",
                "PHP_FCGI_MAX_REQUESTS" => "1000"
            ),
            "bin-copy-environment" => ( "PATH", "SHELL", "USER" ),
            "broken-scriptfilename" => "enable"
        )) )

        # normal php site
        $HTTP["host"] =~ "bar.com" {
            server.document-root = "/usr/local/www/sites/bar.com/"
            accesslog.filename = "|/usr/sbin/cronolog /var/log/lighttpd/%m/%d/%H/bar.log"
        }

        # download site
        $HTTP["host"] =~ "(download|stream).foo.com" {
            server.document-root = "/home/audio/"
            dir-listing.activate = "enable"
            dir-listing.hide-dotfiles = "enable"
            evasive.max-conns-per-ip = 1
            evasive.silent = "enable"
            # connection.kbytes-per-second = 256
            accesslog.filename = "|/usr/sbin/cronolog /var/log/lighttpd/%m/%d/%H/download.log"
        }

    Read the article

  • What to do when a device has no driver for Windows 7 but it has Vista, XP drivers

    - by Mehper C. Palavuzlar
    This has always been a bothersome matter for me. Some devices (printers, scanners, etc.) have drivers for older versions of Windows (Vista, XP, 2000, NT) but no driver for Windows 7. What are my chances of installing such devices on Windows 7? Example case: I have a Sharp printer & scanner (Sharp AR-122E N) which I used with my old Windows XP based PC. Now I want to install it on a Windows 7 x64 based PC. Windows 7 cannot load its driver. I used the original driver CD, but when I run the setup.exe (which is included in AR122EN111.exe, 6713 KB), it says: Cannot install driver on this operating system. Supported operating systems are: Windows 2000, XP, Vista. I tried to install the driver using compatibility settings. I tried Windows Vista and Windows XP SP3, but to no avail; the setup gave the same error. I also googled for a Windows 7 driver for "Sharp AR-122E N", but it only listed the original driver that I tried. The official Sharp site does not even list the driver for this product. In the past, the compatibility setting workaround did work for some devices, but this time it failed. What else can I do to overcome this problem?

    Read the article

  • Windows 7 64 bit Installation freezes after a while during the setup

    - by vinz243
    I have Windows 7 32-bit on my computer. Because I have 5 GB of RAM (Kingston) on my Asus M2N motherboard and only 3 GB were usable, I bought Windows 7 x64 and tried to install it. It loads the wizard, but after a while it freezes and I have to force a reboot. It first crashed while unzipping the Windows 7 files, but if I wait a while on the terms page, for example, it can crash before that point, which makes me think it is a matter of time rather than of a particular step. I remember I had the same issue while booting Ubuntu x64: it crashed randomly and never loaded completely. No beep or other messages. Configuration:

        Software
        OS (before): Windows 7 x86 Pro
        New OS: Windows 7 x64 Pro
        Antivirus: Avast (BIOS verification?)
        BIOS: 03/27/2008 - v08.00.12

        Hardware
        Motherboard: Asus M2N
        Processor: AMD Athlon 64 dual-core @ 2.6 GHz
        Memory: 5120 MB ((2 + 2) + (1))

    NOTES: I ran a memory test using an openSUSE CD; though I have not finished it, it ran. EDIT: I tried not running the setup but just waiting, and I get a BSOD: "A problem..." (TL;DW) IRQL_NOT_LESS_OR_EQUAL "If it is..." (TL;DW) ***STOP: 0x0000000A (0x0000000000000000, 0x0000000000000002, 0x0000000000000001, 0xFFFFF8001A49ED1F)

    Read the article

  • Apache httpd workers retry

    - by David Newcomb
    I have an Apache httpd web server running mod_proxy and mod_proxy_balancer. The whole of /somedir is sent to 2 worker machines, which service the requests using the round robin scheduler. Each worker machine is running IIS, but I don't think that is important. I can demonstrate the load balancer working by repeatedly requesting a single page which contains the IP address of the machine, and I can see that it switches from one to the other in a predictable round robin fashion. If I switch off one of the IIS servers and keep requesting the same page, then each response only contains the IP address of the machine that is up. However, if I start IIS but don't run my IIS application, then /somedir returns 500 (as it should). I've added 500 to failonstatus (Apache 2.4), so when it hits the error, Apache places the worker machine into the error state. Apache still returns the proxy error to the client, though. How can I make Apache catch the proxy failure and retry using a different worker, in the same way it does for a connection failure? Update: Almost the same question has been asked on Stack Overflow, so I'm linking them together: http://stackoverflow.com/questions/11083707/httpd-mod-proxy-balancer-failover-failonstatus-transperant-switching
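
    For reference, a minimal sketch of the balancer setup described above (Apache 2.4 syntax; the backend addresses are hypothetical stand-ins for the two IIS workers). Note that failonstatus only puts the worker into the error state - it does not by itself re-dispatch the failed request to the other worker, which is exactly the gap being asked about:

        <Proxy "balancer://iisworkers">
            # Hypothetical backend addresses for the two IIS machines
            BalancerMember "http://192.0.2.11:80"
            BalancerMember "http://192.0.2.12:80"
            # Round robin, and treat a backend 500 as a worker failure
            ProxySet lbmethod=byrequests failonstatus=500
        </Proxy>

        ProxyPass        "/somedir" "balancer://iisworkers"
        ProxyPassReverse "/somedir" "balancer://iisworkers"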

    Read the article

  • Help, my CentOS servers keep going down, "No route to host" after a random uptime

    - by user249071
    Hello, I have a couple of CentOS Linux servers that have a very simple task: they run nginx + FastCGI for PHP, with some read-only NFS mounts between them. They receive some RPC commands from a main server to start downloading processes with wget - nothing fancy - but their behavior is very unstable: they simply go down. We tried to monitor RAM, processor usage, even network connections; they don't load up much - 250 network connections max, 15% processor usage, and memory doesn't even fill up (2.5 GB out of 8 GB). I have no idea why a Linux server would go down like that; they aren't even public servers - no domain names installed, no public serving of sites. The only thing I've discovered is that if I don't restart the network service every couple of hours or so, the servers become very slow and start apps very slowly, yet without reporting high resource usage. Maybe CentOS doesn't free timed-out connections, or something like that? It's based on Red Hat, right? I'm not a Linux expert, but I'm sure there are a few guys out there who can easily answer this, or at least have some leads on what I can do. I haven't installed Snort or other tools to check whether we're under a DoS attack; still, the scheduled script that restarts the network each hour should put the system back online, and it doesn't... Thank you in advance

    Read the article

  • Photo managing software that supports network drives?

    - by musicfreak
    My dad is a photographer in his free time, and he's been using Lightroom to manage his photos. However, recently, we put all of our photos on a NAS drive to allow us to access them from any computer at any time. The problem with this is that Lightroom cannot load catalogs from network drives. We need support for network drives because we'd like to be able to browse the photos from any computer, and for any computer to be able to add photos to the collection. Right now we're just syncing the Lightroom catalog file between us, but the extra step is a pain, and doing it manually makes it error-prone. Is there any software (free or commercial) that has proper support for network drives? The only real feature I need is to be able to sort photos by date and by some sort of tags. I don't need any editing features like those found in Lightroom; my dad is comfortable using Photoshop to edit photos. Also, if there is another solution to this that I haven't thought of, feel free to share.

    Read the article

  • Why is this setting for Name-based Virtual Host settings not working?

    - by Kave
    I have two domains (siteA.com & siteB.com) that point to the same webserver, and I would like to show different web pages for each. The steps I have taken so far are: copy the default site (siteA) to siteB:

        1) sudo cp /etc/apache2/sites-available/default /etc/apache2/sites-available/siteB
        2) sudo vim /etc/apache2/sites-available/siteB

        <VirtualHost *:80>
            ServerAdmin [email protected]
            DocumentRoot /var/www/siteB
            <Directory />
                Options FollowSymLinks
                AllowOverride None
            </Directory>
            <Directory /var/www/siteB>
                Options Indexes FollowSymLinks MultiViews
                AllowOverride FileInfo Indexes
                Order allow,deny
                allow from all
            </Directory>
        </VirtualHost>

    Then I created /var/www/siteB and put a sample index.html in there. However, when I load my domain siteB.com, I still get directed to /var/www/siteA. Why is that? Do I have to rename /etc/apache2/sites-available/default to /etc/apache2/sites-available/siteA as well? UPDATE: Thanks to the answer below - it turned out that, besides enabling the site, I had also forgotten this entry:

        <VirtualHost *:80>
            ServerAdmin [email protected]
            ServerName siteB.com
            ServerAlias www.siteB.com
        </VirtualHost>

    To include all subdomains as well, use:

        <VirtualHost *:80>
            ServerAdmin [email protected]
            ServerName siteB.com
            ServerAlias *.siteB.com
        </VirtualHost>

    The same goes for siteA.
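
    A side note (an assumption, not part of the original post): on Apache 2.2, name-based virtual hosts also depend on a NameVirtualHost directive that matches the address used in the VirtualHost blocks - without it, every request falls through to the first vhost defined, which is exactly the symptom described. Ubuntu normally ships it in /etc/apache2/ports.conf, and the new site still has to be enabled (a2ensite siteB) and Apache reloaded:

        # /etc/apache2/ports.conf (Apache 2.2; usually already present on Ubuntu)
        NameVirtualHost *:80
        Listen 80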

    Read the article

  • How to embed/hardcode SRT subtitles into mp4 videos with VLC?

    - by Jens Bannmann
    I'm looking for a way to "burn in" or render/embed/hardcode subtitles (from an SRT file) into an MP4 video with VLC. But no matter what options I use, it never works properly. I get a file that plays the video way too fast (audio is normal), or one that plays normally but actually does not have embedded subtitles. Also, with some options (like the ones below), it does not play in QuickTime, only in VLC. So the main question is: how can I make this work in VLC? Secondary questions are: How do I decide which options I should set? Which settings are best if I want to leave the file bitrate etc. the same as much as possible and only embed subtitles? It seems I cannot leave the fields empty or Video/Audio unchecked, so I guess I would first need to figure out the original audio and video bitrate. What do the "Scale" and "Channels" options mean? ... none of which are answered within the VLC documentation. For example, this is one set of options I used in the "Advanced Open File…" dialog:

        Advanced Open File…
        myFileName.mp4
        [ ] Treat as a pipe rather than as a file
        [x] Load subtitles file: mySubtitleFileName.srt
        [ ] Play another media synchronously
        [x] Streaming/Saving
        Streaming and Transcoding Options
        [ ] Display the stream locally
        (o) File [outputFileName.mp4]
        [ ] Dump raw input
        Encapsulation Method: (MPEG 4)
        Transcoding options
        [x] Video (mp4v)   Bitrate (kb/s) [256]   Scale [1]
        [x] Audio (mp3)    Bitrate (kb/s) [128]   Channels [1]

    Read the article

  • Improving performance by using an additional static file server

    - by Max
    Hello there, I'm planning a large website that includes many static assets (JS, CSS, images and thumbnails) in the generated pages. That website will use TYPO3 as its CMS (it is a customer requirement). I guess I could seriously improve performance / page load times by using a two-server setup: one server where the main application (PHP) runs, and another one where the static files sit, served by a trimmed-down version of Apache or something like lighttpd. Including e.g. JS or CSS files from the file server is of course no big deal - just use an absolute URL like http://static.example.com/js/main.js and be done with it. But: that website will have pages with MANY thumbnails of e.g. product images on them. So I see two problems when the main application tries to create a thumbnail of some image: the original image, like products/some.jpg, is uploaded to the static file server and is therefore not on the same server as the PHP application that tries to create the thumbnail; and TYPO3 writes created thumbnails to a temp directory which is expected to be on the same server. Therefore, hundreds of thumbnails will be written to and served from that temp directory, which is on the same server as the main application - the static file server is in that case basically useless, since all thumbnails will be requested from the server of the main application. So, my question is: how do I overcome these shortcomings? Is it possible to "symlink" some directories to another server? So, for example, if PHP tries to open the original product image for thumbnail creation with imagecreate("products/some.jpg"), the products folder actually "points" to the products folder on the static image server? I know something like this can be done with .htaccess, but is it possible at the file system level?

    Read the article

  • How browsers handle multiple IPs

    - by Sandman4
    Can someone direct me to information on the exact browser behavior when a browser gets multiple A records for a given hostname (say ip1 and ip2) and one of them is not accessible? I'm interested in EXACT details, like (but not limited to): Will the browser get both IPs from the OS, or only one? Which IP will the browser try first (random, or always the first one)? Now, let's say the browser started with the failed ip1: For how long will the browser keep trying ip1? If the user hits "stop" while it waits for ip1 and then clicks refresh, which IP will the browser try? What will happen when it times out - will it start trying ip2 or give an error? (And if an error, which IP will the browser try when the user clicks refresh?) When the user clicks refresh, will any browser attempt a new DNS lookup? Now let's assume the browser tried the working ip2 first: For the next page request, will the browser still use ip2, or might it randomly switch IPs? For how long do browsers keep IPs in their cache? When a browser sends a new DNS request and gets the SAME IPs, will it CONTINUE to use the same known-to-be-working IP, or does the process start from scratch so that it may try either of the two? Of course this may all be browser dependent, and may also vary between versions and platforms; I'd be happy to have as much detail as possible. The purpose of this: I'm trying to understand what exactly users will experience when round-robin DNS is used and one of the hosts fails. Please, I'm NOT asking about how bad DNS load balancing is, and please refrain from answering "don't do it", "it's a bad idea", "you need heartbeat/proxy/BGP/whatever" and so on.

    Read the article

  • Graphite not running

    - by River
    I'm currently trying to install graphite 0.9.9 on a gentoo box using these instructions from the graphite wiki. Essentially, it fronts graphite using apache and mod_wsgi. Everything seems to have gone well, except that apache / the graphite webapp never seem to return a response to the web browser (the browser continuously waits to load the page). I've turned on the graphite debug info, but the only message in the log files is this, repeated over and over again in info.log (with the pid always changing):

        Thu Feb 23 01:59:38 2012 :: graphite.wsgi - pid 4810 - reloading search index

    These instructions have worked for me before to set up graphite on an Ubuntu machine. I suspect that mod_wsgi is dying, but I have confirmed that mod_wsgi works fine when not serving the graphite webapp. This is what my graphite.conf vhost file looks like:

        WSGISocketPrefix /etc/httpd/wsgi/

        <VirtualHost *:80>
            ServerName # Server name
            DocumentRoot "/opt/graphite/webapp"
            ErrorLog /opt/graphite/storage/log/webapp/error.log
            CustomLog /opt/graphite/storage/log/webapp/access.log common

            # I've found that an equal number of processes & threads tends
            # to show the best performance for Graphite (ymmv).
            WSGIDaemonProcess graphite processes=5 threads=5 display-name='%{GROUP}' inactivity-timeout=120
            WSGIProcessGroup graphite
            WSGIApplicationGroup %{GLOBAL}
            WSGIImportScript /opt/graphite/conf/graphite.wsgi process-group=graphite application-group=%{GLOBAL}

            WSGIScriptAlias / /opt/graphite/conf/graphite.wsgi

            Alias /content/ /opt/graphite/webapp/content/
            <Location "/content/">
                SetHandler None
            </Location>

            # XXX In order for the django admin site media to work you
            # must change @DJANGO_ROOT@ to be the path to your django
            # installation, which is probably something like:
            # /usr/lib/python2.6/site-packages/django
            Alias /media/ "/usr/lib64/python2.6/site-packages/django/contrib/admin/media/"
            <Location "/media/">
                SetHandler None
            </Location>

            # The graphite.wsgi file has to be accessible by apache. It won't
            # be visible to clients because of the DocumentRoot though.
            <Directory /opt/graphite/conf/>
                Order deny,allow
                Allow from all
            </Directory>
        </VirtualHost>

    Read the article

  • Setup: Eclipse in Ubuntu with Apache2 and Subversion

    - by Ricalsin
    Trying to set up Eclipse. I am running Ubuntu 10.10 (Maverick), Apache 2.2.16 and Subversion 1.6.12. The Eclipse Help > About > Installed Software dialog says: Eclipse Platform 3.5.2, Subclipse 1.0.0, Version Control with Subversion 1.1.1. The Subclipse wiki I followed is here. I have installed the libsvn-java package as discussed. I added the line "-Djava.library.path=/usr/lib/jni" to the eclipse.ini file. I checked the Eclipse Help > About > Configuration settings and both of these lines are listed:

        eclipse.vmargs=-Djava.library.path=/usr/lib/jni
        java.library.path=/usr/lib/jni

    I checked that those files are in those directories. Still, when I check Preferences > Team > SVN, an error dialog shows: Failed to load JavaHL Library. These are the errors that were encountered: no libsvnjavahl.1 in java.library.path; Incompatible JavaHL library loaded, 1.3.x or later required. I followed the "Testing JavaHL libraries" troubleshooting section at the bottom of the wiki: I downloaded the tarball and ran it in a folder on my desktop with no problems. Then, I followed the instructions and placed that file INSIDE the path (/usr/lib/jni/testJavaHL) and ran it from there. There are 50 tests performed, and each one of them came back with this same error (posting only one for brevity):

        50) testCommitRevprops(org.tigris.subversion.javahl.BasicTests)java.io.FileNotFoundException: /usr/lib/jni/testJavaHL/local_tmp/greek_files/iota (No such file or directory)
            at java.io.FileOutputStream.open(Native Method)
            at java.io.FileOutputStream.<init>(FileOutputStream.java:209)
            at java.io.FileOutputStream.<init>(FileOutputStream.java:160)
            at org.tigris.subversion.javahl.WC.materialize(WC.java:70)
            at org.tigris.subversion.javahl.SVNTests.buildGreekFiles(SVNTests.java:303)
            at org.tigris.subversion.javahl.SVNTests.setUp(SVNTests.java:222)
            at org.tigris.subversion.javahl.RunTests.main(RunTests.java:111)
        FAILURES!!!
        Tests run: 50, Failures: 0, Errors: 50

    Any ideas as to how/why the "local_tmp/greek_files/iota" path gets appended to the directory? I assume that's my problem. I'm also having a problem with the new repository location dialog, as the directory location of my svn repository is one level above my home directory - which gets prepended to whatever I place in the dialog box - resulting in this error:

        svn: '/home/ricalsin/file:/home/svn' does not exist

    Thank you for any help.

    Read the article

  • Site Goes Offline Every Day At Midnight - No One Knows Why

    - by HollerTrain
    A website I manage has been going offline and back online between 12:00a and 12:25a every day. I have no idea what is causing the issue, so I am seeking guidance on where to start. It is a WordPress based site. Here is what I DO know: I have a Pingdom account which alerts me when the site goes offline, so we can see that every day, like clockwork, the site goes down and comes back up. At the time of the ups/downs I see a lot of strain on the memory usage. Look at the load average when the site is going offline/online (http://screencast.com/t/BRlfXkqrbJII). Then I ran this command to restart httpd (http://screencast.com/t/usVtYWZ2Qi) and the memory usage then goes down to this (http://screencast.com/t/VdTIy3bgZiQB). An hour after I restarted httpd, the site went offline/online again, so restarting httpd didn't help much. While the site was going offline/online, I ran the top command and got this (http://screencast.com/t/zEwr7YQj3). Here is a top command when the site is at its lowest (http://screencast.com/t/eaMfha9lbT - so this would be dubbed "normal"). Here is a bandwidth report (http://screencast.com/t/AS0h2CH1Gypq). The traffic doesn't seem to be that much (http://screencast.com/t/s7hrWNNic1K), but looking at the times the site goes up/down, this may be one of the reasons? I have the dvp Nitro package at Media Temple (http://mediatemple.net/webhosting/nitro/). So at this point I would request some help in trying to figure out what the cause of this is, and how I can go about pinpointing the issue. ANY HELP is greatly appreciated.

    Read the article

  • Having trouble keeping a 1GB RAM Centos server running

    - by Josh
    This is my first time configuring a VPS server and I'm having a few issues. We're running WordPress on a 1 GB CentOS server configured per the internet (online research). No custom queries or anything crazy, but we're closing in on 8K posts. At arbitrary intervals, the server just goes down. From the client side, it just says "Loading..." and will spin more or less indefinitely. On the server side, the shell locks up completely. We have to do a hard reboot from the control panel and then everything is fine. Watching top, I see memory usage hovering between 35-55% generally, with occasional spikes up to around 80%. When I saw it go down, there were about 30-40 Apache processes showing, which pushed memory over the edge. error_log tells me that MaxClients was reached right before each reboot instance. I've tried tinkering with that, but to no avail. I think we'll probably need to bump the server up to the next RAM level, but with ~120K pageviews per month, it seems like that's a bit overkill, since it was running fairly well on a shared server before. Any ideas? httpd.conf and my.cnf values to add? I'll update this with the current ones if that helps. Thanks in advance! This has been a fun and important learning experience but, overall, quite frustrating! Edit: quick top snapshot:

        top - 15:18:15 up 2 days, 13:04,  1 user,  load average: 0.56, 0.44, 0.38
        Tasks:  85 total,   2 running,  83 sleeping,   0 stopped,   0 zombie
        Cpu(s):  6.7%us,  3.5%sy,  0.0%ni, 89.6%id,  0.0%wa,  0.0%hi,  0.1%si,  0.0%st
        Mem:   2051088k total,   736708k used,  1314380k free,   199576k buffers
        Swap:  4194300k total,        0k used,  4194300k free,   287688k cached
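
    A hedged illustration of the usual tuning direction (not taken from the post): with WordPress under mod_php, each Apache child commonly uses 25-50 MB, so on a small VPS the prefork MaxClients has to be capped low enough that worst-case memory stays below physical RAM; otherwise the box swaps and locks up much as described. The numbers below are placeholders to adapt, not recommendations:

        # httpd.conf (Apache 2.2, prefork MPM) -- illustrative values only
        <IfModule prefork.c>
            StartServers          4
            MinSpareServers       4
            MaxSpareServers       8
            ServerLimit          30
            MaxClients           30    # ~30 children x ~30 MB each stays within 1 GB
            MaxRequestsPerChild 500    # recycle children so leaked memory is returned
        </IfModule>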

    Read the article

  • Apache2: Limit simultaneous requests & throttle bandwidth per IP/client?

    - by xentek
    I want to limit simultaneous requests & throttle bandwidth per IP/client on a single Apache vhost. In other words, I want to ensure that this site, which hosts large media files, doesn't get hammered by someone trying to download everything all at once (which just happened the other night). I'd like to limit the outgoing transfer speed overall for this site, as well as limit the number of connections a single IP can make to the server to a sane default (i.e. within normal browser limits for multiple requests, so page loads aren't affected too much). Bonus points if I can actually scope it to file types (i.e. leave web files alone, but apply these rules just to the media files). We're running Ubuntu 9.04 on all the servers, and have two Apache/PHP servers being load balanced via round robin by a Squid proxy server. MySQL is running on its own box as well. We've got plenty of bandwidth to give them, so I don't really want overall caps; I just want to throttle the amount of memory/CPU it takes to serve this site. There are other sites on these servers that we don't want to apply these rules to; we just want to keep this one from hogging all the resources. Let me know if you need more info! Thanks in advance for your suggestions!
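
    One hedged sketch of the bandwidth half of this (an assumption, not something from the post): on Apache 2.4 and later, the bundled mod_ratelimit can cap the transfer rate per connection, and a FilesMatch block scopes it to media files only. It does not handle the per-IP connection limit (third-party modules such as mod_limitipconn or mod_bw are the usual route there), and it is not available in the stock Apache 2.2 that Ubuntu 9.04 ships. The directory path and extensions are hypothetical:

        # Apache 2.4+ only -- requires mod_ratelimit to be loaded
        <Directory "/var/www/mediasite/media">
            <FilesMatch "\.(mp3|mp4|mov|zip)$">
                SetOutputFilter RATE_LIMIT
                SetEnv rate-limit 400    # KiB per second, per connection
            </FilesMatch>
        </Directory>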

    Read the article

  • Http header 304 and caching?

    - by Royi Namir
    Our company uses these settings (don't ask me why): for every request, they want a new request to go to the server rather than to the cache. This is an intranet system which uses only IE, and this is defined in the IE browser cache settings. We also have Windows authentication (NTLM) in IIS 7. I have 2 questions, please. Question #1) When the browser makes a request for a CSS file (leave the 401 response aside for now - that is just how NTLM works), it sends the request with an If-Modified-Since header. Why is it adding this header? How can I configure that? Why doesn't it follow the IE settings and try to download the file each time, as I showed in the first picture? Question #2) The response (after the NTLM negotiation) was a 304 Not Modified, and I assume that's because we sent the request with the If-Modified-Since header. But there is a problem: it effectively tells the browser to serve the file from its cache, even though I explicitly told IE in its settings not to load from the cache. What am I missing here? Thanks a lot.

    Read the article

  • Web browsing through SSH tunnel gets stuck/clogged

    - by endolith
    I use tools like Tunnelier to log into my home Tomato router through SSH, and then use it as a proxy for web browsing, tunnel for Remote Desktop/VNC, etc. Most days it works great, but some days every page I try to view gets stuck, like the tunnel is clogged. I load a web page and it seems to be loading, then stops, with the little loading icon spinning and nothing happening. I refresh the page, I reboot the router, I reboot the other computers on my home network and turn off any bandwidth-hogging services on them, I've turned on QoS on the router to prioritize SSH. I don't understand what's getting stuck. Rebooting or disconnecting/reconnecting the SSH tunnel improves responsiveness for a minute, but then it gets clogged again. It also seems to help if I don't do anything on the tunnel for a few minutes, then it will be responsive for a bit and then get clogged again. Trying to open a terminal console from Tunnelier is also unresponsive, so it's not just a web browsing problem. Likewise, connecting to http://192.168.1.1 in the browser (to the router's web config through its own tunnel) is also slow/laggy/halting. The realtime bandwidth reported by the router is nowhere near my DSL connection's limits, though it does show big spikes during the laggy times, and the connection is responsive when it shows low bandwidths. How do I troubleshoot something like this?

    Read the article

  • Performance data collection for short-running, ephemeral servers

    - by ErikA
    We're building a medical image processing software stack, currently hosted on various AWS resources. As part of this application, we have a handful of long-running servers (database, load balancers, web application, etc.). Collecting performance data on those servers is quite simple - my go-to recipe of Nagios (for monitoring/notifications) and Munin (for collection of performance data and displaying trends) will work just fine. However - as part of this application, we are constantly starting up and terminating compute instances on EC2. In typical usage, these compute instances start up, configure themselves, receive a job from a message queue, and then get to work processing that job, which takes anywhere from 15 minutes to over 8 hours. After job completion, these instances get terminated, never to be heard from again. What is a decent strategy for collecting performance data on these short-lived instances? I don't necessarily need monitoring on them - if they fail for whatever reason, our application will detect this and handle re-starting the job on another instance or raising the flag so an administrator can take a look at things. However, it still would be useful to collect information like CPU (user, idle, iowait, etc.), memory usage, network traffic, disk read/write data, etc. In our internal database, we track the instance ID of the machine that runs each job, and it would be quite helpful to be able to look up performance data for a specific instance ID for troubleshooting and profiling. Munin doesn't seem like a great candidate, as it requires maintaining a list of munin nodes in a text file - far from ideal for an environment with a high amount of churn; and for the short amount of time each node will be running, I'd rather keep the full-resolution data indefinitely than have RRD water down the data over time. In the end, my guess is that this will require a monitoring engine that uses a database (MySQL, SQLite, etc.) for configuration and data storage, and exposes an API for adding/removing hosts and services. Are there other things I should be thinking about when evaluating options? Perhaps I'm over-thinking this, though, and just ought to run sar at 1-minute intervals on these short-lived instances and collect the sar db files prior to termination.

    Read the article

  • How to configure 2 LAN cards in a Windows 7/8 PC, one to connect to the Internet and the other to the local network

    - by Maharshi Raval
    I am about to install a dedicated VoIP server in our office. It is a 3CX PBX system on a Windows 7/8 machine. The environment currently is Windows SBS 2011 with 8 client machines. I want to use a dedicated broadband connection for the PBX (3CX) box, but the box also needs to be accessible on the local network, as we will be using IP phones and software IP phones. How do I configure two network cards on the PBX box so that one is always used to connect to our SIP host over the Internet, and the other is connected to the local network so the client PCs can reach the PBX system? It must be noted that the Windows SBS 2011 box currently acts as the primary domain controller and gateway for all the client machines. I cannot use a load balancer, as it would conflict and cause issues with the current setup of our SBS 2011, which is also our Exchange server. Any input is much appreciated. Thanks in advance.

    Read the article

  • Windows 7 Install: No drives were found

    - by Albert Bori
    I was building a computer for my wife with an older SATA hard drive that I had lying around, and when attempting to do a new install of Windows 7 on it, the installer says: "No drives were found. Click Load Driver to provide a mass storage driver for installation." I ran the diskpart command list volume, and the drive showed up as "RAW". So, I formatted it to NTFS, and then it showed up as a healthy drive in diskpart. I also ran check disk on it with no errors. The Windows 7 installer STILL can't find the drive. As far as BIOS settings go, I have tried "Native IDE", AHCI, and a mixed AHCI/IDE mode (SATA slots 0-2 AHCI, 3-4 IDE). I tried all combinations... still "no drives were found". At this point, I'm just scratching my head. Using the installation DOS window, I can see and talk to the drive just fine, but the installer just doesn't see it at all. I've even written folders and files to the drive, and it still "can't be seen". Any help would be great. Items of interest:

        Motherboard model: Gigabyte GA-A75M-UD2H - BIOS Version F5 (latest)
        Hard drive model: 80GB Seagate Barracuda 7200.7 ST380817AS (no other drives)
        Installing Windows 7 using a FAT32 formatted USB drive, which I've used for other installs

    Read the article

  • GlassFish v2.1 -- getting Application Client and Eclipselink to work together?

    - by Nick
    We are trying to use EclipseLink 1.1 with GlassFish v2.1, following the instructions at: http://wiki.glassfish.java.net/Wiki.jsp?page=FaqEclipseLinkGlassFishV2. I adapted the instructions for the appclient script on Linux by adding the lines:

        APPCPATH=$APPCPATH:$AS_INSTALL/lib/eclipselink-1.1.1.jar
        export APPCPATH

    to the appclient shell script. This, however, is not working. On running the application client (using GlassFish's Web Start), I get the error: WARNING: "IOP00810257: (MARSHAL) Could not load class org.eclipse.persistence.indirection.IndirectList". Has anyone else succeeded in getting GlassFish v2.1 to work with EclipseLink? Or any ideas on what I might be doing wrong? I found this bug report: http s://glassfish.dev.java.net/issues/show_bug.cgi?id=8204 (New users can't post more than 1 link, so remove the space between 'http' and 's'.) where Tim Quinn (tjquinn) said: "App client container support for persistence is not yet in place." I think this refers only to GlassFish v3, and it should be working in GlassFish v2. Is this correct? I'm working on the assumption that this will work once the ACC knows where to find the EclipseLink jar. Thanks in advance, Nick.

    Read the article

  • mod_fcgi in virtualmin: graceful kill fail, sending SIGKILL?

    - by mgjk
    Yesterday around 1am, our server ground to a crawl. This doesn't happen often, but I'm trying to get to the bottom of it. There was no unusual traffic volume and no unusual processes running; all of a sudden the server just started killing fcgid processes:

        [Thu Aug 02 01:17:32 2012] [warn] mod_fcgid: process 26460 graceful kill fail, sending SIGKILL

    ... for as many fcgid processes as we have. CPU idle fell to 0% and I/O seemed to take up most of the load. The issue lasted about 5 minutes. I suspect there was some swap activity, although I'm not sure if it was due to killed processes being swapped in to die, or if some process ramped up memory usage faster than my process-watching scripts could see it. The OOM killer wasn't triggered (at least it's not logged), so I think this was Apache restarting the processes for some reason. This is not a regular occurrence, and nothing obvious appears in cron. Is there a normal Apache process which might cause this? We run dozens of different sites, and it was late at night, so volume was very, very low (maybe 200 requests in a 10-minute period).
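
    For context, a hedged sketch of the mod_fcgid directives that govern when Apache reclaims FastCGI processes - the "graceful kill fail" warning is logged when a process does not exit during the graceful window and is sent SIGKILL instead. The Fcgid-prefixed spellings apply to mod_fcgid 2.3.6 and later (older releases use unprefixed names), and the values are placeholders rather than recommendations:

        # Apache mod_fcgid process-management settings (illustrative values)
        FcgidMaxProcesses          100    # global cap on FastCGI processes
        FcgidMaxProcessesPerClass   10    # cap per vhost/wrapper class
        FcgidIdleTimeout           300    # idle processes are terminated after this many seconds
        FcgidProcessLifeTime      3600    # processes older than this are retired at the next scan
        FcgidIdleScanInterval      120    # how often the process manager checks for idle/expired processes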

    Read the article
