Search Results

Search found 1087 results on 44 pages for 'serving'.


  • Client-Side Dynamic Removal of <script> Tags in <head>

    - by merv
    Is it possible to remove script tags in the <head> of an HTML document client-side, and prior to execution of those tags? On the server side I am able to insert a <script> above all other <script> tags in the <head>, except one, and I would like to be able to remove all subsequent scripts. I do not have the ability to remove <script> tags from the server side. What I've tried:

        (function (c, h) {
            var i, s = h.getElementsByTagName('script');
            c.log("Num scripts: " + s.length);
            i = s.length - 1;
            while (i > 1) {
                h.removeChild(s[i]);
                i -= 1;
            }
        })(console, document.head);

    However, the logged number of scripts comes out to only 1, since (as @ryan pointed out) the code is being executed prior to the DOM being ready. Although wrapping the code above in a document-ready event callback does enable proper calculation of the number of <script> tags in the <head>, waiting until the DOM is ready fails to prevent the scripts from loading. Is there a reliable means of manipulating the HTML prior to the DOM being ready?

    Background: if you want more context, this is part of an attempt to consolidate scripts where no option for server-side aggregation is available. Many of the JS libraries being loaded are from a CMS with limited configuration options. The content is mostly static, so there is very little concern about manually aggregating the JavaScript and serving it from a different location. Any suggestions for alternative applicable aggregation techniques would also be welcome.
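
    A minimal sketch of one possible approach, assuming a browser with MutationObserver support (which postdates this question): the one script that can be injected first registers an observer that neutralizes parser-inserted scripts before they execute. The data-keep attribute is a hypothetical marker for scripts that should survive.

        // Sketch only: run this as the very first <script> in <head>.
        var observer = new MutationObserver(function (mutations) {
            mutations.forEach(function (m) {
                Array.prototype.slice.call(m.addedNodes).forEach(function (node) {
                    if (node.tagName === 'SCRIPT' && !node.hasAttribute('data-keep')) {
                        // Changing the type prevents execution even if removal races.
                        node.type = 'text/blocked';
                        node.parentNode.removeChild(node);
                    }
                });
            });
        });
        observer.observe(document.documentElement, { childList: true, subtree: true });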

  • Lightweight HTTP application/server for static content

    - by PartlyCloudy
    Hi, I am in need of a scalable and performant HTTP application/server that will be used for static file serving/uploading, so I only need support for GET and PUT operations. However, there are a few extra features that I need:

    Custom authentication: I need to check credentials against a database for each request, thus I must be able to integrate proprietary database interaction.

    Support for signed access keys: access to resources via PUT should be signed using a key like http://uri/?key=foo. The key contains information about the request, like md5(user + path + secret), which allows me to block unwanted requests; the application/server should allow me to check for this.

    Performance: I'd like to avoid piping content as much as possible; otherwise the whole application could be implemented in a few lines of Perl CGI.

    Perlbal (in webserver mode) looks nice; however, the single-threaded model does not fit with my database lookup, and it also does not support query strings. Lighttpd/Nginx/etc. have some modules for these tasks; however, it is not feasible to put everything together without ending up writing my own extensions/modules. So how would you solve this? Are there other lightweight webservers available for this? Should I implement an application inside a webserver (i.e. CGI)? How can I avoid or speed up piping content between the webserver and my application? Thanks in advance!
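
    As a sketch of the signing scheme described above (Python purely for illustration; the key layout is from the question, the names and secret are assumptions), verification on the server side amounts to recomputing the digest and comparing:

        import hashlib

        SECRET = "change-me"  # shared secret, hypothetical

        def make_key(user, path):
            # Key layout from the question: md5(user + path + secret)
            return hashlib.md5((user + path + SECRET).encode()).hexdigest()

        def is_authorized(user, path, presented_key):
            # Plain comparison for clarity; hmac.compare_digest would be
            # the constant-time choice on newer Pythons.
            return make_key(user, path) == presented_key

        # e.g. PUT /files/report.pdf?key=<make_key("alice", "/files/report.pdf")>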

  • PHP Check slave status without mysql_connect timeout issues

    - by Jonathon
    I have a web app that has a master MySQL db and four slave dbs. I want to handle all (or almost all) read-only (SELECT) queries from the slaves. Our load balancer sends the user to one of the slave machines automatically, since they are also running Apache/PHP and serving webpages. I am using an include file to set up the connection to the databases, such as:

        // for the master server (i.e. UPDATE/INSERT/DELETE statements)
        $Host = "10.0.0.x";
        $User = "xx";
        $Password = "xx";
        $Link = mysql_connect( $Host, $User, $Password );
        if( !$Link ) {
            die( "Master database is currently unavailable. Please try again later." );
        }

        // this connection can be used for READ-ONLY (i.e. SELECT statements) on the localhost
        $Host_Local = "localhost";
        $User_Local = "xx";
        $Password_Local = "xx";
        $Link_Local = mysql_connect( $Host_Local, $User_Local, $Password_Local );

        // fail back to the master if the slave db is down
        if( !$Link_Local ) {
            $Link_Local = mysql_connect( $Host, $User, $Password );
        }

    I then use $Link for all update queries and $Link_Local as the connection for SELECT statements. Everything works fine until the slave server database goes down. If the local db is down, the mysql_connect() call takes at least 30 seconds before it gives up on trying to connect to localhost and returns back to the script. This causes a huge backlog of page serves and basically shuts down the system (due to the extremely slow response time). Does anyone know of a better way to handle connections to slave servers via PHP? Or is there some kind of timeout function that could be used to stop the mysql_connect call after 2-3 seconds? Thanks for the help. I searched the other mysql_connect threads, but didn't see any that addressed this issue.
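
    A short sketch of the timeout angle (the two-second value is illustrative): the legacy mysql extension honors the mysql.connect_timeout ini setting, and mysqli exposes an explicit per-connection option.

        <?php
        // Option 1: old mysql extension - lower the connect timeout globally
        ini_set('mysql.connect_timeout', 2);
        $Link_Local = mysql_connect($Host_Local, $User_Local, $Password_Local);

        // Option 2: mysqli - set the timeout on the connection itself
        $mysqli = mysqli_init();
        mysqli_options($mysqli, MYSQLI_OPT_CONNECT_TIMEOUT, 2);
        if (!mysqli_real_connect($mysqli, $Host_Local, $User_Local, $Password_Local)) {
            // fall back to the master connection here
        }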

  • What are CDN Best Practices?

    - by Wild Thing
    Hi, I have recently started using the Rackspace Cloudfiles CDN (Limelight), about which I have some questions:

    I am using jQuery, jQuery UI and jQuery Tools in addition to custom JS code. Also, my site is written in ASP.Net, which means there is some ASP.Net-generated JS code. Right now what I have done is that I have combined all of the JS (including the jQuery code), except the ASP.Net-generated JS, into one file, which I am hosting on the Rackspace CDN. I am wondering if it would make more sense to just get the jQuery and jQuery UI files from the Google-hosted CDN (which I suspect would work very well in serving these files, since they will be in many users' cache already)? This would mean one extra HTTP request, so I'm not sure if it'll help.

    Right now I have multiple containers for my assets. For example, in Rackspace I have 3 containers: JS, CSS and Images. The URL subdomain for all 3 is different. Will that lead to a performance penalty? Should I just use one container (and thus one domain for the CDN)?

    Is there a way of having the MS ASP.Net-generated JS loaded from an MS CDN? Would this have a performance hit as per the above question?

    Thanks in advance, WT
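
    For the Google-hosted option, the usual pattern (sketched here with jQuery 1.4.2 as an illustrative version; the fallback URL on the Rackspace CDN is hypothetical) is to load from Google and fall back to your own CDN copy if that fails:

        <script src="http://ajax.googleapis.com/ajax/libs/jquery/1.4.2/jquery.min.js"></script>
        <script>
            // If the Google CDN failed, window.jQuery is undefined,
            // so document.write a fallback copy from our own CDN.
            window.jQuery || document.write(
                '<script src="http://cdn.example.com/js/jquery-1.4.2.min.js"><\/script>');
        </script>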

  • WP7 - group of textboxes into some sort of template?

    - by Jeff V
    I'm still pretty new at Silverlight so there might be a way to do this, but I'm just unfamiliar with the terminology... I basically have this grouping of textboxes and textblocks and I would like to repeat this same grouping whenever the addNew button is clicked. Is there a way to do this by creating some sort of template? Or do I have to add each item individually?

        <Grid>
            <toolkit:ListPicker Height="70" HorizontalAlignment="Right" Name="listPicker1"
                VerticalAlignment="Top" Width="56"
                ItemTemplate="{StaticResource PickerItemTemplate}"
                FullModeItemTemplate="{StaticResource PickerFullModeItemTemplate}"
                Margin="0,97,167,0"></toolkit:ListPicker>
            <TextBlock Height="33" HorizontalAlignment="Left" Margin="10,7,0,0"
                Name="tbDate" Text="Date:" VerticalAlignment="Top" Width="266" />
            <TextBlock Height="42" HorizontalAlignment="Left" Margin="9,55,0,0"
                Name="tbItem" Text="Item:" VerticalAlignment="Top" Width="90" />
            <TextBox Height="75" HorizontalAlignment="Left" Margin="92,33,0,0"
                Name="tbItemName" Text="" VerticalAlignment="Top" Width="341" />
            <TextBlock Height="42" HorizontalAlignment="Left" Margin="5,118,0,0"
                Name="tbServing" Text="Serving:" VerticalAlignment="Top" Width="99" />
            <TextBox Height="70" HorizontalAlignment="Left" Margin="90,96,0,0"
                Name="tbServingValue" Text="" VerticalAlignment="Top" Width="75" />
            <TextBlock Height="42" HorizontalAlignment="Left" Margin="156,120,0,0"
                Name="tbUOM" Text="UOM:" VerticalAlignment="Top" Width="60" />
            <Button Content="" HorizontalAlignment="Right" Height="63" Margin="0,100,13,0"
                VerticalAlignment="Top" Width="132" RenderTransformOrigin="0.455,0.286"
                Style="{StaticResource wp7_buttonAddNew}" x:Name="btnAddNewItem"
                Click="btnAddNewItem_Click"/>
        </Grid>

    Thanks!
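
    A sketch of the templating approach (the FoodEntry class and its property names are hypothetical, not from the question): wrap the grouping in a DataTemplate inside an ItemsControl, bind the control to an ObservableCollection, and the add-new click handler only has to add an item.

        <ItemsControl x:Name="EntriesList">
            <ItemsControl.ItemTemplate>
                <DataTemplate>
                    <StackPanel Margin="0,0,0,12">
                        <TextBlock Text="Item:" />
                        <TextBox Text="{Binding ItemName, Mode=TwoWay}" />
                        <TextBlock Text="Serving:" />
                        <TextBox Text="{Binding ServingValue, Mode=TwoWay}" />
                    </StackPanel>
                </DataTemplate>
            </ItemsControl.ItemTemplate>
        </ItemsControl>

    with code-behind along these lines:

        using System.Collections.ObjectModel;
        using System.Windows;

        public partial class MainPage
        {
            // ObservableCollection raises change notifications, so the
            // ItemsControl re-renders the template for each added entry.
            private readonly ObservableCollection<FoodEntry> entries =
                new ObservableCollection<FoodEntry>();

            public MainPage()
            {
                InitializeComponent();
                EntriesList.ItemsSource = entries;
            }

            private void btnAddNewItem_Click(object sender, RoutedEventArgs e)
            {
                entries.Add(new FoodEntry());
            }
        }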

  • What is the simplest way to map a folder on the file system to a url in Tomcat?

    - by Simon
    Here's my problem... I have a small prototype app (it happens to be in Grails, hosted on AWS) and I want to add the ability for the user to upload a few (max 10) images. I want to persist these images on disk on the server machine, in a folder location which is outside my WAR. I realise that there is probably a super-scalable solution involving more web servers and optimised static asset serving, but for the approximately 100 users I am likely to get, it's really not worth the effort and cost. So, what is the simplest way I can have a virtual folder in my URL map to a physical folder on disk? I sort of want:

        http://myapp.com/static

    to map to a folder which I can configure, e.g. /var/www/static, so I can then have in my code:

        <img src="/static/user1/picture.jpg"/>

    I don't particularly mind whether the resulting physical folders are directly browsable. Security will eventually be an issue, but it isn't at the start. So, what are my options? I have looked at virtual hosts on the Apache site, but it feels more complicated than I need. I don't want to use the Grails static rendering plugins.
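
    One minimal way to get this mapping on a stock Tomcat (5/6-era syntax; paths as in the question): declare an extra Context whose docBase points at the folder on disk, inside the <Host> element of conf/server.xml.

        <Host name="localhost" appBase="webapps" unpackWARs="true" autoDeploy="true">
            <!-- Serves /var/www/static at http://myapp.com/static -->
            <Context path="/static" docBase="/var/www/static" />
        </Host>

    Tomcat's DefaultServlet then serves the files; directory listings stay off unless explicitly enabled.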

  • How to debug a JBoss out-of-memory problem?

    - by user561733
    Hello, I am trying to debug a JBoss out-of-memory problem. When JBoss starts up and runs for a while, it seems to use memory as intended by the startup configuration. However, it seems that when some unknown user action is taken (or the log file grows to a certain size) in the sole web application JBoss is serving, memory increases dramatically and JBoss freezes. When JBoss freezes, it is difficult to kill the process or do anything because of low memory. When the process is finally killed via -9 and the server is restarted, the log file is very small and only contains output from the startup of the newly started process, and no information on why the memory increased so much. This is why it is so hard to debug: server.log does not have information from the killed process. The log is set to grow to 2 GB, and the log file for the new process is only about 300 Kb, though it grows properly during normal memory circumstances.

    This is information on the JBoss configuration:
        JBoss (MX MicroKernel) 4.0.3
        JDK 1.6.0 update 22
        PermSize=512m
        MaxPermSize=512m
        Xms=1024m
        Xmx=6144m

    This is basic info on the system:
        Operating system: CentOS Linux 5.5
        Kernel and CPU: Linux 2.6.18-194.26.1.el5 on x86_64
        Processor information: Intel(R) Xeon(R) CPU E5420 @ 2.50GHz, 8 cores

    This is good example information on the system during normal pre-freeze conditions, a few minutes after the jboss service startup:
        Running processes: 183
        CPU load averages: 0.16 (1 min) 0.06 (5 mins) 0.09 (15 mins)
        CPU usage: 0% user, 0% kernel, 1% IO, 99% idle
        Real memory: 17.38 GB total, 2.46 GB used
        Virtual memory: 19.59 GB total, 0 bytes used
        Local disk space: 113.37 GB total, 11.89 GB used

    When JBoss freezes, system information looks like this:
        Running processes: 225
        CPU load averages: 4.66 (1 min) 1.84 (5 mins) 0.93 (15 mins)
        CPU usage: 0% user, 12% kernel, 73% IO, 15% idle
        Real memory: 17.38 GB total, 17.18 GB used
        Virtual memory: 19.59 GB total, 706.29 MB used
        Local disk space: 113.37 GB total, 11.89 GB used
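
    A few standard JDK 6 tools can capture state before the freeze (a sketch; <pid> is the JBoss java process, and the dump path is illustrative):

        # Take a heap dump automatically at the moment of OutOfMemoryError
        # (add to JAVA_OPTS, e.g. in bin/run.conf)
        -XX:+HeapDumpOnOutOfMemoryError -XX:HeapDumpPath=/var/tmp

        # Watch the heap generations fill in real time (samples every 5 s)
        jstat -gcutil <pid> 5000

        # Snapshot a live histogram of object counts by class
        jmap -histo:live <pid>

    A heap dump opened in a tool such as Eclipse MAT will usually point at the growing object graph.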

  • Elegant way to import XHTML nodes from xhr.responseXML into HTML document in IE?

    - by Weston Ruter
    While navigating through a site, I'm dynamically loading pages via Ajax and then only updating the elements of the page that are changed, such as the navigation state and main content area. This is similar to Lala. I am serving the site as XHTML in order to be able to have access to xhr.responseXML, which I then traverse in parallel with the current document and copy the nodes over. This works very well in browsers other than IE. For IE, I have to iterate over all of the properties of each XML element I want to import into the HTML document to create it from scratch (using a function convertXMLElementToHTML()). Here's the code I'm currently using:

        try {
            nodeB = document.importNode(nodeB, true);
        }
        catch(e){
            nodeB = nodeB.cloneNode(true);
            if(document.adoptNode)
                document.adoptNode(nodeB);
        }
        try {
            // This works in all browsers other than IE
            nodeA.parentNode.replaceChild(nodeB, nodeA);
        }
        // Manually clone the nodes into HTML; required for IE
        catch(e){
            nodeA.parentNode.replaceChild(convertXMLElementToHTML(nodeB), nodeA);
        }

    Is there a more elegant solution to mirror-translating XML nodes into HTML?
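
    For reference, the manual IE path usually boils down to something like the following (a hypothetical reconstruction of a convertXMLElementToHTML(), not the asker's actual function, and glossing over IE's setAttribute quirks for attributes like class):

        function convertXMLElementToHTML(node) {
            if (node.nodeType === 3) {                       // text node
                return document.createTextNode(node.nodeValue);
            }
            if (node.nodeType !== 1) {                       // skip comments etc.
                return document.createTextNode('');
            }
            var el = document.createElement(node.nodeName.toLowerCase());
            for (var i = 0; i < node.attributes.length; i++) {
                el.setAttribute(node.attributes[i].name, node.attributes[i].value);
            }
            for (var c = node.firstChild; c; c = c.nextSibling) {
                el.appendChild(convertXMLElementToHTML(c));
            }
            return el;
        }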

  • Handling large datasets with PHP/Drupal

    - by jo
    Hi all, I have a report page that deals with ~700k records from a database table. I can display this on a webpage using paging to break up the results. However, my export-to-PDF/CSV functions rely on processing the entire data set at once, and I'm hitting my 256MB memory limit at around 250k rows. I don't feel comfortable increasing the memory limit, and I haven't got the ability to use MySQL's SELECT ... INTO OUTFILE to just serve a pre-generated CSV. However, I can't really see a way of serving up large data sets with Drupal using something like:

        $form = array();
        $table_headers = array();
        $table_rows = array();

        $data = db_query("a query to get the whole dataset");
        while ($row = db_fetch_object($data)) {
            $table_rows[] = $row->some_attribute;
        }

        $form['report'] = array('#value' => theme('table', $table_headers, $table_rows));
        return $form;

    Is there a way of getting around what is essentially appending to a giant array of arrays? At the moment I don't see how I can offer any meaningful report pages with Drupal due to this. Thanks
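
    For the CSV export specifically, one sketch that avoids building the giant array (Drupal 6-style API is assumed here, and the chunk size is arbitrary): stream straight to the output buffer in batches, so memory use stays at one chunk.

        function report_export_csv() {
            drupal_set_header('Content-Type: text/csv');
            drupal_set_header('Content-Disposition: attachment; filename="report.csv"');
            $out = fopen('php://output', 'w');
            $chunk = 5000;
            for ($offset = 0; ; $offset += $chunk) {
                $data = db_query_range("a query to get the whole dataset",
                                       $offset, $chunk);
                $rows = 0;
                while ($row = db_fetch_array($data)) {
                    fputcsv($out, $row);
                    $rows++;
                }
                if ($rows < $chunk) {
                    break;  // last partial chunk reached
                }
            }
            fclose($out);
            exit;
        }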

  • Apache's AuthDigestDomain and Rails Distributed Asset Hosts

    - by Jared
    I've got a server I'm in the process of setting up and I'm running into an Apache configuration problem that I can not get around. I've got Apache 2.2 and Passenger serving a Rails app with distributed asset hosting. This is the feature of Rails that lets you serve your static assets from assets0.example.com, assets1, assets2, and so on. The site needs to be passworded until launch. I've set up HTTP authentication on the site using Apache's mod_auth_digest. In my configuration I'm attempting to use the AuthDigestDomain directive to allow access to each of the asset URLs. The problem is, it doesn't seem to be working. I get the initial prompt for the password when I load the page, but then the first time it loads an asset from one of the asset URLs, I get prompted a 2nd, 3rd, or 4th time. In some browsers, I get prompted for every single resource on the page. I'm hoping that this is only a problem of how I'm specifying my directives and not a limitation of authorization in Apache itself. See the edited auth section below:

        <Location />
            AuthType Digest
            AuthName "Restricted Site"
            AuthUserFile /etc/httpd/passwd/passwords
            AuthGroupFile /dev/null
            AuthDigestDomain / http://assets0.example.com/ http://assets1.example.com/ http://assets2.example.com/ http://assets3.example.com/
            require valid-user
            order deny,allow
            allow from all
        </Location>

  • Best way to convert log files (*.txt) to web-friendly files (*.html, *.jsp, etc)?

    - by prometheus
    I have a bunch of log files which are pure text. Here is an example of one...

        Overall Failures Log
        SW Failures - 03.09.2010 - /logs/swfailures.txt - 23 errors - 24 warnings
        HW Failures - 03.09.2010 - /logs/hwfailures.txt - 42 errors - 25 warnings
        SW Failures - 03.10.2010 - /logs/swfailures.txt - 32 errors - 27 warnings
        HW Failures - 03.10.2010 - /logs/hwfailures.txt - 11 errors - 31 warnings

    These files can get quite large and contain a lot of other information. I'd like to produce an HTML file from this log that will add links to key portions and allow the user to open up other log files as a result...

        SW Failures - 03.09.2010 - <a href="/logs/swfailures.txt">/logs/swfailures.txt</a> - 23 errors - 24 warnings

    This is greatly simplified, as I would like to add many more links and other HTML elements. My question is: what is the best way to do this? If the files are large, should I generate the HTML before serving it to the user, or will JSP do? Should I use Perl or other scripting languages to do this? What are your thoughts and experiences?
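
    As a sketch of the pre-generation route (Python here purely for illustration; a Perl one-liner would do the same job), a pass that wraps each /logs/... path in an anchor:

        import re

        LINK = re.compile(r'(/logs/\S+\.txt)')

        def log_to_html(text):
            # Note: a real converter should HTML-escape the text before
            # the substitution; skipped here for brevity.
            body = LINK.sub(r'<a href="\1">\1</a>', text)
            return "<html><body><pre>%s</pre></body></html>" % body

        with open("failures.log") as f:
            print(log_to_html(f.read()))

    Pre-generating like this whenever the log rotates keeps request-time cost at zero; JSP makes more sense when the links must reflect per-user state.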

  • User clicks on Image, video starts playing

    - by cf_PhillipSenn
    I have an image, and I would like to make the image link to an embedded YouTube video, such that if the user clicks on the image, it starts playing in the place where the picture used to be.

        <div id="myvideo">
            <a href="http://www.youtube.com/watch?v=Msef24JErmU&playnext_from=TL&videos=dgzKE_Lyv7o">
                <img src="starryeyedsurprise.jpg"></a>
        </div>

    Something like what the above code does, except not just a hyperlink to it. So I know I have to have JavaScript replace what's in #myvideo with:

        <object width="425" height="344">
            <param name="movie" value="http://www.youtube.com/v/Msef24JErmU&hl=en_US&fs=1&rel=0&color1=0x2b405b&color2=0x6b8ab6"></param>
            <param name="allowFullScreen" value="true"></param>
            <param name="allowscriptaccess" value="always"></param>
            <embed src="http://www.youtube.com/v/Msef24JErmU&hl=en_US&fs=1&rel=0&color1=0x2b405b&color2=0x6b8ab6"
                type="application/x-shockwave-flash" allowscriptaccess="always"
                allowfullscreen="true" width="425" height="344"></embed>
        </object>

    And then have it automatically start playing. I guess I don't need it to be a YouTube video per se; I could just host the video on my own site (it's so low volume that I don't have to worry about serving up a video every once in a while).
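
    A minimal sketch of that swap in plain JavaScript (the autoplay=1 parameter is what starts playback immediately; trimmed player parameters for brevity):

        // Replace the image with the player when the div is clicked.
        document.getElementById('myvideo').onclick = function () {
            var src = 'http://www.youtube.com/v/Msef24JErmU&hl=en_US&fs=1&autoplay=1';
            this.innerHTML =
                '<object width="425" height="344">' +
                '<param name="movie" value="' + src + '"></param>' +
                '<param name="allowFullScreen" value="true"></param>' +
                '<embed src="' + src + '" type="application/x-shockwave-flash"' +
                ' allowfullscreen="true" width="425" height="344"></embed>' +
                '</object>';
            return false;  // keep the inner <a> from navigating away
        };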

  • Show a number with specified number of significant digits

    - by dreeves
    I use the following function to convert a number to a string for display purposes (don't use scientific notation, don't use a trailing dot, round as specified):

        (* Show Number. Convert to string w/ no trailing dot. Round to the nearest r. *)
        Unprotect[Round];
        Round[x_, 0] := x;
        Protect[Round];
        shn[x_, r_:0] := StringReplace[
            ToString@NumberForm[Round[N@x, r], ExponentFunction -> (Null&)],
            re@"\\.$" -> ""]

    (Note that re is an alias for RegularExpression.) That's been serving me well for years. But sometimes I don't want to specify the number of digits to round to; rather, I want to specify a number of significant figures. For example, 123.456 should display as 123.5 but 0.00123456 should display as 0.001235. To get really fancy, I might want to specify significant digits both before and after the decimal point. For example, I might want .789 to display as 0.8 but 789.0 to display as 789 rather than 800. Do you have a handy utility function for this sort of thing, or suggestions for generalizing my function above? Related: Suppressing a trailing "." in numerical output from Mathematica
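
    A sketch of the significant-figures variant, leaning on the fact that NumberForm's second argument is a count of significant digits. This handles the 123.5 / 0.001235 cases; the "significant digits on both sides of the point" refinement would still need extra logic.

        (* Show x with sig significant digits, no scientific notation, no trailing dot. *)
        shsig[x_, sig_:4] := StringReplace[
            ToString@NumberForm[N@x, sig, ExponentFunction -> (Null&)],
            RegularExpression["\\.$"] -> ""]

        shsig[123.456]     (* "123.5"    *)
        shsig[0.00123456]  (* "0.001235" *)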

  • How specific do I get in BDD scenarios?

    - by CodeSpelunker
    Take two different ways of stating the same behavior.

    Option A:
        Given a customer has 50 items in their shopping cart
        When they check out
        Then they will receive a 10% discount on their order

    Option B:
        Given a customer has a high volume of items in their shopping cart
        When they check out
        Then they will receive a high volume discount on their order

    The former is far more specific. If someone has some question about exactly when a customer gets a high volume discount or how much to give them, reading this scenario makes it very clear. Serving the purposes of documenting the behavior, it's about as specific as it can be, although any change in those values will require changing the scenario. The second is more generalized and doesn't have the clarity of the first. Automating it would require incorporating the values "50" and "10" in the step implementations. On the other hand, the scenario captures the core business need: a high volume customer gets a discount. If we later decide to use "40" and "15", the scenario doesn't have to change because the core business need hasn't really changed (though the step implementation would). Also, the term "high volume customer" communicates something about why we're giving them the discount. So, which is better? Rather, under what circumstances should I favor the former or the latter?
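
    One middle ground often suggested (sketched here in Gherkin; the wording is illustrative, not prescribed) keeps the intent in the scenario name while the concrete values stay visible and table-driven:

        Scenario Outline: High volume customers receive a volume discount
          Given a customer has <item count> items in their shopping cart
          When they check out
          Then they will receive a <discount>% discount on their order

          Examples:
            | item count | discount |
            | 50         | 10       |

    Changing the policy to 40 items / 15% then touches only the Examples row, not the step implementations.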

  • How can you know what w3wp.exe is doing? (or how to diagnose a performance problem)

    - by Daniel Magliola
    I'm having a performance problem on a site we've made, and I'm not exactly sure how to start diagnosing it. The short description is: we have a very small site (http://hearablog.com) with very little traffic, on a crappy dedicated server; CPU is always very high, sometimes staying at 100% for minutes, and w3wp.exe is taking most of it. A typical scenario is w3wp.exe taking 60% and SQL Server about 30%. Our DB is pretty small too.

    Long description and more details: the site is hosted on a very crappy server by Cari.Net. From the beginning we had the feeling that the server didn't quite behave correctly, like some things would take just too long, so this could be a configuration problem from the get-go. It may also be that we are getting a virtual server while we're supposed to have a dedicated one, although we have no evidence that'd indicate this, except for the fact that the server tends to be quite slow. The server is Windows 2008 Standard 64-bit with SQL 2008 Express; hardware is a Celeron 2.80 GHz with 1 GB RAM. The website is developed in ASP.Net MVC, using Entity Framework for data access.

    Now, this is pretty crappy hardware, but I've had other servers with these guys, with equivalent (or worse) hardware, and performance is much better than this one. That said, the other servers have W2003 and SQL 2005, and I'm using ASP.Net WebForms 2.0, no MVC, no LINQ, no EF; so I'm not sure whether going to 2008 and the other stuff means a big performance penalty is to be expected. I'm serving MP3 files (5-20 MB) regularly, which is a slightly unusual load; maybe that is causing some kind of problem? Would that cause w3wp to use a lot of CPU? Disk usage seems very low. Memory is usually around 90%, but disk usage seems to indicate it's not paging much. I get tons of e-mails every day about SQL timeouts, for queries taking over 30 seconds, although all our queries are pretty straightforward (or should be, but EF may be screwing it up). [The original post included a Resource Monitor screenshot from one of these 100% CPU "sprints" and a snapshot of some performance counters.]

    Now, what confuses me very much is that the CPU usage of w3wp is just so high; it shouldn't be doing much, really. So my questions are:
        Is there any way of finding out "what" it is doing? Maybe even profile it?
        Any performance counters I should be looking at?
        Is this to be expected given this hardware/software configuration?
        Could this be caused by some kind of configuration failure, and where would you start looking?
    Thank you VERY much. Daniel Magliola

  • All the Gear and No Idea: Suggestions for re-designing my home/office/entertainment network

    - by 5arx
    Help/advice/suggestions please: I have a load of kit that I love but which currently operates in a disconnected, sometimes counter-productive way. Because I never really had a master plan, I just added these things one after another and connected them up in ad hoc ways. Since I bought my MacBook I've found I spend much less time on the MacPro that was until then my main machine. Perversely, as my job involves writing .Net software, I spend a lot of Mac time actually inside a Windows 7 VM. I stream media from the HP box to the PS3 and thus to the TV, but it's not without its limitations/annoyances. We listen to each other's iTunes libraries, but the music files are all over the place and it would be good to know they were all safely in one location (and fully backed up). I need to come up with a strategy that will allow me to use all the kit for work, play (recording live music, making tunes, iMovie work), pushing/streaming media to the TV, and sharing files with my other half (she uses a Windows laptop and her iPod Touch). Ideally I'd like to be able to work on any of the machines and have a shared home drive that was visible to all machines, so all my current files were synced up wherever I was. It would be great if I could access everything securely and quickly over the web. I'd also like to be able to set up a background backup process.

    The kit list thus far:
        Apple MacPro 8GB / 3x250GB RAID0 + 1TB
        Apple MacBook Pro 13" 8GB / 250GB - I spend a lot of my work time in a Windows 7 VM on this.
        Crappy Acer laptop (for the children's use - iPlayer, watching movies/TV files)
        HP ProLiant Server 4GB / 80GB + 160GB + 300GB
        Sun Ultra 10, 2 x 80GB (old, but in top-notch condition)
        PS3 160GB
        iPod Classic
        2 x 8GB iPod Touch

    Observations:
        Part of the problem is our dual use of Windows and OS X - we can't go for a pure NT-style roaming profile.
        Because the server is also used for hosting test/beta applications and a SQL Server db, it can't be dedicated to file serving.
        The two Macs really could do with sharing a roaming profile or similar.
        I'd love to be able to do something useful with the Ultra 10. My other half has been trying to throw it away for over five years now and regularly asks what function it serves in my study :-(
        I've got no shortage of 500GB external USB hard drives.
        iMovie files are very large and ideally would be processed on a RAID system.
        Apple's Time Machine isn't so great.

    If anyone could suggest all or part of a setup that would fulfil some of my requirements I'd be very grateful. I am willing to consider purchasing one or two more bits of kit (an Apple TV and a Squeezebox have been mooted by friends) if they will help make efficiencies rather than add to the chaos and confusion. Thanks for looking.

  • Guests can't access KVM host server by name although nslookup and dig returns correct record

    - by user190196
    So I have a KVM host that also runs an Apache server with some yum repos. The VM guests are connected to the default virtual network, which is configured to offer DHCP and forwarding with NAT on virbr0 (192.168.122.1). The guests can successfully access the yum repos on the host by IP address, so for example curl 192.168.122.1/repo1 returns the content without problems. But I'd like to have the guests be able to reach the web server on the host by name rather than by IP address. I added the desired name record to the host's /etc/hosts file, and libvirt's dnsmasq service seems to be serving that correctly to the guests, since nslookup and dig successfully resolve the name on the guests:

        [root@localhost ~]# nslookup repo
        Server:     192.168.122.1
        Address:    192.168.122.1#53

        Name:   repo
        Address: 192.168.122.1

        [root@localhost ~]# dig repo

        ; <<>> DiG 9.8.2rc1-RedHat-9.8.2-0.17.rc1.el6 <<>> repo
        ;; global options: +cmd
        ;; Got answer:
        ;; ->>HEADER<<- opcode: QUERY, status: NOERROR, id: 55938
        ;; flags: qr aa rd ra; QUERY: 1, ANSWER: 1, AUTHORITY: 0, ADDITIONAL: 0

        ;; QUESTION SECTION:
        ;repo.              IN  A

        ;; ANSWER SECTION:
        repo.           0   IN  A   192.168.122.1

        ;; Query time: 0 msec
        ;; SERVER: 192.168.122.1#53(192.168.122.1)
        ;; WHEN: Tue Sep 17 02:10:46 2013
        ;; MSG SIZE  rcvd: 38

    But curl/ping/etc. still fail:

        [root@localhost ~]# curl repo
        curl: (6) Couldn't resolve host 'repo'

    While a request via IP address works:

        [root@localhost ~]# curl 192.168.122.1
        <!DOCTYPE HTML PUBLIC "-//W3C//DTD HTML 3.2 Final//EN">
        <html>
         <head>
          <title>Index of /</title>
        [...]

    Same with ping:

        [root@localhost ~]# ping repo
        ping: unknown host repo

        [root@localhost ~]# ping 192.168.122.1
        PING 192.168.122.1 (192.168.122.1) 56(84) bytes of data.
        64 bytes from 192.168.122.1: icmp_seq=1 ttl=64 time=0.110 ms
        64 bytes from 192.168.122.1: icmp_seq=2 ttl=64 time=0.146 ms
        64 bytes from 192.168.122.1: icmp_seq=3 ttl=64 time=0.191 ms
        ^C
        --- 192.168.122.1 ping statistics ---
        3 packets transmitted, 3 received, 0% packet loss, time 2298ms
        rtt min/avg/max/mdev = 0.110/0.149/0.191/0.033 ms

    I tried adding "repo 192.168.122.1" to the guests' /etc/hosts files, but still no dice. I also tried changing the guests' /etc/nsswitch.conf with both:

        hosts: files dns

    and

        hosts: dns files

    I've read the relevant libvirt documentation, and I'm not sure where else to learn more about this and be able to move forward with it.

  • EC2 Filesystem / Files stored on the wrong partition after launching new instance from AMI

    - by Philip Isaacs
    Today I set up a new EC2 instance from an AMI I created from an older EC2 instance. The AMI was taken from a small instance, and I launched the new instance as a medium. From what I can tell this is pretty standard stuff. But here's the strange part. According to AWS, these are the differences:

        Small Instance (Default): 1.7 GB of memory, 1 EC2 Compute Unit
            (1 virtual core with 1 EC2 Compute Unit), 160 GB of local
            instance storage, 32-bit or 64-bit platform
        Medium Instance: 3.75 GB of memory, 2 EC2 Compute Units
            (1 virtual core with 2 EC2 Compute Units each), 410 GB of local
            instance storage, 32-bit or 64-bit platform

    Okay, now here's where I'm having an issue. When I log into the new, bigger instance, it still reports having only 1.7 GB of RAM. The other strange part is that all my old partitions are still there in the same configurations. I see a new, larger partition /mnt, which is essentially empty:

        Filesystem      Size  Used Avail Use% Mounted on
        /dev/sda1       7.9G  5.9G  1.6G  79% /
        none            846M  120K  846M   1% /dev
        none            879M     0  879M   0% /dev/shm
        none            879M   76K  878M   1% /var/run
        none            879M     0  879M   0% /var/lock
        none            879M     0  879M   0% /lib/init/rw
        /dev/sda2       335G  195M  318G   1% /mnt
        /dev/sdf         16G  9.9G  5.1G  67% /var2

    This EC2 instance is a web server, and I was serving files off the /var2 directory, but for some reason the instance is storing everything on /. Okay, here's what I'd like to do: move all my website files to /mnt and have the web server point to that. Any suggestions? If it helps, here is what my mounts look like as well:

        root@myserver:/var# mount -l
        /dev/sda1 on / type ext3 (rw) [cloudimg-rootfs]
        proc on /proc type proc (rw,noexec,nosuid,nodev)
        none on /sys type sysfs (rw,noexec,nosuid,nodev)
        none on /sys/kernel/debug type debugfs (rw)
        none on /sys/kernel/security type securityfs (rw)
        none on /dev type devtmpfs (rw,mode=0755)
        none on /dev/pts type devpts (rw,noexec,nosuid,gid=5,mode=0620)
        none on /dev/shm type tmpfs (rw,nosuid,nodev)
        none on /var/run type tmpfs (rw,nosuid,mode=0755)
        none on /var/lock type tmpfs (rw,noexec,nosuid,nodev)
        none on /lib/init/rw type tmpfs (rw,nosuid,mode=0755)
        /dev/sda2 on /mnt type ext3 (rw)
        /dev/sdf on /var2 type ext4 (rw,noatime)

    I hope this question makes sense. Basically I want my old files on this new partition. Thanks in advance.

  • What "pieces" are needed in order to set up a cluster of physical servers?

    - by Chris Dutrow
    Background: currently, we use Rackspace cloud servers. We have no intention of stopping using them, but would like to look into setting up a cluster of physical servers (probably desktop computers in the $400 range with 8 GB of memory each) to offset some of our load and work as a secondary, more powerful, less reliable system. To put things in perspective, we can buy comparable desktop computers for the same price as we pay in one month to rent them on Rackspace Cloud. I understand that this is generally a dumb idea. However, in this particular instance, the server cluster is needed for its computation power. It is not mission-critical, it does not host a consumer-facing website, and if it goes down for a day or two, it's not really a problem. Currently, we have access to business-class Verizon FiOS. If I understand correctly, we can get at least 25 dedicated IP addresses with this service; this should be enough.

    Requirements:
        Each server runs Linux CentOS 6.3
        Some of the servers run Python and execute processes from a task queue (Redis or RabbitMQ)
        Some of the servers are capable of serving static files and Python-driven REST APIs
        Some of the servers host a Cassandra database cluster
        One or more of the servers are Redis database servers
        One or more of the servers are PostgreSQL servers

    Questions:
        What kind of router or switch is needed? We would like the computers to be able to communicate effectively with each other via internal IP addresses. This is especially important for communicating with servers hosting Redis that need to be able to respond to requests very quickly. Are there special switches or routers that need to be used to connect the servers together?
        Are desktop computers OK for this? We have found that we are mostly RAM-bottlenecked. I understand that some servers have highly superior CPUs, but I'm not sure we need CPU power as much as we need RAM, which is cheap in desktop computers. Will we have problems with the WiFi cards in the desktops or any other unexpected hardware limitation?
        What tools should be used to "image" the servers? For example, when we get an installation right for a Redis server or Cassandra node, are there tools that come with Linux CentOS 6.3 to image the server to a USB drive or something like that? Or do we need to use some other software for this?
        What other things are we missing that we should be concerned about?
    Thanks so much!

  • How to have Windows Server DNS use hosts file to resolve specific host names

    - by user41079
    Hello, everyone. I'm facing a small problem with the Windows Server 2003 DNS service. In my corporation, I'm running a Microsoft DNS server (172.16.0.12) to do name resolution for my company intranet (domain names end in dev.nls., resolving to IP 172.16..), and it is also configured as a DNS forwarder to forward other domain names (e.g. *.google.com, *.sf.net) to the real Internet DNS servers. This internal DNS server is never intended to serve users from the outside world. And we are running a mail server (serving incoming mail for a real Internet domain, @nlscan.com) inside the company firewall, which can be accessed in either of two ways: by connecting to 172.16.0.10 from within the intranet, or by connecting to mail.nlscan.com (resolved to 202.101.116.9) from the Internet. Note that 172.16.0.10 and 202.101.116.9 are not the same physical machine; the 202 one is a firewall machine that port-forwards ports 25 and 110 to the intranet address 172.16.0.10.

    Now my question: if users inside the corporate LAN resolve mail.nlscan.com, it resolves to 202.101.116.9. That's correct and workable, BUT NOT GOOD, because the mail traffic goes to the firewall machine and then bounces to 172.16.0.10. I hope that our internal DNS server can intercept the name mail.nlscan.com and resolve it to 172.16.0.10, so I hoped I could write an entry in a "hosts" file on 172.16.0.12 to do this. But how can the Microsoft DNS server recognize such a "hosts" file?

    Maybe you suggest: why not have intranet users use 172.16.0.10 to access the mail server? I have to say that's inconvenient. Suppose a user (employee) works on his laptop, daytime in the office and nighttime at home; when he is at home, he cannot use 172.16.0.10. Creating a zone for nlscan.com on our internal DNS server is not feasible, because the name server for the nlscan.com domain is at our ISP, and it is responsible for resolving other host names and sub-domains under nlscan.com. Thank you in advance.
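
    One commonly suggested workaround (Windows DNS has no hosts-file hook) is a single-record zone for just the one host, which leaves the internal server non-authoritative for the rest of nlscan.com. A sketch with the dnscmd utility from the Windows Server 2003 Support Tools (syntax worth double-checking against your environment):

        dnscmd 172.16.0.12 /ZoneAdd mail.nlscan.com /Primary /file mail.nlscan.com.dns
        dnscmd 172.16.0.12 /RecordAdd mail.nlscan.com @ A 172.16.0.10

    Queries for mail.nlscan.com are then answered locally with 172.16.0.10, while anything else under nlscan.com is still forwarded to the ISP's name server.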

  • Blank Page: wordpress on nginx+php-fpm

    - by troutwine
    Good day. While this post discusses a similar setup to mine serving blank pages occasionally after having made a successful installation, I am unable to serve anything but blank pages. My setup:

        Wordpress 3.0.4
        nginx 0.8.54
        php-fpm 5.3.5 (fpm-fcgi)
        Arch Linux

    Configuration files. php-fpm.conf:

        [global]
        pid = run/php-fpm/php-fpm.pid
        error_log = log/php-fpm.log
        log_level = notice

        [www]
        listen = 127.0.0.1:9000
        listen.owner = www
        listen.group = www
        listen.mode = 0660
        user = www
        group = www
        pm = dynamic
        pm.max_children = 50
        pm.start_servers = 20
        pm.min_spare_servers = 5
        pm.max_spare_servers = 35
        pm.max_requests = 500

    nginx.conf:

        user www;
        worker_processes 1;
        error_log /var/log/nginx/error.log notice;
        pid /var/run/nginx.pid;

        events {
            worker_connections 1024;
        }

        http {
            include mime.types;
            default_type application/octet-stream;
            sendfile on;
            keepalive_timeout 65;
            gzip on;
            include /etc/nginx/sites-enabled/*.conf;
        }

    /etc/nginx/sites-enabled/blog_sharonrhodes_us.conf:

        upstream php {
            server 127.0.0.1:9000;
        }

        server {
            error_log /var/log/nginx/us/sharonrhodes/blog/error.log notice;
            access_log /var/log/nginx/us/sharonrhodes/blog/access.log;
            server_name blog.sharonrhodes.us;
            root /srv/apps/us/sharonrhodes/blog;
            index index.php;

            location = /favicon.ico {
                log_not_found off;
                access_log off;
            }

            location = /robots.txt {
                allow all;
                log_not_found off;
                access_log off;
            }

            location / {
                # This is cool because no php is touched for static content
                try_files $uri $uri/ /index.php?q=$uri&$args;
            }

            location ~ \.php$ {
                fastcgi_split_path_info ^(.+\.php)(/.+)$;
                # NOTE: You should have "cgi.fix_pathinfo = 0;" in php.ini
                include fastcgi_params;
                fastcgi_intercept_errors on;
                fastcgi_pass php;
            }

            location ~* \.(js|css|png|jpg|jpeg|gif|ico)$ {
                expires max;
                log_not_found off;
            }
        }
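
    One detail worth checking in a setup like this (a common cause of all-blank pages, though not certain from the post alone): if the distribution's fastcgi_params file does not define SCRIPT_FILENAME, PHP-FPM never learns which file to run and returns an empty response. Adding it explicitly to the PHP location is harmless either way:

        location ~ \.php$ {
            fastcgi_split_path_info ^(.+\.php)(/.+)$;
            include fastcgi_params;
            # Without this, php-fpm may execute "nothing" and return a blank page
            fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
            fastcgi_intercept_errors on;
            fastcgi_pass php;
        }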

  • IE and Google Chrome timeout on an IIS6 hosted SSL page that Firefox handles well.

    - by Thomas
    Ok, here's the scenario: up until a few weeks ago, none of us noticed anything wrong with the corporate website. People were using it without complaint. Then a client complained that a specific page on the site was timing out for him, and only when he committed a POST action on a form filled with data. I checked it out, and it timed out for me too. But it only timed out in Google Chrome and IE, not in Firefox. Additionally, the same page, on the same server, but served from a different domain name (one not under the protection of SSL, either) does not time out under any browser. To clarify: https://www.mysite.com/changes.php times out on POST, but the same page over http works fine. That distinction (SSL vs. non-SSL) seems to be important, as nothing else has changed. Our certificate is valid, and Firefox detects no errors thrown by the page. I've looked at the request and response headers from the page, and they all follow the correct formats.

    Then, after wandering through the site, I noticed a few other things. Both IE and Chrome will frequently time out on any page that is PHP-based; they never time out on static images or HTML files. I've looked at the site from a variety of different servers, my home and work workstations, and my netbook. Because of that, I've discounted a viral infection, as I highly doubt a virus is going to hit every one of the machines to which I have access in exactly the same manner.

    My setup:
        Server: Win2k3, IIS6, PHP 5.2.9-1.
        Clients: IE7, IE8, Chrome (regular and dev channel): frequent timeouts on PHP pages.
        Firefox 2, Firefox 3: no timeouts. Firebug shows no errors or even lengthy periods serving the pages.

    I've spent 2 days searching for any tech knowledge that I can find, and my search parameters are all too general: everyone has problems loading SSL pages in IE and Chrome, for a wide variety of reasons. The infrequent nature of the timeouts and the fact that there are no errors being reported anywhere is starting to drive me insane. Does anyone have any insight on a problem like this?

  • Ubuntu unattended-upgrades stops apache

    - by Robbie
    This morning I was alerted to the fact that both Apache instances serving my app were not responding to requests from my load balancer. I attempted apachectl restart and it said Apache was not running. So I started Apache on both instances and got the service up again. I then followed the logs and worked out that both had performed upgrades via the unattended-upgrades package moments before they stopped responding.

    /var/log/unattended-upgrades/unattended-upgrades.log:

        2013-07-02 06:30:51,875 INFO Starting unattended upgrades script
        2013-07-02 06:30:51,875 INFO Allowed origins are: ['o=Ubuntu,a=precise-security']
        2013-07-02 06:33:57,771 INFO Packages that are upgraded: accountsservice apache2
            apache2-mpm-prefork apache2-utils apache2.2-bin apache2.2-common apparmor
            apport apt apt-transport-https apt-utils bind9-host binutils dbus dnsutils
            gnupg gpgv isc-dhcp-client isc-dhcp-common krb5-locales libaccountsservice0
            libapt-inst1.4 libapt-pkg4.12 libbind9-80 libc-bin libc-dev-bin libc6
            libc6-dev libcurl3-gnutls libdbus-1-3 libdbus-glib-1-2 libdns81
            libdrm-intel1 libdrm-nouveau1a libdrm-radeon1 libdrm2 libexpat1 libfreetype6
            libgc1c2 libgnutls-dev libgnutls-openssl27 libgnutls26 libgnutlsxx27
            libisc83 libisccc80 libisccfg82 liblwres80 libruby1.8 libx11-6 libx11-data
            libxcb1 libxext6 libxml2 linux-firmware linux-image-virtual linux-libc-dev
            linux-virtual multiarch-support openssl perl perl-base perl-modules
            python-apport python-crypto python-keyring python-problem-report
            python-software-properties ri1.8 ruby1.8 ruby1.8-dev sudo tzdata
            update-manager-core
        2013-07-02 06:33:57,772 INFO Writing dpkg log to '/var/log/unattended-upgrades/unattended-upgrades-dpkg_2013-07-02_06:33:57.772399.log'
        2013-07-02 06:36:10,584 INFO All upgrades installed

    I'm running Ubuntu 12.04 on Amazon EC2 servers. I have unattended-upgrades installed and configured as follows.

    /etc/apt/apt.conf.d/50unattended-upgrades:

        // Automatically upgrade packages from these (origin:archive) pairs
        Unattended-Upgrade::Allowed-Origins {
            "${distro_id}:${distro_codename}-security";
        //  "${distro_id}:${distro_codename}-updates";
        //  "${distro_id}:${distro_codename}-proposed";
        //  "${distro_id}:${distro_codename}-backports";
        };

        // List of packages to not update
        Unattended-Upgrade::Package-Blacklist {
        };

    /etc/apt/apt.conf.d/20auto-upgrades:

        APT::Periodic::Update-Package-Lists "1";
        APT::Periodic::Unattended-Upgrade "1";

    I've struggled to find documentation about what happens to running processes during an upgrade. Is this expected behaviour, or should unattended-upgrades restart Apache after upgrading it? What can I do to ensure Apache is restarted correctly? Should I just blacklist the apache package?
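
    For the blacklist option mentioned above, the stanza in 50unattended-upgrades takes package-name entries (a sketch; whether pinning apache2 is wise depends on how security updates are then applied by hand):

        // Hold back apache2 from unattended upgrades
        Unattended-Upgrade::Package-Blacklist {
            "apache2";
        };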

  • Apache2 graceful restart stops proxying requests to passenger

    - by Rob
    Issue with Apache mod_proxy: it stops proxying requests after a graceful restart, but not all the time. It seems to happen only on a Sunday, when a graceful restart is triggered by logrotate.

        [Sun Sep 9 05:25:06 2012] [notice] SIGUSR1 received. Doing graceful restart
        [Sun Sep 9 05:25:06 2012] [notice] Apache/2.2.22 (Ubuntu) Phusion_Passenger/3.0.11 configured -- resuming normal operations
        [Sun Sep 9 05:25:06 2012] [error] proxy: ap_get_scoreboard_lb(492) failed in child 26153 for worker proxy:reverse
        [Sun Sep 9 05:25:06 2012] [error] proxy: ap_get_scoreboard_lb(486) failed in child 26153 for worker http://api.myservice.org/api
        [Sun Sep 9 05:25:06 2012] [error] proxy: ap_get_scoreboard_lb(487) failed in child 26153 for worker http://api.myservice.org/editor/$1
        [Sun Sep 9 05:25:06 2012] [error] proxy: ap_get_scoreboard_lb(489) failed in child 26153 for worker http://api.myservice.org/build
        [Sun Sep 9 05:25:06 2012] [error] proxy: ap_get_scoreboard_lb(490) failed in child 26153 for worker http://api.myservice.org/help
        [Sun Sep 9 05:25:06 2012] [error] proxy: ap_get_scoreboard_lb(491) failed in child 26153 for worker http://api.myservice.org/motd.html
        [Sun Sep 9 05:25:06 2012] [error] proxy: ap_get_scoreboard_lb(480) failed in child 26153 for worker http://api.myservice.org/api
        [Sun Sep 9 05:25:06 2012] [error] proxy: ap_get_scoreboard_lb(481) failed in child 26153 for worker http://api.myservice.org/editor/$1
        [Sun Sep 9 05:25:06 2012] [error] proxy: ap_get_scoreboard_lb(483) failed in child 26153 for worker http://api.myservice.org/build
        [Sun Sep 9 05:25:06 2012] [error] proxy: ap_get_scoreboard_lb(484) failed in child 26153 for worker http://api.myservice.org/help
        [Sun Sep 9 05:25:06 2012] [error] proxy: ap_get_scoreboard_lb(485) failed in child 26153 for worker http://api.myservice.org/motd.html
        [Sun Sep 9 05:25:06 2012] [error] proxy: ap_get_scoreboard_lb(479) failed in child 26153 for worker http://api.myservice.org/motd.html

    After these lines, the logs are flooded with 404s because the requests are not being proxied. It's worth noting that the destination is just another vhost on the same Apache instance, but that vhost (http://api.myservice.org) is serving Passenger (mod_rails). I was thinking that maybe there are some startup issues with the Passenger workers not being ready during a graceful restart? A full restart resolves it and everything returns to normal.

    Edit: here's the vhost config, thanks :)

        <VirtualHost *:80>
            UseCanonicalName Off
            LogFormat "%V %h %l %u %t \"%r\" %s %b" vcommon

            <Directory /var/www/vhosts>
                RewriteEngine on
                AllowOverride All
            </Directory>

            RewriteEngine on
            RewriteCond /var/www/vhosts/%{SERVER_NAME} !-d
            RewriteCond /var/www/vhosts/%{SERVER_NAME} !-l
            RewriteRule ^ http://sitenotfound.myservice.org/ [R=302,L]

            VirtualDocumentRoot /var/www/vhosts/%0/current

            # Rewrite requests to /assets to map to /var/file-store/<SERVER_NAME>/
            RewriteMap lowercase int:tolower
            RewriteCond %{REQUEST_URI} ^/assets/
            RewriteRule ^/assets/(.*)$ /var/file-store/${lowercase:%{SERVER_NAME}}/$1

            # Map /login to /editor.html as it's far friendlier.
            RewriteCond %{REQUEST_URI} ^/login
            RewriteRule .* /editor.html [PT]

            # Forward some requests to the API
            ProxyPass /api http://api.myservice.org/api
            ProxyPass /site.json http://api.myservice.org/api/editor/site
            ProxyPassMatch ^/editor/(.*)$ http://api.myservice.org/editor/$1
            ProxyPassMatch ^/api/(.*) http://api.myservice.org/api/$1
            ProxyPass /build http://api.myservice.org/build
            ProxyPass /help http://api.myservice.org/help
            ProxyPass /motd.html http://api.myservice.org/motd.html

            <Proxy *>
                Order allow,deny
                Allow from all
            </Proxy>

            # TODO generate slightly more specific Error Documents for 401/403/500's,
            # but for now the 404 page is good enough
            ErrorDocument 401 /404.html
            ErrorDocument 403 /404.html
            ErrorDocument 404 /404.html
            ErrorDocument 500 /404.html
        </VirtualHost>

  • Secret, unlogged, transparent, case-sensitive proxy in IIS6?

    - by Ian Boyd
    Does IIS have a secret, unlogged, transparent, case-sensitive proxy built into it? A file exists on the web server:

        GET http://www.stackoverflow.com/javascript/ModifyQuoteArea.js HTTP/1.1
        Accept: text/html, application/xhtml+xml, */*
        Accept-Language: en-US
        User-Agent: Mozilla/5.0 (compatible; MSIE 9.0; Windows NT 6.1; WOW64; Trident/5.0)
        Accept-Encoding: gzip, deflate
        Connection: Keep-Alive
        Host: www.stackoverflow.com

        HTTP/1.1 200 OK
        Connection: Keep-Alive
        Content-Length: 29246
        Date: Mon, 07 Mar 2011 14:20:07 GMT
        Content-Type: application/x-javascript
        ETag: "5a0a6178edacb1:1c51"
        Server: Microsoft-IIS/6.0
        Last-Modified: Fri, 02 Tue 2010 17:03:32 GMT
        Accept-Ranges: bytes
        X-Powered-By: ASP.NET
        ...

    The problem is that changes made to the file will not get served; the old (i.e. February of last year) version keeps getting served:

        HTTP/1.1 200 OK
        Connection: Keep-Alive
        Content-Length: 29246
        Date: Mon, 07 Mar 2011 14:23:07 GMT
        Content-Type: application/x-javascript
        ETag: "5a0a6178edacb1:1c51"
        Server: Microsoft-IIS/6.0
        Last-Modified: Fri, 02 Tue 2010 17:03:32 GMT
        Accept-Ranges: bytes
        X-Powered-By: ASP.NET
        ...

    The same old file gets served, even though we've:
        renamed the file
        deleted the file
        restarted IIS

    The request for this file does not appear in the IIS logs (e.g. C:\WINNT\System32\LogFiles\W3SVC7\), and this only happens from the outside (i.e. the internet). If you issue the request locally on the server, then you will get the current file (file there), a 404 (file renamed), or a 404 (file deleted). But if I change the case of the requested resource, i.e.:

        GET http://www.stackoverflow.com/javascript/MoDiFyQuOtEArEa.js HTTP/1.1
        Accept: text/html, application/xhtml+xml, */*
        Accept-Language: en-US
        User-Agent: Mozilla/5.0 (compatible; MSIE 9.0; Windows NT 6.1; WOW64; Trident/5.0)
        Accept-Encoding: gzip, deflate
        Connection: Keep-Alive
        Host: www.stackoverflow.com

    (note: MoDiFyQuOtEArEa.js versus ModifyQuoteArea.js), then I do get the proper file (or the 404 I expect if the file is renamed or deleted). But any subsequent changes to the file will not show up until I change the case of the file I'm asking for. Checking the IIS logs, all indications are that the (internet) requests are coming from the correct client on the internet (i.e. not from some intermediate proxy). Since the file doesn't exist on the hard drive anymore, I conclude that there is a proxy. The requests serviced from this proxy are not logged in the IIS logs. The requests for new files are logged, and from the client IP, not a proxy IP. The proxy is case-sensitive. This does not sound like something Microsoft, or IIS, would do:
        a transparent proxy
        case-sensitive
        unlogged
        surviving restarts of IIS
        surviving in a cache for hours
    I can't believe that our customer's IIS is doing these things. I'm assuming there is some other transparent proxy in front of IIS. Or does IIS have a transparent, unlogged, case-sensitive, memory-based proxy that caches content for at least 7 hours? (Come Monday morning, IIS is serving the correct file, unlogged.)
