Search Results

Search found 7159 results on 287 pages for 'green hosting'.


  • Loading a gallery of external images with jQuery

    - by pingu
    Hi guys, I'm creating a gallery of images pulled in via a Twitter image-hosting service, and I want to show a loading graphic before each image is displayed, but the logic seems to be failing me. I'm trying to do it the following way:

        <ul>
          <li><div class="holder loading"></div></li>
          <li><div class="holder loading"></div></li>
          <li>...</li>
          <li>...</li>
        </ul>

    where the "loading" class has an animated spinner set as its background image. The jQuery:

        $('.holder').each(function(){
          var imgsrc = 'http://example.com/imgsrc';
          var img = new Image();
          $(img).load(function () {
            $(this).hide();
            $(this).parent('.holder').removeClass('loading');
            $(this).parent().append(this);
            $(this).fadeIn();
          })
          .error(function () { })
          .attr('src', imgsrc);
        });

    This isn't working - I'm not sure the logic is correct, or how to access the "holder" from inside the nested img.load callback. What would be the best way to build this gallery so that the images are displayed only after they have loaded?
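    A minimal sketch of one way to keep a reference to each holder (assuming each holder's image URL is known inside the loop; the data-src attribute here is an assumption, not part of the original markup):

        $('.holder').each(function () {
          var holder = $(this);                    // captured by the closures below
          var imgsrc = holder.data('src');         // hypothetical per-holder URL
          $('<img/>').hide()
            .load(function () {
              holder.removeClass('loading').append(this);
              $(this).fadeIn();
            })
            .error(function () {
              holder.removeClass('loading');       // drop the spinner even on failure
            })
            .attr('src', imgsrc);
        });

    The key difference from the original is that the holder is captured in a local variable, so the load callback does not have to reach it via parent() before the image has been appended.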

    Read the article

  • details in one Row without any redundancy in the CATID

    - by alkitbi
    $query1 = "select * from linkat_link where emailuser = '$email2' or linkname ='$domain_name2' ORDER BY date desc LIMIT $From,$PageNO"; id   catid      discription            price ------------------------------------ 1         1     domain name     100 2         1      book                  50 3        2      hosting               20 4        2      myservice           20 in this script i have one problem , if i have an ID for Each Cantegory , i have some duplicated CATID which has different content but shares the same CATID, i need to make any duplicated CATID to show in one , and all the discription will be in the same line (Cell) on the same row . So Each CatID will have all the details in one Row without any redundancy in the CATID

    Read the article

  • I would like to interactively detect when an ActiveX component has been installed, and asynchronously refresh part of the page

    - by Brian Stinar
    Hello, I am working on a website, and I would like to refresh a portion of the page after an ActiveX component has been installed. I have a general idea of how to do this with polling, which I am working on getting going:

        function detectComponentThenSleep() {
          try {
            // Call what I want ActiveX for, if the method is available, or
            // ActiveXComponent.object == null --- test for existence
            document.getElementById("ActiveXComponent").someMethod();
          } catch (e) {
            // Try again, if the method is not available
            setTimeout(detectComponentThenSleep, 100);
          }
        }

    However, what I would REALLY like to do is something like this:

        ActiveXObject.addListener("onInstall", myfunction);

    I don't actually have the source for the ActiveX component, but I have complete control of the page I am hosting it on. I would like to use JavaScript, if possible, to accomplish this. So, my questions are: 1) will this actually work with the polling method? and 2) is there an interrupt/listener-like way of doing this? I am sure I am missing something with connecting the dots here - I can already detect if the component is present, but I am having trouble doing this asynchronously. Thank you very much for your time and help, -Brian J. Stinar-
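    A minimal sketch of the polling approach with the follow-up action wired in (the onInstalled callback is hypothetical - it stands in for whatever refreshes the relevant part of the page):

        function pollForComponent(onInstalled) {
          var timer = setInterval(function () {
            try {
              // Probe the control; this throws until the component is installed.
              document.getElementById("ActiveXComponent").someMethod();
              clearInterval(timer);
              onInstalled();
            } catch (e) {
              // Not ready yet - keep polling.
            }
          }, 100);
        }

        pollForComponent(function () {
          // e.g. re-render the section of the page that depends on the control
        });

    Without knowing what events, if any, the control itself fires, a timer like this is the usual fallback for detecting the install from page script.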

    Read the article

  • IIS6, ASP.NET MVC 1 and random slowdowns

    - by Mr Snuffle
    I've recently deployed an MVC application to an IIS6 web server. One strange behaviour I've been having is that load times will randomly blow up to 30+ seconds and then return to normal. Our tests have shown this occurring on multiple connections at the same time. Once the wait has passed, the site becomes responsive again. It's completely random when this will occur, but it probably happens about once every 15 minutes or so. My first thought was that the application was being restarted by the web server for some reason, but I determined this wasn't the case, because process recycling is set very infrequently and I placed some logging in the application startup. It's also nothing to do with the database connection - this slowdown happens simply when moving between static pages too. I've watched the database with a SQL profiler, and nothing is hitting it when these slowdowns occur. Finally, I've placed entry and exit logging on my controller actions; the slowdown always happens outside of the controller, and the entry and exit times for a controller action are always appropriately fast. Does anyone have any ideas of what could be causing this? I've tried running it locally on IIS7 and I haven't had the issue. I can only think it's something to do with our hosting provider.

    Read the article

  • Can this code cause a "500" internal server error?

    - by Scott B
    A few of my customers are reporting that they are getting "500" Internal Server errors lately. I believe it might be caused by various plugins they are using, but each time, the hosting company (multiple hosts) says that the .htaccess file had to be replaced to fix the issue. I'm submitting the code below from my custom theme because it's the only place where I trigger an .htaccess write, and I want to be sure that there are no problems here that could contribute to the 500 errors...

        if (file_exists(ABSPATH.'/wp-admin/includes/taxonomy.php')) {
            require_once(ABSPATH.'/wp-admin/includes/taxonomy.php');
            if (get_option('permalink_structure') !== "/%postname%/" || get_option('mycustomtheme_permalinks') !== "/%postname%/") {
                $mycustomtheme_permalinks = get_option('mycustomtheme_permalinks');
                require_once(ABSPATH . '/wp-admin/includes/misc.php');
                require_once(ABSPATH . '/wp-admin/includes/file.php');
                global $wp_rewrite;
                $wp_rewrite->set_permalink_structure($mycustomtheme_permalinks);
                $wp_rewrite->flush_rules();
            }
            if (!get_cat_ID('topMenu')) { wp_create_category('topMenu'); }
            if (!get_cat_ID('hidden')) { wp_create_category('hidden'); }
            if (!get_cat_ID('noads')) { wp_create_category('noads'); }
        }
        if (!is_dir(ABSPATH.'wp-content/uploads')) {
            mkdir(ABSPATH.'wp-content/uploads');
        }
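    If the .htaccess write itself turns out to be the trigger, a minimal sketch of a gentler variant (an assumption, not a confirmed fix): WP_Rewrite::flush_rules() accepts a $hard flag, and passing false rebuilds the rewrite rules without rewriting .htaccess.

        global $wp_rewrite;
        $wp_rewrite->set_permalink_structure($mycustomtheme_permalinks);
        $wp_rewrite->flush_rules(false);   // soft flush: no .htaccess write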

    Read the article

  • RegisterStartupScript doesn't appear to be working on page postback within update panel

    - by Jen
    OK - so I am working on a system that uses a custom datepicker control (I know there are other ones out there, but for consistency I would like to understand why my current issue is happening and fix it). It's a custom user control with a textbox, and on Page_PreRender it does this:

        protected void Page_PreRender(object sender, EventArgs e)
        {
            string clientScript = @"
                $(function(){
                    $('#" + this.Date1.ClientID + @"').datepicker({dateFormat: 'dd/mm/yy', constrainInput: true});
                });";
            Page.ClientScript.RegisterStartupScript(this.GetType(), this.ClientID, clientScript, true);
            //Type t = this.GetType();
            //if (!Page.ClientScript.IsStartupScriptRegistered(t, this.ClientID))
            //{
            //    Page.ClientScript.RegisterStartupScript(t, this.ClientID, clientScript, true);
            //}
        }

    Ignore the commented-out code - that was me trying something different, which didn't help. My issue is that this all works fine when I load the page, but if I select something from a dropdownlist that causes a postback, the date fields stop working when I click into them. Normally I can click into the textbox and a nice calendar control appears, but after the postback no calendar control appears. It's currently all wrapped (in the hosting page) inside an UpdatePanel. If I comment out the UpdatePanel, the dates keep working after the postback, so it appears to be something related to that UpdatePanel. Any suggestions please? Thanks!!
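    A minimal sketch of one common approach for scripts that must survive partial postbacks (not confirmed as the fix for this particular control, and it assumes a ScriptManager is present on the hosting page): register through the ScriptManager instead of Page.ClientScript, since scripts registered that way are re-emitted after UpdatePanel updates as well as on full loads.

        protected void Page_PreRender(object sender, EventArgs e)
        {
            string clientScript = "$(function(){ $('#" + this.Date1.ClientID +
                "').datepicker({dateFormat: 'dd/mm/yy', constrainInput: true}); });";

            // Re-registered on every asynchronous (UpdatePanel) postback,
            // which Page.ClientScript.RegisterStartupScript does not do.
            ScriptManager.RegisterStartupScript(this, this.GetType(), this.ClientID, clientScript, true);
        }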

    Read the article

  • What to use for version control with Visual Studio 2008 for inhouse projects?

    - by Boog
    We want to put a number of our in-house projects under version control. Our projects are C# .NET applications and assemblies. We originally decided to go Microsoft all the way (as is the norm around here) and tried installing Visual Studio Team Foundation Server. To say the least, getting a successful install was way more trouble than it's worth, and I'm afraid we don't have an entire server to dedicate to TFS itself, as the installer seems to insist. We also considered a generic solution like Subversion or CVS, or possibly some kind of free online hosting that doesn't make our source publicly available under some license, as Google Code appears to. Does anyone have any suggestions that would best fit our in-house MS environment? We'd also get some benefit out of project management tools if they were nicely integrated into the solution, but this would only be a perk. I should also mention that we haven't entirely ruled out TFS, but it's looking like a pain, so anything you have to say for or against it would be helpful.

    Read the article

  • Java EE Website Planning Questions

    - by Tom Tresansky
    I'm a .NET programmer who is soon moving to the Java EE world. I have plenty of experience with .NET web technologies: web services, WebForms and MVC. I am also very familiar with the Java language, and have written a few servlets and modified a couple of JSP pages, but I haven't touched EE yet. I'd like to set up a public website using Java EE so I can familiarize myself with what's current. I'm thinking just a technology playground at this point, with no particular purpose in mind. What Java technologies are the current hotness for this sort of thing? (For example, if someone asked me what I'd recommend for setting up a new .NET site, I'd say use ASP.NET MVC instead of WebForms and recommend LINQ-to-SQL as a quick, simple and widely used ORM.) So, what I'd like to know is: Is there a recommended technology for the presentation layer? Is JSP considered a good approach, or is there anything cleaner/newer/more widespread? Is Hibernate still widely used for persistence? Is it obsolete? Is there anything better out there? (I've worked with NHibernate some, so I wouldn't be starting from scratch.) Is cheap Java EE web hosting available? What should I know as a .NET web developer moving to the Java world?

    Read the article

  • divert asp.net child controls

    - by Eric
    Hi folks, I am trying to create an ASP.NET custom control that acts as a hosting container for any other controls, similar to the existing 'Panel' control. Basically, I need to build a web control that groups a bunch of other controls. It will consist of a header and a body pane, similar to a normal window in a desktop application. The header will contain some simple text and some JavaScript-driven code that shows/hides the body pane. The body pane simply hosts any number of other controls.

        +------------------------------------------------------+
        | User Details                                Show/Hide |
        +------------------------------------------------------+
        | Name:           [Eric      ]                          |
        | Address:        [Some where]                          |
        | Date of Birth:  [01/01/1980]                          |
        |                                                       |
        | (any other fields go on)                              |
        |                                                       |
        +------------------------------------------------------+

    Ideally I want to create a control that packs the whole thing together, so at design time I could use the following markup:

        <myCtl:SuperContainer runat="server" Title="User Details">
          <asp:label id="lblName" runat="server" text="Name:"/>
          <asp:textbox id="txtName" runat="server"/>
          <asp:label id="lblDOB" runat="server" text="Date of Birth:"/>
          <asp:textbox id="txtDOB" runat="server"/>
          (...other controls definition...)
        </myCtl:SuperContainer>

    I plan to include two panels in my control, one for the header and another for the body, but as you can see, the key issue is to find a way to 'divert' the child controls that are defined in the markup to the body panel, instead of the default parent container. I feel it can somehow be done by simply overriding (manipulating) the Controls property, but I don't know how to do so properly. Can anyone give me some idea about how to implement this 'SuperContainer' control? Many thanks, Eric
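    A minimal sketch of one way to divert the declared children into a body panel (a rough outline only - naming-container, ViewState and designer concerns are left out, and the CSS class names are made up):

        using System.Collections.Generic;
        using System.Web.UI;
        using System.Web.UI.WebControls;

        [ParseChildren(false), PersistChildren(true)]
        public class SuperContainer : WebControl, INamingContainer
        {
            public string Title { get; set; }

            protected override void CreateChildControls()
            {
                // The controls declared inside the tag have already been parsed
                // into this.Controls; lift them out before rebuilding the tree.
                var declared = new List<Control>();
                foreach (Control c in Controls)
                {
                    declared.Add(c);
                }
                Controls.Clear();

                var header = new Panel { CssClass = "superContainerHeader" };
                header.Controls.Add(new Literal { Text = Title });

                var body = new Panel { CssClass = "superContainerBody" };
                foreach (Control c in declared)
                {
                    body.Controls.Add(c);   // the "diversion" into the body pane
                }

                Controls.Add(header);
                Controls.Add(body);
            }
        }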

    Read the article

  • RewriteCond and RewriteRule newbie

    - by mybrokengnome
    I'm taking over a website for a client that is running on a custom-built CMS (that I didn't write). I don't usually mess with .htaccess files, because a lot of the hosting I do is on IIS, or I use WordPress as a CMS and don't have to worry about the .htaccess file. Here's the contents of the file:

        RewriteEngine on
        RewriteCond %{REQUEST_FILENAME} !-f
        RewriteRule ^(.*)$ framework.php?%{QUERY_STRING}&resource=$1& [L]

    I get what it's doing (sending all requests through the framework.php file). The client wants a WordPress blog added to their site, and I'm placing it in a /blog/ folder. The problem is that, because of the rewrite rules and conditions in the .htaccess file, whenever I try to go to /blog/ the other CMS freaks out, because it doesn't like me trying to go there. My question is: how do I write a rule/condition that tells Apache to send all requests made to the /blog/ folder to the /blog/ folder, but keep all other requests piped through the framework.php file as they are now? Any help is appreciated, thanks!
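    A minimal sketch of one way to carve out /blog/ before the catch-all rule runs (assuming the blog physically lives in a /blog/ directory under the same document root):

        RewriteEngine on

        # Pass anything under /blog/ straight through, untouched.
        RewriteRule ^blog/ - [L]

        # Everything else still goes through the CMS front controller.
        RewriteCond %{REQUEST_FILENAME} !-f
        RewriteRule ^(.*)$ framework.php?%{QUERY_STRING}&resource=$1& [L]

    The dash in the first rule means "no substitution", and [L] stops processing so the framework.php rule never sees blog requests. WordPress's own .htaccess inside /blog/ can then handle its permalinks as usual.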

    Read the article

  • How much memory would I need for something like this?

    - by Vdas Dorls
    If I create a social portal similar to Twitter (not exactly Twitter, but similar, in my country), how much space would I need for images? I guess 100 GB of disk space won't be enough, so could you please give me some idea of how much I would actually need? Are there any suggestions for how I should handle the profile images - any tactics on the programming side that would save space when uploading and hosting images for profile users? I guess that each time a user changes their image, it would be good to delete the previous image, correct? Additionally, how much disk space would be needed for 1,000,000 user profiles, if we have about 15 default images and some of the users won't upload their own images but will use one of the defaults instead? So, 3 questions: How much disk space would I need to run a good social portal? Is there a suggested way to deal with pictures in PHP to save disk space? How much disk space would be needed for 1,000,000 user profiles, if we have about 15 default images and some of the users use one of those defaults instead of uploading their own?
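    A rough back-of-the-envelope sketch for the last question (every number here is an assumption for illustration): if avatars are resized server-side to roughly 50 KB and about 60% of the 1,000,000 users upload their own image, the uploads come to about

        1,000,000 users x 0.6 uploads x 50 KB = 30,000,000 KB, i.e. roughly 30 GB

    plus a negligible amount for the 15 shared default images. Halving the average file size or the upload rate halves the total, so the per-image size limit enforced at upload time matters far more than the number of defaults.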

    Read the article

  • Configuring Joomla's e-mail notification for new accounts

    - by Dion
    I'm using Joomla 1.5 to create a local site for my office. The site will be accessed locally via the intranet, and my PC will be the local host for the site. I'm using a login plugin, so anyone who wants to enter the site must create an account. In Joomla, every user who creates an account for the first time receives a notification e-mail like:

        "Hello pras,
        You have been added as a User to Information Center by an Administrator.
        This e-mail contains your username and password to log in to http://localhost/yaddayadda/
        Username: hadisuryo.prasetio
        Password: xxxx
        Please do not respond to this message as it is automatically generated and is for information purposes only."

    But if the user clicks the URL in the mail, which is "localhost/yaddayadda/", they will not be directed to my site, but to their own PC's localhost. My question is: how can I modify the e-mail or the site configuration so that the URL will no longer be "localhost/yaddayadda/", but will be "(my IP address)/yaddayadda"? I'm not going to host my site with a web hosting service, just using my PC as a host. I've been trying to trace through each config and .ini file... it seems that I have to do something with the "JURI" function or the "$mosConfig_live_site" in the backlink.php file:

        $mosConfig_absolute_path = JPATH_SITE;
        $mosConfig_live_site = JURI :: base();
        $url_array = explode('/', $_SERVER['REQUEST_URI']);

    Can anyone give me assistance? Thank you.
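    A minimal sketch of one common approach (an assumption, not a confirmed fix for this setup): pin the base URL in Joomla 1.5's configuration.php so that JURI::base() - and therefore the link generated into the notification e-mail - uses the machine's LAN address instead of whatever host the page was requested on. The IP below is a placeholder.

        // configuration.php (Joomla 1.5)
        var $live_site = 'http://192.168.1.10/yaddayadda';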

    Read the article

  • Why does my CGI script keep redirecting links to localhost?

    - by Noah Brainey
    Visit this page http://online-file-sharing.net/tos.html and click one of the bottom footer links - it redirects you to your localhost in the address bar, and I have no idea why it does this. This is in the main script that my entire website revolves around, upload.cgi:

        $ENV{PATH} = '/bin:/usr/bin';
        delete @ENV{'IFS', 'CDPATH', 'ENV', 'BASH_ENV'};
        ($ENV{DOCUMENT_ROOT}) = ($ENV{DOCUMENT_ROOT} =~ /(.*)/); # untaint.
        #$ENV{SCRIPT_NAME} = '/cgi-bin/upload.cgi';
        use lib './perlmodules';
        #use Time::HiRes 'gettimeofday';
        #my $hires_start = gettimeofday();
        my (%PREF,%TEXT) = ();

    No file is displayed when someone visits the root directory, although I have a .htaccess file saying to open my upload.cgi file, which is located in my root directory. When I point my browser directly at the CGI file it works, but it again takes me to my localhost. I'm hosting this website on my own server, which is this computer, using XAMPP, if that information helps. I'm also using DynDNS for my nameservers. I hope you can give me some insight.

    Read the article

  • What is a good automated data import method for SQL Server?

    - by Joel Potter
    I'm in the process of porting some SQL Server 2005 databases to SQL Server 2008. One of these databases has an associated import application (a Windows task) which uses SSIS with a DTS package to import a large dataset from an MS Access database nightly. In upgrading to SQL Server 2008, I discovered that I can't run the same console application that has been performing the imports, due to the missing manageddts DLL in SQL Server 2008. It's several years old and in need of a rewrite for various reasons; plus, I've been fairly unhappy with DTS in general. The original reason DTS was chosen was speed (a 5-minute import time compared to 30+ for ADO.NET). The format of the data to import is out of my control (the client likes Access). I would also like to be able to run the import from a machine completely separate from the server hosting SQL Server, preferably with minimal SQL features installed. Options I've considered:

        - Creating an Access application to connect to both databases (SQL Server and Access) and perform the import (ugh!)
        - Revisiting ADO.NET to see if the original implementation was poorly written
        - Updated SSIS packages

    What other technologies should I be considering for this job?
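    If the ADO.NET option gets revisited, a minimal sketch of the bulk-load path (SqlBulkCopy), which usually comes much closer to DTS speeds than row-by-row inserts; the connection strings and table names below are placeholders, not taken from the original setup:

        using System.Data;
        using System.Data.OleDb;
        using System.Data.SqlClient;

        class NightlyImport
        {
            static void Main()
            {
                // Read straight from the Access file...
                using (var access = new OleDbConnection(
                           "Provider=Microsoft.ACE.OLEDB.12.0;Data Source=nightly.accdb"))
                using (var cmd = new OleDbCommand("SELECT * FROM SourceTable", access))
                {
                    access.Open();
                    using (IDataReader reader = cmd.ExecuteReader())
                    // ...and stream the rows into SQL Server in batches.
                    using (var bulk = new SqlBulkCopy(
                           "Server=sqlhost;Database=Target;Integrated Security=SSPI"))
                    {
                        bulk.DestinationTableName = "dbo.SourceTable";
                        bulk.BatchSize = 5000;
                        bulk.WriteToServer(reader);
                    }
                }
            }
        }

    This only needs the ADO.NET client pieces on the machine running the task, which fits the requirement of keeping the import box mostly free of SQL Server features.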

    Read the article

  • Replacement for Hamachi for SVN access

    - by Piers
    My company has been using Hamachi to access our SVN repository for a number of years. We are a small yet widely distributed development team, with each programmer in a different country working from home. The server is hosted by a non-techie in our central office; Hamachi is useful here since it has a GUI and supports remote management. This system worked well for a while, but recently I moved to a country with poor internet speeds. Hamachi will no longer connect 99% of the time - instead I get a "Probing..." message that doesn't resolve. It's certain to be a latency issue, as the same laptop connects without problems when I cross the border and use a different ISP with better speeds. So I really need to replace Hamachi with some other VPN/protocol that handles latency better. The techie managing the repository is not comfortable installing and configuring Apache or IIS, so it looks like HTTP is out. I tried to convince my boss to go with a web hosting company, but he doesn't trust a third party with our source. Are there any other recommended options or experiences out there for accessing our SVN repos - something as simple as Hamachi to set up, but more tolerant of network latency issues?

    Read the article

  • What makes a Tomcat 5.5 unable to be "aware" of new Java web applications?

    - by Michael Mao
    This is for uni homework, but I reckon it is more a generic problem with the Tomcat server (version 5.5.27) at my uni. The problem is, I first built a skeleton Java web application (just a simple servlet and a welcome file, nothing complicated, no libs included) using NetBeans 6.8 with the bundled Tomcat 6.0.20 (localhost:8084/WSD). Then, to test and prove it is "portable" and "auto-deploy-able", I cleaned and built a WSD.war file and dropped it onto my XAMPP Tomcat (localhost:8080/WSD). The war extracted everything accordingly and I can see identical output from this Tomcat. So far, so good. However, after I dropped the war onto the uni server, a funny thing happens: even though I've changed the war's permissions to 755, it is simply not "responding". I then copied the extracted files to the uni server, but the MainServlet cannot be recognized from within its context path "/WSD"; basically nothing works except the static index.jsp. I tried several times to stop and restart the uni Tomcat, but it doesn't help. I wonder what makes this happen? Is there anything I did wrong with my approach? To be frank, I've paid no attention to a server not under my control, and I am unfortunately not an active day-to-day Java programmer now. I understand the fundamentals of MVC, servlets, JSPs and JavaBeans, but I really feel frustrated by this, as I cannot see why... Or, should I ask: is a Java web application, after being cleaned and built by NetBeans 6.8, self-contained and self-configured, ready to be deployed to any Java web container? I know I could certainly program everything in plain old JSP, but that is soooo... unacceptable to myself... Update: I am now wondering if there is any free Tomcat hosting, so that I could see whether my war file and/or my web app would work there without any configuration at all.
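    For completeness, a minimal sketch of the Tomcat-side settings that govern whether a dropped-in WAR gets picked up (an assumption about the uni server's setup, since its conf/server.xml isn't visible here): the Host element must allow automatic deployment.

        <!-- conf/server.xml -->
        <Host name="localhost" appBase="webapps"
              unpackWARs="true" autoDeploy="true" deployOnStartup="true">
        </Host>

    If the administrator has turned autoDeploy off, new WARs are only noticed when a context is registered explicitly or the server is restarted with deployOnStartup enabled.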

    Read the article

  • Why can't I simply copy installed Perl modules to other machines?

    - by pistacchio
    Being very new to Perl but not to dynamic languages, I'm a bit surprised at how un-straightforward the management of modules is. Sure, "cpan X" does theoretically work, but I'm working on the same project from three different machines and OSs (at work, at home, and testing in an external environment).

        - At work (Windows 7) I have problems using cpan because our firewall makes FTP unusable.
        - At home (Mac OS X) it does work.
        - In the external environment (Linux CentOS) it worked, after hours, because I don't have root access and had to configure cpan to operate as a non-root user.
        - I've tried another server where I have access. Where the previous external environment is a VPS, so I have shell access, this other one is cheap shared hosting where I have no way to install new modules other than the pre-installed ones.

    At the moment I still can't install Template under Windows. I've seen that as an alternative I could compile it, and I've also tried ActiveState's PPM, but the module doesn't exist there. Now, my perplexity is about Perl being a dynamic language. I've had these kinds of problems while working, for example, with C, where I had to compile all the libraries for each platform, but I thought that with Perl the approach would be very similar to Python's or PHP's, where in 90% of cases copying the module into a directory and importing it simply works. So, my questions: if Perl's modules are written in Perl, why does the copy/paste approach not work? If some (or some part) of the modules have to be compiled, how can I see on CPAN whether a module is Perl-only or relies on compiled libraries? Isn't there a way to download the module (tar, zip...) and use cpan to deploy it? This would solve my problem under Windows.
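    A minimal sketch of the manual download-and-install route for a distribution fetched by hand (the archive name is a placeholder, and this assumes a Unix-like shell and a pure-Perl distribution - anything with XS parts additionally needs a C compiler, which is exactly why plain copy/paste fails for those modules):

        tar xzf Some-Module-1.00.tar.gz
        cd Some-Module-1.00
        perl Makefile.PL PREFIX=~/perl5   # or Build.PL for Module::Build distributions
        make
        make test
        make install

    Whether a distribution needs compiling is visible from its contents: the presence of .xs files (or a C-compiler requirement noted in the README) marks it as not copy/paste portable.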

    Read the article

  • Script to install and compile Python, Django, Virtualenv, Mercurial, Git, LessCSS, etc... on Dreamhost

    - by tmslnz
    The story: after cleaning up my Dreamhost shared server's home folder from all the cruft accumulated over time, I decided to start afresh and compile/reinstall Python. All the tutorials and snippets I found seemed overly simplistic, assuming (or ignoring) a bunch of dependencies needed by Python to compile all modules correctly. So, starting from http://andrew.io/weblog/2010/02/installing-python-2-6-virtualenv-and-virtualenvwrapper-on-dreamhost/ (so far the best guide I found), I decided to write a set-and-forget Bash script to automate this painful process, including along the way a bunch of other things I am planning to use.

    The script: I am hosting the script at http://bitbucket.org/tmslnz/python-dreamhost-batch/src/

    The TODOs: so far it runs fine, and does all it needs to do in about 900 seconds, giving me at the end of the process a fully functional Python / Mercurial / etc. setup without even needing to log out and back in. I thought this might be of use to others too, but there are a few things that I think it's missing, and I am not quite sure how to go about them, what the best way to do them is, or whether they even make sense at all:

        - Check for errors and break (see the sketch below)
        - Check for minor version bumps of the packages and give warnings
        - Check for known dependencies
        - Use arguments to install only some of the packages instead of commenting out lines
        - Organise the code in a manner that's easy to update
        - Optionally make the installers and compiling silent, with error logging to file
        - Failproof .bashrc modification, to prevent breaking ssh logins and having to log back in via FTP to fix it

    EDIT: the implied question is: can anyone, more bashful than me, offer general advice on the worthiness of the above points or highlight any problems they see with this approach? (See my answer to Ry4an's comment below.)

    The gist: I am no UNIX or Bash or compiler expert, and this has been built iteratively, by trial and error. It is somehow going towards apt-get (well, 1% of it...), but since Dreamhost and others obviously cannot give root access on shared servers, this looks to me like a potentially very useful workaround, particularly so with some community work involved.
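    A minimal sketch of the "check for errors and break" item (assuming bash; the message text is arbitrary):

        #!/bin/bash
        set -euo pipefail   # abort on the first failed command, unset variable, or broken pipe
        trap 'echo "Install failed at line $LINENO" >&2' ERR

        # ...existing download/configure/make steps go here unchanged...

    With set -e in place, each compile step either succeeds or stops the whole run, which avoids silently building later packages against a half-installed earlier one.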

    Read the article

  • AJAX Post Not Sending Data?

    - by Jascha
    I can't for the life of me figure out why this is happening. This is kind of a repost, so forgive me, but I have new data. I am running a JavaScript log-out function called logOut() that makes a jQuery ajax call to a PHP script...

        function logOut(){
            var data = new Object;
            data.log_out = true;
            $.ajax({
                type: 'POST',
                url: 'http://www.mydomain.com/functions.php',
                data: data,
                success: function() {
                    alert('done');
                }
            });
        }

    The PHP function it calls is here:

        if(isset($_POST['log_out'])){
            $query = "INSERT INTO `token_manager` (`ip_address`) VALUES('logOutSuccess')";
            $connection->runQuery($query); // <-- my own database class...
            // omitted code that clears session etc...
            die();
        }

    Now, 18 hours out of the day this works, but for some reason, every once in a while, the POST data will not trigger my query (this will last about an hour or so). I figured out the POST data is not being set by adding this at the end of my script:

        $query = "INSERT INTO `token_manager` (`ip_address`) VALUES('POST FAIL')";
        $connection->runQuery($query);

    So now I know for certain my log-out branch is being skipped: my database fills with "POST FAIL" rows where, if it were NOT being skipped, "logOutSuccess" entries would appear instead. I know it is being skipped for two reasons: one, the die() at the end of my first block, and two, if it were a success a "logOutSuccess" would be registered in the table. Any thoughts? One friend says it's a janky hosting company (hostgator.com). I personally like them because they are cheap and I'm a fan of cPanel. But, if that's the case??? Thanks in advance. -J
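    A minimal sketch of a more defensive version of the call, useful for narrowing the failure down (the relative URL and the error handler are suggestions, not a confirmed fix - a redirect between www and non-www hosts, for example, can silently drop a POST body):

        function logOut() {
            $.ajax({
                type: 'POST',
                url: '/functions.php',   // relative URL: no cross-host redirect can intervene
                data: { log_out: true },
                success: function () { alert('done'); },
                error: function (xhr, status, err) {
                    alert('logout failed: ' + status);   // surfaces timeouts and HTTP errors
                }
            });
        }

    Logging $_SERVER['REQUEST_METHOD'] alongside the "POST FAIL" insert on the PHP side would also show whether the request is arriving as a GET after a redirect.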

    Read the article

  • Get subdomain of XML page from XSL

    - by fudgey
    I am working with a guild hosting site that provides an XML/XSL transformation widget where all I need to do is enter the URL for the XML and the XSL, and it does the rest. I have written an XSL transform to display a World of Warcraft armory page. Here is an example XML page (view source to see it) for the group I'm trying to help now. The user enters their own XML page URL (which has an "eu" subdomain in this case, but that is not within the XML itself). So when I make links to the character URL, I need to add the entire URL:

        <a target="_blank" href="http://eu.wowarmory.com/character-sheet.xml?{@url}">
          <xsl:value-of select="@name"/>
        </a>

    But I can't just hard-code the domain to "eu", since there are multiple regions. Here are the possibilities: us = www, europe = eu, korea = kr, china = cn and taiwan = tw. Here is a snippet of the XML which shows the url parameters:

        <character achPoints="4275" classId="3" genderId="0" level="80" name="Virtex" raceId="4" rank="2" url="r=Drek%27Thar&amp;cn=Virtex"/>

    I guess I could just have the user add a small bit of HTML with their region in another part of their page, something like <div id="region">eu</div>, but I'm trying to make this work without any extra coding on their part. Edit: OK, my question explicitly stated: how do I get the URL subdomain using XSL?
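    An XSLT 1.0 stylesheet only sees the XML document itself, not the URL it was fetched from, so the subdomain has to be supplied from outside. A minimal sketch of one way to do that, assuming the widget (or the hosting page) can pass stylesheet parameters - the parameter name is made up:

        <xsl:param name="region" select="'www'"/>

        <a target="_blank" href="http://{$region}.wowarmory.com/character-sheet.xml?{@url}">
          <xsl:value-of select="@name"/>
        </a>

    The {$region} attribute value template drops the parameter straight into the href, and the select="'www'" default keeps US links working when no parameter is passed.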

    Read the article

  • connecting to secure database from website host

    - by jim
    Hello all, I've got a requirement to both read and write data via a .NET web service to a SQL Server database that's on a private network. This database is currently accessed via a VPN connection by remote client software (on standard desktop machines) to get the latest product prices and to upload product stock sales. I've been tasked with finding a way to centralise this access through a web service that the clients then call, rather than having them use the VPN route to connect directly to the database. My question is about my .NET service's relationship to the SQL Server database: what are the options for connecting to a private-network VPN from a domain host, in order to allow the web service to both read and write data to the database? For now, I'm not too concerned about the client connectivity and security (though I appreciate that this will have to be worked out too); I'm really just interested in discovering the options available for allowing my .NET web service to connect to the private network in as painless and transparent a way as possible. Switching the database onto public hosting is not an option, so I have to work with the scenario as described above for now, unless there's a compelling rationale presented to do otherwise. Thanks all... jim

    Read the article

  • Jetty 7 will not allow me to customize a session cookie path

    - by Bob Obringer
    Using Jetty 7.0.2, I am unable to set a custom session cookie path. I am hosting multiple sites on the same server, using Apache to proxy requests to the proper context. (I've replaced "http" with "htp" below, as Stack Overflow thinks my multiple links might be spam.)

        <VirtualHost *:80>
            ServerName context.domain.com
            ProxyRequests On
            ProxyPreserveHost Off
            <Proxy *:80>
                Order deny,allow
                Allow from 127.0.0.1
            </Proxy>
            ProxyPass / htp://localhost:8080/context/
            ProxyPassReverse / htp://localhost:8080/context/
            <Location />
                Order allow,deny
                Allow from all
            </Location>
        </VirtualHost>

    Jetty is running on the same server on port 8080, and my context is available at /context. The user accesses the application at htp://context.domain.com, but Jetty is setting the path for the session cookie to /context. This prevents the browser from sending the cookie back, since the actual path being requested is not the context path. I need to override Jetty's default, setting the cookie for the context but with its path at the root ( / ). In my Jetty's webdefault.xml I have the following, which is partially working:

        <context-param>
            <param-name>org.eclipse.jetty.servlet.SessionCookie</param-name>
            <param-value>CustomCookieName</param-value>
        </context-param>
        <context-param>
            <param-name>org.eclipse.jetty.servlet.SessionPath</param-name>
            <param-value>/</param-value>
        </context-param>

    The cookie is properly set with a custom name, but the SessionPath is NOT being applied. No matter what I set the value to, Jetty refuses to set a cookie at any path but /context. This has been driving me crazy, so any help would be greatly appreciated.

    Read the article

  • How to find Tomcat's PID and kill it in python?

    - by 4herpsand7derpsago
    Normally, one shuts down Apache Tomcat by running its shutdown.sh script (or batch file). In some cases, such as when Tomcat's web container is hosting a web app that does some crazy things with multi-threading, running shutdown.sh gracefully shuts down some parts of Tomcat (as I can see more available memory returning to the system), but the Tomcat process keeps running. I'm trying to write a simple Python script that:

        1. Calls shutdown.sh
        2. Runs ps -aef | grep tomcat to find any process with Tomcat referenced
        3. If applicable, kills the process with kill -9 <PID>

    Here's what I've got so far (as a prototype - I'm brand new to Python, BTW):

        #!/usr/bin/python
        # Imports
        import sys
        import subprocess

        # Load from imported module.
        if __init__ == "__main__":
            main()

        # Main entry point.
        def main():
            # Shutdown Tomcat
            shutdownCmd = "sh ${TOMCAT_HOME}/bin/shutdown.sh"
            subprocess.call([shutdownCmd], shell=true)

            # Check for PID
            grepCmd = "ps -aef | grep tomcat"
            grepResults = subprocess.call([grepCmd], shell=true)
            if(grepResult.length > 1):
                # Get PID and kill it.
                pid = ???
                killPidCmd = "kill -9 $pid"
                subprocess.call([killPidCmd], shell=true)

            # Exit.
            sys.exit()

    I'm struggling with the middle part - with obtaining the grep results, checking to see if their size is greater than 1 (since grep always matches its own command line, at least one result will always be returned, methinks), and then parsing the returned PID and passing it into killPidCmd. Thanks in advance!
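    A minimal sketch of one way the middle part could look (assumptions: a Unix-like host, a Python 2-era interpreter to match the prototype, and that the string "org.apache.catalina" appears only in Tomcat's java command line - matching on the command line rather than piping through grep avoids the grep-matches-itself problem):

        #!/usr/bin/python
        import os
        import signal
        import subprocess

        def tomcat_pids():
            """Return the PIDs of processes whose command line mentions Tomcat."""
            ps = subprocess.Popen(["ps", "-eo", "pid,args"], stdout=subprocess.PIPE)
            output = ps.communicate()[0]
            pids = []
            for line in output.splitlines()[1:]:          # skip the PID/ARGS header
                pid, _, args = line.strip().partition(" ")
                if "org.apache.catalina" in args:
                    pids.append(int(pid))
            return pids

        def main():
            subprocess.call("sh $TOMCAT_HOME/bin/shutdown.sh", shell=True)
            for pid in tomcat_pids():
                os.kill(pid, signal.SIGKILL)              # equivalent of kill -9 <pid>

        if __name__ == "__main__":
            main()

    Note that the prototype's shell=true, grepResult.length and if __init__ == "__main__" would all fail as written; the sketch uses shell=True, a plain list, and the standard __name__ guard instead.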

    Read the article

  • ASPX page renders differently when reached on intranet vs. internet?

    - by MattSlay
    This is so odd to me. I have IIS 5 running on XP, and it's hosting a small ASP.NET app for our LAN that we can access by using the computer name, virtual directory, and page name (http://matt/smallapp/customers.aspx). You can also hit that IIS server and page from the internet, because I have a public IP that my firewall routes to the "Matt" computer (like http://213.202.3.88/smallapp/customers.aspx [just a made-up IP]). Don't worry, Windows domain authentication is in place to protect the app from anonymous users. So all of the above works fine. But what's weird is that the borders of the divs on the page are rendered much thicker when you access the page from the intranet versus the internet (I'm using IE8), and some of the div layout (stretching and such) also behaves differently. Why would it render differently in the same browser depending on whether it was reached from the LAN versus the internet? It does NOT do this in Firefox, so it must be an IE8 thing. All the CSS for the divs is right in the HTML page, so I do not think it is a matter of a cached CSS file. Notice how the borders are different in these two images: Internet: http://twitpic.com/hxx91 . LAN: http://twitpic.com/hxxtv
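    A minimal sketch of one common explanation and workaround (an assumption, not a confirmed diagnosis): IE8's "Display intranet sites in Compatibility View" setting is on by default, so the http://matt/... address gets rendered in an older document mode while the public IP gets the standards engine. Asking IE for its newest engine on every request sidesteps that difference:

        <!-- near the top of the page's <head> -->
        <meta http-equiv="X-UA-Compatible" content="IE=edge" />

    The same value can also be sent as an HTTP response header by IIS instead of being embedded in each page.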

    Read the article

  • asp.net on linux/mono broken? weird load then 404/resource error

    - by acidzombie24
    I am testing out my ASP.NET app on Mono's VMware image, running openSUSE. From the home page it says:

        Trying out your own code
        You can test your own applications by connecting with the file manager on this machine to the machine hosting your application, and copying over the directory containing the application and its associated files. To test your ASP.NET applications, copy your code to a new directory in /srv/www/htdocs, then visit the following url:
        http://localhost/directoryname/page.aspx
        where directoryname is the directory where you deployed your application, and page.aspx is the initial page for your software, typically Default.aspx.

    Mirror: http://tortillaflatscafe.com/ I saw my app had errors. After I restarted the VM, I can no longer run my app at all. Instead of getting error messages or anything, I get this error:

        Server Error in '/myfoldername' Application
        The resource cannot be found.
        Description: HTTP 404. The resource you are looking for (or one of its dependencies) could have been removed, had its name changed, or is temporarily unavailable. Please review the following URL and make sure that it is spelled correctly.
        Requested URL: /Default.aspx

    I tried /default.aspx, used sudo to set permissions to 777 recursively, and restarted Apache via the terminal, but I could not get this error to go away. How do I fix this?

    Read the article
