Search Results

Search found 11671 results on 467 pages for 'man pages'.

  • Best way to use mod_rewrite to replace WordPress pages with static files

    - by David Moles
    Here's the situation: I've got an old WordPress installation that I'd like to archive as static files, but I'd also like to preserve the old URLs. I've already created the static archive with wget and sorted out the filenames and links. Now I'd like to configure Apache to intercept requests for the old dynamic URLs and replace them with the new static ones, e.g. http://www.example.org/log/?p=1234 or http://www.example.org/log/index.php?p=1234 should redirect to http://www.example.org/log/archives/1234.html. I've tried adding the following to the VirtualHost config for example.org, but to no effect; I just get the PHP page.

        RewriteCond %{REQUEST_URI} /log/
        RewriteCond %{QUERY_STRING} p=([^&;]*)
        RewriteRule ^/$ http://%{SERVER_NAME}/log/archives/%1.html [R,L]

    I've enabled logging and I can see what look like other rules being applied, but not this one. None of my other guesses at match patterns for %{REQUEST_URI} seem to have any effect either (log, log/, log.*, even .*). I'm new to mod_rewrite and this is mostly cargo cult, so I'm pretty sure I've gotten it wrong. Anyone know what I should be doing here?
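    A hedged sketch of the direction an answer might take (an assumption, not a confirmed fix): in VirtualHost context the RewriteRule pattern is matched against the full URL path, so ^/$ can never match /log/ or /log/index.php. Matching the /log/ path directly and appending a bare ? to drop the old query string would look roughly like this:

        # Sketch only, assuming the rules live in the example.org VirtualHost block
        RewriteEngine On
        # capture the post id from ?p=1234 (also matches &p=1234)
        RewriteCond %{QUERY_STRING} (?:^|&)p=([^&;]+)
        # match /log/ or /log/index.php and redirect to the static archive page;
        # the trailing "?" clears the original query string
        RewriteRule ^/log/(index\.php)?$ http://%{SERVER_NAME}/log/archives/%1.html? [R=301,L]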

    Read the article

  • Httpd and LDAP Authentication not working for sub-pages

    - by DavisTasar
    I just recently installed a Nagios implementation, and I'm trying to get LDAP authentication working for httpd on Red Hat. (nagios.conf for Apache is below, sanitized of course.)

        ScriptAlias /nagios/cgi-bin "/usr/local/nagios/sbin"
        <Directory "/usr/local/nagios/sbin">
            #SSLRequireSSL
            Options ExecCGI
            AllowOverride none
            AuthType Basic
            AuthName "LDAP Authentication"
            AuthLDAPURL "ldap://my.domain.controller:389/OU=Users,DC=my,DC=domain,DC=controller?sAMAccountName?sub?(objectClass=user)" NONE
            AuthzLDAPAuthoritative off
            AuthLDAPBindDN "CN=NagiosAdmin,DC=my,DC=domain,DC=controller"
            AuthLDAPBindPassword "myPassword"
            require valid-user
        </Directory>

        Alias /nagios "/usr/local/nagios/share"
        <Directory /usr/local/nagios/share>
            #SSLRequireSSL
            Options None
            AllowOverride none
            AuthBasicProvider ldap
            AuthType Basic
            AuthName "LDAP Authentication"
            AuthzLDAPAuthoritative off
            AuthLDAPURL "ldap://my.domain.controller:389/OU=Users,DC=my,DC=domain,DC=controller?sAMAccountName?sub?(objectClass=user)" NONE
            AuthLDAPBindDN "CN=NagiosAdmin,DC=my,DC=domain,DC=controller"
            AuthLDAPBindPassword "myPassword"
            require valid-user
        </Directory>

    Now, the initial authentication works, so when you first hit the page you can log in just fine. However, when you go anywhere else, it prompts you for authentication, fails (asking for a re-prompt), and gives this error message:

        [Mon Oct 21 14:46:23 2013] [error] [client 172.28.9.30] access to /nagios/cgi-bin/statusmap.cgi failed, reason: verification of user id '<myuseraccount>' not configured, referer: http://<nagiosserver>/nagios/side.php

    I'm almost certain it's a simple flag or option, but I just can't find it, and I don't have a lot of experience working with Apache. Any assistance would be greatly appreciated.
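    One detail worth noting (an observation from the pasted config, not a verified fix): the /usr/local/nagios/sbin block is the only one without an AuthBasicProvider ldap line, and on Apache 2.2 "verification of user id ... not configured" is the usual symptom of a location that has no authentication provider wired up. A hedged sketch of that block with the provider added:

        # Sketch only - same directives as above, plus the provider line
        <Directory "/usr/local/nagios/sbin">
            Options ExecCGI
            AllowOverride none
            AuthType Basic
            AuthName "LDAP Authentication"
            AuthBasicProvider ldap
            AuthLDAPURL "ldap://my.domain.controller:389/OU=Users,DC=my,DC=domain,DC=controller?sAMAccountName?sub?(objectClass=user)" NONE
            AuthzLDAPAuthoritative off
            AuthLDAPBindDN "CN=NagiosAdmin,DC=my,DC=domain,DC=controller"
            AuthLDAPBindPassword "myPassword"
            require valid-user
        </Directory>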

    Read the article

  • Unauthorized access error to html pages in IIS 7.0

    - by George2
    I am using VSTS 2008 + C# + .NET 3.5 + IIS 7.0. I have created a new web site and put an HTML file into its directory. When I use the browse function in IIS Manager to open the HTML file, I get the error below; any ideas what is wrong? BTW: I am confused about the unauthorized error, since I run the worker process under an administrator account. From the error message, I am also confused why the logon method is anonymous rather than the administrator account.

        HTTP Error 401.3 - Unauthorized
        You do not have permission to view this directory or page because of the access control list (ACL) configuration or encryption settings for this resource on the Web server.
        Module:         IIS Web Core
        Notification:   AuthenticateRequest
        Handler:        StaticFile
        Error Code:     0x80070005
        Requested URL:  http://localhost:80/a.html
        Physical Path:  C:\test\simplehosttest\a.html
        Logon Method:   Anonymous
        Logon User:     Anonymous

    Thanks in advance, George

    Read the article

  • DNS works, can ping, but cannot load web pages in browser

    - by user1224595
    Yesterday I changed routers, and my desktop computer started acting up. I could ping websites, and nslookup was able to resolve names to addresses, but neither Chrome, Firefox, nor IE could load any web pages. None of my other computers connected to the same wireless router has any problems. I connect my desktop to the router through a cheap Wi-Fi dongle. I did a Wireshark capture of the browser request and have uploaded the pcap here: https://drive.google.com/file/d/0B7AsPdhWc-SwbTV0bUJLQXo4UUE/edit?usp=sharing One strange thing I noticed was the spamming of SSDP packets. I am not super familiar with networking, but it seems that it is not a problem with the router, as DNS works, and so does DHCP (the desktop is assigned an address correctly). Any help would be appreciated.

    Read the article

  • Route global shortcuts to pages opened in Firefox

    - by zamza
    I like listening to music online on sites like stereomood.com. There is a major problem, however: I cannot control the player with my keyboard. Even with a mouse, when I want to play/pause I must activate the Firefox window, select the tab where the music plays and hit the play/pause button manually. This is a pain, especially when you are playing a fullscreen game that cannot minimize itself. That said, global keyboard shortcuts would be a perfect solution. I understand that different online media players have different controls and each site must be configured individually (say, select the button with id 'play' and press it), but I believe it can be done in principle. I also suspect that such tricks are impossible without some third-party native app which captures the shortcuts and routes them to the Firefox window. So, any solutions? Maybe some AutoHotkey hacks or similar.

    Read the article

  • Good speedtest results, but web pages don't load

    - by dmt0
    I have strange connection problems. Ping and download times are good: speedtest.net showed a 65ms ping and a 2.17Mbps download. Torrent is working well, giving me up to 300MBps. Web pages are loading very poorly though. They time out each time; I have to refresh 4-5 times to get even a simple page to load. It has been happening consistently for the last few days, and it is the same with different browsers on different machines (same network), Windows and Linux. There is no proxy in the browser. Is there any setting in Windows or in a browser that I can change to help this? Some background: I live on an island in Thailand, where the internet connection is via radio to another island and then to the mainland. It's very weather-dependent, but generally OK. As I mentioned, ping is good. Any input is much appreciated.

    Read the article

  • How can I remove unwanted cropped pages from Acrobat

    - by Servant
    Executing the crop command in Acrobat on a 3000pt * 2000pt document, cropping to 1500pt * 1800pt, only hides the content outside the new boundaries; the original document remains unchanged. If anyone uses the touch-up tool and moves the content, all the "hidden" information outside the cropped page can reappear by dragging it into view; the page merely acts as a window (or mask) onto the full 3000pt * 2000pt area. I am wondering if there is a way to crop the document permanently without reprinting it to a new PDF file. Please find pictures attached: http://i.stack.imgur.com/5JTPg.png http://i.stack.imgur.com/HPokv.png

    Read the article

  • PHP pages are not parsed by Apache on CentOS

    - by Ram
    I have installed CentOS 5.x, Apache 2.2, PHP 5.3 and MySQL 5.5. I also installed phpMyAdmin, and I am able to access it through the browser without any issues. However, when I create a simple index.php with a phpinfo() call in the default directory, that page is served without PHP parsing. As we all know, phpMyAdmin is a PHP application, and it works fine from the same server, but the simple PHP page in the doc root directory does not. Of course, I tried moving this page into the phpMyAdmin folder and accessing it there, but no success. Please note that I updated httpd.conf with the appropriate directives based on the PHP installation guide. The following directives were added to httpd.conf:

        AddTyoe application/x-httpd-php
        LoadModule php5_module /usr/lib/httpd/modules/libphp5.so
        <FilesMatch "\.php$">
            SetHandler application/x-httpd-php
        </FilesMatch>

    File locations are:

        docroot - /var/www/html
        phpMyAdmin folder - /var/www/html/phpMyAdmin

    File privileges are:

        [root@linuxdev1 html]# ls -Z
        -rwxr-xr-x  root root index.php
        drwxr-xr-x  root root phpMyAdmin
        -rw-r--r--  root root phpMyAdmin-3.4.3.2-english.tar.gz
        drwxr-xr-x  root root test1

    Any help is appreciated.
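    For comparison, a minimal mod_php sketch, not a guaranteed fix. Note that the directives as pasted read "AddTyoe" and give AddType no file extension; a misspelled directive would normally make httpd refuse to start at all, so the line may just be a transcription slip, but without an extension it does nothing either way.

        # Sketch only - a typical mod_php 5 setup on Apache 2.2
        LoadModule php5_module /usr/lib/httpd/modules/libphp5.so
        AddType application/x-httpd-php .php
        <FilesMatch "\.php$">
            SetHandler application/x-httpd-php
        </FilesMatch>
        DirectoryIndex index.php index.html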

    Read the article

  • Block all third party domains from web pages

    - by wizlb
    When I'm browsing the web, I'd like to not be tracked by any third-party services like Facebook or Google. For instance, if I visit somepage.com I don't want my browser requesting things from facebook.com unless I allow it. However, if I visit facebook.com itself, Facebook should still work. Does anyone know of a Chrome or Firefox extension that will allow me to do this? AdBlock in Chrome doesn't seem to work because it just hides the web page elements; it doesn't stop the browser from downloading them. I imagine that some kind of proxy/browser extension hybrid would be best. Any suggestions? Thank you.

    Read the article

  • Want to make my pages end with .html

    - by user41997
    Here is my current .htaccess. (For security reasons, Option FollowSymLinks cannot be overridden.)

        Options +FollowSymlinks
        Options +SymLinksIfOwnerMatch
        ErrorDocument 404 /404.php
        RewriteEngine on
        rewritecond %{http_host} ^jugep.com [nc]
        rewriterule ^(.*)$ http://www.jugep.com/$1 [r=301,nc]
        RewriteCond %{REQUEST_FILENAME} !-f
        RewriteCond %{REQUEST_FILENAME} !-d
        RewriteRule ^peliculas/([^/]+)$ pelicula.php?pelicula=$1 [L]
        RewriteRule ^descargar/([^/]+)$ descargar.php?descargar=$1 [L]
        RewriteRule ^peliculas$ peliculas.php [L]
        RewriteRule ^peliculas/$ peliculas.php [L]
        RewriteRule ^buscar$ buscar.php [L]
        RewriteRule ^buscar/$ buscars.php [L]
        RewriteRule ^contactar$ contactar.php [L]
        RewriteRule ^contactar/$ contactars.php [L]

    Can someone help me out here? I would like every link to end with .html. Currently a link on my site looks like this: http://www.jugep.com/peliculas/Casino_Royale. I would like it to look like this: http://www.jugep.com/peliculas/Casino_Royale.html. Any help is greatly appreciated.
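    A hedged sketch of one way this could be approached (hypothetical rules, not tested against this site): keep the existing PHP scripts, add rules that map the .html form onto them, and optionally redirect the old extensionless URLs to the new .html ones. The links the PHP pages emit would also need to point at the .html form.

        # Sketch only - map the .html URLs onto the existing scripts.
        # These would need to go before (or replace) the existing extensionless rules.
        RewriteRule ^peliculas/([^/]+)\.html$ pelicula.php?pelicula=$1 [L]
        RewriteRule ^descargar/([^/]+)\.html$ descargar.php?descargar=$1 [L]
        # Optionally send the old extensionless links to the .html form, e.g.
        # /peliculas/Casino_Royale -> /peliculas/Casino_Royale.html
        RewriteRule ^peliculas/([^/.]+)$ /peliculas/$1.html [R=301,L]
        RewriteRule ^descargar/([^/.]+)$ /descargar/$1.html [R=301,L]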

    Read the article

  • Force firefox to open pages in a specific tab using command line

    - by user36306
    Hey guys, here's the challenge: I developed a softphone screen-pop PHP app that takes caller ID info, searches for a match in our DB, and also lets us collect call statistics. Great for our management, but it's driving our reps nuts. We use Firefox here, and when our softphone pops to the external page it opens in a new tab every time; the reps quickly get 5-10 tabs open and it becomes confusing. Our softphone can also run a command line. I'm wondering if there is a way to have a URL open in a specific tab. Otherwise, does anyone have any other ideas? Thanks!

    Read the article

  • Serve pages based on domain by using htaccess

    - by Safwan Erooth
    I have a site, mysite.com, with different sections, and several domains pointing at the same site. Here is what I'm trying to achieve. Say the site has the domains mysite.com, section1.com and section2.com. If the user comes in via section1.com, they should be served mysite.com/section1/; it should not be redirected, but rewritten, like this:

        mysite.com/section1/something should become section1.com/something
        mysite.com/section1/another   should become section1.com/another

    In the same way, if the user is coming from section2.com, the content served should be mysite.com/section2/, with the domain rewritten like:

        mysite.com/section2/something should become section2.com/something
        mysite.com/section2/another   should become section2.com/another
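    A hedged sketch of the usual mod_rewrite approach (assumed directory names, untested here): match the incoming host and internally map the request into the matching section directory, with a guard so already-rewritten requests are not rewritten again.

        # Sketch only, assuming a shared document root and an .htaccess there
        RewriteEngine On

        RewriteCond %{HTTP_HOST} ^(www\.)?section1\.com$ [NC]
        RewriteCond %{REQUEST_URI} !^/section1/
        RewriteRule ^(.*)$ /section1/$1 [L]

        RewriteCond %{HTTP_HOST} ^(www\.)?section2\.com$ [NC]
        RewriteCond %{REQUEST_URI} !^/section2/
        RewriteRule ^(.*)$ /section2/$1 [L]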

    Read the article

  • Multiple "pages" in GWT with human friendly URLs

    - by Andreas Borglin
    Hi. I'm playing with a GWT/GAE project which will have three different "pages", although they are not really pages in the GWT sense. The top views (one for each page) will have completely different layouts, but some of the widgets will be shared. One of the pages is the main page, which is loaded by the default URL (http://www.site.com), but the other two need additional URL information to differentiate the page type. They also need a name parameter (like http://www.site.com/project/project-name). There are at least two solutions to this that I'm aware of: use the GWT history mechanism and let the page type and parameters (such as the project name) be part of the history token, or use servlets with url-mapping patterns (like /project/*). The first choice might seem obvious at first, but it has several drawbacks. First, a user should be able to easily remember and type the URL directly to a project, and it is hard to produce a human-friendly URL with history tokens. Second, I'm using gwt-presenter, and this approach would mean that we need to support subplaces in one token, which I'd rather avoid. Third, a user will typically stay at one page, so it makes more sense that the page information is part of the "static" URL. Using servlets solves all these problems, but also creates other ones. So my first question is: what is the best solution here? If I go for the servlet solution, new questions pop up. It might make sense to split the GWT app into three separate modules, each with an entry point. Each servlet that is mapped to a certain page would then simply forward the request to the GWT module that handles that page. Since a user typically stays at one page, the browser only needs to load the JS for that page. Based on what I've read, this solution is not really recommended. I could also stick with one module, but then GWT needs to find out which page it should display. It could either query the server or parse the URL itself. If I stick with one GWT module, I need to keep the page information stored on the server side. Naturally I thought about sessions, but I'm not sure if it's a good idea to mix page information with user data. A session usually lives between user login and logout, but in this case it would need different behavior. Would it be bad practice to handle this via sessions? The one GWT module + servlet solution also leads to another problem. If a user goes from a project page to the main page, how will GWT know that this has happened? The app will not be reloaded, so it will be treated as a simple state change. It seems rather inefficient to have to check page info for every state change. Anyone care to guide me out of the foggy darkness that surrounds me? :-)

    Read the article

  • Navigating between pages in a Facebook Platform iframe application

    - by Jimmy Cuadra
    I'm working on a Facebook Platform application that runs in iframe mode, and I'm having trouble understanding how to navigate between pages within the app. Let's say the first page that is loaded within the iframe at my canvas URL is one.html. Within that page, there is a link to two.html that just changes the source of the iframe and doesn't reload the Facebook chrome. When I do this, all the Facebook fb_sig_* query string parameters that Facebook passes to the original page aren't included, and so two.html has no awareness of the connection to Facebook and no ability to make API calls to generate the content for the page. One possible solution would be to manually extract all the Facebook parameters from one.html and append it to the link to two.html myself. This seems really ugly and I figured there had to be a cleaner way. For reference, my application is written in Perl and uses the WWW::Facebook::API module as a client library. I didn't see anything in it that I can use to easily reconstruct the Facebook parameters for use with links in iframe apps. Another possible solution would be to store all the Facebook parameters in a session on my server on the first page load, and just use the values in that session on subsequent page views. But what happens if the data I've stored no longer matches what Facebook would have sent if it were a completely new request (i.e. something in the user's Facebook session changed)? Is there something obvious I'm missing? What is the standard approach to navigating between pages within an iframe app? Facebook's documentation is atrocious and I haven't been able to find anything that clearly explains how this works. I also realize this wouldn't be an issue with an app using FBML instead of an iframe, but my understanding is that iframe apps are now encouraged over FBML apps, though again this seems ambiguous since so much of Facebook's documentation is outdated and contradictory.

    Read the article

  • Django internationalization for admin pages - translate model name and attributes

    - by geekQ
    Django's internationalization is very nice (gettext based, LocaleMiddleware), but what is the proper way to translate the model name and the attributes for admin pages? I did not find anything about this in the documentation: http://docs.djangoproject.com/en/dev/topics/i18n/internationalization/ http://www.djangobook.com/en/2.0/chapter19/ I would like to have "Выберите заказ для изменения" instead of "Выберите order для изменения". Note that the 'order' is not translated. First, I defined a model, activated USE_I18N = True in settings.py, and ran django-admin makemessages -l ru. No entries are created by default for model names and attributes. Grepping in the Django source code I found:

        $ ack "Select %s to change" contrib/admin/views/main.py
        70: self.title = (self.is_popup and ugettext('Select %s') % force_unicode(self.opts.verbose_name) or ugettext('Select %s to change') % force_unicode(self.opts.verbose_name))

    So the verbose_name meta property seems to play some role here. I tried to use it:

        class Order(models.Model):
            subject = models.CharField(max_length=150)
            description = models.TextField()

            class Meta:
                verbose_name = _('order')

    Now the updated po file contains an msgid 'order' that can be translated, so I put the translation in. Unfortunately, the admin pages still show the same mix of "Выберите order для изменения". I'm currently using Django 1.1.1. Could somebody point me to the relevant documentation? Because Google cannot. ;-) In the meantime I'll dig deeper into the Django source code...

    Read the article

  • User Control not loading based on location

    - by mwright
    I have an ASP.net MVC solution that uses nested master pages to load content. On the first Master page I load a header, then have the Content Placeholder, and then load a footer. This master page is referenced by another master page which adds some additional information based on the user being logged in or not. When I load a page that references these master pages, the header loads, but the footer does not. If I move the footer up above the Content Place Holder it loads into the page. Any ideas why this might be the case? The code for the master page that contains the footer is as follows:

        <%@ Master Language="C#" Inherits="System.Web.Mvc.ViewMasterPage" %>
        <!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Transitional//EN" "http://www.w3.org/TR/xhtml1/DTD/xhtml1-transitional.dtd">
        <html xmlns="http://www.w3.org/1999/xhtml" >
        <head runat="server">
            <title>
                <asp:ContentPlaceHolder ID="TitleContent" runat="server" />
            </title>
        </head>
        <body>
            <div class="header">
                <% Html.RenderPartial("Header"); %>
            </div>
            <div>
                <asp:ContentPlaceHolder ID="MainContent" runat="server">
                </asp:ContentPlaceHolder>
            </div>
            <div class="footer">
                <% Html.RenderPartial("Footer"); %>
            </div>
        </body>
        </html>

    Read the article

  • Remove "Flash" between pages while using Internet Explorer modal boxes

    - by AaronS
    I have an internal web application, that is IE specific, and uses a lot of IE specific modal boxes (window.showModalDialog). I recently received a request to remove the "flash" when navigating between pages of the site. To accomplish this, I just added a meta transition tag to my master page:

        <meta http-equiv="Page-Enter" content="blendTrans(duration=0.0)" />

    This works perfectly except for the modal boxes. When you launch a modal box, and then move it around, the web page behind it keeps a trail of the modal box instead of re-drawing the web page content. This prevents the user from moving the modal box to read anything that was behind it. Is there a way to prevent the "flash" between pages in an IE specific site and have the site still work with modal boxes? Please note, this is a large and complex site, so re-architecting it to not use modal boxes isn't an option. This is an asp.net, c# web application, and all of my users are using IE 7 and IE 8 if it makes any difference. -Edit- To duplicate this, put the following into an html page, and open it in Internet Explorer:

        <html>
        <head>
            <title>Test</title>
            <meta content="blendTrans(duration=0.0)" http-equiv="Page-Exit">
        </head>
        <body>
            <script language="javascript">
                window.showModalDialog('modal.htm', window);
            </script>
        </body>
        </html>

    Read the article

  • Unobtrusive, self-hosted comments function to put onto existing web pages

    - by Pekka
    I am building a new site which will consist of a mix of dynamic and static pages, and I would like to add commenting functionality to those pages with as little work as possible. I'm curious whether such a solution exists in PHP. The ideal set of features would be:

        - Completely independent from the surrounding page / site: PHP code gets dropped into the page, a page ID is added, done.
        - Simple "write a comment" form.
        - Comments for each page are displayed using a PHP function.
        - Nice, clean output of <ul><li>.... that can be styled by the surrounding site.
        - Optional Captcha.
        - Optional Gravatar sensitivity.
        - Minimalistic administration area to moderate/delete comments; no ACL, can protect it using .htaccess.

    The ideal integration would be like this:

        <?php show_comments("my_page_name"); ?>

    This would 1. display a form to add a new comment that gets automatically associated with my_page_name, and 2. display all comments that were made through this form using this ID. Does anybody know a solution like this? Bounty: I am setting up a bounty because while there were some good suggestions, they all point to external services. I'm really curious to see whether there isn't anything self-hosted around. If this doesn't exist yet, it sure would be great to see as an Open Source project.

    Read the article

  • How to create custom pages (Drupal 6.x)

    - by jc70
    I found a tutorial online that gives the code below, but I'm confused about how to get it to work. I copied the code and inserted it into template.php from the theme HTML5_base. I duplicated the page.tpl.php file and created custom pages, page-gallery.tpl.php and page-articles.tpl.php, and inserted some text into the files just to see that I've navigated to the pages with the changes. It looks like Drupal is not recognizing page-gallery.tpl.php and page-articles.tpl.php. In template.php there are the following functions: html5_base_preprocess_page(), html5_base_preprocess_node(), html5_base_preprocess_block(). The tutorial uses these functions instead: phptemplate_preprocess_page(), phptemplate_preprocess_block(), phptemplate_preprocess_node(). This is what I inserted into template.php:

        function phptemplate_preprocess_page(&$vars) {
          // code block from the Drupal handbook
          // the path module is required and must be activated
          if (module_exists('path')) {
            // gets the "clean" URL of the current page
            $alias = drupal_get_path_alias($_GET['q']);
            $suggestions = array();
            $template_filename = 'page';
            foreach (explode('/', $alias) as $path_part) {
              $template_filename = $template_filename . '-' . $path_part;
              $suggestions[] = $template_filename;
            }
            $vars['template_files'] = $suggestions;
          }
        }

        function phptemplate_preprocess_node(&$vars) {
          // default template suggestions for all nodes
          $vars['template_files'] = array();
          $vars['template_files'][] = 'node';
          // individual node being displayed
          if ($vars['page']) {
            $vars['template_files'][] = 'node-page';
            $vars['template_files'][] = 'node-' . $vars['node']->type . '-page';
            $vars['template_files'][] = 'node-' . $vars['node']->nid . '-page';
          }
          // multiple nodes being displayed on one page in either teaser or full view
          else {
            // template suggestions for nodes in general
            $vars['template_files'][] = 'node-' . $vars['node']->type;
            $vars['template_files'][] = 'node-' . $vars['node']->nid;
            // template suggestions for nodes in teaser view (more granular control)
            if ($vars['teaser']) {
              $vars['template_files'][] = 'node-' . $vars['node']->type . '-teaser';
              $vars['template_files'][] = 'node-' . $vars['node']->nid . '-teaser';
            }
          }
        }

        function phptemplate_preprocess_block(&$vars) {
          // the "cleaned-up" block title to be used for the suggestion file name
          $subject = str_replace(" ", "-", strtolower($vars['block']->subject));
          $vars['template_files'] = array('block', 'block-' . $vars['block']->delta, 'block-' . $subject);
        }

    Read the article

  • Disable control in .aspx from Masterpage conditionally

    - by miccet
    OK, this might be a bit weird, so I'll start by explaining what I'm trying to do. I have several master pages for my site, and they inherit each other. In the second of them (4 in total) I have a background image. Here comes the trick: I'd like to override this image from the final .aspx page. I can't change the position of this image; it has to be in master page 2, since some pages use that very page as their master page. One idea I had was to create a ContentPlaceHolder next to the image and, if there are any images in it (checked in Page_Load), hide the main image. I did this with a recursive function that finds the image by looping through the ContentPlaceHolder's controls. When I set the visibility property to false, though, nothing happens. Any other ideas for how this could be done, or why the above doesn't work? Edit: It's not about changing items in the master pages; rather, the other way around: from the master page's code-behind, dig down into the page that is currently displayed and see if it has controls in a specific ContentPlaceHolder.

    Read the article

  • Some web pages (especially Apple documentation) cause heavy CPU usage in Windows IE8

    - by Mark Lutton
    Maybe this belongs in Server Fault instead, but some of you may have noticed this issue (particularly those developing on Mac, using a Windows machine to read the reference material). I posted the same question on a Microsoft forum and got one answer from someone who reproduced the problem, so it's not just my machine. No solution yet. Ever since this month's security updates, I find that many web pages cause the CPU to run at maximum for as long as the web page is visible. This happens in both IE7 and IE8 on at least three different computers (two with Windows XP, one with Vista). Here is one of the pages, running on XP with IE8: http://learning2code.blogspot.com/2007/11/update-using-subversion-with-xcode-3.html Here is one that does it in Vista with IE8: http://developer.apple.com/iphone/library/documentation/Cocoa/Reference/Foundation/Classes/NSString_Class/Reference/NSString.html You can leave the page open for hours and the CPU is still at high usage. This doesn't happen every time, and it is not always reproducible; sometimes it is OK the second or third time the page loads. In IE7 the high usage is in ieframe.dll, version 7.0.6000.16890. In IE8 the high usage is in iertutil.dll, version 8.0.6001.18806.

    Read the article

  • Internet Explorer randomly drops sessions between pages in cakePHP

    - by Emerson Taymor
    Hello everyone, I've come across an extremely unusual bug that my team has literally no idea how to solve. Doing some research, I found some similar solutions that I thought would work, but alas they did not. Here is my situation; let me know if I can provide additional details to help solve the problem. The first step is that someone chooses a country via a Flash map. Flash passes this region name (as well as a date) through the URL, which we then convert to a session. The next page contains no Flash and doesn't display the selected region, but it does hold on to it for further down the process. Everything works perfectly in Safari and Firefox; however, in IE unexpected results sometimes occur. Frequently (but not always), the session is dropped completely and no session data is stored between the first and second pages. Here are the steps that I have taken so far, unsuccessfully:

        1. Changed Security from Medium to Low
        2. Changed CheckUserAgent from True to False
        3. Changed storing of sessions from PHP to Database

    Some additional information that may be useful: I have tried printing out the session data in debug (debug($_SESSION) in my view file and debug set to 2 in config). In Internet Explorer everything prints out as expected EXCEPT when the region and date don't get set. For example: if the region and date don't get set, NOTHING is printed out for debug. I don't get the session details at the top, and I don't get the normal dump of calls at the bottom of the page either. I am not using redirection on these pages. Please let me know if you have ANY idea of what is causing this, or any solutions. I am beyond frustrated and have tried as much as I can to solve this. Thanks!

    Read the article

  • Search engine recommendation for 100 sites of about 4000 pages

    - by fwkb
    I am looking for a search engine that can regularly (daily-ish) scan about 100 pages for changes and index an associated site if changes since the last scan are found. It should be able to handle about 100 sites, each averaging 4000 pages of about 5k average size, each on a different server (but with only the one centralized search engine). Each of these sites will have a search form that gets submitted to this search engine, and the results that are returned must be specific to the site that submitted the query. I create the templates for the external sites, so I can give the search form a hidden field that specifies which site the form is submitted from. What would you recommend I look into? I would love to use a Python-based system for this, if feasible. I am currently using something called iSearch2. It doesn't seem very stable at this scale; the description of the product states it is not really intended to handle multiple sites; it is in PHP (which is less comfortable for me than Python); and it has a few other shortcomings for my specific situation.

    Read the article
