Search Results

Search found 33151 results on 1327 pages for 'www browser'.


  • ServerName wildcards in Apache name-based virtual hosts?

    - by Martijn Heemels
    On our LAN I've set up several 'fake' TLDs in the DNS server, with the intention of using them for Apache name-based virtual hosting. I'd like to combine this with mass virtual hosting (i.e. VirtualDocumentRoot) on an Ubuntu 10.04 LAMP server. However, I can't get it to select the right vhost! Here is a summary of the Apache config:

        NameVirtualHost 10.10.0.205
        <VirtualHost 10.10.0.205>
            ServerName *.test
            VirtualDocumentRoot /var/www/%-3.0.%-2/test/%1/
            CustomLog /var/log/apache2/access.log vhost_combined
        </VirtualHost>
        <VirtualHost 10.10.0.205>
            ServerName *.dev
            VirtualDocumentRoot /var/www/%-3.0.%-2/dev/%1/
            CustomLog /var/log/apache2/access.log vhost_combined
        </VirtualHost>

    A hostname such as www.domain.com.dev correctly resolves to 10.10.0.205, but always selects the top vhost instead of the bottom one, which matches more closely. I was under the impression that Apache would first try to match the ServerName before defaulting to the top vhost for a given IP. What am I doing wrong? Or is this not possible, and must I use another IP for each TLD? apachectl -S outputs (trimmed):

        10.10.0.205:*  is a NameVirtualHost
            default server *.test
            port * namevhost *.test
            port * namevhost *.dev
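
    One hedged explanation: Apache matches ServerName literally (no wildcard expansion), and honours wildcard patterns only in ServerAlias. A minimal sketch of the config rewritten on that assumption; the literal ServerName values below are hypothetical placeholders that are never requested directly:

        NameVirtualHost 10.10.0.205
        <VirtualHost 10.10.0.205>
            ServerName test.local            # hypothetical literal name
            ServerAlias *.test               # the wildcard belongs here, not in ServerName
            VirtualDocumentRoot /var/www/%-3.0.%-2/test/%1/
        </VirtualHost>
        <VirtualHost 10.10.0.205>
            ServerName dev.local             # hypothetical literal name
            ServerAlias *.dev
            VirtualDocumentRoot /var/www/%-3.0.%-2/dev/%1/
        </VirtualHost>

    If the wildcard in ServerName is taken literally, no request ever matches either vhost, and Apache falls back to the first vhost for the IP, which would explain the observed behaviour.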


  • Linux Apache development configuration

    - by Jeffrey Vandenborne
    Recently I reinstalled my system and came to a point where I need Apache and PHP. I've been searching a long time, but I can't figure out how to configure Apache the best way for a developer machine. The plan is simple: I want to install Apache 2 plus a MySQL server so I can develop a PHP website. I don't want to install a LAMP bundle though, just apache2, php5 and mysql. The problem that I've been looking for an answer to is the permissions on the /var/www/ folder. I've tried making it my folder using the chown command, followed by chmod -R 755 /var/www. Most things work then, but fwrite for example won't, because I would need to give write permissions to everyone, and unless I change my global umask to 000 I'm not sure what I can do. In short: I want to install apache2, php5 and mysql-server without using a LAMP bundle, but configured in a way that lets me open up NetBeans, start a project rooted in /var/www/, and run every single function without permission faults. Does anyone have experience with or workarounds for this? Extra: OS: Ubuntu 10.04, ARCH: x86_64
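
    One hedged approach, assuming the Apache user is www-data (the Ubuntu default): share /var/www between your account and the web server through a common group and the setgid bit, instead of loosening the global umask:

        sudo adduser "$USER" www-data             # join Apache's group (log out/in to apply)
        sudo chown -R "$USER":www-data /var/www   # you own the files, Apache shares the group
        sudo chmod -R 2775 /var/www               # 2 = setgid: new files inherit the www-data group

    With this, fwrite from PHP succeeds because the www-data group has write access, while other users still get read-only access.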


  • Is there a way to get Chatzilla in the Windows 7 "jumplist" for Firefox?

    - by hippietrail
    Windows 7 has a feature called "Jumplists" which, in the Start menu, adds a kind of context menu alongside the selected app, listing things such as files recently or frequently used with that app, and tasks the app can perform. Often I use Firefox's add-on IRC client, Chatzilla, but use Chrome for browsing. I have to start the Firefox browser, run Chatzilla from the menu, then close the browser again. It seems this is exactly the kind of thing Jumplists are for. Is there a way to customize them in Windows 7? Or does Firefox offer a way to customize its own Jumplists?


  • fcgiwrap listening to a unix socket file: how to change file permissions

    - by user36520
    I have a web server (nginx) and a CGI application (gitweb) that is run with fcgiwrap to enable FastCGI access to it. I want the FastCGI protocol to take place over a unix socket file. To start the fcgiwrap daemon, I run:

        setuidgid git fcgiwrap -s "unix:$PWD/fastcgi.sock"

    (this is a daemontools daemon). The problem is that my web server runs as the user www-data and not the user git, and fcgiwrap creates the socket fastcgi.sock with user git, group git and read-only for the non-owner. Thus nginx, running as www-data, can't access the socket. Apparently fcgiwrap is not able to set the permissions of unix socket files, which is quite annoying. Moreover, if I manage to have the socket file exist before I run fcgiwrap (which is quite difficult, given that I did not find any shell command to create a socket file), it quits with the following error:

        Failed to bind: Address already in use

    The only solution I found is to start the server the following way:

        rm -f fastcgi.sock   # ensure that the socket doesn't already exist
        (sleep 5; chgrp www-data fastcgi.sock; chmod g+w fastcgi.sock) &
        exec setuidgid git fcgiwrap -s "unix:$PWD/fastcgi.sock"

    which is far from the most elegant solution. Can you think of anything better? Thanks.
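
    A hedged alternative, assuming Linux applies the process umask to the mode of a socket created by bind(): start the daemon with a group-writable umask and put nginx's user in the git group, so no post-hoc chmod race is needed:

        sudo adduser www-data git        # let nginx's user act through the git group
        # in the daemontools run script:
        umask 0002                       # sockets created by bind() become mode 0775
        exec setuidgid git fcgiwrap -s "unix:$PWD/fastcgi.sock"

    This keeps the socket owned by git:git but writable by the git group, which now includes www-data.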


  • Is this a normal operating temperature range for a New Third-Generation MacBook Air (2.13 GHz, 120 G

    - by doug
    Even with just my text editor open (no web browser) and maybe the terminal, the baseline temperature is usually above 40°C. When I open 4-5 browser tabs in Safari (even if none of the sites have Flash), the temperature can quickly go over 50°C. (In addition, I am observing these temperatures even though I have turned the fan up to 3000 rpm.) (I have installed smcFanControl on my MBA so I can see the temperature in the menu bar.) So this means my MBA is running much warmer than my MBP, and in practice it means that I have to be very careful how I use my MBA. Of course, if I load a site with Flash, it just freaks out and often quickly goes above 65°C (I've installed a Flash blocker to avoid this). Is anyone else observing this behavior? I have checked the Apple boards, and sure enough there are a lot of complaints, but nothing from Apple.


  • Changing Servers - Redirect to new IP = No Downtime?

    - by Denis Pshenov
    I am changing servers for my website. The IP of the old server cannot be moved to the new one. To have no downtime I am planning to do the following; please can someone confirm it will work:

        1. Set up the new server and listen on the new IP.
        2. Have the old server redirect all traffic to the new IP.
        3. Change the DNS records to point to the new IP.

    My logic tells me that when I redirect to the new IP from my old box, the user will not see the domain name in the browser but will see the new IP. Is there a way to redirect to the new IP and send along the HOSTNAME with it, so that the user will still see the domain name in the browser? I'm doing this because the site is in constant use, and simply changing the DNS settings won't do, as the database won't be synced between the new and old servers during propagation.
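
    A hedged sketch of the usual answer: an HTTP redirect always exposes its target, so instead of redirecting, make the old box a reverse proxy that forwards requests to the new IP while preserving the Host header; the browser keeps showing the domain. This assumes the old server runs Apache with mod_proxy; the domain and IP below are placeholders:

        <VirtualHost *:80>
            ServerName www.example.com          # placeholder for your domain
            ProxyPreserveHost On                # pass the original Host: header through
            ProxyPass        / http://203.0.113.10/
            ProxyPassReverse / http://203.0.113.10/
        </VirtualHost>

    Combined with lowering the DNS TTL before the switch, this keeps both boxes serving the same content until propagation completes.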


  • catch-22 with apt-get

    - by Mark J Seger
    I'd recently installed a package and discovered my scp stopped working. After removing and installing some things I got it fixed, but then I started getting errors from apt-get about dpkg:

        dpkg: error: configuration error: /etc/dpkg/dpkg.cfg.d/multiarch:1: unknown option 'foreign-architecture'

    so I commented it out and thought that fixed the problem, until I discovered that the Chrome icon in my launcher had turned into a ? and Chrome no longer worked. I tried to reinstall it and got the apt-get error "ambiguous package name 'libglib2.0-0' with more than one installed instance". If I try to remove libglib2 I get the error:

        The following packages have unmet dependencies:
         iceape-browser : Depends: iceape but it is not going to be installed
         iceape-chatzilla : Depends: iceape (>= 2.7.11) but it is not going to be installed
                            Depends: iceape (<= 2.7.11-1.1~) but it is not going to be installed
        E: Error, pkgProblemResolver::Resolve generated breaks, this may be caused by held packages.

    So then I tried to remove iceape-browser, and it complained about the two instances of libglib2. In fact virtually any command I issue does the same thing, so I don't know what I can do to untangle things.
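
    A hedged starting point, assuming the "more than one installed instance" comes from a multiarch pair (e.g. amd64 plus i386) and that commenting out the multiarch option left dpkg unaware of the second architecture:

        dpkg -l | grep libglib2.0-0                           # see which architectures are installed
        sudo dpkg --add-architecture i386                     # re-register the foreign arch (newer dpkg syntax)
        sudo apt-get install --reinstall libglib2.0-0:amd64   # qualify the arch to avoid the ambiguity

    The 'foreign-architecture' config option was replaced by the dpkg --add-architecture command in later dpkg releases, which may be why the line in dpkg.cfg.d started erroring in the first place.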


  • Remotely from Chrome or IE a page loads in ~60 seconds; from Firefox, or IE on the local machine, instantly.

    - by Janis Veinbergs
    The problem: if I access SharePoint from Windows 7 with IE8 or Chrome 5, I must wait about a MINUTE to get a response. If I use another Windows 7 machine with IE8, it is just the same: a minute's wait. If I use Firefox 3.6 on the Windows 7 machine, the page opens up instantly. Now switch Firefox to the IE rendering engine, and you wait just as with IE. I tried IE8 on XP SP3: the page opens up instantly. I tried IE8 on Windows Server 2003 SP2 (the machine SharePoint is hosted on): the page opens up instantly.

    IIS6 logs: I made a request almost instantly from all three browsers, and this is what shows up in the IIS logs (first two entries for each browser).

    Chrome (IIS saw the first Chrome request when I hit Enter in the browser, but I had to wait a long time for things to move on):

        2010-06-01 05:46:04 W3SVC1794621940 192.168.0.9 GET /sapulces - 80 - 192.168.0.186 Mozilla/5.0+(Windows;+U;+Windows+NT+6.1;+en-US)+AppleWebKit/533.4+(KHTML,+like+Gecko)+Chrome/5.0.375.55+Safari/533.4 401 2 2148074254
        2010-06-01 05:47:07 W3SVC1794621940 192.168.0.9 GET /sapulces - 80 - 192.168.0.186 Mozilla/5.0+(Windows;+U;+Windows+NT+6.1;+en-US)+AppleWebKit/533.4+(KHTML,+like+Gecko)+Chrome/5.0.375.55+Safari/533.4 401 1 0
        ... etc ...

    Firefox (all instantly):

        2010-06-01 05:46:06 W3SVC1794621940 192.168.0.9 GET /sapulces - 80 - 192.168.0.186 Mozilla/5.0+(Windows;+U;+Windows+NT+6.1;+lv;+rv:1.9.2.3)+Gecko/20100401+Firefox/3.6.3 401 2 2148074254
        2010-06-01 05:46:06 W3SVC1794621940 192.168.0.9 GET /sapulces - 80 - 192.168.0.186 Mozilla/5.0+(Windows;+U;+Windows+NT+6.1;+lv;+rv:1.9.2.3)+Gecko/20100401+Firefox/3.6.3 401 1 0
        ... etc ...

    IE (I hit Enter at 05:46:06, but these are the first entries in the IIS logs):

        2010-06-01 05:47:08 W3SVC1794621940 192.168.0.9 GET /sapulces - 80 - 192.168.0.186 Mozilla/4.0+(compatible;+MSIE+7.0;+Windows+NT+6.1;+Trident/4.0;+SLCC2;+.NET+CLR+2.0.50727;+.NET+CLR+3.5.30729;+.NET+CLR+3.0.30729;+Media+Center+PC+6.0;+Tablet+PC+2.0;+.NET+CLR+1.1.4322;+.NET4.0C;+.NET4.0E) 401 1 0
        2010-06-01 05:47:08 W3SVC1794621940 192.168.0.9 GET /sapulces - 80 - 192.168.0.186 Mozilla/4.0+(compatible;+MSIE+7.0;+Windows+NT+6.1;+Trident/4.0;+SLCC2;+.NET+CLR+2.0.50727;+.NET+CLR+3.5.30729;+.NET+CLR+3.0.30729;+Media+Center+PC+6.0;+Tablet+PC+2.0;+.NET+CLR+1.1.4322;+.NET4.0C;+.NET4.0E) 401 1 0
        ... etc ...

    There is nothing to see in the Event Logs.

    The question: a similar question has been asked, but there is no response to it, and I'm trying to access the page without SSL; this happens even on GET requests. Where do I look? Where would the problem be? Browser? OS? I don't even know what to think about it.

    Just a note about Chrome's process isolation: I found it sad that while I was waiting that minute with Chrome, I could not use any other tab (I could switch, but I could not, for example, scroll or use any controls).


  • Where did the text of my .html file go in TextEdit on Mac?

    - by David
    I recently created an .html file in TextEdit on my Mac (OS X 10.5.8). I then opened that .html file in my browser and it showed the page I created just fine. I closed the .html file and TextEdit, and refreshed the page; it still worked fine. Then I opened up the .html file in TextEdit again and all the text was gone (though the page in the browser still works fine). Where did all the text go?


  • How to configure Windows 2008 R2 server for LAN and wireless internet connections

    - by Alchemical
    For special testing purposes, we need a Windows server to allow the following: a team member can log in remotely to the server; when remotely logged in, they can disconnect the wireless connection, perform a few tests, and then reconnect the wireless connection. In general, the LAN connection would be used just for the remote login, while the wireless connection would be used for performing the tests, including using a web browser to test certain web sites, etc. How can we successfully configure the server to support two network connections like this (a regular LAN connection plus a wireless connection), and also make sure that the tests we perform in the browser use the wireless connection for the outgoing internet activity?
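
    A hedged sketch of the routing side, assuming the goal is that internet-bound traffic prefers the wireless gateway while the LAN stays reachable for remote logins (all addresses below are placeholders):

        :: run in an elevated command prompt on the server
        :: note the interface numbers and current routes first
        route print
        :: drop the wired default route, then prefer the wireless gateway for internet
        route delete 0.0.0.0
        route add 0.0.0.0 mask 0.0.0.0 192.168.2.1 metric 5
        :: keep the LAN subnet reachable via the wired NIC for remote logins (persistent)
        route -p add 10.0.0.0 mask 255.255.255.0 10.0.0.1

    With the default route on the wireless side, browser tests go out over wireless; the explicit LAN route keeps the remote-desktop session alive.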


  • Trying to run a CodeIgniter app on custom PHP

    - by hamstar
    I have a CodeIgniter app that I deployed to a server with PHP 5.2, while my dev box has 5.3, and some stuff doesn't work anymore. I didn't want to upgrade PHP and risk the other app on the server having issues. Anyway, I compiled a custom PHP and added the following to a single .conf file, /etc/httpd/conf.d/zcid.conf, alongside all the other conf files:

        <VirtualHost *:80>
            DocumentRoot /var/www/cid/app
            ServerName sub.example.co.nz
        </VirtualHost>

        <Directory "/var/www/cid/app">
            authtype Basic
            authname "oh dear how did this get here i am no good with computer"
            authuserfile /path/to/auth
            require valid-user

            RewriteEngine on
            RewriteCond $1 !^(index\.php|robots\.txt|createEvent\.php|/cgi-bin)
            RewriteRule ^(.*)$ /index.php/$1 [L]

            AddHandler custom-php .php
            Action custom-php /cgi-bin/php53.cgi
        </Directory>

    In /var/www/cid/app I have the cgi-bin folder and the php53.cgi that I copied from /usr/local/php53/bin/php-cgi. But now when I navigate to the subdomain it says:

        The requested URL /cgi-bin/php53.cgi/index.php/ was not found on this server.

    And if I try to browse to /cgi-bin it says (as it is supposed to?):

        You don't have permission to access /cgi-bin/ on this server.

    Quite confused now. Anyone know what to do here? Thanks :)
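
    A hedged guess at the 404: Action refers to a URL path, but nothing maps /cgi-bin/ to a CGI-enabled directory, and a folder under DocumentRoot isn't executable as CGI by default. A sketch assuming Apache 2.2-style directives, reusing the paths from the question:

        ScriptAlias /cgi-bin/ /var/www/cid/app/cgi-bin/   # map the URL to the real cgi-bin directory
        <Directory "/var/www/cid/app/cgi-bin">
            Options +ExecCGI
            Order allow,deny
            Allow from all
        </Directory>

    With the interpreter reachable and executable at /cgi-bin/php53.cgi, the Action handler can hand .php requests to it. It's also worth checking that the Basic auth block above isn't intercepting the internal requests for the interpreter itself.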


  • Configuring DNS and IIS for multiple domains on a single server

    - by RichardS
    I might be overcomplicating this, but... I am hosting several websites, and the DNS for their domains, on a single server: domain1.net, domain1.com, domain2.net. I have three items which I'm trying to work out whether to achieve by DNS, by IIS host names (bindings), or by IIS redirects.

    1. Where I have domain1.net and domain1.com, I want everything from both (all emails and web requests) to just point to domain1.net. Can I do this at the DNS level, or do I have to set up the email addresses as forwarders on the email server and the domain as a host name in IIS? For example: an address at domain1.com and the same address at domain1.net, and www.domain1.com and www.domain1.net.

    2. I want to make sure that requests for domain1.net and www.domain1.net both resolve to the same place. Should this be done with DNS, with multiple host names, or with IIS redirects?

    3. If I then want to have one webmail site serving all of the domains (webmail.domain1.net, webmail.domain2.net), is it best to do this with a CNAME in DNS or with host headers in IIS?
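
    For item 1, a hedged zone-file sketch (BIND-style syntax, placeholder IP) pointing domain1.com's web and mail records at domain1.net's infrastructure; note that DNS alone can't rewrite addresses, so the mail server must still be configured to accept both domains:

        ; zone for domain1.com
        domain1.com.         IN  A      203.0.113.10        ; same IP as domain1.net
        www.domain1.com.     IN  CNAME  www.domain1.net.
        domain1.com.         IN  MX 10  mail.domain1.net.   ; deliver domain1.com mail to the same server
        webmail.domain1.com. IN  CNAME  webmail.domain1.net.

    For items 2 and 3, an A/CNAME pair per name plus host-header bindings on one IIS site is the common pattern; IIS redirects are only needed if you want the address in the browser to change.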


  • Jenkins shell command isn't executing

    - by Dmitro
    In a Jenkins project I added a command to execute: rm /var/www/ru.liveyurist.ru/tmp/*. But when I build the project I get an error:

        Started by user anonymous
        Building in workspace /var/www/ru.myproject.ru
        Updating https://subversion.assembla.com/svn/myproject/trunk
        At revision 1168
        no change for https://subversion.assembla.com/svn/liveexpert/trunk since the previous build
        [ru.myproject.ru] $ /bin/sh -xe /tmp/hudson7189633355149866134.sh
        FATAL: command execution failed
        java.io.IOException: Cannot run program "/bin/sh" (in directory "/var/www/ru.myproject.ru"): java.io.IOException: error=12, Cannot allocate memory
            at java.lang.ProcessBuilder.start(ProcessBuilder.java:475)
            at hudson.Proc$LocalProc.<init>(Proc.java:244)
            at hudson.Proc$LocalProc.<init>(Proc.java:216)
            at hudson.Launcher$LocalLauncher.launch(Launcher.java:709)
            at hudson.Launcher$ProcStarter.start(Launcher.java:338)
            at hudson.Launcher$ProcStarter.join(Launcher.java:345)
            at hudson.tasks.CommandInterpreter.perform(CommandInterpreter.java:82)
            at hudson.tasks.CommandInterpreter.perform(CommandInterpreter.java:58)
            at hudson.tasks.BuildStepMonitor$1.perform(BuildStepMonitor.java:19)
            at hudson.model.AbstractBuild$AbstractRunner.perform(AbstractBuild.java:703)
            at hudson.model.Build$RunnerImpl.build(Build.java:178)
            at hudson.model.Build$RunnerImpl.doRun(Build.java:139)
            at hudson.model.AbstractBuild$AbstractRunner.run(AbstractBuild.java:473)
            at hudson.model.Run.run(Run.java:1410)
            at hudson.model.FreeStyleBuild.run(FreeStyleBuild.java:46)
            at hudson.model.ResourceController.execute(ResourceController.java:88)
            at hudson.model.Executor.run(Executor.java:238)
        Caused by: java.io.IOException: java.io.IOException: error=12, Cannot allocate memory
            at java.lang.UNIXProcess.<init>(UNIXProcess.java:164)
            at java.lang.ProcessImpl.start(ProcessImpl.java:81)
            at java.lang.ProcessBuilder.start(ProcessBuilder.java:468)
            ... 16 more
        Build step 'Execute shell' marked build as failure
        Finished: FAILURE

    I started Jenkins as the root user. Please advise what the reason for this error could be.
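
    The "error=12, Cannot allocate memory" typically means fork() in the JVM failed: the kernel momentarily needs to be able to commit a copy of the whole Jenkins process before it can exec /bin/sh. A hedged workaround is to add swap so that commit can succeed (sizes are placeholders):

        sudo dd if=/dev/zero of=/swapfile bs=1M count=1024   # 1 GiB swap file
        sudo chmod 600 /swapfile
        sudo mkswap /swapfile
        sudo swapon /swapfile

    Setting vm.overcommit_memory=1 via sysctl is another commonly cited option, with the usual caveats about the OOM killer.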


  • Some sites won't load on Ubuntu/Mint

    - by Or W
    I have a REALLY weird problem with either my network or my OS. Last week I suddenly had difficulties loading some websites, or, even more odd, some parts of different websites. For example, I could load gmail.com, log in and view the list of emails in my inbox, but when I clicked one of them it would just time out. Another example is http://www.ynet.co.il: I can view the home page, but going into any one of the articles fails (times out). I've tried Chrome, Firefox and Opera; all fail the same way. If I take a URL of a page I cannot load via the browser and wget it through the console, I get the file just fine. I've formatted my machine (it used to run Ubuntu 13.04) and installed Linux Mint this time; it worked fine for a few days and now I'm having the same exact issues again. It's important to note that I have other machines connected either directly or via Wi-Fi to the router and they are all working fine (two Win7 machines and one Raspberry Pi). Another strange behavior: I can ftp or ssh to remote machines, but I cannot send files via ftp (times out) even if I set passive mode ON, and over ssh I can do just about anything except paste text into the remote machine; for example, if I nano a file on the remote machine and try to paste anything from my clipboard, it freezes.

    What I've tried so far:

    - Disabled IPv6 in the network admin tool (and in Firefox, disabled IPv6 on the about:config page)
    - Changed the port and the network cable
    - Went to the store and bought a new standalone PCIe network adapter
    - Connected my Win7 laptop using the same cable and router port (sites that were not working on my Mint box work just fine on the Win7 machine)
    - Loaded Mint from a live CD; got the same result
    - Tried changing the MTU (was 1500, tried 1492)

    Some observations:

    - When I clear my browser cache and go to facebook.com, for example, the home page loads but I fail to load any profile/group page. If I refresh the facebook.com home page a couple of times, it stops loading and fails until I clear my browser cache again. I changed the Chrome cache folder permissions to 0777, but that did not help.
    - When I run netstat -n I see A LOT of connections in the 'FIN_WAIT' state (I'm guessing that's from refreshing pages that are timing out); I have no idea what it means or whether it helps anyone figure out what's wrong.
    - The sites that fail to load are always the same; they don't vary, and they fail exactly the same way in all three browsers I've tried.
    - When I Googled 'Ubuntu some sites not loading' I saw a huge number of complaints just like mine, but none that I could find actually says what the problem is or how they fixed it.

    Technical stuff: netstat -n, ps aux, netstat -nr
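
    Given that small pages load while larger transfers stall, one hedged avenue is a path-MTU black hole. A quick probe and a temporary workaround (the interface name is a placeholder):

        ping -c 3 -M do -s 1472 www.ynet.co.il   # 1472 + 28 bytes of headers = 1500; errors or silence hint at MTU trouble
        sudo ip link set dev eth0 mtu 1400       # temporarily lower the MTU and retest the failing sites

    If a lower MTU fixes the browsers, MSS clamping on the router or a permanent MTU setting would be the cleaner follow-up.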


  • Is .htaccess slowing down my dedicated server?

    - by David Robles
    First of all, I consider myself more of a programmer than a server guy. I have a website that receives about 3,000 visits per day, which I think is far below the capacity of a dedicated server. However, I've noticed that the connection to the website is pretty slow, e.g., loading images, connecting to it via SSH, etc. I recently configured .htaccess to prevent hotlinking of the images on my server (i.e. .jpg, .gif and .png files), and I was wondering if that could be slowing down my website. This is the configuration that I have:

        # BEGIN WordPress
        <IfModule mod_rewrite.c>
        RewriteEngine On
        RewriteBase /
        RewriteEngine on
        RewriteCond %{HTTP_REFERER} !^$
        RewriteCond %{HTTP_REFERER} !^http://www.mysite.com/.*$ [NC]
        RewriteCond %{HTTP_REFERER} !^http://www.mysite.com$ [NC]
        RewriteRule .*\.(jpg|jpeg|gif|png|bmp|swf)$ http://www.google.com/ [R,NC]
        RewriteCond %{REQUEST_FILENAME} !-f
        RewriteCond %{REQUEST_FILENAME} !-d
        RewriteRule . /index.php [L]
        </IfModule>
        # END WordPress

    I found some code to do this on Google and just copied it into .htaccess, since I'm not an expert in Apache. It works, but I don't know if it is the best way to do it. How can I tell whether this is the reason the server is slow? Are there any tools to monitor it? What would you do, guys? Thanks in advance!
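
    The rewrite rules themselves are cheap; per-request .htaccess lookups and unrelated factors (DNS, keep-alive, server load) are likelier culprits. A hedged way to see where the time actually goes, using curl's timing variables (the URL is the question's own placeholder):

        curl -o /dev/null -s -w 'dns:%{time_namelookup}s connect:%{time_connect}s ttfb:%{time_starttransfer}s total:%{time_total}s\n' http://www.mysite.com/

    If time_starttransfer dominates, the server side (PHP/WordPress) is slow; if connect is slow, look at the network. As a test, moving the rules into the vhost config with AllowOverride None would also eliminate the per-directory .htaccess scanning.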


  • Download a website that requires log-in with HTTrack Copier

    - by H.Moss
    Hi guys! I have been researching how to download the content of a site that requires a username and password. This is actually harder than I thought it would be. I tried to use HTTrack Copier and followed the instruction below, but it's not working:

        Q: I can not access several pages (access forbidden, or redirect to
        another location), but I can with my browser, what's going on?

        A: You may need cookies! Cookies are specific data (for example, your
        username or password) that are sent to your browser once you have logged
        in certain sites so that you only have to log-in once. For example, after
        having entered your username in a website, you can view pages and
        articles, and the next time you will go to this site, you will not have
        to re-enter your username/password. To "merge" your personal cookies to
        an HTTrack project, just copy the cookies.txt file from your Netscape
        folder (or the cookies located in the Temporary Internet Files folder
        for IE) into your project folder (or even the HTTrack folder).
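
    If copying the browser's cookie file is impractical, a hand-written cookies.txt in the Netscape format HTTrack reads is another hedged option (every value below is a placeholder for your site's real session cookie):

        # Netscape HTTP Cookie File
        # domain      include-subdomains  path  secure  expiry      name       value
        .example.com  TRUE                /     FALSE   1999999999  sessionid  abc123

    The session cookie's name and value can usually be read out of the browser's cookie manager after logging in manually.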


  • Can Apache 2 be configured to start sending gzipped data early?

    - by rikh
    We have Apache set up to gzip-compress HTML pages before they are sent to the client browser. However, some of our pages are slow to generate, and it seems that Apache holds on until it has the complete page, compresses it, and then sends it to the browser. There are big chunks of the page (the main important bits) that are actually generated and output fairly quickly. Is it possible to configure Apache to start compressing and sending data for the page as soon as the script starts outputting something? If it is, can you offer any help on how to do this? If not, can you suggest any other way to get gzip compression working for the server? The scripts that generate the pages are written in PHP. We are using Apache 2.0 on Linux.
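
    mod_deflate compresses the response as a stream, so in principle the bottleneck is the PHP side buffering the whole page. A hedged sketch, assuming output buffering is off (or explicitly flushed) and mod_deflate is enabled; the variable and function names are placeholders:

        <?php
        header('Content-Type: text/html');
        echo $header_html;               // the quickly generated top of the page
        if (ob_get_level()) {
            ob_flush();                  // flush PHP's own output buffer, if one is active
        }
        flush();                         // hand the chunk to Apache and its deflate filter
        // ... slow generation continues below ...
        echo render_slow_body();         // placeholder for the expensive part

    Apache's DeflateBufferSize directive (default 8 KB) governs how much mod_deflate accumulates before emitting a compressed chunk, so very small early flushes may still be held briefly.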


  • Can you change the icon of a pinned IE 9 web application? And how do you do it?

    - by Rick Roth
    In IE 9 you have the ability to click and drag an open browser tab to the Windows 7 taskbar and pin the shortcut there. This creates a pseudo-application experience where the shortcut can have its own custom jumplist and is not grouped with other IE 9 browser tabs on the taskbar. Windows uses the "shortcut icon" or "favicon" defined in the HTML as the icon on the taskbar. If no shortcut icon is defined, the generic IE shortcut icon is used. If you have a bunch of these shortcuts pinned to the taskbar without distinct icons, it can be confusing which is which. Can you change the icon of a pinned IE 9 web application? And how do you do it?
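
    For sites you control, the pinned-site icon comes from markup, so a hedged sketch of the tags IE 9 reads when pinning (paths and names are placeholders):

        <!-- served from the page being pinned -->
        <link rel="shortcut icon" href="/favicon.ico" />
        <meta name="application-name" content="My App" />
        <meta name="msapplication-starturl" content="http://www.example.com/" />

    For sites you don't control it is harder: the pin is stored as a .website file under the user's Pinned\TaskBar folder and keeps referencing the site's own favicon, and I'm not aware of a supported per-user override.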


  • Squid/Kerberos authentication with only Linux

    - by user28362
    Hi, I would like to know if it is possible to let a Windows XP machine authenticate to Squid (on Linux) using Kerberos, without the need for an Active Directory domain. I only want to create a Kerberos ticket on the client side, which should give the client access to Squid (using IE). I have only found tutorials about configuring A.D. with Squid, not an environment with only Linux servers. Thanks.

    Update: The Kerberos setup is done correctly; the proxy and the client can get tickets. As for the browser (FF/IE), I get:

        ERROR
        Cache Access Denied
        While trying to retrieve the URL: http://www.google.com/
        The following error was encountered:
        * Cache Access Denied.
        Sorry, you are not currently allowed to request:
        http://www.google.com/
        from this cache until you have authenticated yourself.

    In the Kerberos helper log, I get:

        squid_kerb_auth: Got 'YR ElRNTVMTUABBAABAB4IIogAAAAAAAAAAAAAAAAAAAAAFASgDAAAADw==' from squid (length: 59).
        squid_kerb_auth: parseNegTokenInit failed with rc=101
        squid_kerb_auth: received type 1 NTLM token

    This message is strange, as I didn't configure NTLM. It looks like the browser is using the wrong authentication method.
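
    A hedged reading of that log: a "type 1 NTLM token" means the browser answered the Negotiate challenge with NTLM instead of Kerberos, which both IE and Firefox do when they cannot obtain a ticket for the proxy's service principal. Settings worth checking (the host name is a placeholder):

        # Firefox, in about:config (hedged suggestions):
        network.negotiate-auth.allow-proxies = true               # must be true for proxy auth (it is the default)
        network.negotiate-auth.trusted-uris  = proxy.example.com  # used for server (non-proxy) Negotiate auth

        # IE: the proxy must be addressed by the exact FQDN that matches the
        # HTTP/proxy.example.com principal in Squid's keytab; if the browser
        # reaches the proxy by IP or a mismatched name, the ticket request fails
        # and the browser silently falls back to NTLM (the type 1 token).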


  • Configs for several sites in Apache with SSL

    - by elCapitano
    I need to secure two different sites in Apache. One of them should only be a proxy for a different server which is running on port 8069. Now one site (which is served natively by Apache) runs with SSL:

        <VirtualHost *:443>
            ServerName 192.168.1.20
            SSLEngine on
            SSLCertificateFile /etc/ssl/erp/oeserver.crt
            SSLCertificateKeyFile /etc/ssl/erp/oeserver.key
            DocumentRoot /var/www/cloud
            ServerPath /cloud/
            #CustomLog /var/www/logs/ssl-access_log combined
            #ErrorLog /var/www/logs/ssl-error_log
        </VirtualHost>

    The other one is not running and is not even registered. When I try to access it, I get an exception (ssl_error_rx_record_too_long):

        <VirtualHost *:443>
            ServerName 192.168.1.20
            ServerPath /erp/
            SSLEngine on
            SSLCertificateFile /etc/ssl/erp/oeserver.crt
            SSLCertificateKeyFile /etc/ssl/erp/oeserver.key
            ProxyRequests Off
            ProxyPreserveHost On
            <Proxy *>
                Order deny,allow
                Allow from all
            </Proxy>
            ProxyVia On
            ProxyPass / http://127.0.0.1:8069/
            ProxyPassReverse / http://127.0.0.1:8069
            RewriteEngine on
            RewriteRule ^/(.*) http://127.0.0.1:8069/$1 [P]
            RequestHeader set "X-Forwarded-Proto" "https"
            SetEnv proxy-nokeepalive 1
        </VirtualHost>

    My wish is the following configuration:

        192.168.1.20         ->> unsecured local path to website
        192.168.1.20/cloud/  ->> secured local document path for cloud
        192.168.1.20/erp/    ->> secured proxy on port 80 for http://192.168.1.20:8069

    How is this possible? Is this even possible? Perhaps cloud.192.168.1.20 and erp.192.168.1.20 would be better?! Thank you
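
    A hedged sketch of the likely missing piece: with two <VirtualHost *:443> blocks, Apache needs name-based matching on port 443 as well, which in turn needs distinct server names (so the subdomain idea at the end is the right direction); without it, only the first vhost is ever consulted:

        NameVirtualHost *:443              # required on Apache 2.2 for name-based SSL vhosts
        <VirtualHost *:443>
            ServerName cloud.example.lan   # placeholder; distinct names are the point
            ...
        </VirtualHost>
        <VirtualHost *:443>
            ServerName erp.example.lan     # placeholder
            ...
        </VirtualHost>

    ssl_error_rx_record_too_long usually means the browser received plain HTTP on port 443, so it is also worth confirming that the second vhost's file is actually enabled and that SSLEngine on takes effect for it. Note that /cloud/ over SSL and / without SSL on the same hostname cannot be split across vhosts; all paths within one vhost share its scheme.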


  • Importing Delicious export into Firefox 3

    - by Jordan Reiter
    The HTML export file from Delicious creates an HTML file that, while importable into Firefox, loses all of the tags, which pretty much misses the point of Delicious. Every method I've found to do the import doesn't work:

    - The online converter Delicious-to-Firefox 3 keeps throwing a server error; the "better" version is down.
    - I tried the trick of syncing Delicious bookmarks to Flock and then restoring that file into my Firefox browser. Although the bookmarks are in Firefox, they don't show up anywhere. I know they're in Firefox because when I create a backup file they're in the JSON file, but they refuse to show anywhere in the browser (clicking on any tag shows an empty list).

    Basically, I'm looking for someone who has successfully imported their Delicious bookmarks, with tags, into Firefox 3.6.13 (Mac).


  • How can I cornify a SharePoint site?

    - by Chris Farmer
    Inspired by the April 1 Gravatar changes and the memory of last year's cornification of Stack Overflow, I wanted to add a cornify button to my company's SharePoint app. I just added their HTML snippet to a Content Editor Web Part:

        <a href="http://www.cornify.com" onclick="cornify_add();return false;">
          <img src="http://www.cornify.com/assets/cornifycorn.gif" width="52" height="51" border="0" alt="Cornify" />
        </a>
        <script type="text/javascript" src="http://www.cornify.com/js/cornify.js"></script>

    The button renders all glittery and beautiful, and the magical functionality works fine for me in Chrome and Firefox (I'm on Windows 7). But in IE8, all the gorgeous unicorn images get added at the bottom of the page, such that you can't see them unless you scroll down. Since most of our users are IE users, I fear this just isn't going to be all that much fun. So, is there some known way to force this to work better in IE8, or is there another similarly fun site-adornment utility that might behave better in a SharePoint 2007 app running in IE7/8?


  • Ubuntu 11.04 and OpenLDAP - where is the config?

    - by Tom SKelley
    I've been asked to set up a multi-master LDAP environment on Ubuntu 11.04, instead of a single master server. I cloned the master server and recreated it as two VMs. I am trying to follow the instructions in the OpenLDAP documentation here: http://www.openldap.org/doc/admin24/replication.html, which talks about modifying the cn=config tree within LDAP. The subdirectory tree appears to be there at /etc/ldap/slapd.d/, and slapcat -b cn=config dumps a load of config information. When I try to connect using a browser and the admin bind credentials:

        ldapsearch -D '<adminDN>' -w <password> -b 'cn=config'

    I get:

        # extended LDIF
        #
        # LDAPv3
        # base <> (default) with scope subtree
        # filter: (objectclass=*)
        # requesting: ALL
        #

        # search result
        search: 2
        result: 32 No such object

    I don't see the config context when I connect via an LDAP browser either. I'm sure I'm missing something, but I can't see what it is!
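
    A hedged explanation: the cn=config database typically has its own rootDN, separate from the data backend's admin, so binding with the data admin's credentials yields "No such object". On Ubuntu's packaged slapd, the local root user can usually reach it over the ldapi socket with SASL EXTERNAL:

        sudo ldapsearch -Y EXTERNAL -H ldapi:/// -b cn=config

    If that works, the replication changes can be applied with ldapmodify over the same connection, or a rootPw can be added to the cn=config database so a normal bind works remotely.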


  • Why is Google Home page changing spontaneously

    - by Bob
    Periodically, my Google Chrome home page (which I originally set myself) changes spontaneously: on opening the browser it instead displays the page of small rectangular thumbnails of web pages I have visited before. In addition, the bookmarks bar and my home-page icon have disappeared. I am not redirected to any other website, as might occur if a virus or malware were involved. I can change everything back to my original settings, the way I prefer (and nothing prevents my resetting it), and can continue to browse with no difficulty, but this has happened several times in the past month. I had assumed this was an idiosyncrasy of how Chrome (mis)behaves, but I worry whether a virus or malware might be involved. I have antivirus and antimalware software monitoring the machine and run periodic complete scans; occasionally these find Trojans and other malware, which are removed, but this browser behavior seems to recur. I would like to prevent the surprises. Thank you.


  • Nginx > Varnish > Gunicorn Error Too many Redirections

    - by kollo
    I have the following stack: Nginx > Varnish > Gunicorn > Django. I want to cache two versions of the same site (mobile & web) with Varnish.

    Gunicorn:

        WEB:    gunicorn_django --bind 127.0.0.1:8181
        MOBILE: gunicorn_django --bind 127.0.0.1:8182

    Nginx, WEB:

        server {
            listen 80;
            server_name www.mysite.com;
            location / {
                proxy_pass http://127.0.0.1:8282;   # pass to Varnish
                proxy_set_header X-Real-IP $remote_addr;
                proxy_set_header Host $host;
                proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
            }
        }

    Nginx, MOBILE:

        server {
            listen 80;
            server_name m.mysite.com;
            location / {
                proxy_pass http://127.0.0.1:8282;   # pass to Varnish
                proxy_set_header X-Real-IP $remote_addr;
                proxy_set_header Host $host;
                proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
            }
        }

    Varnish, default.vcl:

        backend mobile_mysite {
            .host = "127.0.0.1";
            .port = "8182";
        }
        backend mysite {
            .host = "127.0.0.1";
            .port = "8181";
        }

        sub vcl_recv {
            if (req.http.host ~ "(?i)^(m.)?mysite.com$") {
                set req.http.host = "m.mysite.com";
                set req.backend = mobile_mysite;
            } elsif (req.http.host ~ "(?i)^(www.)?mysite.com$") {
                set req.http.host = "mysite.com";
                set req.backend = mysite;
            }
            if (req.url ~ ".*/static") {
                /* do not cache static content */
                return (pass);
            }
        }

    The problem: in Nginx, if I set the MOBILE version up to go through Varnish (port 8282) and leave the WEB version pointing at Gunicorn (port 8181), MOBILE is cached by Varnish and both WEB & MOBILE work, but WEB is not cached. If I set the proxy_pass of the WEB version to Varnish (port 8282) and restart Nginx, I get a "Too many redirections" error when accessing the web version (www.mysite.com). I think my problem comes from the Varnish config file, as the site works well if I point the Nginx proxy_pass directly at the Gunicorn ports (MOBILE & WEB).
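
    A hedged guess at the loop: the WEB backend may itself redirect bare mysite.com to www.mysite.com (Django's PREPEND_WWW setting does exactly this), while vcl_recv rewrites the Host back to mysite.com, so every response is another redirect. Note too that "(?i)^(m.)?mysite.com$" also matches bare mysite.com (the m. group is optional) and relabels it as mobile. A sketch that stops rewriting the web Host entirely:

        sub vcl_recv {
            if (req.http.host ~ "(?i)^m\.mysite\.com$") {
                set req.backend = mobile_mysite;
            } else {
                # leave req.http.host exactly as the client sent it
                set req.backend = mysite;
            }
            if (req.url ~ "/static") {
                return (pass);   /* do not cache static content */
            }
        }

    If the backends genuinely need canonical host names, the safer place to canonicalise is a redirect in Nginx, outside the cache.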

