Search Results

Search found 52885 results on 2116 pages for 'http redirect'.


  • Serious problem with my sound system: no hardware detected suddenly, please help

    - by Aravind
    I'm quite new to Ubuntu but recently started using it. Two days back I had a problem with the laptop speaker not muting when the headphone jack was plugged in; to resolve this I searched around, experimented with AlsaMixer, and got it working perfectly. FYI, this is the output of the ALSA script: http://www.alsa-project.org/db/?f=9ec8099800aca2cb74ee35c2bf58125e45ca9f43 But since this morning there is no sound and no sound hardware is detected! It looks like this: http://i.imgur.com/gnK8R.png http://i.imgur.com/nxgvU.png Please help. Regards, Aravind

    Read the article

  • Other link relationships and their impact on SEO

    - by haha
    Example of other link relationships:

        <head>
        <link rel='index' title='Main Title' href='http://domain.com/' />
        <link rel='start' title='Part Three' href='http://domain.com/part-3/' />
        <link rel='prev' title='Part Two' href='http://domain.com/part-2/' />
        <link rel='next' title='Part Four' href='http://domain.com/part-4/' />
        </head>

    Question: do these link relationships have a big impact on getting my site a good rank in search engines?

    Read the article

  • How to prevent Google Analytics from adding a second slash between the domain and the page-specific URL when viewing a page?

    - by Jeromy Anglim
    I have a blog http://foo.tumblr.com. I sometimes go to Site Content - All Pages in Google Analytics, navigate to the page listing, and then click the icon to take me to that page on my blog. However, instead of opening http://foo.tumblr.com/post/1234/blah.html, Google Analytics opens http://foo.tumblr.com//post/1234/blah.html (i.e., it adds a second slash between the domain and the page-specific component of the URL). How can I stop Google Analytics from doing this?

    Read the article

  • New Gencode Sql to Linq

    Hi all members. I have a code generator for C#. It generates 3-tier code and supports MS SQL to LINQ, MS SQL, and Access. You can use it and send me ideas so I can make it better. Download links are below: http://depositfiles.com/files/38hcd9xf8 or http://www.easy-share.com/1910377507/Ge ... artent.rar Thank you for using it! The package includes examples and a guide, which you can download from dofactory.com or the links below: http://www.easy-share.com/1910377763/Guide Do Patterns In Action 3.5.pdf http://depo...

    Read the article

  • How to wipe RAM on shutdown (prevent Cold Boot Attacks)?

    - by proper
    My system is encrypted using full disk encryption, i.e. everything except /boot is encrypted using dm-crypt/LUKS. I am concerned about cold boot attacks. Prior work:

    https://tails.boum.org/contribute/design/memory_erasure/
    http://tails.boum.org/forum/Ram_Wipe_Script/
    http://dee.su/liberte-security
    http://forum.dee.su/topic/stand-alone-implementation-of-your-ram-wipe-scripts

    Can you please provide instructions on how to wipe the RAM when Ubuntu is shut down or restarted? Thanks for your efforts!
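
    For what it's worth, a rough sketch of one approach, assuming the secure-delete package (which provides sdmem) and a hypothetical init-script shutdown hook. Note that a script run from the live system cannot scrub memory the kernel still has in use, which is why the Tails design linked above kexecs into a fresh kernel instead:

        # one-time setup: install the tools that provide sdmem
        sudo apt-get install secure-delete

        # then, from a script run as late as possible at shutdown
        # (e.g. a hypothetical /etc/init.d/wipe-ram linked into
        # runlevels 0 and 6):
        sdmem -ll    # single random pass over free memory; the default
                     # multi-pass mode is stronger but much slower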

    Read the article

  • Ecommerce item deleted by user: 301 redirect to HOME PAGE or 404 Not Found?

    - by Marco Demaio
    I know this question is somewhat similar to this one where they recommend using 404, but after reading this other one where they suggest using 301 when changing site URLs (in that specific case due to a redesign/refactoring) I got a bit confused, and I hope someone could clarify this specific example:

    Let's say I have an ecommerce site, and the final user inserted some interesting items into it; the ecommerce webapp created the item pages at the URLs http://...?id=20, http://...?id=30, etc. Now let's say some of these items attracted many external links from other sites, because people found them very interesting and linked to them. After some years the final user deletes those items, so obviously the pages/URLs http://...?id=20, http://...?id=30, etc. no longer exist, but many pages on the web still link to them.

    What should the ecommerce site do now, just show a 404 page for those items? But wouldn't that lose all the Google PR passed by the external links to the item pages? Isn't it better to use a 301 redirect to the HOME PAGE, which at least passes the PR to the HOME PAGE? Thanks.

    EDIT: Well, according to the answers, the best thing to do so far is a 404/410. To make this question more complete, I would like to discuss a special case, just to make sure I understood properly. Let's say the user creates those items again (the ones he previously deleted), maybe changing their names and descriptions a bit, but they are basically the same items. The webapp has no way to know these newly added items are the old items, so it obviously creates them as new items with new URLs http://...?id=100, http://...?id=101. Does it make sense at this point to 301 redirect the old URLs to the new ones?

    MORE EDIT (it would be VERY IMPORTANT to understand): Well, according to the clever answers received so far, it seems that for the special case explained in my last EDIT I could use 301, since it is not deceptive: the new page is basically a replacement for the old page in terms of content. This keeps the PR passed from external links and also makes for a better user experience. But besides the user experience, which is debatable (*1), in order to preserve PR from external broken links why not just always use 301? In my understanding Google dislikes duplicate content, but are we sure that a 301 redirect to the HOME PAGE is seen as duplicate content by Google? Google itself suggests 301 redirecting index.html to the document root, so if they considered 301 duplicate content, wouldn't that be duplicate content too? Why do they suggest it? Let me provoke you: why not just 301 to the HOME PAGE for every not-found page?

    (*1) As a user, when I follow a broken URL from some external link to a website's page, I would stick around more if I got redirected to the HOME PAGE rather than seeing a 404 page, where I would think the website does not even exist anymore and maybe not even try to go to its HOME PAGE.
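
    For reference, a minimal mod_rewrite sketch of the two outcomes discussed above, assuming a .htaccess context; the /item path and the item IDs are hypothetical:

        RewriteEngine On
        # item 30 was deleted with no replacement: answer 410 Gone ([G])
        RewriteCond %{QUERY_STRING} ^id=30$
        RewriteRule ^item$ - [G]
        # item 20 was recreated as item 100: permanent redirect ([R=301]);
        # a "?" in the substitution replaces the original query string
        RewriteCond %{QUERY_STRING} ^id=20$
        RewriteRule ^item$ /item?id=100 [R=301,L]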

    Read the article

  • "apt-get --print-uris" to list URI for downloading extra functionality package for amarok,reqonk

    - by munir
    Hello, I do not have an internet connection at home, so I use "apt-get --print-uris" to list the URLs, take them to another PC with an internet connection, and use wget to download the packages. I am having difficulty listing the URLs that Amarok's extra-functionality packages (like MP3 codecs) would be downloaded from. The KDE daemon tells me these packages are needed for extra functionality in Amarok, but I don't know how to list their URLs. "apt-get --print-uris upgrade" does not list the URLs needed for amarok/rekonq.
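
    A sketch of one workaround, assuming the codec packages can be named explicitly; "apt-get --print-uris -y install <packages>" prints the download URIs without installing anything. The package names below are guesses for MP3 support, so substitute whatever the KDE prompt actually asks for:

        # on the offline machine: print the URIs apt would download
        apt-get --print-uris -y install gstreamer0.10-plugins-ugly libmp3lame0 \
            | grep -o "'http[^']*'" | tr -d "'" > uris.txt

        # on the PC with internet access: fetch them all
        wget -i uris.txt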

    Read the article

  • Chrome Countdown Extension

    - by Mike Saffold
    I have modified this countdown script to count down to 4:20pm every day, and I have attempted to create a Google Chrome extension that displays the countdown. The JavaScript is supposed to replace a paragraph tag with id "note" with the time left. It works when I load the page in Chrome, but does not work when I load the extension. For example, if I put:

        <p id="note">asdf</p>

    I get just the text "asdf" in the extension popup, but when I open the HTML file directly I get the countdown. Here is the manifest.json file:

        {
            "name": "My First Extension",
            "version": "1.0",
            "manifest_version": 2,
            "description": "The first extension that I made.",
            "browser_action": {
                "default_icon": "icon.png",
                "default_popup": "popup.html"
            }
        }

    Here is the popup.html code:

        <html>
        <head>
            <title>4:20PM Countdown</title>
            <!-- Our CSS stylesheet file -->
            <link rel="stylesheet" href="http://fonts.googleapis.com/css?family=Open+Sans+Condensed:300" />
            <link rel="stylesheet" href="http://treesmoke.com/cd/assets/css/styles.css" />
            <link rel="stylesheet" href="http://treesmoke.com/cd/assets/countdown/jquery.countdown.css" />
        </head>
        <body>
            <p id="note">asdf</p>
            <!-- JavaScript includes -->
            <script type="text/javascript" src="http://code.jquery.com/jquery-1.7.1.min.js"></script>
            <script type="text/javascript" src="http://treesmoke.com/cd/assets/countdown/jquery.countdown.js"></script>
            <script type="text/javascript" src="http://treesmoke.com/cd/assets/js/script.js"></script>
        </body>
        </html>

    Here's the popup.html page, showing that the script works. Thanks guys; it isn't that big of a deal if I can't get it to work, I was just bored and decided to learn a little.
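
    A likely culprit, for what it's worth: with "manifest_version": 2, an extension's default Content Security Policy blocks scripts loaded from remote http:// origins, so the jQuery and countdown includes above never run inside the popup even though they work in a plain browser tab. A minimal sketch of the usual fix is to copy the scripts into the extension folder and reference them locally (file names assumed):

        <!-- in popup.html, replacing the remote script includes -->
        <script type="text/javascript" src="jquery-1.7.1.min.js"></script>
        <script type="text/javascript" src="jquery.countdown.js"></script>
        <script type="text/javascript" src="script.js"></script>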

    Read the article

  • Is it a good idea to add robots "noindex" meta tags to deep, low-content pages, e.g. product model data?

    - by Cognize
    I'm considering adding robots "noindex, follow" tags to the very numerous product data pages that are linked from the product style pages in our online store. For example, each product style has a page with full text content on the product:

    http://www.shop.example/Product/Category/Style/SOME-STYLE-CODE

    Then many data pages with technical data for each model code are linked from the product style page:

    http://www.shop.example/Product/Category/Style/SOME-STYLE-CODE-1
    http://www.shop.example/Product/Category/Style/SOME-STYLE-CODE-2
    http://www.shop.example/Product/Category/Style/SOME-STYLE-CODE-3

    It is these technical data pages that I intend to add the noindex tag to, as I imagine this might stop them from cannibalizing keyword authority from more important content-rich pages on the site. Any advice appreciated.
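
    For reference, a minimal sketch of the tag in question, as it would appear in the <head> of each technical data page; it keeps the page out of the index while still letting crawlers follow its links:

        <meta name="robots" content="noindex, follow" />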

    Read the article

  • How to make RewriteCond+RewriteRule change domain2/folder1 to domain1/folder1

    - by gman
    There are actually two questions. One is: how do I make RewriteCond+RewriteRule change domain2/folder1 to domain1/folder1? What I want is for any domain that is not domain1 to be switched to domain1 when it tries to access folder1. So for example:

        domain2.com/domain1/foo -> domain1.com/domain1/foo

    as well as

        domain3.com/domain1/foo -> domain1.com/domain1/foo

    This is what I tried:

        RewriteCond %{HTTP_HOST} !^domain1\.com$ [NC]
        RewriteCond %{REQUEST_URI} ^/folder1/
        RewriteRule ^/folder1/(.*)$ http://domain1.com/folder1/$1 [L,R=permanent]

    But that doesn't work. Next I tried a simpler rule to see if I could narrow down the issue:

        RewriteCond ${HTTP_HOST} domain2\.com [NC]
        RewriteRule ^(.*)$ http://google.com/ [L]

    I thought that would make ANY request to domain2.com go to google.com, so I tried http://domain2.com/foo, but I get domain2.com/foo, not google.com. If I go to http://domain2.com I get Google. Why don't I get there if I go to http://domain2.com/foo? What am I not understanding about mod_rewrite?
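
    A sketch of a version that should behave as intended, assuming the rules live in a .htaccess file (per-directory context). Two things to note there: the leading slash is stripped from the path before the RewriteRule pattern is matched, so ^/folder1/ can never match, and server variables are referenced as %{...}, not ${...} (the latter is RewriteMap lookup syntax):

        RewriteEngine On
        # any host other than domain1.com requesting folder1 gets
        # permanently redirected to domain1.com
        RewriteCond %{HTTP_HOST} !^domain1\.com$ [NC]
        RewriteRule ^folder1/(.*)$ http://domain1.com/folder1/$1 [L,R=permanent]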

    Read the article

  • Trouble Updating to Version 13.10 (from Version 13.04)

    - by user206783
    While trying to upgrade from version 13.04 to version 13.10 (German locale), I received the following problem messages (translated from German):

        W: Failed to fetch http://de.archive.ubuntu.com/ubuntu/dists/natty-backports/main/source/Sources  404 Not Found
        W: Failed to fetch http://de.archive.ubuntu.com/ubuntu/dists/natty-backports/restricted/source/Sources  404 Not Found
        W: Failed to fetch http://de.archive.ubuntu.com/ubuntu/dists/natty-backports/universe/source/Sources  404 Not Found
        W: Failed to fetch http://de.archive.ubuntu.com/ubuntu/dists/natty-backports/multiverse/source/Sources  404 Not Found
        E: Some index files failed to download. They have been ignored, or old ones used instead.

    The update then rolls back. Has anyone got a solution?

    Read the article

  • Boot time seems unusually long on MSI GX660R (bootchart included)

    - by Sman789
    After upgrading (clean install) to Ubuntu 12.04, the speed issue when running programs has diminished on my MSI GX660R laptop. However, the boot time is still much longer (over a minute, even after the BIOS) than on the many less powerful laptops I have seen running the same OS, and I was wondering if anyone could help me improve it. I use the FGLRX driver, if that makes any difference. I have uploaded a boot chart; it can be found here: http://imageshack.us/photo/my-images/4/bootchartl.png/ As you can see, the boot time is over a minute even after the BIOS. A 'designed for Vista' laptop from ages ago on which I installed Ubuntu boots in around thirty seconds, so I think something is off.

    Output of dmesg: http://paste.ubuntu.com/1081359/
    Output of /var/log/kern.log: http://paste.ubuntu.com/1081363/
    Output of /var/log/syslog: http://paste.ubuntu.com/1081365/

    Read the article

  • Updating Google sitemap for mobile

    - by dimo414
    I have a series of utilities to generate Google sitemaps for my whole site. These files are massive and slow to build. We want to start telling Google that these pages are mobile-crawlable too, by adding them to mobile sitemaps, but the documentation is unclear on whether I need physically different files for my mobile URLs than for my normal ones. If this is my current sitemap:

        <?xml version="1.0" encoding="UTF-8" ?>
        <urlset xmlns="http://www.sitemaps.org/schemas/sitemap/0.9">
          <url>
            <loc>http://mobile.example.com/article100.html</loc>
          </url>
        </urlset>

    can I simply change it to:

        <?xml version="1.0" encoding="UTF-8" ?>
        <urlset xmlns="http://www.sitemaps.org/schemas/sitemap/0.9"
                xmlns:mobile="http://www.google.com/schemas/sitemap-mobile/1.0">
          <url>
            <loc>http://mobile.example.com/article100.html</loc>
            <mobile:mobile/>
          </url>
        </urlset>

    or do I need to create new files with the additional markup, alongside my existing files?

    Read the article

  • OpenWeb(String) method

    - by ybbest
    I guess this is a SharePoint beginner problem; however, it took me a while to figure out what the problem was, so I am blogging it to help me remember. Basically, I wrote the following code to grab some list items from my SharePoint subsite http://win-oirj50igics/RestAPI; however, I got an error stating that: "<nativehr>0x80070002</nativehr><nativestack></nativestack>There is no Web named /http://win-oirj50igics/RestAPI". The problem is that the OpenWeb(String) method returns the web site located at the specified server-relative or site-relative URL. It takes a relative URL, so after I changed "http://win-oirj50igics/RestAPI" to "RestAPI", everything worked fine.

        using (SPSite site = new SPSite("http://win-oirj50igics/"))
        {
            SPWeb web = site.OpenWeb("http://win-oirj50igics/RestAPI");
            SPQuery query = new SPQuery();
            query.Query = camlDocument.InnerXml;
            SPListItemCollection items = web.Lists["Songs"].GetItems(query);
            IEnumerable<Song> sortedItems = from item in items.OfType<SPListItem>()
                                            orderby item.Title
                                            select new Song { SongName = item.Title, SongID = item.ID };
            songs.AddRange(sortedItems);
        }
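
    In other words, the corrected call passes only the site-relative URL; a minimal sketch of the fix described above:

        // OpenWeb takes a server-relative or site-relative URL,
        // not an absolute one, so pass just the subsite name:
        SPWeb web = site.OpenWeb("RestAPI");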

    Read the article

  • EZ Systems publishes three security patches for flaws affecting versions 4.1 and 4.2 of eZ Publish

    EZ Systems, publisher of the EZ Publish content management system, has just released a series of three security patches. The patches address flaws affecting versions 4.1 and 4.2 of the CMS; applying them is strongly recommended.

    -> The patches are here: http://ez.no/developer/security/secu...y_in_ez_search
    -> Official announcement: http://share.ez.no/blogs/ez/security...lish-instances...

    Read the article

  • Reverse Proxy (mod_rewrite) and Rails (absolute paths)

    - by SooDesuNe
    I have a front-end Rails app that reverse proxies to any of a number of backend Rails apps depending on the URL. For example, http://www.my_host.com/app_one reverse proxies to http://www.remote_host_running_app_one.com, such that a URL like http://www.my_host.com/app_one/users will display the contents of http://www.remote_host_running_app_one.com/users. I have a large and ever-expanding number of backends, so they cannot be explicitly listed anywhere other than a database. This is no problem for mod_rewrite using a prg: rewrite map reverse proxy. The question is: the URLs returned by Rails helpers have the form /controller/action, making them absolute to the root. This is a problem for the page served by mod_rewrite, because links on the proxied page appear absolute to the domain. I.e., http://www.my_host.com/app_one/controller/action has links that end up looking like /controller/action when they need to look like /app_one/controller/action. mod_proxy_html seems like the right idea, but it doesn't seem to be as dynamic as I would need, since its rules have to be hard-coded into the config files. Is there a way to fix this server-side, so that the links will be routed correctly?
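
    For a single known backend, a static mod_proxy_html sketch looks like the following, assuming mod_proxy and mod_proxy_html are loaded (the hostnames are the example ones above). The dynamic part, mapping an arbitrary prefix to a backend looked up from a database, is indeed the hard bit, since ProxyHTMLURLMap is evaluated from static configuration:

        <Location /app_one/>
            ProxyPass http://www.remote_host_running_app_one.com/
            ProxyPassReverse http://www.remote_host_running_app_one.com/
            # rewrite root-relative links in the returned HTML
            ProxyHTMLEnable On
            ProxyHTMLURLMap / /app_one/
        </Location>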

    Read the article

  • ISC-DHCP not providing address

    - by kiler129
    I just replaced my old router with a server running Ubuntu. Everything's fine except DHCP. When I try connecting an iPhone, it works: http://pastebin.com/NNEeiRLY Unfortunately, some of my devices can't get an IP from the server, e.g. my computer: http://pastebin.com/N6LnsEWC Here's my ISC configuration: http://pastebin.com/N5KQnhZV I've also tried running the DHCP server as root (because of some "permission denied" messages in the logs about the lease file). What can I do?

    Read the article

  • MySQL Enterprise Monitor 3.0.11 has been released

    - by Andy Bang
    We are pleased to announce that MySQL Enterprise Monitor 3.0.11 is now available for download on the My Oracle Support (MOS) web site. It will also be available via the Oracle Software Delivery Cloud in about 1 week. This is a maintenance release that includes a few new features and fixes a number of bugs. You can find more information on the contents of this release in the change log.

    You will find binaries for the new release on My Oracle Support. Choose the "Patches & Updates" tab, and then choose the "Product or Family (Advanced Search)" side tab in the "Patch Search" portlet. You will also find the binaries on the Oracle Software Delivery Cloud in approximately 1 week. Choose "MySQL Database" as the Product Pack and you will find the Enterprise Monitor along with other MySQL products.

    Based on feedback from our customers, MySQL Enterprise Monitor (MEM) 3.0 offers many significant improvements over previous releases. Highlights include:

    - Policy-based automatic scheduling of rules and event handling (including email notifications) make administration of scale-out easier and automatic
    - Enhancements such as automatic discovery of MySQL instances, centralized agent configuration and multi-instance monitoring further improve ease of configuration and management
    - The new cloud and virtualization-friendly, "agent-less" design allows remote monitoring of MySQL databases without the need for any remote agents
    - Trends, projections and forecasting: graphs and event handlers inform you in advance of impending file system capacity problems
    - Zero-configuration Query Analyzer: works "out of the box" with MySQL 5.6 Performance_Schema (supported by 5.6.14 or later)
    - False positives from flapping or spikes are avoided using exponential moving averages and other statistical techniques
    - Advisors can analyze data across an entire group; for example, the Replication Configuration Advisor can scan an entire topology to find common configuration errors like duplicate server UUIDs or a slave whose version is less than its master's

    More information on the contents of this release is available here: What's new in MySQL Enterprise Monitor 3.0? | MySQL Enterprise Edition: Demos | MySQL Enterprise Monitor Frequently Asked Questions | MySQL Enterprise Monitor Change History

    More information on MySQL Enterprise and the Enterprise Monitor can be found here:

    http://www.mysql.com/products/enterprise/
    http://www.mysql.com/products/enterprise/monitor.html
    http://www.mysql.com/products/enterprise/query.html
    http://forums.mysql.com/list.php?142

    If you are not a MySQL Enterprise customer and want to try the Monitor and Query Analyzer using our 30-day free customer trial, go to http://www.mysql.com/trials, or contact Sales at http://www.mysql.com/about/contact. If you haven't looked at MEM recently, and especially MEM 3.0, please do so now and let us know what you think. Thanks and Happy Monitoring! - The MySQL Enterprise Tools Development Team

    Read the article

  • Allowing outbound traffic with APF/iptables for OpenVZ container

    - by David
    I have APF installed in an OpenVZ container (Proxmox 2.1). The config is pretty much vanilla and things are working: my external services like SSH and HTTP work. My problem is that all outbound traffic on http/https is blocked. How do I allow all outbound traffic for http/https? If I set EGF to 1 like this, all inbound and outbound traffic gets blocked:

        EGF="1"
        EG_TCP_CPORTS="21,25,80,443,43,53"
        EG_UDP_CPORTS="20,21,53"
        EG_ICMP_TYPES="all"

    I opened a single outbound rule with the following:

        # /usr/local/sbin/apf -a downloads.wordpress.org

    How do I allow all outbound traffic on http/https without blocking all traffic? Why would inbound ssh/http traffic be allowed while all outbound traffic is blocked?

    Read the article

  • URL rewriting via forward proxy

    - by Biggroover
    I have an app that runs inside my firewall and talks to multiple endpoints via HTTP/HTTPS on a non-standard port, e.g. http://endpoint1.domain.com:7171, http://endpoint2.domain.com:7171. What I want to do is route these requests through a forward proxy that rewrites the URL to something like http://allendpoints.domain.com/endpoint1 (port 80 or 443), and then on the other end have a reverse proxy that unwinds what the forward proxy did to reach the specific endpoints. The result being that I can route existing app requests to specific endpoints across the internet without having to change my app software. My questions are: Is this even possible? Is it a good idea? Are there better ways to do this? Can this be done with IIS and Apache as the proxies?

    Read the article

  • Programming Windows Identity Foundation - ISBN 978-0-7356-2718-5

    - by TATWORTH
    This book introduces a new technology that promises a considerable improvement on the ASP.NET membership system. If you have ever had to write an extranet system, you will be aware of the problems in setting up membership for your site. Windows Identity Foundation promises to be an excellent replacement. The book Programming Windows Identity Foundation (ISBN 978-0-7356-2718-5, http://oreilly.com/catalog/9780735627185) is therefore breaking new ground. I recommend this book to all ASP.NET development teams. You should reckon on 3 to 5 man-days to study it, try out the sample programs, and see if it can replace your bespoke solution. Remember, this is version 1 of WIF, so give yourself adequate time to read the book and familiarise yourself with the new software. Some URLs for more information:

    WIF home page: http://msdn.microsoft.com/en-us/security/aa570351.aspx
    The Identity Training Kit: http://www.microsoft.com/downloads/en/details.aspx?displaylang=en&FamilyID=c3e315fa-94e2-4028-99cb-904369f177c0
    The author's blog: http://www.cloudidentity.net/

    Read the article
