Search Results

Search found 2680 results on 108 pages for 'soft 404'.


  • How do I completely remove phpMyAdmin?

    - by blade19899
    I messed up my phpMyAdmin. I hadn't logged in to phpMyAdmin in a while and forgot my password, so I purged it like so: sudo apt-get purge phpmyadmin. I did get some error messages asking for my password, but since I had forgotten it, I just pressed ignore. After that I installed phpMyAdmin again like so: sudo apt-get install phpmyadmin. This time I won't be forgetting my password. But now, when I log in to phpMyAdmin, I get a 404 Not Found error page! Question: how do I completely remove phpMyAdmin so that a fresh install works again? Note: I am running Ubuntu 12.10 (amd64).
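
    A minimal sketch of a full purge and reinstall, assuming a stock Ubuntu 12.10 LAMP setup (the paths and package names below are the defaults, not confirmed from the question):

        sudo apt-get purge phpmyadmin                    # remove the package and its debconf answers
        sudo rm -rf /etc/phpmyadmin /var/lib/phpmyadmin  # clear leftovers from the failed purge, if any
        sudo apt-get install phpmyadmin                  # reinstall; choose apache2 when prompted
        # If the 404 persists, the Apache alias may be missing (a common cause):
        sudo ln -s /etc/phpmyadmin/apache.conf /etc/apache2/conf.d/phpmyadmin.conf
        sudo service apache2 restart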

    Read the article

  • Looking for a CDN

    - by Bill
    Most of the CDNs that I've seen require you to upload your content in advance. I'm looking for a CDN that, upon receiving a request for a resource it hasn't seen, will contact my application server. If the application server returns something, it should be sent to the user and then cached in the CDN. If not, the CDN should just return a 404. If the user requests an unexpired item, the CDN should serve it without bothering my app server. Does anything like this exist? Is there a way to get CloudFront to work like this?

    Read the article

  • Google still has record of my old site URL - what to do?

    - by Mayeenul Islam
    I had a blog site, i.e. http://example2.com; then I bought a new domain, i.e. http://example.com, and 301 (permanent) redirected example2.com to example.com. But in Google Webmaster Tools, when I get a 404 and click into the link and look at the "Linked from" tab, it shows links like:

        http://example.com/post-1
        http://example2.com/feed
        http://example2.com/post-1

    According to Google, if you change your domain you should keep the redirection in place for at least 4-6 months, and that period has almost passed. So why does Google still have traces of my old site? The issue is important because I don't want to pay for the old domain anymore. I tried deleting my existing sitemap.xml and recreating it from the new site, but such links are still stored. What could I do?
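
    For reference, a minimal sketch of the kind of 301 redirect described above, assuming the old domain is served by Apache (the domain names are the question's own placeholders):

        RewriteEngine on
        RewriteCond %{HTTP_HOST} ^(www\.)?example2\.com$ [NC]
        RewriteRule ^(.*)$ http://example.com/$1 [R=301,L]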

    Read the article

  • Google Webmaster Tools shows invalid data

    - by Altar
    Webmaster Tools shows 1 URL error (a not-found page). The report says that 5 pages link to a page (let's call it x) that does not exist (and because it doesn't exist, it returns a soft 404). However, I looked at the source code of those 5 pages and none of them links to the x page. It is as if Google is seeing an old version of those pages that did point to x. What is the problem? How do I know whether Google cached an old version of those 5 pages?

    Read the article

  • "Enable Wireless" option is disabled in network settings

    - by silenTK
    I'm using Ubuntu 11.10 (dual-booted with Windows 7), but I'm unable to access the Internet wirelessly, even though I can do so on Windows 7. The output of rfkill list all is given below:

        0: brcmwl-0: Wireless LAN
           Soft blocked: no
           Hard blocked: yes
        1: hp-wifi: Wireless LAN
           Soft blocked: no
           Hard blocked: no

    The output of sudo lshw -C network is:

        *-network DISABLED
           description: Wireless interface
           product: BCM4313 802.11b/g/n Wireless LAN Controller
           vendor: Broadcom Corporation
           physical id: 0
           bus info: pci@0000:02:00.0
           logical name: eth1
           version: 01
           serial: 11:11:11:11:11:11
           width: 64 bits
           clock: 33MHz
           capabilities: pm msi pciexpress bus_master cap_list ethernet physical wireless
           configuration: broadcast=yes driver=wl0 driverversion=5.100.82.38 latency=0 multicast=yes wireless=IEEE 802.11
           resources: irq:16 memory:c2500000-c2503fff
        *-network
           description: Ethernet interface
           product: RTL8101E/RTL8102E PCI Express Fast Ethernet controller
           vendor: Realtek Semiconductor Co., Ltd.
           physical id: 0
           bus info: pci@0000:03:00.0
           logical name: eth0
           version: 05
           serial: 22:22:22:22:22:22
           size: 100Mbit/s
           capacity: 100Mbit/s
           width: 64 bits
           clock: 33MHz
           capabilities: pm msi pciexpress msix vpd bus_master cap_list ethernet physical tp mii 10bt 10bt-fd 100bt 100bt-fd autonegotiation
           configuration: autonegotiation=on broadcast=yes driver=r8169 driverversion=2.3LK-NAPI duplex=full firmware=rtl_nic/rtl8105e-1.fw latency=0 link=yes multicast=yes port=MII speed=100Mbit/s
           resources: irq:42 ioport:3000(size=256) memory:c0404000-c0404fff memory:c0400000-c0403fff

    The Broadcom STA wireless driver is installed, activated, and currently in use. My laptop is an HP Pavilion g6-1004tx. The hardware switch is on, yet the "Enable Wireless" option is disabled in network settings.
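
    Note that rfkill reports the brcmwl-0 entry as hard-blocked, which software alone cannot clear; it usually reflects the physical switch or firmware state. A sketch of checks worth trying (this assumes the Broadcom wl driver shown in the lshw output):

        sudo rfkill unblock all                    # clears soft blocks only
        sudo modprobe -r wl && sudo modprobe wl    # reload the driver so it re-reads the switch state
        rfkill list all                            # confirm whether the hard block cleared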

    Read the article

  • Why is nesting or piggybacking errors within errors bad in general?

    - by dietbuddha
    Why is nesting or piggybacking errors within errors bad in general? To me it seems intuitively bad, but I'm suspicious because I cannot adequately articulate why. This may be because it is not bad in general, only in specific instances. Why is it detrimental to design error/exception handling this way? The specific instance is a REST service. There is a desire by some to use HTTP errors (specifically the 500 response) to indicate any problem with specific instances of a resource. An example of an instance resource in this case would be:

        http://server/ticket/80   # instance
        http://server/ticket      # not an instance

    So this is the behavior being proposed: if ticket 80 does not exist, return an HTTP response code of 500; within the body of the error, return the "real" error as an additional error code and description. If the ticket resource itself doesn't exist, return a response code of 404.
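
    To make the contrast concrete, a sketch of the two designs as wire exchanges (the server and JSON payloads here are hypothetical):

        curl -i http://server/ticket/80
        # proposed:   HTTP/1.1 500 Internal Server Error
        #             {"error": 404, "description": "ticket 80 does not exist"}
        # plain HTTP: HTTP/1.1 404 Not Found
        #             {"description": "ticket 80 does not exist"}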

    Read the article

  • Shouldn't storage classes be taught early in a C class or book?

    - by Adam Mendoza
    Shouldn't storage classes be taught early in a C class or book? I notice that a lot of books, even some of the better ones, cover them toward the end of the book, and some books just add them as an appendix. I would teach them together with variables. This is foundational, and unfortunately I think many readers never make it that far in a book. Now that auto has a different meaning in C++ (versus being optional in C), it may confuse people who didn't realize it has always been there. For example, from C Programming: A Modern Approach:

        18.2 Storage Classes                    401
             Properties of Variables            401
             The auto Storage Class             402
             The static Storage Class           403
             The extern Storage Class           404
             The register Storage Class         405
             The Storage Class of a Function    406
             Summary                            407

    Read the article

  • mismatch of version of libkdcraw20 and libkdcraw-data

    - by naveen jankar
    I'm using Ubuntu 12.04 LTS. When installing digiKam, I'm getting a mismatch between the versions of libkdcraw20 and libkdcraw-data that it wants and those available in the repositories. It wants version 4.8.5-0ubuntu0.2 (in other words, the latest version according to Synaptic), but the available one is 4.8.5-0ubuntu0.3 in both cases. Is there a workaround? Or how do I request that the Ubuntu managers rectify this?

    Addenda: In Synaptic I selected digiKam to be installed. It downloaded all the dependencies, but the 2 packages in question were not found; the message it gave was "W: Failed to fetch security.ubuntu.com/ubuntu/pool/main/libk/libkdcraw/… 404 Not Found [IP: 91.189.91.13 80]". I google-searched the 2 files in the repositories and found that the version available there is ubuntu0.3 instead of ubuntu0.2.
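
    A symptom like this, where the index wants 0ubuntu0.2 while the pool now serves 0ubuntu0.3, usually means the local package index is stale. A minimal first step, assuming apt is usable from a terminal:

        sudo apt-get update            # refresh the package lists
        sudo apt-get install digikam   # retry against the current index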

    Read the article

  • Static HTML to Wordpress Migration SEO Implications?

    - by Kayle
    Recently, I migrated a client's site to a new server and a new home within WordPress so they could more easily edit their website and start a blog section. The static site was 10 years old and, according to my client, had consistently ranked #3 for its primary keyword; it has dropped to rank #6-8 following the migration. At launch, we made sure the URLs were identical (save the removal of ".htm", which we used 301 redirects to compensate for), and we generated a new XML sitemap and pinged Google with the new site. We keep a 404 log to make sure we're not losing any incoming links. We also have Google Webmaster Tools on this site and have zero errors/suggestions; everything seems OK. I was told by numerous sources that Google would not penalize us for the use of 301s, but it's the only thing I can think of right now that is different about the site, other than the platform. Any ideas about what we could be getting docked for?
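
    For reference, a sketch of the ".htm"-stripping redirect described above, assuming it lives in an Apache .htaccess file (the actual rule used is not shown in the question):

        RewriteEngine on
        RewriteRule ^(.*)\.htm$ /$1 [R=301,L]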

    Read the article

  • JEOS

    - by john.graves(at)oracle.com
    JEOS stands for Just Enough Operating System. It is a great environment for building virtual machines without all the clutter of a windowing system, games, office products, etc. It is from Ubuntu, and you install it using the Ubuntu server installer, but rather than picking a standard install, press F4 and choose “Install a minimal system.”

    Note: the “Install a minimal virtual machine” option is specific to VMware, and I plan to use VirtualBox. Be sure to include OpenSSH in the install so that it installs sshd.

    *** Also, if you plan to install XE, you’ll need to modify the partitions to have a larger swap space (at least 1.5 GB). ***

    Once the install is done, I find it useful to install a few other items.

    Update Ubuntu:

        apt-get update

    Install a JRE (yes, Java will be included in any of the WebLogic installs, but I need this one if I want to do remote display for config wizards, etc.):

        apt-get install openjdk-6-jre

    Install a compiler (some apps need to rebuild kernel modules):

        apt-get install gcc

    Install guest additions: choose the VirtualBox “Devices->Install Guest Additions…” option. This sets up a /dev/cdrom or /dev/cdrom1, which you’ll need to mount manually, then run the Linux .bin file:

        sudo mount /dev/cdrom /mnt

    Update nofile limits. Most Java apps fail with the standard Ubuntu settings: edit /etc/security/limits.conf and add these lines at the end:

        *     soft nofile 65535
        *     hard nofile 65535
        root  soft nofile 65535
        root  hard nofile 65535

    These numbers are very high and I wouldn’t use them on a production system, but for this environment it is fine.

    To get rid of the annoying piix error on boot, add the following line to the /etc/modprobe.d/blacklist.conf file:

        blacklist i2c_piix4

    Read the article

  • CentOS 5 - Unable to resolve addresses for NFS mounts during boot

    - by sagi
    I have a few servers running CentOS 5.3, and am trying to get 2 NFS mount points to mount automatically on boot. I added 2 lines similar to the following to fstab:

        server1:/path1 /path1 nfs soft 0 0
        server2:/path2 /path2 nfs soft 0 0

    When I run mount -a manually, the mount points are properly mounted as expected. However, when I reboot the machine, only /path2 is mounted. For /path1 I get the following error:

        mount: can't get address for server1

    It obviously looks like a DNS issue, but the record is properly configured in all the DNS servers, and the share mounts properly if I retry after the reboot has completed. I could fix this by using an IP address instead of a hostname in /etc/fstab, or by adding server1 to /etc/hosts, but I would rather not do that. What might be the reason for failing to resolve this specific address during boot time? And why is the problem only with the 1st mount point, while the 2nd is properly mounted despite having identical configuration?
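
    One avenue worth sketching (an assumption, not a verified fix for this case): the bg NFS mount option, which backgrounds and retries a mount whose first attempt fails, so a transient boot-time resolution failure doesn't leave the share unmounted:

        server1:/path1  /path1  nfs  soft,bg  0 0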

    Read the article

  • How can I stop a bot attack on my site?

    - by tnorthcutt
    I have a site (built with WordPress) that is currently under a bot attack (as best I can tell). A file is being requested over and over, and the referrer is (almost every time) turkyoutube.org/player/player.swf. The file being requested is deep within my theme files, and is always followed by "?v=" and a long string (i.e. r.php?v=Wby02FlVyms&title=izlesen.tk_Wby02FlVyms&toke). I've tried setting an .htaccess rule for that referrer, which seems to work, except that now my 404 page is being loaded over and over, which is still using lots of bandwidth. Is there a way to create an .htaccess rule that requires no bandwidth usage on my part? I also tried creating a robots.txt file, but the attack seems to be ignoring that.

    This is the relevant part of the .htaccess file:

        RewriteCond %{HTTP_REFERER} turkyoutube\.org [NC]
        RewriteRule .* - [F]
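
    One way to shrink the response further, sketched under the assumption that the heavy part is WordPress rendering a themed error page: have Apache answer the 403 with a literal one-word body instead of a template:

        RewriteCond %{HTTP_REFERER} turkyoutube\.org [NC]
        RewriteRule .* - [F,L]
        ErrorDocument 403 "Forbidden"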

    Read the article

  • "Invalid operation" status code in a HATEOAS REST API

    - by FinnNk
    In a HATEOAS API, links are returned which represent possible state transitions. A conforming client should just be retrieving and following those links, but if a non-conforming client constructs URIs rather than following the supplied links, what would be the most appropriate status code/response to return?

    - 400 would work, together with some information in the response body; this is what we're currently doing.
    - 403 I guess would be wrong, as it implies that the request could never work, but potentially the link may be available in the future.
    - 404 sounds plausible: at this point in time the resource doesn't exist.

    What do people think? I know that conditional requests can handle requests based on stale responses (resulting in e.g. 412s), but this is a slightly different situation.
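
    A sketch of what the current 400-with-body approach might look like on the wire (the resource and payload here are hypothetical):

        curl -i -X POST http://server/ticket/80/close
        # HTTP/1.1 400 Bad Request
        # {"error": "invalid-transition", "detail": "no 'close' link was offered for ticket 80"}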

    Read the article

  • Forward to other domain with CNAME

    - by xybrek
    In my GoDaddy DNS manager, I made an A record that points to *.mirror for my domain. Now when I access the URL 123.mirror.mydomain.com from the browser, I can see that my app is loaded and all is OK. My problem is when making a CNAME on another domain point to the URL above: accessing 123.otherdomain.com, which I expect to "forward to" 123.mirror.mydomain.com, I only get a 404 error. The IP 173.194.71.121 is actually ghs.googlehosted.com. What am I missing here? Why can't 123.otherdomain.com, which points to 123.mirror.mydomain.com, open that page? I think Google is handling the web page request.
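
    A zone-file sketch of the setup as described (the names are the question's placeholders, the IP is hypothetical): note that with a CNAME the browser still sends Host: 123.otherdomain.com, so whatever serves the mirror, here apparently Google's ghs frontend, must also be configured to answer for that hostname, which is a plausible source of the 404.

        *.mirror.mydomain.com.  IN  A      203.0.113.10             ; hypothetical app IP
        123.otherdomain.com.    IN  CNAME  123.mirror.mydomain.com.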

    Read the article

  • HP Envy dv6t-7300: Disabled WiFi through button and can't enable it anymore

    - by Mateus B. Cassiano
    Well, I have an HP Envy dv6t-7300 laptop that came with a Ralink RT5390 WiFi card. Everything was working perfectly; I would occasionally press the WiFi button on my keyboard to toggle the card on/off. Until today, all worked right: if the WiFi was off (WiFi LED amber) and I pressed the WiFi button, after a few seconds the LED turned white and everything worked. If I repeated the process, the WiFi LED turned amber and the card was disabled. But now I can't turn it on anymore. Running sudo rfkill list all I get:

        0: phy0: Wireless LAN
           Soft blocked: no
           Hard blocked: no
        1: hp-wifi: Wireless LAN
           Soft blocked: no
           Hard blocked: yes

    So I ran sudo rfkill unblock all, but nothing changed. As a side note, if I run sudo ifconfig wlan0 up, the indicator LED turns white (indicating that the card was enabled), but Ubuntu still says the card is blocked by hardware.

    Extra information: the card works without issues in Windows and in the Ubuntu installer (booting from a live CD). I'm using the card out of the box, with the drivers already included in Ubuntu 12.10. The module rt2800pci is loaded and working fine, not blacklisted, etc. The card and the button toggle worked flawlessly until today, when I toggled it off and couldn't turn it on anymore.

    The problem is back, but in a different manner: if I don't press the WiFi key a few times while GRUB is loading, the WiFi button at the login screen will be amber (disabled); pressing it will toggle it white (enabled) or amber (disabled) again, but Ubuntu still says the network card was disabled by hardware and doesn't connect. In other words, if I don't press the WiFi button a few times while Ubuntu is booting, it gets stuck with the "network card was disabled by hardware" message, even if the light is white (enabled). Any clue? Maybe an error in some startup script or config file?

    Read the article

  • Tips for managing internal and external links using WordPress [closed]

    - by keruilin
    So I'm looking for ways to optimize my site for user and search engine purposes. I've read several articles and looked at several different plugins. To say the least, I'm thoroughly confused as to what the best practices are for managing internal and external links. Here is a list of some of my questions:

    - Which internal links should be set to "nofollow"?
    - Which external links should be set to "nofollow"?
    - To what degree does actively managing links contribute to your PR?
    - Should you use "nofollow" blindly on all links in comments?
    - If a link to an external site is broken (404 or whatever), should you "nofollow" that link? What about "noindex"?

    As you can see, lots of questions. I'm hoping that you experienced webmasters can give a newb some best-practice advice.

    Read the article

  • Web Platform Installer issues deploying Azure SDK 1.4 on refreshed systems.

    - by Enrique Lima
    Recently I have been doing quite a bit of testing on different means of deploying the Azure SDKs. After a very successful couple of systems, I started running into issues last night. Here is the problem: if I go to the Windows Azure website, go to Develop, click on the SDK and Tools, then Get Tools & SDK, it launches the Web Platform Installer. All seems well at that point, except that it goes through the initial process and finds the SDK files for 1.4, but since the tools for Visual Studio are still 1.3, the location throws back a 404, which causes the installer to fail. NOTE: if you already had SDK 1.3 and the tools in place, it will go through. The fix is to go directly to the Microsoft Download Center location and download the files. Here is the link … http://www.microsoft.com/downloads/en/details.aspx?FamilyID=7a1089b6-4050-4307-86c4-9dadaa5ed018

    Read the article

  • Ubuntu 12.10 Help! Everything is incredibly slow

    - by Keith
    I installed 12.10 from USB onto this machine: Intel Celeron 2.00 GHz, 496 MB RAM. I had to modify the GNU GRUB boot line to read "nomodeset" or I could not see the GUI; I have an Nvidia graphics card. It takes about 2 minutes to boot. The icons on the left of the desktop take about 1 minute to slowly open their menus. I have a network connection, but Mozilla gives 404 errors and I cannot update. Where can I find a blow-by-blow explanation for troubleshooting and repairing this problem?

    Read the article

  • Blocking path scanning

    - by clinisbut
    I'm seeing a number of very suspicious requests in my access log:

        /i
        /im
        /imaa
        /imag
        /image
        /images
        /images/d
        /images/di
        /images/dis

    They part from a known resource (in the above example, /images/disrupt.jpg), all coming from the same IP. Request rates vary from 1/sec to 10/sec and seem somewhat random. Obviously someone is trying to find something, and it seems they are using a script. How do I block this kind of behaviour? I thought of blocking the requesting IP, at least for a given time, keeping in mind that: request intervals seem legitimate (at least I think so); and I don't want to end up blocking a search engine bot, which may hit 404 URLs too (and that's a different problem, I know). Do they always use the same IP?
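
    A minimal sketch of a time-boxed IP block at the firewall, so the requests never reach the web server at all (the address is a placeholder, and this assumes shell access with iptables):

        sudo iptables -A INPUT -s 203.0.113.5 -j DROP    # block the scanning IP
        # later, once the scan has stopped:
        sudo iptables -D INPUT -s 203.0.113.5 -j DROP    # lift the block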

    Read the article

  • RewriteRule working local but not on remote server

    - by m0tv
    I have a .htaccess file with one simple RewriteRule:

        RewriteEngine on
        RewriteRule ^([A-Za-z0-9-]+)$ ?site=$1

    I want to have a URL like http://www.example.com/imprint and forward it to http://www.example.com/?site=imprint. I checked this rule with a RewriteRule tester, which gave me the results I want to achieve, and it works well on my local development system too. But on the remote server the URLs just give me a 404 error. Other, simpler rewrite rules work with no problems, so everything must be set up correctly (I think...). The problem is that I don't have access to any error logs or the server configs, so the only thing I can do is guess... Can anyone tell me if there's something wrong with this rule? Or anything else I can do or test to solve this? Or has someone an idea what could be wrong on the server?
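
    One guess worth testing, since the server config is not accessible: if the host's vhost sets AllowOverride None, the .htaccess file is ignored entirely and /imprint naturally 404s; MultiViews can also intercept extensionless URLs before mod_rewrite runs. A sketch of what the rule might need around it (the Options line can live in .htaccess only if the host permits it):

        Options -MultiViews
        RewriteEngine on
        RewriteRule ^([A-Za-z0-9-]+)$ ?site=$1 [L]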

    Read the article

  • Configuring php on Ubuntu server

    - by mk_89
    I have been following this tutorial: http://www.howtoforge.com/installing-apache2-with-php5-and-mysql-support-on-ubuntu-12.04-lts-lamp

    I have got to the part where I run a simple test to determine whether PHP has been installed properly. The installation went fine; I installed php5 using the following command:

        apt-get install php5 libapache2-mod-php5

    and then restarted the server. To see whether php5 had been installed, I created a file with vi /var/www/info.php and edited it to contain:

        <?php
        phpinfo();
        ?>

    After trying to run it on my server, I get a 404 Not Found error. What could be the problem?
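
    Two quick checks, sketched under the assumption of the default Apache DocumentRoot of /var/www on 12.04:

        ls -l /var/www/info.php        # confirm the file landed where Apache serves from
        sudo service apache2 restart   # reload Apache so mod_php is picked up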

    Read the article

  • Can't get wireless working after installing ubuntu 12.10 on acer aspire 5560-7414

    - by markdel
    I have been struggling all day, trying different solutions from different posts on how to get my WiFi working, but none have worked. Any help would be greatly appreciated. Below is some wireless information to help find the problem.

        mark@mark-Aspire-5560:~$ sudo rfkill list all
        [sudo] password for mark:
        0: acer-wireless: Wireless LAN
           Soft blocked: no
           Hard blocked: no
        1: brcmwl-0: Wireless LAN
           Soft blocked: no
           Hard blocked: no

    The output of sudo lshw -C network is:

        *-network
           description: Ethernet interface
           product: NetLink BCM57785 Gigabit Ethernet PCIe
           vendor: Broadcom Corporation
           physical id: 0
           bus info: pci@0000:01:00.0
           logical name: eth0
           version: 10
           serial: 20:6a:8a:7f:63:82
           size: 10Mbit/s
           capacity: 1Gbit/s
           width: 64 bits
           clock: 33MHz
           capabilities: pm msi msix pciexpress bus_master cap_list rom ethernet physical tp 10bt 10bt-fd 100bt 100bt-fd 1000bt 1000bt-fd autonegotiation
           configuration: autonegotiation=on broadcast=yes driver=tg3 driverversion=3.123 duplex=full firmware=sb ip=158.65.194.244 latency=0 link=yes multicast=yes port=twisted pair speed=10Mbit/s
           resources: irq:16 memory:f0000000-f000ffff memory:f0010000-f001ffff memory:f0050000-f00507ff
        *-network
           description: Wireless interface
           product: BCM43227 802.11b/g/n
           vendor: Broadcom Corporation
           physical id: 0
           bus info: pci@0000:02:00.0
           logical name: eth1
           version: 00
           serial: 08:ed:b9:01:e0:8b
           width: 64 bits
           clock: 33MHz
           capabilities: pm msi pciexpress bus_master cap_list ethernet physical wireless
           configuration: broadcast=yes driver=wl0 driverversion=5.100.82.112 latency=0 multicast=yes wireless=IEEE 802.11bgn
           resources: irq:18 memory:f0100000-f0103fff
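
    Since rfkill shows no blocks and the card is bound to the Broadcom wl driver, one sketch worth trying (an assumption, not a confirmed fix for this model) is reinstalling the STA driver package and reloading the module:

        sudo apt-get update
        sudo apt-get install --reinstall bcmwl-kernel-source   # rebuilds the wl module
        sudo modprobe -r wl && sudo modprobe wl                # reload the driver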

    Read the article

  • Deleting Pages and SEO

    - by Lynda
    I am in the process of redesigning my website. I will be changing the URL structure of several pages, and in some cases pages are going to be deleted outright because they are obsolete or no longer necessary. My question is this: I do not want to hurt SEO by having a lot of 404 errors pop up. For the pages whose URLs change I will set up 301 redirects, but how do I handle the pages that I am deleting and that have no redirect target? From my understanding, if those pages start returning 404s it will hurt SEO. Is this correct? How do I handle the deletion of pages?
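
    One option, sketched for Apache's mod_alias (the path is a placeholder): answer intentionally removed URLs with 410 Gone instead of 404, which tells crawlers the removal is deliberate:

        Redirect gone /old-page.html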

    Read the article

  • I cannot download anything

    - by Jason Machen
    I am very new to Ubuntu but decided to wipe my Windows 7 and install it. I cannot download anything from the Software Center. This is the error message I get (I can use the web in all other ways, including this site). What can I do? Thanks, Jason

        W:Failed to fetch http://security.ubuntu.com/ubuntu/dists/raring-security/main/source/Sources 404 Not Found [IP: 91.189.91.13 80]
        W:Failed to fetch http://security.ubuntu.com/ubuntu/dists/raring-security/restricted

    Plus about 20 other lines.

    Read the article

  • I am unable to use the Wubi installer: I get the message "ERROR TaskList: Cannot download the metalink and therefore the ISO"

    - by pat
    I used Wubi a few months ago on both XP and Win7 systems with no problem, but I have been unable to install on either for the last 2 weeks. From the log:

        09-05 11:36 DEBUG CommonBackend: Could not find any ISO or CD, downloading one now
        09-05 11:36 DEBUG TaskList: New task get_metalink
        09-05 11:36 DEBUG TaskList: ### Running get_metalink...
        09-05 11:36 DEBUG downloader: downloading http://cdimage.ubuntu.com/xubuntu/releases/12.04/release/xubuntu-12.04-desktop-amd64.metalink > C:\ubuntu\install
        09-05 11:36 ERROR CommonBackend: Cannot download metalink file http://cdimage.ubuntu.com/xubuntu/releases/12.04/release/xubuntu-12.04-desktop-amd64.metalink err=[Errno 14] HTTP Error 404: Not Found

    p

    Read the article
