Search Results

Search found 18609 results on 745 pages for 'url shortening'.


  • Visits, Pageviews, Bounce Rate, New Visitors, Visit Duration (Google Analytics): which one is the top priority for SEO?

    - by HOY
    This is the situation: my site is getting a lot of traffic from an image (a company logo) because that image ranks first in Google's image search results for the company's name. (I have no idea how that happened.) The image is a must for my website, but it is not relevant to the site's content, so people searching for the image land on my site with no real interest in it, and I end up with interesting statistics: http://postimage.org/image/3oyvrjoz9/
    Pros: Total Visits & Avg. New Visits
    Cons: Avg. Pages/Visit, Avg. Visit Duration, Bounce Rate
    In summary, I am confused about whether this image is helpful to my website, because I don't know how to weigh those five statistics against each other.
    P.S.: My website is 2 months old, and we are working on SEO at the moment.
    Another P.S.: I kindly ask you not to provide assumptions; I have assumptions of my own and need real knowledge.
    Edit: Search keyword: arcelik logo. Search site: google.com.tr. Search URL: https://www.google.com.tr/search?hl=en&q=arcelik+logo&bav=on.2,or.r_gc.r_pw.r_qf.&bvm=bv.41524429,d.Yms&biw=1366&bih=667&um=1&ie=UTF-8&tbm=isch&source=og&sa=N&tab=wi&ei=oZIDUfutAseVswa9zYHwCw

    Read the article

  • Unauthorized access error to HTML pages in IIS 7.0

    - by George2
    I am using VSTS 2008 + C# + .NET 3.5 + IIS 7.0. I have created a new web site and put an HTML file into its directory. When I use the Browse function in IIS Manager to open the HTML file, I get the following error. Any ideas what is wrong? BTW: the unauthorized error confuses me, since I run the worker process under an administrator account. From the error message, I also don't understand why the logon method is anonymous rather than the administrator account.
    HTTP Error 401.3 - Unauthorized
    You do not have permission to view this directory or page because of the access control list (ACL) configuration or encryption settings for this resource on the Web server.
    Module: IIS Web Core
    Notification: AuthenticateRequest
    Handler: StaticFile
    Error Code: 0x80070005
    Requested URL: http://localhost:80/a.html
    Physical Path: C:\test\simplehosttest\a.html
    Logon Method: Anonymous
    Logon User: Anonymous
    Thanks in advance, George
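    One possible fix, sketched under the assumption that anonymous authentication is left on its default IUSR account and that the content really lives in C:\test\simplehosttest: static files are read with the anonymous user's identity regardless of the worker-process account, so that user needs NTFS read access to the folder. Run from an elevated command prompt:

      icacls "C:\test\simplehosttest" /grant "IUSR:(OI)(CI)RX"
      rem or, to cover the whole IIS anonymous/application pool group instead:
      icacls "C:\test\simplehosttest" /grant "IIS_IUSRS:(OI)(CI)RX"

    This is only a sketch; if anonymous authentication is configured to use the application pool identity instead, the grant would target that account.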

    Read the article

  • Why are some web clients requesting a page named "cache"?

    - by Toto
    We see errors like this in the Apache error log: [Thu May 17 14:32:35 2012] [error] [client 192.168.1.1] File does not exist: /home/www-data/mywebsite.com/r/cache, referer: http://www.mywebsite.com/r/1010
    It is strange because: there is no reference in the code or URLs to a folder or file named "cache"; the folder/file "cache" does not exist; and the client randomly tries to access a "cache" path all over the website. It always follows this pattern: the referer is /level1/.../levelwhatever/filename and the request is /level1/.../levelwhatever/cache. We run a LAMP stack (Debian stable: PHP 5.3.3-7+squeeze9, with APC 3.1.3p1) and use Google Analytics and AdSense. We do not know how to reproduce the problem. Note: I replaced the user's IP in the log line for privacy.
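    A first diagnostic step (a sketch that assumes the default "combined" access-log format and the usual Debian log path) is to see which user agents request the phantom /cache paths; browser add-ons and prefetching bots often identify themselves there:

      # count requests for .../cache per User-Agent (field 6 when splitting on double quotes)
      grep ' /r/cache' /var/log/apache2/access.log | awk -F'"' '{print $6}' | sort | uniq -c | sort -rn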

    Read the article

  • What is the right iptables rule to allow apt-get to download programs?

    - by anthony01
    When I type something like sudo apt-get install firefox, everything works until it asks me: "After this operation, 77 MB of additional disk space will be used. Do you want to continue [Y/n]?" After I answer Y, error messages are displayed: Failed to fetch: <URL>
    My iptables rules are as follows:
      -P INPUT DROP
      -P OUTPUT DROP
      -P FORWARD DROP
      -A INPUT -i lo -j ACCEPT
      -A OUTPUT -o lo -j ACCEPT
      -A INPUT -p tcp --dport 80 -m state --state NEW,ESTABLISHED -j ACCEPT
      -A OUTPUT -p tcp --sport 80 -m state --state ESTABLISHED -j ACCEPT
    What should I add to allow apt-get to download updates? Thanks
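    The rules above only let the machine act as a web server on port 80; they never allow outgoing connections or DNS lookups, which is what apt-get needs. A minimal sketch of the missing rules (assuming the default-DROP policies stay in place, and that the configured mirrors use plain HTTP/HTTPS rather than FTP or a proxy port):

      # DNS lookups for resolving the mirror hostnames
      iptables -A OUTPUT -p udp --dport 53 -m state --state NEW,ESTABLISHED -j ACCEPT
      iptables -A INPUT  -p udp --sport 53 -m state --state ESTABLISHED -j ACCEPT
      # outgoing HTTP/HTTPS to the mirrors, plus the return traffic
      iptables -A OUTPUT -p tcp -m multiport --dports 80,443 -m state --state NEW,ESTABLISHED -j ACCEPT
      iptables -A INPUT  -p tcp -m multiport --sports 80,443 -m state --state ESTABLISHED -j ACCEPT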

    Read the article

  • Find hosted directories/ports in Jetty/Apache

    - by Paul Creasey
    Hi, I first asked this on SO, but I didn't get a response and I think it is probably more appropriate here. Let's say I have a directory which is being hosted by Jetty or Apache (I'd like an answer for both); I know the URL including the port, and I can log into the server. How can I find the directory that is being served on a certain port? I'd also like to go the other way: I have a folder on the server which I know is being hosted, but I don't know the port, so I can't find it in a web browser. How can I find a list of directories that are being hosted? This has been bugging me for ages but I've never bothered to ask before! Thanks.
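    A rough way to map ports to directories from a shell on the server (the paths below are common defaults and may differ on your install): for Apache, dump the parsed virtual-host configuration; for Jetty, the connectors (ports) and webapp contexts live in its XML config and the webapps directory.

      # Apache: list every vhost with its port and config file, then the docroots
      apachectl -S                      # on some distros: httpd -S or apache2ctl -S
      grep -Ri 'documentroot' /etc/apache2/ /etc/httpd/ 2>/dev/null

      # Jetty: ports are defined on the connectors; contexts map URLs to directories
      grep -i 'port' /opt/jetty/etc/jetty.xml
      ls /opt/jetty/webapps/ /opt/jetty/contexts/ 2>/dev/null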

    Read the article

  • How to handle multiple pages of the same site with the same outlinks

    - by pandafromchina
    I am developing a backlink tool for Chinese SEO (our web site URL is http://link.aizhan.com), much like ahrefs.com. I have run into the problem of how to handle multiple pages of the same site that share the same outlinks. For example, these pages of bbs.chinaz.com all carry the same outlinks:
      bbs.chinaz.com/Tea/thread-6293993-1-1.html
      bbs.chinaz.com/Tea/list-1.html
      bbs.chinaz.com/alimama/thread-6265032-1-1.html
      bbs.chinaz.com/alimama/thread-6265032-2-1.html?userid=-1&extParms=
      bbs.chinaz.com/Shuiba/list-1.html
      bbs.chinaz.com/FeedBack/thread-4456753-1-1.html
      etc.
    All of these pages have the same outlinks at the top of the page: www.cnzz.com (anchor text: ????), www.313.com (????), www.idc123.com (????). Suppose I store these outlinks in a database. An SEO would then find six backlinks from bbs.chinaz.com to www.cnzz.com, which is obviously meaningless for SEO purposes. Can you tell me how you deal with this problem?
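    One common way to handle this, sketched below in Python purely as an illustration (the tool's real storage layer is not described in the question), is to collapse outlinks to one record per source domain and target URL, keeping a page count and a representative page instead of one row per page:

      from urllib.parse import urlparse

      def source_domain(url):
          # urlparse only fills netloc when a scheme or '//' prefix is present
          return urlparse(url if "//" in url else "//" + url).netloc.lower()

      def dedupe_outlinks(links):
          # links: iterable of (source_page_url, target_url, anchor_text) tuples
          merged = {}
          for source_page, target, anchor in links:
              key = (source_domain(source_page), target)
              if key not in merged:
                  # one record per (source domain, target): a representative page and a count
                  merged[key] = {"example_page": source_page, "anchor": anchor, "pages": 0}
              merged[key]["pages"] += 1
          return merged

    The report can then show one domain-level backlink from bbs.chinaz.com to www.cnzz.com, with the page count as supporting detail.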

    Read the article

  • How do I fix Nginx config to work with multiple hosts of Unicorn?

    - by fred deAlmeida
    I have no problem running multiple instances of Unicorn on different Unix sockets and ports; it works fine if I browse to url:port. My problem is correctly formatting nginx.conf to allow multiple upstream definitions. Whatever I do does not seem to work. One instance works fine; multiple instances give me an '"upstream" directive is not allowed here' error. I am using the base nginx sample from the Unicorn site and duplicating the upstream section with different names, each as part of the http block. Any help would be amazing!
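    That particular error usually means an upstream block ended up inside a server or location block. A minimal sketch of the shape that works (socket paths and hostnames here are placeholders): both upstream blocks sit directly inside http, and each server block proxies to its own upstream by name.

      http {
          upstream app_one {
              server unix:/tmp/unicorn_one.sock fail_timeout=0;
          }
          upstream app_two {
              server unix:/tmp/unicorn_two.sock fail_timeout=0;
          }

          server {
              listen 80;
              server_name one.example.com;
              location / {
                  proxy_set_header Host $http_host;
                  proxy_pass http://app_one;
              }
          }
          server {
              listen 80;
              server_name two.example.com;
              location / {
                  proxy_set_header Host $http_host;
                  proxy_pass http://app_two;
              }
          }
      }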

    Read the article

  • Postfix cannot send email

    - by AKLP
    I'd like to mention that I'm really new to this, so please bear with me. I'm trying to set up forum software to send emails via Postfix, but I think my server has port 25 blocked. I tried running these:
    Works: ping alt2.gmail-smtp-in.l.google.com
    Doesn't work: telnet alt2.gmail-smtp-in.l.google.com 25, telnet 66.249.93.114 25
    I tried flushing iptables and then using these rules, but that didn't work either:
      sudo iptables --flush
      sudo iptables -P INPUT ACCEPT
      sudo iptables -P OUTPUT ACCEPT
      sudo iptables -P FORWARD ACCEPT
      sudo iptables -F
      sudo iptables -X
    Telnet to port 25 on localhost works, but nothing happens when telnet'ing to non-local hosts. mail.log:
      Oct 17 01:20:24 webhost postfix/smtp[3642]: connect to alt2.gmail-smtp-in.l.google.com[2607:f8b0:400e:c03::1a]:25: Connection timed out
      Oct 17 01:20:24 webhost postfix/smtp[3643]: connect to alt2.gmail-smtp-in.l.google.com[2607:f8b0:400e:c03::1a]:25: Connection timed out
      Oct 17 01:20:24 webhost postfix/smtp[3642]: 4744380032: to=<[email protected]>, relay=none, delay=2892, delays=2741/0.03/150/0, dsn=4.4.1, status=deferred (connect to alt2.gmail-smtp-in.l.google.com[2607:f$
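    If the provider does block outbound port 25 (common on VPS and residential connections), the usual workaround is to relay outgoing mail through an external smarthost on the submission port instead. A hedged sketch of the relevant main.cf settings, with a placeholder relay host and assuming SASL credentials stored in /etc/postfix/sasl_passwd:

      # /etc/postfix/main.cf
      relayhost = [smtp.example.com]:587
      smtp_sasl_auth_enable = yes
      smtp_sasl_password_maps = hash:/etc/postfix/sasl_passwd
      smtp_sasl_security_options = noanonymous
      smtp_tls_security_level = encrypt

    After editing, the password map needs to be compiled with postmap /etc/postfix/sasl_passwd and Postfix reloaded.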

    Read the article

  • Planning and Budgeting Cloud Service - Partner Webcast

    - by Mike.Hallett(at)Oracle-BI&EPM
    Please join us for a 90-minute live Partner Webcast giving an overview of the upcoming Oracle Planning and Budgeting Cloud Service (PBCS) offering, on Tuesday, 26th November, 2013 at 5:00 pm CET / 4:00 pm UK. Look out for the joining URL and instructions in my November newsletter, coming soon. As a reminder, there was also a Partner Webcast about PBCS recorded in August 2013 which included a demo; replay link here. Topics include: latest news from Product Management; live demo; overview of assets and collateral; Q&A session. Oracle Planning and Budgeting Cloud Service (PBCS) offers organizations the market-leading Oracle Hyperion Planning and Budgeting solution delivered via Oracle's public cloud service.

    Read the article

  • Dreamweaver not connecting RDC

    - by thomas
    I am trying to use Dreamweaver to connect via Remote Desktop to a computer where a website is hosted. Normal traffic to this server is routed to another domain, but when it is accessed by IP (as I understand it) the traffic never triggers a nameserver lookup and so stays there. The server is set up so that the only way I can access it is RDC. I can connect using an RDC program, but when I attempt to use Dreamweaver, I get a vague error: "The desired action could not be completed because an unexpected client/server communication error occurred."
    I have the Web URL set to http://my.ip.number/directory and, in the settings panel:
      Host name: my.ip.number
      Port: 3388
      Host directory: C:\Program Files (x86)\directory/directory
      Username: user
      Password: •••••••
    What could be preventing Dreamweaver from connecting? Thanks!
    Server details: it is a virtual server running Windows Server 2008 R2 Standard, Service Pack 1.

    Read the article

  • Is it possible to use mod_rewrite based on the existence of a file/directory and a unique ID? [closed]

    - by JM4
    My site currently forces all non-www pages to use www. I am already able to handle unique subdomains and parse them correctly, but I am trying to achieve the following (ideally with mod_rewrite): when a consumer visits www.site.com/john4, the server processes that request as www.site.com/index.php?Agent=john4. Our requirements are:
    1. The URL should continue to show www.site.com/john4 even though it was internally rewritten to www.site.com/index.php?Agent=john4.
    2. If a file (of any extension) or a directory exists with that name, the whole process stops and it serves that file instead: for example, www.site.com/file would pull up www.site.com/file.php if file.php existed on the server, and www.site.com/pages would go to www.site.com/pages/index.php if the pages directory exists.
    Thank you ahead of time. I am completely at a loss right now.
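    One possible .htaccess sketch for the site root that matches those two requirements (untested; the character class allowed in agent names is an assumption):

      RewriteEngine On

      # 1. If a matching .php file exists, serve it: /file -> /file.php
      RewriteCond %{REQUEST_FILENAME} !-f
      RewriteCond %{REQUEST_FILENAME} !-d
      RewriteCond %{REQUEST_FILENAME}.php -f
      RewriteRule ^(.+)$ $1.php [L]

      # 2. Otherwise, if nothing on disk matches, treat the path as an agent name.
      #    The rewrite is internal, so the browser keeps showing /john4.
      RewriteCond %{REQUEST_FILENAME} !-f
      RewriteCond %{REQUEST_FILENAME} !-d
      RewriteRule ^([A-Za-z0-9_-]+)/?$ index.php?Agent=$1 [L,QSA]

    Existing files and directories fall through both rules untouched, so /pages still resolves to /pages/index.php via DirectoryIndex.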

    Read the article

  • JavaScript and callback nesting

    - by Jake King
    A lot of JavaScript libraries (notably jQuery) use chaining, which allows the reduction of this:
      var foo = $(".foo");
      foo.stop();
      foo.show();
      foo.animate({ top: 0 });
    to this:
      $(".foo").stop().show().animate({ top: 0 });
    With proper formatting, I think this is quite a nice syntactic capability. However, I often see a pattern which I don't particularly like, but which appears to be a necessary evil in non-blocking models: the ever-present nesting of callback functions:
      $(".foo").animate({ top: 0 }, {
          complete: function () {
              $.ajax({
                  url: 'ajax.php',
                  success: function () {
                      // ...
                  }
              });
          }
      });
    And it never ends. Even though I love the ease non-blocking models provide, I hate the odd nesting of function literals it forces upon the programmer. I'm interested in writing a small JS library as an exercise, and I'd love to find a better way to do this, but I don't know how it could be done without feeling hacky. Are there any projects out there that have resolved this problem before? And if not, what are the alternatives to this ugly, meaningless code structure?
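    One widely used alternative, sketched here with illustrative helper names: have each asynchronous step return a promise (jQuery's animations and $.ajax both expose one), so the steps chain flatly instead of nesting, and errors funnel to a single handler.

      function slideDown(selector) {
          // .promise() resolves when the element's animation queue empties
          return $(selector).animate({ top: 0 }).promise();
      }

      function loadData() {
          // $.ajax already returns a promise-like jqXHR object
          return $.ajax({ url: 'ajax.php' });
      }

      slideDown('.foo')
          .then(loadData)
          .then(function (data) {
              // ... work with the response ...
          })
          .fail(function () {
              // one place to handle a failure from any step
          });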

    Read the article

  • Replace %26 with %2526 in htaccess

    - by Patrick
    Hi all, I would like htaccess to rewrite example.com/something_%26_else into example.com/something_%2526_else. I'm importing a bunch of pages from MediaWiki that have ampersands in the title; these are encoded as %26. Drupal, for various reasons, has decided to double-encode the URL so it becomes %2526. I simply can't create the alias within Drupal, so I have to use htaccess. This is what I have as my rule so far:
      RewriteRule ^w/([^%26]+)\%26(.*)$ w/$1\%2526$2 [R=301]
    I asked this question three months ago on Stack Exchange and was not able to get it working. I tried hiring a contractor for this but was unable to find one, so this is my last-ditch effort before I completely give up. I really appreciate the help. All the best, Patrick
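    An untested sketch of an alternative rule, based on two points about mod_rewrite that the original pattern runs into: the URL-path that RewriteRule matches has already been decoded (so the %26 is a literal ampersand by then), and the NE flag is needed so the % in the substitution is not escaped a second time on the redirect.

      RewriteEngine On
      # the decoded path looks like w/something_&_else; send the browser to the
      # double-encoded form and keep the literal %2526 with NE (no-escape)
      RewriteRule ^w/([^&]+)&(.*)$ /w/$1\%2526$2 [R=301,NE,L]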

    Read the article

  • Take a snapshot with JavaFX!

    - by user12610255
    JavaFX 2.2 has a "snapshot" feature that enables you to take a picture of any node or scene. Take a look at the API documentation and you will find new snapshot methods in the javafx.scene.Scene class. The most basic version has the following signature:
      public WritableImage snapshot(WritableImage image)
    The WritableImage class (also introduced in JavaFX 2.2) lives in the javafx.scene.image package, and represents a custom graphical image that is constructed from pixels supplied by the application. In fact, there are 5 new classes in javafx.scene.image:
      PixelFormat: Defines the layout of data for a pixel of a given format.
      WritablePixelFormat: Represents a pixel format that can store full colors and so can be used as a destination format to write pixel data from an arbitrary image.
      PixelReader: Defines methods for retrieving the pixel data from an Image or other surface containing pixels.
      PixelWriter: Defines methods for writing the pixel data of a WritableImage or other surface containing writable pixels.
      WritableImage: Represents a custom graphical image that is constructed from pixels supplied by the application, and possibly from PixelReader objects from any number of sources, including images read from a file or URL.
    The API documentation contains lots of information, so go investigate and have fun with these useful new classes! -- Scott Hommel
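    A minimal, self-contained sketch of the feature described above (the file name and scene contents are just placeholders): snapshot the scene and write the result to a PNG with ImageIO, converting through SwingFXUtils.

      import java.io.File;
      import javax.imageio.ImageIO;
      import javafx.application.Application;
      import javafx.embed.swing.SwingFXUtils;
      import javafx.scene.Scene;
      import javafx.scene.control.Label;
      import javafx.scene.image.WritableImage;
      import javafx.scene.layout.StackPane;
      import javafx.stage.Stage;

      public class SnapshotDemo extends Application {
          @Override
          public void start(Stage stage) throws Exception {
              StackPane root = new StackPane();
              root.getChildren().add(new Label("Hello, snapshot!"));
              Scene scene = new Scene(root, 300, 200);
              stage.setScene(scene);
              stage.show();

              // Passing null lets the Scene allocate a WritableImage of the right size.
              WritableImage image = scene.snapshot(null);
              ImageIO.write(SwingFXUtils.fromFXImage(image, null), "png", new File("snapshot.png"));
          }

          public static void main(String[] args) {
              launch(args);
          }
      }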

    Read the article

  • Is this the correct approach to an OOP design structure in php?

    - by Silver89
    I'm converting a procedural site to an OOP design to make the code more manageable in the future, and so far have created the following structure:
      /classes
      /templates
      index.php
    with these classes: ConnectDB, Games, System, User, User -> Moderator, User -> Administrator. In index.php I have code that checks which $_GET values are present to determine which page content to build (it's early, so there's only one example and no default case):
      function __autoload($className) {
          require "classes/" . strtolower($className) . ".class.php";
      }

      $db = new Connect;
      $db->connect();
      $user = new User();

      if (isset($_GET['gameId'])) {
          System::buildGame($_GET['gameId']);
      }
    This runs the buildGame function in the System class, which looks like the following and then uses getters in the Game class to return values, such as $game->getTitle() in the template file templates/play.php:
      function buildGame($gameId) {
          $game = new Game($gameId);
          $game->setRatio(900, 600);
          require 'templates/play.php';
      }
    I also have .htaccess rules so that the actual game page URL works instead of passing the parameters to index.php. Are there any major errors in how I'm setting this up, or do I have the general idea of OOP correct?
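    A small, hedged refinement of the dispatch shown above (buildProfile and buildHome are hypothetical handlers, added only to illustrate the shape): as more parameters appear, a single routing map keeps index.php from growing into a chain of isset() checks.

      $routes = array(
          'gameId' => function ($value) { System::buildGame((int) $value); },
          'userId' => function ($value) { System::buildProfile((int) $value); },  // hypothetical
      );

      $handled = false;
      foreach ($routes as $param => $handler) {
          if (isset($_GET[$param])) {
              $handler($_GET[$param]);
              $handled = true;
              break;
          }
      }
      if (!$handled) {
          System::buildHome();  // hypothetical default page
      }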

    Read the article

  • How do I save a Web Page?

    - by Remus Rigo
    Hi all, I have tried many programs and solutions for saving web pages (HTML, MHT, DOC, PDF). My favorite was a browser add-on from OmniPage (OCR). What I like about it is that it prints the whole page continuously and doesn't stamp the URL or page numbers on the page, which I find annoying. Does anyone know of software like this (freeware or not)? I tried CutePDF and it didn't work for me. I want this for offline use and would prefer a PDF.

    Read the article

  • Why do Google search results include pages disallowed in robots.txt?

    - by Ilmari Karonen
    I have some pages on my site that I want to keep search engines away from, so I disallowed them in my robots.txt file like this:
      User-Agent: *
      Disallow: /email
    Yet I recently noticed that Google still sometimes returns links to those pages in their search results. Why does this happen, and how can I stop it?
    Background: Several years ago, I made a simple web site for a club a relative of mine was involved in. They wanted to have e-mail links on their pages, so, to try and keep those e-mail addresses from ending up on too many spam lists, instead of using direct mailto: links I made those links point to a simple redirector / address-harvester trap script running on my own site. This script would return either a 301 redirect to the actual mailto: URL or, if it detected a suspicious access pattern, a page containing lots of random fake e-mail addresses and links to more such pages. To keep legitimate search bots away from the trap, I set up the robots.txt rule shown above, disallowing the entire space of both legit redirector links and trap pages.
    Just recently, however, one of the people in the club searched Google for their own name and was quite surprised when one of the results on the first page was a link to the redirector script, with a title consisting of their e-mail address followed by my name. Of course, they immediately e-mailed me and wanted to know how to get their address out of Google's index. I was quite surprised too, since I had no idea that Google would index such URLs at all, seemingly in violation of my robots.txt rule. I did manage to submit a removal request to Google, and it seems to have worked, but I'd like to know why and how Google is circumventing my robots.txt like that and how to make sure that none of the disallowed pages will show up in their search results.
    P.S. I actually found out a possible explanation and solution, which I'll post below, while preparing this question, but I thought I'd ask it anyway in case someone else might have the same problem. Please do feel free to post your own answers. I'd also be interested in knowing if other search engines do this too, and whether the same solutions work for them also.
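    The usual explanation is that robots.txt only blocks crawling, not indexing: if other pages link to a disallowed URL, Google can still list the bare URL (with anchor text it found elsewhere) without ever fetching it. A hedged sketch of the common fix, assuming Apache with mod_headers and that the redirector lives under /email: let bots fetch the URL but tell them not to index it, which also means the Disallow rule has to be removed so the header can actually be seen.

      <Location "/email">
          Header set X-Robots-Tag "noindex, nofollow"
      </Location>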

    Read the article

  • On Apache, how do I allow access to only a single file?

    - by sriram
    I have an Apache machine which is serving a .js file. That file should be the only file that can be seen. I have configured Apache to do so like this:
      <Location /var/www/test/test.js>
          Order allow,deny
          Allow from all
      </Location>
    The site address is test.in, which points to the test.js file in the /var/www/test directory, and that is working fine. But I would like any request to test.in/someurl (which does not exist), or any URL other than test.in itself, to return a message with a 401 error. How do I do that?
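    One possible shape for this, sketched with Apache 2.2 syntax to match the Order/Allow directives above (note that <Location> takes URL paths rather than filesystem paths, and that Apache normally answers denied requests with 403 rather than 401): deny the whole document root, then re-allow just the one file.

      <Directory /var/www/test>
          Order deny,allow
          Deny from all
      </Directory>

      <Files "test.js">
          Order allow,deny
          Allow from all
      </Files>

    Because <Files> sections are merged after <Directory> sections, the single file stays reachable while every other path in the docroot is refused.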

    Read the article

  • Site Action Button Missing

    - by David
    I have a MOSS 2007 site collection which was running smoothly. However, after moving to Forms Authentication connected to Active Directory, neither my Farm Administrator account nor the two site collection administrators can get past the site homepage. There is no Site Actions button available. When I log into Central Administration with the Farm Administrator account, the Site Actions button is there, so I suspect it's something to do with permissions on the site application. I have tried to connect directly to the settings URL http://mysite/_layouts/settings.aspx, but I get the usual error:
      Error: Access Denied
      Current User
      You are currently signed in as: xxxx (Farm Administrator Account)
      Sign in as a different user
      Request access
    I get the same error when logging in with the site collection admin accounts. I've also checked to make sure that the sites are not locked.

    Read the article

  • Case insensitivity for the Apache httpd Location directive

    - by user57178
    I am working with a solution that requires mod_proxy_balancer and an application server that both ignores case and mixes different case combinations in the URLs found in its generated content. The configuration works; however, I now have a new requirement that causes problems. I need to be able to create a Location directive (as per http://httpd.apache.org/docs/current/mod/core.html#location ) and have the URL-path interpreted in case-insensitive mode. This requirement comes from the need to add authentication directives to the location; as you might guess, a user (or the application in question) changing one letter to a capital circumvents the protection instantly. The httpd runs on a Unix platform, so URL-path matching is case-sensitive by default. Should the regular-expression form of the Location directive work in this case? Could someone please show me an example of such a configuration? And if a regular expression cannot be made to match case-insensitively, what part of httpd's source code would I need to modify?
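    Regular expressions do cover this case: <LocationMatch> (or the ~ form of <Location>) takes a PCRE pattern, and the inline (?i) flag makes it case-insensitive. A hedged sketch, with the path and the authentication details as placeholders:

      <LocationMatch "(?i)^/secure">
          AuthType Basic
          AuthName "Restricted"
          AuthUserFile /etc/apache2/htpasswd
          Require valid-user
      </LocationMatch>

    With this in place, /secure, /Secure and /SECURE all hit the same authentication block, so a capitalization change no longer bypasses it.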

    Read the article

  • Forcing exact hostname match in IIS

    - by iis_newbie
    I am looking for a way to force an exact hostname match in IIS when using HTTPS. For instance, I want "https://works.mysite.com/resource" to be OK, but "https://noworks.mysite.com/resource" to return 404 (assuming they both resolve to the same IP). IIUC, the default behavior of IIS when going to "https://noworks.mysite.com/resource" is to show a certificate warning, and if the user presses continue, the user can still access the URL. I was able to do this by generating a *.mysite.com SSL certificate and then specifying the hostname within the bindings in IIS, but with a certificate that doesn't start with *, the hostname field is disabled and blank. Am I missing something simple here?
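    For what it's worth, the greyed-out host name field is a limitation of the IIS 7 Manager UI rather than of the binding itself; a hedged sketch (the site name and host are placeholders) of setting a host-header HTTPS binding from the command line with appcmd:

      %windir%\system32\inetsrv\appcmd set site /site.name:"MySite" /+bindings.[protocol='https',bindingInformation='*:443:works.mysite.com']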

    Read the article

  • Advertising on personalized pages behind a login

    - by johneth
    I am currently building a web app which requires a user to log in. After they log in, they can see the content they've added to the web app, and what the web app has done with that content. The URL structure won't differentiate between users (e.g. every user's 'homepage' would be example.com/home, not something like example.com/username/home). This is much the same way that Facebook works (every FB user's messages are at facebook.com/messages, for example). This presents a problem with advertising. I know that you can use AdSense behind a login, but as far as I'm aware, that's for things like forums, where everyone sees the same content (which wouldn't be the case on this site). I also know that I could put AdSense on the pages without allowing it to log in, which would produce inferior ads. I'm fairly certain it would be against the Terms of Service to give AdSense a login to a 'dummy' account with typical content, as it would not be seeing the same thing as every other user (which is impossible, as they all see different things). So, my question is: is there an ad network, or other method, that can serve ads behind a login, perhaps based on keywords rather than page content?

    Read the article

  • Prevent URLs from specific domains from being saved in Firefox history

    - by noam
    I want to prevent or block URLs from specific domains from being saved or shown in my history. I want to be able to go to those websites normally, just not have them saved, and without having to use private or incognito mode. For instance, I don't want any of Google's search result pages to be saved in my history, since then when I use the awesomebar I get a lot of Google search results, which are of no use to me. Of course I can keep deleting them, but I would like a way to specify that any URL starting with www.google.com shouldn't be saved.

    Read the article

  • Announcing Hackathon for Social Developers

    - by Mike Stiles
    Continuing our Social Developer theme, we're excited to announce a week-long hackathon put on by the Oracle Social Developer Lab (OSDL). The event starts at JavaOne on Oct 2nd and runs through Oct 9th. A winner will be announced and profiled in the following issue of Java Magazine.
    What's it about? The OSDL is on a mission to make social development easier for the Java community. You may have noticed the biggest social networks have created tools for Ruby, PHP, and other languages, but not as much for Java. We've decided to help fill the gap with a SocialLink social publishing library. You can learn more about it on java.net. We're also interested in promoting other tools that facilitate social development, such as the DaliCore Framework. For our hack, you've got one week to leverage our library and/or DaliCore to create a social app. The only rules are that it must be a new application, and it must leverage one or both of these tools.
    How to submit: Create a project that uses either the SocialLink library or the DaliCore Framework to read or publish social data.
    1. Upload your hack to a new project on java.net.
    2. Submit the URL to your java.net project through the project submission form on the Oracle Social Developer Community Facebook page.
    Only projects that have been submitted to the Oracle Social Developer Community will be reviewed. In addition to the review process, we'll be adding some projects to the SocialLink project as a "sibling" project.
    Should you participate? If you're a developer who aspires to integrate some social functionality into your Java application, then yes!
    How else can I participate with OSDL? If you're not ready to participate in the hackathon but have ideas for how we can make social development easier for the Java community, come join our social developer community on Facebook.

    Read the article

  • FTP connects but files aren't visible when browsing

    - by YsoL8
    Hello. If this should be on that other site, please don't shoot me, as I can't remember its name or URL. I have an FTP account set up in Dreamweaver that connects to the remote site and appears to upload files as normal. But when I browse to the location I can't see any new files or changes to the index page (I've uploaded index.php and connect.php); I'm getting a 404 page. I suspect the host directory is wrong, but looking at the file tree, I can't see the folder I'm supposed to be using, so I'm uploading to the apparent site root. Any guidance on this?

    Read the article
