Search Results

Search found 9717 results on 389 pages for 'pro'.


  • Cannot log in properly but no error shows in Joomla

    - by saeha
    This is what I did. First, I added these variables in \libraries\joomla\database\table\user.php:

        var $img_content = null; // contains the blob-type data
        var $img_name = null;
        var $img_type = null;

    Then I added this code in \components\com_user\controller.php:

        $file = JRequest::getVar('pic', '', 'files', 'array');
        if (isset($file['name'])) {
            jimport('joomla.filesystem.file');
            $fileName = $file['name'];
            $tmpName  = $file['tmp_name'];
            $fileSize = $file['size'];
            $fileType = $file['type'];
            $fp = fopen($tmpName, 'r');
            $content = fread($fp, filesize($tmpName));
            //$content = addslashes($content);
            fclose($fp);
            $user->set('img_name', $fileName);
            $user->set('img_type', $fileType);
            $user->set('img_content', $content);
        }

    That works fine, but new users with an uploaded photo cannot log in, while users with an empty img_content field can log in properly. When I log in as a user with an uploaded photo, the site does not redirect properly; it just returns to the login page. But when I log in through the backend as a super admin, I can see that user listed as logged in. I started saving the images in the database because I was having problems with the upload directory once the site went live. I think the login is affected by the blob-type data in the database. Could that be the problem? What could be the solution? -saeha
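
    Since the question mentions that filesystem storage was the original approach, here is a minimal sketch of that alternative: save the photo to disk and store only its path on the user, so the user row never carries blob data. This assumes the Joomla 1.5 JFile API implied by the jimport call above; the images/profiles directory is hypothetical.

        $file = JRequest::getVar('pic', '', 'files', 'array');
        if (isset($file['name']) && $file['name'] !== '') {
            jimport('joomla.filesystem.file');
            // Sanitize the client-supplied name before using it in a path.
            $safeName = JFile::makeSafe($file['name']);
            $dest = JPATH_SITE . '/images/profiles/' . uniqid() . '_' . $safeName;
            if (JFile::upload($file['tmp_name'], $dest)) {
                // Store the relative path instead of the file contents.
                $user->set('img_name', str_replace(JPATH_SITE . '/', '', $dest));
            }
        }

    If the blob really is what breaks the redirect (for example, because the user object is serialized into the session), this sidesteps it entirely.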


  • How to assign a JavaScript variable value to the Google Analytics script? [migrated]

    - by Vinoth Prakash
    I have assigned two values to two hidden fields on the server side and accessed those values on the client side using script. I have written the Google Analytics code and set up custom variables. I need to pass the values stored in the JavaScript variables as the "value" of each custom variable. I have assigned the variables, but the values are not displaying. Please tell me what error I made in the script.

    My aspx code:

        <html xmlns="http://www.w3.org/1999/xhtml">
        <head runat="server">
            <title></title>
        </head>
        <body>
            <form id="form1" runat="server">
            <div>
                <br />
                Total Pirce&nbsp;&nbsp;: <asp:Label ID="Label1" runat="server" Text="10"></asp:Label><br />
                &nbsp;Ship Price&nbsp;&nbsp;: <asp:Label ID="Label2" runat="server" Text="5"></asp:Label><br />
                ------------------<br />
                Grand Total : <asp:Label ID="Label3" runat="server" Text="15"></asp:Label><br />
                ------------------
            </div>
            <asp:HiddenField ID="HiddenField1" runat="server" />
            <asp:HiddenField ID="HiddenField2" runat="server" />
            </form>
            <script type="text/javascript">
                var serverhid1 = document.getElementById('HiddenField1').value;
                var serverhid2 = document.getElementById('HiddenField2').value;
                alert(serverhid1)
                alert(serverhid2)
                var _gaq = _gaq || [];
                _gaq.push(['_setAccount', 'UA-35156990-1']);
                // Set custom variables
                _gaq.push(['_setCustomVar', 1, 'TotalPirce', serverhid1, 3]);
                _gaq.push(['_setCustomVar', 2, 'Shipping', 'yes', 3]);
                _gaq.push(['_setCustomVar', 3, 'GrandTotal', check(), 3]);
                _gaq.push(['_setDomainName', 'none']);
                _gaq.push(['_trackPageview']);
                (function() {
                    var ga = document.createElement('script');
                    ga.type = 'text/javascript';
                    ga.async = true;
                    ga.src = ('https:' == document.location.protocol ? 'https://ssl' : 'http://www') + '.google-analytics.com/ga.js';
                    var s = document.getElementsByTagName('script')[0];
                    s.parentNode.insertBefore(ga, s);
                })();
            </script>
        </body>
        </html>

    cs code:

        protected void Page_Load(object sender, EventArgs e)
        {
            HiddenField1.Value = Label1.Text;
            HiddenField2.Value = Label2.Text;
        }


  • Recovering a lost website with no backup?

    - by Jeff Atwood
    Unfortunately, our hosting provider experienced 100% data loss, so I've lost all content for two hosted blog websites:

    http://blog.stackoverflow.com
    http://www.codinghorror.com

    (Yes, yes, I absolutely should have done complete offsite backups. Unfortunately, all my backups were on the server itself. So save the lecture; you're 100% absolutely right, but that doesn't help me at the moment. Let's stay focused on the question here!)

    I am beginning the slow, painful process of recovering the website from web crawler caches. There are a few automated tools for recovering a website from internet web spider (Yahoo, Bing, Google, etc.) caches, like Warrick, but I had some bad results using this:

    - My IP address was quickly banned from Google for using it
    - I get lots of 500 and 503 errors and "waiting 5 minutes…"
    - Ultimately, I can recover the text content faster by hand

    I've had much better luck by using a list of all blog posts, clicking through to the Google cache and saving each individual file as HTML. While there are a lot of blog posts, there aren't that many, and I figure I deserve some self-flagellation for not having a better backup strategy. Anyway, the important thing is that I've had good luck getting the blog post text this way, and I am definitely able to get the text of the web pages out of the Internet caches. Based on what I've done so far, I am confident I can recover all the lost blog post text and comments. However, the images that go with each blog post are proving…more difficult. Any general tips for recovering website pages from Internet caches, and in particular, places to recover archived images from website pages? (And, again, please, no backup lectures. You're totally, completely, utterly right! But being right isn't solving my immediate problem… Unless you have a time machine…)
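
    For the by-hand approach described above (walking a list of post URLs and saving each Google-cached copy), the save step can be scripted while pacing requests to avoid the bans mentioned. A minimal sketch, assuming the classic webcache.googleusercontent.com cache-URL format; the urls.txt input and recovered/ output directory are hypothetical:

        <?php
        // Fetch Google's cached copy of each URL listed in urls.txt and
        // save it as a local HTML file.
        $urls = file('urls.txt', FILE_IGNORE_NEW_LINES | FILE_SKIP_EMPTY_LINES);
        foreach ($urls as $url) {
            $cacheUrl = 'http://webcache.googleusercontent.com/search?q=cache:' . urlencode($url);
            $html = @file_get_contents($cacheUrl);
            if ($html !== false) {
                $name = preg_replace('/[^a-z0-9]+/i', '-', $url) . '.html';
                file_put_contents('recovered/' . $name, $html);
            }
            sleep(30); // pace requests; aggressive fetching is what got Warrick-style tools banned
        }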


  • Share on Facebook does not show thumbnail images

    - by matt_tm
    I have a PHP application which has a "Share on Facebook" button that:

    - on the development server, shows the thumbnail images correctly and allows the user to select between them
    - on the live server, does NOT show the thumbnail images at all

    The relevant portion of the .htaccess file is:

        # Set up caching on media files for 2 days
        <FilesMatch "\.(gif|jpg|jpeg|png|flv)$">
            ExpiresDefault A172800
            Header append Cache-Control "public"
        </FilesMatch>

    I'm using the exact same set of PHP files and .htaccess, but the server configuration is different. What could be causing this? Note that the text appears fine.

    Edit 1: We are also doing some URL rewriting related to images in the .htaccess (on both servers):

        ...
        RewriteRule ^.*/content/image/(.*)$ content/image/$1 [L]
        ...
        RewriteRule ^.*/images/(.*)$ images/$1 [L]
        ...

    Would that somehow be making a difference? Images appear fine throughout the site. (I posted this question earlier as http://stackoverflow.com/questions/4142597/share-on-facebook-does-not-show-thumbnail-images)
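
    Worth noting: Facebook's scraper prefers Open Graph tags when choosing a thumbnail, so emitting them explicitly takes the guessing (and any rewrite or caching quirks) out of the picture. A minimal sketch; the variable names and image URL are hypothetical:

        <?php
        // Tell Facebook's scraper exactly which title and thumbnail to use,
        // instead of letting it pick images from the page markup.
        $pageTitle = 'Example page title';
        $imageUrl  = 'http://www.example.com/content/image/example.jpg';
        ?>
        <meta property="og:title" content="<?php echo htmlspecialchars($pageTitle); ?>" />
        <meta property="og:image" content="<?php echo htmlspecialchars($imageUrl); ?>" />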


  • Website's particular URL suddenly disappeared from Google search results

    - by Ragavendran Ramesh
    I have a website, and a particular page URL on it was indexed in the first 10 Google search results. Suddenly it disappeared; now that page is not even in the first 100 results. What could be the reason? I have a feeling the page has been spammed by our competitors. Is it possible to avoid that, or to find out whether the page has been spammed? Is it possible to tell whether a particular page on a website is spammed or malicious?


  • CSS tags/media queries for chrome on iPhone [migrated]

    - by Mick79
    So Chrome is here for iOS. Hoorah! However, due to its different screen layout (no footer toolbar), it messes with the ability to make a perfect layout for iPhone web pages. I have a site for my company that fit perfectly inside an iPhone screen, no scrolling required; it looked like an app. Now that Chrome is here (and wildly popular) with its different screen layout, sites that were sized for iPhone Safari look odd. Is there, or will there be, a way to isolate Chrome from Safari and give them different CSS?
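
    CSS media queries cannot distinguish the two browsers, but Chrome for iOS does identify itself with a "CriOS" token in its User-Agent string, so a user-agent check can switch stylesheets. A minimal server-side sketch (the stylesheet filenames are hypothetical, and UA sniffing is brittle by nature):

        <?php
        // Chrome for iOS reports "CriOS" in its User-Agent, while Safari
        // on iOS does not, so the token can select a per-browser stylesheet.
        $ua  = isset($_SERVER['HTTP_USER_AGENT']) ? $_SERVER['HTTP_USER_AGENT'] : '';
        $css = (strpos($ua, 'CriOS') !== false) ? 'iphone-chrome.css' : 'iphone-safari.css';
        echo '<link rel="stylesheet" href="/css/' . $css . '" />';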


  • Disqus-like comment server

    - by wxs
    Hi all, I'm looking at setting up a blog, and I think I want to go the static website compiler route rather than the perhaps more conventional WordPress route. I'm looking at using Blogofile, but could use Jekyll as well. These tools recommend using Disqus to embed a JavaScript comment widget on blog posts. I'd go that route, but I'd rather host the comments myself than use a third party. I could certainly write my own dirt-simple comment server, but I was wondering if anyone knew of one that already exists (of the open-source variety). Thanks!


  • FTP file access problem

    - by Fahad Uddin
    I recently got malware on my website. I have made a backup of the website on my computer and am trying to wipe the FTP space clean. I am trying to delete the root folder, but I get this error message on all of the malicious files:

        Response: 550 Could not delete index.php: Permission denied

    I am the sole administrator of the FTP account, so permissions should not be an issue. My hosting provider does not seem to suffer from this problem, as his websites are running well without any malware. I have also tried changing the root to 777 to see if the permission change would let me delete the files, but I still get the same error. Please help out. Thanks


  • Content theft - Where can I go from here?

    - by Toby
    I am the webmaster of a very successful blog in a fairly small niche. Recently our success has started to bite us, with people copying posts from the site without consent and trying to pass them off as their own work. Most sites stop as soon as you contact them, but there is one in particular, a Blogger site, that persists in passing off our content as its own. Every post we find we report to Google, and they have been fairly good at taking the posts offline within a day or two, but this isn't good enough or a long-term solution. Given the nature of what is being blogged about, after 24 hours a post is pretty much useless, so I need some way to just stop them from taking our content. Any ideas? I don't want to go down the route of using a third party for people to get our RSS feed, but I guess that is one option?
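
    If the copier is lifting posts straight from the RSS feed, one common stopgap is to append an attribution line to every feed item so that even verbatim copies identify the source. A minimal sketch, assuming the blog runs WordPress (an assumption; the question doesn't name the platform):

        <?php
        // Append an attribution footer to every item in the RSS feed, so
        // copies scraped from the feed still point back to the source.
        // 'the_content_feed' / 'the_excerpt_rss' are WordPress's standard
        // feed-content filters; the notice wording is hypothetical.
        function add_feed_attribution($content) {
            $link = get_permalink();
            return $content . '<p>This post originally appeared at <a href="'
                 . esc_url($link) . '">' . esc_html($link) . '</a>.</p>';
        }
        add_filter('the_content_feed', 'add_feed_attribution');
        add_filter('the_excerpt_rss', 'add_feed_attribution');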


  • Magento Bulk Product Import + Modules Nightmare

    - by mike
    We have 5,000 products in a CSV file. The file has been re-saved as a UTF-8 file in Google Docs and exported to CSV from Excel. The file loads perfectly, with all fields, in the demo of Magento Store Manager (except we don't want to buy it :). When trying to upload it in regular Magento, we keep getting error messages about column duplicates. Yes, we have hundreds of duplicates, as the titles of products in the fields correspond to different sizes, etc.; there is no way around it. Any solutions for this, or any open-source software similar to Store Manager that can do the trick? We're ready to give up and go to a paid solution such as Big Commerce. Also, I uploaded a bunch of free modules/keys for open-source bulk product import from magentocommerce, but I can't find them anywhere in the main admin panel to use; there is no menu item for them anywhere.
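
    One pre-processing option is to make the duplicated column's values unique before import, since Magento rejects rows whose unique columns repeat. A minimal sketch, assuming the duplicated values sit in the first column and that suffixing them is acceptable (both assumptions; the filenames are hypothetical):

        <?php
        // Rewrite a CSV so repeated values in one column become unique by
        // appending a counter, e.g. "Shirt" -> "Shirt-2", "Shirt-3".
        $col  = 0; // adjust to the column Magento complains about
        $seen = array();
        $in   = fopen('products.csv', 'r');
        $out  = fopen('products-unique.csv', 'w');
        while (($row = fgetcsv($in)) !== false) {
            $key = $row[$col];
            if (isset($seen[$key])) {
                $row[$col] = $key . '-' . (++$seen[$key]);
            } else {
                $seen[$key] = 1;
            }
            fputcsv($out, $row);
        }
        fclose($in);
        fclose($out);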


  • Which token from a long User-Agent should I use in robots.txt?

    - by Gaia
    The definition of User-Agent states that several tokens can be included, as deemed necessary by the client. I want to block certain bots via robots.txt, and I am confused as to which part of the User-Agent string to use, especially for more obscure bots. For example:

        Mozilla/5.0 (compatible; uMBot-LN/1.0; mailto: [email protected])
        JS-Kit URL Resolver, http://js-kit.com/
        Mozilla/5.0 (compatible; SEOkicks-Robot +http://www.seokicks.de/robot.html

    Do I use the second token? Can tokens contain spaces, or did the SEOkicks folks forget a semicolon after SEOkicks-Robot? I don't actually intend to make my question specific to a couple of bots - I want to know the guideline: which part of the UA do I place in robots.txt for these exotic bots with UAs as long as a haiku?

        User-agent: uMBot-LN/1.0
        Disallow: /

    PS: Thank you, but I do not need to hear that undesirable bots are better blocked with mod_security. I already have commercial mod_sec rules in place.


  • AdBlock user statistics

    - by DataSmarter
    Are there statistics on internet users who use AdBlock or other ad-blocking plug-ins? Is there a statistical breakdown, for example, per country (I assume it must vary a lot)? I was unable to google the information I am looking for. The reason I am asking is that I have just signed up for the "Amazon Partners" program and see that this affiliate program is listed on the AdBlock blacklist.


  • Which kind of sitemap directory should I build for a search-based navigation site

    - by Noam
    I have a search-based navigation website. Each query has filters as well as sort-by. The search results point to end pages inside the site, and each of those pages has many outlinks to other end pages. Currently I have an XML sitemap which directs crawlers to all the end pages. I'm trying to add a silo sitemap directory to improve SEO. Assuming this is the right direction, I have a couple of options:

    - end pages sorted alphabetically
    - pages by major search filters, then divided alphabetically
    - pages for every filter, plus cross options between them and the sort-by

    Which would you recommend, and why?


  • Best tools to build an auction website

    - by Daniel Loureiro
    Can I get your feedback on the best tools to build an auction website with the following features?

    - The site takes a commission (like 5%) on each transaction
    - Each user can assign a rating (like 4.5 stars) to his completed transaction, and comment on the seller's profile
    - Payments are accepted by PayPal and credit card

    I've been looking into Joomla! and JomSocial, but they haven't convinced me much so far. I have some programming experience in C, Python and Java. If no CMS tools are of use, I'd appreciate it if you could tell me the best route to take in programming to get the auction site done.


  • Are copyright notices really required?

    - by Alasdair
    Ever since I made my first web page 13 years ago I have followed the pattern of showing a copyright notice in the footer of each page. Over the years the format of this notice has changed in the following way:

    - Copyright © <NAME> yyyy. All rights are reserved.
    - Copyright © <NAME> yyyy
    - © yyyy <NAME>
    - © <NAME>

    This has generally mirrored the format used by Google. However, I recently noticed that they no longer display a copyright notice on their home page, nor have one in their source code/meta tags. I see they still display it on most (if not all) other pages. I understand that Google is very keen to keep the word count down on its homepage, which could be the reason for this sacrifice, but my question is more general and relates to all websites. Since I've always just done it out of habit, I'm hoping someone can explain if/when a copyright notice is actually required to protect your content and rights. Also, when it is required, is there a format the notice must adhere to in order to be valid?


  • A record DNS and nameserver help

    - by Josip Gòdly Zirdum
    I just installed Kloxo on my VPS and I want to link my domain to that server - which it sort of already is: I connected it via an A record, so the IP is pointing to my server. But how do I make a website using it? I tried adding the domain, but this doesn't work. I feel I'm not explaining this well. On my server, Kloxo asked me to create a DNS template, so I did, and I created the nameservers ns1.mydomain.com and ns2.mydomain.com. Then I added the domain mydomain.com to the panel. I go to the folder it creates for it, but no matter what file is there, it won't work. Any ideas? Is there a way to possibly not add a domain to Kloxo at all and just treat the IP of the server as the domain, since the A record points there anyway? I don't intend to host another website on the server anyway.
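
    For reference, the glue between the pieces described here is just a handful of DNS records. A minimal zone sketch using placeholder names and the documentation IP 203.0.113.10 (all values hypothetical):

        ; Point the bare domain and both nameserver hosts at the VPS.
        mydomain.com.        IN  NS  ns1.mydomain.com.
        mydomain.com.        IN  NS  ns2.mydomain.com.
        mydomain.com.        IN  A   203.0.113.10
        www.mydomain.com.    IN  A   203.0.113.10
        ns1.mydomain.com.    IN  A   203.0.113.10
        ns2.mydomain.com.    IN  A   203.0.113.10

    With those in place, whether pages are actually served for mydomain.com depends on the panel's virtual-host configuration, which is usually where "the folder won't work" problems live.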


  • Google doesn't always show rich snippets when the site uses structured data [duplicate]

    - by Sam Se
    This question is an exact duplicate of: Google Structured Data

    I'm so tired of the Google structured data recipe markup. After some days, it loses the image and the extra information. Then I test it again, and it shows up again. Some days in the future it might go away even though it is still showing in the test tool. What can I do? I tried with RDFa and schema.org microdata.
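
    For comparison, the schema.org microdata variant mentioned is typically marked up along these lines. A minimal sketch with hypothetical recipe values; the properties shown (name, image, author, description) are standard schema.org Recipe properties:

        <div itemscope itemtype="http://schema.org/Recipe">
          <h1 itemprop="name">Example Tomato Soup</h1>
          <img itemprop="image" src="http://www.example.com/images/tomato-soup.jpg" alt="Tomato soup" />
          By <span itemprop="author">Example Author</span>
          <div itemprop="description">A hypothetical recipe used only to show the markup shape.</div>
        </div>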


  • Website content copied - How can I prove that I wrote it?

    - by Remy
    One of our competitors constantly copies all our website content. Now, I assume the trouble is to prove that we wrote the content first and that it is not the other way round. I checked on http://www.archive.org, but there is nothing. Any other way to prove it? FYI: we are a Swiss company, so different laws will apply.

    Solution (found later): you can upload your content to this service and they basically timestamp it: https://myows.com/

    Another way we found is to just print out your copy/design etc., put it into an envelope and send it to yourself (without actually opening it later, of course).
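
    The timestamping services mentioned boil down to fixing a cryptographic fingerprint of the content at a known date. A minimal sketch of the fingerprint half; note that the hash alone proves integrity, not the date - the date still needs an independent witness such as a timestamping service (the filename is hypothetical):

        <?php
        // Compute a SHA-256 fingerprint of a page's content. If this digest
        // is lodged with an independent, dated witness, matching it later
        // shows the content existed unchanged at that date.
        $content = file_get_contents('article.html');
        echo 'SHA-256: ' . hash('sha256', $content) . "\n";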


  • Can third party content on sub-domains harm the main site's search rankings?

    - by dror
    I have a site that is a "portal" or "directory" for service providers. We opened a page on our site for every service provider, but now we get a lot of applications from providers who want sites of their own. We want to make a full site for every service provider, but would rather put them on subdomain URLs. (They don't mind; it's OK for them.) So, my site is www.example.com, and their sites will be provider.example.com. Now I have two questions:

    - Can the content on the provider sites harm my site in SEO?
    - If one of those subdomains is punished by Google because its owner does black-hat SEO, how will that affect the root domain? Can it get the root domain punished?


  • Should a link validator report 302 redirects as broken links?

    - by Kevin Vermeer
    A while ago, sparkfun.com changed their URL structure from

        /commerce/product_info.php?products_id=9266

    to

        /products/9266

    This is nice, right? We don't need to know that it is (or was) a PHP page, and commerce, product_info, and products_id all tell us that we're looking at some products. The latter form seems like a great improvement. However, the change would have broken existing links. So, nicely, they stuck in 302 redirects. Visit http://www.sparkfun.com/commerce/product_info.php?products_id=9266 and your browser will issue

        GET /commerce/product_info.php?products_id=9266 HTTP/1.1

    to which Sparkfun's servers reply

        HTTP/1.1 302 Found
        Location: http://www.sparkfun.com/products/9266

    This 302 redirect is caught by Stack Exchange's link validator as a broken link. It's not broken; it works just fine. Here, try it: http://www.sparkfun.com/commerce/product_info.php?products_id=9266 I understand that a 302 redirect is intended to be a temporary redirect, while a 301 should be used for permanent changes, per RFC 2616. That said, Wikipedia and common practice use 302 as a plain redirect. Who is in error in this situation? Is this an error in Sparkfun's redirect implementation or in Stack Exchange's URL validator?
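
    For what it's worth, the permanent variant the RFC prescribes for a URL change like this is a one-liner at the old address. A minimal PHP sketch (the target URL is taken from the question):

        <?php
        // Issue a permanent (301) redirect from the legacy product URL,
        // telling crawlers and validators the move is permanent rather
        // than temporary (302).
        header('Location: http://www.sparkfun.com/products/9266', true, 301);
        exit;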


  • What could be causing this long waiting time on page load?

    - by Andrew Findlay
    What could be causing a 1.18 s wait time when my page loads? Just to make sure I did not have any conflicting or parallel scripts loading, I completely deleted all the script on my home page and ran the speed test again. Although I had a blank website and a 5 kB file size, there was still a 900 ms "waiting" time. I'm wondering if it could be my server? Any other thoughts or suggestions, as it doesn't seem to be the scripts. EDIT: I just ran a DNS test on Pingdom and here are my results. Does this tell me anything? "No nameservers found at child?"
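
    One way to see where that 900 ms goes is to time the request phases separately - DNS lookup, TCP connect, time-to-first-byte, total. A minimal sketch using PHP's cURL timing info (the URL is a placeholder):

        <?php
        // Break one request into phases; a large gap between connect and
        // starttransfer points at server processing time rather than
        // network or DNS.
        $ch = curl_init('http://www.example.com/');
        curl_setopt($ch, CURLOPT_RETURNTRANSFER, true);
        curl_exec($ch);
        printf("dns: %.3fs  connect: %.3fs  ttfb: %.3fs  total: %.3fs\n",
            curl_getinfo($ch, CURLINFO_NAMELOOKUP_TIME),
            curl_getinfo($ch, CURLINFO_CONNECT_TIME),
            curl_getinfo($ch, CURLINFO_STARTTRANSFER_TIME),
            curl_getinfo($ch, CURLINFO_TOTAL_TIME));
        curl_close($ch);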


  • How do websites with no content rank so high?

    - by Akito
    I was just searching for how to close a SIM, and I got this website on the first page of Google. The website does not have the solution to the problem; there is just a single line written about it, but the SEO (or whatever trick is being played) works so well that it has got so many real comments. The comments are users asking the owner to help close their SIMs. Now I don't get this: if there is no content, then how does the site rank so high? Thank you.


  • How do I avoid spam domains pointing to my site or IP

    - by Amol Ghotankar
    I came across an issue where I saw some xyz.com pointing to mydomain.com. How do I avoid spam domains pointing to my domain? I read some posts about setting up virtual hosts and such, but nothing specific about how to avoid this in the first place. I searched on Google, but most answers are for HTTP servers, and there are no exact answers for Tomcat 7. I am not using Apache or IIS; requests hit Tomcat directly.
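
    The usual Tomcat-side answer is name-based virtual hosting: declare a Host for the real domain and make the defaultHost a catch-all whose appBase holds an empty ROOT webapp, so requests arriving under a foreign Host header never reach the real site. A minimal server.xml sketch under that assumption (the appBase directory names are hypothetical):

        <!-- Requests whose Host header is not mydomain.com fall through
             to the "default" Host and get the empty catch-all app. -->
        <Engine name="Catalina" defaultHost="default">
          <Host name="default" appBase="blank-webapps"
                unpackWARs="true" autoDeploy="false" />
          <Host name="mydomain.com" appBase="webapps"
                unpackWARs="true" autoDeploy="true">
            <Alias>www.mydomain.com</Alias>
          </Host>
        </Engine>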


  • Should my blog be directly on my website?

    - by steve
    I have my newly launched website at www.slicify.com (it redirects to a secure subdomain). I currently have a separate blog on WordPress, slicify.wordpress.com, for a couple of reasons:

    - I don't really want to mix my site code (it's a complex ecommerce site written in ASP.Net) with blog code, for ease of maintenance etc.
    - WordPress is already great at blogs - it seems silly to reinvent the wheel by trying to integrate blog functionality into my site

    However, is keeping my blog on a separate domain going to hurt me in terms of PageRank or traffic? FWIW: while it's early days, I can see from Google Analytics that a good deal of referral traffic is already coming from my WordPress site to my main site, so at least it seems to be drawing potential users in.


  • Strange Google Analytics result when new site launched

    - by Howard
    I have a website which mainly contains a few pages, and we have now launched a revamped site which contains several hundred pages. We are seeing strange Google Analytics results, as follows:

    Before:
    - Traffic sources (all traffic): 674
    - Content (all pages, unique PV): 674

    After:
    - Traffic sources (all traffic): 291
    - Content (all pages, unique PV): 1235

    As you can see, the unique PV count has increased as expected (we have more pages now and the site is better), but why is the traffic-sources number lower, with such a large gap? Any idea?

