Search Results

Search found 42685 results on 1708 pages for 'page speed'.


  • Canonicals with differing content

    - by Jimbo Jonny
    Interesting conundrum here with canonicals. Let's say I have a site with a "verified" system where other websites can become "verified". The URL they send people to for confirming verification is something like "blah.com/verify/company1" or "blah.com/verify/company2". Logically, "blah.com/verify" itself doesn't verify anyone in particular, so it redirects to the signup form for getting verified, at "blah.com/verify/register". As for the companies already registered, I figure it doesn't make sense to index every individual URL when the only difference is which company name it's saying yay or nay to, so canonicals could come in handy on those pages to condense the indexing. Yet making "blah.com/verify" the canonical "hub" doesn't work well, because it's a signup form, not a verification page, so it technically has quite different content from the verification pages themselves. At the same time, it's a bit unfair to pick one company's page as the "hub" and point all the canonical benefit to it, yet a bit wasteful to have Google index every individual verification page and spread out all that link juice. Basically, I'm just looking for advice: what's best here from a search engine standpoint?
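    For reference, the canonical hint itself is just a link element in the head of each verification page; a minimal sketch (the href here is only a placeholder, not a recommendation for which URL to pick as the hub):

        <!-- in the <head> of blah.com/verify/company1, blah.com/verify/company2, ... -->
        <link rel="canonical" href="http://blah.com/verify/chosen-hub-page" />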

    Read the article

  • Preventing Users From Copying Text From and Pasting It Into TextBoxes

    Many websites that support user accounts require users to enter an email address as part of the registration process. This email address is then used as the primary communication channel with the user. For instance, if the user forgets her password, a new one can be generated and emailed to the address on file. But what if, when registering, a user enters an incorrect email address? Perhaps the user meant to enter [email protected], but accidentally transposed the first two letters, entering [email protected]. How can such typos be prevented? The only foolproof way to ensure that the user's entered email address is valid is to send them a validation email upon registering that includes a link that, when visited, activates their account. (This technique is discussed in detail in Examining ASP.NET's Membership, Roles, and Profile - Part 11.) The downside to using a validation email is that it adds one more step to the registration process, which will cause some people to bail out. A simpler approach to lessening email entry errors is to have the user enter their email address twice, just as most registration forms prompt users to enter their password twice. In fact, you may have seen registration pages that do just this. However, when I encounter such a registration page I usually avoid entering the email address twice; instead, I enter it once and then copy and paste it from the first textbox into the second. This behavior circumvents the purpose of the two textboxes - any typo entered into the first textbox will be copied into the second. Using a bit of JavaScript it is possible to prevent most users from copying text from one textbox and pasting it into another, thereby requiring the user to type their email address into both textboxes. This article shows how to disable cut and paste between textboxes on a web page using the free jQuery library. Read on to learn more!
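    The article's own code isn't reproduced in this excerpt; a rough sketch of the technique, assuming jQuery is loaded and the confirmation field has a hypothetical id of ConfirmEmail, looks like this:

        $(document).ready(function () {
            // Cancel cut, paste and drag-and-drop in the confirmation textbox so the
            // user has to retype the address; the id below is illustrative only.
            $("#ConfirmEmail").bind("cut paste drop", function (e) {
                e.preventDefault();
            });
        });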

    Read the article

  • Chrome "refusing to execute script"

    - by TestSubject528491
    In the head of my HTML page, I have: <script src="https://raw.github.com/cloudhead/less.js/master/dist/less-1.3.3.js"></script> When I load the page in my browser (Google Chrome v27.0.1453.116) and enable the developer tools, it says: "Refused to execute script from 'https://raw.github.com/cloudhead/less.js/master/dist/less-1.3.3.js' because its MIME type ('text/plain') is not executable, and strict MIME type checking is enabled." Indeed, the script won't run. Why does Chrome think this is a plain-text file? It clearly has a .js file extension. Since I'm using HTML5, I omitted the type attribute, so I thought that might be causing the problem. So I added type="text/javascript" to the <script> tag, and got the same result. I even tried type="application/javascript" and still got the same error. Then I tried changing it to type="text/plain" just out of curiosity. The browser did not return an error, but of course the JavaScript did not run either. Finally, I thought the periods in the filename might be throwing the browser off, so in my HTML I changed all the periods to the URL escape character %2E: <script src="https://raw.github.com/cloudhead/less%2Ejs/master/dist/less-1%2E3%2E3.js"></script> This still did not work. The only thing that truly works (i.e. the browser does not give an error and the JS successfully runs) is if I download the file, upload it to a local directory, and then change the src value to that local file. I'd rather not do this, since I'm trying to save space on my own website. How do I get Chrome to recognize that the linked file is actually JavaScript?
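    For what it's worth, the file extension doesn't matter to Chrome; it goes by the Content-Type header the server sends with the response. A quick way to inspect that header from the command line (URL taken from the question):

        curl -I https://raw.github.com/cloudhead/less.js/master/dist/less-1.3.3.js
        # Look at the Content-Type line in the response: it reports text/plain,
        # which is exactly what Chrome's error message is complaining about.

    Serving the file from a host that sends a JavaScript MIME type (or from your own server) is what avoids the strict MIME check.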

    Read the article

  • Send Multiple InMemory Attachments Using FileUpload Controls

    - by bullpit
    I wanted to give users the ability to send multiple attachments from the web application. I did not want anything fancy, just a few FileUpload controls on the page and then send the email. So I dropped five FileUpload controls on the web page and created a function to send email with multiple attachments. Here’s the code (it requires the System.Net.Mail, System.IO, and System.Web namespaces):

        public static void SendMail(string fromAddress, string toAddress, string subject, string body, HttpFileCollection fileCollection)
        {
            // CREATE THE MailMessage OBJECT
            MailMessage mail = new MailMessage();

            // SET ADDRESSES
            mail.From = new MailAddress(fromAddress);
            mail.To.Add(toAddress);

            // SET CONTENT
            mail.Subject = subject;
            mail.Body = body;
            mail.IsBodyHtml = false;

            // ATTACH FILES FROM HttpFileCollection
            for (int i = 0; i < fileCollection.Count; i++)
            {
                HttpPostedFile file = fileCollection[i];
                if (file.ContentLength > 0)
                {
                    Attachment attachment = new Attachment(file.InputStream, Path.GetFileName(file.FileName));
                    mail.Attachments.Add(attachment);
                }
            }

            // SEND MESSAGE
            SmtpClient smtp = new SmtpClient("127.0.0.1");
            smtp.Send(mail);
        }

    And here’s how you call the method:

        protected void uxSendMail_Click(object sender, EventArgs e)
        {
            HttpFileCollection fileCollection = Request.Files;
            string fromAddress = "[email protected]";
            string toAddress = "[email protected]";
            string subject = "Multiple Mail Attachment Test";
            string body = "Mail Attachments Included";
            HelperClass.SendMail(fromAddress, toAddress, subject, body, fileCollection);
        }

    Read the article

  • Fix 403 errors in Google Webmaster Tools

    - by Justin
    Hi team, I have a domain that has "fallen off a cliff" for searches in Google. Searches that used to rank in positions 1-4 are now gone from page 1. The same searches in Bing show the expected positions (top 5 results). In reviewing Google Webmaster Tools, I am seeing two problems: 1. The sitemap is reporting two errors: "General HTTP error: HTTP 403 error (Forbidden)" and "URLs not accessible". However, the URL they list as not accessible is accessible; I can click the link Google provides and it works fine. 2. There are 6,000 crawl errors of type 403. Again, most of the pages reporting 403 are accessible in my browser (I tried various browsers as well). About half are from January, the other half from November. There are no IP-specific firewall rules on ports 80 and 443 that could block the Googlebot. Using the User Agent Switcher add-on for Firefox, I confirmed that the pages load when the user agent is set to Googlebot, so I can confirm that most of the pages reported as 403 are accessible. A search for just "site:thedomain.com" confirms there are over 9,000 pages in the index, but most searches don't return the site. I believe the 403 issues are the cause of the fall in search rankings, but I can't find any information online about how to address this. Any ideas? jpe
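    A command-line equivalent of the user-agent test, for anyone reproducing this (the path is a placeholder for one of the pages flagged in Webmaster Tools):

        curl -I -A "Mozilla/5.0 (compatible; Googlebot/2.1; +http://www.google.com/bot.html)" \
             http://thedomain.com/some-flagged-page
        # A 403 here, when a normal browser user agent gets a 200 for the same URL,
        # points to user-agent or IP based blocking rather than a real permissions problem.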

    Read the article

  • Kubuntu 12.04 - DNS Issues

    - by AndrewJesaitis
    Starting yesterday (6/11/12), I've been having many network problems. When requesting a page in Chrome, the page hangs on "Sending request" and then eventually times out. I'm within a VPN that has its own DNS server. I've tried to manually set my DNS through the Network Manager applet and by editing /etc/network/interfaces. Having no luck, I unlinked the resolv.conf file and dumped the contents of my old resolv.conf into it. Again having no luck, I deactivated the dnsmasq server in /etc/NetworkManager/NetworkManager.conf by commenting out the dns=dnsmasq line:

        $ cat NetworkManager.conf
        [main]
        plugins=ifupdown,keyfile
        #dns=dnsmasq
        no-auto-default=D0:67:E5:EA:B6:6B,

        [ifupdown]
        managed=false

        $ nm-tool

        NetworkManager Tool

        State: connected (global)

        - Device: eth0 [Wired connection 1] -------------------------------------------
          Type:              Wired
          Driver:            tg3
          State:             connected
          Default:           yes
          HW Address:        D0:67:E5:EA:B6:6B

          Capabilities:
            Carrier Detect:  yes
            Speed:           1000 Mb/s

          Wired Properties
            Carrier:         on

          IPv4 Settings:
            Address:         192.168.254.122
            Prefix:          24 (255.255.255.0)
            Gateway:         192.168.254.2
            DNS:             192.168.254.1

    What is strange is that the network will work fine for a few minutes and then start to time out; a few minutes later it will work again. I'm unable to reach internal or external sites while it is timing out. When I dig local sites, I receive no answer; I do receive an answer for google.com. At this point I would usually blame the DNS server, especially since things work when I change to Google's DNS server. But I need to use our internal DNS to reach our internal sites. Nobody else is having issues, and they are all using DHCP; that group includes one user who is on 11.04. At this point I'm at a loss for what to do, so any help would be appreciated.
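    For what it's worth, a minimal sketch of pinning the interface and DNS statically in /etc/network/interfaces (addresses taken from the nm-tool output above; the dns-nameservers line assumes the resolvconf package is in use, which it normally is on 12.04):

        auto eth0
        iface eth0 inet static
            address 192.168.254.122
            netmask 255.255.255.0
            gateway 192.168.254.2
            dns-nameservers 192.168.254.1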

    Read the article

  • Virtually the fastest way to try Solaris 11 (and Solaris 10 zones)

    - by dminer
    If you're looking to try out Solaris 11, there are the standard ISO and USB image downloads on the main page. Those are great if you're looking to install Solaris 11 on hardware, and we hope you will. But if you take the time to look down the page, you'll find a link off to the Oracle Solaris 11 Virtual Machine downloads. There are two downloads there: a pre-built Solaris 10 zone, and a pre-built Solaris 11 VM for use with VirtualBox. If you're looking to try Solaris 11 on x86, the second one is what you want. Of course, this assumes you have VirtualBox already (and if you don't, now's the time to try it; it's a terrific free desktop virtualization product). Once you complete the 1.8 GB download, it's a simple matter of unzipping the archive and a few quick clicks in VirtualBox to get a Solaris 11 desktop booted. While it's booting, you'll get to run through the new system configuration tool (that'll be the subject of a future posting here) to configure networking, a user account, and so on. So what about that pre-built Solaris 10 zone download? It's a really simple way to get acquainted with the Solaris 10 zones feature, which you may well find indispensable in transitioning an existing Solaris 10 infrastructure to Solaris 11. Once you've downloaded the file, it's a self-extracting executable that configures the zone for you; all you have to supply is an IP address for the zone. It's really quite slick! I expect we'll do a lot more pre-built VMs and zones going forward, as that's a big part of being a cloud OS; if there's one that would be really useful for you, let us know.

    Read the article

  • Quick Question, robots.txt Disallow: /*/ does what exactly?

    - by Exit
    An SEO firm suggested changing the robots.txt to:

        User-agent: *
        Disallow: /*/
        Allow: /ims/

    I'm not sure what that would do, but my guess is that it would tell all robots to index nothing but the ims folder. I understand the wildcard, but I'm confused by the slashes and don't know how they play out in conjunction with the wildcard.

    Update: I didn't mention that there is a sitemap listed in the robots.txt file, but according to one tech blogger, sitemaps trump robots exclusions. So even though Google Webmaster Tools says that everything with a trailing slash will not be indexed, the sitemap contains the important links. I did notice that the link count on Google went from 360 to 336, and the sitemap links under the URL scaled back from 6 to 3. I'm not sure of the cause or which links were removed, though; perhaps it cleaned out garbage. I'm still clueless as to why they would add 'Allow: /ims/'; that seems pointless. And a quick list of what would be indexed according to the robots rules above (without the sitemap) using /*/:

        domain.com                     Indexed
        domain.com/page.html           Indexed
        domain.com/folder/             Not indexed
        domain.com/folder/page.html    Not indexed

    Read the article

  • How do you set up the directory structure for a multilingual site without duplicating content?

    - by Ricardo
    I want to make a website in two languages. I've looked around and settled on the directory option of separating the two languages. How do I make it work? Let's say I have the following three files for the landing home page, the English page, and the Spanish page:

        http://www.domain.com/index.html
        http://www.domain.com/en/index.html
        http://www.domain.com/es/index.html

    Let's also say that /index.html will be in English, with a link to /es/index.html. In turn, /es/index.html will have a link to the English version. Would this link point back to /index.html or to /en/index.html? And how do I get both English versions (the one at the root and the one in the directory) to actually be the same file in the same directory? I'm new to this, so I'm not using any scripts yet. To me, the obvious solution is to duplicate both English versions and have the one at the root point to files under the /en/ directory, but I'm not a fan of duplication and I've learned that search engines really frown upon it. Can anyone point me in the right direction?
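    One way to avoid keeping two English copies on an Apache host is an internal rewrite, so the root URL quietly serves the /en/ file. A minimal .htaccess sketch, assuming Apache with mod_rewrite enabled (paths taken from the question):

        RewriteEngine On
        # Requests for / (or /index.html) are served the English page, which
        # then only needs to exist once, under /en/.
        RewriteRule ^(index\.html)?$ /en/index.html [L]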

    Read the article

  • Why do people crawl sites without downloading pictures?

    - by Michael
    Let me show you what I mean:

        IP                Pages   Hits   Bandwidth
        85.xx.xx.xxx      236     236    735.00 KB
        195.xx.xxx.xx     164     164    533.74 KB
        95.xxx.xxx.xxx    90      90     293.47 KB

    It's very clear that these people are crawling my site with bots. There's no way you could visit my site that many times and use less than 1 MB of bandwidth. You might say there's the possibility that they are browsing the site using some browser or plug-in that does not download images, JS/CSS files, etc., but the simple fact of the matter is that there are not 90-236 pages linked from the home page (outside of WP files), even if you visited every page twice. I could understand if these people were crawling the site for pictures, but once again, the bandwidth indicates that this isn't what is happening. Why, then, would they crawl the site simply to view the HTML/txt/JS/etc. files? The only thing I can come up with is that they are scanning for outdated versions of WordPress, SQL injection vulnerabilities, etc., which makes me inclined to outright ban the IPs. But I'm curious: is it possible that this is a legitimate user, or at the very least, someone not intending to be harmful?

    Read the article

  • EPM Planning 11.1.2 - MassGridStatistics

    - by Keith Rosenthal
    A utility is available for Oracle Hyperion Planning that determines web form load times. This utility, MassGridStatistics, opens all web forms within the Planning application. After the forms are opened, an HTML page appears showing the form options, suppression settings, the number of row, column, and page members, and the load times. Any form with a load time longer than one second could potentially have scalability issues in a multi-user environment and should be considered for re-design. Adding suppression (especially block suppression) and reducing the number of rows and columns are potential fixes that will reduce load times. The MassGridStatistics utility is located in a .7z file called MassGridStatistics.7z. Extract the file using 7-Zip. A readme file is provided listing the installation instructions and the steps to run the utility. MassGridStatistics is included with the 11.1.2.1.101 patch set and will also be in all future releases starting with 11.1.2.2. For earlier Planning releases, an SR will be necessary to have Support provide the utility.

    Read the article

  • What are the boundaries of the product owner in scrum?

    - by Saeed Neamati
    In another question, I asked why I feel scrum turns active developers into passive developers, and it seems that the overall problem is not with scrum itself but rather with a bad implementation of scrum. So, here I have some questions about the scope of the responsibilities of the PO (product owner) and the limits he/she shouldn't overstep. Should the PO interfere in UI design when there are designers on the scrum team? (An example that has happened to us: replacing checkboxes with a drop-down list with two items, namely yes and no; or making some boxes larger, or left-aligning some content instead of centering it on the page, and so on.) If so, to what extent? Colors? Layout? Should the PO interfere in the design and architecture of the code? This hasn't happened to us yet, but I'm really curious about the boundaries. For example, does the PO have the right to change the platform (moving from ASP.NET MVC to PHP, or something like that), or to choose the number of servers (tier architecture), etc.? Should the PO interfere in validation mechanisms? For example, "this field should be required", or "we don't need to get this piece of information from the user". Sometimes analysts and designers confirm that something can be handled behind the scenes, like extracting the user profile info from another source instead of asking for it in the UI. How granular could/should the PO get in the analysis and design? For example, a user story might be: "As a customer, I'd like to be able to buy new domains online". However, the scrum team can implement this user story as a five-step wizard or as a single page. At what level should the PO monitor, govern, or supervise the technical analysis, design, and implementation? I'm asking these questions to judge whether our implementation is right or wrong.

    Read the article

  • Help with "advanced" shell scripting | how to create an image preview of a pdf

    - by lucapozzobon
    First of all, sorry for my English: I'm not a native speaker. Here is my problem. I've got a folder named pdf with lots of PDF files inside it, and another folder named thumbnail, which is empty. I want to create a JPG preview image of each PDF to use in my HTML pages as a preview of the PDF. To do this I'm using ImageMagick. I tried putting the code inside my PHP files, but it doesn't work. As background, I have created a small search engine with Apache and MySQL to search PDFs locally (offline), and now I want to add a preview of the first page of each PDF. It does work from the bash command line, where the command is: convert pdf/name_of_the_file_pdf.pdf[0] name_of_the_imagefile.jpg (the zero means the image is taken from the first page of the PDF). How can I write a script that takes the name of each PDF file and feeds it into that command? To list all the files I did ls > pdf, but with the little knowledge I have I can't get any further. Some PDF names contain spaces; is that a problem? There are too many PDF files to type every name by hand, and that wouldn't be a nice and clever way to work anyway. Thanks a lot in advance!
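    A minimal sketch of such a loop (folder names taken from the question; the quoting is what keeps file names with spaces safe):

        #!/bin/bash
        mkdir -p thumbnail
        for f in pdf/*.pdf; do
            # [0] selects the first page; basename strips the path and the .pdf suffix
            convert "${f}[0]" "thumbnail/$(basename "$f" .pdf).jpg"
        done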

    Read the article

  • How to expose game data in the game without a singleton?

    - by zardon
    I'm quite new to cocos2d and games programming, and am currently writing a game that is in the prototype stage. Everything is going okay, but I've realized a potentially big problem and I am not sure how to solve it. I am using a singleton to store a bunch of arrays for everything: a global list of planets, a global list of troops, a global list of products, etc. And only now I'm realizing that all of this will be in memory, and this is the wrong way to do it. I am not storing files or anything on disk just yet, with the exception of a save/load state, which is a capture of everything. My game makes use of a map which allows you to select a planet; it will then give you a breakdown of that planet's troops and resources. Let's use this scenario: my game has 20 planets, on each of which you can have 20 troops. Straight away that's an array of 400! That doesn't include the NPCs, which are another 10 per planet, so 20 x 10 = 200. So now we have 600 entries, all in arrays inside a singleton. This is obviously very bad, and very wrong, especially as the amount of data in the game scales. But I need to expose pretty much everything, especially on the map page, and I am not sure how else to do it. I've been told that I can use a controller for the map page which has the information I need for each planet, and other controllers for other items I require global display for. I've also thought about storing each planet's data in a save file using initWithCoder; however, that could mean a boatload of files on the user's device. I really don't want to use a database, mainly because I would need to translate NSObjects and non-NSObjects like CGRects and CGPoints and colors into/from SQL. I am open to other ideas on how to store and read game data that avoid using a singleton to store everything, everywhere. Thanks for your time.

    Read the article

  • Displaying Exceptions Thrown or Caught in Managed Beans

    - by Frank Nimphius
    Just came across a sample written by Steve Muench which, somewhere deep in its implementation details, uses the following code to route exceptions to the ADF binding layer so they are handled by the ADF model error handler (which can be customized by overriding the DCErrorHandlerImpl class and configuring the custom class in the DataBindings.cpx file). To route an exception to the ADFm error handler, Steve used:

        ((DCBindingContainer)BindingContext.getCurrent().getCurrentBindingsEntry()).reportException(ex);

    The same code can be used in managed beans as well to enforce consistent error handling in ADF. As an example, let's assume a managed bean method hits an exception. To simulate this, let's use the following code:

        public void onToolBarButtonAction(ActionEvent actionEvent) {
            throw new JboException("Just to tease you !!!!!");
        }

    The exception shows at runtime as displayed in the following image. Assuming a try-catch block is used to intercept the exception caused by a managed bean action, you can route the error message display to the ADF model error handler. Again, let's simulate the code that would go into the try-catch block:

        public void onToolBarButtonAction(ActionEvent actionEvent) {
            JboException ex = new JboException("Just to tease you !!!!!");
            BindingContext bctx = BindingContext.getCurrent();
            ((DCBindingContainer)bctx.getCurrentBindingsEntry()).reportException(ex);
        }

    The error now displays as shown in the image below. As you can see, the error is now handled by the ADFm error handler, which - as mentioned before - could be a custom error handler. Using the ADF model error handling to display exceptions thrown in managed beans requires the current ADF Faces page to have an associated PageDef file (which is the case if the page or view contains ADF-bound components). Note that to invoke methods exposed on the business service, it is recommended to always work through the binding layer (method binding) so that in case of an error the ADF model error handler is used automatically.

    Read the article

  • Issue tracking multiple domains with Google Analytics

    - by user359650
    I have 2 domains, mydomain.com and mydomain.net, which I'm trying to track with the same GA code. Here are the options I turned on:

        Subdomains of mydomain: ON
            Examples: www.mydomain.com, apps.mydomain.com, store.mydomain.com
        Multiple top-level domains of mydomain: ON
            Examples: mydomain.uk, mydomain.cn, mydomain.fr

    Which gave me the following code:

        _gaq.push(['_setAccount', 'UA-123456789-1']);
        _gaq.push(['_setDomainName', 'mydomain.com']);
        _gaq.push(['_setAllowLinker', true]);
        _gaq.push(['_trackPageview']);

    In this help page I read that _setDomainName must be changed for each domain, which I did: if you go to mydomain.net you get _gaq.push(['_setDomainName', 'mydomain.net']); and if you go to mydomain.com you get _gaq.push(['_setDomainName', 'mydomain.com']);. When I generate traffic on both mydomain.com and mydomain.net and watch the GA push requests with Firebug, I can see requests generated for both domains, and the parameter called utmhn has the proper domain value (which matches that of _setDomainName and the browser address bar). However, when I monitor the real-time statistics under Home->Real-Time->Overview, I see pageviews for mydomain.net BUT NOT for mydomain.com :( What am I missing to properly track both domains? PS: in the help page I mentioned, they talk about setting up cross-domain links, which I didn't do for now, as my understanding is that it shouldn't be needed for what I'm trying to do. Also, I want to mention that I do not have any tracking code for either of these 2 domains other than the one I mentioned.

    Read the article

  • I have done everything correctly on my ASP.NET website, re: SEO; why aren't Google backlinks showing?

    - by Jason Weber
    I recently implemented many SEO techniques for a company on their ASP.NET website; in 6 months, we jumped from a PR1 to a PR3. But I'm having issues with Google backlinks. Here are some of the things I've done: Not only did I set up their own Google+ page 6 months ago, I update it pretty much daily with links, pictures, etc., and I blog about it on my own personal Google+ page and post links, etc. They have their own Twitter, Facebook, and YouTube accounts, and all are updated almost daily. I listed the site in as many quality, relevant directories as possible 6 months ago, avoiding link farms. The site is solid SEO-wise: key-phrase-rich URLs, schema.org and rich snippets, no duplicate content, www/non-www 301s, trailing slashes, etc. - all taken care of. Probably a ton of other things, but basically, the site is all set, SEO-wise. Here's what's confounding: when I do a link:www.example.com in Bing/Yahoo, it shows many backlinks. When I do link:www.example.com in Google, it shows 0 links. And when I use a site ranker like Web Site Rank Tool, it shows 0 backlinks from Google. Any suggestions would be appreciated!

    Read the article

  • Brother MFC-J470DW scan function "Check Connection"

    - by user292599
    I have a Brother MFC-J470DW printer connected to a Linux desktop (running Ubuntu 14.04) over a wireless router network. The printer works fine for printing and copying, but now I want to add the scan function. To set it up, I went to the Brother web page for this printer: http://support.brother.com/g/b/downloadlist.aspx?c=eu_ot&lang=en&prod=mfcj470dw_us_eu_as&os=128 and under Scanner Drivers selected "Scanner driver 64bit (deb package)", "Scan-key-tool 64bit (deb package)", and "Scanner Setting file (deb package)". For each package, I accepted the EULA and selected "open with Ubuntu Software Center". After the USC window popped up, I clicked Install and the red progress line went from left to right. In each case, the USC window then showed a green checkmark and the Install button changed to Reinstall (that's how you know it worked). So now I try it out: hitting the Scan button on the printer, selecting "Scan to file", and hitting OK produces the message "Check Connection". I checked the Brother Linux Information FAQ (scanner) page, and the 14th question seems to be the same as mine: "When I try to use the scan key on my network connected machine, I receive the error "Check connection" or I can not select anything except "scan to FTP"." I explored the solution given for this FAQ, but found from ifconfig that I am already using eth0, the default setting, so presumably that is not the problem. I also found brscan-skey installed in /usr/bin and ran:

        drrm@drrmlinux2:~$ brscan-skey -t
        drrm@drrmlinux2:~$ brscan-skey

    but that didn't help - I still get the "Check connection" message. What can you suggest to fix this problem?

    Read the article

  • Google+ Platform Office Hours for April 4th 2012: Open Q&A

    We hold weekly Google+ Platform Office Hours using Hangouts On Air most Wednesdays from 11:30am until 12:15pm PST. This week we opened the session up to your questions about the Google+ platform. Here's a list of the topics we addressed:

        1:40  - HTTPS and hangout apps
        4:48  - The Google+ badge on Blogger
        6:51  - Warnings logged to the console by the +1 button
        7:57  - +1 button count discrepancies between the button, Google Analytics and Google Webmaster Tools
        11:04 - Using Google+ to identify users on an external website (our starter projects include this functionality; you can find them here: developers.google.com)
        14:12 - When will the feature I want be released?
        16:05 - Redirecting your domain to your Google+ Page (Jenny mentions a blog entry about redirecting to your Google+ profile: goo.gl)
        17:30 - Pulling public Google+ activity from your Google+ Page into your website (the starter projects also demonstrate this functionality: developers.google.com)
        19:43 - Integrating the Google+ badge with Google Analytics tracking (correction: Jenny mentioned callbacks in error; the +1 button provides callbacks but the badge does not at this time - sorry about that)

    Discuss this video on Google+: goo.gl
    Learn more about our Office Hours: developers.google.com
    From: GoogleDevelopers | Views: 114 | 5 ratings | Time: 21:28 | More in Science & Technology

    Read the article

  • Creating sitemap for Googlebot - how to mark dynamic content / dynamic subpages?

    - by ojek
    I have a website that is an Internet forum. This forum has many categories, and each category page contains a lot of subpages with listed threads. The forum is brand new, and about a week ago I filled it with a few hundred thousand threads. I then looked at my Google Webmaster Tools page to see any changes in indexing, but the index only went up from 300 to about 1200, which means it did not index my added threads (although it added something). The following is what my sitemap.xml contains (which I submitted to them). Of course there is a lot more of it; this is just the snippet for a single category. In my real sitemap file I have all the categories listed like this:

        <url>
            <loc>http://mysite.com/Forums/Physics</loc>
            <changefreq>hourly</changefreq>
        </url>

    Now, I would expect Googlebot to go to mysite.com/Forums/Physics, crawl through all the subpages with thread links, and then crawl inside each thread and index its content. How can I achieve this? Also, if this is unclear, I can add a real link to my website.
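    For comparison, a sitemap can also list the deep pages explicitly instead of relying on the crawler to discover them through the category page; a sketch of what such an entry might look like (the thread URL here is purely hypothetical):

        <url>
            <loc>http://mysite.com/Forums/Physics/Threads/12345</loc>
            <changefreq>daily</changefreq>
        </url>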

    Read the article

  • Securing credentials passed to web service

    - by Greg Smith
    I'm attempting to design a single sign-on system for use in a distributed architecture. Specifically, I must provide a way for a client website (that is, a website on a different domain/server/network) to allow users to register accounts on my central system. So, when the user takes an action on a client website, and that action is deemed to require an account, the client will produce a page (on their site/domain) where the user can register for a new account by providing an email and password. The client must then send this information to a web service, which will register the account and return some session-token type value. The client will need to hash the password before sending it across the wire, and the web service will require HTTPS, but this doesn't feel safe enough, and I need some advice on how I can implement this in the most secure way possible. A few other bits of relevant information: Ideally we'd prefer not to share any code with the client. We've considered just redirecting the user to a secure page on the same server as the web service, but this is likely to be rejected for non-technical reasons. We almost certainly need to salt the password before hashing and passing it over, but that requires the client to either a) generate the salt and communicate it to us, or b) come and ask us for the salt - both feel dirty. Any help or advice is most appreciated.

    Read the article

  • Shared to Dedicated or Amazon CloudFront to improve performances and keep secured?

    - by user978548
    I have a Wordpress site whose home page currently takes about 1.8s to 2.5s to load completely in my country. The page weight is about 700 KB (static content included). In order to increase performance, I'm considering two solutions: switching to a dedicated host, or using Amazon S3/CloudFront to serve static content. My current shared host has servers in a neighboring country but not exactly in mine, and both Amazon and the dedicated host do, so that's already an advantage. Considering all that, I still have three questions: With currently low traffic (100 unique visitors/day, but growing), will a dedicated server make a huge difference compared to my shared hosting? Given that I already use a cookie-less domain to deliver static content (though it redirects to the same server), would using Amazon S3 make a real difference? Regarding the cons of dedicated vs. Amazon S3: if I choose something like Ubuntu Server for the dedicated machine, do daily package updates, and keep only port 80 open, would that be sufficient in terms of security (compared with my current shared hosting, which manages everything for me)?

    Read the article

  • Trying to make ZeroRadiant work on Lubuntu 11.10

    - by maniat1k
    I'm trying to get ZeroRadiant working on Ubuntu; it's for creating maps for Urban Terror, and I know it works on Windows and Mac. I found this HOWTO but could not make it work. I get stuck when I do:

        ~/ZeroRadiant-src$ scons target=radiant,q3map2 config=release

    At the end it shows me this:

        collect2: ld returned 1 exit status
        scons: *** [build/release/radiant/radiant.bin] Error 1
        scons: building terminated because of errors.

    There's a note on the HOWTO that says: "If you get other errors, you may try asking for help in the GtkRadiant IRC channel, which is listed on the main ZeroRadiant page." The page isn't there and I couldn't find that IRC channel. EDIT: thanks to @jokerdino in the chat, who showed me this:

    "If you have the proprietary NVIDIA driver installed and you get the following error when executing the build target above:

        /usr/bin/ld: cannot find -lGL
        collect2: ld returned 1 exit status
        scons: *** [build/release/radiant/radiant.bin] Error 1
        scons: building terminated because of errors.

    then you should install one of the nvidia-glx-dev* packages as mentioned above in Step A, then try to execute the main build target to compile GtkRadiant again."

    I don't have NVIDIA; I have an ATI AMD Radeon HD 6320, and it looks like it works:

        GL_VERSION: 4.1.11251 Compatibility Profile Context
        GL_VENDOR: ATI Technologies Inc.
        GL_RENDERER: AMD Radeon HD 6320 Graphics

    I think I do have video issues, but how do I detect this? How can I continue?

    Read the article

  • Update Your NetBeans Plugin's "Supported NetBeans Versions" In The Next Two Weeks!

    - by Geertjan
    For each NetBeans plugin uploaded to the NetBeans Plugin Portal, the registration page starts like this: Note how the "Supported NetBeans Versions" field is empty, i.e., no checkbox is checked, for the plugin above. As you can also see, there is a red asterisk next to this field, which means it is mandatory. It is mandatory in the latest version of the NetBeans Plugin Portal, while it wasn't mandatory before, which is why several plugins were registered without their supported version being set. Therefore, since the version is now mandatory, anyone who doesn't want their plugin to be hidden for the rest of this year, and removed on 1 January 2013 if no one complains about its absence, needs to go to their plugin's registration page and set a NetBeans Version. E-mails have already been sent to the developers of unversioned plugins over the last few weeks. Currently there are 91 plugins that still need to have their NetBeans Version set. Probably at least a third of those are my own plugins, so this is as much a reminder to myself as to anyone else! Whether or not you have received an e-mail asking you to set a NetBeans Version for your plugins, please take a quick look anyway; maybe this is a good opportunity to update other information relating to your plugin as well. You (and I) have two weeks: on Monday 16 April, any NetBeans plugin in the Plugin Portal without a NetBeans Version will be hidden, and then removed at the start of next year if no one complains.

    Read the article

  • What is your most preferred method of site pagination?

    - by John Smith
    There seem to be quite a few implementations of this feature. Some sites like Stackexchange have it laid out like this:

        [1][2][3][4][5] ... [954][Next]

    Other sites like game forums may have something like this:

        [1][2][3] ... [10] ... [50] ... [500] ... [954][Next]

    Some sites like webcomics (XKCD comes to mind) have it laid out like this:

        [Last][Prev][Random][Next][First]

    Reddit has a very simple pagination with only:

        [Prev][Next]

    Sites like Stackexchange and Google also allow you to change how many results you want per page. Personally, I have never used this feature. Is it even worth including, or does it just further confuse the design with needless features? I have only ever seen the need for the webcomic style (without the random). If I need to go to a specific page (which is very, very rare) then I can just edit the address bar. Is it good design to make something more complex for rare occasions where it might save the user some time? And is having to edit the address bar to navigate the site effectively in some circumstances bad design?

    Read the article
