Search Results

Search found 34447 results on 1378 pages for 'google search appliance'.


  • How to return a proper 404 for Google while providing user-friendly content to the user?

    - by Marek
    I am bouncing between posting this here and on Superuser. Please excuse me if you feel this does not belong here. I am observing the behavior described here - Googlebot is requesting random URLs on my site, like aecgeqfx.html or sutwjemebk.html. I am sure that I am not linking these URLs from anywhere on my site. I suspect this may be Google probing how we handle non-existent content - to cite from an answer to the linked question: [Google is requesting random URLs to] see if your site correctly handles non-existent files (by returning a 404 response header). We have a custom page for non-existent content - a styled page saying "Content not found, if you believe you got here by error, please contact us", with a few internal links, served (naturally) with a 200 OK. The URL is served directly (no redirection to a single URL). I am afraid this may count against the site at Google - they may not interpret the user-friendly page as a 404 - not found, and may think we are trying to fake something and provide duplicate content. How should I proceed to ensure that Google will not think the site is bogus, while still providing a user-friendly message to users in case they click on dead links by accident?
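    The usual fix is to keep the friendly page but send it with a 404 status code instead of 200, so visitors still see helpful content while crawlers record "not found". A minimal sketch of that idea, assuming a Flask-style error handler (the framework and template name are my own illustration, not part of the original setup):

        # Hedged sketch: serve the friendly "not found" page with a real 404 status.
        # Assumes Flask; the template name is hypothetical.
        from flask import Flask, render_template

        app = Flask(__name__)

        @app.errorhandler(404)
        def not_found(error):
            # Same friendly content users already see, but returned with status 404
            # instead of 200, so Googlebot treats the URL as genuinely missing.
            return render_template("not_found.html"), 404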

    Read the article

  • Search and replace hundreds of strings in tens of thousands of files?

    - by C Johnson
    I am looking into changing the file names of hundreds of files in a (C/C++) project that I work on. The problem is that our software has tens of thousands of files that include (i.e. #include) these hundreds of files that will get changed. This looks like a maintenance nightmare. If I do this I will be stuck in UltraEdit for weeks, rolling hundreds of regexes by hand, replacing ^\#include.*["<\\/]stupid_name.*$ with #include <dir/new_name.h>. Such drudgery would be worse than peeling hundreds of potatoes in a sunken submarine in the Antarctic with a spoon. I think it would be ideal to put the inputs and outputs into a table like so:

        stupid_name.h   <->  <dir/new_name.h>
        stupid_nameb.h  <->  <dir/new_nameb.h>
        stupid_namec.h  <->  <dir/new_namec.h>

    and feed this into a regular expression engine / tool / app / etc. My ultimate question: Is there a tool that will do that? Bonus question: Is it multi-threaded? I looked at quite a few search-and-replace topics here on this website and found lots of standard queries that ask a variant of the standard question "replace one term in N files", as opposed to my question: replace N terms in N files. Thanks in advance for any replies.
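    One way to script the "N terms in N files" case is a small pass over the source tree driven by a rename table. A rough sketch of that idea in Python, under the assumption that the mapping lives in a simple two-column text file; the file names, extensions, and root folder here are hypothetical:

        # Hedged sketch: rewrite #include lines across a source tree from a rename table.
        # renames.txt (hypothetical) holds one mapping per line: old_header.h  dir/new_header.h
        import os
        import re

        renames = {}
        with open("renames.txt") as f:
            for line in f:
                if not line.strip():
                    continue
                old, new = line.split()
                renames[old] = new

        # Capture the include prefix, the final path component, and the closing bracket.
        include_re = re.compile(r'^(\s*#include\s*[<"]).*?([^/\\<">]+)([">])', re.M)

        def fix_include(match):
            name = match.group(2)
            if name in renames:
                return match.group(1) + renames[name] + match.group(3)
            return match.group(0)

        for root, _, files in os.walk("src"):               # "src" is a placeholder root
            for fname in files:
                if not fname.endswith((".c", ".cpp", ".h", ".hpp")):
                    continue
                path = os.path.join(root, fname)
                with open(path) as fh:
                    text = fh.read()
                new_text = include_re.sub(fix_include, text)
                if new_text != text:                        # only rewrite files that changed
                    with open(path, "w") as fh:
                        fh.write(new_text)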

    Read the article

  • Possible to rank partial matches in Postgres full text search?

    - by Joe
    I'm trying to calculate a ts_rank for a full-text match where some of the terms in the query may not be in the ts_vector against which it is being matched. I would like the rank to be higher in a match where more words match. Seems pretty simple? Because not all of the terms have to match, I have to | the operands, to give a query such as to_tsquery('one|two|three') (if it was &, all would have to match). The problem is, the rank value seems to be the same no matter how many words match. In other words, it's maxing rather than multiplying the clauses.

        select ts_rank('one two three'::tsvector, to_tsquery('one'));

    gives 0.0607927.

        select ts_rank('one two three'::tsvector, to_tsquery('one|two|three|four'));

    gives the expected lower value of 0.0455945 because 'four' is not in the vector. But

        select ts_rank('one two three'::tsvector, to_tsquery('one|two'));

    gives 0.0607927, and likewise

        select ts_rank('one two three'::tsvector, to_tsquery('one|two|three'));

    gives 0.0607927. I would like the result of ts_rank to be higher if more terms match. Possible? To counter one possible response: I cannot calculate all possible subsequences of the search query as intersections and then union them all in a query, because I am going to be working with large queries. I'm sure there are plenty of arguments against this anyway! Edit: I'm aware of ts_rank_cd but it does not solve the above problem.
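    One workaround worth sketching (my own suggestion, not something from the original question) is to sum the per-term ranks instead of ranking a single OR query, since each ts_rank call only scores the terms that actually match. A rough Python/psycopg2 sketch; the table and column names ("documents", "id", "tsv") are hypothetical and the terms are assumed to be already sanitized:

        # Hedged sketch: score each document by the sum of its per-term ranks, so that
        # matching more of the terms yields a higher total.
        import psycopg2

        def ranked_search(conn, terms):
            # Build "ts_rank(tsv, to_tsquery(%s)) + ts_rank(tsv, to_tsquery(%s)) + ..."
            rank_expr = " + ".join("ts_rank(tsv, to_tsquery(%s))" for _ in terms)
            or_query = "|".join(terms)
            sql = (
                "SELECT id, " + rank_expr + " AS score "
                "FROM documents "
                "WHERE tsv @@ to_tsquery(%s) "
                "ORDER BY score DESC"
            )
            with conn.cursor() as cur:
                cur.execute(sql, list(terms) + [or_query])
                return cur.fetchall()

        # Usage sketch:
        # conn = psycopg2.connect("dbname=mydb")
        # ranked_search(conn, ["one", "two", "three"])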

    Read the article

  • Why is this attempt at a binary search crashing?

    - by Ian Campbell
    I am fairly new to the concept of a binary search, and am trying to write a program that does this in Java for personal practice. I understand the concept of this well, but my code is not working. There is a run-time exception happening in my code that just caused Eclipse, and then my computer, to crash... there are no compile-time errors here though. Here is what I have so far:

        public class BinarySearch {

            // instance variables
            int[] arr;
            int iterations;

            // constructor
            public BinarySearch(int[] arr) {
                this.arr = arr;
                iterations = 0;
            }

            // instance method
            public int findTarget(int targ, int[] sorted) {
                int firstIndex = 1;
                int lastIndex = sorted.length;
                int middleIndex = (firstIndex + lastIndex) / 2;
                int result = sorted[middleIndex - 1];

                while (result != targ) {
                    if (result > targ) {
                        firstIndex = middleIndex + 1;
                        middleIndex = (firstIndex + lastIndex) / 2;
                        result = sorted[middleIndex - 1];
                        iterations++;
                    } else {
                        lastIndex = middleIndex + 1;
                        middleIndex = (firstIndex + lastIndex) / 2;
                        result = sorted[middleIndex - 1];
                        iterations++;
                    }
                }
                return result;
            }

            // main method
            public static void main(String[] args) {
                int[] sortedArr = new int[] { 1, 3, 5, 7, 9, 11, 13, 15, 17, 19, 21, 23, 25, 27, 29 };
                BinarySearch obj = new BinarySearch(sortedArr);
                int target = sortedArr[8];
                int result = obj.findTarget(target, sortedArr);
                System.out.println("The original target was -- " + target + ".\n"
                        + "The result found was -- " + result + ".\n"
                        + "This took " + obj.iterations + " iterations to find.");
            } // end of main method

        } // end of class BinarySearch
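    For contrast, a minimal sketch of the conventional index handling (in Python rather than Java, purely for illustration): when the middle element is greater than the target the search should move to the lower half, which is the opposite of what the code above does, and the bounds should shrink on every step so the loop can terminate even when the target is absent.

        # Hedged sketch of conventional binary search bounds handling.
        def binary_search(sorted_list, target):
            low, high = 0, len(sorted_list) - 1
            while low <= high:
                mid = (low + high) // 2
                if sorted_list[mid] == target:
                    return mid                # index of the match
                elif sorted_list[mid] > target:
                    high = mid - 1            # middle too big: search the lower half
                else:
                    low = mid + 1             # middle too small: search the upper half
            return -1                         # target not present

        # Example: prints 8 for the list and target used in the question.
        print(binary_search([1, 3, 5, 7, 9, 11, 13, 15, 17, 19, 21, 23, 25, 27, 29], 17))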

    Read the article

  • Handling file uploads with JavaScript and Google Gears, is there a better solution?

    - by gnarf
    So - I've been using this method of file uploading for a bit, but it seems that Google Gears has poor support for the newer browsers that implement the HTML5 specs. I've heard the word "deprecated" floating around a few channels, so I'm looking for a replacement that can accomplish the following tasks and support the new browsers. I can always fall back to Gears / standard file POSTs, but the following items make my process much simpler:

    - Users MUST be able to select multiple files for uploading in the dialog.
    - I MUST be able to receive status updates on the transmission of a file. (progress bars)
    - I would like to be able to use PUT requests instead of POST.
    - I would like to be able to easily attach these events to existing HTML elements using JavaScript, i.e. the file selection should be triggered on a <button> click.
    - I would like to be able to control response/request parameters easily using JavaScript.

    I'm not sure if the new HTML5 browsers have support for the desktop/request objects Gears uses, or if there is a Flash uploader that has these features that I am missing in my Google searches. An example of uploading code using Gears:

        // select some files:
        var desktop = google.gears.factory.create('beta.desktop');
        desktop.openFiles(selectFilesCallback);

        function selectFilesCallback(files) {
            $.each(files, function(k, file) {
                // this code actually goes through a queue, and creates some status bars
                // but it is unimportant to show here...
                sendFile(file);
            });
        }

        function sendFile(file) {
            google.gears.factory.create('beta.httprequest');
            request.open('PUT', upl.url);
            request.setRequestHeader('filename', file.name);
            request.upload.onprogress = function(e) {
                // gives me % status updates... allows e.loaded/e.total
            };
            request.onreadystatechange = function() {
                if (request.readyState == 4) {
                    // completed the upload!
                }
            };
            request.send(file.blob);
            return request;
        }

    Edit: apparently Flash isn't capable of using PUT requests, so I have changed it to a "like" instead of a "must".

    Read the article

  • How can I have a VBA macro perform a Search/Replace in formulas across an entire workbook?

    - by richardtallent
    I distribute an Excel workbook to a number of users, and they are supposed to have a particular macro file pre-installed in their XLSTART folder. If they don't have the macro installed correctly, and they send the workbook back to me, any formulas depending on it include the full path of the macro, e.g.:

        'C:\Documents and Settings\richard.tallent\Application Data\Microsoft\Excel\XLSTART\pcs.xls'!MyMacroFunction()

    I want to create a quick macro I can use to remove the improper path from every formula in the workbook. I've successfully retrieved the special folder using GetSpecialFolder (an external function that is working just fine), but the Replace call itself shown below throws an "Application-defined or object-defined error".

        Public Sub FixBrokenMacroFormulas()
            Dim badpath As String
            badpath = "'" & GetSpecialFolder(AppDataFolder) & "\Microsoft\Excel\XLSTART\mymacro.xls'!"
            Dim Sheet As Worksheet
            For Each Sheet In ActiveWorkbook.Sheets
                Call Sheet.Activate
                Call Sheet.Select(True)
                Call Selection.Replace(What:=badpath, Replacement:="", LookIn:=xlFormulas)
                ''//LookAt:=xlPart, MatchCase:=False, SearchFormat:=False, ReplaceFormat:=False
            Next
        End Sub

    Automating search and replace isn't exactly my forte, what am I doing wrong? I've already commented out some of the parameters that don't seem essential.

    Read the article

  • Does F# documentation have a way to search for functions by their types?

    - by Nathan Sanders
    Say I want to know if F# has a library function of type

        ('T -> bool) -> 'T list -> int

    ie, something that counts how many items of a list a function returns true for (or returns the index of the first item that returns true). I used to use the big list at the MSR site for F# before the documentation on MSDN was ready. I could just search the page for the above text because the types were listed. But now the MSDN documentation only lists types on the individual pages--the module page is a mush of descriptive text. Google kinda-sorta works, but it can't help with

        // compatible interfaces
        ('T -> bool) -> Seq<'T> -> int
        // argument-swaps
        Seq<'T> -> ('T -> bool) -> int
        // type-variable names
        ('a -> bool) -> Seq<'a> -> int
        // wrappers
        ('a -> bool) -> 'a list -> option<int>
        // uncurried versions
        ('T -> bool) * 'T list -> int
        // .NET generic syntax
        ('T -> bool) -> List<'T> -> int
        // methods
        List<'T> member : ('T -> bool) -> int

    Haskell has a standalone program for this called Hoogle. Does F# have an equivalent, like Fing or something?

    Read the article

  • Are the formatted addresses of a Google location unique?

    - by Hans
    I want the users of a web site to be able to either search and pick an address or mark a location on a map, and then to decide how accurate this address/location is. I am in the process of implementing the first part with jQuery, jQuery UI's autocomplete, a Google map, and the Google geocoder. For the second part I will generate a radio-button list based on the address elements/alternatives from the first part, on the client side with jQuery. My concern, however, is how to convey the choices to the server side. The Google geocoder includes a number of useful metadata fields that I want to store. A possibility is to store the complete JSON object in a hidden form field, but I can't trust the users. Such a solution would enable an unfriendly insertion of spam into the data. If the addresses/locations had a unique identifier, I could store just these and let the server re-fetch/evaluate the data. The alternative geonames.org web service has such IDs. But are, for example, the formatted addresses of a Google location unique? Any tips?

    Read the article

  • GUnload is null or undefined using Directions Service

    - by user1677756
    I'm getting an error using Google Maps API V3 that I don't understand. My initial map displays just fine, but when I try to get directions, I get the following two errors:

        Error: The value of the property 'GUnload' is null or undefined, not a Function object
        Error: Unable to get value of the property 'setDirections': object is null or undefined

    I'm not using GUnload anywhere, so I don't understand why I'm getting that error. As far as the second error is concerned, it's as if something is wrong with the Directions service. Here is my code:

        var directionsDisplay;
        var directionsService = new google.maps.DirectionsService();
        var map;

        function initialize(address) {
            directionsDisplay = new google.maps.DirectionsRenderer();
            var geocoder = new google.maps.Geocoder();
            var latlng = new google.maps.LatLng(42.733963, -84.565501);
            var mapOptions = {
                center: latlng,
                zoom: 15,
                mapTypeId: google.maps.MapTypeId.ROADMAP
            };
            map = new google.maps.Map(document.getElementById("map_canvas"), mapOptions);

            geocoder.geocode({ 'address': address }, function (results, status) {
                if (status == google.maps.GeocoderStatus.OK) {
                    map.setCenter(results[0].geometry.location);
                    var marker = new google.maps.Marker({
                        map: map,
                        position: results[0].geometry.location
                    });
                } else {
                    alert("Geocode was not successful for the following reason: " + status);
                }
            });

            directionsDisplay.setMap(map);
        }

        function getDirections(start, end) {
            var request = {
                origin: start,
                destination: end,
                travelMode: google.maps.TravelMode.DRIVING
            };
            directionsService.route(request, function(result, status) {
                if (status == google.maps.DirectionsStatus.OK) {
                    directionsDisplay.setDirections(result);
                } else {
                    alert("Directions cannot be displayed for the following reason: " + status);
                }
            });
        }

    I'm not very savvy with javascript, so I could have made some sort of error there. I appreciate any help I can get.

    Read the article

  • Works for Short Input, Fails for Long Input. How to Solve?

    - by r0ach
    I have this program which finds a substring in a string. It works for small inputs but fails for long inputs. Here's the program:

        //Find Substring in given String
        #include <stdio.h>
        #include <string.h>

        main()
        {
            //Variable Initialization
            int i=0,j=0,k=0;
            char sentence[50],temp[50],search[50];

            //Gets Strings
            printf("Enter Sentence: ");
            fgets(sentence,50,stdin);
            printf("Enter Search: ");
            fgets(search,50,stdin);

            //Actual Work Loop
            while(sentence[i]!='\0')
            {
                k=i;j=0;
                while(sentence[k]==search[j])
                {
                    temp[j]=sentence[k];
                    j++;
                    k++;
                }
                if(strcmp(temp,search)==0)
                    break;
                i++;
            }

            //Output Printing
            printf("Found string at: %d \n",k-strlen(search));
        }

    Works for:

        Enter Sentence: good evening
        Enter Search: evening
        Found string at 6

    Fails for:

        Enter Sentence: dear god please make this work
        Enter Search: make
        Found string at 25

    which is totally wrong. Can any expert find me a solution? P.S.: This is kind of like reinventing the wheel, since strstr() has this functionality, but I'm trying for a non-library way of doing it.
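    For comparison, here is the same manual-scan idea written so that the comparison state is reset at every starting position and exactly the length of the search string is checked. This is an illustration in Python, not a patch for the C program above; the test sentence is the one from the question:

        # Hedged sketch: manual substring scan without library search functions.
        def find_substring(sentence, search):
            n, m = len(sentence), len(search)
            for i in range(n - m + 1):
                j = 0
                while j < m and sentence[i + j] == search[j]:
                    j += 1
                if j == m:          # every character of `search` matched
                    return i        # index where the match starts
            return -1               # no match found

        print(find_substring("dear god please make this work", "make"))  # prints 16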

    Read the article

  • Step Aside Google

    - by David Dorf
    Step aside, Google. While search will always be a huge part of the web, I can see a day in the not-too-distant future where search takes a backseat to the social graph. Links between pages will give way to relationships between people, including context like location. What does this mean for retail? It means your e-commerce strategy will slowly transition to an f-commerce strategy. Remember when a large portion of the online population was held captive inside the walls of AOL? All the commercials listed an AOL keyword, not a web address, because that's where the majority of people surfed. Now people are spending a huge amount of time in Facebook (despite Betty White's proclamation that it's a big waste of time). According to Facebook, users spend 500 billion minutes per month on the site. Selling products where consumers are spending their time makes sense. The power of Like and Share is the most effective approach to marketing. More and more stores are popping up on Facebook, and soon they will be the front-end to e-commerce systems. As sites adopt the Facebook Open Graph API, users will have a harder time distinguishing the open web from their Facebook experience, including shopping. Of course e-commerce sites won't go away, but a large portion of their traffic will emanate from Facebook, and in some cases Facebook will act as the front-end for the web store. Ignore Facebook Open Graph at your peril. In a Mashable article, Mitchell Harper made several predictions about how e-commerce will change based on Facebook. His five points are not far-fetched at all, so we need to watch this space carefully.

    Read the article

  • OpenGL ES 2 shaders for drawing buildings and roads like Google Maps does

    - by Pris
    I'm trying to create a shader that'll give me an effect similar to what buildings and roads look like on 3D Google Maps. You can see the effect interactively if you enable WebGL at maps.google.com, and I also found a couple of screenshots that illustrate what I'm trying to achieve. Things I noticed:

    - There's some kind of transparency thing going on with the roads/ground and the buildings, but not between the buildings themselves. It might be that they're rendering the ground and roads after the buildings with the right blend functions to achieve that effect.
    - If you look closely, you'll see parts of the building profiles have an outline. The roads also have nice clean outlines. There are a lot of techniques for outlining things with shaders... but I'm curious to find out what might have been used in this case, considering mobile hardware and a large number of entities with outlines (roads and buildings).
    - I'm assuming that for the lighting, some sort of simple diffuse per-vertex shader is being used for the buildings, though I could be wrong.

    I'm especially curious about the 'look' they achieved with buildings (clean, precise outlines/shading). It reminds me a little of what you'd see when designing stuff with CAD applications like SolidWorks. I'd appreciate any advice on achieving this kind of look with ES 2 shaders.

    Read the article

  • New whitepaper, “Why Oracle Sun ZFS Storage Appliance for Oracle Databases?” now available.

    - by Cinzia Mascanzoni
    Databases are the backbone of today's modern business, providing transaction integrity for key business systems such as payment engines, or providing the core of analytical data for decision-making. These diverse use cases require a flexible, high-performance and highly available storage platform. The ZFS Storage Appliance is ideally suited, with an architecture flexible enough to meet the ever-changing availability, capacity and performance requirements of the business. In this just-published white paper the authors provide both business and technical evidence of the suitability of the Oracle ZFS Storage Appliance as primary storage for Oracle Database 11gR2 environments. Click here to download the whitepaper.

    Read the article

  • How To Disable Loading Of Images In Chrome, Firefox and IE

    - by Gopinath
    Many of us find it necessary to disable the loading of images in web browsers for various reasons. Maybe we are at the workplace and don't want the boss to notice a flashy browser window, or we are on a low-bandwidth connection like GPRS, which works faster without images. Whatever the reason may be, here are the tips to disable images in the Google Chrome, Firefox and Internet Explorer web browsers.

    Google Chrome – Disable Loading Images
    1. Click on the Tools icon and choose the Options menu item
    2. In the Google Chrome Options dialog window, switch to the "Under the hood" tab and click on the "Content Settings" button
    3. Select "Images" from the list of options available in the left panel and choose the option "Do not show any images"
    4. Close the dialog windows and you are done

    Firefox – Disable Loading Images
    1. Open Firefox
    2. Go to Tools -> Options
    3. Switch to the Content tab
    4. Uncheck the option "Load images automatically"

    Internet Explorer – Disable Loading Images
    1. Launch Internet Explorer
    2. Go to Tools -> Internet Options
    3. Switch to the Advanced tab
    4. Uncheck the option "Show pictures" under the Multimedia category

    cc image credit: flickr/indoloony

    Read the article

  • Security in shared hosting vs VPS 'virtual appliances'

    - by Pedro Loureiro
    I have to change my hosting provider. Right now I have a shared hosting account, but I'm considering trying the LAMP stack appliance from turnkeylinux.org. I'm very comfortable with Linux; I've been using it for a long time. I have no problem ssh'ing into remote machines and doing whatever I have to do (coding, reading logs, moving files, deploying, etc). The problem is that none of those tasks have involved securing the server/firewall. My experience has been as a desktop user or as a developer deploying apps/files on remote servers. Ignoring the security in the application logic (read: any scripts, frameworks, or websites I might have created or installed) - I'm worried about things like the base configuration of daemons, the firewall, ports, executable scripts being readable from the outside, and whatnot. My question is: how do you compare the (expected) out-of-the-box security of the LAMP stack from TurnKey and the (expected) security of a "regular" shared hosting provider? I was hoping to find some guides with a list of steps to take to protect my server, but the only documentation I found simply referred to Ubuntu's documentation.

    Read the article

  • Is this information about me as a programmer concise and good enough?

    - by Nick Rosencrantz
    I would like you not only to review my resume but also to tell me what you think Google means when they answered me: "We don't look at personal letters and we like your resume and we can recommend you internally, but we need measurable experience." What is meant by "measurable" here? Do they mean something like O(1) compared to O(n), selling an entire company, grades, or what? This is what I sent:

    Curriculum vitae - Nick Rosencrantz
    Competence: System development, web development
    Technical competence: Java, Javascript, HTML, XML, CSS, AJAX, PHP, SQL, Python
    Employments:
    2012-      Mobile Innovation AB, System Developer. IT consultant (Java programmer)
    2011-2012  Bnano International Ltd, System Developer. Python programming in Google App Engine
    2008-2009  Sweden Island AB, System Developer. Programming C++ and Java EE components
    2003-2007  Studies, Stockholm School of Economics. During studies worked as network technician at Effnet AB
    2000-2002  Jadestone AB, System Developer. System development in Java/J2EE. In 2001: KTH, Assistant. Teaching application server programming in Java Enterprise + WebLogic + Informix.
    1999-2000  Studies, KTH
    1996-1998  Spray.se, System development, Researcher
    1995-1995  Finance broker. Back-office work with financial instruments
    1993-1994  Computer & Audio-Technical Systems AB, Programming, summer job
    Education/Courses: Stockholm School of Economics, Master of Science diploma; KTH, Computer Science undergraduate studies
    Languages: Swedish, English, also some German and French
    Born 1973, Swedish citizen

    I also have a project-based CV which is several pages long, but the above is about what I was aiming for in the beginning when I was looking for a job. Now I have employment as an IT consultant in central Stockholm, and I want to make my resume concise and also understand what Google meant by their answer. (It was a Swedish Google employee who recruited via LinkedIn from my Stockholm School of Economics groups, since that is a small elite economics school where I took my M.Sc., and KTH is one of the largest universities in northern Europe. I sent her a link to my CV and she said she could promote me internally if I added "measurable experience", and I've been thinking for weeks about what that may mean.)

    Read the article

  • New META TAGS with positive effects for SEO ranking in 2011 and beyond

    - by Sam
    Hi all, I'm trying to make an up-to-date chart of meta tags, for all of us, with their purposes, their use and their good (or bad) effects on search engines/being found. Also, does anybody know of new/promising meta tags? I will add yours to my list so this chart is the result of live discussion and stays up to date. Also, it would be creative to invent your own useful meta tag, because we are the ones making the web, or aren't we?

    LEGEND
    P  PURPOSE?    What does this meta tag do in 2011, if anything
    N  NECESSARY?  Does every site really need it or not?
    G  GOOD        whether it will have a good effect for your site to be found
    I  INVENTED    meta tag, who knows, it may be accepted in a year!

    META "METANAME" = PURPOSE? - NECESSARY? - GOOD EFFECT?

    #### important
    meta "title"       = P concise summary + teaser - N very - G extremely
    meta "description" = P description + teaser - N yes - G very
    meta "robots"      = P if needed, to skip default dmoz/yahoo dir listing - N no - G?

    #### new & promising! Thanks for input (John, )
    meta "original-source"    = P url of whoever broke the news gets credits - N? - G?
    meta "syndication-source" = P url for syndication of published news - N? - G?
    meta "canonical"          = P? - N? - G?

    #### seems obsolete
    meta "keywords"   = P some keywords - N+G not for google but yahoo likes them
    meta "language"   = P overrule guesswork by defining language - N no - G?
    meta "page-topic" = P topic/theme - N? - G?
    meta "abstract"   = P short summary - N? - G?
    meta "copyright"  = ?

    #### invented by me
    meta "audience" = P filtered audience: "+seniors, +parents, -children, -youth"
    meta "mood"     = P specifies textual style: "discussion, informative, commercial, sexual, fictional, scientific, romantic, therapeutic, technical"

    Read the article

  • How to get website/product reviews reflected in Google's search results using the review-aggregate format

    - by BasP
    I am managing a website called Rent A Boat Amsterdam. We have a system that gathers reviews from people who have used our services and publishes these customer reviews, making them available to all website visitors. When these customer reviews are published, we place them within the appropriate tags according to the guidelines set by Google, which you can find here. An example looks like this:

        <li class="" style="clear:both;">
          <div class="hreview">
            <div class="item" style="display:none;"><span class="fn">Boatname</span></div>
            <div style="border:1px solid #DEDEDE; background-color:#D9FFD4; margin:0 10px 10px 0; float:left; text-align:center; padding:10px; height:50px; width:70px;"><h1><span class="rating">10</span></h1>9-Jun-2010</div>
            <div>
              <div class="description"><p>Great canal Cruise!</p></div>
              <p class="reviewer vcard"><strong><span class="fn">First name Last name</span></strong></p>
            </div>
          </div>

    We implemented these tags a couple of months ago, but there are no visible results in the Google SERPs, even though I expected the reviews/ratings to show up as rich snippets in the results. Is anyone familiar with this topic and able to help me find the answer to the question of why the review-aggregate format doesn't seem to have the desired effect?

    Read the article

  • Sending Emails via Google SMTP - after some time quit working

    - by Chris
    On a website I use PHPMailer to send automated registration emails, etc., and also a newsletter tool (which loops through the email addresses and sends the messages one by one). Also, in Gmail under Settings I configured and confirmed the @mydomain addresses, so I can send from @mydomain addresses without the Gmail address being displayed. Furthermore, I authorized the website to send mails with this link: https://accounts.google.com/DisplayUnlockCaptcha

    Now, after two months in which everything worked perfectly fine, users suddenly started not receiving emails anymore, and most recently emails are not even being sent anymore. Also, I received many error messages like this:

        Technical details of permanent failure: Google tried to deliver your message, but it was rejected by the recipient domain. We recommend contacting the other email provider for further information about the cause of this error. The error that the other server returned was: 550 550 5.4.1 [email protected]: Recipient address rejected: Access Denied (state 13).

    When I check at this link: https://toolbox.googleapps.com/apps/checkmx/ it reports two non-critical errors:

        Relayhost configuration detected.
        There SHOULD be a valid SPF record.

    So, the questions I have are:
    - does anybody have any hint why it stopped working, and what the error messages mean?
    - what can I do to fix it?
    - where do I set an SPF record (cPanel?)?
    - what is a relayhost and how do I fix that?

    It is about 1000-1400 mails a day (Gmail's limit is 2000). Also, what can I do wrong when setting up an SPF record? I've heard there are some testing tools for that. Thank you so much already in advance for your help!
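    On the SPF point: an SPF record is a DNS TXT entry on the domain (in cPanel-based hosts it is usually added through the DNS zone editor), and domains that relay mail through Google's servers commonly publish something like "v=spf1 include:_spf.google.com ~all". A small sketch for checking what, if anything, the domain currently publishes; the dnspython library and the placeholder domain name are my own assumptions, not something from the original post:

        # Hedged sketch: look up the domain's TXT records and report any SPF entry.
        # Requires the dnspython package (dns.resolver.resolve is the 2.x name;
        # older versions use dns.resolver.query). "mydomain.com" is a placeholder.
        import dns.resolver

        answers = dns.resolver.resolve("mydomain.com", "TXT")
        for rdata in answers:
            txt = rdata.to_text()
            if "v=spf1" in txt:
                # For domains relaying through Google this typically looks like
                # "v=spf1 include:_spf.google.com ~all".
                print("SPF record found:", txt)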

    Read the article

  • MOSS Search Error: Authentication failed because the remote party has closed the transport stream

    - by Cherie Riesberg
    http://support.microsoft.com/?id=962928

    To resolve this issue, follow these steps:

    1. Stop the Office SharePoint Services Search service. To do this, follow these steps:
       a. Click Start, click Run, type cmd, and then click OK.
       b. At the command prompt, type net stop osearch, and then press ENTER.
       c. Type exit to exit the command prompt.
    2. Download and install the IIS 6.0 Resource Kit Tools. To obtain the IIS 6.0 Resource Kit Tools, visit the following Microsoft Web site: http://www.microsoft.com/downloads/details.aspx?familyid=56FC92EE-A71A-4C73-B628-ADE629C89499
    3. On each server in the farm that has Office SharePoint 2007 installed, follow these steps:
       a. Click Start, click Run, type cmd, and then click OK.
       b. Navigate to the location of the IIS 6.0 Resource Kit Tools (the default location is C:\Program Files\IIS Resources\SelfSSL).
       c. At the command prompt, type selfssl /s:951338967 /v:1000, and then press ENTER.
       Notes:
       - For a 64-bit server, 951338967 is the default ID of the Office Server Web Services certificate.
       - For a 32-bit server, 1720207907 is the default ID of the Office Server Web Services certificate.
       - You can check the ID of Office Server Web Services from IIS.
       - 1000 is the number of days that the certificate will be valid.
       - You need to execute the selfssl command on each MOSS server in the farm that is running an "Office Server Web Services" site. SharePoint partly uses SSL name resolution in the background between farm servers, which users generally do not need to be aware of.
    4. Start the Office SharePoint Services Search service. To do this, follow these steps:
       a. At the command prompt, type net start osearch, and then press ENTER.
       b. Type exit to exit the command prompt.
    5. Download and install the update to the .NET Framework 3.5 SP1 described in Microsoft Knowledge Base article 959209 (http://support.microsoft.com/kb/959209/), "An update for the .NET Framework 3.5 Service Pack 1 is available".

    Read the article

  • Google-Bot fell in love with my 404-page

    - by 32bitfloat
    Every day my access log looks something like this:

        66.249.78.140 - - [21/Oct/2013:14:37:00 +0200] "GET /robots.txt HTTP/1.1" 200 112 "-" "Mozilla/5.0 (compatible; Googlebot/2.1; +http://www.google.com/bot.html)"
        66.249.78.140 - - [21/Oct/2013:14:37:01 +0200] "GET /robots.txt HTTP/1.1" 200 112 "-" "Mozilla/5.0 (compatible; Googlebot/2.1; +http://www.google.com/bot.html)"
        66.249.78.140 - - [21/Oct/2013:14:37:01 +0200] "GET /vuqffxiyupdh.html HTTP/1.1" 404 1189 "-" "Mozilla/5.0 (compatible; Googlebot/2.1; +http://www.google.com/bot.html)"

    or this:

        66.249.78.140 - - [20/Oct/2013:09:25:29 +0200] "GET /robots.txt HTTP/1.1" 200 112 "-" "Mozilla/5.0 (compatible; Googlebot/2.1; +http://www.google.com/bot.html)"
        66.249.75.62 - - [20/Oct/2013:09:25:30 +0200] "GET /robots.txt HTTP/1.1" 200 112 "-" "Mozilla/5.0 (compatible; Googlebot/2.1; +http://www.google.com/bot.html)"
        66.249.78.140 - - [20/Oct/2013:09:25:30 +0200] "GET /zjtrtxnsh.html HTTP/1.1" 404 1186 "-" "Mozilla/5.0 (compatible; Googlebot/2.1; +http://www.google.com/bot.html)"

    The bot calls robots.txt twice and after that tries to access a file (zjtrtxnsh.html, vuqffxiyupdh.html, ...) which cannot exist and must return a 404 error. The same procedure every day; just the non-existent HTML file name changes. The content of my robots.txt:

        User-agent: *
        Disallow: /backend
        Sitemap: http://mysitesname.de/sitemap.xml

    The sitemap.xml is readable and valid, so there seems to be no reason why the bot should want to force a 404 error. How should I interpret this behaviour? Does it point to a mistake I've made, or should I ignore it?
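    If the only question is whether the site answers such probes correctly, the status code is easy to confirm directly. A minimal sketch assuming the Python requests library (the hostname is the placeholder already used above):

        # Hedged sketch: confirm that a made-up URL returns 404 rather than 200.
        import requests

        resp = requests.get("http://mysitesname.de/zjtrtxnsh.html")
        print(resp.status_code)  # 404 here means non-existent pages are handled correctly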

    Read the article

  • Join the Geek+ Community on Google+ and Share Your Random Geekery

    - by The Geek
    It turns out that Google+ recently added a new feature that allows you to create your own community inside of Google+, where anybody that's a member can post images, links, or start a discussion. We've created the Geek+ Community, so stop by and join in the fun. You'll notice that there are only a few members right now, but we're hoping that we can get every How-To Geek reader to participate in the geeky discussion. You're welcome to:

    - Post random geeky stuff that you find.
    - Yell at us for articles that you don't like, or tell us how we can do things better.
    - Participate in discussions with other HTG readers.
    - Post up your own Geek Trivia. We might even publish it over here on How-To Geek.
    - Ask others for advice.
    - Just read everything that the other readers post.
    - Lots of other things we can't think of right now.

    Note: If you want tech support, you should post on our regular forum.

    Read the article

  • Batch OCR for many PDF files (not already OCRed) ?

    - by David
    Hello, I use Google Desktop Search (I am on Vista) and not all my PDF files are recognized in my archive folder. This is normal, as "PDF files that contain scanned images" are not indexed (http://desktop.google.com/support/bin/answer.py?hl=en&answer=90651). So I would like to OCR many of my PDF files that are not already OCRed.

    My goal: I give the program a folder and it searches the subfolders on its own for the PDF files that need to be converted into OCRed PDFs.

    Note: In the past, if a PDF file was password protected, I removed the password with another batch (paid) tool: verypdf.com "pwdremover".

    Any (not too expensive) ideas? I already tried:
    - FineReader 6 Pro on XP at the time, but there was no batch processor included...
    - Paperfile (paperfile.net), which uses Tesseract (code.google.com/p/tesseract-ocr/). But the OCR is only PDF to text, not PDF to PDF!
    - There is also another project, code.google.com/p/ocropus

    Thanks in advance ;)
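    One way to script the folder walk is to test each PDF for an existing text layer and only OCR the ones that need it. A rough sketch of that idea in Python; the pdftotext and ocrmypdf tools and the folder name are my own assumptions, not something mentioned in the original post:

        # Hedged sketch: walk a folder tree, find PDFs with no text layer, and write
        # searchable copies next to them. Assumes the poppler `pdftotext` utility and
        # the `ocrmypdf` tool are installed.
        import os
        import subprocess

        def has_text_layer(pdf_path):
            out = subprocess.run(["pdftotext", pdf_path, "-"],
                                 capture_output=True, text=True)
            return bool(out.stdout.strip())

        for root, _, files in os.walk("archive"):           # "archive" is a placeholder
            for name in files:
                if not name.lower().endswith(".pdf"):
                    continue
                src = os.path.join(root, name)
                if has_text_layer(src):
                    continue                                # already searchable, skip it
                dst = os.path.join(root, name[:-4] + ".ocr.pdf")
                subprocess.run(["ocrmypdf", src, dst])      # writes a searchable copy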

    Read the article

  • Chrome Time Track Is a Simple Task Time Tracker

    - by ETC
    If you don’t need advanced project tracking but still want to track how long it takes you to finish certain tasks or projects, Chrome Time Tracker is a free Chrome extension with a dead simple interface. Install the extension and a Time Track icon appears in your toolbar. Click it to create new tasks, delete old tasks, and start, stop, and reset your task timers. When the timer is running the icon changes to indicate you’re on the clock; pause the timer and it toggles back to the default icon. Hit up the link below to grab a copy. Chrome Time Track is free and works wherever Google Chrome does. Chrome Time Track [Google Chrome Extensions]

    Read the article

  • Best practices for caching search queries

    - by David Esteves
    I am trying to improve the performance of my ASP.NET Web API by adding a data cache, but I am not sure exactly how to go about it, as it seems to be more complex than most caching scenarios. For example, I have a table of Locations and an API to retrieve locations via search, for an autocomplete: /api/location/Londo, and the query would be something like

        SELECT * FROM Locations WHERE Name LIKE 'Londo%'

    These locations change very infrequently, so I would like to cache them to prevent trips to the database for no real reason and to improve the response time. Looking at caching options, I am using the Windows Azure AppFabric system; the problem is that it's just a key/value cache. Since I can only retrieve items based on keys, I couldn't actually use it for this scenario as far as I'm aware. Is what I am trying to do a bad use of a caching system? Should I look into a NoSQL DB which could possibly run as a cache for something like this to improve performance? Should I just cache the entire table/collection under a single key, with a specific data structure which could assist with the searching, and then do the search upon retrieval of the data?
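    The last option can work well for a small, slowly changing table: store the whole list under one cache key and keep it in a structure that supports prefix lookups. A rough sketch of that data-structure idea, in Python rather than ASP.NET purely for illustration; the class and sample data are hypothetical:

        # Hedged sketch of the "cache the whole table under one key" idea:
        # keep a sorted list of names and answer prefix queries with bisect.
        import bisect

        class LocationPrefixCache:
            def __init__(self, names):
                self.names = sorted(n.lower() for n in names)

            def search(self, prefix, limit=10):
                prefix = prefix.lower()
                start = bisect.bisect_left(self.names, prefix)
                results = []
                for name in self.names[start:start + limit]:
                    if not name.startswith(prefix):
                        break                 # matches are contiguous in sorted order
                    results.append(name)
                return results

        cache = LocationPrefixCache(["London", "Londonderry", "Long Beach", "Paris"])
        print(cache.search("londo"))   # ['london', 'londonderry']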

    Read the article
