Search Results

Search found 34047 results on 1362 pages for 'google search'.

Page 284/1362 | < Previous Page | 280 281 282 283 284 285 286 287 288 289 290 291  | Next Page >

  • Email provider - suggestions needed

    - by Christian Fazzini
    We are looking for a good way to handle email. First, we need to allow end users to send emails directly to support and careers, i.e. support@domain_name_here.com and careers@domain_name_here.com. Second, we need to provide email addresses for our staff, so each staff member has their own address, e.g. joe@domain_name_here.com, meghan@domain_name_here.com, etc. Google Apps is one option we are considering. However, they charge $50 per user, per year. Not so bad, considering the quality and the features they offer, but there are also cheaper alternatives, e.g. my domain registrar offers an email plan for $20 / year / 10 mailboxes, and Go Daddy has a number of plans that are still a lot more affordable than Google Apps. So far Namecheap and Go Daddy are the only ones I have looked at for email plans. Is it worth signing up with Google Apps, or are there better alternatives? Your thoughts?

    Read the article

  • OPA Mobile Now Available on iTunes AppStore and Google Play

    - by Richard Lefebvre
    A free standalone app demonstrating the power of Oracle Policy Automation (OPA) Interviews is available on both Apple’s iTunes AppStore and Google Play (for Android). Later in 2014 customers will be able to deploy their own policy models to the mobile app using the new OPA Hub!

    Read the article

  • Two different domains as one user session

    - by Mathew Foscarini
    I have two websites that are run as the same service. Each domain offers articles from a different market. At the top of each page the two domains are shown as menu options. If a user clicks one they can switch to the other domain. See here: http://www.cgtag.com Each domain has a different Google Analytics account, and when a user switches domains Google is counting this as a new session. It's listing the other domain as the "referral" for that new session. When the user switches back to the first domain Google is counting this as a returning visitor. This is messing up my reports. Showing returning visitors values that are higher than reality. It's also increasing hits on landing pages when the user switches, and listing the other domain as a referral site. I've found tips on how to list two domains as one website, but that results in merging the data. I want to keep the two domains separate so that I can track each ones performance, but I don't want to count domain changes as new sessions. Maybe something like treating the two domains as subdomains.

    Read the article

  • Help installing some Java and Flash

    - by william
    OK, so I have Git now and some other stuff, but I've looked around and I don't find the info very helpful. These are the instructions:
    1. Download the latest version of Flash Player from the Adobe website. Click on "Download the Latest Player," and choose ".deb for Ubuntu" from the drop down list.
    2. Choose "Save File" from the pop up window.
    3. Open a terminal window. The terminal window will be found under Applications - Utilities.
    4. Change to the directory where your downloaded file was saved: cd Download
    5. Install the libcurl3 library: sudo apt-get install libcurl3
    6. Install the Flash Player: sudo dpkg --install install_flash_player__linux.deb (replace with the latest version number).
    7. Restart Firefox.
    8. Check that Flash Player was installed. Type about:plugins in a browser window, and look for the Flash Player MIME type.
    Read more: http://www.ehow.com/how_5068467_install-flash-player-ubuntu.html#ixzz2ixPj47qS
    What I don't get is how the heck I change the directory and all that crap, which they say everyone knows.

    Read the article

  • Tips for switching jobs and moving into web based programming?

    - by JerryC
    I graduated in 2006 with a computer science degree and got solid grades (3.5 overall, 3.8 in my major). For the past 4.5 years I've been working as a Software Engineer doing primarily rich client development. Most of my experience is with Java, Swing and C++. I've done a lot of network programming and I have acquired some skill working and debugging in distributed environments. I would like to switch jobs and move into a role where I can get exposure to some new technologies and frameworks. I would like to move into a more web development role, but I find my lack of web development experience is hurting me. 90% of the jobs I see advertised are looking for one of two skill sets: 1) the stereotypical server-side Java web developer, with experience in Spring, Hibernate, J2EE, etc., or 2) the stereotypical front-end web developer, with experience in JavaScript, jQuery, HTML5, GWT, CSS, etc. I find most of these companies are looking very specifically for this experience and are not willing to take on good programmers with strong CS fundamentals who lack it. I would love to get a job doing stuff like this, but have my skills become out of date and unmarketable? Any opinions on ways to sell myself to help get a new position?

    Read the article

  • Here Is Official GMail App For iPhones & iPads

    - by Gopinath
    It's a great day for GMail users! A few hours ago Google pushed a new GMail web user interface to all users, and now it has released a GMail iOS App [iTunes Link]. After a delay of several years, Google has at last released a native GMail application for iPhone, iPad & iPod touch. In a blog post, Google says: "We've combined your favorite features from the Gmail mobile web app and iOS into one app so you can be more productive on the go. It's designed to be fast, efficient and take full advantage of the touchscreen and notification capabilities of your device." The iOS app includes almost all the features found in the Android version of the GMail app - users can star, label, archive, access the Priority Inbox and get push notifications for new mail. It also includes standard touchscreen gestures like pull down to refresh and swipe to scroll through emails. Go and grab the GMail App from iTunes. This article, titled Here Is Official GMail App For iPhones & iPads, was originally published at Tech Dreams. Grab our RSS feed or fan us on Facebook to get updates from us.

    Read the article

  • White box testing with Google Test

    - by Daemin
    I've been trying out GoogleTest for my C++ hobby project, and I need to test the internals of a component (hence white box testing). At my previous work we just made the test classes friends of the class being tested. But with Google Test that doesn't work, as each test is given its own unique class, derived from the fixture class if specified, and friendship doesn't transfer to derived classes. Initially I created a test proxy class that is friends with the tested class. It contains a pointer to an instance of the tested class and provides methods for the required, but hidden, members. This worked for a simple class, but now I'm up to testing a tree class with an internal private node class, which I need to access and mess with. I'm just wondering if anyone using the GoogleTest library has done any white box testing and if they have any hints or helpful constructs that would make this easier. OK, I've found the FRIEND_TEST macro defined in the documentation, as well as some hints on how to test private code in the advanced guide. But apart from having a huge number of friend declarations (i.e. one FRIEND_TEST for each test), is there an easier idiom to use, or should I abandon GoogleTest and move to a different test framework?

    Read the article

  • How to Avoid Duplicate Content in Wordpress Ecommerce Store

    - by Bhanuprakash Moturu
    Hi, I run a WordPress eCommerce store powered by WooCommerce. I have a large inventory of products, and most of the product description is the same for all products, and it's mandatory to include it. This is creating a lot of duplicate content on the site. Each category has 6 products. I thought of a solution - can you suggest which one is good? 1. noindex, follow the product pages and link them to the categories page using the canonical tag. 2. index, nofollow the product pages and link them to the categories page using the canonical tag. Which is the best solution, and is it good practice to use the canonical tag to link to the categories page?

    Read the article

  • How to register properly to the most famous SEOs? [closed]

    - by Olivier Pons
    I know it may have been asked many times, but here's my question: I'm about to open my website, which I'm more than proud of (I'll talk about its capabilities on my blog). Anyway, I want it to be registered by all the major search engines and to be fetched often, because it may grow quickly. I know that a lot of people may have already asked this question, but nevertheless I didn't find anything relevant. I just want to know where I should register with all the major search engines when I release a website. Maybe this is a wiki, but I didn't find anything helpful on the subject. Any advice welcome.

    Read the article

  • Double vs Single Quotes in Chrome

    - by Rodrigo
    So when you want to embed Google Docs on a site you are given this chunk of code: <iframe width='500' height='300' frameborder='0' src='https://docs.google.com/spreadsheet/pub?hl=en_US&hl=en_US&key=0AiV6Vq32hBZIdHZRN3EwWERLZHVUT25ST01LTGxubWc&output=html&widget=true'></iframe> This works fine on my site. If you edit the page, we run the new content through some filters to escape out stuff and make sure it is valid HTML. After the process, the code above gets converted to this: <iframe frameborder="0" height="300" src="https://docs.google.com/spreadsheet/pub?hl=en_US&amp;hl=en_US&amp;key=0AiV6Vq32hBZIdHZRN3EwWERLZHVUT25ST01LTGxubWc&amp;output=html&amp;widget=true" width="500"></iframe> This works on every browser except Chrome. Chrome thinks I am running JS in the src. I narrowed it down to a combination of double quotes and escaped '&' symbols. If I revert either of those back to its original state, the iframe works. I work in Ruby, where ' and " have different behaviors. Is Chrome doing the same thing? Is there a way to turn that off?

    Read the article

  • Linear Search in Python? [closed]

    - by POTUS
    def find_interval(mesh,x):
        '''This function finds the interval containing x according to the following rules,
        mesh is an ordered list with n numbers
        return 0 if x < mesh[0]
        return n if mesh[n-1] < x
        return k if mesh[k-1] <= x < mesh[k]
        return n-1 if mesh[n-2] <= x <= mesh[n-1]
        This function does a Linear search. 08/29/2012
        '''
        for n in range(len(mesh)):
            for k in range(len(mesh)):
                if x == mesh[n]:
                    print "Found x at index:"
                    return n
                elif x<mesh[n]:
                    return 0
                elif mesh[n-1]<x:
                    return n
                elif mesh[n-2]<=x<=mesh[n-1]:
                    return n-1
                elif mesh[k-1]<=x<mesh[k]:
                    return k

    mesh = [0, 0.1, 0.25, 0.5, 0.6, 0.75, 0.9, 1]
    print mesh
    print find_interval(mesh, -1)
    print find_interval(mesh, 0)
    print find_interval(mesh, 0.1)
    print find_interval(mesh, 0.8)
    print find_interval(mesh, 0.9)
    print find_interval(mesh, 1)
    print find_interval(mesh, 1.01)

    Output:

    [0, 0.100000000000000, 0.250000000000000, 0.500000000000000, 0.600000000000000, 0.750000000000000, 0.900000000000000, 1]
    0
    Found x at index:
    0
    2
    6
    -1
    -1
    0

    I don't think the output is correct. Can anyone help me fix it? Thanks.
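
    For reference, here is a minimal sketch of how the rules in the docstring could be implemented (my own reading of them, not the original author's fix, keeping the Python 2 print style of the question). A single left-to-right scan is enough; the nested loops and the wrap-around negative indexing (mesh[n-1] when n is 0) are what produce the stray 0 and -1 results above.

        def find_interval_fixed(mesh, x):
            # mesh is an ordered list of n numbers; plain linear search
            n = len(mesh)
            if x < mesh[0]:
                return 0
            if x > mesh[n - 1]:
                return n
            if x == mesh[n - 1]:
                # the right edge belongs to the last interval (mesh[n-2] <= x <= mesh[n-1])
                return n - 1
            for k in range(1, n):
                # first k with mesh[k-1] <= x < mesh[k]
                if mesh[k - 1] <= x < mesh[k]:
                    return k

        mesh = [0, 0.1, 0.25, 0.5, 0.6, 0.75, 0.9, 1]
        print [find_interval_fixed(mesh, x) for x in (-1, 0, 0.1, 0.8, 0.9, 1, 1.01)]
        # expected under the stated rules: [0, 1, 2, 6, 7, 7, 8]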

    Read the article

  • Daily Blog Archives and Duplicate Content

    - by nemmy
    A few weeks back I realised that my blog software was creating daily post archives, which basically resulted in duplicate content, especially if I only had one post a day. The situation is something like this: www.sitename.com/blog/archives/2013/06/01 - daily archive for 1 June 2013 www.sitename.com/blog/archives/2013/06/my-post-name.html So, here we have two pages that are basically identical, except the daily archive has some meaningless title like "Daily Archive for 1 June 2013". And I have no control over which content Google decides is the primary content. It's quite possible (and likely) that the daily archive could be the "primary" content and the actual post itself the "duplicate". Once I realised it was doing this I modified the daily archive template to include <meta name="robots" content="noindex"> Here we are a few weeks later and I still see some daily archives coming up in Google search results. I realise some of those deep pages might not be crawled yet, but I am worried that the original post (which should be the PRIMARY content) has been marked as duplicate content by Google. Now that I've noindexed the daily archives, I might end up with no indexed content AND the original articles still flagged as duplicates, and nothing will show up in search at all. Have I screwed myself here, or is there a way out?

    Read the article

  • Googlebot visit but no cache update - why?

    - by Mick
    I have made a new plain vanilla HTML website. I have been making regular modifications to it on an almost daily basis. The site is hosted by hostmonster and as part of their service they offer "awstats" to let you know assorted details of visitors to the site. One thing is puzzling me. According to awstats, a "robot/spider" calling itself "Googlebot" visited my site as recently as today (28th June 2011), but when I find my site on Google (e.g. by searching for "full reserve banking") the cache is dated only the 5th June. I always thought that a visit from the Google robot was synonymous with a cache update. Am I wrong? Or have I accidentally put something in the site telling Google that nothing has been updated? EDIT: It seems a moderator has removed the name of my website, so there is now no chance that anyone could check out if I had made some error on my site :-( ... but anyway, in answer to paulmorriss' question, here is what awstats was telling me:

    Read the article

  • How can I find files quicker than find or locate?

    - by Chaitanya
    I have been using the find command to find files on my 1 TB hard disk. It takes very long. Then I used locate, which proved to be faster with regular updates using updatedb. But the limitation of locate is that I cannot find files of a certain size or modified/created time. Can you suggest any ideas on how to find files faster, or, failing that, how to pipe the output of the locate command in a way that all the other information like size, time, etc. can be displayed or redirected to a file?
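
    One rough way to get size/time filtering on top of locate's speed is to stat each hit after the fact. A sketch of my own (the function name is made up; it assumes the locate binary is on PATH and the database is reasonably fresh, so a few hits may point at files that no longer exist):

        import os
        import subprocess
        import time

        def locate_filtered(pattern, min_bytes=0, newer_than_days=None):
            """Run locate, then stat each hit to filter by size and mtime."""
            hits = subprocess.run(["locate", pattern],
                                  capture_output=True, text=True).stdout.splitlines()
            cutoff = None
            if newer_than_days is not None:
                cutoff = time.time() - newer_than_days * 86400
            for path in hits:
                try:
                    st = os.stat(path)   # may fail if the file vanished since updatedb ran
                except OSError:
                    continue
                if st.st_size < min_bytes:
                    continue
                if cutoff is not None and st.st_mtime < cutoff:
                    continue
                yield path, st.st_size, st.st_mtime

        # Example: files matching "report" over 10 MB, modified in the last week
        for path, size, mtime in locate_filtered("report", 10 * 2**20, 7):
            print(path, size)

    The locate lookup stays fast because only the (usually small) set of name matches gets the extra stat calls.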

    Read the article

  • My approach towards SEO implementation needs improvement. [closed]

    - by Eritrea
    I have always copied and pasted the code below as a template for the meta tags on a project, but I think they are not as effective as they could possibly be, so I need to know if there is any way I can improve it. Suppose I have a site called coop.com for a company called Coop, and we do import and export as a business in France.

        <!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Transitional//EN" "http://www.w3.org/TR/xhtml1/DTD/xhtml1-transitional.dtd">
        <html xmlns="http://www.w3.org/1999/xhtml">
        <head>
        <meta http-equiv="Content-Type" content="text/html;charset=utf-8" />
        <meta name="robots" content="index, follow" />
        <meta name="description" content="Coop is a major import and export company" />
        <meta name="keywords" content="coop, coop.com, coop company, import export, import export france" />
        <meta name='REVISIT-AFTER' content='30 DAYS'>
        <title>Coop is an import and export company located in France</title>
        </head>

    The reason I am asking is that I want to know if there are better ways of constructing your SEO tags and their overall structure.

    Read the article

  • How to handle URLs with diacritic characters

    - by user359650
    I am wondering how to handle URLs which correspond to strings containing diacritics (such as å, ü, é, ...). I believe what we're seeing mostly are URLs where diacritic characters were converted to their closest ASCII equivalent, for instance Rånades på Skyttis i Ö-vik converted to ranades-pa-skyttis-i-o-vik. However, depending on the language, such a conversion might be incorrect. For instance in German, ü should be converted to ue and not just u, as seen with the below URL representing the Bayern München string as bayern-muenchen: http://www.bundesliga.de/en/liga/clubs/fc-bayern-muenchen/index.php However, what I've also noticed is that browsers can render non-ASCII characters when they are percent-encoded in the URL, which is the approach Wikipedia has chosen, for instance http://de.wikipedia.org/wiki/FC_Bayern_M%C3%BCnchen which the browser renders with the umlaut intact. Therefore I'm considering the following approach for creating URL slugs:
    (1) convert strings while replacing non-ASCII characters with their recommended ASCII representation: Bayern München - bayern-muenchen
    (2) also convert strings to percent encoding: Bayern München - bayern_m%C3%BCnchen
    and create a 301 redirect from version (1) to version (2).
    Version (1) URLs could be used for marketing purposes (e.g. mywebsite.com/bayern-muenchen), but the URLs that would end up being displayed in the browser bar would be version (2) URLs (e.g. mywebsite.com/bayern-münchen). Can you foresee particular problems with this approach? (Wikipedia is not doing it and I wonder why, apart from the fact that they don't need to market their URLs.)
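
    A rough sketch of the two slug flavours described above (my own illustration; the per-language transliteration table is only an example and a real one would need to cover each supported language):

        import re
        import unicodedata
        from urllib.parse import quote

        # Illustrative per-language overrides applied before generic ASCII folding
        GERMAN_MAP = {"ä": "ae", "ö": "oe", "ü": "ue", "ß": "ss"}

        def slug_ascii(text):
            """Version (1): closest-ASCII slug, e.g. 'Bayern München' -> 'bayern-muenchen'."""
            text = text.lower()
            for src, repl in GERMAN_MAP.items():
                text = text.replace(src, repl)
            text = unicodedata.normalize("NFKD", text).encode("ascii", "ignore").decode()
            return re.sub(r"[^a-z0-9]+", "-", text).strip("-")

        def slug_percent(text):
            """Version (2): keep the non-ASCII characters, percent-encoded for the URL."""
            text = re.sub(r"\s+", "-", text.strip().lower())
            return quote(text)

        print(slug_ascii("Bayern München"))    # bayern-muenchen
        print(slug_percent("Bayern München"))  # bayern-m%C3%BCnchen

    The 301 from version (1) to version (2) then only needs the reverse lookup from the ASCII slug to the stored original string.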

    Read the article

  • SH404SEF URLs in Joomla 1.5

    - by Tao Bellamine
    I have two modules to play with URLs: the global configuration module and the sh404sef module. The global config is set to "SEF URLs: YES" and "mod rewrite enabled: YES", and sh404sef is set to "URL optimization: NO". My problem is that even with "SEF URLs" set in the global config, my URLs still don't seem to be that "user friendly", so I turn on "URL optimization" using the sh404sef module and get better, more descriptive URLs. However, the problem I inherit from doing this is that my dynamically populated Chronoforms get messed up (only the Chronoforms; other forms are fine); these forms now show up on the homepage instead of their own reserved page. Here's an example: Old form "GOOD" URL: http://www.mycraftwork.com/index.php?option=com_content&view=article&id=94 New optimized "BAD" URL: http://www.mycraftwork.com/handthrown-pottery/alladin-teapot/index.php?option=com_content&view=article&id=94 Any help would be GREATLY appreciated! I can even turn sh404sef on and off if people are interested in seeing the issue LIVE. Thanks!! Tao Bellamine

    Read the article

  • How to determine the source of a request in a distributed service system?

    - by Kabumbus
    Map/Reduce is a great concept for sorting large quantities of data at once. But what do you do if you have small parts of data and you need to reduce them all the time? A simple example - choosing a service for a request. Imagine we have 10 services. Each provides its host with sets of request headers and post/get arguments. Each service declares it has 30 unique keys - 10 per set. service A: name, id, ... Now imagine we have a distributed services host. We have 200 machines with 10 services on each. Each service has 30 unique keys in its sets. But now, to find which service to map an incoming request to, we make our services post unique values that map to those sets. We can have up to (or more than) 10 000 such value sets on each machine per service. service A machine 1: name = Sam, id = 13245, ... service A machine 1: name = Ben, id = 33232, ... ... service A machine 100: name = Ron, id = 777888, ... So we get 200 * 10 * 30 * 30 * 10 000 == 18 000 000 000, and we get 500 requests per second on our gateway, each containing 45 items, 15 of which are just noise. And our task is to find a service for the request (at least the machine it is running on). On all machines, all over the cluster, the same services have the same rules. We can first select which service our request came to via a rules filter of 10 * 30, and then we still have 200 * 30 * 10 000 == 60 000 000. So... 60 million is definitely a problem... I hope to get an idea of mapping the 30 * 10 000 onto some artificial-neural-network-like Perceptron that outputs 1 if the 30 words (some hashes of words) from the request are correct, and 0 otherwise. And I'll send each such Perceptron for each service from each machine to the gateway, so I would have a Perceptron <-> machine map for each service. Can anyone tell me if my Perceptron idea is at least "sane"? Or do normal people do it some other way? Or are there better ANNs for such purposes?
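
    For comparison, here is a toy-scale sketch of the plain exact-match alternative (an inverted index with vote counting, not the Perceptron idea above). The register/route names and the tiny value set are made up for illustration, and at the 18-billion-entry scale described the index itself would have to be sharded across machines; the sketch only shows the matching logic:

        from collections import defaultdict

        index = defaultdict(set)     # "key=value" -> {(service, machine), ...}
        required = {}                # (service, machine) -> number of values it published

        def register(service, machine, values):
            required[(service, machine)] = len(values)
            for v in values:
                index[v].add((service, machine))

        def route(request_values):
            # request_values: the 45 items of an incoming request, 15 of them noise
            votes = defaultdict(int)
            for v in request_values:
                for cand in index.get(v, ()):
                    votes[cand] += 1
            for cand, hits in votes.items():
                if hits == required[cand]:   # every published value matched
                    return cand
            return None

        register("A", "machine-1", ["name=Sam", "id=13245"])   # toy value set
        print(route(["name=Sam", "id=13245", "noise=xyz"]))    # ('A', 'machine-1')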

    Read the article

  • Finding duplicate files?

    - by ub3rst4r
    I am going to be developing a program that detects duplicate files, and I was wondering what the best/fastest method would be to do this. I am mostly interested in what the best hash algorithm would be. For example, I was thinking of having it get the hash of each file's contents and then group the hashes that are the same. Also, should there be a limit on the maximum file size, or is there a hash that is suitable for large files?
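
    One common shape for this kind of tool is sketched below (my own illustration, not a recommendation of a specific hash; SHA-256 is just an example, and the root directory is hypothetical). Grouping candidates by size first means files with a unique size never get hashed, and reading in fixed-size chunks keeps memory flat regardless of file size, so no maximum-size limit is needed:

        import hashlib
        import os
        from collections import defaultdict

        def file_digest(path, algo="sha256", chunk=1 << 20):
            """Hash a file in chunks so large files never have to fit in memory."""
            h = hashlib.new(algo)
            with open(path, "rb") as f:
                while True:
                    block = f.read(chunk)
                    if not block:
                        break
                    h.update(block)
            return h.hexdigest()

        def find_duplicates(root):
            # Group by size first: files with a unique size can never be duplicates.
            by_size = defaultdict(list)
            for dirpath, _, names in os.walk(root):
                for name in names:
                    path = os.path.join(dirpath, name)
                    if os.path.isfile(path):
                        by_size[os.path.getsize(path)].append(path)

            by_hash = defaultdict(list)
            for paths in by_size.values():
                if len(paths) < 2:
                    continue
                for path in paths:
                    by_hash[file_digest(path)].append(path)
            return [group for group in by_hash.values() if len(group) > 1]

        for group in find_duplicates("/some/folder"):   # hypothetical root directory
            print(group)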

    Read the article

  • Looking for a customizable "Did you know..." dialog application

    - by Jorge Suárez de Lis
    I want to deploy a "Did you know..." or "Tip of the day" application at the office. It should: Show a dialog at login time with a random tip. Obviously, provide some way to store my own tips. Be easy to disable and reenable by the user itself. I'm using puppet, so I'm covered with the deployment. The tips don't even need to be gathered from a server, since I can deploy the newest tips file/database with no costs. Sure, I could hack a quick solution by using zenity and bash, but I'd like to know if there's any application out there specifically targeted at this. I don't like the zenity approach very much because it's very limited on the contents that can be displayed. No text alongside screenshots, for example. Zenity is aimed towards displaying simple dialogs.

    Read the article

  • How do I get mlocate to only index certain directories?

    - by Andrew Ferrier
    I'd like to use mlocate on my Ubuntu server, but only to index certain directories (e.g. /home and /data, but not everything under /). However, mlocate's standard configuration works the opposite way; you specify the paths you want to exclude (with PRUNEPATHS). Is there any easy way to achieve this, or any similar utility that will do what I want? (note: it should maintain an index like mlocate, so find is not acceptable, for example) Thanks.

    Read the article

  • Page rank 0 penalty

    - by mark
    I have had a WordPress blog and a www website on the same domain for about one year; together they are about 170 pages. The page rank is still 0. I understand that page rank 0 can be a penalty for duplicate content. The pages are indexed in Google, but there is still no page rank. In Google Webmaster Tools there is no indication of any problem. I asked for reconsideration of both the blog and the website a month ago. Google accepted the reconsideration request, but it did not change anything. Other sites of similar size and similar audience earn PR 4-6. Is there something I can do in order to get a fair page rank? A coworker told me that it might be the case that a link farm is using the content and I can do nothing about it. Is there a reliable way to check for something like that? I do not like to give up so quickly - is there a chance to fix this by, for example, moving to another domain?

    Read the article

  • How do I catalog files on several external hard drives that I want to store off-line? OSX

    - by raudi
    My partner, an artist, has more than 10 external hard disks (both USB and FireWire), and every 2-3 months a new one has to be added (she's working with videos and pictures). Currently it's 10 TB and growing, so too much for an affordable NAS. Right now the files are not indexed and, I think, cannot be searched with Spotlight, because not all drives can be connected at the same time. So if she wants to search for a file, she has to guess which disk or disks (mostly based on the date) and then search several drives. Now I'm looking for a solution to index/catalog the drives, something like GentibusCD, Cathy, or Disclib (all of these are unfortunately Windows only). Is there any software for OS X that will catalog all the hard drives, so she can search the catalog, find the files, and get the ID of the disk / disk name that has the content? Preferably something with a GUI so my partner can also use it easily, and preferably with thumbnails for pictures/videos (but even an equivalent to "tree /F /A" would be better than nothing).
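
    As a bare-bones fallback while looking for a proper cataloguer, a script can snapshot each drive into a searchable file before it goes back on the shelf. A sketch of my own (it assumes drives mount under /Volumes on OS X and that one CSV per disk is enough; it gives no thumbnails):

        import csv
        import os
        import sys
        import time

        def catalog_volume(volume_path, out_csv):
            """Walk one mounted drive and record every file so it can be searched offline."""
            disk = os.path.basename(volume_path.rstrip("/"))
            with open(out_csv, "w", newline="") as f:
                writer = csv.writer(f)
                writer.writerow(["disk", "path", "size_bytes", "modified"])
                for dirpath, _, names in os.walk(volume_path):
                    for name in names:
                        full = os.path.join(dirpath, name)
                        try:
                            st = os.stat(full)
                        except OSError:
                            continue
                        writer.writerow([disk, os.path.relpath(full, volume_path),
                                         st.st_size,
                                         time.strftime("%Y-%m-%d", time.localtime(st.st_mtime))])

        if __name__ == "__main__":
            # e.g. python catalog.py "/Volumes/Archive 07" archive07.csv
            catalog_volume(sys.argv[1], sys.argv[2])

    Searching across all disks is then just a text search over the CSVs while the drives stay disconnected, which covers the "which disk has it" question even if a GUI tool is still the better long-term answer.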

    Read the article

  • 301 redirect: Is this good or bad for 2 domains?

    - by Tim
    Since I couldn't find any appropriate answer to my specific question, I wanted to ask you. I've read a lot of things about the 301 redirect for moving pages and so on. A customer of mine registered a new domain last year for better search results (he included his main keyword in the domain; before, he only had a domain with his business name, which said nothing about what he does). I told him that he should do a 301 redirect so he doesn't lose his position in Google, and to redirect all new customers coming from the old domain to the new domain. After about one year in which his site had a good amount of traffic, the Google search results for his keywords are getting worse. Since he didn't maintain his website (no new content, bad content on all pages and so on), I assumed this was the problem. He gave his website to another company which also makes websites. They told him that this 301 redirection is very bad for his website. They removed it, and also updated his content and the template, so now he has the same meta keywords on every page (instead of the specific ones I put there before). He also removed the canonical tag which I placed there to ensure no duplicate content. What I am now afraid of is that without this redirect Google will now find duplicate content and therefore kick him out of the index, which would be a nightmare, since most of his customers come through his website. I need verification of the fact that the 301 isn't bad but is in fact the correct way of working with 2 domains - if possible with good sources I can point him to, since he doesn't want to hear anything about this. If someone also has a few words about the keywords and the canonical tag, I would really appreciate it! Thank you very much!

    Read the article

  • Is there a way to force Windows to recognize a network folder as a local drive, for the purposes of indexing?

    - by NoCatharsis
    I just started using the file search program Everything at work to search through documentation on our shared drives. This is after disappointments with Google Desktop and Windows Search. I love the speed of Everything, but I wish it were able to index other shared folders. My makeshift solution was to somehow force Windows to recognize the necessary shared folders as local drives, then add them to the index list. I have also considered using SyncToy, but this requires downloading all data to my drive, which could be terabytes of information - obviously not a good idea on a small company network. What would be the best solution here?

    Read the article

< Previous Page | 280 281 282 283 284 285 286 287 288 289 290 291  | Next Page >