
  • How can I get cross-browser consistent behavior for TR heights within a table with a set height? [migrated]

    - by Dan
    I have an arbitrary number of tables with an arbitrary number of rows in each, and all tables are the same height. My initial approach was to set the overall height of the table and hope the rows were smart enough to distribute themselves appropriately. That's not the case. I get four different behaviors in four browsers, and I need them all to render in at least a similar way:
    - Safari & Chrome (WebKit): all rows are equal height, creating scroll bars as needed and fitting within the table height.
    - Firefox: each row is the height necessary to fit its content, with the remaining rows overflowing out of the table. Additionally, if the content of the rows does not take up all of the height, only the part of the table with content in it takes the background (though it seems, from Firebug, that the actual table and TRs extend to the bottom of the proper table height).
    - IE: each row is the height necessary to fit its content, with the remaining rows overflowing out of the table.
    Obviously this covers only one version of each browser, and more variation would likely appear with further testing. Ideally, the browser would render TRs with less content smaller than those with more content, while still scrolling within the variable-height TRs when the overall height of the table is not enough. I could probably achieve that with JS, but can it be done with CSS? Or, if not, can the WebKit behavior be made to work across browsers? Thanks! PS: Example can be found here.
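
    A minimal JavaScript sketch of the kind of height distribution described above (not from the original question): it gives each row its natural height when everything fits, otherwise a proportional share of the fixed table height, and scrolls inside a wrapper div. The .cell-inner wrapper class and the use of querySelector are illustrative assumptions.

        function distributeRows(table, tableHeight) {
            var rows = table.rows, natural = [], naturalTotal = 0, i, h, inner;
            for (i = 0; i < rows.length; i++) {
                natural[i] = rows[i].offsetHeight;        // height the row wants
                naturalTotal += natural[i];
            }
            for (i = 0; i < rows.length; i++) {
                // natural height if everything fits, otherwise a proportional share
                h = naturalTotal <= tableHeight
                    ? natural[i]
                    : Math.floor(tableHeight * natural[i] / naturalTotal);
                rows[i].style.height = h + 'px';
                inner = rows[i].querySelector('.cell-inner');  // assumed wrapper div in each cell
                if (inner) {
                    inner.style.maxHeight = h + 'px';
                    inner.style.overflow = 'auto';             // scroll inside the capped row
                }
            }
        }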

  • What is the best way to have the same website in multiple domains?

    - by Daniel Magliola
    I would like to have the same website, selling a specific product, on multiple domains, to take advantage of keywords matching the domain name for several different searches. However, I understand that having the same content on multiple sites will unleash the wrath of Google. If I redirect all of the domains minus one to that last one, do I still get any bonus for the "magic exact domain match jackpot"? The same question applies to canonical URLs... What's the best way to approach this? Thanks!
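
    For reference, a hedged sketch of the two mechanisms mentioned above. The mod_rewrite rules send every secondary domain to the primary one with a permanent (301) redirect, and the link tag declares a canonical URL; all domain names and paths are placeholders.

        # .htaccess on the shared host (assumes Apache with mod_rewrite)
        RewriteEngine On
        RewriteCond %{HTTP_HOST} !^www\.primary-example\.com$ [NC]
        RewriteRule ^(.*)$ http://www.primary-example.com/$1 [R=301,L]

        <!-- or, if the duplicate pages must stay reachable, a canonical hint in each page's head -->
        <link rel="canonical" href="http://www.primary-example.com/product.html" />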

  • Why is Google still not indexing my #! website?

    - by Zubair
    I have been working on a website which uses #! URLs (2minutecv.com), but even after 6 weeks of the site being up and running and conforming to the Google hash-bang guidelines stated here, you can still see that Google hasn't indexed the site yet. For example, if you use Google to search for "2MinuteCV.com benefits" it does not find this page, which is referenced from the homepage. Can anyone tell me why Google isn't indexing this website? Update: Thanks for all the help with this answer. So just to make sure I understand what is wrong: according to the answers, Google never actually indexes the pages after the Javascript has run. I need to create a "shadow site" which Google indexes (which Google calls HTML snapshots). If I am right in thinking this then I can pick a winner for the bounty.
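
    For context, a hedged sketch of the (since deprecated) AJAX crawling scheme those guidelines describe: Googlebot rewrites #! URLs into an _escaped_fragment_ query and expects the server to answer with a pre-rendered HTML snapshot. The snapshot path below is purely illustrative.

        #   URL a browser requests:   http://2minutecv.com/#!benefits
        #   URL Googlebot requests:   http://2minutecv.com/?_escaped_fragment_=benefits
        # Apache mod_rewrite example serving a static snapshot to the crawler:
        RewriteEngine On
        RewriteCond %{QUERY_STRING} ^_escaped_fragment_=(.*)$
        RewriteRule ^$ /snapshots/%1.html? [L]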

  • FFmpeg & Installation on phpmyadmin

    - by Vivek
    I am attempting to build an interface where people can upload music files and listen to them through the site. The biggest problem is that someone who uploads an audio track in mp3 format wouldn't be able to play it back in Mozilla Firefox, since Firefox doesn't support mp3 playback with jPlayer, which is what I'm using. I did some research and found out that I could use command-line PHP with FFmpeg to convert the mp3 to ogg or some other supported format. I believe I understand (a little bit) how command-line PHP works, but I was wondering how I could get it installed on my hosting service (where I manage things through phpMyAdmin)? Could anyone link me to a tutorial or care to explain? I tried googling it but I just couldn't find it.
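
    A minimal PHP sketch of the conversion step being described, assuming the ffmpeg binary is installed on the host and shell_exec() is not disabled; paths and file names are placeholders.

        <?php
        // convert an uploaded mp3 to Ogg Vorbis so jPlayer has a format Firefox can play
        $input  = escapeshellarg('/path/to/uploads/track.mp3');
        $output = escapeshellarg('/path/to/uploads/track.ogg');
        // -vn: ignore any video stream, -acodec libvorbis: encode the audio as Vorbis
        shell_exec("ffmpeg -i $input -vn -acodec libvorbis $output");
        ?>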

  • Google adsense - providing access (via an additional account?) to a third party

    - by Homunculus Reticulli
    I am working with a partner who will be handling the marketing side of things for one of my websites. He has informed me that he will require access to my adsense account. I need to create an additional account for him, so that he can access and manage Google Adwords/units etc, using his own login credentials. However, despite searching Google for a while now, I can't seem to locate any information that pertains to creating additional user accounts. Does anyone know how I may do this?

  • Apache virtual hosts - Resources on website not loaded when accessed from other hostname than localhost

    - by Christian Stadegaart
    Running virtual hosts on Mac OS X 10.6.8 with Apache 2.2.22. /etc/hosts is as follows:

        127.0.0.1       localhost 3dweergave studio-12.fritz.box
        255.255.255.255 broadcasthost
        ::1             localhost
        fe80::1%lo0     localhost

    Virtual hosts configuration:

        NameVirtualHost *:80
        <VirtualHost *:80>
            DocumentRoot "/opt/local/www/3dweergave"
            ServerName 3dweergave
            ErrorLog "logs/3dweergave-error_log"
            CustomLog "logs/3dweergave-access_log" common
            <Directory "/opt/local/www/3dweergave">
                Options Indexes FollowSymLinks
                AllowOverride All
                Order allow,deny
                Allow from all
            </Directory>
        </VirtualHost>
        <VirtualHost *:80>
            ServerName main
        </VirtualHost>

    This outputs the following settings:

        *:80 is a NameVirtualHost
            default server 3dweergave (/opt/local/apache2/conf/extra/httpd-vhosts.conf:21)
            port 80 namevhost 3dweergave (/opt/local/apache2/conf/extra/httpd-vhosts.conf:21)
            port 80 namevhost main (/opt/local/apache2/conf/extra/httpd-vhosts.conf:34)

    I made 3dweergave the default server by putting it first in the list. This causes all undefined virtual host names to load 3dweergave, so http://localhost points to 3dweergave. (Normally the first in the list would be the virtual host main, and localhost would point to main, but for testing purposes I switched them.) When I navigate to http://localhost, my CakePHP default homepage shows as expected (Screenshot 1). But when I navigate to http://3dweergave, the CakePHP default homepage doesn't show as expected; it looks like the relative links to resources are not accepted by the server (Screenshot 2). For example, the CSS isn't loaded. When I open the source and click on the link, the CSS file opens in the browser without errors, but when I run FireBug while loading the webpage, the CSS file isn't retrieved. (<link rel="stylesheet" type="text/css" href="/css/cake.generic.css" />) How can I fix this unwanted behaviour?
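
    A quick, hedged way to narrow this down from the command line: request the stylesheet with an explicit Host header and compare which vhost answers, and with what status, for the two hostnames. The IP and path match the configuration above.

        curl -v -H "Host: 3dweergave" http://127.0.0.1/css/cake.generic.css
        curl -v -H "Host: localhost"  http://127.0.0.1/css/cake.generic.css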

  • Alternative to Google Adsense which has good international coverage

    - by Yoga
    I have a technical (programming-related) blog which gets around 500 visits per day; 70% are international visitors and 30% are from US/CA. Google Adsense disabled my account due to invalid clicks, so I can't use them (no need to explain here; they suck hard and never respect or listen to publishers' appeals). I have tried AdBrite and am recently using Chitika, but they give me almost nothing, e.g. Chitika: 13,865 page views, 4 clicks, $0.01. The performance is so poor I don't even want to mention it. I am already running a full-width top banner and a 350x200 box in the article body. I am researching whether any alternative would provide more revenue for my international or technical visitors. Thanks.

  • Wordpress with user login and file manager support

    - by Don
    This may be a RTFM kind of thing, so I'll apologize up front. I've been asked by a friend I used to freelance for whether there's a solution in Wordpress where users can log in and then upload their own files in a "my docs" kind of area. I've never used WP, so before I dig into their info I thought I'd see if anyone here can confirm or maybe point me to a resource. It's one of those "I'll look it up at lunch and get back to you" things, which is why I'm bugging you all before reading the docs. Thanks

  • Keeping rackspace vserver alive

    - by mit
    It appears to me that Rackspace somehow freezes cloud VMs after some idle time. This means the first request to a PHP page takes much longer to respond than subsequent requests. In some cases that's fine; in others it's not acceptable. I am currently querying the machine with wget from a different host to keep it "alive", but I wonder what frequency is necessary. Does anyone know the time period after which they send a VM to "sleep"? I guess it would be a few minutes. EDIT: There is absolutely no caching involved on the PHP site. It just recently moved from another vhost and there was never such latency on the first request.
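
    A minimal sketch of automating the wget keep-alive mentioned above, run from the other host via cron. The 5-minute interval and the URL are guesses, not a known value for Rackspace's idle timeout.

        # crontab entry on the other host
        */5 * * * * wget -q -O /dev/null http://example.com/keepalive.php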

  • Facebook Like javascript related to Time Spent Downloading a page Increase in GWT?

    - by donaldthe
    Hi, I installed the Facebook Like button (Javascript version) on my website on December 15th. Take a look at this report from Google Webmaster Central: "Crawl stats - Googlebot activity in the last 90 days". The crawl stats are from Googlebot, which as far as I know doesn't execute Javascript. Could the Facebook Like Javascript code (the XFBML version) be related to the large spike in time spent downloading a page? (By the way, the huge spike in November was caused by a mistake where every image request was getting a 301.) I'm not sure what caused the spike to go down by half somewhere in December; it may have been related to a faulty setting in web.config. I'm at a loss as to what I can do about this, or even how to tell whether this is my problem or Googlebot's crawl problem. Here is the Facebook code I am using to create the like button. It is right after the opening body tag:

        <div id="fb-root"></div>
        <script>
          window.fbAsyncInit = function() {
            FB.init({appId: 'xxxxx', status: true, cookie: true, xfbml: true});
          };
          (function() {
            var e = document.createElement('script');
            e.async = true;
            e.src = document.location.protocol + '//connect.facebook.net/en_US/all.js';
            document.getElementById('fb-root').appendChild(e);
          }());
        </script>

    and this creates the like box:

        <fb:like show_faces="false"></fb:like>

    If the Javascript can't be the problem, any ideas on where to start looking would be appreciated.

  • Need suggestions on how to create a website with an encrypted database.

    - by SFx
    Hi guys, I want to create a website where a user enters content (say a couple of sentences) which eventually gets stored in a backend database (maybe MySQL). But before the content leaves the client side, I want it to get encrypted using something on client like maybe javascript. The data will travel over the web encrypted, but more importantly, will also be permanently stored in the backend database encrypted. Is JavaScript appropriate to use for this? Would 256 bit encryption take too long? Also, how do you query an encrypted database later on if you want to pull down the content that a user may have submitted over the past 2 months? I'm looking for tips, suggestions and any pointers you guys may have in how to go about learning about and accomplishing this. Thanks!
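
    A minimal client-side sketch using the browser Web Crypto API (one option, and only in browsers that support it; everything here is illustrative, not a recommendation). Key management is the hard part and is glossed over: this example generates a throwaway key, so the data could not be decrypted later, and it also shows why querying is awkward - the server only ever sees opaque ciphertext plus the IV.

        // encrypt user text with AES-GCM before it leaves the browser (sketch only)
        async function encryptText(plainText) {
            // throwaway key for illustration; a real design must decide who holds the key
            const key = await crypto.subtle.generateKey(
                { name: 'AES-GCM', length: 256 }, true, ['encrypt', 'decrypt']);
            const iv = crypto.getRandomValues(new Uint8Array(12));   // per-message nonce
            const cipher = await crypto.subtle.encrypt(
                { name: 'AES-GCM', iv: iv }, key, new TextEncoder().encode(plainText));
            return { iv: iv, cipherText: new Uint8Array(cipher) };   // store both in MySQL
        }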

  • Where can I safely search domain whois without worrying about the search engine parking on the domain immediately after the search?

    - by Evan Plaice
    There are a lot of companies that provide domain whois, but I've heard of a lot of people who had bad experiences where the domain was bought soon after the whois search and the price was increased dramatically. Where can I get access to a domain whois lookup where I don't have to worry about that happening? Update: Apparently this practice is called Domain Front Running, and some sites go as far as to publish explicit policies stating that they don't do it. This is where a domain registrar or an intermediary (like a domain lookup site) mines the searches for possibly attractive domains and then either sells the data to a third party, or goes ahead and registers the name themselves ahead of you. In one case a registrar took advantage of what's known as the "grace period" and registered every single domain users looked up through them, holding on to them for 5 days before releasing them back into the pool at no cost to themselves. Source: domainwarning.com. And apparently, after ICANN was notified of the practice, they wrote it off as a coincidence of random 'domain tasting'. Source: See for yourself

  • One-to-many problem when implementing 301 redirects after changing URLs

    - by user16136
    I have a problem. I had an old dynamic URL scheme which I have now split into multiple static URLs, e.g. www.mydomain.com/product.php?type=1&id=2, www.mydomain.com/product.php?type=2&id=3, www.mydomain.com/product.php?type=2&id=4, etc., which I have changed to something like www.mydomain.com/electronics/radio, www.mydomain.com/electronics/television, www.mydomain.com/mobile/smartphone, etc. Google has previously indexed the dynamic URLs and the search results show the old URLs; I want search to point to the new URLs. I have kept the old URLs active, so both versions work. How can I set up a 301 redirect in this case? I run IIS and it only allows a page to be redirected to one URL. Should I deactivate the old dynamic URLs? In that case I lose all the previous SEO rankings.
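
    A hedged sketch using the IIS URL Rewrite module (assumes the module is installed; rule names, query-string patterns and target paths are illustrative). Each old dynamic URL is matched on its query string and permanently (301) redirected to its new static URL, so a single product.php can fan out to many targets.

        <!-- web.config fragment -->
        <system.webServer>
          <rewrite>
            <rules>
              <rule name="Redirect radio" stopProcessing="true">
                <match url="^product\.php$" />
                <conditions>
                  <add input="{QUERY_STRING}" pattern="^type=1&amp;id=2$" />
                </conditions>
                <action type="Redirect" url="/electronics/radio"
                        redirectType="Permanent" appendQueryString="false" />
              </rule>
              <!-- one rule per old URL, or a rewrite map when there are many -->
            </rules>
          </rewrite>
        </system.webServer>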

  • Desktop Software to monitor online status of web site and web-based application

    - by pansp
    I'm basically looking for desktop-based software which can monitor my company's website and web application for online availability. I know there are a few online services like Uptime Robot which do the same job, but I have been asked to find desktop-based software that runs in the system tray and notifies me of any downtime. A free program would be great. Any help would be appreciated. Thanks!

  • IE8 HTTPS Download Issue

    - by Jon Egerton
    I have a problem with a system I develop, related to IE8 downloading over SSL (i.e. on sites using https://...), as described in this MS KB article: http://support.microsoft.com/kb/323308. We use the HttpCacheability.NoCache option because the data being downloaded is sensitive and comes from a secured site; I don't want that data to be cached on any of the proxies the response passes through on its way back to the client. The article describing the issue details a client-side fix: a registry change to a BypassSSLNoCacheCheck setting. I don't want to loosen security at the browser end just for IE8, as the system works fine on anything more up to date. Getting all the clients to apply the hotfix is difficult at best and impossible at worst, and we need to support IE8 in the system, at least for now. So: 1: Does the detailed hotfix have any implications for security at the browser end in IE8 - does it mean the file will be cached (in a place other than where the user saves the file)? 2: Is there some way I can make these files downloadable with a change at the server end that doesn't break the security side of things?
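
    A hedged server-side sketch (ASP.NET/C#). One commonly reported workaround for the KB 323308 behaviour is to stop sending Cache-Control: no-cache on the download response and use a less restrictive directive such as private, so IE can write the file to its temporary store long enough to hand it to the save dialog. Whether that caching trade-off is acceptable for your data, and whether it actually fixes IE8 in your setup, needs testing; the file name and path are placeholders.

        Response.ClearHeaders();
        Response.Cache.SetCacheability(HttpCacheability.Private);  // instead of NoCache
        Response.Cache.SetMaxAge(TimeSpan.Zero);
        Response.AddHeader("Content-Disposition", "attachment; filename=report.pdf");
        Response.ContentType = "application/octet-stream";
        Response.WriteFile(Server.MapPath("~/secure/report.pdf"));
        Response.End();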

  • Can I include a robots meta tag outside of the head in HTML snippets intended to be SSIed?

    - by Dan
    I have a number of files in my site which are not intended for independent viewing, but rather to be AJAXed into content within the site. They obviously don't meet HTML standards (no body, head, etc.) as independent entities. I would like to prevent search engines from indexing these pages, but do not have access to /robots.txt (which would be much more ideal). My question is, could I include the following at the top of these partial HTML files and get the desired results? <meta name="robots" content="noindex, noarchive"> I guess there are two parts to this question. Will this cause any rendering issues in any browsers? Will search engines (at least Google & Bing) interpret this as intended?
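
    A hedged alternative, assuming .htaccess with mod_headers is available even though /robots.txt is not: send the directive as an X-Robots-Tag HTTP response header for the fragment files instead of putting a meta tag in markup that has no head. Google documents support for this header (Bing reportedly honours it as well); the file-name pattern below is illustrative.

        <FilesMatch "\.frag\.html$">
            Header set X-Robots-Tag "noindex, noarchive"
        </FilesMatch>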

  • My Xmap generated sitemap is not being submitted

    - by user2014989
    I'm using the Joomla Xmap component to create my sitemap. Here is the URL of my Xmap-generated sitemap: http://www.acethehimalaya.com/index.php?option=com_xmap&sitemap=1&view=xml. I tried to submit the sitemap to Google, but it reports the sitemap as empty. Can Xmap-generated sitemaps not be submitted, or am I doing something wrong?

  • SEO for images: can I use a different (cookieless) domain?

    - by Oliver
    Hello, We want to increase the value of some of our important images by means of SEO, and we want to start serving them from a different, i.e. cookieless, domain. We want to go from http://www.example.com/images/1234.jpg to http://www.example.com/germany/bavaria/landscape.jpg which can easily be done via URL rewriting. Then on the other hand, we would like to serve the image from a completely different domain, let's say http://www.examplestatic.com/germany/bavaria/landscape.jpg, to save the overhead of sending the cookie from www.example.com. Somehow I feel that this is not a good idea because I move the image away from the content by putting it on a different domain. Can anyone shed some light on this problem? Naturally, I would just use a different subdomain, e.g. img.example.com, but we already use subdomains for languages and our cookies are valid for all subdomains of example.com, so this won't help. I'd really appreciate any hints. Cheers,
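
    For what it's worth, a hedged sketch of the URL-rewriting step mentioned above (Apache mod_rewrite). The mapping from the descriptive path to the stored file name is purely illustrative; in practice it would come from the application or a rewrite map, and the same rules could live on whichever host ends up serving the images.

        RewriteEngine On
        RewriteRule ^germany/bavaria/landscape\.jpg$ /images/1234.jpg [L]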

  • Creating back-dated entries for a blog, and its site registration date

    - by open_sourse
    So I was showing a blog to a colleague and telling him how the author has been blogging regularly for over ten years now. My colleague tells me that anyone can register a domain name and start entries dated from, say, circa 2000. When I argued that the site registration date can easily show that the registration was done recently, he put forward two arguments: (1) the author can claim that he moved from an old domain name which was registered many years ago and lapsed, so he took the data and rebuilt it on the new site; (2) the author can buy an expired domain which was on the internet for many years. I am not sure whether these tricks can work to con someone into believing you have been a blogger for over a decade, but I do not have enough expertise in the topic to refute him. So I thought I would ask the wise community here at StackExchange. Can anyone give me some insight?

  • Which is better for search engines, repeated phrases or different phrases with the same meaning?

    - by George Botros
    When I'm designing an ads website I have two options:
    1. Let the advertiser choose from predefined lists to create the new ad. For example: a product list (T-Shirt, Shorts, Suit, ...) and a color list (Black, Red, ...).
    2. Let the advertiser write his own descriptive content for the product. For example: "Amazing suit with a good price".
    I like the first scenario, but which is better for search engine optimization [SEO]: repeated phrases, or different phrases with the same meaning? Note: assume each page will contain one or more ads.

  • Web Safe Area (optimal resolution) for web app design?

    - by M.A.X
    I'm in the process of designing a new web app and I'm wondering what 'Web Safe Area' I should optimize the app layout and design for. By Web Safe Area I mean the actual area available to display the website in the browser (which is influenced by monitor resolution as well as the space taken up by the browser and OS). I did some investigation and thinking on my own, but wanted to share this to see what the general opinion is. Here is what I found.

    Optimal display resolution - sources:
    - w3schools web stats seems to be the most referenced source (however, they state that these are results from their site and are biased towards tech-savvy users)
    - http://www.w3counter.com/globalstats.php (aggregate data from something like 15,000 different sites that use their tracking services)
    - StatCounter Global Stats Display Resolution (stats are based on aggregate data collected by StatCounter on a sample exceeding 15 billion pageviews per month, collected from across the StatCounter network of more than 3 million websites)
    - NetMarketShare Screen Resolutions (marketshare.hitslink.com) (a web analytics consulting firm; they get data from browsers of site visitors to their on-demand network of live stats customers; the data is compiled from approximately 160 million visitors per month)

    Display resolution summary: There is a bit of variation between the above sources, but in general, as of Jan 2011, 1024x768 is about 20%, while ~85% have a higher resolution of at least 1280x768 (1280x800 is the most common of these with 15-20% of the total web, depending on the source; 1280x1024 and 1366x768 follow behind with 9-14% of the share). My guess is that the higher resolution values will be even more common if we filter on North America, and higher still if we filter on N. American corporate users (unfortunately I couldn't find any free geographically filtered statistics). Another point to note is that the 1024x768 desktop user population is likely lower than the aforementioned 20%, since the iPad (1024x768 native display) is likely propping up those numbers (the app I'm designing is Flash based; Apple mobile devices don't support Flash, so iPad support isn't a concern). My recommendation would be to optimize around the 1280x768 constraint (note: 1280x768 is actually a relatively rare resolution, but I think it's a valid constraint range considering that 1366x768 is relatively common and 1280 is the most common horizontal resolution).

    Browser + OS constraints: To further add to the constraints, we have to subtract the space taken up by the browser (assuming IE, which is the most space consuming) and the OS (assuming WinXP-Win7). Win7 has the biggest taskbar footprint at a height of 40px (XP's and Vista's is 30px). The default IE8 view uses up 25px at the bottom of the screen with the status bar and a further 120px at the top of the screen with the window title bar and the browser UI (assuming the default 'favorites' toolbar is present; it would be 91px without the favorites toolbar). Assuming no scrollbar, we also lose a total of 4px horizontally for the window outline. This means that we are left with 583px of vertical space and 1276px of horizontal - in other words, a Web Safe Area of 1276 x 583.

    Is this a correct line of thinking? I'm really surprised that I couldn't find this type of investigation anywhere on the web. Lots of websites talk about designing for 1024x768, but that's only half the equation! There is no mention of browser/OS influences on the actual area you have to display the site/app. Any help on this would be greatly appreciated! Thanks.

    EDIT: Another caveat to my line of thinking above is that different browsers actually take up different numbers of pixels depending on the OS they're running on. For example, under WinXP IE8 takes up 142px at the top of the screen (instead of the aforementioned 120px for Win7) because the file menu shows up by default on XP, while in Win7 the file menu is hidden by default. So it looks like on WinXP + IE8 the Web Safe Area would be a mere 572px high (768px - 142 - 30 - 24 = 572).
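
    As a cross-check on the arithmetic above, a small JavaScript sketch (not from the original post) that measures what a given visitor actually has, rather than assuming fixed browser/OS overheads. The properties used are standard; older IE needs document.documentElement.clientWidth/clientHeight instead of window.innerWidth/innerHeight, and the reporting endpoint is a placeholder.

        var metrics = {
            screenWidth:    screen.width,          // full monitor resolution
            screenHeight:   screen.height,
            availWidth:     screen.availWidth,     // minus OS taskbar/dock
            availHeight:    screen.availHeight,
            viewportWidth:  window.innerWidth,     // the real "web safe area" for this visitor
            viewportHeight: window.innerHeight
        };
        // e.g. report it for aggregation (endpoint is a placeholder):
        // new Image().src = '/stats.gif?d=' + encodeURIComponent(JSON.stringify(metrics));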

  • How to get a new site indexed by Alexa

    - by JohnMerlino
    When I search for my site on Alexa, it says "Alexa Traffic Rank: No Data". So I googled the issue and came across this page: http://www.loudable.com/my-website-data-not-showing-in-alexa-get-your-website-crawled-by-alexasolution.html. It says that to get a site indexed you should click "Crawl my site" on the webmasters page, but there is no longer a link that says "Crawl my site". So, as of now, does anyone know how to get a site indexed by Alexa so that my traffic rank will display in the Alexa index?

  • How to filter traffic coming to a particular page from another page?

    - by BishKopt
    I've got page A linking to page B. There are also other pages linking to B. How can I see traffic that is coming to page B ONLY from page A? I can somehow do it via the Behavior Flow report:
    Behavior > Behavior Flow > [right click on anything] Explore traffic through here > [click edit icon] Define a page group > [right click] Group details > [dropdown] Incoming traffic
    But how do I do it in normal reports? Is there any way to filter out only the traffic coming from a particular page?

  • Marketing for Scheduled Online Events

    - by JT703
    Last year I started working with a team on our first major web project (We, the Pixels). I believe the idea is very solid, but it has a hard requirement that a group of people be on the site for the randomly scheduled events, and we are having problems getting people to come and stay for these events. What is the proper marketing approach to bring people to the site for them? We have recently done the following in an attempt to fix the problem:
    - Added email notification when new events are created
    - Added privileges based on rank
    - Added text throughout the site encouraging users to schedule events further in the future, so other users have time to see that they exist
    - Gotten involved with other communities that would find the site interesting, in order to promote (market) the site
    - Advertised using Google AdWords
    Is there a standard marketing approach for a case like this?

  • Buying country-specific domain names: .fr, .co.uk, .de, .com.au, .sg, etc.

    - by user700580
    I already bought a .com domain name on GoDaddy for my company. I would like to reserve the same name with country-specific domain extensions, but I'm not sure where to buy them or how to do it. Here are the ones I would like to buy: Europe: fr, co.uk, de, ch, es, it, nl, se, no, ru; Australia: com.au; Asia: sg. GoDaddy has all except one in Europe, Australia and Singapore. Should I find a website that sells all of them, or should I buy some of them on GoDaddy and others elsewhere? Any suggestions where to buy them? Until now I've only ever bought .com domain names, so I'm not sure how to do it. Thanks
