Search Results

Search found 25614 results on 1025 pages for 'content filter'.

Page 297 of 1025

  • How can I solve display glitches and poor performance with ATI fglrx driver on my ThinkPad X100e?

    - by rewbs
    Hi all, I noticed that video performance on my ThinkPad X100e was very poor compared to Windows 7, so I installed the proprietary ATI fglrx drivers using the "Additional Drivers" dialogue. The system has an ATI Radeon Mobility HD 3200 chip. The result of installing the drivers is devastatingly negative, with symptoms such as skewed content in windows, and browser tabs and text boxes failing to refresh when their content changes. In fact, please excuse any typos in this post, because I can't really see what I'm typing. :) I also notice that HD video playback performance is no better - perhaps even worse - than before installing the drivers.

    Here's the output of fglrxinfo:

    display: :0.0  screen: 0
    OpenGL vendor string: ATI Technologies Inc.
    OpenGL renderer string: ATI Radeon HD 3200 Graphics
    OpenGL version string: 3.3.10237 Compatibility Profile Context

    Output of lspci | grep -i vga:

    01:05.0 VGA compatible controller: ATI Technologies Inc RS780M/RS780MN [Radeon HD 3200 Graphics]

    I'm on Ubuntu 10.10 with kernel 2.6.35-22-generic-pae. What can I try? Many thanks, -R

    Read the article

  • SEO perspective on a non-existent directory base in the URL?

    - by Sandro Dzneladze
    I'm wondering whether there is any SEO/readability/memorability benefit to using this kind of URL structure for my upcoming project: www.moviereviews.com/movie/name, considering that /movie is not a real directory, so that page doesn't exist on its own. It's something similar to the WordPress /category/ base, which is used purely for content separation on the site. What do you think? For users it should be beneficial: if the domain doesn't signal what the content is about, my extra directory will. Correct? But what about from an SEO perspective?

    Read the article

  • JS / HTML5 compatibility issue on iOS 6

    - by Dhaval
    I'm using HTML5 to play a video, and there is some content before the video, so I'm using flexroll to scroll the whole window. I'm testing it on an iPad. The problem is that on iOS 5 it works fine, but after updating to iOS 6 the screen no longer scrolls: only the video scrolls up and down, while the rest of the content stays fixed in place. I can't work out what the exact problem is. Is it a JS compatibility issue or an HTML5 video compatibility issue? Can anyone please help me figure it out? Your help will really be appreciated.

    Read the article

  • How does delicious.com avoid being sued for copyright infringement?

    - by Stanish
    With the recent redesign of delicious.com, they've added a much more graphical home page. The site continues to be a service for people to bookmark and share websites they come across on the web. The delicious home page is now made up of images taken from those linked sites; see for yourself at http://delicious.com. I would like to know what in the law allows them to do this, considering that the images represent the main content of the page and they clearly do not own the copyright to those images. I know there is some leeway given to search engines, where it is considered fair use to reproduce a small portion of the content if the aim is to lead people to the originating site. Does that apply here?

    Read the article

  • Should I use a unit testing framework to validate XML documents?

    - by christofr
    From http://www.w3.org/XML/Schema: "[XML Schemas] provide a means for defining the structure, content and semantics of XML documents." I'm using an XML Schema (XSD) to validate several large XML documents. While I'm finding plenty of support within XSD for checking the structure of my documents, there are no procedural if/else features that allow me to say, for instance, "If Country is USA, then Zipcode cannot be empty." I'm comfortable using unit testing frameworks, and could quite happily use a framework to test content integrity. Am I asking for trouble doing it this way, rather than taking an alternative approach? Has anybody tried this, with good or bad results? Edit: I didn't include this information originally, to keep the question technology-agnostic, but I would be using C# / LINQ / xUnit for deserialization and testing.
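
    A minimal sketch of the kind of content-integrity test described above, using xUnit and LINQ to XML. The file name and the Address element are placeholders for whatever the real documents contain; only the Country and Zipcode rule comes from the question.

    // Content-integrity test: every USA address must carry a Zipcode.
    using System.Linq;
    using System.Xml.Linq;
    using Xunit;

    public class AddressContentTests
    {
        [Fact]
        public void UsAddressesHaveZipcodes()
        {
            // "addresses.xml" and the element names below are illustrative only.
            XDocument doc = XDocument.Load("addresses.xml");

            var usAddressesWithoutZip = doc.Descendants("Address")
                .Where(a => (string)a.Element("Country") == "USA")
                .Where(a => string.IsNullOrWhiteSpace((string)a.Element("Zipcode")))
                .ToList();

            Assert.Empty(usAddressesWithoutZip);
        }
    }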

    Read the article

  • AdSense link units targeting better keywords?

    - by Unavailable
    I have one link unit above my navigation that targets the exact keywords my site is about; by contrast, my AdSense for content Leaderboard and Wide Skyscraper units show ads that are totally unrelated to my keyword. While my CTR increased roughly tenfold with the link unit, I noticed the CPC is lower too (though that was based on just one day; today it is higher). Would it be so bad to completely get rid of the standard content ads and use link units instead? Right now I earn very little from that site, and I think that is because of smart pricing (accidental clicks on ads my users are not interested in). In a whole day, with about 25-35 clicks, I earn as much as the keyword tool shows for a single click, even though my site ranks first for its topic. I really don't know what to do. Has anyone had a similar situation, or can anyone give some advice?

    Read the article

  • Leapfrog Crammer won't mount as a USB flash drive

    - by William
    I can't seem to get the Leapfrog Crammer study and sound system to show up as a flash drive under Ubuntu so I can transfer files to it. I don't want to install the Leapfrog bloatware; can someone help me with this? Additional information: when I plug the Crammer into my computer, it shows a 1 MB file system with a link to download the Crammer software. The Crammer does not show any other partitions in Nautilus. According to an article on the internet, the Crammer is divided into three partitions: one with a link to install the Crammer software, one with all the content (music, flash cards, etc.), and one for the firmware. I want to know how to access the content partition so I can add music to the player. Can anyone help me with this? Thanks in advance.

    Read the article

  • Enemies don't shoot. What is wrong? [closed]

    - by Bryan
    I want every enemy to shoot bullets independently. Once an enemy's bullet has left the screen, the enemy may shoot a new bullet, but not earlier. At the moment, though, the enemies don't shoot at all - not a single bullet. I guess there is something wrong with my Enemy class, but I can't find a bug and I get no error message. What is wrong?

    public class Map
    {
        Texture2D myEnemy, myBullet;
        Player Player;
        List<Enemy> enemieslist = new List<Enemy>();
        List<Bullet> bulletslist = new List<Bullet>();
        float fNextEnemy = 0.0f;
        float fEnemyFreq = 3.0f;
        int fMaxEnemy = 3;
        Vector2 Startposition = new Vector2(200, 200);
        GraphicsDeviceManager graphicsDevice;

        public Map(GraphicsDeviceManager device)
        {
            graphicsDevice = device;
        }

        public void Load(ContentManager content)
        {
            myEnemy = content.Load<Texture2D>("enemy");
            myBullet = content.Load<Texture2D>("bullet");
            Player = new Player(graphicsDevice);
            Player.Load(content);
        }

        public void Update(GameTime gameTime)
        {
            Player.Update(gameTime);
            float delta = (float)gameTime.ElapsedGameTime.TotalSeconds;

            for (int i = enemieslist.Count - 1; i >= 0; i--)
            {
                // Update enemy
                Enemy enemy = enemieslist[i];
                enemy.Update(gameTime, this.graphicsDevice, Player.playershape.Position, delta);

                // Try to remove an enemy
                if (enemy.Remove == true)
                {
                    enemieslist.Remove(enemy);
                    enemy.Remove = false;
                }
            }

            this.fNextEnemy += delta;

            // New enemy
            if (fMaxEnemy > 0)
            {
                if ((this.fNextEnemy >= fEnemyFreq) && (enemieslist.Count < 3))
                {
                    Vector2 enemyDirection = Vector2.Normalize(Player.playershape.Position - Startposition) * 100f;
                    enemieslist.Add(new Enemy(Startposition, enemyDirection, Player.playershape.Position));
                    fMaxEnemy -= 1;
                    fNextEnemy -= fEnemyFreq;
                }
            }
        }

        public void Draw(SpriteBatch batch)
        {
            Player.Draw(batch);
            foreach (Enemy enemies in enemieslist)
            {
                enemies.Draw(batch, myEnemy);
            }
            foreach (Bullet bullets in bulletslist)
            {
                bullets.Draw(batch, myBullet);
            }
        }
    }

    public class Enemy
    {
        List<Bullet> bulletslist = new List<Bullet>();
        private float nextShot = 0;
        private float shotFrequency = 2.0f;
        Vector2 vPos;
        Vector2 vMove;
        Vector2 vPlayer;
        public bool Remove;
        public bool Shot;

        public Enemy(Vector2 Pos, Vector2 Move, Vector2 Player)
        {
            this.vPos = Pos;
            this.vMove = Move;
            this.vPlayer = Player;
            this.Remove = false;
            this.Shot = false;
        }

        public void Update(GameTime gameTime, GraphicsDeviceManager graphics, Vector2 PlayerPos, float delta)
        {
            nextShot += delta;

            for (int i = bulletslist.Count - 1; i >= 0; i--)
            {
                // Update bullet
                Bullet bullets = bulletslist[i];
                bullets.Update(gameTime, graphics, delta);

                // Try to remove a bullet... collision, hit, or outside the screen.
                if (bullets.Remove == true)
                {
                    bulletslist.Remove(bullets);
                    bullets.Remove = false;
                }
            }

            if (nextShot >= shotFrequency)
            {
                this.Shot = true;
                nextShot -= shotFrequency;
            }

            // Does the enemy shoot?
            if ((Shot == true) && (bulletslist.Count < 1))
            {
                // New bullet
                Vector2 bulletDirection = Vector2.Normalize(PlayerPos - this.vPos) * 200f;
                bulletslist.Add(new Bullet(this.vPos, bulletDirection, PlayerPos));
                Shot = false;
            }

            if (!Remove)
            {
                this.vMove = Vector2.Normalize(PlayerPos - this.vPos) * 100f;
                this.vPos += this.vMove * delta;

                if (this.vPos.X > graphics.PreferredBackBufferWidth + 1) { this.Remove = true; }
                else if (this.vPos.X < -20) { this.Remove = true; }
                if (this.vPos.Y > graphics.PreferredBackBufferHeight + 1) { this.Remove = true; }
                else if (this.vPos.Y < -20) { this.Remove = true; }
            }
        }

        public void Draw(SpriteBatch batch, Texture2D myTexture)
        {
            if (!Remove)
            {
                batch.Draw(myTexture, this.vPos, Color.White);
            }
        }
    }

    public class Bullet
    {
        Vector2 vPos;
        Vector2 vMove;
        Vector2 vPlayer;
        public bool Remove;

        public Bullet(Vector2 Pos, Vector2 Move, Vector2 Player)
        {
            this.Remove = false;
            this.vPos = Pos;
            this.vMove = Move;
            this.vPlayer = Player;
        }

        public void Update(GameTime gameTime, GraphicsDeviceManager graphics, float delta)
        {
            if (!Remove)
            {
                this.vPos += this.vMove * delta;

                if (this.vPos.X > graphics.PreferredBackBufferWidth + 1) { this.Remove = true; }
                else if (this.vPos.X < -20) { this.Remove = true; }
                if (this.vPos.Y > graphics.PreferredBackBufferHeight + 1) { this.Remove = true; }
                else if (this.vPos.Y < -20) { this.Remove = true; }
            }
        }

        public void Draw(SpriteBatch spriteBatch, Texture2D myTexture)
        {
            if (!Remove)
            {
                spriteBatch.Draw(myTexture, this.vPos, Color.White);
            }
        }
    }
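
    One detail worth noting in the code as posted: each Enemy adds its bullets to its own bulletslist, but Enemy.Draw never draws that list, and Map.Draw only iterates the Map's own bulletslist, which nothing ever adds to. So even when bullets are created, they are never drawn. A minimal sketch of one way to make each enemy draw its own bullets, assuming the classes above stay otherwise unchanged:

    // Hypothetical replacement for Enemy.Draw: draw this enemy's bullets too,
    // so they no longer depend on the Map's unused bullet list.
    public void Draw(SpriteBatch batch, Texture2D enemyTexture, Texture2D bulletTexture)
    {
        if (!Remove)
        {
            batch.Draw(enemyTexture, this.vPos, Color.White);
        }
        foreach (Bullet bullet in bulletslist)
        {
            bullet.Draw(batch, bulletTexture);
        }
    }

    // In Map.Draw the call would then pass both textures:
    // enemies.Draw(batch, myEnemy, myBullet);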

    Read the article

  • Is the content on the front page considered a duplicate of the post?

    - by yibe
    I asked this same question on Stack Overflow, but it was closed as off topic, so I am posting it here. In WordPress blogs, the front page displays many posts, either in full or as excerpts. When the link to a post is clicked, the content is opened with another template file (single.php). Would the content displayed on the front page and on the post pages be considered duplicate content? Does it harm SEO in any way?

    Read the article

  • Apache: DoS with mod_deflate & range requests, Tomcat also? [migrated]

    - by VextoR
    I know that Apache has a security bug: http://seclists.org/fulldisclosure/2011/Aug/175

    So if you run this command:

    curl -I -H "Range: bytes=0-1,0-2" -s www.yandex.ru/robots.txt

    and the response says HTTP/1.1 206 Partial Content, it means the problem exists. The fact is that for Apache Tomcat (our server), curl says 206 Partial Content as well, so we need to fix it. I found a solution for the Apache HTTP Server (.htaccess, mod_headers), but not for Tomcat. I'm very new to server things and don't understand most of this, so please help.

    Read the article

  • Java EE 7 support in Eclipse 4.3

    - by arungupta
    Eclipse Kepler (4.3) features 71 different open source projects and over 58 million lines of code. One of the main themes of the release is support for Java EE 7. Kepler specifically added support for the features below:

    - Create Java EE 7 Eclipse projects, or create them using Maven
    - New facets for JPA 2.1, JSF 2.2, Servlet 3.1, JAX-RS 2.0, and EJB 3.2
    - Schemas and descriptors updated for the Java EE 7 standards (web.xml, application.xml, ejb-jar.xml, etc.)
    - Tolerance for JPA 2.1, so its features can be used without causing invalidation, plus content assist in the UI (JPA 2.1)
    - Support for NamedStoredProcedureQuery (JPA 2.1)
    - Schema generation configuration in persistence.xml (JPA 2.1)
    - Updates to the persistence.xml editor for the new JPA 2.1 properties
    - Existing features support EE 7 (Web Page Editor, Palette, EL content assist, annotations, JSF tags, Facelets, etc.)
    - Code generation wizards tolerant of EE 7 (New EJB, Servlet, JSP, etc.)

    A comprehensive list of features added in this release is available in Web Tools Platform 3.5 - New and Noteworthy. Download Eclipse 4.3 and the Java EE 7 SDK and start playing with Java EE 7! Oracle Enterprise Pack for Eclipse was released recently; it uses Eclipse Kepler RC3 but will be refreshed soon to include the final bits.

    Read the article

  • What are the search engine effects of registering the same domain on multiple top-level domains (e.g. .com, .ie, .nl)?

    - by user1020317
    I'm looking to register a few more domains for my company. I have my-company.com at the moment, but now need my-company.com.au, my-company.nl, and some others. I'm running through my options and wondering which is best:

    - Duplicate all the content of the .com site and host a replica on the other domains.
    - Buy the other domains but 301 redirect them back to the .com domain.
    - Create a fully new website with different content for each new domain, so there is no text duplication.

    We currently sell all over the world and would like to raise our search rankings in various countries. Can this be done by buying the domain for each country, and if so, how will the methods above affect our search rankings? Any other suggestions are welcome!

    Read the article

  • ASP.NET 3.5 Search Function Basic Development Concepts

    Adding a search web form to your ASP.NET website is very helpful for users who want to find related content. For example, if you are going to display a very large table, it is much more efficient to let users search for the related content and display only the matching rows back to the browser. This tutorial will show you how to develop a search function in ASP.NET 3.5 which you can add to your website.
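
    A minimal sketch of the filtering step such a search page performs, in C# with LINQ. The type, method, and control names here are illustrative, not taken from the tutorial.

    using System;
    using System.Collections.Generic;
    using System.Linq;

    public static class ProductSearch
    {
        // Case-insensitive "contains" match of the search term against each name.
        public static List<string> Filter(IEnumerable<string> names, string term)
        {
            return names
                .Where(n => n.IndexOf(term, StringComparison.OrdinalIgnoreCase) >= 0)
                .ToList();
        }
    }

    // In the search button's click handler, the result would typically be bound to a grid:
    //   gvResults.DataSource = ProductSearch.Filter(allProductNames, txtSearch.Text.Trim());
    //   gvResults.DataBind();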

    Read the article

  • In addition to Google's First Click Free, should you whitelist search engine bots past a paywall?

    - by tobek
    Our site has subscription-only pages; non-subscribed visitors see a snippet preview. As per Google's First Click Free requirements, for your first 5 hits on subscriber-only pages with .google. as the referrer, you see the full page. In addition to this, should we whitelist search engine bots so that they can index the full content? I assume this is not required for Google, which can use FCF to index our content, but what about other search engines? Is this considered cloaking? My gut says that whitelisting bots past the paywall is bad practice, but I wanted to confirm - any evidence or references would be amazing.

    Read the article

  • A drop in SERP after following webmaster guidelines [on hold]

    - by digiwig
    So here's a puzzle for all you SEO gurus out there. I recently launched my own site. I had target keywords which were ranking very well for about a month, within the top five and even appearing in first place. In an attempt to maintain good positioning, I followed the guidelines by:

    - adding robots.txt and an XML sitemap
    - redirecting non-www to www
    - redirecting index.php to the root domain
    - adding .htaccess 301 redirects for old pages
    - adding rich snippets
    - creating a Google+ account and verifying my picture so it appears in results
    - going through each of the Webmaster Tools issues with duplicate titles and meta descriptions and improving the header tag document outlines
    - even creating a few more blog posts to keep the content fresh and moving

    So now my website appears on page 2 for my target keywords - and all because I followed the guidelines. What is happening? I see competitors with stagnant content superglued to position 1.

    Read the article

  • Web optimization

    - by hmloo
    1. CSS Optimization

    Organize your CSS code
    Good CSS organization helps with the future maintainability of the site; it helps you and your team members understand the CSS more quickly and jump to specific styles.

    Structure your CSS code
    For a small project, you can break your CSS into separate blocks according to the structure of the page or the page content, for example according to the sections of your web page (e.g. Header, Main Content, Footer).

    Structure your CSS files
    For a large project, you may end up with too much CSS code in one place, so it is best to split the CSS into several files and use a master style sheet to import them. This not only organizes the style structure but also reduces server requests.

    /*-------------- Master style sheet --------------*/
    @import "Reset.css";
    @import "Structure.css";
    @import "Typography.css";
    @import "Forms.css";

    Create an index for your CSS
    Another useful practice is to create an index at the beginning of your CSS file; the index helps you quickly understand the whole CSS structure.

    /*----------------------------------------
    1. Header
    2. Navigation
    3. Main Content
    4. Sidebar
    5. Footer
    ------------------------------------------*/

    Write efficient CSS selectors
    Keep in mind that browsers match CSS selectors from right to left. The order of efficiency for selectors is:

    1. id (#myid)
    2. class (.myclass)
    3. tag (div, h1, p)
    4. adjacent sibling (h1 + p)
    5. child (ul > li)
    6. descendant (li a)
    7. universal (*)
    8. attribute (a[rel="external"])
    9. pseudo-class and pseudo-element (a:hover, li:first)

    The rightmost selector is called the "key selector", so when you write your CSS you should choose the most efficient key selector you can. Here are some best practices:

    Don't tag-qualify
    Never do this:

    div#myid
    div.myclass
    .myclass#myid

    IDs are unique, and classes are far more specific than tags, so neither needs a tag qualifier; adding one only makes the selector less efficient.

    Avoid overqualifying selectors
    For example, #nav a is more efficient than ul#nav li a.

    Don't repeat declarations
    Example:

    body { font-size: 12px; }
    h1 { font-size: 12px; font-weight: bold; }

    Since h1 already inherits the font-size from body, you don't need to repeat the attribute.

    Use 0 instead of 0px
    Always use #selector { margin: 0; }. There's no need to include px after 0, and removing all those superfluous px suffixes can reduce the size of your CSS file.

    Group declarations
    Example:

    h1 { font-size: 16pt; }
    h1 { color: #fff; }
    h1 { font-family: Arial, sans-serif; }

    It's much better to combine them:

    h1 { font-size: 16pt; color: #fff; font-family: Arial, sans-serif; }

    Group selectors
    Example:

    h1 { color: #fff; font-family: Arial, sans-serif; }
    h2 { color: #fff; font-family: Arial, sans-serif; }

    It would be much better set up as:

    h1, h2 { color: #fff; font-family: Arial, sans-serif; }

    Group attributes
    Example:

    h1 { color: #fff; font-family: Arial, sans-serif; }
    h2 { color: #fff; font-family: Arial, sans-serif; font-size: 16pt; }

    You can set different rules for specific elements after setting a rule for the group:

    h1, h2 { color: #fff; font-family: Arial, sans-serif; }
    h2 { font-size: 16pt; }

    Use shorthand properties
    Example:

    #selector { margin-top: 8px; margin-right: 4px; margin-bottom: 8px; margin-left: 4px; }

    Better:

    #selector { margin: 8px 4px 8px 4px; }

    Best:

    #selector { margin: 8px 4px; }

    How the shorthand is interpreted depends on how many values are specified for the margin or padding property.

    Likewise, instead of:

    #selector { background-image: url("logo.png"); background-position: top left; background-repeat: no-repeat; }

    use:

    #selector { background: url(logo.png) no-repeat top left; }

    2. Image Optimization

    Image Optimizer
    Image Optimizer is a free Visual Studio 2010 extension that optimizes PNG, GIF and JPG file sizes without quality loss. It uses SmushIt and PunyPNG for the optimization. Just right-click any folder or image in Solution Explorer and choose "Optimize images"; it will automatically optimize all PNG, GIF and JPEG files in that folder.

    CSS Image Sprites
    CSS image sprites combine a collection of images into a single image; the CSS background-position property is then used to shift the visible area to show the required image. Many separate images can take a long time to load and generate multiple server requests, so an image sprite reduces the number of server requests and improves site performance. You can use many online tools to generate your image sprite and its CSS, and you can also try the Sprite and Image Optimization framework released by the ASP.NET team.

    Read the article

  • Two different domains for specific languages pointing to one site

    - by user25599
    I'm developing a client's blog and he needs it to be bilingual (English and Spanish). What he wants is for users to reach the content based on the domain: e.g. Jhon enters www.domain.com and gets the English version, and Juan enters www.elsenordominio.com and gets the Spanish version. All content will be handled by PHP so that users and search engines only see the language related to the domain. What do I need to use, a header redirect or a 301? Is it bad for SEO? Will Google penalize me? I hope you guys can help me, and forgive me if my English is not good.

    Read the article

  • Is it safe to run multiple XNA ContentManager instances on multiple threads?

    - by Boinst
    My XNA project currently uses one ContentManager instance and one dedicated background thread for loading all content. I wonder: would it be safe to have multiple ContentManager instances, each in its own dedicated thread, loading different content at the same time? I'm prompted to ask because this article makes the following statement: "If there are two textures created at the same time on different threads, they will clobber the other and you will end up with some garbage in the textures." I think what the author is saying here is that if I access one ContentManager simultaneously from two threads, I'll get garbage. But what if I have a separate ContentManager instance for each thread? If no one already knows the answer from experience, I'll go ahead and try it and see what happens.
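
    A minimal sketch of the arrangement being asked about: two ContentManager instances, each confined to its own thread. This only illustrates the setup and is not a claim that XNA guarantees it is safe; Services is assumed to be the game's Game.Services provider (i.e. this code lives inside a Game subclass), and the asset names are placeholders.

    // Two ContentManager instances, each used only from its own dedicated thread.
    ContentManager levelContent = new ContentManager(Services, "Content");
    ContentManager uiContent = new ContentManager(Services, "Content");

    Thread levelThread = new Thread(() =>
    {
        // levelContent is only ever touched from this thread.
        Texture2D terrain = levelContent.Load<Texture2D>("terrain");
    });

    Thread uiThread = new Thread(() =>
    {
        // uiContent is only ever touched from this thread.
        Texture2D hud = uiContent.Load<Texture2D>("hud");
    });

    levelThread.Start();
    uiThread.Start();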

    Read the article

  • Google Analytics is not tracking all of our pages

    - by luis
    Our website is insynchq.com. In the All Pages report under Content > Site Content, we can only see data for some of our pages, like /, /getstarted, and /download. Others, like /gmail, /about, and /mobile, are not shown, even though we are sure there have been visits to them. We use a template for our pages, so the scripts that are loaded for / (for example) should also be loaded for /gmail; it doesn't seem to be a problem with the installation of the tracking code. Can anyone help? Thanks.

    Read the article

  • Bad search results due to strange linking domains

    - by VDesign
    I have a website that is not scoring well in Google's search results. I use Majestic SEO and Open Site Explorer to get a view of my link profile. I now see various backlink domains, some of them already removed, that contain sexual content or other unrelated content linking to my domain. How much influence do these strange linking domains have on my search results, even if some of them were removed a couple of months ago? I have already disavowed the openly sexual domains using the tool that Google provides.

    Read the article
