Search Results

Search found 13889 results on 556 pages for 'results'.


  • Drivers for GeForce 7300 GS?

    - by user1443346
    I have been searching everywhere and I cannot seem to find a driver for my GeForce 7300 GS video card. If I don't get it, the Android SDK emulator won't work. I get this error while starting the emulator up:
      X Error of failed request: BadRequest (invalid request code or no such operation)
      Major opcode of failed request: 154 (GLX)
      Minor opcode of failed request: 19 (X_GLXQueryServerString)
      Serial number of failed request: 12
      Current serial number in output stream: 12
    I looked up anything and everything, and the results I got were to get a video card driver, which I cannot find. Any help is appreciated. Thanks!

    Read the article

  • Big delay opening web pages on Ubuntu 11.10; torrent client speed also slow

    - by user54234
    The keywords for my issue are too common among other issues, so I couldn't find anything that could answer me: why does it take around 30+ seconds for any of my browsers to open a page? It happens even with google.com, with both Firefox and Chromium. This does not happen when I use Windows from exactly the same spot in my house (I've definitely got enough wi-fi signal here). Also, the standard torrent client won't hit the max download speed: I can hit 1 Mb/s with uTorrent on Windows, and can't go over 300 kbps here. I tried changing the program settings, with no results. Please help me, I really don't want to go back to Windows. Thanks in advance, I admire this community, and I'm sorry that I couldn't find something that could help me. I already solved a lot of issues without asking, but couldn't do it this time.

    Read the article

  • How to minimize the data loss when laying off a programmer?

    - by thursdaysgeek
    I was just laid off and it was the standard process that is used in the US: call the person to talk to personnel, remove access to the network while that is going on, then have someone help pack, and always have someone with the person until they are escorted from the property. That is supposed to keep an unhappy developer from deleting or damaging software or data: to minimize data loss. However, it still results in a lot of data loss, as all of the work the programmer was working on is dropped: software not checked in is possibly lost, documents not finished are lost, releases in process are slowed down or stopped, and a huge amount of knowledge could be lost. It seems the potential data loss that is prevented is more than offset by the actual data loss. How can all losses, both potential and actual, be minimized?

    Read the article

  • Is server validation necessary with client-side validators?

    - by peroija
    I recently created a .NET web app that used over 200 custom validators on one page. I wrote code for both ClientValidationFunction and OnServerValidate, which results in a ton of repetitive code. My SQL statements are parameterized, and I have functions that pull data from input fields and validate them before passing it to the SQL statements or stored procedures. The JavaScript also validates the fields before the page submits. So essentially the data is clean and valid before it even hits OnServerValidate, and clean after it anyway due to the aforementioned steps. This makes me question: is OnServerValidate really needed when I validate on the client side?
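
    For reference, a minimal sketch of the server-side half of that pairing, using the standard CustomValidator ServerValidate signature; the control name and the length rule are hypothetical, not taken from the question:

      // Code-behind handler wired to a CustomValidator's OnServerValidate
      // (requires System.Web.UI.WebControls; control name and rule are made up).
      protected void cvUserName_ServerValidate(object source, ServerValidateEventArgs args)
      {
          // args.Value holds the submitted text of the ControlToValidate.
          args.IsValid = !string.IsNullOrEmpty(args.Value) && args.Value.Length <= 50;
      }

    Whatever the duplication costs, the server hook is the only one of the two that a request cannot bypass, which is the usual argument for keeping it alongside the client-side function.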

    Read the article

  • Advantages of Search Engine-Friendly Websites

    For every webmaster, Search Engine Optimization or SEO only means one thing - tailoring web content to attract more search engine-driven traffic to his website. The higher the rank of a website in the search engine results, the more traffic it gets. Improving the rank of a website means it will have more chances of being visited by more readers and people who may become potential clients or sources of revenue. Regardless of what services or products may be offered, it is essential for the site to be search engine-friendly, as statistics show over 95% of traffic is driven by search engines.

    Read the article

  • Why can't I get 100% code coverage on a method that calls a constructor of a generic type?

    - by Martin Watts
    Today I came across a weird issue in a Visual Studio 2008 Code Coverage analysis. Consider the following method:

      private IController GetController<T>(IContext context) where T : IController, new()
      {
          IController controller = new T();
          controller.ListeningContext = context;
          controller.Plugin = this;
          return controller;
      }

    This method is called in a unit test as follows (MenuController has an empty constructor):

      controller = plugin.GetController<MenuController>(null);

    After calling this method from the unit test, the generated code coverage report shows only 85% coverage. Looking at the highlighted code, the call to the constructor of the generic type is apparently considered only partly covered. Why? Google didn't help, and MSDN didn't help at all, of course. Does anybody know?
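
    For what it's worth, a sketch of why that statement can show up as partially covered: under a new() constraint the C# compiler typically lowers new T() into a null test plus Activator.CreateInstance<T>(), roughly like this (an approximation of the generated code, not taken from the coverage report):

      // Approximate lowering of "IController controller = new T();"
      T temp = default(T);                       // non-null only when T is a value type
      if (temp == null)                          // true for reference types such as MenuController
          temp = Activator.CreateInstance<T>();  // the only branch this unit test exercises
      IController controller = temp;

    With a reference type like MenuController, only one of the two compiler-generated branches ever runs, which would explain a line being counted as partly covered even though the source contains no visible branch.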

    Read the article

  • Release Notes for 8/16/2012

    Below are the release notes for today's deployment.
    Source Tab Improvements: Revamped fork navigation to make it clearer that you are within a fork versus the parent project.
    Infrastructure: Updated CodePlex to ASP.NET MVC 4.0. RTM+1; how's that for a turnaround?
    Bug Fixes: Fixed a set of user experience issues with the in-line diff view experience. Fixed an issue with date mismatches between UTC and local time for discussions and the issue tracker. Fixed an issue where new issues would not show up in search results. Changed the default destination branch for pull requests to master.
    Have ideas on how to improve CodePlex? Please visit our suggestions page! Vote for existing ideas or submit a new one. As always, you can reach out to the CodePlex team on Twitter @codeplex or reach me directly @mgroves84.

    Read the article

  • Where in a typical rendering pipeline does visibility and shading occur?

    - by user29163
    I am taking a computer graphics course. The book and the lecture notes are vague on the order of flow between the different steps in the rendering process. For example, if we have specified a view in a scene and then want to perform a projection transformation for that given view, we have to go through a sequence of transformations. In the end we end up with a normalized "viewcube", ready to be mapped to 2D after clipping. But why do we end up with a cube (i.e. a 3D thing), when a projection results in projecting the 3D objects to 2D? (Is depth information lost?) The other line of reasoning is that all further information needed is stored within the "cube", and that visibility detection and shading are performed with respect to this cube, and then we perform rasterization.
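
    One hedged note on the depth question, assuming the usual OpenGL-style convention with near plane $n$ and far plane $f$: the perspective divide does not throw depth away, it remaps eye-space $z$ into the cube,

      $$ z_{ndc} \;=\; \frac{f+n}{f-n} \;+\; \frac{2fn}{(f-n)\,z_{eye}}, \qquad z_{eye}=-n \mapsto -1, \quad z_{eye}=-f \mapsto +1, $$

    so the normalized "viewcube" still carries a per-fragment depth value that a later visibility (z-buffer) stage can compare, even though $x$ and $y$ are already the 2D screen coordinates up to the viewport transform.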

    Read the article

  • Keyboard Issues After Resuming From Sleep on a Dell Vostro 13

    - by sammyd253
    I'm using Ubuntu 11.10 x64 on my Dell Vostro V13. Everything works great until I shut the lid of my laptop (which puts it to sleep). Once I come back and resume working in Ubuntu, typing becomes a chore: Ubuntu misses a lot of the keypresses from my keyboard, and it usually results in me mashing the keyboard until the letter appears. A full system reboot fixes the problem. Again, typing is never an issue unless I resume from sleep. Any ideas?

    Read the article

  • Disposing of ContentManager increases memory usage

    - by Havsmonstret
    I'm trying to wrap my head around how memory management works in XNA 4.0. I've created a screen management test, and when I close a screen, the ContentManager created by that screen is unloaded. I have used ANTS Memory Profiler to look at how the memory usage is altered when I do this, and it gives me some results which make me scratch my head. The game starts by loading two textures (435 kB and 48.3 kB), which puts the usage at about 61 MB. Then, when I delete the screen and run Unload on the ContentManager, the memory usage drops to 56.5 MB but instantly after goes up to 64.8 MB. Am I doing something wrong, or is this usual for XNA games? Do I have to dispose of everything the ContentManager loads separately, or do I need to do something more to the ContentManager? Thanks in advance!
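
    For reference, a minimal sketch of the per-screen ContentManager pattern described above; the screen and asset names are hypothetical, not from the question. One caveat worth noting: Unload releases what the manager loaded, but managed memory only comes back when the garbage collector actually runs, so a snapshot taken immediately afterwards can still read higher than expected.

      // One ContentManager per screen, so the screen's assets can be dropped together.
      using Microsoft.Xna.Framework;
      using Microsoft.Xna.Framework.Content;
      using Microsoft.Xna.Framework.Graphics;

      class MenuScreen
      {
          ContentManager content;
          Texture2D background;

          public void LoadContent(Game game)
          {
              // A separate manager rooted at the game's Content folder.
              content = new ContentManager(game.Services, "Content");
              background = content.Load<Texture2D>("menu/background");
          }

          public void UnloadContent()
          {
              content.Unload();    // disposes everything this manager loaded
              background = null;   // drop the reference so the GC can reclaim it
          }
      }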

    Read the article

  • How can I do individual file encryption on Dropbox?

    - by Scaine
    I'd like to set up a single directory inside Dropbox in which files are encrypted on a file-by-file basis. At the moment, I use a 2 MB TrueCrypt container inside my Dropbox which I then have to mount manually, access/change the files within, then unmount manually. At that point, the entire 2 MB uploads to Dropbox. This is a pain for a number of reasons: Dropbox sync will only occur when the TrueCrypt container is unmounted, because Dropbox only syncs files that aren't locked and mounting a container locks it. A single-byte change to one file inside that container results in the whole 2 MB being uploaded again. It doesn't scale - I was originally using a 10 MB container, but obviously the bigger the container, the longer it takes to sync when it's unmounted. I was wondering if I can somehow use LUKS to implement file-by-file encryption to get round the "container" issues.

    Read the article

  • 12.04: I need a new network card - and I think this one may work?

    - by Jason Malone
    I found this list of compatible USB wireless network adaptors: http://www.cyberciti.biz/tips/linux-usb-wireless-compatibility-adapter-list.html Listed here is the Belkin F5D8053. I'm looking for a local store to buy a suitable adaptor from, and I can't find that particular adaptor - but I can see a Belkin F7D4101az. I think this is simply a newer model - but a Google search for "ubuntu belkin f7d4101az" brings no results - not even people trying to get it to work. Nothing. So I don't want to purchase it without being sure. There's really only one PC shop that sells this sort of stuff around here - so here's a list of all my available options: http://www.pcworld.co.uk/gbuk/wireless-cards-adapters/xx_7088_70095_xx_xx/xx-criteria.html I'd really like to get one today, because I have a lot of work to do and don't really want to have to wait a week for a new adaptor - I spent 6 or 7 hours last night trying to get my Dell 1450 to work with no luck. Any help would be much appreciated.

    Read the article

  • How Microsoft copies results from users for Bing

    - by anirudha
    A few days ago I read about the problem Google raised with another search engine, Bing, which comes from Microsoft. Google says that Bing is a copycat and Microsoft says that it is not. I don't know much more about the matter, but I can predict something that may be true if Google is right in what it says. If Microsoft really copies searches from users, then it does so through the Live services used by people on the internet. See the snapshot I have: in this snapshot they say "Help improve search results, internet safety and Microsoft software by allowing Microsoft to collect and retain information about your system, the searches you do and the websites you visit. Microsoft will not use this information to personally identify or contact you. Learn more." So why do they need to research what users search for on the internet, even without contacting them, if there is no reason for it other than Bing? Well, this is one example of how they could possibly be using it for Bing.

    Read the article

  • How to prevent search engines from indexing a section of a page?

    - by BrunoLM
    I have many pages with lots of text on them. Each will always have two sections of text, and I want to prevent one section from appearing in search results while the other section must be indexed.
      <p class="please-index-me">text</p>
      <p class="get-out">never index me please</p>
    I thought that maybe if I load the "please don't index me" text with JavaScript, search engines wouldn't look for it. But I am not sure it would work, and it is not really a nice approach. I was wondering if there is a way to tell search engines "hey, you can't grab this text, move on". So, is there a way to do it?

    Read the article

  • Listing SQL Columns

    - by Bunch
    When I am writing stored procedures in SSMS, sometimes I need to know what column types are used in a table. For instance, I will know the table name but I might not remember exactly the length of a varchar column, or whether a column stores its data as an integer or a varchar. And I may not want to scroll through all the tables in Object Explorer to find the one I want. A lot of times it is easier if I can just write a quick query to pull up the information I need. The syntax to do something like this is pretty easy:

      SELECT COLUMN_NAME, DATA_TYPE, CHARACTER_MAXIMUM_LENGTH
      FROM yourdbname.information_schema.columns
      WHERE TABLE_NAME = 'yourtablename'

    After running that you will get a listing in the Results pane, just like any other query, with the column name, data type and length (if any).

    Read the article

  • Math behind multivariate testing for website optimization

    - by corkjack
    I am looking for theoretical resources (books, tutorials, etc.) to learn about making sound statistical inferences given (plenty of) multivariate website conversion data. I'm after the math involved, and cannot find any good non-marketing material on the web. The sort of questions I want to answer: How much impact does a single variable (e.g. the color of text) have? What is the correlation between variables? What type of distribution is used for modelling (Gaussian, Binomial, etc.)? When using statistics to analyze results, what should be considered the random variable: the web-page element that gets different variations, or the binary conversion-or-no-conversion outcome of an impression? There's plenty of information about different website optimization testing methods and their benefits/pitfalls, and plenty of information about multivariate statistics in general. Do you know of resources that discuss statistics in this specific context of website optimization? Thanks for any info!
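
    On the distribution question, the usual starting point (this is an assumption about common practice, not a pointer to a specific resource) is to treat each impression of a variant as a Bernoulli trial, so conversions per variant follow a Binomial, and two variants can be compared with the normal approximation:

      $$ X_i \sim \mathrm{Binomial}(n_i, p_i), \qquad \hat p_i = \frac{X_i}{n_i}, \qquad
         z = \frac{\hat p_1 - \hat p_2}{\sqrt{\hat p (1 - \hat p)\left(\frac{1}{n_1} + \frac{1}{n_2}\right)}},
         \qquad \hat p = \frac{X_1 + X_2}{n_1 + n_2}, $$

    with the page element treated as a categorical explanatory variable and the conversion-or-no-conversion outcome of an impression as the random response.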

    Read the article

  • Remove home page from Google cache

    - by Steve
    The Google Webmaster Tools Remove URL section allows you to specify a page URL to be removed from the index, the cache, or both. However, I want to remove just the home page, which is /. I want to remove it from the cache because the cached copy was indexed while the "under construction" page was up. This URL is not recognised by the Remove URL section as an individual page; instead, Google assumes you want to remove the entire website from the index. I've specified /index.php and /index.html to be removed from the cache, but neither of these is the URL listed in the search results for the home page I want removed from the cache.

    Read the article

  • Site loads Extremely slowly over HTTPS; loads perfectly over HTTP - Sudden and new issue

    - by guest234239048
    My business' website suddenly started loading extremely slowly over HTTPS last night, and the problem continues today. The facts: page load time via HTTPS is 2 minutes or more, via HTTP 3 seconds max. No updates were made to any site files in the past 2 weeks. This is on shared hosting. HTTPS has worked perfectly over the past 9 months, then suddenly failed yesterday. The hosting company and SSL issuer say there are no problems on their end. Searches for others having similar issues return no results; it appears to be just me. The site is primarily run via PHP/MySQL. Troubleshooting attempted so far: tried all major browsers and different versions - same result. Tried 2 separate ISPs - same result. Tried proxies - same result. Tried 3 separate computers - same result. I'm basically at a total loss here. Does anyone know what could cause such a thing to happen? Please help guide troubleshooting.

    Read the article

  • Bad search result due to strange linked domains

    - by VDesign
    I have a website which is not scoring well in Google's search results. I use Majestic SEO and Open Site Explorer to get a view of my link profile. I now see various backlink domains, some of them already removed, that contain sexual content or other unrelated content linking to my domain. How much influence do these strange linked domains have on my search results, even if some of them were removed a couple of months ago? I have already disavowed the openly sexual domains using the tool that Google provides.

    Read the article

  • "Never do in code what you can get the SQL server to do well for you" - Is this a recipe for a bad design?

    - by PhonicUK
    It's an idea I've heard repeated in a handful of places, some of them more or less acknowledging that once trying to solve a problem purely in SQL exceeds a certain level of complexity, you should indeed be handling it in code. The logic behind the idea is that for the large majority of cases, the database engine will do a better job of finding the most efficient way to complete your task than you could in code, especially when it comes to things like making the results conditional on operations performed on the data. Arguably, with modern engines effectively JIT'ing and caching the compiled version of your query, it makes sense on the surface. The question is whether or not leveraging your database engine in this way is inherently bad design practice (and why). The lines become blurred further when all the logic exists inside the database and you're just hitting it via an ORM.
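
    As a rough illustration of the trade-off being weighed here (the table, columns and type names below are invented, not from the post): the same conditional summary can either be phrased so the engine plans and executes it, or computed after pulling every row into code.

      using System.Collections.Generic;
      using System.Linq;

      class Order { public string Status = ""; public decimal Total; }

      static class OrderSummary
      {
          // 1) Let the database aggregate: only the summary rows cross the wire,
          //    and the optimizer can use an index on Status.
          public const string AggregateSql =
              "SELECT Status, COUNT(*) AS Cnt, SUM(Total) AS Revenue " +
              "FROM Orders GROUP BY Status";

          // 2) Aggregate in code: easy to write and test, but every Orders row is
          //    transferred and the engine never sees the grouping intent.
          public static Dictionary<string, (int Count, decimal Revenue)> SummarizeInCode(
              IEnumerable<Order> allOrders) =>
              allOrders.GroupBy(o => o.Status)
                       .ToDictionary(g => g.Key, g => (g.Count(), g.Sum(o => o.Total)));
      }

    Which side of that line a given piece of logic belongs on is exactly the judgement call the quoted advice is about.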

    Read the article

  • Top 5 Places to Get Good Quality Links & Boost Your Search Rank

    If you're looking to get a higher ranking for your website, the bottom line is that you need to get good quality links. Gone are the days when you could just rely on keywords on your site to get you to the top... or even on getting thousands of un-targeted links to blast your way to the #1 spot on Google. Now, it's all about getting high quality links that will make Google think your site is "worthy" enough to put at the top of the results... and here's where to get those links.

    Read the article

  • How to debug wifi connection?

    - by mmb
    I am using Ubuntu's Network Manager to connect to a local wifi router. My problem is that it often disconnects without any visible reason (the router is on, the internet connection seems to be working, the wifi diodes are blinking). I usually have to disable wifi in Network Manager and enable it again to get wifi connecting again. I also often see Network Manager trying to connect to my wifi, trying several times but then giving up. I even tried an external USB wifi card, but with the very same results. My question is: how can I debug this? Which logs should I read to see what is really happening when these errors occur, so I can post them here and see how to proceed?

    Read the article

  • How to improve a single-page site's search results [closed]

    - by Trisism
    Possible Duplicate: How to SEO a Single-Page website
    I created an online CV a couple of weeks ago and it has had quite a few visits. Now I want to improve the chance that it will appear in Google search results; however, my web CV is a one-page site and it contains only internal links (those with a hash #), so I can't really submit a sitemap. I could have changed the internal links to normal links processed on the server side, but there's no point in doing so. I'm very new to web SEO, so I would really appreciate it if somebody could show me what I should do with a single-page site with internal links so that it is effectively indexed by crawlers.

    Read the article

  • Is there any problem with using two slashes in the middle of a URL? [closed]

    - by joshuahedlund
    Possible Duplicate: What does the double slash mean in URLs?
    I'm working on a mod_rewrite URL structure as follows: http://example.com/search/filter1/filter2/filter3/filter4 There are some conditions where it is OK for the first attribute to be blank, but I want to keep the other attributes in the same positions. (Otherwise I can't assume that the attribute in the second position represents what I want it to represent.) However, this results in some URLs like this: http://example.com/search//filter2/filter3/filter4 This seems to work in all browsers I've tested (Chrome, Firefox, IE9, IE compatibility mode) and I'm not seeing any errors on the server side, so I can't think of any problems in using it. But it just looks wrong and weird to me, and I'm not used to seeing it. Are there any potential downsides to using a structure that encourages URLs like this, or any major reasons no one seems to use it? (Everything I search in Google assumes I'm talking about the two slashes after http:)

    Read the article

  • How can I prevent my site from being branded a "content farm?"

    - by Fredashay
    I'm building a small social Q&A site. Another Q&A site that I use was recently branded by Google as a content farm and removed from Google results. I know what Wikipedia says the definition of a content farm is (low-quality paid articles and spammy text across the page to catch search engines). That other site doesn't do those things, so there must be more to it than that. I want to make sure I don't do anything that causes Google to think my Q&A site is a content farm. What should I do, or avoid doing, in designing my site layout?

    Read the article
