Search Results

Search found 886 results on 36 pages for 'no duplicates'.

Page 16/36 | < Previous Page | 12 13 14 15 16 17 18 19 20 21 22 23  | Next Page >

  • Split screen and other issues on Ubuntu 11.10 with ATI graphics card

    - by garus
    Ever since updating my Ubuntu 11.04 to 11.10 I have been experiencing graphics issues. The biggest is the "split screen" effect, where my desktop is shifted to the right, leaving the Unity bar in the middle of the screen, as shown here: http://i.imgur.com/I8nmZ.jpg This changes from boot to boot; sometimes it's on the left, sometimes in the middle. What I tried: removing the fglrx drivers completely; installing the post-update version of them, but that installation is currently broken in Ubuntu, so no go. No one is even trying to fix it (bug report https://bugs.launchpad.net/ubuntu/+source/nvidia-common/+bug/873058 and a couple of duplicates out there). Using the open source "radeon" driver gives the same result (I have a better successful-boot ratio with it; the proprietary driver rarely lets me boot). Other artifacts are serious screen tearing, weird lines flickering in random places, and general lagginess. Has anyone else experienced this? My specs: Ubuntu 11.10, AMD Radeon HD 6950 1GB.

    Read the article

  • Are unit tests really used as documentation?

    - by stijn
    I cannot count the number of times I have read statements in the vein of 'unit tests are a very important source of documentation of the code under test'. I do not deny that they are true. But personally I haven't found myself using them as documentation, ever. For the typical frameworks I use, the method declarations document their behaviour and that's all I need. And I assume the unit tests back up everything stated in that documentation, plus likely some more internal stuff, so on one hand they duplicate the documentation, while on the other they might add detail that is irrelevant. So the question is: when are unit tests used as documentation? When the comments do not cover everything? By developers extending the source? And what do they expose that is useful and relevant but that the documentation itself cannot expose?
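
    As a purely illustrative sketch (not taken from the post), here is what a unit test that doubles as documentation might look like in Python; the function slugify and its behaviour are invented for the example:

        import unittest

        def slugify(title):
            """Hypothetical function under test: lower-case a title and join its words with dashes."""
            return "-".join(title.lower().split())

        class TestSlugify(unittest.TestCase):
            # Each test name states one behaviour, which is what makes a suite readable as documentation.
            def test_spaces_collapse_to_single_dashes(self):
                self.assertEqual(slugify("Hello   World"), "hello-world")

            def test_result_is_lower_case(self):
                self.assertEqual(slugify("MiXeD Case"), "mixed-case")

        if __name__ == "__main__":
            unittest.main()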

    Read the article

  • Multiple TOC with MediaWiki using section headings in single page

    - by user1704043
    I'm running my own installation of MediaWiki, which has been great! I haven't been able to find the answer to this small problem in any post, how-to, etc. Here's the setup: Article TOC (limited to showing only H1 and H2) ==H1== ===H2=== ====H3==== ====H3==== I don't want the H3s to show up in the main table of contents, because that would make the list very long. Instead, under the H2, I would like to display another TOC with all the H3s under that heading. From my understanding, you cannot have multiple tables of contents on a single page. I've thought about making a template for each H2 that holds the H3 links, but that seems like it duplicates a lot of work and creates loads of pages. I'd love a template that pulls in all the subsection names and spits them out, but I don't see how to do that. Alternatively, is there a way to enable multiple TOCs in a custom MediaWiki install that I'm missing? Even that would get me closer to what I'm trying to do.

    Read the article

  • Verify uniqueness of new content

    - by rogerkk
    I'm working on a review site where there is a minor issue with almost-duplicate reviews across items; just a few words are changed. It would be very nice to be able to uncover these duplicates before they are approved by a moderator, and I'm hoping someone could chime in on the best strategy to get there. The site is running Ruby on Rails on a Postgres database and uses Thinking Sphinx for search (all on Heroku), and so far the best option I see is to pull all the reviews out of the database and use a library like amatch to compare the strings. That's not very efficient, so in that case I guess I'll have to limit the number/age of reviews to scan for dupes. Anyone got a better idea?
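
    The post is about Ruby on Rails and the amatch library; purely to illustrate the pairwise string-similarity idea it describes, here is a sketch using Python's standard-library difflib, with an assumed similarity cutoff:

        from difflib import SequenceMatcher

        SIMILARITY_THRESHOLD = 0.85  # assumed cutoff; tune it against real reviews

        def is_near_duplicate(new_review, existing_reviews, threshold=SIMILARITY_THRESHOLD):
            """Return True if new_review is very similar to any previously approved review."""
            return any(
                SequenceMatcher(None, new_review, old).ratio() >= threshold
                for old in existing_reviews
            )

        # The second review only changes a couple of words, so it should be flagged.
        approved = ["Great food and friendly staff, would visit again."]
        print(is_near_duplicate("Great food and friendly staff, would come again.", approved))  # True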

    Read the article

  • Bug Tracking Etiquette - Necromancy or Duplicate?

    - by Shauna
    I came across a really old (2+ years) feature request in a bug tracker for an open source project that was marked as "resolved (won't fix)" due to the lack of tools required to make the requested enhancement. Since that determination was made, new tools have been developed that would allow it to be resolved, and I'd like to bring that to the attention of the community for that application. However, I'm not sure what the generally accepted etiquette is for bug tracking in cases like this. Obviously, if the system explicitly says not to duplicate and will actively mark new items as duplicates (much in the way the SE sites do), then the answer is to follow what the system says. But what about when the system doesn't explicitly say that, or a new user can't easily find a place that states what the system's preference is? Is it generally considered better to err on the side of duplication or necromancy? Does this differ depending on whether it's a bug or a feature request?

    Read the article

  • What happened to my files?

    - by Ivan Broes
    After a successful upgrade from 11.04 to 11.10, a gradual deterioration set in -- my laptop became unstable with updates. I corrected the Aspell file, only to have another problem appear; I searched for a fix but was locked out. I had an idea, but no sooner was one problem resolved than another appeared -- going from bad to worse -- so I re-installed Windows Vista and Ubuntu 11.10 in the original partitions. Windows called the old installation "Windows old" and I had no problem recovering my files there, but Ubuntu decided to make a new Home directory. The question is: where did those files go after the re-installation -- are they deleted? If so, that's fine; duplicates are in Ubuntu One via synchronization, but I can only download one file at a time!

    Read the article

  • apt-get not recognizing downloaded archives

    - by meteors
    I installed Ubuntu Gnome 13.10. I previously had Ubuntu Gnome 13.04 and had all the archives from /var/cache/apt/archives/ stored on a removable disk. After installing 13.10 I copied all my archives back to the above-mentioned path. When I run apt-get install, it tries to fetch the packages even though the archives are already there. However, if instead of apt-get install I install the individual .deb files using dpkg -i, everything runs fine. These are the permissions of the files: How do I fix this? Previously, copying archives like this worked fine, and downloading only duplicated files I already had.

    Read the article

  • Can I use the test suite from an open source project to verify that my own 'compatible library' is compatible?

    - by Mark Booth
    The question Is it illegal to rewrite every line of an open source project in a slightly different way, and use it in a closed source project? makes me wonder what would be considered a clean-room implementation in the era of open source projects. Hypothetically, if I were to develop a library which duplicates the publicly documented interface of an open-source library, without ever looking at the source code for that library, could that code ever be considered a derivative work? Obviously it would need the same class hierarchy and method signatures so that it could be a drop-in replacement - could that, in itself, be enough to provoke a copyright claim? What if I used the test suite of the open source project to verify whether my clean implementation behaved in the same way as the original library? Would using the test suite be enough to dirty my clean code? As should be expected from a question like this, I am not looking for specific legal advice, but rather to document experiences people may have had with this sort of issue.

    Read the article

  • Removing an element not currently in a list: ValueError?

    - by Izkata
    This is something that's bothered me for a while, and I can't figure out why anyone would ever want the language to act like this: In [1]: foo = [1, 2, 3] In [2]: foo.remove(2) ; foo # okay Out[2]: [1, 3] In [3]: foo.remove(4) ; foo # not okay? --------------------------------------------------------------------------- ValueError Traceback (most recent call last) /home/izkata/<ipython console> in <module>() ValueError: list.remove(x): x not in list If the value is already not in the list, then I'd expect a silent success. Goal already achieved. Is there any real reason this was done this way? It forces awkward code that should be much shorter: for item in items_to_remove: try: thingamabob.remove(item) except ValueError: pass Instead of simply: for item in items_to_remove: thingamabob.remove(item) As an aside, no, I can't just use set(thingamabob).difference(items_to_remove) because I do have to retain both order and duplicates.
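
    A short Python sketch of the two usual workarounds (variable names here are illustrative, not from the post); contextlib.suppress reads a little tighter than the explicit try/except shown above:

        from contextlib import suppress

        items_to_remove = [2, 4]                 # 4 is not in the list

        # Option 1: check membership first.
        thingamabob = [1, 2, 2, 3]
        for item in items_to_remove:
            if item in thingamabob:
                thingamabob.remove(item)

        # Option 2: suppress the ValueError (equivalent to the try/except in the post).
        thingamabob = [1, 2, 2, 3]
        for item in items_to_remove:
            with suppress(ValueError):
                thingamabob.remove(item)

        print(thingamabob)                       # [1, 2, 3]; only the first matching 2 was removed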

    Read the article

  • Startup applications 14.04

    - by gpalthrow
    I'm trying to mount 3 partitions by default when I log in. I used to do that with Startup Applications in 12.04. However, there seems to be a nasty bug in 14.04 where entries wrongly detected as duplicate commands are removed (apparently only the command itself is taken into account, even if the arguments differ). I tried using one single entry instead of 3, putting all the devices on one line (something like /usr/bin/udisks --mount /dev/sdb1; /usr/bin/udisks --mount /dev/sdb2; /usr/bin/udisks --mount /dev/sdb3), but that does not seem to work either. What can I do to have the 3 partitions mounted by default (with their labels)?

    Read the article

  • Why are new pages not being indexed and old pages stay in the index?

    - by ZakGottlieb
    I currently have a site that was recently restructured, causing much of its content to be reposted and creating new URLs for each page. To avoid duplicates, all of the existing pages were added to the robots file. That said, it has now been over a week - I know Google has recrawled the site - and when I search for term X, it is still the old page that is ranking, with the new one nowhere to be seen. I'm assuming it's a cached version, but why are so many of the old pages still appearing in the index? Furthermore, all "tags" pages (it's a Q&A site, like this one) were also added to the robots file a few months ago, yet I think they are all still appearing in the index. Anyone got any ideas about why this is happening, and how I can get my new pages indexed?

    Read the article

  • Why does Linq to Entity Sum return null when the list is empty?

    - by Hannele
    There are quite a few questions on Stack Overflow about the Linq to Entity / Linq to SQL Sum extension method and how it returns null when the result set is empty: 1, 2, 3, 4, 5, 6, 7, and many more, as well as a blog post discussing the issue here. Now, I could go and flag these as duplicates, but I feel it is still an inconsistency in the Linq implementation. I am assuming at this point that it is not a bug, but is more or less working as designed. I understand that there are workarounds (for example, casting the field to a nullable type, so you can coalesce with ??), and I also understand that for the underlying SQL, a NULL result is expected for an empty list. But given that the result of the Sum extension for nullable types is also not nullable, why would the Linq to SQL / Linq to Entity Sum have been designed to behave this way?

    Read the article

  • What motivated Facebook to choose PHP and Twitter to choose Rails? [closed]

    - by mallieem saleie
    Possible Duplicates: Why did Facebook, Wordpress, vBulletin use PHP/MySQL? Why did Facebook use C++ beside PHP? When Facebook chose PHP and Twitter chose Ruby on Rails, I stopped and asked myself: why did they choose PHP and Ruby on Rails? Why not ASP.NET or Java? Is it because they are open source, or what? I just want to know the real reason, so I can examine their vision and decide which technology I should use if I want to build something unique.

    Read the article

  • return distinct records using subsonic 3 query and VB

    - by HR
    I have been having issues trying to return distinct records from a SubSonic 3 query using VB. My base query looks like this: Dim q As New [Select]("Region") q.From("StoreLocation") q.Where("State").IsEqualTo(ddlState.SelectedValue) q.OrderAsc("Region") This returns duplicates. How can I add a distinct clause to return distinct records? I have been trying to play around with Constraints, but to no avail. Thanks in advance, HR

    Read the article

  • DNN 5.2.3 Stored Procedures executing numerous times during page loads

    - by David Neale
    After tracing the DB activity from a DNN 5.2.3 site I noticed that there are numerous identical calls to the database whilst loading the home page for the first time (afterwards the caching works successfully). //Procedure : Number of executions exec dbo.aspnet_Membership_GetUserByName @ApplicationName=N'DotNetNuke',@UserName=N'MYDOMAIN\ME',@UpdateLastActivity=0,@CurrentTimeUtc='2010-03-24 10:04:15:223' : 22 exec dbo.GetPortalAliasByPortalID @PortalID=0 : 15 exec dbo.GetUserProfile @UserID=8 : 11 exec dbo.GetUser @PortalID=0,@UserID=8 : 10 exec dbo.GetDatabaseVersion : 2 exec dbo.GetUserCountByPortal @PortalId=0: 2 exec dbo.GetDesktopModules : 2 exec dbo.KB_XMod_Forms_List @PortalId=0 : 2 exec dbo.KB_XMod_Templates_List @PortalId=0,@TemplateType=-1 : 2 Why so many duplicates?

    Read the article

  • List to TreeSet conversion produces: "java.lang.ClassCastException: MyClass cannot be cast to java.lang.Comparable"

    - by Chuck
    List<MyClass> myclassList = (List<MyClass>) rs.get(); TreeSet<MyClass> myclassSet = new TreeSet<MyClass>(myclassList); I don't understand why this code generates this: java.lang.ClassCastException: MyClass cannot be cast to java.lang.Comparable MyClass does not implement Comparable. I just want to use a Set to filter the unique elements of the List, since my List contains unnecessary duplicates.

    Read the article

  • Why use "foo" in coding examples? [closed]

    - by ThePower
    Possible Duplicates: To foo bar, or not to foo bar: that is the question. Bit of a general question here, but it's something I would like to know! Whenever I am looking for resolutions to my C# problems online, I always come across "foo" being used as an example. Does this represent anything or is it just one of those unexplained catchy object names, used by many people in examples?

    Read the article

  • interview questions - little help

    - by Idan
    I ran into these questions in a Google search... They look pretty common, but I couldn't find a decent answer. Any tips/links? 1. Remove duplicates from an array in O(n) without an extra array. 2. Write a program whose printed output is an exact copy of its source. Needless to say, merely echoing the actual source file is not allowed.
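
    For question 1, one common reading (a sketch, not an official answer) is a single pass that uses a hash set for bookkeeping rather than a second array; whether that counts as 'no extra array' is up to the interviewer. A Python sketch:

        def dedupe_in_place(values):
            """Remove duplicates from `values` in one pass, keeping first occurrences.

            Uses a set for seen values (hash-based bookkeeping, not a second array);
            runs in O(n) expected time.
            """
            seen = set()
            write = 0
            for value in values:
                if value not in seen:
                    seen.add(value)
                    values[write] = value
                    write += 1
            del values[write:]            # truncate the tail in place
            return values

        print(dedupe_in_place([3, 1, 3, 2, 1]))   # [3, 1, 2]

    Question 2 is the classic quine exercise; solutions are language-specific and usually boil down to printing a format string applied to itself.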

    Read the article

  • Is it possible to get identical SHA1 hashes?

    - by drozzy
    Given two different strings S1 and S2 (S1 != S2), is it possible that SHA1(S1) == SHA1(S2) is True? If yes, with what probability? Is there an upper bound on the length of a string for which the probability of getting duplicates is 0? If not, why not? Thanks
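
    A small Python sketch (not from the post) showing SHA-1 digests plus the back-of-the-envelope collision math; the 2**160 figure is simply the size of SHA-1's output space:

        import hashlib

        def sha1_hex(s):
            """Hex SHA-1 digest of a string; the output space has 2**160 possible values."""
            return hashlib.sha1(s.encode("utf-8")).hexdigest()

        print(sha1_hex("hello"))
        print(sha1_hex("world"))

        # Pigeonhole: there are 2**168 strings of exactly 21 bytes but only 2**160 digests,
        # so collisions certainly exist; for shorter inputs SHA-1 is not proven
        # collision-free either. For k randomly chosen inputs, the birthday approximation
        # puts the chance of at least one collision at roughly k * (k - 1) / 2**161.
        k = 2**40                                    # about a trillion hashes
        print(k * (k - 1) / 2**161)                  # still astronomically small (~4e-25)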

    Read the article

  • What is a good programming language for beginners? [closed]

    - by user122401
    Possible Duplicates: Best ways to teach a beginner to program? What is the easiest language to start with? What is a good programming language for beginners? I am 15 and learned some C++ before, but never really got into it. Is this like learning a new language, having to do it over and over, with new stuff to learn every day?

    Read the article

  • Select distinct from multiple fields using sql

    - by Bryan
    I have 5 columns corresponding to answers in a trivia game database - right, wrong1, wrong2, wrong3, wrong4. I want to return all possible answers without duplicates, and I was hoping to accomplish this without using a temp table. Is it possible to use something similar to this: select c1, c2, count(*) from t group by c1, c2 But this returns 3 columns, and I would like one column of distinct answers. Thanks for your time
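
    One common approach (a sketch, not from the post) is to UNION the five columns, since UNION removes duplicates and returns a single column; the table and column names follow the question, demonstrated here against an in-memory SQLite database:

        import sqlite3

        conn = sqlite3.connect(":memory:")
        conn.execute('CREATE TABLE t ("right" TEXT, wrong1 TEXT, wrong2 TEXT, wrong3 TEXT, wrong4 TEXT)')
        conn.execute("INSERT INTO t VALUES ('Paris', 'London', 'Rome', 'Paris', 'Berlin')")
        conn.execute("INSERT INTO t VALUES ('Oslo', 'Paris', 'Madrid', 'Rome', 'Vienna')")

        query = """
            SELECT "right" AS answer FROM t
            UNION SELECT wrong1 FROM t
            UNION SELECT wrong2 FROM t
            UNION SELECT wrong3 FROM t
            UNION SELECT wrong4 FROM t
        """
        for (answer,) in conn.execute(query):
            print(answer)            # each distinct answer appears exactly once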

    Read the article

  • delete all but minimal values, based on two columns in SQL Server table

    - by sqlill
    How would I write a statement to accomplish the following? Let's say a table has 2 columns (both nvarchar) with the following data: col1 = 10000, 10000, 10001, 10002, 10002, 10002 and col2 = 10, 20, 10, 30, 40, 50. I'd like to keep only the following data: col1 = 10000, 10001, 10002 and col2 = 10, 10, 30, thus removing the duplicates in the first column (neither of the columns is a primary key) and keeping only the record with the minimal value in the second column for each value of the first. How do I accomplish this?

    Read the article

  • Python Sets vs Lists

    - by mvid
    In Python, which data structure is more efficient/speedy? Assuming that order is not important to me and I would be checking for duplicates anyway, is a Python set slower than a Python list?
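
    A quick sketch (not from the post) of why the usual answer is 'a set, if you are testing membership': lookups are O(1) on average for a set and O(n) for a list. Exact numbers will vary by machine:

        import timeit

        setup = "data = list(range(100_000)); as_list = data; as_set = set(data)"
        list_time = timeit.timeit("99_999 in as_list", setup=setup, number=1_000)
        set_time = timeit.timeit("99_999 in as_set", setup=setup, number=1_000)

        print(f"list membership: {list_time:.4f}s")
        print(f"set membership:  {set_time:.4f}s")   # typically several orders of magnitude faster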

    Read the article
