Search Results

Search found 11888 results on 476 pages for 'hero vs zero'.


  • Why should I use List<T> over IEnumerable<T>?

    - by Rowan Freeman
    In my ASP.NET MVC4 web application I use IEnumerables, trying to follow the mantra to program to the interface, not the implementation. "Return IEnumerable(Of Student) vs Return New List(Of Student)" People are telling me to use List and not IEnumerable, because lists force the query to be executed and IEnumerable does not. Is this really best practice? Is there any alternative? I feel strange using concrete objects where an interface could be used. Is my strange feeling justified?
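
    A minimal C# sketch of the difference in question (the Student class and data are illustrative, not from the original post): an IEnumerable<T> backed by a LINQ query re-runs the query every time it is enumerated, while ToList() materializes the results once.

        using System;
        using System.Collections.Generic;
        using System.Linq;

        class Student { public string Name; public int Score; }

        class Program
        {
            static void Main()
            {
                var students = new List<Student> {
                    new Student { Name = "Ann", Score = 80 },
                    new Student { Name = "Bob", Score = 55 }
                };

                // Deferred: the filter runs each time 'passing' is enumerated.
                IEnumerable<Student> passing = students.Where(s => s.Score >= 60);

                // Materialized: the filter runs once, here, and the result is fixed.
                List<Student> snapshot = passing.ToList();

                students.Add(new Student { Name = "Cho", Score = 90 });

                Console.WriteLine(passing.Count());  // 2 -- re-evaluates, sees the new element
                Console.WriteLine(snapshot.Count);   // 1 -- taken before the add, does not
            }
        }

    Whether that deferred behavior is a bug or a feature depends on whether the underlying source can change between enumerations, which is arguably the real question to ask before choosing a return type.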

    Read the article

  • Version Changes: How considerable are the compatibility issues in a project?

    - by Aditya P
    For example, consider ActionScript 2.0 (object-based, but most code does not use much OOP) vs. 3.0 (heavily OOP): in approach, programming style, and features it is practically a whole new scripting language. In PHP we can see current versions going from 3 to 5, with brief version changes. Question: for developers who work on PHP, is it easy to migrate from version to version? Question: are there any extensive compatibility issues, forward or backward? Question: does your project stick to a particular version until the end? Question: does the programming style and approach change from version to version? Question: if I had to get started on PHP to contribute to a project built on earlier versions, would learning the latest version be counterproductive towards this aim? Some related topics I have come across on SE: How should I be keeping track of php script version/changes? What is happening to PHP 6? It would be really helpful if you could address these questions directly.

    Read the article

  • How to explain OOP to a matlab programmer?

    - by Oak
    I have a lot of friends who come from electrical/physical/mechanical engineering backgrounds and are curious about what "OOP" is all about. They all know Matlab quite well, so they do have a basic programming background, but they have a very hard time grasping a complex type system which can benefit from the concepts OOP introduces. Can anyone propose a way I can try to explain it to them? I'm just not familiar with Matlab myself, so I'm having trouble finding parallels. I think using simple examples like shapes or animals is a bit too abstract for those engineers. So far I've tried using a Matrix interface vs. array-based/sparse/whatever implementations, but that didn't work so well, probably because different matrix types are already well supported in Matlab.
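
    If it helps to make that matrix example concrete, here is a minimal C# sketch (all names mine, purely illustrative): the "a matrix is a matrix" intuition becomes an interface, and dense vs. sparse storage become interchangeable implementations behind it.

        using System;
        using System.Collections.Generic;

        interface IMatrix
        {
            double Get(int row, int col);
            void Set(int row, int col, double value);
        }

        // Dense storage: a plain 2-D array, good for mostly-full matrices.
        class DenseMatrix : IMatrix
        {
            private readonly double[,] cells;
            public DenseMatrix(int rows, int cols) { cells = new double[rows, cols]; }
            public double Get(int row, int col) { return cells[row, col]; }
            public void Set(int row, int col, double value) { cells[row, col] = value; }
        }

        // Sparse storage: only non-zero entries are kept, good for huge, mostly-empty matrices.
        class SparseMatrix : IMatrix
        {
            private readonly Dictionary<Tuple<int, int>, double> cells =
                new Dictionary<Tuple<int, int>, double>();
            public double Get(int row, int col)
            {
                double v;
                return cells.TryGetValue(Tuple.Create(row, col), out v) ? v : 0.0;
            }
            public void Set(int row, int col, double value) { cells[Tuple.Create(row, col)] = value; }
        }

        class Demo
        {
            // Works with either implementation -- the caller never cares which one it got.
            static double Trace(IMatrix m, int n)
            {
                double sum = 0;
                for (int i = 0; i < n; i++) sum += m.Get(i, i);
                return sum;
            }

            static void Main()
            {
                IMatrix dense = new DenseMatrix(3, 3);
                IMatrix sparse = new SparseMatrix();
                dense.Set(1, 1, 5); sparse.Set(1, 1, 5);
                Console.WriteLine(Trace(dense, 3) + " " + Trace(sparse, 3)); // 5 5
            }
        }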

    Read the article

  • Which programming language do you think is the most beautiful and which the ugliest? [closed]

    - by user1598390
    I would like to hear opinions about which programming language you consider to produce the most legible, self-documenting, intention-transparent, beautiful-looking code, and which produces the most messy-looking, unintentionally obfuscated, ugly code, regardless of it being good code. Let me clarify: I'm talking about the syntax, the "noise vs. signal", the structure of the language. Assignment operators. Dereferencing. Whether it uses dot syntax or "->" syntax. Which languages do you think are inherently harder to read than others, all other things being equal, like, say, code quality, absence of code smells, etc.?

    Read the article

  • Walking Through a Seaside Village Wallpaper

    - by Asian Angel
    Sea View [DesktopNexus]

    Read the article

  • API Design Techniques

    - by Dehumanizer
    Is it right, or more beautiful, to name functions with a prefix, as in Qt? Or to use "many" namespaces but 'normal' names for functions? For example: slOpenFile(); // "sl" means "some lib" vs some_lib::file_functions::openFile(); UPD: I've read somewhere that the first variant (using some prefix) is better, because API users can search the documentation and the Internet faster: by typing the magic prefix, the search engine starts to suggest the exact functions. Is that reason enough to use the first variant?
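
    A minimal C# sketch of the two styles (all names hypothetical; the original example is C++, but the trade-off is the same): the prefix makes every call self-identifying and searchable as a single token, while the namespace keeps the names themselves short at the cost of relying on qualification or using directives for context.

        using System;

        // Variant 1: flat API; the "Sl" prefix carries the library identity in every name.
        static class PrefixedApi
        {
            public static void SlOpenFile(string path)
            {
                Console.WriteLine("opening " + path);
            }
        }

        // Variant 2: the identity lives in the namespace path; the names stay plain.
        namespace SomeLib.FileFunctions
        {
            static class Files
            {
                public static void OpenFile(string path)
                {
                    Console.WriteLine("opening " + path);
                }
            }
        }

        class Demo
        {
            static void Main()
            {
                PrefixedApi.SlOpenFile("a.txt");                 // searching "SlOpenFile" finds this anywhere
                SomeLib.FileFunctions.Files.OpenFile("a.txt");   // context comes from the qualified path
            }
        }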

    Read the article

  • DBA Best Practices - A Blog Series: Episode 1 - Backups

    - by Argenis
    This blog post is part of the DBA Best Practices series, in which various topics of concern for daily database operations are discussed. Your feedback and comments are very much welcome, so please drop by the comments section and be sure to leave your thoughts on the subject.

    Morning Coffee. When I was a DBA, the first thing I did when I sat down at my desk at work was check that all backups had completed successfully. It really was more of a ritual, since I had a dual system in place to check for backup completion: 1) the scheduled agent jobs to back up the databases were set to alert the NOC on failure, and 2) I had a script run from a central server every so often to check for any backup failures. Why the redundancy, you might ask. Well, for one, I was once bitten by the fact that database mail doesn't work 100% of the time. Potential causes for failure include issues on the SMTP box that relays your server email, firewall problems, DNS issues, etc. And so, to be sure that my backups completed fine, I needed to rely on a mechanism other than having the servers do the talking - I needed to interrogate the servers and ask each one if an issue had occurred. This is why I had a script run every so often. Some of you might have monitoring tools in place, like Microsoft System Center Operations Manager (SCOM) or similar 3rd party products, that would track all these things for you. But at that moment, we had no recourse but to write our own Powershell scripts to do it. Now, it goes without saying that if you don't have backups in place, you might as well find another career. Your most sacred job as a DBA is to protect the data from a disaster, and only properly safeguarded backups can offer you peace of mind here.

    "But, we have a cluster... we don't need backups." Sadly, I've heard this line more often than I would have liked. You need to understand that a cluster is built on shared storage, and that is precisely your single point of failure. A cluster will protect you from an issue at the Operating System level, and also from an outage of any SQL-related service or dependent device. But it will most definitely NOT protect you against corruption, nor will it protect you against somebody deleting data from a table - accidentally or otherwise.

    Backups, fine. How often do I take a backup? The answer to this is something you will hear frequently when working with databases: it depends. What does it depend on? For one, you need to understand how much data your business is willing to lose. This is what's called the Recovery Point Objective, or RPO. If you don't know how much data your business is willing to lose, you need to have an honest and realistic conversation about data loss expectations with your customers, internal or external. From my experience, their first answer to the question "how much data loss can you withstand?" will be "zero". In that case, you will need to explain how zero data loss is very difficult and very costly to achieve, even in today's computing environments. Do you want to go ahead and take full backups of all your databases every hour, or even every day? Probably not, because of the impact that taking a full backup can have on a system. That's what differential and transaction log backups are for. Have I answered the question of how often to take a backup? No, and I did that on purpose. You also need to think about how much time you have to recover from any event that requires you to restore your databases. This is what's called the Recovery Time Objective, or RTO. Again, if you go ask your customer how long an outage they can withstand, at first you will get a completely unrealistic number - and that will be your starting point for discussing a solution that is cost effective. The point that I'm trying to get across is that you need to have a plan. This plan needs to be practiced and tested. Like a football playbook, you need to rehearse the moves you'll perform when the time comes. How often is up to you, and the objective is that you feel better about yourself and the steps you need to follow when emergency strikes.

    A backup is nothing more than an untested restore. Backups are files. Files are prone to corruption. Put those two together and realize how you feel about those backups sitting on that network drive. When was the last time you restored any of those? Restoring your backups on another box - which, by the way, doesn't have to match the specs of your production server - will give you two things: 1) peace of mind, because now you know that your backups are good, and 2) a place to offload your consistency checks with DBCC CHECKDB or any of the other DBCC commands like CHECKTABLE or CHECKCATALOG. This is a great strategy for VLDBs that cannot withstand the additional load created by the consistency checks. If you choose to offload your consistency checks to another server, though, be sure to run DBCC CHECKDB WITH PHYSICAL_ONLY on the production server, and if you're using SQL Server 2008 R2 SP1 CU4 and above, be sure to enable trace flags 2562 and/or 2549, which will speed up the PHYSICAL_ONLY checks further - you can read more about this enhancement here.

    Back to the "how often" question for a second. If you have the disk, the network latency, and the system resources to do so, why not back up the transaction log often? As in every 5 minutes, or even less than that? There's not much downside to doing it, as you will have to clear the log with a backup sooner rather than later, lest you risk running out of space on your tlog, or even your drive. The one drawback to this approach is that you will have more files to deal with at restore time, and processing each file will add a bit of extra time to the entire process. But it might be worth that time, knowing that you minimized the amount of data lost. Again, test your plan to make sure that it matches your particular needs.

    Where to back up to? Network share? Locally? SAN volume? This is another topic where everybody has a favorite choice. So I'll stick to mentioning what I like to do and what I consider to be the best practice in this regard. I like to back up to a SAN volume, i.e., a drive that actually lives in the SAN and can be easily attached to another server in a pinch, saving you valuable time - you wouldn't need to restore files over the network (slow) or pull drives out of a dead server (been there, done that; it's also slow!). The key is to have a copy of those backup files made quickly and, if at all possible, to a remote target in a different datacenter - or even the cloud. There are plenty of solutions out there that can help you put such a solution together. That right there is the first step towards a practical Disaster Recovery plan. But there's much more to DR, and that's material for a different blog post in this series.
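
    As a sketch of the "interrogate the servers" idea in C# rather than the Powershell the post mentions (the instance names and the one-day threshold are assumptions): ask msdb on each server for the newest full backup per database and flag anything stale or missing.

        using System;
        using System.Data.SqlClient;

        class BackupCheck
        {
            static void Main()
            {
                // Hypothetical central-server list; substitute your real instance names.
                string[] servers = { "SQLPROD01", "SQLPROD02" };
                const string query =
                    "SELECT d.name, MAX(b.backup_finish_date) " +
                    "FROM sys.databases d " +
                    "LEFT JOIN msdb.dbo.backupset b ON b.database_name = d.name AND b.type = 'D' " +
                    "WHERE d.name <> 'tempdb' " +
                    "GROUP BY d.name";

                foreach (string server in servers)
                {
                    using (var conn = new SqlConnection("Server=" + server + ";Integrated Security=true"))
                    {
                        conn.Open();
                        using (var cmd = new SqlCommand(query, conn))
                        using (var reader = cmd.ExecuteReader())
                        {
                            while (reader.Read())
                            {
                                string db = reader.GetString(0);
                                DateTime? last = reader.IsDBNull(1)
                                    ? (DateTime?)null
                                    : reader.GetDateTime(1);
                                // Flag databases with no full backup in the last day.
                                if (last == null || last < DateTime.Now.AddDays(-1))
                                    Console.WriteLine(server + "/" + db + ": last full backup " +
                                                      (last == null ? "NEVER" : last.ToString()));
                            }
                        }
                    }
                }
            }
        }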

    Read the article

  • I removed nvidia driver and lshw -c video still shows nvidia

    - by sinekonata
    Today I tried to activate the newer experimental drivers, and both 304 and 310 failed to even install. So I tried the regular nvidia driver 295.40 for the 20th time today (I had lag issues and was testing Nouveau vs. Nvidia with dual monitors and Unity 2D/3D). From my tty1 I tried to remove nvidia: sudo apt-get remove nvidia-settings nvidia-current (and purge too); reboot, nothing. So when lshw -c video still displayed nvidia as my driver, I tried sudo rm /etc/X11/xorg.conf, since I read Ubuntu would "reset" the GUI config; reboot, nothing. So next I tried sudo jockey-text --disable=xorg:nvidia_current And nothing has worked...

    Read the article

  • Enterprise Architecture - Wikipedia

    - by pat.shepherd
    I was looking at the Wikipedia entry for EA and found this chart, which does a great job showing the differences between ENTERPRISE Architecture and SOLUTION Architecture across several categories. This really gets at the heart of a misconception many people have about what EA is and where it sits in the grand business-to-technical-detail continuum. The following image from the 2006 FEA Practice Guidance of the US OMB sheds light on the relationship between enterprise architecture and segment (BPR) or solution architectures. From this figure one can see that software architecture is truly a solution architecture discipline, for example. Enterprise architecture - Wikipedia, the free encyclopedia

    Read the article

  • How vibrations might affect Kinect depth measurements

    - by dreza
    I'm currently doing some research into development with the Microsoft Kinect product. My project manager has come up with a potential design for mounting the camera to do the capturing. However, the solution means that the camera might be subject to vibrations, as the platform it is on is directly connected to where the subjects will be moving. It was my thought that vibrations would affect the quality of the results, but I could not come up with a viable explanation as to why, other than that it's the same as holding a camera in your hand while your hand is shaking vs. using a tripod. Do vibrations affect the depth measurements on a Kinect, and if so, how can I explain this in simple terms to my PM to help come up with a better design for attaching the sensor?

    Read the article

  • At what visitor share do you stop supporting a given browser?

    - by adam
    I'm lead dev for a large website which has a higher than average percentage of IE6 users - about 4.4% of our audience. Our new version is going to make use of progressive enhancement - including transitions and effects as well as rounded corners, gradients, web fonts and other CSS techniques. Obviously there are cross-browser ways to achieve most of these things which require various amounts of work to implement. What I'm currently looking into - and what I'd like your experiences of - is how to decide at what point we draw the line between providing an enhanced experience vs just supporting the functionality. FYI, I believe that this question meets the six guidelines for great subjective questions as defined in the FAQ. I'm after answers detailing why and how, not too short, with constructive comments, experiences, facts and references. Thanks! Adam

    Read the article

  • Data-tier Applications in SQL Server 2008 R2

    - by BuckWoody
    I had the privilege of presenting to the Adelaide SQL Server User Group in Australia last evening, and I covered data-tier applications (DAC) and the Utility Control Point (UCP) from SQL Server 2008 R2. Here are some links from that presentation: Whitepaper: http://msdn.microsoft.com/en-us/library/ff381683.aspx Tutorials: http://msdn.microsoft.com/en-us/library/ee210554(SQL.105).aspx From Visual Studio: http://msdn.microsoft.com/en-us/library/dd193245(VS.100).aspx Restrictions and capabilities by Edition: http://msdn.microsoft.com/en-us/library/cc645993(SQL.105).aspx Glenn Berry's blog entry on scripts for UCP/DAC: http://www.sqlservercentral.com/blogs/glennberry/archive/2010/05/19/sql-server-utility-script-from-24-hours-of-pass.aspx Objects supported by a DAC: http://msdn.microsoft.com/en-us/library/ee210549(SQL.105).aspx

    Read the article

  • Q1 2010 SP1 versions of Telerik ASP.NET AJAX and MVC suites are live

    The Q1 2010 SP1 releases of RadControls for ASP.NET AJAX and Telerik Extensions for ASP.NET MVC are available for download. Lots of fixes and several new features/enhancements are incorporated in these SP releases. The highlight of this drop is official support for VS 2010/.NET 4 RTM. Find more details by browsing the online demos, documentation and release notes below: RadControls for ASP.NET AJAX: Release notes | Demos | Documentation. Telerik Extensions for ASP.NET MVC: Release notes | Demos | Documentation.

    Read the article

  • New videos available #dax #ssas #powerpivot

    - by Marco Russo (SQLBI)
    The collaboration Alberto and I started with Project Botticelli is starting to produce content. At this point we have three videos available: DAX in Action shows the power of DAX in PowerPivot, solving common patterns that are not so easy or fast to solve in other languages; DAX: Calculated Columns vs. Measures shows the difference between calculated columns and measures in DAX; Introduction to DAX has content corresponding to the title! The first two videos are freely available; the third one is longer and visible only to subscribers. The goal for this series of videos is to reach advanced Excel users and BI developers who are new to DAX. If we had to categorize this content, it's a sort of level-200 conference session. I don't expect readers of this blog to watch these videos (if not for the sake of curiosity!), but if you have to explain this subject to anyone else and you have other priorities... well, you can add this post to the list of resources you provide for studying the subject!

    Read the article

  • What are examples of games with "minimalist" models/art assets

    - by Ken
    When teaching game development, my students obsess over building realistic or complex art/models/animation, and spend way too much time trying to get accurate collision detection between two 3D models (despite my best efforts). However, I would like them to spend more time thinking about developing the game mechanics, interaction and gameplay. I'm looking for games where the visuals are simple but the gameplay is good. Things I am thinking of are Cubes vs. Spheres or Impossible Game. What are more examples of visually simple (preferably 3D) games to help inspire my students?

    Read the article

  • Most efficient AABB - Ray intersection algorithm for input/output distance calculation

    - by Tobbey
    Thanks to the following thread: most efficient AABB vs Ray collision algorithms, I have seen very fast algorithms for computing the ray/AABB intersection point. Unfortunately, most of the recent algorithms are accelerated by omitting the "output" intersection point of the box. In my application, I would be interested in getting both the distance from the ray source to the input of the bounding box, t0, and to its output, t1. I have seen, for instance, that Eisemann designed a very fast version compared to Plücker, Smits, ..., but it does not compare the case where both input/output distances must be computed; see: http://www.cg.cs.tu-bs.de/publications/Eisemann07FRA/ Does someone know where I can find more information on algorithm performance for this specific input/output problem? Thank you in advance
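
    For reference, the classic slab method already yields both values; a minimal C# sketch (array-based vectors, names mine) that returns the entry distance t0 and the exit distance t1:

        using System;

        class RayAabb
        {
            // Slab method: intersect the ray with each axis-aligned slab and keep the
            // overlap of the three [tNear, tFar] intervals. Returns true on hit.
            static bool Intersect(float[] origin, float[] dir, float[] min, float[] max,
                                  out float t0, out float t1)
            {
                t0 = float.NegativeInfinity;
                t1 = float.PositiveInfinity;
                for (int i = 0; i < 3; i++)
                {
                    // IEEE division by zero gives +/-infinity, which handles axis-parallel
                    // rays (origins exactly on a slab plane can still produce NaN, a known
                    // caveat of this formulation).
                    float inv = 1.0f / dir[i];
                    float tNear = (min[i] - origin[i]) * inv;
                    float tFar = (max[i] - origin[i]) * inv;
                    if (tNear > tFar) { float tmp = tNear; tNear = tFar; tFar = tmp; }
                    if (tNear > t0) t0 = tNear;
                    if (tFar < t1) t1 = tFar;
                    if (t0 > t1) return false; // slab intervals no longer overlap: miss
                }
                return t1 >= 0; // reject boxes entirely behind the ray origin
            }

            static void Main()
            {
                float t0, t1;
                bool hit = Intersect(new float[] { 0, 0, -5 }, new float[] { 0, 0, 1 },
                                     new float[] { -1, -1, -1 }, new float[] { 1, 1, 1 },
                                     out t0, out t1);
                Console.WriteLine(hit + " t0=" + t0 + " t1=" + t1); // True t0=4 t1=6
            }
        }

    The faster variants discussed in the cited thread mostly save work by early-outing once a hit is certain, which is exactly the step that discards t1; keeping both distances largely means keeping the full interval test above.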

    Read the article

  • Why does Javascript use JSON.stringify instead of JSON.serialize?

    - by Chase Florell
    I'm just wondering about "stringify" vs "serialize". To me they're the same thing (though I could be wrong), but in my past experience (mostly with ASP.NET) I use Serialize() and never Stringify(). I know I can create a simple alias in JavaScript, // either JSON.serialize = function(input) { return JSON.stringify(input); }; // or JSON.serialize = JSON.stringify; http://jsfiddle.net/HKKUb/ but I'm just wondering about the difference between the two and why stringify was chosen. For comparison purposes, here's how you serialize an object to an XML string in C#: public static string SerializeObject<T>(this T toSerialize) { XmlSerializer xmlSerializer = new XmlSerializer(toSerialize.GetType()); StringWriter textWriter = new StringWriter(); xmlSerializer.Serialize(textWriter, toSerialize); return textWriter.ToString(); }
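
    For what it's worth, the .NET side of that comparison can produce JSON too; a minimal sketch using JavaScriptSerializer from System.Web.Script.Serialization (the Person class is hypothetical), which plays roughly the role JSON.stringify plays in the browser and, notably, is named Serialize:

        using System;
        using System.Web.Script.Serialization;

        class Person
        {
            public string Name { get; set; }
            public int Age { get; set; }
        }

        class Demo
        {
            static void Main()
            {
                // Serialize an object graph to a JSON string, like JSON.stringify(obj).
                var serializer = new JavaScriptSerializer();
                string json = serializer.Serialize(new Person { Name = "Ann", Age = 30 });
                Console.WriteLine(json); // {"Name":"Ann","Age":30}
            }
        }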

    Read the article

  • Just too bright

    - by Bunch
    Like a lot of folks, I am using SSMS and VS pretty much all day, but staring at text on the stark white background can be a bit much for my eyes after a while. I have seen quite a few different "themes" for these apps which change all the colors around to make them easier on your eyes. Some of them are pretty cool, but all I really wanted was to dim the background a little, not radically change the way everything looked. Since the stock colors for comments, breakpoints, keywords and the like are so familiar, I wanted a background that did not interfere with those colors. So I picked the following custom color for the item background; it comes off as a parchment type color: Hue: 42, Sat: 123, Lum: 221 (Red: 244, Green: 245, Blue: 224).

    Read the article

  • The provider did not return a ProviderManifestToken string Entity Framework

    - by PearlFactory
    Moved from home to work, went to fire up my project, and after a long pause got "The provider did not return a ProviderManifestToken string", or the even more obscure ProviderIncompatibleException. Now, after 20 minutes of chasing my tail over different versions of Entity Framework, 4.1 vs 4.2... blah blah blah... look inside at the inner exception: "A network-related or instance-specific error occurred while establishing a connection to SQL Server. The server was not found or was not accessible." DOH!!!! A clean translation is that it can't find SQL, or SQL is offline or not running. So check the power is on and the service is running, or, as in my case, edit web.config and change back to the work SQL box. Hope you don't have this pain, as the default errors at the moment suck balls in the EntityFramework 4.xx releases. Cheers
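
    A minimal sketch of that "look inside at the inner exception" step (MyContext stands in for your real DbContext; connection details omitted): force a connection attempt, then walk the InnerException chain so the underlying network/SQL error surfaces instead of the generic provider message.

        using System;
        using System.Data.Entity;

        public class MyContext : DbContext { }   // stand-in for your real context

        class Demo
        {
            static void Main()
            {
                try
                {
                    using (var db = new MyContext())
                    {
                        db.Database.Connection.Open();   // forces a real connection attempt
                    }
                }
                catch (Exception ex)
                {
                    // The useful message is usually buried a level or two down.
                    for (Exception e = ex; e != null; e = e.InnerException)
                        Console.WriteLine(e.GetType().Name + ": " + e.Message);
                }
            }
        }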

    Read the article

  • Free Typescript editor with definition based code completion feature

    - by NagyI
    I know that a plugin for Visual Studio exists. However, I can't afford VS, so I'm looking for a free alternative which can be used to write TypeScript, is aware of the .d.ts definition files, and can do code completion based on them. I know that Sublime Text and Vim can do syntax highlighting with the correct language definition file; however, the biggest advantage of TypeScript for me is the ability to give code assistance while coding. Are you aware of any editor which can do this? I'm interested even if it's in an experimental state.

    Read the article

  • How should I select continuous integration tool?

    - by DeveloperDon
    I found this cool comparison table for integration servers on Wikipedia, but I am a little uncertain how to rank the tools vs. my needs and interests. The chart itself seems to have a lot of boxes marked unknown, so if you are comfortable updating it on Wikipedia, that could be great too. Are there a few top performing products so I can quickly narrow down to four or five options? Which products seems to have the largest user communities and most ongoing enhancements and integration with new tools? Are the open source offerings best, or are there high quality tools that can be a great deal for a single user at home? Will use of multiple systems (primary desktop, local only home network server, personal and work notebooks, multiple virtual machines spread across all) create problems and how can they be managed?

    Read the article

  • errors with libosmscout [migrated]

    - by Katlego Moukangwe
    I am using the libosmscout library for routing, and I get the following error when I run the routing demo (with the map in the current directory):

      Routing . 169639811 169639816 169639831 253045594
      Cannot get start way!
      There was an error while calculating the route!

    I think the problem might occur during the import stage, since I get the following warnings:

      !! Cannot resolve way member 28693141 for relation 28322 boundary_administrative Mecklenburg-Vorpommern
      !! Cannot resolve way member 152675051 for relation 28936 boundary_administrative Bergedorf
      !! Cannot resolve way member 26496646 for relation 28964 boundary_administrative Harburg (Hamburg)
      !! Node 323985058 of way 135078286 cannot be joined with any other way of the relation 1817154 Neuenkirchen
      WW Multipolygon relation 2136137 has conflicting types for outer boundary (place_island vs. boundary_administrative)
      !! Cannot resolve way member 162656856 for relation 2174826 landuse_forest

    Read the article

  • Why can't I see my desktop icons in Ubuntu 13.04?

    - by Edgar
    I just installed Ubuntu 13.04 on my Aspire-M3-5871TG laptop, and I can't see anything (no icons, ...). Only the desktop background is visible, but I can open and work with the terminal. Maybe the problem is related to the Nvidia GeForce GT 640M vs. Unity. I've tried several commands: dconf reset -f /org/compiz/ unity --reset-icons &disown and unity --replace & but nothing happens. I've tried other commands as well: sudo add-apt-repository ppa:ubuntu-x-swat/x-updates sudo apt-get update sudo apt-get install nvidia-current and nothing happens. I've also tried to install tweak, but it cannot be found. So I can't do anything... Is my laptop simply not compatible with Ubuntu 13.04?

    Read the article

  • Will a Wubi install of Ubuntu affect my Windows installation in any way? [closed]

    - by Oddysee
    Possible Duplicate: What are the benefits of a disk install vs. Wubi? And can I migrate my settings easily? Having never tried a Linux OS before, I want to dabble and take a look at one. Ubuntu seems like a good distro to go with (due to how popular it is), and I want to use Wubi to try it out. I just wanted to know whether anything I do while trying out Ubuntu will affect my Windows installation, or the files pertaining to it, in any way? If so, is it potentially Windows-breaking?

    Read the article
