Search Results

Search found 54098 results on 2164 pages for 'something broken'.


  • What is the difference between development and R&D?

    - by MainMa
    I was asked by a colleague to explain clearly the difference between ordinary development and research and development (R&D), and I was unable to do it. After reading Wikipedia, I still don't have a precise answer. According to Wikipedia (slightly modified), there are two primary models: in one, the primary function is to develop new products; in the other, the primary function is to discover and create new knowledge about scientific and technological topics for the purpose of uncovering and enabling the development of valuable new products, processes, and services.

    The first model is confusing. Does it mean that development (not R&D) consists exclusively of adding new features to a product, fixing bugs and doing maintenance? What if something which was previously developed as a new feature becomes a separate product? The second model is less confusing, but still: how do you decide whether something is new knowledge or existing knowledge that has merely been rediscovered? Later, Wikipedia adds that ordinary development differs from R&D because of its "nearly immediate profit or immediate improvement". That is still not clear enough. How do you qualify "nearly immediate profit"? What if a task has an immediate profit but requires heavy research? Or if it is basic but has uncertain profit, like enforcing a common style over the codebase? For example, does it belong to development or to R&D to:

    - Develop an engine which abstracts access to the database, greatly simplifying and shortening the code of other applications (existing, or ones which will be written in future) that need to access the database?
    - Establish a new service-oriented architecture for the entire organization of company resources, in order to move from a bunch of separate and autonomous applications to a set of well-organized, interconnected web services, like what Amazon uses?
    - Design a new communication protocol to allow faster replication of data between two of the company's data centers?
    - Conceive a new type of software testing while working on a specific product, knowing that this type of testing will improve or simplify the testing process?
    - Prove that functional programming is more appropriate than OOP for a specific application, based on evidence, logic and previous experience?
    - Enhance an existing application by adding gestures on touch screens, after studies and testing show that those gestures improve the productivity of the users by a factor of at least 1.4 for a precise set of tasks?
    - Find a way to strongly improve the power usage effectiveness (PUE) of a data center?
    - Create a domain-specific language (DSL)?

    In short, how can I determine whether I'm doing R&D while working on something?

    Read the article

  • Learn a NoSQL database or become a badass with the traditional RDBMS - where is the work now, and where will it be?

    - by beck
    I'm halfway through my MSc and am thinking about my dissertation, which I get three months to work on full time. I'm very comfortable with the traditional relational database; the question is whether I should work on a project where I get a good understanding of something like Cassandra, or whether I should really push my RDBMS knowledge to the limit. Getting great at something like MySQL is a solid, safe option, but will there really be much work for me with Cassandra in my tool belt? I would love to do either... Thanks for your opinions and advice.

    Read the article

  • What if you could work on anything you wanted?

    - by Nick Harrison
    What if you could work on anything you wanted? Redgate is doing an experiment of sorts this week, called Down Tools Week. The idea is that they stop working on their regular projects for a week and strike out on something that catches their attention and drives their passion. Evidently, in many cases these projects have turned out to be new features in their existing products that individuals were interested in, some were internal initiatives and some were off-the-wall new ideas. Today is show-and-tell, where they will share with each other what they have been working on. There may well be some interesting announcements coming out of this. The prospects are exciting.

    I understand that Google does something similar, allowing their employees a specified amount of time to work on projects of their own choosing. This has been the breeding ground for some of my favorite services. It is a shame that more companies do not follow such practices. Now, I know that most companies cannot afford to shut down everything for a week, and sometimes you can't really explore an interesting idea in 8 hours a week or however much time Google allocates, but it may still be worthwhile. What would happen if your company gave you, as an individual, one week each quarter to work on a project of your own design, just to see what happens? I would be happy even if you still had to get approval before your week-long adventure.

    Personally, I think this could be a very effective use of training budgets. Give me a week to research something on my own and you would be amazed at what I can find out. Maybe this should be the prerequisite before starting a new project: stagger the team onboarding, but have everyone spend a week-long sabbatical studying BizTalk before starting a project that will hinge on BizTalk. The show-and-tell afterwards is a great way to keep everyone honest, or at least reassure management that everyone is honest. If your goal was to spend a week researching and exploring a new technology, and you had to do a show-and-tell afterwards to show off what you had learned, then everyone can learn a bit of what you just learned. Sounds like a promising win-win to me. Maybe it is a pipe dream, but what if...? What would you work on if given the opportunity to work on anything you wanted?

    Read the article

  • Resources on expected behaviour when manipulating 3D objects with the mouse

    - by sebf
    Hello. In my animation editor, I have a 3D gizmo that sits on the origin of a bone; the user drags the mesh around to rotate the bone. I've found that translating the 2D movements of the mouse into sensible 3D transforms is not nearly as simple as I'd hoped. For example, what is intuitively 'up' or 'down'? How should the magnitude of rotations change with respect to dX/dY? How should I implement this? What happens when the gizmo changes position or orientation with respect to the camera? Etc. So far, with trial and error, I've written something (very) simple that works 70% of the time. I could probably continue to hack at it until I made something that works 99% of the time, but there must be someone who needed the same thing and spent the time coming up with a much more elegant solution. Does anyone know of one?
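
    For what it's worth, here is the simplest mapping I know of, as a rough sketch (my own, with invented names and an arbitrary sensitivity constant): treat horizontal mouse movement as rotation about the camera's up axis and vertical movement as rotation about the camera's right axis, so "up" and "down" always match what the user sees on screen, no matter how the gizmo is oriented in the world. Arcball/trackball controls are the better-documented refinement of the same idea.

```java
/**
 * Minimal sketch: convert a 2D mouse delta into two incremental rotations
 * expressed in camera space. Names and the sensitivity value are illustrative
 * assumptions, not a known-good recipe.
 */
public class GizmoRotation {

    /** Simple immutable 3-component vector. */
    public static final class Vec3 {
        public final double x, y, z;
        public Vec3(double x, double y, double z) { this.x = x; this.y = y; this.z = z; }
    }

    /** Axis-angle pair describing one incremental rotation. */
    public static final class AxisAngle {
        public final Vec3 axis;
        public final double radians;
        public AxisAngle(Vec3 axis, double radians) { this.axis = axis; this.radians = radians; }
    }

    /** Radians of rotation per pixel of mouse movement (tuning constant). */
    private static final double SENSITIVITY = 0.01;

    /** Drag right spins about screen-up; drag down tips about screen-right. */
    public static AxisAngle[] fromMouseDelta(double dx, double dy,
                                             Vec3 cameraRight, Vec3 cameraUp) {
        return new AxisAngle[] {
            new AxisAngle(cameraUp,    dx * SENSITIVITY),
            new AxisAngle(cameraRight, dy * SENSITIVITY)
        };
    }

    public static void main(String[] args) {
        AxisAngle[] r = fromMouseDelta(12, -4,
                new Vec3(1, 0, 0),   // camera right
                new Vec3(0, 1, 0));  // camera up
        System.out.println("yaw about up: " + r[0].radians
                + " rad, pitch about right: " + r[1].radians + " rad");
    }
}
```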

    Read the article

  • Command line options style - POSIX or what?

    - by maaartinus
    Somewhere I saw a rant against java/javac allegedly using a mix of Windows and Unix style, like java -classpath ... -ea ... Something. IMHO it is no mix; it works just like find does, doesn't it? AFAIK, according to POSIX, the syntax should be like java --classpath ... --ea ... Something, and -abcdef would mean specifying six short options at once. I wonder which version leads, in general, to less typing and fewer errors. I'm writing a small utility in Java, and in no case am I going to use the Windows style /a /b, since I'm interested primarily in Unix. What style should I choose?
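
    For context, the bundled single-dash short options (-abcdef) are the POSIX/getopt convention, while the double-dash long names (--classpath=...) are the GNU extension; many tools accept both. Below is a minimal hand-rolled sketch of accepting both styles in Java - the option names are made up for the example, and a real utility might simply use an existing library such as Apache Commons CLI instead.

```java
/** Illustrative sketch: accept POSIX-style bundled short options and GNU-style long options. */
public class Options {
    boolean verbose;
    boolean dryRun;
    String classpath = "";

    static Options parse(String[] args) {
        Options o = new Options();
        for (String arg : args) {
            if (arg.startsWith("--")) {                       // GNU long option
                if (arg.equals("--verbose")) o.verbose = true;
                else if (arg.equals("--dry-run")) o.dryRun = true;
                else if (arg.startsWith("--classpath=")) o.classpath = arg.substring("--classpath=".length());
                else throw new IllegalArgumentException("Unknown option " + arg);
            } else if (arg.startsWith("-") && arg.length() > 1) {  // bundled short options
                for (char c : arg.substring(1).toCharArray()) {
                    switch (c) {
                        case 'v': o.verbose = true; break;
                        case 'n': o.dryRun = true; break;
                        default: throw new IllegalArgumentException("Unknown option -" + c);
                    }
                }
            } else {
                // positional argument; ignored in this sketch
            }
        }
        return o;
    }

    public static void main(String[] args) {
        Options o = parse(args);
        System.out.println("verbose=" + o.verbose + " dryRun=" + o.dryRun + " classpath=" + o.classpath);
    }
}
```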

    Read the article

  • Unable to boot Ubuntu 11.10 from an external USB drive

    - by user45006
    I'm new to Ubuntu (and actually all things Linux) as of this morning, so please excuse any stupid mistakes I may be making. I recently bought an external hard drive for my newly built PC (which is running Windows 7, if it matters). I would like to install Ubuntu onto the external drive and boot from there. I downloaded Ubuntu 11.10 and made a bootable CD, unplugged my internal HDDs, plugged in the external drive, installed Ubuntu 11.10 on the external drive via the installer, and replugged my internal HDDs. Then I set my BIOS boot order to: Boot from USB-HDD - Boot from Hard Disk - Boot from CD/DVD. Now when I restart I get the message "Starting Operating System..." (or something like that, I forget exactly what it says), which lingers on the screen for a moment, and then Windows starts. Any idea what the problem may be?

    ~Relevant info~
    BIOS version: Award Software International, Inc. F2, 2/22/2011
    Ubuntu version: 11.10
    External hard drive: Western Digital My Passport Essential 500GB Portable Hard Drive (Black)

    ~Things I've already tried~
    1) Unplugged the internal HDDs so that only the external drive was plugged in via USB. The same thing happened, except that obviously my BIOS could not detect any hard drives besides the external one. When booting, I received the error "Could not detect operating system".
    2) Formatted the external hard drive and reinstalled. It didn't make a difference; interestingly, though, when I booted from CD the Ubuntu installer said it detected Ubuntu 11.10 on the external hard drive.
    3) Within the BIOS I've messed around with every boot order combo I could think of, both in the "Hard Disk boot order" screen and the "Boot order" screen. I'm a little confused about why there are two screens for this.
    4) Held F12 during startup, which opens (what I think is) the one-time boot screen, and it gave me the options "Hard Drive", "cd/dvd", "USB-FDD", "USB-cdrom", "USB-HDD", and "USB-something else I can't remember what it was". I tried all of them, but the same thing as before happened each time.

    ~References~
    I noticed several people on Ask Ubuntu have tried to do something similar, if not exactly the same. In fact, I even found a post that pretty much outlines step by step exactly what I did... only theirs worked. /jealous. Linky: Install Ubuntu or Kubuntu on a External USB Drive. I'm willing to try a different version of Ubuntu - it's not like my heart is set on 11.10 - but it's a pain to open my case and unplug my internal hard drives, so I'd prefer not to do this unless someone is reasonably confident it'll work. Thank you for all of your help in advance! I'm really looking forward to exploring Ubuntu!

    Read the article

  • How to change keyboard layout?

    - by swedishhh
    I'm on Ubuntu 12.04. Recently I bought a cheap Apple-style Bluetooth keyboard. It pairs OK; I paired it with the current 102-key keyboard still attached. Anyway, I noticed that the character mapping is incorrect. Most keys do not type anything - some keys on the right (k, l, ;') etc. give numbers, but that's about it. So I rebooted with the 102-key keyboard unattached and the Bluetooth keyboard on, ready to connect. After boot, at the login screen, the Bluetooth keyboard had paired. I typed my password, and it logged in fine! However, after the user login was complete it reverted to the broken behaviour. A glance at the layout chart shows Ubuntu thinks I still have the 102-key layout, even though that keyboard remained disconnected. Any ideas? Thanks, Dave

    Read the article

  • Role of an entity state in a component based system?

    - by Paul
    Component-based entity systems are all the rage these days; everyone seems to agree they are the way to go, but no one really has a definitive implementation of such a system. I was wondering: what role do entity states (walking-left, standing, jumping, etc.) have in a CBS? Do they act like controllers (i.e. they handle events and change the entity's attributes based on those events)? What about cases where a state would, for example, require that the entity enter no-clip mode? Should that state, when entered, set the CollisionComponent of the entity to a null pointer or something? (Then, on exit, the state would restore the entity's CollisionComponent to its previous value.) Also, I guess it's the current state's job to change the entity's state to something else, right?
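
    For illustration, here is one minimal way (my own sketch, not a canonical CBS design) to treat a state as a small controller that reconfigures the entity on enter/exit - for the no-clip case, by removing and later restoring the CollisionComponent - and that decides itself when to hand control to another state. All type names besides CollisionComponent are invented for the example.

```java
import java.util.HashMap;
import java.util.Map;

/** Bare-bones entity: just a bag of components keyed by type. */
class Entity {
    private final Map<Class<?>, Object> components = new HashMap<>();

    <T> void add(Class<T> type, T component) { components.put(type, component); }
    <T> T remove(Class<T> type)              { return type.cast(components.remove(type)); }
    <T> T get(Class<T> type)                 { return type.cast(components.get(type)); }
}

class CollisionComponent { /* shape, layer mask, ... */ }

/** A state acts like a controller: it reconfigures the entity on enter/exit. */
interface EntityState {
    void enter(Entity e);
    void exit(Entity e);
    /** Returns the next state, or itself if nothing changes. */
    EntityState update(Entity e, float dt);
}

/** No-clip removes the collision component and restores it afterwards. */
class NoClipState implements EntityState {
    private CollisionComponent saved;

    public void enter(Entity e) { saved = e.remove(CollisionComponent.class); }
    public void exit(Entity e)  { if (saved != null) e.add(CollisionComponent.class, saved); }

    public EntityState update(Entity e, float dt) {
        // The current state decides when to hand over, e.g. return a standing
        // state once no-clip is toggled off; here it just stays put.
        return this;
    }
}

public class StateDemo {
    public static void main(String[] args) {
        Entity player = new Entity();
        player.add(CollisionComponent.class, new CollisionComponent());

        EntityState state = new NoClipState();
        state.enter(player);
        System.out.println("collides while no-clip? " + (player.get(CollisionComponent.class) != null));
        state.exit(player);
        System.out.println("collides afterwards?   " + (player.get(CollisionComponent.class) != null));
    }
}
```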

    Read the article

  • A command-line clipboard copy and paste utility?

    - by Peter.O
    In Windows I used command-line clipboard copy-and-paste utilities: pclip.exe and gclip.exe. These were UnixUtils ports for Windows (but they only handled plain text). There were a couple of other native Windows utilities which could write/extract any format. I've looked for something similar in the Synaptic Package Manager, but I can't find anything. Is there something there that I've missed? Or maybe this is available in bash scripting? The type of utility I'd like would be able to read/write via stdin/stdout or file-in/file-out, and handle Unicode/rich-text/picture/etc. clipboard formats... Late edit: NB, I'm not after a clipboard manager.
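
    For what it's worth, the xclip and xsel packages in the repositories do exactly this for X11 selections. If you would rather script it yourself, here is a minimal plain-text-only sketch in Java using the standard java.awt.datatransfer API, in the spirit of pclip/gclip; it needs a running X session and does not cover the rich-text/picture part of the question. The class name and usage are invented for the example.

```java
import java.awt.Toolkit;
import java.awt.datatransfer.Clipboard;
import java.awt.datatransfer.DataFlavor;
import java.awt.datatransfer.StringSelection;
import java.io.BufferedReader;
import java.io.InputStreamReader;
import java.nio.charset.StandardCharsets;

/**
 * Tiny command-line clipboard tool, plain text only:
 *   java Clip copy  < file.txt    # stdin -> clipboard
 *   java Clip paste > out.txt     # clipboard -> stdout
 */
public class Clip {
    public static void main(String[] args) throws Exception {
        Clipboard cb = Toolkit.getDefaultToolkit().getSystemClipboard();
        if (args.length == 1 && args[0].equals("copy")) {
            StringBuilder sb = new StringBuilder();
            BufferedReader in = new BufferedReader(
                    new InputStreamReader(System.in, StandardCharsets.UTF_8));
            for (int c; (c = in.read()) != -1; ) sb.append((char) c);
            cb.setContents(new StringSelection(sb.toString()), null);
        } else if (args.length == 1 && args[0].equals("paste")) {
            System.out.print(cb.getData(DataFlavor.stringFlavor));
        } else {
            System.err.println("usage: java Clip copy|paste");
        }
    }
}
```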

    Read the article

  • How should I design a wizard for generating requirements and documentation

    - by user1777663
    I'm currently working in an industry where extensive documentation is required, but the apps I'm writing are all pretty much cookie-cutter at a high level. What I'd like to do is build an app that asks a series of questions regarding business rules and marketing requirements in order to generate a requirements spec. For example, there might be a question set that asks "Does the user need to enter their age?" and a follow-up question of "What is the minimum age requirement?". If the inputs are "yes" and "18", then this app will generate requirements that look something like this: "The registration form shall include an age selector"; "The registration form shall throw an error if the selected age is less than 18". Later on down the line, I'd like to extend this to do additional things, like generating test cases and even code, but the idea is the same: generate some output based on rules determined by answering a set of questions. Are there any patterns I could research to better design the architecture of such an application? Is this something that I should be modeling as a finite state machine?
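
    As a sketch of one possible shape for this (not a named pattern; every class and field name below is invented): keep the collected answers in a plain map, and let each small rule object translate answers into requirement sentences. The wizard part - deciding which follow-up question to ask next - is the piece that maps most naturally onto a finite state machine or decision tree.

```java
import java.util.ArrayList;
import java.util.List;
import java.util.Map;

/** One rule: given the collected answers, contribute zero or more requirements. */
interface RequirementRule {
    List<String> apply(Map<String, String> answers);
}

/** Example rule for the age question described above. */
class AgeRule implements RequirementRule {
    public List<String> apply(Map<String, String> answers) {
        List<String> out = new ArrayList<>();
        if ("yes".equalsIgnoreCase(answers.getOrDefault("needsAge", "no"))) {
            out.add("The registration form shall include an age selector.");
            String min = answers.get("minimumAge");
            if (min != null) {
                out.add("The registration form shall throw an error if the selected age is less than " + min + ".");
            }
        }
        return out;
    }
}

public class SpecGenerator {
    public static void main(String[] args) {
        // In the real app these answers would come from the wizard's question flow.
        Map<String, String> answers = Map.of("needsAge", "yes", "minimumAge", "18");
        List<RequirementRule> rules = List.of(new AgeRule());

        List<String> spec = new ArrayList<>();
        for (RequirementRule rule : rules) spec.addAll(rule.apply(answers));
        spec.forEach(System.out::println);
    }
}
```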

    Read the article

  • sudo apt-get install won't work

    - by Ben Casling
    I'm having issues with Ubuntu Server 12.04 installed on an HP 550 laptop. When I try sudo apt-get install <programname>, e.g. apache2, it will not work, saying "E: Unable to locate package apache2". I have tried to look at and edit the sources, but that will not work either; the gedit command is broken too. I am trying gedit /etc/apt/sources.list, for those wondering. Is this a case of the computer's network not being configured properly? It downloaded a language pack easily enough during the installation, though. How do I fix this? A prompt reply would be appreciated.

    Read the article

  • How to improve testing your own code

    - by Peter
    Today I checked in a change to some code which turned out not to work at all, due to something rather stupid yet very crucial. I feel really bad about it, and I hope I finally learn something from it. The stupid thing is, I've done this before and I always tell myself, "Next time I won't be so stupid." Then it happens again and I feel even worse about it. I know you should keep your chin up and learn from your mistakes, but here's the thing: I try to improve myself, I just don't see how I can prevent these things from happening. So now I'm asking you: do you have certain ground rules for testing your own code?
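
    One concrete ground rule that targets exactly this failure mode: before fixing a bug, write a small automated test that reproduces it, and don't check in until the tests around the code you touched pass. A minimal JUnit 4 sketch - the class under test and its behaviour are invented for the example:

```java
import org.junit.Test;
import static org.junit.Assert.assertEquals;

public class PriceCalculatorTest {

    /** Trivial class under test, invented for the example. */
    static class PriceCalculator {
        double totalWithDiscount(double amount, double discountRate) {
            return amount * (1.0 - discountRate);
        }
    }

    // Regression test: written to reproduce a bug before fixing it, then kept
    // in the suite so the same mistake cannot slip back in unnoticed.
    @Test
    public void discountIsAppliedOnlyOncePerOrder() {
        PriceCalculator calc = new PriceCalculator();
        assertEquals(90.0, calc.totalWithDiscount(100.0, 0.10), 0.0001);
    }
}
```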

    Read the article

  • How to debug KMail Search after upgrade

    - by Unapiedra
    I recently added KDE backported packages to gain access to a more stable version of KDevelop. However, Kontact/KMail's search doesn't work anymore. Where do I start to find out what's wrong? The problem manifests itself like this: when I type something in the search bar in the Inbox folder, no emails are shown (yes, I'm searching for something where the email is definitely there). Kubuntu 12.04 LTS, Kontact version 4.13. What I tried: akonadiconsole, as suggested here, but I couldn't find a feeder as mentioned. More generally, isn't there a checklist or a general approach to debugging Akonadi, Nepomuk and Kontact?

    Read the article

  • gnome-shell, gnome 3, unity, cinnamon, mate confusion

    - by Bryan
    I am thinking about adding something besides Unity to my Ubuntu 12.04. My questions are:
    - If I add Cinnamon, MATE or GNOME 2/3, could I still call it Ubuntu, or would it be Mint?
    - Why not just add Mint, instead of Cinnamon or MATE, into Ubuntu?
    - Or is Mint just those at the core, and not the other way around?
    - I had terrible battery drain using Mint; something was wrong with the kernel for my laptop model. If I add Cinnamon, would I get that battery drain again?
    - And lastly, would I still be able to get that awesome HUD if I add the other things?
    I realize these questions are a bit confusing, or at least they are for me.

    Read the article

  • Why won't my install on Macbook work?

    - by Wyatt
    I am trying to install Ubuntu 12.04 on my MacBook. The CD drive is broken, so I am going from a USB flash drive I created. I can get it to "Try Ubuntu" perfectly fine - as a matter of fact, I'm using it right now - however, I really want to install. Everything is partitioned and rEFIt is installed. I run the installer and I follow the guide at Apple Intel Installation. I get to the install part of the installer, and after running, it ends with a fatal GRUB error. Does anyone know how to get past this? I feel like it has something to do with the fact that I don't get the last dialog box of the installer with the "advanced" tab used in the guide. Any help is greatly appreciated!

    Read the article

  • Need help installing Wine onto Ubuntu 12.10x64

    - by user106241
    I have tried to install Wine through the Software Center and through the terminal, and I get this error:

    chris@ubuntu:~$ sudo apt-get install wine1.5
    [sudo] password for chris:
    Reading package lists... Done
    Building dependency tree
    Reading state information... Done
    Some packages could not be installed. This may mean that you have requested an impossible situation or if you are using the unstable distribution that some required packages have not yet been created or been moved out of Incoming. The following information may help to resolve the situation:
    The following packages have unmet dependencies:
     wine1.5 : Depends: wine1.5-i386 (= 1.5.16-0ubuntu1) but it is not installable
               Recommends: gnome-exe-thumbnailer but it is not going to be installed or
                           kde-runtime but it is not going to be installed
               Recommends: ttf-droid
               Recommends: ttf-mscorefonts-installer but it is not going to be installed
               Recommends: ttf-umefont but it is not going to be installed
               Recommends: ttf-unfonts-core but it is not going to be installed
               Recommends: winbind but it is not going to be installed
               Recommends: winetricks but it is not going to be installed
    E: Unable to correct problems, you have held broken packages.

    Read the article

  • What approaches can I take to lower the odds of introducing new bugs in a complex legacy app?

    - by m.edmondson
    Where I work, I often have to develop in (and bug-fix) an old system (.NET 1) whose code is complete spaghetti, with little thought given to variable names, program structure or comments. Because of this, it takes me ages to understand which bits need to be changed, and I often 'break' the existing software when I make a modification. I really, really want to spend a couple of months (with colleagues) refactoring it, but the existing developers can't see the need - nor do they think there's time for it (the system is massive). I dread having to work on its code, as it takes days to fix something only to find out I've broken something else. This obviously makes me look incompetent - so how can I deal with this?
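
    One mitigation that doesn't require a refactoring budget is to pin down the current behaviour of the code you are about to touch with characterization tests, so a "fix here, break there" regression shows up at your desk rather than after check-in. A minimal JUnit 4 sketch - the legacy class here is a made-up stand-in, not code from any real system:

```java
import org.junit.Test;
import static org.junit.Assert.assertEquals;

public class LegacyInvoiceFormatterTest {

    /** Stand-in for a legacy class; in practice this would come from the old codebase. */
    static class LegacyInvoiceFormatter {
        String format(double amount) {
            // Odd-looking behaviour we do not dare change yet.
            return "$" + Math.round(amount) + ".00";
        }
    }

    // Characterization test: it asserts what the code does *today*, not what it
    // "should" do, so any accidental behaviour change is caught immediately.
    @Test
    public void formatsWholeDollarsTheWayItAlwaysHas() {
        assertEquals("$13.00", new LegacyInvoiceFormatter().format(12.7));
    }
}
```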

    Read the article

  • How to make a great functional specification

    - by sfrj
    I am going to start a little side project very soon, but this time I want to do more than the little UML domain model and use case diagrams I often do before programming: I thought about writing a full functional specification. Is there anybody with experience writing functional specifications who could recommend what I need to add to it? What would be the best way to start preparing it? Here are the topics that I think are most relevant:
    - Purpose
    - Functional Overview
    - Context Diagram
    - Critical Project Success Factors
    - Scope (In & Out)
    - Assumptions
    - Actors (Data Sources, System Actors)
    - Use Case Diagram
    - Process Flow Diagram
    - Activity Diagram
    - Security Requirements
    - Performance Requirements
    - Special Requirements
    - Business Rules
    - Domain Model (Data Model)
    - Flow Scenarios (Success, Alternate...)
    - Time Schedule (Task Management)
    - Goals
    - System Requirements
    - Expected Expenses
    What do you think about these topics? Should I add something else, or maybe remove something?

    Read the article

  • How do I customize desktop wallpaper slideshow via XML?

    - by Pithikos
    I spent some time and tried various things, but nothing works. Here's what I have tried so far:
    - Making a new folder /usr/share/backgrounds/mywallpapers and adding my own background-1.xml there.
    - Copying a bunch of my own wallpaper files into /usr/share/backgrounds/.
    - Copying /usr/share/backgrounds/Contest/background-1.xml to /usr/share/backgrounds/.
    I logged out and in, and there were no changes in the Appearance app. I have heard about Wallch, but I don't want some app running in the background all the time, and I'm not even sure Wallch will work with GNOME 3. I also tried gnome-3-wp (the GNOME 3 Wallpaper Slideshow app), but it just seems broken on Ubuntu 11.10 (Oneiric). Does anyone have a solution?
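
    For reference, below is a minimal sketch of the kind of slideshow XML the stock GNOME backgrounds use (compare the files shipped under /usr/share/backgrounds/contest/); the folder, file names and durations are made-up placeholders. As far as I know, the slideshow only shows up in the Appearance dialog if a wallpaper-list entry under /usr/share/gnome-background-properties/ points at the XML file, which may be the missing step here - a sketch of that entry is included as the second file.

```xml
<!-- /usr/share/backgrounds/mywallpapers/background-1.xml (placeholder paths) -->
<background>
  <starttime>
    <year>2012</year><month>01</month><day>01</day>
    <hour>00</hour><minute>00</minute><second>00</second>
  </starttime>
  <static>
    <duration>1795.0</duration>
    <file>/usr/share/backgrounds/mywallpapers/photo-a.jpg</file>
  </static>
  <transition>
    <duration>5.0</duration>
    <from>/usr/share/backgrounds/mywallpapers/photo-a.jpg</from>
    <to>/usr/share/backgrounds/mywallpapers/photo-b.jpg</to>
  </transition>
  <static>
    <duration>1795.0</duration>
    <file>/usr/share/backgrounds/mywallpapers/photo-b.jpg</file>
  </static>
  <transition>
    <duration>5.0</duration>
    <from>/usr/share/backgrounds/mywallpapers/photo-b.jpg</from>
    <to>/usr/share/backgrounds/mywallpapers/photo-a.jpg</to>
  </transition>
</background>

<!-- /usr/share/gnome-background-properties/mywallpapers.xml (so it appears in Appearance) -->
<wallpapers>
  <wallpaper>
    <name>My Slideshow</name>
    <filename>/usr/share/backgrounds/mywallpapers/background-1.xml</filename>
    <options>zoom</options>
  </wallpaper>
</wallpapers>
```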

    Read the article

  • Verifying Office 2010 SP1 Installation

    - by Chris Heacock
    So you downloaded and installed SP1, but now you want to verify that SP1 actually installed. Looking at Outlook's Help screen under Help->About, it isn't readily apparent that SP1 is installed. Like me, you probably expected to see 14.1.something, or perhaps 14.0.something SP1, right? If you click on "Additional Version and Copyright Information", another window pops up and shows a bit more useful info (if you don't have the version numbers committed to memory). That window *does* give us that comforting "SP1", and now we can determine that if you have Office 14.1.6023.1000 (or later), you are indeed running Office 2010 SP1!

    Read the article

  • Masters vs. PhD - long [closed]

    - by Sterling
    I'm 21 years old and a first-year master's student in computer science. Whether or not to continue with my PhD has been plaguing me for the past few months. I can't stop thinking about it and am extremely torn on the issue. I have read http://www.cs.unc.edu/~azuma/hitch4.html and many, many other masters-vs-PhD articles on the web. Unfortunately, I have not yet come to a conclusion. I was hoping I could post my ideas about the issue here in order to 1) get some extra insight and 2) make sure that I am correct in my assumptions. Hopefully people who have experience in the respective fields can tell me if I am wrong, so I don't make my decision based on false ideas.

    Okay, to get this topic out of the way: money. Money isn't the most important thing to me, but it is still important. It has always been a goal of mine to make six figures, but I realize that will probably take me a long time with either path. According to most online salary-calculating sites, the average starting salary for a software engineer is ~60-70k. The PhD program here is five years, so that's about 300k I am missing out on by not going into the workforce with a masters. I have only ever had ~1k at one time in my life, so 300k is something I can't even really accurately imagine. I know I wouldn't have it all at once, obviously, but just knowing I would be earning that is kind of crazy to me. I feel like I would be living quite comfortably by the time I'm 30 years old (but risk being too content too soon). I would definitely love to have at least a few years of my 20s to spend with that kind of money before I have a family to spend it all on. I haven't grown up very financially stable, so it would be so nice to just spend some money... get a nice car, buy a new guitar or two, eat some good food, and just be financially comfortable. I have always felt like I deserved to make good money in my life, even as a kid growing up, and I just want it to be a reality. I know that either path will make good money by the time I'm ~40-45 years old, but I guess I'm just sick of not making money and am getting impatient about it. However, a big idea pushing me towards a PhD is that I feel the masters path would give me a feeling of selling out, if I have the capability to solve real questions in the computer science world. (Pretty straightforward - not much to elaborate on, but this is a big deal.)

    Now onto other aspects of the decision. I originally got into computer science because of programming. I started in high school and knew very soon that it was what I wanted to do for a career. I feel like getting a masters and being a software engineer in industry gives me much more time to program over my career. In research, I feel like I would spend more time reading, writing and trying to get grant money than I would coding. A guy I work with in the lab recently published a paper. He showed it to me and I was shocked by it. The first two pages were littered with equations and formulas, and the next page or so was filled with more equations and formulas derived from the previous ones. That was his work: breaking down and creating all of these formulas for robotic arm movement. And whenever I read computer science papers, they all seem to follow this pattern. I always pictured myself coding all day long... not proving equations and things of that nature. I know that's only one part of computer science research, but that part bores me.
    A couple of cons on each side.

    PhD: I don't really enjoy writing, and I don't feel like I'm that great at technical writing. Whenever I'm in a group making something, I'm always the one who does the large majority of the work and then gives it to my team members to write up a report. Presenting is different, though - I don't mind presenting at all, as long as I have a good grasp of what I am presenting. But writing papers seems like such a chore to me, and because of this, the "publish or perish" phrase really turns me off from research. Another bad thing: I feel like if I were doing research, most of it would be done alone. I work best in small groups. I like to have at least one person to bounce ideas off of when I am brainstorming. The idea of being part of some small elite group that builds things sounds ideal to me, so being able to work in small groups for the majority of my career is a definite plus. I don't feel like I can get this doing research.

    Masters: I read a lot online that most people come in as engineers and eventually move into management positions. As of now, I don't see myself wanting to be a part of management. Let's say my company wanted to make some new product or system - I would get much more pride, enjoyment and overall satisfaction from saying "I made this" rather than "I managed a group of people that made this." I want to be a big part of the development process. I want to make things.

    I think it would be great to be more specialized than other people. I would rather know everything about something than something about everything. I have always been that way: I was a great pitcher during my baseball years but not so good at everything else, great at certain classes in school but not so good at others, etc. To think that my career would be the same way sounds okay to me. Getting a PhD would point me in this direction. It would be great to be the person people look towards and come to for help, because of being such an important contributor to a very specific field, such as artificial neural networks or robotic haptic perception. From what I gather about the software industry, though, being specialized can be a very bad thing because of the speed of new technology.

    When it comes to being employed, I have pretty conservative views. I don't want to change companies every five years. Maybe this is something everyone wishes for, but I would love to just be an important person in one company for 10+ (maybe 20-25+ if I'm lucky!) years, if the working conditions were acceptable. I feel like that is more possible as a PhD, though, being a professor or researcher. The more I read about people in the software industry, the more it seems like most software engineers bounce from company to company at a rapid pace. Some even work like hired guns from project to project, which is NOT what I want AT ALL. But finding a place to make great and important software would be great, if that actually happens in the real world.

    I'm a very competitive person. I thrive on competition. I don't really know why, but I have always been that way, even as a kid growing up. Competition always gave me a reason to practice that little bit extra every night, always push my limits, etc. It seems to me like there is no competition in the research world; everyone seems very relaxed as long as research is being conducted. The only competition is if someone is researching the same thing as you, and then it's whoever can finish and publish first (but everyone seems too careful to let that circumstance arise).
    The only noticeable competition is with yourself and your own discipline. I like the idea that in industry there is real competition between companies to put out the best product or be put out of business. I feel like this would constantly push me to be better at what I do.

    One thing that is really pushing me towards a PhD is the lifetime of the things you make. I feel like if you make something truly innovative in industry - some really great new application or system - there is a shelf life of about 5-10 years before someone just does it faster and more efficiently. But with research work, you can create an idea or algorithm that lasts decades. For instance, the A* search algorithm was described in 1968 and is still widely used today. That is amazing to me. In the words of Palahniuk, "The goal isn't to live forever, it's to create something that will."

    Above everything, I just want to do something that matters. I want my work to help and progress society. Seriously, if I'm stuck programming GUIs for the next 40 years... I might shoot myself in the face. But then again, I hate the idea that less than 1% of the population will come into contact with my work, and even fewer will understand its importance.

    So if anything I have said is false, please inform me. If you think I come across as a masters or a PhD person, tell me. If you want to give me some extra insight or add to any point I made, please do. Thank you so much to anyone for any help.

    Read the article

  • Scrolling Box2D DebugDraw

    - by onedayitwillmake
    I'm developing a game using Box2D (the JavaScript implementation, Box2DWeb), and I would like to know how I can pan the debug draw. I know the usual answer is "don't use debug draw, it's just for debugging" - I'm not, however not all my objects are on the same screen, and I'd like to see where they are in the physics representation. How can I pan the debug drawing? As you can see, the debug draw output is shown in the top left, but it only shows a small part of the world. Here is an example of what I mean: http://onedayitwillmake.com/ChuClone/ The game is open source, if you'd like to poke through and note something that I'm perhaps doing that is obviously wrong: https://github.com/onedayitwillmake/ChuClone Here's the hacky way I'm using now to scroll the b2DebugDraw view, in which I added offsetX and offsetY properties to b2DebugDraw:

    Read the article

  • Book (or resource) on Java bytecode

    - by Andrea
    I am looking for some resources on JVM bytecode. Ideally I would like a short book; something more than a blog post but not an 800-page tome. If it is relevant, I am a Scala developer, not a Java one, although I know Java just fine. I would like something that allows me to read JVM bytecode and answer questions such as:
    - Why does the bytecode have to know about high-level constructs such as classes?
    - Are subtyping relations still visible in bytecode?
    - How does type erasure work exactly?
    - How do Oracle and Dalvik bytecode differ, and what consequences does this have for, say, developing Android apps with Scala?
    - How does the JVM manage the stack, and why exactly does this create issues with tail call elimination?
    and so on.
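
    Not a book, but a cheap way to start reading bytecode while you look for one: the JDK ships with the javap disassembler. Compile a throwaway class like the sketch below (invented for illustration) and run `javap -c -v Adder` to see the constant pool, the class and method structure, and the stack-based instructions the questions above are about.

```java
/**
 * Throwaway class for inspecting bytecode: compile with `javac Adder.java`,
 * then disassemble with `javap -c -v Adder`.
 */
public class Adder {
    public static int add(int a, int b) {
        return a + b;              // compiles to iload_0, iload_1, iadd, ireturn
    }

    public static void main(String[] args) {
        System.out.println(add(2, 3));
    }
}
```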

    Read the article

  • Problem installing Wireshark on Ubuntu 12.04

    - by iqbal
    I tried to install Wireshark on Ubuntu 12.04, but when I enter the command, this is the message shown to me:

    iqbal@iqbal-HP-ProBook-4530s:~$ sudo apt-get install wireshark
    Reading package lists... Done
    Building dependency tree
    Reading state information... Done
    Some packages could not be installed. This may mean that you have requested an impossible situation or if you are using the unstable distribution that some required packages have not yet been created or been moved out of Incoming. The following information may help to resolve the situation:
    The following packages have unmet dependencies:
     wireshark : Depends: wireshark-common (= 1.11.4+svn20140420104827~d0489f2a-0ubuntu1~precise1~ppa0) but it is not going to be installed
    E: Unable to correct problems, you have held broken packages

    So how can I install Wireshark on Ubuntu 12.04? If anyone can help, please tell me. Thanks.

    Read the article

  • After upgrading to 13.04, the Unity interface is not showing

    - by ?????? ?????
    I upgraded to Raring last night. The upgrade itself went okay, with no errors. But when I rebooted the computer afterwards and logged in to my Unity session, all I could see was the desktop background (together with the desktop icons) and no Unity interface. The Super key shortcut wasn't showing the Dash, there was no top panel, etc. Please see the screenshot. As a hint, I suspect it has something to do with my switchable graphics. I'm running Ubuntu on an Acer Aspire AS5830TG with an nVidia GT540M and an Intel integrated card. In 12.10 I was using Bumblebee to manage the graphics card switching. During the upgrade I saw that something related to nvidia had to be uninstalled, but I didn't pay much attention to it. I can't be sure if it has anything to do with my problem, though. What could possibly have gone wrong?

    Read the article
