Search Results

Search found 6504 results on 261 pages for 'tfs upgrade'.

Page 39/261

  • After Upgrading to 12.04 the Kernel won't Initiate

    - by Jeff
    I had 11.10 and tried to upgrade recently via the installer's pop-up reminder. Afterwards it would not boot, citing an issue with the kernel. So I've set up a 12.04 installation USB, which appears to work fine. The problem is that it doesn't provide an upgrade option, just format-and-install or install alongside the current broken system. I believe I should still be able to get my data from the broken OS after installing alongside it, but if there is a way to fix this more directly, that would be preferable.

    Read the article

  • In-Place DC Upgrade from Server 2003R2 Standard to 2008 Enterprise

    - by Yadhu Tony
    We have a domain controller on Server 2008 Enterprise and an additional DC on Server 2003 R2. Now I need to upgrade the additional domain controller to Server 2008 Enterprise and raise the domain functional level to 2008. The DC is running Active Directory, DNS and DHCP. The server is installed on VMware ESXi 4.0. Please guide me through carrying out the upgrade. I would also like to know about the possible risks of an in-place upgrade, if any.

    Read the article

  • Archive update fails while upgrading from 10.10 to 11.04

    - by johnsonrichard
    I tried upgrading from Maverick to Natty, but fetching the packages failed when I attempted a partial upgrade. It displays "fetching complete", but it actually fails and I get the following errors: Could not download the upgrades. The upgrade has aborted. Please check your Internet connection or installation media and try again. All files downloaded so far are kept. Failed to fetch http://in.archive.ubuntu.com/ubuntu/pool/main/m/mesa/libgl1-mesa-dri_7.10.1~git20110215.cc1636b6-0ubuntu1_i386.deb 404 Not Found. I can't post more hyperlinks because of restrictions. Please help; I am relatively new to Ubuntu and am loving it. Thanks in advance!

    Read the article

  • Installing 12.04 through Update Manager on a XP/ubuntu dual-boot

    - by Madeline Mcormick
    I currently have a dual-boot system running XP Pro SP3 and Ubuntu 10.04 LTS. I decided to upgrade to 12.04 using the Update Manager over the network, NOT the ISO/CD version. Now that I am in the middle of the 12.04 installation, I have this immense fear that this network upgrade may affect my Windows XP OS and render it unbootable. I tried backing up files while it was upgrading, but it does not recognize any external media such as an external HDD. What should I do?

    Read the article

  • After upgrading to 12.04 lts server, the mouse is intermittently working

    - by Jason
    After this upgrade, I've had a problem with the mouse only intermittently working and with programs crashing (for example, Zentyal and the error-reporting application both crash). I also got an error on the screen that said: "Could not grab your mouse. A malicious client may be eavesdropping on your session or you may have just clicked a menu or some application just decided to get focus. Try again." What's going on with this system? Did I really get malware that quickly? This system has been boxed up since 2011; I only did the upgrade today. Regarding the mouse, I can move it around, but when I click on something, nothing happens.

    Read the article

  • Managing TFS Workspaces

    - by Enrique Lima
    You are the administrator (or perhaps just the person who knows the most about TFS), and you need to clean up what is connected, and perhaps even clean up after people who have left the organization with code still checked out in their workspaces. What permissions do I need? You will need the Administer Workspaces permission to perform the following tasks. The commands: to execute them, open a Visual Studio Command Prompt; from there you can use the tf command, which has a nice set of options that I will provide a listing for in a later post.
    To list all registered workspaces:
    tf workspaces /collection:<url to your TPC> <workspace>;<owner>
    To delete a specific workspace:
    tf workspace /delete /server:<url to your TPC> <workspace>;<owner>
    If a workspace name contains embedded spaces, surround it with "" (double quotes).
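
    As a concrete illustration, here is roughly what those two commands might look like with a hypothetical collection URL, workspace name and owner (the /owner:* filter lists workspaces for all owners; on the 2010 tools /collection: is the documented form of the server option):
    tf workspaces /collection:http://tfsserver:8080/tfs/DefaultCollection /owner:*
    tf workspace /delete "BuildBox;CONTOSO\jsmith" /collection:http://tfsserver:8080/tfs/DefaultCollection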

    Read the article

  • What's the risk of upgrading over SSH?

    - by C. Ross
    When I run sudo do-release-upgrade over ssh, I get the following message. This session appears to be running under ssh. It is not recommended to perform a upgrade over ssh currently because in case of failure it is harder to recover. If you continue, an additional ssh daemon will be started at port '9004'. Do you want to continue? What is the real risk of upgrading over ssh? How does the additional ssh daemon help mitigate this?
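
    For what it's worth, the extra daemon is the mitigation: if the upgrade takes down your primary SSH session or sshd itself, you can reconnect through the fallback daemon on the announced port and finish the recovery by hand. A minimal sketch, assuming a hypothetical host name and the port 9004 reported above:
    # if the main session drops mid-upgrade, reconnect via the fallback daemon
    ssh -p 9004 youruser@yourserver
    # a typical first step to recover a half-finished upgrade
    sudo dpkg --configure -a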

    Read the article

  • Migrating IBM ClearCase to TFS

    - by Bob Hardister
    Using the Team Foundation Server Integration Tools Platform. Versions: ClearCase 7.1.1.2; Team Foundation Server 2012 RTM; Integration Tools 2.2.20314.1; OS Windows 2008 R2 ENT SP1. I was able to do a simple example migration of a few files by using the following approach:
    - Using a dynamic view
    - Creating a view shortcut drive (i.e. Z:\)
    - Running the tools as a UI client (not as a Windows service)
    - Running the tools UI in user mode (do not "Run as Administrator")
    - Using the CC detailed history adapter
    - Selecting the view shortcut drive (i.e. Z) in the Tools UI Connect to CC dialog
    - Selecting the "Detect Changes in CC" option in the Tools UI Connect to CC dialog
    - Changing the DisableTargetAnalysis value to True in the Tools UI configuration view
    I have yet to perform actual migrations for real projects, but will update this blog as I do.

    Read the article

  • Oracle EBS R12 on Exadata V2, MAA and High Performance

    - by longchun.zhu
    A hands-on workshop agenda:
    1. Oracle EBS R12 on Database Machine MAA & Performance Architecture
    2. Oracle EBS R12 Single Instance Node Deployment Procedure
    3. Start Rapid Install Wizard
    4. Oracle EBS R12 Single Instance Node Chinese Patch Update
    5. Applying Patches
    6. Upgrade Application Database Version to 11g Release 2
    7. Database Upgrade
    8. Deploy Clone Application Database to Sun Oracle Database Machine
    9. Migrate Application Database File System to Exadata ASM Storage
    10. Convert Application Database Single Instance to RAC
    11. Configure High Availability & High Performance Architecture with Exadata

    Read the article

  • Team Foundation Server Setup/Access

    - by Angel Brighteyes
    What I need: a TFS 2010 setup that allows two application developers to access TFS from remote locations.
    How it is set up:
    - Server 2008 Standard, 2 GB RAM, 300 GB HD space
    - SharePoint Server 2007, using SQL Server 2005
    - SQL Server 2008 Standard
    - Team Foundation Server 2010
    - IIS 7
    - SharePoint bindings: TFS.DynAccount.Me:80; TFS:80
    - TFS bindings: TFS.DynAccount.Me:8080; TFS:8080
    I am using the DynDNS service to account for the dynamic IP address; this is a requirement for the moment, until I can get a better ISP package. Access is via local accounts; the server is not set up on a domain, or as a domain, so consequently I did not set up AD services.
    Problem: when logged into TFS with my credentials TFS\AdminUser through the DynDNS name TFS.DynAccount.Me, I receive the 'Red X of Death' on the Documents and Reports folders. When logged into TFS over the local peer-to-peer network using the same credentials TFS\AdminUser, I do not receive the 'Red X of Death'.
    Further troubleshooting: when users right-click 'TeamProject1' and click 'Show Project Portal', it tries to take them to http://TFS:8080 instead of http://TFS.DynAccount.Me:8080. From further research, I am assuming this is because Team Foundation Server was set up with a local name of TFS instead of 'TFS.DynAccount.Me', as described in Visual Studio Magazine's "The Red X of Death". Users can access the team portal for SharePoint via http://TFS.DynAccount.Me/TeamCollection/TeamProject, so it is not as if we are dead in the water or anything. However, as most employees/staff are prone to do, they have expressed a great distaste for having to do it this way and for having to be patient until the current project is finished, since we are under a very strict deadline.
    Is there a way to set this up differently, change some settings someplace, reinstall it, point a CNAME record for our domain website to the DynDNS account (e.g. TFS.OurDomain.com points to TFS.DynAccount.Me, which does allow access to the HTTP site without issues), or something else? After all the time and effort I have put into this (first the cost, second the bloody install, third learning SharePoint well enough, fourth the hours turning into days spent on this, fifth more troubleshooting, sixth employee headaches), I really don't feel like just letting it lie where it is. I figure that in my spare/off time I will keep trying to get this to work, so I really appreciate any help anyone can give me. I know this is probably something really stupid-simple that I will facepalm over, but at the moment the stress and frustration just have me beat. Thank you again; this community has always been a great help.

    Read the article

  • Repository bugzilla package changed to bugzilla3 in Lenny; upgradable?

    - by Pukku
    This question was asked on debianhelp.org almost half a year ago but never got an answer. I wasn't the one who posted it, but today I was facing exactly the same question. I'm not sure if copying it here as-is is considered inappropriate or something, but there isn't really anything I would even want to paraphrase, so let's just go. (I'm sure you will be happy to close it if this is not the way to go :) Hello all! We are using a Bugzilla server install on a Debian 4/Etch server and are starting to look at the upgrade to Debian 5/Lenny. I was hoping to upgrade the existing Bugzilla server and database from oldstable (v2.22) to the newer stable in Lenny (v3) when we get to doing a dist-upgrade. However, from testing in a virtual machine it seems that the old package was called "bugzilla" whereas the Lenny package is called "bugzilla3", and I could not figure out a way to upgrade directly between the two. Is it possible to establish some kind of upgrade path quickly after the dist-upgrade to minimise downtime, using apt-get or aptitude? Going on past experience, I would not want to do a fresh install with the bugzilla3 package and attempt to inject the old database into it (previous attempts failed miserably!) :(
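
    For anyone in the same spot, a rough sketch of one possible path (the database name and the checksetup.pl location are assumptions that vary with your setup and with the Debian packaging, so treat this as an outline rather than a recipe):
    # after the dist-upgrade to Lenny, install the renamed package
    sudo apt-get install bugzilla3
    # back up the existing Bugzilla database first ("bugzilla" is an assumed name)
    mysqldump -u root -p bugzilla > bugzilla-2.22-backup.sql
    # point the bugzilla3 localconfig at the existing database, then let upstream's
    # checksetup.pl migrate the schema (the path shown is an example, not the packaged one)
    sudo perl /usr/share/bugzilla3/lib/checksetup.pl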

    Read the article

  • Oracle Open World starts on Sunday, Sept 30

    - by Mike Dietrich
    Oracle Open World 2012 starts on Sunday this week, and we are really looking forward to seeing you in one of our presentations, especially "Database Upgrade on Steroids: Real Speed, Real Customers, Real Secrets" on Monday, Oct 1, 12:15pm in Moscone South 307 (just skip the lunch; the boxed food is not healthy at all):
    Monday, Oct 1, 12:15 PM - 1:15 PM - Moscone South - 307
    Database Upgrade on Steroids: Real Speed, Real Customers, Real Secrets
    Mike Dietrich - Consulting Member Technical Staff, Oracle; Georg Winkens - Technical Manager, Amadeus Data Processing; Carol Tagliaferri - Senior Development Manager, Oracle
    Looking to improve the performance of your database upgrade and learn about other ways to reduce upgrade time? Isn't everyone? In this session, you will learn directly from Oracle's Upgrade Development team about what you can do to speed things up. Find out about ways to reduce upgrade downtime, such as using a transient logical standby database and/or Oracle GoldenGate, and get other hints and tips. Learn about new features that improve upgrade performance and reduce downtime. Hear Georg Winkens, DB Services technical manager from Amadeus, speak about his upgrade experience, and get real-life performance measurements and advice for a successful upgrade.
    And don't forget: we already start on Sunday, so if you'd like to learn about the SAP database upgrades at Deutsche Messe:
    Sunday, Sep 30, 11:15 AM - 12:00 PM - Moscone West - 2001
    Oracle Database Upgrade to 11g Release 2 with SAP Applications
    Andreas Ellerhoff - DBA, Deutsche Messe AG; Mike Dietrich - Consulting Member Technical Staff, Oracle; Jan Klokkers - Sr. Director SAP Development, Oracle
    Deutsche Messe began to use Oracle6 Database at the end of the 1980s and has been using Oracle Database technology together with SAP applications successfully since 2002. At the end of 2010, it took the first steps of an upgrade to Oracle Database 11g Release 2 (11.2), and since mid-2011 all SAP production systems there have run successfully on Oracle Database 11g. This presentation explains why Deutsche Messe uses Oracle Database together with SAP applications, discusses the many reasons for the upgrade to Release 11g, and focuses on the top operational aspects from a DBA perspective.
    And unfortunately the hands-on lab is sold out already... We would like to apologize, but we have absolutely ZERO influence on either the number of runs or the number of available seats.
    Tuesday, Oct 2, 10:15 AM - 12:45 PM - Marriott Marquis - Salon 12/13
    Hands-On Lab: Upgrading an Oracle Database Instance, Using Best Practices
    Roy Swonger - Senior Director, Software Development, Oracle; Carol Tagliaferri - Senior Development Manager, Oracle; Mike Dietrich - Consulting Member Technical Staff, Oracle; Cindy Lim - PMTS, Oracle; Carol Palmer - Principal Product Manager, Oracle
    This hands-on lab gives participants the opportunity to work through a database upgrade from an older release of Oracle Database to the very latest Oracle Database release available. Participants will learn how the improved automation of the upgrade process and the generation of fix-up scripts can quickly help fix database issues prior to upgrading. The lab also uses the new parallel upgrade feature to improve performance of the upgrade, resulting in less downtime. Come get inside information about database upgrades from the Database Upgrade development team.
    See you soon

    Read the article

  • New Slides - and a discussion about Dictionary Statistics

    - by Mike Dietrich
    First of all, we have just uploaded a new version of the Upgrade and Migration Workshop slides with some added information, so please feel free to download them from here. The slides contain one new interesting piece of information which led to a discussion I've had in the past days with a very large customer regarding their upgrades, and internally on a mailing list targeting an EBS database upgrade from Oracle 10.2 to Oracle 11.2: why are we creating dictionary statistics during upgrade? I believe this forced dictionary statistics creation was introduced with the desupport of the Rule Based Optimizer in Oracle 10g. The goal: as the RBO is not supported anymore, we have to make sure that the data dictionary has fresh, non-stale statistics. In Oracle 9i that could lead to strange behaviour in some databases, so in Oracle 9i this was strongly discouraged. The upgrade scripts got hardcoded to create these stats. But during tests we had the following findings: it's important to create dictionary statistics the night before the upgrade. Not two weeks before, not 60 minutes before your downtime begins, but very close to the upgrade. From Oracle 10g onwards you'd just run:
    SQL> EXECUTE DBMS_STATS.GATHER_DICTIONARY_STATS;
    This is important to make sure you have fresh dictionary statistics during the upgrade, for performance reasons. Tests have shown that running an upgrade without valid dictionary statistics can slow down the whole upgrade by a factor of 2x-3x. It would also be a great idea to gather fresh dictionary statistics again after the upgrade if you suppressed the statistics creation during the upgrade process. Suppress? Yes, you can set this underscore parameter in the init.ora to suppress the forced dictionary statistics collection during an upgrade:
    _optim_dict_stats_at_db_cr_upg=FALSE
    We strongly recommend that you (a) use the default statistics gathering process, which creates dictionary statistics by default, and (b) create fresh dictionary statistics before the upgrade. Once you have followed that advice, we consider it safe to use the underscore parameter during the upgrade. And we've taken out that forced statistics collection during upgrade in the next release of the database. Please note: if you are using the DBUA for the upgrade, it will remove underscore parameters for the upgrade run to improve performance, which is generally a good idea. So you'll have to start the DBUA with this call:
    $ dbua -initParam "_optim_dict_stats_at_db_cr_upg"=FALSE
    -Mike
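
    Putting those pieces together, a minimal sketch of the sequence might look like this (the commands are the ones quoted above; prompts and timing are illustrative):
    -- the night before the upgrade, refresh the dictionary statistics
    SQL> EXECUTE DBMS_STATS.GATHER_DICTIONARY_STATS;
    # start the DBUA with the forced dictionary statistics collection suppressed
    $ dbua -initParam "_optim_dict_stats_at_db_cr_upg"=FALSE
    -- after the upgrade completes, gather fresh dictionary statistics again
    SQL> EXECUTE DBMS_STATS.GATHER_DICTIONARY_STATS;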

    Read the article

  • TFS 2010 - TF14040 The Folder may not be checked out.

    - by Patricker
    I have a .NET 4 website in VS2010 stored in a TFS 2010 team project. I need to add a reference to System.Data.Linq.dll to the website; I am referencing a LINQ DataContext that is defined in another project, and I get build errors saying that I need the reference to System.Data.Linq. I go up to the "Add Reference" menu option and add it like I would any normal reference, and it even shows up in the Web.config and in the property pages for the website... BUT if I build, I still get the same error. So I found a place in my code where I was referencing the LINQ Count function; it told me the call was invalid because I was missing a reference and offered to add the reference automatically. I told it to add the reference automatically, and it is at this point that I get the error mentioned in the subject: TF14040: The folder $/Folder/Subfolder may not be checked out. No items were checked out. I've done some research online but haven't been able to find much. I saw on a blog that making the folder not read-only fixed the issue for him, but it didn't seem to work for me, unless I misunderstood something. I tried loading up the project from source control onto a fresh computer where that project had never been loaded before, and I can reproduce the issue the same way. Help would be greatly appreciated.

    Read the article

  • How to access "Custom" or non-System TFS workitem fields using PowerShell?

    - by DaBozUK
    When using PowerShell to extract information from TFS, I find that I can get at the standard fields but not "Custom" fields. I'm not sure custom is the correct term, but for example, if I look at the Process Editor in VS2008 and edit the work item type, there are fields such as those listed below, with Name, Type and RefName:
    - Title, String, System.Title
    - State, String, System.State
    - Rev, Integer, System.Rev
    - Changed By, String, System.ChangedBy
    I can access these with Get-TfsItemHistory:
    Get-TfsItemHistory "$/path" -Version "D01/12/10~" -R | Select -exp WorkItems | Format-Table Title, State, Rev, ChangedBy -Auto
    So far so good. However, there are also some other fields in the work item type, which I'm calling "Custom" or non-System fields, e.g.:
    - Activated By, String, Microsoft.VSTS.Common.ActivatedBy
    - Resolved By, String, Microsoft.VSTS.Common.ResolvedBy
    And the following command does not retrieve the data, just spaces:
    Get-TfsItemHistory "$/path" -Version "D01/12/10~" -R | Select -exp WorkItems | Format-Table ActivatedBy, ResolvedBy -Auto
    I've also tried the names in quotes and the fully qualified refname, but no luck. How do you access these "non-System" fields? Thanks, Boz
    UPDATE: From Keith's answer I can get the fields I need:
    Get-TfsItemHistory "$/Hermes/Main" -Version "D01/12/10~" -Recurse `
    | Select ChangeSetId, Comment -exp WorkItems `
    | Select ChangeSetId, Comment, @{n='WI-Id'; e={$_.Id}}, Title -exp Fields `
    | Where {$_.ReferenceName -eq 'Microsoft.VSTS.Common.ResolvedBy'} `
    | Format-Table ChangesetId, Comment, WI-Id, Title, @{n='Resolved By'; e={$_.Value}} -Auto

    Read the article

  • WiX major upgrade! Need different behaviors for different components...

    - by Joshua
    Okay! I have finally more closely identified the problem I'm having. In my installer, I was attempting to get a settings file to REMAIN INTACT on a major upgrade. I finally got this to work with the suggestion to set:
    <InstallExecuteSequence>
      <RemoveExistingProducts After="InstallFinalize" />
    </InstallExecuteSequence>
    This is successful in forcing this component to leave the original, not replacing it if it exists:
    <Component Id="Settings" Guid="A3513208-4F12-4496-B609-197812B4A953" NeverOverwrite="yes">
      <File Id="settingsXml" KeyPath="yes" ShortName="SETTINGS.XML" Name="Settings.xml" DiskId="1" Source="\\fileserver\Release\Pathways\Dependencies\Settings\settings.xml" Vital="yes" />
    </Component>
    HOWEVER! This is a problem! I have another component listed here:
    <Component Id="Database" Guid="1D8756EF-FD6C-49BC-8400-299492E8C65D" KeyPath="yes">
      <File Id="pathwaysMdf" Name="Pathways.mdf" DiskId="1" Source="\\fileserver\Shared\Databases\Pathways\SystemDBs\Pathways.mdf" Vital="yes"/>
      <File Id="pathwaysLdf" Name="Pathways_log.ldf" DiskId="1" Source="\\fileserver\Shared\Databases\Pathways\SystemDBs\Pathways.ldf" Vital="yes"/>
    </Component>
    And this component MUST BE REPLACED on a major upgrade. I can only accomplish this so far by setting:
    <RemoveExistingProducts After="InstallInitialize" />
    THIS BREAKS THE FIRST FUNCTIONALITY I NEED WITH THE SETTINGS FILE. HOW CAN I DO BOTH?!

    Read the article

  • Slides from Upgrade webcast

    - by Alex Blyth
    Thanks everyone for attending the webcast on "Upgrading to Oracle 11g". I hope there were some useful tips for everyone. My apologies for the issue with the audio streaming; I'll re-record the session later this week and hopefully have it available soon thereafter. As I mentioned, the next session, on Oracle VM and Oracle Enterprise Linux, is on April 28, 2010. Please click here to enroll. As for the slides... here they are: you can download the slides here. Upgrade to Oracle 11g - view more presentations from Oracle Australia. Thanks again. Cheers, Alex

    Read the article

  • Exadata Storage Server software upgrade is a new era in Patching

    - by Luis Moreno Campos
    Since it was first released, the Exadata Storage Server software has had patch releases like every other piece of software on the planet. Storage administrators would normally handle this, but by some weird tradition, no matter what layer of the technology it is, if it says "Oracle" on it, IT managers will immediately associate it with a task for the DBA. That is not really the case, but if it falls into a DBA's lap, fear no evil. The last patch released for Exadata cells is a true masterpiece in patching technology. That sentence is not mine; it comes from both the customer and the partner who witnessed how 3 Exadata cells were patched in less than 4 hours, after 12 months without a single upgrade. The patch manager that takes care of everything will patch not only the software but also the firmware and the operating system. And you know it will all work out, because back in the lab everything was already tested. All you have to do is stare at the three Sun ILOM windows for the three cells and watch as they boot and reboot, patching and fixing all layers of the storage machines to the latest versions. It's a new era in patching technology! LMC
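
    For readers who have not seen it, the cell patching described here is driven by the patchmgr utility shipped with the Exadata cell patch. A minimal sketch of a typical run (the cell_group file listing the cell hostnames and the rolling option are examples; always follow the README of the specific patch):
    # check prerequisites across all cells listed in cell_group
    ./patchmgr -cells cell_group -patch_check_prereq
    # apply the patch, optionally one cell at a time to keep the databases up
    ./patchmgr -cells cell_group -patch -rolling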

    Read the article

  • Black screen when running xubuntu 13.10 after upgrade

    - by user213030
    I have a Xubuntu 12 installation that I upgraded to 13.10. Since the upgrade I get a black screen at startup. I can get to the console and log in. How can I run it in graphical mode? I run it on Oracle VirtualBox. The last output I see is:
    Starting the VirtualBox Guest Additions ...done.
    Starting VirtualBox Guest Addition service ...done.
    saned disabled; edit /etc/default/saned;
    * Restoring resolver state... [ OK ]
    And it hangs on this.

    Read the article

  • Kernel Panic: Not booting after upgrade from 10.04 to 12.04

    - by Jitesh
    I upgraded from 10.04 to 12.04 LTS. The upgrade went fine, and I even restarted a couple of times. Then the next day, while booting into Ubuntu, after GRUB it gave the error: Kernel panic - not syncing: VFS: Unable to mount root fs on unknown-block(0,0). I then booted into the live CD and tried the following commands, based on other posts on this forum:
    sudo fdisk -l
    As the root partition appeared to be /dev/sda1:
    sudo mount /dev/sda1 /mnt
    sudo mount --bind /dev /mnt/dev
    Now I got the message: mount: mount point /mnt/dev does not exist. Then I tried:
    sudo mount --bind /proc /mnt/proc
    Again I got the message: mount point /mnt/proc does not exist. Then I tried:
    sudo chroot /mnt
    and got: chroot: failed to run command '/bin/bash': No such file or directory. Now I have no clue what to do next and am unable to boot into Ubuntu. Please help. Jitesh
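
    For reference, the usual live-CD chroot repair looks roughly like the sketch below, and it only works if the partition you mount really is the root filesystem; the "does not exist" and "/bin/bash: No such file or directory" errors above suggest /dev/sda1 may not be it (it could be a /boot or swap partition), so check the fdisk -l output carefully first. Device names here are examples:
    # mount the real root partition (replace /dev/sdXY with your root partition)
    sudo mount /dev/sdXY /mnt
    # bind the virtual filesystems the chroot will need
    sudo mount --bind /dev /mnt/dev
    sudo mount --bind /proc /mnt/proc
    sudo mount --bind /sys /mnt/sys
    # enter the installed system
    sudo chroot /mnt
    # then, inside the chroot, reinstall the kernel and refresh GRUB
    apt-get install --reinstall linux-image-generic
    update-grub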

    Read the article

  • Lucid hangs at booting after kernel upgrade

    - by Thomas Deutsch
    This weekend, one of our servers running Lucid installed some upgrades:
    libgcrypt11 1.4.4-5ubuntu2.1
    linux-firmware 1.34.14
    linux-image-2.6.32-41-generic 2.6.32-41.91
    linux-libc-dev 2.6.32-41.91
    Afterwards it rebooted, since this was a kernel upgrade. Now it hangs while booting, after /scripts/init-bottom. init-bottom itself should not be the problem; the last line I can see is "done", so the problem has to be shortly after that. http://manpages.ubuntu.com/manpages/hardy/man8/initramfs-tools.8.html tells me that the next step is that procfs and sysfs are moved to the real rootfs and execution is turned over to the init binary, which should now be found in the mounted rootfs. But I don't know how and where. The problem exists with older kernels too, and this doesn't fix it: http://www.tummy.com/journals/entries/jafo_20111003_160440 Does anyone have an idea?

    Read the article

  • Volume indicator issue after xubuntu 13.10 upgrade

    - by misterjinx
    Last night I decided to upgrade my system to the latest version of Xubuntu, 13.10. The process went fine, but now I'm facing a strange issue: there are no sound settings available in the Settings Manager window, and the volume indicator looks the way it does when the volume is muted, with clicking on the indicator broken as well. I tried an alsa force-reload followed by a restart of the computer, but it didn't help. Any thoughts? Later edit: after some digging I found out that the volume control exists, so this must be a volume indicator issue.

    Read the article

  • Upgrade Workshop in Sydney - Recap

    - by Mike Dietrich
    Late, but hopefully not too late: a big THANK YOU to everybody who attended the Upgrade and Migration Workshop in Sydney at the Cliftons last week. You were a really good crowd; thanks for all your questions and the great conversations in the breaks, and thanks to the local marketing team for the excellent organization. We're looking forward to seeing you next time, with all your databases then live on Oracle Database 11.2. To download the slides, please find them in the Slides Download Center to your right, or use the direct link to download the workshop slide deck. And I really don't understand how you can go to work every day (or to a workshop) with such beaches nearby... I would immediately change my job profile. Honestly, Sydney is really a great place. Australia and New Zealand generally are wonderful places, and we've met so many great people in Perth, Brisbane, Melbourne, Wellington, Sydney and during our travels in between. If only there weren't over 20 hours of pure flight time between Germany and Down Under. Hope to see you all again next time for 12c. -Mike

    Read the article

  • upgrade from ubuntu 13.04 to 13.10 causes vmware workstation 9 problems

    - by dan
    The upgrade caused a problem where running VMware Workstation 9 needed patches to accommodate Linux kernel 3.11. I applied the fixes I found that others reported, and now I can only run VMware Workstation 9 via sudo. If I run it as a standard user, it says it wants to recompile modules, which it does not do unless I open a terminal and run sudo vmware. That works, but I would like it to work correctly and have the recompiled modules stick. When running under sudo vmware, it does recompile, with errors: (vmware-unity-helper:13019): Gtk-WARNING **: Unable to locate theme engine in module_path: "murrine", and then starts up and works OK. Any ideas? Thanks for any help you can provide.

    Read the article

  • Dell wireless not working after upgrade

    - by Omer Saeed
    So, the short version of my sad story is that I tried to upgrade Ubuntu to 12.04 and the wireless driver has stopped working. I have tried all the solutions but nothing seems to work. When I try to install my wireless driver from "Additional Drivers", it says: Sorry, installation of this driver failed. Please have a look at the log file for details: /var/log/jockey.log The lspci command gives me the following info about the wireless card: 0c:00.0 Network controller: Broadcom Corporation BCM4312 802.11b/g LP-PHY (rev 01) I have tried removing the bcm drivers and reinstalling, but nothing seems to be working. rfkill looks good too.

    Read the article
