Search Results


  • Why don't my Fn keys work for brightness or media after upgrading?

    - by Adina G
    I recently upgraded from 11.04 to 11.10. After the upgrade, I can no longer adjust the screen brightness or the volume using the keyboard (before the upgrade, Fn+F4, Fn+F11, etc. worked). Using Fn+F2 to disable wireless still works, so I guess the Fn key itself is being recognised. I tried to follow the instructions here, but I don't have a file in /etc/X11 called xorg.conf. I also tried following this workaround, but it had no noticeable effect. I've also tried going to Settings → Keyboard → Shortcuts and reassigning the brightness and volume controls, both to the default keys and to new combinations; these changes have no effect even after rebooting. Googling has turned up bug reports where pressing the media keys brings up a "no entry" sign rather than changing the volume, but when I press the keys there's no response at all. I've also seen various people say a workaround is to have totem running in the background; this doesn't work for me either. Finally, I tried installing keytouch; I was able to install keytouch-editor but got the message "Unable to locate package keytouch". Any more ideas? I'd be very grateful if anyone could help me (even by pointing to a thread I've missed)!
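
    One way to narrow down where the keypresses get lost (not from the original post; these are standard X/ACPI diagnostic commands, and the sysfs path varies by machine):

        # Watch raw X key events; press Fn+F4 etc. and look for XF86MonBrightness*/XF86Audio* keysyms
        xev | grep -A2 --line-buffered KeyPress
        # If xev sees nothing, check whether the ACPI layer reports the key at all
        acpi_listen
        # Brightness can usually be driven through sysfs directly as a stopgap
        cat /sys/class/backlight/*/max_brightness
        echo 5 | sudo tee /sys/class/backlight/*/brightness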

  • Ubuntu 10.04: boot error for custom compiled kernel - gave up waiting for root device

    - by atharva
    I have installed Lucid on my Lenovo laptop (Y410 series, x86 platform) and it is working fine. Now I have compiled kernel 2.6.37 downloaded from the kernel tree. I followed the usual procedure for compiling a kernel (make menuconfig, make, make modules, etc.). Then I created the initrd image using mkinitramfs and updated my grub using the update-grub command. update-grub detects the initrd image of the compiled kernel. However, when I boot from this kernel it gives me the following error:

        Gave up waiting for root device. Common problems:
        - Boot args (cat /proc/cmdline)
          - Check rootdelay= (did the system wait long enough?)
          - Check root= (did the system wait for the right device?)
        - Missing modules (cat /proc/modules; ls /dev)
        ALERT! root=UUID=/... does not exist

    and then it drops to the initramfs prompt. I have tried the following solutions discussed in different Ubuntu forums: disabling the UUID and pointing root=/dev/sda8 (sda8 is where my kernel image resides, both the default kernel and the compiled one) in /etc/default/grub, and compiling the kernel with CONFIG_DEVTMPFS=y as suggested here. Still I am unable to boot from the compiled kernel. Could someone please suggest a solution?
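
    A quick way to check whether the root= UUID grub passes actually matches the partition, and to rebuild the initramfs (a sketch; /dev/sda8 and the kernel version string are taken from the post):

        # Compare the UUID grub passes with the real filesystem UUID
        sudo blkid /dev/sda8
        grep -n 'root=' /boot/grub/grub.cfg
        # Rebuild the initramfs for the custom kernel and regenerate grub.cfg
        sudo update-initramfs -c -k 2.6.37
        sudo update-grub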

  • Is there a standard way to track 2d tile positions both locally and on screen?

    - by Magicked
    I'm building a 2D engine based on 32x32 tiles with OpenGL. OpenGL draws from the top left, so Y coordinates go down the screen as they increase. Obviously this is different from a standard graph, where Y coordinates move up as they increase. I'm having trouble determining how I want to track positions for both sprites and tile objects (objects that are collections of tiles). My brain wants to set the world position as the bottom left of the object and track every object this way. The problem with this is that I would have to translate it to an on-screen position when rendering. The positive is that I could easily visualize (especially in the case of objects made of multiple tiles) how something is structured and needs to be built. Are there standard ways of doing this? Should I just suck it up and get used to positions beginning in the top left? Here are the OpenGL calls to start rendering:

        // enable textures since we're going to use these for our sprites
        glEnable(GL_TEXTURE_2D);
        glClearColor(0.0f, 0.0f, 0.0f, 0.0f);
        // enable alpha blending
        glEnable(GL_BLEND);
        glBlendFunc(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA);
        // disable the OpenGL depth test since we're rendering 2D graphics
        glDisable(GL_DEPTH_TEST);
        glMatrixMode(GL_PROJECTION);
        glLoadIdentity();
        glOrtho(0, WIDTH, HEIGHT, 0, 1, -1);
        glMatrixMode(GL_MODELVIEW);

    I assume I need to change:

        glOrtho(0, WIDTH, HEIGHT, 0, 1, -1);

    to:

        glOrtho(0, WIDTH, 0, HEIGHT, 1, -1);
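
    For what it's worth, if the top-left projection above is kept, a bottom-left world position converts to screen space with a single flip per draw call (a sketch; TILE_SIZE = 32 and HEIGHT is the window height in pixels, names assumed):

        screen_y = HEIGHT - (world_y + TILE_SIZE)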

  • Users can benefit from Session Tracking

    I used to work for a large dental plan marketing website a few years ago; they had a large customer-driven website that sold dental plans to consumers. Their website started tracking users as soon as they hit the web servers, and then logged everything it could about each user. There are a lot of benefits to session tracking, for both the user and the website. Users benefit because a website can retain pertinent information for them so that they do not have to re-enter the same information repeatedly. In addition, websites can hold specific items in a cart for each user so that they can pay for all of their items at once when they are ready to complete their purchases. Websites also benefit because they can determine where a specific user came from and which advertising partner produced a sale; this information is very useful when deciding where to spend an advertising budget. There is only one real disadvantage when it comes to session tracking: users cannot really control what is actually tracked by a website. Yes, they can disable cookies, and this will help, but that means that no tracking can be done at all, and most sites require users to have cookies enabled in order to make purchases or log in to their accounts.

  • Booting off a ZFS root in 14.04

    - by RJVB
    I've been running a Debian derivative (LMDE) on a ZFS root for half a year now. It was created by cloning a regular ext4-based install, with all the necessary packages, onto a ZFS pool, chrooting into that pool, and recreating a grub menu and bootloader. The system uses a dedicated ext3 /boot partition. I would like to do the same with Ubuntu 14.04, but have encountered several obstacles. There is no Trusty zfs-grub package, and the default grub package doesn't have ZFS support built in; I found a small bug in the build system responsible for that (report with patch created) and built my own grub packages. The built-in ZFS support is dysfunctional (it does not add the proper arguments to the kernel command line), so I instead installed the ZoL grub package I also use on my LMDE system, which does give me a correct grub.cfg. However, even with that correct grub.cfg, the boot process apparently doesn't retrieve the bootfs parameter from the ZFS pool; the variable that's supposed to receive the value remains empty. As a result, initrd tries to load the default pool ("rpool"), which fails of course. I can however import the pool by hand and complete the process by hand. If memory serves me well, I also had to disable apparmor to keep the boot process from blocking after importing the pool. Am I overlooking something? Just for comparison, I installed the Ubuntu 3.13 kernel on my LMDE system, and that works just fine (i.e. the identical kernel and grub binaries allow successful booting without glitches on LMDE but not on Ubuntu).
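
    A way to check whether the pool actually carries a bootfs value for the initrd to pick up (a sketch; "rpool" is taken from the post, the dataset name is an example):

        # Show the boot dataset recorded on the pool
        zpool get bootfs rpool
        # Set it explicitly if it comes back empty ('-' in the VALUE column)
        sudo zpool set bootfs=rpool/ROOT/ubuntu-14.04 rpool
        # Manual recovery from the initramfs prompt, as described in the post
        zpool import -N rpool
        exit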

  • How do I turn off PCI devices?

    - by ethana2
    With the purchase of an Intel SSD and an 85 Wh Li-ion battery, the linking of wifi and bluetooth to my laptop's wireless switch, extensive Intel PowerTOP usage, switching from compiz to metacity, stopping the desktop-couch daemon, removal of Ubuntu One and several other services from my startup, disabling of everything possible in my BIOS, and physical removal of my optical drive, I've gotten my battery life up fairly high, but I think there's still more to be done. Specifically, when I'm in class taking notes, I want to temporarily but completely power down: Ethernet, FireWire, the USB ports, the SD card reader, the optical drive, the webcam, the sound card, and the PCMCIA slot. I want to do this without turning them off in my BIOS like they are now, if possible, because then I have to restart my computer to use any of them. As it stands, I still haven't managed to power down FireWire, the USB connection to the webcam, and the sound card. How do I tell Linux to disable and power down these devices? Is it true that any PCI slot can be physically powered down? My current idle power consumption is 7.9 watts plus the screen (10.0 W at minimum brightness). Also, how do I set the screen timeout to ten seconds? gconf-editor isn't honoring it when I set it to that. Will switching from nVidia to Nouveau save any significant amount of power?
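
    A sketch of powering devices down at runtime through sysfs (the PCI address and driver/module names below are examples; take the real ones from lspci -k):

        # Identify each device and the driver bound to it
        lspci -k
        # Unbind the driver from a device (example: an HDA sound controller at 00:1b.0)
        echo 0000:00:1b.0 | sudo tee /sys/bus/pci/drivers/snd_hda_intel/unbind
        # Or drop the module entirely (example: FireWire)
        sudo modprobe -r firewire_ohci
        # Allow runtime power management on an idle device
        echo auto | sudo tee /sys/bus/pci/devices/0000:00:1b.0/power/control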

  • Dynamic Monitoring Service (DMS) Configuration Dumping and CPU Utilization

    - by ShawnBailey
    There was recently a report of CPU spikes on a system that were occurring at precise 3-hour intervals. Research revealed that the spikes were the result of the Dynamic Monitoring Service generating a metrics dump and writing it under the server 'logs' folder for every WLS server in the domain. This blog provides some information on what this is for and how to control it. The Dynamic Monitoring Service is a facility in FMW (JRF, to be more precise) that collects runtime data on the components deployed to WebLogic. Each component is responsible for how much or how little it uses the service, and SOA collects a fair amount of information. To view what is collected on any running server you can use the URL http://host:port/dms/Spy and log in with admin credentials. DMS is essentially always running and collecting this information at runtime, and to protect against loss of this data it also runs automatic backups, by default at the 3-hour interval mentioned above. Most of the management options for DMS are exposed through WLST, but these settings are not, so we must open the dms_config.xml file, which can be found at DOMAIN_HOME/config/fmwconfig/servers/<server_name>/dms_config.xml. The contents are fairly short, and at the bottom you will find the following entry:

        <dumpConfiguration>
            <dump intervalSeconds="10800" maxSizeMBytes="75" enabled="true"/>
        </dumpConfiguration>

    The interval of 10800 seconds corresponds to the 3 hours, and the maximum size is 75MB. The file is written as an archive to DOMAIN_HOME/servers/<server_name>/logs/metrics; this archive contains the dump in XML format. You can disable the dumps altogether by simply setting the 'enabled' value to 'false', or of course you could modify the other parameters to suit your needs. Disabling the dumps will NOT impact DMS collection or display at runtime. It will only eliminate these periodic backups.
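
    A quick way to find and inspect the dump settings for every server in a domain (a sketch; $DOMAIN_HOME is assumed to point at your WebLogic domain directory):

        find "$DOMAIN_HOME/config/fmwconfig/servers" -name dms_config.xml \
            -exec grep -H "dump " {} \;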

  • Google Music Player doesn't work

    - by EricoPF
    I'm trying to log in to the Google Music Player application, but it doesn't work. I get the message below:

        Login Failed
        Could not identify your computer. Learn More

    Google Help says that it doesn't run on virtual machines, which is not my case, though I do have VirtualBox installed, and it says some people get it to work if they disable their bridge network. The thing is, I don't have any bridge interface; even if I remove all VirtualBox modules I still get this message. This is my ifconfig output:

        lo    Link encap:Local Loopback
              inet addr:127.0.0.1  Mask:255.0.0.0
              inet6 addr: ::1/128 Scope:Host
              UP LOOPBACK RUNNING  MTU:16436  Metric:1
              RX packets:15374 errors:0 dropped:0 overruns:0 frame:0
              TX packets:15374 errors:0 dropped:0 overruns:0 carrier:0
              collisions:0 txqueuelen:0
              RX bytes:1455889 (1.4 MB)  TX bytes:1455889 (1.4 MB)

        wlan0 Link encap:Ethernet  HWaddr 94:db:c9:b2:1b:d7
              inet addr:192.168.1.100  Bcast:192.168.1.255  Mask:255.255.255.0
              inet6 addr: fe80::96db:c9ff:feb2:1bd7/64 Scope:Link
              UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
              RX packets:828467 errors:0 dropped:0 overruns:0 frame:0
              TX packets:568040 errors:0 dropped:0 overruns:0 carrier:0
              collisions:0 txqueuelen:1000
              RX bytes:1086663025 (1.0 GB)  TX bytes:72984931 (72.9 MB)

    Any ideas? Thanks guys! Cheers.
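
    A couple of quick checks for leftover virtual interfaces the player might be tripping over (not from the original post; standard iproute2/module commands):

        # List every interface the kernel knows about, including down ones that plain ifconfig hides
        ip link show
        # Check whether any VirtualBox modules are still loaded
        lsmod | grep -i vbox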

  • New functionality in TFS Build Manager – Managing Triggers and Build Resources

    - by Jakob Ehn
    Yesterday we pushed out a new release (August 2012) of the Community TFS Build Extension, including a new version of the Community TFS Build Manager (1.0.4.6). The two big new features in the Build Manager in this release are:

    Set Triggers
    It is now possible to select one or more build definitions and update the triggers for them in one simple operation. You'll note that we have started collapsing the context menu a bit; the list of commands is getting long! When selecting the Trigger command, you'll see a dialog where the options should be self-explanatory. The only thing missing here is the Scheduled trigger option; you'll have to do that using Team Explorer for now.

    Manage Build Resources
    The other feature is that it is now possible to view the build controllers and agents in your current collection and also perform some actions against them. The new functionality is available by selecting the Build Resources item in the drop-down menu. Selecting this, you'll see a (sort of) hierarchical view of the build controllers and their agents. In this view you can quickly see all the resources and their status. You can also view the build directory of each build agent and the tags that are associated with them. On the action menu, you can enable and disable both agents and controllers (several at a time), and you can also select to remove them. By selecting Manage, you'll be presented with the standard Manage Controller dialog from Visual Studio where you can set the rest of the properties. Hopefully we'll be able to implement most of the existing functionality so that we can remove that menu option. Our plan is to add more functionality to this view, such as adding new agents/controllers, restarting build service hosts, and maybe viewing diagnostic information such as disk space and error logs.

    Hope you'll find the new functionality useful. Remember to log any bugs and feature requests on the CodePlex site. Happy building!

  • A Dozen USB Chargers Analyzed; Or: Beware the Knockoffs

    - by Jason Fitzpatrick
    When it comes to buying a USB charger, one is just as good as another, so you might as well buy the cheapest one, right? This interesting and detailed analysis of name-brand, off-brand, and counterfeit chargers will have you rethinking that stance. Ken Shirriff gathered up a dozen USB chargers, including official Apple chargers and counterfeit Apple chargers, as well as offerings from Monoprice, Belkin, Motorola, and other companies. After putting them all through a battery of tests he gave them overall rankings based on nine different categories, including power stability, power quality, and efficiency. The takeaway from his research? Quality varied widely between brands, but when sticking with big companies like Apple or HP the chargers were all safe. The counterfeit chargers (like the $2 Apple iPad charger knock-off he tested) proved to be outright dangerous; several actually melted or caught fire in the course of the project. Hit up the link below for his detailed analysis, including power output readings for the dozen chargers. A Dozen USB Chargers in the Lab [via O'Reilly Radar]

  • Internal speakers do not work

    - by Nikcefo
    I have a new (from scratch, not an upgrade) installation of Ubuntu 12.10 on my notebook, an Asus A3Ac (it is based on Intel Centrino: a Pentium M with a full-duplex Intel HDA codec). In older versions of Debian-based systems, Intel HDA audio didn't work correctly: alsamixer displayed the wrong outputs and inputs (more than the notebook really has). In clean installations the internal speakers played, but they didn't mute when headphones were plugged in. There was a solution (probably not the best, but working): edit /etc/modprobe.d/alsa-base.conf as root and add the line "options snd-hda-intel model=z71v position_fix=1". After a restart it worked correctly (alsamixer displayed the correct devices and the internal speakers muted when I plugged in headphones). It was also working in Ubuntu 12.04. In Ubuntu 12.10 I have another problem. By default (without editing alsa-base.conf) alsamixer displays the correct outputs and inputs, but the internal speakers don't work when no headphones are plugged in. I have to manually disable the "Auto-Mute" option in alsamixer; then the internal speakers work (but of course they don't mute when headphones are plugged in). Thanks for any idea how to fix it. I'm not sure if it is a bug or if it's caused by the specific hardware. Tomas
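
    A sketch of re-applying the old model quirk and of flipping auto-mute from the command line (the control name "Auto-Mute Mode" is an assumption and varies between HDA codecs, so check amixer scontrols first):

        # Re-apply the quirk that worked up to 12.04, as quoted in the post
        echo "options snd-hda-intel model=z71v position_fix=1" | sudo tee -a /etc/modprobe.d/alsa-base.conf
        # Reload ALSA if the alsa-utils reload script is available; a reboot does the same
        sudo alsa force-reload
        # Inspect and flip the auto-mute control without opening alsamixer
        amixer scontrols
        amixer sset 'Auto-Mute Mode' Disabled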

  • Aggressive Auto-Updating?

    - by MattiasK
    What do you guys think is best practice regarding auto-updating? Google Chrome, for instance, seems to auto-update itself as soon as it gets a chance, without asking, and I'm fine with that; I think most "normal" users benefit from updates being a transparent process. Then again, some more technical users might be miffed if you update their app without permission. As I see it there are two options: 1) have a checkbox when installing that says "allow automatic updates", or 2) just have a preference somewhere that allows you to "disable automatic updates" so that you have to check for updates manually. I'm leaning towards 2), because 1) feels like it might alienate non-technical users and I'd rather avoid installation queries if possible. I'm also thinking about making it easy to downgrade if an upgrade (heaven forbid) causes trouble; what are your thoughts? Another question: even if updates are automatic, perhaps they should be announced; if there are new features, for example, users might otherwise not realize they exist and use them. One thing that kinda scares me, though, is the security implications: someone could theoretically hack my server and push out spyware/zombieware to all my customers. It seems that using digital signatures to prevent man-in-the-middle attacks is the least you could do; otherwise users might be hooked up to a network that spoofs the address of the update server.
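
    A minimal sketch of the signature check the last point calls for, using GPG (file names are placeholders; the updater would ship with the vendor's public key pinned):

        # At release time: sign the update package with the vendor's private key
        gpg --detach-sign --armor update-1.2.0.tar.gz
        # In the updater, before installing: verify against the pinned public key
        gpg --verify update-1.2.0.tar.gz.asc update-1.2.0.tar.gz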

  • Mouse selects everything on its own

    - by meneer
    I have a strange problem: my mouse keeps selecting everything I point to. For example, if I open Rhythmbox, at the top of the screen there is a text showing which song is playing; if I move my mouse over this text, it gets selected even though I am not clicking. This behaviour does not happen in all programs; in Firefox, for example, I have no problem at all. Maybe it is GTK-related? Nautilus also behaves strangely: I cannot open files by clicking on them; I have to drag a selection box around the file and then press Enter to open it. If I click, nothing happens. Similar problems also happen with other GTK software. I think the problem might be related to touchscreen issues (I have a touchscreen). I run GNOME 3 on Ubuntu 12.04, on an HP TouchSmart 610 desktop computer. Any help is greatly appreciated. UPDATE: I just did a fresh reinstall, and I am 90% certain that it is related to the touchscreen drivers. Here is what happens when I reinstall: at first boot, exactly after the install, everything works fine except the touchscreen, which does not respond. I then update Ubuntu, because I installed from an old CD (Ubuntu 12.04). On the next reboot the touchscreen works, but the working touchscreen comes together with the mouse selecting everything on its own. SECOND QUESTION: Would anybody know how I could figure out what those touchscreen drivers are (so that I can disable them)?
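
    A sketch of how to spot and switch off the touchscreen input device under X (the device name is a placeholder; take the real one from the list command):

        # List all input devices X knows about; the touchscreen usually carries its vendor name
        xinput list
        # Disable it by the name shown in the list
        xinput set-prop "Touchscreen Device Name" "Device Enabled" 0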

  • How to Activate wifi in Toshiba Satellite C655?

    - by user4106
    I've recently bought a Toshiba Satellite C655, which came with Windows 7 preinstalled. I've never had a notebook before, but as a desktop user I had been an Ubuntu user for 2 years and never had a problem with drivers, wifi, etc. When I tried to install Ubuntu 10.04, and also the new and fresh 10.10, on my new laptop, I experienced some trouble with some of the components of my computer. For example, I was not able to activate my wifi card, although I know the kernel recognises it correctly, because it is listed when I run "lspci" at the terminal. Anyhow, I'm not able to "activate" the wifi, or do whatever is necessary to be able to search for available public networks and connect to them. The wifi card the laptop has is (the lspci output):

        03:00.0 Network controller: Atheros Communications Inc. AR9285 Wireless Network Adapter (PCI-Express) [168c:002b] (rev 01)

    Is there anything you can do to help me? Thanks a lot in advance! Edit: Neither solution seems to work. First, I tried installing what hhlp told me; after the installation nothing seems to change. On right-clicking the wireless icon it seems to recognise the card, because the option "Enable wifi" was ticked, but once again I was not able to turn the wifi on. Second, I didn't try installing the drivers, because the card is already recognised; the issue is that I cannot seem to turn it on! One thing I've probably missed is that the Toshiba comes with Windows software that lets you enable/disable the wifi, so it does not have an external button to turn it off. I don't know if that's the problem, but I have the feeling the issue may be around there: how to turn the wifi signal ON (or verify whether it's on or off) in Ubuntu.
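
    The usual first check for an AR9285 that is detected but won't switch on is the rfkill state (a sketch; ath9k is the stock driver for this chip):

        # See whether the radio is soft- or hard-blocked
        rfkill list all
        # Clear a soft block (a hard block needs the hardware switch / Fn key)
        sudo rfkill unblock all
        # Confirm the driver actually bound to the card
        lspci -k | grep -A3 AR9285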

  • Problems with dual monitor & resolutions, only in 14.04

    - by theLadder
    I installed Ubuntu 14.04, but I am having weird problems with my dual monitors and their resolutions. I also tried Xubuntu 14.04 and had the same problem. I have one 32-inch LG TV at 1920x1080 and one monitor at 1280x1024. When I first start, the 32-inch gets 1360x768. If I then try to change to 1920x1080, everything looks fine and the prompt asking me if I want to keep the settings comes up and starts the countdown, but after 2 seconds my computer freezes, and after a few more seconds it reboots itself. However, if I disable my smaller monitor first, I can change the 32-inch to 1920x1080 without problems; but if I then activate the second monitor, the same problem happens again. In Xubuntu 14.04 I can change the refresh rate, and if I run the 32-inch at 30Hz or 50Hz everything works, but I would like to be able to run it at 60Hz. I'm currently running Xubuntu 13.10 without this problem. My graphics card is an ATI Radeon HD 4850. What is causing this problem: graphics drivers? Kernel? Xorg? And how do I solve it?
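
    A way to reproduce the mode change from a terminal and capture the failure (output names like HDMI-1 and DVI-0 are examples; take the real ones from the first command):

        # List outputs and the modes/rates each one advertises
        xrandr --query
        # Force the TV to 1920x1080 at 60Hz alongside the second monitor
        xrandr --output HDMI-1 --mode 1920x1080 --rate 60 \
               --output DVI-0 --mode 1280x1024 --right-of HDMI-1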

  • Ubuntu 13.10. After login, no desktop displayed. Two Nvidia Graphics Cards, Four Monitors

    - by jmerkow
    I am working on an issue with my Ubuntu 13.10 installation. I am attempting to get 4 monitors up and running, but I am having some trouble. So far, I installed and updated to the latest NVIDIA drivers (331.20). Initially X would not start (after installation), so I replaced my xorg.conf with xorg.conf.failsafe. This fixed that problem, but then I tried to enable the other 2 monitors (on the other video card) and Xorg fails to start once again (after I log in there is no desktop). I am fairly new to Linux, though not a complete beginner, but I'm not comfortable poking around too much on my own to troubleshoot yet. lspci -nn | grep VGA:

        03:00.0 VGA compatible controller [0300]: NVIDIA Corporation GF110 [GeForce GTX 570 Rev. 2] [10de:1086] (rev a1)
        05:00.0 VGA compatible controller [0300]: NVIDIA Corporation GF110 [GeForce GTX 580] [10de:1080] (rev a1)

    It seems that the nvidia-settings tool does not produce a good xorg.conf file. Here it is:

        # nvidia-settings: X configuration file generated by nvidia-settings
        # nvidia-settings: version 331.20 (buildmeister@swio-display-x86-rhel47-05) Wed Oct 30 18:20:32 PDT 2013

        Section "ServerLayout"
            Identifier "Default Layout"
            Screen 0 "Screen0" 0 0
            Screen 1 "Screen1" RightOf "Screen0"
            InputDevice "Keyboard0" "CoreKeyboard"
            InputDevice "Mouse0" "CorePointer"
            Option "Xinerama" "1"
        EndSection

        ...

        Section "Monitor"
            Identifier "Configured Monitor"
        EndSection

        Section "Monitor"
            Identifier "Monitor0"
            VendorName "Unknown"
            ModelName "SHARP HDMI"
            HorizSync 15.0 - 68.0
            VertRefresh 55.0 - 76.0
        EndSection

        Section "Monitor"
            Identifier "Monitor1"
            VendorName "Unknown"
            ModelName "Samsung SyncMaster"
            HorizSync 0.0 - 0.0
            VertRefresh 0.0
        EndSection

        Section "Device"
            Identifier "Configured Video Device"
            Driver "vesa"
        EndSection

        Section "Device"
            Identifier "Device0"
            Driver "nvidia"
            VendorName "NVIDIA Corporation"
            BoardName "GeForce GTX 570"
            BusID "PCI:3:0:0"
        EndSection

        Section "Device"
            Identifier "Device1"
            Driver "nvidia"
            VendorName "NVIDIA Corporation"
            BoardName "GeForce GTX 580"
            BusID "PCI:5:0:0"
        EndSection

        Section "Screen"
            Identifier "Default Screen"
            Device "Configured Video Device"
            Monitor "Configured Monitor"
        EndSection

        Section "Screen"
            Identifier "Screen0"
            Device "Device0"
            Monitor "Monitor0"
            DefaultDepth 24
            Option "Stereo" "0"
            Option "nvidiaXineramaInfoOrder" "DFP-1"
            Option "metamodes" "HDMI-0: nvidia-auto-select +640+0, DVI-I-3: nvidia-auto-select +0+1080"
            Option "SLI" "Off"
            Option "MultiGPU" "Off"
            Option "BaseMosaic" "off"
            SubSection "Display"
                Depth 24
            EndSubSection
        EndSection

        Section "Screen"
            Identifier "Screen1"
            Device "Device1"
            Monitor "Monitor1"
            DefaultDepth 24
            Option "Stereo" "0"
            Option "metamodes" "DVI-I-2: nvidia-auto-select +0+0"
            Option "SLI" "Off"
            Option "MultiGPU" "Off"
            Option "BaseMosaic" "off"
            SubSection "Display"
                Depth 24
            EndSubSection
        EndSection

        Section "Extensions"
            Option "Composite" "Disable"
        EndSection
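
    Rather than hand-merging the file above, one option is to let the driver's own tool regenerate a multi-GPU config and then read the X log for the actual failure (a sketch; nvidia-xconfig ships with the proprietary driver):

        # Back up the current config, then regenerate one covering both GPUs
        sudo cp /etc/X11/xorg.conf /etc/X11/xorg.conf.bak
        sudo nvidia-xconfig --enable-all-gpus
        # After the next failed login, pull the errors and warnings from the X log
        grep -E '\(EE\)|\(WW\)' /var/log/Xorg.0.log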

  • Can I install Ubuntu 13.10 without the internet?

    - by user1526570
    I'm new to Ubuntu and Linux in general. I'm currently out of town, and the dorm I am living in has a terrible internet connection; it will be another 2-3 weeks before I can go home to a proper connection. So my question is whether or not I can install Ubuntu 13.10 on my laptop without internet access and then do the updates once I go home. Also, I'm attempting a dual boot on my Lenovo G505s, which came preinstalled with Windows 8; hopefully I can pull this off. I already did the necessary things (I think and hope so) prior to installation: disabled Secure Boot, enabled legacy boot and booting UEFI first, created a partition, and put the installer on my pen drive. As I am quite new to this, any advice would be of great help. Thanks in advance! EDIT: I tried yesterday. The installation asked me to connect to the internet, so I used my crappy dorm connection. When it reached the downloading/installation of Ubuntu One, it just stopped and went on forever, so I had to stop it.

  • Is this a b43 driver problem?

    - by Nullet
    I have 13.04 on a Dell Inspiron 1564 with a Broadcom 4312 WiFi card. The wl driver causes a kernel panic on linux-3.8, so I successfully installed the b43 driver a couple of months ago. Now I have changed ISP and got a new router, and the connection drops when downloading software from the internet to a 2008r2 box using the Remmina client, and during apt-get install on virtual machines in VirtualBox. I have no idea why this suddenly became a problem. My phone does not lose the connection; just Ubuntu. Output from /var/log/syslog after a lost connection; rfkill list (after disconnecting):

        0: phy0: Wireless LAN
           Soft blocked: yes
           Hard blocked: yes

    I can use NetworkManager to disable wireless and then enable wireless to connect again. rfkill unblock all/wifi/0 removes only the soft block. lshw -C network:

        *-network
             description: Wireless interface
             physical id: 4
             logical name: wlan0
             serial: 78:e4:00:78:d2:05
             capabilities: ethernet physical wireless
             configuration: broadcast=yes driver=b43 driverversion=3.8.0-32-generic firmware=478.104 ip=10.0.0.3 link=yes multicast=yes wireless=IEEE

    lspci:

        04:00.0 Network controller: Broadcom Corporation BCM4312 802.11b/g LP-PHY (rev 01)

    After waking from suspend and when booting, it takes about 40 seconds to get a network connection. This has been a problem all along AND is a different question, but I mention it anyway, because it's annoying! Please take a look, and hopefully someone will spot the problem. Thanks!
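
    A way to bounce the driver when the link drops, without toggling NetworkManager by hand (a sketch; note a hard block usually means the laptop's wireless switch or Fn toggle, which rfkill cannot clear):

        # Reload the b43 driver and clear any soft block
        sudo modprobe -r b43 && sudo modprobe b43
        sudo rfkill unblock all
        # Watch for the block flipping back while reproducing the drop
        watch -n1 rfkill list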

  • About the output of vga_switcheroo

    - by zhangjie
    Both an IGD and a DIS exist in my PC, and I want to disable the DIS, so I created a service to switch the DIS on and off. It works. Finally, I decided to add the service command to /etc/rc.local so that the DIS would be powered off automatically; unfortunately, it fails. There's only one command added by myself in /etc/rc.local, so I can confirm the failure is caused by that added command. Before that, I directly added the command "echo OFF > /sys/kernel/debug..." to /etc/rc.local, and when I restarted, the system startup failed. So I thought maybe when the command is executed, the DIS hasn't been powered on or isn't ready for work yet. It's just my guess. Then I added the command "sleep 1s" before the "echo OFF ..."; now it works nearly every time I start or restart the PC, but it still fails sometimes. The output of "cat /sys/kernel/debug..." is the following:

        0:IGD:+:Pwr:0000:00:02.0
        1:DIS: :Pwr:0000:01:00.0

    I want to know: what does 0000:00:02.0 mean? Is it the time of first power-on? If it is really a time, I can set the command "sleep 2s" to wait for the DIS to be powered on and then "echo OFF ...". Thanks for your advice!
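
    For reference, a sketch of the rc.local arrangement described above, spelled out with the standard vga_switcheroo debugfs path (the post truncates the path; the delay value is the author's workaround, not a fix):

        #!/bin/sh -e
        # /etc/rc.local - wait for the discrete GPU to come up, then power it off
        sleep 2s
        echo OFF > /sys/kernel/debug/vgaswitcheroo/switch
        exit 0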

  • New mainboard with 890GX chipset disables lightdm, even when using old graphics card instead of onboard graphics, startx works (Xubuntu 12.04)

    - by user99250
    I am trying to migrate an installed Xubuntu 12.04 to a new mainboard with the 890GX chipset. The chipset has built-in Radeon HD 4290 graphics. The system boots, but X won't start. The most suspicious message in Xorg.0.log is "ddxSigGiveUp: Closing log". When searching for this message, I found answers like "remove your xorg.conf" (but there is none on my system), bug reports for fglrx (but that's not installed on my system), or NVIDIA-related questions... Interestingly, "startx" succeeds in opening a basic XFCE session. Next, I tried disabling the onboard graphics in the BIOS setup and using the old PCIe graphics card (Radeon HD 5450) instead. No change. I don't think I can just blacklist a module, because the graphics card and the onboard graphics are covered by the same module. At the moment I use the free radeon driver, not the restricted fglrx driver. If possible, I would like to stay on the free driver for two reasons: the fglrx driver from the Ubuntu packages fails to build the kernel module, and in the past I had bad experiences with the fglrx driver when changing screen configurations with RandR. When I connect the hard disk and the graphics card to my old mainboard, everything works again; this means I have not screwed up my system configuration while installing and removing the fglrx drivers. When I ordered this mainboard, I thought the 890GX was old enough to be supported, and that if not, I could still use my graphics card as a backup solution. But without even the backup solution, I'm stuck... Any ideas? Thanks and regards, Kubuntu-Man (now switched to Xubuntu)
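
    Since startx works but lightdm doesn't, comparing the two sides is the quickest diagnostic (a sketch; these are the stock log locations on a 12.04 install):

        # Errors and warnings from the X server itself
        grep -E '\(EE\)|\(WW\)' /var/log/Xorg.0.log
        # What lightdm did when it tried to start its seat
        cat /var/log/lightdm/lightdm.log /var/log/lightdm/x-0.log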

  • Three common scenarios that leave a database hung or slow

    - by Feng
    Summary: Earlier posts on this blog have looked at various Oracle performance problems; this one summarizes three common scenarios that end in heavy latch/mutex contention, leaving the database looking hung or slow. 1. OS swapping/paging degrades concurrency. Oracle protects its shared memory structures with latches/mutexes, which serialize access, so a stalled holder delays every waiter queued behind it. When the OS starts swapping/paging, the memory holding those latches/mutexes can be paged out; holders stall, waiters pile up, and the instance can appear hung or slow. Countermeasures: lock the SGA, so that the SGA (including its latches/mutexes) is pinned in memory and cannot be swapped out, and back the SGA with large pages (hugepages) to reduce latch/mutex and paging overhead. 2. SGA resizing under AMM/ASMM. With automatic memory management, the shared pool and buffer cache components are resized on demand (for example to head off ora-4031 errors), and the resize operations themselves take latches/mutexes; frequent shared pool resizes therefore show up directly as latch/mutex contention. Workarounds: set explicit minimum sizes for the buffer cache and the shared pool, and stretch the resize evaluation interval to roughly 16 minutes:

        alter system set "_memory_broker_stat_interval"=999;

    Disabling AMM/ASMM altogether also avoids the problem, at the cost of having to manage ora-4031 manually. 3. DDL invalidating cursors. Running DDL against hot objects (grants and the like) invalidates the dependent cursors in the shared pool; all the SQL that referenced them must then be hard parsed again at once. This "hard parse storm" drives intense latch/mutex contention along with library cache lock / row cache lock waits, and the whole system can become slow or hung. Recommendation: avoid running DDL against hot objects during busy periods.
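
    A minimal sketch of the hugepage side of countermeasure 1 on Linux (the page count is an example; size it from your actual SGA):

        # Reserve 2MB hugepages for the SGA (example: roughly 4GB worth)
        echo "vm.nr_hugepages = 2048" | sudo tee -a /etc/sysctl.conf
        sudo sysctl -p
        # Confirm the reservation took effect
        grep Huge /proc/meminfo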

  • Certificate Trust Lists in IIS7

    - by BrettRobi
    I am trying to enable mutual authentication for my web service hosted in IIS7. I have the server-side cert set up and working, but cannot figure out how to get a Certificate Trust List created and set up in IIS7 so that I can require and validate client-side certificates. All of my client-side certs are signed by my own root cert, so I need to create a CTL that contains just my root cert and then have IIS validate client-provided certs against the CTL. Can anyone shed some light on how to do this? IIS6 had a UI for assigning a CTL, but I can find nothing similar in IIS7.

    Update: I have now successfully used MakeCTL in wizard mode to create a CTL with a friendly name. However, I don't have adsutil support on my IIS7 box, so via other posts elsewhere I am trying to use the 'netsh http add sslcert' command to assign the CTL to my site. Before I could use this command I had to remove the existing SSL cert that was assigned to my site for server authentication. Then in my netsh command I specify the thumbprint of that very same SSL cert I removed, plus a made-up appid, plus 'sslctlidentifier=MyCTL sslctlstorename=CA'. The resulting command is:

        netsh http add sslcert ipport=10.10.10.10:443 certhash=adfdffa988bb50736b8e58a54c1eac26ed005050 appid={ffc3e181-e14b-4a21-b022-59fc669b09ff} sslctlidentifier=MyCTL sslctlstorename=CA

    (the IP address is munged), but I am getting this error:

        SSL Certificate add failed, Error: 1312
        A specified logon session does not exist. It may already have been terminated.

    I am sure the error is related to the CTL options, because if I remove them it works (though no CTL is assigned, of course). Can anyone help me take this last step and make this work?

    UPDATE 01-07-2010: I never resolved this with IIS 7.0 and have since migrated our app to IIS 7.5 and am giving this another try. Per the response from Taras Chuhay, I installed IIS6 compatibility on my test server and tried the steps he documented using adsutil.vbs (which can also be found here). I immediately ran into this error:

        ErrNumber: -2147023584
        Error trying to SET the Property: SslCtlIdentifier

    when running this command:

        adsutil.vbs set w3svc/1/SslCtlIdentifier MyFriendlyName

    I then went on to try the next adsutil.vbs command documented, and it failed with the same error. I have verified that the CTL I created has a friendly name of MyFriendlyName and that it exists in the 'Intermediate Certification Authorities\Certificate Trust List' store of LocalComputer. So once again I am at a dead standstill. I don't know what else to try. Has anyone ever gotten CTLs to work with IIS7 or 7.5? Ever? Am I beating a dead horse? Google turns up nothing but my own posts and other similar stories.

    Update 2/23/10: I've confirmed with Microsoft that this is a bug with IIS 7.5, but it does work with IIS 7. Check out this link for details: http://viisual.net/configuration/IIS7-CTLs.htm

    Update 6/08/10: I can now confirm that KB981506 resolves this issue. There is a patch associated with this KB that must be applied to Server 2008 R2 machines to enable this functionality. Once that is installed, all works flawlessly for me.

  • How do I manipulate Handler Mappings cleanly in IIS7 using the Microsoft.Web.Administration namespace?

    - by Kev
    I asked this over on Stack Overflow but maybe it's something an experienced IIS 7 administrator might know more about, so I'm asking here as well. When manipulating Handler Mappings using the Microsoft.Web.Administration namespace, is there a way to remove the <remove name="handler name"> tag added at the site level. For example, I have a site which inherits all the handler mappings from the global handler mappings configuration. In applicationHost.config the <location> tag initially looks like this: <location path="60030 - testsite-60030.com"> <system.webServer> <security> <authentication> <anonymousAuthentication userName="" /> </authentication> </security> </system.webServer> </location> To remove a handler I use code similar this: string siteName = "60030 - testsite-60030.com"; string handlerToRemove = "ASPClassic"; using(ServerManager sm = new ServerManager()) { Configuration siteConfig = serverManager.GetApplicationHostConfiguration(); ConfigurationSection handlersSection = siteConfig.GetSection("system.webServer/handlers", siteName); ConfigurationElementCollection handlersCollection = handlersSection.GetCollection(); ConfigurationElement handlerElement = handlersCollection .Where(h => h["name"].Equals(handlerMapping.Name)).Single(); handlersCollection.Remove(handlerElement); } The equivalent APPCMD instruction would be: appcmd set config "60030 - autotest-60030.com" -section:system.webServer/handlers /-[name='ASPClassic'] /commit:apphost This results in the site's <location> tag looking like: <location path="60030 - testsite-60030.com"> <system.webServer> <security> <authentication> <anonymousAuthentication userName="" /> </authentication> </security> <handlers> <remove name="ASPClassic" /> </handlers> </system.webServer> </location> So far so good. However if I re-add the ASPClassic handler this results in: <location path="60030 - testsite-60030.com"> <system.webServer> <security> <authentication> <anonymousAuthentication userName="" /> </authentication> </security> <handlers> <!-- Why doesn't <remove> get removed instead of tacking on an <add> directive? --> <remove name="ASPClassic" /> <add name="ASPClassic" path="*.asp" verb="GET,HEAD,POST" modules="IsapiModule" scriptProcessor="%windir%\system32\inetsrv\asp.dll" resourceType="File" /> </handlers> </system.webServer> </location> This happens when using both the Microsoft.Web.Administration namespace and C# or using the following APPCMD command: appcmd set config "60030 - autotest-60030.com" -section:system.webServer/handlers /+[name='ASPClassic',path='*.asp',verb=;'GET,HEAD,POST',modules='IsapiModule',scriptProcessor='%windir%\system32\inetsrv\asp.dll',resourceType='File'] /commit:apphost This can result in a lot of cruft over time for each website that's had a handler removed then re-added programmatically. Is there a way to just remove the <remove name="ASPClassic" /> tag using the Microsoft.Web.Administration namespace code or APPCMD?

  • Unauthorized Access Exception using Web Deploy to a site when the site root is a UNC path

    - by Peter LaComb Jr.
    I am trying to use Web Deploy to deploy a site where the site is rooted on a UNC path instead of a local drive. This is because I want to have a shared configuration and have all servers point to the same UNC for content; that would allow me to deploy to one server and have all servers updated at the same time. I've created a share with Everyone and Users read/write. The NTFS permissions have the ID of the appDomain account as Full Control, and that is the same account that is configured as the specific user in Management Service Delegation. I can log on to the destination server as that ID, access the share, and create/delete files. However, I'm getting the following exception in my Microsoft Web Deploy log on the destination server:

        User:
        Client IP: 192.168.62.174
        Content-Type: application/msdeploy
        Version: 9.0.0.0
        MSDeploy.VersionMin: 7.1.600.0
        MSDeploy.VersionMax: 9.0.1631.0
        MSDeploy.Method: Sync
        MSDeploy.RequestId: c060c823-cdb4-4abe-8294-5ffbdc327d2e
        MSDeploy.RequestCulture: en-US
        MSDeploy.RequestUICulture: en-US
        ServerVersion: 9.0.1631.0
        Skip: objectName="^configProtectedData$"
        Provider: auto, Path:

        A tracing deployment agent exception occurred that was propagated to the client. Request ID 'c060c823-cdb4-4abe-8294-5ffbdc327d2e'. Request Timestamp: '8/23/2012 11:01:56 AM'. Error Details:
        ERROR_INSUFFICIENT_ACCESS_TO_SITE_FOLDER
        Microsoft.Web.Deployment.DeploymentDetailedUnauthorizedAccessException: Unable to perform the operation ("Create Directory") for the specified directory ("\\someserver.mydomain.local\sharename\sitename\applicationName"). This can occur if the server administrator has not authorized this operation for the user credentials you are using. Learn more at: http://go.microsoft.com/fwlink/?LinkId=221672#ERROR_INSUFFICIENT_ACCESS_TO_SITE_FOLDER. ---> Microsoft.Web.Deployment.DeploymentException: The error code was 0x80070005. ---> System.UnauthorizedAccessException: Access to the path '\\someserver.mydomain.local\sharename\sitename\applicationName' is denied.
           at Microsoft.Web.Deployment.NativeMethods.RaiseIOExceptionFromErrorCode(Win32ErrorCode errorCode, String maybeFullPath)
           at Microsoft.Web.Deployment.DirectoryEx.CreateDirectory(String path)
           at Microsoft.Web.Deployment.DirPathProviderBase.CreateDirectory(String fullPath, DeploymentObject source)
           at Microsoft.Web.Deployment.DirPathProviderBase.Add(DeploymentObject source, Boolean whatIf)
           --- End of inner exception stack trace ---
           --- End of inner exception stack trace ---
           at Microsoft.Web.Deployment.FilePathProviderBase.HandleKnownRetryableExceptions(DeploymentBaseContext baseContext, Int32[] errorsToIgnore, Exception e, String path, String operation)
           at Microsoft.Web.Deployment.DirPathProviderBase.Add(DeploymentObject source, Boolean whatIf)
           at Microsoft.Web.Deployment.DeploymentObject.Add(DeploymentObject source, DeploymentSyncContext syncContext)
           at Microsoft.Web.Deployment.DeploymentSyncContext.HandleAdd(DeploymentObject destObject, DeploymentObject sourceObject)
           at Microsoft.Web.Deployment.DeploymentSyncContext.HandleUpdate(DeploymentObject destObject, DeploymentObject sourceObject)
           at Microsoft.Web.Deployment.DeploymentSyncContext.SyncChildrenNoOrder(DeploymentObject dest, DeploymentObject source)
           at Microsoft.Web.Deployment.DeploymentSyncContext.SyncChildrenNoOrder(DeploymentObject dest, DeploymentObject source)
           at Microsoft.Web.Deployment.DeploymentSyncContext.SyncChildrenOrder(DeploymentObject dest, DeploymentObject source)
           at Microsoft.Web.Deployment.DeploymentSyncContext.ProcessSync(DeploymentObject destinationObject, DeploymentObject sourceObject)
           at Microsoft.Web.Deployment.DeploymentObject.SyncToInternal(DeploymentObject destObject, DeploymentSyncOptions syncOptions, PayloadTable payloadTable, ContentRootTable contentRootTable, Nullable`1 syncPassId)
           at Microsoft.Web.Deployment.DeploymentAgent.HandleSync(DeploymentAgentAsyncData asyncData, Nullable`1 passId)
           at Microsoft.Web.Deployment.DeploymentAgent.HandleRequestWorker(DeploymentAgentAsyncData asyncData)
           at Microsoft.Web.Deployment.DeploymentAgent.HandleRequest(DeploymentAgentAsyncData asyncData)

    This is shown as the following on the console of the machine where I run the deployment:

        C:\Users\PLaComb> "C:\Program Files (x86)\IIS\Microsoft Web Deploy V3\msdeploy.exe" -source:package='C:\Packages\Deployments\applicationName.zip' -dest:auto,computerName='https://SERVERNAME:8172/msdeploy.axd',includeAcls='True' -verb:sync -disableLink:AppPoolExtension -disableLink:ContentExtension -disableLink:CertificateExtension -setParamFile:"C:\Packages\Deployments\applicationName.SetParameters.xml" -allowUntrusted

        Info: Using ID 'c060c823-cdb4-4abe-8294-5ffbdc327d2e' for connections to the remote server.
        Info: Adding sitemanifest (sitemanifest).
        Info: Adding virtual path (JMS/admin)
        Info: Adding directory (JMS/admin).
        Error Code: ERROR_INSUFFICIENT_ACCESS_TO_SITE_FOLDER
        More Information: Unable to perform the operation ("Create Directory") for the specified directory ("\\someserver.mydomain.local\sharename\sitename\applicationName"). This can occur if the server administrator has not authorized this operation for the user credentials you are using. Learn more at: http://go.microsoft.com/fwlink/?LinkId=221672#ERROR_INSUFFICIENT_ACCESS_TO_SITE_FOLDER.
        Error: The error code was 0x80070005.
        Error: Access to the path '\\someserver.mydomain.local\sharename\sitename\applicationName' is denied.
        Error count: 1.
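
    If the failure turns out to be at the NTFS layer rather than the share layer, a sketch of granting the delegated account modify rights down the content tree (the account name and paths are placeholders; share-level permissions also need CHANGE or better for the same account):

        REM Run on the destination server or any box with access to the UNC path
        icacls \\someserver.mydomain.local\sharename\sitename /grant "MYDOMAIN\deployUser":(OI)(CI)M /T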
