Search Results

Search found 22546 results on 902 pages for 'mode management'.

  • SNMPD running but not listening for connections at random

    - by Lukasz
    OS: CentOS release 5.7 (Final). Net-SNMP: net-snmp-5.3.2.2-14.el5_7.1 (from RPM). Periodically my NMS notifies me that SNMP has gone down on this machine. The service is restored in between 10 and 30 minutes. My NMS also pings and checks SSH, and those services are not affected during the SNMP outage. The SNMPD log file shows that it is working and apparently receiving packets (either from local agents on 127.0.0.1 or from my NMS at 172.16.37.37), however attempting to snmpwalk locally or from the NMS system fails with a timeout. I have 7 of these servers running a mixture of CentOS 5.7 and RHEL 5.7 with this specific version of Net-SNMP installed from RPM - none of them have this issue except this one. 5 of the machines (including the NMS system and this problem server) are in the same rack connected using one switch. Restarting SNMPD does not fix the issue - it clears up by itself eventually. Any suggestions where I can begin diagnosing the issue? It's a closed subnet so IPTables is not used. SNMPD config below:

        # Following entries were added by HP Insight Management Agents at
        # Tue May 15 10:58:17 CLT 2012
        dlmod cmaX /usr/lib64/libcmaX64.so
        rwcommunity public 127.0.0.1
        rocommunity public 127.0.0.1
        rwcommunity 3adRabRu 172.16.37.37
        rocommunity 3adRabRu 172.16.37.37
        rwcommunity 3adRabRu 172.16.37.36
        rocommunity 3adRabRu 172.16.37.36
        trapcommunity callmetraps
        trapsink 172.16.37.37 callmetraps
        trapsink 172.16.37.36 callmetraps
        syscontact Lukasz Piwowarek
        syslocation Santiago, Chile
        # ---------------------- END --------------------
        agentAddress udp:161
        com2sec rwlocal default public
        com2sec rolocal default public
        com2sec subnet default 3adRabRu
        group rwv2c v2c rwlocal
        group rov2c v2c rolocal
        group rov2c v2c subnet
        view all included .1
        access rwv2c "" any noauth exact all all none
        access rov2c "" any noauth exact all none none
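
    A quick diagnostic sketch (not from the original post; assumes the NIC is eth0 and uses the local "public" read community from the config above) to separate a network problem from a stuck daemon during an outage:

        # does the request even reach the host, and does snmpd answer?
        tcpdump -ni eth0 udp port 161
        # in another terminal, query the agent locally; a timeout here implicates snmpd itself
        snmpwalk -v 2c -c public -t 5 -r 1 127.0.0.1 system
        # check for UDP receive-buffer overflows that could make snmpd drop requests
        netstat -su | grep -i -e error -e overflow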

  • Clone virtual machine with Server 2008 R2 and Hyper-V?

    - by bwerks
    Hi all, I've recently started working with Hyper-V, and so far it's quite nice. However, I've been running into problems with what seems like it should be the most basic of workflows. I've set up a baseline Server 2008 R2 configuration, and exported it with the intention of using the export for cloning. I entered "C:\Exports\" as the export folder. However, I run into problems when I try to import the image. From the Hyper-V manager, I select "Import Virtual Machine" and in the resulting window I entered "C:\Exports\BuildServer\" as the folder, set the radio button to "Copy the virtual machine (create a new unique ID)" and checked the checkbox for "Duplicate all files so the same virtual machine can be imported again." Doing so results in the following error: "Import failed. Import task failed to copy file from 'H:\Exports\BuildServer\Virtual Hard Disks\BuildServer.vhd' to 'C:\Hyper-V\Virtual Hard Disks\BuildServer.vhd': The file exists. (0x80070050)" Have I somehow messed something up in configuration? Or is this a known thing? I've read it should be possible to clone VMs by copying them in the filesystem, but I'd prefer to keep things in the management UI if possible.

  • Debian and Multipath IO problem

    - by tearman
    Basically the situation is, I have a box running Debian, the box internally has an Intel SCSI RAID controller which is controlling 2 hard drives in RAID1 mode which is where the OS is installed. Further, I have a QLogic fiber channel adapter that connects the unit to a Fiber Channel SAN. My process of installation is I'll install Debian to the local drives, and leave the QLogic firmware out of it for the time being. Then once I get the unit online, I'll install the firmware drivers. This flops my internal drives from /dev/sda to /dev/sdc, which is a bit annoying, but recoverable. Probably should address these by UUID anyways. Once I get back online, I have to install multipath-tools (the framework is a multipath framework). However, once I reboot the machine again, it fails on boot after discovering multipath targets, saying my local drives are busy and cannot be mounted to /root. Any help in what may be the problem here? Or at least how to disable multipath until after the unit boots and then ignores the internal drives?
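
    One hedged way to keep multipathd's hands off the internal RAID (a configuration sketch only; the WWID is a placeholder you would read from your own system): blacklist the local volume in /etc/multipath.conf, rebuild the initramfs so the blacklist applies at boot, and let multipath claim only the SAN LUNs.

        # see which devices/WWIDs multipath has claimed
        multipath -ll

        # /etc/multipath.conf - keep multipath away from the internal drives
        blacklist {
            wwid  <wwid of the internal RAID volume as shown by multipath -ll>
        }

        # regenerate the initramfs so the blacklist is honoured at boot
        update-initramfs -u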

  • Why can't I ssh into my server using my private key?

    - by user61342
    I just setup my new server as I used to, and this time I can't login using my private key. The server is ubuntu 11.04. And I have setup following ssh key directories. root@myserv: ls -la drwx------ 2 root root 4096 Sep 23 03:40 .ssh And in .ssh directory, I have done chmod 640 authorized_keys Here is the ssh connection tracebacks: OpenSSH_5.9p1, OpenSSL 0.9.8r 8 Feb 2011 debug1: Reading configuration data /etc/ssh_config debug1: /etc/ssh_config line 20: Applying options for * debug1: Connecting to [my.server.ip] [[my.server.ip]] port 22. debug1: Connection established. debug1: identity file /Users/john/.ssh/id_rsa type -1 debug1: identity file /Users/john/.ssh/id_rsa-cert type -1 debug1: identity file /Users/john/.ssh/id_dsa type 1 debug1: identity file /Users/john/.ssh/id_dsa-cert type -1 debug1: Remote protocol version 2.0, remote software version OpenSSH_5.8p1 Debian-1ubuntu3 debug1: match: OpenSSH_5.8p1 Debian-1ubuntu3 pat OpenSSH* debug1: Enabling compatibility mode for protocol 2.0 debug1: Local version string SSH-2.0-OpenSSH_5.9 debug1: SSH2_MSG_KEXINIT sent debug1: SSH2_MSG_KEXINIT received debug1: kex: server->client aes128-ctr hmac-md5 none debug1: kex: client->server aes128-ctr hmac-md5 none debug1: SSH2_MSG_KEX_DH_GEX_REQUEST(1024<1024<8192) sent debug1: expecting SSH2_MSG_KEX_DH_GEX_GROUP debug1: SSH2_MSG_KEX_DH_GEX_INIT sent debug1: expecting SSH2_MSG_KEX_DH_GEX_REPLY debug1: Server host key: RSA ef:b8:8f:b4:fc:a0:57:7d:ce:50:36:17:37:fa:f7:ec debug1: Host '[my.server.ip]' is known and matches the RSA host key. debug1: Found key in /Users/john/.ssh/known_hosts:2 debug1: ssh_rsa_verify: signature correct debug1: SSH2_MSG_NEWKEYS sent debug1: expecting SSH2_MSG_NEWKEYS debug1: SSH2_MSG_NEWKEYS received debug1: Roaming not allowed by server debug1: SSH2_MSG_SERVICE_REQUEST sent debug1: SSH2_MSG_SERVICE_ACCEPT received debug1: Authentications that can continue: publickey,password debug1: Next authentication method: publickey debug1: Trying private key: /Users/john/.ssh/id_rsa debug1: Offering RSA public key: /Users/john/.ssh/id_dsa debug1: Authentications that can continue: publickey,password debug1: Next authentication method: password root@[my.server.ip]'s password: Update: I have found the reason but I can't explain it yet. It is caused by uploading the key using rsync -chavz instead of scp, after I used scp to upload my key, the issue is gone. Can someone explain it? Later, I tried rsync -chv, still not working
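
    A likely explanation to test (an assumption, not confirmed by the poster): sshd's StrictModes refuses an authorized_keys file whose ownership or permissions it doesn't like, and rsync with -a preserves the source file's mode where scp writes the file afresh, so the two uploads can leave different permissions behind. A quick check on the server:

        chmod 700 /root/.ssh
        chmod 600 /root/.ssh/authorized_keys
        chown -R root:root /root/.ssh
        # sshd logs the exact reason when it refuses a key
        grep -i 'authentication refused' /var/log/auth.log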

  • File is not compatible with the version of Windows you're running

    - by vaccano
    I have a really old installer (legacy app) that we are trying to get running on a Windows 7 64 bit os. Previously it has only been installed on Windows XP 32 bit. I get the following error when I try to run it: The version of this file is not compatible with the version of Windows you're running. Check your computer's system information to see whether you need an x86 (32-bit) or x64 (64-bit) version of the program, and then contact the software publisher. Contacting the software publisher is not an option (software is super old). Is there a way to get this to work? Some sort of compatibility mode? The only thing I have heard of that will work is a Virtual XP on the Win 7 box. The problem is that this software is a part of a whole software set. I would have to put all of the pieces on the Virtual XP or none at all. Before I go down the road of putting it all on the virtual xp I would like to know that there is no way to get it all on the Win 7 os.

  • AsteriskNow Migration / Shared Extension Space

    - by Aaron C. de Bruyn
    I am testing the possibility of migrating from an old Avaya phone system to AsteriskNow. The migration would cover several hundred phones--but spread out over several years. (Management wants to move buildings to the new phone system one by one as cables get cut or time permits.) Two other directives are that extensions must not change, and that they want a GUI that other admins (non-Linux geeks) can manage. They currently use 9XXX for all extensions. We linked the Avaya and Asterisk box via PRI card and they both are communicating. From the Avaya side, if we move (for example) extension 9001 to Asterisk, we forward the call over the PRI to the AsteriskNow box and the SIP phone rings. In AsteriskNow we have an outgoing rule '_9XXX' that routes all 4-digit extensions starting with 9 back to Avaya. Here's the trouble. Dialing 9001 (the extension moved over to AsteriskNow) causes the call to be routed out the PRI to the Avaya box, then the Avaya box routes the call back to Asterisk, and Asterisk routes it to the SIP phone. As we get more and more users switched over, it will use up more and more channels over the PRI card. Is there a way I can ask Asterisk to check its local extensions first--then forward off to the Avaya system if the number matches '_9XXX'? (I know how I can do it when editing the raw config files; I'm just looking for a way to do it in the GUI so other admins can manage it if necessary.) As a last-ditch plan, I know I can specifically add '_9001' as an outgoing call rule and send it directly to extension 9001--but I'd really hate to do that for several hundred phones.
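
    If the GUI route stays elusive, here is a rough raw-dialplan sketch of the "local first, Avaya second" idea. It assumes a FreePBX-based AsteriskNOW that keeps the extension-to-device mapping in astdb; the context, trunk and group names below are assumptions, not taken from the original post:

        ; extensions_custom.conf - hypothetical fallback pattern
        [from-internal-avaya-fallback]
        exten => _9XXX,1,GotoIf($["${DB(AMPUSER/${EXTEN}/device)}" != ""]?local:avaya)
        exten => _9XXX,n(local),Dial(SIP/${EXTEN},20)        ; ring the migrated SIP phone
        exten => _9XXX,n(avaya),Dial(DAHDI/g0/${EXTEN})      ; otherwise hand off over the PRI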

  • Cannot connect to my EC2 instance because of "Permission denied (publickey)"

    - by Burak
    In AWS console, I saw that my key pair was deleted. I created a new one with the same name. Then I tried to connect with ssh -v -i sohoKey.pem ec2-user@******.compute-1.amazonaws.com Here's the output: macs-MacBook-Air:~ mac$ ssh -v -i sohoKey.pem ec2-user@******.compute-1.amazonaws.com OpenSSH_5.6p1, OpenSSL 0.9.8r 8 Feb 2011 debug1: Reading configuration data /etc/ssh_config debug1: Applying options for * debug1: Connecting to ********.compute-1.amazonaws.com [*****] port 22. debug1: Connection established. debug1: identity file sohoKey.pem type -1 debug1: identity file sohoKey.pem-cert type -1 debug1: Remote protocol version 2.0, remote software version OpenSSH_5.3 debug1: match: OpenSSH_5.3 pat OpenSSH* debug1: Enabling compatibility mode for protocol 2.0 debug1: Local version string SSH-2.0-OpenSSH_5.6 debug1: SSH2_MSG_KEXINIT sent debug1: SSH2_MSG_KEXINIT received debug1: kex: server->client aes128-ctr hmac-md5 none debug1: kex: client->server aes128-ctr hmac-md5 none debug1: SSH2_MSG_KEX_DH_GEX_REQUEST(1024<1024<8192) sent debug1: expecting SSH2_MSG_KEX_DH_GEX_GROUP debug1: SSH2_MSG_KEX_DH_GEX_INIT sent debug1: expecting SSH2_MSG_KEX_DH_GEX_REPLY debug1: Host '*******.compute-1.amazonaws.com' is known and matches the RSA host key. debug1: Found key in /Users/mac/.ssh/known_hosts:3 debug1: ssh_rsa_verify: signature correct debug1: SSH2_MSG_NEWKEYS sent debug1: expecting SSH2_MSG_NEWKEYS debug1: SSH2_MSG_NEWKEYS received debug1: Roaming not allowed by server debug1: SSH2_MSG_SERVICE_REQUEST sent debug1: SSH2_MSG_SERVICE_ACCEPT received debug1: Authentications that can continue: publickey debug1: Next authentication method: publickey debug1: Offering RSA public key: sohoKey.pem debug1: Authentications that can continue: publickey debug1: Trying private key: sohoKey.pem debug1: read PEM private key done: type RSA debug1: Authentications that can continue: publickey debug1: No more authentication methods to try. Permission denied (publickey). Update: I detached my old EBS and attached to the new instance. Now, how can I mount it?
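
    Two hedged notes (device name and mount point below are assumptions; check what the AWS console says the volume was attached as). First, creating a new key pair with the same name does not change the authorized_keys already baked into the running instance, which is why the original instance keeps refusing the new key. Second, mounting the re-attached EBS volume on the new instance looks roughly like:

        # find the device the old volume shows up as (often /dev/xvdf or /dev/sdf)
        sudo fdisk -l
        sudo mkdir -p /mnt/oldvol
        sudo mount /dev/xvdf1 /mnt/oldvol    # use /dev/xvdf instead if the volume has no partition table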

  • OpenVPN and TomatoVPN

    - by Bill Johnson
    Wondering if someone can help me with the following. I have updated my Linksys router with TomatoVPN and used the following config:

        Interface Type: TAP
        Protocol: UDP
        Port: 1195
        Firewall: Custom
        Authorization Mode: Static Key

    I have then inserted the static key generated in OpenVPN, saved and started the service. connect.ovpn:

        # Use the following to have your client computer send all traffic through your router
        # (remote gateway)
        remote (entered my DNS/DHCP server's external IP address here)
        port 1195
        dev tap
        secret static.key.txt
        proto udp
        comp-lzo
        route-gateway 192.168.1.1
        redirect-gateway
        float

    I've then placed my static key in a file in the same directory as connect.ovpn (static.key.txt). Now OpenVPN is installed on a laptop that I use at home. I have plugged the laptop into my home connection and started connect.ovpn. The Local Area Connection is connected as 'Home Network 3' - and when I start OpenVPN it is connected as 'Local Area Connection 2', which is showing as 'Unidentified Network' and it appears there is no network access. TAP-Win32 Adapter V9 appears to be the adapter's name, and the IP and DNS properties are set to automatic. If I open up the OpenVPN GUI it shows an error message saying "Connecting to connect has failed". Looking at the error message behind this pop-up, one line says "TCP/UDP Socket bind failed on local address [undef]:1195 Address already in use [WSAEADDRINUSE]". Could anyone possibly help me further with this please?
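
    One hedged guess about the WSAEADDRINUSE error (not from the original thread): with a static-key setup OpenVPN also binds the local port it connects to, so a second OpenVPN instance left running - or anything else already bound to UDP 1195 on the laptop - produces exactly this bind failure. Telling the client not to bind a fixed local port usually sidesteps it:

        # add to connect.ovpn on the client
        nobind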

  • OpenAM throwing 302 0 behind haproxy, nginx

    - by Travis
    I'm having some issues with my deployment and was wondering if you can help. My setup is as follows: 2 OpenAM servers are set up behind a load balancer (HAProxy). The load balancer is set up behind two reverse proxies (nginx). The two reverse proxies are set up behind another load balancer (HAProxy). So a request will go through: HAProxy -> nginx -> HAProxy -> OpenAM. I can access the OpenAM web console through the reverse proxies without a problem. Everything works fine at this level. However, when I access OpenAM through the load balancer in front of the reverse proxies, OpenAM throws a 302 error. The funny thing is, however, I can access host/openam/UI/Login and log in successfully. I even get the cookie and have access to my apps that are set up. However, immediately after the login OpenAM throws a 302 redirect. I'm puzzled and cannot figure out what is going wrong. Does anyone have any idea? My config files are below.

    nginx config:

        server {
            listen 443;
            server_name oamlb1;
            location / {
                proxy_pass http://oamlb1.mydomain.com:8080;
                proxy_set_header X-Real-IP $remote_addr;
            }
            location /openam {
                proxy_pass http://oamlb1.mydomain.com:8080;
                proxy_set_header X-Real-IP $remote_addr;
                proxy_set_header Host oamlb1.mydomain.com:8080;
            }
        }

    haproxy config (this file is for the servers; the file for the reverse proxies is identical except it points to the reverse proxies):

        listen http_proxy :8090
            mode http
            balance roundrobin
            option httpclose
            option forwardfor
            server webA oamserver1.mydomain.com:18080
            option forwardfor

    Thanks
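
    A couple of hedged things to try in the nginx blocks (assumptions about the cause, not a confirmed fix): OpenAM normally 302-redirects to whatever FQDN it believes it is running on, so when the front load balancer's hostname never reaches it, the redirect points at a host the client cannot use. Passing the original host and protocol through and rewriting Location headers sometimes settles it:

        # inside the location blocks above
        proxy_set_header Host $host;
        proxy_set_header X-Forwarded-Proto https;
        # rewrite Location: headers that point at the backend name
        proxy_redirect http://oamlb1.mydomain.com:8080/ /;

    Whether this is enough also depends on the FQDN and realm/agent settings configured inside OpenAM itself.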

  • IPMI not functioning with Network Bonding

    - by muhammed sameer
    Hey, I am having problems with running IPMI on my servers that have network bonding enabled.

        Platform: CentOS release 5.3 (Final)
        Kernel: 2.6.18-92.el5 64bit
        Dell PowerEdge 1950
        Ethernet controller: Broadcom Corporation NetXtreme II BCM5708 Gigabit Ethernet

    I have bonded the interfaces eth0 and eth1 as active-passive, with eth0 as the active interface; below is the conf description from /proc:

        Bonding Mode: fault-tolerance (active-backup)
        Primary Slave: eth0
        Currently Active Slave: eth0
        MII Status: up
        MII Polling Interval (ms): 30
        Up Delay (ms): 0
        Down Delay (ms): 0

        Slave Interface: eth0
        MII Status: up
        Link Failure Count: 0
        Permanent HW addr: 00:22:19:56:b9:cd

        Slave Interface: eth1
        MII Status: up
        Link Failure Count: 0
        Permanent HW addr: 00:22:19:56:b9:cf

    My IPMI device is as follows:

        IPMI Device Information
        Interface Type: KCS (Keyboard Control Style)
        Specification Version: 2.0
        I2C Slave Address: 0x10
        NV Storage Device: Not Present
        Base Address: 0x0000000000000CA8 (I/O)
        Register Spacing: 32-bit Boundaries

    I have used OpenIPMI as well as FreeIPMI to control the chassis via the IPMI card, but on servers which have bonding enabled the command times out; below is the full run of the command with debug info:

        ipmi_lan_send_cmd:opened=[0], open=[4482848]
        IPMI LAN host 70.87.28.115 port 623
        Sending IPMI/RMCP presence ping packet
        ipmi_lan_send_cmd:opened=[1], open=[4482848]
        No response from remote controller
        Get Auth Capabilities command failed
        ipmi_lan_send_cmd:opened=[1], open=[4482848]
        No response from remote controller
        Get Auth Capabilities command failed
        Error: Unable to establish LAN session
        Failed to open LAN interface
        Unable to get Chassis Power Status

    On the other hand, I configured IPMI on a box with the same specs as mentioned above without bonding, and IPMI works perfectly. Has anyone faced this problem with IPMI + bonding? I would be thankful if someone helps circumvent this issue. Muhammed Sameer
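
    A hedged angle to check (an assumption based on how these BMCs share a LAN port, not something from the thread): on the PowerEdge 1950 the BMC piggybacks on one of the onboard NICs, and when bonding fails traffic over to the other slave the BMC can stop seeing, or answering, ARP for its own address. Verifying the BMC's LAN settings and making it handle ARP itself is cheap to try:

        # channel 1 is an assumption; print first to confirm
        ipmitool lan print 1
        ipmitool lan set 1 arp respond on
        ipmitool lan set 1 arp generate on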

  • IPCop server slows down download speed

    - by noocyte
    I have an IPCop server running at home, been doing just fine for ~5 months, but last week I suddenly started getting time-outs and slow downloads from the 'net. I first thought that this was my ISP acting up, then I thought it might be one of my 3 switches or some of my cabling. In due order I've tested everything above and found them all to be working as they should. The only factor remaining is my IPCop server. Facts: I've got a 15/15 Mbit line (fiber) and I get ~15 Mbit upload, but only 0.5 Mbit download with the IPCop box as router (ISP router set in bridge mode). If I connect without the IPCop box (using the ISP router) I get ~12 Mbit upload and ~15 Mbit download. The load on the IPCop box appears to be light and it used to handle this traffic just fine 2 weeks ago. The memory usage is ~60%, I tried to restart it and test again, the memory fell to ~50% then (5 months of uptime). I'm thinking that one of my nics are busted, but I'm sort of perplexed that this could be the outcome; slow download but full speed upload. Anybody ever seen that happening before? Could it just be one of the nics that needs to be replaced? Will try that as soon as I can get my hands on a couple of new ones.
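
    One cheap thing to rule out before swapping hardware (an assumption, not from the post): an autonegotiation or duplex mismatch on one of the IPCop NICs can produce exactly this lopsided pattern, hurting one traffic direction far more than the other. If ethtool is available on the box:

        # repeat for the RED (WAN) and GREEN (LAN) interfaces
        ethtool eth0
        ethtool -S eth0 | grep -i -e err -e drop
        ifconfig eth0 | grep -i -e errors -e dropped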

  • Remote Desktop to Server 2008R2 fails from one particular Win7 client

    - by Jesse McGrew
    I have a VPS running Windows Web Server 2008 R2. I'm able to connect using Remote Desktop from my home PC (Windows 7), personal laptop (Windows 7), and work laptop (Windows XP). However, I cannot connect from my work PC (Windows 7). I receive the error "The logon attempt failed" in the RDP client, and the server event log shows "An account failed to log on" with this explanation: Subject: Security ID: NULL SID Account Name: - Account Domain: - Logon ID: 0x0 Logon Type: 3 Account For Which Logon Failed: Security ID: NULL SID Account Name: username Account Domain: hostname Failure Information: Failure Reason: Unknown user name or bad password. Status: 0xc000006d Sub Status: 0xc0000064 Process Information: Caller Process ID: 0x0 Caller Process Name: - Network Information: Workstation Name: JESSE-PC Source Network Address: - Source Port: - Detailed Authentication Information: Logon Process: NtLmSsp Authentication Package: NTLM Transited Services: - Package Name (NTLM only): - Key Length: 0 I can connect from the offending work PC if I start up Windows XP Mode and use the RDP client inside that. The server is part of a domain but my account is local, so I'm logging in using a username of the form hostname\username. None of the clients are part of a domain. The server uses a self-signed certificate, and connecting from home I get a warning about that, but connecting from work I just get the logon error.

  • X11 performance problem after upgrading from Centos3 to Centos5 with an ATI Rage XL

    - by Marcelo Santos
    After upgrading a computer from Centos3 to Centos5 an application that does a lot of scrolling took a very high performance hit. top tells me that X is using a lot of CPU and that was not happening before. The machine has an ATI Rage XL with 8MB and X is using the ati driver as there is no proprietary ATI driver for this board on linux. The xorg.conf: Section "Device" Identifier "Videocard0" Driver "ati" EndSection Section "Screen" Identifier "Screen0" Device "Videocard0" DefaultDepth 24 SubSection "Display" Viewport 0 0 Depth 24 Modes "1024x768" "800x600" "640x480" EndSubSection EndSection Section "DRI" Group 0 Mode 0666 EndSection A similar machine that still has Centos3 installed is able to start DRI on the X server while this one is not, this is the Xorg.0.log for the Centos5 machine: drmOpenDevice: node name is /dev/dri/card0 drmOpenDevice: open result is -1, (No such device or address) drmOpenDevice: open result is -1, (No such device or address) drmOpenDevice: Open failed drmOpenDevice: node name is /dev/dri/card0 drmOpenDevice: open result is -1, (No such device or address) drmOpenDevice: open result is -1, (No such device or address) drmOpenDevice: Open failed [drm] failed to load kernel module "mach64" (II) ATI(0): [drm] drmOpen failed (EE) ATI(0): [dri] DRIScreenInit Failed (II) ATI(0): Largest offscreen areas (with overlaps): (II) ATI(0): 1024 x 1279 rectangle at 0,768 (II) ATI(0): 768 x 1280 rectangle at 0,768 (II) ATI(0): Using XFree86 Acceleration Architecture (XAA) Screen to screen bit blits Solid filled rectangles 8x8 mono pattern filled rectangles Indirect CPU to Screen color expansion Solid Lines Offscreen Pixmaps Setting up tile and stipple cache: 32 128x128 slots 10 256x256 slots (==) ATI(0): Backing store disabled (==) ATI(0): Silken mouse enabled (II) ATI(0): Direct rendering disabled (==) RandR enabled I also tried using EXA instead of XAA and setting: Option "AccelMethod" "XAA" Option "XAANoOffscreenPixmaps" "true" uname -a Linux sir5.erg.inpe.br 2.6.18-128.7.1.el5 #1 SMP Mon Aug 24 08:20:55 EDT 2009 i686 i686 i386 GNU/Linux rpm -qa | grep xorg-x11-server xorg-x11-server-utils-7.1-4.fc6 xorg-x11-server-sdk-1.1.1-48.52.el5 xorg-x11-server-Xvfb-1.1.1-48.52.el5 xorg-x11-server-Xnest-1.1.1-48.52.el5 xorg-x11-server-Xorg-1.1.1-48.52.el5 The drmOpenDevice error continues when using the suggested Option "AIGLX" "true".
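
    A hedged note on the DRI failure (based on the mach64 situation generally, not verified on this box): as far as I can tell, the mach64 DRM kernel module was never shipped in mainline kernels because of security concerns, so a stock CentOS 5 kernel has nothing for the ati driver to open, which is why drmOpen fails and X falls back to unaccelerated scrolling. Quick checks:

        # is a mach64 DRM module available at all in this kernel?
        modinfo mach64 || echo "no mach64 module shipped with this kernel"
        # if it is present, try loading it and watch for errors
        modprobe mach64 && dmesg | tail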

  • Mediawiki extension error

    - by vinylguitar
    I'm running the latest version of MediaWiki using MoWeS Portable II from my desktop. I just installed this extension on the wiki: http://www.mediawiki.org/wiki/Extension:MsUpload It adds an option to upload files (to be embedded in an article) to the edit screen of an article. After installing it, when I try to edit an article I get the following error:

        Fatal error: Call to undefined method OutputPage::addModules() in C:\Users\User\Desktop\knowledge mapedia 10 25 13 copy\mowes_portable\www\mediawiki\extensions\MsUpload\msupload.php on line 65

    Also, here is what I posted in the LocalSettings.php file (I put it in at the end of LocalSettings.php if it makes a difference):

        Start --------------------------------------- MsUpload
        $wgMSU_ShowAutoKat = false;     #autocategorisation
        $wgMSU_CheckedAutoKat = false;  #checkbox for autocategorisation checked
        $wgMSU_debug = false;           #debug mode
        $wgMSU_ImgParams = '400px';     #default max-size for inserted image
        $wgMSU_UseDragDrop = true;      #show drag&drop area
        require_once "$IP/extensions/MsUpload/msupload.php";
        End --------------------------------------- MsUpload
        require_once "$IP/extensions/msupload/msupload.php";

    At line 65 in the LocalSettings.php file there is the following:

        line 64  ## Database settings
        line 65  $wgDBtype = "mysql";
        line 66  $wgDBserver = "localhost";
        line 67  $wgDBname = "mediawiki";
        line 68  $wgDBuser = "root";
        line 69  $wgDBpassword = "";

    Any idea what I'm doing wrong?
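
    Two hedged observations (read off the error text, not a verified fix). The "line 65" in the error refers to msupload.php, not LocalSettings.php. And OutputPage::addModules() only exists in MediaWiki 1.17 and later (it arrived with ResourceLoader), so if the MoWeS bundle actually ships an older MediaWiki, MsUpload will fail exactly like this regardless of settings; the version shown on Special:Version is worth checking against the extension's requirements. Separately, the snippet above loads the extension twice, once with a lowercase path; keeping a single require_once whose case matches the folder on disk avoids double registration:

        require_once "$IP/extensions/MsUpload/msupload.php";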

  • How can I make my PCI-E graphics card visible to Ubuntu when the motherboard has integrated graphics

    - by Norman Ramsey
    I have a Gigabyte GA-MA74GM-S2 motherboard with integrated graphics that shows up on lspci as an ATI Radeon 2100. I also bought a PCI-Express Nvidia graphics card so I could use the VDPAU feature on Linux (plays H.264 in hardware). The BIOS has three settings about which display to initialize first: Integrated graphics PCI graphics PCI-Express graphics (PEG) I set the BIOS on PEG, but I cannot get anything, not even a splash screen or POST messages, to emerge from the PCI-Express graphics card. (I'm using a DVI connector; the card also has an HDMI output.) I cannot get the kernel lspci to see the graphics card; the only VGA controller it acknowledges is the integrated one. Running dmidecode acknowledges the existence of an x16 PCI Express slot, and it says Current usage: Unknown There is an additional BIOS setting called "Internal Graphics Mode" which is normally set to "Auto" which means it is supposed to prefer a PCI Express VGA card. I set it to "Disabled" which now means I'm getting no output at all. I will soon be learning how to do a BIOS reset! Other information: The PCI-E card is a MSI N210-MD512H GeForce 210. This is a fanless card. Although there are no fans to see turning, the heat sink on the PCI-E card is definitely getting hot, so the card is getting some sort of power. It gets all its power from the PCI-E slot; there is no external power connector. The BIOS is an AMI Award BIOS. My question: how can I make the PCI Express graphics card visible to Ubuntu?

  • Hide account from login screen but can be used in UAC

    - by tvanover
    So I have a Windows 7 Home machine with 2 user accounts. One is a standard user account and one is an administrator account. Now this is going to be put in the hands of a very low-tech user, so I don't want them to be able to see the administrator account on logon, but they want to have a password to prevent someone else from using the machine. My goal is that when the user turns on the computer, they are presented with their login. After logging in to their non-administrator account, if something needs to be installed then the administrator account can be used through UAC. I have tried creating the reg key HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\Windows NT\CurrentVersion\Winlogon\SpecialAccounts\UserList and adding a DWORD of the account name set to 0. It succeeded in hiding the account from the login screen, as well as hiding it from UAC. So it fails the second requirement of being able to run things as administrator through UAC. Also, since I didn't set an administrator password (left it blank), it seems that I have completely locked myself out of the machine, since runas doesn't accept blank passwords. So I also cannot undo it, and have quite effectively bricked the install, prompting an OS reinstall. This is Windows 7 Home, so there is no Users management console.

  • How to do 'search for keyword in files' in emacs in Windows without cygwin?

    - by Anthony Kong
    I want to search for keyword, says 'action', in a bunch of files in my Windows PC with Emacs. It is partly because I want to learn more advanced features of emacs. It is also because the Windows PC is locked down by company policy. I cannot install useful applications like cygwin at will. So I tried this command: M-x rgrep It throws the following error message: *- mode: grep; default-directory: "c:/Users/me/Desktop/Project" -*- Grep started at Wed Oct 16 18:37:43 find . -type d "(" -path "*/SCCS" -o -path "*/RCS" -o -path "*/CVS" -o -path "*/MCVS" -o -path "*/.svn" -o -path "*/.git" -o -path "*/.hg" -o -path "*/.bzr" -o -path "*/_MTN" -o -path "*/_darcs" -o -path "*/{arch}" ")" -prune -o "(" -name ".#*" -o -name "*.o" -o -name "*~" -o -name "*.bin" -o -name "*.bak" -o -name "*.obj" -o -name "*.map" -o -name "*.ico" -o -name "*.pif" -o -name "*.lnk" -o -name "*.a" -o -name "*.ln" -o -name "*.blg" -o -name "*.bbl" -o -name "*.dll" -o -name "*.drv" -o -name "*.vxd" -o -name "*.386" -o -name "*.elc" -o -name "*.lof" -o -name "*.glo" -o -name "*.idx" -o -name "*.lot" -o -name "*.fmt" -o -name "*.tfm" -o -name "*.class" -o -name "*.fas" -o -name "*.lib" -o -name "*.mem" -o -name "*.x86f" -o -name "*.sparcf" -o -name "*.dfsl" -o -name "*.pfsl" -o -name "*.d64fsl" -o -name "*.p64fsl" -o -name "*.lx64fsl" -o -name "*.lx32fsl" -o -name "*.dx64fsl" -o -name "*.dx32fsl" -o -name "*.fx64fsl" -o -name "*.fx32fsl" -o -name "*.sx64fsl" -o -name "*.sx32fsl" -o -name "*.wx64fsl" -o -name "*.wx32fsl" -o -name "*.fasl" -o -name "*.ufsl" -o -name "*.fsl" -o -name "*.dxl" -o -name "*.lo" -o -name "*.la" -o -name "*.gmo" -o -name "*.mo" -o -name "*.toc" -o -name "*.aux" -o -name "*.cp" -o -name "*.fn" -o -name "*.ky" -o -name "*.pg" -o -name "*.tp" -o -name "*.vr" -o -name "*.cps" -o -name "*.fns" -o -name "*.kys" -o -name "*.pgs" -o -name "*.tps" -o -name "*.vrs" -o -name "*.pyc" -o -name "*.pyo" ")" -prune -o -type f "(" -iname "*.sh" ")" -exec grep -i -n "action" {} NUL ";" FIND: Parameter format not correct Grep exited abnormally with code 2 at Wed Oct 16 18:37:44 I believe rgrep tried to spwan a process and called 'FIND' with all the parameters. However, since it is a Windows, the default Find executable simply does not know how to handle. What is the better way to search for a keyword in multiple files in Emacs on Windows platform, without any dependency on external programs? Emacs version: 24.2.1

  • Unable to connect Xend with virt-manager

    - by Majid Azimi
    I have installed Debian 6.0.1a and all the Xen stuff, including the Xen kernel, libvirtd, etc., but when I want to connect to xend, virt-manager shows me this:

        Verify that:
        A Xen host kernel was booted
        The Xen service has been started

    Details:

        Unable to open connection to hypervisor URI 'xen:///':
        unable to connect to '/var/run/libvirt/libvirt-sock', libvirtd may need to be started: Permission denied
        Traceback (most recent call last):
          File "/usr/share/virt-manager/virtManager/connection.py", line 971, in _try_open
            None], flags)
          File "/usr/lib/python2.6/dist-packages/libvirt.py", line 111, in openAuth
            if ret is None:raise libvirtError('virConnectOpenAuth() failed')
        libvirtError: unable to connect to '/var/run/libvirt/libvirt-sock', libvirtd may need to be started: Permission denied

    Here is the uname output:

        Linux debian 2.6.32-5-xen-amd64 #1 SMP Tue Mar 8 00:01:30 UTC 2011 x86_64 GNU/Linux

    xend and libvirtd are running:

        root@debian:/home/mazimi# /etc/init.d/libvirt-bin status
        Checking status of libvirt management daemon: libvirtd running.
        root@debian:/home/mazimi# /etc/init.d/xend start
        Starting Xen daemons: xenstored xenconsoled xend.

    Permissions for libvirt-sock:

        root@debian:/home/mazimi# ls -alih /var/run/libvirt/
        total 12K
        671017 drwxr-xr-x  3 root root    4.0K Apr 15 13:54 .
        654083 drwxr-xr-x 18 root root    4.0K Apr 15 13:54 ..
        670901 srwxrwx---  1 root libvirt    0 Apr 15 13:54 libvirt-sock
        670928 srwxrwxrwx  1 root libvirt    0 Apr 15 13:54 libvirt-sock-ro
        670870 drwxr-xr-x  2 root root    4.0K Apr 15 02:34 qemu

    There is also a group named libvirt in /etc/group. When running libvirtd in verbose mode it behaves kind of strangely:

        root@debian:/var/log/libvirt# /usr/sbin/libvirtd --verbose
        17:26:55.841: warning : qemudStartup:1832 : Unable to create cgroup for driver: No such device or address
        17:26:56.128: warning : lxcStartup:1900 : Unable to create cgroup for driver: No such device or address

    and waits infinitely.
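
    A hedged first step (assuming virt-manager is being launched as the regular user, not as root): libvirt-sock is root:libvirt with mode srwxrwx---, so any account that is not in the libvirt group gets exactly this "Permission denied". Adding the desktop user to the group and logging back in is quick to verify:

        # username is an assumption taken from the shell prompts above
        adduser mazimi libvirt
        # after logging out and back in (or newgrp libvirt), test the connection without the GUI
        virsh -c xen:/// list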

  • signed applet automatically running as insecure

    - by Terje Dahl
    My application is deployed as a self-signed applet to several thousand users at more than 50 schools across the country (in Norway). The user is presented with the standard Java security warning asking if they will accept the signature. When they do, the applet runs perfectly. However, about half a year ago a group of 7 schools, all under a common IT department, stopped getting the security warning. Instead the applet loads and starts running in untrusted mode, without first giving the user an option to accept or reject the signature. The problem is on Windows machines, and only when the machine is connected to the school's network. If they take the same machine home with them, the program functions as it should, with security warnings and everything. I know little about Windows systems in general, but I would think it would be some sort of policy file or something that is loaded when a machine hooks up to/through the school's network. Furthermore, the problem only started occurring in these 7 schools after changes made following a security breach they had a while back. The IT department is stumped. I am stumped. Any thoughts, comments, suggestions?

  • mysql.proc has gone corrupt. How can I fix it?

    - by Metalcoder
    I have a server running Debian 5.0 and MySQL. Suddenly MySQL stopped working, and after many attempts to fix it I decided to reinstall it. I installed MySQL 5.1.63, and when started it goes into safe mode. After poking around, I executed mysql_upgrade as root and it complained:

        ...
        Running 'mysql_fix_privilege_tables'...
        ERROR 1548 (HY000) at line 1111: Cannot load from mysql.proc. The table is probably corrupted
        ERROR 1064 (42000) at line 1112: You have an error in your SQL syntax; check the manual that corresponds to your MySQL server version for the right syntax to use near 'sqlstate 'HY000' set message_text='Unexpected content found in the performance_s' at line 1
        ERROR 1548 (HY000) at line 1125: Cannot load from mysql.proc. The table is probably corrupted
        FATAL ERROR: Upgrade failed

    I checked the mysql.proc table, and its comment column was slightly different from my backup.

        -- My backup says:
        `comment` char(64) CHARACTER SET utf8 COLLATE utf8_bin NOT NULL DEFAULT '',
        -- But it was:
        `comment` text CHARACTER SET utf8 COLLATE utf8_bin NOT NULL,

    So I restored my mysql database backup, and now they all match, but mysql_upgrade still triggers the same errors. I also tried to check and repair the mysql.proc table, but had no success.
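
    A hedged recovery path (standard MySQL commands and paths, but untested against this particular box): the mysql.proc definition that 5.1's upgrade script expects differs from 5.0's, so restoring a 5.0-era dump of the mysql schema keeps the old structure in place. Forcing a repair and then re-running the upgrade sometimes gets past it:

        mysqlcheck -u root -p --repair mysql proc
        mysql_upgrade -u root -p --force

    If proc is still rejected, recreating it from the 5.1 definition (mysql_system_tables.sql shipped with the server) and re-importing only the routine rows is the heavier fallback.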

  • What device can create wireless network while connecting to an ethernet router

    - by Nicolo
    Hi, I have access to an ethernet port of a wireless router. I simply connect my laptop to it via an ethernet cable. There are a total of four such ports on the wireless router. Now I want to connect a device (a wireless access point? wireless bridge? wireless switch?) via an ethernet cable to one of the other ethernet ports of the router. I want this device to act as a kind of wireless switch - it should "split" the ethernet connection coming from the router to two or more computers that connect to this device via a wireless. Basically, I have a wireless router with its wireless function switched off. I don't know the password for that router so can't activate the wireless function. Don't know the password of the ISP either. The only thing I can do is to connect via ethernet cable to the wireless router and this does not require a password. Now I want to use that connection and build a wireless upon it. What kind of device do I need? I am not really very well informed about network management and find the descriptions "wireless access point", "wireless bridge", "wireless switch" confusing. I know what an ethernet switch is - what I need is a device which would do the same but by allowing the clients to connect to it via a wireless. What kind of device would do that? Any recommendations about specific products?

  • Windows 8 unable to connect to WPA2 AES Wireless Network

    - by user170193
    I'm running Windows 8 and am unable to connect to my home wireless network. I've tried restarting the router, patching the drivers to the next version, patching the drivers to the last version, running Windows Update, and patching the chipset drivers to the latest version. So far nothing has worked. My computer can get on the internet via USB tethering on my phone or an open WiFi connection, but it is unable to connect to my home WPA2 AES secured wireless network. It sees the network, attempts to connect, gets a limited connection and then drops the connection. All the other wireless devices in my household have no problems. I have the new Dell XPS 12, running Windows 8 using an Intel Centrino Advanced-N 6235 wireless adapter. I've refreshed Windows twice now to try different driver configurations. I've tried uninstalling all the Dell software, and I've tried uninstalling all the Intel software and reinstalling just the drivers. I've tried toggling the power management setting that lets Windows turn the wireless adapter off. I've tried setting up the connection manually from desktop mode. I've tried switching the wireless on and off using the button on the keyboard and in the software. So far nothing has allowed me to connect to the secured network. It just keeps getting a limited connection, dropping the connection and retrying. It's driving me crazy, any ideas, anything I missed? Thanks.

  • Exchange 2007 two node cluster setup on Windows 2008 Enterprise, install error

    - by Shadow00Caster
    I am installing Microsoft Exchange 2007 x64 in a two node environment using Microsoft Windows 2008 Enterprise x64. The failover cluster is all set up properly, following best practices for setting up Windows clustering for use with Exchange 2007. All the validation tests pass on the cluster and all of that portion is working fine. The problem is when I go to install the first Exchange node as an active mailbox in configuration for a two node CCR. It gets all the way through the first 3 steps (Copy Exchange Files, Management Tools, Mailbox Role) and then fails on the 4th step, 'Clustered Mailbox Server', with the following error:

        Error: The clustered mailbox server's group 'XXXX' was not found, and should already exist.

    Firewalls are all disabled, DNS is all set up properly, and the environment has 3 domain controllers, all 2k8 Enterprise x64; all replication works. The name I pick for the CCR cluster (XXX) does not exist in AD or in DNS. I have attempted this install from both of the two Exchange nodes, multiple times, and tried with different names. I have been banging my head against the wall for days working on this and would appreciate any feedback on the issue.

  • How to automatically remove Flash history/privacy trail? Or stop Flash from storing it?

    - by Arjan van Bentem
    Many people have heard about third-party cookies, and some browsers even block those by default. Some people may even be using Private Browsing modes. However, only few seem to realise that Adobe's Flash player also leaves a cross-browser trail on your local hard drive, and allows for sending cookie-like information back to the server, including third-party sites. And because it is a plugin, Flash does not take any of the browser's privacy settings into account. Sorry for the long post, but first some details about why using Flash raises a privacy concern, followed by the results of my tests: The Flash player keeps a cross-browser history of the domain names of the Flash-sites your computer has visited. Unlike your browser's history, this history is not limited to a certain number of days. History is also recorded while using so-called Private Browsing modes. It is stored on your hard drive (though, as described below, without going to Adobe's site you won't know what is stored). I am not sure if any date and time information is kept about each visit, but to see the domain names: right-click on some Flash content, open the settings dialog, and click the Help icon or click the Advanced button within the Privacy tab. This opens a browser to the help pages on Adobe.com, where one can click through to the Website Storage Settings panel. One can clear the existing list, but one cannot stop it from being recorded again. Flash allows for storing data on your local hard drive, using so-called Local Shared Objects (aka "Flash Cookies"). Just like HTTP cookies, this data can be sent back to the server, for tracking purposes. They are cross-browser, have no expiration date, and no user defined maximum lifetime can be set in the Flash preferences either. These not being HTTP cookies, they are (of course) not blocked by a browser's cookies preferences and are not removed when the normal HTTP cookies are deleted. Adobe has announced that version 10.1 will obey Private Browsing in most popular browsers, but unfortunately no word about also removing the data whenever normal cookies are deleted manually. And its implementation might be confusing: [..] if the browser is in normal browsing mode when the Flash Player instance is created, then that particular instance will forever be in normal browsing mode (private browsing is turned off). Accordingly, toggling private browsing on or off without refreshing the page or closing the private browsing window will not impact Flash Player. Local Shared Objects are not limited to the site you visit, and third-party storage is enabled by default. At the Global Storage Settings panel one can deselect the default Allow third-party Flash content to store data on your computer. Because of the cross-browser and expiration-less nature (and the fact that few people know about it), I feel that the cross-browser third-party Flash Cookies are more dangerous for visitor tracking than third-party normal HTTP cookies. They are even used to restore plain HTTP cookies that the user tried to delete: "All advertisers, websites and networks use cookies for targeted advertising, but cookies are under attack. According to current research they are being erased by 40% of users creating serious problems," says Mookie Tenembaum, founder of United Virtualities. "From simple frequency capping to the more sophisticated behavioral targeting, cookies are an essential part of any online ad campaign. 
PIE ["Persistent Identification Element"] will give publishers and third-party providers a persistent backup to cookies effectively rendering them unassailable", adds Tenembaum. [..] To justify this tracking mechanism, UV's Tenembaum said, "The user is not proficient enough in technology to know if the cookie is good or bad, or how it works." When selecting None (zero KB) for Specify the amount of disk space that website websites that you haven't yet visited can use to store information on your computer, and checking Never ask again then some sites do not work. However, the same site might work when setting it to None but without selecting Never ask again, and then choose Deny whenever prompted. Both options would result in zero KB of data being allowed, but the behaviour differs. The plugin also provides a Flash Player cache for Adobe-signed files. I guess these files are not an issue. So: how to automatically delete that information? On a Mac, one can find a settings.sol file and a folder for each visited Flash-website in: $HOME/Library/Preferences/Macromedia/Flash Player/macromedia.com/support/flashplayer/sys/ Deleting the settings.sol file and all the folders in sys, removes the trail from the settings panels. However, the actual Local Shared Ojects are elsewhere (see Wikipedia for locations on other operating systems), in a randomly named subfolder of: $HOME/Library/Preferences/Macromedia/Flash Player/#SharedObjects But then: how to remove this automatically? Simply removing the folders and the settings.sol file every now and then (like by using launchd or Windows' Task Scheduler) may interfere with active browsers. Or is it safe to assume that, given the cross-browser nature, the plugin would not care if things are removed while it is active? Only clearing during log-off may not work for those who hibernate all the time. Firefox users can install BetterPrivacy or Objection to delete the Local Shared Objects (for all others browsers as well). I don't know if that also deletes the trail of website domain names. Or: how to stop Flash from storing a history trail? Change of plans: I'm currently testing prohibiting Flash to write to its own sys and #SharedObjects folders. So far, Flash has not tried to restore permissions (though, when deleting the folders, Flash will of course recreate them). I've not encountered any problems but this may take some while to validate, using multiple browsers and sites. I've not yet found a log that reports errors. On a Mac: cd "$HOME/Library/Preferences/Macromedia/Flash Player/macromedia.com/support/flashplayer" rm -r sys/* chmod u-w sys cd "$HOME/Library/Preferences/Macromedia/Flash Player" # preserve the randomly named subfolders (only preserving the latest would suffice; see below) rm -r \#SharedObjects/*/* chmod -R u-w \#SharedObjects I guess the above chmods cannot be achieved on an old Windows system (I'm not sure about XP and Vista?). Though maybe on Windows one could replace the folders sys and #SharedObjects with dummy files with the same names? Anyone? Obviously, keeping Flash from storing those Local Shared Objects for all sites may cause problems. Some test results (Flash 10 on Mac OS X): When blocking the sys folder (even when leaving the #SharedObjects folder writable) then YouTube won't remember your volume settings while viewing multiple videos. Temporarily allowing write access to the blocked folders while visiting trusted sites (to only create folders for domains you like, maybe including references in settings.sol) solves that. 
This way, for YouTube, Flash could be allowed to write to sys/#s.ytimg.com and #SharedObjects/s.ytimg.com, while Flash could not create new folders for other domains. One may also need to make settings.sol read-only afterwards, or delete it again. When blocking both the sys and #SharedObjects folders, YouTube and Vimeo work fine (though they might not remember any settings). However, Bits on the Run refuses to even show the video player. This is solved by temporarily unblocking the #SharedObjects folder, to allow Flash to create a subfolder with some random name. Within this folder, it would create yet another folder for the current Flash website (content.bitsontherun.com). Removing that website-specific folder, and blocking both #SharedObjects and the randomly named subfolder, still seems to allow Bits on the Run to operate, even though it still cannot write anything to disk. So: the existence of the randomly named subfolder (even when write protected) is important for some sites. When I first found the #SharedObjects folder, it held many subfolders with random names, some created on the very same day. I wonder when Flash decides it wants a new folder, and how it determines (and remembers) that random name. For a moment I considered not blocking write access for sys and #SharedObjects, but explicitly creating read-only folders for well-known third-party tracking domains (like based on a list from, for example, AdBlock Plus). That way, any other domain could still create Local Shared Objects. But the list would be long, and the domains from AdBlock Plus are probably all third-party domains anyway, so disabling Allow third-party Flash content to store data on your computer might have the very same result. Any experience anyone? (Final notes: if the above links to the settings panels do not work in the future, then use the URL that is known to Flash player as a starting point: www.adobe.com/go/settingsmanager. See also "You Deleted Your Cookies? Think Again" at Wired.com -- which uses Flash cookies itself as well... For the very suspicious using Time Machine: you may want to exclude both folders, for each user, and remove the trace that is already on your backup.)
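
    Since the post mentions launchd or Task Scheduler as the missing piece, here is a rough cleanup sketch for the Mac paths described above (paths copied from the post; run it from cron or launchd while no browser is open, and treat it as untested):

        #!/bin/sh
        # wipe Flash Local Shared Objects and the domain-name history trail for the current user
        FP="$HOME/Library/Preferences/Macromedia/Flash Player"
        rm -rf "$FP/#SharedObjects/"*/*
        rm -f  "$FP/macromedia.com/support/flashplayer/sys/settings.sol"
        rm -rf "$FP/macromedia.com/support/flashplayer/sys/"*/

    This deliberately leaves the randomly named folder under #SharedObjects in place, since the tests above suggest some sites need it to exist.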

  • ASP.NET Website Administration Tool: Unable to connect to SQL Server database

    - by MedicineMan
    I am trying to get authentication and authorization working with my ASP.NET MVC project. I've run the aspnet_regsql.exe tool without any problem and see the aspnetdb database on my server (using Management Studio). My connection string in my web.config is:

        <connectionStrings>
          <add name="ApplicationServices"
               connectionString="data source=MYSERVERNAME;Integrated Security=SSPI;AttachDBFilename=|DataDirectory|aspnetdb.mdf;User Instance=true"
               providerName="System.Data.SqlClient" />

    The error I get is:

        There is a problem with your selected data store. This can be caused by an invalid server name or credentials, or by insufficient permission. It can also be caused by the role manager feature not being enabled. Click the button below to be redirected to a page where you can choose a new data store.
        The following message may help in diagnosing the problem: Unable to connect to SQL Server database.

    In the past, I have had trouble connecting to my database because I've needed to add users. Do I have to do something similar here?
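
    A hedged observation (guesswork from the connection string, not a confirmed fix): that string mixes two models - it names the real server but also asks for a User Instance with an attached aspnetdb.mdf, while aspnet_regsql created a normal aspnetdb database on MYSERVERNAME. Pointing the membership connection at that database directly is the usual shape, assuming the account running the site and the Web Site Administration Tool has rights to it:

        <connectionStrings>
          <remove name="ApplicationServices" />
          <add name="ApplicationServices"
               connectionString="Data Source=MYSERVERNAME;Initial Catalog=aspnetdb;Integrated Security=SSPI"
               providerName="System.Data.SqlClient" />
        </connectionStrings>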
