Search Results

Search found 39456 results on 1579 pages for 'why do you'.


  • Why can't we treat SSL certs like PGP keys instead of trusting CAs?

    - by yarun can
    I am no expert and do not know all the technical aspects of SSL or its server/client-side implications and implementations. However, I understand them well enough from a user's point of view to use SSL and encryption daily. It strikes me as silly to trust known or unknown CAs when it comes to the certificates for our servers. There have been many cases of misconduct, misuse, compromise and theft of certificates/CA keys at those places. On top of those known issues, we also have to pay these companies regularly. So I am wondering: why can't we use and treat web server certificates the way we use our PGP keys? I would sign an SSL certificate and send it to a central server, and then each user accessing my site would check its validity and keys against some central server (like PGP key servers). Is this a stupid idea? If so, what could be a better idea than the current system of issuing valid certificates? I am looking for a better, more secure idea. Naturally this is not a solution to an existing problem; rather, it would be a hypothetical solution for some future implementation of the currently messed-up web of trust on the internet, given the recent news about the NSA and their criminal buddies around the world. Thanks.

    Read the article

  • Why do I have 55 local area connections in ipconfig?

    - by RMorrisey
    Windows Vista Home Premium. I should mention that I am having no problem whatever getting an internet connection. When I type "ipconfig" in the console, I get (55!) messages of 3 lines each, listing a ton of disconnected network connections. My PC only has one network card. Each message looks like this: Tunnel adapter Local Area Connection* 55: Media State . . . . . . . . . . . : Media disconnected Connection-specific DNS Suffix . : These don't cause a major problem; they make it a pain, though, to fish upward and find my IP address. How can I get rid of them? Edit: Actually, a few connection numbers are randomly missing from the sequence; so, it's really more like 30 or 40 connection messages, rather than all 55. Not sure why that is, either.
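
    A couple of commands that may help here (not taken from the question itself; they assume the phantom entries are the Teredo/ISATAP/6to4 tunnel adapters that Vista creates automatically). To fish the real address out of the clutter:

      ipconfig | findstr /i "IPv4"

    And, from an elevated prompt, the tunnel interfaces can usually be switched off so they stop appearing at all:

      netsh interface teredo set state disabled
      netsh interface isatap set state disabled
      netsh interface 6to4 set state disabled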

    Read the article

  • In Windows 7's Task Manager, why is 1118 MB of memory Available but only 62 MB Free? [closed]

    - by Jian Lin
    Possible Duplicate: Windows 7 memory usage. What are the "Cached", "Available", and "Free" memory values in the following picture (from Windows 7's Task Manager)? If 1118 MB is Available, then why isn't it Free (to use)? As I understand it, if a bowl of noodles is available, that doesn't mean it is free... it may still cost $7. But what about in the Task Manager: when memory is Available, why is it not also Free? Does it cost $2 per MB? And what about "Cached"? What exactly is the cached memory? We may put some hard disk data in RAM, caching the data in RAM for faster access (that's the operating system's job). So if the total physical RAM is 6 GB, what is the 1106 Cached? Cached where? Caching physical RAM in... somewhere? It is also strange that the Cached value is sometimes higher and sometimes lower than the Available value. Can somebody who is knowledgeable about this shed some light on what these values mean?
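
    As a rough sketch of how the three numbers relate (this is the general Windows memory model, not something stated in the question): Available counts Free pages plus Standby pages, i.e. cached data that can be handed out immediately if a program asks for memory:

      Available ≈ Free + Standby (cache that can be repurposed)
      1118 MB   ≈ 62 MB + ~1056 MB

    Cached, in turn, roughly covers the Standby and Modified lists, which is why it can sit either above or below the Available figure.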

    Read the article

  • Why is the link between my switch and my router always negotiating half-duplex mode?

    - by Massimo
    I have a Cisco 2950 switch which has one of its ports connected to an Internet router provided by my ISP; I have no access to the router configuration, but I manage the switch. If I leave all switch ports with their default setup (auto-negotiation of speed and duplex mode), this link always connects at 100 MBit/s, but in half-duplex mode. I've tried replacing the cable, and also moving the link to another switch port: the result is always the same. A different device connected to the same port (or to any switch port, really) shows no problem at all. It could be guessed that someone configured the router to only connect in half-duplex mode... BUT, here's the catch: if I manually force the switch port to full-duplex mode (duplex full in the interface configuration), the link goes up, stays up and is completely stable. So: The connection is not forced to half-duplex mode by the router, otherwise it would not connect at all when I force the switch end to full-duplex. There is no actual link problem, otherwise the full-duplex connection would not come up, or would at least show some errors. But if I leave the port free to auto-negotiate, it always connects in half-duplex mode. Why?
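
    For reference, a minimal sketch of how this is usually pinned down on a 2950 (the interface name here is an assumption, not taken from the question):

      interface FastEthernet0/1
       speed 100
       duplex full

      show interfaces FastEthernet0/1

    The show output reports the negotiated state ("Full-duplex, 100Mb/s" versus "Half-duplex") along with the error counters; late collisions on a port are the classic symptom of a duplex mismatch.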

    Read the article

  • Why is wget so much faster than Firefox at some downloads?

    - by Earlz
    Recently I needed to do an update of Xilinx WebPack, and mind you, this is one hefty piece of software: it weighs in at 6 GB, which definitely isn't "quick" on any internet connection I've ever had available to me. So when I went to download it (using Firefox, of course), I was very... unsettled by the fact that the download was only going at 110 kByte/s. My internet connection is capable of about 2200 kByte/s download, so what gives!? My workaround in the past has been to take the link to my Linode Linux server and download it there with wget, where the download will zip along at 14 MByte/s, and then either copy it to my website directory and download it that way through HTTP, or use sftp. Both ways work about as well and will sufficiently max out my connection. However, I recently figured out the missing variable: I tried doing the download locally with wget and was able to max out my connection! TL;DR: Why is wget so much faster than Firefox at downloading this file? I hardly ever see such a difference in download speeds, except with this one file.

    Read the article

  • Why is a FLAC encoded from a decoded MP3 bigger than the MP3?

    - by Ryan Thompson
    To be more precise than in the title, suppose I have an MP3 file that is 320 kbps. If I decompress it, then logically all the data except for roughly 320 kilobits out of each second of audio should be redundant data, able to be compressed away. So when I encode the decompressed file to FLAC, or any other lossless codec, why is it so much larger? On a related note, is it theoretically possible to losslessly recover the source MP3 audio from a decompressed WAV? (I know the MP3 itself is lossy. I'm asking whether it's possible to re-encode without any further loss.) EDIT: Let me clarify the related question, and the rationale behind it. Suppose I have a WAV that was decompressed from an MP3 file (and assume I don't have the MP3 itself for some reason). If I don't want to lose any more quality, I can re-encode it with FLAC or any other lossless encoder and get a larger file just to maintain the same quality. Or I can re-encode it to MP3 again and get the same size as the original but lose more data. Obviously, neither of these cases is ideal: I can either have the original size or the original quality, but not both (I mean the quality of the original MP3, not the original lossless source). My question is: can we get both? Is it theoretically possible to recover the lossy compressed data from the lossy decompressed data, without losing even more? If it is possible, I could imagine a lossless compression algorithm that compresses the audio with FLAC, then also scans the audio for any signs of previous lossy compression and, if detected, recompresses it losslessly back to the original lossy file. It would then keep whichever file is smaller.
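
    For anyone who wants to reproduce the effect, a minimal sketch of the round trip (the file names are made up; lame and flac are assumed to be installed):

      lame --decode input.mp3 decoded.wav
      flac -8 decoded.wav -o decoded.flac

    The FLAC ends up larger because the decoder's output is ordinary PCM samples: the psychoacoustic model that let the MP3 throw data away is not something a lossless coder can exploit.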

    Read the article

  • Why is this file hidden when you run ls?

    - by luckytaxi
    For a few weeks now I haven't been able to figure out why I can't delete this one particular file. As root I can, but my shell script runs as a different user. So I run ls -la and it's not there. However, if I call it as a parameter, it shows up! Sure enough, the owner is root, hence I'm not able to delete it. Notice that 6535 is missing...
    [root@server]# ls -la 653*
    -rw-rw-r-- 1 svn svn 24002 Mar 26 01:00 653
    -rw-rw-r-- 1 svn svn 7114 Mar 26 01:01 6530
    -rw-rw-r-- 1 svn svn 8653 Mar 26 01:01 6531
    -rw-rw-r-- 1 svn svn 6836 Mar 26 01:01 6532
    -rw-rw-r-- 1 svn svn 3308 Mar 26 01:01 6533
    -rw-rw-r-- 1 svn svn 3918 Mar 26 01:01 6534
    -rw-rw-r-- 1 svn svn 3237 Mar 26 01:01 6536
    -rw-rw-r-- 1 svn svn 3195 Mar 26 01:01 6537
    -rw-rw-r-- 1 svn svn 27725 Mar 26 01:01 6538
    -rw-rw-r-- 1 svn svn 263473 Mar 26 01:01 6539
    Now it shows up if you call it directly:
    [root@server]# ls -la 6535
    -rw-rw-r-- 1 root root 3486 Mar 26 01:01 6535
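
    A few diagnostic commands that may narrow this down (not from the question; they assume a standard GNU userland on the box):

      echo 653* | cat -A        # does the shell's glob actually include 6535?
      stat 6535                 # inode, ownership and anything odd about the name
      ls -la 653* | cat -A      # makes any non-printing characters visible

    cat -A prints control characters explicitly, so a stray escape or carriage return hiding in a file name shows up immediately.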

    Read the article

  • Why am I getting "Problem loading the page" after enabling HTTPS for Apache on Windows 7?

    - by Anish
    I enabled HTTPS on the Apache server (2.2.15) on Windows 7 Enterprise by uncommenting: Include /private/etc/apache2/extra/httpd-ssl.conf in C:\Program Files (x86)\Apache Software Foundation\Apache2.2\conf\httpd.conf and modifying C:\Program Files (x86)\Apache Software Foundation\Apache2.2\conf\httpd-ssl.conf to include: DocumentRoot "C:/Program Files (x86)/Apache Software Foundation/Apache2.2/htdocs" ServerName myserver.com:443 ServerAdmin [email protected] ... SSLCertificateFile "SSLCertificateFile "C:/Program Files (x86)/Apache Software Foundation/Apache2.2/conf/cert.pem SSLCertificateKeyFile "SSLCertificateFile "C:/Program Files (x86)/Apache Software Foundation/Apache2.2/conf/key.pem" Then I restart Apache (going to Start - All Programs - Apache Server 2.2 - Control - Restart) and go to localhost on port 443 in Firefox, where I get: Index of / Links/ ..... .... But on displaying the web page I see: Unable to connect. Firefox can't establish a connection to the server at localhost. *The site could be temporarily unavailable or too busy. Try again in a few moments. *If you are unable to load any pages, check your computer's network connection. *If your computer or network is protected by a firewall or proxy, make sure that Firefox is permitted to access the Web. I read "Why am I getting 403 Forbidden after enabling HTTPS for Apache on Mac OS X?" and added the default web server configuration block to match my DocumentRoot. The error log C:\Program Files (x86)\Apache Software Foundation\Apache2.2\logs\error.log gives the following error: The Apache2.2 service is running. (OS 5)Access is denied. : Init: Can't open server certificate file C:/Program Files (x86)/Apache Software Foundation/Apache2.2/conf/cert.pem I checked the permissions for cert.pem and they indicate: all the permissions (Full control, Read, Read and modify, Execute, Write) are marked for Admin, and I am currently logged in as Admin. I tried using oldcert.pem and oldkey.pem on the same server and it works fine. Is there anything that I missed?
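
    Two things worth comparing against the snippet above (these are assumptions on my part, not confirmed by the question). First, the certificate directives normally appear once each, with the whole path quoted:

      SSLCertificateFile "C:/Program Files (x86)/Apache Software Foundation/Apache2.2/conf/cert.pem"
      SSLCertificateKeyFile "C:/Program Files (x86)/Apache Software Foundation/Apache2.2/conf/key.pem"

    Second, since the log complains "Access is denied" when opening cert.pem, the account the Apache service runs as (often Local System or a dedicated service account rather than the logged-in admin) may need read access to the file, e.g. from an elevated prompt:

      icacls "C:\Program Files (x86)\Apache Software Foundation\Apache2.2\conf\cert.pem" /grant "NT AUTHORITY\SYSTEM":R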

    Read the article

  • Why would the SQL 2008 "Generate scripts..." utility generate an invalid SQL script?

    - by Deane
    I have a SQL2008 database that needs to be restored to a SQL2005 instance. I have gone through the "Generate scripts..." wizard, set it for SQL2005 compatibility, and generated a 62MB SQL script. When I run it on the SQL2005 instance, it throws all kinds of errors, and some of them are really strange in that they describe an invalid database: FK constraints are wrong, it's trying to create FKs on columns that don't exist, it's trying to insert records that produce duplicate key errors, and it's trying to create the same objects twice. Any idea how this could happen? This SQL script was generated by SQL Server Management Studio just minutes before I tried to restore it, and was not modified. Why would this generate an invalid SQL file? Doesn't it just describe the SQL2008 database, which is presumably valid since we're using it? In particular, the duplicate key insertion errors mystify me. If there's a key constraint in the SQL script, then there must be the same thing in the SQL2008 table. So how could we get rows in there that violate that key constraint?

    Read the article

  • Why am I seeing MailSlot Browse messages on unrouted ports of my Linux box?

    - by nmichaels
    I have a Linux box (Debian squeeze) with several NICs. The ones of interest are: eth3 - my main link to the network (dhcp on 10.20.30.0/24) eth0 - the first connection to my test network (static: 192.168.1.2) eth4 - the second connection to my test network (static: 192.168.1.1) My routing table looks like this: $ sudo route Kernel IP routing table Destination Gateway Genmask Flags Metric Ref Use Iface 10.20.30.0 * 255.255.255.0 U 0 0 0 eth3 default 10.20.30.254 0.0.0.0 UG 0 0 0 eth3 I have the 2 test net ports connected to each other with a crossover cable and an instance of wireshark running on each port. Every once in a while, I'll see a packet like the following show up. Who could be doing this, and how do I convince them to stop? I do have Samba running on the machine (for a cifs mount) but don't see why it would be sending packets out to unrouted ports. I had a Windows VM running in VMWare Client and thought that might be causing it, but it still happens without it. What I want is totally silent interfaces so I can run some tests with Scapy over them.
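
    If Samba turns out to be the sender (an assumption on my part; nmbd's browser announcements travel as MailSlot\Browse datagrams), it can be told to stay off the test interfaces with a couple of lines in smb.conf:

      [global]
         interfaces = eth3
         bind interfaces only = yes

    followed by a restart of Samba (smbd/nmbd).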

    Read the article

  • Why does a USB thumb drive screw up the boot sequence?

    - by Carl B
    I am trying to understand a boot issue. At times I have files that I save to and retrieve from my thumb drive. I use the front panel ports as they are nice and easy to get to, and I typically power down my system nightly. If I forget to pull the drive and power on the system, it becomes the first bootable device. As there is no OS on the USB drive, I get "BOOTMGR is missing, press CTRL+ALT+DELETE". When I go into the BIOS to look at the boot sequence, the thumb drive is up top, the DVD drive is missing and not found in the list of devices, and all of the hard drives are next in line. When I pull the USB drive and reboot, everything is back to normal: the old boot sequence is in place, the DVD drive is right where it should be, and there are no issues. So why does this happen with a USB drive in the port at boot up? If it can't be booted from, shouldn't the next drive be attempted? Note: This happens when the thumb drive is plugged into a USB port on the front panel. It does not seem to happen on the rear panel ports.

    Read the article

  • Why won't dhclient use the static IP I'm telling it to request?

    - by mike
    Here's my /etc/dhcp3/dhclient.conf: request subnet-mask, broadcast-address, time-offset, routers, domain-name, domain-name-servers, domain-search, host-name, netbios-name-servers, netbios-scope, interface-mtu; timeout 60; reject 192.168.1.27; alias { interface "eth0"; fixed-address 192.168.1.222; } lease { interface "eth0"; fixed-address 192.168.1.222; option subnet-mask 255.255.255.0; option broadcast-address 255.255.255.255; option routers 192.168.1.254; option domain-name-servers 192.168.1.254; } When I run "dhclient eth0", I get this: There is already a pid file /var/run/dhclient.pid with pid 6511 killed old client process, removed PID file Internet Systems Consortium DHCP Client V3.1.1 Copyright 2004-2008 Internet Systems Consortium. All rights reserved. For info, please visit http://www.isc.org/sw/dhcp/ wmaster0: unknown hardware address type 801 wmaster0: unknown hardware address type 801 Listening on LPF/eth0/00:1c:25:97:82:20 Sending on LPF/eth0/00:1c:25:97:82:20 Sending on Socket/fallback DHCPREQUEST of 192.168.1.27 on eth0 to 255.255.255.255 port 67 DHCPACK of 192.168.1.27 from 192.168.1.254 bound to 192.168.1.27 -- renewal in 1468 seconds. I used strace to make sure that dhclient really is reading that conf file. Why isn't it paying attention to my "reject 192.168.1.27" and "fixed-address 192.168.1.222" lines?
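
    One detail that may matter here (my reading of dhclient.conf semantics, not something stated in the question): alias and lease blocks do not change what the client asks the server for; an alias only adds a second address, and a lease only provides a fallback when no server answers. The knob that actually asks the DHCP server for a particular address is a send statement, roughly:

      send dhcp-requested-address 192.168.1.222;

    Whether the server honors the request is still up to the server.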

    Read the article

  • Why do Microsoft Windows updates take so long to install?

    - by Mathieu Pagé
    Hi, I have a question that is not related to a problem I'm having, just something I'd like to understand: why are Windows updates so slow? First, Windows Update needs to find which updates you need, and this takes about 5 minutes. What is happening behind the scenes during those 5 minutes? I would have thought it would be enough to compare the updates you already have to the complete list of updates, or to check the version numbers of a couple of files. Then, when it comes time to install the updates, they also take a long time. Some 1 MB updates take 2, 3 or 5 minutes to install. What is taking so long? I would have thought it was simply a matter of backing up the old file, uncompressing the new files and replacing the old file, which should be really fast. Is Windows doing something else? For comparison, under Linux you can find which updates you need in about 20 seconds, and installing them is usually pretty fast (the time to uncompress the files). I can do a complete upgrade of my Linux machine in about 25 minutes (download 600-800 MB of updates, hundreds of them, and install them), while under Windows 25 minutes is the time it needs just to find which updates are needed and install about 5-10 of them. I just updated a Windows XP Home machine from SP1a to SP3 plus all other updates; it took me more than 3 hours. Doing something like that in the Linux world takes about 30 minutes. I don't want to bash Microsoft here. I genuinely want to know what they do differently that makes it take so long.

    Read the article

  • BackupPC - why does it use rsync --sender --server ... ?

    - by Jakobud
    I'm in the process of experimenting with BackupPC on a CentOS 5.5 server. I have everything pretty much set up with default values. I tried setting up a basic backup for a host's /www directory. The backup fails with the following errors: full backup started for directory /www Running: /usr/bin/ssh -q -x -l root target /usr/bin/rsync --server --sender --numeric-ids --perms --owner --group -D --links --hard-links --times --block-size=2048 --recursive --ignore-times . /www/ Xfer PIDs are now 30395 Read EOF: Connection reset by peer Tried again: got 0 bytes Done: 0 files, 0 bytes Got fatal error during xfer (Unable to read 4 bytes) Backup aborted (Unable to read 4 bytes) Not saving this as a partial backup since it has fewer files than the prior one (got 0 and 0 files versus 0) First of all, yes, I have my ssh keys set up to allow me to ssh to the target server without requiring a password. In the process of troubleshooting, I tried the above ssh command directly from the command line, and it hangs. Looking at the end of the debug messages for SSH I get: debug1: Sending subsystem: /usr/bin/rsync --server --sender --numeric-ids --perms --owner --group -D --links --hard-links --times --block-size=2048 --recursive --ignore-times . /www/ Request for subsystem '/usr/bin/rsync --server --sender --numeric-ids --perms --owner --group -D --links --hard-links --times --block-size=2048 --recursive --ignore-times . /www/' failed on channel 0 Next I started looking at the rsync flags. I did not recognize --server and --sender. Looking at the rsync man pages, sure enough, I don't see anything about --server or --sender in there. What are those there for? Looking at the BackupPC config I have this: RsyncClientPath = /usr/bin/rsync RsyncClientCmd = $sshPath -q -x -l root $host $rsyncPath $argList+ And for the arguments, I have the following listed: --numeric-ids --perms --owner --group -D --links --hard-links --times --block-size=2048 --recursive Notice there is no --server, --sender or --ignore-times. Why are these things getting added in? Is this part of the problem?
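
    For what it's worth, --server and --sender are rsync's internal flags: they are what the client puts on the remote command line so the far end runs as the sending half of the transfer, which is why they never show up in the man page. A quick way to exercise the same path outside BackupPC (paths taken from the question, destination made up) is:

      rsync -av -e "ssh -q -x -l root" target:/www/ /tmp/www-test/

    If that hangs too, the problem lies in the ssh/rsync layer rather than in BackupPC's configuration.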

    Read the article

  • Why is my SMF manifest losing configuration data when exported on SmartOS?

    - by Scott Lowe
    I'm running a server process under SMF (Service Management Facility) on Joyent's Base64 1.8.1 SmartOS image. For those not acquainted with SmartOS, it is a cloud-based distribution of IllumOS with KVM, but essentially it is like Solaris and inherits from OpenSolaris. So even if you've not used SmartOS, I'm hoping to tap into some Solaris knowledge on ServerFault. My issue is that I want an unprivileged user to be allowed to restart a service that they own. I have worked out how to do that by using RBAC, adding an authorisation to /etc/security/auth_attr and associating that authorisation with my user. I then added the following to my SMF manifest for the service: <property_group name='general' type='framework'> <!-- Allow to be restarted--> <propval name='action_authorization' type='astring' value='solaris.smf.manage.my-server-process' /> <!-- Allow to be started and stopped --> <propval name='value_authorization' type='astring' value='solaris.smf.manage.my-server-process' /> </property_group> And this works well when imported. My unprivileged user is allowed to restart, start and stop its own server process (this is for automated code deployments). However, if I export the SMF manifest, this configuration data is gone... all I see in that section is this: <property_group name='general' type='framework'> <property name='action_authorization' type='astring'/> <property name='value_authorization' type='astring'/> </property_group> Does anybody know why this is happening? Is my syntax wrong, or am I simply using SMF incorrectly?
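
    A sketch of setting the same authorizations directly on the live service instead of via the manifest (the FMRI is assumed; adjust to the real service name):

      svccfg -s my-server-process setprop general/action_authorization = astring: solaris.smf.manage.my-server-process
      svccfg -s my-server-process setprop general/value_authorization = astring: solaris.smf.manage.my-server-process
      svcadm refresh my-server-process

    Since svccfg export only writes out what is stored in the repository, comparing svcprop -p general my-server-process before and after the export helps narrow down whether the values are being dropped on import or on export.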

    Read the article

  • Why is OpenSSH not using the user specified in ssh_config?

    - by Jordan Evens
    I'm using OpenSSH from a Windows machine to connect to a Linux Mint 9 box. My Windows user name doesn't match the ssh target's user name, so I'm trying to specify the user to use for login using ssh_config. I know OpenSSH can see the ssh_config file since I'm specifying the identity file in it. The section specific to the host in ssh_config is: Host hostname HostName hostname IdentityFile ~/.ssh/id_dsa User username Compression yes If I do ssh username@hostname it works. Trying to use ssh_config only gives: F:\>ssh -v hostname OpenSSH_5.6p1, OpenSSL 0.9.8o 01 Jun 2010 debug1: Connecting to hostname [XX.XX.XX.XX] port 22. debug1: Connection established. debug1: permanently_set_uid: 0/0 debug1: identity file /cygdrive/f/progs/OpenSSH/home/.ssh/id_rsa type -1 debug1: identity file /cygdrive/f/progs/OpenSSH/home/.ssh/id_rsa-cert type -1 debug1: identity file /cygdrive/f/progs/OpenSSH/home/.ssh/id_dsa type 2 debug1: identity file /cygdrive/f/progs/OpenSSH/home/.ssh/id_dsa-cert type -1 debug1: Remote protocol version 2.0, remote software version OpenSSH_5.3p1 Debian-3ubuntu5 debug1: match: OpenSSH_5.3p1 Debian-3ubuntu5 pat OpenSSH* debug1: Enabling compatibility mode for protocol 2.0 debug1: Local version string SSH-2.0-OpenSSH_5.6 debug1: SSH2_MSG_KEXINIT sent debug1: SSH2_MSG_KEXINIT received debug1: kex: server->client aes128-ctr hmac-md5 none debug1: kex: client->server aes128-ctr hmac-md5 none debug1: SSH2_MSG_KEX_DH_GEX_REQUEST(1024<1024<8192) sent debug1: expecting SSH2_MSG_KEX_DH_GEX_GROUP debug1: SSH2_MSG_KEX_DH_GEX_INIT sent debug1: expecting SSH2_MSG_KEX_DH_GEX_REPLY debug1: Host 'hostname' is known and matches the RSA host key. debug1: Found key in /cygdrive/f/progs/OpenSSH/home/.ssh/known_hosts:1 debug1: ssh_rsa_verify: signature correct debug1: SSH2_MSG_NEWKEYS sent debug1: expecting SSH2_MSG_NEWKEYS debug1: SSH2_MSG_NEWKEYS received debug1: Roaming not allowed by server debug1: SSH2_MSG_SERVICE_REQUEST sent debug1: SSH2_MSG_SERVICE_ACCEPT received debug1: Authentications that can continue: publickey debug1: Next authentication method: publickey debug1: Trying private key: /cygdrive/f/progs/OpenSSH/home/.ssh/id_rsa debug1: Offering DSA public key: /cygdrive/f/progs/OpenSSH/home/.ssh/id_dsa debug1: Authentications that can continue: publickey debug1: No more authentication methods to try. Permission denied (publickey). I was under the impression that (as outlined in this question: How to make ssh log in as the right user?) specifying User username in ssh_config should work. Why isn't OpenSSH using the username specified in ssh_config?
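
    A couple of checks that may help isolate this (the config path below is an assumption based on the identity-file paths in the output, not something stated in the question): the Host pattern only matches what is typed on the command line, so "ssh hostname.domain" or "ssh 10.0.0.5" will skip a "Host hostname" block, and the config file being used can be forced explicitly to rule out the wrong file being read:

      ssh -v -F /cygdrive/f/progs/OpenSSH/etc/ssh_config hostname

    With -v, a matching block shows up as a "debug1: Applying options for hostname" line near the top of the output; its absence in the transcript above suggests the block is never matched or never read.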

    Read the article

  • Why can't I boot into the Windows Recovery Environment to fix my HDD or salvage my data?

    - by Kevin
    I've been trying to get into WindowsRE to salvage the files on my Sony Vaio laptop after it failed to load Vista (it finally, consistently displays "Error loading operating system" after months of such intermittent failures, usually rectified via restarts or by utilizing Startup Repair or CHKDSK from WindowsRE). The problem is, after successfully accessing it once after this failure (and many times before over the course of the laptop's life), I can no longer get it to load. During the last successful access (right after the failure), I ran Startup Repair, which itself failed and notified me that the boot sector was corrupt. I attempted to head into Sony's proprietary recovery tools menu, which is accessible from WindowsRE when it is loaded from the recovery partition or recovery disk; however, it hung. I have since been unable to access the recovery environment after restarting, using any of these methods: access via the recovery partition (pressing F10 on boot), access via the recovery DVD (created using the same computer when it was healthy), and access via a Windows Vista installation DVD. All three methods produce the same results: the computer acknowledges the boot attempt, successfully gets past the "Windows is loading files" screen, successfully gets past the Windows loading screen, and then stalls at a black screen while showing HDD activity (via the indicator light). After a few minutes, the HDD activity ceases, and after a few more minutes, the oversized cursor used in WindowsRE appears on the black screen. The actual recovery environment, however, never appears, even after leaving the computer in such a state overnight. What is frustrating is that other bootable utilities, such as SeaTools for DOS and MemTest, boot up and run fine. While running perfectly normally itself, MemTest was able to produce a plethora of errors in my RAM. I'm inclined to believe the RAM's faultiness may be causing the WindowsRE boot to fail. Would this be a valid assumption? If I'm not mistaken, booting from external media utilizes the RAM, so such a reason is plausible, assuming my knowledge of bootloading is correct. Other than that, I can't figure out any reason why all the bootable utilities except WindowsRE run fine. Does anyone know what the problem is, or could be? Any solutions?

    Read the article

  • SSH X11 forwarding does not work. Why?

    - by Ole Tange
    This is a debugging question. When you ask for clarification please make sure it is not already covered below. I have 4 machines: Z, A, N, and M. To get to A you have to log into Z first. To get to M you have to log into N first. The following works: ssh -X Z xclock ssh -X Z ssh -X Z xclock ssh -X Z ssh -X A xclock ssh -X N xclock ssh -X N ssh -X N xclock But this does not: ssh -X N ssh -X M xclock Error: Can't open display: The $DISPLAY is clearly not set when logging in to M. The question is why? Z and A share same NFS-homedir. N and M share the same NFS-homedir. N's sshd runs on a non standard port. $ grep X11 <(ssh Z cat /etc/ssh/ssh_config) ForwardX11 yes # ForwardX11Trusted yes $ grep X11 <(ssh N cat /etc/ssh/ssh_config) ForwardX11 yes # ForwardX11Trusted yes N:/etc/ssh/ssh_config == Z:/etc/ssh/ssh_config and M:/etc/ssh/ssh_config == A:/etc/ssh/ssh_config /etc/ssh/sshd_config is the same for all 4 machines (apart from Port and login permissions for certain groups). If I forward M's ssh port to my local machine it still does not work: terminal1$ ssh -L 8888:M:22 N terminal2$ ssh -X -p 8888 localhost xclock Error: Can't open display: A:.Xauthority contains A, but M:.Xauthority does not contain M. xauth is installed in /usr/bin/xauth on both A and M. xauth is being run when logging in to A but not when logging in to M. ssh -vvv does not complain about X11 or xauth when logging in to A and M. Both say: debug2: x11_get_proto: /usr/bin/xauth list :0 2>/dev/null debug1: Requesting X11 forwarding with authentication spoofing. debug2: channel 0: request x11-req confirm 0 debug2: client_session2_setup: id 0 debug2: channel 0: request pty-req confirm 1 debug1: Sending environment. I have a feeling the problem may be related to M missing in M:.Xauthority (caused by xauth not being run) or that $DISPLAY is somehow being disabled by a login script, but I cannot figure out what is wrong.
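
    A few things worth checking on M specifically (not covered by the configs quoted above, which are the client-side ssh_config files), since X11 forwarding also depends on the server side of the last hop:

      grep -i 'X11Forwarding\|X11UseLocalhost\|XAuthLocation' /etc/ssh/sshd_config
      which xauth
      ssh -X N ssh -vvv -X M 'echo $DISPLAY' 2>&1 | grep -i x11

    If sshd on M has X11Forwarding disabled, or cannot run xauth at the path it expects (XAuthLocation), the session typically ends up with DISPLAY unset, which would match the observation that xauth is never run when logging in to M.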

    Read the article

  • Why is my DSDT table different from what I found online?

    - by Hao Shen
    I have found the field in the DSDT table that I want to modify, from here: http://www.ztex.de/misc/c2ctl.e.html Generally, I want to modify the _PSS field for the processor so that I can have more frequency levels available in the CPUfreq driver interface. I tried to use these commands to disassemble the DSDT table on my desktop (Linux 2.6.29, Intel Core 2): cat /proc/acpi/dsdt > dsdt.aml iasl -d dsdt.aml Then I have a file dsdt.dsl as follows (very long, so I just show the beginning of the file): /* * Intel ACPI Component Architecture * AML Disassembler version 20090123 * * Disassembly of dsdt.aml, Mon May 6 20:41:40 2013 * * * Original Table Header: * Signature "DSDT" * Length 0x00003794 (14228) * Revision 0x01 **** ACPI 1.0, no 64-bit math support * Checksum 0x46 * OEM ID "DELL" * OEM Table ID "dt_ex" * OEM Revision 0x00001000 (4096) * Compiler ID "INTL" * Compiler Version 0x20050624 (537200164) */ DefinitionBlock ("dsdt.aml", "DSDT", 1, "DELL", "dt_ex", 0x00001000) { Method (DBIN, 0, NotSerialized) { Noop } Scope (\) { Device (_SB.VBTN) ................... But I cannot find the _PSS field shown on the website above, and I do not know why. I am sure the current cpufreq driver shows 4 frequency levels available, so at least there should be something in the table showing this... right? Has anybody here played with the DSDT table before? Thanks,
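
    One likely reason, offered as an assumption rather than a certainty: on many Intel machines the _PSS (performance supported states) objects live in an SSDT that the BIOS loads alongside the DSDT, so disassembling the DSDT alone never shows them. The other tables can be pulled and disassembled with the acpidump/acpixtract tools (run as root; exact output file names vary by version):

      acpidump > acpi.dat
      acpixtract -a acpi.dat
      iasl -d *.dat

    Then grep -n _PSS *.dsl shows which table actually defines the P-states.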

    Read the article

  • Why would Windows Task Scheduler spawn multiple instances of the same task that run into each other?

    - by swagner88
    Overview: I use Windows Task Scheduler to run automated tasks. Occasionally I will see that a task has randomly failed to perform its duties. When I check Task Scheduler to see what has occurred in the history log, I see that for some reason, when the tasks are triggered on their schedules, they spawn several instances of themselves simultaneously, which turns into a train wreck for the task: it either kills the other instances and tries to run the "first" one, or it just does not run at all because it believes another instance of itself is already running. Sometimes this occurs with the same tasks, and occasionally it happens with others. The fix is just to end all instances and start the task manually. Question: Why would one single task with one single schedule decide to spawn multiple instances of itself simultaneously? Note: I've got a separate user account set to run the tasks instead of myself. That user is indeed an admin on the machine that runs the tasks, and the tasks are set to run whether or not the user is logged on. Also, the machine is Windows Server 2008 R2.
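
    Independent of the root cause, the collision behaviour itself is configurable (this is standard Task Scheduler 2.0 on 2008 R2, not something specific to the tasks in question): on the task's Settings tab, "If the task is already running" can be set to "Do not start a new instance", which corresponds to this element in the task's XML definition:

      <Settings>
        <MultipleInstancesPolicy>IgnoreNew</MultipleInstancesPolicy>
      </Settings>

    The other accepted values are Parallel, Queue and StopExisting.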

    Read the article

  • My Boot order changed. Why?

    - by Chris
    I have a laptop running Windows XP SP3 with one internal hard drive partitioned into C: (system) and D: (storage), and I have an external hard drive, F: (external drive). Yesterday the machine was running fine. Today, I go back to it and see that it's just showing a blinking cursor. I checked through the BIOS and the hard drive checked out fine. I CTRL-ALT-DELETED the machine a few times, but I was never able to boot back into the operating system. I threw in a live CD and found out that the boot order of the drives has changed: the external drive is now C:, the system partition is D:, and the storage partition is E:. Does anyone have any idea of how or why this would have occurred? Auto system updates are turned off, so there should have been no automatic reboot of the system overnight, and the anti-virus running on the machine had found no infections before this occurred. Edit: When I was looking through the BIOS of the machine, I did see that the boot order was changed. But still the same question remains: what would have caused this to happen? I can't believe that a random reboot happened and totally changed my hard drive setup.

    Read the article

  • Why does MOSS sometimes delete an existing user from a site?

    - by Jesse
    I'm experiencing an issue with a MOSS installation. I am using the Site Settings Permissions to add an Active Directory account as a valid user of a site. This entails validating that the user account name is correct via the 'Check Names' button, then giving them 'Contribute' permissions. Once this is done they appear as a user on the 'All People' page. This works fine and the user is able to access the site. At some point in the future (sometimes several days later) the user account is somehow removed as a valid user from the site. This site resides in a test environment so access is pretty well controlled; which has allowed us to rule out someone else going in and removing the user manually. This appears to be something that is being done by the system itself and we have no idea why. We can manually add the user back, but then it will eventually get removed again later. I have an admittedly limited understanding of SharePoint permissions, but I believe that SharePoint stores valid users in a SQL database and I would assume that when dealing with Active Directory accounts it would be storing the user name and probably the SID. It appears that for some reason this record is later getting deleted out of the database, as the users will suddenly disappear from the "All People" page and will start getting "Access Denied: You are not authorized..." messages when trying to access the site. Has anyone seen this behavior before?

    Read the article

  • Why is it necessary to chmod o+r parent directory to fix 403 access forbidden error with Nginx and P

    - by davenolan
    This may be an Nginx wrinkle, or it may be because I don't understand Unix permissions. We're using Hudson CI to deploy our staging instance. So RAILS_ROOT is /var/lib/hudson/jobs/JOBNAME/workspace. Hudson runs as the hudson user. Nginx runs as the www-data user. hudson and nginx are both members of the www group. The root of my nginx conf points to RAILS_ROOT/public as per normal. RAILS_ROOT/config/environment.rb is owned by www-data (so Passenger runs as www-data). RAILS_ROOT and everything in it is owned by the www group, and the group has r/w/x permissions. As it stood, Nginx threw 403 permission denied when requesting any URL. error.log contained entries like this: public/index.html" is forbidden (13: Permission denied). These did not fix or change the error (each tried with a stop/start of Nginx): chmod 777 -R RAILS_ROOT chgrp www -R /var/lib/hudson I also tried Nginx as root, and Passenger complained that it could not find config/environment (despite the path displayed on the error page being correct). The fix was to ensure everybody has read permissions on each directory in the hierarchy, in this case chmod o+r /var/lib/hudson. But if the group has read permissions on the directory, and nginx is a member of the owner group of the directory, why was it necessary to allow everyone read permissions? Is there something I have not grokked about permissions? $nginx -V nginx version: nginx/0.7.61 built by gcc 4.4.1 (Ubuntu 4.4.1-4ubuntu8) configure arguments: --prefix=/opt/nginx --add-module=/usr/lib/ruby/gems/1.8/gems/passenger-2.2.5/ext/nginx --with-http_ssl_module --with-pcre=~/src/pcre-8.00/ --with-http_stub_status_module $cat /etc/lsb-release DISTRIB_ID=Ubuntu DISTRIB_RELEASE=9.10 DISTRIB_CODENAME=karmic DISTRIB_DESCRIPTION="Ubuntu 9.10"
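
    A handy way to see exactly which component of the path is blocking the www-data worker (assuming util-linux's namei is installed; the path is the layout from the question with JOBNAME as a placeholder):

      namei -l /var/lib/hudson/jobs/JOBNAME/workspace/public/index.html

    It prints the owner, group and mode of every directory along the way. Note that merely traversing a directory needs the execute bit rather than read, so the directory that actually blocks access is often higher up the tree than the file that appears in the error log.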

    Read the article

  • Why might my Fedora 15 live USB persistent storage not work?

    - by Richard J Foster
    I created a Fedora 15 "live" USB stick using the live USB creator found at https://fedorahosted.org/liveusb-creator/ and the Fedora 15 i686 Desktop ISO image with the persistent storage space set to 4096MB. (The USB stick I have available has an 8GB capacity, so there should be plenty of space.) Fedora appears to boot correctly, however it seems that the persistent storage is not working. To verify this, I opened a terminal prompt, then did su - followed by yum update yum. As expected, I was informed that a new version was available. (The live CD contains version 3.2.29-4, at the time of typing 3.2.29-6 is the current version). After installing, I verified that the new version was installed by typing yum --version. I then shutdown the system using shutdown now. After the system had shut down, I rebooted and returned to the terminal prompt. On typing yum --version, I was informed that the version was 3.2.29-4 (i.e. the original version). Why might the persistent storage not be working? Is there anything I can do to fix it?

    Read the article

  • Why is 32-bit mode required in IIS 7.5 for my app?

    - by Jonas Lincoln
    I have a .net4 web application running on a 64-bit 2008 server. I can only get it to run when I set the app pool's "Enable 32-bit Applications" setting to true. All dlls are compiled for .net4 (verified with corflags.exe). How can I figure out why Enable 32-bit Applications is required? The error message from the event log when starting as a 64-bit app pool: Event code: 3008 Event message: A configuration error has occurred. Event time: 2011-03-16 08:55:46 Event time (UTC): 2011-03-16 07:55:46 Event ID: 3c209480ff1c4495bede2e26924be46a Event sequence: 1 Event occurrence: 1 Event detail code: 0 Application information: Application domain: removed Trust level: Full Application Virtual Path: removed Application Path: removed Machine name: NMLABB-EXT01 Process information: Process ID: 4324 Process name: w3wp.exe Account name: removed Exception information: Exception type: ConfigurationErrorsException Exception message: Could not load file or assembly 'System.Data' or one of its dependencies. An attempt was made to load a program with an incorrect format. at System.Web.Configuration.CompilationSection.LoadAssemblyHelper(String assemblyName, Boolean starDirective) at System.Web.Configuration.CompilationSection.LoadAllAssembliesFromAppDomainBinDirectory() at System.Web.Configuration.AssemblyInfo.get_AssemblyInternal() at System.Web.Compilation.BuildManager.GetReferencedAssemblies(CompilationSection compConfig) at System.Web.Compilation.BuildManager.CallPreStartInitMethods() at System.Web.Hosting.HostingEnvironment.Initialize(ApplicationManager appManager, IApplicationHost appHost, IConfigMapPathFactory configMapPathFactory, HostingEnvironmentParameters hostingParameters, PolicyLevel policyLevel, Exception appDomainCreationException) Could not load file or assembly 'System.Data' or one of its dependencies. An attempt was made to load a program with an incorrect format.
at System.Reflection.RuntimeAssembly._nLoad(AssemblyName fileName, String codeBase, Evidence assemblySecurity, RuntimeAssembly locationHint, StackCrawlMark& stackMark, Boolean throwOnFileNotFound, Boolean forIntrospection, Boolean suppressSecurityChecks) at System.Reflection.RuntimeAssembly.InternalLoadAssemblyName(AssemblyName assemblyRef, Evidence assemblySecurity, StackCrawlMark& stackMark, Boolean forIntrospection, Boolean suppressSecurityChecks) at System.Reflection.RuntimeAssembly.InternalLoad(String assemblyString, Evidence assemblySecurity, StackCrawlMark& stackMark, Boolean forIntrospection) at System.Reflection.Assembly.Load(String assemblyString) at System.Web.Configuration.CompilationSection.LoadAssemblyHelper(String assemblyName, Boolean starDirective) Request information: Request URL: "our url" Request path: "url" User host address: ip-adddress User: Is authenticated: False Authentication Type: Thread account name: "app-pool" Thread information: Thread ID: 6 Thread account name: "app-pool" Is impersonating: False Stack trace: at System.Web.Configuration.CompilationSection.LoadAssemblyHelper(String assemblyName, Boolean starDirective) at System.Web.Configuration.CompilationSection.LoadAllAssembliesFromAppDomainBinDirectory() at System.Web.Configuration.AssemblyInfo.get_AssemblyInternal() at System.Web.Compilation.BuildManager.GetReferencedAssemblies(CompilationSection compConfig) at System.Web.Compilation.BuildManager.CallPreStartInitMethods() at System.Web.Hosting.HostingEnvironment.Initialize(ApplicationManager appManager, IApplicationHost appHost, IConfigMapPathFactory configMapPathFactory, HostingEnvironmentParameters hostingParameters, PolicyLevel policyLevel, Exception appDomainCreationException) Custom event details:
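
    Two quick checks that may pin this down (the names below are placeholders, not taken from the question). First, confirm what the pool is actually set to:

      %windir%\system32\inetsrv\appcmd list apppool "MyAppPool" /text:enable32BitAppOnWin64

    Second, sweep the bin folder for an assembly compiled as x86 rather than AnyCPU, since a single 32-bit-only managed or native dependency is enough to force the whole pool into 32-bit mode:

      for %f in (bin\*.dll) do corflags "%f"

    corflags reports "32BITREQ : 1" (or "32BIT : 1" with older SDK versions) on assemblies that can only load in a 32-bit process; purely native DLLs make corflags fail, which is also a hint.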

    Read the article
