Search Results

Search found 2043 results on 82 pages for 'newly insecure'.

Page 57/82 | < Previous Page | 53 54 55 56 57 58 59 60 61 62 63 64  | Next Page >

  • Can I configure Thunderbird 3 to refresh the folder list for an Exchange IMAP account?

    - by Howiecamp
    Background: When used as an IMAP client against Gmail, Thunderbird 3 (may be the case in v2 also, not sure) will refresh its list of folders (the folders correspond to Gmail labels) when you do "Download/Sync Now..." or restart the Thunderbird client. Any new folders (labels) created in Gmail will sync to the client and any folders moved/changed/deleted in Gmail will move/change/delete on the client as well. (Note: Thunderbird has the concept of "subscribing" to IMAP folders (presumably allowing you to determine which folders you want, rather than bringing all of them down and dragging loads of data across the wire). When used against Gmail, Thunderbird appears to automatically subscribe to all folders (including when folders are newly created in Gmail), so this might be why the refresh is happening properly.) This behavior is what I want with Exchange. When using Thunderbird with Exchange (2007), the folder list doesn't refresh when folders are added/changed/deleted on the server and/or from a different mail client. When I look at the subscription options, some are checked and some are not (not sure why Thunderbird picked some and not others). And when I add new folders on the server and/or from another client, they never even appear in Thunderbird's list of folders, preventing me from subscribing to them.

    Read the article

  • apt-get : Size mismatch

    - by Cédric Girard
    I created a private deb repository to spread a piece of software and its updates to 600 Ubuntu netbooks. Each time the network is connected, my script tries to do an apt-get update. But sometimes (quite often in fact), I have this : Failed to fetch https://myserver/ubuntu/dists/maverick/main/binary-i386/voosicomat.deb Size mismatch The server is Apache 2.2, HTTPS only. There are no errors in its logs. Here is the script : apt-get update apt-get dist-upgrade --force-yes --yes Here is the complete output of apt-get Ign https://myserver maverick Release.gpg Ign https://myserver/ubuntu/ maverick/main Translation-en Ign https://myserver maverick Release Ign https://myserver maverick/main i386 Packages/DiffIndex Ign https://myserver maverick/main i386 Packages Ign https://myserver maverick/main i386 Packages Hit https://myserver maverick/main i386 Packages Reading package lists... Reading package lists... Building dependency tree... Reading state information... The following packages will be upgraded: majdb utilitaires voosicomat 3 upgraded, 0 newly installed, 0 to remove and 0 not upgraded. Need to get 6207kB/6273kB of archives. After this operation, 0B of additional disk space will be used. WARNING: The following packages cannot be authenticated! utilitaires voosicomat majdb Get:1 https://myserver/ubuntu/ maverick/main voosicomat all 2.0.1 [4755kB] Get:2 https://myserver/ubuntu/ maverick/main majdb all 1.0.17 [1452kB] Failed to fetch https://myserver/ubuntu/dists/maverick/main/binary-i386/voosicomat.deb Size mismatch Fetched 7091kB in 21s (324kB/s) E: Unable to fetch some archives, maybe run apt-get update or try with --fix-missing? Regards Cédric
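
    A size mismatch usually means the Packages index the repository advertises no longer matches the .deb that is actually served, or that a stale partial download is cached on the netbook. A rough sketch of both checks, assuming a repository layout like the one in the URLs above (paths are illustrative):

      # On the netbook: drop any partially fetched archives and retry
      sudo apt-get clean
      sudo rm -f /var/cache/apt/archives/partial/*
      sudo apt-get update && sudo apt-get dist-upgrade --force-yes --yes

      # On the repository server: regenerate the index so the recorded
      # sizes match the .deb files currently on disk
      cd /var/www/ubuntu/dists/maverick/main/binary-i386
      dpkg-scanpackages . /dev/null | gzip -9c > Packages.gz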

    Read the article

  • Do email forms need to be sanitized before sending?

    - by levi
    I have a client that keeps getting reports from GoDaddy's "websiteprotection.com" stating how the website is insecure. Your website contains pages that do not properly sanitize visitor-provided input to make sure it contains no malicious content or scripts. Cross-site scripting vulnerabilities let malicious users execute arbitrary HTML or script code in another visitor's browser. Output: The request string used to detect this flaw was : /cross_site_scripting.?nasl.asp The output was : HTTP/1.1 404 Not Found\r Date: Wed, 21 Mar 2012 08:12:02 GMT\r Server: Apache\r X-Pingback:http://?CLIENTSWEBSITE.com/?xmlrpc.php\r Expires: Wed, 11 Jan 1984 05:00:00 GMT\r Cache-Control: no-cache, must-revalidate, max-age=0\r Pragma: no-cache\r Set-Cookie: PHPSESSID=?1jsnhuflvd59nb4trtquston50; path=/\r Last-Modified: Wed, 21 Mar 2012 08:12:02 GMT\r Keep-Alive: timeout=15, max=100\r Connection: Keep-Alive\r Transfer-Encoding: chunked\r Content-Type: text/html; charset=UTF-8\r \r <div id="contact-form" class="widget"><form action="http://?CLIENTSWEBSITE.com/<script>cross_site_?scripting.nasl</script>.asp" id="contactForm" method="post"> It looks like it has an issue with the contact form. All the contact form does is post an ajax request to the same page, and then a PHP script mails the data (no database stuff). Are there any security issues here? Any ideas on how I can satisfy the security scanner? Here is the form and script: <form action="<?php echo $this->getCurrentUrl(); ?>" id="contactForm" method="post"> <input type="text" name="Name" id="Name" value="" class="txt requiredField name" /> //Some more text inputs <input type="hidden" name="sendadd" id="sendadd" value="<?php echo $emailadd ; ?>" /> <input type="hidden" name="submitted" id="submitted" value="true" /><input class="submit" type="submit" value="Send" /> </form> // Some initial JS validation, if that passes an ajax post is made to the script below //If the form is submitted if(isset($_POST['submitted'])) { //Check captcha if (isset($_POST["captchaPrefix"])) { $capt = new ReallySimpleCaptcha(); $correct = $capt->check( $_POST["captchaPrefix"], $_POST["Captcha"] ); if( ! $correct ) { echo false; die(); } else { $capt->remove( $_POST["captchaPrefix"] ); } } $dateon = $_POST["dateon"]; $ToEmail = $_POST["sendadd"]; $EmailSubject = 'Contact Form Submission from ' . get_bloginfo('title'); $mailheader = "From: ".$_POST["Email"]."\r\n"; $mailheader .= "Reply-To: ".$_POST["Email"]."\r\n"; $mailheader .= "Content-type: text/html; charset=iso-8859-1\r\n"; $MESSAGE_BODY = "Name: ".$_POST["Name"]."<br>"; $MESSAGE_BODY .= "Email Address: ".$_POST["Email"]."<br>"; $MESSAGE_BODY .= "Phone: ".$_POST["Phone"]."<br>"; if ($dateon == "on") {$MESSAGE_BODY .= "Date: ".$_POST["Date"]."<br>";} $MESSAGE_BODY .= "Message: ".$_POST["Comments"]."<br>"; mail($ToEmail, $EmailSubject, $MESSAGE_BODY, $mailheader) or die ("Failure"); echo true; die(); }
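
    The scanner's complaint is about the request URI being echoed back unescaped inside the form's action attribute (via $this->getCurrentUrl()), not about the mail sending itself. A hypothetical way to reproduce what the scanner did from a shell (the hostname is the masked placeholder from the output above):

      # Request a page with a script tag smuggled into the path and check
      # whether it comes back verbatim inside action="..."
      curl -s "http://CLIENTSWEBSITE.com/%3Cscript%3Ecross_site_scripting.nasl%3C/script%3E.asp" \
        | grep -i "cross_site_scripting"

    If the tag is reflected unescaped, passing the current URL through htmlspecialchars() before printing it into the action attribute (and validating Email/sendadd before putting them into mail headers) should make the finding go away.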

    Read the article

  • NTFS partition size not recognized after disaster recovery clone

    - by djechelon
    I'm in the middle of a disaster recovery of a 250GB hard disk that was "clicking". Obviously I didn't have a backup copy. I managed to salvage all the files thanks to GParted Live that was able to read the disk without a single "click" sound. So I cloned the partition to a new drive sized 500GB. Unfortunately, the GParted process went into some kind of infinite loop, disks stopped I/O and after a couple of hours I interrupted the clone process I started. Now the problem is: when cloning the partition I also chose to expand 250GB to the whole 500GB of the target disk. Windows sees the partition sized 500GB in computer management, but Windows Explorer only sees 250. chkdsk e: /f says the filesystem is OK. How can I repair the file system and let Windows see the full 500GB of the new partition? An alternate idea is to deep-copy the files from the backup disk to a newly formatted disk. This should definitely fix it. Any other ideas?
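
    Most likely the partition was grown to 500GB but the NTFS filesystem inside it never was, which is why Computer Management shows the new size while Explorer still reports 250GB. A sketch of growing the filesystem from a GParted/Linux live environment (the device name is an assumption; the volume must be unmounted, and Windows will want a chkdsk on the next boot):

      ntfsresize --info /dev/sdb1   # report current filesystem size vs. partition size
      ntfsresize /dev/sdb1          # grow the NTFS filesystem to fill the partition

    If a recent Windows version is available, extending the filesystem on the selected volume from diskpart should be the equivalent in-place fix.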

    Read the article

  • switching dns server providers

    - by Yoav Aner
    I'm trying to wrap my head around something that I thought I kinda understood, but clearly there's some piece missing. We're currently using Zerigo as our primary dns, with slave dns running on linode. This works quite well. However, recent DDOS attacks on zerigo meant that whilst dns queries were still resolved, we were unable to make any dns changes. Since we rely on dns changes on our own infrastructure, I'm looking to improve this somehow. I'd rather not ditch zerigo completely, and realise that this or similar problems can happen with ANY primary dns hosting provider. It might not be DDOS, but a bug on their server, or something that means we can no longer issue updates. For this I want to have some fallback option: a completely independent (primary) dns provider (maybe AWS), which we will keep in sync manually. We will switch over to it when there's a problem. This brings me to my question: How do I make sure we can switch those providers quickly enough? Specifically, on our registrar, there's a list of name servers, but no settings like TTL etc. How do dns clients know to use the newly updated name server records? Is this configured in the SOA? However, the SOA itself is hosted with the dns provider and we might not be able to update it... This is not a question about a one-time move, which can be planned and scheduled and tested, but rather to be able to do so when things are half-broken.
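
    Resolvers learn which name servers are authoritative from the NS records the registrar publishes in the parent (TLD) zone, so how fast a switch propagates is governed by the TTL on those delegation records and on your own NS/SOA records, not by anything in the registrar's web form. A rough way to see what is actually being handed out and cached (example.com is a placeholder):

      dig NS example.com +trace              # delegation the parent zone returns, with TTLs
      dig NS example.com +noall +answer      # NS set the current primary publishes
      dig SOA example.com +noall +answer     # SOA timers the secondaries follow

    Lowering those TTLs well ahead of time, on both providers, is what keeps the half-broken switch-over window short.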

    Read the article

  • ESXi - change to thin - virtual disk filesize is the same

    - by sven
    Running ESXi 5.5 here with a datastore on a single SSD. Now, I thought about changing from thick to thin disks and found that I could use a tool on the ESXi host to do that. However, the file size of the newly created virtual disk is not changing. I run: vmkfstools -i loader.vmdk -d 'thin' thinloader.vmdk Destination disk format: VMFS thin-provisioned Cloning disk 'loader.vmdk'... Clone: 100% done. After that I compared the virtual disk sizes: ls -la *.vmdk -rw------- 1 root root 32212254720 Jun 10 08:25 loader-flat.vmdk -rw------- 1 root root 467 May 21 17:04 loader.vmdk -rw------- 1 root root 32212254720 Jun 10 08:27 thinloader-flat.vmdk -rw------- 1 root root 520 Jun 10 08:33 thinloader.vmdk Stats on the original file: stat loader.vmdk File: loader.vmdk Size: 467 Blocks: 0 IO Block: 131072 regular file Device: 8bf64d175e27544ch/10085333178302026828d Inode: 419443780 Links: 1 Access: (0600/-rw-------) Uid: ( 0/ root) Gid: ( 0/ root) Access: 2014-01-25 10:17:34.000000000 Modify: 2014-05-21 17:04:06.000000000 Change: 2014-05-21 17:04:06.000000000 and on the thin file: stat thinloader.vmdk File: thinloader.vmdk Size: 520 Blocks: 0 IO Block: 131072 regular file Device: 8bf64d175e27544ch/10085333178302026828d Inode: 432026692 Links: 1 Access: (0600/-rw-------) Uid: ( 0/ root) Gid: ( 0/ root) Access: 2014-06-10 08:27:45.000000000 Modify: 2014-06-10 08:33:30.000000000 Change: 2014-06-10 08:33:30.000000000 Does anyone have an idea why the thin disk is not taking up any less space (tried with multiple VMs already - all the same)? Also, I have noticed that the newly created file automatically appends "-flat" to the disk name ... Thanks Sven Update - diff of the vmdk configs: --- loader.vmdk +++ thinloader.vmdk @@ -7,15 +7,17 @@ createType="vmfs" -RW 62914560 VMFS "loader-flat.vmdk" +RW 62914560 VMFS "thinloader-flat.vmdk" ddb.adapterType = "lsilogic" +ddb.deletable = "true" ddb.geometry.cylinders = "3916" ddb.geometry.heads = "255" ddb.geometry.sectors = "63" ddb.longContentID = "6d95855805dfa0079327dfee29b48dca" -ddb.uuid = "60 00 C2 98 d5 7d 17 bf-ac 54 70 b1 2d 39 43 d5" +ddb.thinProvisioned = "1" +ddb.uuid = "60 00 C2 93 c4 13 6c cf-bb 7b 34 c9 2c b4 dc 1e" ddb.virtualHWVersion = "8"
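
    With VMFS, a thin-provisioned -flat.vmdk keeps its full logical size, so ls will always show the same 32GB here; what differs is how many blocks are actually allocated on the datastore. A quick sanity check from the ESXi shell, using the file names from the listing above (a sketch; du reports allocated space rather than logical size):

      ls -l loader-flat.vmdk thinloader-flat.vmdk   # logical size: identical by design
      du -h loader-flat.vmdk thinloader-flat.vmdk   # allocated space: the thin clone should be smaller

    The "-flat" suffix is also expected: the small .vmdk is only the text descriptor, while the -flat file holds the data blocks.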

    Read the article

  • How to get automatic upgrades to work on Ubuntu Server?

    - by J. Pablo Fernández
    I followed the documentation for enabling automatic upgrades in Ubuntu servers, but it's not really updating anything at all. My /etc/apt/apt.conf.d/50unattended-upgrades looks almost like the default. // Automatically upgrade packages from these (origin, archive) pairs Unattended-Upgrade::Allowed-Origins { "Ubuntu karmic-security"; "Ubuntu karmic-updates"; }; // List of packages to not update Unattended-Upgrade::Package-Blacklist { // "vim"; // "libc6"; // "libc6-dev"; // "libc6-i686"; }; // Send email to this address for problems or packages upgrades // If empty or unset then no email is sent, make sure that you // have a working mail setup on your system. The package 'mailx' // must be installed or anything that provides /usr/bin/mail. Unattended-Upgrade::Mail "[email protected]"; // Automatically reboot *WITHOUT CONFIRMATION* if a // the file /var/run/reboot-required is found after the upgrade //Unattended-Upgrade::Automatic-Reboot "false"; The directory /var/log/unattended-upgrades/ is empty. Running /etc/init.d/unattended-upgrades start is not very nice: root@mozart:~# /etc/init.d/unattended-upgrades start Checking for running unattended-upgrades: root@mozart:~# Something seems to be broken, but I'm not sure why. I have pending updates and they are not being applied: root@mozart:~# aptitude safe-upgrade Reading package lists... Done Building dependency tree Reading state information... Done Reading extended state information Initializing package states... Done The following packages will be upgraded: linux-libc-dev 1 packages upgraded, 0 newly installed, 0 to remove and 0 not upgraded. Need to get 0B/743kB of archives. After unpacking 4096B will be used. Do you want to continue? [Y/n/?] In all the servers I have, unattended upgrades seem to have been disabled: root@mozart:~# apt-config shell UnattendedUpgradeInterval APT::Periodic::Unattended-Upgrade root@mozart:~# Any ideas what I am missing?
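
    The empty output of apt-config shell ... APT::Periodic::Unattended-Upgrade is the telling part: the daily apt job that actually calls unattended-upgrade is not enabled, so the 50unattended-upgrades settings never get used. A minimal sketch of enabling and testing it (the 20auto-upgrades file name is a convention; contents are illustrative for a karmic-era system):

      # Enable the periodic apt job that invokes unattended-upgrade
      printf 'APT::Periodic::Update-Package-Lists "1";\nAPT::Periodic::Unattended-Upgrade "1";\n' \
        | sudo tee /etc/apt/apt.conf.d/20auto-upgrades

      # Then see what a run would actually do, with verbose output
      sudo unattended-upgrade --dry-run --debug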

    Read the article

  • installing Conkeror on Ubuntu 12.04

    - by Menelaos Perdikeas
    I am reading the instructions on the conkeror site (and elsewhere) on how to install conkeror on Ubuntu (I am using Ubuntu 12.04 LTS) and it seems that the correct sequence is: sudo apt-add-repository ppa:xtaran/conkeror sudo apt-get update sudo apt-get install conkeror conkeror-spawn-process-helper The first step (apt-add-repository) seems to execute without a problem, giving the following output: You are about to add the following PPA to your system: Conkeror Debian packages for Ubuntu releases without xulrunner (i.e. for 11.04 Natty and later) More info: https://launchpad.net/~xtaran/+archive/conkeror Press [ENTER] to continue or ctrl-c to cancel adding it Executing: gpg --ignore-time-conflict --no-options --no-default-keyring --secret-keyring /tmp/tmp.Re7pWaDEQF --trustdb-name /etc/apt/trustdb.gpg --keyring /etc/apt/trusted.gpg --primary-keyring /etc/apt/trusted.gpg --keyserver hkp://keyserver.ubuntu.com:80/ --recv CB29CBE050EB1F371BAB6FE83BE0F86A6D689050 gpg: requesting key 6D689050 from hkp server keyserver.ubuntu.com gpg: key 6D689050: "Launchpad PPA for Axel Beckert" not changed gpg: Total number processed: 1 gpg: unchanged: 1 However, the apt-get update seems unable to fetch packages from the newly added PPA, with its output ending in: Hit http://security.ubuntu.com precise-security/restricted Translation-en Hit http://security.ubuntu.com precise-security/universe Translation-en Err http://ppa.launchpad.net precise/main Sources 404 Not Found Ign http://extras.ubuntu.com precise/main Translation-en_US Err http://ppa.launchpad.net precise/main i386 Packages 404 Not Found Ign http://extras.ubuntu.com precise/main Translation-en Ign http://ppa.launchpad.net precise/main Translation-en_US Ign http://ppa.launchpad.net precise/main Translation-en W: Failed to fetch http://ppa.launchpad.net/xtaran/conkeror/ubuntu/dists/precise/main/source/Sources 404 Not Found W: Failed to fetch http://ppa.launchpad.net/xtaran/conkeror/ubuntu/dists/precise/main/binary-i386/Packages 404 Not Found E: Some index files failed to download. They have been ignored, or old ones used instead. Accordingly, apt-get install conkeror fails with: mperdikeas@mperdikeas:~$ sudo apt-get install conkeror Reading package lists... Done Building dependency tree Reading state information... Done E: Unable to locate package conkeror Any ideas what might be wrong?
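
    The 404s say the PPA has nothing published for precise, so apt-get update has no index to fetch and the package can never be found. One workaround (a sketch, on the assumption that the natty builds mentioned in the PPA description still install on 12.04) is to point the source entry at a release the archive does carry:

      # Find the file apt-add-repository created (the name may differ)
      grep -r xtaran /etc/apt/sources.list.d/

      # Change 'precise' to 'natty' in that file, then retry
      sudo sed -i 's/precise/natty/g' /etc/apt/sources.list.d/xtaran-conkeror-precise.list
      sudo apt-get update
      sudo apt-get install conkeror conkeror-spawn-process-helper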

    Read the article

  • Why does this preseed for gitolite fail?

    - by troutwine
    I'm installing gitolite on a Debian Squeeze box with the following preseed: gitolite gitolite/gituser string git gitolite gitolite/adminkey string ssh-rsa AAAAB3ECT gitolite gitolite/gitdir string /var/lib/git On installation: # debconf-set-selections /var/cache/debconf/gitolite.preseed # apt-get install gitolite Reading package lists... Done Building dependency tree Reading state information... Done Suggested packages: git-daemon-run gitweb The following NEW packages will be installed: gitolite 0 upgraded, 1 newly installed, 0 to remove and 26 not upgraded. Need to get 0 B/114 kB of archives. After this operation, 348 kB of additional disk space will be used. Preconfiguring packages ... Selecting previously deselected package gitolite. (Reading database ... 24715 files and directories currently installed.) Unpacking gitolite (from .../gitolite_1.5.4-2+squeeze1_all.deb) ... Setting up gitolite (1.5.4-2+squeeze1) ... adduser: The home dir must be an absolute path. dpkg: error processing gitolite (--configure): subprocess installed post-installation script returned error exit status 1 configured to not write apport reports Errors were encountered while processing: gitolite E: Sub-process /usr/bin/dpkg returned an error code (1) Why? The pre-seed was extracted from a manually configured installation, per here and exists without issue on another machine.
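
    adduser only prints "The home dir must be an absolute path" when the directory it is handed is empty or relative, which suggests the postinst is not seeing the gitolite/gitdir value at all. A rough way to check what debconf actually holds before reinstalling (debconf-get-selections comes from the debconf-utils package):

      # What the template answers look like after loading the preseed
      debconf-show gitolite
      debconf-get-selections | grep ^gitolite/

      # If the values are missing, re-check the preseed file for the exact
      # template names and whitespace, then load it again and reinstall
      debconf-set-selections /var/cache/debconf/gitolite.preseed
      apt-get install --reinstall gitolite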

    Read the article

  • Install Windows 7 from ISO image

    - by Albert
    Hi, I have an ISO file of the Windows 7 DVD and I want to install it on my PC which currently only runs Linux. I don't have any DVD drive. I have some unpartitioned space on one disk where I want to install it in. When I am doing this for Linux, I usually just create the partitions from the running system, format them, mount them, copy files over, chroot into it, setup the stuff and I can boot into it (or I use some of the uncountable available scripts which do exactly that automatically). However, I have no idea how to do the same thing with Windows. So far, I tried with VMware, i.e. I gave it direct full access to the disk where I want to install it in, installed it there, then tried to boot natively into it. The Windows logo showed up but after maybe 3 seconds or so, it crashes. Safe mode also crashes. I already expected that it would probably behave exactly the way it does right now because I have heard that Windows is quite sensitive about hardware changes (i.e. the VMware hardware and the real hardware). However, how can I fix it now so that it works? Or I could also just delete it again and start over. But how exactly? I also searched for ways to boot directly into an ISO file. There seem to be ways to do that via GRUB (and maybe some additional boot loader), although quite complicated. I already tried one method (GRUB: map ...iso (hdX)), however, that didn't work. Also, even if it does work, I will get into trouble when I boot into the newly installed Windows and it asks for the DVD (because it does that at the first boot into the new system). Seems all quite complicated. Isn't there some easy way like I would do it for Linux? Or what would be the easiest way to get what I want?

    Read the article

  • IPSec tunnelling with ISA Server 2000...

    - by Izhido
    Believe it or not, our corporate network still uses ISA Server 2000 (in a Windows Server 2003 machine) to enable / control Internet access to / from it. I was asked recently to configure that ISA Server to create a site-to-site VPN for a new branch in an office about 25 km away from it. The idea is basically to enable not only computers, but also Palm devices (WiFi-enabled, of course), to be able to see other computers in both sites. I was also told that a simple VPN-enabled wireless AP/router (in this case, a Cisco WRV210 unit) should be enough to establish communications with the main office. To be fair, the router looks easy to configure; it was confusing at first, but further understanding of how site-to-site VPNs work cleared all doubts about it. Now I need to make modifications to our ISA Server in order to recognize the newly installed & configured "remote" VPN site. Thing is, either my Googling skills are pathetically horrible, or there doesn't seem to be much (or any, at all) information about how to configure an ISA Server 2000 for this purpose. Lots of stuff on 2004, of course; also, I think I saw something for 2006. But nothing I could find about 2000. Reading about 2004, it seems that the only way I can do site-to-site with a Cisco router (read: a non-ISA-Server machine) is through something they call an "IPSec tunnel". Fair enough. However, I can't figure out for the life of me how I could even start to find, let alone configure, such a thing. Do you, people, happen to know how to do IPSec tunnelling on an ISA Server 2000, so I can connect to a Cisco WRV210 VPN-enabled router, and build a site-to-site VPN for both networks? Or is this not possible at all? (Meaning I should change something in this configuration to make it work...)

    Read the article

  • PC only boots from Linux-based media and won't boot from DOS-based media

    - by Xolstice
    I have this problem where the PC only seems to boot from a floppy disk or CD if it was created as a Linux-based bootable media. If it was created as a DOS-based bootable media the system just freezes at the starting point of the boot process. I originally asked this under question 139515 for CD booting only, and based on the given answers, I was under the impression the problem was with the CD-ROM drive; however, I have since installed a newly purchased CD-ROM drive and the same freezing occurs. This then made me try the DOS bootable floppy disk approach and I was quite surprised that it exhibited the same freezing problem. I then tried a Linux bootable floppy and everything booted from it without any issues. As I mentioned in my original question, the PC was booting just fine from the DOS-based bootable CD, and then it suddenly decides to pull this freezing stunt. I can't remember if I changed anything in the BIOS settings that may have caused the problem, but I am wondering if that could be the case - it is currently using the Award Module BIOS v4.60PGMA. Can anyone help?

    Read the article

  • Can't connect to EC2 instance Permission denied (publickey)

    - by Assad Ullah
    I got this when I tried to connect to my new instance (Ubuntu 12.01 EC2) with my newly generated key sh-3.2# ssh ec2-user@**** -v ****.pem OpenSSH_5.6p1, OpenSSL 0.9.8r 8 Feb 2011 debug1: Reading configuration data /etc/ssh_config debug1: Applying options for * debug1: Connecting to **** [****] port 22. debug1: Connection established. debug1: permanently_set_uid: 0/0 debug1: identity file /var/root/.ssh/id_rsa type -1 debug1: identity file /var/root/.ssh/id_rsa-cert type -1 debug1: identity file /var/root/.ssh/id_dsa type -1 debug1: identity file /var/root/.ssh/id_dsa-cert type -1 debug1: Remote protocol version 2.0, remote software version OpenSSH_5.9p1 Debian-5ubuntu1 debug1: match: OpenSSH_5.9p1 Debian-5ubuntu1 pat OpenSSH* debug1: Enabling compatibility mode for protocol 2.0 debug1: Local version string SSH-2.0-OpenSSH_5.6 debug1: SSH2_MSG_KEXINIT sent debug1: SSH2_MSG_KEXINIT received debug1: kex: server->client aes128-ctr hmac-md5 none debug1: kex: client->server aes128-ctr hmac-md5 none debug1: SSH2_MSG_KEX_DH_GEX_REQUEST(1024<1024<8192) sent debug1: expecting SSH2_MSG_KEX_DH_GEX_GROUP debug1: SSH2_MSG_KEX_DH_GEX_INIT sent debug1: expecting SSH2_MSG_KEX_DH_GEX_REPLY debug1: Host '****' is known and matches the RSA host key. debug1: Found key in /var/root/.ssh/known_hosts:4 debug1: ssh_rsa_verify: signature correct debug1: SSH2_MSG_NEWKEYS sent debug1: expecting SSH2_MSG_NEWKEYS debug1: SSH2_MSG_NEWKEYS received debug1: Roaming not allowed by server debug1: SSH2_MSG_SERVICE_REQUEST sent debug1: SSH2_MSG_SERVICE_ACCEPT received debug1: Authentications that can continue: publickey debug1: Next authentication method: publickey debug1: Trying private key: /var/root/.ssh/id_rsa debug1: Trying private key: /var/root/.ssh/id_dsa debug1: No more authentication methods to try.
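
    Two things stand out in that command line: the .pem file is never actually passed to ssh (it needs -i, so ssh only tries the default identities under /var/root/.ssh, exactly as the debug output shows), and on stock Ubuntu AMIs the login user is ubuntu rather than ec2-user. A sketch of the usual invocation (host name and key file name are placeholders):

      chmod 400 mykey.pem     # ssh refuses keys that are world-readable
      ssh -v -i mykey.pem ubuntu@ec2-xx-xx-xx-xx.compute-1.amazonaws.com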

    Read the article

  • How to debug silent hang on shutdown of Solaris 10?

    - by jblaine
    We're experiencing a mysterious hang on shutdown of a newly-imaged Oracle/Sun Solaris 10 SPARC box. It is repeatable (in the same spot ... from what we can tell). We let it try to work itself out multiple times for 5-10 minutes and it never progressed. I've never seen this happen before. The last thing displayed on the console is that syslogd was sent signal 15. Prior to us disabling snmpdx on the box, the last thing on the console was that snmpdx was sent signal 15 (after syslogd was sent signal 15). While very rare to find, in Solaris days past, I'd have a better idea from experience where the problem might be, and could then narrow things down further with silly (but effective) debugging echo statements in /etc/*.d scripts. With SMF in the picture, I'm not really quite sure where to start. We forced a crash dump via sync at the {ok} prompt for later analysis, and then let the box come up because it's a production server and our scheduled outage window was closing. /var/adm/messages shows nothing of use. How would you debug this situation? mdb ps of the savecore shows the following processes were running at hang time (afsd is the OpenAFS client and that many are expected): > ::ps S PID PPID PGID SID UID FLAGS ADDR NAME R 0 0 0 0 0 0x00000001 00000000018387c0 sched R 108 0 0 0 0 0x00020001 00000600110fe010 zpool-silmaril-p R 3 0 0 0 0 0x00020001 0000060010b29848 fsflush R 2 0 0 0 0 0x00020001 0000060010b2a468 pageout R 1 0 0 0 0 0x4a024000 0000060010b2b088 init R 1327 1 1327 329 0 0x4a024002 00000600176ab0c0 reboot R 747 1 7 7 0 0x42020001 0000060017f9d0e0 afsd R 749 1 7 7 0 0x42020001 00000600180104d0 afsd R 752 1 7 7 0 0x42020001 0000060017cb44b8 afsd R 754 1 7 7 0 0x42020001 0000060017fc8068 afsd R 756 1 7 7 0 0x42020001 0000060017fcb0e8 afsd R 760 1 7 7 0 0x42020001 00000600177f4048 afsd R 762 1 7 7 0 0x42020001 000006001800f8b0 afsd R 764 1 7 7 0 0x42020001 000006001800ec90 afsd R 378 1 378 378 0 0x42020000 0000060013aee480 inetd R 7 1 7 7 0 0x42020000 0000060010b28008 svc.startd R 329 7 329 329 0 0x4a024000 00000600110ff850 sh Z 317 7 317 317 0 0x4a014002 0000060013b3a490 sac
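
    Since a forced crash dump already exists, one place to start is poking at it with mdb to see what the surviving processes were blocked on; a rough sketch (file names are whatever savecore wrote, typically unix.0/vmcore.0 under /var/crash/<hostname>, and the dcmd names are from the Solaris Modular Debugger Guide):

      cd /var/crash/$(hostname)
      mdb unix.0 vmcore.0
      # then, at the mdb prompt:
      #   ::status                                  - dump summary
      #   ::ps                                      - process list (as above)
      #   0t1327::pid2proc | ::walk thread | ::findstack
      #                                             - kernel stacks for e.g. the hung reboot process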

    Read the article

  • Homebrew large data cluster access for 2 user levels?

    - by Yegor
    The title probably makes little sense, so here is an example. I have a file hosting site that serves a large amount of semi-randomly accessed files. The setup is as follows: High horsepower front-end +DB server that also does encoding for files that need encoding Fresh file server, which stores newly uploaded content, that's probably (and usually) rapidly accessible, which has 500GB of raided SSD storage, that can push over 3GBit of traffic. 3 cheap node servers, containing 2 x 750GB SATA drives in raid1, where files older than 2 weeks are archived, from the SSD server (mentioned above). Files on each server are accessed via subdomains (via modsec) in a straightforward fashion (server1.domain.com, server2.domain.com, etc) Where I have the problem is this. I introduced a "premium" service where people pay a small fee every month, and get ad-free, quick access to stuff on the site. Once they are logged in, they access the same files via premium.server1.domain.com via a different modsec script, with a different pass phrase. That all works fine and dandy.... except the cheap node servers are all IO bound, so accessing the files on them via a different, unsaturated network makes no difference, since it cannot read off the drive fast enough. What would be a good way to make files on the site be accessible via 2 different network routes, 1 of which will be saturated (the "free network") while all other files are on an un-saturated "premium" network?

    Read the article

  • How to calculate unweighted averages in Excel PivotTable?

    - by yonatron
    I often make PivotTables in which each row contains a number of per-person average measures. I then want to look at the unweighted column average for each measure, and usually make some kind of chart from these. Because my individual cells are often averaged from different numbers of data points, the Grand Total row ends up being a weighted average, which I’m not interested in. So I usually make my own average row a few rows above the table to use for my charts. That’s not too much work, but there’s another problem. I often add a few more people’s worth of data to the PivotTables’ source, then refresh the tables. This means my average row needs to be updated to encompass more rows from the PivotTable. Not a huge deal with one table, but when I have lots of them across lots of sheets, I have to do find/replace on a whole bunch of formulas. So: is there a way to automatically get unweighted column averages in a PivotTable, such that when the table is refreshed, the averages don’t change locations and encompass the newly added (or removed) data? Thanks

    Read the article

  • Install and enforce a scheduled task across a Windows domain

    - by Ricket
    We have a small domain of about 70 Windows computers (XP and 7). We want to schedule a command (an update mechanism) to run on all computers periodically, and we want the task to run regardless of the computer's connection to our network (i.e. the task should run even on a laptop that isn't connected to our VPN). We have a Microsoft System Center Essentials 2010 server so that might come in handy. The options I see are these: Do it completely manually. Install the scheduled task by hand or remotely using psexec (and the at command?) for each computer in our network. Enforce that newly imaged computers should have this task installed on them before being deployed to the employee, or the task should be in the image. High initial cost (having to do this for each of 70 computers) but building it into the image might work... But there is some maintenance in making sure the task is added to everything. And I fear that a year or two down the road, we will have forgotten about it or gotten sloppy or had new IT employees who miss this step and some computers won't have the task. Having one of our servers run a script that loops through all computers and psexec's the command on each computer in the network -- it would only run on running, connected computers, so this solution wouldn't work. I suspect SCE could do something like this too, but again this is not a good solution. Neither of these are ideal, and I'm certain there is a better way to do it -- right? What is the best way to accomplish this task?

    Read the article

  • Has anyone else experienced page fault crashes with Snow Leopard on MacBook Pro?

    - by BruceMartin
    I bought a Macbook Pro computer on Sept 3rd from MacMall. As I was using it to learn Snow Leopard (this is my first Mac, I am a long time Windows developer), it would crash every one or two hours. After calling Apple support, I dropped it off at the Apple store for diagnostic testing and repair. When I picked up the computer from Apple, they told me that it did not crash while they had it. They suspected a software problem, so they had done a fresh install of Snow Leopard for me. At home I went through the start up procedure with the newly installed Snow Leopard. Then I downloaded the iPhone SDK, and the computer crashed again while I was away waiting for the download to finish. I was using a USB mouse, which was the only device attached. No other software installed. I was presented with a dump that mentions terms like "panic", "Kernel trap", and "page fault". Does anyone have any idea what this problem might be? I really can not use this MacBook under these circumstances.

    Read the article

  • Can't install xclip on Ubuntu 10.10

    - by wildster
    I'm trying to load an SSH key to Github from a new machine and this command is not working: sudo apt-get install xclip Reading package lists... Done Building dependency tree Reading state information... Done Package xclip is not available, but is referred to by another package. This may mean that the package is missing, has been obsoleted, or is only available from another source E: Package xclip has no installation candidate when I try: sudo aptitude install xclip Reading package lists... Done Building dependency tree Reading state information... Done Reading extended state information Initializing package states... Done No candidate version found for xclip No candidate version found for xclip The following partially installed packages will be configured: synaptics-dkms 0 packages upgraded, 0 newly installed, 0 to remove and 0 not upgraded. Need to get 0B of archives. After unpacking 0B will be used. Writing extended state information... Done Setting up synaptics-dkms (1.1.1) ... Loading new synaptics-1.1.1 DKMS files... Error! Cannot locate /usr/src/synaptics-1.1.1.dkms.tar.gz. File does not exist. dpkg: error processing synaptics-dkms (--configure): subprocess installed post-installation script returned error exit status 2 Errors were encountered while processing: synaptics-dkms E: Sub-process /usr/bin/dpkg returned an error code (1) A package failed to install. Trying to recover: Setting up synaptics-dkms (1.1.1) ... Loading new synaptics-1.1.1 DKMS files... Error! Cannot locate /usr/src/synaptics-1.1.1.dkms.tar.gz. File does not exist. dpkg: error processing synaptics-dkms (--configure): subprocess installed post-installation script returned error exit status 2 Errors were encountered while processing: synaptics-dkms Reading package lists... Done Building dependency tree Reading state information... Done Reading extended state information Initializing package states... Done Any idea how I can install this? Mucho thanks in advance
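
    Two separate problems are tangled together here: xclip lives in the universe component, which apparently is not enabled (hence "no installation candidate"), and the half-installed synaptics-dkms package makes every apt run end in an error. A rough sequence, assuming 10.10 (maverick) sources are still reachable:

      # Remove the broken package that aborts every apt run
      # (dpkg is the blunter fallback if apt-get refuses)
      sudo apt-get remove --purge synaptics-dkms
      # sudo dpkg --remove --force-remove-reinstreq synaptics-dkms

      # Make sure the universe component is enabled in /etc/apt/sources.list, e.g.:
      #   deb http://archive.ubuntu.com/ubuntu maverick universe
      sudo apt-get update
      sudo apt-get install xclip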

    Read the article

  • Programs don't have permissions when using absolute path

    - by Markos
    I have asked this on askubuntu but didn't get a single response in days, so I will try it here. I have a directory structure like this: /path/dir1 - all users in group1 must have rwx permissions, including subdirs and newly created dirs /path/dir1/dir2 - also users in group2 must have rwx permissions So what I tried is that I used ACL. getfacl /path/dir1 # file: /path/dir1 # owner: root # group: nogroup user::rwx group::--- group:group1:rwx mask::rwx other::--- default:user::rwx default:group::--- default:group:group1:rwx default:mask::rwx default:other::--- getfacl /path/dir1/dir2 # file: /path/dir1/dir2 # owner: root # group: nogroup user::rwx group::--- group:group1:rwx group:group2:rwx mask::rwx other::--- default:user::rwx default:group::--- default:group:group1:rwx default:group:group2:rwx default:mask::rwx default:other::--- That shows that I have granted rwx to group1 in /path/dir1 and rwx to group1 and group2 in /path/dir1/dir2. Now it gets interesting. Let's assume, that user2 is member of group2. If I issue commands as user2: cd /path/dir1/dir2 mkdir foo Then the folder is successfully created. However, if I do this: mkdir /path/dir1/dir2/foo I get permission denied error. I have tried extensively to resolve the problem. What I have found is that ACL is to blame. If I add permissions to group2 in /path/dir1 it starts to work. Also if I completely remove /path/dir1 ACL it starts to work. Obviously I am missing something VERY basic. I don't have much experience with linux, but this is a no-brainer on Windows. I have spent way too many hours to resolve this basic requirement. If you need more information, I will try to update the question, so feel free to ask!
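
    What usually bites in this situation is directory traversal: to create /path/dir1/dir2/foo by absolute path, user2 needs execute (search) permission on every path component, including /path/dir1, where the ACL above grants nothing to group2. A minimal sketch of granting group2 traverse-only access on the parent (no read, no write):

      # Let group2 pass through /path/dir1 without listing or modifying it
      setfacl -m g:group2:--x /path/dir1

      # Verify: a group:group2:--x entry should appear, with the mask still rwx
      getfacl /path/dir1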

    Read the article

  • How can one make vim change terminal colors?

    - by amn
    I am using command line vim running from an xterm (which runs sh). I have color in vim according to a color scheme I like. The problem is, as usual with 256-color terminals and truecolor color schemes, colors are wrong. Now, I know I can do a gazillion things to fix this, including installing gvim, but I like my terminal. In fact, using xrdb [-merge] .Xresource file, I now actually have xterm override the color values, and the theme now looks perfect. Since I may be switching to another theme, I need some workflow to have vim actually do what xrdb does - to reset the terminal color palette. Because right now I have to reset color values with xrdb ... first, then launch another xterm to actually use these values, then launch vim from that newly opened xterm to have the exact colors. The way I understood it is that the vim color scheme, just like any other terminal application, uses colors by referencing their ids, and X resources set the values themselves. I think I saw somewhere on the Internet that terminal control character sequences can reset actual color values, in fact, I am sure they can - I managed to set my terminal background color at runtime. How would I make vim execute these sequences to match values for the color scheme? And is there any reference to these control sequences, as part of any standard?
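
    xterm (and most xterm-compatible terminals) will redefine palette entries at runtime via OSC 4 escape sequences, documented in xterm's ctlseqs reference under "Operating System Commands"; that is effectively what xrdb plus a fresh xterm achieves statically. A small sketch of resetting one entry from the shell; the same strings can be emitted from vim with :!printf or a colorscheme autocmd (the color value is a placeholder):

      # Set palette index 1 (red) to #cc241d in the running terminal
      printf '\033]4;1;rgb:cc/24/1d\007'

      # Put index 1 back to the terminal's default
      printf '\033]104;1\007'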

    Read the article

  • LG LW20 Express won't boot after hdd replace

    - by Mika
    My old laptop (LG LW20 Express) got an hdd failure and I replaced the hdd. Now the laptop won't boot from cd or usb. I'm trying to install ubuntu on it. When I turn the laptop on it shows me the startup screen, but when it should be time to load the operating system it just gives a black screen and starts over. This loop continues until I shut down the laptop. I created the usb boot drive following this guide https://help.ubuntu.com/community/Installation/FromUSBStick/ I used my boot cd to install ubuntu on this machine I'm using right now. So at least the cd should work. From the BIOS I can see that my newly installed hdd is recognized and put as a secondary master. Also the cd and removable media are in the boot list before hdd. The laptop runs pretty hot. The fan is at full speed pretty soon after the laptop is turned on. Earlier I suspected that it was the almost broken hdd that produced that heat, but there obviously is something else also. Any ideas what to check?

    Read the article

  • Setting up SSL on JBoss 5

    - by socal_javaguy
    How can I enable SSL on JBoss 5 on a Linux (Red Hat - Fedora 8) box? What I've done so far is: (1) Created a test keystore. (2) Placed the newly generated server.keystore in $JBOSS_HOME/server/default/conf (3) Made the following change in the server.xml in $JBOSS_HOME/server/default/deploy/jbossweb.sar to include this: <!-- SSL/TLS Connector configuration using the admin devl guide keystore --> <Connector protocol="HTTP/1.1" SSLEnabled="true" port="8443" address="${jboss.bind.address}" scheme="https" secure="true" clientAuth="false" keystoreFile="${jboss.server.home.dir}/conf/server.keystore" keystorePass="mypassword" sslProtocol = "TLS" /> (4) The problem is that when JBoss starts it logs this exception (during start-up) (but I am still able to view everything under http://localhost:8080/): 03:59:54,780 ERROR [Http11Protocol] Error initializing endpoint java.io.IOException: Cannot recover key at org.apache.tomcat.util.net.jsse.JSSESocketFactory.init(JSSESocketFactory.java:456) at org.apache.tomcat.util.net.jsse.JSSESocketFactory.createSocket(JSSESocketFactory.java:139) at org.apache.tomcat.util.net.JIoEndpoint.init(JIoEndpoint.java:498) at org.apache.coyote.http11.Http11Protocol.init(Http11Protocol.java:175) at org.apache.catalina.connector.Connector.initialize(Connector.java:1029) at org.apache.catalina.core.StandardService.initialize(StandardService.java:683) at org.apache.catalina.core.StandardServer.initialize(StandardServer.java:821) at org.jboss.web.tomcat.service.deployers.TomcatService.startService(TomcatService.java:313) I do know that there's more to be done to enable full SSL client authentication....
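
    "Cannot recover key" from JSSE almost always means the key entry inside the keystore carries a different password than the keystore itself; the connector's keystorePass is used for both unless a separate key password is configured. A sketch of making the two match with keytool (the alias name is an assumption; list first to find yours):

      # List entries to find the alias of the key pair
      keytool -list -keystore $JBOSS_HOME/server/default/conf/server.keystore

      # Change the key's password so it equals the keystore password
      keytool -keypasswd -alias mykey -new mypassword \
              -keystore $JBOSS_HOME/server/default/conf/server.keystore

    Keeping the two passwords identical is the simplest fix; the connector does also accept a separate key-password attribute if they must differ.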

    Read the article

  • dpkg broken while upgrading Debian Etch to Lenny

    - by artvolk
    Good day! While trying to upgrade a box to lenny it seems I've broken things. It upgraded libc and glib; after that, dpkg seems to be broken. I can run apt-get, but it gets a segmentation fault from dpkg: # apt-get -f install Reading package lists... Done Building dependency tree... Done 0 upgraded, 0 newly installed, 0 to remove and 316 not upgraded. 9 not fully installed or removed. Need to get 0B of archives. After unpacking 0B of additional disk space will be used. /bin/sh: line 1: 4606 Segmentation fault /usr/sbin/dpkg-preconfigure --apt E: Sub-process /usr/bin/dpkg received a segmentation fault. I can login via SSH but even ls is not working: # ls Segmentation fault Is there anything I can do remotely via SSH? # ldd /bin/ls linux-gate.so.1 => (0xffffe000) librt.so.1 => /lib/tls/i686/cmov/librt.so.1 (0xb7fc8000) libacl.so.1 => /lib/libacl.so.1 (0xb7fc2000) libselinux.so.1 => /lib/libselinux.so.1 (0xb7fac000) libc.so.6 => /lib/i686/cmov/libc.so.6 (0xb7e51000) libpthread.so.0 => /lib/tls/i686/cmov/libpthread.so.0 (0xb7e3f000) /lib/ld-linux.so.2 (0xb7fd8000) libattr.so.1 => /lib/libattr.so.1 (0xb7e3b000) libdl.so.2 => /lib/i686/cmov/libdl.so.2 (0xb7e37000) libsepol.so.1 => /lib/libsepol.so.1 (0xb7df6000) It seems I've temporarily fixed it with: # touch /etc/ld.so.nohwcap From here: http://saintaardvarkthecarpeted.com/blog/archive/2005/08/_etc_ld_so_nohwcap.html
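
    The /etc/ld.so.nohwcap trick makes the dynamic linker skip the optimized (i686/tls) libc variants, which is why binaries stopped segfaulting; the upgrade itself is still only half done. A cautious way to continue over the same SSH session, sketched on the assumption the box is otherwise intact:

      # Keep the workaround in place until libc6 is fully upgraded
      touch /etc/ld.so.nohwcap

      # Finish configuring whatever dpkg left half-installed, then resume
      dpkg --configure -a
      apt-get -f install
      apt-get dist-upgrade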

    Read the article

  • Losing Windows Authentication intermittently

    - by Mark Robinson
    I'm running a website on IIS6 / Server 2003 which uses Integrated Windows Authentication on a local intranet. I can browse to the site but get intermittent "Object null" errors when calling the following C# code which is called on every request: .... GetUserIdFromPrincipal(User) .... public static string GetUserIdFromPrincipal(IPrincipal principal) { return principal.Identity is WindowsIdentity ? (principal.Identity as WindowsIdentity).User.Value : principal.Identity.Name; } (Apologies for the code sample, but I was torn between SF and SO and think this is more to do with server config.) So as the error is intermittent, clearly Windows Auth is working on some level, but after navigating around the site for several clicks I get the null reference error, meaning IPrincipal is null (I thought this should never be null in ASP.NET). The error only happens on this newly built VM. The code is fine on other machines. Does IIS request the Windows Auth details on each request? What would cause such an intermittent problem? Any help or suggestions would be much appreciated.

    Read the article
