Search Results

Search found 7583 results on 304 pages for 'roger guess'.

  • Run mplayer from bash in background without extra bash

    - by Emanuel Berg
    I would like to watch a movie with mplayer from bash in the background, like I do with all programs, and there have never been any problems: mplayer Kick* & if you'd like to see Kickboxer, for example. But this doesn't bring up the window; instead it says the process is stopped. I can bring the movie window up with fg mplayer, but then the CLI is unavailable. (This is -- as far as I can see anyway -- equivalent to mplayer Kick*.) I'm able to work around the problem like this: $(mplayer Kick*) & But then I get two extra bashes (I see this with ps). It is not really a problem as those close down when I Alt-F4 the movie, but it is still undesirable. I guess I'm most annoyed with having to type that extra stuff, so if you come up with an alias or function, that would be OK too. Although, it wouldn't hurt me to learn what's going on. Edit: Hm, it doesn't even seem to work the way I said. The workaround is not reliable. Forget about it.
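
    The stop happens because mplayer reads keyboard commands from stdin, and a background job that reads from the terminal gets suspended with SIGTTIN. A minimal function sketch that detaches stdin (the name mp is made up):

        # detach stdin so the backgrounded mplayer is never stopped
        # waiting for terminal input; console output is silenced too
        mp() {
            mplayer "$@" < /dev/null > /dev/null 2>&1 &
        }
        # usage: mp Kick*

    mplayer's -noconsolecontrols option does much the same thing by telling it not to read commands from stdin at all.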

    Read the article

  • Connecting Adium to Google Talk with a 2-factor authentication account isn’t working

    - by Robin
    Anyone else having this problem? After turning on 2-factor authentication on my Google Account I stopped being able to log in through Adium (Mac IM client that uses Pidgin's libpurple for IM). Obviously you need to generate an application-specific password, but these won't let me log in. Application-specific passwords work with other applications (e.g. Reeder for feeds and calendaring on my phone). Google specifically mentions Adium in their examples of setting up an application password for Google Talk, so I doubt it's a generic Adium problem. I can still access Google Talk for this account if I use a talk widget on a Google website (Plus, or iGoogle for example). My bug report to Adium, including a connection log file, is up on their Trac: http://trac.adium.im/ticket/15310 (no activity there, though). I also asked around in their IRC channel but no-one else could replicate the problem. If I had to guess, I'd think it was a consequence of me not having a GMail account associated with my Google account. I don't see exactly why that would cause it, but it seems like a fairly unusual setup that might not have been tested for.

    Read the article

  • I can't get rid of apache2

    - by DaNieL
    It's obvious that I am a newbie with server administration; my goal is to reach the knowledge needed to work with web services. I played with my Debian server, and I messed up apache2. Now I want to completely* remove it from the server and then reinstall it as new. *by completely I really mean completely: logs, configurations, settings, everything! I followed the steps suggested by freedom_is_chaos in this answer, and I guess apache2 is no longer installed, because if I do apt-get remove apache2 I get this: Reading package lists... Done Building dependency tree Reading state information... Done Package apache2 is not installed, so not removed 0 upgraded, 0 newly installed, 0 to remove and 11 not upgraded. Then I rebooted the server, and: # netstat -plant Active Internet connections (servers and established) [...] tcp6 0 0 :::80 :::* LISTEN 3467/apache2 [...] WTF? Is apache2 still here? So it seems: # /etc/init.d/apache2 stop Stopping web server: apache2. But: # update-rc.d remove apache2 update-rc.d: /etc/init.d/remove: file does not exist So, what is happening to my server? How can I completely and truly remove apache2 from my server?
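
    The update-rc.d failure above is just reversed arguments: the script name comes before the action. A plausible full-cleanup sequence, assuming the stock Debian packages (package names vary by release); if netstat still shows apache2 listening after the package is gone, check which apache2 for a copy installed from source:

        # purge the packages together with their config files
        apt-get purge apache2 apache2-utils apache2.2-common
        apt-get autoremove --purge
        # wipe what purge leaves behind
        rm -rf /etc/apache2 /var/log/apache2
        # correct argument order: name first, then the action
        update-rc.d -f apache2 remove
        # reinstall fresh
        apt-get install apache2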

    Read the article

  • apache Client Certificate Authentication errors: Certificate Verification: Error (18): self signed certificate

    - by decoy
    So I have been following instructions on setting up Client Certificate Authentication in Apache2 w/ mod_ssl. This is solely for the purpose of testing an application against CAA, not for any sort of production use. So far I've followed http://www.impetus.us/~rjmooney/projects/misc/clientcertauth.html for advice on generating my CA, server, and client encryption information. I've put all three of them into /etc/ssl/ca/private. I've set up the following additional directives in my default_ssl site file: <IfModule mod_ssl.c> <VirtualHost _default_:443> ... SSLEngine on SSLCertificateFile /etc/ssl/ca/private/server.crt SSLCertificateKeyFile /etc/ssl/ca/private/server.key SSLVerifyClient require SSLVerifyDepth 2 SSLCACertificatePath /etc/ssl/ca/private SSLCACertificateFile /etc/ssl/ca/private/ca.crt <Location /> SSLRequireSSL SSLVerifyClient require SSLVerifyDepth 2 </Location> <FilesMatch "\.(cgi|shtml|phtml|php)$"> SSLOptions +StdEnvVars </FilesMatch> <Directory /usr/lib/cgi-bin> SSLOptions +StdEnvVars </Directory> ... </VirtualHost> </IfModule> I've installed the p12 file into Chrome, but when I go to visit https://localhost, I get the following errors. Chrome: Error 107 (net::ERR_SSL_PROTOCOL_ERROR): SSL protocol error. Apache: Certificate Verification: Error (18): self signed certificate. If I had to guess, one of my directives is not set up right to load and verify the p12 w/ my self-created CA. But I can't for the life of me figure out what it is. Would anyone have more experience here who could point me in the right direction?
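
    A quick way to test the pieces outside the browser; the paths mirror the directives above, and the client file names are assumptions:

        # does the client cert actually chain to the CA Apache trusts?
        openssl verify -CAfile /etc/ssl/ca/private/ca.crt client.crt
        # exercise the TLS handshake with the client cert, watching the verify output
        openssl s_client -connect localhost:443 \
            -cert client.crt -key client.key -CAfile /etc/ssl/ca/private/ca.crt
        # SSLCACertificatePath only works if the directory has hash symlinks
        c_rehash /etc/ssl/ca/private

    Error 18 is OpenSSL's depth-zero self-signed certificate error: the presented client certificate looks self-signed and is not matched by anything in the configured CA file, so a mismatch between the cert that signed the p12 and ca.crt is a likely place to look.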

    Read the article

  • Intel 1.83 Mac Mini upgraded to 1.5 gig for Snow Leopard

    - by Paula
    Even though I've upgraded countless video cards, RAM, hard drives, motherboards in PCs... this will be my first Mac mini RAM upgrade. I've watched the classic "putty knife" video. (Absurd method... but I guess it's what I'm stuck with.) I have a 1.83 Intel-based Mac mini from 2007-2008, with 1 gig of RAM. (Two 512 sticks) Can I install 1 gig + 512? (Or do I have to throw away my existing sticks and buy two 1 gig sticks?) This old machine is rarely used... so I want to spend the absolute minimum on this RAM upgrade. We ONLY use it to run Xcode... nothing more. But we wanted to increase the RAM so we can install Snow Leopard. I have no idea how many pins the memory has. I printed out over FORTY pages of specs about this machine from "About this Mac"... but didn't find what I needed. Does this sound right: DDR2 SDRAM (but no mention of SO-DIMM) 667MHZ (but don't know if I can use faster also) Pin count: Unknown Computer model number: Unknown (But I "think" it's an MB138/A) PC2 RAM (unknown... not mentioned) 5300 (unknown... not mentioned) Mfg date: (unknown... not mentioned) Number of slots: (unknown... not mentioned) Laptop or desktop RAM: (unknown... not mentioned)

    Read the article

  • How to make a diff of a BIOS or back up/restore all BIOS settings

    - by sfonck
    Hi, I'm using a Dell M90 Precision laptop which has an NVidia Quadro FX 2500M graphics card and is running Windows XP. The laptop had been running fine - but a few weeks ago the screen went 'white' - restarted computer - BIOS and startup screens show weird green dots and stripes, normal startup only shows a black screen... only VGA mode works to display something. I've been trying to remove and reinstall the correct drivers downloaded from Dell's website - no solution. I gave up and reinstalled XP - everything was working perfectly again. 2 weeks later - again the white screen - tried everything again (flashing a new BIOS also - nothing works). Reinstalled XP - everything was working again, so I made a DriveSnapShot of the partition. Today - again the 'white screen'. OK, no problem... I was thinking all I needed to do was to restore the DriveSnapShot backup... A few minutes later the backup is restored... but guess what: the video driver does not work correctly... As the DriveSnapShot restored the complete partition, as it was at the time everything was working perfectly, this would mean my driver problems are due to 'settings' in the BIOS or on the graphics card itself + these 'settings' can get overridden by doing a new XP install.... I'm out of options, can somebody help me find a solution for this problem: Is there some way to back up and restore a BIOS after seeing some problems? Is there some way to know what is causing this problem, like a BIOS diff utility? Thanks!

    Read the article

  • Goldtouch USB Keyboard reverses keystrokes in fast typing -- expected?

    - by Justin Grant
    I am running into an odd keyboard problem: some key combinations end up reversed (e.g. "pl" ends up being emitted as "lp") when I'm typing quickly. The problematic ones are the key combos I hit with two adjacent fingers on my right hand -- in other words, the combos I can hit the fastest. No idea how fast "fastest" is, but I'd guess around a 50-150 msec gap between them. I'm trying to track down whether this represents a failed keyboard, an inherent limitation of my Goldtouch USB keyboards, or a software problem on my Windows 7 Lenovo T500. I use a PS/2 version of the same Goldtouch keyboard at home with no problems. I've tried another USB keyboard with my laptop and can't repro the problem. I've also used this keyboard on other laptops without a problem. According to this SU thread, USB keyboards have higher latency than PS/2 keyboards -- up to 30 msecs. I find it hard to imagine that I can type key combos faster than 50 msecs, probably more like 100-150. Has anyone encountered this problem with this or another keyboard? If so, how did you fix it? Any idea if there's a "keyboard log" or some way to diagnose the problem inside Windows?

    Read the article

  • Help configuring Mercury mail or similar with XAMPP to send e-mail outside of localhost

    - by user291040
    I'm building a PHP/MySQL driven website for my department at work (installed via XAMPP). I need to be able to send mail to outside e-mail addresses (e.g., Yahoo, Hotmail, etc.) using the PHP mail() function. As I see it I have two solutions: Configure the SMTP directive in php.ini to the server running at my work. Configure/run a mail server that can send e-mails outside of localhost (I'm trying Mercury because it comes installed with XAMPP). Here are the problems I've come up against: I took a guess at our SMTP server name, and when calling PHP mail(), I get the error SMTP server response: 530 5.7.1 Client was not authenticated I can't be sure, however, that the SMTP server name is correct (I can't get help from our IT guys because of politics). I have tried to use Mercury mail. Mercury seems to be picking up the request, but it doesn't want to forward the e-mail to the outside. I keep getting a Temporary error 240 (temporary MX resolution error). I've searched high and low but still can't find a definitive answer on how to send e-mails outside of localhost. Any help is greatly appreciated.
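
    The 530 5.7.1 response means the server demands SMTP authentication, which the plain SMTP directive in php.ini cannot supply. A hedged way to check whether the guessed server exists and what it offers (hostname is a placeholder):

        # plain connection on port 25
        telnet smtp.example.com 25
        # or a TLS submission port
        openssl s_client -starttls smtp -connect smtp.example.com:587
        # once connected, type:  EHLO test
        # an AUTH line in the reply lists the mechanisms the server accepts

    If authentication is required, a PHP mailer library that speaks SMTP AUTH (rather than bare mail()) is the usual way around it.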

    Read the article

  • PHP-APC Installation

    - by Leo
    Trying to get my head around the way to install APC cache on PHP 5.3.13. That's a VPS with Apache, configured preferably through whm/cpanel (although not only). I read a bunch of articles where it was suggested to use FastCGI with APC, as suPHP doesn't do well with opcode caching, and fcgid_module doesn't do it right for APC either. Noted that fcgid_module is a newer package than FastCGI and that's what whm/cpanel installs for you, but OK, that can be solved I guess. Then I'm reading that php-fpm is a much better alternative to manage the PHP processes, especially for APC. OK. Then I realised that php-fpm is included in the PHP core since 5.3 and got confused. Does that mean I don't have to use FastCGI/fcgid_module (and what should I use instead of them - mod_php or cgi?)? Or does that mean that I still need to get the older FastCGI module, and configure it to use one process per user (or just one process?)? Or would fcgid_module work as well? And how bad would it be just to go with mod_php/APC to avoid the trouble of installing php-fpm and FastCGI (whm/cpanel supports neither), given that Varnish would serve most of the static content anyway - no PHP process needs to be created for static content. Any examples of their FastCGI/fcgid_module/php-fpm/APC configurations would be greatly appreciated as well.
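
    Before changing anything it helps to confirm which handler is actually serving PHP right now; a small check sketch, assuming shell access:

        # which PHP-related handler modules has Apache loaded?
        apachectl -M 2>/dev/null | grep -Ei 'php|fcgid|fastcgi|suphp'
        # the authoritative "Server API" for web requests comes from a
        # phpinfo() page, not from the CLI binary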

    Read the article

  • PHP CLI not respecting memory limit in php.ini

    - by user13743
    I am using drush, which is a command-line PHP app to manage a Drupal website. I am running a command to import a lot of data, which is causing me to hit PHP's memory limit. PHP Fatal error: Allowed memory size of 536870912 bytes exhausted ... Which is 512MB if I'm doing the math correctly (536870912 / 1024 / 1024 = 512). I've changed the directive in the php.ini that drush uses: $> drush status ... PHP configuration : /etc/php5/cli/php.ini $> grep memory /etc/php5/cli/php.ini ; Maximum amount of memory a script may consume (128MB) ; http://php.net/memory-limit memory_limit = 1024M But I'm still hitting the 512 MB limit! I am running in a virtual machine, whose memory settings I changed from 512 to 1024 MB of RAM to allow drush to run. $> free -m total used free shared buffers cached Mem: 1010 578 431 0 14 392 -/+ buffers/cache: 172 837 Swap: 382 0 382 So it says it has some 431 MB free, now that I've bumped the vm up to 1024. I guess half the memory is being used to run the GUI, but I don't understand how the GUI was running okay when the vm had 512 MB of RAM. Why is the PHP CLI still hitting a 512 MB memory limit? If it was hitting a system memory limit, shouldn't it die around 431 MB, which is what the free command says is available?
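
    More than one ini file (or the application itself) can set the limit, so it is worth checking what the CLI really ends up with; the drush arguments below are placeholders:

        # the limit this PHP binary actually loads
        php -r 'echo ini_get("memory_limit"), PHP_EOL;'
        # every ini file the CLI parses, not just the main one
        php --ini
        # force a limit for a single run, bypassing ini files
        # (works when drush's entry point is a PHP script)
        php -d memory_limit=1024M $(which drush) some-import-command

    Drupal sites also commonly call ini_set('memory_limit', ...) in settings.php, which overrides php.ini no matter what it says; that would explain a stubborn 512M.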

    Read the article

  • swapping or trashing with vast amounts of unmapped pagecache

    - by Marco
    I'm using kubuntu jaunty (i386 32bit), kernel 2.6.28-13-generic. I've 4Gb of RAM, of which only 3317Mb are seen by the system (I guess because of the 32bit system). I'm seeing that the pagecache utilization is continually growing, up to the point that the system is unusable (after a few days). This happens also when I don't do anything (all user applications closed and the bare minimum of services enabled). If enabled, the system starts to use swap space (using it all in the end). Even if swap is disabled, disk activity becomes continuous, with the system unresponsive. For example, right now the system is working (albeit a tad slow), with only Firefox and wing ide running, and I have 2Gb cached with only 45Mb mapped: $ free total used free shared buffers cached Mem: 3346388 3247328 99060 0 8416 2117980 -/+ buffers/cache: 1120932 2225456 Swap: 2144668 519448 1625220 $ cat /proc/meminfo MemTotal: 3346388 kB MemFree: 97128 kB Buffers: 7872 kB Cached: 2120224 kB SwapCached: 413860 kB Active: 2304596 kB Inactive: 865984 kB Active(anon): 2279168 kB Inactive(anon): 830236 kB Active(file): 25428 kB Inactive(file): 35748 kB Unevictable: 32 kB Mlocked: 32 kB HighTotal: 2492940 kB HighFree: 5456 kB LowTotal: 853448 kB LowFree: 91672 kB SwapTotal: 2144668 kB SwapFree: 1625244 kB Dirty: 84 kB Writeback: 0 kB AnonPages: 629304 kB Mapped: 45768 kB Slab: 45600 kB SReclaimable: 21756 kB SUnreclaim: 23844 kB PageTables: 4468 kB NFS_Unstable: 0 kB Bounce: 0 kB WritebackTmp: 0 kB CommitLimit: 3817860 kB Committed_AS: 3735020 kB VmallocTotal: 122880 kB VmallocUsed: 9352 kB VmallocChunk: 66600 kB HugePages_Total: 0 HugePages_Free: 0 HugePages_Rsvd: 0 HugePages_Surp: 0 Hugepagesize: 4096 kB DirectMap4k: 16376 kB DirectMap4M: 888832 kB If I try to drop the caches, little happens: # sync ; echo 3 > /proc/sys/vm/drop_caches ; free total used free shared buffers cached Mem: 3346388 3220580 125808 0 3020 2100600 -/+ buffers/cache: 1116960 2229428 Swap: 2144668 519356 1625312 Right now I've vm.swappiness = 5, but I've tried also with 0 and 1 (without noticeable differences). I've also tried vm.vfs_cache_pressure = 50 and 150 (again, no differences). As I said the pagecache eats all memory even with swapping turned off. What is happening? How to avoid this?
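
    Given that drop_caches barely moves the numbers, the memory may not be droppable file cache at all: Cached also counts tmpfs/shmem pages, and the meminfo above shows Active(anon) at about 2.2 GB with 413 MB of SwapCached. A couple of commands to see who owns the memory and whether the disk activity is swap traffic:

        # biggest resident processes first
        ps -eo pid,rss,vsz,comm --sort=-rss | head -15
        # si/so columns show pages swapped in/out per second
        vmstat 1 10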

    Read the article

  • PHP-FPM and APC for shared hosting?

    - by Tiffany Walker
    We are looking into finding a way to get APC to only create one cache per account/site. This can be done with FastCGI (last update 2006...) but with fcgid, APC will have to create multiple caches for multiple processes run by the same account. To get around this problem, we have been looking into PHP-FPM. The PHP process manager allows multiple PHP processes to share a single APC cache. But from what I have read (I hope I'm wrong), even if you create a pool per process, all sites across all pools will share the same APC cache. This brings us back to the same problem as with shared Memcached: it's not secure! On php-fpm's site I read that you can chroot php-fpm pools and define a specific UID and GID per pool... if this is the case then shouldn't APC have to use this user and not have access to other pools' cache? An article here (in 2011) suggests that you would need to run one process per pool, creating multiple launchers on different ports and different config files with one pool per config file: http://groups.drupal.org/node/198168 Is this still necessary? If so, what would be the impact of running say 800 processes of php-fpm? Would it be mainly memory? If so, how can I work out what the memory impact would be? I guess that it would be better to run php-fpm 800 times than to have accounts creating multiple APC caches for a single site? If on average an account creates a 50MB cache and creates 3 caches per account, that makes 150MB per account, which makes 120GB... However if each account uses on average only 50MB that would make 40GB. We will have at least 128GB of RAM on our next server, so 40GB is acceptable if running 800 x PHP-FPM does not create an overhead of more than 20GB! What do you think: is PHP-FPM the best way to go to provide a secure APC cache on shared hosting with a server that has a decent amount of memory? Or should I be looking at another system? Thanks!
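
    A minimal sketch of the one-master-per-account layout the linked article describes; every path, user name, and pm value here is an assumption:

        # each account gets its own php-fpm master, hence its own APC
        # shared-memory segment, isolated from other accounts
        cat > /etc/php-fpm/site1.conf <<'EOF'
        [global]
        pid = /var/run/php-fpm-site1.pid
        [site1]
        listen = /var/run/php-fpm-site1.sock
        user = site1
        group = site1
        pm = ondemand
        pm.max_children = 4
        EOF
        php-fpm --fpm-config /etc/php-fpm/site1.conf

    With pm = ondemand, idle accounts hold no worker processes, which is what keeps hundreds of masters from multiplying the memory bill by the worst case.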

    Read the article

  • How to make 7zip faster

    - by user34463
    I normally use WinRAR over 7-Zip simply because it's faster and only a little less efficient with compression. I did a few tests on different file types and sizes comparing the 7-Zip and WinRAR default settings at their normal compression and their best compression, and in a lot of cases WinRAR was 50% faster and in some it was actually 100% faster. But I do like FOSS more. So here are my questions: Is there a way to make 7-Zip speed up? I'd like it to at least be on par with RAR's speed. Is there a way to make recovery segments in 7-Zip like you can in RAR? I didn't see any, but I guess it could be a command line thing. I tested WinRAR and 7-Zip using the latest stable version of each (4.something with 7-Zip). Is the 9.x beta release noticeably faster at compression? I'm talking about faster at a comparable setting in WinRAR, not just lowering to bare minimum compression. If it matters, I use a quad core Intel i7 720 (1.6GHz)/(2.8GHz) with 4GB DDR3 RAM, and the 64-bit version of 7-Zip.
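
    A sketch of the command-line knobs that trade ratio for speed (archive and folder names are placeholders):

        # -mx picks the compression level (1 = fastest), -mmt turns on
        # multithreading
        7z a -mx=1 -mmt=on archive.7z somefolder/
        # the 9.x line adds LZMA2, which parallelizes far better on a quad core
        7z a -m0=lzma2 -mx=5 -mmt=4 archive.7z somefolder/

    On recovery segments: 7-Zip has no equivalent of RAR's recovery record, so the usual companion is external par2 files.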

    Read the article

  • Windows 7 - Windows Update won't update

    - by StickFigs
    I'm trying to update my Windows 7 Professional 32-bit edition, and when I tell Windows Update to scan for updates it fails with the error code 0x80096001. I checked out WindowsUpdate.log and it appears this is the problem: Validating signature for C:\Windows\SoftwareDistribution\WuRedir\9482F4B4-E343-43B6-B170-9A65BC822C77\muv4wuredir.cab: WARNING: Error: 0x80096001 when verifying trust for C:\Windows\SoftwareDistribution\WuRedir\9482F4B4-E343-43B6-B170-9A65BC822C77\muv4wuredir.cab WARNING: Digital Signatures on file C:\Windows\SoftwareDistribution\WuRedir\9482F4B4-E343-43B6-B170-9A65BC822C77\muv4wuredir.cab are not trusted: Error 0x80096001 How can I go about fixing this? It looks like it's just this one (corrupted?) file that's causing the problem. Thanks! UPDATE: Upon inspecting the file mentioned in the error message, it appears that the file does not exist! What does this mean and how do I get it back? UPDATE 2: OK, it appears that the file in question exists only for a split second while Windows Update is searching for (and failing to find) updates. So I guess the problem doesn't have to do with the file specifically then.

    Read the article

  • Disable raid member check upon mount to mount damaged nvidia raid1 member

    - by Halfgaar
    Hi, A friend of mine destroyed his Nvidia RAID1 array somehow and in trying to fix it, he ended up with a non-working array. Because of the RAID metadata, the actual disk data was stored at an offset from the beginning. I was able to identify this offset with dd and a hex editor, and then I used losetup to create a loop device with the proper offset, so that I could mount the partition. It was then that I ran into problems, namely that mount says: "mount: unknown filesystem type 'nvidia_raid_member'". I also had this when trying to mount a Linux MD component the other day, and because I can remember that doing that in the past worked, I surmised that it may be some kind of protection. I therefore booted an old SystemRescueCD and tried it there, which worked (because of the older version of mount/libc/kernel/whatever). I still need to try to get more data, and because I don't want to keep using that SystemRescueCD, I'd like to be able to mount the disk on my normal system. So, my question is: can the check for a disk being a raid member be disabled? I guess I could also zero out blocks that look like the raid block, but I'd rather not... I made an image of the disk with par2 data, so it's reversible, but still...
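
    The refusal comes from mount's blkid-based type detection, which sees the leftover nvidia_raid_member signature; naming the filesystem type explicitly sidesteps the guess. A sketch, with the offset, device, and type as assumptions:

        # attach the partition at the data offset found with dd/hex editor
        losetup -o 512 /dev/loop0 /dev/sdb1
        # an explicit -t skips the signature detection entirely
        mount -t ext3 -o ro /dev/loop0 /mnt/rescue

    Mounting read-only keeps the image-with-par2 fallback intact while poking around.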

    Read the article

  • "Could not claim interface on camera: -6" when trying to connect usb camera (Kinect)

    - by rzetterberg
    I have installed the freenect library from openkinect.org. With that library there is a demo application which you can run from the terminal to test out the Kinect. However when I run this command I get the following output: richard@behemoth:~$ sudo freenect-glview Kinect camera test Number of devices found: 1 Could not claim interface on camera: -6 Could not open device This particular error is thrown by the library libusb by the function libusb_claim_interface and the error -6 corresponds to the LIBUSB_ERROR_BUSY. So my guess is that it has something to do with mounting the usb, rather than specifically the freenect library or the Kinect itself. So my question is how can I find out what resource is using this interface and how can I free it so that I can access it? Edit: What I have tried so far (just to be sure): Rebooted Plugged-out, plugged-in Tried different usb ports Restarted udev Additional information that might be useful: /etc/fstab: # /etc/fstab: static file system information. # # Use 'blkid -o value -s UUID' to print the universally unique identifier # for a device; this may be used with UUID= as a more robust way to name # devices that works even if disks are added and removed. See fstab(5). # # <file system> <mount point> <type> <options> <dump> <pass> proc /proc proc nodev,noexec,nosuid 0 0 # / was on /dev/sda1 during installation UUID=1c73f217-ac8d-451b-8390-7a680628a856 / ext4 errors=remount-ro 0 1 # swap was on /dev/sda5 during installation UUID=bb49bd29-07ec-45a0-bbab-46fb8362b06b none swap sw 0 0 sudo uname -r: Linux behemoth 3.0.0-14-generic-pae #23-Ubuntu SMP Mon Nov 21 22:07:10 UTC 2011 i686 i686 i386 GNU/Linux cat /etc/lsb-release DISTRIB_ID=Ubuntu DISTRIB_RELEASE=11.10 DISTRIB_CODENAME=oneiric DISTRIB_DESCRIPTION="Ubuntu 11.10"
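
    LIBUSB_ERROR_BUSY usually means another kernel driver has already claimed the interface. On kernels of this vintage a plausible suspect is the gspca_kinect webcam driver binding to the camera before libfreenect can; a hedged check (the module name is assumed to exist on this kernel):

        # is a kernel driver holding the Kinect camera?
        lsmod | grep gspca
        sudo modprobe -r gspca_kinect
        # if that frees the camera, keep the module from loading at boot
        echo "blacklist gspca_kinect" | sudo tee /etc/modprobe.d/blacklist-kinect.conf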

    Read the article

  • Can a CNAME be a hostname

    - by pulegium
    This is a bit of a theological question, but nonetheless... So, a server has a hostname; let's say the fqdn is hostname.example.com (to be really precise about what I mean, this is the name that is set in /etc/sysconfig/network). The very same server has multiple interfaces on different subnets. Let's say the IPs are 10.0.0.1 and 10.0.1.1. Now the question is, is it theoretically (mind you, this is important, I know that practically it works, but I'm interested in a purely academic answer) allowed to have the following setup: interface1.example.com. IN A 10.0.0.1 interface2.example.com. IN A 10.0.1.1 hostname.example.com. IN CNAME interface1.example.com. OR should it rather be: hostname.example.com. IN A 10.0.0.1 interface2.example.com. IN A 10.0.1.1 interface1.example.com. IN CNAME hostname.example.com. I guess it's obvious which one makes more sense from the management/administration POV, but is it technically correct? The argument against the first setup is that a reverse lookup of 10.0.0.1 returns interface1.example.com and not what one might expect (i.e. the hostname: hostname.example.com), so the forward request and then subsequent reverse lookups would return different results. Now, as I said, I want a theoretical answer. Links to RFC sections etc, that explicitly allow or disallow use of a CNAME name as a hostname. If there's none, that's fine too, I just need to confirm. I failed to find any explicit statements so far, bar this book, where this situation is given as an example and implies that it can be done as one of the ways to avoid MX records pointing to a CNAME.

    Read the article

  • Oracle 11gR2: NLS_CHARACTERSET accidentally removed with an UPDATE query

    - by Marco Nätlitz
    Hi folks, I have a fresh installation of Oracle 11gR2 x64 on CentOS. After the installation I wanted to get productive and started to import my dumps. One of the dumps caused a character set error, so I tried to change the system's character set to the one specified in the dump. I ran a statement like this: UPDATE nls_database_parameters SET parameter='WS....' WHERE parameter='NLS_CHARACTERSET'; As you can see: I have written the value of the character set into the parameter column instead of the value column. I guess I was just thinking too much about the problem instead of checking what I was typing there. After the query the parameter NLS_CHARACTERSET is gone and the server reports that the character set is "(null)". I want to put the NLS_CHARACTERSET parameter back in the table but don't know how. If I try to do something like this INSERT INTO nls_database_parameters (PARAMETERS, VALUE) VALUES ("NLS_CHARACTERSET", "AL32UTF8"); I get the error: Error at command line:1 column:84 Error report: SQL error: ORA-00984: column not allowed here *Cause: *Action: 00984. 00000 - "column not allowed here" Do you have any idea how I can fix that? Thanks and best regards Marco
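
    The ORA-00984 comes straight from the quoting: in Oracle SQL, double quotes mark identifiers (column names), while string literals take single quotes. A safe way to check the current state and see the quoting rule in action; the sqlplus login is an assumption:

        sqlplus / as sysdba <<'EOF'
        -- single quotes = string literal; double-quoted "NLS_CHARACTERSET"
        -- would be parsed as a column name, hence ORA-00984
        SELECT parameter, value FROM nls_database_parameters
         WHERE parameter = 'NLS_CHARACTERSET';
        EOF

    NLS_DATABASE_PARAMETERS is a view over SYS.PROPS$, so even a correctly quoted INSERT is unlikely to be accepted there; repairing the row underneath is unsupported territory where Oracle support is the safer route.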

    Read the article

  • Windows 7: Don't combine Taskbar Icons, but only display Icons

    - by cable729
    I've never really liked this feature about Win7, but I guess I just got used to it. The feature I don't like is mousing over the win7 icons when there are multiple windows of the same type open. You have to waste that extra time choosing the window after it pops up and it's just clunky. The XP taskbar was nice, because you didn't have to go through any of this, but it had the problem where you ran out of taskbar space fast. I often resized my taskbar to take care of this. However, on Windows 7, I don't use up even 1/3 of my taskbar. I have all this space I can use, and instead it's all squished to the left, making me take extra steps. Are there any applications I can use to get this desired behavior, or else what route would you recommend to take to write this behavior myself? My first thought would be C# and the Windows taskbar apis. PS. If it's not clear what I'm asking for, please let me know, I'll try to be more specific.

    Read the article

  • Recover files from corrupt filesystem

    - by Emile 81
    My situation: I have an older 80GB IDE internal hdd, with a few files on it that I would very much like to recover: some Word documents, some LaTeX documents (text files) and pictures (png, jpg, eps files), some other text documents and Visual Studio project files. I had backed them up (not the LaTeX ones though) using svn, but have not committed lately, and would lose a lot of work if I can't recover. The hdd seems to have lost its filesystem; I have no idea how it came about. I know it has/had 3 NTFS partitions; I know the files I want are on the second or third partition. I read http://superuser.com/questions/81877/recover-hard-disk-data Partition Find and Mount did not see all the partitions using intelligent scan. TestDisk does (I think); I followed the step by step instructions here, but when I try to list the files it says: "Can't open filesystem, filesystem seems damaged." I'm not sure how to proceed here, as TestDisk's wiki does not contain this error message afaik. I don't know if the hdd is gonna fail, or some prog has caused the filesystem to be corrupt; the hdd doesn't make a sound, so I guess that's good. I would like some guidance so I don't accidentally cause more damage. (e.g. is it OK to let testdisk write the filesystem to disk? I'm pretty sure the partitions are listed OK, but not 100%)
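
    Before letting any tool write to the disk, the safest move is to image it and experiment on the copy; the device and file names below are assumptions:

        # clone the whole drive, logging progress so a rerun can resume
        ddrescue /dev/sdb disk.img rescue.log
        # run the recovery tools against the image, not the disk
        testdisk disk.img
        photorec disk.img

    photorec (bundled with TestDisk) carves files by signature, so the doc/jpg/png files may be recoverable from the image even if the filesystem itself can't be repaired.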

    Read the article

  • How to boot Linux from a 16gb USB flash drive

    - by Chris Harris
    I'm trying to install Linux on a single partition of a USB flash drive that's larger than 4GB. The first place I went to is http://pendrivelinux.com. I can follow these instructions for installing Xubuntu 9.04 perfectly, but they break down when I try to scale it up beyond 4GB. There are several other tools to do this (unetbootin and usb-creator) which follow a very similar formula. I figured out that a big problem of mine was that all of these tools assume the USB drive is formatted in FAT32, which unfortunately cannot hold a single file larger than 4GB. This is unfortunate because I want to use just one partition, so that my persistence file, casper-rw, looks like one big partition to the OS once I've booted off of the USB drive. I then tried following a myriad of instructions involving formatting the drive as one large ext2 filesystem and using extlinux to create a single bootable ext2 file system. This doesn't work for me, however; after about 20 attempts verifying and slightly tweaking the formula, I cannot seem to get a "good" bootable ext2 file system built. I'm not entirely sure what's going on, but it seems as though no matter how hard I try, I cannot get the ext2 file system to remain coherent after copying the Linux ISO contents over, copying the MBR, and executing extlinux to create the ext bootloader. Every time, after I follow these steps (in any order) and reboot, I get an unbootable USB drive. If I then mount the drive under Linux again, I see a mess of a file system (inodes have clearly been screwed up somewhere along the way). I suspected that the USB drive wasn't being fully flushed, so I tried using the "sync" and "unmount" commands before rebooting, which didn't affect things at all. I guess I have several possible questions - but let's start with the obvious - is there something I'm missing to create a bootable ext2 USB flash drive that's large (e.g. 16GB)?
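
    For comparison, a minimal sketch of the single-ext2-partition route; the device name and syslinux paths are assumptions and vary by distro:

        mkfs.ext2 /dev/sdX1
        mount /dev/sdX1 /mnt/usb
        mkdir -p /mnt/usb/syslinux
        extlinux --install /mnt/usb/syslinux        # bootloader goes into a directory
        dd if=/usr/lib/syslinux/mbr.bin of=/dev/sdX bs=440 count=1   # whole disk, not sdX1
        parted /dev/sdX set 1 boot on
        # then copy the ISO contents and a syslinux.cfg, and finish with:
        sync && umount /mnt/usb

    Mixing up sdX and sdX1 in any of the dd/mkfs steps is a classic way to end up with exactly this kind of scrambled-inode filesystem, since a dd aimed at the wrong node overwrites either the partition table or the filesystem's first blocks.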

    Read the article

  • run script as another user from a root script with no tty stdin

    - by viktor tron
    Using CentOS, I want to run a script as the user 'training' as a system service. I use daemontools to monitor the process, which needs a launcher script that is run as root and has no tty standard in. Below I give my four different attempts, which all fail. 1: #!/bin/bash exec >> /var/log/training_service.log 2>&1 setuidgid training training_command This last line is not good enough, since for training_command we need the environment for the training user to be set. 2: su - training -c 'training_command' This looks like it (http://serverfault.com/questions/44400/run-a-shell-script-as-a-different-user) but gives 'standard in must be tty', as su makes sure a tty is present to potentially accept a password. I know I could make this disappear by modifying /etc/sudoers (a la http://superuser.com/questions/119376/bash-su-script-giving-an-error-standard-in-must-be-a-tty) but I am reluctant and unsure of the consequences. 3: runuser - training -c 'training_command' This one gives runuser: cannot set groups: Connection refused. I found no sense or resolution to this error. 4: ssh -p100 training@localhost 'source $HOME/.bashrc; training_command' This one is more of a joke to show desperation. Even this one fails with Host key verification failed. (the host key IS in known_hosts, etc). Note: all of 2, 3, 4 work as they should if I run the wrapper script from a root shell. Problems only occur if the system service monitor (daemontools) launches it (no tty terminal, I guess). I am stuck. Is this something so hard to achieve? I appreciate all insight and guidance to best practice. (this has also been posted on superuser: http://superuser.com/questions/434235/script-calling-script-as-other-user)
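
    Since daemontools already provides setuidgid, one approach that stays inside its model is to build the user's environment by hand instead of asking su for a login shell; the paths below are assumptions:

        #!/bin/bash
        # daemontools run script: no tty, so construct the user's
        # environment explicitly rather than via su/login
        exec >> /var/log/training_service.log 2>&1
        export HOME=/home/training USER=training LOGNAME=training
        export PATH=/usr/local/bin:/usr/bin:/bin
        cd "$HOME"
        exec setuidgid training training_command

    Anything training_command needs from .bashrc (extra PATH entries, variables) gets exported here instead, since non-interactive shells won't read .bashrc reliably anyway.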

    Read the article

  • need help with automating a CMD Java tool which queries Alexa AWS using batch

    - by Eli.C
    Hi everyone, I need to get all available info on 600 URLs from "Alexa Web Information Service". I downloaded the Java tool and I'm able to run a single query each time with a single switch/Response Group. I would like to ask how to write a batch file that would automate the process. The Java tool runs from the CMD with the following: C:\> java UrlInfo <key1> <key2> <URL> <Response Group> UrlInfo - constant key1 - constant key2 - constant URL - variable (I guess I need to use the "<" sign to read from a file) Response Group - variable (14 total, and I need to run each Response Group on each of the URLs once) The app returns data in clear text formatted as XML after each query; here is an example: C:\> java UrlInfo <key1> <key2> www.url.com Rank Response: <?xml version="1.0"?> <aws:UrlInfoResponse xmlns:aws="http://alexa.amazonaws.com/doc/2005-10-05/"> <aws:Response xmlns:aws="http://awis.amazonaws.com/doc/2005-07-11"> <aws:OperationRequest> <aws:RequestId>ec2b6-e8ae-b392</aws:RequestId> </aws:OperationRequest> <aws:UrlInfoResult> <aws:Alexa> <aws:TrafficData> <aws:DataUrl type="canonical">url.com/</aws:DataUrl> <aws:Rank>472906</aws:Rank> </aws:TrafficData> </aws:Alexa> </aws:UrlInfoResult> <aws:ResponseStatus xmlns:aws="http://alexa.amazonaws.com/doc/2005-10-05/"> <aws:StatusCode>Success</aws:StatusCode> </aws:ResponseStatus> </aws:Response> </aws:UrlInfoResponse> Any help would be really appreciated. Thanks and regards Eli.C
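
    A sketch of the loop (shown in bash; the Windows CMD equivalent is a pair of nested for /F loops), with the keys as placeholders and only three of the fourteen response groups listed:

        KEY1=yourAccessKeyId      # placeholder
        KEY2=yourSecretKey        # placeholder
        # run every response group against every URL in urls.txt
        while IFS= read -r url; do
            for group in Rank LinksInCount SiteData; do
                java UrlInfo "$KEY1" "$KEY2" "$url" "$group" >> results.xml
            done
        done < urls.txt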

    Read the article

  • RAID1 Broken Mirroring

    - by Sanoj
    I have a small server with Windows Small Business Server 2003. I'm using RAID1, via a HighPoint RocketRAID 1640 RAID card, using two hard drives. This week the server alarmed, and during reboot I got the error message Broken Mirroring (User Manual page 30). I had a few alternatives (see the manual); first I tried Continue, but the server restarted during boot. Next time I took Power Off, and replaced the oldest hard drive with a new one, and when I booted, I selected Rebuild. Then I selected the new hard drive to be the new one. The rebuild procedure started and a progress bar at 0% showed up, but after a few seconds I got the message Copy Failed!, then the server booted and Windows Server started. Now it works fine. But I guess that I'm just using one hard drive now, and it's not mirrored. I haven't touched the server since then (two days ago). What should I do now? I have no experience of this situation. Does anyone have some guidance?

    Read the article
