Search Results

Search found 19425 results on 777 pages for 'output clause'.


  • How to change the screen resolution in VNC viewer for Ubuntu 12.04 without a monitor?

    - by user325320
    I have Ubuntu 12.04 installed on a machine and I always use it remotely over VNC. When I have a monitor connected to this machine, I can change the resolution of my VNC viewer with the following line:
    $ vnc4server --geometry 1440x900
    This worked for me, but since I always use this machine remotely, I unplugged the monitor and rebooted, and the above command no longer works. Then I tried xrandr:
    SZ: Pixels Physical Refresh
    *0 1024 x 768 ( 260mm x 195mm ) *60
    Current rotation - normal
    Current reflection - none
    Rotations possible - normal
    Reflections possible - none
    There is only one option available, so I tried to add a new one:
    $ cvt 1440 900
    # 1440x900 59.89 Hz (CVT 1.30MA) hsync: 55.93 kHz; pclk: 106.50 MHz
    Modeline "1440x900_60.00" 106.50 1440 1528 1672 1904 900 903 909 934 -hsync +vsync
    $ xrandr --newmode "1440x900_60.00" 106.50 1440 1528 1672 1904 900 903 909 934 -hsync +vsync
    $ xrandr --addmode S2 "1440x900_60.00"
    Then I checked with xrandr again and can't see the new mode added. When I execute the following command I get an error saying my RandR is too old:
    $ xrandr --output S2 --mode 1440x900_60.00
    xrandr: Server RandR version before 1.2
    But this does not make sense to me: if I plug the monitor back in and run the xrandr command, it works again! It seems that Ubuntu must connect to a real monitor before I can change the resolution in my VNC viewer. Can anyone help?
    UPDATE: I finally solved this problem by switching to tightvncserver:
    $ tightvncserver -geometry 1440x900
    works for me. Thanks to everyone who answered my question.
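
    For reference, a minimal sketch of the working tightvncserver approach (assuming tightvncserver is installed and display :1 is free; the display number and depth below are illustrative):
    $ sudo apt-get install tightvncserver
    $ tightvncserver :1 -geometry 1440x900 -depth 24   # starts an Xvnc session at the requested size
    $ tightvncserver -kill :1                          # stops that session later
    Unlike the physical display, the Xvnc session's geometry is fixed when the server starts, so no xrandr mode has to be added afterwards.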

    Read the article

  • Cool SQL formatter tool

    - by AndyScott
    I have to deal with all types of code written by people from different organizations and different countries, using different languages, so obviously standards differ across these sources. One of the biggest headaches that I have come across is how people differ in the formatting of their SQL statements, specifically stored procs. When you regularly get over 500 lines in a sproc, if the code is not formatted correctly, you can get lost trying to figure out where one nested BEGIN begins and another nested END ends. One of my co-workers showed me this site today that does a pretty damn good job of making sense of that type of code: http://www.dpriver.com/pp/sqlformat.htm. This is a free website that offers a box to enter your nasty code; click "Format SQL" and have it cleaned for you. I am sure that there are situations where this may not work, but given the code that I have been working with recently, it does a really good job. There is a pay version with more options, including a VS add-in, a desktop component (with quick keys to clean text in programs like Notepad), the ability to output in HTML, and other stuff. Heck, I watched a demo where the purchased version takes formatted SQL code and turns it into a generic StringBuilder object. Yes, this seems like a shameless plug, but no, I have no relation to or knowledge of anyone involved in the development of this product; it just seems useful. Either way, I recommend checking out the free version.

    Read the article

  • AWS EC2 can't execute user-data script

    - by Bloodnut
    I'm pretty new to AWS and EC2, but I want to launch instances from another instance and have them run a user-data script after booting. I have installed the EC2 tools and ran the command as explained in various examples, like here http://www.turnkeylinux.org/blog/ec2-userdata and Eric Hammond's tutorials. However, when I actually use the command:
    ec2-run-instances --key my-key --user-data-file myscript my-ami
    it only runs the new instance but doesn't execute the script. myscript contains:
    #!/bin/bash
    echo "hello" > ~/output.txt
    I'm running Ubuntu Server 12.04 AMIs. The target AMIs are duplicates of the initiating instance. If I run curl http://169.254.169.254/latest/user-data the imported script is there.
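
    A hedged place to look, assuming the AMIs use cloud-init (the default on official Ubuntu 12.04 AMIs): cloud-init only executes user-data scripts on an instance's first boot, and an AMI created from an already-booted instance carries the old per-instance state, so the new script may be skipped even though the metadata service serves it.
    $ sudo cat /var/log/cloud-init.log        # look for user-data handling errors on the new instance
    $ ls /var/lib/cloud/instances/            # per-instance state inherited from the source instance, if any
    # Clearing that state on the source machine before re-bundling the AMI (a commonly suggested step)
    # makes cloud-init treat the next boot as a first boot and run the user-data again:
    $ sudo rm -rf /var/lib/cloud/instance /var/lib/cloud/instances/*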

    Read the article

  • Laptop battery: is voltage really important to respect?

    - by Marc-Andre R.
    I got an Acer Aspire 5100 and I just bought a new battery (after the stock battery just died yesterday). But I saw something after buying and I'm wondering whether it's really important or not. My stock battery was a 6-cell 4000mah 11.1v and the new battery is an 8-cell 4800mah 14.8v . I know that 8-cell and 4800mah is okay, but what about the 14.8v instead of 11.1v? The battery description says it's compatible with my laptop model (AS5100, model BL51), but the voltage difference makes me wonder. Will the laptop only take what it needs? Or will it be getting 14.8v straight in the brain? I know that my wall plug claims to output 19v, so logically I'm thinking a higher voltage battery shouldn't be a problem. Am I correct in thinking this? Thanks in advance for your answers!

    Read the article

  • Creating the concept of Time

    - by Jamie Dixon
    So I've reached the point in my exploration of gaming where I'd like to implement the concept of time in the little demo I've been building. What are some common methodologies for creating the concept of time passing within a game? My thoughts so far: my game loop tends to spend a fair bit of time sitting around waiting for user input, so any time system will likely need to run in a separate thread. What I've currently done is create a BackgroundWorker, passing in a method that contains a loop triggering every second. This is working fine and I can output information to the console from here, etc. Inside this loop I have a DateTime object that is incremented by 1 minute for every real-time second (the game begins in the year 01/01/01). Is this a standard way of achieving this result, or are there more tried and tested methods? I'm also curious about how to go about performing time-based actions (reducing player energy, moving entities around the game board, life/death, etc.). Thanks for any pointers or advice. I've searched around, however I'm not familiar enough with the terms and so my searches are yielding little result on this one.

    Read the article

  • Unable to install updates on 14.04 LTS

    - by Mike
    I have been getting update notifications for a few weeks now, but whenever I attempt to install them I get this message: The upgrade needs a total of 74.6 M free space on disk '/boot'. Please free at least an additional 29.8 M of disk space on '/boot'. Empty your trash and remove temporary packages of former installations using 'sudo apt-get clean'. First of all, I don't have permission to access /boot (I don't know why, as it's a standalone machine and I'm the only user). Secondly, I emptied the trash. Thirdly, I launched Terminal and entered sudo apt-get clean. I was asked for a sudo password, entered my system password, and re-entered sudo apt-get clean. The cursor stopped blinking - I assumed it was doing its "thing". I let it go for about 10 minutes, then exited Terminal. Tried to install the updates but just got the same message. Is there something I'm ignorant of? This is the output I get from the command df -h and I have no idea what it all means! @Tim, what's bash and why am I denied access to fstab and /boot?
    mike@mike-MS-7800:~$ /etc/fstab
    bash: /etc/fstab: Permission denied
    mike@mike-MS-7800:~$ df -h
    Filesystem                   Size  Used Avail Use% Mounted on
    /dev/mapper/ubuntu--vg-root  913G   11G  856G   2% /
    none                         4.0K     0  4.0K   0% /sys/fs/cgroup
    udev                         1.7G  4.0K  1.7G   1% /dev
    tmpfs                        335M  1.6M  333M   1% /run
    none                         5.0M  4.0K  5.0M   1% /run/lock
    none                         1.7G   14M  1.7G   1% /run/shm
    none                         100M   52K  100M   1% /run/user
    /dev/sda2                    237M  182M   43M  81% /boot
    /dev/sda1                    487M  3.4M  483M   1% /boot/efi
    /dev/sr1                      31M   31M     0 100% /media/mike/Optus Mobile
    mike@mike-MS-7800:~$
    I ran this from the terminal and all is now working:
    dpkg -l 'linux-*' | sed '/^ii/!d;/'"$(uname -r | sed "s/\(.*\)-\([^0-9]\+\)/\1/")"'/d;s/^[^ ]* [^ ]* \([^ ]*\).*/\1/;/[0-9]/!d' | xargs sudo apt-get -y purge
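
    For anyone hitting the same message, a more cautious sketch of freeing /boot (the df output shows /dev/sda2 mounted on /boot at 81%, so old kernel images are the usual culprit; the version number below is only an example - never purge the kernel reported by uname -r):
    $ uname -r                                            # the kernel you are currently running
    $ dpkg -l 'linux-image-*' | grep ^ii                  # the kernels actually installed
    $ sudo apt-get purge linux-image-3.13.0-24-generic    # remove one *older* kernel, not the running one
    $ sudo apt-get autoremove --purge                     # let apt clear remaining unused kernel packages
    The one-liner quoted above does essentially the same thing in bulk.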

    Read the article

  • software RAID array not starting in initramfs on Debian

    - by Jasper
    One of my Debian servers (kernel 2.6.30-AMD64) refuses to start the software RAID array that houses the root partition in the initramfs, and dumps me at a busybox console. When I follow the necessary steps to continue booting, it works fine (start the array with mdadm -A, then have LVM scan the volumes with pvscan and then vgchange -ay). I've tried booting with rootdelay=10 to no avail. I've also updated the initramfs and unpacked it to inspect whether it really tries to assemble the RAID array (it does). Output before dumping to the console:
    mount: mounting none on /dev failed: No such device
    W: devtmpfs not available, falling back to tmpfs for /dev
    and then some LVM messages saying it can't find the volumes holding the root partitions. Does anybody have a clue how I could fix this?
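
    One hedged thing to check, assuming Debian's mdadm hook is what assembles the array in the initramfs: the initramfs only auto-assembles arrays listed in /etc/mdadm/mdadm.conf at the time the image was built, so a missing or stale ARRAY line produces exactly this "works when assembled by hand" behaviour.
    $ mdadm --detail --scan                            # the ARRAY line(s) for the currently running arrays
    $ grep ^ARRAY /etc/mdadm/mdadm.conf                # compare against what the initramfs was built with
    $ sudo sh -c 'mdadm --detail --scan >> /etc/mdadm/mdadm.conf'   # add the line if it is missing
    $ sudo update-initramfs -u -k all                  # rebuild the initramfs with the updated config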

    Read the article

  • How to tell if SPARC T4 crypto is being used?

    - by danx
    A question that often comes up when running applications on SPARC T4 systems is "How can I tell if hardware crypto acceleration is being used?" To review, the SPARC T4 processor includes a crypto unit that supports several crypto instructions. For hardware crypto these include 11 AES instructions, 4 xmul* instructions (for AES GCM carryless multiply), mont for Montgomery multiply (optimizes RSA and DSA), and 5 des_* instructions (for DES3). For hardware hash algorithm optimization, the T4 has the md5, sha1, sha256, and sha512 instructions (the last two are used for SHA-224 and SHA-384).
    First off, it's easy to tell if the processor has the T4 crypto instructions: use the isainfo -v command and look for "sparcv9" and "aes" (and other hash and crypto algorithms) in the output:
    $ isainfo -v
    64-bit sparcv9 applications
    crc32c cbcond pause mont mpmul sha512 sha256 sha1 md5 camellia kasumi des aes ima hpc vis3 fmaf asi_blk_init vis2 vis popc
    These instructions are unprivileged, so they are available for direct use in user-level applications and libraries (such as OpenSSL). Here is the "openssl speed -evp" command shown with the built-in t4 engine and with the pkcs11 engine. Both run the T4 AES instructions, but the t4 engine is faster than the pkcs11 engine because it has less overhead (especially for smaller packet sizes):
    t-4 $ /usr/bin/openssl version
    OpenSSL 1.0.0j 10 May 2012
    t-4 $ /usr/bin/openssl engine
    (t4) SPARC T4 engine support
    (dynamic) Dynamic engine loading support
    (pkcs11) PKCS #11 engine support
    t-4 $ /usr/bin/openssl speed -evp aes-128-cbc # t4 engine used by default
    . . .
    The 'numbers' are in 1000s of bytes per second processed.
    type          16 bytes     64 bytes     256 bytes    1024 bytes    8192 bytes
    aes-128-cbc   487777.10k   816822.21k   986012.59k   1017029.97k   1053543.08k
    t-4 $ /usr/bin/openssl speed -engine pkcs11 -evp aes-128-cbc
    engine "pkcs11" set.
    . . .
    The 'numbers' are in 1000s of bytes per second processed.
    type          16 bytes     64 bytes     256 bytes    1024 bytes    8192 bytes
    aes-128-cbc   31703.58k    116636.39k   350672.81k   696170.50k    993599.49k
    Note: The "-evp" flag indicates use of the OpenSSL "EnVeloPe" API, which gives more accurate results. That's because it tells OpenSSL to use the same API that external programs use when calling OpenSSL libcrypto functions, evp(3openssl).
    DTrace Shows if T4 Crypto Functions Are Used
    OK, good enough; the isainfo(1) command shows the instructions are present, but how does one know if they are being used? Chi-Chang Lin, who works on Oracle Solaris performance, wrote a DTrace script to show if T4 instructions are being executed. To show the T4 instructions are being used, run the following DTrace script and look for functions named "t4" and "yf" in the output. The OpenSSL T4 engine uses functions named "t4" and the PKCS#11 engine uses functions named "yf". To demonstrate, I'll first run "openssl speed" with the built-in t4 engine, then with the pkcs11 engine. The performance numbers are not valid due to the dtrace probes slowing things down.
    t-4 # dtrace -Z -n 'pid$target::*yf*:entry,pid$target::*t4_*:entry{ @[probemod, probefunc] = count();}' \
        -c "/usr/bin/openssl speed -evp aes-128-cbc"
    dtrace: description 'pid$target::*yf*:entry' matched 101 probes
    . . .
    dtrace: pid 2029 has exited
    libcrypto.so.1.0.0  ENGINE_load_t4                      1
    libcrypto.so.1.0.0  t4_DH                               1
    libcrypto.so.1.0.0  t4_DSA                              1
    libcrypto.so.1.0.0  t4_RSA                              1
    libcrypto.so.1.0.0  t4_destroy                          1
    libcrypto.so.1.0.0  t4_free_aes_ctr_NIDs                1
    libcrypto.so.1.0.0  t4_init                             1
    libcrypto.so.1.0.0  t4_add_NID                          3
    libcrypto.so.1.0.0  t4_aes_expand128                    5
    libcrypto.so.1.0.0  t4_cipher_init_aes                  5
    libcrypto.so.1.0.0  t4_get_all_ciphers                  6
    libcrypto.so.1.0.0  t4_get_all_digests                 59
    libcrypto.so.1.0.0  t4_digest_final_sha1               65
    libcrypto.so.1.0.0  t4_digest_init_sha1                65
    libcrypto.so.1.0.0  t4_sha1_multiblock                126
    libcrypto.so.1.0.0  t4_digest_update_sha1             261
    libcrypto.so.1.0.0  t4_aes128_cbc_encrypt         1432979
    libcrypto.so.1.0.0  t4_aes128_load_keys_for_encrypt  1432979
    libcrypto.so.1.0.0  t4_cipher_do_aes_128_cbc      1432979
    t-4 # dtrace -Z -n 'pid$target::*yf*:entry,pid$target::*t4_*:entry{ @[probemod, probefunc] = count();}' \
        -c "/usr/bin/openssl speed -engine pkcs11 -evp aes-128-cbc"
    dtrace: description 'pid$target::*yf*:entry' matched 101 probes
    engine "pkcs11" set.
    . . .
    dtrace: pid 2033 has exited
    libcrypto.so.1.0.0  ENGINE_load_t4                      1
    libcrypto.so.1.0.0  t4_DH                               1
    libcrypto.so.1.0.0  t4_DSA                              1
    libcrypto.so.1.0.0  t4_RSA                              1
    libcrypto.so.1.0.0  t4_destroy                          1
    libcrypto.so.1.0.0  t4_free_aes_ctr_NIDs                1
    libcrypto.so.1.0.0  t4_get_all_ciphers                  1
    libcrypto.so.1.0.0  t4_get_all_digests                  1
    libsoftcrypto.so.1  rijndael_key_setup_enc_yf           1
    libsoftcrypto.so.1  yf_aes_expand128                    1
    libcrypto.so.1.0.0  t4_add_NID                          3
    libsoftcrypto.so.1  yf_aes128_cbc_encrypt         1542330
    libsoftcrypto.so.1  yf_aes128_load_keys_for_encrypt  1542330
    So, as shown above, the OpenSSL built-in t4 engine executes t4_* functions (which are hand-coded assembly executing the T4 AES instructions) and the OpenSSL pkcs11 engine executes *yf* functions.
    Programmatic Use of OpenSSL T4 engine
    The OpenSSL t4 engine is used automatically with the /usr/bin/openssl command line. Chi-Chang Lin also points out that if you're calling the OpenSSL API (libcrypto.so) from a program, you must call ENGINE_load_builtin_engines(), otherwise the built-in t4 engine will not be loaded. You do not call ENGINE_set_default(). That's because the "openssl speed -evp" test calls ENGINE_load_builtin_engines() even though the "-engine" option wasn't specified.
    OpenSSL T4 engine Availability
    The OpenSSL t4 engine is available with Solaris 11 and 11.1. For Solaris 10 08/11 (U10), you need to use the OpenSSL pkcs11 engine. The OpenSSL t4 engine is distributed only with the version of OpenSSL distributed with Solaris (and not third-party or self-compiled versions of OpenSSL). The OpenSSL engine implements the AES cipher for Solaris 11, released 11/2011. For Solaris 11.1, released 11/2012, the OpenSSL engine adds optimization for the MD5, SHA-1, and SHA-2 hash algorithms, and DES-3. Although the T4 processor has Camellia and Kasumi block cipher instructions, these are not implemented in the OpenSSL T4 engine. The following charts may help view the availability of optimizations. The first chart shows what's available with Solaris CLIs and APIs, the second chart shows what's available in Solaris OpenSSL.
    Native Solaris Optimization for SPARC T4
    This table shows Solaris native CLI and API support. As such, these are all available with the OpenSSL pkcs11 engine.
    CLIs: "openssl -engine pkcs11", encrypt(1), decrypt(1), mac(1), digest(1), md5sum(1), sha1sum(1), sha224sum(1), sha256sum(1), sha384sum(1), sha512sum(1)
    APIs: PKCS#11 library libpkcs11(3LIB) (includes the OpenSSL pkcs11 engine), libMD(3LIB), and Solaris kernel modules
    Algorithm (columns: Solaris 10 08/11 (U10) | Solaris 11 | Solaris 11.1)
    AES-ECB, AES-CBC, AES-CTR, AES-CBC AES-CFB128: X X X
    DES3-ECB, DES3-CBC, DES2-ECB, DES2-CBC, DES-ECB, DES-CBC: X X X
    bignum Montgomery multiply (RSA, DSA): X X X
    MD5, SHA-1, SHA-256, SHA-384, SHA-512: X X X
    SHA-224: X
    ARCFOUR (RC4): X
    Solaris OpenSSL T4 Engine Optimization
    This table is for the Solaris OpenSSL built-in t4 engine. Algorithms listed above are also available through the OpenSSL pkcs11 engine.
    CLI: openssl(1openssl)
    APIs: openssl(5), engine(3openssl), evp(3openssl), libcrypto crypto(3openssl)
    Algorithm (columns: Solaris 11 | Solaris 11 SRU2 | Solaris 11.1)
    AES-ECB, AES-CBC, AES-CTR, AES-CBC AES-CFB128: X X X
    DES3-ECB, DES3-CBC, DES-ECB, DES-CBC: X
    bignum Montgomery multiply (RSA, DSA): X
    MD5, SHA-1, SHA-256, SHA-384, SHA-512: X X
    SHA-224: X
    Source Code Availability
    Solaris: Most of the T4 assembly code that calls the new T4 crypto instructions was written by Ferenc Rákóczi of the Solaris Security group, with assistance from others. You can download the Solaris source for this and other parts of Solaris as a few zip files at the Oracle Download website. The relevant source files are generally under the directories usr/src/common/crypto/{aes,arcfour,des,md5,modes,sha1,sha2}/sun4v/ and usr/src/common/bignum/sun4v/. A Solaris 11 binary is available from the Oracle Solaris 11 download website.
    OpenSSL t4 engine: The source for the OpenSSL t4 engine, which is based on the Solaris source above, is viewable through the OpenGrok source code browser in the directory src/components/openssl/openssl-1.0.0/engines/t4. You can download the source from the same website or through Mercurial source code management, hg(1).
    Conclusion
    Oracle Solaris with SPARC T4 provides a rich set of accelerated cryptographic and hash algorithms. Using the latest update, Solaris 11.1, provides the best set of optimized algorithms, but alternatives are often available, sometimes slightly slower, for releases back to Solaris 10 08/11 (U10).
    Reference
    See also these earlier blogs:
    SPARC T4 OpenSSL Engine by myself, Dan Anderson (2011), discusses the OpenSSL T4 engine and reviews the SPARC T4 processor for the Solaris 11 release.
    Exciting Crypto Advances with the T4 processor and Oracle Solaris 11 by Valerie Fenwick (2011) discusses crypto algorithms that were optimized for the T4 processor with the Solaris 11 FCS (11/11) and Solaris 10 08/11 (U10) releases.
    T4 Crypto Cheat Sheet by Stefan Hinker (2012) discusses how to make T4 crypto optimization available to various consumers (such as SSH, Java, OpenSSL, Apache, etc.).
    High Performance Security For Oracle Database and Fusion Middleware Applications using SPARC T4 (PDF, 2012) discusses SPARC T4 and its usage to optimize application security.
    Configuring Oracle iPlanet WebServer / Oracle Traffic Director to use crypto accelerators on T4-1 servers by Meena Vyas (2012)
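
    The DTrace approach shown above can also be pointed at an already-running application instead of a command started by dtrace itself; a hedged one-liner, assuming you have DTrace privileges and <PID> is the process to inspect:
    # dtrace -Z -n 'pid$target::t4_*:entry,pid$target::*yf*:entry{ @[probemod, probefunc] = count(); }' -p <PID>
    As above, counts against t4_* functions indicate the OpenSSL t4 engine is doing the work, while *yf* functions indicate the PKCS#11 path.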

    Read the article

  • Strange rsync behaviour

    - by Stewie
    So, I start by comparing two directories:
    [root@135759 ]# rsync -av test1/ test2/
    building file list ... done
    sent 128 bytes received 20 bytes 296.00 bytes/sec
    total size is 6 speedup is 0.04
    They are both in sync. Now, let me create a file in test1 and copy it to test2:
    [root@135759 ]# touch test1/hello4.php
    [root@135759 ]# scp test1/hello4.php test2/hello4.php
    I verify that those are the same:
    [root@135759 test2]# md5sum test1/hello4.php
    d41d8cd98f00b204e9800998ecf8427e hello4.php
    [root@135759 test1]# md5sum test2/hello4.php
    d41d8cd98f00b204e9800998ecf8427e hello4.php
    Thus, running rsync should show me 0 files. Why is the output not right?
    [root@135759 ]# rsync -avn test1/ test2/
    building file list ... done
    hello4.php
    sent 116 bytes received 24 bytes 280.00 bytes/sec
    total size is 6 speedup is 0.04
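
    A hedged explanation of what is probably happening: without -c, rsync decides "changed" by size and modification time, and the scp copy gets a fresh mtime, so hello4.php is still listed even though the content is identical (and -n is a dry run, so nothing is actually transferred either way). Two ways to make the comparison behave as expected:
    $ rsync -avc test1/ test2/    # -c: compare by checksum rather than mtime/size
    $ rsync -av test1/ test2/     # or simply let one real run synchronize the mtimes
    After either of these, a following rsync -avn should list no files.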

    Read the article

  • Realtek HD Audio 5.1 Optical Input (from Xbox 360)

    - by Shevek
    I'm trying to connect up my Xbox 360 digital audio to my 5.1 speakers via my PC (my LCD has dual input, DVI for the PC, D-SUB for the Xbox). The motherboard has a Realtek ALC888 chipset and I have a 5.1 speaker system connected via 3 x 3.5mm jacks (FR/FL, RR/RL, C/LFE) and I get full 5.1 output from the PC. I have connected the optical audio cable from the Xbox to the Optical In on the motherboard's backplate. With the Xbox in Digital Stereo mode I get 2 channel audio from the Xbox, through the PC, to the speakers. With the Xbox in Dolby Digital 5.1 mode I get no sound at all. I have the latest Realtek drivers installed in Win 7 32-bit. Questions: Is it possible to use the full 5.1 DD from the Xbox? If so, am I missing some option(s) in the Realtek setup? Do I need some other piece of software to do this? (AC3Filter or FFDShow perhaps) Many thanks

    Read the article

  • How do I have 4GB of video memory with a 1GB video card?

    - by Tomas Spc Yaczik
    I ran the DirectX diagnostic tool to take a look at my graphics capabilities and it's telling me that the approximate video memory is clocked in at 4096MB. That doesn't make any sense; how is that possible? The only thing I can think of is that DirectX was inaccurate, so I looked at the graphics statistics of my computer in CPU-Z, which was included with the motherboard, and that's telling me I have "negative" 1988MB. Not only is that not 4096MB, but it's giving me a negative number, and I know for a fact that you can't have a negative amount of memory???!!! The only things I can think of that would be amplifying my video memory output are the PCIe 3.0 bus, or that the motherboard is somehow including the on-board video chipset with the video card, which even then is only 2GB of video memory, which still has to be being amplified by something. Any suggestions?

    Read the article

  • How can I switch between the HDMI and DVI outputs of my graphics card?

    - by Owen Melbourne
    I've got an Nvidia GTX 560 Ti card which currently has my 2 monitors hooked up to its 2 DVI ports. However, it's also got a mini-HDMI port, into which I've plugged an HDMI cable (with a mini adapter) and led it to my TV across the room. I'm hoping to be able to toggle between the HDMI output and the DVI outputs, but I'm not sure how I'd go about this. Could somebody please point me in the right direction? I'm not really worried about having all 3 on at the same time, so that isn't a problem, but if it's possible then I'll do that.

    Read the article

  • Does not recognize usb sticks and drives

    - by Peter
    When connecting any USB stick to my ThinkPad, Ubuntu 10.10 does not recognize it; I don't see anything on the desktop. The output of "dmesg | tail -n10" gives me:
    [ 1965.696388] hub 1-0:1.0: unable to enumerate USB device on port 1
    [ 1965.884537] hub 1-0:1.0: unable to enumerate USB device on port 1
    [ 1966.072503] hub 1-0:1.0: unable to enumerate USB device on port 1
    [ 1966.260349] hub 1-0:1.0: unable to enumerate USB device on port 1
    [ 1966.506227] usb 1-1: new high speed USB device using ehci_hcd and address 9
    [ 1966.572375] hub 1-0:1.0: unable to enumerate USB device on port 1
    [ 1966.760379] hub 1-0:1.0: unable to enumerate USB device on port 1
    [ 1966.948358] hub 1-0:1.0: unable to enumerate USB device on port 1
    [ 1967.136335] hub 1-0:1.0: unable to enumerate USB device on port 1
    [ 1967.325423] hub 1-0:1.0: unable to enumerate USB device on port 1
    When connecting my USB scanner to the same port:
    [ 2008.480135] usb 1-1: new high speed USB device using ehci_hcd and address 65
    [ 2008.548389] hub 1-0:1.0: unable to enumerate USB device on port 1
    [ 2008.736786] hub 1-0:1.0: unable to enumerate USB device on port 1
    [ 2008.924379] hub 1-0:1.0: unable to enumerate USB device on port 1
    [ 2009.112348] hub 1-0:1.0: unable to enumerate USB device on port 1
    [ 2009.300443] hub 1-0:1.0: unable to enumerate USB device on port 1
    [ 2009.488536] hub 1-0:1.0: unable to enumerate USB device on port 1
    [ 2009.732180] usb 1-1: new high speed USB device using ehci_hcd and address 71
    [ 2014.796299] hub 1-0:1.0: unable to enumerate USB device on port 1
    [ 2018.000128] usb 2-1: new full speed USB device using uhci_hcd and address 3
    And Ubuntu 10.10 recognizes that scanner. So: what can I do to see my USB stick? BTW: on my other ThinkPad, running Fedora 14, it works perfectly... Cheers -Peter
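
    A hedged first-pass check, since "unable to enumerate USB device" usually points at a power or EHCI handoff problem rather than a missing driver:
    $ lsusb                          # does the stick show up in the device list at all?
    $ tail -f /var/log/kern.log      # watch kernel messages live while re-plugging the stick
    If the stick never enumerates on this port while the scanner does, trying a powered USB hub or a different port helps narrow down whether that particular port cannot supply enough power for the stick.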

    Read the article

  • Linux process management

    - by tanascius
    Hello, I started a long-running background process (dd from /dev/urandom) in my SSH console. Later I had to disconnect. When I logged in again (this time directly, without SSH), the process still seemed to be running. I am not sure what happened - I did not use disown. When I logged in later, the process was not listed in top at first, but after a while it reclaimed a high CPU percentage, as I expected. So I assume dd is still running. Now, I'd like to see the progress. I use kill -USR1 <pid> but nothing is printed. Is there any way to get the output again?
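
    A hedged explanation sketch: SIGUSR1 makes dd print its statistics to its own stderr, which is still attached to the original (now disconnected) terminal, so the report has nowhere visible to go. Two things worth trying (<pid> is the dd process id):
    $ ls -l /proc/<pid>/fd/          # see where stderr (fd 2) and the output descriptor point
    $ cat /proc/<pid>/fdinfo/1       # "pos:" is the byte offset written so far, if fd 1 is the output
    If stderr points at a dead pty, the USR1 reports are effectively lost, but the fdinfo position of the output descriptor still gives a rough progress figure.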

    Read the article

  • Vim */dyn support

    - by galymzhan
    What does the plus sign before */dyn mean in the :version command's output? E.g.:
    +python/dyn +python3/dyn +ruby/dyn +tcl/dyn
    I didn't find any useful documentation on it. When I run :echo has('python3') Vim returns 0. When I run :python3 print('hi') it says E370: Could not load library python31.dll, meaning I should install Python (as I understand it). So I just can't see the difference between -*/dyn and +*/dyn. What does the plus sign give us? Also, what's the difference from the dyn-less feature, e.g. +python?

    Read the article

  • Unable to get ls to recognise LS_OPTIONS or LS_COLORS?

    - by A T
    Trying to get --color=auto as the default ls argument. $ ls --version ls (GNU coreutils) 8.21 … $ echo $LS_COLORS no=00:fi=00:di=00;34:ln=01;36:pi=40;33:so=01;35:do=01;35:bd=40;33;01:cd=40;33;01:or=40;31;01:ex=00;32:*.tar=01;31:*.tgz=01;31:*.arj=01;31:*.taz=01;31:*.lzh=01;31:*.zip=01;31:*.z=01;31:*.Z=01;31:*.gz=01;31:*.bz2=01;31:*.deb=01;31:*.rpm=01;31:*.jar=01;31:*.jpg=01;35:*.jpeg=01;35:*.gif=01;35:*.bmp=01;35:*.pbm=01;35:*.pgm=01;35:*.ppm=01;35:*.tga=01;35:*.xbm=01;35:*.xpm=01;35:*.tif=01;35:*.tiff=01;35:*.png=01;35:*.mov=01;35:*.mpg=01;35:*.mpeg=01;35:*.avi=01;35:*.fli=01;35:*.gl=01;35:*.dl=01;35:*.xcf=01;35:*.xwd=01;35:*.ogg=01;35:*.mp3=01;35:*.wav=01;35: $ echo $LS_OPTIONS --color=auto Unfortunately when I run ls I still get non-colored output (running ls --color=auto manually gives me colors). How do I make --color=auto a default ls argument?
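
    A hedged explanation, in case it helps: GNU coreutils ls never reads LS_OPTIONS itself - that variable only has an effect on systems where ls is aliased to use it - and LS_COLORS only chooses which colors are used once colorization is already on. The usual fix is an alias in ~/.bashrc (or your shell's equivalent):
    alias ls='ls --color=auto'
    # or, if you want to keep driving it through the variable:
    alias ls='ls $LS_OPTIONS'
    After sourcing the file (source ~/.bashrc), plain ls should be colored.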

    Read the article

  • Explain Model View Controller

    - by Channel72
    My experience with developing dynamic websites is limited mostly to Java servlets. I've used Tomcat to develop various Java servlets, and I wouldn't hesitate to say that I'm reasonably proficient with this technology, as well as with client-side HTML/CSS/JavaScript for the front-end. When I think "dynamic website", I think: user requests a URL with a query string, server receives the query, and then proceeds to output HTML dynamically in order to respond to the query. This often involves communication with a database in order to fetch requested data for display. This is basically the idea behind the doGet method of a Java HttpServlet. But these days, I'm hearing more and more about newer frameworks such as Django and Ruby on Rails, all of which take advantage of the "Model View Controller" architecture. I've read various articles which explain MVC, but I'm having trouble really understanding the benefits. I understand that the general idea is to separate business logic from UI logic, but I fail to see how this is anything really different from normal web programming. Web programming, by its very nature, forces you to separate business logic (back-end server-side programming) from UI programming (client-side HTML or JavaScript), because the two exist in entirely different spheres of programming. Question: What does MVC offer over something like a Java servlet, and more importantly, what exactly is MVC and how is it different from what you would normally do to develop a dynamic website using a more traditional approach such as a Java servlet (or even something older like CGI)? If possible, when explaining MVC, please provide an example which illustrates how MVC is applied to the web development process, and how it is beneficial.

    Read the article

  • Linux networking crash: best steps to find out the cause?

    - by Aron Rotteveel
    One of our Linux (CentOS) servers was unreachable last night. The server was not reachable in any way except for the remote console. After logging in with the remote console, it turned out I could not ping any outside hosts either. A simple service network restart solved the issue, but I am still wondering what could have caused this. My log files seem to indicate no error at all (except for the various daemons that need a network connection and failed after the network failure). Are there any additional steps I can take to find out the cause of this problem? EDIT: this just happened again. The server was completely unresponsive until I issued a networking service restart. Any advice is welcome. Could this be caused by a faulty hardware component? As per Madhatter's request, here are some excerpts from the log at the time (the network crashed at 20:13), /var/log/messages:
    Dec 2 20:01:05 graviton kernel: Firewall: *TCP_IN Blocked* IN=eth0 OUT= MAC=<stripped> SRC=<stripped> DST=<stripped> LEN=40 TOS=0x00 PREC=0x00 TTL=101 ID=256 PROTO=TCP SPT=6000 DPT=3306 WINDOW=16384 RES=0x00 SYN URGP=0
    Dec 2 20:01:05 graviton kernel: Firewall: *TCP_IN Blocked* IN=eth0 OUT= MAC=<stripped> SRC=<stripped> DST=<stripped> LEN=40 TOS=0x00 PREC=0x00 TTL=100 ID=256 PROTO=TCP SPT=6000 DPT=3306 WINDOW=16384 RES=0x00 SYN URGP=0
    Dec 2 20:01:05 graviton kernel: Firewall: *TCP_IN Blocked* IN=eth0 OUT= MAC=<stripped> SRC=<stripped> DST=<stripped> LEN=40 TOS=0x00 PREC=0x00 TTL=101 ID=256 PROTO=TCP SPT=6000 DPT=3306 WINDOW=16384 RES=0x00 SYN URGP=0
    Dec 2 20:13:34 graviton junglediskserver: Connection to gateway failed: xGatewayTransport - Connection to gateway failed.
    The first three messages are simple responses to iptables rules I have set up through the LFD firewall. The last message indicates that JungleDisk, which I use for backups, can no longer connect to the gateway. Apart from this, there are no interesting messages around this time. EDIT 4 Dec: as per Mattdm's request, here is the output of ethtool eth0. (Please note that these are the settings that currently work. If things go wrong again, I will be sure to post this again if necessary.)
    Settings for eth0:
        Supported ports: [ TP ]
        Supported link modes: 10baseT/Half 10baseT/Full 100baseT/Half 100baseT/Full 1000baseT/Full
        Supports auto-negotiation: Yes
        Advertised link modes: 10baseT/Half 10baseT/Full 100baseT/Half 100baseT/Full 1000baseT/Full
        Advertised auto-negotiation: Yes
        Speed: 1000Mb/s
        Duplex: Full
        Port: Twisted Pair
        PHYAD: 1
        Transceiver: internal
        Auto-negotiation: on
        Supports Wake-on: g
        Wake-on: d
        Link detected: yes
    As per Joris' request, here is also the output of route -n:
    aron@graviton [~]# route -n
    Kernel IP routing table
    Destination   Gateway       Genmask          Flags Metric Ref Use Iface
    xx.xx.xx.58   0.0.0.0       255.255.255.255  UH    0      0   0   eth0
    xx.xx.xx.42   0.0.0.0       255.255.255.255  UH    0      0   0   eth0
    xx.xx.xx.43   0.0.0.0       255.255.255.255  UH    0      0   0   eth0
    xx.xx.xx.41   0.0.0.0       255.255.255.255  UH    0      0   0   eth0
    xx.xx.xx.46   0.0.0.0       255.255.255.255  UH    0      0   0   eth0
    xx.xx.xx.47   0.0.0.0       255.255.255.255  UH    0      0   0   eth0
    xx.xx.xx.44   0.0.0.0       255.255.255.255  UH    0      0   0   eth0
    xx.xx.xx.45   0.0.0.0       255.255.255.255  UH    0      0   0   eth0
    xx.xx.xx.50   0.0.0.0       255.255.255.255  UH    0      0   0   eth0
    xx.xx.xx.51   0.0.0.0       255.255.255.255  UH    0      0   0   eth0
    xx.xx.xx.48   0.0.0.0       255.255.255.255  UH    0      0   0   eth0
    xx.xx.xx.49   0.0.0.0       255.255.255.255  UH    0      0   0   eth0
    xx.xx.xx.54   0.0.0.0       255.255.255.255  UH    0      0   0   eth0
    xx.xx.xx.52   0.0.0.0       255.255.255.255  UH    0      0   0   eth0
    xx.xx.xx.53   0.0.0.0       255.255.255.255  UH    0      0   0   eth0
    xx.xx.xx.0    0.0.0.0       255.255.255.192  U     0      0   0   eth0
    xx.xx.xx.0    0.0.0.0       255.255.255.0    U     0      0   0   eth0
    169.254.0.0   0.0.0.0       255.255.0.0      U     0      0   0   eth0
    0.0.0.0       xx.xx.xx.62   0.0.0.0          UG    0      0   0   eth0
    The bottom xx.62 is my gateway. EDIT December 28th: the problem occurred again and I got the chance to compare some of the outputs of the above tests. What I found out is that arp -an returns an incomplete MAC address for my gateway (which is not under my control; the server is in a shared rack):
    During failure: ? (xx.xx.xx.62) at <incomplete> on eth0
    After service network restart: ? (xx.xx.xx.62) at 00:00:0C:9F:F0:30 [ether] on eth0
    Is this something I can fix or is it time for me to contact the data centre?
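
    For the next time it fails, a hedged capture plan (the <incomplete> ARP entry means the gateway never answered the ARP request, so the useful question is whether the request even leaves the box):
    $ arp -an                     # confirm the gateway entry is <incomplete> again
    $ tcpdump -nei eth0 arp       # do ARP who-has requests go out, and does any reply come back?
    $ ip -s link show eth0        # RX/TX error and drop counters on the NIC
    If requests leave eth0 but no reply ever arrives while the link stays up, the fault is likely upstream (switch port or the gateway itself), which would indeed be a case for the data centre; if nothing leaves at all, it points back at the NIC or driver on the server.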

    Read the article

  • port to subdomain

    - by takeshin
    I have installed Hudson using apt-get, and the Hudson server is available on example.com:8080. For example.com I use the standard port *:80 and some virtual hosts set up this way:
    # /etc/apache2/sites-enabled/subdomain.example.com
    <Virtualhost *:80>
        ServerName subdomain.example.com
        ...
    </Virtualhost>
    Here is info about the Hudson process:
    /usr/bin/daemon --name=hudson --inherit --env=HUDSON_HOME=/var/lib/hudson --output=/var/log/hudson/hudson.log --pidfile=/var/run/hudson/hudson.pid -- /usr/bin/java -jar /usr/share/hudson/hudson.war --webroot=/var/run/hudson/war
    987 ? Sl 1:08 /usr/bin/java -jar /usr/share/hudson/hudson.war --webroot=/var/run/hudson/war
    How should I forward http://example.com:8080 to http://hudson.example.com?
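
    A hedged reverse-proxy sketch for Apache, following the same vhost layout as above (it assumes mod_proxy/mod_proxy_http are available and that Hudson keeps listening on localhost:8080):
    $ sudo a2enmod proxy proxy_http
    $ sudo tee /etc/apache2/sites-available/hudson.example.com <<'EOF'
    <VirtualHost *:80>
        ServerName hudson.example.com
        ProxyPass / http://localhost:8080/
        ProxyPassReverse / http://localhost:8080/
    </VirtualHost>
    EOF
    $ sudo a2ensite hudson.example.com && sudo service apache2 reload
    With this in place, hudson.example.com serves the Hudson instance while example.com:8080 keeps working as before; a DNS record for the subdomain is of course still needed.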

    Read the article

  • How do I stop Gimp from autolaunching on startup?

    - by Joshua Fox
    Gimp launches every time I log into Xubuntu (v. 13.10). Gimp is not shown under Settings Manager -> Sessions and Startup, and it does not appear in ~/.config/autostart. I immediately close Gimp in these cases, so it is not running when I shut down the session. How do I stop Gimp from autolaunching on startup? Diagnostic info: note that cd / followed by find . -name gimp.desktop only produces ./usr/share/applications/gimp.desktop and nothing else. Here is the output of grep -lIr 'gimp' ~/ (saved to ~/Gimp-search-results.txt): sbin/vgimportclone home/joshua/.gimp-2.8/controllerrc home/joshua/.gimp-2.8/tags.xml home/joshua/.gimp-2.8/dockrc home/joshua/.gimp-2.8/gimprc home/joshua/.gimp-2.8/themerc home/joshua/.gimp-2.8/templaterc home/joshua/.gimp-2.8/gtkrc home/joshua/.gimp-2.8/sessionrc home/joshua/.gimp-2.8/toolrc home/joshua/.gimp-2.8/pluginrc home/joshua/.gimp-2.8/menurc home/joshua/Gimp-search-results.txt home/joshua/.local/share/ristretto/mime.db home/joshua/.wine/drive_c/windows/system32/gecko/1.4/wine_gecko/dictionaries/en-US.dic home/joshua/.wine/drive_c/windows/syswow64/gecko/1.4/wine_gecko/dictionaries/en-US.dic home/joshua/.cache/software-center/piston-helper/rec.ubuntu.com,api,1.0,recommend_app,skype,,0495938f41334883bd3a67d3b164c1d1 home/joshua/.cache/software-center/piston-helper/rec.ubuntu.com,api,1.0,recommend_app,gnome-utils,,91bba9b826fb21dbfc3aad6d3bd771cb home/joshua/.cache/software-center/piston-helper/rec.ubuntu.com,api,1.0,recommend_app,icedtea-plugin,,7bb5e4ad0469ef8277032c048b9d7328 home/joshua/.cache/software-center/piston-helper/reviews.ubuntu.com,reviews,api,1.0,review-stats,any,any,,1c66e24123164bb80c4253965e29eed7 home/joshua/.cache/software-center/piston-helper/rec.ubuntu.com,api,1.0,recommend_app,wine1.4,,2bac05a75dcec604ee91e58027eb4165 home/joshua/.cache/software-center/piston-helper/software-center.ubuntu.com,api,2.0,applications,en,ubuntu,saucy,amd64,,32b432ef7e12661055c87e3ea0f3b5d5 home/joshua/.cache/software-center/apthistory.p home/joshua/.cache/software-center/reviews.ubuntu.com_reviews_api_1.0_review-stats-pkgnames.p home/joshua/.cache/oneconf/861c4e30b916e750f16fab5652ed5937/package_list_861c4e30b916e750f16fab5652ed5937 home/joshua/.cache/sessions/xfwm4-23e853443-fb4b-42fd-aa61-33fa99fdc12c.state home/joshua/.cache/sessions/xfce4-session-athena:0 home/joshua/.config/abiword/profile
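
    One hedged thing to check, suggested by the xfwm4/xfce4-session entries in that list: Xfce's own session saving (separate from the autostart folder) stores the apps that were running, and a saved session that still contains Gimp will relaunch it at every login.
    $ ls ~/.cache/sessions/          # saved session files live here
    $ rm -f ~/.cache/sessions/*      # clear the saved session (do this with session-managed apps closed)
    Then, in Settings Manager -> Session and Startup -> General, make sure "Automatically save session on logout" is unticked (the exact wording may differ slightly between Xfce versions) so the session is not re-saved with Gimp in it.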

    Read the article

  • 11gr2 DataGuard: Restarting DUPLICATE After a Failure

    - by rene.kundersma
    One of the great new features that comes in very handy as databases get larger and larger these days is RMAN's capability to duplicate from an active database and even restart a duplicate when it fails. Imagine the problem I had lately: I used the duplicate-from-active-database feature and had to wait about six hours before all datafiles were transferred. At the end of the process an error occurred because of the syntax. While this error was easy to solve, I was afraid I had to redo the complete procedure and transfer the 2.5 TB again. Well, 11gR2 RMAN surprised me when I re-ran my command with the following output:
    Using previous duplicated file +DATA/fin2prod/datafile/users.2968.719237649 for datafile 12 with checkpoint SCN of 183289288148
    Using previous duplicated file +DATA/fin2prod/datafile/users.2703.719237975 for datafile 13 with checkpoint SCN of 183289295823
    Above I show only a small snippet, but what happened is that RMAN smartly skipped all files that were already transferred! The documentation says this: RMAN automatically optimizes a DUPLICATE command that is a repeat of a previously failed DUPLICATE command. The repeat DUPLICATE command notices which datafiles were successfully copied earlier and does not copy them again. This applies to all forms of duplication, whether they are backup-based (with and without a target connection) or active database duplication. The automatic optimization of the DUPLICATE command can be especially useful when a failure occurs during the duplication of very large databases. If a DUPLICATE operation fails, you need only run the DUPLICATE again, using the same parameters contained in the original DUPLICATE command. Please see chapter 23 of the 11g Release 2 Database Backup and Recovery User's Guide for more details. By the way, be very careful with the duplicate command: a small mistake in one of the 'convert' parameters can potentially overwrite your target's controlfile without prompting! Rene Kundersma, Technical Architect, Oracle Technology Services

    Read the article

  • tcpdump selective acknowledgements question

    - by wlaus
    Hi all, I sometimes watch initial TCP connection attempts like this:
    tcpdump -nn -Z somepcapuser not src host (12x.x5.109.xxx or 62.75.160.xxx ) and not (port 9001 or 443 or 8080 ) and tcp[tcpflags]&(tcp-syn) !=0 and not tcp[tcpflags]& (tcp-ack) !=0 or icmp
    This works pretty well to quickly identify oddness so far. However, I now have a question about the following output:
    03:53:52.227884 IP 203.81.166.20.53786 > 62.75.160.xxx.80: S 846930886:846930886(0) win 61690 <mss 1460,nop,nop,sackOK,opt-178:f04700000000,nop,wscale 4>
    I wonder what the marked portion means; I haven't seen that before. Thanks for the help, wlaus
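
    A hedged note, in case the puzzling part is the opt-178 field: tcpdump prints TCP options it cannot decode as opt-<kind>:<hex bytes>, so opt-178:f04700000000 would simply be an unrecognised TCP option of kind 178 carrying those six bytes - not a SACK block (sackOK on the SYN only advertises SACK support). Kind 178 is not an IANA-assigned option, which usually points at a middlebox or an unusual vendor TCP stack on the sender's side. To grab the raw bytes of such SYNs for a closer look:
    $ tcpdump -nn -vvv -X 'tcp[tcpflags] & tcp-syn != 0 and src host 203.81.166.20'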

    Read the article

  • 1Tb disk formatted on Linux won't mount on windows nor mac

    - by Pedro MC
    I have an external HD (Western Digital) with 1 TB. I use Linux, but I wanted to reserve a cross-platform partition on the disk. I decided to create two partitions and used the "Disks" application to do it: one partition with LUKS (version 1) encryption, and the other one, cross-platform, with an NTFS filesystem. Things work fine on my OS, but when I try to use the disk (the cross-platform partition) on both Windows and Mac the device is not recognized. What could it be? Next, the output of "sfdisk -l /dev/sdb":
    Disk /dev/sdb: 121600 cylinders, 255 heads, 63 sectors/track
    Units = cylinders of 8225280 bytes, blocks of 1024 bytes, counting from 0
       Device Boot  Start      End    #cyls     #blocks  Id  System
    /dev/sdb1           0+  36473-   36473-  292968750   83  Linux
    /dev/sdb2       36473+ 121600-   85128-  683789062+  83  Linux
    /dev/sdb3           0        -       0           0    0  Empty
    /dev/sdb4           0        -       0           0    0  Empty
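
    A hedged fix sketch: the sfdisk listing shows partition 2 typed 83 (Linux) even though it holds NTFS, and Windows generally ignores partitions whose MBR type it does not expect (type 7 is used for NTFS/exFAT), so changing only the partition type - the data itself is untouched - is worth trying:
    $ sudo fdisk /dev/sdb            # then: t, select partition 2, enter type 7, w to write
    # with a recent util-linux (assumption: sfdisk >= 2.26) the non-interactive form is:
    $ sudo sfdisk --part-type /dev/sdb 2 7
    After re-plugging the disk, Windows and macOS should then see the NTFS partition.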

    Read the article

  • Diagnosing xmodmap errors

    - by intuited
    I'm getting this error when trying to use xmodmap to get rid of Caps Lock:
    $ xmodmap -e 'clear Lock'
    X Error of failed request: BadValue (integer parameter out of range for operation)
    Major opcode of failed request: 118 (X_SetModifierMapping)
    Value in failed request: 0x17
    Serial number of failed request: 8
    Current serial number in output stream: 8
    I'm running Xfce on Maverick "10.10" Meerkat. This problem did not occur before I added the Keyboard Layouts applet to a panel; before doing that, I was able to run my xmodmap script to swap Esc and Caps Lock:
    !Remap Caps_Lock as Escape
    remove Lock = Caps_Lock
    keysym Caps_Lock = Escape
    It may be relevant that I chose Alt-CapsLock as the keyboard switch combo in the Keyboard Layouts preferences. I've had a similar problem before, on a different machine, running Openbox. On that machine, this problem started when I upgraded to Lucid, and has persisted in Maverick (release 10.10). I reported a bug in xorg. However, it remains unclear whether it's really a problem with xorg, or if I'm just doing something wrong with my configuration. Have other people experienced this problem? Can someone shed some light on what's going on here? It seems there are quite a few layers involved, and I don't understand any of them particularly well, so any information would be helpful.
    UPDATE: I've discovered that the problem is specifically triggered by adding the Canada layout variant "Multilingual" (ca-multix). If I instead add the variant "Multilingual (first part)", the problem does not occur. I think this will probably end up being a usable workaround, but I don't yet know what the difference between these variants is. I've filed a freedesktop issue, and am commenting on a related Ubuntu issue.
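
    A hedged workaround that sidesteps xmodmap entirely: the same Caps Lock/Escape swap can be expressed as an XKB option, which avoids the X_SetModifierMapping request that is failing here:
    $ setxkbmap -query                        # see the current layout and options
    $ setxkbmap -option caps:swapescape       # swap Caps Lock and Escape
    $ setxkbmap -option caps:escape           # or: make Caps Lock an additional Escape
    XKB options also tend to survive layout switching better than xmodmap tweaks do, though how they interact with the Keyboard Layouts applet in this setup is an assumption worth testing.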

    Read the article

  • gpupdate failing when using Samba 4 AD DC

    - by darthfoolish
    I have a Samba 4 AD domain running with 2 DCs on CentOS 6.5, with a named DNS backend. I have multiple Windows 7 machines joined to this domain, which is fine. However, I can't get GPOs to apply. When running gpupdate, I get the following output:
    The processing of Group Policy failed. Windows attempted to read the file \\<domain>\sysvol\<domain>\Policies\{31B2F340-016D-11D2-945F-00C04FB984F9}\gpt.ini
    Obviously, you don't normally see what it's trying to connect to when it's successful, but I would have thought that where <domain> shows up, I should be seeing my domain name. So, what governs what data gets put in between those angle brackets? If it is just supposed to be the domain, then what else could be going wrong? Thanks in advance for any help.
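
    A hedged checklist for this symptom on Samba 4 DCs (host and domain names below are placeholders):
    $ smbclient //dc1.example.lan/sysvol -U administrator -c 'ls Policies'   # is sysvol shared and readable?
    $ samba-tool gpo listall                                                 # do the GPO GUIDs exist in the directory?
    $ host -t SRV _ldap._tcp.example.lan                                     # do clients resolve the domain's SRV records?
    Failure to read gpt.ini is most often a sysvol permissions or replication problem on Samba 4; if the share is reachable but access is denied, samba-tool ntacl sysvolreset (run on the DC) rebuilds the sysvol ACLs.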

    Read the article
