Search Results

Search found 27870 results on 1115 pages for 'standard output'.

Page 129 of 1115

  • Block Skype on Cisco IOS

    - by ensnare
    I'm trying to block Skype via policy routing, but it's not working. Here's my configuration:

        class-map match-any block
         match protocol skype
        policy-map QoS-Priority-Input
         class block
          police 1000000 31250 31250 conform-action drop exceed-action drop violate-action drop
        policy-map QoS-Priority-Output
         class block
          police 1000000 31250 31250 conform-action drop exceed-action drop violate-action drop
        interface FastEthernet4
         description WAN
         service-policy input QoS-Priority-Input
         service-policy output QoS-Priority-Output

    Read the article

  • DoS/Flood Lag even though Port not Saturated

    - by Asad Moeen
    My game servers had been under some UDP floods, during which they generated output back towards the attacker, which in turn gave the servers huge lag. Thanks to friends at Server Fault and a lot of testing, I was able to block the attack successfully. My question is about something else, but it helps to know how the game servers reacted to the attack and whether the machine stayed stable.

    A 300 kb/s input would cause a game server to generate about 2 Mb/s of output, so as the input rate increased, the output rate climbed until the game server could no longer cope, and it lagged badly until the attack stopped. Usually a game server starts to lag once it sends out more than about 5 Mb/s; below that it stays controllable. Theoretically I could get 60 Mb/s of output from a game server by feeding it 10 Mb/s of input; that is just how these servers behave when unprotected.

    On some of my machines, only the game server under attack lagged: even while it was generating 60 Mb/s of output, the other game servers on other ports of the same machine ran fine. But there is another machine, also on a 100 Mbps network port, where even 1 Mbps of input (and zero output, because the attack is blocked), even on an unused port, gives a constant yellow line on the lag-o-meter to all clients on all game servers, indicating lag; under normal conditions that line is blue. It stays the same at 50 Mbps or 900 Mbps of input. I contacted the host about it because I suspect it is the way their network is bridged, but they can't help me. Has anyone seen such issues? If 900 Mbps of input does not saturate the port, how can 1 Mbps of input lag the servers when the port is not saturated and plenty of bandwidth is available?

    Read the article

  • echo newline character not working in bash

    - by Bashuser
    I have a bash script with lots of echo statements, and I have aliased echo to echo -e in both .bash_profile and .bashrc so that newlines are printed properly. For a statement like

        echo 'Hello\nWorld'

    the output should be

        Hello
        World

    but the output I am getting is

        Hello\nWorld

    I even tried using shopt -s expand_aliases in the script; it doesn't help. I am running my script as bash /scripts/scriptnm.sh; if I run it as . /scripts/scriptnm.sh I get the desired output.
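
    A minimal sketch of a workaround (the script content here is assumed): aliases are generally not expanded in non-interactive shells, so inside a script it is more robust to avoid them entirely and use printf, or spell out echo -e explicitly:

        #!/bin/bash
        # printf interprets backslash escapes itself, no alias or -e flag needed
        printf 'Hello\nWorld\n'
        # or call echo with the flag instead of relying on an alias:
        echo -e 'Hello\nWorld'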

    Read the article

  • Process ANSI escape codes before piping

    - by Tiddo
    I'm trying to pipe the output of a script (Mocha) to another script. However, there is one problem: Mocha generates quite a few ANSI escape sequences to update the screen on the fly, and these characters are also sent through the pipe. Is there a way to process the ANSI sequences so that the piped output matches the final output on the screen? I do want to keep the colour escape sequences, but not the cursor-movement escapes.

    Edit: I have a partial solution now (for Mocha only). So far it seems that Mocha with the spec reporter (the one I use) only generates colour escape sequences and the CSI 0G sequence. CSI 0G means "move the cursor back to the beginning of the line", which Mocha uses to overwrite a line completely. Therefore you can simply use a sed expression that deletes everything up to that escape sequence on a line:

        sed 's/^.*\x1b\[0G//g'

    I am still looking for a complete solution, though.
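
    A slightly more general sketch (GNU sed assumed, the downstream script name is a placeholder, and the exact set of sequences Mocha emits may vary): strip CSI sequences that end in cursor-movement or erase commands while keeping colour codes, which end in m:

        # drop sequences ending in G/K/A-D (cursor moves, line erase); keep ...m (colours)
        mocha --reporter spec 2>&1 | sed -u 's/\x1b\[[0-9;]*[GKABCD]//g' | ./other-script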

    Read the article

  • Windows not remembering default audio device?

    - by Lynda
    I prefer the audio output on my computer to use the standard audio jack due to volume issues, but I am using a monitor connected over HDMI. I have set the default audio device to "Speakers", yet every time I reboot, the default audio device is the HDMI output again. I am running Windows 7 64-bit. Why does it not remember the default device? (I do shut down and boot up properly, without errors.)

    Read the article

  • Is your TRY worth catching?

    - by Maria Zakourdaev
    The very useful TRY/CATCH error-handling construct is widely used to catch execution errors that do not close the database connection. Its biggest downside is that when multiple errors are raised, the TRY/CATCH mechanism catches only the last one. An example of this can be seen during a standard restore operation: if I attempt to restore from a file that no longer exists, two errors are fired, 3201 and 3013, yet inside a TRY/CATCH block the ERROR_MESSAGE() function returns the last message only.

    To work around this you can prepare a temporary table that will receive the statement output, execute the statement through the xp_cmdshell stored procedure, connect back to SQL Server using the command-line utility sqlcmd, and redirect its output into the previously created temp table. After receiving the output you need to parse it to determine whether the statement finished successfully or failed. That is quite easy as long as you know which statement was executed; for generic executions you can query the output table and search for text like "Msg%Level%State%", which is usually part of the error message.

    Furthermore, you don't need TRY/CATCH in this workaround, since xp_cmdshell itself always finishes successfully and you can decide whether or not to fire a RAISERROR statement. Yours, Maria
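
    A rough illustration of the sqlcmd side of that workaround, run from a shell where sqlcmd is available (server, database and file names here are made up): capture everything the statement prints, then scan the capture for error lines, which will include every raised error rather than only the last one:

        sqlcmd -S MYSERVER -E -Q "RESTORE DATABASE MyDb FROM DISK = 'C:\backup\missing.bak'" > restore_output.txt 2>&1
        # the capture keeps every message, e.g. both Msg 3201 and Msg 3013
        grep -E 'Msg [0-9]+, Level [0-9]+, State [0-9]+' restore_output.txt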

    Read the article

  • Suppress EXT3-fs warning on mount

    - by STM
    I am familiar with output suppression on Unix machines, e.g.:

        cat /file/that/doesnt/exist > /dev/null 2>&1

    However, I can't seem to suppress the output of mount when an ext3 filesystem is mounted for the nth time and it recommends an fsck. As it happens, fscks are run regularly from another machine, so these warning messages needlessly interrupt the flow of output in my pretty bash script. These are the messages:

        # mount -t ext3 /dev/sda1 /mnt > /dev/null 2>&1
        kjournald starting.  Commit interval 5 seconds
        EXT3-fs warning: maximal mount count reached, running e2fsck is recommended
        EXT3 FS 2.4-0.9.19, 19 August 2002 on sd(8,1), internal journal
        EXT3-fs: mounted filesystem with ordered data mode.

    Can anyone shed some light on this? I'm clearly redirecting both fds, but somehow output is still getting through. This is GNU Bash v2.05a.
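
    A possible explanation and workaround (a sketch; device names are examples): those lines are kernel messages printed straight to the console by the ext3 driver, not output written to mount's stdout or stderr, so redirecting fds 1 and 2 cannot hide them. You can lower the console log level, or stop ext3 from warning about the mount count at all:

        dmesg -n 1                      # only emergency-level kernel messages reach the console
        mount -t ext3 /dev/sda1 /mnt > /dev/null 2>&1
        tune2fs -c 0 -i 0 /dev/sda1     # disable the maximal-mount-count and interval checks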

    Read the article

  • Ubuntu Postfix email account with forward

    - by Mika
    I have an Ubuntu 12.04 server with Postfix installed. For the installation I followed this guide: https://help.ubuntu.com/community/Postfix. I didn't go through all of it, just the sudo dpkg-reconfigure postfix part. I have created user accounts on my server, and each user's home directory contains a .forward file with a single line: the email address to forward to. I have defined DNS A records for the names www.mydomain.com and mydomain.com. But if I send an email to [email protected], it doesn't get forwarded; in fact I can't see any sign of any email ever reaching my server.

    My firewall allows incoming traffic on ports 80, 443 and 22, and outgoing traffic on ports 587 and 22. The exact rules are below. Should I also allow outgoing HTTP (port 80), or maybe port 25?

        # Allow ssh in
        iptables -A INPUT -i eth0 -p tcp --dport 22 -m state --state NEW,ESTABLISHED -j ACCEPT
        iptables -A OUTPUT -o eth0 -p tcp --sport 22 -m state --state ESTABLISHED -j ACCEPT
        # Allow incoming HTTP
        iptables -A INPUT -i eth0 -p tcp --dport 80 -m state --state NEW,ESTABLISHED -j ACCEPT
        iptables -A OUTPUT -o eth0 -p tcp --sport 80 -m state --state ESTABLISHED -j ACCEPT
        # Allow incoming HTTPS
        iptables -A INPUT -i eth0 -p tcp --dport 443 -m state --state NEW,ESTABLISHED -j ACCEPT
        iptables -A OUTPUT -o eth0 -p tcp --sport 443 -m state --state ESTABLISHED -j ACCEPT
        # Allow outgoing SSH
        iptables -A OUTPUT -o eth0 -p tcp --dport 22 -m state --state NEW,ESTABLISHED -j ACCEPT
        iptables -A INPUT -i eth0 -p tcp --sport 22 -m state --state ESTABLISHED -j ACCEPT
        # Allow outgoing emails
        iptables -A OUTPUT -o eth0 -p tcp --dport 587 -m state --state NEW,ESTABLISHED -j ACCEPT
        iptables -A INPUT -i eth0 -p tcp --sport 587 -m state --state ESTABLISHED -j ACCEPT

    Edit: I found lines in my syslog showing that incoming traffic to port 25 was being blocked. The sender IPs for those packets were trustworthy, so I opened port 25 as well. Now I can see some Postfix logging in my syslog, and it looks like it is at least trying to forward emails, but I haven't yet received any forwarded emails in my Gmail mailbox.
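
    For completeness, a sketch of the rules that let mail flow in both directions (inbound SMTP is port 25, and outbound port 25 is also needed so Postfix can relay the forwarded mail onward to the destination mailbox):

        # Allow incoming SMTP so other mail servers can deliver to Postfix
        iptables -A INPUT  -i eth0 -p tcp --dport 25 -m state --state NEW,ESTABLISHED -j ACCEPT
        iptables -A OUTPUT -o eth0 -p tcp --sport 25 -m state --state ESTABLISHED -j ACCEPT
        # Allow outgoing SMTP so Postfix can forward mail on (e.g. to Gmail)
        iptables -A OUTPUT -o eth0 -p tcp --dport 25 -m state --state NEW,ESTABLISHED -j ACCEPT
        iptables -A INPUT  -i eth0 -p tcp --sport 25 -m state --state ESTABLISHED -j ACCEPT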

    Read the article

  • How to convert an MKV to AVI with minimal loss

    - by Linux Jedi
    To convert an MKV to AVI, I do two things. The first is this:

        ffmpeg -i filename.mkv -vcodec copy -acodec copy output.avi

    This converts the MKV to an AVI, but the problem is that the video does not play smoothly for some reason. That's fine, because if I do one more step it gets fixed:

        ffmpeg -i output.avi -vcodec mpeg4 -b 4000k -acodec mp2 -ab 320k converted.avi

    After this the file plays without problems. I had success doing it this way for one file, but then I tried it on another file and there is a slight but noticeable loss in video quality. This is the output I get when doing the second step:

        FFmpeg version 0.6.1, Copyright (c) 2000-2010 the FFmpeg developers
          built on Dec 29 2010 18:02:10 with gcc 4.2.1 (Apple Inc. build 5664)
          configuration:
          libavutil     50.15. 1 / 50.15. 1
          libavcodec    52.72. 2 / 52.72. 2
          libavformat   52.64. 2 / 52.64. 2
          libavdevice   52. 2. 0 / 52. 2. 0
          libswscale     0.11. 0 /  0.11. 0
        Seems stream 0 codec frame rate differs from container frame rate: 359.00 (359/1) -> 29.92 (359/12)
        Input #0, avi, from 'output.avi':
          Metadata:
            ISFT            : Lavf52.64.2
          Duration: 00:04:17.21, start: 0.000000, bitrate: 3074 kb/s
            Stream #0.0: Video: mpeg4, yuv420p, 704x480 [PAR 229:189 DAR 5038:2835], 29.92 fps, 29.92 tbr, 29.92 tbn, 359 tbc
            Stream #0.1: Audio: vorbis, 48000 Hz, stereo, s16
        Output #0, avi, to 'nidome_no_kanojo.avi':
          Metadata:
            ISFT            : Lavf52.64.2
            Stream #0.0: Video: mpeg4, yuv420p, 704x480 [PAR 229:189 DAR 5038:2835], q=2-31, 4000 kb/s, 29.92 tbn, 29.92 tbc
            Stream #0.1: Audio: mp2, 48000 Hz, stereo, s16, 320 kb/s
        Stream mapping:
          Stream #0.0 -> #0.0
          Stream #0.1 -> #0.1

    I just used arbitrarily large settings on the second step; it worked nicely before, but not in this case. What settings should I use?
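
    One possibility to try (a sketch, not a definitive answer): instead of a fixed 4000k bitrate, give the MPEG-4 encoder a constant quality target, which avoids starving complex scenes of bits:

        # qscale 2 is close to visually lossless for MPEG-4 ASP; higher values shrink the file
        ffmpeg -i output.avi -vcodec mpeg4 -qscale 2 -acodec mp2 -ab 320k converted.avi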

    Read the article

  • Use of c89 in GNU software

    - by Federico Culloca
    The GNU coding standards say that free software developers should use C89, because C99 is not widespread yet:

        1999 Standard C is not widespread yet, so please do not require its features in programs.

    (Reference here.) Are they talking about developers' knowledge of C99, or about compilers supporting it? Also, is this statement still plausible today, or is it somewhat "obsolete", or at least obsolescent?

    Read the article

  • Unable to mount external hard drive - Damaged file system and MFT

    - by Khalifa Abbas Lame
    I get the following error when I try to mount my external hard drive:

        UNABLE TO MOUNT
        Error mounting /dev/sdc1 at /media/khalibloo/Khalibloo2:
        Command-line `mount -t "ntfs" -o "uhelper=udisks2,nodev,nosuid,uid=1000,gid=1000,dmask=0077,fmask=0177" "/dev/sdc1" "/media/khalibloo/Khalibloo2"' exited with non-zero exit status 13:
        ntfs_attr_pread_i: ntfs_pread failed: Input/output error
        Failed to read of MFT, mft=6 count=1 br=-1: Input/output error
        Failed to open inode FILE_Bitmap: Input/output error
        Failed to mount '/dev/sdc1': Input/output error
        NTFS is either inconsistent, or there is a hardware fault, or it's a SoftRAID/FakeRAID hardware. In the first case run chkdsk /f on Windows then reboot into Windows twice. The usage of the /f parameter is very important! If the device is a SoftRAID/FakeRAID then first activate it and mount a different device under the /dev/mapper/ directory, (e.g. /dev/mapper/nvidia_eahaabcc1). Please see the 'dmraid' documentation for more details.

    It doesn't mount on Windows either: "I/O Device error". It's an NTFS hard drive with a single partition.

    Of course, I tried chkdsk /f. It reported several file segments as unreadable, but didn't say whether it fixed them or not (apparently not). I also tried it with the /b flag. ntfsfix reported the volume as corrupt. TestDisk was able to fix a small error with the partition table by adding the "80" flag for the active (only) partition. TestDisk also confirmed that the boot sector is fine and matches the backup. However, when attempting to repair the MFT, it couldn't read the MFT; it also couldn't list the files on the hard drive, and it says the file system may be damaged. Active@ also shows that the MFT is missing or corrupt. So how do I fix the file system, or the MFT?
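
    Given the repeated Input/output errors, a cautious sketch of a recovery approach (device and file names are examples, and success is not guaranteed): image the partition first with a tool that tolerates bad sectors, then run the NTFS repair and file-recovery tools against the image rather than the failing disk:

        ddrescue -n /dev/sdc1 khalibloo2.img khalibloo2.map   # copy what is readable, skip bad areas
        ntfsfix khalibloo2.img                                # or point testdisk / photorec at the image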

    Read the article

  • How to convert an MKV to AVI with minimal loss

    - by OSX NINJA
    To convert an MKV to AVI, I do two things. The first is this:

        ffmpeg -i filename.mkv -vcodec copy -acodec copy output.avi

    or this:

        ffmpeg -i filename.mkv -sameq -acodec copy output.avi

    Either of these converts the MKV to an AVI, but the problem is that the video does not play smoothly for some reason. That's fine, though, because if I do one more step it gets fixed:

        ffmpeg -i output.avi -vcodec mpeg4 -b 4000k -acodec mp2 -ab 320k converted.avi

    After this the file plays without problems. I had success doing it this way for one file, but then I tried it on another file and there is a slight but noticeable loss in video quality. This is the output I get when doing the second step:

        FFmpeg version 0.6.1, Copyright (c) 2000-2010 the FFmpeg developers
          built on Dec 29 2010 18:02:10 with gcc 4.2.1 (Apple Inc. build 5664)
          configuration:
          libavutil     50.15. 1 / 50.15. 1
          libavcodec    52.72. 2 / 52.72. 2
          libavformat   52.64. 2 / 52.64. 2
          libavdevice   52. 2. 0 / 52. 2. 0
          libswscale     0.11. 0 /  0.11. 0
        Seems stream 0 codec frame rate differs from container frame rate: 359.00 (359/1) -> 29.92 (359/12)
        Input #0, avi, from 'output.avi':
          Metadata:
            ISFT            : Lavf52.64.2
          Duration: 00:04:17.21, start: 0.000000, bitrate: 3074 kb/s
            Stream #0.0: Video: mpeg4, yuv420p, 704x480 [PAR 229:189 DAR 5038:2835], 29.92 fps, 29.92 tbr, 29.92 tbn, 359 tbc
            Stream #0.1: Audio: vorbis, 48000 Hz, stereo, s16
        Output #0, avi, to 'converted.avi':
          Metadata:
            ISFT            : Lavf52.64.2
            Stream #0.0: Video: mpeg4, yuv420p, 704x480 [PAR 229:189 DAR 5038:2835], q=2-31, 4000 kb/s, 29.92 tbn, 29.92 tbc
            Stream #0.1: Audio: mp2, 48000 Hz, stereo, s16, 320 kb/s
        Stream mapping:
          Stream #0.0 -> #0.0
          Stream #0.1 -> #0.1

    I just used arbitrarily large settings on the second step; it worked nicely before, but not in this case. What settings should I use?

    Read the article

  • grep command is not searching the complete pattern

    - by Sumit Vedi
    I am facing a problem while using the grep command in a shell script. I have a file (PCF_STARHUB_20130625_1) which contains the records below:

        SH_5.55916.00.00.100029_20130601_0001_NUC.csv.gz|438|3556691115
        SH_5.55916.00.00.100029_20130601_0001_Summary.csv.gz|275|3919504621
        SH_5.55916.00.00.100029_20130601_0001_UI.csv.gz|226|593316831
        SH_5.55916.00.00.100029_20130601_0001_US.csv.gz|349|1700116234
        SH_5.55916.00.00.100038_20130601_0001_NUC.csv.gz|368|3553014997
        SH_5.55916.00.00.100038_20130601_0001_Summary.csv.gz|276|2625719449
        SH_5.55916.00.00.100038_20130601_0001_UI.csv.gz|226|3825232121
        SH_5.55916.00.00.100038_20130601_0001_US.csv.gz|199|2099616349
        SH_5.75470.00.00.100015_20130601_0001_NUC.csv.gz|425|1627227450

    I have a pattern stored in a variable (INPUT_FILE_T), and I want to search for it in the file (PCF_STARHUB_20130625_1). For that I used:

        INPUT_FILE_T="SH?*???????????????US.*"
        grep ${INPUT_FILE_T} PCF_STARHUB_20130625_1

    The output of the above command comes out as:

        PCF_STARHUB_20130625_1:SH_5.55916.00.00.100029_20130601_0001_US.csv.gz|349|1700116234

    I have two problems with this output: first, only one entry is shown (there should be two), and second, the output contains "PCF_STARHUB_20130625_1:", which should not appear. The output should look like:

        SH_5.55916.00.00.100029_20130601_0001_US.csv.gz|349|1700116234
        SH_5.55916.00.00.100038_20130601_0001_US.csv.gz|199|2099616349

    If there is any technique besides grep, please let me know. Please help me with this issue.
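
    A likely cause and a sketch of a fix (the exact regex below is an assumption about the intended match): the unquoted ${INPUT_FILE_T} is a shell wildcard, so the shell expands it against matching filenames in the current directory before grep runs; grep then receives one of those filenames as its pattern and the rest as extra file arguments, which is why only one line matches and why the file-name prefix appears. Quote the variable and express the pattern as a regular expression:

        INPUT_FILE_T='^SH_.*_US\.csv\.gz'
        grep "${INPUT_FILE_T}" PCF_STARHUB_20130625_1
        # prints both ..._US.csv.gz lines, without the "PCF_STARHUB_20130625_1:" prefix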

    Read the article

  • What relationship do software Scrum or Lean have to industrial engineering concepts like theory of constraints?

    - by DeveloperDon
    In Scrum, work is delivered to customers through a series of sprints in which project work is time-boxed to a fixed number of days or weeks, usually 30 days. In lean software development, the goal is to deliver as soon as possible, permitting early feedback for the next iteration. Both techniques stress the importance of a workflow in which software work product does not accumulate in development awaiting release at some future date. Both permit new or refined requirements and feedback from QA and customers to be acted on with as little delay as possible, based on priority.

    A few years ago I heard a lecture in which the speaker talked briefly about a family of concepts from industrial engineering called the theory of constraints. In the factory, they use an operations model based on three components: drum, buffer, and rope. The drum synchronizes work product as it flows through the system. Buffers protect the system by holding output from one stage as it waits to be consumed by the next. The rope pulls product from one work station to the next.

    Historically, are these ideas part of the heritage of Scrum and Lean, or are they on a separate track? If we wanted to think about Scrum and Lean in terms of drum-buffer-rope, what are the parts? Drum = {daily scrum meeting, monthly release}? Buffer = {burn-down list, source control system}? Rope = {daily meeting, continuous integration server, monthly releases}?

    Industrial engineers define workflow in terms of different kinds of factories: I-factories (a straight pipeline, one input, one output), A-factories (many inputs, one output), V-factories (one input, many output products), and T-plants (many inputs, many outputs). If it applies, what kind of factory is most like Scrum or Lean, and why?

    Read the article

  • python reports socket in use, netstat and others claim it's not

    - by captainmish
    We have a strange socket issue on a RHES3 box:

        Python 2.4.1 (#1, Jul 5 2005, 19:17:11)
        [GCC 3.2.3 20030502 (Red Hat Linux 3.2.3-52)]
        Type "help", "copyright", "credits" or "license" for more information.
        >>> import socket
        >>> s = socket.socket()
        >>> s.bind(('localhost',12351))
        Traceback (most recent call last):
          File "<stdin>", line 1, in ?
          File "<string>", line 1, in bind
        socket.error: (98, 'Address already in use')

    This seems normal; let's see what has that socket:

        # netstat -untap | grep 12351
        {no output}
        # grep 12351 /proc/net/tcp
        {no output}
        # lsof | grep 12351
        {no output}
        # fuser -n tcp 12351
        {no output, repeating the python test fails again}
        # nc localhost 12351
        {no output}
        # nmap localhost 12351
        {shows port closed}

    Other high ports work fine (e.g. 12352 works). Is there something magic about this port? Is there somewhere else I can look? Where does Python find out that the socket is in use when netstat doesn't know about it? Is there any other way I can find out what that socket is?
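
    A few extra things to check (a sketch of diagnostics, not a definitive explanation): /proc/net/tcp lists ports in hexadecimal (12351 is 303F), so grepping for the decimal value finds nothing even when an entry exists, and the reservation may live in another address family or in a TIME_WAIT-style state that SO_REUSEADDR can bypass:

        grep -i ':303F' /proc/net/tcp /proc/net/tcp6 /proc/net/udp 2>/dev/null
        python -c 'import socket; s = socket.socket(); s.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1); s.bind(("localhost", 12351)); print "bound ok"'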

    Read the article

  • How to start/stop iptables in Ubuntu 12.04?

    - by imwrng
    I am using Ubuntu 12.04. While learning some new things about iptables I can't get through this. The rules list fine:

        root@badfox:~# iptables -L -n -v
        Chain INPUT (policy ACCEPT 0 packets, 0 bytes)
         pkts bytes target     prot opt in     out     source               destination
        Chain FORWARD (policy ACCEPT 0 packets, 0 bytes)
         pkts bytes target     prot opt in     out     source               destination
        Chain OUTPUT (policy ACCEPT 0 packets, 0 bytes)
         pkts bytes target     prot opt in     out     source               destination

    but when I try to start or stop the service, it says:

        root@badfox:~# service iptables stop
        iptables: unrecognized service
        root@badfox:~# service iptables start
        iptables: unrecognized service

    Source: http://www.cyberciti.biz/tips/linux-iptables-examples.html

    Why am I getting this?

    EDIT: So my firewall is already started, but why am I not getting the output shown in the first workout at the source link above? Here is my output:

        root@badfox:~# sudo start ufw
        start: Job is already running: ufw
        root@badfox:~# iptables -L -n -v
        Chain INPUT (policy ACCEPT 4882 packets, 2486K bytes)
         pkts bytes target     prot opt in     out     source               destination
        Chain FORWARD (policy ACCEPT 0 packets, 0 bytes)
         pkts bytes target     prot opt in     out     source               destination
        Chain OUTPUT (policy ACCEPT 5500 packets, 873K bytes)
         pkts bytes target     prot opt in     out     source               destination
        root@badfox:~#
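
    For context, a sketch of how this is usually handled on Ubuntu (the saved-rules path is only an example): there is no "iptables" init service on 12.04; rules are either managed through ufw or saved and restored by hand:

        sudo ufw status verbose                              # the ufw front end Ubuntu ships
        sudo sh -c 'iptables-save > /etc/iptables.rules'     # persist the current rules
        sudo iptables-restore < /etc/iptables.rules          # load them again later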

    Read the article

  • passing a font as an argument to a script

    - by josinalvo
    I am trying to use osdSH for notifications. It has a 'font' parameter that receives a curiously formed string. From the man page:

        -f -font   Set font (Default: -*-lucidatypewriter-bold-*-*-*-*-240-*-*-*-*-*-*)

    The manual does not comment on the fields of that string (I assume each * represents a possible value). This notation seems to be (or to have been) standard, but I've not been able to find anything about it. What is the standard? Which field specifies the letter size?

    Read the article

  • Ubuntu: move logs from /dev/tty8 to a different terminal /dev/tty12, or get rid of it

    - by Casual Coder
    I want to know how to move, or get rid of, the /dev/tty8 log output in Ubuntu 9.10. /dev/tty7 is my regular X session. When I switch user to a test account where I can try and test setups and configs, I end up on the next available console, i.e. /dev/tty9, because /dev/tty8 is taken by log output. Where can I configure this?

    All I've found related to /dev/tty8 are commented-out lines in /etc/rsyslog.d/50-default.conf. I changed them like this:

        daemon,mail.*;\
            news.=crit;news.=err;news.=notice;\
            *.=debug;*.=info;\
            *.=notice;*.=warn       /dev/tty12

    and I've got nice log output on /dev/tty12, but where is the configuration for the log output on /dev/tty8? How can I change it?

    Read the article

  • Gathering buslogic SCSI hardware and virtual machine operating system

    - by Julian
    I'm trying to use PowerShell to get the SCSI hardware from several virtual servers and the operating system of each specific server. I've managed to get the specific SCSI hardware that I want with my code; however, I'm unable to figure out how to properly get the operating system of each server. I'm also trying to send all the data I find into a CSV log file, but I'm unsure how to make a PowerShell script create multiple columns. Here is my code (it almost works, but something's wrong):

        $log = "C:\Users\me\Documents\Scripts\ScsiLog.csv"
        Get-VM | Foreach-Object {
          $vm = $_
          Get-ScsiController -VM $vm | Where-Object { $_.Type -eq "VirtualBusLogic" } | Foreach-Object {
            get-VMGuest -VM $vm
          } | Foreach-Object {
            Write-output $vm.Guest.VmName >> $log
          }
        }

    I don't receive any errors when I run this code, but I only get the names of the servers and not the OS. I'm also not sure what I need to do to make the OS appear in a different column from the server name in the CSV log I'm creating. What do I need to change to get the OS version of each virtual machine and output it in a different column in my CSV log file?

    EDIT: Here's a more in-depth look at things I've tried that have all failed:

        Get-VM | Foreach-Object {
          $vm = $_
          $svm = Get-ScsiController -VM $vm | Where-Object { $_.Type -eq "VirtualBusLogic" }
          Foreach-Object { get-VMGuest -VM $svm } | Foreach-Object { Write-output $svm >> $log }
        }

        #Get-VM | Foreach-Object {
        #  $vm = $_
        #  Get-ScsiController -VM $vm | Where-Object { $_.Type -eq "VirtualBusLogic"} #| write-host $vm
        #  | Foreach-Object {
        #    #get-VMGuest -VM $_ |
        #    #write-host $vm
        #    #get-VMGuest -VM $vm } | Foreach-Object{
        #    #write-output $vm.VmName >> $log
        #    #write-output $vm.guest.VmName, get-VmGuest -VM $vm >> $log NO GOOD
        #    Write-host $vm.Guest.VmName #+ get-vmGuest -vm $VM >> $log
        #  }
        #}

    I'm not sure why Get-VMGuest fails, though. I'm getting the SCSI hardware, filtering it down to BusLogic controllers, and then wanting to get the operating system of just the filtered VMs. I don't see where my code fails.

    Read the article

  • How to make gpg2 flush the stream?

    - by Vi
    I want to get some slowly flowing data saved in encrypted form on a device that can be turned off abruptly, but gpg2 seems not to flush its output frequently, and I get broken files when I try to read such a truncated file.

        vi@vi-notebook:~$ cat
        asdkfgmafl
        asdkfgmafl
        ggggg
        ggggg
        2342
        2342

    cat behaves normally: I see the output right after the input.

        vi@vi-notebook:~$ gpg2 -er _Vi --batch
        ?pE??x...(more binary data here)....???-??....
        asdfsadf
        22223
        sdfsdfasf
        Still no data... Still no output...
        ^C
        gpg: signal Interrupt caught ... exiting

        vi@vi-notebook:~$ gpg2 -er _Vi --batch > /tmp/qqq
        skdmfasldf
        gkvmdfwwerwer
        zfzdfdsfl
        ^\
        gpg: signal Quit caught ... exiting
        Quit
        vi@vi-notebook:~$ gpg2 < /tmp/qqq
        " 2048-bit ELG key, ID 78F446CA, created 2008-01-06 (main key ID 1735A052)
        gpg: [don't know]: 1st length byte missing
        vi@vi-notebook:~$ # Where is my "skdmfasldf"?

    How can I make gpg2 handle such a case? I want it to emit enough output to reconstruct each incoming chunk of input. (Also, fsyncing after each output would be beneficial as an additional option.) Should I use another tool? (I need public-key encryption.)
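
    One workaround to consider (a sketch under the assumption that losing only the chunk currently being written is acceptable; file paths are examples): encrypt the slow stream in small self-contained chunks, each a complete OpenPGP message in its own file, and sync after every chunk:

        n=0
        while IFS= read -r line; do
            printf '%s\n' "$line" | gpg2 -er _Vi --batch --output "/secure/log.$n.gpg"
            sync                          # make sure the finished chunk reaches the device
            n=$((n + 1))
        done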

    Read the article

  • Parallel Class/Interface Hierarchy with the Facade Design Pattern?

    - by Mike G
    About a third of my code is wrapped inside a Facade class. Note that this isn't a "God" class; it actually represents a single thing (called a Line), and naturally it delegates responsibilities to the subsystem behind it. What ends up happening is that two of the subsystem classes (Output and Timeline) have all of their methods duplicated in the Line class, which effectively makes Line both an Output and a Timeline. It seems to make sense to turn Output and Timeline into interfaces so that the Line class can implement them both.

    At the same time, I'm worried about creating parallel class and interface structures. There are different types of lines (AudioLine, VideoLine) which all use the same type of Timeline but different types of Output (AudioOutput and VideoOutput, respectively). That would mean I'd have to create an AudioOutputInterface and a VideoOutputInterface as well, so not only would I have a parallel class hierarchy, but a parallel interface hierarchy too. Is there any solution to this design flaw?

    Here's an image of the basic structure (minus the Timeline class, though note that each Line has-a Timeline). NOTE: I just realized that the word 'line' in Timeline might make it sound like it serves a similar function to the Line class. They don't; just to clarify.

    Read the article

  • when I type apt-get -f install, I get the error message

    - by gene
    When I run apt-get -f install, I get the error message:

        xserver-xorg-core (2:1.11.4-0ubuntu10.8) breaks xserver-xorg-video-5 and is installed.

    I also cannot upgrade my software; it says the package system is broken, with the detailed information:

        The following packages have unmet dependencies:
        xserver-xorg-core: Depends: xserver-common (>= 2:1.11.4-0ubuntu10.8) but 2:1.11.4-0ubuntu10.8 is installed

    When I issue sudo apt-get update, the output seems fine (the source is http://archive.ubuntu.com; sorry, the full output has too many links to post):

        Reading package lists... Done

    When I issue sudo apt-get dist-upgrade, the output is:

        Reading package lists... Done
        Building dependency tree
        Reading state information... Done
        You might want to run 'apt-get -f install' to correct these.
        The following packages have unmet dependencies:
         xserver-xorg-core : Breaks: xserver-xorg-video-5
        E: Unmet dependencies. Try using -f.

    When I issue sudo apt-get -f install, the output is:

        dpkg: dependency problems prevent configuration of xserver-xorg-video-radeon:
         xserver-xorg-core (2:1.11.4-0ubuntu10.8) breaks xserver-xorg-video-5 and is installed.
         xserver-xorg-video-radeon (1:6.12.1-0ubuntu2) provides xserver-xorg-video-5.
        dpkg: error processing xserver-xorg-video-radeon (--configure): dependency problems - leaving unconfigured
        No apport report written because the error message indicates it's a followup error from a previous failure.
        Errors were encountered while processing:
         xserver-xorg-video-radeon
        E: Sub-process /usr/bin/dpkg returned an error code (1)
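
    A typical recovery sequence for this kind of breakage (a sketch; it assumes the radeon driver package is the only thing stuck and that temporarily removing and reinstalling it is acceptable):

        sudo dpkg --remove --force-depends xserver-xorg-video-radeon   # get the broken package out of the way
        sudo apt-get -f install                                        # let apt settle the remaining dependencies
        sudo apt-get install --reinstall xserver-xorg-video-radeon     # put a consistent driver version back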

    Read the article

  • Announcing Key Functional White Papers for SIM and ReIM

    - by Oracle Retail Documentation Team
    Oracle Retail has published two new documents on My Oracle Support (https://support.oracle.com) that provide partners and retailers with deeper functional information about two products: Oracle Retail Store Inventory Management (SIM) and Oracle Retail Invoice Matching (ReIM).

    Oracle Retail Store Inventory Management Item Configuration White Paper (Doc ID 1507221.1)

    There is functionality within the Store Inventory Management system related to item configuration that spans multiple concepts which apply to the application as a whole rather than to a specific area. This white paper covers numerous topics around item configuration, including: Item Transaction Levels; Item Long Description; Pack Size; Standard Unit of Measure; Standard Unit of Measure Conversion; Pack Items; Simple Pack Conversion Items (Notional Packs); Ranging Items; Item Status; Non-Sellable Items; Type-2 Item Recognition; UPC-E Barcodes; Non-Inventory Items; Consignment and Concession Items; and Quick Response Codes.

    Oracle Retail Invoice Matching Financial Transactions (Doc ID 1500209.1)

    This document explains the financial transactions that are posted by Oracle Retail Invoice Matching (ReIM). The scope of the document is limited to ReIM transactions only; it does not explain Retail Merchandising System (RMS), Finance, or Accounts Receivable transactions. ReIM follows the double-entry accounting standard, which works by recording the debit and credit of each financial transaction for each party involved: each transaction means a profit to one account (debit) and a loss to another account (credit). Full invoice match processing is completed in ReIM, with payment recommendations communicated to Oracle Accounts Payable. ReIM matches merchandise orders and receipts against merchandise invoices, performing automated and manual matching as well as discrepancy-resolution processing. Matched invoices are posted to interface staging tables specifying the amount and date to pay, vendor, site ID, General Ledger Chart of Accounts (GL CoA) information, and payment terms. Other payables documents, including debit memos, credit memos and credit notes, are also interfaced to Accounts Payable through the ReIM staging tables (IM_AP_STAGE_HEAD and IM_AP_STAGE_DETAIL). For information about how ReIM engages in this processing, see the latest Oracle Retail Invoice Matching Operations Guide. Certain ReIM transactions are not interfaced to Oracle Payables but are instead interfaced to Oracle General Ledger through the IM_FINANCIAL_STAGE table. When analyzing transactions posted through the staging tables, retailers should note the transaction type (Standard/Credit) as well as the sign of the amount field: technically, a negative sign on a credit transaction turns it into a debit entry, and vice versa. This document is concerned with the financial meaning of the transactions and avoids a discussion of negative numbers in T-charts.

    Read the article

  • Reduce memory usage

    - by Flintoff
    I have just installed the standard default desktop configuration of Ubuntu 12.10 (Quantal Quetzal). My PC only has 1 GB of RAM and is struggling a little. What steps can I take to reduce the memory overhead of the standard install? If it makes a difference, I use Firefox and a terminal most of the time. Simply running those two applications I see:

        free -m
                     total       used       free     shared    buffers     cached
        Mem:           938        873         64          0          5        167
        -/+ buffers/cache:        701        237
        Swap:          959        158        801
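
    A couple of things worth trying (a sketch of common suggestions, not a guaranteed fix): see which processes actually hold the memory, and consider a lighter desktop session on a 1 GB machine:

        ps aux --sort=-%mem | head -n 15        # the biggest memory consumers
        sudo apt-get install lubuntu-desktop    # LXDE session, much lighter than the default Unity desktop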

    Read the article
