Search Results

Search found 20360 results on 815 pages for 'capture output'.

  • Attempting to cause packet loss with netem doesn't work - possibly because of NAT (but delay does work)

    - by tomdee
    I have traffic from a WiFi access point routed via an Ubuntu box. I have two network interfaces, which are NATed:

        *filter
        :INPUT ACCEPT [11:690]
        :FORWARD ACCEPT [0:0]
        :OUTPUT ACCEPT [37:6224]
        -A FORWARD -s 192.168.2.0/24 -i eth1 -o eth0 -m conntrack --ctstate NEW -j ACCEPT
        -A FORWARD -m conntrack --ctstate RELATED,ESTABLISHED -j ACCEPT
        COMMIT
        # Completed on Thu Mar 15 13:37:21 2012
        # Generated by iptables-save v1.4.10 on Thu Mar 15 13:37:21 2012
        *nat
        :PREROUTING ACCEPT [0:0]
        :INPUT ACCEPT [0:0]
        :OUTPUT ACCEPT [0:0]
        :POSTROUTING ACCEPT [0:0]
        -A POSTROUTING -j MASQUERADE
        COMMIT

    If I run a ping app on an Android device connected to the WiFi network, I can happily ping Google. If I use netem to introduce some delay,

        tc qdisc change dev eth0 root netem delay 100ms

    I can clearly see pings taking longer. If I use netem to introduce some packet loss,

        tc qdisc change dev ifb0 root netem loss 50%

    then I see no change. Packet loss does work fine for locally generated traffic, just not for traffic coming in over the network that's being NATed. Any ideas how to sort this out?

  • Find non-ASCII characters in a UTF-8 string

    - by user10607
    I need to find the non-ASCII characters in a UTF-8 string. My understanding: UTF-8 is a superset of ASCII in which the values 0-127 are the ASCII characters. So if a character's value in a UTF-8 string is not between 0 and 127, then it is not an ASCII character, right? Please correct me if I'm wrong here. On that understanding I have written the following C code (note: I'm compiling with gcc on Ubuntu, and the UTF-8 string is "xvab c"):

        #include <stdio.h>
        #include <ctype.h>

        int main(void)
        {
            long i;
            char arr[] = "xvab c";
            printf("length : %lu \n", sizeof(arr));
            for (i = 0; i < sizeof(arr); i++) {
                char ch = arr[i];
                if (isascii(ch))
                    printf("Ascii character %c\n", ch);
                else
                    printf("Not ascii character %c\n", ch);
            }
            return 0;
        }

    which prints output like:

        length : 9
        Ascii character x
        Not ascii character
        Not ascii character ?
        Not ascii character ?
        Ascii character a
        Ascii character b
        Ascii character
        Ascii character c
        Ascii character

    To the naked eye the length of "xvab c" seems to be 6, but in the code it comes out as 9. The correct answer for "xvab c" is 1, i.e. it has only 1 non-ASCII character, but in the output above "Not ascii character" appears 3 times. How can I correctly find the non-ASCII characters in a UTF-8 string? Please guide me on the subject.
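
    For comparison, here is the same check sketched in Python 3 (the non-ASCII character in the example bytes is illustrative, since the original string's character was not preserved above). It shows why the byte loop over-counts: one multi-byte UTF-8 character is several bytes, each with a value above 127, so decoding first lets you count characters instead of bytes.

        # A multi-byte UTF-8 character is one character but several bytes,
        # each with a value above 127; decode first to count characters.
        data = b'x\xc5\x9dab c'              # raw UTF-8 bytes (7 of them)

        non_ascii_bytes = [b for b in data if b > 127]
        print(len(non_ascii_bytes))          # 2 -> byte-level view over-counts

        text = data.decode('utf-8')          # 6 characters
        non_ascii_chars = [c for c in text if ord(c) > 127]
        print(len(non_ascii_chars))          # 1 -> one non-ASCII character

    The C equivalent is to count only the bytes that start a character: a byte is the start of a non-ASCII character when (ch & 0xC0) == 0xC0, while continuation bytes satisfy (ch & 0xC0) == 0x80 and should be skipped.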

  • recommended way to collect email notifications from crond in Arch Linux

    - by nponeccop
    Arch Linux doesn't have sendmail installed by default, so I get the following messages in my syslog:

        Sep 15 13:16:01 zorro crond[18497]: mailing cron output for user collectors sh cronjob.sh
        Sep 15 13:16:01 zorro crond[18497]: unable to exec /usr/sbin/sendmail: cron output for user collectors sh cronjob.sh to /dev/null

    What is the recommended way to fix this default behaviour so actual messages are sent? heirloom-mailx is installed and capable of sending email messages using SMTP. Is it possible for crond to use mailx to send notifications? Is there any drop-in replacement for sendmail that sends using mailx? Sendmail is not even in the repositories.
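
    One possible shape for such a drop-in, sketched in Python (the SMTP host, sender address, and fallback recipient are illustrative assumptions, not a tested Arch package): a small script installed as /usr/sbin/sendmail that reads the message from stdin, the way crond invokes sendmail, and relays it over SMTP.

        #!/usr/bin/env python3
        # Hypothetical /usr/sbin/sendmail stand-in: crond pipes the message
        # on stdin with recipients as arguments; relay it to a real SMTP server.
        import smtplib
        import sys

        SMTP_HOST = 'smtp.example.org'    # assumption: your outbound relay
        FROM_ADDR = 'cron@example.org'    # assumption: envelope sender

        message = sys.stdin.read()
        # Ignore sendmail-style flags; treat anything else as a recipient.
        recipients = [a for a in sys.argv[1:] if not a.startswith('-')]
        with smtplib.SMTP(SMTP_HOST) as smtp:
            smtp.sendmail(FROM_ADDR, recipients or ['root@localhost'], message)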

  • Can PuTTY be configured to display the following UTF-8 characters?

    - by Stuart Powers
    I'd like to be able to render the characters as seen in this tweet (screenshot omitted). I saved the tweet's JSON data and wrote a one-liner Python script for testing:

        python -c 'import json,urllib; print json.load(urllib.urlopen("http://c.sente.cc/BUCq/tweet.json"))["text"]'

    The output of this command on two different PuTTY sessions, one using the Bitstream Vera Sans Mono font and the other using Courier New, renders incorrectly, whereas a terminal other than PuTTY renders it correctly (screenshots omitted). The original JSON is at this link using Twitter's API. How can I get PuTTY to display those characters?
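
    One way to narrow this down, a Python 3 sketch using the same JSON URL as above: print the code points rather than the glyphs. If the code points are right but PuTTY shows boxes or question marks, the problem is font coverage rather than the data or the translation settings.

        # Dump the tweet text as code points to separate "wrong bytes"
        # from "font cannot draw this glyph".
        import json
        import urllib.request

        with urllib.request.urlopen('http://c.sente.cc/BUCq/tweet.json') as resp:
            text = json.load(resp)['text']
        print([hex(ord(c)) for c in text])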

  • Using Python to traverse a parent-child data set

    - by user132748
    I have a dataset of two columns in a CSV file. The purpose of this dataset is to provide a link between two different IDs if they belong to the same person, e.g. 2, 3, and 5 belong to 1:

        COLA COLB
        1    2
        1    3
        1    5
        2    6
        3    7
        9    10

    In the above example, 1 is linked to 2, 3, and 5; 2 is linked to 6; and 3 is linked to 7. What I am trying to achieve is to identify all records which are linked to 1 directly (2, 3, 5) or indirectly (6, 7), be able to say that these IDs in column B belong to the same person in column A, and then either dedupe or add a new column to the output file which will have 1 populated for all rows that link to 1. Example of expected output:

        COLA COLB GroupField
        1    2    1
        1    3    1
        1    5    1
        2    6    1
        3    7    1
        9    10   9
        10   11   9

    I am a newbie and so am not sure how to approach this problem. I appreciate any input you all can provide.
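
    A minimal Python sketch of the grouping (pair values are the ones from the example above; assumes the links form a tree with no cycles): follow each ID's parent links until a root is reached, and use that root as the group field.

        # Map every child to its parent, then resolve each id to its
        # top-most ancestor to get its group.
        pairs = [('1', '2'), ('1', '3'), ('1', '5'),
                 ('2', '6'), ('3', '7'), ('9', '10'), ('10', '11')]

        parent = {child: par for par, child in pairs}

        def root(x):
            while x in parent:          # climb until an id has no parent
                x = parent[x]
            return x

        for par, child in pairs:
            print(par, child, root(par))   # e.g. "2 6 1": 6 belongs to 1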

  • iptables: built-in INPUT chain in nat table?

    - by ughmandaem
    I have a Gentoo Linux system running Linux 2.6.38-rc8, a machine running Ubuntu with Linux 2.6.35-27, and a virtual machine running Debian Unstable with Linux 2.6.37-2. On the Gentoo and Debian systems I have an INPUT chain built into my nat table, in addition to PREROUTING, OUTPUT, and POSTROUTING. On Ubuntu, I only have PREROUTING, OUTPUT, and POSTROUTING. I am able to use this INPUT chain with SNAT to modify the source of a packet that is destined for the local machine (imagine simulating an incoming spoofed IP to a local application, or just testing a virtual host configuration). This is possible with 2 firewall rules on Gentoo and Debian, but seemingly not on Ubuntu. I looked around for documentation on changes to the SNAT target and the INPUT chain of the nat table and couldn't find anything. Does anyone know if this is a configuration issue, or is it something that was just added in more recent versions of Linux?

  • When module calling gets ugly

    - by Pete
    Has this ever happened to you? You've got a suite of well designed, single-responsibility modules, covered by unit tests. In any higher-level function you code, you are (for 95% of the code) simply taking output from one module and passing it as input to the next. Then you notice this higher-level function has turned into a 100+ line script with multiple responsibilities.

    Here is the problem: it is difficult (impossible) to test that script. At least, it seems so. Do you agree? In my current project, all of the bugs came from this script. Further detail: each script represents a unique solution, or algorithm, formed by using different modules in different ways.

    Question: how can you remedy this situation? Knee-jerk answer: break the script up into single-responsibility modules. Comment on the knee-jerk answer: it already is! The best answer I can come up with so far: create higher-level connector objects which "wire" modules together in particular ways (take output from one module, feed it as input to another module). Thus if our script was:

        FooInput fooIn = new FooInput(1, 2);
        FooOutput fooOutput = fooModule(fooIn);
        Double runtimeValue = getsomething(fooOutput.whatever);
        BarInput barIn = new BarInput(runtimeValue, fooOutput.someOtherValue);
        BarOutput barOut = barModule(barIn);

    it would become, with a connector:

        FooBarConnectionAlgo fooBarConnector = new FooBarConnectionAlgo(fooModule, barModule);
        FooInput fooIn = new FooInput(1, 2);
        BarOutput barOut = fooBarConnector(fooIn);

    So the advantage is, besides hiding some code and making things clearer, we can test FooBarConnectionAlgo. I'm sure this situation comes up a lot. What do you do?

  • Database Change Auditing - Part of or Abstracted from ORM / Application Layer?

    - by BrandonV
    My fellow developers and I are at a crossroads in how to go about continuing our auditing of database changes. Most of our applications log changes via INSERT, UPDATE, and DELETE triggers. A few of our newer applications audit at the ORM layer, specifically using Hibernate Envers. While ORM-layer auditing provides a much cleaner interface and is much more maintainable, it will not capture any manual database changes that are made. ORM-layer auditing also means that our libraries will currently require a dependency on our ORM implementation unless, specifically in our case for example, JPA plans on providing something in the near future. Is there a common paradigm that addresses this?
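
    For illustration only, a Python/SQLAlchemy sketch of the same ORM-layer idea (not the Hibernate Envers API): the audit hook hangs off the session's flush events, which is exactly why a manual UPDATE issued in a SQL console never reaches it.

        # ORM-level audit hook: fires on ORM flushes, never on manual SQL.
        from sqlalchemy import event
        from sqlalchemy.orm import Session

        @event.listens_for(Session, 'after_flush')
        def audit(session, flush_context):
            for obj in session.new:
                print('INSERT', obj)      # write to an audit table instead
            for obj in session.dirty:
                print('UPDATE', obj)
            for obj in session.deleted:
                print('DELETE', obj)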

  • UDF Partition reported full when it is not

    - by Capt.Nemo
    I was using these instructions to set up an external hard disk with UDF. I have been able to set up a multi-partition system using those instructions, but I seem to have hit a wall where the partition is reported as full while writing to the disk, while every tool available to me reports it as free. (Relevant lshw output and a screenshot showing the disk omitted.) Both the output of df and the file manager (caja) report the disk as free:

        Filesystem      Size  Used Avail Use% Mounted on
        /dev/sda9       9.0G  7.6G  910M  90% /
        udev            974M   12K  974M   1% /dev
        /dev/sda1        50G   47G  295M 100% /media/Data
        /dev/sda6        49G   41G  5.9G  88% /home
        /dev/sda2       155G  127G   29G  82% /media/Entertainment
        /dev/sda8        14G   13G  516M  96% /media/Stuff
        /dev/sdb2       120G  1.9G  112G   2% /media/3c887659-5676-4946-875b-b797be508ce7
        /dev/sdb3        11G  2.6G  7.7G  25% /media/108b0a1d-fd1a-4f38-b1c6-4ad1a20e34a3
        /dev/sdb1       802G   34G  768G   5% /media/disk

    I seem to have hit a wall near the 35GB mark. Despite being shown as 35GB/860GB used everywhere, the following happens on a write attempt:

        [2017][/media/Dory]$ echo D>>echo
        bash: echo: write error: No space left on device

    Writing byte by byte, the maximum I can take it to is 34719248K. The weirdest part is that on mounting the disk in Windows, Windows can write to it easily, and the writes are read back fine in Ubuntu. However, the used-bytes figure remains at 34719248K in Ubuntu (it goes higher on Windows, however).

  • tail -f updates slowly

    - by Cliff
    I'm not sure why, but on my MacBook Pro running Lion I get slow updates when I issue "tail -f" on a log file that is being written to. I used to use this command all the time at my last company, but that was typically on Linux machines. The only things I can think of that would possibly slow the updates are buffering of the output and/or a different update interval on a Mac vs. Linux. I've tried with several commands, all of which write to stdout relatively quickly but give slow updates to the tail command. Any ideas?

    Update: I am merely running a Python script with a bunch of prints in it and redirecting to a file, "my output.log". I expect to see updates in near real time, but that doesn't seem to be the case.
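
    If the writer really is a Python script, the usual culprit is stdio block buffering: once stdout is redirected to a file it is flushed only every few kilobytes, so tail -f sees nothing for a long while. A quick sketch of the fix (alternatively, run the script with python -u to disable buffering entirely):

        # stdout is block-buffered when redirected to a file; flushing per
        # line makes each print visible to `tail -f` immediately.
        import time

        for i in range(100):
            print('tick', i, flush=True)
            time.sleep(1)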

  • Diskless with Ubuntu 12.04

    - by user139462
    I'm trying to set up a new diskless solution with Ubuntu 12.04, without any success. I followed this howto: https://help.ubuntu.com/community/DisklessUbuntuHowto. But the initramfs seems not to be able to mount my NFS share.

    On the server side, my /etc/exports is:

        /srv/nfs4         192.168.0.0/24(fsid=0,rw,no_subtree_check)
        /srv/nfs4/nfsroot 192.168.0.0/24(rw,no_root_squash,no_subtree_check,fsid=1,nohide,insecure,sync)

    I'm able to mount my NFS share on a standard Ubuntu installation without any problem. I can mount it on any client with either of these commands:

        mount 192.168.0.3:/nfsroot /mnt
        mount 192.168.0.3:/srv/nfs4/nfsroot /mnt

    My /tftpboot/pxelinux.cfg/default config file is:

        DEFAULT vmlinuz-3.5.0-25-generic root=/dev/nfs initrd=initrd.img-3.5.0-25-generic nfsroot=192.168.0.3:/nfsroot ip=dhcp rw

    I also tried:

        DEFAULT vmlinuz-3.5.0-25-generic root=/dev/nfs initrd=initrd.img-3.5.0-25-generic nfsroot=192.168.0.3:/srv/nfs4/nfsroot ip=dhcp rw

    Here is what I get in the initramfs. With the setting nfsroot=192.168.0.3:/nfsroot, the diskless client outputs:

        mount call failed - server replied: Permission denied

    and the syslog of my NFS server shows:

        rpc.mountd[1266]: refused mount request from 192.168.0.10 for /nfsroot (/): not exported

    With the setting nfsroot=192.168.0.3:/srv/nfs4/nfsroot, the diskless client outputs:

        mount: the kernel lacks NFS v3 support

    and the syslog of my NFS server shows:

        Mar 11 14:03:06 BootFromLan rpc.mountd[1266]: authenticated mount request from 192.168.0.10:834 for /srv/nfs4/nfsroot (/srv/nfs4/nfsroot)
        Mar 11 14:03:06 BootFromLan rpc.mountd[1266]: refused unmount request from 192.168.0.10 for /root (/): not exported

  • Thinkpad speaker turns mute - Linux Codec issue?

    - by Curlew
    At some point a few days ago the speakers on my Lenovo Thinkpad T410 (model number 2537A11) suddenly stopped working randomly. This error happens every time I watch a video or listen to a music file: the sound just abruptly stops. At the moment I can't produce a single sound no matter what I do. I am using Debian GNU/Linux on this laptop and there doesn't appear to be anything else wrong (the fan is working, no abnormal heat (staying around ~40°C), no other obvious errors or problems).

    Here is the output of a nice program someone pointed me to:

        martin@martin:~/Downloads$ sudo python run.py --monitor
        Using temporary directory: /dev/shm/hda-analyzer
        You may remove this directory when finished or if you like to download
        the most recent copy of hda-analyzer tool.
        Downloading file hda_analyzer.py
        Downloading file hda_guilib.py
        Downloading file hda_codec.py
        Downloading file hda_proc.py
        Downloading file hda_graph.py
        Downloading file hda_mixer.py
        Downloaded all files, executing hda_analyzer.py
        Watching 1 cards
        ======================================

    Sound works normally for a while, then it stops and the following lines appear:

        Diff for codec 0/0 (0x14f15069):
        ---
        +++
        @@ -164,17 +164,17 @@
             Power: setting=D0, actual=D0
         Node 0x1f [Pin Complex] wcaps 0x400501: Stereo
           Pincap 0x00000010: OUT
           Pin Default 0x901701f0: [Fixed] Speaker at Int N/A
             Conn = Analog, Color = Unknown
             DefAssociation = 0xf, Sequence = 0x0
             Misc = NO_PRESENCE
           Pin-ctls: 0x40: OUT
        -    Power: setting=D0, actual=D0
        +    Power: setting=D3, actual=D3
           Connection: 2
              0x10* 0x11
         Node 0x20 [Pin Complex] wcaps 0x400781: Stereo Digital
           Pincap 0x00000010: OUT
           Pin Default 0x40f001f0: [N/A] Other at Ext N/A
             Conn = Unknown, Color = Unknown
             DefAssociation = 0xf, Sequence = 0x0
             Misc = NO_PRESENCE

    And now there is also an error in the dmesg output:

        hda-intel: IRQ timing workaround is activated for card #0. Suggest a bigger bdl_pos_adj.

    I changed bdl_pos_adj to various numbers (-1, 0, 64, 1024) and either there is no change at all, or dmesg reports that the adjustment is too big. I wonder if this bdl_pos_adj is the real reason for the error. Here is my hardware information as provided by the alsa-info.sh website.

    Okay, I did some serious testing and even installed Windows, and now I officially conclude that this is a hardware issue with my laptop speakers. Reasons:

        - The error occurs in my installed Debian Linux, an Ubuntu live distribution, and Windows XP.
        - No error message appears in any of the OSes; the sound just keeps "playing" and I can't hear a thing.
        - I tested different setups, including OSS, ALSA, and the PulseAudio server on top.
        - If I use my new USB headphones, I can hear sound all the time without any sudden silences.

    So obviously, although hard to believe, my laptop speakers are not okay (I've never heard of a similar case). I'll award the bounty to anyone who can point me to good tutorials on, or the procedure for, exchanging my T410 speakers (I still have warranty; the laptop was bought in Germany, but now I am in Denmark), or to someone who can explain the big hda-analyzer log above.

  • Eclipse: Organising Files

    - by someguy
    I want to import a project that I'm planning to build upon. The problem is that it is very messy, with source files, class files, and libraries all under one directory. How would I organise these files using Eclipse? I know you can change the source folder and output folder, but when I change the source folder, the files that I want inside it do not physically move to that folder. The output folder is fine, though. Also, I would like a separate folder for libraries, but I'm not sure how to go about this. Here's how I would like it:

        src: This folder will contain source files.
        bin: This folder will contain binary (class) files.
        lib: This folder will contain external libraries.

  • SSIS Catalog: How to use environment in every type of package execution

    - by Kevin Shyr
    Here is a good blog post on how to create an SSIS Catalog and set up environments: http://sqlblog.com/blogs/jamie_thomson/archive/2010/11/13/ssis-server-catalogs-environments-environment-variables-in-ssis-in-denali.aspx. Here I will summarize the 3 ways I know so far to execute a package while using variables set up in an SSIS Catalog environment.

    The first way: we have an SSIS project with a reference to an environment, and one of the project parameters uses a value set up in the environment called "Development". With this setup you are limited to calling the packages by right-clicking on them in the SSIS Catalog list and selecting Execute, but you are free to choose an absolute or relative path to the environment. (A screenshot showing the 2 available paths to your SSIS environments is omitted.) Personally, I use the absolute path because of option 3, just to keep everything simple for myself.

    The second option is to call through a SQL Job. This does require you to configure your project to reference an environment and use its variables; when a job step is set up, the configuration part will require you to select that reference again. This is more useful when you want to automate the same package that needs to be run in different environments.

    The third option is the most important to me, as I have an SSIS framework that calls hundreds of packages. The main part of the stored procedure is in this post: http://geekswithblogs.net/LifeLongTechie/archive/2012/11/14/time-to-stop-using-ldquoexecute-package-taskrdquondash-a-way-to.aspx. But the top part had to be modified to include the logic to use an environment reference:

        CREATE PROCEDURE [AUDIT].[LaunchPackageExecutionInSSISCatalog]
              @PackageName NVARCHAR(255)
            , @ProjectFolder NVARCHAR(255)
            , @ProjectName NVARCHAR(255)
            , @AuditKey INT
            , @DisableNotification BIT
            , @PackageExecutionLogID INT
            , @EnvironmentName NVARCHAR(128) = NULL
            , @Use32BitRunTime BIT = FALSE
        AS
        BEGIN TRY
            DECLARE @execution_id BIGINT = 0;

            -- Create a package execution
            IF @EnvironmentName IS NULL
            BEGIN
                EXEC [SSISDB].[catalog].[create_execution]
                      @package_name=@PackageName,
                      @execution_id=@execution_id OUTPUT,
                      @folder_name=@ProjectFolder,
                      @project_name=@ProjectName,
                      @use32bitruntime=@Use32BitRunTime;
            END
            ELSE
            BEGIN
                DECLARE @EnvironmentID AS INT
                SELECT @EnvironmentID = [reference_id]
                FROM SSISDB.[internal].[environment_references] WITH(NOLOCK)
                WHERE [environment_name] = @EnvironmentName
                  AND [environment_folder_name] = @ProjectFolder

                EXEC [SSISDB].[catalog].[create_execution]
                      @package_name=@PackageName,
                      @execution_id=@execution_id OUTPUT,
                      @folder_name=@ProjectFolder,
                      @project_name=@ProjectName,
                      @reference_id=@EnvironmentID,
                      @use32bitruntime=@Use32BitRunTime;
            END

  • What determines which Javascript functions are blocking vs non-blocking?

    - by Sean
    I have been doing web-based Javascript (vanilla JS, jQuery, Backbone, etc.) for a few years now, and recently I've been doing some work with Node.js. It took me a while to get the hang of "non-blocking" programming, but I've now gotten used to using callbacks for IO operations and whatnot. I understand that Javascript is single-threaded by nature. I understand the concept of the Node "event queue". What I DON'T understand is what determines whether an individual javascript operation is "blocking" vs. "non-blocking". How do I know which operations I can depend on to produce an output synchronously for me to use in later code, and which ones I'll need to pass callbacks to so I can process the output after the initial operation has completed? Is there a list of Javascript functions somewhere that are asynchronous/non-blocking, and a list of ones that are synchronous/blocking? What is preventing my Javascript app from being one giant race condition? I know that operations that take a long time, like IO operations in Node and AJAX operations on the web, require them to be asynchronous and therefore use callbacks - but who is determining what qualifies as "a long time"? Is there some sort of trigger within these operations that removes them from the normal "event queue"? If not, what makes them different from simple operations like assigning values to variables or looping through arrays, which it seems we can depend on to finish in a synchronous manner? Perhaps I'm not even thinking of this correctly - hoping someone can set me straight. Thanks!

  • An equivalent of IceCast, but for live video streaming?

    - by Kedare
    Hello, I am looking for a solution to stream live video like this:

        A camera/webcam/video output ---> Stream server ---> Clients

    and, if possible, with multiple stream servers (like IceCast):

        A camera/webcam/video output --> Master Stream Server +--> Slave Stream Server ---> Clients
                                                              +--> Clients
                                                              +--> Slave Stream Server ---> Clients
                                                              `--> Clients

    The clients will be in Flash, so I think RTMP should be a good protocol. I've heard of Red5; is it good for that? Does it scale? I would like to get statistics (number of clients, bandwidth, etc.); is that possible with Red5? Do you know any other good solution to do this? (Only free, and if possible open source.) Thank you!

  • NIC is receiving, but not transmitting at all?

    - by Shtééf
    I'm trying to fix a very strange problem remotely on a machine at a customer site. The machine is a Dell PowerEdge, I believe a 1950 (haven't verified, but the lspci output matches specs I found). The machine has two similar NICs, identified as Broadcom Corporation NetXtreme II BCM5708 Gigabit Ethernet (rev 12) by lspci, and using the bnx2 driver. (I suspect these are on-board and on the same controller, which is what I'm accustomed to for this type of machine.)

    The primary interface eth0 works perfectly, and is in fact how I am ssh'd in. However, the secondary interface eth1 is not transmitting. I can see this in the ifconfig output, for example, where the TX field is always 0. However, it is receiving, and tcpdump shows ARP requests coming from the ISP's gateway on the other side. The interface is physically connected to a Siemens BSTU4 modem, configured by the ISP. The link is properly set to 10Mbps and full duplex, without negotiation, as the ISP requested. A small /30 subnet is configured; for the sake of anonymity, let's say the machine is 3.3.3.2/30 and the ISP's gateway is .1. The machine has no firewall settings whatsoever. Even running something like arping -I eth1 3.3.3.1, with tcpdump alongside, shows no traffic whatsoever being transmitted on the interface. (But the other side keeps steadily sending ARP requests, and that is all that can be seen.) What could be causing this? Here's some output, anonymized, which may hopefully help:

        $ ethtool eth1
        Settings for eth1:
                Supported ports: [ TP ]
                Supported link modes:   10baseT/Half 10baseT/Full
                                        100baseT/Half 100baseT/Full
                                        1000baseT/Full
                Supports auto-negotiation: Yes
                Advertised link modes:  Not reported
                Advertised auto-negotiation: No
                Speed: 10Mb/s
                Duplex: Full
                Port: Twisted Pair
                PHYAD: 1
                Transceiver: internal
                Auto-negotiation: off
                Supports Wake-on: d
                Wake-on: d
                Link detected: yes

        $ ip link show eth1
        3: eth1: <BROADCAST,MULTICAST,UP> mtu 1500 qdisc pfifo_fast qlen 1000
            link/ether 00:15:c5:xx:xx:xx brd ff:ff:ff:ff:ff:ff

        $ ip -4 addr show eth1
        3: eth1: <BROADCAST,MULTICAST,UP> mtu 1500 qdisc pfifo_fast qlen 1000
            inet 3.3.3.2/30 brd 3.3.3.3 scope global eth1

        $ ip -4 route show match 3.3.3.0/30
        3.3.3.0/30 dev eth1 proto kernel scope link src 3.3.3.2
        default via 10.0.0.5 dev eth0

  • ksoftirqd uses 100% cpu

    - by andy
    I am running 32-bit Ubuntu 10.04. A lot of the time, ksoftirqd/0 or ksoftirqd/1 starts using 100% CPU for no apparent reason, and I am forced to reboot my laptop. Incidentally, this also happens when I maximize (YouTube) videos in Chrome and Firefox, but once I un-maximize the videos the CPU usage goes down to the original levels. Any ideas what is going on?

    Addendum: dmesg produces a ~2000 line output. I searched for 'error' and 'warning' in the output, and here are the relevant lines (along with some headers):

        [    0.000000] Initializing cgroup subsys cpuset
        [    0.000000] Initializing cgroup subsys cpu
        [    0.000000] Linux version 2.6.32-21-generic (buildd@yellow) (gcc version 4.4.3 (Ubuntu 4.4.3-4ubuntu5) ) #32-Ubuntu SMP Fri Apr 16 08:09:38 UTC 2010 (Ubuntu 2.6.32-21.32-generic 2.6.32.11+drm33.2)
        [    0.000000] Command line: BOOT_IMAGE=/boot/vmlinuz-2.6.32-21-generic root=UUID=157dcfda-acd6-4d1b-a6a8-ff9ccff61906 ro quiet splash
        [    0.000000] KERNEL supported cpus:
        [    0.000000]   Intel GenuineIntel
        [    0.000000]   AMD AuthenticAMD
        [    0.000000]   Centaur CentaurHauls
        [    0.000000] BIOS-provided physical RAM map:
        [   24.775546] EXT3-fs warning: mounting fs with errors, running e2fsck is recommended
        [44920.210518] ata1: SError: { PHYRdyChg CommWake 10B8B Dispar LinkSeq TrStaTrns }
        [44920.210531] res 40/00:00:f0:4b:7f/00:00:18:00:00/40 Emask 0x10 (ATA bus error)
        [58673.134623] chrome[20101]: segfault at 7f38bc4ad000 ip 00007f38be769ecc sp 00007fff24616850 error 4 in libpepflashplayer.so[7f38bdc08000+e55000]
        [   24.775546] EXT3-fs warning: mounting fs with errors, running e2fsck is recommended
        [44920.210531] res 40/00:00:f0:4b:7f/00:00:18:00:00/40 Emask 0x10 (ATA bus error)

  • Merging and sorting multiple files with "sort"

    - by NewbiZ
    Hello, I have a bunch of text log files in the following format:

        ID          (17 characters)
        Timestamp   (14 characters, YYYYmmddHHMMSS, e.g. "20060210100040" -> 2006/02/10 10:00:40)
        Random data (? characters)
        end of line

    The files are already sorted by timestamp. I need to produce 1 log file with all the entries from the multiple log files, sorted by timestamp. Note that the log files are really huge, around 3-4G each (and there are dozens of them). I tried the following command:

        sort -s -m -t '|' -k1n,1n +17 -o data_sort.txt *.TXT

    Here is how I ended up with this command:

        -s      : don't bother with ties
        -m      : merge all log files
        -t '|'  : there is no | in my logs, so the whole line should be field 1
        -k1n,1n : sort on the first field as a numeric value
        +17     : the timestamp starts at index 17
        -o      : output file

    Actually... it fails miserably. The output file data_sort.txt is just the concatenation of all the files, not sorted at all :( I would greatly appreciate it if anyone could provide any help on this problem! Thanks
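
    If sort keeps fighting you, the same merge is easy to sketch in Python with heapq.merge, which lazily merges already-sorted inputs by a key (here the 14-character timestamp that starts right after the 17-character ID; file names are illustrative):

        # Merge pre-sorted log files by timestamp without loading them into
        # memory; heapq.merge streams lazily, so multi-GB files are fine.
        import glob
        import heapq

        files = [open(name) for name in glob.glob('*.TXT')]
        with open('data_sort.txt', 'w') as out:
            out.writelines(heapq.merge(*files, key=lambda line: line[17:31]))
        for f in files:
            f.close()

    With GNU sort itself, character positions are normally written as -k1.18,1.31 (characters 18 through 31 of field 1) rather than the obsolete +17 offset syntax.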

  • Why make the login page to a single page application a separate page?

    - by ryanzec
    I am wondering why it seems to be popular to have the login page of a SPA be a separate page that is not part of the SPA (as in, loaded and sending data through ajax requests). The only thing I can think of is security, but I can't think of a specific security reason. The only thing that comes to mind is that if your login page is part of the SPA, it sends the username/password through ajax, which can be seen by tools like Firebug or the Web Inspector; however, even if you send it as a normal POST request, there are other tools that can easily capture this data (like Fiddler, HTTPScoop, etc.). Is there something I am missing?

  • Designs for outputting to a spreadsheet

    - by Austin Moore
    I'm working on a project where we are tasked with gathering and outputting various data to a spreadsheet, and we are having tons of problems with the file that holds the code to write the spreadsheet. The cell that each piece of data belongs to is hardcoded, so any time you need to add anything to the middle of the spreadsheet, you have to increment the location of all the fields after it in the code. There are random blank rows to add padding between sections, and subsections within the sections, so there's no real pattern we can replicate. Essentially, any time we have to add or change anything in the spreadsheet, it takes many long and tedious hours. The code is all in one large file, hacked together over time in Perl.

    I've come up with a few OO solutions, but I'm not too familiar with OO programming in Perl and all my attempts at it haven't been great, so I've shied away from it so far. I've suggested we handle this section of the program with a more OO-friendly language, but apparently we can't. I've also suggested that we scrap the entire spreadsheet idea and just move to a webpage, but we can't do that either. We've been working on this project for a few months, and every time we have to change that file, we all dread it. I'm thinking it's time for some refactoring. However, I don't even know what could make this file easier to work with; the way the output is formatted means it has to be somewhat hardcoded. I'm wondering if anyone has insight into design patterns or techniques they have used to tackle a similar problem. I'm open to any ideas. Perl-specific answers are welcome, but I am also interested in language-agnostic solutions.
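
    One direction, sketched here in Python although the project is Perl (all names are illustrative): turn the layout into data, so row numbers are computed instead of hardcoded, and inserting a field shifts everything below it automatically.

        # Describe the spreadsheet as ordered sections and fields, then
        # derive each cell's position instead of hardcoding coordinates.
        LAYOUT = [
            ('Summary', ['total', 'average']),
            ('Details', ['min', 'max', 'stddev']),
        ]
        PADDING_ROWS = 1    # blank rows between sections

        def place_fields(layout):
            cells, row = {}, 1
            for section, fields in layout:
                cells[section] = (row, 1)       # section header cell
                row += 1
                for key in fields:
                    cells[key] = (row, 2)       # one field per row
                    row += 1
                row += PADDING_ROWS             # blank padding rows
            return cells

        print(place_fields(LAYOUT))

    The same idea ports directly to Perl with an array of hashes; the point is that the writer iterates over the layout, so "add a field" becomes a one-line data change instead of renumbering every cell below it.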

  • What does dd conv=sync,noerror do?

    - by dding
    So in what cases does adding conv=sync,noerror make a difference when backing up an entire hard disk onto an image file? Is conv=sync,noerror a requirement when doing forensic work? If so, why is that the case, with reference to Linux (Fedora)?

    Edit: OK, so if I do dd without conv=sync,noerror and dd encounters a read error while reading a block (let's say the block size is 100M), does dd just skip the 100M block and read the next block without writing anything (dd with conv=sync,noerror writes zeros for the 100M of output; so what happens in this case)? And is the hash of the original hard disk different from that of the output file when done without conv=sync,noerror, or only when a read error has occurred?
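
    For intuition, here is a rough Python sketch of the semantics (a model, not dd's actual implementation): noerror means a failed read does not abort the copy, and sync means whatever was read, including nothing, is padded with NUL bytes to a full block, so output offsets stay aligned with input offsets. That padding is also why the image's hash can differ from the source: zeros stand in for unreadable data, and even a trailing partial block gets padded.

        # Rough model of `dd conv=sync,noerror` block handling.
        BLOCK = 512   # dd's default block size

        def dd_sync_noerror(src_path, dst_path):
            with open(src_path, 'rb') as src, open(dst_path, 'wb') as dst:
                offset = 0
                while True:
                    try:
                        src.seek(offset)
                        block = src.read(BLOCK)
                        if not block:              # clean end of input
                            break
                    except OSError:                # read error: "noerror"
                        block = b''                # keep going, data is lost
                    dst.write(block.ljust(BLOCK, b'\0'))   # "sync" padding
                    offset += BLOCK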

  • How to force correct resolutions for multiple monitors in LightDM?

    - by Hanynowsky
    I am affected by the bug https://bugs.launchpad.net/ubuntu/+source/unity-greeter/+bug/874241. Like me, if you have a laptop connected to a second monitor of higher resolution, LightDM at the login stage mirrors the displays on both screens and assigns them a common resolution (1024x768 in my case), instead of extending the desktop (primary screen with the greeter and secondary with just a logo, as mentioned in the Multiple Monitors UX specification for 12.04). Here is my xrandr -q:

        @L502X:~$ xrandr -q
        Screen 0: minimum 320 x 200, current 1920 x 1848, maximum 8192 x 8192
        LVDS1 connected 1366x768+309+1080 (normal left inverted right x axis y axis) 344mm x 193mm
           1366x768       60.0*+
           1360x768       59.8     60.0
           1024x768       60.0
           800x600        60.3     56.2
           640x480        59.9
        VGA1 disconnected (normal left inverted right x axis y axis)
        HDMI1 connected 1920x1080+0+0 (normal left inverted right x axis y axis) 510mm x 287mm
           1920x1080      60.0*+
           1600x1200      60.0
           1680x1050      60.0
           1280x1024      60.0
           1440x900       59.9
           1280x960       60.0
           1280x800       59.8
           1024x768       60.0
           800x600        60.3     56.2
           640x480        60.0
        DP1 disconnected (normal left inverted right x axis y axis)

    I tried to force LightDM to execute some xrandr commands in order to set the right resolution for each monitor and extend the desktop, but I get a LOW GRAPHICS MODE error ("You're running in low graphics mode, your screen, input devices... did not get detected.."). I created a simple script named lightdmxrandr.sh:

        #!/bin/sh
        xrandr --output HDMI1 --primary --mode 1920x1080 --output LVDS1 --mode 1366x768 --below HDMI1

    and told LightDM to run it in /etc/lightdm/lightdm.conf:

        [SeatDefaults]
        greeter-session=unity-greeter
        user-session=ubuntu
        greeter-setup-script=/usr/bin/numlockx on
        display-setup-script=/home/hanynowsky/lightdmxrandr.sh

    Does anyone know what is wrong? Thanks in advance.

  • I can't enable extra effects in Ubuntu 10.10. Please help?

    - by jasoncruz98
    I installed Ubuntu 10.10 alongside Ubuntu 11.10 in order to use an older version of Compiz. On Ubuntu 11.10, Compiz was enabled by default and I didn't need to install any graphics driver to enjoy the effects; all I had to do was install CompizConfig Settings Manager and enable the extra effects. That was Compiz 0.9.6.

    Now, after installing Ubuntu 10.10, when I first logged in, the graphics were slow. When I dragged a window from one end of the screen to the other, the whole screen would blur and pixelate, and it would be very laggy. I tried going to System -> Preferences -> Appearance and selecting "Extra effects" on the Visual Effects tab, but all I got was "Desktop effects could not be enabled". I don't know whether I should install the additional proprietary drivers, because my Internet is slow and it would take a long time. Furthermore, in Ubuntu 11.10, after I installed the proprietary graphics driver, I immediately went into fallback mode and wasn't even offered an option to set my desktop session to Ubuntu 3D. I didn't need the driver to run Compiz in Ubuntu 11.10; it just ran smoothly. But in Ubuntu 10.10, everything is laggy. Should I install the ATI/AMD proprietary FGLRX graphics driver for Ubuntu 10.10 to enable extra effects? Or is there something else wrong with my system?

    Here is the output of lspci -nn | grep VGA:

        00:02.0 VGA compatible controller [0300]: Intel Corporation Sandy Bridge Integrated Graphics Controller [8086:0116] (rev 09)
        01:00.0 VGA compatible controller [0300]: ATI Technologies Inc Device [1002:6760]

    Here is the output of the same command in Ubuntu 11.10 (in this case the one which is correct, because I don't have the Sandy Bridge Integrated Graphics Controller):

        00:02.0 VGA compatible controller [0300]: Intel Corporation 2nd Generation Core Processor Family Integrated Graphics Controller [8086:0116] (rev 09)
        01:00.0 VGA compatible controller [0300]: ATI Technologies Inc NI Seymour [AMD Radeon HD 6470M] [1002:6760]

  • SoX on Windows 7 64-Bit Outfile Missing

    - by Christian
    I have come across the strangest problem when trying to run sox.exe on my Windows 7 installation. Whenever I try to record audio, it works without any issues, but it will not output an audio file. The crazy thing is that when I use the play command, it successfully plays what I just recorded. Has anyone ever heard of this happening? Here are the commands (and output) that I'm using:

        C:\Program Files (x86)\Vox\sox-14-4-0>sox -d test.wav trim 0 00:05

        Input File     : 'default' (waveaudio)
        Channels       : 2
        Sample Rate    : 48000
        Precision      : 16-bit
        Sample Encoding: 16-bit Signed Integer PCM

        In:0.00% 00:00:05.03 [00:00:00.00] Out:240k  [      |      ] Clip:0
        Done.

        C:\Program Files (x86)\Vox\sox-14-4-0>play test.wav

        test.wav:
          File Size : 960k
          Bit Rate  : 1.54M
          Encoding  : Signed PCM
          Channels  : 2 @ 16-bit
          Samplerate: 48000Hz
          Replaygain: off
          Duration  : 00:00:05.00

        In:100% 00:00:05.00 [00:00:00.00] Out:240k  [      |      ] Clip:0
        Done.

    Am I losing my mind, or is something up here?
