Search Results

Search found 19408 results on 777 pages for 'output formats'.

Page 113 of 777

  • How to increment a value appended to another value in a shell script

    - by Sitati
    I have a shell script that executes a backup program and saves the output in a folder. I would like to update the name of the output folder every time the shell script runs. In the end I want to have many backup folders with different names, so the first run would execute: innobackupex --user=root --password=@g@1n --database="open_cart" /var/backup/backup_1 --no-timestamp And after running the shell script again: innobackupex --user=root --password=@g@1n --database="open_cart" /var/backup/backup_2 --no-timestamp
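
    A minimal sketch of one way to do this, assuming a small counter file is acceptable (the counter file path is made up; the innobackupex arguments are simply carried over from the question):

        #!/bin/bash
        # Hypothetical counter file; any writable location works.
        COUNTER_FILE=/var/backup/.backup_counter
        COUNT=$(cat "$COUNTER_FILE" 2>/dev/null || echo 0)   # 0 if the file does not exist yet
        COUNT=$((COUNT + 1))
        echo "$COUNT" > "$COUNTER_FILE"
        innobackupex --user=root --password=@g@1n --database="open_cart" \
            "/var/backup/backup_${COUNT}" --no-timestamp

    An alternative is to count the existing backup_* folders and add one, which avoids the extra file.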

    Read the article

  • returning a heap block by reference in c++

    - by basicR
    I was trying to brush up my C++ skills. I have 2 functions: concat_HeapVal() returns the output heap variable by value, and concat_HeapRef() returns the output heap variable by reference. When main() runs it will be on the stack, s1 and s2 will be on the stack, I pass the values by reference only, and in each of the functions below I create a variable on the heap and concat them. When concat_HeapVal() is called it returns me the correct output. When concat_HeapRef() is called it returns me some memory address (wrong output). Why? I use the new operator in both functions. Hence they allocate on the heap. So when I return by reference, the heap object will still be VALID even when my main() stack memory goes out of scope. So it's left to the OS to clean up the memory. Right? string& concat_HeapRef(const string& s1, const string& s2) { string *temp = new string(); temp->append(s1); temp->append(s2); return *temp; } string* concat_HeapVal(const string& s1, const string& s2) { string *temp = new string(); temp->append(s1); temp->append(s2); return temp; } int main() { string s1,s2; string heapOPRef; string *heapOPVal; cout<<"String Conact Experimentations\n"; cout<<"Enter s-1 : "; cin>>s1; cout<<"Enter s-2 : "; cin>>s2; heapOPRef = concat_HeapRef(s1,s2); heapOPVal = concat_HeapVal(s1,s2); cout<<heapOPRef<<" "<<heapOPVal<<" "<<endl; return -9; }

    Read the article

  • Can a Windows computer access Pulse sound server on an Ubuntu computer?

    - by Dave M G
    With help I received in this question, I set up all my Ubuntu computers so that they all access a central computer for sound output, using PulseAudio Preferences. I have some Windows computers as well. I was wondering if it is also possible to make them clients of the sound server computer, so that they will send their sound output over the network to be played by the Ubuntu computer running the PulseAudio sound server. And if so, how?

    Read the article

  • Strategy to use two different measurement systems in software

    - by Dennis
    I have an application that needs to accept and output values in both US customary units and the metric system. Right now the conversion, input, and output are a mess. You can only enter values in the US system, but you can choose the output to be US or metric, and the code to do the conversions is everywhere. So I want to organize this and put together some simple rules. So I came up with this: Rules: the user can enter values in either US or metric, and the user interface will take care of marking this properly. All units internally will be stored as US, since the majority of the system already has most of the data stored like that and depends on this. It shouldn't matter, I suppose, as long as you don't mix units. All output will be in US or metric, depending on user selection/choice/preference. In theory this sounds great and seems like a solution. However, one little problem I came across is this: there is some data stored in code or in the database that already returns data like this: 4 x 13/16" screws, which means "four 13/16-inch screws". I need that measurement to be in either US or metric. Where exactly do I put the conversion code for doing the conversion for this unit? The above already mixes presentation and data, but the data for the field I need to populate is that whole string. I can certainly split it up into the number 4, the 13/16", and the " x " and the " screws", but the question remains... where do I put the conversion code? Different Locations for Conversion Routines: 1) Right now the string is in a class where it's produced. I can put conversion code right into that class, and it may be a good solution. Except then, to be consistent, I would be putting conversion procedures everywhere in the code at the data source, or right after reading from the database. The problem, though, is that my code would then have to deal with two systems throughout the codebase, should I do this. 2) According to the rules, my idea was to put it in the view script, aka the last chance to modify it before it is shown to the user. And it may be the right thing to do, but then it strikes me it may not always be the best solution. (First, it complicates the view script a tad; second, I need to do more work on the data side to split things up more, or do extra parsing, such as in my case above.) 3) Another solution is to do this somewhere in the data prep step, aka somewhere in the middle, after the data source but before the view. This strikes me as messy, and that could be the reason why my codebase is in such a mess right now. It seems that there is no best solution. What do I do?

    Read the article

  • Volume control doesn't work for headphones

    - by Hendekagon
    Fresh install of 11.10, and the GNOME panel's volume control doesn't work for headphones but does work for speakers on my laptop. In Sound Settings I have Internal Audio Analogue Stereo selected for output (Output tab) and Analogue Speakers selected for Connector. When I plug headphones in, the volume control doesn't work and the volume is at maximum through the headphones. On a side note, only one of my two headphone sockets produces sound (whereas both worked in 10.04).
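
    A minimal diagnostic sketch, assuming the internal card is ALSA card 0 and that it exposes a "Headphone" mixer control (both are assumptions; the control may be named differently on this laptop):

        amixer -c 0 scontrols                    # list the mixer controls on card 0
        amixer -c 0 sget Headphone               # check the current Headphone volume/mute state
        amixer -c 0 sset Headphone 70% unmute    # try setting it directly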

    Read the article

  • reference list for non-IT driven algorithmic patterns

    - by Quicker
    I am looking for a reference list of non-IT-driven algorithmic patterns (which can still be helped by IT implementations). An example list would be: name; short desc; reference Travelling Salesman; find the shortest possible route on a multiple-target path; http://en.wikipedia.org/wiki/Travelling_salesman_problem Resource Disposition (aka Regulation); distribute a limited/exceeding input across a given number of output receivers based on distribution rules; http://database-programmer.blogspot.de/2010/12/critical-analysis-of-algorithm-sproc.html If there is no such list, but you instantly think of something specific, please 'put it on the desk'. Maybe I can compile something out of the input I get here (actually I am very frustrated, as I did not find any such list via research by myself). Details on Scoping: I found it very hard to formulate what I want in a way that excludes everything I do not need (which may be why I did not find anything on Google). There is a database-centric definition of what I am looking for in the section 'Processes' of the second example link. That somehow fits, but the database focus sort of drifts away from the pattern thinking I have in mind. So here are my own thoughts on what's in and what's out: I am NOT looking for a foundational algorithm reference list of the kind implemented as the basis of any programming language. E.g. the PHP reference describes substr and strlen; those implement algorithms, but are not what I am looking for. The problem the algorithm addresses would exist even if there were no computers (or other IT components). The main focus of the algorithm is NOT to help other algorithms. Chances are high that there are implementations of the solution, or some workaround without any IT support, out there in the world. However, the algorithm could be beneficially implemented/fully supported by a software application; that means the problem has to be addressed anyway, but running an implementation in software automates the process (that is why I posted it on Stack Overflow and not somewhere else). Typically such implementations have more than one input field value and more than one output field value, which implies they could not be implemented as a simple function (which is fixed to produce no more than one output value). In a normalized data model, such implementation outputs often span across multiple rows (sometimes multiple tables), whereby the number of output rows depends on the input parameters and the rows in the table(s) at start time - which implies that any implementation/procedure must interact with a database (read and/or write). I am mainly looking for patterns, not for specific implementations. Example: the Travelling Salesman assumes any coordinates; however, it does not say: you need a table 'targets' with fields x and y. Sometimes descriptions are focused very much on examples with specific implementations - no worries, as long as the pattern gets clear.

    Read the article

  • emachines e725 resolution problem

    - by Montasir
    I am new to Ubuntu. I cannot set my resolution to 1366x768; the only option offered in the graphical display settings is 1024x768. I tried cvt 1366 768 and xrandr --newmode "1368x768_60.00" 85.86 1368 1440 1584 1800 768 769 772 795 -HSync +Vsync and got xrandr: Failed to get size of gamma for output default X Error of failed request: BadName (named color or font does not exist) Major opcode of failed request: 149 (RANDR) Minor opcode of failed request: 16 (RRCreateMode) Serial number of failed request: 19 Current serial number in output stream: 19 What can I do?
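
    For reference, a sketch of the usual follow-up steps after cvt / --newmode. The output name below (VGA1) is a guess and must be replaced with the name reported by xrandr -q; the fact that the error mentions an output called "default" suggests X is running on a fallback driver, in which case installing the proper graphics driver is the likelier fix:

        xrandr -q                                  # find the real output name (VGA1, LVDS1, ...)
        xrandr --newmode "1368x768_60.00" 85.86 1368 1440 1584 1800 768 769 772 795 -hsync +vsync
        xrandr --addmode VGA1 "1368x768_60.00"     # replace VGA1 with the name reported above
        xrandr --output VGA1 --mode "1368x768_60.00"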

    Read the article

  • how can I change user permission settings for an application

    - by naftalimich
    I am trying to prevent the guest user from accessing software-center (I know it can't install anything, but I don't even want it to browse). The file is owned by root and in group root. I run: sudo chmod o= /usr/bin/software-center Afterwards I run ls -l and get the following output: lrwxrwxrwx 1 root root 40 Sep 10 14:28 /usr/bin/software-center -> ../share/software-center/software-center The first part of the output suggests chmod didn't do anything, and the second part I don't understand.
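
    One detail worth noting: /usr/bin/software-center is a symlink, and ls -l on a symlink always shows lrwxrwxrwx regardless of the target's mode, so chmod most likely did change the target. A small sketch for checking and changing the real file instead (whether this actually blocks the guest session is a separate question):

        readlink -f /usr/bin/software-center                        # resolve the symlink to the real file
        ls -l "$(readlink -f /usr/bin/software-center)"             # its permissions are what chmod changed
        sudo chmod o= "$(readlink -f /usr/bin/software-center)"     # or chmod the target path directly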

    Read the article

  • Top Exastack Partner Headlines - 7 Nov

    - by Roxana Babiciu
    R-Style Softlab recommends RS-solutions with Exadata and SuperCluster to improve business performance for their customers. Read more. WingArc, a subsidiary of 1st Holdings, Inc., finds Oracle Exadata’s substantial computing power an enabler for their enterprise output management solution (SVF/RD). It helps solve the problem of providing an integrated output platform for high-volume businesses. Read more.

    Read the article

  • video playback works only as root (ubuntu 13.10 64 bit)

    - by Hybris
    That is, video playback with anything: Chrome (HTML5), Firefox (Flash), VLC, Totem, SMPlayer... whatever. It works only if the software is started as root; otherwise it freezes at the beginning. Interestingly enough, in Chrome you can move the slider to whatever position and see the current frame updated; however, the video stays still. This started to happen a couple of days ago after an unidentified update. Relevant output from Chrome run as a normal user gives some hint: NVIDIA: could not open the device file /dev/nvidia0 There is no output coming from Firefox or VLC. $ ls -l /dev/nvidia0 crw-rw-rw- 1 root root 195, 0 nov 8 21:18 /dev/nvidia0

    Read the article

  • Ubuntu 10.04 not detecting multiple monitors

    - by user28837
    I have 2 graphics cards, the output from the lspci: 01:00.0 VGA compatible controller: ATI Technologies Inc RV770 [Radeon HD 4850] 02:00.0 VGA compatible controller: ATI Technologies Inc RV710 [Radeon HD 4350] I have one monitor connected to the 4850 and 2 connected to the 4350. However when I go into System Preferences Monitors the only monitor shown is the one connected to the 4850. Is there something I need to enable for it to be able to use the other card? How do I get this to work. Thanks. As per request: X.Org X Server 1.7.6 Release Date: 2010-03-17 X Protocol Version 11, Revision 0 Build Operating System: Linux 2.6.24-25-server i686 Ubuntu Current Operating System: Linux jeff-desktop 2.6.32-22-generic-pae #33-Ubuntu SMP Wed Apr 28 14:57:29 UTC 2010 i686 Kernel command line: BOOT_IMAGE=/boot/vmlinuz-2.6.32-22-generic-pae root=UUID=852e1013-4ed6-40fd-a462-c29087888383 ro quiet splash Build Date: 23 April 2010 05:11:50PM xorg-server 2:1.7.6-2ubuntu7 (Bryce Harrington <[email protected]>) Current version of pixman: 0.16.4 Before reporting problems, check http://wiki.x.org to make sure that you have the latest version. Markers: (--) probed, (**) from config file, (==) default setting, (++) from command line, (!!) notice, (II) informational, (WW) warning, (EE) error, (NI) not implemented, (??) unknown. (==) Log file: "/var/log/Xorg.0.log", Time: Tue May 11 08:24:52 2010 (==) Using config file: "/etc/X11/xorg.conf" (==) Using config directory: "/usr/lib/X11/xorg.conf.d" (==) No Layout section. Using the first Screen section. (**) |-->Screen "Default Screen" (0) (**) | |-->Monitor "<default monitor>" (==) No device specified for screen "Default Screen". Using the first device section listed. (**) | |-->Device "Default Device" (==) No monitor specified for screen "Default Screen". Using a default monitor configuration. (==) Automatically adding devices (==) Automatically enabling devices (WW) The directory "/usr/share/fonts/X11/cyrillic" does not exist. Entry deleted from font path. (==) FontPath set to: /usr/share/fonts/X11/misc, /usr/share/fonts/X11/100dpi/:unscaled, /usr/share/fonts/X11/75dpi/:unscaled, /usr/share/fonts/X11/Type1, /usr/share/fonts/X11/100dpi, /usr/share/fonts/X11/75dpi, /var/lib/defoma/x-ttcidfont-conf.d/dirs/TrueType, built-ins (==) ModulePath set to "/usr/lib/xorg/extra-modules,/usr/lib/xorg/modules" (II) The server relies on udev to provide the list of input devices. If no devices become available, reconfigure udev or disable AutoAddDevices. (II) Loader magic: 0x81f0e80 (II) Module ABI versions: X.Org ANSI C Emulation: 0.4 X.Org Video Driver: 6.0 X.Org XInput driver : 7.0 X.Org Server Extension : 2.0 (++) using VT number 7 (--) PCI:*(0:1:0:0) 1002:9442:174b:e104 ATI Technologies Inc RV770 [Radeon HD 4850] rev 0, Mem @ 0xc0000000/268435456, 0xfe7e0000/65536, I/O @ 0x0000a000/256, BIOS @ 0x????????/131072 (--) PCI: (0:2:0:0) 1002:954f:1462:1618 ATI Technologies Inc RV710 [Radeon HD 4350] rev 0, Mem @ 0xd0000000/268435456, 0xfe8e0000/65536, I/O @ 0x0000b000/256, BIOS @ 0x????????/131072 (WW) Open ACPI failed (/var/run/acpid.socket) (No such file or directory) (II) "extmod" will be loaded by default. (II) "dbe" will be loaded by default. (II) "glx" will be loaded. This was enabled by default and also specified in the config file. (II) "record" will be loaded by default. (II) "dri" will be loaded by default. (II) "dri2" will be loaded by default. 
(II) LoadModule: "glx" (II) Loading /usr/lib/xorg/extra-modules/modules/extensions/libglx.so (II) Module glx: vendor="FireGL - ATI Technologies Inc." compiled for 7.5.0, module version = 1.0.0 (II) Loading extension GLX (II) LoadModule: "extmod" (II) Loading /usr/lib/xorg/modules/extensions/libextmod.so (II) Module extmod: vendor="X.Org Foundation" compiled for 1.7.6, module version = 1.0.0 Module class: X.Org Server Extension ABI class: X.Org Server Extension, version 2.0 (II) Loading extension MIT-SCREEN-SAVER (II) Loading extension XFree86-VidModeExtension (II) Loading extension XFree86-DGA (II) Loading extension DPMS (II) Loading extension XVideo (II) Loading extension XVideo-MotionCompensation (II) Loading extension X-Resource (II) LoadModule: "dbe" (II) Loading /usr/lib/xorg/modules/extensions/libdbe.so (II) Module dbe: vendor="X.Org Foundation" compiled for 1.7.6, module version = 1.0.0 Module class: X.Org Server Extension ABI class: X.Org Server Extension, version 2.0 (II) Loading extension DOUBLE-BUFFER (II) LoadModule: "record" (II) Loading /usr/lib/xorg/modules/extensions/librecord.so (II) Module record: vendor="X.Org Foundation" compiled for 1.7.6, module version = 1.13.0 Module class: X.Org Server Extension ABI class: X.Org Server Extension, version 2.0 (II) Loading extension RECORD (II) LoadModule: "dri" (II) Loading /usr/lib/xorg/modules/extensions/libdri.so (II) Module dri: vendor="X.Org Foundation" compiled for 1.7.6, module version = 1.0.0 ABI class: X.Org Server Extension, version 2.0 (II) Loading extension XFree86-DRI (II) LoadModule: "dri2" (II) Loading /usr/lib/xorg/modules/extensions/libdri2.so (II) Module dri2: vendor="X.Org Foundation" compiled for 1.7.6, module version = 1.1.0 ABI class: X.Org Server Extension, version 2.0 (II) Loading extension DRI2 (II) LoadModule: "fglrx" (II) Loading /usr/lib/xorg/extra-modules/modules/drivers/fglrx_drv.so (II) Module fglrx: vendor="FireGL - ATI Technologies Inc." compiled for 1.7.1, module version = 8.72.11 Module class: X.Org Video Driver (II) Loading sub module "fglrxdrm" (II) LoadModule: "fglrxdrm" (II) Loading /usr/lib/xorg/extra-modules/modules/linux/libfglrxdrm.so (II) Module fglrxdrm: vendor="FireGL - ATI Technologies Inc." 
compiled for 1.7.1, module version = 8.72.11 (II) ATI Proprietary Linux Driver Version Identifier:8.72.11 (II) ATI Proprietary Linux Driver Release Identifier: 8.723.1 (II) ATI Proprietary Linux Driver Build Date: Apr 8 2010 21:40:29 (II) Primary Device is: PCI 01@00:00:0 (WW) Falling back to old probe method for fglrx (II) Loading PCS database from /etc/ati/amdpcsdb (--) Assigning device section with no busID to primary device (WW) fglrx: No matching Device section for instance (BusID PCI:0@2:0:0) found (--) Chipset Supported AMD Graphics Processor (0x9442) found (WW) fglrx: No matching Device section for instance (BusID PCI:0@1:0:1) found (WW) fglrx: No matching Device section for instance (BusID PCI:0@2:0:1) found (**) ChipID override: 0x954F (**) Chipset Supported AMD Graphics Processor (0x954F) found (II) AMD Video driver is running on a device belonging to a group targeted for this release (II) AMD Video driver is signed (II) fglrx(0): pEnt->device->identifier=0x9428aa0 (II) pEnt->device->identifier=(nil) (II) fglrx(0): === [atiddxPreInit] === begin (II) Loading sub module "vgahw" (II) LoadModule: "vgahw" (II) Loading /usr/lib/xorg/modules/libvgahw.so (II) Module vgahw: vendor="X.Org Foundation" compiled for 1.7.6, module version = 0.1.0 ABI class: X.Org Video Driver, version 6.0 (II) fglrx(0): Creating default Display subsection in Screen section "Default Screen" for depth/fbbpp 24/32 (**) fglrx(0): Depth 24, (--) framebuffer bpp 32 (II) fglrx(0): Pixel depth = 24 bits stored in 4 bytes (32 bpp pixmaps) (==) fglrx(0): Default visual is TrueColor (==) fglrx(0): RGB weight 888 (II) fglrx(0): Using 8 bits per RGB (==) fglrx(0): Buffer Tiling is ON (II) Loading sub module "fglrxdrm" (II) LoadModule: "fglrxdrm" (II) Reloading /usr/lib/xorg/extra-modules/modules/linux/libfglrxdrm.so ukiDynamicMajor: found major device number 251 ukiDynamicMajor: found major device number 251 ukiOpenByBusid: Searching for BusID PCI:1:0:0 ukiOpenDevice: node name is /dev/ati/card0 ukiOpenDevice: open result is 10, (OK) ukiOpenByBusid: ukiOpenMinor returns 10 ukiOpenByBusid: ukiGetBusid reports PCI:2:0:0 ukiOpenDevice: node name is /dev/ati/card1 ukiOpenDevice: open result is 10, (OK) ukiOpenByBusid: ukiOpenMinor returns 10 ukiOpenByBusid: ukiGetBusid reports PCI:1:0:0 ukiDynamicMajor: found major device number 251 ukiDynamicMajor: found major device number 251 ukiOpenByBusid: Searching for BusID PCI:2:0:0 ukiOpenDevice: node name is /dev/ati/card0 ukiOpenDevice: open result is 11, (OK) ukiOpenByBusid: ukiOpenMinor returns 11 ukiOpenByBusid: ukiGetBusid reports PCI:2:0:0 (--) fglrx(0): Chipset: "ATI Radeon HD 4800 Series" (Chipset = 0x9442) (--) fglrx(0): (PciSubVendor = 0x174b, PciSubDevice = 0xe104) (==) fglrx(0): board vendor info: third party graphics adapter - NOT original ATI (--) fglrx(0): Linear framebuffer (phys) at 0xc0000000 (--) fglrx(0): MMIO registers at 0xfe7e0000 (--) fglrx(0): I/O port at 0x0000a000 (==) fglrx(0): ROM-BIOS at 0x000c0000 (II) fglrx(0): AC Adapter is used (II) fglrx(0): Primary V_BIOS segment is: 0xc000 (II) Loading sub module "vbe" (II) LoadModule: "vbe" (II) Loading /usr/lib/xorg/modules/libvbe.so (II) Module vbe: vendor="X.Org Foundation" compiled for 1.7.6, module version = 1.1.0 ABI class: X.Org Video Driver, version 6.0 (II) fglrx(0): VESA BIOS detected (II) fglrx(0): VESA VBE Version 3.0 (II) fglrx(0): VESA VBE Total Mem: 16384 kB (II) fglrx(0): VESA VBE OEM: ATI ATOMBIOS (II) fglrx(0): VESA VBE OEM Software Rev: 11.13 (II) fglrx(0): VESA VBE OEM Vendor: (C) 1988-2005, 
ATI Technologies Inc. (II) fglrx(0): VESA VBE OEM Product: RV770 (II) fglrx(0): VESA VBE OEM Product Rev: 01.00 (II) fglrx(0): ATI Video BIOS revision 9 or later detected (--) fglrx(0): Video RAM: 524288 kByte, Type: GDDR3 (II) fglrx(0): PCIE card detected (--) fglrx(0): Using per-process page tables (PPPT) as GART. (WW) fglrx(0): board is an unknown third party board, chipset is supported (--) fglrx(0): Chipset: "ATI Radeon HD 4300/4500 Series" (Chipset = 0x954f) (--) fglrx(0): (PciSubVendor = 0x1462, PciSubDevice = 0x1618) (==) fglrx(0): board vendor info: third party graphics adapter - NOT original ATI (--) fglrx(0): Linear framebuffer (phys) at 0xd0000000 (--) fglrx(0): MMIO registers at 0xfe8e0000 (--) fglrx(0): I/O port at 0x0000b000 (==) fglrx(0): ROM-BIOS at 0x000c0000 (II) fglrx(0): AC Adapter is used (II) fglrx(0): Invalid ATI BIOS from int10, the adapter is not VGA-enabled (II) fglrx(0): ATI Video BIOS revision 9 or later detected (--) fglrx(0): Video RAM: 524288 kByte, Type: DDR2 (II) fglrx(0): PCIE card detected (--) fglrx(0): Using per-process page tables (PPPT) as GART. (WW) fglrx(0): board is an unknown third party board, chipset is supported (II) fglrx(0): Using adapter: 1:0.0. (II) fglrx(0): [FB] MC range(MCFBBase = 0xf00000000, MCFBSize = 0x20000000) (II) fglrx(0): Interrupt handler installed at IRQ 31. (II) fglrx(0): Using adapter: 2:0.0. (II) fglrx(0): [FB] MC range(MCFBBase = 0xf00000000, MCFBSize = 0x20000000) (II) fglrx(0): RandR 1.2 support is enabled! (II) fglrx(0): RandR 1.2 rotation support is enabled! (==) fglrx(0): Center Mode is disabled (II) Loading sub module "fb" (II) LoadModule: "fb" (II) Loading /usr/lib/xorg/modules/libfb.so (II) Module fb: vendor="X.Org Foundation" compiled for 1.7.6, module version = 1.0.0 ABI class: X.Org ANSI C Emulation, version 0.4 (II) Loading sub module "ddc" (II) LoadModule: "ddc" (II) Module "ddc" already built-in (II) fglrx(0): Finished Initialize PPLIB! 
(II) Loading sub module "ddc" (II) LoadModule: "ddc" (II) Module "ddc" already built-in (II) fglrx(0): Connected Display0: DFP on external TMDS [tmds2] (II) fglrx(0): Display0 EDID data --------------------------- (II) fglrx(0): Manufacturer: DEL Model: a038 Serial#: 810829397 (II) fglrx(0): Year: 2008 Week: 51 (II) fglrx(0): EDID Version: 1.3 (II) fglrx(0): Digital Display Input (II) fglrx(0): Max Image Size [cm]: horiz.: 53 vert.: 30 (II) fglrx(0): Gamma: 2.20 (II) fglrx(0): DPMS capabilities: StandBy Suspend Off (II) fglrx(0): Supported color encodings: RGB 4:4:4 YCrCb 4:4:4 (II) fglrx(0): Default color space is primary color space (II) fglrx(0): First detailed timing is preferred mode (II) fglrx(0): redX: 0.640 redY: 0.330 greenX: 0.300 greenY: 0.600 (II) fglrx(0): blueX: 0.150 blueY: 0.060 whiteX: 0.312 whiteY: 0.329 (II) fglrx(0): Supported established timings: (II) fglrx(0): 720x400@70Hz (II) fglrx(0): 640x480@60Hz (II) fglrx(0): 640x480@75Hz (II) fglrx(0): 800x600@60Hz (II) fglrx(0): 800x600@75Hz (II) fglrx(0): 1024x768@60Hz (II) fglrx(0): 1024x768@75Hz (II) fglrx(0): 1280x1024@75Hz (II) fglrx(0): Manufacturer's mask: 0 (II) fglrx(0): Supported standard timings: (II) fglrx(0): #0: hsize: 1152 vsize 864 refresh: 75 vid: 20337 (II) fglrx(0): #1: hsize: 1280 vsize 1024 refresh: 60 vid: 32897 (II) fglrx(0): #2: hsize: 1920 vsize 1080 refresh: 60 vid: 49361 (II) fglrx(0): Supported detailed timing: (II) fglrx(0): clock: 148.5 MHz Image Size: 531 x 298 mm (II) fglrx(0): h_active: 1920 h_sync: 2008 h_sync_end 2052 h_blank_end 2200 h_border: 0 (II) fglrx(0): v_active: 1080 v_sync: 1084 v_sync_end 1089 v_blanking: 1125 v_border: 0 (II) fglrx(0): Serial No: Y183D8CF0TFU (II) fglrx(0): Monitor name: DELL S2409W (II) fglrx(0): Ranges: V min: 50 V max: 76 Hz, H min: 30 H max: 83 kHz, PixClock max 170 MHz (II) fglrx(0): EDID (in hex): (II) fglrx(0): 00ffffffffffff0010ac38a055465430 (II) fglrx(0): 3312010380351e78eeee91a3544c9926 (II) fglrx(0): 0f5054a54b00714f8180d1c001010101 (II) fglrx(0): 010101010101023a801871382d40582c (II) fglrx(0): 4500132a2100001e000000ff00593138 (II) fglrx(0): 3344384346305446550a000000fc0044 (II) fglrx(0): 454c4c205332343039570a20000000fd (II) fglrx(0): 00324c1e5311000a2020202020200059 (II) fglrx(0): End of Display0 EDID data -------------------- (II) fglrx(0): Output DFP2 has no monitor section (II) fglrx(0): Output DFP_EXTTMDS has no monitor section (II) fglrx(0): Output CRT1 has no monitor section (II) fglrx(0): Output CRT2 has no monitor section (II) fglrx(0): Output DFP2 disconnected (II) fglrx(0): Output DFP_EXTTMDS connected (II) fglrx(0): Output CRT1 disconnected (II) fglrx(0): Output CRT2 disconnected (II) fglrx(0): Using exact sizes for initial modes (II) fglrx(0): Output DFP_EXTTMDS using initial mode 1920x1080 (II) fglrx(0): DPI set to (96, 96) (II) fglrx(0): Adapter ATI Radeon HD 4800 Series has 2 configurable heads and 1 displays connected. 
(==) fglrx(0): QBS disabled (==) fglrx(0): PseudoColor visuals disabled (II) Loading sub module "ramdac" (II) LoadModule: "ramdac" (II) Module "ramdac" already built-in (==) fglrx(0): NoAccel = NO (==) fglrx(0): NoDRI = NO (==) fglrx(0): Capabilities: 0x00000000 (==) fglrx(0): CapabilitiesEx: 0x00000000 (==) fglrx(0): OpenGL ClientDriverName: "fglrx_dri.so" (==) fglrx(0): UseFastTLS=0 (==) fglrx(0): BlockSignalsOnLock=1 (--) Depth 24 pixmap format is 32 bpp (II) Loading extension ATIFGLRXDRI (II) fglrx(0): doing swlDriScreenInit (II) fglrx(0): swlDriScreenInit for fglrx driver ukiDynamicMajor: found major device number 251 ukiDynamicMajor: found major device number 251 ukiDynamicMajor: found major device number 251 ukiOpenByBusid: Searching for BusID PCI:1:0:0 ukiOpenDevice: node name is /dev/ati/card0 ukiOpenDevice: open result is 17, (OK) ukiOpenByBusid: ukiOpenMinor returns 17 ukiOpenByBusid: ukiGetBusid reports PCI:2:0:0 ukiOpenDevice: node name is /dev/ati/card1 ukiOpenDevice: open result is 17, (OK) ukiOpenByBusid: ukiOpenMinor returns 17 ukiOpenByBusid: ukiGetBusid reports PCI:1:0:0 (II) fglrx(0): [uki] DRM interface version 1.0 (II) fglrx(0): [uki] created "fglrx" driver at busid "PCI:1:0:0" (II) fglrx(0): [uki] added 8192 byte SAREA at 0x2000 (II) fglrx(0): [uki] mapped SAREA 0x2000 to 0xb6996000 (II) fglrx(0): [uki] framebuffer handle = 0x3000 (II) fglrx(0): [uki] added 1 reserved context for kernel (II) fglrx(0): swlDriScreenInit done (II) fglrx(0): Kernel Module Version Information: (II) fglrx(0): Name: fglrx (II) fglrx(0): Version: 8.72.11 (II) fglrx(0): Date: Apr 8 2010 (II) fglrx(0): Desc: ATI FireGL DRM kernel module (II) fglrx(0): Kernel Module version matches driver. (II) fglrx(0): Kernel Module Build Time Information: (II) fglrx(0): Build-Kernel UTS_RELEASE: 2.6.32-22-generic-pae (II) fglrx(0): Build-Kernel MODVERSIONS: yes (II) fglrx(0): Build-Kernel __SMP__: yes (II) fglrx(0): Build-Kernel PAGE_SIZE: 0x1000 (II) fglrx(0): [uki] register handle = 0x00004000 (II) fglrx(0): DRI initialization successfull! (II) fglrx(0): FBADPhys: 0xf00000000 FBMappedSize: 0x01068000 (II) fglrx(0): FBMM initialized for area (0,0)-(1920,2240) (II) fglrx(0): FBMM auto alloc for area (0,0)-(1920,1920) (front color buffer - assumption) (II) fglrx(0): Largest offscreen area available: 1920 x 320 (==) fglrx(0): Backing store disabled (II) Loading extension FGLRXEXTENSION (==) fglrx(0): DPMS enabled (II) fglrx(0): Initialized in-driver Xinerama extension (**) fglrx(0): Textured Video is enabled. 
(II) LoadModule: "glesx" (II) Loading /usr/lib/xorg/extra-modules/modules/glesx.so (II) Module glesx: vendor="X.Org Foundation" compiled for 1.7.1, module version = 1.0.0 (II) Loading extension GLESX (II) Loading sub module "xaa" (II) LoadModule: "xaa" (II) Loading /usr/lib/xorg/modules/libxaa.so (II) Module xaa: vendor="X.Org Foundation" compiled for 1.7.6, module version = 1.2.1 ABI class: X.Org Video Driver, version 6.0 (II) fglrx(0): GLESX enableFlags = 94 (II) fglrx(0): Using XFree86 Acceleration Architecture (XAA) Screen to screen bit blits Solid filled rectangles Solid Horizontal and Vertical Lines Driver provided ScreenToScreenBitBlt replacement Driver provided FillSolidRects replacement (II) fglrx(0): GLESX is enabled (II) LoadModule: "amdxmm" (II) Loading /usr/lib/xorg/extra-modules/modules/amdxmm.so (II) Module amdxmm: vendor="X.Org Foundation" compiled for 1.7.1, module version = 1.0.0 (II) Loading extension AMDXVOPL (II) fglrx(0): UVD2 feature is available (II) fglrx(0): Enable composite support successfully (II) fglrx(0): X context handle = 0x1 (II) fglrx(0): [DRI] installation complete (==) fglrx(0): Silken mouse enabled (==) fglrx(0): Using HW cursor of display infrastructure! (II) fglrx(0): Disabling in-server RandR and enabling in-driver RandR 1.2. (--) RandR disabled (II) Found 2 VGA devices: arbiter wrapping enabled (II) Initializing built-in extension Generic Event Extension (II) Initializing built-in extension SHAPE (II) Initializing built-in extension MIT-SHM (II) Initializing built-in extension XInputExtension (II) Initializing built-in extension XTEST (II) Initializing built-in extension BIG-REQUESTS (II) Initializing built-in extension SYNC (II) Initializing built-in extension XKEYBOARD (II) Initializing built-in extension XC-MISC (II) Initializing built-in extension SECURITY (II) Initializing built-in extension XINERAMA (II) Initializing built-in extension XFIXES (II) Initializing built-in extension RENDER (II) Initializing built-in extension RANDR (II) Initializing built-in extension COMPOSITE (II) Initializing built-in extension DAMAGE ukiDynamicMajor: found major device number 251 ukiDynamicMajor: found major device number 251 ukiOpenByBusid: Searching for BusID PCI:1:0:0 ukiOpenDevice: node name is /dev/ati/card0 ukiOpenDevice: open result is 18, (OK) ukiOpenByBusid: ukiOpenMinor returns 18 ukiOpenByBusid: ukiGetBusid reports PCI:2:0:0 ukiOpenDevice: node name is /dev/ati/card1 ukiOpenDevice: open result is 18, (OK) ukiOpenByBusid: ukiOpenMinor returns 18 ukiOpenByBusid: ukiGetBusid reports PCI:1:0:0 (II) AIGLX: Loaded and initialized /usr/lib/dri/fglrx_dri.so (II) GLX: Initialized DRI GL provider for screen 0 (II) fglrx(0): Enable the clock gating! 
(II) fglrx(0): Setting screen physical size to 507 x 285 (II) XKB: reuse xkmfile /var/lib/xkb/server-B20D7FC79C7F597315E3E501AEF10E0D866E8E92.xkm (II) config/udev: Adding input device Power Button (/dev/input/event1) (**) Power Button: Applying InputClass "evdev keyboard catchall" (II) LoadModule: "evdev" (II) Loading /usr/lib/xorg/modules/input/evdev_drv.so (II) Module evdev: vendor="X.Org Foundation" compiled for 1.7.6, module version = 2.3.2 Module class: X.Org XInput Driver ABI class: X.Org XInput driver, version 7.0 (**) Power Button: always reports core events (**) Power Button: Device: "/dev/input/event1" (II) Power Button: Found keys (II) Power Button: Configuring as keyboard (II) XINPUT: Adding extended input device "Power Button" (type: KEYBOARD) (**) Option "xkb_rules" "evdev" (**) Option "xkb_model" "pc105" (**) Option "xkb_layout" "us" (II) config/udev: Adding input device Power Button (/dev/input/event0) (**) Power Button: Applying InputClass "evdev keyboard catchall" (**) Power Button: always reports core events (**) Power Button: Device: "/dev/input/event0" (II) Power Button: Found keys (II) Power Button: Configuring as keyboard (II) XINPUT: Adding extended input device "Power Button" (type: KEYBOARD) (**) Option "xkb_rules" "evdev" (**) Option "xkb_model" "pc105" (**) Option "xkb_layout" "us" (II) config/udev: Adding input device Logitech USB-PS/2 Optical Mouse (/dev/input/event3) (**) Logitech USB-PS/2 Optical Mouse: Applying InputClass "evdev pointer catchall" (**) Logitech USB-PS/2 Optical Mouse: always reports core events (**) Logitech USB-PS/2 Optical Mouse: Device: "/dev/input/event3" (II) Logitech USB-PS/2 Optical Mouse: Found 12 mouse buttons (II) Logitech USB-PS/2 Optical Mouse: Found scroll wheel(s) (II) Logitech USB-PS/2 Optical Mouse: Found relative axes (II) Logitech USB-PS/2 Optical Mouse: Found x and y relative axes (II) Logitech USB-PS/2 Optical Mouse: Configuring as mouse (**) Logitech USB-PS/2 Optical Mouse: YAxisMapping: buttons 4 and 5 (**) Logitech USB-PS/2 Optical Mouse: EmulateWheelButton: 4, EmulateWheelInertia: 10, EmulateWheelTimeout: 200 (II) XINPUT: Adding extended input device "Logitech USB-PS/2 Optical Mouse" (type: MOUSE) (II) Logitech USB-PS/2 Optical Mouse: initialized for relative axes. 
(II) config/udev: Adding input device Logitech USB-PS/2 Optical Mouse (/dev/input/mouse1) (II) No input driver/identifier specified (ignoring) (II) config/udev: Adding input device Logitech USB Multimedia Keyboard (/dev/input/event4) (**) Logitech USB Multimedia Keyboard: Applying InputClass "evdev keyboard catchall" (**) Logitech USB Multimedia Keyboard: always reports core events (**) Logitech USB Multimedia Keyboard: Device: "/dev/input/event4" (II) Logitech USB Multimedia Keyboard: Found keys (II) Logitech USB Multimedia Keyboard: Configuring as keyboard (II) XINPUT: Adding extended input device "Logitech USB Multimedia Keyboard" (type: KEYBOARD) (**) Option "xkb_rules" "evdev" (**) Option "xkb_model" "pc105" (**) Option "xkb_layout" "us" (II) config/udev: Adding input device Logitech USB Multimedia Keyboard (/dev/input/event5) (**) Logitech USB Multimedia Keyboard: Applying InputClass "evdev keyboard catchall" (**) Logitech USB Multimedia Keyboard: always reports core events (**) Logitech USB Multimedia Keyboard: Device: "/dev/input/event5" (II) Logitech USB Multimedia Keyboard: Found keys (II) Logitech USB Multimedia Keyboard: Configuring as keyboard (II) XINPUT: Adding extended input device "Logitech USB Multimedia Keyboard" (type: KEYBOARD) (**) Option "xkb_rules" "evdev" (**) Option "xkb_model" "pc105" (**) Option "xkb_layout" "us" (II) config/udev: Adding input device KEYBOARD (/dev/input/event6) (**) KEYBOARD: Applying InputClass "evdev keyboard catchall" (**) KEYBOARD: always reports core events (**) KEYBOARD: Device: "/dev/input/event6" (II) KEYBOARD: Found keys (II) KEYBOARD: Configuring as keyboard (II) XINPUT: Adding extended input device "KEYBOARD" (type: KEYBOARD) (**) Option "xkb_rules" "evdev" (**) Option "xkb_model" "pc105" (**) Option "xkb_layout" "us" (II) config/udev: Adding input device KEYBOARD (/dev/input/event7) (**) KEYBOARD: Applying InputClass "evdev keyboard catchall" (**) KEYBOARD: always reports core events (**) KEYBOARD: Device: "/dev/input/event7" (II) KEYBOARD: Found 14 mouse buttons (II) KEYBOARD: Found scroll wheel(s) (II) KEYBOARD: Found relative axes (II) KEYBOARD: Found keys (II) KEYBOARD: Configuring as mouse (II) KEYBOARD: Configuring as keyboard (**) KEYBOARD: YAxisMapping: buttons 4 and 5 (**) KEYBOARD: EmulateWheelButton: 4, EmulateWheelInertia: 10, EmulateWheelTimeout: 200 (II) XINPUT: Adding extended input device "KEYBOARD" (type: KEYBOARD) (**) Option "xkb_rules" "evdev" (**) Option "xkb_model" "pc105" (**) Option "xkb_layout" "us" (EE) KEYBOARD: failed to initialize for relative axes. 
(II) config/udev: Adding input device KEYBOARD (/dev/input/mouse2) (II) No input driver/identifier specified (ignoring) (II) config/udev: Adding input device Macintosh mouse button emulation (/dev/input/event2) (**) Macintosh mouse button emulation: Applying InputClass "evdev pointer catchall" (**) Macintosh mouse button emulation: always reports core events (**) Macintosh mouse button emulation: Device: "/dev/input/event2" (II) Macintosh mouse button emulation: Found 3 mouse buttons (II) Macintosh mouse button emulation: Found relative axes (II) Macintosh mouse button emulation: Found x and y relative axes (II) Macintosh mouse button emulation: Configuring as mouse (**) Macintosh mouse button emulation: YAxisMapping: buttons 4 and 5 (**) Macintosh mouse button emulation: EmulateWheelButton: 4, EmulateWheelInertia: 10, EmulateWheelTimeout: 200 (II) XINPUT: Adding extended input device "Macintosh mouse button emulation" (type: MOUSE) (II) Macintosh mouse button emulation: initialized for relative axes. (II) config/udev: Adding input device Macintosh mouse button emulation (/dev/input/mouse0) (II) No input driver/identifier specified (ignoring) (II) fglrx(0): Restoring Recent Mode via PCS is not supported in RANDR 1.2 capable environments

    Read the article

  • udp expected behaviour not responding to test result

    - by ernst
    I have a local network topology that is structured as follows: three hosts and a switch in the middle. I am using a switch that supports 10/100/1000 Mbit/s full/half duplex connections. I have configured the hosts with static IPs 172.16.0.1-3/25. This is the output of ifconfig: eth0 Link encap: Ethernet HWaddr ***** inet addr:172.16.0.3 Bcast:172.16.0.127 Mask:255.255.255.128 UP BROADCAST MULTICAST MTU:1500 Metric:1 RX packets:0 errors:0 dropped:0 overruns:0 frame:0 TX packets:0 errors:0 dropped:0 overruns:0 carrier:0 collisions:0 txqueuelen:1000 RX bytes:0 (0.0 B) TX bytes:0 (0.0 B) Interrupt:16 The output on H1 and H2 matches this exactly. They are mutually reachable, since I have tested the network with ping. I have forced the Ethernet interface to work at 10M with ethtool -s eth0 speed 10 duplex full autoneg on. This is the output of ethtool eth0: Supported ports: [ TP ] Supported link modes: 10baseT/Half 10baseT/Full 100baseT/Half 100baseT/Full 1000baseT/Half 1000baseT/Full Supported pause frame use: No Supports auto-negotiation: Yes Advertised link modes: 10baseT/Full Advertised pause frame use: Symmetric Advertised auto-negotiation: Yes Speed: 10Mb/s Duplex: Full Port: Twisted Pair PHYAD: 1 Transceiver: internal Auto-negotiation: on MDI-X: Unknown Supports Wake-on: g Wake-on: d Current message level: 0x000000ff (255) drv probe link timer ifdown ifup rx_err tx_err Link detected: yes – I am doing an experimental test using nttcp to calculate the GOODPUT in the case where H1 and H2 send data to H3 at the same time. Since the three links have the same forced capacity and the arriving data rate is 10M from H1 + 10M from H2 = 20M to H3, a bottleneck effect would be expected and, due to the unreliable nature of UDP, some packet loss. But this doesn't happen, since the output of the nttcp application shows the same number of bytes sent and received. This is the output of nttcp on H3: nttcp -T -r -u 172.16.0.2 & nttcp -T -r -u 172.16.0.1 [1] 4071 Bytes Real s CPU s Real-MBit/s CPU-MBit/s Calls Real-C/s CPU-C/s l 8388608 13.74 0.05 4.8848 1398.0140 2049 149.14 42684.8 Bytes Real s CPU s Real-MBit/s CPU-MBit/s Calls Real-C/s CPU-C/s l 8388608 14.02 0.05 4.7872 1398.0140 2049 146.17 42684.8 1 8388608 13.56 0.06 4.9500 1118.4065 2051 151.28 34181.1 1 8388608 13.89 0.06 4.8310 1198.3084 2051 147.65 36623.0 – How is this possible? Am I missing something? Any help will be gratefully appreciated. Best regards
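
    One way to cross-check the result with a different tool, assuming iperf (version 2) is installed on all three hosts; its server-side report prints lost/total datagrams, which makes any UDP loss explicit:

        # on H3 (receiver)
        iperf -s -u -i 1
        # on H1 and on H2, started at the same time (senders)
        iperf -c 172.16.0.3 -u -b 10M -t 30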

    Read the article

  • With CentOS 6 and LXC, "ifconfig" is unable to see network interface (but busybox "ifconfig" works fine)

    - by larsks
    I've just started working with LXC under CentOS 6 (via the libvirt adapter). If I create an LXC container, I'm unable to see any network interfaces when using the native system tools: # ifconfig -a # The behavior is very odd; specifying an interface by names yields neither the expected output nor an error message. This is true even for clearly invalid interface names, like this: # ifconfig foo # The ip command exhibits the same behavior. On the other hand, if I use "ifconfig" provided by busybox, everything works as expected: # busybox ifconfig -a eth0 Link encap:Ethernet HWaddr 52:54:00:E0:12:C8 inet6 addr: fe80::5054:ff:fee0:12c8/64 Scope:Link UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1 RX packets:268 errors:0 dropped:0 overruns:0 frame:0 TX packets:6 errors:0 dropped:0 overruns:0 carrier:0 collisions:0 txqueuelen:1000 RX bytes:17814 (17.3 KiB) TX bytes:552 (552.0 B) lo Link encap:Local Loopback inet addr:127.0.0.1 Mask:255.0.0.0 inet6 addr: ::1/128 Scope:Host UP LOOPBACK RUNNING MTU:16436 Metric:1 RX packets:0 errors:0 dropped:0 overruns:0 frame:0 TX packets:0 errors:0 dropped:0 overruns:0 carrier:0 collisions:0 txqueuelen:0 RX bytes:0 (0.0 B) TX bytes:0 (0.0 B) So...what does busybox know that the native tools don't? The libvirt config for this environment is pretty standard; the network definition looks like this: <interface type='network'> <mac address='52:54:00:e0:12:c8'/> <source network='default'/> <target dev='veth0'/> </interface> The full configuration is here if you think it might help. I'm running: lxc-0.7.2-2.el6.x86_64 kernel-2.6.32-71.29.1.el6.x86_64 EDIT Weirder and weirder...it's a display issue, not a functionality issue. I can see the output of ifconfig if I pipe it into anything, so for example: # ifconfig eth0 | cat eth0 Link encap:Ethernet HWaddr 52:54:00:E0:12:C8 inet addr:192.168.10.10 Bcast:192.168.10.255 Mask:255.255.255.0 inet6 addr: fe80::5054:ff:fee0:12c8/64 Scope:Link UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1 RX packets:573 errors:0 dropped:0 overruns:0 frame:0 TX packets:6 errors:0 dropped:0 overruns:0 carrier:0 collisions:0 txqueuelen:1000 RX bytes:37914 (37.0 KiB) TX bytes:552 (552.0 b) And in fact even when not piping the output, strace shows that ifconfig is in fact writing the output to file descriptor 1 (aka stdout), so it's not clear why no output is actually showing up. This could be either an LXC or a virsh issue, I guess.

    Read the article

  • Apache/2.2.20 (Ubuntu 11.10) gzip compression won't work on php pages, content is chunked

    - by FamousInteractive
    I'm running into a problem with a new production server whereto I'm transferring projects. The HTML output of the PHP applications isn't compressed by the Apache mod_deflate module. Other resources, as stylesheet and javascript files, even html pages, which are served with the same Content-type (text/html) as the PHP output, are compressed! The projects use the following rules (from HTML5 boilerplate) in the .htaccess: <IfModule mod_deflate.c> # Force deflate for mangled headers developer.yahoo.com/blogs/ydn/posts/2010/12/pushing-beyond-gzipping/ <IfModule mod_setenvif.c> <IfModule mod_headers.c> SetEnvIfNoCase ^(Accept-EncodXng|X-cept-Encoding|X{15}|~{15}|-{15})$ ^((gzip|deflate)\s*,?\s*)+|[X~-]{4,13}$ HAVE_Accept-Encoding RequestHeader append Accept-Encoding "gzip,deflate" env=HAVE_Accept-Encoding </IfModule> </IfModule> # HTML, TXT, CSS, JavaScript, JSON, XML, HTC: <IfModule filter_module> FilterDeclare COMPRESS FilterProvider COMPRESS DEFLATE resp=Content-Type $text/html FilterProvider COMPRESS DEFLATE resp=Content-Type $text/css FilterProvider COMPRESS DEFLATE resp=Content-Type $text/plain FilterProvider COMPRESS DEFLATE resp=Content-Type $text/xml FilterProvider COMPRESS DEFLATE resp=Content-Type $text/x-component FilterProvider COMPRESS DEFLATE resp=Content-Type $application/javascript FilterProvider COMPRESS DEFLATE resp=Content-Type $application/json FilterProvider COMPRESS DEFLATE resp=Content-Type $application/xml FilterProvider COMPRESS DEFLATE resp=Content-Type $application/xhtml+xml FilterProvider COMPRESS DEFLATE resp=Content-Type $application/rss+xml FilterProvider COMPRESS DEFLATE resp=Content-Type $application/atom+xml FilterProvider COMPRESS DEFLATE resp=Content-Type $application/vnd.ms-fontobject FilterProvider COMPRESS DEFLATE resp=Content-Type $image/svg+xml FilterProvider COMPRESS DEFLATE resp=Content-Type $image/x-icon FilterProvider COMPRESS DEFLATE resp=Content-Type $application/x-font-ttf FilterProvider COMPRESS DEFLATE resp=Content-Type $font/opentype FilterChain COMPRESS FilterProtocol COMPRESS DEFLATE change=yes;byteranges=no </IfModule> </IfModule> We have a testing machine that runs the same Apache, OS and PHP version. On that machine the compression works just fine on the PHP output. I've checked and compared Apache and PHP config files, all the same as far as I can tell. I've tried several manners of outputting the content of the PHP, using output buffering or just plain echoing the content. Same thing, no compression. Example response headers of a PHP output: HTTP/1.1 200 OK Date: Wed, 25 Apr 2012 23:30:59 GMT Server: Apache Accept-Ranges: bytes Expires: Thu, 19 Nov 1981 08:52:00 GMT Cache-Control: public Pragma: no-cache Vary: User-Agent Keep-Alive: timeout=5, max=98 Connection: Keep-Alive Transfer-Encoding: chunked Content-Type: text/html; charset=utf-8 Example of response headers on a css file: HTTP/1.1 200 OK Date: Wed, 25 Apr 2012 23:30:59 GMT Server: Apache Last-Modified: Mon, 04 Jul 2011 19:12:36 GMT Vary: Accept-Encoding,User-Agent Content-Encoding: gzip Cache-Control: public Expires: Fri, 25 May 2012 23:30:59 GMT Content-Length: 714 Keep-Alive: timeout=5, max=100 Connection: Keep-Alive Content-Type: text/css; charset=utf-8 Does anyone has a clue or experienced the same "problem"? thanks!
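
    A quick way to compare what the server actually sends for a PHP page versus a static file, independent of the browser (the URLs below are placeholders):

        curl -s -D - -o /dev/null -H "Accept-Encoding: gzip,deflate" http://example.com/index.php
        curl -s -D - -o /dev/null -H "Accept-Encoding: gzip,deflate" http://example.com/css/style.css
        # -D - dumps the response headers; check whether Content-Encoding: gzip appears in each case.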

    Read the article

  • SSH stops at "using username" with IPTables in effect

    - by Rautamiekka
    We used UFW but couldn't make the Source Dedicated ports open, which was weird, so we purged UFW and switched to IPTables, using Webmin to configure. If the inbound chain is on DENY and SSH port open [judged from Webmin], PuTTY will say using username "root" and stops at that instead of asking for public key pw. Inbound chain on ACCEPT the pw is asked. This problem didn't happen with UFW. Picture of IPTables configuration in Webmin: http://s284544448.onlinehome.us/public/PlusLINE%20Dedicated%20Server,%20Webmin,%20IPTables,%200.jpgThe address is to the previous rautamiekka.org. iptables-save when on INPUT DENY: # Generated by iptables-save v1.4.8 on Wed Apr 11 16:09:20 2012 *mangle :PREROUTING ACCEPT [1430:156843] :INPUT ACCEPT [1430:156843] :FORWARD ACCEPT [0:0] :OUTPUT ACCEPT [1415:781598] :POSTROUTING ACCEPT [1415:781598] COMMIT # Completed on Wed Apr 11 16:09:20 2012 # Generated by iptables-save v1.4.8 on Wed Apr 11 16:09:20 2012 *nat :PREROUTING ACCEPT [2:104] :POSTROUTING ACCEPT [0:0] :OUTPUT ACCEPT [0:0] COMMIT # Completed on Wed Apr 11 16:09:20 2012 # Generated by iptables-save v1.4.8 on Wed Apr 11 16:09:20 2012 *filter :INPUT DROP [0:0] :FORWARD ACCEPT [0:0] :OUTPUT ACCEPT [1247:708906] -A INPUT -i lo -m comment --comment "Machine-within traffic - always allowed" -j ACCEPT -A INPUT -p tcp -m comment --comment "Services - TCP" -m tcp -m multiport --dports 22,80,443,10000,20,21 -m state --state NEW,ESTABLISHED -j ACCEPT -A INPUT -p tcp -m comment --comment "Minecraft - TCP" -m tcp --dport 25565 -j ACCEPT -A INPUT -p udp -m comment --comment "Minecraft - UDP" -m udp --dport 25565 -j ACCEPT -A INPUT -p tcp -m comment --comment "Source Dedicated - TCP" -m tcp --dport 27015 -j ACCEPT -A INPUT -p udp -m comment --comment "Source Dedicated - UDP" -m udp -m multiport --dports 4380,27000:27030 -j ACCEPT -A INPUT -p udp -m comment --comment "TS3 - UDP - main port" -m udp --dport 9987 -j ACCEPT -A INPUT -p tcp -m comment --comment "TS3 - TCP - ServerQuery" -m tcp --dport 10011 -j ACCEPT -A OUTPUT -o lo -m comment --comment "Machine-within traffic - always allowed" -j ACCEPT COMMIT # Completed on Wed Apr 11 16:09:20 2012 iptables --list when on INPUT DENY: Chain INPUT (policy DROP) target prot opt source destination ACCEPT all -- anywhere anywhere /* Machine-within traffic - always allowed */ ACCEPT tcp -- anywhere anywhere /* Services - TCP */ tcp multiport dports ssh,www,https,webmin,ftp-data,ftp state NEW,ESTABLISHED ACCEPT tcp -- anywhere anywhere /* Minecraft - TCP */ tcp dpt:25565 ACCEPT udp -- anywhere anywhere /* Minecraft - UDP */ udp dpt:25565 ACCEPT tcp -- anywhere anywhere /* Source Dedicated - TCP */ tcp dpt:27015 ACCEPT udp -- anywhere anywhere /* Source Dedicated - UDP */ udp multiport dports 4380,27000:27030 ACCEPT udp -- anywhere anywhere /* TS3 - UDP - main port */ udp dpt:9987 ACCEPT tcp -- anywhere anywhere /* TS3 - TCP - ServerQuery */ tcp dpt:10011 Chain FORWARD (policy ACCEPT) target prot opt source destination Chain OUTPUT (policy ACCEPT) target prot opt source destination ACCEPT all -- anywhere anywhere /* Machine-within traffic - always allowed */ The UFW rules prior to purging on INPUT DENY: 127.0.0.1 ALLOW IN 127.0.0.1 3306 DENY IN Anywhere 20,21/tcp ALLOW IN Anywhere 22/tcp (OpenSSH) ALLOW IN Anywhere 80/tcp ALLOW IN Anywhere 443/tcp ALLOW IN Anywhere 989 ALLOW IN Anywhere 990 ALLOW IN Anywhere 8075/tcp ALLOW IN Anywhere 9987/udp ALLOW IN Anywhere 10000/tcp ALLOW IN Anywhere 10011/tcp ALLOW IN Anywhere 25565/tcp ALLOW IN Anywhere 27000:27030/tcp ALLOW IN 
Anywhere 4380/udp ALLOW IN Anywhere 27014:27050/tcp ALLOW IN Anywhere 30033/tcp ALLOW IN Anywhere
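
    For what it's worth, a commonly recommended baseline when the INPUT policy is DROP is an explicit rule for established/related traffic; it may or may not be the cause of the PuTTY hang here, but it is cheap to test (a sketch, not a confirmed fix):

        iptables -I INPUT 1 -m conntrack --ctstate ESTABLISHED,RELATED -j ACCEPT
        iptables -I INPUT 2 -p icmp -j ACCEPT      # optional: allow ICMP as well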

    Read the article

  • Diff 2 files while ignoring parts of lines

    - by Millianz
    I would like to diff a file system. Currently my bash script prints out the file system recursively into a file (ls -l -R) and diffs it with an expected output. An example of a line in this file would be: drw---- 100000f3 00000400 0 ./foo/ My current diff command is diff "$TEMP_LOG" "$DIFF_FILE_OUT" --strip-trailing-cr --changed-group-format='%' --unchanged-group-format='' "$SubLog" As you can see, I ignore additional lines in the current output file; I only care about lines that match the master output. I now have the problem, though, that some files may differ in size, or a folder might even have a different name, but due to its location I know what access rights it should have. For example: Output: ------- 00000000 00000000 528 ./foo/bar.txt Master: ------- 00000000 00000000 200 ./foo/bar.txt Only the size differs here, and it doesn't matter; I would like to just ignore certain parts of the diff, kind of like an ANSI C comment. Master: ------- 00000000 00000000 /*200*/ ./foo/bar.txt -- OR -- Master: d------ 00000000 00000000 /*10*/ ./foo//*123123*///*76456546*//bar.txt And still have it diff correctly. Is this even possible with diff, or will I have to write a custom script for it? Since I'm fairly new to Cygwin I might be using the completely wrong tool altogether; I'm happy for any suggestions. Update: Taking a step back, here is the general task at hand that I want to achieve. I want to write a script that checks the file system to see if the read/write permissions are set up correctly. The structure of the file system is under my control, so I don't have to worry about it changing too much. Sometimes folders/files might not be present, but if they are, their permissions must be checked. For example, assume that the following is a snapshot of the current file system structure drw ./foo drw ./foo/bar -rw ./foo/bar/bar.txt drw ./foo/baz -rw ./foo/baz/baz.txt And this is what the file system structure might dictate, i.e. if these folders / files are present, the permissions must match. drw ./foo drw ./foo/bar -rw ./foo/bar/bar.txt --- ./foo/bar/foobar.txt drw ./foo/baz -rw ./foo/baz/foobaz.txt In this case the file system checked out ok, since all files present match their expected values. The situation becomes more complicated as soon as certain folders might have any arbitrary name; only due to their location do I know what their permissions should be. Assume that the directory ./foo/bar in the above example might be such a case, i.e. instead of bar the folder could have any name, but still match the -rw permissions. This seems like a very complicated situation, and I'm not even sure if I can solve it with bash scripting alone. I might have to write an actual application.
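
    A sketch of one way to ignore individual fields rather than whole lines: normalize both files before diffing, e.g. by dropping the size column with awk. The field numbers assume the five-column ls-style lines shown above, and bash process substitution is used, which works in Cygwin's bash:

        diff --strip-trailing-cr \
             <(awk '{print $1, $2, $3, $5}' "$TEMP_LOG") \
             <(awk '{print $1, $2, $3, $5}' "$DIFF_FILE_OUT")

    The "folder can have any arbitrary name" requirement goes beyond what diff alone can express, though; that part likely needs a custom script that walks the tree and applies per-location rules.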

    Read the article

  • How to stretch the image from one screen to two screens

    - by wxiiir
    I want to be able to stretch the image from one monitor to a second monitor, even if I have to use some software to do it. For example, I want the top half of the image that is now shown on my first monitor to occupy the whole second monitor, and I want the bottom half to occupy the whole first monitor, or the other way around; I don't really care as long as it works. It would be cool to know how to do the same thing with the left half and right half. I know that image pixels would have twice the length or height this way, but I don't really care about that as long as it works; basically I want the image to show on both monitors but with the same pixels as before. I have an HD4870 and Windows 7, and the HD4000 family doesn't support having two monitors behaving like one large one, only HD5000 upwards; that would solve my problem without any of the drawbacks, but it just can't be done (or maybe it can via software, but I'm just too tired of searching). A solution to make almost any graphics card treat two monitors as one large one is the Matrox DualHead2Go, but that's just as expensive as a good HD5000 card, so it's not worth it. Thanks in advance. EDIT: I guess that nobody so far was able to fully comprehend my problem, even though it was very explicitly written, so I will elaborate some more. My HD4870 can have 2 monitors working with it, but some stuff like games won't run on both monitors, which sucks. There are some ways to circumvent this problem: two of them are perfect or almost perfect but expensive, and the third would be a software solution that would make it possible. The first is to have an HD5000-family video card, which will work just fine with both monitors. The second is to have a Matrox DualHead2Go, which will make my HD4870 detect my two monitors as one large monitor. The third is to have software that makes my two displays be detected as one large display and then captures the output of the video card, splits the image, and renders the parts as 2D images to both monitors; OR a simpler one, which would make the output pixels double the width or height: capture the output of the graphics card for one screen, split it in two, enlarge each half to fit a monitor, and then output it to the monitors. P.S. By capturing the output of the video card I mean just making the video card process the image in a certain way. Making the video card detect two monitors as one large one via software may be somewhat impossible or impracticable, but stretching the output as a 2D image from one monitor to both should be a walk in the park for some coders, so it is likely that such a program exists, or that some widespread dual-monitor software has such a function built in.

    Read the article

  • Where does ASP.NET Web API Fit?

    - by Rick Strahl
    With the pending release of ASP.NET MVC 4 and the new ASP.NET Web API, there has been a lot of discussion of where the new Web API technology fits in the ASP.NET Web stack. There are a lot of choices to build HTTP based applications available now on the stack - we've come a long way from when WebForms and Http Handlers/Modules where the only real options. Today we have WebForms, MVC, ASP.NET Web Pages, ASP.NET AJAX, WCF REST and now Web API as well as the core ASP.NET runtime to choose to build HTTP content with. Web API definitely squarely addresses the 'API' aspect - building consumable services - rather than HTML content, but even to that end there are a lot of choices you have today. So where does Web API fit, and when doesn't it? But before we get into that discussion, let's talk about what a Web API is and why we should care. What's a Web API? HTTP 'APIs' (Microsoft's new terminology for a service I guess)  are becoming increasingly more important with the rise of the many devices in use today. Most mobile devices like phones and tablets run Apps that are using data retrieved from the Web over HTTP. Desktop applications are also moving in this direction with more and more online content and synching moving into even traditional desktop applications. The pending Windows 8 release promises an app like platform for both the desktop and other devices, that also emphasizes consuming data from the Cloud. Likewise many Web browser hosted applications these days are relying on rich client functionality to create and manipulate the browser user interface, using AJAX rather than server generated HTML data to load up the user interface with data. These mobile or rich Web applications use their HTTP connection to return data rather than HTML markup in the form of JSON or XML typically. But an API can also serve other kinds of data, like images or other binary files, or even text data and HTML (although that's less common). A Web API is what feeds rich applications with data. ASP.NET Web API aims to service this particular segment of Web development by providing easy semantics to route and handle incoming requests and an easy to use platform to serve HTTP data in just about any content format you choose to create and serve from the server. But .NET already has various HTTP Platforms The .NET stack already includes a number of technologies that provide the ability to create HTTP service back ends, and it has done so since the very beginnings of the .NET platform. From raw HTTP Handlers and Modules in the core ASP.NET runtime, to high level platforms like ASP.NET MVC, Web Forms, ASP.NET AJAX and the WCF REST engine (which technically is not ASP.NET, but can integrate with it), you've always been able to handle just about any kind of HTTP request and response with ASP.NET. The beauty of the raw ASP.NET platform is that it provides you everything you need to build just about any type of HTTP application you can dream up from low level APIs/custom engines to high level HTML generation engine. ASP.NET as a core platform clearly has stood the test of time 10+ years later and all other frameworks like Web API are built on top of this ASP.NET core. However, although it's possible to create Web APIs / Services using any of the existing out of box .NET technologies, none of them have been a really nice fit for building arbitrary HTTP based APIs. 
Sure, you can use an HttpHandler to create just about anything, but you have to build a lot of plumbing to build something more complex like a comprehensive API that serves a variety of requests, handles multiple output formats and can easily pass data up to the server in a variety of ways. Likewise you can use ASP.NET MVC to handle routing and creating content in various formats fairly easily, but it doesn't provide a great way to automatically negotiate content types and serve various content formats directly (it's possible to do with some plumbing code of your own but not built in). Prior to Web API, Microsoft's main push for HTTP services has been WCF REST, which was always an awkward technology that had a severe personality conflict, not being clear on whether it wanted to be part of WCF or purely a separate technology. In the end it didn't do either WCF compatibility or WCF agnostic pure HTTP operation very well, which made for a very developer-unfriendly environment. Personally I didn't like any of the implementations at the time, so much so that I ended up building my own HTTP service engine (as part of the West Wind Web Toolkit), as have a few other third party tools that provided much better integration and ease of use. With the release of Web API for the first time I feel that I can finally use the tools in the box and not have to worry about creating and maintaining my own toolkit as Web API addresses just about all the features I implemented on my own and much more. ASP.NET Web API provides a better HTTP Experience ASP.NET Web API differentiates itself from the previous Microsoft in-box HTTP service solutions in that it was built from the ground up around the HTTP protocol and its messaging semantics. Unlike WCF REST or ASP.NET AJAX with ASMX, it’s a brand new platform rather than bolted on technology that is supposed to work in the context of an existing framework. The strength of the new ASP.NET Web API is that it combines the best features of the platforms that came before it, to provide a comprehensive and very usable HTTP platform. Because it's based on ASP.NET and borrows a lot of concepts from ASP.NET MVC, Web API should be immediately familiar and comfortable to most ASP.NET developers. Here are some of the features that Web API provides that I like: Strong Support for URL Routing to produce clean URLs using familiar MVC style routing semantics Content Negotiation based on Accept headers for request and response serialization Support for a host of supported output formats including JSON, XML, ATOM Strong default support for REST semantics but they are optional Easily extensible Formatter support to add new input/output types Deep support for more advanced HTTP features via HttpResponseMessage and HttpRequestMessage classes and strongly typed Enums to describe many HTTP operations Convention based design that drives you into doing the right thing for HTTP Services Very extensible, based on MVC like extensibility model of Formatters and Filters Self-hostable in non-Web applications  Testable using testing concepts similar to MVC Web API is meant to handle any kind of HTTP input and produce output and status codes using the full spectrum of HTTP functionality available in a straight forward and flexible manner. Looking at the list above you can see that a lot of functionality is very similar to ASP.NET MVC, so many ASP.NET developers should feel quite comfortable with the concepts of Web API. 
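To make those MVC-like conventions a little more concrete, here is a rough sketch of what a Web API controller can look like. The Album model, the sample data and the AlbumsController name are invented purely for illustration and are not from the article; the point is only that method names map to HTTP verbs by convention and that return values are serialized via content negotiation.

using System.Collections.Generic;
using System.Linq;
using System.Net;
using System.Net.Http;
using System.Web.Http;

// Hypothetical model, used only for illustration.
public class Album
{
    public int Id { get; set; }
    public string Title { get; set; }
}

// A minimal Web API controller. GetXxx maps to GET, PostXxx to POST, and the
// returned objects are serialized as JSON, XML, etc. based on the client's
// Accept header (the content negotiation described above).
public class AlbumsController : ApiController
{
    private static readonly List<Album> Albums = new List<Album>
    {
        new Album { Id = 1, Title = "First Album" },
        new Album { Id = 2, Title = "Second Album" }
    };

    // GET api/albums
    public IEnumerable<Album> GetAlbums()
    {
        return Albums;
    }

    // GET api/albums/1
    public Album GetAlbum(int id)
    {
        var album = Albums.FirstOrDefault(a => a.Id == id);
        if (album == null)
            throw new HttpResponseException(HttpStatusCode.NotFound);
        return album;
    }

    // POST api/albums - returns 201 Created plus the new resource
    public HttpResponseMessage PostAlbum(Album album)
    {
        Albums.Add(album);
        return Request.CreateResponse(HttpStatusCode.Created, album);
    }
}

With the default route registered once at startup (config.Routes.MapHttpRoute("DefaultApi", "api/{controller}/{id}", new { id = RouteParameter.Optional })), a GET to /api/albums with an Accept: application/json header comes back as JSON, while Accept: text/xml comes back as XML from the very same method.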
The Routing and core infrastructure of Web API are very similar to how MVC works, providing many of the benefits of MVC, but with a focus on HTTP access and manipulation in Controller methods rather than HTML generation. There’s much improved support for content negotiation based on HTTP Accept headers, with the framework capable of automatically detecting what content the client is sending and requesting, and serving the appropriate data format in return. This seems like such a little and obvious thing, but it's really important. Today's service backends often are used by multiple clients/applications, and being able to choose the right data format for what fits best for the client is very important. While previous solutions were able to accomplish this using a variety of mixed features of WCF and ASP.NET, Web API combines all this functionality into a single robust server side HTTP framework that intrinsically understands the HTTP semantics and subtly drives you in the right direction for most operations. And when you need to customize or do something that is not built in, there are lots of hooks and overrides for most behaviors, and even many low level hook points that allow you to plug in custom functionality with relatively little effort. No Brainers for Web API There are a few scenarios that are a slam dunk for Web API. If the primary focus of an application, or even a part of an application, is some sort of API then Web API makes great sense. HTTP Services: If you're building a comprehensive HTTP API that is to be consumed over the Web, Web API is a perfect fit. You can isolate the logic in Web API and build your application as a service, breaking out the logic into controllers as needed. Because the primary interface is the service, there's no confusion of what should go where (MVC or API). Perfect fit. Primary AJAX Backends: If you're building rich client Web applications that rely heavily on AJAX callbacks to serve their data, Web API is also a slam dunk. Again, because much if not most of the business logic will probably end up in your Web API service logic, there's no confusion over where logic should go and there's no duplication. In Single Page Applications (SPA), typically there's very little HTML based logic served other than bringing up a shell UI and then filling in the data from the server with AJAX, which means the business logic required for data retrieval and data acceptance and validation also lives in the Web API. Perfect fit. Generic HTTP Endpoints: Another good fit is generic HTTP endpoints that serve data or handle 'utility' type functionality in typical Web applications. If you need to implement an image server or an upload handler, in the past I'd have implemented that as an HTTP handler. With Web API you now have a well defined place where you can implement these types of generic 'services', in a location that can easily grow new endpoints (via Controller methods) or be separated out as more full featured APIs. Granted, this could be done with MVC as well, but Web API seems a clearer and more well defined place to store generic application services. This is one thing I used to do a lot of in my own libraries, and Web API addresses this nicely. Great fit. Mixed HTML and AJAX Applications: Not a clear Choice For all the commonality that Web API and MVC share, they are fundamentally different platforms that are independent of each other. A lot of people have asked when it makes sense to use MVC vs. 
Web API when you're dealing with a typical Web application that creates HTML and also uses AJAX for rich functionality. While it's easy to say that all 'service'/AJAX logic should go into a Web API and all HTML related generation into MVC, that can often result in a lot of code duplication. Also, MVC supports JSON and XML result data fairly easily as well, so there's some confusion about where that 'trigger point' is of when you should switch to Web API vs. just implementing functionality as part of MVC controllers. Ultimately there's a tradeoff between isolation of functionality and duplication. A good rule of thumb I think works is that if a large chunk of the application's functionality serves data, Web API is a good choice, but if you have a couple of small AJAX requests to serve data to a grid or autocomplete box it'd be overkill to separate out that logic into a separate Web API controller. Web API does add overhead to your application (it's yet another framework that sits on top of core ASP.NET), so it should be worth it. Keep in mind that MVC can generate HTML and JSON/XML and just about any other content easily, and that functionality is not going away, so just because Web API is there it doesn't mean you have to use it. Web API is not a full replacement for MVC obviously either, since there's not the same level of support to feed HTML from Web API controllers (although you can host a RazorEngine easily enough if you really want to go that route), so if your HTML is part of your API or application in general, MVC is still a better choice either alone or in combination with Web API. I suspect (and hope) that in the future Web API's functionality will merge even closer with MVC so that you might even be able to mix functionality of both into single Controllers so that you don't have to make any trade-offs, but at the moment that's not the case. Some Issues To Think About Web API is similar to MVC but not the Same Although Web API looks a lot like MVC, it's not the same, and some common functionality of MVC behaves differently in Web API. For example, the way single POST variables are handled is different from MVC and doesn't lend itself particularly well to some AJAX scenarios with POST data. Code Duplication I already touched on this in the Mixed HTML and Web API section, but if you build an MVC application that also exposes a Web API it's quite likely that you end up duplicating a bunch of code and - potentially - infrastructure. You may have to create authentication logic both for an HTML application and for the Web API, which might need something different altogether. More often than not though the same logic is used, and there's no easy way to share. If you implement an MVC ActionFilter and you want that same functionality in your Web API, you'll end up creating the filter twice. AJAX Data or AJAX HTML On a recent post's comments, David made some really good points regarding the commonality of MVC and Web API and their place. One comment that caught my eye was a little more generic, regarding data services vs. HTML services. David says: I see a lot of merit in the combination of Knockout.js, client side templates and view models, calling Web API for a responsive UI, but sometimes late at night that still leaves me wondering why I would no longer be using some of the nice tooling and features that have evolved in MVC ;-) You know what - I can totally relate to that. 
On the last Web based mobile app I worked on, we decided to serve HTML partials to the client via AJAX for many (but not all!) things, rather than sending down raw data to inject into the DOM on the client via templating or direct manipulation. While there are definitely more bytes on the wire with this approach, the overhead ended up being fairly small if you keep the 'data' requests small and atomic. The cost was often made up for by the lack of client side rendering of HTML. Server rendered HTML for AJAX templating gives so much better infrastructure support without having to screw around with 20 mismatched client libraries. Especially with MVC and partials it's pretty easy to break out your HTML logic into very small, atomic chunks, so it's actually easy to create small rendering islands that can be used via composition on the server, or via AJAX calls to small, tight partials that return HTML to the client. Although this is often frowned upon as too 'heavy', it worked really well in terms of developer effort as well as providing surprisingly good performance on devices. There's still plenty of jQuery and AJAX logic happening on the client, but it's more manageable in small doses rather than trying to do the entire UI composition with JavaScript and/or 'not-quite-there-yet' template engines that are very difficult to debug. This is not an issue directly related to Web API of course, but something to think about, especially for AJAX or SPA style applications. Summary Web API is a great new addition to the ASP.NET platform, and it addresses a serious need for consolidation of a lot of half-baked HTTP service API technologies that came before it. Web API feels 'right', and hits the right combination of usability and flexibility, at least for me, and it's a good fit for true API scenarios. However, just because a new platform is available it doesn't mean that other tools or tech that came before it should be discarded or even upgraded to the new platform. There's nothing wrong with continuing to use MVC controller methods to handle API tasks if that's what your app is running now - there's very little to be gained by upgrading to Web API just because. But going forward, Web API clearly is the way to go when building HTTP data interfaces, and it's good to see that Microsoft got this one right - it was sorely needed! Resources ASP.NET Web API AspConf Ask the Experts Session (first 5 minutes) © Rick Strahl, West Wind Technologies, 2005-2012. Posted in Web Api

    Read the article

  • How to remove a package entirely?

    - by maria
    Hi I'm quite new to Linux, but before using it I was hearing that Windows programs, after uninstallation, leaves a lot of remains on the hard disc, and Linux removes all. I'm using Ubuntu 10.04. To uninstall packages I'm using sudo apt-get autoremove application_name or sudo aptitude purge application_name. Recently I have installed texlive-full and for some reasons I had quickly to uninstall it. After I've entered to terminal updatedb, then locate *texlive* and the output was very long: maria@marysia-ubuntu:~$ locate *texlive* /etc/texmf/fmt.d/10texlive-base.cnf /etc/texmf/fmt.d/10texlive-formats-extra.cnf /etc/texmf/fmt.d/10texlive-lang-cyrillic.cnf /etc/texmf/fmt.d/10texlive-lang-czechslovak.cnf /etc/texmf/fmt.d/10texlive-lang-polish.cnf /etc/texmf/fmt.d/10texlive-latex-base.cnf /etc/texmf/fmt.d/10texlive-math-extra.cnf /etc/texmf/fmt.d/10texlive-metapost.cnf /etc/texmf/fmt.d/10texlive-omega.cnf /etc/texmf/fmt.d/10texlive-xetex.cnf /etc/texmf/hyphen.d/09texlive-base.cnf /etc/texmf/hyphen.d/10texlive-lang-arabic.cnf /etc/texmf/hyphen.d/10texlive-lang-croatian.cnf /etc/texmf/hyphen.d/10texlive-lang-cyrillic.cnf /etc/texmf/hyphen.d/10texlive-lang-czechslovak.cnf /etc/texmf/hyphen.d/10texlive-lang-danish.cnf /etc/texmf/hyphen.d/10texlive-lang-dutch.cnf /etc/texmf/hyphen.d/10texlive-lang-finnish.cnf /etc/texmf/hyphen.d/10texlive-lang-french.cnf /etc/texmf/hyphen.d/10texlive-lang-german.cnf /etc/texmf/hyphen.d/10texlive-lang-greek.cnf /etc/texmf/hyphen.d/10texlive-lang-hungarian.cnf /etc/texmf/hyphen.d/10texlive-lang-indic.cnf /etc/texmf/hyphen.d/10texlive-lang-italian.cnf /etc/texmf/hyphen.d/10texlive-lang-latin.cnf /etc/texmf/hyphen.d/10texlive-lang-latvian.cnf /etc/texmf/hyphen.d/10texlive-lang-lithuanian.cnf /etc/texmf/hyphen.d/10texlive-lang-mongolian.cnf /etc/texmf/hyphen.d/10texlive-lang-norwegian.cnf /etc/texmf/hyphen.d/10texlive-lang-other.cnf /etc/texmf/hyphen.d/10texlive-lang-polish.cnf /etc/texmf/hyphen.d/10texlive-lang-portuguese.cnf /etc/texmf/hyphen.d/10texlive-lang-spanish.cnf /etc/texmf/hyphen.d/10texlive-lang-swedish.cnf /etc/texmf/hyphen.d/10texlive-lang-ukenglish.cnf /etc/texmf/updmap.d/10texlive-base.cfg /etc/texmf/updmap.d/10texlive-fonts-extra.cfg /etc/texmf/updmap.d/10texlive-fonts-recommended.cfg /etc/texmf/updmap.d/10texlive-games.cfg /etc/texmf/updmap.d/10texlive-lang-african.cfg /etc/texmf/updmap.d/10texlive-lang-arabic.cfg /etc/texmf/updmap.d/10texlive-lang-cyrillic.cfg /etc/texmf/updmap.d/10texlive-lang-czechslovak.cfg /etc/texmf/updmap.d/10texlive-lang-french.cfg /etc/texmf/updmap.d/10texlive-lang-greek.cfg /etc/texmf/updmap.d/10texlive-lang-hebrew.cfg /etc/texmf/updmap.d/10texlive-lang-indic.cfg /etc/texmf/updmap.d/10texlive-lang-lithuanian.cfg /etc/texmf/updmap.d/10texlive-lang-mongolian.cfg /etc/texmf/updmap.d/10texlive-lang-polish.cfg /etc/texmf/updmap.d/10texlive-lang-vietnamese.cfg /etc/texmf/updmap.d/10texlive-latex-base.cfg /etc/texmf/updmap.d/10texlive-latex-extra.cfg /etc/texmf/updmap.d/10texlive-math-extra.cfg /etc/texmf/updmap.d/10texlive-omega.cfg /etc/texmf/updmap.d/10texlive-pictures.cfg /etc/texmf/updmap.d/10texlive-science.cfg /var/cache/apt/archives/texlive-base_2009-7_all.deb /var/cache/apt/archives/texlive-bibtex-extra_2009-7ubuntu3_all.deb /var/cache/apt/archives/texlive-binaries_2009-5ubuntu0.2_i386.deb /var/cache/apt/archives/texlive-common_2009-7_all.deb /var/cache/apt/archives/texlive-doc-base_2009-2_all.deb /var/cache/apt/archives/texlive-doc-bg_2009-2_all.deb /var/cache/apt/archives/texlive-doc-cs+sk_2009-2_all.deb 
/var/cache/apt/archives/texlive-doc-de_2009-2_all.deb /var/cache/apt/archives/texlive-doc-en_2009-2_all.deb /var/cache/apt/archives/texlive-doc-es_2009-2_all.deb /var/cache/apt/archives/texlive-doc-fi_2009-2_all.deb /var/cache/apt/archives/texlive-doc-fr_2009-2_all.deb /var/cache/apt/archives/texlive-doc-it_2009-2_all.deb /var/cache/apt/archives/texlive-doc-ja_2009-2_all.deb /var/cache/apt/archives/texlive-doc-ko_2009-2_all.deb /var/cache/apt/archives/texlive-doc-mn_2009-2_all.deb /var/cache/apt/archives/texlive-doc-nl_2009-2_all.deb /var/cache/apt/archives/texlive-doc-pl_2009-2_all.deb /var/cache/apt/archives/texlive-doc-pt_2009-2_all.deb /var/cache/apt/archives/texlive-doc-ru_2009-2_all.deb /var/cache/apt/archives/texlive-doc-si_2009-2_all.deb /var/cache/apt/archives/texlive-doc-th_2009-2_all.deb /var/cache/apt/archives/texlive-doc-tr_2009-2_all.deb /var/cache/apt/archives/texlive-doc-uk_2009-2_all.deb /var/cache/apt/archives/texlive-doc-vi_2009-2_all.deb /var/cache/apt/archives/texlive-doc-zh_2009-2_all.deb /var/cache/apt/archives/texlive-extra-utils_2009-7ubuntu3_all.deb /var/cache/apt/archives/texlive-font-utils_2009-7ubuntu3_all.deb /var/cache/apt/archives/texlive-fonts-extra-doc_2009-7ubuntu3_all.deb /var/cache/apt/archives/texlive-fonts-extra_2009-7ubuntu3_all.deb /var/cache/apt/archives/texlive-fonts-recommended-doc_2009-7_all.deb /var/cache/apt/archives/texlive-fonts-recommended_2009-7_all.deb /var/cache/apt/archives/texlive-formats-extra_2009-7ubuntu3_all.deb /var/cache/apt/archives/texlive-full_2009-7_all.deb /var/cache/apt/archives/texlive-games_2009-7ubuntu3_all.deb /var/cache/apt/archives/texlive-generic-extra_2009-7ubuntu3_all.deb /var/cache/apt/archives/texlive-generic-recommended_2009-7_all.deb /var/cache/apt/archives/texlive-humanities-doc_2009-7ubuntu3_all.deb /var/cache/apt/archives/texlive-humanities_2009-7ubuntu3_all.deb /var/cache/apt/archives/texlive-lang-african_2009-3_all.deb /var/cache/apt/archives/texlive-lang-arabic_2009-3_all.deb /var/cache/apt/archives/texlive-lang-armenian_2009-3_all.deb /var/cache/apt/archives/texlive-lang-croatian_2009-3_all.deb /var/cache/apt/archives/texlive-lang-cyrillic_2009-3_all.deb /var/cache/apt/archives/texlive-lang-czechslovak_2009-3_all.deb /var/cache/apt/archives/texlive-lang-danish_2009-3_all.deb /var/cache/apt/archives/texlive-lang-dutch_2009-3_all.deb /var/cache/apt/archives/texlive-lang-finnish_2009-3_all.deb /var/cache/apt/archives/texlive-lang-french_2009-3_all.deb /var/cache/apt/archives/texlive-lang-german_2009-3_all.deb /var/cache/apt/archives/texlive-lang-greek_2009-3_all.deb /var/cache/apt/archives/texlive-lang-hebrew_2009-3_all.deb /var/cache/apt/archives/texlive-lang-hungarian_2009-3_all.deb /var/cache/apt/archives/texlive-lang-indic_2009-3_all.deb /var/cache/apt/archives/texlive-lang-italian_2009-3_all.deb /var/cache/apt/archives/texlive-lang-latin_2009-3_all.deb /var/cache/apt/archives/texlive-lang-latvian_2009-3_all.deb /var/cache/apt/archives/texlive-lang-lithuanian_2009-3_all.deb /var/cache/apt/archives/texlive-lang-mongolian_2009-3_all.deb /var/cache/apt/archives/texlive-lang-norwegian_2009-3_all.deb /var/cache/apt/archives/texlive-lang-other_2009-3_all.deb /var/cache/apt/archives/texlive-lang-polish_2009-3_all.deb /var/cache/apt/archives/texlive-lang-portuguese_2009-3_all.deb /var/cache/apt/archives/texlive-lang-spanish_2009-3_all.deb /var/cache/apt/archives/texlive-lang-swedish_2009-3_all.deb /var/cache/apt/archives/texlive-lang-tibetan_2009-3_all.deb 
/var/cache/apt/archives/texlive-lang-ukenglish_2009-3_all.deb /var/cache/apt/archives/texlive-lang-vietnamese_2009-3_all.deb /var/cache/apt/archives/texlive-latex-base-doc_2009-7_all.deb /var/cache/apt/archives/texlive-latex-base_2009-7_all.deb /var/cache/apt/archives/texlive-latex-extra-doc_2009-7ubuntu3_all.deb /var/cache/apt/archives/texlive-latex-extra_2009-7ubuntu3_all.deb /var/cache/apt/archives/texlive-latex-recommended-doc_2009-7_all.deb /var/cache/apt/archives/texlive-latex-recommended_2009-7_all.deb /var/cache/apt/archives/texlive-latex3_2009-7ubuntu3_all.deb /var/cache/apt/archives/texlive-luatex_2009-7_all.deb /var/cache/apt/archives/texlive-math-extra_2009-7ubuntu3_all.deb /var/cache/apt/archives/texlive-metapost-doc_2009-7_all.deb /var/cache/apt/archives/texlive-metapost_2009-7_all.deb /var/cache/apt/archives/texlive-music_2009-7ubuntu3_all.deb /var/cache/apt/archives/texlive-omega_2009-7_all.deb /var/cache/apt/archives/texlive-pictures-doc_2009-7_all.deb /var/cache/apt/archives/texlive-pictures_2009-7_all.deb /var/cache/apt/archives/texlive-plain-extra_2009-7ubuntu3_all.deb /var/cache/apt/archives/texlive-pstricks-doc_2009-7ubuntu3_all.deb /var/cache/apt/archives/texlive-pstricks_2009-7ubuntu3_all.deb /var/cache/apt/archives/texlive-publishers-doc_2009-7ubuntu3_all.deb /var/cache/apt/archives/texlive-publishers_2009-7ubuntu3_all.deb /var/cache/apt/archives/texlive-science-doc_2009-7ubuntu3_all.deb /var/cache/apt/archives/texlive-science_2009-7ubuntu3_all.deb /var/cache/apt/archives/texlive-xetex_2009-7_all.deb /var/cache/apt/archives/texlive_2009-7_all.deb /var/lib/dpkg/info/texlive-base.list /var/lib/dpkg/info/texlive-base.postrm /var/lib/dpkg/info/texlive-bibtex-extra.list /var/lib/dpkg/info/texlive-bibtex-extra.postrm /var/lib/dpkg/info/texlive-doc-base.list /var/lib/dpkg/info/texlive-doc-base.postrm /var/lib/dpkg/info/texlive-doc-bg.list /var/lib/dpkg/info/texlive-doc-bg.postrm /var/lib/dpkg/info/texlive-doc-cs+sk.list /var/lib/dpkg/info/texlive-doc-cs+sk.postrm /var/lib/dpkg/info/texlive-doc-de.list /var/lib/dpkg/info/texlive-doc-de.postrm /var/lib/dpkg/info/texlive-doc-en.list /var/lib/dpkg/info/texlive-doc-en.postrm /var/lib/dpkg/info/texlive-doc-es.list /var/lib/dpkg/info/texlive-doc-es.postrm /var/lib/dpkg/info/texlive-doc-fi.list /var/lib/dpkg/info/texlive-doc-fi.postrm /var/lib/dpkg/info/texlive-doc-fr.list /var/lib/dpkg/info/texlive-doc-fr.postrm /var/lib/dpkg/info/texlive-doc-it.list /var/lib/dpkg/info/texlive-doc-it.postrm /var/lib/dpkg/info/texlive-doc-ja.list /var/lib/dpkg/info/texlive-doc-ja.postrm /var/lib/dpkg/info/texlive-doc-ko.list /var/lib/dpkg/info/texlive-doc-ko.postrm /var/lib/dpkg/info/texlive-doc-mn.list /var/lib/dpkg/info/texlive-doc-mn.postrm /var/lib/dpkg/info/texlive-doc-nl.list /var/lib/dpkg/info/texlive-doc-nl.postrm /var/lib/dpkg/info/texlive-doc-pl.list /var/lib/dpkg/info/texlive-doc-pl.postrm /var/lib/dpkg/info/texlive-doc-pt.list /var/lib/dpkg/info/texlive-doc-pt.postrm /var/lib/dpkg/info/texlive-doc-ru.list /var/lib/dpkg/info/texlive-doc-ru.postrm /var/lib/dpkg/info/texlive-doc-si.list /var/lib/dpkg/info/texlive-doc-si.postrm /var/lib/dpkg/info/texlive-doc-th.list /var/lib/dpkg/info/texlive-doc-th.postrm /var/lib/dpkg/info/texlive-doc-tr.list /var/lib/dpkg/info/texlive-doc-tr.postrm /var/lib/dpkg/info/texlive-doc-uk.list /var/lib/dpkg/info/texlive-doc-uk.postrm /var/lib/dpkg/info/texlive-doc-vi.list /var/lib/dpkg/info/texlive-doc-vi.postrm /var/lib/dpkg/info/texlive-doc-zh.list /var/lib/dpkg/info/texlive-doc-zh.postrm 
/var/lib/dpkg/info/texlive-extra-utils.list /var/lib/dpkg/info/texlive-extra-utils.postrm /var/lib/dpkg/info/texlive-font-utils.list /var/lib/dpkg/info/texlive-font-utils.postrm /var/lib/dpkg/info/texlive-fonts-extra-doc.list /var/lib/dpkg/info/texlive-fonts-extra-doc.postrm /var/lib/dpkg/info/texlive-fonts-extra.list /var/lib/dpkg/info/texlive-fonts-extra.postrm /var/lib/dpkg/info/texlive-fonts-recommended-doc.list /var/lib/dpkg/info/texlive-fonts-recommended-doc.postrm /var/lib/dpkg/info/texlive-fonts-recommended.list /var/lib/dpkg/info/texlive-fonts-recommended.postrm /var/lib/dpkg/info/texlive-formats-extra.list /var/lib/dpkg/info/texlive-formats-extra.postrm /var/lib/dpkg/info/texlive-games.list /var/lib/dpkg/info/texlive-games.postrm /var/lib/dpkg/info/texlive-generic-extra.list /var/lib/dpkg/info/texlive-generic-extra.postrm /var/lib/dpkg/info/texlive-generic-recommended.list /var/lib/dpkg/info/texlive-generic-recommended.postrm /var/lib/dpkg/info/texlive-humanities-doc.list /var/lib/dpkg/info/texlive-humanities-doc.postrm /var/lib/dpkg/info/texlive-humanities.list /var/lib/dpkg/info/texlive-humanities.postrm /var/lib/dpkg/info/texlive-lang-african.list /var/lib/dpkg/info/texlive-lang-african.postrm /var/lib/dpkg/info/texlive-lang-arabic.list /var/lib/dpkg/info/texlive-lang-arabic.postrm /var/lib/dpkg/info/texlive-lang-armenian.list /var/lib/dpkg/info/texlive-lang-armenian.postrm /var/lib/dpkg/info/texlive-lang-croatian.list /var/lib/dpkg/info/texlive-lang-croatian.postrm /var/lib/dpkg/info/texlive-lang-cyrillic.list /var/lib/dpkg/info/texlive-lang-cyrillic.postrm /var/lib/dpkg/info/texlive-lang-czechslovak.list /var/lib/dpkg/info/texlive-lang-czechslovak.postrm /var/lib/dpkg/info/texlive-lang-danish.list /var/lib/dpkg/info/texlive-lang-danish.postrm /var/lib/dpkg/info/texlive-lang-dutch.list /var/lib/dpkg/info/texlive-lang-dutch.postrm /var/lib/dpkg/info/texlive-lang-finnish.list /var/lib/dpkg/info/texlive-lang-finnish.postrm /var/lib/dpkg/info/texlive-lang-french.list /var/lib/dpkg/info/texlive-lang-french.postrm /var/lib/dpkg/info/texlive-lang-german.list /var/lib/dpkg/info/texlive-lang-german.postrm /var/lib/dpkg/info/texlive-lang-greek.list /var/lib/dpkg/info/texlive-lang-greek.postrm /var/lib/dpkg/info/texlive-lang-hebrew.list /var/lib/dpkg/info/texlive-lang-hebrew.postrm /var/lib/dpkg/info/texlive-lang-hungarian.list /var/lib/dpkg/info/texlive-lang-hungarian.postrm /var/lib/dpkg/info/texlive-lang-indic.list /var/lib/dpkg/info/texlive-lang-indic.postrm /var/lib/dpkg/info/texlive-lang-italian.list /var/lib/dpkg/info/texlive-lang-italian.postrm /var/lib/dpkg/info/texlive-lang-latin.list /var/lib/dpkg/info/texlive-lang-latin.postrm /var/lib/dpkg/info/texlive-lang-latvian.list /var/lib/dpkg/info/texlive-lang-latvian.postrm /var/lib/dpkg/info/texlive-lang-lithuanian.list /var/lib/dpkg/info/texlive-lang-lithuanian.postrm /var/lib/dpkg/info/texlive-lang-mongolian.list /var/lib/dpkg/info/texlive-lang-mongolian.postrm /var/lib/dpkg/info/texlive-lang-norwegian.list /var/lib/dpkg/info/texlive-lang-norwegian.postrm /var/lib/dpkg/info/texlive-lang-other.list /var/lib/dpkg/info/texlive-lang-other.postrm /var/lib/dpkg/info/texlive-lang-polish.list /var/lib/dpkg/info/texlive-lang-polish.postrm /var/lib/dpkg/info/texlive-lang-portuguese.list /var/lib/dpkg/info/texlive-lang-portuguese.postrm /var/lib/dpkg/info/texlive-lang-spanish.list /var/lib/dpkg/info/texlive-lang-spanish.postrm /var/lib/dpkg/info/texlive-lang-swedish.list /var/lib/dpkg/info/texlive-lang-swedish.postrm 
/var/lib/dpkg/info/texlive-lang-tibetan.list /var/lib/dpkg/info/texlive-lang-tibetan.postrm /var/lib/dpkg/info/texlive-lang-ukenglish.list /var/lib/dpkg/info/texlive-lang-ukenglish.postrm /var/lib/dpkg/info/texlive-lang-vietnamese.list /var/lib/dpkg/info/texlive-lang-vietnamese.postrm /var/lib/dpkg/info/texlive-latex-base-doc.list /var/lib/dpkg/info/texlive-latex-base-doc.postrm /var/lib/dpkg/info/texlive-latex-base.list /var/lib/dpkg/info/texlive-latex-base.postrm /var/lib/dpkg/info/texlive-latex-extra-doc.list /var/lib/dpkg/info/texlive-latex-extra-doc.postrm /var/lib/dpkg/info/texlive-latex-extra.list /var/lib/dpkg/info/texlive-latex-extra.postrm /var/lib/dpkg/info/texlive-latex-recommended-doc.list /var/lib/dpkg/info/texlive-latex-recommended-doc.postrm /var/lib/dpkg/info/texlive-latex-recommended.list /var/lib/dpkg/info/texlive-latex-recommended.postrm /var/lib/dpkg/info/texlive-latex3.list /var/lib/dpkg/info/texlive-latex3.postrm /var/lib/dpkg/info/texlive-luatex.list /var/lib/dpkg/info/texlive-luatex.postrm /var/lib/dpkg/info/texlive-math-extra.list /var/lib/dpkg/info/texlive-math-extra.postrm /var/lib/dpkg/info/texlive-metapost-doc.list /var/lib/dpkg/info/texlive-metapost-doc.postrm /var/lib/dpkg/info/texlive-metapost.list /var/lib/dpkg/info/texlive-metapost.postrm /var/lib/dpkg/info/texlive-music.list /var/lib/dpkg/info/texlive-music.postrm /var/lib/dpkg/info/texlive-omega.list /var/lib/dpkg/info/texlive-omega.postrm /var/lib/dpkg/info/texlive-pictures-doc.list /var/lib/dpkg/info/texlive-pictures-doc.postrm /var/lib/dpkg/info/texlive-pictures.list /var/lib/dpkg/info/texlive-pictures.postrm /var/lib/dpkg/info/texlive-plain-extra.list /var/lib/dpkg/info/texlive-plain-extra.postrm /var/lib/dpkg/info/texlive-pstricks-doc.list /var/lib/dpkg/info/texlive-pstricks-doc.postrm /var/lib/dpkg/info/texlive-pstricks.list /var/lib/dpkg/info/texlive-pstricks.postrm /var/lib/dpkg/info/texlive-publishers-doc.list /var/lib/dpkg/info/texlive-publishers-doc.postrm /var/lib/dpkg/info/texlive-publishers.list /var/lib/dpkg/info/texlive-publishers.postrm /var/lib/dpkg/info/texlive-science-doc.list /var/lib/dpkg/info/texlive-science-doc.postrm /var/lib/dpkg/info/texlive-science.list /var/lib/dpkg/info/texlive-science.postrm /var/lib/dpkg/info/texlive-xetex.list /var/lib/dpkg/info/texlive-xetex.postrm maria@marysia-ubuntu:~$ I've used sudo apt-get autoclean without any change. I've installed deborphan and it showed nothing (maybe I've used it in wrong way: just entered command deborphan). Am I doing something wrong or I was told something which is not true? I would like to know two things: how to remove packages (if I'm doing it in wrong way) and how to clean hard disc from remains of all packages I've uninstalled till now (even if I don't remember what it was exactly). I have Ubuntu Tweak installed but I don't know how to use it and I think I prefere terminal commnands. Thanks

    Read the article

  • 64-bit Archives Needed

    - by user9154181
    A little over a year ago, we received a question from someone who was trying to build software on Solaris. He was getting errors from the ar command when creating an archive. At that time, the ar command on Solaris was a 32-bit command. There was more than 2GB of data, and the ar command was hitting the file size limit for a 32-bit process that doesn't use the largefile APIs. Even in 2011, 2GB is a very large amount of code, so we had not heard this one before. Most of our toolchain was extended to handle 64-bit sized data back in the 1990's, but archives were not changed, presumably because there was no perceived need for it. Since then of course, programs have continued to get larger, and in 2010, the time had finally come to investigate the issue and find a way to provide for larger archives. As part of that process, I had to do a deep dive into the archive format, and also do some Unix archeology. I'm going to record what I learned here, to document what Solaris does, and in the hope that it might help someone else trying to solve the same problem for their platform. Archive Format Details Archives are hardly cutting edge technology. They are still used of course, but their basic form hasn't changed in decades. Other than to fix a bug, which is rare, we don't tend to touch that code much. The archive file format is described in /usr/include/ar.h, and I won't repeat the details here. Instead, here is a rough overview of the archive file format, implemented by System V Release 4 (SVR4) Unix systems such as Solaris: Every archive starts with a "magic number". This is a sequence of 8 characters: "!<arch>\n". The magic number is followed by 1 or more members. A member starts with a fixed header, defined by the ar_hdr structure in/usr/include/ar.h. Immediately following the header comes the data for the member. Members must be padded at the end with newline characters so that they have even length. The requirement to pad members to an even length is a dead giveaway as to the age of the archive format. It tells you that this format dates from the 1970's, and more specifically from the era of 16-bit systems such as the PDP-11 that Unix was originally developed on. A 32-bit system would have required 4 bytes, and 64-bit systems such as we use today would probably have required 8 bytes. 2 byte alignment is a poor choice for ELF object archive members. 32-bit objects require 4 byte alignment, and 64-bit objects require 64-bit alignment. The link-editor uses mmap() to process archives, and if the members have the wrong alignment, we have to slide (copy) them to the correct alignment before we can access the ELF data structures inside. The archive format requires 2 byte padding, but it doesn't prohibit more. The Solaris ar command takes advantage of this, and pads ELF object members to 8 byte boundaries. Anything else is padded to 2 as required by the format. The archive header (ar_hdr) represents all numeric values using an ASCII text representation rather than as binary integers. This means that an archive that contains only text members can be viewed using tools such as cat, more, or a text editor. The original designers of this format clearly thought that archives would be used for many file types, and not just for objects. Things didn't turn out that way of course — nearly all archives contain relocatable objects for a single operating system and machine, and are used primarily as input to the link-editor (ld). 
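To make the layout described above a little more tangible, here is a small, hypothetical C# sketch that walks the members of an SVR4-style archive and prints their names and sizes. This is not how the Solaris tools are implemented (those are written in C and use mmap()); it simply follows the format as described: the 8-byte magic string, then a fixed 60-byte ASCII header per member, with each member padded with a newline to an even length.

using System;
using System.IO;
using System.Text;

// Hypothetical helper that lists the members of an SVR4-style archive.
// Header layout assumed from the description above:
//   name[16] date[12] uid[6] gid[6] mode[8] size[10] fmag[2]  = 60 bytes
static class ArLister
{
    public static void List(string path)
    {
        using (var f = File.OpenRead(path))
        {
            var magic = new byte[8];
            if (f.Read(magic, 0, 8) != 8 ||
                Encoding.ASCII.GetString(magic) != "!<arch>\n")
                throw new InvalidDataException("not an archive");

            var hdr = new byte[60];
            while (f.Read(hdr, 0, 60) == 60)
            {
                // Trim the space padding (and a trailing '/' that some
                // implementations append to regular member names).
                string name = Encoding.ASCII.GetString(hdr, 0, 16).TrimEnd(' ', '/');
                long size = long.Parse(Encoding.ASCII.GetString(hdr, 48, 10).Trim());

                // Names starting with '/' would be the special members
                // (symbol table, string table) that ar normally hides.
                Console.WriteLine("{0,-20} {1,12} bytes", name, size);

                // Members are padded to an even length, so round the skip up.
                f.Seek(size + (size % 2), SeekOrigin.Current);
            }
        }
    }
}

This sketch only understands the classic 32-bit layout; it knows nothing about the 64-bit symbol table discussed later, and real tools obviously do far more validation.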
Archives can have special members that are created by the ar command rather than being supplied by the user. These special members are all distinguished by having a name that starts with the slash (/) character. This is an unambiguous marker that says that the user could not have supplied it. The reason for this is that regular archive members are given the plain name of the file that was inserted to create them, and any path components are stripped off. Slash is the delimiter character used by Unix to separate path components, and as such cannot occur within a plain file name. The ar command hides the special members from you when you list the contents of an archive, so most users don't know that they exist. There are only two possible special members: A symbol table that maps ELF symbols to the object archive member that provides it, and a string table used to hold member names that exceed 15 characters. The '/' convention for tagging special members provides room for adding more such members should the need arise. As I will discuss below, we took advantage of this fact to add an alternate 64-bit symbol table special member which is used in archives that are larger than 4GB. When an archive contains ELF object members, the ar command builds a special archive member known as the symbol table that maps all ELF symbols in the object to the archive member that provides it. The link-editor uses this symbol table to determine which symbols are provided by the objects in that archive. If an archive has a symbol table, it will always be the first member in the archive, immediately following the magic number. Unlike member headers, symbol tables do use binary integers to represent offsets. These integers are always stored in big-endian format, even on a little endian host such as x86. The archive header (ar_hdr) provides 15 characters for representing the member name. If any member has a name that is longer than this, then the real name is written into a special archive member called the string table, and the member's name field instead contains a slash (/) character followed by a decimal representation of the offset of the real name within the string table. The string table is required to precede all normal archive members, so it will be the second member if the archive contains a symbol table, and the first member otherwise. The archive format is not designed to make finding a given member easy. Such operations move through the archive from front to back examining each member in turn, and run in O(n) time. This would be bad if archives were commonly used in that manner, but in general, they are not. Typically, the ar command is used to build an new archive from scratch, inserting all the objects in one operation, and then the link-editor accesses the members in the archive in constant time by using the offsets provided by the symbol table. Both of these operations are reasonably efficient. However, listing the contents of a large archive with the ar command can be rather slow. Factors That Limit Solaris Archive Size As is often the case, there was more than one limiting factor preventing Solaris archives from growing beyond the 32-bit limits of 2GB (32-bit signed) and 4GB (32-bit unsigned). These limits are listed in the order they are hit as archive size grows, so the earlier ones mask those that follow. The original Solaris archive file format can handle sizes up to 4GB without issue. However, the ar command was delivered as a 32-bit executable that did not use the largefile APIs. 
As such, the ar command itself could not create a file larger than 2GB. One can solve this by building ar with the largefile APIs which would allow it to reach 4GB, but a simpler and better answer is to deliver a 64-bit ar, which has the ability to scale well past 4GB. Symbol table offsets are stored as 32-bit big-endian binary integers, which limits the maximum archive size to 4GB. To get around this limit requires a different symbol table format, or an extension mechanism to the current one, similar in nature to the way member names longer than 15 characters are handled in member headers. The size field in the archive member header (ar_hdr) is an ASCII string capable of representing a 32-bit unsigned value. This places a 4GB size limit on the size of any individual member in an archive. In considering format extensions to get past these limits, it is important to remember that very few archives will require the ability to scale past 4GB for many years. The old format, while no beauty, continues to be sufficient for its purpose. This argues for a backward compatible fix that allows newer versions of Solaris to produce archives that are compatible with older versions of the system unless the size of the archive exceeds 4GB. Archive Format Differences Among Unix Variants While considering how to extend Solaris archives to scale to 64-bits, I wanted to know how similar archives from other Unix systems are to those produced by Solaris, and whether they had already solved the 64-bit issue. I've successfully moved archives between different Unix systems before with good luck, so I knew that there was some commonality. If it turned out that there was already a viable defacto standard for 64-bit archives, it would obviously be better to adopt that rather than invent something new. The archive file format is not formally standardized. However, the ar command and archive format were part of the original Unix from Bell Labs. Other systems started with that format, extending it in various often incompatible ways, but usually with the same common shared core. Most of these systems use the same magic number to identify their archives, despite the fact that their archives are not always fully compatible with each other. It is often true that archives can be copied between different Unix variants, and if the member names are short enough, the ar command from one system can often read archives produced on another. In practice, it is rare to find an archive containing anything other than objects for a single operating system and machine type. Such an archive is only of use on the type of system that created it, and is only used on that system. This is probably why cross platform compatibility of archives between Unix variants has never been an issue. Otherwise, the use of the same magic number in archives with incompatible formats would be a problem. I was able to find information for a number of Unix variants, described below. These can be divided roughly into three tribes, SVR4 Unix, BSD Unix, and IBM AIX. Solaris is a SVR4 Unix, and its archives are completely compatible with those from the other members of that group (GNU/Linux, HP-UX, and SGI IRIX). AIX AIX is an exception to rule that Unix archive formats are all based on the original Bell labs Unix format. It appears that AIX supports 2 formats (small and big), both of which differ in fundamental ways from other Unix systems: These formats use a different magic number than the standard one used by Solaris and other Unix variants. 
They include support for removing archive members from a file without reallocating the file, marking dead areas as unused, and reusing them when new archive items are inserted. They have a special table of contents member (File Member Header) which lets you find out everything that's in the archive without having to actually traverse the entire file. Their symbol table members are quite similar to those from other systems though. Their member headers are doubly linked, containing offsets to both the previous and next members. Of the Unix systems described here, AIX has the only format I saw that will have reasonable insert/delete performance for really large archives. Everyone else has O(n) performance, and are going to be slow to use with large archives. BSD BSD has gone through 4 versions of archive format, which are described in their manpage. They use the same member header as SVR4, but their symbol table format is different, and their scheme for long member names puts the name directly after the member header rather than into a string table. GNU/Linux The GNU toolchain uses the SVR4 format, and is compatible with Solaris. HP-UX HP-UX seems to follow the SVR4 model, and is compatible with Solaris. IRIX IRIX has 32 and 64-bit archives. The 32-bit format is the standard SVR4 format, and is compatible with Solaris. The 64-bit format is the same, except that the symbol table uses 64-bit integers. IRIX assumes that an archive contains objects of a single ELFCLASS/MACHINE, and any archive containing ELFCLASS64 objects receives a 64-bit symbol table. Although they only use it for 64-bit objects, nothing in the archive format limits it to ELFCLASS64. It would be perfectly valid to produce a 64-bit symbol table in an archive containing 32-bit objects, text files, or anything else. Tru64 Unix (Digital/Compaq/HP) Tru64 Unix uses a format much like ours, but their symbol table is a hash table, making specific symbol lookup much faster. The Solaris link-editor uses archives by examining the entire symbol table looking for unsatisfied symbols for the link, and not by looking up individual symbols, so there would be no benefit to Solaris from such a hash table. The Tru64 ld must use a different approach in which the hash table pays off for them. Widening the existing SVR4 archive symbol tables rather than inventing something new is the simplest path forward. There is ample precedent for this approach in the ELF world. When ELF was extended to support 64-bit objects, the approach was largely to take the existing data structures, and define 64-bit versions of them. We called the old set ELF32, and the new set ELF64. My guess is that there was no need to widen the archive format at that time, but had there been, it seems obvious that this is how it would have been done. The Implementation of 64-bit Solaris Archives As mentioned earlier, there was no desire to improve the fundamental nature of archives. They have always had O(n) insert/delete behavior, and for the most part it hasn't mattered. AIX made efforts to improve this, but those efforts did not find widespread adoption. For the purposes of link-editing, which is essentially the only thing that archives are used for, the existing format is adequate, and issues of backward compatibility trump the desire to do something technically better. Widening the existing symbol table format to 64-bits is therefore the obvious way to proceed. For Solaris 11, I implemented that, and I also updated the ar command so that a 64-bit version is run by default. 
This eliminates the 2 most significant limits to archive size, leaving only the limit on an individual archive member. We only generate a 64-bit symbol table if the archive exceeds 4GB, or when the new -S option to the ar command is used. This maximizes backward compatibility, as an archive produced by Solaris 11 is highly likely to be less than 4GB in size, and will therefore employ the same format understood by older versions of the system. The main reason for the existence of the -S option is to allow us to test the 64-bit format without having to construct huge archives to do so. I don't believe it will find much use outside of that. Other than the new ability to create and use extremely large archives, this change is largely invisible to the end user. When reading an archive, the ar command will transparently accept either form of symbol table. Similarly, the ELF library (libelf) has been updated to understand either format. Users of libelf (such as the link-editor ld) do not need to be modified to use the new format, because these changes are encapsulated behind the existing functions provided by libelf. As mentioned above, this work did not lift the limit on the maximum size of an individual archive member. That limit remains fixed at 4GB for now. This is not because we think objects will never get that large, for the history of computing says otherwise. Rather, this is based on an estimation that single relocatable objects of that size will not appear for a decade or two. A lot can change in that time, and it is better not to overengineer things by writing code that will sit and rot for years without being used. It is not too soon however to have a plan for that eventuality. When the time comes when this limit needs to be lifted, I believe that there is a simple solution that is consistent with the existing format. The archive member header size field is an ASCII string, like the name, and as such, the overflow scheme used for long names can also be used to handle the size. The size string would be placed into the archive string table, and its offset in the string table would then be written into the archive header size field using the same format "/ddd" used for overflowed names.

    Read the article

  • Synchronizing audio and video using MP4Box / ffmpeg to concatenate files

    - by jdl2003
    I have two H.264 encoded MPEG-4 files that I need to concatenate. I have been using MP4Box for this task by first ensuring the files are encoded identically (even went so far as to compare output from h264_parse on their video tracks) and then concatenating with this command: MP4Box -cat file1.mp4 -cat file2.mp4 output_file.mp4 This works and the output file is playable, but on playback in Quicktime or VLC the second video's audio starts too soon, making the entire second part of the concatenated file out of sync. I have tried reencoding the output through ffmpeg with -vcodec copy and -acodec copy but the sync issue persists. I have also tried converting to MPEG-2 format first, concatenating with a simple cat file1.mpg file2.mpg > output.mpg and reencoding the result with ffmpeg. This was even worse. I know that I can pass commands to MP4Box to adjust the start time of the audio track, but I am trying to do this programmatically for any input video (in the same encoding of course). I am looking for possible solutions that would not require human intervention / manual adjustments. Or, at least, an understanding of what is happening to make the second part of the concatenated video go out of sync.

    Read the article

  • Vboxheadless without a command prompt (VirtualBox)

    - by joe
    I'm trying to run VirtualBox VM's in the background from a service. I'm having trouble starting a process the way I desire. I'd like to start the virtualbox guest in headless mode as a separate process and show nothing as far as GUI. Here's what I've tried: From command line: start vboxheadless -s "Ubuntu Server" In C#: ProcessStartInfo info = new ProcessStartInfo { UseShellExecute = false, RedirectStandardOutput = true, ErrorDialog = false, WindowStyle = ProcessWindowStyle.Hidden, CreateNoWindow = true, FileName = "C:/program files/sun/virtualbox/vboxheadless", Arguments = "-s \"Ubuntu Server\"" }; Process p = new Process(); p.StartInfo = info; p.Start(); String output = p.StandardOutput.ReadToEnd(); //BLOCKS! (output stream isnt closed) I want to be able to get the output to know if starting the server was a success. However, it seems as though the window that's spawned never closes its output stream. It's also worth mentioning that I've tried using vboxmanage startvm "Ubuntu Server" --type=vrdp. I can determine whether the server started properly using this. But it shows a new command prompt window for the newly started VirtualBox guest.
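    One way around the blocking ReadToEnd() call is to read the output asynchronously and judge success from the process exit code instead of waiting for the output stream to close. The sketch below follows the same idea as the code in the question; the VBoxManage path, the VM name and the --type=vrdp switch are taken from the question and may need adjusting, and this is not a confirmed fix for the extra console window.

using System;
using System.Diagnostics;
using System.Text;

class VmStarter
{
    // Starts a VM headlessly and captures output without blocking on ReadToEnd().
    static int StartVm(string vmName)
    {
        var info = new ProcessStartInfo
        {
            FileName = @"C:\Program Files\Sun\VirtualBox\VBoxManage.exe",
            Arguments = "startvm \"" + vmName + "\" --type=vrdp",
            UseShellExecute = false,
            RedirectStandardOutput = true,
            RedirectStandardError = true,
            CreateNoWindow = true
        };

        var output = new StringBuilder();
        using (var p = new Process { StartInfo = info })
        {
            // Collect output as it arrives instead of blocking on ReadToEnd().
            p.OutputDataReceived += (s, e) => { if (e.Data != null) output.AppendLine(e.Data); };
            p.ErrorDataReceived  += (s, e) => { if (e.Data != null) output.AppendLine(e.Data); };

            p.Start();
            p.BeginOutputReadLine();
            p.BeginErrorReadLine();
            p.WaitForExit();          // VBoxManage returns once the VM has been launched

            Console.WriteLine(output.ToString());
            return p.ExitCode;        // 0 normally indicates the VM started
        }
    }
}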

    Read the article

  • Snort/Barnyard2 Logging

    - by Eric
    I need some help with my Snort/Barnyard2 setup. My goal is to have Snort send unified2 logs to Barnyard2 and then have Barnyard2 send the data to other locations. Here is my current setup. OS Scientific Linux 6 Snort Version 2.9.2.3 Barnyard2 Version 2.1.9 Snort command snort -c /etc/snort/snort.conf -i eth2 & Barnyard2 command /usr/local/bin/barnyard2 -c /etc/snort/barnyard2.conf -d /var/log/snort -f snort.log -w /var/log/snort/barnyard.waldo & snort.conf output unified2: filename snort.log, limit 128 barnyard2.conf output alert_syslog: host=127.0.0.1 output database: log, mysql, user=snort dbname=snort password=password host=localhost With this setup, barnyard2 is showing all of the correct information in the database and I'm using BASE to view it on the web GUI. I was hoping to be able to send the full packet data to syslog with barnyard2, but after reading around, it seems that it is impossible to do that. So I then started trying to modify the snort.conf file and add lines like "output alert_full: alert.full". This definitely gave me a lot more information, but still not the full packet data like I want. So my question is, is there any way I can use barnyard2 to send the full packet data of alerts to a human readable file? Since I can't send it directly to syslog, I can create another process to take the data from that file and ship it off to another server. If not, what flags and/or snort.conf configuration would you recommend to get the most data possible but still be able to handle quite a bit of traffic? At the end of it all, these alerts will be shipped to a central server via an SSH tunnel. I'm trying to stay away from databases.

    Read the article

  • Installer not being updated ( probably because of Windows 7 file cache )

    - by Sithu Kyaw
    I'm creating an installer for my Visual FoxPro application using ISTool and Inno Setup. It is ok for me for the first time. But, I updated my code and re-built the EXE file. Then, compiled the installer again. I found that my update was not compiled into the installer and I did not see the update in my running application. I noticed that the EXE file, which was built by VFP, was updated properly. It seems the installation script did not output the updated file. But, when I changed folder names, it did work. I don't want to change folder names whenever I run that installation script. It is not a good idea actually. I think it is because of Windows 7 cache system. Mine is Windows 7 Home Premium Service Pack 1. For example, My previous output file is located at C:\path\to\myinstaller.exe When I compile the installation script, the output file there should be overwritten, but it was not as expected. Although I deleted the file, it did not work. When I changed to output file path as C:\newpath\to\myinstaller.exe, I got the fix, but it is not a solution what I'm looking for. Does anyone how to do that? [Edit] I found that the installed directory was not updated properly. For example, I installed the program to C:\Program files\MyInstalledApp When I run the installer again, that installation directory should be overwritten, but failed. Thus, I got to uninstall the app before I re-install it. Is there any fix for this?

    Read the article

  • I am unable to run a client/server program in php

    - by sibi
    This is the program I am trying to run: $host = "127.0.0.1"; $port = 25003; // don't timeout! set_time_limit(0); // create socket $socket = socket_create(AF_INET, SOCK_STREAM, 0) or die("Could not create socket\n"); // bind socket to port $result = socket_bind($socket, $host, $port) or die("Could not bind to socket\n"); // start listening for connections $result = socket_listen($socket, 3) or die("Could not set up socket listener\n"); // accept incoming connections // spawn another socket to handle communication $spawn = socket_accept($socket) or die("Could not accept incoming connection\n"); // read client input $input = socket_read($spawn, 1024) or die("Could not read input\n"); // clean up input string $input = trim($input); echo "Client Message : ".$input; // reverse client input and send back $output = strrev($input) . "\n"; socket_write($spawn, $output, strlen ($output)) or die("Could not write output\n"); // close sockets socket_close($spawn); socket_close($socket); I enabled the PHP_socket option in WAMP but still I keep getting errors like unable to bind. Can someone please help me out?

    Read the article

< Previous Page | 109 110 111 112 113 114 115 116 117 118 119 120  | Next Page >