Search Results

Search found 750 results on 30 pages for 'amd64'.

  • error executing aapt, all of a sudden

    - by pjv
    I know there are a lot of these topics around, but none seem to help in my case, nor describe it exactly. The best similar one is "aapt not found under the right path". My problem is that I can be using Eclipse for a whole evening programming, compiling and using my device, and then suddenly I get "error executing aapt" for my current project, and of course R.java isn't (properly) generated anymore. I then restart Eclipse and everything goes away. I'm seeing this once a day on average, however.

    I've recently switched to amd64 and installed the latest Android 2.3 SDK and matching tools. I know that there is now a platform-tools folder that has an aapt version that should work independently of the SDK version. At first I had added this directory to my PATH, as instructed on the SDK website. I've also tried not adding it to my path and making a link platforms/android-9/tools so that every SDK version might use its own old copy. Needless to say, platform-tools/aapt is there and has the right permissions, and I have been able to execute it on the command line at any time. When I do write a faulty XML file of sorts, and appropriately get an error, I see an extra line that says "aapt: /lib32/libz.so.1: no version information available".

    I'm running a recent Gentoo Linux system. I have everything installed to support x86 on amd64, but I have re-emerged emul-linux-x86-baselibs and zlib just to be sure. The problem persists. I do see some pages that spell horror over some zlib bugs, but I'm not sure if that's related. I realize I'm not on the reference Ubuntu platform, but surely the difference cannot be that great? It might very well be a bug in aapt or the tools itself. Why would it suddenly stop working?

    I have also seen the ids in R.java be incorrect, namely that simple findViewById() code would give ClassCastExceptions because of mixed-up ids one time, and then work perfectly without any changes but only a "clean project", in the aftermath of a failing aapt.

    Finally, I've run a few commands on aapt that don't seem to add any extra information:

        # ldd aapt
        ./aapt: /lib32/libz.so.1: no version information available (required by ./aapt)
        linux-gate.so.1 => (0xffffe000)
        librt.so.1 => /lib32/librt.so.1 (0x4f864000)
        libpthread.so.0 => /lib32/libpthread.so.0 (0x4f849000)
        libz.so.1 => /lib32/libz.so.1 (0xf7707000)
        libstdc++.so.6 => /usr/lib/gcc/x86_64-pc-linux-gnu/4.4.4/32/libstdc++.so.6 (0x415e9000)
        libm.so.6 => /lib32/libm.so.6 (0x4f876000)
        libgcc_s.so.1 => /lib32/libgcc_s.so.1 (0x4fac6000)
        libc.so.6 => /lib32/libc.so.6 (0x4f5ed000)
        /lib/ld-linux.so.2 (0x4f5ca000)

        # file aapt
        aapt: ELF 32-bit LSB executable, Intel 80386, version 1 (SYSV), dynamically linked (uses shared libs), for GNU/Linux 2.6.15, not stripped

    Can anybody tell anything wrong with my configuration? Does it smell like a bug perhaps (otherwise let's report it (again))?

    Update 2011-01-06: I've gained some more knowledge. When I recently tried to export a signed apk, I ran into another error message regarding aapt (full details from the Eclipse error view) that I hadn't seen before. Note here too that I can just restart Eclipse and export apks again without problems, at least for a little while. I'm starting to think it is related to a lack of memory on my system. The Dutch message "onvoldoende geheugen beschikbaar" means "insufficient memory available". I have also been seeing insufficient-memory errors in DDMS when I'm dumping HPROF files.
    Here is the error log (shortened):

        !ENTRY com.android.ide.eclipse.adt 4 0 2011-01-05 23:11:16.097
        !MESSAGE Export Wizard Error
        !STACK 1
        org.eclipse.core.runtime.CoreException: Failed to export application
        at com.android.ide.eclipse.adt.internal.project.ExportHelper.exportReleaseApk(Unknown Source)
        at com.android.ide.eclipse.adt.internal.wizards.export.ExportWizard.doExport(Unknown Source)
        at com.android.ide.eclipse.adt.internal.wizards.export.ExportWizard.access$0(Unknown Source)
        at com.android.ide.eclipse.adt.internal.wizards.export.ExportWizard$1.run(Unknown Source)
        at org.eclipse.jface.operation.ModalContext$ModalContextThread.run(ModalContext.java:121)
        Caused by: com.android.ide.eclipse.adt.internal.build.AaptExecException: Error executing aapt. Please check aapt is present at /opt/android-sdk/android-sdk-linux_x86-1.6_r1/platform-tools/aapt
        at com.android.ide.eclipse.adt.internal.build.BuildHelper.executeAapt(Unknown Source)
        at com.android.ide.eclipse.adt.internal.build.BuildHelper.packageResources(Unknown Source)
        ... 5 more
        Caused by: java.io.IOException: Cannot run program "/opt/android-sdk/android-sdk-linux_x86-1.6_r1/platform-tools/aapt": java.io.IOException: error=12, Onvoldoende geheugen beschikbaar
        ...
        Caused by: java.io.IOException: java.io.IOException: error=12, Onvoldoende geheugen beschikbaar
        ...
        !SUBENTRY 1 com.android.ide.eclipse.adt 4 0 2011-01-05 23:11:16.098
        !MESSAGE Failed to export application
        !STACK 0
        com.android.ide.eclipse.adt.internal.build.AaptExecException: Error executing aapt. Please check aapt is present at /opt/android-sdk/android-sdk-linux_x86-1.6_r1/platform-tools/aapt
        at com.android.ide.eclipse.adt.internal.build.BuildHelper.executeAapt(Unknown Source)
        at com.android.ide.eclipse.adt.internal.build.BuildHelper.packageResources(Unknown Source)
        at com.android.ide.eclipse.adt.internal.project.ExportHelper.exportReleaseApk(Unknown Source)
        at com.android.ide.eclipse.adt.internal.wizards.export.ExportWizard.doExport(Unknown Source)
        at com.android.ide.eclipse.adt.internal.wizards.export.ExportWizard.access$0(Unknown Source)
        at com.android.ide.eclipse.adt.internal.wizards.export.ExportWizard$1.run(Unknown Source)
        at org.eclipse.jface.operation.ModalContext$ModalContextThread.run(ModalContext.java:121)
        Caused by: java.io.IOException: Cannot run program "/opt/android-sdk/android-sdk-linux_x86-1.6_r1/platform-tools/aapt": java.io.IOException: error=12, Onvoldoende geheugen beschikbaar
        ...
        Caused by: java.io.IOException: java.io.IOException: error=12, Onvoldoende geheugen beschikbaar
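    For context, error=12 is errno ENOMEM. When a large JVM process spawns an external tool such as aapt, the underlying fork() must momentarily account for a full copy of the parent's address space, and it can fail with ENOMEM even though the child would exec() immediately. A minimal C sketch of that failure mode (hypothetical, not related to the ADT code; the 2 GB figure is an arbitrary assumption, adjust to exceed free memory on the machine):

        /* Demonstrates fork() failing with ENOMEM (error=12) when the
         * parent has a large committed address space, mirroring a big
         * JVM trying to spawn aapt. */
        #include <errno.h>
        #include <stdio.h>
        #include <stdlib.h>
        #include <string.h>
        #include <sys/types.h>
        #include <unistd.h>

        int main(void) {
            size_t big = (size_t)2048 * 1024 * 1024;  /* ~2 GB */
            char *p = malloc(big);
            if (p != NULL)
                memset(p, 1, big);   /* touch the pages so they are really committed */

            pid_t pid = fork();
            if (pid < 0)
                printf("fork failed: %s (errno=%d)\n", strerror(errno), errno);
            else if (pid == 0)
                _exit(0);            /* child exits immediately */
            free(p);
            return 0;
        }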

  • IRQ problem with 2.6.32/2.6.39 kernel on Debian Squeeze x86_64

    - by MasterM
    I recently assembled a new computer, so all the hardware is pretty new. Since then I've been experiencing a problem with IRQs when running Debian 6.0. On random occasions, usually after an hour or so of uptime, I hear a beep and this shows up in dmesg:

        [ 3537.762795] irq 16: nobody cared (try booting with the "irqpoll" option)
        [ 3537.762797] Pid: 0, comm: swapper Tainted: P W O 2.6.39-2-amd64 #1
        [ 3537.762798] Call Trace:
        [ 3537.762799] <IRQ> [<ffffffff810924d4>] ? __report_bad_irq+0x3a/0xa2
        [ 3537.762803] [<ffffffff810926a4>] ? note_interrupt+0x168/0x1da
        [ 3537.762805] [<ffffffff81090dd4>] ? handle_irq_event_percpu+0x171/0x18f
        [ 3537.762807] [<ffffffff8100e0e2>] ? read_tsc+0x5/0x16
        [ 3537.762809] [<ffffffff8106b8a2>] ? update_ts_time_stats+0x32/0x6b
        [ 3537.762810] [<ffffffff81090e26>] ? handle_irq_event+0x34/0x52
        [ 3537.762812] [<ffffffff81063fb7>] ? sched_clock_idle_wakeup_event+0x12/0x1c
        [ 3537.762813] [<ffffffff81092df2>] ? handle_fasteoi_irq+0x82/0xa4
        [ 3537.762815] [<ffffffff8100aadb>] ? handle_irq+0x1a/0x23
        [ 3537.762816] [<ffffffff8100a384>] ? do_IRQ+0x45/0xaa
        [ 3537.762818] [<ffffffff81332c93>] ? common_interrupt+0x13/0x13
        [ 3537.762818] <EOI> [<ffffffff81332c8e>] ? common_interrupt+0xe/0x13
        [ 3537.762821] [<ffffffff81026800>] ? native_safe_halt+0x2/0x3
        [ 3537.762829] [<ffffffffa016ed58>] ? acpi_idle_do_entry+0x39/0x62 [processor]
        [ 3537.762831] [<ffffffffa016edde>] ? acpi_idle_enter_c1+0x5d/0xad [processor]
        [ 3537.762834] [<ffffffff81261033>] ? cpuidle_idle_call+0x11f/0x1cc
        [ 3537.762835] [<ffffffff81008dd2>] ? cpu_idle+0xab/0xe1
        [ 3537.762837] [<ffffffff8169fc60>] ? start_kernel+0x3e0/0x3eb
        [ 3537.762838] [<ffffffff8169f3c8>] ? x86_64_start_kernel+0x102/0x10f
        [ 3537.762839] handlers:
        [ 3537.762840] [<ffffffffa0358d5a>] (rtl8169_interrupt+0x0/0x2d7 [r8169])
        [ 3537.762842] [<ffffffffa08ff2ca>] (nv_kern_isr+0x0/0x54 [nvidia])
        [ 3537.762902] Disabling IRQ #16

    After that, Xorg either hogs the CPU or is unstable (up to hanging the system completely). When I restart Xorg, everything is fine again and the problem doesn't occur until the next reboot. I tried upgrading the kernel from the stock 2.6.32 to 2.6.39 from the unstable repository, but that didn't help. Booting with the irqpoll option only seems to prolong the initial period after which the problem occurs. I'm using the latest NVIDIA drivers and the Realtek firmware from the firmware-realtek package. I have two GTX 560 Ti cards that run in SLI. Disabling SLI or taking out one card completely doesn't solve the problem either.
Output of uname -a is: Linux whitestar 2.6.39-2-amd64 #1 SMP Wed Jun 8 11:01:04 UTC 2011 x86_64 GNU/Linux Output of lspci is: 00:00.0 Host bridge: Intel Corporation Sandy Bridge DRAM Controller (rev 09) 00:01.0 PCI bridge: Intel Corporation Sandy Bridge PCI Express Root Port (rev 09) 00:01.1 PCI bridge: Intel Corporation Sandy Bridge PCI Express Root Port (rev 09) 00:16.0 Communication controller: Intel Corporation Cougar Point HECI Controller #1 (rev 04) 00:19.0 Ethernet controller: Intel Corporation 82579V Gigabit Network Connection (rev 05) 00:1a.0 USB Controller: Intel Corporation Cougar Point USB Enhanced Host Controller #2 (rev 05) 00:1b.0 Audio device: Intel Corporation Cougar Point High Definition Audio Controller (rev 05) 00:1c.0 PCI bridge: Intel Corporation Cougar Point PCI Express Root Port 1 (rev b5) 00:1c.1 PCI bridge: Intel Corporation Cougar Point PCI Express Root Port 2 (rev b5) 00:1c.2 PCI bridge: Intel Corporation Cougar Point PCI Express Root Port 3 (rev b5) 00:1c.4 PCI bridge: Intel Corporation Cougar Point PCI Express Root Port 5 (rev b5) 00:1c.6 PCI bridge: Intel Corporation 82801 PCI Bridge (rev b5) 00:1d.0 USB Controller: Intel Corporation Cougar Point USB Enhanced Host Controller #1 (rev 05) 00:1f.0 ISA bridge: Intel Corporation Cougar Point LPC Controller (rev 05) 00:1f.2 SATA controller: Intel Corporation Cougar Point 6 port SATA AHCI Controller (rev 05) 00:1f.3 SMBus: Intel Corporation Cougar Point SMBus Controller (rev 05) 01:00.0 VGA compatible controller: nVidia Corporation Device 1200 (rev a1) 01:00.1 Audio device: nVidia Corporation Device 0e0c (rev a1) 02:00.0 VGA compatible controller: nVidia Corporation Device 1200 (rev a1) 02:00.1 Audio device: nVidia Corporation Device 0e0c (rev a1) 04:00.0 USB Controller: NEC Corporation uPD720200 USB 3.0 Host Controller (rev 04) 06:00.0 USB Controller: NEC Corporation uPD720200 USB 3.0 Host Controller (rev 04) 07:00.0 PCI bridge: Device 1b21:1080 (rev 01) 08:02.0 Ethernet controller: Realtek Semiconductor Co., Ltd. RTL-8110SC/8169SC Gigabit Ethernet (rev 10) 08:03.0 FireWire (IEEE 1394): VIA Technologies, Inc. 
VT6306/7/8 [Fire II(M)] IEEE 1394 OHCI Controller (rev c0) Contents of /proc/interrupts: CPU0 CPU1 CPU2 CPU3 CPU4 CPU5 CPU6 CPU7 0: 77 0 0 0 0 0 0 0 IO-APIC-edge timer 1: 2 0 0 0 0 0 0 0 IO-APIC-edge i8042 8: 1 0 0 0 0 0 0 0 IO-APIC-edge rtc0 9: 0 0 0 0 0 0 0 0 IO-APIC-fasteoi acpi 12: 4 0 0 0 0 0 0 0 IO-APIC-edge i8042 16: 699083 0 0 0 0 0 0 0 IO-APIC-fasteoi nvidia, eth0 17: 87810 0 0 0 0 0 0 0 IO-APIC-fasteoi firewire_ohci, hda_intel, nvidia 18: 242 0 0 0 0 0 0 0 IO-APIC-fasteoi hda_intel 23: 85925 0 0 0 0 0 0 0 IO-APIC-fasteoi ehci_hcd:usb5, ehci_hcd:usb6 40: 0 0 0 0 0 0 0 0 PCI-MSI-edge PCIe PME 41: 0 0 0 0 0 0 0 0 PCI-MSI-edge PCIe PME 42: 0 0 0 0 0 0 0 0 PCI-MSI-edge PCIe PME 43: 0 0 0 0 0 0 0 0 PCI-MSI-edge PCIe PME 44: 0 0 0 0 0 0 0 0 PCI-MSI-edge PCIe PME 45: 0 0 0 0 0 0 0 0 PCI-MSI-edge PCIe PME 46: 79853 0 0 0 0 0 0 0 PCI-MSI-edge ahci 48: 1 0 0 0 0 0 0 0 PCI-MSI-edge xhci_hcd 49: 0 0 0 0 0 0 0 0 PCI-MSI-edge xhci_hcd 50: 0 0 0 0 0 0 0 0 PCI-MSI-edge xhci_hcd 51: 0 0 0 0 0 0 0 0 PCI-MSI-edge xhci_hcd 52: 0 0 0 0 0 0 0 0 PCI-MSI-edge xhci_hcd 53: 0 0 0 0 0 0 0 0 PCI-MSI-edge xhci_hcd 54: 0 0 0 0 0 0 0 0 PCI-MSI-edge xhci_hcd 55: 0 0 0 0 0 0 0 0 PCI-MSI-edge xhci_hcd 56: 1 0 0 0 0 0 0 0 PCI-MSI-edge xhci_hcd 57: 0 0 0 0 0 0 0 0 PCI-MSI-edge xhci_hcd 58: 0 0 0 0 0 0 0 0 PCI-MSI-edge xhci_hcd 59: 0 0 0 0 0 0 0 0 PCI-MSI-edge xhci_hcd 60: 0 0 0 0 0 0 0 0 PCI-MSI-edge xhci_hcd 61: 0 0 0 0 0 0 0 0 PCI-MSI-edge xhci_hcd 62: 0 0 0 0 0 0 0 0 PCI-MSI-edge xhci_hcd 63: 0 0 0 0 0 0 0 0 PCI-MSI-edge xhci_hcd 64: 173506 0 0 0 0 0 0 0 PCI-MSI-edge hda_intel NMI: 482 89 25 13 277 24 11 10 Non-maskable interrupts LOC: 783857 194752 114133 70577 372438 179065 117179 162016 Local timer interrupts SPU: 0 0 0 0 0 0 0 0 Spurious interrupts PMI: 482 89 25 13 277 24 11 10 Performance monitoring interrupts IWI: 0 0 0 0 0 0 0 0 IRQ work interrupts RES: 131917 46750 7432 3291 150003 9576 3435 3067 Rescheduling interrupts CAL: 2759 6563 7150 6997 5387 7140 7269 6678 Function call interrupts TLB: 4396 2038 1336 492 5434 1896 1121 606 TLB shootdowns TRM: 0 0 0 0 0 0 0 0 Thermal event interrupts THR: 0 0 0 0 0 0 0 0 Threshold APIC interrupts MCE: 0 0 0 0 0 0 0 0 Machine check exceptions MCP: 37 37 37 37 37 37 37 37 Machine check polls ERR: 0 MIS: 0 Last but not least, right after boot-up those lines are usually present in dmesg: [ 18.367094] hda-intel: IRQ timing workaround is activated for card #1. Suggest a bigger bdl_pos_adj. [ 18.458859] hda-intel: IRQ timing workaround is activated for card #2. Suggest a bigger bdl_pos_adj. I'm not sure if it's related or a symptom of a bigger problem so I'm posting it just in case. I don't really know what other information might be of relevance here. Don't hesitate to ask for more in the comments.

  • How can I get FreeNAS to respond to libvirt shutdown requests

    - by ptomli
    I have a KVM VM of FreeNAS 0.7.1 Shere (revision 5127) running on Ubuntu Server 10.04 and I'm unable to convince the VM to shutdown from the host virsh shutdown freenas I would expect this to send some ACPI? trigger to the VM and FreeNAS then do what it's told. I'm not a FreeBSD fundi so I don't really know what packages or processes to poke to get this running. I have tried to convince powerd to run, but the VM cpus don't have the required freq entry Sysctl HW $ sysctl hw hw.machine: amd64 hw.model: QEMU Virtual CPU version 0.12.3 hw.ncpu: 1 hw.byteorder: 1234 hw.physmem: 523116544 hw.usermem: 463806464 hw.pagesize: 4096 hw.floatingpoint: 1 hw.machine_arch: amd64 hw.realmem: 536850432 hw.aac.iosize_max: 65536 hw.amr.force_sg32: 0 hw.an.an_cache_iponly: 1 hw.an.an_cache_mcastonly: 0 hw.an.an_cache_mode: dbm hw.an.an_dump: off hw.ata.to: 15 hw.ata.wc: 1 hw.ata.atapi_dma: 1 hw.ata.ata_dma_check_80pin: 1 hw.ata.ata_dma: 1 hw.ath.txbuf: 200 hw.ath.rxbuf: 40 hw.ath.regdomain: 0 hw.ath.countrycode: 0 hw.ath.xchanmode: 1 hw.ath.outdoor: 1 hw.ath.calibrate: 30 hw.ath.hal.swba_backoff: 0 hw.ath.hal.sw_brt: 10 hw.ath.hal.dma_brt: 2 hw.bce.msi_enable: 1 hw.bce.tso_enable: 1 hw.bge.allow_asf: 0 hw.cardbus.cis_debug: 0 hw.cardbus.debug: 0 hw.cs.recv_delay: 570 hw.cs.ignore_checksum_failure: 0 hw.cs.debug: 0 hw.cxgb.snd_queue_len: 50 hw.cxgb.use_16k_clusters: 1 hw.cxgb.force_fw_update: 0 hw.cxgb.singleq: 0 hw.cxgb.ofld_disable: 0 hw.cxgb.msi_allowed: 2 hw.cxgb.txq_mr_size: 1024 hw.cxgb.sleep_ticks: 1 hw.cxgb.tx_coalesce: 0 hw.firewire.hold_count: 3 hw.firewire.try_bmr: 1 hw.firewire.fwmem.speed: 2 hw.firewire.fwmem.eui64_lo: 0 hw.firewire.fwmem.eui64_hi: 0 hw.firewire.phydma_enable: 1 hw.firewire.nocyclemaster: 0 hw.firewire.fwe.rx_queue_len: 128 hw.firewire.fwe.tx_speed: 2 hw.firewire.fwe.stream_ch: 1 hw.firewire.fwip.rx_queue_len: 128 hw.firewire.sbp.tags: 0 hw.firewire.sbp.use_doorbell: 0 hw.firewire.sbp.scan_delay: 500 hw.firewire.sbp.login_delay: 1000 hw.firewire.sbp.exclusive_login: 1 hw.firewire.sbp.max_speed: -1 hw.firewire.sbp.auto_login: 1 hw.mfi.max_cmds: 128 hw.mfi.event_class: 0 hw.mfi.event_locale: 65535 hw.pccard.cis_debug: 0 hw.pccard.debug: 0 hw.cbb.debug: 0 hw.cbb.start_32_io: 4096 hw.cbb.start_16_io: 256 hw.cbb.start_memory: 2281701376 hw.pcic.pd6722_vsense: 1 hw.pcic.intr_mask: 57016 hw.pci.honor_msi_blacklist: 1 hw.pci.enable_msix: 1 hw.pci.enable_msi: 1 hw.pci.do_power_resume: 1 hw.pci.do_power_nodriver: 0 hw.pci.enable_io_modes: 1 hw.pci.host_mem_start: 2147483648 hw.syscons.kbd_debug: 1 hw.syscons.kbd_reboot: 1 hw.syscons.bell: 1 hw.syscons.saver.keybonly: 1 hw.syscons.sc_no_suspend_vtswitch: 0 hw.usb.uplcom.interval: 100 hw.usb.uvscom.interval: 100 hw.usb.uvscom.opktsize: 8 hw.wi.debug: 0 hw.wi.txerate: 0 hw.xe.debug: 0 hw.intr_storm_threshold: 1000 hw.availpages: 127714 hw.bus.devctl_disable: 0 hw.ste.rxsyncs: 0 hw.busdma.total_bpages: 32 hw.busdma.zone0.total_bpages: 32 hw.busdma.zone0.free_bpages: 32 hw.busdma.zone0.reserved_bpages: 0 hw.busdma.zone0.active_bpages: 0 hw.busdma.zone0.total_bounced: 0 hw.busdma.zone0.total_deferred: 0 hw.busdma.zone0.lowaddr: 0xffffffff hw.busdma.zone0.alignment: 2 hw.busdma.zone0.boundary: 65536 hw.clockrate: 2808 hw.instruction_sse: 1 hw.apic.enable_extint: 0 hw.kbd.keymap_restrict_change: 0 hw.acpi.supported_sleep_state: S3 S4 S5 hw.acpi.power_button_state: S5 hw.acpi.sleep_button_state: S3 hw.acpi.lid_switch_state: NONE hw.acpi.standby_state: S1 hw.acpi.suspend_state: S3 hw.acpi.sleep_delay: 1 hw.acpi.s4bios: 0 hw.acpi.verbose: 0 
hw.acpi.disable_on_reboot: 0 hw.acpi.handle_reboot: 0 hw.acpi.cpu.cx_lowest: C1 Processes $ ps ax PID TT STAT TIME COMMAND 0 ?? DLs 0:00.00 [swapper] 1 ?? ILs 0:00.00 /sbin/init -- 2 ?? DL 0:00.08 [g_event] 3 ?? DL 0:00.29 [g_up] 4 ?? DL 0:00.33 [g_down] 5 ?? DL 0:00.00 [crypto] 6 ?? DL 0:00.00 [crypto returns] 7 ?? DL 0:00.00 [xpt_thrd] 8 ?? DL 0:00.00 [kqueue taskq] 9 ?? DL 0:00.00 [acpi_task_0] 10 ?? RL 34:12.42 [idle: cpu0] 11 ?? WL 0:01.13 [swi4: clock sio] 12 ?? WL 0:00.00 [swi3: vm] 13 ?? WL 0:00.00 [swi1: net] 14 ?? DL 0:00.04 [yarrow] 15 ?? WL 0:00.00 [swi6: task queue] 16 ?? WL 0:00.00 [swi2: cambio] 17 ?? DL 0:00.00 [acpi_task_1] 18 ?? DL 0:00.00 [acpi_task_2] 19 ?? WL 0:00.00 [swi5: +] 20 ?? DL 0:00.01 [thread taskq] 21 ?? WL 0:00.00 [swi6: Giant taskq] 22 ?? WL 0:00.00 [irq9: acpi0] 23 ?? WL 0:00.09 [irq14: ata0] 24 ?? WL 0:00.11 [irq15: ata1] 25 ?? WL 0:00.57 [irq11: ed0 uhci0] 26 ?? DL 0:00.00 [usb0] 27 ?? DL 0:00.00 [usbtask-hc] 28 ?? DL 0:00.00 [usbtask-dr] 29 ?? WL 0:00.01 [irq1: atkbd0] 30 ?? WL 0:00.00 [swi0: sio] 31 ?? DL 0:00.00 [sctp_iterator] 32 ?? DL 0:00.00 [pagedaemon] 33 ?? DL 0:00.00 [vmdaemon] 34 ?? DL 0:00.00 [idlepoll] 35 ?? DL 0:00.00 [pagezero] 36 ?? DL 0:00.01 [bufdaemon] 37 ?? DL 0:00.00 [vnlru] 38 ?? DL 0:00.14 [syncer] 39 ?? DL 0:00.01 [softdepflush] 1221 ?? Is 0:00.00 /sbin/devd 1289 ?? Is 0:00.01 /usr/sbin/syslogd -ss -f /var/etc/syslog.conf 1608 ?? Is 0:00.00 /usr/sbin/cron -s 1692 ?? Ss 0:00.03 /usr/local/sbin/mDNSResponderPosix -b -f /var/etc/mdn 1730 ?? S 0:00.43 /usr/local/sbin/lighttpd -f /var/etc/lighttpd.conf -m 1882 ?? DL 0:00.00 [system_taskq] 1883 ?? DL 0:00.00 [arc_reclaim_thread] 4139 ?? S 0:00.03 /usr/local/bin/php /usr/local/www/exec.php 4144 ?? S 0:00.00 sh -c ps ax 4145 ?? R 0:00.00 ps ax 1816 v0 Is 0:00.01 login [pam] (login) 1818 v0 I+ 0:00.03 -tcsh (csh) 1817 v1 Is+ 0:00.00 /usr/libexec/getty Pc ttyv1 1402 con- I 0:00.00 /usr/local/sbin/afpd -F /var/etc/afpd.conf 1404 con- S 0:00.00 /usr/local/sbin/cnid_metad 1682 con- I 0:02.78 /usr/local/sbin/mt-daapd -m -c /var/etc/mt-daapd.conf 1789 con- S 0:00.18 /usr/local/bin/fuppesd --config-dir /var/etc --config Libvert snippet <domain type='kvm'> <name>freenas</name> <uuid>********-****-****-****-************</uuid> <memory>524288</memory> <currentMemory>524288</currentMemory> <vcpu>1</vcpu> <os> <type arch='x86_64' machine='pc-0.12'>hvm</type> <boot dev='hd'/> </os> <features> <acpi/> <apic/> <pae/> </features> <clock offset='utc'/> <on_poweroff>destroy</on_poweroff> <on_reboot>restart</on_reboot> <on_crash>restart</on_crash> <devices> <emulator>/usr/bin/kvm</emulator> Is this possible? Ideally I'd like to be able to stop the host without having to manually deal with shutting down the VM.

  • rails using jruby 1.5 - slow!!

    - by gucki
    Hi! I'm currently using Passenger with REE 1.8.7 in production for a Rails 2.3.5 project using PostgreSQL as the database.

        ab -n 10000 -c 100: 285.69 [#/sec] (mean)

    I read that JRuby should be the fastest solution, so I installed jruby-1.5.0.rc2 together with the JDBC postgres adapter and GlassFish. As the performance was really poor, I also started running my application using "jruby --server -J-Druby.jit.threshold=0 script/server -e production". Anyway, I only get

        ab -n 10000 -c 100: 43.88 [#/sec] (mean)

    Thread_safe! is activated in my Rails config. Java seems to use all cores; CPU usage is around 350% (top). ruby -v:

        jruby 1.5.0.RC2 (ruby 1.8.7 patchlevel 249) (2010-04-28 7c245f3) (Java HotSpot(TM) 64-Bit Server VM 1.6.0_16) [amd64-java]

    I wonder what I'm doing wrong and how to get better performance with JRuby than with REE? Thanks, Corin

  • in free(): error: junk pointer, too high to make sense Segmentation fault: 11 (core dumped) gprof

    - by Mayank
    I am trying to profile my application. For this I compiled my code with the -pg and -lc_p options, and it compiled successfully. During run time I get the following error:

        in free(): error: junk pointer, too high to make sense
        Segmentation fault: 11 (core dumped)

    Running it under GDB gives:

        (gdb) b main
        Breakpoint 1 at 0x5124d4:
        (gdb) r
        warning: Unable to get location for thread creation breakpoint: generic error
        [New LWP 100085]
        cacheIp in free(): error: junk pointer, too high to make sense
        Program received signal SIGSEGV, Segmentation fault.
        [Switching to LWP 100085]
        0x00000000006c3a1f in pthread_sigmask ()

    My application is multi-threaded and is a combination of C and C++ code. uname -a:

        FreeBSD 6.3-RELEASE FreeBSD 6.3-RELEASE #0: Wed Jan 16 01:43:02 UTC 2008 [email protected]:/usr/obj/usr/src/sys/SMP amd64

    Why is the code crashing? Am I missing something?
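    That message comes from FreeBSD's malloc when free() is handed an address that doesn't correspond to any allocation, such as a pointer that was advanced after malloc() or clobbered by a buffer overrun elsewhere. A hypothetical minimal reproducer (not the poster's code) of the pattern:

        /* Hands free() an address that did not come from malloc(),
         * which the allocator rejects as a "junk pointer". */
        #include <stdlib.h>
        #include <string.h>

        int main(void) {
            char *buf = malloc(16);
            if (buf == NULL)
                return 1;
            strcpy(buf, "hello");
            buf += 4;      /* pointer no longer points at the allocation start */
            free(buf);     /* the allocator sees a "junk pointer" and aborts */
            return 0;
        }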

  • how to debug SIGSEGV in jvm GCTaskThread

    - by ekeren
    Hi, my application is experiencing crashes in production. The crash dump indicates a SIGSEGV has occurred in GCTaskThread. It uses JNI, so there might be some source of memory corruption, although I can't be sure. How can I debug this problem? I thought of using -XX:OnError..., but I am not sure what would help me debug this. Also, can someone give a concrete example of how JNI code can crash the GC with a SIGSEGV?

    EDIT: OS: SUSE Linux Enterprise Server 10 (x86_64)
    vm_info: Java HotSpot(TM) 64-Bit Server VM (11.0-b15) for linux-amd64 JRE (1.6.0_10-b33), built on Sep 26 2008 01:10:29 by "java_re" with gcc 3.2.2 (SuSE Linux)
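    As one concrete illustration (hypothetical; the class and method names below are made up, not from the poster's application): native code that writes past the end of a Java array corrupts the managed heap, and the damage is typically only detected much later, inside a GC thread such as GCTaskThread:

        /* JNI function with a classic off-by-one: the loop writes one
         * element past the end of the array, smashing whatever object
         * header the GC later walks over. */
        #include <jni.h>

        JNIEXPORT void JNICALL
        Java_Example_fill(JNIEnv *env, jobject obj, jintArray arr)
        {
            jsize len = (*env)->GetArrayLength(env, arr);
            jint *elems = (*env)->GetIntArrayElements(env, arr, NULL);
            if (elems == NULL)
                return;

            for (jsize i = 0; i <= len; i++)   /* bug: should be i < len */
                elems[i] = 0;                  /* last write lands past the array */

            (*env)->ReleaseIntArrayElements(env, arr, elems, 0);
        }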

  • memory available to 64bit Fedora guest under 32bit XP host running virtualbox

    - by Chris Card
    I have successfully installed a 64-bit Fedora 11 guest OS using VirtualBox on a host machine (AMD64) running 32-bit Windows XP. At the moment the host machine has 2 GB of RAM installed and I've allocated 1 GB to the guest, which all works well. The host machine can hold a maximum of 4 GB of RAM, so I was wondering if it's worth buying an extra 2 GB for it. I know that 32-bit Windows XP can't use all of the 4 GB, but can the guest OS use any of the RAM that the host OS can't use?

  • Examining C/C++ Heap memory statistics in gdb

    - by fd
    I'm trying to investigate the state of the C/C++ heap from within gdb on Linux amd64; is there a nice way to do this? One approach I've tried is "call mallinfo()", but unfortunately I can't then extract the values I want, since gdb doesn't handle the struct return value properly. I'm not easily able to write a function to be compiled into the binary for the process I am attached to, so I can't simply implement my own function to extract the values by calling mallinfo() from my own code. Is there perhaps a clever trick that will allow me to do this on the fly? Another option could be to locate the heap and traverse the malloc headers / free list; I'd appreciate any pointers to where I could start in finding the location and layout of these. I've been trying to Google and read around the problem for about 2 hours, and I've learnt some fascinating stuff but still not found what I need.
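    For cases where rebuilding the target is an option (or where a small shared library can be injected, e.g. via a dlopen() call issued from gdb), a helper along these lines prints the counters. This is a minimal sketch: the function name is an arbitrary choice, but the struct mallinfo fields are the real glibc <malloc.h> members:

        /* Dumps the glibc heap statistics returned by mallinfo(). */
        #include <malloc.h>
        #include <stdio.h>

        void dump_mallinfo(void)
        {
            struct mallinfo mi = mallinfo();
            printf("arena    (non-mmapped bytes from sbrk): %d\n", mi.arena);
            printf("ordblks  (number of free chunks):       %d\n", mi.ordblks);
            printf("hblkhd   (bytes in mmapped regions):    %d\n", mi.hblkhd);
            printf("uordblks (total allocated bytes):       %d\n", mi.uordblks);
            printf("fordblks (total free bytes):            %d\n", mi.fordblks);
            printf("keepcost (releasable top-most space):   %d\n", mi.keepcost);
        }

    With that compiled in, "call dump_mallinfo()" from gdb sidesteps the struct-return problem entirely.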

  • DLL Load Failed, Not a Valid Win32 App showing for both x86 & x64 DLLs

    - by mitrebox
    Trying to run the latest version of heatmap: http://jjguy.com/heatmap/ The DLL load keeps crapping out on me with both the 64- and 32-bit DLLs. (Similar questions on this seemed irrelevant, as I've tried loading both DLLs.) I'm running Windows 7. I have uninstalled and re-installed Python 2.7.3 64-bit. The IDLE top line reads:

        Python 2.7.3 (default, Apr 10 2012, 23:24:47) [MSC v.1500 64 bit (AMD64)] on win32

    I've tried loading

        C:\Python27\DLLs\cHeatmap-x86.dll
        ImportError: DLL load failed: %1 is not a valid Win32 application.
        C:\Python27\DLLs\cHeatmap-x64.dll
        ImportError: DLL load failed: %1 is not a valid Win32 application.

    I can run heatmap 1.1, but that was before the DLLs were added.

  • Why do debug symbols so adversely affect the performance of threaded applications on Linux?

    - by fluffels
    Hi. I'm writing a ray tracer. Recently, I added threading to the program to exploit the additional cores on my i5 quad core. In a weird turn of events, the debug version of the application is now running slower, but the optimized build is running faster than before I added threading. I'm passing the "-g -pg" flags to gcc for the debug build and the "-O3" flag for the optimized build. Host system: Ubuntu Linux 10.04 AMD64. I know that debug symbols add significant overhead to the program, but the relative performance has always been maintained, i.e. a faster algorithm will always run faster in both debug and optimized builds. Any idea why I'm seeing this behavior?
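    It is worth separating the two flags here: -g (debug symbols) adds little or no runtime cost by itself, whereas -pg instruments every function call with mcount() bookkeeping for gprof, and that per-call overhead is a plausible suspect in call-heavy threaded code. A tiny pthread benchmark (hypothetical, not the poster's ray tracer) that can be built once with "-g -pg" and once with "-O3" (plus -pthread) to compare wall times:

        /* Call-heavy multithreaded loop for isolating build-flag effects. */
        #include <pthread.h>
        #include <stdio.h>

        #define THREADS 4
        #define ITERS   100000000UL

        static double work(double x) { return x * 1.0000001 + 0.5; }

        static void *worker(void *arg) {
            double x = *(double *)arg;
            for (unsigned long i = 0; i < ITERS; i++)
                x = work(x);                  /* one function call per iteration */
            *(double *)arg = x;
            return NULL;
        }

        int main(void) {
            pthread_t t[THREADS];
            double v[THREADS] = {1.0, 2.0, 3.0, 4.0};
            for (int i = 0; i < THREADS; i++)
                pthread_create(&t[i], NULL, worker, &v[i]);
            for (int i = 0; i < THREADS; i++)
                pthread_join(t[i], NULL);
            printf("%f\n", v[0] + v[1] + v[2] + v[3]);  /* keep the work live */
            return 0;
        }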

  • having problems in running my java program..

    - by Tarun
    I'm a beginner in Java, so this question might seem a little stupid. My JDK and JRE are installed in C:\Program Files. I write my program and save it in my folder G:\start. My program compiles without any error (the .class file is also generated), but when I run it, it says "unable to locate G:\lib\amd64\jvm.cfg". So I copy the 'lib' folder from my JDK folder and paste it in G:\. Again the program compiles without any error, but when I run it, it gives me a new error saying "unable to locate G:\bin\server\jvm.dll". So I copy the 'bin' folder to G:\, but now when I run, it gives me the same error again: "unable to locate G:\bin\server\jvm.dll". Where am I going wrong?

  • Efficiency: what block size for kernel-mode memory allocations?

    - by Robert
    I need a big, driver-internal memory buffer of several tens of megabytes (non-paged, since it is accessed at dispatch level). Since I think that allocating chunks of non-contiguous memory is more likely to succeed than allocating one single contiguous memory block (especially when memory becomes fragmented), I want to implement that memory buffer as a linked list of memory blocks. What size should the blocks have to use the memory pages efficiently (read: not to waste any page space)? A multiple of 4096 (equal to the page size of the OS)? A multiple of 4000 (so as not to waste another page on OS-internal memory allocation information)? Another size? The target platform is Windows NT >= 5.1 (XP and above); the target architectures are x86 and amd64 (not Itanium).
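    For what it's worth, non-paged pool requests of a page size or larger are, as far as I know, served as whole pages without an in-band pool header, so a multiple of PAGE_SIZE (4096 on x86 and amd64) should leave no slack; the 4000-byte trick only matters for sub-page allocations. Below is a sketch of the linked-list idea, not a definitive implementation: the pool tag, the 16-page block size, and the helper name are all arbitrary assumptions.

        #include <wdm.h>

        #define BUF_TAG    'LfuB'            /* arbitrary pool tag */
        #define BLOCK_SIZE (16 * PAGE_SIZE)  /* page multiple: whole pages, no slack */

        typedef struct _BUF_BLOCK {
            struct _BUF_BLOCK *Next;         /* singly linked chain of blocks */
            /* payload bytes follow the header within the same pages */
        } BUF_BLOCK, *PBUF_BLOCK;

        /* Build the chain at PASSIVE_LEVEL during driver setup; the
         * buffer itself can then be touched at DISPATCH_LEVEL. */
        static PBUF_BLOCK AllocBlockChain(SIZE_T totalBytes)
        {
            PBUF_BLOCK head = NULL;
            SIZE_T     have = 0;

            while (have < totalBytes) {
                PBUF_BLOCK blk = (PBUF_BLOCK)
                    ExAllocatePoolWithTag(NonPagedPool, BLOCK_SIZE, BUF_TAG);
                if (blk == NULL) {           /* unwind on failure */
                    while (head != NULL) {
                        PBUF_BLOCK next = head->Next;
                        ExFreePoolWithTag(head, BUF_TAG);
                        head = next;
                    }
                    return NULL;
                }
                blk->Next = head;
                head = blk;
                have += BLOCK_SIZE;
            }
            return head;
        }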

  • Bug in malloc or mine?

    - by Martin
    Is this my bug, or a bug/assertion failure in malloc itself?

        alloc.c:2451: sYSMALLOc: Assertion `(old_top == (((mbinptr) (((char *) &((av)->bins[((1) - 1) * 2])) - __builtin_offsetof (struct malloc_chunk, fd)))) && old_size == 0) || ((unsigned long) (old_size) >= (unsigned long)((((__builtin_offsetof (struct malloc_chunk, fd_nextsize))+((2 * (sizeof(size_t))) - 1)) & ~((2 * (sizeof(size_t))) - 1))) && ((old_top)->size & 0x1) && ((unsigned long)old_end & pagemask) == 0)' failed.

    libstdc++6:amd64 4.7.2-2ubuntu1
    gcc 4.7.2
    ubuntu 12.10/64bit
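    For illustration, a hypothetical snippet (not from the poster's program) of the usual culprit behind this assertion: an earlier buffer overrun corrupts malloc's chunk metadata, and a later allocation trips the check. Running the real program under valgrind usually points at the offending write:

        /* Heap corruption that surfaces later inside malloc: the memset
         * writes past the 16-byte allocation and clobbers the adjacent
         * chunk header, so a subsequent malloc() may abort with the
         * sYSMALLOc assertion. */
        #include <stdlib.h>
        #include <string.h>

        int main(void) {
            char *a = malloc(16);
            if (a == NULL)
                return 1;
            memset(a, 'x', 64);    /* overruns the allocation by 48 bytes */
            char *b = malloc(16);  /* may now trip the assertion */
            free(b);
            free(a);
            return 0;
        }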

  • Resizing Ubuntu x64 Server Partition with VirtualBox not reflected in OS

    - by daleyjem
    I've already resized my virtual disk with VirtualBox, but now need to extend the partition of my Ubuntu VM itself. I thought I was on my way with GParted live CD, but after I resize the "extended" filesystem partition, and then the child "lvm2 pv" filesystem partition to fill the unallocated space, df -h still shows the original disk size after I reboot into the VM. Any tips on this? I've scoured the webs tirelessly. Should I be resizing the boot (/dev/sda1) partition instead? Should I try to convert my lvm2 to ext4 or something? I'm lost on this. Note: VirtualBox hard disk is "dynamic". Specs: VBox 4.2.18 Ubuntu 12.04.2 amd64 Gparted 0.16.2-1b-i486

  • Unable to install Konstruktor program

    - by George Barnick
    I've recently switched to Linux, having had a good knowledge of how it works before switching, since I've had experience working with Ubuntu-based servers for several months. I've been trying to install a LEGO CAD program that I found, based on an open-source library. Information on that is here. The program I'm looking at is called Konstruktor, and for the most part it should be working. One dependency, however, does not seem to install. The missing dependency is "kdelibs5"; I know what it is and what it's for, but I can't get it to install. When trying to install it, I get this error:

        george@GEORGE-PC:~$ sudo apt-get install kdelibs5
        Reading package lists... Done
        Building dependency tree
        Reading state information... Done
        Package kdelibs5 is not available, but is referred to by another package.
        This may mean that the package is missing, has been obsoleted, or
        is only available from another source
        E: Package 'kdelibs5' has no installation candidate

    When trying to install the proper package of Konstruktor for my system, I get this error:

        george@GEORGE-PC:~$ sudo dpkg -i '/home/george/Downloads/konstruktor_0.9-beta1-2_amd64.deb'
        Selecting previously unselected package konstruktor.
        (Reading database ... 224684 files and directories currently installed.)
        Unpacking konstruktor (from .../konstruktor_0.9-beta1-2_amd64.deb) ...
        dpkg: dependency problems prevent configuration of konstruktor:
        konstruktor depends on kdelibs5 (>= 4:4.4.5); however:
        Package kdelibs5 is not installed.
        dpkg: error processing konstruktor (--install):
        dependency problems - leaving unconfigured
        Processing triggers for bamfdaemon ...
        Rebuilding /usr/share/applications/bamf-2.index...
        Processing triggers for desktop-file-utils ...
        Processing triggers for gnome-menus ...
        Processing triggers for mime-support ...
        Processing triggers for hicolor-icon-theme ...
        Processing triggers for shared-mime-info ...
        Unknown media type in type 'all/all'
        Unknown media type in type 'all/allfiles'
        Unknown media type in type 'uri/mms'
        Unknown media type in type 'uri/mmst'
        Unknown media type in type 'uri/mmsu'
        Unknown media type in type 'uri/pnm'
        Unknown media type in type 'uri/rtspt'
        Unknown media type in type 'uri/rtspu'
        Errors were encountered while processing:
        konstruktor

    I have tried updating and adding apt repositories for KDE packages, but I continuously get the same error. I tried the package "kdelibs5-dev", which installed fine, but "kdelibs5" won't, and Konstruktor still won't install. I am on Ubuntu 13.10 with GNOME Shell on amd64 architecture. Any help would be much appreciated. (I originally posted my issue here.) Thanks ahead of time.

  • gcc segmentation fault on Ubuntu 12.04

    - by Yuval F
    I am trying to compile a C program on Ubuntu Precise 12.04. Here's the program:

        #include <stdio.h>

        int main(int argc, char** argv) {
            printf("Hello World!");
            return 0;
        }

    My gcc version is 4.6.3 (Ubuntu/Linaro 4.6.3-1ubuntu5). Initially it did not find cc1, so I added a soft link. Now I get this message when I try to compile:

        gcc: internal compiler error: Segmentation fault (program cc1)

    Compiling the same program with g++ works fine. I tried reinstalling build-essential, but to no avail. What am I missing?

    EDIT: I tried reinstalling according to @gertyvdijk's suggestion. As it did not help, here is the output of apt-cache policy gcc-4.6:

        gcc-4.6:
          Installed: 4.6.3-1ubuntu5
          Candidate: 4.6.3-1ubuntu5
          Version table:
         *** 4.6.3-1ubuntu5 0
                500 http://il.archive.ubuntu.com/ubuntu/ precise/main amd64 Packages
                100 /var/lib/dpkg/status

    and the output of ls -l /usr/bin/gcc:

        lrwxrwxrwx 1 root root 7 Mar 13 2012 /usr/bin/gcc -> gcc-4.6

    EDIT #2: here's the verbose compiler output:

        gcc -v aaa.c
        Using built-in specs.
        COLLECT_GCC=gcc
        COLLECT_LTO_WRAPPER=/usr/lib/gcc/x86_64-linux-gnu/4.6/lto-wrapper
        Target: x86_64-linux-gnu
        Configured with: ../src/configure -v --with-pkgversion='Ubuntu/Linaro 4.6.3-1ubuntu5' --with-bugurl=file:///usr/share/doc/gcc-4.6/README.Bugs --enable-languages=c,c++,fortran,objc,obj-c++ --prefix=/usr --program-suffix=-4.6 --enable-shared --enable-linker-build-id --with-system-zlib --libexecdir=/usr/lib --without-included-gettext --enable-threads=posix --with-gxx-include-dir=/usr/include/c++/4.6 --libdir=/usr/lib --enable-nls --with-sysroot=/ --enable-clocale=gnu --enable-libstdcxx-debug --enable-libstdcxx-time=yes --enable-gnu-unique-object --enable-plugin --enable-objc-gc --disable-werror --with-arch-32=i686 --with-tune=generic --enable-checking=release --build=x86_64-linux-gnu --host=x86_64-linux-gnu --target=x86_64-linux-gnu
        Thread model: posix
        gcc version 4.6.3 (Ubuntu/Linaro 4.6.3-1ubuntu5)
        COLLECT_GCC_OPTIONS='-v' '-mtune=generic' '-march=x86-64'
        /usr/lib/gcc/x86_64-linux-gnu/4.6/cc1 -quiet -v -imultilib . -imultiarch x86_64-linux-gnu aaa.c -quiet -dumpbase aaa.c -mtune=generic -march=x86-64 -auxbase aaa -version -fstack-protector -o /tmp/ccHfcXMs.s
        gcc: internal compiler error: Segmentation fault (program cc1)
        Please submit a full bug report, with preprocessed source if appropriate.
        See <file:///usr/share/doc/gcc-4.6/README.Bugs> for instructions.

  • Much Ado About Nothing: Stub Objects

    - by user9154181
    The Solaris 11 link-editor (ld) contains support for a new type of object that we call a stub object. A stub object is a shared object, built entirely from mapfiles, that supplies the same linking interface as the real object while containing no code or data. Stub objects cannot be executed — the runtime linker will kill any process that attempts to load one. However, you can link to a stub object as a dependency, allowing the stub to act as a proxy for the real version of the object.

    You may well wonder if there is a point to producing an object that contains nothing but linking interface. As it turns out, stub objects are very useful for building large bodies of code such as Solaris. In the last year, we've had considerable success in applying them to one of our oldest and thorniest build problems. In this discussion, I will describe how we came to invent these objects, and how we apply them to building Solaris.

    This posting explains where the idea for stub objects came from, and details our long and twisty journey from hallway idea to standard link-editor feature. I expect that these details are mainly of interest to those who work on Solaris and its makefiles, those who have done so in the past, and those who work with other similar bodies of code. A subsequent posting will omit the history and background details, and instead discuss how to build and use stub objects. If you are mainly interested in what stub objects are, and don't care about the underlying software war stories, I encourage you to skip ahead.

    The Long Road To Stubs

    This all started for me with an email discussion in May of 2008, regarding a change request that was filed in 2002, entitled:

        4631488 lib/Makefile is too patient: .WAITs should be reduced

    This CR encapsulates a number of chronic issues with Solaris builds:

    We build Solaris with a parallel make (dmake) that tries to build as much of the code base in parallel as possible. There is a lot of code to build, and we've long made use of parallelized builds to get the job done quicker. This is even more important in today's world of massively multicore hardware.

    Solaris contains a large number of executables and shared objects. Executables depend on shared objects, and shared objects can depend on each other. Before you can build an object, you need to ensure that the objects it needs have been built. This implies a need for serialization, which is in direct opposition to the desire to build everything in parallel.

    To accurately build objects in the right order requires an accurate set of make rules defining the things that depend on each other. This sounds simple, but the reality is quite complex. In practice, having programmers explicitly specify these dependencies is a losing strategy: It's really hard to get right. It's really easy to get it wrong and never know it, because things build anyway. Even if you get it right, it won't stay that way, because dependencies between objects can change over time, and make cannot help you detect such drifting. You won't know that you got it wrong until the builds break. That can be a long time after the change that triggered the breakage happened, making it hard to connect the cause and the effect. Usually this happens just before a release, when the pressure is on, it's hard to think calmly, and there is no time for deep fixes.
    As a poor compromise, the libraries in core Solaris were built using a set of grossly incomplete hand-written rules, supplemented with a number of dmake .WAIT directives used to group the libraries into sets of non-interacting groups that can be built in parallel because we think they don't depend on each other.

    From time to time, someone will suggest that we could analyze the built objects themselves to determine their dependencies and then generate make rules based on those relationships. This is possible, but there are complications that limit the usefulness of that approach:

    To analyze an object, you have to build it first. This is a classic chicken-and-egg scenario. You could analyze the results of a previous build, but then you're not necessarily going to get accurate rules for the current code. It should be possible to build the code without having a built workspace available.

    The analysis will take time, and remember that we're constantly trying to make builds faster, not slower.

    By definition, such an approach will always be approximate, and therefore only incrementally more accurate than the hand-written rules described above. The hand-written rules are fast and cheap, while this idea is slow and complex, so we stayed with the hand-written approach.

    Solaris was built that way, essentially forever, because these are genuinely difficult problems that had no easy answer. The makefiles were full of build races in which the right outcomes happened reliably for years, until a new machine or a change in build server workload upset the accidental balance of things. After figuring out what had happened, you'd mutter "How did that ever work?", add another incomplete and soon-to-be-inaccurate make dependency rule to the system, and move on. This was not a satisfying solution, as we tend to be perfectionists in the Solaris group, but we didn't have a better answer. It worked well enough, approximately. And so it went for years. We needed a different approach — a new idea to cut the Gordian Knot.

    In that discussion from May 2008, my fellow linker-alien Rod Evans had the initial spark that led us to a game-changing series of realizations:

    The link-editor is used to link objects together, but it only uses the ELF metadata in the object, consisting of symbol tables, ELF versioning sections, and similar data. Notably, it does not look at, or understand, the machine code that makes an object useful at runtime.

    If you had an object that only contained the ELF metadata for a dependency, but not the code or data, the link-editor would find it equally useful for linking, and would never know the difference. Call it a stub object.

    In the core Solaris OS, we require all objects to be built with a link-editor mapfile that describes all of their publicly available functions and data. Could we build a stub object using the mapfile for the real object?

    It ought to be very fast to build stub objects, as there are no input objects to process.

    Unlike the real object, stub objects would not actually require any dependencies, and so all of the stubs for the entire system could be built in parallel.

    When building the real objects, one could link against the stub objects instead of the real dependencies. This means that all the real objects can be built in parallel too, without any serialization. We could replace a system that requires perfect makefile rules with a system that requires no ordering rules whatsoever. The results would be considerably more robust.
    We immediately realized that this idea had potential, but also that there were many details to sort out, lots of work to do, and that perhaps it wouldn't really pan out. As is often the case, it would be necessary to do the work and see how it turned out.

    Following that conversation, I set about trying to build a stub object. We determined that a faithful stub has to do the following:

    Present the same set of global symbols, with the same ELF versioning, as the real object. Functions are simple — it suffices to have a symbol of the right type, possibly, but not necessarily, referencing a null function in its text segment.

    Copy relocations make data more complicated to stub. The possibility of a copy relocation means that when you create a stub, the data symbols must have the actual size of the real data. Any error in this will go uncaught at link time, and will cause tragic failures at runtime that are very hard to diagnose. For reasons too obscure to go into here, involving tentative symbols, it is also important that the data reside in bss, or not, matching its placement in the real object.

    If the real object has more than one symbol pointing at the same data item, we call these aliased symbols. All data symbols in the stub object must exhibit the same aliasing as the real object.

    We imagined the stub library feature working as follows:

    A command line option to ld tells it to produce a stub rather than a real object. In this mode, only mapfiles are examined, and any objects or shared libraries on the command line are ignored.

    The extra information needed (function or data, size, and bss details) would be added to the mapfile.

    When building the real object instead of the stub, the extra information for building stubs would be validated against the resulting object to ensure that they match.

    In exploring these ideas, I immediately ran headfirst into the reality of the original mapfile syntax, a subject that I would later write about as The Problem(s) With Solaris SVR4 Link-Editor Mapfiles. The idea of extending that poor language was a non-starter. Until a better mapfile syntax became available, which seemed unlikely in 2008, the solution could not involve extensions to the mapfile syntax. Instead, we cooked up the idea (hack) of augmenting mapfiles with stylized comments that would carry the necessary information. A typical definition might look like:

        # DATA(i386) __iob 0x3c0
        # DATA(amd64,sparcv9) __iob 0xa00
        # DATA(sparc) __iob 0x140
        iob;

    A further problem then became clear: If we can't extend the mapfile syntax, then there's no good way to extend ld with an option to produce stub objects, and to validate them against the real objects. The idea of having ld read comments in a mapfile and parse them for content is an unacceptable hack. The entire point of comments is that they are strictly for the human reader, and explicitly ignored by the tool.

    Taking all of these speed bumps into account, I made a new plan:

    A perl script reads the mapfiles, generates some small C glue code to produce empty functions and data definitions, compiles and links the stub object from the generated glue code, and then deletes the generated glue code.

    Another perl script, used after both objects have been built, compares the real and stub objects, using data from elfdump, and validates that they present the same linking interface.

    By June 2008, I had written the above and generated a stub object for libc. It was a useful prototype process to go through, and it allowed me to explore the ideas at a deep level.
    Ultimately though, the result was unsatisfactory as a basis for a real product. There were so many issues:

    The use of stylized comments was fine for a prototype, but not close to professional enough for shipping product. The idea of having to document and support it was a large concern.

    The ideal solution for stub objects really does involve having the link-editor accept the same arguments used to build the real object, augmented with a single extra command line option. Any other solution, such as our prototype script, will require makefiles to be modified in deeper ways to support building stubs, and so will raise barriers to converting existing code.

    A validation script that rederives what the linker knew when it built an object will always be at a disadvantage relative to the actual linker that did the work.

    A stub object should be identifiable as such. In the prototype, there was no tag or other metadata that would let you know that they weren't real objects. Being able to identify a stub object in this way means that the file command can tell you what it is, and that the runtime linker can refuse to try and run a program that loads one.

    At that point, we needed to apply this prototype to building Solaris. As you might imagine, modifying all the makefiles in the core Solaris code base in order to do this is a massive task, and not something you'd enter into lightly. The quality of the prototype just wasn't good enough to justify that sort of time commitment, so I tabled the project, putting it on my list of long-term things to think about, and moved on to other work. It would sit there for a couple of years.

    Semi-coincidentally, one of the projects I tackled after that was to create a new mapfile syntax for the Solaris link-editor. We had wanted to do something about the old mapfile syntax for many years. Others before me had done some paper designs, and a great deal of thought had already gone into the features it should, and should not, have, but for various reasons things had never moved beyond the idea stage. When I joined Sun in late 2005, I got involved in reviewing those things and thinking about the problem. Now in 2008, fresh from relearning for the Nth time why the old mapfile syntax was a huge impediment to linker progress, it seemed like the right time to tackle the mapfile issue. Paving the way for proper stub object support was not the driving force behind that effort, but I certainly had them in mind as I moved forward.

    The new mapfile syntax, which we call version 2, integrated into Nevada build snv_135 in February 2010:

        6916788 ld version 2 mapfile syntax
        PSARC/2009/688 Human readable and extensible ld mapfile syntax

    In order to prove that the new mapfile syntax was adequate for general purpose use, I had also done an overhaul of the ON consolidation to convert all mapfiles to use the new syntax, and put checks in place that would ensure that no use of the old syntax would creep back in. That work went back into snv_144 in June 2010:

        6916796 OSnet mapfiles should use version 2 link-editor syntax

    That was a big putback, modifying 517 files, adding 18 new files, and removing 110 old ones. I would have done this putback anyway, as the work was already done, and the benefits of human-readable syntax are obvious. However, among the justifications listed in CR 6916796 was this:

        We anticipate adding additional features to the new mapfile language that will be applicable to ON, and which will require all sharable object mapfiles to use the new syntax.
    I never explained what those additional features were, and no one asked. It was premature to say so, but this was a reference to stub objects.

    By that point, I had already put together a working prototype link-editor with the necessary support for stub objects. I was pleased to find that building stubs was indeed very fast. On my desktop system (Ultra 24), an amd64 stub for libc can be built in a fraction of a second:

        % ptime ld -64 -z stub -o stubs/libc.so.1 -G -hlibc.so.1 \
            -ztext -zdefs -Bdirect ...
        real        0.019708910
        user        0.010101680
        sys         0.008528431

    In order to go from prototype to integrated link-editor feature, I knew that I would need to prove that stub objects were valuable. And to do that, I knew that I'd have to switch the Solaris ON consolidation to use stub objects and evaluate the outcome. And in order to do that experiment, ON would first need to be converted to version 2 mapfiles. Sub-mission accomplished.

    Normally when you design a new feature, you can devise reasonably small tests to show it works, and then deploy it incrementally, letting it prove its value as it goes. The entire point of stub objects, however, was to demonstrate that they could be successfully applied to an extremely large and complex code base, and specifically to solve the Solaris build issues detailed above. There was no way to finesse the matter — in order to move ahead, I would have to successfully use stub objects to build the entire ON consolidation and demonstrate their value. In software, the need to boil the ocean can often be a warning sign that things are trending in the wrong direction. Conversely, sometimes progress demands that you build something large and new all at once. A big win, or a big loss — sometimes all you can do is try it and see what happens.

    And so, I spent some time staring at ON makefiles trying to get a handle on how things work, and how they'd have to change. It's a big and messy world, full of complex interactions, unspecified dependencies, special cases, and knowledge of arcane makefile features...

    ...and so, I backed away, put it down for a few months and did other work...

    ...until the fall, when I felt like it was time to stop thinking and pondering (some would say stalling) and get on with it.

    Without stubs, the following gives a simplified high-level view of how Solaris is built:

    An initially empty directory known as the proto, and referenced via the ROOT makefile macro, is established to receive the files that make up the Solaris distribution.

    A top-level setup rule creates the proto area and performs operations needed to initialize the workspace so that the main build operations can be launched, such as copying needed header files into the proto area.

    Parallel builds are launched to build the kernel (usr/src/uts), libraries (usr/src/lib), and commands. The install makefile target builds each item and delivers a copy to the proto area. All libraries and executables link against the objects previously installed in the proto, implying the need to synchronize the order in which things are built.

    Subsequent passes run lint and do packaging.

    Given this structure, the additions to use stub objects are:

    A new second proto area is established, known as the stub proto and referenced via the STUBROOT makefile macro. The stub proto has the same structure as the real proto, but is used to hold stub objects. All files in the real proto are delivered as part of the Solaris product. In contrast, the stub proto is used to build the product, and then thrown away.
    A new target called stub is added to library Makefiles. This rule builds the stub objects. The ld command is designed so that you can build a stub object using the same ld command line you'd use to build the real object, with the addition of a single -z stub option. This means that the makefile rules for building the stub objects are very similar to those used to build the real objects, and many existing makefile definitions can be shared between them.

    A new target called stubinstall is added to the Makefiles, which delivers the stub objects built by the stub rule into the stub proto. These rules reuse much of the existing plumbing used by the existing install rule.

    The setup rule runs stubinstall over the entire lib subtree as part of its initialization.

    All libraries and executables link against the objects in the stub proto rather than the main proto, and can therefore be built in parallel without any synchronization.

    There was no small way to try this that would yield meaningful results. I would have to take a leap of faith and edit approximately 1850 makefiles and 300 mapfiles first, trusting that it would all work out. Once the editing was done, I'd type make and see what happened. This took about 6 weeks to do, and there were many dark days when I'd question the entire project, or struggle to understand some of the many twisted and complex situations I'd uncover in the makefiles. I even found a couple of new issues that required changes to the new stub object related code I'd added to ld.

    With a substantial amount of encouragement and help from some key people in the Solaris group, I eventually got the editing done and stub objects for the entire workspace built. I found that my desktop system could build all the stub objects in the workspace in roughly a minute. This was great news, as it meant that use of the feature is effectively free — no one was likely to notice or care about the cost of building them.

    After another week of typing make, fixing whatever failed, and doing it again, I succeeded in getting a complete build! The next step was to remove all of the make rules and .WAIT statements dedicated to controlling the order in which libraries under usr/src/lib are built. This came together pretty quickly, and after a few more speed bumps, I had a workspace that built cleanly and looked like something you might actually be able to integrate someday.

    This was a significant milestone, but there was still much left to do. I turned to doing full nightly builds. Every type of build (open, closed, OpenSolaris, export, domestic) had to be tried. Each type failed in a new and unique way, requiring some thinking and rework. As things came together, I became aware of things that could have been done better, simpler, or cleaner, and those things also required some rethinking, the seeking of wisdom from others, and some rework. After another couple of weeks, it was in close-to-final form.

    My focus turned towards the end game and integration. This was a huge workspace, and it needed to go back soon, before changes in the gate would make merging increasingly difficult. At this point, I knew that the stub objects had greatly simplified the makefile logic and uncovered a number of race conditions, some of which had been there for years. I assumed that the builds were faster too, so I did some builds intended to quantify the speedup in build time that resulted from this approach. It had never occurred to me that there might not be one.
    And so, I was very surprised to find that the wall clock build times for a stock ON workspace were essentially identical to the times for my stub-library-enabled version! This is why it is important to always measure, and not just to assume. One can tell from first principles, based on all those removed dependency rules in the library makefiles, that the stub object version of ON gives dmake considerably more opportunities to overlap library construction. Some hypotheses were proposed, and shot down:

    Could we have disabled dmake's parallel feature? No, a quick check showed things being built in parallel.

    It was suggested that we might be I/O bound, and so the threads would be mostly idle. That's a plausible explanation, but system stats didn't really support it. Plus, the timings between the stub and non-stub cases were just too suspiciously identical.

    Are our machines already handling as much parallelism as they are capable of, and unable to exploit these additional opportunities? Once again, we didn't see the evidence to back this up.

    Eventually, a more plausible and obvious reason emerged: We build the libraries and commands (usr/src/lib, usr/src/cmd) in parallel with the kernel (usr/src/uts). The kernel is the long leg in that race, and so wall clock measurements of build time are essentially showing how long it takes to build uts.

    Although it would have been nice to post a huge speedup immediately, we can take solace in knowing that stub objects simplify the makefiles and reduce the possibility of race conditions. The next step in reducing build time should be to find ways to reduce or overlap the uts part of the builds. When that leg of the build becomes shorter, then the increased parallelism in the libs and commands will pay additional dividends. Until then, we'll just have to settle for simpler and more robust.

    And so, I integrated the link-editor support for creating stub objects into snv_153 (November 2010) with

        6993877 ld should produce stub objects
        PSARC/2010/397 ELF Stub Objects

    followed by the work to convert the ON consolidation in snv_161 (February 2011) with

        7009826 OSnet should use stub objects
        4631488 lib/Makefile is too patient: .WAITs should be reduced

    This was a huge putback, with 2108 modified files, 8 new files, and 2 removed files. Due to the size, I was allowed a window after snv_160 closed in which to do the putback. It went pretty smoothly for something this big; a few more preexisting race conditions would be discovered and addressed over the next few weeks, and things have been quiet since then.

    Conclusions and Looking Forward

    Solaris has been built with stub objects since February. The fact that developers no longer specify the order in which libraries are built has been a big success, and we've eliminated an entire class of build error. That's not to say that there are no build races left in the ON makefiles, but we've taken a substantial bite out of the problem while generally simplifying and improving things. The introduction of a stub proto area has also opened some interesting new possibilities for other build improvements. As this article has become quite long, and as those uses do not involve stub objects, I will defer that discussion to a future article.

    Read the article

  • Macbook Pro 13" Retina (10,2): Keyboard and Touchpad don't work correcltly

    - by Dirk
    I've been dealing with Ubuntu for about 5 years and have installed it on several laptops. Now I'm stuck trying to install Ubuntu 12.04.1 on a brand new Macbook Pro 13" Retina (10,2). I can successfully start Ubuntu from a USB stick, the Ubuntu desktop is visible, and a mouse cursor is visible, but there is no response to keyboard or touchpad input, so I cannot really install Ubuntu on the Macbook. The details of my approach:
    1. Prepare an empty USB stick.
    2. Download "ISO 2 USB EFI Booter for Mac" and copy the file bootX64.efi to the USB drive as /efi/boot/bootX64.efi.
    3. Download Ubuntu 12.04.1 Desktop for Mac from http://cdimage.ubuntu.com/releases/1-amd64+mac.iso and copy the ISO to the USB drive as /efi/boot/boot.iso.
    4. Put the USB stick into the Macbook.
    5. Press and hold the "alt" button while switching the Macbook on.
    6. Select "EFI Boot" from the boot menu that appears and press the Return/Enter key.
    7. Immediately a black terminal screen appears with the headline "Welcome to the Ubuntu ISO << - EFI booter". 30 seconds later the familiar Ubuntu startup graphics screen is showing, and a further 20 seconds later Ubuntu has started and the desktop is visible, in wonderfully fine resolution.
    8. Now the computer does not respond to any actions on the touchpad or the keyboard.
    Has anyone successfully installed Ubuntu on this Macbook Pro 13" Retina (10,2)? On https://help.ubuntu.com/community/MacBookPro this unit is not listed yet, anyway. Any help would be greatly appreciated! Dirk
    PS: I could now install Ubuntu with an external USB keyboard/mouse set. But after showing the GRUB menu, a kernel panic error appears and booting stops :-/ It seems the Ubuntu images don't fit the Macbook Pro Retina 13" (10,2) yet.
    PPS: OK, there are new facts: if I edit the boot options and enter "nomodeset noapic", Ubuntu starts and keyboard and touchpad work! Now I have to enable WiFi...
    PPPS: After installing the Broadcom firmware from the USB live stick as described in other posts, WiFi was enabled. Then I could update Ubuntu normally to 12.10. After this, I no longer have to enter "nomodeset noapic" in the GRUB menu. The last thing now is the touchpad: the driver seems not to be there, and the touchpad only shows up as a mouse. t.b.c.
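    For anyone retracing steps 1-3, a hedged sketch of the stick preparation (the mount point /media/usb and download paths are assumptions, and the ISO filename is the standard one for this release; adjust to what you actually have):

      # Create the EFI directory layout the Mac firmware looks for:
      mkdir -p /media/usb/efi/boot

      # The EFI loader, named exactly as the firmware expects:
      cp ~/Downloads/bootX64.efi /media/usb/efi/boot/bootX64.efi

      # The installer image, renamed so the loader finds it:
      cp ~/Downloads/ubuntu-12.04.1-desktop-amd64+mac.iso /media/usb/efi/boot/boot.iso

      # Flush writes before unplugging the stick:
      sync

    The "nomodeset noapic" workaround from the PPS is entered at the GRUB menu by pressing e, appending both words to the line that starts with linux, and booting with Ctrl-X.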

    Read the article

  • Tomboy error while trying to sync with Ubuntu One; Can anyone help?

    - by Michael Chapman
    So I'm sure you've heard the song before, but after trying to sync my notes with Ubuntu One (on 10.10 amd64) I get "Could not synchronize notes. Check the details below and try again." Of course, the problem is that there are no details, and trying again doesn't help. So I ran tomboy --debug and compared my error to anything I could find about similar problems (such as the post here), but found nothing useful. Anyway, here's my first error, which I got using Preferences > Synchronization > Ubuntu One:

      [ERROR 21:08:42.271] Synchronization failed with the following exception: String was not recognized as a valid DateTime.
      at System.DateTime.Parse (System.String s, IFormatProvider provider, DateTimeStyles styles) [0x00000] in <filename unknown>:0
      at System.DateTime.Parse (System.String s, IFormatProvider provider) [0x00000] in <filename unknown>:0
      at System.DateTime.Parse (System.String s) [0x00000] in <filename unknown>:0
      at Tomboy.WebSync.Api.NoteInfo.ParseJson (Hyena.Json.JsonObject jsonObj) [0x00000] in <filename unknown>:0
      at Tomboy.WebSync.Api.UserInfo.ParseJsonNoteArray (Hyena.Json.JsonArray jsonArray) [0x00000] in <filename unknown>:0
      at Tomboy.WebSync.Api.UserInfo.ParseJsonNotes (System.String jsonString, System.Nullable`1& latestSyncRevision) [0x00000] in <filename unknown>:0
      at Tomboy.WebSync.Api.UserInfo.GetNotes (Boolean includeContent, Int32 sinceRevision, System.Nullable`1& latestSyncRevision) [0x00000] in <filename unknown>:0
      at Tomboy.WebSync.WebSyncServer.GetNoteUpdatesSince (Int32 revision) [0x00000] in <filename unknown>:0
      at Tomboy.Sync.SyncManager.SynchronizationThread () [0x00000] in <filename unknown>:0

    The next thing I tried was Preferences > Synchronization > Tomboy Web with the default 'http://one.ubuntu.com/notes/' and got the same DateTime error and stack trace as above, plus one more:

      [ERROR 21:12:31.949] System.ObjectDisposedException: The object was used after being disposed.
      at System.Net.HttpListener.CheckDisposed () [0x00000] in <filename unknown>:0
      at System.Net.HttpListener.EndGetContext (IAsyncResult asyncResult) [0x00000] in <filename unknown>:0
      at Tomboy.WebSync.WebSyncPreferencesWidget.<OnAuthButtonClicked>m__1 (IAsyncResult localResult) [0x00000] in <filename unknown>:0

    I have also tried removing and then re-adding my computer from my Ubuntu One account, but that did not help either.
    The only other thing I have noticed is that under System > Preferences > Ubuntu One > Services, "Notes" is not listed as a service. I don't know if this is normal or not. Thanks for any help, and please let me know if anything is confusing.
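    A hedged debugging sketch for anyone in the same spot (this assumes Tomboy's GConf layout on 10.10; key names may differ between versions):

      # Recursively dump everything Tomboy keeps in GConf, including the
      # sync section with the server URL and revision it remembers:
      gconftool-2 -R /apps/tomboy | less

      # Re-run the failing sync with debug output saved for comparison:
      tomboy --debug 2>&1 | tee ~/tomboy-sync.log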

    Read the article

  • apt-get broken + dependency issues + many packages uninstalled

    - by vnc786
    OS: Ubuntu 12.04 64-bit, 3.2.0-29-generic. What did I do? I ran apt-get purge libre*, which I thought would remove LO 3.5, but that was a big mistake. After a couple of moments I realised it was removing all the other software too, so I pressed Ctrl-C, which stopped the apt process. I restarted my system, and after that I found the following software was missing: apt-get, evince (pdf), cheese, etc. Here is the full list: http://pastebin.com/CWHrw10y
    I managed to install the apt deb file through dpkg, but now I am not able to do any installation, so I just removed /var/lib/dpkg/status and created a new one, but that didn't help:

      apt-get -f install
      Reading package lists... Done
      Building dependency tree
      Reading state information... Done
      Correcting dependencies... Done
      The following extra packages will be installed:
        apt-utils coreutils debconf debconf-i18n dpkg libacl1 libapt-inst1.4 libapt-pkg4.12 libattr1 libbz2-1.0 libdb5.1 libgcc1 liblocale-gettext-perl liblzma5 libselinux1 libstdc++6 libtext-charwidth-perl libtext-iconv-perl libtext-wrapi18n-perl perl-base tar tzdata xz-utils zlib1g
      Suggested packages:
        debconf-doc debconf-utils whiptail dialog gnome-utils libterm-readline-gnu-perl libgtk2-perl libnet-ldap-perl libqtgui4-perl libqtcore4-perl apt bzip2 ncompress xz-lzma
      The following NEW packages will be installed:
        apt-utils coreutils debconf debconf-i18n dpkg libacl1 libapt-inst1.4 libapt-pkg4.12 libattr1 libbz2-1.0 libdb5.1 libgcc1 liblocale-gettext-perl liblzma5 libselinux1 libstdc++6 libtext-charwidth-perl libtext-iconv-perl libtext-wrapi18n-perl perl-base tar tzdata xz-utils zlib1g
      0 upgraded, 24 newly installed, 0 to remove and 0 not upgraded.
      2 not fully installed or removed.
      Need to get 0 B/9,246 kB of archives.
      After this operation, 29.9 MB of additional disk space will be used.
      Do you want to continue [Y/n]? y
      E: Cannot get debconf version. Is debconf installed?
      debconf: apt-extracttemplates failed: No such file or directory
      dpkg: regarding .../libgcc1_1%3a4.6.3-1ubuntu5_amd64.deb containing libgcc1, pre-dependency problem:
       libgcc1 pre-depends on multiarch-support
       multiarch-support is unpacked, but has never been configured.
      dpkg: error processing /var/cache/apt/archives/libgcc1_1%3a4.6.3-1ubuntu5_amd64.deb (--unpack):
       pre-dependency problem - not installing libgcc1
      No apport report written because MaxReports is reached already
      Errors were encountered while processing:
       /var/cache/apt/archives/libgcc1_1%3a4.6.3-1ubuntu5_amd64.deb
      E: Internal Error, No file name for libc6
      W: Could not perform immediate configuration on 'multiarch-support:amd64'. Please see man 5 apt.conf under APT::Immediate-Configure for details. (2)
      E: Sub-process /usr/bin/dpkg returned an error code (1)

      # apt-get -u dist-upgrade
      Reading package lists... Done
      Building dependency tree
      Reading state information... Done
      You might want to run 'apt-get -f install' to correct these.
      The following packages have unmet dependencies:
       libc6 : Depends: libgcc1 but it is not installed
               Depends: tzdata but it is not installed
      E: Unmet dependencies. Try using -f.

    I have tried the links below, but no help. Thanks.
    Unable to install due to debconf problem
    How do I resolve unmet dependencies?
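    Deleting /var/lib/dpkg/status is usually the real blocker in this situation. A hedged recovery sketch, assuming the standard Debian/Ubuntu backups still exist on the system:

      # dpkg keeps a previous copy of its database next to the live one:
      ls -l /var/lib/dpkg/status-old
      # ...and Ubuntu also rotates daily snapshots into /var/backups:
      ls -l /var/backups/dpkg.status.*

      # Restore the newest intact copy over the emptied status file:
      sudo cp /var/lib/dpkg/status-old /var/lib/dpkg/status

      # Then let dpkg and apt finish what was interrupted:
      sudo dpkg --configure -a
      sudo apt-get -f install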

    Read the article

  • How to get my IR remote to work? Lirc can't see it

    - by user1234567
    I'm using Ubuntu 11.10 (amd64) and I'm trying to get my infrared remote control working. The IR device is part of a DVB-T USB stick (based on an RTL2832u chip). I'm using these drivers; they're the only way of getting this device to work under 11.10 that I found, and a big improvement on the previous Ubuntu version, where I had to edit the driver's code. The device works quite well, and the IR part of it works too. The driver's page says that the code is in alpha stage, but I'm pretty sure that my issue has nothing to do with that.
    If, and only if, the driver's module is loaded with the parameter rtl2832u_rc_mode=2 (which means "use NEC protocol for IR"), the remote kind of works. I can see this by running cat /dev/.. ../input6: when I press a button, random letters appear. The remote works just like a keyboard, but the keys are totally messed up; when I press '5' the volume goes down, etc.
    I would like to use Lirc to fix that, but Lirc can't detect my device (i.e. irw shows nothing). I suspect something gets control of the device and sets it up as a keyboard. Lirc seems to be working, and its KDE settings module works too, but it just doesn't detect the device. The Lirc page describes this issue, but only as of 2009, the last year that page was updated; since then Ubuntu has moved from HAL (described there) to DeviceKit, rendering the provided instructions useless. I had a similar issue with my previous remote, but the keys were not messed up so much; the remote was usable, so I gave up trying to get Lirc working. I tried the answer provided here, but it changed nothing.
    I also tried forcing lircd to use my device, but this didn't work either:

      for i in /sys/class/input/input* ; do echo -n "$(basename "$i"): "; cat "$i/name"; done

    shows

      input0: Power Button
      input1: Power Button
      input2: Logitech Logitech USB Keyboard
      input3: A4Tech PS/2+USB Mouse
      input6: IR-receiver inside an USB DVB receiver

    But when I run lircd -n --device=name='IR*' as root (also tried with the full name), I always see:

      lircd-0.9.0[3983]: lircd(default) ready, using /var/run/lirc/lircd
      lircd-0.9.0[3983]: accepted new client on /var/run/lirc/lircd
      lircd-0.9.0[3983]: could not get file information for name=IR*
      lircd-0.9.0[3983]: default_init(): No such file or directory
      lircd-0.9.0[3983]: Failed to initialize hardware

    So, how do I set up Lirc with the devinput driver in such a case?
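    A hedged sketch of pointing lircd's devinput driver straight at the event node instead of matching by name (the event number here is an assumption; read the real one off your own /sys listing):

      # Find which event node belongs to input6 (look for an eventN entry):
      ls /sys/class/input/input6

      # Point lircd at that node explicitly, using the devinput driver:
      sudo lircd --driver=devinput --device=/dev/input/event5

      # In another terminal, check whether key presses now arrive:
      irw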

    Read the article

  • Why does aptitude want to remove a bunch of files?

    - by Mediterran81
    Recently I encountered dependency resolution issues when using aptitude (it is my favorite). Nevertheless, I've started to feel that aptitude does not behave as it is supposed to on 64-bit systems, while apt-get works fine. Can someone confirm that aptitude is buggy in Ubuntu 11.10 amd64?
    Edit: For example, when I tried to install ntfs-config using aptitude, it asked me to remove over 100 packages (skype, for example), while using apt-get worked fine:

      han@L502X:~$ sudo aptitude install ntfs-config
      [sudo] password for han:
      The following NEW packages will be installed:
        ntfs-3g{ab} ntfs-config
      0 packages upgraded, 2 newly installed, 0 to remove and 0 not upgraded.
      Need to get 0 B/640 kB of archives. After unpacking 2,466 kB will be used.
      The following packages have unmet dependencies:
        ntfs-3g: Conflicts: ntfsprogs but 2.0.0-1ubuntu4 is installed.
      The following actions will resolve these dependencies:
      Remove the following packages:
      1) flashplugin-downloader 2) flashplugin-installer 3) libasound2 4) libasound2-plugins 5) libasyncns0 6) libatk1.0-0 7) libaudio2 8) libavahi-client3 9) libavahi-common3 10) libc6 11) libcairo2 12) libcomerr2 13) libcups2 14) libcurl3 15) libdatrie1 16) libdb5.1 17) libdbus-1-3 18) libdbusmenu-qt2 19) libexpat1 20) libffi6 21) libflac8 22) libfontconfig1 23) libfreetype6 24) libgcc1 25) libgcrypt11 26) libgdk-pixbuf2.0-0 27) libglib2.0-0 28) libgnutls26 29) libgpg-error0 30) libgssapi-krb5-2 31) libgtk2.0-0 32) libice6 33) libidn11 34) libjack-jackd2-0 35) libjasper1 36) libjpeg62 37) libjson0 38) libk5crypto3 39) libkeyutils1 40) libkrb5-3 41) libkrb5support0 42) liblcms1 43) libldap-2.4-2 44) libmng1 45) libnspr4 46) libnspr4-0d 47) libnss3 48) libnss3-1d 49) libogg0 50) libpango1.0-0 51) libpcre3 52) libpixman-1-0 53) libpng12-0 54) libpulse0 55) libqt4-dbus 56) libqt4-declarative 57) libqt4-network 58) libqt4-script 59) libqt4-sql 60) libqt4-xml 61) libqt4-xmlpatterns 62) libqtcore4 63) libqtgui4 64) librtmp0 65) libsamplerate0 66) libsasl2-2 67) libsasl2-modules 68) libselinux1 69) libsm6 70) libsndfile1 71) libspeexdsp1 72) libsqlite3-0 73) libssl1.0.0 74) libstdc++6 75) libtasn1-3 76) libthai0 77) libtiff4 78) libuuid1 79) libvorbis0a 80) libvorbisenc2 81) libwrap0 82) libx11-6 83) libxau6 84) libxcb-render0 85) libxcb-shm0 86) libxcb1 87) libxcomposite1 88) libxcursor1 89) libxdamage1 90) libxdmcp6 91) libxext6 92) libxfixes3 93) libxft2 94) libxi6 95) libxinerama1 96) libxrandr2 97) libxrender1 98) libxss1 99) libxt6 100) libxv1 101) nspluginviewer 102) nspluginwrapper 103) ntfsprogs 104) skype 105) sni-qt 106) zlib1g
      Leave the following dependencies unresolved:
      107) flashplugin-downloader recommends libasound2-plugins (>= 1.0.16)
      Accept this solution? [Y/n/q/?]
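    The conflict line in that output suggests a much narrower path than aptitude's mass-removal proposal. A hedged sketch, assuming nothing else on the system depends on ntfsprogs:

      # ntfs-3g conflicts with the older ntfsprogs, so clear it out first:
      sudo apt-get remove ntfsprogs

      # ...then the install no longer needs a drastic resolver solution:
      sudo apt-get install ntfs-config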

    Read the article

  • Failed 12.04 installation

    - by Rob Sayer
    I tried installing Ubuntu 12.04 today. Not an upgrade, a new installation. It didn't work. My computer specs:

      Computer: Compaq Presario CQ-104CA
      OS: Windows 7 Home 64-bit
      CPU: AMD V140
      BIOS: latest
      Graphics: AMD M880G with ATI Mobility Radeon HD 4250
      Wireless: Atheros AR9285
      Internal HD: SATA

    I wasn't connected to the internet at the time; I know of a number of people who have installed Ubuntu unconnected and just updated later. It seemed to go normally until I got to the part where I chose to install dual-boot Linux/Windows. Then the screen went black and the following text appeared (I left out the [OK]'s):

      checking battery state
      starting crash report submission daemon
      starting CPU interrupts balancing daemon
      stopping System V runlevel compatibility
      starting configure network device security
      stopping configure network device security
      stopping cold plug devices
      stopping log initial device creation
      starting enable remaining boot-time encrypted devices
      starting configure network device security
      starting save udev log and update rules
      stopping save udev log and update rules
      stopping enable remaining boot-time encrypted block devices
      checking for running unattended-upgrades
      acpid: exiting
      speech-dispatcher disabled; edit /etc/default/speech-dispatcher

    At this point the CD is ejected. Then nothing. If I press the Return key, it boots Windows. I don't think that's what's supposed to happen.
    Thinking the CD media or DVD drive may have been faulty, I downloaded the .iso again and made a bootable USB stick, as per your instructions. This time there was no cryptic crash screen; it just booted Windows. I can't find any log files it may have left. Thinking the amd64 version may have been the wrong one, I tried downloading the x86 version. Same thing, both from CD and USB drive. Note that I downloaded both files twice; I doubt it was a corrupted download.
    This is supposed to be a simple, transparent install. I went to the time and trouble of looking up my devices and drivers for Ubuntu beforehand, and was prepared to do some configuration, though I know someone who has the same wireless device and his worked right out of the box. But I spent over 3 hours trying to install it, with only the above to show for it.
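    Since a corrupted download is the cheapest thing to rule out, a hedged verification sketch (the ISO filename below is the standard 12.04 desktop image name; substitute whatever was actually downloaded):

      # Fetch the checksum list published alongside the images:
      wget http://releases.ubuntu.com/12.04/MD5SUMS

      # Checksum the local ISO and compare it with the published value:
      md5sum ubuntu-12.04-desktop-amd64.iso
      grep desktop-amd64 MD5SUMS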

    Read the article

  • How can I fix "E: Internal Error, No file name for libc6"

    - by SMAOUH
    Hello all. Please, I need your help to fix this problem: I have a system with two broken packages, and I can't reinstall them or run any other operation (update, upgrade, install, or remove apps). Ubuntu 12.04.3. I have not found any solutions; please help me.

      smaouh@Linux:~$ sudo apt-get install -f
      [sudo] password for smaouh:
      Reading package lists... Done
      Building dependency tree
      Reading state information... Done
      The following packages were automatically installed and are no longer required:
        libopenal1 libpam-winbind libao-common gnome-exe-thumbnailer libqca2-plugin-ossl gir1.2-champlain-0.12 libmagickcore4 libmagickwand4 libmagickcore4-extra libcapi20-3 python-unidecode libopenal-data liblqr-1-0 gir1.2-gtkchamplain-0.12 unixodbc wine-gecko2.21 libchamplain-0.12-0 python-glade2 imagemagick-common libosmesa6 oss-compat gimp-help-common esound-common gimp-help-en libmpg123-0 ttf-mscorefonts-installer imagemagick winbind libodbc1 fonts-droid fonts-unfonts-core libchamplain-gtk-0.12-0 libclutter-gtk-1.0-0 gir1.2-gtkclutter-1.0
      Use 'apt-get autoremove' to remove them.
      0 upgraded, 0 newly installed, 0 to remove and 386 not upgraded.
      4 not fully installed or removed.
      After this operation, 0B of additional disk space will be used.
      dpkg: error processing libc6 (--configure):
       libc6:amd64 2.15-0ubuntu10.5 cannot be configured because libc6:i386 is in a different version (2.15-0ubuntu10.4)
      dpkg: dependency problems prevent configuration of libc-dev-bin:
       libc-dev-bin depends on libc6 (>> 2.15); however:
        Package libc6 is not configured yet.
       libc-dev-bin depends on libc6 (<< 2.16); however:
        Package libc6 is not configured yet.
      dpkg: error processing libc-dev-bin (--configure):
       dependency problems - leaving unconfigured
      dpkg: dependency problems prevent configuration of libc6-dev:
       libc6-dev depends on libc6 (= 2.15-0ubuntu10.5); however:
        Package libc6 is not configured yet.
       libc6-dev depends on libc-dev-bin (= 2.15-0ubuntu10.5); however:
        Package libc-dev-bin is not configured yet.
      dpkg: error processing libc6-dev (--configure):
       dependency problems - leaving unconfigured
      dpkg: dependency problems prevent configuration of libc6-i386:
       libc6-i386 depends on libc6 (= 2.15-0ubuntu10.5); however:
        Package libc6 is not configured yet.
      dpkg: error processing libc6-i386 (--configure):
       dependency problems - leaving unconfigured
      No apport report written because the error message indicates its a followup error from a previous failure.
      No apport report written because the error message indicates its a followup error from a previous failure.
      No apport report written because MaxReports is reached already
      Errors were encountered while processing:
       libc6 libc-dev-bin libc6-dev libc6-i386
      E: Sub-process /usr/bin/dpkg returned an error code (1)
      smaouh@Linux:~$
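    The key line is the multiarch version skew: libc6:amd64 is at 2.15-0ubuntu10.5 while libc6:i386 is still at 2.15-0ubuntu10.4, and dpkg refuses to configure either until they match. A hedged sketch of bringing the i386 half up to the same version (if apt-get download on this release doesn't accept the :i386 qualifier, the .deb can be fetched from packages.ubuntu.com instead):

      # Fetch the i386 build of the exact version the amd64 side expects:
      apt-get download libc6:i386=2.15-0ubuntu10.5

      # Install it alongside the already-unpacked amd64 package, then let
      # dpkg finish configuring everything that was left half-done:
      sudo dpkg -i libc6_2.15-0ubuntu10.5_i386.deb
      sudo dpkg --configure -a
      sudo apt-get -f install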

    Read the article
