Search Results

Search found 6497 results on 260 pages for 'minimum spanning tree'.


  • Installing Eclipse for OSB Development

    - by James Taylor
    OSB provides two methods for OSB development: the OSB console and Eclipse. This post deals with a typical development environment with OSB installed on a remote server and the developer requiring an IDE on their PC for development. As of 11.1.1.4, Eclipse is the only IDE supported for OSB development; we are hoping OSB will support JDeveloper in the future. To get the download for Eclipse, download WebLogic Server with the Oracle Enterprise Pack for Eclipse, e.g. wls1034_oepe111161_win32.exe. To ensure the Eclipse version is compatible with your OSB version, I recommend using the Eclipse that comes with the supported WLS server; e.g. for OSB 11.1.1.4 you would install WLS 10.3.4 + OEPE. The install is a two-step process: install the base Eclipse, then install the OSB plugins. In this example I'm using the 11.1.1.4 install for Windows; your versions may differ. You need to download two programs: WebLogic Server with the OEPE plugin for your OS, and the Oracle Service Bus, which is generally generic. Place these files in a directory of your choice.

    Start the executable. I create a new Oracle Home for this installation, as I don't want to impact my JDeveloper install or any other Oracle products installed on my machine. Ignore the support / email notifications. Choose a custom install, as we only want to install the minimum for Eclipse; if you really want, you can do a typical install and install everything. Deselect all products, then select the Oracle Enterprise Pack for Eclipse. This will select the minimum prerequisites required for the install. As I'm only going to use this home for OSB development, I deselect the JRockit JVM. Accept the locations for the installs. If running in a Windows environment you will be asked to start a Node Manager service; this is optional, and I have chosen to skip it. Select the user permissions you require; I have kept the defaults. Do a last check to see if the values are correct and continue the install. The install should start and complete successfully. I chose not to run the Quick Start.

    Extract the OSB download to a location of your choice and double-click on setup.exe. You may be asked to supply a correct Java location; point this to the Java installed in your OS. I'm running Windows 7, so I used the 64-bit version. Skip the software updates. Set the OSB home to the location of the WLS home installed above. Choose a custom install, as all we want to install is the OSB Eclipse plugins. Select OSB IDE. For the rest of the install screens accept the defaults, then start the install. There is no need to configure a WLS domain if you only intend to deploy to the remote server; if you need to do this, other sites describe how to configure one via the configuration wizard.

    Start Eclipse to make sure the OSB plugin has been installed: in the top-right drop-down you should see OSB as an option. To connect to the remote server, select the Servers tab at the bottom, right-click in that frame and select Server. Choose the remote server version and the hostname, provide a name for your server if necessary, and accept the defaults. Enter the connection details for the remote server. Click on the remote server and it should validate, stating its status. Now you're ready to develop. Happy developing!

    Read the article

  • Automount of external hard disk

    - by moose
    I have an Intenso 6002560 1TB Memory Station, an external hard disk that connects via a Y-USB cable. When I connect both USB ends to my notebook, it gets recognized by my Ubuntu 10.04.4 LTS system:

    moose@pc07:~$ lsusb
    [...]
    Bus 002 Device 005: ID 13fd:1840 Initio Corporation
    [...]

    and

    Disk /dev/sda: 320.1 GB, 320072933376 bytes
    255 heads, 63 sectors/track, 38913 cylinders
    Units = cylinders of 16065 * 512 = 8225280 bytes
    Sector size (logical/physical): 512 bytes / 512 bytes
    I/O size (minimum/optimal): 512 bytes / 512 bytes
    Disk identifier: 0x00065e10

    Device Boot Start End Blocks Id System
    /dev/sda1 * 1 37810 303704064 83 Linux
    /dev/sda2 37810 38914 8864769 5 Extended
    /dev/sda5 37810 38914 8864768 82 Linux swap / Solaris

    Disk /dev/sdc: 1000.2 GB, 1000204886016 bytes
    255 heads, 63 sectors/track, 121601 cylinders
    Units = cylinders of 16065 * 512 = 8225280 bytes
    Sector size (logical/physical): 512 bytes / 512 bytes
    I/O size (minimum/optimal): 512 bytes / 512 bytes
    Disk identifier: 0x0d6ea32a

    Device Boot Start End Blocks Id System
    /dev/sdc1 1 121601 976759008+ c W95 FAT32 (LBA)

    But it did not get mounted:

    moose@pc07:/dev$ mount -l
    /dev/sda1 on / type ext4 (rw,errors=remount-ro,user_xattr)
    proc on /proc type proc (rw,noexec,nosuid,nodev)
    none on /sys type sysfs (rw,noexec,nosuid,nodev)
    none on /sys/fs/fuse/connections type fusectl (rw)
    none on /sys/kernel/debug type debugfs (rw)
    none on /sys/kernel/security type securityfs (rw)
    none on /dev type devtmpfs (rw,mode=0755)
    none on /dev/pts type devpts (rw,noexec,nosuid,gid=5,mode=0620)
    none on /dev/shm type tmpfs (rw,nosuid,nodev)
    none on /var/run type tmpfs (rw,nosuid,mode=0755)
    none on /var/lock type tmpfs (rw,noexec,nosuid,nodev)
    none on /lib/init/rw type tmpfs (rw,nosuid,mode=0755)
    binfmt_misc on /proc/sys/fs/binfmt_misc type binfmt_misc (rw,noexec,nosuid,nodev)
    gvfs-fuse-daemon on /home/moose/.gvfs type fuse.gvfs-fuse-daemon (rw,nosuid,nodev,user=moose)

    However, I could mount it manually with mount -t vfat /dev/sdc1 /mnt/sdc1, as you can see here:

    moose@pc07:~$ mount -l
    [... same output as above, plus:]
    /dev/sdc1 on /mnt/sdc1 type vfat (rw)

    Edit: another command:

    moose@pc07:~$ sudo blkid -o list
    device     fs_type  label  mount point  UUID
    ----------------------------------------------------------------------
    /dev/sda1  ext4            /            45eb611b-517e-425b-8057-0391726cccd5
    /dev/sda5  swap            <swap>       e9dc42f3-594c-4b62-874a-305eda5eed41

    moose@pc07:~$ blkid -o list
    device     fs_type  label  mount point  UUID
    ----------------------------------------------------------------------
    /dev/sda1  ext4            /            45eb611b-517e-425b-8057-0391726cccd5
    /dev/sda5  swap            <swap>       e9dc42f3-594c-4b62-874a-305eda5eed41
    /dev/sdc1                  /mnt/sdc1

    Edit: another command:

    moose@pc07:~$ ls -l /dev/disk/by-uuid/
    total 0
    lrwxrwxrwx 1 root root 10 2012-09-30 09:31 45eb611b-517e-425b-8057-0391726cccd5 -> ../../sda1
    lrwxrwxrwx 1 root root 10 2012-09-30 09:31 e9dc42f3-594c-4b62-874a-305eda5eed41 -> ../../sda5

    Here is a link to a Launchpad question about this problem. But I would like it to mount automatically. What do I have to do?
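    One low-tech way to get an automatic mount at boot is an /etc/fstab entry. This is only a sketch under an assumption the question doesn't confirm: since blkid reports no UUID for /dev/sdc1, it mounts by device name, which can change if other disks are attached.

        # hypothetical /etc/fstab line -- device name and mount point taken from the question
        /dev/sdc1  /mnt/sdc1  vfat  auto,user,rw,utf8  0  0

    After adding the line, sudo mount -a should pick it up without a reboot. A label-based entry (LABEL=... after labeling the filesystem with dosfslabel) or a udev rule would survive device renaming.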

    Read the article

  • Laptop monitor stopped working and can't be re-enabled on a Dell Latitude E6410

    - by xektrum
    I'm using Ubuntu 12.04 (upgraded from 11.10). Everything seemed to work fine until today, when my laptop monitor suddenly stopped working. Here are the facts:

    My laptop is a Dell Latitude E6410 with Intel graphics. The external monitor is attached through a docking station. Everything worked fine for about 6-7 months, then I upgraded to 12.04. The issue started today, after a week on the upgrade. I think it started after I ran Counter-Strike 1.6: both monitors blinked, and then only the external monitor connected to the docking station continued to work. I thought at first that was a transient issue, but I've rebooted and removed the battery, and the same thing happens. The laptop monitor and external monitor work fine up to the login screen, but after I log in the laptop monitor goes black. Whenever I try to re-enable the laptop monitor from the Display Manager I get errors:

    The selected configuration for displays could not be applied
    could not set the configuration for CRTC 63

    Not sure what technical details are required, but here are some:

    $ xrandr
    Screen 0: minimum 320 x 200, current 3120 x 1050, maximum 8192 x 8192
    eDP1 connected (normal left inverted right x axis y axis) 1440x900 60.0 + 40.0
    VGA1 disconnected (normal left inverted right x axis y axis)
    HDMI1 connected 1680x1050+0+0 (normal left inverted right x axis y axis) 474mm x 296mm
    1680x1050 60.0*+ 1280x1024 75.0 60.0 1152x864 75.0 1024x768 75.1 60.0 800x600 75.0 60.3 640x480 75.0 60.0 720x400 70.1
    DP1 disconnected (normal left inverted right x axis y axis)
    HDMI2 disconnected (normal left inverted right x axis y axis)
    DP2 disconnected (normal left inverted right x axis y axis)

    $ tail /var/log/Xorg.0.log
    [ 8367.132] (WW) intel(0): flip queue failed: Device or resource busy
    [ 8367.132] (WW) intel(0): Page flip failed: Device or resource busy
    (the same pair of warnings repeats at 8367.174 and 8367.265)

    I'm using gnome-shell, and the only ways I've been able to get both displays working have been: 1) booting with the laptop disconnected from the dock and then re-attaching the external monitor with VGA instead of DVI, but that only worked for one session; 2) removing xserver-xorg-video-intel, but then gnome-shell is gone, as well as DRI. I would appreciate any suggestions. Regards.

    ============================= WORKAROUND FOUND =============================

    So I have tried a few things and here is what worked: I installed a newer version of xserver-xorg-video-intel (2.19 vs 2.17) from ppa:xorg-edgers/ppa. It didn't work at first, it was only showing low-graphics mode, so I tried a different linux-image, 3.0.0-19-generic-pae instead of 3.2.0-24-generic-pae (which I believe is the 12.04 Precise default), and then everything started to work again. Now I've installed 3.4.0-1-generic-pae from the same PPA and everything runs flawlessly, so I believe the issue is with either linux-image 3.2.0-24-generic-pae or xserver-xorg-video-intel 2.17. Hope this helps someone in the future.

    PS: Now xrandr shows multiple modes for my laptop monitor:

    $ xrandr
    Screen 0: minimum 320 x 200, current 3120 x 1050, maximum 8192 x 8192
    eDP1 connected 1440x900+1680+0 (normal left inverted right x axis y axis) 303mm x 189mm
    1440x900 60.0*+ 59.9 40.0 1360x768 59.8 60.0 1152x864 60.0 1024x768 60.0 800x600 60.3 56.2 640x480 59.9
    VGA1 disconnected (normal left inverted right x axis y axis)
    HDMI1 connected 1680x1050+0+0 (normal left inverted right x axis y axis) 474mm x 296mm
    1680x1050 60.0*+ 1280x1024 75.0 60.0 1152x864 75.0 1024x768 75.1 60.0 800x600 75.0 60.3 640x480 75.0 60.0 720x400 70.1
    DP1 disconnected (normal left inverted right x axis y axis)
    HDMI2 disconnected (normal left inverted right x axis y axis)
    DP2 disconnected (normal left inverted right x axis y axis)
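    For anyone hitting the "could not set the configuration for CRTC" error before resorting to driver changes, a hedged first experiment is to ask xrandr to light both outputs explicitly; the output names and modes below are the ones from the question's xrandr dump, but they vary per machine:

        # assumption: eDP1 is the laptop panel and HDMI1 the dock monitor, as above
        xrandr --output HDMI1 --mode 1680x1050 --pos 0x0 \
               --output eDP1  --mode 1440x900  --pos 1680x0

    If the driver refuses the combined layout (some CRTC pairs cannot be driven at once), the same command with --off on one output at least shows which single output works.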

    Read the article

  • How to get bearable 2D and 3D performance on AMD Radeon HD 6950?

    - by l0b0
    I have had an AMD Radeon HD 6950 (i.e., Cayman series) for a couple of years now, and I have tried a lot of combinations of drivers and settings, with terrible results. I'm completely at a loss as to how to proceed. The open-source driver has much better 2D performance, but it offloads all OpenGL rendering to the CPU.

    What I've tried so far: all the latest stable Ubuntu releases in the period, plus one Linux Mint release; all the latest stable AMD Catalyst proprietary display drivers, currently 13.1; the unofficial wiki installation instructions for every Ubuntu version and the semi-official Ubuntu instructions; all the tips and tweaks I could find for Minecraft (Optifine, reducing settings to minimum), VLC (postprocessing at minimum, rendering at native video size), Catalyst Control Center (flipped every lever in there) and X11 (some binary toggles I can no longer remember).

    Results: typically 13-15 FPS in Minecraft, 30 max (100+ in Windows with the same driver version); around 10 FPS in Team Fortress 2 using the official Steam client; choppy video playback, in Flash and with VLC; CPU use goes through the roof when rendering video (150% for 1080p on YouTube in Chromium, 100% for 1080p H264 in VLC); glxgears shows 12.5 FPS when maximized; fgl_glxgears shows 10 FPS when maximized.

    Hardware details from lshw: motherboard ASUS P6X58D-E; CPU Intel Core i7 950 @ 3.07GHz (never overclocked; 64-bit); 6 GB RAM; video card product "Cayman PRO [Radeon HD 6950]", vendor "Hynix Semiconductor (Hyundai Electronics)"; 2 x 1920x1200 monitors, both connected with HDMI.

    I feel I must be missing something absolutely fundamental here. Is there no accelerated support for anything on 64-bit architectures? Does a dual-monitor setup completely mess up the driver?

    $ fglrxinfo
    display: :0 screen: 0
    OpenGL vendor string: Advanced Micro Devices, Inc.
    OpenGL renderer string: AMD Radeon HD 6900 Series
    OpenGL version string: 4.2.11995 Compatibility Profile Context

    $ glxinfo | grep 'direct rendering'
    direct rendering: Yes

    I am currently using the open-source driver, with the following results: full frame rate and low CPU load when playing 1080p video; black screen (but music in the background) in Team Fortress 2; similar performance in Minecraft as with the Catalyst driver (in hindsight obvious, since both end up offloading the rendering to the CPU).

    My /var/log/Xorg.0.log after upgrading to AMD Catalyst 13.1 contains some possibly important lines:

    (WW) Falling back to old probe method for fglrx
    (WW) fglrx: No matching Device section for instance (BusID PCI:0@3:0:1) found

    In the generated xorg.conf, the disabled "monitor" 0-DFP9 is actually an A/V receiver, which sometimes confuses the monitor drivers when turned on/off (but not in Windows). All three "monitor" devices are connected with HDMI.

    Edit: Chris Carter's suggestion to use the xorg-edgers PPA (Catalyst 13.1) resulted in some improvement, but still pretty bad performance overall: Minecraft stabilizes at 13-17 FPS, but at least the CPU load is "only" at 45-60%; still 150% CPU use for 1080p video rendering on YouTube in Chromium; massive improvement for 1080p H264 in VLC: 40-50% CPU use and no visible jitter; glxgears performance about doubled, to 25-30 FPS when maximized; fgl_glxgears still at ~10 FPS when maximized.
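    For reference, adding the xorg-edgers PPA mentioned in the edit is normally done like this (assuming a 12.04-era Ubuntu; the PPA name is the one from the question):

        sudo add-apt-repository ppa:xorg-edgers/ppa
        sudo apt-get update
        sudo apt-get dist-upgrade   # pulls in the newer X stack, including the video driver

    If the newer stack makes things worse, ppa-purge (from the ppa-purge package) reverts to stock packages: sudo ppa-purge ppa:xorg-edgers/ppa.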

    Read the article

  • Exalytics and Oracle Business Intelligence Enterprise Edition (OBIEE) Partner Workshop

    - by mseika
    Workshop Description

    Oracle Fusion Middleware 11g is the #1 application infrastructure foundation. It enables enterprises to create and run agile and intelligent business applications and maximize IT efficiency by exploiting modern hardware and software architectures. Oracle Exalytics Business Intelligence Machine is the world's first engineered system specifically designed to deliver high-performance analysis, modeling and planning. Built using industry-standard hardware, market-leading business intelligence software and in-memory database technology, Oracle Exalytics is an optimized system that delivers unmatched speed, visualizations and scalability for Business Intelligence and Enterprise Performance Management applications.

    This FREE hands-on partner workshop highlights both the hardware and software components that are engineered to work together to deliver Oracle Exalytics: an optimized version of the industry-leading Oracle TimesTen In-Memory Database with analytic extensions, a highly scalable Oracle server designed specifically for in-memory business intelligence, and Oracle's proven Business Intelligence Foundation with enhanced visualization capabilities and performance optimizations. This workshop will provide hands-on experience with Oracle's latest engineered system. Topics covered will include the TimesTen In-Memory Database and the new Summary Advisor for Exalytics, the technical details (including mobile features) of the latest release of visualization enhancements for OBIEE, and technical updates on Essbase. After taking this course, you will be well prepared to architect, build, demo, and implement an end-to-end Exalytics solution. You will also be able to extend your current analytical and enterprise performance management application implementations with numerous Oracle technologies specifically enhanced to take advantage of the compute capacity and in-memory capabilities of Oracle Exalytics. If you are a BI or Data Warehouse architect, developer or consultant, you don't want to miss this 3-day workshop. Register now!

    Presentations: Exalytics Architectural Overview; Upgrade and Lifecycle Management; TimesTen for Exalytics; Summary Advisor Utility; Essbase and EPM System on Exalytics; Dashboard and Analysis Interactions; OBIEE 11.1.1.6 Features and Advanced Topics.

    Lab Outline: The labs showcase Oracle Exalytics core components and functionality and provide expertise in Oracle Business Intelligence 11.1.1.6 new features and updates from prior releases. The hands-on activities are based on an Oracle VirtualBox image with software and training samples pre-installed.
    Labs: Lab Environment Setup; Creating and Working with Oracle TimesTen In-Memory Database; Running the Summary Advisor Utility; Working with Exalytics Visualization Features (Dashboard and Analysis Interactions).

    Audience: Oracle Partners; BI and EPM application developers and implementers; system integrators and solution consultants; data warehouse developers; enterprise architects.

    Prerequisites: Experience and understanding of OBIEE 11g is required. Previous attendance of the Oracle Business Intelligence Foundation Suite Workshop or the BIEE 11g Introduction Workshop is highly recommended. A good understanding of data warehousing and data modeling for reporting and analysis purposes, and strong experience with database technologies, are preferred.

    Equipment Requirements: This workshop requires attendees to provide their own laptops for this class. Attendee laptops must meet the following minimum hardware/software requirements.

    Hardware: minimum 8 GB RAM; 60 GB free space (includes staging); USB 2.0 port (at least one available). It is strongly recommended that you bring a mouse; you will be working in a development environment and using the mouse heavily.

    Software: one of the following operating systems: a 64-bit Windows host/laptop OS, or a 64-bit host/laptop OS with a Windows VM (XP, Server, or Win 7, BIC2g, etc.); Internet Explorer 7.x/8.x or Firefox 3.5.x; a WinRAR or 7zip utility to unzip workshop files, downloadable from http://www.win-rar.com/download.html or http://www.7zip.com/; Oracle VirtualBox 4.0.2 or higher, downloadable from http://www.virtualbox.org/wiki/Downloads. CPU virtualization mode needs to be enabled; we will provide guidance on the day of the workshop. Attendees will be given a VirtualBox image containing a pre-installed Oracle Exalytics environment.

    Schedule: This workshop is 3 days (times vary by country!). 9:00am: sign-in and technical setup; 9:30am: workshop starts; 5:00pm: workshop ends.

    Oracle Exalytics and Business Intelligence (OBIEE) Workshop, December 11-13, 2012: Oracle BVP, Birmingham, UK. Register here. Questions? Send email to: [email protected] Oracle Platform Technologies Enablement Services

    Read the article

  • Grub Rescue Unknown Filesystem Error. Grub Corrupted or Filesystem?

    - by nightcrawler
    Now it has happened twice and I have been pulling my hair out... I installed Xubuntu on my external hard disk and have been using it for about 3 months. It has three partitions: one of 500 MB mounted at /boot, a second of 48 GB mounted at /, and the rest (out of 160 GB) is an NTFS partition used as normal external storage. That last partition supposedly acts as a buffer between Linux distributions and the Windows platform, in the sense that it provides a universal channel for data transfers. I have constantly used this external hard disk for data transfers between a Win7 laptop and Xubuntu (on this external HD) without any hassle.

    However, on one of my desktops where I have Ubuntu I attached this external drive (for the first time) and did some data transfers; all three partitions mounted properly. But then the same nasty thing occurred that had occurred before: when I (as usual) tried booting via this external HD (the one with Xubuntu, just used under Ubuntu), I got the error.

    Now I am totally devastated, because a similar thing happened about 6 months ago when I had Fedora 17 on my external HD (instead of Xubuntu), and after it was used under Ubuntu the same thing happened. I didn't report it because I had already planned to move to Debian instead of RPM! The mystery is that as long as I don't attach this external HD under Ubuntu, the data never corrupts, whereas under Win XP/7 I can use it as normal USB storage (of course, the Linux partitions aren't available under Windows platforms).

    By "corrupts" I mean the HD fails to boot with the error mentioned; I can't say whether the data within remains untouched. It seems that my GRUB and/or MBR is corrupted. Please guide me to solve this issue, and also explain why I can't attach and use Linux external HDs under the Linux platform.

    Disk /dev/sdc: 160.0 GB, 160041884672 bytes
    255 heads, 63 sectors/track, 19457 cylinders, total 312581806 sectors
    Units = sectors of 1 * 512 = 512 bytes
    Sector size (logical/physical): 512 bytes / 512 bytes
    I/O size (minimum/optimal): 512 bytes / 512 bytes
    Disk identifier: 0x0004e7d0

    Device Boot Start End Blocks Id System
    /dev/sdc1 * 2048 976895 487424 83 Linux
    /dev/sdc2 978942 96874495 47947777 5 Extended
    /dev/sdc3 96874496 312575999 107850752 7 HPFS/NTFS/exFAT
    /dev/sdc5 978944 94726143 46873600 83 Linux
    /dev/sdc6 94728192 96874495 1073152 82 Linux swap / Solaris

    I can recall for sure that I have seen a thread here where a similar problem occurred, and in response someone explained how to mount the (now invisible) partitions and recover the important data in them. I have misplaced that URL, so please point me there, because my important documents reside in the / partition.

    What I have already done: without success, I have tried this and related solutions. What I plan to do: I believe the filesystem has corrupted; would you recommend a solution like this, given that I can't recall whether my /boot (500 MB) partition was ext4 or ext2? I am sure that my / (48 GB) partition was ext4.

    UPDATE 1: Attached my external HD under Ubuntu and ran the following command as root: grub-install /dev/sdc, where /dev/sdc was my external HD containing the corrupted Xubuntu. It reported "all done!" I re-ran fdisk -l, but to my disappointment it reported:

    Disk /dev/sdc: 160.0 GB, 160041884672 bytes
    255 heads, 63 sectors/track, 19457 cylinders, total 312581806 sectors
    Units = sectors of 1 * 512 = 512 bytes
    Sector size (logical/physical): 512 bytes / 512 bytes
    I/O size (minimum/optimal): 512 bytes / 512 bytes
    Disk identifier: 0x1b6b9167

    Disk /dev/sdc doesn't contain a valid partition table

    ...and now I can't even access its NTFS partition (former /dev/sdc3). Please help!

    UPDATE 2: TestDisk (by cgsecurity) failed at finding any partition table :(

    TestDisk 6.13, Data Recovery Utility, November 2011
    Christophe GRENIER <[email protected]>
    http://www.cgsecurity.org
    Disk /dev/sdc - 160 GB / 149 GiB - CHS 19457 255 63
    Partition Start End Size in sectors
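    Since the pre-wipe fdisk output above records the exact sector layout, one hedged recovery sketch is to let TestDisk go beyond its quick scan; the menu path below follows TestDisk's standard interface:

        sudo testdisk /dev/sdc
        # [Proceed] -> [Intel] partition table type -> [Analyse] -> [Quick Search]
        # if nothing is found (as in UPDATE 2), choose [Deeper Search];
        # mark found partitions P (primary) / L (logical), then [Write] the table

    Failing that, the old start/end sectors listed in the question could in principle be retyped by hand with fdisk or sfdisk, since rewriting only the table does not touch the data blocks; that is worth attempting only on a byte-for-byte backup image (e.g. taken with ddrescue) of the disk.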

    Read the article

  • Need help partitioning when reinstalling Ubuntu 14.04

    - by Chris M.
    I upgraded to 14.04 about a month ago on my HP Mini netbook (about a 16 GB hard disk). A few days ago the system crashed (I don't know why, but I was using the internet at the time). When I restarted the computer, Ubuntu would not load. Instead, I got a message from the BIOS saying:

    Reboot and Select proper Boot device or Insert Boot Media in selected Boot device and press a key

    I took this to mean that I needed to reinstall 14.04. When I try to reinstall Ubuntu from the USB stick, I choose "Erase disk and install Ubuntu" but then I get a message:

    Some of the partitions you created are too small. Please make the following partitions at least this large: / 3.3 GB. If you do not go back to the partitioner and increase the size of these partitions, the installation may fail.

    At first I hit Continue to see if it would install anyway, and it gave the message:

    The attempt to mount a file system with type ext4 in SCSI1 (0,0,0), partition #1 (sda) at / failed. You may resume partitioning from the partitioning menu.

    The second time I hit Go Back, and it took me to the following partitioning table:

    Device Type Mount Point Format Size Used System
    /dev/sda
    /dev/sda1 ext4 (checked) 3228 MB Unknown
    /dev/sda5 swap (not checked) 1063 MB Unknown
    + - Change | New Partition Table... | Revert
    Device for boot loader installation: /dev/sda ATA JM Loader 001 (4.3 GB)

    At this point I'm not sure what to do. I've never partitioned my hard drive before and I don't want to screw things up (I'm not particularly tech-savvy). Can you instruct me on what I should do? (P.S. I'm afraid the table might not appear as I typed it in.)

    Results from fdisk:

    ubuntu@ubuntu:~$ sudo fdisk -l

    Disk /dev/sda: 4294 MB, 4294967296 bytes
    255 heads, 63 sectors/track, 522 cylinders, total 8388608 sectors
    Units = sectors of 1 * 512 = 512 bytes
    Sector size (logical/physical): 512 bytes / 512 bytes
    I/O size (minimum/optimal): 512 bytes / 512 bytes
    Disk identifier: 0x00000000

    Disk /dev/sda doesn't contain a valid partition table

    Disk /dev/sdb: 7860 MB, 7860125696 bytes
    155 heads, 31 sectors/track, 3194 cylinders, total 15351808 sectors
    Units = sectors of 1 * 512 = 512 bytes
    Sector size (logical/physical): 512 bytes / 512 bytes
    I/O size (minimum/optimal): 512 bytes / 512 bytes
    Disk identifier: 0x0009a565

    Device Boot Start End Blocks Id System
    /dev/sdb1 * 2768 15351807 7674520 b W95 FAT32

    ubuntu@ubuntu:~$

    Here is what it displays when I open the Disks utility (I tried the screenshot terminal command you suggested but it didn't seem to do anything):

    4.3 GB Hard Disk /dev/sda
    Model: JM Loader 001 (01000001)
    Size: 4.3 GB (4,294,967,296 bytes)
    Serial Number: 01234123412341234
    Assessment: SMART is not supported
    Volumes: Size: 4.3 GB (4,294,967,296 bytes); Device: /dev/sda; Contents: Unknown

    (There is a button in the utility that, when clicked, gives the options Format..., Create Disk Image..., Restore Disk Image..., and Benchmark, but SMART Data & Self-Tests... is dimmed out.)

    When I hit F9 Change Boot Device Order, it shows the hard drive as: SATA:PM-JM Loader 001. When I hit F10 to get into the BIOS Setup Utility, under Diagnostic it shows: "Primary Hard Disk Self Test Not Support".

    NetworkManager Tool
    State: disconnected
    Device: eth0
    Type: Wired
    Driver: atl1c
    State: unavailable
    Default: no
    HW Address: 00:26:55:B0:7F:0C
    Capabilities: Carrier Detect: yes
    Wired Properties
    Carrier: off

    When I run the command lshw -C network, I get:

    WARNING: you should run this program as super-user.
    *-network
    description: Network controller
    product: BCM4312 802.11b/g LP-PHY
    vendor: Broadcom Corporation
    physical id: 0
    bus info: pci@0000:01:00.0
    version: 01
    width: 64 bits
    clock: 33MHz
    capabilities: bus_master cap_list
    configuration: driver=b43-pci-bridge latency=0
    resources: irq:16 memory:feafc000-feafffff
    *-network
    description: Ethernet interface
    product: AR8132 Fast Ethernet
    vendor: Qualcomm Atheros
    physical id: 0
    bus info: pci@0000:02:00.0
    logical name: eth0
    version: c0
    serial: 00:26:55:b0:7f:0c
    capacity: 100Mbit/s
    width: 64 bits
    clock: 33MHz
    capabilities: bus_master cap_list ethernet physical tp 10bt 10bt-fd 100bt 100bt-fd autonegotiation
    configuration: autonegotiation=on broadcast=yes driver=atl1c driverversion=1.0.1.1-NAPI latency=0 link=no multicast=yes port=twisted pair
    resources: irq:43 memory:febc0000-febfffff ioport:ec80(size=128)
    WARNING: output may be incomplete or inaccurate, you should run this program as super-user.
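    A hedged sketch of the usual way out when an installer refuses a disk whose table is gone (as the "doesn't contain a valid partition table" line above suggests): write a fresh empty label and let "Erase disk and install Ubuntu" recreate the partitions. This assumes /dev/sda really is the internal 4.3 GB disk and that nothing on it needs saving:

        sudo parted /dev/sda mklabel msdos    # destroys the old (already unreadable) table
        sudo partprobe /dev/sda               # ask the kernel to re-read it

    After this the installer's automatic option should see one blank disk. Note that 4.3 GB is close to the installer's stated 3.3 GB minimum for /, so a separate swap partition may not fit comfortably.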

    Read the article

  • Grub2 : Windows 7 can't boot installing with Ubuntu 10.04 on different hard drive

    - by dellphi
    I use a dual boot with two hard disks and two OSes: Ubuntu 10.04 and Windows 7. Windows 7 is installed on the first disk, first partition. GRUB is installed in the second hard disk's MBR, and Ubuntu is installed on an extended partition on the second hard drive. When I select Windows 7 in the GRUB menu, the HDD lamp lights up briefly and then the monitor goes black, though the keyboard still functions. Until now (with the default boot from the first HDD), I have had to press F12 to get into GRUB to run Linux on the second HDD.

    ================ fdisk -l ================================

    dellph1@dellph1-desktop:~$ fdisk -l
    omitting empty partition (5)

    Disk /dev/sda: 1000.2 GB, 1000204886016 bytes
    255 heads, 63 sectors/track, 121601 cylinders
    Units = cylinders of 16065 * 512 = 8225280 bytes
    Sector size (logical/physical): 512 bytes / 512 bytes
    I/O size (minimum/optimal): 512 bytes / 512 bytes
    Disk identifier: 0x00087dec

    Device Boot Start End Blocks Id System
    /dev/sda1 * 1 23104 185582848+ 7 HPFS/NTFS
    /dev/sda2 23105 121601 791177122 5 Extended
    /dev/sda5 36107 74408 307660783+ 7 HPFS/NTFS
    /dev/sda6 74409 100081 206218341 7 HPFS/NTFS
    /dev/sda7 100082 121601 172859368+ 7 HPFS/NTFS

    Disk /dev/sdb: 160.0 GB, 160041885696 bytes
    255 heads, 63 sectors/track, 19457 cylinders
    Units = cylinders of 16065 * 512 = 8225280 bytes
    Sector size (logical/physical): 512 bytes / 512 bytes
    I/O size (minimum/optimal): 512 bytes / 512 bytes
    Disk identifier: 0x6d43dfb2

    Device Boot Start End Blocks Id System
    /dev/sdb1 1 10030 80560066 5 Extended
    /dev/sdb5 * 1 5560 44657601 83 Linux
    /dev/sdb6 5560 9387 30736384 83 Linux
    /dev/sdb7 9387 10030 5164032 82 Linux swap / Solaris

    ================= grub.cfg ==================

    # DO NOT EDIT THIS FILE
    # It is automatically generated by /usr/sbin/grub-mkconfig using templates from /etc/grub.d and settings from /etc/default/grub

    ### BEGIN /etc/grub.d/00_header ###
    if [ -s $prefix/grubenv ]; then
      load_env
    fi
    set default="0"
    if [ ${prev_saved_entry} ]; then
      set saved_entry=${prev_saved_entry}
      save_env saved_entry
      set prev_saved_entry=
      save_env prev_saved_entry
      set boot_once=true
    fi
    function savedefault {
      if [ -z ${boot_once} ]; then
        saved_entry=${chosen}
        save_env saved_entry
      fi
    }
    function recordfail {
      set recordfail=1
      if [ -n ${have_grubenv} ]; then if [ -z ${boot_once} ]; then save_env recordfail; fi; fi
    }
    insmod ext2
    set root='(hd1,5)'
    search --no-floppy --fs-uuid --set 2f014a3a-35f3-4d05-87aa-34ca677160b7
    if loadfont /usr/share/grub/unicode.pf2 ; then
      set gfxmode=1024x768
      insmod gfxterm
      insmod vbe
      if terminal_output gfxterm ; then true ; else
        # For backward compatibility with versions of terminal.mod that don't
        # understand terminal_output
        terminal gfxterm
      fi
    fi
    insmod ext2
    set root='(hd1,5)'
    search --no-floppy --fs-uuid --set 2f014a3a-35f3-4d05-87aa-34ca677160b7
    set locale_dir=($root)/boot/grub/locale
    set lang=en
    insmod gettext
    if [ ${recordfail} = 1 ]; then
      set timeout=-1
    else
      set timeout=5
    fi
    ### END /etc/grub.d/00_header ###

    ### BEGIN /etc/grub.d/05_debian_theme ###
    insmod ext2
    set root='(hd1,5)'
    search --no-floppy --fs-uuid --set 2f014a3a-35f3-4d05-87aa-34ca677160b7
    insmod jpeg
    if background_image /usr/share/backgrounds/CurlsbyCandy.jpg ; then
      set color_normal=white/black
      set color_highlight=black/light-gray
    else
      set menu_color_normal=white/black
      set menu_color_highlight=black/light-gray
    fi
    ### END /etc/grub.d/05_debian_theme ###

    ### BEGIN /etc/grub.d/10_linux ###
    menuentry 'Ubuntu, with Linux 2.6.32-24-generic' --class ubuntu --class gnu-linux --class gnu --class os {
      recordfail
      insmod ext2
      set root='(hd1,5)'
      search --no-floppy --fs-uuid --set 2f014a3a-35f3-4d05-87aa-34ca677160b7
      linux /boot/vmlinuz-2.6.32-24-generic root=UUID=2f014a3a-35f3-4d05-87aa-34ca677160b7 ro splash vga=795 quiet splash nomodeset video=uvesafb:mode_option=1280x1024-24,mtrr=3,scroll=ywrap
      initrd /boot/initrd.img-2.6.32-24-generic
    }
    menuentry 'Ubuntu, with Linux 2.6.32-24-generic (recovery mode)' --class ubuntu --class gnu-linux --class gnu --class os {
      recordfail
      insmod ext2
      set root='(hd1,5)'
      search --no-floppy --fs-uuid --set 2f014a3a-35f3-4d05-87aa-34ca677160b7
      echo 'Loading Linux 2.6.32-24-generic ...'
      linux /boot/vmlinuz-2.6.32-24-generic root=UUID=2f014a3a-35f3-4d05-87aa-34ca677160b7 ro single splash vga=795
      echo 'Loading initial ramdisk ...'
      initrd /boot/initrd.img-2.6.32-24-generic
    }
    ### END /etc/grub.d/10_linux ###

    ### BEGIN /etc/grub.d/30_os-prober ###
    menuentry "Windows 7 (loader) (on /dev/sda1)" {
      insmod ntfs
      set root='(hd0,1)'
      search --no-floppy --fs-uuid --set 5cac2139ac210f58
      chainloader +1
    }
    ### END /etc/grub.d/30_os-prober ###

    ### BEGIN /etc/grub.d/40_multisystem ###
    # Added by MultiSystem
    ### MULTISYSTEM MENU ###
    menuentry "PLoP Boot Manager" {
      linux16 /boot/plpbt
    }
    menuentry "Smart Boot Manager" {
      search --set -f /boot/sbootmgr.dsk
      linux16 /boot/memdisk
      initrd16 /boot/sbootmgr.dsk
    }
    ### END MULTISYSTEM MENU ###
    ### END /etc/grub.d/40_multisystem ###

    ================================================

    I want to keep GRUB on the second HDD. I have been using Startup-Manager, Boot Manager and Grub Customizer, and this problem is still unsolved. The easiest thing I could do is install GRUB on the first HDD, but I was curious, and maybe someone can help.
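    A commonly suggested workaround for exactly this symptom (Windows 7 on the first disk, GRUB on the second, black screen after choosing the Windows entry) is to remap the BIOS drive order inside the menu entry, since the Windows boot code assumes it boots from the first drive. A hedged sketch, to be placed in /etc/grub.d/40_custom and activated with sudo update-grub:

        menuentry "Windows 7 (drive remapped)" {
          insmod ntfs
          set root='(hd0,1)'
          search --no-floppy --fs-uuid --set 5cac2139ac210f58
          drivemap -s hd0 hd1
          chainloader +1
        }

    The UUID is the one os-prober already found above; whether drivemap is needed depends on how the BIOS enumerates the disks when booting from the second drive, so this is an experiment, not a guaranteed fix.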

    Read the article

  • Installer can't find partitions, but fdisk can find them

    - by pxd
    I'm installing Ubuntu 12.04. My system already had two systems installed: WinXP and Ubuntu 10.10. Now I want to update Ubuntu to 12.04, using a USB disk to install it. But the installer cannot find the partitions on my hard disk, while fdisk can find them. Can you help me? What should I do?

    ubuntu@ubuntu:~$ sudo lshw -short
    H/W path Device Class Description
    system HP 2230s (NN868PA#AB2)
    /0 bus 3037
    /0/9 memory 64KiB BIOS
    /0/0 processor Intel(R) Core(TM)2 Duo CPU T6570 @ 2.10GHz
    /0/0/1 memory 2MiB L2 cache
    /0/0/3 memory 32KiB L1 cache
    /0/0/0.1 processor Logical CPU
    /0/0/0.2 processor Logical CPU
    /0/2 memory 32KiB L1 cache
    /0/4 memory 2GiB System Memory
    /0/4/0 memory SODIMM [empty]
    /0/4/1 memory 2GiB SODIMM DDR2 Synchronous 800 MHz (1.2 ns)
    /0/100 bridge Mobile 4 Series Chipset Memory Controller Hub
    /0/100/2 display Mobile 4 Series Chipset Integrated Graphics Controller
    /0/100/2.1 display Mobile 4 Series Chipset Integrated Graphics Controller
    /0/100/1a bus 82801I (ICH9 Family) USB UHCI Controller #4
    /0/100/1a.1 bus 82801I (ICH9 Family) USB UHCI Controller #5
    /0/100/1a.2 bus 82801I (ICH9 Family) USB UHCI Controller #6
    /0/100/1a.7 bus 82801I (ICH9 Family) USB2 EHCI Controller #2
    /0/100/1b multimedia 82801I (ICH9 Family) HD Audio Controller
    /0/100/1c bridge 82801I (ICH9 Family) PCI Express Port 1
    /0/100/1c.1 bridge 82801I (ICH9 Family) PCI Express Port 2
    /0/100/1c.1/0 wlan1 network PRO/Wireless 5100 AGN [Shiloh] Network Connection
    /0/100/1c.2 bridge 82801I (ICH9 Family) PCI Express Port 3
    /0/100/1c.4 bridge 82801I (ICH9 Family) PCI Express Port 5
    /0/100/1c.5 bridge 82801I (ICH9 Family) PCI Express Port 6
    /0/100/1c.5/0 eth1 network 88E8072 PCI-E Gigabit Ethernet Controller
    /0/100/1d bus 82801I (ICH9 Family) USB UHCI Controller #1
    /0/100/1d.1 bus 82801I (ICH9 Family) USB UHCI Controller #2
    /0/100/1d.2 bus 82801I (ICH9 Family) USB UHCI Controller #3
    /0/100/1d.7 bus 82801I (ICH9 Family) USB2 EHCI Controller #1
    /0/100/1e bridge 82801 Mobile PCI Bridge
    /0/100/1f bridge ICH9M LPC Interface Controller
    /0/100/1f.2 scsi0 storage 82801IBM/IEM (ICH9M/ICH9M-E) 4 port SATA Controller [AHCI mode]
    /0/100/1f.2/0 /dev/sda disk 500GB WDC WD5000BEVT-0
    /0/100/1f.2/0/1 /dev/sda1 volume 48GiB Windows NTFS volume
    /0/100/1f.2/0/2 /dev/sda2 volume 416GiB Extended partition
    /0/100/1f.2/0/2/5 /dev/sda5 volume 97GiB HPFS/NTFS partition
    /0/100/1f.2/0/2/6 /dev/sda6 volume 198GiB HPFS/NTFS partition
    /0/100/1f.2/0/2/7 /dev/sda7 volume 27GiB Linux filesystem partition
    /0/100/1f.2/0/2/8 /dev/sda8 volume 93GiB Linux filesystem partition
    /0/100/1f.2/1 /dev/cdrom disk CDDVDW TS-L633M
    /0/1 scsi6 storage
    /0/1/0.0.0 /dev/sdb disk 15GB STORAGE DEVICE
    /0/1/0.0.0/0 /dev/sdb disk 15GB
    /0/1/0.0.0/0/1 /dev/sdb1 volume 14GiB Windows FAT volume
    /1 power HZ04037

    ubuntu@ubuntu:~$ sudo fdisk -l

    Disk /dev/sda: 500.1 GB, 500107862016 bytes
    255 heads, 63 sectors/track, 60801 cylinders, total 976773168 sectors
    Units = sectors of 1 * 512 = 512 bytes
    Sector size (logical/physical): 512 bytes / 512 bytes
    I/O size (minimum/optimal): 512 bytes / 512 bytes
    Disk identifier: 0x31263125

    Device Boot Start End Blocks Id System
    /dev/sda1 * 63 102277727 51138832+ 7 HPFS/NTFS/exFAT
    /dev/sda2 102277728 976784129 437253201 f W95 Ext'd (LBA)
    /dev/sda5 102277791 307078127 102400168+ 7 HPFS/NTFS/exFAT
    /dev/sda6 307078191 724141151 208531480+ 7 HPFS/NTFS/exFAT
    /dev/sda7 724142080 781459455 28658688 83 Linux
    /dev/sda8 781461504 976771071 97654784 83 Linux

    Disk /dev/sdb: 15.9 GB, 15931539456 bytes
    64 heads, 32 sectors/track, 15193 cylinders, total 31116288 sectors
    Units = sectors of 1 * 512 = 512 bytes
    Sector size (logical/physical): 512 bytes / 512 bytes
    I/O size (minimum/optimal): 512 bytes / 512 bytes
    Disk identifier: 0x0009eb92

    Device Boot Start End Blocks Id System
    /dev/sdb1 * 32 31115263 15557616 c W95 FAT32 (LBA)

    The Ubuntu 12.04 installer can't find the partitions on my hard disk; it only finds the device /dev/sda. (Sorry, I'm a new user, so I can't send an image.)
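    One cause that fits these exact symptoms (fdisk sees the table, but the installer sees only a bare disk) is leftover RAID metadata from a previous setup, which makes the installer treat the disk as a RAID member. A hedged check from the live session:

        sudo dmraid -r            # lists any RAID signatures found on the disks
        sudo dmraid -rE /dev/sda  # erases them -- only if the disk is truly not part of a RAID set

    If dmraid reports "no raid disks", this isn't the cause and the metadata should be left alone.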

    Read the article

  • Cannot establish maximum resolution on ASUS PB278Q

    - by dentuzhik
    I've recently bought a brand new ASUS PB278Q monitor. When I connect it to my laptop, everything works great, except that I can't get the monitor's native resolution (2560x1440) working; the automatically chosen resolution is 1920x1080. My graphics card is an Nvidia GeForce 320M. Here's the output from lspci for it:

    ~$ lspci | grep VGA
    02:00.0 VGA compatible controller: NVIDIA Corporation GT216M [GeForce GT 320M] (rev a2)

    and also xrandr:

    ~$ xrandr
    Screen 0: minimum 8 x 8, current 3286 x 1437, maximum 8192 x 8192
    VGA-0 disconnected (normal left inverted right x axis y axis)
    LVDS-0 connected primary 1366x768+0+669 (normal left inverted right x axis y axis) 344mm x 193mm
    1366x768 60.0*+
    HDMI-0 connected 1920x1080+1366+0 (normal left inverted right x axis y axis) 600mm x 340mm
    1920x1080 60.0*+ 59.9 50.0 30.0 25.0 24.0 60.0 50.0 1680x1050 60.0 1440x900 59.9 1280x1024 75.0 60.0 1280x960 60.0 1280x800 59.8 1280x720 60.0 59.9 50.0 1152x864 75.0 1024x768 75.0 70.1 60.0 800x600 75.0 72.2 60.3 56.2 720x576 50.0 720x480 59.9 640x480 75.0 59.9 59.9 480x576 50.0 480x480 59.9

    I have proprietary drivers installed on my machine. Here is the info about the monitor from nvidia-settings (I don't have enough reputation to post images, so here's the text):

    Chip Location: Internal
    Signal: TDMS
    Connection link: Single
    Native resolution: 2560x1440
    Refresh rate: 60.00 Hz

    The monitor is connected to the laptop via an HDMI cable, and honestly I have no idea what version it is, or what version the HDMI output of my graphics card is. I tried to find out how to figure this out on the web, but had no luck. Also, my video card has only VGA and HDMI outputs, so I can't test either a DVI-D cable or DisplayPort. So apparently there's some problem here, and at the least I want to know exactly what's going on. I've checked whether it is a Linux-specific problem, but Windows also gave me the same resolution by default.

    What I've already tried: connecting through VGA (a stupid one: of course it gave me 1920x1080); checking two HDMI cables (not sure if they're the same version or not, as mentioned above); playing around with xrandr and adding custom modes (didn't help); surfing for info a lot on the web, without appropriate results.

    Actually, xrandr gives me the following:

    ~$ cvt 2560 1440 60
    # 2560x1440 59.96 Hz (CVT 3.69M9) hsync: 89.52 kHz; pclk: 312.25 MHz
    Modeline "2560x1440_60.00" 312.25 2560 2752 3024 3488 1440 1443 1448 1493 -hsync +vsync

    ~$ xrandr --newmode "2560x1440_60.00" 312.25 2560 2752 3024 3488 1440 1443 1448 1493 -hsync +vsync

    ~$ xrandr
    Screen 0: minimum 8 x 8, current 3286 x 1437, maximum 8192 x 8192
    VGA-0 disconnected (normal left inverted right x axis y axis)
    LVDS-0 connected 1366x768+0+669 (normal left inverted right x axis y axis) 344mm x 193mm
    1366x768 60.0*+
    HDMI-0 connected primary 1920x1080+1366+0 (normal left inverted right x axis y axis) 600mm x 340mm
    1920x1080 60.0*+ 59.9 50.0 30.0 25.0 24.0 60.0 50.0 1680x1050 60.0 1440x900 59.9 1280x1024 75.0 60.0 1280x960 60.0 1280x800 59.8 1280x720 60.0 59.9 50.0 1152x864 75.0 1024x768 75.0 70.1 60.0 800x600 75.0 72.2 60.3 56.2 720x576 50.0 720x480 59.9 640x480 75.0 59.9 59.9 480x576 50.0 480x480 59.9
    2560x1440_60.00 (0x34f) 312.2MHz
    h: width 2560 start 2752 end 3024 total 3488 skew 0 clock 89.5KHz
    v: height 1440 start 1443 end 1448 total 1493 clock 60.0Hz

    ~$ xrandr --addmode HDMI-0 2560x1440_60.00
    X Error of failed request: BadMatch (invalid parameter attributes)
    Major opcode of failed request: 140 (RANDR)
    Minor opcode of failed request: 18 (RRAddOutputMode)
    Serial number of failed request: 29
    Current serial number in output stream: 30

    What I intend to do next: try another HDMI cable? Try an HDMI to DVI-D cable? An HDMI to DisplayPort cable? Another type of adapter? VGA to DVI-D? Buy another laptop with another graphics card? Damn. My ideas pretty much end here. Any ideas? Any explanations of why it isn't working are appreciated.
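    One hedged experiment worth noting: the failing mode above needs a 312 MHz pixel clock, which is beyond what a single-link TMDS path of that era typically carries, so a reduced-blanking CVT mode (same resolution, lower pixel clock) sometimes gets past the BadMatch:

        ~$ cvt -r 2560 1440 60
        # 2560x1440 59.95 Hz (CVT 3.69M9-R) hsync: 88.79 kHz; pclk: 241.50 MHz
        Modeline "2560x1440R" 241.50 2560 2608 2640 2720 1440 1443 1448 1481 +hsync -vsync
        ~$ xrandr --newmode "2560x1440R" 241.50 2560 2608 2640 2720 1440 1443 1448 1481 +hsync -vsync
        ~$ xrandr --addmode HDMI-0 2560x1440R
        ~$ xrandr --output HDMI-0 --mode 2560x1440R

    Whether the driver accepts it still depends on the GPU's single-link clock limit (the nvidia-settings panel above reports "Connection link: Single"), so this is a sketch, not a guaranteed fix.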

    Read the article

  • how to tackle this combinatorial algorithm problem

    - by Andrew Bullock
    I have N people who must each take T exams. Each exam takes "some" time, e.g. 30 min (no such thing as finishing early). Exams must be performed in front of an examiner. I need to schedule each person to take each exam in front of an examiner within an overall time period, using the minimum number of examiners for the minimum amount of time (i.e. no examiners idle). There are the following restrictions: no person can be in two places at once; each person must take each exam once; no one should be examined by the same examiner twice.

    I realise that an optimal solution is probably NP-complete, and that I'm probably best off using a genetic algorithm to obtain a best estimate (similar to this? http://stackoverflow.com/questions/184195/seating-plan-software-recommendations-does-such-a-beast-even-exist). I'm comfortable with how genetic algorithms work; what I'm struggling with is how to model the problem programmatically such that I CAN manipulate the parameters genetically. If each exam took the same amount of time, I'd divide the time period up into slots of that length and simply create a matrix of time slots vs. examiners and drop the candidates in. However, because the times of each test are not necessarily the same, I'm a bit lost on how to approach this.

    Currently I'm doing this: make a list of all "tests" which need to take place, between every candidate and exam; start with as many examiners as there are tests; repeatedly loop over all examiners, and for each one find an unscheduled test which is eligible for that examiner (based on the restrictions); continue until all tests that can be scheduled are scheduled; if there are any unscheduled tests left, increment the number of examiners and start again. I'm looking for better suggestions on how to approach this, as it feels rather crude currently.
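    One way to make the variable-length timings tractable for a GA is to evolve only an ordering (a permutation chromosome) and let a deterministic decoder greedily assign each test to the earliest feasible examiner slot; the decoder absorbs the hard constraints, so crossover and mutation stay simple. A hedged Python sketch, with all names invented for illustration:

        import random

        # a "test" is a (person, exam, duration) triple; a chromosome is an ordering of test indices
        def decode(chromosome, tests, num_examiners):
            examiner_free = [0] * num_examiners   # time each examiner becomes free
            person_free = {}                      # time each person becomes free
            seen = set()                          # (person, examiner) pairs already used
            schedule = []
            for idx in chromosome:
                person, exam, dur = tests[idx]
                # earliest examiner this person may still use
                # (assumes num_examiners exceeds the exams any one person has taken)
                best = min(
                    (e for e in range(num_examiners) if (person, e) not in seen),
                    key=lambda e: max(examiner_free[e], person_free.get(person, 0)),
                )
                start = max(examiner_free[best], person_free.get(person, 0))
                examiner_free[best] = start + dur
                person_free[person] = start + dur
                seen.add((person, best))
                schedule.append((person, exam, best, start))
            return schedule  # fitness = makespan plus an idle-time penalty, computed from this

        # mutation that preserves validity: swap two positions in the ordering
        def mutate(chromosome):
            i, j = random.sample(range(len(chromosome)), 2)
            chromosome[i], chromosome[j] = chromosome[j], chromosome[i]

    The examiner-count sweep from the question then wraps around this decoder: try a count, evolve, and increase the count only if no feasible schedule emerges.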

    Read the article

  • Wix - Upgrade always runs older installer msi and fails in trying to read old msi

    - by rkhj
    I'm having a problem with the Windows caching of the installer. I'm trying to do an upgrade, and each time the Windows Installer launches the installer of the older version; when I do the upgrade, it complains about problems reading the older version's MSI file (because it's not in the same directory anymore). I did change the UpgradeCode and the ProductCode but kept the PackageCode the same. I also have different ProductVersion codes (2.2.3 vs 2.3.0). Here's a sample of my code:

    <Upgrade Id="$(var.UpgradeCode)">
      <UpgradeVersion Property="OLDAPPFOUND" IncludeMinimum="yes" Minimum="$(var.RTMProductVersion)" IncludeMaximum="no" Maximum="$(var.ProductVersion)"/>
      <UpgradeVersion Property="NEWAPPFOUND" IncludeMinimum="no" Minimum="$(var.ProductVersion)" OnlyDetect="yes"/>
    </Upgrade>

    This is the install sequence:

    <InstallExecuteSequence>
      <Custom Action='SetUpgradeParams' After='InstallFiles'>Installed AND NEWAPPFOUND</Custom>
      <Custom Action='Upgrade' After='SetUpgradeParams'>Installed AND NEWAPPFOUND</Custom>
    </InstallExecuteSequence>

    The error I am getting is: "A network error occurred while attempting to read from the file:" Thanks,
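    A hedged observation rather than a confirmed fix: keeping the PackageCode identical across builds is what makes Windows Installer reach for the cached older .msi, because two packages with the same PackageCode are treated as the same file. In WiX the usual pattern is to let the package code regenerate on every build:

    <!-- sketch: an auto-generated package code; the other attributes here are placeholders -->
    <Package Id="*" InstallerVersion="200" Compressed="yes" />

    With a fresh PackageCode per build (plus the changed ProductCode and ProductVersion already in place), the major upgrade should stop trying to open the old MSI's path.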

    Read the article

  • MIPS return address in main

    - by Alexander
    I am confused about why, in the code below, I need to decrement the stack pointer and store the return address again. If I don't do that, then PCSpim keeps on looping. Why is that?

    ########################################################################
    ### main
    ########################################################################
    .text
    .globl main
    main:
    addi $sp, $sp, -4 # Make space on stack
    sw $ra, 0($sp) # Save return address

    # Start test 1
    la $a0, asize1 # 1st parameter: address of asize1[0]
    la $a1, frame1 # 2nd parameter: address of frame1[0]
    la $a2, window1 # 3rd parameter: address of window1[0]
    jal vbsme # call function

    # Printing $v0
    add $a0, $v0, $zero # Load $v0 for printing
    li $v0, 1 # Load the system call number
    syscall
    # Print newline.
    la $a0, newline # Load value for printing
    li $v0, 4 # Load the system call number
    syscall
    # Printing $v1
    add $a0, $v1, $zero # Load $v1 for printing
    li $v0, 1 # Load the system call number
    syscall
    # Print newline.
    la $a0, newline # Load value for printing
    li $v0, 4 # Load the system call number
    syscall
    # Print newline.
    la $a0, newline # Load value for printing
    li $v0, 4 # Load the system call number
    syscall
    # End of test 1

    lw $ra, 0($sp) # Restore return address
    addi $sp, $sp, 4 # Restore stack pointer
    jr $ra # Return

    ########################################################################
    ### vbsme
    ########################################################################
    #.text
    .globl vbsme
    vbsme:
    addi $sp, $sp, -4 # create space on the stack pointer
    sw $ra, 0($sp) # save return address
    exit:
    add $v1, $t5, $zero # (v1) x coordinate of the block in the frame with the minimum SAD
    add $v0, $t4, $zero # (v0) y coordinate of the block in the frame with the minimum SAD
    lw $ra, 0($sp) # restore return address
    addi $sp, $sp, 4 # restore stack pointer
    jr $ra # return

    If I delete

    addi $sp, $sp, -4 # create space on the stack pointer
    sw $ra, 0($sp) # save return address

    and

    lw $ra, 0($sp) # restore return address
    addi $sp, $sp, 4 # restore stack pointer

    from vbsme, PCSpim keeps on running... Why??? I shouldn't have to increment/decrement the stack pointer in vbsme and then do the jr again, right? The jal in main is supposed to handle that.
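    In case it helps the next reader, a hedged explanation of why removing the save/restore can loop: main is itself a function called by the SPIM startup code, so on entry $ra holds the address main must return to, and jal overwrites $ra. A sketch of the effect:

        main:            # $ra = return address into the startup code
            jal vbsme    # $ra is now the address of the instruction after this jal
            ...
            jr $ra       # without main's save/restore, this jumps back inside main -> endless loop

    In the snippet shown, vbsme is a leaf and its own save/restore is technically redundant; but if the real vbsme body (elided here) makes jal calls of its own, the same clobbering happens inside vbsme, which would explain the looping when its save/restore is deleted.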

    Read the article

  • Big problem with Dijkstra algorithm in a linked list graph implementation

    - by Nazgulled
    Hi, I have my graph implemented with linked lists, for both vertices and edges, and that is becoming an issue for the Dijkstra algorithm. As I said in a previous question, I'm converting code that uses an adjacency matrix to work with my graph implementation. The problem is that when I find the minimum value, I get an array index. This index would have matched the vertex index if the graph's vertices were stored in an array instead, and access to the vertex would be constant.

    I don't have time to change my graph implementation, but I do have a hash table, indexed by a unique number (one that does not start at 0; it's like 100090000), which is the problem I'm having. Whenever I need to, I use the modulo operator to get a number between 0 and the total number of vertices. This works fine when I need an array index from the number, but when I need the number from the array index (to access the calculated minimum-distance vertex in constant time), not so much. I tried to search for how to invert the modulo operation, as in

    100090000 mod 18000 = 10000, and 10000 invmod 18000 = 100090000

    but couldn't find a way to do it. My next alternative is to build some sort of reference array where, in the example above, arr[10000] = 100090000. That would fix the problem, but would require looping over the whole graph one more time. Do I have any better/easier solution with my current graph implementation?
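    For what it's worth, the modulo map is many-to-one, so it has no inverse in general; the reference array is the standard answer, and its cost can hide inside work Dijkstra does anyway. A hedged C sketch, with the type and field names invented to mirror the question's setup:

        #include <stdlib.h>

        /* hedged sketch: one O(V) pass builds the index -> original-id lookup */
        typedef struct Vertex { long id; struct Vertex *next; } Vertex;

        long *build_inverse(const Vertex *head, size_t n_vertices) {
            long *id_of = malloc(n_vertices * sizeof *id_of);
            for (const Vertex *v = head; v != NULL; v = v->next)
                id_of[v->id % n_vertices] = v->id;   /* O(1) inverse lookups afterwards */
            return id_of;
        }

    Since Dijkstra already makes an O(V) initialization pass to set all distances to infinity, filling this array during that same pass costs nothing asymptotically.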

    Read the article

  • Finding min and max values

    - by user01
    I am trying to write a simple program that reads integers from a data file and outputs the minimum and maximum value. The first integer of the input file indicates how many more integers will be read, and then the integers are listed. My program compiles without any problem; however, it returns values that are not part of the set in my test data file. Could anyone help diagnose this issue?

    int main() {
        FILE *fp = fopen("data.txt", "r");
        int count;
        int num;
        int i;
        int min = 0;
        int max = 0;
        fscanf(fp, "%d", &count);
        for (i = 0; i < count; i++)
            fscanf(fp, "%d", &i);
        {
            if (num < min) min = num;
            if (num > max) max = num;
        }
        fclose(fp);
        printf("Of the %d integers, the minimum value is %d and the maximum value is %d \n", count, min, max);
    }
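    For readers comparing against the listing above, two things stand out: the fscanf in the loop stores into the loop counter i rather than num (and since the for has a single-statement body, the brace block runs only once, after the loop, on an uninitialized num), and min starts at 0, which is only correct if the data contains a negative value. A corrected sketch:

        #include <stdio.h>
        #include <limits.h>

        int main(void) {
            FILE *fp = fopen("data.txt", "r");
            if (fp == NULL) return 1;
            int count, num;
            int min = INT_MAX, max = INT_MIN;   /* so the first value always wins */
            if (fscanf(fp, "%d", &count) != 1) { fclose(fp); return 1; }
            for (int i = 0; i < count; i++) {
                if (fscanf(fp, "%d", &num) != 1) break;  /* read into num, not i */
                if (num < min) min = num;
                if (num > max) max = num;
            }
            fclose(fp);
            printf("Of the %d integers, the minimum value is %d and the maximum value is %d\n",
                   count, min, max);
            return 0;
        }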

    Read the article

  • Eclipse does not start on Windows 7

    - by van
    Suddenly, Eclipse today has decided to stop working. The last thing I did was close all perspectives and close Eclipse. When loading Eclipse from the command prompt using "eclipse.exe -clean", the splash screen loads for a split second and then exits. When I run the command eclipsec -consoleLog -debug, it results in the following output:

    Start VM: -Dosgi.requiredJavaVersion=1.6 -Dhelp.lucene.tokenizer=standard -Xms4096m -Xmx4096m -XX:MaxPermSize=512m -Djava.class.path=d:\devtools\eclipse\\plugins/org.eclipse.equinox.launcher_1.3.0.v20130327-1440.jar
    -os win32 -ws win32 -arch x86_64
    -showsplash d:\devtools\eclipse\\plugins\org.eclipse.platform_4.3.0.v20130605-2000\splash.bmp
    -launcher d:\devtools\eclipse\eclipsec.exe -name Eclipsec
    --launcher.library d:\devtools\eclipse\\plugins/org.eclipse.equinox.launcher.win32.win32.x86_64_1.1.200.v20130521-0416\eclipse_1503.dll
    -startup d:\devtools\eclipse\\plugins/org.eclipse.equinox.launcher_1.3.0.v20130327-1440.jar
    --launcher.appendVmargs
    -product org.eclipse.epp.package.standard.product
    -consoleLog -debug
    -vm C:/Program Files/Java/jdk1.6.0_37/bin\..\jre\bin\server\jvm.dll
    -vmargs -Dosgi.requiredJavaVersion=1.6 -Dhelp.lucene.tokenizer=standard -Xms4096m -Xmx4096m -XX:MaxPermSize=512m -Djava.class.path=d:\devtools\eclipse\\plugins/org.eclipse.equinox.launcher_1.3.0.v20130327-1440.jar

    Error occurred during initialization of VM
    Incompatible minimum and maximum heap sizes specified

    Checking Task Manager shows no Java process running, and both CPU and memory usage are very low. I have tried reinstalling Eclipse and restarting my machine, but running eclipsec -consoleLog -debug from the command prompt still results in the issue:

    Error occurred during initialization of VM
    Incompatible minimum and maximum heap sizes specified
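    A hedged reading of that error: the JVM raises "Incompatible minimum and maximum heap sizes" when -Xms ends up larger than the usable -Xmx; one common trigger is a JVM that caps the maximum heap below the requested 4096m (note the vmargs are also passed twice via --launcher.appendVmargs). A first experiment is to shrink the values in eclipse.ini:

        -vmargs
        -Dosgi.requiredJavaVersion=1.6
        -Xms512m
        -Xmx1024m
        -XX:MaxPermSize=256m

    These particular numbers are placeholders; the point is to keep -Xms no larger than -Xmx and within what the chosen jvm.dll can actually reserve.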

    Read the article

  • How can I include a .eps figure within a Tikz simple flow chart?

    - by Jan
    Hi, I would like to create a simple flow chart in LaTeX with the TikZ package, similar to the following example: http://www.texample.net/tikz/examples/simple-flow-chart/ However, I would like to include figures (a time-series plot created in R, as EPS or something else) within the flowchart, for example within a {block}:

    \documentclass{article}
    \usepackage[latin1]{inputenc}
    \usepackage{tikz}
    \usetikzlibrary{shapes,arrows}
    \begin{document}
    \pagestyle{empty}

    % Define block styles
    \tikzstyle{decision} = [diamond, draw, fill=blue!20, text width=4.5em, text badly centered, node distance=3cm, inner sep=0pt]
    \tikzstyle{block} = [rectangle, draw, fill=blue!20, text width=5em, text centered, rounded corners, minimum height=4em]
    \tikzstyle{line} = [draw, -latex']
    \tikzstyle{cloud} = [draw, ellipse, fill=red!20, node distance=3cm, minimum height=2em]

    \begin{tikzpicture}[node distance = 2cm, auto]
    % Place nodes
    \node [block] (init) {initialize model};
    \node [cloud, left of=init] (expert) {expert};
    \node [cloud, right of=init] (system) {system};
    \node [block, below of=init] (identify) {identify candidate models};
    \node [block, below of=identify] (evaluate) {evaluate candidate models};
    \node [block, left of=evaluate, node distance=3cm] (update) {update model};
    \node [decision, below of=evaluate] (decide) {is best candidate better?};
    \node [block, below of=decide, node distance=3cm] (stop) {stop};
    % Draw edges
    \path [line] (init) -- (identify);
    \path [line] (identify) -- (evaluate);
    \path [line] (evaluate) -- (decide);
    \path [line] (decide) -| node [near start] {yes} (update);
    \path [line] (update) |- (identify);
    \path [line] (decide) -- node {no} (stop);
    \path [line,dashed] (expert) -- (init);
    \path [line,dashed] (system) -- (init);
    \path [line,dashed] (system) |- (evaluate);
    \end{tikzpicture}
    \end{document}

    Thanks, Jan
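    A node's contents can be an arbitrary box, so one hedged way to drop the R plot in is \includegraphics inside a node (requires \usepackage{graphicx}; with latex+dvips the file can stay EPS, while pdflatex would want it converted to PDF first):

    % sketch: add to the preamble
    \usepackage{graphicx}
    % ...and inside the tikzpicture, a new node holding the plot (the file name is a placeholder)
    \node [below of=stop, node distance=3cm] (plot) {\includegraphics[width=4cm]{timeseries}};
    \path [line,dashed] (stop) -- (plot);

    Omitting the extension in \includegraphics{timeseries} lets each engine pick the graphics format it supports.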

    Read the article

  • Is there a good way of automatically generating javascript client code from server side python

    - by tat.wright
    I basically want to be able to: write a few functions in Python (with the minimum amount of extra metadata); turn these functions into a web service (with the minimum of effort / boilerplate); and automatically generate some JavaScript functions / objects for RPC (this should prevent me from doing as many stupid things as possible, like mistyping method names, forgetting the names of methods, or passing the wrong number of arguments).

    Example. Python:

    def hello_world():
        return "Hello world"

    JavaScript:

    ...
    <!-- This file is automatically generated (either dynamically or statically) -->
    <script src="http://myurl.com/webservice/client_side_javascript"> </script>
    ...
    <script>
    $('#button').click(function () {
        hello_world(function (data) { $('#label').text(data) })
    })
    </script>

    A bit of research has shown me some approaches that come close to this: automatic generation of JSON-RPC services from functions, with a little boilerplate code in Python, and then using jQuery and JSON to do the calls (still easy to make mistakes with method names, you still need to be aware of URLs when calling, and it is very irritating to write these calls yourself in the Firebug shell); or using a library like soaplib to generate WSDL from Python (by adding copious type information) and then somehow converting this into JavaScript (not sure if there is even a library to do this). But are there any approaches closer to what I want?
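    As a sketch of how little machinery the stub generation itself needs (all names here are invented, not an existing library): introspect the exposed functions and emit one JavaScript wrapper per function, so method names and arity in the browser always match the Python side:

        import inspect

        def js_stubs(funcs, url="/rpc"):
            """Emit a jQuery-flavoured JSON-RPC stub for each exposed function."""
            chunks = []
            for f in funcs:
                params = list(inspect.signature(f).parameters)
                arglist = ", ".join(params + ["callback"])
                payload = ", ".join(params)
                chunks.append(
                    "function %s(%s) {\n"
                    "  $.post('%s', JSON.stringify({method: '%s', params: [%s]}),\n"
                    "         callback, 'json');\n"
                    "}" % (f.__name__, arglist, url, f.__name__, payload))
            return "\n".join(chunks)

        def hello_world():
            return "Hello world"

        print(js_stubs([hello_world]))  # serve this at /webservice/client_side_javascript

    Served dynamically, the stubs can never drift out of sync with the Python functions, which covers the mistyped-name and wrong-arity worries above; the server-side JSON-RPC dispatcher is the remaining boilerplate.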

    Read the article

  • Internet Explorer table 1 pixel spacing problem

    - by Dennis G.
    I've found a strange problem with Internet Explorer related to table spacing and cannot find a way to work around it. An empty table results in a single pixel of white space in Internet Explorer (6 and 7; 8 not yet tested), while all other browsers ignore the empty table. Here's a picture of the problem (screenshot omitted), and here is the minimum HTML code to reproduce the issue (please note that more margin/padding CSS attributes and table attributes are specified than really needed; I just tested whether these fix IE's behavior):

    <!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Strict//EN" "http://www.w3.org/TR/xhtml1/DTD/xhtml1-strict.dtd">
    <html xmlns="http://www.w3.org/1999/xhtml" xml:lang="en" lang="en">
    <body>
      <div style="width: 200px; border: 1px black solid">
        <table border="0" cellspacing="0" cellpadding="0" style="margin: 0pt; padding: 0pt; border-collapse: collapse;">
          <tr>
            <td style="padding: 0; margin: 0"> </td>
          </tr>
        </table>
        <div style="background: red">
          Test
        </div>
      </div>
    </body>
    </html>

    I'm not using an empty table as specified in the example above, but this was the minimum code that displays this behavior. Any ideas on how to fix this and remove the white space in IE?
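    One classic workaround sketch for this family of IE bugs: the empty cell still reserves a text line at the current font metrics, so zeroing those metrics usually collapses the gap. Untested against this exact reduction, hence hedged:

        <td style="padding: 0; margin: 0; font-size: 0; line-height: 0; height: 0"></td>

    Removing the space character between <td ...> and </td> can matter too, since old IE treats lone whitespace in a cell as content.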

    Read the article

  • Text piped to PowerShell.exe isn't received when using [Console]::ReadLine()

    - by crtracy
    I'm getting intermittent data loss when calling .NET [Console]::ReadLine() to read piped input to PowerShell.exe:

        >ping localhost | powershell -NonInteractive -NoProfile -C "do {$line = [Console]::ReadLine(); ('' + (Get-Date -f 'HH:mm:ss') + $line) | Write-Host; } while ($line -ne $null)"
        23:56:45time<1ms
        23:56:45
        23:56:46time<1ms
        23:56:46
        23:56:47time<1ms
        23:56:47
        23:56:47

    Normally 'ping localhost' from Vista64 looks like this, so there is a lot of data missing from the output above:

        Pinging WORLNTEC02.bnysecurities.corp.local [::1] from ::1 with 32 bytes of data:
        Reply from ::1: time<1ms
        Reply from ::1: time<1ms
        Reply from ::1: time<1ms
        Reply from ::1: time<1ms
        Ping statistics for ::1:
            Packets: Sent = 4, Received = 4, Lost = 0 (0% loss),
        Approximate round trip times in milli-seconds:
            Minimum = 0ms, Maximum = 0ms, Average = 0ms

    But using the same API from C# receives all the data sent to the process (excluding some newline differences). Code:

        namespace ConOutTime {
            class Program {
                static void Main (string[] args) {
                    string s;
                    while ((s = Console.ReadLine ()) != null) {
                        if (s.Length > 0) // don't write time for empty lines
                            Console.WriteLine ("{0:HH:mm:ss} {1}", DateTime.Now, s);
                    }
                }
            }
        }

    Output:

        00:44:30 Pinging WORLNTEC02.bnysecurities.corp.local [::1] from ::1 with 32 bytes of data:
        00:44:30 Reply from ::1: time<1ms
        00:44:31 Reply from ::1: time<1ms
        00:44:32 Reply from ::1: time<1ms
        00:44:33 Reply from ::1: time<1ms
        00:44:33 Ping statistics for ::1:
        00:44:33 Packets: Sent = 4, Received = 4, Lost = 0 (0% loss),
        00:44:33 Approximate round trip times in milli-seconds:
        00:44:33 Minimum = 0ms, Maximum = 0ms, Average = 0ms

    So, when calling the same API from PowerShell instead of C#, many parts of stdin get 'eaten'. Is the PowerShell host reading from stdin even though I didn't use 'PowerShell.exe -Command -'?
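    A sketch of the usual workaround, assuming the goal is just to timestamp each line: let PowerShell's own pipeline deliver stdin through the automatic $input enumerator instead of calling [Console]::ReadLine() yourself. (That the host's own reader competes with [Console]::ReadLine() for the raw stream is my assumption about the cause, not something confirmed from documentation.)

        >ping localhost | powershell -NonInteractive -NoProfile -C "$input | ForEach-Object { '{0:HH:mm:ss} {1}' -f (Get-Date), $_ }"

    Each piped line arrives as $_ exactly once, so nothing should be dropped or interleaved.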

    Read the article

  • ASP.NET RangeValidator can't do even the most basic math !?!?!?

    - by marc_s
    I'm having an issue with my ASP.NET RangeValidator controls. I want to allow users to enter a discount amount, and this amount must be negative (< $0.00). To verify that the amount entered in a textbox is a negative value, I have this in my page markup:

        <asp:TextBox ID="tbxDiscount" runat="server" />
        <asp:RangeValidator ID="rvDiscount" runat="server"
            ControlToValidate="tbxDiscount"
            MinimumValue="0.0" MaximumValue="0.0"
            EnableClientScript="true"
            ErrorMessage="Please enter a negative value for a discount" />

    and I attempt to set the MinimumValue dynamically in my code before the page gets rendered, to the negative equivalent of my item price. So if the item is $69, I want to set the minimum value to -$69:

        rvDiscount.MinimumValue = (-1.0m * Price).ToString();

    Trouble is: I keep getting this error message:

        The maximum value 0.0 cannot be less than the minimum value -69.00 for rvDiscount

    WTF?!? Where I come from, -$69 IS less than $0... so what's the problem? And more importantly: what is the solution to the problem?
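    The likely culprit: RangeValidator compares its bounds according to its Type property, which defaults to ValidationDataType.String, so "-69.00" and "0.0" are ordered as text rather than as numbers. A sketch of the fix; the placeholder MinimumValue and the invariant-culture choice are assumptions to adapt:

        <asp:RangeValidator ID="rvDiscount" runat="server"
            ControlToValidate="tbxDiscount"
            Type="Double"
            MinimumValue="-999999" MaximumValue="0"
            EnableClientScript="true"
            ErrorMessage="Please enter a negative value for a discount" />

    and in the code-behind:

        // Needs "using System.Globalization;".
        // With Type="Double" the bounds are compared numerically, so -69 < 0 is accepted.
        rvDiscount.MinimumValue = (-1.0m * Price).ToString(CultureInfo.InvariantCulture);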

    Read the article

  • How can I optimize retrieving lowest edit distance from a large table in SQL?

    - by Matt
    Hey, I'm having trouble optimizing this Levenshtein distance calculation I'm doing. I need to do the following:
    1. Get the record with the minimum distance for the source string, as well as for a trimmed version of the source string
    2. Pick the record with the minimum distance
    3. If the min distances are equal (original vs trimmed), choose the trimmed one with the lowest distance
    4. If there are still multiple records that fall under the above two categories, pick the one with the highest frequency

    Here's my working version:

        DECLARE @Results TABLE
        (
            ID int,
            [Name] nvarchar(200),
            Distance int,
            Frequency int,
            Trimmed bit
        )

        INSERT INTO @Results
        SELECT ID, [Name],
               (dbo.Levenshtein(@Source, [Name])) AS Distance,
               Frequency,
               'False' AS Trimmed
        FROM MyTable

        INSERT INTO @Results
        SELECT ID, [Name],
               (dbo.Levenshtein(@SourceTrimmed, [Name])) AS Distance,
               Frequency,
               'True' AS Trimmed
        FROM MyTable

        SET @ResultID = (SELECT TOP 1 ID FROM @Results ORDER BY Distance, Trimmed, Frequency)
        SET @Result = (SELECT TOP 1 [Name] FROM @Results ORDER BY Distance, Trimmed, Frequency)
        SET @ResultDist = (SELECT TOP 1 Distance FROM @Results ORDER BY Distance, Trimmed, Frequency)
        SET @ResultTrimmed = (SELECT TOP 1 Trimmed FROM @Results ORDER BY Distance, Trimmed, Frequency)

    I believe what I need to do here is:
    - Not dump the results to a temporary table
    - Do only one SELECT from `MyTable`
    - Set the results right in the initial SELECT statement (since SELECT can set variables, and you can set multiple variables in one SELECT statement)

    I know there has to be a good implementation for this, but I can't figure it out... this is as far as I got:

        SELECT TOP 1
            @ResultID = ID,
            @Result = [Name],
            (dbo.Levenshtein(@Source, [Name])) AS distOrig,
            (dbo.Levenshtein(@SourceTrimmed, [Name])) AS distTrimmed,
            Frequency
        FROM MyTable
        WHERE /* ... yeah I'm lost */
        ORDER BY distOrig, distTrimmed, Frequency

    Any ideas?
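    A sketch of a single-pass version, assuming SQL Server 2008+ for the CROSS APPLY (VALUES ...) row constructor: each row's two distances are computed once in the apply, and one TOP 1 ... ORDER BY both applies the tie-breaking rules and assigns all the variables. Frequency is sorted DESC here because rule 4 asks for the highest frequency (the original sorted it ascending); likewise, flip d.Trimmed to DESC if ties between original and trimmed should prefer the trimmed match.

        SELECT TOP 1
               @ResultID      = t.ID,
               @Result        = t.[Name],
               @ResultDist    = d.Distance,
               @ResultTrimmed = d.Trimmed
        FROM MyTable AS t
        CROSS APPLY (VALUES
            (dbo.Levenshtein(@Source,        t.[Name]), CAST(0 AS bit)),  -- original
            (dbo.Levenshtein(@SourceTrimmed, t.[Name]), CAST(1 AS bit))   -- trimmed
        ) AS d (Distance, Trimmed)
        ORDER BY d.Distance, d.Trimmed, t.Frequency DESC;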

    Read the article

  • Time complexity to fill hash table (homework)?

    - by Heathcliff
    This is a homework question, but I think there's something missing from it. It asks:

        Provide a sequence of m keys to fill a hash table implemented with linear probing, such that the time to fill it is minimum.

    And then:

        Provide another sequence of m keys, but such that the time to fill it is maximum. Repeat these two questions if the hash table implements quadratic probing.

    I can only assume that the hash table has size m, both because it's the only number given and because we have been using that letter to refer to the hash table size when describing the load factor. But I can't think of any sequence that answers the first part without knowing the hash function that hashes the sequence into the table. If it is a bad hash function (such that, for instance, it hashes every entry to the same index), then both the minimum and maximum time to fill the table will be the same, regardless of what the sequence looks like. And in the average case, where I assume the hash function is OK, how am I supposed to know how long it will take for that hash function to fill the table? Aren't these questions tied more strongly to the hash function than to the sequence that is hashed?

    As for the second question, I can assume that, regardless of the hash function, a sequence of m copies of the same key will provide the maximum time, because it will cause probing from the second entry on. Is that correct? Thanks
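    For intuition, here's a tiny sketch (Python, assuming the common textbook convention h(k) = k mod m) showing the two extremes for linear probing: keys 0..m-1 each land in an empty home slot (no extra probes), while keys 0, m, 2m, ... all collide at slot 0 and cost 0 + 1 + ... + (m-1) probe steps in total, so the worst-case total fill time is quadratic in m, not linear.

        def probes_to_fill(keys, m):
            """Total linear-probing steps to insert all keys into a size-m table, h(k) = k % m."""
            table = [None] * m
            steps = 0
            for k in keys:
                i = k % m
                while table[i] is not None:   # probe until an empty slot is found
                    i = (i + 1) % m
                    steps += 1
                table[i] = k
            return steps

        m = 8
        print(probes_to_fill(range(m), m))                   # 0  -> best case, O(m) total fill
        print(probes_to_fill([i * m for i in range(m)], m))  # 28 -> m(m-1)/2, O(m^2) total fill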

    Read the article

  • Number of simple mutations to change one string to another?

    - by mstksg
    Hi, I'm sure you've all heard of the "word game", where you try to change one word to another by changing one letter at a time, going only through valid English words. I'm trying to implement an A* algorithm to solve it (just to flesh out my understanding of A*), and one of the things that is needed is a minimum-distance heuristic. That is, the minimum number of these three mutations that can turn an arbitrary string a into another string b:
    1. Change one letter for another
    2. Add one letter at a spot before or after any letter
    3. Remove any letter

    Examples:

        aabca   => abaca: aabca, abca, abaca = 2
        abcdebf => bgabf: abcdebf, bcdebf, bcdbf, bgdbf, bgabf = 4

    I've tried many algorithms out; I can't seem to find one that gives the actual answer every time. In fact, sometimes I'm not sure if even my human reasoning is finding the best answer. Does anyone know an algorithm for this purpose, or can help me find one? Thanks.
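    These three mutations are exactly the operations of Levenshtein (edit) distance, and the standard dynamic-programming solution is guaranteed to return the true minimum, which also makes it an admissible A* heuristic. A sketch in Python, checked against both examples above:

        def edit_distance(a, b):
            """Minimum number of single-letter changes, additions and removals turning a into b."""
            m, n = len(a), len(b)
            d = [[0] * (n + 1) for _ in range(m + 1)]
            for i in range(m + 1):
                d[i][0] = i                            # remove all of a[:i]
            for j in range(n + 1):
                d[0][j] = j                            # add all of b[:j]
            for i in range(1, m + 1):
                for j in range(1, n + 1):
                    cost = 0 if a[i - 1] == b[j - 1] else 1
                    d[i][j] = min(d[i - 1][j] + 1,          # remove a letter
                                  d[i][j - 1] + 1,          # add a letter
                                  d[i - 1][j - 1] + cost)   # change a letter (or keep it)
            return d[m][n]

        assert edit_distance("aabca", "abaca") == 2
        assert edit_distance("abcdebf", "bgabf") == 4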

    Read the article

  • Excess errors on model from somewhere

    - by gmile
    I have a User model and use acts_as_authentic (from authlogic) on it. My User model has three validations on username and looks as follows:

        class User < ActiveRecord::Base
          acts_as_authentic
          validates_presence_of :username
          validates_length_of :username, :within => 4..40
          validates_uniqueness_of :username
        end

    I'm writing a test to see my validations in action. Somehow, I get two errors instead of one when validating the uniqueness of a name. To see the excess error, I run the following test:

        describe User do
          before(:each) do
            @user = Factory.build(:user)
          end

          it "should have a username longer than 3 symbols" do
            @user2 = Factory(:user)
            @user.username = @user2.username
            @user.save
            puts @user.errors.inspect
          end
        end

    I get two errors on username:

        @errors={"username"=>["has already been taken", "has already been taken"]}

    Somehow the validation runs twice. I think authlogic causes that, but I don't have a clue how to avoid it. Another case of the problem is when I set username to nil. Somehow I get four validation errors instead of three:

        @errors={"username"=>["is too short (minimum is 3 characters)", "should use only letters, numbers, spaces, and .-_@ please.", "can't be blank", "is too short (minimum is 4 characters)"]}

    I think authlogic is the one causing this strange behaviour, but I can't even imagine how to solve it. Any ideas?
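    This looks like double validation: acts_as_authentic installs its own presence, length, format and uniqueness validations on the login field, so the explicit validates_* lines duplicate them and both sets fire (which also explains the "minimum is 3" vs "minimum is 4" pair). A sketch of one way out, letting authlogic own the rules; the configuration method name is from the authlogic 2.x docs as I remember them, so verify it against the installed version:

        class User < ActiveRecord::Base
          acts_as_authentic do |c|
            # Merge the length requirement into authlogic's own length validation
            # instead of declaring a second one.
            c.merge_validates_length_of_login_field_options :within => 4..40
          end
          # No separate validates_presence_of / validates_length_of /
          # validates_uniqueness_of here -- authlogic already provides them.
        end

    The alternative direction (keeping your validations and switching authlogic's off via its validate_login_field-style options) should also work, but again the option names need checking against your authlogic version.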

    Read the article

< Previous Page | 81 82 83 84 85 86 87 88 89 90 91 92  | Next Page >