Search Results

Search found 24334 results on 974 pages for 'directory loop'.


  • Kernel module compilation fails when installing vmware tools

    - by nekooee
    When I install VMware Tools, I get this error for vmhgfs:

        /tmp/vmware-root/modules/vmhgfs-only/filesystem.c:47:28: fatal error: linux/smp_lock.h: No such file or directory
        compilation terminated.
        make[2]: *** [/tmp/vmware-root/modules/vmhgfs-only/filesystem.o] Error 1
        make[1]: *** [_module_/tmp/vmware-root/modules/vmhgfs-only] Error 2
        make[1]: Leaving directory `/usr/src/linux-headers-3.0.0-12-generic'
        make: *** [vmhgfs.ko] Error 2
        make: Leaving directory `/tmp/vmware-root/modules/vmhgfs-only'
        If you wish to have the shared folders feature, you can install the driver by running vmware-config-tools.pl again after making sure that gcc, binutils, make and the kernel sources for your running kernel are installed on your machine. These packages are available on your distribution's installation CD.

    And /mnt/hgfs is empty when sharing. If I run vmware-hgfsclient in a terminal, I get the list of shared folders, but /mnt/hgfs is empty.
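    A possible workaround, offered only as a sketch: linux/smp_lock.h was removed from the kernel around 2.6.39, so on a 3.0 kernel the include usually has to be dropped from the vmhgfs source before the installer can build it. The module tarball path below is the usual VMware Tools location but is an assumption here; if filesystem.c also uses the removed lock_kernel() calls, a properly patched vmhgfs or the open-vm-tools/open-vm-dkms packages are the safer route.

        # Sketch: strip the obsolete Big Kernel Lock include and rebuild (paths assumed)
        cd /usr/lib/vmware-tools/modules/source
        sudo tar xf vmhgfs.tar
        sudo sed -i 's|#include <linux/smp_lock.h>||' vmhgfs-only/filesystem.c
        sudo tar cf vmhgfs.tar vmhgfs-only
        sudo vmware-config-tools.pl    # re-run the installer so it rebuilds vmhgfs

    Once vmhgfs actually builds and loads, /mnt/hgfs should start showing the shares that vmware-hgfsclient lists.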

    Read the article

  • Installing Ubuntu

    - by Mister AR
    I ran into a problem when installing Ubuntu 12.04 in VMware on my Windows 7 x64 system: at the end of the installation, after retrieving files, it stopped and didn't move forward. Additionally, I got another problem when I tried to install the packages I had updated; it gave me the error below:

        installArchives() failed: Error in function:
        Setting up libssl1.0.0 (1.0.1-4ubuntu5.2) ...
        locale: Cannot set LC_CTYPE to default locale: No such file or directory
        locale: Cannot set LC_MESSAGES to default locale: No such file or directory
        locale: Cannot set LC_ALL to default locale: No such file or directory
        debconf: DbDriver "config": /var/cache/debconf/config.dat is locked by another process: Resource temporarily unavailable
        dpkg: error processing libssl1.0.0 (--configure):
        subprocess installed post-installation script returned error exit status 1

    Please help me soon! Thank you all.
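    A minimal cleanup sketch for the two errors quoted above, assuming en_US.UTF-8 is the locale you want and that no other package manager (Update Manager, Software Center, another apt) is running while you retry:

        sudo locale-gen en_US.UTF-8        # generate the missing locale (pick your own)
        sudo dpkg-reconfigure locales
        sudo dpkg --configure -a           # finish the interrupted libssl1.0.0 configuration
        sudo apt-get install -f            # pull in anything still half-installed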

    Read the article

  • Pros and cons of the Google font API

    - by Seamus
    I am currently using a font from the google font directory on my website. I don't fully understand how it works, but it seems like when someone opens my site, their browser is told to go and fetch the font from Google. (correct me if I'm wrong). Now, what I'm wondering is, what are the pros and cons of this over just specifying a font family the old-school way? Presumably doing it the google font directory way has the advantage that they'll definitely see the font I want them to. (as long as the font directory is up). But does this way have disadvantages? Maybe using fonts that are stored locally speeds up the site loading?

    Read the article

  • How to merge two .iso images

    - by pgrytdal
    I am following this tutorial to install Android onto my computer via VirtualBox. My problem is, they want you to download liveandroidv0.3.iso.001 and liveandroidv0.3.iso.002, then merge these two files with cat liveandroidv0.3.iso.001 liveandroidv0.3.iso.002 > liveandroidv0.3.iso in the Terminal. The problem is, when I run the command, I get the following output:

        cat liveandroidv0.3.iso.001 liveandroidv0.3.iso.002 > liveandroidv0.3.iso
        cat: liveandroidv0.3.iso.001: No such file or directory
        cat: liveandroidv0.3.iso.002: No such file or directory

    So, I was wondering if there is an alternative way to merge these files? Or could you guys help me merge them this way? Extra info: OS: Ubuntu 12.10. I downloaded the files to the Downloads folder in my home directory.
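    The "No such file or directory" messages usually just mean cat is being run from a different directory than the one holding the two parts. A sketch, assuming the browser saved them into ~/Downloads:

        cd ~/Downloads
        ls liveandroidv0.3.iso.*                                    # confirm both parts are actually here
        cat liveandroidv0.3.iso.001 liveandroidv0.3.iso.002 > liveandroidv0.3.iso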

    Read the article

  • Compiling midnight commander

    - by notabene
    Hello, I need help compiling Midnight Commander so that I can make some changes (for educational purposes), or even just generating the makefiles. After downloading the latest version from git, I try to run ./autogen.sh. The result is:

        maint/autopoint: 418: cannot open /usr/share/gettext/archive.tar.gz: No such file
        tar: This does not look like a tar archive
        tar: Exiting with failure status due to previous errors
        cvs checkout: cannot find module `archive' - ignored
        find: `archive': No such file or directory
        find: `archive': No such file or directory
        find: `archive': No such file or directory
        autopoint: *** infrastructure files for version 0.14.3 not found; this is autopoint from GNU gettext-tools 0.17
        autopoint: *** Stop.

    I have installed gettext and the folder /usr/share/gettext does exist, but there is no archive.tar.gz. I have no idea what this archive should contain or where to get it. Can you help me please?
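    One avenue worth trying (a sketch, not a guaranteed fix): on Debian/Ubuntu the autopoint helper and the gettext infrastructure it expects are packaged separately from gettext itself, and mc needs a few more development packages before autogen/configure will get far:

        sudo apt-get install autopoint gettext libtool pkg-config libglib2.0-dev libslang2-dev
        sudo apt-get build-dep mc        # alternative: pull in everything mc's own package needs (requires deb-src lines in sources.list)
        ./autogen.sh && ./configure && make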

    Read the article

  • Which hidden files and directories do I need?

    - by Sammy Black
    In a previous question, I explained my situation/plan: backing up home directory on external drive, reformatting laptop drive, installing 14.04, putting home directory back. (It hasn't happened yet because I can't seem to find the down time, in case things aren't working right away.) It occurred to me that maybe I don't want all of those hidden files and directories (e.g. .local/share/ubuntuone/syncdaemon/, .cache/google-chrome/, etc.) Just judging by the amount of time in copying, I can tell that some of these hidden directories are large. Question: Are there any hidden directories that I obviously don't need/want when I have the laptop running an updated distribution? Will they cause conflicts? (I plan on copying the backed-up directory tree back onto the laptop with the --no-clobber option.)
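    For the copy-back step, one option is to skip the cache-style hidden directories entirely rather than worry about conflicts. A sketch, with the source path and the excluded names only as common examples (--ignore-existing is rsync's rough equivalent of cp --no-clobber):

        rsync -a --ignore-existing \
            --exclude='.cache/' \
            --exclude='.thumbnails/' \
            --exclude='.local/share/Trash/' \
            /media/backup/home/sammy/  /home/sammy/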

    Read the article

  • Why am I unable to open exe files?

    - by Aaron
    It doesn't matter what disk I use, it cannot open the program. I keep getting the following error:

        Archive: /media/xxxxxxxx/INSTALL/_Setupa.exe
        [/media/xxxxxxxxxx/INSTALL/_Setupa.exe]
        End-of-central-directory signature not found. Either this file is not a zipfile, or it constitutes one disk of a multi-part archive. In the latter case the central directory and zipfile comment will be found on the last disk(s) of this archive.
        zipinfo: cannot find zipfile directory in one of /media/xxxxxx/INSTALL/_Setupa.exe or /media/xxxxxxxxx/INSTALL/_Setupa.exe.zip, and cannot find /media/xxxxxxxxx/INSTALL/_Setupa.exe.ZIP, period.

    Any ideas?
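    Archive Manager is treating the Windows installer as a zip archive, which it is not, hence the "End-of-central-directory" message. Actually running a Windows .exe on Ubuntu needs something like Wine; a sketch (the /media path is simply the redacted one from the error above):

        sudo apt-get install wine
        wine /media/xxxxxxxx/INSTALL/_Setupa.exe

    Whether the program then works depends on the program; some Windows installers run fine under Wine and others do not.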

    Read the article

  • I'm trying to install Bruteforce Savedata from the archiver

    - by Jonathan
    I've just installed Ubuntu 12.04 out of curiosity. I'm a gamer and I wanted to install BruteForce Save Data on my computer. So I downloaded it and it opened in the Archive Manager; when I go to run the .exe I encounter this message:

        Archive: /home/c4/Desktop/Bruteforce_Save_Data_installer.exe
        [/home/c4/Desktop/Bruteforce_Save_Data_installer.exe]
        End-of-central-directory signature not found. Either this file is not a zipfile, or it constitutes one disk of a multi-part archive. In the latter case the central directory and zipfile comment will be found on the last disk(s) of this archive.
        zipinfo: cannot find zipfile directory in one of /home/c4/Desktop/Bruteforce_Save_Data_installer.exe or /home/c4/Desktop/Bruteforce_Save_Data_installer.exe.zip, and cannot find /home/c4/Desktop/Bruteforce_Save_Data_installer.exe.ZIP, period.

    Please help!

    Read the article

  • Ubuntu Server 12.04 fails to start after update

    - by Abbgrade
    I did a system update on Ubuntu Server 12.04, which requested a reboot. Since then, the system never reaches the login. It hangs on:

        mount: mounting /dev on /root/dev failed: No such file or directory
        done.
        mount: mounting /sys on /root/sys failed: No such file or directory
        mount: mounting /proc on /root/proc failed: No such file or directory
        Target filesystem doesn't have requested /sbin/init.
        No init found. Try passing init= bootarg.
        BusyBox v1.18.5 (Ubuntu 1:1.18.5-1ubuntu4) built-in shell (ash)
        Enter 'help' for a list of built-in commands.
        (initramfs)

    I already tried to repair it using a live system:
        - mounted the filesystems (/boot ext, / btrfs)
        - fsck ran without problems
        - /etc/fstab seems to be OK
        - apt update/upgrade in the chroot succeeded
    Now I have no more ideas :/
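    One more thing to try from the live system, sketched here with assumed device names (/dev/sda1 for /boot, /dev/sda2 for /, GRUB on /dev/sda): rebuild the initramfs and reinstall GRUB from inside the chroot, since the boot is dying in the initramfs stage.

        sudo mount -o subvol=@ /dev/sda2 /mnt    # Ubuntu puts a btrfs root in the @ subvolume; drop the option if not applicable
        sudo mount /dev/sda1 /mnt/boot
        for d in /dev /dev/pts /proc /sys; do sudo mount --bind $d /mnt$d; done
        sudo chroot /mnt
        update-initramfs -u -k all     # regenerate the initramfs for every installed kernel
        update-grub
        grub-install /dev/sda
        exit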

    Read the article

  • How to add another OS entry in Wubi grub

    - by Amey Jah
    I am trying to install another Linux distro besides Ubuntu, but I want to retain my existing Windows-based loader. Currently, as far as I know, the Windows boot loader loads GRUB, which then loads Ubuntu (with the loopback trick). Now I have a new Linux distro with its /boot on /dev/sda8, whereas the root filesystem for that OS is on /dev/sda9. I tried the following steps:
        1. Add an entry to 40_custom of the Ubuntu GRUB.
        2. Run update-grub.
    But upon booting via that entry, it is not able to load the new OS and shows me a blank screen. What could be the problem?

    Additional data: grub.cfg file of Ubuntu:

        menuentry 'Ubuntu' --class ubuntu --class gnu-linux --class gnu --class os $menuentry_id_option 'gnulinux-simple-fc296be2-8c59-4f21-a3f8-47c38cd0d537' {
            gfxmode $linux_gfx_mode
            insmod gzio
            insmod ntfs
            set root='hd0,msdos5'
            if [ x$feature_platform_search_hint = xy ]; then
                search --no-floppy --fs-uuid --set=root --hint-bios=hd0,msdos5 --hint-efi=hd0,msdos5 --hint-baremetal=ahci0,msdos5 01CD7BB998DB0870
            else
                search --no-floppy --fs-uuid --set=root 01CD7BB998DB0870
            fi
            loopback loop0 /ubuntu/disks/root.disk
            set root=(loop0)
            linux /boot/vmlinuz-3.5.0-19-generic root=UUID=01CD7BB998DB0870 loop=/ubuntu/disks/root.disk ro quiet splash $vt_handoff
            initrd /boot/initrd.img-3.5.0-19-generic
        }
        submenu 'Advanced options for Ubuntu' $menuentry_id_option 'gnulinux-advanced-fc296be2-8c59-4f21-a3f8-47c38cd0d537' {
            menuentry 'Ubuntu, with Linux 3.5.0-19-generic' --class ubuntu --class gnu-linux --class gnu --class os $menuentry_id_option 'gnulinux-3.5.0-19-generic-advanced-fc296be2-8c59-4f21-a3f8-47c38cd0d537' {
                gfxmode $linux_gfx_mode
                insmod gzio
                insmod ntfs
                set root='hd0,msdos5'
                if [ x$feature_platform_search_hint = xy ]; then
                    search --no-floppy --fs-uuid --set=root --hint-bios=hd0,msdos5 --hint-efi=hd0,msdos5 --hint-baremetal=ahci0,msdos5 01CD7BB998DB0870
                else
                    search --no-floppy --fs-uuid --set=root 01CD7BB998DB0870
                fi
                loopback loop0 /ubuntu/disks/root.disk
                set root=(loop0)
                echo 'Loading Linux 3.5.0-19-generic ...'
                linux /boot/vmlinuz-3.5.0-19-generic root=UUID=01CD7BB998DB0870 loop=/ubuntu/disks/root.disk ro quiet splash $vt_handoff
                echo 'Loading initial ramdisk ...'
                initrd /boot/initrd.img-3.5.0-19-generic
            }
            menuentry 'Ubuntu, with Linux 3.5.0-19-generic (recovery mode)' --class ubuntu --class gnu-linux --class gnu --class os $menuentry_id_option 'gnulinux-3.5.0-19-generic-recovery-fc296be2-8c59-4f21-a3f8-47c38cd0d537' {
                insmod gzio
                insmod ntfs
                set root='hd0,msdos5'
                if [ x$feature_platform_search_hint = xy ]; then
                    search --no-floppy --fs-uuid --set=root --hint-bios=hd0,msdos5 --hint-efi=hd0,msdos5 --hint-baremetal=ahci0,msdos5 01CD7BB998DB0870
                else
                    search --no-floppy --fs-uuid --set=root 01CD7BB998DB0870
                fi
                loopback loop0 /ubuntu/disks/root.disk
                set root=(loop0)
                echo 'Loading Linux 3.5.0-19-generic ...'
                linux /boot/vmlinuz-3.5.0-19-generic root=UUID=01CD7BB998DB0870 loop=/ubuntu/disks/root.disk ro recovery nomodeset
                echo 'Loading initial ramdisk ...'
                initrd /boot/initrd.img-3.5.0-19-generic
            }
        }
        ### END /etc/grub.d/10_lupin ###
        menuentry 'Linux, with Linux core repo kernel' --class arch --class gnu-linux --class gnu --class os $menuentry_id_option 'gnulinux-core repo kernel-true-0f490b6c-e92d-42f0-88e1-0bd3c0d27641' {
            load_video
            set gfxpayload=keep
            insmod gzio
            insmod part_msdos
            insmod ext2
            set root='hd0,msdos8'
            if [ x$feature_platform_search_hint = xy ]; then
                search --no-floppy --fs-uuid --set=root --hint-bios=hd0,msdos8 --hint-efi=hd0,msdos8 --hint-baremetal=ahci0,msdos8 0f490b6c-e92d-42f0-88e1-0bd3c0d27641
            else
                search --no-floppy --fs-uuid --set=root 0f490b6c-e92d-42f0-88e1-0bd3c0d27641
            fi
            echo 'Loading Linux core repo kernel ...'
            linux /boot/vmlinuz-linux root=UUID=0f490b6c-e92d-42f0-88e1-0bd3c0d27641 ro quiet
            echo 'Loading initial ramdisk ...'
            initrd /boot/initramfs-linux.img
        }
        menuentry 'Linux, with Linux core repo kernel (Fallback initramfs)' --class arch --class gnu-linux --class gnu --class os $menuentry_id_option 'gnulinux-core repo kernel-fallback-0f490b6c-e92d-42f0-88e1-0bd3c0d27641' {
            load_video
            set gfxpayload=keep
            insmod gzio
            insmod part_msdos
            insmod ext2
            set root='hd0,msdos8'
            if [ x$feature_platform_search_hint = xy ]; then
                search --no-floppy --fs-uuid --set=root --hint-bios=hd0,msdos8 --hint-efi=hd0,msdos8 --hint-baremetal=ahci0,msdos8 0f490b6c-e92d-42f0-88e1-0bd3c0d27641
            else
                search --no-floppy --fs-uuid --set=root 0f490b6c-e92d-42f0-88e1-0bd3c0d27641
            fi
            echo 'Loading Linux core repo kernel ...'
            linux /boot/vmlinuz-linux root=UUID=0f490b6c-e92d-42f0-88e1-0bd3c0d27641 ro quiet
            echo 'Loading initial ramdisk ...'
            initrd /boot/initramfs-linux-fallback.img
        }

    lsblk:

        NAME     MAJ:MIN RM   SIZE RO TYPE MOUNTPOINT
        sda        8:0    0 931.5G  0 disk
        +-sda1     8:1    0  39.2M  0 part
        +-sda2     8:2    0  19.8G  0 part
        +-sda3     8:3    0 205.1G  0 part
        +-sda4     8:4    0     1K  0 part
        +-sda5     8:5    0 333.7G  0 part /host
        +-sda6     8:6    0 233.4G  0 part
        +-sda7     8:7    0 100.4G  0 part
        +-sda8     8:8    0   100M  0 part
        +-sda9     8:9    0  14.7G  0 part
        +-sda10    8:10   0  21.4G  0 part
        +-sda11    8:11   0     3G  0 part
        sr0       11:0    1  1024M  0 rom
        loop0      7:0    0    29G  0 loop /

    blkid:

        /dev/loop0: UUID="fc296be2-8c59-4f21-a3f8-47c38cd0d537" TYPE="ext4"
        /dev/sda1: SEC_TYPE="msdos" LABEL="DellUtility" UUID="5450-4444" TYPE="vfat"
        /dev/sda2: LABEL="RECOVERY" UUID="78C4FAC1C4FA80A4" TYPE="ntfs"
        /dev/sda3: LABEL="OS" UUID="DACEFCF1CEFCC6B3" TYPE="ntfs"
        /dev/sda5: UUID="01CD7BB998DB0870" TYPE="ntfs"
        /dev/sda6: UUID="01CD7BB99CA3F750" TYPE="ntfs"
        /dev/sda7: LABEL="Windows 8" UUID="01CDBFB52F925F40" TYPE="ntfs"
        /dev/sda8: UUID="cdbb5770-d29c-401d-850d-ee30a048ca5e" TYPE="ext2"
        /dev/sda9: UUID="0f490b6c-e92d-42f0-88e1-0bd3c0d27641" TYPE="ext2"
        /dev/sda10: UUID="2e7682e5-8917-4edc-9bf9-044fea2ad738" TYPE="ext2"
        /dev/sda11: UUID="6081da70-d622-42b9-b489-309f922b284e" TYPE="swap"

    Any help is appreciated. Please let me know if you need any extra data.
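    A sketch of a 40_custom entry that may be closer to what the new install needs. The UUIDs come from the blkid output above (sda8 is the new /boot, sda9 the new root); the kernel and initramfs file names are assumed to match the ones already referenced in grub.cfg. The key detail: because /dev/sda8 is a separate /boot partition, the paths are relative to that partition, so there is no leading /boot/.

        sudo nano /etc/grub.d/40_custom     # append the entry below, then save

        menuentry 'New distro on sda8/sda9' {
            insmod part_msdos
            insmod ext2
            search --no-floppy --fs-uuid --set=root cdbb5770-d29c-401d-850d-ee30a048ca5e
            linux /vmlinuz-linux root=UUID=0f490b6c-e92d-42f0-88e1-0bd3c0d27641 ro quiet
            initrd /initramfs-linux.img
        }

        sudo update-grub                    # regenerate grub.cfg with the new entry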

    Read the article

  • Oracle Communications Calendar Server: Upgrading to Version 7 Update 3

    - by joesciallo
    It's been some time since I have posted an entry. Now, with the release of Oracle Communications Calendar Server 7 Update 3, it seems high time to jump start this blog again. To begin with, check out what's new in this release:
        - Authenticating Against an External Directory
        - Booking Window for Calendars
        - Changes to the davadmin Command
        - Enable and Disable Account Autocreation
        - LDAP Pools
        - New Configuration Parameters
        - New Languages
        - New populate-davuniqueid Utility
        - New Schema Objects
        - Non-active Calendar Accounts Are No Longer Searched or Fetched
        - Remote Document Store Authentication
    The upgrade is a bit more complicated than normal, as you must first apply some new schema elements to your Directory Server(s). To do so, you need to get the comm_dssetup 6.4 patch, patch the comm_dssetup script, and then run the patched comm_dssetup against your Directory Server(s) instances. In addition, if you are using the nsUniqueId attribute as your deployment's unique identifier, you'll want to change that to the new davUniqueId attribute. Consult the Upgrade Procedure for details, as well as DaBrain's blog, before forging ahead with this upgrade. Additional quick links:
        - Problems Fixed in This Release
        - Known Issues
        - Calendar Server Unique Identifier
        - Changes to the davadmin command
        - Get the Calendar Server patch
        - Get the comm_dssetup patch

    Read the article

  • Lost the Hard Drive Icon from the Launcher

    - by Elbe
    I unlocked the C drive icon connected to the Launcher. Once I moved it onto the desktop, it disappeared. In attempting to create a new icon, I was asked to create a mount point. The mount point directory then showed 2 links rather than the normal 1. I trawled the web for solutions, but I did not find one that addressed any of the stated issues. I attempted to find a way to reconstruct the hard drive entries that were found during the install and listed in the Launcher, but was unsuccessful in doing so. In summary, I would like to restore the /media directory to the original install, which listed the drives correctly, and have the drive icons appear in the Launcher as they did at the time of the installation of Ubuntu. I found where all the other icons in the Launcher are located in the home Desktop directory, but could not find anything that listed the hard drives or the floppy. Ubuntu 12.10 is installed and all the latest updates have been applied.

    Read the article

  • Why can't I download anything from the internet?

    - by Nicole
    I get this error message:

        Archive: /home/nicole/Downloads/iLividSetupV1.exe
        [/home/nicole/Downloads/iLividSetupV1.exe]
        End-of-central-directory signature not found. Either this file is not a zipfile, or it constitutes one disk of a multi-part archive. In the latter case the central directory and zipfile comment will be found on the last disk(s) of this archive.
        zipinfo: cannot find zipfile directory in one of /home/nicole/Downloads/iLividSetupV1.exe or /home/nicole/Downloads/iLividSetupV1.exe.zip, and cannot find /home/nicole/Downloads/iLividSetupV1.exe.ZIP.

    Why is Ubuntu doing this? I can no longer use my iPod, download songs, or download software.

    Read the article

  • Reading the tea leaves from Windows Azure support

    - by jamiet
    A few idle thoughts… Three months ago I had an issue regarding Windows Azure where I was unable to login to the management portal. At the time I contacted Azure support, the issue was soon resolved and I thought no more about it. Until today that is when I received an email from Azure support providing a detailed analysis of the root cause, the fix and moreover precise details about when and where things occurred. The email itself is interesting and I have included the entirety of it below. A few things were interesting to me: The level of detail and the diligence in investigating and reporting the issue I found really rather impressive. They even outline the number of users that were affected (127 in case you can’t be bothered reading). Compare this to the quite pathetic support that another division within Microsoft, Skype, provided to Greg Low recently: Skype support and dead parrot sketches   This line: “Windows Azure performed a planned change from using the Microsoft account service (formerly Windows Live ID) to the Azure Active Directory (AAD) as its primary authentication mechanism on August 24th. This change was made to enable future innovation in the area of authentication – particularly for organizationally owned identities, identity federation, stronger authentication methods and compliance certification. ” I also found to be particularly interesting. I have long thought that one of the reasons Microsoft has proved to be such a money-making machine in the enterprise is because they provide the infrastructure and then upsell on top of that – and nothing is more infrastructural than Active Directory. It has struck me of late that they are trying to make the same play of late in the cloud by tying all their services into Azure Active Directory and here we see a clear indication of that by making AAD the authentication mechanism for anyone using Windows Azure. I get the feeling that we’re going to hear much much more about AAD in the future; isn’t it about time we could log on to SQL Azure Windows Azure SQL Database without resorting to SQL authentication, for example? And why do Microsoft have two identity providers – Microsoft Account (aka Windows Live ID) and AAD – isn’t it about time those things were combined? As I said, just some idle thoughts. Below is the transcript of the email if you are interested. @Jamiet  This is regarding the support request <redacted> where in you were not able to login into the windows azure management portal with live id. We are providing you with the summary, root cause analysis and information about permanent fix: Incident Title: You were unable to access Windows Azure Portal after Microsoft Account to Azure Active Directory account Migration. Service Impacted: Management Portal Incident Start Date and Time: 8/24/2012 4:30:00 PM Date and Time Service was Restored: 10/17/2012 12:00:00 AM Summary: Windows Azure performed a planned change from using the Microsoft account service (formerly Windows Live ID) to the Azure Active Directory (AAD) as its primary authentication mechanism on August 24th.   This change was made to enable future innovation in the area of authentication – particularly for organizationally owned identities, identity federation, stronger authentication methods and compliance certification.   While this migration was largely transparent to Windows Azure users, a small number of users whose sign-in names were part of a Windows Live Custom Domain were unable to login.   
This incompatibility was not discovered during the Quality Assurance testing phase prior to the migration. Customer Impact: Customers whose sign-in names were part of a Windows Live Custom Domain were unable to sign-in the Management Portal after ~4:00 p.m. PST on August 24th, 2012.   We determined that the issue did impact at least 127 users in 98 of these Windows Live Custom Domains and had a maximum potential impact of 1,110 users in total. Root Cause: The root cause of the issue was an incompatibility in the AAD authentication service to handle logins from Microsoft accounts whose sign-in names were part of a Windows Live Custom Domains.  This issue was not discovered during the Quality Assurance testing phase prior to the migration from Microsoft Account (MSA) to AAD. Mitigations: The issue was mitigated for the majority of affected users by 8:20 a.m. PST on August 25th, 2012 by running some internal scripts to correct many known Windows Live Custom Domains.   The remaining affected domains fell into two categories: Windows Live Custom Domains that were not corrected by 8/25/2012. An additional 48 Windows Live Custom Domains were fixed in the weeks following the incident within 2 business days after the AAD team received an escalation from product support regarding those accounts. Windows Live Custom domains that were also provisioned in Office365. Some of the affected Windows Live Custom Domains had already been provisioned in AAD because their owners signed up for Office365 which is a service that also uses AAD.   In these cases the Azure customers had to work around the issue by renaming their Microsoft Account or using a different Microsoft Account to administer their Azure subscription. Permanent Fix: The Azure Active Directory team permanently fixed the issue for all customers on 10/17/2012 in an upgraded release of the AAD service.

    Read the article

  • Installing Realtek rtl-8192ce on Ubuntu 9.4

    - by dutchman79
    I followed the steps below to install the rtl8192ce driver on my Ubuntu 9.4 system, but I still got errors, nothing installed, and I can't connect to the modem to get onto the Internet. Can someone please help me?

        Move the file you downloaded to your home directory using your file manager or terminal:
            mv [destination of downloaded file] /home/[username]
        Now we move to our home directory and unzip the file using the following command, or right click and select Extract Here:
            cd /home/user
            tar xvjf rtl_92ce_92se_92de_8723ae_88ee_linux_mac80211_0012.0207.2013(1).tar.bz2
        Now access the directory which we extracted:
            cd rtl_92ce_92se_92de_8723ae_88ee_linux_mac80211_0012.0207.2013(1)
        Next we install the necessary dependencies to compile the driver:
            sudo apt-get install gcc build-essential linux-headers-generic linux-headers-$(uname -r)
        Now we start the compilation:
            make
        and then:
            sudo make install
        Execute:
            modprobe rtl8192ce
        Now if all went right your system should be running the wireless driver.
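    One detail that can silently break the steps above: the parentheses in the downloaded file name are special to the shell, so the tar and cd lines fail unless the name is quoted, and the extracted directory may not carry the "(1)" suffix at all. A sketch of the same steps with that fixed (directory name still an assumption; check it with ls):

        cd ~
        tar xvjf 'rtl_92ce_92se_92de_8723ae_88ee_linux_mac80211_0012.0207.2013(1).tar.bz2'
        ls -d rtl_92ce*                   # see what the extracted directory is really called
        cd rtl_92ce_92se_92de_8723ae_88ee_linux_mac80211_0012.0207.2013
        sudo apt-get install gcc build-essential linux-headers-generic linux-headers-$(uname -r)
        make && sudo make install
        sudo modprobe rtl8192ce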

    Read the article

  • tty1 prompt before lightdm

    - by David Weldon
    After upgrading to 13.10, every time I boot I'm shown a login prompt (tty1) for ~30 seconds before lightdm automatically starts. Everything works fine after that. Any ideas on what I could try to fix/debug this? My /var/log/lightdm/x-0-greeter.log contains lines like the following: ** (at-spi2-registryd:1381): WARNING **: Failed to register client: GDBus.Error:org.freedesktop.DBus.Error.ServiceUnknown: The name org.gnome.SessionManager was not provided by any .service files ** (at-spi2-registryd:1381): WARNING **: Unable to register client with session manager WARNING: Failed to open sessions directory: Error opening directory '/usr/share/lightdm/sessions': No such file or directory ** Message: PID 1534 (we are 1534) sent signal 15, shutting down... ** (gnome-settings-daemon:1401): WARNING **: Name taken or bus went away - shutting down Searching for these errors results in a variety of bugs filed over the years. Maybe a clean install will fix this.

    Read the article

  • Increasing efficiency of N-Body gravity simulation

    - by Postman
    I'm making a space exploration type game; it will have many planets and other objects that will all have realistic gravity. I currently have a system in place that works, but if the number of planets goes above 70, the FPS decreases at a practically exponential rate. I'm making it in C# and XNA. My guess is that I should be able to do gravity calculations between 100 objects without this kind of strain, so clearly my method is not as efficient as it should be. I have two files, Gravity.cs and EntityEngine.cs. Gravity manages JUST the gravity calculations; EntityEngine creates an instance of Gravity and runs it, along with other entity related methods.

    EntityEngine.cs (only the relevant piece of code from EntityEngine, self explanatory; when an instance of Gravity is made in entityEngine, it passes itself (this) into it, so that Gravity has access to entityEngine.Entities, a dictionary of all planet objects):

        public void Update()
        {
            foreach (KeyValuePair<string, Entity> e in Entities)
            {
                e.Value.Update();
            }
            gravity.Update();
        }

    Gravity.cs:

        namespace ExplorationEngine
        {
            public class Gravity
            {
                private EntityEngine entityEngine;
                private Vector2 Force;
                private Vector2 VecForce;
                private float distance;
                private float mult;

                public Gravity(EntityEngine e)
                {
                    entityEngine = e;
                }

                public void Update()
                {
                    //First loop
                    foreach (KeyValuePair<string, Entity> e in entityEngine.Entities)
                    {
                        //Reset the force vector
                        Force = new Vector2();

                        //Second loop
                        foreach (KeyValuePair<string, Entity> e2 in entityEngine.Entities)
                        {
                            //Make sure the second value is not the current value from the first loop
                            if (e2.Value != e.Value)
                            {
                                //Find the distance between the two objects. Because Fg = G * ((M1 * M2) / r^2), using Vector2.Distance() and then squaring it
                                //is pointless and inefficient because Distance uses a sqrt; squaring the result simply cancels that sqrt.
                                distance = Vector2.DistanceSquared(e2.Value.Position, e.Value.Position);

                                //This makes sure that two planets do not attract each other if they are touching; completely unnecessary when I add collision.
                                //For now it just makes it so that the planets are not glitchy; performance is not significantly improved by removing this IF.
                                if (Math.Sqrt(distance) > (e.Value.Texture.Width / 2 + e2.Value.Texture.Width / 2))
                                {
                                    //Calculate the magnitude of Fg (I'm using my own gravitational constant (G) for the sake of time; I know it's 1 at the moment, but I've been changing it)
                                    mult = 1.0f * ((e.Value.Mass * e2.Value.Mass) / distance);

                                    //Calculate the direction of the force. Simply subtracting the positions and normalizing works; this fixes diagonal vectors
                                    //from having a larger value, and basically makes VecForce a direction.
                                    VecForce = e2.Value.Position - e.Value.Position;
                                    VecForce.Normalize();

                                    //Add the vector for each planet in the second loop to a force var.
                                    Force = Vector2.Add(Force, VecForce * mult);
                                    //I have tried Force += VecForce * mult, and have not noticed much of an increase in speed.
                                }
                            }
                        }

                        //Add that force to the first loop's planet's position (later on I'll instead add to acceleration, to account for inertia)
                        e.Value.Position += Force;
                    }
                }
            }
        }

    I have used various tips (about gravity optimizing, not threading) from THIS question (that I made yesterday). I've made this gravity method (Gravity.Update) as efficient as I know how to make it. This O(N^2) algorithm still seems to be eating up all of my CPU power, though. Here is a LINK (Google Drive; go to File, Download, keep the .exe with the Content folder; you will need the XNA Framework 4.0 Redistributable if you don't already have it) to the current version of my game. Left click makes a planet, right click removes the last planet. The mouse moves the camera and the scroll wheel zooms in and out. Watch the FPS and planet count to see what I mean about performance issues past 70 planets. (ALL 70 planets must be moving; I've had 100 stationary planets and only 5 or so moving ones while still having 300 fps. The issue arises when 70+ are moving around.) After 70 planets are made, performance tanks exponentially. With < 70 planets, I get 330 fps (I have it capped at 300). At 90 planets, the FPS is about 2; more than that and it sticks around at 0 FPS. Strangely enough, when all planets are stationary, the FPS climbs back up to around 300, but as soon as something moves, it goes right back down to what it was. I have no systems in place to make this happen; it just does. I considered multithreading, but that previous question I asked taught me a thing or two, and I see now that that's not a viable option. I've also thought maybe I could do the calculations on my GPU instead, though I don't think it should be necessary. I also do not know how to do this; it is not a simple concept and I want to avoid it unless someone knows a really noob-friendly, simple way to do it that will work for an n-body gravity calculation. (I have an NVidia GTX 660.) Lastly, I've considered using a quadtree type system (Barnes-Hut simulation). I've been told (in the previous question) that this is a good method that is commonly used, and it seems logical and straightforward, however the implementation is way over my head and I haven't found a good tutorial for C# yet that explains it in a way I can understand, or uses code I can eventually figure out. So my question is this: How can I make my gravity method more efficient, allowing me to use more than 100 objects (I can render 1000 planets with a constant 300+ FPS without gravity calculations), and if I can't do much to improve performance (including some kind of quadtree system), could I use my GPU to do the calculations?

    Read the article

  • How do I install D-Link DWA-140 on Ubuntu 12.04?

    - by Jerrod Griffiths
    When I try to run the .exe file, this error notice comes up:

        Archive: /media/DWA-140/DWA140.exe
        [/media/DWA-140/DWA140.exe]
        End-of-central-directory signature not found. Either this file is not a zipfile, or it constitutes one disk of a multi-part archive. In the latter case the central directory and zipfile comment will be found on the last disk(s) of this archive.
        zipinfo: cannot find zipfile directory in one of /media/DWA-140/DWA140.exe or /media/DWA-140/DWA140.exe.zip, and cannot find /media/DWA-140/DWA140.exe.ZIP, period.

    Are there any steps I can take to get this to run? Thanks!
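    The .exe is a Windows driver installer and will not run on Ubuntu. The common Ralink-based revisions of the DWA-140 are normally handled by the in-kernel rt2800usb module on 12.04, so before anything else it is worth checking whether the stick is already recognised; a sketch:

        lsusb                        # look for a D-Link / Ralink entry
        lsmod | grep rt2800usb       # is the driver already loaded?
        sudo modprobe rt2800usb      # try loading it manually if not
        dmesg | tail                 # watch for firmware errors (the firmware lives in the linux-firmware package)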

    Read the article

  • Ubuntu 14.04 doesn't boot after upgrade from 12.04 installed inside Windows 8.1

    - by AdiC
    I had Ubuntu 12.04 installed like an app on Windows 8.1 (Ubuntu 12.04 allows itself to be installed like an app in Windows 8.1, and it can be removed from Control Panel when you don't need it any more). Usually, to choose which OS to boot when you start the laptop, you pick between Windows 8.1 and Ubuntu after the Windows logo appears at start up, and that was OK until I made this upgrade. Now when I try to choose Ubuntu, the laptop tries to boot it, but after the full-coloured screen is shown the screen goes black and these messages appear:

        mount: mounting /dev/loop0/ on /root failed: Invalid argument
        mount: mounting /dev on /root/dev failed: No such file or directory
        mount: mounting /sys on /root/sys failed: No such file or directory
        mount: mounting /proc on /root/proc failed: No such file or directory
        Target filesystem doesn't have requested /sbin/init
        No init found. Try passing init = bootarg.
        BusyBox v1.21.1 (Ubuntu 1:1:21.0-1ubuntu1) built-in shell (ash)
        Enter 'help' for a list of built-in commands
        (initramfs) _

    I don't know what to do after this screen appears. Please help!

    Read the article

  • Unable to remove a file which has a name like a command argument

    - by Justin
    By inadvertence, I've created a file called -r in my home directory. Please don't ask me how and why, I don't recall. But the fact is that now I cannot get rid of it:

        rm -rf
        rm: missing operand
        Try 'rm --help' for more information.

    Another try:

        rm /-/r
        rm: cannot remove ‘/-/r’: No such file or directory

    And another one:

        rm \-r
        rm: missing operand
        Try 'rm --help' for more information.

    Is there a way to remove this file without deleting the whole directory? Thanks.
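    The file name starts with a dash, so rm keeps parsing it as an option. Either end option parsing with --, or give a path that does not begin with a dash:

        rm -- -r      # "--" tells rm that everything after it is a file name
        rm ./-r       # equivalent: the name no longer starts with "-"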

    Read the article

  • How to change an ext4 hard drive partition to NTFS for installing Windows XP?

    - by Sina
    I have Ubuntu installed on the root partition, with a separate home partition; that is my entire hard disk. I inserted the XP CD to install it, but the installer doesn't offer me any hard drive. I decided to boot GParted and change the root partition to NTFS; I formatted it and ran the XP CD again, and it still can't find any hard drive. I don't want to change all the partitions because I don't want to lose the data on them. What should I do?

    Read the article

  • How to enable a symlink in this case

    - by Bragboy
    I cannot categorize this question under Ubuntu since it has nothing to do with it, but I know people here can definitely answer it. I log in to one of my deployment boxes using SSH (no Ubuntu here). I am working with a tool called TeamCity, which uses a folder called .BuildServer under the home directory of the user. This folder may grow in size as the application runs, but the current user is only given a limited amount of space. The good thing is that I have access to a folder outside /home/deploy (deploy being the user here). I now want to link the .BuildServer inside the /home/deploy directory to that other folder I have permission for (meaning all the files should be re-routed to that directory). Hope my question was clear; please help.
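    A sketch of the usual approach, with /data/teamcity standing in (purely as an example) for the directory you actually have space in outside /home/deploy; stop the TeamCity server first:

        mv ~/.BuildServer /data/teamcity/BuildServer
        ln -s /data/teamcity/BuildServer ~/.BuildServer

    TeamCity also honours a TEAMCITY_DATA_PATH environment variable pointing at the data directory, which avoids the symlink altogether if you can set it where the server is started.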

    Read the article

  • Firefox released beta 9 for version 4

    - by anirudha
    Beta 9 of Firefox 4 was released yesterday, and many plug-ins now work inside Firefox 4.

    How to get a plug-in working in Firefox 4: if there is a plug-in you like to use but it does not work in Firefox 4 yet, go to the developer's site; they may be working on it, so there is a chance you can get the in-development version from there. Many plug-ins sit out a testing period, or wait until the same version is ready for all platforms such as Linux or Mac OS X.

    How to run the stable and beta versions side by side (standalone): sometimes it feels like a waste of time because the beta version can't give us something we need, so you can use both Firefox 3.6.13 and Firefox 4 with a trick: install them into different directories. When you install the second version, beware that the installer defaults to the previous installation directory, so you need to choose the directory for the other version manually.

    Read the article

  • apt-get could not open lock file

    - by user114373
    I am trying to get an NFS client running on a SheevaPlug running Debian 2.6.22. The host is Ubuntu 12.04 and claims (from showmount -e) to be exporting the desired directory. There is no showmount binary on the SheevaPlug, so I'm trying to install it from the nfs-common package:

        # apt-get install nfs-common

    The response ends with:

        E: could not open lock file /var/cache/apt/archives/lock - open (no such file or directory)
        E: Unable to lock the download directory.

    I am root while doing this. Similar errors arise when trying to install other packages. How do I correct these errors so apt-get will do its work?
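    On stripped-down images this error often just means the cache directory itself is missing, so apt cannot create its lock file there. A sketch (already root, so no sudo):

        mkdir -p /var/cache/apt/archives/partial
        apt-get update
        apt-get install nfs-common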

    Read the article

  • How to create and maintain a patch on a Debian package?

    - by ???
    I want to patch the Trac package. I know how to patch and rebuild the package, but there are some things I don't understand very well. My patch is somewhat dangerous and not something to commit back to the community; let's just say it's a very private patch. But I want my patch to keep working when the Ubuntu package upgrades. (Should I apt-get source trac and move my patch to the new version's source directory each time Trac upgrades?) I see there is a patches/ directory (managed with quilt in many packages, I guess) inside debian/, but I don't know how to use it. Will debuild automatically apply all patches in that directory? And what about dpkg-buildpackage? Are there environment variables to control the selection of patches to apply?
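    A sketch of the usual quilt round-trip for a private patch; the patch name and the edited file are only examples:

        apt-get source trac
        cd trac-*/
        export QUILT_PATCHES=debian/patches
        quilt new my-private-change.patch
        quilt add trac/web/main.py        # register the file you are about to edit (example path)
        # ... edit the file ...
        quilt refresh                     # writes the patch and records it in debian/patches/series
        dpkg-buildpackage -us -uc         # or debuild; for "3.0 (quilt)" packages the series is applied automatically

    When a new upstream version arrives, repeat apt-get source and push the same patch again with quilt, fixing any rejects by hand; which patches get applied is normally controlled by the debian/patches/series file rather than by environment variables.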

    Read the article
